Merge pull request #1 from LCTT/master

更新2018年9月28日
This commit is contained in:
way-ww 2018-09-28 13:31:44 +08:00 committed by GitHub
commit c1af3dfb7f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
68 changed files with 5985 additions and 2635 deletions

View File

@ -0,0 +1,155 @@
用 Hugo 30 分钟搭建静态博客
======
> 了解 Hugo 如何使构建网站变得有趣。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy)
你是不是强烈地想搭建博客来将自己对软件框架等的探索学习成果分享呢?你是不是面对缺乏指导文档而一团糟的项目就有一种想去改变它的冲动呢?或者换个角度,你是不是十分期待能创建一个属于自己的个人博客网站呢?
很多人在想搭建博客之前都有一些严重的迟疑顾虑感觉自己缺乏内容管理系统CMS的相关知识更缺乏时间去学习这些知识。现在如果我说不用花费大把的时间去学习 CMS 系统、学习如何创建一个静态网站、更不用操心如何去强化网站以防止它受到黑客攻击的问题,你就可以在 30 分钟之内创建一个博客,你信不信?利用 Hugo 工具,就可以实现这一切。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_1.png?itok=JgxBSOBG)
Hugo 是一个基于 Go 语言开发的静态站点生成工具。也许你会问,为什么选择它?
* 无需数据库、无需那些需要各种权限的插件、无需跑在服务器上的底层平台,更没有额外的安全问题。
* 都是静态站点,因此拥有轻量级、快速响应的服务性能。此外,所有的网页都是在部署的时候生成,所以服务器负载很小。
* 极易操作的版本控制。一些 CMS 平台使用它们自己的版本控制软件VCS或者在网页上集成 Git 工具。而 Hugo所有的源文件都可以用你所选的 VCS 软件来管理。
### 0-5 分钟:下载 Hugo生成一个网站
直白的说Hugo 使得写一个网站又一次变得有趣起来。让我们来个 30 分钟计时,搭建一个网站。
为了简化 Hugo 安装流程,这里直接使用 Hugo 可执行安装文件。
1. 下载和你操作系统匹配的 Hugo [版本][2]
2. 将压缩包解压到指定路径,例如 Windows 系统的 `C:\hugo_dir` 或者 Linux 系统的 `~/hugo_dir` 目录;下文中的变量 `${HUGO_HOME}` 所指的路径就是这个安装目录;
3. 打开命令行终端,进入安装目录:`cd ${HUGO_HOME}`
4. 确认 Hugo 可以正常工作:
* Unix 系统:`${HUGO_HOME}/[hugo version]`
* Windows 系统:`${HUGO_HOME}\[hugo.exe version]`例如cmd 命令行中输入:`c:\hugo_dir\hugo version`。
为了书写上的简化,下文中的 `hugo` 就是指 hugo 可执行文件所在的路径(包括可执行文件),例如命令 `hugo version` 就是指命令 `c:\hugo_dir\hugo version`。LCTT 译注:可以把 hugo 可执行文件所在的路径添加到系统环境变量下,这样就可以直接在终端中输入 `hugo version`。)
如果命令 `hugo version` 报错,你可能下载了错误的版本。当然,有很多种方法可以安装 Hugo更多详细信息请查阅[官方文档][3]。最稳妥的方法就是把 Hugo 可执行文件放在某个路径下,然后执行的时候带上路径名。
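作为示意,你也可以把安装目录加入 PATH以 Linux 为例,假设安装目录为 `~/hugo_dir`,以下仅为示例):

```
# 将 Hugo 安装目录临时加入当前会话的 PATH
export PATH="$PATH:$HOME/hugo_dir"
hugo version
```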
5. 创建一个新的站点来作为你的博客,输入命令:`hugo new site awesome-blog`
6. 进入新创建的路径下: `cd awesome-blog`
恭喜你!你已经创建了自己的新博客。
### 5-10 分钟:为博客设置主题
在 Hugo 中,你可以自己构建博客的主题,或者使用网上现成的主题。这里选择 [Kiera][4] 主题,因为它简洁漂亮。按以下步骤来安装该主题:
1. 进入主题所在目录:`cd themes`
2. 克隆主题:`git clone https://github.com/avianto/hugo-kiera kiera`。如果你没有安装 Git 工具:
* 从 [GitHub][5] 上下载该主题的 .zip 格式的文件;
* 解压该 .zip 文件到你的博客的主题目录 `themes` 下;
* 重命名 `hugo-kiera-master``kiera`
3. 返回博客主路径:`cd ..`(即回到 `awesome-blog` 目录);
4. 激活主题;通常来说,主题(包括 Kiera都自带文件夹 `exampleSite`,里面存放了内容配置的示例文件。激活 Kiera 主题需要拷贝它提供的 `config.toml` 到你的博客下:
* Unix 系统:`cp themes/kiera/exampleSite/config.toml .`
* Windows 系统:`copy themes\kiera\exampleSite\config.toml .`
* 选择 `Yes` 来覆盖原有的 `config.toml`
5. (可选操作)你可以选择以可视化的方式启动服务器来验证主题是否生效:`hugo server -D`,然后在浏览器中输入 `http://localhost:1313`。可以通过在终端中按 `Ctrl+C` 来停止服务器运行。现在你的博客还是空的,但这也给你留了写作的空间。它看起来如下所示:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_2.png?itok=PINOIOSU)
你已经成功的给博客设置了主题!你可以在官方 [Hugo 主题][4] 网站上找到上百种漂亮的主题供你使用。
### 10-20 分钟:给博客添加内容
对于碗来说,空着的时候用处最大,可以用来盛放东西;但对于博客来说不是这样,空博客几乎毫无用处。在这一步,你将会给博客添加内容。Hugo 和 Kiera 主题都为这个工作提供了便利。按以下步骤来进行你的第一次提交:
1. archetypes 将会是你的内容模板。
2. 添加主题中的 archetypes 至你的博客:
* Unix 系统: `cp themes/kiera/archetypes/* archetypes/`
* Windows 系统:`copy themes\kiera\archetypes\* archetypes\`
* 选择 `Yes` 来覆盖原来的 `default.md` 内容架构类型
3. 创建博客 posts 目录:
* Unix 系统: `mkdir content/posts`
* Windows 系统: `mkdir content\posts`
4. 利用 Hugo 生成你的 post
* Unix 系统:`hugo new posts/first-post.md`;
* Windows 系统:`hugo new posts\first-post.md`;
5. 在文本编辑器中打开这个新建的 post 文件:
* Unix 系统:`gedit content/posts/first-post.md`
* Windows 系统:`notepad content\posts\first-post.md`;
此刻,你可以疯狂起来了。注意到你的文章文件包括两个部分。第一部分是以 `+++` 符号分隔开的,它包括了文章的主要元数据,例如标题、时间等。在 Hugo 中这叫做前置元数据front matter。在前置元数据之后才是正文。下面编辑第一篇文章的内容
```
+++
title = "First Post"
date = 2018-03-03T13:23:10+01:00
draft = false
tags = ["Getting started"]
categories = []
+++
Hello Hugo world! No more excuses for having no blog or documentation now!
```
现在你要做的就是启动你的服务器:`hugo server -D`;然后打开浏览器,输入 `http://localhost:1313/`
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_3.png?itok=I-_v0qLx)
### 20-30 分钟:调整网站
前面的工作很完美,但还有一些问题需要解决。例如,简单地命名你的站点:
1. 终端中按下 `Ctrl+C` 以停止服务器。
2. 打开 `config.toml`,编辑博客的名称,版权,你的姓名,社交网站等等。
当你再次启动服务器后,你会发现博客的个人定制色彩更浓了。不过,还少一个重要的基础内容:主菜单。让我们快速解决这个问题。返回 `config.toml` 文件,在末尾插入如下一段:
```
[[menu.main]]
name = "Home" #Name in the navigation bar
weight = 10 #The larger the weight, the more on the right this item will be
url = "/" #URL address
[[menu.main]]
name = "Posts"
weight = 20
url = "/posts/"
```
上面这段代码添加了 `Home``Posts` 到主菜单中。你还需要一个 `About` 页面。这次是创建一个 `.md` 文件,而不是编辑 `config.toml` 文件:
1. 创建 `about.md` 文件:`hugo new about.md`。注意它是 `about.md`,不是 `posts/about.md`。该页面不是博客文章,所以你不希望它出现在文章列表中。
2. 用文本编辑器打开该文件,输入如下一段:
```
+++
title = "About"
date = 2018-03-03T13:50:49+01:00
menu = "main" #Display this page on the nav menu
weight = "30" #Right-most nav item
meta = "false" #Do not display tags or categories
+++
> Waves are the practice of the water. Shunryu Suzuki
```
当你启动你的服务器并输入 `http://localhost:1313/`,你将会看到你的博客。(可以访问我 GitHub 主页上的[例子][6]。)如果你想让文章列表的外观和 GitHub 上的相似,可以给 `themes/kiera/static/css/styles.css` 打上这个[补丁][7]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/start-blog-30-minutes-hugo
作者:[Marek Czernek][a] 
译者:[jrg](https://github.com/jrglinux) 
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mczernek
[1]:https://gohugo.io/
[2]:https://github.com/gohugoio/hugo/releases
[3]:https://gohugo.io/getting-started/installing/
[4]:https://themes.gohugo.io/
[5]:https://github.com/avianto/hugo-kiera
[6]:https://m-czernek.github.io/awesome-blog/
[7]:https://github.com/avianto/hugo-kiera/pull/18/files

View File

@ -1,47 +1,48 @@
PKI 和 密码学中的私钥的角色
公钥基础设施和密码学中的私钥的角色
======
> 了解如何验证某人所声称的身份。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
在[上一篇文章][1]中,我们概述了密码学并讨论了密码学的核心概念:<ruby>保密性<rt>confidentiality</rt></ruby> (让数据保密)<ruby>完整性<rt>integrity</rt></ruby> (防止数据被篡改)和<ruby>身份认证<rt>authentication</rt></ruby> (确认数据源的<ruby>身份<rt>identity</rt></ruby>)。由于要在存在各种身份混乱的现实世界中完成身份认证,人们逐渐建立起一个复杂的<ruby>技术生态体系<rt>technological ecosystem</rt></ruby>,用于证明某人就是其声称的那个人。在本文中,我们将大致介绍这些体系是如何工作的。
在[上一篇文章][1]中,我们概述了密码学并讨论了密码学的核心概念:<ruby>保密性<rt>confidentiality</rt></ruby> (让数据保密)<ruby>完整性<rt>integrity</rt></ruby> (防止数据被篡改)和<ruby>身份认证<rt>authentication</rt></ruby> (确认数据源的<ruby>身份<rt>identity</rt></ruby>)。由于要在存在各种身份混乱的现实世界中完成身份认证,人们逐渐建立起一个复杂的<ruby>技术生态体系<rt>technological ecosystem</rt></ruby>,用于证明某人就是其声称的那个人。在本文中,我们将大致介绍这些体系是如何工作的。
### 公钥密码学及数字签名快速回顾
### 快速回顾公钥密码学及数字签名
互联网世界中的身份认证依赖于公钥密码学,其中密钥分为两部分:拥有者需要保密的私钥和可以对外公开的公钥。经过公钥加密过的数据,只能用对应的私钥解密。举个例子,对于希望与[记者][2]建立联系的举报人来说,这个特性非常有用。但就本文介绍的内容而言,私钥更重要的用途是与一个消息一起创建一个<ruby>数字签名<rt>digital signature</rt></ruby>,用于提供完整性和身份认证。
在实际应用中,我们签名的并不是真实消息,而是经过<ruby>密码学哈希函数<rt>cryptographic hash function</rt></ruby>处理过的消息<ruby>摘要<rt>digest</rt></ruby>。要发送一个包含源代码的压缩文件,发送者会对该压缩文件的 256 比特长度的 [SHA-256][3] 摘要而不是文件本身进行签名,然后用明文发送该压缩包(和签名)。接收者会独立计算收到文件的 SHA-256 摘要,然后结合该摘要、收到的签名及发送者的公钥,使用签名验证算法进行验证。验证过程取决于加密算法,加密算法不同,验证过程也相应不同;而且,由于不断发现微妙的触发条件,签名验证[漏洞][4]依然[层出不穷][5]。如果签名验证通过,说明文件在传输过程中没有被篡改而且来自于发送者,这是因为只有发送者拥有创建签名所需的私钥。
在实际应用中,我们签名的并不是真实消息,而是经过<ruby>密码学哈希函数<rt>cryptographic hash function</rt></ruby>处理过的消息<ruby>摘要<rt>digest</rt></ruby>。要发送一个包含源代码的压缩文件,发送者会对该压缩文件的 256 比特长度的 [SHA-256][3] 摘要进行签名,而不是文件本身进行签名,然后用明文发送该压缩包(和签名)。接收者会独立计算收到文件的 SHA-256 摘要,然后结合该摘要、收到的签名及发送者的公钥,使用签名验证算法进行验证。验证过程取决于加密算法,加密算法不同,验证过程也相应不同;而且,很微妙的是签名验证[漏洞][4]依然[层出不穷][5]。如果签名验证通过,说明文件在传输过程中没有被篡改而且来自于发送者,这是因为只有发送者拥有创建签名所需的私钥。
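作为示意,用命令行完成这样一次“摘要 + 签名/验证”的流程大致如下(文件名仅为假设):

```
$ sha256sum release.tar.gz                        # 计算压缩包的 SHA-256 摘要
$ gpg --detach-sign release.tar.gz                # 发送者用私钥生成分离式签名release.tar.gz.sig
$ gpg --verify release.tar.gz.sig release.tar.gz  # 接收者用发送者的公钥验证签名
```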
### 方案中缺失的环节
上述方案中缺失了一个重要的环节:我们从哪里获得发送者的公钥?发送者可以将公钥与消息一起发送,但除了发送者的自我宣称,我们无法核验其身份。假设你是一名银行柜员,一名顾客走过来向你说,“你好,我是 Jane Doe我要取一笔钱”。当你要求其证明身份时她指着衬衫上贴着的姓名标签说道“看Jane Doe”。如果我是这个柜员我会礼貌的拒绝她的请求。
如果你认识发送者,你们可以私下见面并彼此交换公钥。如果你并不认识发送者,你们可以私下见面,检查对方的证件,确认真实性后接受对方的公钥。为提高流程效率,你可以举办聚会并邀请一堆人,检查他们的证件,然后接受他们的公钥。此外,如果你认识并信任 Jane Doe 尽管她在银行的表现比较反常Jane 可以参加聚会收集大家的公钥然后交给你。事实上Jane 可以使用她自己的私钥对这些公钥(及对应的身份信息)进行签名,进而你可以从一个[线上密钥库][7]获取公钥(及对应的身份信息)并信任已被 Jane 签名的那部分。如果一个人的公钥被很多你信任的人(即使你并不认识他们)签名,你也可能选择信任这个人。按照这种方式,你可以建立一个[<ruby>信任网络<rt>Web of Trust</rt></ruby>][8]
如果你认识发送者,你们可以私下见面并彼此交换公钥。如果你并不认识发送者,你们可以私下见面,检查对方的证件,确认真实性后接受对方的公钥。为提高流程效率,你可以举办[聚会][6]并邀请一堆人,检查他们的证件,然后接受他们的公钥。此外,如果你认识并信任 Jane Doe尽管她在银行的表现比较反常Jane 可以参加聚会收集大家的公钥然后交给你。事实上Jane 可以使用她自己的私钥对这些公钥(及对应的身份信息)进行签名,进而你可以从一个[线上密钥库][7]获取公钥(及对应的身份信息)并信任已被 Jane 签名的那部分。如果一个人的公钥被很多你信任的人(即使你并不认识他们)签名,你也可能选择信任这个人。按照这种方式,你可以建立一个<ruby>[信任网络][8]<rt>Web of Trust</rt></ruby>
但事情也变得更加复杂:我们需要建立一种标准的编码机制,可以将公钥和其对应的身份信息编码成一个<ruby>数字捆绑<rt>digital bundle</rt></ruby>,以便我们进一步进行签名。更准确的说,这类数字捆绑被称为<ruby>证书<rt>cerificates</rt></ruby>。我们还需要可以创建、使用和管理这些证书的工具链。满足诸如此类的各种需求的方案构成了<ruby>公钥基础设施<rt>public key infrastructure, PKI</rt></ruby>
但事情也变得更加复杂:我们需要建立一种标准的编码机制,可以将公钥和其对应的身份信息编码成一个<ruby>数字捆绑<rt>digital bundle</rt></ruby>,以便我们进一步进行签名。更准确的说,这类数字捆绑被称为<ruby>证书<rt>certificate</rt></ruby>。我们还需要可以创建、使用和管理这些证书的工具链。满足诸如此类的各种需求的方案构成了<ruby>公钥基础设施<rt>public key infrastructure</rt></ruby>PKI
### 比信任网络更进一步
你可以用人际关系网类比信任网络。如果人们之间广泛互信,可以很容易找到(两个人之间的)一条<ruby>短信任链<rt>short path of trust</rt></ruby>不妨以社交圈为例。基于 [GPG][9] 加密的邮件依赖于信任网络,([理论上][10])只适用于与少量朋友、家庭或同事进行联系的情形。
你可以用人际关系网类比信任网络。如果人们之间广泛互信,可以很容易找到(两个人之间的)一条<ruby>短信任链<rt>short path of trust</rt></ruby>就像一个社交圈。基于 [GPG][9] 加密的邮件依赖于信任网络,([理论上][10])只适用于与少量朋友、家庭或同事进行联系的情形。
LCTT 译注:作者提到的“短信任链”应该是暗示“六度空间理论”,即任意两个陌生人之间所间隔的人一般不会超过 6 个。对 GPG 的唱衰,一方面是因为密钥管理的复杂性没有改善,另一方面 Yahoo 和 Google 都提出了更便利的端到端加密方案。)
在实际应用中,信任网络有一些[<ruby>"硬伤"<rt>significant problems</rt></ruby>][11],主要是在可扩展性方面。当网络规模逐渐增大或者人们之间的连接逐渐降低时,信任网络就会慢慢失效。如果信任链逐渐变长,信任链中某人有意或无意误签证书的几率也会逐渐增大。如果信任链不存在,你不得不自己创建一条信任链;具体而言与其它组织建立联系,验证它们的密钥符合你的要求。考虑下面的场景,你和你的朋友要访问一个从未使用过的在线商店。你首先需要核验网站所用的公钥属于其对应的公司而不是伪造者,进而建立安全通信信道,最后完成下订单操作。核验公钥的方法包括去实体店、打电话等,都比较麻烦。这样会导致在线购物变得不那么便利(或者说不那么安全,毕竟很多人会图省事,不去核验密钥)。
在实际应用中,信任网络有一些<ruby>[硬伤][11]<rt>significant problems</rt></ruby>”,主要是在可扩展性方面。当网络规模逐渐增大或者人们之间的连接较少时,信任网络就会慢慢失效。如果信任链逐渐变长,信任链中某人有意或无意误签证书的几率也会逐渐增大。如果信任链不存在,你不得不自己创建一条信任链,与其它组织建立联系,验证它们的密钥符合你的要求。考虑下面的场景,你和你的朋友要访问一个从未使用过的在线商店。你首先需要核验网站所用的公钥属于其对应的公司而不是伪造者,进而建立安全通信信道,最后完成下订单操作。核验公钥的方法包括去实体店、打电话等,都比较麻烦。这样会导致在线购物变得不那么便利(或者说不那么安全,毕竟很多人会图省事,不去核验密钥)。
如果世界上有那么几个格外值得信任的人,他们专门负责核验和签发网站证书,情况会怎样呢?你可以只信任他们,那么浏览互联网也会变得更加容易。整体来看,这就是当今互联网的工作方式。那些“格外值得信任的人”就是被称为<ruby>证书颁发机构<rt>cerificate authorities, CAs</rt></ruby>的公司。当网站希望获得公钥签名时,只需向 CA 提交<ruby>证书签名请求<rt>certificate signing request</rt></ruby>
如果世界上有那么几个格外值得信任的人,他们专门负责核验和签发网站证书,情况会怎样呢?你可以只信任他们,那么浏览互联网也会变得更加容易。整体来看,这就是当今互联网的工作方式。那些“格外值得信任的人”就是被称为<ruby>证书颁发机构<rt>certificate authority</rt></ruby>CA的公司。当网站希望获得公钥签名时只需向 CA 提交<ruby>证书签名请求<rt>certificate signing request</rt></ruby>CSR
CSR 类似于包括公钥和身份信息(在本例中,即服务器的主机名)的<ruby>存根<rt>stub</rt></ruby>证书但CA 并不会直接对 CSR 本身进行签名。CA 在签名之前会进行一些验证。对于一些证书类型LCTT 译注:<ruby>DV<rt>Domain Validated</rt></ruby> 类型CA 只验证申请者的确是 CSR 中列出主机名对应域名的控制者(例如通过邮件验证,让申请者完成指定的域名解析)。[对于另一些证书类型][12] LCTT 译注:链接中提到<ruby>EV<rt>Extended Validated</rt></ruby> 类型,其实还有 <ruby>OV<rt>Organization Validated</rt></ruby> 类型CA 还会检查相关法律文书例如公司营业执照等。一旦验证完成CA一般在申请者付费后会从 CSR 中取出数据(即公钥和身份信息),使用 CA 自己的私钥进行签名,创建一个(签名)证书并发送给申请者。申请者将该证书部署在网站服务器上,当用户使用 HTTPS (或其它基于 [TLS][13] 加密的协议)与服务器通信时,该证书被分发给用户。
CSR 类似于包括公钥和身份信息(在本例中,即服务器的主机名)的<ruby>存根<rt>stub</rt></ruby>证书,但 CA 并不会直接对 CSR 本身进行签名。CA 在签名之前会进行一些验证。对于一些证书类型LCTT 译注:<ruby>域名验证<rt>Domain Validated</rt></ruby>DV 类型CA 只验证申请者的确是 CSR 中列出主机名对应域名的控制者(例如通过邮件验证,让申请者完成指定的域名解析)。[对于另一些证书类型][12] LCTT 译注:链接中提到<ruby>扩展验证<rt>Extended Validation</rt></ruby>EV类型其实还有 <ruby>组织验证<rt>Organization Validated</rt></ruby>OV类型CA 还会检查相关法律文书例如公司营业执照等。一旦验证完成CA一般在申请者付费后会从 CSR 中取出数据(即公钥和身份信息),使用 CA 自己的私钥进行签名,创建一个(签名)证书并发送给申请者。申请者将该证书部署在网站服务器上,当用户使用 HTTPS (或其它基于 [TLS][13] 加密的协议)与服务器通信时,该证书被分发给用户。
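作为示意,用 openssl 生成私钥并创建 CSR 的常见做法大致如下(域名与文件名仅为示例):

```
$ openssl genrsa -out server.key 2048                                            # 生成 RSA 私钥
$ openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"  # 创建证书签名请求
```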
当用户访问该网站时,浏览器获取该证书,接着检查证书中的主机名是否与当前正在连接的网站一致(下文会详细说明),核验 CA 签名有效性。如果其中一步验证不通过,浏览器会给出安全警告并切断与网站的连接。反之,如果验证通过,浏览器会使用证书中的公钥核验服务器发送的签名信息,确认该服务器持有该证书的私钥。有几种算法用于协商后续通信用到的<ruby>共享密钥<rt>shared secret key</rt></ruby>,其中一种也用到了服务器发送的签名信息。<ruby>密钥交换<rt>Key exchange</rt></ruby>算法不在本文的讨论范围,可以参考这个[视频][14],其中仔细说明了一种密钥交换算法。
当用户访问该网站时,浏览器获取该证书,接着检查证书中的主机名是否与当前正在连接的网站一致(下文会详细说明),核验 CA 签名有效性。如果其中一步验证不通过,浏览器会给出安全警告并切断与网站的连接。反之,如果验证通过,浏览器会使用证书中的公钥核验服务器发送的签名信息,确认该服务器持有该证书的私钥。有几种算法用于协商后续通信用到的<ruby>共享密钥<rt>shared secret key</rt></ruby>,其中一种也用到了服务器发送的签名信息。<ruby>密钥交换<rt>key exchange</rt></ruby>算法不在本文的讨论范围,可以参考这个[视频][14],其中仔细说明了一种密钥交换算法。
### 建立信任
你可能会问,“如果 CA 使用其私钥对证书进行签名,也就意味着我们需要使用 CA 的公钥验证证书。那么 CA 的公钥从何而来,谁对其进行签名呢?” 答案是 CA 对自己签名!可以使用证书公钥对应的私钥,对证书本身进行签名!这类签名证书被称为是<ruby>自签名的<rt>self-signed</rt></ruby>;在 PKI 体系下,这意味着对你说“相信我”。(为了表达方便,人们通常说用证书进行了签名,虽然真正用于签名的私钥并不在证书中。)
通过遵守[浏览器][15]和[操作系统][16]供应商建立的规则CA 表明自己足够可靠并寻求加入到浏览器或操作系统预装的一组自签名证书中。这些证书被称为“<ruby>信任锚<rt>trust anchors</rt></ruby>”或 <ruby>CA 根证书<rt>root CA certificates</rt></ruby>,被存储在根证书区,我们<ruby>约定<rt>implicitly</rt></ruby>信任该区域内的证书。
通过遵守[浏览器][15]和[操作系统][16]供应商建立的规则CA 表明自己足够可靠并寻求加入到浏览器或操作系统预装的一组自签名证书中。这些证书被称为“<ruby>信任锚<rt>trust anchor</rt></ruby>”或 <ruby>CA 根证书<rt>root CA certificate</rt></ruby>,被存储在根证书区,我们<ruby>约定<rt>implicitly</rt></ruby>信任该区域内的证书。
CA 也可以签发一种特殊的证书,该证书自身可以作为 CA。在这种情况下它们可以生成一个证书链。要核验证书链需要从“信任锚”也就是 CA 根证书)开始,使用当前证书的公钥核验下一层证书的签名(或其它一些信息)。按照这个方式依次核验下一层证书,直到证书链底部。如果整个核验过程没有问题,信任链也建立完成。当向 CA 付费为网站签发证书时实际购买的是将证书放置在证书链下的权利。CA 将卖出的证书标记为“不可签发子证书”,这样它们可以在适当的长度终止信任链(防止其继续向下扩展)。
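作为示意,可以用 openssl 手动核验这样一条证书链(文件名仅为示例):

```
# 以根证书为信任锚,借助中间证书核验站点证书
$ openssl verify -CAfile root.pem -untrusted intermediate.pem server.pem
server.pem: OK
```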
为何要使用长度超过 2 的信任链呢?毕竟网站的证书可以直接被 CA 根证书签名。在实际应用中,很多因素促使 CA 创建<ruby>中间 CA 证书<rt>intermediate CA certificate</rt></ruby>最主要是为了方便。由于价值连城CA 根证书对应的私钥通常被存放在特定的设备中,一种需要多人解锁的<ruby>硬件安全模块<rt>hardware security module, HSM</rt></ruby>,该模块完全离线并被保管在配备监控和报警设备的[地下室][18]中。
为何要使用长度超过 2 的信任链呢?毕竟网站的证书可以直接被 CA 根证书签名。在实际应用中,很多因素促使 CA 创建<ruby>中间 CA 证书<rt>intermediate CA certificate</rt></ruby>最主要是为了方便。由于价值连城CA 根证书对应的私钥通常被存放在特定的设备中,一种需要多人解锁的<ruby>硬件安全模块<rt>hardware security module</rt></ruby>HSM,该模块完全离线并被保管在配备监控和报警设备的[地下室][18]中。
<ruby>CA/浏览器论坛<rt>CAB Forum, CA/Browser Forum</rt></ruby>负责管理 CA[要求][19]任何与 CA 根证书LCTT 译注:就像前文提到的那样,这里是指对应的私钥)相关的操作必须由人工完成。设想一下,如果每个证书请求都需要员工将请求内容拷贝到保密介质中、进入地下室、与同事一起解锁 HSM、使用 CA 根证书对应的私钥签名证书最后将签名证书从保密介质中拷贝出来那么每天为大量网站签发证书是相当繁重乏味的工作。因此CA 创建内部使用的中间 CA用于证书签发自动化。
@ -72,12 +73,12 @@ via: https://opensource.com/article/18/7/private-keys
作者:[Alex Wood][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/awood
[1]:https://opensource.com/article/18/5/cryptography-pki
[1]:https://linux.cn/article-9792-1.html
[2]:https://theintercept.com/2014/10/28/smuggling-snowden-secrets/
[3]:https://en.wikipedia.org/wiki/SHA-2
[4]:https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html

View File

@ -16,12 +16,8 @@ Linux DNS 查询剖析(第四部分)
在第四部分中,我将介绍容器如何完成 DNS 查询。你想的没错,也不是那么简单。
### 1) Docker 和 DNS
在 [Linux DNS 查询剖析(第三部分)][3] 中,我们介绍了 `dnsmasq`,其工作方式如下:将 DNS 查询指向到 localhost 地址 `127.0.0.1`,同时启动一个进程监听 `53` 端口并处理查询请求。
在按上述方式配置 DNS 的主机上,如果运行了一个 Docker 容器,容器内的 `/etc/resolv.conf` 文件会是怎样的呢?
@ -72,29 +68,29 @@ google.com.             112     IN      A       172.217.23.14
在这个问题上Docker 的解决方案是忽略所有可能的复杂情况,即无论主机中使用什么 DNS 服务器,容器内都使用 Google 的 DNS 服务器 `8.8.8.8` 和 `8.8.4.4` 完成 DNS 查询。
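作为示意,当主机的 DNS 配置指向 localhost 时,可以在一个一次性容器中观察到这一点(输出随 Docker 版本与主机配置而异,仅供参考):

```
$ docker run --rm busybox cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```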
_我的经历在 2013 年,我遇到了使用 Docker 以来的第一个问题,与 Docker 的这种 DNS 解决方案密切相关。我们公司的网络屏蔽了 `8.8.8.8` 和 `8.8.4.4`导致容器无法解析域名。_
_我的经历在 2013 年,我遇到了使用 Docker 以来的第一个问题,与 Docker 的这种 DNS 解决方案密切相关。我们公司的网络屏蔽了 `8.8.8.8` 和 `8.8.4.4`导致容器无法解析域名。_
这就是 Docker 容器的情况,但对于包括 Kubernetes 在内的容器 _<ruby>编排引擎<rt>orchestrators</rt></ruby>_,情况又有些不同。
这就是 Docker 容器的情况,但对于包括 Kubernetes 在内的容器 <ruby>编排引擎<rt>orchestrators</rt></ruby>,情况又有些不同。
### 2) Kubernetes 和 DNS
在 Kubernetes 中,最小部署单元是 `pod``pod` 是一组相互协作的容器,共享 IP 地址(和其它资源)。
在 Kubernetes 中,最小部署单元是 podpod 是一组相互协作的容器,共享 IP 地址(和其它资源)。
Kubernetes 面临的一个额外的挑战是,将 Kubernetes 服务请求(例如,`myservice.kubernetes.io`)通过对应的<ruby>解析器<rt>resolver</rt></ruby>,转发到具体服务地址对应的<ruby>内网地址<rt>private network</rt></ruby>。这里提到的服务地址被称为归属于“<ruby>集群域<rt>cluster domain</rt></ruby>”。集群域可由管理员配置,根据配置可以是 `cluster.local``myorg.badger` 等。
在 Kubernetes 中,你可以为 `pod` 指定如下四种 `pod` 内 DNS 查询的方式。
在 Kubernetes 中,你可以为 pod 指定如下四种 pod 内 DNS 查询的方式。
* Default
**Default**
在这种(名称容易让人误解)的方式中,`pod` 与其所在的主机采用相同的 DNS 查询路径,与前面介绍的主机 DNS 查询一致。我们说这种方式的名称容易让人误解,因为该方式并不是默认选项!`ClusterFirst` 才是默认选项。
在这种名称容易让人误解的方式中pod 与其所在的主机采用相同的 DNS 查询路径,与前面介绍的主机 DNS 查询一致。我们说这种方式的名称容易让人误解,因为该方式并不是默认选项!`ClusterFirst` 才是默认选项。
如果你希望覆盖 `/etc/resolv.conf` 中的条目,你可以添加到 `kubelet` 的配置中。
* ClusterFirst
**ClusterFirst**
`ClusterFirst` 方式中,遇到 DNS 查询请求会做有选择的转发。根据配置的不同,有以下两种方式:
第一种方式配置相对古老但更简明,即采用一个规则:如果请求的域名不是集群域的子域,那么将其转发到 `pod` 所在的主机。
第一种方式配置相对古老但更简明,即采用一个规则:如果请求的域名不是集群域的子域,那么将其转发到 pod 所在的主机。
第二种方式相对新一些,你可以在内部 DNS 中配置选择性转发。
@ -115,27 +111,27 @@ data:
`stubDomains` 条目中,可以为特定域名指定特定的 DNS 服务器;而 `upstreamNameservers` 条目则给出,待查询域名不是集群域子域情况下用到的 DNS 服务器。
这是通过在一个 `pod` 中运行我们熟知的 `dnsmasq` 实现的。
这是通过在一个 pod 中运行我们熟知的 `dnsmasq` 实现的。
![kubedns](https://zwischenzugs.files.wordpress.com/2018/08/kubedns.png?w=525)
剩下两种选项都比较小众:
* ClusterFirstWithHostNet
**ClusterFirstWithHostNet**
适用于 `pod` 使用主机网络的情况,例如绕开 Docker 网络配置,直接使用与 `pod` 对应主机相同的网络。
适用于 pod 使用主机网络的情况,例如绕开 Docker 网络配置,直接使用与 pod 对应主机相同的网络。
* None
**None**
`None` 意味着不改变 DNS但强制要求你在 `pod` <ruby>规范文件<rt>specification</rt></ruby>`dnsConfig` 条目中指定 DNS 配置。
### CoreDNS 即将到来
除了上面提到的那些,一旦 `CoreDNS` 取代Kubernetes 中的 `kube-dns`,情况还会发生变化。`CoreDNS` 相比 `kube-dns` 具有可配置性更高、效率更高等优势。
除了上面提到的那些,一旦 `CoreDNS` 取代 Kubernetes 中的 `kube-dns`,情况还会发生变化。`CoreDNS` 相比 `kube-dns` 具有可配置性更高、效率更高等优势。
如果想了解更多,参考[这里][5]。
如果你对 OpenShift 的网络感兴趣,我曾写过一篇[文章][6]可供你参考。但文章中 OpenShift 的版本是 `3.6`,可能有些过时。
如果你对 OpenShift 的网络感兴趣,我曾写过一篇[文章][6]可供你参考。但文章中 OpenShift 的版本是 3.6,可能有些过时。
### 第四部分总结
@ -152,14 +148,14 @@ via: https://zwischenzugs.com/2018/08/06/anatomy-of-a-linux-dns-lookup-part-iv/
作者:[zwischenzugs][a]
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://zwischenzugs.com/
[1]:https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-lookup-part-i/
[2]:https://zwischenzugs.com/2018/06/18/anatomy-of-a-linux-dns-lookup-part-ii/
[3]:https://zwischenzugs.com/2018/07/06/anatomy-of-a-linux-dns-lookup-part-iii/
[1]:https://linux.cn/article-9943-1.html
[2]:https://linux.cn/article-9949-1.html
[3]:https://linux.cn/article-9972-1.html
[4]:https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods
[5]:https://coredns.io/
[6]:https://zwischenzugs.com/2017/10/21/openshift-3-6-dns-in-pictures/

View File

@ -0,0 +1,78 @@
Steam 让我们在 Linux 上玩 Windows 的游戏更加容易
======
![Steam Wallpaper][1]
众所周知,[Linux 游戏][2]库中的游戏只有 Windows 游戏库中的一部分。实际上,许多人甚至都不会考虑将操作系统[转换为 Linux][3],原因很简单:他们喜欢的游戏,大多数都不能在这个平台上运行。
在撰写本文时Steam 上已有超过 5000 种游戏可以在 Linux 上运行,而 Steam 上的游戏总数已经接近 27000 种了。5000 种游戏听起来可能很多,但和 27000 种相比,确实还差得远。
虽然几乎所有的新的<ruby>独立游戏<rt>indie game</rt></ruby>都是在 Linux 中推出的,但我们仍然无法在这上面玩很多的 [3A 大作][4]。对我而言,虽然这其中有很多游戏我都很希望能有机会玩,但这从来都不是一个非黑即白的问题。因为我主要是玩独立游戏和[复古游戏][5],所以几乎所有我喜欢的游戏都可以在 Linux 系统上运行。
### 认识 ProtonSteam 的一个 WINE 复刻
现在,这个问题已经成为过去式了,因为本周 Valve [宣布][6]要对 Steam Play 进行一次更新,此次更新会将一个名为 Proton 的 Wine 复刻版本添加到 Linux 客户端中。是的这个工具是开源的Valve 已经在 [GitHub][7] 上开源了源代码,但该功能仍然处于测试阶段,所以你必须使用测试版的 Steam 客户端才能使用这项功能。
#### 使用 Proton可以在 Linux 系统上通过 Steam 运行更多 Windows 游戏
这对我们这些 Linux 用户来说,实际上意味着什么?简单来说,这意味着我们可以在 Linux 电脑上运行全部 27000 种游戏,而无需配置像 [PlayOnLinux][8] 或 [Lutris][9] 这样的东西。我要告诉你的是,配置这些东西有时候会非常让人头疼。
更复杂的答案是:这事听起来过于美好,而这是有原因的。虽然在理论上,你可以用这种方式在 Linux 上玩所有的 Windows 平台上的游戏,但只有少部分游戏在推出时获得了官方支持。这少部分游戏包括《DOOM》、《最终幻想 VI》、《铁拳 7》、《星球大战前线 2》和其他几个。
#### 你可以在 Linux 上玩所有的 Windows 游戏(理论上)
虽然目前该列表只有大约 30 个游戏,你可以点击“为所有游戏启用 Steam Play”复选框来强制使用 Steam 的 Proton 来安装和运行任意游戏。但你最好不要有太高的期待,它们的稳定性和性能表现不一定有你希望的那么好,所以请把期望值压低一点。
![Steam Play][10]
据[这份报告][13],已经有超过一千个游戏可以在 Linux 上玩了。按[此指南][14]来了解如何启用 Steam Play 测试版本。
#### 体验 Proton没有我想的那么烂
例如,我用 Proton 安装了几个对硬件要求中等的游戏。其中一个是《上古卷轴 4湮没》在我玩这个游戏的两个小时里它只崩溃了一次而且几乎是紧跟在游戏教程的自动保存点之后。
我有一块英伟达 GTX 1050 Ti 的显卡,所以我可以在 1080p 分辨率和较高的画质设置下玩这个游戏。而且除了这次崩溃之外,我没有遇到任何问题。我唯一真正感到不爽的只有它的帧数没有原本的高。在 90% 的时间里,游戏的帧数都在 60 帧以上,但我知道它的帧数应该能更高。
我安装和运行的其他所有游戏都运行得很完美,虽然我还没有较长时间地玩过它们中的任何一个。我安装的游戏中包括《森林》、《丧尸围城 4》和《刺客信条 2》。你觉得我这是喜欢恐怖游戏吗
#### 为什么 Steam 仍然要下注在 Linux 上?
现在,一切都很好,这件事为什么会发生呢?为什么 Valve 要花费时间,金钱和资源来做这样的事?我倾向于认为,他们这样做是因为他们懂得 Linux 社区的价值,但是如果要我老实地说,我不相信我们和它有任何的关系。
如果要我打赌的话,我会说 Valve 开发 Proton是因为他们还没有放弃 [Steam Machine][11]。因为 [Steam OS][12] 是基于 Linux 的发行版在这类东西上面投资可以获取最大的利润Steam OS 上可用的游戏越多,就会有更多的人愿意购买 Steam Machine。
可能我是错的,但是我敢打赌啊,我们会在不远的未来看到新一批的 Steam Machine。可能我们会在一年内看到它们也有可能我们再等五年都见不到谁知道呢
无论哪种方式,我所知道的是,我终于能开心地玩我 Steam 游戏库里的游戏了。这个游戏库是多年来我通过各种慈善包、促销码和不定期的促销慢慢积累起来的,过去我只能试着让其中的游戏在 Lutris 中运行。
#### 为 Linux 上越来越多的游戏而激动?
你怎么看?你对此感到激动吗?或者说,你是否担心,因为现在没那么必要了,愿意为 Linux 原生开发游戏的开发者会更少Valve 是喜欢 Linux 社区,还是说他们喜欢钱?请在下面的评论区告诉我们您的想法,并记得回来查看更多类似这样的开源软件方面的文章。
--------------------------------------------------------------------------------
via: https://itsfoss.com/steam-play-proton/
作者:[Phillip Prado][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/steam-wallpaper.jpeg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[4]:https://itsfoss.com/triplea-game-review/
[5]:https://itsfoss.com/play-retro-games-linux/
[6]:https://steamcommunity.com/games/221410
[7]:https://github.com/ValveSoftware/Proton/
[8]:https://www.playonlinux.com/en/
[9]:https://lutris.net/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg
[11]:https://store.steampowered.com/sale/steam_machines
[12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/
[13]:https://spcr.netlify.com/
[14]:https://itsfoss.com/steam-play/

View File

@ -1,26 +1,27 @@
如何在 Linux 上使用 tcpdump 命令捕获和分析数据包
======
tcpdump 是一个有名的命令行**数据包分析**工具。我们可以使用 tcpdump 命令捕获实时 TCP/IP 数据包,这些数据包也可以保存到文件中。之后这些捕获的数据包可以通过 tcpdump 命令进行分析。tcpdump 命令在网络级故障排除时变得非常方便。
`tcpdump` 是一个有名的命令行**数据包分析**工具。我们可以使用 `tcpdump` 命令捕获实时 TCP/IP 数据包,这些数据包也可以保存到文件中。之后这些捕获的数据包可以通过 `tcpdump` 命令进行分析。`tcpdump` 命令在网络层面进行故障排除时变得非常方便。
![](https://www.linuxtechi.com/wp-content/uploads/2018/08/tcpdump-command-examples-linux.jpg)
tcpdump 在大多数 Linux 发行版中都能用,对于基于 Debian 的Linux可以使用 apt 命令安装它
`tcpdump` 在大多数 Linux 发行版中都能用,对于基于 Debian 的 Linux可以使用 `apt` 命令安装它:
```
# apt install tcpdump -y
```
在基于 RPM 的 Linux 操作系统上,可以使用下面的 yum 命令安装 tcpdump
在基于 RPM 的 Linux 操作系统上,可以使用下面的 `yum` 命令安装 `tcpdump`。
```
# yum install tcpdump -y
```
当我们在没用任何选项的情况下运行 tcpdump 命令时,它将捕获所有接口的数据包。因此,要停止或取消 tcpdump 命令,请输入 '**ctrl+c**'。在本教程中,我们将使用不同的实例来讨论如何捕获和分析数据包,
当我们在没用任何选项的情况下运行 `tcpdump` 命令时,它将捕获所有接口的数据包。因此,要停止或取消 `tcpdump` 命令,请键入 `ctrl+c`。在本教程中,我们将使用不同的实例来讨论如何捕获和分析数据包。
### 示例: 1) 从特定接口捕获数据包
### 示例1从特定接口捕获数据包
当我们在没用任何选项的情况下运行 tcpdump 命令时,它将捕获所有接口上的数据包,因此,要从特定接口捕获数据包,请使用选项 '**-i**',后跟接口名称。
当我们在没用任何选项的情况下运行 `tcpdump` 命令时,它将捕获所有接口上的数据包,因此,要从特定接口捕获数据包,请使用选项 `-i`,后跟接口名称。
语法:
@ -28,7 +29,7 @@ tcpdump 在大多数 Linux 发行版中都能用,对于基于 Debian 的Linux
# tcpdump -i {接口名}
```
假设我想从接口“enp0s3”捕获数据包
假设我想从接口 `enp0s3` 捕获数据包。
输出将如下所示,
@ -49,21 +50,21 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
```
### 示例: 2) 从特定接口捕获特定数量数据包
### 示例2从特定接口捕获特定数量数据包
假设我们想从特定接口(如“enp0s3”)捕获12个数据包这可以使用选项 '**-c {数量} -I {接口名称}**' 轻松实现
假设我们想从特定接口(如 `enp0s3`)捕获 12 个数据包,这可以使用选项 `-c {数量} -i {接口名称}` 轻松实现。
```
[root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3
```
上面的命令将生成如下所示的输出
上面的命令将生成如下所示的输出
[![N-Number-Packsets-tcpdump-interface][1]][2]
### 示例: 3) 显示 tcpdump 的所有可用接口
### 示例3显示 tcpdump 的所有可用接口
使用 '**-D**' 选项显示 tcpdump 命令的所有可用接口,
使用 `-D` 选项显示 `tcpdump` 命令的所有可用接口,
```
[root@compute-0-1 ~]# tcpdump -D
@ -86,11 +87,11 @@ root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3
[root@compute-0-1 ~]#
```
我正在我的一个openstack计算节点上运行tcpdump命令这就是为什么在输出中你会看到数字接口、标签接口、网桥和vxlan接口
我正在我的一个 OpenStack 计算节点上运行 `tcpdump` 命令,这就是为什么在输出中你会看到数字接口、标签接口、网桥和 vxlan 接口。
### 示例: 4) 捕获带有可读时间戳(-tttt 选项)的数据包
### 示例4捕获带有可读时间戳的数据包`-tttt` 选项)
默认情况下,在tcpdump命令输出中没有显示可读性好的时间戳,如果您想将可读性好的时间戳与每个捕获的数据包相关联,那么使用 '**-tttt**'选项,示例如下所示,
默认情况下,在 `tcpdump` 命令输出中,不显示可读性好的时间戳,如果您想将可读性好的时间戳与每个捕获的数据包相关联,那么使用 `-tttt` 选项,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
@ -108,12 +109,11 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
134 packets received by filter
69 packets dropped by kernel
[root@compute-0-1 ~]#
```
### 示例: 5) 捕获数据包并将其保存到文件( -w 选项)
### 示例5捕获数据包并将其保存到文件`-w` 选项)
使用 tcpdump 命令中的 '**-w**' 选项将捕获的 TCP/IP 数据包保存到一个文件中,以便我们可以在将来分析这些数据包以供进一步分析。
使用 `tcpdump` 命令中的 `-w` 选项将捕获的 TCP/IP 数据包保存到一个文件中,以便我们可以在将来分析这些数据包以供进一步分析。
语法:
@ -121,9 +121,9 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
# tcpdump -w 文件名.pcap -i {接口名}
```
注意:文件扩展名必须为 **.pcap**
注意:文件扩展名必须为 `.pcap`
假设我要把 '**enp0s3**' 接口捕获到的包保存到文件名为 **enp0s3-26082018.pcap**
假设我要把 `enp0s3` 接口捕获到的包保存到文件名为 `enp0s3-26082018.pcap`
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
@ -140,24 +140,23 @@ tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 b
[root@compute-0-1 ~]# ls
anaconda-ks.cfg enp0s3-26082018.pcap
[root@compute-0-1 ~]#
```
捕获并保存大小**大于 N 字节**的数据包
捕获并保存大小**大于 N 字节**的数据包
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024
```
捕获并保存大小**小于 N 字节**的数据包
捕获并保存大小**小于 N 字节**的数据包
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024
```
### 示例: 6) 从保存的文件中读取数据包( -r 选项)
### 示例6从保存的文件中读取数据包`-r` 选项)
在上面的例子中,我们已经将捕获的数据包保存到文件中,我们可以使用选项 '**-r**' 从文件中读取这些数据包,例子如下所示,
在上面的例子中,我们已经将捕获的数据包保存到文件中,我们可以使用选项 `-r` 从文件中读取这些数据包,例子如下所示,
```
[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap
@ -183,12 +182,11 @@ p,TS val 81359114 ecr 81350901], length 508
2018-08-25 22:03:17.647502 IP controller0.example.com.amqp > compute-0-1.example.com.57788: Flags [.], ack 1956, win 1432, options [nop,nop,TS val 813
52753 ecr 81359114], length 0
.........................................................................................................................
```
### 示例: 7) 仅捕获特定接口上的 IP 地址数据包( -n 选项)
### 示例7仅捕获特定接口上的 IP 地址数据包(`-n` 选项)
使用 tcpdump 命令中的 -n 选项,我们能只捕获特定接口上的 IP 地址数据包,示例如下所示,
使用 `tcpdump` 命令中的 `-n` 选项,我们能只捕获特定接口上的 IP 地址数据包,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3
@ -211,19 +209,18 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:22:28.539595 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1572, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539760 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1572:1912, ack 1, win 291, options [nop,nop,TS val 82510007 ecr 20666614], length 340
.........................................................................
```
您还可以使用 tcpdump 命令中的 -c 和 -N 选项捕获 N 个 IP 地址包,
您还可以使用 `tcpdump` 命令中的 `-c``-n` 选项捕获 N 个 IP 地址包:
```
[root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3
```
### 示例: 8) 仅捕获特定接口上的TCP数据包
### 示例8仅捕获特定接口上的 TCP 数据包
tcpdump 命令中,我们能使用 '**tcp**' 选项来只捕获TCP数据包,
`tcpdump` 命令中,我们能使用 `tcp` 选项来只捕获 TCP 数据包,
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 tcp
@ -241,9 +238,9 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
...................................................................................................................................................
```
### 示例: 9) 从特定接口上的特定端口捕获数据包
### 示例9从特定接口上的特定端口捕获数据包
使用 tcpdump 命令,我们可以从特定接口 enp0s3 上的特定端口(例如 22 )捕获数据包
使用 `tcpdump` 命令,我们可以从特定接口 `enp0s3` 上的特定端口(例如 22捕获数据包。
语法:
@ -262,13 +259,12 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:54:55.038564 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 940, win 9177, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364
............................................................................................................................
[root@compute-0-1 ~]#
```
### 示例: 10) 在特定接口上捕获来自特定来源 IP 的数据包
### 示例10在特定接口上捕获来自特定来源 IP 的数据包
tcpdump命令中使用 '**src**' 关键字后跟 '**IP 地址**',我们可以捕获来自特定来源 IP 的数据包,
`tcpdump` 命令中,使用 `src` 关键字后跟 IP 地址,我们可以捕获来自特定来源 IP 的数据包,
语法:
@ -296,17 +292,16 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
10 packets captured
12 packets received by filter
0 packets dropped by kernel
[root@compute-0-1 ~]#
```
### 示例: 11) 在特定接口上捕获来自特定目的IP的数据包
### 示例11在特定接口上捕获来自特定目的 IP 的数据包
语法:
```
# tcpdump -n -i {接口名} dst {IP 地址}
```
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
@ -318,42 +313,39 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:10:43.522157 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 800:996, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522346 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 996:1192, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
.........................................................................................
```
### 示例: 12) 捕获两台主机之间的 TCP 数据包通信
### 示例12捕获两台主机之间的 TCP 数据包通信
假设我想捕获两台主机 169.144.0.1 和 169.144.0.20 之间的 TCP 数据包,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\)
```
使用 tcpdump 命令只捕获两台主机之间的 SSH 数据包流,
使用 `tcpdump` 命令只捕获两台主机之间的 SSH 数据包流,
```
[root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22
```
示例: 13) 捕获两台主机之间的 UDP 网络数据包(来回)
### 示例13捕获两台主机之间来回的 UDP 网络数据包
语法:
```
# tcpdump -w {文件名}.pcap -s {捕获大小} -i {接口名} udp and \(host {IP1} and host {IP2}\)
```
```
[root@compute-0-1 ~]# tcpdump -w two-host-comm.pcap -s 1000 -i enp0s3 udp and \(host 169.144.0.10 and host 169.144.0.20\)
```
### 示例: 14) 捕获十六进制和ASCII格式的数据包
### 示例14捕获十六进制和 ASCII 格式的数据包
使用 tcpdump 命令,我们可以以 ASCII 和十六进制格式捕获 TCP/IP 数据包,
使用 `tcpdump` 命令,我们可以以 ASCII 和十六进制格式捕获 TCP/IP 数据包,
要使用** -A **选项捕获ASCII格式的数据包示例如下所示:
要使用 `-A` 选项捕获 ASCII 格式的数据包,示例如下所示:
```
[root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3
@ -376,7 +368,7 @@ root@compute-0-1 @..........
..................................................................................................................................................
```
要同时以十六进制和 ASCII 格式捕获数据包,请使用** -XX **选项
要同时以十六进制和 ASCII 格式捕获数据包,请使用 `-XX` 选项
```
[root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3
@ -406,10 +398,9 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
0x0030: 3693 7c0e 0000 0101 080a 015a a734 0568 6.|........Z.4.h
0x0040: 39af
.......................................................................
```
这就是本文的全部内容,我希望您能了解如何使用 tcpdump 命令捕获和分析 TCP/IP 数据包。请分享你的反馈和评论。
这就是本文的全部内容,我希望您能了解如何使用 `tcpdump` 命令捕获和分析 TCP/IP 数据包。请分享你的反馈和评论。
--------------------------------------------------------------------------------
@ -418,7 +409,7 @@ via: https://www.linuxtechi.com/capture-analyze-packets-tcpdump-command-linux/
作者:[Pradeep Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ypingcn](https://github.com/ypingcn)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,16 +1,17 @@
如何在 Ubuntu 18.04 和其他 Linux 发行版中创建照片幻灯片
如何在 Ubuntu 和其他 Linux 发行版中创建照片幻灯片
======
创建照片幻灯片只需点击几下。以下是如何在 Ubuntu 18.04 和其他 Linux 发行版中制作照片幻灯片。
![How to create slideshow of photos in Ubuntu Linux][1]
想象一下,你的朋友和亲戚正在拜访你,并请你展示最近的活动/旅行照片。
想象一下,你的朋友和亲戚正在拜访你,并请你展示最近的活动/旅行照片。
你将照片保存在计算机上,并整齐地放在单独的文件夹中。你邀请计算机附近的所有人。你进入该文件夹,单击其中一张图片,然后按箭头键逐个显示照片。
但那太累了!如果这些图片每隔几秒自动更改一次,那将会好很多。
这称之为幻灯片,我将向你展示如何在 Ubuntu 中创建照片幻灯片。这能让你在文件夹中循环播放图片并以全屏模式显示它们。
这称之为幻灯片,我将向你展示如何在 Ubuntu 中创建照片幻灯片。这能让你在文件夹中循环播放图片并以全屏模式显示它们。
### 在 Ubuntu 18.04 和其他 Linux 发行版中创建照片幻灯片
@ -20,19 +21,19 @@
如果你在 Ubuntu 18.04 或任何其他发行版中使用 GNOME那么你很幸运。Gnome 的默认图像浏览器Eye of GNOME能够在当前文件夹中显示图片的幻灯片。
只需单击其中一张图片,你将在程序的右上角菜单中看到设置选项。它看起来像三条横栏堆在彼此的顶部
只需单击其中一张图片,你将在程序的右上角菜单中看到设置选项。它看起来像堆叠在一起的三条横栏。
你会在这里看到几个选项。勾选幻灯片选项,它将全屏显示图像。
![How to create slideshow of photos in Ubuntu Linux][2]
默认情况下,图像以 5 秒的间隔变化。你可以进入 Preferences->Slideshow 来更改幻灯片放映间隔。
默认情况下,图像以 5 秒的间隔变化。你可以进入 Preferences -> Slideshow 来更改幻灯片放映间隔。
![change slideshow interval in Ubuntu][3]Changing slideshow interval
![change slideshow interval in Ubuntu][3]
#### 方法 2使用 Shotwell Photo Manager 进行照片幻灯片放映
[Shotwell][4] 是一种流行的[ Linux 照片管理程序][5]。适用于所有主要的 Linux 发行版。
[Shotwell][4] 是一款流行的 [Linux 照片管理程序][5]。适用于所有主要的 Linux 发行版。
如果尚未安装,请在你的发行版软件中心中搜索 Shotwell 并安装。
@ -55,7 +56,7 @@ via: https://itsfoss.com/photo-slideshow-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,24 +1,24 @@
用 zsh 提高生产力的5个 tips
用 zsh 提高生产力的 5 个技巧
======
> zsh 提供了数之不尽的功能和特性,这里有五个可以让你在命令行暴增效率的方法。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK)
Z shell (亦称 zsh) 是 *unx 系统中的命令解析器 。 它跟 `sh` (Bourne shell) 家族的其他解析器 ( 如 `bash``ksh` ) 有着相似的特点但它还提供了大量的高级特性以及强大的命令行编辑功能选项如增强版tab补全。
Z shell[zsh][1])是 Linux 和类 Unix 系统中的一个[命令解析器][2]。 它跟 sh (Bourne shell) 家族的其它解析器(如 bash 和 ksh有着相似的特点但它还提供了大量的高级特性以及强大的命令行编辑功能如增强版 Tab 补全。
由于 zsh 有好几百页的文档去描述他的特性,所以我无法在这里阐明 zsh 的所有功能。在本文我会列出5个 tips让你通过使用 zsh 来提高你的生产力。
在这里不可能涉及到 zsh 的所有功能,[描述][3]它的特性需要好几百页。在本文中,我会列出 5 个技巧,让你通过在命令行使用 zsh 来提高你的生产力。
### 1\. 主题和插件
### 1主题和插件
多年来,开源社区已经为 zsh 开发了数不清的主题和插件。主题是预定义提示符的配置,而插件则是一组常用的别名命令和功能,让你更方便的使用一种特定的命令或者编程语言。
多年来,开源社区已经为 zsh 开发了数不清的主题和插件。主题是一个预定义提示符的配置,而插件则是一组常用的别名命令和函数,可以让你更方便的使用一种特定的命令或者编程语言。
如果你现在想开始用 zsh 的主题和插件,那么使用 zsh 的配置框架 (configuiration framework) 是你最快的入门方式。在众多的配置框架中,最受欢迎的则是 [Oh My Zsh][4]。在默认配置中,他就已经为 zsh 启用了一些合理的配置,同时它也自带多个主题和插件。
如果你现在想开始用 zsh 的主题和插件,那么使用一种 zsh 的配置框架是你最快的入门方式。在众多的配置框架中,最受欢迎的则是 [Oh My Zsh][4]。在默认配置中,它就已经为 zsh 启用了一些合理的配置,同时它也自带上百个主题和插件。
由于主题会在你的命令行提示符之前添加一些常用的信息,比如你 Git 仓库的状态,或者是当前使用的 Python 虚拟环境,所以它会让你的工作更高效。只需要看到这些信息,你就不用再敲命令去重新获取它们,而且这些提示也相当酷炫。
下图就是我(作者)选用的主题 [Powerlevel9k][5]
主题会在你的命令行提示符之前添加一些有用的信息,比如你 Git 仓库的状态,或者是当前使用的 Python 虚拟环境,所以它会让你的工作更高效。只需要看到这些信息,你就不用再敲命令去重新获取它们,而且这些提示也相当酷炫。下图就是我选用的主题 [Powerlevel9k][5]
![zsh Powerlevel9K theme][7]
zsh 主题 Powerlevel9k
*zsh 主题 Powerlevel9k*
除了主题Oh my Zsh 还自带了大量常用的 zsh 插件。比如,通过启用 Git 插件,你可以用一组简便的命令别名操作 Git 比如
@ -36,39 +36,37 @@ gcs='git commit -S'
glg='git log --stat'
```
zsh 还有许多插件是用于多种编程语言,打包系统和一些平时在命令行中常用的工具。
以下是我(作者) Ferdora 工作站中用到的插件表:
zsh 还有许多插件可以用于许多编程语言、打包系统和一些平时在命令行中常用的工具。以下是我 Fedora 工作站中用到的插件表:
```
git golang fedora docker oc sudo vi-mode virtualenvwrapper
```
### 2\. 智能的命令别名
### 2智能的命令别名
命令别名在 zsh 中十分用。为你常用的命令定义别名可以节省你的打字时间。Oh My Zsh 默认配置了一些常用的命令别名,包括目录导航命令别名,为常用的命令添加额外的选项,比如:
命令别名在 zsh 中十分有用。为你常用的命令定义别名可以节省你的打字时间。Oh My Zsh 默认配置了一些常用的命令别名,包括目录导航命令别名,还为常用的命令添加额外的选项,比如:
```
ls='ls --color=tty'
grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'
```
除了命令别名以外, zsh 还自带两种额外常用的别名类型:后缀别名和全局别名。
除了命令别名以外zsh 还自带两种额外常用的别名类型:后缀别名和全局别名。
后缀别名可以让你在基于文件后缀的前提下,在命令行中利用指定程序打开这个文件。比如,要用 vim 打开 YAML 文件,可以定义以下命令行别名:
后缀别名可以让你基于文件后缀,在命令行中利用指定程序打开这个文件。比如,要用 vim 打开 YAML 文件,可以定义以下命令行别名:
```
alias -s {yml,yaml}=vim
```
现在,如果你在命令行中输入任何后缀名为 `yml``yaml` 文件, zsh 都会用 vim 打开这个文件
现在,如果你在命令行中输入任何后缀名为 `yml``yaml` 文件, zsh 都会用 vim 打开这个文件
```
$ playbook.yml
# Opens file playbook.yml using vim
```
全局别名可以让你在使用命令行的任何时刻创建命令别名,而不仅仅是在开始的时候。这个在你想替换常用文件名或者管道命令的时候就显得非常有用了。比如
全局别名可以让你创建一个可在命令行的任何地方展开的别名,而不仅仅是在命令开始的时候。这个在你想替换常用文件名或者管道命令的时候就显得非常有用了。比如
```
alias -g G='| grep -i'
@ -84,9 +82,9 @@ drwxr-xr-x.  6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads
接着,我们就来看看 zsh 是如何导航文件系统的。
### 3\. 便捷的目录导航
### 3便捷的目录导航
当你使用命令行的时候, 在不同的目录之间切换访问是最常见的工作了。 zsh 提供了一些十分有用的目录导航功能来简化这个操作。这些功能已经集成到 Oh My Zsh 中了, 而你可以用以下命令来启用它
当你使用命令行的时候,在不同的目录之间切换访问是最常见的工作了。 zsh 提供了一些十分有用的目录导航功能来简化这个操作。这些功能已经集成到 Oh My Zsh 中了, 而你可以用以下命令来启用它
```
setopt autocd autopushd \
pushdignoredups
@ -104,7 +102,7 @@ $ pwd
如果想要回退,只要输入 `-`:
Zsh 会记录你访问过的目录,这样下次你就可以快速切换到这些目录中。如果想要看这个目录列表,只要输入 `dirs -v`
zsh 会记录你访问过的目录,这样下次你就可以快速切换到这些目录中。如果想要看这个目录列表,只要输入 `dirs -v`
```
$ dirs -v
@ -168,7 +166,7 @@ $ pwd
/tmp
```
最后,你可以在 zsh 中利用 Tab 来自动补全目录名称。你可以先输入目录的首字母,然后`TAB` 来补全它们:
最后,你可以在 zsh 中利用 Tab 来自动补全目录名称。你可以先输入目录的首字母,然后按 `TAB` 键来补全它们:
```
$ pwd
@ -179,22 +177,22 @@ $ Projects/Opensource.com/zsh-5tips/
以上仅仅是 zsh 强大的 Tab 补全系统中的一个功能。接来下我们来探索它更多的功能。
### 4\. 先进的 Tab 补全
### 4先进的 Tab 补全
Zsh 强大的补全系统是它其中一个卖点。为了简便起见,我称它为 Tab 补全,然而在系统底层,它不仅仅只做一件事。这里通常包括扩展以及命令的补全,我会在这里同时讨论它们。如果想了解更多,详见 [用户手册][8] ( [User's Guide][8] )。
zsh 强大的补全系统是它的卖点之一。为了简便起见,我称它为 Tab 补全,然而在系统底层,它起到了好几种作用。这里通常包括展开以及命令补全,我会在这里一并讨论它们。如果想了解更多,详见[用户手册][8]。
在 Oh My Zsh 中,命令补全是默认启用的。要启用它,你只要在 `.zshrc` 文件中添加以下命令:
在 Oh My Zsh 中,命令补全是默认可用的。要启用它,你只要在 `.zshrc` 文件中添加以下命令:
```
autoload -U compinit
compinit
```
Zsh 的补全系统非常智能。他会根据当前上下文来进行命令的提示——比如,你输入了 `cd``TAB`zsh 只会为你提示目录名,因为它知道
当前的 `cd` 没有任何作用。
zsh 的补全系统非常智能。它会尝试只提示在当前上下文环境中有意义的项目 —— 比如,你输入了 `cd``TAB`zsh 只会为你提示目录名,因为它知道其它的项目放在 `cd` 后面没用。
反之,如果你使用 `ssh` 或者 `ping` 这类与用户或者主机相关的命令, zsh 便会提示用户名。
反之,如果你使用与用户相关的命令便会提示用户名,而 `ssh` 或者 `ping` 这类则会提示主机名。
`zsh` 拥有一个巨大而又完整的库,因此它能识别许多不同的命令。比如,如果你使用 `tar` 命令, 你可以按 Tab 键,他会为你展示一个可以用于解压的文件列表:
zsh 拥有一个巨大而又完整的库,因此它能识别许多不同的命令。比如,如果你使用 `tar` 命令, 你可以按 `TAB` 键,它会为你展示一个可以用于解压的文件列表:
```
$ tar -xzvf test1.tar.gz test1/file1 (TAB)
@ -221,7 +219,7 @@ $ git add (TAB)
$ git add zsh-5tips.md
```
zsh 还能识别命令行选项,同时只会提示与选中子命令相关的命令列表:
zsh 还能识别命令行选项,同时只会提示与选中子命令相关的命令列表:
```
$ git commit - (TAB)
@ -243,27 +241,27 @@ $ git commit - (TAB)
... TRUNCATED ...
```
在按 `TAB` 键之后,你可以使用方向键来选择你想用的命令。现在你就不用记住所有的 Git 命令项了。
在按 `TAB` 键之后,你可以使用方向键来选择你想用的命令。现在你就不用记住所有的 `git` 命令项了。
zsh 还有很多有用的功能。当你用它的时候,你就知道哪些对你才是最有用的。
### 5\. 命令行编辑与历史记录
### 5命令行编辑与历史记录
Zsh 的命令行编辑功能也十分有效。默认条件下,他是模拟 emacs 编辑器的。如果你是跟我一样更喜欢用 vi/vim你可以用以下命令启用 vi 编辑
zsh 的命令行编辑功能也十分有用。默认条件下,它是模拟 emacs 编辑器的。如果你是跟我一样更喜欢用 vi/vim你可以用以下命令启用 vi 的键绑定
```
$ bindkey -v
```
如果你使用 Oh My Zsh`vi-mode` 插件可以启用额外的绑定,同时会在你的命令提示符上增加 vi 的模式提示--这个非常有用。
如果你使用 Oh My Zsh`vi-mode` 插件可以启用额外的绑定,同时会在你的命令提示符上增加 vi 的模式提示 —— 这个非常有用。
当启用 vi 的绑定后,你可以命令行中使用 vi 命令进行编辑。比如,输入 `ESC+/` 来查找命令行记录。在查找的时候,输入 `n` 来找下一个匹配行,输入 `N` 来找上一个。输入 `ESC` 后,最常用的 vi 命令有以下几个,如输入 `0` 跳转到第一行,输入 `$` 跳转到最后一行,输入 `i` 来插入文本,输入 `a` 来追加文本等等,一些直接操作的命令也同样有效,比如输入 `cw` 来修改单词。
当启用 vi 的绑定后,你可以命令行中使用 vi 命令进行编辑。比如,输入 `ESC+/` 来查找命令行记录。在查找的时候,输入 `n` 来找下一个匹配行,输入 `N` 来找上一个。输入 `ESC` 后,常用的 vi 命令都可以使用,如输入 `0` 跳转到行首,输入 `$` 跳转到行尾,输入 `i` 来插入文本,输入 `a` 来追加文本等等,即使是跟随的命令也同样有效,比如输入 `cw` 来修改单词。
除了命令行编辑功能如果你想修改或重新执行之前使用过的命令zsh 还提供几个常用的命令行历史功能。比如,你打错了一个命令,输入 `fc`,你就可以在你偏好的编辑器中修复最后一条命令。使用哪个编辑器,参照的是 `$EDITOR` 变量,默认是使用 vi。
另外一个有用的命令是 `r` 会重新执行上一条命令;而 `r <WORD>` 则会执行上一条包含 `WORD` 的命令。
另外一个有用的命令是 `r` 会重新执行上一条命令;而 `r <WORD>` 则会执行上一条包含 `WORD` 的命令。
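一个简单的示意(输出仅供参考):

```
$ echo hello
hello
$ r          # 重新执行上一条命令
echo hello
hello
```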
最后,输入两个感叹号( `!!` ),可以在命令行中回溯最后一条命令。这个十分有用,比如,当你忘记使用 `sudo` 去执行需要权限的命令时:
最后,输入两个感叹号`!!`,可以在命令行中回溯最后一条命令。这个十分有用,比如,当你忘记使用 `sudo` 去执行需要权限的命令时:
```
$ less /var/log/dnf.log
@ -274,19 +272,16 @@ $ sudo less /var/log/dnf.log
这个功能让查找并且重新执行之前命令的操作更加方便。
### 何去何从
### 下一步呢
这里仅仅介绍了几个可以让你提高生产率的 zsh 特性;其实还有更多功能带你发掘;想知道更多的信息,你可以访问以下的资源:
这里仅仅介绍了几个可以让你提高生产率的 zsh 特性;其实还有更多功能有待你的发掘;想知道更多的信息,你可以访问以下的资源:
[An Introduction to the Z Shell][9]
- [An Introduction to the Z Shell][9]
- [A User's Guide to ZSH][10]
- [Archlinux Wiki][11]
- [zsh-lovers][12]
[A User's Guide to ZSH][10]
[Archlinux Wiki][11]
[zsh-lovers][12]
你有使用 zsh 提高生产力的tips可以分享吗作者很乐意在下方评论看到它们。
你有使用 zsh 提高生产力的技巧可以分享吗?我很乐意在下方评论中看到它们。
--------------------------------------------------------------------------------
@ -295,7 +290,7 @@ via: https://opensource.com/article/18/9/tips-productivity-zsh
作者:[Ricardo Gerardi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,48 +3,46 @@
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/waffles-recipe-eggs-cooking-mix.png?itok=Fp06VOBx)
有什么好的方法既可以宣传开源的精神又不用写代码呢这里有个点子“开源食堂”。在过去的8年间这就是我们在慕尼黑做的事情。
有什么好的方法,既可以宣传开源的精神又不用写代码呢?这里有个点子:“<ruby>开源食堂<rt>open source cooking</rt></ruby>”。在过去的 8 年间,这就是我们在慕尼黑做的事情。
开源食堂已经是我们常规的开源宣传活动了,因为我们发现开源与烹饪有很多共同点。
### 协作烹饪
[慕尼黑开源聚会][1]自2009年7月在[Café Netzwerk][2]创办以来,已经组织了若干次活动,活动一般在星期五的晚上组织。该聚会为开源项目工作者或者开源爱好者们提供了相互认识的方式。我们的信条是:“每四周的星期五属于免费软件Every fourth Friday for free software”。当然在一些周末,我们还会举办一些研讨会。那之后,我们很快加入了很多其他的活动,包括白香肠早餐、桑拿与烹饪活动。
[慕尼黑开源聚会][1]自 2009 7 月在 [Café Netzwerk][2] 创办以来,已经组织了若干次活动,活动一般在星期五的晚上组织。该聚会为开源项目工作者或者开源爱好者们提供了相互认识的方式。我们的信条是:“<ruby>每四周的星期五属于自由软件<rt>Every fourth Friday for free software</rt></ruby>”。当然在一些周末,我们还会举办一些研讨会。那之后,我们很快加入了很多其他的活动,包括白香肠早餐、桑拿与烹饪活动。
事实上,第一次开源烹饪聚会举办的有些混乱,但是我们经过这8年来以及15次的组织已经可以为25-30个与会者提供丰盛的美食了。
事实上,第一次开源烹饪聚会举办的有些混乱,但是我们经过这 8 年来以及 15 次的活动,已经可以为 25-30 个与会者提供丰盛的美食了。
回头看看这些夜晚,我们愈发发现共同烹饪与开源社区协作之间,有很多相似之处。
### 烹饪步骤中的开源精神
### 烹饪步骤中的自由开源精神
这里是几个烹饪与开源精神相同的地方:
* 我们乐于合作且朝着一个共同的目标前进
* 我们成立社区组织
* 我们成了一个社区
* 由于我们有相同的兴趣与爱好,我们可以更多的了解我们自身与他人,并且可以一同协作
* 我们也会犯错,但我们会从错误中学习,并为了共同的李医生去分享关于错误的经验,从而让彼此避免再犯相同的错误
* 我们也会犯错,但我们会从错误中学习,并为了共同的利益去分享关于错误的经验,从而让彼此避免再犯相同的错误
* 每个人都会贡献自己擅长的事情,因为每个人都有自己的一技之长
* 我们会动员其他人去做出贡献并加入到我们之中
* 虽说协作是关键,但难免会有点混乱
* 每个人都会从中收益
### 烹饪中的开源气息
同很多成功的开源聚会一样,开源烹饪也需要一些协作和组织结构。在每次活动之前,我们会组织所有的成员对菜单进行投票,而不单单是直接给每个人分一角披萨,我们希望真正的作出一道美味,迄今为止我们做过日本、墨西哥、匈牙利、印度等地区风味的美食,限于篇幅就不一一列举了。
就像在生活中,共同烹饪样需要各个成员之间相互的尊重和理解,所以我们也会试着为素食主义者、食物过敏者、或者对某些事物有偏好的人提供针对性的事物。正式开始烹饪之前,在家预先进行些小规模的测试会非常有帮助(乐趣!)
就像在生活中,共同烹饪同样需要各个成员之间相互的尊重和理解,所以我们也会试着为素食主义者、食物过敏者、或者对某些食物有偏好的人提供针对性的食物。正式开始烹饪之前,在家预先进行些小规模的测试会非常有帮助(乐趣!)
可扩展性也很重要在杂货店采购必要的食材很容易就消耗掉3个小时。所以我们使用一些表格工具自然是 LibreOffice Calc来做一些所需要的食材以及相应的成本。
可扩展性也很重要,在杂货店采购必要的食材很容易就消耗掉 3 个小时。所以我们使用一些表格工具(自然是 LibreOffice Calc来做一些所需要的食材以及相应的成本。
我们会同志愿者一起,为每次晚餐准备一个“包管理器”,从而及时的制作出菜单并在问题产生的时候寻找一些独到的解决方法。
我们会同志愿者一起,对于每次晚餐我们都有一个“包维护者”,从而及时的制作出菜单并在问题产生的时候寻找一些独到的解决方法。
虽然不是所有人都是大厨,但是只要给与一些帮助,并比较合理的分配任务和责任,就很容易让每个人都参与其中。某种程度上来说,处理 18kg 的西红柿和 100 个鸡蛋都不会让你觉得是件难事,相信我!唯一的限制是一个烤炉只有四个灶,所以可能是时候对基础设施加大投入了。
发布有时间要求当然要求也不那么严格我们通常会在21:30和01:30之间的相当“灵活”时间内供应主菜即便如此这个时间也是硬性的发布规定。
发布有时间要求,当然要求也不那么严格,我们通常会在 21:30 01:30 之间的相当“灵活”时间内供应主菜,即便如此,这个时间也是硬性的发布规定。
最后,很多开源项目一样,烹饪文档同样有提升的空间。类似洗碟子这样的扫尾工作同样也有可优化的地方。
最后,和很多开源项目一样,烹饪文档同样有提升的空间。类似洗碟子这样的扫尾工作同样也有可优化的地方。
### 未来的一些新功能点
@ -54,21 +52,18 @@
* 购买和烹饪一个价值 700 欧元的大南瓜,并且
* 找家可以为我们采购提供折扣的商店
最后一点,也是开源软件的动机:永远记住,还有一些人们生活在阴影中,他们为没有同等的权限去访问资源而苦恼着。我们如何通过开源的精神去帮助他们呢?
一想到这点,我便期待着下一次的开源烹饪聚会。如果读了上面的东西让你跃跃欲试,并且想自己运作这样的活动,我们非常乐意你能够借鉴我们的想法,甚至照搬一个。我们也乐意你能够参与到我们其中,甚至做一些演讲和问答。
本文最初发表于 [blog.effenberger.org][3],经许可后转载。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/open-source-cooking
作者:[Florian Effenberger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/sd886393)
校对:[校对者ID](https://github.com/校对者ID)
译者:[sd886393](https://github.com/sd886393)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,173 @@
每位 Ubuntu 18.04 用户都应该知道的快捷键
======
了解快捷键能够提升您的生产力。这里有一些实用的 Ubuntu 快捷键助您像专业人士一样使用 Ubuntu。
虽然您可以用键盘和鼠标的组合来使用操作系统,但使用键盘快捷键能节省您的时间。
> 注意:本文中提到的键盘快捷键适用于 Ubuntu 18.04 GNOME 版。 通常,它们中的大多数(或者全部)也适用于其他的 Ubuntu 版本,但我不能够保证。
![Ubuntu keyboard shortcuts][1]
### 实用的 Ubuntu 快捷键
让我们来看一看 Ubuntu GNOME 必备的快捷键吧!通用的快捷键如 `Ctrl+C`(复制)、`Ctrl+V`(粘贴)或者 `Ctrl+S`(保存)不再赘述。
注意Linux 中的 Super 键即键盘上带有 Windows 图标的键,本文中我使用了大写字母,但这不代表你需要按下 `shift` 键,比如,`T` 代表键盘上的 t 键,而不代表 `Shift+t`
#### 1、 Super 键:打开活动搜索界面
使用 `Super` 键可以打开活动菜单。如果你只能在 Ubuntu 上使用一个快捷键,那只能是 `Super` 键。
想要打开一个应用程序?按下 `Super` 键然后搜索应用程序。如果搜索的应用程序未安装,它会推荐来自应用中心的应用程序。
想要看看有哪些正在运行的程序?按下 `Super` 键,屏幕上就会显示所有正在运行的 GUI 应用程序。
想要使用工作区吗?只需按下 `Super` 键,您就可以在屏幕右侧看到工作区选项。
#### 2、 Ctrl+Alt+T打开 Ubuntu 终端窗口
![Ubuntu Terminal Shortcut][2]
*使用 Ctrl+alt+T 来打开终端窗口*
想要打开一个新的终端,您只需使用快捷键 `Ctrl+Alt+T`。这是我在 Ubuntu 中最喜欢的键盘快捷键。甚至在我的许多 FOSS 教程中,当需要打开终端窗口时,我都会提到这个快捷键。
#### 3、 Super+L 或 Ctrl+Alt+L锁屏
当您离开电脑时锁定屏幕,是最基本的安全习惯之一。您可以使用 `Super+L` 快捷键,而不是繁琐地点击屏幕右上角然后选择锁定屏幕选项。
有些系统也会使用 `Ctrl+Alt+L` 键锁定屏幕。
#### 4、 Super+D 或 Ctrl+Alt+D显示桌面
按下 `Super+D` 可以最小化所有正在运行的应用程序窗口并显示桌面。
再次按 `Super+D` 将重新打开所有正在运行的应用程序窗口,像之前一样。
您也可以使用 `Ctrl+Alt+D` 来实现此目的。
#### 5、 Super+A显示应用程序菜单
您可以通过单击屏幕左下角的 9 个点打开 Ubuntu 18.04 GNOME 中的应用程序菜单。 但是一个更快捷的方法是使用 `Super+A` 快捷键。
它将显示应用程序菜单,您可以在其中查看或搜索系统上已安装的应用程序。
您可以使用 `Esc` 键退出应用程序菜单界面。
#### 6、 Super+Tab 或 Alt+Tab在运行中的应用程序间切换
如果您运行的应用程序不止一个,则可以使用 `Super+Tab``Alt+Tab` 快捷键在应用程序之间切换。
按住 `Super` 键同时按下 `Tab` 键,即可显示应用程序切换器。 按住 `Super` 的同时,继续按下 `Tab` 键在应用程序之间进行选择。 当光标在所需的应用程序上时,松开 `Super``Tab` 键。
默认情况下,应用程序切换器从左向右移动。 如果要从右向左移动,可使用 `Super+Shift+Tab` 快捷键。
在这里您也可以用 `Alt` 键代替 `Super` 键。
> 提示:如果有多个应用程序实例,您可以使用 Super+` 快捷键在这些实例之间切换。
#### 7、 Super+箭头:移动窗口位置
<https://player.vimeo.com/video/289091549>
这个快捷键也适用于 Windows 系统。 使用应用程序时,按下 `Super+左箭头`,应用程序将贴合屏幕的左边缘,占用屏幕的左半边。
同样,按下 `Super+右箭头`会使应用程序贴合右边缘。
按下 `Super+上箭头`将最大化应用程序窗口,`Super+下箭头`将使应用程序恢复到其正常的大小。
#### 8、 Super+M切换到通知栏
GNOME 中有一个通知栏,您可以在其中查看系统和应用程序活动的通知,这里也有一个日历。
![Notification Tray Ubuntu 18.04 GNOME][3]
*通知栏*
使用 `Super+M` 快捷键,您可以打开此通知栏。 如果再次按这些键,将关闭打开的通知托盘。
使用 `Super+V` 也可实现相同的功能。
#### 9、 Super+空格:切换输入法(用于多语言设置)
如果您使用多种语言,可能您的系统上安装了多个输入法。例如,我需要在 Ubuntu 上同时使用[印地语][4]和英语,所以我安装了印地语(梵文)输入法以及默认的英语输入法。
如果您也使用多语言设置,则可以使用 `Super+空格` 快捷键快速更改输入法。
#### 10、 Alt+F2运行控制台
这适用于高级用户。 如果要运行快速命令,而不是打开终端并在其中运行命令,则可以使用 `Alt+F2` 运行控制台。
![Alt+F2 to run commands in Ubuntu][5]
*控制台*
当您使用只能在终端运行的应用程序时,这尤其有用。
#### 11、 Ctrl+Q关闭应用程序窗口
如果您有正在运行的应用程序,可以使用 `Ctrl+Q` 快捷键关闭应用程序窗口。您也可以使用 `Ctrl+W` 来实现此目的。
`Alt+F4` 是关闭应用程序窗口更“通用”的快捷方式。
它不适用于一些应用程序,如 Ubuntu 中的默认终端。
#### 12、 Ctrl+Alt+箭头:切换工作区
![Workspace switching][6]
*切换工作区*
如果您是使用工作区的重度用户,可以使用 `Ctrl+Alt+上箭头``Ctrl+Alt+下箭头`在工作区之间切换。
#### 13、 Ctrl+Alt+Del注销
不!在 Linux 中使用著名的快捷键 `Ctrl+Alt+Del` 并不会像在 Windows 中一样打开任务管理器(除非您使用自定义快捷键)。
![Log Out Ubuntu][7]
*注销*
在普通的 GNOME 桌面环境中,您可以使用 `Ctrl+Alt+Del` 键打开关机菜单,但 Ubuntu 并不总是遵循此规范,因此当您在 Ubuntu 中使用 `Ctrl+Alt+Del` 键时,它会打开注销菜单。
### 在 Ubuntu 中使用自定义键盘快捷键
您不是只能使用默认的键盘快捷键,您可以根据需要创建自己的自定义键盘快捷键。
转到“设置->设备->键盘”,您将在这里看到系统的所有键盘快捷键。向下滚动到底部,您将看到“自定义快捷方式”选项。
![Add custom keyboard shortcut in Ubuntu][8]
您需要提供易于识别的快捷键名称、使用快捷键时运行的命令,以及您自定义的按键组合。
### Ubuntu 中你最喜欢的键盘快捷键是什么?
快捷键无穷无尽。如果需要,你可以看一看所有可能的 [GNOME 快捷键][9],看其中有没有你需要用到的快捷键。
您可以学习使用您经常使用应用程序的快捷键,这是很有必要的。例如,我使用 Kazam 进行[屏幕录制][10],键盘快捷键帮助我方便地暂停和开始录像。
您最喜欢、最离不开的 Ubuntu 快捷键是什么?
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-shortcuts/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[XiatianSummer](https://github.com/XiatianSummer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-keyboard-shortcuts.jpeg
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-terminal-shortcut.jpg
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/notification-tray-ubuntu-gnome.jpeg
[4]: https://itsfoss.com/type-indian-languages-ubuntu/
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/console-alt-f2-ubuntu-gnome.jpeg
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/workspace-switcher-ubuntu.png
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/log-out-ubuntu.jpeg
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/custom-keyboard-shortcut.jpg
[9]: https://wiki.gnome.org/Design/OS/KeyboardShortcuts
[10]: https://itsfoss.com/best-linux-screen-recorders/

View File

@ -0,0 +1,110 @@
3 个开源日志聚合工具
======
> 日志聚合系统可以帮助我们进行故障排除和其它任务。以下是三个主要工具介绍。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr)
<ruby>指标聚合<rt>metrics aggregation</rt></ruby><ruby>日志聚合<rt>log aggregation</rt></ruby>有何不同?日志不能包括指标吗?日志聚合系统不能做与指标聚合系统相同的事情吗?
这些是我经常听到的问题。我还看到供应商推销他们的日志聚合系统作为所有可观察问题的解决方案。日志聚合是一个有价值的工具,但它通常对时间序列数据的支持不够好。
指标聚合系统中有几个专门为时间序列数据定制的、颇有价值的功能:<ruby>固定间隔<rt>regular interval</rt></ruby>采集和相应的存储系统。固定间隔允许用户持续地收集实时的数据结果。如果要求日志聚合系统以固定间隔收集指标数据,它也可以做到。但是,它的存储系统没有针对指标聚合系统中典型的查询类型进行优化,使用日志聚合工具中的存储系统处理这些查询将花费更多的资源和时间。
所以,我们知道日志聚合系统可能不适合时间序列数据,但是它有什么好处呢?日志聚合系统是收集事件数据的好地方。这些无规律的活动是非常重要的。最好的例子为 web 服务的访问日志,这些很重要,因为我们想知道什么正在访问我们的系统,什么时候访问的。另一个例子是应用程序错误记录 —— 因为它不是正常的操作记录,所以在故障排除过程中可能很有价值的。
日志记录的一些规则:
* **须**包含时间戳
* **须**格式化为 JSON
* **不**记录无关紧要的事件
* **须**记录所有应用程序的错误
* **可**记录警告错误
* **可**开关的日志记录
* **须**以可读的形式记录信息
* **不**在生产环境中记录信息
* **不**记录任何无法阅读或反馈的内容
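作为示意,一条符合上述规则的日志记录大致如下(字段名仅为示例):

```
$ tail -n 1 app.log
{"timestamp": "2018-09-28T13:31:44+08:00", "level": "error", "service": "web", "message": "database connection refused"}
```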
### 云的成本
当研究日志聚合工具时,云服务可能看起来是一个有吸引力的选择。然而,这可能会带来巨大的成本。当跨数百或数千台主机和应用程序聚合时,日志数据是大量的。在基于云的系统中,数据的接收、存储和检索是昂贵的。
以一个真实的系统来参考,大约 500 个节点和几百个应用程序的集合每天产生 200GB 的日志数据。这个系统可能还有改进的空间,但是在许多 SaaS 产品中,即使将它减少一半,每月也要花费将近 10000 美元。而这通常仅保留 30 天,如果你想查看一年来的趋势数据,就不可能了。
这并不是说不要使用这些基于云的系统,尤其是对于较小的组织,它们可能是非常有价值的。这里的目的是指出可能会有很大的成本,当这些成本很高时,就可能令人非常的沮丧。本文的其余部分将集中讨论自托管的开源和商业解决方案。
### 工具选择
#### ELK
[ELK][1],即 Elasticsearch、Logstash 和 Kibana 简称,是最流行的开源日志聚合工具。它被 Netflix、Facebook、微软、LinkedIn 和思科使用。这三个组件都是由 [Elastic][2] 开发和维护的。[Elasticsearch][3] 本质上是一个 NoSQL 数据库,以 Lucene 搜索引擎实现的。[Logstash][4] 是一个日志管道系统,可以接收数据,转换数据,并将其加载到像 Elasticsearch 这样的应用中。[Kibana][5] 是 Elasticsearch 之上的可视化层。
几年前,引入了 Beats 。Beats 是数据采集器。它们简化了将数据运送到 Logstash 的过程。用户不需要了解每种日志的正确语法,而是可以安装一个 Beats 来正确导出 NGINX 日志或 Envoy 代理日志,以便在 Elasticsearch 中有效地使用它们。
安装生产环境级 ELK 套件时,可能会包括其他几个部分,如 [Kafka][6]、[Redis][7] 和 [NGINX][8]。此外,用 Fluentd 替换 Logstash 也很常见,我们将在后面讨论。这个系统操作起来很复杂,这在早期导致了很多问题和抱怨。目前,这些问题基本上已经被修复,不过它仍然是一个复杂的系统,如果你使用少部分的功能,建议不要使用它了。
也就是说,有其它可用的服务,所以你不必苦恼于此。可以使用 [Logz.io][9],但是如果你有很多数据,它的标价有点高。当然,你的规模可能比较小,没有很多数据。如果你买不起 Logz.io你可以看看 [AWS Elasticsearch Service][10]ES。ES 是 Amazon Web ServicesAWS提供的一项服务它很容易就可以让 Elasticsearch 马上工作起来。它还拥有使用 Lambda 和 S3 将所有 AWS 日志记录到 ES 的工具。这是一个更便宜的选择,但是需要一些管理操作,并有一些功能限制。
ELK 套件的母公司 Elastic [提供][11] 一款更强大的产品,它使用<ruby>开源核心<rt>open core</rt></ruby>模式,为分析工具和报告提供了额外的选项。它也可以在谷歌云平台或 AWS 上托管。由于这种工具和托管平台的组合提供了比大多数 SaaS 选项更加便宜,这也许是最好的选择,并且很有用。该系统可以有效地取代或提供 [安全信息和事件管理][12]SIEM系统的功能。
ELK 套件通过 Kibana 提供了很好的可视化工具但是它缺少警报功能。Elastic 在付费的 X-Pack 插件中提供了警报功能但是在开源系统没有内置任何功能。Yelp 已经开发了一种解决这个问题的方法,[ElastAlert][13],不过还有其他方式。这个额外的软件相当健壮,但是它增加了已经复杂的系统的复杂性。
#### Graylog
[Graylog][14] 最近越来越受欢迎,但它是在 2010 年由 Lennart Koopmann 创建并开发的。两年后,一家公司以同样的名字诞生了。尽管它的使用者越来越多,但仍然远远落后于 ELK 套件。这也意味着它具有较少的社区开发特征,但是它可以使用与 ELK 套件相同的 Beats 。由于 Graylog Collector Sidecar 使用 [Go][15] 编写,所以 Graylog 在 Go 社区赢得了赞誉。
Graylog 使用 Elasticsearch、[MongoDB][16] 和底层的 Graylog Server 。这使得它像 ELK 套件一样复杂也许还要复杂一些。然而Graylog 附带了内置于开源版本中的报警功能,以及其他一些值得注意的功能,如流、消息重写和地理定位。
流功能可以允许数据在被处理时被实时路由到特定的 Stream。使用此功能用户可以在单个 Stream 中看到所有数据库错误,在另外的 Stream 中看到 web 服务器错误。当添加新项目或超过阈值时,甚至可以基于这些 Stream 提供警报。延迟可能是日志聚合系统中最大的问题之一Stream 消除了 Graylog 中的这一问题。一旦日志进入,它就可以通过 Stream 路由到其他系统,而无需完全处理好。
消息重写功能使用开源规则引擎 [Drools][17] 。允许根据用户定义的规则文件评估所有传入的消息,从而可以删除消息(称为黑名单)、添加或删除字段或修改消息。
Graylog 最酷的功能或许是它的地理定位功能,它支持在地图上绘制 IP 地址。这是一个相当常见的功能,在 Kibana 也可以这样使用,但是它增加了很多价值 —— 特别是如果你想将它用作 SIEM 系统。地理定位功能在系统的开源版本中提供。
如果你需要的话Graylog 公司会提供对开源版本的收费支持。它还为其企业版提供了一个开源核心模式,提供存档、审计日志记录和其他支持。其它提供支持或托管服务的不太多,如果你不需要 Graylog 公司的,你可以托管。
#### Fluentd
[Fluentd][18] 是 [Treasure Data][19] 开发的,[CNCF][20] 已经将它作为一个孵化项目。它是用 C 和 Ruby 编写的,并被 [AWS][21] 和 [Google Cloud][22] 所推荐。Fluentd 已经成为许多系统中 logstach 的常用替代品。它可以作为一个本地聚合器,收集所有节点日志并将其发送到中央存储系统。它不是日志聚合系统。
它使用一个强大的插件系统,提供不同数据源和数据输出的快速和简单的集成功能。因为有超过 500 个插件可用,所以你的大多数用例都应该包括在内。如果没有,这听起来是一个为开源社区做出贡献的机会。
Fluentd 由于占用内存少(只有几十兆字节)和高吞吐量特性,是 Kubernetes 环境中的常见选择。在像 [Kubernetes][23] 这样的环境中,每个 pod 都有一个 Fluentd 附属件 ,内存消耗会随着每个新 pod 的创建而线性增加。在这种情况下,使用 Fluentd 将大大降低你的系统利用率。这对于 Java 开发的工具来说是一个常见的问题,这些工具旨在为每个节点运行一个工具,而内存开销并不是主要问题。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/open-source-log-aggregation-tools
作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[1]: https://www.elastic.co/webinars/introduction-elk-stack
[2]: https://www.elastic.co/
[3]: https://www.elastic.co/products/elasticsearch
[4]: https://www.elastic.co/products/logstash
[5]: https://www.elastic.co/products/kibana
[6]: http://kafka.apache.org/
[7]: https://redis.io/
[8]: https://www.nginx.com/
[9]: https://logz.io/
[10]: https://aws.amazon.com/elasticsearch-service/
[11]: https://www.elastic.co/cloud
[12]: https://en.wikipedia.org/wiki/Security_information_and_event_management
[13]: https://github.com/Yelp/elastalert
[14]: https://www.graylog.org/
[15]: https://opensource.com/tags/go
[16]: https://www.mongodb.com/
[17]: https://www.drools.org/
[18]: https://www.fluentd.org/
[19]: https://www.treasuredata.com/
[20]: https://www.cncf.io/
[21]: https://aws.amazon.com/blogs/aws/all-your-data-fluentd/
[22]: https://cloud.google.com/logging/docs/agent/
[23]: https://opensource.com/resources/what-is-kubernetes

View File

@ -0,0 +1,46 @@
如何在 Ubuntu 16.04 强制 APT 包管理器使用 IPv4
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/ipv4-720x340.png)
**APT****A**dvanced **P**ackage **T**ool 的缩写,是基于 Debian 的系统的默认包管理器。我们可以使用 APT 安装、更新、升级和删除应用程序。最近,我一直遇到一个奇怪的错误。每当我尝试更新我的 Ubuntu 16.04 时,我都会收到此错误:**“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”**,同时更新流程会卡住很长时间。我的网络连接没问题,我可以 ping 通所有网站,包括 Ubuntu 官方网站。在谷歌上搜索了一番后,我意识到 Ubuntu 镜像站点有时无法通过 IPv6 访问。在我强制让 APT 包管理器在更新系统时使用 IPv4 代替 IPv6 访问 Ubuntu 镜像站点后,此问题得以解决。如果你遇到过此错误,可以按照以下说明解决。
### 强制 APT 包管理器在 Ubuntu 16.04 中使用 IPv4
要在更新和升级 Ubuntu 16.04 LTS 系统时强制 APT 使用 IPv4 代替 IPv6只需使用以下命令
```
$ sudo apt-get -o Acquire::ForceIPv4=true update
$ sudo apt-get -o Acquire::ForceIPv4=true upgrade
```
瞧!这次更新很快就完成了。
你还可以使用以下命令在 `/etc/apt/apt.conf.d/99force-ipv4` 中添加以下行,以便将来对所有 `apt-get` 事务保持持久性:
```
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
```
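可以查看该文件来确认配置已经写入:

```
$ cat /etc/apt/apt.conf.d/99force-ipv4
Acquire::ForceIPv4 "true";
```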
**免责声明:**
我不知道最近是否有人遇到这个问题,但我今天在我的 Ubuntu 16.04 LTS 虚拟机中遇到了至少四、五次这样的错误,我按照上面的方法解决了这个问题。我不确定这是不是推荐的解决方案。请浏览 Ubuntu 论坛来确保此方法合法。由于这只是一个虚拟机,我只将它用于测试和学习目的,我不介意这种方法的真实性。请自行承担使用风险。
希望这有帮助。还有更多的好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-ubuntu-16-04/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/

View File

@ -1,17 +1,17 @@
使用 `top` 命令了解 Fedora 的内存使用情况
使用 top 命令了解 Fedora 的内存使用情况
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/memory-top-816x345.jpg)
如果你使用过 `top` 命令来查看 Fedora 系统中的内存使用情况,你可能会惊讶,显示的数值看起来比系统可用的内存消耗更多。下面会详细介绍内存使用情况以及如何理解这些数据。
如果你使用过 `top` 命令来查看 Fedora 系统中的内存使用情况,你可能会惊讶,看起来消耗的数量比系统可用的内存更多。下面会详细介绍内存使用情况以及如何理解这些数据。
### 内存实际使用情况
操作系统对内存的使用方式并不是太通俗易懂,而是有很多不为人知的巧妙方式。通过这些方式,可以在无需用户干预的情况下,让操作系统更有效地使用内存。
操作系统对内存的使用方式并不是太通俗易懂。事实上,其背后有很多不为人知的巧妙技术在发挥着作用。通过这些方式,可以在无需用户干预的情况下,让操作系统更有效地使用内存。
大多数应用程序都不是系统自带的,但每个应用程序都依赖于安装在系统中的库中的一些函数集。在 Fedora 中RPM 包管理系统能够确保在安装应用程序时也会安装所依赖的库。
当应用程序运行时,操作系统并不需要将它要用到的所有信息都加载到物理内存中。而是会为存放代码的存储构建一个映射,称为虚拟内存。操作系统只把需要的部分加载到内存中,当某一个部分不再需要后,这一部分内存就会被释放掉。
当应用程序运行时,操作系统并不需要将它要用到的所有信息都加载到物理内存中。而是会为存放代码的存储空间构建一个映射,称为虚拟内存。操作系统只把需要的部分加载到内存中,当某一个部分不再需要后,这一部分内存就会被释放掉。
这意味着应用程序可以映射大量的虚拟内存,而使用较少的系统物理内存。特殊情况下,映射的虚拟内存甚至可以比系统实际可用的物理内存更多!而且在操作系统中这种情况也并不少见。
@ -21,25 +21,25 @@
### 使用 `top` 命令查看内存使用量
如果你还没有使用过 `top` 命令,可以打开终端直接执行查看。使用 **Shift + M** 可以按照内存使用量来进行排序。下图是在 Fedora Workstation 中执行的结果,在你的机器上显示的结果可能会略有不同:
如果你还没有使用过 `top` 命令,可以打开终端直接执行查看。使用 `Shift + M` 可以按照内存使用量来进行排序。下图是在 Fedora Workstation 中执行的结果,在你的机器上显示的结果可能会略有不同:
![](https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-09-17-14-23-17.png)
主要通过一下三列来查看内存使用情况VIRTRES 和 SHR。目前以 KB 为单位显示相关数值。
主要通过以下三列来查看内存使用情况:`VIRT`、`RES` 和 `SHR`。目前以 KB 为单位显示相关数值。
VIRT 列代表该进程映射的虚拟内存。如上所述,虚拟内存不是实际消耗的物理内存。例如, GNOME Shell 进程 gnome-shell 实际上没有消耗超过 3.1 GB 的物理内存,但它对很多更低或更高级的库都有依赖,系统必须对每个库都进行映射,以确保在有需要时可以加载这些库。
`VIRT` 列代表该进程映射的<ruby>虚拟<rt>virtual</rt></ruby>内存。如上所述,虚拟内存不是实际消耗的物理内存。例如, GNOME Shell 进程 `gnome-shell` 实际上没有消耗超过 3.1 GB 的物理内存,但它对很多更低或更高级的库都有依赖,系统必须对每个库都进行映射,以确保在有需要时可以加载这些库。
RES 列代表应用程序消耗了多少实际(驻留)内存。对于 GNOME Shell 大约是 180788 KB。例子中的系统拥有大约 7704 MB 的物理内存,因此内存使用率显示为 2.3%。
`RES` 列代表应用程序消耗了多少实际(<ruby>驻留<rt>resident</rt></ruby>)内存。对于 GNOME Shell 大约是 180788 KB。例子中的系统拥有大约 7704 MB 的物理内存,因此内存使用率显示为 2.3%。
但根据 SHR 列显示,其中至少有 88212 KB 是共享内存,这部分内存可能是其它应用程序也在使用的库函数。这意味着 GNOME Shell 本身大约有 92 MB 内存不与其他进程共享。需要注意的是,上述例子中的其它程序也共享了很多内存。在某些应用程序中,共享内存在内存使用量中会占很大的比例。
但根据 `SHR` 列显示,其中至少有 88212 KB 是<ruby>共享<rt>shared</rt></ruby>内存,这部分内存可能是其它应用程序也在使用的库函数。这意味着 GNOME Shell 本身大约有 92 MB 内存不与其他进程共享。需要注意的是,上述例子中的其它程序也共享了很多内存。在某些应用程序中,共享内存在内存使用量中会占很大的比例。
It is worth mentioning that processes sometimes communicate with each other through memory, which is also shared, but the `top` tool may not be able to detect it, so the explanation above is not necessarily accurate. (I am not sure how to translate this sentence; could the proofreader please take a look? Thanks!)
It is worth mentioning that processes sometimes communicate with each other through memory, which is also shared, but a tool like `top` may not be able to detect it, so the explanation above is not necessarily exact.
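If you want to capture the same columns non-interactively, for example to log them over time, `top` also has a batch mode. A small sketch, assuming the procps-ng `top` shipped with Fedora (the `-o` sort option needs a reasonably recent version):
```
# one snapshot, sorted by memory usage, trimmed to the top of the list
$ top -b -n 1 -o %MEM | head -n 12
```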
### About swap
The system can also store data through swap space (on a hard disk, for example), although reading and writing it is comparatively slow. When physical memory gradually fills up, the operating system finds parts of memory that are not needed for the moment and writes them out to the swap area until they are needed again.
So if swap usage stays consistently high, it is a sign that the system's physical memory is oversubscribed. Although a wrong memory allocation can also cause this, if the situation occurs frequently you should consider adding physical memory or limiting which programs run.
So if swap usage stays consistently high, it is a sign that the system's physical memory is oversubscribed. Sometimes a misbehaving application can also cause this, but if the situation occurs frequently, you should consider adding physical memory or limiting which programs run.
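A quick way to check whether swap is under pressure is the `free` command, which summarizes physical memory and swap in one view; `swapon --show` (from util-linux) lists the active swap devices:
```
$ free -h          # human-readable totals for memory and swap
$ swapon --show    # active swap devices and how much of each is used
```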
Thanks to [Stig Nygaard][1] for the image on [Flickr][2] (CC BY 2.0).
@ -50,7 +50,7 @@ via: https://fedoramagazine.org/understand-fedora-memory-usage-top/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,74 @@
CPU Power Manager Control And Manage CPU Frequency In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Manage-CPU-Frequency-720x340.jpeg)
If you are a laptop user, you probably know that power management on Linux isn't really as good as on other OSes. While there are tools like **TLP**, [**Laptop Mode Tools** and **powertop**][1] to help reduce power consumption, overall battery life on Linux isn't as good as on Windows or Mac OS. Another way to reduce power consumption is to limit the frequency of your CPU. While this has always been doable, it generally requires complicated terminal commands, making it inconvenient. Fortunately, there's a GNOME extension that helps you easily set and manage your CPU's frequency: **CPU Power Manager**. CPU Power Manager uses the **intel_pstate** frequency scaling driver (supported by almost every Intel CPU) to control and manage CPU frequency in your GNOME desktop.
Another reason to use this extension is to reduce heating in your system. There are many systems out there which can get uncomfortably hot in normal usage. Limiting your CPU's frequency could reduce heating. It will also decrease the wear and tear on your CPU and other components.
### Installing CPU Power Manager
First, go to the [**extensions page**][2], and install the extension.
Once the extension has been installed, you'll get a CPU icon at the right side of the GNOME top bar. Click the icon, and you get an option to install the extension:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-icon.png)
If you click **“Attempt Installation”**, you'll get a password prompt. The extension needs root privileges to add the policykit rule for controlling CPU frequency. This is what the prompt looks like:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-1.png)
Type in your password and click **“Authenticate”**, and that finishes the installation. The last action adds a policykit file **mko.cpupower.setcpufreq.policy** at **/usr/share/polkit-1/actions**.
After the installation is complete, if you click the CPU icon at the top right, you'll get something like this:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager.png)
### Features
* **See the current CPU frequency:** Obviously, you can use this window to see the frequency that your CPU is running at.
* **Set maximum and minimum frequency:** With this extension, you can set maximum and minimum frequency limits in terms of percentage of max frequency. Once these limits are set, the CPU will operate only in this range of frequencies.
  * **Turn Turbo Boost On and Off:** This is my favorite feature. Most Intel CPUs have a “Turbo Boost” feature, whereby one of the cores of the CPU is boosted past the normal maximum frequency for extra performance. While this can make your system more performant, it also increases power consumption a lot. So if you aren't doing anything intensive, it's nice to be able to turn off Turbo Boost and save power. In fact, in my case, I have Turbo Boost turned off most of the time.
  * **Make Profiles:** You can make profiles with max and min frequency that you can turn on/off easily, instead of fiddling with max and min frequencies every time (a rough sysfs equivalent of these settings is sketched after this list).
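For reference, the knobs this extension manipulates are the ones the intel_pstate driver exposes through sysfs. Below is a rough manual equivalent, assuming the intel_pstate driver is active on your system; the percentage values are only examples. The policykit rule mentioned above is presumably what lets the extension perform writes like these without a terminal:
```
# confirm which scaling driver is in use (should print "intel_pstate")
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# keep the CPU between 20% and 70% of its maximum frequency
$ echo 20 | sudo tee /sys/devices/system/cpu/intel_pstate/min_perf_pct
$ echo 70 | sudo tee /sys/devices/system/cpu/intel_pstate/max_perf_pct

# turn Turbo Boost off (1 = turbo disabled, 0 = turbo enabled)
$ echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```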
### Preferences
You can also customize the extension via the preferences window:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences.png)
As you can see, you can set whether the CPU frequency is displayed, and whether to display it in **MHz** or **GHz**.
You can also edit and create/delete profiles:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences-1.png)
You can set maximum and minimum frequencies, and turbo boost for each profile.
### Conclusion
As I said in the beginning, power management on Linux is not the best, and many people are always looking to eke out a few more minutes from their Linux laptops. If you are one of them, check out this extension. This is an unconventional method to save power, but it does work. I certainly love this extension, and have been using it for a few months now.
What do you think about this extension? Put your thoughts in the comments below!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequency-in-linux/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://www.ostechnix.com/improve-laptop-battery-performance-linux/
[2]: https://extensions.gnome.org/extension/945/cpu-power-manager/

View File

@ -1,234 +0,0 @@
Translating by qhwdw
# Caffeinated 6.828: Lab 2: Memory Management
### Introduction
In this lab, you will write the memory management code for your operating system. Memory management has two components.
The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called pages. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory.
The second component of memory management is virtual memory, which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU's page tables according to a specification we provide.
### Getting started
In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called lab2 based on our lab2 branch, origin/lab2:
```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab2 origin/lab2
Branch lab2 set up to track remote branch refs/remotes/origin/lab2.
Switched to a new branch "lab2"
athena%
```
You will now need to merge the changes you made in your lab1 branch into the lab2 branch, as follows:
```
athena% git merge lab1
Merge made by recursive.
kern/kdebug.c | 11 +++++++++--
kern/monitor.c | 19 +++++++++++++++++++
lib/printfmt.c | 7 +++----
3 files changed, 31 insertions(+), 6 deletions(-)
athena%
```
Lab 2 contains the following new source files, which you should browse through:
- inc/memlayout.h
- kern/pmap.c
- kern/pmap.h
- kern/kclock.h
- kern/kclock.c
memlayout.h describes the layout of the virtual address space that you must implement by modifying pmap.c. memlayout.h and pmap.h define the PageInfo structure that you'll use to keep track of which pages of physical memory are free. kclock.c and kclock.h manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in pmap.c needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works.
Pay particular attention to memlayout.h and pmap.h, since this lab requires you to use and understand many of the definitions they contain. You may want to review inc/mmu.h, too, as it also contains a number of definitions that will be useful for this lab.
Before beginning the lab, don't forget to add exokernel to get the 6.828 version of QEMU.
### Hand-In Procedure
When you are ready to hand in your lab code and write-up, add your answers-lab2.txt to the Git repository, commit your changes, and then run make handin.
```
athena% git add answers-lab2.txt
athena% git commit -am "my answer to lab2"
[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-)
athena% make handin
```
### Part 1: Physical Page Management
The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with page granularity so that it can use the MMU to map and protect each piece of allocated memory.
You'll now write the physical page allocator. It keeps track of which pages are free with a linked list of struct PageInfo objects, each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables.
> Exercise 1
>
> In the file kern/pmap.c, you must implement code for the following functions (probably in the order given).
>
> boot_alloc()
>
> mem_init() (only up to the call to check_page_free_list())
>
> page_init()
>
> page_alloc()
>
> page_free()
>
> check_page_free_list() and check_page_alloc() test your physical page allocator. You should boot JOS and see whether check_page_alloc() reports success. Fix your code so that it passes. You may find it helpful to add your own assert()s to verify that your assumptions are correct.
This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes.
### Part 2: Virtual Memory
Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely segmentation and page translation.
> Exercise 2
>
> Look at chapters 5 and 6 of the Intel 80386 Reference Manual, if you haven't done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). We recommend that you also skim the sections about segmentation; while JOS uses paging for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of them.
### Virtual, Linear, and Physical Addresses
In x86 terminology, a virtual address consists of a segment selector and an offset within the segment. A linear address is what you get after segment translation but before page translation. A physical address is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM.
![Screenshot 2018-09-04 11.22.20](/Users/qhwdw/Desktop/屏幕快照 2018-09-04 11.22.20.png)
Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual memory layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of virtual memory.
> Exercise 3
>
> While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU monitor commands from the lab tools guide, especially the xp command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console).
>
> Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data.
>
> Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual memory are mapped and with what permissions.
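As a concrete sketch of Exercise 3: the kernel from lab 1 is loaded at physical address 0x00100000 but linked at virtual address 0xf0100000 (as noted above), so inspecting both locations should show identical words. Commands only; the actual values printed depend on your build:
```
(qemu) xp/4x 0x00100000      # physical view, from the QEMU monitor
(gdb) x/4x 0xf0100000        # virtual view, from GDB; output should match
```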
From code executing on the CPU, once we're in protected mode (which we entered first thing in boot/boot.S), there's no way to directly use a linear or physical address. All memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses.
The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type uintptr_t represents opaque virtual addresses, and physaddr_t represents physical addresses. Both these types are really just synonyms for 32-bit integers (uint32_t), so the compiler won't stop you from assigning one type to another! Since they are integer types (not pointers), the compiler will complain if you try to dereference them.
The JOS kernel can dereference a uintptr_t by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a physaddr_t to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended.
To summarize:
| C type | Address type |
| ------------ | ------------ |
| `T*` | Virtual |
| `uintptr_t` | Virtual |
| `physaddr_t` | Physical |
> Question
>
> Assuming that the following JOS kernel code is correct, what type should variable x have, uintptr_t or physaddr_t?
>
> ![Screenshot 2018-09-04 11.48.54](/Users/qhwdw/Desktop/屏幕快照 2018-09-04 11.48.54.png)
>
The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel, like any other software, cannot bypass virtual memory translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use KADDR(pa) to do that addition.
The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by boot_alloc() are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use PADDR(va) to do that subtraction.
### Reference counting
In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the pp_ref field of the struct PageInfo corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should equal the number of times the physical page appears below UTOP in all page tables (the mappings above UTOP are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them). We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages.
Be careful when using page_alloc. The page it returns will always have a reference count of 0, so pp_ref should be incremented as soon as you've done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, page_insert) and sometimes the function calling page_alloc must do it directly.
### Page Table Management
Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed.
> Exercise 4
>
> In the file kern/pmap.c, you must implement code for the following functions.
>
> pgdir_walk()
>
> boot_map_region()
>
> page_lookup()
>
> page_remove()
>
> page_insert()
>
> check_page(), called from mem_init(), tests your page table management routines. You should make sure it reports success before proceeding.
### Part 3: Kernel Address Space
JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol ULIM in inc/memlayout.h, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time.
You'll find it helpful to refer to the JOS memory layout diagram in inc/memlayout.h both for this part and for later labs.
### Permissions and Fault Isolation
Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data.
The user environment will have no permission to access any of the memory above ULIM, while the kernel will be able to read and write this memory. For the address range [UTOP,ULIM), both the kernel and the user environment have the same permission: they can read but not write this address range. This range of addresses is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below UTOP is for the user environment to use; the user environment will set permissions for accessing this memory.
### Initializing the Kernel Address Space
Now you'll set up the address space above UTOP: the kernel part of the address space. inc/memlayout.h shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear-to-physical mappings.
> Exercise 5
>
> Fill in the missing code in mem_init() after the call to check_page().
Your code should now pass the check_kern_pgdir() and check_page_installed_pgdir() checks.
> Question
>
> 1. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible:
>
> | Entry | Base Virtual Address | Points to (logically): |
> | ----- | -------------------- | ---------------------- |
> | 1023  | ?                    | Page table for top 4MB of phys memory |
> | 1022  | ?                    | ?                      |
> | .     | ?                    | ?                      |
> | .     | ?                    | ?                      |
> | .     | ?                    | ?                      |
> | 2     | 0x00800000           | ?                      |
> | 1     | 0x00400000           | ?                      |
> | 0     | 0x00000000           | [see next question]    |
>
> 2. (From Lecture 3) We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory?
>
> 3. What is the maximum amount of physical memory that this operating system can support? Why?
>
> 4. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down?
>
> 5. Revisit the page table setup in kern/entry.S and kern/entrypgdir.c. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary?
### Address Space Layout Alternatives
The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the upper part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as virtual 8086 mode, is “hard-wired” in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there.
It is even possible, though much more difficult, to design the kernel so as not to have to reserve any fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the entire 4GB of virtual address space, while still fully protecting the kernel from these processes and protecting different processes from each other!
> Challenge! Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system.
This completes the lab. Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions in answers-lab2.txt. Commit your changes (including adding answers-lab2.txt) and type make handin in the lab directory to hand in your lab.
------
via: <https://sipb.mit.edu/iap/6.828/lab/lab2/>
作者:[MIT](https://sipb.mit.edu/iap/6.828/lab/lab2/)
译者:[译者ID](https://github.com/%E8%AF%91%E8%80%85ID)
校对:[校对者ID](https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,80 @@
How to Install Cinnamon Desktop on Ubuntu
======
**This tutorial shows you how to install Cinnamon desktop environment on Ubuntu.**
[Cinnamon][1] is the default desktop environment of [Linux Mint][2]. Unlike the Unity desktop environment in Ubuntu, Cinnamon is a more traditional but elegant-looking desktop environment, with a bottom panel, app menu, etc. Many Windows migrants [prefer Linux Mint over Ubuntu][3] because of the Cinnamon desktop and its Windows-resembling user interface.
Now, you don't need to [install Linux Mint][4] just to try Cinnamon. In this tutorial, I'll show you **how to install Cinnamon in Ubuntu 18.04, 16.04 and 14.04**.
You should note something before you install Cinnamon desktop on Ubuntu. Sometimes, installing additional desktop environments leads to conflict between the desktop environments. This may result in a broken session, broken applications and features etc. This is why you should be careful in making this choice.
### How to Install Cinnamon on Ubuntu
![How to install cinnamon desktop on Ubuntu Linux][5]
There used to be a sort-of-official PPA from the Cinnamon team for Ubuntu, but it doesn't exist anymore. Don't lose heart. There is an unofficial PPA available, and it works perfectly. This PPA contains the latest Cinnamon version.
Open a terminal and use the following commands:
```
sudo add-apt-repository ppa:embrosyn/cinnamon
sudo apt update && sudo apt install cinnamon
```
It will download files of around 150 MB in size (if I remember correctly). This also provides you with Nemo (Nautilus fork) and Cinnamon Control Center. This bonus stuff gives a closer feel of Linux Mint.
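Before logging out, a quick sanity check from the terminal can confirm the install; after logging back in you can also verify which desktop is active. A minimal sketch (`XDG_CURRENT_DESKTOP` is the standard freedesktop variable):
```
$ cinnamon --version           # prints the installed Cinnamon version
$ echo $XDG_CURRENT_DESKTOP    # after re-login, shows the active desktop
```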
### Using Cinnamon desktop environment in Ubuntu
Once you have installed Cinnamon, log out of the current session. At the login screen, click on the Ubuntu symbol beside the username:
![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Change_Desktop_Environment_Ubuntu.jpeg)
When you do this, it will give you all the desktop environments available for your system. No need to tell you that you have to choose Cinnamon:
![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Install_Cinnamon_Ubuntu.jpeg)
Now you should be logged in to Ubuntu with the Cinnamon desktop environment. Remember, you can do the same to switch back to Unity. Here is a quick screenshot of what it looked like to run **Cinnamon in Ubuntu**:
![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Cinnamon_Ubuntu_1404.jpeg)
Looks just like Linux Mint, doesn't it? I didn't find any compatibility issues between Cinnamon and Unity. I switched back and forth between Unity and Cinnamon and both worked perfectly.
#### Remove Cinnamon from Ubuntu
It is understandable that you might want to uninstall Cinnamon. We will use PPA Purge for this purpose. Let's install PPA Purge first:
```
sudo apt-get install ppa-purge
```
Afterward, use the following command to purge the PPA:
```
sudo ppa-purge ppa:embrosyn/cinnamon
```
As a related read, I suggest you learn more about [how to remove PPAs in Linux][6].
I hope this post helps you to **install Cinnamon in Ubuntu**. Do share your experience with Cinnamon.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-cinnamon-on-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: http://cinnamon.linuxmint.com/
[2]: http://www.linuxmint.com/
[3]: https://itsfoss.com/linux-mint-vs-ubuntu/
[4]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/install-cinnamon-ubuntu.png
[6]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/

View File

@ -1,3 +1,4 @@
Translating by bayar199468
7 Best eBook Readers for Linux
======
**Brief:** In this article, we are covering some of the best ebook readers for Linux. These apps give a better reading experience and some will even help in managing your ebooks.

View File

@ -1,182 +0,0 @@
How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10
============================================================
by [Pradeep Kumar][1] · Published November 29, 2017 · Updated November 29, 2017
[![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2]
Wireshark is a free and open source, cross-platform, GUI-based network packet analyzer that is available for Linux, Windows, macOS, Solaris, etc. It captures network packets in real time & presents them in a human-readable format. Wireshark allows us to monitor network packets down to a microscopic level. Wireshark also has a command line utility called tshark that performs the same functions as Wireshark, but through the terminal & not through a GUI.
Wireshark can be used for network troubleshooting, analysis, software & communication protocol development, & also for educational purposes. Wireshark uses a library called pcap for capturing network packets.
Wireshark comes with a lot of features & some of those features are:
* Support for hundreds of protocols for inspection,
* Ability to capture packets in real time & save them for later offline analysis,
* A number of filters for analyzing data,
* Data captured can be compressed & uncompressed on the fly,
* Various file formats for data analysis supported; output can also be saved to XML, CSV, and plain text formats,
* Data can be captured from a number of interfaces like Ethernet, Wi-Fi, Bluetooth, USB, Frame Relay, token rings, etc.
In this article, we will discuss how to install Wireshark on Ubuntu/Debian machines & will also learn to use Wireshark for capturing network packets.
#### Installation of Wireshark on Ubuntu 16.04 / 17.10
Wireshark is available in the default Ubuntu repositories & can simply be installed using the following command. But you might not get the latest version of Wireshark that way.
```
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```
So to install the latest version of Wireshark, we have to enable or configure the official Wireshark repository.
Use the commands below one after another to configure the repository and to install the latest version of the Wireshark utility.
```
linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```
Once Wireshark is installed, execute the command below so that non-root users can capture live packets on the interfaces:
```
linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
```
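You can verify that the capability bits were actually applied with `getcap` from the libcap utilities; the exact output format varies slightly between versions:
```
$ getcap /usr/bin/dumpcap
/usr/bin/dumpcap = cap_net_admin,cap_net_raw+eip
```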
#### Installation of Wireshark on Debian 9
The Wireshark package and its dependencies are already present in the default Debian 9 repositories, so to install the latest stable version of Wireshark on Debian 9, use the following command:
```
linuxtechi@nixhome:~$ sudo apt-get update
linuxtechi@nixhome:~$ sudo apt-get install wireshark -y
```
During the installation, it will prompt us to configure dumpcap for non-superusers,
Select yes and then hit enter.
[![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3]
Once the installation is completed, execute the command below so that non-root users can also capture live packets on the interfaces.
```
linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap
```
We can also use the latest source package to install Wireshark on Ubuntu/Debian & many other Linux distributions.
#### Installing Wireshark using source code on Debian / Ubuntu Systems
First, download the latest source package (which is 2.4.2 at the time of writing this article) using the following command:
```
linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz
```
Next, extract the package & enter the extracted directory:
```
linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp
linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2
```
Now we will compile the code with the following commands,
```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make
```
Lastly, install the compiled binaries to finish installing Wireshark on the system:
```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig
```
Upon installation, a separate group for Wireshark will also be created. We will now add our user to that group so that it can work with Wireshark; otherwise you might get a permission denied error when starting Wireshark.
To add the user to the wireshark group, execute the following command:
```
linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi
```
Now we can start Wireshark either from the GUI menu or from the terminal with this command:
```
linuxtechi@nixhome:~$ wireshark
```
#### Access Wireshark on Debian 9 System
[![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4]
Click on Wireshark icon
[![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5]
#### Access Wireshark on Ubuntu 16.04 / 17.10
[![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6]
Click on Wireshark icon
[![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7]
#### Capturing and Analyzing packets
Once the wireshark has been started, we should be presented with the wireshark window, example is shown above for Ubuntu and Debian system.
[![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8]
All these are the interfaces from where we can capture the network packets. Based on the interfaces you have on your system, this screen might be different for you.
We are selecting enp0s3 to capture the network traffic for that interface. After selecting the interface, network packets for all the devices on our network start to populate (refer to the screenshot below):
[![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9]
The first time we see this screen, we might be overwhelmed by the data presented & wonder how to sort it out, but worry not: one of the best features of Wireshark is its filters.
We can sort/filter the data based on IP address, port number, etc., can also use source & destination filters, packet size & so on, & can combine two or more filters together to create more comprehensive searches. We can either write our filters in the Apply a Display Filter tab, or select one of the already created rules. To select a pre-built filter, click on the flag icon next to the Apply a Display Filter tab.
[![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10]
We can also filter data based on the color coding. By default, light purple is TCP traffic, light blue is UDP traffic, and black identifies packets with errors. To see what these codes mean, click View -> Coloring Rules; we can also change these codes.
[![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11]
After we have the results that we need, we can click on any of the captured packets to get more details about that packet; this will show all the data of that network packet.
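The same kind of filtering also works from the terminal with the tshark utility mentioned earlier; the display filter syntax is the same as in the GUI. A small sketch, assuming the same enp0s3 interface:
```
# capture 20 packets on enp0s3, displaying only HTTP or DNS traffic
$ tshark -i enp0s3 -c 20 -Y "http || dns"
```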
Wireshark is an extremely powerful tool that takes some time to get used to & gain command over; this tutorial will help you get started. Please feel free to drop your queries or suggestions in the comment box below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com
作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/author/pradeep/
[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg
[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg
[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg
[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg
[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg
[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg
[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg
[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg
[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg
[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg

View File

@ -1,206 +0,0 @@
Translating by GraveAccent
Conditional Rendering in React using Ternaries and Logical AND
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg)
Photo by [Brendan Church][1] on [Unsplash][2]
There are several ways that your React component can decide what to render. You can use the traditional `if` statement or the `switch` statement. In this article, we'll explore a few alternatives. But be warned that some come with their own gotchas, if you're not careful.
### Ternary vs if/else
Let's say we have a component that is passed a `name` prop. If the string is non-empty, we display a greeting. Otherwise we tell the user they need to sign in.
Here's a Stateless Function Component (SFC) that does just that.
```
const MyComponent = ({ name }) => {
if (name) {
return (
<div className="hello">
Hello {name}
</div>
);
}
return (
<div className="hello">
Please sign in
</div>
);
};
```
Pretty straightforward. But we can do better. Here's the same component written using a conditional ternary operator.
```
const MyComponent = ({ name }) => (
<div className="hello">
{name ? `Hello ${name}` : 'Please sign in'}
</div>
);
```
Notice how concise this code is compared to the example above.
A few things to note. Because we are using the single statement form of the arrow function, the `return` statement is implied. Also, using a ternary allowed us to DRY up the duplicate `<div className="hello">` markup. 🎉
### Ternary vs Logical AND
As you can see, ternaries are wonderful for `if/else` conditions. But what about simple `if` conditions?
Let's look at another example. If `isPro` (a boolean) is `true`, we are to display a trophy emoji. We are also to render the number of stars (if not zero). We could go about it like this.
```
const MyComponent = ({ name, isPro, stars}) => (
<div className="hello">
<div>
Hello {name}
{isPro ? '🏆' : null}
</div>
{stars ? (
<div>
Stars:{'⭐️'.repeat(stars)}
</div>
) : null}
</div>
);
```
But notice the “else” conditions return `null`. This is because a ternary expects an else condition.
For simple `if` conditions, we could use something a little more fitting: the logical AND operator. Here's the same code written using a logical AND.
```
const MyComponent = ({ name, isPro, stars}) => (
<div className="hello">
<div>
Hello {name}
{isPro && '🏆'}
</div>
{stars && (
<div>
Stars:{'⭐️'.repeat(stars)}
</div>
)}
</div>
);
```
Not too different, but notice how we eliminated the `: null` (i.e. else condition) at the end of each ternary. Everything should render just like it did before.
Hey! What gives with John? There is a `0` when nothing should be rendered. That's the gotcha that I was referring to above. Here's why.
[According to MDN][3], a Logical AND (i.e. `&&`):
> `expr1 && expr2`
> Returns `expr1` if it can be converted to `false`; otherwise, returns `expr2`. Thus, when used with Boolean values, `&&` returns `true` if both operands are true; otherwise, returns `false`.
OK, before you start pulling your hair out, let me break it down for you.
In our case, `expr1` is the variable `stars`, which has a value of `0`. Because zero is falsey, `0` is returned and rendered. See, that wasn't too bad.
I would write this simply.
> If `expr1` is falsey, returns `expr1`, else returns `expr2`.
So, when using a logical AND with non-boolean values, we must make the falsey value return something that React won't render. Say, like a value of `false`.
There are a few ways that we can accomplish this. Let's try this instead.
```
{!!stars && (
<div>
{'⭐️'.repeat(stars)}
</div>
)}
```
Notice the double bang operator (i.e. `!!`) in front of `stars`. (Well, actually there is no “double bang operator”. We're just using the bang operator twice.)
The first bang operator will coerce the value of `stars` into a boolean and then perform a NOT operation. If `stars` is `0`, then `!stars` will produce `true`.
Then we perform a second NOT operation, so if `stars` is 0, `!!stars` would produce `false`. Exactly what we want.
If you're not a fan of `!!`, you can also force a boolean like this (which I find a little wordy).
```
{Boolean(stars) && (
```
Or simply give a comparator that results in a boolean value (which some might say is even more semantic).
```
{stars > 0 && (
```
#### A word on strings
Empty string values suffer the same issue as numbers. But because a rendered empty string is invisible, it's not a problem that you will likely have to deal with, or will even notice. However, if you are a perfectionist and don't want an empty string in your DOM, you should take similar precautions as we did for numbers above.
### Another solution
A possible solution, and one that scales to other variables in the future, would be to create a separate `shouldRenderStars` variable. Then you are dealing with boolean values in your logical AND.
```
const shouldRenderStars = stars > 0;
```
```
return (
<div>
{shouldRenderStars && (
<div>
{'⭐️'.repeat(stars)}
</div>
)}
</div>
);
```
Then, if in the future, the business rule is that you also need to be logged in, own a dog, and drink light beer, you could change how `shouldRenderStars` is computed, and what is returned would remain unchanged. You could also place this logic elsewhere where it's testable and keep the rendering explicit.
```
const shouldRenderStars =
  stars > 0 && loggedIn && pet === 'dog' && beerPref === 'light';
```
```
return (
<div>
{shouldRenderStars && (
<div>
{'⭐️'.repeat(stars)}
</div>
)}
</div>
);
```
### Conclusion
I'm of the opinion that you should make the best use of the language. And for JavaScript, this means using conditional ternary operators for `if/else` conditions and logical AND operators for simple `if` conditions.
While we could just retreat back to our safe comfy place where we use the ternary operator everywhere, you now possess the knowledge and power to go forth AND prosper.
--------------------------------------------------------------------------------
作者简介:
Managing Editor at the American Express Engineering Blog http://aexp.io and Director of Engineering @AmericanExpress. MyViews !== ThoseOfMyEmployer.
----------------
via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935
作者:[Donavon West][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@donavon
[1]:https://unsplash.com/photos/pKeF6Tt3c08?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[2]:https://unsplash.com/search/photos/road-sign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators

View File

@ -1,308 +0,0 @@
translating by Flowsnow
What is behavior-driven Python?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
Have you heard about [behavior-driven development][1] (BDD) and wondered what all the buzz is about? Maybe you've caught team members talking in "gherkin" and felt left out of the conversation. Or perhaps you're a Pythonista looking for a better way to test your code. Whatever the circumstance, learning about BDD can help you and your team achieve better collaboration and test automation, and Python's `behave` framework is a great place to start.
### What is BDD?
In software, a behavior is how a feature operates within a well-defined scenario of inputs, actions, and outcomes. Products can exhibit countless behaviors, such as:
* Submitting forms on a website
* Searching for desired results
* Saving a document
* Making REST API calls
* Running command-line interface commands
Defining a product's features based on its behaviors makes it easier to describe them, develop them, and test them. This is the heart of BDD: making behaviors the focal point of software development. Behaviors are defined early in development using a [specification by example][2] language. One of the most common behavior spec languages is [Gherkin][3], the Given-When-Then scenario format from the [Cucumber][4] project. Behavior specs are basically plain-language descriptions of how a behavior works, with a little bit of formal structure for consistency and focus. Test frameworks can easily automate these behavior specs by "gluing" step texts to code implementations.
Below is an example of a behavior spec written in Gherkin:
```
Scenario: Basic DuckDuckGo Search
  Given the DuckDuckGo home page is displayed
  When the user searches for "panda"
  Then results are shown for "panda"
```
At a quick glance, the behavior is intuitive to understand. Except for a few keywords, the language is freeform. The scenario is concise yet meaningful. A real-world example illustrates the behavior. Steps declaratively indicate what should happen—without getting bogged down in the details of how.
The [main benefits of BDD][5] are good collaboration and automation. Everyone can contribute to behavior development, not just programmers. Expected behaviors are defined and understood from the beginning of the process. Tests can be automated together with the features they cover. Each test covers a singular, unique behavior in order to avoid duplication. And, finally, existing steps can be reused by new behavior specs, creating a snowball effect.
### Python's behave framework
`behave` is one of the most popular BDD frameworks in Python. It is very similar to other Gherkin-based Cucumber frameworks despite not holding the official Cucumber designation. `behave` has two primary layers:
1. Behavior specs written in Gherkin `.feature` files
2. Step definitions and hooks written in Python modules that implement Gherkin steps
As shown in the example above, Gherkin scenarios use a three-part format:
1. Given some initial state
2. When an action is taken
3. Then verify the outcome
Each step is "glued" by a decorator to a Python function when `behave` runs tests.
### Installation
As a prerequisite, make sure you have Python and `pip` installed on your machine. I strongly recommend using Python 3. (I also recommend using [`pipenv`][6], but the following example commands use the more basic `pip`.)
Only one package is required for `behave`:
```
pip install behave
```
Other packages may also be useful, such as:
```
pip install requests    # for REST API calls
pip install selenium    # for Web browser interactions
```
The [behavior-driven-Python][7] project on GitHub contains the examples used in this article.
### Gherkin features
The Gherkin syntax that `behave` uses is practically compliant with the official Cucumber Gherkin standard. A `.feature` file has Feature sections, which in turn have Scenario sections with Given-When-Then steps. Below is an example:
```
Feature: Cucumber Basket
  As a gardener,
  I want to carry many cucumbers in a basket,
  So that I don't drop them all.
 
  @cucumber-basket
  Scenario: Add and remove cucumbers
    Given the basket is empty
    When "4" cucumbers are added to the basket
    And "6" more cucumbers are added to the basket
    But "3" cucumbers are removed from the basket
    Then the basket contains "7" cucumbers
```
There are a few important things to note here:
* Both the Feature and Scenario sections have [short, descriptive titles][8].
* The lines immediately following the Feature title are comments ignored by `behave`. It is a good practice to put the user story there.
* Scenarios and Features can have tags (notice the `@cucumber-basket` mark) for hooks and filtering (explained below).
* Steps follow a [strict Given-When-Then order][9].
* Additional steps can be added for any type using `And` and `But`.
* Steps can be parametrized with inputs—notice the values in double quotes.
Scenarios can also be written as templates with multiple input combinations by using a Scenario Outline:
```
Feature: Cucumber Basket
  @cucumber-basket
  Scenario Outline: Add cucumbers
    Given the basket has "<initial>" cucumbers
    When "<more>" cucumbers are added to the basket
    Then the basket contains "<total>" cucumbers
    Examples: Cucumber Counts
      | initial | more | total |
      |    0    |   1  |   1   |
      |    1    |   2  |   3   |
      |    5    |   4  |   9   |
```
Scenario Outlines always have an Examples table, in which the first row gives column titles and each subsequent row gives an input combo. The row values are substituted wherever a column title appears in a step surrounded by angle brackets. In the example above, the scenario will be run three times because there are three rows of input combos. Scenario Outlines are a great way to avoid duplicate scenarios.
There are other elements of the Gherkin language, but these are the main mechanics. To learn more, read the Automation Panda articles [Gherkin by Example][10] and [Writing Good Gherkin][11].
### Python mechanics
Every Gherkin step must be "glued" to a step definition, a Python function that provides the implementation. Each function has a step type decorator with the matching string. It also receives a shared context and any step parameters. Feature files must be placed in a directory named `features/`, while step definition modules must be placed in a directory named `features/steps/`. Any feature file can use step definitions from any module—they do not need to have the same names. Below is an example Python module with step definitions for the cucumber basket features.
```
from behave import *
from cucumbers.basket import CucumberBasket
@given('the basket has "{initial:d}" cucumbers')
def step_impl(context, initial):
    context.basket = CucumberBasket(initial_count=initial)
@when('"{some:d}" cucumbers are added to the basket')
def step_impl(context, some):
    context.basket.add(some)
@then('the basket contains "{total:d}" cucumbers')
def step_impl(context, total):
    assert context.basket.count == total
```
Three [step matchers][12] are available: `parse`, `cfparse`, and `re`. The default and simplest matcher is `parse`, which is shown in the example above. Notice how parametrized values are parsed and passed into the functions as input arguments. A common best practice is to put double quotes around parameters in steps.
Each step definition function also receives a [context][13] variable that holds data specific to the current scenario being run, such as `feature`, `scenario`, and `tags` fields. Custom fields may be added, too, to share data between steps. Always use context to share data—never use global variables!
`behave` also supports [hooks][14] to handle automation concerns outside of Gherkin steps. A hook is a function that will be run before or after a step, scenario, feature, or whole test suite. Hooks are reminiscent of [aspect-oriented programming][15]. They should be placed in a special `environment.py` file under the `features/` directory. Hook functions can check the current scenario's tags, as well, so logic can be selectively applied. The example below shows how to use hooks to set up and tear down a Selenium WebDriver instance for any scenario tagged as `@web`.
```
from selenium import webdriver
def before_scenario(context, scenario):
    if 'web' in context.tags:
        context.browser = webdriver.Firefox()
        context.browser.implicitly_wait(10)
def after_scenario(context, scenario):
    if 'web' in context.tags:
        context.browser.quit()
```
Note: Setup and cleanup can also be done with [fixtures][16] in `behave`.
To offer an idea of what a `behave` project should look like, here's the example project's directory structure:
![](https://opensource.com/sites/default/files/uploads/behave_dir_layout.png)
Any Python packages and custom modules can be used with `behave`. Use good design patterns to build a scalable test automation solution. Step definition code should be concise.
### Running tests
To run tests from the command line, change to the project's root directory and run the `behave` command. Use the `help` option to see all available options.
Below are a few common use cases:
```
# run all tests
behave
# run the scenarios in a feature file
behave features/web.feature
# run all tests that have the @duckduckgo tag
behave --tags @duckduckgo
# run all tests that do not have the @unit tag
behave --tags ~@unit
# run all tests that have @basket and either @add or @remove
behave --tags @basket --tags @add,@remove
```
For convenience, options may be saved in [config][17] files.
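As a sketch of that, the same filters can live in a `behave.ini` at the project root, one of the config file names `behave` reads automatically; `tags` and `stop` are standard options, and the values here are only examples:
```
$ cat > behave.ini <<'EOF'
[behave]
tags = ~@unit
stop = true
EOF
```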
### Other options
`behave` is not the only BDD test framework in Python. Other good frameworks include:
  * `pytest-bdd`, a plugin for `pytest`. Like `behave`, it uses Gherkin feature files and step definition modules, but it also leverages all the features and plugins of `pytest`. For example, it can run Gherkin scenarios in parallel using `pytest-xdist`. BDD and non-BDD tests can also be executed together with the same filters. `pytest-bdd` also offers a more flexible directory layout.
* `radish` is a "Gherkin-plus" framework—it adds Scenario Loops and Preconditions to the standard Gherkin language, which makes it more friendly to programmers. It also offers rich command line options like `behave`.
* `lettuce` is an older BDD framework very similar to `behave`, with minor differences in framework mechanics. However, GitHub shows little recent activity in the project (as of May 2018).
Any of these frameworks would be good choices.
Also, remember that Python test frameworks can be used for any black box testing, even for non-Python products! BDD frameworks are great for web and service testing because their tests are declarative, and Python is a [great language for test automation][18].
This article is based on the author's [PyCon Cleveland 2018][19] talk, [Behavior-Driven Python][20].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/behavior-driven-python
作者:[Andrew Knight][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/andylpk247
[1]:https://automationpanda.com/bdd/
[2]:https://en.wikipedia.org/wiki/Specification_by_example
[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/
[4]:https://cucumber.io/
[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/
[6]:https://docs.pipenv.org/
[7]:https://github.com/AndyLPK247/behavior-driven-python
[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/
[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/
[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/
[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/
[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters
[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes
[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions
[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming
[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures
[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files
[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/
[19]:https://us.pycon.org/2018/
[20]:https://us.pycon.org/2018/schedule/presentation/87/

View File

@ -1,3 +1,4 @@
Translating by qhwdw
What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++
======

View File

@ -1,81 +0,0 @@
5 of the Best Linux Educational Software and Games for Kids
======
![](https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-programs-for-kids-featured.jpg)
Linux is a very powerful operating system, and that explains why it powers most of the servers on the Internet. Though it may not be the best OS in terms of user-friendliness, its diversity is commendable. Everyone has their own need for Linux. Be it for coding, educational purposes or the Internet of Things (IoT), you'll always find a suitable Linux distro for every use. To that end, many have dubbed Linux the OS for future computing.
Because the future belongs to the kids of today, introducing them to Linux is the best way to prepare them for what the future holds. This OS may not have a reputation for popular games such as FIFA or PES; however, it offers the best educational software and games for kids. These are five of the best Linux educational software to keep your kids ahead of the game.
**Related** : [The Beginners Guide to Using a Linux Distro][1]
### 1. GCompris
If you're looking for the best educational software for kids, [GCompris][2] should be your starting point. This software is specifically designed for kids' education and is ideal for kids between two and ten years old. As the pinnacle of all Linux educational software suites for children, GCompris offers about 100 activities for kids. It packs everything you want for your kids, from reading practice to science, geography, drawing, algebra, quizzes, and more.
![Linux educational software and games][3]
GCompris even has activities for helping your kids learn about computer peripherals. If your kids are young and you want them to learn the alphabet, colors, and shapes, GCompris has programmes for those, too. What's more, it also comes with helpful games for kids such as chess, tic-tac-toe, memory, and hangman. GCompris is not a Linux-only app. It's also available for Windows and Android.
### 2. TuxMath
Most students consider math a tough subject. You can change that perception by acquainting your kids with mathematical skills through Linux software applications such as [TuxMath][4]. TuxMath is a top-rated educational math tutorial game for kids. In this game, your role is to help Tux, the penguin of Linux, protect his planet from a rain of mathematical problems.
![linux-educational-software-tuxmath-1][5]
By finding the answer, you help Tux save the planet by destroying the asteroids with your laser before they make an impact. The difficulty of the math problems increases with each level you pass. This game is ideal for kids, as it can help them rack their brains for solutions. Besides making them good at math, it also helps them improve their mental agility.
### 3. Sugar on a Stick
[Sugar on a Stick][6] is a dedicated learning program for kids, a brand-new pedagogy that has gained a lot of traction. This program provides your kids with a fully-fledged learning platform where they can gain skills in creating, exploring, discovering and also reflecting on ideas. Just like GCompris, Sugar on a Stick comes with a host of learning resources for kids, including games and puzzles.
![linux-educational-software-sugar-on-a-stick][7]
The best thing about Sugar on a Stick is that you can set it up on a USB drive. All you need is an x86-based PC: plug in the USB drive and boot the distro from it. Sugar on a Stick is a project by Sugar Labs, a non-profit organization run by volunteers.
### 4. KDE Edu Suite
[KDE Edu Suite][8] is a package of software for different user purposes. With a host of applications from different fields, the KDE community has proven that it isn't just serious about empowering adults; it also cares about bringing the young generation up to speed with everything surrounding them. It comes packed with various applications for kids ranging from science to math, geography, and more.
![linux-educational-software-kde-1][9]
The KDE Edu Suite can be geared toward adult needs, used as teaching software in schools, or as a kids' learning app. It offers a huge software package and is free to download. The KDE Edu Suite can be installed on most GNU/Linux distros.
### 5. Tux Paint
![linux-educational-software-tux-paint-2][10]
[Tux Paint][11] is another great Linux educational program for kids. This award-winning drawing program is used in schools around the world to help children nurture the art of drawing. It comes with a clean, easy-to-use interface and fun sound effects that help children use the program. There is also an encouraging cartoon mascot that guides kids as they use the program. Tux Paint comes with a variety of drawing tools that help kids unleash their creativity.
### Summing Up
Due to the popularity of this educational software for kids, many institutions have embraced these programs as teaching aids in schools and kindergartens. A typical example is [Edubuntu][12], an Ubuntu-derived distro that is widely used by teachers and parents for educating kids.
Tux Paint is another great example that has grown in popularity over the years and is being used in schools to teach children how to draw. This list is by no means exhaustive. There are hundreds of other Linux educational software and games that can be very useful for your kids.
If you know of any other great Linux educational software and games for kids, share with us in the comments section below.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/
作者:[Kenneth Kimari][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/kennkimari/
[1]:https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ (The Beginners Guide to Using a Linux Distro)
[2]:http://www.gcompris.net/downloads-en.html
[3]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg (Linux educational software and games)
[4]:https://tuxmath.en.uptodown.com/ubuntu
[5]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg (linux-educational-software-tuxmath-1)
[6]:http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads
[7]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png (linux-educational-software-sugar-on-a-stick)
[8]:https://edu.kde.org/
[9]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg (linux-educational-software-kde-1)
[10]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg (linux-educational-software-tux-paint-2)
[11]:http://www.tuxpaint.org/
[12]:http://edubuntu.org/

View File

@ -1,223 +0,0 @@
[翻译中]translating by jrg!
Automating backups on a Raspberry Pi NAS
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
In the [first part][1] of this three-part series using a Raspberry Pi for network-attached storage (NAS), we covered the fundamentals of the NAS setup, attached two 1TB hard drives (one for data and one for backups), and mounted the data drive on a remote device via the network filesystem (NFS). In part two, we will look at automating backups. Automated backups allow you to continually secure your data and recover from a hardware defect or accidental file removal.
![](https://opensource.com/sites/default/files/uploads/nas_part2.png)
### Backup strategy
Let's get started by coming up with a backup strategy for our small NAS. I recommend creating daily backups of your data and scheduling them for a time they won't interfere with other NAS activities, including when you need to access or store your files. For example, you could trigger the backup activities each day at 2am.
You also need to decide how long you'll keep each backup, since you would quickly run out of storage if you kept each daily backup indefinitely. Keeping your daily backups for one week allows you to travel back into your recent history if you realize something went wrong over the previous seven days. But what if you need something from further in the past? Keeping each Monday backup for a month and one monthly backup for a longer period of time should be sufficient. Let's keep the monthly backups for a year and one backup every year for long-distance time travels, e.g., for the last five years.
This results in a bunch of backups on your backup drive over a five-year period:
* 7 daily backups
* 4 (approx.) weekly backups
* 12 monthly backups
* 5 annual backups
You may recall that your backup drive and your data drive are of equal size (1TB each). How will more than 10 backups of 1TB from your data drive fit onto a 1TB backup disk? If you create full backups, they won't. Instead, you will create incremental backups, reusing the data from the last backup if it didn't change and creating replicas of new or changed files. That way, the backup doesn't double every night, but only grows a little bit depending on the changes that happen to your data over a day.
Here is my situation: My NAS has been running since August 2016, and 20 backups are on the backup drive. Currently, I store 406GB of files on the data drive. The backups take up 726GB on my backup drive. Of course, this depends heavily on your data's change frequency, but as you can see, the incremental backups don't consume as much space as 20 full backups would. Nevertheless, over time the 1TB disk will probably become insufficient for your backups. Once your data grows close to the 1TB limit (or whatever your backup drive capacity), you should choose a bigger backup drive and move your data there.
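If you want to keep an eye on this yourself, two standard commands are enough (a quick check, assuming the mount points used in this series):
```
$ df -h /nas/backup    # free space remaining on the backup drive
$ du -sh /nas/backup   # total space consumed by all backups
```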
### Creating backups with rsync
To create a full backup, you can use the rsync command line tool. Here is an example command to create the initial full backup.
```
pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01
```
This command creates a full replica of all data stored on the data drive, mounted on `/nas/data`, on the backup drive. There, it will create the folder `2018-08-01` and create the backup inside it. The `-a` flag starts rsync in archive-mode, which means it preserves all kinds of metadata, like modification dates, permissions, and owners, and copies soft links as soft links.
Now that you have created your full, initial backup as of August 1, on August 2, you will create your first daily incremental backup.
```
pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02
```
This command tells rsync to again create a backup of `/nas/data`. The target directory this time is `/nas/backup/2018-08-02`. The script also specified the `--link-dest` option and passed the location of the last backup as an argument. With this option specified, rsync looks at the folder `/nas/backup/2018-08-01` and checks what data files changed compared to that folder's content. Unchanged files will not be copied, rather they will be hard-linked to their counterparts in yesterday's backup folder.
When using a hard-linked file from a backup, you won't notice any difference between the initial copy and the link. They behave exactly the same, and if you delete either the link or the initial file, the other will still exist. You can imagine them as two equal entry points to the same file. Here is an example:
![](https://opensource.com/sites/default/files/uploads/backup_flow.png)
The left box reflects the state shortly after the second backup. The box in the middle is yesterday's replica. The `file2.txt` didn't exist yesterday, but the image `file1.jpg` did and was copied to the backup drive. The box on the right reflects today's incremental backup. The incremental backup command created `file2.txt`, which didn't exist yesterday. Since `file1.jpg` didn't change since yesterday, today a hard link is created so it doesn't take much additional space on the disk.
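If you want to convince yourself of this behavior, here is a small, self-contained demo you can run anywhere; it is illustrative only and not part of the backup setup:
```
$ echo "hello" > file1.txt
$ ln file1.txt file2.txt       # create a hard link to the same data
$ ls -li file1.txt file2.txt   # both names show the same inode number
$ rm file1.txt
$ cat file2.txt                # the data is still reachable via the link
hello
```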
### Automate your backups
You probably don't want to execute your daily backup command by hand at 2am each day. Instead, you can automate your backup by using a script like the following, which you may want to start with a cron job.
```
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
DATADIR=/nas/data/
BACKUPDIR=/nas/backup/
SCRIPTDIR=/nas/data/backup_scripts
LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1)
TODAYPATH=${BACKUPDIR}/${TODAY}
if [[ ! -e ${TODAYPATH} ]]; then
        mkdir -p ${TODAYPATH}
fi
rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@
${SCRIPTDIR}/deleteOldBackups.sh
```
The first block calculates the last backup's folder name to use for links and the name of today's backup folder. The second block has the rsync command (as described above). The last block executes a `deleteOldBackups.sh` script. It will clean up the old, unnecessary backups based on the backup strategy outlined above. You could also execute the cleanup script independently from the backup script if you want it to run less frequently.
The following script is an example implementation of the backup strategy in this how-to article.
```
#!/bin/bash
BACKUPDIR=/nas/backup/
function listYearlyBackups() {
        for i in 0 1 2 3 4 5
                do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1
        done
}
function listMonthlyBackups() {
        for i in 0 1 2 3 4 5 6 7 8 9 10 11 12
                do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1
        done
}
function listWeeklyBackups() {
        for i in 0 1 2 3 4
                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")"
        done
}
function listDailyBackups() {
        for i in 0 1 2 3 4 5 6
                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")"
        done
}
function getAllBackups() {
        listYearlyBackups
        listMonthlyBackups
        listWeeklyBackups
        listDailyBackups
}
function listUniqueBackups() {
        getAllBackups | sort -u
}
function listBackupsToDelete() {
        ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")"
}
cd ${BACKUPDIR}
listBackupsToDelete | while read file_to_delete; do
        rm -rf ${file_to_delete}
done
```
This script will first list all the backups to keep (according to our backup strategy), then it will delete all the backup folders that are not necessary anymore.
To execute the scripts every night to create daily backups, schedule the backup script by running `crontab -e` as the root user. (You need to be root to make sure it has permission to read all the files on the data drive, no matter who created them.) Add a line like the following, which starts the script every night at 2am.
```
0 2 * * * /nas/data/backup_scripts/daily.sh
```
For more information, read about [scheduling tasks with cron][2].
There are additional things you can do to fortify your backups against accidental removal or damage, including the following:
* Unmount your backup drive or mount it as read-only when no backups are running
* Attach the backup drive to a remote server and sync the files over the internet
This example backup strategy enables you to back up your valuable data to make sure it won't get lost. You can also easily adjust this technique for your personal needs and preferences.
In part three of this series, we will talk about [Nextcloud][3], a convenient way to store and access data on your NAS system that also provides offline access as it synchronizes your data to the client devices.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/automate-backups-raspberry-pi
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ntlx
[1]:https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
[2]:https://opensource.com/article/17/11/how-use-cron-linux
[3]:https://nextcloud.com/

View File

@ -1,3 +1,5 @@
translating---geekpi
Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension)
======
A Unity feature that I miss (it only actually worked for a short while though) is automatically getting player controls in the Ubuntu Sound Indicator when visiting a website like YouTube in a web browser, so you could pause or stop the video directly from the top bar, as well as see the video / song information and a preview.

View File

@ -1,3 +1,6 @@
Translating by MjSeven
An introduction to the Django Python web app framework
======

View File

@ -1,110 +0,0 @@
translating---geekpi
5 cool music player apps
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg)
Do you like music? Then Fedora may have just what you're looking for. This article introduces different music player apps that run on Fedora. You're covered whether you have an extensive music library, a small one, or none at all. Here are four graphical applications and one terminal-based music player that will have you jamming.
### Quod Libet
Quod Libet is a complete manager for your large audio library. If you have an extensive audio library that you would like not just to listen to, but also manage, Quod Libet might be a good choice for you.
![][1]
Quod Libet can import music from multiple locations on your disk, and allows you to edit tags of the audio files — so everything is under your control. As a bonus, there are various plugins available for anything from a simple equalizer to a [last.fm][2] sync. You can also search and play music directly from [Soundcloud][3].
Quod Libet works great on HiDPI screens, and is available as an RPM in Fedora or on [Flathub][4] in case you run [Silverblue][5]. Install it using Gnome Software or the command line:
```
$ sudo dnf install quodlibet
```
### Audacious
If you like a simple music player that could even look like the legendary Winamp, Audacious might be a good choice for you.
![][6]
Audacious probably won't manage all your music at once, but it works great if you like to organize your music as files. You can also export and import playlists without reorganizing the music files themselves.
As a bonus, you can make it look like Winamp. To make it look the same as on the screenshot above, go to Settings / Appearance, select Winamp Classic Interface at the top, and choose the Refugee skin right below. And Bob's your uncle!
Audacious is available as an RPM in Fedora, and can be installed using the Gnome Software app or the following command on the terminal:
```
$ sudo dnf install audacious
```
### Lollypop
Lollypop is a music player that provides great integration with GNOME. If you enjoy how GNOME looks, and would like a music player thats nicely integrated, Lollypop could be for you.
![][7]
Apart from nice visual integration with the GNOME Shell, it works nicely on HiDPI screens, and supports a dark theme.
As a bonus, Lollypop has an integrated cover art downloader, and a so-called Party Mode (the note button at the top-right corner) that selects and plays music automatically for you. It also integrates with online services such as [last.fm][2] or [libre.fm][8].
Available both as an RPM in Fedora and on [Flathub][4] for your [Silverblue][5] workstation, install it using the Gnome Software app or using the terminal:
```
$ sudo dnf install lollypop
```
### Gradio
What if you don't own any music, but still like to listen to it? Or you simply love radio? Then Gradio is here for you.
![][9]
Gradio is a simple radio player that allows you to search and play internet radio stations. You can find them by country, language, or simply using search. As a bonus, its visually integrated into GNOME Shell, works great with HiDPI screens, and has an option for a dark theme.
Gradio is available on [Flathub][4] which works with both Fedora Workstation and [Silverblue][5]. Install it using the Gnome Software app.
### sox
Do you like using the terminal instead, and listening to some music while you work? You don't have to leave the terminal, thanks to sox.
![][10]
sox is a very simple, terminal-based music player. All you need to do is to run a command such as:
```
$ play file.mp3
```
…and sox will play it for you. Apart from individual audio files, sox also supports playlists in the m3u format.
As a bonus, because sox is a terminal-based application, you can run it over ssh. Do you have a home server with speakers attached to it? Or do you want to play music from a different computer? Try using it together with [tmux][11], so you can keep listening even when the session closes.
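A session on such a home server could look like the following sketch (the host name and music path are placeholders):
```
$ ssh pi@homeserver          # log in to the machine with the speakers attached
$ tmux new -s music          # start a named tmux session on the server
$ play ~/music/album/*.mp3   # inside tmux, start playback with sox
# detach with Ctrl+b d -- playback keeps running after you log out
```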
sox is available in Fedora as an RPM. Install it by running:
```
$ sudo dnf install sox
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/5-cool-music-player-apps/
作者:[Adam Šamalík][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/asamalik/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png
[2]:https://last.fm
[3]:https://soundcloud.com/
[4]:https://flathub.org/home
[5]:https://teamsilverblue.org/
[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png
[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png
[8]:https://libre.fm
[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png
[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png
[11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/

View File

@ -1,74 +0,0 @@
translated by hopefully2333
Steam Makes it Easier to Play Windows Games on Linux
======
![Steam Wallpaper][1]
It's no secret that the [Linux gaming][2] library offers only a fraction of what the Windows library offers. In fact, many people wouldn't even consider [switching to Linux][3] simply because most of the games they want to play aren't available on the platform.
At the time of writing this article, Linux has just over 5,000 games available on Steam compared to the library's almost 27,000 total games. Now, 5,000 games may be a lot, but it isn't 27,000 games, that's for sure.
And though almost every new indie game seems to launch with a Linux release, we are still left without a way to play many [Triple-A][4] titles. For me, this has never been a make-or-break problem: though there are many titles I would love the opportunity to play, almost all of my favorite titles are available on Linux, since I primarily play indie and [retro games][5] anyway.
### Meet Proton: a WINE Fork by Steam
Now, that problem is a thing of the past since this week Valve [announced][6] a new update to Steam Play that adds a forked version of Wine to the Linux and Mac Steam clients called Proton. Yes, the tool is open-source, and Valve has made the source code available on [Github][7]. The feature is still in beta though, so you must opt into the beta Steam client in order to take advantage of this functionality.
#### With proton, more Windows games are available for Linux on Steam
What does that actually mean for us Linux users? In short, it means that both Linux and Mac computers can now play all 27,000 of those games without needing to configure something like [PlayOnLinux][8] or [Lutris][9] to do so! Which, let me tell you, can be quite the headache at times.
The more complicated answer to this is that it sounds too good to be true for a reason. Though, in theory, you can play literally every Windows game on Linux this way, there is only a short list of games that are officially supported at launch, including DOOM, Final Fantasy VI, Tekken 7, Star Wars: Battlefront 2, and several more.
#### You can play all Windows games on Linux (in theory)
Though the list only has about 30 games thus far, you can force-enable Steam to install and play any game through Proton by marking the “Enable Steam Play for all titles” checkbox. But don't get your hopes too high. They do not guarantee the stability and performance you may be hoping for, so keep your expectations reasonable.
![Steam Play][10]
#### Experiencing Proton: Not as bad as I expected
For example, I installed a few moderately taxing games to put Proton through its paces. One of them was The Elder Scrolls IV: Oblivion, and in the two hours I played the game, it only crashed once, and it was almost immediately after an autosave point during the tutorial.
I have an Nvidia GTX 1050 Ti, so I was able to play the game at 1080p with high settings, and I didn't see a single problem outside of that one crash. The only negative feedback I really have is that the framerate was not nearly as high as it would have been if it was a native game. I got above 60 frames 90% of the time, but I admit it could have been better.
Every other game that I have installed and launched has also worked flawlessly, granted I haven't played any of them for an extended amount of time yet. Some games I installed include The Forest, Dead Rising 4, H1Z1, and Assassin's Creed II (can you tell I like horror games?).
#### Why is Steam (still) betting on Linux?
Now, this is all fine and dandy, but why did this happen? Why would Valve spend the time, money, and resources needed to implement something like this? I like to think they did so because they value the Linux community, but if I am honest, I don't believe we had anything to do with it.
If I had to put money on it, I would say Valve has developed Proton because they haven't given up on [Steam machines][11] yet. And since [Steam OS][12] is running on Linux, it is in their best interest financially to invest in something like this. The more games available on Steam OS, the more people might be willing to buy a Steam Machine.
Maybe I am wrong, but I bet this means we will see a new wave of Steam machines coming in the not-so-distant future. Maybe we will see them in one year, or perhaps we won't see them for another five, who knows!
Either way, all I know is that I am beyond excited to finally play the games from my Steam library that I have slowly accumulated over the years from all of the Humble Bundles, promo codes, and random times I bought a game on sale just in case I wanted to try to get it running in Lutris.
#### Excited for more gaming on Linux?
What do you think? Are you excited about this, or are you afraid fewer developers will create native Linux games because there is almost no need to now? Does Valve love the Linux community, or do they love money? Let us know what you think in the comment section below, and check back in for more FOSS content like this.
--------------------------------------------------------------------------------
via: https://itsfoss.com/steam-play-proton/
作者:[Phillip Prado][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/steam-wallpaper.jpeg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[4]:https://itsfoss.com/triplea-game-review/
[5]:https://itsfoss.com/play-retro-games-linux/
[6]:https://steamcommunity.com/games/221410
[7]:https://github.com/ValveSoftware/Proton/
[8]:https://www.playonlinux.com/en/
[9]:https://lutris.net/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg
[11]:https://store.steampowered.com/sale/steam_machines
[12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/

View File

@ -1,116 +0,0 @@
[Solved] “sub process usr bin dpkg returned an error code 1” Error in Ubuntu
======
If you are encountering “sub process usr bin dpkg returned an error code 1” while installing software on Ubuntu Linux, here is how you can fix it.
One of the common issues in Ubuntu and other Debian-based distributions is broken packages. You try to update the system or install a new package, and you encounter an error like “Sub-process /usr/bin/dpkg returned an error code”.
That's what happened to me the other day. I was trying to install a radio application in Ubuntu when it threw me this error:
```
Unpacking python-gst-1.0 (1.6.2-1build1) ...
Selecting previously unselected package radiotray.
Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ...
Unpacking radiotray (0.7.3-5ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up polar-bookshelf (1.0.0-beta56) ...
ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
dpkg: error processing package polar-bookshelf (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
Setting up python-gst-1.0 (1.6.2-1build1) ...
Setting up radiotray (0.7.3-5ubuntu1) ...
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
The last three lines are of the utmost importance here.
```
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
It tells me that the package polar-bookshelf is causing an issue. This might be crucial to how you fix this error here.
### Fixing Sub-process /usr/bin/dpkg returned an error code (1)
![Fix update errors in Ubuntu Linux][1]
Let's try to fix this broken package error. I'll show several methods that you can try one by one. The initial ones are easy to use and simply no-brainers.
After trying each of the methods discussed here, run `sudo apt update` and then try to install a new package or upgrade.
#### Method 1: Reconfigure Package Database
The first method you can try is to reconfigure the package database. Probably the database got corrupted while installing a package. Reconfiguring often fixes the problem.
```
sudo dpkg --configure -a
```
#### Method 2: Use force install
If a package installation was interrupted previously, you may try to do a force install.
```
sudo apt-get install -f
```
#### Method 3: Try removing the troublesome package
If it's not an issue for you, you may try to remove the package manually. Please don't do it for the Linux kernel (packages starting with linux-).
```
sudo apt remove <package-name>
```
#### Method 4: Remove post info files of the troublesome package
This should be your last resort. You can try removing the files associated with the package in question from /var/lib/dpkg/info.
**You need to know a little about basic Linux commands to figure out what's happening and how you can apply the same approach to your problem.**
In my case, I had an issue with polar-bookshelf. So I looked for the files associated with it:
```
ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
```
Now all I needed to do was to remove these files:
```
sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp
```
Now run `sudo apt update`, and then you should be able to install software as usual.
#### Which method worked for you (if it worked)?
I hope this quick article helps you in fixing the E: Sub-process /usr/bin/dpkg returned an error code (1) error.
If it did work for you, which method was it? Did you manage to fix this error with some other method? If yes, please share that to help others with this issue.
--------------------------------------------------------------------------------
via: https://itsfoss.com/dpkg-returned-an-error-code-1/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/fix-common-update-errors-ubuntu.jpeg

View File

@ -1,3 +1,4 @@
LuuMing translating
How to Use the Netplan Network Configuration Tool on Linux
======

View File

@ -1,3 +1,5 @@
translating by Flowsnow
How to build rpm packages
======

View File

@ -0,0 +1,109 @@
translating---geekpi
Backup Installed Packages And Restore Them On Freshly Installed Ubuntu
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/apt-clone-720x340.png)
Installing the same set of packages on multiple Ubuntu systems is a time-consuming and boring task. You don't want to spend your time installing the same packages over and over on multiple systems. When it comes to installing packages on Ubuntu systems of similar architecture, there are many methods available to make this task easier. You could simply migrate your old Ubuntu system's applications, settings and data to a newly installed system with a couple of mouse clicks using [**Aptik**][1]. Or, you can take a [**backup of the entire list of installed packages**][2] using your package manager (e.g., APT), and install them later on a freshly installed system. Today, I learned that there is yet another dedicated utility available to do this job. Say hello to **apt-clone**, a simple tool that lets you create a list of installed packages for Debian/Ubuntu systems that can be restored on freshly installed systems or containers, or into a directory.
Apt-clone will help you in situations where you want to:
* Install consistent applications across multiple systems running with similar Ubuntu (and derivatives) OS.
* Install same set of packages on multiple systems often.
* Backup the entire list of installed applications and restore them on demand wherever and whenever necessary.
In this brief guide, we will be discussing how to install and use Apt-clone on Debian-based systems. I tested this utility on an Ubuntu 18.04 LTS system; however, it should work on all Debian and Ubuntu-based systems.
### Backup Installed Packages And Restore Them Later On Freshly Installed Ubuntu System
Apt-clone is available in the default repositories. To install it, just enter the following command from the Terminal:
```
$ sudo apt install apt-clone
```
Once installed, simply create the list of installed packages and save it in any location of your choice.
```
$ mkdir ~/mypackages
$ sudo apt-clone clone ~/mypackages
```
The above command saves the list of all packages installed on my Ubuntu system to a file named **apt-clone-state-ubuntuserver.tar.gz** under the **~/mypackages** directory.
To view the details of the backup file, run:
```
$ apt-clone info mypackages/apt-clone-state-ubuntuserver.tar.gz
Hostname: ubuntuserver
Arch: amd64
Distro: bionic
Meta:
Installed: 516 pkgs (33 automatic)
Date: Sat Sep 15 10:23:05 2018
```
As you can see, I have 516 packages in total in my Ubuntu server.
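Since the backup is an ordinary gzip-compressed tar archive, you can also peek inside it with standard tools if you are curious (the member names you see will depend on your system):
```
$ tar -tzf mypackages/apt-clone-state-ubuntuserver.tar.gz
```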
Now, copy this file to your USB or external drive and go to any other system where you want to install the same set of packages. Or, you can transfer the backup file to the target system over the network and install the packages using the following command:
```
$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz
```
Please be mindful that this command will overwrite your existing **/etc/apt/sources.list** and will install/remove packages. You have been warned! Also, make sure the destination system has the same architecture and OS. For example, if the source system is running 18.04 LTS 64-bit, the destination system must be, too.
If you don't want to restore the packages onto the system itself, you can use the `--destination /some/location` option to debootstrap the clone into that directory.
```
$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu
```
In this case, the above command will restore the packages into a folder named **~/oldubuntu**.
For more details, refer to the help section:
```
$ apt-clone -h
```
Or, man pages:
```
$ man apt-clone
```
**Suggested read:**
+ [Systemback - Restore Ubuntu Desktop and Server to previous state][3]
+ [Cronopete - An Apple's Time Machine Clone For Linux][4]
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-freshly-installed-ubuntu-system/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/how-to-migrate-system-settings-and-data-from-an-old-system-to-a-newly-installed-ubuntu-system/
[2]: https://www.ostechnix.com/create-list-installed-packages-install-later-list-centos-ubuntu/#comment-12598
[3]: https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/
[4]: https://www.ostechnix.com/cronopete-apples-time-machine-clone-linux/

View File

@ -1,169 +0,0 @@
Linux tricks that can save you time and trouble
======
Some command line tricks can make you even more productive on the Linux command line.
![](https://images.idgesg.net/images/article/2018/09/boy-jumping-off-swing-100772498-large.jpg)
Good Linux command line tricks dont only save you time and trouble. They also help you remember and reuse complex commands, making it easier for you to focus on what you need to do, not how you should go about doing it. In this post, well look at some handy command line tricks that you might come to appreciate.
### Editing your commands
When making changes to a command that you're about to run on the command line, you can move your cursor to the beginning or the end of the command line to facilitate your changes using the ^a (control key plus “a”) and ^e (control key plus “e”) sequences.
You can also fix and rerun a previously entered command with an easy text substitution by putting your before and after strings between **^** characters -- as in ^before^after^.
```
$ eho hello world <== oops!
Command 'eho' not found, did you mean:
command 'echo' from deb coreutils
command 'who' from deb coreutils
Try: sudo apt install <deb name>
$ ^e^ec^ <== replace text
echo hello world
hello world
```
### Logging into a remote system with just its name
If you log into other systems from the command line (I do this all the time), you might consider adding some aliases to your system to supply the details. Your alias can provide the username you want to use (which may or may not be the same as your username on your local system) and the identity of the remote server. Use an `alias server_name='ssh -v -l username IP-address'` type of command like this:
```
$ alias butterfly='ssh -v -l jdoe 192.168.0.11'
```
You can use the system name in place of the IP address if it's listed in your /etc/hosts file or available through your DNS server.
And remember you can list your aliases with the **alias** command.
```
$ alias
alias butterfly='ssh -v -l jdoe 192.168.0.11'
alias c='clear'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias list_repos='grep ^[^#] /etc/apt/sources.list /etc/apt/sources.list.d/*'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias show_dimensions='xdpyinfo | grep '\''dimensions:'\'''
```
It's good practice to test new aliases and then add them to your ~/.bashrc or similar file to be sure they will be available any time you log in.
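For example, something like this appends the alias to your ~/.bashrc and loads it into the current shell (adjust the alias to your own server, of course):
```
$ echo "alias butterfly='ssh -v -l jdoe 192.168.0.11'" >> ~/.bashrc
$ source ~/.bashrc
```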
### Freezing and thawing out your terminal window
The ^s (control key plus “s”) sequence will stop a terminal from providing output by running an XOFF (transmit off) flow control. This affects PuTTY sessions, as well as terminal windows on your desktop. It is sometimes typed by mistake; the way to make the terminal window responsive again is to enter ^q (control key plus “q”). The only real trick here is remembering ^q, since you aren't very likely to run into this situation very often.
### Repeating commands
Linux provides many ways to reuse commands. The key to command reuse is your history buffer and the commands it collects for you. The easiest way to repeat a command is to type an ! followed by the beginning letters of a recently used command. Another is to press the up-arrow on your keyboard until you see the command you want to reuse and then press enter. You can also display previously entered commands and then type ! followed by the number shown next to the command you want to reuse in the displayed command history entries.
```
!! <== repeat previous command
!ec <== repeat last command that started with "ec"
!76 <== repeat command #76 from command history
```
### Watching a log file for updates
Commands such as tail -f /var/log/syslog will show you lines as they are being added to the specified log file — very useful if you are waiting for some particular activity or want to track what's happening right now. The command will show the end of the file and then additional lines as they are added.
```
$ tail -f /var/log/auth.log
Sep 17 09:41:01 fly CRON[8071]: pam_unix(cron:session): session closed for user smmsp
Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session opened for user root
Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session closed for user root
Sep 17 09:47:00 fly sshd[8124]: Accepted password for shs from 192.168.0.22 port 47792
Sep 17 09:47:00 fly sshd[8124]: pam_unix(sshd:session): session opened for user shs by
Sep 17 09:47:00 fly systemd-logind[776]: New session 215 of user shs.
Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session opened for user root
Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session closed for user root
<== waits for additional lines to be added
```
### Asking for help
For most Linux commands, you can enter the name of the command followed by the option **--help** to get some fairly succinct information on what the command does and how to use it. Less extensive than the man command, the --help option often tells you just what you need to know without expanding on all of the options available.
```
$ mkdir --help
Usage: mkdir [OPTION]... DIRECTORY...
Create the DIRECTORY(ies), if they do not already exist.
Mandatory arguments to long options are mandatory for short options too.
-m, --mode=MODE set file mode (as in chmod), not a=rwx - umask
-p, --parents no error if existing, make parent directories as needed
-v, --verbose print a message for each created directory
-Z set SELinux security context of each created directory
to the default type
--context[=CTX] like -Z, or if CTX is specified then set the SELinux
or SMACK security context to CTX
--help display this help and exit
--version output version information and exit
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Full documentation at: <http://www.gnu.org/software/coreutils/mkdir>
or available locally via: info '(coreutils) mkdir invocation'
```
### Removing files with care
To add a little caution to your use of the rm command, you can set it up with an alias that asks you to confirm your request to delete files before it goes ahead and deletes them. Some sysadmins make this the default. In that case, you might like the next option even more.
```
$ alias rm='rm -i' <== prompt for confirmation
```
### Turning off aliases
You can always disable an alias interactively by using the unalias command. It doesn't change the configuration of the alias in question; it just disables it until the next time you log in or source the file in which the alias is set up.
```
$ unalias rm
```
If the **rm -i** alias is set up as the default and you prefer to never have to provide confirmation before deleting files, you can put your **unalias** command in one of your startup files (e.g., ~/.bashrc).
### Remembering to use sudo
If you often forget to precede commands that only root can run with “sudo”, there are two things you can do. You can take advantage of your command history by using the “sudo !!” (use sudo to run your most recent command with sudo prepended to it), or you can turn some of these commands into aliases with the required "sudo" attached.
```
$ alias update='sudo apt update'
```
### More complex tricks
Some useful command line tricks require a little more than a clever alias. An alias, after all, replaces a command, often inserting options so you don't have to enter them and allowing you to tack on additional information. If you want something more complex than an alias can manage, you can write a simple script or add a function to your .bashrc or other start-up file. The function below, for example, creates a directory and moves you into it. Once it's been set up, source your .bashrc or other file and you can use commands such as "md temp" to set up a directory and cd into it.
```
md () { mkdir -p "$@" && cd "$1"; }
```
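Once the function is defined and your startup file is sourced, a quick test might look like this (the final path depends on where you start):
```
$ md temp/test   # creates temp/test, including missing parents, and cds into it
$ pwd            # the working directory now ends in temp/test
```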
### Wrap-up
Working on the Linux command line remains one of the most productive and enjoyable ways to get work done on my Linux systems, but a group of command line tricks and clever aliases can make that experience even better.
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world

View File

@ -1,49 +0,0 @@
translating---geekpi
How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/ipv4-720x340.png)
**APT**, short for **A**dvanced **P**ackage **T**ool, is the default package manager for Debian-based systems. Using APT, we can install, update, upgrade and remove applications from the system. Lately, I have been facing a strange error. Whenever I try to update my Ubuntu 16.04 box, I get this error: **“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”**, and the update process gets stuck for a long time. My Internet connection is working well, and I am able to ping all websites, including the official Ubuntu site. After a couple of Google searches, I realized that the Ubuntu mirrors are sometimes not reachable over IPv6. The problem was solved after I forced the APT package manager to use IPv4 instead of IPv6 to access the Ubuntu mirrors while updating the system. If you ever encounter this error, you can solve it as described below.
### Force APT Package Manager To Use IPv4 In Ubuntu 16.04
To force APT to use IPv4 in place of IPv6 while updating and upgrading your Ubuntu 16.04 LTS systems, simply use the following commands:
```
$ sudo apt-get -o Acquire::ForceIPv4=true update
$ sudo apt-get -o Acquire::ForceIPv4=true upgrade
```
Voila! This time, the update process ran and completed quickly.
You can also make this persistent for all **apt-get** transactions in the future by adding the following line to the **/etc/apt/apt.conf.d/99force-ipv4** file using this command:
```
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
```
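To confirm that APT has picked up the new setting, you can dump its active configuration and look for the option (just a sanity check):
```
$ apt-config dump | grep -i ForceIPv4
Acquire::ForceIPv4 "true";
```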
**Disclaimer:**
I don't know if anyone else has been having this issue lately, but I kept getting this error today at least four or five times in my Ubuntu 16.04 LTS virtual machine, and I solved it as described above. I am not sure it is the recommended solution. Go through the Ubuntu forums and make sure this method is legitimate. Since mine is just a VM that I use only for testing and learning purposes, I don't mind about the authenticity of this method. Use it at your own risk.
Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-ubuntu-16-04/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/

View File

@ -1,113 +0,0 @@
[翻译中]translating by jrg!
Host your own cloud with Raspberry Pi NAS
======
Protect and secure your data with a self-hosted cloud powered by your Raspberry Pi.
In the first two parts of this series, we discussed the [hardware and software fundamentals][1] for building network-attached storage (NAS) on a Raspberry Pi. We also put a proper [backup strategy][2] in place to secure the data on the NAS. In this third part, we will talk about a convenient way to store, access, and share your data with [Nextcloud][3].
### Prerequisites
To use Nextcloud conveniently, you have to meet a few prerequisites. First, you should have a domain you can use for the Nextcloud instance. For the sake of simplicity in this how-to, we'll use **nextcloud.pi-nas.com**. This domain should be directed to your Raspberry Pi. If you want to run it on your home network, you probably need to set up dynamic DNS for this domain and enable port forwarding of ports 80 and 443 (if you go for an SSL setup, which is highly recommended; otherwise port 80 should be sufficient) from your router to the Raspberry Pi.
You can automate dynamic DNS updates from the Raspberry Pi using [ddclient][4].
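A minimal /etc/ddclient.conf for a dyndns2-style provider might look like the sketch below; the protocol, server, and credentials shown are placeholders that you must replace with your own provider's values:
```
# /etc/ddclient.conf -- sketch with placeholder values
protocol=dyndns2
use=web                       # detect the public IP via a web service
server=members.dyndns.org
login=your-username
password='your-password'
nextcloud.pi-nas.com
```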
### Install Nextcloud
To run Nextcloud on your Raspberry Pi (using the setup described in the [first part][1] of this series), install the following packages as dependencies to Nextcloud using **apt**.
```
sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
```
The next step is to download Nextcloud. [Get the latest release's URL][5] and copy it to download via **wget** on the Raspberry Pi. In the first article in this series, we attached two disk drives to the Raspberry Pi, one for current data and one for backups. Install Nextcloud on the data drive to make sure data is backed up automatically every night.
```
sudo mkdir -p /nas/data/nextcloud
sudo chown pi /nas/data/nextcloud
cd /nas/data/
wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip
unzip nextcloud.zip
sudo ln -s /nas/data/nextcloud /var/www/nextcloud
sudo chown -R www-data:www-data /nas/data/nextcloud
```
When I wrote this, the latest release (as you see in the code above) was 14. Nextcloud is under heavy development, so you may find a newer version when installing your copy of Nextcloud onto your Raspberry Pi.
### Database setup
When we installed Nextcloud above, we also installed MySQL as a dependency to use it for all the metadata Nextcloud generates (for example, the users you create to access Nextcloud). If you would rather use a Postgres database, you'll need to adjust some of the modules installed above.
To access the MySQL database as root, start the MySQL client as root:
```
sudo mysql
```
This will open a SQL prompt where you can insert the following commands—substituting the placeholder with the password you want to use for the database connection—to create a database for Nextcloud.
```
CREATE USER nextcloud IDENTIFIED BY '<insert-password-here>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```
You can exit the SQL prompt by pressing **Ctrl+D** or entering **quit**.
### Web server configuration
Nextcloud can be configured to run using Nginx or other web servers, but for this how-to, I decided to go with the Apache web server on my Raspberry Pi NAS. (Feel free to try out another alternative and let me know if you think it performs better.)
To set it up, configure a virtual host for the domain you created for your Nextcloud instance **nextcloud.pi-nas.com**. To create a virtual host, create the file **/etc/apache2/sites-available/001-nextcloud.conf** with content similar to the following. Make sure to adjust the ServerName to your domain and paths, if you didn't use the ones suggested earlier in this series.
```
<VirtualHost *:80>
ServerName nextcloud.pi-nas.com
ServerAdmin admin@pi-nas.com
DocumentRoot /var/www/nextcloud/
<Directory /var/www/nextcloud/>
AllowOverride None
</Directory>
</VirtualHost>
```
To enable this virtual host, run the following two commands.
```
sudo a2ensite 001-nextcloud
sudo systemctl reload apache2
```
With this configuration, you should now be able to reach the web server with your domain via the web browser. To secure your data, I recommend using HTTPS instead of HTTP to access Nextcloud. A very easy (and free) way is to obtain a [Let's Encrypt][6] certificate with [Certbot][7] and have a cron job automatically refresh it. That way you don't have to mess around with self-signed or expiring certificates. Follow Certbot's simple how-to [instructions to install it on your Raspberry Pi][8]. During Certbot configuration, you can even decide to automatically forward HTTP to HTTPS, so visitors to **<http://nextcloud.pi-nas.com>** will be redirected to **<https://nextcloud.pi-nas.com>**. Please note, if your Raspberry Pi is running behind your home router, you must have port forwarding enabled for ports 443 and 80 to obtain Let's Encrypt certificates.
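With Certbot's Apache plugin installed, requesting a certificate for the virtual host and wiring it into the Apache configuration boils down to a single command like the following sketch; Certbot prompts you for the remaining details interactively:
```
$ sudo certbot --apache -d nextcloud.pi-nas.com
```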
### Configure Nextcloud
The final step is to visit your fresh Nextcloud instance in a web browser to finish the configuration. To do so, open your domain in a browser and insert the database details from above. You can also set up your first Nextcloud user here, the one you can use for admin tasks. By default, the data directory should be inside the Nextcloud folder, so you don't need to change anything for the backup mechanisms from the [second part of this series][2] to pick up the data stored by users in Nextcloud.
Afterward, you will be directed to your Nextcloud and can log in with the admin user you created previously. To see a list of recommended steps to ensure a performant and secure Nextcloud installation, visit the Basic Settings tab in the Settings page (in our example: <https://nextcloud.pi-nas.com/settings/admin>) and see the Security & Setup Warnings section.
Congratulations! You've set up your own Nextcloud powered by a Raspberry Pi. Go ahead and [download a Nextcloud client][9] from the Nextcloud page to sync data with your client devices and access it offline. Mobile clients even provide features like instant upload of pictures you take, so they'll automatically sync to your desktop PC without wondering how to get them there.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ntlx
[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
[2]: https://opensource.com/article/18/8/automate-backups-raspberry-pi
[3]: https://nextcloud.com/
[4]: https://sourceforge.net/p/ddclient/wiki/Home/
[5]: https://nextcloud.com/install/#instructions-server
[6]: https://letsencrypt.org/
[7]: https://certbot.eff.org/
[8]: https://certbot.eff.org/lets-encrypt/debianother-apache
[9]: https://nextcloud.com/install/#install-clients

View File

@ -1,124 +0,0 @@
belitex 翻译中
8 Python packages that will simplify your life with Django
======
This month's Python column looks at Django packages that will benefit your work, personal, or side projects.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V)
Django developers, we're devoting this month's Python column to packages that will help you. These are our favorite [Django][1] libraries for saving time, cutting down on boilerplate code, and generally simplifying our lives. We've got six packages for Django apps and two for Django's REST Framework, and we're not kidding when we say these packages show up in almost every project we work on.
But first, see our tips for making the [Django Admin more secure][2] and an article on 5 favorite [open source Django packages][3].
### A kitchen sink of useful time-savers: django-extensions
[Django-extensions][4] is a favorite Django package chock full of helpful tools like these management commands:
* **shell_plus** starts the Django shell with all your database models already loaded. No more importing from several different apps to test one complex relationship!
* **clean_pyc** removes all .pyc files from everywhere inside your project directory.
* **create_template_tags** creates a template tag directory structure inside the app you specify.
* **describe_form** displays a form definition for a model, which you can then copy/paste into forms.py. (Note that this produces a regular Django form, not a ModelForm.)
* **notes** displays all comments with stuff like TODO, FIXME, etc. throughout your project.
Django-extensions also includes useful abstract base classes to use for common patterns in your own models. Inherit from these base classes when you create your models to get their extra fields and behavior (a short sketch follows this list):
* **TimeStampedModel**: This base class includes the fields **created** and **modified** and a **save()** method that automatically updates these fields appropriately.
* **ActivatorModel**: If your model will need fields like **status**, **activate_date**, and **deactivate_date**, use this base class. It comes with a manager that enables **.active()** and **.inactive()** querysets.
* **TitleDescriptionModel** and **TitleSlugDescriptionModel**: These include the **title** and **description** fields, and the latter also includes a **slug** field. The **slug** field will automatically populate based on the **title** field.
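As a rough illustration, here is a minimal sketch of inheriting from **TimeStampedModel** (the `Article` model and its fields are hypothetical, just for the example):
```
from django.db import models
from django_extensions.db.models import TimeStampedModel


class Article(TimeStampedModel):
    # the `created` and `modified` fields, and the save() method that
    # keeps them up to date, are inherited from TimeStampedModel
    title = models.CharField(max_length=200)
    body = models.TextField()
```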
Django-extensions has more features you may find useful in your projects, so take a tour through its [docs][5]!
### 12-factor-app settings: django-environ
[Django-environ][6] allows you to use [12-factor app][7] methodology to manage your settings in your Django project. It collects other libraries, including [envparse][8] and [honcho][9]. Once you install django-environ, create a .env file at your project's root. Define in that file any settings variables that may change between environments or should remain secret (like API keys, debug status, and database URLs).
Then, in your project's settings.py file, import **environ** and set up variables for **environ.Path()** and **environ.Env()** according to the [example][10]. Access settings variables defined in your .env file with **env('VARIABLE_NAME')**.
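As a rough sketch of that setup, assuming your .env file defines `SECRET_KEY` and `DATABASE_URL` (both variable names are just examples), settings.py might contain:
```
# settings.py: minimal django-environ usage
import environ

root = environ.Path(__file__) - 2        # project root, two levels up from settings.py
env = environ.Env(DEBUG=(bool, False))   # cast DEBUG to bool, defaulting to False
environ.Env.read_env(root('.env'))       # load variables from the .env file

DEBUG = env('DEBUG')
SECRET_KEY = env('SECRET_KEY')
DATABASES = {'default': env.db('DATABASE_URL')}  # parse the database URL into settings
```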
### Creating great management commands: django-click
[Django-click][11], based on [Click][12] (which we have recommended [before][13]… [twice][14]), helps you write Django management commands. This library doesn't have extensive documentation, but it does have a directory of [test commands][15] in its repository that are pretty useful. A basic Hello World command would look like this:
```
# app_name.management.commands.hello.py
import djclick as click
@click.command()
@click.argument('name')
def command(name):
    click.secho(f'Hello, {name}')
```
Then in the command line, run:
```
>> ./manage.py hello Lacey
Hello, Lacey
```
### Handling finite state machines: django-fsm
[Django-fsm][16] adds support for finite state machines to your Django models. If you run a news website and need articles to process through states like Writing, Editing, and Published, django-fsm can help you define those states and manage the rules and restrictions around moving from one state to another.
Django-fsm provides an FSMField to use for the model attribute that defines the model instance's state. Then you can use django-fsm's **@transition** decorator to define methods that move the model instance from one state to another and handle any side effects from that transition.
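A minimal sketch of that pattern, using a hypothetical news-article model (the state names here are just examples), might look like this:
```
from django.db import models
from django_fsm import FSMField, transition


class Article(models.Model):
    state = FSMField(default='writing')

    @transition(field=state, source='writing', target='editing')
    def submit_for_editing(self):
        # side effects of the transition (e.g., notifying an editor) go here
        pass

    @transition(field=state, source='editing', target='published')
    def publish(self):
        pass
```
Calling `article.submit_for_editing()` moves an instance from the writing state to editing; calling a transition from a state not listed in its `source` raises a `TransitionNotAllowed` error.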
Although django-fsm is light on documentation, [Workflows (States) in Django][17] is a gist that serves as an excellent introduction to both finite state machines and django-fsm.
### Contact forms: django-contact-form
A contact form is such a standard thing on a website. But don't write all that boilerplate code yourself—set yours up in minutes with [django-contact-form][18]. It comes with an optional spam-filtering contact form class (and a regular, non-filtering class) and a **ContactFormView** base class with methods you can override or customize, and it walks you through the templates you will need to create to make your form work.
### Registering and authenticating users: django-allauth
[Django-allauth][19] is an app that provides views, forms, and URLs for registering users, logging them in and out, resetting their passwords, and authenticating users with outside sites like GitHub or Twitter. It supports email-as-username authentication and is extensively documented. It can be a little confusing to set up the first time you use it; follow the [installation instructions][20] carefully and read closely when you [customize your settings][21] to make sure you're using all the settings you need to enable a specific feature.
### Handling user authentication with Django REST Framework: django-rest-auth
If your Django development includes writing APIs, you're probably using [Django REST Framework][22] (DRF). If you're using DRF, you should check out [django-rest-auth][23], a package that enables endpoints for user registration, login/logout, password reset, and social media authentication (by adding django-allauth, which works well with django-rest-auth).
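Wiring it up is mostly a matter of including its URL configurations; here is a minimal sketch, assuming `rest_auth` (and `rest_auth.registration`) is listed in your `INSTALLED_APPS`:
```
# urls.py
from django.urls import include, path

urlpatterns = [
    path('rest-auth/', include('rest_auth.urls')),  # login, logout, password reset
    path('rest-auth/registration/', include('rest_auth.registration.urls')),  # signup
]
```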
### Visualizing a Django REST Framework API: django-rest-swagger
[Django REST Swagger][24] provides a feature-rich user interface for interacting with your Django REST Framework API. Once you've installed Django REST Swagger and added it to installed apps, add the Swagger view and URL pattern to your urls.py file; the rest is taken care of in the docstrings of your APIs.
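For example, a minimal urls.py wiring might look like the sketch below (assuming `rest_framework_swagger` is installed and added to `INSTALLED_APPS`; the title and path are arbitrary):
```
# urls.py
from django.urls import path
from rest_framework_swagger.views import get_swagger_view

schema_view = get_swagger_view(title='My API')  # builds the Swagger UI view

urlpatterns = [
    path('docs/', schema_view),  # the interactive API documentation lives here
]
```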
![](https://opensource.com/sites/default/files/uploads/swagger-ui.png)
The UI for your API will include all your endpoints and available methods broken out by app. It will also list available operations for those endpoints and enable you to interact with the API (adding/deleting/fetching records, for example). It uses the docstrings in your API views to generate documentation for each endpoint, creating a set of API documentation for your project that's useful to you, your frontend developers, and your users.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/django-packages
作者:[Jeff Triplett][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laceynwilliams
[1]: https://www.djangoproject.com/
[2]: https://opensource.com/article/18/1/10-tips-making-django-admin-more-secure
[3]: https://opensource.com/business/15/12/5-favorite-open-source-django-packages
[4]: https://django-extensions.readthedocs.io/en/latest/
[5]: https://django-extensions.readthedocs.io/
[6]: https://django-environ.readthedocs.io/en/latest/
[7]: https://www.12factor.net/
[8]: https://github.com/rconradharris/envparse
[9]: https://github.com/nickstenning/honcho
[10]: https://django-environ.readthedocs.io/
[11]: https://github.com/GaretJax/django-click
[12]: http://click.pocoo.org/5/
[13]: https://opensource.com/article/18/9/python-libraries-side-projects
[14]: https://opensource.com/article/18/5/3-python-command-line-tools
[15]: https://github.com/GaretJax/django-click/tree/master/djclick/test/testprj/testapp/management/commands
[16]: https://github.com/viewflow/django-fsm
[17]: https://gist.github.com/Nagyman/9502133
[18]: https://django-contact-form.readthedocs.io/en/1.5/
[19]: https://django-allauth.readthedocs.io/en/latest/
[20]: https://django-allauth.readthedocs.io/en/latest/installation.html
[21]: https://django-allauth.readthedocs.io/en/latest/configuration.html
[22]: http://www.django-rest-framework.org/
[23]: https://django-rest-auth.readthedocs.io/
[24]: https://django-rest-swagger.readthedocs.io/en/latest/

View File

@ -0,0 +1,113 @@
Distributed tracing in a microservices world
======
What is distributed tracing and why is it so important in a microservices environment?
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pixelated-world.png?itok=fHjM6m53)
[Microservices][1] have become the default choice for greenfield applications. After all, according to practitioners, microservices provide the type of decoupling required for a full digital transformation, allowing individual teams to innovate at a far greater speed than ever before.
Microservices are nothing more than regular distributed systems, only at a larger scale. Therefore, they exacerbate the well-known problems that any distributed system faces, like lack of visibility into a business transaction across process boundaries.
Given that it's extremely common to have multiple versions of a single service running in production at the same time—be it in an [A/B testing][2] scenario or as part of rolling out a new release following the [Canary release][3] technique—when we account for the fact that we are talking about hundreds of services, it's clear that what we have is chaos. It's almost impossible to map the interdependencies and understand the path of a business transaction across services and their versions.
### Observability
This chaos ends up being a good thing, as long as we can observe what's going on and diagnose the problems that will eventually occur.
A system is said to be observable when we can understand its state based on the [metrics, logs, and traces][4] it emits. Given that we are talking about distributed systems, knowing the state of a single instance of a single service isn't enough; we need to be able to aggregate the metrics for all instances of a given service, perhaps grouped by version. Metrics solutions like [Prometheus][5] are very popular in tackling this aspect of the observability problem. Similarly, we need logs to be stored in a central location, as it's impossible to analyze the logs from the individual instances of each service. [Logstash][6] is usually applied here, in combination with a backing storage like [Elasticsearch][7]. And finally, we need to get end-to-end traces to understand the path a given transaction has taken. This is where distributed tracing solutions come into play.
### Distributed tracing
In monolithic web applications, logging frameworks provide enough capabilities to do a basic root-cause analysis when something fails. A developer just needs to place log statements in the code. Information like "context" (usually "thread") and "timestamp" are automatically added to the log entry, making it easier to understand the execution of a given request and correlate the entries.
```
Thread-1 2018-09-03T15:52:54+02:00 Request started
Thread-2 2018-09-03T15:52:55+02:00 Charging credit card x321
Thread-1 2018-09-03T15:52:55+02:00 Order submitted
Thread-1 2018-09-03T15:52:56+02:00 Charging credit card x123
Thread-1 2018-09-03T15:52:57+02:00 Changing order status
Thread-1 2018-09-03T15:52:58+02:00 Dispatching event to inventory
Thread-1 2018-09-03T15:52:59+02:00 Request finished
```
We can safely say that the second log entry above is not related to the other entries, as it's being executed in a different thread.
In microservices architectures, logging alone fails to deliver the complete picture. Is this service the first one in the call chain? And what happened at the inventory service (where we apparently dispatched an event)?
A common strategy to answer this question is creating an identifier at the very first building block of our transaction and propagating this identifier across all the calls, probably by sending it as an HTTP header whenever a remote call is made.
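As a rough, framework-agnostic illustration of that strategy (the `X-Correlation-ID` header name, the inventory URL, and the use of the `requests` library are all just examples, not part of any standard):
```
import uuid

import requests


def handle_request(incoming_headers):
    # reuse the caller's ID when present; otherwise this service is the
    # first building block of the transaction, so mint a new one
    correlation_id = incoming_headers.get('X-Correlation-ID', str(uuid.uuid4()))
    print(correlation_id, 'Order', 'Dispatching event to inventory')
    # propagate the same ID on every outgoing remote call
    requests.post('http://inventory.example/events',
                  headers={'X-Correlation-ID': correlation_id})
```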
In a central log collector, we could then see entries like the ones below. Note how we could log the correlation ID (the first column in our example), so we know that the second entry is not related to the other entries.
```
abc123 Order     2018-09-03T15:52:58+02:00 Dispatching event to inventory
def456 Order     2018-09-03T15:52:58+02:00 Dispatching event to inventory
abc123 Inventory 2018-09-03T15:52:59+02:00 Received `order-submitted` event
abc123 Inventory 2018-09-03T15:53:00+02:00 Checking inventory status
abc123 Inventory 2018-09-03T15:53:01+02:00 Updating inventory
abc123 Inventory 2018-09-03T15:53:02+02:00 Preparing order manifest
```
This technique is one of the concepts at the core of any modern distributed tracing solution, but it's not really new; correlating log entries is decades old, probably as old as "distributed systems" itself.
What sets distributed tracing apart from regular logging is that the data structure that holds tracing data is more specialized, so we can also identify causality. Looking at the log entries above, it's hard to tell if the last step was caused by the previous entry, if they were performed concurrently, or if they share the same caller. Having a dedicated data structure also allows distributed tracing to record not only a message at a single point in time but also the start and end time of a given procedure.
![Trace showing spans][9]
Trace showing spans similar to the logs described above
[Click to enlarge][10]
Most of the modern distributed tracing tools are inspired by a 2010 [paper about Dapper][11], the distributed tracing solution used at Google. In that paper, the data structure described above was called a span, and you can see nine of them in the image above. This particular "forest" of spans is called a trace and is equivalent to the correlated log entries we've seen before.
The image above is a screenshot of a trace displayed in [Jaeger][12], an open source distributed tracing solution hosted by the [Cloud Native Computing Foundation (CNCF)][13]. It marks each service with a color to make it easier to see the process boundaries. Timing information can be easily visualized, both by looking at the macro timeline at the top of the screen or at the individual spans, giving a sense of how long each span takes and how impactful it is in this particular execution. It's also easy to observe when processes are asynchronous and therefore may outlive the initial request.
Like with logging, we need to annotate or instrument our code with the data we want to record. Unlike logging, we record spans instead of messages and do some demarcation to know when the span starts and finishes so we can get accurate timing information. As we would probably like to have our business code independent from a specific distributed tracing implementation, we can use an API such as [OpenTracing][14], leaving the decision about the concrete implementation as a packaging or runtime concern. Following is pseudo-Java code showing such demarcation.
```
try (Scope scope = tracer.buildSpan("submitOrder").startActive(true)) {
    scope.span().setTag("order-id", "c85b7644b6b5");
    chargeCreditCard();
    changeOrderStatus();
    dispatchEventToInventory();
}
```
Given the nature of the distributed tracing concept, it's clear the code executed "between" our business services can also be part of the trace. For instance, we could [turn on][15] the distributed tracing integration for [Istio][16], a service mesh solution that helps in the communication between microservices, and we'll suddenly have a better picture about the network latency and routing decisions made at this layer. Another example is the work done in the OpenTracing community to provide instrumentation for popular stacks, frameworks, and APIs, such as Java's [JAX-RS][17], [Spring Cloud][18], or [JDBC][19]. This enables us to see how our business code interacts with the rest of the middleware, understand where a potential problem might be happening, and identify the best areas to improve. In fact, today's middleware instrumentation is so rich that it's common to get started with distributed tracing by using only the so-called "framework instrumentation," leaving the business code free from any tracing-related code.
While a microservices architecture is almost unavoidable nowadays for established companies to innovate faster and for ambitious startups to achieve web scale, it's easy to feel helpless while conducting a root cause analysis when something eventually fails and the right tools aren't available. The good news is tools like Prometheus, Logstash, OpenTracing, and Jaeger provide the pieces to bring observability to your application.
Juraci Paixão Kröhling will present [What are My Microservices Doing?][20] at [Open Source Summit Europe][21], October 22-24 in Edinburgh, Scotland.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/distributed-tracing-microservices-world
作者:[Juraci Paixão Kröhling][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jpkroehling
[1]: https://en.wikipedia.org/wiki/Microservices
[2]: https://en.wikipedia.org/wiki/A/B_testing
[3]: https://martinfowler.com/bliki/CanaryRelease.html
[4]: https://blog.twitter.com/engineering/en_us/a/2016/observability-at-twitter-technical-overview-part-i.html
[5]: https://prometheus.io/
[6]: https://github.com/elastic/logstash
[7]: https://github.com/elastic/elasticsearch
[8]: /file/409621
[9]: https://opensource.com/sites/default/files/uploads/distributed-trace.png (Trace showing spans)
[10]: /sites/default/files/uploads/trace.png
[11]: https://ai.google/research/pubs/pub36356
[12]: https://www.jaegertracing.io/
[13]: https://www.cncf.io/
[14]: http://opentracing.io/
[15]: https://istio.io/docs/tasks/telemetry/distributed-tracing/
[16]: https://istio.io/
[17]: https://github.com/opentracing-contrib/java-jaxrs
[18]: https://github.com/opentracing-contrib/java-spring-cloud
[19]: https://github.com/opentracing-contrib/java-jdbc
[20]: https://osseu18.sched.com/event/FxW3/what-are-my-microservices-doing-juraci-paixao-krohling-red-hat#
[21]: https://osseu18.sched.com/

View File

@ -0,0 +1,136 @@
Clinews: Read News And Latest Headlines From Commandline
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg)
A while ago, we wrote about a CLI news client named [**InstantNews**][1] that helps you read news and the latest headlines from the commandline instantly. Today, I stumbled upon a similar utility named **Clinews**, which serves the same purpose: reading news and the latest headlines from popular websites and blogs in the Terminal. You don't need to install GUI applications or mobile apps. You can read what's happening in the world right from your Terminal. It is a free, open source utility written using **NodeJS**.
### Installing Clinews
Since Clinews is written using NodeJS, you can install it using the NPM package manager. If you haven't installed NodeJS yet, install it as described in the following link.
Once Node is installed, run the following command to install Clinews:
```
$ npm i -g clinews
```
You can also install Clinews using **Yarn**:
```
$ yarn global add clinews
```
Yarn itself can be installed using npm:
```
$ npm install -g yarn
```
### Configure News API
Clinews retrieves all news headlines from [**News API**][2]. News API is a simple and easy-to-use API that returns JSON metadata for the headlines currently published on a range of news sources and blogs. It currently provides live headlines from 70 popular sources, including Ars Technica, BBC, Bloomberg, CNN, Daily Mail, Engadget, ESPN, Financial Times, Google News, Hacker News, IGN, Mashable, National Geographic, Reddit r/all, Reuters, Spiegel Online, TechCrunch, The Guardian, The Hindu, The Huffington Post, The New York Times, The Next Web, The Wall Street Journal, USA Today and [**more**][3].
First, you need an API key from News API. Go to [**https://newsapi.org/register**][4] URL and register a free account to get the API key.
Once you've got the API key from the News API site, edit your **.bashrc** file:
```
$ vi ~/.bashrc
```
Add the News API key at the end like below:
```
export IN_API_KEY="Paste-API-key-here"
```
Please note that you need to paste the key inside the double quotes. Save and close the file.
Run the following command to update the changes.
```
$ source ~/.bashrc
```
Done. Now let us go ahead and fetch the latest headlines from news sources.
### Read News And Latest Headlines From Commandline
To read news and latest headlines from a specific news source, for example **The Hindu**, run:
```
$ news fetch the-hindu
```
Here, **“the-hindu”** is the news source id (fetch id).
The above command will fetch the latest 10 headlines from The Hindu news portal and display them in the Terminal. It also displays a brief description of the news, the published date and time, and the actual link to the source.
**Sample output:**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-1.png)
To read a news story in your browser, hold the Ctrl key and click on its URL. It will open in your default web browser.
To view all the sources you can get news from, run:
```
$ news sources
```
**Sample output:**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-2.png)
As you see in the above screenshot, Clinews lists all news sources including the name of the news source, fetch id, description of the site, website URL and the country where it is located. As of writing this guide, Clinews currently supports 70+ news sources.
Clinews is also able to search for news stories across all sources matching a search criteria/term. For example, to list all news stories with titles containing the word **“Tamilnadu”**, use the following command:
```
$ news search "Tamilnadu"
```
This command will scrape all news sources for stories that match the term **Tamilnadu**.
Clinews has some extra flags that help you to
* limit the amount of news stories you want to see,
* sort news stories (top, latest, popular),
* display news stories category wise (E.g. business, entertainment, gaming, general, music, politics, science-and-nature, sport, technology)
For more details, see the help section:
```
$ clinews -h
```
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/
[2]: https://newsapi.org/
[3]: https://newsapi.org/sources
[4]: https://newsapi.org/register

View File

@ -0,0 +1,108 @@
Control your data with Syncthing: An open source synchronization tool
======
Decide how to store and share your personal information.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
These days, some of our most important possessions—from pictures and videos of family and friends to financial and medical documents—are data. And even as cloud storage services are booming, there are concerns about privacy and lack of control over our personal data. From the PRISM surveillance program to Google [letting app developers scan your personal emails][1], the news is full of reports that should give us all pause regarding the security of our personal information.
[Syncthing][2] can help put your mind at ease. An open source peer-to-peer file synchronization tool that runs on Linux, Windows, Mac, Android, and others (sorry, no iOS), Syncthing uses its own protocol, called [Block Exchange Protocol][3]. In brief, Syncthing lets you synchronize your data across many devices without owning a server.
### Linux
In this post, I will explain how to install and synchronize files between a Linux computer and an Android phone.
Syncthing is readily available for most popular distributions. Fedora 28 includes the latest version.
To install Syncthing in Fedora, you can either search for it in Software Center or execute the following command:
```
sudo dnf install syncthing syncthing-gtk
```
Once it's installed, open it. You'll be welcomed by an assistant that helps configure Syncthing. Click **Next** until it asks to configure the WebUI. The safest option is to keep the option **Listen on localhost**. That will disable the web interface and keep unauthorized users away.
![Syncthing in Setup WebUI dialog box][5]
Syncthing in Setup WebUI dialog box
Close the dialog. Now that Syncthing is installed, it's time to share a folder, connect a device, and start syncing. But first, let's continue with your other client.
### Android
Syncthing is available in Google Play and in F-Droid app stores.
![](https://opensource.com/sites/default/files/uploads/syncthing2.png)
Once the application is installed, youll be welcomed by a wizard. Grant Syncthing permissions to your storage. You might be asked to disable battery optimization for this application. It is safe to do so as we will optimize the app to synchronize only when plugged in and connected to a wireless network.
Click on the main menu icon and go to **Settings**, then **Run Conditions**. Tick **Always run in the background**, **Run only when charging**, and **Run only on wifi**. Now your Android client is ready to exchange files with your devices.
There are two important concepts to remember in Syncthing: folders and devices. Folders are what you want to share, but you must have a device to share with. Syncthing allows you to share individual folders with different devices. Devices are added by exchanging device IDs. A device ID is a unique, cryptographically secure identifier that is created when Syncthing starts for the first time.
### Connecting devices
Now let's connect your Linux machine and your Android client.
In your Linux computer, open Syncthing, click on the **Settings** icon and click **Show ID**. A QR code will show up.
In your Android mobile, open Syncthing. In the main screen, click the **Devices** tab and press the **+** symbol. In the first field, press the QR code symbol to open the QR scanner.
Point your mobile camera to the computer QR code. The **Device ID** field will be populated with your desktop client Device ID. Give it a friendly name and save. Because adding a device goes two ways, you now need to confirm on the computer client that you want to add the Android mobile. It might take a couple of minutes for your computer client to ask for confirmation. When it does, click **Add**.
![](https://opensource.com/sites/default/files/uploads/syncthing6.png)
In the **New Device** window, you can verify and configure some options about your new device, like the **Device Name** and **Addresses**. If you keep dynamic, it will try to auto-discover the device IP, but if you want to force one, you can add it in this field. If you already created a folder (more on this later), you can also share it with this new device.
![](https://opensource.com/sites/default/files/uploads/syncthing7.png)
Your computer and Android are now paired and ready to exchange files. (If you have more than one computer or mobile phone, simply repeat these steps.)
### Sharing folders
Now that the devices you want to sync are connected, it's time to share a folder. You can share folders on your computer, and the devices you add to that folder will get a copy.
To share a folder, go to **Settings** and click **Add Shared Folder**:
![](https://opensource.com/sites/default/files/uploads/syncthing8.png)
In the next window, enter the information of the folder you want to share:
![](https://opensource.com/sites/default/files/uploads/syncthing9.png)
You can use any label you want. The **Folder ID** will be generated randomly and will be used to identify the folder between the clients. In **Path**, click **Browse** and locate the folder you want to share. If you want Syncthing to monitor the folder for changes (such as deletes, new files, etc.), click **Monitor filesystem for changes**.
Remember, when you share a folder, any change that happens on the other clients will be reflected on every single device. That means that if you share a folder containing pictures with other computers or mobile devices, changes in these other clients will be reflected everywhere. If this is not what you want, you can make your folder “Send Only” so it will send files to the clients, but the other clients' changes won't be synced.
When this is done, go to **Share with Devices** and select the hosts you want to sync with your folder:
All the devices you select will need to accept the share request; you will get a notification from the devices:
Just as when you shared the folder, you must configure the new shared folder:
![](https://opensource.com/sites/default/files/uploads/syncthing12.png)
Again, here you can define any label, but the folder ID must match on each client. In the folder option, select the destination for the folder and its files. Remember that any change done in this folder will be reflected on every device allowed in the folder.
These are the steps to connect devices and share folders with Syncthing. It might take a few minutes to start copying, depending on your network settings or if you are not on the same network.
Syncthing offers many more great features and options. Try it—and take control of your data.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/take-control-your-data-syncthing
作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mzamot
[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695
[2]: https://syncthing.net/
[3]: https://docs.syncthing.net/specs/bep-v1.html
[4]: /file/410191
[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png (Syncthing in Setup WebUI dialog box)

View File

@ -0,0 +1,104 @@
Gunpoint is a Delight for Stealth Game Fans
======
Gunpoint is a 2D stealth game in which you play as a spy stealing secrets and hacking networks like Ethan Hunt of the Mission Impossible movie series.
<https://youtu.be/QMS3s3xZFlY>
Hi, fellow Linux gamers. Let's take a look at a fun stealth game: [Gunpoint][1].
Gunpoint is neither free nor open source. It is an independent game you can purchase directly from the creator or from Steam.
![][2]
### The Interesting History of Gunpoint
> The instant success of Gunpoint enabled its creator to become a full time game developer.
Gunpoint is a stealth game created by [Tom Francis][3]. Francis was inspired to create the game after he heard about Spelunky, which was created by one person. Francis played games as part of his day job, as an editor for PC Gamer UK magazine. He had no previous programming experience but used the easy-to-use Game Maker. He planned to create a demo with the hopes of getting a job as a developer.
He released his first prototype in May 2010 under the name Private Dick. Based on the response, Francis continued to work on the game. The final version was released in June of 2013 to high praise.
In a [blog post][4] weeks after Gunpoint's launch, Francis revealed that he made back all the money he spent on development ($30 for Game Maker 8) in 64 seconds. Francis didn't reveal Gunpoint's sales figures, but he did quit his job and today creates [games][5] full time.
### Experiencing the Gunpoint Gameplay
![Gunpoint Gameplay][6]
Like I said earlier, Gunpoint is a stealth game. You play a freelance spy named Richard Conway. As Conway, you will use a pair of Bullfrog hypertrousers to infiltrate buildings for clients. The hypertrousers allow you to jump very high, even through windows. You can also cling to walls or ceilings like a ninja.
Another tool you have is the Crosslink, which allows you to rewire circuits. Often you will need to use the Crosslink to rewire motion detectors to unlock doors instead of setting off an alarm, or to rewire a light switch to turn off the light on another floor to distract a guard.
When you sneak into a building, your biggest concern is the on-site security guards. If they see Conway, they will shoot, and in this game it's one shot, one kill. You can jump off a three-story building no problem, but bullets will take you down. Thankfully, if Conway is killed you can just jump back a few seconds and try again.
Along the way, you will earn money to upgrade your tools and unlock new features. For example, I just unlocked the ability to rewire a guard's gun. Don't ask me how that works.
### Minimum System Requirements
Here are the minimum system requirements for Gunpoint:
##### Linux
* Processor: 2GHz
* Memory: 1GB RAM
* Video card: 512MB
* Hard Drive: 700MB HD space
##### Windows
* OS: Windows XP, Vista, 7 or 8
* Processor: 2GHz
* Memory: 1GB RAM
* Video card: 512MB
* DirectX®: 9.0
* Hard Drive: 700MB HD space
##### macOS
* OS: OS X 10.7 or later
* Processor: 2GHz
* Memory: 1GB RAM
* Video card: 512MB
* Hard Drive: 700MB HD space
### Thoughts on Gunpoint
![Gunpoint game on Linux][7]
Image Courtesy: Steam Community
Gunpoint is a very fun game. The early levels are easy to get through, but the later levels make you put your thinking cap on. The hypertrousers and Crosslink are fun to play with. There is nothing like turning the lights off on a guard and bouncing over his head to hack a terminal.
Besides the fun mechanics, it also has an interesting [noir][8] murder mystery story. Several different (and conflicting) clients hire you to look into different aspects of the case. Some of them seem to have ulterior motives that are not in your best interest.
I always enjoy good mysteries and this one is no different. If you like noir or platforming games, be sure to check out [Gunpoint][1].
Have you ever played Gunpoint? What other games should we review for your entertainment? Let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][9].
--------------------------------------------------------------------------------
via: https://itsfoss.com/gunpoint-game-review/
作者:[John Paul][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]: http://www.gunpointgame.com/
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gunpoint.jpg
[3]: https://www.pentadact.com/
[4]: https://www.pentadact.com/2013-06-18-gunpoint-recoups-development-costs-in-64-seconds/
[5]: https://www.pentadact.com/2014-08-09-what-im-working-on-and-what-ive-done/
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gunpoint-gameplay-1.jpeg
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gunpoint-game-1.jpeg
[8]: https://en.wikipedia.org/wiki/Noir_fiction
[9]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,169 @@
5 ways to play old-school games on a Raspberry Pi
======
Relive the golden age of gaming with these open source platforms for Raspberry Pi.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32)
They don't make 'em like they used to, do they? Video games, I mean.
Sure, there's a bit more grunt in the gear now. Princess Zelda used to be 16 pixels in each direction; there's now enough graphics power for every hair on her head. Today's processors could beat up 1988's processors in a cage-fight deathmatch without breaking a sweat.
But you know what's missing? The fun.
You've got a squillion and one buttons to learn just to get past the tutorial mission. There's probably a storyline, too. You shouldn't need a backstory to kill bad guys. All you need is jump and shoot. So, it's little wonder that one of the most enduring popular uses for a Raspberry Pi is to relive the 8- and 16-bit golden age of gaming in the '80s and early '90s. But where to start?
There are a few ways to play old-school games on the Pi. Each has its strengths and weaknesses, which I'll discuss here.
### Retropie
[Retropie][1] is probably the most popular retro-gaming platform for the Raspberry Pi. It's a solid all-rounder and a great default option for emulating classic desktop and console gaming systems.
#### What is it?
Retropie is built to run on [Raspbian][2]. It can also be installed over an existing Raspbian image if you'd prefer. It uses [EmulationStation][3] as a graphical front-end for a library of open source emulators, including the [Libretro][4] emulators.
You don't need to understand a word of that to play your games, though.
#### What's great about it
It's very easy to get started. All you need to do is burn the image to an SD card, configure your controllers, copy your games over, and start killing bad guys.
The huge user base means that there is a wealth of support and information out there, and active online communities to turn to for questions.
In addition to the emulators that come installed with the Retropie image, there's a huge library of emulators you can install from the package manager, and it's growing all the time. Retropie also offers a user-friendly menu system to manage this, saving you time.
From the Retropie menu, it's easy to add Kodi and the Raspbian desktop, which comes with the Chromium web browser. This means your retro-gaming rig is also good for home theatre, [YouTube][5], [SoundCloud][6], and all those other “lounge room computer” goodies.
Retropie also has a number of other customization options: You can change the graphics in the menus, set up different control pad configurations for different emulators, make your Raspberry Pi file system visible to your local Windows network—all sorts of stuff.
Retropie is built on Raspbian, which means you have the Raspberry Pi's most popular operating system to explore. Most Raspberry Pi projects and tutorials you find floating around are written for Raspbian, making it easy to customize and install new things on it. I've used my Retropie rig as a wireless bridge, installed MIDI synthesizers on it, taught myself a bit of Python, and more—all without compromising its use as a gaming machine.
#### What's not so great about it
Retropie's simple installation and ease of use are, in a way, a double-edged sword. You can go for a long time with Retropie without ever learning simple stuff like `sudo apt-get`, which means you're missing out on a lot of the Raspberry Pi experience.
It doesn't have to be this way; the command line is still there under the hood when you want it, but perhaps users are a bit too insulated from a Bash shell that's ultimately a lot less scary than it looks. Retropie's main menu is operable only with a control pad, which can be annoying when you don't have one plugged in because you've been using the system for things other than gaming.
#### Who's it for?
Anyone who wants to get straight into some gaming, anyone who wants the biggest and best library of emulators, and anyone who wants a great way to start exploring Linux when they're not playing games.
### Recalbox
[Recalbox][7] is a newer open source suite of emulators for the Raspberry Pi. It also supports other ARM-based small-board computers.
#### What is it?
Like Retropie, Recalbox is built on EmulationStation and Libretro. Where it differs is that it's not built on Raspbian, but on its own flavor of Linux: RecalboxOS.
#### What's great about it
The setup for Recalbox is even easier than for Retropie. You don't even need to image an SD card; simply copy some files over and go. It also has out-of-the-box support for some game controllers, getting you to Level 1 that little bit faster. Kodi comes preinstalled. This is a ready-to-go gaming and media rig.
#### What's not so great about it
Recalbox has fewer emulators than Retropie, fewer customization options, and a smaller user community.
Your Recalbox rig is probably always just going to be for emulators and Kodi, the same as when you installed it. If you feel like getting deeper into Linux, you'll probably want a new SD card for Raspbian.
#### Who's it for?
Recalbox is great if you want the absolute easiest retro gaming experience and can happily go without some of the more obscure gaming platforms, or if you are intimidated by the idea of doing anything a bit technical (and have no interest in growing out of that).
For most opensource.com readers, Recalbox will probably come in most handy to recommend to your not-so-technical friend or relative. Its super-simple setup and overall lack of options might even help you avoid having to help them with it.
### Roll your own
Ok, if you've been paying attention, you might have noticed that both Retropie and Recalbox are built from many of the same open source components. So what's to stop you from putting them together yourself?
#### What is it?
Whatever you want it to be, baby. The nature of open source software means you could use an existing emulator suite as a starting point, or pilfer from them at will.
#### What's great about it
If you have your own custom interface in mind, I guess there's nothing to do but roll your sleeves up and get to it. This is also a way to install emulators that haven't quite found their way into Retropie yet, such as [BeebEm][8] or [ArcEm][9].
#### What's not so great about it
Well, it's a bit of work, isn't it?
#### Who's it for?
Hackers, tinkerers, builders, seasoned hobbyists, and such.
### Native RISC OS gaming
Now here's a dark horse: [RISC OS][10], the original operating system for ARM devices.
#### What is it?
Before ARM went on to become the world's most popular CPU architecture, it was originally built to be the heart of the Acorn Archimedes. That's kind of a forgotten beast nowadays, but for a few years it was light years ahead as the most powerful desktop computer in the world, and it attracted a lot of games development.
Because the ARM processor in the Pi is the great-grandchild of the one in the Archimedes, we can still install RISC OS on it, and with a little bit of work, get these games running. This is different to the emulator options we've covered so far because we're playing our games on the operating system and CPU architecture for which they were written.
#### What's great about it
It's the perfect introduction to RISC OS. This is an absolute gem of an operating system and well worth checking out in its own right.
The fact that you're using much the same operating system as back in the day to load and play your games makes your retro gaming rig just that little bit more of a time machine. This definitely adds some charm and retro value to the project.
There are a few superb games that were released only on the Archimedes. The massive hardware advantage of the Archimedes also means that it often had the best graphics and smoothest gameplay of a lot of multi-platform titles. The rights holders to many of these games have been generous enough to make them legally available for free download.
#### What's not so great about it
Once you have installed RISC OS, it still takes a bit of elbow grease to get the games working. Here's a [guide to getting started][11].
This is definitely not a great all-rounder for the lounge room. There's nothing like [Kodi][12]. There's a web browser, [NetSurf][13], but it's struggling to catch up to the modern web. You won't get the range of titles to play as you would with an emulator suite. RISC OS Open is free for hobbyists to download and use and much of the source code has been made open. But despite the name, it's not a 100% open source operating system.
#### Who's it for?
This one's for novelty seekers, absolute retro heads, people who want to explore an interesting operating system from the '80s, people who are nostalgic for Acorn machines from back in the day, and people who want a totally different retro gaming project.
### Command line gaming
Do you really need to install an emulator or an exotic operating system just to relive the glory days? Why not just install some native Linux games from the command line?
#### What is it?
There's a whole range of native Linux games tested to work on the [Raspberry Pi][14].
#### What's great about it
You can install most of these from packages using the command line and start playing. Easy. If you've already got Raspbian up and running, it's probably your fastest path to getting a game running.
#### What's not so great about it
This isn't, strictly speaking, actual retro gaming. Linux was born in 1991 and took a while longer to come together as a gaming platform. This isn't quite gaming from the classic 8- and 16-bit era; these are ports and retro-influenced games that were built later.
#### Who's it for?
If you're just after a bucket of fun, no problem. But if you're trying to relive the actual era, this isn't quite it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/retro-gaming-raspberry-pi
作者:[James Mawson][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dxmjames
[1]: https://retropie.org.uk/
[2]: https://www.raspbian.org/
[3]: https://emulationstation.org/
[4]: https://www.libretro.com/
[5]: https://www.youtube.com/
[6]: https://soundcloud.com/
[7]: https://www.recalbox.com/
[8]: http://www.mkw.me.uk/beebem/
[9]: http://arcem.sourceforge.net/
[10]: https://opensource.com/article/18/7/gentle-intro-risc-os
[11]: https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/
[12]: https://kodi.tv/
[13]: https://www.netsurf-browser.org/
[14]: https://www.raspberrypi.org/forums/viewtopic.php?f=78&t=51794

View File

@ -0,0 +1,114 @@
translating by Flowsnow
A Simple, Beautiful And Cross-platform Podcast App
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png)
Podcasts have become very popular in the last few years. They are what's called “infotainment”: generally light-hearted, but they give you valuable information. Podcasts have blown up recently, and if you like something, chances are there is a podcast about it. There are a lot of podcast players out there for the Linux desktop, but if you want something that is visually beautiful, has slick animations, and works on every platform, there aren't a lot of alternatives to **CPod**. CPod (formerly known as **Cumulonimbus**) is a slick, open source podcast app that works on Linux, macOS and Windows.
CPod runs on something called **Electron**, a tool that allows developers to build cross-platform (e.g. Windows, macOS and Linux) desktop GUI applications. In this brief guide, we will be discussing how to install and use the CPod podcast app in Linux.
### Installing CPod
Go to the [**releases page**][1] of CPod. Download and install the binary for your platform of choice. If you use Ubuntu/Debian, you can just download and install the .deb file from the releases page as shown below.
```
$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb
$ sudo apt update
$ sudo apt install gdebi
$ sudo gdebi CPod_1.25.7_amd64.deb
```
If you use any other distribution, you probably should use the **AppImage** in the releases page.
Download the AppImage file from the releases page.
Open your terminal, and go to the directory where the AppImage file has been stored. Change the permissions to allow execution:
```
$ chmod +x CPod-1.25.7-x86_64.AppImage
```
Execute the AppImage File:
```
$ ./CPod-1.25.7-x86_64.AppImage
```
You'll be presented with a dialog asking whether to integrate the app with the system. Click **Yes** if you want to do so.
### Features
**Explore Tab**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png)
CPod uses the Apple iTunes database to find podcasts. This is good, because the iTunes database is the biggest one out there. If there is a podcast out there, chances are it's on iTunes. To find podcasts, just use the top search bar in the Explore section. The Explore section also shows a few popular podcasts.
**Home Tab**
![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png)
The Home Tab is the tab that opens by default when you open the app. The Home Tab shows a chronological list of all the episodes of all the podcasts that you have subscribed to.
From the home tab, you can:
1. Mark episodes as read.
2. Download them for offline playing.
3. Add them to the queue.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png)
**Subscriptions Tab**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png)
You can, of course, subscribe to podcasts that you like. A few other things you can do in the Subscriptions Tab are:
1. Refresh Podcast Artwork
2. Export and Import Subscriptions to/from an .OPML file.
**The Player**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png)
The player is perhaps the most beautiful part of CPod. The app changes the overall look and feel according to the banner of the podcast. There's a sound visualiser at the bottom. To the right, you can see and search for other episodes of this podcast.
**Cons/Missing Features**
While I love this app, there are a few features and disadvantages that CPod does have:
1. Poor MPRIS integration: you can play/pause the podcast from the media player dialog of your desktop environment, but not much more. The name of the podcast is not shown, and you can't go to the next/previous episode.
2. No support for chapters.
3. No auto-downloading: you have to manually download episodes.
4. CPU usage during use is pretty high (even for an Electron app).
### Verdict
While it does have its cons, CPod is clearly the most aesthetically pleasing podcast player app out there, and it has the basic features down. If you love using visually beautiful apps and don't need the advanced features, this is the perfect app for you. I know for a fact that I'm going to use it.
Do you like CPod? Please put your opinions in the comments below!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://github.com/z-------------/CPod/releases

View File

@ -0,0 +1,171 @@
Why Linux users should try Rust
======
![](https://images.idgesg.net/images/article/2018/09/rust-rusted-metal-100773678-large.jpg)
Rust is a fairly young and modern programming language with a lot of features that make it incredibly flexible and very secure. It's also becoming quite popular, having won first place for the "most loved programming language" in the Stack Overflow Developer Survey three years in a row — [2016][1], [2017][2], and [2018][3].
Rust is also an _open-source_ language with a suite of special features that allow it to be adapted to many different programming projects. It grew out of what was a personal project of a Mozilla employee back in 2006, was picked up as a special project by Mozilla a few years later (2009), and then announced for public use in 2010.
Rust programs run incredibly fast, prevent segfaults, and guarantee thread safety. These attributes make the language tremendously appealing to developers focused on application security. Rust is also a very readable language and one that can be used for anything from simple programs to very large and complex projects.
Rust is:
* Memory safe — Rust will not suffer from dangling pointers, buffer overflows, or other memory-related errors. And it provides memory safety without garbage collection.
* General purpose — Rust is an appropriate language for any type of programming
* Fast — Rust is comparable in performance to C/C++ but with far better security features.
* Efficient — Rust is built to facilitate concurrent programming.
* Project-oriented — Rust has a built-in dependency and build management system called Cargo.
* Well supported — Rust has an impressive [support community][4].
Rust also enforces RAII (Resource Acquisition Is Initialization). That means when an object goes out of scope, its destructor will be called and its resources will be freed, providing a shield against resource leaks. It provides functional abstractions and a great [type system][5] together with speed and mathematical soundness.
In short, Rust is an impressive systems programming language with features that most other languages lack, making it a serious contender for languages like C, C++ and Objective-C that have been used for years.
### Installing Rust
Installing Rust is a fairly simple process.
```
$ curl https://sh.rustup.rs -sSf | sh
```
Once Rust is installed, calling rustc with the **--version** argument displays version information, while the **which** command shows where the binary was installed.
```
$ which rustc
/home/shs/.cargo/bin/rustc
$ rustc --version
rustc 1.27.2 (58cc626de 2018-07-18)
```
### Getting started with Rust
The simplest code example is not all that different from what you'd enter if you were using one of many scripting languages.
```
$ cat hello.rs
fn main() {
// Print a greeting
println!("Hello, world!");
}
```
In these lines, we are setting up a function (main), adding a comment describing the function, and using the println! macro to create output. You could compile and then run a program like this using the command shown below.
```
$ rustc hello.rs
$ ./hello
Hello, world!
```
Alternatively, you might create a "project" (generally used only for more complex programs than this one!) to keep your code organized.
```
$ mkdir ~/projects
$ cd ~/projects
$ mkdir hello_world
$ cd hello_world
```
Notice that even a simple program, once compiled, becomes a fairly large executable.
```
$ ./hello
Hello, world!
$ ls -l hello*
-rwxrwxr-x 1 shs shs 5486784 Sep 23 19:02 hello <== executable
-rw-rw-r-- 1 shs shs 68 Sep 23 15:25 hello.rs
```
And, of course, that's just a start — the traditional "Hello, world!" program. The Rust language has a suite of features to get you moving quickly to advanced levels of programming skill.
### Learning Rust
![rust programming language book cover][6]
No Starch Press
The Rust Programming Language book by Steve Klabnik and Carol Nichols (2018) provides one of the best ways to learn Rust. Written by two members of the core development team, this book is available in print from [No Starch Press][7] or in ebook format at [rust-lang.org][8]. It has earned the nickname "the book" among the Rust developer community.
Among the many topics covered, you will learn about these advanced topics:
* Ownership and borrowing
* Safety guarantees
* Testing and error handling
* Smart pointers and multi-threading
* Advanced pattern matching
* Using Cargo (the built-in package manager)
* Using Rust's advanced compiler
#### Table of Contents
The table of contents is shown below.
```
Foreword by Nicholas Matsakis and Aaron Turon
Acknowledgements
Introduction
Chapter 1: Getting Started
Chapter 2: Guessing Game
Chapter 3: Common Programming Concepts
Chapter 4: Understanding Ownership
Chapter 5: Structs
Chapter 6: Enums and Pattern Matching
Chapter 7: Modules
Chapter 8: Common Collections
Chapter 9: Error Handling
Chapter 10: Generic Types, Traits, and Lifetimes
Chapter 11: Testing
Chapter 12: An Input/Output Project
Chapter 13: Iterators and Closures
Chapter 14: More About Cargo and Crates.io
Chapter 15: Smart Pointers
Chapter 16: Concurrency
Chapter 17: Is Rust Object Oriented?
Chapter 18: Patterns
Chapter 19: More About Lifetimes
Chapter 20: Advanced Type System Features
Appendix A: Keywords
Appendix B: Operators and Symbols
Appendix C: Derivable Traits
Appendix D: Macros
Index
```
[The Rust Programming Language][7] takes you from basic installation and language syntax to complex topics, such as error handling, crates (roughly what other languages call a library or package), modules (which let you partition your code within the crate itself), lifetimes, and more.
Probably the most important thing to say is that the book can move you from basic programming skills to building and compiling complex, secure and very useful programs.
### Wrap-up
If you're ready to get into some serious programming with a language that's well worth the time and effort to study and is becoming increasingly popular, Rust is a good bet!
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3308162/linux/why-you-should-try-rust.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]: https://insights.stackoverflow.com/survey/2016#technology-most-loved-dreaded-and-wanted
[2]: https://insights.stackoverflow.com/survey/2017#technology-most-loved-dreaded-and-wanted-languages
[3]: https://insights.stackoverflow.com/survey/2018#technology-most-loved-dreaded-and-wanted-languages
[4]: https://www.rust-lang.org/en-US/community.html
[5]: https://doc.rust-lang.org/reference/type-system.html
[6]: https://images.idgesg.net/images/article/2018/09/rust-programming-language_book-cover-100773679-small.jpg
[7]: https://nostarch.com/Rust
[8]: https://doc.rust-lang.org/book/2018-edition/index.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world


@ -0,0 +1,208 @@
9 Easiest Ways To Find Out Process ID (PID) In Linux
======
Everybody has heard of a PID, but what exactly is it? Why do you need a PID, and what can you do with one? If you have these questions on your mind, you are in the right place to get the details.
Most often, we look up a PID in order to kill an unresponsive program, much as we would in the Windows Task Manager. Linux GUIs offer the same feature, but the CLI is a more efficient way to perform the kill operation.
### What Is Process ID?
PID stands for process identification number which is generally used by most operating system kernels such as Linux, Unix, macOS and Windows. It is a unique identification number that is automatically assigned to each process when it is created in an operating system. A process is a running instance of a program.
**Suggested Read:**
* [How To Find Out Which Port Number A Process Is Using In Linux][1]
* [3 Easy Ways To Kill Or Terminate A Process In Linux][2]
PIDs change from boot to boot for every process except init, because init is always the first process on the system and is the ancestor of all other processes; its PID is 1.
The default maximum PID value is `32768`. You can verify this on your system by running `cat /proc/sys/kernel/pid_max`. On 32-bit systems, 32768 is the hard maximum, but on 64-bit systems it can be raised to any value up to 2^22 (approximately 4 million).
Why do we need such a large range of PIDs? Because PIDs cannot be reused immediately; a large range helps prevent possible collisions and errors.
The PID of a running process can be found using any of the following nine methods: the pidof, pgrep, ps, pstree, ss, netstat, lsof, fuser, and systemctl commands.
* `pidof` — find the process ID of a running program.
* `pgrep` — look up or signal processes based on name and other attributes.
* `ps` — report a snapshot of the current processes.
* `pstree` — display a tree of processes.
* `ss` — dump socket statistics.
* `netstat` — display a list of open sockets.
* `lsof` — list open files.
* `fuser` — list the process IDs of all processes that have one or more files open.
* `systemctl` — control the systemd system and service manager.
In this tutorial, we are going to find the process ID of Apache as a test case. Make sure to substitute your own process name.
### Method-1 : Using pidof Command
pidof finds the process ID of a running program and prints those IDs on standard output. To demonstrate this, we are going to find the Apache2 process ID on a Debian 9 (stretch) system.
```
# pidof apache2
3754 2594 2365 2364 2363 2362 2361
```
From the above output, you may find it difficult to pick out the right process ID because pidof shows all the PIDs (parent and children) for the process name. We need the parent PID (PPID), which is the one we are looking for. Since the output is sorted in descending order, it is the first number; in my case it is `3754`.
### Method-2 : Using pgrep Command
pgrep looks through the currently running processes and lists the process IDs which match the selection criteria to stdout.
```
# pgrep apache2
2361
2362
2363
2364
2365
2594
3754
```
This is similar to the above output, but pgrep sorts the results in ascending order, so the parent PID is the last one; in my case it is `3754`.
**Note:** If a process has more than one PID, you may have trouble identifying the parent PID using the pidof and pgrep commands alone.
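One way around that ambiguity is pgrep's `-o` flag, which selects only the oldest matching process, normally the parent; a quick sketch reusing the PIDs from this example:
```
# pgrep -o apache2    # oldest matching process (usually the parent)
3754
# pgrep -n apache2    # newest matching process, for comparison
2594
```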
### Method-3 : Using pstree Command
pstree shows the running processes as a tree. The tree is rooted at either the given PID or at init if the PID is omitted. If a user name is specified, pstree shows all processes owned by that user.
pstree visually merges identical branches by putting them in square brackets and prefixing them with the repetition count.
```
# pstree -p | grep "apache2"
|-apache2(3754)-|-apache2(2361)
| |-apache2(2362)
| |-apache2(2363)
| |-apache2(2364)
| |-apache2(2365)
| `-apache2(2594)
```
To get the parent process alone, use the following format:
```
# pstree -p | grep "apache2" | head -1
|-apache2(3754)-|-apache2(2361)
```
The pstree command makes this very simple because it visually separates the parent and child processes, something that is not easy to see with the pidof and pgrep commands.
### Method-4 : Using ps Command
ps displays information about a selection of the active processes. It displays the process ID (pid=PID), the terminal associated with the process (tname=TTY), the cumulated CPU time in [DD-]hh:mm:ss format (time=TIME), and the executable name (ucmd=CMD). Output is unsorted by default.
```
# ps aux | grep "apache2"
www-data 2361 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
www-data 2362 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
www-data 2363 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
www-data 2364 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
www-data 2365 0.0 0.4 302652 8400 ? S 06:25 0:00 /usr/sbin/apache2 -k start
www-data 2594 0.0 0.4 302652 8400 ? S 06:55 0:00 /usr/sbin/apache2 -k start
root 3754 0.0 1.4 302580 29324 ? Ss Dec11 0:23 /usr/sbin/apache2 -k start
root 5648 0.0 0.0 12784 940 pts/0 S+ 21:32 0:00 grep apache2
```
From the above output, we can easily identify the parent process ID (PPID) based on the process start date. In my case, the apache2 process started on `Dec11` is the parent and the others are its children, so the PID of apache2 is `3754`.
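If you would rather not eyeball start dates, ps can print each process's PPID directly; a small sketch, with the output abridged to two processes for illustration:
```
# ps -o pid,ppid,user,cmd -C apache2
  PID  PPID USER     CMD
 3754     1 root     /usr/sbin/apache2 -k start
 2361  3754 www-data /usr/sbin/apache2 -k start
```
The parent is the process whose PPID is not another apache2 PID; here that is `3754`, whose parent is PID 1.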
### Method-5: Using ss Command
ss is used to dump socket statistics. It shows information similar to netstat but can display more TCP and state information than other tools.
It can display statistics for all kinds of sockets, such as PACKET, TCP, UDP, DCCP, RAW, Unix domain, etc.
```
# ss -tnlp | grep apache2
LISTEN 0 128 :::80 :::* users:(("apache2",pid=3319,fd=4),("apache2",pid=3318,fd=4),("apache2",pid=3317,fd=4))
```
### Method-6: Using netstat Command
netstat prints network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
By default, netstat displays a list of open sockets.
If you don't specify any address families, then the active sockets of all configured address families will be printed. This program is obsolete; its replacement is ss.
```
# netstat -tnlp | grep apache2
tcp6 0 0 :::80 :::* LISTEN 3317/apache2
```
### Method-7: Using lsof Command
lsof lists open files. The Linux lsof command lists information about files that are opened by processes running on the system.
```
# lsof -i -P | grep apache2
apache2 3317 root 4u IPv6 40518 0t0 TCP *:80 (LISTEN)
apache2 3318 www-data 4u IPv6 40518 0t0 TCP *:80 (LISTEN)
apache2 3319 www-data 4u IPv6 40518 0t0 TCP *:80 (LISTEN)
```
### Method-8: Using fuser Command
The fuser utility shall write to standard output the process IDs of processes running on the local system that have one or more named files open.
```
# fuser -v 80/tcp
USER PID ACCESS COMMAND
80/tcp: root 3317 F.... apache2
www-data 3318 F.... apache2
www-data 3319 F.... apache2
```
### Method-9: Using systemctl Command
systemctl controls the systemd system and service manager. systemd is the replacement for the old SysV init system, and most modern Linux operating systems have adopted it.
```
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; disabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: active (running) since Tue 2018-09-25 10:03:28 IST; 3s ago
Process: 3294 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 3317 (apache2)
Tasks: 55 (limit: 4915)
Memory: 7.9M
CPU: 71ms
CGroup: /system.slice/apache2.service
├─3317 /usr/sbin/apache2 -k start
├─3318 /usr/sbin/apache2 -k start
└─3319 /usr/sbin/apache2 -k start
Sep 25 10:03:28 ubuntu systemd[1]: Starting The Apache HTTP Server...
Sep 25 10:03:28 ubuntu systemd[1]: Started The Apache HTTP Server.
```
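If you only want the main PID rather than the full status output, systemctl can print that single property; a small sketch:
```
# systemctl show --property=MainPID apache2
MainPID=3317
```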
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/9-methods-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[1]: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-using-in-linux/
[2]: https://www.2daygeek.com/kill-terminate-a-process-in-linux-using-kill-pkill-killall-command/


@ -0,0 +1,80 @@
translating---geekpi
Hegemon A Modular System Monitor Application Written In Rust
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png)
When it comes to monitoring running processes in Unix-like systems, the most commonly used applications are **top** and **htop**, which is an enhanced version of top. My personal favorite is htop. However, developers release alternatives to these applications every now and then. One such alternative to the top and htop utilities is **Hegemon**, a modular system monitor application written in the **Rust** programming language.
Concerning the features of Hegemon, we can list the following:
* Hegemon monitors the usage of CPU, memory, and swap.
* It monitors the system's temperature and fan speed.
* The update interval is adjustable; the default is 3 seconds.
* More detailed graphs and additional information can be revealed by expanding the data streams.
* Unit tests
* Clean interface
* Free and open source.
### Installing Hegemon
Make sure you have installed **Rust 1.26** or a later version. To install Rust in your Linux distribution, refer to the following guide:
[Install Rust Programming Language In Linux][2]
Also, install [libsensors][1] library. It is available in the default repositories of most Linux distributions. For example, you can install it in RPM based systems such as Fedora using the following command:
```
$ sudo dnf install lm_sensors-devel
```
On Debian-based systems like Ubuntu, Linux Mint, it can be installed using command:
```
$ sudo apt-get install libsensors4-dev
```
Once you have installed Rust and libsensors, install Hegemon using the command:
```
$ cargo install hegemon
```
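If your shell cannot find hegemon afterwards, note that cargo installs binaries into `~/.cargo/bin` by default; a sketch for a bash-style shell (the path shown is a placeholder):
```
$ export PATH="$HOME/.cargo/bin:$PATH"   # make cargo-installed binaries reachable
$ command -v hegemon                     # verify the binary is on the PATH
/home/user/.cargo/bin/hegemon
```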
Once Hegemon is installed, start monitoring the running processes on your Linux system using the command:
```
$ hegemon
```
Here is the sample output from my Arch Linux desktop.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif)
To exit, press **Q**.
Please be mindful that Hegemon is still in its early development stage and is not a complete replacement for the **top** command. There might be bugs and missing features. If you come across any bugs, report them on the project's GitHub page. The developer is planning to bring more features in the upcoming versions. So, keep an eye on this project.
And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://github.com/lm-sensors/lm-sensors
[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/


@ -0,0 +1,88 @@
How to Boot Ubuntu 18.04 / Debian 9 Server in Rescue (Single User mode) / Emergency Mode
======
Booting a Linux server into single-user mode or **rescue mode** is one of the important troubleshooting techniques that a Linux admin follows while recovering the server from critical conditions. In Ubuntu 18.04 and Debian 9, single-user mode is known as rescue mode.
Apart from rescue mode, Linux servers can be booted in **emergency mode**. The main difference between them is that emergency mode loads a minimal environment with a read-only root file system and does not enable networking or any other services, whereas rescue mode tries to mount all the local file systems and tries to start some important services, including networking.
In this article we will discuss how we can boot our Ubuntu 18.04 LTS / Debian 9 Server in rescue mode and emergency mode.
#### Booting Ubuntu 18.04 LTS Server in Single User / Rescue Mode:
Reboot your server, go to the boot loader (GRUB) screen, and select “**Ubuntu**”. The boot loader screen will look like this:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg)
Press “**e**”, then go to the end of the line which starts with the word “**linux**” and append “**systemd.unit=rescue.target**”. Remove the word “**$vt_handoff**” if it exists.
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg)
Now press Ctrl-x or F10 to boot:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg)
Now press Enter, and you will get a shell where all file systems are mounted in read-write mode; do your troubleshooting there. Once you are done with troubleshooting, you can reboot your server using the “**reboot**” command.
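As an aside, if the server is still up and reachable, systemd can switch to these targets directly, without the GRUB editing described above; a quick sketch:
```
# systemctl rescue                      # switch the running system to rescue mode
# systemctl isolate emergency.target    # or drop straight to emergency mode
```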
#### Booting Ubuntu 18.04 LTS Server in emergency mode
Reboot the server, go to the boot loader screen, select “**Ubuntu**”, then press “**e**”, go to the end of the line which starts with the word linux, and append “**systemd.unit=emergency.target**”.
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg)
Now press Ctrl-x or F10 to boot into emergency mode. You will get a shell and can do your troubleshooting from there. As discussed earlier, in emergency mode the file systems are mounted in read-only mode and there is no networking:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg)
Use the command below to remount the root file system in read-write mode:
```
# mount -o remount,rw /
```
Similarly, you can remount the rest of the file systems in read-write mode.
#### Booting Debian 9 into Rescue & Emergency Mode
Reboot your Debian 9.x server, go to the GRUB screen, and select “**Debian GNU/Linux**”:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg)
Press “**e**” and go to the end of the line which starts with the word linux. To boot the system in rescue mode, append “**systemd.unit=rescue.target**”; to boot in emergency mode, append “**systemd.unit=emergency.target**”.
#### Rescue mode :
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg)
Now press Ctrl-x or F10 to boot in rescue mode:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg)
Press Enter to get the shell; from there you can start troubleshooting.
#### Emergency Mode:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg)
Now press Ctrl-x or F10 to boot your system in emergency mode:
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg)
Press Enter to get the shell and use the “**mount -o remount,rw /**” command to mount the root file system in read-write mode.
**Note:** If a root password is already set on your Ubuntu 18.04 or Debian 9 server, you must enter it to get a shell in rescue and emergency mode.
That's all for this article; please share your feedback and comments if you found it useful.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/
作者:[Pradeep Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linuxtechi.com/author/pradeep/


@ -0,0 +1,160 @@
How to Replace one Linux Distro With Another in Dual Boot [Guide]
======
**If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.**
![How to Replace One Linux Distribution With Another From Dual Boot][1]
Suppose you managed to [successfully dual boot Ubuntu and Windows][2]. But after reading the [Linux Mint versus Ubuntu discussion][3], you realized that [Linux Mint][4] is more suited for your needs. What would you do now? How would you [remove Ubuntu][5] and [install Mint in dual boot][6]?
You might think that you need to uninstall [Ubuntu][7] from the dual boot first and then repeat the dual booting steps with Linux Mint. Let me tell you something: you don't need to do all of that.
If you already have a Linux distribution installed in dual boot, you can easily replace it with another. You don't have to uninstall the existing Linux distribution. You simply delete its partition and install the new distribution on the disk space vacated by the previous distribution.
Another piece of good news is that you may be able to keep your Home directory, with all your documents and pictures, while switching Linux distributions.
Let me show you how to switch Linux distributions.
### Replace one Linux with another from dual boot
<https://youtu.be/ptF2RUehbKs>
Let me describe the scenario I am going to use here. I have Linux Mint 19 installed on my system in dual boot mode with Windows 10. I am going to replace it with elementary OS 5. I'll also keep my personal files (music, pictures, videos, documents from my home directory) while switching distributions.
Let's first take a look at the requirements:
* A system with Linux and Windows dual boot
* Live USB of Linux you want to install
* Backup of your important files in Windows and in Linux on an external disk (optional yet recommended)
#### Things to keep in mind for keeping your home directory while changing Linux distribution
If you want to keep the files from your existing Linux install as they are, you must have separate root and home partitions. You might have noticed that in my [dual boot tutorials][8], I always go for the 'Something else' option and then manually create root and home partitions instead of choosing the 'Install alongside Windows' option. This is where all the trouble of manually creating a separate home partition pays off.
Keeping Home on a separate partition is helpful in situations when you want to replace your existing Linux install with another without losing your files.
Note: You must remember the exact username and password of your existing Linux install in order to use the same home directory as-is in the new distribution.
If you don't have a separate Home partition, you may create one later as well, BUT I won't recommend that. That process is slightly complicated, and I don't want you to mess up your system.
With that much background information, it's time to see how to replace one Linux distribution with another.
#### Step 1: Create a live USB of the new Linux distribution
Alright! I already mentioned it in the requirements but I still included it in the main steps to avoid confusion.
You can create a live USB using a startup disk creator like [Etcher][9] in Windows or Linux. The process is simple, so I am not going to list the steps here.
#### Step 2: Boot into live USB and proceed to installing Linux
Since you have already dual booted before, you probably know the drill. Plug in the live USB, restart your system, and at boot time, press F10 or F12 repeatedly to enter the BIOS settings.
Here, choose to boot from the USB. You'll then see the option to try the live environment or install it immediately.
Start the installation procedure. When you reach the 'Installation type' screen, choose the 'Something else' option.
![Replacing one Linux with another from dual boot][10]
Select Something else here
#### Step 3: Prepare the partition
You'll see the partitioning screen now. Look closely and you'll see your Linux installation with the Ext4 file system type.
![Identifying Linux partition in dual boot][11]
Identify where your Linux is installed
In the above picture, the Ext4 partition labeled Linux Mint 19 is the root partition. The second Ext4 partition, of 82691 MB, is the Home partition. I [haven't used any swap space][12] here.
Now, if you have just one Ext4 partition, that means your home directory is on the same partition as root. In this case, you won't be able to keep your Home directory. I suggest that you copy the important files to an external disk, or else you'll lose them forever.
It's time to delete the root partition. Select the root partition and click the '' sign. This will create some free space.
![Delete root partition of your existing Linux install][13]
Delete root partition
When you have the free space, click on the '+' sign.
![Create root partition for the new Linux][14]
Create a new root partition
Now you should create a new partition out of this free space. If you had just one root partition in your previous Linux install, you should create separate root and home partitions here. You can also create a swap partition if you want to.
If you had separate root and home partitions before, just create a new root partition from the space left by the deleted root partition.
![Create root partition for the new Linux][15]
Creating root partition
You may ask why I used delete and add instead of the 'Change' option. It's because a few years ago, using 'Change' didn't work for me, so I prefer to do a '' and '+'. Is it superstition? Maybe.
One important thing to do here is to mark the newly created partition for formatting. If you don't change the size of the partition, it won't be formatted unless you explicitly ask for it. And if the partition is not formatted, you'll have issues.
![][16]
It's important to format the root partition
Now, if you already had a separate Home partition on your existing Linux install, select it and click on 'Change'.
![Recreate home partition][17]
Retouch the already existing home partition (if any)
You just have to specify that you are mounting it as the home partition.
![Specify the home mount point][18]
Specify the home mount point
If you had a swap partition, you can repeat the same steps as for the home partition. This time, specify that you want to use the space as swap.
At this stage, you should have a root partition (with format option selected) and a home partition (and a swap if you want to). Hit the install now button to start the installation.
![Verify partitions while replacing one Linux with another][19]
Verify the partitions
The next few screens will be familiar to you. What matters is the screen where you are asked to create a user and password.
If you had a separate home partition previously and you want to use the same home directory, you MUST use the same username and password that you had before. The computer name doesn't matter.
![To keep the home partition intact, use the previous user and password][20]
To keep the home partition intact, use the previous user and password
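One caveat worth a quick sketch: reusing the home directory as-is also assumes the new user gets the same UID (usually 1000 for the first user created). If files in your home directory ever show up owned by the wrong user, you can reclaim them from a terminal; `alice` here is a placeholder username:
```
$ id -u                                  # show the current user's UID
1000
$ sudo chown -R alice:alice /home/alice  # reclaim ownership of the home directory
```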
Your struggle is almost over. You don't have to do anything else other than wait for the installation to finish.
![Wait for installation to finish][21]
Wait for installation to finish
Once the installation is over, restart your system. You'll have a new Linux distribution or version.
In my case, the entire home directory of Linux Mint 19 was there as-is in elementary OS. All the videos and pictures I had remained as they were. Isn't that nice?
--------------------------------------------------------------------------------
via: https://itsfoss.com/replace-linux-from-dual-boot/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Replace-Linux-Distro-from-dual-boot.png
[2]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[3]: https://itsfoss.com/linux-mint-vs-ubuntu/
[4]: https://www.linuxmint.com/
[5]: https://itsfoss.com/uninstall-ubuntu-linux-windows-dual-boot/
[6]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
[7]: https://www.ubuntu.com/
[8]: https://itsfoss.com/guide-install-elementary-os-luna/
[9]: https://etcher.io/
[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-1.jpg
[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-2.jpg
[12]: https://itsfoss.com/swap-size/
[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-3.jpg
[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-4.jpg
[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-5.jpg
[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-6.jpg
[17]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-7.jpg
[18]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-8.jpg
[19]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-9.jpg
[20]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-10.jpg
[21]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-11.jpg


@ -0,0 +1,161 @@
Taking the Audiophile Linux distro for a spin
======
This lightweight open source audio OS offers a rich feature set and high-quality digital sound.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_givingmusic.jpg?itok=xVKF1dlb)
I recently stumbled on the [Audiophile Linux project][1], one of a number of special-purpose music-oriented Linux distributions. Audiophile Linux:
1. is based on [ArchLinux][2]
2. provides a real-time Linux kernel customized for playing music
3. uses the lightweight [Fluxbox][3] window manager
4. avoids unnecessary daemons and services
5. allows playback of DSF and supports the usual PCM formats
6. supports various music players, including one of my favorite combos: MPD + Cantata
The Audiophile Linux site hasn't shown a lot of activity since April 2017, but it does contain some updates and commentary from this year. Given its orientation and feature set, I decided to take it for a spin on my old Toshiba laptop.
### Installing Audiophile Linux
The site provides [a clear set of install instructions][4] that require the use of the terminal. The first step after downloading the .iso is burning it to a USB stick. I used the GNOME Disks utility's Restore Disk Image for this purpose. Once I had the USB set up and ready to go, I plugged it into the Toshiba and booted it. When the splash screen came up, I set the boot device to the USB stick and a minute or so later, the Arch Grub menu was displayed. I booted Linux from that menu, which put me in a root shell session, where I could carry out the install to the hard drive:
![](https://opensource.com/sites/default/files/uploads/root_shell_session.jpg)
I was willing to sacrifice the 320-GB hard drive in the Toshiba for this test, so I was able to use the previous Linux partitioning (from the last experiment). I then proceeded as follows:
```
fdisk -l              # find the disk / partition, in my case /dev/sda and /dev/sda1
mkfs.ext4 /dev/sda1   # build the ext4 filesystem in the root partition
mount /dev/sda1 /mnt  # mount the new file system
time cp -ax / /mnt    # copy over the OS
        # reported back cp -ax / /mnt 1.36s user 136.54s system 88% cpu 2:36.37 total
arch-chroot /mnt /bin/bash # run in the new system root
cd /etc/apl-files
./runme.sh            # do the rest of the install
grub-install --target=i386-pc /dev/sda # make the new OS bootable part 1
grub-mkconfig -o /boot/grub/grub.cfg   # part 2
passwd root           # set root's password
ln -s /usr/share/zoneinfo/America/Vancouver /etc/localtime # set my time zone
hwclock --systohc --utc # update the hardware clock
./autologin.sh        # set the system up so that it automatically logs in
exit                  # done with the chroot session
genfstab -U /mnt >> /mnt/etc/fstab # create the fstab for the new system
```
At that point, I was ready to boot the new operating system, so I did—and voilà, up came the system!
![](https://opensource.com/sites/default/files/uploads/audiophile_linux.jpg)
### Finishing the configuration
Once Audiophile Linux was up and running, I needed to [finish the configuration][4] and load some music. Grabbing the application menu by right-clicking on the screen background, I started **X-terminal** and entered the remaining configuration commands:
```
ping 8.8.8.8 # check connectivity (works fine)
su # become root
pacman-key init # create pacman's encryption data part 1
pacman-key --populate archlinux # part 2
pacman -Sy # part 3
pacman -S archlinux-keyring # part 4
```
At this point, the install instructions note that there is a problem with updating software with the `pacman -Suy` command, and that first the **libxfont** package must be removed using `pacman -Rc libxfont`. I followed this instruction, but the second run of `pacman -Suy` led to another dependency error, this time with the **x265** package. I looked further down the page in the install instructions and saw this recommendation:
_Again there is an error in upstream repo of Arch packages. Try to remove conflicting packages with “pacman -R ffmpeg2.8” and then do pacman -Suy later._
I chose to use `pacman -Rc ffmpeg2.8`, and then reran `pacman -Suy`. (As an aside, typing all these **pacman** commands made me realize how familiar I am with **apt**, and how much this whole process made me feel like I was trying to write an email in some language I don't know using an online translator.)
To be clear, here was my sequence of operations:
```
pacman -Suy # failed
pacman -Rc libxfont
pacman -Suy # failed, again
pacman -Rc ffmpeg2.8 # uninstalled Cantata, have to fix that later!
pacman -Suy # worked!
```
Now back to the rest of the instructions:
```
pacman -S terminus-font
pacman -S xorg-server
pacman -S firefox # the docs suggested installing chromium but I prefer FF
reboot
```
And the last little bit: fiddling with `/etc/fstab` to avoid access-time modifications. I also thought I'd try installing [Cantata][5] once more using `pacman -S cantata`, and it worked just fine (no `ffmpeg2.8` problems).
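For the curious, the access-time tweak is typically the `noatime` mount option; a sketch of what such an fstab entry might look like, with a placeholder UUID:
```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  0 1
```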
I found the `DAC Setup > List cards` on the application menu, which showed the built-in Intel sound hardware plus my USB DAC that I had plugged in earlier. Then I selected `DAC Setup > Edit mpd.conf` and adjusted the output stanza of `mpd.conf`. I used `scp` to copy an album over from my main music server into **~/Music**. And finally, I used the application menu `DAC Setup > Restart mpd`. And… nothing… the **conky** info on the screen indicated “MPD not responding”. So I scanned again through the comments at the bottom of the installation instructions and spotted this:
_After every update of mpd, you have to do:_
_1. Become root_
```
$ su
```
_2. Run these commands:_
```
# cat /etc/apl-files/mpd.service > /usr/lib/systemd/system/mpd.service
# systemctl daemon-reload
# systemctl restart mpd.service
```
_And this will be fixed._
![](https://opensource.com/sites/default/files/uploads/library.png)
And it works! Right now I'm enjoying [Nils Frahm's "All Melody"][6] from the album of the same name, playing over my [Schiit Fulla 2][7] in glorious high-resolution sound. Time to copy in some more music so I can give it a better listen.
So… does it sound better than the same DAC connected to my regular work laptop and playing back through [Guayadeque][8] or [GogglesMM][9]? I'm going to see if I can detect a difference at some point, but right now all I can say is it sounds just wonderful; plus [I like the Cantata / mpd combo a lot][10], and I really enjoy having the heads-up display in the upper right of the screen.
### As for the music...
The other day I was reorganizing my work hard drive a bit and I decided to check to make sure that 1) all the music on it was also on the house music servers and 2) _vice versa_ (gotta set up `rsync` for that purpose one day soon). In doing so, I found some music I hadn't enjoyed for a while, which is kind of like buying a brand-new album, except it costs much less.
[Six Degrees Records][11] has long been one of my favorite purveyors of unusual music. A great example is the group [Zuco 103][12]'s album [Whaa!][13], whose CD version I purchased from Six Degrees' online store some years ago. Check out [this fun documentary about the group][14].
<https://youtu.be/ncaqD92cjQ8>
For a completely different experience, take a look at the [Ragazze Quartet's performance of Terry Riley's "Four Four Three."][15] I picked up a high-resolution version of this fascinating music from [Channel Classics][16], which operates a Linux-friendly download store (no bloatware to install on your computer).
And finally, I was saddened to hear of the recent passing of [Rachid Taha][17], whose wonderful blend of North African and French musical traditions, along with his frank confrontation of the challenges of being North African and living in Europe, has made some powerful—and fun—music. Check out [Taha's version of "Rock the Casbah."][18] I have a few of his songs scattered around various compilation albums, and some time ago I bought the CD version of [Rachid Taha: The Definitive Collection][19], which I've been enjoying again recently.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/audiophile-linux-distro
作者:[Chris Hermansen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[1]: https://www.ap-linux.com/
[2]: https://www.archlinux.org/
[3]: http://fluxbox.org/
[4]: https://www.ap-linux.com/documentation/ap-linux-v4-install-instructions/
[5]: https://github.com/CDrummond/cantata
[6]: https://www.youtube.com/watch?v=1PTj1qIqcWM
[7]: https://www.audiostream.com/content/listening-session-history-lesson-bw-schiit-and-shinola-together-last
[8]: http://www.guayadeque.org/
[9]: https://gogglesmm.github.io/
[10]: https://opensource.com/article/17/8/cantata-music-linux
[11]: https://www.sixdegreesrecords.com/
[12]: https://www.sixdegreesrecords.com/?s=zuco+103
[13]: https://www.musicomh.com/reviews/albums/zuco-103-whaa
[14]: https://www.youtube.com/watch?v=ncaqD92cjQ8
[15]: https://www.youtube.com/watch?v=DwMaO7bMVD4
[16]: https://www.channelclassics.com/catalogue/37816-Riley-Four-Four-Three/
[17]: https://en.wikipedia.org/wiki/Rachid_Taha
[18]: https://www.youtube.com/watch?v=n1p_dkJo6Y8
[19]: http://www.bbc.co.uk/music/reviews/26rg/


@ -0,0 +1,90 @@
translating by belitex
3 open source distributed tracing tools
======
Find performance issues quickly with these tools, which provide a graphical view of what's happening across complex software systems.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. These systems produce graphical representations that show how much time the request took on each step and list each known step.
A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It's always better to catch regressions automatically by alerting on anomalous behavior than to have your customers tell you.
How does this tracing thing work? Well, each request gets a special ID that's usually injected into the headers. This ID uniquely identifies the transaction, which is normally called a trace; the trace is the overall abstract idea of the entire transaction. Each trace is made up of spans, which represent the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents.
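To make the header-injection idea concrete, here is a small sketch using the Zipkin-style B3 propagation headers; the endpoint and the ID values are made up for illustration:
```
$ curl http://localhost:8080/api/orders \
    -H "X-B3-TraceId: 80f198ee56343ba864fe8b2a57d3eff7" \
    -H "X-B3-SpanId: e457b5a2e4d86bd1" \
    -H "X-B3-Sampled: 1"
```
Any service that receives these headers can tag its own spans with the same trace ID and pass the context along to the next hop.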
Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we'll discuss later, but the picture below shows [Jaeger][1] from my [Istio walkthrough][2]. It shows multiple spans of a single trace. The power of this is immediately clear, as you can better understand the transaction's story at a glance.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png)
This demo uses Istio's built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible.
So what is OpenTracing? Let's find out.
### OpenTracing API
[OpenTracing][3] is a spec that grew out of [Zipkin][4] to provide cross-platform compatibility. It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog][5] and [Instana][6] are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status.
### OpenCensus
Okay, we have OpenTracing, but what is this [OpenCensus][7] thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary?
The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems.
OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter.
From my perspective, there's a lot of overlap. One isn't necessarily better than the other, but it's important to know what each does and doesn't do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation.
### Tool options
#### Zipkin
Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper][8] about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift][9] binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project.
The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe.
The [Zipkin community][10] has also created [Brave][11], a Java client implementation compatible with Zipkin. It has no dependencies, so it won't drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. The popular Spring framework has a component called [Spring Cloud Sleuth][12] that is compatible with Zipkin.
#### Jaeger
[Jaeger][1] is a newer project from Uber Technologies that the [CNCF][13] has since adopted as an Incubating project. It is written in Golang, so you don't have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard.
Jaegers architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift][14] protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI.
By default, a user won't get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn't completely random, though, and it's getting better. Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap][15], which will improve the sampling algorithm by adding additional context for making decisions.
#### Appdash
[Appdash][16] is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph][17] based on Google's Dapper and Twitter's Zipkin. Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity.
At a high level, Appdash's architecture consists mostly of three components: a client, a local collector, and a remote collector. There's not a lot of documentation, so this description comes from testing the system and reviewing the code. The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash's OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/distributed-tracing-tools
作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[1]: https://www.jaegertracing.io/
[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls
[3]: http://opentracing.io/
[4]: https://zipkin.io/
[5]: https://www.datadoghq.com/
[6]: https://www.instana.com/
[7]: https://opencensus.io/
[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf
[9]: https://thrift.apache.org/
[10]: https://zipkin.io/pages/community.html
[11]: https://github.com/openzipkin/brave
[12]: https://cloud.spring.io/spring-cloud-sleuth/
[13]: https://www.cncf.io/
[14]: https://en.wikipedia.org/wiki/Apache_Thrift
[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling
[16]: https://github.com/sourcegraph/appdash
[17]: https://about.sourcegraph.com/


@ -0,0 +1,302 @@
heguangzhi Translating
An introduction to swap space on Linux systems
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)
Swap space is a common aspect of computing today, regardless of operating system. Linux uses swap space to increase the amount of virtual memory available to a host. It can use one or more dedicated swap partitions or a swap file on a regular filesystem or logical volume.
There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off.
Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; they must first be copied into RAM, which is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer's processor, the CPU.
### Swap space
Swap space is the second type of memory in modern Linux systems. The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed.
For example, assume you have a computer system with 8GB of RAM. If you start up programs that don't fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs.
The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernel's memory management code and can be paged back into RAM if they are needed.
The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory.
### Types of Linux swap
Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap` command.
A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap` command is run to configure it as swap space. I don't recommend using a file for swap space unless absolutely necessary.
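For completeness, here is a sketch of how such a swap file is typically created; the 4GB size and the /swapfile path are example choices, not recommendations from this article:
```
dd if=/dev/zero of=/swapfile bs=1M count=4096   # preallocate a 4GB file
chmod 600 /swapfile                             # restrict access to root
mkswap /swapfile                                # configure the file as swap space
swapon /swapfile                                # enable it
```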
### Thrashing
Thrashing can occur when total virtual memory, both RAM and swap space, become nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly.
If you can manage to issue a command like `top` that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated.
After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis.
### What is the right amount of swap space?
Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.
RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM.
When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 Installation Guide, which can be found online at [Fedora Installation Guide][1], defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document.
The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage.
_Table 1: Recommended system swap space in Fedora 28 documentation_
| **Amount of system RAM** | **Recommended swap space** | **Recommended swap with hibernation** |
|--------------------------|-----------------------------|---------------------------------------|
| less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM |
| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM |
| 8 GB - 64 GB | 0.5 times the amount of RAM | 1.5 times the amount of RAM |
| more than 64 GB | workload dependent | hibernation not recommended |
At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance.
Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started.
_Table 2: Recommended system swap space per the author_
| Amount of RAM | Recommended swap space |
|---------------|------------------------|
| ≤ 2GB | 2X RAM |
| 2GB - 8GB | = RAM |
| >8GB | 8GB |
One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. This will take time and effort to experiment and make changes based on the conditions in your Linux environment.
#### Adding more swap space to a non-LVM disk environment
Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes that sufficient disk space is available. It also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
The basic steps to take are simple:
1. Turn off the existing swap space.
2. Create a new swap partition of the desired size.
3. Reread the partition table.
4. Configure the partition as swap space.
5. Add the new partition to /etc/fstab.
6. Turn on swap.
A reboot should not be necessary.
For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free` or `top` commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
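A sketch of what that check might look like; the sizes shown are illustrative, and what matters is that the Swap "used" column is at or near zero:
```
free -h
              total        used        free      shared  buff/cache   available
Mem:           7.7G        2.1G        3.9G        291M        1.7G        5.2G
Swap:          4.0G          0B        4.0G
```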
Turn off the swap partition with the following command, which turns off all swap space:
```
swapoff -a
```
Now display the existing partitions on the hard drive.
```
fdisk -l
```
This displays the current partition tables on each drive. Identify the current swap partition by number.
Start `fdisk` in interactive mode with the command:
```
fdisk /dev/<device name>
```
For example:
```
fdisk /dev/sda
```
At this point, `fdisk` is now interactive and will operate only on the specified disk drive.
Use the fdisk `p` sub-command to verify that there is enough free space on the disk to create the new swap partition. The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions.
Use the `n` sub-command to create a new swap partition. `fdisk` will ask you for the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder.
The `fdisk` command now allows you to enter the size of the partition in a number of formats, including the last cylinder number or the size in bytes, KB, or MB. For example, type in 4000M, which will give the new partition about 4GB of space, and press Enter.
Use the `p` sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk` command can only allocate disk space in increments of whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again.
Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t` allows you to specify the type of partition. So enter `t`, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter.
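Putting these sub-commands together, the interactive flow looks roughly like this (a sketch only; the prompts and defaults vary between `fdisk` versions, and `/dev/sda` is just an example):
```
fdisk /dev/sda   # start fdisk in interactive mode
# p  -> print the partition table and check the free space
# n  -> create a new partition; accept the default start, enter 4000M as the size
# t  -> set the type of the new partition to 82 (Linux swap)
# p  -> verify the new partition before writing anything to disk
```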
When you are satisfied with the partition you have created, use the `w` sub-command to write the new partition table to the disk. The `fdisk` program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk` completes writing the new partition table:
```
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
```
At this point, you use the `partprobe` command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot.
```
partprobe
```
Now use the command `fdisk -l` to list the partitions and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”.
It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this:
```
LABEL=SWAP-sdaX   swap        swap    defaults        0 0
```
where `X` is the partition number. Add a new line that looks similar to this, depending upon the location of your new swap partition:
```
/dev/sdaY         swap        swap    defaults        0 0
```
Be sure to use the correct partition number. Now you can perform the final step in creating the swap partition. Use the `mkswap` command to define the partition as a swap partition.
```
mkswap /dev/sdaY
```
The final step is to turn swap on using the command:
```
swapon -a
```
Your new swap partition is now online along with the previously existing swap partition. You can use the `free` or `top` commands to verify this.
#### Adding swap to an LVM disk environment
If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume.
Here are the steps required to increase the amount of swap space in an LVM environment:
1. Turn off all swap.
2. Increase the size of the logical volume designated for swap.
3. Configure the resized volume as swap space.
4. Turn on swap.
First, let's verify that swap exists and is a logical volume using the `lvs` command (list logical volumes).
```
[root@studentvm1 ~]# lvs
  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   fedora_studentvm1 -wi-ao----  2.00g                                                      
  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                            
  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                  
  swap   fedora_studentvm1 -wi-ao----  8.00g                                                      
  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                      
  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                      
  var    fedora_studentvm1 -wi-ao---- 10.00g                                                      
[root@studentvm1 ~]#
```
You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use.
```
swapoff -a
```
Now increase the size of the logical volume.
```
[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
  Logical volume fedora_studentvm1/swap successfully resized.
[root@studentvm1 ~]#
```
Run the `mkswap` command to make this entire 10GB volume into swap space.
```
[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap
mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10 GiB (10737414144 bytes)
no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a
[root@studentvm1 ~]#
```
Turn swap back on.
```
[root@studentvm1 ~]# swapon -a
[root@studentvm1 ~]#
```
Now verify the new swap space is present with the `lsblk` (list block devices) command. Again, a reboot is not required.
```
[root@studentvm1 ~]# lsblk
NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                    8:0    0   60G  0 disk
|-sda1                                 8:1    0    1G  0 part /boot
`-sda2                                 8:2    0   59G  0 part
  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm  
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm  
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm  
  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm  
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm  
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm  
  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP]
  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr
  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home
  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var
  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp
sr0                                   11:0    1 1024M  0 rom  
[root@studentvm1 ~]#
```
You can also use the `swapon -s` command, or `top`, `free`, or any of several other commands to verify this.
```
[root@studentvm1 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        4038808      382404     2754072        4152      902332     3404184
Swap:      10485756           0    10485756
[root@studentvm1 ~]#
```
Note that different commands display the device special file, or require it as input, in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux][2], includes more information about the /dev directory and its contents.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/swap-space-linux-systems
作者:[David Both][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/
[2]: https://opensource.com/article/16/11/managing-devices-linux

View File

@ -0,0 +1,260 @@
translating by Flowsnow
How to use the Scikit-learn Python library for data science projects
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
The Scikit-learn Python library, initially released in 2007, is commonly used in solving machine learning and data science problems—from the beginning to the end. The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation.
### What is Scikit-learn?
[Scikit-learn][1] is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following machine learning libraries:
* **NumPy**, a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations.
* **SciPy**, an ecosystem consisting of various libraries for completing technical computing tasks.
* **Matplotlib**, a library for plotting various charts and graphs.
Scikit-learn offers an extensive range of built-in algorithms that make the most of data science projects.
Here are the main ways the Scikit-learn library is used.
#### 1. Classification
The [classification][2] tools identify the category associated with provided data. For example, they can be used to categorize email messages as either spam or not.

Classification algorithms in Scikit-learn include:

* Support vector machines (SVMs)
* Nearest neighbors
* Random forest

#### 2. Regression

Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices.
Regression algorithms include:
* SVMs
* Ridge regression
* Lasso
#### 3. Clustering
The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities.
Clustering algorithms include:
* K-means
* Spectral clustering
* Mean-shift
#### 4. Dimensionality reduction
Dimensionality reduction lowers the number of random variables for analysis. For example, to increase the efficiency of visualizations, outlying data may not be considered.
Dimensionality reduction algorithms include:
* Principal component analysis (PCA)
* Feature selection
* Non-negative matrix factorization
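As a quick illustration (a sketch that is not part of the original article), PCA can compress the four Iris flower measurements used later in this tutorial down to two components:
```
from sklearn import datasets
from sklearn.decomposition import PCA

iris = datasets.load_iris()

# project the four flower measurements down to two principal components
pca = PCA(n_components=2)
reduced = pca.fit_transform(iris.data)
print(reduced.shape)  # prints (150, 2)
```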
#### 5. Model selection
Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects.
Model selection modules that can deliver enhanced accuracy through parameter tuning include:
* Grid search
* Cross-validation
* Metrics
#### 6. Preprocessing
The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis.
Preprocessing modules include:
* Preprocessing
* Feature extraction
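To see how these categories fit together, here is a minimal sketch (again, an addition rather than part of the original article) that combines a classification algorithm with cross-validation-based model selection on the Iris dataset introduced below; the classifier choice and its parameters are illustrative:
```
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()

# a nearest-neighbors classifier (classification) scored with
# 5-fold cross-validation (model selection)
model = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(model, iris.data, iris.target, cv=5)
print(scores.mean())  # mean accuracy across the five folds
```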
### A Scikit-learn library example
Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects.
We'll use the [Iris flower dataset][3], which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 samples from three flower species:
* Setosa—labeled 0
* Versicolor—labeled 1
* Virginica—labeled 2
The dataset includes the following characteristics of each flower species (in centimeters):
* Sepal length
* Sepal width
* Petal length
* Petal width
#### Step 1: Importing the library
Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows:
```
from sklearn import datasets
iris = datasets.load_iris()
```
These commands import the **datasets** module from **sklearn**, then use the **load_iris()** method from **datasets** to load the data into the workspace.
#### Step 2: Getting dataset characteristics
The **datasets** module contains several methods that make it easier to get acquainted with handling data.
In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored using the **.data** key, which is an array list.
For instance, we can utilize **iris.data** to output information about the Iris flower dataset.
```
print(iris.data)
```
Here is the output (the results have been truncated):
```
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]
 [5.4 3.9 1.7 0.4]
 [4.6 3.4 1.4 0.3]
 [5.  3.4 1.5 0.2]
 [4.4 2.9 1.4 0.2]
 [4.9 3.1 1.5 0.1]
 [5.4 3.7 1.5 0.2]
 [4.8 3.4 1.6 0.2]
 [4.8 3.  1.4 0.1]
 [4.3 3.  1.1 0.1]
 [5.8 4.  1.2 0.2]
 [5.7 4.4 1.5 0.4]
 [5.4 3.9 1.3 0.4]
 [5.1 3.5 1.4 0.3]
```
Let's also use **iris.target** to give us information about the different labels of the flowers.
```
print(iris.target)
```
Here is the output:
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
```
If we use **iris.target_names** , we'll output an array of the names of the labels found in the dataset.
```
print(iris.target_names)
```
Here is the result after running the Python code:
```
['setosa' 'versicolor' 'virginica']
```
#### Step 3: Visualizing the dataset
We can use the [box plot][4] to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through their quartiles.
Here's how to achieve this:
```
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(rc={'figure.figsize': (2, 15)})  # set the figure size before the figure is created
box_data = iris.data      # variable representing the data array
box_target = iris.target  # variable representing the labels array
sns.boxplot(data=box_data, width=0.5, fliersize=5)
plt.show()  # display the figure when running as a script
```
Let's see the result:
![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png)
On the horizontal axis:
* 0 is sepal length
* 1 is sepal width
* 2 is petal length
* 3 is petal width
The vertical axis is dimensions in centimeters.
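If you prefer named ticks over the 0-3 indices, one option (an addition on top of the tutorial's code, to be placed before `plt.show()`) is to reuse the dataset's own feature names:
```
import matplotlib.pyplot as plt

# label the four tick positions with the dataset's feature names
plt.xticks(range(4), iris.feature_names, rotation=30)
plt.ylabel("centimeters")
```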
### Wrapping up
Here is the entire code for this simple Scikit-learn data science tutorial.
```
from sklearn import datasets
import seaborn as sns
import matplotlib.pyplot as plt

iris = datasets.load_iris()
print(iris.data)
print(iris.target)
print(iris.target_names)

sns.set(rc={'figure.figsize': (2, 15)})  # set the figure size before the figure is created
box_data = iris.data      # variable representing the data array
box_target = iris.target  # variable representing the labels array
sns.boxplot(data=box_data, width=0.5, fliersize=5)
plt.show()
```
Scikit-learn is a versatile Python library you can use to efficiently complete data science projects.
If you want to learn more, check out the tutorials on [LiveEdu][5], such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application][6].
Do you have any questions or comments? Feel free to share them below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects
作者:[Dr.Michael J.Garbade][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/drmjg
[1]: http://scikit-learn.org/stable/index.html
[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/
[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set
[4]: https://en.wikipedia.org/wiki/Box_plot
[5]: https://www.liveedu.tv/guides/data-science/
[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/

View File

@ -0,0 +1,441 @@
How To Find And Delete Duplicate Files In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png)
I always back up my configuration files or any old files to somewhere on my hard disk before editing or modifying them, so I can restore them from the backup if I accidentally do something wrong. But the problem is that I forget to clean up those files, and after a certain period of time my hard disk is filled with a lot of duplicate files. I feel either too lazy to clean up the old files or afraid that I may delete an important file. If you're anything like me and are overwhelmed by multiple copies of the same files in different backup directories, you can find and delete duplicate files in Unix-like operating systems using the tools given below.
**A word of caution:**
Please be careful while deleting duplicate files. If you're not careful, it will lead to [**accidental data loss**][1]. I advise you to pay extra attention while using these tools.
### Find And Delete Duplicate Files In Linux
For the purpose of this guide, I am going to discuss three utilities, namely:

1. Rdfind
2. Fdupes
3. FSlint

These three utilities are free, open source, and work on most Unix-like operating systems.
##### 1. Rdfind
**Rdfind**, which stands for **r**edundant **d**ata **find**, is a free and open source utility to find duplicate files across and/or within directories and sub-directories. It compares files based on their content, not on their file names. Rdfind uses a **ranking** algorithm to classify original and duplicate files. If you have two or more identical files, Rdfind is smart enough to find which is the original file and consider the rest of the files as duplicates. Once it finds the duplicates, it reports them to you. You can decide to either delete them or replace them with [**hard links** or **symbolic (soft) links**][2].
**Installing Rdfind**
Rdfind is available in [**AUR**][3]. So, you can install it in Arch-based systems using any AUR helper program like [**Yay**][4] as shown below.
```
$ yay -S rdfind
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install rdfind
```
On Fedora:
```
$ sudo dnf install rdfind
```
On RHEL, CentOS:
```
$ sudo yum install epel-release
$ sudo yum install rdfind
```
**Usage**
Once installed, simply run Rdfind command along with the directory path to scan for the duplicate files.
```
$ rdfind ~/Downloads
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png)
As you see in the above screenshot, the Rdfind command will scan the ~/Downloads directory and save the results in a file named **results.txt** in the current working directory. You can view the names of the possible duplicate files in the results.txt file.
```
$ cat results.txt
# Automatically generated
# duptype id depth size device inode priority name
DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex
DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex
[...]
DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
# end of file
```
By reviewing the results.txt file, you can easily find the duplicates. You can remove the duplicates manually if you want to.
Also, you can use the **-dryrun** option to find all duplicates in a given directory without changing anything and output the summary in your Terminal:
```
$ rdfind -dryrun true ~/Downloads
```
Once you've found the duplicates, you can replace them with either hardlinks or symlinks.
To replace all duplicates with hardlinks, run:
```
$ rdfind -makehardlinks true ~/Downloads
```
To replace all duplicates with symlinks/soft links, run:
```
$ rdfind -makesymlinks true ~/Downloads
```
You may have some empty files in a directory and want to ignore them. If so, use **-ignoreempty** option like below.
```
$ rdfind -ignoreempty true ~/Downloads
```
If you don't want the old files anymore, just delete the duplicate files instead of replacing them with hard or soft links.
To delete all duplicates, simply run:
```
$ rdfind -deleteduplicates true ~/Downloads
```
If you do not want to ignore empty files and delete them along with all duplicates, run:
```
$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads
```
For more details, refer to the help section:
```
$ rdfind --help
```
And, the manual pages:
```
$ man rdfind
```
##### 2. Fdupes
**Fdupes** is yet another command-line utility to identify and remove duplicate files within specified directories and their sub-directories. It is a free, open source utility written in the **C** programming language. Fdupes identifies duplicates by comparing file sizes, partial MD5 signatures, full MD5 signatures, and finally performing a byte-by-byte comparison for verification.
Similar to the Rdfind utility, Fdupes comes with quite a few options to perform operations such as the following (a combined example appears after the list):
* Recursively search duplicate files in directories and sub-directories
* Exclude empty files and hidden files from consideration
* Show the size of the duplicates
* Delete duplicates immediately as they are encountered
* Exclude files with different owner/group or permission bits as duplicates
* And a lot more.
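For instance, a recursive scan that skips empty and hidden files while reporting sizes could look like this (a sketch; the directory name is just an example):
```
$ fdupes -rnAS ~/Backups
```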
**Installing Fdupes**
Fdupes is available in the default repositories of most Linux distributions.
On Arch Linux and its variants like Antergos, Manjaro Linux, install it using Pacman like below.
```
$ sudo pacman -S fdupes
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install fdupes
```
On Fedora:
```
$ sudo dnf install fdupes
```
On RHEL, CentOS:
```
$ sudo yum install epel-release
$ sudo yum install fdupes
```
**Usage**
Fdupes usage is pretty simple. Just run the following command to find out the duplicate files in a directory, for example **~/Downloads**.
```
$ fdupes ~/Downloads
```
Sample output from my system:
```
/home/sk/Downloads/Hyperledger.pdf
/home/sk/Downloads/Hyperledger(1).pdf
```
As you can see, I have a duplicate file in the **/home/sk/Downloads/** directory. It shows the duplicates from the parent directory only. How do you view the duplicates from sub-directories? Just use the **-r** option like below.
```
$ fdupes -r ~/Downloads
```
Now you will see the duplicates from **/home/sk/Downloads/** directory and its sub-directories as well.
Fdupes can also find duplicates in multiple directories at once.
```
$ fdupes ~/Downloads ~/Documents/ostechnix
```
You can even search multiple directories, one recursively like below:
```
$ fdupes ~/Downloads -r ~/Documents/ostechnix
```
The above command searches for duplicates in the “~/Downloads” directory and in the “~/Documents/ostechnix” directory and its sub-directories.
Sometimes, you might want to know the size of the duplicates in a directory. If so, use **-S** option like below.
```
$ fdupes -S ~/Downloads
403635 bytes each:
/home/sk/Downloads/Hyperledger.pdf
/home/sk/Downloads/Hyperledger(1).pdf
```
Similarly, to view the size of the duplicates in parent and child directories, use **-Sr** option.
We can exclude empty and hidden files from consideration using **-n** and **-A** respectively.
```
$ fdupes -n ~/Downloads
$ fdupes -A ~/Downloads
```
The first command will exclude zero-length files from consideration and the latter will exclude hidden files from consideration while searching for duplicates in the specified directory.
To summarize duplicate files information, use **-m** option.
```
$ fdupes -m ~/Downloads
1 duplicate files (in 1 sets), occupying 403.6 kilobytes
```
To delete all duplicates, use **-d** option.
```
$ fdupes -d ~/Downloads
```
Sample output:
```
[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf
[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf
Set 1 of 1, preserve files [1 - 2, all]:
```
This command will prompt you for which files to preserve and will delete all other duplicates. Just enter the number of the file to preserve, and the remaining files will be deleted. Pay extra attention while using this option. You might delete original files if you're not careful.
If you want to preserve the first file in each set of duplicates and delete the others without prompting each time, use **-dN** option (not recommended).
```
$ fdupes -dN ~/Downloads
```
To delete duplicates as they are encountered, use the **-I** flag.
```
$ fdupes -I ~/Downloads
```
For more details about Fdupes, view the help section and man pages.
```
$ fdupes --help
$ man fdupes
```
##### 3. FSlint
**FSlint** is yet another duplicate file finder utility that I use from time to time to get rid of unnecessary duplicate files and free up disk space on my Linux system. Unlike the other two utilities, FSlint has both GUI and CLI modes, so it is a more user-friendly tool for newbies. FSlint finds not just duplicates, but also bad symlinks, bad names, temp files, bad user IDs, empty directories, non-stripped binaries, and more.
**Installing FSlint**
FSlint is available in [**AUR**][5], so you can install it using any AUR helpers.
```
$ yay -S fslint
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install fslint
```
On Fedora:
```
$ sudo dnf install fslint
```
On RHEL, CentOS:
```
$ sudo yum install epel-release
$ sudo yum install fslint
```
Once it is installed, launch it from the menu or application launcher.
This is what the FSlint GUI looks like.
![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png)
As you can see, the interface of FSlint is user-friendly and self-explanatory. In the **Search path** tab, add the path of the directory you want to scan and click the **Find** button on the lower left corner to find the duplicates. Check the recurse option to recursively search for duplicates in directories and sub-directories. FSlint will quickly scan the given directory and list the duplicates.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png)
From the list, choose the duplicates you want to clean and apply one of the given actions, such as Save, Delete, Merge, or Symlink.
In the **Advanced search parameters** tab, you can specify the paths to exclude while searching for duplicates.
![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png)
**FSlint command line options**
FSlint provides a collection of the following CLI utilities to find duplicates in your filesystem:
* **findup** — find DUPlicate files
* **findnl** — find Name Lint (problems with filenames)
* **findu8** — find filenames with invalid utf8 encoding
* **findbl** — find Bad Links (various problems with symlinks)
* **findsn** — find Same Name (problems with clashing names)
* **finded** — find Empty Directories
* **findid** — find files with dead user IDs
* **findns** — find Non Stripped executables
* **findrs** — find Redundant Whitespace in files
* **findtf** — find Temporary Files
* **findul** — find possibly Unused Libraries
* **zipdir** — Reclaim wasted space in ext2 directory entries
All of these utilities are available under the **/usr/share/fslint/fslint/** location.
For example, to find duplicates in a given directory, do:
```
$ /usr/share/fslint/fslint/findup ~/Downloads/
```
Similarly, to find empty directories, the command would be:
```
$ /usr/share/fslint/fslint/finded ~/Downloads/
```
To get more details on each utility, for example **findup** , run:
```
$ /usr/share/fslint/fslint/findup --help
```
For more details about FSlint, refer to the help section and man pages.
```
$ /usr/share/fslint/fslint/fslint --help
$ man fslint
```
##### Conclusion
You now know about three tools to find and delete unwanted duplicate files in Linux. Among these three tools, I often use Rdfind. It doesn't mean that the other two utilities are not efficient, but I am just happy with Rdfind so far. Well, it's your turn. Which is your favorite tool and why? Let us know in the comments section below.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/
[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/
[3]: https://aur.archlinux.org/packages/rdfind/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]: https://aur.archlinux.org/packages/fslint/

View File

@ -0,0 +1,232 @@
# Caffeinated 6.828:实验 2内存管理
### 简介
在本实验中,你将为你的操作系统编写内存管理方面的代码。内存管理由两部分组成。
第一部分是内核的物理内存分配器,内核通过它来分配内存,以及在不需要时释放所分配的内存。分配器以页为单位分配内存,每个页的大小为 4096 字节。你的任务是去维护那个数据结构,它负责记录物理页的分配和释放,以及每个分配的页有多少进程共享它。本实验中你将要写出分配和释放内存页的全套代码。
第二部分是虚拟内存的管理它负责由内核和用户软件使用的虚拟内存地址到物理内存地址之间的映射。当使用内存时x86 架构硬件的内存管理单元MMU会查阅一组页表来执行地址映射。接下来你将要修改 JOS以根据我们提供的特定指令去设置 MMU 的页表。
### 预备知识
在本实验及后面的实验中,你将逐步构建你的内核。我们将会为你提供一些附加的资源。使用 Git 去获取这些资源、提交自实验 1 以来的改变如有需要的话、获取课程仓库的最新版本、以及在我们的实验 2origin/lab2的基础上创建一个称为 lab2 的本地分支:
```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab2 origin/lab2
Branch lab2 set up to track remote branch refs/remotes/origin/lab2.
Switched to a new branch "lab2"
athena%
```
现在,你需要将你在 lab1 分支中的改变合并到 lab2 分支中,命令如下:
```
athena% git merge lab1
Merge made by recursive.
kern/kdebug.c | 11 +++++++++--
kern/monitor.c | 19 +++++++++++++++++++
lib/printfmt.c | 7 +++----
3 files changed, 31 insertions(+), 6 deletions(-)
athena%
```
实验 2 包含如下的新源代码,后面你将遍历它们:
- inc/memlayout.h
- kern/pmap.c
- kern/pmap.h
- kern/kclock.h
- kern/kclock.c
`memlayout.h` 描述虚拟地址空间的布局,这个虚拟地址空间是通过修改 `pmap.c`、`memlayout.h` 和 `pmap.h` 所定义的 *PageInfo* 数据结构来实现的,这个数据结构用于跟踪物理内存页面是否被释放。`kclock.c` 和 `kclock.h` 操作 PC 上基于电池的时钟和 CMOS RAM 硬件,其中 BIOS 记录了 PC 上安装的物理内存数量以及其它的一些信息。`pmap.c` 中的代码需要去读取这个设备硬件信息,以算出在这个设备上安装了多少物理内存,不过这部分代码已经为你写好了:你不需要知道 CMOS 硬件工作原理的细节。
特别需要注意的是 `memlayout.h``pmap.h`,因为本实验需要你去使用和理解的大部分内容都包含在这两个文件中。你或许还需要去复习 `inc/mmu.h` 这个文件,因为它也包含了本实验中用到的许多定义。
开始本实验之前,记得去添加 `exokernel` 以获取 QEMU 的 6.828 版本。
### 实验过程
当你准备好提交实验时,添加你的 `answers-lab2.txt` 文件到 Git 仓库,提交你的改变,然后运行 `make handin`
```
athena% git add answers-lab2.txt
athena% git commit -am "my answer to lab2"
[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-)
athena% make handin
```
### 第 1 部分:物理页面管理
操作系统必须跟踪物理内存页是否使用的状态。JOS 以页为最小粒度来管理 PC 的物理内存,以便于它使用 MMU 去映射和保护每个已分配的内存片段。
现在,你将要写内存的物理页分配器的代码。它使用链接到 `PageInfo` 数据结构的一组列表来保持对物理页的状态跟踪,每个列表都对应到一个物理内存页。在你能够写出剩下的虚拟内存实现之前,你需要先写出物理内存页面分配器,因为你的页表管理代码将需要去分配物理内存来存储页表。
> 练习 1
>
> 在文件 `kern/pmap.c` 中,你需要去实现以下函数的代码(或许要按给定的顺序来实现)。
>
> boot_alloc()
>
> mem_init()(只要能够调用 check_page_free_list() 即可)
>
> page_init()
>
> page_alloc()
>
> page_free()
>
> `check_page_free_list()``check_page_alloc()` 可以测试你的物理内存页分配器。你将需要引导 JOS 然后去看一下 `check_page_alloc()` 是否报告成功即可。如果没有报告成功,修复你的代码直到成功为止。你可以添加你自己的 `assert()` 以帮助你去验证是否符合你的预期。
本实验以及所有的 6.828 实验中,将要求你做一些检测工作,以便于你搞清楚它们是否按你的预期来工作。这个任务不需要详细描述你添加到 JOS 中的代码的细节。查找 JOS 源代码中你需要去修改的那部分的注释;这些注释中经常包含有技术规范和提示信息。你也可能需要去查阅 JOS、和 Intel 的技术手册、以及你的 6.004 或 6.033 课程笔记的相关部分。
### 第 2 部分:虚拟内存
在你开始动手之前,需要先熟悉 x86 内存管理架构的保护模式:即分段和页面转换。
> 练习 2
>
> 如果你对 x86 的保护模式还不熟悉,可以查看 Intel 80386 参考手册的第 5 章和第 6 章。阅读这些章节5.2 和 6.4中关于页面转换和基于页面的保护。我们建议你也去了解关于段的章节在虚拟内存和保护模式中JOS 使用了分页、段转换、以及在 x86 上不能禁用的基于段的保护,因此你需要去理解这些基础知识。
### 虚拟地址、线性地址和物理地址
在 x86 的专用术语中,一个虚拟地址是由一个段选择器和在段中的偏移量组成。一个线性地址是在页面转换之前、段转换之后得到的一个地址。一个物理地址是段和页面转换之后得到的最终地址,它最终将进入你的物理内存中的硬件总线。
![屏幕快照 2018-09-04 11.22.20](https://ws1.sinaimg.cn/large/0069RVTdly1fuxgrc398jj30gx04bgm1.jpg)
回顾实验 1 中的第 3 部分,我们安装了一个简单的页表,这样内核就可以在 0xf0100000 链接的地址上运行,尽管它实际上是加载在 0x00100000 处的 ROM BIOS 的物理内存上。这个页表仅映射了 4MB 的内存。在实验中,你将要为 JOS 去设置虚拟内存布局,我们将从虚拟地址 0xf0000000 处开始扩展它,首先将物理内存扩展到 256MB并映射许多其它区域的虚拟内存。
> 练习 3
>
> 虽然 GDB 能够通过虚拟地址访问 QEMU 的内存,它经常用于在配置虚拟内存期间检查物理内存。在实验工具指南中复习 QEMU 的监视器命令,尤其是 `xp` 命令,它可以让你去检查物理内存。访问 QEMU 监视器,可以在终端中按 `Ctrl-a c`(相同的绑定返回到串行控制台)。
>
> 使用 QEMU 监视器的 `xp` 命令和 GDB 的 `x` 命令去检查相应的物理内存和虚拟内存,以确保你看到的是相同的数据。
>
> 我们的打过补丁的 QEMU 版本提供一个非常有用的 `info pg` 命令它可以展示当前页表的一个简单描述包括所有已映射的内存范围、权限、以及标志。Stock QEMU 也提供一个 `info mem` 命令用于去展示一个概要信息,这个信息包含了已映射的虚拟内存范围和使用了什么权限。
在 CPU 上运行的代码,一旦处于保护模式(这是在 boot/boot.S 中所做的第一件事情)中,是没有办法去直接使用一个线性地址或物理地址的。所有的内存引用都被解释为虚拟地址,然后由 MMU 来转换,这意味着在 C 语言中的指针都是虚拟地址。
例如在物理内存分配器中JOS 内核经常需要在不反向引用的情况下将地址作为不透明值或整数来维护。有时它们是虚拟地址而有时是物理地址。为了在代码中记录这种区别JOS 源文件将它们区分为两种类型:类型 `uintptr_t` 表示不透明的虚拟地址,而类型 `physaddr_t` 表示物理地址。这两种类型其实都只是 32 位整数uint32_t的同义词因此编译器不会阻止你将一种类型的数据赋值给另一种类型由于它们都是整数而不是指针类型如果你想反向引用它们编译器将报错。
JOS 内核能够通过将它转换为指针类型的方式来反向引用一个 `uintptr_t` 类型。相反,内核不能反向引用一个物理地址,因为这是由 MMU 来转换所有的内存引用。如果你转换一个 `physaddr_t` 为一个指针类型,并反向引用它,你或许能够加载和存储最终结果地址(硬件将它解释为一个虚拟地址),但你并不会取得你想要的内存位置。
总结如下:
| C type | Address type |
| ------------ | ------------ |
| `T*` | Virtual |
| `uintptr_t` | Virtual |
| `physaddr_t` | Physical |
>问题:
>
>假设下面的 JOS 内核代码是正确的,那么变量 `x` 应该是什么类型,`uintptr_t` 还是 `physaddr_t`
>
>![屏幕快照 2018-09-04 11.48.54](https://ws3.sinaimg.cn/large/0069RVTdly1fuxgrbkqd3j30m302bmxc.jpg)
>
JOS 内核有时需要去读取或修改它知道物理地址的内存。例如,添加一个映射到页表,可以要求分配物理内存去存储一个页目录,然后去初始化它们。然而,内核也和其它的软件一样,并不能跳过虚拟地址转换,内核并不能直接加载和存储物理地址。一个原因是 JOS 将重映射从虚拟地址 0xf0000000 处物理地址 0 开始的所有的物理地址,以帮助内核去读取和写入它知道物理地址的内存。为转换一个物理地址为一个内核能够真正进行读写操作的虚拟地址,内核必须添加 0xf0000000 到物理地址以找到在重映射区域中相应的虚拟地址。你应该使用 KADDR(pa) 去做那个添加操作。
JOS 内核有时也需要能够通过给定的内核数据结构中存储的虚拟地址找到内存中的物理地址。内核全局变量和通过 `boot_alloc()` 分配的内存是加载到内核的这些区域中,从 0xf0000000 处开始,到全部物理内存映射的区域。因此,在这些区域中转变一个虚拟地址为物理地址时,内核能够只是简单地减去 0xf0000000 即可得到物理地址。你应该使用 PADDR(va) 去做那个减法操作。
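下面用一个简化的 C 示意来归纳这两个宏的核心运算(仅为帮助理解的草图,省略了 JOS 源码中实际存在的参数越界检查):
```
#define KERNBASE 0xF0000000

// physaddr_t 与 uintptr_t 均为 32 位整数类型(定义见 inc/types.h

// KADDR(pa):物理地址加上 KERNBASE得到重映射区域中对应的内核虚拟地址
// JOS 源码中还会检查 pa 是否超出已映射的物理内存范围,此处从略)
#define KADDR(pa) ((void *)((uintptr_t)(pa) + KERNBASE))

// PADDR(va):内核虚拟地址减去 KERNBASE得到对应的物理地址
// JOS 源码中还会检查 va 是否低于 KERNBASE此处从略
#define PADDR(va) ((physaddr_t)(va) - KERNBASE)
```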
### 引用计数
在以后的实验中,你将会经常遇到多个虚拟地址(或多个环境下的地址空间中)同时映射到相同的物理页面上。你将在 PageInfo 数据结构中用 pp_ref 字段来提供一个引用到每个物理页面的计数器。如果一个物理页面的这个计数器为 0表示这个页面已经被释放因为它不再被使用了。一般情况下这个计数器应该等于相应的物理页面出现在所有页表下面的 UTOP 的次数UTOP 上面的映射大都是在引导时由内核设置的,并且它从不会被释放,因此不需要引用计数器)。我们也使用它去跟踪到页目录的指针数量,反过来就是,页目录到页表的数量。
使用 `page_alloc` 时要注意。它返回的页面引用计数总是为 0因此一旦对返回页做了一些操作比如将它插入到页表`pp_ref` 就应该增加。有时这需要通过其它函数(比如,`page_instert`)来处理,而有时这个函数是直接调用 `page_alloc` 来做的。
### 页表管理
现在,你将写一套管理页表的代码:去插入和删除线性地址到物理地址的映射表,并且在需要的时候去创建页表。
> 练习 4
>
> 在文件 `kern/pmap.c` 中,你必须去实现下列函数的代码。
>
> pgdir_walk()
>
> boot_map_region()
>
> page_lookup()
>
> page_remove()
>
> page_insert()
>
> `check_page()`,调用 `mem_init()`,测试你的页表管理动作。在进行下一步流程之前你应该确保它成功运行。
### 第 3 部分:内核地址空间
JOS 分割处理器的 32 位线性地址空间为两部分:用户环境(进程),我们将在实验 3 中开始加载和运行,它将控制其上的布局和低位部分的内容,而内核总是维护对高位部分的完全控制。线性地址的定义是在 `inc/memlayout.h` 中通过符号 ULIM 来划分的,它为内核保留了大约 256MB 的虚拟地址空间。这就解释了为什么我们要在实验 1 中给内核这样的一个高位链接地址的原因:如是不这样做的话,内核的虚拟地址空间将没有足够的空间去同时映射到下面的用户空间中。
你可以在 `inc/memlayout.h` 中找到一个图表,它有助于你去理解 JOS 内存布局,这在本实验和后面的实验中都会用到。
### 权限和缺页隔离
由于内核和用户的内存都存在于它们各自环境的地址空间中,因此我们需要在 x86 的页表中使用权限位去允许用户代码只能访问用户所属地址空间的部分。否则的话,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者发生各种莫名其妙的的故障;用户代码也可能会偷窥其它环境的私有数据。
对于 ULIM 以上部分的内存,用户环境没有任何权限,只有内核才可以读取和写入这部分内存。对于 [UTOP,ULIM] 地址范围,内核和用户都有相同的权限:它们可以读取但不能写入这个地址范围。这个地址范围是用于向用户环境暴露某些只读的内核数据结构。最后,低于 UTOP 的地址空间是为用户环境所使用的;用户环境将为访问这些内核设置权限。
### 初始化内核地址空间
现在,你将去配置 UTOP 以上的地址空间:内核部分的地址空间。`inc/memlayout.h` 中展示了你将要使用的布局。你将使用刚才写好的函数去设置相关的线性地址到物理地址的映射。
> 练习 5
>
> 完成调用 `check_page()` 之后在 `mem_init()` 中缺失的代码。
现在,你的代码应该通过了 `check_kern_pgdir()``check_page_installed_pgdir()` 的检查。
> 问题:
>
> 1、在这个时刻页目录中的条目是什么它们映射的址址是什么以及它们映射到哪里了换句话说就是尽可能多地填写这个表
>
> | Entry | Base Virtual Address | Points to (logically) |
> | ----- | -------------------- | --------------------- |
> | 1023 | ? | Page table for top 4MB of phys memory |
> | 1022 | ? | ? |
> | . | ? | ? |
> | . | ? | ? |
> | . | ? | ? |
> | 2 | 0x00800000 | ? |
> | 1 | 0x00400000 | ? |
> | 0 | 0x00000000 | [see next question] |
>
> 2、(来自课程 3) 我们将内核和用户环境放在相同的地址空间中。为什么用户程序不能去读取和写入内核的内存?有什么特殊机制保护内核内存?
>
> 3、这个操作系统能够支持的最大的物理内存数量是多少为什么
>
> 4、我们真实地拥有最大数量的物理内存吗管理内存的开销有多少这个开销可以减少吗
>
> 5、复习在 `kern/entry.S``kern/entrypgdir.c` 中的页表设置。一旦我们打开分页EIP 中是一个很小的数字(稍大于 1MB。在什么情况下我们转而去运行在 KERNBASE 之上的一个 EIP当我们启用分页并开始在 KERNBASE 之上运行一个 EIP 时,是什么让我们能够持续运行一个很低的 EIP为什么这种转变是必需的
### 地址空间布局的其它选择
在 JOS 中我们使用的地址空间布局并不是唯一的选择。一个操作系统可以在低位的线性地址上映射内核而为用户进程保留线性地址的高位部分。然而x86 内核一般并不采用这种方法原因之一是为了向后兼容x86 有一种被称为“虚拟 8086 模式”的向后兼容模式,它在处理器中“硬性”规定使用线性地址空间的最低部分,如果内核被映射到这里,这种模式就根本无法使用。
虽然很困难,但是设计这样的内核是有这种可能的,即:不为处理器自身保留任何固定的线性地址或虚拟地址空间,而有效地允许用户级进程不受限制地使用整个 4GB 的虚拟地址空间 —— 同时还要在这些进程之间充分保护内核以及不同的进程之间相互受保护!
将内核的内存分配系统进行概括类推,以支持二次幂为单位的各种页大小,从 4KB 到一些你选择的合理的最大值。你务必要有一些方法,将较大的分配单位按需分割为一些较小的单位,以及在需要时,将多个较小的分配单位合并为一个较大的分配单位。想一想在这样的一个系统中可能会出现些什么样的问题。
这个实验做完了。确保你通过了所有的等级测试,并记得在 `answers-lab2.txt` 中写下你对上述问题的答案。提交你的改变(包括添加 `answers-lab2.txt` 文件),并在 `lab` 目录下输入 `make handin` 去提交你的实验。
------
via: <https://sipb.mit.edu/iap/6.828/lab/lab2/>
作者:[MIT](https://sipb.mit.edu/iap/6.828/lab/lab2/)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,183 @@
在 Debian 9 / Ubuntu 16.04 / 17.10 中如何安装并使用 Wireshark
======
作者 [Pradeep Kumar][1],首发于 2017 年 11 月 29 日,更新于 2017 年 11 月 29 日
[![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2]
Wireshark 是免费、开源、跨平台的基于 GUI 的网络数据包分析器,可用于 Linux、Windows、MacOS、Solaris 等。它可以实时捕获网络数据包并以人性化的格式呈现。Wireshark 允许我们将网络数据包的监控深入到微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI 来使用的。
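例如,命令行下可以像这样用 `tshark` 快速抓包(仅为示意,接口名 `enp0s3` 请替换为你系统上的实际接口):
```
# 在 enp0s3 接口上实时捕获 10 个数据包后自动停止
$ tshark -i enp0s3 -c 10
```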
Wireshark 可用于网络故障排除分析软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。
Wireshark 具有许多功能:
* 支持数百项协议检查
* 能够实时捕获数据包并保存,以便以后进行离线分析
* 许多用于分析数据的过滤器
* 捕获的数据可以即时on the fly压缩和解压缩
* 支持各种文件格式的数据分析,输出也可以保存为 XML, CSV 和纯文本格式
* 数据可以从以太网、WiFi、蓝牙、USB、帧中继、令牌环等多个接口中捕获
在本文中,我们将讨论如何在 Ubuntu/Debian 上安装 Wireshark并将学习如何使用 Wireshark 捕获网络数据包。
#### 在 Ubuntu 16.04 / 17.10 上安装 Wireshark
Wireshark 在 Ubuntu 默认仓库中可用,只需使用以下命令即可安装。但有可能得不到最新版本的 wireshark。
```
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```
因此,要安装最新版本的 wireshark我们必须启用或配置官方 wireshark 仓库。
使用下面的命令来配置仓库并安装最新版本的 wireshark 实用程序。
```
linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```
一旦安装了 wireshark执行以下命令以便非 root 用户也可以捕获接口的实时数据包。
```
linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
```
#### 在 Debian 9 上安装 Wireshark
Wireshark 包及其依赖项已存在于 debian 9 的默认仓库中,因此要在 Debian 9 上安装最新且稳定版本的 Wireshark请使用以下命令
```
linuxtechi@nixhome:~$ sudo apt-get update
linuxtechi@nixhome:~$ sudo apt-get install wireshark -y
```
在安装过程中,它会提示我们为非超级用户配置 dumpcap
选择 `yes` 并回车。
[![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3]
安装完成后,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。
```
linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap
```
我们还可以使用最新的源代码包在 Ubuntu/Debian 和其它 Linux 发行版上安装 wireshark。
#### 在 Debian / Ubuntu 系统上使用源代码安装 Wireshark
首先下载最新的源代码包(写这篇文章时它的最新版本是 2.4.2),使用以下命令:
```
linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz
```
然后解压缩包,进入解压缩的目录:
```
linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp
linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2
```
现在我们使用以下命令编译代码:
```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make
```
最后安装已编译的软件包以便在系统上安装 Wireshark
```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig
```
在安装后,它将创建一个单独的 wireshark 组,我们现在将我们的用户添加到该组中,以便可以使用 Wireshark否则在启动 wireshark 时可能会出现 `permission denied`(权限被拒绝)错误。
要将用户添加到 wireshark 组,执行以下命令:
```
linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi
```
现在我们可以使用以下命令从 GUI 菜单或终端启动 wireshark
```
linuxtechi@nixhome:~$ wireshark
```
#### 在 Debian 9 系统上使用 Wireshark
[![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4]
点击 Wireshark 图标
[![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5]
#### 在 Ubuntu 16.04 / 17.10 上使用 Wireshark
[![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6]
点击 Wireshark 图标
[![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7]
#### 捕获并分析数据包
一旦 wireshark 启动,我们就会看到 wireshark 窗口,上面有 Ubuntu 和 Debian 系统的示例。
[![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8]
所有这些都是我们可以用来捕获网络数据包的接口。根据你系统上的接口,此屏幕可能与你的不同。
我们选择 `enp0s3` 来捕获该接口的网络流量。选择接口后,我们网络上所有设备的网络数据包开始填充到窗口中(参考下面的屏幕截图):
[![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9]
第一次看到这个屏幕时,我们可能会被屏幕上显示的大量数据所淹没,不知道该如何整理这些数据,但不用担心Wireshark 的最佳功能之一就是它的过滤器。
我们可以根据 IP 地址,端口号,也可以使用来源和目标过滤器,数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 `Apply a Display Filter(应用显示过滤器)`选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 `Apply a Display Filter(应用显示过滤器)`选项卡旁边的 `flag` 图标。
[![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10]
我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 `View -> Coloring Rules`,我们也可以改变这些编码。
[![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11]
在我们得到我们需要的结果之后,我们可以点击任何捕获的数据包以获得有关该数据包的更多详细信息,这将显示该网络数据包的所有数据。
Wireshark 是一个非常强大的工具,需要一些时间来习惯并对其进行命令操作,本教程将帮助你入门。请随时在下面的评论框中提出你的疑问或建议。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com
作者:[Pradeep Kumar][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/author/pradeep/
[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg
[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg
[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg
[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg
[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg
[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg
[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg
[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg
[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg
[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg

View File

@ -0,0 +1,205 @@
在 React 条件渲染中使用三元表达式和 “&&
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg)
Photo by [Brendan Church][1] on [Unsplash][2]
React 组件可以通过多种方式决定渲染内容。你可以使用传统的 if 语句或 switch 语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。
### 三元表达式 vs if/else
假设我们有一个组件被传进来一个 `name` prop。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。
这是一个只实现了如上功能的无状态函数式组件。
```
const MyComponent = ({ name }) => {
  if (name) {
    return (
      <div className="hello">
        Hello {name}
      </div>
    );
  }
  return (
    <div className="hello">
      Please sign in
    </div>
  );
};
```
这个很简单但是我们可以做得更好。这是使用三元运算符编写的相同组件。
```
const MyComponent = ({ name }) => (
  <div className="hello">
    {name ? `Hello ${name}` : 'Please sign in'}
  </div>
);
```
请注意这段代码与上面的例子相比是多么简洁。
有几点需要注意。因为我们使用了箭头函数的单语句形式所以隐含了return语句。另外使用三元运算符允许我们省略掉重复的 `<div className="hello">` 标记。🎉
### 三元表达式 vs &&
正如您所看到的,三元表达式用于表达 if/else 条件式非常好。但是对于简单的 if 条件式怎么样呢?
让我们看另一个例子。如果 `isPro`(一个布尔值)为真,我们将显示一个奖杯表情符号。如果星星的数量不是 0我们也要渲染星星的数量。我们可以这样写。
```
const MyComponent = ({ name, isPro, stars }) => (
  <div className="hello">
    <div>
      Hello {name}
      {isPro ? '🏆' : null}
    </div>
    {stars ? (
      <div>
        Stars:{'⭐️'.repeat(stars)}
      </div>
    ) : null}
  </div>
);
```
请注意 “else” 条件返回了 null。这是因为三元表达式必须要有“否则”条件。
对于简单的 “if” 条件式,我们可以使用更合适的东西:&& 运算符。这是使用 “&&” 编写的相同代码。
```
const MyComponent = ({ name, isPro, stars }) => (
  <div className="hello">
    <div>
      Hello {name}
      {isPro && '🏆'}
    </div>
    {stars && (
      <div>
        Stars:{'⭐️'.repeat(stars)}
      </div>
    )}
  </div>
);
```
没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null`else 条件式)。一切都应该像以前一样渲染。
猜猜约翰得到了什么?当什么都不应该渲染时,却渲染出了一个 0。这就是我上面提到的陷阱。下面解释为什么。
[根据 MDN][3],逻辑运算符“与”(也就是 `&&`
> `expr1 && expr2`
> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 true`&&` 返回 `true` ;否则,返回 `false`
好的,在你开始拔头发之前,让我为你解释它。
在我们这个例子里,`expr1` 是变量 `stars`,它的值是 `0`。因为 0 是 falsy 值,`0` 会被返回和渲染。看,这还不算太坏。
我会简单地这么写。
> 如果 `expr1` 是 falsy返回 `expr1`;否则返回 `expr2`。
所以,当对非布尔值使用 “&&” 时,我们必须让 falsy 的值返回 React 无法渲染的东西,比如说,`false` 这个值。
我们可以通过几种方式实现这一目标。让我们试试吧。
```
{!!stars && (
  <div>
    {'⭐️'.repeat(stars)}
  </div>
)}
```
注意 `stars` 前的双感叹操作符( `!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。
第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars``0` ,那么 `!stars` 会 是 `true`
然后我们执行第二个`非`操作,所以如果 `stars` 是0`!!stars` 会是 `false`。正好是我们想要的。
如果你不喜欢 `!!`,那么你也可以强制转换出一个布尔值,比如这样(这种方式我觉得有点冗长)。
```
{Boolean(stars) && (
```
或者只是用比较符产生一个布尔值(有些人会说这样甚至更加语义化)。
```
{stars > 0 && (
```
#### 关于字符串
空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的所以这不是那种你很可能会去处理的难题甚至可能不会注意到它。然而如果你是完美主义者并且不希望DOM上有空字符串你应采取我们上面对数字采取的预防措施。
### 其它解决方案
一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用“&&”处理布尔值。
```
const shouldRenderStars = stars > 0;
```
```
return (
  <div>
    {shouldRenderStars && (
      <div>
        {'⭐️'.repeat(stars)}
      </div>
    )}
  </div>
);
```
之后,在将来,如果业务规则要求你还需要已登录,拥有一条狗以及喝淡啤酒,你可以改变 `shouldRenderStars` 的得出方式,而返回的内容保持不变。你还可以把这个逻辑放在其它可测试的地方,并且保持渲染明晰。
```
const shouldRenderStars =
  stars > 0 && loggedIn && pet === 'dog' && beerPref === 'light';
```
```
return (
  <div>
    {shouldRenderStars && (
      <div>
        {'⭐️'.repeat(stars)}
      </div>
    )}
  </div>
);
```
### 结论
我认为你应该充分利用这种语言。对于 JavaScript这意味着为 `if/else` 条件式使用三元表达式,以及为 `if` 条件式使用 `&&` 操作符。
我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 && 取得成功了。
--------------------------------------------------------------------------------
作者简介:
美国运通工程博客的执行编辑 http://aexp.io 以及 @AmericanExpress 的工程总监。MyViews !== ThoseOfMyEmployer.
----------------
via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935
作者:[Donavon West][a]
译者:[GraveAccent](https://github.com/GraveAccent)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@donavon
[1]:https://unsplash.com/photos/pKeF6Tt3c08?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[2]:https://unsplash.com/search/photos/road-sign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators

View File

@ -1,179 +0,0 @@
Hugo30分钟搭建博客一个Go语言开发的静态站点生成工具
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy)
你是不是强烈的想搭建博客来将自己对软件框架等的探索学习成果分享呢?
你是不是面对缺乏指导文档而一团糟的项目就有一种想去改变它的冲动呢?
或者换个角度,你是不是十分期待能创建一个属于自己的个人博客网站呢?
很多人在想搭建博客之前都有一些严重的迟疑顾虑感觉自己缺乏内容管理系统CMS的相关知识更缺乏时间去学习这些知识。现在如果我说不用花费大把的时间去学习 CMS 系统、学习如何创建一个静态网站、更不用操心如何去强化网站以防止它受到黑客攻击的问题,你就可以在 30 分钟之内创建一个博客?你信不信?利用 Hugo 工具,就可以实现这一切。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_1.png?itok=JgxBSOBG)
Hugo 是一个基于 Go 语言开发的静态站点生成工具。也许你会问,为什么选择它?
* 无需数据库、无需需要各种权限的插件、无需跑在服务器上的底层平台,更没有额外的安全问题。
* 都是静态站点,因此拥有轻量级、快速响应的服务性能。此外,所有的网页都是在部署的时候呈现,所以服务器负载很小。
* 极易操作的版本控制。一些 CMS 平台使用它们自己的版本控制软件VCS或者在网页上集成 Git 工具。而 Hugo所有的源文件都可以用你所选的 VCS 软件来管理。
### 0-5 分钟:下载 Hugo生成一个网站
直白的说Hugo 使得写一个网站又一次变得有趣起来。让我们来个 30 分钟计时,搭建一个网站。
为了简化 Hugo 安装流程,这里直接使用 Hugo 可执行安装文件。
1. 下载和你操作系统匹配的 Hugo [版本][2]
2. 压缩包解压到指定路径,例如 windows 系统的 `C:\hugo_dir` 或者 Linux 系统的 `~/hugo_dir` 目录;下文中的变量 `${HUGO_HOME}` 所指的路径就是这个安装目录;
3. 打开命令行终端,进入安装目录:`cd ${HUGO_HOME}`
4. 确认 Hugo 已经启动:
* Unix 系统:`${HUGO_HOME}/[hugo version]`
* Windows 系统:`${HUGO_HOME}\[hugo.exe version]`
例如Windows 系统下cmd 命令行中输入:`c:\hugo_dir\hugo version`。
为了书写上的简化,下文中的 `hugo` 就是指 hugo 可执行文件所在的路径(包括可执行文件),例如命令 `hugo version` 就是指命令 `c:\hugo_dir\hugo version` 。(译者注:可以把 hugo 可执行文件所在的路径添加到系统环境变量下,这样就可以直接在终端中输入 `hugo version`
如果命令 `hugo version` 报错,你可能下载了错误的版本。当然,有很多种方法安装 Hugo更多详细信息请查阅 [官方文档][3]。最稳妥的方法就是把 Hugo 可执行文件放在某个路径下,然后执行的时候带上路径名
5. 创建一个新的站点来作为你的博客,输入命令:`hugo new site awesome-blog`
6. 进入新创建的路径下: `cd awesome-blog`
恭喜你!你已经创建了自己的新博客。
### 5-10 分钟:为博客设置主题
Hugo 中你可以自己构建博客的主题或者使用网上已经有的一些主题。这里选择 [Kiera][4] 主题,因为它简洁漂亮。按以下步骤来安装该主题:
1. 进入主题所在目录:`cd themes`
2. 克隆主题:`git clone https://github.com/avianto/hugo-kiera kiera`。如果你没有安装 Git 工具:
* 从 [Github][5] 上下载 hugo 的 .zip 格式的文件;
* 解压该 .zip 文件到你的博客主题 `theme` 路径;
* 重命名 `hugo-kiera-master``kiera`
3. 返回博客主路径:`cd awesome-blog`
4. 激活主题;通常来说,主题(包括 Kiera )都自带文件夹 `exampleSite`,里面存放了内容配置的示例文件。激活 Kiera 主题需要拷贝它提供的 `config.toml` 到你的博客下:
* Unix 系统:`cp themes/kiera/exampleSite/config.toml .`
* Windows 系统:`copy themes\kiera\exampleSite\config.toml .`
* 选择 `Yes` 来覆盖原有的 `config.toml`
5. 可选操作 )你可以选择可视化的方式启动服务器来验证主题是否生效:`hugo server -D` 然后在浏览器中输入 `http://localhost:1313`。可用通过在终端中输入 `Crtl+C` 来停止服务器运行。现在你的博客还是空的,但这也给你留了写作的空间。它看起来如下所示:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_2.png?itok=PINOIOSU)
你已经成功的给博客设置了主题!你可以在官方 [Hugo 主题][4] 网站上找到上百种漂亮的主题供你使用。
### 10-20 分钟:给博客添加内容
对于碗来说它是空的时候用处最大可以用来盛放东西但对于博客来说不是这样空博客几乎毫无用处。在这一步你将会给博客添加内容。Hugo 和 Kiera 主题都为这个工作提供了方便性。按以下步骤来进行你的第一次提交:
1. archetypes 将会是你的内容模板。
2. 添加主题中的 archtypes 至你的博客:
* Unix 系统: `cp themes/kiera/archetypes/* archetypes/`
* Windows 系统:`copy themes\kiera\archetypes\* archetypes\`
* 选择 `Yes` 来覆盖原来的 `default.md` 内容架构类型
3. 创建博客 posts 目录:
* Unix 系统: `mkdir content/posts`
* Windows 系统: `mkdir content\posts`
4. 利用 Hugo 生成你的 post
* Unix 系统:`hugo nes posts/first-post.md`;
* Windows 系统:`hugo new posts\first-post.md`;
5. 在文本编辑器中打开这个新建的 post 文件:
* Unix 系统:`gedit content/posts/first-post.md`
* Windows 系统:`notepadd content\posts\first-post.md`
此刻,你可以疯狂起来了。注意到你的提交文件中包括两个部分。第一部分是以 `+++` 符号分隔开的。它包括了提交文档的主要数据,例如名称、时间等。在 Hugo 中,这叫做前缀。在前缀之后,才是正文。下面编辑第一个提交文件内容:
```
+++
title = "First Post"
date = 2018-03-03T13:23:10+01:00
draft = false
tags = ["Getting started"]
categories = []
+++
Hello Hugo world! No more excuses for having no blog or documentation now!
```
现在你要做的就是启动你的服务器:`hugo server -D`;然后打开浏览器,输入 `http://localhost:1313/`
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_3.png?itok=I-_v0qLx)
### 20-30 分钟:调整网站
前面的工作很完美,但还有一些问题需要解决。例如,简单地命名你的站点:
1. 终端中按下 `Ctrl+C` 以停止服务器。
2. 打开 `config.toml`,编辑博客的名称,版权,你的姓名,社交网站等等。
当你再次启动服务器后,你会发现博客私人订制味道更浓了。不过,还少一个重要的基础内容:主菜单。快速的解决这个问题。返回 `config.toml` 文件,在末尾插入如下一段:
```
[[menu.main]]
name = "Home" #Name in the navigation bar
weight = 10 #The larger the weight, the more on the right this item will be
url = "/" #URL address
[[menu.main]]
name = "Posts"
weight = 20
url = "/posts/"
```
上面这段代码添加了 `Home``Posts` 到主菜单中。你还需要一个 `About` 页面。这次是创建一个 `.md` 文件,而不是编辑 `config.toml` 文件:
1. 创建 `about.md` 文件:`hugo new about.md` 。注意它是 `about.md`,不是 `posts/about.md`。该页面不是博客提交内容,所以你不想它显示到博客内容提交当中吧。
2. 用文本编辑器打开该文件,输入如下一段:
```
+++
title = "About"
date = 2018-03-03T13:50:49+01:00
menu = "main" #Display this page on the nav menu
weight = "30" #Right-most nav item
meta = "false" #Do not display tags or categories
+++
> Waves are the practice of the water. Shunryu Suzuki
```
当你启动你的服务器并输入:`http://localhost:1313/`,你将会看到你的博客。(访问我 Gihub 主页上的 [例子][6] )如果你想让文章的菜单栏和 Github 相似,给 `themes/kiera/static/css/styles.css` 打上这个 [补丁][7]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/start-blog-30-minutes-hugo
作者:[Marek Czernek][a]
译者:[jrg](https://github.com/jrglinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mczernek
[1]:https://gohugo.io/
[2]:https://github.com/gohugoio/hugo/releases
[3]:https://gohugo.io/getting-started/installing/
[4]:https://themes.gohugo.io/
[5]:https://github.com/avianto/hugo-kiera
[6]:https://m-czernek.github.io/awesome-blog/
[7]:https://github.com/avianto/hugo-kiera/pull/18/files

View File

@ -0,0 +1,237 @@
什么是行为驱动的Python
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
您是否听说过[行为驱动开发][1]BDD并好奇这个新鲜事物究竟是什么也许你已经发现团队成员在谈论“gherkin”而你感到被排除在外无法参与其中。或许你是一个 Python 爱好者,正在寻找更好的方法来测试你的代码。无论在什么情况下,了解 BDD 都可以帮助您和您的团队实现更好的协作和测试自动化,而 Python 的 `behave` 框架是一个很好的起点。
### 什么是BDD
在软件中,行为是指在明确定义的输入、动作和结果的场景中,功能是如何运转的。产品可以表现出无数的行为,例如:

* 在网站上提交表单
* 搜索想要的结果
* 保存文档
* 进行 REST API 调用
* 运行命令行界面命令

根据产品的行为定义产品的功能,可以更容易地描述产品,开发产品并对其进行测试。这是 BDD 的核心:使行为成为软件开发的焦点。在开发早期使用[示例规范][2]的语言来定义行为。最常见的行为规范语言之一是 [Gherkin][3],即 [Cucumber][4] 项目中的 Given-When-Then 场景格式。行为规范基本上是对行为如何工作的简单语言描述,并具有一些保证一致性和聚焦的正式结构。通过将步骤文本“粘合”到代码实现,测试框架可以轻松地自动化这些行为规范。

下面是用 Gherkin 编写的行为规范的示例:
```
Scenario: Basic DuckDuckGo Search
  Given the DuckDuckGo home page is displayed
  When the user searches for "panda"
  Then results are shown for "panda"
```
快速浏览一下,行为是直观易懂的。除少数关键字外,该语言为自由格式。场景简洁而有意义。一个真实的例子说明了这种行为。步骤以声明的方式表明应该发生什么,而不会陷入“如何做”的细节中。
[BDD的主要优点][5]是良好的协作和自动化。 每个人都可以为行为开发做出贡献,而不仅仅是程序员。 从流程开始就定义并理解预期的行为。 测试可以与它们涵盖的功能一起自动化。 每个测试都包含一个单一的,独特的行为,以避免重复。 最后,现有的步骤可以通过新的行为规范重用,从而产生雪球效果。
### Python的behave框架
`behave`是Python中最流行的BDD框架之一。 它与其他基于Gherkin的Cucumber框架非常相似尽管没有得到官方的Cucumber定名。 `behave`有两个主要层:
1. 用Gherkin的`.feature`文件编写的行为规范
2. 用Python模块编写的步骤定义和钩子用于实现Gherkin步骤
如上例所示Gherkin场景有三部分格式
1. 鉴于一些初始状态
2. 当行为发生时
3. 然后验证结果
当`behave`运行测试时每个步骤由装饰器“粘合”到Python函数。
### 安装
作为先决条件请确保在你的计算机上安装了Python和`pip`。 我强烈建议使用Python 3.(我还建议使用[`pipenv`][6],但以下示例命令使用更基本的`pip`。)
`behave`框架只需要一个包:
```
pip install behave
```
其他包也可能有用,例如:
```
pip install requests    # 用于调用REST API
pip install selenium    # 用于web浏览器交互
```
GitHub上的[behavior-driven-Python][7]项目包含本文中使用的示例。
### Gherkin特点
`behave`框架使用的Gherkin语法实际上是符合官方的Cucumber Gherkin标准的。 `.feature`文件包含功能Feature部分而Feature部分又包含具有Given-When-Then步骤的场景Scenario部分。 以下是一个例子:
```
Feature: Cucumber Basket
  As a gardener,
  I want to carry many cucumbers in a basket,
  So that I don't drop them all.
  @cucumber-basket
  Scenario: Add and remove cucumbers
    Given the basket is empty
    When "4" cucumbers are added to the basket
    And "6" more cucumbers are added to the basket
    But "3" cucumbers are removed from the basket
    Then the basket contains "7" cucumbers
```
这里有一些重要的事情需要注意:
- Feature和Scenario部分都有[简短的描述性标题][8]。
- 紧跟在Feature标题后面的行是会被`behave`框架忽略掉的注释。将功能描述放在那里是一种很好的做法。
- Scenarios和Features可以有标签注意`@cucumber-basket`标记)用于钩子和过滤(如下所述)。
- 步骤都遵循[严格的Given-When-Then顺序][9]。
- 使用 `And``But` 可以为任何类型的步骤添加附加的步骤。
- 可以使用输入对步骤进行参数化——注意双引号里的值。
通过使用场景大纲,场景也可以写为具有多个输入组合的模板:
```
Feature: Cucumber Basket

  @cucumber-basket
  Scenario Outline: Add cucumbers
    Given the basket has "<initial>" cucumbers
    When "<more>" cucumbers are added to the basket
    Then the basket contains "<total>" cucumbers

    Examples: Cucumber Counts
      | initial | more | total |
      |    0    |  1   |   1   |
      |    1    |  2   |   3   |
      |    5    |  4   |   9   |
```
场景大纲总是有一个Examples表其中第一行给出列标题后续每一行给出一个输入组合。 只要列标题出现在由尖括号括起的步骤中,行值就会被替换。 在上面的示例中,场景将运行三次,因为有三行输入组合。 场景大纲是避免重复场景的好方法。
Gherkin语言还有其他元素但这些是主要的机制。 想了解更多信息请阅读Automation Panda这个网站的文章[Gherkin by Example][10]和[Writing Good Gherkin][11]。
### Python机制
每个Gherkin步骤必须“粘合”到步骤定义即提供了实现的Python函数。 每个函数都有一个带有匹配字符串的步骤类型装饰器。 它还接收共享的上下文和任何步骤参数。 功能文件必须放在名为`features/`的目录中,而步骤定义模块必须放在名为`features/steps/`的目录中。 任何功能文件都可以使用任何模块中的步骤定义——它们不需要具有相同的名称。 下面是一个示例Python模块其中包含cucumber basket功能的步骤定义。
```
from behave import *
from cucumbers.basket import CucumberBasket
@given('the basket has "{initial:d}" cucumbers')
def step_impl(context, initial):
context.basket = CucumberBasket(initial_count=initial)
@when('"{some:d}" cucumbers are added to the basket')
def step_impl(context, some):
context.basket.add(some)
@then('the basket contains "{total:d}" cucumbers')
def step_impl(context, total):
assert context.basket.count == total
```
可以使用三个[步骤匹配器][12]`parse``cfparse`和`re`。默认和最简单的匹配器是`parse`,如上例所示。注意如何解析参数化值并将其作为输入参数传递给函数。一个常见的最佳实践是在步骤中给参数加双引号。
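下面是一个改用`re`匹配器的简单示意沿用上文的cucumber basket例子仅作说明
```
from behave import then, use_step_matcher

# 切换为正则表达式匹配器,对其后的步骤定义生效
use_step_matcher("re")

@then('the basket contains "(?P<total>[0-9]+)" cucumbers')
def step_impl(context, total):
    # re 匹配器捕获到的参数是字符串,需要自行转换类型
    assert context.basket.count == int(total)
```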
每个步骤定义函数还接收一个[上下文][13]变量,该变量保存当前正在运行的场景的数据,例如`feature`, `scenario`和`tags`字段。也可以添加自定义字段,用于在步骤之间共享数据。始终使用上下文来共享数据——永远不要使用全局变量!
`behave`框架还支持[钩子][14]来处理Gherkin步骤之外的自动化问题。钩子是一个将在步骤场景功能或整个测试套件之前或之后运行的功能。钩子让人联想到[面向方面的编程][15]。它们应放在`features/`目录下的特殊`environment.py`文件中。钩子函数也可以检查当前场景的标签,因此可以有选择地应用逻辑。下面的示例显示了如何使用钩子为标记为`@web`的任何场景生成和销毁一个Selenium WebDriver实例。
```
from selenium import webdriver
def before_scenario(context, scenario):
if 'web' in context.tags:
context.browser = webdriver.Firefox()
context.browser.implicitly_wait(10)
def after_scenario(context, scenario):
if 'web' in context.tags:
context.browser.quit()
```
注意:也可以使用[fixtures][16]进行构建和清理。
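下面是用 fixture 改写上述构建和清理逻辑的一个简单示意(假设使用支持 fixture 的 behave 版本,即 1.2.6 及以上,标签名仅为示例):
```
from behave import fixture, use_fixture
from selenium import webdriver

@fixture
def browser_firefox(context):
    # yield 之前是构建逻辑yield 之后是清理逻辑
    context.browser = webdriver.Firefox()
    context.browser.implicitly_wait(10)
    yield context.browser
    context.browser.quit()

def before_tag(context, tag):
    if tag == "fixture.browser.firefox":
        use_fixture(browser_firefox, context)
```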
要了解一个`behave`项目应该是什么样子,这里是示例项目的目录结构:
![](https://opensource.com/sites/default/files/uploads/behave_dir_layout.png)
任何Python包和自定义模块都可以与`behave`框架一起使用。 使用良好的设计模式构建可扩展的测试自动化解决方案。步骤定义代码应简明扼要。
### 运行测试
要从命令行运行测试,请切换到项目的根目录并运行`behave`命令。 使用`--help`选项查看所有可用选项。
以下是一些常见用例:
```
# run all tests
behave
# run the scenarios in a feature file
behave features/web.feature
# run all tests that have the @duckduckgo tag
behave --tags @duckduckgo
# run all tests that do not have the @unit tag
behave --tags ~@unit
# run all tests that have @basket and either @add or @remove
behave --tags @basket --tags @add,@remove
```
为方便起见,选项可以保存在[config][17]文件中。
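例如,项目根目录下的`behave.ini`大致可以这样写(可用的选项名请以官方文档为准,下面仅为示意):
```
[behave]
format = pretty
logging_level = WARNING
stdout_capture = no
```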
### 其他选择
`behave`不是Python中唯一的BDD测试框架。其他好的框架包括
- `pytest-bdd` 是 `pytest` 的插件,和`behave`一样它使用Gherkin功能文件和步骤定义模块但它也利用了`pytest`的所有功能和插件。例如,它可以使用`pytest-xdist`并行运行Gherkin场景。 BDD和非BDD测试也可以使用相同的过滤器一起执行。 `pytest-bdd`还提供更灵活的目录布局。
- `radish`是一个“Gherkin增强版”框架——它将Scenario循环和前提条件添加到标准的Gherkin语言中这使得它对程序员更友好。它还提供丰富的命令行选项如`behave`。
- `lettuce`是一种较旧的BDD框架与`behave`非常相似在框架机制方面存在细微差别。然而GitHub最近显示该项目的活动很少截至2018年5月
任何这些框架都是不错的选择。
另外请记住Python测试框架可用于任何黑盒测试即使对于非Python产品也是如此 BDD框架非常适合Web和服务测试因为它们的测试是声明性的而Python是一种[很好的测试自动化语言][18]。
本文基于作者的[PyCon Cleveland 2018][19]演讲,[行为驱动的Python][20]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/behavior-driven-python
作者:[Andrew Knight][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/andylpk247
[1]:https://automationpanda.com/bdd/
[2]:https://en.wikipedia.org/wiki/Specification_by_example
[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/
[4]:https://cucumber.io/
[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/
[6]:https://docs.pipenv.org/
[7]:https://github.com/AndyLPK247/behavior-driven-python
[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/
[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/
[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/
[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/
[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters
[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes
[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions
[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming
[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures
[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files
[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/
[19]:https://us.pycon.org/2018/
[20]:https://us.pycon.org/2018/schedule/presentation/87/

View File

@ -0,0 +1,80 @@
# 5 个给孩子的非常好的 Linux 教育软件和游戏
![](https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-programs-for-kids-featured.jpg)
Linux 是一个非常强大的操作系统,因此因特网上的大多数服务器都使用它。尽管它算不上是对用户最友好的操作系统,但它的多元化还是值得称赞的。对于 Linux 来说每个人都能在它上面找到自己的所需。不论你是用它来写代码、还是用于教学或物联网IoT你总能找到一个适合自己用的 Linux 发行版。为此,许多人认为 Linux 是未来计算的最佳操作系统。
未来是属于孩子们的,让孩子们了解 Linux 是他们掌控未来的最佳方式。这个操作系统上或许并没有像 FIFA 或 PES 那样声名赫赫的游戏,但是它为孩子们提供了一些非常好的教育软件和游戏。这里有五款最好的 Linux 教育软件,可以让你的孩子领先一步。
**相关阅读**[使用一个 Linux 发行版的新手指南][1]
### 1. GCompris
如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris][2] 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验、等等。
![Linux educational software and games][3]
GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小你希望他去学习字母、颜色、和形状GCompris 也有这方面的相关内容。更重要的是它也为孩子们准备了一些益智类游戏比如国际象棋、井字棋、好记性、以及猜词游戏。GCompris 并不是一个仅在 Linux 上可运行的游戏。它也可以运行在 Windows 和 Android 上。
### 2. TuxMath
很多学生认为数学是门非常难的课程。你可以通过像 [TuxMath][4] 这样的 Linux 教育软件来帮助你的孩子掌握数学技能从而改变这种看法。TuxMath 是为孩子开发的顶级数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般落下的数学问题中,帮助 Linux 企鹅 Tux 保护它的星球。
![linux-educational-software-tuxmath-1][5]
在这些问题落下来毁坏 Tux 的星球之前找到问题的答案就可以使用激光帮助 Tux 拯救它的星球。数学问题的难度每过一关就会提升一点。这个游戏非常适合孩子,因为它可以让孩子们开动脑筋解决问题,既有助于他们学好数学,也有助于开发他们的智力。
### 3. Sugar on a Stick
[Sugar on a Stick][6] 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。
![linux-educational-software-sugar-on-a-stick][7]
关于 Sugar on a Stick 最大的一个好处是你可以将它配置在一个 U 盘上。你只要有一台 X86 的 PC插入那个 U 盘,然后就可以从 U 盘引导这个发行版。Sugar on a Stick 是由 Sugar 实验室提供的一个项目 —— 这个实验室是一个由志愿者运作的非盈利组织。
### 4. KDE Edu Suite
[KDE Edu Suite][8] 是一个用途与众不同的软件包。KDE 社区用大量覆盖不同领域的应用程序证明,它关心的不只是如何为成年人赋能,也关心年青一代如何适应他们周围的一切。它囊括了一系列孩子们使用的应用程序,从科学到数学、地理等等。
![linux-educational-software-kde-1][9]
KDE Edu 套件以孩子们成长过程中所必需的知识为基础,既能够用作学校的教学软件,也能够作为孩子们的学习 APP。它提供了大量可免费下载的软件包。KDE Edu 套件在主流的 GNU/Linux 发行版上都能安装。
### 5. Tux Paint
![linux-educational-software-tux-paint-2][10]
[Tux Paint][11] 是给孩子们的另一个非常好的 Linux 教育软件。这个屡获殊荣的绘画软件在世界各地被用于帮助培养孩子们的绘画技能它有一个简洁的、易于使用的界面和有趣的音效可以高效地帮助孩子去使用这个程序。它也有一个卡通吉祥物去鼓励孩子们使用这个程序。Tux Paint 中有许多绘画工具,它们可以帮助孩子们放飞他们的创意。
### 总结
由于这些教育软件深受孩子们的欢迎,许多学校和幼儿园都使用这些程序进行辅助教学。典型的一个例子就是 [Edubuntu][12],它是儿童教育领域中广受老师和家长们欢迎的一个基于 Ubuntu 的发行版。
Tux Paint 是另一个非常好的例子,它在这些年越来越流行,它大量地用于学校中教孩子们如何绘画。以上的这个清单并不很详细。还有成百上千的对孩子有益的其它 Linux 教育软件和游戏。
如果你还知道给孩子们的其它非常好的 Linux 教育软件和游戏,在下面的评论区分享给我们吧。
------
via: https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/
作者:[Kenneth Kimari][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/kennkimari/
[1]: https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ "The Beginners Guide to Using a Linux Distro"
[2]: http://www.gcompris.net/downloads-en.html
[3]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg "Linux educational software and games"
[4]: https://tuxmath.en.uptodown.com/ubuntu
[5]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg "linux-educational-software-tuxmath-1"
[6]: http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads
[7]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png "linux-educational-software-sugar-on-a-stick"
[8]: https://edu.kde.org/
[9]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg "linux-educational-software-kde-1"
[10]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg "linux-educational-software-tux-paint-2"
[11]: http://www.tuxpaint.org/
[12]: http://edubuntu.org/

View File

@ -0,0 +1,229 @@
Part-II 树莓派自建 NAS 云盘之数据自动备份
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
在《树莓派自建 NAS 云盘》系列的 [第一篇][1] 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动器一块用于数据存储一块用于数据备份并且通过网络文件系统NFS将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章我们将探讨数据自动备份。数据自动备份保证了数据的安全为硬件损坏后的数据恢复提供便利同时减少了文件误操作带来的不必要的麻烦。
![](https://opensource.com/sites/default/files/uploads/nas_part2.png)
### 备份策略
我们就从为小型 NAS 构想一个备份策略着手吧。我建议每天在固定的时间点有计划地备份数据,并避开可能干扰正常访问 NAS 的时间(例如有人正在访问 NAS 并写入文件的时间点)。举个例子,你可以每天凌晨 2 点进行数据备份。
另外,你还得决定每天的备份需要保留多长时间,因为如果没有时间限制,存储空间很快就会被用完。一般每天的备份保留一周即可,如果数据出了问题,你便可以很方便地从备份中恢复出原数据。但是如果需要恢复更久之前的数据怎么办?可以将每周一的备份保留一个月、每个月的备份保留更长时间。让我们把每月的备份保留一年,每年的备份保留更长时间,例如五年。
这样,五年内在备份盘上产生大量备份:
* 每周 7 个日备份
* 每月 4 个周备份
* 每年 12 个月备份
* 每五年 5 个年备份
你应该还记得,我们搭建的备份盘和数据盘大小相同(每个 1 TB。如何将不止 10 个 1TB 数据的备份从数据盘存放到只有 1TB 大小的备份盘呢?如果你创建的是完整备份,这显然不可能。因此,你需要创建增量备份,它是每一份备份都基于上一份备份数据而创建的。增量备份方式不会每隔一天就成倍的去占用存储空间,它每天只会增加一点占用空间。
以下是我的情况:我的 NAS 自 2016 年 8 月开始运行,备份盘上有 20 个备份。目前,我在数据盘上存储了 406GB 的文件,我的备份盘用了 726GB。当然备份盘空间使用率在很大程度上取决于数据的更改频率但正如你所看到的增量备份不会占用 20 个完整备份所需的空间。然而随着时间的推移1TB 空间也可能不足以进行备份。一旦数据增长接近 1TB 限制(或者接近你的备份盘容量),就应该选择更大的备份盘并将数据转移过去。
### 利用 rsync 进行数据备份
利用 rsync 命令行工具可以生成完整备份。
```
pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01
```
这条命令对挂载在 /nas/data/ 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 /nas/backup/2018-08-01 目录下。`-a` 参数表示以归档模式进行备份,它会保留所有的元数据,例如文件的修改日期、权限、拥有者以及软链接。
现在,你已经在 8 月 1 日创建了完整的初始备份,你将在 8 月 2 日创建第一个增量备份。
```
pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02
```
上面这行命令又创建了一个关于 `/nas/data` 目录中数据的备份,备份路径是 `/nas/backup/2018-08-02`。这里的参数 `--link-dest` 指定了上一次备份所在的路径。这样,这次备份会与 `/nas/backup/2018-08-01` 的备份进行比对,只备份已经修改过的文件;未修改的文件不会被复制,而是创建一个指向上一个备份中对应文件的硬链接。
使用备份文件中的硬链接文件时,你一般不会注意到硬链接和初始拷贝之间的差别。它们表现的完全一样,如果删除其中一个硬链接或者文件,其他的依旧存在。你可以把它们看做是同一个文件的两个不同入口。下面就是一个例子:
![](https://opensource.com/sites/default/files/uploads/backup_flow.png)
左侧的框是进行第二次备份后原始数据的状态。中间的框是昨天的备份:昨天的备份中只有图片 `file1.jpg`,并没有 `file2.txt`。右侧的框反映了今天的增量备份:增量备份命令创建了昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天只创建了一个硬链接,它不会额外占用磁盘空间。
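如果想验证两个备份中的某个文件确实是同一个硬链接,可以比较它们的 inode 号(路径沿用上文的示例):
```
# 若两行输出的 inode第一列相同且链接计数大于 1则说明它们指向同一份数据
$ ls -li /nas/backup/2018-08-01/file1.jpg /nas/backup/2018-08-02/file1.jpg
```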
### 自动化备份
你肯定也不想每天凌晨爬起来输入命令进行数据备份吧。你可以创建一个定时任务来调用下面的脚本,让备份自动进行:
```
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
DATADIR=/nas/data/
BACKUPDIR=/nas/backup/
SCRIPTDIR=/nas/data/backup_scripts
LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1)
TODAYPATH=${BACKUPDIR}/${TODAY}
if [[ ! -e ${TODAYPATH} ]]; then
        mkdir -p ${TODAYPATH}
fi
rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@
${SCRIPTDIR}/deleteOldBackups.sh
```
第一段代码指定了数据路径、备份路径、脚本路径,以及昨天和今天的备份路径。第二段代码调用 `rsync` 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除过期的、没有必要保留的备份数据。如果不想频繁地调用 `deleteOldBackups.sh`,你也可以手动去执行它。
下面是今天讨论的备份策略的一个简单完整的示例脚本。
```
#!/bin/bash
BACKUPDIR=/nas/backup/
function listYearlyBackups() {
        for i in 0 1 2 3 4 5
                do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1
        done
}
function listMonthlyBackups() {
        for i in 0 1 2 3 4 5 6 7 8 9 10 11 12
                do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1
        done
}
function listWeeklyBackups() {
        for i in 0 1 2 3 4
                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")"
        done
}
function listDailyBackups() {
        for i in 0 1 2 3 4 5 6
                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")"
        done
}
function getAllBackups() {
        listYearlyBackups
        listMonthlyBackups
        listWeeklyBackups
        listDailyBackups
}
function listUniqueBackups() {
        getAllBackups | sort -u
}
function listBackupsToDelete() {
        ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")"
}
cd ${BACKUPDIR}
listBackupsToDelete | while read file_to_delete; do
        rm -rf ${file_to_delete}
done
```
这段脚本会首先根据你的备份策略列出所有需要保存的备份文件,然后它会删除那些再也不需要了的备份目录。
下面创建一个定时任务去执行上面这段代码。以 root 用户权限打开 `crontab -e`,输入以下这段命令,它将会创建一个每天凌晨 2 点去执行 `/nas/data/backup_scripts/daily.sh` 的定时任务。
```
0 2 * * * /nas/data/backup_scripts/daily.sh
```
有关创建定时任务请参考 [cron 创建定时任务][2]。
你也可以用下面的方法来加强你的备份策略,防止备份数据被误删除或破坏:
* 当没有备份任务时,卸载你的备份盘,或者将它挂载为只读盘;
* 利用远程服务器作为你的备份盘,这样就可以通过互联网同步数据。
本文中备份策略示例是备份一些我觉得有价值的数据,你也可以根据个人需求去修改这些策略。
我将会在 《树莓派自建 NAS 云盘》 系列的第三篇文章中讨论 [Nextcloud][3]。Nextcloud 提供了更方便的方式去访问 NAS 云盘上的数据并且它还提供了离线操作,你还可以在客户端中同步你的数据。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/automate-backups-raspberry-pi
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jrg](https://github.com/jrglinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ntlx
[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
[2]: https://opensource.com/article/17/11/how-use-cron-linux
[3]: https://nextcloud.com/

View File

@ -0,0 +1,108 @@
5 个很酷的音乐播放器
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg)
你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍几款在 Fedora 上运行的音乐播放器,无论你的音乐库是大是小,甚至根本没有音乐库,总有一款适合你。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。
### Quod Libet
Quod Libet 是你的大型音频库的管理员。如果你的音频库很庞大你不仅想听还想管理它Quod Libet 可能是一个很好的选择。
![][1]
Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 - 因此一切都在你的控制之下。额外地,它还有各种插件可用,从简单的均衡器到 [last.fm][2] 同步。你也可以直接从 [Soundcloud][3] 搜索和播放音乐。
Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行[Silverblue][5],它在 [Flathub][4] 中也有。使用 Gnome Software 或命令行安装它:
```
$ sudo dnf install quodlibet
```
### Audacious
如果你喜欢简单的音乐播放器,甚至可能看起来像传说中的 WinampAudacious 可能是你的不错选择。
![][6]
Audacious 可能不会一次性接管你的所有音乐,但如果你习惯以文件的形式组织音乐,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。
额外地,你可以让它看起来像 Winamp。要让它与上面的截图相同请进入 “Settings/Appearance”选择顶部的 “Winamp Classic Interface”然后选择右下方的 “Refugee” 皮肤,这样就大功告成了!
Audacious 在 Fedora 中作为 RPM 提供,可以使用 Gnome Software 或在终端运行以下命令安装:
```
$ sudo dnf install audacious
```
### Lollypop
Lollypop 是一个音乐播放器,它与 GNOME 集成良好。如果你喜欢 GNOME 的外观并且想要一个集成良好的音乐播放器Lollypop 可能适合你。
![][7]
除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持黑暗主题。
额外地Lollypop 有一个集成的封面下载器和一个所谓的派对模式(右上角的音符按钮),它可以自动选择和播放音乐。它还集成了 [last.fm][2] 或 [libre.fm][8] 等在线服务。
它有 Fedora 的 RPM 也有用于 [Silverblue][5] 工作站的 [Flathub][4],使用 Gnome Software 或终端进行安装:
```
$ sudo dnf install lollypop
```
### Gradio
如果你没有任何音乐但仍喜欢听怎么办或者你只是喜欢收音机Gradio 就是为你准备的。
![][9]
Gradio 是一个简单的收音机,它允许你搜索和播放网络电台。你可以按国家、语言或直接搜索找到它们。额外地,它可视化地集成到了 GNOME Shell 中,可以与 HiDPI 屏幕配合使用,并且可以选择黑暗主题。
可以在 [Flathub][4] 中找到 Gradio它同时可以运行在 Fedora Workstation 和 [Silverblue][5] 中。使用 Gnome Software 安装它。
### sox
你喜欢使用终端在工作时听一些音乐吗?多亏有了 sox你不必离开终端。
![][10]
sox 是一个非常简单的基于终端的音乐播放器。你需要做的就是运行如下命令:
```
$ play file.mp3
```
接着 sox 就会为你播放。除了单独的音频文件外sox 还支持 m3u 格式的播放列表。
额外地,因为 sox 是基于终端的程序,你可以在 ssh 中运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux][11] 一起使用,这样即使会话关闭也可以继续听。
sox 在 Fedora 中以 RPM 提供。运行下面的命令安装:
```
$ sudo dnf install sox
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/5-cool-music-player-apps/
作者:[Adam Šamalík][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/asamalik/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png
[2]:https://last.fm
[3]:https://soundcloud.com/
[4]:https://flathub.org/home
[5]:https://teamsilverblue.org/
[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png
[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png
[8]:https://libre.fm
[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png
[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png
[11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/

View File

@ -0,0 +1,116 @@
[已解决] Ubuntu 中的 “sub process usr bin dpkg returned an error code 1” 错误
======
如果你在 Ubuntu Linux 上安装软件时遇到 “sub process usr bin dpkg returned an error code 1”请按照以下步骤进行修复。
Ubuntu 和其他基于 Debian 的发行版中的一个常见问题是已经损坏的包。你尝试更新系统或安装新软件包时遇到类似 “Sub-process /usr/bin/dpkg returned an error code” 的错误。
这就是前几天发生在我身上的事。我试图在 Ubuntu 中安装一个电台程序时,它给了我这个错误:
```
Unpacking python-gst-1.0 (1.6.2-1build1) ...
Selecting previously unselected package radiotray.
Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ...
Unpacking radiotray (0.7.3-5ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up polar-bookshelf (1.0.0-beta56) ...
ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
dpkg: error processing package polar-bookshelf (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
Setting up python-gst-1.0 (1.6.2-1build1) ...
Setting up radiotray (0.7.3-5ubuntu1) ...
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
这里最后三行非常重要。
```
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
它告诉我 polar-bookshelf 包引发了问题。这可能对你如何修复这个错误至关重要。
### 修复 Sub-process /usr/bin/dpkg returned an error code (1)
![Fix update errors in Ubuntu Linux][1]
让我们尝试修复这个损坏的包。我将展示几种方法,你可以逐一尝试。排在前面的方法简单易用,几乎不用动脑子。
在尝试下文讨论的每种方法之后,你都应该先运行 `sudo apt update`,然后再尝试安装新软件包或进行升级。
#### 方法 1重新配置包数据库
你可以尝试的第一种方法是重新配置包数据库。数据库可能在安装包时损坏了。重新配置通常可以解决问题。
```
sudo dpkg --configure -a
```
#### 方法 2强制安装
如果是之前中断安装的包,你可以尝试强制安装。
```
sudo apt-get install -f
```
#### 方法 3尝试删除有问题的包
如果上面的方法没有解决你的问题,你可以尝试手动删除有问题的包。但请不要对 Linux 内核相关的软件包(以 linux- 开头)执行此操作。
```
sudo apt remove <package-name>
```
#### 方法 4删除有问题的包中的信息文件
这应该是你最后的选择。你可以尝试从 /var/lib/dpkg/info 中删除与相关软件包关联的文件。
**你需要了解一些基本的 Linux 命令,才能弄清楚这里发生了什么,以及如何将它对应到你自己遇到的问题上。**
就我而言,我在 polar-bookshelf 中遇到问题。所以我查找了与之关联的文件:
```
ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
```
现在我需要做的就是删除这些文件:
```
sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp
```
运行 `sudo apt update`,接着你应该就能像往常一样安装软件了。
#### 哪种方法适合你(如果有效)?
我希望这篇短文可以帮助你修复 “E: Sub-process /usr/bin/dpkg returned an error code (1)” 错误。
如果它对你有用,是哪种方法?你是否用其他方法修复过此错误?如果是,请分享出来以帮助其他人解决此问题。
--------------------------------------------------------------------------------
via: https://itsfoss.com/dpkg-returned-an-error-code-1/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/fix-common-update-errors-ubuntu.jpeg

View File

@ -1,172 +0,0 @@
每位 Ubuntu 18.04 用户都应该知道的快捷键
======
了解快捷键能够提升您的生产力。这里有一些实用的 Ubuntu 快捷键助您像专业人士一样使用 Ubuntu。
虽然您可以结合键盘和鼠标来使用操作系统,但使用键盘快捷键能节省您的时间。
注意:本文中提到的键盘快捷键适用于 Ubuntu 18.04 GNOME 版。 通常,它们中的大多数(或者全部)也适用于其他的 Ubuntu 版本,但我不能够保证。
![Ubuntu keyboard shortcuts][1]
### 实用的 Ubuntu 快捷键
让我们来看一看 Ubuntu GNOME 必备的快捷键吧!通用的快捷键如 Ctrl+C复制Ctrl+V粘贴或者 Ctrl+S保存不再赘述。
注意Linux 中的 Super 键即键盘上带有 Windows 图标的键,本文中我使用了大写字母,但这不代表你需要按下 shift 键比如T 代表键盘上的t而不代表 Shift+t。
#### 1\. Super 键:打开活动搜索界面
使用 Super 键可以打开活动菜单。如果你只能在 Ubuntu 上使用一个快捷键,那只能是 Super 键。
想要打开一个应用程序?按下 Super 键然后搜索应用程序。如果搜索的应用程序未安装,它会推荐来自应用中心的应用程序。
想要看看有哪些正在运行的程序?按下 Super 键,屏幕上就会显示所有正在运行的 GUI 应用程序。
想要使用工作区吗?只需按下 Super 键,您就可以在屏幕右侧看到工作区选项。
#### 2\. Ctrl+Alt+T打开 Ubuntu 终端窗口
![Ubuntu Terminal Shortcut][2]
*使用 Ctrl+alt+T 来打开终端窗口*
想要打开一个新的终端,您只需使用快捷键 Ctrl+Alt+T。这是我在 Ubuntu 中最喜欢的键盘快捷键。甚至在我的许多 FOSS 教程中,当需要打开终端窗口时,我都会提到这个快捷键。
#### 3\. Super+L 或 Ctrl+Alt+L锁屏
当您离开电脑时锁定屏幕,是最基本的安全习惯之一。您可以使用 Super + L 快捷键,而不是繁琐地点击屏幕右上角然后选择锁定屏幕选项。
有些系统也会使用 Ctrl+Alt+L 键锁定屏幕。
#### 4\. Super+D 或 Ctrl+Alt+D显示桌面
按下 Super + D 可以最小化所有正在运行的应用程序窗口并显示桌面。
再次按 Super + D 将重新打开所有正在运行的应用程序窗口,像之前一样。
您也可以使用 Ctrl + Alt + D 来实现此目的。
#### 5\. Super+A显示应用程序菜单
您可以通过单击屏幕左下角的 9个点打开 Ubuntu 18.04 GNOME 中的应用程序菜单。 但是一个更快捷的方法是使用 Super + A 快捷键。
它将显示应用程序菜单,您可以在其中查看或搜索系统上已安装的应用程序。
您可以使用 Esc 键退出应用程序菜单界面。
#### 6\. Super+Tab 或 Alt+Tab在运行中的应用程序间切换
如果您运行的应用程序不止一个,则可以使用 Super + Tab 或 Alt + Tab 快捷键在应用程序之间切换。
按住 Super 键同时按下 Tab 键,即可显示应用程序切换器。 按住 Super 的同时,继续点击 Tab 键在应用程序之间进行选择。 当光标在所需的应用程序上时,松开 Super 和 Tab 键。
默认情况下,应用程序切换器从左向右移动。 如果要从右向左移动,可使用 Super + Shift + Tab 快捷键。
在这里您也可以用 Alt 键代替 Super 键。
提示:如果有多个应用程序实例,您可以使用 Super + \` 快捷键在这些实例之间切换。
#### 7\. Super+箭头键:移动窗口位置
<https://player.vimeo.com/video/289091549>
这个快捷键也适用于 Windows 系统。 使用应用程序时,按下 Super 和左箭头键,应用程序将贴合屏幕的左边缘,占用屏幕的左半边。
同样,按下 Super 和右箭头键会使应用程序贴合右边缘。
按下 Super 和上箭头键将最大化应用程序窗口Super 和下箭头键则使应用程序恢复到其正常的大小。
#### 8\. Super+M切换到通知栏
GNOME 中有一个通知栏,您可以在其中查看系统和应用程序活动的通知,这里也有一个日历。
![Notification Tray Ubuntu 18.04 GNOME][3]
*通知栏*
使用 Super + M 快捷键,您可以打开此通知栏。 如果再次按这些键,将关闭打开的通知托盘。
使用 Super+V 也可实现相同的功能。
#### 9\. Super+Space切换输入法用于多语言设置
如果您使用多种语言,可能您的系统上安装了多个输入法。例如,我需要在 Ubuntu 上同时使用[印地语][4]和英语,所以我安装了印地语(梵文)输入法以及默认的英语输入法。
如果您也使用多语言设置,则可以使用 Super + Space 快捷键快速更改输入法。
#### 10\. Alt+F2运行控制台
这适用于高级用户。 如果要运行快速命令,而不是打开终端并在其中运行命令,则可以使用 Alt + F2 运行控制台。
![Alt+F2 to run commands in Ubuntu][5]
*控制台*
当您使用只能在终端运行的应用程序时,这尤其有用。
#### 11\. Ctrl+Q关闭应用程序窗口
如果您有正在运行的应用程序,可以使用 Ctrl + Q 快捷键关闭应用程序窗口。您也可以使用 Ctrl + W 来实现此目的。
Alt + F4 是关闭应用程序窗口更“通用”的快捷方式。
不过 Ctrl + Q 并不适用于某些应用程序,例如 Ubuntu 中的默认终端。
#### 12\. Ctrl+Alt+箭头键:切换工作区
![Workspace switching][6]
*切换工作区*
如果您是使用工作区的重度用户,可以使用 Ctrl + Alt + 上箭头和 Ctrl + Alt + 下箭头键在工作区之间切换。
#### 13\. Ctrl+Alt+Del注销
不会!在 Linux 中使用著名的快捷键 Ctrl+Alt+Del 并不会像在 Windows 中一样打开任务管理器(除非您使用自定义快捷键)。
![Log Out Ubuntu][7]
*注销*
在普通的 GNOME 桌面环境中,您可以使用 Ctrl + Alt + Del 键打开关机菜单,但 Ubuntu 并不总是遵循此规范,因此当您在 Ubuntu 中使用 Ctrl + Alt + Del 键时,它会打开注销菜单。
### 在 Ubuntu 中使用自定义键盘快捷键
您不是只能使用默认的键盘快捷键,您可以根据需要创建自己的自定义键盘快捷键。
转到“设置->设备->键盘”,您将在这里看到系统的所有键盘快捷键。向下滚动到底部,您将看到“自定义快捷方式”选项。
![Add custom keyboard shortcut in Ubuntu][8]
您需要提供易于识别的快捷键名称、使用快捷键时运行的命令,以及您自定义的按键组合。
### Ubuntu 中你最喜欢的键盘快捷键是什么?
快捷键永无止境。如果需要,你可以看一看所有可能的 [GNOME 快捷键][9],看其中有没有你需要用到的快捷键。
您可以学习使用您经常使用应用程序的快捷键,这是很有必要的。例如,我使用 Kazam 进行[屏幕录制][10],键盘快捷键帮助我方便地暂停和开始录像。
您最喜欢、最离不开的 Ubuntu 快捷键是什么?
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-shortcuts/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[XiatianSummer](https://github.com/XiatianSummer)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-keyboard-shortcuts.jpeg
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-terminal-shortcut.jpg
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/notification-tray-ubuntu-gnome.jpeg
[4]: https://itsfoss.com/type-indian-languages-ubuntu/
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/console-alt-f2-ubuntu-gnome.jpeg
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/workspace-switcher-ubuntu.png
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/log-out-ubuntu.jpeg
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/custom-keyboard-shortcut.jpg
[9]: https://wiki.gnome.org/Design/OS/KeyboardShortcuts
[10]: https://itsfoss.com/best-linux-screen-recorders/

View File

@ -1,118 +0,0 @@
3个开源日志聚合工具
======
日志聚合系统可以帮助我们故障排除并进行其他的任务。以下是三个主要工具介绍。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr)
指标聚合与日志聚合有何不同?日志不能包括指标吗?日志聚合系统不能做与指标聚合系统相同的事情吗?
这些是我经常听到的问题。我还看到供应商把他们的日志聚合系统当作解决所有可观察性observability问题的方案来推销。日志聚合是一个有价值的工具但它通常对时间序列数据的支持不够好。
时间序列指标聚合系统中有两个很有价值的特性:按固定间隔采集数据,以及专门为时间序列数据定制的存储系统。固定的采集间隔让用户可以持续地得出真实的数学分析结果。如果非要让日志聚合系统定期收集指标数据,它也可以做到,但是它的存储系统没有针对指标聚合系统中典型的查询类型进行优化,用日志聚合工具的存储系统处理这类查询将花费更多的资源和时间。
所以,我们知道日志聚合系统可能不适合处理时间序列数据,但是它适合什么呢?日志聚合系统是收集事件数据的好地方。这些事件是不规则发生的、但非常重要的活动。最好的例子是 web 服务的访问日志,它们很重要,因为我们想知道是谁、在什么时候访问了我们的系统。另一个例子是应用程序的错误记录,因为它不是正常的操作记录,所以在故障排除过程中可能很有价值。
日志记录的一些规则:
* 包含时间戳
* 使用 JSON 格式
* 不记录无关紧要的事件
* 记录所有应用程序的错误
* 视情况记录警告
* 开启日志记录
* 以可读的形式记录消息
* 不在生产环境中记录一般性信息
* 不记录任何人类无法阅读或无法对其作出反应的内容
### 云的成本
当研究日志聚合工具时,云可能看起来是一个有吸引力的选择。然而,这可能会带来巨大的成本。当跨数百或数千台主机和应用程序聚合时,日志数据是大量的。在基于云的系统中,数据的接收、存储和检索是昂贵的。
举一个真实系统的例子:一个由大约 500 个节点和几百个应用程序组成的集群,每天产生 200GB 的日志数据。这个系统可能还有改进的空间,但即使将数据量减少一半,在许多 SaaS 产品中每月也要花费将近 10000 美元,而且这通常只保留 30 天的数据。如果你想查看逐年的趋势数据,那是不可能的。
这并不是说不要使用这些系统,尤其是对于较小的组织,它们可能非常有价值。这里只是想指出,成本可能会很高,而当真的花到这么多钱时,会令人非常沮丧。本文的其余部分将集中讨论自托管的开源和商业解决方案。
### 工具选择
#### ELK
[ELK][1] 是 Elasticsearch、Logstash 和 Kibana 的简称,是最流行的开源日志聚合工具,被 Netflix、Facebook、微软、LinkedIn 和思科所使用。这三个组件都由 [Elastic][2] 开发和维护。[Elasticsearch][3] 本质上是一个 NoSQL 数据库,以 Lucene 搜索引擎实现。[Logstash][4] 是一个日志管道系统,可以接收数据、转换数据,并将其加载到像 Elasticsearch 这样的存储中。[Kibana][5] 则是 Elasticsearch 之上的可视化层。
几年前Beats 被引入。Beats 是数据采集器。它们简化了将数据运送到日志存储的过程。用户不需要了解每种日志的正确语法,而是可以安装一个 Beats 来正确导出 NGINX 日志或代理日志以便在Elasticsearch 中有效地使用它们。
安装生产环境级的 ELK 栈时,可能还会用到其他几个组件,如 [Kafka][6]、[Redis][7] 和 [NGINX][8]。此外,用 Fluentd 替换 Logstash 也很常见,我们将在后面讨论。这个系统操作起来很复杂,这在早期导致了很多问题和抱怨。这些问题目前基本上已经被修复,不过它仍然是一个复杂的系统,如果你的使用规模不大,可能并不值得费这个劲。
也就是说,现成的托管服务是有的,所以你不必为此操心。[Logz.io][9] 就是一个选择,但是如果你的数据量很大,它的标价可能有点高。当然,数据量没那么大的话,你可能也用不到它。如果你负担不起 Logz.io可以看看 [AWS Elasticsearch Service][10]ES。ES 是 Amazon Web ServicesAWS提供的一项服务可以让 Elasticsearch 很容易地快速运转起来。它还拥有使用 Lambda 和 S3 将所有 AWS 日志导入 ES 的工具。这是一个更便宜的选择,但是需要一些管理操作,并且有一些功能限制。
Elasticsearch 的母公司 Elastic [提供][11]的产品则更为强大它采用开放核心open core模式为分析工具和报告提供了额外的选项并且可以托管在谷歌云平台或 AWS 上。这种工具加托管平台的组合比大多数 SaaS 选项更便宜,所以它可能是最好的选择,而且物有所值。该系统可以有效地取代[安全信息和事件管理][12]SIEM系统或提供其功能。
ELK 栈通过 Kibana 提供了很好的可视化工具但它缺少警报功能。Elastic 在付费的 X-Pack 插件中提供了警报功能但开源版本中没有任何内置的警报功能。Yelp 开发了 [ElastAlert][13] 来解决这个问题,此外也还有其他一些办法。这个额外的软件相当健壮,但是它让本已复杂的系统变得更加复杂。
#### Graylog
[Graylog][14] 最近越来越受欢迎,它由 Lennart Koopmann 于 2010 年创建并开发,两年后,一家同名公司诞生了。尽管它的使用者越来越多,但仍然远远落后于 ELK 栈这也意味着它由社区开发的功能较少。不过它可以使用与 ELK 栈相同的 Beats。另外由于 Graylog Collector Sidecar 是用 [Go][15] 编写的Graylog 在 Go 社区中赢得了赞誉。
Graylog 使用 Elasticsearch 和 [MongoDB][16],并提供 Graylog Server这使得它和 ELK 栈一样复杂也许还要复杂一些。然而Graylog 的开源版本就内置了警报功能以及其他一些值得注意的功能例如流Stream、消息重写和地理定位。
流功能可以让数据在被处理时实时路由到特定的流Stream。使用此功能用户可以在一个流中看到所有数据库错误在另一个流中看到 web 服务器错误。当添加新项目或超过阈值时,甚至可以基于这些流发出警报。延迟可能是日志聚合系统中最大的问题之一,而流功能消除了 Graylog 中的这一问题:日志一旦进入,就可以通过流路由到其他系统,而无需等待全部处理完毕。
消息重写功能使用开源规则引擎 [Drools][17],可以根据用户定义的规则文件评估所有传入的消息,从而删除消息(称为黑名单)、添加或删除字段,或者修改消息。
Graylog 最酷的功能是它的地理定位功能,它支持在地图上绘制 IP 地址。这是一个相当常见的功能,在 Kibana 也可以这样使用,但是它增加了很多价值——特别是如果你想将它用作 SIEM 系统。地理定位功能在系统的开源版本中提供。
如果你需要Graylog 公司会对开源版本提供付费支持。它还以开放核心模式提供企业版,增加了存档、审计日志记录和更多支持。如果你不需要 Graylog 公司的支持或托管,也可以独立使用它。
#### Fluentd
[Fluentd][18] 由 [Treasure Data][19] 开发,[CNCF][20] 已经将它接纳为孵化项目。它用 C 和 Ruby 编写,被 [AWS][21] 和 [Google Cloud][22] 所推荐。在许多安装环境中Fluentd 已经成为 Logstash 的常用替代品。它充当本地聚合器,收集所有节点的日志并将其发送到中央存储系统,但它本身并不是日志聚合系统。
它使用强大的插件系统,提供与不同数据源和数据输出快速而简单的集成。由于有超过 500 个插件可用,你的大多数用例应该都能覆盖到;如果没有,这正是一个为开源社区做贡献的机会。
由于占用内存少只有几十兆字节且吞吐量高Fluentd 是 Kubernetes 环境中的常见选择。在像 [Kubernetes][23] 这样每个 pod 都带有一个 Fluentd 边车sidecar的环境中内存消耗会随着每个新 pod 的创建而线性增加,使用 Fluentd 将大大降低你的系统资源占用。对于用 Java 开发的同类工具来说,内存开销是一个常见问题,因为它们的设计初衷是每个节点运行一个实例,内存开销并不是主要考虑。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/open-source-log-aggregation-tools
作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[1]: https://www.elastic.co/webinars/introduction-elk-stack
[2]: https://www.elastic.co/
[3]: https://www.elastic.co/products/elasticsearch
[4]: https://www.elastic.co/products/logstash
[5]: https://www.elastic.co/products/kibana
[6]: http://kafka.apache.org/
[7]: https://redis.io/
[8]: https://www.nginx.com/
[9]: https://logz.io/
[10]: https://aws.amazon.com/elasticsearch-service/
[11]: https://www.elastic.co/cloud
[12]: https://en.wikipedia.org/wiki/Security_information_and_event_management
[13]: https://github.com/Yelp/elastalert
[14]: https://www.graylog.org/
[15]: https://opensource.com/tags/go
[16]: https://www.mongodb.com/
[17]: https://www.drools.org/
[18]: https://www.fluentd.org/
[19]: https://www.treasuredata.com/
[20]: https://www.cncf.io/
[21]: https://aws.amazon.com/blogs/aws/all-your-data-fluentd/
[22]: https://cloud.google.com/logging/docs/agent/
[23]: https://opensource.com/resources/what-is-kubernetes

View File

@ -0,0 +1,170 @@
让你提高效率的 Linux 技巧
======
想要在 Linux 命令行工作中提高效率,你需要使用一些技巧。
![](https://images.idgesg.net/images/article/2018/09/boy-jumping-off-swing-100772498-large.jpg)
巧妙的 Linux 命令行技巧能让你节省时间、避免出错,还能让你记住和复用各种复杂的命令,专注在需要做的事情本身,而不是做事的方式。以下介绍一些好用的命令行技巧。
### 命令编辑
如果要对一个已输入的命令进行修改,可以使用 ^aCtrl + a或 ^eCtrl + e将光标快速移动到命令的开头或命令的末尾。
还可以使用 `^` 字符实现对上一个命令的文本替换并重新执行命令,例如 `^before^after^` 相当于把上一个命令中的 `before` 替换为 `after` 然后重新执行一次。
```
$ eho hello world <== 错误的命令
Command 'eho' not found, did you mean:
command 'echo' from deb coreutils
command 'who' from deb coreutils
Try: sudo apt install <deb name>
$ ^e^ec^ <== 替换
echo hello world
hello world
```
### 使用远程机器的名称登录到机器上
如果需要通过命令行登录其它机器,可以考虑添加别名。在别名中,可以填入需要登录的用户名(与本地系统上的用户名可能相同,也可能不同)以及远程机器的登录信息,例如使用 `server_name='ssh -v -l username IP-address'` 这样的别名命令:
```
$ alias butterfly="ssh -v -l jdoe 192.168.0.11"
```
也可以通过在 `/etc/hosts` 文件中添加记录或者在 DNS 服务器中加入解析记录来把 IP 地址替换成易记的机器名称。
执行 `alias` 命令可以列出机器上已有的别名。
```
$ alias
alias butterfly='ssh -v -l jdoe 192.168.0.11'
alias c='clear'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias list_repos='grep ^[^#] /etc/apt/sources.list /etc/apt/sources.list.d/*'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias show_dimensions='xdpyinfo | grep '\''dimensions:'\'''
```
只要将新的别名添加到 `~/.bashrc` 或类似的文件中,就可以让别名在每次登录后都能立即生效。
### 冻结、解冻终端界面
^sCtrl + s会通过执行流量控制命令 XOFF 来停止终端输出内容,这会对 PuTTY 会话和桌面终端窗口产生影响。如果误输入了这个命令,可以使用 ^qCtrl + q让终端恢复响应。所以只需要记住 ^q 这个组合键就可以了,毕竟这种情况并不多见。
### 复用命令
Linux 提供了很多让用户复用命令的方法,其核心是通过历史缓冲区收集执行过的命令。复用命令的最简单方法是输入 `!` 然后接最近使用过的命令的开头字母;当然也可以按键盘上的向上箭头,直到看到要复用的命令,然后按 Enter 键。还可以先使用 `history` 显示命令历史,然后输入 `!` 后面再接命令历史记录中需要复用的命令旁边的数字。
```
!! <== 复用上一条命令
!ec <== 复用上一条以 “ec” 开头的命令
!76 <== 复用命令历史中的 76 号命令
```
### 查看日志文件并动态显示更新内容
使用形如 `tail -f /var/log/syslog` 的命令可以查看指定的日志文件,并动态显示文件中新增的内容,在需要监控向日志文件中追加内容的事件时相当有用。这个命令会输出文件内容的末尾部分,并逐渐显示新增的内容。
```
$ tail -f /var/log/auth.log
Sep 17 09:41:01 fly CRON[8071]: pam_unix(cron:session): session closed for user smmsp
Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session opened for user root
Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session closed for user root
Sep 17 09:47:00 fly sshd[8124]: Accepted password for shs from 192.168.0.22 port 47792
Sep 17 09:47:00 fly sshd[8124]: pam_unix(sshd:session): session opened for user shs by
Sep 17 09:47:00 fly systemd-logind[776]: New session 215 of user shs.
Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session opened for user root
Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session closed for user root
<== 等待显示追加的内容
```
### 寻求帮助
对于大多数 Linux 命令,都可以通过在输入命令后加上选项 `--help` 来获得这个命令的作用、用法以及它的一些相关信息。除了 `man` 命令之外, `--help` 选项可以让你在不使用所有扩展选项的情况下获取到所需要的内容。
```
$ mkdir --help
Usage: mkdir [OPTION]... DIRECTORY...
Create the DIRECTORY(ies), if they do not already exist.
Mandatory arguments to long options are mandatory for short options too.
-m, --mode=MODE set file mode (as in chmod), not a=rwx - umask
-p, --parents no error if existing, make parent directories as needed
-v, --verbose print a message for each created directory
-Z set SELinux security context of each created directory
to the default type
--context[=CTX] like -Z, or if CTX is specified then set the SELinux
or SMACK security context to CTX
--help display this help and exit
--version output version information and exit
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Full documentation at: <http://www.gnu.org/software/coreutils/mkdir>
or available locally via: info '(coreutils) mkdir invocation'
```
### 谨慎删除文件
如果要谨慎使用 `rm` 命令,可以为它设置一个别名,在删除文件之前需要进行确认才能删除。有些系统管理员会默认使用这个别名,对于这种情况,你可能需要看看下一个技巧。
```
$ rm -i <== 请求确认
```
### 关闭别名
你可以使用 `unalias` 命令以交互方式禁用别名。它不会更改别名的配置,而仅仅是暂时禁用,直到下次登录或重新设置了这一个别名才会重新生效。
```
$ unalias rm
```
如果已经将 `rm -i` 默认设置为 `rm` 的别名,但你希望在删除文件之前不必进行确认,则可以将 `unalias` 命令放在一个启动文件(例如 `~/.bashrc`)中。
### 使用 sudo
如果你经常在只有 root 用户才能执行的命令前忘记使用 `sudo`,这里有两个方法可以解决。一是利用命令历史记录,可以使用 `sudo !!`(使用 `!!` 来运行最近的命令,并在前面添加 `sudo`)来重复执行,二是设置一些附加了所需 `sudo` 的命令别名。
```
$ alias update='sudo apt update'
```
### 更复杂的技巧
有时命令行技巧并不仅仅是一个别名。毕竟,别名能帮你做的只有替换命令以及增加一些命令参数,节省了输入的时间。但如果需要比别名更复杂功能,可以通过编写脚本、向 `.bashrc` 或其他启动文件添加函数来实现。例如,下面这个函数会在创建一个目录后进入到这个目录下。在设置完毕后,执行 `source .bashrc`,就可以使用 `md temp` 这样的命令来创建目录立即进入这个目录下。
```
md () { mkdir -p "$@" && cd "$1"; }
```
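设置完成后,大致可以这样使用(路径和输出仅为示意):
```
$ source ~/.bashrc
$ md projects/demo
$ pwd
/home/user/projects/demo
```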
### 总结
使用 Linux 命令行是在 Linux 系统上工作最有效也最有趣的方法,但配合命令行技巧和巧妙的别名可以让你获得更好的体验。
加入 [Facebook][1] 和 [LinkedIn][2] 上的 Network World 社区可以和我们一起讨论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,112 @@
Part-III 树莓派自建 NAS 云盘之云盘构建
======
用树莓派 NAS 云盘来保护数据的安全!
在前面两篇文章中(译注:文章链接 [Part-I][1]、[Part-II][2]),我们讨论了用树莓派搭建一个 NASnetwork-attached storage所需要的一些 [软硬件环境及其操作步骤][1],还制定了适当的 [备份策略][2] 来保护 NAS 上的数据。本文中,我们将介绍如何利用 [Nextcloud][3] 来方便快捷地存储、获取以及分享你的数据。
### 必要的准备工作
想要方便的使用 Nextcloud需要一些必要的准备工作。首先你需要一个指向 Nextcloud 的域名。方便起见,本文将使用 **nextcloud.pi-nas.com** 。如果你是在家庭网络里运行,你需要为该域名配置 DNS 服务(动态域名解析服务)并在路由器中开启 80 端口和 443 端口转发功能(如果需要使用 https则需要开启 443 端口转发,如果只用 http80 端口足以)。
你可以使用 [ddclient][4] 在树莓派中自动更新 DNS。
### 安装 Nextcloud
为了在树莓派(参考 [Part-I][1] 中步骤设置)中运行 Nextcloud首先用命令 **apt** 安装 以下的一些依赖软件包。
```
sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
```
其次,下载 Nextcloud。在树莓派中利用 **wget** 下载其 [最新的版本][5]。在 [Part-I][1] 文章中,我们将两块磁盘驱动器连接到了树莓派,一块用于存储当前数据,另一块用于备份。这里在数据存储盘上安装 Nextcloud以确保数据每晚都能自动备份。
```
sudo mkdir -p /nas/data/nextcloud
sudo chown pi /nas/data/nextcloud
cd /nas/data/
wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip
unzip nextcloud.zip
sudo ln -s /nas/data/nextcloud /var/www/nextcloud
sudo chown -R www-data:www-data /nas/data/nextcloud
```
截止到写作本文时Nextcloud 最新版更新到如上述代码中所示的 14.0.0 版本。Nextcloud 正在快速的迭代更新中,所以你可以在你的树莓派中安装更新一点的版本。
### 配置数据库
如上所述Nextcloud 已安装完毕。之前安装依赖软件包时,就已经装好了 MySQL 数据库,用来存储 Nextcloud 的一些重要数据(例如,那些你创建的、可以访问 Nextcloud 的用户的信息)。如果你更愿意使用 Postgres 数据库,则上面的依赖软件包需要做一些调整。
以 root 权限启动 MySQL:
```
sudo mysql
```
这将会打开 SQL 提示符界面,在那里输入如下指令(将其中的占位符替换为你的数据库连接密码),为 Nextcloud 创建一个数据库。
```
CREATE USER nextcloud IDENTIFIED BY '<insert-password-here>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```
**Ctrl+D** 或输入 **quit** 退出 SQL 提示符界面。
### Web 服务器配置
Nextcloud 可以配置以适配于 Nginx 服务器或者其他 Web 服务器运行的环境。但本文中,我决定在我的树莓派 NAS 中运行 Apache 服务器(如果你有其他效果更好的服务器选择方案,不妨也跟我分享一下)。
首先,为你的 Nextcloud 域名创建一个虚拟主机。创建配置文件 **/etc/apache2/sites-available/001-nextcloud.conf**,在其中输入下面的内容,并将其中的 ServerName 修改为你自己的域名。
```
<VirtualHost *:80>
ServerName nextcloud.pi-nas.com
ServerAdmin admin@pi-nas.com
DocumentRoot /var/www/nextcloud/
<Directory /var/www/nextcloud/>
AllowOverride None
</Directory>
</VirtualHost>
```
使用下面的命令来启动该虚拟主机。
```
a2ensite 001-nextcloud
sudo systemctl reload apache2
```
现在,你应该可以通过浏览器中输入域名访问到 web 服务器了。这里我推荐使用 HTTPS 协议而不是 HTTP 协议来访问 Nextcloud。一个简单而且免费的方法就是利用 [Certbot][7] 下载 [Let's Encrypt][6] 证书,然后设置定时任务自动刷新。这样就避免了自签证书等的麻烦。参考 [如何在树莓派中安装][8] Certbot 。在配置 Certbot 的时候,你甚至可以配置将 HTTP 自动转到 HTTPS ,例如访问 **<http://nextcloud.pi-nas.com>** 自动跳转到 **<https://nextcloud.pi-nas.com>**。注意,如果你的树莓派 NAS 运行在家庭路由器的下面,别忘了设置路由器的 443 端口和 80 端口转发。
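比如,在装好 Certbot 及其 Apache 插件之后,大致可以用类似下面的命令一步完成证书申请和 Apache 配置(域名仅为示例,具体步骤请以上文链接中的官方指南为准):
```
$ sudo certbot --apache -d nextcloud.pi-nas.com
```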
### 配置 Nextcloud
最后一步,通过浏览器访问 Nextcloud 来完成配置。在浏览器中输入域名地址,填入上文中的数据库设置信息。这里,你可以创建 Nextcloud 管理员用户。默认情况下,数据保存目录在 Nextcloud 目录下,所以你也无需修改我们在 [Part-II][2] 一文中设置的备份策略。
然后,页面会跳转到 Nextcloud 登录界面,用刚才创建的管理员用户登录。在设置页面中会有基础操作教程和安全安装教程(地址是 <https://nextcloud.pi-nas.com/settings/admin>)。
恭喜你,到此为止,你已经成功在树莓派中安装了属于自己的 Nextcloud 云盘。去 Nextcloud 主页 [下载 Nextcloud 客户端][9],客户端可以同步数据并支持离线访问服务器,手机客户端甚至可以上传图片等资源,然后你就可以在桌面电脑上访问它们。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jrg](https://github.com/jrglinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ntlx
[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
[2]: https://opensource.com/article/18/8/automate-backups-raspberry-pi
[3]: https://nextcloud.com/
[4]: https://sourceforge.net/p/ddclient/wiki/Home/
[5]: https://nextcloud.com/install/#instructions-server
[6]: https://letsencrypt.org/
[7]: https://certbot.eff.org/
[8]: https://certbot.eff.org/lets-encrypt/debianother-apache
[9]: https://nextcloud.com/install/#install-clients

View File

@ -0,0 +1,121 @@
简化 Django 开发的八个 Python 包
======
这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V)
Django 开发者们,在这个月的 Python 专栏中,我们会介绍一些能帮助你们的软件包。这些软件包是我们最喜欢的 [Django][1] 库,能够节省开发时间,减少样板代码,通常来说,这会让我们的生活更加轻松。我们为 Django 应用准备了六个包,为 Django 的 REST 框架准备了两个包。几乎所有我们的项目里,都用到了这些包,真的,不是说笑。
不过在继续阅读之前,请先看看我们关于[让 Django 管理后台更安全][2]的几个提示,以及这篇关于 [5 个最受欢迎的开源 Django 包][3] 的文章。
### 有用又省时的工具集合django-extensions
[Django-extensions][4] 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令:
* **shell_plus** 打开 Django 的管理 shell这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做 import 的操作了。
* **clean_pyc** 删除项目目录下所有位置的 .pyc 文件
* **create_template_tags** 在指定的应用下,创建模板标签的目录结构。
* **describe_form** 输出模型的表单定义,可以粘贴到 forms.py 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。)
* **notes** 输出你项目里所有带 TODOFIXME 等标记的注释。
Django-extensions 还包括几个有用的抽象基类,可以满足定义模型时的常见模式。当你需要以下模型时,可以继承这些基类(列表后面附有一个简单的用法示意):
* **TimeStampedModel** : 这个模型的基类包含了 **created** 字段和 **modified** 字段,还有一个 **save()** 方法,在适当的场景下,该方法自动更新 created 和 modified 字段的值。
* **ActivatorModel** : 如果你的模型需要像 **status****activate_date** 和 **deactivate_date** 这样的字段,可以使用这个基类。它还自带了一个启用 **.active()** 和 **.inactive()** 查询集的 manager。
* **TitleDescriptionModel** 和 **TitleSlugDescriptionModel** : 这两个基类都包括 **title** 和 **description** 字段,后者还额外包括一个 **slug** 字段,它根据 **title** 字段自动生成。
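下面是一个继承 **TimeStampedModel** 的最小示意(模型名和字段仅为演示而假设):
```
from django.db import models
from django_extensions.db.models import TimeStampedModel

class Article(TimeStampedModel):
    # created、modified 字段以及自动维护它们的 save() 均由基类提供
    title = models.CharField(max_length=200)
```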
Django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档][5]吧!
### 12 因子应用的配置django-environ
在 Django 项目的配置方面,[Django-environ][6] 提供了符合 [12 因子应用][7] 方法论的管理方法。它是其他一些库的集合,包括 [envparse][8] 和 [honcho][9] 等。安装了 django-environ 之后,在项目的根目录创建一个 .env 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API keys是否启用 debug数据库的 URLs 等)
然后,在项目的 settings.py 中引入 **environ**,并参考[官方文档的例子][10]设置好 **environ.PATH()****environ.Env()**。就可以通过 **env('VARIABLE_NAME')** 来获取 .env 文件中定义的变量值了。
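下面是 settings.py 中的一个简单示意(假设项目根目录已有 .env 文件,其中的变量名仅为示例):
```
import environ

env = environ.Env(DEBUG=(bool, False))  # 为 DEBUG 声明类型和默认值
environ.Env.read_env()  # 读取项目根目录下的 .env 文件

DEBUG = env('DEBUG')
SECRET_KEY = env('SECRET_KEY')
DATABASES = {'default': env.db()}  # 解析 .env 中的 DATABASE_URL
```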
### 创建出色的管理命令django-click
[Django-click][11] 是基于 [Click][12] 的(我们[之前推荐过][13]… [两次][14]),它对编写 Django 管理命令很有帮助。这个库没有很多文档,但是代码仓库中有个存放[测试命令][15]的目录,非常有参考价值。它的基本 Hello World 命令是这样写的:
```
# app_name.management.commands.hello.py
import djclick as click
@click.command()
@click.argument('name')
def command(name):
click.secho(f'Hello, {name}')
```
在命令行下调用它,这样执行即可:
```
>> ./manage.py hello Lacey
Hello, Lacey
```
### 处理有限状态机django-fsm
[Django-fsm][16] 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站想用类似于“写作中”“编辑中”“已发布”来流转文章的状态django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。
Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 **@transition** 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。
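下面是一个简单的示意(沿用上文新闻网站的文章状态例子,模型和方法名仅为演示):
```
from django.db import models
from django_fsm import FSMField, transition

class Article(models.Model):
    state = FSMField(default='draft')

    @transition(field=state, source='draft', target='published')
    def publish(self):
        # 状态变化的副作用(例如发送通知)可以写在这里
        pass
```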
虽然 django-fsm 的文档很轻量,不过 [Django 中的工作流(状态)][17] 这篇 GitHub Gist 对有限状态机和 django-fsm 做了非常好的介绍。
### 联系人表单django-contact-form
联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form][18] 在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 **ContactFormView** 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。
### 用户注册和认证django-allauth
[Django-allauth][19] 是一个 Django 应用,它为用户注册、登录/注销、密码重置,以及第三方用户认证(比如 GitHub 或 Twitter提供了视图、表单和 URL支持将邮件地址作为用户名的认证方式而且有大量的文档记录。第一次用的时候它的配置可能会让人有点晕头转向请仔细阅读[安装说明][20],在[自定义你的配置][21]时要细心,确保启用某个功能所需的所有配置都正确无误。
### 处理 Django REST 框架的用户认证django-rest-auth
如果 Django 开发中涉及到对外提供 API你很可能用到了 [Django REST Framework][22]DRF。如果你在用 DRF那么你应该试试 [django-rest-auth][23],它提供了用户注册、登录/注销、密码重置和社交媒体认证的端点(是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。
### Django REST 框架的 API 可视化django-rest-swagger
[Django REST Swagger][24] 提供了一个功能丰富的用户界面,用来和 Django REST 框架的 API 交互。你只需要安装 Django REST Swagger把它添加到 Django 项目的 installed apps 中,然后在 urls.py 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。
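下面是 urls.py 中的一个简单示意(视图标题和路径均为示例,具体请以该包的文档为准):
```
from django.urls import path
from rest_framework_swagger.views import get_swagger_view

schema_view = get_swagger_view(title='My API')

urlpatterns = [
    path('docs/', schema_view),  # Swagger UI 的访问路径,可自行调整
]
```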
![](https://opensource.com/sites/default/files/uploads/swagger-ui.png)
API 的用户界面按照 app 的维度展示了所有端点和可用方法,并列出了这些端点的可用操作,而且它提供了与 API 交互的功能比如添加/删除/获取记录。django-rest-swagger 会从 API 视图中的文档字符串docstring生成每个端点的文档通过这种方式为你的项目创建了一份 API 文档,这对你、对前端开发人员和用户都很有用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/django-packages
作者:[Jeff Triplett][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[belitex](https://github.com/belitex)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laceynwilliams
[1]: https://www.djangoproject.com/
[2]: https://opensource.com/article/18/1/10-tips-making-django-admin-more-secure
[3]: https://opensource.com/business/15/12/5-favorite-open-source-django-packages
[4]: https://django-extensions.readthedocs.io/en/latest/
[5]: https://django-extensions.readthedocs.io/
[6]: https://django-environ.readthedocs.io/en/latest/
[7]: https://www.12factor.net/
[8]: https://github.com/rconradharris/envparse
[9]: https://github.com/nickstenning/honcho
[10]: https://django-environ.readthedocs.io/
[11]: https://github.com/GaretJax/django-click
[12]: http://click.pocoo.org/5/
[13]: https://opensource.com/article/18/9/python-libraries-side-projects
[14]: https://opensource.com/article/18/5/3-python-command-line-tools
[15]: https://github.com/GaretJax/django-click/tree/master/djclick/test/testprj/testapp/management/commands
[16]: https://github.com/viewflow/django-fsm
[17]: https://gist.github.com/Nagyman/9502133
[18]: https://django-contact-form.readthedocs.io/en/1.5/
[19]: https://django-allauth.readthedocs.io/en/latest/
[20]: https://django-allauth.readthedocs.io/en/latest/installation.html
[21]: https://django-allauth.readthedocs.io/en/latest/configuration.html
[22]: http://www.django-rest-framework.org/
[23]: https://django-rest-auth.readthedocs.io/
[24]: https://django-rest-swagger.readthedocs.io/en/latest/

View File

@ -0,0 +1,270 @@
如何在 Linux 中查看进程占用的端口号
======
对于 Linux 系统管理员来说,清楚某个服务是否正确地绑定或监听某个端口,是至关重要的。如果你需要处理端口相关的问题,这篇文章可能会对你有用。
端口是 Linux 系统上特定进程之间逻辑连接的标识,包括物理端口和软件端口。由于 Linux 操作系统是一个软件,因此本文只讨论软件端口。软件端口始终与主机的 IP 地址和相关的通信协议相关联,因此端口常用于区分应用程序。大部分涉及到网络的服务都必须打开一个套接字来监听传入的网络请求,而每个服务都使用一个独立的套接字。
**推荐阅读:**
**(#)** [在 Linux 上查看进程 ID 的 4 种方法][1]
**(#)** [在 Linux 上终止进程的 3 种方法][2]
套接字是 IP 地址、软件端口和协议结合起来使用的而端口号对传输控制协议Transmission Control ProtocolTCP和用户数据报协议User Datagram ProtocolUDP都适用TCP 和 UDP 都可以使用 0 到 65535 之间的端口号进行通信。
以下是端口分配类别:
* `0-1023:` 常用端口和系统端口
* `1024-49151:` 软件的注册端口
* `49152-65535:` 动态端口或私有端口
在 Linux 上的 `/etc/services` 文件可以查看到更多关于保留端口的信息。
```
# less /etc/services
# /etc/services:
# $Id: services,v 1.55 2013/04/14 ovasik Exp $
#
# Network services, Internet style
# IANA services version: last updated 2013-04-10
#
# Note that it is presently the policy of IANA to assign a single well-known
# port number for both TCP and UDP; hence, most entries here have two entries
# even if the protocol doesn't support UDP operations.
# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
# are included, only the more common ones.
#
# The latest IANA port assignments can be gotten from
# http://www.iana.org/assignments/port-numbers
# The Well Known Ports are those from 0 through 1023.
# The Registered Ports are those from 1024 through 49151
# The Dynamic and/or Private Ports are those from 49152 through 65535
#
# Each line describes one service, and is of the form:
#
# service-name port/protocol [aliases ...] [# comment]
tcpmux 1/tcp # TCP port service multiplexer
tcpmux 1/udp # TCP port service multiplexer
rje 5/tcp # Remote Job Entry
rje 5/udp # Remote Job Entry
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
systat 11/udp users
daytime 13/tcp
daytime 13/udp
qotd 17/tcp quote
qotd 17/udp quote
msp 18/tcp # message send protocol (historic)
msp 18/udp # message send protocol (historic)
chargen 19/tcp ttytst source
chargen 19/udp ttytst source
ftp-data 20/tcp
ftp-data 20/udp
# 21 is registered to ftp, but also used by fsp
ftp 21/tcp
ftp 21/udp fsp fspd
ssh 22/tcp # The Secure Shell (SSH) Protocol
ssh 22/udp # The Secure Shell (SSH) Protocol
telnet 23/tcp
telnet 23/udp
# 24 - private mail system
lmtp 24/tcp # LMTP Mail Delivery
lmtp 24/udp # LMTP Mail Delivery
```
可以使用以下六种方法查看端口信息。
* `ss:` ss 可以用于转储套接字统计信息。
* `netstat:` netstat 可以显示打开的套接字列表。
* `lsof:` lsof 可以列出打开的文件。
* `fuser:` fuser 可以列出那些打开了文件的进程的进程 ID。
* `nmap:` nmap 是网络检测工具和端口扫描程序。
* `systemctl:` systemctl 是 systemd 系统的控制管理器和服务管理器。
以下我们将找出 `sshd` 守护进程所使用的端口号。
### 方法1使用 ss 命令
`ss` 一般用于转储套接字统计信息。它能够输出类似于 `netstat` 输出的信息,但它可以比其它工具显示更多的 TCP 信息和状态信息。
它还可以显示所有类型的套接字统计信息,包括 PACKET、TCP、UDP、DCCP、RAW、Unix 域等。
```
# ss -tnlp | grep ssh
LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4))
```
也可以使用端口号来检查。
```
# ss -tnlp | grep ":22"
LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4))
```
### 方法2使用 netstat 命令
`netstat` 能够显示网络连接、路由表、接口统计信息、伪装连接以及多播成员。
默认情况下,`netstat` 会列出打开的套接字。如果不指定任何地址族,则会显示所有已配置地址族的活动套接字。但 `netstat` 已经过时了,一般会使用 `ss` 来替代。
```
# netstat -tnlp | grep ssh
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 997/sshd
tcp6 0 0 :::22 :::* LISTEN 997/sshd
```
也可以使用端口号来检查。
```
# netstat -tnlp | grep ":22"
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd
tcp6 0 0 :::22 :::* LISTEN 1208/sshd
```
### 方法3使用 lsof 命令
`lsof` 能够列出打开的文件,并列出系统上被进程打开的文件的相关信息。
```
# lsof -i -P | grep ssh
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 11584 root 3u IPv4 27625 0t0 TCP *:22 (LISTEN)
sshd 11584 root 4u IPv6 27627 0t0 TCP *:22 (LISTEN)
sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED)
```
也可以使用端口号来检查。
```
# lsof -i tcp:22
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1208 root 3u IPv4 20919 0t0 TCP *:ssh (LISTEN)
sshd 1208 root 4u IPv6 20921 0t0 TCP *:ssh (LISTEN)
sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED)
```
### 方法4使用 fuser 命令
`fuser` 工具会将本地系统上打开了文件的进程的进程 ID 显示在标准输出中。
```
# fuser -v 22/tcp
USER PID ACCESS COMMAND
22/tcp: root 1208 F.... sshd
root 12388 F.... sshd
root 49339 F.... sshd
```
### 方法5使用 nmap 命令
`nmap`“Network Mapper”是一款用于网络检测和安全审计的开源工具。它最初用于对大型网络进行快速扫描但它对于单个主机的扫描也有很好的表现。
`nmap` 使用原始 IP 数据包来确定网络上可用的主机,这些主机的服务(包括应用程序名称和版本)、主机运行的操作系统(包括操作系统版本等信息)、正在使用的数据包过滤器或防火墙的类型,以及很多其它信息。
```
# nmap -sV -p 22 localhost
Starting Nmap 6.40 ( http://nmap.org ) at 2018-09-23 12:36 IST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000089s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.4 (protocol 2.0)
Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 0.44 seconds
```
### 方法6使用 systemctl 命令
`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV init 系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。
**推荐阅读:**
**(#)** [chkservice Linux 终端上的 systemd 单元管理工具][3]
**(#)** [如何查看 Linux 系统上正在运行的服务][4]
```
# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2018-09-23 02:08:56 EDT; 6h 11min ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 11584 (sshd)
CGroup: /system.slice/sshd.service
└─11584 /usr/sbin/sshd -D
Sep 23 02:08:56 vps.2daygeek.com systemd[1]: Starting OpenSSH server daemon...
Sep 23 02:08:56 vps.2daygeek.com sshd[11584]: Server listening on 0.0.0.0 port 22.
Sep 23 02:08:56 vps.2daygeek.com sshd[11584]: Server listening on :: port 22.
Sep 23 02:08:56 vps.2daygeek.com systemd[1]: Started OpenSSH server daemon.
Sep 23 02:09:15 vps.2daygeek.com sshd[11589]: Connection closed by 103.5.134.167 port 49899 [preauth]
Sep 23 02:09:41 vps.2daygeek.com sshd[11592]: Accepted password for root from 103.5.134.167 port 49902 ssh2
```
以上输出的内容显示了最近一次启动 `sshd` 服务时 `ssh` 服务的监听端口。但它不会将最新日志更新到输出中。
```
# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-09-06 07:40:59 IST; 2 weeks 3 days ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 1208 (sshd)
CGroup: /system.slice/sshd.service
├─ 1208 /usr/sbin/sshd -D
├─23951 sshd: [accepted]
└─23952 sshd: [net]
Sep 23 12:50:36 vps.2daygeek.com sshd[23909]: Invalid user pi from 95.210.113.142 port 51666
Sep 23 12:50:36 vps.2daygeek.com sshd[23909]: input_userauth_request: invalid user pi [preauth]
Sep 23 12:50:37 vps.2daygeek.com sshd[23911]: pam_unix(sshd:auth): check pass; user unknown
Sep 23 12:50:37 vps.2daygeek.com sshd[23911]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95.210.113.142
Sep 23 12:50:37 vps.2daygeek.com sshd[23909]: pam_unix(sshd:auth): check pass; user unknown
Sep 23 12:50:37 vps.2daygeek.com sshd[23909]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95.210.113.142
Sep 23 12:50:39 vps.2daygeek.com sshd[23911]: Failed password for invalid user pi from 95.210.113.142 port 51670 ssh2
Sep 23 12:50:39 vps.2daygeek.com sshd[23909]: Failed password for invalid user pi from 95.210.113.142 port 51666 ssh2
Sep 23 12:50:40 vps.2daygeek.com sshd[23911]: Connection closed by 95.210.113.142 port 51670 [preauth]
Sep 23 12:50:40 vps.2daygeek.com sshd[23909]: Connection closed by 95.210.113.142 port 51666 [preauth]
```
大部分情况下,以上的输出不会显示进程的实际端口号。这时更建议使用以下这个 `journalctl` 命令检查日志文件中的详细信息。
```
# journalctl | grep -i "openssh\|sshd"
Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[997]: Received signal 15; terminating.
Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Stopping OpenSSH server daemon...
Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Starting OpenSSH server daemon...
Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[11584]: Server listening on 0.0.0.0 port 22.
Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[11584]: Server listening on :: port 22.
Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Started OpenSSH server daemon.
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-using-in-linux/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[1]: https://www.2daygeek.com/how-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/
[2]: https://www.2daygeek.com/kill-terminate-a-process-in-linux-using-kill-pkill-killall-command/
[3]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
[4]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/

View File

@ -0,0 +1,128 @@
如何让 Ping 的输出更简单易读
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-720x340.png)
众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,可以尝试一下 `prettyping`。Prettyping 只是对标准 ping 工具的一层封装,它在运行标准 ping 命令的同时,解析其输出并加上颜色和 unicode 字符,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash``awk` 编写的免费开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 ping 命令的输出,还有很多值得注意的功能:
* 检测丢失的数据包并在输出中标记出来。
* 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 ping 命令,只会在执行结束后统计。
* 能够在输出结果不混乱的前提下灵活处理“未知信息”(例如错误信息)。
* 能够避免输出重复的信息。
* 兼容常用的 ping 工具命令参数。
* 能够由普通用户执行。
* 可以将输出重定向到文件中。
* 不需要安装,只需要下载二进制文件,赋予可执行权限即可执行。
* 快速且轻巧。
* 输出结果清晰直观。
### 安装 Prettyping
如上所述Prettyping 是一个绿色软件,不需要任何安装,只要使用以下命令下载 Prettyping 二进制文件:
```
$ curl -O https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping
```
将二进制文件放置到 `$PATH`(例如 `/usr/local/bin`)中:
```
$ sudo mv prettyping /usr/local/bin
```
然后对其赋予可执行权限:
```
$ sudo chmod +x /usr/local/bin/prettyping
```
就可以使用了。
### 让 ping 的输出清晰易读
安装完成后,通过 `prettyping` 来 ping 任何主机或 IP 地址,就可以以图形方式查看输出。
```
$ prettyping ostechnix.com
```
输出效果大概会是这样:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-in-action.gif)
如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 ctrl + c 中断。
由于 Prettyping 只是一个对普通 ping 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定 ping 一台主机的 5 次:
```
$ prettyping -c 5 ostechnix.com
```
Prettyping 默认会使用彩色输出,如果你不喜欢彩色的输出,可以加上 `--nocolor` 参数:
```
$ prettyping --nocolor ostechnix.com
```
同样的,也可以用 `--nomulticolor` 参数禁用多颜色支持:
```
$ prettyping --nomulticolor ostechnix.com
```
使用 `--nounicode` 参数禁用 unicode 字符:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-without-unicode-support.png)
如果你的终端不支持 **UTF-8**,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。
Prettyping 支持将输出的内容重定向到文件中,例如执行以下这个命令会将 `prettyping ostechnix.com` 的输出重定向到 `ostechnix.txt` 中:
```
$ prettyping ostechnix.com | tee ostechnix.txt
```
Prettyping 还有很多选项帮助你完成各种任务,例如:
* 启用/禁用延时图例(默认启用)
* 强制按照终端的格式输出(默认自动)
* 在统计数据中统计最后的 n 次 ping默认 60 次)
* 覆盖对终端尺寸的检测
* 覆盖 awk 解释器(默认不覆盖)
* 覆盖 ping 工具(默认不覆盖)
查看帮助文档可以了解更多:
```
$ prettyping --help
```
尽管 prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点:
* 实时统计 - 可以随时查看所有实时统计信息,标准 `ping` 命令只会在命令执行结束后才显示统计信息。
* 紧凑的显示 - 可以在终端看到更长的时间跨度。
* 检测丢失的数据包并显示出来。
如果你一直在寻找可视化显示 `ping` 命令输出的工具,那么 Prettyping 肯定会有所帮助。尝试一下,你不会失望的。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/

View File

@ -0,0 +1,98 @@
如何在 Ubuntu Linux 中使用 RAR 文件
======
[RAR][1] 是一种非常好的归档文件格式。相比之下7-zip 能提供更好的压缩率,而 Zip 格式默认就能在多个平台上得到良好支持。尽管如此RAR 仍然是最流行的归档格式之一。然而 [Ubuntu][2] 自带的归档管理器却不支持提取 RAR 文件,也不允许创建 RAR 文件。
方法总比问题多。只要安装由 [RARLAB][3] 提供的免费软件 `unrar`,就能在 Ubuntu 上提取 RAR 文件了。你也可以尝试安装 `rar` 来创建和管理 RAR 文件。
![RAR files in Ubuntu Linux][4]
### 提取 RAR 文件
在未安装 unrar 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04][5] 为例):
![Error in RAR extraction in Ubuntu][6]
如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 unrar
打开终端并输入:
```
sudo apt-get install unrar
```
安装 unrar 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。
最常用到的功能是提取 RAR 文件。因此,可以**通过右键单击 RAR 文件并选择提取**,也可以通过以下命令在终端中执行操作:
```
unrar x FileName.rar
```
结果类似以下这样:
![Using unrar in Ubuntu][7]
如果文件不在家目录中,就必须先使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。
### 创建和管理 RAR 文件
![Using rar archive in Ubuntu Linux][8]
`unrar` 不允许创建 RAR 文件。因此还需要安装 `rar` 命令行工具才能创建 RAR 文件。
要创建 RAR 文件,首先需要通过以下命令安装 rar
```
sudo apt-get install rar
```
按照下面的命令语法创建 RAR 文件:
```
rar a ArchiveName File_1 File_2 Dir_1 Dir_2
```
按照这个格式输入命令时,它会将目录中的每个文件添加到 RAR 文件中。如果需要某一个特定的文件,就要指定文件确切的名称或路径。
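例如,下面的命令会把家目录下的 `Music` 目录打包为 `myarchive.rar`(文件名仅为示例):
```
rar a myarchive.rar Music
```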
默认情况下RAR 文件会放置在**家目录**中。
以类似的方式,可以更新或管理 RAR 文件。同样是使用以下的命令语法:
```
rar u ArchiveName Filename
```
在终端输入 `rar` 就可以列出 RAR 工具的相关命令。
### 总结
现在你已经知道如何在 Ubuntu 上管理 RAR 文件了,你会更喜欢使用 7-zip、Zip 或 Tar.xz 吗?
欢迎在评论区中评论。
--------------------------------------------------------------------------------
via: https://itsfoss.com/use-rar-ubuntu-linux/
作者:[Ankush Das][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[1]: https://www.rarlab.com/rar_file.htm
[2]: https://www.ubuntu.com/
[3]: https://www.rarlab.com/
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-ubuntu-linux.png
[5]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/extract-rar-error.jpg
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/unrar-rar-extraction.jpg
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-update-create.jpg