Merge pull request #3 from LCTT/master

更新
This commit is contained in:
Flynn 2017-01-15 23:31:02 +08:00 committed by GitHub
commit 3c6c219c74
179 changed files with 18228 additions and 4501 deletions

View File

@ -0,0 +1,256 @@
Powerline给 Vim 和 Bash 提供更棒的状态行和提示信息
=================================================
Powerline 是一个极棒的 [Vim 编辑器][1]的状态行插件,这个插件是使用 Python 开发的,主要用于显示状态行和提示信息,适用于很多软件,比如 bash、zsh、tmux 等等。
[
![Install Powerline Statuslines in Linux](http://www.tecmint.com/wp-content/uploads/2015/10/Install-Powerline-Statuslines-in-Linux-620x297.png)
][2]
*Powerline 使 Linux 终端更具威力*
### 特色
1. 使用 python 编写,使其更具扩展性且功能丰富
2. 稳定易测的代码库,兼容 python 2.6+ 和 python 3
3. 支持多种 Linux 功能及工具的提示和状态栏
4. 通过 JSON 保存配置和颜色方案
5. 快速、轻量级,具有后台守护进程支持,提供更佳的性能
### Powerline 效果截图
[
![Powerline Vim Statuslines](http://www.tecmint.com/wp-content/uploads/2015/10/Powerline-Vim-Statuslines.png)
][3]
*Vim 中 Powerline 状态行效果*
在本文中,我会介绍如何安装 Powerline 及其字体,以及如何在 RedHat 和 Debian 类的系统中使 Bash 和 Vim 支持 Powerline。
### 第一步:准备好安装 Powerline 所需的软件
由于和其它无关项目之间存在命名冲突,因此 powerline 只能放在 PyPIPython Package Index中的 `powerline-status` 包下。
为了从 PyPI 中安装该包,需要先准备好 `pip`(该工具专门用于 Python 包的管理)工具。所以首先要在 Linux 系统下安装好 `pip` 工具。
#### 在 Debian、Ubuntu 和 Linux Mint 中安装 pip
```
# apt-get install python-pip
```
**示例输出:**
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Recommended packages:
python-dev-all python-wheel
The following NEW packages will be installed:
python-pip
0 upgraded, 1 newly installed, 0 to remove and 533 not upgraded.
Need to get 97.2 kB of archives.
After this operation, 477 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe python-pip all 1.5.4-1ubuntu3 [97.2 kB]
Fetched 97.2 kB in 1s (73.0 kB/s)
Selecting previously unselected package python-pip.
(Reading database ... 216258 files and directories currently installed.)
Preparing to unpack .../python-pip_1.5.4-1ubuntu3_all.deb ...
Unpacking python-pip (1.5.4-1ubuntu3) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up python-pip (1.5.4-1ubuntu3) ...
```
#### 在 CentOS、RHEL 和 Fedora 中安装 pip
在 Fedora 类系统中,需要先打开 [epel 仓库][4],然后按照如下方法安装 pip 包。
```
# yum install python-pip
# dnf install python-pip [Fedora 22+ 以上]
```
**示例输出:**
```
Installing:
python-pip noarch 7.1.0-1.el7 epel 1.5 M
Transaction Summary
=================================================================================
Install 1 Package
Total download size: 1.5 M
Installed size: 6.6 M
Is this ok [y/d/N]: y
Downloading packages:
python-pip-7.1.0-1.el7.noarch.rpm | 1.5 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-pip-7.1.0-1.el7.noarch 1/1
Verifying : python-pip-7.1.0-1.el7.noarch 1/1
Installed:
python-pip.noarch 0:7.1.0-1.el7
Complete!
```
### 第二步:在 Linux 中安装 Powerline
现在可以从 Git 仓库中安装 Powerline 的最新开发版。在此之前系统需要安装好 Git 工具以便可以从仓库拉下代码。
```
# apt-get install git
# yum install git
# dnf install git
```
然后你可以通过 `pip` 命令安装 Powerline。
```
# pip install git+git://github.com/Lokaltog/powerline
```
**示例输出:**
```
Cloning git://github.com/Lokaltog/powerline to /tmp/pip-WAlznH-build
Running setup.py (path:/tmp/pip-WAlznH-build/setup.py) egg_info for package from git+git://github.com/Lokaltog/powerline
warning: no previously-included files matching '*.pyc' found under directory 'powerline/bindings'
warning: no previously-included files matching '*.pyo' found under directory 'powerline/bindings'
Installing collected packages: powerline-status
Found existing installation: powerline-status 2.2
Uninstalling powerline-status:
Successfully uninstalled powerline-status
Running setup.py install for powerline-status
warning: no previously-included files matching '*.pyc' found under directory 'powerline/bindings'
warning: no previously-included files matching '*.pyo' found under directory 'powerline/bindings'
changing mode of build/scripts-2.7/powerline-lint from 644 to 755
changing mode of build/scripts-2.7/powerline-daemon from 644 to 755
changing mode of build/scripts-2.7/powerline-render from 644 to 755
changing mode of build/scripts-2.7/powerline-config from 644 to 755
changing mode of /usr/local/bin/powerline-config to 755
changing mode of /usr/local/bin/powerline-lint to 755
changing mode of /usr/local/bin/powerline-render to 755
changing mode of /usr/local/bin/powerline-daemon to 755
Successfully installed powerline-status
Cleaning up...
```
### 第三步:在 Linux 中安装 Powerline 的字体
Powerline 使用特殊的符号来为开发者显示特殊的箭头效果和符号内容。因此你的系统中必须要有符号字体或者补丁过的字体。
通过下面的 [wget][5] 命令下载最新的系统字体及字体配置文件。
```
# wget https://github.com/powerline/powerline/raw/develop/font/PowerlineSymbols.otf
# wget https://github.com/powerline/powerline/raw/develop/font/10-powerline-symbols.conf
```
然后你将下载的字体放到字体目录下 `/usr/share/fonts` 或者 `/usr/local/share/fonts`,或者你可以通过 `xset q` 命令找到一个有效的字体目录。
```
# mv PowerlineSymbols.otf /usr/share/fonts/
```
接下来你需要通过如下命令更新你系统的字体缓存。
```
# fc-cache -vf /usr/share/fonts/
```
其次安装字体配置文件。
```
# mv 10-powerline-symbols.conf /etc/fonts/conf.d/
```
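如果没有 root 权限,也可以把字体和配置文件装到用户目录下(以下只是常见做法的一个示意,目录路径请以你的发行版实际情况为准):
```
mkdir -p ~/.local/share/fonts ~/.config/fontconfig/conf.d
mv PowerlineSymbols.otf ~/.local/share/fonts/
mv 10-powerline-symbols.conf ~/.config/fontconfig/conf.d/
fc-cache -vf ~/.local/share/fonts/
```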
注意:如果相应的符号没有出现,可以尝试关闭终端会话并重启 X window这样就会生效了。
### 第四步:给 Bash Shell 和 Vim 状态行设置 Powerline
在这一节将介绍 bash shell 和 vim 编辑器中关于 Powerline 的配置。首先通过在 `~/.bashrc` 中添加如下内容以便设置终端为 256 色。
```
export TERM="screen-256color"
```
#### 打开 Bash Shell 中的 Powerline
如果希望在 bash shell 中默认打开 Powerline可以在 `~/.bashrc` 中添加如下内容。
首先通过如下命令获取 powerline 的安装位置。
```
# pip show powerline-status
Name: powerline-status
Version: 2.2.dev9999-git.aa33599e3fb363ab7f2744ce95b7c6465eef7f08
Location: /usr/local/lib/python2.7/dist-packages
Requires:
```
一旦找到 powerline 的具体位置后,根据你系统的情况替换到下列行中的 `/usr/local/lib/python2.7/dist-packages` 对应的位置。
```
powerline-daemon -q
POWERLINE_BASH_CONTINUATION=1
POWERLINE_BASH_SELECT=1
. /usr/local/lib/python2.7/dist-packages/powerline/bindings/bash/powerline.sh
```
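如果不想把路径写死,也可以在 `~/.bashrc` 里先用 `pip` 查出安装位置再加载(以下只是一种示意写法,假设系统中只安装了一份 `powerline-status`
```
POWERLINE_ROOT="$(pip show powerline-status 2>/dev/null | awk '/^Location:/ {print $2}')"
powerline-daemon -q
POWERLINE_BASH_CONTINUATION=1
POWERLINE_BASH_SELECT=1
. "$POWERLINE_ROOT/powerline/bindings/bash/powerline.sh"
```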
然后退出后重新登录,现在 powerline 的状态行应该如下显示了。
[
![Bash Powerline Statuslines](http://www.tecmint.com/wp-content/uploads/2015/10/Bash-Powerline-Statuslines.gif)
][6]
现在切换目录并注意显示你当前路径的面包屑导航提示的变化。
如果远程 Linux 服务器上也安装了 powerline那么当你用 ssh 登录上去时,会看到提示符中增加了主机名,你也能看到后台挂起的任务数。
#### 在 Vim 中打开 Powerline
如果你喜欢使用 vim正好有一个 vim 的强力插件。可以在 `~/.vimrc` 中添加如下内容打开该插件LCTT 译注:注意同样需要根据你的系统情况修改路径)。
```
set rtp+=/usr/local/lib/python2.7/dist-packages/powerline/bindings/vim/
set laststatus=2
set t_Co=256
```
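另外Powerline 的 Vim 插件要求 Vim 编译时带有 Python 支持;如果状态行没有出现,可以先做个简单检查(下面只是一个示意,输出中含 `+python` 或 `+python3` 即可):
```
vim --version | grep -i python
```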
然后你打开 vim 后会看到一个新的状态行:
[
![Vim Powerline Statuslines](http://www.tecmint.com/wp-content/uploads/2015/10/Vim-Powerline-Statuslines.gif)
][7]
### 总结
Powerline 可以在某些软件中提供颜色鲜艳、很优美的状态行及提示内容,这对编程环境有利。希望这篇指南对您有帮助,如果您需要帮助或者有任何好的想法,请留言给我。
--------------------------------------------------------------------------------
作者简介:
![](http://1.gravatar.com/avatar/7badddbc53297b2e8ed7011cf45df0c0?s=128&d=blank&r=g)
我是 Ravi SaiveTecMint 的作者,一个喜欢分享诀窍和想法的电脑极客及 Linux 专家。我的大部分服务器都运行在开源平台 Linux 上。请关注我的 Twitter、Facebook 和 Google+。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/powerline-adds-powerful-statuslines-and-prompts-to-vim-and-bash/
作者:[Ravi Saive][a]
译者:[beyondworld](https://github.com/beyondworld)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.tecmint.com/vi-editor-usage/
[2]:http://www.tecmint.com/wp-content/uploads/2015/10/Install-Powerline-Statuslines-in-Linux.png
[3]:http://www.tecmint.com/wp-content/uploads/2015/10/Powerline-Vim-Statuslines.png
[4]:https://linux.cn/article-2324-1.html
[5]:http://www.tecmint.com/10-wget-command-examples-in-linux/
[6]:http://www.tecmint.com/wp-content/uploads/2015/10/Bash-Powerline-Statuslines.gif
[7]:http://www.tecmint.com/wp-content/uploads/2015/10/Vim-Powerline-Statuslines.gif

View File

@ -0,0 +1,62 @@
初识 HTTP/2
============================================================
![](https://static.viget.com/_284x284_crop_center-center/http2-pizza.png?mtime=20160822160641)
> 用披萨来说明当你订单数很大的时候 HTTP/2 是怎么打败 HTTP/1.1 的。
在建立网站和应用的方式上 HTTP/2 有些令人惊叹的改变,在 HTTP/2 发布后的一年半,几乎 [10% 的网站使用了 HTTP/2][4]。它绝对值得采用,但是这篇文章应该首先推给使用 HTTP/2 的前端开发者。这个连载的文章是指导前端开发者怎么转换到 HTTP/2。
本文涵盖了 HTTP/2 对 HTTP/1.1 来说有什么提高的内容,并且向前端开发者介绍了 HTTP/2。
### 再次让我想起什么是 HTTP ...
超文本传输协议,也就是 HTTP这个协议决定了 web 内容怎么传输。HTTP/1.1 在 1999 年被标准化,那时候的 web 和现在有很大的不同,表格霸占了整个网络。样式通常被内联在元素中,如果网站管理员更加的细致,他们会在头部写个 `<style>`标签。 JavaScript 也被丢在文档里面,那时候完整的网站通常也不会超过几页。
HTTP/1.1 预计这种情况会持续一段时间,所以它并没有太过关注让一个站点可以加载大量资源的问题,因为那时候的开发者并不需要这个。因此它使用了一个非常简单的方式来处理资源:你请求一个资源,服务器去寻找它,然后返回这个资源,或者告诉你它不存在。这种方式在资源很少时足够高效,但是当你需要很多资源的时候,这个过程会依次处理每个资源,也就是所谓的“队头阻塞head of line blocking”。这意味着在你拿到第二个资源之前服务器必须先找到并返回你请求的第一个资源或者告诉你没找到。
### 大型站点的发展
在 1999 年之后的几年里,随着 PHP 和其他像 Rails 这样的动态语言的崛起站点变得越来越复杂。CSS 文件也随着向响应式开发的转变而变得越来越大,因此像 Sass 这样能让样式编写更轻松的 CSS 编译器就应运而生。JavaScript 也在 web 上有了更大的作用,它允许开发者编写复杂的应用,而这曾经只是 C++ 开发人员的工作。随着 Retina 和高清显示屏的兴起,图片也变得越来越高清。随着这些改变,文件大小呈现指数式的增长,使得本来以字节计的资源变成了几千字节,甚至在某些情况下有几兆。当你开始载入页面的其它东西前,必须先载入数百 KB 的东西,你只能乐观地假设你的用户有很快的网络接入。
想象 HTTP/1.1 是过去那种柜台点单式的街边披萨店。你可以自己走过去,点一杯雪碧和两片 Angry Hawaiian 披萨,然后等上 3 分钟。他们可以很轻松地处理这样的订单,实际上这是一种蓬勃发展的商业模式:订单简单、处理迅速。
![](https://static.viget.com/_300xAUTO_crop_center-center/http2-pizzaorder1.png?mtime=20160823122331)
然而,一旦你决定在同样的披萨店主办一场小区域的季度颁奖典礼,事情就变的更复杂了。每个人都预定不同的东西,快速而杂乱无章让等待时间直线上升。
![](https://static.viget.com/_300xAUTO_crop_center-center/http2-pizzaorder2.png?mtime=20160823130750)
### 哪里是 HTTP/2 的舞台
HTTP/2 对前端开发者主要的承诺就是复用。意思就是资源请求能发生在同一时间,并且服务器能马上响应这些资源。在请求之间没有等待,因为它们发生在同一时间。
使用同样的比喻HTTP/2 允许披萨店在店里自己的区域举办派对:派一个服务员来接单,并把做好的东西随时端上来。当其他人的披萨还在制作的时候,你不需要花 30 分钟干等你的雪碧,它会在第一批送来的东西之中。这种方式使得管理大量订单更加简单,并且避免人们等他们的订单等太长时间。
复用带给我们的 web 开发的大变化是改变了文件的加载方式。过去绕过 HTTP/1.1 资源加载瓶颈的办法是把需要加载的文件拼接concatenate并压缩起来所有任务运行器task runner要么默认就会这么做要么只需稍作设置。和过去一样开发人员也可以把图像合并成精灵图sprite sheet来减少对服务器的请求数。
### 改进 HTTP/1.1
将文件拼接起来是绕过 HTTP/1.1 请求数限制的一个非常聪明的办法,但它的主要问题是要求用户在第一次访问整个网站时就下载所有的资源。一旦载入后,浏览器就会缓存所有的资源。这能提高用户之后每次访问网页时的速度,但是前期负载很重,对[跳出率不利][5]。此外,用户可能会为根本不会访问的页面加载资源。期望用户访问每个页面、查看每个样式并与每个脚本交互是不现实的。另外,在加拿大、欧洲以及几乎每个美国移动运营商那里,用户每月都有带宽上限。倒不是说多加载 54KB 的内容就会超过每月的流量限制,而是用户多半更愿意把这些流量省下来看 Taylor Swift 的 GIF 动图。
使用 HTTP/2 和多路复用,你可以开发出一些最高效的网站,但这需要重新思考、甚至撤销之前的一些“最佳实践”。重申一次,我的目的是推动关于 HTTP/2 的讨论;借助这些新工具,我们可以找到新的最佳实践。
在我的下一篇文章,[我将探索建设基于 HTTP/2 的网站的一些最好方式][6]。
--------------------------------------------------------------------------------
via: https://www.viget.com/articles/getting-started-with-http-2-part-1?imm_mid=0eb24a&cmp=em-web-na-na-newsltr_20161130
作者:[Ben][a]
译者:[hkurj](https://github.com/hkurj)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.viget.com/about/team/btinsley
[1]:https://twitter.com/home?status=Using%20pizza%20to%20show%20how%20HTTP%2F2%20beats%20HTTP%2F1.1%20when%20your%20orders%20get%20too%20big.%20https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-1
[2]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-1
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-1
[4]:https://w3techs.com/technologies/details/ce-http2/all/all
[5]:https://blog.kissmetrics.com/speed-is-a-killer/
[6]:https://www.viget.com/articles/getting-started-with-http-2-part-2
[7]:https://www.viget.com/about/team/btinsley

View File

@ -1,16 +1,17 @@
#忘记技术债务 —— 教你如何创造技术财富
忘记技术债务 —— 教你如何创造技术财富
===============
电视里正播放着《老屋》节目,[Andrea Goulet][58] 和她商业上的合作伙伴正悠闲地坐在客厅里,商讨着他们的战略计划。那正是大家思想的火花碰撞出创新事物的时刻。他们正在寻求一种能够实现自身价值的方式 —— 为其它公司清理<ruby>遗留代码<rt>legacy code</rt></ruby>及科技债务。他们此刻的情景,像极了电视里的场景。(译注:《老屋》电视节目提供专业的家装,家庭改建,重新装饰,创意等等信息,与软件的改造有异曲同工之处)。
电视里正播放着《老屋》节目,[Andrea Goulet][58] 和她商业合作伙伴正悠闲地坐在客厅里,商讨着他们的战略计划。那正是大家思想的火花碰撞出创新事物的时刻。他们正在寻求一种能够实现自身价值的方式 —— 为其它公司清理<ruby>遗留代码<rt>legacy code</rt></ruby>及科技债务。他们此刻的情景,像极了电视里的场景。(LCTT 译注:《老屋》电视节目提供专业的家装、家庭改建、重新装饰、创意等等信息,与软件的改造有异曲同工之处)。
“我们意识到我们现在做的工作不仅仅是清理遗留代码,实际上我们是在用重建老屋的方式来重构软件,让系统运行更持久,更稳定,更高效”Goulet 说。“这让我开始思考公司如何花钱来改善他们的代码,以便让他们的系统运行更高效。就好比为了让屋子变得更有价值,你不得不使用一个全新的屋顶。这并不吸引人,但却是至关重要的,然而很多人都搞错了。“
“我们意识到我们现在做的工作不仅仅是清理遗留代码,实际上我们是在用重建老屋的方式来重构软件,让系统运行更持久、更稳定、更高效”Goulet 说。“这让我开始思考公司如何花钱来改善他们的代码,以便让他们的系统运行更高效。就好比为了让屋子变得更有价值,你不得不使用一个全新的屋顶。这并不吸引人,但却是至关重要的,然而很多人都搞错了。“
如今,她是 [Corgibytes][57] 公司的 CEO —— 一家提高软件现代化和进行系统重构方面的咨询公司。她曾经见过各种各样糟糕的系统遗留代码以及严重的科技债务事件。Goulet 认为创业公司需要转变思维模式,不是偿还债务,而是创造科技财富,不是要铲除旧代码,而是要逐步修复代码。她解释了这种新的方法,以及如何完成这些看似不可能完成的事情 —— 实际上是聘用优秀的工程师来完成这些工作。
如今,她是 [Corgibytes][57] 公司的 CEO —— 这是一家提高软件现代化和进行系统重构方面的咨询公司。她曾经见过各种各样糟糕的系统遗留代码以及严重的科技债务事件。Goulet 认为**创业公司需要转变思维模式,不是偿还债务,而是创造科技财富,不是要铲除旧代码,而是要逐步修复代码**。她解释了这种新的方法,以及如何完成这些看似不可能完成的事情 —— 实际上是聘用优秀的工程师来完成这些工作。
### 反思遗留代码
关于遗留代码最常见的定义是由 Michael Feathers 在他的著作[<ruby>《高效利用遗留代码》<rt>Working Effectively with Legacy Code</rt></ruby>][56]一书中提出:遗留代码就是没有被测试的代码。这个定义比大多数人所认为的 —— 遗留代码仅指那些古老陈旧的系统这个说法要妥当得多。但是 Goulet 认为这两种定义都不够明确。“遗留代码与软件的年头儿毫无关系。一个两年的应用程序,其代码可能已经进入遗留状态了,”她说。“关键要看软件质量提高的难易程度。”
关于遗留代码最常见的定义是由 Michael Feathers 在他的著作[<ruby>《高效利用遗留代码》<rt>Working Effectively with Legacy Code</rt></ruby>][56]一书中提出:遗留代码就是没有被测试所覆盖的代码。这个定义比大多数人所认为的 —— 遗留代码仅指那些古老陈旧的系统这个说法要妥当得多。但是 Goulet 认为这两种定义都不够明确。“遗留代码与软件的年头儿毫无关系。一个两年的应用程序,其代码可能已经进入遗留状态了,”她说。“**关键要看软件质量提高的难易程度。**
这意味着代码写得不够清楚,缺少解释说明,没有包含任何关于代码构思和决策制定的流程。单元测试可以有一定帮助,但也要包括所有的写那部分代码的原因以及逻辑推理相关的文档。如果想要提升代码,但没办法搞清楚原开发者的意图 —— 那些代码就属于遗留代码了。
这意味着写得不够清楚、缺少解释说明的代码,是没有包含任何关于代码构思和决策制定的流程的成果。单元测试就是这样的一种成果,但它并没有包括了写那部分代码的原因以及逻辑推理相关的所有文档。如果想要提升代码,但没办法搞清楚原开发者的意图 —— 那些代码就属于遗留代码了。
> **遗留代码不是技术问题,而是沟通上的问题。**
@ -18,35 +19,35 @@
如果你像 Goulet 所说的那样迷失在遗留代码里,你会发现每一次的沟通交流过程都会变得像那条[<ruby>康威定律<rt>Conways Law</rt></ruby>][54]所描述的一样。
Goulet 说:“这个定律认为系统的基础架构能反映出整个公司的组织沟通结构,如果想修复公司的遗留代码,而没有一个好的组织沟通方式是不可能完成的。那是很多人都没注意到的一个重要环节。”
Goulet 说:“这个定律认为你的代码能反映出整个公司的组织沟通结构,如果想修复公司的遗留代码,而没有一个好的组织沟通方式是不可能完成的。那是很多人都没注意到的一个重要环节。”
Goulet 和她的团队成员更像是考古学家一样来研究遗留系统项目。他们根据前开发者写的代码构件相关的线索来推断出他们的思想意图。然后再根据这些构件之间的关系来做出新的决策。
最重要的代码是什么样子呢?**良好的代码结构、清晰的思想意图、整洁的代码**。例如,如果使用通用的名称如 “foo” 或 “bar” 来命名一个变量,半年后再返回来看这段代码时,根本就看不出这个变量的用途是什么。
代码构件最重要的什么呢?**良好的代码结构、清晰的思想意图、整洁的代码**。例如,如果使用通用的名称如 “foo” 或 “bar” 来命名一个变量,半年后再返回来看这段代码时,根本就看不出这个变量的用途是什么。
如果代码读起来很困难,可以使用源代码控制系统,这是一个非常有用的工具,因为它可以提供代码的历史修改信息,并允许软件开发者写明他们作出本次修改的原因。
Goulet 说:“我一个朋友认为提交代码时附带的信息,如需要,每一个概要部分的内容应该有推文的一半多,而代码的描述信息应该有一篇博客那么长。你得用这个方式来为你修改的代码写一个合理的说明。这不会浪费太多额外的时间,并且能给后期的项目开发者提供非常多的有用信息,但是让人惊讶的是很少有人会这么做。我们经常听到一些开发人员在调试代码的过程中,很沮丧的报怨这是谁写的这烂代码,最后发现还不是他们自己写的。”
Goulet 说:“我一个朋友认为提交代码时附带的信息,每一个概要部分的内容应该有半条推文那么长(几十个字),如需要的话,代码的描述信息应该有一篇博客那么长。你得用这个方式来为你修改的代码写一个合理的说明。这不会浪费太多额外的时间,并且能给后期的项目开发者提供非常多的有用信息,但是让人惊讶的是很少有人会这么做。我们经常能看到一些开发人员在被一段代码激怒之后,要用 `git blame` 扒代码库找出这些垃圾是谁干的,结果最后发现是他们自己干的。”
使用自动化测试对于理解程序的流程非常有用。Goulet 解释道:“很多人都比较认可 Michael Feathers 提出的关于遗留代码的定义。测试套件对于理解开发者的意图来说是非常有用的工具,尤其当用来与[<ruby>行为驱动开发模式<rt>Behavior Driven Development</rt></ruby>][53]相结合时,比如编写测试场景。”
理由很简单,如果你想利用好遗留代码,你得多注意使代码在将来易于理解和工作的一些细节上。编写并运行单元程序、接受、认可,并且进行集成测试,写清楚注释的内容。方便以后你自己或是别人来理解你写的代码。
理由很简单,如果你想将遗留代码限制在一定程度下,注意到这些细节将使代码更易于理解,便于在以后也能工作。编写并运行一个代码单元,接受、认可,并且集成测试。写清楚注释的内容,方便以后你自己或是别人来理解你写的代码。
尽管如此,由于很多已知的和不可意料的原因,遗留代码仍然会发生
尽管如此,由于很多已知的和不可意料的原因,遗留代码仍然会出现
在创业公司刚成立的初期,公司经常会急于推出很多新的功能。开发人员在巨大的交付压力下,测试常常半途而废。Corgibytes 团队就遇到过好多公司很多年都懒得对系统做详细的测试了。
确实如此当你急于开发出系统原型的时候强制性地去做太多的测试也许意义不大。但是一旦产品开发完成并投入使用后你就需要投入时间精力来维护及完善系统了。Goulet 说:“很多人觉得运维没什么好担心的,重要的是产品功能特性上的强大。如果真这样,当系统规模到一定程序的时候,就很难再扩展了。同时也就失去市场竞争力了。
确实如此当你急于开发出系统原型的时候强制性地去做太多的测试也许意义不大。但是一旦产品开发完成并投入使用后你就需要投入时间精力来维护及完善系统了。Goulet 说:“很多人说,‘别在维护上费心思,重要的是功能!’ **如果真这样,当系统规模到一定程序的时候,就很难再扩展了。同时也就失去市场竞争力了。**”
最后才明白过来,原来热力学第二定律对代码也同样适用:你所面临的一切将向熵增的方向发展。你需要与混乱无序的技术债务进行一场无休无止的战斗。遗留代码随着时间的增长,也逐渐变成一种债务。
最后才明白过来,原来热力学第二定律对代码也同样适用:**你所面临的一切将向熵增的方向发展。**你需要与混乱无序的技术债务进行一场无休无止的战斗。随着时间的推移,遗留代码也逐渐变成一种债务。
她说:“我们再次拿家来做比喻。你必须坚持每天收拾餐具,打扫卫生,倒垃圾。如果你不这么做,情况将来越来越糟糕,直到有一天你不得不向 HazMat 团队求助。”(译者HazMat 团队,危害物质专队)
她说:“我们再次拿家来做比喻。你必须坚持每天收拾餐具、打扫卫生、倒垃圾。如果你不这么做,情况将来越来越糟糕,直到有一天你不得不向 HazMat 团队求助。”LCTT 译HazMat 团队,危害物质专队)
就跟这种情况一样Corgibytes 团队接到很多公司 CEO 的求助电话,比如有一位 CEO 在电话里抱怨道:“现在我们公司的开发团队工作效率太低了,以前只需要两个星期就能交付的新功能,现在却要花费 12 个星期。”
> **技术债务往往反出公司运作上的问题。**
> **技术债务往往反映出公司运作上的问题。**
很多公司的 CTO 明知会发生技术债务的问题,但是他们很难说服其它同事相信花钱来修复那些已经存在的问题是值得的。这看起来像是在走回头路,很乏味或者没有新的产品。有些公司直到系统已经严重影响了日常工作效率时,才着手去处理这些技术债务方面的问题,那时付出的代价就太高了。
很多公司的 CTO 明知会发生技术债务的问题,但是他们很难说服其它同事相信花钱来修复那些已经存在的问题是值得的。这看起来像是在走回头路,很乏味,也不是新的产品。有些公司直到系统已经严重影响了日常工作效率时,才着手去处理这些技术债务方面的问题,那时付出的代价就太高了。
### 忘记债务,创造技术财富
@ -66,90 +67,90 @@ Goulet 说:“我一个朋友认为提交代码时附带的信息,如需要
这就像对一栋房子,要实现其现代化及维护的方式有两种:小动作,表面上的更改(“我买了一块新的小地毯!”)和大改造,需要很多年才能偿还所有债务(“我想我们应替换掉所有的管道...”)。你必须考虑好两者,才能让你们已有的产品和整个团队顺利地运作起来。
这还需要提前预算好 —— 否则那些较大的花销将会是硬伤。定期维护是最基本的预期费用。让人震惊的是,很多公司在商务上都没把维护成本预算进来。
这还需要提前预算好 —— 否则那些较大的花销将会是硬伤。定期维护是最基本的预期费用。让人震惊的是,很多公司都没把维护当成商务成本预算进来。
这就是 Goulet 提出“**<ruby>软件重构<rt>software remodeling</rt></ruby>**”这个术语的原因。当你房子里的一些东西损坏的时候,你并不是铲除整个房子,从头开始重建。同样的,当你们公司出现老的损坏的代码时,重写代码通常不是最明智的选择。
这就是 Goulet 提出“**<ruby>软件重构<rt>software remodeling</rt></ruby>**”这个术语的原因。当你房子里的一些东西损坏的时候,你并不是铲除整个房子,从头开始重建。同样的,当你们公司出现老的损坏的代码时,重写代码通常不是最明智的选择。
下面是 Corgibytes 公司在重构客户代码用到的一些方法:
* 把大型的应用系统分解成轻量级的更易于维护的微服务。
* 相互功能模块之间降低耦合性以便于扩展。
* 更新品牌和提升用户前端界面体验。
* 集合自动化测试来检查代码可用性。
* 重构或者修改代码库来提高易用性
* 把大型的应用系统分解成轻量级的更易于维护的微服务。
* 让功能模块彼此解耦以便于扩展。
* 更新形象和提升用户前端界面体验。
* 集合自动化测试来检查代码可用性。
* 代码库可以让重构或者修改更易于操作
系统重构也进入到运维领域。比如Corgibytes 公司经常推荐新客户使用 [Docker][50]以便简单快速的部署新的开发环境。当你们团队有30个工程师的时候把初始化配置时间从 10 小时减少到 10 分钟对完成更多的工作很有帮助。系统重构不仅仅是应用于软件开发本身,也包括如何进行系统重构。
系统重构也进入到 DevOps 领域。比如Corgibytes 公司经常推荐新客户使用 [Docker][50],以便简单快速的部署新的开发环境。当你们团队有 30 个工程师的时候,把初始化配置时间从 10 小时减少到 10 分钟对完成更多的工作很有帮助。系统重构不仅仅是应用于软件开发本身,也包括如何进行系统重构。
如果你知道做些什么能让你们的代码管理起来更容易更高效就应该把这它们写入到每年或季度的项目规划中。别指望它们会自动呈现出来。但是也别给自己太大的压力来马上实施它们。Goulets 看到很多公司从一开始就致力于100% 覆盖率测试而陷入困境。
如果你知道做些什么能让你们的代码管理起来更容易更高效就应该把这它们写入到每年或季度的项目规划中。别指望它们会自动呈现出来。但是也别给自己太大的压力来马上实施它们。Goulets 看到很多公司从一开始就致力于 100% 测试覆盖率而陷入困境。
**具体来说,每个公司都应该把以下三种类型的重构工作规划到项目建设中来:**
* 自动测试
* 持续交付
* 自动测试
* 持续交付
* 文化提升
咱们来深入的了解下每一项内容。
**自动化测试**
#### 自动测试
“有一位客户即将进行第二轮融资,但是他们没办法在短期内招聘到足够的人才。我们帮助他们引进了一种自动化测试框架,这让他们的团队在 3 个月的时间内工作效率翻了一倍”Goulets说。“这样他们就可以在他们的投资人面前自豪的说我们一个精英团队完成的任务比两个普通的团队要多。
“有一位客户即将进行第二轮融资,但是他们没办法在短期内招聘到足够的人才。我们帮助他们引进了一种自动化测试框架,这让他们的团队在 3 个月的时间内工作效率翻了一倍”Goulets 说。“这样他们就可以在他们的投资人面前自豪的说,‘我们一个精英团队完成的任务比两个普通的团队要多。’”
自动化测试从根本上来讲就是单个测试的组合。你可以使用单元测试再次检查某一行代码。可以使用集成测试来确保系统的不同部分都正常运行。还可以使用验收性测试来检验系统的功能特性是否跟你想像的一样。当你把这些测试写成测试脚本后,你只需要简单地用鼠标点一下按钮就可以让系统自行检验了,而不用手工的去梳理并检查每一项功能。
自动化测试从根本上来讲就是单个测试的组合,就是可以再次检查某一行代码的单元测试。可以使用集成测试来确保系统的不同部分都正常运行。还可以使用验收性测试来检验系统的功能特性是否跟你想像的一样。当你把这些测试写成测试脚本后,你只需要简单地用鼠标点一下按钮就可以让系统自行检验了,而不用手工的去梳理并检查每一项功能。
在产品市场尚未打开之前就来制定自动化测试机制有些言之过早。但是一旦你有一款感到满,并且客户也很依赖的产品,就应该把这件事付诸实施了。
在产品市场尚未打开之前就来制定自动化测试机制有些言之过早。但是一旦你有一款感到满,并且客户也很依赖的产品,就应该把这件事付诸实施了。
**持续性交付**
#### 持续交付
这是与自动化交付相关的工作,过去是需要人工完成。目的是当系统部分修改完成时可以迅速进行部署,并且短期内得到反馈。这使公司在其它竞争对手面前有很大的优势,尤其是在售后服务行业。
这是与自动化交付相关的工作,过去是需要人工完成。目的是当系统部分修改完成时可以迅速进行部署,并且短期内得到反馈。这使公司在其它竞争对手面前有很大的优势,尤其是在客户服务行业。
“比如说你每次部署系统时环境都很复杂。熵值无法有效控制”Goulets 说。“我们曾经见过花 12 个小时甚至更多的时间来部署一个很大的集群环境。在这种情况下,你不会愿意频繁部署了。因为太折腾人了,你还会推迟系统功能上线的时间。这样,你将落后于其它公司并失去竞争力。”
**在持续性改进的过程中常见的其它自动化任务包括:**
*   在提交完成之后检查中断部分。
* 在出现故障时进行回滚操作。
* 审查自动化代码的质量。
* 根据需求增加或减少服务器硬件资源。
* 让开发,测试及生产环境配置简单易懂。
* 在提交完成之后检查构建中断部分。
* 在出现故障时进行回滚操作。
* 自动化审查代码的质量。
* 根据需求增加或减少服务器硬件资源。
* 让开发、测试及生产环境配置简单易懂。
举一个简单的例子,比如说一个客户提交了一个系统 Bug 报告。开发团队越高效解决并修复那个 Bug 越好。对于开发人员来说,修复 Bug 的挑战根本不是个事儿,这本来也是他们的强项,主要是系统设置上不够完善导致他们浪费太多的时间去处理 bug 以外的其它问题。
使用持续改进的方式时,在你决定哪些工作应该让机器去做,哪些最好交给研发去完成的时候,你会变得更干脆了。如果机器更擅长,那就使其自动化完成。这样也能让研发愉快地去解决其它有挑战性的问题。同时客户也会很高兴地看到他们报怨的问题被快速处理了。你的待修复的未完成任务数减少了,之后你就可以把更多的时间投入到运用新的方法来提高公司产品质量上了。**这是创造科技财富的一种转变。**因为开发人员可以修复 bug 后立即发布新代码,这样他们就有时间和精力做更多事。
使用持续改进的方式时,你要严肃地决定哪些工作应该让机器去做,哪些交给研发去完成更好。如果机器更擅长,那就使其自动化完成。这样也能让研发愉快地去解决其它有挑战性的问题。同时客户也会很高兴地看到他们抱怨的问题被快速处理了。你的待修复的未完成任务数减少了,之后你就可以把更多的时间投入到运用新的方法来提高产品质量上了。**这是创造科技财富的一种转变。**因为开发人员可以修复 bug 后立即发布新代码,这样他们就有时间和精力做更多事。
“你必须时刻问自己我应该如何为我们的客户改善产品功能如何做得更好如何让产品运行更高效不过还要不止于此。”Goulets 说。“一旦你回答完这些问题后,你就得询问下自己,如何自动去完成那些需要改善的功能。”
**提升企业文化**
#### 文化提升
Corgibytes公司每天都会看到同样的问题一家创业公司建立了一个对开发团队毫无影响的文化环境。公司 CEO 抱着双臂思考着为什么这样的环境对员工没多少改变。然而事实却是公司的企业文化对工作并不利。为了激励工程师,你必须全面地了解他们的工作环境。
Corgibytes 公司每天都会看到同样的问题:一家创业公司建立了一个对开发团队毫无推动的文化环境。公司 CEO 抱着双臂思考着为什么这样的环境对员工没多少改变。然而事实却是公司的企业文化对工作并不利。为了激励工程师,你必须全面地了解他们的工作环境。
为了证明这一点Goulet 引用了作者 Robert Henry 说过的一段话:
> **目的不是创造艺术,而是在最美妙的状态下让艺术应运而生。**
“你们也要开始这样思考一下你们的软件,”她说。“你们的企业文件就类似状态。你们的目标是总能创造一个让艺术品应运而生的环境,这件艺术品就是你们公司的代码一流的售后服务、充满幸福感的开发者、良好的市场、盈利能力等等。这些都息息相关。”
“你们也要开始这样思考一下你们的软件,”她说。“你们的企业文化就类似那个状态。你们的目标是创造一个让艺术品应运而生的环境,这件艺术品就是你们公司的代码、一流的售后服务、充满幸福感的开发者、良好的市场预期、盈利能力等等。这些都息息相关。”
优先考虑公司的技术债务和遗留代码也是一种文化。那是真正为开发团队清除障碍,以制造影响的方法。同时,这也会让你将来有更多的时间精力去完成更重要的工作。如果你不从根本上改变固有的企业文化环境,你就不可能重构公司产品。改变对产品维护及现代化投资的态度是开始实施变革的第一步最理想情况是从公司的CEO开始转变。
优先考虑解决公司的技术债务和遗留代码也是一种文化。那是真正为开发团队清除障碍,以制造影响的方法。同时,这也会让你将来有更多的时间精力去完成更重要的工作。如果你不从根本上改变固有的企业文化环境,你就不可能重构公司产品。改变对产品维护及现代化投资的态度是开始实施变革的第一步,最理想情况是从公司的 CEO 开始自顶向下转变。
以下是 Goulet 关于建立那种流态文化方面提出的建议:
*   反对公司嘉奖那些加班到深夜的“英雄”。提倡高效率的工作方式。
*   了解协同开发技术,比如 Woody Zuill 提出的[<ruby>合作编程<rt>Mob Programming</rt></ruby>][44]模式。
* 遵从 4 个[现代敏捷开发][42] 原则:用户至上、实践及快速学习、把系统安全放在首位、持续交付价值。
* 每周为研发提供项目外的职业发展时间。
* 把[日工作记录][43]作为一种驱动开发团队主动解决问题的方式。
* 把同情心放在第一位。Corgibytes 公司让员工参加 [Brene Brown 勇气工厂][40]的培训是非常有用的。
* 反对公司嘉奖那些加班到深夜的“英雄”。提倡高效率的工作方式。
* 了解协同开发技术,比如 Woody Zuill 提出的[<ruby>合作编程<rt>Mob Programming</rt></ruby>][44]模式。
* 遵从 4 个[现代敏捷开发][42]原则:用户至上、实践及快速学习、把安全放在首位、持续交付价值。
* 每周为研发人员提供项目外的职业发展时间。
* 把[日工作记录][43]作为一种驱动开发团队主动解决问题的方式。
* 把同情心放在第一位。Corgibytes 公司让员工参加 [Brene Brown 勇气工厂][40]的培训是非常有用的。
“如果公司高管和投资者不支持这种升级方式你得从客户服务的角度去说服他们”Goulet 说,“告诉他们通过这次调整后,最终产品将如何给公司的大客户提高更好的体验。这是你能做的一个很有力的论点。”
“如果公司高管和投资者不支持这种升级方式你得从客户服务的角度去说服他们”Goulet 说,“告诉他们通过这次调整后,最终产品将如何给公司的大多数客户提高更好的体验。这是你能做的一个很有力的论点。”
### 寻找最具天才的代码重构者
整个行业都认为顶尖的工程师不愿意干修复遗留代码的工作。他们只想着去开发新的东西。大家都说把他们留在维护部门真是太浪费人才了。
其实这些都是误解。如果你知道去哪里和如何找工程师,并为他们提供一个愉快的工作环境,你就可以找到技术非常精湛的工程师,来帮你解决那些最棘手的技术债务问题。
**其实这些都是误解。如果你知道去哪里和如何找工程师,并为他们提供一个愉快的工作环境,你就可以找到技术非常精湛的工程师,来帮你解决那些最棘手的技术债务问题。**
“每一次开会的时候,我们都会问现场的同事‘谁喜欢去干遗留代码的工作?’每次只有不到 10% 的同事会举手。”Goulet 说。“但是我跟这些人交流后,我发现这些工程师恰好是喜欢最具挑战性工作的人才。”
“每次在会议上,我们都会问现场的同事‘谁喜欢去在遗留代码上工作?’每次只有不到 10% 的与会者会举手。”Goulet 说。“但是我跟这些人交流后,我发现这些工程师恰好是喜欢最具挑战性工作的人才。”
有一位客户来寻求她的帮助他们使用国产的数据库没有任何相关文档也没有一种有效的方法来弄清楚他们公司的产品架构。她称修理这种情况的一类工程师为“修正者”。在Corgibytes公司她有一支这样的修正者团队由她支配热衷于通过研究二进制代码来解决技术问题。
有一位客户来寻求她的帮助,他们使用国产的数据库,没有任何相关文档,也没有一种有效的方法来弄清楚他们公司的产品架构。她称修理这种情况的一类工程师为“修正者”。在 Corgibytes 公司,她有一支这样的修正者团队由她支配,热衷于通过研究二进制代码来解决技术问题。
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/BeX5wWrESmCTaJYsuKhW_Screen%20Shot%202016-08-11%20at%209.17.04%20AM.png)
@ -163,7 +164,7 @@ Corgibytes公司每天都会看到同样的问题一家创业公司建立了
但是随着时间的推移,她发现可以重新定义招聘流程来帮助她识别出更出色的候选人。比如说,她在应聘要求中写道,“公司 CEO 将会重新审查你的简历,因此请确保求职信中致意时不用写明性别。所有以‘尊敬的先生’或‘先生’开头的信件将会被当垃圾处理掉”。这些只是她的招聘初期策略。
“我开始这么做是因为很多申请人把我当成一家软件公司的男性 CEO这让我很厌烦”Goulet 说。“所以,有一天我想我应该它当作应聘要求放到网上,看有多少人注意到这个问题。令我惊讶的是,这让我过滤掉一些不太严谨的申请人。还突显出了很多擅于从事遗留代码方面工作的人。”
“我开始这么做,是因为很多申请人想当然地把我当成男性:我是一家软件公司的 CEO所以我‘必须’是男性”Goulet 说。“所以,有一天我想我应该把它当作应聘要求放到网上,看有多少人注意到这个问题。令我惊讶的是,这让我过滤掉了一些不太严谨的申请人,还突显出了很多擅于从事遗留代码方面工作的人。”
Goulet 想起一个应聘者发邮件给我说,“我查看了你们网站的代码(我喜欢这个网站,这也是我的工作)。你们的网站架构很奇特,好像是用 PHP 写的,但是你们却运行在用 Ruby 语言写的 Jekyll 下。我真的很好奇那是什么呢。”
@ -177,9 +178,9 @@ Goulet 从她的设计师那里得知,原来,在 HTML、CSS 和 JavaScript
如果他们通过首轮面试Goulet 将会让候选者阅读一篇 Arlo Belshee 写的文章“[<ruby>命名是一个过程<rt>Naming is a Process</rt></ruby>][46]”。它讲的是非常详细的处理遗留代码的的过程。她最经典的指导方法是:“阅读完这段代码并且告诉我,你是怎么理解的。”
她将找出对问题的理解很深刻并且也愿意接受文章里提出的观点的候选者。这对于区分有深刻理解的候选者和仅仅想获得工作的候选者来说,是极其有用的办法。她强烈要求候选者找出一段与他操作相关的代码,来证明他是充满激情的、有主见的及善于分析问题的人。
她将找出对问题的理解很深刻并且也愿意接受文章里提出的观点的候选者。这对于区分有深刻理解的候选者和仅仅想获得工作的候选者来说,是极其有用的办法。她强烈要求候选者找出一段与他操作相关的代码,来证明他是充满激情的、有主见的及善于分析问题的人。
最后,她会让候选者跟公司里当前的团队成员一起使用 [Exercism.io][45] 工具进行编程。这是一个开源项目,它允许开发者学习如何在不同的编程语言环境下使用一系列的测试驱动开发的练习进行编程。第一部分的协同编程课程允许候选者选择其中一种语言进行内建。下一个练习中,面试者可以选择一种语言进行编程。他们总能看到那些人处理异常的方法、随机应便的能力以及是否愿意承认某些自己不了解的技术。
最后,她会让候选者跟公司里当前的团队成员一起使用 [Exercism.io][45] 工具进行编程。这是一个开源项目,它允许开发者学习如何在不同的编程语言环境下使用一系列的测试驱动开发的练习进行编程。结对编程课程的第一部分允许候选者选择其中一种语言来使用。下一个练习中,面试官可以选择一种语言进行编程。他们总能看到那些人处理异常的方法、随机应变的能力以及是否愿意承认某些自己不了解的技术。
“当一个人真正的从执业者转变为大师的时候他会毫不犹豫的承认自己不知道的东西”Goulet说。
@ -189,30 +190,28 @@ Goulet 从她的设计师那里得知,原来,在 HTML、CSS 和 JavaScript
如果一个有天赋的修正者在眼前Goulet 懂得如何让他走向成功。下面是如何让这种类型的开发者感到幸福及高效工作的一些方式:
* 给他们高度的自主权。把问题解释清楚,然后安排他们去完成,但是永不命令他们应该如何去解决问题。
* 如果他们要求升级他们的电脑配置和相关工具,尽管去满足他们。他们明白什么样的需求才能最大限度地提高工作效率。
* 帮助他们[避免更换任务][39]。他们喜欢全身心投入到某一个任务直至完成。
* 给他们高度的自主权。把问题解释清楚,然后安排他们去完成,但是永不命令他们应该如何去解决问题。
* 如果他们要求升级他们的电脑配置和相关工具,尽管去满足他们。他们明白什么样的需求才能最大限度地提高工作效率。
* 帮助他们[避免分心][39]。他们喜欢全身心投入到某一个任务直至完成。
总之,这些方法已经帮助 Corgibytes 公司培养出 20 几位对遗留代码充满激情的专业开发者。
总之,这些方法已经帮助 Corgibytes 公司培养出二十几位对遗留代码充满激情的专业开发者。
### 稳定期没什么不好
大多数创业公司都都不想跳过他们的成长期。一些公司甚至认为成长期应该是永无止境的。而且,他们觉得也没这个必要,即便他们已经进入到了下一个阶段:稳定期。完全进入到稳定期意味着你拥有人力资源及管理方法来创造技术财富,同时根据优先权适当支出。
大多数创业公司都不想跳过他们的成长期。一些公司甚至认为成长期应该是永无止境的。而且,他们觉得也没这个必要跳过成长期,即便他们已经进入到了下一个阶段:稳定期。**完全进入到稳定期意味着你拥有人力资源及管理方法来创造技术财富,同时根据优先权适当支出。**
“在成长期和稳定期之间有个转折点就是维护人员必须要足够壮大并且相对于专注新功能的产品开发人员你开始更公平的对待维护人员”Goulet说。“你们公司的产品开发完成了。现在你得让他们更加稳定地运行。”
“在成长期和稳定期之间有个转折点就是维护人员必须要足够壮大并且相对于专注新功能的产品开发人员你开始更公平的对待维护人员”Goulet 说。“你们公司的产品开发完成了。现在你得让他们更加稳定地运行。”
这就意味着要把公司更多的预算分配到产品维护及现代化方面。“你不应该把产品维护当作是一个不值得关注的项目,”她说。“这必须成为你们公司固有的一种企业文化 —— 这将帮助你们公司将来取得更大的成功。“
最终,你通过这些努力创建的技术财富,将会为你的团队带来一大批全新的开发者:他们就像侦查兵一样,有充足的时间和资源去探索新的领域,挖掘新客户资源并且给公司创造更多的机遇。当你们在新的市场领域做得更广泛并且不断发展得更好 —— 那么你们公司已经真正地进入到繁荣发展的状态了。
最终,你通过这些努力创建的技术财富,将会为你的团队带来一大批全新的开发者:他们就像侦查兵一样,有充足的时间和资源去探索新的领域,挖掘新客户资源并且给公司创造更多的机遇。当你们在新的市场领域做得更广泛并且不断取得进展 —— 那么你们公司已经真正地进入到繁荣发展的状态了。
--------------------------------------------------------------------------------
via: http://firstround.com/review/forget-technical-debt-heres-how-to-build-technical-wealth/
作者:[http://firstround.com/][a]
译者:[rusking](https://github.com/rusking)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,181 @@
初识 HTTP/2第二部分
============================================================
![](https://static.viget.com/_284x284_crop_center-center/ben-t-http-blog-thumb-01_360.png?mtime=20160928234634)
> HTTP/2 时代的开启为前端开发带来了最佳体验。
如果你对 HTTP/2 有所了解,那你可能用过它,或者至少想过怎样能把它融入你的项目中。尽管有很多关于它如何改变工作流程,提高 Web 速度和效率等方面的猜想,但最佳使用方式还没有定下来。这里我想讲的就是我在之前的项目中所发现的 HTTP/2 的最佳实践。
如果你还不确定什么是 HTTP/2或者为什么它能改进你的工作可以先看看我[介绍背景方面的第一篇文章][4]。
记住:开始之前,我要告诉你,尽管你的浏览器可能支持 HTTP/2但你的服务器可能不支持。检查你的主机托管服务看看他们是否提供 HTTP/2 的支持。否则你可能要建立你自己的服务器。这篇文章并不会涉及这方面该如何做,但你可以查看 [http2 github][5] 页面,找一找这方面的工具。 
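顺带一提,想快速确认某个站点(或你自己的服务器)是否已经启用 HTTP/2可以用较新的、编译了 HTTP/2 支持的 curl 做个简单测试(以下只是一个示意,这里的 `https://example.com` 请换成你要测试的地址;输出 `2` 表示协商到了 HTTP/2输出 `1.1` 则没有):
```
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com
```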
### 🙏 [热身工作]
首先组织好你的文件。看一看下面的文件树结构,作为组织你的样式表的起点:
```
/styles
|── /setup
| /* 变量、混入mixin和函数 */
|── /global
| /* 能放在任何组件和部分中的可重用组件 */
|── /components
| /* 特殊组件和部分 */
|── setup.scss // setup 样式索引
|── global.scss // 全局样式索引
```
这会把你的样式分到三个目录下面:`setup`、`global` 和 `components`。接下来我会说明这些目录对你的项目有什么用。
### setup 目录
`setup` 目录保存所有的变量、函数、混入mixin以及一些正常编译需要的其它文件的定义。要想让这个目录物尽其用把这个目录下所有内容导入到 `setup.scss` 文件中是个很不错的主意,这样这个文件就会像下面所展示的一样:
```
/* setup.scss */
/* 变量 */
@import "setup/variables/colors";
/* 混入 */
@import "setup/mixins/color";
/* 函数 */
@import "setup/functions/color";
... 等等
```
现在我们能快速引用这个站点中的所有定义,应该确保在所有的样式文件顶部包含我们这里创建的这个文件。
### global 目录
接下来的目录global 目录,应该包含可在当前站点的多个部分或者每一个页面中重复使用的组件。像按钮、文本、主要样式,以及你的浏览器默认设置应该放在这里。我不建议把页面的头部或底部样式放在这儿,因为某些项目中没有头部,或者不同页面头部不同。而且,底部永远是页面的最后一个元素,所以在用户加载完当前站点的其它东西前,不必过分优先考虑加载底部样式。
记住,如果没有那些定义在 setup 目录下的东西,你的 global 样式就可能没有作用,你的 global 文件看起来应该像这样:
```
/* global.scss */
/* 应用定义 */
@import "setup";
/* 全局样式 */
@import "global/reset";
@import "global/buttons";
@import "global/typography";
@import "global/grid";
... 等等
```
注意,首先要做的就是导入 setup 样式。这样的话,之后的文件都可以引用这个样式里的定义。
由于站点内的每个页面都需要 global 样式,我们可以用典型的方式,在 `<head>` 标签内用一个 `<link>` 标签来加载它们。你所看到的将是一个十分小巧的 css 文件,或者说理论上小巧的,这取决于你需要多少全局样式。
### 最后,你的组件
注意,我没有在上述目录树中的 components 目录里包含索引文件。这是 HTTP/2 所带来的效用。直到现在,我们已经按照标准步骤构建了一个典型的站点,保持相当简单的结构,仅选择全局化那些最重要的样式。组件充当它们自己的索引文件。
大多数开发者有独特的组织组件的方式,因此我并不想影响你的策略。但是,你所有的组件看起来应该像这样:
```
/* header.scss */
/* 应用定义 */
@import "../setup";
header {
// 样式
}
... 等等
```
同样的,你要把 setup 样式包含进来,确保所有东西在编译时都定义过。除了编译这些文件,以及可能要把它们放到 `/assets` 目录以便很容易地找到模版之外,你不必对这些文件做<ruby>拼接<rt>concatenate</rt></ruby>或<ruby>压缩<rt>minify</rt></ruby>,也不用改变什么。
现在样式表已经差不多了,构建站点应该很简单。
### 构建组件
或许对于模板语言你有自己的选择,这取决于你的项目,有可能是 Twig、Rails、Jade 或者 Handlebars。我认为看待组件最好的方式是如果它有自己的模版文件它就该有一个与之同名的样式文件。这样你的项目中模版和样式就会是个不错的 1:1 比例,而且因为它们的命名是有规律的,你知道哪个文件里有哪些东西、哪个东西在哪个文件里。
现在它正步入正轨,用好 HTTP/2 的多种功能十分简单,让我们做一个模版:
```
{# header.html #}
{# compiled header styles #}
<link href="assets/components/header.css" rel="stylesheet" media="all">
<header>
<h1>This Awesome HTTP/2 Site</h1>
... 等等
```
非常好!在模版里你可能有更简单的方式链接到资源,但这里显示你所要做的仅是在开始构建时,在模版文件中链接一个小小的头部样式。这将允许你的站点仅仅加载特定资源到任意给定页面的组件中,而且,能够设定页面从头到脚的组件的优先级。
### 结合在一起
现在所有的组件都有结构,浏览器将会类似以下方式来渲染它们:
```
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" media="all" href="/assets/global.css">
</head>
<body>
<link rel="stylesheet" media="all" href="/assets/components/header.css">
<header>
... etc
</header>
<link rel="stylesheet" media="all" href="/assets/components/title.css">
<section class="title">
... etc
</section>
<link rel="stylesheet" media="all" href="/assets/components/image-component.css">
<section class="image-component">
... etc
</section>
<link rel="stylesheet" media="all" href="/assets/components/text-component.css">
<section class="text-component">
... etc
</section>
<link rel="stylesheet" media="all" href="/assets/components/footer.css">
<footer>
... etc
</footer>
</body>
</html>
```
这是一个高级别方法,但你的项目中可能有调整的更细致的组件。例如,在头部的 `<nav>` 组件可能要加载自己的样式表。尽你所能地自由发挥,让组件更有作用 - HTTP/2 不会因这些需求而阻碍你!
### 结论
这只是一个关于如何在前端用 HTTP/2 构建项目的基本介绍仅是皮毛而已。你可能注意到我上面所用的方法有的还有改进的空间。请不吝赐教正如我在第一篇文章中所说的HTTP/2 可能颠覆自 HTTP/1 以来我们所熟知的某些标准,所以要慎重思考和实践,以便高效使用 HTTP/2 的开发环境。
--------------------------------------------------------------------------------
via: https://www.viget.com/articles/getting-started-with-http-2-part-2
作者:[Ben][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.viget.com/about/team/btinsley
[1]:https://twitter.com/home?status=Firmly%20planting%20a%20flag%20in%20the%20sand%20for%20HTTP%2F2%20best%20practices%20for%20front%20end%20development.%20https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-2
[2]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-2
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-2
[4]:https://linux.cn/article-8111-1.html
[5]:https://github.com/http2/http2-spec/wiki/Tools

View File

@ -1,29 +1,29 @@
开始使用Ansible
Ansible 起步指南
==========
这是一篇关于 Ansible 的课程,你也可以用来作小项目的模板,或者继续深入这个工具。在本指南的最后,你将了解足够的自动化服务器配置、部署等。
这是一篇关于 Ansible 的速成课程,你可以用作小项目的模板,或者帮你深入了解这个神奇的工具。阅读了本指南之后,你将对自动化服务器配置、部署等有足够的了解。
### Ansible 是什么,为什么你该了解?
Ansible是一个简单的配置管理系统。你只需要访问你的服务器或设备的ssh。它也不同于其他工具因为它使用push的方式而不是像chef那样使用pull的方式。你可以将代码部署到任意数量的服务器上,配置网络设备或在基础架构中自动执行任何操作。
Ansible 简单的说是一个配置管理系统configuration management system。你只需要可以使用 ssh 访问你的服务器或设备就行。它也不同于其他工具,因为它使用推送的方式,而不是像 puppet 或 chef 那样使用拉取的方式。你可以将代码部署到任意数量的服务器上,配置网络设备或在基础架构中自动执行任何操作。
#### 要求
### 前置要求
假设你使用 Mac 或 Linux 作为你的工作站Ubuntu Trusty为你的服务器并有一些安装软件包的经验。此外你的计算机上将需要以下软件。所以如果你还没有它们请先安装
假设你使用 Mac 或 Linux 作为你的工作站Ubuntu Trusty为你的服务器,并有一些安装软件包的经验。此外,你的计算机上将需要以下软件。所以,如果你还没有它们,请先安装:
- Virtualbox
- Vagrant
- Mac 用户: Homebrew
- [Virtualbox](https://www.virtualbox.org/)
- [Vagrant](https://www.vagrantup.com/downloads.html)
- Mac 用户[Homebrew](http://brew.sh/)
#### 情景
我们将模拟2个连接到MySQL数据库的Web应用程序服务器。Web应用程序使用Rails 5和Puma。
### 情景
我们将模拟 2 个连接到 MySQL 数据库的 Web 应用程序服务器。Web 应用程序使用 Rails 5 和 Puma。
### 准备
#### Vagrantfile
为这个项目创建一个文件夹并将下面的内容保存到Vagrantfile
为这个项目创建一个文件夹,并将下面的内容保存到名为 `Vagrantfile` 的文件。
```
VMs = [
@ -46,9 +46,9 @@ Vagrant.configure(2) do |config|
end
```
### 配置你的虚拟网络
#### 配置你的虚拟网络
我们希望我们的虚拟机能互相交互但不要让流量流出到真实的网络所以我们将在Virtualbox中创建一个仅主机的网络适配器。
我们希望我们的虚拟机能互相交互,但不要让流量流出到真实的网络,所以我们将在 Virtualbox 中创建一个仅主机HOST-Only的网络适配器。
1. 打开 Virtualbox
2. 转到 Preferences
@ -56,18 +56,18 @@ end
4. 单击 Host-Only
5. 单击添加网络
6. 单击 Adapter
7. 将IPv4设置为 10.1.1.1IPv4网络掩码255.255.255.0
7. 将 IPv4 设置为 `10.1.1.1`IPv4 网络掩码:`255.255.255.0`
8. 单击 “OK”
#### 测试虚拟机及虚拟网络
在终端中,在具有Vagrantfile的目录中,输入下面的命令:
在终端中,在存放 `Vagrantfile` 的项目目录中,输入下面的命令:
```
vagrant up
```
这回创建你的虚拟机,因此会花费一会时间。输入下面的命令并验证输出来检查是否已经工作:
它会创建你的虚拟机,因此会花费一会时间。输入下面的命令并验证输出内容以检查是否已经工作:
```
$ vagrant status
@ -82,8 +82,7 @@ above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
```
现在使用用户名和密码为vagrantVagrantfile中的IP登录其中一台虚拟机这将验证虚拟机并将它们的密钥添加到你的已知主机文件中。
现在使用 `vagrant` 的用户名和密码,按 `Vagrantfile` 中的 IP 登录其中一台虚拟机,这将验证虚拟机并将它们的密钥添加到你的已知主机(`known_hosts`)文件中。
```
ssh vagrant@10.1.1.11 # password is `vagrant`
@ -107,16 +106,16 @@ $ brew install ansible
$ sudo apt install ansible
```
确保你使用了ansible最近的版本 2.1 或者更高的版本:
确保你使用的 ansible 是最近的版本2.1 或者更高的版本:
```
$ ansible --version
ansible 2.1.1.0
```
### inventory
### 清单
Ansible 使用 inventory 来了解要使用的服务器,以及如何将它们分组以并行执行任务。让我们为这个项目创建我们的 inventory并将 inventory 放在与 Vagrantfile 相同的文件夹中:
Ansible 使用清单文件来了解要使用的服务器,以及如何将它们分组以并行执行任务。让我们为这个项目创建我们的清单文件 `inventory`,并将它放在与 `Vagrantfile` 相同的文件夹中:
```
[all:children]
@ -135,21 +134,20 @@ web2 ansible_host=10.1.1.12
dbserver ansible_host=10.1.1.21
```
- `[allchildren]` 定义一个组all的组
- `[allvars]` 定义属于组all的变量
- `[webs]` 定义一个组,就像[dbs]
- 文件的其余部分只是主机的声明带有它们的名称和IP
- `[all:children]` 定义一个组的组:`all`
- `[all:vars]` 定义属于组 `all` 的变量
- `[webs]` 定义一个组,就像 `[dbs]` 一样
- 文件的其余部分只是主机的声明,带有它们的名称和 IP
- 空行表示声明结束
现在我们有了一个inventory,我们可以从命令行开始使用 ansible指定一个主机或一个组来执行命令。以下是检查与服务器的连接的命令示例
现在我们有了一个清单,我们可以从命令行开始使用 ansible指定一个主机或一个组来执行命令。以下是检查与服务器的连接的命令示例
```
$ ansible -i inventory all -m ping
```
- `-i` 指定inventory文件
- `-i` 指定清单文件
- `all` 指定要操作的服务器或服务器组
- `-m' 指定一个ansible模块在这种情况下为ping
- `-m` 指定一个 ansible 模块,在这种情况下为 `ping`
下面是命令输出:
@ -168,9 +166,9 @@ web2 | SUCCESS => {
}
```
服务器以不同的顺序响应,这只取决于谁先响应,但是这个没有因为ansible独立保持每台服务器的状态。
服务器以不同的顺序响应,这只取决于谁先响应,但是这没有关系,因为 ansible 独立保持每台服务器的状态。
你也可以使用另外一个选项运行任何命令:
你也可以使用另外一个选项运行任何命令:
- `-a <command>`
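例如,用 `-a` 在清单中的所有服务器上查看磁盘使用情况(示意):
```
ansible -i inventory all -a "df -h"
```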
@ -195,11 +193,11 @@ Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 1.4G 37G 4% /
```
### Playbook
### 剧本
Playbook 只是 YAML 文件它将inventory中的服务器组与命令关联。ansible的正确用法是任务它可以是期望的状态shell 命令或许多其他选项。有关 ansible 可做的所有事情列表,可以查看所有模块的列表。
剧本playbook只是个 YAML 文件,它将清单文件中的服务器组与命令关联。在 ansible 中对应的关键字是 `tasks`它可以是一个预期的状态、shell 命令或许多其它的选项。有关 ansible 可做的所有事情列表,可以查看[所有模块的列表](http://docs.ansible.com/ansible/list_of_all_modules.html)。
下面是一个运行 shell 命令的 playbook 示例,将其保存为 playbook1.yml
下面是一个运行 shell 命令的剧本示例,将其保存为 `playbook1.yml`
```
---
@ -211,8 +209,8 @@ Playbook 只是 YAML 文件它将inventory中的服务器组与命令关联
- `---` 是 YAML 文件的开始
- ` - hosts`:指定要使用的组
- `tasks`:标记任务列表的开始
- ` - shell`指定使用shell模块的第一个任务
- 记住YAML 需要缩进确保你始终遵循playbook中的正确结构
- ` - shell`:指定第一个任务使用 [shell](http://docs.ansible.com/ansible/shell_module.html) 模块
- **记住YAML 需要缩进结构,确保你始终遵循剧本中的正确结构**
用下面的命令运行它:
@ -237,14 +235,13 @@ web1 : ok=2 changed=1 unreachable=0 failed=0
web2 : ok=2 changed=1 unreachable=0 failed=0
```
正如你所见ansible 运行了 2 个任务,而不是只有 playbook 中的一个。TASK [setup]是一个隐式任务它会首先运行以捕获服务器的信息如主机名、IP、分布和更多详细信息然后可以使用该信息运行条件任务。
正如你所见ansible 运行了 2 个任务,而不是只有剧本中的一个。`TASK [setup]` 是一个隐式任务它会首先运行以捕获服务器的信息如主机名、IP、发行版和更多详细信息然后可以使用这些信息运行条件任务。
还有一个最后的PLAY RECAP其中 ansible 显示了有多少个运行的任务以及每个对应的状态。在我们的例子中,因为我们运行了一个 shell 命令ansible 不知道结果的状态,它被认为是 changed。
还有最后的 `PLAY RECAP`,其中 ansible 显示了运行了多少个任务以及每个对应的状态。在我们的例子中,因为我们运行了一个 shell 命令ansible 不知道结果的状态,它被认为是 `changed`
#### 安装软件
### 安装软件
我们将使用 apt 在我们的服务器上安装软件因为我们需要root所以我们必须使用 become 语句,将这个内容保存在 playbook2.yml 中并运行它ansible-playbook playbook2.yml
我们将使用 [apt](http://docs.ansible.com/ansible/apt_module.html) 在我们的服务器上安装软件,因为我们需要 root 权限,所以我们必须使用 `become` 语句,将这个内容保存在 `playbook2.yml` 中并运行它(`ansible-playbook playbook2.yml`
```
---
@ -255,7 +252,7 @@ web2 : ok=2 changed=1 unreachable=0 failed=0
- apt: name=git state=present
```
可以应用于 ansible 中所有模块的语句; 一个是 name 语句,让我们可以打印关于正在执行的任务的更具描述性的文本。要使用它,任务还是一样,但是添加 name 字段:描述性文本作为第一行,所以我们以前的文本将是
一些语句可以应用于 ansible 中所有模块;其中一个是 `name` 语句,可以让我们输出关于正在执行的任务的更具描述性的文本。要使用它,保持任务内容一样,但是添加 `name: 描述性文本` 作为第一行,所以我们以前的文本将改成:
```
---
@ -267,9 +264,9 @@ web2 : ok=2 changed=1 unreachable=0 failed=0
apt: name=git state=present
```
### 使用 `with_items`
#### 使用 `with_items`
当你在处理一个项目列表、要安装的包、要创建的文件等时可以用 ansible 提供的 with_items。下面是我们如何在 playbook3.yml 中使用它,同时添加一些我们已经知道的其他语句:
当你要处理一个列表时,比如要安装的项目和软件包、要创建的文件,可以用 ansible 提供的 `with_items`。下面是我们如何在 `playbook3.yml` 中使用它,同时添加一些我们已经知道的其他语句:
```
---
@ -287,9 +284,9 @@ web2 : ok=2 changed=1 unreachable=0 failed=0
- python-software-properties
```
### 使用 `template``vars`
#### 使用 `template``vars`
`vars` 是一个定义变量语句,可以在 `task` 语句或 `template` 文件中使用。 Jinja2 是 Ansible 中使用的模板引擎,但是关于它你不需要学习很多。在你的 playbook 中定义变量,如下所示:
`vars` 是一个定义变量语句,可以在 `task` 语句或 `template` 文件中使用。 [Jinja2](http://jinja.pocoo.org/docs/dev/) 是 Ansible 中使用的模板引擎,但是关于它你不需要学习很多。在你的剧本中定义变量,如下所示:
```
---
@ -302,7 +299,7 @@ web2 : ok=2 changed=1 unreachable=0 failed=0
template: src=myconfig.j2 dest={{path_to_vault}}/app.conf
```
正如你看到的,我可以使用 {{path_to_vault}} 作为 playbook 的一部分,但也因为我使用了模板语句,我可以使用 myconfig.j2 中的任何变量,它必须存在一个名为 templates 的子文件夹中。你项目树应该如下所示:
正如你看到的,我可以使用 `{{path_to_vault}}` 作为剧本的一部分,而且因为我使用了 `template` 语句,我还可以使用 `myconfig.j2` 中的任何变量,该文件必须存放在一个名为 `templates` 的子文件夹中。你的项目树应该如下所示:
```
├── Vagrantfile
@ -313,15 +310,15 @@ web2 : ok=2 changed=1 unreachable=0 failed=0
└── myconfig.j2
```
当 ansible 找到一个模板语句后它会在模板文件夹内查找,并将把被“{{”和“}}”括起来的变量展开来。
当 ansible 找到一个 `template` 语句后,它会在 `templates` 文件夹内查找,并把被 `{{` 和 `}}` 括起来的变量展开。
示例模板:
示例模板
```
this is just an example vault_dir: {{path_to_vault}} secret_password: {{secret_key}}
```
即使你不扩展变量你也可以使用`模板`。考虑到将来会添加所以我先做了。比如创建一个 `hosts.j2` 模板并加入主机名和IP。
即使你不扩展变量,你也可以使用 `template`。考虑到将来会添加,所以我先做了。比如创建一个 `hosts.j2` 模板并加入主机名和 IP。
```
10.1.1.11 web1
@ -329,31 +326,31 @@ this is just an example vault_dir: {{path_to_vault}} secret_password: {{secret_k
10.1.1.21 dbserver
```
这里要像这样的语句:
这里需要一个像这样的语句:
```
- name: Installing the hosts file in all servers
template: src=hosts.j2 dest=/etc/hosts mode=644
```
### shell 命令
#### shell 命令
你应该总是尝试使用模块,因为 Ansible 可以跟踪任务的状态,并避免不必要的重复,但有时 shell 命令是不可避免的。 对于这些情况Ansible 提供两个选项:
你应该尽量使用模块,因为 Ansible 可以跟踪任务的状态,并避免不必要的重复,但有时 shell 命令是不可避免的。 对于这些情况Ansible 提供两个选项:
- command直接运行一个命令没有环境变量或重定向|<>等)
- shell运行 /bin/sh 并展开变量和重定向
- [command](http://docs.ansible.com/ansible/command_module.html):直接运行一个命令,没有环境变量或重定向(`|``<``>` 等)
- [shell](http://docs.ansible.com/ansible/shell_module.html):运行 `/bin/sh` 并展开变量和支持重定向
#### 其他有用的模块
- apt_repository - Debian家族中添加/删除包仓库
- yum_repository - RedHat系列中添加/删除包仓库
- service - 启动/停止/重新启动/启用/禁用服务
- git - 从git服务器部署代码
- unarchive - 从Web或本地源解开软件包
- [apt_repository](http://docs.ansible.com/ansible/apt_repository_module.html) - 在 Debian 系的发行版中添加/删除包仓库
- [yum_repository](https://docs.ansible.com/ansible/yum_repository_module.html) - 在 RedHat 系的发行版中添加/删除包仓库
- [service](http://docs.ansible.com/ansible/service_module.html) - 启动/停止/重新启动/启用/禁用服务
- [git](http://docs.ansible.com/ansible/git_module.html) - 从 git 服务器部署代码
- [unarchive](http://docs.ansible.com/ansible/unarchive_module.html) - 从 Web 或本地源解开软件包
#### 只在一台服务器中运行任务
Rails 使用 `migrations` 来逐步更改数据库,但由于你有多个应用程序服务器,因此这些迁移不能被分配为组任务,而只需要一个服务器来运行迁移。在这种情况下,当使用 run_once 时run_once 将分派任务到一个服务器,并继续下一个任务,直到这个任务完成。你只需要在你的任务中设置 run_oncetrue。
Rails 使用 [migrations](http://edgeguides.rubyonrails.org/active_record_migrations.html) 来逐步更改数据库,但由于你有多个应用程序服务器,因此这些迁移任务不能被分配为组任务,而我们只需要一个服务器来运行迁移。在这种情况下可以使用 `run_once``run_once` 会把任务分派到一台服务器上,并等到这个任务完成后才继续下一个任务。你只需要在你的任务中设置 `run_once: true`。
```
- name: 'Run db:migrate'
@ -361,9 +358,9 @@ Rails 使用 `migrations` 来逐步更改数据库,但由于你有多个应用
run_once: true
```
##### 会失败的任务
#### 会失败的任务
通过指定 ignore_errorstrue你可以运行可能会失败但不影响剩余 playbook 完成的任务。这是非常有用的,例如,当删除最初不存在的日志文件时。
通过指定 `ignore_errors: true`,你可以运行可能会失败的任务,但不影响剧本中剩余的任务完成。这是非常有用的,例如,当删除最初不存在的日志文件时。
```
- name: 'Delete logs'
@ -371,11 +368,11 @@ Rails 使用 `migrations` 来逐步更改数据库,但由于你有多个应用
ignore_errors: true
```
##### 放到一起
### 放到一起
现在用我们先前学到的,这里是每个文件的最终版:
Vagrantfile
`Vagrantfile`
```
VMs = [
@ -398,7 +395,7 @@ Vagrant.configure(2) do |config|
end
```
inventory:
`inventory`
```
[all:children]
@ -417,7 +414,7 @@ web2 ansible_host=10.1.1.12
dbserver ansible_host=10.1.1.21
```
templates/hosts.j2:
`templates/hosts.j2`:
```
10.1.1.11 web1
@ -425,7 +422,7 @@ templates/hosts.j2:
10.1.1.21 dbserver
```
templates/my.cnf.j2:
`templates/my.cnf.j2`
```
[client]
@ -470,9 +467,11 @@ max_allowed_packet = 16M
key_buffer = 16M
!includedir /etc/mysql/conf.d/
```
final-playbook.yml:
`final-playbook.yml`
```
- hosts: all
become_user: root
become: true
@ -551,7 +550,7 @@ final-playbook.yml:
shell: cd {{appdir}};rails server -b 0.0.0.0 -p 80 --pid /run/puma.pid -d
```
### 打开你的环境
### 放在你的环境中
将这些文件放在相同的目录,运行下面的命令打开你的开发环境:
@ -560,9 +559,9 @@ vagrant up
ansible-playbook -i inventory final-playbook.yml
```
#### 部署新的代码
### 部署新的代码
确保修改了代码并push到了仓库中。接下来确保你git语句中有正确的分支:
确保修改了代码并推送到了仓库中。接下来,确保你 git 语句中使用了正确的分支:
```
- name: 'Clone app repo'
@ -573,19 +572,19 @@ ansible-playbook -i inventory final-playbook.yml
force=yes
```
作为一个例子,你可以在master上修改version字段再次运行 playbook
作为一个例子,你可以修改 `version` 字段为 `master`,再次运行剧本
```
ansible-playbook -i inventory final-playbook.yml
```
检查所有的 web 服务器上的页面是否已更改:`http// 10.1.1.11` 或 `http// 10.1.1.12`。将其更改为 `version = staging` 并重新运行 playbook 并再次检查页面。
检查所有的 web 服务器上的页面是否已更改:`http://10.1.1.11` 或 `http://10.1.1.12`。将其更改为 `version = staging` 并重新运行剧本并再次检查页面。
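如果不想挨个打开浏览器,也可以用类似下面的命令在终端里粗略看一眼两台 web 服务器返回的页面(只是一个示意):
```
for host in 10.1.1.11 10.1.1.12; do
  echo "== $host =="
  curl -s "http://$host" | head -n 20
done
```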
你还可以创建只包含与部署相关的任务的替代 playbook,以便其运行更快。
你还可以创建只包含与部署相关的任务的替代剧本,以便其运行更快。
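另外,也可以利用 `ansible-playbook` 自带的参数来缩小执行范围或先做一次试运行,比如(示意,`--check` 模式下部分 shell 任务可能会被跳过):
```
# 只对 webs 组执行,并用 --check 先做一次试运行(不会真正修改系统)
ansible-playbook -i inventory final-playbook.yml --limit webs --check
```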
### 接下来是什么
这只是可以做的很小一部分。我们没有接触角色、过滤器、调试等许多其他很棒的功能,但我希望它给了你一个良好的开始!所以,请继续学习并使用它。如果你有任何问题,你可以在 twitter 或评论栏联系我,让我知道你还想知道哪些关于 ansible 的东西!
这只是可以做的很小一部分。我们没有接触角色role、过滤器filter、调试等许多其他很棒的功能,但我希望它给了你一个良好的开始!所以,请继续学习并使用它。如果你有任何问题,你可以在 [twitter](https://twitter.com/c0d5x) 或评论栏联系我,让我知道你还想知道哪些关于 ansible 的东西!
--------------------------------------------------------------------------------
@ -593,10 +592,8 @@ ansible-playbook -i inventory final-playbook.yml
via: https://gorillalogic.com/blog/getting-started-with-ansible/?utm_source=webopsweekly&utm_medium=email
作者:[JOSE HIDALGO][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,347 @@
CentOS 上的 FirewallD 简明指南
============================================================
[FirewallD][4] 是 iptables 的前端控制器,用于实现持久的网络流量规则。它提供命令行和图形界面,在大多数 Linux 发行版的仓库中都有。与直接控制 iptables 相比,使用 FirewallD 有两个主要区别:
1. FirewallD 使用区域和服务而不是链式规则。
2. 它动态管理规则集,允许更新规则而不破坏现有会话和连接。
> FirewallD 是 iptables 的一个封装,可以让你更容易地管理 iptables 规则 - 它并*不是* iptables 的替代品。虽然 iptables 命令仍可用于 FirewallD但建议使用 FirewallD 时仅使用 FirewallD 命令。
本指南将向您介绍 FirewallD 的区域和服务的概念,以及一些基本的配置步骤。
### 安装与管理 FirewallD
CentOS 7 和 Fedora 20+ 已经包含了 FirewallD但是默认没有激活。可以像其它的 systemd 单元那样控制它。
1、 启动服务,并在系统引导时启动该服务:
```
sudo systemctl start firewalld
sudo systemctl enable firewalld
```
要停止并禁用:
```
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```
2、 检查防火墙状态。输出应该是 `running` 或者 `not running`
```
sudo firewall-cmd --state
```
3、 要查看 FirewallD 守护进程的状态:
```
sudo systemctl status firewalld
```
示例输出
```
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: active (running) since Wed 2015-09-02 18:03:22 UTC; 1min 12s ago
Main PID: 11954 (firewalld)
CGroup: /system.slice/firewalld.service
└─11954 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
```
4、 重新加载 FirewallD 配置:
```
sudo firewall-cmd --reload
```
### 配置 FirewallD
FirewallD 使用 XML 进行配置。除非是非常特殊的配置,你不必处理它们,而应该使用 `firewall-cmd`
配置文件位于两个目录中:
* `/usr/lib/firewalld` 下保存默认配置,如默认区域和公用服务。 避免修改它们,因为每次 firewalld 软件包更新时都会覆盖这些文件。
* `/etc/firewalld` 下保存系统配置文件。 这些文件将覆盖默认配置。
#### 配置集
FirewallD 使用两个“配置集”:“运行时”和“持久”。 在系统重新启动或重新启动 FirewallD 时,不会保留运行时的配置更改,而对持久配置集的更改不会应用于正在运行的系统。
默认情况下,`firewall-cmd` 命令适用于运行时配置,但使用 `--permanent` 标志将保存到持久配置中。要添加和激活持久性规则,你可以使用两种方法之一。
1、 将规则同时添加到持久规则集和运行时规则集中。

```
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=http
```
2、 将规则添加到持久规则集中并重新加载 FirewallD。

```
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --reload
```
> `reload` 命令会删除所有运行时配置并应用永久配置。因为 firewalld 动态管理规则集,所以它不会破坏现有的连接和会话。
### 防火墙的区域
“区域”是针对给定位置或场景(例如家庭、公共、受信任等)可能具有的各种信任级别的预构建规则集。不同的区域允许不同的网络服务和入站流量类型,而拒绝其他任何流量。 首次启用 FirewallD 后,`public` 将是默认区域。
区域也可以用于不同的网络接口。例如,要分离内部网络和互联网的接口,你可以在 `internal` 区域上允许 DHCP但在`external` 区域仅允许 HTTP 和 SSH。未明确设置为特定区域的任何接口将添加到默认区域。
要找到默认区域:

```
sudo firewall-cmd --get-default-zone
```
要修改默认区域:
```
sudo firewall-cmd --set-default-zone=internal
```
要查看你网络接口使用的区域:
```
sudo firewall-cmd --get-active-zones
```
示例输出:
```
public
interfaces: eth0
```
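如果想把某个网络接口划到别的区域,可以用 `--change-interface`(下面只是一个示意,`eth1` 和区域名请按你的实际情况替换):
```
sudo firewall-cmd --zone=internal --change-interface=eth1 --permanent
sudo firewall-cmd --reload
```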
要得到特定区域的所有配置:
```
sudo firewall-cmd --zone=public --list-all
```
示例输出:
```
public (default, active)
interfaces: ens160
sources:
services: dhcpv6-client http ssh
ports: 12345/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
```
要得到所有区域的配置:

```
sudo firewall-cmd --list-all-zones
```
示例输出:
```
block
interfaces:
sources:
services:
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
...
work
interfaces:
sources:
services: dhcpv6-client ipp-client ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
```
#### 与服务一起使用
FirewallD 可以根据特定网络服务的预定义规则来允许相关流量。你可以创建自己的自定义系统规则,并将它们添加到任何区域。 默认支持的服务的配置文件位于 `/usr/lib/firewalld/services`,用户创建的服务文件在 `/etc/firewalld/services` 中。
要查看默认的可用服务:
```
sudo firewall-cmd --get-services
```
比如,要启用或禁用 HTTP 服务:

```
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --remove-service=http --permanent
```
#### 允许或者拒绝任意端口/协议
比如:允许或者禁用 12345 端口的 TCP 流量。
```
sudo firewall-cmd --zone=public --add-port=12345/tcp --permanent
sudo firewall-cmd --zone=public --remove-port=12345/tcp --permanent
```
#### 端口转发
下面是**在同一台服务器上**将 80 端口的流量转发到 12345 端口。
```
sudo firewall-cmd --zone="public" --add-forward-port=port=80:proto=tcp:toport=12345
```
要将端口转发到**另外一台服务器上**
1、 在需要的区域中激活 masquerade。
```
sudo firewall-cmd --zone=public --add-masquerade
```
2、 添加转发规则。下面的例子是将本地 80 端口的流量转发到 IP 地址为 123.456.78.9 的远程服务器的 8080 端口上。

```
sudo firewall-cmd --zone="public" --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=123.456.78.9
```
要删除规则,用 `--remove` 替换 `--add`。比如:
```
sudo firewall-cmd --zone=public --remove-masquerade
```
### 用 FirewallD 构建规则集
例如,以下是如何使用 FirewallD 为你的服务器配置基本规则(如果您正在运行 web 服务器)。
1、 将 `eth0` 的默认区域设置为 `dmz`。在所提供的默认区域中dmz非军事区是最适合于这个应用场景的因为它只允许 SSH 和 ICMP。
```
sudo firewall-cmd --set-default-zone=dmz
sudo firewall-cmd --zone=dmz --add-interface=eth0
```
2、 把 HTTP 和 HTTPS 添加永久的服务规则到 dmz 区域中:
```
sudo firewall-cmd --zone=dmz --add-service=http --permanent
sudo firewall-cmd --zone=dmz --add-service=https --permanent
```

3、 重新加载 FirewallD 让规则立即生效:
```
sudo firewall-cmd --reload
```

如果你运行 `firewall-cmd --zone=dmz --list-all` 会有下面的输出:
```
dmz (default)
interfaces: eth0
sources:
services: http https ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
```

这告诉我们,`dmz` 区域是我们的**默认**区域,应用于 `eth0` 接口、所有网络的**源地址**和**端口**。允许传入 HTTP端口 80、HTTPS端口 443和 SSH端口 22的流量并且由于没有限制 IP 版本,这些规则同时适用于 IPv4 和 IPv6。不允许 **IP 伪装**以及**端口转发**。没有设置 **ICMP 块**,所以 ICMP 流量是完全允许的。没有**丰富Rich规则**,且允许所有出站流量。
### 高级配置
服务和端口适用于基本配置,但对于高级情景可能会限制较多。 丰富Rich规则和直接Direct接口允许你为任何端口、协议、地址和操作向任何区域 添加完全自定义的防火墙规则。
#### 丰富规则
丰富规则的语法有很多,但都完整地记录在 [firewalld.richlanguage(5)][5] 的手册页中(或在终端中 `man firewalld.richlanguage`)。 使用 `--add-rich-rule`、`--list-rich-rules` 、 `--remove-rich-rule` 和 firewall-cmd 命令来管理它们。
这里有一些常见的例子:
允许来自主机 192.168.0.14 的所有 IPv4 流量。
```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family="ipv4" source address=192.168.0.14 accept'
```
拒绝来自主机 192.168.1.10 到 22 端口的 IPv4 的 TCP 流量。
```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family="ipv4" source address="192.168.1.10" port port=22 protocol=tcp reject'
```
允许来自主机 10.1.0.3 到 80 端口的 IPv4 的 TCP 流量,并将流量转发到 6532 端口上。

```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family=ipv4 source address=10.1.0.3 forward-port port=80 protocol=tcp to-port=6532'
```
将主机 172.31.4.2 上 80 端口的 IPv4 流量转发到 8080 端口(需要在区域上激活 masquerade
```
sudo firewall-cmd --zone=public --add-rich-rule 'rule family=ipv4 forward-port port=80 protocol=tcp to-port=8080 to-addr=172.31.4.2'
```
列出你目前的丰富规则:
```
sudo firewall-cmd --list-rich-rules
```
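要删除某条丰富规则,把 `--add-rich-rule` 换成 `--remove-rich-rule`,并带上与添加时完全相同的规则文本,例如(沿用上面的第一个例子):
```
sudo firewall-cmd --zone=public --remove-rich-rule 'rule family="ipv4" source address=192.168.0.14 accept'
```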
### iptables 的直接接口
对于最高级的使用,或对于 iptables 专家FirewallD 提供了一个直接Direct接口允许你给它传递原始 iptables 命令。 直接接口规则不是持久的,除非使用 `--permanent`
要查看添加到 FirewallD 的所有自定义链或规则:
```
firewall-cmd --direct --get-all-chains
firewall-cmd --direct --get-all-rules
```
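作为参考,用直接接口添加一条规则的形式大致如下(仅为示意,其中的 9000 端口只是随便举的例子,表、链和匹配条件请按需调整):
```
sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 9000 -j ACCEPT
```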
讨论 iptables 的具体语法已经超出了这篇文章的范围。如果你想学习更多,你可以查看我们的 [iptables 指南][6]。
### 更多信息
你可以查阅以下资源以获取有关此主题的更多信息。虽然我们希望我们提供的是有效的,但是请注意,我们不能保证外部材料的准确性或及时性。
* [FirewallD 官方网站][1]
* [RHEL 7 安全指南FirewallD 简介][2]
* [Fedora WikiFirewallD][3]
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/firewalls/introduction-to-firewalld-on-centos
作者:[Linode][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linode.com/docs/security/firewalls/introduction-to-firewalld-on-centos
[1]:http://www.firewalld.org/
[2]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html#sec-Introduction_to_firewalld
[3]:https://fedoraproject.org/wiki/FirewallD
[4]:http://www.firewalld.org/
[5]:https://jpopelka.fedorapeople.org/firewalld/doc/firewalld.richlanguage.html
[6]:https://www.linode.com/docs/networking/firewalls/control-network-traffic-with-iptables

View File

@ -1,13 +1,13 @@
如何在 Ubuntu 16.10 的 Unity 8 上使用之前的 Xorg 程序
如何在 Ubuntu 16.10 的 Unity 8 上运行老式 Xorg 程序
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-feature-image.jpg "How to Use Old Xorg Apps in Unity 8 on Ubuntu 16.10s")
随着 Ubuntu 16.10 的发布Unity 8 吸引到了比平时更多的目光。这是因为这个大家最爱的 Linux 发行版的最新版本进行着一项桌面显示实验。桌面发行版是人们最熟悉的 Unity 环境,但有一点点不同。它不再使用 X11 图形技术Ubuntu的开发者选择了另一种截然不同的方式。
随着 Ubuntu 16.10 的发布Unity 8 越来越吸引到了更多人的关注。这是因为在这个大家最爱的 Linux 发行版的最新版本中可以体验其带有的试验性桌面。桌面发行版是人们最熟悉的 Unity 环境,但有一点点不同。它不再使用 X11 图形技术Ubuntu 的开发者选择了另一种截然不同的方式。
原来Unity 8 用的是 Mir这是 Ubuntu 对 Linux 上更好显示服务的号召所做出的回答。这项技术已经 Ubuntu phone 和平板上大量使用,但是这次新版是我们在桌面环境上第一次见到 Mir 。
原来Unity 8 用的是 Mir这是 Ubuntu 为了在 Linux 上提供显示服务而做出的努力。这项技术已经在 Ubuntu phone 和平板上大量使用,但是这次新版是我们在桌面环境上第一次见到 Mir 。
这项技术相当新颖,结果是没多少 Linux 程序能运行在它之上。不是所有,那也是大部分的程序被设计在 Xorg 和 X11 之上运行。如果你一直尝试在 Unity 8 上运行这些程序,当你了解到在 Unity 8上确实有可能运行之前的 Xorg 程序时,你会很开心的。接下来是如何做!
这项技术相当新颖,结果是没多少 Linux 程序能运行在它之上。不是所有,那也是大部分的程序设计在 Xorg 和 X11 之上运行。如果你想要试试在 Unity 8 上运行这些程序,你肯定会为在 Unity 8上确实能够运行之前的 Xorg 程序而高兴。接下来是如何做!
### 登录进 Unity 8
@ -19,7 +19,7 @@ Unity 8 在 Ubuntu 16.10 上是一个可选会话。在使用之前只须牢记
### 安装 Libertine
Xorg 程序(例如 Firefox 等)确实能在 Unity 8 上使用,在使用之前需要一点小调整。在 Mir 桌面上用终端打开 Libertine ,在 scopes 窗口中点击终端图标就能完成。一旦打开,输入你的密码。接下来,输入以下的命令:
Xorg 程序(例如 Firefox 等)确实能在 Unity 8 上使用,在使用之前需要一点小调整。在 Mir 桌面上用终端打开 Libertine ,在 Scopes 窗口中点击终端图标就能完成。一旦打开,输入你的密码。接下来,输入以下的命令:
![unity8-installing-libertine-in-terminal](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-installing-libertine-in-terminal.jpg "unity8-installing-libertine-in-terminal")
@ -27,15 +27,15 @@ Xorg 程序(例如 Firefox 等)确实能在 Unity 8 上使用,在使用之
sudo apt install libertine-tools libertine-scope libertine
```
当这些程序完成安装后,点击并拖动 scopes 窗口以刷新内容。然后在面板上点击来启动libertine。
当这些程序完成安装后,点击并拖动 Scopes 窗口以刷新内容。然后,在面板上点击来启动 libertine。
### 新建 Xorg 容器
打开 Libertine到时间来新建一些容器了。这些容器很特别,因为他们能让基于 X11 的 Linux 程序在 Mir/Unity 8 桌面上的容器之中运行。另外点击“i386 multiarch support"复选框来获得 32 位支持。否则,什么都不要动(或者输入名字和密码),点击”OK”。
打开 Libertine可以新建一些(应用)容器了。这些容器很特别,因为它们能让基于 X11 的 Linux 程序在 Mir/Unity 8 桌面上的容器之中运行。另外,如果需要支持 32 位应用勾选“i386 multiarch support”复选框。否则什么都不要动或者输入名字和密码点击“OK”。
![unity8-libertine-create-new-container](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-libertine-create-new-container.jpg "unity8-libertine-create-new-container")
在这之后,这个 Xorg 容器就准备好以使用了。在 Libertine 找到它并启动。删除也很容易,右键点击容器,选择“删除”选项。
在这之后,这个 Xorg 容器就准备好,可以使用了。在 Libertine 找到它并启动。删除也很容易,右键点击容器,选择“删除”选项。
**注意**:每一个 Xorg 容器有 500 MB的最大内存限制。所以多个容器是有必要的。
@ -43,23 +43,21 @@ sudo apt install libertine-tools libertine-scope libertine
![unity8-libertine-install-software](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-libertine-install-software.jpg "unity8-libertine-install-software")
两天内在 Libertine 容器中安装好软件。第一步允许用户启动容器后选择“输入包名或者 Debian 文件”,这意味着用户可以在软件中心或者终端找到一个软件的名字,然后输入 Libertine 来安装,也可以指定特定的 DEB 文件来安装,也可以在Libertine LXC 容器中直接搜索安装包。
在 Libertine 容器中安装软件有两种方法。第一种是启动容器后选择“输入包名或者 Debian 文件”,这意味着用户可以在软件中心或者终端找到一个软件的名字,然后在 Libertine 中输入它来安装,也可以指定特定的 DEB 文件来安装。第二种是直接在 Libertine LXC 容器中搜索安装包。
**注意**Unity 8 非常新,一些程序或许不能在 Libertine 里加载或者完全安装。
### 结论
Unity 8展现了不少的新特性它现代、时髦而且比之前任何一个 Unity 迭代版本都快。唯一限制它的就是使用率。事实是大部分用户更乐意选择实用的应用程序,而不是一个别致新颖的桌面环境。某种程度上来说,使用 Libertine 能解决这个问题但它不会永久有效。早晚有一天Canonical 将有必要自行引进程序或者向社区求助来彻底解决这个问题。
Unity 8 展现了不少的新特性,它现代、时髦,而且比之前任何一个 Unity 迭代版本都快。唯一限制它的就是使用率。事实是大部分用户更乐意选择实用的应用程序,而不是一个别致新颖的桌面环境。某种程度上来说,使用 Libertine 能解决这个问题但它不会永久有效。早晚有一天Canonical 都需要自行移植这些程序或者向社区求助来彻底解决这个问题。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/use-old-xorg-apps-unity-8/
作者:[Derrik Diener][a]
译者:[ypingcn](https://github.com/ypingcn)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,50 @@
使用 Inkscape添加颜色
=========
![inkscape-addingcolour](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-addingcolour-945x400.png)
在我们先前的 Inkscape 文章中,[我们介绍了 Inkscape 的基础][2] - 安装,以及如何创建基本形状及操作它们。我们还介绍了使用 Palette 更改 inkscape 对象的颜色。 虽然 Palette 对于从预定义列表快速更改对象颜色非常有用,但大多数情况下,你需要更好地控制对象的颜色。这时我们使用 Inkscape 中最重要的对话框之一 - <ruby>填充和轮廓<rt>Fill and Stroke</rt></ruby> 对话框。
**关于文章中的动画的说明:**动画中的一些颜色看起来有条纹。这只是动画创建导致的。当你在 Inkscape 尝试时,你会看到很好的平滑渐变的颜色。
### 使用 Fill/Stroke 对话框
要在 Inkscape 中打开 “Fill and Stroke” 对话框,请从主菜单中选择 `Object`>`Fill and Stroke`。打开后,此对话框中的三个选项卡允许你检查和更改当前选定对象的填充颜色、描边颜色和描边样式。
![open-fillstroke](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/open-fillstroke.gif)
在 Inkscape 中Fill 用来给予对象主体颜色。对象的轮廓是你的对象的可选择外框,可在<ruby>轮廓样式<rt>Stroke style</rt></ruby>选项卡中进行配置,它允许您更改轮廓的粗细,创建虚线轮廓或为轮廓添加圆角。 在下面的动画中,我会改变星形的填充颜色,然后改变轮廓颜色,并调整轮廓的粗细:
![using-fillstroke](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/using-fillstroke.gif)
### 添加并编辑渐变效果
对象的填充(或者轮廓)也可以是渐变的。要从 “Fill and Stroke” 对话框快速设置渐变填充,请先选择 “Fill” 选项卡,然后选择<ruby>线性渐变<rt>linear gradient </rt></ruby> 选项:
![create-gradient](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/create-gradient.gif)
要进一步编辑我们的渐变,我们需要使用专门的<ruby>渐变工具<rt>Gradient Tool</rt></ruby>。 从工具栏中选择“Gradient Tool”会有一些渐变编辑锚点出现在你选择的形状上。 **移动锚点**将改变渐变的位置。 如果你**单击一个锚点**您还可以在“Fill and Stroke”对话框中更改该锚点的颜色。 要**在渐变中添加新的锚点**,请双击连接锚点的线,然后会出现一个新的锚点。
![editing-gradient](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/editing-gradient.gif)
* * *
这篇文章介绍了在 Inkscape 图纸中添加一些颜色和渐变的基础知识。 **“Fill and Stroke”** 对话框还有许多其他选项可供探索,如图案填充、不同的渐变样式和许多不同的轮廓样式。另外,查看**<ruby>工具控制栏<rt>Tools control bar</rt></ruby>** 的 **Gradient Tool** 中的其他选项,看看如何以不同的方式调整渐变。
-----------------------
作者简介Ryan 是一名 Fedora 设计师。他使用 Fedora Workstation 作为他的主要桌面,还有来自 Libre Graphics 世界的最好的工具,尤其是矢量图形编辑器 Inkscape。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/inkscape-adding-colour/
作者:[Ryan Lerch][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://ryanlerch.id.fedoraproject.org/
[1]:https://fedoramagazine.org/inkscape-adding-colour/
[2]:https://linux.cn/article-8079-1.html

View File

@ -1,6 +1,7 @@
### 使用 Fedora 和 Inkscape 制作一张简单的壁纸
使用 Fedora 和 Inkscape 制作一张简单的壁纸
================
![inkscape-wallpaper](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-wallpaper-945x400.png)
![inkscape-wallpaper](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-wallpaper-945x400.png)
在先前的两篇 Inkscape 的文章中,我们已经[介绍了 Inkscape 的基本使用、创建对象][18]以及[一些基本操作和如何修改颜色。][17]
@ -14,7 +15,7 @@
![Screenshot from 2016-09-07 08-37-01](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-08-37-01.png)
][16]
对于这张壁纸而言,我们会将尺寸改为**1024px x 768px**。要改变文档的尺寸,进入`File` > `Document Properties`。在<ruby>文档属性<rt>Document Properties</rt></ruby>对话框中<ruby>自定义文档大小<rt>Custom Size</rt></ruby>区域中输入宽度为 1024px高度为 768px
对于这张壁纸而言,我们会将尺寸改为**1024px x 768px**。要改变文档的尺寸,进入`File` > `Document Properties...`。在<ruby>文档属性<rt>Document Properties</rt></ruby>对话框中<ruby>自定义文档大小<rt>Custom Size</rt></ruby>区域中输入宽度为 `1024`,高度为 `768` ,单位是 `px`
[
![Screenshot from 2016-09-07 09-00-00](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-00-00.png)
@ -34,13 +35,13 @@
![rect](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/rect.png)
][13]
接着在矩形中添加一个<ruby>渐变填充<rt>Gradient Fill</rt></ruby>[如果你需要复习添加渐变,请阅读先前添加色彩的文章][12]
接着在矩形中添加一个<ruby>渐变填充<rt>Gradient Fill</rt></ruby>。如果你需要复习添加渐变,请阅读先前添加色彩的[那篇文章][12]
[
![Screenshot from 2016-09-07 09-41-13](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-41-13.png)
][11]
你的矩形可能也设置了轮廓颜色。 使用<ruby>填充和轮廓<rt> Fill and Stroke</rt></ruby>对话框将轮廓设置为 **none**
你的矩形也可以设置轮廓颜色。 使用<ruby>填充和轮廓<rt> Fill and Stroke</rt></ruby>对话框将轮廓设置为 **none**
[
![Screenshot from 2016-09-07 09-44-15](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-44-15.png)
@ -48,19 +49,19 @@
### 绘制图样
接下来我们画一个三角形,使用 3个 顶点的星型/多边形工具。你可以**按住 CTRL** 键给三角形一个角度并使之对称。
接下来我们画一个三角形,使用 3 个顶点的星型/多边形工具。你可以按住 `CTRL` 键给三角形一个角度并使之对称。
[
![Screenshot from 2016-09-07 09-52-38](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-52-38.png)
][9]
选中三角形并按下 **CTRL+D** 来复制它(复制的图形会覆盖在原来图形的上面),**因此在复制后确保将它移动到别处。**
选中三角形并按下 `CTRL+D` 来复制它(复制的图形会覆盖在原来图形的上面),**因此在复制后确保将它移动到别处。**
[
![Screenshot from 2016-09-07 10-44-01](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-10-44-01.png)
][8]
如图选中一个三角形,进入**OBJECT > FLIP-HORIZONTAL水平翻转**
如图选中一个三角形,进入`Object` > `FLIP-HORIZONTAL`(水平翻转)
[
![Screenshot from 2016-09-07 09-57-23](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-57-23.png)
@ -82,7 +83,7 @@
### 导出背景
最后,我们需要将我们的文档导出为 PNG 文件。点击 **FILE > EXPORT PNG**,打开导出对话框,选择文件位置和名字,确保选中的是 Drawing 标签,并点击 **EXPORT**
最后,我们需要将我们的文档导出为 PNG 文件。点击 `File` > `EXPORT PNG`,打开导出对话框,选择文件位置和名字,确保选中的是 `Drawing` 标签,并点击 `EXPORT`
[
![Screenshot from 2016-09-07 11-07-05](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-11-07-05-1.png)
@ -100,9 +101,7 @@
via: https://fedoramagazine.org/inkscape-design-imagination/
作者:[a2batic][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -119,11 +118,11 @@ via: https://fedoramagazine.org/inkscape-design-imagination/
[9]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-52-38.png
[10]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-44-15.png
[11]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-41-13.png
[12]:https://fedoramagazine.org/inkscape-adding-colour/
[12]:https://linux.cn/article-8084-1.html
[13]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/rect.png
[14]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-01-03.png
[15]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-00-00.png
[16]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-08-37-01.png
[17]:https://fedoramagazine.org/inkscape-adding-colour/
[18]:https://fedoramagazine.org/getting-started-inkscape-fedora/
[17]:https://linux.cn/article-8084-1.html
[18]:https://linux.cn/article-8079-1.html
[19]:https://fedoramagazine.org/inkscape-design-imagination/

View File

@ -0,0 +1,120 @@
从源代码编译 Vim 8.0
========
从源代码编译 Vim 实际上并不那么困难。下面是你所要做的:
1、首先安装包括 Git 在内的所有必备的库。对于一个 Debian 类的 Linux 发行版,例如 Ubuntu命令如下
```
sudo apt-get install libncurses5-dev libgnome2-dev libgnomeui-dev \
libgtk2.0-dev libatk1.0-dev libbonoboui2-dev \
libcairo2-dev libx11-dev libxpm-dev libxt-dev python-dev \
python3-dev ruby-dev lua5.1 lua5.1-dev libperl-dev git
```
在 Ubuntu 16.04 上lua 开发包的名称是 `liblua5.1-dev` 而非 `lua5.1-dev`。
如果你知道你将使用哪种语言可随意删去你不需要的包。例如Python2 `python-dev` 或者是 Ruby `ruby-dev`。这一原则适用于本文的大部分内容。
对于 Fedora 20将是以下命令:
```
sudo yum install -y ruby ruby-devel lua lua-devel luajit \
luajit-devel ctags git python python-devel \
python3 python3-devel tcl-devel \
perl perl-devel perl-ExtUtils-ParseXS \
perl-ExtUtils-XSpp perl-ExtUtils-CBuilder \
perl-ExtUtils-Embed
```
在 Fedora 20 上需要这一步来纠正安装 XSubPP 时出现的问题:
```
# 从 /usr/bin 到 perl 目录做个 xsubpp (perl) 的符号链接
sudo ln -s /usr/bin/xsubpp /usr/share/perl5/ExtUtils/xsubpp
```
2、 如果你已经安装了 vim删掉它。
```
sudo apt-get remove vim vim-runtime gvim
```
如果是 Ubuntu 12.04.2,你或许也需要同时删除下面这些软件包:
```
sudo apt-get remove vim-tiny vim-common vim-gui-common vim-nox
```
3、 一旦上述内容都被安装好之后,获取 vim 源代码很容易。
注意:如果你使用 Python你的配置目录名或许带有特定于具体机器的后缀例如 `config-3.5m-x86_64-linux-gnu`)。检查 `/usr/lib/python[2/3/3.5]` 目录来找到你的 Python 配置目录,据此更改 `python-config-dir` 和/或 `python3-config-dir` 的参数。
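如果不确定配置目录的具体名称,可以先用下面的命令查看一下(仅为示例,目录名会随 Python 版本和架构不同而变化):

```
ls -d /usr/lib/python2.7/config* /usr/lib/python3*/config-* 2>/dev/null
```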
添加/删除下面的编译参数以适合您的设置。例如,如果您不打算写任何 Lua 脚本,您可以删去 `enable-luainterp`
同时,如果你使用的不是 vim 8.0,请确认下面 `VIMRUNTIMEDIR` 参数设置正确(例如,如果使用 vim 8.0a, 就用 `/usr/share/vim/vim80a`)。记住,一些 vim 安装是直接安装在 `/usr/share/vim` 下的;调整好参数以适应你的系统:
```
cd ~
git clone https://github.com/vim/vim.git
cd vim
./configure --with-features=huge \
--enable-multibyte \
--enable-rubyinterp=yes \
--enable-pythoninterp=yes \
--with-python-config-dir=/usr/lib/python2.7/config \
--enable-python3interp=yes \
--with-python3-config-dir=/usr/lib/python3.5/config \
--enable-perlinterp=yes \
--enable-luainterp=yes \
--enable-gui=gtk2 --enable-cscope --prefix=/usr
make VIMRUNTIMEDIR=/usr/share/vim/vim80
```
在 Ubuntu 16.04 上,由于同时开启了 Python2 和 Python3Python 支持将不工作。 阅读 [chirinosky 的回答](http://stackoverflow.com/questions/23023783/vim-compiled-with-python-support-but-cant-see-sys-version) 以获取变通的处理方法。
如果你想将来轻松卸载 vim可以使用 `checkinstall` 来安装 。
```
sudo apt-get install checkinstall
cd ~/vim
sudo checkinstall
```
否则,可以使用 `make` 来安装。
```
cd ~/vim
sudo make install
```
要让 vim 成为你默认的编辑器,请使用 `update-alternatives`
```
sudo update-alternatives --install /usr/bin/editor editor /usr/bin/vim 1
sudo update-alternatives --set editor /usr/bin/vim
sudo update-alternatives --install /usr/bin/vi vi /usr/bin/vim 1
sudo update-alternatives --set vi /usr/bin/vim
```
4、 再检查下,通过查看 `vim --version` 输出来确认确实在运行新的 Vim 应用程序版本。
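例如,可以用下面的命令快速查看版本号,并确认你需要的语言支持已经编译进去(这里列出的特性名仅作示例,请按你启用的特性自行调整):

```
vim --version | head -n 2
vim --version | grep -o '+python3\|+ruby\|+lua'
```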
**如果你的 gvim 不工作(在 ubuntu 12.04.1 LTS 上),试着把 `--enable-gui=gtk2` 参数变为 `--enable-gui=gnome2`。**
如果你遇到问题,仔细检查在步骤 3 开始提到的,使用正确的 Python 配置目录配置 `configure`
这些 `configure``make` 命令假设你是一个 Debian 发行版Vim 的运行库文件目录放在 `/usr/share/vim/vim80/`,这不是 vim 的默认路径。 在 `configure` 命令中的 `--prefix=/usr` 也是如此。这些参数或许对一个不是基于 Debian 的 Linux 发行版来说是有所不同的,在这种情况下,试着移除 `configure` 命令中的 `--prefix` 变量和 `make` 命令中的 `VIMRUNTIMEDIR` (换句话说,使用这些参数的默认值)。
如果你遇到麻烦, 这里是一些[其它编译 Vim 的有用的信息](http://vim.wikia.com/wiki/Building_Vim)。
--------------------------------------------------------------------------------
via: https://github.com/Valloric/YouCompleteMe/wiki/Building-Vim-from-source
作者:[Val Markovic][a]
译者:[zky001](https://github.com/zky001)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://github.com/Valloric

View File

@ -0,0 +1,184 @@
 Linux 中管理设备
=============
探索 `/dev` 目录可以让您知道如何直接访问到 Linux 中的设备。
![Managing devices in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/OSDC_Penguin_Image_520x292_12324207_0714_mm_v1a.png?itok=WfAkwbFy "Managing devices in Linux")
*照片提供Opensource.com*
Linux 目录结构中有很多有趣的功能,这次我会讲到 `/dev` 目录的一些迷人之处。在继续阅读这篇文章之前,建议你先看看我前面的文章:[Linux 文件系统][9]和[一切皆为文件][8],这两篇文章介绍了一些有趣的 Linux 文件系统概念。请先看看它们 - 我会等你看完再回来。
……
太好了 !欢迎回来。现在我们可以继续更详尽地探讨 `/dev` 目录。
### 设备文件
设备文件也称为[设备特定文件][4]。设备文件用来为操作系统和用户提供它们代表的设备接口。所有的 Linux 设备文件均位于 `/dev` 目录下,是根 (`/`) 文件系统的一个组成部分,因为这些设备文件在操作系统启动过程中必须可以使用。
关于这些设备文件,要记住的一件重要的事情,就是它们大多不是设备驱动程序。更准确地描述来说,它们是设备驱动程序的门户。数据从应用程序或操作系统传递到设备文件,然后设备文件将它传递给设备驱动程序,驱动程序再将它发给物理设备。反向的数据通道也可以用,从物理设备通过设备驱动程序,再到设备文件,最后到达应用程序或其他设备。
让我们以一个典型命令的数据流程来直观地看看。
![dboth-dev-dir_0.png](https://opensource.com/sites/default/files/images/life-uploads/dboth-dev-dir_0.png)
*图 1一个典型命令的简单数据流程。*
在上面的图 1 中,显示了一个简单命令的简化数据流程。在一个 GUI 终端仿真器(例如 Konsole 或 xterm中发出 `cat /etc/resolv.conf` 命令,它会从磁盘中读取 `resolv.conf` 文件,磁盘设备驱动程序负责处理设备相关的具体工作,例如在硬盘驱动器上定位文件并读取它。数据先通过磁盘的设备文件传回该命令,再经由 6 号伪终端的设备文件和设备驱动,最后显示在终端会话中。
当然,`cat` 命令的输出也可以重定向到一个文件,比如 `cat /etc/resolv.conf > /etc/resolv.bak`,这样就创建了该文件的一个备份。在这种情况下,图 1 左侧的数据流保持不变,而右侧的数据流则会通过 `/dev/sda2` 设备文件、硬盘设备驱动程序,最后写入硬盘驱动器本身。
这些设备文件使得使用标准流STD/IO和重定向来访问 Linux 或 Unix 计算机上的任何一个设备都非常容易。只需将数据流定向到设备文件,即可将数据发送到该设备。
### 设备文件类别
设备文件至少可以按两种方式划分。第一种也是最常用的分类是根据与设备相关联的数据流进行划分。比如tty (teletype) 和串行设备被认为是基于字符的,因为数据流的传送和处理是以一次一个字符或字节进行的;而块类型设备(如硬盘驱动器)是以块为单位传输数据,通常为 256 个字节的倍数。
您可以在终端上以一个非 root 用户,改变当前工作目录(`PWD`)到 `/dev` ,并显示长目录列表。 这将显示设备文件列表、文件权限及其主、次设备号。 例如,下面的设备文件只是我的 Fedora 24 工作站上 `/dev` 目录中的几个文件。 它们表示磁盘和 tty 设备类型。 注意输出中每行的最左边的字符。 `b` 代表是块类型设备,`c` 代表字符设备。
```
brw-rw----   1 root disk        8,   0 Nov  7 07:06 sda
brw-rw---- 1 root disk        8,   1 Nov  7 07:06 sda1
brw-rw---- 1 root disk        8,  16 Nov  7 07:06 sdb
brw-rw---- 1 root disk        8,  17 Nov  7 07:06 sdb1
brw-rw---- 1 root disk        8,  18 Nov  7 07:06 sdb2
crw--w----  1 root tty         4,   0 Nov  7 07:06 tty0
crw--w---- 1 root tty         4,   1 Nov  7 07:07 tty1
crw--w---- 1 root tty         4,  10 Nov  7 07:06 tty10
crw--w---- 1 root tty         4,  11 Nov  7 07:06 tty11
```
识别设备文件更详细和更明确的方法是使用设备主要以及次要号。 磁盘设备主设备号为 8将它们指定为 SCSI 块设备。请注意,所有 PATA 和 SATA 硬盘驱动器都由 SCSI 子系统管理,因为旧的 ATA 子系统多年前就由于代码质量糟糕而被认为不可维护。造成的结果就是,以前被称为 “hd[a-z]” 的硬盘驱动器现在被称为 “sd[a-z]”。
你大概可以从上面的示例中推出磁盘驱动器次设备号的规律。次设备号 0、16、32 等等,直到 240代表的是整个磁盘。所以主/次设备号 8/16 表示整个磁盘 `/dev/sdb`8/17 是其第一个分区的设备文件 `/dev/sdb1`,而 8/34 则代表 `/dev/sdc2`。
在上面列表中的 tty 设备文件编号更简单一些,从 tty0 到 tty63 。
Kernel.org 上的 [Linux 下的已分配设备][5]文件是设备类型和主次编号分配的正式注册表。它可以帮助您了解所有当前定义的设备的主要/次要号码。
### 趣味设备文件
让我们花几分钟时间,执行几个有趣的实验,演示 Linux 设备文件的强大和灵活性。 大多数 Linux 发行版都有 1 到 7 个虚拟控制台,可用于使用 shell 接口登录到本地控制台会话。 可以使用 `Ctrl-Alt-F1`(控制台 1`Ctrl-Alt-F2`(控制台 2等键盘组合键来访问。
请按 `Ctrl-Alt-F2` 切换到控制台 2。在某些发行版登录显示的信息包括了与此控制台关联的 tty 设备但大多不包括。它应该是 tty2因为你是在控制台 2 中。
以非 root 用户身份登录。 然后你可以使用 `who am i` 命令 — 是的,就是这个命令,带空格 — 来确定哪个 tty 设备连接到这个控制台。
在我们实际执行此实验之前,看看 `/dev` 中的 tty2  tty3 的设备列表。
```
ls -l /dev/tty[23]
```
有大量的 tty 设备,但我们不关心它们中的大多数,只注意 tty2 和 tty3 这两个设备。 作为设备文件,它们没什么特别之处,都只是字符类型设备。我们将使用这些设备进行此实验。 tty2 设备连接到虚拟控制台 2tty3 设备连接到虚拟控制台 3。
`Ctrl-Alt-F3` 切换到控制台 3。再次以同一非 root 用户身份登录。 现在在控制台 3 上输入以下命令。
```
echo "Hello world" > /dev/tty2
```
按 `Ctrl-Alt-F2` 键返回到控制台 2。字符串 “Hello world”没有引号将显示在控制台 2 上。
该实验也可以使用 GUI 桌面上的终端仿真器来执行。 桌面上的终端会话使用 `/dev` 中的伪终端设备,如 `/dev/pts/1`。 使用 Konsole 或 Xterm 打开两个终端会话。 确定它们连接到哪些伪终端,并使用一个向另一个发送消息。
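下面是一个简单的做法(其中的 `/dev/pts/1` 只是假设的例子,请换成 `tty` 命令实际显示的设备):

```
# 在第一个终端会话中查看它对应的伪终端设备
tty
# 假设输出为 /dev/pts/1那么在第二个终端会话中执行
echo "Hello world" > /dev/pts/1
```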
现在继续实验,使用 `cat` 命令,试试在不同的终端上显示 `/etc/fstab` 文件。
另一个有趣的实验是使用 `cat` 命令将文件直接打印到打印机。 假设您的打印机设备是 `/dev/usb/lp0`,并且您的打印机可以直接打印 PDF 文件,以下命令将在您的打印机上打印 `test.pdf` 文件。
```
cat test.pdf > /dev/usb/lp0
```
`/dev` 目录还包含一些非常有趣的设备文件,它们是某些硬件的访问入口,而这些硬件通常不会被当作硬盘驱动器或显示器那样的“设备”。 例如,系统内存RAM通常就不被认为是一种“设备”,而 `/dev/mem` 则是一个可以直接访问内存的入口。 下面的例子会产生一些有趣的结果。
```
dd if=/dev/mem bs=2048 count=100
```
与简单地使用 `cat` 命令转储整个系统内存相比,上面的 `dd` 命令提供了更多的控制。 它可以指定从 `/dev/mem` 中读取多少数据,还允许指定从内存的哪个位置开始读取。虽然读取了一些内存,但内核返回了以下错误,可以在 `/var/log/messages` 中看到。
```
Nov 14 14:37:31 david kernel: usercopy: kernel memory exposure attempt detected from ffff9f78c0010000 (dma-kmalloc-512) (2048 bytes)
```
这个错误意味着内核正在通过保护属于其他进程的内存来完成它的工作,这正是它应该工作的方式。 所以,虽然可以使用 `/dev/mem` 来显示存储在 RAM 内存中的数据,但是访问的大多数内存空间是受保护的并且会导致错误。 只可以访问由内核内存管理器分配给运行 `dd` 命令的 BASH shell 的虚拟内存,而不会导致错误。 抱歉,但你不能窥视不属于你的内存,除非你发现了一个可利用的漏洞。
`/dev` 中还有一些非常有趣的设备文件。 设备文件 `null``zero``random` 和 `urandom` 不与任何物理设备相关联。
例如,空设备 `/dev/null` 可以用作来自 shell 命令或程序的输出重定向的目标,以便它们不显示在终端上。 我经常在我的 BASH 脚本中使用这个,以防止向用户展示可能会让他们感到困惑的输出。 `/dev/null` 设备可用于产生一个空字符串。 使用如下所示的 `dd` 命令查看 `/dev/null` 设备文件的一些输出。
```
# dd if=/dev/null bs=512 count=500 | od -c
0+0 records in
0+0 records out
0 bytes copied, 1.5885e-05 s, 0.0 kB/s
0000000
```
注意,因为空字符就是“什么都没有”,所以确实看不到任何可见的输出。 留意一下其中的字节数。
`/dev/random``/dev/urandom` 设备也很有趣。 正如它们的名字所暗示的,它们都产生随机输出,不仅仅是数字,而是任何字节组合。 `/dev/urandom` 设备产生的是**确定性**的随机输出,并且非常快。 这意味着输出由算法确定,并使用种子字符串作为起点。 结果,如果原始种子是已知的,则黑客可以再现输出,尽管非常困难,但这是有可能的。 使用命令 `cat /dev/urandom` 可以查看典型的输出,使用 `Ctrl-c` 退出。
`/dev/random` 设备文件生成**非确定性**的随机输出,但它产生的输出更慢一些。 该输出不是由依赖于先前数字的算法确定的,而是由击键动作和鼠标移动而产生的。 这种方法使得复制特定系列的随机数要困难得多。使用 `cat` 命令去查看一些来自 `/dev/random` 设备文件输出。尝试移动鼠标以查看它如何影响输出。
正如其名字所暗示的,`/dev/zero` 设备文件会产生无止境的零作为输出。 注意这些是八进制的零,而不是 ASCII 字符的零(`0`)。 使用如下所示的 `dd` 命令查看 `/dev/zero` 设备文件中的一些输出:
```
# dd if=/dev/zero bs=512 count=500 | od -c
0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
500+0 records in
500+0 records out
256000 bytes (256 kB, 250 KiB) copied, 0.00126996 s, 202 MB/s
0764000
```
请注意,此命令的字节数不为零。
### 创建设备文件
在过去,`/dev` 中的设备文件都是在安装系统时创建的,导致这个目录中几乎包含了所有的设备文件,尽管大多数文件永远不会用到。 在不常见的情况下,例如需要新的设备文件,或者设备文件被意外删除后需要重新创建时,可以使用 `mknod` 程序手动创建设备文件,前提是你必须知道设备的主设备号和次设备号。
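下面是一个演示用的小例子,用的是大家熟知的 `/dev/null`(字符设备,主设备号 1、次设备号 3实际重建设备文件时请换成对应设备的类型和编号

```
# 重新创建字符设备 /dev/null主/次设备号为 1/3),并恢复常见的权限
sudo mknod /dev/null c 1 3
sudo chmod 666 /dev/null
```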
CentOS 和 RHEL 6、7以及 Fedora 的所有版本——可以追溯到至少 Fedora 15使用较新的创建设备文件的方法。 所有设备文件都是在引导时创建的。 这是因为 udev 设备管理器在设备添加和删除发生时会进行检测。这可实现在主机启动和运行时的真正的动态即插即用功能。 它还在引导时执行相同的任务,通过在引导过程的很早的时期检测系统上安装的所有设备。 [Linux.com][6] 上有一篇很棒的对 [udev 的描述][7]。
回到 `/dev` 中的文件列表,注意文件的日期和时间。 所有文件都是在上次启动时创建的。 您可以使用 `uptime` 或者 `last` 命令来验证这一点。在上面我的设备列表中,所有这些文件都是在 11 月 7 日上午 7:06 创建的,这是我最后一次启动系统。
当然, `mknod` 命令仍然可用, 但新的 `MAKEDEV` (是的,所有字母大写,在我看来是违背 Linux 使用小写命令名的原则的) 命令提供了一个创建设备文件的更容易的界面。 在当前版本的 Fedora 或 CentOS 7 中,默认情况下不安装 `MAKEDEV` 命令;它安装在 CentOS 6。您可以使用 YUM 或 DNF 来安装 MAKEDEV 包。
### 结论
有趣的是,我很久没有创建一个设备文件的需要了。 然而,最近我遇到一个有趣的情况,其中一个我常使用的设备文件没有创建,我不得不创建它。 之后该设备再没出过问题。所以丢失设备文件的情况仍然可以发生,知道如何处理它可能很重要。
设备文件种类繁多,你遇到的设备文件我未必都涵盖到了。 更多的细节信息可以在下面引用的资源中找到。 关于这些文件的功能和工具,我希望我已经让你有了一些基本的了解,接下来你可以自己去探索更多。
资源
- [一切皆文件][1] David Both, Opensource.com
- [Linux 文件系统介绍][2] David Both, Opensource.com
- [文件系统层次结构][10] The Linux Documentation Project
- [设备文件][4] Wikipedia
- [Linux 下已分配设备][5] Kernel.org
--------------------------------------------------------------------------------
via: https://opensource.com/article/16/11/managing-devices-linux
作者:[David Both][a]
译者:[erlinux](http://www.itxdm.me)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/life/15/9/everything-is-a-file
[2]:https://opensource.com/life/16/10/introduction-linux-filesystems
[4]:https://en.wikipedia.org/wiki/Device_file
[5]:https://www.kernel.org/doc/Documentation/devices.txt
[6]:https://www.linux.com/
[7]:https://www.linux.com/news/udev-introduction-device-management-modern-linux-system
[8]:https://opensource.com/life/15/9/everything-is-a-file
[9]:https://opensource.com/life/16/10/introduction-linux-filesystems
[10]:http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/dev.html

View File

@ -1,104 +1,101 @@
如何在 Linux 下安装 Python IDE - PyCharm
如何在 Linux 下安装 PyCharm
============================================
![][7]
![](https://fthmb.tqn.com/ju1u-Ju56vYnXabPbsVRyopd72Q=/768x0/filters:no_upscale()/about/pycharmstart-57e2cb405f9b586c351a4cf7.png)
### 简介
Linux 经常被看成是一个远离外部世界,只有极客才会使用的操作系统,虽然这是一个误解,但事实上,如果你想开发软件,那么 Linux 系统能够为你提供一个很好的开发环境。
Linux 经常被看成是一个远离外部世界,只有极客才会使用的操作系统,但是这是不准确的,如果你想开发软件,那么 Linux 能够为你提供一个非常棒的开发环境。
刚开始学习编程的新手们经常会问这样一个问题:应该使用哪种语言?当涉及到 Linux 系统的时候,通常的选择是 C、C++、Python、Java、PHP、Perl 和 Ruby On Rails
刚开始学习编程的新手们经常会问这样一个问题:应该使用哪种语言?当涉及到 Linux 系统的时候,通常的选择是 C、C++、Python、Java、PHP、Perl 和 Ruby On Rails
Linux 系统的许多核心程序都是用 C 语言写的,但是如果离开 Linux 系统的世界, C 语言不再像其他语言比如 Java 和 Python 那么常用。
Linux 系统的许多核心程序都是用 C 语言写的,但是如果离开 Linux 系统的世界, C 语言就不如其它语言比如 Java 和 Python 那么常用。
对于学习编程的人来说, Python 和 Java 都是不错的选择,因为它们是跨平台的,因此,你在 Linux 系统上写的程序在 Windows 系统和 Macs 系统上也能够很好的工作。
对于学习编程的人来说, Python 和 Java 都是不错的选择,因为它们是跨平台的,因此,你在 Linux 系统上写的程序在 Windows 系统和 Mac 系统上也能够很好的工作。
虽然你可以使用任何编辑器来开发 Python 程序但是如果你使用一个同时包含编辑器和调试器的优秀集成开发环境IDE来进行开发那么你的编程生涯将会变得更加轻松。
虽然你可以使用任何编辑器来开发 Python 程序,但是如果你使用一个同时包含编辑器和调试器的优秀集成开发环境IDE来进行开发那么你的编程生涯将会变得更加轻松。
PyCharm 是由 Jetbrains 公司开发的一个跨平台编辑器。如果你之前是在 Windows 环境下进行开发,那么你会立刻认出 Jetbrains 公司,它就是那个开发了 Resharper 的公司。 Resharper 是一个用于重构代码的优秀产品,它能够指出代码可能存在的问题以及自动添加声明:比如当你在使用一个类的时候它会自动为你导入。
PyCharm 是由 Jetbrains 公司开发的一个跨平台编辑器。如果你之前是在 Windows 环境下进行开发,那么你会立刻认出 Jetbrains 公司,它就是那个开发了 Resharper 的公司。 Resharper 是一个用于重构代码的优秀产品,它能够指出代码可能存在的问题,自动添加声明,比如当你在使用一个类的时候它会自动为你导入。
这篇文章将讨论如何在 Linux 系统上获取、安装和运行 PyCharm 。
### 如何获取 PyCharm
你可以通过访问[这儿][1]获取 PyCharm 。屏幕中央有一个很大的 'Download' 按钮
你可以通过访问[https://www.jetbrains.com/pycharm/][1]获取 PyCharm
你可以选择下载专业版或者社区版。如果你只是习惯于用 Python 编程那么推荐下载社区版
屏幕中央有一个很大的 'Download' 按钮
然而,如果你打算进行专业化的编程,那么专业版的一些优秀特性是不容忽视的。
你可以选择下载专业版或者社区版。如果你刚刚接触 Python 编程那么推荐下载社区版。然而,如果你打算发展到专业化的编程,那么专业版的一些优秀特性是不容忽视的。
### 如何安装 PyCharm
下载好的文件的名称可能 pycharm-professional-2016.2.3.tar.gz
下载好的文件的名称可能类似这种样子 pycharm-professional-2016.2.3.tar.gz
以 “tar.gz” 结尾的文件是被 [gzip][2] 工具压缩过的,并且用 [tar][3] 工具进行了归档从而保证文件夹结构在一个地方
以 “tar.gz” 结尾的文件是被 [gzip][2] 工具压缩过的,并且把文件夹用 [tar][3] 工具归档到了一起。你可以阅读关于[提取 tar.gz 文件][4]指南的更多信息
你可以阅读关于[提取 tar.gz 文件][4]指南的更多信息。
加快速度,为了解压文件,你需要做的是首先打开终端,然后通过下面的命令进入下载文件所在的文件夹:
加快节奏,为了解压文件,你需要做的是首先打开终端,然后通过下面的命令进入下载文件所在的文件夹:
```
cd ~/Downloads
```
```
cd ~/Downloads
```
现在,通过运行下面的命令找到你下载的文件的名字:
```
ls pycharm*
```
```
ls pycharm*
```
然后运行下面的命令解压文件:
```
tar -xvzf pycharm-professional-2016.2.3.tar.gz -C ~
```
```
tar -xvzf pycharm-professional-2016.2.3.tar.gz -C ~
```
记得把上面命令中的文件名替换成通过 ls 命令获知的 pycharm 文件名。(也就是你下载的文件的名字)
上面的命令将会把 PyCharm 软件安装在 home 目录中。
记得把上面命令中的文件名替换成通过 `ls` 命令获知的 pycharm 文件名。(也就是你下载的文件的名字)。上面的命令将会把 PyCharm 软件安装在 `home` 目录中。
### 如何运行 PyCharm
要运行 PyCharm 首先需要进入 home 目录:
要运行 PyCharm 首先需要进入 `home` 目录:
```
cd ~
```
```
cd ~
```
运行 ls 命令查找文件夹名:
运行 `ls` 命令查找文件夹名:
```
ls
```
```
ls
```
查找到文件名以后,运行下面的命令进入 PyCharm 目录:
```
cd pycharm-2016.2.3/bin
```
```
cd pycharm-2016.2.3/bin
```
最后,通过运行下面的命令来运行 PyCharm
```
sh pycharm.sh &
```
```
sh pycharm.sh &
```
如果你是在一个桌面环境比如 GNOME、KDE、Unity、Cinnamon 或者其他现代桌面上运行,那么你也可以通过针对桌面环境的菜单或者快捷方式来找到 PyCharm 。
如果你是在一个桌面环境比如 GNOME KDE Unity Cinnamon 或者其他现代桌面环境上运行,你也可以通过桌面环境的菜单或者快捷方式来找到 PyCharm 。
### 总结
现在, PyCharm 已经安装好了,你可以开始使用它来开发一个桌面应用、 web 应用和各种工具。
如果你想学习如何使用 Python 编程,那么这有很好的[学习资源][5]值得一看。里面的文章更多的是关于 Linux 学习,但也有一些资源比如 Pluralsight 和 Udemy 提供了关于 Python 学习的一些很好的教程。
如果你想学习如何使用 Python 编程,那么这有很好的[学习资源][5]值得一看。里面的文章更多的是关于 Linux 学习,但也有一些资源比如 Pluralsight 和 Udemy 提供了关于 Python 学习的一些很好的教程。
如果想了解 PyCharm 的所有可用特性,请点击[这儿][6]来查看。它覆盖了从创建项目到描述用户界面、调试以及代码重构的全部内容。
如果想了解 PyCharm 的更多特性,请点击[这儿][6]来查看。它覆盖了从创建项目到描述用户界面、调试以及代码重构的全部内容。
-----------------------------------------------------------------------------------------------------------
via: https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
作者:[ Gary Newell][a]
作者:[Gary Newell][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
校对:[oska874](https://github.com/oska874)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,26 +1,27 @@
### [在 Fedora 中使用 Inkscape][2]
Fedora 中使用 Inkscape 起步
=============
![inkscape-gettingstarted](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gettingstarted-945x400.png)
![inkscape-gettingstarted](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gettingstarted-945x400.png)
Inkscape 是一个流行的、功能齐全、免费和开源的矢量[图形编辑器][3],它已经在 Fedora 官方仓库中。它专门为[SVG格式][4]中创建矢量图形而定制。Inkscape 非常适合创建和操作图片和插图。它也适用于创建图表和模拟用户界面
Inkscape 是一个流行的、功能齐全、自由而开源的矢量[图形编辑器][3],它已经在 Fedora 官方仓库中。它特别适合创作 [SVG 格式][4]的矢量图形。Inkscape 非常适于创建和操作图片和插图,以及创建图表和用户界面设计
[
![cyberscoty-landscape-800px](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/cyberscoty-landscape-800px.png)
][5]
使用inkscape创建的[风车景色][1]的插图
*使用 inkscape 创建的[风车景色][1]的插图*
[官方网站的截图页][6]上有一些很好的例子说明Inkscape可以做些什么。Fedora杂志上的大多数精选图片也是使用 Inkscape 创建的,包括最近的精选图片:
[官方网站的截图页][6]上有一些很好的例子,说明 Inkscape 可以做些什么。<ruby>Fedora 杂志<rt>Fedora Magazine</rt></ruby>上的大多数精选图片也是使用 Inkscape 创建的,包括最近的精选图片:
[
![communty](https://cdn.fedoramagazine.org/wp-content/uploads/2016/09/communty.png)
][7]
最近使用 Inkscape 创建的 Fedora 杂志精选图片
*Fedora 杂志最近使用 Inkscape 创建的精选图片*
### 在 Fedora 安装 Inkscape
### 在 Fedora 安装 Inkscape
**Inkscape 已经[在 Fedora 官方仓库中了][8],因此可以非常简单地在 Fedora Workstation 使用 Software 这个程序安装它:**
**Inkscape 已经[在 Fedora 官方仓库中了][8],因此可以非常简单地在 Fedora Workstation 上使用 Software 这个应用来安装它:**
[
![inkscape-gnome-software](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gnome-software.png)
@ -32,9 +33,9 @@ Inkscape 是一个流行的、功能齐全、免费和开源的矢量[图形编
sudo dnf install inkscape
```
### (开始)深入 Inkscape
### (开始)深入 Inkscape
当第一次打开程序你会看到一个空白页面并且有一组不同的工具栏。对于初学者最重要的三个工具栏是Toolbar、Tools Control Bar、 Colour Palette
当第一次打开程序你会看到一个空白页面并且有一组不同的工具栏。对于初学者最重要的三个工具栏是Toolbar、Tools Control Bar、 Colour Palette(调色板)
[
![inkscape_window](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape_window.png)
@ -43,35 +44,35 @@ sudo dnf install inkscape
**Toolbar**提供了创建绘图的所有基本工具,包括以下工具:
* 矩形工具:用于绘制矩形和正方形
* 星/多边形(形状)工具
* 星/多边形(形状)工具
* 圆形工具:用于绘制椭圆和圆
* 文本工具:用于添加标签和其他文本
* 路径工具:用于创建或编辑更复杂或自定义的形状
* 选择工具:用于选择图形中的对象
**Colour Palette** 提供了一种快速方式来设置当前选定对象的颜色。 **Tools Control Bar** 提供了工具栏中当前选定工具的所有设置。每次选择新工具时Tools Control Bar 会变成该工具的设置:
**Colour Palette** 提供了一种设置当前选定对象的颜色的快速方式**Tools Control Bar** 提供了工具栏中当前选定工具的所有设置。每次选择新工具时Tools Control Bar 会变成该工具的相应设置:
[
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-toolscontrolbar.gif)
][11]
### 绘
### 绘图
接下来,让我们使用 Inkscape 绘制一个星星。 首先,从 **Toolbar** 中选择星形工具,**然后单击并拖动主绘图区域。**
接下来,让我们使用 Inkscape 绘制一个星星。 首先,从 **Toolbar** 中选择星形工具,**然后在主绘图区域上单击并拖动。**
你可能会注意到你的星看起来很像一个三角形。要更改它,请使用 “Tools Control Bar” 中的 “Corners” 选项,然后再添加几个点。 最后,当你完成后,当星星仍被选中时,从“调色板”中选择一种颜色来改变星星的颜色:
你可能会注意到你星看起来很像一个三角形。要更改它,请使用 **Tools Control Bar** 中的 **Corners** 选项,再添加几个点。 最后,当你完成后,在星星仍被选中的状态下,从 **Palette**(调色板)中选择一种颜色来改变星星的颜色:
[
![inkscape-drawastar](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-drawastar.gif)
][12]
接下来在Toolbar中实验一些其他形状工具如矩形工具螺旋工具和圆形工具。每个工具都设置下来创建一些独特的图形。
接下来,可以 Toolbar 中实验一些其他形状工具,如矩形工具,螺旋工具和圆形工具。通过不同的设置,每个工具都可以创建一些独特的图形。
### 在绘图中选择移动对象
### 在绘图中选择移动对象
现在你有一堆图形了,你使用选择工具来移动它们。要使用选择工具,首先从工具栏中选择它,然后单击要操作的形状,接着将图形拖动到您想要的位置。
现在你有一堆图形了,你可以使用 Select 工具来移动它们。要使用 Select 工具,首先从工具栏中选择它,然后单击要操作的形状,接着将图形拖动到您想要的位置。
选择形状时,你还可以使用调整大小手柄来缩放图形。此外,如果你单击所选的图形,调整大小控点将变为旋转模式,并允许你旋转图形:
选择形状后,你还可以使用尺寸句柄调整图形大小。此外,如果你单击所选的图形,尺寸句柄将转变为旋转模式,并允许你旋转图形:
[
![inkscape-movingshapes](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-movingshapes.gif)
@ -83,7 +84,7 @@ Inkscape是一个很棒的软件它还包含了更多的工具和功能。在
-----------------------
作者简介Ryan是一名 Fedora 设计师。他使用 Fedora Workstation 作为他的主要桌面还有来自Libre Graphics 世界的最好的工具,尤其是矢量图形编辑器 Inkscape。
作者简介Ryan 是一名 Fedora 设计师。他使用 Fedora Workstation 作为他的主要桌面,还有来自 Libre Graphics 世界的最好的工具,尤其是矢量图形编辑器 Inkscape。
--------------------------------------------------------------------------------
@ -92,7 +93,7 @@ via: https://fedoramagazine.org/getting-started-inkscape-fedora/
作者:[Ryan Lerch][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,119 @@
如何在 Linux/Unix 系统中验证端口是否被占用
==========
[![](https://s0.cyberciti.org/images/category/old/linux-logo.png)][1]
在 Linux 或者类 Unix 中,我该如何检查某个端口是否被占用?我又该如何验证 Linux 服务器中有哪些端口处于监听状态?
验证哪些端口在服务器的网络接口上处于监听状态是非常重要的。你需要注意那些开放端口来检测网络入侵。除了网络入侵,为了排除故障,确认服务器上的某个端口是否被其他应用程序占用也是必要的。比方说,你可能会在同一个系统中安装了 Apache 和 Nginx 服务器,所以了解是 Apache 还是 Nginx 占用了 80/443 这两个 TCP 端口就很重要。这篇快速教程会介绍使用 `netstat`、`nmap` 和 `lsof` 命令来检查端口使用信息,并找出哪些程序正在使用这些端口。
### 如何检查 Linux 中的程序和监听的端口
1、 打开一个终端,如 shell 命令窗口。
2、 运行以下任意一行命令:
```
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O IP地址
```
下面我们看看这些命令和它们的详细输出内容:
### 方式 1lsof 命令
语法如下:
```
$ sudo lsof -i -P -n
$ sudo lsof -i -P -n | grep LISTEN
$ doas lsof -i -P -n | grep LISTEN ### OpenBSD
```
输出如下:
[![Fig.01: Check the listening ports and applications with lsof command](https://s0.cyberciti.org/uploads/faq/2016/11/lsof-outputs.png)][2]
*图 1使用 lsof 命令检查监听端口和程序*
仔细看上面输出的最后一行:
```
sshd 85379 root 3u IPv4 0xffff80000039e000 0t0 TCP 10.86.128.138:22 (LISTEN)
```
- `sshd` 是程序的名称
- `10.86.128.138``sshd` 程序绑定 (LISTEN) 的 IP 地址
- `22` 是被使用 (LISTEN) 的 TCP 端口
- `85379``sshd` 任务的进程 ID (PID)
### 方式 2netstat 命令
你可以如下面所示使用 `netstat` 来检查监听的端口和程序。
**Linux 中 netstat 语法**
```
$ netstat -tulpn | grep LISTEN
```
**FreeBSD/MacOS X 中 netstat 语法**
```
$ netstat -anp tcp | grep LISTEN
$ netstat -anp udp | grep LISTEN
```
**OpenBSD 中 netstat 语法**
```
$ netstat -na -f inet | grep LISTEN
$ netstat -nat | grep LISTEN
```
### 方式 3nmap 命令
语法如下:
```
$ sudo nmap -sT -O localhost
$ sudo nmap -sU -O 192.168.2.13 ### 列出打开的 UDP 端口
$ sudo nmap -sT -O 192.168.2.13 ### 列出打开的 TCP 端口
```
示例输出如下:
[![Fig.02: Determines which ports are listening for TCP connections using nmap](https://s0.cyberciti.org/uploads/faq/2016/11/nmap-outputs.png)][3]
*图 2使用 nmap 探测哪些端口监听 TCP 连接*
你可以用一句命令合并 TCP/UDP 扫描:
```
$ sudo nmap -sTU -O 192.168.2.13
```
### 赠品:对于 Windows 用户
在 windows 系统下可以使用下面的命令检查端口使用情况:
```
netstat -bano | more
netstat -bano | findstr LISTENING
netstat -bano | findstr /R /C:"LISTENING"
```
----------------------------------------------------
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[VIVEK GITE][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[oska874](https://github.com/oska874)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
[1]:https://www.cyberciti.biz/faq/category/linux/
[2]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/lsof-outputs/
[3]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/nmap-outputs/

View File

@ -0,0 +1,63 @@
如何在 Apache 中重定向 URL 到另外一台服务器
============================================================
如我们前面两篇文章([使用 mod_rewrite 执行内部重定向][1]和[基于浏览器来显示自定义内容][2])中提到的,在本文中,我们将解释如何在 Apache 中使用 mod_rewrite 模块重定向对已移动到另外一台服务器上的资源的访问。
假设你正在重新设计公司的网站。你已决定将内容和样式HTML文件、JavaScript 和 CSS存储在一个服务器上将文档存储在另一个服务器上 - 这样可能会更稳健。
**建议阅读:** [5 个提高 Apache Web 服务器性能的提示][3] 。
但是,你希望这个更改对用户是透明的,以便他们仍然能够通过之前的网址访问文档。
在下面的例子中,名为 `assets.pdf` 的文件已从 `192.168.0.100`(主机名:`web`)中的 `/var/www/html` 移动到`192.168.0.101`(主机名:`web2`)中的相同位置。
为了让用户在浏览到 `192.168.0.100/assets.pdf` 时可以访问到此文件,请打开 `192.168.0.100` 上的 Apache 配置文件并添加以下重写规则(或者也可以将以下规则添加到 [.htaccess 文件][4])中:
```
RewriteRule "^(/assets\.pdf$)" "http://192.168.0.101$1" [R,L]
```
其中的 `$1` 是一个占位符,代表与括号中的正则表达式匹配到的任何内容。
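作为参考,下面是把这条规则放进虚拟主机配置中的一个最小示例(其中的 `ServerName`、`DocumentRoot` 等取值仅为演示,请按你的环境调整;另外,使用重写规则前需要确保 mod_rewrite 模块已启用,比如在 Debian/Ubuntu 上执行 `sudo a2enmod rewrite`

```
<VirtualHost *:80>
    ServerName 192.168.0.100
    DocumentRoot /var/www/html

    # 启用重写引擎,并把对 /assets.pdf 的请求重定向到新服务器
    RewriteEngine On
    RewriteRule "^(/assets\.pdf$)" "http://192.168.0.101$1" [R,L]
</VirtualHost>
```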
现在保存更改,不要忘记重新启动 Apache让我们看看当我们打开 `192.168.0.100/assets.pdf`,尝试访问 `assets.pdf` 时会发生什么:
**建议阅读:** [25 个有用的网站 .htaccess 技巧][5]
在下面我们就可以看到,为 `192.168.0.100` 上的 `assets.pdf` 所做的请求实际上是由 `192.168.0.101` 处理的。
```
# tail -n 1 /var/log/apache2/access.log
```
[
![Check Apache Logs](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Apache-Logs.png)
][6]
*检查 Apache 日志*
在本文中,我们讨论了如何对已移动到其他服务器的资源进行重定向。 总而言之,我强烈建议你看看 [mod_rewrite][7] 指南和 [Apache 重定向指南][8],以供将来参考。
一如既往那样,如果您对本文有任何疑虑,请随时使用下面的评论栏回复。 我们期待你的回音!
--------------------------------------------------------------------------------
作者简介Gabriel Cánepa 是来自阿根廷圣路易斯 Villa Mercedes 的 GNU/Linux 系统管理员和 Web 开发人员。 他在一家全球领先的消费品公司工作,非常高兴使用 FOSS 工具来提高他日常工作领域的生产力。
-----------
via: http://www.tecmint.com/redirect-website-url-from-one-server-to-different-server/
作者:[Gabriel Cánepa][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/redirection-with-mod_rewrite-in-apache/
[2]:http://www.tecmint.com/mod_rewrite-redirect-requests-based-on-browser/
[3]:http://www.tecmint.com/apache-performance-tuning/
[4]:http://www.tecmint.com/tag/htaccess/
[5]:http://www.tecmint.com/apache-htaccess-tricks/
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Apache-Logs.png
[7]:http://mod-rewrite-cheatsheet.com/
[8]:https://httpd.apache.org/docs/2.4/rewrite/remapping.html

View File

@ -1,55 +1,49 @@
在Ubuntu上搭建一台Email服务器(二)
如何在 Ubuntu 环境下搭建邮件服务器(二)
============================================================
### [dovecot-email.jpg][4]
![Dovecot email](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dovecot-email.jpg?itok=tY4veggw "Dovecot email")
![Dovecot email](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dovecot-email.jpg?itok=tY4veggw "Dovecot email")
本教程的第2部分将介绍如何使用Dovecot将邮件从Postfix服务器移动到用户的收件箱。以[Creative Commons Zero][2]Pixabay方式授权发布
本教程的第 2 部分将介绍如何使用 Dovecot 将邮件从 Postfix 服务器移动到用户的收件箱。以[Creative Commons Zero][2] 方式授权发布
在[第一部分][5]中我们安装并测试了Postfix SMTP服务器。Postfix或任何SMTP服务器都不是一个完整的邮件服务器因为它所做的是在SMTP服务器之间移动邮件。我们需要Dovecot将邮件从Postfix服务器移动到用户的收件箱中。
在[第一部分][5]中,我们安装并测试了 Postfix SMTP 服务器。Postfix 或任何 SMTP 服务器都不是一个完整的邮件服务器,因为它所做的是在 SMTP 服务器之间移动邮件。我们需要 Dovecot 将邮件从 Postfix 服务器移动到用户的收件箱中。
Dovecot支持两种标准邮件协议IMAPInternet邮件访问协议和POP3邮局协议。 IMAP服务器保留服务器上的所有邮件。您的用户可以选择将邮件下载到计算机或仅在服务器上访问它们。 IMAP对于有多台机器的用户是方便的。但对你而言会有更多的工作因为你必须确保你的服务器始终可用而且IMAP服务器需要大量的存储和内存。
Dovecot 支持两种标准邮件协议IMAPInternet 邮件访问协议)和 POP3邮局协议。 IMAP 服务器会在服务器上保留所有邮件。您的用户可以选择将邮件下载到计算机或仅在服务器上访问它们。 IMAP 对于有多台机器的用户是方便的。但对你而言需要更多的工作,因为你必须确保你的服务器始终可用,而且 IMAP 服务器需要大量的存储和内存。
POP3是较旧的协议。POP3服务器可以比IMAP服务器服务更多的用户因为邮件会下载到用户的计算机。大多数邮件客户端可以选择在服务器上保留一定天数的邮件因此POP3的行为有点像IMAP。但它不是IMAP当你像IMAP那样做那么常常会下载多次或意外删除。
POP3 是较旧的协议。POP3 服务器可以比 IMAP 服务器服务更多的用户,因为邮件会下载到用户的计算机。大多数邮件客户端可以选择在服务器上保留一定天数的邮件,因此 POP3 的行为有点像 IMAP。但它又不是 IMAP当你像 IMAP 那样(在多台计算机上使用它时)那么常常会下载多次或意外删除。
### 安装 Dovecot
启动你信任的Ubuntu系统并安装Dovecot
启动你的 Ubuntu 系统并安装 Dovecot
```
$ sudo apt-get install dovecot-imapd dovecot-pop3d
```
它会安装可用的配置并在完成后自动启动,你可以用`ps ax | grep dovecot`确认:
它会安装可用的配置并在完成后自动启动,你可以用 `ps ax | grep dovecot` 确认:
```
$ ps ax | grep dovecot
15988 ? Ss 0:00 /usr/sbin/dovecot
15990 ? S 0:00 dovecot/anvil
15991 ? S 0:00 dovecot/log
```
打开你的Postfix配置文件`/etc/postfix/main.cf`确保配置了maildirs而不是mbox邮件存储mbox是对于每个用户的大文件而maildir是每条消息都有一个文件。大量的小文件比一个庞大的文件更稳定且易于管理。下面添加两行第二行告诉Postfix你需要maildir格式并且在每个用户的家目录下创建一个`.Mail`目录。你可以取任何名字,不一定要是`.Mail`
打开你的 Postfix 配置文件 `/etc/postfix/main.cf`确保配置了maildir 而不是 mbox 的邮件存储方式mbox 是给每个用户一个单一大文件,而 maildir 是每条消息都存储为一个文件。大量的小文件比一个庞大的文件更稳定且易于管理。添加如下两行,第二行告诉 Postfix 你需要 maildir 格式,并且在每个用户的家目录下创建一个 `.Mail` 目录。你可以取任何名字,不一定要是 `.Mail`
```
mail_spool_directory = /var/mail
home_mailbox = .Mail/
```
现在调整你的Dovecot配置。首先把原始的`dovecot.conf`文件重命名,因为它会调用`conf.d`中的文件来让事情简单些:
现在调整你的 Dovecot 配置。首先把原始的 `dovecot.conf` 文件重命名放到一边,因为它会调用存放在 `conf.d` 中的文件,在你刚刚开始学习时把配置放一起更简单些:
```
$ sudo mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot-oldconf
```
现在创建一个新的`/etc/dovecot/dovecot.conf`
现在创建一个新的 `/etc/dovecot/dovecot.conf`
```
disable_plaintext_auth = no
mail_location = maildir:~/.Mail
namespace inbox {
@ -74,30 +68,27 @@ userdb {
}
```
注意`mail_location = maildir` 必须和`main.cf`中的`home_mailbox`参数匹配。保存你的更改并重新加载Postfix和Dovecot配置
注意 `mail_location = maildir` 必须和 `main.cf` 中的 `home_mailbox` 参数匹配。保存你的更改并重新加载 Postfix Dovecot 配置:
```
$ sudo postfix reload
$ sudo dovecot reload
```
### 快速导出配置
使用下面的命令来查看你的Postfix和Dovecot配置
使用下面的命令来快速查看你的 Postfix Dovecot 配置:
```
$ postconf -n
$ doveconf -n
```
### 测试 Dovecot
现在再次启动telnet并且给自己发送一条测试消息。粗体显示的是你输入的命令。`studio`是我服务器的主机名,因此你必须用自己的:
现在再次启动 telnet并且给自己发送一条测试消息。粗体显示的是你输入的命令。`studio` 是我服务器的主机名,因此你必须用自己的:
```
$ telnet studio 25
Trying 127.0.1.1...
Connected to studio.
@ -132,7 +123,7 @@ quit
Connection closed by foreign host.
```
现在请求Dovecot来取回你的新消息使用你的Linux用户名和密码登录
现在请求 Dovecot 来取回你的新消息,使用你的 Linux 用户名和密码登录:
```
@ -173,12 +164,11 @@ quit
Connection closed by foreign host.
```
花一点时间比较第一个例子中输入的消息和第二个例子中接收的消息。 它很容易欺骗返回地址和日期但Postfix不会这样。大多数邮件客户端默认显示一个最小的标头集,但是你需要读取完整的标头查看真实的回溯。
花一点时间比较第一个例子中输入的消息和第二个例子中接收的消息。 返回地址和日期是很容易伪造的,但 Postfix 不会被愚弄。大多数邮件客户端默认显示一个最小的标头集,但是你需要读取完整的标头才能查看真实的回溯。
You can also read your messages by looking in your `~/Mail/cur` directory. They are plain text. Mine has two test messages:
你也可以在你的 `~/Mail/cur` 目录中查看你的邮件,它们是普通文本,我已经有两封测试邮件:
```
$ ls .Mail/cur/
1480540325.V806I28e0229M351743.studio:2,S
1480555224.V806I28e000eM41463.studio:2,S
@ -186,10 +176,9 @@ $ ls .Mail/cur/
### 测试 IMAP
我们Dovecot同时启用了POP3和IMAP因此我们使用telnet测试IMAP。
我们 Dovecot 同时启用了 POP3 IMAP 服务,因此我们使用 telnet 测试 IMAP。
```
$ telnet studio imap2
Trying 127.0.1.1...
Connected to studio.
@ -221,28 +210,25 @@ A4 OK Logout completed.
Connection closed by foreign host
```
### Thunderbird邮件客户端
### Thunderbird 邮件客户端
图1中的屏幕截图显示了我局域网上另一台主机上的图形邮件客户端中的邮件。
1 中的屏幕截图显示了我局域网上另一台主机上的图形邮件客户端中的邮件。
![thunderbird mail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/thunderbird-mail.png?itok=IkWK5Ti_ "thunderbird mail")
### [thunderbird-mail.png][3]
*图1 Thunderbird mail*
![thunderbird mail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/thunderbird-mail.png?itok=IkWK5Ti_ "thunderbird mail")
此时,你已有一个可以工作的 IMAP 和 POP3 邮件服务器,并且你也知道该如何测试你的服务器。你的用户可以在他们设置邮件客户端时选择要使用的协议。如果您只想支持一个邮件协议,那么只需要在您的 Dovecot 配置中留下你要的协议名字。
图1 Thunderbird mail.[Used with permission][1]
此时你已有一个工作的IMAP和POP3邮件服务器并且你也知道该如何测试你的服务器。你的用户将在他们设置邮件客户端时选择要使用的协议。如果您只想支持一个邮件协议那么只需要命名您的Dovecot配置中的一个。
然而,这还远远没有完成。这是一个非常简单、没有加密的开放的安装。它也只适用于与邮件服务器在同一系统上的用户。这是不可扩展的,并具有一些安全风险,例如没有密码保护。 我们会在[下周][6]了解如何创建与系统用户分开的邮件用户,以及如何添加加密。
然而,这还远远没有完成。这是一个非常简单、没有加密的、大门敞开的安装。它也只适用于与邮件服务器在同一系统上的用户。这是不可扩展的,并具有一些安全风险,例如没有密码保护。 我们会在[下篇][6]了解如何创建与系统用户分开的邮件用户,以及如何添加加密。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-2
作者:[ CARLA SCHRODER][a]
作者:[CARLA SCHRODER][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -251,5 +237,5 @@ via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-par
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/thunderbird-mailpng
[4]:https://www.linux.com/files/images/dovecot-emailjpg
[5]:https://www.linux.com/learn/how-build-email-server-ubuntu-linux
[5]:https://linux.cn/article-8071-1.html
[6]:https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-3

View File

@ -0,0 +1,225 @@
在 Ubuntu 中用 UFW 配置防火墙
============================================================
UFW即简单防火墙uncomplicated firewall是一个 Arch Linux、Debian 或 Ubuntu 中管理防火墙规则的前端。 UFW 通过命令行使用(尽管它有可用的 GUI它的目的是使防火墙配置简单即不复杂uncomplicated
![How to Configure a Firewall with UFW](https://www.linode.com/docs/assets/ufw_tg.png "How to Configure a Firewall with UFW")
### 开始之前
1、 熟悉我们的[入门][1]指南,并完成设置服务器主机名和时区的步骤。
2、 本指南将尽可能使用 `sudo`。 在完成[保护你的服务器][2]指南的章节,创建一个标准用户帐户,强化 SSH 访问和移除不必要的网络服务。 **但不要**跟着创建防火墙部分 - 本指南是介绍使用 UFW 的,它对于 iptables 而言是另外一种控制防火墙的方法。
3、 更新系统
**Arch Linux**
```
sudo pacman -Syu
```
**Debian / Ubuntu**
```
sudo apt-get update && sudo apt-get upgrade
```
### 安装 UFW
UFW 默认包含在 Ubuntu 中,但在 Arch 和 Debian 中需要安装。 Debian 将自动启用 UFW 的 systemd 单元,并使其在重新启动时启动,但 Arch 不会。 这与告诉 UFW 启用防火墙规则不同,因为使用 systemd 或者 upstart 启用 UFW 仅仅是告知 init 系统打开 UFW 守护程序。
默认情况下UFW 的规则集为空,因此即使守护程序正在运行,也不会强制执行任何防火墙规则。 强制执行防火墙规则集的部分[在下面][3]。
#### Arch Linux
1、 安装 UFW
```
sudo pacman -S ufw
```
2、 启动并启用 UFW 的 systemd 单元:
```
sudo systemctl start ufw
sudo systemctl enable ufw
```
#### Debian / Ubuntu
1、 安装 UFW
```
sudo apt-get install ufw
```
### 使用 UFW 管理防火墙规则
#### 设置默认规则
大多数系统只需要打开少量的端口接受传入连接,并且关闭所有剩余的端口。 从一个简单的规则基础开始,`ufw default`命令可以用于设置对传入和传出连接的默认响应动作。 要拒绝所有传入并允许所有传出连接,那么运行:
```
sudo ufw default allow outgoing
sudo ufw default deny incoming
```
`ufw default` 也允许使用 `reject` 参数。
> 警告:
> 除非明确设置允许规则,否则配置默认 `deny``reject` 规则会锁定你的服务器。确保在应用默认 `deny``reject` 规则之前,已按照下面的部分配置了 SSH 和其他关键服务的允许规则。
#### 添加规则
可以有两种方式添加规则:用**端口号**或者**服务名**表示。
要允许 SSH 的 22 端口的传入和传出连接,你可以运行:
```
sudo ufw allow ssh
```
你也可以运行:
```
sudo ufw allow 22
```
相似的,要在特定端口(比如 111`deny` 流量,你需要运行:
```
sudo ufw deny 111
```
为了更好地调整你的规则,你也可以允许基于 TCP 或者 UDP 的包。下面例子会允许 80 端口的 TCP 包:
```
sudo ufw allow 80/tcp
sudo ufw allow http/tcp
```
这个会允许 1725 端口上的 UDP 包:
```
sudo ufw allow 1725/udp
```
#### 高级规则
除了基于端口的允许或阻止UFW 还允许您按照 IP 地址、子网和 IP 地址/子网/端口的组合来允许/阻止。
允许从一个 IP 地址连接:
```
sudo ufw allow from 123.45.67.89
```
允许特定子网的连接:
```
sudo ufw allow from 123.45.67.89/24
```
允许特定 IP/ 端口的组合:
```
sudo ufw allow from 123.45.67.89 to any port 22 proto tcp
```
`proto tcp` 可以删除或者根据你的需求改成 `proto udp`,所有例子的 `allow` 都可以根据需要变成 `deny`
#### 删除规则
要删除一条规则,在规则的前面加上 `delete`。如果你希望不再允许 HTTP 流量,你可以运行:
```
sudo ufw delete allow 80
```
删除规则同样可以使用服务名。
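如果不确定规则的确切写法,也可以先按编号列出规则,再按编号删除(下面的编号 `2` 仅为示例,请以实际输出为准):

```
sudo ufw status numbered
sudo ufw delete 2
```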
### 编辑 UFW 的配置文件
虽然可以通过命令行添加简单的规则,但仍有可能需要添加或删除更高级或特定的规则。 在运行通过终端输入的规则之前UFW 将运行一个文件 `before.rules`它允许回环接口、ping 和 DHCP 等服务。要添加或改变这些规则,编辑 `/etc/ufw/before.rules` 这个文件。 同一目录中的 `before6.rules` 文件用于 IPv6 。
还存在一个 `after.rule``after6.rule` 文件,用于添加在 UFW 运行你通过命令行输入的规则之后需要添加的任何规则。
还有一个配置文件位于 `/etc/default/ufw`。 从此处可以禁用或启用 IPv6可以设置默认规则并可以设置 UFW 以管理内置防火墙链。
### UFW 状态
你可以在任何时候使用命令:`sudo ufw status` 查看 UFW 的状态。这会显示所有规则列表,以及 UFW 是否处于激活状态:
```
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
80/tcp ALLOW Anywhere
443 ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
```
### 启用防火墙
当你的规则设置完成后,初次运行 `ufw status` 时可能会输出 `Status: inactive`。 要启用 UFW 并强制执行防火墙规则:
```
sudo ufw enable
```
相似地,禁用 UFW 规则:
```
sudo ufw disable
```
> UFW 会继续运行,并且在下次启动时会再次启动。
### 日志记录
你可以用下面的命令启动日志记录:
```
sudo ufw logging on
```
可以通过运行 `sudo ufw logging low|medium|high` 设置日志级别,可以选择 `low`、 `medium` 或者 `high`。默认级别是 `low`。
常规日志记录类似于下面这样,位于 `/var/log/ufw.log`
```
Sep 16 15:08:14 <hostname> kernel: [UFW BLOCK] IN=eth0 OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:00:00 SRC=123.45.67.89 DST=987.65.43.21 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=8475 PROTO=TCP SPT=48247 DPT=22 WINDOW=1024 RES=0x00 SYN URGP=0
```
前面的值列出了你的服务器的日期、时间、主机名。剩下的重要信息包括:
* **[UFW BLOCK]**:这是记录事件的描述开始的位置。在此例中,它表示阻止了连接。
* **IN**:如果它包含一个值,那么代表该事件是传入事件
* **OUT**:如果它包含一个值,那么代表事件是传出事件
* **MAC**:目的地和源 MAC 地址的组合
* **SRC**:包源的 IP
* **DST**:包目的地的 IP
* **LEN**:数据包长度
* **TTL**数据包的存活时间TTLtime to live。在到达目的地之前它会在路由器之间跳转直到过期为止。
* **PROTO**:数据包的协议
* **SPT**:包的源端口
* **DPT**:包的目标端口
* **WINDOW**:发送方可以接收的数据包的大小
* **SYN URGP**:指示是否需要三次握手。 `0` 表示不需要。
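要从日志中快速过滤出这类被阻止的连接记录,可以配合 `grep` 使用(日志文件的路径请以你系统上的实际位置为准):

```
sudo grep 'UFW BLOCK' /var/log/ufw.log | tail -n 5
```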
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
作者:[Linode][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
[1]:https://www.linode.com/docs/getting-started
[2]:https://www.linode.com/docs/security/securing-your-server
[3]:https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw#enable-the-firewall

View File

@ -0,0 +1,133 @@
在 Ubuntu 中使用 NTP 进行时间同步
==========
NTP 是通过网络来同步时间的一种 TCP/IP 协议。通常客户端向服务器请求当前的时间,并根据结果来设置其时钟。
这个描述是挺简单的,实现这一功能却是极为复杂的 - 首先要有多层 NTP 服务器,第一层 NTP 服务器连接原子时钟,第二层、第三层服务器则担起负载均衡的责任,以处理因特网传来的所有请求。另外,客户端可能也超乎你想象的复杂 - 它必须排除通讯延迟,调整时间的同时不干扰其它在服务器中运行的进程。幸运的是,所有的这些复杂性都进行了封装,你是不可见也不需要见到的。
在 Ubuntu 中,是使用 `ntpdate``ntpd` 来同步时间的。
* [timedatectl](#timedatectl)
* [timesyncd](#timesyncd)
* [ntpdate](#ntpdate)
* [时间服务器](#timeservers)
* [ntpd](#ntpd)
* [安装](#installation)
* [配置](#configuration)
* [查看状态](#status)
* [PPS 支持](#Support)
* [参考资料](#reference)
### <span id="timedatectl">timedatectl</span>
在最新的 Ubuntu 版本中,`timedatectl` 替代了老旧的 `ntpdate`。默认情况下,`timedatectl` 在系统启动的时候会立刻同步时间,并在稍后网络连接激活后通过 socket 再次检查一次。
如果已安装了 `ntpdate` / `ntp``timedatectl` 会退而让你使用之前的设置。这样确保了两个时间同步服务不会相互冲突,同时在你升级的时候还保留原本的行为和配置。但这也意味着从旧版本的发行版升级时 `ntp`/`ntpdate` 仍会安装,因此会导致新的基于 systemd 的时间服务被禁用。
### <span id="timesyncd">timesyncd</span>
在最新的 Ubuntu 版本中,`timesyncd` 替代了 `ntpd` 的客户端的部分。默认情况下 `timesyncd` 会定期检测并同步时间。它还会在本地存储更新的时间,以便在系统重启时做时间单步调整。
通过 `timedatectl``timesyncd` 设置的当前时间状态和时间配置,可以使用 `timedatectl status` 命令来进行确认。
```
timedatectl status
Local time: Fri 2016-04-29 06:32:57 UTC
Universal time: Fri 2016-04-29 06:32:57 UTC
RTC time: Fri 2016-04-29 07:44:02
Time zone: Etc/UTC (UTC, +0000)
Network time on: yes
NTP synchronized: no
RTC in local TZ: no
```
如果安装了 NTP并用它替代 `timedatectl` 来同步时间,则 `NTP synchronized` 将被设置为 `yes`
供 `timedatectl` 和 `timesyncd` 获取时间的 NTP 服务器可以通过 `/etc/systemd/timesyncd.conf` 来指定,另外在 `/etc/systemd/timesyncd.conf.d/` 下还可以放置灵活的附加配置文件。
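下面是一个最小的配置示例(其中的服务器地址仅作演示,请换成离你最近的 NTP 服务器;修改后可执行 `sudo systemctl restart systemd-timesyncd` 使其生效):

```
# /etc/systemd/timesyncd.conf
[Time]
NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org
FallbackNTP=ntp.ubuntu.com
```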
### <span id="ntpdate">ntpdate</span>
由于 `timedatectl` 的存在,各发行版已经弃用了 `ntpdate`,默认不再进行安装。如果你安装了,它会在系统启动的时候根据 Ubuntu 的 NTP 服务器来设置你电脑的时间。之后每当一个新的网络接口启动时,它就会重新尝试同步时间 —— 在这期间只要其涵盖的时间差不是太大,它就会慢慢偏移时间。该行为可以通过 `-B`/`-b` 开关来进行控制。
```
ntpdate ntp.ubuntu.com
```
### <span id="timeservers">时间服务器</span>
默认情况下,基于 systemd 的工具都是从 `ntp.ubuntu.com` 请求时间同步的。经典的基于 `ntpd` 的服务基本上都是使用 `[0-3].ubuntu.pool.ntp.org` 池中的 `2.ubuntu.pool.ntp.org`,还有 `ntp.ubuntu.com`,此外需要的话还支持 IPv6。如果想强制使用 IPv6可以使用 `ipv6.ntp.ubuntu.com`,不过这并非默认配置。
### <span id="ntpd">ntpd</span>
ntp 的守护进程 `ntpd` 会计算你的系统时钟的时间偏移量并且持续的进行调整,所以不会出现时间差距较大的更正,比如说,不会导致不连续的日志。该进程只花费少量的进程资源和内存,但对于现代的服务器来说实在是微不足道的了。
### <span id="installation">安装</span>
要安装 `ntpd`,在终端命令行中输入:
```
sudo apt install ntp
```
### <span id="configuration">配置</span>
编辑 `/etc/ntp.conf` —— 增加/移除 `server` 行。默认配置有以下服务器:
```
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org
```
修改配置文件之后,你需要重新加载 `ntpd`
```
sudo systemctl reload ntp.service
```
### <span id="status">查看状态</span>
使用 `ntpq` 来查看更多信息:
```
# sudo ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+stratum2-2.NTP. 129.70.130.70 2 u 5 64 377 68.461 -44.274 110.334
+ntp2.m-online.n 212.18.1.106 2 u 5 64 377 54.629 -27.318 78.882
*145.253.66.170 .DCFa. 1 u 10 64 377 83.607 -30.159 68.343
+stratum2-3.NTP. 129.70.130.70 2 u 5 64 357 68.795 -68.168 104.612
+europium.canoni 193.79.237.14 2 u 63 64 337 81.534 -67.968 92.792
```
### <span id="Support">PPS 支持</span>
从 Ubuntu 16.04 开始ntp 支持 PPS 规范,给 ntp 提供了本地时间源,以提供更高的精度。查看下边列出的链接来获取更多配置信息。
### <span id="reference">参考资料</span>
* 参考 [Ubuntu Time][1] wiki 页来获取更多信息
* [ntp.org网络时间协议项目主页][2]
* [ntp.org关于配置 PPS 的 FAQ][3]
--------------------------------------------------------------------------------
via: https://help.ubuntu.com/lts/serverguide/NTP.html
作者:[Ubuntu][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://help.ubuntu.com/lts/serverguide/NTP.html
[1]:https://help.ubuntu.com/community/UbuntuTime
[2]:http://www.ntp.org/
[3]:http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#S-CONFIG-ADV-PPS

View File

@ -0,0 +1,357 @@
使用 Windows 10 的 RSAT 工具来管理 Samba4 活动目录架构 (三)
============================================================
在这一节的 [Samba4 AD DC 架构系列文章][8]中,我们将会讨论如何把 Windows 10 系统的电脑添加到 Samba4 域环境中,以及如何在 Windows 10 系统下管理域环境。
一旦 Windows 10 系统加入到 Samba4 AD DC ,我们就可以在 Windows 10 系统中创建、删除或者禁用域用户和组了,可以创建新的组织单元,创建、编辑和管理域策略,还可以管理 Samba4 域 DNS 服务。
上面所有的功能和其它一些复杂的与域管理相关的工作都可以通过 Windows 环境下的 RSAT 工具来完成—— Microsoft 远程服务器管理工具。
#### 要求
1、 [在 Ubuntu 系统上使用 Samba4 来创建活动目录架构(一)][1]
2、 [在 Linux 命令行下管理 Samba4 AD 架构(二)][2]
### 第 1 步:配置域时间同步
1、在使用 Windows 10 系统的 RSAT 工具来管理 Samba4 AD DC 之前,我们需要了解活动目录所依赖的一项很重要的服务,即[精确的时间同步][9]。
在大多数的 Linux 发行版中,都由 NTP 进程提供时间同步机制。AD 环境默认允许最大的时间差距是 5 分钟。
如果时间差距超过 5 分钟,你将会遇到各种各样的异常报错,最严重的会影响到 AD 用户、域成员服务器或共享访问等。
为了在 Ubuntu 系统中安装网络时间协议进程和 NTP 客户端工具,可执行以下命令:
```
$ sudo apt-get install ntp ntpdate
```
[
![Install NTP on Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/12/Install-NTP-on-Ubuntu.png)
][10]
*在 Ubuntu 系统下安装 NTP 服务*
2、下一步修改 NTP 配置文件,使用一个离你最近的 NTP 服务地址列表替换默认的 NTP 池服务列表。
NTP 服务器地址列表可以从 NTP 地址库项目官方网站获取:[http://www.pool.ntp.org/en/][11]。
```
$ sudo nano /etc/ntp.conf
```
在每一行 `pool` 前添加一个 # 符号以注释默认的服务器列表,并替换为适合你的 NTP 服务器地址,如下图所示:
```
pool 0.ro.pool.ntp.org iburst
pool 1.ro.pool.ntp.org iburst
pool 2.ro.pool.ntp.org iburst
# Use Ubuntu's ntp server as a fallback.
pool 3.ro.pool.ntp.org
```
[
![Configure NTP Server in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-NTP-Server-in-Ubuntu.png)
][12]
*在 Ubuntu 系统下配置 NTP 服务*
3、此时先不要关闭该文件。移动光标到文件顶部,在 `driftfile` 参数后面添加下面一行内容。该设置是为了让客户端在查询该服务时使用 AD 的 NTP 签名请求。
```
ntpsigndsocket /var/lib/samba/ntp_signd/
```
[
![Sync AD with NTP](http://www.tecmint.com/wp-content/uploads/2016/12/Sync-AD-with-NTP.png)
][13]
*使用 NTP 来同步 AD*
4、最后移动光标到文件底部并添加如下一行内容如截图所示仅允许网络客户端查询该服务器上的时间。
```
restrict default kod nomodify notrap nopeer mssntp
```
[
![Query Clients to NTP Server](http://www.tecmint.com/wp-content/uploads/2016/12/Query-Client-to-NTP-Server.png)
][14]
*限制 NTP 服务的查询客户端*
5、设置完成之后保存并关闭 NTP 配置文件,为了让 NTP 服务读取 `ntp_signed` 目录,需要授予 NTP 服务合适的权限。
以下是 Samba NTP socket 的系统路径。之后,重启 NTP 服务以应用更改,并使用 [netstat 命令][15]与[grep 过滤][16]相接合来检查 NTP 服务是否正常。
```
$ sudo chown root:ntp /var/lib/samba/ntp_signd/
$ sudo chmod 750 /var/lib/samba/ntp_signd/
$ sudo systemctl restart ntp
$ sudo netstat -tulpn | grep ntp
```
[
![Grant Permission to NTP](http://www.tecmint.com/wp-content/uploads/2016/12/Grant-Permission-to-NTP.png)
][17]
*给 NTP 服务授权*
使用 ntpq 命令行工具来监控 NTP 进程,加上 '-p' 参数来显示摘要信息。
```
$ ntpq -p
```
[
![Monitor NTP Server Pool](http://www.tecmint.com/wp-content/uploads/2016/12/Monitor-NTP-Server-Pool.png)
][18]
*监控 NTP 服务器池*
### 第 2 步:处理 NTP 时间同步异常问题
6、有时候 NTP 进程在尝试与上游 ntp 服务端同步时间的计算过程中会卡住,导致客户端使用 `ntpdate` 工具手动强制同步时间时报如下错误:
```
# ntpdate -qu adc1
ntpdate[4472]: no server suitable for synchronization found
```
[
![NTP Time Synchronization Error](http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Time-Synchronization-Error.png)
][19]
*NTP 时间同步异常*
`ntpdate` 命令加上 `-d` 调试选项:
```
# ntpdate -d adc1.tecmint.lan
Server dropped: Leap not in sync
```
[
![NTP Server Dropped Leap Not in Sync](http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Server-Dropped-Leap-Not-Sync.png)
][20]
*NTP 服务器因 Leap 未同步而被丢弃*
7、为了避免出现该问题使用下面的方法来解决这个问题在服务器上停止 NTP 服务,使用 `ntpdate` 客户端工具加上 `-b` 参数指定外部 peer 地址来手动强制同步时间,如下图所示:
```
# systemctl stop ntp.service
# ntpdate -b 2.ro.pool.ntp.org [你的 ntp peer]
# systemctl start ntp.service
# systemctl status ntp.service
```
[
![Force NTP Time Synchronization](http://www.tecmint.com/wp-content/uploads/2016/12/Force-NTP-Time-Synchronization.png)
][21]
*强制 NTP 时间同步*
8、当时间正确同步之后启动服务器上的 NTP 服务,并且在客户端服务器上执行如下命令来验证 NTP 时间同步服务是否可用:
```
# ntpdate -du adc1.tecmint.lan [你的 adc 服务器]
```
[
![Verify NTP Time Synchronization](http://www.tecmint.com/wp-content/uploads/2016/12/Verify-NTP-Time-Synchronization.png)
][22]
*验证 NTP 时间同步*
至此, NTP 服务应该已经工作正常了。
### 第 3 步:把 Windows 10 系统加入域环境
9、从我们的前一篇文章可以看出[Samba4 活动目录可以使用 samba-tool 工具在命令行下管理][23],可以直接在服务器上的 VTY 控制台或者通过 SSH 工具远程连接到服务器上进行管理。
另外,更直观更灵活的方式是使用已加入域的 Windows 电脑中的微软远程服务器管理工具RSAT来管理我们的 Samba4 AD 域控制器。这些工具在当前的大多数 Windows 系统中都可以使用。
把 Windows 10 或是之前版本的微软操作系统加入到 Samba4 AD DC 环境中的过程也是非常容易的。首先,确保你的 Windows 10 电脑已经设置了正确的 Samba4 DNS 服务器的 IP 地址,以查询出准确的域解析结果。
打开“控制面板 -> 网络和 Internet -> 网络和共享中心 -> 网卡设置 -> 属性 -> IPv4 -> 属性 -> 使用下面的 DNS 服务器地址”,并且手动输入 Samba4 AD 服务器的 IP 地址,如下图所示:
[
![join Windows to Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-to-Samba4-AD.png)
][24]
*把 Windows 10 加入到 Samba4 AD 环境*
[
![Add DNS and Samba4 AD IP Address](http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-and-Samba4-AD-IP-Address.png)
][25]
*添加 DNS 和 Samba4 AD 服务器地址*
这里的 `192.168.1.254` 是 Samba4 AD 域控服务器的地址,用于域名解析。相应替换该 IP 地址。
10、下一步点击 OK 按钮以应用网络设置,打开 CMD 命令行窗口,通过 ping 域名和 Samba4 服务器的 FQDN 地址来测试通过 DNS 解析到域是否连通。
```
ping tecmint.lan
ping adc1.tecmint.lan
```
[
![Check Network Connectivity Between Windows and Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/12/Check-Samba4-AD-from-Windows.png)
][26]
*检查 Windows 和 Samb4 AD 服务器的网络连通性*
11、如果 Windows 客户端 DNS 查询的结果解析正确,那么,你还需要确认客户端时间是否已跟域环境同步。
打开“控制面板 -> 时钟、语言和区域 -> 设置时间和日期 -> Internet 时间页 -> 更改设置”,输入你同步时间的域名和 Internet 时间服务器字段。
点击立即更新按钮来强制与域同步时间,点击 OK 关闭窗口。
[
![Synchronize Time with Internet Server](http://www.tecmint.com/wp-content/uploads/2016/12/Synchronize-Time-with-Internet-Server.png)
][27]
*与 Internet 服务器同步时间*
12、最后通过打开“系统属性 -> 更改 -> 域成员 -> 输入域名”,点击 OK输入你的域管理员账号和密码再次点击 OK。
应该弹出一个新的窗口通知你已经是一个域成员了。点击 OK 关闭弹出窗口,并且重启机器以应用域更改。
下面的截图将说明这些操作步骤。
[
![Join Windows Domain to Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-Domain-to-Samba4-AD.png)
][28]
*把 Windows 域加入到 Samba4 AD 环境*
[
![Enter Domain Administration Login](http://www.tecmint.com/wp-content/uploads/2016/12/Enter-Domain-Administration-Login.png)
][29]
*输入域管理员账号登录*
[
![Domain Joined to Samba4 AD Confirmation](http://www.tecmint.com/wp-content/uploads/2016/12/Domain-Joined-to-Samba4-AD.png)
][30]
*确认域已加入到 Samba4 AD 环境*
[
![Restart Windows Server for Changes](http://www.tecmint.com/wp-content/uploads/2016/12/Restart-Windows-Server-for-Changes.png)
][31]
*重启 Windows 服务器以应用更改*
13、重启之后单击其它用户并且使用具有管理员权限的 Samba4 域账号登录到 Windows 系统,你已经准备好进入到后边几个步骤了。
[
![Login to Windows Using Samba4 AD Account](http://www.tecmint.com/wp-content/uploads/2016/12/Login-to-Windows-Using-Samba4-AD-Account.png)
][32]
*使用 Samba4 AD 账号登录到 Windows*
### 第 4 步:使用 RSAT 工具来管理 Samba4 AD DC
14、微软远程服务器管理工具RSAT被广泛地用来管理 Samba4 活动目录,你可以根据你的 Windows 系统版本从下面的地址来下载该工具:
1. Windows 10: [https://www.microsoft.com/en-us/download/details.aspx?id=45520][4]
2. Windows 8.1: [http://www.microsoft.com/en-us/download/details.aspx?id=39296][5]
3. Windows 8: [http://www.microsoft.com/en-us/download/details.aspx?id=28972][6]
4. Windows 7: [http://www.microsoft.com/en-us/download/details.aspx?id=7887][7]
一旦 Windows 10 独立安装包下载完成,运行安装包,等待安装完成并重启机器以应用所有更新。
重启之后,打开“控制面板 -> 程序(卸载程序) -> 启用或关闭 Windows 功能”,勾选所有的远程服务器管理工具。
点击 OK 开始安装,安装完成之后重启系统。
[
![Administer Samba4 AD from Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Administer-Samba4-AD-from-Windows.png)
][33]
*从 Windows 系统下管理 Samba4 AD*
15、要进入 RSAT 工具集,打开“控制面板 -> 系统和安全 -> 管理工具”。
这些工具也可以在开始菜单的管理工具菜单中找到。另外,你也可以打开 Windows MMC 工具和管理单元,从“文件 -> 添加/删除管理单元”菜单中访问它们。
[
![Access Remote Server Administration Tools](http://www.tecmint.com/wp-content/uploads/2016/12/Access-Remote-Server-Administration-Tools.png)
][34]
*访问远程服务器管理工具集*
最常用的工具,比如 AD UC、DNS 和组策略管理工具,可以通过右键菜单中的“发送到”功能在桌面上创建快捷方式,以便直接运行。
16、你可以通过打开 AD UC、列出域里的电脑(新加入的 Windows 机器应该出现在列表中),以及创建一个组织单元或组,来验证 RSAT 是否工作正常。
在 Samba4 服务器上使用 `wbinfo` 命令来检查用户和组是否已经创建成功。
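例如,可以用下面两条命令分别列出域用户和域组(假设 winbind 客户端工具已随 Samba4 一同安装):

```
# 列出域用户
wbinfo -u
# 列出域组
wbinfo -g
```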
[
![Active Directory Users and Computers](http://www.tecmint.com/wp-content/uploads/2016/12/Active-Directory-Users-and-Computers.png)
][35]
*活动目录用户和计算机*
[
![Create Organizational Units and New Users](http://www.tecmint.com/wp-content/uploads/2016/12/Create-Organizational-Unit-and-Users.png)
][36]
*创建组织单元和新用户*
[
![Confirm Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/12/Confirm-Samba4-AD-Users.png)
][37]
*确认 Samba4 AD 用户*
就这些吧!该主题的下一篇文章将包含其它与 Samba4 活动目录相关的重要内容,比如:如何通过 RSAT 工具管理 Samba4 的 DNS 服务器、添加 DNS 记录和创建 DNS 反向解析区域,如何管理及应用域策略,以及如何为域用户创建交互式登录提示信息。
------
作者简介:我是一个电脑迷,开源软件及 Linux 系统爱好者有近4年的 Linux 桌面和服务器系统及 bash 编程经验。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
作者:[Matei Cezar][a]
译者:[rusking](https://github.com/rusking)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://linux.cn/article-8065-1.html
[2]:https://linux.cn/article-8070-1.html
[3]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/
[4]:https://www.microsoft.com/en-us/download/details.aspx?id=45520
[5]:http://www.microsoft.com/en-us/download/details.aspx?id=39296
[6]:http://www.microsoft.com/en-us/download/details.aspx?id=28972
[7]:http://www.microsoft.com/en-us/download/details.aspx?id=7887
[8]:http://www.tecmint.com/category/samba4-active-directory/
[9]:http://www.tecmint.com/how-to-synchronize-time-with-ntp-server-in-ubuntu-linux-mint-xubuntu-debian/
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Install-NTP-on-Ubuntu.png
[11]:http://www.pool.ntp.org/en/
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-NTP-Server-in-Ubuntu.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Sync-AD-with-NTP.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Query-Client-to-NTP-Server.png
[15]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
[16]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Grant-Permission-to-NTP.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Monitor-NTP-Server-Pool.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Time-Synchronization-Error.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Server-Dropped-Leap-Not-Sync.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Force-NTP-Time-Synchronization.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/Verify-NTP-Time-Synchronization.png
[23]:https://linux.cn/article-8070-1.html
[24]:http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-to-Samba4-AD.png
[25]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-and-Samba4-AD-IP-Address.png
[26]:http://www.tecmint.com/wp-content/uploads/2016/12/Check-Samba4-AD-from-Windows.png
[27]:http://www.tecmint.com/wp-content/uploads/2016/12/Synchronize-Time-with-Internet-Server.png
[28]:http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-Domain-to-Samba4-AD.png
[29]:http://www.tecmint.com/wp-content/uploads/2016/12/Enter-Domain-Administration-Login.png
[30]:http://www.tecmint.com/wp-content/uploads/2016/12/Domain-Joined-to-Samba4-AD.png
[31]:http://www.tecmint.com/wp-content/uploads/2016/12/Restart-Windows-Server-for-Changes.png
[32]:http://www.tecmint.com/wp-content/uploads/2016/12/Login-to-Windows-Using-Samba4-AD-Account.png
[33]:http://www.tecmint.com/wp-content/uploads/2016/12/Administer-Samba4-AD-from-Windows.png
[34]:http://www.tecmint.com/wp-content/uploads/2016/12/Access-Remote-Server-Administration-Tools.png
[35]:http://www.tecmint.com/wp-content/uploads/2016/12/Active-Directory-Users-and-Computers.png
[36]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-Organizational-Unit-and-Users.png
[37]:http://www.tecmint.com/wp-content/uploads/2016/12/Confirm-Samba4-AD-Users.png
[38]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[39]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[40]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[41]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[42]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#comments

View File

@ -0,0 +1,121 @@
如何在 Linux 中找出最近或今天被修改的文件
============================================================
在本文中,我们将解释两个简单的[命令行小技巧][5],它可以帮你只列出所有的今天的文件。
Linux 用户在命令行上遇到的常见问题之一是[定位具有特定名称的文件][6],如果你知道确定的文件名则可能会容易得多。
不过,假设你忘记了白天早些时候创建的文件的名称(在你包含了数百个文件的 `home` 文件夹中),但现在你有急用。
下面用不同的方式只[列出所有你今天创建或修改的文件][7](直接或间接)。
1、 使用 [ls 命令][8],只列出你的 home 文件夹中今天的文件。
```
# ls -al --time-style=+%D | grep "$(date +%D)"
```
其中:
- `-a` - 列出所有文件,包括隐藏文件
- `-l` - 启用长列表格式
- `--time-style=FORMAT` - 显示指定 FORMAT 的时间
- `+%D` - 以 `%m/%d/%y`(月/日/年)的格式显示日期
[
![Find Recent Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/12/Find-Recent-Files-in-Linux.png)
][9]
*在Linux中找出最近的文件*
此外,你可以使用 `-X` 标志来[按字母顺序对结果排序][10]
```
# ls -alX --time-style=+%D | grep "$(date +%D)"
```
你也可以使用 `-S` 标志来基于大小(由大到小)来排序:
```
# ls -alS --time-style=+%D | grep "$(date +%D)"
```
2、 另外,使用 [find 命令][11]会更灵活,并且提供比 `ls` 更多的选项,可以实现相同的目的。
-  `-maxdepth` 用于指定在搜索起点(在这个例子中为当前目录)之下搜索的目录层级数。
-  `-newerXY`:用于匹配时间戳 `X` 比参照 `Y` 更新的文件。 `X` 和 `Y` 可以是以下任何字母:
     - `a` - 参照文件的访问时间
     - `B` - 参照文件的创建时间
     - `c` - 参照文件的 inode 状态改变时间
     - `m` - 参照文件的修改时间
     - `t` - 直接指定一个绝对时间
下面的命令意思是只找出 2016-12-06 这一天修改的文件:
```
# find . -maxdepth 1 -newermt "2016-12-06"
```
[
![Find Today's Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Files-in-Linux.png)
][12]
*在 Linux 中找出今天的文件*
重要:在上面的 [find 命令][13]中使用正确的**日期格式**作为参照时间,一旦你使用了错误的格式,你会得到如下错误:
```
# find . -maxdepth 1 -newermt "12-06-2016"
find: I cannot figure out how to interpret '12-06-2016' as a date or time
```
或者,使用下面的正确格式:
```
# find . -maxdepth 1 -newermt "12/06/2016"
或者
# find . -maxdepth 1 -newermt "12/06/16"
```
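如果不想每天手动修改日期,也可以借助 `date` 命令让参照时间始终是“今天”(这里沿用上面 月/日/年 的格式,仅为示例):

```
# find . -maxdepth 1 -newermt "$(date +%m/%d/%Y)"
```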
[
![Find Todays Modified Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Modified-Files.png)
][14]
*在 Linux 中找出今天修改的文件*
你可以在我们下面的一系列文章中获得 `ls` 和 `find` 命令的更多使用信息。
- [用 15 个例子掌握 Linux ls 命令][1]
- [对 Linux 用户有用的 7 个奇特技巧][2]
- [用 35 个例子掌握 Linux find 命令][3]
- [在 Linux 中使用扩展查找多个文件名的方法][4]
在本文中,我们解释了如何使用 ls 和 find 命令帮助只列出今天的文件。 请使用以下反馈栏向我们发送有关该主题的任何问题或意见。 你也可以提醒我们其他可以用于这个目的的命令。
--------------------------------------------------------------------------------
作者简介Aaron Kili是一名 Linux 和 F.O.S.S 的爱好者,未来的 Linux 系统管理员、网站开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并乐于分享知识。
------------------
via: http://www.tecmint.com/find-recent-modified-files-in-linux/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[2]:http://www.tecmint.com/linux-ls-command-tricks/
[3]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[4]:http://www.tecmint.com/linux-find-command-to-search-multiple-filenames-extensions/
[5]:http://www.tecmint.com/tag/linux-tricks/
[6]:http://www.tecmint.com/linux-find-command-to-search-multiple-filenames-extensions/
[7]:http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time/
[8]:http://www.tecmint.com/tag/linux-ls-command/
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-Recent-Files-in-Linux.png
[10]:http://www.tecmint.com/sort-command-linux/
[11]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Files-in-Linux.png
[13]:http://www.tecmint.com/find-directory-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-Todays-Modified-Files.png

View File

@ -0,0 +1,190 @@
九款开源或商业的数据中心基础设施管理软件
============================================================
当一个公司发展壮大时,相应地对计算资源的需求也会与日俱增。无论是普通公司还是服务提供商,包括那些出租服务器的公司,当服务器数量过多时都不得不面对很多问题。
如何盘存服务器和备件?如何维护使数据中心保持健康运作,及时定位并修复潜在的威胁?如何快速找到宕机设备的机架位置?如何准备物理机上线工作?做完这些事情需要花费大量的时间,或者需要 IT 部门有一大帮管理员支持才能办到。
现在有一个更好的方案解决这些问题,使用特定软件来实现数据中心管理自动化,下文将介绍当前市场上已有的一些数据中心管理工具。
### 1、 Opendcim
这是该类目前唯一的自由软件,该软件开源并且按照商业化<ruby>数据中心基础设施管理<rt>Data Center Infrastructure Management</rt></ruby>DCIM解决方案的替代方案来设计。该软件可以管理库存、生成数据中心的地图和监控机房温度与电力消耗。
不过,它不支持远程关机、服务器重启、操作系统安装等功能。尽管如此,它仍然被全球很多非商业机构使用。
由于该软件开源,有研发能力的公司可以修改它,使 [opendcim][2] 更适合自己的公司。
[
![openDCIM](http://www.tecmint.com/wp-content/uploads/2016/12/openDCIM.png)
][3]
*openDCIM*
### 2、 NOC-PS
这是一款可以管理物理和虚拟设备的商业软件。它有很多可以用于初始化设备的工具,比如:操作系统和其他软件安装、网络配置,并且集成了 WHMCS 和 Blesta。美中不足的是如果你希望能够看到数据中心设备地图或者机架位置那该软件就不是你的最佳选择了。
[NOC-PS][4] 每 100 台服务器每年管理费需要 100€比较适合中小企业使用。
[
![NOC-PS](http://www.tecmint.com/wp-content/uploads/2016/12/NOC-PS.png)
][5]
*NOC-PS*
### 3、 DCImanager
[DCImanager][6] 是一个专用的解决方案,正如宣传所说的,考虑了 DC 工程师和托管服务提供商的需求。该软件集成了很多有名的计费软件,比如 WHMCS、Hostbill、BILLmanager 等。
该软件的主要功能有服务器配置、模板化安装操作系统、传感器监控、流量和电力消耗报告、VLAN 管理。除此之外,企业版还可以生成数据中心服务器地图、以及对服务器和备件进行盘点管理。
你可以试用免费版,但是免费版最多支持 5 台物理服务器管理,而收费版每 100 台服务器每年的授权使用费是 120€。
根据版本不同,收费版可适用中小企业或者大企业。
[
![DCImanager](http://www.tecmint.com/wp-content/uploads/2016/12/DCImanager.png)
][8]
*DCImanager*
### 4、 EasyDCIM
[EasyDCIM][9] 是一款主要面向服务提供商的收费软件。拥有可以安装操作系统或其他软件的特点,并且能方便地生成机房目录及机架分布图。
该软件本身并不支持对 IP 和 DNS 进行管理。不过可以通过安装模块的方式获得这些功能,这些模块可能付费或者免费(包括 WHMCS 集成模块)。
该软件每 100 台服务器每年的服务费起步价 $999。对于小公司来说这个价格有点贵不过中型或者大型企业可以尝试使用。
[
![EasyDCIM](http://www.tecmint.com/wp-content/uploads/2016/12/EasyDCIM.png)
][10]
*EasyDCIM*
### 5、 Ansible Tower
[Ansible Tower][11] 是红帽出品的企业级计算中心管理软件。该解决方案的核心思想是实现对服务器和不同用户设备的集中式部署。
**Ansible Tower** 能够通过集成的方式使用几乎所有的工具程序,而且该软件的数据统计收集模块特别好用。不足之处则是缺乏和当前比较流行的计费软件的集成,而且价格也不便宜。
每 100 台设备每年的服务费是 $5000,这个价格估计只有大公司才能接受。
[
![Ansible Tower](http://www.tecmint.com/wp-content/uploads/2016/12/Ansible_Tower.png)
][12]
*Ansible Tower*
### 6、 Puppet Enterprise
该软件在商业基础上发展而来,定位为 IT 部门的辅助软件。它用于在服务器或者用户设备上安装操作系统及其他软件,无论是初步部署还是进一步开发都适用。
不幸的是,盘存和其他更好的交互方案(电缆连接、协议等)仍然处于开发中。
[Puppet Enterprise][13] 对于小于 10 台服务器的管理免费并且开放全部功能。而收费版则是每台服务器每年 $120。
这个价格适合大公司使用。
[
![Puppet Enterprise](http://www.tecmint.com/wp-content/uploads/2016/12/Puppet-Enterprise.png)
][14]
*Puppet Enterprise*
### 7、 Device 42
该软件主要用于数据中心监控。有一个很棒的盘存工具,自动创建软硬件依赖关系图。通过 [Device 42][15] 生成数据中心地图,给不同机架标特定颜色,并可以通过图表方式反映温度、空闲空间情况和机架的其他指标。但是不支持软件安装和计费软件的集成。
每 100 台服务器每年的收费是 $1499这个价位比较适合大中型企业。
[
![Device42](http://www.tecmint.com/wp-content/uploads/2016/12/Device42.png)
][16]
*Device42*
### 8、 CenterOS
这是一款适合数据中心管理的操作系统,主要功能是设备盘点。除此之外可以生成数据中心地图及机架方案,并连接了一个评价不错的服务器状态监控系统,方便内部技术管理工作。
该软件还有一个特性:只需简单的几次点击,就可以找到某个设备对应的人(可能是设备所有人、技术管理员或者该设备的制造商),当出现紧急问题时这个功能就特别有用了。
**建议阅读:** [8 Open Source/Commercial Billing Platforms for Hosting Providers][17]
该软件不是开源的,并且价格也只能在咨询后才能知道。
该软件价格的神秘性也决定了软件的目标客户,极有可能这个软件是给大公司用的。
[
![CenterOS](http://www.tecmint.com/wp-content/uploads/2016/12/CenterOS.png)
][19]
*CenterOS*
### 9、 LinMin
这是一款用于初始化物理设备以便后期使用的软件。使用 PXE 安装选定的操作系统,并可随后部署一系列必要的软件安装。
与同类软件不同的是,[LinMin][20] 有一个开发完善的硬盘备份系统,可以迅速在系统崩溃后恢复以及大规模部署相同配置的服务器。
该软件每 100 台服务器一年的收费是 $1999这价格也只有大中型企业能用了。
[
![LinMin](http://www.tecmint.com/wp-content/uploads/2016/12/LinMin.jpg)
][21]
*LinMin*
现在来总结下,当前市场上大部分能够自动化管理大量的基础设施的软件,可以分为两类。
第一类,主要用于完成设备的准备工作,以便能够进一步管理。另一类就是设备的盘点管理。找到一个通用的包含所有功能的软件并不容易,你在选择的时候可以放弃一些设备提供商提供的那些功能比较有限的工具。
现在你知道了这些解决方案,那么你可以逐个尝试下。值得注意的是,对于这里列出的开源产品,如果你有好的开发人员,可以尝试定制软件来满足你的需求。
希望通过这篇回顾能够帮你找到适合的软件让你的工作更轻松。另,祝您的服务器永不出错。
-----------------------------------
作者简介:
![](http://1.gravatar.com/avatar/ae5edcc20865ae20859fb566c796b97a?s=128&d=blank&r=g)
Nikita Nesmiyanov
我是一名俄罗斯西伯利亚托管软件开发公司的技术专家。我希望能够在新的 Linux 软件工具和托管行业的发展趋势、可能性、发展历史和发展机遇等方面拓展我的知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/data-center-server-management-tools/
作者:[Nikita Nesmiyanov][a]
译者:[beyondworld](https://github.com/beyondworld)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/nesmiyanov/
[1]:http://www.tecmint.com/web-control-panels-to-manage-linux-servers/
[2]:http://opendcim.org/
[3]:http://www.tecmint.com/wp-content/uploads/2016/12/openDCIM.png
[4]:http://noc-ps.com/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/NOC-PS.png
[6]:https://www.ispsystem.com/software/dcimanager
[7]:http://www.tecmint.com/opensource-commercial-control-panels-manage-virtual-machines/
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/DCImanager.png
[9]:https://www.easydcim.com/
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/EasyDCIM.png
[11]:https://www.ansible.com/
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Ansible_Tower.png
[13]:https://puppet.com/
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Puppet-Enterprise.png
[15]:http://www.device42.com/
[16]:http://www.tecmint.com/wp-content/uploads/2016/12/Device42.png
[17]:http://www.tecmint.com/open-source-commercial-billing-software-system-web-hosting/
[18]:http://www.centeros.com/
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/CenterOS.png
[20]:http://www.linmin.com/
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/LinMin.jpg

View File

@ -0,0 +1,210 @@
如何在 Ubuntu 上搭建一台 Email 服务器(三)
============================================================
![Mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-server.jpg?itok=Ox1SCDsV "Mail server")
在本系列的最后,我们将详细介绍如何在 Dovecot 和 Postfix 中设置虚拟用户和邮箱。
欢迎回来,热心的 Linux 系统管理员们! 在本系列的[第一部分][3]和[第二部分][4]中,我们学习了如何将 Postfix 和 Dovecot 组合在一起,搭建一个不错的 IMAP 和 POP3 邮件服务器。 现在我们将学习设置虚拟用户,以便我们可以管理所有 Dovecot 中的用户。
### 抱歉,还不能配置 SSL
我知道我答应过教你们如何设置一个受 SSL 保护的服务器。 不幸的是,我低估了这个话题的范围。 所以,我会下个月再写一个全面的教程。
今天,在本系列的最后一部分中,我们将详细介绍如何在 Dovecot 和 Postfix 中设置虚拟用户和邮箱。 在你看来这是有点奇怪,所以我尽量让下面的例子简单点。我们将使用纯文本文件和纯文本来进行身份验证。 你也可以选择使用数据库后端和较强的加密认证形式,具体请参阅文末链接了解有关这些的更多信息。
### 虚拟用户
我们希望邮件服务器上用的是虚拟用户而不是 Linux 系统用户。使用 Linux 系统用户不利于扩展,并且它们会暴露系统登录账号,给你的服务器带来不必要的风险。设置虚拟用户需要在 Postfix 和 Dovecot 中编辑配置文件。我们将从 Postfix 开始。首先,我们将从一个干净、简化的 `/etc/postfix/main.cf` 开始。移动你原始的 `main.cf` 到别处做个备份,创建一个新的干净的文件,内容如下:
```
compatibility_level=2
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu/GNU)
biff = no
append_dot_mydomain = no
myhostname = localhost
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = $myhostname
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.0.0/24
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
virtual_mailbox_domains = /etc/postfix/vhosts.txt
virtual_mailbox_base = /home/vmail
virtual_mailbox_maps = hash:/etc/postfix/vmaps.txt
virtual_minimum_uid = 1000
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
virtual_transport = lmtp:unix:private/dovecot-lmtp
```
你可以直接拷贝这份文件,但要注意 `mynetworks` 参数中的 `192.168.0.0/24` 应该换成你自己的本地子网。
接下来,创建用户和组 `vmail` 来拥有你的虚拟邮箱。虚拟邮箱保存在  `vmail` 的家目录下。
```
$ sudo groupadd -g 5000 vmail
$ sudo useradd -m -u 5000 -g 5000 -s /bin/bash vmail
```
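注意,上面的 `main.cf` 还引用了 `/etc/postfix/vhosts.txt`(虚拟域列表)和 `/etc/postfix/vmaps.txt`(邮箱地址到邮箱目录的映射)。如果你还没有创建它们,可以参照下面的示意(其中的域名 `studio` 和用户请换成你自己的;`hash:` 类型的映射表需要用 `postmap` 生成对应的 `.db` 文件):
```
$ cat /etc/postfix/vhosts.txt
studio
$ cat /etc/postfix/vmaps.txt
alrac@studio  studio/alrac/
benny@studio  studio/benny/
$ sudo postmap /etc/postfix/vmaps.txt
```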
接下来重新加载 Postfix 配置:
```
$ sudo postfix reload
[sudo] password for carla:
postfix/postfix-script: refreshing the Postfix mail system
```
### Dovecot 虚拟用户
我们会使用 Dovecot 的 `lmtp` 协议来连接到 Postfix。你可以这样安装
```
$ sudo apt-get install dovecot-lmtpd
```
`main.cf` 的最后一行涉及到 `lmtp`。复制这个 `/etc/dovecot/dovecot.conf` 示例文件来替换已存在的文件。再说一次,我们只使用这一个文件,而不是 `/etc/dovecot/conf.d` 内的所有文件。
```
protocols = imap pop3 lmtp
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot-info.log
ssl = no
disable_plaintext_auth = no
mail_location = maildir:~/.Mail
pop3_uidl_format = %g
auth_verbose = yes
auth_mechanisms = plain
passdb {
driver = passwd-file
args = /etc/dovecot/passwd
}
userdb {
driver = static
args = uid=vmail gid=vmail home=/home/vmail/studio/%u
}
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
}
}
protocol lmtp {
postmaster_address = postmaster@studio
}
service lmtp {
user = vmail
}
```
最后,你可以创建一个含有用户和密码的文件 `/etc/dovecot/passwd`。对于纯文本验证,我们只需要用户的完整邮箱地址和密码:
```
alrac@studio:{PLAIN}password
layla@studio:{PLAIN}password
fred@studio:{PLAIN}password
molly@studio:{PLAIN}password
benny@studio:{PLAIN}password
```
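顺带一提,如果以后想改用散列密码而不是明文,可以用 Dovecot 自带的 `doveadm pw` 来生成(下面只是示意,把生成的散列整体替换掉 passwd 文件里相应的 `{PLAIN}password` 部分即可):
```
$ doveadm pw -s SHA512-CRYPT
Enter new password:
Retype new password:
{SHA512-CRYPT}$6$...
```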
Dovecot 虚拟用户独立于 Postfix 虚拟用户,因此你需要管理 Dovecot 中的用户。保存所有的设置并重启 Postfix 和 Dovecot
```
$ sudo service postfix restart
$ sudo service dovecot restart
```
现在让我们使用老朋友 telnet 来看下 Dovecot 是否设置正确。
```
$ telnet studio 110
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
+OK Dovecot ready.
user molly@studio
+OK
pass password
+OK Logged in.
quit
+OK Logging out.
Connection closed by foreign host.
```
现在一切都好!让我们用 `mail` 命令,发送测试消息给我们的用户。确保使用用户的完整电子邮箱地址而不只是用户名。
```
$ mail benny@studio
Subject: hello and welcome!
Please enjoy your new mail account!
.
```
最后一行的**英文句点**表示发送消息。让我们看下它是否到达了正确的邮箱。
```
$ sudo ls -al /home/vmail/studio/benny@studio/.Mail/new
total 16
drwx------ 2 vmail vmail 4096 Dec 14 12:39 .
drwx------ 5 vmail vmail 4096 Dec 14 12:39 ..
-rw------- 1 vmail vmail 525 Dec 14 12:39 1481747995.M696591P5790.studio,S=525,W=540
```
找到了。这是一封我们可以阅读的纯文本文件:
```
$ less 1481747995.M696591P5790.studio,S=525,W=540
Return-Path: <carla@localhost>
Delivered-To: benny@studio
Received: from localhost
by studio (Dovecot) with LMTP id V01ZKRuuUVieFgAABiesew
for <benny@studio>; Wed, 14 Dec 2016 12:39:55 -0800
Received: by localhost (Postfix, from userid 1000)
id 9FD9CA1F58; Wed, 14 Dec 2016 12:39:55 -0800 (PST)
Date: Wed, 14 Dec 2016 12:39:55 -0800
To: benny@studio
Subject: hello and welcome!
User-Agent: s-nail v14.8.6
Message-Id: <20161214203955.9FD9CA1F58@localhost>
From: carla@localhost (carla)
Please enjoy your new mail account!
```
你还可以使用 telnet 进行测试,如本系列前面部分所述,并在你最喜欢的邮件客户端中设置帐户,如 Thunderbird、Claws-Mail 或 KMail。
### 故障排查
当邮件工作不正常时,请检查日志文件(请参阅配置示例),然后运行 `journalctl -xe`。 这时会提供定位输入错误、未安装包和可以 Google 的短语等所有需要的信息。
### 接下来?
假设你的 LAN 名称服务配置正确,你现在有一台很好用的 LAN 邮件服务器。显然,以纯文本发送消息并不理想,不能支持互联网邮件也是绝对不行的。请参阅 [Dovecot SSL 配置][5] 和 [Postfix TLS 支持][6][VirtualUserFlatFilesPostfix][7] 则涵盖了 TLS 和数据库后端的内容。并请期待我之后的 SSL 指南。这次我说的是真的。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-3
作者:[CARLA SCHRODER][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/mail-serverjpg
[3]:https://linux.cn/article-8071-1.html
[4]:https://linux.cn/article-8077-1.html
[5]:http://wiki.dovecot.org/SSL/DovecotConfiguration
[6]:http://www.postfix.org/TLS_README.html
[7]:http://wiki2.dovecot.org/HowTo/VirtualUserFlatFilesPostfix

View File

@ -0,0 +1,161 @@
sshpass一个很棒的免交互 SSH 登录工具,但不要用在生产服务器上
============================================================
在大多数情况下Linux 系统管理员使用 SSH 登录到远程 Linux 服务器时,要么是通过密码,要么是[无密码 SSH 登录][1]或基于密钥的 SSH 身份验证。
如果你想自动在 SSH 登录提示符中提供**密码**和**用户名**怎么办?这时 **sshpass** 就可以帮到你了。
sshpass 是一个简单、轻量级的命令行工具,通过它我们能够向命令提示符本身提供密码(非交互式密码验证),这样就可以通过 [cron 调度器][2]执行自动化的 shell 脚本进行备份。
ssh 会直接使用 TTY 访问,以确保密码是由用户从键盘交互输入的。sshpass 则在一个专门的 tty 中运行 ssh让 ssh 误以为密码是由交互用户输入的。
重要:使用 **sshpass** 是最不安全的,因为所有系统上的用户在命令行中通过简单的 “**ps**” 命令就可看到密码。因此,如果必要,比如说在生产环境,我强烈建议使用 [SSH 无密码身份验证][3]。
### 在 Linux 中安装 sshpass
在基于 **RedHat/CentOS** 的系统中,首先需要[启用 Epel 仓库][4]并使用 [yum 命令][5]安装它。
```
# yum install sshpass
# dnf install sshpass [Fedora 22 及以上版本]
```
在 Debian/Ubuntu 和它的衍生版中,你可以使用 [apt-get 命令][6]来安装。
```
$ sudo apt-get install sshpass
```
另外,你也可以从最新的源码安装 `sshpass`,首先下载源码并从 tar 文件中解压出内容:
```
$ wget http://sourceforge.net/projects/sshpass/files/latest/download -O sshpass.tar.gz
$ tar -xvf sshpass.tar.gz
$ cd sshpass-1.06
$ ./configure
$ sudo make install
```
### 如何在 Linux 中使用 sshpass
**sshpass** 与 **ssh** 一起使用,使用下面的命令可以查看 `sshpass` 的使用选项的完整描述:
```
$ sshpass -h
```
下面为显示的 sshpass 帮助内容:
```
Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters
-f filename Take password to use from file
-d number Use number as file descriptor for getting password
-p password Provide password as argument (security unwise)
-e Password is passed as env-var "SSHPASS"
With no parameters - password will be taken from stdin
-h Show help (this screen)
-V Print version information
At most one of -f, -d, -p or -e should be used
```
正如我之前提到的,**sshpass** 在用于脚本时才更可靠及更有用,请看下面的示例命令。
使用用户名和密码登录到远程 Linux ssh 服务器10.42.0.1),并[检查文件系统磁盘使用情况][7],如图所示。
```
$ sshpass -p 'my_pass_here' ssh aaronkilik@10.42.0.1 'df -h'
```
**重要提示**:此处,在命令行中提供了密码,这是不安全的,不建议使用此选项。
[
![sshpass - Linux Remote Login via SSH](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Linux-Remote-Login.png)
][8]
*sshpass 使用 SSH 远程登录 Linux*
但是,为了防止在屏幕上显示密码,可以使用 `-e` 标志,并将密码作为 SSHPASS 环境变量的值输入,如下所示:
```
$ export SSHPASS='my_pass_here'
$ echo $SSHPASS
$ sshpass -e ssh aaronkilik@10.42.0.1 'df -h'
```
[
![sshpass - Hide Password in Prompt](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Hide-Password-in-Prompt.png)
][9]
*sshpass 在终端中隐藏密码*
**注意:**在上面的示例中,`SSHPASS` 环境变量仅用于临时目的,并将在重新启动后删除。
要永久设置 `SSHPASS` 环境变量,打开 `/etc/profile` 文件,并在文件开头输入 `export` 语句:
```
export SSHPASS='my_pass_here'
```
保存文件并退出,接着运行下面的命令使更改生效:
```
$ source /etc/profile
```
另外,也可以使用 `-f` 标志,并把密码放在一个文件中。 这样,您可以从文件中读取密码,如下所示:
```
$ sshpass -f password_filename ssh aaronkilik@10.42.0.1 'df -h'
```
[
![sshpass - Supply Password File to Login](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Provide-Password-File.png)
][10]
*sshpass 在登录时提供密码文件*
你也可以使用 `sshpass` [通过 scp 传输文件][11]或者 [rsync 备份/同步文件][12],如下所示:
```
------- Transfer Files Using SCP -------
$ scp -r /var/www/html/example.com --rsh="sshpass -p 'my_pass_here' ssh -l aaronkilik" 10.42.0.1:/var/www/html
------- Backup or Sync Files Using Rsync -------
$ rsync --rsh="sshpass -p 'my_pass_here' ssh -l aaronkilik" 10.42.0.1:/data/backup/ /backup/
```
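结合前文提到的 cron 调度器,一个最简单的定时备份脚本大致可以写成下面这样(仅为示意,其中的路径、IP 和密码文件请按实际情况替换,并务必把密码文件权限限制为仅属主可读):
```
#!/bin/bash
# 示意:定时把远程目录同步到本地备份目录
# 密码保存在 /root/.sshpass 中chmod 600 /root/.sshpass
PASSFILE=/root/.sshpass
rsync --rsh="sshpass -f $PASSFILE ssh -l aaronkilik" 10.42.0.1:/data/backup/ /backup/
```
再配合一条 crontab 条目(例如 `0 2 * * * /path/to/backup.sh`)即可实现无人值守的自动备份。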
更多的用法,建议阅读 `sshpass` 的 man 页面,输入:
```
$ man sshpass
```
在本文中,我们解释了 `sshpass` 是一个非交互式密码验证的简单工具。 虽然这个工具可能是有帮助的,但还是强烈建议使用更安全的 ssh 公钥认证机制。
请在下面的评论栏写下任何问题或评论,以便可以进一步讨论。
--------------------------------------------------------------------------------
作者简介Aaron Kili 是一位 Linux 和 F.O.S.S 爱好者,未来的 Linux 系统管理员web 开发人员, 还是 TecMint 原创作者,热爱电脑工作,并乐于分享知识。
-----------
via: http://www.tecmint.com/sshpass-non-interactive-ssh-login-shell-script-ssh-password/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
[3]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[4]:https://linux.cn/article-2324-1.html
[5]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[6]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[7]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Linux-Remote-Login.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Hide-Password-in-Prompt.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Provide-Password-File.png
[11]:http://www.tecmint.com/scp-commands-examples/
[12]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/

View File

@ -0,0 +1,293 @@
完全指南之在 Ubuntu 操作系统中安装及卸载软件
============================================================
![Complete guide for installing and removing applications in Ubuntu](https://itsfoss.com/wp-content/uploads/2016/12/Managing-Software-in-Ubuntu-1.jpg)
摘要:这篇文章详尽地说明了在 Ubuntu Linux 系统中安装及卸载软件的各种方法。
当你从 Windows 系统[转向 Linux 系统][14]的时候,刚开始的体验绝对是非比寻常的。在 Ubuntu 系统下就连最基本的事情,比如安装个应用程序都会让(刚从 Windows 世界来的)人感到无比困惑。
但是你也不用太担心。因为 Linux 系统提供了各种各样的方法来完成同一个任务,刚开始你感到困惑那也是正常的。你并不孤单,我们大家都是这么经历过来的。
在这篇初学者指南中,我将会教大家在 Ubuntu 系统里以最常用的方式来安装软件,以及如何卸载之前已安装的软件。
关于在 Ubuntu 上应使用哪种方法来安装软件,我也会提出自己的建议。请用心学习。这篇文章写得很长也很详细,你从中绝对能够学到东西。
### 在 Ubuntu 系统中安装和卸载软件
在这篇教程中我使用的是运行着 Unity 桌面环境的 Ubuntu 16.04 版本的系统。除了一些截图外,这篇教程也同样适用于其它版本的 Ubuntu 系统。
### 1.1 使用 Ubuntu 软件中心来安装软件 【推荐使用】
在 Ubuntu 系统中查找和安装软件最简单便捷的方法是使用 Ubuntu 软件中心。在 Ubuntu Unity 桌面里,你可以在 Dash 下搜索 Ubuntu 软件中心,然后选中打开即可:
[
![Run Ubuntu Software Center](https://itsfoss.com/wp-content/uploads/2016/12/Ubuntu-Software-Center.png)
][15]
你可以把 Ubuntu 软件中心想像成 Google 的 Play 商店或者是苹果的 App 商店。它包含 Ubuntu 系统下所有可用的软件。你可以通过应用程序的名称来搜索应用程序或者是通过浏览各种软件目录来进行查找软件。你还可以根据作者进行查询。这由你自己来选择。
![Installing software in Ubuntu using Ubuntu Software Center](https://itsfoss.com/wp-content/uploads/2016/12/install-software-Ubuntu-linux.jpeg)
一旦你找到自己想要的应用程序,选中它。软件中心将打开该应用程序的描述页面。你可以阅读关于这款软件的说明,评分等级和用户的评论。如果你愿意,也可以写一条评论。
一旦你确定想安装这款软件,你可以点击安装按钮来安装已选择的应用程序。在 Ubuntu 系统中,你需要输入 root 账号的密码才能安装该应用程序。
[
![Installing software in Ubuntu: The easy way](https://itsfoss.com/wp-content/uploads/2016/12/install-software-Ubuntu-linux-1.jpg)
][16]
还有什么比这更简单的吗?我觉得应该没有了吧!
提示:正如我[在 Ubuntu 16.04 系统安装完成后你需要做的事情][17]这篇文章提到的那样,你应该启用 Canonical 合作伙伴仓库。默认情况下Ubuntu 系统仅提供了那些源自自身软件库Ubuntu 认证)的软件。
但是还有一个 Canonical 合伙伙伴软件库它包含一些闭源专属软件Ubuntu 并不直接管控它。启用该仓库后将让你能够访问更多的软件。[在 Ubuntu 系统下安装 Skype 软件][18]就是通过那种方式安装完成的。
在 Unity Dash 中,找到“软件和更新”Software & Updates工具。
[
![Ubuntu Software Update Settings](https://itsfoss.com/wp-content/uploads/2014/08/Software_Update_Ubuntu.jpeg)
][19]
如下图,打开“其它软件”标签页,勾选 Canonical 合作伙伴选项。
[
![Enable Canonical partners in Ubuntu 14.04](https://itsfoss.com/wp-content/uploads/2014/04/Enable_Canonical_Partner.jpeg)
][20]
### 1.2 从 Ubuntu 软件中心卸载软件(推荐方式)
我们刚刚演示了如何在 Ubuntu 软件中心安装软件。那么如何使用同样的方法来卸载已安装的软件呢?
在 Ubuntu 软件中心卸载软件跟安装软件的步骤一样简单。
打开软件中心,然后点击“已安装”标签页。它将显示所有已安装的软件。或者,你也可以只搜索应用程序的名称。
要卸载 Ubuntu 系统中的应用程序,点击删除按钮即可。你同样需要输入 root 账号的密码。
[
![Uninstall software installed in Ubuntu](https://itsfoss.com/wp-content/uploads/2016/12/Uninstall-Software-Ubuntu.jpeg)
][22]
### 2.1 在 Ubuntu 系统中使用 .DEB 文件来安装软件
.deb 文件跟 Windows 下的 .exe 文件很相似。这是一种安装软件的简易方式。很多软件开发商都会提供 .deb 格式的安装包。
Google Chrome 浏览器就是这样的。你可以从其官网下载 .deb 安装文件:
[
![Downloading deb packaging](https://itsfoss.com/wp-content/uploads/2016/12/install-software-deb-package.png)
][23]
一旦你下载完成 .deb 安装文件之后,只需要双击运行即可。它将在 Ubuntu 软件中心打开,你就可以使用前面 1.1 节中同样的方式来安装软件。
或者,你也可以使用轻量级的安装程序 [在 Ubuntu 系统中使用 Gdebi 工具来安装 .deb 安装文件][24]。
软件安装完成后,你可以随意删除下载的 .deb 安装包。
提示:在使用 .deb 文件的过程中需要注意的一些问题:
* 确保你是从官网下载的 .deb 安装文件。仅使用官网或者 GitHub 上提供的软件包。
* 确保你下载的 .deb 文件与你的系统架构相符32 位或是 64 位)。请阅读我们写的快速指南:[如何查看你的 Ubuntu 系统是 32 位的还是 64 位的][8]
### 2.2 使用 .DEB 文件来删除已安装的软件
卸载 .deb 文件安装的软件跟我们在 1.2 节看到的步骤一样的。只需要打开 Ubuntu 软件中心,搜索应用程序名称,然后单击移除并卸载即可。
或者你也可以使用[新立得包管理器][25]。这也不是必须的,但是如果在 Ubuntu 软件中心找不到已安装的应用程序的情况下,就可以使用这个工具了。新立得软件包管理器会列出你系统里已安装的所有可用的软件。这是一个非常强大和有用的工具。
在 Ubuntu 软件中心被开发出来提供一种更友好的安装软件方式之前,新立得包管理器一直是 Ubuntu 系统中默认的安装和卸载软件的工具。
你可以单击下面的链接来安装新立得软件包管器(它将会在 Ubuntu 软件中心中打开)。
- [安装新立得包管理器][26]
打开新立得包管理器,然后找到你想卸载的软件。已安装的软件标记为绿色按钮。单击并选择“标记为删除”。然后单击“应用”来删除你所选择的软件。
[
![Using Synaptic to remove software in Ubuntu](https://itsfoss.com/wp-content/uploads/2016/12/uninstall-software-ubuntu-synaptic.jpeg)
][27]
### 3.1 在 Ubuntu 系统中使用 APT 命令来安装软件(推荐方式)
你应该看到过一些网站告诉你使用 `sudo apt-get install` 命令在 Ubuntu 系统下安装软件。
实际上这种命令行方式跟第 1 节中我们看到的安装方式一样,只是你没有使用 Ubuntu 软件中心来安装或卸载软件,而是使用命令行接口,别的没什么不同。
使用 `apt-get` 命令来安装软件超级简单。你只需要执行下面的命令:
```
sudo apt-get install package_name
```
上面使用 `sudo` 是为了获取“管理员”或 “root” Linux 专用术语)账号权限。你可以替换 package_name 为你想要安装的软件包名。
`apt-get` 命令可以自动补全,你只需要输入一些字符并按 tab 键即可, `apt-get` 命令将会列出所有与该字符相匹配的程序。
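例如,要安装 VLC 播放器,可以执行下面的命令(这里的包名只是举例,请以你系统中实际可用的软件包名为准):
```
sudo apt-get install vlc
```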
### 3.2 在 Ubuntu 系统下使用 APT 命令来卸载软件(推荐方式)
在命令行下,你可以很轻易的卸载 Ubuntu 软件中心安装的软件,以及使用 `apt` 命令或是使用 .deb 安装包安装的各种软件。
你只需要使用下面的命令,替换 package-name 为你想要删除的软件名。
```
sudo apt-get remove package_name
```
同样地,你也可以通过按 tab 键来利用 `apt-get` 命令的自动补全功能。
使用 `apt-get` 命令来安装或卸载软件并不算什么高深的技能,这实际上非常简便。通过这些简单命令的运用,你可以熟悉 Ubuntu Linux 系统的命令行操作,长期使用对你学习 Linux 系统的帮助也很大。建议你看下我写的一篇很详细的 [apt-get 命令使用指导][28]文章来进一步了解该命令的使用。
- 建议阅读:[Linux 系统下 apt-get 命令初学者完全指南][29]
### 4.1 使用 PPA 命令在 Ubuntu 系统下安装应用程序
PPA 是[个人软件包归档Personal Package Archive][30]的缩写。这是开发者为 Ubuntu 用户提供软件的另一种方式。
在第 1 节中出现了一个叫做 仓库repository 的术语。仓库本质上是一个软件集。 Ubuntu 官方仓库主要用于提供经过 Ubuntu 自己认证过的软件。 Canonical 合作伙伴仓库包含来自合作厂商提供的各种应用软件。
同时PPA 允许开发者创建自己的 APT 仓库。当用户在系统里添加了一个仓库时(`sources.list` 中增加了该仓库),用户就可以使用开发者自己的仓库里提供的软件了。
现在你也许要问既然我们已经有 Ubuntu 的官方仓库了,还有什么必要使用 PPA 方式呢?
答案是并不是所有的软件都会自动添加到 Ubuntu 的官方仓库中。只有受信任的软件才会添加到其中。假设你开发出一款很棒的 Linux 应用程序,然后你想为用户提供定期的更新,但是在它被添加到 Ubuntu 仓库之前,这需要花费好几个月的时间(如果是在被允许的情况下)。 PPA 的出现就是为了解决这个问题。
除此之外Ubuntu 官方仓库通常不会把最新版的软件添加进来,这是为了保证 Ubuntu 系统的安全性及稳定性:新版本的软件或许会带来影响系统的[软件回归][31]问题。这就是为什么新软件进入官方仓库前要花费一定的时间,有时候需要等待几个月。
但是,如果你不想等待最新版出现在 Ubuntu 仓库中呢?这个时候 PPA 就对你有帮助了。通过 PPA 方式,你可以获得该应用程序的最新版本。
通常情况下, PPA 通过这三个命令来进行使用。第一个命令添加 PPA 仓库到源列表中。第二个命令更新软件缓存列表,这样你的系统就可以获取到可用的新版本软件了。第三个命令用于从 PPA 安装软件。
我将演示使用 PPA 方式来安装 [Numix 主题][32]
```
sudo add-apt-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-gtk-theme numix-icon-theme-circle
```
在上面的实例中,我们添加了一个[Numix 项目][33]提供的 PPA 。在更新软件信息之后,我们安装了两个 Numix PPA 中可用的应用程序。
如果你想使用带有图形界面的应用程序,你可以使用 [Y-PPA 应用程序][34]。通过它你可以很方便地查询 PPA添加和删除软件。
注意PPA 的安全性经常受到争议。我的建议是你应该从受信任的源添加 PPA最好是从官方软件源添加。
### 4.2 卸载使用 PPA 方式安装的应用程序
在之前的文章[在 Ubuntu 系统下移除 PPA][35] 中我已经写得很详细了。你可以跳转到这篇文章去深入学习卸载 PPA 方式安装的软件。
这里简要提一下,你可以使用下面的两个命令来卸载:
```
sudo apt-get remove numix-gtk-theme numix-icon-theme-circle
sudo add-apt-repository --remove ppa:numix/ppa
```
第一个命令是卸载通过 PPA 方式安装的软件。第二个命令是从 `source.list` 中删除该 PPA。
### 5.1 在 Ubuntu Linux 系统中使用源代码来安装软件(不推荐使用)
我并不建议你使用[软件源代码][36]来安装该应用程序。这种方法很麻烦,容易出问题而且还非常地不方便。你得费尽周折去解决依赖包的问题。你还得保留源代码文件,以便将来卸载该应用程序。
但是还是有一些用户喜欢通过源代码编译的方式来安装软件,尽管他们自己本身并不会开发软件。实话告诉你,我曾经也经常使用这种方式来安装软件,不过那都是 5 年前的事了,那时候我还是一个实习生,我必须在 Ubuntu 系统下开发一款软件出来。但是,从那之后我更喜欢使用其它方式在 Ubuntu 系统中安装应用程序。我觉得,对于普通的 Linux 桌面用户,最好不要使用源代码的方式来安装软件。
在这一小节中我将简要地列出使用源代码方式来安装软件的几个步骤:
* 下载你想要安装软件的源代码。
* 解压下载的文件。
* 进入到解压目录里并找到 `README` 或者 `INSTALL` 文件。一款开发完善的软件都会包含这样的文件,用于提供安装或卸载软件的指导方法。
* 找到名为 `configure` 的配置文件。如果在当前目录下,使用这个命令来执行该文件:`./configure` 。它将会检查你的系统是否包含所有的必须的软件在软件术语中叫做依赖包来安装该应用程序。LCTT 译注:你可以先使用 `./configure --help` 来查看有哪些编译选项,包括安装的位置、可选的特性和模块等等。)注意并不是所有的软件都包括该配置文件,我觉得那些开发很糟糕的软件就没有这个配置文件。
* 如果配置文件执行结果提示你缺少依赖包,你得先安装它们。
* 一旦你安装完成所有的依赖包后,使用 `make` 命令来编译该应用程序。
* 编译完成后,执行 `sudo make install` 命令来安装该应用程序。
注意有一些软件包会提供一个安装软件的脚本文件,你只需要运行这个文件即可安装完成。但是大多数情况下,你可没那么幸运。
还有,使用这种方式安装的软件并不会像通过 Ubuntu 软件库、PPA 或者 .deb 方式安装的软件那样自动更新。
如果你坚持使用源代码方式来安装软件,我建议你看下这篇很详细的文章[在 Ubuntu 系统中使用源代码安装软件][37]。
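综合上面列出的步骤,一个典型的源代码安装流程大致如下(仅为示意,这里假设源码包名为 `app-1.0.tar.gz`,实际请以软件自带的 README 或 INSTALL 文件的说明为准):
```
tar -xzf app-1.0.tar.gz
cd app-1.0
./configure
make
sudo make install
```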
### 5.2 卸载使用源代码方式安装的软件(不推荐使用)
如果你觉得使用源代码安装软件的方式太难了,再想想看,当你卸载使用这种方式安装的软件将会更痛苦。
* 首先,你不能删除用于安装该软件的源代码。
* 其次,你必须确保在安装的时候也有对应的方式来卸载它。一款设计上很糟糕的应用程序就不会提供卸载软件的方法,因此你不得不手动去删除那个软件包安装的所有文件。
正常情况下,你应该切换到源代码的解压目录下,使用下面的命令来卸载那个应用程序:
```
sudo make uninstall
```
但是,这也不能保证你每次都会很顺利地卸载完成。
看到了吧,使用源代码方式来安装软件实在是太麻烦了。这就是为什么我不推荐大家在 Ubuntu 系统中使用源代码来安装软件的原因。
### 其它一些在 Ubuntu 系统中安装软件的方法
另外,还有一些在 Ubuntu 系统下并不常用的安装软件的方法。由于这篇文章已经写得够长了,我就不再深入探讨了。下面我将把它们列出来:
* Ubuntu 新推出的 [Snap 打包][9]方式
* 使用 [dpkg][10] 命令
* [AppImage][11] 方式
* [pip][12] : 用于安装基于 Python 语言的应用程序
### 你是如何在 UBUNTU 系统中安装软件的呢?
如果你一直都在使用 Ubuntu 系统,那么你在 Ubuntu Linux 系统下最喜欢使用什么方式来安装软件呢?你觉得这篇文章对你有用吗?请分享你的一些观点,建议和提出相关的问题。
--------------------
作者简介:
![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=70&d=mm&r=g)
我叫 Abhishek Prakash F.O.S.S 开发者。我的工作是一名专业的软件开发人员。我是一名狂热的 Linux 系统及开源软件爱好者。我使用 Ubuntu 系统,并且相信分享是一种美德。除了 Linux 系统之外,我喜欢经典的侦探神秘小说。我是 Agatha Christie 作品的真爱粉。
--------------------------------------------------------------------------------
via: https://itsfoss.com/remove-install-software-ubuntu/
作者:[ABHISHEK PRAKASH][a]
译者:[rusking](https://github.com/rusking)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/author/abhishek/
[2]:https://itsfoss.com/remove-install-software-ubuntu/#comments
[3]:http://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fremove-install-software-ubuntu%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[4]:https://twitter.com/share?original_referer=/&text=How+To+Install+And+Remove+Software+In+Ubuntu+%5BComplete+Guide%5D&url=https://itsfoss.com/remove-install-software-ubuntu/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_pc
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fremove-install-software-ubuntu%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fremove-install-software-ubuntu%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[7]:https://www.reddit.com/submit?url=https://itsfoss.com/remove-install-software-ubuntu/&title=How+To+Install+And+Remove+Software+In+Ubuntu+%5BComplete+Guide%5D
[8]:https://itsfoss.com/32-bit-64-bit-ubuntu/
[9]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[10]:https://help.ubuntu.com/lts/serverguide/dpkg.html
[11]:http://appimage.org/
[12]:https://pypi.python.org/pypi/pip
[13]:https://itsfoss.com/remove-install-software-ubuntu/managing-software-in-ubuntu-1/
[14]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[15]:https://itsfoss.com/wp-content/uploads/2016/12/Ubuntu-Software-Center.png
[16]:https://itsfoss.com/remove-install-software-ubuntu/install-software-ubuntu-linux-1/
[17]:https://itsfoss.com/things-to-do-after-installing-ubuntu-16-04/
[18]:https://itsfoss.com/install-skype-ubuntu-1404/
[19]:https://itsfoss.com/ubuntu-notify-updates-frequently/software_update_ubuntu/
[20]:https://itsfoss.com/things-to-do-after-installing-ubuntu-14-04/enable_canonical_partner/
[21]:https://itsfoss.com/essential-linux-applications/
[22]:https://itsfoss.com/remove-install-software-ubuntu/uninstall-software-ubuntu/
[23]:https://itsfoss.com/remove-install-software-ubuntu/install-software-deb-package/
[24]:https://itsfoss.com/gdebi-default-ubuntu-software-center/
[25]:http://www.nongnu.org/synaptic/
[26]:apt://synaptic
[27]:https://itsfoss.com/remove-install-software-ubuntu/uninstall-software-ubuntu-synaptic/
[28]:https://itsfoss.com/apt-get-linux-guide/
[29]:https://itsfoss.com/apt-get-linux-guide/
[30]:https://help.launchpad.net/Packaging/PPA
[31]:https://en.wikipedia.org/wiki/Software_regression
[32]:https://itsfoss.com/install-numix-ubuntu/
[33]:https://numixproject.org/
[34]:https://itsfoss.com/easily-manage-ppas-ubuntu-1310-ppa-manager/
[35]:https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[36]:https://en.wikipedia.org/wiki/Source_code
[37]:http://www.howtogeek.com/105413/how-to-compile-and-install-from-source-on-ubuntu/

View File

@ -0,0 +1,262 @@
你值得了解的 10 个有趣的 Linux 命令行小技巧
============================================================
我非常喜欢使用命令,因为它们比 GUI图形用户界面应用程序能提供对 Linux 系统更多的控制。因此,我一直在寻找一些有趣的方法[让 Linux 的操作变得简单好玩][1],主要是基于终端操作。
当我们发现[使用 Linux 的新技巧][2]时,尤其是像我这样的命令行极客,我们总会感到非常来劲。
**建议阅读:** [5 个有趣的 Linux 命令行技巧 - 第一部分][3]
而且我们也会很想与数百万 Linux 用户分享新学到的实践或命令,特别是那些还在使用自己的方式操作这个令人兴奋的操作系统的新手们。
**建议阅读:** [10 个对新手有用的 Linux 命令行技巧 - 第二部分][4]
在这篇文章中,我们将回顾一系列[有用的命令行小技巧][4],它们可以显著地提高你的 Linux 使用技能。
### 1、 在 Linux 中锁定或隐藏文件或目录
锁定文件或目录最简单的方法是使用 Linux 文件权限。如果你是文件或目录的所有者,你可以阻止其他用户和组访问(删除、读取、写入、执行)它,如下所示:
```
$ chmod 700 tecmint.info
$ chmod go-rwx tecmint.info
```
想要了解更多有关 Linux 文件权限的内容,请阅读这篇文章[在 Linux 中管理用户和组,文件权限和属性][5]。
为了实现对系统中的其他用户隐藏文件或目录,可以通过在文件或目录开头添加 `.` 的方式重命名:
```
$ mv filename .tecmint.info
```
### 2、 在 Linux 中将 rwx 权限转为八进制格式
默认情况下,当你运行 [ls 命令][6]之后,它会使用 `rwx` 格式显示文件权限,为了了解 rwx 格式和八进制格式的等同性,你可以学习如何[在 Linux 中将 rwx 权限转为八进制格式][7]。
### 3、 当 `sudo` 命令执行失败时怎么使用 `su` 命令
虽然 [sudo 命令][8]被用来以超级用户权限执行命令,但是在某些情况下它也会执行失败,如下所示。
在这里,我想[清空一个大文件的内容][9],其文件名为 `uptime.log`,但是即便使用 sudo 命令也执行失败了。
```
$ cat /dev/null >/var/log/uptime.log
$ sudo cat /dev/null >/var/log/uptime.log
```
[
![在 Linux 中清空大文件的内容](http://www.tecmint.com/wp-content/uploads/2016/12/Empty-Large-File-Content-in-Linux.png)
][10]
*在 Linux 中清空大文件的内容*
遇到这种情况,你需要使用 `su` 命令切换到 `root` 用户,然后像下面这样去执行清空操作:
```
$ su
$ sudo cat /dev/null >/var/log/uptime.log
$ cat /var/log/uptime.log
```
[
![切换到超级用户](http://www.tecmint.com/wp-content/uploads/2016/12/Switch-to-Super-User.png)
][11]
*切换到超级用户*
尝试理解 [su 和 sudo 之间的区别][12],另外,通过阅读它们的手册页以了解更多的使用指南:
```
$ man sudo
$ man su
```
### 4、 在 Linux 中结束一个进程
有些时候,当你想[使用 kill、killall、pkill 命令结束一个进程][13]时,它们有可能无法生效,你可能会看到该进程仍然还在系统上运行。
如果要强制结束一个进程,可以发送 `-KILL` 信号给该进程。
首先[获取指定进程 ID][14],然后像下面这样结束该进程:
```
$ pidof vlc
$ sudo kill -KILL 10279
```
[
![在 Linux 中查找和结束进程](http://www.tecmint.com/wp-content/uploads/2016/12/Find-and-Kill-Process-in-Linux.png)
][15]
*在 Linux 中查找和结束进程*
查看 [kill 命令][16]以获取更多的使用选项和信息。
### 5、 在 Linux 中永久删除文件
一般情况下,我们通过使用 `rm` 命令将文件从 Linux 系统中删除。然而,这些文件并没有被真正的删除,它们仍被存储在那里并隐藏在你的硬盘中,其他用户仍然可以[在 Linux 中恢复删除的文件][17]并查看。
为了防止这种情况发生,我们可以使用 `shred` 命令来覆写文件内容,并在覆盖完成后选择删除文件。
```
$ shred -zvu tecmint.pdf
```
上述命令中所使用的选项说明:
1. `-z` 最后一次使用 0 进行覆盖以隐藏覆写动作。
2. `-u` 覆写后截断并移除文件。
3. `-v` 显示详细过程。
[
![在 Linux 中永久删除文件](http://www.tecmint.com/wp-content/uploads/2016/12/Delete-File-Permanently-in-Linux.png)
][18]
*在 Linux 中永久删除文件*
阅读 `shred` 手册以获取更多的使用信息。
```
$ man shred
```
### 6、 在 Linux 中重命名多个文件
你可以通过使用 `rename` 命令随时[在 Linux 中重命名多个文件][19]。
`rename` 命令会根据第一个参数中的规则重命名指定文件。
以下命令会将所有 `.pdf` 文件重命名为 `.doc` 文件,使用的规则为 `'s/\.pdf$/\.doc/'`
```
$ rename -v 's/\.pdf$/\.doc/' *.pdf
```
[
![在 Linux 中重命名多个文件](http://www.tecmint.com/wp-content/uploads/2016/12/Rename-Multiple-Files-in-Linux.png)
][20]
*在 Linux 中重命名多个文件*
在接下来的例子中,我们将通过重命名所有匹配 `"*.bak"` 的文件来移除其拓展名,使用的规则是 `'s/\.bak$//'`
```
$ rename -v 's/\.bak$//' *.bak
```
### 7、 在 Linux 中检查单词拼写
`look` 命令用于显示文件中以指定字符串为前缀的任意行,同时它也可以帮你检查命令行中给定单词的拼写。尽管它并不是那么有效和可靠,但它仍然算得上是其他强大的拼写检查工具的有用替代品。
```
$ look linu
$ look docum
```
[
![在 Linux 中检查单词拼写](http://www.tecmint.com/wp-content/uploads/2016/12/Spell-Checking-in-Linux.png)
][21]
*在 Linux 中检查单词拼写*
### 8、 按关键字搜索手册页
`man` 命令用于显示命令的手册页,当使用 `-k` 选项时,它会将关键字 `printf`(或者如下命令中的关键字 `adjust`、`apache`、`php` )作为正则表达式,来搜索所有匹配该名称手册页,并显示其简介。
```
$ man -k adjust
$ man -k apache
$ man -k php
```
[
![按关键字搜索手册页](http://www.tecmint.com/wp-content/uploads/2016/12/Show-Description-of-Keyword-in-Manual-Pages.png)
][22]
*按关键字搜索手册页*
### 9、 在 Linux 中实时监测日志
`watch` 命令可以[定期执行另一个 Linux 命令][23]并全屏显示该命令的执行结果。当 `watch` 命令与 [tail 命令][24](用于查看文件结尾的 Linux 命令)配合使用时,可以监测到日志文件的日志记录情况。
在以下示例中,你将实时监测系统认证日志文件。打开两个终端窗口,在第一个窗口中实时监测该日志文件,如下:
```
$ sudo watch tail /var/log/auth.log
```
你也可以使用 [tail 命令][25](显示文件结尾的 Linux 命令)的 `-f` 选项实时监测文件变化。这样,我们就可以在日志文件中看到日志的生成情况。
```
$ sudo tail -f /var/log/auth.log
```
接着,在第二个终端窗口中运行以下命令,之后,你就可以在第一个终端窗口中观察日志文件内容:
```
$ sudo mkdir -p /etc/test
$ sudo rm -rf /etc/test
```
### 10、 列出所有 Shell 内置命令
shell 内置命令是一个命令或者函数,从内部调用并直接在 shell 里执行,而不是从硬盘加载外部的可执行程序来执行。
列出所有 shell 内置命令及其语法,执行如下命令:
```
$ help
```
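如果只想查看某一个内置命令的用法,可以在 `help` 后面加上该命令的名字,例如:
```
$ help cd
$ help export
```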
作为结束语,[命令行小技巧][26]不仅能派得上用场,而且让学习和使用 Linux 变得更加简单有趣,尤其是对新手来讲。
你也可以通过留言给我们分享其他在 Linux 中[有用有趣的命令行小技巧][27]。
--------------------------------------------------------------------------------
作者简介:
![](http://1.gravatar.com/avatar/4e444ab611c7b8c7bcb76e58d2e82ae0?s=128&d=blank&r=g)
Aaron Kili 是一名 Linux 和 F.O.S.S 的爱好者,未来的 Linux 系统管理员、网站开发人员,目前是 TecMint 的写作者,他喜欢用电脑工作,并且乐忠于分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-command-line-tricks-and-tips-worth-knowing/
作者:[Aaron Kili][a]
译者:[zhb127](https://github.com/zhb127)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/20-funny-commands-of-linux-or-linux-is-fun-in-terminal/
[2]:http://www.tecmint.com/tag/linux-tricks/
[3]:https://linux.cn/article-5485-1.html
[4]:https://linux.cn/article-6314-1.html
[5]:https://linux.cn/article-7418-1.html
[6]:http://www.tecmint.com/tag/linux-ls-command/
[7]:http://www.tecmint.com/check-linux-file-octal-permissions-using-stat-command/
[8]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/
[9]:https://linux.cn/article-8024-1.html
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Empty-Large-File-Content-in-Linux.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/Switch-to-Super-User.png
[12]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/
[13]:http://www.tecmint.com/how-to-kill-a-process-in-linux/
[14]:http://www.tecmint.com/find-process-name-pid-number-linux/
[15]:http://www.tecmint.com/wp-content/uploads/2016/12/Find-and-Kill-Process-in-Linux.png
[16]:http://www.tecmint.com/how-to-kill-a-process-in-linux/
[17]:https://linux.cn/article-7974-1.html
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Delete-File-Permanently-in-Linux.png
[19]:http://www.tecmint.com/rename-multiple-files-in-linux/
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/Rename-Multiple-Files-in-Linux.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Spell-Checking-in-Linux.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/Show-Description-of-Keyword-in-Manual-Pages.png
[23]:http://www.tecmint.com/run-repeat-linux-command-every-x-seconds/
[24]:http://www.tecmint.com/view-contents-of-file-in-linux/
[25]:http://www.tecmint.com/view-contents-of-file-in-linux/
[26]:http://www.tecmint.com/tag/linux-tricks/
[27]:https://linux.cn/article-5485-1.html

View File

@ -0,0 +1,135 @@
Linux 系统管理员 2017 年的 10 个新决心
============================================================
当我们告别 2016 时,也到时间定下我们的 **新年决心** 了。不管你身为 Linux 系统管理员的经验水平如何,我们认为,制定接下来 12 个月的成长目标是很值得的。
如果你还没什么想法,我们将会在这篇文章分享 10 个简单的专业提升决心,你可以为 2017 年考虑一下。
### 1、 决定更自动化
你没必要忙得像头无头苍蝇,每天忙于解决可预见的问题。如果你发现自己每天都花费时间在执行重复的任务,你有必要现在就停下来。
在了解了所有[基于 Linux 而且开源的工具][4]后,你可以尽可能地[自动化你的 Linux 任务][5]来给自己一些休闲的时间。
你会发现,接下来的几个决心会帮你在工作上朝着这个目标前进。所以继续看下去吧。
另外,帮自己一个忙,花费几分钟来浏览我们[免费的电子书][6]部分吧。
你将有机会下载与[Bash shell 脚本编程][7]相关的书籍来提升你的技能。开心地自动化!
### 2、 学习一门新的脚本语言
虽然每一个系统管理员应该熟练地使用 Bash 写脚本,但考虑一下其它更现代化、健壮性更强的工具也是很重要的,例如 Python。
不要只是相信我们说的话 —— 看看不久前我们发布的[两篇关于 Python 的系列文章][9]。你将会意识到与其它语言相比Python 带来了面向对象编程的力量,使您写出更短、健壮性更强的脚本。
### 3、 学习一门新的编程语言
除了学习一门新的脚本语言,(你也可以)决定花费点时间来开始学习或者提升你的编程技能。不确定从何处开始?今年的 [Stackoverflow 开发者调查][10]表明 Javascript 连续第三年引领最流行语言的榜单。
其他经典例如 Java 和 C 也值得考虑。来看我们 [2016 年最好的编程课程][11]。
### 4、 注册一个 Github 账户并且定期更新
特别是如果你是一个编程新手,你应该考虑一下在 Github 上展示你的成果。通过允许别人去复刻你的脚本或者程序,你就能提高知识水平,并通过别人的帮助创造出更复杂的软件。
在[《如何安装和注册 Github 帐号》][12]一文中了解更多。
### 5、 向一个开源项目做贡献
在 Github 上向一个开源项目做贡献,这是另一个学习或者提高一门新脚本语言或者编程语言能力的好办法。
如果这吸引到了你的兴趣,点击 [Explore Github][13] 页面。这里你能按热度或者编程语言浏览仓库,你能在这里面找到一些有趣的事情来做。
在此基础上,你将因回馈社区而获得满足感。
### 6、 每月尝试一个新的发行版
经常会有新的发行版或者分支出现,你有不同的选项以供选择。谁知道你梦想中的发行版是否就在近前,而你还没发现它?每个月去一次 **Distrowatch** 然后选择一个新的发行版。
也别忘了[订阅 Techmint][14] 来获取新发行版的消息。
如果你想要尝试一个新的发行版,希望我们的评论能帮你做出决定。也可以点击我们这里关于最好的 Linux 发行版的文章:
- [2016年最好的 5 个注重安全的 Linux 发行版][1]
- [2016年最值得期待的 Linux 发行版][2]
- [2015年最流行的 10个 Linux 发行版][3]
### 7、 参加一个 Linux 或者开源会议。
如果你住在由 Linux 基金会赞助的会议举办地附近,我强烈建议你去参加会议。
这不仅将会给你一个提高 Linux 知识的机会,而且将是个见见其他开源专家的机会。
### 8、 从 Linux 基金会的免费或付费课程中学习
Linux 基金会分别通过 **edX.org** 和他们自己的门户,不断地提供免费或付费课程。
免费课程的话题包括但不仅限于Linux 介绍、云基础设施技术介绍和 OpenStack 介绍。
另一方面,付费课程包括 [LFCS 认证][16] 和 [LFCE 认证][17] 考试的准备,给开发者的 Linux 内核内部构件Linux 安全,性能试验,高可用性及其他。
另外,他们对企业课程有折扣,所以尝试去说服你上司来为你和你同事的训练付费。还有,也会提供周期性的免费在线研讨会,所以别忘了订阅他们的 newsletters
你也可以考虑下看看我们最棒的[在线 Linux 训练课程][18]。
### 9、 每周在 Linux 论坛上回答特定数量的问题
另一个回馈社区的好方法是帮助那些刚开始使用 Linux 的人。你将会发现网上的 Linux 论坛上有许多人正在寻找着答复。
牢记你曾经也是像他们那样是个新手,试着换位思考。
### 10、 教一个孩子或少年使用 Linux
如果我能回到 20 年前,我希望我能有台电脑,有个能[在青年时学习 Linux ][19]的机会。
我也希望我能比当年还早很多地开始编程。毫无疑问,这样事情就会简单许多。我认为给孩子和青年教授至少是基础的 Linux 和编程技巧(我对我的孩子这样做)是个重要的尝试。
教育成长中的一代如何有效地使用开源技术将会给他们选择的自由,而他们会因此永远感激你。
##### 总结
在这篇文章里我们分享了 10 个适合系统管理员的可能新年决心。[Tecmint.com][20] 祝你在朝着目标的工作顺顺利利,希望你能在 2017 年成为我们网站的常客。
如果你有关于这篇文章的问题或者评论,请不要犹豫使用下面的表格提交。我们期待着收到您的信息。
--------------------------------------------------------------------------------
作者简介:
![](http://1.gravatar.com/avatar/d9d14c5b51331864398e6288cb0c2091?s=128&d=blank&r=g)
Gabriel Cánepa 是个 GNU/Linux 系统管理员和网页开发者,他来自阿根廷圣路易斯的 Villa Mercedes 。他供职于全球领先的消费品公司,享受在日常工作的方方面面使用 FOSS自由及开源软件 工具来提高生产效率。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-system-administrators-new-years-resolutions-ideas/
作者:[Gabriel Cánepa][a]
译者:[ypingcn](https://github.com/ypingcn)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]: http://www.tecmint.com/best-security-centric-linux-distributions-of-2016/
[2]: http://www.tecmint.com/top-linux-distributions-to-look-forward-in-2016/
[3]: http://www.tecmint.com/10-top-most-popular-linux-distributions-of-2015/
[4]: http://www.tecmint.com/category/top-tools/
[5]: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/
[6]: http://tecmint.tradepub.com/category/information-technology-servers-and-linux-server-os/806/
[7]: http://tecmint.tradepub.com/free/w_syst05/?p=w_syst05
[8]: http://www.tecmint.com/category/python/
[9]: https://linux.cn/article-7693-1.html
[10]: http://stackoverflow.com/research/developer-survey-2016#technology
[11]: https://deals.tecmint.com/collections/best-of-bundles-2016
[12]: http://www.tecmint.com/install-git-centos-fedora-redhat/
[13]: https://help.github.com/articles/where-can-i-find-open-source-projects-to-work-on/
[14]: http://subscribe.tecmint.com/newsletter
[15]: http://events.linuxfoundation.org/
[16]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
[17]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
[18]: http://www.tecmint.com/linux-online-training-courses/
[19]: http://www.tecmint.com/free-online-linux-learning-guide-for-beginners/
[20]: http://tecmint.com/

View File

@ -0,0 +1,212 @@
如何在 Docker 中设置 Go 并部署应用
============================================================
嗨,在本教程中,我们将学习如何使用 docker 部署 golang web 应用程序。你可能已经知道,由于 golang 的高性能和可靠性docker 完全是用 golang 写的。在我们详细介绍之前,请确保你已经安装了 docker 以及 golang并对它们有基本了解。
### 关于 docker
Docker 是一个开源程序,它可以将应用及其完整的依赖包捆绑到一起,并打包为容器,与宿主机共享相同的 Linux 内核。另一方面,像 VMware 这样基于 hypervisor 的虚拟化提供了高级别的隔离和安全性,这是由于客户机和主机之间的通信是通过 hypervisor 来实现的,它们不共享内核空间。但是硬件仿真也导致了性能的开销,所以容器虚拟化应运而生,以提供一个轻量级的虚拟环境,它将一组进程和资源与主机以及其它容器分组及隔离,因此,容器内部的进程无法看到容器外部的进程或资源。
### 用 Go 语言创建一个 “Hello World” web 应用
首先我们为 Go 应用创建一个目录,它会在浏览器中显示 “Hello World”。创建一个 `web-app` 目录并使它成为当前目录。进入 `web-app` 应用目录并编辑一个名为 `main.go` 的文件。
```
root@demohost:~# mkdir web-app
root@demohost:~# cd web-app/
root@demohost:~/web-app# vim.tiny main.go
package main
import (
"fmt"
"net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello %s", r.URL.Path[1:])
}
func main() {
http.HandleFunc("/World", handler)
http.ListenAndServe(":8080", nil)
}
```
使用下面的命令运行上面的 “Hello World” Go 程序。在浏览器中输入 `http://127.0.0.1:8080/World` 测试,你会在浏览器中看到 “Hello World”。
```
root@demohost:~/web-app# PORT=8080 go run main.go
```
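如果你是在没有浏览器的服务器上操作,也可以用 curl 来验证(示意):
```
root@demohost:~/web-app# curl http://127.0.0.1:8080/World
Hello World
```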
下一步是将上面的应用在 docker 中容器化。因此我们会创建一个 dockerfile 文件,它会告诉 docker 如何容器化我们的 web 应用。
```
root@demohost:~/web-app# vim.tiny Dockerfile
# 得到最新的 golang docker 镜像
FROM golang:latest
# 在容器内部创建一个目录来存储我们的 web 应用,接着使它成为工作目录。
RUN mkdir -p /go/src/web-app
WORKDIR /go/src/web-app
# 复制 web-app 目录到容器中
COPY . /go/src/web-app
# 下载并安装第三方依赖到容器中
RUN go-wrapper download
RUN go-wrapper install
# 设置 PORT 环境变量
ENV PORT 8080
# 给主机暴露 8080 端口,这样外部网络可以访问你的应用
EXPOSE 8080
# 告诉 Docker 启动容器运行的命令
CMD ["go-wrapper", "run"]
```
### 构建/运行容器
使用下面的命令构建你的 Go web-app你会在成功构建后获得确认。
```
root@demohost:~/web-app# docker build --rm -t web-app .
Sending build context to Docker daemon 3.584 kB
Step 1 : FROM golang:latest
latest: Pulling from library/golang
386a066cd84a: Already exists
75ea84187083: Pull complete
88b459c9f665: Pull complete
a31e17eb9485: Pull complete
1b272d7ab8a4: Pull complete
eca636a985c1: Pull complete
08158782d330: Pull complete
Digest: sha256:02718aef869a8b00d4a36883c82782b47fc01e774d0ac1afd434934d8ccfee8c
Status: Downloaded newer image for golang:latest
---> 9752d71739d2
Step 2 : RUN mkdir -p /go/src/web-app
---> Running in 9aef92fff9e8
---> 49936ff4f50c
Removing intermediate container 9aef92fff9e8
Step 3 : WORKDIR /go/src/web-app
---> Running in 58440a93534c
---> 0703574296dd
Removing intermediate container 58440a93534c
Step 4 : COPY . /go/src/web-app
---> 82be55bc8e9f
Removing intermediate container cae309ac7757
Step 5 : RUN go-wrapper download
---> Running in 6168e4e96ab1
+ exec go get -v -d
---> 59664b190fee
Removing intermediate container 6168e4e96ab1
Step 6 : RUN go-wrapper install
---> Running in e56f093b6f03
+ exec go install -v
web-app
---> 584cd410fdcd
Removing intermediate container e56f093b6f03
Step 7 : ENV PORT 8080
---> Running in 298e2a415819
---> c87fd2b43977
Removing intermediate container 298e2a415819
Step 8 : EXPOSE 8080
---> Running in 4f639a3790a7
---> 291167229d6f
Removing intermediate container 4f639a3790a7
Step 9 : CMD go-wrapper run
---> Running in 6cb6bc28e406
---> b32ca91bdfe0
Removing intermediate container 6cb6bc28e406
Successfully built b32ca91bdfe0
```
现在可以运行我们的 web-app 了,可以执行下面的命令。
```
root@demohost:~/web-app# docker run -p 8080:8080 --name="test" -d web-app
7644606b9af28a3ef1befd926f216f3058f500ffad44522c1d4756c576cfa85b
```
进入 `http://localhost:8080/World` 浏览你的 web 应用。你已经成功容器化了一个可重复的/确定性的 Go web 应用。使用下面的命令来启动、停止并检查容器的状态。
```
### 列出所有容器
root@demohost:~/ docker ps -a
### 使用 id 启动容器
root@demohost:~/ docker start CONTAINER_ID_OF_WEB_APP
### 使用 id 停止容器
root@demohost:~/ docker stop CONTAINER_ID_OF_WEB_APP
```
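此外,调试时还可以查看容器的标准输出日志,或者在停止后删除容器(示意,`test` 是我们前面 `docker run` 时通过 `--name` 指定的容器名):
```
### 查看容器日志
root@demohost:~/ docker logs test
### 停止并删除容器
root@demohost:~/ docker stop test
root@demohost:~/ docker rm test
```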
### 重新构建镜像
假设你正在开发 web 应用程序并在更改代码。现在要在更新代码后查看结果,你需要重新生成 docker 镜像、停止旧镜像并运行新镜像,并且每次更改代码时都要这样做。为了使这个过程自动化,我们将使用 docker 卷在主机和容器之间共享一个目录。这意味着你不必为在容器内进行更改而重新构建镜像。容器如何检测你是否对 web 程序的源码进行了更改?答案是有一个名为 “Gin” 的好工具 [https://github.com/codegangsta/gin][1],它能检测是否对源码进行了任何更改,然后重建镜像/二进制文件并在容器内运行更新过代码的进程。
要使这个过程自动化,我们将编辑 Dockerfile 并安装 Gin 将其作为入口命令来执行。我们将开放 `3030` 端口Gin 代理),而不是 `8080`。 Gin 代理将转发流量到 web 程序的 `8080` 端口。
```
root@demohost:~/web-app# vim.tiny Dockerfile
# 得到最新的 golang docker 镜像
FROM golang:latest
# 在容器内部创建一个目录来存储我们的 web 应用,接着使它成为工作目录。
RUN mkdir -p /go/src/web-app
WORKDIR /go/src/web-app
# 复制 web 程序到容器中
COPY . /go/src/web-app
# 下载并安装第三方依赖到容器中
RUN go get github.com/codegangsta/gin
RUN go-wrapper download
RUN go-wrapper install
# 设置 PORT 环境变量
ENV PORT 8080
# 给主机暴露 8080 端口,这样外部网络可以访问你的应用
EXPOSE 3030
# 告诉 Docker 启动容器时运行 Gin 代理
# 注意Dockerfile 中只有最后一条 CMD 会生效,因此这里不再保留原来的 go-wrapper CMD
CMD gin run
```
现在构建镜像并启动容器:
```
root@demohost:~/web-app# docker build --rm -t web-app .
```
我们会在当前 web 程序的根目录下运行 docker暴露 `3030` 端口Gin 代理),并把 CWD当前工作目录挂载到容器中的应用目录下。
```
root@demohost:~/web-app# docker run -p 3030:3030 -v `pwd`:/go/src/web-app --name="test" -d web-app
```
打开 `http://localhost:3030/World` 你就能看到你的 web 程序了。现在如果你改变了任何代码,会在浏览器刷新后反映在你的浏览器中。
### 总结
就是这样,我们的 Go web 应用已经运行在 Ubuntu 16.04 Docker 容器中运行了!你可以通过使用 Go 框架来快速开发 API、网络应用和后端服务从而扩展当前的网络应用。
--------------------------------------------------------------------------------
via: http://linoxide.com/containers/setup-go-docker-deploy-application/
作者:[Dwijadas Dey][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/dwijadasd/
[1]:https://github.com/codegangsta/gin

View File

@ -0,0 +1,260 @@
如何在 Shell 脚本中跟踪调试命令的执行
============================================================
在 [shell 脚本调试系列][3] 中,本文将解释第三种 shell 脚本调试模式,即 shell 跟踪,并查看一些示例来演示它如何工作以及如何使用它。
本系列的前面部分清晰地阐明了另外两种 shell 脚本调试模式:详细模式和语法检查模式,并用易于理解的例子展示了如何在这些模式下启用 shell 脚本调试。
1. [如何在 Linux 中启用 Shell 脚本的调试模式][1]
2. [如何在 Shell 脚本中执行语法检查调试模式][2]
shell 跟踪简单的来说就是跟踪 shell 脚本中的命令的执行。要打开 shell 跟踪,请使用 `-x` 调试选项。
这会让 shell 在终端上显示所有执行的命令及其参数。
我们将使用下面的 `sys_info.sh` shell 脚本,它会简要地打印出你的系统日期和时间、登录的用户数和系统的运行时间。不过,脚本中包含我们需要查找和更正的语法错误。
```
#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
}
print_sys_info(){
echo "System Time : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime : $UPTIME
}
check_root
print_sys_info
exit 0
```
保存文件并执行脚本。脚本只能用 root 用户运行,因此如下使用 [sudo 命令][4]运行:
```
$ chmod +x sys_info.sh
$ sudo bash -x sys_info.sh
```
[
![Shell Tracing - Show Error in Script](http://www.tecmint.com/wp-content/uploads/2016/12/Shell-Tracing-Errors.png)
][5]
*shell 跟踪 - 显示脚本中的错误*
从上面的输出我们可以观察到,首先执行命令,然后其输出做为一个变量的值。
例如,先执行 `date`,其输出做为变量 `DATE` 的值。
我们可以执行语法检查来只显示其中的语法错误,如下所示:
```
$ sudo bash -n sys_info.sh
```
[
![Syntax Checking in Script](http://www.tecmint.com/wp-content/uploads/2016/12/Syntax-Checking-in-Script.png)
][6]
*脚本中语法检查*
如果我们审视这个 shell 脚本,我们就会发现 `if` 语句缺少了封闭条件的 `fi` 关键字。因此,让我们加上它,新的脚本应该看起来像这样:
```
#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
print_sys_info(){
echo "System Time : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime : $UPTIME
}
check_root
print_sys_info
exit 0
```
再次保存文件并以 root 执行,同时做语法检查:
```
$ sudo bash -n sys_info.sh
```
[
![Perform Syntax Check in Shell Scripts](http://www.tecmint.com/wp-content/uploads/2016/12/Syntax-Check-in-Shell-Scripts.png)
][7]
*在 shell 脚本中执行语法检查*
上面的语法检查操作的结果仍然显示在脚本的第 21 行还有一个错误。所以,我们仍然要纠正一些语法。
再一次分析脚本,会发现第 21 行的错误是由于在 `print_sys_info` 函数内最后一个 [echo 命令][8]中没有闭合双引号 `"`
我们将在 `echo` 命令中添加闭合双引号并保存文件。修改过的脚本如下:
```
#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
print_sys_info(){
echo "System Time : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime : $UPTIME"
}
check_root
print_sys_info
exit 0
```
现在再一次检查语法。
```
$ sudo bash -n sys_info.sh
```
上面的命令不会产生任何输出,因为我们的脚本语法上正确。我们也可以再次跟踪脚本执行,它应该工作得很好:
```
$ sudo bash -x sys_info.sh
```
[
![Trace Shell Script Execution](http://www.tecmint.com/wp-content/uploads/2016/12/Trace-Shell-Execution.png)
][9]
*跟踪 shell 脚本执行*
现在运行脚本。
```
$ sudo ./sys_info.sh
```
[
![Shell Script to Show Date, Time and Uptime](http://www.tecmint.com/wp-content/uploads/2016/12/Script-to-Show-Date-and-Uptime.png)
][10]
*用 shell 脚本显示日期、时间和运行时间*
### shell 跟踪执行的重要性
shell 脚本跟踪可以帮助我们识别语法错误,更重要的是识别逻辑错误。例如,在 `sys_info.sh` shell 脚本中的 `check_root` 函数,它用于确定用户是否为 root因为脚本只允许由超级用户执行。
```
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
```
这里的魔法是由 `if` 语句表达式 `[ "$UID" -ne "$ROOT_ID" ]` 控制的,一旦我们不使用合适的数字运算符(示例中为 `-ne`,这意味着不相等),我们最终可能会出一个逻辑错误。
假设我们使用 `-eq` (意思是等于),这将允许任何系统用户以及 root 用户运行脚本,因此是一个逻辑错误。
```
check_root(){
if [ "$UID" -eq "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
```
注意:我们在本系列开头介绍过,`set` 这个 shell 内置命令可以在 shell 脚本的特定部分激活调试。
因此,下面的行将帮助我们通过跟踪脚本的执行在其中找到这个逻辑错误:
具有逻辑错误的脚本:
```
#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -eq "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
print_sys_info(){
echo "System Time : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime : $UPTIME"
}
#turning on and off debugging of check_root function
set -x ; check_root; set +x ;
print_sys_info
exit 0
```
保存文件并调用脚本,在输出中,我们可以看到一个普通系统用户可以在未 sudo 的情况下运行脚本。这是因为普通用户的 `UID`(例如 100不等于 root 的 `ROOT_ID`0
```
$ ./sys_info.sh
```
[
![Run Shell Script Without Sudo](http://www.tecmint.com/wp-content/uploads/2016/12/Run-Shell-Script-Without-Sudo.png)
][11]
*未 sudo 的情况下运行 shell 脚本*
那么,现在我们已经完成了 [shell 脚本调试系列][12],可以在下面的反馈栏里给我们关于本篇或者本系列提出问题或反馈。
--------------------------------------------------------------------------------
作者简介:
![](http://1.gravatar.com/avatar/4e444ab611c7b8c7bcb76e58d2e82ae0?s=128&d=blank&r=g)
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux SysAdmin、web 开 发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/trace-shell-script-execution-in-linux/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:https://linux.cn/article-8028-1.html
[2]:https://linux.cn/article-8045-1.html
[3]:https://linux.cn/article-8028-1.html
[4]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/Shell-Tracing-Errors.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Syntax-Checking-in-Script.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/12/Syntax-Check-in-Shell-Scripts.png
[8]:http://www.tecmint.com/echo-command-in-linux/
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Trace-Shell-Execution.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Script-to-Show-Date-and-Uptime.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/Run-Shell-Script-Without-Sudo.png
[12]:https://linux.cn/article-8028-1.html

View File

@ -0,0 +1,100 @@
Linux 笔记本电脑选购指南
============================================================
众所周知,如果你去电脑城[购买一个新的笔记本电脑][5],你所见到的尽是预安装了 Windows 或是 Mac 系统的笔记本电脑。无论怎样,你都会被迫支付一笔额外的费用—— 微软系统的许可费用或是苹果电脑背后的商标使用权费用。
当然,你也可以选择购买一款笔记本电脑,然后安装自己喜欢的操作系统。然而,最困难的可能是需要找到一款硬件跟你想安装的操作系统兼容性良好的笔记本电脑。
在此之上,我们还需要考虑硬件驱动程序的可用性。那么,你应该怎么办呢?答案很简单:[购买一款预安装了 Linux 系统的笔记本电脑][6]。
幸运的是,正好有几家值得依赖的公司提供质量好、有名气,并且预安装了 Linux 系统的笔记本电脑,这样你就不用再担心驱动程序的可用性了。
也就是说,在这篇文章中,我们将根据用户对笔记本电脑的用途列出 3 款可供用户选择的高性价比机器。
### 普通用户使用的 Linux 笔记本电脑
如果你正在寻找一款能够满足日常工作及娱乐需求的 Linux 笔记本电脑,它能够正常运行办公软件,有诸如 Firefox或是 Chrome 这样的 Web 浏览器,有局域网 / Wifi 连接功能,那么你可以考虑选择 [System76][7] 公司生产的 Linux 笔记本电脑,它可以根据用户的定制化需求选择处理器类型,内存及磁盘大小,以及其它配件。
除此之外, System76 公司为他们所有的 Ubuntu 系统的笔记本电脑提供终身技术支持。如果你觉得这听起来不错,并且也比较感兴趣,你可以考虑下 [Lemur][8] 或者 [Gazelle][9] 这两款笔记本电脑。
![Lemur Laptop for Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Lemur-Laptop.png)
*Lemur Linux 笔记本电脑*
![Gazelle Laptop for Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Gazelle-Laptop.png)
*Gazelle Linux 笔记本电脑*
### 开发者使用的 Linux 笔记本电脑
如果你想找一款坚固可靠,外观精美,并且性能强悍的笔记本电脑用于开发工作,你可以考虑一下 [Dell 的 XPS 13 笔记本电脑][10]。
这款 13 英寸的精美笔记本电脑配置全高清HD显示器和触摸板售价视配置而定CPU 代号/型号Intel 第 7 代处理器 i5 或 i7固态硬盘大小128 至 512 GB内存大小8 至 16 GB。
![Dells XPS Laptop for Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Dells-XPS-Laptops.png)
*Dell XPS Linux 笔记本电脑*
这些都是你应该考虑在内的重要因素Dell 已经做得很到位了。不幸的是Dell ProSupport 为该型号的笔记本电脑仅提供 Ubuntu 16.04 LTS 系统的技术支持(在写本篇文章的时候 - 2016 年 12 月)。
### 系统管理员使用的 Linux 笔记本电脑
虽然系统管理员可以顺利搞定在裸机上安装 Linux 系统的工作,但是使用 System76 的产品,你可以避免寻找各种驱动并解决兼容性问题上的麻烦。
之后,你可以根据自己的需求来配置电脑特性,你可以提高笔记本电脑的性能,增加内存到 32 GB 以确保你可以运行虚拟化环境并进行各种系统管理相关的任务。
如果你对此比较感兴趣,你可以考虑购买 [Kudu][12] 或者是 [Oryx Pro][13] 笔记本电脑。
![Kudu Linux Laptop](http://www.tecmint.com/wp-content/uploads/2016/11/Kudu-Linux-Laptop.png)
*Kudu Linux 笔记本电脑*
![Oryx Pro Linux Laptop](http://www.tecmint.com/wp-content/uploads/2016/11/Oryx-Pro-Linux-Laptop.png)
*Oryx Pro 笔记本电脑*
### 总结
在这篇文章中,我们探讨了对于普通用户、开发者及系统管理员来说,为什么购买一款预安装了 Linux 系统的笔记本是一个不错的选择。一旦你决定好,你就可以轻松自如的考虑下应该如何消费这笔省下来的钱了。
你觉得在购买一款 Linux 系统的笔记本电脑时还应该注意些什么?请在下面的评论区与大家分享。
像往常一样,如果你对这篇文章有什么意见和看法,请随时提出来。我们很期待看到你的回复。
--------------------------------------------------------------------------------
作者简介:
![](http://1.gravatar.com/avatar/d9d14c5b51331864398e6288cb0c2091?s=128&d=blank&r=g)
Gabriel Cánepa 来自 ArgentinaSan LuisVilla Mercedes ,他是一名 GNU/Linux 系统管理员和网站开发工程师。目前在一家世界领先的消费品公司工作,在日常工作中,他非常善于使用 FOSS 工具来提高公司在各个领域的生产率。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/buy-linux-laptops/
作者:[Gabriel Cánepa][a]
译者:[rusking](https://github.com/rusking)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/11/Lemur-Laptop.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/11/Gazelle-Laptop.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Kudu-Linux-Laptop.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Oryx-Pro-Linux-Laptop.png
[5]:http://amzn.to/2fPxTms
[6]:http://amzn.to/2fPxTms
[7]:https://system76.com/laptops
[8]:https://system76.com/laptops/lemur
[9]:https://system76.com/laptops/gazelle
[10]:http://amzn.to/2fBLMGj
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Dells-XPS-Laptops.png
[12]:https://system76.com/laptops/kudu
[13]:https://system76.com/laptops/oryx

View File

@ -0,0 +1,152 @@
在 Linux 终端中自定义 Bash 配色和提示内容
============================================================
现今,大多数(如果不是全部的话)现代 Linux 发行版的默认 shell 都是 Bash。然而你可能已经注意到这样一个现象在各个发行版中其终端配色和提示内容都各不相同。
如果你一直在考虑(或者只是一时好奇)如何定制 Bash 使其更好用,请继续读下去 —— 本文将告诉你怎么做。
### PS1 Bash 环境变量
命令提示符和终端外观是通过一个叫 `PS1` 的变量来进行管理的。根据 **Bash** 手册页说明,**PS1** 代表了 shell 准备好读取命令时显示的主提示字符串。
**PS1** 所允许的内容包括一些反斜杠转义的特殊字符,可以查看手册页中 **PRMPTING** 部分的内容来了解它们的含义。
为了演示,让我们先来显示下我们系统中 `PS1` 的当前内容吧(这或许看上去和你们的有那么点不同):
```
$ echo $PS1
[\u@\h \W]\$
```
现在,让我们来了解一下怎样自定义 PS1 吧,以满足我们各自的需求。
#### 自定义 PS1 格式
根据手册页 PROMPTING 章节的描述,下面对各个特殊字符的含义作如下说明:
-  `\u:` 显示当前用户的 **用户名**
-  `\h:` <ruby>完全限定域名 <rt>Fully-Qualified Domain Name</rt></ruby>FQDN中第一个点.)之前的**主机名**。
-  `\W:` 当前工作目录的**基本名**,如果是位于 `$HOME` (家目录)通常使用波浪符号简化表示(`~`)。
- `\$:` 如果当前用户是 root显示为 `#`,否则为 `$`
例如,如果我们想要显示当前命令的历史数量,可以考虑添加 `\!`;如果我们想要显示 FQDN 全称而不是短服务器名,那么可以考虑添加 `\H`
在下面的例子中,我们同时将这两个特殊字符引入我们当前的环境中,命令如下:
```
PS1="[\u@\H \W \!]\$"
```
当按下回车键后,你将会看到提示内容会变成下面这样。可以对比执行命令修改前和修改后的提示内容:
[
![Customize Linux Terminal Prompt PS1](http://www.tecmint.com/wp-content/uploads/2017/01/Customize-Linux-Terminal-Prompt.png)
][1]
*自定义 Linux 终端提示符 PS1*
现在,让我们再深入一点,修改命令提示符中的用户名和主机名 —— 同时修改文本和环境背景。
实际上,我们可以对提示符进行 3 个方面的自定义:
<table>
<tr>
<th>文本格式</th>
<th>前景色(文本)</th>
<th>背景色 </th>
</tr>
<tr>
<th>0: 常规文本</th>
<th>30: 黑色</th>
<th>40: 黑色</th>
</tr>
<tr>
<th>1: 加粗</th>
<th>31: 红色</th>
<th>41: 红色</th>
</tr>
<tr>
<th>4: 下划线文本</th>
<th> 32: 绿色</th>
<th>42: 绿色</th>
</tr>
<tr>
<th></th>
<th>33: 黄色</th>
<th>43: 黄色</th>
</tr>
<tr>
<th></th>
<th>34: 蓝色</th>
<th>44: 蓝色</th>
</tr>
<tr>
<th></th>
<th>35: 紫色</th>
<th>45: 紫色</th>
</tr>
<tr>
<th></th>
<th>36: 青色</th>
<th>46: 青色</th>
</tr>
<tr>
<th></th>
<th>37: 白色</th>
<th>47: 白色</th>
</tr>
</table>
我们将在开头使用 `\e` 特殊字符,跟着颜色序列,在结尾使用 `m` 来表示结束。
在该序列中,三个值(**背景**、**格式**和**前景**)由分号分隔(如果不赋值,则假定为默认值)。
**建议阅读:** [在 Linux 中学习 Bash shell 脚本][2]。
此外,由于各部分取值的范围不同,指定背景、格式或者前景的先后顺序没有关系。
例如,下面的 `PS1` 将导致提示符为黄色带下划线文本,并且背景为红色:
```
PS1="\e[41;4;33m[\u@\h \W]$ "
```
[
![Change Linux Terminal Color Prompt PS1](http://www.tecmint.com/wp-content/uploads/2017/01/Change-Linux-Terminal-Color-Prompt.png)
][3]
*修改 Linux 终端提示符配色 PS1*
虽然它看起来那么漂亮,但是这个自定义将只会持续到当前用户会话结束。如果你关闭终端,或者退出本次会话,所有修改都会丢失。
为了让修改永久生效,你必须将下面这行添加到 `~/.bashrc` 或者 `~/.bash_profile` 中,这取决于你的版本。
```
PS1="\e[41;4;33m[\u@\h \W]$ "
```
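另外一个小提示:上面的写法会让你输入的命令文字也沿用提示符的颜色,通常可以在提示符末尾加上 `\e[0m` 把颜色重置回默认值(示意):
```
PS1="\e[41;4;33m[\u@\h \W]\e[0m$ "
```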
尽情去玩耍吧,你可以尝试任何色彩,直到找出最适合你的。
##### 小结
在本文中,我们讲述了如何来自定义 Bash 提示符的配色和提示内容。如果你对本文还有什么问题或者建议,请在下面评论框中写下来吧。我们期待你们的声音。
--------------------------------------------------------------------------------
作者简介Aaron Kili 是一位 Linux 及 F.O.S.S 的狂热爱好者,一位未来的 Linux 系统管理员web 开发者,而当前是 TechMint 的原创作者,他热爱计算机工作,并且信奉知识分享。
![](http://1.gravatar.com/avatar/4e444ab611c7b8c7bcb76e58d2e82ae0?s=128&d=blank&r=g)
--------------------------------------------------------------------------------
via: http://www.tecmint.com/customize-bash-colors-terminal-prompt-linux/
作者:[Aaron Kili][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2017/01/Customize-Linux-Terminal-Prompt.png
[2]:http://www.tecmint.com/category/bash-shell/
[3]:http://www.tecmint.com/wp-content/uploads/2017/01/Change-Linux-Terminal-Color-Prompt.png

View File

@ -0,0 +1,173 @@
5 个找出“二进制命令”描述和系统中位置的方法
============================================================
在数千个 [Linux 系统上的命令/程序][1]中,知道给定命令的类型和目的以及其在系统上的位置(绝对路径)对于新手来说可能是一个挑战。
知道命令/程序的一些细节不仅有助于 [Linux 用户掌握大量命令][2],还能使用户理解命令行或脚本在系统上的操作。
因此,在本文中我们将向你解释五个有用的命令,用于显示给定命令的简短描述和位置。
要在系统上发现新命令,请查看 PATH 环境变量中的所有目录。这些目录存储系统上安装的所有命令/程序。
一旦你找到一个有趣的命令,在继续阅读更多关于它的手册页之前,请尝试如下收集一些简要的信息。
假设你输出了 `PATH` 的值,然后进到其中的一个目录 `/usr/local/bin`,注意到一个名为 [`fswatch`(监视文件修改)][3] 的新命令:
```
$ echo $PATH
$ cd /usr/local/bin
```
[
![Find New Commands in Linux](http://www.tecmint.com/wp-content/uploads/2017/01/Find-New-Commands-in-Linux.png)
][4]
*在 Linux 中找出新命令*
现在让我们在 Linux 中用不同的方法找出 `fswatch` 命令的描述和位置。
### 1、 whatis 命令
`whatis` 用于显示你作为参数输入的命令名的单行描述(例如下面命令中的 `fswatch`)。
如果描述太长,一些部分在默认情况下会被省略,使用 `-l` 标志来显示完整的描述。
```
$ whatis fswatch
$ whatis -l fswatch
```
[
![Linux whatis Command Example](http://www.tecmint.com/wp-content/uploads/2017/01/Whatis-Command-Example.png)
][5]
*Linux whatis 命令示例*
### 2、 apropos 命令
`apropos` 会搜索手册页名称和关键字描述(以命令名作为正则表达式搜索)。
使用 `-l` 标志来显示完整的描述。
```
$ apropos fswatch
$ apropos -l fswatch
```
[
![Linux apropos Command Example](http://www.tecmint.com/wp-content/uploads/2017/01/Linux-apropos-Command-Example.png)
][6]
*Linux apropos 命令示例*
默认情况下,`apropos` 会如上面的示例那样输出所有匹配的行。你可以使用 `-e` 选项来进行精确匹配:
```
$ apropos fmt
$ apropos -e fmt
```
[
![Linux apropos Command Show by Keyword](http://www.tecmint.com/wp-content/uploads/2017/01/Linux-apropos-Command-Keyword-Example.png)
][7]
*Linux apropos 命令根据关键词显示*
### 3、 type 命令
`type` 命令会输出给定命令的完整路径名。此外,如果输入的命令名不是以独立文件形式存储在磁盘上的程序,`type` 还会告诉你该命令的类别:
- shell 内置命令
- shell 关键字或保留字
- 别名
```
$ type fswatch
```
[
![Linux type Command Example](http://www.tecmint.com/wp-content/uploads/2017/01/Linux-type-Command-Example.png)
][8]
*Linux type 命令示例*
当命令是另外一个命令的别名时,`type` 会显示运行别名时所执行的命令。使用 `alias` 命令可以查看你系统上创建的所有别名:
```
$ alias
$ type l
$ type ll
```
[
![Show All Aliases in Linux](http://www.tecmint.com/wp-content/uploads/2017/01/Show-All-Aliases-in-Linux.png)
][9]
*显示 Linux 中所有别名*
### 4、 which 命令
`which` 命令可以帮助你定位另一个命令,它会打印出该命令的绝对路径:
```
$ which fswatch
```
[
![Find Linux Command Location](http://www.tecmint.com/wp-content/uploads/2017/01/Find-Linux-Command-Location.png)
][10]
*找出 Linux 命令位置*
一些二进制文件存在于 `PATH` 环境变量中的多个目录,使用 `-a` 标志来找出所有匹配的路径名。
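例如可以像下面这样使用(仅作示意,实际输出取决于你的系统中安装了哪些同名程序):

```
$ which -a fswatch
$ which -a python
```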
### 5、 whereis 命令
`whereis` 可以定位指定命令的二进制文件、源代码以及手册页文件,如下所示:
```
$ whereis fswatch
$ whereis mkdir
$ whereis rm
```
[
![Linux whereis Command Example](http://www.tecmint.com/wp-content/uploads/2017/01/Linux-whereis-Command-Example.png)
][11]
*Linux whereis 命令示例*
虽然上面的命令可以快速查到关于命令/程序的一些信息,但是该命令的手册页总是能提供完整的文档,其中还包括其他相关程序的列表:
```
$ man fswatch
```
在本文中,我们回顾了五个简单的命令,用于显示命令的简短的手册描述和位置。 你可以在反馈栏中对此文章做出贡献或提出问题。
--------------------------------------------------------------------------------
作者简介:
![](http://1.gravatar.com/avatar/4e444ab611c7b8c7bcb76e58d2e82ae0?s=128&d=blank&r=g)
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux SysAdmin、web 开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/find-linux-command-description-and-location/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/category/top-tools/
[2]:http://www.tecmint.com/tag/linux-tricks/
[3]:http://www.tecmint.com/fswatch-monitors-files-and-directory-changes-modifications-in-linux/
[4]:http://www.tecmint.com/wp-content/uploads/2017/01/Find-New-Commands-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/01/Whatis-Command-Example.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/01/Linux-apropos-Command-Example.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/01/Linux-apropos-Command-Keyword-Example.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/01/Linux-type-Command-Example.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/01/Show-All-Aliases-in-Linux.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/01/Find-Linux-Command-Location.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/01/Linux-whereis-Command-Example.png


@ -3,27 +3,27 @@ LXD 2.0 系列(五):镜像管理
这是 [LXD 2.0 系列介绍文章][0]的第五篇。
因为lxd容器管理有很多命令因此这篇文章会很长。 如果你想要快速地浏览这些相同的命令,你可以[尝试下我们的在线演示][1]
因为 lxd 容器管理有很多命令,因此这篇文章会很长。 如果你想要快速地浏览这些相同的命令,你可以[尝试下我们的在线演示][1]
![](https://linuxcontainers.org/static/img/containers.png)
### 容器镜像
如果你以前使用过LXC你可能还记得那些LXC“模板”基本上都是导出一个容器文件系统以及一点配置的shell脚本。
如果你以前使用过 LXC你可能还记得那些 LXC “模板”,基本上都是导出一个容器文件系统以及一点配置的 shell 脚本。
大多数模板通过在本机上根据发行版自举来生成文件系统。这可能需要相当长的时间,并且无法在所有的发行版上可用,另外可能需要大量的网络带宽。
大多数模板是通过在本机上执行一个完整的发行版自举来生成该文件系统。这可能需要相当长的时间,并且无法在所有的发行版上可用,另外可能需要大量的网络带宽。
回到LXC 1.0,我写了一个“下载”模板,它允许用户下载预先打包的容器镜像,在中央服务器上的模板脚本生成接着高度压缩、签名并通过https分发。我们很多用户从旧版生成容器切换到使用这种新的更快更可靠的创建容器的方法
回到 LXC 1.0,我写了一个“下载”模板,它允许用户下载预先打包的容器镜像,用模板脚本在中央服务器上生成,接着高度压缩、签名并通过 https 分发。我们很多用户从旧版的容器生成方式切换到了使用这种新的、更快更可靠的创建容器的方式
使用LXD我们通过全面的基于镜像的工作流程向前迈进了一步。所有容器都是从镜像创建的我们在LXD中具有高级镜像缓存和预加载支持以使镜像存储保持最新。
使用 LXD我们通过全面的基于镜像的工作流程向前迈进了一步。所有容器都是从镜像创建的我们在 LXD 中具有高级镜像缓存和预加载支持,以使镜像存储保持最新。
### 与LXD镜像交互
### 与 LXD 镜像交互
在更深入了解镜像格式之前让我们快速了解下LXD可以让你做些什么。
在更深入了解镜像格式之前,让我们快速了解下 LXD 可以让你做些什么。
#### 透明地导入镜像
所有的容器都是镜像创建的。镜像可以来自一台远程服务器并使用它的完整hash、短hash或者别名拉取下来但是最终每个LXD容器都是创建自一个本地镜像。
所有的容器都是镜像创建的。镜像可以来自一台远程服务器并使用它的完整 hash、短 hash 或者别名拉取下来,但是最终每个 LXD 容器都是创建自一个本地镜像。
这有个例子:
@ -33,9 +33,9 @@ lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d
lxc launch ubuntu:75182b1241be c3
```
所有这些引用相同的远程镜像(在写这篇文章时)在第一次运行其中之一时,远程镜像将作为缓存镜像导入本地LXD镜像存储接着从中创建容器。
所有这些引用相同的远程镜像(在写这篇文章时)在第一次运行这些命令其中之一时,远程镜像将作为缓存镜像导入本地 LXD 镜像存储,接着从其创建容器。
下一次运行其中一个命令时LXD将只检查镜像是否仍然是最新的当不是由指纹引用时如果是它将创建容器而不下载任何东西。
下一次运行其中一个命令时LXD 将只检查镜像是否仍然是最新的(当不是由指纹引用时),如果是,它将创建容器而不下载任何东西。
现在镜像被缓存在本地镜像存储中,你也可以从那里启动它,甚至不检查它是否是最新的:
@ -49,13 +49,13 @@ lxc launch 75182b1241be c4
lxc launch my-image c5
```
如果你想要改变一些自动缓存或者过期行为,在本系列之前的文章中有一些命令。
如果你想要改变一些自动缓存或者过期行为,在本系列之前的文章中有[一些命令](https://linux.cn/article-7687-1.html)
#### 手动导入镜像
##### 从镜像服务器中复制
如果你想复制远程某个镜像到你本地镜像存储但不立即从它创建一个容器,你可以使用“lxc image copy”命令。它可以让你调整一些镜像标志,比如:
如果你想复制远程某个镜像到你本地镜像存储但不立即从它创建一个容器,你可以使用`lxc image copy`命令。它可以让你调整一些镜像标志,比如:
```
lxc image copy ubuntu:14.04 local:
@ -63,28 +63,29 @@ lxc image copy ubuntu:14.04 local:
这只是简单地复制一个远程镜像到本地存储。
如果您想要通过比其指纹更容易的方式来记住你引用的镜像副本,则可以在复制时添加别名:
如果您想要通过比记住其指纹更容易的方式来记住你引用的镜像副本,则可以在复制时添加别名:
```
lxc image copy ubuntu:12.04 local: --alias old-ubuntu
lxc launch old-ubuntu c6
```
如果你想要使用源服务器上设置的别名你可以要求LXD复制下来
如果你想要使用源服务器上设置的别名,你可以要求 LXD 复制下来:
```
lxc image copy ubuntu:15.10 local: --copy-aliases
lxc launch 15.10 c7
```
上面的副本都是一次性拷贝也就是复制远程镜像的当前版本到本地镜像存储中。如果你想要LXD保持镜像最新就像它缓存中存储的那样你需要使用`auto-update`标志:
上面的副本都是一次性拷贝,也就是复制远程镜像的当前版本到本地镜像存储中。如果你想要 LXD 保持镜像最新,就像它缓存中存储的那样,你需要使用 `auto-update` 标志:
```
lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update
```
##### 导入tarball
##### 导入 tarball
如果某人给你提供了一个单独的tarball你可以用下面的命令导入
如果某人给你提供了一个单独的 tarball你可以用下面的命令导入
```
lxc image import <tarball>
@ -96,15 +97,15 @@ lxc image import <tarball>
lxc image import <tarball> --alias random-image
```
现在如果你被给了有两个tarball识别哪个含有LXD的元数据。通常可以通过tarball名称如果不行就选择最小的那个元数据tarball包是很小的。 然后将它们一起导入:
现在,如果你拿到了两个 tarball,就需要识别出哪一个含有 LXD 元数据。通常可以通过 tarball 的名称来识别,如果不行就选择最小的那个,元数据 tarball 是很小的。然后将它们一起导入:
```
lxc image import <metadata tarball> <rootfs tarball>
```
##### 从URL中导入
##### 从 URL 中导入
“lxc image import”也可以与指定的URL一起使用。如果你的一台https网络服务器的某个路径中有LXD-Image-URL和LXD-Image-Hash的标头设置那么LXD就会把这个镜像拉到镜像存储中。
`lxc image import` 也可以与指定的 URL 一起使用。如果你的一台 https Web 服务器的某个路径中有 `LXD-Image-URL``LXD-Image-Hash` 的标头设置,那么 LXD 就会把这个镜像拉到镜像存储中。
可以参照例子这么做:
@ -112,18 +113,17 @@ lxc image import <metadata tarball> <rootfs tarball>
lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64
```
当拉取镜像时LXD还会设置一些标头远程服务器可以检查它们以返回适当的镜像。 它们是LXD-Server-Architectures和LXD-Server-Version。
这意味着它可以是一个穷人的镜像服务器。 它可以使任何静态Web服务器提供一个用户友好的方式导入你的镜像。
当拉取镜像时LXD 还会设置一些标头,远程服务器可以检查它们以返回适当的镜像。 它们是 `LXD-Server-Architectures``LXD-Server-Version`
这相当于一个简陋的镜像服务器。它可以通过任何静态 Web 服务器提供一种用户友好的导入镜像的方式。
#### 管理本地镜像存储
现在我们本地已经有一些镜像了,让我们瞧瞧可以做些什么。我们已经涵盖了最主要的部分,从它们来创建容器,但是你还可以在本地镜像存储上做更多。
现在我们本地已经有一些镜像了,让我们瞧瞧可以做些什么。我们已经介绍了最主要的部分,可以从它们来创建容器,但是你还可以在本地镜像存储上做更多。
##### 列出镜像
要列出所有的镜像,运行“lxc image list”
要列出所有的镜像,运行 `lxc image list`
```
stgraber@dakara:~$ lxc image list
@ -174,7 +174,7 @@ stgraber@dakara:~$ lxc image list os=ubuntu
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
```
要了解所有镜像的信息你可以使用“lxc image info”
要了解镜像的所有信息,你可以使用`lxc image info`
```
stgraber@castiana:~$ lxc image info ubuntu
@ -206,7 +206,7 @@ Source:
##### 编辑镜像
一个编辑镜像的属性和标志的简单方法是使用:
编辑镜像的属性和标志的简单方法是使用:
```
lxc image edit <alias or fingerprint>
@ -228,7 +228,7 @@ properties:
public: false
```
你可以修改任何属性,打开或者关闭自动更新,后者标记一个镜像是公共的(以后还有更多)
你可以修改任何属性,打开或者关闭自动更新,或者标记一个镜像是公共的(后面详述)。
##### 删除镜像
@ -238,11 +238,11 @@ public: false
lxc image delete <alias or fingerprint>
```
注意你不必移除缓存对象它们会在过期后被LXD自动移除默认上在最后一次使用的10天后
注意你不必移除缓存对象,它们会在过期后被 LXD 自动移除(默认上,在最后一次使用的 10 天后)。
##### 导出镜像
如果你想得到目前镜像的tarball你可以使用“lxc image export”,像这样:
如果你想得到目前镜像的 tarball你可以使用`lxc image export`,像这样:
```
stgraber@dakara:~$ lxc image export old-ubuntu .
@ -254,34 +254,34 @@ stgraber@dakara:~$ ls -lh *.tar.xz
#### 镜像格式
LXD现在支持两种镜像布局unified或者split。这两者都是有效的LXD格式虽然后者在与其他容器或虚拟机一起运行时更容易重新使用文件系统。
LXD 现在支持两种镜像布局unified 或者 split。这两者都是有效的 LXD 格式,虽然后者在与其他容器或虚拟机一起运行时更容易重用文件系统。
LXD专注于系统容器不支持任何应用程序容器的“标准”镜像格式我们也不打算这么做。
LXD 专注于系统容器,不支持任何应用程序容器的“标准”镜像格式,我们也不打算这么做。
我们的镜像很简单,它们是由容器文件系统,以及包含了镜像制作时间、到期时间、什么架构,以及可选的一堆文件模板的元数据文件组成。
有关[镜像格式][1]的最新详细信息,请参阅此文档。
##### unified镜像 (一个tarball)
##### unified 镜像(一个 tarball
unified镜像格式是LXD在生成镜像时使用的格式。它们是一个单独的大型tarball包含“rootfs”目录的容器文件系统在tarball根目录下有metadata.yaml文件任何模板都进入“templates”目录。
unified 镜像格式是 LXD 在生成镜像时使用的格式。它们是一个单独的大型 tarball包含 `rootfs` 目录下的容器文件系统,在 tarball 根目录下有 `metadata.yaml` 文件,任何模板都放到 `templates` 目录。
tarball可以用任何方式压缩或者不压缩。镜像散列是压缩后的tarball的sha256。
tarball 可以用任何方式压缩(或者不压缩)。镜像散列是压缩后的 tarball sha256
##### Split镜像 (两个tarball)
##### Split 镜像(两个 tarball
这种格式最常用于滚动更新镜像以及某人已经有了一个压缩文件系统tarball
这种格式最常用于滚动更新镜像并已经有了一个压缩文件系统 tarball 时
它们由两个不同的tarball组成第一个只包含LXD使用的元数据因此metadata.yaml文件在根目录任何模板都在“templates”目录。
它们由两个不同的 tarball 组成,第一个只包含 LXD 使用的元数据, `metadata.yaml` 文件在根目录,任何模板都在 `templates` 目录。
第二个tarball只包含直接位于其根目录下的容器文件系统。大多数发行版已经有这样的tarball因为它们常用于引导新机器。 此镜像格式允许不修改重新使用。
第二个 tarball 只包含直接位于其根目录下的容器文件系统。大多数发行版已经有这样的 tarball因为它们常用于引导新机器。 此镜像格式允许不修改重用。
两个tarball都可以压缩或者不压缩它们可以使用不同的压缩算法。 镜像散列是元数据和rootfs tarball结合的sha256。
两个 tarball 都可以压缩(或者不压缩),它们可以使用不同的压缩算法。 镜像散列是元数据的 tarball rootfs tarball 结合的 sha256。
##### 镜像元数据
典型的metadata.yaml文件看起来像这样
典型的 `metadata.yaml` 文件看起来像这样:
```
architecture: "i686"
@ -336,31 +336,31 @@ templates:
##### 属性
两个唯一的必填字段是“creation date”UNIX EPOCH和“architecture”。 其他都可以保持未设置,镜像就可以正常地导入。
两个唯一的必填字段是 `creation date`UNIX 纪元时间)和 `architecture`。 其他都可以保持未设置,镜像就可以正常地导入。
额外的属性主要是帮助用户弄清楚镜像是什么。 例如“description”属性是在“lxc image list”中可见的。 用户可以使用其他属性的键/值对来搜索特定镜像。
额外的属性主要是帮助用户弄清楚镜像是什么。 例如 `description` 属性是在 `lxc image list` 中可见的。 用户可以使用其它属性的键/值对来搜索特定镜像。
相反,这些属性用户可以通过“lxc image edit”来编辑“creation date”和“architecture”字段是不可变的。
相反,这些属性用户可以通过 `lxc image edit`来编辑,`creation date` 和 `architecture` 字段是不可变的。
##### 模板
模板机制允许在容器生命周期中的某一点生成或重新生成容器中的一些文件。
我们使用pongo2模板引擎来做这些我们将所有我们知道的容器导出到模板。 这样,你可以使用用户定义的容器属性或常规LXD属性的自定义镜像来更改某些特定文件的内容。
我们使用 [pongo2 模板引擎](https://github.com/flosch/pongo2)来做这些,我们将所有我们知道的容器信息都导出到模板。 这样,你可以使用用户定义的容器属性或常规 LXD 属性来自定义镜像,从而更改某些特定文件的内容。
正如你在上面的例子中看到的,我们使用在Ubuntu中的模板找出cloud-init并关闭一些init脚本。
正如你在上面的例子中看到的,我们在 Ubuntu 中使用它们来进行 `cloud-init` 的配置,并关闭一些 init 脚本。
### 创建你的镜像
LXD专注于运行完整的Linux系统这意味着我们期望大多数用户只使用干净的发行版镜像而不是只用自己的镜像。
LXD 专注于运行完整的 Linux 系统,这意味着我们期望大多数用户只使用干净的发行版镜像,而不是只用自己的镜像。
但是有一些情况下,你有自己的镜像是有的。 例如生产服务器上的预配置镜像,或者构建那些我们没有构建的发行版或者架构的镜像。
但是有一些情况下,你有自己的镜像是有必要的。 例如生产服务器上的预配置镜像,或者构建那些我们没有构建的发行版或者架构的镜像。
#### 将容器变成镜像
目前使用LXD构造镜像最简单的方法是将容器变成镜像。
目前使用 LXD 构造镜像最简单的方法是将容器变成镜像。
可以这么做
可以这么做
```
lxc launch ubuntu:14.04 my-container
@ -369,7 +369,7 @@ lxc exec my-container bash
lxc publish my-container --alias my-new-image
```
你甚至可以将一个容器过去的snapshot变成镜像:
你甚至可以将一个容器过去的快照变成镜像:
```
lxc publish my-container/some-snapshot --alias some-image
@ -379,25 +379,22 @@ lxc publish my-container/some-snapshot --alias some-image
构建你自己的镜像也很简单。
1.生成容器文件系统。 这完全取决于你使用的发行版。 对于Ubuntu和Debian它将用于启动。
2.配置容器中正常工作所需的任何东西(如果需要任何东西)。
3.制作该容器文件系统的tarball可选择压缩它。
4.根据上面描述的内容写一个新的metadata.yaml文件。
5.创建另一个包含metadata.yaml文件的压缩包。
6.用下面的命令导入这两个tarball作为LXD镜像
```
lxc image import <metadata tarball> <rootfs tarball> --alias some-name
```
1. 生成容器文件系统。这完全取决于你使用的发行版。对于 Ubuntu 和 Debian它将用于启动。
2. 配置容器中该发行版正常工作所需的任何东西(如果需要任何东西)。
3. 制作该容器文件系统的 tarball可选择压缩它。
4. 根据上面描述的内容写一个新的 `metadata.yaml` 文件。
5. 创建另一个包含 `metadata.yaml` 文件的 tarball。
6. 用下面的命令导入这两个 tarball 作为 LXD 镜像:`lxc image import <metadata tarball> <rootfs tarball> --alias some-name`
正常工作前你可能需要经历几次这样的工作,调整这里或那里,可能会添加一些模板和属性。
在一切都正常工作前你可能需要经历几次这样的工作,调整这里或那里,可能会添加一些模板和属性。
### 发布你的镜像
所有LXD守护程序都充当镜像服务器。除非另有说明否则加载到镜像存储中的所有镜像都会被标记为私有因此只有受信任的客户端可以检索这些镜像但是如果要创建公共镜像服务器你需要做的是将一些镜像标记为公开并确保你的LXD守护进程监听网络。
所有 LXD 守护程序都充当镜像服务器。除非另有说明,否则加载到镜像存储中的所有镜像都会被标记为私有,因此只有受信任的客户端可以检索这些镜像,但是如果要创建公共镜像服务器,你需要做的是将一些镜像标记为公开,并确保你的 LXD 守护进程监听网络。
#### 只运行LXD公共服务器
#### 只运行 LXD 公共服务器
最简单的共享镜像的方式是运行一个公共的LXD守护进程。
最简单的共享镜像的方式是运行一个公共的 LXD 守护进程。
你只要运行:
@ -411,35 +408,34 @@ lxc config set core.https_address "[::]:8443"
lxc remote add <some name> <IP or DNS> --public
```
他们就可以像任何默认的镜像服务器一样使用它们。 由于远程服务器添加了“-public”因此不需要身份验证并且客户端仅限于使用已标记为public的镜像。
他们就可以像使用任何默认的镜像服务器一样使用它们。 由于远程服务器添加了 `-public` 选项,因此不需要身份验证,并且客户端仅限于使用已标记为 `public` 的镜像。
要将镜像设置成公共的,只需“lxc image edit”它们并将public标志设置为true
要将镜像设置成公共的,只需使用 `lxc image edit` 编辑它们,并将 `public` 标志设置为 `true`
#### 使用一台静态web服务器
#### 使用一台静态 web 服务器
如上所述,“lxc image import”支持从静态http服务器下载。 基本要求是:
如上所述,`lxc image import` 支持从静态 https 服务器下载。 基本要求是:
*服务器必须支持具有有效证书的HTTPSTLS1.2和EC密钥
*当点击“lxc image import”提供的URL时服务器必须返回一个包含LXD-Image-Hash和LXD-Image-URL的HTTP标头。
* 服务器必须支持具有有效证书的 HTTPS、TLS 1.2 和 EC 算法。
* 当访问 `lxc image import` 提供的 URL 时,服务器必须返回一个包含 `LXD-Image-Hash``LXD-Image-URL` 的 HTTP 标头。
如果你想使它动态化,你可以让你的服务器查找LXD在请求镜像中发送的LXD-Server-Architectures和LXD-Server-Version的HTTP头。 这可以让你返回架构正确的镜像。
如果你想使它动态化,你可以让你的服务器查找 LXD 在请求镜像时发送的 `LXD-Server-Architectures``LXD-Server-Version` 的 HTTP 标头,这可以让你返回符合该服务器架构的正确镜像。
#### 构建一个简单流服务器
“ubuntu:”和“ubuntu-daily:”在远端不使用LXD协议“images:”是的),而是使用不同的协议称为简单流
`ubuntu:``ubuntu-daily:` 远端服务器不使用 LXD 协议(`images:` 使用而是使用称为简单流simplestreams的不同协议
简单流基本上是一个镜像服务器的描述格式使用JSON来描述产品以及相关产品的文件列表。
简单流基本上是一个镜像服务器的描述格式,使用 JSON 来描述产品以及相关产品的文件列表。
它被各种工具,如OpenStackJujuMAAS等用来查找下载或者做镜像系统LXD将它作为原生协议支持用于镜像检索
它被各种工具,如 OpenStack、Juju、MAAS 等用来查找、下载或者做镜像系统LXD 将它作为用于镜像检索的原生协议
虽然的确不是提供LXD镜像的最简单的方法但是如果你的镜像也被其一些工具使用,那这也许值得考虑一下。
虽然的确不是提供 LXD 镜像的最简单的方法,但是如果你的镜像也被其一些工具使用,那这也许值得考虑一下。
更多信息可以在这里找到。
关于简单流的更多信息可以在[这里](https://launchpad.net/simplestreams)找到。
### 总结
我希望关于如何使用LXD管理镜像以及构建和发布镜像这点给你提供了一个好点子。对于以前的LXC而言可以在一组全球分布式系统上得到完全相同的镜像是一个很大的进步并且让将来的道路更加可复制。
我希望这篇关于如何使用 LXD 管理镜像以及构建和发布镜像的文章能让你有所了解。对于以前的 LXC 而言,可以在一组全球分布式系统上得到完全相同的镜像是一个很大的进步,并且引导了更多可复制性的发展方向。
### 额外信息
@ -460,7 +456,7 @@ via: https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,54 +0,0 @@
# Build Strong Real-Time Streaming Apps with Apache Calcite
![Calcite](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/calcite.jpg?itok=CUZmjPjy "Calcite ")
Calcite is a data framework that lets you to build custom database functionality, explains Microsoft developer Atri Sharma in this preview to his upcoming talk at Apache: Big Data Europe, Nov. 14-16 in Seville, Spain.[Creative Commons Zero][2]Wikimedia Commons: Parent Géry
The [Apache Calcite][7] data management framework contains many pieces of a typical database management system but omits others, such as storage of data and algorithms to process data. In his talk at the upcoming [Apache: Big Data][6] conference in Seville, Spain, Atri Sharma, a Software Engineer for Azure Data Lake at Microsoft, will talk about developing applications using [Apache Calcite][5]'s advanced query planning capabilities. We spoke with Sharma to learn more about Calcite and how existing applications can take advantage of its functionality.
![Atri Sharma](https://www.linux.com/sites/lcom/files/styles/floated_images/public/atri-sharma.jpg?itok=77cvZWfw "Atri Sharma")
Atri Sharma, Software Engineer, Azure Data Lake, Microsoft[Used with permission][1]
**Linux.com: Can you provide some background on Apache Calcite? What does it do?**
Atri Sharma: Calcite is a framework that is the basis of many database kernels. Calcite empowers you to build your custom database functionality and use the required resources from Calcite. For example, Hive uses Calcite for cost-based query optimization, Drill and Kylin use Calcite for SQL parsing and optimization, and Apex uses Calcite for streaming SQL.
**Linux.com: What are some features that make Apache Calcite different from other frameworks?**
Atri: Calcite is unique in the sense that it allows you to build your own data platform. Calcite does not manage your data directly but rather allows you to use Calcite's libraries to define your own components. For eg, instead of providing a generic query optimizer, it allows defining custom query optimizers using the Planners available in Calcite.
**Linux.com: Apache Calcite itself does not store or process data. How does that affect application development?**
Atri: Calcite is a dependency in the kernel of your database. It is targeted for data management platforms that wish to extend their functionalities without writing a lot of functionality from scratch.
**Linux.com: Who should be using it? Can you give some examples?**
Atri: Any data management platform looking to extend their functionalities should use Calcite. We are the foundation of your next high-performance database!
Specifically, I think the biggest examples would be Hive using Calcite for query optimization and Flink for parsing and streaming SQL processing. Hive and Flink are full-fledged data management engines, and they use Calcite for highly specialized purposes. This is a good case study for applications of Calcite to further strengthen the core of a data management platform.
**Linux.com: What are some new features that youre looking forward to?**
Atri: Streaming SQL enhancements are something I am very excited about. These features are exciting because they will enable users of Calcite to develop real-time streaming applications much faster, and the strength and capabilities of these applications will be manifold. Streaming applications are the new de facto, and the strength to have query optimization in streaming SQL will be very useful for a large crowd. Also, there is discussion ongoing about temporal tables, so watch out for more!
--------------------------------------------------------------------------------
via: https://www.linux.com/news/build-strong-real-time-streaming-apps-apache-calcite
作者:[AMBER ANKERHOLZ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/aankerholz
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/atri-sharmajpg
[4]:https://www.linux.com/files/images/calcitejpg
[5]:https://calcite.apache.org/
[6]:http://events.linuxfoundation.org/events/apache-big-data-europe
[7]:https://calcite.apache.org/


@ -0,0 +1,146 @@
2017 is the year that front-end developers should go back and master the basics
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*1Xsnx4_M8uJc2klBxEtGLQ.jpeg)
In our fast-paced ecosystem, we tend to spend our time trying out the latest inventions, then arguing about them on the internet.
I'm not saying we shouldn't do that. But we should probably slow down a bit and take a look at the things that don't change all that much. Not only will this improve the quality of our work and the value we deliver, it will actually help us learn these new tools faster.
This post is a mix of my experience and my wishes for the New Year. And I want to hear your suggestions in the comments just as much as I want to share my own.
### Learn how to write readable code
Most of our work lies not in writing new code, but maintaining existing code. That means you end up reading code much more often than writing it, so you need to optimize your code for _the next programmer_, not for the interpreter.
I recommend reading these three amazing books, in this order, from shortest to longest:
* [The Art of Readable Code][1] by Dustin Boswell
* [Clean Code: A Handbook of Agile Software Craftsmanship][2] by Robert C. Martin
* [Code Complete: A Practical Handbook of Software Construction][3] by Steve McConnell
![](https://cdn-images-1.medium.com/max/1000/1*YQGwR6skf705fovSLCbmXQ.jpeg)
### Learn JavaScript deeper
When every week we have a new JavaScript framework that's better than any older framework, it's easy to spend most of your time learning frameworks rather than the language itself. If you're using a framework but don't understand how it works, _stop and start learning the language until you understand how the tools you use work_.
* A great start is [Kyle][4] Simpson's book series [You Don't Know JavaScript][5], which you can read online for free.
* [Eric Elliott][6] has a huge list of [JavaScript topics to learn in 2017][7].
* [Henrique Alves][8] has a list of [things you should know before using React][9] (actually any framework).
* [JavaScript Developers: Watch Your Language][10] by Mike Pennisi: understand the TC-39 process for new ECMAScript features.
### Learn functional programming
For years we wanted classes in JavaScript. Now we finally have them but don't want to use them anymore. Functions are all we want! We even write HTML using functions (JSX).
* [Functional-Light JavaScript][11] by Kyle Simpson.
* Professor Frisby's [Mostly adequate guide to functional programming ebook][12] and [his free course][13].
![](https://cdn-images-1.medium.com/max/1000/1*Helkj3sq3oVOc-dtjRgrYQ.jpeg)
### Learn design basics
As front-end developers, we're closer to users than anybody else on the team, maybe even closer than designers. And if designers have to verify every pixel you put on screen, you're doing something wrong.
* Design for Hackers: [a book][14] and [a free course][15] by [David Kadavy][16].
* [Design for Non-Designers][17] talk by [Tracy Osborn][18].
* [Design of Web Applications][19] by [Nathan Barry][20].
* [On Web Typography][21] by [Jason Santa Maria][22].
* [The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity][23] by Alan Cooper.
* A few articles on animation in UI: [How to Use Animation to Improve UX][24], [Transitional Interfaces][25].
### Learn how to work with humans
Some of us come to programming because we prefer to interact with computers more than with humans. Unfortunately, that's not how it works.
We rarely work in isolation: we have to talk to other developers, designers, managers, and sometimes even users. That's hard. But it's important if you want to really understand what you're doing and why, because that's where the value in what we do lies.
* [Soft Skills: The software developer's life manual][26] by [John Sonmez][27].
* [The Clean Coder: A Code of Conduct for Professional Programmers][28] by Robert C. Martin.
* [Start with No: The Negotiating Tools that the Pros Don't Want You to Know][29] by Jim Camp.
![](https://cdn-images-1.medium.com/max/1000/1*zv6BXllLujNl-vDqkXQMqw.jpeg)
### Learn how to write for humans
A big portion of communication with our colleagues and other people is textual: task descriptions and comments, code comments, Git commits, chat messages, emails, tweets, blog posts, etc.
Imagine how much time people spend reading and understanding all that. If you can reduce this time by writing more clearly and concisely, the world will be a better place to work.
* [On Writing Well: The Classic Guide to Writing Nonfiction][30] by William Zinsser.
* [The Elements of Style][31] by William Strunk and E. B. White.
* [Orwell's rules on writing][32].
* In Russian: awesome [Glavred course][33].
### Learn the old computer science wisdom
Front-end development isn't just animated dropdown menus any more. It's more complicated than ever before. Part of that notorious "JavaScript fatigue" stems from the increased complexity of the tasks we have to solve.
This, however, means that it's time to learn from all wisdom that non-front-end developers have built up over the decades. And this is where I want to hear your recommendations the most.
Here are a couple resources I personally would recommend on this:
* [Learn To Think Like A Computer Scientist][34] course at Coursera.
* [The five programming books that meant most to me][35] by [DHH][36]
* * *
What would you recommend? What are you going to learn in 2017?
--------------------------------------------------------------------------------
作者简介:
![](https://cdn-images-1.medium.com/fit/c/60/60/0*FXw8cxdKYar82R9X.jpeg)
Web developer, passionate photographer and owner of crazy dogs.
--------------------------------------------------------------------------------
via: https://medium.freecodecamp.com/what-to-learn-in-2017-if-youre-a-frontend-developer-b6cfef46effd#.ss9xbwrew
作者:[Artem Sapegin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.com/@sapegin
[1]:https://www.amazon.com/gp/product/0596802293/
[2]:https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882/
[3]:https://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/
[4]:https://medium.com/u/5dccb9bb4625
[5]:https://github.com/getify/You-Dont-Know-JS
[6]:https://medium.com/u/c359511de780
[7]:https://medium.com/javascript-scene/top-javascript-frameworks-topics-to-learn-in-2017-700a397b711#.zhnbn4rvg
[8]:https://medium.com/u/b6c3841651ac
[9]:http://alves.im/blog/before-dive-into-react.html
[10]:https://bocoup.com/weblog/javascript-developers-watch-your-language
[11]:https://github.com/getify/Functional-Light-JS
[12]:https://github.com/MostlyAdequate/mostly-adequate-guide
[13]:https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript
[14]:https://www.amazon.com/Design-Hackers-Reverse-Engineering-Beauty-ebook/dp/B005J578EW
[15]:http://designforhackers.com/
[16]:https://medium.com/u/5377a93ef640
[17]:https://youtu.be/ZbrzdMaumNk
[18]:https://medium.com/u/e611097a5bd4
[19]:http://nathanbarry.com/webapps/
[20]:https://medium.com/u/ac3090433602
[21]:https://abookapart.com/products/on-web-typography
[22]:https://medium.com/u/8eddcb9e4ac4
[23]:https://www.amazon.com/Inmates-Are-Running-Asylum-Products-ebook/dp/B000OZ0N62/
[24]:http://babich.biz/how-to-use-animation-to-improve-ux/
[25]:https://medium.com/@pasql/transitional-interfaces-926eb80d64e3#.igcwawszz
[26]:https://www.amazon.com/Soft-Skills-software-developers-manual/dp/1617292397/
[27]:https://medium.com/u/56e8cba02b
[28]:https://www.amazon.com/Clean-Coder-Conduct-Professional-Programmers/dp/0137081073/
[29]:https://www.amazon.com/Start-No-Negotiating-Tools-that-ebook/dp/B003EY7JEE/
[30]:https://www.amazon.com/gp/product/0060891548/
[31]:https://www.amazon.com/Elements-Style-4th-William-Strunk/dp/0205313426/
[32]:http://www.economist.com/blogs/prospero/2013/07/george-orwell-writing
[33]:http://maximilyahov.ru/glvrd-pro/
[34]:https://www.coursera.org/specializations/algorithms
[35]:https://signalvnoise.com/posts/3375-the-five-programming-books-that-meant-most-to-me
[36]:https://medium.com/u/54bcbf647830


@ -0,0 +1,150 @@
#rusking translating
How to get started as an open source programmer
============================================================
![How to get started as an open source programmer](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/programming_keys.jpg?itok=_VDcN66X "How to get started as an open source programmer")
Image credits : 
Zagrev on [Flickr][1]. [CC BY-SA 2.0][2]
Looking out at the world of technology is exciting. It has a lot of moving parts, and it seems the further you dig into it, the deeper it gets, and then it's [turtles all the way down][3]. For that very reason, technology is also overwhelming. Where do you start if you're keen to join in and help shape the way the modern world functions? What's the first step? What's the twentieth step?
The first thing to understand is that open source is open. This might seem obvious, but the phrase "open source" is thrown around so often these days that sometimes people forget it's just a description of a cultural phenomenon, not the name of a Fortune 500 company. Unlike other jobs or groups, you don't have to interview or complete a sign-up sheet or registration form to become an open source programmer. All you do to become an open source programmer is _program_, and then share your code, ideally with a guarantee that the code remains open regardless of how it's used.
That's it. You're an open source programmer!
You now have your destination, but what about the logistics?
### Skill trees
Have you ever played an RPG? In these games, there's the concept of linear "skill trees". When you play, you acquire basic skills that you build upon to "level up" and get new skills, which you use to acquire new ones and "level up" again. And so on.
Becoming a programmer is a little like adding to your skill tree. You get some basic skills, you practice them until they're second nature, and then you get new skills, and so on, and then you are progressing along your chosen skill tree.
You'll find you'll encounter more than one skill tree. Open source has many entry points and many individuals with their own unique strengths, talents, and interests. However, certain definable skills contribute to being a great programmer, and developing them is an important part of participating successfully in open source projects.
### Scripting
![Scroll--How to program ](https://opensource.com/sites/default/files/scroll.png "Scroll--How to program")
One of the biggest advantages of a POSIX system like Linux or BSD is that every time you use your computer, you've got the opportunity to practice a little programming. If you have no idea where to start programming, then begin with how you work. Find repetitive tasks that you perform every day, and start automating them. This step can be something simple, like converting or re-sizing batches of photos, checking email, or even just getting the five applications you use each day launched with one click. Whatever the task, take the time to automate something for yourself.
If you can do something from a terminal, then it can be scripted. Learn `bash` or `tcsh` and let system scripting be your introduction to writing code and to how your system works.
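To make that concrete, here is a hedged sketch of the kind of throwaway automation script this paragraph has in mind; it assumes ImageMagick's `mogrify` is installed, and the directory paths are hypothetical placeholders you would adapt:

```
#!/usr/bin/env bash
# Sketch only: shrink every JPEG in a folder to at most 1024px on its longest side.
# Assumes ImageMagick's mogrify is available; the paths are hypothetical placeholders.
set -euo pipefail

src_dir="${1:-$HOME/Pictures/inbox}"
out_dir="${2:-$HOME/Pictures/resized}"

mkdir -p "$out_dir"
cp "$src_dir"/*.jpg "$out_dir"/                # work on copies, keep originals intact
mogrify -resize '1024x1024>' "$out_dir"/*.jpg  # '>' means "only shrink images larger than this"
echo "Resized images are in $out_dir"
```

Even a throwaway script like this exercises variables, quoting, globs, and exit-status handling, which is exactly the point of starting with automation.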
### Sysadmin
![Caesar head](https://opensource.com/sites/default/files/caesar_0.png "Caesar head")
From this point, you can continue on to become a programmer, or you can cross over to a different skill tree entirely: that of systems administration. The two careers have some overlap (a good sysadmin ought to have a little programming experience, and should be ready to wield Python, Perl, or a similar language to develop unique solutions), but a _programmer_ is someone who builds with code day in and day out.
### Programmer
![Wizard hat--How to program](https://opensource.com/sites/default/files/pointy-hat.png "Wizard hat--How to program")
Open source is a great way to learn programming skills; you get to look at other people's code, borrow ideas and techniques, learn from their mistakes, get a critique of your own code, and if you use Linux or BSD, the _entire_ stack is open to you—as far as the eye can see, it's all yours for the taking.
That's what the travel brochure says, anyway. The reality is that you're probably not going to start digging into the source code of a project and come out the other side with the sudden realization that you accidentally learned to code. Programming is hard work. If it wasn't, everyone would do it.
Luckily, programming is logical and structured, so it builds upon itself. You might not fall into programming, but the more you poke at it, the more you start to understand it.
Understanding how to control and automate a computer is one thing, but knowing how to write the stuff that other people want to automate is the point that you cross over into the realm of _programming_.
### Polyglot
![Parrot--How to Program](https://opensource.com/sites/default/files/parrot-head.png "Parrot--How to Program")
All programming languages aim to do the same thing: make computers compute. Choosing one is a mix of what you think you want to do, what (if any) language is in common use in the industry you are targeting, and what language you happen to best understand given the materials available to you and your learning style.
With a little bit of research, you can get a good idea of a language's complexity, and then decide what to try first based on your own level of comfort.
Another way to choose a language is to look at your goal, find out if other people are working toward the same thing, and then look at what they're using. If your aim is to develop desktop tools, you might learn C and Vala for one, or C++ for another.
At the end of the day, though, don't get overwhelmed with all the available choices. Languages stack well. Once you learn one programming language well enough to fall back on it when you need to get something done, you'll find it a lot easier to pick up another one. A "language" is just a set of syntax and rules, after all; learn one, and it's fairly trivial to superimpose new rules over the same theory.
The primary goal is to learn a language. Pick the one that makes sense to you or the one that's most appealing to you or the one that your friends are using or the one that has the documentation you understand best, but focus on one and learn it.
### Open Whazzit?
Whether or not you're just learning to program or you're an old pro just getting into open source, before jumping head first into this brave new world, you need to learn what makes open source, well, "open source."
Claiming software is open source is the latest marketing approach some software vendors are wielding. Unfortunately, some vendors just mean they've released a public API or that they're receptive ("open") to suggestions from their users. The word "open" isn't trademarked and no committee governs how or when the word is used. However, the [Open Source Initiative][4], co-founded by the late Ian Murdock of Debian Linux, [defines][5] what open source means (licenses that "allow software to be freely used, modified, and shared"), and formally approves and [tracks][6] licenses deemed truly "open."
Apply one of those licenses to your code, and you're an open source programmer. Congratulations!
### Community
![Community--How to program](https://opensource.com/sites/default/files/minions.png "Community--How to program")
Ask any open source enthusiast and they'll tell you the most important thing about open software is the people. Without motivated contributors, software stagnates. Computers need users, bug reporters, designers, and programmers.
If you want to join or cultivate the global open source community, you need to become a part of the community, even if you're not a people person. This usually encompasses subscribing to mailing lists, joining IRC channels, or hopping onto forums, and starting at the bottom of the totem pole. Any mature community has been around long enough to see prospective members come and go, so you have to understand that when you saunter in ready to change their world, before they all agree to your master plan, you have to prove that you're not going to disappear after three months when something sparkly on the other side of the Net catches your eye. Be ready for the long haul if you aspire to something big.
If you're just around to lend a hand, then that's acceptable, too. I myself have submitted small patches to projects, and sometimes the project leads think these are good and other times they reject them. If the rejected patch is important to me, I maintain it for myself and clients, and otherwise I move forward.
It's part of the process.
Where do these communities exist? It depends on the project. Some projects have dedicated community managers who help bring everyone together in public spaces for everyone to see. Other projects form around forums, use mailing lists, or even issue trackers. Look for the communities, and you'll find them.
Just as importantly, though, look at the code! They call it open "source" for a reason, so be sure to find the code and take a peek. Even if it's still above your level of full comprehension, it gives you an idea of how the software project organizes itself and possibly where they might need assistance. How is the code organized? Does the code have comments? Is it tidy with a consistent style? Review the documentation, particularly the README, LICENSE, or COPYING files.
Don't underestimate the importance of following through on the promise of open code. It's the reason you're getting involved, so look at it critically from every angle to see what you can learn from it and how you might contribute.
Finding the best community is a lot like dating, but specifically it's like dating in [Groundhog Day][7]. It takes time, and the first couple of tries might fall flat. The more you go through the process, the more you start to feel déjà vu. Eventually, though, you learn enough about yourself and your interests, you find the right combination of other people, and you settle in somewhere. Have patience, and let it happen naturally.
### Actions > Words
![Wingfoot--How to Program](https://opensource.com/sites/default/files/wingfoot.png "Wingfoot--How to Program")
Being an open source programmer is about the code (the "source" part of open source), and ideas are a dime a dozen. What speaks volumes is producing. You need to show you know what you're doing, willing to get your hands dirty, spend your time on the project, and can back up your ideas with something that compiles.
To do that effectively, of course, you should do your homework on the project itself, including learning how a project prefers to receive submissions and which branches are the stable and development ones.
To approach getting started:
* Get familiar with a project and its development culture, and be respectful of it.
* Write patches, bug fixes, or small, requested features, and submit them.
* Don't get discouraged if your work is rejected. You are not being rejected personally, your work was evaluated and the development team made a call.
* Don't get discouraged if your work is accepted, but changed beyond recognition.
* Rinse, repeat, and try new and bigger changes.
![Leaderboard--How to program](https://opensource.com/sites/default/files/leaderboard.png "Leaderboard--How to program")
There is no leaderboard in open source. Some sites try to make it seem like they have such a thing, but there isn't one. Participate, contribute, add to the pool of ideas, add to the stash of commits, and you're doing it right.
### Develop
![Treasure Map--How to Program](https://opensource.com/sites/default/files/treasure-map.png "Treasure Map--How to Program")
Programming in any environment is always, ultimately, about personal development. Whether you're searching for new ways of solving problems, looking for new ways to optimize code, learning a new language, or learning how to deal with other people better, you never want to stop growing. The more you develop yourself, the more a project benefits.
Growth, both personal and professional, is the final one on the list, but it actually persists through the entire process. Becoming an open source programmer isn't like getting a government job; it's a process. You learn, you share, you keep learning, you get distracted and write a [Game of Life][8] implementation, and you learn some more.
This process is what open source is about: the freedom to develop, in every sense of the word. So go find your skill tree, choose your super powers, pay your dues, level up, and get involved.
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/penguinmedallion200x200.png?itok=ROQSR50J)
Seth Kenlon - Seth Kenlon is an independent multimedia artist, free culture advocate, and UNIX geek. He is one of the maintainers of the Slackware-based multimedia production project, http://slackermedia.ml
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/how-get-started-open-source-programmer
作者:[Seth Kenlon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/seth
[1]:https://www.flickr.com/photos/zagrev/79470567/in/photolist-82iQc-pijuye-9CmY3Z-c1EJAf-4Y65Zt-dhLziB-51QVc-hjqkN-4rNTuC-5Mbvqi-5MfK13-7dh6AW-2fiSu7-48R7et-5sC5ck-qf1TE9-48R6qv-pXuSG9-KFBLJ-95jQ8U-jBR7-dhLpfV-5bCZVH-9vsPTT-bA2nvP-bn7cWw-d7j8q-ubap-pij32X-7WT6iw-dcZZm2-3knisv-4dgN2f-bc6V1-E9xar-EovvU-6T71Mg-pi5zwE-5SR26m-dPKXrn-HFyzb-3aJF9W-7Rvz19-zbewj-xMsv-7MFi3u-2mVokJ-nsVAx-7g5k-4jCbbP
[2]:https://creativecommons.org/licenses/by-nc-sa/2.0/
[3]:https://en.wikipedia.org/wiki/Turtles_all_the_way_down
[4]:http://opensource.org/
[5]:https://opensource.org/licenses
[6]:https://opensource.org/licenses/category
[7]:https://en.wikipedia.org/wiki/Groundhog_Day_(film)
[8]:https://en.wikipedia.org/wiki/Conway's_Game_of_Life


@ -0,0 +1,105 @@
# rusking translating
What engineers and marketers can learn from each other
============================================================
### Marketers think engineering is all math; engineers think marketing is all fluff. They're both wrong.
![What engineers and marketers can learn from each other](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_fortunecookie3.png?itok=dlRX4vO9 "What engineers and marketers can learn from each other")
Image by : 
opensource.com
After many years of practicing marketing in the B2B tech world, I think I've heard just about every misconception that engineers seem to have about marketers. Here are some of the more common:
* "Marketing is a waste of money that we should be putting into actual product development."
* "Those marketers just throw stuff against the wall and hope it sticks. Where's the discipline?"
* "Does anyone actually read this stuff?"
* "The best thing a marketer can tell me is how to unsubscribe, unfollow, and unfriend."
And here's my personal favorite:
_"Marketing is all fluff."_
That last one is simply incorrect—but more than that, it's actually a major impediment to innovation in our organizations today.
Let me explain why.
### Seeing my own reflection
I think these comments from engineers bother me so much because I see a bit of my former self in them.
You see, I was once as geeky as they come—and was proud of it. I hold a Bachelor's in electrical engineering from Rensselaer Polytechnic Institute, and began my professional career as an officer in the US Air Force during Desert Storm. There, I was in charge of developing and deploying a near real-time intelligence system that correlated several sources of data to create a picture of the battlefield.
After I left the Air Force, I planned to pursue a doctorate from MIT. But my Colonel convinced me to take a look at their business school. "Are you really going to be in a lab?" he asked me. "Are you going to teach at a university? Jackie, you are gifted at orchestrating complex activities. I think you really need to look at MIT Sloan."
So I took his advice, believing I could still enroll in a few tech courses at MIT. Taking a marketing course, however, would certainly have been a step too far—a total waste of time. I continued to bring my analytical skills to bear on any problem put in front of me.
Soon after, I became a management consultant at The Boston Consulting Group. Throughout my six years there, I consistently heard the same feedback: "Jackie, you're not visionary enough. You're not thinking outside the box. You assume your analysis is going to point you to the answer."
And of course, I agreed with them—because that's the way the world works, isn't it? What I realize now (and wish I'd discovered far earlier) is that by taking this approach I was missing something pivotal: the open mind, the art, the emotion—the human and creative elements.
All this became much more apparent when I joined Delta Air Lines soon after September 11, 2001, and was asked to help lead consumer marketing. Marketing _definitely_ wasn't my thing, but I was willing to help however they needed me to.
But suddenly, my rulebook for achieving familiar results was turned upside down. Thousands of people (both inside and outside the airline) were involved in this problem. Emotions were running high. I was facing problems that required different kinds of solutions, answers I couldn't reach simply by crunching numbers.
That's when I learned—and quickly, because we had much work to do if we were going to pull Delta back up to where it deserved to be—that marketing can be as much a strategic, problem-oriented and user-centered function as engineering is, even if these two camps don't immediately recognize it.
### Two cultures
That "great divide" between engineering and marketing is deep indeed—so entrenched that it resembles what C.P. Snow once called[ the "two cultures" problem][1]. Scientifically minded engineers and artistically minded marketers tend to speak different languages, and they're acculturated to believe they value divergent things.
But the fact is that they're more similar than they might think. [A recent study][2] from the University of Washington (co-sponsored by Microsoft, Google, and the National Science Foundation) identified "what makes a great software engineer," and (not surprisingly) the list of characteristics sounds like it could apply to great marketers, too. For example, the authors list traits like:
* Passion
* Open-mindedness
* Curiosity
* Cultivation of craft
* Ability to handle complexity
And these are just a few! Of course, not every characteristic on the list applies to marketers—but the Venn diagram connecting these "two cultures" is tighter than I believe most of us think. _Both_ are striving to solve complex user and/or customer challenges. They just take a different approach to doing it.
Reading this list got me thinking: _What if these two personalities understood each other just a little bit more? Would there be power in that?_
You bet. I've seen it firsthand at Red Hat, where I'm surrounded by people I'd have quickly dismissed as "crazy creatives" during my early days. And I'd be willing to bet that a marketer has (at one time or another) looked at an engineer and thought, "Look at this data nerd. Can't see the forest beyond the trees."
I now understand the power of having both perspectives in the same room. And in reality, engineers and marketers are _both_ working at the _intersection of customers, creativity, and analytics_. And if they could just learn to recognize the ways their personalities complement each other, we could see tremendously positive results—results far more surprising and innovative than we'd see if we kept them isolated from one another.
### Listening to the crazies (and the nerds)
Case in point: _The Open Organization_.
In my role at Red Hat I spent much of my day thinking about how to extend and amplify our brand—but never in a million years would I have thought to do it by asking our CEO to write a book. That idea came from a cross-functional team of those "crazy creatives," a group of people I rely on to help me imagine new and innovative solutions to branding challenges.
When I heard the idea, I recognized it right away as a quintessentially Red Hat approach to our work: something that would be valuable to a community of practitioners, and something that helps spread the message of openness just a little farther. By prioritizing these two goals above all others, we'd reinforce Red Hat's position as a positive force in the open source world, a trusted expert ready to help customers navigate the turbulence of [digital disruption][3].
Here's the clincher: _That's exactly the same spirit guiding Red Hat engineers tackling problems of code._ The group of Red Hatters urging me to help make _The Open Organization_ a reality demonstrated one of the very same motivations as the programmers that make up our internal and external communities: a desire to share.
In the end, bringing _The Open Organization_ to life required help from across the spectrum of skills—both the intensely analytic and the beautifully artistic. Everyone pitched in. The project only cemented my belief that engineers and marketers are more alike than different.
But it also reinforced something else: The realization that openness shows no bias, no preference for a culture of engineering or a culture of marketing. The idea of a more open world can inspire them both equally, and the passion it ignites ripples across the artificial boundaries we draw around our groups.
That hardly sounds like fluff to me.
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/portrait_jackieyeaney_400x600.jpg?itok=zV4bnrzI)
Jackie Yeaney - Chief Marketing Officer at Ellucian
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/17/1/engineers-marketers-can-learn
作者:[Jackie Yeaney][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jackie-yeaney
[1]:https://en.wikipedia.org/wiki/The_Two_Cultures#Implications_and_influence
[2]:https://faculty.washington.edu/ajko/papers/Li2015GreatEngineers.pdf
[3]:https://opensource.com/open-organization/16/7/future-belongs-open-leaders


@ -0,0 +1,128 @@
How to choose your first programming language
============================================================
![How to choose your first programming language](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDU_OSDC_IntroOS_520x292_FINAL.png?itok=va-tdc8j "How to choose your first programming language")
Image by : 
opensource.com
The reasons for learning to program are as varied as the people who want to learn. You might have a program you want to make, or maybe you just want to jump in. So, before choosing your first programming language, ask yourself: Where do you want that program to run? What do you want that program to do?
Your reasons for learning to code should inform your choice of a first language.
_In this article, I use "code," "program," and "develop" interchangeably as verbs, while "code," "program," "application," and "app" interchangeably as nouns. This is to reflect language usage you may hear._
### Know your device
Where your programs will run is a defining factor in your choice of language.
Desktop applications are the traditional software programs that run on a desktop or laptop computer. For these you'll be writing code that only runs on a single computer at a time. Mobile applications, known as apps, run on portable communications devices using iOS, Android, or other operating systems. Web applications are websites that function like applications.
Web development is often broken into two subcategories, based on the web's client-server architecture:
* Front-end programming, which is writing code that runs in the web browser itself. This is the part that faces the user, or the "front end" of the program. It's sometimes called "client-side" programming, because the web browser is the client half of the web's client-server architecture. The web browser runs on your local computer or device.
* Back-end programming, which is also known as "server-side" programming, the code written runs on a server, which is a computer you don't have physical access to.
### What to create
Programming is a broad discipline and can be used in a variety of fields. Common examples include:
* data science,
* web development,
* game development, and
* work automation of various types.
Now that we've looked at why and where you want to  program, let's look at two great languages for beginners.
### Python
[Python][2] is one of the most popular languages for first-time programmers, and that is not by accident. Python is a general-purpose language. This means it can be used for a wide range of programming tasks. There's almost _nothing_ you can't do with Python. This lets a wide range of beginners make practical use of the language. Additionally, Python has two key design features that make it great for new programmers: a clear, English-like [syntax][3] and an emphasis on code [readability][4].
A language's syntax is essentially what you type to make the language perform. This can include words, special characters (like `;`, `$`, `%`, or `{}`), white space, or any combination. Python uses English for as much of this as possible, unlike other languages, which often use punctuation or special characters. As a result, Python reads much more like a natural, human language. This helps new programmers focus on solving problems, and they spend less time struggling with the specifics of the language itself.
Combined with that clear syntax is a focus on readability. When writing code, you'll create logical "blocks" of code, sections of code that work together for some related purpose. In many languages, those blocks are marked (or delimited) by special characters. They may be enclosed in `{}` or some other character. The combination of block-delimiting characters and your ability to write your code in almost any fashion can decrease readability. Let's look at an example.
Here's a small function, called "fun," which takes a number, `x` as its input. If `x` equals **0**, it runs another function called `no_fun` (which does something that's no fun). That function takes no input. Otherwise, it runs the function `big_fun`, using the same input, `x`.
This function defined in the ["C" language][5] could be written like this:
```
void fun(int x)
{
    if (x == 0) {
        no_fun();
    } else {
        big_fun(x);
    }
}
```
or, like this:
```
void fun(int x) { if (x == 0) {no_fun(); } else {big_fun(x); }}
```
Both are functionally equivalent and both will run. The `{}` and `;` tell us where different parts of the block are; however, one is _clearly_ more readable to a human. Contrast that with the same function in Python:
```
def fun(x):
    if x == 0:
        no_fun()
    else:
        big_fun(x)
```
In this case, there's only one option. If the code isn't structured this way, it won't work, so if you have code that works, you have code that's readable. Also, notice the difference in syntax. Other than `def`, the words in the Python code are English and would be clear to a broad audience. In the C language example `void` and `int` are less intuitive.
Python also has an excellent ecosystem. This means two things. First, you have a large, active community of people using the language you can turn to when you need help and guidance. Second, it has a large number of preexisting libraries, which are chunks of code that perform special functions. These range from advanced mathematical processing to graphics to computer vision to almost anything you can imagine.
Python has two drawbacks to it being your first language. The first is that it can sometimes be tricky to install, especially on computers running Windows. (If you have a Mac or a Linux computer, Python is already installed.) Although this issue isn't insurmountable, and the situation is improving all the time, it can be a deterrent for some people. The second drawback is for people who specifically want to build websites. While there are projects written in Python (like [Django][6] and [Flask][7]) that let you build websites, there aren't many options for writing Python that will run in a web browser. It is primarily a back-end or server-side language.
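As a small, hedged aside that is not from the original article: on a Mac or Linux machine you can confirm which interpreter is preinstalled straight from the terminal, for example:

```
$ python --version
$ python3 --version
```

One or the other may be missing depending on the system, which is exactly the kind of installation wrinkle mentioned above.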
### JavaScript
If you know your primary reason for learning to program is to build websites, [JavaScript][8] may be the best choice for you. JavaScript is _the_ language of the web. Besides being the default language of the web, JavaScript has a few advantages as a beginner language.
First, there's nothing to install. You can open any text editor (like Notepad on Windows, but not a word processor like Microsoft Word) and start typing JavaScript. The code will run in your web browser. Most modern web browsers have a JavaScript engine built in, so your code will run on almost any computer and a lot of mobile devices. The fact that you can run your code immediately in a web browser provides a _very_ fast feedback loop, which is good for new coders. You can try something and see the results very quickly.
While JavaScript started life as a front-end language, an environment called [Node.js][9] lets you write code that runs in a web browser or on a server. Now JavaScript can be used as a front-end or back-end language. This has led to an increase in its popularity. JavaScript also has a huge number of packages that provide added functionality to the core language, allowing it to be used as a general-purpose language, and not just as the language of web development. Like Python, JavaScript has a vibrant, active ecosystem.
Despite these strengths, JavaScript is not without its drawbacks for new programmers. The [syntax of JavaScript][10] is not as clear or English-like as Python. It's much more like the C example above. It also doesn't have readability as a key design principle.
### Making a choice
It's hard to go wrong with either Python or JavaScript as your first language. The key factor is what you intend to do. Why are you learning to code? Your answer should influence your decision most heavily. If you're looking to make contributions to open source, you will find a _huge_ number of projects written in both languages. In addition, many projects that aren't primarily written in JavaScript still make use of it for their front-end component. As you're making a choice, don't forget about your local community. Do you have friends or co-workers who use either of these languages? For a new coder, having live support is very important.
Good luck and happy coding.
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/kojo_headshot_pro_square.jpg?itok=jv1kT8T0)
Kojo Idrissa - I'm a new software developer (1 year) who changed careers from accounting and university teaching. I've been a fan of Open Source software since around the time the term was coined, but I didn't have a NEED to do much coding in my prior careers. Tech-wise, I focus on Python, automated testing, and learning Django. I hope to learn more JavaScript soon. Topic-wise, I like to focus on helping new people get started with programming or getting involved in contributing to Open Source projects. I also focus on inclusive culture in tech environments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/choosing-your-first-programming-language
作者:[Kojo Idrissa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/transitionkojo
[1]:https://opensource.com/article/17/1/choosing-your-first-programming-language?rate=fWoYXudAZ59IkAKZ8n5lQpsa4bErlSzDEo512Al6Onk
[2]:https://www.python.org/about/
[3]:https://en.wikipedia.org/wiki/Python_syntax_and_semantics
[4]:https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Indentation
[5]:https://en.wikipedia.org/wiki/C_(programming_language)
[6]:https://www.djangoproject.com/
[7]:http://flask.pocoo.org/
[8]:https://en.wikipedia.org/wiki/JavaScript
[9]:https://nodejs.org/en/
[10]:https://en.wikipedia.org/wiki/JavaScript_syntax#Basics

View File

@ -0,0 +1,89 @@
Tips for non-native English speakers working on open source projects
============================================================
![Tips for non-native English speakers working on open source projects](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/world_hands_diversity.png?itok=LMT5xbxJ "Tips for non-native English speakers working on open source projects")
Image by: opensource.com
The primary language of most open source projects is English, but open source users and contributors span the globe. Non-native speakers face many communication and cultural challenges when participating in the ecosystem.
As non-native English speakers and contributors to OpenStack, we will share the challenges we face, how to overcome them, and best practices for easing the onboarding of non-native speakers. We are based in Japan, Brazil, and China, and work daily with the huge OpenStack community that is spread around the world.
The official language of OpenStack is English, which means we communicate daily as non-native speakers.
### Challenges
Non-native English speakers face specific communication challenges when starting out in open source communities: these challenges stem from limited language skills and cultural differences.
### Language skills
Let's focus on the specific language skills behind reading, writing, listening, and speaking.
Reading: This is the easiest but also the most important skill. It is the easiest because if you can't understand what is written you have the opportunity to read it again, or as many times as needed. If you encounter an uncommon phrase, expression, or abbreviation, you can use a dictionary or translator. On the other hand, it is the most important skill because for most open source projects the main means of communication are mailing lists and IRC.
Writing: English grammar is an issue, especially for speakers of languages that structure sentences differently. This may pose a problem when writing emails or communicating via IRC channels. For some, writing long, polished sentences is difficult, so they rely on simpler sentences, which are easy to write and still convey the meaning.
Listening: This is more problematic than reading and writing for non-native speakers. Normally, conversation between native English speakers is very fast, which makes following the discussions difficult for those still learning and limits their participation. Furthermore, trying to understand the variety of accents in a globally spread community adds to the complexity. Interestingly, American pronunciation is often easier to understand than others.
Speaking: Speaking is more difficult than listening because the participant's vocabulary may be a bit limited. Furthermore, English's phonemes and grammar are often very different from a non-native speaker's mother language, making an interaction even more difficult to understand.
### Cultural differences
Each culture has different norms when interacting with other people in the open source community. For example, the Japanese tend not to say yes or no clearly as a way to respect others and to avoid fighting each other. This is often very different from other cultures and may cause misunderstanding of what was expressed.
In Chinese culture, people prefer to just say yes, instead of saying no or trying to negotiate. In a globally distributed community such as OpenStack, this often leads to a lack of confidence when expressing opinions. Furthermore, Chinese people like to list the facts first and give the thesis at the end, and this can cause confusion for people from other cultures because it is not what they expect.
A Brazilian, for instance, may find that discussions are driven in a similar way; however, people from some other cultures are very short and direct in their responses, which may sound a bit rude.
### Overcoming obstacles
Challenges related to language skills are easier to overcome than cultural ones. Cultural differences need to be respected, while English skills can always be improved.
In order to brush up on your language skills, be in contact with the language as much as you can. Do not think about your limitations. Just do your best and you will improve eventually.
Read as much as you can, because this will help you gather vocabulary. Communicating through chat and mailing lists daily helps, too. Some tools, such as real-time dictionaries and translators, are very useful with these platforms.
Talking to others or yourself helps you become comfortable speaking out more frequently. Having one-on-one conversations to express your ideas is easier than discussing in larger groups.
### Onboarding newcomers
A few initiatives from both newcomers and native speakers may positively affect the onboarding process.
### Newcomers
Speak and write your opinion, and ask your questions; this participation is always a good opportunity to exercise your English. Do not be afraid.
For meetings, make sure you prepare yourself in advance so you will be comfortable with the subject and more confident about the opinions you are expressing.
Make friends who are English speakers and talk more to practice your English skills.
Writing blog posts and technical articles in English is also a great idea.
### Tips for native English speakers
Please speak slowly and use simple words and sentences. Don't make fun of non-native English speakers if you find something wrong with the English they use. Try to encourage newcomers to express their opinions, and make them comfortable enough to do so.
_This article was co-written by Masayuki Igawa, Dong Ma, and Samuel de Medeiros Queiroz. Learn more in their talk at linux.conf.au 2017 ([#lca2017][1]) in Hobart: [Non-native English speakers in Open Source communities: A True Story.][2]_
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/myface_s.jpg?itok=-dy9_LQd)
Masayuki Igawa - Masayuki Igawa has been a software engineer for over 15 years, working on a wide range of software projects and developing open source software related to the Linux kernel and virtualization. He's been an active technical contributor to OpenStack since 2013. He is a core member of some OpenStack QA projects such as Tempest, subunit2sql, openstack-health, and stackviz. He currently works for HPE's Upstream OpenStack team to make OpenStack better for everyone. He has previously been a speaker at OpenStack Summits.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/non-native-speakers-take-open-source-communities
作者:[Masayuki Igawa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/masayukig
[1]:https://twitter.com/search?q=%23lca2017&src=typd
[2]:https://linux.conf.au/schedule/presentation/70/

View File

@ -1,227 +0,0 @@
alim0x translating
The (updated) history of Android
============================================================
### Follow the endless iterations from Android 0.5 to Android 7 and beyond.
Google Search was literally everywhere in Lollipop. A new "always-on voice recognition" feature allowed users to say "OK Google" at any time, from any screen, even when the display was off. The Google app was still Google's primary home screen, a feature which debuted in KitKat. The search bar was now present on the new recent apps screen, too.
Google Now was still the left-most home screen page, but now a Material Design revamp gave it headers with big bold colors and redesigned typography.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/play-store-1-150x150.jpg)
][1]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/play2-150x150.jpg)
][2]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/6-150x150.jpg)
][3]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/5-150x150.jpg)
][4]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/12-2-150x150.jpg)
][5]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/14-1-150x150.jpg)
][6]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/19-1-150x150.jpg)
][7]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/13-2-150x150.jpg)
][8]
The Play Store followed a similar path to other Lollipop apps. There was a huge visual refresh with bold colors, new typography, and a fresh layout. It's rare that there's any additional functionality here, just a new coat of paint on everything.
The Navigation panel for the Play Store could now actually be used for navigation, with entries for each section of the Play Store. Lollipop also typically did away with the overflow button in the action bar, instead deciding to go with a single action button (usually search) and dumping every extra option in the navigation bar. This gave users a single place to look for items instead of having to hunt through two different menus.
Also new in Lollipop apps was the ability to make the status bar transparent. This allowed the action bar color to bleed right through the status bar, making the bar only slightly darker than the surrounding UI. Some interfaces even used a full-bleed hero image at the top of the screen, which would show through the status bar.
[
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1-980x481.jpg)
][38]
Google Calendar was completely re-written, gaining lots of new design touches and losing lots of features. You could no longer pinch zoom to adjust the time scale of views, month view was gone on phones, and week view regressed from a seven-day view to five days. Google would spend the next few versions re-adding some of these features after users complained. "Google Calendar" also doubled down on the "Google" by removing the ability to add third-party accounts directly in the app. Non-Google accounts would now need to be added via Gmail.
It did look nice, though. In some views, the start of each month came with a header picture, just like a real paper calendar. Events with locations attached showed pictures from those locations. For instance, my "flight to San Francisco" displayed the Golden Gate Bridge. Google Calendar would also pull events out of Gmail and display them right on your calendar.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/7-150x150.jpg)
][9]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/8-150x150.jpg)
][10]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/12-150x150.jpg)
][11]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/13-150x150.jpg)
][12]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/3-1-150x150.jpg)
][13]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/14-150x150.jpg)
][14]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/6-2-150x150.jpg)
][15]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/5-3-150x150.jpg)
][16]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/7-2-150x150.jpg)
][17]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/9-1-150x150.jpg)
][18]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/10-1-150x150.jpg)
][19]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/28-1-150x150.jpg)
][20]
Other apps all fell under pretty much the same description: not much in the way of new functionality, but big redesigns swapped out the greys of KitKat with bold, bright colors. Hangouts gained the ability to receive Google Voice SMSes, and the clock got a background color that changes with the time of day.
#### Job Scheduler whips the app ecosystem into shape
Google decided to focus on battery savings with Lollipop in a project it called "Project Volta." Google started creating more battery tracking tools for itself and developers, starting with the "Battery Historian." This Python script took all of Android's battery logging data and spun it into a readable, interactive graph. With its new diagnostic equipment, Google flagged background tasks as a big consumer of battery.
At I/O 2014, the company noted that enabling airplane mode and turning off the screen allowed an Android phone to run in standby for a month. However, if users enabled everything and started using the device, they wouldn't get through a single day. The takeaway was that if you could just get everything to stop doing stuff, your battery would do a lot better.
As such, the company created a new API called "JobScheduler," the new traffic cop for background tasks on Android. Before JobScheduler, every single app was responsible for its background processing, which meant every app would individually wake up the processor and modem, check for connectivity, organize databases, download updates, and upload logs. Everything had its own individual timer, so your phone would be woken up a lot. With JobScheduler, background tasks get batched up from an unorganized free-for-all into an orderly background processing window.
JobScheduler lets apps specify conditions that their task needs (general connectivity, Wi-Fi, plugged into a wall outlet, etc), and it will send an announcement when those conditions are met. It's like the difference between push e-mail and checking for e-mail every five minutes... but with task requirements. Google also started pushing a "lazier" approach to background tasks. If something can wait until the device is on Wi-Fi, plugged-in, and idle, it should wait until then. You can see the results of this today when, on Wi-Fi, you can plug in an Android phone and only _then_ will it start downloading app updates. You don't instantly need to download app updates; it's best to wait until the user has unlimited power and data.
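To make that concrete, here is a minimal sketch of how an app might hand a deferrable task to JobScheduler. The `UpdateJobService` class, the job ID, and the chosen constraints are hypothetical placeholders rather than anything from the article, and a real app would also declare the service in its manifest with the `android.permission.BIND_JOB_SERVICE` permission.

```
import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;

// Hypothetical service that does the deferred work once the system says the conditions are met.
public class UpdateJobService extends JobService {
    private static final int UPDATE_JOB_ID = 42; // arbitrary, app-defined ID

    public static void schedule(Context context) {
        JobInfo job = new JobInfo.Builder(UPDATE_JOB_ID,
                new ComponentName(context, UpdateJobService.class))
                .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // wait for Wi-Fi
                .setRequiresCharging(true)                              // wait until plugged in
                .setRequiresDeviceIdle(true)                            // wait until the device is idle
                .build();

        JobScheduler scheduler =
                (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
        scheduler.schedule(job); // the system batches this with other pending jobs
    }

    @Override
    public boolean onStartJob(JobParameters params) {
        // Kick off the update on a background thread, then call jobFinished(params, false).
        return true; // work continues asynchronously
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return false; // don't reschedule if the conditions stop holding
    }
}
```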
#### Device setup gets future-proofed
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/25-1-150x150.jpg)
][21]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/26-150x150.jpg)
][22]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup2-150x150.jpg)
][23]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup3-150x150.jpg)
][24]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup4-150x150.jpg)
][25]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup5-150x150.jpg)
][26]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup6-150x150.png)
][27]
Setup was overhauled not just to conform to the Material Design guidelines, but also to be "future-proofed" so that it can handle any new login and authentication schemes Google cooks up in the future. Remember, part of the entire reasoning for writing "The History of Android" is that older versions of Android don't work anymore. Over the years, Google has upgraded its authentication schemes to use better encryption and two-factor authentication, but adding these new login requirements breaks compatibility with older clients. Lots of Android features require access to Google's cloud infrastructure, so when you can't log in, things like Gmail for Android 1.0 just don't work.
In Lollipop, setup works much like it did before for the first few screens. You get a "welcome to Android screen" and options to set up cellular and Wi-Fi connectivity. Immediately after this screen, things changed though. As soon as Lollipop hit the internet, it pinged Google's servers to "check for updates." These weren't updates to the OS or to apps, but updates to the setup process about to run. After Android downloaded the newest version of setup, _then_ it asked you to log in with your Google account.
The benefit of this is evident when trying to log into Lollipop and KitKat today. Thanks to the updatable setup flow, the "2014" Lollipop OS can handle 2016 improvements, like Google's new "[tap to sign in][39]" 2FA method. KitKat chokes, but luckily it has a "web-browser sign-in" that can handle 2FA.
Lollipop setup even takes the extreme stance of putting your Google e-mail and password on separate pages. [Google hates passwords][40] and has come up with several [experimental ways][41] to log into Google without one. If your account is set up to not have a password, Lollipop can just skip the password page. If you have a 2FA setup that uses a code, setup can slip the appropriate "enter 2FA code" page into the setup flow. Every piece of signing in is on a single page, so the setup flow is modular. Pages can be added and removed as needed.
Setup also gave users control over app restoration. Android had been doing some kind of data restoration prior to this, but it was impossible to understand because it just picked one of your devices without any user input and started restoring things. A new screen in the setup flow let users see their collection of device profiles in the cloud and pick the appropriate one. You could also choose which apps to restore from that backup. This backup covered apps, your home screen layout, and a few minor settings like Wi-Fi hotspots. It wasn't a full app data backup.
#### Settings
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/29-1-150x150.jpg)
][28]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/settings-1-150x150.jpg)
][29]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2014-11-11-16.45.47-150x150.png)
][30]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/battery-150x150.jpg)
][31]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/user1-150x150.jpg)
][32]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/users2-150x150.jpg)
][33]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/30-1-150x150.jpg)
][34]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/31-150x150.jpg)
][35]
Settings swapped from a dark theme to a light one. Along with a new look, it got a handy search function. Every screen gave the user access to a magnifying glass, which let them more easily hunt down that elusive option.
There were a few settings related to Project Volta. "Network Restrictions" allowed users to flag a Wi-Fi connection as metered, which would allow JobScheduler to avoid it for background processing. Also as part of Volta, a "Battery Saver" mode was added. This would limit background tasks and throttle down the CPU, which gave you a long-lasting but very sluggish device.
Multi-user support had been in Android tablets for a while, but Lollipop finally brought it down to Android phones. The settings screen added a new "users" page that let you add additional accounts or start up a "Guest" account. Guest accounts were temporary—they could be wiped out with a single tap. And unlike a normal account, it didn't try to download every app associated with your account, since it was destined to be wiped out soon.
--------------------------------------------------------------------------------
作者简介:
Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/
作者:[RON AMADEO][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://arstechnica.com/author/ronamadeo
[1]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[2]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[3]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[4]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[5]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[6]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[7]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[8]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[9]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[10]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[11]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[12]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[13]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[14]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[15]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[16]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[17]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[18]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[19]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[20]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[21]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[22]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[23]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[24]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[25]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[26]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[27]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[28]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[29]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[30]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[31]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[32]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[33]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[34]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[35]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[36]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg
[37]:http://arstechnica.com/author/ronamadeo/
[38]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg
[39]:http://arstechnica.com/gadgets/2016/06/googles-new-two-factor-authentication-system-tap-yes-to-log-in/
[40]:https://www.theguardian.com/technology/2016/may/24/google-passwords-android
[41]:http://www.androidpolice.com/2015/12/22/google-appears-to-be-testing-a-new-way-to-log-into-your-account-on-other-devices-with-just-your-phone-no-password-needed/

View File

@ -0,0 +1,230 @@
alim0x translating
### Android 6.0 Marshmallow
In October 2015, Google brought Android 6.0 Marshmallow into the world. For the OS's launch, Google commissioned two new Nexus devices: the [Huawei Nexus 6P and LG Nexus 5X][39]. Rather than just the usual speed increase, the new phones also included a key piece of hardware: a fingerprint reader for Marshmallow's new fingerprint API. Marshmallow was also packing a crazy new search feature called "Google Now on Tap," user controlled app permissions, a new data backup system, and plenty of other refinements.
#### The new Google App
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/32-1-150x150.jpg)
][3]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/app-drawer-150x150.jpg)
][4]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/2015-10-01-19.01.201-150x150.png)
][5]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/Untitled-3-150x150.gif)
][6]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/google-now-home-150x150.jpg)
][7]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/typing-150x150.jpg)
][8]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/serp-150x150.jpg)
][9]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/voice-150x150.jpg)
][10]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/icons-150x150.jpg)
][11]
Marshmallow was the first version of Android after [Google's big logo redesign][40]. The OS was updated accordingly, mainly with a new Google app that added a colorful logo to the search widget, search page, and the app icon.
Google reverted the app drawer from a paginated horizontal layout back to the single, vertically scrolling sheet. The earliest versions of Android all had vertically scrolling sheets until Google changed to a horizontal page system in Honeycomb. The scrolling single sheet made finding things in a large selection of apps much faster. A "quick scroll" feature, which let you drag on the scroll bar to bring up letter indexing, helped too. This new app drawer layout also carried over to the widget drawer. Given that the old system could easily grow to 15+ pages, this was a big improvement.
The "suggested apps" bar at the top of Marshmallow's app drawer made finding apps faster, too.
This bar changed from time to time and tried to surface the apps you needed when you needed them. It used an algorithm that took into account app usage, apps that are normally launched together, and time of day.
#### Google Now on Tap—a feature that didn't quite work out
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/ontap-150x150.jpg)
][12]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/onta3p-150x150.jpg)
][13]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/now-on-tap-150x150.jpg)
][14]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/fail1-150x150.jpg)
][15]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/youtube-150x150.jpg)
][16]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/apps-150x150.jpg)
][17]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/fail2-150x150.jpg)
][18]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/hangouts-150x150.jpg)
][19]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/voice-context-150x150.jpg)
][20]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/wrongstephen-150x150.jpg)
][21]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/assist-api-980x576-150x150.jpg)
][22]
One of Marshmallow's headline features was "Google Now on Tap." With Now on Tap, you could hold down the home button on any screen and Android would send the entire screen to Google for processing. Google would then try to figure out what the screen was about, and a special list of search results would pop up from the bottom of the screen.
Results yielded by Now on Tap weren't the usual 10 blue links—though there was always a link to a Google Search. Now on Tap could also deep link into other apps using Google's App Indexing feature. The idea was you could call up Now on Tap for a YouTube music video and get a link to the Google Play or Amazon "buy" page. Now on Tapping (am I allowed to verb that?) a news article about an actor could link to his page inside the IMDb app.
Rather than make this a proprietary feature, Google built a whole new "Assistant API" into Android. The user could pick an "Assist App" which would be granted scads of information upon long-pressing the home button. The Assist app would get all the text that was currently loaded by the app—not just what was immediately on screen—along with all the images and any special metadata the developer wanted to include. This API powered Google Now on Tap, and it also allowed third parties to make Now on Tap rivals if they wished.
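For a rough idea of the developer-facing side of this, the sketch below shows how an app could volunteer extra context to whatever Assist app the user picked by overriding `onProvideAssistContent()`. The activity name, URL, and JSON payload are made-up placeholders, not anything from the article.

```
import android.app.Activity;
import android.app.assist.AssistContent;
import android.net.Uri;

public class ArticleActivity extends Activity {
    // Hypothetical example: hand a canonical web URL and some structured data
    // to the user's chosen Assist app (Now on Tap was the default consumer).
    @Override
    public void onProvideAssistContent(AssistContent outContent) {
        super.onProvideAssistContent(outContent);
        outContent.setWebUri(Uri.parse("https://example.com/articles/123"));
        outContent.setStructuredData(
                "{\"@type\": \"Article\", \"name\": \"Example article\"}");
    }
}
```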
Google hyped Now on Tap during Marshmallow's initial presentation, but in practice, the feature wasn't very useful. Google Search is worthwhile because you're asking it an exact question—you type in whatever you want, and it scours the entire Internet looking for the answer or web page. Now on Tap made things infinitely harder because it didn't even know what question you were asking. You opened Now on Tap with a very specific intent, but you sent Google the very unspecific query of "everything on your screen." Google had to guess what your query was and then tried to deliver useful search results or actions based on that.
Behind the scenes, Google was probably processing like crazy to brute-force out the result you wanted from an entire page of text and images. But more often than not, Now on Tap yielded what felt like a list of search results for every proper noun on the page. Sifting through the list of results for multiple queries was like being trapped in one of those Bing "[Search Overload][41]" commercials. The lack of any kind of query targeting made Now on Tap feel like you were asking Google to read your mind, and it never could. Google eventually patched in an "Assist" button to the text selection menu, giving Now on Tap some of the query targeting that it desperately needed.
Calling Now on Tap anything other than a failure is hard. The shortcut to access Now on Tap—long pressing on the home button—basically made it a hidden, hard-to-discover feature that was easy to forget about. We speculate the feature had extremely low usage numbers. Even when users did discover Now on Tap, it failed to read your mind so often that, after a few attempts, most users probably gave up on it.
With the launch of the Google Pixels in 2016, the company seemingly admitted defeat. It renamed Now on Tap "Screen Search" and demoted it in favor of the Google Assistant. The Assistant—Google's new voice command system—took over On Tap's home button gesture and relegated it to a second gesture once the voice system was activated. Google also seems to have learned from Now on Tap's poor discoverability. With the Assistant, Google added a set of animated colored dots to the home button that helped users discover and be reminded about the feature.
#### Permissions
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/33-1-150x150.jpg)
][23]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/34-1-150x150.jpg)
][24]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/perm-150x150.jpg)
][25]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/denied-1-150x150.jpg)
][26]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/denied-2-150x150.jpg)
][27]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/apps-150x150.jpg)
][28]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/overlay-150x150.jpg)
][29]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/system-permisions-150x150.jpg)
][30]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/warning-150x150.jpg)
][31]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/Google_IO_2015_-_Android_M_Permissions_-_YouTube_-_Google_Chrome_2015-09-04_12-31-49-150x150.jpg)
][32]
Android 6.0 finally introduced an app permissions system that gave users granular control over what data apps had access to.
Apps no longer gave you a huge list of permissions at install. With Marshmallow, apps installed without asking for any permissions at all. When apps needed a permission—like access to your location, camera, microphone, or contact list—they asked at the exact time they needed it. During your usage of an app, an "Allow or Deny" dialog popped up anytime the app wanted a new permission. Some apps' setup flows tackled this by asking for a few key permissions at startup, and everything else popped up as the app needed it. This better communicated to the user what the permissions were for—this app needs camera access because you just tapped on the camera button.
Besides the in-the-moment "Allow or Deny" dialogs, Marshmallow also added a permissions setting screen. This big list of checkboxes allowed data-conscious users to browse which apps have access to what permissions. They can browse not only by app, but also by permission. For instance, you could see every app that has access to the microphone.
Google had been experimenting with app permissions for some time, and these screens were basically the rebirth of the hidden "[App Ops][42]" system that was accidentally introduced in Android 4.3 and quickly removed.
While Google experimented in previous versions, the big difference with Marshmallow's permissions system was that it represented an orderly transition to a permissions-based OS. Android 4.3's App Ops was never meant to be exposed to users, so developers didn't know about it. The result of denying an app a permission in 4.3 was often a weird error message or an outright crash. Marshmallow's system was opt-in for developers—the new permission system only applied to apps that were targeting the Marshmallow SDK, which Google used as a signal that the developer was ready for permission handling. The system also allowed for communication to users when a feature didn't work because of a denied permission. Apps were told when they were denied a permission, and they could instruct the user to turn the permission back on if they wanted to use a feature.
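As a rough illustration of that opt-in flow, here is a minimal sketch of a Marshmallow-style runtime permission request. The activity, the request code, and the `openCamera()` helper are hypothetical placeholders; only the permission-request calls themselves come from the standard API.

```
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;

public class CameraActivity extends Activity {
    private static final int REQUEST_CAMERA = 1; // arbitrary request code

    // Called when the user taps the (hypothetical) camera button.
    private void onCameraButtonTapped() {
        if (checkSelfPermission(Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED) {
            openCamera();
        } else {
            // Triggers the system "Allow or Deny" dialog at the moment it's needed.
            requestPermissions(new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode,
                                           String[] permissions, int[] grantResults) {
        if (requestCode == REQUEST_CAMERA
                && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            openCamera();
        }
        // If denied, the app can explain why the feature won't work and carry on.
    }

    private void openCamera() { /* hypothetical camera code */ }
}
```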
#### The Fingerprint API
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/finger1-150x150.jpg)
][33]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/fingerlock-150x150.jpg)
][34]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/2015-10-16-17.19.36-150x150.png)
][35]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/fingerprintplaystore-150x150.jpg)
][36]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/confirm-150x150.jpg)
][37]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/2015-09-04_16-38-31-150x150.png)
][38]
Before Marshmallow, few OEMs had come up with their own fingerprint solution in response to [Apple's Touch ID][43]. But with Marshmallow, Google finally came up with an ecosystem-wide API for fingerprint recognition. The new system included UI for registering fingerprints, a fingerprint-guarded lock screen, and APIs that allowed apps to protect content behind a fingerprint scan or lock-screen challenge.
The Play Store was one of the first apps to support the API. Instead of having to enter your password to purchase an app, you could just use your fingerprint. The Nexus 5X and 6P were the first phones to support the fingerprint API with an actual hardware fingerprint reader on the back.
Later the fingerprint API became one of the rare examples of the Android ecosystem actually cooperating and working together. Every phone with a fingerprint reader uses Google's API, and most banking and purchasing apps are pretty good about supporting it.
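For flavor, here is a hedged sketch of the Marshmallow-era `FingerprintManager` API described above. The helper class and callback wiring are assumptions for illustration; a real app would need the `USE_FINGERPRINT` permission, and later Android releases replaced this API with `BiometricPrompt`.

```
import android.content.Context;
import android.hardware.fingerprint.FingerprintManager;
import android.os.CancellationSignal;

public final class FingerprintGate {
    // Minimal sketch: ask the system to verify a fingerprint before unlocking a feature.
    public static void confirmWithFingerprint(Context context, Runnable onConfirmed) {
        FingerprintManager fm =
                (FingerprintManager) context.getSystemService(Context.FINGERPRINT_SERVICE);
        if (fm == null || !fm.isHardwareDetected() || !fm.hasEnrolledFingerprints()) {
            return; // no reader or no registered fingerprints; fall back to a password
        }
        fm.authenticate(null /* CryptoObject */, new CancellationSignal(), 0 /* flags */,
                new FingerprintManager.AuthenticationCallback() {
                    @Override
                    public void onAuthenticationSucceeded(
                            FingerprintManager.AuthenticationResult result) {
                        onConfirmed.run(); // e.g. complete the purchase
                    }

                    @Override
                    public void onAuthenticationFailed() {
                        // Finger not recognized; the user can simply try again.
                    }
                }, null /* handler */);
    }
}
```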
--------------------------------------------------------------------------------
作者简介:
Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/
作者:[RON AMADEO][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[1]:https://www.youtube.com/watch?v=f17qe9vZ8RM
[2]:https://www.youtube.com/watch?v=VOn7VrTRlA4&list=PLOU2XLYxmsIJDPXCTt5TLDu67271PruEk&index=11
[3]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[4]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[5]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[6]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[7]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[8]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[9]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[10]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[11]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[12]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[13]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[14]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[15]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[16]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[17]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[18]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[19]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[20]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[21]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[22]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[23]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[24]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[25]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[26]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[27]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[28]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[29]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[30]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[31]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[32]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[33]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[34]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[35]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[36]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[37]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[38]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/#
[39]:http://arstechnica.com/gadgets/2015/10/nexus-5x-and-nexus-6p-review-the-true-flagships-of-the-android-ecosystem/
[40]:http://arstechnica.com/gadgets/2015/09/google-gets-a-new-logo/
[41]:https://www.youtube.com/watch?v=9yfMVbaehOE
[42]:http://www.androidpolice.com/2013/07/25/app-ops-android-4-3s-hidden-app-permission-manager-control-permissions-for-individual-apps/
[43]:http://arstechnica.com/apple/2014/09/ios-8-thoroughly-reviewed/10/#h3

View File

@ -0,0 +1,171 @@
# Behind-the-scenes changes
Marshmallow expanded on the power-saving JobScheduler APIs that were originally introduced in Lollipop. JobScheduler turned app background processing from a free-for-all that frequently woke up the device to an organized system. JobScheduler was basically a background-processing traffic cop.
In Marshmallow, Google added a "Doze" mode to save even more power when a device is left alone. If a device was stationary, unplugged, and had its screen off, it would slowly drift into a low-power, disconnected mode that locked down background processing. After a period of time, network access was disabled. Wake locks—an app's request to keep your phone awake so it can do background processing—got ignored. System Alarms (not user-set alarm clock alarms) and the [JobScheduler][25] shut down, too.
If you've ever put a device in airplane mode and noticed the battery lasts forever, Doze was like an automatic airplane mode that kicked in when you left your device alone—it really did boost battery life. It worked for phones that were left alone on a desk all day or all night, and it was great for tablets, which are often forgotten about on the coffee table.
The only notification that could punch through Doze mode was a "high priority message" from Google Cloud Messaging. This was meant for texting services so that, even if a device was dozing, messages would still come through.
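As a small, assumed example of how this surfaced to developers, the sketch below checks whether the device is currently dozing and whether the app has been whitelisted from battery optimizations; both calls were added alongside Doze in API 23. The helper class name is made up.

```
import android.content.Context;
import android.os.PowerManager;

public final class DozeChecks {
    // True while Doze has locked down background work.
    public static boolean isDozing(Context context) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        return pm.isDeviceIdleMode();
    }

    // True if the user has exempted this app from battery optimizations.
    public static boolean isExemptFromDoze(Context context) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        return pm.isIgnoringBatteryOptimizations(context.getPackageName());
    }
}
```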
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/inactive-apps-150x150.jpg)
][1]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/battery-optimizations-150x150.jpg)
][2]
"App Standby" was another power saving feature that more-or-less worked quietly in the background. The idea behind it was simple: if you stopped interacting with an app for a period of time, Android deemed it unimportant and took away its internet access and background processing privileges.
For the purposes of App Standby, "interacting" with an app meant opening the app, starting a foreground service, or generating a notification. Any one of these actions would reset the Standby timer on an app. For every other edge case, Google added a cryptically-named "Battery Optimizations" screen in the settings. This let users whitelist apps to make them immune from app standby. As for developers, they had an option in Developer Settings called "Inactive apps" which let them manually put an app on standby for testing.
App Standby basically auto-disabled apps you weren't using, which was a great way to fight battery drain from crapware or forgotten-about apps. Because it was completely silent and automatically happened in the background, it helped even novice users have a well-tuned device.
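A brief, hedged sketch of the developer-visible side: an app can ask the system whether it has been put on standby. The helper class name is made up; the query itself is part of the API 23 usage-stats service.

```
import android.app.usage.UsageStatsManager;
import android.content.Context;

public final class StandbyCheck {
    // An inactive app has lost network access and background privileges until the
    // user interacts with it again (or the device is plugged in).
    public static boolean isInStandby(Context context) {
        UsageStatsManager usm =
                (UsageStatsManager) context.getSystemService(Context.USAGE_STATS_SERVICE);
        return usm.isAppInactive(context.getPackageName());
    }
}
```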
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/autobackup-150x150.jpg)
][3]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/07/backup2-150x150.jpg)
][4]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/BACKUP1-150x150.jpg)
][5]
Google tried many app backup schemes over the years, and in Marshmallow it [took another swing][26]. Marshmallow's brute force app backup system aimed to dump the entire app data folder to the cloud. It was possible and technically worked, but app support for it was bad, even among Google apps. Setting up a new Android phone is still a huge hassle, with countless sign-ins and tutorial popups.
In terms of interface, Marshmallow's backup system used the Google Drive app. In the settings of Google Drive, there's now a "Manage Backups" screen, which showed app data not only from the new system, but also every other app backup scheme Google has tried over the years.
![Android's App Linking settings, basically a URL forwarding system for apps. ](https://cdn.arstechnica.net/wp-content/uploads/2016/10/app-linkingf-980x576-980x576.jpg)
Buried in the settings was a new "App linking" feature, which could "link" an app to a website. Before app linking, opening up a Google Maps URL on a fresh install usually popped up an "Open With" dialog box that wanted to know if it should open the URL in a browser or in the Google Maps app.
This was a silly question, since of course you wanted to use the app instead of the website—that's why you had the app installed. App linking let website owners associate their app with their webpage. If users had the app installed, Android would suppress the "Open With" dialog and use that app instead. To activate app linking, developers just had to throw some JSON code on their website that Android would pick up.
App linking was great for sites with an obvious app client, like Google Maps, Instagram, and Facebook. For sites with an API and multiple clients, like Twitter, the App Linking settings screen gave users control over the default app association for any URL. Out-of-the-box app linking covered 90 percent of use cases though, which cut down on the annoying pop ups on a new phone.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/adopt1-150x150.jpg)
][6]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/setup-150x150.jpg)
][7]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/format1-150x150.jpg)
][8]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/move-data-150x150.jpg)
][9]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/downloads-150x150.jpg)
][10]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/removingisbad-150x150.jpg)
][11]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/explorer-150x150.jpg)
][12]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/sort-options-150x150.jpg)
][13]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/gridorlist-150x150.jpg)
][14]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/file-mange-150x150.jpg)
][15]
Adoptable storage was one of Marshmallow's best features. It turned SD cards from a janky secondary storage pool into a perfect merged-storage solution. Slide in an SD card, format it, and you instantly had more storage in your device that you never had to think about again.
Sliding in an SD card showed a setup notification, and users could choose to format the card as "portable" or "internal" storage. The "Internal" option was the new adoptable storage mode, and it paved over the card with an ext4 file system. The only downside? The card and the data were both "locked" to your phone. You couldn't pull the card out and plug it into anything without formatting it first. Google was going for a set-it-and-forget-it use case with internal storage.
If you did yank the card out, Android did its best to deal with things. It popped up a message along the lines of "You'd better put that back or else!" along with an option to "forget" the card. Of course "forgetting" the card would result in all sorts of data loss, and it was not recommended.
The sad part of adoptable storage is that devices that could actually use it didn't come for a long time. Neither Nexus device had an SD card, so for the review we rigged up a USB stick as our adoptable storage. OEMs initially resisted the feature, with [LG and Samsung][27] disabling it on their early 2016 flagships. Samsung stated that "We believe that our users want a microSD card to transfer files between their phone and other devices," which was not possible once the card was formatted to ext4.
Google's implementation let users choose between portable and internal formatting options. But rather than give users that choice, OEMs completely took the internal storage feature away. Advanced users were unhappy about this, and of course the Android modding scene quickly re-enabled adoptable storage. On the Galaxy S7, modders actually defeated Samsung's SD card lockdown [a day before][28] the device was even officially released!
#### Volume and Notifications
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/1-2-150x150.jpg)
][16]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-4-150x150.jpg)
][17]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/dnd1-150x150.jpg)
][18]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/2015-09-13-05.13.49-150x150.png)
][19]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/2015-09-08-19.58.51-150x150.png)
][20]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/dnd11-150x150.jpg)
][21]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/dnd4-150x150.jpg)
][22]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/3-3-150x150.jpg)
][23]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/2015-09-08-19.23.13-150x150.png)
][24]
Google walked back the priority notification controls that were in the volume popup in favor of a simpler design. Hitting the volume key popped up a single slider for the current audio source, along with a drop down button that expanded the controls to show all three audio sliders: Notifications, media, and alarms. All the priority notification controls still existed—they just lived in a "do not disturb" quick-settings tile now.
One of the most relieving additions to the notification controls gave users control over Heads-Up notifications—now renamed "Peek" notifications. This feature let notifications pop up over the top portion of the screen, just like on iOS. The idea was that the most important notifications should be elevated over your normal, everyday notifications.
However, in Lollipop, when this feature was introduced, Google had the terrible idea of letting developers decide if their apps were "important" or not. Of course, every developer thinks its app is the most important thing in the world. So while the feature was originally envisioned for instant messages from your closest contacts, it ended up being hijacked by Facebook "Like" notifications. In Marshmallow, every app got a "treat as priority" checkbox in the notification settings, which gave users an easy ban hammer for unruly apps.
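For context, here is a hedged sketch of the Lollipop/Marshmallow-era developer side of this: marking a notification as high priority and attaching sound is what made it eligible to peek over the screen. The icon, text, and helper class are placeholders, and notification channels replaced this priority mechanism in later releases.

```
import android.app.Notification;
import android.app.NotificationManager;
import android.content.Context;

public final class PeekNotification {
    // A PRIORITY_HIGH notification with sound or vibration can "peek" (heads-up).
    public static void showPeek(Context context, int notificationId) {
        Notification notification = new Notification.Builder(context)
                .setSmallIcon(android.R.drawable.stat_notify_chat) // placeholder icon
                .setContentTitle("New message")
                .setContentText("This one claims to be important")
                .setPriority(Notification.PRIORITY_HIGH)     // developer says "important"
                .setDefaults(Notification.DEFAULT_SOUND)     // peeking requires sound/vibration
                .build();

        NotificationManager nm =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        nm.notify(notificationId, notification);
    }
}
```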
--------------------------------------------------------------------------------
作者简介:
Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/
作者:[RON AMADEO][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[1]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[2]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[3]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[4]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[5]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[6]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[7]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[8]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[9]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[10]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[11]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[12]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[13]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[14]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[15]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[16]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[17]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[18]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[19]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[20]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[21]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[22]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[23]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[24]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/#
[25]:http://arstechnica.com/gadgets/2014/11/android-5-0-lollipop-thoroughly-reviewed/6/#h2
[26]:http://arstechnica.com/gadgets/2015/10/android-6-0-marshmallow-thoroughly-reviewed/6/#h2
[27]:http://arstechnica.com/gadgets/2016/02/the-lg-g5-and-galaxy-s7-wont-support-android-6-0s-adoptable-storage/
[28]:http://www.androidpolice.com/2016/03/10/modaco-manages-to-get-adoptable-sd-card-storage-working-on-the-galaxy-s7-and-galaxy-s7-edge-no-root-required/

View File

@ -0,0 +1,185 @@
# Monthly security updates
[
![Check out that new "Android security patch level" field. ](https://cdn.arstechnica.net/wp-content/uploads/2016/10/settings-5-980x957.jpg)
][31]
A few months before the release of Marshmallow, [vulnerabilities][32] in Android's "Stagefright" media server were disclosed to the public, which could allow for remote code execution on older versions of Android. Android took a beating in the press, with [a billion phones][33] affected by the newly discovered bugs.
Google responded by starting a monthly Android security update program. Every month it would round up bugs, fix them, and push out new code to AOSP and Nexus devices. OEMs—who were already struggling with updates (possibly due to apathy)—were basically told to "deal with it" and keep up. Every other major operating system has frequent security updates—it's just the cost of being such a huge platform. To accommodate OEMs, Google gives them access to the updates a full month ahead of time. After 30 days, security bulletins are posted and Google devices get the updates.
The monthly update program started two months before the release of Marshmallow, but in this major OS update Google added an "Android Security Patch Level" field to the About Phone screen. Rather than use some arcane version number, this was just a date. This let anyone easily see how out of date their phone was, and acted as a nice way to shame slow OEMs.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/text-150x150.jpg)
][2]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/extra-150x150.jpg)
][3]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/translate-150x150.jpg)
][4]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/chromecustomtab-150x150.jpg)
][5]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/09/CCT_Large-2-150x150.gif)
][6]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-5-150x150.jpg)
][7]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-1-150x150.jpg)
][8]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings2-150x150.jpg)
][9]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-3-150x150.jpg)
][10]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/nearby-150x150.jpg)
][11]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-6-150x150.jpg)
][12]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-7-150x150.jpg)
][13]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-8-150x150.jpg)
][14]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/settings-9-150x150.jpg)
][15]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/2015-10-03-18.21.17-150x150.png)
][16]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/2015-10-04-05.32.23-150x150.png)
][17]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2015/10/easter-egg-150x150.jpg)
][18]
The text selection menu is now a floating toolbar that pops up right next to the text you're selecting. This wasn't just the regular "cut/copy/paste" commands, either. Apps could put special options on the toolbar, like the "add link" option in Google Docs.
After the standard text commands, an ellipsis button would expose a second menu, and it was here that apps could add extra features to the text selection menu. Using a new "text processing" API, it was now super easy to ship text directly to another app. If you had Google Translate installed, a "translate" option would show up in this menu. Eventually Google Search added an "Assist" option to this menu for Google Now on Tap.
Marshmallow added a hidden settings section called the "System UI Tuner." This section would turn into a catch-all for power user features and experimental items. To access it you had to pull down the notification panel and hold down on the "settings" button for several seconds. The settings gear would spin, and eventually you'd see a message indicating that the System UI Tuner was unlocked. Once it was turned on, you could find it at the bottom of the system settings, next to Developer Options.
In this first version of the System UI Tuner, users could add custom tiles to the Quick Settings panel, a feature that would later be refined into an API apps could use. For now the feature was very rough, basically allowing users to type a custom command into a text box. System status icons could be individually turned on and off, so if you really hated knowing you were connected to Wi-Fi, you could kill the icon. A popular power user addition was the option for embedding a percentage readout into the battery icon. There was also a "demo" mode for screenshots, which would replace the normal status bar with a fake, clean version.
### Android 7.0 Nougat, Pixel Phones, and the future
[Android 7.0 Nougat][34] and [the Pixel Phones][35] came out just a few months ago, and you can read our full reviews for both of them. Both still have a ton of features and implications that we have not seen come to fruition yet, so we'll save a deep "history" dive for when they are actually "history." 
### FURTHER READING
[Android 7.0 Nougat review—Do more on your gigantic smartphone][25]
Nougat made serious changes to the [graphics and sensor pipeline][36] for Daydream VR, Google's upcoming smartphone-powered VR experience [we tried][37] but have yet to log any serious time with. A new "Seamless update" feature borrowed an update mechanism from Chrome OS, which uses dual system partitions to quietly update one partition in the background while you're still using the other one. Considering the Pixel phones are the only devices to launch with this and haven't gotten an update yet, we're not sure what that looks like, either.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/08/2016-08-17-18.21.22-150x150.png)
][19]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/08/2016-08-17-18.20.59-150x150.png)
][20]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/08/Android-N_1-150x150.jpg)
][21]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/03/2016-03-20-19.26.55-150x150.png)
][22]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/08/pip-active-150x150.png)
][23]
One of the most interesting additions to Nougat is a revamp of the app framework to allow for resizable apps. This allowed Google to implement split screen on phones and tablets, picture-in-picture on Android TV, and a mysterious floating windowed mode. We've been able to access the floating window mode with some software trickery, but we've yet to see Google use it in an actual product. Is it being aimed at desktop computing?
Android and Chrome OS also continue to grow together. Android apps [can run][38] on some Chromebooks now, giving the "Web-only" OS the Play Store and a serious app ecosystem. Rumors continue to swirl that the future of Chrome OS and Android will come even closer together, with the name "[Andromeda][39]"—a portmanteau of "Android" and "Chrome"—being tossed around as the codename for a merged Chrome/Android OS.
We have yet to see how the historical legacy of the Pixel phones will shake out. Google dove into the hardware pool with the launch of two new smartphone flagships, the Pixel and Pixel XL, only recently. Google had produced co-branded Nexus phones with partners before, but the Pixel line is a "Google" branded product. The company claims it is a full hardware OEM now, using HTC as a contract manufacturer similarly to the way Apple uses Foxconn.
### FURTHER READING
[Google Pixel review: The best Android phone, even if it is a little pricey][26]
With its own hardware comes a change in how Google makes software. The company created the "Google Assistant" as the future of the "OK Google" voice command system. But rather than ship it out to every Android device, the Assistant is an exclusive Pixel feature. Google made some changes to the interface, with a custom "Pixel launcher" home screen app and a new System UI, both of which are Pixel exclusives. We'll have to wait to see what the balance of future features is between "Android" and "Pixel" going forward.
### FURTHER READING
[Chatting with Google's Hiroshi Lockheimer about Pixel, Android OEMs, and more][27]
With these changes, we're probably at the most uncertain point in Android's history. But ahead of the platform's recent October 2016 event, [Hiroshi Lockheimer][40], SVP of Android, Chrome OS, and Google Play, said he believed we'll all look back fondly on these latest Android developments. Lockheimer is essentially the current king of software at Google, and he thought the newest updates could be the most significant Android happening since the OS debuted eight years earlier. While he wouldn't elaborate much on this sentiment after the unveilings, the fact remains that this time next year we _might_ not even be talking about Android—it could be an Android/Chrome OS hybrid! So as has always been the case since 2008, the next chapter in Android's history looks to be nothing if not interesting.
--------------------------------------------------------------------------------
作者简介:
Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/
作者:[RON AMADEO][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[1]:http://android-developers.blogspot.com/2015/09/chrome-custom-tabs-smooth-transition.html
[2]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[3]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[4]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[5]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[6]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[7]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[8]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[9]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[10]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[11]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[12]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[13]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[14]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[15]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[16]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[17]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[18]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[19]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[20]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[21]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[22]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[23]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/#
[24]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/settings-5.jpg
[25]:http://arstechnica.com/gadgets/2016/08/android-7-0-nougat-review-do-more-on-your-gigantic-smartphone/
[26]:http://arstechnica.com/gadgets/2016/10/google-pixel-review-bland-pricey-but-still-best-android-phone/
[27]:http://arstechnica.com/gadgets/2016/10/chatting-with-googles-hiroshi-lockheimer-about-pixel-android-oems-and-more/
[28]:http://arstechnica.com/gadgets/2016/08/android-7-0-nougat-review-do-more-on-your-gigantic-smartphone/
[29]:http://arstechnica.com/gadgets/2016/10/google-pixel-review-bland-pricey-but-still-best-android-phone/
[30]:http://arstechnica.com/gadgets/2016/10/chatting-with-googles-hiroshi-lockheimer-about-pixel-android-oems-and-more/
[31]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/settings-5.jpg
[32]:http://arstechnica.com/security/2015/07/950-million-android-phones-can-be-hijacked-by-malicious-text-messages/
[33]:http://arstechnica.com/security/2015/10/a-billion-android-phones-are-vulnerable-to-new-stagefright-bugs/
[34]:http://arstechnica.com/gadgets/2016/08/android-7-0-nougat-review-do-more-on-your-gigantic-smartphone/
[35]:http://arstechnica.com/gadgets/2016/10/google-pixel-review-bland-pricey-but-still-best-android-phone/
[36]:http://arstechnica.com/gadgets/2016/08/android-7-0-nougat-review-do-more-on-your-gigantic-smartphone/11/#h1
[37]:http://arstechnica.com/gadgets/2016/10/daydream-vr-hands-on-googles-dumb-vr-headset-is-actually-very-clever/
[38]:http://arstechnica.com/gadgets/2016/05/if-you-want-to-run-android-apps-on-chromebooks-youll-need-a-newer-model/
[39]:http://arstechnica.com/gadgets/2016/09/android-chrome-andromeda-merged-os-reportedly-coming-to-the-pixel-3/
[40]:http://arstechnica.com/gadgets/2016/10/chatting-with-googles-hiroshi-lockheimer-about-pixel-android-oems-and-more/

View File

@ -0,0 +1,123 @@
The Beginner's Guide to Start Using Vim
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2012/03/vim-beginner-guide-featured.jpg "The Beginner's Guide to Start Using Vims")
This article is part of the [VIM User Guide][12] series:
* The Beginner's Guide to Start Using Vim
* [Vim Keyboard Shortcuts Cheatsheet][3]
* [5 Vim Tips and Tricks for Experienced Users][4]
* [3 Useful VIM Editor Tips and Tricks for Advanced Users][5]
Choosing a text editor is a very important decision for a programmer. This is partly because of the plethora of variables: graphical/non-graphical interfaces, different shortcuts, language specializations, plugins, customizations, etc. My advice is not to try to search for the best one. Instead, choose the one that corresponds best to your habits and your tasks. If you want to work in a group, it's generally best to select the same editor as your co-workers. That way, if you have a problem, you will be able to find some help.
It is exactly for that reason that I started using Vim a few years ago. Traditionally, Vim is placed in conflict with the legendary Emacs. I confess that I know very little about Emacs, but what you have to know about these two text editors is that they can both be fully customized, and very confusing at first. This tutorial will not explain everything about Vim but will try to give you the basics to use it correctly in the first place, and then present a few tips that will (I hope) allow you to learn on your own.
Vim comes from “VI iMproved”. Vi is a non-graphical text editor widely distributed in Unix systems, and it comes by default with Linux. Vim is an enhancement of this original editor. However, unlike Vi, Vim is not installed by default on every distribution.
### Installation
To install Vim on Ubuntu, use the command:
```
sudo apt-get install vim
```
If you are already interested in some plugins, use the command:
```
sudo apt-cache search vim
```
This will give you a long list of packages related to Vim. Among them are some for various programming languages, addon managers, etc.
For this tutorial, I will be using the latest version of Vim (7.3.154) on Ubuntu. You can use any other version though.
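If you are not sure which Vim build you have, you can check from a terminal; `vim --version` prints the version banner on its first line:

```
vim --version | head -n 1
```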
### Warming Up
Type the command `vim` in a terminal. You should see a nice welcome screen.
![vim-welcome](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2012/02/vim-welcome.jpg "vim-welcome")
And if you've never used Vi or Vim before, it is very likely that you don't even know how to exit… Yes, it's true. **None of the shortcuts you normally use will work in Vim**.
First of all, to use any menu-type function like save or exit, your command should begin with a colon (:). Saving is `:w` and quitting is `:q`. If you want to quit a file without saving, use the force quit command `:q!`. A cool thing with Vim is that you don't have to type commands separately. In other words, if you want to save and then quit, you can directly use `:wq`.
So for now, quit Vim and open it on a sample text file. Simply add the name of the text file that you want to edit after the command:
```
vim [text file name]
```
![vim-file](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2012/02/vim-file.jpg "vim-file")
By default, when you open a text file, you are in visual mode (what Vim's own documentation calls "normal mode"). It is quite specific to Vim and confusing at the beginning. Vim is composed mainly of two modes: visual and editing. The visual mode is for viewing the text and using some commands. To go into editing mode, just press `i` to insert or `a` to add some text. To go back into the visual mode and access all the menu-type functions, press the “Escape” key. The difference between insertion and addition is simply whether you want the text you type to appear before or after the cursor in visual mode. To understand this fully, you should really try it yourself. My advice is: add at the end of lines, and insert in other cases.
To move the cursor within a text, whether you are in visual or editing mode, you can generally use the keyboard arrows. A real purist would tell you to use the keys _h_ for left, _j_ for down, _k_ for up, and _l_ for right.
Now that you are warmed up and know how to control Vim at a basic level, let's go to the core.
### A few basic commands
Now that you master the transformation from visual to editing mode, here are a few commands that you can use in visual mode:
* _x_: to delete a character
* _u_: to undo an action (the equivalent of Ctrl+z)
* _dd_: to delete a line
* _dw_: to delete a word
* _yy_: to copy a line
* _yw_: to copy a word
* _p_: to paste the previously deleted or copied line or word
* _e_: to move to the next word (faster than just moving with the arrow keys)
* _r_: to replace a letter (press _r_, then the new letter)
And of course, there are more, but this is enough for now. If you master all of them, you will already be very fluent with Vim.
As a side note for those who always want more, you can type a number before any of these commands and the command will be executed that number of times. For example, _5x_ will delete five characters in a row, while _3p_ will paste three times.
### Advanced Commands
Finally, as a bonus and an appetizer for your own research, here are a few advanced and very useful commands:
* _/searched_word_: to search for a word within the text
* _:sp name_of_a_text_file_: will split the screen in half horizontally, showing the new text file in the other half. To shift the focus from one window to the other, press Ctrl+w twice (or Ctrl+w followed by a direction key)
![vim-sp](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2012/02/vim-sp.jpg "vim-sp")
* _:vsp name_of_a_text_file_: same as before, but splits the screen vertically
* Ctrl+Shift+C and Ctrl+Shift+V: to copy and paste text in a terminal
* _:! name_of_a_command_: to launch a command external to Vim, directly into your shell. For example, `:! ls` will display the files within the directory you are currently working in, without quitting the editor
![vim-ls](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2012/02/vim-ls.jpg "vim-ls")
### Conclusion
I think you now have every tool you need to start using Vim. You can go even further by installing the various plugins, editing the _.vimrc_ file, or even using the interactive tutor by typing the command _vimtutor_.
If you have any other commands that you would like to share about Vim, please let us know in the comments.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/start-with-vim-linux/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/himanshu/
[1]:https://www.maketecheasier.com/author/adrienbrochard/
[2]:https://www.maketecheasier.com/start-with-vim-linux/#comments
[3]:https://www.maketecheasier.com/vim-keyboard-shortcuts-cheatsheet/
[4]:https://www.maketecheasier.com/vim-tips-tricks-for-experienced-users/
[5]:https://www.maketecheasier.com/vim-tips-tricks-advanced-users/
[6]:https://www.maketecheasier.com/category/linux-tips/
[7]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fstart-with-vim-linux%2F
[8]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fstart-with-vim-linux%2F&text=The+Beginner%26%238217%3Bs+Guide+to+Start+Using+Vim
[9]:mailto:?subject=The%20Beginner%E2%80%99s%20Guide%20to%20Start%20Using%20Vim&body=https%3A%2F%2Fwww.maketecheasier.com%2Fstart-with-vim-linux%2F
[10]:https://www.maketecheasier.com/turn-dropbox-into-a-blogging-tool-with-scriptogram/
[11]:https://www.maketecheasier.com/4-sms-back-up-applications-to-keep-your-messages-safe-android/
[12]:https://www.maketecheasier.com/series/vim-user-guide/
[13]:https://support.google.com/adsense/troubleshooter/1631343

View File

@ -0,0 +1,216 @@
Vim Keyboard Shortcuts Cheatsheet
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2013/12/vim-shortcut-cheatsheet-featured.jpg "Vim Keyboard Shortcuts Cheatsheets")
This article is part of the [VIM User Guide][12] series:
* [The Beginner's Guide to Start Using Vim][3]
* Vim Keyboard Shortcuts Cheatsheet
* [5 Vim Tips and Tricks for Experienced Users][4]
* [3 Useful VIM Editor Tips and Tricks for Advanced Users][5]
The Vim editor is a command-line based tool that's an enhanced version of the venerable vi editor. Despite the abundance of graphical rich text editors, familiarity with Vim will help every Linux user — from an experienced system administrator to a newbie Raspberry Pi user.
The light-weight editor is a very powerful tool. In the hands of an experienced operator, it can do wonders. Besides regular text editing functions, the editor also supports advanced features such as find & replace based on regular expressions and encoding conversion as well as programming features such as syntax highlighting and code folding.
One important thing to note when using Vim is that the function of a key depends on the “mode” the editor is in. For example, pressing the letter “j” will move the cursor down one line in the “command mode”. You'll have to switch to the “insert mode” to make the keys input the characters they represent.
Here's a cheatsheet to help you get the most out of Vim.
### Main
| Shortcut Keys | Function |
| --- | --- |
| Escape key | Gets out of the current mode into the “command mode”. All keys are bound to commands. |
| i | “Insert mode” for inserting text. Keys behave as expected. |
| : | “Last-line mode” where Vim expects you to enter a command such as to save the document. |
### Navigation keys
| Shortcut Keys | Function |
| --- | --- |
| h | moves the cursor one character to the left. |
| j or Ctrl + J | moves the cursor down one line. |
| k or Ctrl + P | moves the cursor up one line. |
| l | moves the cursor one character to the right. |
| 0 | moves the cursor to the beginning of the line. |
| $ | moves the cursor to the end of the line. |
| ^ | moves the cursor to the first non-empty character of the line |
| w | move forward one word (next alphanumeric word) |
| W | move forward one word (delimited by a white space) |
| 5w | move forward five words |
| b | move backward one word (previous alphanumeric word) |
| B | move backward one word (delimited by a white space) |
| 5b | move backward five words |
| G | move to the end of the file |
| gg | move to the beginning of the file. |
### Navigate around the document
| Shortcut Keys | Function |
| --- | --- |
| ( | jumps to the previous sentence |
| ) | jumps to the next sentence |
| { | jumps to the previous paragraph |
| } | jumps to the next paragraph |
| [[ | jumps to the previous section |
| ]] | jumps to the next section |
| [] | jump to the end of the previous section |
| ][ | jump to the end of the next section |
### Insert text
| Shortcut Keys | Function |
| --- | --- |
| a | Insert text after the cursor |
| A | Insert text at the end of the line |
| i | Insert text before the cursor |
| o | Begin a new line below the cursor |
| O | Begin a new line above the cursor |
### Special inserts
| Shortcut Keys | Function |
| --- | --- |
| :r [filename] | Insert the file [filename] below the cursor |
| :r ![command] | Execute [command] and insert its output below the cursor |
### Delete text
| Shortcut Keys | Function |
| --- | --- |
| x | delete character at cursor |
| dw | delete a word. |
| d0 | delete to the beginning of a line. |
| d$ | delete to the end of a line. |
| d) | delete to the end of sentence. |
| dgg | delete to the beginning of the file. |
| dG | delete to the end of the file. |
| dd | delete line |
| 3dd | delete three lines |
### Simple replace text
| Shortcut Keys | Function |
| --- | --- |
| r{text} | Replace the character under the cursor with {text} |
| R | Replace characters instead of inserting them |
### Copy/Paste text
| Shortcut Keys | Function |
| --- | --- |
| yy | copy current line into storage buffer |
| ["x]yy | Copy the current line into register x |
| p | paste storage buffer after current line |
| P | paste storage buffer before current line |
| ["x]p | paste from register x after current line |
| ["x]P | paste from register x before current line |
### Undo/Redo operation
| Shortcut Keys | Function |
| --- | --- |
| u | undo the last operation. |
| Ctrl+r | redo the last undo. |
### Search and Replace keys
| Shortcut Keys | Function |
| --- | --- |
| /search_text | search document for search_text going forward |
| ?search_text | search document for search_text going backward |
| n | move to the next instance of the result from the search |
| N | move to the previous instance of the result |
| :%s/original/replacement | Search each line for the string “original” and replace its first occurrence with “replacement” |
| :%s/original/replacement/g | Search and replace all occurrences of the string “original” with “replacement” |
| :%s/original/replacement/gc | Search for all occurrences of the string “original” but ask for confirmation before replacing them with “replacement” |
### Bookmarks
| Shortcut Keys | Function |
| --- | --- |
| m {a-z A-Z} | Set bookmark {a-z A-Z} at the current cursor position |
| :marks | List all bookmarks |
| `{a-z A-Z} | Jumps to the bookmark {a-z A-Z} |
### Select text
| Shortcut Keys | Function |
| --- | --- |
| v | Enter visual mode per character |
| V | Enter visual mode per line |
| Esc | Exit visual mode |
### Modify selected text
| Shortcut Keys | Function |
| --- | --- |
| ~ | Switch case |
| d | delete the selected text |
| c | change |
| y | yank |
| > | shift right |
| < | shift left |
| ! | filter through an external command |
### Save and quit
| Shortcut Keys | Function |
| --- | --- |
| :q | Quits Vim but fails when file has been changed |
| :w | Save the file |
| :w new_name | Save the file with the new_name filename |
| :wq | Save the file and quit Vim. |
| :q! | Quit Vim without saving the changes to the file. |
| ZZ | Write file, if modified, and quit Vim |
| ZQ | Same as :q!; quits Vim without writing changes |
### Download VIM Keyboard Shortcuts Cheatsheet
Can't get enough of this? We have prepared a downloadable cheat sheet for you so you can access it whenever you need it.
[Download it here!][14]
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/vim-keyboard-shortcuts-cheatsheet/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/himanshu/
[1]:https://www.maketecheasier.com/author/mayank/
[2]:https://www.maketecheasier.com/vim-keyboard-shortcuts-cheatsheet/#comments
[3]:https://www.maketecheasier.com/start-with-vim-linux/
[4]:https://www.maketecheasier.com/vim-tips-tricks-for-experienced-users/
[5]:https://www.maketecheasier.com/vim-tips-tricks-advanced-users/
[6]:https://www.maketecheasier.com/category/linux-tips/
[7]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fvim-keyboard-shortcuts-cheatsheet%2F
[8]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fvim-keyboard-shortcuts-cheatsheet%2F&text=Vim+Keyboard+Shortcuts+Cheatsheet
[9]:mailto:?subject=Vim%20Keyboard%20Shortcuts%20Cheatsheet&body=https%3A%2F%2Fwww.maketecheasier.com%2Fvim-keyboard-shortcuts-cheatsheet%2F
[10]:https://www.maketecheasier.com/locate-system-image-tool-in-windows-81/
[11]:https://www.maketecheasier.com/create-system-image-in-windows8/
[12]:https://www.maketecheasier.com/series/vim-user-guide/
[13]:https://support.google.com/adsense/troubleshooter/1631343
[14]:http://www.maketecheasier.com/cheatsheet/vim-keyboard-shortcuts-cheatsheet/

View File

@ -0,0 +1,130 @@
5 Vim Tips and Tricks for Experienced Users
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2014/08/vim-tips-tricks-featured.jpg "5 Vim Tips and Tricks for Experienced Userss")
This article is part of the [VIM User Guide][12] series:
* [The Beginner's Guide to Start Using Vim][3]
* [Vim Keyboard Shortcuts Cheatsheet][4]
* 5 Vim Tips and Tricks for Experienced Users
* [3 Useful VIM Editor Tips and Tricks for Advanced Users][5]
The Vim editor offers so many features that it's very difficult to learn all of them. While, of course, spending more and more time on the command line editor always helps, there is no denying the fact that you learn new and productive things faster when interacting with fellow Vim users. Here are some Vim tips and tricks for you.
**Note**: To create the examples here, I used Vim version 7.4.52.
### 1\. Working with multiple files
If you are a software developer or someone who uses Vim as their primary editor, chances are that you have to work with multiple files simultaneously. Following are some useful tips that you can use while working with multiple files.
Instead of opening different files in different shell tabs, you can open multiple files in a single tab by passing their filenames as arguments to the vim command. For example:
```
vim file1 file2 file3
```
The first file (file1 in the example) will be the current file and read into the buffer.
Once inside the editor, use the `:next` or `:n` command to move to the next file, and the `:prev` or `:N` command to return to the previous one. To directly switch to the first or the last file, use `:bf` and `:bl` commands, respectively. To open and start editing another file, use the `:e` command with the filename as argument (use the complete path in case the file is not present in the current directory).
If at any point you need to list the currently opened files, use the `:ls` command. See the screenshot shown below.
![vim-ls](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2014/08/vim-ls.png "vim-ls")
Note that “%a” represents the file in the current active window, while “#” represents the file in the previous active window.
### 2\. Save time with auto complete
Want to save time and improve accuracy? Use abbreviations. They come in handy while writing long, complex words that recur frequently throughout the file. The Vim command for abbreviations is `ab`. For example, after you run the command
```
:ab asap as soon as possible
```
each occurrence of the word “asap” will be automatically replaced by “as soon as possible”, as you type.
Similarly, you can also use abbreviations to correct common typing mistakes. For example, the command
```
:ab recieve receive
```
will automatically correct the spelling mistake as you type. If you want to prevent the expansion/correction from happening at a particular occurrence, just type “Ctrl + V” after the last character of the word and then press the space bar key.
If you want to save the abbreviation you've created so that it is available to you the next time you use the Vim editor, add the complete `ab` command (without the initial colon) to the “/etc/vim/vimrc” file. To remove a particular abbreviation, you can use the `una` command. For example, `:una asap`.
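As a minimal sketch, the two example abbreviations above can be appended to a user-level vimrc from the shell. Here `~/.vimrc` is assumed instead of the system-wide “/etc/vim/vimrc” mentioned above, since editing the latter requires root privileges:

```
# append the example abbreviations to your personal vimrc
cat >> ~/.vimrc <<'EOF'
ab asap as soon as possible
ab recieve receive
EOF
```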
### 3\. Split windows to easily copy/paste
There are times when you want to copy a piece of code or a portion of text from one file to another. While the process is easy when working with GUI editors, it gets a bit tedious and time-consuming while working with a command line editor. Fortunately, Vim provides a way to minimize the time and effort required to do this.
Open one of the two files and then split the Vim window to open the other file. This can be done by using the `split` command with the file name as argument. For example,
```
:split test.c
```
will split the window and open “test.c”.
![vim-split](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2014/08/vim-split.png "vim-split")
Observe that the command split the Vim window horizontally. In case you want to split the window vertically, you can do so using the `vsplit` command. Once both the files are opened, copy the stuff from one file, press “Ctrl + w” twice to switch focus to the other window, and paste.
### 4\. Save a file you edited without the required permissions
There are times when you realize that a file is read-only only after making a bunch of changes to it.
![vim-sudo](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2014/08/vim-sudo.png "vim-sudo")
Although closing the file and reopening it with the required permissions is a way out, it's a sheer waste of time if you've already made a lot of changes, as all of them will be lost during the process. Vim provides a way to handle this situation by letting you write the file with elevated privileges from within the editor. The command for this is:
```
:w !sudo tee %
```
The command will ask you for your password, just like `sudo` does on the command line, and will then save the changes. It works by piping the contents of the buffer (`:w !`) to `sudo tee %`, where `%` expands to the name of the current file, so `tee` writes the buffer back to that file with root privileges.
**A related tip**: To quickly access the command prompt while editing a file in Vim, run the `:sh` command from within the editor. This will place you in an interactive shell. Once you are done, run the `exit` command to quickly return to your Vim session.
### 5\. Preserve indentation during copy/paste
Most experienced programmers work in Vim with auto indentation enabled. Although it's a time-saving practice, it creates a problem when pasting already indented code. For example, this is what happened when I pasted some already indented code into a file opened in the Vim editor with auto indent on.
![vim-indentation](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2014/08/vim-indentation.png "vim-indentation")
The solution to this problem is the `pastetoggle` option. Add the line
```
set pastetoggle=<F2>
```
to your vimrc file, and press F2 in insert mode just before pasting the code. This should preserve the original indentation. Note that you can replace F2 with any other key if it's already mapped to some other functionality.
### Conclusion
The only way you can further improve your Vim editor skills is by using the command line editor for your day-to-day work. Just note down the actions that take time and then try to find out if there is an editor command that will do the actions more quickly.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/vim-tips-tricks-for-experienced-users/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/himanshu/
[1]:https://www.maketecheasier.com/author/himanshu/
[2]:https://www.maketecheasier.com/vim-tips-tricks-for-experienced-users/#comments
[3]:https://www.maketecheasier.com/start-with-vim-linux/
[4]:https://www.maketecheasier.com/vim-keyboard-shortcuts-cheatsheet/
[5]:https://www.maketecheasier.com/vim-tips-tricks-advanced-users/
[6]:https://www.maketecheasier.com/category/linux-tips/
[7]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fvim-tips-tricks-for-experienced-users%2F
[8]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fvim-tips-tricks-for-experienced-users%2F&text=5+Vim+Tips+and+Tricks+for+Experienced+Users
[9]:mailto:?subject=5%20Vim%20Tips%20and%20Tricks%20for%20Experienced%20Users&body=https%3A%2F%2Fwww.maketecheasier.com%2Fvim-tips-tricks-for-experienced-users%2F
[10]:https://www.maketecheasier.com/enable-two-step-verification-apple-icloud-account/
[11]:https://www.maketecheasier.com/mistakes-wordpress-user-should-avoid/
[12]:https://www.maketecheasier.com/series/vim-user-guide/
[13]:https://support.google.com/adsense/troubleshooter/1631343

View File

@ -0,0 +1,409 @@
Installing, Obtaining, and Making GTK Themes
----------------
Many Linux desktops support themes. A theme is a particular appearance or "skin" for the GUI. Users can change the theme to make the desktop look different. Usually, users also change the icons. However, the theme and the icon pack are two separate entities. Numerous people want to make their own theme, so here is an article about making GTK themes, along with various essential background information.
**NOTE:** This article primarily focuses on GTK3, but it will discuss a little about GTK2, Metacity, and others. Cursors and icons will not be discussed in this article.
**Basic Concepts**
The GIMP ToolKit (GTK) is a widget toolkit used to create GUIs on a variety of systems (thus making GTK cross-platform). GTK ([http://www.gtk.org/][17]) is commonly and incorrectly thought to stand for "GNOME ToolKit", but it actually stands for "GIMP ToolKit" because it was first created to design a user interface for GIMP. GTK is an object-oriented toolkit written in C (GTK itself is not a language). GTK is entirely open-source under the LGPL license. GTK is a widely used toolkit for GUIs, and many tools are available for it.
Themes made for GTK will not work in Qt-based applications. A Qt-theme is needed to apply a theme to Qt applications.
The themes use Cascading Style Sheets (CSS) to generate the theme's appearance. This is the same CSS that web developers use on web pages. However, instead of HTML tags being referenced, GTK widgets are specified. It is important that theme developers learn CSS.
**Theme Location**
Themes may be stored in "~/.themes" or "/usr/share/themes". Themes in "~/.themes" are only accessible to the owner of that home folder, while themes in "/usr/share/themes" are global themes accessible to all users. When a GTK application executes, it checks a list of possible theme files in a specific order; if a theme file is not found, it tries the next file on the list. Below is the list in the order that GTK3 applications check.
1. $XDG_CONFIG_HOME/gtk-3.0/gtk.css (typically ~/.config/gtk-3.0/gtk.css)
2. ~/.themes/NAME/gtk-3.0/gtk.css
3. $datadir/share/themes/NAME/gtk-3.0/gtk.css (typically /usr/share/themes/NAME/gtk-3.0/gtk.css)
**NOTE:** "NAME" is a placeholder for the name of the current theme.
If there are two themes with the same name, then the one in the user's home folder (~/.themes) will be used. Developers can take advantage of GTK's theme-seeking algorithm by testing new themes in their local home's theme directory.
**Theme Engines**
A "Theme engine" is a piece of software that changes the look of the GUI's widgets. The engine reads and uses the theme's files to know how the various widgets should be drawn. Some engines come with themes of their own. Each engine has its advantages and disadvantages, and some engines add special properties and features.
Many theme engines can be obtained from the default repositories. Debian-based Linux distros can execute "apt-get install gtk2-engines-murrine gtk2-engines-pixbuf gtk3-engines-unico" to install three different engines. Many engines are available for both GTK2 and GTK3. Below is a small list of examples, followed by the install command as a ready-to-run snippet.
* gtk2-engines-aurora - Aurora GTK2 engine
* gtk2-engines-pixbuf - Pixbuf GTK2 engine
* gtk3-engines-oxygen - Engine port of the Oxygen widget style to GTK
* gtk3-engines-unico - Unico GTK3 engine
* gtk3-engines-xfce - GTK3 engine for Xfce
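For convenience, here is the install command quoted above as a ready-to-run snippet (Debian/Ubuntu; the package names are the ones listed in this article and their availability varies by release):

```
sudo apt-get install gtk2-engines-murrine gtk2-engines-pixbuf gtk3-engines-unico
```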
**Creating GTK3 Themes**
To create a GTK3 theme, developers can start with an empty file or they can use a pre-existing theme as a template. It may help beginners to start with a pre-existing theme. For instance, a theme can be copied to the user's home folder and then the developer can start editing the files.
The general format for a GTK3 theme is to create a folder named after the theme. Then, create a sub-directory called "gtk-3.0" and create a file inside of it named "gtk.css". In the "gtk.css" file, use CSS code to control how the theme will look. Move the theme to ~/.themes for testing purposes. Use the newly created theme and make changes as necessary. If desired, developers can add additional components to the theme for GTK2, Openbox, Metacity, Unity, etc.
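As a rough sketch of the layout just described, the commands below create a theme skeleton in the user's theme directory. "MyTheme" and the colors are placeholders rather than an existing theme, and the CSS selectors may need adjusting for your GTK version:

```
# create the folder structure for a new GTK3 theme
mkdir -p ~/.themes/MyTheme/gtk-3.0

# start with a tiny gtk.css and iterate from there
cat > ~/.themes/MyTheme/gtk-3.0/gtk.css <<'EOF'
/* placeholder colors - change them to taste */
@define-color bg_color #2d2d2d;
@define-color fg_color #eeeeee;

window {
    background-color: @bg_color;
    color: @fg_color;
}
EOF
```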
To explain how to create themes, we will study the "Ambiance" theme, which is usually found at /usr/share/themes/Ambiance. This directory contains the below listed sub-directories and a file named "index.theme".
* gtk-2.0
* gtk-3.0
* metacity-1
* unity
"**index.theme**" contains metadata (such as the theme's name) and some important settings (such as the button layout). Below is the "index.theme" file for "Ambiance".
Code:
```
[Desktop Entry]
Type=X-GNOME-Metatheme
Name=Ambiance
Comment=Ubuntu Ambiance theme
Encoding=UTF-8
[X-GNOME-Metatheme]
GtkTheme=Ambiance
MetacityTheme=Ambiance
IconTheme=ubuntu-mono-dark
CursorTheme=DMZ-White
ButtonLayout=close,minimize,maximize:
X-Ubuntu-UseOverlayScrollbars=true
```
The "**gtk-2.0**" directory contains files for GTK2 such as a "gtkrc" file and an "apps" directory that contains application-specific GTK settings. The "gtkrc" file is the main CSS-file for the GTK2 portion of the theme. Below are the contents of /usr/share/themes/Ambiance/gtk-2.0/apps/nautilus.rc
Code:
```
# ==============================================================================
# NAUTILUS SPECIFIC SETTINGS
# ==============================================================================
style "nautilus_info_pane" {
bg[NORMAL] = @bg_color
}
widget_class "*Nautilus*<GtkNotebook>*<GtkEventBox>" style "nautilus_info_pane"
widget_class "*Nautilus*<GtkButton>" style "notebook_button"
widget_class "*Nautilus*<GtkButton>*<GtkLabel>" style "notebook_button"
```
The "**gtk-3.0**" directory contains files for GTK3. Instead of "gtkrc", GTK3 uses "gtk.css" as the main theme file. In the Ambiance theme, the file contains one line - '@import url("gtk-main.css");'. The "settings.ini" file contains important theme-wide settings. GTK3 themes use an "apps" directory for the same purpose as GTK2. The "assets" directory contains images for radio buttons, check-boxes, etc. Below are the contents of /usr/share/themes/Ambiance/gtk-3.0/gtk-main.css
Code:
```
/*default color scheme */
@define-color bg_color #f2f1f0;
@define-color fg_color #4c4c4c;
@define-color base_color #ffffff;
@define-color text_color #3C3C3C;
@define-color selected_bg_color #f07746;
@define-color selected_fg_color #ffffff;
@define-color tooltip_bg_color #000000;
@define-color tooltip_fg_color #ffffff;
/* misc colors used by gtk+
*
* Gtk doesn't currently expand color variables for style properties. Thus,
* gtk-widgets.css uses literal color names, but includes a comment containing
* the name of the variable. Please remember to change values there as well
* when changing one of the variables below.
*/
@define-color info_fg_color rgb (181, 171, 156);
@define-color info_bg_color rgb (252, 252, 189);
@define-color warning_fg_color rgb (173, 120, 41);
@define-color warning_bg_color rgb (250, 173, 61);
@define-color question_fg_color rgb (97, 122, 214);
@define-color question_bg_color rgb (138, 173, 212);
@define-color error_fg_color rgb (235, 235, 235);
@define-color error_bg_color rgb (223, 56, 44);
@define-color link_color @selected_bg_color;
@define-color success_color #4e9a06;
@define-color error_color #df382c;
/* theme common colors */
@define-color button_bg_color shade (@bg_color, 1.02); /*shade (#cdcdcd, 1.08);*/
@define-color notebook_button_bg_color shade (@bg_color, 1.02);
@define-color button_insensitive_bg_color mix (@button_bg_color, @bg_color, 0.6);
@define-color dark_bg_color #3c3b37;
@define-color dark_fg_color #dfdbd2;
@define-color backdrop_fg_color mix (@bg_color, @fg_color, 0.8);
@define-color backdrop_text_color mix (@base_color, @text_color, 0.8);
@define-color backdrop_dark_fg_color mix (@dark_bg_color, @dark_fg_color, 0.75);
/*@define-color backdrop_dark_bg_color mix (@dark_bg_color, @dark_fg_color, 0.75);*/
@define-color backdrop_selected_bg_color shade (@bg_color, 0.92);
@define-color backdrop_selected_fg_color @fg_color;
@define-color focus_color alpha (@selected_bg_color, 0.5);
@define-color focus_bg_color alpha (@selected_bg_color, 0.1);
@define-color shadow_color alpha(black, 0.5);
@define-color osd_fg_color #eeeeec;
@define-color osd_bg_color alpha(#202526, 0.7);
@define-color osd_border_color alpha(black, 0.7);
@import url("gtk-widgets-borders.css");
@import url("gtk-widgets-assets.css");
@import url("gtk-widgets.css");
@import url("apps/geary.css");
@import url("apps/unity.css");
@import url("apps/baobab.css");
@import url("apps/gedit.css");
@import url("apps/nautilus.css");
@import url("apps/gnome-panel.css");
@import url("apps/gnome-terminal.css");
@import url("apps/gnome-system-log.css");
@import url("apps/unity-greeter.css");
@import url("apps/glade.css");
@import url("apps/california.css");
@import url("apps/software-center.css");
@import url("public-colors.css");
```
The "**metacity-1**" folder contains images that the Metacity window-manager uses for buttons (such as the "close window" button). This directory also contains a file named "metacity-theme-1.xml" that contains the theme's metadata (like the developer's name) and styling. However, the Metacity portion of the theme uses XML rather than CSS.
The "**unity**" directory contains SVG files that Unity uses for buttons. Besides the SVG files, there are no other files in this folder.
Some themes may contain other directories. For instance, "Clearlooks-Phenix" has folders named "**openbox-3**" and "**xfwm4**". The "openbox-3" folder only contains a "themerc" file that declares the settings and appearance (a sample is seen below). The "xfwm4" directory contains *.xpm files, *.png images (in the "png" folder), a "README" file, and a "themerc" file which contains settings (as seen below).
/usr/share/themes/Clearlooks-Phenix/xfwm4/themerc
Code:
```
# Clearlooks XFWM4 by Casey Kirsle
show_app_icon=true
active_text_color=#FFFFFF
inactive_text_color=#939393
title_shadow_active=frame
title_shadow_inactive=false
button_layout=O|HMC
button_offset=2
button_spacing=2
full_width_title=true
maximized_offset=0
title_vertical_offset_active=1
title_vertical_offset_inactive=1
```
/usr/share/themes/Clearlooks-Phenix/openbox-3/themerc
Code:
```
!# Clearlooks-Evolving
!# Clearlooks as it evolves in gnome-git...
!# Last updated 09/03/10
# Fonts
# these are really halos, but who cares?
*.font: shadow=n
window.active.label.text.font:shadow=y:shadowtint=30:shadowoffset=1
window.inactive.label.text.font:shadow=y:shadowtint=00:shadowoffset=0
menu.items.font:shadow=y:shadowtint=0:shadowoffset=1
!# general stuff
border.width: 1
padding.width: 3
padding.height: 2
window.handle.width: 3
window.client.padding.width: 0
menu.overlap: 2
*.justify: center
!# lets set our damn shadows here, eh?
*.bg.highlight: 50
*.bg.shadow: 05
window.active.title.bg.highlight: 35
window.active.title.bg.shadow: 05
window.inactive.title.bg.highlight: 30
window.inactive.title.bg.shadow: 05
window.*.grip.bg.highlight: 50
window.*.grip.bg.shadow: 30
window.*.handle.bg.highlight: 50
window.*.handle.bg.shadow: 30
!# Menu settings
menu.border.color: #aaaaaa
menu.border.width: 1
menu.title.bg: solid flat
menu.title.bg.color: #E6E7E6
menu.title.text.color: #111111
menu.items.bg: Flat Solid
menu.items.bg.color: #ffffff
menu.items.text.color: #111111
menu.items.disabled.text.color: #aaaaaa
menu.items.active.bg: Flat Gradient splitvertical border
menu.items.active.bg.color: #97b8e2
menu.items.active.bg.color.splitTo: #a8c5e9
menu.items.active.bg.colorTo: #91b3de
menu.items.active.bg.colorTo.splitTo: #80a7d6
menu.items.active.bg.border.color: #4b6e99
menu.items.active.text.color: #ffffff
menu.separator.width: 1
menu.separator.padding.width: 0
menu.separator.padding.height: 3
menu.separator.color: #aaaaaa
!# set handles here and only the once?
window.*.handle.bg: Raised solid
window.*.handle.bg.color: #eaebec
window.*.grip.bg: Raised solid
window.*.grip.bg.color: #eaebec
!# Active
window.*.border.color: #585a5d
window.active.title.separator.color: #4e76a8
*.title.bg: Raised Gradient splitvertical
*.title.bg.color: #8CB0DC
*.title.bg.color.splitTo: #99BAE3
*.title.bg.colorTo: #86ABD9
*.title.bg.colorTo.splitTo: #7AA1D1
window.active.label.bg: Parentrelative
window.active.label.text.color: #ffffff
window.active.button.*.bg: Flat Gradient splitvertical Border
window.active.button.*.bg.color: #92B4DF
window.active.button.*.bg.color.splitTo: #B0CAEB
window.active.button.*.bg.colorTo: #86ABD9
window.active.button.*.bg.colorTo.splitTo: #769FD0
window.active.button.*.bg.border.color: #49678B
window.active.button.*.image.color: #F4F5F6
window.active.button.hover.bg.color: #b5d3ef
window.active.button.hover.bg.color.splitTo: #b5d3ef
window.active.button.hover.bg.colorTo: #9cbae7
window.active.button.hover.bg.colorTo.splitTo: #8caede
window.active.button.hover.bg.border.color: #4A658C
window.active.button.hover.image.color: #ffffff
window.active.button.pressed.bg: Flat solid Border
window.active.button.pressed.bg.color: #7aa1d2
window.active.button.hover.bg.border.color: #4A658C
!# inactive
!#window.inactive.border.color: #7e8285
window.inactive.title.separator.color: #96999d
window.inactive.title.bg: Raised Gradient splitvertical
window.inactive.title.bg.color: #E3E2E0
window.inactive.title.bg.color.splitTo: #EBEAE9
window.inactive.title.bg.colorTo: #DEDCDA
window.inactive.title.bg.colorTo.splitTo: #D5D3D1
window.inactive.label.bg: Parentrelative
window.inactive.label.text.color: #70747d
window.inactive.button.*.bg: Flat Gradient splitVertical Border
window.inactive.button.*.bg.color: #ffffff
window.inactive.button.*.bg.color.splitto: #ffffff
window.inactive.button.*.bg.colorTo: #F9F8F8
window.inactive.button.*.bg.colorTo.splitto: #E9E7E6
window.inactive.button.*.bg.border.color: #928F8B
window.inactive.button.*.image.color: #6D6C6C
!# osd (pop ups and what not, dock?)
osd.border.width: 1
osd.border.color: #aaaaaa
osd.bg: flat border gradient splitvertical
osd.bg.color: #F0EFEE
osd.bg.color.splitto: #f5f5f4
osd.bg.colorTo: #EAEBEC
osd.bg.colorTo.splitto: #E7E5E4
osd.bg.border.color: #ffffff
osd.active.label.bg: parentrelative
osd.active.label.bg.color: #efefef
osd.active.label.bg.border.color: #9c9e9c
osd.active.label.text.color: #444
osd.inactive.label.bg: parentrelative
osd.inactive.label.text.color: #70747d
!# yeah whatever, this is fine anyhoo?
osd.hilight.bg: flat vertical gradient
osd.hilight.bg.color: #9ebde5
osd.hilight.bg.colorTo: #749dcf
osd.unhilight.bg: flat vertical gradient
osd.unhilight.bg.color: #BABDB6
osd.unhilight.bg.colorTo: #efefef
```
**Testing Themes**
When creating themes, it may be helpful to test them and tweak the code to get the desired appearance. Developers may want to use some type of "theme previewer"; thankfully, some exist (a combined install command for Debian-based systems follows the list).
* GTK+ Change Theme - This program can change the GTK theme and allow developers to preview the theme. The program is composed of one window that contains many widgets, thus providing a complete preview for the theme. To install this program, type "apt-get install gtk-chtheme".
* GTK Theme Switch - This program allows users to easily change the user's theme. Be sure to have some applications open to view and test the theme. To install this program, type "apt-get install gtk-theme-switch" and type "gtk-theme-switch2" in a terminal to run it.
* LXappearance - This program can change themes, icons, and fonts.
* PyWF - This is a Python-based alternative to "The Widget Factory". PyWF can be obtained at [http://gtk-apps.org/content/show.php/PyTWF?content=102024][1]
* The Widget Factory - This is an old GTK-previewer. To install this program, type "apt-get install thewidgetfactory" and type "twf" in a terminal to run it.
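On a Debian-based system, the previewers named above can be installed in one pass. The package names below are the ones given in the list (plus "lxappearance", which is the usual package name for LXappearance); some of these tools are old and may be missing from newer releases:

```
sudo apt-get install gtk-chtheme gtk-theme-switch lxappearance thewidgetfactory
```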
**Theme Downloads**
* Cinnamon - [http://gnome-look.org/index.php?xcontentmode=104][2]
* Compiz - [http://gnome-look.org/index.php?xcontentmode=102][3]
* GNOME Shell - [http://gnome-look.org/index.php?xcontentmode=191][4]
* GTK2 - [http://gnome-look.org/index.php?xcontentmode=100][5]
* GTK3 - [http://gnome-look.org/index.php?xcontentmode=167][6]
* KDE/Qt - [http://kde-look.org/index.php?xcontentmode=8x9x10x11x12x13x14x15x16][7]
* Linux Mint Themes - [http://linuxmint-art.org/index.php?xcontentmode=9x14x100][8]
* Metacity - [http://gnome-look.org/index.php?xcontentmode=101][9]
* Ubuntu Themes - [http://www.ubuntuthemes.org/][10]
**Further Reading**
* Graphical User Interface (GUI) Reading Guide - [http://www.linux.org/threads/gui-reading-guide.6471/][11]
* GTK - [http://www.linux.org/threads/understanding-gtk.6291/][12]
* Introduction to Glade - [http://www.linux.org/threads/introduction-to-glade.7142/][13]
* Desktop Environment vs Window Managers - [http://www.linux.org/threads/desktop-environment-vs-window-managers.7802/][14]
* Official GTK+ 3 Reference Manual - [https://developer.gnome.org/gtk3/stable/][15]
* GtkCssProvider - [https://developer.gnome.org/gtk3/stable/GtkCssProvider.html][16]
--------------------------------------------------------------------------------
via: http://www.linux.org/threads/installing-obtaining-and-making-gtk-themes.8463/
作者:[DevynCJohnson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linux.org/members/devyncjohnson.4843/
[1]:http://gtk-apps.org/content/show.php/PyTWF?content=102024
[2]:http://gnome-look.org/index.php?xcontentmode=104
[3]:http://gnome-look.org/index.php?xcontentmode=102
[4]:http://gnome-look.org/index.php?xcontentmode=191
[5]:http://gnome-look.org/index.php?xcontentmode=100
[6]:http://gnome-look.org/index.php?xcontentmode=167
[7]:http://kde-look.org/index.php?xcontentmode=8x9x10x11x12x13x14x15x16
[8]:http://linuxmint-art.org/index.php?xcontentmode=9x14x100
[9]:http://gnome-look.org/index.php?xcontentmode=101
[10]:http://www.ubuntuthemes.org/
[11]:http://www.linux.org/threads/gui-reading-guide.6471/
[12]:http://www.linux.org/threads/understanding-gtk.6291/
[13]:http://www.linux.org/threads/introduction-to-glade.7142/
[14]:http://www.linux.org/threads/desktop-environment-vs-window-managers.7802/
[15]:https://developer.gnome.org/gtk3/stable/
[16]:https://developer.gnome.org/gtk3/stable/GtkCssProvider.html
[17]:http://www.gtk.org/

View File

@ -0,0 +1,995 @@
Network automation with Ansible
================
### Network Automation
As the IT industry transforms with technologies from server virtualization to public and private clouds with self-service capabilities, containerized applications, and Platform as a Service (PaaS) offerings, one of the areas that continues to lag behind is the network.
Over the past 5+ years, the network industry has seen many new trends emerge, many of which are categorized as software-defined networking (SDN).
###### Note
SDN is a new approach to building, managing, operating, and deploying networks. The original definition for SDN was that there needed to be a physical separation of the control plane from the data (packet forwarding) plane, and the decoupled control plane must control several devices.
Nowadays, many more technologies get put under the _SDN umbrella_, including controller-based networks, APIs on network devices, network automation, whitebox switches, policy networking, Network Functions Virtualization (NFV), and the list goes on.
For the purposes of this report, we refer to SDN solutions as those that include a network controller as part of the solution and that improve the manageability of the network, but don't necessarily decouple the control plane from the data plane.
One of these trends is the emergence of application programming interfaces (APIs) on network devices as a way to manage and operate these devices and truly offer machine to machine communication. APIs simplify the development process when it comes to automation and building network applications, providing more structure on how data is modeled. For example, when API-enabled devices return data in JSON/XML, it is structured and easier to work with as compared to CLI-only devices that return raw text that then needs to be manually parsed.
Prior to APIs, the two primary mechanisms used to configure and manage network devices were the command-line interface (CLI) and Simple Network Management Protocol (SNMP). If we look at each of those, the CLI was meant as a human interface to the device, and SNMP wasn't built to be a real-time programmatic interface for network devices.
Luckily, as many vendors scramble to add APIs to devices, sometimes _just because_ it's a check in the box on an RFP, there is actually a great byproduct—enabling network automation. Once a true API is exposed, the process for accessing data within the device, as well as managing the configuration, is greatly simplified, but as we'll review in this report, automation is also possible using more traditional methods, such as CLI/SNMP.
###### Note
As network refreshes happen in the months and years to come, vendor APIs should no doubt be tested and used as key decision-making criteria for purchasing network equipment (virtual and physical). Users should want to know how data is modeled by the equipment, what type of transport is used by the API, if the vendor offers any libraries or integrations to automation tools, and if open standards/protocols are being used.
Generally speaking, network automation, like most types of automation, equates to doing things faster. While doing more faster is nice, reducing the time for deployments and configuration changes isn't always a problem that needs solving for many IT organizations.
With speed in mind, we'll now take a look at a few of the reasons that IT organizations of all shapes and sizes should look at gradually adopting network automation. You should note that the same principles apply to other types of automation as well.
### Simplified Architectures
Today, every network is a unique snowflake, and network engineers take pride in solving transport and application issues with one-off network changes that ultimately make the network not only harder to maintain and manage, but also harder to automate.
Instead of thinking about network automation and management as a secondary or tertiary project, it needs to be included from the beginning as new architectures and designs are deployed. Which features work across vendors? Which extensions work across platforms? What type of API or automation tooling works when using particular network device platforms? When these questions get answered earlier on in the design process, the resulting architecture becomes simpler, repeatable, and easier to maintain _and_ automate, all with fewer vendor proprietary extensions enabled throughout the network.
### Deterministic Outcomes
In an enterprise organization, change review meetings take place to review upcoming changes on the network, the impact they have on external systems, and rollback plans. In a world where a human is touching the CLI to make those _upcoming changes_, the impact of typing the wrong command is catastrophic. Imagine a team with three, four, five, or 50 engineers. Every engineer may have his own way of making that particular _upcoming change_. And the ability to use a CLI or a GUI does not eliminate or reduce the chance of error during the control window for the change.
Using proven and tested network automation helps achieve more predictable behavior and gives the executive team a better chance at achieving deterministic outcomes, moving one step closer to having the assurance that the task is going to get done right the first time without human error.
### Business Agility
It goes without saying that network automation offers speed and agility not only for deploying changes, but also for retrieving data from network devices as fast as the business demands. Since the advent of server virtualization, server and virtualization admins have had the ability to deploy new applications almost instantaneously. And the faster applications are deployed, the more questions are raised as to why it takes so long to configure a VLAN, route, FW ACL, or load-balancing policy.
By understanding the most common workflows within an organization and _why_ network changes are really required, the process to deploy modern automation tooling such as Ansible becomes much simpler.
This chapter introduced some of the high-level points on why you should consider network automation. In the next section, we take a look at what Ansible is and continue to dive into different types of network automation that are relevant to IT organizations of all sizes.
### What Is Ansible?
Ansible is one of the newer IT automation and configuration management platforms that exists in the open source world. It's often compared to other tools such as Puppet, Chef, and SaltStack. Ansible emerged on the scene in 2012 as an open source project created by Michael DeHaan, who also created Cobbler and cocreated Func, both of which are very popular in the open source community. Less than 18 months after the Ansible open source project started, Ansible Inc. was formed and received $6 million in Series A funding. It became and is still the number one contributor to and supporter of the Ansible open source project. In October 2015, Red Hat acquired Ansible Inc.
But, what exactly is Ansible?
_Ansible is a super-simple automation platform that is agentless and extensible._
Let's dive into this statement in a bit more detail and look at the attributes of Ansible that have helped it gain a significant amount of traction within the industry.
### Simple
One of the most attractive attributes of Ansible is that you _DO NOT_ need any special coding skills in order to get started. All instructions, or tasks to be automated, are documented in a standard, human-readable data format that anyone can understand. It is not uncommon to have Ansible installed and automating tasks in under 30 minutes!
For example, the following task from an Ansible playbook is used to ensure a VLAN exists on a Cisco Nexus switch:
```
- nxos_vlan: vlan_id=100 name=web_vlan
```
You can tell by looking at this almost exactly what it's going to do without understanding or writing any code!
###### Note
The second half of this report covers the Ansible terminology (playbooks, plays, tasks, modules, etc.) in great detail. However, we have included a few brief examples in the meantime to convey key concepts when using Ansible for network automation.
### Agentless
If you look at other tools on the market, such as Puppet and Chef, you'll learn that, by default, they require that each device you are automating have specialized software installed. This is _NOT_ the case with Ansible, and this is the major reason why Ansible is a great choice for networking automation.
It's well understood that IT automation tools, including Puppet, Chef, CFEngine, SaltStack, and Ansible, were initially built to manage and automate the configuration of Linux hosts to increase the pace at which applications are deployed. Because Linux systems were being automated, getting agents installed was never a technical hurdle to overcome. If anything, it just delayed the setup, since now _N_ number of hosts (the hosts you want to automate) needed to have software deployed on them.
On top of that, when agents are used, there is additional complexity required for DNS and NTP configuration. These are services that most environments do have already, but when you need to get something up fairly quickly or simply want to see what it can do from a test perspective, it could significantly delay the overall setup and installation process.
Since this report is meant to cover Ansible for network automation, it's worth pointing out that having Ansible as an agentless platform is even more compelling to network admins than to sysadmins. Why is this?
It's more compelling for network admins because as mentioned, Linux operating systems are open, and anything can be installed on them. For networking, this is definitely not the case, although it is gradually changing. If we take the most widely deployed network operating system, Cisco IOS, as just one example and ask the question, _"Can third-party software be installed on IOS-based platforms?"_ it shouldn't come as a surprise that the answer is _NO_.
For the last 20+ years, nearly all network operating systems have been closed and vertically integrated with the underlying network hardware. Because it's not so easy to load an agent on a network device (router, switch, load balancer, firewall, etc.) without vendor support, having an automation platform like Ansible that was built from the ground up to be agentless and extensible is just what the doctor ordered for the network industry. We can finally start eliminating manual interactions with the network with ease!
### Extensible
Ansible is also extremely extensible. As open source and code start to play a larger role in the network industry, having platforms that are extensible is a must. This means that if the vendor or community doesn't provide a particular feature or function, the open source community, end user, customer, consultant, or anyone else can _extend_ Ansible to enable a given set of functionality. In the past, the network vendor or tool vendor was on the hook to provide the new plug-ins and integrations. Imagine using an automation platform like Ansible, and your network vendor of choice releases a new feature that you _really_ need automated. While the network vendor or Ansible could in theory release the new plug-in to automate that particular feature, the great thing is, anyone from your internal engineers to your value-added resellers (VARs) or consultants could now provide these integrations.
Ansible is extremely extensible precisely because, as stated, it was initially built to automate applications and systems. It is because of Ansible's extensibility that Ansible integrations have been written for network vendors, including but not limited to Cisco, Arista, Juniper, F5, HP, A10, Cumulus, and Palo Alto Networks.
### Why Ansible for Network Automation?
We've taken a brief look at what Ansible is and also some of the benefits of network automation, but why should Ansible be used for network automation?
In full transparency, many of the reasons already stated are what make Ansible such a great platform for automating application deployments. However, we'll take this a step further now, getting even more focused on networking, and continue to outline a few other key points to be aware of.
### Agentless
The importance of an agentless architecture cannot be stressed enough when it comes to network automation, especially as it pertains to automating existing devices. If we take a look at all devices currently installed at various parts of the network, from the DMZ and campus, to the branch and data center, the lion's share of devices do _NOT_ have a modern device API. While having an API makes things so much simpler from an automation perspective, an agentless platform like Ansible makes it possible to automate and manage those _legacy_ _(traditional)_ devices, for example, _CLI-based devices_, making it a tool that can be used in any network environment.
###### Note
If CLI-only devices are integrated with Ansible, the mechanisms as to how the devices are accessed for read-only and read-write operations occur through protocols such as telnet, SSH, and SNMP.
As standalone network devices like routers, switches, and firewalls continue to add support for APIs, SDN solutions are also emerging. The one common theme with SDN solutions is that they all offer a single point of integration and policy management, usually in the form of an SDN controller. This is true for solutions such as Cisco ACI, VMware NSX, Big Switch Big Cloud Fabric, and Juniper Contrail, as well as many of the other SDN offerings from companies such as Nuage, Plexxi, Plumgrid, Midokura, and Viptela. This even includes open source controllers such as OpenDaylight.
These solutions all simplify the management of networks, as they allow an administrator to start to migrate from box-by-box management to network-wide, single-system management. While this is a great step in the right direction, these solutions still don't eliminate the risk of human error during change windows. For example, rather than configure _N_ switches, you may need to configure a single GUI that could take just as long in order to make the required configuration change—it may even be more complex, because after all, who prefers a GUI _over_ a CLI! Additionally, you may possibly have different types of SDN solutions deployed per application, network, region, or data center.
The need to automate networks, for configuration management, monitoring, and data collection, does not go away as the industry begins migrating to controller-based network architectures.
As most software-defined networks are deployed with a controller, nearly all controllers expose a modern REST API. And because Ansible has an agentless architecture, it makes it extremely simple to automate not only legacy devices that may not have an API, but also software-defined networking solutions via REST APIs, all without requiring any additional software (agents) on the endpoints. The net result is being able to automate any type of device using Ansible with or without an API.
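As a minimal sketch of that idea, assuming a hypothetical controller URL, credentials, API path, and a JSON response, Ansible's core `uri` module can query a controller's REST API from the control host with no agent on the endpoint:
```
---
- name: QUERY AN SDN CONTROLLER REST API
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # The URL, credentials, and API path below are placeholders for your controller
    - name: GET FABRIC HEALTH FROM THE CONTROLLER
      uri:
        url: https://controller.example.com/api/v1/health
        method: GET
        user: admin
        password: admin
        validate_certs: no
        return_content: yes
      register: health
    - name: SHOW THE RETURNED DATA
      debug:
        var: health.json
```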
### Free and Open Source Software (FOSS)
Being that Ansible is open source with all code publicly accessible on GitHub, it is absolutely free to get started using Ansible. It can literally be installed and providing value to network engineers in minutes. Ansible, the open source project, or Ansible Inc., do not require any meetings with sales reps before they hand over software either. That is stating the obvious, since it's true for all open source projects, but being that the use of open source, community-driven software within the network industry is fairly new and gradually increasing, we wanted to explicitly make this point.
It is also worth stating that Ansible, Inc. is indeed a company and needs to make money somehow, right? While Ansible is open source, it also has an enterprise product called Ansible Tower that adds features such as role-based access control (RBAC), reporting, web UI, REST APIs, multi-tenancy, and much more, which is usually a nice fit for enterprises looking to deploy Ansible. And the best part is that even Ansible Tower is _FREE_ for up to 10 devices—so, at least you can get a taste of Tower to see if it can benefit your organization without spending a dime and sitting in countless sales meetings.
### Extensible
We stated earlier that Ansible was primarily built as an automation platform for deploying Linux applications, although it has expanded to Windows since the early days. The point is that the Ansible open source project did not have the goal of automating network infrastructure. The truth is that the more the Ansible community understood how flexible and extensible the underlying Ansible architecture was, the easier it became to _extend_ Ansible for their automation needs, which included networking. Over the past two years, there have been a number of Ansible integrations developed, many by industry independents such as Matt Oswalt, Jason Edelman, Kirk Byers, Elisa Jasinska, David Barroso, Michael Ben-Ami, Patrick Ogenstad, and Gabriele Gerbino, as well as by leading network vendors such as Arista, Juniper, Cumulus, Cisco, F5, and Palo Alto Networks.
### Integrating into Existing DevOps Workflows
Ansible is used for application deployments within IT organizations. It's used by operations teams that need to manage the deployment, monitoring, and management of various types of applications. By integrating Ansible with the network infrastructure, it expands what is possible when new applications are turned up or migrated. Rather than have to wait for a new top of rack (TOR) switch to be turned up, a VLAN to be added, or interface speed/duplex to be checked, all of these network-centric tasks can be automated and integrated into existing workflows that already exist within the IT organization.
### Idempotency
The term _idempotency_ (pronounced item-potency) is used often in the world of software development, especially when working with REST APIs, as well as in the world of _DevOps_ automation and configuration management frameworks, including Ansible. One of Ansible's beliefs is that all Ansible modules (integrations) should be idempotent. Okay, so what does it mean for a module to be idempotent? After all, this is a new term for most network engineers.
The answer is simple. Being idempotent allows the defined task to run one time or a thousand times without having an adverse effect on the target system, only ever making the change once. In other words, if a change is required to get the system into its desired state, the change is made; and if the device is already in its desired state, no change is made. This is unlike most traditional custom scripts and the copy and pasting of CLI commands into a terminal window. When the same command or script is executed repeatedly on the same system, errors are (sometimes) raised. Ever paste a command set into a router and get some type of error that invalidates the rest of your configuration? Was that fun?
Another example is if you have a text file or a script that configures 10 VLANs, the same commands are then entered 10 times _EVERY_ time the script is run. If an idempotent Ansible module is used, the existing configuration is gathered first from the network device, and each new VLAN being configured is checked against the current configuration. Only if the new VLAN needs to be added (or changed—VLAN name, as an example) is a change or command actually pushed to the device.
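As a brief, hedged sketch (the `un` and `pwd` variables are assumed to be defined in the inventory, and the run results in the comments are representative, not captured from a real device), an idempotent task can be run over and over and only reports a change the first time:
```
- name: ENSURE VLAN 10 EXISTS
  nxos_vlan: vlan_id=10 name=web_vlan host={{ inventory_hostname }} username={{ un }} password={{ pwd }}
# First run:  "changed": true   -- the VLAN is created
# Second run: "changed": false  -- the device is already in the desired state
```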
As the technologies become more complex, the value of idempotency only increases because with idempotency, you shouldn't care about the _existing_ state of the network device being modified, only the _desired_ state that you are trying to achieve from a network configuration and policy perspective.
### Network-Wide and Ad Hoc Changes
One of the problems solved with configuration management tools is configuration drift (when a device's desired configuration gradually drifts, or changes, over time due to manual change and/or having multiple disparate tools being used in an environment)—in fact, this is where tools like Puppet and Chef got started. Agents _phone home_ to the head-end server, validate its configuration, and if a change is required, the change is made. The approach is simple enough. What if an outage occurs and you need to troubleshoot though? You usually bypass the management system, go direct to a device, find the fix, and quickly leave for the day, right? Sure enough, at the next time interval when the agent phones back home, the change made to fix the problem is overwritten (based on how the _master/head-end server_ is configured). One-off changes should always be limited in highly automated environments, but tools that still allow for them are greatly valuable. As you guessed, one of these tools is Ansible.
Because Ansible is agentless, there is not a default push or pull to prevent configuration drift. The tasks to automate are defined in what is called an Ansible playbook. When using Ansible, it is up to the user to run the playbook. If the playbook is to be executed at a given time interval and you're not using Ansible Tower, you will definitely know how often the tasks are run; if you are just using the native Ansible command line from a terminal prompt, the playbook is run once and only once.
Running a playbook once by default is attractive for network engineers. It is added peace of mind that changes made manually on the device are not going to be automatically overwritten. Additionally, the scope of devices that a playbook is executed against is easily changed when needed such that even if a single change needs to automate only a single device, Ansible can still be used. The _scope_ of devices is determined by what is called an Ansible inventory file; the inventory could have one device or a thousand devices.
The following shows a sample inventory file with two groups defined and a total of six network devices:
```
[core-switches]
dc-core-1
dc-core-2
[leaf-switches]
leaf1
leaf2
leaf3
leaf4
```
To automate all hosts, a snippet from your play definition in a playbook looks like this:
```
hosts: all
```
And to automate just one leaf switch, it looks like this:
```
hosts: leaf1
```
And just the core switches:
```
hosts: core-switches
```
###### Note
As stated previously, playbooks, plays, and inventories are covered in more detail later in this report.
Being able to easily automate one device or _N_ devices makes Ansible a great choice for making those one-off changes when they are required. It's also great for those changes that are network-wide: possibly for shutting down all interfaces of a given type, configuring interface descriptions, or adding VLANs to wiring closets across an enterprise campus network.
### Network Task Automation with Ansible
This report is gradually getting more technical in two areas. The first area is around the details and architecture of Ansible, and the second area is about exactly what types of tasks can be automated from a network perspective with Ansible. The latter is what we'll take a look at in this chapter.
Automation is commonly equated with speed, and considering that some network tasks don't require speed, it's easy to see why some IT teams don't see the value in automation. VLAN configuration is a great example because you may be thinking, "How _fast_ does a VLAN really need to get created? Just how many VLANs are being added on a daily basis? Do _I_ really need automation?"
In this section, we are going to focus on several other tasks where automation makes sense, such as device provisioning, data collection, reporting, and compliance. But remember, as we stated earlier, automation is about much more than speed and agility; it offers you, your team, and your business more predictable and more deterministic outcomes.
### Device Provisioning
One of the easiest and fastest ways to get started using Ansible for network automation is creating device configuration files that are used for initial device provisioning and pushing them to network devices.
If we take this process and break it down into two steps, the first step is creating the configuration file, and the second is pushing the configuration onto the device.
First, we need to decouple the _inputs_ from the underlying vendor proprietary syntax (CLI) of the config file. This means we'll have separate files with values for the configuration parameters such as VLANs, domain information, interfaces, routing, and everything else, and then, of course, one or more configuration template files. For this example, this is our standard golden template that's used for all devices getting deployed. Ansible helps bridge the gap between rendering the inputs and values with the configuration template. In less than a few seconds, Ansible can generate hundreds of configuration files predictably and reliably.
Let's take a quick look at an example of taking a current configuration and decomposing it into a template and separate variables (inputs) file.
Here is an example of a configuration file snippet:
```
hostname leaf1
ip domain-name ntc.com
!
vlan 10
name web
!
vlan 20
name app
!
vlan 30
name db
!
vlan 40
name test
!
vlan 50
name misc
```
If we extract the input values, this file is transformed into a template.
###### Note
Ansible uses the Python-based Jinja2 templating language, thus the template called _leaf.j2_ is a Jinja2 template.
Note that in the following example the _double curly braces_ denote a variable.
The resulting template looks like this and is given the filename _leaf.j2_:
```
!
hostname {{ inventory_hostname }}
ip domain-name {{ domain_name }}
!
!
{% for vlan in vlans %}
vlan {{ vlan.id }}
name {{ vlan.name }}
{% endfor %}
!
```
Since the double curly braces denote variables, and we see those values are not in the template, they need to be stored somewhere. They get stored in a variables file. A matching variables file for the previously shown template looks like this:
```
---
hostname: leaf1
domain_name: ntc.com
vlans:
- { id: 10, name: web }
- { id: 20, name: app }
- { id: 30, name: db }
- { id: 40, name: test }
- { id: 50, name: misc }
```
This means if the team that controls VLANs wants to add a VLAN to the network devices, no problem. Have them change it in the variables file and regenerate a new config file using the Ansible module called `template`. This whole process is idempotent too; only if there is a change to the template or values being entered will a new configuration file be generated.
Once the configuration is generated, it needs to be _pushed_ to the network device. One such method to push configuration files to network devices is using the open source Ansible module called `napalm_install_config`.
The next example is a sample playbook to _build and push_ a configuration to network devices. Again, this playbook uses the `template` module to build the configuration files and the `napalm_install_config` to push them and activate them as the new running configurations on the devices.
Even though not every line is reviewed in the example, you can still make out what is actually happening.
###### Note
The following playbook introduces new concepts such as the built-in variable `inventory_hostname`. These concepts are covered in [Ansible Terminology and Getting Started][1].
```
---
- name: BUILD AND PUSH NETWORK CONFIGURATION FILES
hosts: leaves
connection: local
gather_facts: no
tasks:
- name: BUILD CONFIGS
template:
src=templates/leaf.j2
dest=configs/{{inventory_hostname }}.conf
- name: PUSH CONFIGS
napalm_install_config:
hostname={{ inventory_hostname }}
username={{ un }}
password={{ pwd }}
dev_os={{ os }}
config_file=configs/{{ inventory_hostname }}.conf
commit_changes=1
replace_config=0
```
This two-step process is the simplest way to get started with network automation using Ansible. You simply template your configs, build config files, and push them to the network device—otherwise known as the _BUILD and PUSH_ method.
###### Note
Another example like this is reviewed in much more detail in [Ansible Network Integrations][2].
### Data Collection and Monitoring
Monitoring tools typically use SNMP—these tools poll certain management information bases (MIBs) and return data to the monitoring tool. Based on the data being returned, it may be more or less than you actually need. What if interface stats are being polled? You are likely getting back every counter that is displayed in a _show interface_ command. What if you only need _interface resets_ and wish to see these resets correlated to the interfaces that have CDP/LLDP neighbors on them? Of course, this is possible with current technology; it could be you are running multiple show commands and parsing the output manually, or you're using an SNMP-based tool but going between tabs in the GUI trying to find the data you actually need. How does Ansible help with this?
Being that Ansible is totally open and extensible, it's possible to collect and monitor the exact counters or values needed. This may require some up-front custom work but is totally worth it in the end, because the data being gathered is what you need, not what the vendor is providing you. Ansible also provides intuitive ways to perform certain tasks conditionally, which means based on data being returned, you can perform subsequent tasks, which may be to collect more data or to make a configuration change.
Network devices have _A LOT_ of static and ephemeral data buried inside, and Ansible helps extract the bits you need.
You can even use Ansible modules that use SNMP behind the scenes, such as a module called `snmp_device_version`. This is another open source module that exists within the community:
```
- name: GET SNMP DATA
snmp_device_version:
host=spine
community=public
version=2c
```
Running the preceding task returns great information about a device and adds some level of discovery capabilities to Ansible. For example, that task returns the following data:
```
{"ansible_facts": {"ansible_device_os": "nxos", "ansible_device_vendor": "cisco", "ansible_device_version": "7.0(3)I2(1)"}, "changed": false}
```
You can now determine what type of device something is without knowing up front. All you need to know is the read-only community string of the device.
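As a hedged sketch building on that output (the community string is a placeholder and `un`/`pwd` are assumed inventory variables), the discovered facts can drive a conditional follow-on task:
```
- name: DISCOVER DEVICE TYPE VIA SNMP
  snmp_device_version:
    host={{ inventory_hostname }}
    community=public
    version=2c
# ansible_device_os is set as a fact by the previous task
- name: ENSURE VLAN 10 EXISTS ON DISCOVERED NEXUS SWITCHES
  nxos_vlan:
    vlan_id=10
    name=web_vlan
    host={{ inventory_hostname }}
    username={{ un }}
    password={{ pwd }}
  when: ansible_device_os == "nxos"
```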
### Migrations
Migrating from one platform to the next is never an easy task. This may be from the same vendor or from different vendors. Vendors may offer a script or a tool to help with migrations. Ansible can be used to build out configuration templates for all types of network devices and operating systems in such a way that you could generate a configuration file for all vendors given a defined and common set of inputs (common data model). Of course, if there are vendor proprietary extensions, they'll need to be accounted for, too. Having this type of flexibility helps with not only migrations, but also disaster recovery (DR), as it's very common to have different switch models in the production and DR data centers, maybe even different vendors.
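As a hedged sketch of that approach, assuming a per-host `vendor` variable (like the one in the sample inventory later in this report) and hypothetical per-vendor Jinja2 templates that all consume the same data model, one play can render a configuration for every platform:
```
---
- name: RENDER CONFIGS FROM A COMMON DATA MODEL
  hosts: all
  connection: local
  gather_facts: no
  tasks:
    # templates/nxos.j2, templates/eos.j2, etc. are hypothetical vendor templates
    # fed by the same variables (VLANs, interfaces, routing)
    - name: BUILD VENDOR-SPECIFIC CONFIG
      template:
        src=templates/{{ vendor }}.j2
        dest=configs/{{ inventory_hostname }}.conf
```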
### Configuration Management
As stated, configuration management is the most common type of automation. What Ansible allows you to do fairly easily is create _roles_ to streamline the consumption of task-based automation. From a high level, a role is a logical grouping of reusable tasks that are automated against a particular group of devices. Another way to think about roles is to think about workflows. First and foremost, workflows and processes need to be understood before automation is going to start adding value. It's always important to start small and expand from there.
For example, a set of tasks that automate the configuration of routers and switches is very common and is a great place to start. But where do the IP addresses come from that are configured on network devices? Maybe an IP address management solution? Once the IP addresses are allocated for a given function and deployed, does DNS need to be updated too? Do DHCP scopes need to be created?
Can you see how the workflow can start small and gradually expand across different IT systems? As the workflow continues to expand, so would the role.
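As a minimal, hedged sketch (the role names are hypothetical, and roles themselves are not covered in this report), a playbook that consumes such roles could look like this, with the workflow growing simply by adding roles:
```
---
- name: PROVISION TOP OF RACK SWITCHES
  hosts: tor
  connection: local
  roles:
    # each role is a reusable grouping of tasks
    - vlans
    - interfaces
    - ntp
```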
### Compliance
As with many forms of automation, making configuration changes with any type of automation tool is seen as a risk. While making manual changes could arguably be riskier, as you've read and may have experienced firsthand, Ansible has capabilities to automate data collection, monitoring, and configuration building, which are all "read-only" and "low risk" actions. One _low risk_ use case that can use the data being gathered is configuration compliance checks and configuration validation. Does the deployed configuration meet security requirements? Are the required networks configured? Is protocol XYZ disabled? Since each module, or integration, with Ansible returns data, it is quite simple to _assert_ that something is _TRUE_ or _FALSE_. And again, based on _it_ being _TRUE_ or _FALSE_, it's up to you to determine what happens next—maybe it just gets logged, or maybe a complex operation is performed.
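As a small, hedged sketch (the approved version string is an assumption), facts returned by a module such as `snmp_device_version` can be checked against policy with Ansible's core `assert` module:
```
- name: GATHER DEVICE FACTS VIA SNMP
  snmp_device_version:
    host={{ inventory_hostname }}
    community=public
    version=2c
# Fails the host (and says why) if the running version is not the approved one
- name: VALIDATE OS VERSION COMPLIANCE
  assert:
    that:
      - ansible_device_version == "7.0(3)I2(1)"
```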
### Reporting
We now understand that Ansible can also be used to collect data and perform compliance checks. The data being returned and collected from the device by way of Ansible is up for grabs in terms of what you want to do with it. Maybe the data being returned becomes inputs to other tasks, or maybe you just want to create reports. Being that reports are generated from templates combined with the actual important data to be inserted into the template, the process to create and use reporting templates is the same process used to create configuration templates.
From a reporting perspective, these templates may be flat text files, markdown files that are viewed on GitHub, HTML files that get dynamically placed on a web server, and the list goes on. The user has the power to create the exact type of report she wishes, inserting the exact data she needs to be part of that report.
It is powerful to create reports not only for executive management, but also for the ops engineers, since there are usually different metrics both teams need.
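As a hedged sketch (the template path and report location are hypothetical), generating a per-device Markdown report uses the same `template` module shown earlier for building configurations; the _report.j2_ file would be a Jinja2 template just like _leaf.j2_, only rendering report text instead of device commands:
```
- name: BUILD DEVICE REPORT
  template:
    src=templates/report.j2
    dest=reports/{{ inventory_hostname }}.md
```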
### How Ansible Works
After looking at what Ansible can offer from a network automation perspective, we'll now take a look at how Ansible works. You will learn about the overall communication flow from an Ansible control host to the nodes that are being automated. First, we review how Ansible works _out of the box_, and we then take a look at how Ansible, and more specifically Ansible _modules_, work when network devices are being automated.
### Out of the Box
By now, you should understand that Ansible is an automation platform. In fact, it is a lightweight automation platform that is installed on a single server or on every administrator's laptop within an organization. You decide. Ansible is easily installed using utilities such as pip, apt, and yum on Linux-based machines.
###### Note
The machine that Ansible is installed on is referred to as the _control host_ through the remainder of this report.
The control host will perform all automation tasks that are defined in an Ansible playbook (don't worry; we'll cover playbooks and other Ansible terms soon enough). The important piece for now is to understand that a playbook is simply a set of automation tasks and instructions that gets executed on a given number of hosts.
When a playbook is created, you also need to define which hosts you want to automate. The mapping between the playbook and the hosts to automate happens by using what is known as an Ansible inventory file. This was already shown in an earlier example, but here is another sample inventory file showing two groups: `cisco` and `arista`:
```
[cisco]
nyc1.acme.com
nyc2.acme.com
[arista]
sfo1.acme.com
sfo2.acme.com
```
###### Note
You can also use IP addresses within the inventory file, instead of hostnames. For these examples, the hostnames were resolvable via DNS.
As you can see, the Ansible inventory file is a text file that lists hosts and groups of hosts. You then reference a specific host or a group from within the playbook, thus dictating which hosts get automated for a given play and playbook. This is shown in the following two examples.
The first example shows what it looks like if you wanted to automate all hosts within the `cisco` group, and the second example shows how to automate just the _nyc1.acme.com_ host:
```
---
- name: TEST PLAYBOOK
hosts: cisco
tasks:
- TASKS YOU WANT TO AUTOMATE
```
```
---
- name: TEST PLAYBOOK
hosts: nyc1.acme.com
tasks:
- TASKS YOU WANT TO AUTOMATE
```
Now that the basics of inventory files are understood, we can take a look at how Ansible (the control host) communicates with devices _out of the box_ and how tasks are automated on Linux endpoints. This is an important concept to understand, as this is usually different when network devices are being automated.
There are two main requirements for Ansible to work out of the box to automate Linux-based systems. These requirements are SSH and Python.
First, the endpoints must support SSH for transport, since Ansible uses SSH to connect to each target node. Because Ansible supports a pluggable connection architecture, there are also various plug-ins available for different types of SSH implementations.
The second requirement is how Ansible gets around the need to require an _agent_ to preexist on the target node. While Ansible does not require a software agent, it does require an onboard Python execution engine. This execution engine is used to execute Python code that is transmitted from the Ansible control host to the target node being automated.
If we elaborate on this out of the box workflow, it is broken down as follows:
1. When an Ansible play is executed, the control host connects to the Linux-based target node using SSH.
2. For each task, that is, Ansible module being executed within the play, Python code is transmitted over SSH and executed directly on the remote system.
3. Each Ansible module upon execution on the remote system returns JSON data to the control host. This data includes information such as if the configuration changed, if the task passed/failed, and other module-specific data.
4. The JSON data returned back to Ansible can then be used to generate reports using templates or as inputs to subsequent modules.
5. Repeat step 3 for each task that exists within the play.
6. Repeat step 1 for each play within the playbook.
Shouldn't this mean that network devices should work out of the box with Ansible because they also support SSH? It is true that network devices do support SSH, but it is the first requirement combined with the second one that limits the functionality possible for network devices.
To start, most network devices do not support Python, so it makes using the default Ansible connection mechanism a non-starter. That said, over the past few years, vendors have added Python support on several different device platforms. However, most of these platforms still lack the integration needed to allow Ansible to get direct access to a Linux shell over SSH with the proper permissions to copy over the required code, create temp directories and files, and execute the code on the box. While all the parts are there for Ansible to work natively with SSH/Python _and_ Linux-based network devices, it still requires network vendors to open their systems more than they already have.
###### Note
It is worth noting that Arista does offer native integration because it is able to drop SSH users directly into a Linux shell with access to a Python execution engine, which in turn does allow Ansible to use its default connection mechanism. Because we called out Arista, we need to also highlight Cumulus as working with Ansible's default connection mechanism, too. This is because Cumulus Linux is native Linux, and there isn't a need to use a vendor API for the automation of the Cumulus Linux OS.
### Ansible Network Integrations
The previous section covered the way Ansible works by default. We looked at how Ansible sets up a connection to a device at the beginning of a _play_, executes tasks by copying Python code to the devices, executes the code, and then returns results back to the Ansible control host.
In this section, we'll take a look at what this process is when automating network devices with Ansible. As already covered, Ansible has a pluggable connection architecture. For _most_ network integrations, the `connection` parameter is set to `local`. The most common place to make the connection type local is within the playbook, as shown in the following example:
```
---
- name: TEST PLAYBOOK
hosts: cisco
connection: local
tasks:
- TASKS YOU WANT TO AUTOMATE
```
Notice how within the play definition, this example added the `connection` parameter as compared to the examples in the previous section.
This tells Ansible not to connect to the target device via SSH and to just connect to the local machine running the playbook. Basically, this delegates the connection responsibility to the actual Ansible modules being used within the _tasks_ section of the playbook. Delegating power for each type of module allows the modules to connect to the device in whatever fashion necessary; this could be NETCONF for Juniper and HP Comware7, eAPI for Arista, NX-API for Cisco Nexus, or even SNMP for traditional/legacy-based systems that dont have a programmatic API.
###### Note
Network integrations in Ansible come in the form of Ansible modules. While we continue to whet your appetite using terminology such as playbooks, plays, tasks, and modules to convey key concepts, each of these terms is finally covered in greater detail in [Ansible Terminology and Getting Started][3] and [Hands-on Look at Using Ansible for Network Automation][4].
Let's take a look at another sample playbook:
```
---
- name: TEST PLAYBOOK
hosts: cisco
connection: local
tasks:
- nxos_vlan: vlan_id=10 name=WEB_VLAN
```
If you notice, this playbook now includes a task, and this task uses the `nxos_vlan` module. The `nxos_vlan` module is just a Python file, and it is in this file where the connection to the Cisco NX-OS device is made using NX-API. However, the connection could have been set up using any other device API, and this is how vendors and users like us are able to build our own integrations. Integrations (modules) are typically done on a per-feature basis, although as you've already seen with modules like `napalm_install_config`, they can be used to _push_ a full configuration file, too.
One of the major differences is that with the default connection mechanism, Ansible launches a persistent SSH connection to the device, and this connection persists for a given play. When the connection setup and teardown occurs within the module, as with many network modules that use `connection=local`, Ansible is logging in/out of the device on _every_ task versus this happening on the play level.
And in traditional Ansible fashion, each network module returns JSON data. The only difference is the massaging of this data is happening locally on the Ansible control host versus on the target node. The data returned back to the playbook varies per vendor and type of module, but as an example, many of the Cisco NX-OS modules return back existing state, proposed state, and end state, as well as the commands (if any) that are being sent to the device.
As you get started using Ansible for network automation, it is important to remember that setting the connection parameter to local is taking Ansible out of the connection setup/teardown process and leaving that up to the module. This is why modules supported for different types of vendor platforms will have different ways of communicating with the devices.
### Ansible Terminology and Getting Started
This chapter walks through many of the terms and key concepts that have been gradually introduced already in this report. These are terms such as _inventory file_, _playbook_, _play_, _tasks_, and _modules_. We also review a few other concepts that are helpful to be aware of when getting started with Ansible for network automation.
Please reference the following sample inventory file and playbook throughout this section, as they are continuously used in the examples that follow to convey what each Ansible term means.
_Sample inventory_:
```
# sample inventory file
# filename inventory
[all:vars]
user=admin
pwd=admin
[tor]
rack1-tor1 vendor=nxos
rack1-tor2 vendor=nxos
rack2-tor1 vendor=arista
rack2-tor2 vendor=arista
[core]
core1
core2
```
_Sample playbook_:
```
---
# sample playbook
# filename site.yml
- name: PLAY 1 - Top of Rack (TOR) Switches
hosts: tor
connection: local
tasks:
- name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
nxos_vlan:
vlan_id=10
name=WEB_VLAN
host={{ inventory_hostname }}
username=admin
password=admin
when: vendor == "nxos"
- name: ENSURE VLAN 10 EXISTS ON ARISTA TOR SWITCHES
eos_vlan:
vlanid=10
name=WEB_VLAN
host={{ inventory_hostname }}
username={{ user }}
password={{ pwd }}
when: vendor == "arista"
- name: PLAY 2 - Core (TOR) Switches
hosts: core
connection: local
tasks:
- name: ENSURE VLANS EXIST IN CORE
nxos_vlan:
vlan_id={{ item }}
host={{ inventory_hostname }}
username={{ user }}
password={{ pwd }}
with_items:
- 10
- 20
- 30
- 40
- 50
```
### Inventory File
Using an inventory file, such as the preceding one, enables us to automate tasks for specific hosts and groups of hosts by referencing the proper host/group using the `hosts` parameter that exists at the top section of each play.
It is also possible to store variables within an inventory file. This is shown in the example. If the variable is on the same line as a host, it is a host-specific variable. If the variables are defined within brackets such as `[all:vars]`, it means that the variables are in scope for the group `all`, which is a default group that includes _all_ hosts in the inventory file.
###### Note
Inventory files are the quickest way to get started with Ansible, but should you already have a source of truth for network devices such as a network management tool or CMDB, it is possible to create and use a dynamic inventory script rather than a static inventory file.
### Playbook
The playbook is the top-level object that is executed to automate network devices. In our example, this is the file _site.yml_, as depicted in the preceding example. A playbook uses YAML to define the set of tasks to automate, and each playbook is comprised of one or more plays. This is analogous to a football playbook. Like in football, teams have playbooks made up of plays, and Ansible playbooks are made up of plays, too.
###### Note
YAML is a data format that is supported by all programming languages. YAML is itself a superset of JSON, and it's quite easy to recognize YAML files, as they always start with three dashes (hyphens), `---`.
### Play
One or more plays can exist within an Ansible playbook. In the preceding example, there are two plays within the playbook. Each starts with a _header_ section where play-specific parameters are defined.
The two plays from that example have the following parameters defined:
`name`
The text `PLAY 1 - Top of Rack (TOR) Switches` is arbitrary and is displayed when the playbook runs to improve readability during playbook execution and reporting. This is an optional parameter.
`hosts`
As covered previously, this is the host or group of hosts that are automated in this particular play. This is a required parameter.
`connection`
As covered previously, this is the type of connection mechanism used for the play. This is an optional parameter, but is commonly set to `local` for network automation plays.
Each play is comprised of one or more tasks.
### Tasks
Tasks represent what is automated in a declarative manner without worrying about the underlying syntax or "how" the operation is performed.
In our example, the first play has two tasks. Each task ensures VLAN 10 exists. The first task does this for Cisco Nexus devices, and the second task does this for Arista devices:
```
tasks:
- name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
nxos_vlan:
vlan_id=10
name=WEB_VLAN
host={{ inventory_hostname }}
username=admin
password=admin
when: vendor == "nxos"
```
Tasks can also use the `name` parameter just like plays can. As with plays, the text is arbitrary and is displayed when the playbook runs to improve readability during playbook execution and reporting. It is an optional parameter for each task.
The next line in the example task starts with `nxos_vlan`. This tells us that this task will execute the Ansible module called `nxos_vlan`.
We'll now dig deeper into modules.
### Modules
It is critical to understand modules within Ansible. While any programming language can be used to write Ansible modules as long as they return JSON key-value pairs, they are almost always written in Python. In our example, we see two modules being executed: `nxos_vlan` and `eos_vlan`. The modules are both Python files; and in fact, while you can't tell from looking at the playbook, the real filenames are _eos_vlan.py_ and _nxos_vlan.py_, respectively.
Let's look at the first task in the first play from the preceding example:
```
- name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
nxos_vlan:
vlan_id=10
name=WEB_VLAN
host={{ inventory_hostname }}
username=admin
password=admin
when: vendor == "nxos"
```
This task executes `nxos_vlan`, which is a module that automates VLAN configuration. In order to use modules, including this one, you need to specify the desired state or configuration policy you want the device to have. This example states: VLAN 10 should be configured with the name `WEB_VLAN`, and it should exist on each switch being automated. We can see this easily with the `vlan_id` and `name` parameters. There are three other parameters being passed into the module as well. They are `host`, `username`, and `password`:
`host`
This is the hostname (or IP address) of the device being automated. Since the hosts we want to automate are already defined in the inventory file, we can use the built-in Ansible variable `inventory_hostname`. This variable is equal to what is in the inventory file. For example, on the first iteration, the host in the inventory file is `rack1-tor1`, and on the second iteration, it is `rack1-tor2`. These names are passed into the module and then within the module, a DNS lookup occurs on each name to resolve it to an IP address. Then the communication begins with the device.
`username`
Username used to log in to the switch.
`password`
Password used to log in to the switch.
The last piece to cover here is the use of the `when` statement. This is how Ansible performs conditional tasks within a play. As we know, there are multiple devices and types of devices that exist within the `tor` group for this play. Using `when` offers an option to be more selective based on any criteria. Here we are only automating Cisco devices because we are using the `nxos_vlan` module in this task, while in the next task, we are automating only the Arista devices because the `eos_vlan` module is used.
###### Note
This isn't the only way to differentiate between devices. This is being shown to illustrate the use of `when` and that variables can be defined within the inventory file.
Defining variables in an inventory file is great for getting started, but as you continue to use Ansible, you'll want to use YAML-based variables files to help with scale, versioning, and minimizing change to a given file. This will also simplify and improve readability for the inventory file and each variables file used. An example of a variables file was given earlier when the build/push method of device provisioning was covered.
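As a hedged sketch of that transition, the variables from the sample inventory could move into a hypothetical _group_vars/tor.yml_ file, which Ansible automatically applies to every host in the `tor` group:
```
---
# group_vars/tor.yml (hypothetical) - applied to all hosts in the [tor] group
user: admin
pwd: admin
vlans:
  - { id: 10, name: WEB_VLAN }
```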
Here are a few other points to understand about the tasks in the last example:
* Play 1 task 1 shows the `username` and `password` hardcoded as parameters being passed into the specific module (`nxos_vlan`).
* Play 1 task 2 and play 2 passed variables into the module instead of hardcoding them. This masks the `username` and `password` parameters, but it's worth noting that these variables are being pulled from the inventory file (for this example).
* Play 1 uses a _horizontal_ key=value syntax for the parameters being passed into the modules, while play 2 uses the vertical key=value syntax. Both work just fine. You can also use vertical YAML syntax with "key: value" syntax.
* The last task also introduces how to use a _loop_ within Ansible. This is done by using `with_items` and is analogous to a for loop. That particular task is looping through five VLANs to ensure they all exist on the switch. Note: it's also possible to store these VLANs in an external YAML variables file (a short sketch of this pattern follows this list). Also note that the alternative to using `with_items` would be to have one task per VLAN—and that just wouldn't scale!
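Here is a brief, hedged sketch of that loop with the VLAN list moved into an external variables file (the _vars/vlans.yml_ path and contents are hypothetical):
```
---
- name: ENSURE VLANS EXIST IN CORE
  hosts: core
  connection: local
  vars_files:
    # hypothetical file containing a single key, e.g. "vlans: [10, 20, 30, 40, 50]"
    - vars/vlans.yml
  tasks:
    - name: ENSURE VLANS EXIST
      nxos_vlan:
        vlan_id={{ item }}
        host={{ inventory_hostname }}
        username={{ user }}
        password={{ pwd }}
      with_items: "{{ vlans }}"
```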
### Hands-on Look at Using Ansible for Network Automation
In the previous chapter, a general overview of Ansible terminology was provided. This covered many of the specific Ansible terms, such as playbooks, plays, tasks, modules, and inventory files. This section will continue to provide working examples of using Ansible for network automation, but will provide more detail on working with modules to automate a few different types of devices. Examples will include automating devices from multiple vendors, including Cisco, Arista, Cumulus, and Juniper.
The examples in this section assume the following:
* Ansible is installed.
* The proper APIs are enabled on the devices (NX-API, eAPI, NETCONF).
* Users exist with the proper permissions on the system to make changes via the API.
* All Ansible modules exist on the system and are in the library path.
###### Note
Setting the module and library path can be done within the _ansible.cfg_ file. You can also use the `-M` flag from the command line to change it when executing a playbook.
The inventory used for the examples in this section is shown in the following section (with passwords removed and IP addresses changed). In this example, some hostnames are not FQDNs as they were in the previous examples.
### Inventory File
```
[cumulus]
cvx ansible_ssh_host=1.2.3.4 ansible_ssh_pass=PASSWORD
[arista]
veos1
[cisco]
nx1 hostip=5.6.7.8 un=USERNAME pwd=PASSWORD
[juniper]
vsrx hostip=9.10.11.12 un=USERNAME pwd=PASSWORD
```
###### Note
Just in case you're wondering at this point, Ansible does support functionality that allows you to store passwords in encrypted files. If you want to learn more about this feature, check out [Ansible Vault][5] in the docs on the Ansible website.
This inventory file has four groups defined with a single host in each group. Let's review each section in a little more detail:
Cumulus
The host `cvx` is a Cumulus Linux (CL) switch, and it is the only device in the `cumulus` group. Remember that CL is native Linux, so this means the default connection mechanism (SSH) is used to connect to and automate the CL switch. Because `cvx` is not defined in DNS or _/etc/hosts_, we'll let Ansible know not to use the hostname defined in the inventory file, but rather the name/IP defined for `ansible_ssh_host`. The username to log in to the CL switch is defined in the playbook, but you can see that the password is being defined in the inventory file using the `ansible_ssh_pass` variable.
Arista
The host called `veos1` is an Arista switch running EOS. It is the only host that exists within the `arista` group. As you can see for Arista, there are no other parameters defined within the inventory file. This is because Arista uses a special configuration file for their devices. This file is called _.eapi.conf_ and for our example, it is stored in the home directory. Here is the conf file being used for this example to function properly:
```
[connection:veos1]
host: 2.4.3.4
username: unadmin
password: pwadmin
```
This file contains all required information for Ansible (and the Arista Python library called _pyeapi_) to connect to the device using just the information as defined in the conf file.
Cisco
Just like with Cumulus and Arista, there is only one host (`nx1`) that exists within the `cisco` group. This is an NX-OS-based Cisco Nexus switch. Notice how there are three variables defined for `nx1`. They include `un` and `pwd`, which are accessed in the playbook and passed into the Cisco modules in order to connect to the device. In addition, there is a parameter called `hostip`. This is required because `nx1` is also not defined in DNS or configured in the _/etc/hosts_ file.
###### Note
We could have named this parameter anything. If automating a native Linux device, `ansible_ssh_host` is used just like we saw with the Cumulus example (if the name as defined in the inventory is not resolvable). In this example, we could have still used `ansible_ssh_host`, but it is not a requirement, since we'll be passing this variable as a parameter into Cisco modules, whereas `ansible_ssh_host` is automatically checked when using the default SSH connection mechanism.
Juniper
As with the previous three groups and hosts, there is a single host, `vsrx`, located within the `juniper` group. The setup within the inventory file is identical to that of Cisco's, as both are used in exactly the same way within the playbook.
### Playbook
The next playbook has four different plays. Each play is built to automate a specific group of devices based on vendor type. Note that this is only one way to perform these tasks within a single playbook. There are other approaches, such as using conditionals (the `when` statement) or creating Ansible roles (which are not covered in this report).
Here is the example playbook:
```
---

- name: PLAY 1 - CISCO NXOS
  hosts: cisco
  connection: local

  tasks:
    - name: ENSURE VLAN 100 exists on Cisco Nexus switches
      nxos_vlan:
        vlan_id=100
        name=web_vlan
        host={{ hostip }}
        username={{ un }}
        password={{ pwd }}

- name: PLAY 2 - ARISTA EOS
  hosts: arista
  connection: local

  tasks:
    - name: ENSURE VLAN 100 exists on Arista switches
      eos_vlan:
        vlanid=100
        name=web_vlan
        connection={{ inventory_hostname }}

- name: PLAY 3 - CUMULUS
  remote_user: cumulus
  sudo: true
  hosts: cumulus

  tasks:
    - name: ENSURE 100.10.10.1 is configured on swp1
      cl_interface: name=swp1 ipv4=100.10.10.1/24

    - name: restart networking without disruption
      shell: ifreload -a

- name: PLAY 4 - JUNIPER SRX changes
  hosts: juniper
  connection: local

  tasks:
    - name: INSTALL JUNOS CONFIG
      junos_install_config:
        host={{ hostip }}
        file=srx_demo.conf
        user={{ un }}
        passwd={{ pwd }}
        logfile=deploysite.log
        overwrite=yes
        diffs_file=junpr.diff
```
You will notice the first two plays are very similar to what we already covered in the original Cisco and Arista example. The only difference is that each group being automated (`cisco` and `arista`) is defined in its own play, in contrast to using the `when` conditional that was used earlier.
There is no right way or wrong way to do this. It all depends on what information is known up front and what fits your environment and use cases best, but our intent is to show a few ways to do the same thing.
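To make that concrete, here is a rough sketch (not the exact playbook used earlier in the report) of how the first two plays could be collapsed into a single play driven by `when` conditionals:

```
---

- name: ALTERNATIVE - ONE PLAY USING CONDITIONALS
  hosts: cisco:arista
  connection: local

  tasks:
    - name: ENSURE VLAN 100 exists on Cisco Nexus switches
      nxos_vlan: vlan_id=100 name=web_vlan host={{ hostip }} username={{ un }} password={{ pwd }}
      when: inventory_hostname in groups['cisco']

    - name: ENSURE VLAN 100 exists on Arista switches
      eos_vlan: vlanid=100 name=web_vlan connection={{ inventory_hostname }}
      when: inventory_hostname in groups['arista']
```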
The third play automates the configuration of interface `swp1` that exists on the Cumulus Linux switch. The first task within this play ensures that `swp1` is a Layer 3 interface and is configured with the IP address 100.10.10.1. Because Cumulus Linux is native Linux, the networking service needs to be restarted for the changes to take effect. This could have also been done using Ansible handlers (out of the scope of this report). There is also an Ansible core module called `service` that could have been used, but that would disrupt networking on the switch; using `ifreload` restarts networking non-disruptively.
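For illustration only (handlers are outside the scope of this report), the Cumulus play might look roughly like this with a handler, so that `ifreload -a` runs only when the interface task actually reports a change:

```
- name: PLAY 3 - CUMULUS (handler variant, illustrative)
  hosts: cumulus
  remote_user: cumulus
  sudo: true

  tasks:
    - name: ENSURE 100.10.10.1 is configured on swp1
      cl_interface: name=swp1 ipv4=100.10.10.1/24
      notify: restart networking without disruption

  handlers:
    - name: restart networking without disruption
      shell: ifreload -a
```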
Up until now in this section, we looked at Ansible modules focused on specific tasks such as configuring interfaces and VLANs. The fourth play uses another option. We'll look at a module that _pushes_ a full configuration file and immediately activates it as the new running configuration. This is what we showed previously using `napalm_install_config`, but this example uses a Juniper-specific module called `junos_install_config`.
The `junos_install_config` module accepts several parameters, as seen in the example. By now, you should understand what `user`, `passwd`, and `host` are used for. The other parameters are defined as follows:
`file`
This is the config file that is copied from the Ansible control host to the Juniper device.
`logfile`
This is optional, but if specified, it is used to store messages generated while executing the module.
`overwrite`
When set to yes/true, the complete configuration is replaced with the file being sent (default is false).
`diffs_file`
This is optional, but if specified, it will store the diffs generated when applying the configuration. An example of the diff generated when just changing the hostname, but still sending a complete config file, is shown next:
```
# filename: junpr.diff
[edit system]
- host-name vsrx;
+ host-name vsrx-demo;
```
That covers the detailed overview of the playbook. Let's take a look at what happens when the playbook is executed:
###### Note
The `-i` flag is used to specify the inventory file to use. The `ANSIBLE_HOSTS` environment variable can also be set rather than passing the flag each time a playbook is executed.
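For example (the inventory path shown is illustrative):

```
$ export ANSIBLE_HOSTS=./inventory
$ ansible-playbook demo.yml
```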
```
ntc@ntc:~/ansible/multivendor$ ansible-playbook -i inventory demo.yml
PLAY [PLAY 1 - CISCO NXOS] *************************************************
TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] *********************
changed: [nx1]
PLAY [PLAY 2 - ARISTA EOS] *************************************************
TASK: [ENSURE VLAN 100 exists on Arista switches] **************************
changed: [veos1]
PLAY [PLAY 3 - CUMULUS] ****************************************************
GATHERING FACTS ************************************************************
ok: [cvx]
TASK: [ENSURE 100.10.10.1 is configured on swp1] ***************************
changed: [cvx]
TASK: [restart networking without disruption] ******************************
changed: [cvx]
PLAY [PLAY 4 - JUNIPER SRX changes] ****************************************
TASK: [INSTALL JUNOS CONFIG] ***********************************************
changed: [vsrx]
PLAY RECAP ***************************************************************
to retry, use: --limit @/home/ansible/demo.retry
cvx : ok=3 changed=2 unreachable=0 failed=0
nx1 : ok=1 changed=1 unreachable=0 failed=0
veos1 : ok=1 changed=1 unreachable=0 failed=0
vsrx : ok=1 changed=1 unreachable=0 failed=0
```
You can see that each task completes successfully, and if you are at the terminal, you'll see that each changed task is displayed in an amber color.
Let's run this playbook again. Doing so verifies that all of the modules are _idempotent_: no changes are made to the devices and everything comes back green:
```
PLAY [PLAY 1 - CISCO NXOS] ***************************************************
TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] ***********************
ok: [nx1]
PLAY [PLAY 2 - ARISTA EOS] ***************************************************
TASK: [ENSURE VLAN 100 exists on Arista switches] ****************************
ok: [veos1]
PLAY [PLAY 3 - CUMULUS] ******************************************************
GATHERING FACTS **************************************************************
ok: [cvx]
TASK: [ENSURE 100.10.10.1 is configured on swp1] *****************************
ok: [cvx]
TASK: [restart networking without disruption] ********************************
skipping: [cvx]
PLAY [PLAY 4 - JUNIPER SRX changes] ******************************************
TASK: [INSTALL JUNOS CONFIG] *************************************************
ok: [vsrx]
PLAY RECAP ***************************************************************
cvx : ok=2 changed=0 unreachable=0 failed=0
nx1 : ok=1 changed=0 unreachable=0 failed=0
veos1 : ok=1 changed=0 unreachable=0 failed=0
vsrx : ok=1 changed=0 unreachable=0 failed=0
```
Notice how there were 0 changes, yet each task still returned "ok". This verifies, as expected, that each of the modules in this playbook is idempotent.
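On a related note, and assuming the modules being used support it, Ansible's check mode offers another way to preview whether a playbook would report changes without modifying any device:

```
$ ansible-playbook -i inventory demo.yml --check
```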
### Summary
Ansible is a super-simple automation platform that is agentless and extensible. The network community continues to rally around Ansible as a platform that can be used for network automation tasks that range from configuration management to data collection and reporting. You can push full configuration files with Ansible, configure specific network resources with idempotent modules such as interfaces or VLANs, or simply automate the collection of information such as neighbors, serial numbers, uptime, and interface statistics, customizing reports as you need them.
Because of its architecture, Ansible proves to be a great tool available here and now that helps bridge the gap from _legacy CLI/SNMP_ network device automation to modern _API-driven_ automation.
Ansible's ease of use and agentless architecture account for the platform's increasing following within the networking community. Again, this makes it possible to automate devices without APIs (CLI/SNMP); devices that have modern APIs, including standalone switches, routers, and Layer 4-7 service appliances; and even those software-defined networking (SDN) controllers that offer RESTful APIs.
There is no device left behind when using Ansible for network automation.
-----------
About the author:
![](https://d3tdunqjn7n0wj.cloudfront.net/360x360/jason-edelman-crop-5b2672f569f553a3de3a121d0179efcb.jpg)
Jason Edelman, CCIE 15394 & VCDX-NV 167, is a born and bred network engineer from the great state of New Jersey. He was the typical “lover of the CLI” or “router jockey.” At some point several years ago, he made the decision to focus more on software, development practices, and how they are converging with network engineering. Jason currently runs a boutique consulting firm, Network to Code, helping vendors and end users take advantage of new tools and technologies to reduce their operational inefficiencies. Jason has a Bachelor's...
--------------------------------------------------------------------------------
via: https://www.oreilly.com/learning/network-automation-with-ansible
Author: [Jason Edelman][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.oreilly.com/people/ee4fd-jason-edelman
[1]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
[2]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_network_integrations
[3]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
[4]:https://www.oreilly.com/learning/network-automation-with-ansible#handson_look_at_using_ansible_for_network_automation
[5]:http://docs.ansible.com/ansible/playbooks_vault.html
[6]:https://www.oreilly.com/people/ee4fd-jason-edelman
[7]:https://www.oreilly.com/people/ee4fd-jason-edelman