beyondworld translated

lixiang23 2017-01-09 20:35:06 +08:00
commit 60a2527796
122 changed files with 10426 additions and 5257 deletions


@ -0,0 +1,276 @@
忘记技术债务 —— 教你如何创造技术财富
===============
电视里正播放着《老屋》节目,[Andrea Goulet][58] 和她的商业合作伙伴正悠闲地坐在客厅里,商讨着他们的战略计划。那正是大家思想的火花碰撞出创新事物的时刻。他们正在寻求一种能够实现自身价值的方式 —— 为其它公司清理<ruby>遗留代码<rt>legacy code</rt></ruby>及技术债务。他们此刻的情景像极了电视里的场景。LCTT 译注:《老屋》电视节目提供专业的家装、家庭改建、重新装饰、创意等等信息,与软件的改造有异曲同工之处)。
“我们意识到我们现在做的工作不仅仅是清理遗留代码实际上我们是在用重建老屋的方式来重构软件让系统运行更持久、更稳定、更高效”Goulet 说。“这让我开始思考公司如何花钱来改善他们的代码,以便让他们的系统运行更高效。就好比为了让屋子变得更有价值,你不得不换上一个全新的屋顶。这并不吸引人,但却是至关重要的,然而很多人都搞错了。”
如今,她是 [Corgibytes][57] 公司的 CEO —— 这是一家从事软件现代化改造和系统重构的咨询公司。她曾经见过各种各样糟糕的系统、遗留代码以及严重的技术债务问题。Goulet 认为**创业公司需要转变思维模式,不是偿还债务,而是创造技术财富,不是要铲除旧代码,而是要逐步修复代码**。她解释了这种新的方法,以及如何完成这些看似不可能完成的事情 —— 实际上是聘用优秀的工程师来完成这些工作。
### 反思遗留代码
关于遗留代码最常见的定义是由 Michael Feathers 在他的著作[<ruby>《高效利用遗留代码》<rt>Working Effectively with Legacy Code</rt></ruby>][56]一书中提出:遗留代码就是没有被测试所覆盖的代码。这个定义比大多数人所认为的 —— 遗留代码仅指那些古老、陈旧的系统这个说法要妥当得多。但是 Goulet 认为这两种定义都不够明确。“遗留代码与软件的年头儿毫无关系。一个两年的应用程序,其代码可能已经进入遗留状态了,”她说。“**关键要看软件质量提高的难易程度。**”
这意味着,那些写得不够清楚、缺少解释说明的代码,并没有把构思代码和作出决策的过程保留下来。单元测试就是这样的一种成果,但它并不包含说明编写那部分代码的原因以及相关推理过程的全部文档。如果想要提升代码,但没办法搞清楚原开发者的意图 —— 那些代码就属于遗留代码了。
> **遗留代码不是技术问题,而是沟通上的问题。**
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/H4y9x4gQj61G9aK4v8Kp_Screen%20Shot%202016-08-11%20at%209.16.38%20AM.png)
如果你像 Goulet 所说的那样迷失在遗留代码里,你会发现每一次的沟通交流过程都会变得像那条[<ruby>康威定律<rt>Conways Law</rt></ruby>][54]所描述的一样。
Goulet 说:“这个定律认为你的代码能反映出整个公司的组织沟通结构,如果想修复公司的遗留代码,而没有一个好的组织沟通方式是不可能完成的。那是很多人都没注意到的一个重要环节。”
Goulet 和她的团队成员更像是考古学家一样来研究遗留系统项目。他们根据前开发者写的代码构件相关的线索来推断出他们的思想意图。然后再根据这些构件之间的关系来做出新的决策。
代码构件中最重要的是什么呢?**良好的代码结构、清晰的思想意图、整洁的代码**。例如,如果使用通用的名称如 “foo” 或 “bar” 来命名一个变量,半年后再返回来看这段代码时,根本就看不出这个变量的用途是什么。
如果代码读起来很困难,可以使用源代码控制系统,这是一个非常有用的工具,因为它可以提供代码的历史修改信息,并允许软件开发者写明他们作出本次修改的原因。
Goulet 说:“我一个朋友认为提交代码时附带的信息,每一个概要部分的内容应该有半条推文那么长(几十个字),如需要的话,代码的描述信息应该有一篇博客那么长。你得用这个方式来为你修改的代码写一个合理的说明。这不会浪费太多额外的时间,并且能给后期的项目开发者提供非常多的有用信息,但是让人惊讶的是很少有人会这么做。我们经常能看到一些开发人员在被一段代码激怒之后,要用 `git blame` 扒代码库找出这些垃圾是谁干的,结果最后发现是他们自己干的。”
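下面用一个简单的示意来说明这种提交信息的写法(仓库、改动内容和工单编号均为虚构,仅用于展示“概要 + 详细说明”的格式):

```
# 第一个 -m 是概要,长度大约半条推文;后面的 -m 是正文,解释“为什么这么改”
git commit -m "统一金额计算的舍入方式,修复对账偏差" \
           -m "此前前端和后端分别对金额取整,单价含三位小数时会产生 1 分钱的差额。
本次改动改为只在后端以“分”为单位存储和计算,前端仅负责展示。
背景讨论见内部工单 #123示例编号。"
```

这样,后来者运行 `git blame` 和 `git log` 时,看到的不只是“谁改的”,还有“为什么要这么改”。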
使用自动化测试对于理解程序的流程非常有用。Goulet 解释道:“很多人都比较认可 Michael Feathers 提出的关于遗留代码的定义。测试套件对于理解开发者的意图来说是非常有用的工具,尤其当用来与[<ruby>行为驱动开发模式<rt>Behavior Driven Development</rt></ruby>][53]相结合时,比如编写测试场景。”
理由很简单,如果你想将遗留代码限制在一定程度下,注意到这些细节将使代码更易于理解,便于在以后也能工作。编写并运行一个代码单元,接受、认可,并且集成测试。写清楚注释的内容,方便以后你自己或是别人来理解你写的代码。
尽管如此,由于很多已知的和不可意料的原因,遗留代码仍然会出现。
在创业公司刚成立初期公司经常会急于推出很多新的功能。开发人员在巨大的交付压力下测试常常半途而废。Corgibytes 团队就遇到过好多公司很多年都懒得对系统做详细的测试了。
确实如此当你急于开发出系统原型的时候强制性地去做太多的测试也许意义不大。但是一旦产品开发完成并投入使用后你就需要投入时间精力来维护及完善系统了。Goulet 说:“很多人说,‘别在维护上费心思,重要的是功能!’ **如果真这样,当系统规模到一定程序的时候,就很难再扩展了。同时也就失去市场竞争力了。**”
最后才明白过来,原来热力学第二定律对代码也同样适用:**你所面临的一切将向熵增的方向发展。**你需要与混乱无序的技术债务进行一场无休无止的战斗。随着时间的推移,遗留代码也逐渐变成一种债务。
她说:“我们再次拿家来做比喻。你必须坚持每天收拾餐具、打扫卫生、倒垃圾。如果你不这么做,情况将会越来越糟糕,直到有一天你不得不向 HazMat 团队求助。”LCTT 译注HazMat 团队,即危险物质处理团队)
就跟这种情况一样Corgibytes 团队接到很多公司 CEO 的求助电话,比如 Features 公司的 CEO 在电话里抱怨道“现在我们公司的开发团队工作效率太低了三年前只需要两个星期就完成的工作现在却要花费12个星期。”
> **技术债务往往反映出公司运作上的问题。**
很多公司的 CTO 明知会发生技术债务的问题,但是他们很难说服其它同事相信花钱来修复那些已经存在的问题是值得的。这看起来像是在走回头路,很乏味,也不是新的产品。有些公司直到系统已经严重影响了日常工作效率时,才着手去处理这些技术债务方面的问题,那时付出的代价就太高了。
### 忘记债务,创造技术财富
如果你想把[<ruby>重构技术债务<rt>reframe your technical debt</rt></ruby>][52] — [敏捷开发讲师 Declan Whelan 最近造出的一个术语][51] — 作为一个积累技术财富的机会,你很可能要先说服你们公司的 CEO、投资者和其它的股东接受并为之共同努力。
“我们没必要把技术债务想像得很可怕。当产品处于开发设计初期技术债务反而变得非常有用”Goulet 说。“当你解决一些系统遗留的技术问题时,你会充满成就感。例如,当你在自己家里安装新窗户时,你确实会花费一笔不少的钱,但是之后你每个月就可以节省 100 美元的电费。程序代码亦是如此。虽然暂时没有提高工作效率,但随着时间推移将提高生产力。”
一旦你意识到项目团队工作不再富有成效时,就需要确认下是哪些技术债务在拖后腿了。
“我跟很多不惜一切代价招募英才的初创公司交流过,他们高薪聘请一些工程师来只为了完成更多的工作。”她说。“与此相反,他们应该找出如何让原有的每个工程师能更高效率工作的方法。你需要去解决什么样的技术债务以增加额外的生产率?”
如果你改变自己的观点并且专注于创造技术财富,你将会看到产能过剩的现象,然后重新把多余的产能投入到修复更多的技术债务和遗留代码的良性循环中。你们的产品将会走得更远,发展得更好。
> **别把你们公司的软件当作一个项目来看。从现在起,把它想象成一栋自己要长久居住的房子。**
“这是一个极其重要的思想观念的转变”Goulet 说。“这将带你走出短浅的思维模式,并让你比之前更加关注产品的维护工作。”
这就像对一栋房子,要实现其现代化及维护的方式有两种:小动作,表面上的更改(“我买了一块新的小地毯!”)和大改造,需要很多年才能偿还所有债务(“我想我们应替换掉所有的管道...”)。你必须考虑好两者,才能让你们已有的产品和整个团队顺利地运作起来。
这还需要提前预算好 —— 否则那些较大的花销将会是硬伤。定期维护是最基本的预期费用。让人震惊的是,很多公司都没把维护当成商务成本预算进来。
这就是 Goulet 提出“**<ruby>软件重构<rt>software remodeling</rt></ruby>**”这个术语的原因。当你房子里的一些东西损坏的时候,你并不是铲除整个房子,从头开始重建。同样的,当你们公司出现老的、损坏的代码时,重写代码通常不是最明智的选择。
下面是 Corgibytes 公司在重构客户代码用到的一些方法:
* 把大型的应用系统分解成轻量级的更易于维护的微服务。
* 让功能模块彼此解耦以便于扩展。
* 更新形象和提升用户前端界面体验。
* 集合自动化测试来检查代码可用性。
* 代码库可以让重构或者修改更易于操作。
系统重构也进入到 DevOps 领域。比如Corgibytes 公司经常推荐新客户使用 [Docker][50],以便简单快速的部署新的开发环境。当你们团队有 30 个工程师的时候,把初始化配置时间从 10 小时减少到 10 分钟对完成更多的工作很有帮助。系统重构不仅仅是应用于软件开发本身,也包括如何进行系统重构。
如果你知道做些什么能让你们的代码管理起来更容易更高效就应该把它们写入到每年或季度的项目规划中。别指望它们会自动呈现出来。但是也别给自己太大的压力来马上实施它们。Goulet 看到很多公司从一开始就致力于 100% 测试覆盖率而陷入困境。
**具体来说,每个公司都应该把以下三种类型的重构工作规划到项目建设中来:**
* 自动测试
* 持续交付
* 文化提升
咱们来深入的了解下每一项内容。
#### 自动测试
“有一位客户即将进行第二轮融资,但是他们没办法在短期内招聘到足够的人才。我们帮助他们引进了一种自动化测试框架,这让他们的团队在 3 个月的时间内工作效率翻了一倍”Goulet 说。“这样他们就可以在他们的投资人面前自豪地说,‘我们一个精英团队完成的任务比两个普通的团队要多。’”
自动化测试从根本上来讲就是单个测试的组合,就是可以再次检查某一行代码的单元测试。可以使用集成测试来确保系统的不同部分都正常运行。还可以使用验收性测试来检验系统的功能特性是否跟你想像的一样。当你把这些测试写成测试脚本后,你只需要简单地用鼠标点一下按钮就可以让系统自行检验了,而不用手工的去梳理并检查每一项功能。
在产品市场尚未打开之前就来制定自动化测试机制有些言之过早。但是一旦你有一款感到满意,并且客户也很依赖的产品,就应该把这件事付诸实施了。
#### 持续交付
这是与自动化交付相关的工作,过去是需要人工完成。目的是当系统部分修改完成时可以迅速进行部署,并且短期内得到反馈。这使公司在其它竞争对手面前有很大的优势,尤其是在客户服务行业。
“比如说你每次部署系统时环境都很复杂。熵值无法有效控制”Goulet 说。“我们曾经见过花 12 个小时甚至更多的时间来部署一个很大的集群环境。在这种情况下,你不会愿意频繁部署了。因为太折腾人了,你还会推迟系统功能上线的时间。这样,你将落后于其它公司并失去竞争力。”
**在持续性改进的过程中常见的其它自动化任务包括:**
* 在提交完成之后检查构建中断部分。
* 在出现故障时进行回滚操作。
* 自动化审查代码的质量。
* 根据需求增加或减少服务器硬件资源。
* 让开发、测试及生产环境配置简单易懂。
举一个简单的例子,比如说一个客户提交了一个系统 Bug 报告。开发团队越高效解决并修复那个 Bug 越好。对于开发人员来说,修复 Bug 的挑战根本不是个事儿,这本来也是他们的强项,主要是系统设置上不够完善导致他们浪费太多的时间去处理 bug 以外的其它问题。
使用持续改进的方式时,你要认真地决定哪些工作应该让机器去做,哪些交给研发去完成更好。如果机器更擅长,那就使其自动化完成。这样也能让研发愉快地去解决其它有挑战性的问题。同时,客户也会很高兴地看到他们抱怨的问题被快速处理了。你待修复的未完成任务数减少了,之后你就可以把更多的时间投入到运用新的方法来提高产品的质量上了。**这是创造技术财富的一种转变。**因为开发人员可以修复 bug 后立即发布新代码,这样他们就有时间和精力做更多事。
“你必须时刻问自己我应该如何为我们的客户改善产品功能如何做得更好如何让产品运行更高效不过还要不止于此。”Goulet 说。“一旦你回答完这些问题后,你就得问问自己,如何自动地完成那些需要改善的功能。”
#### 文化提升
Corgibytes 公司每天都会看到同样的问题:一家创业公司营造出的企业文化无法激励开发团队有所作为,而 CEO 却抱着双臂纳闷,为什么员工的表现没什么起色。然而事实是,公司的企业文化本身就对这类工作不利。为了激励工程师,你必须全面地了解他们的工作环境。
为了证明这一点Goulet 引用了作者 Robert Henry 说过的一段话:
> **目的不是创造艺术,而是在最美妙的状态下让艺术应运而生。**
“你们也要开始这样思考一下你们的软件,”她说。“你们的企业文化就类似那个状态。你们的目标就是创造一个让艺术品应运而生的环境,这件艺术品就是你们公司的代码、一流的售后服务、充满幸福感的开发者、良好的市场预期、盈利能力等等。这些都息息相关。”
优先考虑解决公司的技术债务和遗留代码也是一种文化。那是真正为开发团队清除障碍,以制造影响的方法。同时,这也会让你将来有更多的时间精力去完成更重要的工作。如果你不从根本上改变固有的企业文化环境,你就不可能重构公司产品。改变对产品维护及现代化的投资的态度是开始实施变革的第一步,最理想情况是从公司的 CEO 开始自顶向下转变。
以下是 Goulet 关于建立那种流态文化方面提出的建议:
* 反对公司嘉奖那些加班到深夜的“英雄”。提倡高效率的工作方式。
* 了解协同开发技术,比如 Woody Zuill 提出的[<ruby>合作编程<rt>Mob Programming</rt></ruby>][44]模式。
* 遵从 4 个[现代敏捷开发][42]原则:用户至上、实践及快速学习、把安全放在首位、持续交付价值。
* 每周为研发人员提供项目外的职业发展时间。
* 把[日工作记录][43]作为一种驱动开发团队主动解决问题的方式。
* 把同情心放在第一位。Corgibytes 公司让员工参加 [Brene Brown 勇气工厂][40]的培训是非常有用的。
“如果公司高管和投资者不支持这种升级方式你得从客户服务的角度去说服他们”Goulet 说,“告诉他们通过这次调整后,最终产品将如何给公司的大多数客户提高更好的体验。这是你能做的一个很有力的论点。”
### 寻找最具天才的代码重构者
整个行业都认为顶尖的工程师不愿意干修复遗留代码的工作。他们只想着去开发新的东西。大家都说把他们留在维护部门真是太浪费人才了。
**其实这些都是误解。如果你知道去哪里和如何找工程师,并为他们提供一个愉快的工作环境,你就可以找到技术非常精湛的工程师,来帮你解决那些最棘手的技术债务问题。**
“每次在会议上,我们都会问现场的同事‘谁喜欢在遗留代码上工作?’每次只有不到 10% 的与会者会举手。”Goulet 说。“但是我跟这些人交流后,我发现这些工程师恰好是喜欢最具挑战性工作的人才。”
有一位客户来寻求她的帮助,他们使用国产的数据库,没有任何相关文档,也没有一种有效的方法来弄清楚他们公司的产品架构。她称修理这种情况的一类工程师为“修正者”。在 Corgibytes 公司,她有一支这样的修正者团队由她支配,热衷于通过研究二进制代码来解决技术问题。
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/BeX5wWrESmCTaJYsuKhW_Screen%20Shot%202016-08-11%20at%209.17.04%20AM.png)
那么,如何才能找到这些技术人才呢? Goulet 尝试过各种各样的方法,其中有一些方法还是富有成效的。
她创办了一个社区网站 [legacycode.rocks][49] 并且在招聘启示上写道:“长期招聘那些喜欢重构遗留代码的另类开发人员...如果你以从事处理遗留代码的工作为自豪,欢迎加入!”
“我开始收到很多人发来邮件说,‘噢,天呐,我也属于这样的开发人员!’”她说。“只需要发布这条信息,并且告诉他们这份工作是非常有意义的,就吸引了合适的人才。”
在招聘的过程中,她也会使用持续性交付的经验来回答那些另类开发者想知道的信息:包括详细的工作内容以及明确的要求。“我这么做的原因是因为我讨厌重复性工作。如果我收到多封邮件来咨询同一个问题,我会把答案发布在网上,我感觉自己更像是在写说明文档一样。”
但是随着时间的推移,她发现可以重新定义招聘流程来帮助她识别出更出色的候选人。比如说,她在应聘要求中写道,“公司 CEO 将会审查你的简历,因此请确保求职信开头的称呼不要预设性别。所有以‘尊敬的先生’或‘先生’开头的信件将会被当垃圾处理掉”。这些只是她的招聘初期策略。
“我开始这么做是因为很多申请人把我当成男性,好像我既然是一家软件公司的 CEO就必然是男性”Goulet 说。“所以,有一天我想,我应该把它当作应聘要求放到网上,看有多少人注意到这个问题。令我惊讶的是,这让我过滤掉了一些不太严谨的申请人,还突显出了很多擅于从事遗留代码方面工作的人。”
Goulet 想起一个应聘者发邮件给她说,“我查看了你们网站的代码(我喜欢这个网站,这也是我的本行)。你们的网站架构很奇特,好像是用 PHP 写的,但是你们却运行在用 Ruby 语言写的 Jekyll 下。我真的很好奇那是怎么回事。”
Goulet 从她的设计师那里得知,原来,在 HTML、CSS 和 JavaScript 文件中有一个未使用的 PHP 类名她一直想解决这个问题但是一直没机会。Goulet 的回复是:“你正在找工作吗?”
另外一名候选人注意到她曾经在一篇说明文档中使用 CTO 这个词,但是她的团队里并没有这个头衔(她的合作伙伴是 Chief Code Whisperer。这些注重细节、充满求知欲、积极主动的候选者更能引起她的注意。
> **代码修正者不仅需要注重细节,而且这也是他们必备的品质。**
让人吃惊的是Goulet 从来没有为招募最优秀的代码修正者而发愁过。“大多数人都是通过我们的网站直接投递简历,但是当我们想扩大招聘范围的时候,我们会通过 [PowerToFly][48] 和 [WeWorkRemotely][47] 网站进行招聘。我现在确实不需要招募新人马了。他们需要经历一段很艰难的时期才能理解代码修正者的意义是什么。”
如果他们通过首轮面试Goulet 将会让候选者阅读一篇 Arlo Belshee 写的文章“[<ruby>命名是一个过程<rt>Naming is a Process</rt></ruby>][46]”。它非常详细地讲述了处理遗留代码的过程。她最经典的指导方法是:“阅读完这段代码,并且告诉我,你是怎么理解的。”
她将找出对问题的理解很深刻并且也愿意接受文章里提出的观点的候选者。这对于区分有深刻理解的候选者和仅仅想获得工作的候选者来说,是极其有用的办法。她强烈要求候选者找出一段与他操作相关的代码,来证明他是充满激情的、有主见的及善于分析问题的人。
最后,她会让候选者跟公司里当前的团队成员一起使用 [Exercism.io][45] 工具进行编程。这是一个开源项目,它允许开发者通过一系列测试驱动开发的练习,学习如何在不同的编程语言环境下编程。结对编程课程的第一部分允许候选者选择其中一种语言来使用。下一个练习中,由面试官选择一种语言进行编程。他们总能看到对方处理异常的方法、随机应变的能力,以及是否愿意承认某些自己不了解的技术。
“当一个人真正地从从业者转变为大师的时候他会毫不犹豫地承认自己不知道的东西”Goulet 说。
让他们使用自己不熟悉的编程语言来写代码,也能衡量其坚韧不拔的毅力。“我们想听到某个人说,‘我会深入研究这个问题直到彻底解决它。’也许第二天他们仍然会跑过来跟我们说,‘我会一直留着这个问题直到我找到答案为止。’那是作为一个成功的修正者表现出来的一种气质。”
> **产品开发人员在我们这个行业很受追捧,因此很多公司也想让他们来做维护工作。这是一个误解。最优秀的维护修正者并不是最好的产品开发工程师。**
如果一个有天赋的修正者在眼前Goulet 懂得如何让他走向成功。下面是如何让这种类型的开发者感到幸福及高效工作的一些方式:
* 给他们高度的自主权。把问题解释清楚,然后安排他们去完成,但是永不命令他们应该如何去解决问题。
* 如果他们要求升级他们的电脑配置和相关工具,尽管去满足他们。他们明白什么样的需求才能最大限度地提高工作效率。
* 帮助他们[避免分心][39]。他们喜欢全身心投入到某一个任务直至完成。
总之,这些方法已经帮助 Corgibytes 公司培养出二十几位对遗留代码充满激情的专业开发者。
### 稳定期没什么不好
大多数创业公司都不想走出它们的成长期。一些公司甚至认为成长期应该是永无止境的。而且,他们觉得也没这个必要走出成长期,即便他们已经进入到了下一个阶段:稳定期。**完全进入到稳定期意味着你拥有人力资源及管理方法来创造技术财富,同时根据优先权适当支出。**
“在成长期和稳定期之间有个转折点就是维护人员必须要足够壮大并且相对于专注新功能的产品开发人员你开始更公平的对待维护人员”Goulet 说。“你们公司的产品开发完成了。现在你得让他们更加稳定地运行。”
这就意味着要把公司更多的预算分配到产品维护及现代化方面。“你不应该把产品维护当作是一个不值得关注的项目,”她说。“这必须成为你们公司固有的一种企业文化 —— 这将帮助你们公司将来取得更大的成功。”
最终,你通过这些努力创建的技术财富,将会为你的团队带来一大批全新的开发者:他们就像侦查兵一样,有充足的时间和资源去探索新的领域,挖掘新客户资源并且给公司创造更多的机遇。当你们在新的市场领域做得更广泛并且不断取得进展 —— 那么你们公司已经真正地进入到繁荣发展的状态了。
--------------------------------------------------------------------------------
via: http://firstround.com/review/forget-technical-debt-heres-how-to-build-technical-wealth/
作者:[http://firstround.com/][a]
译者:[rusking](https://github.com/rusking)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://firstround.com/
[39]:http://corgibytes.com/blog/2016/04/15/inception-layers/
[40]:http://www.courageworks.com/
[41]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[42]:https://www.industriallogic.com/blog/modern-agile/
[43]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[44]:http://mobprogramming.org/
[45]:http://exercism.io/
[46]:http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/
[47]:https://weworkremotely.com/
[48]:https://www.powertofly.com/
[49]:http://legacycode.rocks/
[50]:https://www.docker.com/
[51]:http://legacycoderocks.libsyn.com/technical-wealth-with-declan-wheelan
[52]:https://www.agilealliance.org/resources/initiatives/technical-debt/
[53]:https://en.wikipedia.org/wiki/Behavior-driven_development
[54]:https://en.wikipedia.org/wiki/Conway%27s_law
[56]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[57]:http://corgibytes.com/
[58]:https://www.linkedin.com/in/andreamgoulet


@ -0,0 +1,50 @@
使用 Inkscape添加颜色
=========
![inkscape-addingcolour](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-addingcolour-945x400.png)
在我们先前的 Inkscape 文章中,[我们介绍了 Inkscape 的基础][2] - 安装,以及如何创建基本形状及操作它们。我们还介绍了使用 Palette 更改 Inkscape 对象的颜色。虽然 Palette 对于从预定义列表快速更改对象颜色非常有用,但大多数情况下,你需要更好地控制对象的颜色。这时我们就要使用 Inkscape 中最重要的对话框之一 - <ruby>填充和轮廓<rt>Fill and Stroke</rt></ruby> 对话框。
**关于文章中的动画的说明:**动画中的一些颜色看起来有条纹。这只是动画创建导致的。当你在 Inkscape 尝试时,你会看到很好的平滑渐变的颜色。
### 使用 Fill/Stroke 对话框
要在 Inkscape 中打开 “Fill and Stroke” 对话框,请从主菜单中选择 `Object`>`Fill and Stroke`。打开后,此对话框中的三个选项卡允许你检查和更改当前选定对象的填充颜色、描边颜色和描边样式。
![open-fillstroke](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/open-fillstroke.gif)
在 Inkscape 中Fill填充用来给予对象主体颜色Stroke轮廓则是对象可选的外框可在<ruby>轮廓样式<rt>Stroke style</rt></ruby>选项卡中进行配置,它允许你更改轮廓的粗细,创建虚线轮廓,或为轮廓添加圆角。在下面的动画中,我会改变星形的填充颜色,然后改变轮廓颜色,并调整轮廓的粗细:
![using-fillstroke](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/using-fillstroke.gif)
### 添加并编辑渐变效果
对象的填充(或者轮廓)也可以是渐变的。要从 “Fill and Stroke” 对话框快速设置渐变填充,请先选择 “Fill” 选项卡,然后选择<ruby>线性渐变<rt>linear gradient </rt></ruby> 选项:
![create-gradient](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/create-gradient.gif)
要进一步编辑我们的渐变,我们需要使用专门的<ruby>渐变工具<rt>Gradient Tool</rt></ruby>。 从工具栏中选择“Gradient Tool”会有一些渐变编辑锚点出现在你选择的形状上。 **移动锚点**将改变渐变的位置。 如果你**单击一个锚点**您还可以在“Fill and Stroke”对话框中更改该锚点的颜色。 要**在渐变中添加新的锚点**,请双击连接锚点的线,然后会出现一个新的锚点。
![editing-gradient](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/editing-gradient.gif)
* * *
这篇文章介绍了在 Inkscape 图纸中添加一些颜色和渐变的基础知识。 **“Fill and Stroke”** 对话框还有许多其他选项可供探索,如图案填充、不同的渐变样式和许多不同的轮廓样式。另外,查看**<ruby>工具控制栏<rt>Tools control bar</rt></ruby>** 的 **Gradient Tool** 中的其他选项,看看如何以不同的方式调整渐变。
-----------------------
作者简介Ryan 是一名 Fedora 设计师。他使用 Fedora Workstation 作为他的主要桌面,还有来自 Libre Graphics 世界的最好的工具,尤其是矢量图形编辑器 Inkscape。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/inkscape-adding-colour/
作者:[Ryan Lerch][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://ryanlerch.id.fedoraproject.org/
[1]:https://fedoramagazine.org/inkscape-adding-colour/
[2]:https://linux.cn/article-8079-1.html


@ -1,4 +1,5 @@
### 使用 Fedora 和 Inkscape 制作一张简单的壁纸
使用 Fedora 和 Inkscape 制作一张简单的壁纸
================
![inkscape-wallpaper](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-wallpaper-945x400.png)
@ -14,7 +15,7 @@
![Screenshot from 2016-09-07 08-37-01](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-08-37-01.png)
][16]
对于这张壁纸而言,我们会将尺寸改为**1024px x 768px**。要改变文档的尺寸,进入`File` > `Document Properties`。在<ruby>文档属性<rt>Document Properties</rt></ruby>对话框中<ruby>自定义文档大小<rt>Custom Size</rt></ruby>区域中输入宽度为 1024px高度为 768px
对于这张壁纸而言,我们会将尺寸改为**1024px x 768px**。要改变文档的尺寸,进入`File` > `Document Properties...`。在<ruby>文档属性<rt>Document Properties</rt></ruby>对话框中<ruby>自定义文档大小<rt>Custom Size</rt></ruby>区域中输入宽度为 `1024`,高度为 `768` ,单位是 `px`
[
![Screenshot from 2016-09-07 09-00-00](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-00-00.png)
@ -34,13 +35,13 @@
![rect](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/rect.png)
][13]
接着在矩形中添加一个<ruby>渐变填充<rt>Gradient Fill</rt></ruby>[如果你需要复习添加渐变,请阅读先前添加色彩的文章][12]
接着在矩形中添加一个<ruby>渐变填充<rt>Gradient Fill</rt></ruby>。如果你需要复习添加渐变,请阅读先前添加色彩的[那篇文章][12]
[
![Screenshot from 2016-09-07 09-41-13](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-41-13.png)
][11]
你的矩形可能也设置了轮廓颜色。 使用<ruby>填充和轮廓<rt> Fill and Stroke</rt></ruby>对话框将轮廓设置为 **none**
你的矩形也可以设置轮廓颜色。 使用<ruby>填充和轮廓<rt> Fill and Stroke</rt></ruby>对话框将轮廓设置为 **none**
[
![Screenshot from 2016-09-07 09-44-15](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-44-15.png)
@ -48,19 +49,19 @@
### 绘制图样
接下来我们画一个三角形,使用 3个 顶点的星型/多边形工具。你可以**按住 CTRL** 键给三角形一个角度并使之对称。
接下来我们画一个三角形,使用 3 个顶点的星型/多边形工具。你可以按住 `CTRL` 键给三角形一个角度并使之对称。
[
![Screenshot from 2016-09-07 09-52-38](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-52-38.png)
][9]
选中三角形并按下 **CTRL+D** 来复制它(复制的图形会覆盖在原来图形的上面),**因此在复制后确保将它移动到别处。**
选中三角形并按下 `CTRL+D` 来复制它(复制的图形会覆盖在原来图形的上面),**因此在复制后确保将它移动到别处。**
[
![Screenshot from 2016-09-07 10-44-01](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-10-44-01.png)
][8]
如图选中一个三角形,进入**OBJECT > FLIP-HORIZONTAL水平翻转**
如图选中一个三角形,进入`Object` > `FLIP-HORIZONTAL`(水平翻转)
[
![Screenshot from 2016-09-07 09-57-23](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-57-23.png)
@ -82,7 +83,7 @@
### 导出背景
最后,我们需要将我们的文档导出为 PNG 文件。点击 **FILE > EXPORT PNG**,打开导出对话框,选择文件位置和名字,确保选中的是 Drawing 标签,并点击 **EXPORT**
最后,我们需要将我们的文档导出为 PNG 文件。点击 `File` > `EXPORT PNG`,打开导出对话框,选择文件位置和名字,确保选中的是 `Drawing` 标签,并点击 `EXPORT`
[
![Screenshot from 2016-09-07 11-07-05](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-11-07-05-1.png)
@ -100,9 +101,7 @@
via: https://fedoramagazine.org/inkscape-design-imagination/
作者:[a2batic][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -119,11 +118,11 @@ via: https://fedoramagazine.org/inkscape-design-imagination/
[9]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-52-38.png
[10]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-44-15.png
[11]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-41-13.png
[12]:https://fedoramagazine.org/inkscape-adding-colour/
[12]:https://linux.cn/article-8084-1.html
[13]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/rect.png
[14]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-01-03.png
[15]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-00-00.png
[16]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-08-37-01.png
[17]:https://fedoramagazine.org/inkscape-adding-colour/
[18]:https://fedoramagazine.org/getting-started-inkscape-fedora/
[17]:https://linux.cn/article-8084-1.html
[18]:https://linux.cn/article-8079-1.html
[19]:https://fedoramagazine.org/inkscape-design-imagination/


@ -0,0 +1,82 @@
“硅谷的女儿”的成才之路
=======================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
在 2014 年,为了对网上一些关于在科技行业女性稀缺的评论作出回应,我的同事 [Crystal Beasley][1] 倡议在科技/信息安全方面工作的女性在网络上分享自己的“成才之路”。这篇文章就是我的故事。我把我的故事与你们分享是因为我相信榜样的力量,也相信一个人有多种途径,选择一个让自己满意的有挑战性的工作以及可以实现目标的人生。
### 和电脑相伴的童年
我可以说是硅谷的女儿。我的故事不是一个从科技业余爱好转向专业的故事,也不是从小就专注于这份事业的故事。这个故事更多的是关于环境如何塑造你 — 通过它的那种已然存在的文化来改变你,如果你想要被改变的话。这不是从小就开始努力并为一个明确的目标而奋斗的故事,我意识到,这其实是享受了一些特权的成长故事。
我出生在曼哈顿,但是我在新泽西州长大,因为我的爸爸退伍后在那里的罗格斯大学攻读计算机科学的博士学位。当我四岁时,学校里有人问我,爸爸是干什么谋生的,我说:“他就是看电视和捕捉小虫子,但是我从没有见过那些小虫子。”LCTT 译注小虫子bug。他在家里有一台哑终端LCTT 译注:就是那台“电视”),这大概与他在 Bolt Beranek Newman 公司的工作有关,他做的是早期互联网和人工智能方面的工作。我就在旁边看着。
我没能玩上父亲的会抓小虫子的电视,但是我很早就接触到了技术领域,我很珍惜这个礼物。提早的熏陶对于一个未来的高手是十分必要的 — 所以,请花时间和你的小孩谈谈你在做的事情!
![](https://opensource.com/sites/default/files/resize/moss-520x433.png)
*我父亲的终端和这个很类似 —— 如果不是这个的话 CC BY-SA 4.0*
当我六岁时我们搬到了加州。父亲在施乐的帕克研究中心Xerox PARC找到了一个工作。我记得那时我认为这个城市一定有很多熊因为在它的旗帜上有一个熊。在1979年帕洛阿图市还是一个大学城还有果园和开阔地带。
在 Palo Alto 的公立学校待了一年之后,我的姐姐和我被送到了“半岛学校”,这个“民主典范”学校对我造成了深刻的影响。在那里,好奇心和创新意识是被高度推崇的,教育也是由学生自己分组讨论决定的。在学校,我们很少能看到叫做电脑的东西,但是在家就不同了。
在父亲从施乐辞职之后,他就去了苹果公司,在那里,他工作中使用并带回家让我玩的第一批电脑就是 Apple II 和 LISA。我的父亲是最初的 LISA 研发团队的成员。我直到现在还深刻地记得他让我们一次又一次地“玩”鼠标训练的场景,因为他想让我 3 岁大的妹妹也觉得这个东西好用 —— 她也确实那样。
![](https://opensource.com/sites/default/files/resize/600px-apple_lisa-520x520.jpg)
*我们的 LISA 看起来就像这样。谁看到鼠标哪儿去了CC BY-SA 4.0*
在学校,我的数学的概念学得不错,但是基本计算却惨不忍睹。我的第一个学校的老师告诉我的家长和我,说我的数学很差,还说我很“笨”。虽然我在“常规的”数学项目中表现出色,能理解一个超出 7 岁孩子理解能力的逻辑谜题,但是我不能完成我们每天早上都要做的“练习”。她说我傻,这事我不会忘记。在那之后的十年我都没能相信自己的逻辑能力和算法的水平。**不要低估你对孩子说的话的影响**。
在我玩了几年爸爸的电脑之后,他从 Apple 公司跳槽到了 EA又跳到了 SGI我又体验了他带回来的新玩意。这让我们觉得我们家的房子是镇里最酷的因为我们在车库里有一台能玩 Doom 的 SGI 机器。我没怎么编程,但是现在看来,那些年里我学到了对尝试新科技毫不恐惧。同时,我学文学和教育出身的母亲成为了一名科技行业的作家,她向我证实了一个人的职业可以改变,而且一个做母亲的人可以同时胜任一份科技职位。我不是说这对她来说很简单,但是她让这件事看起来很简单。你可能会想这些早期的熏陶能把我带进科技行业,但是它没有。
### 本科时光
我想我要成为一个小学教师,我就读米尔斯学院就是想要做这个。但是后来我开始研究女性学,后来又研究神学,我这样做仅仅是由于我自己的一个渴求:我希望能理解人类的意志以及为更好的世界而努力。
同时,我也感受到了互联网的巨大力量。在 1991 年,拥有你自己的 UNIX 的账户,能够和全世界的人谈话,是很令人兴奋的事。我仅仅从在互联网中“玩”就学到了不少,从那些愿意回答我提出的问题的人那里学到的就更多了。这些学习对我的职业生涯的影响不亚于我在正规学校教育之中学到的知识。所有的信息都是有用的。我在一个女子学院度过了学习的关键时期,那时是一个杰出的女性在掌管计算机院。在那个宽松氛围的学院,我们不仅被允许,还被鼓励去尝试很多的道路(我们能接触到很多很多的科技,还有聪明人愿意帮助我们),我也确实那样做了。我十分感激当年的教育。在那个学院,我也了解了什么是极客文化。
之后我去了研究生院去学习女性主义神学,但是技术的气息已经渗入我的灵魂。当我意识到我不想成为一个教授或者一个学术伦理家时,我离开了学术圈,带着学校债务和一些想法回到了家。
### 新的开端
在 1995 年,我被互联网连接人们以及分享想法和信息的能力所震惊(直到现在仍是如此)。我想要进入这个行业。看起来我好像要“女承父业”,但是我不知道如何开始。我开始在硅谷做临时工,在从 Sun 微系统公司得到我的第一个“真正的”技术职位之前,尝试做过一些别的事情(为一家半导体数据公司编写最基础的数据库、技术手册付印前的事务、备份工资单存根)。这些事很让人激动。(毕竟,我们是“.com”中的那个“点”
在 Sun 公司,我努力学习,尽可能多的尝试新事物。我的第一个工作是<ruby>网页化<rt> HTMLing</rt></ruby>(啥?这居然是一个词!)白皮书,以及为 Beta 程序修改一些基础的服务工具(大多数是 Perl 写的)。后来我成为 Solaris beta 项目组中的项目经理,并在 Open Solaris 的 Beta 版运行中感受到了开源的力量。
在那里我做的最重要的事情就是学习。我发现在同样重视工程和教育的地方有一种气氛,在那里我的问题不再显得“傻”。我很庆幸我选对了导师和朋友。在决定休第二个孩子的产假之前,我上每一堂我能上的课程,读每一本我能读的书,尝试自学我在学校没有学习过的技术,商业以及项目管理方面的技能。
### 重回工作
当我准备重新工作时Sun 公司已经不再是合适的地方了。所以,我整理了我的人脉(人际网络帮到了我),利用我的沟通技能最终获得了一个管理互联网门户的长期合同2005 年时,一切皆门户),并且开始了解 CRM、发布产品的方式、本地化、网络等知识。我讲这么多背景主要是想说明我的尝试和失败的经历与成功的经历同等重要我从中学到很多。我也认为我们需要这个方面的榜样。
从很多方面来看,我的职业生涯的第一部分是我的技术教育。时变势移 —— 我在帮助组织中的女性和其他弱势群体,但是并没有看出为一个技术行业的女性有多难。当时无疑我没有看到这个行业的缺陷,但是现在这个行业更加的厌恶女性,一点没有减少。
在这些事情之后,我还没有把自己当作一个标杆,或者一个高级技术人员。当我在父母圈子里认识的一位极客朋友鼓励我申请一个看起来定位十分模糊且技术性很强的开源非盈利基础设施机构(互联网系统协会 ISC它是广泛部署的开源 DNS 名称服务器 BIND 的缔造者,也是 13 台根域名服务器之一的运营商)的产品经理职位时,我很震惊。有很长一段时间,我都不知道他们为什么要雇佣我!我对 DNS、基础设施以及协议的开发知之甚少但是我再次遇到了好老师并再度开始飞速成长。我花时间出差、在关键流程上攻关、搞清楚如何与高度国际化的团队合作、解决麻烦的问题最重要的是拥抱支持我们的开源和充满活力的社区。我几乎是通过试错的方式重新学了一切。我学会了如何构思一个产品如何通过建设开源社区来带领那些有着特定才能、技能和耐心的人是他们赋予了产品价值。
### 成为别人的导师
当我在 ISC 工作时,我通过 [TechWomen 项目][2] (一个让来自中东和北非的技术行业的女性到硅谷来接受教育的计划),我开始喜欢教学生以及支持那些技术女性,特别是在开源行业中奋斗的。也正是从这时起我开始相信自己的能力。我还需要学很多。
当我第一次读 TechWomen 关于导师的广告时,我根本不认为他们会约我面试!我有冒名顶替综合症。当他们邀请我成为第一批导师(以及以后六年每年的导师)时,我很震惊,但是现在我学会了相信这些都是我努力得到的待遇。冒名顶替综合症是真实的,但是随着时间过去我就慢慢名副其实了。
### 现在
最后,我不得不离开我在 ISC 的工作。幸运的是,我的工作以及我的价值让我进入了 Mozilla ,在这里我的努力和我的幸运让我在这里承担着重要的角色。现在,我是一名支持多样性与包容的高级项目经理。我致力于构建一个更多样化,更有包容性的 Mozilla ,站在之前的做同样事情的巨人的肩膀上,与最聪明友善的人们一起工作。我用我的激情来让人们找到贡献一个世界需要的互联网的有意义的方式:这让我兴奋了很久。当我爬上山峰,我能极目四望!
通过对组织和个人行为的干预来获取一种改变文化的新方式,这和我的人生轨迹有着不可思议的联系 —— 从我的早期的学术生涯,到职业生涯再到现在。每天都是一个新的挑战,我想这是我喜欢在科技行业工作,尤其是在开放互联网工作的理由。互联网天然的多元性是它最开始吸引我的原因,也是我还在寻求的 —— 所有人都有机会和获取资源的可能性,无论背景如何。榜样、导师、资源,以及最重要的,尊重,是不断发展技术和开源文化的必要组成部分,实现我相信它能实现的所有事 —— 包括给所有人平等的接触机会。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/5/my-open-source-story-larissa-shapiro
作者:[Larissa Shapiro][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/larissa-shapiro
[1]: http://skinnywhitegirl.com/blog/my-nerd-story/1101/
[2]: https://www.techwomen.org/mentorship/why-i-keep-coming-back-to-mentor-with-techwomen


@ -0,0 +1,365 @@
Linux 服务器安全简明指南
============================================================
现在让我们强化你的服务器以防止未授权访问。
### 经常升级系统
保持最新的软件是你可以在任何操作系统上采取的最大的安全预防措施。软件更新的范围从关键漏洞补丁到小 bug 的修复,许多软件漏洞实际上是在它们被公开的时候得到修补的。
### 自动安全更新
有一些用于服务器上自动更新的参数。[Fedora 的 Wiki][15] 上有一篇很棒的剖析自动更新的利弊的文章,但是如果你把它限制到安全更新上,自动更新的风险将是最小的。
自动更新是否可行必须由你自己判断,因为这归结为**你**在你的服务器上做什么。请记住,自动更新仅适用于来自仓库的包,而不是自行编译的程序。你可能会发现,准备一个复制了生产服务器的测试环境是很有必要的:可以在部署到生产环境之前,先在测试环境里面更新来检查问题。
* CentOS 使用 [yum-cron][2] 进行自动更新。
* Debian 和 Ubuntu 使用 [无人值守升级][3]。
* Fedora 使用 [dnf-automatic][4]。
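下面是一个启用自动安全更新的简单示意(非原文内容,以 Debian/Ubuntu 的无人值守升级为例,其它发行版请使用上面列出的对应工具):

```
# 安装无人值守升级工具
sudo apt-get install unattended-upgrades
# 以交互方式开启自动更新(也可以直接编辑 /etc/apt/apt.conf.d/ 下的相关配置文件)
sudo dpkg-reconfigure -plow unattended-upgrades
```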
### 添加一个受限用户账户
到目前为止,你已经作为 `root` 用户访问了你的服务器,它有无限制的权限,可以执行**任何**命令 - 甚至可能意外中断你的服务器。 我们建议创建一个受限用户帐户,并始终使用它。 管理任务应该使用 `sudo` 来完成,它可以临时提升受限用户的权限,以便管理你的服务器。
> 不是所有的 Linux 发行版都在系统上默认包含 `sudo`,但大多数都在其软件包仓库中提供了 `sudo`。 如果得到 `sudo: command not found` 这样的输出,请在继续之前安装 `sudo`
要添加新用户,首先通过 SSH [登录到你的服务器][16]。
#### CentOS / Fedora
1、 创建用户,用你想要的名字替换 `example_user`,并分配一个密码:
```
useradd example_user && passwd example_user
```
2、 将用户添加到具有 sudo 权限的 `wheel` 组:
```
usermod -aG wheel example_user
```
#### Ubuntu
1、 创建用户,用你想要的名字替换 `example_user`。你将被要求输入用户密码:
```
adduser example_user
```
2、 添加用户到 `sudo` 组,这样你就有管理员权限了:
```
adduser example_user sudo
```
#### Debian
1、 Debian 默认的包中没有 `sudo` 使用 `apt-get` 来安装:
```
apt-get install sudo
```
2、 创建用户,用你想要的名字替换 `example_user`。你将被要求输入用户密码:
```
adduser example_user
```
3、 添加用户到 `sudo` 组,这样你就有管理员权限了:
```
adduser example_user sudo
```
创建完有限权限的用户后,断开你的服务器连接:
```
exit
```
重新用你的新用户登录。用你的用户名代替 `example_user`,用你的服务器 IP 地址代替例子中的 IP 地址:
```
ssh example_user@203.0.113.10
```
现在你可以用你的新用户帐户管理你的服务器,而不是 `root`。 几乎所有超级用户命令都可以用 `sudo`(例如:`sudo iptables -L -nv`)来执行,这些命令将被记录到 `/var/log/auth.log` 中。
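例如(补充示意,非原文内容;日志路径以 Debian/Ubuntu 为例,其它发行版可能是 `/var/log/secure`),可以这样查看最近的 sudo 操作记录:

```
# 过滤出最近 20 条与 sudo 相关的认证日志
sudo grep sudo /var/log/auth.log | tail -n 20
```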
### 加固 SSH 访问
默认情况下,密码认证用于通过 SSH 连接到您的服务器。加密密钥对更加安全,因为它用私钥代替了密码,这通常更难以暴力破解。在本节中,我们将创建一个密钥对,并将服务器配置为不接受 SSH 密码登录。
#### 创建验证密钥对
1、这是在你本机上完成的**不是**在你的服务器上,这里将创建一个 4096 位的 RSA 密钥对。在创建过程中,您可以选择使用密码加密私钥。这意味着它不能在没有输入密码的情况下使用,除非将密码保存到本机桌面的密钥管理器中。我们建议您使用带有密码的密钥对,但如果你不想使用密码,则可以将此字段留空。
**Linux / OS X**
> 如果你已经创建过 RSA 密钥对,则这个命令将会覆盖它,这可能会导致你无法访问其它使用该密钥的系统。如果你已创建过密钥对,请跳过此步骤。要检查现有的密钥,请运行 `ls ~/.ssh/id_rsa*`
```
ssh-keygen -b 4096
```
在输入密码之前,按下 **回车**使用 `/home/your_username/.ssh` 中的默认名称 `id_rsa``id_rsa.pub`
**Windows**
这可以使用 PuTTY 完成,在我们指南中已有描述:[使用 SSH 公钥验证][6]。
2、将公钥上传到您的服务器上。 将 `example_user` 替换为你用来管理服务器的用户名称,将 `203.0.113.10` 替换为你的服务器的 IP 地址。
**Linux**
在本机上:
```
ssh-copy-id example_user@203.0.113.10
```
**OS X**
在你的服务器上(用你的权限受限用户登录):
```
mkdir -p ~/.ssh && sudo chmod -R 700 ~/.ssh/
```
在本机上:
```
scp ~/.ssh/id_rsa.pub example_user@203.0.113.10:~/.ssh/authorized_keys
```
> 如果相对于 `scp` 你更喜欢 `ssh-copy-id` 的话,那么它也可以在 [Homebrew][5] 中找到。使用 `brew install ssh-copy-id` 安装。
**Windows**
* **选择 1**:使用 [WinSCP][1] 来完成。 在登录窗口中,输入你的服务器的 IP 地址作为主机名,以及非 root 的用户名和密码。单击“登录”连接。
一旦 WinSCP 连接后,你会看到两个主要部分。 左边显示本机上的文件,右边显示服务器上的文件。 使用左侧的文件浏览器,导航到你保存公钥的位置,选择公钥文件,然后点击上面工具栏中的“上传”。
系统会提示你输入要将文件放在服务器上的路径。 将文件上传到 `/home/example_user/.ssh/authorized_keys`,用你的用户名替换 `example_user`
* **选择 2**:将公钥直接从 PuTTY 键生成器复制到连接到你的服务器中(作为非 root 用户):
```
mkdir ~/.ssh; nano ~/.ssh/authorized_keys
```
上面命令将在文本编辑器中打开一个名为 `authorized_keys` 的空文件。 将公钥复制到文本文件中,确保复制为一行,与 PuTTY 所生成的完全一样。 按下 `CTRL + X`,然后按下 `Y`,然后回车保存文件。
最后,你需要为公钥目录和密钥文件本身设置权限:
```
sudo chmod 700 -R ~/.ssh && chmod 600 ~/.ssh/authorized_keys
```
这些命令通过阻止其他用户访问公钥目录以及文件本身来提供额外的安全性。有关它如何工作的更多信息,请参阅我们的指南[如何修改文件权限][7]。
3、 现在退出并重新登录你的服务器。如果你为私钥指定了密码,则需要输入密码。
#### SSH 守护进程选项
1、 **不允许 root 用户通过 SSH 登录。**(以下各项选项均在 `/etc/ssh/sshd_config` 配置文件中修改。)这要求所有的 SSH 连接都是通过非 root 用户进行。当以受限用户帐户连接后,可以通过使用 `sudo` 或使用 `su -` 切换为 root shell 来使用管理员权限。
```
# Authentication:
...
PermitRootLogin no
```
2、 **禁用 SSH 密码认证。** 这要求所有通过 SSH 连接的用户使用密钥认证。根据 Linux 发行版的不同,它可能需要添加 `PasswordAuthentication` 这行,或者删除前面的 `#` 来取消注释。
```
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
```
> 如果你从许多不同的计算机连接到服务器,你可能想要继续启用密码验证。这将允许你使用密码进行身份验证,而不是为每个设备生成和上传密钥对。
3、 **只监听一个互联网协议。** 在默认情况下SSH 守护进程同时监听 IPv4 和 IPv6 上的传入连接。除非你需要使用这两种协议进入你的服务器,否则就禁用你不需要的。 _这不会禁用系统范围的协议它只用于 SSH 守护进程。_
使用选项:
* `AddressFamily inet` 只监听 IPv4。
* `AddressFamily inet6` 只监听 IPv6。
默认情况下,`AddressFamily` 选项通常不在 `sshd_config` 文件中。将它添加到文件的末尾:
```
echo 'AddressFamily inet' | sudo tee -a /etc/ssh/sshd_config
```
4、 重新启动 SSH 服务以加载新配置。
如果你使用的 Linux 发行版使用 systemdCentOS 7、Debian 8、Fedora、Ubuntu 15.10+
```
sudo systemctl restart sshd
```
如果您的 init 系统是 SystemV 或 UpstartCentOS 6、Debian 7、Ubuntu 14.04
```
sudo service ssh restart
```
#### 使用 Fail2Ban 保护 SSH 登录
[Fail2Ban][17] 是一个应用程序,它会在太多的失败登录尝试后禁止 IP 地址登录到你的服务器。由于合法登录通常不会超过三次尝试(如果使用 SSH 密钥,那不会超过一个),因此如果服务器充满了登录失败的请求那就表示有恶意访问。
Fail2Ban 可以监视各种协议,包括 SSH、HTTP 和 SMTP。默认情况下Fail2Ban 仅监视 SSH并且因为 SSH 守护程序通常配置为持续运行并监听来自任何远程 IP 地址的连接,所以对于任何服务器都是一种安全威慑。
有关安装和配置 Fail2Ban 的完整说明,请参阅我们的指南:[使用 Fail2ban 保护服务器][18]。
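作为补充(非原文步骤,具体配置请以上面链接的指南为准),在 Debian/Ubuntu 上安装并启用 Fail2Ban 的最小步骤大致如下:

```
# 安装 Fail2Ban
sudo apt-get install fail2ban
# 基于默认配置创建本地配置文件,自定义设置写在 jail.local 中以免升级时被覆盖
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
# 启动服务并设置开机自启systemd 系统)
sudo systemctl enable --now fail2ban
```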
### 删除未使用的面向网络的服务
大多数 Linux 发行版都默认安装并运行着一些网络服务,它们监听来自互联网、回环接口或两者的传入连接。把不需要的面向网络的服务从系统中删除,可以减少运行中的进程和已安装软件包遭受攻击的可能性。
#### 查明运行的服务
要查看服务器中运行的服务:
```
sudo netstat -tulpn
```
> 如果默认情况下 `netstat` 没有包含在你的 Linux 发行版中,请安装软件包 `net-tools` 或使用 `ss -tulpn` 命令。
以下是 `netstat` 的输出示例。 请注意,因为默认情况下不同发行版会运行不同的服务,你的输出将有所不同:
```
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 7315/rpcbind
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3277/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3179/exim4
tcp 0 0 0.0.0.0:42526 0.0.0.0:* LISTEN 2845/rpc.statd
tcp6 0 0 :::48745 :::* LISTEN 2845/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 7315/rpcbind
tcp6 0 0 :::22 :::* LISTEN 3277/sshd
tcp6 0 0 ::1:25 :::* LISTEN 3179/exim4
udp 0 0 127.0.0.1:901 0.0.0.0:* 2845/rpc.statd
udp 0 0 0.0.0.0:47663 0.0.0.0:* 2845/rpc.statd
udp 0 0 0.0.0.0:111 0.0.0.0:* 7315/rpcbind
udp 0 0 192.0.2.1:123 0.0.0.0:* 3327/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 3327/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 3327/ntpd
udp 0 0 0.0.0.0:705 0.0.0.0:* 7315/rpcbind
udp6 0 0 :::111 :::* 7315/rpcbind
udp6 0 0 fe80::f03c:91ff:fec:123 :::* 3327/ntpd
udp6 0 0 2001:DB8::123 :::* 3327/ntpd
udp6 0 0 ::1:123 :::* 3327/ntpd
udp6 0 0 :::123 :::* 3327/ntpd
udp6 0 0 :::705 :::* 7315/rpcbind
udp6 0 0 :::60671 :::* 2845/rpc.statd
```
`netstat` 的输出告诉我们服务器上正在运行 [RPC][19]`rpc.statd` 和 `rpcbind`、SSH`sshd`)、[NTPdate][20]`ntpd`)和 [Exim][21]`exim4`)。
##### TCP
请参阅 `netstat` 输出的 `Local Address` 那一列。进程 `rpcbind` 正在侦听 `0.0.0.0:111``:::111`,外部地址是 `0.0.0.0:*` 或者 `:::*` 。这意味着它接受来自任何外部地址IPv4 和 IPv6上任何端口、经由任何网络接口的其它 RPC 客户端的传入 TCP 连接。SSH 的情况与此类似。而 Exim 只侦听回环接口上的流量,这一点可以从 `127.0.0.1` 地址看出。
##### UDP
UDP 套接字是[无状态][14]的,这意味着它们只有打开或关闭,并且每个进程的连接是独立于前后发生的连接。这与 TCP 的连接状态(例如 `LISTEN`、`ESTABLISHED`和 `CLOSE_WAIT`)形成对比。
我们的 `netstat` 输出说明 NTPdate1接受服务器公网 IP 地址上的传入连接2通过本地主机进行通信3接受来自外部的连接。这些连接都是通过 123 端口进行的,同时支持 IPv4 和 IPv6。我们还看到了 RPC 打开的更多套接字。
#### 查明该移除哪个服务
如果你在没有启用防火墙的情况下对服务器进行基本的 TCP 和 UDP 的 [nmap][22] 扫描,那么在打开端口的结果中将出现 SSH、RPC 和 NTPdate 。通过[配置防火墙][23],你可以过滤掉这些端口,但 SSH 除外,因为它必须允许你的传入连接。但是,理想情况下,应该禁用未使用的服务。
* 你可能主要通过 SSH 连接管理你的服务器,所以这个服务需要保留。如上所述,[RSA 密钥][8]和 [Fail2Ban][9] 可以帮助你保护 SSH。
* NTP 是服务器计时所必需的,但有个替代 NTPdate 的方法。如果你喜欢不开放网络端口的时间同步方法,并且你不需要纳秒精度,那么你可能有兴趣用 [OpenNTPD][10] 来代替 NTPdate。
* 然而Exim 和 RPC 是不必要的,除非你有特定的用途,否则应该删除它们。
> 本节针对 Debian 8。默认情况下不同的 Linux 发行版具有不同的服务。如果你不确定某项服务的功能,请尝试搜索互联网以了解该功能是什么,然后再尝试删除或禁用它。
#### 卸载监听的服务
如何移除包取决于发行版的包管理器:
**Arch**
```
sudo pacman -Rs package_name
```
**CentOS**
```
sudo yum remove package_name
```
**Debian / Ubuntu**
```
sudo apt-get purge package_name
```
**Fedora**
```
sudo dnf remove package_name
```
再次运行 `sudo netstat -tulpn`,你看到监听的服务就只会有 SSH`sshd`)和 NTP`ntpdate`,网络时间协议)。
### 配置防火墙
使用防火墙阻止不需要的入站流量能为你的服务器提供一个高效的安全层。 通过指定入站流量,你可以阻止入侵和网络测绘。 最佳做法是只允许你需要的流量,并拒绝一切其他流量。请参阅我们的一些关于最常见的防火墙程序的文档:
* [iptables][11] 是 netfilter 的控制器,它是 Linux 内核的包过滤框架。 默认情况下iptables 包含在大多数 Linux 发行版中。
* [firewallD][12] 是可用于 CentOS/Fedora 系列发行版的 iptables 控制器。
* [UFW][13] 为 Debian 和 Ubuntu 提供了一个 iptables 前端。
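例如(补充示意,非原文内容;假设这台服务器只需要对外提供 SSH 服务),使用 UFW 可以按“默认拒绝入站、只放行所需端口”的原则这样配置:

```
# 默认拒绝所有入站流量,允许所有出站流量
sudo ufw default deny incoming
sudo ufw default allow outgoing
# 放行 SSH默认 22 端口),避免把自己锁在外面
sudo ufw allow ssh
# 启用防火墙并查看规则
sudo ufw enable
sudo ufw status verbose
```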
### 接下来
这些是加固 Linux 服务器的最基本步骤,但是进一步的安全层将取决于其预期用途。 其他技术可以包括应用程序配置,使用[入侵检测][24]或者安装某个形式的[访问控制][25]。
现在你可以按你的需求开始设置你的服务器了。 我们有一个文档库可以帮助你,主题涵盖[从共享主机迁移][26]、[启用两步验证][27]到[托管网站][28]等等。
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/securing-your-server/
作者:[Phil Zona][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linode.com/docs/security/securing-your-server/
[1]:http://winscp.net/
[2]:https://fedoraproject.org/wiki/AutoUpdates#Fedora_21_or_earlier_versions
[3]:https://help.ubuntu.com/lts/serverguide/automatic-updates.html
[4]:https://dnf.readthedocs.org/en/latest/automatic.html
[5]:http://brew.sh/
[6]:https://www.linode.com/docs/security/use-public-key-authentication-with-ssh#windows-operating-system
[7]:https://www.linode.com/docs/tools-reference/modify-file-permissions-with-chmod
[8]:https://www.linode.com/docs/security/securing-your-server/#create-an-authentication-key-pair
[9]:https://www.linode.com/docs/security/securing-your-server/#use-fail2ban-for-ssh-login-protection
[10]:https://en.wikipedia.org/wiki/OpenNTPD
[11]:https://www.linode.com/docs/security/firewalls/control-network-traffic-with-iptables
[12]:https://www.linode.com/docs/security/firewalls/introduction-to-firewalld-on-centos
[13]:https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
[14]:https://en.wikipedia.org/wiki/Stateless_protocol
[15]:https://fedoraproject.org/wiki/AutoUpdates#Why_use_Automatic_updates.3F
[16]:https://www.linode.com/docs/getting-started#logging-in-for-the-first-time
[17]:http://www.fail2ban.org/wiki/index.php/Main_Page
[18]:https://www.linode.com/docs/security/using-fail2ban-for-security
[19]:https://en.wikipedia.org/wiki/Open_Network_Computing_Remote_Procedure_Call
[20]:http://support.ntp.org/bin/view/Main/SoftwareDownloads
[21]:http://www.exim.org/
[22]:https://nmap.org/
[23]:https://www.linode.com/docs/security/securing-your-server/#configure-a-firewall
[24]:https://linode.com/docs/security/ossec-ids-debian-7
[25]:https://en.wikipedia.org/wiki/Access_control#Access_Control
[26]:https://www.linode.com/docs/migrate-to-linode/migrate-from-shared-hosting
[27]:https://www.linode.com/docs/security/linode-manager-security-controls
[28]:https://www.linode.com/docs/websites/hosting-a-website


@ -1,4 +1,4 @@
安卓平台上的依赖注入 - 第一部分
安卓平台上的依赖注入(一)
===========================
![](https://d262ilb51hltx0.cloudfront.net/max/2000/1*YWlAzAY20KLLGIyyD_mzZw.png)
@ -9,11 +9,11 @@
但这句话实际是什么意思?让我们看看 SOLID 中每个字母在架构里所代表的重要含义,例如:
- [S 单职责原则][1]
- [O 开闭原则][2]
- [L Liskov 替换原则][3]
- [I 接口分离原则][4]
- [D 依赖反转原则][5] 这也是依赖注入的核心概念。
- [S - 单职责原则][1]
- [O - 开闭原则][2]
- [L - Liskov 替换原则][3]
- [I - 接口分离原则][4]
- [D - 依赖反转原则][5] 这也是依赖注入dependency injection的核心概念。
简单来说,我们需要提供一个类,这个类有它所需要的所有对象,以便实现其功能。
@ -39,7 +39,7 @@ class DependencyInjection {
}
```
正如我们所见,第一种情况是我们在构造器里创建了依赖对象,但在第二种情况下,它作为参数被传递给构造器,这就是我们所说的依赖注入。这样做是为了让我们所写的类不依靠特定依赖关系的实现,却能直接使用它。
正如我们所见,第一种情况是我们在构造器里创建了依赖对象,但在第二种情况下,它作为参数被传递给构造器,这就是我们所说的依赖注入dependency injection。这样做是为了让我们所写的类不依靠特定依赖关系的实现,却能直接使用它。
参数传递的目标是构造器,我们就称之为构造器依赖注入;或者是某个方法,就称之为方法依赖注入:
@ -58,13 +58,13 @@ class Example {
```
要是你想总体深入地了解依赖注入,可以看看由 [Dan Lew][t2] 发表的[精彩的演讲][t1],事实上是这个演讲启迪了这概述。
要是你想总体深入地了解依赖注入,可以看看由 [Dan Lew][t2] 发表的[精彩的演讲][t1],事实上是这个演讲启迪了这概述。
在 Android 平台,当需要框架来处理依赖注入这个特殊的问题时,我们有不同的选择,其中最有名的框架就是 [Dagger 2][t3]。它最开始是由 Square 公司(译者注Square 是美国一家移动支付公司)里一些很棒的开发者开发出来的,然后慢慢发展成由 Google 自己开发。特别地Dagger 1 先被开发出来,然后 Big G 接手这个项目,做了很多改动,比如以注释为基础,在编译的时候就完成 Dagger 的任务,也就是第二个版本
在 Android 平台,当需要框架来处理依赖注入这个特殊的问题时,我们有不同的选择,其中最有名的框架就是 [Dagger 2][t3]。它最开始是由 Square 公司(LCTT 译注Square 是美国一家移动支付公司)的一些很棒的开发者开发出来的,然后慢慢发展成由 Google 自己开发。首先开发出来的是 Dagger 1然后 Big G 接手这个项目发布了第二个版本做了很多改动比如以注解annotation为基础在编译的时候完成其任务
### 导入框架
安装 Dagger 并不难,但需要导入 `android-apt` 插件,通过向项目的根目录下的 build.gradle 文件中添加它的依赖关系:
安装 Dagger 并不难,但需要导入 `android-apt` 插件,通过向项目的根目录下的 `build.gradle` 文件中添加它的依赖关系:
```
buildscript{
@ -76,13 +76,13 @@ buildscript{
}
```
然后,我们需要将 `android-apt` 插件应用到项目 build.gradle 文件,放在文件顶部 Android 应用那一句的下一行:
然后,我们需要将 `android-apt` 插件应用到项目 `build.gradle` 文件,放在文件顶部 Android application 那一句的下一行:
```
apply plugin: 'com.neenbedankt.android-apt'
```
这个时候,我们只用添加依赖关系,然后就能使用库和注释了:
这个时候,我们只用添加依赖关系,然后就能使用库及其注解annotation了:
```
dependencies{
@ -97,7 +97,7 @@ dependencies{
### Dagger 模块
要注入依赖,首先需要告诉框架我们能提供什么(比如说上下文)以及特定的对象应该怎样创建。为了完成注入,我们用 `@Module` 注释对一个特殊的类进行了注解(这样 Dagger 就能识别它了),寻找 `@Provide` 标记的方法,生成图表,能够返回我们所请求的对象。
要注入依赖,首先需要告诉框架我们能提供什么(比如说上下文)以及特定的对象应该怎样创建。为了完成注入,我们用 `@Module` 注释对一个特殊的类进行了注解(这样 Dagger 就能识别它了),寻找 `@Provide` 注解的方法,生成图表,能够返回我们所请求的对象。
看下面的例子,这里我们创建了一个模块,它会返回给我们 `ConnectivityManager`,所以我们要把 `Context` 对象传给这个模块的构造器。
@ -122,11 +122,11 @@ public class ApplicationModule {
}
```
>Dagger 中十分有意思的一点是只用在一个方法前面添加一个 Singleton 注解,就能处理所有从 Java 中继承过来的问题。
> Dagger 中十分有意思的一点是简单地注解一个方法来提供一个单例Singleton,就能处理所有从 Java 中继承过来的问题。
### 容器
### 组件
当我们有一个模块的时候,我们需要告诉 Dagger 想把依赖注入到哪里:我们在一个容器里,一个特殊的注解过的接口里完成依赖注入。我们在这个接口里创造不同的方法,而接口的参数是我们想注入依赖关系的类。
当我们有一个模块的时候,我们需要告诉 Dagger 想把依赖注入到哪里:我们在一个组件Component里完成依赖注入这是一个我们特别创建的特殊注解接口。我们在这个接口里创造不同的方法,而接口的参数是我们想注入依赖关系的类。
下面给出一个例子并告诉 Dagger 我们想要 `MainActivity` 类能够接受 `ConnectivityManager`(或者在图表里的其它依赖对象)。我们只要做类似以下的事:
@ -139,15 +139,15 @@ public interface ApplicationComponent {
}
```
>正如我们所见,@Component 注解有几个参数,一个是所支持的模块的数组,意味着它能提供的依赖。这里既可以是 Context 也可以是 ConnectivityManager因为他们在 ApplicationModule 类中有声明。
> 正如我们所见,@Component 注解有几个参数,一个是所支持的模块的数组,代表它能提供的依赖。这里既可以是 `Context` 也可以是 `ConnectivityManager`,因为它们在 `ApplicationModule` 类中有声明。
### 使
### 用
这时,我们要做的是尽快创建容器(比如在应用的 onCreate 方法里面)并且返回这个容器,那么类就能用它来注入依赖了:
这时,我们要做的是尽快创建组件(比如在应用的 `onCreate` 阶段)并返回它,那么类就能用它来注入依赖了:
>为了让框架自动生成 DaggerApplicationComponent我们需要构建项目以便 Dagger 能够扫描我们的代码,并生成我们需要的部分。
> 为了让框架自动生成 `DaggerApplicationComponent`,我们需要构建项目以便 Dagger 能够扫描我们的代码,并生成我们需要的部分。
`MainActivity` 里,我们要做的两件事是用 `@Inject` 注解符对想要注入的属性进行注,调用我们在 `ApplicationComponent` 接口中声明的方法(请注意后面一部分会因我们使用的注入类型的不同而变化,但这里简单起见我们不去管它),然后依赖就被注入了,我们就能自由使用他们:
`MainActivity` 里,我们要做的两件事是用 `@Inject` 注解符对想要注入的属性进行注,调用我们在 `ApplicationComponent` 接口中声明的方法(请注意后面一部分会因我们使用的注入类型的不同而变化,但这里简单起见我们不去管它),然后依赖就被注入了,我们就能自由使用他们:
```
public class MainActivity extends AppCompatActivity {
@ -164,7 +164,7 @@ public class MainActivity extends AppCompatActivity {
### 总结
当然了,我们可以手动注入依赖,管理所有不同的对象,但 Dagger 打消了很多有关模板的“噪声”Dagger 给我们有用的附加品(比如 `Singleton`),而仅用 Java 处理将会很糟糕。
当然了,我们可以手动注入依赖,管理所有不同的对象,但 Dagger 消除了很多比如模板这样的“噪声”,给我们提供有用的附加品(比如 `Singleton`),而仅用 Java 处理将会很糟糕。
--------------------------------------------------------------------------------
@ -172,7 +172,7 @@ via: https://medium.com/di-101/di-101-part-1-81896c2858a0#.3hg0jj14o
作者:[Roberto Orgiu][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,109 @@
如何在 Linux 下安装 PyCharm
============================================
![](https://fthmb.tqn.com/ju1u-Ju56vYnXabPbsVRyopd72Q=/768x0/filters:no_upscale()/about/pycharmstart-57e2cb405f9b586c351a4cf7.png)
### 简介
Linux 经常被看成是一个远离外部世界,只有极客才会使用的操作系统,但是这是不准确的,如果你想开发软件,那么 Linux 能够为你提供一个非常棒的开发环境。
刚开始学习编程的新手们经常会问这样一个问题:应该使用哪种语言?当涉及到 Linux 系统的时候,通常的选择是 C、C++、Python、Java、PHP、Perl 和 Ruby On Rails。
Linux 系统的许多核心程序都是用 C 语言写的,但是如果离开 Linux 系统的世界, C 语言就不如其它语言比如 Java 和 Python 那么常用。
对于学习编程的人来说, Python 和 Java 都是不错的选择,因为它们是跨平台的,因此,你在 Linux 系统上写的程序在 Windows 系统和 Mac 系统上也能够很好的工作。
虽然你可以使用任何编辑器来开发 Python 程序但是如果你使用一个同时包含编辑器和调试器的优秀的集成开发环境IDE来进行开发那么你的编程生涯将会变得更加轻松。
PyCharm 是由 Jetbrains 公司开发的一个跨平台编辑器。如果你之前是在 Windows 环境下进行开发,那么你会立刻认出 Jetbrains 公司,它就是那个开发了 Resharper 的公司。 Resharper 是一个用于重构代码的优秀产品,它能够指出代码可能存在的问题,自动添加声明,比如当你在使用一个类的时候它会自动为你导入。
这篇文章将讨论如何在 Linux 系统上获取、安装和运行 PyCharm 。
### 如何获取 PyCharm
你可以通过访问[https://www.jetbrains.com/pycharm/][1]获取 PyCharm 。
屏幕中央有一个很大的 'Download' 按钮。
你可以选择下载专业版或者社区版。如果你刚刚接触 Python 编程那么推荐下载社区版。然而,如果你打算发展到专业化的编程,那么专业版的一些优秀特性是不容忽视的。
### 如何安装 PyCharm
下载好的文件的名称可能类似这样:`pycharm-professional-2016.2.3.tar.gz`。
以 “tar.gz” 结尾的文件是被 [gzip][2] 工具压缩过的,并且把文件夹用 [tar][3] 工具归档到了一起。你可以阅读关于[提取 tar.gz 文件][4]指南的更多信息。
简单来说,要解压这个文件,你首先需要打开终端,然后通过下面的命令进入下载文件所在的文件夹:
```
cd ~/Downloads
```
现在,通过运行下面的命令找到你下载的文件的名字:
```
ls pycharm*
```
然后运行下面的命令解压文件:
```
tar -xvzf pycharm-professional-2016.2.3.tar.gz -C ~
```
记得把上面命令中的文件名替换成通过 `ls` 命令获知的 pycharm 文件名。(也就是你下载的文件的名字)。上面的命令将会把 PyCharm 软件安装在 `home` 目录中。
### 如何运行 PyCharm
要运行 PyCharm 首先需要进入 `home` 目录:
```
cd ~
```
运行 `ls` 命令查找文件夹名:
```
ls
```
查找到文件名以后,运行下面的命令进入 PyCharm 目录:
```
cd pycharm-2016.2.3/bin
```
最后,通过运行下面的命令来运行 PyCharm
```
sh pycharm.sh &
```
如果你是在一个桌面环境比如 GNOME 、 KDE 、 Unity 、 Cinnamon 或者其他现代桌面环境上运行,你也可以通过桌面环境的菜单或者快捷方式来找到 PyCharm 。
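如果在应用菜单里找不到 PyCharm也可以手工创建一个启动器以下为补充示意并非原文步骤其中的版本号和 `your_username` 需要换成你实际解压出的目录和用户名,若图标文件不存在可省略 `Icon` 行):

```
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/pycharm.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=PyCharm
Exec=/home/your_username/pycharm-2016.2.3/bin/pycharm.sh
Icon=/home/your_username/pycharm-2016.2.3/bin/pycharm.png
Terminal=false
Categories=Development;IDE;
EOF
```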
### 总结
现在, PyCharm 已经安装好了,你可以开始使用它来开发一个桌面应用、 web 应用和各种工具。
如果你想学习如何使用 Python 编程,那么这里有很好的[学习资源][5]值得一看。里面的文章更多的是关于 Linux 学习,但也有一些资源比如 Pluralsight 和 Udemy 提供了关于 Python 学习的一些很好的教程。
如果想了解 PyCharm 的更多特性,请点击[这儿][6]来查看。它覆盖了从创建项目到描述用户界面、调试以及代码重构的全部内容。
-----------------------------------------------------------------------------------------------------------
via: https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
作者:[Gary Newell][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[oska874](https://github.com/oska874)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com/gary-newell-2180098
[1]:https://www.jetbrains.com/pycharm/
[2]:https://www.lifewire.com/example-uses-of-the-linux-gzip-command-4078675
[3]:https://www.lifewire.com/uses-of-linux-command-tar-2201086
[4]:https://www.lifewire.com/extract-tar-gz-files-2202057
[5]:https://www.lifewire.com/learn-linux-in-structured-manner-4061368
[6]:https://www.lifewire.com/pycharm-the-best-linux-python-ide-4091045
[7]:https://fthmb.tqn.com/ju1u-Ju56vYnXabPbsVRyopd72Q=/768x0/filters:no_upscale()/about/pycharmstart-57e2cb405f9b586c351a4cf7.png

View File

@ -0,0 +1,146 @@
PyCharm - Linux 下最好的 Python IDE
=========
![](https://fthmb.tqn.com/AVEbzYN3BPH_8cGYkPflIx58-XE=/768x0/filters:no_upscale()/about/pycharm2-57e2d5ee5f9b586c352c7493.png)
### 介绍
在这篇指南中,我将向你介绍一个集成开发环境 - PyCharm你可以在它上面使用 Python 编程语言开发专业应用。
Python 是一门优秀的编程语言,因为它真正实现了跨平台,用它开发的应用程序在 Windows、Linux 以及 Mac 系统上均可运行,无需重新编译任何代码。
PyCharm 是由 [Jetbrains][3] 开发的一个编辑器和调试器,[Jetbrains][3] 就是那家开发了 Resharper 的公司。不得不说Resharper 是一个很优秀的工具,它被 Windows 开发者们用来重构代码,同时,它也使得 Windows 开发者们写 .NET 代码更加轻松。[Resharper][2] 的许多原则也被融入到了 [PyCharm][3] 专业版中。
### 如何安装 PyCharm
我已经[写了一篇][4]关于如何获取 PyCharm 的指南,下载、解压文件,然后运行。
### 欢迎界面
当你第一次运行 PyCharm 或者关闭一个项目的时候,会出现一个屏幕,上面显示一系列近期项目。
你也会看到下面这些菜单选项:
* 创建新项目
* 打开项目
* 从版本控制仓库检出
还有一个配置设置选项,你可以通过它设置默认 Python 版本或者一些其他设置。
### 创建一个新项目
当你选择‘创建一个新项目’以后,它会提供下面这一系列可能的项目类型供你选择:
* Pure Python
* Django
* Flask
* Google App Engine
* Pyramid
* Web2Py
* Angular CLI
* AngularJS
* Foundation
* HTML5 Boilerplate
* React Starter Kit
* Twitter Bootstrap
* Web Starter Kit
这不是一个编程教程,所以我没必要说明这些项目类型是什么。如果你想创建一个可以运行在 Windows、Linux 和 Mac 上的简单桌面运行程序,那么你可以选择 Pure Python 项目,然后使用 Qt 库来开发图形应用程序,这样的图形应用程序无论在何种操作系统上运行,看起来都像是原生的,就像是在该系统上开发的一样。
选择了项目类型以后,你需要输入一个项目名字并且选择一个 Python 版本来进行开发。
### 打开一个项目
你可以通过单击‘最近打开的项目’列表中的项目名称来打开一个项目,或者,你也可以单击‘打开’,然后浏览到你想打开的项目所在的文件夹,找到该项目,然后选择‘确定’。
### 从源码控制进行查看
PyCharm 提供了从各种在线资源查看项目源码的选项,在线资源包括 [GitHub][5]、[CVS][6]、Git、[Mercurial][7] 以及 [Subversion][8]。
### PyCharm IDE集成开发环境
PyCharm IDE 中可以打开顶部的菜单,在这个菜单下方你可以看到每个打开的项目的标签。
屏幕右方是调试选项区,可以单步运行代码。
左侧面板有项目文件和外部库的列表。
如果想在项目中新建一个文件,你可以鼠标右击项目的名字,然后选择‘新建’。然后你可以在下面这些文件类型中选择一种添加到项目中:
* 文件
* 目录
* Python 包
* Python 文件
* Jupyter 笔记
* HTML 文件
* Stylesheet
* JavaScript
* TypeScript
* CoffeeScript
* Gherkin
* 数据源
当添加了一个文件,比如 Python 文件以后,你可以在右边面板的编辑器中进行编辑。
文本是全彩色编码的,并且有黑体文本。垂直线显示缩进,从而能够确保缩进正确。
编辑器具有智能补全功能,这意味着当你输入库名字或可识别命令的时候,你可以按 'Tab' 键补全命令。
### 调试程序
你可以利用屏幕右上角的’调试选项’调试程序的任何一个地方。
如果你是在开发一个图形应用程序,你可以点击‘绿色按钮’来运行程序,你也可以通过 'shift+F10' 快捷键来运行程序。
为了调试应用程序,你可以点击紧挨着‘绿色按钮’的‘绿色箭头’或者按 shift+F9 快捷键。你可以点击一行代码的灰色边缘,从而设置断点,这样当程序运行到这行代码的时候就会停下来。
你可以按 'F8' 单步向前运行代码,这意味着你只是运行代码但无法进入函数内部,如果要进入函数内部,你可以按 'F7'。如果你想从一个函数中返回到调用函数,你可以按 'shift+F8'。
调试过程中,你会在屏幕底部看到许多窗口,比如进程和线程列表,以及你正在监视的变量。
当你运行到一行代码的时候,你可以对这行代码中出现的变量进行监视,这样当变量值改变的时候你能够看到。
另一个不错的选择是使用覆盖检查器运行代码。在过去这些年里,编程界发生了很大的变化,现在,对于开发人员来说,进行测试驱动开发是很常见的,这样他们可以检查对程序所做的每一个改变,确保不会破坏系统的另一部分。
覆盖检查器能够很好的帮助你运行程序,执行一些测试,运行结束以后,它会以百分比的形式告诉你测试运行所覆盖的代码有多少。
还有一个工具可以显示‘类函数’或‘类’的名字,以及一个项目被调用的次数和在一个特定代码片段运行所花费的时间。
### 代码重构
PyCharm 一个很强大的特性是代码重构选项。
当你开始写代码的时候,会在右边缘出现一个小标记。如果你写的代码可能出错或者写的不太好, PyCharm 会标记上一个彩色标记。
点击彩色标记将会告诉你出现的问题并提供一个解决方法。
比如,你通过一个导入语句导入了一个库,但没有使用该库中的任何东西,那么不仅这行代码会变成灰色,彩色标记还会告诉你‘该库未使用’。
对于正确的代码,也可能会出现错误提示,比如在导入语句和函数起始之间只有一个空行。当你创建了一个名称非小写的函数时它也会提示你。
你不必遵循 PyCharm 的所有规则。这些规则大部分只是好的编码准则,与你的代码是否能够正确运行无关。
代码菜单还有其它的重构选项。比如,你可以进行代码清理以及检查文件或项目问题。
### 总结
PyCharm 是 Linux 系统上开发 Python 代码的一个优秀编辑器,并且有两个可用版本。社区版可供临时开发者使用,专业版则提供了开发者开发专业软件可能需要的所有工具。
--------------------------------------------------------------------------------
via: https://www.lifewire.com/pycharm-the-best-linux-python-ide-4091045
作者:[Gary Newell][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com/gary-newell-2180098
[1]:https://www.jetbrains.com/
[2]:https://www.jetbrains.com/resharper/
[3]:https://www.jetbrains.com/pycharm/specials/pycharm/pycharm.html?&gclid=CjwKEAjw34i_BRDH9fbylbDJw1gSJAAvIFqU238G56Bd2sKU9EljVHs1bKKJ8f3nV--Q9knXaifD8xoCRyjw_wcB&gclsrc=aw.ds.ds&dclid=CNOy3qGQoc8CFUJ62wodEywCDg
[4]:https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
[5]:https://github.com/
[6]:http://www.linuxhowtos.org/System/cvs_tutorial.htm
[7]:https://www.mercurial-scm.org/
[8]:https://subversion.apache.org/


@ -0,0 +1,113 @@
Fedora 中使用 Inkscape 起步
=============
![inkscape-gettingstarted](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gettingstarted-945x400.png)
Inkscape 是一个流行的、功能齐全、自由而开源的矢量[图形编辑器][3],它已经在 Fedora 官方仓库中。它特别适合创作 [SVG 格式][4]的矢量图形。Inkscape 非常适于创建和操作图片和插图,以及创建图表和用户界面设计。
[
![cyberscoty-landscape-800px](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/cyberscoty-landscape-800px.png)
][5]
*使用 inkscape 创建的[风车景色][1]的插图*
[其官方网站的截图页][6]上有一些很好的例子,说明 Inkscape 可以做些什么。<ruby>Fedora 杂志<rt>Fedora Magazine</rt></ruby>上的大多数精选图片也是使用 Inkscape 创建的,包括最近的精选图片:
[
![communty](https://cdn.fedoramagazine.org/wp-content/uploads/2016/09/communty.png)
][7]
*Fedora 杂志最近使用 Inkscape 创建的精选图片*
### 在 Fedora 上安装 Inkscape
**Inkscape 已经[在 Fedora 官方仓库中了][8],因此可以非常简单地在 Fedora Workstation 上使用 Software 这个应用来安装它:**
[
![inkscape-gnome-software](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gnome-software.png)
][9]
另外,如果你习惯用命令行,你可以使用 `dnf` 命令来安装:
```
sudo dnf install inkscape
```
### 开始使用 Inkscape
当第一次打开程序时,你会看到一个空白页面,以及几组不同的工具栏。对于初学者,最重要的三个工具栏是Toolbar工具栏、Tools Control Bar工具控制栏和 Colour Palette调色板
[
![inkscape_window](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape_window.png)
][10]
**Toolbar**提供了创建绘图的所有基本工具,包括以下工具:
* 矩形工具:用于绘制矩形和正方形
* 星形/多边形(形状)工具
* 圆形工具:用于绘制椭圆和圆
* 文本工具:用于添加标签和其他文本
* 路径工具:用于创建或编辑更复杂或自定义的形状
* 选择工具:用于选择图形中的对象
**Colour Palette** 提供了一种设置当前选定对象的颜色的快速方式。 **Tools Control Bar** 提供了工具栏中当前选定工具的所有设置。每次选择新工具时Tools Control Bar 会变成该工具的相应设置:
[
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-toolscontrolbar.gif)
][11]
### 绘图
接下来,让我们使用 Inkscape 绘制一个星星。 首先,从 **Toolbar** 中选择星形工具,**然后在主绘图区域上单击并拖动。**
你可能会注意到你画的星星看起来很像一个三角形。要更改它,请使用 **Tools Control Bar** 中的 **Corners** 选项,再添加几个点。 最后,当你完成后,在星星仍被选中的状态下,从 **Palette**(调色板)中选择一种颜色来改变星星的颜色:
[
![inkscape-drawastar](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-drawastar.gif)
][12]
接下来,可以在 Toolbar 中实验一些其他形状工具,如矩形工具,螺旋工具和圆形工具。通过不同的设置,每个工具都可以创建一些独特的图形。
### 在绘图中选择并移动对象
现在你有一堆图形了,你可以使用 Select 工具来移动它们。要使用 Select 工具,首先从工具栏中选择它,然后单击要操作的形状,接着将图形拖动到您想要的位置。
选择形状后,你还可以使用尺寸句柄调整图形大小。此外,如果你单击所选的图形,尺寸句柄将转变为旋转模式,并允许你旋转图形:
[
![inkscape-movingshapes](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-movingshapes.gif)
][13]
* * *
Inkscape 是一个很棒的软件,它还包含了更多的工具和功能。在本系列的下一篇文章中,我们将介绍更多可用来创建插图和文档的功能和选项。
-----------------------
作者简介Ryan 是一名 Fedora 设计师。他使用 Fedora Workstation 作为他的主要桌面,还有来自 Libre Graphics 世界的最好的工具,尤其是矢量图形编辑器 Inkscape。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-inkscape-fedora/
作者:[Ryan Lerch][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ryanlerch.id.fedoraproject.org/
[1]:https://openclipart.org/detail/185885/windmill-in-landscape
[2]:https://fedoramagazine.org/getting-started-inkscape-fedora/
[3]:https://inkscape.org/
[4]:https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[5]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/cyberscoty-landscape-800px.png
[6]:https://inkscape.org/en/about/screenshots/
[7]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/09/communty.png
[8]:https://apps.fedoraproject.org/packages/inkscape
[9]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gnome-software.png
[10]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape_window.png
[11]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-toolscontrolbar.gif
[12]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-drawastar.gif
[13]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-movingshapes.gif


@ -0,0 +1,119 @@
如何在 Linux/Unix 系统中验证端口是否被占用
==========
[![](https://s0.cyberciti.org/images/category/old/linux-logo.png)][1]
在 Linux 或者类 Unix 中,我该如何检查某个端口是否被占用?我又该如何验证 Linux 服务器中有哪些端口处于监听状态?
验证哪些端口在服务器的网络接口上处于监听状态是非常重要的。你需要注意那些开放端口来检测网络入侵。除了网络入侵,为了排除故障,确认服务器上的某个端口是否被其他应用程序占用也是必要的。比方说,你可能会在同一个系统中安装了 Apache 和 Nginx 服务器,所以了解是 Apache 还是 Nginx 占用了 80/443 TCP 端口真的很重要。这篇快速教程会介绍使用 `netstat``nmap``lsof` 命令来检查端口使用信息,并找出哪些程序正在使用这些端口。
### 如何检查 Linux 中的程序和监听的端口
1、 打开一个终端,如 shell 命令窗口。
2、 运行以下任意一行命令:
```
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O IP地址
```
下面我们看看这些命令和它们的详细输出内容:
### 方式 1lsof 命令
语法如下:
```
$ sudo lsof -i -P -n
$ sudo lsof -i -P -n | grep LISTEN
$ doas lsof -i -P -n | grep LISTEN ### OpenBSD
```
输出如下:
[![Fig.01: Check the listening ports and applications with lsof command](https://s0.cyberciti.org/uploads/faq/2016/11/lsof-outputs.png)][2]
*图 1使用 lsof 命令检查监听端口和程序*
仔细看上面输出的最后一行:
```
sshd 85379 root 3u IPv4 0xffff80000039e000 0t0 TCP 10.86.128.138:22 (LISTEN)
```
- `sshd` 是程序的名称
- `10.86.128.138``sshd` 程序绑定 (LISTEN) 的 IP 地址
- `22` 是被使用 (LISTEN) 的 TCP 端口
- `85379``sshd` 任务的进程 ID (PID)
### 方式 2netstat 命令
你可以如下面所示使用 `netstat` 来检查监听的端口和程序。
**Linux 中 netstat 语法**
```
$ netstat -tulpn | grep LISTEN
```
**FreeBSD/MacOS X 中 netstat 语法**
```
$ netstat -anp tcp | grep LISTEN
$ netstat -anp udp | grep LISTEN
```
**OpenBSD 中 netstat 语法**
```
$ netstat -na -f inet | grep LISTEN
$ netstat -nat | grep LISTEN
```
### 方式 3nmap 命令
语法如下:
```
$ sudo nmap -sT -O localhost
$ sudo nmap -sU -O 192.168.2.13 ### 列出打开的 UDP 端口
$ sudo nmap -sT -O 192.168.2.13 ### 列出打开的 TCP 端口
```
示例输出如下:
[![Fig.02: Determines which ports are listening for TCP connections using nmap](https://s0.cyberciti.org/uploads/faq/2016/11/nmap-outputs.png)][3]
*图 2使用 nmap 探测哪些端口监听 TCP 连接*
你可以用一句命令合并 TCP/UDP 扫描:
```
$ sudo nmap -sTU -O 192.168.2.13
```
### 赠品:对于 Windows 用户
在 windows 系统下可以使用下面的命令检查端口使用情况:
```
netstat -bano | more
netstat -bano | grep LISTENING
netstat -bano | findstr /R /C:"[LISTING]"
```
----------------------------------------------------
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[VIVEK GITE][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[oska874](https://github.com/oska874)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
[1]:https://www.cyberciti.biz/faq/category/linux/
[2]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/lsof-outputs/
[3]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/nmap-outputs/


@ -0,0 +1,257 @@
在 Ubuntu 系统上使用 Samba4 来创建活动目录架构(一)
============================================================
Samba 是一个自由的开源软件套件,用于实现 Windows 操作系统与 Linux/Unix 系统之间的无缝连接及共享资源。
Samba 不仅可以通过 SMB/CIFS 协议组件来为 Windows 与 Linux 系统之间提供独立的文件及打印机共享服务它还能实现活动目录Active Directory域控制器Domain Controller的功能或者让 Linux 主机加入到域环境中作为域成员服务器。当前的 Samba4 版本实现的 AD DC 域及森林级别可以取代 Windows 2008 R2 系统的域相关功能。
本系列的文章的主要内容是使用 Samba4 软件来配置活动目录域控制器,涉及到 Ubuntu、CentOS 和 Windows 系统相关的以下主题:
- 第 1 节:在 Ubuntu 系统上使用 Samba4 来创建活动目录架构
- 第 2 节:在 Linux 命令行下管理 Samba4 AD 架构
- 第 3 节:在 Windows 10 操作系统上安装 RSAT 工具来管理 Samba4 AD
- 第 4 节:从 Windows 中管理 Samba4 AD 域控制器 DNS 和组策略
- 第 5 节:使用 Sysvol Replication 复制功能把 Samba 4 DC 加入到已有的 AD
- 第 6 节:从 Linux DC 服务器通过 GPO 来添加一个共享磁盘并映射到 AD
- 第 7 节:把 Ubuntu 16.04 系统主机作为域成员服务器添加到 AD
- 第 8 节:把 CentOS 7 系统主机作为域成员服务器添加到 AD
- 第 9 节:在 AD Intranet 区域创建使用 kerberos 认证的 Apache Website
这篇指南将阐明在 Ubuntu 16.04  Ubuntu 14.04 操作系统上安装配置 Samba4 作为域控服务器组件的过程中,你需要注意的每一个步骤。
以下安装配置文档将会说明在 Windows 和 Linux 的混合系统环境中,关于用户、机器、共享卷、权限及其它资源信息的主要配置点。
#### 环境要求:
1. [Ubuntu 16.04 服务器安装][1]
2. [Ubuntu 14.04 服务器安装][2]
3. 为你的 AD DC 服务器[设置静态IP地址][3]
### 第一步:初始化 Samba4 安装环境
1、 在开始安装 Samba4 AD DC 之前,让我们先做一些准备工作。首先运行以下命令来确保系统已更新了最新的安全特性,内核及其它补丁:
```
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
```
2、 其次打开服务器上的 `/etc/fstab` 文件,确保文件系统分区的 ACL 已经启用 ,如下图所示。
通常情况下,当前常见的 Linux 文件系统,比如 ext3、ext4、xfs 或 btrfs 都默认支持并已经启用了 ACL。如果未设置则打开并编辑 `/etc/fstab` 文件,在挂载选项字段中添加 `acl`,然后重启系统以使修改的配置生效。
[
![Enable ACL's on Linux Filesystem](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-ACL-on-Linux-Filesystem.png)
][5]
*启动 Linux 文件系统的 ACL 功能*
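下面给出一个在根分区挂载选项中启用了 `acl` 的 `/etc/fstab` 行的示例(补充示意,非原文内容,其中的 UUID 为占位符,请以你系统中的实际值为准):

```
# <file system>                            <mount point>  <type>  <options>              <dump>  <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /              ext4    errors=remount-ro,acl  0       1
```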
3、 最后使用一个具有描述性的名称来[设置主机名][6] ,比如这往篇文章所使用的 `adc1`。通过编辑 `/etc/hostname` 文件或使用使用下图所示的命令来设置主机名。
```
$ sudo hostnamectl set-hostname adc1
```
为了使修改的主机名生效必须重启服务器。
### 第二步: 为 Samba4 AD DC 服务器安装必需的软件包
4、 为了让你的服务器转变为域控制器你需要在服务器上使用具有 root 权限的账号执行以下命令来安装 Samba 套件及所有必需的软件包。
```
$ sudo apt-get install samba krb5-user krb5-config winbind libpam-winbind libnss-winbind
```
[
![Install Samba on Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/11/Install-Samba-on-Ubuntu.png)
][7]
*在 Ubuntu 系统上安装 Samba 套件*
5、 安装包在执行的过程中将会询问你一系列的问题以便完成域控制器的配置。
在第一屏中,你需要为 Kerberos 默认 REALM 输入一个名字。请以**大写**输入你的域环境的名字,然后单击回车继续。
[
![Configuring Kerberos Authentication](http://www.tecmint.com/wp-content/uploads/2016/11/Configuring-Kerberos-Authentication.png)
][8]
*配置 Kerberos 认证服务*
6、 下一步输入你的域中 Kerberos 服务器的主机名。使用和上面相同的名字,这一次使用**小写**,然后单击回车继续。
[
![Set Hostname Kerberos Server](http://www.tecmint.com/wp-content/uploads/2016/11/Set-Hostname-Kerberos-Server.png)
][9]
*设置 Kerberos 服务器的主机名*
7、 最后指定 Kerberos realm 管理服务器的主机名。使用更上面相同的名字,单击回车安装完成。
[
![Set Hostname Administrative Server](http://www.tecmint.com/wp-content/uploads/2016/11/Set-Hostname-Administrative-Server.png)
][10]
*设置管理服务器的主机名*
### 第三步:为你的域环境开启 Samba AD DC 服务
8、 在为域服务器配置 Samba 服务之前,先运行如下命令来停止并禁用所有 Samba 进程。
```
$ sudo systemctl stop samba-ad-dc.service smbd.service nmbd.service winbind.service
$ sudo systemctl disable samba-ad-dc.service smbd.service nmbd.service winbind.service
```
9、 下一步重命名或删除 Samba 原始配置文件。在开启 Samba 服务之前,必须执行这一步操作,因为在开启服务的过程中 Samba 将会创建一个新的配置文件,如果检测到原有的 `smb.conf` 配置文件则会报错。
```
$ sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.initial
```
10、 现在使用 root 权限的账号并接受 Samba 提示的默认选项以交互方式启动域供给domain provision
还有,输入正确的 DNS 服务器地址并且为 Administrator 账号设置强密码。如果使用的是弱密码,则域供给过程会失败。
```
$ sudo samba-tool domain provision --use-rfc2307 interactive
```
[
![Samba Domain Provisioning](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Domain-Provisioning.png)
][11]
*Samba 域供给*
11、 最后使用以下命令重命名或删除 Kerberos 认证在 `/etc` 目录下的主配置文件,并且把 Samba 新生成的 Kerberos 配置文件创建一个软链接指向 `/etc` 目录。
```
$ sudo mv /etc/krb5.conf /etc/krb5.conf.initial
$ sudo ln -s /var/lib/samba/private/krb5.conf /etc/
```
[
![Create Kerberos Configuration](http://www.tecmint.com/wp-content/uploads/2016/11/Create-Kerberos-Configuration.png)
][12]
*创建 Kerberos 配置文件*
12、 启动并开启 Samba 活动目录域控制器后台进程
```
$ sudo systemctl start samba-ad-dc.service
$ sudo systemctl status samba-ad-dc.service
$ sudo systemctl enable samba-ad-dc.service
```
[
![Enable Samba Active Directory Domain Controller](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-Samba-AD-DC.png)
][13]
*开启 Samba 活动目录域控制器服务*
13、 下一步[使用 netstat 命令][14] 来验证活动目录启动的服务是否正常。
```
$ sudo netstat -tulpn | egrep 'smbd|samba'
```
[
![Verify Samba Active Directory](http://www.tecmint.com/wp-content/uploads/2016/11/Verify-Samba-Active-Directory.png)
][15]
*验证 Samba 活动目录*
### 第四步: Samba 最后的配置
14、 此刻Samba 应该跟你想像的一样完全运行正常。Samba 现在实现的域功能级别可以完全跟 Windows AD DC 2008 R2 相媲美。
可以使用 `samba-tool` 工具来验证 Samba 服务是否正常:
```
$ sudo samba-tool domain level show
```
[
![Verify Samba Domain Level](http://www.tecmint.com/wp-content/uploads/2016/11/Verify-Samba-Domain-Level.png)
][16]
*验证 Samba 域服务级别*
15、 为了满足 DNS 本地解析的需求,你可以编辑网卡配置文件,修改 `dns-nameservers` 参数的值为域控制器地址(使用 127.0.0.1 作为本地 DNS 解析地址),并且设置 `dns-search` 参数为你的 realm 值。
```
$ sudo cat /etc/network/interfaces
$ sudo cat /etc/resolv.conf
```
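作为参考,下面是一个假设性的 `/etc/network/interfaces` 配置片段接口名、IP 地址和域名均为示意,请按你的实际环境调整):
```
auto ens33
iface ens33 inet static
        address 192.168.1.14
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 127.0.0.1
        dns-search tecmint.lan
```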
[
![Configure DNS for Samba AD](http://www.tecmint.com/wp-content/uploads/2016/11/Configure-DNS-for-Samba-AD.png)
][17]
*为 Samba 配置 DNS 服务器地址*
设置完成后,重启服务器并检查解析文件是否指向正确的 DNS 服务器地址。
16、 最后通过 `ping` 命令查询结果来检查某些重要的 AD DC 记录是否正常,使用类似下面的命令,替换对应的域名。
```
$ ping -c3 tecmint.lan        # 域名
$ ping -c3 adc1.tecmint.lan   # FQDN
$ ping -c3 adc1               # 主机名
```
[
![Check Samba AD DNS Records](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-AD-DNS-Records.png)
][18]
*检查 Samba AD DNS 记录*
执行下面的一些查询命令来检查 Samba 活动目录域控制器是否正常。
```
$ host -t A tecmint.lan
$ host -t A adc1.tecmint.lan
$ host -t SRV _kerberos._udp.tecmint.lan # UDP Kerberos SRV record
$ host -t SRV _ldap._tcp.tecmint.lan # TCP LDAP SRV record
```
17、 并且通过请求一个域管理员账号的身份来列出缓存的票据信息以验证 Kerberos 认证是否正常。注意域名部分使用大写。
```
$ kinit administrator@TECMINT.LAN
$ klist
```
[
![Check Kerberos Authentication on Domain](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Kerberos-Authentication-on-Domain.png)
][19]
*检查域环境中的 Kerberos 认证是否正确*
至此! 你当前的网络环境中已经完全运行着一个 AD 域控制器,你现在可以把 Windows 或 Linux 系统的主机集成到 Samba AD 中了。
在下一期的文章中将会包括其它 Samba AD 域的主题,比如,在 Samba 命令行下如何管理你的域控制器,如何把 Windows 10 系统主机添加到同一个域环境中,如何使用 RSAT 工具远程管理 Samba AD 域,以及其它重要的主题。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-samba4-active-directory-ubuntu/
作者:[Matei Cezar][a]
译者:[rusking](https://github.com/rusking)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/installation-of-ubuntu-16-04-server-edition/
[2]:http://www.tecmint.com/ubuntu-14-04-server-installation-guide-and-lamp-setup/
[3]:http://www.tecmint.com/set-add-static-ip-address-in-linux/
[4]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-ACL-on-Linux-Filesystem.png
[6]:http://www.tecmint.com/set-hostname-permanently-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Install-Samba-on-Ubuntu.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Configuring-Kerberos-Authentication.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Set-Hostname-Kerberos-Server.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Set-Hostname-Administrative-Server.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Domain-Provisioning.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-Kerberos-Configuration.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-Samba-AD-DC.png
[14]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Verify-Samba-Active-Directory.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Verify-Samba-Domain-Level.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Configure-DNS-for-Samba-AD.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-AD-DNS-Records.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Kerberos-Authentication-on-Domain.png

View File

@ -0,0 +1,391 @@
在 Linux 命令行下管理 Samba4 AD 架构(二)
============================================================
这篇文章包括了管理 Samba4 域控制器架构过程中的一些常用命令,比如添加、移除、禁用或者列出用户及用户组等。
我们也会关注一下如何配置域安全策略以及如何把 AD 用户绑定到本地的 PAM 认证中,以实现 AD 用户能够在 Linux 域控制器上进行本地登录。
#### 要求
- [在 Ubuntu 系统上使用 Samba4 来创建活动目录架构][1]
### 第一步:在命令行下管理
1、 可以通过 `samba-tool` 命令行工具来进行管理,这个工具为域管理工作提供了一个功能强大的管理接口。
通过 `samba-tool` 命令行接口你可以直接管理域用户及用户组、域组策略、域站点DNS 服务、域复制关系和其它重要的域功能。
使用 root 权限的账号,直接输入 `samba-tool` 命令,不要加任何参数选项来查看该工具能实现的所有功能。
```
# samba-tool -h
```
[
![samba-tool - Manage Samba Administration Tool](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png)
][3]
*samba-tool —— Samba 管理工具*
2、 现在,让我们开始使用 `samba-tool` 工具来管理 Samba4 活动目录中的用户。
使用如下命令来创建 AD 用户:
```
# samba-tool user add your_domain_user
```
添加一个用户,包括 AD 可选的一些重要属性,如下所示:
```
--------- review all options ---------
# samba-tool user add -h
# samba-tool user add your_domain_user --given-name=your_name --surname=your_username --mail-address=your_domain_user@tecmint.lan --login-shell=/bin/bash
```
[
![Create User on Samba AD](http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png)
][4]
*在 Samba AD 上创建用户*
3、 可以通过下面的命令来列出所有 Samba AD 域用户:
```
# samba-tool user list
```
[
![List Samba AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png)
][5]
*列出 Samba AD 用户信息*
4、 使用下面的命令来删除 Samba AD 域用户:
```
# samba-tool user delete your_domain_user
```
5、 重置 Samba 域用户的密码:
```
# samba-tool user setpassword your_domain_user
```
6、 启用或禁用 Samba 域用户账号:
```
# samba-tool user disable your_domain_user
# samba-tool user enable your_domain_user
```
7、 同样地可以使用下面的方法来管理 Samba 用户组:
```
--------- review all options ---------
# samba-tool group add -h
# samba-tool group add your_domain_group
```
8、 删除 samba 域用户组:
```
# samba-tool group delete your_domain_group
```
9、 显示所有的 Samba 域用户组信息:
 
```
# samba-tool group list
```
10、 列出指定组下的 Samba 域用户:
```
# samba-tool group listmembers "your_domain group"
```
[
![List Samba Domain Members of Group](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png)
][6]
*列出 Samba 域用户组*
11、 从 Samba 域组中添加或删除某一用户:
```
# samba-tool group addmembers your_domain_group your_domain_user
# samba-tool group removemembers your_domain_group your_domain_user
```
12、 如上面所提到的 `samba-tool` 命令行工具也可以用于管理 Samba 域策略及安全。
查看 samba 域密码设置:
```
# samba-tool domain passwordsettings show
```
[
![Check Samba Domain Password](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png)
][7]
*检查 Samba 域密码*
13、 为了修改 samba 域密码策略,比如密码复杂度,密码失效时长,密码长度,密码重复次数以及其它域控制器要求的安全策略等,可参照如下命令来完成:
```
---------- List all command options ----------
# samba-tool domain passwordsettings -h
```
[
![Manage Samba Domain Password Settings](http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png)
][8]
*管理 Samba 域密码策略*
不要把上图中的密码策略规则用于生产环境中。上面的策略仅仅是用于演示目的。
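如果确实需要调整,可以参考下面这个假设性的示例,使用 `samba-tool domain passwordsettings set` 子命令修改其中几项策略(具体取值仅为示意,请根据你自己的安全要求确定):
```
# samba-tool domain passwordsettings set --complexity=on
# samba-tool domain passwordsettings set --min-pwd-length=10
# samba-tool domain passwordsettings set --history-length=24
# samba-tool domain passwordsettings set --max-pwd-age=60
```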
### 第二步:使用活动目录账号来完成 Samba 本地认证
14、 默认情况下,在 Samba AD DC 环境之外AD 用户是不能本地登录到 Linux 系统的。
为了让活动目录账号也能登录到系统,你必须在 Linux 系统环境中做如下设置,并且要修改 Samba4 AD DC 配置。
首先,打开 Samba 主配置文件,如果以下内容不存在,则添加:
```
$ sudo nano /etc/samba/smb.conf
```
确保以下参数出现在配置文件中:
```
winbind enum users = yes
winbind enum groups = yes
```
[
![Samba Authentication Using Active Directory User Accounts](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png)
][9]
*Samba 通过 AD 用户账号来进行认证*
15、 修改之后使用 `testparm` 工具来验证配置文件没有错误,然后通过如下命令来重启 Samba 服务:
```
$ testparm
$ sudo systemctl restart samba-ad-dc.service
```
[
![Check Samba Configuration for Errors](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png)
][10]
*检查 Samba 配置文件是否报错*
16、 下一步我们需要修改本地 PAM 配置文件,以让 Samba4 活动目录账号能够完成本地认证、开启会话,并且在第一次登录系统时创建一个用户目录。
使用 `pam-auth-update` 命令来打开 PAM 配置提示界面,确保所有的 PAM 选项都已经使用 `[空格]` 键来启用,如下图所示:
完成之后,按 `[Tab]` 键跳转到 OK ,以启用修改。
```
$ sudo pam-auth-update
```
[
![Configure PAM for Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png)
][11]
*为 Samba4 AD 配置 PAM 认证*
[
![Enable PAM Authentication Module for Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png)
][12]
*为 Samba4 AD 用户启用 PAM 认证模块*
17、 现在使用文本编辑器打开 `/etc/nsswitch.conf` 配置文件,在 `passwd``group` 参数的最后面添加 `winbind` 参数,如下图所示:
```
$ sudo vi /etc/nsswitch.conf
```
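修改之后,这两行通常类似下面这样(此处以 Ubuntu 默认的 `compat` 配置为例,仅作示意):
```
passwd:         compat winbind
group:          compat winbind
```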
[
![Add Windbind Service Switch for Samba](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png)
][13]
*为 Samba 服务添加 Winbind Service Switch 设置*
18、 最后编辑 `/etc/pam.d/common-password` 文件,查找下图所示行并删除 `use_authtok` 参数。
该设置确保 AD 用户在通过 Linux 系统本地认证后,可以在命令行下修改他们的密码。有这个参数时,本地认证的 AD 用户不能在控制台下修改他们的密码。
```
password [success=1 default=ignore] pam_winbind.so try_first_pass
```
[
![Allow Samba AD Users to Change Passwords](http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png)
][14]
*允许 Samba AD 用户修改密码*
在每次 PAM 更新安装完成并应用到 PAM 模块,或者你每次执行 `pam-auth-update` 命令后,你都需要删除 `use_authtok` 参数。
19、 Samba4 的二进制文件自带一个内建的 winbindd 进程,并且默认是启用的。
因此,你没必要再次去启用并运行 Ubuntu 系统官方自带的 winbind 服务。
为了防止系统里原来已废弃的 winbind 服务被启动,确保执行以下命令来禁用并停止原来的 winbind 服务。
```
$ sudo systemctl disable winbind.service
$ sudo systemctl stop winbind.service
```
虽然我们不再需要运行原有的 winbind 进程,但是为了安装并使用 wbinfo 工具,我们还得从系统软件库中安装 Winbind 包。
wbinfo 工具可以用来通过 winbindd 进程查询活动目录中的用户和组信息。
以下命令显示了使用 `wbinfo` 命令如何查询 AD 用户及组信息。
```
$ wbinfo -g
$ wbinfo -u
$ wbinfo -i your_domain_user
```
[
![Check Samba4 AD Information ](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png)
][15]
*检查 Samba4 AD 信息*
[
![Check Samba4 AD User Info](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png)
][16]
*检查 Samba4 AD 用户信息*
20、 除了 `wbinfo` 工具外,你也可以使用 `getent` 命令行工具从 Name Service Switch 库中查询活动目录信息库,在 `/etc/nsswitch.conf` 配置文件中有相关描述内容。
通过 grep 命令用管道符从 `getent` 命令过滤结果集,以获取信息库中 AD 域用户及组信息。
```
# getent passwd | grep TECMINT
# getent group | grep TECMINT
```
[
![Get Samba4 AD Details](http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png)
][17]
*查看 Samba4 AD 详细信息*
### 第三步:使用活动目录账号登录 Linux 系统
21、 为了使用 Samba4 AD 用户登录系统,使用 `su -` 命令切换到 AD 用户账号即可。
第一次登录系统后,控制台会有信息提示用户的 home 目录已创建完成,系统路径为 `/home/$DOMAIN/` 之下,名字为用户的 AD 账号名。
使用 `id` 命令来查询其它已登录的用户信息。
```
# su - your_ad_user
$ id
$ exit
```
[
![Check Samba4 AD User Authentication on Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png)
][18]
*检查 Linux 下 Samba4 AD 用户认证结果*
22、 当你成功登入系统后在控制台下输入 `passwd` 命令来修改已登录的 AD 用户密码。
```
$ su - your_ad_user
$ passwd
```
[
![Change Samba4 AD User Password](http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png)
][19]
*修改 Samba4 AD 用户密码*
23、 默认情况下活动目录用户没有可以完成系统管理工作的 root 权限。
要授予 AD 用户 root 权限,你必须把用户名添加到本地 sudo 组中,可使用如下命令完成。
确保你已输入域名、反斜杠和 AD 用户名,并且使用英文单引号括起来,如下所示:
```
# usermod -aG sudo 'DOMAIN\your_domain_user'
```
要检查 AD 用户在本地系统上是否有 root 权限,登录后执行一个命令,比如,使用 sudo 权限执行 `apt-get update` 命令。
```
# su - tecmint_user
$ sudo apt-get update
```
[
![Grant sudo Permission to Samba4 AD User](http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png)
][20]
*授予 Samba4 AD 用户 sudo 权限*
24、 如果你想把活动目录组中的所有账号都授予 root 权限,使用 `visudo` 命令来编辑 `/etc/sudoers` 配置文件,在 root 权限那一行添加如下内容:
```
%DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
```
注意 `/etc/sudoers` 的格式,不要弄乱。
`/etc/sudoers` 配置文件对于 ASCII 引号字符处理的不是很好,因此务必使用 '%' 来标识用户组,使用反斜杠来转义域名后的第一个斜杠,如果你的组名中包含空格(大多数 AD 内建组默认情况下都包含空格)使用另外一个反斜杠来转义空格。并且域的名称要大写。
[
![Give Sudo Access to All Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png)
][21]
*授予所有 Samba4 用户 sudo 权限*
好了,差不多就这些了!管理 Samba4 AD 架构也可以使用 Windows 环境中的其它几个工具,比如 ADUC、DNS 管理器、 GPM 等等,这些工具可以通过安装从 Microsoft 官网下载的 RSAT 软件包来获得。
要通过 RSAT 工具来管理 Samba4 AD DC ,你必须要把 Windows 系统加入到 Samba4 活动目录。这将是我们下一篇文章的重点,在这之前,请继续关注。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line
作者:[Matei Cezar][a]
译者:[rusking](https://github.com/rusking)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://linux.cn/article-8065-1.html
[2]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png

View File

@ -0,0 +1,65 @@
把 SQL Server 迁移到 Linux不如换成 MySQL
============================================================
最近几年,数量庞大的个人和组织放弃 Windows 平台选择 Linux 平台,而且随着人们体验到更多 Linux 的发展,这个数字将会继续增长。在很长的一段时间内, Linux 是网络服务器的领导者,因为大部分的网络服务器都运行在 Linux 之上,这或许是为什么那么多的个人和组织选择迁移的一个原因。
迁移的原因有很多,更强的平台稳定性、可靠性、成本、所有权和安全性等等。随着更多的个人和组织迁移到 Linux 平台MS SQL 服务器数据库管理系统的迁移也有着同样的趋势,首选的是 MySQL ,这是因为 MySQL 的互用性、平台无关性和购置成本低。
有如此多的个人和组织完成了迁移,这是应业务需求而产生的迁移,而不是为了迁移的乐趣。因此,有必要做一个综合可行性和成本效益分析,以了解迁移对于你的业务上的正面和负面影响。
迁移需要基于以下重要因素:
### 对平台的掌控
不像 Windows 那样,你不能完全控制版本发布和修复,而 Linux 则让你在需要修复的时候,真正拥有自行获取并应用修复的灵活性。这一点受到了开发者和安全人员的喜爱,因为他们能在一个安全威胁被确定时立即自行打补丁,不像 Windows ,你只能期望官方尽快发布补丁。
### 跟随大众
目前, 运行在 Linux 平台上的服务器在数量上远超 Windows几乎是全世界服务器数量的四分之三而且这种趋势在最近一段时间内不会改变。因此许多组织正在将他们的服务完全迁移到 Linux 上,而不是同时使用两种平台,同时使用将会增加他们的运营成本。
### 微软没有开放 SQL Server 的源代码
微软宣称他们下一个名为 Denali 的新版 MS SQL Server 将会是一个 Linux 版本,并且不会开放其源代码,这意味着他们仍然使用的是软件授权模式,只是新版本将能在 Linux 上运行而已。这一点将许多乐于接受开源新版本的人拒之门外。
这也没有给那些使用闭源的 Oracle 用户另一个选择,对于使用完全开源的 [MySQL 用户][7]也是如此。
### 节约授权许可证的花费
授权许可证的潜在成本让许多用户很失望。在 Windows 平台上运行 MS SQL 服务器有太多的授权许可证牵涉其中。你需要这些授权许可证:
*   Windows 操作系统
*   MS SQL 服务器
*   特定的数据库工具,例如 SQL 分析工具等
不像 Windows 平台Linux 完全没有高昂的授权花费,因此更能吸引用户。 MySQL 数据库也能免费获取,甚而它提供了像 MS SQL 服务器一样的灵活性,那就更值得选择了。不像那些给 MS SQL 设计的收费工具,大部分的 MySQL 数据库实用程序是免费的。
### 有时候用的是特殊的硬件
因为 Linux 是不同的开发者所开发,并在不断改进中,所以它独立于所运行的硬件之上,并能被广泛使用在不同的硬件平台。然而尽管微软正在努力让 Windows 和 MSSQL 服务器做到硬件无关,但在平台无关上依旧有些限制。
### 支持
有了 Linux、MySQL 和其它的开源软件,获取满足自己特定需求的帮助变得更加简单,因为有不同开发者参与到这些软件的开发过程中。这些开发者或许就在你附近,这样更容易获取帮助。在线论坛也能帮上不少,你能发帖并讨论你所面对的问题。
至于那些商业软件,你只能根据他们的软件协议和时间来获得帮助,有时候他们不能在你的时间范围内给出一个解决方案。
在不同的情况中,迁移到 Linux 都是你最好的选择,加入一个彻底的、稳定可靠的平台来获取优异表现,众所周知,它比 Windows 更健壮。这值得一试。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/
作者:[Tony Branson][a]
译者:[ypingcn](https://github.com/ypingcn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#to-have-control-over-the-platform
[2]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#joining-the-crowd
[3]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#microsoft-isnrsquot-open-sourcing-sql-serverrsquos-code
[4]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#saving-on-license-costs
[5]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#sometimes-the-specific-hardware-being-used
[6]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#support
[7]:http://www.scalearc.com/how-it-works/products/scalearc-for-mysql

View File

@ -0,0 +1,124 @@
如何在 Ubuntu 环境下搭建邮件服务器(一)
============================================================
![mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-stack.jpg?itok=SVMfa8WZ "mail server")
在这个系列的文章中,我们将通过使用 Postfix、Dovecot 和 openssl 这三款工具来为你展示如何在 ubuntu 系统上搭建一个既可靠又易于配置的邮件服务器。
在这个容器和微服务技术日新月异的时代,值得庆幸的是有些事情并没有改变,例如搭建一个 Linux 下的邮件服务器,仍然需要许多步骤才能将各个服务组件耦合在一起,而当你将这些配置好、放在一起之后,它却又非常可靠稳定,不会像微服务那样一睁眼有了,一闭眼又没了。 在这个系列教程中我们将通过使用 Postfix、Dovecot 和 openssl 这三款工具在 ubuntu 系统上搭建一个既可靠又易于配置的邮件服务器。
Postfix 是一个古老又可靠的软件,它比原始的 Unix 系统的 MTA 软件 sendmail 更加容易配置和使用(还有人仍然在用 sendmail 吗?)。 Exim 是 Debian 系统上的默认 MTA 软件,它比 Postfix 更加轻量而且超级容易配置,因此我们在将来的教程中会推出 Exim 的教程。
DovecotLCTT 译注:详情请阅读[维基百科](https://en.wikipedia.org/wiki/Dovecot_(software))和 Courier 是两个非常受欢迎的优秀的 IMAP/POP3 协议的服务器软件Dovecot 更加的轻量并且易于配置。
你必须要保证你的邮件通讯是安全的,因此我们就需要使用到 OpenSSL 这个软件OpenSSL 也提供了一些很好用的工具来测试你的邮件服务器。
为了简单起见,在这一系列的教程中,我们将指导大家安装一个在局域网上的邮件服务器,你应该拥有一个局域网内的域名服务,并确保它是启用且正常工作的,查看这篇“[使用 dnsmasq 为局域网轻松提供 DNS 服务][5]”会有些帮助,然后,你就可以通过注册域名并相应地配置防火墙,来将这台局域网服务器变成互联网可访问邮件服务器。这个过程网上已经有很多很详细的教程了,这里不再赘述,请大家继续跟着教程进行即可。
### 一些术语
让我们先来快速了解一些术语,因为当我们了解了这些术语的时候就能知道这些见鬼的东西到底是什么。 :D
* **MTA**邮件传输代理Mail Transfer Agent基于 SMTP 协议(简单邮件传输协议)的服务端,比如 Postfix、Exim、Sendmail 等。SMTP 服务端彼此之间进行相互通信LCTT 译注 : 详情请阅读[维基百科](https://en.wikipedia.org/wiki/Message_transfer_agent))。
* **MUA** 邮件用户代理Mail User Agent你本地的邮件客户端例如 : Evolution、KMail、Claws Mail 或者 ThunderbirdLCTT 译注 : 例如国内的 Foxmail
* **POP3**邮局协议Post-Office Protocol版本 3将邮件从 SMTP 服务器传输到你的邮件客户端的的最简单的协议。POP 服务端是非常简单小巧的,单一的一台机器可以为数以千计的用户提供服务。
* **IMAP** 交互式消息访问协议Interactive Message Access Protocol许多企业使用这个协议因为邮件可以被保存在服务器上而用户不必担心会丢失消息。IMAP 服务器需要大量的内存和存储空间。
* **TLS**传输层安全Transport Layer Security是 SSLSecure Sockets Layer安全套接层的改良版为 SASL 身份认证提供了加密的传输服务层。
* **SASL**简单身份认证与安全层Simple Authentication and Security Layer用于认证用户。SASL进行身份认证而上面说的 TLS 提供认证数据的加密传输。
* **StartTLS**: 也被称为伺机 TLS 。如果服务器双方都支持 SSL/TLSStartTLS 就会将纯文本连接升级为加密连接TLS 或 SSL。如果有一方不支持加密则使用明文传输。StartTLS 会使用标准的未加密端口 25 SMTP、 110POP3和 143 IMAP而不是对应的加密端口 465SMTP、995POP3 和 993 IMAP
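前面提到 OpenSSL 自带了一些很好用的测试工具,例如可以用 `openssl s_client` 来检查一台邮件服务器是否正确支持 StartTLS下面的主机名仅为示意请换成你自己的服务器
```
$ openssl s_client -connect myserver.mydomain.net:25 -starttls smtp
```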
### 啊,我们仍然有 sendmail
绝大多数的 Linux 版本仍然还保留着 `/usr/sbin/sendmail` 。 这是在那个 MTA 只有一个 sendmail 的古代遗留下来的痕迹。在大多数 Linux 发行版中,`/usr/sbin/sendmail` 会符号链接到你安装的 MTA 软件上。如果你的 Linux 中有它,不用管它,你的发行版会自己处理好的。
### 安装 Postfix
使用 `apt-get install postfix` 来做基本安装时要注意(图 1安装程序会打开一个向导询问你想要搭建的服务器类型你要选择“Internet Site”虽然这里是局域网服务器。它会让你输入完全限定的服务器域名例如 myserver.mydomain.net。对于局域网服务器假设你的域名服务已经正确配置我多次提到这个是因为经常有人在这里出现错误你也可以只使用主机名。
![Postfix](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/postfix-1.png?itok=NJLdtICb "Postfix")
*图 1Postfix 的配置。*
Ubuntu 系统会为 Postfix 创建一个配置文件,并启动三个守护进程 : `master`、`qmgr` 和 `pickup`,这里并没有一个叫 Postfix 的命令或守护进程。LCTT 译注:名为 `postfix` 的命令是管理命令。)
```
$ ps ax
6494 ? Ss 0:00 /usr/lib/postfix/master
6497 ? S 0:00 pickup -l -t unix -u -c
6498 ? S 0:00 qmgr -l -t unix -u
```
你可以使用 Postfix 内置的配置语法检查来测试你的配置文件,如果没有发现语法错误,不会输出任何内容。
```
$ sudo postfix check
[sudo] password for carla:
```
使用 `netstat` 来验证 `postfix` 是否正在监听 25 端口。
```
$ netstat -ant
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp6 0 0 :::25 :::* LISTEN
```
现在让我们再操起古老的 `telnet` 来进行测试 :
```
$ telnet myserver 25
Trying 127.0.1.1...
Connected to myserver.
Escape character is '^]'.
220 myserver ESMTP Postfix (Ubuntu)
EHLO myserver
250-myserver
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
^]
telnet>
```
嘿,我们已经验证了我们的服务器名,而且 Postfix 正在监听 SMTP 的 25 端口而且响应了我们键入的命令。
按下 `^]` 终止连接,返回 telnet。输入 `quit` 来退出 telnet。输出的 ESMTP扩展的 SMTP 250 状态码如下。
LCTT 译注: ESMTP (Extended SMTP),即扩展 SMTP就是对标准 SMTP 协议进行的扩展。详情请阅读[维基百科](https://en.wikipedia.org/wiki/Extended_SMTP)
* **PIPELINING** 允许多个命令流式发出,而不必对每个命令作出响应。
* **SIZE** 表示服务器可接收的最大消息大小。
* **VRFY** 可以告诉客户端某一个特定的邮箱地址是否存在,这通常应该被取消,因为这是一个安全漏洞。
* **ETRN** 适用于非持久互联网连接的服务器。这样的站点可以使用 ETRN 从上游服务器请求邮件投递Postfix 可以配置成延迟投递邮件到 ETRN 客户端。
* **STARTTLS** (详情见上述说明)。
* **ENHANCEDSTATUSCODES**,服务器支持增强型的状态码和错误码。
* **8BITMIME**,支持 8 位 MIME这意味着完整的 ASCII 字符集。最初,原始的 ASCII 是 7 位。
* **DSN**,投递状态通知,用于通知你投递时的错误。
Postfix 的主配置文件是: `/etc/postfix/main.cf`,这个文件是安装程序创建的,可以参考[这个资料][6]来查看完整的 `main.cf` 参数列表, `/etc/postfix/postfix-files` 这个文件描述了 Postfix 完整的安装文件。
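作为参考,下面是一个假设性的、极简的 `main.cf` 片段,列出了几个最常见的参数(主机名和网段均为示意值,实际内容请以安装程序为你生成的配置为准):
```
myhostname = myserver.mydomain.net
mydomain = mydomain.net
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
inet_interfaces = all
mynetworks = 127.0.0.0/8 192.168.0.0/24
```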
下一篇教程我们会讲解 Dovecot 的安装和测试,然后会给我们自己发送一些邮件。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/how-build-email-server-ubuntu-linux
作者:[CARLA SCHRODER][a]
译者:[WangYihang](https://github.com/WangYihang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/postfix-1png
[4]:https://www.linux.com/files/images/mail-stackjpg
[5]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services
[6]:http://www.postfix.org/postconf.5.html

View File

@ -0,0 +1,63 @@
如何在 Apache 中重定向 URL 到另外一台服务器
============================================================
如我们前面两篇文章([使用 mod_rewrite 执行内部重定向][1]和[基于浏览器来显示自定义内容][2])中提到的,在本文中,我们将解释如何在 Apache 中使用 mod_rewrite 模块重定向对已移动到另外一台服务器上的资源的访问。
假设你正在重新设计公司的网站。你已决定将内容和样式HTML 文件、JavaScript 和 CSS存储在一个服务器上将文档存储在另一个服务器上这样可能会更稳健。
**建议阅读:** [5 个提高 Apache Web 服务器性能的提示][3] 。
但是,你希望这个更改对用户是透明的,以便他们仍然能够通过之前的网址访问文档。
在下面的例子中,名为 `assets.pdf` 的文件已从 `192.168.0.100`(主机名:`web`)中的 `/var/www/html` 移动到`192.168.0.101`(主机名:`web2`)中的相同位置。
为了让用户在浏览到 `192.168.0.100/assets.pdf` 时可以访问到此文件,请打开 `192.168.0.100` 上的 Apache 配置文件并添加以下重写规则(或者也可以将以下规则添加到 [.htaccess 文件][4])中:
```
RewriteRule "^(/assets\.pdf$)" "http://192.168.0.101$1" [R,L]
```
其中 `$1` 是占位符,代表与括号中的正则表达式匹配的任何内容。
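下面是一个假设性的配置片段,演示这条规则通常放在哪个上下文中(注意需要先用 `RewriteEngine On` 开启重写引擎,虚拟主机的其余设置仅为示意):
```
<VirtualHost *:80>
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule "^(/assets\.pdf$)" "http://192.168.0.101$1" [R,L]
</VirtualHost>
```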
现在保存更改,不要忘记重新启动 Apache让我们看看当我们打开 `192.168.0.100/assets.pdf`,尝试访问 `assets.pdf` 时会发生什么:
**建议阅读:** [25 个有用的网站 .htaccess 技巧][5]
在下面我们就可以看到,为 `192.168.0.100` 上的 `assets.pdf` 所做的请求实际上是由 `192.168.0.101` 处理的。
```
# tail -n 1 /var/log/apache2/access.log
```
[
![Check Apache Logs](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Apache-Logs.png)
][6]
*检查 Apache 日志*
在本文中,我们讨论了如何对已移动到其他服务器的资源进行重定向。 总而言之,我强烈建议你看看 [mod_rewrite][7] 指南和 [Apache 重定向指南][8],以供将来参考。
一如既往那样,如果您对本文有任何疑虑,请随时使用下面的评论栏回复。 我们期待你的回音!
--------------------------------------------------------------------------------
作者简介Gabriel Cánepa 是来自阿根廷圣路易斯 Villa Mercedes 的 GNU/Linux 系统管理员和 Web 开发人员。 他在一家全球领先的消费品公司工作,非常高兴使用 FOSS 工具来提高他日常工作领域的生产力。
-----------
via: http://www.tecmint.com/redirect-website-url-from-one-server-to-different-server/
作者:[Gabriel Cánepa][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/redirection-with-mod_rewrite-in-apache/
[2]:http://www.tecmint.com/mod_rewrite-redirect-requests-based-on-browser/
[3]:http://www.tecmint.com/apache-performance-tuning/
[4]:http://www.tecmint.com/tag/htaccess/
[5]:http://www.tecmint.com/apache-htaccess-tricks/
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Apache-Logs.png
[7]:http://mod-rewrite-cheatsheet.com/
[8]:https://httpd.apache.org/docs/2.4/rewrite/remapping.html

View File

@ -0,0 +1,241 @@
如何在 Ubuntu 环境下搭建邮件服务器(二)
============================================================
![Dovecot email](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dovecot-email.jpg?itok=tY4veggw "Dovecot email")
本教程的第 2 部分将介绍如何使用 Dovecot 将邮件从 Postfix 服务器移动到用户的收件箱。题图以 [Creative Commons Zero][2] 方式授权发布。
在[第一部分][5]中,我们安装并测试了 Postfix SMTP 服务器。Postfix 或任何 SMTP 服务器都不是一个完整的邮件服务器,因为它所做的只是在 SMTP 服务器之间移动邮件。我们需要 Dovecot 将邮件从 Postfix 服务器移动到用户的收件箱中。
Dovecot 支持两种标准邮件协议IMAPInternet 邮件访问协议)和 POP3邮局协议。 IMAP 服务器会在服务器上保留所有邮件。您的用户可以选择将邮件下载到计算机或仅在服务器上访问它们。 IMAP 对于有多台机器的用户是方便的。但对你而言需要更多的工作,因为你必须确保你的服务器始终可用,而且 IMAP 服务器需要大量的存储和内存。
POP3 是较旧的协议。POP3 服务器可以比 IMAP 服务器服务更多的用户,因为邮件会下载到用户的计算机上。大多数邮件客户端可以选择在服务器上保留一定天数的邮件,因此 POP3 的行为有点像 IMAP。但它毕竟不是 IMAP如果你像使用 IMAP 那样在多台计算机上使用它,常常会重复下载邮件或意外删除邮件。
### 安装 Dovecot
启动你的 Ubuntu 系统并安装 Dovecot
```
$ sudo apt-get install dovecot-imapd dovecot-pop3d
```
它会安装可用的配置,并在完成后自动启动,你可以用 `ps ax | grep dovecot` 确认:
```
$ ps ax | grep dovecot
15988 ? Ss 0:00 /usr/sbin/dovecot
15990 ? S 0:00 dovecot/anvil
15991 ? S 0:00 dovecot/log
```
打开你的 Postfix 配置文件 `/etc/postfix/main.cf`确保配置了maildir 而不是 mbox 的邮件存储方式mbox 是给每个用户一个单一大文件,而 maildir 是每条消息都存储为一个文件。大量的小文件比一个庞大的文件更稳定且易于管理。添加如下两行,第二行告诉 Postfix 你需要 maildir 格式,并且在每个用户的家目录下创建一个 `.Mail` 目录。你可以取任何名字,不一定要是 `.Mail`
```
mail_spool_directory = /var/mail
home_mailbox = .Mail/
```
现在调整你的 Dovecot 配置。首先把原始的 `dovecot.conf` 文件重命名放到一边,因为它会调用存放在 `conf.d` 中的文件,在你刚刚开始学习时把配置放一起更简单些:
```
$ sudo mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot-oldconf
```
现在创建一个新的 `/etc/dovecot/dovecot.conf`
```
disable_plaintext_auth = no
mail_location = maildir:~/.Mail
namespace inbox {
inbox = yes
mailbox Drafts {
special_use = \Drafts
}
mailbox Sent {
special_use = \Sent
}
mailbox Trash {
special_use = \Trash
}
}
passdb {
driver = pam
}
protocols = " imap pop3"
ssl = no
userdb {
driver = passwd
}
```
注意 `mail_location = maildir` 必须和 `main.cf` 中的 `home_mailbox` 参数匹配。保存你的更改并重新加载 Postfix 和 Dovecot 配置:
```
$ sudo postfix reload
$ sudo dovecot reload
```
### 快速导出配置
使用下面的命令来快速查看你的 Postfix 和 Dovecot 配置:
```
$ postconf -n
$ doveconf -n
```
### 测试 Dovecot
现在再次启动 telnet并且给自己发送一条测试消息。粗体显示的是你输入的命令。`studio` 是我服务器的主机名,因此你必须用自己的:
```
$ telnet studio 25
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
220 studio.router ESMTP Postfix (Ubuntu)
EHLO studio
250-studio.router
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250 SMTPUTF8
mail from: tester@test.net
250 2.1.0 Ok
rcpt to: carla@studio
250 2.1.5 Ok
data
354 End data with <CR><LF>.<CR><LF>
Date: November 25, 2016
From: tester
Message-ID: first-test
Subject: mail server test
Hi carla,
Are you reading this? Let me know if you didn't get this.
.
250 2.0.0 Ok: queued as 0C261A1F0F
quit
221 2.0.0 Bye
Connection closed by foreign host.
```
现在请求 Dovecot 来取回你的新消息,使用你的 Linux 用户名和密码登录:
```
$ telnet studio 110
Trying 127.0.0.1...
Connected to studio.
Escape character is '^]'.
+OK Dovecot ready.
user carla
+OK
pass password
+OK Logged in.
stat
+OK 2 809
list
+OK 2 messages:
1 383
2 426
.
retr 2
+OK 426 octets
Return-Path: <tester@test.net>
X-Original-To: carla@studio
Delivered-To: carla@studio
Received: from studio (localhost [127.0.0.1])
by studio.router (Postfix) with ESMTP id 0C261A1F0F
for <carla@studio>; Wed, 30 Nov 2016 17:18:57 -0800 (PST)
Date: November 25, 2016
From: tester@studio.router
Message-ID: first-test
Subject: mail server test
Hi carla,
Are you reading this? Let me know if you didn't get this.
.
quit
+OK Logging out.
Connection closed by foreign host.
```
花一点时间比较第一个例子中输入的消息和第二个例子中接收的消息。 返回地址和日期是很容易伪造的,但 Postfix 不会被愚弄。大多数邮件客户端默认显示一个最小的标头集,但是你需要读取完整的标头才能查看真实的回溯。
你也可以在你的 `~/.Mail/cur` 目录中查看你的邮件,它们是普通文本文件,我已经有两封测试邮件:
```
$ ls .Mail/cur/
1480540325.V806I28e0229M351743.studio:2,S
1480555224.V806I28e000eM41463.studio:2,S
```
### 测试 IMAP
我们的 Dovecot 同时启用了 POP3 和 IMAP 服务,因此让我们使用 telnet 来测试 IMAP。
```
$ telnet studio imap2
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS
ID ENABLE IDLE AUTH=PLAIN] Dovecot ready.
A1 LOGIN carla password
A1 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS
ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS
THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT
CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE
QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS
BINARY MOVE SPECIAL-USE] Logged in
A2 LIST "" "*"
* LIST (\HasNoChildren) "." INBOX
A2 OK List completed (0.000 + 0.000 secs).
A3 EXAMINE INBOX
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
* OK [PERMANENTFLAGS ()] Read-only mailbox.
* 2 EXISTS
* 0 RECENT
* OK [UIDVALIDITY 1480539462] UIDs valid
* OK [UIDNEXT 3] Predicted next UID
* OK [HIGHESTMODSEQ 1] Highest
A3 OK [READ-ONLY] Examine completed (0.000 + 0.000 secs).
A4 logout
* BYE Logging out
A4 OK Logout completed.
Connection closed by foreign host
```
### Thunderbird 邮件客户端
图 1 中的屏幕截图显示了我局域网上另一台主机上的图形邮件客户端中的邮件。
![thunderbird mail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/thunderbird-mail.png?itok=IkWK5Ti_ "thunderbird mail")
*图1 Thunderbird mail*
此时,你已有一个可以工作的 IMAP 和 POP3 邮件服务器,并且你也知道该如何测试你的服务器。你的用户可以在他们设置邮件客户端时选择要使用的协议。如果您只想支持一个邮件协议,那么只需要在您的 Dovecot 配置中留下你要的协议名字。
然而,这还远远没有完成。这是一个非常简单、没有加密的、大门敞开的安装。它也只适用于与邮件服务器在同一系统上的用户。这是不可扩展的,并具有一些安全风险,例如没有密码保护。 我们会在[下篇][6]了解如何创建与系统用户分开的邮件用户,以及如何添加加密。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-2
作者:[CARLA SCHRODER][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/thunderbird-mailpng
[4]:https://www.linux.com/files/images/dovecot-emailjpg
[5]:https://linux.cn/article-8071-1.html
[6]:https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-3

View File

@ -0,0 +1,215 @@
RHEL (Red Hat Enterprise Linux红帽企业级 Linux) 7.3 安装指南
=====
RHEL 是由红帽公司开发维护的开源 Linux 发行版,可以运行在所有的主流 CPU 架构中。一般来说,多数的 Linux 发行版都可以免费下载、安装和使用,但对于 RHEL只有在购买了订阅之后你才能下载和使用否则只能获取到试用期为 30 天的评估版。
本文会告诉你如何在你的机器上安装最新的 RHEL 7.3,当然了,使用的是期限 30 天的评估版 ISO 镜像,请自行到 [https://access.redhat.com/downloads][1] 下载。
如果你更喜欢使用 CentOS请移步 [CentOS 7.3 安装指南][2]。
欲了解 RHEL 7.3 的新特性,请参考 [版本更新日志][3]。
#### 先决条件
本次安装是在支持 UEFI 的虚拟机固件上进行的。为了完成安装,你首先需要进入主板的 EFI 固件更改启动顺序为已刻录好 ISO 镜像的对应设备DVD 或者 U 盘)。
如果是通过 USB 介质来安装,你需要确保这个可以启动的 USB 设备是用支持 UEFI 兼容的工具来创建的,比如 [Rufus][4],它能将你的 USB 设备设置为 UEFI 固件所需要的 GPT 分区方案。
为了进入主板的 UEFI 固件设置面板,你需要在电脑初始化 POST (Power on Self Test通电自检) 的时候按下一个特殊键。
关于该设置需要用到特殊键你可以向主板厂商进行咨询获取。通常来说在笔记本上可能是这些键F2、F9、F10、F11 或者 F12也可能是 Fn 与这些键的组合。
此外,更改 UEFI 启动顺序前,你要确保快速启动选项 (QuickBoot/FastBoot) 和 安全启动选项 (Secure Boot) 处于关闭状态,这样才能在 EFI 固件中运行 RHEL。
有一些 UEFI 固件主板模型有这样一个选项,它让你能够以传统的 BIOS 或者 EFI CSM (Compatibility Support Module兼容支持模块) 两种模式来安装操作系统,其中 CSM 是主板固件中一个用来模拟 BIOS 环境的模块。这种类型的安装需要 U 盘以 MBR 而非 GPT 来进行分区。
此外,一旦在你的 UEFI 机器中以这两种模式之一成功安装好 RHEL 或者类似的 OS那么安装好的系统就必须以你安装时使用的模式来运行。而且你也不能够从 UEFI 模式变更到传统的 BIOS 模式,反之亦然。强行变更这两种模式会让你的系统变得不稳定、无法启动,同时还需要重新安装系统。
### RHEL 7.3 安装指南
1、 首先下载并使用合适的工具刻录 RHEL 7.3 ISO 镜像到 DVD 或者创建一个可启动的 U 盘。
给机器加电启动,把 DVD/U 盘放入合适驱动器中,并根据你的 UEFI/BIOS 类型,按下特定的启动键变更启动顺序来启动安装介质。
当安装介质被检测到之后,它会启动到 RHEL 的 grub 菜单。选择“Install red hat Enterprise Linux 7.3” 并按回车继续。
[![RHEL 7.3 Boot Menu](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg)][5]
*RHEL 7.3 启动菜单*
2、 之后屏幕就会显示 RHEL 7.3 欢迎界面。在该界面选择安装过程中使用的语言 LCTT 译注:这里选的只是安装过程中使用的语言,之后的安装中才会设置最终使用的系统语言环境) ,然后按回车到下一界面。
[![Select RHEL 7.3 Language](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png)][6]
*选择 RHEL 7.3 安装过程使用的语言*
3、 下一界面中显示的是安装 RHEL 时你需要设置的所有事项的总体概览。首先点击日期和时间 (DATE & TIME) 并在地图中选择你的设备所在地区。
点击最上面的完成 (Done) 按钮来保持你的设置,并进行下一步系统设置。
[![RHEL 7.3 Installation Summary](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png)][7]
*RHEL 7.3 安装概览*
[![Select RHEL 7.3 Date and Time](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png)][8]
*选择 RHEL 7.3 日期和时间*
4、 接下来就是配置你的键盘keyboard布局并再次点击完成 (Done) 按钮返回安装主菜单。
[![Configure Keyboard Layout](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png)][9]
*配置键盘布局*
5、 紧接着选择你使用的语言支持language support并点击完成 (Done),然后进行下一步。
[![Choose Language Support](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png)][10]
*选择语言支持*
6、 安装源Installation Source保持默认就好因为本例中我们使用本地安装 (DVD/USB 镜像)然后选择要安装的软件集Software Selection
此处你可以选择基本环境 (base environment) 和附件 (Add-ons) 。由于 RHEL 常用作 Linux 服务器最小化安装Minimal Installation对于系统管理员来说则是最佳选择。
对于生产环境来说,这也是官方极力推荐的安装方式,因为我们只需要在 OS 中安装极少量软件就好了。
这也意味着高安全性、可伸缩性以及占用极少的磁盘空间。同时,通过购买订阅 (subscription) 或使用 DVD 镜像源,这里列出的其它环境和附件都是可以在命令行中很容易地安装的。
[![RHEL 7.3 Software Selection](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png)][11]
*RHEL 7.3 软件集选择*
7、 万一你想要安装预定义的基本环境之一比方说 Web 服务器、文件 & 打印服务器、架构服务器、虚拟化主机、带 GUI 的服务器等,直接点击选择它们,然后在右边的框选择附件,最后点击完成 (Done) 结束这一步操作即可。
[![Select Server with GUI on RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png)][12]
*选择带 GUI 的服务器*
8、 在接下来点击安装目标 (Installation Destination),这个步骤要求你为将要安装的系统进行分区、格式化文件系统并设置挂载点。
最安全的做法就是让安装器自动配置硬盘分区,这样会创建 Linux 系统所有需要用到的基本分区 (在 LVM 中创建 `/boot`、`/boot/efi`、`/(root)` 以及 `swap` 等分区),并格式化为 RHEL 7.3 默认的 XFS 文件系统。
请记住:如果安装过程是从 UEFI 固件中启动的,那么硬盘的分区表则是 GPT 分区方案。否则,如果你以 CSM 或传统 BIOS 来启动,硬盘的分区表则使用老旧的 MBR 分区方案。
假如不喜欢自动分区,你也可以选择配置你的硬盘分区表,手动创建自己需要的分区。
不论如何,本文推荐你选择自动配置分区。最后点击完成 (Done) 继续下一步。
[![Choose RHEL 7.3 Installation Drive](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png)][13]
*选择 RHEL 7.3 的安装硬盘*
9、 下一步是禁用 Kdump 服务然后配置网络。
[![Disable Kdump Feature](http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png)][14]
*禁用 Kdump 特性*
10、 在网络和主机名Network and Hostname设置你机器使用的主机名和一个描述性名称同时拖动 Ethernet 开关按钮到 `ON` 来启用网络功能。
如果你在自己的网络中有一个 DHCP 服务器,那么网络 IP 设置会自动获取和使用。
[![Configure Network Hostname](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png)][15]
*配置网络主机名称*
11、 如果要为网络接口设置静态 IP点击配置 (Configure) 按钮,然后手动设置 IP如下方截图所示。
设置好网络接口的 IP 地址之后,点击保存 (Save) 按钮,最后切换一下网络接口的 `OFF` 和 `ON` 状态,以应用刚刚设置的静态 IP。
最后,点击完成 (Done) 按钮返回到安装设置主界面。
[![Configure Network IP Address](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png)][16]
*配置网络 IP 地址*
12、 最后在安装配置主界面需要你配置的最后一项就是安全策略配置Security Policy文件了。选择并应用默认的Default安全策略然后点击完成 (Done) 返回主界面。
回顾所有的安装设置项并点击开始安装 (Begin Installation) 按钮来启动安装过程,这个过程启动之后,你就没有办法停止它了。
[![Apply Security Policy for RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png)][17]
*为 RHEL 7.3 启用安全策略*
[![Begin Installation of RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png)][18]
*开始安装 RHEL 7.3*
13、 在安装过程中你的显示器会出现用户设置 (User Settings)。首先点击 Root 密码 (Root Password) 为 root 账户设置一个高强度密码。
[![Configure User Settings](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png)][19]
*配置用户选项*
[![Set Root Account Password](http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png)][20]
*设置 Root 账户密码*
14、 最后创建一个新用户通过选中使该用户成为管理员 (Make this user administrator) 为新建的用户授权 root 权限。同时还要为这个账户设置一个高强度密码,点击完成 (Done) 返回用户设置菜单,就可以等待安装过程完成了。
[![Create New User Account](http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png)][21]
*创建新用户账户*
[![RHEL 7.3 Installation Process](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png)][22]
*RHEL 7.3 安装过程*
15、 安装过程结束并成功安装后弹出或拔掉 DVD/USB 设备,重启机器。
[![RHEL 7.3 Installation Complete](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png)][23]
*RHEL 7.3 安装完成*
[![Booting Up RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png)][24]
*启动 RHEL 7.3*
至此,安装完成。为了后期一直使用 RHEL你需要从 Red Hat 消费者门户购买一个订阅,然后在命令行 [使用订阅管理器来注册你的 RHEL 系统][25]。
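注册过程大致类似下面这几条命令(用户名和密码仅为示意,具体参数请以红帽官方文档为准):
```
# subscription-manager register --username=your_rhn_user --password=your_rhn_pass
# subscription-manager attach --auto
# yum repolist
```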
------------------
作者简介:
Matei Cezar
![](http://2.gravatar.com/avatar/be16e54026c7429d28490cce41b1e157?s=128&d=blank&r=g)
我是一个终日沉溺于电脑的家伙,对开源的 Linux 软件非常着迷,有着 4 年 Linux 桌面发行版、服务器和 bash 编程经验。
---------------------------------------------------------------------
via: http://www.tecmint.com/red-hat-enterprise-linux-7-3-installation-guide/
作者:[Matei Cezar][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://access.redhat.com/downloads
[2]:https://linux.cn/article-8048-1.html
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/7.3_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.3_Release_Notes-Overview.html
[4]:https://rufus.akeo.ie/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png
[23]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png
[25]:http://www.tecmint.com/enable-redhat-subscription-reposiories-and-updates-for-rhel-7/

View File

@ -0,0 +1,225 @@
在 Ubuntu 中用 UFW 配置防火墙
============================================================
UFW即简单防火墙uncomplicated firewall是 Arch Linux、Debian 和 Ubuntu 中用来管理 iptables 防火墙规则的前端。 UFW 通过命令行使用(尽管它有可用的 GUI它的目的是使防火墙配置简单即不复杂uncomplicated
![How to Configure a Firewall with UFW](https://www.linode.com/docs/assets/ufw_tg.png "How to Configure a Firewall with UFW")
### 开始之前
1、 熟悉我们的[入门][1]指南,并完成设置服务器主机名和时区的步骤。
2、 本指南将尽可能使用 `sudo`。请先完成[保护你的服务器][2]指南中创建标准用户帐户、强化 SSH 访问和移除不必要网络服务的章节,**但不要**跟着做其中创建防火墙的部分,因为本指南介绍的是使用 UFW它是不同于直接使用 iptables 的另外一种控制防火墙的方法。
3、 更新系统
**Arch Linux**
```
sudo pacman -Syu
```
**Debian / Ubuntu**
```
sudo apt-get update && sudo apt-get upgrade
```
### 安装 UFW
UFW 默认包含在 Ubuntu 中,但在 Arch 和 Debian 中需要安装。 Debian 将自动启用 UFW 的 systemd 单元,并使其在重新启动时启动,但 Arch 不会。 这与告诉 UFW 启用防火墙规则不同,因为使用 systemd 或者 upstart 启用 UFW 仅仅是告知 init 系统打开 UFW 守护程序。
默认情况下UFW 的规则集为空,因此即使守护程序正在运行,也不会强制执行任何防火墙规则。 强制执行防火墙规则集的部分[在下面][3]。
#### Arch Linux
1、 安装 UFW
```
sudo pacman -S ufw
```
2、 启动并启用 UFW 的 systemd 单元:
```
sudo systemctl start ufw
sudo systemctl enable ufw
```
#### Debian / Ubuntu
1、 安装 UFW
```
sudo apt-get install ufw
```
### 使用 UFW 管理防火墙规则
#### 设置默认规则
大多数系统只需要打开少量的端口接受传入连接,并且关闭所有剩余的端口。 从一个简单的规则基础开始,`ufw default`命令可以用于设置对传入和传出连接的默认响应动作。 要拒绝所有传入并允许所有传出连接,那么运行:
```
sudo ufw default allow outgoing
sudo ufw default deny incoming
```
`ufw default` 也允许使用 `reject` 参数。
> 警告:
> 除非明确设置允许规则,否则配置默认 `deny``reject` 规则会锁定你的服务器。确保在应用默认 `deny``reject` 规则之前,已按照下面的部分配置了 SSH 和其他关键服务的允许规则。
#### 添加规则
可以有两种方式添加规则:用**端口号**或者**服务名**表示。
要允许 SSH 的 22 端口的传入和传出连接,你可以运行:
```
sudo ufw allow ssh
```
你也可以运行:
```
sudo ufw allow 22
```
相似的,要在特定端口(比如 111`deny` 流量,你需要运行:
```
sudo ufw deny 111
```
为了更好地调整你的规则,你也可以允许基于 TCP 或者 UDP 的包。下面例子会允许 80 端口的 TCP 包:
```
sudo ufw allow 80/tcp
sudo ufw allow http/tcp
```
这个会允许 1725 端口上的 UDP 包:
```
sudo ufw allow 1725/udp
```
#### 高级规则
除了基于端口的允许或阻止UFW 还允许您按照 IP 地址、子网和 IP 地址/子网/端口的组合来允许/阻止。
允许从一个 IP 地址连接:
```
sudo ufw allow from 123.45.67.89
```
允许特定子网的连接:
```
sudo ufw allow from 123.45.67.89/24
```
允许特定 IP/ 端口的组合:
```
sudo ufw allow from 123.45.67.89 to any port 22 proto tcp
```
`proto tcp` 可以删除或者根据你的需求改成 `proto udp`,所有例子的 `allow` 都可以根据需要变成 `deny`
#### 删除规则
要删除一条规则,在规则的前面加上 `delete`。如果你希望不再允许 HTTP 流量,你可以运行:
```
sudo ufw delete allow 80
```
删除规则同样可以使用服务名。
### 编辑 UFW 的配置文件
虽然可以通过命令行添加简单的规则,但仍有可能需要添加或删除更高级或特定的规则。 在运行通过终端输入的规则之前UFW 将运行一个文件 `before.rules`它允许回环接口、ping 和 DHCP 等服务。要添加或改变这些规则,编辑 `/etc/ufw/before.rules` 这个文件。 同一目录中的 `before6.rules` 文件用于 IPv6 。
还存在一个 `after.rules` 和 `after6.rules` 文件,用于添加在 UFW 运行你通过命令行输入的规则之后需要添加的任何规则。
还有一个配置文件位于 `/etc/default/ufw`。 从此处可以禁用或启用 IPv6可以设置默认规则并可以设置 UFW 以管理内置防火墙链。
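该文件的内容大致类似下面几行(此处以 Ubuntu 的默认值为例,仅作示意):
```
IPV6=yes
DEFAULT_INPUT_POLICY="DROP"
DEFAULT_OUTPUT_POLICY="ACCEPT"
DEFAULT_FORWARD_POLICY="DROP"
MANAGE_BUILTINS=no
```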
### UFW 状态
你可以在任何时候使用命令:`sudo ufw status` 查看 UFW 的状态。这会显示所有规则列表,以及 UFW 是否处于激活状态:
```
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
80/tcp ALLOW Anywhere
443 ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
```
### 启用防火墙
在你选好规则之后,初次运行 `ufw status` 的输出可能仍为 `Status: inactive`。要启用 UFW 并强制执行防火墙规则,请运行:
```
sudo ufw enable
```
相似地,禁用 UFW 规则:
```
sudo ufw disable
```
> UFW 会继续运行,并且在下次启动时会再次启动。
### 日志记录
你可以用下面的命令启动日志记录:
```
sudo ufw logging on
```
可以通过运行 `sudo ufw logging low|medium|high` 设置日志级别,可以选择 `low`、 `medium` 或者 `high`。默认级别是 `low`
常规日志类似于下面这样,位于 `/var/log/ufw.log`
```
Sep 16 15:08:14 <hostname> kernel: [UFW BLOCK] IN=eth0 OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:00:00 SRC=123.45.67.89 DST=987.65.43.21 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=8475 PROTO=TCP SPT=48247 DPT=22 WINDOW=1024 RES=0x00 SYN URGP=0
```
前面的值列出了你的服务器的日期、时间、主机名。剩下的重要信息包括:
* **[UFW BLOCK]**:这是记录事件的描述开始的位置。在此例中,它表示阻止了连接。
* **IN**:如果它包含一个值,那么代表该事件是传入事件
* **OUT**:如果它包含一个值,那么代表事件是传出事件
* **MAC**:目的地和源 MAC 地址的组合
* **SRC**:包源的 IP
* **DST**:包目的地的 IP
* **LEN**:数据包长度
* **TTL**:数据包 TTL或称为 time to live。 在找到目的地之前,它将在路由器之间跳跃,直到它过期。
* **PROTO**:数据包的协议
* **SPT**:包的源端口
* **DPT**:包的目标端口
* **WINDOW**:发送方可以接收的数据包的大小
* **SYN URGP**:指示是否需要三次握手。 `0` 表示不需要。
--------------------------------------------------------------------------------
via: https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
作者:[Linode][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
[1]:https://www.linode.com/docs/getting-started
[2]:https://www.linode.com/docs/security/securing-your-server
[3]:http://localhost:4567/docs/security/firewalls/configure-firewall-with-ufw#enable-the-firewall

View File

@ -0,0 +1,161 @@
sshpass一个很棒的免交互 SSH 登录工具,但不要用在生产服务器上
============================================================
在大多数情况下Linux 系统管理员使用 SSH 登录到远程 Linux 服务器时,要么是通过密码,要么是[无密码 SSH 登录][1]或基于密钥的 SSH 身份验证。
如果你想自动在 SSH 登录提示符中提供**密码**和**用户名**怎么办?这时 **sshpass** 就可以帮到你了。
sshpass 是一个简单、轻量级的命令行工具,通过它我们能够向命令提示符本身提供密码(非交互式密码验证),这样就可以通过 [cron 调度器][2]执行自动化的 shell 脚本进行备份。
ssh 会直接使用 TTY 访问,以确保密码是用户从键盘输入的。 sshpass 则在一个专门的 tty 中运行 ssh让 ssh 误以为密码是由交互用户输入的。
重要:使用 **sshpass** 是最不安全的,因为所有系统上的用户在命令行中通过简单的 “**ps**” 命令就可看到密码。因此,如果必要,比如说在生产环境,我强烈建议使用 [SSH 无密码身份验证][3]。
### 在 Linux 中安装 sshpass
在基于 **RedHat/CentOS** 的系统中,首先需要[启用 Epel 仓库][4]并使用 [yum 命令][5]安装它。
```
# yum install sshpass
# dnf install sshpass [Fedora 22 及以上版本]
```
在 Debian/Ubuntu 和它的衍生版中,你可以使用 [apt-get 命令][6]来安装。
```
$ sudo apt-get install sshpass
```
另外,你也可以从最新的源码安装 `sshpass`,首先下载源码并从 tar 文件中解压出内容:
```
$ wget http://sourceforge.net/projects/sshpass/files/latest/download -O sshpass.tar.gz
$ tar -xvf sshpass.tar.gz
$ cd sshpass-1.06
$ ./configure
# sudo make install
```
### 如何在 Linux 中使用 sshpass
**sshpass** 与 **ssh** 一起使用,使用下面的命令可以查看 `sshpass` 的使用选项的完整描述:
```
$ sshpass -h
```
下面为显示的 sshpass 帮助内容:
```
Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters
-f filename Take password to use from file
-d number Use number as file descriptor for getting password
-p password Provide password as argument (security unwise)
-e Password is passed as env-var "SSHPASS"
With no parameters - password will be taken from stdin
-h Show help (this screen)
-V Print version information
At most one of -f, -d, -p or -e should be used
```
正如我之前提到的,**sshpass** 在用于脚本时才更可靠及更有用,请看下面的示例命令。
使用用户名和密码登录到远程 Linux ssh 服务器10.42.0.1),并[检查文件系统磁盘使用情况][7],如图所示。
```
$ sshpass -p 'my_pass_here' ssh aaronkilik@10.42.0.1 'df -h'
```
**重要提示**:此处,在命令行中提供了密码,这是不安全的,不建议使用此选项。
[
![sshpass - Linux Remote Login via SSH](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Linux-Remote-Login.png)
][8]
*sshpass 使用 SSH 远程登录 Linux*
但是,为了防止在屏幕上显示密码,可以使用 `-e` 标志,并将密码作为 SSHPASS 环境变量的值输入,如下所示:
```
$ export SSHPASS='my_pass_here'
$ echo $SSHPASS
$ sshpass -e ssh aaronkilik@10.42.0.1 'df -h'
```
[
![sshpass - Hide Password in Prompt](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Hide-Password-in-Prompt.png)
][9]
*sshpass 在终端中隐藏密码*
**注意:**在上面的示例中,`SSHPASS` 环境变量仅用于临时目的,并将在重新启动后删除。
要永久设置 `SSHPASS` 环境变量,打开 `/etc/profile` 文件,并在文件开头输入 `export` 语句:
```
export SSHPASS='my_pass_here'
```
保存文件并退出,接着运行下面的命令使更改生效:
```
$ source /etc/profile
```
另外,也可以使用 `-f` 标志,并把密码放在一个文件中。 这样,您可以从文件中读取密码,如下所示:
```
$ sshpass -f password_filename ssh aaronkilik@10.42.0.1 'df -h'
```
[
![sshpass - Supply Password File to Login](http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Provide-Password-File.png)
][10]
*sshpass 在登录时提供密码文件*
你也可以使用 `sshpass` [通过 scp 传输文件][11]或者 [rsync 备份/同步文件][12],如下所示:
```
------- Transfer Files Using SCP -------
$ sshpass -p 'my_pass_here' scp -r /var/www/html/example.com aaronkilik@10.42.0.1:/var/www/html
------- Backup or Sync Files Using Rsync -------
$ rsync --rsh="sshpass -p 'my_pass_here' ssh -l aaronkilik" 10.42.0.1:/data/backup/ /backup/
```
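结合文章开头提到的 cron 自动化备份场景下面给出一个假设性的脚本和 crontab 条目示例路径、IP 均为示意,密码文件务必设置为仅属主可读):
```
$ cat /home/aaronkilik/backup.sh
#!/bin/bash
# 从密码文件读取密码,每天通过 rsync 同步远程备份目录(示例)
sshpass -f /home/aaronkilik/.sshpass rsync -a -e ssh aaronkilik@10.42.0.1:/data/backup/ /backup/
$ chmod 600 /home/aaronkilik/.sshpass
$ chmod +x /home/aaronkilik/backup.sh
$ crontab -l
0 2 * * * /home/aaronkilik/backup.sh
```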
更多的用法,建议阅读 `sshpass` 的 man 页面,输入:
```
$ man sshpass
```
在本文中,我们解释了 `sshpass` 是一个非交互式密码验证的简单工具。 虽然这个工具可能是有帮助的,但还是强烈建议使用更安全的 ssh 公钥认证机制。
请在下面的评论栏写下任何问题或评论,以便可以进一步讨论。
--------------------------------------------------------------------------------
作者简介Aaron Kili 是一位 Linux 和 F.O.S.S 爱好者,未来的 Linux 系统管理员web 开发人员, 还是 TecMint 原创作者,热爱电脑工作,并乐于分享知识。
-----------
via: http://www.tecmint.com/sshpass-non-interactive-ssh-login-shell-script-ssh-password/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
[3]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[4]:https://linux.cn/article-2324-1.html
[5]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[6]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[7]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Linux-Remote-Login.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Hide-Password-in-Prompt.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/sshpass-Provide-Password-File.png
[11]:http://www.tecmint.com/scp-commands-examples/
[12]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/

View File

@ -0,0 +1,412 @@
LXD 2.0 系列(四):资源控制
======================================
这是 [LXD 2.0 系列介绍文章][0]的第四篇。
因为 LXD 容器管理有很多命令,因此这篇文章会很长。 如果你想要快速地浏览这些相同的命令,你可以[尝试下我们的在线演示][1]
![](https://linuxcontainers.org/static/img/containers.png)
### 可用资源限制
LXD 提供了各种资源限制。其中一些与容器本身相关如内存配额、CPU 限制和 I/O 优先级。而另外一些则与特定设备相关,如 I/O 带宽或磁盘用量限制。
与所有 LXD 配置一样,资源限制可以在容器运行时动态更改。某些可能无法启用,例如,如果设置的内存值小于当前内存用量,但 LXD 将会试着设置并且报告失败。
所有的限制也可以通过配置文件继承,在这种情况下每个受影响的容器将受到该限制的约束。也就是说,如果在默认配置文件中设置 `limits.memory=256MB`,则使用默认配置文件(通常是全都使用)的每个容器的内存限制为 256MB。
我们不支持资源限制池,将其中的限制由一组容器共享,因为我们没有什么好的方法通过现有的内核 API 实现这些功能。
#### 磁盘
这或许是最需要和最明显的需求。只需设置容器文件系统的大小限制,并对容器强制执行。
LXD 确实可以让你这样做!
不幸的是,这比它听起来复杂得多。 Linux 没有基于路径的配额,而大多数文件系统只有基于用户和组的配额,这对容器没有什么用处。
如果你正在使用 ZFS 或 btrfs 存储后端,这意味着现在 LXD 只能支持磁盘限制。也有可能为 LVM 实现此功能,但这取决于与它一起使用的文件系统,并且如果结合实时更新那会变得棘手起来,因为并不是所有的文件系统都允许在线增长,而几乎没有一个允许在线收缩。
#### CPU
当涉及到 CPU 的限制,我们支持 4 种不同的东西:
* 只给我 X 个 CPU 核心
在这种模式下,你让 LXD 为你选择一组核心,然后为更多的容器和 CPU 的上线/下线提供负载均衡。
容器只看到这个数量的 CPU 核心。
* 给我一组特定的 CPU 核心例如核心1、3 和 5
类似于第一种模式,但是不会做负载均衡,你会被限制在那些核心上,无论它们有多忙。
* 给我你所拥有的 20% 的处理能力
在这种模式下,你可以看到所有的 CPU但调度程序将限制你使用 20% 的 CPU 时间,但这只有在负载状态才会这样!所以如果系统不忙,你的容器可以跑得很欢。而当其他的容器也开始使用 CPU 时,它会被限制用量。
* 每测量 200ms给我 50ms并且不超过
此模式与上一个模式类似,你可以看到所有的 CPU但这一次无论系统可能是多么空闲你只能使用你设置的极限时间下的尽可能多的 CPU 时间。在没有过量使用的系统上,这可使你可以非常整齐地分割 CPU并确保这些容器的持续性能。
另外还可以将前两个中的一个与最后两个之一相结合,即请求一组 CPU然后进一步限制这些 CPU 的 CPU 时间。
除此之外,我们还有一个通用的优先级调节方式,可以告诉调度器当你处于负载状态时,两个争夺资源的容器谁会取得胜利。
#### 内存
内存听起来很简单,就是给我多少 MB 的内存!
它绝对可以那么简单。 我们支持这种限制以及基于百分比的请求,比如给我 10% 的主机内存!
另外我们在上层支持一些额外的东西。 例如,你可以选择在每个容器上打开或者关闭 swap如果打开还可以设置优先级以便你可以选择哪些容器先将内存交换到磁盘
内存限制默认是“hard”。 也就是说,当内存耗尽时,内核将会开始杀掉你的那些进程。
或者你可以将强制策略设置为“soft”在这种情况下只要没有别的进程的情况下你将被允许使用尽可能多的内存。一旦别的进程想要这块内存你将无法分配任何内存直到你低于你的限制或者主机内存再次有空余。
#### 网络 I/O
网络 I/O 可能是我们看起来最简单的限制,但是相信我,实现真的不简单!
我们支持两种限制。 第一个是对网络接口的速率限制。你可以设置入口和出口的限制或者只是设置“最大”限制然后应用到出口和入口。这个只支持“桥接”和“p2p”类型接口。
第二种是全局网络 I/O 优先级,仅当你的网络接口趋于饱和的时候再使用。
#### 块 I/O
我把最古怪的放在最后。对于用户看起来它可能简单,但有一些情况下,它的结果并不会和你的预期一样。
我们在这里支持的基本上与我在网络 I/O 中描述的相同。
你可以直接设置磁盘的读写 IO 的频率和速率,并且有一个全局的块 I/O 优先级,它会通知 I/O 调度程序更倾向哪个。
古怪的是如何设置以及在哪里应用这些限制。不幸的是,我们用于实现这些功能的底层使用的是完整的块设备。这意味着我们不能为每个路径设置每个分区的 I/O 限制。
这也意味着当使用可以支持多个块设备映射到指定的路径(带或者不带 RAID的 ZFS 或 btrfs 时,我们并不知道这个路径是哪个块设备提供的。
这意味着,完全有可能,实际上确实有可能,容器使用的多个磁盘挂载点(绑定挂载或直接挂载)可能来自于同一个物理磁盘。
这就使限制变得很奇怪。为了使限制生效LXD 具有猜测给定路径所对应块设备的逻辑,这其中包括询问 ZFS 和 btrfs 工具,甚至可以在发现一个文件系统中循环挂载的文件时递归地找出它们。
这个逻辑虽然不完美但通常会找到一组应该应用限制的块设备。LXD 接着记录并移动到下一个路径。当遍历完所有的路径,然后到了非常奇怪的部分。它会平均你为相应块设备设置的限制,然后应用这些。
这意味着你将在容器中“平均”地获得正确的速度,但这也意味着你不能对来自同一个物理磁盘的“/fast”和一个“/slow”目录应用不同的速度限制。 LXD 允许你设置它,但最后,它会给你这两个值的平均值。
### 它怎么工作?
除了网络限制是通过较旧但是良好的“tc”实现的上述大多数限制是通过 Linux 内核的 cgroup API 来实现的。
LXD 在启动时会检测你在内核中启用了哪些 cgroup并且将只应用你的内核支持的限制。如果你缺少一些 cgroup守护进程会输出警告接着你的 init 系统将会记录这些。
在 Ubuntu 16.04 上,默认情况下除了内存交换审计外将会启用所有限制,内存交换审计需要你通过 `swapaccount=1` 这个内核引导参数来启用。
### 应用这些限制
上述所有限制都能够直接或者用某个配置文件应用于容器。容器范围的限制可以使用:
```
lxc config set CONTAINER KEY VALUE
```
或对于配置文件设置:
```
lxc profile set PROFILE KEY VALUE
```
当指定特定设备时:
```
lxc config device set CONTAINER DEVICE KEY VALUE
```
或对于配置文件设置:
```
lxc profile device set PROFILE DEVICE KEY VALUE
```
有效配置键、设备类型和设备键的完整列表可以[看这里][1]。
#### CPU
要限制使用任意两个 CPU 核心可以这么做:
```
lxc config set my-container limits.cpu 2
```
要指定特定的 CPU 核心,比如说第二和第四个:
```
lxc config set my-container limits.cpu 1,3
```
更加复杂的情况还可以设置范围:
```
lxc config set my-container limits.cpu 0-3,7-11
```
限制实时生效,你可以看下面的例子:
```
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
```
注意为了避免完全混淆用户空间lxcfs 会重排 `/proc/cpuinfo` 中的条目,以便没有错误。
就像 LXD 中的一切,这些设置也可以应用在配置文件中:
```
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
```
要限制容器使用 10% 的 CPU 时间,要设置下 CPU allowance
```
lxc config set my-container limits.cpu.allowance 10%
```
或者给它一个固定的 CPU 时间切片:
```
lxc config set my-container limits.cpu.allowance 25ms/200ms
```
最后,要将容器的 CPU 优先级调到最低:
```
lxc config set my-container limits.cpu.priority 0
```
#### 内存
要直接应用内存限制运行下面的命令:
```
lxc config set my-container limits.memory 256MB
```
(支持的后缀是 KB、MB、GB、TB、PB、EB
要关闭容器的内存交换(默认启用):
```
lxc config set my-container limits.memory.swap false
```
告诉内核首先交换指定容器的内存:
```
lxc config set my-container limits.memory.swap.priority 0
```
如果你不想要强制的内存限制:
```
lxc config set my-container limits.memory.enforce soft
```
#### 磁盘和块 I/O
不像 CPU 和内存,磁盘和 I/O 限制是直接作用在实际的设备上的,因此你需要编辑原始设备或者屏蔽某个具体的设备。
要设置磁盘限制(需要 btrfs 或者 ZFS
```
lxc config device set my-container root size 20GB
```
比如:
```
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 179G 542M 178G 1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 20G 542M 20G 3% /
```
要限制速度,你可以:
```
lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB
```
或者限制 IO 频率:
```
lxc config device set my-container root limits.read 20Iops
lxc config device set my-container root limits.write 10Iops
```
最后,如果你处在一个过量使用的繁忙系统上,你或许想要:
```
lxc config set my-container limits.disk.priority 10
```
将那个容器的 I/O 优先级调到最高。
#### 网络 I/O
只要机制可用,网络 I/O 基本等同于块 I/O。
比如:
```
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s
2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]
stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s
2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]
```
这就是如何将一个千兆网的连接速度限制到仅仅 100Mbit/s 的!
和块 I/O 一样,你可以设置一个总体的网络优先级:
```
lxc config set my-container limits.network.priority 5
```
### 获取当前资源使用率
[LXD API][2] 可以导出目前容器资源使用情况的一点信息,你可以得到:
* 内存:当前、峰值、目前内存交换和峰值内存交换
* 磁盘:当前磁盘使用率
* 网络:每个接口传输的字节和包数。
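如果你想直接从 API 读取这些数据,可以参考下面这个假设性的例子,通过本地 unix socket 查询某个容器的状态信息(需要 root 或 lxd 组权限,以及 7.40 以上版本的 curlsocket 路径以你实际的安装为准):
```
curl -s --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0/containers/zerotier/state
```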
另外如果你使用的是非常新的 LXD在写这篇文章时的 git 版本),你还可以在`lxc info`中得到这些信息:
```
stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
eth0: inet 172.17.0.101
eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
eth0: inet6 fe80::216:3eff:feec:65a8
lo: inet 127.0.0.1
lo: inet6 ::1
lxcbr0: inet 10.0.3.1
lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
zt0: inet 29.17.181.59
zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
zt0: inet6 fe80::79:e7ff:fe0d:5123
Resources:
Processes: 33
Disk usage:
root: 808.07MB
Memory usage:
Memory (current): 106.79MB
Memory (peak): 195.51MB
Swap (current): 124.00kB
Swap (peak): 124.00kB
Network usage:
lxcbr0:
Bytes received: 0 bytes
Bytes sent: 570 bytes
Packets received: 0
Packets sent: 0
zt0:
Bytes received: 1.10MB
Bytes sent: 806 bytes
Packets received: 10957
Packets sent: 10957
eth0:
Bytes received: 99.35MB
Bytes sent: 5.88MB
Packets received: 64481
Packets sent: 64481
lo:
Bytes received: 9.57kB
Bytes sent: 9.57kB
Packets received: 81
Packets sent: 81
Snapshots:
zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
```
### 总结
LXD 团队花费了几个月的时间来迭代我们使用的这些限制的语言。 它是为了在保持强大和功能明确的基础上同时保持简单。
实时地应用这些限制和通过配置文件继承,使其成为一种非常强大的工具,可以在不影响正在运行的服务的情况下实时管理服务器上的负载。
### 更多信息
LXD 的主站在: <https://linuxcontainers.org/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
如果你不想在你的机器上安装LXD你可以[在线尝试下][3]。
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/configuration.md
[2]: https://github.com/lxc/lxd/blob/master/doc/rest-api.md
[3]: https://linuxcontainers.org/lxd/try-it

View File

@ -1,227 +0,0 @@
alim0x translating
The (updated) history of Android
============================================================
### Follow the endless iterations from Android 0.5 to Android 7 and beyond.
Google Search was literally everywhere in Lollipop. A new "always-on voice recognition" feature allowed users to say "OK Google" at any time, from any screen, even when the display was off. The Google app was still Google's primary home screen, a feature which debuted in KitKat. The search bar was now present on the new recent apps screen, too.
Google Now was still the left-most home screen page, but now a Material Design revamp gave it headers with big bold colors and redesigned typography.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/play-store-1-150x150.jpg)
][1]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/play2-150x150.jpg)
][2]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/6-150x150.jpg)
][3]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/5-150x150.jpg)
][4]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/12-2-150x150.jpg)
][5]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/14-1-150x150.jpg)
][6]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/19-1-150x150.jpg)
][7]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/13-2-150x150.jpg)
][8]
The Play Store followed a similar path to other Lollipop apps. There was a huge visual refresh with bold colors, new typography, and a fresh layout. It's rare that there's any additional functionality here, just a new coat of paint on everything.
The Navigation panel for the Play Store could now actually be used for navigation, with entries for each section of the Play Store. Lollipop also typically did away with the overflow button in the action bar, instead deciding to go with a single action button (usually search) and dumping every extra option in the navigation bar. This gave users a single place to look for items instead of having to hunt through two different menus.
Also new in Lollipop apps was the ability to make the status bar transparent. This allowed the action bar color to bleed right through the status bar, making the bar only slightly darker than the surrounding UI. Some interfaces even used a full-bleed hero image at the top of the screen, which would show through the status bar.
[
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1-980x481.jpg)
][38]
Google Calendar was completely re-written, gaining lots of new design touches and losing lots of features. You could no longer pinch zoom to adjust the time scale of views, month view was gone on phones, and week view regressed from a seven-day view to five days. Google would spend the next few versions re-adding some of these features after users complained. "Google Calendar" also doubled down on the "Google" by removing the ability to add third-party accounts directly in the app. Non-Google accounts would now need to be added via Gmail.
It did look nice, though. In some views, the start of each month came with a header picture, just like a real paper calendar. Events with locations attached showed pictures from those locations. For instance, my "flight to San Francisco" displayed the Golden Gate Bridge. Google Calendar would also pull events out of Gmail and display them right on your calendar.
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/7-150x150.jpg)
][9]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/8-150x150.jpg)
][10]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/12-150x150.jpg)
][11]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/13-150x150.jpg)
][12]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/3-1-150x150.jpg)
][13]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/14-150x150.jpg)
][14]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/6-2-150x150.jpg)
][15]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/5-3-150x150.jpg)
][16]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/7-2-150x150.jpg)
][17]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/9-1-150x150.jpg)
][18]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/10-1-150x150.jpg)
][19]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/28-1-150x150.jpg)
][20]
Other apps all fell under pretty much the same description: not much in the way of new functionality, but big redesigns swapped out the greys of KitKat with bold, bright colors. Hangouts gained the ability to receive Google Voice SMSes, and the clock got a background color that changes with the time of day.
#### Job Scheduler whips the app ecosystem into shape
Google decided to focus on battery savings with Lollipop in a project it called "Project Volta." Google started creating more battery tracking tools for itself and developers, starting with the "Battery Historian." This python script took all of Android's battery logging data and spun it into a readable, interactive graph. With its new diagnostic equipment, Google flagged background tasks as a big consumer of battery.
At I/O 2014, the company noted that enabling airplane mode and turning off the screen allowed an Android phone to run in standby for a month. However, if users enabled everything and started using the device, they wouldn't get through a single day. The takeaway was that if you could just get everything to stop doing stuff, your battery would do a lot better.
As such, the company created a new API called "JobScheduler," the new traffic cop for background tasks on Android. Before Job Scheduler, every single app was responsible for its background processing, which meant every app would individually wake up the processor and modem, check for connectivity, organize databases, download updates, and upload logs. Everything had its own individual timer, so your phone would be woken up a lot. With JobScheduler, background tasks get batched up from an unorganized free-for-all into an orderly background processing window.
JobScheduler lets apps specify conditions that their task needs (general connectivity, Wi-Fi, plugged into a wall outlet, etc), and it will send an announcement when those conditions are met. It's like the difference between push e-mail and checking for e-mail every five minutes... but with task requirements. Google also started pushing a "lazier" approach to background tasks. If something can wait until the device is on Wi-Fi, plugged-in, and idle, it should wait until then. You can see the results of this today when, on Wi-Fi, you can plug in an Android phone and only _then_ will it start downloading app updates. You don't instantly need to download app updates; it's best to wait until the user has unlimited power and data.
#### Device setup gets future-proofed
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/25-1-150x150.jpg)
][21]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/26-150x150.jpg)
][22]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup2-150x150.jpg)
][23]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup3-150x150.jpg)
][24]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup4-150x150.jpg)
][25]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup5-150x150.jpg)
][26]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup6-150x150.png)
][27]
Setup was overhauled to not just conform to the Material Design guidelines, but it was also "future-proofed" so that it can handle any new login and authentication schemes Google cooks up in the future. Remember, part of the entire reasoning for writing "The History of Android" is that older versions of Android don't work anymore. Over the years, Google has upgraded its authentication schemes to use better encryption and two-factor authentication, but adding these new login requirements breaks compatibility with older clients. Lots of Android features require access to Google's cloud infrastructure, so when you can't log in, things like Gmail for Android 1.0 just don't work.
In Lollipop, setup works much like it did before for the first few screens. You get a "welcome to Android screen" and options to set up cellular and Wi-Fi connectivity. Immediately after this screen, things changed though. As soon as Lollipop hit the internet, it pinged Google's servers to "check for updates." These weren't updates to the OS or to apps, but updates to the setup process about to run. After Android downloaded the newest version of setup, _then_ it asked you to log in with your Google account.
The benefit of this is evident when trying to log into Lollipop and KitKat today. Thanks to the updatable setup flow, the "2014" Lollipop OS can handle 2016 improvements, like Google's new "[tap to sign in][39]" 2FA method. KitKat chokes, but luckily it has a "web-browser sign-in" that can handle 2FA.
Lollipop setup even takes the extreme stance of putting your Google e-mail and password on separate pages. [Google hates passwords][40] and has come up with several [experimental ways][41] to log into Google without one. If your account is setup to not have a password, Lollipop can just skip the password page. If you have a 2FA setup that uses a code, setup can slip the appropriate "enter 2FA code" page into the setup flow. Every piece of signing in is on a single page, so the setup flow is modular. Pages can be added and removed as needed.
Setup also gave users control over app restoration. Android was doing some kind of data restoration previously to this, but it was impossible to understand because it just picked one of your devices without any user input and started restoring things. A new screen in the setup flow let users see their collection of device profiles in the cloud and pick the appropriate one. You could also choose which apps to restore from that backup. This backup was apps, your home screen layout, and a few minor settings like Wi-Fi hotspots. It wasn't a full app data backup.
#### Settings
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/29-1-150x150.jpg)
][28]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/settings-1-150x150.jpg)
][29]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2014-11-11-16.45.47-150x150.png)
][30]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/battery-150x150.jpg)
][31]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/user1-150x150.jpg)
][32]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/users2-150x150.jpg)
][33]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/30-1-150x150.jpg)
][34]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/31-150x150.jpg)
][35]
Settings swapped from a dark theme to a light one. Along with a new look, it got a handy search function. Every screen gave the user access to a magnifying glass, which let them more easily hunt down that elusive option.
There were a few settings related to Project Volta. "Network Restrictions" allowed users to flag a Wi-Fi connection as metered, which would allow JobScheduler to avoid it for background processing. Also as part of Volta, a "Battery Saver" mode was added. This would limit background tasks and throttle down the CPU, which gave you a long lasting but very sluggish device.
Multi-user support has been in Android tablets for a while, but Lollipop finally brought it down to Android phones. The settings screen added a new "users" page that let you add additional accounts or start up a "Guest" account. Guest accounts were temporary—they could be wiped out with a single tap. And unlike a normal account, it didn't try to download every app associated with your account, since it was destined to be wiped out soon.
--------------------------------------------------------------------------------
作者简介:
Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/
作者:[RON AMADEO][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://arstechnica.com/author/ronamadeo
[1]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[2]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[3]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[4]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[5]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[6]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[7]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[8]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[9]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[10]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[11]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[12]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[13]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[14]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[15]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[16]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[17]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[18]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[19]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[20]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[21]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[22]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[23]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[24]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[25]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[26]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[27]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[28]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[29]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[30]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[31]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[32]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[33]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[34]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[35]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[36]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg
[37]:http://arstechnica.com/author/ronamadeo/
[38]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg
[39]:http://arstechnica.com/gadgets/2016/06/googles-new-two-factor-authentication-system-tap-yes-to-log-in/
[40]:https://www.theguardian.com/technology/2016/may/24/google-passwords-android
[41]:http://www.androidpolice.com/2015/12/22/google-appears-to-be-testing-a-new-way-to-log-into-your-account-on-other-devices-with-just-your-phone-no-password-needed/

View File

@ -0,0 +1,995 @@
Network automation with Ansible
================
### Network Automation
As the IT industry transforms with technologies from server virtualization to public and private clouds with self-service capabilities, containerized applications, and Platform as a Service (PaaS) offerings, one of the areas that continues to lag behind is the network.
Over the past 5+ years, the network industry has seen many new trends emerge, many of which are categorized as software-defined networking (SDN).
###### Note
SDN is a new approach to building, managing, operating, and deploying networks. The original definition for SDN was that there needed to be a physical separation of the control plane from the data (packet forwarding) plane, and the decoupled control plane must control several devices.
Nowadays, many more technologies get put under the _SDN umbrella_, including controller-based networks, APIs on network devices, network automation, whitebox switches, policy networking, Network Functions Virtualization (NFV), and the list goes on.
For purposes of this report, we refer to SDN solutions as solutions that include a network controller as part of the solution, and improve manageability of the network but don't necessarily decouple the control plane from the data plane.
One of these trends is the emergence of application programming interfaces (APIs) on network devices as a way to manage and operate these devices and truly offer machine to machine communication. APIs simplify the development process when it comes to automation and building network applications, providing more structure on how data is modeled. For example, when API-enabled devices return data in JSON/XML, it is structured and easier to work with as compared to CLI-only devices that return raw text that then needs to be manually parsed.
Prior to APIs, the two primary mechanisms used to configure and manage network devices were the command-line interface (CLI) and Simple Network Management Protocol (SNMP). If we look at each of those, the CLI was meant as a human interface to the device, and SNMP wasnt built to be a real-time programmatic interface for network devices.
Luckily, as many vendors scramble to add APIs to devices, sometimes _just because_ its a check in the box on an RFP, there is actually a great byproduct—enabling network automation. Once a true API is exposed, the process for accessing data within the device, as well as managing the configuration, is greatly simplified, but as well review in this report, automation is also possible using more traditional methods, such as CLI/SNMP.
###### Note
As network refreshes happen in the months and years to come, vendor APIs should no doubt be tested and used as key decision-making criteria for purchasing network equipment (virtual and physical). Users should want to know how data is modeled by the equipment, what type of transport is used by the API, if the vendor offers any libraries or integrations to automation tools, and if open standards/protocols are being used.
Generally speaking, network automation, like most types of automation, equates to doing things faster. While doing more faster is nice, reducing the time for deployments and configuration changes isn't always a problem that needs solving for many IT organizations.
Including speed, we'll now take a look at a few of the reasons that IT organizations of all shapes and sizes should look at gradually adopting network automation. You should note that the same principles apply to other types of automation as well.
### Simplified Architectures
Today, every network is a unique snowflake, and network engineers take pride in solving transport and application issues with one-off network changes that ultimately make the network not only harder to maintain and manage, but also harder to automate.
Instead of thinking about network automation and management as a secondary or tertiary project, it needs to be included from the beginning as new architectures and designs are deployed. Which features work across vendors? Which extensions work across platforms? What type of API or automation tooling works when using particular network device platforms? When these questions get answered earlier on in the design process, the resulting architecture becomes simpler, repeatable, and easier to maintain _and_ automate, all with fewer vendor proprietary extensions enabled throughout the network.
### Deterministic Outcomes
In an enterprise organization, change review meetings take place to review upcoming changes on the network, the impact they have on external systems, and rollback plans. In a world where a human is touching the CLI to make those _upcoming changes_, the impact of typing the wrong command is catastrophic. Imagine a team with three, four, five, or 50 engineers. Every engineer may have his own way of making that particular _upcoming change_. And the ability to use a CLI or a GUI does not eliminate or reduce the chance of error during the control window for the change.
Using proven and tested network automation helps achieve more predictable behavior and gives the executive team a better chance at achieving deterministic outcomes, moving one step closer to having the assurance that the task is going to get done right the first time without human error.
### Business Agility
It goes without saying that network automation offers speed and agility not only for deploying changes, but also for retrieving data from network devices as fast as the business demands. Since the advent of server virtualization, server and virtualization admins have had the ability to deploy new applications almost instantaneously. And the faster applications are deployed, the more questions are raised as to why it takes so long to configure a VLAN, route, FW ACL, or load-balancing policy.
By understanding the most common workflows within an organization and _why_ network changes are really required, the process to deploy modern automation tooling such as Ansible becomes much simpler.
This chapter introduced some of the high-level points on why you should consider network automation. In the next section, we take a look at what Ansible is and continue to dive into different types of network automation that are relevant to IT organizations of all sizes.
### What Is Ansible?
Ansible is one of the newer IT automation and configuration management platforms that exists in the open source world. Its often compared to other tools such as Puppet, Chef, and SaltStack. Ansible emerged on the scene in 2012 as an open source project created by Michael DeHaan, who also created Cobbler and cocreated Func, both of which are very popular in the open source community. Less than 18 months after the Ansible open source project started, Ansible Inc. was formed and received $6 million in Series A funding. It became and is still the number one contributor to and supporter of the Ansible open source project. In October 2015, Red Hat acquired Ansible Inc.
But, what exactly is Ansible?
_Ansible is a super-simple automation platform that is agentless and extensible._
Let's dive into this statement in a bit more detail and look at the attributes of Ansible that have helped it gain a significant amount of traction within the industry.
### Simple
One of the most attractive attributes of Ansible is that you _DO NOT_ need any special coding skills in order to get started. All instructions, or tasks to be automated, are documented in a standard, human-readable data format that anyone can understand. It is not uncommon to have Ansible installed and automating tasks in under 30 minutes!
For example, the following task from an Ansible playbook is used to ensure a VLAN exists on a Cisco Nexus switch:
```
- nxos_vlan: vlan_id=100 name=web_vlan
```
You can tell by looking at this almost exactly what it's going to do without understanding or writing any code!
###### Note
The second half of this report covers the Ansible terminology (playbooks, plays, tasks, modules, etc.) in great detail. However, we have included a few brief examples in the meantime to convey key concepts when using Ansible for network automation.
### Agentless
If you look at other tools on the market, such as Puppet and Chef, you'll learn that, by default, they require that each device you are automating have specialized software installed. This is _NOT_ the case with Ansible, and this is the major reason why Ansible is a great choice for networking automation.
Its well understood that IT automation tools, including Puppet, Chef, CFEngine, SaltStack, and Ansible, were initially built to manage and automate the configuration of Linux hosts to increase the pace at which applications are deployed. Because Linux systems were being automated, getting agents installed was never a technical hurdle to overcome. If anything, it just delayed the setup, since now _N_ number of hosts (the hosts you want to automate) needed to have software deployed on them.
On top of that, when agents are used, there is additional complexity required for DNS and NTP configuration. These are services that most environments do have already, but when you need to get something up fairly quick or simply want to see what it can do from a test perspective, it could significantly delay the overall setup and installation process.
Since this report is meant to cover Ansible for network automation, it's worth pointing out that having Ansible as an agentless platform is even more compelling to network admins than to sysadmins. Why is this?
Its more compelling for network admins because as mentioned, Linux operating systems are open, and anything can be installed on them. For networking, this is definitely not the case, although it is gradually changing. If we take the most widely deployed network operating system, Cisco IOS, as just one example and ask the question, _"Can third-party software be installed on IOS based platforms?"_ it shouldnt come as a surprise that the answer is _NO_.
For the last 20+ years, nearly all network operating systems have been closed and vertically integrated with the underlying network hardware. Because its not so easy to load an agent on a network device (router, switch, load balancer, firewall, etc.) without vendor support, having an automation platform like Ansible that was built from the ground up to be agentless and extensible is just what the doctor ordered for the network industry. We can finally start eliminating manual interactions with the network with ease!
### Extensible
Ansible is also extremely extensible. As open source and code start to play a larger role in the network industry, having platforms that are extensible is a must. This means that if the vendor or community doesnt provide a particular feature or function, the open source community, end user, customer, consultant, or anyone else can _extend_ Ansible to enable a given set of functionality. In the past, the network vendor or tool vendor was on the hook to provide the new plug-ins and integrations. Imagine using an automation platform like Ansible, and your network vendor of choice releases a new feature that you _really_ need automated. While the network vendor or Ansible could in theory release the new plug-in to automate that particular feature, the great thing is, anyone from your internal engineers to your value-added reseller (VARs) or consultant could now provide these integrations.
It is a fact that Ansible is extremely extensible because as stated, Ansible was initially built to automate applications and systems. It is because of Ansible's extensibility that Ansible integrations have been written for network vendors, including but not limited to Cisco, Arista, Juniper, F5, HP, A10, Cumulus, and Palo Alto Networks.
### Why Ansible for Network Automation?
We've taken a brief look at what Ansible is and also some of the benefits of network automation, but why should Ansible be used for network automation?
In full transparency, many of the reasons already stated are what make Ansible such a great platform for automating application deployments. However, we'll take this a step further now, getting even more focused on networking, and continue to outline a few other key points to be aware of.
### Agentless
The importance of an agentless architecture cannot be stressed enough when it comes to network automation, especially as it pertains to automating existing devices. If we take a look at all devices currently installed at various parts of the network, from the DMZ and campus, to the branch and data center, the lions share of devices do _NOT_ have a modern device API. While having an API makes things so much simpler from an automation perspective, an agentless platform like Ansible makes it possible to automate and manage those _legacy_ _(traditional)_ devices, for example, _CLI-based devices_, making it a tool that can be used in any network environment.
###### Note
If CLI-only devices are integrated with Ansible, the mechanisms as to how the devices are accessed for read-only and read-write operations occur through protocols such as telnet, SSH, and SNMP.
As standalone network devices like routers, switches, and firewalls continue to add support for APIs, SDN solutions are also emerging. The one common theme with SDN solutions is that they all offer a single point of integration and policy management, usually in the form of an SDN controller. This is true for solutions such as Cisco ACI, VMware NSX, Big Switch Big Cloud Fabric, and Juniper Contrail, as well as many of the other SDN offerings from companies such as Nuage, Plexxi, Plumgrid, Midokura, and Viptela. This even includes open source controllers such as OpenDaylight.
These solutions all simplify the management of networks, as they allow an administrator to start to migrate from box-by-box management to network-wide, single-system management. While this is a great step in the right direction, these solutions still dont eliminate the risks for human error during change windows. For example, rather than configure _N_ switches, you may need to configure a single GUI that could take just as long in order to make the required configuration change—it may even be more complex, because after all, who prefers a GUI _over_ a CLI! Additionally, you may possibly have different types of SDN solutions deployed per application, network, region, or data center.
The need to automate networks, for configuration management, monitoring, and data collection, does not go away as the industry begins migrating to controller-based network architectures.
As most software-defined networks are deployed with a controller, nearly all controllers expose a modern REST API. And because Ansible has an agentless architecture, it makes it extremely simple to automate not only legacy devices that may not have an API, but also software-defined networking solutions via REST APIs, all without requiring any additional software (agents) on the endpoints. The net result is being able to automate any type of device using Ansible with or without an API.
### Free and Open Source Software (FOSS)
Being that Ansible is open source with all code publicly accessible on GitHub, it is absolutely free to get started using Ansible. It can literally be installed and providing value to network engineers in minutes. Ansible, the open source project, or Ansible Inc., do not require any meetings with sales reps before they hand over software either. That is stating the obvious, since its true for all open source projects, but being that the use of open source, community-driven software within the network industry is fairly new and gradually increasing, we wanted to explicitly make this point.
It is also worth stating that Ansible, Inc. is indeed a company and needs to make money somehow, right? While Ansible is open source, it also has an enterprise product called Ansible Tower that adds features such as role-based access control (RBAC), reporting, web UI, REST APIs, multi-tenancy, and much more, which is usually a nice fit for enterprises looking to deploy Ansible. And the best part is that even Ansible Tower is _FREE_ for up to 10 devices—so, at least you can get a taste of Tower to see if it can benefit your organization without spending a dime and sitting in countless sales meetings.
### Extensible
We stated earlier that Ansible was primarily built as an automation platform for deploying Linux applications, although it has expanded to Windows since the early days. The point is that the Ansible open source project did not have the goal of automating network infrastructure. The truth is that the more the Ansible community understood how flexible and extensible the underlying Ansible architecture was, the easier it became to _extend_ Ansible for their automation needs, which included networking. Over the past two years, there have been a number of Ansible integrations developed, many by industry independents such as Matt Oswalt, Jason Edelman, Kirk Byers, Elisa Jasinska, David Barroso, Michael Ben-Ami, Patrick Ogenstad, and Gabriele Gerbino, as well as by leading network vendors such as Arista, Juniper, Cumulus, Cisco, F5, and Palo Alto Networks.
### Integrating into Existing DevOps Workflows
Ansible is used for application deployments within IT organizations. Its used by operations teams that need to manage the deployment, monitoring, and management of various types of applications. By integrating Ansible with the network infrastructure, it expands what is possible when new applications are turned up or migrated. Rather than have to wait for a new top of rack (TOR) switch to be turned up, a VLAN to be added, or interface speed/duplex to be checked, all of these network-centric tasks can be automated and integrated into existing workflows that already exist within the IT organization.
### Idempotency
The term _idempotency_ (pronounced item-potency) is used often in the world of software development, especially when working with REST APIs, as well as in the world of _DevOps_ automation and configuration management frameworks, including Ansible. One of Ansibles beliefs is that all Ansible modules (integrations) should be idempotent. Okay, so what does it mean for a module to be idempotent? After all, this is a new term for most network engineers.
The answer is simple. Being idempotent allows the defined task to run one time or a thousand times without having an adverse effect on the target system, only ever making the change once. In other words, if a change is required to get the system into its desired state, the change is made; and if the device is already in its desired state, no change is made. This is unlike most traditional custom scripts and the copy and pasting of CLI commands into a terminal window. When the same command or script is executed repeatedly on the same system, errors are (sometimes) raised. Ever paste a command set into a router and get some type of error that invalidates the rest of your configuration? Was that fun?
Another example is if you have a text file or a script that configures 10 VLANs, the same commands are then entered 10 times _EVERY_ time the script is run. If an idempotent Ansible module is used, the existing configuration is gathered first from the network device, and each new VLAN being configured is checked against the current configuration. Only if the new VLAN needs to be added (or changed—VLAN name, as an example) is a change or command actually pushed to the device.
As the technologies become more complex, the value of idempotency only increases because with idempotency, you shouldn't care about the _existing_ state of the network device being modified, only the _desired_ state that you are trying to achieve from a network configuration and policy perspective.
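To make the idea concrete, here is the same `nxos_vlan` task shown earlier, with the behavior you could expect on repeated runs noted in the comments (the VLAN values are illustrative):
```
# First run: the VLAN is missing, so the module creates it and the
# task reports "changed". Any later run finds the device already in
# the desired state and reports "ok" without pushing any commands.
- nxos_vlan: vlan_id=100 name=web_vlan
```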
### Network-Wide and Ad Hoc Changes
One of the problems solved with configuration management tools is configuration drift (when a devices desired configuration gradually drifts, or changes, over time due to manual change and/or having multiple disparate tools being used in an environment)—in fact, this is where tools like Puppet and Chef got started. Agents _phone home_ to the head-end server, validate its configuration, and if a change is required, the change is made. The approach is simple enough. What if an outage occurs and you need to troubleshoot though? You usually bypass the management system, go direct to a device, find the fix, and quickly leave for the day, right? Sure enough, at the next time interval when the agent phones back home, the change made to fix the problem is overwritten (based on how the _master/head-end server_ is configured). One-off changes should always be limited in highly automated environments, but tools that still allow for them are greatly valuable. As you guessed, one of these tools is Ansible.
Because Ansible is agentless, there is not a default push or pull to prevent configuration drift. The tasks to automate are defined in what is called an Ansible playbook. When using Ansible, it is up to the user to run the playbook. If the playbook is to be executed at a given time interval and youre not using Ansible Tower, you will definitely know how often the tasks are run; if you are just using the native Ansible command line from a terminal prompt, the playbook is run once and only once.
Running a playbook once by default is attractive for network engineers. It is added peace of mind that changes made manually on the device are not going to be automatically overwritten. Additionally, the scope of devices that a playbook is executed against is easily changed when needed such that even if a single change needs to automate only a single device, Ansible can still be used. The _scope_ of devices is determined by what is called an Ansible inventory file; the inventory could have one device or a thousand devices.
The following shows a sample inventory file with two groups defined and a total of six network devices:
```
[core-switches]
dc-core-1
dc-core-2
[leaf-switches]
leaf1
leaf2
leaf3
leaf4
```
To automate all hosts, a snippet from your play definition in a playbook looks like this:
```
hosts: all
```
And to automate just one leaf switch, it looks like this:
```
hosts: leaf1
```
And just the core switches:
```
hosts: core-switches
```
###### Note
As stated previously, playbooks, plays, and inventories are covered in more detail later on this report.
Being able to easily automate one device or _N_ devices makes Ansible a great choice for making those one-off changes when they are required. It's also great for those changes that are network-wide: possibly for shutting down all interfaces of a given type, configuring interface descriptions, or adding VLANs to wiring closets across an enterprise campus network.
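For a sense of how little changes between the two cases, here is a sketch that reuses the inventory groups defined above (the VLAN values are illustrative, and the leaf switches are assumed to be NX-OS devices). Swapping the `hosts:` line between a group and a single device is the only difference between a network-wide change and a one-off change:
```
---
- name: ADD VOICE VLAN TO WIRING CLOSETS
  hosts: leaf-switches    # change to "leaf1" for a one-off change
  tasks:
    - nxos_vlan: vlan_id=200 name=voice_vlan
```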
### Network Task Automation with Ansible
This report is gradually getting more technical in two areas. The first area is around the details and architecture of Ansible, and the second area is about exactly what types of tasks can be automated from a network perspective with Ansible. The latter is what we'll take a look at in this chapter.
Automation is commonly equated with speed, and considering that some network tasks don't require speed, it's easy to see why some IT teams don't see the value in automation. VLAN configuration is a great example because you may be thinking, "How _fast_ does a VLAN really need to get created? Just how many VLANs are being added on a daily basis? Do _I_ really need automation?"
In this section, we are going to focus on several other tasks where automation makes sense such as device provisioning, data collection, reporting, and compliance. But remember, as we stated earlier, automation is much more than speed and agility as it's offering you, your team, and your business more predictable and more deterministic outcomes.
### Device Provisioning
One of the easiest and fastest ways to get started using Ansible for network automation is creating device configuration files that are used for initial device provisioning and pushing them to network devices.
If we take this process and break it down into two steps, the first step is creating the configuration file, and the second is pushing the configuration onto the device.
First, we need to decouple the _inputs_ from the underlying vendor proprietary syntax (CLI) of the config file. This means well have separate files with values for the configuration parameters such as VLANs, domain information, interfaces, routing, and everything else, and then, of course, a configuration template file(s). For this example, this is our standard golden template thats used for all devices getting deployed. Ansible helps bridge the gap between rendering the inputs and values with the configuration template. In less than a few seconds, Ansible can generate hundreds of configuration files predictably and reliably.
Let's take a quick look at an example of taking a current configuration and decomposing it into a template and separate variables (inputs) file.
Here is an example of a configuration file snippet:
```
hostname leaf1
ip domain-name ntc.com
!
vlan 10
name web
!
vlan 20
name app
!
vlan 30
name db
!
vlan 40
name test
!
vlan 50
name misc
```
If we extract the input values, this file is transformed into a template.
###### Note
Ansible uses the Python-based Jinja2 templating language, thus the template called _leaf.j2_ is a Jinja2 template.
Note that in the following example the _double curly braces_ denote a variable.
The resulting template looks like this and is given the filename _leaf.j2_:
```
!
hostname {{ inventory_hostname }}
ip domain-name {{ domain_name }}
!
!
{% for vlan in vlans %}
vlan {{ vlan.id }}
name {{ vlan.name }}
{% endfor %}
!
```
Since the double curly braces denote variables, and we see those values are not in the template, they need to be stored somewhere. They get stored in a variables file. A matching variables file for the previously shown template looks like this:
```
---
hostname: leaf1
domain_name: ntc.com
vlans:
- { id: 10, name: web }
- { id: 20, name: app }
- { id: 30, name: db }
- { id: 40, name: test }
- { id: 50, name: misc }
```
This means if the team that controls VLANs wants to add a VLAN to the network devices, no problem. Have them change it in the variables file and regenerate a new config file using the Ansible module called `template`. This whole process is idempotent too; only if there is a change to the template or values being entered will a new configuration file be generated.
Once the configuration is generated, it needs to be _pushed_ to the network device. One such method to push configuration files to network devices is using the open source Ansible module called `napalm_install_config`.
The next example is a sample playbook to _build and push_ a configuration to network devices. Again, this playbook uses the `template` module to build the configuration files and the `napalm_install_config` to push them and activate them as the new running configurations on the devices.
Even though every line isn't reviewed in the example, you can still make out what is actually happening.
###### Note
The following playbook introduces new concepts such as the built-in variable `inventory_hostname`. These concepts are covered in [Ansible Terminology and Getting Started][1].
```
---
- name: BUILD AND PUSH NETWORK CONFIGURATION FILES
hosts: leaves
connection: local
gather_facts: no
tasks:
- name: BUILD CONFIGS
template:
src=templates/leaf.j2
dest=configs/{{inventory_hostname }}.conf
- name: PUSH CONFIGS
napalm_install_config:
hostname={{ inventory_hostname }}
username={{ un }}
password={{ pwd }}
dev_os={{ os }}
config_file=configs/{{ inventory_hostname }}.conf
commit_changes=1
replace_config=0
```
This two-step process is the simplest way to get started with network automation using Ansible. You simply template your configs, build config files, and push them to the network device—otherwise known as the _BUILD and PUSH_ method.
###### Note
Another example like this is reviewed in much more detail in [Ansible Network Integrations][2].
### Data Collection and Monitoring
Monitoring tools typically use SNMP—these tools poll certain management information bases (MIBs) and return data to the monitoring tool. Based on the data being returned, it may be more or less than you actually need. What if interface stats are being polled? You are likely getting back every counter that is displayed in a _show interface_ command. What if you only need _interface resets_ and wish to see these resets correlated to the interfaces that have CDP/LLDP neighbors on them? Of course, this is possible with current technology; it could be you are running multiple show commands and parsing the output manually, or youre using an SNMP-based tool but going between tabs in the GUI trying to find the data you actually need. How does Ansible help with this?
Being that Ansible is totally open and extensible, its possible to collect and monitor the exact counters or values needed. This may require some up-front custom work but is totally worth it in the end, because the data being gathered is what you need, not what the vendor is providing you. Ansible also provides intuitive ways to perform certain tasks conditionally, which means based on data being returned, you can perform subsequent tasks, which may be to collect more data or to make a configuration change.
Network devices have _A LOT_ of static and ephemeral data buried inside, and Ansible helps extract the bits you need.
You can even use Ansible modules that use SNMP behind the scenes, such as a module called `snmp_device_version`. This is another open source module that exists within the community:
```
- name: GET SNMP DATA
snmp_device_version:
host=spine
community=public
version=2c
```
Running the preceding task returns great information about a device and adds some level of discovery capabilities to Ansible. For example, that task returns the following data:
```
{"ansible_facts": {"ansible_device_os": "nxos", "ansible_device_vendor": "cisco", "ansible_device_version": "7.0(3)I2(1)"}, "changed": false}
```
You can now determine what type of device something is without knowing up front. All you need to know is the read-only community string of the device.
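Because those values come back as Ansible facts, a follow-up task can branch on them with a `when` condition. In this sketch (the VLAN task is illustrative), the second task only runs on devices that the SNMP discovery identified as NX-OS:
```
- name: GET SNMP DATA
  snmp_device_version:
    host=spine
    community=public
    version=2c

# ansible_device_os was set as a fact by the previous task, so this
# task is skipped on anything that is not an NX-OS device.
- name: ENSURE WEB VLAN EXISTS (NX-OS DEVICES ONLY)
  nxos_vlan: vlan_id=10 name=WEB_VLAN
  when: ansible_device_os == "nxos"
```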
### Migrations
Migrating from one platform to the next is never an easy task. This may be from the same vendor or from different vendors. Vendors may offer a script or a tool to help with migrations. Ansible can be used to build out configuration templates for all types of network devices and operating systems in such a way that you could generate a configuration file for all vendors given a defined and common set of inputs (common data model). Of course, if there are vendor proprietary extensions, theyll need to be accounted for, too. Having this type of flexibility helps with not only migrations, but also disaster recovery (DR), as its very common to have different switch models in the production and DR data centers, maybe even different vendors.
### Configuration Management
As stated, configuration management is the most common type of automation. What Ansible allows you to do fairly easily is create _roles_ to streamline the consumption of task-based automation. From a high level, a role is a logical grouping of reusable tasks that are automated against a particular group of devices. Another way to think about roles is to think about workflows. First and foremost, workflows and processes need to be understood before automation is going to start adding value. Its always important to start small and expand from there.
For example, a set of tasks that automate the configuration of routers and switches is very common and is a great place to start. But where do the IP addresses come from that are configured on network devices? Maybe an IP address management solution? Once the IP addresses are allocated for a given function and deployed, does DNS need to be updated too? Do DHCP scopes need to be created?
Can you see how the workflow can start small and gradually expand across different IT systems? As the workflow continues to expand, so would the role.
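As a rough sketch of what a role looks like in a playbook, each role is just a named directory (for example, `roles/vlans/tasks/main.yml`) containing the reusable tasks, templates, and variables for one step of the workflow, and a play pulls them in with the `roles:` keyword. The role and group names here are illustrative:
```
---
- name: PROVISION CAMPUS SWITCHES
  hosts: leaf-switches
  connection: local

  roles:
    - vlans         # VLAN provisioning tasks
    - interfaces    # interface descriptions, speed/duplex settings
    - snmp          # management and monitoring configuration
```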
### Compliance
As with many forms of automation, making configuration changes with any type of automation tool is seen as a risk. While making manual changes could arguably be riskier, as youve read and may have experienced firsthand, Ansible has capabilities to automate data collection, monitoring, and configuration building, which are all "read-only" and "low risk" actions. One _low risk_ use case that can use the data being gathered is configuration compliance checks and configuration validation. Does the deployed configuration meet security requirements? Are the required networks configured? Is protocol XYZ disabled? Since each module, or integration, with Ansible returns data, it is quite simple to _assert_ that something is _TRUE_ or _FALSE_. And again, based on _it_ being _TRUE_ or _FALSE_, its up to you to determine what happens next—maybe it just gets logged, or maybe a complex operation is performed.
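A minimal sketch of that idea, using the core `assert` module and the fact returned by the SNMP discovery task shown earlier (the approved version string is illustrative):
```
# ansible_device_version comes from the snmp_device_version task shown
# earlier; fail the play for any device not on the approved release.
- name: VERIFY DEVICE IS RUNNING THE APPROVED OS VERSION
  assert:
    that:
      - ansible_device_version == "7.0(3)I2(1)"
```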
### Reporting
We now understand that Ansible can also be used to collect data and perform compliance checks. The data being returned and collected from the device by way of Ansible is up for grabs in terms of what you want to do with it. Maybe the data being returned becomes inputs to other tasks, or maybe you just want to create reports. Being that reports are generated from templates combined with the actual important data to be inserted into the template, the process to create and use reporting templates is the same process used to create configuration templates.
From a reporting perspective, these templates may be flat text files, markdown files that are viewed on GitHub, HTML files that get dynamically placed on a web server, and the list goes on. The user has the power to create the exact type of report she wishes, inserting the exact data she needs to be part of that report.
It is powerful to create reports not only for executive management, but also for the ops engineers, since there are usually different metrics both teams need.
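Because reporting reuses the same template-plus-data workflow as configuration building, a sketch can be as small as the following (the template path, its contents, and the facts it references are illustrative):
```
# reports/device_report.j2 -- a Markdown report template (illustrative)
#   # {{ inventory_hostname }}
#   * Vendor:     {{ ansible_device_vendor }}
#   * OS version: {{ ansible_device_version }}

- name: BUILD PER-DEVICE REPORT
  template:
    src=reports/device_report.j2
    dest=reports/{{ inventory_hostname }}.md
```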
### How Ansible Works
After looking at what Ansible can offer from a network automation perspective, we'll now take a look at how Ansible works. You will learn about the overall communication flow from an Ansible control host to the nodes that are being automated. First, we review how Ansible works _out of the box_, and we then take a look at how Ansible, and more specifically Ansible _modules_, work when network devices are being automated.
### Out of the Box
By now, you should understand that Ansible is an automation platform. In fact, it is a lightweight automation platform that is installed on a single server or on every administrator's laptop within an organization. You decide. Ansible is easily installed using utilities such as pip, apt, and yum on Linux-based machines.
###### Note
The machine that Ansible is installed on is referred to as the _control host_ through the remainder of this report.
The control host will perform all automation tasks that are defined in an Ansible playbook (don't worry; we'll cover playbooks and other Ansible terms soon enough). The important piece for now is to understand that a playbook is simply a set of automation tasks and instructions that gets executed on a given number of hosts.
When a playbook is created, you also need to define which hosts you want to automate. The mapping between the playbook and the hosts to automate happens by using what is known as an Ansible inventory file. This was already shown in an earlier example, but here is another sample inventory file showing two groups: `cisco` and `arista`:
```
[cisco]
nyc1.acme.com
nyc2.acme.com
[arista]
sfo1.acme.com
sfo2.acme.com
```
###### Note
You can also use IP addresses within the inventory file, instead of hostnames. For these examples, the hostnames were resolvable via DNS.
As you can see, the Ansible inventory file is a text file that lists hosts and groups of hosts. You then reference a specific host or a group from within the playbook, thus dictating which hosts get automated for a given play and playbook. This is shown in the following two examples.
The first example shows what it looks like if you wanted to automate all hosts within the `cisco` group, and the second example shows how to automate just the _nyc1.acme.com_ host:
```
---
- name: TEST PLAYBOOK
hosts: cisco
tasks:
- TASKS YOU WANT TO AUTOMATE
```
```
---
- name: TEST PLAYBOOK
hosts: nyc1.acme.com
tasks:
- TASKS YOU WANT TO AUTOMATE
```
Now that the basics of inventory files are understood, we can take a look at how Ansible (the control host) communicates with devices _out of the box_ and how tasks are automated on Linux endpoints. This is an important concept to understand, as this is usually different when network devices are being automated.
There are two main requirements for Ansible to work out of the box to automate Linux-based systems. These requirements are SSH and Python.
First, the endpoints must support SSH for transport, since Ansible uses SSH to connect to each target node. Because Ansible supports a pluggable connection architecture, there are also various plug-ins available for different types of SSH implementations.
The second requirement is how Ansible gets around the need to require an _agent_ to preexist on the target node. While Ansible does not require a software agent, it does require an onboard Python execution engine. This execution engine is used to execute Python code that is transmitted from the Ansible control host to the target node being automated.
If we elaborate on this out of the box workflow, it is broken down as follows:
1. When an Ansible play is executed, the control host connects to the Linux-based target node using SSH.
2. For each task, that is, Ansible module being executed within the play, Python code is transmitted over SSH and executed directly on the remote system.
3. Each Ansible module upon execution on the remote system returns JSON data to the control host. This data includes information such as if the configuration changed, if the task passed/failed, and other module-specific data.
4. The JSON data returned back to Ansible can then be used to generate reports using templates or as inputs to subsequent modules.
5. Repeat step 3 for each task that exists within the play.
6. Repeat step 1 for each play within the playbook.
Shouldn't this mean that network devices should work out of the box with Ansible because they also support SSH? It is true that network devices do support SSH, but it is the first requirement combined with the second one that limits the functionality possible for network devices.
To start, most network devices do not support Python, so it makes using the default Ansible connection mechanism process a non-starter. That said, over the past few years, vendors have added Python support on several different device platforms. However, most of these platforms still lack the integration needed to allow Ansible to get direct access to a Linux shell over SSH with the proper permissions to copy over the required code, create temp directories and files, and execute the code on box. While all the parts are there for Ansible to work natively with SSH/Python _and_ Linux-based network devices, it still requires network vendors to open their systems more than they already have.
###### Note
It is worth noting that Arista does offer native integration because it is able to drop SSH users directly into a Linux shell with access to a Python execution engine, which in turn does allow Ansible to use its default connection mechanism. Because we called out Arista, we need to also highlight Cumulus as working with Ansibles default connection mechanism, too. This is because Cumulus Linux is native Linux, and there isnt a need to use a vendor API for the automation of the Cumulus Linux OS.
### Ansible Network Integrations
The previous section covered the way Ansible works by default. We looked at how Ansible sets up a connection to a device at the beginning of a _play_, executes tasks by copying Python code to the devices, executes the code, and then returns results back to the Ansible control host.
In this section, well take a look at what this process is when automating network devices with Ansible. As already covered, Ansible has a pluggable connection architecture. For _most_ network integrations, the `connection` parameter is set to `local`. The most common place to make the connection type local is within the playbook, as shown in the following example:
```
---
- name: TEST PLAYBOOK
hosts: cisco
connection: local
tasks:
- TASKS YOU WANT TO AUTOMATE
```
Notice how within the play definition, this example added the `connection` parameter as compared to the examples in the previous section.
This tells Ansible not to connect to the target device via SSH and to just connect to the local machine running the playbook. Basically, this delegates the connection responsibility to the actual Ansible modules being used within the _tasks_ section of the playbook. Delegating power for each type of module allows the modules to connect to the device in whatever fashion necessary; this could be NETCONF for Juniper and HP Comware7, eAPI for Arista, NX-API for Cisco Nexus, or even SNMP for traditional/legacy-based systems that dont have a programmatic API.
###### Note
Network integrations in Ansible come in the form of Ansible modules. While we continue to whet your appetite using terminology such as playbooks, plays, tasks, and modules to convey key concepts, each of these terms is covered in greater detail in [Ansible Terminology and Getting Started][3] and [Hands-on Look at Using Ansible for Network Automation][4].
Lets take a look at another sample playbook:
```
---
- name: TEST PLAYBOOK
hosts: cisco
connection: local
tasks:
- nxos_vlan: vlan_id=10 name=WEB_VLAN
```
If you notice, this playbook now includes a task, and this task uses the `nxos_vlan` module. The `nxos_vlan` module is just a Python file, and it is in this file where the connection to the Cisco NX-OS device is made using NX-API. However, the connection could have been set up using any other device API, and this is how vendors and users like us are able to build our own integrations. Integrations (modules) are typically done on a per-feature basis, although as youve already seen with modules like `napalm_install_config`, they can be used to _push_ a full configuration file, too.
One of the major differences is that with the default connection mechanism, Ansible launches a persistent SSH connection to the device, and this connection persists for a given play. When the connection setup and teardown occurs within the module instead, as with many network modules that use `connection=local`, Ansible logs in and out of the device on _every_ task rather than once per play.
And in traditional Ansible fashion, each network module returns JSON data. The only difference is that the massaging of this data happens locally on the Ansible control host rather than on the target node. The data returned back to the playbook varies per vendor and type of module, but as an example, many of the Cisco NX-OS modules return back existing state, proposed state, and end state, as well as the commands (if any) that are being sent to the device.
As you get started using Ansible for network automation, it is important to remember that setting the connection parameter to local is taking Ansible out of the connection setup/teardown process and leaving that up to the module. This is why modules supported for different types of vendor platforms will have different ways of communicating with the devices.
### Ansible Terminology and Getting Started
This chapter walks through many of the terms and key concepts that have been gradually introduced already in this report. These are terms such as _inventory file_, _playbook_, _play_, _tasks_, and _modules_. We also review a few other concepts that are helpful to be aware of when getting started with Ansible for network automation.
Please reference the following sample inventory file and playbook throughout this section, as they are continuously used in the examples that follow to convey what each Ansible term means.
_Sample inventory_:
```
# sample inventory file
# filename inventory
[all:vars]
user=admin
pwd=admin
[tor]
rack1-tor1 vendor=nxos
rack1-tor2 vendor=nxos
rack2-tor1 vendor=arista
rack2-tor2 vendor=arista
[core]
core1
core2
```
_Sample playbook_:
```
---
# sample playbook
# filename site.yml
- name: PLAY 1 - Top of Rack (TOR) Switches
hosts: tor
connection: local
tasks:
- name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
nxos_vlan:
vlan_id=10
name=WEB_VLAN
host={{ inventory_hostname }}
username=admin
password=admin
when: vendor == "nxos"
- name: ENSURE VLAN 10 EXISTS ON ARISTA TOR SWITCHES
eos_vlan:
vlanid=10
name=WEB_VLAN
host={{ inventory_hostname }}
username={{ user }}
password={{ pwd }}
when: vendor == "arista"
- name: PLAY 2 - Core (TOR) Switches
hosts: core
connection: local
tasks:
- name: ENSURE VLANS EXIST IN CORE
nxos_vlan:
vlan_id={{ item }}
host={{ inventory_hostname }}
username={{ user }}
password={{ pwd }}
with_items:
- 10
- 20
- 30
- 40
- 50
```
### Inventory File
Using an inventory file, such as the preceding one, enables us to automate tasks for specific hosts and groups of hosts by referencing the proper host/group using the `hosts` parameter that exists at the top section of each play.
It is also possible to store variables within an inventory file. This is shown in the example. If the variable is on the same line as a host, it is a host-specific variable. If the variables are defined within brackets such as `[all:vars]`, it means that the variables are in scope for the group `all`, which is a default group that includes _all_ hosts in the inventory file.
###### Note
Inventory files are the quickest way to get started with Ansible, but should you already have a source of truth for network devices such as a network management tool or CMDB, it is possible to create and use a dynamic inventory script rather than a static inventory file.
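As a very small, hedged sketch of what a dynamic inventory script can look like (the group and hostnames are made up), Ansible executes the script and expects JSON on standard output when it is called with `--list`, and host-specific variables when it is called with `--host <hostname>`:
```
#!/usr/bin/env python
# Minimal dynamic inventory sketch: prints the inventory as JSON.
# A real script would query a CMDB or network management tool instead.
import json
import sys

def main():
    inventory = {
        "cisco": {
            "hosts": ["nyc1.acme.com", "nyc2.acme.com"],
            "vars": {"vendor": "nxos"}
        },
        "_meta": {"hostvars": {}}
    }
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(inventory))
    else:
        # --host <hostname> should return host-specific variables (none here)
        print(json.dumps({}))

if __name__ == "__main__":
    main()
```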
### Playbook
The playbook is the top-level object that is executed to automate network devices. In our example, this is the file _site.yml_, as depicted in the preceding example. A playbook uses YAML to define the set of tasks to automate, and each playbook is comprised of one or more plays. This is analogous to a football playbook. Like in football, teams have playbooks made up of plays, and Ansible playbooks are made up of plays, too.
###### Note
YAML is a human-readable data format for which parsers exist in most programming languages. YAML is itself a superset of JSON, and it's quite easy to recognize YAML files, as they typically start with three dashes (hyphens), `---`.
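As a tiny illustration (the data here is made up), the same structure can be expressed in YAML and in JSON; because YAML is a superset of JSON, the JSON form is itself parseable by a YAML parser:
```
---
# YAML representation
vlans:
  - id: 10
    name: WEB_VLAN
  - id: 20
    name: APP_VLAN

# Equivalent JSON representation (also valid YAML):
# {"vlans": [{"id": 10, "name": "WEB_VLAN"}, {"id": 20, "name": "APP_VLAN"}]}
```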
### Play
One or more plays can exist within an Ansible playbook. In the preceding example, there are two plays within the playbook. Each starts with a _header_ section where play-specific parameters are defined.
The two plays from that example have the following parameters defined:
`name`
The text `PLAY 1 - Top of Rack (TOR) Switches` is arbitrary and is displayed when the playbook runs to improve readability during playbook execution and reporting. This is an optional parameter.
`hosts`
As covered previously, this is the host or group of hosts that are automated in this particular play. This is a required parameter.
`connection`
As covered previously, this is the type of connection mechanism used for the play. This is an optional parameter, but is commonly set to `local` for network automation plays.
Each play is comprised of one or more tasks.
### Tasks
Tasks represent what is automated in a declarative manner without worrying about the underlying syntax or "how" the operation is performed.
In our example, the first play has two tasks. Each task ensures VLAN 10 exists. The first task does this for Cisco Nexus devices, and the second task does this for Arista devices:
```
tasks:
- name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
nxos_vlan:
vlan_id=10
name=WEB_VLAN
host={{ inventory_hostname }}
username=admin
password=admin
when: vendor == "nxos"
```
Tasks can also use the `name` parameter just like plays can. As with plays, the text is arbitrary and is displayed when the playbook runs to improve readability during playbook execution and reporting. It is an optional parameter for each task.
The next line in the example task starts with `nxos_vlan`. This tells us that this task will execute the Ansible module called `nxos_vlan`.
Well now dig deeper into modules.
### Modules
It is critical to understand modules within Ansible. While any programming language can be used to write Ansible modules as long as they return JSON key-value pairs, they are almost always written in Python. In our example, we see two modules being executed: `nxos_vlan` and `eos_vlan`. The modules are both Python files; and in fact, while you can't tell from looking at the playbook, the real filenames are _nxos_vlan.py_ and _eos_vlan.py_, respectively.
Lets look at the first task in the first play from the preceding example:
```
- name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
nxos_vlan:
vlan_id=10
name=WEB_VLAN
host={{ inventory_hostname }}
username=admin
password=admin
when: vendor == "nxos"
```
This task executes `nxos_vlan`, which is a module that automates VLAN configuration. In order to use modules, including this one, you need to specify the desired state or configuration policy you want the device to have. This example states: VLAN 10 should be configured with the name `WEB_VLAN`, and it should exist on each switch being automated. We can see this easily with the `vlan_id` and `name` parameters. There are three other parameters being passed into the module as well. They are `host`, `username`, and `password`:
`host`
This is the hostname (or IP address) of the device being automated. Since the hosts we want to automate are already defined in the inventory file, we can use the built-in Ansible variable `inventory_hostname`. This variable is equal to what is in the inventory file. For example, on the first iteration, the host in the inventory file is `rack1-tor1`, and on the second iteration, it is `rack1-tor2`. These names are passed into the module and then within the module, a DNS lookup occurs on each name to resolve it to an IP address. Then the communication begins with the device.
`username`
Username used to log in to the switch.
`password`
Password used to log in to the switch.
The last piece to cover here is the use of the `when` statement. This is how Ansible performs conditional tasks within a play. As we know, there are multiple devices and types of devices that exist within the `tor` group for this play. Using `when` offers an option to be more selective based on any criteria. Here we are only automating Cisco devices because we are using the `nxos_vlan` module in this task, while in the next task, we are automating only the Arista devices because the `eos_vlan` module is used.
###### Note
This isnt the only way to differentiate between devices. This is being shown to illustrate the use of `when` and that variables can be defined within the inventory file.
Defining variables in an inventory file is great for getting started, but as you continue to use Ansible, youll want to use YAML-based variables files to help with scale, versioning, and minimizing change to a given file. This will also simplify and improve readability for the inventory file and each variables file used. An example of a variables file was given earlier when the build/push method of device provisioning was covered.
Here are a few other points to understand about the tasks in the last example:
* Play 1 task 1 shows the `username` and `password` hardcoded as parameters being passed into the specific module (`nxos_vlan`).
* Play 1 task 2 and play 2 pass variables into the module instead of hardcoding them. This keeps the `username` and `password` values out of the playbook itself, but it's worth noting that these variables are being pulled from the inventory file (for this example).
* Play 1 uses a _horizontal_ key=value syntax for the parameters being passed into the modules, while play 2 uses the vertical key=value syntax. Both work just fine. You can also use the native vertical YAML "key: value" syntax.
* The last task also introduces how to use a _loop_ within Ansible. This is by using `with_items` and is analogous to a for loop. That particular task is looping through five VLANs to ensure they all exist on the switch. Note: it's also possible to store these VLANs in an external YAML variables file (a brief sketch follows this list). Also note that the alternative to using `with_items` would be to have one task per VLAN—and that just wouldn't scale!
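Here is a brief sketch of that variables-file approach. The filename and the explicit `vars_files` usage are illustrative assumptions, not part of the original example; the VLAN list simply moves into its own YAML file:
```
---
# vlans.yml (hypothetical variables file)
vlans:
  - 10
  - 20
  - 30
  - 40
  - 50
```
The play would then load the file and loop over the variable, here written with the native "key: value" YAML syntax mentioned above:
```
- name: PLAY 2 - Core (TOR) Switches
  hosts: core
  connection: local
  vars_files:
    - vlans.yml
  tasks:
    - name: ENSURE VLANS EXIST IN CORE
      nxos_vlan:
        vlan_id: "{{ item }}"
        host: "{{ inventory_hostname }}"
        username: "{{ user }}"
        password: "{{ pwd }}"
      with_items: "{{ vlans }}"
```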
### Hands-on Look at Using Ansible for Network Automation
In the previous chapter, a general overview of Ansible terminology was provided. This covered many of the specific Ansible terms, such as playbooks, plays, tasks, modules, and inventory files. This section will continue to provide working examples of using Ansible for network automation, but will provide more detail on working with modules to automate a few different types of devices. Examples will include automating devices from multiple vendors, including Cisco, Arista, Cumulus, and Juniper.
The examples in this section assume the following:
* Ansible is installed.
* The proper APIs are enabled on the devices (NX-API, eAPI, NETCONF).
* Users exist with the proper permissions on the system to make changes via the API.
* All Ansible modules exist on the system and are in the library path.
###### Note
Setting the module and library path can be done within the _ansible.cfg_ file. You can also use the `-M` flag from the command line to change it when executing a playbook.
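As a brief, hedged sketch (the path is a placeholder), the library path could be set in _ansible.cfg_ like this:
```
# ansible.cfg (excerpt)
[defaults]
library = /usr/share/my_modules
```
The equivalent one-off form would be `ansible-playbook -i inventory site.yml -M /usr/share/my_modules`.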
The inventory used for the examples in this section is shown in the following section (with passwords removed and IP addresses changed). In this example, some hostnames are not FQDNs as they were in the previous examples.
### Inventory File
```
[cumulus]
cvx ansible_ssh_host=1.2.3.4 ansible_ssh_pass=PASSWORD
[arista]
veos1
[cisco]
nx1 hostip=5.6.7.8 un=USERNAME pwd=PASSWORD
[juniper]
vsrx hostip=9.10.11.12 un=USERNAME pwd=PASSWORD
```
###### Note
Just in case youre wondering at this point, Ansible does support functionality that allows you to store passwords in encrypted files. If you want to learn more about this feature, check out [Ansible Vault][5] in the docs on the Ansible website.
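As a minimal, hedged sketch (the filenames are placeholders), a variables file holding credentials could be encrypted with Vault and then used at run time like this:
```
# encrypt the file in place; you will be prompted to set a vault password
ansible-vault encrypt group_vars/all.yml

# supply the vault password when executing the playbook
ansible-playbook -i inventory site.yml --ask-vault-pass
```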
This inventory file has four groups defined with a single host in each group. Lets review each section in a little more detail:
Cumulus
The host `cvx` is a Cumulus Linux (CL) switch, and it is the only device in the `cumulus` group. Remember that CL is native Linux, so this means the default connection mechanism (SSH) is used to connect to and automate the CL switch. Because `cvx` is not defined in DNS or _/etc/hosts_, well let Ansible know not to use the hostname defined in the inventory file, but rather the name/IP defined for `ansible_ssh_host`. The username to log in to the CL switch is defined in the playbook, but you can see that the password is being defined in the inventory file using the `ansible_ssh_pass` variable.
Arista
The host called `veos1` is an Arista switch running EOS. It is the only host that exists within the `arista` group. As you can see for Arista, there are no other parameters defined within the inventory file. This is because Arista uses a special configuration file for their devices. This file is called _.eapi.conf_ and for our example, it is stored in the home directory. Here is the conf file being used for this example to function properly:
```
[connection:veos1]
host: 2.4.3.4
username: unadmin
password: pwadmin
```
This file contains all required information for Ansible (and the Arista Python library called _pyeapi_) to connect to the device using just the information as defined in the conf file.
Cisco
Just like with Cumulus and Arista, there is only one host (`nx1`) that exists within the `cisco` group. This is an NX-OS-based Cisco Nexus switch. Notice how there are three variables defined for `nx1`. They include `un` and `pwd`, which are accessed in the playbook and passed into the Cisco modules in order to connect to the device. In addition, there is a parameter called `hostip`. This is required because `nx1` is also not defined in DNS or configured in the _/etc/hosts_ file.
###### Note
We could have named this parameter anything. If automating a native Linux device, `ansible_ssh_host` is used just like we saw with the Cumulus example (if the name as defined in the inventory is not resolvable). In this example, we could have still used `ansible_ssh_host`, but it is not a requirement, since well be passing this variable as a parameter into Cisco modules, whereas `ansible_ssh_host` is automatically checked when using the default SSH connection mechanism.
Juniper
As with the previous three groups and hosts, there is a single host, `vsrx`, located within the `juniper` group. The setup within the inventory file is identical to the Cisco setup, as both are used in exactly the same way within the playbook.
### Playbook
The next playbook has four different plays. Each play is built to automate a specific group of devices based on vendor type. Note that this is only one way to perform these tasks within a single playbook. There are other ways in which we could have used conditionals (`when` statement) or created Ansible roles (which is not covered in this report).
Here is the example playbook:
```
---
- name: PLAY 1 - CISCO NXOS
hosts: cisco
connection: local
tasks:
- name: ENSURE VLAN 100 exists on Cisco Nexus switches
nxos_vlan:
vlan_id=100
name=web_vlan
host={{ hostip }}
username={{ un }}
password={{ pwd }}
- name: PLAY 2 - ARISTA EOS
hosts: arista
connection: local
tasks:
- name: ENSURE VLAN 100 exists on Arista switches
eos_vlan:
vlanid=100
name=web_vlan
connection={{ inventory_hostname }}
- name: PLAY 3 - CUMULUS
remote_user: cumulus
sudo: true
hosts: cumulus
tasks:
- name: ENSURE 100.10.10.1 is configured on swp1
cl_interface: name=swp1 ipv4=100.10.10.1/24
- name: restart networking without disruption
shell: ifreload -a
- name: PLAY 4 - JUNIPER SRX changes
hosts: juniper
connection: local
tasks:
- name: INSTALL JUNOS CONFIG
junos_install_config:
host={{ hostip }}
file=srx_demo.conf
user={{ un }}
passwd={{ pwd }}
logfile=deploysite.log
overwrite=yes
diffs_file=junpr.diff
```
You will notice the first two plays are very similar to what we already covered in the original Cisco and Arista example. The only difference is that each group being automated (`cisco` and `arista`) is defined in its own play, and this is in contrast to using the `when` conditional that was used earlier.
There is no right way or wrong way to do this. It all depends on what information is known up front and what fits your environment and use cases best, but our intent is to show a few ways to do the same thing.
The third play automates the configuration of interface `swp1` on the Cumulus Linux switch. The first task within this play ensures that `swp1` is a Layer 3 interface and is configured with the IP address 100.10.10.1. Because Cumulus Linux is native Linux, the networking service needs to be restarted for the changes to take effect. This could have also been done using Ansible handlers, which are out of the scope of this report, though a brief sketch follows. There is also an Ansible core module called `service` that could have been used, but that would disrupt networking on the switch; using `ifreload` restarts networking non-disruptively.
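As a rough sketch of that handler-based alternative (this is not how the report's playbook is written), the restart task could be converted into a handler so that `ifreload -a` only runs when the interface task actually reports a change:
```
---
- name: PLAY 3 - CUMULUS (handler variant, sketch only)
  remote_user: cumulus
  sudo: true
  hosts: cumulus
  tasks:
    - name: ENSURE 100.10.10.1 is configured on swp1
      cl_interface: name=swp1 ipv4=100.10.10.1/24
      notify: restart networking without disruption
  handlers:
    - name: restart networking without disruption
      shell: ifreload -a
```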
Up until now in this section, we looked at Ansible modules focused on specific tasks such as configuring interfaces and VLANs. The fourth play uses another option. Well look at a module that _pushes_ a full configuration file and immediately activates it as the new running configuration. This is what we showed previously using `napalm_install_config`, but this example uses a Juniper-specific module called `junos_install_config`.
This module `junos_install_config` accepts several parameters, as seen in the example. By now, you should understand what `user`, `passwd`, and `host` are used for. The other parameters are defined as follows:
`file`
This is the config file that is copied from the Ansible control host to the Juniper device.
`logfile`
This is optional, but if specified, it is used to store messages generated while executing the module.
`overwrite`
When set to yes/true, the complete configuration is replaced with the file being sent (default is false).
`diffs_file`
This is optional, but if specified, will store the diffs generated when applying the configuration. An example of the diff generated when just changing the hostname but still sending a complete config file is shown next:
```
# filename: junpr.diff
[edit system]
- host-name vsrx;
+ host-name vsrx-demo;
```
That covers the detailed overview of the playbook. Lets take a look at what happens when the playbook is executed:
###### Note
The `-i` flag is used to specify the inventory file to use. The `ANSIBLE_HOSTS` environment variable can also be set rather than using the flag each time a playbook is executed.
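For example (assuming the inventory file from the previous section sits in the current working directory), you could set the variable once and drop the flag; a small hedged sketch:
```
# set the inventory location once for this shell session
export ANSIBLE_HOSTS=inventory

# the -i flag can now be omitted
ansible-playbook demo.yml
```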
```
ntc@ntc:~/ansible/multivendor$ ansible-playbook -i inventory demo.yml
PLAY [PLAY 1 - CISCO NXOS] *************************************************
TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] *********************
changed: [nx1]
PLAY [PLAY 2 - ARISTA EOS] *************************************************
TASK: [ENSURE VLAN 100 exists on Arista switches] **************************
changed: [veos1]
PLAY [PLAY 3 - CUMULUS] ****************************************************
GATHERING FACTS ************************************************************
ok: [cvx]
TASK: [ENSURE 100.10.10.1 is configured on swp1] ***************************
changed: [cvx]
TASK: [restart networking without disruption] ******************************
changed: [cvx]
PLAY [PLAY 4 - JUNIPER SRX changes] ****************************************
TASK: [INSTALL JUNOS CONFIG] ***********************************************
changed: [vsrx]
PLAY RECAP ***************************************************************
to retry, use: --limit @/home/ansible/demo.retry
cvx : ok=3 changed=2 unreachable=0 failed=0
nx1 : ok=1 changed=1 unreachable=0 failed=0
veos1 : ok=1 changed=1 unreachable=0 failed=0
vsrx : ok=1 changed=1 unreachable=0 failed=0
```
You can see that each task completes successfully; and if you are on the terminal, youll see that each changed task was displayed with an amber color.
Lets run this playbook again. By running it again, we can verify that all of the modules are _idempotent_; and when doing this, we see that NO changes are made to the devices and everything is green:
```
PLAY [PLAY 1 - CISCO NXOS] ***************************************************
TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] ***********************
ok: [nx1]
PLAY [PLAY 2 - ARISTA EOS] ***************************************************
TASK: [ENSURE VLAN 100 exists on Arista switches] ****************************
ok: [veos1]
PLAY [PLAY 3 - CUMULUS] ******************************************************
GATHERING FACTS **************************************************************
ok: [cvx]
TASK: [ENSURE 100.10.10.1 is configured on swp1] *****************************
ok: [cvx]
TASK: [restart networking without disruption] ********************************
skipping: [cvx]
PLAY [PLAY 4 - JUNIPER SRX changes] ******************************************
TASK: [INSTALL JUNOS CONFIG] *************************************************
ok: [vsrx]
PLAY RECAP ***************************************************************
cvx : ok=2 changed=0 unreachable=0 failed=0
nx1 : ok=1 changed=0 unreachable=0 failed=0
veos1 : ok=1 changed=0 unreachable=0 failed=0
vsrx : ok=1 changed=0 unreachable=0 failed=0
```
Notice how there were 0 changes, but they still returned "ok" for each task. This verifies, as expected, that each of the modules in this playbook is idempotent.
### Summary
Ansible is a super-simple automation platform that is agentless and extensible. The network community continues to rally around Ansible as a platform that can be used for network automation tasks that range from configuration management to data collection and reporting. You can push full configuration files with Ansible, configure specific network resources with idempotent modules such as interfaces or VLANs, or simply just automate the collection of information such as neighbors, serial numbers, uptime, and interface stats, and customize reports as you need them.
Because of its architecture, Ansible proves to be a great tool available here and now that helps bridge the gap from _legacy CLI/SNMP_ network device automation to modern _API-driven_ automation.
Ansibles ease of use and agentless architecture accounts for the platforms increasing following within the networking community. Again, this makes it possible to automate devices without APIs (CLI/SNMP); devices that have modern APIs, including standalone switches, routers, and Layer 4-7 service appliances; and even those software-defined networking (SDN) controllers that offer RESTful APIs.
There is no device left behind when using Ansible for network automation.
-----------
作者简介:
![](https://d3tdunqjn7n0wj.cloudfront.net/360x360/jason-edelman-crop-5b2672f569f553a3de3a121d0179efcb.jpg)
Jason Edelman, CCIE 15394 & VCDX-NV 167, is a born and bred network engineer from the great state of New Jersey. He was the typical “lover of the CLI” or “router jockey.” At some point several years ago, he made the decision to focus more on software, development practices, and how they are converging with network engineering. Jason currently runs a boutique consulting firm, Network to Code, helping vendors and end users take advantage of new tools and technologies to reduce their operational inefficiencies. Jason has a Bachelors...
--------------------------------------------------------------------------------
via: https://www.oreilly.com/learning/network-automation-with-ansible
作者:[Jason Edelman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/ee4fd-jason-edelman
[1]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
[2]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_network_integrations
[3]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
[4]:https://www.oreilly.com/learning/network-automation-with-ansible#handson_look_at_using_ansible_for_network_automation
[5]:http://docs.ansible.com/ansible/playbooks_vault.html
[6]:https://www.oreilly.com/people/ee4fd-jason-edelman
[7]:https://www.oreilly.com/people/ee4fd-jason-edelman

View File

@ -0,0 +1,182 @@
Cathon is translating---
What is Docker?
================
![](https://d3tdunqjn7n0wj.cloudfront.net/720x480/card-catalog-crop-c76cf2c8b4881e6662c4e9058367a874.jpg)
This is an excerpt from [Docker: Up and Running][3] by Karl Matthias and Sean P. Kane. It may contain references to unavailable content that is part of the larger resource.
Docker was first introduced to the world—with no pre-announcement and little fanfare—by Solomon Hykes, founder and CEO of dotCloud, in a five-minute [lightning talk][4] at the Python Developers Conference in Santa Clara, California, on March 15, 2013. At the time of this announcement, only about 40 people outside dotCloud had been given the opportunity to play with Docker.
Within a few weeks of this announcement, there was a surprising amount of press. The project was quickly open-sourced and made publicly available on [GitHub][5], where anyone could download and contribute to the project. Over the next few months, more and more people in the industry started hearing about Docker and how it was going to revolutionize the way software was built, delivered, and run. And within a year, almost no one in the industry was unaware of Docker, but many were still unsure what it was exactly, and why people were so excited about it.
Docker is a tool that promises to easily encapsulate the process of creating a distributable artifact for any application, deploying it at scale into any environment, and streamlining the workflow and responsiveness of agile software organizations.
### The Promise of Docker
While ostensibly viewed as a virtualization platform, Docker is far more than that. Dockers domain spans a few crowded segments of the industry that include technologies like KVM, Xen, OpenStack, Mesos, Capistrano, Fabric, Ansible, Chef, Puppet, SaltStack, and so on. There is something very telling about the list of products that Docker competes with, and maybe youve spotted it already. For example, most engineers would not say that virtualization products compete with configuration management tools, yet both technologies are being disrupted by Docker. The technologies in that list are also generally acclaimed for their ability to improve productivity and thats what is causing a great deal of the buzz. Docker sits right in the middle of some of the most enabling technologies of the last decade.
If you were to do a feature-by-feature comparison of Docker and the reigning champion in any of these areas, Docker would very likely look like a middling competitor. Its stronger in some areas than others, but what Docker brings to the table is a feature set that crosses a broad range of workflow challenges. By combining the ease of application deployment tools like Capistrano and Fabric, with the ease of administrating virtualization systems, and then providing hooks that make workflow automation and orchestration easy to implement, Docker provides a very enabling feature set.
Lots of new technologies come and go, and a dose of skepticism about the newest rage is always healthy. Without digging deeper, it would be easy to dismiss Docker as just another technology that solves a few very specific problems for developers or operations teams. If you look at Docker as a virtualization or deployment technology alone, it might not seem very compelling. But Docker is much more than it seems on the surface.
It is hard and often expensive to get communication and processes right between teams of people, even in smaller organizations. Yet we live in a world where the communication of detailed information between teams is increasingly required to be successful. A tool that reduces the complexity of that communication while aiding in the production of more robust software would be a big win. And thats exactly why Docker merits a deeper look. Its no panacea, and implementing Docker well requires some thought, but Docker is a good approach to solving some real-world organizational problems and helping enable companies to ship better software faster. Delivering a well-designed Docker workflow can lead to happier technical teams and real money for the organizations bottom line.
So where are companies feeling the most pain? Shipping software at the speed expected in todays world is hard to do well, and as companies grow from one or two developers to many teams of developers, the burden of communication around shipping new releases becomes much heavier and harder to manage. Developers have to understand a lot of complexity about the environment they will be shipping software into, and production operations teams need to increasingly understand the internals of the software they ship. These are all generally good skills to work on because they lead to a better understanding of the environment as a whole and therefore encourage the designing of robust software, but these same skills are very difficult to scale effectively as an organizations growth accelerates.
The details of each companys environment often require a lot of communication that doesnt directly build value in the teams involved. For example, requiring developers to ask an operations team for _release 1.2.1_ of a particular library slows them down and provides no direct business value to the company. If developers could simply upgrade the version of the library they use, write their code, test with the new version, and ship it, the delivery time would be measurably shortened. If operations people could upgrade software on the host system without having to coordinate with multiple teams of application developers, they could move faster. Docker helps to build a layer of isolation in software that reduces the burden of communication in the world of humans.
Beyond helping with communication issues, Docker is opinionated about software architecture in a way that encourages more robustly crafted applications. Its architectural philosophy centers around atomic or throwaway containers. During deployment, the whole running environment of the old application is thrown away with it. Nothing in the environment of the application will live longer than the application itself and thats a simple idea with big repercussions. It means that applications are not likely to accidentally rely on artifacts left by a previous release. It means that ephemeral debugging changes are less likely to live on in future releases that picked them up from the local filesystem. And it means that applications are highly portable between servers because all state has to be included directly into the deployment artifact and be immutable, or sent to an external dependency like a database, cache, or file server.
This leads to applications that are not only more scalable, but more reliable. Instances of the application container can come and go with little repercussion on the uptime of the frontend site. These are proven architectural choices that have been successful for non-Docker applications, but the design choices included in Dockers own design mean that Dockerized applications will follow these best practices by requirement and thats a good thing.
### Benefits of the Docker Workflow
Its hard to cohesively group into categories all of the things Docker brings to the table. When implemented well, it benefits organizations, teams, developers, and operations engineers in a multitude of ways. It makes architectural decisions simpler because all applications essentially look the same on the outside from the hosting systems perspective. It makes tooling easier to write and share between applications. Nothing in this world comes with benefits and no challenges, but Docker is surprisingly skewed toward the benefits. Here are some more of the things you get with Docker:
Packaging software in a way that leverages the skills developers already have.
Many companies have had to create positions for release and build engineers in order to manage all the knowledge and tooling required to create software packages for their supported platforms. Tools like rpm, mock, dpkg, and pbuilder can be complicated to use, and each one must be learned independently. Docker wraps up all your requirements together into one package that is defined in a single file.
Bundling application software and required OS filesystems together in a single standardized image format.
In the past, you typically needed to package not only your application, but many of the dependencies that it relied on, including libraries and daemons. However, you couldnt ever ensure that 100 percent of the execution environment was identical. All of this made packaging difficult to master, and hard for many companies to accomplish reliably. Often someone running Scientific Linux would resort to trying to deploy a community package tested on Red Hat Linux, hoping that the package was close enough to what they needed. With Docker you deploy your application along with every single file required to run it. Dockers layered images make this an efficient process that ensures that your application is running in the expected environment.
Using packaged artifacts to test and deliver the exact same artifact to all systems in all environments.
When developers commit changes to a version control system, a new Docker image can be built, which can go through the whole testing process and be deployed to production without any need to recompile or repackage at any step in the process.
Abstracting software applications from the hardware without sacrificing resources.
Traditional enterprise virtualization solutions like VMware are typically used when people need to create an abstraction layer between the physical hardware and the software applications that run on it, at the cost of resources. The hypervisors that manage the VMs and each VMs running kernel use a percentage of the hardware systems resources, which are then no longer available to the hosted applications. A container, on the other hand, is just another process that talks directly to the Linux kernel and therefore can utilize more resources, up until the system or quota-based limits are reached.
When Docker was first released, Linux containers had been around for quite a few years, and many of the other technologies that it is built on are not entirely new. However, Dockers unique mix of strong architectural and workflow choices combine together into a whole that is much more powerful than the sum of its parts. Docker finally makes Linux containers, which have been around for more than a decade, approachable to the average technologist. It fits containers relatively easily into the existing workflow and processes of real companies. And the problems discussed above have been felt by so many people that interest in the Docker project has been accelerating faster than anyone could have reasonably expected.
In the first year, newcomers to the project were surprised to find out that Docker wasnt already production-ready, but a steady stream of commits from the open source Docker community has moved the project forward at a very brisk pace. That pace seems to only pick up steam as time goes on. As Docker has now moved well into the 1.x release cycle, stability is good, production adoption is here, and many companies are looking to Docker as a solution to some of the serious complexity issues that they face in their application delivery processes.
### What Docker Isnt
Docker can be used to solve a wide breadth of challenges that other categories of tools have traditionally been enlisted to fix; however, Dockers breadth of features often means that it lacks depth in specific functionality. For example, some organizations will find that they can completely remove their configuration management tool when they migrate to Docker, but the real power of Docker is that although it can replace some aspects of more traditional tools, it is usually compatible with them or even augmented by combining with them, as well. In the following list, we explore some of the tool categories that Docker doesnt directly replace but that can often be used in conjunction to achieve great results:
Enterprise Virtualization Platform (VMware, KVM, etc.)
A container is not a virtual machine in the traditional sense. Virtual machines contain a complete operating system, running on top of the host operating system. The biggest advantage is that it is easy to run many virtual machines with radically different operating systems on a single host. With containers, both the host and the containers share the same kernel. This means that containers utilize fewer system resources, but must be based on the same underlying operating system (i.e., Linux).
Cloud Platform (Openstack, CloudStack, etc.)
Like Enterprise virtualization, the container workflow shares a lot of similarities on the surface with cloud platforms. Both are traditionally leveraged to allow applications to be horizontally scaled in response to changing demand. Docker, however, is not a cloud platform. It only handles deploying, running, and managing containers on pre-existing Docker hosts. It doesnt allow you to create new host systems (instances), object stores, block storage, and the many other resources that are typically associated with a cloud platform.
Configuration Management (Puppet, Chef, etc.)
Although Docker can significantly improve an organizations ability to manage applications and their dependencies, it does not directly replace more traditional configuration management. Dockerfiles are used to define how a container should look at build time, but they do not manage the containers ongoing state, and cannot be used to manage the Docker host system.
Deployment Framework (Capistrano, Fabric, etc.)
Docker eases many aspects of deployment by creating self-contained container images that encapsulate all the dependencies of an application and can be deployed, in all environments, without changes. However, Docker cant be used to automate a complex deployment process by itself. Other tools are usually still needed to stitch together the larger workflow automation.
Workload Management Tool (Mesos, Fleet, etc.)
The Docker server does not have any internal concept of a cluster. Additional orchestration tools (including Dockers own Swarm tool) must be used to coordinate work intelligently across a pool of Docker hosts, and track the current state of all the hosts and their resources, and keep an inventory of running containers.
Development Environment (Vagrant, etc.)
Vagrant is a virtual machine management tool for developers that is often used to simulate server stacks that closely resemble the production environment in which an application is destined to be deployed. Among other things, Vagrant makes it easy to run Linux software on Mac OS X and Windows-based workstations. Since the Docker server only runs on Linux, Docker originally provided a tool called Boot2Docker to allow developers to quickly launch Linux-based Docker machines on various platforms. Boot2Docker is sufficient for many standard Docker workflows, but it doesnt provide the breadth of features found in Docker Machine and Vagrant.
Wrapping your head around Docker can be challenging when you are coming at it without a strong frame of reference. In the next chapter we will lay down a broad overview of Docker, what it is, how it is intended to be used, and what advantages it brings to the table when implemented with all of this in mind.
-----------------
作者简介:
#### [Karl Matthias][1]
Karl Matthias has worked as a developer, systems administrator, and network engineer for everything from startups to Fortune 500 companies. After working for startups overseas for a few years in Germany and the UK, he has recently returned with his family to Portland, Oregon to work as Lead Site Reliability Engineer at New Relic. When not devoting his time to things digital, he can be found herding his two daughters, shooting film with vintage cameras, or riding one of his bicycles.
#### [Sean Kane][2]
Sean Kane is currently a Lead Site Reliability Engineer for the Shared Infrastructure Team at New Relic. He has had a long career in production operations, with many diverse roles, in a broad range of industries. He has spoken about subjects like alerting fatigue and hardware automation at various meet-ups and technical conferences, including Velocity. Sean spent most of his youth living overseas, and exploring what life has to offer, including graduating from the Ringling Brother & Barnum & Bailey Clown College, completing 2 summer internship...
--------------------------------------------------------------------------------
via: https://www.oreilly.com/learning/what-is-docker
作者:[Karl Matthias ][a],[Sean Kane][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/5abbf-karl-matthias
[b]:https://www.oreilly.com/people/d5ce6-sean-kane
[1]:https://www.oreilly.com/people/5abbf-karl-matthias
[2]:https://www.oreilly.com/people/d5ce6-sean-kane
[3]:http://shop.oreilly.com/product/0636920036142.do?intcmp=il-security-books-videos-update-na_new_site_what_is_docker_text_cta
[4]:http://youtu.be/wW9CAH9nSLs
[5]:https://github.com/docker/docker
[6]:https://commons.wikimedia.org/wiki/File:2009_3962573662_card_catalog.jpg

View File

@ -1,6 +1,3 @@
@poodarchu 翻译中
Building a data science portfolio: Storytelling with data
========

View File

@ -1,233 +0,0 @@
ivo-wang translating
Setting Up Real-Time Monitoring with Ganglia for Grids and Clusters of Linux Servers
===========
Ever since system administrators have been in charge of managing servers and groups of machines, monitoring tools have been their best friends. You will probably be familiar with tools like [Nagios][11], [Zabbix][10], [Icinga][9], and Centreon. While those are the heavyweights of monitoring, setting them up and fully taking advantage of their features may be somewhat difficult for new users.
In this article we will introduce you to Ganglia, a monitoring system that is easily scalable and allows you to view a wide variety of system metrics for Linux servers and clusters (plus graphs) in real time.
[![Install Gangila Monitoring in Linux](http://www.tecmint.com/wp-content/uploads/2016/06/Install-Gangila-Monitoring-in-Linux.png)][8]
Install Gangila Monitoring in Linux
Ganglia lets you set up grids (locations) and clusters (groups of servers) for better organization.
Thus, you can create a grid composed of all the machines in a remote environment, and then group those machines into smaller sets based on other criteria.
In addition, Ganglia's web interface is optimized for mobile devices, and also allows you to export data in `.csv` and `.json` formats.
Our test environment will consist of a central CentOS 7 server (IP address 192.168.0.29) where we will install Ganglia, and an Ubuntu 14.04 machine (192.168.0.32), the box that we want to monitor through Ganglias web interface.
Throughout this guide we will refer to the CentOS 7 system as the master node, and to the Ubuntu box as the monitored machine.
### Installing and Configuring Ganglia
To install the monitoring utilities on the master node, follow these steps:
#### 1. Enable the [EPEL repository][7] and then install Ganglia and related utilities from there:
```
# yum update && yum install epel-release
# yum install ganglia rrdtool ganglia-gmetad ganglia-gmond ganglia-web
```
Along with ganglia (the application itself), the packages installed in the step above perform the following functions:
1. `rrdtool`, the Round-Robin Database, is a tool thats used to store and display the variation of data over time using graphs.
2. `ganglia-gmetad` is the daemon that collects monitoring data from the hosts that you want to monitor. On those hosts, as well as on the master node, it is also necessary to install ganglia-gmond (the monitoring daemon itself).
3. `ganglia-web` provides the web frontend where we will view the historical graphs and data about the monitored systems.
#### 2. Set up authentication for the Ganglia web interface (/usr/share/ganglia). We will use basic authentication as provided by Apache.
If you want to explore more advanced security mechanisms, refer to the [Authorization and Authentication][6]section of the Apache docs.
To accomplish this goal, create a username and assign a password to access a resource protected by Apache. In this example, we will create a username called `adminganglia` and assign a password of our choosing, which will be stored in /etc/httpd/auth.basic (feel free to choose another directory and/or file name; as long as Apache has read permissions on those resources, you will be fine):
```
# htpasswd -c /etc/httpd/auth.basic adminganglia
```
Enter the password for adminganglia twice before proceeding.
#### 3. Modify /etc/httpd/conf.d/ganglia.conf as follows:
```
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
AuthType basic
AuthName "Ganglia web UI"
AuthBasicProvider file
AuthUserFile "/etc/httpd/auth.basic"
Require user adminganglia
</Location>
```
#### 4. Edit /etc/ganglia/gmetad.conf:
First, use the gridname directive followed by a descriptive name for the grid youre setting up:
```
gridname "Home office"
```
Then, use data_source followed by a descriptive name for the cluster (group of servers), a polling interval in seconds and the IP address of the master and monitored nodes:
```
data_source "Labs" 60 192.168.0.29:8649 # Master node
data_source "Labs" 60 192.168.0.32 # Monitored node
```
#### 5. Edit /etc/ganglia/gmond.conf.
a) Make sure the cluster block looks as follows:
```
cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
```
b) In the udp_send_channel block, comment out the mcast_join directive:
```
udp_send_channel {
#mcast_join = 239.2.11.71
host = localhost
port = 8649
ttl = 1
}
```
c) Finally, comment out the mcast_join and bind directives in the udp_recv_channel block:
```
udp_recv_channel {
#mcast_join = 239.2.11.71 ## comment out
port = 8649
#bind = 239.2.11.71 ## comment out
}
```
Save the changes and exit.
#### 6. Open port 8649/udp and allow PHP scripts (run via Apache) to connect to the network using the necessary SELinux boolean:
```
# firewall-cmd --add-port=8649/udp
# firewall-cmd --add-port=8649/udp --permanent
# setsebool -P httpd_can_network_connect 1
```
#### 7. Restart Apache, gmetad, and gmond. Also, make sure they are enabled to start on boot:
```
# systemctl restart httpd gmetad gmond
# systemctl enable httpd gmetad gmond
```
At this point, you should be able to open the Ganglia web interface at `http://192.168.0.29/ganglia` and log in with the credentials from Step 2.
[![Gangila Web Interface](http://www.tecmint.com/wp-content/uploads/2016/06/Gangila-Web-Interface.png)][5]
Gangila Web Interface
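If you prefer a quick command-line check first, a hedged verification with curl (using the basic-auth user created in Step 2) should print a 200 status code after you supply the password, confirming that the Apache authentication is working:
```
$ curl -u adminganglia -s -o /dev/null -w "%{http_code}\n" http://192.168.0.29/ganglia/
```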
#### 8. On the Ubuntu host, we will only install ganglia-monitor, the equivalent of ganglia-gmond in CentOS:
```
$ sudo aptitude update && sudo aptitude install ganglia-monitor
```
#### 9. Edit the /etc/ganglia/gmond.conf file on the monitored box. This should be identical to the same file on the master node, except that the lines we commented out in the udp_send_channel and udp_recv_channel blocks should be enabled:
```
cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
udp_send_channel {
mcast_join = 239.2.11.71
host = localhost
port = 8649
ttl = 1
}
udp_recv_channel {
mcast_join = 239.2.11.71
port = 8649
bind = 239.2.11.71
}
```
Then, restart the service:
```
$ sudo service ganglia-monitor restart
```
#### 10. Refresh the web interface and you should be able to view the statistics and graphs for both hosts inside the Home office grid / Labs cluster (use the dropdown menu next to Home office grid to choose a cluster, Labs in our case):
[![Ganglia Home Office Grid Report](http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Home-Office-Grid-Report.png)][4]
Ganglia Home Office Grid Report
Using the menu tabs (highlighted above) you can access lots of interesting information about each server individually and in groups. You can even compare the stats of all the servers in a cluster side by side using the Compare Hosts tab.
Simply choose a group of servers using a regular expression and you will be able to see a quick comparison of how they are performing:
[![Ganglia Host Server Information](http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Server-Information.png)][3]
Ganglia Host Server Information
One of the features I personally find most appealing is the mobile-friendly summary, which you can access using the Mobile tab. Choose the cluster youre interested in and then the individual host:
[![Ganglia Mobile Friendly Summary View](http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Mobile-View.png)][2]
Ganglia Mobile Friendly Summary View
### Summary
In this article we have introduced Ganglia, a powerful and scalable monitoring solution for grids and clusters of servers. Feel free to install, explore, and play around with Ganglia as much as you like (by the way, you can even try out Ganglia in the demo provided on the project's [official website][1]).
While you're at it, you will also discover that several well-known companies, both inside and outside the IT world, use Ganglia. There are plenty of good reasons for that besides the ones we have shared in this article, with ease of use and graphs along with stats (it's nice to put a face to the name, isn't it?) probably being at the top.
But dont just take our word for it, try it out yourself and dont hesitate to drop us a line using the comment form below if you have any questions.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-configure-ganglia-monitoring-centos-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]:http://ganglia.info/
[2]:http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Mobile-View.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Server-Information.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Home-Office-Grid-Report.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/06/Gangila-Web-Interface.png
[6]:http://httpd.apache.org/docs/current/howto/auth.html
[7]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[8]:http://www.tecmint.com/wp-content/uploads/2016/06/Install-Gangila-Monitoring-in-Linux.png
[9]:http://www.tecmint.com/install-icinga-in-centos-7/
[10]:http://www.tecmint.com/install-and-configure-zabbix-monitoring-on-debian-centos-rhel/
[11]:http://www.tecmint.com/install-nagios-in-linux/

View File

@ -1,149 +0,0 @@
ucasFL translating
PyCharm - The Best Linux Python IDE
=========
![](https://fthmb.tqn.com/AVEbzYN3BPH_8cGYkPflIx58-XE=/768x0/filters:no_upscale()/about/pycharm2-57e2d5ee5f9b586c352c7493.png)
### Introduction
In this guide I will introduce you to the PyCharm integrated development environment which can be used to develop professional applications using the Python programming language.
Python is a great programming language because it is truly cross platform and can be used to develop a single application which will run on Windows, Linux and Mac computers without having to recompile any code.
PyCharm is an editor and debugger developed by [Jetbrains][1] who are the same people who developed Resharper which is a great tool used by Windows developers for refactoring code and to make their lives easier when writing .NET code. Many of the principles of [Resharper][2] have been added to the professional version of [PyCharm][3].
### How To Install PyCharm
I have written a guide showing how to get PyCharm, download it, extract the files and run it.
[Simply click this link][4].
### The Welcome Screen
When you first run PyCharm or when you close a project you will be presented with a screen showing a list of recent projects.
You will also see the following menu options:
* Create New Project
* Open A Project
* Checkout From Version Control
There is also a configure settings option which lets you set up the default Python version and other such settings.
### Creating A New Project
When you choose to create a new project you are provided with a list of possible project types as follows:
* Pure Python
* Django
* Flask
* Google App Engine
* Pyramid
* Web2Py
* Angular CLI
* AngularJS
* Foundation
* HTML5 Boilerplate
* React Starter Kit
* Twitter Bootstrap
* Web Starter Kit
This isn't a programming tutorial so I won't be listing what all of those project types are. If you want to create a base desktop application which will run on Windows, Linux and Mac then you can choose a Pure Python project and use the Qt libraries to develop graphical applications which look native to the operating system they are running on, regardless of where they were developed.
As well as choosing the project type you can also enter the name for your project and also choose the version of Python to develop against.
### Open A Project
You can open a project by clicking on the name within the recently opened projects list or you can click the open button and navigate to the folder where the project you wish to open is located.
### Checking Out From Source Control
PyCharm provides the option to check out project code from various online resources including [GitHub][5], [CVS][6], Git, [Mercurial][7] and [Subversion][8].
### The PyCharm IDE
The PyCharm IDE starts with a menu at the top and underneath this you have tabs for each open project.
On the right side of the screen are debugging options for stepping through code.
The left pane has a list of project files and external libraries.
To add a file you right-click on the project name and choose "new". You then get the option to add one of the following file types:
* File
* Directory
* Python Package
* Python File
* Jupyter Notebook
* HTML File
* Stylesheet
* JavaScript
* TypeScript
* CoffeeScript
* Gherkin
* Data Source
When you add a file, such as a Python file, you can start typing into the editor in the right panel.
The text is all colour coded, with some elements shown in bold. A vertical line shows the indentation level so you can be sure that you are indenting correctly.
The editor also includes full IntelliSense-style code completion, which means that as you start typing the names of libraries or recognised commands, you can complete them by pressing Tab.
### Debugging The Application
You can debug your application at any point by using the debugging options in the top right corner.
If you are developing a graphical application then you can simply press the green button to run the application. You can also press Shift and F10.
To debug the application you can either click the button next to the green arrow or press Shift and F9. You can place breakpoints in the code, so that the program stops on a given line, by clicking in the grey margin next to the line you wish to break at.
To make a single step forward you can press F8, which steps over the code. This means it will run the line but won't step into a function. To step into a function you would press F7. If you are in a function and want to step out to the calling function, press Shift and F8.
At the bottom of the screen whilst you are debugging you will see various windows such as a list of processes and threads, and variables that you are watching the values for. 
As you are stepping through code you can add a watch on a variable so that you can see when the value changes. 
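If you want a small piece of code to practise these debugging features on, the hypothetical script below (my own example, not part of PyCharm or this guide) gives you a function call to step into with F7, a loop to step over with F8, and a `running` variable worth adding a watch on:
```
# sample_debug.py -- a tiny script for practising breakpoints, stepping and watches

def cumulative_sums(values):
    """Return the running totals of a list of numbers."""
    totals = []
    running = 0
    for value in values:        # set a breakpoint here and add a watch on `running`
        running += value
        totals.append(running)
    return totals


if __name__ == "__main__":
    numbers = [3, 1, 4, 1, 5]
    print(cumulative_sums(numbers))   # step into this call with F7
```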
Another great option is to run the code with the coverage checker. The programming world has changed a lot over the years, and it is now common for developers to perform test-driven development, so that with every change they make they can check that they haven't broken another part of the system.
The coverage checker helps you run the program, perform some tests, and when you have finished it will tell you what percentage of the code was covered during your test run.
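As a rough illustration, a minimal test file like the hypothetical one below (it assumes the `sample_debug.py` script sketched earlier) is the sort of thing you would run with coverage enabled to see which lines your tests actually exercise:
```
# test_sample_debug.py -- a minimal unittest file to run under PyCharm's coverage tool
import unittest

from sample_debug import cumulative_sums


class CumulativeSumsTest(unittest.TestCase):
    def test_running_totals(self):
        # running totals of 3, 1, 4 should be 3, 4, 8
        self.assertEqual(cumulative_sums([3, 1, 4]), [3, 4, 8])

    def test_empty_input(self):
        self.assertEqual(cumulative_sums([]), [])


if __name__ == "__main__":
    unittest.main()
```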
There is also a tool for showing the name of a method or class, how many times the items were called, and how long was spent in that particular piece of code.
### Code Refactoring
A really powerful feature of PyCharm is the code refactoring option.
When you start to develop code little marks will appear in the right margin. If you type something which is likely to cause an error or just isn't written well then PyCharm will place a coloured marker.
Clicking on the coloured marker will tell you the issue and will offer a solution.
For example, if you have an import statement which imports a library and then don't use anything from that library, not only will the code turn grey, but the marker will also state that the library is unused.
Other warnings that appear are about good coding style, such as having only one blank line between an import statement and the start of a function. You will also be told when you have created a function whose name isn't in lowercase.
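As a contrived example (not from the original article), a snippet like this would trigger several of those markers: the first import is greyed out because it is never used, and the capitalised function name is flagged as a naming-convention warning:
```
import json   # greyed out and marked as unused: nothing from json is referenced
import os


def BuildTempPath(name):   # flagged: PEP 8 expects lowercase function names
    return os.path.join("/tmp", name)


print(BuildTempPath("example.txt"))
```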
You don't have to abide by all of the PyCharm rules. Many of them are just good coding guidelines and are nothing to do with whether the code will run or not.
The code menu has other refactoring options. For example, you can perform code cleanup and you can inspect a file or project for issues.
### Summary
PyCharm is a great editor for developing Python code on Linux, and there are two versions available. The Community edition is for the casual developer, whereas the Professional edition provides all the tools a developer could need for creating professional software.
--------------------------------------------------------------------------------
via: https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
作者:[Gary Newell ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com/gary-newell-2180098
[1]:https://www.jetbrains.com/
[2]:https://www.jetbrains.com/resharper/
[3]:https://www.jetbrains.com/pycharm/specials/pycharm/pycharm.html?&gclid=CjwKEAjw34i_BRDH9fbylbDJw1gSJAAvIFqU238G56Bd2sKU9EljVHs1bKKJ8f3nV--Q9knXaifD8xoCRyjw_wcB&gclsrc=aw.ds.ds&dclid=CNOy3qGQoc8CFUJ62wodEywCDg
[4]:https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
[5]:https://github.com/
[6]:http://www.linuxhowtos.org/System/cvs_tutorial.htm
[7]:https://www.mercurial-scm.org/
[8]:https://subversion.apache.org/

View File

@ -1,182 +0,0 @@
Getting Started with HTTP/2: Part 2
============================================================
![](https://static.viget.com/_284x284_crop_center-center/ben-t-http-blog-thumb-01_360.png?mtime=20160928234634)
Firmly planting a flag in the sand for HTTP/2 best practices for front end development.
If you have been keeping up with the talk of HTTP/2, you have probably tried it or at least thought about how to incorporate it into your projects. While there are a lot of hypotheses on how its features can change your workflow and improve speed and efficiency on the web, best practices still haven't quite been pinned down yet. What I want to cover in this post are some HTTP/2 best practices I have discovered on a recent project.
If you aren't quite sure what HTTP/2 is or why it offers to improve your work, [check out my first post for a bit of background][4]. 
One note though: before we can get going, I need to mention that while your browser probably supports HTTP/2, your server probably doesn't. Check in with your hosting service to see if they offer HTTP/2 compatibility. Otherwise, you may be able to spin up your own server. This post does not cover how to do that unfortunately, but you can always check out the [http2 github][5] for some tools to get going in that direction.
### 🙏 [Rubs Hands Together]
A good way to start is to first organize your files. Take a look at the file tree below for a starting point to organize your stylesheets:
```
/styles
|── /setup
| /* variables, mixins and functions */
|── /global
| /* reusable components that could be within any component or section */
|── /components
| /* specific components and sections */
|── setup.scss // index for setup styles
|── global.scss // index for global styles
```
This breaks out your styles into three main categories: Setup, Global and Components. I will get into what each of these directories offer to your project next.
### Setting Up
The Setup level directory will hold all of your variables, functions, mixins and any other definition that another file will need to compile properly. To make this directory fully reusable, it's a good idea to import the contents of this directory into `setup.scss` so that it looks something like this:
```
/* setup.scss */
/* variables */
@import "setup/variables/colors";
/* mixins */
@import "setup/mixins/color";
/* functions */
@import "setup/functions/color";
... etc
```
Now that we have a quick reference to any definition on the site, we should be sure to include it at the top of any style file we create from here on out.
### Going Global
Your next directory, Global, should contain components that can be reused across the site within multiple sections, or on every single page. Things like buttons, text and heading styles as well as your browser resets should go here. I do not recommend putting your header or footer styles in here because on some projects, the header is absent or different on certain pages. Furthermore, the footer is always the last element on the page, so it should not be a huge priority to load the styles for it before the user has loaded anything else on the site.
Keeping in mind that your Global styles probably won't work without the things we defined in the Setup directory, your Global file should look something like this:
```
/* global.scss */
/* application definitions */
@import "setup";
/* global styles */
@import "global/reset";
@import "global/buttons";
@import "global/typography";
@import "global/grid";
... etc
```
Note that the first thing to import is the Setup styles. This way, any following file that uses something defined in that will have a reference to pull from.
Since the Global styles will be needed on every page of the site, we can load them in the typical way, using a `<link>` in the `<head>`. What you end up with is a very light CSS file, or a theoretically light one, depending on how much global style you need.
### Finally, Your Components
Notice that I did not include an index file for the Components directory in the file tree above. This is really where HTTP/2 comes into play. Up until now, we have been following standard practices for typical site build out, maintaining a fairly lean infrastructure and opting to globalize only the most necessary styles. Components act as their own index files.
Most developers have their own way of organizing their components, so I am not going to bother going into strategies here. However, all of your components should look something like this:
```
/* header.scss */
/* application definitions */
@import "../setup";
header {
// styles
}
... etc
```
This way, again, you have those Setup styles there to make sure that everything is defined during compilation. You don't have to concatenate, minify or really do anything to these files other than compile them, and probably place them in an /assets directory, easy to find for your templates.
Now that our stylesheets are ready to go, building out the site should be simple.
### Building Out the Components
You probably have your own templating language of choice depending on the projects you are on, be it Twig, Rails, Jade or Handlebars. I think the best way to think about your components is that if a component has its own template file, it should have a corresponding stylesheet with the same name. This way your project has a nice 1:1 ratio across your templates and styles, and you know which file everything is in because they are named accordingly.
Now that that is out of the way, taking advantage of HTTP/2's multiplexing is really simple, so let's build a template:
```
{# header.html #}
{# compiled header styles #}
<link href="assets/components/header.css" rel="stylesheet" media="all">
<header>
<h1>This Awesome HTTP/2 Site</h1>
... etc
```
And that is pretty much it! You probably have a less heavy-handed way of linking to assets within your templates, but this shows you that all you need to do is link to that one small header style in the template file before you start your markup. This allows your site to load only the specific assets for the components on any given page and, furthermore, to prioritize the components from the top of your page to the bottom.
### Mixing It All Together
Now that all the components have a structure, the browser will render them something like this:
```
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" media="all" href="/assets/global.css">
</head>
<body>
<link rel="stylesheet" media="all" href="/assets/components/header.css">
<header>
... etc
</header>
<link rel="stylesheet" media="all" href="/assets/components/title.css">
<section class="title">
... etc
</section>
<link rel="stylesheet" media="all" href="/assets/components/image-component.css">
<section class="image-component">
... etc
</section>
<link rel="stylesheet" media="all" href="/assets/components/text-component.css">
<section class="text-component">
... etc
</section>
<link rel="stylesheet" media="all" href="/assets/components/footer.css">
<footer>
... etc
</footer>
</body>
</html>
```
This is an upper level approach, but you will probably have finer-tuned components on your project. For example, you may have a `<nav>` component within the header that has its own stylesheet to load. Feel free to go as deep as you want with your components in a way that makes sense - HTTP/2 will not penalize you with those extra requests!
### Conclusion
This is just a basic look at how to build a project with HTTP/2 in mind on the front end, but this only scratches the surface. Perhaps you noticed a method I used that can be improved upon. Please bring it up in the comments! As stated in my first post, HTTP/2 is probably going to undo some of the standards we have held since HTTP/1, so it will take some serious thinking and experimenting to move into a fully efficient world of HTTP/2 development.
--------------------------------------------------------------------------------
via: https://www.viget.com/articles/getting-started-with-http-2-part-2
作者:[Ben][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.viget.com/about/team/btinsley
[1]:https://twitter.com/home?status=Firmly%20planting%20a%20flag%20in%20the%20sand%20for%20HTTP%2F2%20best%20practices%20for%20front%20end%20development.%20https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-2
[2]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-2
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.viget.com%2Farticles%2Fgetting-started-with-http-2-part-2
[4]:https://www.viget.com/articles/getting-started-with-http-2-part-1
[5]:https://github.com/http2/http2-spec/wiki/Tools

View File

@ -1,602 +0,0 @@
GETTING STARTED WITH ANSIBLE
==========
This is a crash course on Ansible that you can also use as a template for small projects or to get you into this awesome tool. By the end of this guide, you will know enough to automate server configurations, deployments and more.
### What is Ansible and why you should care ?
Ansible is a configuration management system known for its simplicity. You only need SSH access to your servers or equipment. It also differs from other options because it pushes changes instead of pulling them as Puppet or Chef normally do. You can deploy code to any number of servers, configure network equipment or automate anything in your infrastructure.
#### Requirements
It's assumed that you are using Mac or Linux as your workstation, Ubuntu Trusty for your servers, and that you have some experience installing packages. Also, you will need the following software on your computer, so if you don't have it already, go ahead and install:
- Virtualbox
- Vagrant
- Mac users: Homebrew
#### Scenario
We are going to emulate 2 web application servers connecting to a MySQL database. The web application uses Rails 5 with Puma.
### Preparations
#### Vagrantfile
Create a folder for this project and save the following content in a file called: Vagrantfile
```
VMs = [
[ "web1", "10.1.1.11"],
[ "web2", "10.1.1.12"],
[ "dbserver", "10.1.1.21"],
]
Vagrant.configure(2) do |config|
VMs.each { |vm|
config.vm.define vm[0] do |box|
box.vm.box = "ubuntu/trusty64"
box.vm.network "private_network", ip: vm[1]
box.vm.hostname = vm[0]
box.vm.provider "virtualbox" do |vb|
vb.memory = "512"
end
end
}
end
```
### Configure your virtual network
We want our VMs to talk to each other, but we don't want that traffic to go out to your real network, so we are going to create a Host-Only adapter in VirtualBox.
1. Open Virtualbox
2. Go to Preferences
3. Go to Network
4. Click on Host-Only networks
5. Click to add a network
6. Click on Adapter
7. Set IPv4 to 10.1.1.1, IPv4 Network Mask: 255.255.255.0
8. Click Ok
#### Test your VMs and virtual network
In a terminal, in the directory for this project where you have the Vagrantfile, type the following command:
```
vagrant up
```
This will create your VMs so it may take a while. Check that everything worked by typing this command and verifying the output:
```
$ vagrant status
Current machine states:
web1 running (virtualbox)
web2 running (virtualbox)
master running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
```
Now log into each one of the VMs using the user and password `vagrant` and the IPs in the Vagrantfile; this will validate the VMs and add their keys to your known hosts file.
```
ssh vagrant@10.1.1.11 # password is `vagrant`
ssh vagrant@10.1.1.12
ssh vagrant@10.1.1.21
```
Congratulations! Now you have servers to play with. Here comes the exciting part!
### Install Ansible
For Mac users:
```
$ brew install ansible
```
For Ubuntu users:
```
$ sudo apt install ansible
```
Make sure you have a recent version of Ansible, that is 2.1 or later:
```
$ ansible --version
ansible 2.1.1.0
```
### The Inventory
Ansible uses an inventory to know what servers to work with and how to group them to perform tasks (in parallel). Let's create our inventory for this project and name it `inventory`, in the same folder as the Vagrantfile:
```
[all:children]
webs
db
[all:vars]
ansible_user=vagrant
ansible_ssh_pass=vagrant
[webs]
web1 ansible_host=10.1.1.11
web2 ansible_host=10.1.1.12
[db]
dbserver ansible_host=10.1.1.21
```
- `[all:children]` defines a group (all) made of other groups
- `[all:vars]` defines variables that belong to the group all
- `[webs]` defines a group, just like `[db]`
- The rest of the file is just declarations of hosts, with their names and IPs
- A blank line means end of a declaration
Now that we have an inventory we can start using ansible from the command line, specifying a host or a group to perform commands. Here is a typical example of a command to check connectivity to your servers:
```
$ ansible -i inventory all -m ping
```
- `-i` specifies the inventory file
- `all` specifies the server or group of servers to operate
- `-m` specifies an ansible module, in this case ping
Here is the output of this command:
```
dbserver | SUCCESS => {
"changed": false,
"ping": "pong"
}
web1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
web2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
Note that the servers may respond in a different order. This only depends on who responds first, but it is not relevant, because Ansible keeps the status of each server separate.
You can also run any command using another switch:
- `-a <command>`
```
$ ansible -i inventory all -a uptime
web1 | SUCCESS | rc=0 >>
21:43:27 up 25 min, 1 user, load average: 0.00, 0.01, 0.05
dbserver | SUCCESS | rc=0 >>
21:43:27 up 24 min, 1 user, load average: 0.00, 0.01, 0.05
web2 | SUCCESS | rc=0 >>
21:43:27 up 25 min, 1 user, load average: 0.00, 0.01, 0.05
```
Here is another example with only one server:
```
$ ansible -i inventory dbserver -a "df -h /"
dbserver | SUCCESS | rc=0 >>
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 1.4G 37G 4% /
```
### Playbooks
Playbooks are just YAML files that associate groups of servers in an inventory with commands. The correct word in Ansible is tasks, and a task can be a desired state, a shell command, or many other options. For a list of all the things you can do with Ansible, take a look at the list of all modules.
Here is an example of a playbook for running a shell command, save this as playbook1.yml:
```
---
- hosts: all
tasks:
- shell: uptime
```
- `---` is the start of the YAML file
- `- hosts`: specifies what group is going to be used
- `tasks`: marks the start of a list of tasks
- `- shell`: specifies the first task using the shell module
- REMEMBER: YAML requires indentation so make sure you are always following the correct structure in your playbooks
Run it with:
```
$ ansible-playbook -i inventory playbook1.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [web1]
ok: [web2]
ok: [dbmaster]
TASK [command] *****************************************************************
changed: [web1]
changed: [web2]
changed: [dbmaster]
PLAY RECAP *********************************************************************
dbmaster : ok=2 changed=1 unreachable=0 failed=0
web1 : ok=2 changed=1 unreachable=0 failed=0
web2 : ok=2 changed=1 unreachable=0 failed=0
```
As you can see, Ansible ran 2 tasks instead of just the one we have in our playbook. The TASK [setup] is an implicit task that runs first to capture information about the servers, like hostnames, IPs, distributions and many more details; that information can then be used to run conditional tasks.
There is also a final PLAY RECAP where Ansible shows how many tasks ran and the corresponding state for each. In our case, since we ran a shell command, Ansible doesn't know the resulting state, and it is then considered as changed.
### Installing Software
We are going to use apt to install software on our servers. For this we need to be root, so we have to use the become statement. Save this content in playbook2.yml and run it (`ansible-playbook -i inventory playbook2.yml`):
```
---
- hosts: webs
become_user: root
become: true
tasks:
- apt: name=git state=present
```
There are statements you can apply to all modules in Ansible; one is the name statement, which lets you print a more descriptive text about the task being executed. In order to use it, you keep your task the same but add `name: descriptive text` as the first line, so our previous task becomes:
```
---
- hosts: webs
become_user: root
become: true
tasks:
- name: This task will make sure git is present on the system
apt: name=git state=present
```
### Using `with_items`
When you are dealing with a list of items (packages to install, files to create, etc.), Ansible provides with_items. Here is how we use it in our playbook3.yml, adding at the same time some other statements we already know:
```
---
- hosts: all
become_user: root
become: true
tasks:
- name: Installing dependencies
apt: name={{item}} state=present
with_items:
- git
- mysql-client
- libmysqlclient-dev
- build-essential
- python-software-properties
```
### Using `template` and `vars`
`vars` is one statement that defines variables you can use either in `task` statements or inside `template` files. Jinja2 is the templating engine used in Ansible, but you don't need to learn a lot about it to use it. Define variables in your playbook like this:
```
---
- hosts: all
vars:
- secret_key: VqnzCLdCV9a3jK
- path_to_vault: /opt/very/deep/path
tasks:
- name: Setting a configuration file using template
template: src=myconfig.j2 dest={{path_to_vault}}/app.conf
```
As you can see, I can use {{path_to_vault}} as part of the playbook, and also, since I am using a template statement, I can use any variable inside the myconfig.j2 file, which has to be stored in a subfolder called templates. Your project tree should look like:
```
├── Vagrantfile
├── inventory
├── playbook1.yml
├── playbook2.yml
└── templates
└── myconfig.j2
```
When Ansible finds a template statement, it will look into the templates folder and expand the variables surrounded by {{ and }}.
Example template:
```
this is just an example
vault_dir: {{path_to_vault}}
secret_password: {{secret_key}}
```
You can also use `template` even if you are not expanding variables. I do this in advance, considering I may add variables later. For example, let's create a `hosts.j2` template and add the hostnames and IPs:
```
10.1.1.11 web1
10.1.1.12 web2
10.1.1.21 dbserver
```
This will require a statement like this:
```
- name: Installing the hosts file in all servers
template: src=hosts.j2 dest=/etc/hosts mode=644
```
### Shell commands
You should always try to use modules because Ansible can track the state of the task and avoid repeating it unnecessarily, but there are times when a shell command is unavoidable. For those cases Ansible offers two options:
- command: Literally just running a command without environment variables or redirections (|, <, >, etc.)
- shell: Runs /bin/sh and expands variables and redirections
#### Other useful modules
- apt_repository Add/Remove package repositories in Debian family
- yum_repository Add/Remove package repositories in RedHat family
- service Start/Stop/Restart/Enable/Disable services
- git Deploy code from a git server
- unarchive Unarchive packages from the web or local sources
#### Running a task only in one server
Rails uses `migrations` to make gradual changes to your DB, but since you have more than one app server, these migrations cannot be assigned as a group task; instead, we need only one server to run the migrations. This is where run_once comes in: run_once will delegate the task to one server and wait until it is done before continuing with the next task. You only need to set run_once: true in your task.
```
- name: 'Run db:migrate'
shell: cd {{appdir}};rails db:migrate
run_once: true
```
##### Tasks that can fail
By specifying ignore_errors: true you can run a task that may fail without affecting the completion of the rest of your playbook. This is useful, for example, when deleting a log file that will not exist on the first run.
```
- name: 'Delete logs'
shell: rm -f /var/log/nginx/errors.log
ignore_errors: true
```
##### Putting it all together
Now using what we previously learned, here is the final version of each file:
Vagrantfile:
```
VMs = [
[ "web1", "10.1.1.11"],
[ "web2", "10.1.1.12"],
[ "dbserver", "10.1.1.21"],
]
Vagrant.configure(2) do |config|
VMs.each { |vm|
config.vm.define vm[0] do |box|
box.vm.box = "ubuntu/trusty64"
box.vm.network "private_network", ip: vm[1]
box.vm.hostname = vm[0]
box.vm.provider "virtualbox" do |vb|
vb.memory = "512"
end
end
}
end
```
inventory:
```
[all:children]
webs
db
[all:vars]
ansible_user=vagrant
ansible_ssh_pass=vagrant
[webs]
web1 ansible_host=10.1.1.11
web2 ansible_host=10.1.1.12
[db]
dbserver ansible_host=10.1.1.21
```
templates/hosts.j2:
```
10.1.1.11 web1
10.1.1.12 web2
10.1.1.21 dbserver
```
templates/my.cnf.j2:
```
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
server-id = 1
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 0.0.0.0
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
```
final-playbook.yml:
```
- hosts: all
become_user: root
become: true
tasks:
- name: 'Install common software on all servers'
apt: name={{item}} state=present
with_items:
- git
- mysql-client
- libmysqlclient-dev
- build-essential
- python-software-properties
- name: 'Install hosts file'
template: src=hosts.j2 dest=/etc/hosts mode=644
- hosts: db
become_user: root
become: true
tasks:
- name: 'Software for DB server'
apt: name={{item}} state=present
with_items:
- mysql-server
- percona-xtrabackup
- mytop
- mysql-utilities
- name: 'MySQL config file'
template: src=my.cnf.j2 dest=/etc/mysql/my.cnf
- name: 'Restart MySQL'
service: name=mysql state=restarted
- name: 'Grant access to web app servers'
shell: echo 'GRANT ALL PRIVILEGES ON *.* TO "root"@"%" WITH GRANT OPTION;FLUSH PRIVILEGES;'|mysql -u root mysql
- hosts: webs
vars:
- appdir: /opt/dummyapp
become_user: root
become: true
tasks:
- name: 'Add ruby-ng repo'
apt_repository: repo='ppa:brightbox/ruby-ng'
- name: 'Install rails software'
apt: name={{item}} state=present
with_items:
- ruby-dev
- ruby-all-dev
- ruby2.2
- ruby2.2-dev
- ruby-switch
- libcurl4-openssl-dev
- libssl-dev
- zlib1g-dev
- nodejs
- name: 'Set ruby to 2.2'
shell: ruby-switch --set ruby2.2
- name: 'Install gems'
shell: gem install bundler rails
- name: 'Kill puma if running'
shell: file /run/puma.pid >/dev/null && kill `cat /run/puma.pid` 2>/dev/null
ignore_errors: True
- name: 'Clone app repo'
git:
repo=https://github.com/c0d5x/rails_dummyapp.git
dest={{appdir}}
version=staging
force=yes
- name: 'Run bundler'
shell: cd {{appdir}};bundler
- name: 'Run db:setup'
shell: cd {{appdir}};rails db:setup
run_once: true
- name: 'Run db:migrate'
shell: cd {{appdir}};rails db:migrate
run_once: true
- name: 'Run rails server'
shell: cd {{appdir}};rails server -b 0.0.0.0 -p 80 --pid /run/puma.pid -d
```
### Turn up your environment
Having these files in the same directory, turn up your dev environment by running:
```
vagrant up
ansible-playbook -i inventory final-playbook.yml
```
#### Deployment of new code
Make changes to your code and push those changes to your repo. Then, simply make sure you have the correct branch in your git statement:
```
- name: 'Clone app repo'
git:
repo=https://github.com/c0d5x/rails_dummyapp.git
dest={{appdir}}
version=staging
force=yes
```
As an example, you can change the version field to master and run the playbook again:
```
ansible-playbook -i inventory final-playbook.yml
```
Check that the page has changed on any of the web servers: `http://10.1.1.11` or `http://10.1.1.12`. Change it back to `version=staging` and rerun the playbook and check the page again.
You can also create an alternative playbook that has only the tasks related to the deployment so that it runs faster.
### What is next !?
This is a very small portion of what Ansible can do. We didn't touch roles, filters, debug or many other awesome features that it offers, but hopefully it gives you a good start! So, go ahead and start using it and learn as you go. If you have any questions, you can reach me on Twitter or comment below and let me know what else you'd like to find out about Ansible!
--------------------------------------------------------------------------------
via: https://gorillalogic.com/blog/getting-started-with-ansible/?utm_source=webopsweekly&utm_medium=email
作者:[JOSE HIDALGO][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://gorillalogic.com/author/josehidalgo/

View File

@ -1,112 +0,0 @@
### [Getting started with Inkscape on Fedora][2]
![inkscape-gettingstarted](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gettingstarted-945x400.png)
Inkscape is a popular, full-featured, free and open source vector [graphics editor][3] available in the official Fedora repositories. It's specifically tailored for creating vector graphics in the [SVG format][4]. Inkscape is great for creating and manipulating pictures and illustrations. It's also good for creating diagrams and user interface mockups.
[
![cyberscoty-landscape-800px](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/cyberscoty-landscape-800px.png)
][5]
[Windmill Landscape][1] illustration created using Inkscape
The [screenshots page on the official website][6] has some great examples of what can be done with Inkscape. The majority of the featured images here on the Fedora Magazine are also created using Inkscape, including this recent featured image:
[
![communty](https://cdn.fedoramagazine.org/wp-content/uploads/2016/09/communty.png)
][7]
A recent featured image here on the Fedora Magazine that was created with Inkscape
### Installing Inkscape on Fedora
Inkscape is [available in the official Fedora repositories][8], so it's super easy to install using the Software app in Fedora Workstation:
[
![inkscape-gnome-software](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gnome-software.png)
][9]
Alternatively, if you are comfortable with the command line, you can install using the following `dnf` command:
```
sudo dnf install inkscape
```
### Dive into Inkscape (getting started)
When opening the app for the first time, you are greeted with a blank page, and a bunch of different toolbars. For beginners, the three most important of these toolbars are the Toolbar, the Tools Control Bar, and the Colour Palette:
[
![inkscape_window](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape_window.png)
][10]
The **Toolbar** provides all the basic tools for creating drawings, including tools such as:
* The rectangle tool, for drawing rectangles and squares
* The star / polygon (shapes) tool
* The circle tool, for drawing ellipses and circles
* The text tool, for adding labels and other text
* The path tool, for creating or editing more complex or customized shapes
* The select tool for selecting objects in your drawing
The **Colour Palette** provides a quick way to set the colour of the currently selected object. The **Tools Control Bar** provides all the settings for the currently selected tool in the Toolbar. Each time you select a new tool, the Tools Control Bar will update with the settings for that tool:
[
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-toolscontrolbar.gif)
][11]
### Drawing shapes
Next, let's draw a star with Inkscape. First, choose the star tool from the **Toolbar**, and click and drag on the main drawing area.
You'll probably notice your star looks a lot like a triangle. To change this, play with the Corners option in the **Tools Control Bar** and add a few more points. Finally, when you're done, with the star still selected, choose a colour from the **Palette** to change the colour of your star:
[
![inkscape-drawastar](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-drawastar.gif)
][12]
Next, experiment with some of the other shapes tools in the Toolbar, such as the rectangle tool, the spiral tool and the circle tool. Also play around with some of the settings for each tool to create a bunch of unique shapes.
### Selecting and moving objects in your drawing
Now you have a bunch of shapes, and can use the Select tool to move them around. To use the select tool, first select it from the toolbar, and then click on the shape you want to manipulate. Then click and drag the shape to where you want it to be.
When a shape is selected, you can also use the resize handles to scale the shape. Additionally, if you click on a shape that is selected, the resize handles change to rotate mode, allowing you to spin your shape:
[
![inkscape-movingshapes](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-movingshapes.gif)
][13]
* * *
Inkscape is an awesome piece of software that is packed with many more tools and features. In the next articles in this series, we will cover more of the features and options you can use to create awesome illustrations and documents.
-----------------------
作者简介Ryan is a designer that works on stuff for Fedora. He uses Fedora Workstation as his primary desktop, along with the best tools from the Libre Graphics world, notably, the vector graphics editor, Inkscape.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-inkscape-fedora/
作者:[Ryan Lerch][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ryanlerch.id.fedoraproject.org/
[1]:https://openclipart.org/detail/185885/windmill-in-landscape
[2]:https://fedoramagazine.org/getting-started-inkscape-fedora/
[3]:https://inkscape.org/
[4]:https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[5]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/cyberscoty-landscape-800px.png
[6]:https://inkscape.org/en/about/screenshots/
[7]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/09/communty.png
[8]:https://apps.fedoraproject.org/packages/inkscape
[9]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-gnome-software.png
[10]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape_window.png
[11]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-toolscontrolbar.gif
[12]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-drawastar.gif
[13]:https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-movingshapes.gif

View File

@ -1,74 +0,0 @@
How to Use Old Xorg Apps in Unity 8 on Ubuntu 16.10
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-feature-image.jpg "How to Use Old Xorg Apps in Unity 8 on Ubuntu 16.10s")
With the release of Ubuntu 16.10, Unity 8 has been getting more attention than usual. This is because the latest release of everyone's favorite Linux distribution comes with an experimental desktop to play with. This desktop is the Unity environment most are used to, with a twist. It no longer makes use of X11 graphics technology; instead, the makers of Ubuntu have gone a different way.
In its place, Unity 8 is using Mir, Ubuntu's answer to calls for a better-performing display server on Linux. This technology has been in heavy use already on the Ubuntu phone and tablet, but this new release is the first time we've seen it on the desktop.
This technology is new and shiny. As a result, not a lot of established Linux programs can work on it, as most, if not all, of these tools are built to work with Xorg and X11. However, if you've been wanting to try out Unity 8, you'll be happy to know that it is indeed possible to get these old Xorg apps working in Unity 8. Here's how!
### Logging Into Unity 8
![unity8-select-unity-8-login](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-select-unity-8-login.jpg "unity8-select-unity-8-login")
Unity 8 comes as an optional session in Ubuntu 16.10. There's one key thing to keep in mind before using it: it will not load with AMD graphics drivers, or Intel for that matter. The only supported graphics drivers as of now are the open source Nvidia drivers. To use the Unity 8 session, start up Ubuntu like normal. Then, before logging in, click the Ubuntu icon above your username and select “Unity8.” If all goes well, the new, experimental desktop will load.
**Note**: Unity 8 is very new and unstable. Use at your own risk.
### Installing Libertine
Xorg programs (like Firefox, etc.) do work in Unity 8; they just need a little tweak before anything will run. Start off by opening a terminal on the Mir desktop. This is done by clicking the terminal icon in the “scopes” window. Once open, enter your password. After that, enter the following commands:
![unity8-installing-libertine-in-terminal](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-installing-libertine-in-terminal.jpg "unity8-installing-libertine-in-terminal")
```
sudo apt install libertine-tools libertine-scope libertine
```
When these programs finish installing, click and drag the scope window to refresh it. Then, click on the top-hat to launch libertine.
### Creating Xorg Containers
With Libertine open, it's time to create some containers. These containers are special, as they allow X11-based Linux programs to run inside of a container on the Mir/Unity 8 desktop. Additionally, check the “i386 multiarch support” box for 32-bit support. Otherwise, leave everything as is (or give it a name and password), and click OK.
![unity8-libertine-create-new-container](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-libertine-create-new-container.jpg "unity8-libertine-create-new-container")
From this point on, the Xorg container is ready to use. Look for it in Libertine and launch the container. It also should be noted that containers can be erased by right-clicking on them, then selecting the “Delete” option.
**Note**: each Xorg container has a maximum memory limit of 500 megabytes, so multiple containers may be necessary.
### Installing Software
![unity8-libertine-install-software](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/unity8-libertine-install-software.jpg "unity8-libertine-install-software")
Software is installed in Libertine containers in two ways. The first way is to launch the container and select “Enter package name or Debian file,” meaning you can find the name of a program in the software center or terminal and enter it into Libertine to install it, or specify a .DEB package file for installation. The second way is to search for the package directly within the Libertine LXC container.
**Note**: Unity 8 is very new, and some programs may not load or fully install with Libertine.
### Conclusion
Unity 8 shows a lot of promise. It's modern, sleek, and faster than any iteration of Unity that came before it. The only thing that is holding it back is adoption. The simple fact is that most users would rather have programs that work instead of a fancy, fresh desktop. To an extent, using Libertine solves this issue, but it won't work forever. Sooner or later Canonical will need to start porting programs on their own or reach out to the community as a whole to make this happen.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/use-old-xorg-apps-unity-8/
作者:[Derrik Diener][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/derrikdiener/
[1]:https://www.maketecheasier.com/use-old-xorg-apps-unity-8/#respond
[3]:https://www.maketecheasier.com/shimo-vpn-client-for-mac/
[4]:https://www.maketecheasier.com/schedule-windows-empty-recycle-bin/
[5]:mailto:?subject=How%20to%20Use%20Old%20Xorg%20Apps%20in%20Unity%208%20on%20Ubuntu%2016.10&body=https%3A%2F%2Fwww.maketecheasier.com%2Fuse-old-xorg-apps-unity-8%2F
[6]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fuse-old-xorg-apps-unity-8%2F&text=How+to+Use+Old+Xorg+Apps+in+Unity+8+on+Ubuntu+16.10
[7]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fuse-old-xorg-apps-unity-8%2F
[8]:https://www.maketecheasier.com/category/linux-tips/

View File

@ -1,3 +1,4 @@
Translating by cposture 20161228
# Applying the Linus Torvalds “Good Taste” Coding Requirement
In [a recent interview with Linus Torvalds][1], the creator of Linux, at approximately 14:20 in the interview, he made a quick point about coding with “good taste”. Good taste? The interviewer prodded him for details and Linus came prepared with illustrations.
@ -44,7 +45,7 @@ Again, the purpose of this code was to only initialize the values of the points
To accomplish this I initially looped over every point in the grid and used conditionals to test for the edges. This is what it looked like:
```
```Tr
for (r = 0; r < GRID_SIZE; ++r) {
for (c = 0; c < GRID_SIZE; ++c) {
```

View File

@ -1,49 +0,0 @@
### [Inkscape: Adding some colour][1]
![inkscape-addingcolour](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-addingcolour-945x400.png)
In our previous Inkscape article, [we covered the absolute basics of getting started with Inkscape][2] — installing, and how to create basic shapes and manipulate them. We also covered changing the colour of Inkscape objects using the Palette. While the Palette is great for quickly changing the colour of your objects from a pre-defined list, most times you will need more control over the colours of your objects. This is where we use one of the most important dialogs in Inkscape — the Fill and Stroke dialog.
**A quick note about the animations in this post:** some of the colours in the animations appear banded. This is just an artifact of the way the animations are created. When you try this out in Inkscape you will see nice smooth gradients of colour.
### Using the Fill / Stroke dialog
To open the Fill and Stroke dialog in Inkscape, choose `Object` > `Fill and Stroke` from the main menu. Once opened, the main three tabs of this dialog allow you to inspect and change the Fill colour, Stroke colour, and Stroke style of the currently selected object.
![open-fillstroke](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/open-fillstroke.gif)
In Inkscape, the Fill is the main colour given to the body of an object. The stroke of the object is an optional outline of your object. The stroke of an object also has additional styles — configurable in the Stroke style tab — allowing you to change the thickness of the stroke, create a dotted outline, or add rounded corners to your stroke. In this next animation, I change the fill colour of the star, and then change the stroke colour, and tweak the thickness of the stroke:
![using-fillstroke](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/using-fillstroke.gif)
### Adding and Editing a gradient
A gradient can also be the Fill (or the stroke) of an object. To quickly set a gradient fill from the Fill and Stroke dialog, first choose the Fill tab, then pick the linear gradient option:
![create-gradient](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/create-gradient.gif)
To edit our gradient further, we need to use the specialised Gradient Tool. Choose the Gradient tool from the toolbar, and some additional gradient editing handles will appear on your selected shape. **Moving the handles** around will change the positioning of the gradient. If you **click on an individual handle**, you can also change the colour of that handle in the Fill and Stroke dialog. To **add an additional stop in your gradient**, double click on the line connecting the handles, and a new handle will appear.
![editing-gradient](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/editing-gradient.gif)
* * *
That covers the basics of adding some more colour and gradients to your Inkscape drawings. The **Fill and Stroke dialog** also has many other options to explore, like pattern fills, different gradient styles, and many different stroke styles. Also check out the additional options in the **Tools control bar** for the **Gradient Tool** to see how you can tweak gradients in different ways too.
-----------------------
作者简介Ryan is a designer that works on stuff for Fedora. He uses Fedora Workstation as his primary desktop, along with the best tools from the Libre Graphics world, notably, the vector graphics editor, Inkscape.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/inkscape-adding-colour/
作者:[Ryan Lerch][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://ryanlerch.id.fedoraproject.org/
[1]:https://fedoramagazine.org/inkscape-adding-colour/
[2]:https://fedoramagazine.org/getting-started-inkscape-fedora/

View File

@ -1,5 +1,4 @@
**************Translating by messon007******************
wcnnbdk1 translating
# Perl and the birth of the dynamic web
>The fascinating story of Perl's role in the dynamic web spans newsgroups and mailing lists, computer science labs, and continents.

View File

@ -1,3 +1,5 @@
translating by ypingcn.
CLOUD FOCUSED LINUX DISTROS FOR PEOPLE WHO BREATHE ONLINE
============================================================

View File

@ -1,126 +0,0 @@
翻译中 by zky001
How to check if port is in use on Linux or Unix
============================================================
[
![](https://s0.cyberciti.org/images/category/old/linux-logo.png)
][1]
How do I determine if a port is in use under a Linux or Unix-like system? How can I verify which ports are listening on a Linux server?
It is important that you verify which ports are listening on the server's network interfaces. You need to pay attention to open ports to detect an intrusion. Apart from intrusion detection, for troubleshooting purposes, it may be necessary to check if a port is already in use by a different application on your servers. For example, you may install the Apache and Nginx servers on the same system, so it is necessary to know if Apache or Nginx is using TCP port # 80/443. This quick tutorial provides steps to use the netstat, nmap and lsof commands to check the ports in use and view the application that is utilizing the port.
### How to check the listening ports and applications on Linux:
1. Open a terminal application i.e. shell prompt.
2. Run any one of the following command:
```
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O IP-address-Here
```
Let us see commands and its output in details.
### Option #1: lsof command
The syntax is:
```
$ sudo lsof -i -P -n
$ sudo lsof -i -P -n | grep LISTEN
$ doas lsof -i -P -n | grep LISTEN ### [OpenBSD] ###
```
Sample outputs:
[
![Fig.01: Check the listening ports and applications with lsof command](https://s0.cyberciti.org/uploads/faq/2016/11/lsof-outputs.png)
][2]
Fig.01: Check the listening ports and applications with lsof command
Consider the last line from above outputs:
```
sshd 85379 root 3u IPv4 0xffff80000039e000 0t0 TCP 10.86.128.138:22 (LISTEN)
```
- sshd is the name of the application.
- 10.86.128.138 is the IP address to which the sshd application is bound (LISTEN)
- 22 is the TCP port that is being used (LISTEN)
- 85379 is the process ID of the sshd process
### Option #2: netstat command
You can check the listening ports and applications with netstat as follows.
### Linux netstat syntax
```
$ netstat -tulpn | grep LISTEN
```
### FreeBSD/MacOS X netstat syntax
```
$ netstat -anp tcp | grep LISTEN
$ netstat -anp udp | grep LISTEN
```
### OpenBSD netstat syntax
```
$ netstat -na -f inet | grep LISTEN
$ netstat -nat | grep LISTEN
```
### Option #3: nmap command
The syntax is:
```
$ sudo nmap -sT -O localhost
$ sudo nmap -sU -O 192.168.2.13 ##[ list open UDP ports ]##
$ sudo nmap -sT -O 192.168.2.13 ##[ list open TCP ports ]##
```
Sample outputs:
[
![Fig.02: Determines which ports are listening for TCP connections using nmap](https://s0.cyberciti.org/uploads/faq/2016/11/nmap-outputs.png)
][3]
Fig.02: Determines which ports are listening for TCP connections using nmap
You can combine TCP/UDP scan in a single command:
`$ sudo nmap -sTU -O 192.168.2.13`
### A note about Windows users
You can check port usage from the Windows operating system using the following commands:
```
netstat -bano | more
netstat -bano | grep LISTENING
netstat -bano | findstr /R /C:"[LISTING]"
```
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[ VIVEK GITE][a]
译者:[zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
[1]:https://www.cyberciti.biz/faq/category/linux/
[2]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/lsof-outputs/
[3]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/nmap-outputs/

View File

@ -1,175 +0,0 @@
Build, Deploy and Manage Custom Apps with IBM Bluemix
============================================================
![IBM Blue mix logo](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/IBM-Blue-mix-logo.jpg?resize=300%2C266)
_IBM's Bluemix affords developers an opportunity to build, deploy and manage custom apps. Bluemix is built on Cloud Foundry. It supports a number of programming languages as well as OpenWhisk, which allows developers to call any function without the need for resource management._
Bluemix is an open standards, cloud-based platform implemented by IBM. It has an open architecture which enables organisations to create, develop and manage their applications on the cloud. It is based on Cloud Foundry and hence can be considered a Platform as a Service (PaaS). With Bluemix, developers need not worry about cloud configurations, but can concentrate on their applications. Cloud configurations will be done automatically by Bluemix.
Bluemix also provides a dashboard, with which developers can create, manage and view services and applications, while monitoring resource usage also.
It supports the following programming languages:
* Java
* Python
* Ruby on Rails
* PHP
* Node.js
It also supports OpenWhisk (Function as a Service), which is also an IBM product that allows developers to call any function without requiring any resource management.
![Figure 1 An Overview of IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-1-An-Overview-of-IBM-Bluemix.jpg?resize=296%2C307)
Figure 1: An Overview of IBM Bluemix
![Figure 2 The IBM Bluemix architecture](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-2-The-IBM-Bluemix-architecture.jpg?resize=350%2C239)
Figure 2: The IBM Bluemix architecture
![Figure 3 Creating an organisation in IBM Bluemix](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-3-Creating-an-organisation-in-IBM-Bluemix.jpg?resize=350%2C280)
Figure 3: Creating an organisation in IBM Bluemix
**How IBM Bluemix works**
Bluemix is built on top of IBM's SoftLayer IaaS (Infrastructure as a Service). It uses Cloud Foundry as an open source PaaS. It starts by pushing code through Cloud Foundry, which combines the code with a suitable runtime environment based on the programming language in which the application is written. IBM services, third-party services or community-built services can be used for different functionalities. Secure connectors can be used to connect on-premise systems and the cloud.
![Figure 4 Setting up Space in IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-4-Setting-up-Space-in-IBM-Bluemix.jpg?resize=350%2C267)
Figure 4: Setting up Space in IBM Bluemix
![Figure 5 The app template](http://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-5-The-app-template.jpg?resize=350%2C135)
Figure 5: The app template
![Figure 6 IBM Bluemix supported programming languages](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-6-IBM-Bluemix-supported-programming-languages.jpg?resize=350%2C173)
Figure 6: IBM Bluemix supported programming languages
**Creating an app in Bluemix**
In this article, we will create a sample Hello World application in IBM Bluemix by using the Liberty for Java starter pack, in just a few simple steps.
1\. Go to [_https://console.ng.bluemix.net/registration/_][2].
2\. Confirm the Bluemix account.
3\. Click on the confirmation link in the mail to complete the sign up process.
4\. Give your email ID and click on _Continue_ to log in.
5\. Enter the password and click on _Log in._
6\. _Set up_ an _Environment_ to share resources in specific regions.
7\. Create Space to manage access and roll-back in Bluemix. We can map Spaces to development stages such as dev, test, uat, pre-prod and prod.
![Figure 7 Naming the app](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg?resize=350%2C133)
Figure 7: Naming the app
![Figure 8 Knowing when the app is ready](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-8-Knowing-when-the-app-is-ready.jpg?resize=350%2C170)
Figure 8: Knowing when the app is ready
![Figure 9 The IBM Bluemix Java App](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-9-The-IBM-Bluemix-Java-App.jpg?resize=350%2C151)
Figure 9: The IBM Bluemix Java App
8\. Once this initial configuration is completed, click on _I'm ready_. _Good to Go_!
9\. Verify the IBM Bluemix dashboard after successfully logging in, specifically sections such as Cloud Foundry Apps where 2GB is available and Virtual Server where 0 instances are available, as of now.
10\. Click on _Create app_. Choose the template for app creation. In our case, we will go for a Web app.
11\. How do you get started? Click on Liberty for Java, and then verify the description.
12\. Click on _Continue_.
13\. What do you want to name your new app? For this article, let's use osfy-bluemix-tutorial and click on _Finish_.
14\. It will take some time to create resources and to host an application on Bluemix.
15\. In a few minutes, your app will be up and running. Note the URL of the application.
16\. Visit the application's URL _http://osfy-bluemix-tutorial.au-syd.mybluemix.net/_. Bingo, our first Java application is up and running on IBM Bluemix.
17\. To verify the source code, click on _Files_ and navigate to different files and folders in the portal.
18\. The _Logs_ section provides all the activity logs, starting from the application's creation.
19\. The _Environment Variables_ section provides details on all the environment variables of VCAP_Services as well as those that are user defined.
20\. To verify the applications consumption of resources, go to the Liberty for Java section.
21\. The _Overview_ section of each application contains details regarding resources, the application's health, and activity logs, by default.
22\. Open Eclipse, go to the Help menu and click on _Eclipse Marketplace_.
23\. Find _IBM Eclipse tools_ for _Bluemix_ and click on _Install_.
24\. Confirm the selected features and install them in Eclipse.
25\. Download the application starter code. Import it into Eclipse by clicking on the _File_ menu, selecting _Import > Existing Projects into Workspace_, and start modifying the existing code.
![Figure 10 The Java app source files](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-10-The-Java-app-source-files.jpg?resize=350%2C173)
Figure 10: The Java app source files
![Figure 11 The Java app logs](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-11-The-Java-app-logs.jpg?resize=350%2C133)
Figure 11: The Java app logs
![Figure 12 Java app -- Liberty for Java](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-12-Java-app-Liberty-for-Java.jpg?resize=350%2C169)
Figure 12: Java app — Liberty for Java
**Why IBM Bluemix?**
Here are some compelling reasons to use IBM Bluemix:
* Supports multiple languages and platforms
* Free trial
1\. Minimal registration process
2\. No credit card required
3\. 30-days trial period with quotas of 2GB of runtime, 20 services, 500 routes
4\. Unlimited access to standard support
5\. No production use limitations
* Pay only for the use of each runtime and service
* Quick set-up hence faster time to market
* Continuous delivery of new features
* Secure integration with on-premise resources
* Use cases
1\. Web applications and mobile back-ends
2\. APIs and on-premise integration
* DevOps services are available as SaaS on the cloud and support continuous delivery of:
1\. Web IDE
2\. SCM
3\. Agile planning
4\. Delivery pipeline service
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/build-deploy-manage-custom-apps-ibm-bluemix/
作者:[MITESH_SONI][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensourceforu.com/author/mitesh_soni/
[1]:http://opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg
[2]:https://console.ng.bluemix.net/registration/
@ -1,382 +0,0 @@
#rusking translating
How to Manage Samba4 AD Infrastructure from Linux Command Line Part 2
============================================================
This tutorial will cover [some basic daily commands][2] you need to use in order to manage Samba4 AD Domain Controller infrastructure, such as adding, removing, disabling or listing users and groups.
Well also take a look on how to manage domain security policy and how to bind AD users to local PAM authentication in order for AD users to be able to perform local logins on Linux Domain Controller.
#### Requirements
1. [Create an AD Infrastructure with Samba4 on Ubuntu 16.04 Part 1][1]
### Step 1: Manage Samba AD DC from Command Line
1. Samba AD DC can be managed through the samba-tool command line utility, which offers a great interface for administering your domain.
With the help of samba-tool interface you can directly manage domain users and groups, domain Group Policy, domain sites, DNS services, domain replication and other critical domain functions.
To review the entire functionality of samba-tool just type the command with root privileges without any option or parameter.
```
# samba-tool -h
```
[
![samba-tool - Manage Samba Administration Tool](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png)
][3]
samba-tool Manage Samba Administration Tool
2. Now, let's start using the samba-tool utility to administer Samba4 Active Directory and manage our users.
In order to create a user on AD use the following command:
```
# samba-tool user add your_domain_user
```
To add a user with several important fields required by AD, use the following syntax:
```
--------- review all options ---------
# samba-tool user add -h
# samba-tool user add your_domain_user --given-name=your_name --surname=your_username --mail-address=your_domain_user@tecmint.lan --login-shell=/bin/bash
```
[
![Create User on Samba AD](http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png)
][4]
Create User on Samba AD
3. A listing of all samba AD domain users can be obtained by issuing the following command:
```
# samba-tool user list
```
[
![List Samba AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png)
][5]
List Samba AD Users
4. To delete a samba AD domain user use the below syntax:
```
# samba-tool user delete your_domain_user
```
5. Reset a samba domain user password by executing the below command:
```
# samba-tool user setpassword your_domain_user
```
6. In order to disable or enable a Samba AD user account, use the below commands:
```
# samba-tool user disable your_domain_user
# samba-tool user enable your_domain_user
```
7. Likewise, samba groups can be managed with the following command syntax:
```
--------- review all options ---------
# samba-tool group add -h
# samba-tool group add your_domain_group
```
8. Delete a samba domain group by issuing the below command:
```
# samba-tool group delete your_domain_group
```
9. To display all samba domain groups run the following command:
```
# samba-tool group list
```
10. To list all the samba domain members in a specific group use the command:
```
# samba-tool group listmembers "your_domain group"
```
[
![List Samba Domain Members of Group](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png)
][6]
List Samba Domain Members of Group
11. Adding/Removing a member from a samba domain group can be done by issuing one of the following commands:
```
# samba-tool group addmembers your_domain_group your_domain_user
# samba-tool group removemembers your_domain_group your_domain_user
```
12. As mentioned earlier, samba-tool command line interface can also be used to manage your samba domain policy and security.
To review your samba domain password settings use the below command:
```
# samba-tool domain passwordsettings show
```
[
![Check Samba Domain Password](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png)
][7]
Check Samba Domain Password
13\. In order to modify the samba domain password policy, such as the password complexity level, password ageing, length, how many old passwords to remember and other security features required for a Domain Controller, use the below screenshot as a guide.
```
---------- List all command options ----------
# samba-tool domain passwordsettings -h
```
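For example, individual policy fields can be adjusted with the `set` sub-command (the values below are arbitrary demonstration values, not recommendations for your domain):

```
# samba-tool domain passwordsettings set --complexity=on
# samba-tool domain passwordsettings set --min-pwd-length=10
# samba-tool domain passwordsettings set --max-pwd-age=60
# samba-tool domain passwordsettings set --history-length=24
```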
[
![Manage Samba Domain Password Settings](http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png)
][8]
Manage Samba Domain Password Settings
Never use the password policy rules as illustrated above on a production environment. The above settings are used just for demonstration purposes.
### Step 2: Samba Local Authentication Using Active Directory Accounts
14. By default, AD users cannot perform local logins on the Linux system outside the Samba AD DC environment.
In order to login on the system with an Active Directory account you need to make the following changes on your Linux system environment and modify Samba4 AD DC.
First, open samba main configuration file and add the below lines, if missing, as illustrated on the below screenshot.
```
$ sudo nano /etc/samba/smb.conf
```
Make sure the following statements appear on the configuration file:
```
winbind enum users = yes
winbind enum groups = yes
```
[
![Samba Authentication Using Active Directory User Accounts](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png)
][9]
Samba Authentication Using Active Directory User Accounts
15. After you've made the changes, use the testparm utility to make sure no errors are found in the samba configuration file, and restart the samba daemons by issuing the below commands.
```
$ testparm
$ sudo systemctl restart samba-ad-dc.service
```
[
![Check Samba Configuration for Errors](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png)
][10]
Check Samba Configuration for Errors
16. Next, we need to modify local PAM configuration files in order for Samba4 Active Directory accounts to be able to authenticate and open a session on the local system and create a home directory for users at first login.
Use the pam-auth-update command to open PAM configuration prompt and make sure you enable all PAM profiles using `[space]` key as illustrated on the below screenshot.
When finished hit `[Tab]` key to move to Ok and apply changes.
```
$ sudo pam-auth-update
```
[
![Configure PAM for Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png)
][11]
Configure PAM for Samba4 AD
[
![Enable PAM Authentication Module for Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png)
][12]
Enable PAM Authentication Module for Samba4 AD Users
17. Now, open /etc/nsswitch.conf file with a text editor and add winbind statement at the end of the password and group lines as illustrated on the below screenshot.
```
$ sudo vi /etc/nsswitch.conf
```
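For reference, after the edit the two lines usually look similar to the snippet below (a sketch; the exact list of services before winbind varies by distribution):

```
passwd:         compat winbind
group:          compat winbind
```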
[
![Add Windbind Service Switch for Samba](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png)
][13]
Add Windbind Service Switch for Samba
18. Finally, edit the /etc/pam.d/common-password file, search for the below line as illustrated on the below screenshot and remove the use_authtok statement.
Removing it ensures that Active Directory users can change their password from the command line while authenticated in Linux. With use_authtok present, AD users authenticated locally on Linux cannot change their password from the console.
```
password [success=1 default=ignore] pam_winbind.so try_first_pass
```
[
![Allow Samba AD Users to Change Passwords](http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png)
][14]
Allow Samba AD Users to Change Passwords
You will need to remove the use_authtok option again each time PAM updates are installed and applied to the PAM modules, or each time you execute the pam-auth-update command.
19. The Samba4 binaries come with a winbindd daemon built in and enabled by default.
For this reason you're no longer required to separately enable and run the winbind daemon provided by the winbind package from the official Ubuntu repositories.
In case the old and deprecated winbind service is started on the system make sure you disable it and stop the service by issuing the below commands:
```
$ sudo systemctl disable winbind.service
$ sudo systemctl stop winbind.service
```
Although we no longer need to run the old winbind daemon, we still need to install the winbind package from the repositories in order to install and use the wbinfo tool.
The wbinfo utility can be used to query Active Directory users and groups from the winbindd daemon's point of view.
The following commands illustrate how to query AD users and groups using wbinfo.
```
$ wbinfo -g
$ wbinfo -u
$ wbinfo -i your_domain_user
```
[
![Check Samba4 AD Information ](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png)
][15]
Check Samba4 AD Information
[
![Check Samba4 AD User Info](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png)
][16]
Check Samba4 AD User Info
20. Apart from the wbinfo utility, you can also use the getent command line utility to query the Active Directory database through the Name Service Switch libraries, which are represented in the /etc/nsswitch.conf file.
Pipe the getent command through a grep filter in order to narrow the results to just your AD realm's user or group database.
```
# getent passwd | grep TECMINT
# getent group | grep TECMINT
```
[
![Get Samba4 AD Details](http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png)
][17]
Get Samba4 AD Details
### Step 3: Login in Linux with an Active Directory User
21. In order to authenticate on the system with a Samba4 AD user, just use the AD username parameter after the `su -` command.
At the first login a message will be displayed on the console notifying you that a home directory has been created under the `/home/$DOMAIN/` system path, named after your AD username.
Use the id command to display extra information about the authenticated user.
```
# su - your_ad_user
$ id
$ exit
```
[
![Check Samba4 AD User Authentication on Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png)
][18]
Check Samba4 AD User Authentication on Linux
22. To change the password for an authenticated AD user, type the passwd command in the console after you have successfully logged into the system.
```
$ su - your_ad_user
$ passwd
```
[
![Change Samba4 AD User Password](http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png)
][19]
Change Samba4 AD User Password
23. By default, Active Directory users are not granted root privileges to perform administrative tasks on Linux.
To grant root powers to an AD user you must add the username to the local sudo group by issuing the below command.
Make sure you enclose the realm, backslash and AD username in single ASCII quotes.
```
# usermod -aG sudo 'DOMAIN\your_domain_user'
```
To test if AD user has root privileges on the local system, login and run a command, such as apt-get update, with sudo permissions.
```
# su - tecmint_user
$ sudo apt-get update
```
[
![Grant sudo Permission to Samba4 AD User](http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png)
][20]
Grant sudo Permission to Samba4 AD User
24. In case you want to add root privileges for all accounts of an Active Directory group, edit /etc/sudoers file using visudo command and add the below line after root privileges line, as illustrated on the below screenshot:
```
%DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
```
Pay attention to the sudoers syntax so you don't break anything.
The sudoers file doesn't handle ASCII quotation marks very well, so make sure you use `%` to denote that you're referring to a group, use a backslash to escape the first backslash after the domain name, and another backslash to escape spaces if your group name contains spaces (most AD built-in groups contain spaces by default). Also, write the realm in uppercase.
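As a concrete illustration of those rules (an assumption for demonstration only, using the TECMINT realm from this tutorial and the built-in domain admins group; check the exact group name that `getent group` reports on your system), the entry could look like this:

```
%TECMINT\\domain\ admins ALL=(ALL:ALL) ALL
```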
[
![Give Sudo Access to All Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png)
][21]
Give Sudo Access to All Samba4 AD Users
That's all for now! Managing a Samba4 AD infrastructure can also be achieved with several tools from the Windows environment, such as ADUC, DNS Manager, GPM or others, which can be obtained by installing the RSAT package from the Microsoft download page.
To administer Samba4 AD DC through the RSAT utilities, it's absolutely necessary to join the Windows system to the Samba4 Active Directory. This will be the subject of our next tutorial; till then, stay tuned to TecMint.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line
作者:[Matei Cezar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png
@ -1,142 +0,0 @@
Translating - byzky001
Compiling Vim from source is actually not that difficult.
Here's what you should do:
1. First, install all the prerequisite libraries, including Git.
For a Debian-like Linux distribution like Ubuntu,
that would be the following:
```sh
sudo apt-get install libncurses5-dev libgnome2-dev libgnomeui-dev \
libgtk2.0-dev libatk1.0-dev libbonoboui2-dev \
libcairo2-dev libx11-dev libxpm-dev libxt-dev python-dev \
python3-dev ruby-dev lua5.1 lua5.1-dev libperl-dev git
```
On Ubuntu 16.04, `liblua5.1-dev` is the Lua dev package name, not `lua5.1-dev`.
(If you know what languages you'll be using, feel free to leave out
packages you won't need, e.g. Python2 `python-dev` or Ruby `ruby-dev`.
This principle heavily applies to the whole page.)
For Fedora 20, that would be the following:
```sh
sudo yum install -y ruby ruby-devel lua lua-devel luajit \
luajit-devel ctags git python python-devel \
python3 python3-devel tcl-devel \
perl perl-devel perl-ExtUtils-ParseXS \
perl-ExtUtils-XSpp perl-ExtUtils-CBuilder \
perl-ExtUtils-Embed
```
This step is needed to rectify an issue with how Fedora 20 installs XSubPP:
```sh
# symlink xsubpp (perl) from /usr/bin to the perl dir
sudo ln -s /usr/bin/xsubpp /usr/share/perl5/ExtUtils/xsubpp
```
2. Remove vim if you have it already.
```sh
sudo apt-get remove vim vim-runtime gvim
```
On Ubuntu 12.04.2 you probably have to remove these packages as well:
```sh
sudo apt-get remove vim-tiny vim-common vim-gui-common vim-nox
```
3. Once everything is installed, getting the source is easy.
Note: If you are using Python, your config directory might have
a machine-specific name (e.g. `config-3.5m-x86_64-linux-gnu`).
Check in /usr/lib/python[2/3/3.5] to find yours, and change
the `python-config-dir` and/or `python3-config-dir` arguments accordingly.
Add/remove the flags below to fit your setup. For example, you can leave out
`enable-luainterp` if you don't plan on writing any Lua.
Also, if you're not using vim 8.0,
make sure to set the VIMRUNTIMEDIR variable correctly below
(for instance, with vim 8.0a, use /usr/share/vim/vim80a).
Keep in mind that some vim installations are located directly
inside /usr/share/vim; adjust to fit your system:
```sh
cd ~
git clone https://github.com/vim/vim.git
cd vim
./configure --with-features=huge \
--enable-multibyte \
--enable-rubyinterp=yes \
--enable-pythoninterp=yes \
--with-python-config-dir=/usr/lib/python2.7/config \
--enable-python3interp=yes \
--with-python3-config-dir=/usr/lib/python3.5/config \
--enable-perlinterp=yes \
--enable-luainterp=yes \
--enable-gui=gtk2 --enable-cscope --prefix=/usr
make VIMRUNTIMEDIR=/usr/share/vim/vim80
```
On Ubuntu 16.04, Python support was not working due to enabling both Python2 and Python3. Read [answer by chirinosky](http://stackoverflow.com/questions/23023783/vim-compiled-with-python-support-but-cant-see-sys-version) for workaround.
If you want to be able to easily uninstall vim use `checkinstall`.
```sh
sudo apt-get install checkinstall
cd ~/vim
sudo checkinstall
```
Otherwise, you can use `make` to install.
```sh
cd ~/vim
sudo make install
```
Set vim as your default editor with `update-alternatives`.
```sh
sudo update-alternatives --install /usr/bin/editor editor /usr/bin/vim 1
sudo update-alternatives --set editor /usr/bin/vim
sudo update-alternatives --install /usr/bin/vi vi /usr/bin/vim 1
sudo update-alternatives --set vi /usr/bin/vim
```
4. Double check that you are in fact running the new Vim binary by looking at
the output of `vim --version`.
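For example, assuming you enabled the Python 3, Ruby, Lua and Perl interpreters in the configure step above, a quick sanity check could look like this:

```sh
# confirm the version and build date
vim --version | head -n 2
# list which of the optional interpreters were compiled in (+) or left out (-)
vim --version | grep -oE '[+-](python3|ruby|lua|perl)'
```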
**If you don't get gvim working (on ubuntu 12.04.1 LTS), try changing
`--enable-gui=gtk2` to `--enable-gui=gnome2`**
If you have problems, double check that you `configure`d using the correct Python config
directory, as noted at the beginning of Step 3.
These `configure` and `make` calls assume a Debian-like distro where Vim's
runtime files directory is placed in `/usr/share/vim/vim80/`,
which is not Vim's default. Same thing goes for `--prefix=/usr` in the
`configure` call. Those values may need to be different with a Linux
distro that is not based on Debian. In such a case, try to remove the
`--prefix` variable in the `configure` call and the `VIMRUNTIMEDIR` in the
`make` call (in other words, go with the defaults).
If you get stuck, here's some [other useful information on building Vim](http://vim.wikia.com/wiki/Building_Vim).
--------------------------------------------------------------------------------
via: https://www.dataquest.io/blog/data-science-portfolio-project/
作者:[Val Markovic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://github.com/Valloric
@ -1,173 +0,0 @@
erlinux translate...
Managing devices in Linux
============================================================
>Explore how the /dev directory gives you direct access to your devices in Linux.
![Managing devices in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/OSDC_Penguin_Image_520x292_12324207_0714_mm_v1a.png?itok=WfAkwbFy "Managing devices in Linux")
>Image by :Opensource.com
There are many interesting features of the Linux directory structure. This month I cover some fascinating aspects of the /dev directory. Before you proceed any further with this article, I suggest that, if you have not already done so, you read my earlier articles, _[Everything is a file][1]_, and _[An introduction to Linux filesystems][2]_, both of which introduce some interesting Linux filesystem concepts. Go ahead—I will wait.
Great! Welcome back. Now we can proceed with a more detailed exploration of the /dev directory.
### Device files
Device files are also known as [device][3] [special files][4]. Device files are employed to provide the operating system and users an interface to the devices that they represent. All Linux device files are located in the /dev directory, which is an integral part of the root (/) filesystem because these device files must be available to the operating system during the boot process.
One of the most important things to remember about these device files is that they are most definitely not device drivers. They are more accurately described as portals to the device drivers. Data is passed from an application or the operating system to the device file which then passes it to the device driver which then sends it to the physical device. The reverse data path is also used, from the physical device through the device driver, the device file, and then to an application or another device.
Let's look at the data flow of a typical command to visualize this.
![dboth-dev-dir_0.png](https://opensource.com/sites/default/files/images/life-uploads/dboth-dev-dir_0.png)
Figure 1: Simple data flow for a typical command.
In Figure 1, above, a simplified data flow is shown for a common command. Issuing the **cat /etc/resolv.conf** command from a GUI terminal emulator such as Konsole or xterm causes the resolv.conf file to be read from the disk with the disk device driver handling the device specific functions such as locating the file on the hard drive and reading it. The data is passed through the disk's device file to the **cat** command, and then from the command to the device file and device driver for pseudo-terminal 6, where it is displayed in the terminal session.
Of course, the output of the **cat** command could have been redirected to a file in the following manner, **cat /etc/resolv.conf > /etc/resolv.bak** in order to create a backup of the file. In that case, the data flow on the left side of Figure 1 would remain the same while the data flow on the right would be through the /dev/sda2 device file, the hard drive device driver and then onto the hard drive itself.
These device files make it very easy to use standard streams (STD/IO) and redirection to access any and every device on a Linux or Unix computer. Simply directing a data stream to a device file sends the data to that device.
### Classification
Device files can be classified in at least two ways. The first and most commonly used classification is that of the data stream commonly associated with the device. For example, tty (teletype) and serial devices are considered to be character based because the data stream is transferred and handled one character or byte at a time. Block type devices such as hard drives transfer data in blocks, typically a multiple of 256 bytes.
If you have not already, go ahead and, as a non-root user in a terminal session, change the present working directory (PWD) to /dev and display a long listing. This shows a list of device files with their file permissions and their major and minor identification numbers. For example, the following device files are just a few of the ones in the /dev directory on my Fedora 24 workstation. They represent disk and tty type devices. Notice the leftmost character of each line in the output. The ones that have a "b" are block type devices and the ones that begin with "c" are character devices.
```
brw-rw----   1 root disk        8,   0 Nov  7 07:06 sda
brw-rw---- 1 root disk        8,   1 Nov  7 07:06 sda1
brw-rw---- 1 root disk        8,  16 Nov  7 07:06 sdb
brw-rw---- 1 root disk        8,  17 Nov  7 07:06 sdb1
brw-rw---- 1 root disk        8,  18 Nov  7 07:06 sdb2
crw--w----  1 root tty         4,   0 Nov  7 07:06 tty0
crw--w---- 1 root tty         4,   1 Nov  7 07:07 tty1
crw--w---- 1 root tty         4,  10 Nov  7 07:06 tty10
crw--w---- 1 root tty         4,  11 Nov  7 07:06 tty11
```
The more detailed and explicit way to identify device files is using the device major and minor numbers. The disk devices have a major number of 8 which designates them as SCSI block devices. Note that all PATA and SATA hard drives have been managed by the SCSI subsystem because the old ATA subsystem was many years ago deemed as not maintainable due to the poor quality of its code. As a result, hard drives that would previously have been designated as "hd[a-z]" are now referred to as "sd[a-z]".
You can probably infer the pattern of disk drive minor numbers in the small sample shown above. Minor numbers 0, 16, 32 and so on up through 240 are the whole disk numbers. So major/minor 8/16 represents the whole disk /dev/sdb and 8/17 is the device file for the first partition, /dev/sdb1. Numbers 8/34 would be /dev/sdc2.
The tty device files in the list above are numbered a bit more simply from tty0 through tty63.
The [Linux Allocated Devices][5] file at Kernel.org is the official registry of device types and major and minor number allocations. It can help you understand the major/minor numbers for all currently defined devices.
### Fun with device files
Let's take a few minutes now and perform a couple fun experiments that will illustrate the power and flexibility of the Linux device files. Most Linux distributions have multiple virtual consoles, 1 through 7, that can be used to login to a local console session with a shell interface. These can be accessed using the key combinations Ctrl-Alt-F1 for console 1, Ctrl-Alt-F2 for console 2, and so on.
Press Ctrl-Alt-F2 to switch to console 2. On some distributions, the login information includes the tty device associated with this console, but many do not. It should be tty2 because you are in console 2.
Log in as a non-root user. Then you can use the who am i command—yes, just like that, with spaces—to determine which tty device is connected to this console.
Before we actually perform this experiment, look at a listing of the tty2 and tty3 devices in /dev.
```
ls -l /dev/tty[23]
```
There will be a large number of tty devices defined but we do not care about most of them, just the tty2 and tty3 devices. As device files, there is nothing special about them; they are simply character type devices. We will use these devices for this experiment. The tty2 device is attached to virtual console 2 and the tty3 device is attached to virtual console 3.
Press Ctrl-Alt-F3 to switch to console 3. Log in again as the same non-root user. Now enter the following command on console 3.
```
echo "Hello world" > /dev/tty2
```
Press Ctrl-Alt-F2 to return to console 2. The string "Hello world" (without quotes) is displayed in console 2.
This experiment can also be performed with terminal emulators on the GUI desktop. Terminal sessions on the desktop use pseudo terminal devices in the /dev tree, such as /dev/pts/1. Open two terminal sessions using Konsole or Xterm. Determine which pseudo-terminals they are connected to and use one to send a message to the other.
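A minimal sketch of that experiment follows; the pseudo-terminal number is only an example, so substitute whatever device the first terminal reports:

```
tty                                 # run in the first terminal; it prints something like /dev/pts/1
echo "Hello world" > /dev/pts/1     # run in the second terminal, using the device reported above
```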
Now continue the experiment by using the cat command to display the /etc/fstab file on a different terminal.
Another interesting experiment is to print a file directly to the printer using the cat command. Assuming that your printer device is /dev/usb/lp0, and that your printer can print PDF files directly, the following command will print the PDF file test.pdf on your printer.
```
cat test.pdf > /dev/usb/lp0
```
The /dev directory contains some very interesting device files that are portals to hardware that one does not normally think of as a device like a hard drive or display. For one example, system memory—RAM—is not something that is normally considered as a "device," yet /dev/mem is the portal through which direct access to memory can be achieved. The following example had some interesting results.
```
dd if=/dev/mem bs=2048 count=100
```
The **dd** command above provides a bit more control than simply using the **cat** command to dump all of a system's memory. It provides the ability to specify how much data is read from /dev/mem and would also allow me to specify the point at which to start reading data from memory. Although some memory was read, the kernel responded with the following error that I found in /var/log/messages.
```
Nov 14 14:37:31 david kernel: usercopy: kernel memory exposure attempt detected from ffff9f78c0010000 (dma-kmalloc-512) (2048 bytes)
```
What this error means is that the kernel is doing its job by protecting memory that belongs to other processes which is exactly how it should work. So, although you can use /dev/mem to display data stored in RAM memory, access to most memory space is protected and will result in errors. Only that virtual memory which is assigned by the kernel memory manager to the BASH shell running the **dd** command should be accessible without causing an error. Sorry, but you cannot snoop in memory that does not belong to you unless you find a vulnerability to exploit.
There are some other very interesting device files in /dev. The device files null, zero, random and urandom are not associated with any physical devices.
For example, the null device /dev/null can be used as a target for the redirection of output from shell commands or programs so that they are not displayed on the terminal. I frequently use this in my BASH scripts to prevent users from being presented with output that might be confusing to them. When read, /dev/null produces no data at all; it simply returns end-of-file. Use the **dd** command as shown below to view the (empty) output from the /dev/null device file.
```
# dd if=/dev/null  bs=512 count=500 | od -c     
0+0 records in
0+0 records out
0 bytes copied, 1.5885e-05 s, 0.0 kB/s
0000000
```
Note that there is no output at all, because reading from /dev/null returns end-of-file immediately; the byte count confirms that zero bytes were copied.
The /dev/random and /dev/urandom devices are also very interesting. As their names imply, they both produce random output—not just numbers but any and all byte combinations. The /dev/urandom device produces deterministic random output and is very fast. That means the output is determined by an algorithm and uses a seed string as a starting point. As a result it is possible, although very difficult, for a hacker to reproduce the output if the original seed is known. Use the command **cat /dev/urandom** to view typical output. You can use Ctrl-c to break out.
The /dev/random device file produces non-deterministic random output but it produces output more slowly. This output is not determined by an algorithm that is dependent upon the previous number, but is generated in response to keystrokes and mouse movements. This method makes it far more difficult to duplicate a specific series of random numbers. Use the **cat** command to view some of the output from the /dev/random device file. Try moving the mouse to see how it affects the output.
As its name implies, the /dev/zero device file produces an unending string of zeroes as output. Note that these are null bytes (shown by od as \0) and not the ASCII character zero (0). Use the **dd** command as shown below to view some output from the /dev/zero device file.
```
# dd if=/dev/zero  bs=512 count=500 | od -c
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
500+0 records in
500+0 records out
256000 bytes (256 kB, 250 KiB) copied, 0.00126996 s, 202 MB/s
0764000
```
Note that the byte count for this command is non-zero.
### Creating device files
In the past, the device files in /dev were all created at installation time, resulting in a directory full of almost every possible device file, even though most would never be used. In the unlikely event that a new device file was needed or one was accidentally deleted and needed to be re-created, the **mknod** program was available to manually create device files. All you had to know was the device major and minor numbers.
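For instance, using the major and minor numbers from the Linux Allocated Devices list discussed above (serial ports are character devices with major number 4 and minors starting at 64), recreating the device file for a fifth serial port would look something like this:

```
# mknod /dev/ttyS4 c 4 68
# ls -l /dev/ttyS4
```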
CentOS and RHEL 6 and 7, as well as all versions of Fedora going back to at least as far Fedora 15, use the newer method of creating the device files. All device files are created at boot time. This functionality is possible because the udev device manager detects addition and removal of devices as they occur. This allows for true dynamic plug-n-play functionality while the host is up and running. It also performs the same task at boot time by detecting all devices installed on the system very early in the boot process. [Linux.com][6] has a good [description of udev][7].
Going back to your listing of the files in /dev, notice the date and time on the files. All of them were created during the last boot. You can verify this using the **uptime** or **last** commands. In my device listing above, all of those files were created at 7:06 AM on November 7, which is the last time I booted the system.
Of course, the **mknod** command is still available, but the new **MAKEDEV** (yes, in all uppercase—which in my opinion is contrary to the Linux philosophy of using all lowercase command names) command provides an easier interface for creating device files, should the need arise. The MAKEDEV command is not installed by default in current versions of Fedora or CentOS 7; it is installed in CentOS 6. You can use YUM or DNF to install the MAKEDEV package.
### Conclusion
Interestingly enough, it had been a long time since I needed to create a device file. However, just recently I had an interesting situation where one of the device files I typically use was not created and I did have to create it. I have not had any problem with that device since. So a situation caused by a missing device file can still happen and knowing how to deal with it can be important.
I have not covered many of the myriad different types of device files that you might encounter. That information is available in plenty of detail in the resources cited. I hope I have given you some basic understanding of how these files function and the tools to allow you to explore more on your own.
--------------------------------------------------------------------------------
via: https://opensource.com/article/16/11/managing-devices-linux
作者:[David Both][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/life/15/9/everything-is-a-file
[2]:https://opensource.com/life/16/10/introduction-linux-filesystems
[3]:https://en.wikipedia.org/wiki/Device_file
[4]:https://en.wikipedia.org/wiki/Device_file
[5]:https://www.kernel.org/doc/Documentation/devices.txt
[6]:https://www.linux.com/
[7]:https://www.linux.com/news/udev-introduction-device-management-modern-linux-system
@ -1,84 +0,0 @@
Mir is not only about Unity8
============================================================
![mir](https://insights.ubuntu.com/wp-content/uploads/2cf2/MIR.png)
_This is a guest post by Alan Griffiths, Software engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com_
Mir is a project to support the management of applications on the display(s) of a computer. It can be compared to the more familiar X-Windows used on the current Ubuntu desktop (and many others). I'll discuss some of the motivation for Mir below, but the point of this post is to clarify the relationship between Mir and Unity8.
Most of the time you hear about Mir it is mentioned alongside Unity8. This is not surprising, as Unity8 is Canonical's new user interface shell and the thing end-users interact with. Mir “only” makes this possible. Unity8 is currently used on phones and tablets and is also available as a “preview” on the Ubuntu 16.10 desktop.
Here I want to explain that Mir is available to use without Unity8, either for an alternative shell or as a simpler interface for embedded environments: information kiosks, electronic signage, etc. The proof of this is the Mir “Abstraction Layer”, which provides three important elements:
1\. libmiral.so, a stable interface to Mir providing basic window management;
2\. miral-shell, a sample shell offering both “traditional” and “tiling” window management; and,
3\. miral-kiosk, a sample “kiosk” offering only basic window management.
The miral-shell and miral-kiosk sample servers are available from the zesty archive and Kevin Gunn has been [blogging][1] about providing a miral-kiosk based “kiosk” snap on “Voices”. I'll give a bit more detail about using these examples below, but there is more (including “how to” develop your own alternative Mir server) on [my “voices” blog][2].
**USING MIR**
Mir is a set of programming libraries, not an application in its own right. That means it needs applications to use it for anything to happen. There are two ways to use the Mir libraries: as a “client” when writing an application, or as a “server” when implementing a shell. Clients (as with X11) typically use a toolkit rather than using Mir (or X11) directly.
There's Mir support available in GTK, Qt and SDL2. This means that applications using these toolkits should “just work” on Mir when that support is enabled in the toolkit (which is the default in Ubuntu). In addition there's Xmir: an X11 server that runs on Mir; this allows X based applications to run on Mir servers.
But a Mir client needs a corresponding Mir server before anything can happen. Over the last development cycle the Mir team has produced MirAL as the recommended way to write Mir servers and a package “miral-examples” by way of demonstration. For zesty, the development version of Ubuntu, you can install from the archive:
```
$ sudo apt install miral-examples mir-graphics-drivers-desktop qtubuntu-desktop
```
_For other platforms you would need to build MirAL yourself (see An Example Mir Desktop Environment for details)._
With miral-examples installed you can run a Mir server as a window on your Unity7 desktop and start clients (such as gedit) within it as follows:
```
$ miral-shell&
$ miral-run gedit
```
This will give you (very basic) “traditional” desktop window management. Alternatively, you can try “tiling” window management:
```
$ miral-shell --window-manager tiling&
$ miral-run qterminal
```
Or the (even more basic) kiosk:
```
$ miral-kiosk&
$ miral-run 7kaa
```
None of these Mir servers provide a complete “desktop” with support for a “launcher”, notifications, etc. but they demonstrate the potential to use Mir without Unity8.
**THE PROBLEM MIR SOLVES**
The X-Windows system has been, and remains, immensely successful in providing a way to interact with computers. It provides a consistent abstraction across a wide range of hardware and drivers. This underlies many desktop environments and graphical user interface toolkits and lets them work together on an enormous range of computers.
But it comes from an era when computers were used very differently from now, and there are real concerns today that are hard to meet given the long legacy that X needs to support.
In 1980 most computers were big things managed by specialists and connecting them to one another was “bleeding edge”. In that era the cost of developing software was such that any benefit to be gained by one application “listening in” on another was negligible: there were few computers, they were isolated, and the work they did was not open to financial exploitation.
X-Windows developed in this environment and, through a series of extensions, has adapted to many changes. But it is inherently insecure: any application can find out what happening on the display (and affect it). You can write applications like Xeyes (that tracks the cursor with its “eyes”) or “Tickeys” (that listens to the keyboard to generate typewriter noises). The reality is that any and all applications can track and manipulate almost all of what is happening. That is how X based desktops like Unity7, Gnome, KDE and the rest work.
The open nature of window management in X-Windows is poorly adapted to a world with millions of computers connected to the Internet, being used for credit card transactions and online banking, and managed by non-experts who willingly install programs from complete strangers. There has been a growing realization that adapting X-Windows to the new requirements of security and graphics performance isn't feasible.
There are at least two open source projects aimed at providing a replacement: Mir and Wayland. While some see these as competing, there are a lot of areas where they have common interests: They both need to interact with other software that previously assumed X11, and much of the work needed to introduce support alternatives benefits both projects.
Canonical's replacement for X-Windows, Mir, only exposes to an application the information that it needs to have (so no snooping on keystrokes, or tracking the cursor). It can meet the needs of the current age and can exploit modern hardware such as graphics processors.
--------------------------------------------------------------------------------
via: https://insights.ubuntu.com/2016/11/28/mir-is-not-only-about-unity8/
作者:[ Guest][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://insights.ubuntu.com/author/guest/
[1]:http://voices.canonical.com/kevin.gunn/
[2]:http://voices.canonical.com/alan.griffiths/
@ -1,76 +0,0 @@
Translating by ypingcn
Moving with SQL Server to Linux? Move from SQL Server to MySQL as well!
============================================================
### On this page
1. [To have Control Over the Platform][1]
2. [Joining the Crowd][2]
3. [Microsoft isnt Open Sourcing SQL Servers Code][3]
4. [Saving on License Costs][4]
5. [Sometimes, the Specific Hardware being Used][5]
6. [Support][6]
In recent years, a large number of individuals and organizations have been ditching the Windows platform for the Linux platform, and this number will continue to grow as Linux develops further. Linux has long been the leader in web servers, as most web servers run on Linux, and this could be one of the reasons for the high rate of migration.
The reasons for this migration are numerous, ranging from platform stability and reliability to cost, ownership and security, among others. As more entities migrate to the Linux platform, migration from the MS SQL Server database management system to MySQL also grows, because of the interoperability and platform independence of MySQL, as well as its low acquisition costs.
However worthwhile the migration may be, it should be driven by business need rather than undertaken for its own sake. As such, a comprehensive feasibility and cost-benefit analysis should be carried out to understand the impact, both positive and negative, that the migration will have on your business.
The migration may be based on the following key factors:
### To have Control Over the Platform
Unlike Windows, where you are not in full control of the releases and fixes, Linux gives you the flexibility to get fixes as and when you require them. This is preferred by developers and security personnel, since they are able to immediately apply a fix when a security threat is identified, instead of hoping, as on Windows, that the fixes are released soon.
### Joining the Crowd
The Linux platform far outnumbers Windows in the number of servers running on it (nearly a quarter of all servers in the world), and the trend is not about to change anytime soon. Many organizations therefore migrate so as to be fully on Linux rather than running two platforms concurrently, which adds to their operating costs.
### Microsoft isnt Open Sourcing SQL Servers Code
Even though Microsoft has announced that their next release of MS SQL Server (named Denali) will have a Linux version, this still does not open their source code, meaning that their licenses will still apply even though the release runs on Linux. This still locks out the many users who would happily take to the release if it were open source.
This still does not give an alternative to those users who are using Oracle, which is not open source; neither does it to those [using MySQL][7], which is fully open source.
### Saving on License Costs
The cost implication of licenses is a huge letdown for many users. Running an MS SQL server on the Windows platform involves too many licenses. You need licenses for:
* The windows operating system
* The MSSQL server
* Specific database tools e.g. SQL analytics tools, etc.
Unlike the Windows platform, Linux eliminates the issue of high license costs, and is thus more appealing to users. The MySQL database is also free and open source while offering flexibility comparable to MS SQL Server, which is why it is preferred. Most of the database utility tools for MySQL are free as well, unlike those for MS SQL.
### Sometimes, the Specific Hardware being Used
Because Linux is developed and constantly enhanced by many developers, it is independent of the hardware it operates on and is thus widely used on different hardware platforms. As much as Microsoft has tried to ensure that Windows and MS SQL Server are hardware independent, there are still some limitations to their platform independence.
### Support
With Linux and MySQL, as well as with other open source software, it is easier to get support for the specific need that you have, because there are various developers involved in their development. These developers may be within your locality and thus easily reached. Online forums are also of great help, as you are able to post and discuss the issues you face.
For commercial software, you get support based on the vendor's software agreement with you and on their timing, which at times may not give you a solution within the timelines that you have.
In every case, migrating to Linux gives you the best option and outcome that you can have, by joining a radical, stable and reliable platform, which is known to be more robust than Windows. It is worth a shot.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/
作者:[Tony Branson ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#to-have-control-over-the-platform
[2]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#joining-the-crowd
[3]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#microsoft-isnrsquot-open-sourcing-sql-serverrsquos-code
[4]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#saving-on-license-costs
[5]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#sometimes-the-specific-hardware-being-used
[6]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#support
[7]:http://www.scalearc.com/how-it-works/products/scalearc-for-mysql
@ -1,123 +0,0 @@
translating by dongdongmian
How to Build an Email Server on Ubuntu Linux
============================================================
![mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-stack.jpg?itok=SVMfa8WZ "mail server")
In this series, we will show how to build a reliable, configurable mail server with Postfix, Dovecot, and OpenSSL on Ubuntu Linux. [Creative Commons Zero][2] Pixabay
In this fast-changing world of containers and microservices it's comforting that some things don't change, such as setting up a Linux email server. It's still a dance of many steps and knitting together several different servers, and once you put it all together it just sits there, all nice and stable, instead of winking in and out of existence like microservices. In this series, we'll put together a nice reliable configurable mail server with Postfix, Dovecot, and OpenSSL on Ubuntu Linux.
Postfix is a reliable old standby that is easier to configure and use than Sendmail, the original Unix MTA (does anyone still use Sendmail?). Exim is Debian's default MTA; it is more lightweight than Postfix and super-configurable, so we'll look at Exim in a future tutorial.
Dovecot and Courier are two popular and excellent IMAP/POP3 servers. Dovecot is more lightweight and easier to configure.
You must secure your email sessions, so we'll use OpenSSL. OpenSSL also supplies some nice tools for testing your mail server.
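For instance, once the server is running you will be able to open a test session against the SMTP port with OpenSSL's s_client tool (the hostname is a placeholder, and the STARTTLS handshake will only complete after TLS is configured later in the series):

```
$ openssl s_client -connect myserver:25 -starttls smtp
```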
For simplicity, we'll set up a LAN mail server in this series. You should have LAN name services already enabled and working; see [Dnsmasq For Easy LAN Name Services][5] for some pointers. Then later, you can adapt a LAN server to an Internet-accessible server by registering your domain name and configuring your firewall accordingly. These are documented everywhere, so please do your homework and be careful.
### Terminology
Let's take a quick look at some terminology, because it is nice when we know what the heck we're talking about.
* **MTA**: Mail transfer agent, a simple mail transfer protocol (SMTP) server such as Postfix, Exim, and Sendmail. SMTP servers talk to each other
* **MUA**: Mail user agent, your local mail client such as Evolution, KMail, Claws Mail, or Thunderbird.
* **POP3**: Post-office protocol, the simplest protocol for moving messages from an SMTP server to your mail client. A POP server is simple and lightweight; you can serve thousands of users from a single box.
* **IMAP**: Interactive message access protocol. Most businesses use IMAP because messages remain on the server, so users don't have to worry about losing them. IMAP servers require a lot of memory and storage.
* **TLS**: Transport layer security, an evolution of SSL (secure sockets layer), which provides encrypted transport for SASL-authenticated logins.
* **SASL**: Simple authentication and security layer, for authenticating users. SASL does the authenticating, then TLS provides the encrypted transport of the authentication data.
* **StartTLS**: Also known as opportunistic TLS. StartTLS upgrades your plain text authentication to encrypted authentication if both servers support SSL/TLS. If one of them doesn't then it remains in cleartext. StartTLS uses the standard unencrypted ports: 25 (SMTP), 110 (POP3), and 143 (IMAP) instead of the standard encrypted ports: 465 (SMTP), 995 (POP3), and 993 (IMAP).
### Yes, We Still Have Sendmail
Most Linuxes still have `/usr/sbin/sendmail`. This is a holdover from the very olden days when Sendmail was the only MTA. On most distros `/usr/sbin/sendmail` is symlinked to your installed MTA. However your distro handles it, if it's there, it's on purpose.
### Install Postfix
`apt-get install postfix` takes care of the basic Postfix installation (Figure 1). This opens a wizard that asks what kind of server you want. Select "Internet Site", even for a LAN server. It will ask for your fully qualified server domain name (e.g., myserver.mydomain.net). On a LAN server, assuming your name services are correctly configured (I keep mentioning this because people keep getting it wrong), you can use just the hostname (e.g., myserver).
![Postfix](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/postfix-1.png?itok=NJLdtICb "Postfix")
Figure 1: Postfix configuration. [Creative Commons Zero][1] Carla Schroder
Ubuntu will create a configuration file and launch three Postfix daemons: `master`, `qmgr`, and `pickup`. There is no Postfix command or daemon.
```
$ ps ax
6494 ? Ss 0:00 /usr/lib/postfix/master
6497 ? S 0:00 pickup -l -t unix -u -c
6498 ? S 0:00 qmgr -l -t unix -u
```
Use Postfix's built-in syntax checker to test your configuration files. If it finds no syntax errors, it reports nothing:
```
$ sudo postfix check
[sudo] password for carla:
```
Use `netstat` to verify that Postfix is listening on port 25:
```
$ netstat -ant
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp6 0 0 :::25 :::* LISTEN
```
Now let's fire up trusty old `telnet` to test:
```
$ telnet myserver 25
Trying 127.0.1.1...
Connected to myserver.
Escape character is '^]'.
220 myserver ESMTP Postfix (Ubuntu)
**EHLO myserver**
250-myserver
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
**^]**
telnet>
```
Hurrah! We have verified the server name, and that Postfix is listening and responding to requests on port 25, the SMTP port.
Type `quit` to exit `telnet`. In the example, the commands that you type to interact with your server are in bold. The output consists of ESMTP (extended SMTP) 250 status codes.
* PIPELINING allows multiple commands to flow without having to respond to each one.
* SIZE tells the maximum message size that the server accepts.
* VRFY can tell a client if a particular mailbox exists. This is often ignored as it could be a security hole.
* ETRN is for sites with irregular Internet connectivity. Such a site can use ETRN to request mail delivery from an upstream server, and Postfix can be configured to defer mail delivery to ETRN clients.
* STARTTLS (see above).
* ENHANCEDSTATUSCODES means the server supports detailed, enhanced status and error codes.
* 8BITMIME means message bodies may contain 8-bit data, beyond the original 7-bit ASCII character set.
* DSN, delivery status notification, informs you of delivery errors.
The main Postfix configuration file is `/etc/postfix/main.cf`. This is created by the installer. See [Postfix Configuration Parameters][6] for a complete listing of `main.cf` parameters. `/etc/postfix/postfix-files` describes the complete Postfix installation.
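You can inspect and adjust `main.cf` without an editor by using the `postconf` tool. A small sketch; the parameter names are standard Postfix settings, but the values are only examples for a LAN setup like the one described here:

```
$ postconf myhostname mydestination inet_interfaces
$ sudo postconf -e 'mynetworks = 127.0.0.0/8 192.168.1.0/24'
$ sudo postfix reload
```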
Come back next week for installing and testing Dovecot, and sending ourselves some messages.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/how-build-email-server-ubuntu-linux
作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/postfix-1png
[4]:https://www.linux.com/files/images/mail-stackjpg
[5]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services
[6]:http://www.postfix.org/postconf.5.html

View File

@ -1,259 +0,0 @@
GHLandy Translating
Installation of Red Hat Enterprise Linux (RHEL) 7.3 Guide
============================================================
Red Hat Enterprise Linux is an open source Linux distribution developed by Red Hat, which runs on all major processor architectures. Unlike other Linux distributions, which are free to download, install and use, RHEL can only be downloaded and used, apart from the 30-day evaluation version, if you buy a subscription.
In this tutorial we will take a look at how you can install the latest release of RHEL 7.3 on your machine, using the 30-day evaluation version of the ISO image downloaded from the Red Hat Customer Portal at [https://access.redhat.com/downloads][1].
If youre looking for CentOS, go through our [CentOS 7.3 Installation Guide][2].
To review what's new in the RHEL 7.3 release, please read the [version release notes][3].
#### Pre-Requirements
This installation will be performed on a virtualized machine with UEFI firmware. To perform the installation of RHEL on a UEFI machine, you first need to instruct the EFI firmware of your motherboard to modify the Boot Order menu so that it boots the ISO media from the appropriate drive (DVD or USB stick).
If the installation is done through a bootable USB media, you need to ensure that the bootable USB is created using a UEFI-compatible tool, such as [Rufus][4], which can partition your USB drive with the GPT partition scheme required by UEFI firmware.
To modify the motherboard UEFI firmware settings you need to press a special key during your machine initialization POST (Power on Self Test).
The proper special key needed for this configuration can be obtained by consulting your motherboard vendor manual. Usually, these keys can be F2, F9, F10, F11 or F12 or a combination of Fn with these keys in case your device is a Laptop.
Besides modifying UEFI Boot Order you need to make sure that QuickBoot/FastBoot and Secure Boot options are disabled in order to properly run RHEL from EFI firmware.
Some UEFI firmware motherboard models contain an option which allows you to perform the installation of an Operating System from Legacy BIOS or EFI CSM (Compatibility Support Module), a module of the firmware which emulates a BIOS environment. Using this type of installation requires the bootable USB drive to be partitioned in MBR scheme, not GPT style.
Also, once you install RHEL, or any other OS for that matter, on your UEFI machine from one of these two modes, the OS must run under the same firmware mode you used to perform the installation.
You can't switch from UEFI to BIOS Legacy or vice-versa. Switching between UEFI and BIOS Legacy will render your OS unusable, unable to boot, and the OS will require reinstallation.
### Installation Guide of RHEL 7.3
1. First, download and burn RHEL 7.3 ISO image on a DVD or create a bootable USB stick using the correct utility.
Power-on the machine, place the DVD/USB stick in the appropriate drive and instruct UEFI/BIOS, by pressing a special boot key, to boot from the appropriate installation media.
Once the installation media is detected, the machine will boot to the RHEL GRUB menu. From here select Install Red Hat Enterprise Linux 7.3 and press the [Enter] key to continue.
[
![RHEL 7.3 Boot Menu](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg)
][5]
RHEL 7.3 Boot Menu
2. The next screen will take you to the welcome screen of RHEL 7.3. From here choose the language that will be used for the installation process and press the [Enter] key to move on to the next screen.
[
![Select RHEL 7.3 Language](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png)
][6]
Select RHEL 7.3 Language
3. The next screen that will appear contains a summary of all the items you will need to setup for the installation of RHEL. First hit on DATE & TIME item and choose the physical location of your device from the map.
Hit on the upper Done button to save the configuration and proceed further with configuring the system.
[
![RHEL 7.3 Installation Summary](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png)
][7]
RHEL 7.3 Installation Summary
[
![Select RHEL 7.3 Date and Time](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png)
][8]
Select RHEL 7.3 Date and Time
4. On the next step, configure your system keyboard layout and hit on Done button again to go back to the main installer menu.
[
![Configure Keyboard Layout](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png)
][9]
Configure Keyboard Layout
5. Next, select the language support for your system and hit Done button to move to the next step.
[
![Choose Language Support](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png)
][10]
Choose Language Support
6. Leave the Installation Source item as default, because in this case we're performing the installation from our local media drive (DVD/USB image), and click on Software Selection item.
From here you can choose the base environment and add-ons for your RHEL OS. Because RHEL is a Linux distribution used mostly on servers, the Minimal Installation item is the perfect choice for a system administrator.
This type of installation is the most recommended in a production environment because only the minimal software required to properly run the OS will be installed.
This also means a high degree of security and flexibility and a small footprint on your machine's hard drive. All other environments and add-ons listed here can easily be installed afterwards from the command line, by buying a subscription or by using the DVD image as a source.
[
![RHEL 7.3 Software Selection](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png)
][11]
RHEL 7.3 Software Selection
7. In case you want to install one of the pre-configured server base environments, such as Web Server, File and Print Server, Infrastructure Server, Virtualization Host or Server with a Graphical User Interface, just check the preferred item, choose Add-ons from the right pane and hit on Done button to finish this step.
[
![Select Server with GUI on RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png)
][12]
Select Server with GUI on RHEL 7.3
8. On the next step hit on Installation Destination item in order to select the device drive where the required partitions, file system and mount points will be created for your system.
The safest method would be to let the installer automatically configure the hard disk partitions. This option will create all basic partitions required for a Linux system (`/boot`, `/boot/efi`, `/` (root) and `swap` in LVM), formatted with the default RHEL 7.3 file system, XFS.
Keep in mind that if the installation process was started and performed from UEFI firmware, the partition table of the hard disk would be GPT style. Otherwise, if you boot from CSM or BIOS legacy, the hard drive partition table would be old MBR scheme.
If youre not satisfied with automatic partitioning you can choose to configure your hard disk partition table and manually create your custom required partitions.
Anyway, in this tutorial we recommend that you choose to automatically configure partitioning and hit on Done button to move on.
[
![Choose RHEL 7.3 Installation Drive](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png)
][13]
Choose RHEL 7.3 Installation Drive
9. Next, disable Kdump service and move to network configuration item.
[
![Disable Kdump Feature](http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png)
][14]
Disable Kdump Feature
10. In Network and Hostname item, setup and apply your machine host name using a descriptive name and enable the network interface by dragging the Ethernet switch button to `ON` position.
The network IP settings will be automatically pulled and applied in case you have a DHCP server in your network.
[
![Configure Network Hostname](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png)
][15]
Configure Network Hostname
11. To statically set up the network interface, click on the Configure button and manually configure the IP settings as illustrated in the screenshot below.
When you finish setting-up the network interface IP addresses, hit on Save button, then turn `OFF` and `ON` the network interface in order to apply changes.
Finally, click on Done button to return to the main installation screen.
[
![Configure Network IP Address](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png)
][16]
Configure Network IP Address
12. Finally, the last item you need to configure from this menu is a Security Policy profile. Select and apply the Default security policy and hit on Done to go back to the main menu.
Review all your installation items and hit on Begin Installation button in order to start the installation process. Once the installation process has been started you cannot revert changes.
[
![Apply Security Policy for RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png)
][17]
Apply Security Policy for RHEL 7.3
[
![Begin Installation of RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png)
][18]
Begin Installation of RHEL 7.3
13. During the installation process the User Settings screen will appear on your monitor. First, hit on Root Password item and choose a strong password for the root account.
[
![Configure User Settings](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png)
][19]
Configure User Settings
[
![Set Root Account Password](http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png)
][20]
Set Root Account Password
14. Finally, create a new user and grant the user root privileges by checking Make this user administrator. Choose a strong password for this user, hit on Done button to return to the User Settings menu and wait for the installation process to finish.
[
![Create New User Account](http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png)
][21]
Create New User Account
[
![RHEL 7.3 Installation Process](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png)
][22]
RHEL 7.3 Installation Process
15. After the installation process finishes with success, eject the DVD/USB key from the appropriate drive and reboot the machine.
[
![RHEL 7.3 Installation Complete](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png)
][23]
RHEL 7.3 Installation Complete
[
![Booting Up RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png)
][24]
Booting Up RHEL 7.3
That's all! In order to further use Red Hat Enterprise Linux, buy a subscription from the Red Hat customer portal and [register your RHEL system using subscription-manager][25] on the command line.
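As a rough sketch of what that registration looks like on the command line (the username and password are placeholders; see the linked article for the full set of options):

```
$ sudo subscription-manager register --username your_rhn_user --password 'your_password'
$ sudo subscription-manager attach --auto
$ sudo yum repolist
```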
------------------
作者简介:
Matei Cezar
![](http://2.gravatar.com/avatar/be16e54026c7429d28490cce41b1e157?s=128&d=blank&r=g)
I'm a computer addict, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions on desktops and servers, and with bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/red-hat-enterprise-linux-7-3-installation-guide/
作者:[Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://access.redhat.com/downloads
[2]:http://www.tecmint.com/centos-7-3-installation-guide/
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/7.3_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.3_Release_Notes-Overview.html
[4]:https://rufus.akeo.ie/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png
[23]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png
[25]:http://www.tecmint.com/enable-redhat-subscription-reposiories-and-updates-for-rhel-7/

View File

@ -0,0 +1,335 @@
# Kprobes Event Tracing on ARMv8
Timeszoro Translating
![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
### Introduction
Kprobes is a kernel feature that allows instrumenting the kernel by setting arbitrary breakpoints that call out to developer-supplied routines before and after the breakpointed instruction is executed (or simulated). See the kprobes documentation[[1]][2] for more information. Basic kprobes functionality is selected with CONFIG_KPROBES. Kprobes support was added to mainline for arm64 in the v4.8 release.
In this article we describe the use of kprobes on arm64 using the debugfs event tracing interfaces from the command line to collect dynamic trace events. This feature has been available for some time on several architectures (including arm32), and is now available on arm64. The feature allows use of kprobes without having to write any code.
### Types of Probes
The kprobes subsystem provides three different types of dynamic probes described below.
### Kprobes
The basic probe is a software breakpoint that kprobes inserts in place of the instruction you are probing, saving the original instruction for eventual single-stepping (or simulation) when the probe point is hit.
### Kretprobes
Kretprobes is a part of kprobes that allows intercepting a returning function instead of having to set a probe (or possibly several probes) at the return points. This feature is selected whenever kprobes is selected, for supported architectures (including ARMv8).
### Jprobes
Jprobes allows intercepting a call into a function by supplying an intermediary function with the same calling signature, which will be called first. Jprobes is a programming interface only and cannot be used through the debugfs event tracing subsystem. As such we will not be discussing jprobes further here. Consult the kprobes documentation if you wish to use jprobes.
### Invoking Kprobes
Kprobes provides a set of APIs which can be called from kernel code to set up probe points and register functions to be called when probe points are hit. Kprobes is also accessible without adding code to the kernel, by writing to specific event tracing debugfs files to set the probe address and information to be recorded in the trace log when the probe is hit. The latter is the focus of what this document will be talking about. Lastly kprobes can be accessed through the perf command.
### Kprobes API
The kernel developer can write functions in the kernel (often done in a dedicated debug module) to set probe points and take whatever action is desired right before and right after the probed instruction is executed. This is well documented in kprobes.txt.
### Event Tracing
The event tracing subsystem has its own documentation[[2]][3] which might be worth a read to understand the background of event tracing in general. The event tracing subsystem serves as a foundation for both tracepoints and kprobes event tracing. The event tracing documentation focuses on tracepoints, so bear that in mind when consulting that documentation. Kprobes differs from tracepoints in that there is no predefined list of tracepoints but instead arbitrary dynamically created probe points that trigger the collection of trace event information. The event tracing subsystem is controlled and monitored through a set of debugfs files. Event tracing (CONFIG_EVENT_TRACING) will be selected automatically when needed by something like the kprobe event tracing subsystem.
#### Kprobes Events
With the kprobes event tracing subsystem the user can specify information to be reported at arbitrary breakpoints in the kernel, determined simply by specifying the address of any existing probeable instruction along with formatting information. When that breakpoint is encountered during execution kprobes passes the requested information to the common parts of the event tracing subsystem which formats and appends the data to the trace log, much like how tracepoints works. Kprobes uses a similar but mostly separate collection of debugfs files to control and display trace event information. This feature is selected with CONFIG_KPROBE_EVENT. The kprobetrace documentation[[3]][4] provides the essential information on how to use kprobes event tracing and should be consulted to understand details about the examples presented below.
### Kprobes and Perf
The perf tools provide another command line interface to kprobes. In particular “perf probe” allows probe points to be specified by source file and line number, in addition to function name plus offset, and address. The perf interface is really a wrapper for using the debugfs interface for kprobes.
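As an illustration, here is a hedged sketch of probing the same driver function used later in this article through perf instead of the raw debugfs files; the commands are standard perf-probe usage, but treat them as an outline rather than verified output:

```
$ sudo perf probe --add ax88772_reset
$ sudo perf record -e probe:ax88772_reset -aR sleep 30
$ sudo perf script
$ sudo perf probe --del ax88772_reset
```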
### Arm64 Kprobes
All of the above aspects of kprobes are now implemented for arm64. In practice there are some differences from other architectures, though:
* Register name arguments are, of course, architecture specific and can be found in the ARM ARM.
* Not all instruction types can currently be probed. Currently unprobeable instructions include mrs/msr (except DAIF read), exception generation instructions, eret, and hint (except for the nop variant). In these cases it is simplest to just probe a nearby instruction instead. These instructions are blacklisted from probing because the changes they cause to processor state are unsafe to do during kprobe single-stepping or instruction simulation, because the single-stepping context kprobes constructs is inconsistent with what the instruction needs, or because the instruction can't tolerate the additional processing time and exception handling in kprobes (ldx/stx).
* An attempt is made to identify instructions within a ldx/stx sequence and prevent probing, however it is theoretically possible for this check to fail resulting in allowing a probed atomic sequence which can never succeed. Be careful when probing around atomic code sequences.
* Note that because of the details of Linux ARM64 calling conventions it is not possible to reliably duplicate the stack frame for the probed function and for that reason no attempt is made to do so with jprobes, unlike the majority of other architectures supporting jprobes. The reason for this is that there is insufficient information for the callee to know for certain the amount of the stack that is needed.
* Note that the stack pointer information recorded from a probe will reflect the particular stack pointer in use at the time the probe was hit, be it the kernel stack pointer or the interrupt stack pointer.
* There is a list of kernel functions which cannot be probed, usually because they are called as part of kprobes processing. Part of this list is architecture-specific and also includes things like exception entry code.
### Using Kprobes Event Tracing
One common use case for kprobes is instrumenting function entry and/or exit. It is particularly easy to install probes for this since one can just use the function name for the probe address. Kprobes event tracing will look up the symbol name and determine the address. The ARMv8 calling standard defines where the function arguments and return values can be found, and these can be printed out as part of the kprobe event processing.
### Example: Function entry probing
Instrumenting a USB ethernet driver reset function:
```
$ pwd
/sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p ax88772_reset %x0
EOF
$ echo 1 > events/kprobes/enable
```
At this point a trace event will be recorded every time the driver's _ax88772_reset()_ function is called. The event will display the pointer to the _usbnet_ structure passed in via X0 (as per the ARMv8 calling standard) as this function's only argument. After plugging in a USB dongle requiring this ethernet driver we see the following trace information:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 1/1   #P:8
#
#                           _-----=> irqs-off
#                          / _----=> need-resched
#                         | / _---=> hardirq/softirq
#                         || / _--=> preempt-depth
#                         ||| /     delay
#        TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#           | |       |   ||||       |         |
    kworker/0:0-4     [000] d...  10972.102939: p_ax88772_reset_0: (ax88772_reset+0x0/0x230) arg1=0xffff800064824c80
```
Here we can see the value of the pointer argument passed in to our probed function. Since we did not use the optional labelling features of kprobes event tracing, the information we requested is automatically labeled _arg1_. Note that this refers to the first value in the list of values we requested that kprobes log for this probe, not the actual position of the argument to the function. In this case it also just happens to be the first argument to the function we've probed.
### Example: Function entry and return probing
The kretprobe feature is used specifically to probe a function return. At function entry the kprobes subsystem will be called and will set up a hook to be called at function return, where it will record the requested event information. For the most common case the return information, typically in the X0 register, is quite useful. The return value in %x0 can also be referred to as _$retval_. The following example also demonstrates how to provide a human-readable label to be displayed with the information of interest.
Example of instrumenting the kernel `_do_fork()` function to record arguments and results using a kprobe and a kretprobe:
```
$ cd /sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p _do_fork %x0 %x1 %x2 %x3 %x4 %x5
r _do_fork pid=%x0
EOF
$ echo 1 > events/kprobes/enable
```
At this point every call to _do_fork() will produce two kprobe events recorded into the “_trace_” file, one reporting the calling argument values and one reporting the return value. The return value shall be labeled “_pid_” in the trace file. Here are the contents of the trace file after three fork syscalls have been made:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 6/6   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
              bash-1671  [001] d...   204.946007: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   204.946391: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x724
              bash-1671  [001] d...   208.845749: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   208.846127: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x725
              bash-1671  [001] d...   214.401604: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726
```
### Example: Dereferencing pointer arguments
For pointer values the kprobe event processing subsystem also allows dereferencing and printing of desired memory contents, for various base data types. It is necessary to manually calculate the offset into structures in order to display a desired field.
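One way to get such offsets, assuming your vmlinux was built with debug info, is to ask gdb for the structure layout. A sketch for the `wait_opts` structure used in the next example:

```
$ ${CROSS_COMPILE}gdb vmlinux
(gdb) ptype struct wait_opts
(gdb) print /d (int)&((struct wait_opts *)0)->wo_flags
```

The printed value is the byte offset to put in front of the register name, and for `wo_flags` it should agree with the `+4(%x0)` used below.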
Instrumenting the `_do_wait()` function:
```
$ cat > kprobe_events <<EOF
p:wait_p do_wait wo_type=+0(%x0):u32 wo_flags=+4(%x0):u32
r:wait_r do_wait $retval
EOF
$ echo 1 > events/kprobes/enable
```
Note that the argument labels used in the first probe are optional and can be used to more clearly identify the information recorded in the trace log. The signed offset and parentheses indicate that the register argument is a pointer to memory contents to be recorded in the trace log. The “_:u32_” indicates that the memory location contains an unsigned four-byte wide datum (an enum and an int in a locally defined structure in this case).
The probe labels (after the colon) are optional and will be used to identify the probe in the log. The label must be unique for each probe. If unspecified a useful label will be automatically generated from a nearby symbol name, as has been shown in earlier examples.
Also note the “_$retval_” argument could just be specified as “_%x0_“.
Here are the contents of the “_trace_” file after two fork syscalls have been made:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 4/4   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
             bash-1702  [001] d...   175.342074: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xe
             bash-1702  [002] d..1   175.347236: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0x757
             bash-1702  [002] d...   175.347337: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xf
             bash-1702  [002] d..1   175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
```
### Example: Probing arbitrary instruction addresses
In previous examples we have inserted probes for function entry and exit, however it is possible to probe an arbitrary instruction (with a few exceptions). If we are placing a probe inside a C function, the first step is to look at the assembler version of the code to identify where we want to place the probe. One way to do this is to use gdb on the vmlinux file and display the instructions in the function where you wish to place the probe. An example of doing this for the _module_alloc_ function in arch/arm64/kernel/module.c follows. In this case, because gdb seems to prefer using the weak symbol definition and its associated stub code for this function, we get the symbol value from System.map instead:
```
$ grep module_alloc System.map
ffff2000080951c4 T module_alloc
ffff200008297770 T kasan_module_alloc
```
In this example we're using cross-development tools, and we invoke gdb on our host system to examine the instructions comprising our function of interest:
```
$ ${CROSS_COMPILE}gdb vmlinux
(gdb) x/30i 0xffff2000080951c4
        0xffff2000080951c4 <module_alloc>:    sub    sp, sp, #0x30
        0xffff2000080951c8 <module_alloc+4>:    adrp    x3, 0xffff200008d70000
        0xffff2000080951cc <module_alloc+8>:    add    x3, x3, #0x0
        0xffff2000080951d0 <module_alloc+12>:    mov    x5, #0x713             // #1811
        0xffff2000080951d4 <module_alloc+16>:    mov    w4, #0xc0              // #192
        0xffff2000080951d8 <module_alloc+20>:    mov    x2, #0xfffffffff8000000    // #-134217728
        0xffff2000080951dc <module_alloc+24>:    stp    x29, x30, [sp,#16]
        0xffff2000080951e0 <module_alloc+28>:    add    x29, sp, #0x10
        0xffff2000080951e4 <module_alloc+32>:    movk    x5, #0xc8, lsl #48
        0xffff2000080951e8 <module_alloc+36>:    movk    w4, #0x240, lsl #16
        0xffff2000080951ec <module_alloc+40>:    str    x30, [sp]
        0xffff2000080951f0 <module_alloc+44>:    mov    w7, #0xffffffff        // #-1
        0xffff2000080951f4 <module_alloc+48>:    mov    x6, #0x0               // #0
        0xffff2000080951f8 <module_alloc+52>:    add    x2, x3, x2
        0xffff2000080951fc <module_alloc+56>:    mov    x1, #0x8000            // #32768
        0xffff200008095200 <module_alloc+60>:    stp    x19, x20, [sp,#32]
        0xffff200008095204 <module_alloc+64>:    mov    x20, x0
        0xffff200008095208 <module_alloc+68>:    bl    0xffff2000082737a8 <__vmalloc_node_range>
        0xffff20000809520c <module_alloc+72>:    mov    x19, x0
        0xffff200008095210 <module_alloc+76>:    cbz    x0, 0xffff200008095234 <module_alloc+112>
        0xffff200008095214 <module_alloc+80>:    mov    x1, x20
        0xffff200008095218 <module_alloc+84>:    bl    0xffff200008297770 <kasan_module_alloc>
        0xffff20000809521c <module_alloc+88>:    tbnz    w0, #31, 0xffff20000809524c <module_alloc+136>
        0xffff200008095220 <module_alloc+92>:    mov    sp, x29
        0xffff200008095224 <module_alloc+96>:    mov    x0, x19
        0xffff200008095228 <module_alloc+100>:    ldp    x19, x20, [sp,#16]
        0xffff20000809522c <module_alloc+104>:    ldp    x29, x30, [sp],#32
        0xffff200008095230 <module_alloc+108>:    ret
        0xffff200008095234 <module_alloc+112>:    mov    sp, x29
        0xffff200008095238 <module_alloc+116>:    mov    x19, #0x0               // #0
```
In this case we are going to display the result from the following source line in this function:
```
p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
                VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                NUMA_NO_NODE, __builtin_return_address(0));
```
…and also the return value from the function call in this line:
```
if (p && (kasan_module_alloc(p, size) < 0)) {
```
We can identify these in the assembler code from the calls to the external functions. To display these values we will place probes at 0xffff20000809520c and 0xffff20000809521c on our target system:
```
$ cat > kprobe_events <<EOF
p 0xffff20000809520c %x0
p 0xffff20000809521c %x0
EOF
$ echo 1 > events/kprobes/enable
```
Now after plugging an ethernet adapter dongle into the USB port we see the following written into the trace log:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 12/12   #P:8
#
#                           _-----=> irqs-off
#                          / _----=> need-resched
#                         | / _---=> hardirq/softirq
#                         || / _--=> preempt-depth
#                         ||| /     delay
#        TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#           | |       |   ||||       |         |
      systemd-udevd-2082  [000] d... 77.200991: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001188000
      systemd-udevd-2082  [000] d... 77.201059: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d... 77.201115: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001198000
      systemd-udevd-2082  [000] d... 77.201157: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d... 77.227456: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011a0000
      systemd-udevd-2082  [000] d... 77.227522: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d... 77.227579: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b0000
      systemd-udevd-2082  [000] d... 77.227635: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
       modprobe-2097  [002] d... 78.030643: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b8000
       modprobe-2097  [002] d... 78.030761: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
       modprobe-2097  [002] d... 78.031132: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001270000
       modprobe-2097  [002] d... 78.031187: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
```
One more feature of the kprobes event system is recording of statistics information, which can be found in kprobe_profile. After the above trace the contents of that file are:
```
$ cat kprobe_profile
  p_0xffff20000809520c                                    6            0
  p_0xffff20000809521c                                    6            0
```
This indicates that there have been a total of 6 hits on each of the two breakpoints we set, which of course is consistent with the trace log data. More kprobe_profile features are described in the kprobetrace documentation.
There is also the ability to further filter kprobes events.  The debugfs files used to control this are listed in the kprobetrace documentation while the details of their contents are (mostly) described in the trace events documentation.
### Conclusion
Linux on ARMv8 is now at parity with other architectures supporting the kprobes feature. Work is being done by others to also add uprobes and systemtap support. These features/tools and other already completed features (e.g., perf, coresight) allow the Linux ARMv8 user to debug and test performance as they would on other, older architectures.
* * *
Bibliography
[[1]][5] Jim Keniston, Prasanna S. Panchamukhi, Masami Hiramatsu. “Kernel Probes (Kprobes).” _GitHub_. GitHub, Inc., 15 Aug. 2016. Web. 13 Dec. 2016.
[[2]][6] Tso, Theodore, Li Zefan, and Tom Zanussi. “Event Tracing.” _GitHub_. GitHub, Inc., 3 Mar. 2016. Web. 13 Dec. 2016.
[[3]][7] Hiramatsu, Masami. “Kprobe-based Event Tracing.” _GitHub_. GitHub, Inc., 18 Aug. 2016. Web. 13 Dec. 2016.
----------------
作者简介:
[David Long][8] works as an engineer in the Linaro Kernel - Core Development team. Before coming to Linaro he spent several years in the commercial and defense industries doing both embedded realtime work and software development tools for Unix. That was followed by a dozen years at Digital (aka Compaq) doing Unix standards, C compiler, and runtime library work. After that David went to a series of startups doing embedded Linux and Android, embedded custom OSes, and Xen virtualization. He has experience with MIPS, Alpha, and ARM platforms (amongst others). He has used most flavors of Unix, starting in 1979 with Bell Labs V6, and has been a long-time Linux user and advocate. He has also occasionally been known to debug a device driver with a soldering iron and digital oscilloscope.
--------------------------------------------------------------------------------
via: http://www.linaro.org/blog/kprobes-event-tracing-armv8/
作者:[ David Long][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linaro.org/author/david-long/
[1]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[2]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[3]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[4]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[5]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[6]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[7]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[8]:http://www.linaro.org/author/david-long/
[9]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#comments
[10]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[11]:http://www.linaro.org/tag/arm64/
[12]:http://www.linaro.org/tag/armv8/
[13]:http://www.linaro.org/tag/jprobes/
[14]:http://www.linaro.org/tag/kernel/
[15]:http://www.linaro.org/tag/kprobes/
[16]:http://www.linaro.org/tag/kretprobes/
[17]:http://www.linaro.org/tag/perf/
[18]:http://www.linaro.org/tag/tracing/

View File

@ -0,0 +1,302 @@
rusking translating
HOW TO INSTALL AND REMOVE SOFTWARE IN UBUNTU [COMPLETE GUIDE]
============================================================
![Complete guide for installing and removing applications in Ubuntu](https://itsfoss.com/wp-content/uploads/2016/12/Managing-Software-in-Ubuntu-1.jpg)
_Brief: This detailed guide shows you various ways to install software in Ubuntu Linux, and it also demonstrates how to remove installed software in Ubuntu._
When you [switch to Linux][14], the experience could be overwhelming at the start. Even the basic things like installing applications in Ubuntu could seem confusing.
Don't worry. Linux provides so many ways to do the same task that it is only natural to feel lost, at least in the beginning. You are not alone. We have all been through that stage.
In this beginner's guide, I'll show the most popular ways to install software in Ubuntu. I'll also show you how to uninstall the software you installed earlier.
I'll also provide my recommendation about which method you should use for installing software in Ubuntu. Sit tight and pay attention. This is a long, detailed article which is surely going to add to your knowledge.
### INSTALLING AND UNINSTALLING SOFTWARE IN UBUNTU
I am using Ubuntu 16.04 running with Unity desktop environment in this guide. Apart from a couple of screenshots, this guide is applicable to all other flavors of Ubuntu.
### 1.1 INSTALL SOFTWARE USING UBUNTU SOFTWARE CENTER [RECOMMENDED]
The easiest and most convenient way to find and install software in Ubuntu is by using Ubuntu Software Center. In Ubuntu Unity, you can search for Ubuntu Software Center in Dash and click on it to open it:
[
![Run Ubuntu Software Center](https://itsfoss.com/wp-content/uploads/2016/12/Ubuntu-Software-Center.png)
][15]
You can think of Ubuntu Software Center as Google's Play Store or Apple's App Store. It showcases all the software available for your Ubuntu system. You can either search for an application by its name or just browse through various categories of software. You can also opt for the editor's picks. The choice is mainly yours.
![Installing software in Ubuntu using Ubuntu Software Center](https://itsfoss.com/wp-content/uploads/2016/12/install-software-Ubuntu-linux.jpeg)
Once you have found the application you are looking for, simply click on it. This will open a page inside the Software Center with a description of the application. You can read the description, see its rating and also read reviews. You can also write a review if you want.
Once you are convinced that you want the application, you can click on the Install button to install the selected application. You'll have to enter your password in order to install applications in Ubuntu.
[
![Installing software in Ubuntu: The easy way](https://itsfoss.com/wp-content/uploads/2016/12/install-software-Ubuntu-linux-1.jpg)
][16]
Can it be any easier than this? I doubt that.
Tip: As I mentioned in [things to do after installing Ubuntu 16.04][17], you should enable the Canonical Partner repository. By default, Ubuntu provides only the software that comes from its own repository (verified by Ubuntu).
But there is also a Canonical Partner repository which is not directly controlled by Ubuntu and includes closed source proprietary software. Enabling this repository gives you access to more software. [Installing Skype in Ubuntu][18] is achieved by this method.
In Unity Dash, look for Software & Updates.
[
![Ubuntu Software Update Settings](https://itsfoss.com/wp-content/uploads/2014/08/Software_Update_Ubuntu.jpeg)
][19]
And in here, under Other Software tab, check the options of Canonical Partners.
[
![Enable Canonical partners in Ubuntu 14.04](https://itsfoss.com/wp-content/uploads/2014/04/Enable_Canonical_Partner.jpeg)
][20]
### 1.2 REMOVE SOFTWARE USING UBUNTU SOFTWARE CENTER [RECOMMENDED]
We just saw how to install software using Ubuntu Software Center. How about removing software that you had installed using this method?
Uninstalling software with Ubuntu Software Center is as easy as the installation process itself.
Open the Software Center and click on the Installed tab. It will show you all the installed software. Alternatively, you can just search for the application by its name.
To remove the application from Ubuntu, simply click on Remove button. Again you will have to provide your password here.
[
![Uninstall software installed in Ubuntu](https://itsfoss.com/wp-content/uploads/2016/12/Uninstall-Software-Ubuntu.jpeg)
][22]
### 2.1 INSTALL SOFTWARE IN UBUNTU USING .DEB FILES
.deb files are similar to the .exe files in Windows. This is an easy way to provide software installation. Many software vendors provide their software in .deb format. Google Chrome is such an example.
You can download .deb file from the official website.
[
![Downloading deb packaging](https://itsfoss.com/wp-content/uploads/2016/12/install-software-deb-package.png)
][23]
Once you have downloaded the .deb file, just double click on it to run it. It will open in Ubuntu Software Center and you can install it in the same way as we saw in section 1.1.
Alternatively, you can use a lightweight program [Gdebi to install .deb files in Ubuntu][24].
Once you have installed the software, you are free to delete the downloaded .deb file.
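For reference, the command-line side of that route looks roughly like this (the `.deb` file name is only a placeholder; `gdebi-core` provides the command-line tool, while the `gdebi` package adds a GUI):

```
sudo apt-get install gdebi-core
sudo gdebi some-package_1.0_amd64.deb
```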
Tip: A few things to keep in mind while dealing with .deb file.
* Make sure that you are downloading the .deb file from the official source. Only rely on the official website or GitHub pages.
* Make sure that you are downloading the .deb file for correct system type (32 bit or 64 bit). Read our quick guide to [know if your Ubuntu system is 32 bit or 64 bit][8].
### 2.2 REMOVE SOFTWARE THAT WAS INSTALLED USING .DEB
Removing software that was installed from a .deb file is the same as we saw earlier in section 1.2. Just go to Ubuntu Software Center, search for the application name and click on Remove to uninstall it.
Alternatively, you can use [Synaptic Package Manager][25]. It may happen that an installed application is not visible in Ubuntu Software Center. Synaptic Package Manager lists all the software that is available for your system and all the software that is already installed on it.
This is a very powerful and very useful tool. Before Ubuntu Software Center came into existence to provide a more user-friendly approach to software installation, Synaptic was the default program for installing and uninstalling software in Ubuntu.
You can install Synaptic package manager by clicking on the link below (it will open Ubuntu Software Center).
[Install Synaptic Package Manager][26]
Open Synaptic Package Manager and then search for the software you want to uninstall. Installed software is marked with a green button. Click on it and select “Mark for Removal”. Once you do that, click on “Apply” to remove the selected software.
[
![Using Synaptic to remove software in Ubuntu](https://itsfoss.com/wp-content/uploads/2016/12/uninstall-software-ubuntu-synaptic.jpeg)
][27]
### 3.1 INSTALL SOFTWARE IN UBUNTU USING APT COMMANDS [RECOMMENDED]
You might have noticed a number of websites giving you a command like “sudo apt-get install” to install software in Ubuntu.
This is actually the command line equivalent of what we saw in section 1. Basically, instead of using the graphical interface of Ubuntu Software Center, you are using the command line interface. Nothing else changes.
Using the apt-get command to install software is extremely easy. All you need to do is to use a command like:
```
sudo apt-get install package_name
```
Here sudo gives admin or root (in Linux terms) privileges. You can replace package_name with the desired software name.
apt-get commands have auto-completion, so if you type a few letters and hit tab, it will suggest all the packages matching those letters.
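For instance, installing the VLC media player (the package name `vlc` is just an example here) would look like this:

```
sudo apt-get update
sudo apt-get install vlc
```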
### 3.2 REMOVE SOFTWARE IN UBUNTU USING APT COMMANDS [RECOMMENDED]
You can easily remove software that was installed using Ubuntu Software Center, the apt command, or a .deb file by using the command line.
All you have to do is use the following command; just replace package_name with the name of the software you want to delete.
```
sudo apt-get remove package_name
```
Here again, you can benefit from auto completion by pressing the tab key.
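If you also want to drop leftover configuration files and dependencies that nothing needs anymore, two related commands may help; this is a sketch, with purge and autoremove being standard apt-get subcommands:

```
sudo apt-get purge package_name
sudo apt-get autoremove
```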
Using apt-get commands is not rocket science. It is in fact very convenient. With these simple commands, you get acquainted with the command line part of Ubuntu Linux, and it does help in the long run. I recommend reading my detailed [guide on using apt-get commands][28] to learn about it in detail.
[Suggested Read: Using apt-get Commands In Linux [Complete Beginner's Guide]][29]
### 4.1 INSTALL APPLICATIONS IN UBUNTU USING PPA
PPA stands for [Personal Package Archive][30]. This is another way that developers use to provide their software to Ubuntu users.
In section 1, you came across the term repository. A repository basically contains a collection of software. Ubuntu's official repository has the software that is approved by Ubuntu. The Canonical Partner repository contains software from partner vendors.
In the same way, a PPA enables a developer to create their own APT repository. When an end user (i.e., you) adds this repository to the system (sources.list is modified with this entry), the software provided by the developer in that repository becomes available to the user.
Now you may ask what's the need for a PPA when we already have the official Ubuntu repository?
The answer is that not all software automatically gets added to Ubuntu's official repository. Only trusted software makes it to that list. Imagine that you developed a cool Linux application and you want to provide regular updates to your users, but it would take months before it could be added to Ubuntu's repository (if at all). PPAs come in handy in those cases.
Apart from that, Ubuntu's official repository often doesn't include the latest version of a piece of software. This is done to preserve the stability of the Ubuntu system. A brand new software version might have a [regression][31] that could impact the system. This is why it takes some time before a new version makes it to the official repository, sometimes months.
But what if you do not want to wait until the latest version comes to Ubuntu's official repository? This is where a PPA saves your day. By using a PPA, you get the newer version.
Typically a PPA is used with three commands. The first adds the PPA repository to the sources list. The second updates the cache of the software list so that your system is aware of the newly available software. The third installs the software from the PPA.
Ill show you an example by using [Numix theme][32] PPA:
```
sudo add-apt-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-gtk-theme numix-icon-theme-circle
```
In the above example, we added a PPA provided by the [Numix project][33]. And after updating the software information, we install two programs available in the Numix PPA.
If you want a GUI application, you can use the [Y-PPA application][34]. It lets you search for PPAs, and add and remove software in a better way.
Tip: The security of PPAs has often been debated. My advice is that you should only add PPAs from trusted sources, preferably from official sources.
### 4.2 REMOVE APPLICATIONS INSTALLED USING PPA
I have discussed [removing PPA in Ubuntu][35] in detail earlier. You should refer to that article to get more insights about handling PPA removal.
To quickly discuss it here, you can use the following two commands.
```
sudo apt-get remove numix-gtk-theme numix-icon-theme-circle
```
```
sudo add-apt-repository --remove ppa:numix/ppa
```
The first command removes the software installed via the PPA. The second command removes the PPA from sources.list.
### 5.1 INSTALLING SOFTWARE USING SOURCE CODE IN UBUNTU LINUX [NOT RECOMMENDED]
Installing software using the [source code][36] is not something I would recommend. It's tedious, troublesome and not very convenient. You'll have to fight your way through dependencies and whatnot. You'll have to keep the source code files, or else you won't be able to uninstall it later.
But building from source code is still liked by a few, even if they are not developing software of their own. To tell you the truth, the last time I used source code extensively was 5 years ago, when I was an intern and had to develop a piece of software in Ubuntu. I have preferred the other ways to install applications in Ubuntu since then. For a normal desktop Linux user, installing from source code is best avoided.
I'll be short in this section and just list the steps to install software from source code (a command sketch follows the list):
* Download the source code of the program you want to install.
* Extract the downloaded file.
* Go to extracted directory and look for a README or INSTALL file. A well-developed software may include such a file to provide installation and/or removal instructions.
* Look for a file called configure. If it's present, run it with the command `./configure`. This will check whether your system has all the required software (called dependencies in software terms) to install the program. Note that not all software includes a configure file, which is, in my opinion, bad development practice.
* If configure notifies you of missing dependencies, install them.
* Once you have everything, use the command make to compile the program.
* Once the program is compiled, run the command sudo make install to install the software.
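Put together, a typical build-and-install session looks roughly like this; the archive name, and whether the project even ships a configure script, are assumptions:

```
tar xf some-program-1.0.tar.gz
cd some-program-1.0
./configure
make
sudo make install
```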
Do note that some software provides you with an install script, and just running that file will install the software for you. But you won't be that lucky most of the time.
Also note that a program you install this way won't be updated automatically like programs installed from Ubuntu's repository, PPAs, or .deb files.
I recommend reading this detailed article on [using the source code in Ubuntu][37] if you insist on using source code.
### 5.2 REMOVING SOFTWARE INSTALLED USING SOURCE CODE [NOT RECOMMENDED]
If you thought installing software from source code was difficult, think again. Removing the software installed using source code could be a bigger pain.
* First, you should not delete the source code you used to install the program.
* Second, you should make sure at installation time that there is a way to uninstall the program. A badly configured program might not provide a way to uninstall itself, and then you'll have to manually remove all the files installed by the software.
Normally, you should be able to uninstall the program by going to its extracted directory and using this command:
```
sudo make uninstall
```
But there is no guarantee that you'll have this uninstall target every time.
You see, there are lots of ifs and buts attached to source code and not that many advantages. This is the reason why I do not recommend using source code to install software in Ubuntu.
### FEW MORE WAYS TO INSTALL APPLICATIONS IN UBUNTU
There are a few more (not so popular) ways you can install software in Ubuntu. Since this article is already way too long, I won't cover them. I am just going to list them here:
* Ubuntus new [Snap packaging][9].
* [dpkg][10] commands
* [AppImage][11]
* [pip][12] : used for installing Python based programs
### HOW DO YOU INSTALL APPLICATIONS IN UBUNTU?
If you have already been using Ubuntu, what's your favorite way to install software in Ubuntu Linux? Did you find this guide useful? Do share your views, suggestions and questions.
--------------------
作者简介:
![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=70&d=mm&r=g)
I am Abhishek Prakash, 'creator' of It's F.O.S.S. I work as a software professional. I am an avid Linux lover and open source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.
--------------------------------------------------------------------------------
via: https://itsfoss.com/remove-install-software-ubuntu/
作者:[ABHISHEK PRAKASH][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/author/abhishek/
[2]:https://itsfoss.com/remove-install-software-ubuntu/#comments
[3]:http://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fremove-install-software-ubuntu%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[4]:https://twitter.com/share?original_referer=/&text=How+To+Install+And+Remove+Software+In+Ubuntu+%5BComplete+Guide%5D&url=https://itsfoss.com/remove-install-software-ubuntu/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_pc
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fremove-install-software-ubuntu%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fremove-install-software-ubuntu%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[7]:https://www.reddit.com/submit?url=https://itsfoss.com/remove-install-software-ubuntu/&title=How+To+Install+And+Remove+Software+In+Ubuntu+%5BComplete+Guide%5D
[8]:https://itsfoss.com/32-bit-64-bit-ubuntu/
[9]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[10]:https://help.ubuntu.com/lts/serverguide/dpkg.html
[11]:http://appimage.org/
[12]:https://pypi.python.org/pypi/pip
[13]:https://itsfoss.com/remove-install-software-ubuntu/managing-software-in-ubuntu-1/
[14]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[15]:https://itsfoss.com/wp-content/uploads/2016/12/Ubuntu-Software-Center.png
[16]:https://itsfoss.com/remove-install-software-ubuntu/install-software-ubuntu-linux-1/
[17]:https://itsfoss.com/things-to-do-after-installing-ubuntu-16-04/
[18]:https://itsfoss.com/install-skype-ubuntu-1404/
[19]:https://itsfoss.com/ubuntu-notify-updates-frequently/software_update_ubuntu/
[20]:https://itsfoss.com/things-to-do-after-installing-ubuntu-14-04/enable_canonical_partner/
[21]:https://itsfoss.com/essential-linux-applications/
[22]:https://itsfoss.com/remove-install-software-ubuntu/uninstall-software-ubuntu/
[23]:https://itsfoss.com/remove-install-software-ubuntu/install-software-deb-package/
[24]:https://itsfoss.com/gdebi-default-ubuntu-software-center/
[25]:http://www.nongnu.org/synaptic/
[26]:apt://synaptic
[27]:https://itsfoss.com/remove-install-software-ubuntu/uninstall-software-ubuntu-synaptic/
[28]:https://itsfoss.com/apt-get-linux-guide/
[29]:https://itsfoss.com/apt-get-linux-guide/
[30]:https://help.launchpad.net/Packaging/PPA
[31]:https://en.wikipedia.org/wiki/Software_regression
[32]:https://itsfoss.com/install-numix-ubuntu/
[33]:https://numixproject.org/
[34]:https://itsfoss.com/easily-manage-ppas-ubuntu-1310-ppa-manager/
[35]:https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[36]:https://en.wikipedia.org/wiki/Source_code
[37]:http://www.howtogeek.com/105413/how-to-compile-and-install-from-source-on-ubuntu/

View File

@ -0,0 +1,242 @@
Living (Android) without Kotlin
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*Fd349rzh3XWwSbCP2IV7zA.jpeg)
> It is easier to get into something than to get out of it. -- Donald Rumsfeld
Living without Kotlin is like playing Warcraft III on a touchpad. Buying a mouse is simple, but what can you do if your new employer does not want to let you use Kotlin in production?
There are a few options.
* Fight with your product owner to obtain the rights to use Kotlin.
* Use Kotlin and do not tell anyone about it because you know best what is best for you.
* Wipe away your tears and use Java in all its glory.
Imagine that you lost the battle with your product owner and, as a professional engineer, you will not lie and use hipster technology without anyone's permission. I know that it sounds pretty scary, especially when you have already tasted Kotlin, but do not lose hope.
In the rest of this article, I want to briefly describe some Kotlin features that you can bring to your Android Java code through some well-known tools and libraries. A rudimentary understanding of Kotlin and Java is required.
### Data classes
You really do love Kotlin's data classes, don't you? It is so easy to get `equals()`, `hashCode()`, `toString()` and `copy()` generated for you. To be specific, the `data` keyword also generates `componentN()` functions corresponding to the properties in their order of declaration. They are used in destructuring declarations.
```
data class Person(val name: String)
val (riddle) = Person("Peter")
println(riddle)
```
Do you know what will be printed? For sure it will not be the value returned from the `toString()` of the `Person` class. This is where the destructuring declaration comes into play and assigns the value of `name` to `riddle`. Because of the parentheses around `(riddle)`, the compiler knows that it has to use the destructuring declaration mechanism.
```
val (riddle): String = Person("Peter").component1()
println(riddle) // prints Peter
```
>The code does not compile. It is just to show how the destructuring declaration works.
As you can see, the `data` keyword is a super useful language feature, so what can you do to bring it to your Java world? Use an annotation processor and modify the Abstract Syntax Tree. If you want to go deeper, please read the article listed at the end (Project Lombok: Trick Explained).
Using Project Lombok, you can achieve almost the same functionality that the `data` keyword provides. Unfortunately, there is no way to have destructuring declarations.
```
import lombok.Data;
@Data class Person {
    final String name;
}
```
The `@Data` annotation generates `equals()`, `hashCode()` and `toString()`. Additionally, it creates getters for all fields, setters for all non-final fields and a constructor with all required (final) fields. It is worth being aware that Lombok is used only at compile time, so the library code will not be added to your final `.apk`.
### Lambda expressions
Android engineers have a pretty tough life because of the lack of Java 8 features, and one of them is lambda expressions. Lambdas are great as they cut down tons of boilerplate for you. You can use them in your callbacks and streams. In Kotlin, lambda expressions are built in and they look much better than they do in Java. Moreover, the bytecode of a lambda can be inserted directly into the bytecode of the calling method, so the method count does not increase. This can be done using inline functions.
```
button.setOnClickListener { println("Hello World") }
```
Lately, Google announced support for Java 8 features in Android, and thanks to the Jack compiler you can use lambdas in your code. It is also good to mention that they are available on API level 23 and lower.
```
button.setOnClickListener(view -> System.out.println("Hello World!"));
```
How to enable them? Just add those several lines to your `build.gradle` file.
```
defaultConfig {
  jackOptions {
    enabled true
  }
}

compileOptions {
  sourceCompatibility JavaVersion.VERSION_1_8
  targetCompatibility JavaVersion.VERSION_1_8
}
```
If you are not a fan of the Jack compiler, or for some reason you cannot use it, there is a different solution for you. Project Retrolambda lets you run Java 8 code with lambda expressions and method references on Java 7, 6 or 5, and here is the setup.
```
dependencies {
  classpath 'me.tatarka:gradle-retrolambda:3.4.0'
}

apply plugin: 'me.tatarka.retrolambda'

compileOptions {
  sourceCompatibility JavaVersion.VERSION_1_8
  targetCompatibility JavaVersion.VERSION_1_8
}
```
As I mentioned before, inlined lambdas in Kotlin do not increase the method count, but what about using lambdas with Jack or Retrolambda? Obviously, they do not come for free, and the hidden costs are listed below.
![](https://cdn-images-1.medium.com/max/800/1*H7h2MB2auMslMkdaDtqAfg.png)
### Data manipulations
Kotlin introduces higher-order functions as a replacement for streams. They are extremely useful when you have to transform one set of data to another or filter the collection.
```
fun foo(persons: MutableList<Person>) {
    persons.filter { it.age >= 21 }
        .filter { it.name.startsWith("P") }
        .map { it.name }
        .sorted()
        .forEach(::println)
}

data class Person(val name: String, val age: Int)
```
Streams are also provided by Google using Jack compiler. Unfortunately, Jack does not work with Lombok because it skips generating intermediate `.class` files when compiling the code and Lombok depends on these files.
```
void foo(List<Person> persons) {
    persons.stream()
        .filter(it -> it.getAge() >= 21)
        .filter(it -> it.getName().startsWith("P"))
        .map(Person::getName)
        .sorted()
        .forEach(System.out::println);
}

class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    String getName() { return name; }
    int getAge() { return age; }
}
```
That is too good to be true, so where is the catch? Sadly, streams are available from API 24. Good job, Google, but in what universe do apps have `minSdkVersion = 24`?
Fortunately, the Android platform has an awesome open source community which produces a lot of great libraries. Lightweight-Stream-API is one of them, and it contains a streams implementation based on iterators for Java 7 and below.
```
import lombok.Data;
import com.annimon.stream.Stream;
void foo(List<Person> persons) {
    Stream.of(persons)
        .filter(it -> it.getAge() >= 21)
        .filter(it -> it.getName().startsWith("P"))
        .map(Person::getName)
        .sorted()
        .forEach(System.out::println);
}

@Data class Person {
    final String name;
    final int age;
}
```
The sample above combines Lombok, Retrolambda, and Lightweight-Stream-API, and it looks almost as good as Kotlin, doesn't it? Using a static factory method allows you to transform any Iterable into a stream and apply lambdas on it as on Java 8 streams. It would be perfect to wrap the static invocation `Stream.of(persons)` into an extension function of the Iterable type, but Java does not support that.
### Extension functions
The extension mechanism provides the ability to add functionality to a class without having to inherit from it. This well-known concept fits great in the Android world, and it is one of the reasons Kotlin is so popular in the community.
Is there any technique or magic trick that brings extension functions to your Java toolbox? Thanks to Lombok, you can use them as an experimental feature. According to the Lombok documentation, they want to move it out of experimental status with no or only minor changes soon. Let's refactor the last sample and wrap `Stream.of(persons)` into an extension function.
```
import lombok.Data;
import lombok.experimental.ExtensionMethod;
@ExtensionMethod(Streams.class)
public class Foo {
    void foo(List<Person> persons) {
        persons.toStream()
            .filter(it -> it.getAge() >= 21)
            .filter(it -> it.getName().startsWith("P"))
            .map(Person::getName)
            .sorted()
            .forEach(System.out::println);
    }
}

@Data class Person {
    final String name;
    final int age;
}

class Streams {
    static <T> Stream<T> toStream(List<T> list) {
        return Stream.of(list);
    }
}
```
All methods that are `public`, `static`, and have at least one argument whose type is not primitive are considered extension methods. The `@ExtensionMethod` annotation allows you to specify a class that contains your extension functions. Instead of a single `.class` object you can also pass an array.
* * *
I am fully aware that some of my thoughts are pretty controversial, especially the Lombok ones, and I also know that there are a lot of other libraries that can make your life easier. Please do not hesitate to share your experience in the comments. Cheers!
![](https://cdn-images-1.medium.com/max/800/1*peB9mmElOn6xwR3eH0HXXA.png)
---------------------------------
作者简介:
![](https://cdn-images-1.medium.com/fit/c/60/60/1*l7_L6VCKzkOm0gq4Kplnkw.jpeg)
Coder and professional dreamer @ Grid Dynamics
--------------------------------------------------------------------------------
via: https://hackernoon.com/living-android-without-kotlin-db7391a2b170#.q95i5232f
作者:[Piotr Ślesarew][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://hackernoon.com/@piotr.slesarew?source=post_header_lockup
[1]:http://jakewharton.com/exploring-java-hidden-costs/
[2]:https://medium.com/u/8ddd94878165
[3]:https://projectlombok.org/index.html
[4]:https://github.com/aNNiMON/Lightweight-Stream-API
[5]:https://github.com/orfjackal/retrolambda
[6]:http://notatube.blogspot.com/2010/11/project-lombok-trick-explained.html
[7]:http://notatube.blogspot.com/2010/11/project-lombok-trick-explained.html
[8]:https://twitter.com/SliskiCode

View File

@ -1,405 +0,0 @@
Part 4 - LXD 2.0: Resource control
======================================
This is the fourth blog post [in this series about LXD 2.0][0].
As there are a lot of commands involved with managing LXD containers, this post is rather long. If youd instead prefer a quick step-by-step tour of those same commands, you can [try our online demo instead][1]!
![](https://linuxcontainers.org/static/img/containers.png)
### Available resource limits
LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.
As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.
All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.
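For instance, that profile-level example looks like this on the command line (a minimal sketch; `default` is the stock profile name and the command form matches the "Applying some limits" section below):
```
lxc profile set default limits.memory 256MB
```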
We dont support resource limits pooling where a limit would be shared by a group of containers, there is simply no good way to implement something like that with the existing kernel APIs.
#### Disk
This is perhaps the most requested and obvious one: simply set a size limit on the container's filesystem and have it enforced against the container.
And that's exactly what LXD lets you do!
Unfortunately this is far more complicated than it sounds. Linux doesn't have path-based quotas; instead, most filesystems only have user and group quotas, which are of little use to containers.
This means that right now LXD only supports disk limits if you're using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too, but this depends on the filesystem being used with it and gets tricky when combined with live updates, as not all filesystems allow online growth and pretty much none of them allow online shrink.
#### CPU
When it comes to CPU limits, we support 4 different things:
* Just give me X CPUs
In this mode, you let LXD pick a bunch of cores for you and then load-balance things as more containers and CPUs go online/offline.
The container only sees that number of CPUs.
* Give me a specific set of CPUs (say, core 1, 3 and 5)
Similar to the first mode except that no load-balancing is happening, youre stuck with those cores no matter how busy they may be.
* Give me 20% of whatever you have
In this mode, you get to see all the CPUs but the scheduler will restrict you to 20% of the CPU time but only when under load! So if the system isnt busy, your container can have as much fun as it wants. When containers next to it start using the CPU, then it gets capped.
* Out of every measured 200ms, give me 50ms (and no more than that)
This mode is similar to the previous one in that you get to see all the CPUs but this time, you can only use as much CPU time as you set in the limit, no matter how idle the system may be. On a system without over-commit this lets you slice your CPU very neatly and guarantees constant performance to those containers.
Its also possible to combine one of the first two with one of the last two, that is, request a set of CPUs and then further restrict how much CPU time you get on those.
On top of that, we also have a generic priority knob which is used to tell the scheduler who wins when youre under load and two containers are fighting for the same resource.
#### Memory
Memory sounds pretty simple, just give me X MB of RAM!
And it absolutely can be that simple. We support that kind of limits as well as percentage based requests, just give me 10% of whatever the host has!
Then we support some extra stuff on top. For example, you can choose to turn swap on and off on a per-container basis and if its on, set a priority so you can choose what container will have their memory swapped out to disk first!
Oh and memory limits are “hard” by default. That is, when you run out of memory, the kernel out of memory killer will start having some fun with your processes.
Alternatively you can set the enforcement policy to “soft”, in which case youll be allowed to use as much memory as you want so long as nothing else is. As soon as something else wants that memory, you wont be able to allocate anything until youre back under your limit or until the host has memory to spare again.
#### Network I/O
Network I/O is probably our simplest looking limit, trust me, the implementation really isnt simple though!
We support two things. The first is a basic bit/s limits on network interfaces. You can set a limit of ingress and egress or just set the “max” limit which then applies to both. This is only supported for “bridged” and “p2p” type interfaces.
The second thing is a global network I/O priority which only applies when the network interface youre trying to talk through is saturated.
#### Block I/O
I kept the weirdest for last. It may look straightforward and feel like that to the user but there are a bunch of cases where it wont exactly do what you think it should.
What we support here is basically identical to what I described in Network I/O.
You can set IOps or byte/s read and write limits directly on a disk device entry and there is a global block I/O priority which tells the I/O scheduler who to prefer.
The weirdness comes from how and where those limits are applied. Unfortunately the underlying feature we use to implement those uses full block devices. That means we cant set per-partition I/O limits let alone per-path.
It also means that when using ZFS or btrfs which can use multiple block devices to back a given path (with or without RAID), we effectively dont know what block device is providing a given path.
This means that its entirely possible, in fact likely, that a container may have multiple disk entries (bind-mounts or straight mounts) which are coming from the same underlying disk.
And thats where things get weird. To make things work, LXD has logic to guess what block devices back a given path, this does include interrogating the ZFS and btrfs tools and even figures things out recursively when it finds a loop mounted file backing a filesystem.
That logic while not perfect, usually yields a set of block devices that should have a limit applied. LXD then records that and moves on to the next path. When its done looking at all the paths, it gets to the very weird part. It averages the limits youve set for every affected block devices and then applies those.
That means that “in average” youll be getting the right speed in the container, but it also means that you cant have a “/fast” and a “/slow” directory both coming from the same physical disk and with differing speed limits. LXD will let you set it up but in the end, theyll both give you the average of the two values.
### How does it all work?
Most of the limits described above are applied through the Linux kernel Cgroups API. Thats with the exception of the network limits which are applied through good old “tc”.
LXD at startup time detects what cgroups are enabled in your kernel and will only apply the limits which your kernel support. Should you be missing some cgroups, a warning will also be printed by the daemon which will then get logged by your init system.
On Ubuntu 16.04, everything is enabled by default with the exception of swap memory accounting which requires you pass the “swapaccount=1” kernel boot parameter.
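For reference, a minimal sketch of enabling that parameter, assuming a GRUB-based Ubuntu install (your existing options and paths may differ):
```
# Append swapaccount=1 to the kernel command line in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash swapaccount=1"
# then regenerate the GRUB configuration and reboot:
sudo update-grub
sudo reboot
```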
### Applying some limits
All the limits described above are applied directly to the container or to one of its profiles. Container-wide limits are applied with:
```
lxc config set CONTAINER KEY VALUE
```
or for a profile:
```
lxc profile set PROFILE KEY VALUE
```
while device-specific ones are applied with:
```
lxc config device set CONTAINER DEVICE KEY VALUE
```
or for a profile:
```
lxc profile device set PROFILE DEVICE KEY VALUE
```
The complete list of valid configuration keys, device types and device keys can be [found here][1].
#### CPU
To just limit a container to any 2 CPUs, do:
```
lxc config set my-container limits.cpu 2
```
To pin to specific CPU cores, say the second and fourth:
```
lxc config set my-container limits.cpu 1,3
```
More complex pinning ranges like this work too:
```
lxc config set my-container limits.cpu 0-3,7-11
```
The limits are applied live, as can be seen in this example:
```
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
```
Note that to avoid utterly confusing userspace, lxcfs arranges the /proc/cpuinfo entries so that there are no gaps.
As with just about everything in LXD, those settings can also be applied in profiles:
```
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
```
To limit the CPU time of a container to 10% of the total, set the CPU allowance:
```
lxc config set my-container limits.cpu.allowance 10%
```
Or to give it a fixed slice of CPU time:
```
lxc config set my-container limits.cpu.allowance 25ms/200ms
```
And lastly, to reduce the priority of a container to a minimum:
```
lxc config set my-container limits.cpu.priority 0
```
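As described earlier, pinning and time allowances can be combined. A quick sketch (the container name is just an example):
```
lxc config set my-container limits.cpu 0-3
lxc config set my-container limits.cpu.allowance 50ms/200ms
```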
#### Memory
To apply a straightforward memory limit run:
```
lxc config set my-container limits.memory 256MB
```
(The supported suffixes are kB, MB, GB, TB, PB and EB)
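The percentage-based request mentioned earlier goes through the same key, for example:
```
lxc config set my-container limits.memory 10%
```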
To turn swap off for the container (defaults to enabled):
```
lxc config set my-container limits.memory.swap false
```
To tell the kernel to swap this containers memory first:
```
lxc config set my-container limits.memory.swap.priority 0
```
And finally if you dont want hard memory limit enforcement:
```
lxc config set my-container limits.memory.enforce soft
```
#### Disk and block I/O
Unlike CPU and memory, disk and I/O limits are applied to the actual device entry, so you either need to edit the original device or mask it with a more specific one.
To set a disk limit (requires btrfs or ZFS):
```
lxc config device set my-container root size 20GB
```
For example:
```
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 179G 542M 178G 1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 20G 542M 20G 3% /
```
To restrict speed you can do the following:
```
lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB
```
Or to restrict IOps instead:
```
lxc config device set my-container root limits.read 20Iops
lxc config device set my-container root limits.write 10Iops
```
And lastly, if youre on a busy system with over-commit, you may want to also do:
```
lxc config set my-container limits.disk.priority 10
```
To increase the I/O priority for that container to the maximum.
#### Network I/O
Network I/O is basically identical to block I/O in terms of the available knobs.
For example:
```
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s
2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]
stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s
2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]
```
And thats how you throttle an otherwise nice gigabit connection to a mere 100Mbit/s one!
And as with block I/O, you can set an overall network priority with:
```
lxc config set my-container limits.network.priority 5
```
### Getting the current resource usage
The [LXD API][2] exports quite a bit of information on current container resource usage, you can get:
* Memory: current, peak, current swap and peak swap
* Disk: current disk usage
* Network: bytes and packets received and transferred for every interface
And now if youre running a very recent LXD (only in git at the time of this writing), you can also get all of those in “lxc info”:
```
stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
eth0: inet 172.17.0.101
eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
eth0: inet6 fe80::216:3eff:feec:65a8
lo: inet 127.0.0.1
lo: inet6 ::1
lxcbr0: inet 10.0.3.1
lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
zt0: inet 29.17.181.59
zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
zt0: inet6 fe80::79:e7ff:fe0d:5123
Resources:
Processes: 33
Disk usage:
root: 808.07MB
Memory usage:
Memory (current): 106.79MB
Memory (peak): 195.51MB
Swap (current): 124.00kB
Swap (peak): 124.00kB
Network usage:
lxcbr0:
Bytes received: 0 bytes
Bytes sent: 570 bytes
Packets received: 0
Packets sent: 0
zt0:
Bytes received: 1.10MB
Bytes sent: 806 bytes
Packets received: 10957
Packets sent: 10957
eth0:
Bytes received: 99.35MB
Bytes sent: 5.88MB
Packets received: 64481
Packets sent: 64481
lo:
Bytes received: 9.57kB
Bytes sent: 9.57kB
Packets received: 81
Packets sent: 81
Snapshots:
zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
```
### Conclusion
The LXD team spent quite a few months iterating over the language were using for those limits. Its meant to be as simple as it can get while remaining very powerful and specific when you want it to.
Live application of those limits and inheritance through profiles makes it a very powerful tool to live manage the load on your servers without impacting the running services.
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
And if you dont want or cant install LXD on your own machine, you can always [try it online instead][3]!
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/configuration.md
[2]: https://github.com/lxc/lxd/blob/master/doc/rest-api.md
[3]: https://linuxcontainers.org/lxd/try-it

View File

@ -1,458 +0,0 @@
Part 5 - LXD 2.0: Image management
==================================
This is the fifth blog post [in this series about LXD 2.0][0].
As there are a lot of commands involved with managing LXD containers, this post is rather long. If youd instead prefer a quick step-by-step tour of those same commands, you can [try our online demo instead][1]!
![](https://linuxcontainers.org/static/img/containers.png)
### Container images
If youve used LXC before, you probably remember those LXC “templates”, basically shell scripts that spit out a container filesystem and a bit of configuration.
Most templates generate the filesystem by doing a full distribution bootstrapping on your local machine. This may take quite a while, wont work for all distributions and may require significant network bandwidth.
Back in LXC 1.0, I wrote a “download” template which would allow users to download pre-packaged container images, generated on a central server from the usual template scripts and then heavily compressed, signed and distributed over https. A lot of our users switched from the old style container generation to using this new, much faster and much more reliable method of creating a container.
With LXD, were taking this one step further by being all-in on the image based workflow. All containers are created from an image and we have advanced image caching and pre-loading support in LXD to keep the image store up to date.
### Interacting with LXD images
Before digging deeper into the image format, lets quickly go through what LXD lets you do with those images.
#### Transparently importing images
All containers are created from an image. The image may have come from a remote image server and have been pulled using its full hash, short hash or an alias, but in the end, every LXD container is created from a local image.
Here are a few examples:
```
lxc launch ubuntu:14.04 c1
lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d6e c2
lxc launch ubuntu:75182b1241be c3
```
All of those refer to the same remote image (at the time of this writing), the first time one of those is run, the remote image will be imported in the local LXD image store as a cached image, then the container will be created from it.
The next time one of those commands are run, LXD will only check that the image is still up to date (when not referring to it by its fingerprint), if it is, it will create the container without downloading anything.
Now that the image is cached in the local image store, you can also just start it from there without even checking if its up to date:
```
lxc launch 75182b1241be c4
```
And lastly, if you have your own local image under the name "my-image", you can just do:
```
lxc launch my-image c5
```
If you want to change some of that automatic caching and expiration behavior, there are instructions in an earlier post in this series.
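If I recall the daemon configuration keys from that earlier post correctly, tweaking the cache behavior looks roughly like this (treat the key names and values as assumptions and check the LXD configuration documentation for your version):
```
# Expire unused cached images after 5 days instead of the default 10 (assumed key name)
lxc config set images.remote_cache_expiry 5
# Check for image updates every 24 hours (assumed key name)
lxc config set images.auto_update_interval 24
```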
#### Manually importing images
##### Copying from an image server
If you want to copy some remote image into your local image store but not immediately create a container from it, you can use the “lxc image copy” command. It also lets you tweak some of the image flags, for example:
```
lxc image copy ubuntu:14.04 local:
```
This simply copies the remote image into the local image store.
If you want to be able to refer to your copy of the image by something easier to remember than its fingerprint, you can add an alias at the time of the copy:
```
lxc image copy ubuntu:12.04 local: --alias old-ubuntu
lxc launch old-ubuntu c6
```
And if you would rather just use the aliases that were set on the source server, you can ask LXD to copy them for you:
```
lxc image copy ubuntu:15.10 local: --copy-aliases
lxc launch 15.10 c7
```
All of the copies above are one-shot copies, copying the current version of the remote image into the local image store. If you want LXD to keep the image up to date, as it does for the ones stored in its cache, you need to request it with the `--auto-update` flag:
```
lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update
```
##### Importing a tarball
If someone provides you with a LXD image as a single tarball, you can import it with:
```
lxc image import <tarball>
```
If you want to set an alias at import time, you can do it with:
```
lxc image import <tarball> --alias random-image
```
Now if you were provided with two tarballs, identify which one contains the LXD metadata. Usually the tarball name gives it away; if not, pick the smaller of the two, as metadata tarballs are tiny. Then import them both together with:
```
lxc image import <metadata tarball> <rootfs tarball>
```
##### Importing from a URL
“lxc image import” also works with some special URLs. If you have an https web server which serves a path with the LXD-Image-URL and LXD-Image-Hash headers set, then LXD will pull that image into its image store.
For example you can do:
```
lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64
```
When pulling the image, LXD also sets some headers which the remote server could check to return an appropriate image. Those are LXD-Server-Architectures and LXD-Server-Version.
This is meant as a poor man's image server. It can be made to work with any static web server and provides a user-friendly way to import your image.
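A quick, non-authoritative way to check whether a given URL exposes those headers (reusing the example URL above, assuming it still serves images) is to inspect the response headers:
```
curl -sI https://dl.stgraber.org/lxd | grep -i 'lxd-image'
```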
#### Managing the local image store
Now that we have a bunch of images in our local image store, lets see what we can do with them. Weve already covered the most obvious, creating containers from them but there are a few more things you can do with the local image store.
##### Listing images
To get a list of all images in the store, just run “lxc image list”:
```
stgraber@dakara:~$ lxc image list
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| alpine-32 | 6d9c131efab3 | yes | Alpine edge (i386) (20160329_23:52) | i686 | 2.50MB | Mar 30, 2016 at 4:36am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| gentoo | 1a134c5951e0 | no | Gentoo current (amd64) (20160329_14:12) | x86_64 | 232.50MB | Mar 30, 2016 at 4:34am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| my-image | c9b6e738fae7 | no | Scientific Linux 6 x86_64 (default) (20160215_02:36) | x86_64 | 625.34MB | Mar 2, 2016 at 4:56am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
```
You can filter based on the alias or fingerprint simply by doing:
```
stgraber@dakara:~$ lxc image list amd64
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
```
Or by specifying a key=value filter of image properties:
```
stgraber@dakara:~$ lxc image list os=ubuntu
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
```
To see everything LXD knows about a given image, you can use “lxc image info”:
```
stgraber@castiana:~$ lxc image info ubuntu
Fingerprint: e8a33ec326ae7dd02331bd72f5d22181ba25401480b8e733c247da5950a7d084
Size: 139.43MB
Architecture: i686
Public: no
Timestamps:
Created: 2016/03/15 00:00 UTC
Uploaded: 2016/03/16 05:50 UTC
Expires: 2017/04/26 00:00 UTC
Properties:
version: 12.04
aliases: 12.04,p,precise
architecture: i386
description: ubuntu 12.04 LTS i386 (release) (20160315)
label: release
os: ubuntu
release: precise
serial: 20160315
Aliases:
- ubuntu
Auto update: enabled
Source:
Server: https://cloud-images.ubuntu.com/releases
Protocol: simplestreams
Alias: precise/i386
```
##### Editing images
A convenient way to edit image properties and some of the flags is to use:
```
lxc image edit <alias or fingerprint>
```
This opens up your default text editor with something like this:
```
autoupdate: true
properties:
  aliases: 14.04,default,lts,t,trusty
  architecture: amd64
  description: ubuntu 14.04 LTS amd64 (release) (20160314)
  label: release
  os: ubuntu
  release: trusty
  serial: "20160314"
  version: "14.04"
public: false
```
You can change any property you want, turn auto-update on and off or mark an image as publicly available (more on that later).
##### Deleting images
Removing an image is a simple matter of running:
```
lxc image delete <alias or fingerprint>
```
Note that you dont have to remove cached entries, those will automatically be removed by LXD after they expire (by default, after 10 days since they were last used).
##### Exporting images
If you want to get image tarballs from images currently in your image store, you can use “lxc image export”, like:
```
stgraber@dakara:~$ lxc image export old-ubuntu .
Output is in .
stgraber@dakara:~$ ls -lh *.tar.xz
-rw------- 1 stgraber domain admins 656 Mar 30 00:55 meta-ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
-rw------- 1 stgraber domain admins 156M Mar 30 00:55 ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
```
#### Image formats
LXD right now supports two image layouts, unified or split. Both of those are effectively LXD-specific though the latter makes it easier to re-use the filesystem with other container or virtual machine runtimes.
LXD, being solely focused on system containers, doesn't support any of the application container "standard" image formats out there, nor do we plan to.
Our images are pretty simple: they're made of a container filesystem, a metadata file describing things like when the image was made, when it expires, what architecture it's for, … and optionally a bunch of file templates.
See this document for up to date details on the [image format][1].
##### Unified image (single tarball)
The unified image format is what LXD uses when generating images itself. They are a single big tarball, containing the container filesystem inside a “rootfs” directory, have the metadata.yaml file at the root of the tarball and any template goes into a “templates” directory.
Any compression (or none at all) can be used for that tarball. The image hash is the sha256 of the resulting compressed tarball.
##### Split image (two tarballs)
This format is most commonly used by anyone rolling their own images and who already have a compressed filesystem tarball.
They are made of two distinct tarballs. The first contains just the metadata bits that LXD uses, so the metadata.yaml file at the root and any template in the "templates" directory.
The second tarball contains only the container filesystem directly at its root. Most distributions already produce such tarballs as they are common for bootstrapping new machines. This image format allows re-using them unmodified.
Any compression (or none at all) can be used for either tarball, they can absolutely use different compression algorithms. The image hash is the sha256 of the concatenation of the metadata and rootfs tarballs.
##### Image metadata
A typical metadata.yaml file looks something like:
```
architecture: "i686"
creation_date: 1458040200
properties:
  architecture: "i686"
  description: "Ubuntu 12.04 LTS server (20160315)"
  os: "ubuntu"
  release: "precise"
templates:
  /var/lib/cloud/seed/nocloud-net/meta-data:
    when:
      - start
    template: cloud-init-meta.tpl
  /var/lib/cloud/seed/nocloud-net/user-data:
    when:
      - start
    template: cloud-init-user.tpl
    properties:
      default: |
        #cloud-config
        {}
  /var/lib/cloud/seed/nocloud-net/vendor-data:
    when:
      - start
    template: cloud-init-vendor.tpl
    properties:
      default: |
        #cloud-config
        {}
  /etc/init/console.override:
    when:
      - create
    template: upstart-override.tpl
  /etc/init/tty1.override:
    when:
      - create
    template: upstart-override.tpl
  /etc/init/tty2.override:
    when:
      - create
    template: upstart-override.tpl
  /etc/init/tty3.override:
    when:
      - create
    template: upstart-override.tpl
  /etc/init/tty4.override:
    when:
      - create
    template: upstart-override.tpl
```
##### Properties
The only two mandatory fields are the creation date (UNIX epoch) and the architecture. Everything else can be left unset and the image will import fine.
The extra properties are mainly there to help the user figure out what the image is about. The “description” property for example is whats visible in “lxc image list”. The other properties can be used by the user to search for specific images using key/value search.
Those properties can then be edited by the user through "lxc image edit"; in contrast, the creation date and architecture fields are immutable.
##### Templates
The template mechanism allows for some files in the container to be generated or re-generated at some point in the container lifecycle.
We use the pongo2 templating engine for those and we export just about everything we know about the container to the template. That way you can have custom images which use user-defined container properties or normal LXD properties to change the content of some specific files.
As you can see in the example above, were using those in Ubuntu to seed cloud-init and to turn off some init scripts.
### Creating your own images
LXD being focused on running full Linux systems means that we expect most users to just use clean distribution images and not spin their own image.
However, there are a few cases where having your own images is useful, such as having pre-configured images of your production servers, or building your own images for a distribution or architecture that we don't build images for.
#### Turning a container into an image
The easiest way by far to build an image with LXD is to just turn a container into an image.
This can be done with:
```
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
```
You can even turn a past container snapshot into a new image:
```
lxc publish my-container/some-snapshot --alias some-image
```
#### Manually building an image
Building your own image is also pretty simple.
1. Generate a container filesystem. This entirely depends on the distribution youre using. For Ubuntu and Debian, it would be by using debootstrap.
2. Configure anything thats needed for the distribution to work properly in a container (if anything is needed).
3. Make a tarball of that container filesystem, optionally compress it.
4. Write a new metadata.yaml file based on the one described above.
5. Create another tarball containing that metadata.yaml file.
6. Import those two tarballs as a LXD image with:
```
lxc image import <metadata tarball> <rootfs tarball> --alias some-name
```
You will probably need to go through this a few times before everything works, tweaking things here and there, possibly adding some templates and properties.
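As a rough sketch of steps 3 to 6 (assuming your container filesystem lives in a local `rootfs/` directory and your metadata.yaml sits next to it; the file and alias names are just examples):
```
# Step 3: tarball of the container filesystem, contents at the tarball root
tar -cJf rootfs.tar.xz -C rootfs .
# Steps 4-5: tarball containing only metadata.yaml at its root
tar -cJf metadata.tar.xz metadata.yaml
# Step 6: import the pair as a new image
lxc image import metadata.tar.xz rootfs.tar.xz --alias my-custom-image
```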
### Publishing your images
All LXD daemons act as image servers. Unless told otherwise, all images loaded in the image store are marked as private, so only trusted clients can retrieve them. But should you want to run a public image server, all you have to do is tag a few images as public and make sure your LXD daemon is listening to the network.
#### Just running a public LXD server
The easiest way to share LXD images is to run a publicly visible LXD daemon.
You typically do that by running:
```
lxc config set core.https_address "[::]:8443"
```
Remote users can then add your server as a public image server with:
```
lxc remote add <some name> <IP or DNS> --public
```
They can then use it just as they would any of the default image servers. As the remote server was added with “public”, no authentication is required and the client is restricted to images which have themselves been marked as public.
To change what images are public, just “lxc image edit” them and set the public flag to true.
#### Use a static web server
As mentioned above, “lxc image import” supports downloading from a static http server. The requirements are basically:
* The server must support HTTPs with a valid certificate, TLS1.2 and EC ciphers
* When hitting the URL provided to “lxc image import”, the server must return an answer including the LXD-Image-Hash and LXD-Image-URL HTTP headers
If you want to make this dynamic, you can have your server look for the LXD-Server-Architectures and LXD-Server-Version HTTP headers which LXD will provide when fetching the image. This allows you to return the right image for the servers architecture.
#### Build a simplestreams server
The "ubuntu:" and "ubuntu-daily:" remotes aren't using the LXD protocol ("images:" is); they instead use a different protocol called simplestreams.
simplestreams is basically an image server description format, using JSON to describe a list of products and files related to those products.
It is used by a variety of tools like OpenStack, Juju, MAAS, … to find, download or mirror system images and LXD supports it as a native protocol for image retrieval.
While certainly not the easiest way to start providing LXD images, it may be worth considering if your images can also be used by some of those other tools.
More information can be found [here][2].
### Conclusion
I hope this gave you a good idea of how LXD manages its images and how to build and distribute your own. The ability to have the exact same image easily available bit for bit on a bunch of globally distributed system is a big step up from the old LXC days and leads the way to more reproducible infrastructure.
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
And if you dont want or cant install LXD on your own machine, you can always [try it online instead][3]!
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/image-handling.md
[2]: https://launchpad.net/simplestreams
[3]: https://linuxcontainers.org/lxd/try-it
原文https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/

View File

@ -1,209 +0,0 @@
Part 6 - LXD 2.0: Remote hosts and container migration
=======================================================
This is the sixth blog post [in this series about LXD 2.0][0].
![](https://linuxcontainers.org/static/img/containers.png)
### Remote protocols
LXD 2.0 supports two protocols:
* LXD 1.0 API: Thats the REST API used between the clients and a LXD daemon as well as between LXD daemons when copying/moving images and containers.
* Simplestreams: The Simplestreams protocol is a read-only, image-only protocol used by both the LXD client and daemon to get image information and import images from some public image servers (like the Ubuntu images).
Everything below will be using the first of those two.
### Security
Authentication for the LXD API is done through client certificate authentication over TLS 1.2 using recent ciphers. When two LXD daemons must exchange information directly, a temporary token is generated by the source daemon and transferred through the client to the target daemon. This token may only be used to access a particular stream and is immediately revoked so cannot be re-used.
To avoid man-in-the-middle attacks, the client tool also sends the certificate of the source server to the target. That means that for a particular download operation, the target server is provided with the source server URL, a one-time access token for the resource it needs and the certificate that the server is supposed to be using. This prevents MITM attacks and only gives temporary access to the object of the transfer.
### Network requirements
LXD 2.0 uses a model where the target of an operation (the receiving end) is connecting directly to the source to fetch the data.
This means that you must ensure that the target server can connect to the source directly, updating any needed firewall along the way.
We have [a plan][1] to allow this to be reversed and also to allow proxying through the client itself for those rare cases where draconian firewalls are preventing any communication between the two hosts.
### Interacting with remote hosts
Rather than having our users have to always provide hostname or IP addresses and then validating certificate information whenever they want to interact with a remote host, LXD is using the concept of “remotes”.
By default, the only real LXD remote configured is “local:” which also happens to be the default remote (so you dont have to type its name). The local remote uses the LXD REST API to talk to the local daemon over a unix socket.
### Adding a remote
Say you have two machines with LXD installed, your local machine and a remote host that well call “foo”.
First you need to make sure that “foo” is listening to the network and has a password set, so get a remote shell on it and run:
```
lxc config set core.https_address [::]:8443
lxc config set core.trust_password something-secure
```
Now on your local LXD, we just need to make it visible to the network so we can transfer containers and images from it:
```
lxc config set core.https_address [::]:8443
```
Now that the daemon configuration is done on both ends, you can add “foo” to your local client with:
```
lxc remote add foo 1.2.3.4
```
(replacing 1.2.3.4 with your IP address or FQDN)
Youll see something like this:
```
stgraber@dakara:~$ lxc remote add foo 2607:f2c0:f00f:2770:216:3eff:fee1:bd67
Certificate fingerprint: fdb06d909b77a5311d7437cabb6c203374462b907f3923cefc91dd5fce8d7b60
ok (y/n)? y
Admin password for foo:
Client certificate stored at server: foo
```
You can then list your remotes and youll see “foo” listed there:
```
stgraber@dakara:~$ lxc remote list
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| NAME | URL | PROTOCOL | PUBLIC | STATIC |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| foo | https://[2607:f2c0:f00f:2770:216:3eff:fee1:bd67]:8443 | lxd | NO | NO |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| images | https://images.linuxcontainers.org:8443 | lxd | YES | NO |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| local (default) | unix:// | lxd | NO | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | YES | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | YES | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
```
### Interacting with it
Ok, so we have a remote server defined, what can we do with it now?
Well, just about everything you saw in the posts until now, the only difference being that you must tell LXD what host to run against.
For example:
```
lxc launch ubuntu:14.04 c1
```
Will run on the default remote (“lxc remote get-default”) which is your local host.
```
lxc launch ubuntu:14.04 foo:c1
```
Will instead run on foo.
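If you mostly work against the remote host, you can also change the default remote (the counterpart of the "lxc remote get-default" mentioned above), so that plain commands target it; a small sketch:
```
lxc remote set-default foo
lxc launch ubuntu:14.04 c1    # now runs on foo
```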
Listing running containers on a remote host can be done with:
```
stgraber@dakara:~$ lxc list foo:
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| c1 | RUNNING | 10.245.81.95 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe43:7994 (eth0) | PERSISTENT | 0 |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
```
One thing to keep in mind is that you have to specify the remote host for both images and containers. So if you have a local image called “my-image” on “foo” and want to create a container called “c2” from it, you have to run:
```
lxc launch foo:my-image foo:c2
```
Finally, getting a shell into a remote container works just as you would expect:
```
lxc exec foo:c1 bash
```
### Copying containers
Copying containers between hosts is as easy as it sounds:
```
lxc copy foo:c1 c2
```
And youll have a new local container called “c2” created from a copy of the remote “c1” container. This requires “c1” to be stopped first, but you could just copy a snapshot instead and do it while the source container is running:
```
lxc snapshot foo:c1 current
lxc copy foo:c1/current c3
```
### Moving containers
Unless youre doing live migration (which will be covered in a later post), you have to stop the source container prior to moving it, after which everything works as youd expect.
```
lxc stop foo:c1
lxc move foo:c1 local:
```
This example is functionally identical to:
```
lxc stop foo:c1
lxc move foo:c1 c1
```
### How this all works
Interactions with remote containers work as you would expect: rather than using the REST API over a local Unix socket, LXD just uses the exact same API over a remote HTTPS transport.
Where it gets a bit trickier is when interaction between two daemons must occur, as is the case for copy and move.
In those cases the following happens:
1. The user runs “lxc move foo:c1 c1”.
2. The client contacts the local: remote to check for an existing “c1” container.
3. The client fetches container information from “foo”.
4. The client requests a migration token from the source “foo” daemon.
5. The client sends that migration token as well as the source URL and “foo”s certificate to the local LXD daemon alongside the container configuration and devices.
6. The local LXD daemon then connects directly to “foo” using the provided token
A. It connects to a first control websocket
B. It negotiates the filesystem transfer protocol (zfs send/receive, btrfs send/receive or plain rsync)
C. If available locally, it unpacks the image which was used to create the source container. This is to avoid needless data transfer.
D. It then transfers the container and any of its snapshots as a delta.
7. If successful, the client then instructs "foo" to delete the source container.
### Try all this online
Dont have two machines to try remote interactions and moving/copying containers?
Thats okay, you can test it all online using our [demo service][2].
The included step-by-step walkthrough even covers it!
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/issues/553
[2]: https://linuxcontainers.org/lxd/try-it/

View File

@ -1,145 +0,0 @@
Part 7 - LXD 2.0: Docker in LXD
==================================
This is the seventh blog post [in this series about LXD 2.0][0].
![](https://linuxcontainers.org/static/img/containers.png)
### Why run Docker inside LXD
As I briefly covered in the [first post of this series][1], LXDs focus is system containers. That is, we run a full unmodified Linux distribution inside our containers. LXD for all intents and purposes doesnt care about the workload running in the container. It just sets up the container namespaces and security policies, then spawns /sbin/init and waits for the container to stop.
Application containers such as those implemented by Docker or Rkt are pretty different in that they are used to distribute applications, will typically run a single main process inside them and be much more ephemeral than a LXD container.
Those two container types arent mutually exclusive and we certainly see the value of using Docker containers to distribute applications. Thats why weve been working hard over the past year to make it possible to run Docker inside LXD.
This means that with Ubuntu 16.04 and LXD 2.0, you can create containers for your users who will then be able to connect into them just like a normal Ubuntu system and then run Docker to install the services and applications they want.
### Requirements
There are a lot of moving pieces to make all of this work, and we got it all included in Ubuntu 16.04:
- A kernel with CGroup namespace support (4.4 Ubuntu or 4.6 mainline)
- LXD 2.0 using LXC 2.0 and LXCFS 2.0
- A custom version of Docker (or one built with all the patches that we submitted)
- A Docker image which behaves when confined by user namespaces, or alternatively make the parent LXD container a privileged container (security.privileged=true)
### Running a basic Docker workload
Enough talking, lets run some Docker containers!
First of all, you need an Ubuntu 16.04 container which you can get with:
```
lxc launch ubuntu-daily:16.04 docker -p default -p docker
```
The “-p default -p docker” instructs LXD to apply both the “default” and “docker” profiles to the container. The default profile contains the basic network configuration while the docker profile tells LXD to load a few required kernel modules and set up some mounts for the container. The docker profile also enables container nesting.
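If you are curious about what those two profiles actually contain on your system, you can simply dump them (the exact contents will vary with your LXD version, so treat this just as a way to inspect them):
```
# print the configuration and devices defined by each profile
lxc profile show default
lxc profile show docker
```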
Now lets make sure the container is up to date and install docker:
```
lxc exec docker -- apt update
lxc exec docker -- apt dist-upgrade -y
lxc exec docker -- apt install docker.io -y
```
And thats it! Youve got Docker installed and running in your container.
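As an optional sanity check before going further, you can confirm that the Docker daemon inside the container is reachable:
```
# both the client and server sections should be reported if the daemon is up
lxc exec docker -- docker version
```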
Now lets start a basic web service made of two Docker containers:
```
stgraber@dakara:~$ lxc exec docker -- docker run --detach --name app carinamarina/hello-world-app
Unable to find image 'carinamarina/hello-world-app:latest' locally
latest: Pulling from carinamarina/hello-world-app
efd26ecc9548: Pull complete
a3ed95caeb02: Pull complete
d1784d73276e: Pull complete
72e581645fc3: Pull complete
9709ddcc4d24: Pull complete
2d600f0ec235: Pull complete
c4cf94f61cbd: Pull complete
c40f2ab60404: Pull complete
e87185df6de7: Pull complete
62a11c66eb65: Pull complete
4c5eea9f676d: Pull complete
498df6a0d074: Pull complete
Digest: sha256:6a159db50cb9c0fbe127fb038ed5a33bb5a443fcdd925ec74bf578142718f516
Status: Downloaded newer image for carinamarina/hello-world-app:latest
c8318f0401fb1e119e6c5bb23d1e706e8ca080f8e44b42613856ccd0bf8bfb0d
stgraber@dakara:~$ lxc exec docker -- docker run --detach --name web --link app:helloapp -p 80:5000 carinamarina/hello-world-web
Unable to find image 'carinamarina/hello-world-web:latest' locally
latest: Pulling from carinamarina/hello-world-web
efd26ecc9548: Already exists
a3ed95caeb02: Already exists
d1784d73276e: Already exists
72e581645fc3: Already exists
9709ddcc4d24: Already exists
2d600f0ec235: Already exists
c4cf94f61cbd: Already exists
c40f2ab60404: Already exists
e87185df6de7: Already exists
f2d249ff479b: Pull complete
97cb83fe7a9a: Pull complete
d7ce7c58a919: Pull complete
Digest: sha256:c31cf04b1ab6a0dac40d0c5e3e64864f4f2e0527a8ba602971dab5a977a74f20
Status: Downloaded newer image for carinamarina/hello-world-web:latest
d7b8963401482337329faf487d5274465536eebe76f5b33c89622b92477a670f
```
With those two Docker containers now running, we can then get the IP address of our LXD container and access the service!
```
stgraber@dakara:~$ lxc list
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| docker | RUNNING | 172.17.0.1 (docker0) | 2001:470:b368:4242:216:3eff:fe55:45f4 (eth0) | PERSISTENT | 0 |
| | | 10.178.150.73 (eth0) | | | |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
stgraber@dakara:~$ curl http://10.178.150.73
The linked container said... "Hello World!"
```
### Conclusion
Thats it! Its really that simple to run Docker containers inside a LXD container.
Now as I mentioned earlier, not all Docker images will behave as well as my example; thats typically because of the extra confinement that comes with LXD, specifically the user namespace.
Only the overlayfs storage driver of Docker works in this mode. That storage driver may come with its own set of limitations which may further limit how many images will work in this environment.
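A quick way to see which storage driver Docker ended up using inside the container (just an illustrative check; the exact output format may vary between Docker versions):
```
# look for the "Storage Driver:" line in the daemon information
lxc exec docker -- docker info | grep -i "storage driver"
```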
If your workload doesnt work properly and you trust the user inside the LXD container, you can try:
```
lxc config set docker security.privileged true
lxc restart docker
```
That will deactivate the user namespace and will run the container in privileged mode.
Note however that in this mode, root inside the container is the same uid as root on the host. There are a number of known ways for users to escape such containers and gain root privileges on the host, so you should only ever do that if youd trust the user inside your LXD container with root privileges on the host.
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://www.stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/
[2]: https://linuxcontainers.org/lxd/try-it/

View File

@ -1,283 +0,0 @@
# Forget Technical Debt —Here'sHowtoBuild Technical Wealth
#忘记技术债务——教你如何创造技术财富
电视里正播放着《老屋》节目,[Andrea Goulet][58]和她商业上的合作伙伴正悠闲地坐在客厅里商讨着他们的战略计划。那正是大家思想的火花碰撞出创新事物的时刻。他们正在寻求一种能够实现自身价值的方式——为其它公司清理遗留代码及科技债务。他们此刻的情景,像极了电视里的剧情。
“我们意识到我们现在做的工作不仅仅是清理出遗留代码实际上我们是在用重建老屋的方式来重构软件让系统运行更持久更稳定更高效”Goulet说。“这让我开始思考着如何让更多的公司花钱来改善他们的代码以便让他们的系统运行更高效。就好比为了让屋子变得更实用你不得不使用一个全新的屋顶。这并不吸引人但却是至关重要的然而很多人都搞错了。“
如今,她是[Corgibytes][57]公司的CEO——一家提高软件现代化和进行系统重构方面的咨询公司。她曾经见过各种各样糟糕的系统遗留代码以及不计其数的严重的科技债务事件。Goulet认为创业公司需要从偿还债务思维模式向创造科技财富的思维模式转变并且要从铲除旧代码的方式向逐步修复的方式转变。她解释了这种新的方法以及如何完成这些看似不可能完成的事情——实际上是聘用大量的工程事来完成这些工作。
### 反思遗留代码
关于遗留代码最广泛的定义由Michael Feathers在他的著作[修改代码的艺术][56][][55]一书中提出遗留代码就是没有被测试的代码。这个定义比大多数人所认为的——遗留代码仅指那些古老陈旧的系统这个说法要妥当得多。但是Goulet认为这两种定义都不够明确。“随时软件周期的生长遗留代码显得毫无用处。一两年的应用程序其代码已经进入遗留状态了”她说。“最重要的是如何提高软件质量的难易程度。”
这意味着代码写得不够清楚,缺少解释说明,没有任何关于你写的代码构件和做出这个决定的流程。一个单元测试属于一种类型的构件,也包括所有的你写那部分代码的原因以及逻辑推理相关的文档。当你去修复代码的过程中,如果没办法搞清楚原开发者的意图,那些代码就属于遗留代码了。
> 遗留代码不是技术问题,而是沟通上的问题
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/H4y9x4gQj61G9aK4v8Kp_Screen%20Shot%202016-08-11%20at%209.16.38%20AM.png)
如果你像Goulet所说的那样迷失在遗留代码里你会发现每一次的沟通交流过程都会变得像那条鲜为人知的[康威定律][54]所描述的一样。
Goulet说“这个定律认为系统的基础架构能反映出你们整个公司的组织沟通结构如果你想修复你们公司的遗留代码而没有一个好的组织沟通方式是不可能完成的。那是很多人都没注意到的一个重要环节。”
Goulet和她的团队成员更像是考古学家一样来研究遗留系统项目。他们根据前开发者写的代码构件相关的线索来推断出他们的思想意图。然后再根据这些构件之间的关系来作出新的决策。
最重要的代码构件是什么呢良好的代码结构、清晰的思想意图、整洁的代码。例如如果你使用了通用的名称如”foo“或”bar“来命名一个变量半年后你再返回来看这段代码时你根本就看不出这个变量的用途是什么。
如果代码读起来很困难,可以使用源代码控制系统,这是一个非常有用的构件,因为从该构件可以看出代码的历史修改信息,这为软件开发者写明他们作出本次修改的原因提供一个很好的途径。
Goulet说”我一个朋友认为对于代码注释的信息如有需要每一个概要部分的内容应该有推文的一半多而代码的描述信息应该有一篇博客那么长。你得用这个方式来为你修改的代码写一个合理的说明。这也不会浪费太多的时间并且给后期的项目开发者提供更多有用的信息但是让人惊讶的是没人会这么做。我们经常听到一些很沮丧的开发人员在调试一段代码的过程中报怨这是谁写的这烂代码最后发现还不是他们自己写的。“
使用自动化测试对于理解程序的流程非常有用。Goulet解释道“很多人都比较认可Michael Feathers提出的关于遗留代码的定义尤其是我们与[行为驱动开发模式][53]相结合的过程中使用测试套件,比如编写测试场景,这对于理解开发者的意图来说是非常有用的工具。
理由很简单,如果你想把遗留代码的程度降到最低,你得多注意下代码的易理解性以及将来回顾该代码的一些细节上。编写并运行单元程序、接受、认可,并且进行集成测试,写清楚注释的内容。方便以后你自己或是别人来理解你写的代码。
尽管如此,由于很多已知的和不可意料的原因,遗留代码仍然会发生。
在创业公司刚成立初期公司经常会急于推出很多新的功能。开发人员在巨大的压力下一边完成项目交付一边测试系统缺陷。Corgibytes团队就遇到过好多公司很多年都懒得对系统做详细的测试了。
确实如此,当你急于开发出系统原型的时候,强制性地去做太多的系统测试也许意义不大。但是,一旦产品开发完成并投入使用后,你就不得投入大量的时间精力来维护及完善系统。“很多人觉得运维没什么好担心的,重要的是产品功能特性上的强大。如果真这样,当系统规模到一定程序的时候,就很难再扩展了。同时也就失去市场竞争力了。
最后才明白过来,原来热力学第二定律对你们公司的代码也同样适用:你所面临的一切将向熵增的方向发展。你需要与混乱无序的技术债务进行一场无休无止的战斗。并且随着时间的增长,遗留代码也逐渐变成一种简单类型的债务。
她说“我们再次拿家来做比喻。你必须坚持每天收拾餐具打扫卫生倒垃圾。如果你不这么做情况将来越来越糟糕直到有一天你不得不向HazMat团队求助。”
就跟这种情况一样Corgibytes团队接到很多公司CEO的求助电话比如Features公司的CEO在电话里抱怨道“现在我们公司的开发团队工作效率太低了三年前只需要两个星期就完成的工作现在却要花费12个星期。”
> 技术债务往往反应出公司运作上的问题
很多公司的CEO明知会发生技术债务的问题但是他们也难让其它同事相信花钱来修复那些已经存在的问题是很值的。这看起来像是在走回头路很乏味或者没有新的产品。有些公司直到系统已经严重的影响了日常工作效率时才着手去处理技术债务方面的问题那时付出的代价就太高了。
### 忘记债务,创造技术财富
如果你想把[重构技术债务][52]作为一个积累技术财富的机会-[敏捷开发讲师Declan Whelan最近提到的一个术语][51]你很可能要先说服你们公司的CEO、投资者和其它的股东登上这条财富之船。
“我们没必要把技术债务想像得很可怕。当产品处于开发设计初期技术债务反而变得非常有用”Goulet说。“当你解决一些系统遗留的技术问题时你会充满成就感。例如当你在自己家里安装新窗户时你确实会花费一笔不少的钱但是之后你每个月就可以节省100美元的电费。程序代码亦是如此。这虽然暂时没有提高工作效率但是随时时间地推移将为你们公司创造更多的生产率。“
一旦你意识到项目团队工作不再富有成效时,你必须要确认下是哪些技术债务在拖后腿了。
“我跟很多不惜一切代价招募英才的初创公司交流过,他们高薪聘请一些工程师来只为了完成更多的工作。”她说。”相反的是,他们应该找出如何让原有的每个工程师都更高效率工作的方法。你需要去解决什么样的技术债务以增加额外的生产率?"
如果你改变自己的观点并且专注于创造技术财富,你将会看到产能过剩的现象,然后重新把多余的产能投入到修复更多的技术债务和遗留代码的的良性循环中。你们的产品将会走得更远,发展得更好。
> 别想着把你们公司的软件当作一个项目来看。从现在起,你把它想象成一栋自己要长久居住的房子。
“这是一个极其重要的思想观念的转变”Goulet说。“这将带你走出短浅的思维模式并且你会比之前更加关注产品的维护工作。”
这就像一栋房子,要实现其现代化的改造方式有两种:小动作,表面上的更改(“我买了一块新的小地毯!”)和大改造,需要很多年才能偿还所有债务(“我假设我们将要替换掉所有的管道...")。你必须考虑好两者才能你们已有的产品和整个团队顺利地运作起来。
这还需要提前预算好——否则那些较大的花销将会是硬伤。定期维护是最基本的预期费用。让人震惊的是,很多公司在商务上都没把维护成本预算进来。
这就是Goulet提出软件重构这个术语的原因。当你房子里的一些东西损坏的时候你不用铲除整个房子而是重新修复坏掉的那一部分就可以了。同样的当你们公司出现老的损坏的代码时重写代码通常不是最明智的选择。
下面是Corgibytes公司在重构客户代码用到的一些方法
* 把大型的应用系统分解成轻量级的更易于维护的微服务。
* 相互功能模块之间降低耦合性以便于扩展。
* 更新品牌和提升用户前端界面体验。
* 集合自动化测试来检查代码可用性。
* 重构或者修改代码库来提高易用性。
系统重构也进入到运维领域。比如Corgibytes公司经常推荐新客户使用[Docker][50]以便简单快速的部属新的开发环境。当你们公司有30个工程师的时候把初始化配置时间从10小时减少到10分钟对完成更多的工作很有帮助。系统重构不仅仅是应用于软件开发本身也包括如何进行系统重构。
如果你知道有什么新的技术能让你们的代码管理起来更容易创建更高效就应该把这它们写入到每年或季度项目规划中。你别指望它们会自动呈现出来。但是也别给自己太大的压力来马上实施它们。Goulets看到很多公司从一开始就这些新的技术进行100%覆盖率测试而陷入困境。
具体来说,每个公司都应该把以下三种类型的重构工作规划到项目建设中来:
* 自动化测试
* 持续性交付
* 文化提升
咱们来深入的了解下每一项内容
自动化测试
“有一位客户即将进行第二轮融资但是他们却没办法在短期内招聘到足够的人才。我们帮助他们引进了一种自动化测试框架这让他们的团队在3个月的时间内工作效率翻了一倍”Goulets说。“这样他们就可以在他们的投资人面前自豪的说”我们的一个精英团队完成的任务比两个普通的团队要多。“”
自动化测试从根本上来讲就是单个测试的组合。你可以使用单元测试再次检查某一行代码。可以使用集成测试来确保系统的不同部分都正常运行。还可以使用验收性测试来检验系统的功能特性是否跟你想像的一样。当你把这些测试写成测试脚本后,你只需要简单地用鼠标点一下按钮就可以让系统自行检验了,而不用手工的去梳理并检查每一项功能。
在产品的市场定位前就来制定自动化测试机制是有些言之过早了。但是如果你有一款信心满满的产品,并且也很依赖客户,那就更应该把这件事考虑在内了。
持续性交付
这是与自动化交付相关的工作,过去是需要人工完成。目的是当系统部分修改完成时可以迅速进行部属,并且短期内得到反馈。这给公司在其它竞争对手面前有很大的优势,尤其是在售后服务行业。
“比如说你每次部属系统时环境都很复杂。熵值无法有效控制”Goulets说。“我们曾经花了12个小时甚至更多的时间来部属一个很大的集群环境。然而想必你将来也不会经常干部属新环境这样的工作。因为太折腾人了而且还推迟了系统功能上线的时间。同时你也落后于其它公司并失去竞争力了。
在持续性改进的过程中常见的其它自动化任务包括:
* 在提交完成之后检查中断部分。
* 在出现故障时进行回滚操作。
* 审查自动化代码的质量。
* 根据需求增加或减少服务器硬件资源。
* 让开发,测试及生产环境配置简单易懂。
举一个简单的例子比如说一个客户提交了一个系统Bug报告。开发团队越高效解决并修复那个Bug越好。对于开发人员来说修复Bug的挑战根本不是个事儿这本来也是他们的强项主要是系统设置上不够完善导致他们浪费太多的时间去处理bug以外的其它问题。
使用持续改进的方式时,在你决定哪些工作应该让机器去做还是最好丢给研发去完成的时候,你会变得很严肃无情。如果选择让机器去处理,你得使其自动化完成。这样也能让研发很愉快地去解决其它有挑战性的问题。同时客户也会很高兴地看到他们报怨的问题被快速处理了。你的待修复的未完成任务数减少了,之后你就可以把更多的时间投入到运用新的方法来提高公司产品质量上了。
”你必须时刻问自己我应该如何为我们的客户改善产品功能如何做得更好如何让产品运行更高效Goulets说。“一旦你回答完这些问题后你就得询问下自己如何自动去完成那些需要改善的功能”
提升企业文化
Corgibytes公司每天都会遇到同样的问题一家创业公司建立了一个对开发团队毫无影响的文化环境。公司CEO抱着双臂思考着为什么这样的环境对员工没多少改变。然而事实却是公司的企业文化观念与他们是截然相反的。为了激烈你们公司的工程师你必须全面地了解他们的工作环境。
为了证明这一点Goulet引用了作者Robert Henry说过的一段话
> 目的不是创造艺术,而是在最美妙的状态下让艺术应运而生。
“也就是说你得开始思考一下你们公司的产品,“她说。”你们的企业文件就应该跟自己的产品一样。你们的目标是永远创造一个让艺术品应运而生的环境,这件艺术品就是你们公司的代码,一流的售后服务、充满幸福感的员工、良好的市场、盈利能力等等。这些都息息相关。“
优先考虑公司的技术债务和遗留代码也是一种文化。那才是真正能让开发团队深受影响的方法。同时这也会让你将来有更多的时间精力去完成更重要的工作。如果你不从根本上改变固有的企业文化环境你就不可能重构公司产品。改变你所有的对产品维护及现代化上投资的态度是开始实施变革的第一步最理想情况是从公司的CEO开始转变。
以下是Goulet关于建立那种流态文化方面提出的建议
* 反对公司嘉奖那些加班到深夜的”英雄“。提倡高效率的工作方式。
* 了解协同开发技术比如Woody Zuill提出的[暴徒编程][44][][43]模式。
* 遵从4个[现代敏捷开发][42] 原则:用户至上、实践及快速学习、把系统安全放在首位、持续交付价值。
* 每周为研发提供项目外的职业发展时间。
* 把[日工作记录]作为一种驱动开发团队主动解决问题的方式。
* 把同情心放在第一位。Corgibytes公司让员工参加[Brene Brown勇气工厂][40]的培训是非常有用的。
”如果公司高管和投资者不支持这种文件升级方式你得从客户服务的角度去说服他们“Goulet说”告诉他们通过这次调整后最终产品将如何给公司的大客户提高更好的体验。这是你能做的一个很有力的论点。“
### 寻找最具天财的代码重构者
整个行业都认为那些顶尖的工程师都不愿意去干修复遗留代码的工作。他们只想着去开发新的东西。大家都说把他们留在维护部门真是太浪费人才了。
其实这些都是误解。如果你知道如何寻找到那些技术精湛的工程师以为他们提供一个愉快的工作环境,你可以安排他们来帮你解决那些最棘手的技术债务问题。
”每一次开会的时候我们都会问现场的同事谁喜欢去干遗留代码的工作但是也只有那么不到10%的同事会举手。“Goulet说。”但是当我跟这些人交流的过程中我发现这些工程师恰好是喜欢最具挑战性工作的人才。“
有一位客户来寻求她的帮助他们使用国产的数据库没有任何相关文档也没有一种有效的方法来弄清楚他们公司的产品架构。她称那些类似于面包和黄油的一类工程师为”修正者“。在Corgibytes公司她有一支这样的修正者团队由她支配他们没啥爱好只喜欢通过研究二进制代码来解决技术问题。
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/BeX5wWrESmCTaJYsuKhW_Screen%20Shot%202016-08-11%20at%209.17.04%20AM.png)
那么如何才能找到这些技术人才呢Goulet尝试过各种各样的方法其中有一些方法还是富有成效的。
她创办了一个社区网站[legacycode.rocks][49]并且在招聘启示上写道:”长期招聘那些喜欢重构遗留代码的另类开发人员...如果你以从事处理遗留代码的工作为自豪,欢迎加入!“
”我刚开始收到很多这些人发来邮件说,’噢,天呐,我也属于这样的开发人员!‘“她说。”我开始发布这条信息,并且告诉他们这份工作是非常有意义的,以吸引合适的人才“
在招聘的过程中,她也会使用持续性交付的经验来回答那些另类开发者想知道的信息:包括详细的工作内容以及明确的要求。我这么做的原因是因为我讨厌重复性工作。如果我收到多封邮件来咨询同一个问题,我会把答案发布在网上,我感觉自己更像是在写说明文档一样。”
但是随着时间的推移她注意到她会重新定义招聘流程来帮助她识别出更出色的候选人。比如说她在应聘要求中写道“公司CEO将会重新审查你的简历因此确保邮件发送给CEO时”不用写明性别。所有以“尊敬的先生或女士”开头的信件将会被当垃圾处理掉。然后这只不过是她招聘初期的策略而已。
“我开始这么做是因为很多申请人把我当成一家软件公司的男性CEO这让我很厌烦”Goulet说。“所有有一天我想我应该它当作应聘要求放到网上看有多少人注意到这个问题。令我惊讶的是这让我过滤掉一些不太严谨的申请人。还突显出了很多擅于从事遗留代码方面工作的人。
Goulet想起一个应聘者发邮件给我说“我查看了你们网站的代码我喜欢这个网站以及你们打招呼的方式这就是我所希望的。你们的网站架构很奇特好像是用PHP写的但是你们却运行在用Ruby语言写的Jekyll下。我真的很好奇那是什么呢。”
原来是这样的Goulet从她的设计师那里得知在HTML、CSS和JavaScript文件中有一个未使用的PHP类名她一直想解决这个问题但是一直没机会。她的回复是“你正在找工作吗
另外一名候选人注意到她曾经在一篇说明文档中使用CTO这个词但是她的团队里并没有这个头衔她的合作伙伴是首席代码语者。其次是那些注重细节、充满求知欲、积极主动的候选者更能引起她的注意。
> 代码修正者不仅需要注重细节,而且这也是他们必备的品质。
让人吃惊的是Goulet从来没有为招募最优秀的代码修正者而感到厌烦过。”大多数人都是通过我们的网站直接投递简历但是当我们想扩大招聘范围的时候我们会通过[PowerToFly][48]和[WeWorkRemotely][47]网站进行招聘。我现在确实不需要招募新人马了。他们需要经历一段很艰难的时期才能理解代码修正者的意义是什么。“
如果他们通过首轮面试Goulet将会让候选者阅读一篇Arlo Belshee写的文章”[命名是一个过程][46]“。它讲的是非常详细的处理遗留代码的的过程。她最经典的指导方法是:”阅读完这段代码并且告诉我,你是怎么理解的。“
她将找出对问题的理解很深刻并且也愿意接受文章里提出的观点候选者。这对于筛选出有坚定信念的想被雇用的候选者来说是极其有用的办法。她强力要求候选者找出一段与你操作相关的最关键的代码来证明你是充满激情的、有主见的及善于分析问题的人。
最后,她会让候选者跟公司里当前的团队成员一起使用[Exercism.io][45]工具进行编程。这是一个开源项目,它允许开发者学习如何在不同的编程语言环境下使用一系列的测试驱动开发的练习进行编程。第一部分的协同编程课程允许候选者选择其中一种语言进行内建。下一个练习中,面试者可以选择一种语言进行编程。他们总能看到那些人处理异常的方法、随机应便的能力以及是否愿意承认某些自己不了解 的技术。
“当一个人真正的从专家转变为大师的时候他才会毫不犹豫的承认自己不知道的东西“Goulet说。
让他们使用自己不熟悉的编程语言来写代码也能衡量其坚韧不拔的毅力。”我们想听到某个人说,‘我会深入研究这个问题直到彻底解决它。“也许第二天他们仍然会跑过来跟我们说,’我会一直留着这个问题直到我找到答案为止。‘那是作为一个成功的修正者表现出来的一种气质。“
> 如果你认为产品开发人员在我们这个行业很受追捧,因此很多公司也想让他们来做维护工作。那你可错了。最优秀的维护修正者并不是最好的产品开发工程师。
如果一个有天赋的修正者在眼前Goulet懂得如何让他走向成功。下面是如何让这种类型的开发者感到幸福及高效工作的一些方式
* 给他们高度的自主权。把问题解释清楚,然后安排他们去完成,但是永不命令他们应该如何去解决问题。
* 如果他们要求升级他们的电脑配置和相关工具,尽管去满足他们。他们明白什么样的需求才能最大限度地提高工作效率。
* 帮助他们[避免更换任务][39]。他们喜欢全身心投入到某一个任务直至完成。
总之这些方法已经帮助Corgibytes公司培养出20几位对遗留代码充满激情的专业开发者。
### 稳定期没什么不好
大多数创业公司都都不想跳过他们的成长期。一些公司甚至认为成长期应该是永无止境的。而且,他们觉得也没这个必要,即便他们已经进入到了下一个阶段:稳定期。完全进入到稳定期意味着你可以利用当前的人力资源及管理方法在创造技术财富和消耗资源之间做出一个正确的选择。
”在成长期和稳定期之间有个转折点就是维护人员必须要足够壮大并且你开始更公平的对待维护人员以及专注新功能的产品开发人员“Goulet说。”你们公司的产品开发完成了。现在你得让他们更加稳定地运行。“
这就意味着要把公司更多的预算分配到产品维护及现代化方面。”你不应该把产品维护当作是一个不值得关注的项目,“她说。”这必须成为你们公司固有的一种企业文化——这将帮助你们公司将来取得更大的成功。“
最终,你通过这么努力创建的技术财富将会为你的团队带来一大批全新的开发者:他们就像侦查兵一样,有充足的时间和资源去探索新的领域,挖掘新客户资源并且给公司创造更多的机遇。当你们在新的市场领域做得更广泛并且不断发展得更好——那么你们公司已经真正地进入到繁荣发展的状态了。
--------------------------------------------------------------------------------
via: http://firstround.com/review/forget-technical-debt-heres-how-to-build-technical-wealth/
作者:[http://firstround.com/][a]
译者:[rusking](https://github.com/rusking)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://firstround.com/
[1]:http://corgibytes.com/blog/2016/04/15/inception-layers/
[2]:http://www.courageworks.com/
[3]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[4]:https://www.industriallogic.com/blog/modern-agile/
[5]:http://mobprogramming.org/
[6]:http://exercism.io/
[7]:http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/
[8]:https://weworkremotely.com/
[9]:https://www.powertofly.com/
[10]:http://legacycode.rocks/
[11]:https://www.docker.com/
[12]:http://legacycoderocks.libsyn.com/technical-wealth-with-declan-wheelan
[13]:https://www.agilealliance.org/resources/initiatives/technical-debt/
[14]:https://en.wikipedia.org/wiki/Behavior-driven_development
[15]:https://en.wikipedia.org/wiki/Conway%27s_law
[16]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[17]:http://corgibytes.com/
[18]:https://www.linkedin.com/in/andreamgoulet
[19]:http://corgibytes.com/blog/2016/04/15/inception-layers/
[20]:http://www.courageworks.com/
[21]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[22]:https://www.industriallogic.com/blog/modern-agile/
[23]:http://mobprogramming.org/
[24]:http://mobprogramming.org/
[25]:http://exercism.io/
[26]:http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/
[27]:https://weworkremotely.com/
[28]:https://www.powertofly.com/
[29]:http://legacycode.rocks/
[30]:https://www.docker.com/
[31]:http://legacycoderocks.libsyn.com/technical-wealth-with-declan-wheelan
[32]:https://www.agilealliance.org/resources/initiatives/technical-debt/
[33]:https://en.wikipedia.org/wiki/Behavior-driven_development
[34]:https://en.wikipedia.org/wiki/Conway%27s_law
[35]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[36]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[37]:http://corgibytes.com/
[38]:https://www.linkedin.com/in/andreamgoulet
[39]:http://corgibytes.com/blog/2016/04/15/inception-layers/
[40]:http://www.courageworks.com/
[41]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[42]:https://www.industriallogic.com/blog/modern-agile/
[43]:http://mobprogramming.org/
[44]:http://mobprogramming.org/
[45]:http://exercism.io/
[46]:http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/
[47]:https://weworkremotely.com/
[48]:https://www.powertofly.com/
[49]:http://legacycode.rocks/
[50]:https://www.docker.com/
[51]:http://legacycoderocks.libsyn.com/technical-wealth-with-declan-wheelan
[52]:https://www.agilealliance.org/resources/initiatives/technical-debt/
[53]:https://en.wikipedia.org/wiki/Behavior-driven_development
[54]:https://en.wikipedia.org/wiki/Conway%27s_law
[55]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[56]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[57]:http://corgibytes.com/
[58]:https://www.linkedin.com/in/andreamgoulet

View File

@ -0,0 +1,225 @@
安卓编年史
============================================================
### 让我们跟着安卓从 0.5 版本到 7 的无尽迭代来看看它的发展历史。
毫不夸张地说谷歌搜索在棒棒糖中无处不在。“持续开启语音识别”这项特性让用户可以在任何界面随时说出“OK Google”即使是在息屏状态也没有问题。谷歌应用依然是谷歌的首要主屏这项特性是自奇巧时引入的。现在搜索栏也会显示在新的最近应用界面。
Google Now 依然是最左侧的主屏,但现在 Material Design 对它进行了大翻新,给了它一个色彩大胆的头部以及重新设计的排版。
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/play-store-1-150x150.jpg)
][1]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/play2-150x150.jpg)
][2]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/6-150x150.jpg)
][3]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/5-150x150.jpg)
][4]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/12-2-150x150.jpg)
][5]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/14-1-150x150.jpg)
][6]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/19-1-150x150.jpg)
][7]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/13-2-150x150.jpg)
][8]
Play 商店遵从了和其它棒棒糖应用相似的轨迹。它在视觉上焕然一新,大胆的色彩,新排版,还有一个全新的布局。通常这里不会有什么新增功能,就只是给一切换件新马甲。
Play 商店的导航面板现在真的可以用于导航了,每个分类有各自的入口。棒棒糖也不再在操作栏放“更多”按钮了,取而代之的是一个独立的操作按钮(通常是搜索),并且去掉了导航栏中多余的选项。这给了用户一个单独的地方来查找项目,而不用在两个菜单中寻找搜索的地方。
棒棒糖还给了应用让状态栏透明的能力。这让操作栏的颜色可以渗透到状态栏,让它只比周围的界面暗一点点。一些界面甚至在顶部使用了全幅英雄图片,同时显示到了状态栏上。
[
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1-980x481.jpg)
][38]
谷歌日历完全重写了,获得了很多新设计,也失去了很多特性。你不再能够双指缩放来调整时间视图,月份视图也从手机上消失了,周视图从七天退化成了五天的视图。在用户抱怨之后,谷歌将会花费接下来几个版本的时间来重新添加回这里面的一些特性。“谷歌日历”还加强了“谷歌”部分,去除了直接在应用内添加第三方账户的能力。非谷歌账户现在需要从 Gmail 来添加。
尽管如此,它看起来还是很棒。在一些视图上,月份开头带有头图,就像是真实的纸质日历。带有地点的事件会附带显示来自那个地点的照片。举个例子,我的“去往旧金山”会显示金门大桥。谷歌日历还会从 Gmail 获取事件并在你的日历中显示。
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/7-150x150.jpg)
][9]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/8-150x150.jpg)
][10]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/12-150x150.jpg)
][11]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/13-150x150.jpg)
][12]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/3-1-150x150.jpg)
][13]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/14-150x150.jpg)
][14]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/6-2-150x150.jpg)
][15]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/5-3-150x150.jpg)
][16]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/7-2-150x150.jpg)
][17]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/9-1-150x150.jpg)
][18]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/10-1-150x150.jpg)
][19]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/28-1-150x150.jpg)
][20]
其它应用都可以套用基本相同的描述:功能上没有太多新鲜的,但新设计换掉了奇巧中的灰色以大胆,明亮的色彩。环聊获得了收取 Google Voice 信息的能力,时钟应用的背景颜色会随着每天时间的变化而改变。
#### 任务调度器鞭策应用生态成型
谷歌决定在棒棒糖中实施“伏特计划Project Volta关注电量使用问题。谷歌从“电池史学家Battery Historian”开始为自己和开发者创建了更多的电池追踪工具。这个 python 脚本获取所有的安卓电量日志数据,并转换成可读,交互式的图表。在这个新诊断工具的帮助下,谷歌将后台任务标记为主要的耗电大户。
在 2014 年的 I/O 大会上,这家公司注意到启用飞行模式并关闭屏幕可以让安卓手机待机将近一个月。但是,如果用户全部启用并使用设备,它们没法坚持一整天。结论就是如果你能让一切都停止活动,你的电池表现就能好得多。
因此,谷歌创建了一个新 API称作“JobScheduler任务调度器这是个新的针对安卓后台任务的警察。在任务调度器出现之前每个单独的应用为它自己的后台进程负责这意味着每个应用会独立唤醒处理器和调制解调器检查连通性、组织数据库、下载更新以及上传日志。所有东西都有它自己独立的定时器所以你的手机会一直被唤醒。有了任务调度器后台任务从无组织的混乱转变为统一的批处理有了有序的后台处理窗口。
任务调度器可以让应用指定它们的任务所需的条件连通性、Wi-Fi、接入电源等等它会在那些条件满足的时候发送一条通知。这就像是推送邮件和每五分钟检查一次邮件的区别……只不过针对的是任务条件。谷歌还开始给后台任务推行一种“懒”实现。如果一些事情可以推迟到设备处于 Wi-Fi接入电源以及待机状态那它就应该等到那时候执行。你现在可以看到这一策略的成果在 Wi-Fi 下将安卓手机接入电源并且只有在_这种条件下_它才会开始下载应用更新。你通常不需要立即下载应用更新最好是等到用户有不受限的电源和网络时再进行。
#### 开机设置获得面向未来的新设计
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/25-1-150x150.jpg)
][21]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/26-150x150.jpg)
][22]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup2-150x150.jpg)
][23]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup3-150x150.jpg)
][24]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup4-150x150.jpg)
][25]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup5-150x150.jpg)
][26]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/10/setup6-150x150.png)
][27]
开机设置经过了大翻新,它不止是为了跟上 Material Design 指南,还是“面向未来”的,这样不管未来谷歌采用什么新的登录和验证方案,它都能够适应。记住,写“安卓编年史”的部分原因就是一些旧版安卓已经不再能工作了。这些年来,谷歌已经为用户升级了更强加密的验证方案以及两步验证,但添加这些新的登录要求破坏了旧客户端的兼容性。很多安卓特性要求访问谷歌云设施,所以你没法登录的话,像安卓 1.0 的 Gmail 这样的就没法工作了。
在棒棒糖中,开机设置工作的前几个界面和之前的很像。你可以看到“欢迎使用安卓界面”以及一些设置数据和 Wi-Fi 连接的选项。但在这个界面之后就有了变化。一旦棒棒糖连接到了互联网它会连接到谷歌的服务器来“检查更新”。这并不是检查系统或应用的更新是在检查即将执行的设置工作的更新。安卓下载了最新版本的设置_然后_它会要求你登录你的谷歌账户。
在今天登录进棒棒糖和奇巧的时候这个好处很明显。有了可升级的设置流程“2014 年”的棒棒糖系统可以适应 2016 年的改进,像是谷歌新的“[触碰登录][39]”双重认证。奇巧在这就卡住了,但幸运的是它有个“浏览器登录”可以解决双重认证的问题。
棒棒糖的开机设置坚持把你的谷歌账户和密码分别放在单独的页面上。[谷歌讨厌密码][40],并提供了一些[实验性的方式][41],让你不用密码也能登录谷歌。如果你的账户设置为不使用密码,棒棒糖可以跳过密码页面。如果你设置了双重认证,设置页面就会进入到“输入双因素码”的设置流程。每个登录部分都是在单独的一个页面,所以设置流程是模块化的。页面可以随要求添加或移除。
开机设置还给了用户对应用还原的控制。安卓在这之前也提供了一些数据还原,但那是让人难以理解的,因为它仅仅只是在没有任何用户输入的情况下选择你的一台设备并开始恢复。开机设置流程中的一个新界面让用户可以看到在云端的设备配置集合,并选择合适的那个。你还可以选择要从那个备份中还原哪些应用。备份的内容包括应用、你的主屏布局,以及一些小设置(如 Wi-Fi 热点),但它并不是完整的应用数据备份。
#### 设置
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/29-1-150x150.jpg)
][28]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/settings-1-150x150.jpg)
][29]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2014-11-11-16.45.47-150x150.png)
][30]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/battery-150x150.jpg)
][31]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/user1-150x150.jpg)
][32]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/users2-150x150.jpg)
][33]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/30-1-150x150.jpg)
][34]
* [
![](https://cdn.arstechnica.net/wp-content/uploads/2016/10/31-150x150.jpg)
][35]
设置从暗色主题切换到了亮色。除了新外观,它还有方便的搜索功能。每个界面用户都能访问放大镜,让他们可以更容易地找到难找的选项。
这里有一些和伏特计划有关的设置。“网络限制”允许用户将一个 Wi-Fi 连接标记为计费的让任务调度器在处理后台任务时避开它。同时作为伏特计划的一部分添加了一个“节电模式”。它会限制后台任务并限制 CPU 性能,给你更长的续航,但设备也会更慢。
多用户支持已经出现在安卓平板中有一段时间了,但棒棒糖终于将它带到了安卓手机上。设置界面添加了一个新的“用户”页面,让你添加额外的账户或设置一个“访客”账户。访客账户是临时的——它们可以一次点击轻松删除。它不会像正常账户那样尝试下载关联到你账户的每个应用,因为它注定要在不久后被删除。
--------------------------------------------------------------------------------
作者简介:
Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/
作者:[RON AMADEO][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://arstechnica.com/author/ronamadeo
[1]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[2]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[3]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[4]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[5]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[6]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[7]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[8]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[9]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[10]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[11]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[12]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[13]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[14]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[15]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[16]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[17]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[18]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[19]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[20]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[21]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[22]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[23]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[24]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[25]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[26]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[27]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[28]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[29]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[30]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[31]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[32]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[33]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[34]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[35]:http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/#
[36]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg
[37]:http://arstechnica.com/author/ronamadeo/
[38]:https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg
[39]:http://arstechnica.com/gadgets/2016/06/googles-new-two-factor-authentication-system-tap-yes-to-log-in/
[40]:https://www.theguardian.com/technology/2016/may/24/google-passwords-android
[41]:http://www.androidpolice.com/2015/12/22/google-appears-to-be-testing-a-new-way-to-log-into-your-account-on-other-devices-with-just-your-phone-no-password-needed/

View File

@ -1,7 +1,7 @@
The (updated) history of Android
安卓编年史
============================================================
> Follow the endless iterations from Android 0.5 to Android 7 and beyond.
> 让我们跟着安卓从 0.5 版本到 7 的无尽迭代来看看它的发展历史。
### Android TV
@ -34,13 +34,13 @@ The (updated) history of Android
![](https://cdn.arstechnica.net/wp-content/uploads/2014/11/device-2014-10-31-174128-150x150.png)
][10]
November 2014 saw Android continue its march to take over everything with a screen as Google unveiled Android TV. A division inside the company had previously tried to take over the living room with Google TV during the Honeycomb era, but this was a total reboot of the idea directly from the Android team. Android TV took Android 5.0 Lollipop and gave it a Material Design interface purpose-built for the biggest screen in the house. For launch hardware, Google tapped Asus to build the "Nexus Player," an underpowered-but-versatile set top box.
2014 年 11 月谷歌公布了安卓 TV安卓继续它征服一切带屏幕设备的征程。这家公司里的一个部门之前在蜂巢时代尝试过用谷歌 TV 掌控客厅,但这次完全是来自安卓团队的新点子。安卓 TV 使用安卓 5.0 棒棒糖,并给了它一个为家里最大屏幕设计的 Material Design 界面。首发硬件谷歌选择了华硕来代工“Nexus Player”这是一个配置不足但用途多样的机顶盒。
Android TV was really about three things: video, music, and games. You controlled the TV with a tiny remote consisting only of a D-Pad with 4 buttons: Back, Home, Microphone, and Play/Pause. For games, Asus simply cloned the Xbox 360 controller, giving players a million buttons and a pair of analog sticks.
安卓 TV 专注于三样东西:视频,音乐,以及游戏。你可以用一个小遥控器控制电视,它只有四向方向键以及四个按钮,后退、主页、麦克风以及播放/暂停。至于游戏,华硕克隆了一个 Xbox 360 手柄,给了玩家一堆按键和一对摇杆。
The interface was pretty simple. Large horizontally-scrolling media thumbnails occupied the screen, filling the TV with content from YouTube, Google Play, Netflix, Hulu, and other sources. Instead of soiling everything in an app, the thumbnails were actually "recommended" items from many different content sources. Below that you could directly access the apps and settings.
安卓 TV 的界面很简单。大幅的媒体略缩图占据了屏幕,可以横向滚动,这些媒体中有 Youtube、Google Play、Netflix、Hulu 以及其它来源。这些略缩图实际上是来自不同媒体源的“推荐”项目,它不是简单地将一个应用的内容填满屏幕。在那下面你可以直接访问应用和设置。
The voice interface was great. You could ask Android TV to play whatever you wanted, instead of hunting it down through the GUI. You could also run clever search results on content, like "show me movies with Harrison Ford." And instead of app silos, every app could provide content to the indexing service. All these apps were housed in a TV-version of the Play Store. Developers specifically supporting Android TV devices also supported the Google cast protocol, allowing users to beam videos and music from their phones and tablets to the TV.
语音界面很赞。你可以告诉安卓 TV 播放任意你想要的内容,而不用通过图形界面去寻找。你还能在内容里获得更聪明的搜索结果,比如“显示和 Harrison Ford 有关的电影”。每个应用都可以给索引服务提供内容,而不是简单的应用集合。所有的这些应用都在 Play 商店有一个 TV 版本。开发者对安卓 TV 的特别支持还包括谷歌 cast 协议,允许用户从他们的手机和平板向电视投射视频和音乐。
### Android 5.1 Lollipop
@ -108,13 +108,13 @@ The voice interface was great. You could ask Android TV to play whatever you wan
![](https://cdn.arstechnica.net/wp-content/uploads/2015/03/play-store-150x150.jpg)
][31]
Android 5.1 came out in March 2015 and was the tiniest of updates. The goal here was mainly to [fix encryption performance][43] on the Nexus 6, along with adding device protection and a few interface tweaks.
安卓 5.1 在 2015 年 3 月发布,这是安卓最小的更新。它的目的主要是[修复 Nexus 6 上的加密性能问题][43],还添加了设备保护和一些小的界面调整。
Device protection's only UI addition took the form of a new warning during setup. The feature offered to "Protect your device from reuse" if it was stolen. Once a lock screen was set, device protection would kick in, and could be triggered during a device wipe. If you wiped the phone the way an owner normally would—by unlocking the phone and picking "reset" from the settings—nothing would happen. If you wipe the phone through developer tools though, the device would demand that you "verify a previously-synced Google Account" during the next setup.
设备保护是唯一的新增界面,采用的是在开机设置显示新警告的形式。这个特性在设备被偷了之后“保护你的设备不被再次利用”。一旦设置了锁屏,设备保护就开始介入,并且会在擦除设备的时候被触发。如果你以机主正常的方式擦除设备——解锁手机并从设置选择“重置”——什么都不会发生。但如果你通过开发者工具擦除,设备会在下次开机设置的时候要求你“验证之前同步的谷歌账户”。
The idea was that a developer would know the pervious Google credentials on the device, but a thief would not so they'd be stuck at setup. In practice this triggered [a cat and mouse game][44] of people finding exploits that get around device protection, and Google getting word of the bug and patching it. Software features added by OEM skins also introduced fun new bugs to get around device protection.
这个想法是基于开发者是会知道之前登录的谷歌帐号凭证的,但小偷就不知道了,他们会卡在设置这一步。在现实中这引发了[一个猫鼠游戏][44],人们寻找漏洞来绕过设备保护,而谷歌知道了这个 bug 并修补它。OEM 定制也引入了一些有趣的 bug 来绕过设备保护。
There was also a whole host of extremely minor UI changes that we have dutifully cataloged in the gallery, above. There's not much to say about them beyond the captions.
还有很多特别微小的界面改动,我们没法一一列在上面的图中。除了上面的图片描述之外没什么可说的了。
### Android Auto
@ -149,21 +149,21 @@ There was also a whole host of extremely minor UI changes that we have dutifully
![](https://cdn.arstechnica.net/wp-content/uploads/2015/07/IMG_3594-150x150.jpg)
][41]
Also in March 2015, Google launched "Android Auto," a new Android-inspired interface for car infotainment systems. Android Auto was Google's answer to Apple's CarPlay and worked much the same way. It wasn't a full operating system—it's a "casted" interface that runs on your phone and uses the car's built-in screen as an external monitor. Running Android Auto means having a compatible car, installing the Android Auto app on your phone (Android 5.0 and above), and hooking the phone up to the car with a USB cable.
同样是在 2015 年 3 月,谷歌发布了“安卓 Auto”一个基于安卓界面的全新车载娱乐信息系统。安卓 Auto 是谷歌面对苹果的 CarPlay 交出的答卷,它们很多地方都很相似。安卓 Auto 不完全是个操作系统——它是一个运行在你手机上的“投射”界面,使用车载显示屏作为一块外置显示器。运行安卓 Auto 意味着拥有一款兼容的汽车,在手机上(安卓 5.0 及以上版本)安装了安卓 Auto 应用,并用 USB 线将手机连接到汽车。
Android Auto brought Google's Material Design interface to your existing infotainment system, bringing top-tier software design to a platform that [typically struggles][45] with designing good software. Android Auto was a ground up redesign of the Android interface made specifically to comply with the myriad of infotainment regulations around the world. There was no tradition "home screen" full of app icons, instead Android's navigation bar was changed into an always-on app launcher (almost like a tabbed interface).
安卓 Auto 给你已有的车载系统带来了谷歌的 Material Design 界面,给这个[通常挣扎于][45]设计好软件的平台带来了顶级的软件设计。安卓 Auto 是个对安卓界面的完全重新设计,以遵循世界各地对车载系统无数的规定。它没有通常充满应用图标的“主屏”,安卓的导航栏也变为了一个常驻的应用启动器(几乎像是个标签页式的界面)。
The paired down feature set only really had four sections, from left to right on the navigation bar: Google Maps, a dialer/contacts screen, the "home" section that was a hybrid of Google Now and a notification panel, and a music page. The last button was an "OEM" page that let you exit Android Auto and return to the stock infotainment system (it was also meant to eventually house custom car manufacturer features). There was Google's voice command system, which took the form of a microphone button on the top right of the screen.
算下来特性实际上只有四部分,导航栏从左到右是:谷歌地图,一个拨号/联系人界面,“主屏”部分混合了 Google Now 和一个通知面板还有一个音乐页面。最后一个按钮是一个“OEM”页面让你可以退出安卓 Auto返回到自带的车载系统这也是为了放置汽车制造商的定制特性。安卓 Auto 还带有谷歌的语音命令系统,以一个麦克风按钮的形式显示在屏幕右上角。
There wasn't much in the way of apps for Android Auto. Only two categories were allowed: music and messaging apps. Infotainment regulations meant customizing the UI wasn't really an option. Messaging apps had no interface and could just plug into the voice system, and music apps couldn't change the interface much, only tweaking the colors and iconography of Google's default "music app" template. What really mattered was delivering the music and messages though, and apps could do that.
安卓 Auto 的应用没有多少。它只允许两个类别的应用:音乐和信息应用。车载信息娱乐系统的规定意味着自定义界面不是个好选择。信息应用没有界面,并且可以接入语音系统,音乐应用也不会太多地改变界面,仅仅只是调整一下谷歌默认的“音乐应用”模板的颜色和图标。但实际上重要的是音乐和消息的送达,在不让驾驶员太多分心的情况下,一般的应用就没法使用了。
Android Auto hasn't seen much in the way of updates after its initial launch, but it has seen a ton of car manufacturer support. In 2017, there will be [over 100][46] compatible vehicle models.
安卓 Auto 在它的最初发布之后就没看到多少更新了,但已经逐渐有很多汽车制造商支持了。到了 2017 年,将会有[超过 100][46] 款支持的汽车型号。
--------------------------------------------------------------------------------
作者简介:
Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
--------------------------------------------------------------------------------
@ -171,7 +171,7 @@ Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS an
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/30/
作者:[RON AMADEO][a]
译者:[译者ID](https://github.com/译者ID)
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,82 +0,0 @@
”硅谷的女儿“的天才故事
=======================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
在 2014 年,为了对网上一些关于女性在科技行业的缺失的评论作出回应,我的同事 [Crystal Beasley][1] 发起了一项活动,让在科技/信息安全领域工作的女性在网络上分享自己的“天才之路”。这篇文章就是我的故事。我把我的故事与你们分享,是因为我相信榜样的力量,我也相信一个人可以通过很多方式,获得一份让自己满意且有挑战性的工作,以及一个实现了自身目标的人生。
### 和电脑相伴的童年
我,在其他的光环之下,是硅谷的女儿。我的故事不是一个观众变成舞台的主角的故事。也不是从小就为这份事业做贡献的故事。这个故事更多的是关于环境如何塑造你 — 通过它的那种已然存在的文化来改变你,如果你想要被改变的话。这不是一个从小就开始努力并为一个明确的目标而奋斗的故事,我知道这是关于特权的故事。
我出生在曼哈顿但是我在新泽西州长大因为我的爸爸作为一个退伍军人在那里的罗格斯大学攻读计算机科学的博士学位。当我四岁时学校里有人问我我的爸爸干什么谋生时我说“他就是看电视和捕捉小虫子但是我从没有见过那些小虫子”译者注小虫子bug。他在家里有一台哑终端这大概与他在博尔特-贝拉尼克-纽曼公司的工作有关,他会通过早期的互联网来进行它在人工智能方面的工作。我就在旁边看着。
我没能玩上父亲的会抓小虫子的电视,但是我很早就接触到了技术领域,我很珍惜这个礼物。提早的熏陶对于一个未来的天才是十分必要的 — 所以,请花时间和你的小孩谈谈他以后要做什么!
![](https://opensource.com/sites/default/files/resize/moss-520x433.png)
>我父亲的终端和这个很类似——如果不是这个的话 CC BY-SA 4.0
当我六岁时我们搬到了加州。父亲在施乐的研究中心找到了一个工作。我记得那时我认为这个城市一定有很多熊因为在它的旗帜上都有一个熊。在1979年帕洛阿尔托还是一个大学城还有果园和开阔地带。
在帕洛阿尔托的公立学校待了一年之后,我的姐姐和我被送到了“半岛学校”,这个“模范学校”对我造成了深刻的影响。在那里,好奇心和创新意识是被推崇的,教育也是有学生自己决定的。我们很少在学校看到能叫做电脑的东西,但是在家就不同了。
在父亲从施乐辞职之后他就去了苹果在那里他帮助研发——以及带回家让我玩的第一批电脑就是Apple II 和 LISA。我的父亲在原先的 LISA 的研发团队。我直到现在还深刻的记得他让我们一次又一次的“玩鼠标”场景,因为他想让我的 3 岁大的妹妹对这个东西感到舒服——她也确实那样。
![](https://opensource.com/sites/default/files/resize/600px-apple_lisa-520x520.jpg)
>我们的 LISA 看起来就像这样看到鼠标了吗CC BY-SA 4.0
在学校,我的数学的概念学得不错,但是基本计算却惨不忍睹。我的第一个学校的老师告诉我的家长,还有我,说我的数学很差以及我很“笨”。虽然我在“常规的”数学项目中表现出色,能理解一个 7 岁的孩子能理解的逻辑谜题,但是我不能完成我们每天早上都要做的“练习”。她说我傻,这事我不会忘记。在那之后的十年我都没能相信自己的逻辑能力和算法的水平。不要低估你对孩子说的话的力量。
在我玩了几年爸爸的电脑之后,他从苹果跳到了 EA 又跳到了 SGI我又体验了他带回来的新玩意。这让我们认为我们家的房子是镇里最酷的因为我们在车库里有一个能玩 Doom 的 SGI 的机器。我不会太多的编程,但是现在我发现,在那些年里我对尝试新的科技不再恐惧。同时,我的学文学和教育的母亲,成为了一个科技行业的作家,她向我证实了一个人的职业可以改变以及科技行业的人也可以做母亲。我不是说这对她来说很简单,但是她让我认为这件是看起来很简单。你可能回想这些早期的熏陶能把我带到科技行业,但是它没有。
### 本科时光
我想我要成为一个小学教师,我就读米尔斯学院就是想要做这个。但是后来我开始学习女性研究,后来又学习神学,我这样做仅仅是由于我自己的一个渴求:我希望能理解人类的意志,以及为更好的世界而努力。
同时,我也感受到了互联网的巨大力量。在 1991 年,拥有你自己的 UNIX 的账户是很令人高兴的事,这件事值得你向全世界的人吹嘘。我仅仅从在互联网中“玩”就学到了不少,从那些愿意回答我提出的问题的人那里学到的就更多了。这些学习对我的职业生涯的影响,不亚于我在学校教育中学到的知识。没有哪条信息是没用的。我在一个女子学院度过了影响我一生的关键时期,然后那个女子学院的一个辉煌的女人执掌了计算机院系,我不认为这是一个巧合。在那个老师的权力不算太大的学院,我们不止是被允许,甚至是被鼓励去尝试很多的道路(我们能接触到很多很多的科技,还能有聪明人来供我们求助),我也确实那样做了。我十分感激当年的教育。在那个学院,我也了解了什么是极客文化。
之后我去了研究生院去学习 女权主义神学,但是技术行业的气息已经渗入我的灵魂。当我知道我不能成为一个教授或者一个专家时,我离开了学术圈,带着债务和很多点子回到了家。
### 新的开端
在 1995 年,我被我看见的万维网连接人们以及分享想法和信息的能力所震惊(直到现在仍是如此)。我想要进入这个行业。看起来我好像要“女承父业”,但是我不知道我会用什么方式来这样做。我开始在硅谷做临时工,在我在太阳微系统公司得到我的第一个“技术”职位前做一些事情(往最基础的数据库里录入数据,技术手册印发前的事务,备份工资单的存根)。这些事很让人激动。(毕竟,我们是“点 com”的那个”点“
在 Sun ,我努力学习,尽可能多的尝试我新事物。我的第一个工作是网页化(啥?这是一个单独的词汇)论文以及为测试中的 Solaris 修改一些基础的服务工具大多数是Perl写的。在那里最终在 Open Solaris 的测试版运行时我感受到了开源的力量。
在那里我学到了一个很重要的事情。我发现在同样重视工程和教育的地方有一种气氛,在那里我的问题不再显得“傻”。我很庆幸我选对了导师和朋友。在决定为第二个孩子的出生产假之前,我上每一堂我能上的课程,读每一本我能读的书,尝试自学我在学校没有学习过的技术,商业以及项目管理方面的技能。
### 重回工作
当我准备重新工作时Sun 已经不是一个值得回去的地方。所以我收集了很多人的信息网络是你的朋友利用我的沟通技能最终建立了一个互联网门户2005 年时,一切皆门户),并且开始了解 CRM发布产品的方式本地化网络等知识。我这么做是基于我过去的尝试以及失败的经历所得出的教训也是这个教训让我成功。我也认为我们需要这个方面的榜样。
从很多方面来看,我的职业生涯的第一部分是 我的技术上的自我教育。这事发生的时间和地点都和现在不一样——我在帮助女性和其他弱势群体的组织工作,但是我之后成为一个技术行业的女性。当时我无疑,没有看到这个行业的缺陷,现在这个行业更加的厌恶女性,而不是更加喜欢她们。
在这些事情之后我还没有把自己当作一个榜样或者一个高级技术人员。当我的一个在父母的圈子里认识极客朋友鼓励我申请一个看起来定位十分模糊且技术性很强的开源的非盈利基础设施商店互联网系统协会BIND一个广泛部署的开源服务器的开发商13 台 DNS 根域名服务器之一的运营商)的项目经理时,我很震惊。有很长一段时间,我都不知道他们为什么要雇佣我!我对 DNS ,基础设备,以及协议的开发知之甚少,但是我再次遇到了老师,并再度开始飞速发展。我花时间旅行,在关键流程攻关,搞清楚如何与高度国际化的团队合作,解决麻烦的问题,最重要的是,拥抱支持我们的开源和充满活力的社区。我几乎重新学了一切,通过试错的方式。我学习如何构思一个产品。如何通过建设开源社区,领导那些有这特定才能,技能和耐心的人,是他们给了产品价值。
### 成为别人的导师
当我在 ISC 工作时,我通过 [TechWomen 项目][2] (一个让来自中东和北非的技术行业的女性带到硅谷来接受教育的计划),我开始喜欢教学生以及支持那些女性,特别是在开源行业中奋斗的。这也就是我开始相信自己的能力的开端。我还需要学很多。
当我第一次读 TechWomen 的广告时,我认为那些导师甚至都不会想要和我说话!我有冒名顶替综合征。当他们邀请我成为第一批导师(以及以后 6 年的导师)时,我很震惊,但是现在我学会了相信这些都是我努力得到的待遇。冒名顶替综合征是真实的,但是它能被时间冲淡。
### 现在
最后,我不得不离开我在 ISC 的工作。幸运的是,我的工作以及我的价值让我进入了 Mozilla ,在这里我的努力和我的幸运让我在这里有着重要的作用。现在,我是一名支持多样性的包容的高级项目经理。我致力于构建一个更多样化,更有包容性的 Mozilla ,站在之前的做同样事情的巨人的肩膀上,与最聪明友善的人们一起工作。我用我的激情来让人们找到贡献一个世界需要的互联网的有意义的方式:这让我兴奋了很久。我能看见,我做到了!
通过对组织和个人行为的干预来用一种新的方法来改变一种文化这件事情和我的人生有着十分奇怪的联系 —— 从我的早期的学术生涯,到职业生涯再到现在。每天都是一个新的挑战,我想我最喜欢的就是在科技行业的工作,尤其是在开放互联网的工作。互联网天然的多元性是它最开始吸引我的原因,也是我还在寻求的——一个所有人都有获取的资源可能性,无论背景如何。榜样,导师,资源,以及最重要的,对不断发展的技术和开源文化的尊重能实现我相信它能实现的事 —— 包括给任何的平等的接入权和机会。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/5/my-open-source-story-larissa-shapiro
作者:[Larissa Shapiro][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/larissa-shapiro
[1]: http://skinnywhitegirl.com/blog/my-nerd-story/1101/
[2]: https://www.techwomen.org/mentorship/why-i-keep-coming-back-to-mentor-with-techwomen

View File

@ -0,0 +1,84 @@
什么是SRE网站可靠性工程
============================================================
网站可靠性工程师是近来越来越多看到的一个职位。它是什么意思?它来自哪里?让我们从 Google SRE 团队来学习。
![Bridge](https://d3tdunqjn7n0wj.cloudfront.net/360x240/bridge-1031545-1400-389c9609ff7c64083c93db48dc77eeff.jpg)
这是一篇摘录,来自由 Niall Richard Murphy、Jennifer Petoff、Chris Jones、Betsy Beyer 编辑的[《网站可靠性工程》][9]一书。
网站可靠性工程也在[11月7-10日在阿姆斯特丹举办的 O'Reilly Velocity 会议][10]上有提到。
### 介绍
> 希望不是一种策略。
>
> 一句传统的 SRE 俗语
一个公认的事实是系统不会自己运行。那么一个系统特别是大规模运行的复杂计算系统_应该_怎么运行呢
### sysadmin 服务管理方法
sysadmin 服务管理模型有几个优点。对于决定该如何运行某项服务并为其配备人手的公司而言,这种方法相对容易实现;作为一个业内熟悉的范例,有很多先例可以学习和效仿。相关的人才库也已经非常庞大。现成的工具、软件组件(现成的或其他形式的)以及系统集成公司可以帮助运行这些组装起来的系统,所以新手 sysadmin 团队不必重新发明轮子、从头设计系统。
因此,传统运维团队及其产品开发部门的同行往往会发生冲突,最突出的是软件能以多快的速度发布到生产环境。开发团队的核心诉求,是推出新功能并看到它们被用户采用;运维团队的核心诉求,则是确保服务在他们值守期间不发生中断。因为大多数中断都是由某种变更引起的,比如新的配置、新的功能发布或者新类型的用户流量,这两个团队的目标从根本上处于紧张状态。
两个团队都明白,用最直白的措辞(“我们想在任何时间、没有任何阻碍地发布任何东西”以及“一旦系统能正常工作,我们就不想再做任何改动”)来表达各自的利益是不可接受的。由于他们的词汇和对风险的假设都不同,两个团队经常采用熟悉的拉锯方式来争取各自的利益。运维团队试图通过引入发布和变更的审批关卡,来保护运行中的系统免受变更的风险。例如,发布审查可能包含对过去_曾经_引起过服务中断的_每一个_问题的显式检查这可能是一个任意长的列表而且并非所有检查项都有同等的价值。开发团队很快学会了如何应对他们减少“发布”的次数转而更多地使用“功能开关翻转”、“增量更新”或“cherrypick”他们还采取诸如将产品功能切分的策略让更少的功能需要经过发布审查。
### Google 服务管理的方法:网站可靠性工程
冲突不是提供软件服务的必然部分。Google 选择以不同的方式运行我们的系统我们的网站可靠性工程团队专注于雇佣软件工程师来运行我们的产品并创建系统来完成那些本来由sysadmins手动完成的工作。
那么到底什么是网站可靠性工程它在谷歌又是如何定义的我的解释很简单SRE 就是当你让一名软件工程师来设计一个运维团队时所得到的结果。当我在 2003 年加入 Google 并负责运行一个由 7 名工程师组成的“生产团队”时,我此前的全部工作经历都是软件工程。所以我按照“假如我自己是一名 SRE_我_会想要它是什么样子”来设计和管理这个团队。这个团队后来成为了 Google 如今的 SRE 团队,它仍然保持着一名终生软件工程师当初所设想的样子。
Google 服务管理方法的主要基石是每个 SRE 团队的人员构成。总体来看SRE 可以分为两大类人。
50%-60% 的人是 Google 软件工程师,或者更确切地说,是通过 Google 软件工程师标准流程招聘进来的人。其余 40%-50% 的候选人非常接近 Google 软件工程师的资格要求(即具备所需技能集的 85%-99%),并且还具备大多数软件工程师所没有的、对 SRE 有用的某些技术技能。到目前为止UNIX 系统内部机制和网络(第 1 层到第 3 层)方面的专业知识,是我们寻求的两种最常见的替代性技术技能。
所有 SRE 的共同点是对开发软件系统以解决复杂问题的信念和能力。在 SRE 中我们密切跟踪两个团队的职业发展并且迄今为止发现在两种工程师之间的表现没有实际差异。事实上SRE 团队的多样背景经常产生聪明、高质量的系统,这显然是几个技能集合成的产物。
我们这样招聘 SRE 的结果是我们得到了这样一个团队a他们很快就会对手动执行任务感到厌倦b他们具备编写软件来取代原先手动操作所需的技能即使解决方案很复杂。SRE 还与开发团队的其他成员拥有相同的学术和知识背景。因此SRE 从根本上做的是运维团队历来在做的工作,但使用的是具备软件专业知识的工程师,并且依靠的正是这些工程师天生倾向于、也有能力用软件设计和实现自动化来替代人力劳动这一点。
按照设计,至关重要的一点是 SRE 团队必须专注于工程。如果没有持续的工程投入,运维负担就会不断增加,团队将需要更多的人手来应付工作量。最终,传统的以运维为中心的团队规模会与服务规模呈线性增长:如果服务所支撑的产品很成功,运维负担就会随流量一起增长,这意味着要雇用更多的人一遍又一遍地完成相同的任务。
为了避免这种命运负责管理服务的团队需要写代码否则就会被工作淹没。因此Google 给所有 SRE 设置了一个上限“运维”类工作如工单、紧急呼叫、手动任务_最多只能占用其 50% 的时间_。这个上限确保 SRE 团队在日程中留有足够的时间让服务稳定、可运维。50% 只是上限随着时间的推移SRE 团队应该只剩下很少的运维工作几乎完全从事开发任务因为服务基本上可以自行运行和自我修复我们想要的系统是_自动的_而不只是_自动化的_。在实践中规模的扩大和新功能的加入始终是 SRE 需要考虑的问题。
Google 的经验法则是SRE 团队必须把剩余 50% 的时间花在实际的开发工作上。那么我们该如何执行这个阈值呢?首先,我们必须测量 SRE 的时间都花在了哪里。有了测量数据之后,如果发现某个团队持续把不到 50% 的时间用于开发,我们就会确保其改变工作方式。通常这意味着将一部分运维负担转移回开发团队,或者在不给团队增加额外运维职责的前提下补充人手。有意识地在运维和开发工作之间保持这种平衡,使我们能够确保 SRE 有空间从事有创造性的、自主的工程工作,同时仍然保留从运维中学到的智慧。
我们发现Google SRE 这种运行大规模系统的方法有很多优点。由于 SRE 直接通过修改代码来让 Google 的系统自行运转SRE 团队的特点是既能快速创新也能坦然接受大量变更。这样的团队能以相对低的成本支撑同样的服务而以运维为中心的团队则需要多得多的人手运行、维护和改进一个系统所需的 SRE 数量随系统规模呈亚线性增长。最后SRE 不仅规避了开发/运维割裂带来的问题,这种结构还改善了我们的产品开发团队:产品开发团队和 SRE 团队之间的轻松互换可以对双方进行交叉培训,并且提升了那些原本很难有机会学习构建百万级规模分布式系统的开发人员的技能。
尽管有这些好处SRE 模型也有其自身独特的挑战。Google 面临的一个持续挑战就是招聘 SRESRE 不仅要与产品开发岗位竞争同一批候选人,而且我们对应聘者的编码和系统工程技能要求都定得很高,这意味着可供招聘的人才池必然很小。由于这一学科相对新颖独特,在如何建立和管理 SRE 团队方面业界可参考的信息不多(尽管希望这本书能朝着这个方向迈出一步!)。一旦 SRE 团队组建起来,他们那些可能不合常规的服务管理方法就需要强有力的管理层支持。例如,一旦错误预算耗尽,在季度剩余时间里停止发布的决定,如果没有管理层的背书,可能就不会被产品开发团队接受。
###### DevOps 或者 SRE
“DevOps” 这个术语在 2008 年末出现并在写这篇文章时2016 年早期仍在发生变动。其核心原则IT 部门参与系统设计和开发的每个阶段、高度依赖自动化而非人力投入、将工程实践和工具应用于运维任务,与许多 SRE 的原则和实践是一致的。可以把 DevOps 看作是将若干核心 SRE 原则推广到更广泛的组织、管理结构和人员中的结果;也可以等价地把 SRE 看作带有一些特殊扩展的 DevOps 的一种特定实现。
------------------------
作者简介Benjamin Treynor Sloss 创造了“网站可靠性工程”一词他自2003年以来一直负责 Google 的全球运营、网络和生产工程。截至2016年他管理着全球范围内一个大约4000名软硬件和网络工程师团队。
--------------------------------------------------------------------------------
via: https://www.oreilly.com/ideas/what-is-sre-site-reliability-engineering
作者:[Benjamin Treynor][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/benjamin-treynor-sloss
[1]:https://shop.oreilly.com/product/0636920053385.do
[2]:https://shop.oreilly.com/product/0636920053385.do
[3]:https://www.oreilly.com/ideas/what-is-sre-site-reliability-engineering
[4]:https://shop.oreilly.com/product/0636920053385.do
[5]:https://shop.oreilly.com/product/0636920053385.do
[6]:https://www.oreilly.com/people/benjamin-treynor-sloss
[7]:https://pixabay.com/
[8]:https://www.oreilly.com/people/benjamin-treynor-sloss
[9]:http://shop.oreilly.com/product/0636920041528.do?intcmp=il-webops-books-videos-update-na_new_site_site_reliability_engineering_text_cta
[10]:http://conferences.oreilly.com/velocity/devops-web-performance-eu?intcmp=il-webops-confreg-update-vleu16_new_site_what_is_sre_text_cta
[11]:https://pixabay.com/

View File

@ -0,0 +1,226 @@
使用 Ganglia 监控 Linux 网格和集群服务器
===========
自从系统管理员SA接手服务和主机管理以后监控类的工具就成了他们的好帮手其中比较有名的有 [Nagios][11]、[Zabbix][10]、[Icinga][9] 和 Centreon。对于一个新手 SA 来说,要设置这些重量级监控工具并使用其中的高级特性是非常困难的。
本文将向你介绍Ganglia它是一个容易扩展配置的监控系统。它可以查看服务器中的各项性能指标也可以实时图形化的展示集群配置。
[![Install Gangila Monitoring in Linux](http://www.tecmint.com/wp-content/uploads/2016/06/Install-Gangila-Monitoring-in-Linux.png)][8]
在Linux上安装Ganglia
Ganglia能够让集群和网格服务器更加容易管理。
我们可以远程创建一个包含所有主机的网格配置,其中的成员主机可以使用模板设置。
此外Ganglia对移动设备进行过页面优化排版非常人性化。当然你还可以导出`csv`和 `.json`格式的数据。
我们的测试环境包括一个安装Ganglia的主节点服务器CentOS7(IP 地址 192.168.0.29)和一个作为被监控端的Ubuntu 14.04主机 (192.168.0.32)。我们将通过Ganglia Web的页面来监控这台Ubuntu主机。
下面的例子可以给大家提供参考CentOS7作为主节点Ubuntu作为被监控对象。
### 安装和配置 Ganglia
请遵循以下步骤在主节点服务器安装监控工具。
#### 1. 启用 [EPEL repository][7] yum 源,然后安装 Ganglia 和相关工具:
命令如下
```
# yum update && yum install epel-release
# yum install ganglia rrdtool ganglia-gmetad ganglia-gmond ganglia-web
```
Ganglia将附加安装一些应用它们的功能如下
1. `rrdtool`轮转数据库Round-Robin Database一个用于存储数据并以图形方式展示数据随时间变化情况的工具
2. `ganglia-gmetad`,一个守护进程,用来收集被监控主机的数据。被监控主机与主节点主机上都需要安装监控守护进程 `ganglia-gmond` 本身
3. `ganglia-web` 提供Web前端用于显示监控系统的历史数据
#### 2. 使用Apache为Ganglia配置Web身份认证
如果你想了解更多的高级认证机制,请参阅[Authorization and Authentication][6]选择Apache部分。
为完成这部分的任务,我们需要用 Apache 来创建一个用户名和对应的密码。下面的例子中,我们先创建一个叫 `adminganglia` 的用户,然后给他分配一个密码,密码将被储存在 /etc/httpd/auth.basic 中(路径可以自选,但如果选择了 Apache 没有权限读取的目录,这项配置最终将会失败)。
```
# htpasswd -c /etc/httpd/auth.basic adminganglia
```
给adminganglia添加密码需要经过2次确认
#### 3. 修改配置文件 /etc/httpd/conf.d/ganglia.conf 
```
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
AuthType basic
AuthName "Ganglia web UI"
AuthBasicProvider file
AuthUserFile "/etc/httpd/auth.basic"
Require user adminganglia
</Location>
```
#### 4. 编辑 /etc/ganglia/gmetad.conf:
首先,使用 gridname 指令来设置网格的名称:
```
gridname "Home office"
```
然后,使用 data_source 指令指定集群名称、轮询间隔(秒),以及主节点和被监控节点:
```
data_source "Labs" 60 192.168.0.29:8649 # Master node
data_source "Labs" 60 192.168.0.32 # Monitored node
```
#### 5. 编辑 /etc/ganglia/gmond.conf.
a)确保集群的配置和下面的一样。
```
cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
```
b) 在 udp_send_channel 中注释掉 mcast_join 指令:
```
udp_send_channel {
# mcast_join = 239.2.11.71
host = localhost
port = 8649
ttl = 1
}
```
c) 在 udp_recv_channel 中注释掉 mcast_join 和 bind 指令:
```
udp_recv_channel {
# mcast_join = 239.2.11.71 ## comment out
port = 8649
# bind = 239.2.11.71 ## comment out
}
```
保存并退出
#### 6. 打开 8649/udp 端口,并设置 SELinux 布尔值,确保 PHP 脚本(经由 Apache 运行)能够进行网络连接:
```
# firewall-cmd --add-port=8649/udp
# firewall-cmd --add-port=8649/udp --permanent
# setsebool -P httpd_can_network_connect 1
```
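如果想确认上述改动已经生效,可以顺手检查一下(仅为示意):
```
# 列出 firewalld 当前放行的端口,应该能看到 8649/udp
firewall-cmd --list-ports
# 查看 SELinux 布尔值,应显示 httpd_can_network_connect --> on
getsebool httpd_can_network_connect
```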
#### 7. 重启 Apache、gmetad 和 gmond并将它们设为开机启动
```
# systemctl restart httpd gmetad gmond
# systemctl enable httpd gmetad gmond
```
至此我们现在能够打开并登录Ganglia的Web页面 `http://192.168.0.29/ganglia` 
[![Gangila Web Interface](http://www.tecmint.com/wp-content/uploads/2016/06/Gangila-Web-Interface.png)][5]
Ganglia Web 界面
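如果想先在命令行里验证 Apache 的认证配置是否生效,可以用 curl 做一个简单测试(仅为示意,用户名沿用上文创建的 adminganglia执行后会提示输入密码
```
# 返回 Ganglia 页面的 HTML 即说明认证和站点都已正常工作
curl -u adminganglia http://192.168.0.29/ganglia/
```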
#### 8. 在Ubuntu上安装Ganglia-monitor
```
$ sudo aptitude update && sudo aptitude install ganglia-monitor
```
#### 9. 编辑被监控主机的配置文件 /etc/ganglia/gmond.conf与主节点主机上是同一个文件。需要关注的是 cluster、udp_send_channel 和 udp_recv_channel 这几个部分,它们应该如下所示:
```
cluster {
name = "Labs" # The name in the data_source directive in gmetad.conf
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
udp_send_channel {
mcast_join = 239.2.11.71
host = localhost
port = 8649
ttl = 1
}
udp_recv_channel {
mcast_join = 239.2.11.71 ## comment out
port = 8649
bind = 239.2.11.71 ## comment out
}
```
之后重启服务
```
$ sudo service ganglia-monitor restart
```
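如果想确认新节点已经开始向主节点上报数据,除了刷新网页,也可以在主节点上直接查询 gmetad 输出的 XML仅为示意这里假设 gmetad 使用默认的 xml_port 8651并且主节点上装有 nc
```
# 输出中应能看到对应被监控主机名的 <HOST NAME="..."> 条目
nc localhost 8651 | grep -i 'HOST NAME'
```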
#### 10. 刷新页面你将看到各种状态以及图形化的展示集群或网格的配置情况(用下拉菜单选择我们想查看的集群或网格):
[![Ganglia Home Office Grid Report](http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Home-Office-Grid-Report.png)][4]
Ganglia中网格的报告
使用菜单按钮你可以选择组里面的节点主机,这将非常容易的获取到你感兴趣的信息。可以使用对比选项来查看集群中所有主机的信息。
当然你也可以使用正则表达式来快速对比一组主机
[![Ganglia Host Server Information](http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Server-Information.png)][3]
Ganglia Host Server Information
我个人觉得最吸引人的特性之一是对移动端友好的摘要页面,可以通过 Mobile 标签页访问。选择你感兴趣的集群,然后点击具体的主机即可。
[![Ganglia Mobile Friendly Summary View](http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Mobile-View.png)][2]
Ganglia 移动端截图
### 总结
本篇文章向大家介绍了Ganglia他是一个功能强大扩展性很好的监控工具主要用来监控集群和网格。它可以随意安装便捷的组合各种功能你甚至可以尝试一下官方提供的demo网站[official website][1])。
此时你可能会发现,许多知名的 IT 企业或许并没有把 Ganglia 作为他们的监控工具,他们有自己认为更合适的工具。即便如此,本文介绍的 Ganglia 可能仍是最容易上手的图形化监控工具之一(它可以在图示中显示对应主机的名字)。
但是请不要拘泥于本篇文章,尝试一下自己去做,不必犹豫不敢尝试。如果你有任何问题也欢迎给我留言。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-configure-ganglia-monitoring-centos-linux/
作者:[Gabriel Cánepa][a]
译者:[ivo-wang](https://github.com/ivo-wang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]:http://ganglia.info/
[2]:http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Mobile-View.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Server-Information.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/06/Ganglia-Home-Office-Grid-Report.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/06/Gangila-Web-Interface.png
[6]:http://httpd.apache.org/docs/current/howto/auth.html
[7]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[8]:http://www.tecmint.com/wp-content/uploads/2016/06/Install-Gangila-Monitoring-in-Linux.png
[9]:http://www.tecmint.com/install-icinga-in-centos-7/
[10]:http://www.tecmint.com/install-and-configure-zabbix-monitoring-on-debian-centos-rhel/
[11]:http://www.tecmint.com/install-nagios-in-linux/

Some files were not shown because too many files have changed in this diff Show More