Merge pull request #1 from LCTT/master

This commit is contained in:
Fuliang.Li 2016-12-29 10:02:28 +08:00 committed by GitHub
commit 7ec3675ddf
35 changed files with 3986 additions and 2032 deletions

View File

@ -3,29 +3,29 @@
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
在 2014 年,为了对网上一些关于在科技行业女性稀缺的评论作出回应,我的同事 [Crystal Beasley][1] 倡议在科技/信息安全方面工作的女性在网络上分享自己的“成才之路”。这篇文章就是我的故事。我把我的故事与你们分享是因为我相信榜样的力量,也相信一个人有多种途径,选择一个让自己满意的有挑战性的工作以及可以实现目标的人生。
### 和电脑相伴的童年
我可以说是硅谷的女儿。我的故事不是一个从科技业余爱好转向专业的故事,也不是从小就专注于这份事业的故事。这个故事更多的是关于环境如何塑造你 — 通过它的那种已然存在的文化来改变你,如果你想要被改变的话。这不是从小就开始努力并为一个明确的目标而奋斗的故事,我意识到,这其实是享受了一些特权的成长故事。
我出生在曼哈顿,但是我在新泽西州长大,因为我的爸爸退伍后,在那里的罗格斯大学攻读计算机科学的博士学位。当我四岁时,学校里有人问我爸爸是干什么谋生的,我说,“他就是看电视和捕捉小虫子,但是我从没有见过那些小虫子”LCTT 译注小虫子即 bug。他在家里有一台哑终端LCTT 译注:就是那台“电视”),这大概与他在 Bolt Beranek Newman 公司的工作有关,做的是关于早期互联网人工智能方面的工作。我就在旁边看着。
我没能玩上父亲的会抓小虫子的电视,但是我很早就接触到了技术领域,我很珍惜这个礼物。提早的熏陶对于一个未来的高手是十分必要的 — 所以,请花时间和你的小孩谈谈你做的事情!
![](https://opensource.com/sites/default/files/resize/moss-520x433.png)
*我父亲的终端和这个很类似 —— 如果不是这个的话 CC BY-SA 4.0*
当我六岁时,我们搬到了加州。父亲在施乐的帕克研究中心Xerox PARC找到了一份工作。我记得那时我认为这个城市一定有很多熊因为在它的旗帜上有一个熊。在 1979 年,帕洛阿图市还是一个大学城,还有果园和开阔地带。
在 Palo Alto 的公立学校待了一年之后,我的姐姐和我被送到了“半岛学校”,这个“民主范”学校对我造成了深刻的影响。在那里,好奇心和创新意识是被高度推崇的,教育也是由学生自己分组讨论决定的。在学校,我们很少能看到叫做电脑的东西,但是在家就不同了。
在父亲从施乐辞职之后,他就去了苹果公司,在那里他工作中使用并带回家让我玩的第一批电脑,就是 Apple II 和 LISA。我的父亲当时在最初的 LISA 研发团队中。我直到现在还深刻地记得他让我们一次又一次地“玩”鼠标训练的场景,因为他想让我 3 岁大的妹妹也能觉得这个东西好用 —— 她也确实那样。
![](https://opensource.com/sites/default/files/resize/600px-apple_lisa-520x520.jpg)
*我们的 LISA 看起来就像这样。谁看到鼠标哪儿去了CC BY-SA 4.0*
在学校,我的数学概念学得不错,但是基本计算却惨不忍睹。我的第一位学校老师告诉我的家长和我,说我的数学很差,还说我很“笨”。虽然我在“常规的”数学项目中表现出色,能理解一个超出 7 岁孩子理解能力的逻辑谜题,但是我不能完成我们每天早上都要做的“练习”。她说我傻,这事我不会忘记。在那之后的十年里,我都没能相信自己的逻辑能力和算法的水平。**不要低估你对孩子说的话的影响**。
@ -33,7 +33,7 @@
### 本科时光
我想我要成为一个小学教师,我就读米尔斯学院就是想要做这个。但是后来我开始研究女性,后来又研究神学,我这样做仅仅是由于我自己的一个渴求:我希望能理解人类的意志以及为更好的世界而努力。
同时,我也感受到了互联网的巨大力量。在 1991 年,拥有自己的 UNIX 账户,能够和全世界的人谈话,是很令人兴奋的事。我仅仅从在互联网中“玩”就学到了不少,从那些愿意回答我提出的问题的人那里学到的就更多了。这些学习对我的职业生涯的影响不亚于我在正规学校教育之中学到的知识。所有的信息都是有用的。我在一个女子学院度过了学习的关键时期,当时是一位杰出的女性在掌管计算机学院。在那个宽松氛围的学院,我们不仅被允许,还被鼓励去尝试很多的道路(我们能接触到很多很多的科技,还有聪明人愿意帮助我们),我也确实那样做了。我十分感激当年的教育。在那个学院,我也了解了什么是极客文化。
@ -41,31 +41,31 @@
### 新的开端
在 1995 年,我被互联网连接人们以及分享想法和信息的能力所震惊(直到现在仍是如此)。我想要进入这个行业。看起来我好像要“女承父业”,但是我不知道如何开始。我开始在硅谷做临时工,在从 Sun 微系统公司得到我的第一个“真正”技术职位之前尝试做了一些事情(为半导体数据公司写最基础的数据库、处理技术手册印发前的事务、备份工资单的存根)。这些事很让人激动。(毕竟,我们是“.com”中的那个“点”。
在 Sun 公司,我努力学习,尽可能多的尝试新事物。我的第一个工作是<ruby>网页化<rt> HTMLing</rt></ruby>(啥?这居然是一个词!)白皮书,以及为 Beta 程序修改一些基础的服务工具(大多数是 Perl 写的)。后来我成为 Solaris beta 项目组中的项目经理,并在 Open Solaris 的 Beta 版运行中感受到了开源的力量。
在那里我做的最重要的事情就是学。我发现在同样重视工程和教育的地方有一种气氛,在那里我的问题不再显得“傻”。我很庆幸我选对了导师和朋友。在决定休第二个孩子的产假之前,我上每一堂我能上的课程,读每一本我能读的书,尝试自学我在学校没有学习过的技术,商业以及项目管理方面的技能。
### 重回工作
当我准备重新工作时Sun 公司已经不再是合适的地方了。所以,我整理了我的联系信息(网络帮到了我利用我的沟通技能最终获得了一个管理互联网门户的长期合同2005 年时,一切皆门户),并且开始了解 CRM、发布产品的方式、本地化、网络等知识。我讲这么多背景,主要是我尝试以及失败的经历,和我成功的经历同等重要,从中学到很多。我也认为我们需要这个方面的榜样。
从很多方面来看,我的职业生涯的第一部分是我的技术教育。时变势移 —— 我在帮助组织中的女性和其他弱势群体,但是并没有看出作为一个技术行业的女性有多难。当时无疑我没有看到这个行业的缺陷,但是现在这个行业对女性的厌恶更甚,一点也没有减少。
在这些事情之后,我还没有把自己当作一个标杆,或者一个高级技术人员。当我在父母圈子里认识的一位极客朋友鼓励我申请一个看起来定位十分模糊且技术性很强的开源的非盈利基础设施机构(互联网系统协会 ISC它是广泛部署的开源 DNS 名称服务器 BIND 的缔造者,也是 13 台根域名服务器之一的运营商)的产品经理时,我很震惊。有很长一段时间,我都不知道他们为什么要雇佣我!我对 DNS、基础设施以及协议的开发知之甚少,但是我再次遇到了老师,并再度开始飞速发展。我花时间出差,在关键流程上攻关,搞清楚如何与高度国际化的团队合作,解决麻烦的问题,最重要的是,拥抱支持我们的开源和充满活力的社区。我几乎是通过试错的方式重新学了一切。我学习了如何构思一个产品,如何通过建设开源社区来领导那些有着特定才能、技能和耐心的人 —— 是他们赋予了产品价值。
### 成为别人的导师
当我在 ISC 工作时,我通过 [TechWomen 项目][2] (一个让来自中东和北非的技术行业的女性到硅谷来接受教育的计划),我开始喜欢教学生以及支持那些技术女性,特别是在开源行业中奋斗的。也正是从这时起我开始相信自己的能力。我还需要学很多。
当我第一次读 TechWomen 关于导师的广告时,我根本不认为他们会约我面试!我有冒名顶替综合症。当他们邀请我成为第一批导师(以及以后六年每年的导师)时,我很震惊,但是现在我学会了相信这些都是我努力得到的待遇。冒名顶替综合症是真实的,但是随着时间过去我就慢慢名副其实了
### 现在
最后,我不得不离开我在 ISC 的工作。幸运的是,我的工作以及我的价值让我进入了 Mozilla ,在这里我的努力和我的幸运让我在这里承担着重要的角色。现在,我是一名支持多样性与包容的高级项目经理。我致力于构建一个更多样化,更有包容性的 Mozilla ,站在之前的做同样事情的巨人的肩膀上,与最聪明友善的人们一起工作。我用我的激情来让人们找到贡献一个世界需要的互联网的有意义的方式:这让我兴奋了很久。当我爬上山峰,我能极目四望
通过对组织和个人行为的干预来获取一种改变文化的新方式,这和我的人生轨迹有着不可思议的联系 —— 从我的早期的学术生涯,到职业生涯再到现在。每天都是一个新的挑战,我想这是我喜欢在科技行业工作,尤其是在开放互联网工作的理由。互联网天然的多元性是它最开始吸引我的原因,也是我还在寻求的 —— 所有人都有机会和获取资源的可能性,无论背景如何。榜样、导师、资源,以及最重要的,尊重,是不断发展技术和开源文化的必要组成部分,实现我相信它能实现的所有事 —— 包括给所有人平等的接触机会。
--------------------------------------------------------------------------------

View File

@ -0,0 +1,391 @@
在 Linux 命令行下管理 Samba4 AD 架构(二)
============================================================
这篇文章包括了管理 Samba4 域控制器架构过程中的一些常用命令,比如添加、移除、禁用或者列出用户及用户组等。
我们也会关注一下如何配置域安全策略以及如何把 AD 用户绑定到本地的 PAM 认证中,以实现 AD 用户能够在 Linux 域控制器上进行本地登录。
#### 要求
- [在 Ubuntu 系统上使用 Samba4 来创建活动目录架构][1]
### 第一步:在命令行下管理
1、 可以通过 `samba-tool` 命令行工具来进行管理,这个工具为域管理工作提供了一个功能强大的管理接口。
通过 `samba-tool` 命令行接口你可以直接管理域用户及用户组、域组策略、域站点DNS 服务、域复制关系和其它重要的域功能。
使用 root 权限的账号,直接输入 `samba-tool` 命令,不要加任何参数选项来查看该工具能实现的所有功能。
```
# samba-tool -h
```
[
![samba-tool - Manage Samba Administration Tool](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png)
][3]
*samba-tool —— Samba 管理工具*
2、 现在,让我们开始使用 `samba-tool` 工具来管理 Samba4 活动目录中的用户。
使用如下命令来创建 AD 用户:
```
# samba-tool user add your_domain_user
```
添加一个用户,包括 AD 可选的一些重要属性,如下所示:
```
--------- review all options ---------
# samba-tool user add -h
# samba-tool user add your_domain_user --given-name=your_name --surname=your_username --mail-address=your_domain_user@tecmint.lan --login-shell=/bin/bash
```
[
![Create User on Samba AD](http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png)
][4]
*在 Samba AD 上创建用户*
3、 可以通过下面的命令来列出所有 Samba AD 域用户:
```
# samba-tool user list
```
[
![List Samba AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png)
][5]
*列出 Samba AD 用户信息*
4、 使用下面的命令来删除 Samba AD 域用户:
```
# samba-tool user delete your_domain_user
```
5、 重置 Samba 域用户的密码:
```
# samba-tool user setpassword your_domain_user
```
6、 启用或禁用 Samba 域用户账号:
```
# samba-tool user disable your_domain_user
# samba-tool user enable your_domain_user
```
7、 同样地可以使用下面的方法来管理 Samba 用户组:
```
--------- review all options ---------
# samba-tool group add -h
# samba-tool group add your_domain_group
```
8、 删除 samba 域用户组:
```
# samba-tool group delete your_domain_group
```
9、 显示所有的 Samba 域用户组信息:
 
```
# samba-tool group list
```
10、 列出指定组下的 Samba 域用户:
```
# samba-tool group listmembers "your_domain group"
```
[
![List Samba Domain Members of Group](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png)
][6]
*列出 Samba 域用户组*
11、 从 Samba 域组中添加或删除某一用户:
```
# samba-tool group addmembers your_domain_group your_domain_user
# samba-tool group removemembers your_domain_group your_domain_user
```
12、 如上面所提到的 `samba-tool` 命令行工具也可以用于管理 Samba 域策略及安全。
查看 samba 域密码设置:
```
# samba-tool domain passwordsettings show
```
[
![Check Samba Domain Password](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png)
][7]
*检查 Samba 域密码*
13、 为了修改 samba 域密码策略,比如密码复杂度,密码失效时长,密码长度,密码重复次数以及其它域控制器要求的安全策略等,可参照如下命令来完成:
```
---------- List all command options ----------
# samba-tool domain passwordsettings -h
```
[
![Manage Samba Domain Password Settings](http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png)
][8]
*管理 Samba 域密码策略*
不要把上图中的密码策略规则用于生产环境中。上面的策略仅仅是用于演示目的。
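作为参考,下面给出一个示意性的修改示例(其中的数值仅为演示,并非本文推荐的策略,请按照你所在环境的安全要求自行调整):
```
# samba-tool domain passwordsettings set --complexity=on --min-pwd-length=10 --history-length=24 --max-pwd-age=60
```
修改完成后,可以再次执行 `samba-tool domain passwordsettings show` 来确认设置已生效。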
### 第二步:使用活动目录账号来完成 Samba 本地认证
14、 默认情况下,在 Samba AD DC 环境之外AD 用户不能从本地登录到 Linux 系统。
为了让活动目录账号也能登录到系统,你必须在 Linux 系统环境中做如下设置,并且要修改 Samba4 AD DC 配置。
首先,打开 Samba 主配置文件,如果以下内容不存在,则添加:
```
$ sudo nano /etc/samba/smb.conf
```
确保以下参数出现在配置文件中:
```
winbind enum users = yes
winbind enum groups = yes
```
[
![Samba Authentication Using Active Directory User Accounts](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png)
][9]
*Samba 通过 AD 用户账号来进行认证*
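视具体环境而定(这一步并不是本文的必要步骤),有些部署还会在 `smb.conf` 的 `[global]` 段中加入类似下面的模板参数,为 AD 用户指定默认的登录 shell 和主目录(取值仅为示例):
```
template shell = /bin/bash
template homedir = /home/%D/%U
```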
15、 修改之后使用 `testparm` 工具来验证配置文件没有错误,然后通过如下命令来重启 Samba 服务:
```
$ testparm
$ sudo systemctl restart samba-ad-dc.service
```
[
![Check Samba Configuration for Errors](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png)
][10]
*检查 Samba 配置文件是否报错*
16、 下一步我们需要修改本地 PAM 配置文件,以让 Samba4 活动目录账号能够完成本地认证、开启会话,并且在第一次登录系统时创建一个用户目录。
使用 `pam-auth-update` 命令来打开 PAM 配置提示界面,确保所有的 PAM 选项都已经使用 `[空格]` 键来启用,如下图所示:
完成之后,按 `[Tab]` 键跳转到 OK ,以启用修改。
```
$ sudo pam-auth-update
```
[
![Configure PAM for Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png)
][11]
*为 Samba4 AD 配置 PAM 认证*
[
![Enable PAM Authentication Module for Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png)
][12]
*为 Samba4 AD 用户启用 PAM 认证模块*
17、 现在使用文本编辑器打开 `/etc/nsswitch.conf` 配置文件,在 `passwd``group` 参数的最后面添加 `winbind` 参数,如下图所示:
```
$ sudo vi /etc/nsswitch.conf
```
[
![Add Windbind Service Switch for Samba](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png)
][13]
*为 Samba 服务添加 Winbind Service Switch 设置*
18、 最后编辑 `/etc/pam.d/common-password` 文件,查找下图所示行并删除 `use_authtok` 参数。
该设置确保 AD 用户在通过 Linux 系统本地认证后,可以在命令行下修改他们的密码。有这个参数时,本地认证的 AD 用户不能在控制台下修改他们的密码。
```
password [success=1 default=ignore] pam_winbind.so try_first_pass
```
[
![Allow Samba AD Users to Change Passwords](http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png)
][14]
*允许 Samba AD 用户修改密码*
在每次 PAM 更新安装完成并应用到 PAM 模块,或者你每次执行 `pam-auth-update` 命令后,你都需要删除 `use_authtok` 参数。
19、 Samba4 的二进制文件自带一个内建的 winbindd 进程,并且默认是启用的。
因此,你没必要再次去启用并运行 Ubuntu 系统官方自带的 winbind 服务。
为了防止系统里原来已废弃的 winbind 服务被启动,确保执行以下命令来禁用并停止原来的 winbind 服务。
```
$ sudo systemctl disable winbind.service
$ sudo systemctl stop winbind.service
```
虽然我们不再需要运行原有的 winbind 进程,但是为了安装并使用 wbinfo 工具,我们还得从系统软件库中安装 Winbind 包。
wbinfo 工具可以用来从 winbindd 进程侧来查询活动目录用户和组。
以下命令显示了使用 `wbinfo` 命令如何查询 AD 用户及组信息。
```
$ wbinfo -g
$ wbinfo -u
$ wbinfo -i your_domain_user
```
[
![Check Samba4 AD Information ](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png)
][15]
*检查 Samba4 AD 信息*
[
![Check Samba4 AD User Info](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png)
][16]
*检查 Samba4 AD 用户信息*
20、 除了 `wbinfo` 工具外,你也可以使用 `getent` 命令行工具从 Name Service Switch 库中查询活动目录信息库,在 `/etc/nsswitch.conf` 配置文件中有相关描述内容。
通过 grep 命令用管道符从 `getent` 命令过滤结果集,以获取信息库中 AD 域用户及组信息。
```
# getent passwd | grep TECMINT
# getent group | grep TECMINT
```
[
![Get Samba4 AD Details](http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png)
][17]
*查看 Samba4 AD 详细信息*
### 第三步:使用活动目录账号登录 Linux 系统
21、 为了使用 Samba4 AD 用户登录系统,使用 `su -` 命令切换到 AD 用户账号即可。
第一次登录系统后,控制台会有信息提示用户的 home 目录已创建完成,系统路径为 `/home/$DOMAIN/` 之下,名字为用户的 AD 账号名。
使用 `id` 命令来查询其它已登录的用户信息。
```
# su - your_ad_user
$ id
$ exit
```
[
![Check Samba4 AD User Authentication on Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png)
][18]
*检查 Linux 下 Samba4 AD 用户认证结果*
22、 当你成功登入系统后在控制台下输入 `passwd` 命令来修改已登录的 AD 用户密码。
```
$ su - your_ad_user
$ passwd
```
[
![Change Samba4 AD User Password](http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png)
][19]
*修改 Samba4 AD 用户密码*
23、 默认情况下活动目录用户没有可以完成系统管理工作的 root 权限。
要授予 AD 用户 root 权限,你必须把用户名添加到本地 sudo 组中,可使用如下命令完成。
确保你已输入域名、反斜杠和 AD 用户名,并且使用英文单引号括起来,如下所示:
```
# usermod -aG sudo 'DOMAIN\your_domain_user'
```
要检查 AD 用户在本地系统上是否有 root 权限,登录后执行一个命令,比如,使用 sudo 权限执行 `apt-get update` 命令。
```
# su - tecmint_user
$ sudo apt-get update
```
[
![Grant sudo Permission to Samba4 AD User](http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png)
][20]
*授予 Samba4 AD 用户 sudo 权限*
24、 如果你想把活动目录组中的所有账号都授予 root 权限,使用 `visudo` 命令来编辑 `/etc/sudoers` 配置文件,在 root 权限那一行添加如下内容:
```
%DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
```
注意 `/etc/sudoers` 的格式,不要弄乱。
`/etc/sudoers` 配置文件对于 ASCII 引号字符处理的不是很好,因此务必使用 '%' 来标识用户组,使用反斜杠来转义域名后的第一个斜杠,如果你的组名中包含空格(大多数 AD 内建组默认情况下都包含空格)使用另外一个反斜杠来转义空格。并且域的名称要大写。
[
![Give Sudo Access to All Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png)
][21]
*授予所有 Samba4 用户 sudo 权限*
好了,差不多就这些了!管理 Samba4 AD 架构也可以使用 Windows 环境中的其它几个工具,比如 ADUC、DNS 管理器、 GPM 等等,这些工具可以通过安装从 Microsoft 官网下载的 RSAT 软件包来获得。
要通过 RSAT 工具来管理 Samba4 AD DC ,你必须要把 Windows 系统加入到 Samba4 活动目录。这将是我们下一篇文章的重点,在这之前,请继续关注。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line
作者:[Matei Cezar][a]
译者:[rusking](https://github.com/rusking)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://linux.cn/article-8065-1.html
[2]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png

View File

@ -0,0 +1,65 @@
把 SQL Server 迁移到 Linux不如换成 MySQL
============================================================
最近几年,数量庞大的个人和组织放弃 Windows 平台选择 Linux 平台,而且随着人们体验到更多 Linux 的发展,这个数字将会继续增长。在很长的一段时间内, Linux 是网络服务器的领导者,因为大部分的网络服务器都运行在 Linux 之上,这或许是为什么那么多的个人和组织选择迁移的一个原因。
迁移的原因有很多,更强的平台稳定性、可靠性、成本、所有权和安全性等等。随着更多的个人和组织迁移到 Linux 平台MS SQL 服务器数据库管理系统的迁移也有着同样的趋势,首选的是 MySQL ,这是因为 MySQL 的互用性、平台无关性和购置成本低。
有如此多的个人和组织完成了迁移,这是应业务需求而产生的迁移,而不是为了迁移的乐趣。因此,有必要做一个综合可行性和成本效益分析,以了解迁移对于你的业务上的正面和负面影响。
迁移需要基于以下重要因素:
### 对平台的掌控
不像 Windows 那样你不能完全控制版本发布和修复Linux 真正给了你在需要修复的时候随时获取修复的灵活性。这一点受到了开发者和安全人员的喜爱,因为他们能在一个安全威胁被确定时立即自行打补丁,不像 Windows ,你只能期望官方尽快发布补丁。
### 跟随大众
目前, 运行在 Linux 平台上的服务器在数量上远超 Windows几乎是全世界服务器数量的四分之三而且这种趋势在最近一段时间内不会改变。因此许多组织正在将他们的服务完全迁移到 Linux 上,而不是同时使用两种平台,同时使用将会增加他们的运营成本。
### 微软没有开放 SQL Server 的源代码
微软宣称他们下一个名为 Denali 的新版 MS SQL Server 将会是一个 Linux 版本,并且不会开放其源代码,这意味着他们仍然使用的是软件授权模式,只是新版本将能在 Linux 上运行而已。这一点将许多乐于接受开源新版本的人拒之门外。
这也没有给那些使用闭源的 Oracle 用户另一个选择,对于使用完全开源的 [MySQL 用户][7]也是如此。
### 节约授权许可证的花费
授权许可证的潜在成本让许多用户很失望。在 Windows 平台上运行 MS SQL 服务器有太多的授权许可证牵涉其中。你需要这些授权许可证:
*   Windows 操作系统
*   MS SQL 服务器
*   特定的数据库工具,例如 SQL 分析工具等
不像 Windows 平台Linux 完全没有高昂的授权花费,因此更能吸引用户。 MySQL 数据库也能免费获取,甚而它提供了像 MS SQL 服务器一样的灵活性,那就更值得选择了。不像那些给 MS SQL 设计的收费工具,大部分的 MySQL 数据库实用程序是免费的。
### 有时候用的是特殊的硬件
因为 Linux 是不同的开发者所开发,并在不断改进中,所以它独立于所运行的硬件之上,并能被广泛使用在不同的硬件平台。然而尽管微软正在努力让 Windows 和 MSSQL 服务器做到硬件无关,但在平台无关上依旧有些限制。
### 支持
有了 Linux、MySQL 和其它的开源软件,获取满足自己特定需求的帮助变得更加简单,因为有不同开发者参与到这些软件的开发过程中。这些开发者或许就在你附近,这样更容易获取帮助。在线论坛也能帮上不少,你能发帖并讨论你所面对的问题。
至于那些商业软件,你只能根据他们的软件协议和时间来获得帮助,有时候他们不能在你的时间范围内给出一个解决方案。
在不同的情况中,迁移到 Linux 都是你最好的选择,加入一个彻底的、稳定可靠的平台来获取优异表现,众所周知,它比 Windows 更健壮。这值得一试。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/
作者:[Tony Branson][a]
译者:[ypingcn](https://github.com/ypingcn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#to-have-control-over-the-platform
[2]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#joining-the-crowd
[3]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#microsoft-isnrsquot-open-sourcing-sql-serverrsquos-code
[4]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#saving-on-license-costs
[5]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#sometimes-the-specific-hardware-being-used
[6]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#support
[7]:http://www.scalearc.com/how-it-works/products/scalearc-for-mysql

View File

@ -0,0 +1,124 @@
如何在 Ubuntu 环境下搭建邮件服务器(一)
============================================================
![mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-stack.jpg?itok=SVMfa8WZ "mail server")
在这个系列的文章中,我们将通过使用 Postfix、Dovecot 和 OpenSSL 这三款工具,来为你展示如何在 Ubuntu 系统上搭建一个既可靠又易于配置的邮件服务器。
在这个容器和微服务技术日新月异的时代,值得庆幸的是有些事情并没有改变。例如搭建一个 Linux 下的邮件服务器,仍然需要许多步骤才能将各种不同的服务器组合在一起,而当你把这些都配置好之后,它又非常可靠稳定,不会像微服务那样一睁眼有了,一闭眼又没了。在这个系列教程中,我们将通过使用 Postfix、Dovecot 和 OpenSSL 这三款工具,在 Ubuntu 系统上搭建一个既可靠又易于配置的邮件服务器。
Postfix 是一个古老又可靠的软件,它比原始的 Unix 系统的 MTA 软件 sendmail 更加容易配置和使用还有人仍然在用sendmail 吗?)。 Exim 是 Debain 系统上的默认 MTA 软件,它比 Postfix 更加轻量而且超级容易配置,因此我们在将来的教程中会推出 Exim 的教程。
DovecotLCTT 译注:详情请阅读[维基百科](https://en.wikipedia.org/wiki/Dovecot_(software))和 Courier 是两个非常受欢迎的优秀的 IMAP/POP3 协议的服务器软件Dovecot 更加的轻量并且易于配置。
你必须要保证你的邮件通讯是安全的,因此我们就需要使用到 OpenSSL 这个软件OpenSSL 也提供了一些很好用的工具来测试你的邮件服务器。
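例如,等服务器搭建好并启用了 TLS 之后,就可以用类似下面的命令来检查 SMTP 的 STARTTLS 是否正常工作(这里的主机名只是假设的示例):
```
$ openssl s_client -starttls smtp -connect myserver.mydomain.net:25
```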
为了简单起见,在这一系列的教程中,我们将指导大家安装一个在局域网上的邮件服务器,你应该拥有一个局域网内的域名服务,并确保它是启用且正常工作的,查看这篇“[使用 dnsmasq 为局域网轻松提供 DNS 服务][5]”会有些帮助,然后,你就可以通过注册域名并相应地配置防火墙,来将这台局域网服务器变成互联网可访问邮件服务器。这个过程网上已经有很多很详细的教程了,这里不再赘述,请大家继续跟着教程进行即可。
### 一些术语
让我们先来快速了解一些术语,因为当我们了解了这些术语的时候就能知道这些见鬼的东西到底是什么。 :D
* **MTA**邮件传输代理Mail Transfer Agent基于 SMTP 协议(简单邮件传输协议)的服务端,比如 Postfix、Exim、Sendmail 等。SMTP 服务端彼此之间进行相互通信LCTT 译注 : 详情请阅读[维基百科](https://en.wikipedia.org/wiki/Message_transfer_agent))。
* **MUA** 邮件用户代理Mail User Agent你本地的邮件客户端例如 : Evolution、KMail、Claws Mail 或者 ThunderbirdLCTT 译注 : 例如国内的 Foxmail
* **POP3**邮局协议Post-Office Protocol版本 3将邮件从 SMTP 服务器传输到你的邮件客户端的的最简单的协议。POP 服务端是非常简单小巧的,单一的一台机器可以为数以千计的用户提供服务。
* **IMAP** 交互式消息访问协议Interactive Message Access Protocol许多企业使用这个协议因为邮件可以被保存在服务器上而用户不必担心会丢失消息。IMAP 服务器需要大量的内存和存储空间。
* **TLS**传输层安全Transport Layer Security是 SSLSecure Sockets Layer安全套接层的改良版为 SASL 身份认证提供了加密的传输服务层。
* **SASL**简单身份认证与安全层Simple Authentication and Security Layer用于认证用户。SASL进行身份认证而上面说的 TLS 提供认证数据的加密传输。
* **StartTLS**: 也被称为伺机 TLS 。如果服务器双方都支持 SSL/TLSStartTLS 就会将纯文本连接升级为加密连接TLS 或 SSL。如果有一方不支持加密则使用明文传输。StartTLS 会使用标准的未加密端口 25 SMTP、 110POP3和 143 IMAP而不是对应的加密端口 465SMTP、995POP3 和 993 IMAP
### 啊,我们仍然有 sendmail
绝大多数的 Linux 版本仍然还保留着 `/usr/sbin/sendmail` 。 这是在那个 MTA 只有一个 sendmail 的古代遗留下来的痕迹。在大多数 Linux 发行版中,`/usr/sbin/sendmail` 会符号链接到你安装的 MTA 软件上。如果你的 Linux 中有它,不用管它,你的发行版会自己处理好的。
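如果好奇它到底指向哪个 MTA可以用类似下面的命令查看输出会因发行版和所安装的 MTA 而不同):
```
$ readlink -f /usr/sbin/sendmail
```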
### 安装 Postfix
使用 `apt-get install postfix` 来做基本安装时要注意(图 1安装程序会打开一个向导询问你想要搭建的服务器类型你要选择“Internet Site”虽然这里是局域网服务器。它会让你输入完全限定的服务器域名例如 myserver.mydomain.net。对于局域网服务器假设你的域名服务已经正确配置(我多次提到这个是因为经常有人在这里出现错误),你也可以只使用主机名。
![Postfix](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/postfix-1.png?itok=NJLdtICb "Postfix")
*图 1Postfix 的配置。*
Ubuntu 系统会为 Postfix 创建一个配置文件,并启动三个守护进程:`master`、`qmgr` 和 `pickup`,这里没有一个叫 Postfix 的命令或守护进程。LCTT 译注:名为 `postfix` 的命令是管理命令。)
```
$ ps ax
6494 ? Ss 0:00 /usr/lib/postfix/master
6497 ? S 0:00 pickup -l -t unix -u -c
6498 ? S 0:00 qmgr -l -t unix -u
```
你可以使用 Postfix 内置的配置语法检查来测试你的配置文件,如果没有发现语法错误,就不会输出任何内容。
```
$ sudo postfix check
[sudo] password for carla:
```
使用 `netstat` 来验证 `postfix` 是否正在监听 25 端口。
```
$ netstat -ant
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp6 0 0 :::25 :::* LISTEN
```
现在让我们再操起古老的 `telnet` 来进行测试 :
```
$ telnet myserver 25
Trying 127.0.1.1...
Connected to myserver.
Escape character is '^]'.
220 myserver ESMTP Postfix (Ubuntu)
EHLO myserver
250-myserver
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
^]
telnet>
```
嘿,我们已经验证了我们的服务器名,而且 Postfix 正在监听 SMTP 的 25 端口而且响应了我们键入的命令。
按下 `^]` 终止连接,返回 telnet。输入 `quit` 来退出 telnet。输出的 ESMTP扩展的 SMTP 250 状态码如下。
LCTT 译注ESMTPExtended SMTP即扩展 SMTP就是对标准 SMTP 协议进行的扩展。详情请阅读[维基百科](https://en.wikipedia.org/wiki/Extended_SMTP)
* **PIPELINING** 允许多个命令流式发出,而不必对每个命令作出响应。
* **SIZE** 表示服务器可接收的最大消息大小。
* **VRFY** 可以告诉客户端某一个特定的邮箱地址是否存在,这通常应该被取消,因为这是一个安全漏洞。
* **ETRN** 适用于非持久互联网连接的服务器。这样的站点可以使用 ETRN 从上游服务器请求邮件投递Postfix 可以配置成延迟投递邮件到 ETRN 客户端。
* **STARTTLS** (详情见上述说明)。
* **ENHANCEDSTATUSCODES**,服务器支持增强型的状态码和错误码。
* **8BITMIME**,支持 8 位 MIME这意味着完整的 ASCII 字符集。最初,原始的 ASCII 是 7 位。
* **DSN**,投递状态通知,用于通知你投递时的错误。
Postfix 的主配置文件是: `/etc/postfix/main.cf`,这个文件是安装程序创建的,可以参考[这个资料][6]来查看完整的 `main.cf` 参数列表, `/etc/postfix/postfix-files` 这个文件描述了 Postfix 完整的安装文件。
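作为参考,一个极简的 `main.cf` 片段大致如下(其中的主机名和域名都是假设值,请以你机器上安装程序实际生成的文件为准):
```
myhostname = myserver.mydomain.net
mydomain = mydomain.net
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
inet_interfaces = all
```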
下一篇教程我们会讲解 Dovecot 的安装和测试,然后会给我们自己发送一些邮件。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/how-build-email-server-ubuntu-linux
作者:[CARLA SCHRODER][a]
译者:[WangYihang](https://github.com/WangYihang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/postfix-1png
[4]:https://www.linux.com/files/images/mail-stackjpg
[5]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services
[6]:http://www.postfix.org/postconf.5.html

View File

@ -0,0 +1,215 @@
RHEL (Red Hat Enterprise Linux红帽企业级 Linux) 7.3 安装指南
=====
RHEL 是由红帽公司开发维护的开源 Linux 发行版,可以运行在所有的主流 CPU 架构中。一般来说,多数的 Linux 发行版都可以免费下载、安装和使用,但对于 RHEL只有在购买了订阅之后你才能下载和使用否则只能获取到试用期为 30 天的评估版。
本文会告诉你如何在你的机器上安装最新的 RHEL 7.3,当然了,使用的是期限 30 天的评估版 ISO 镜像,请自行到 [https://access.redhat.com/downloads][1] 下载。
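下载完成后,建议先校验一下镜像的完整性再刻录(下面的文件名只是示例,校验值请与红帽官网提供的值对比):
```
$ sha256sum rhel-server-7.3-x86_64-dvd.iso
```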
如果你更喜欢使用 CentOS请移步 [CentOS 7.3 安装指南][2]。
欲了解 RHEL 7.3 的新特性,请参考 [版本更新日志][3]。
#### 先决条件
本次安装是在支持 UEFI 的虚拟机固件上进行的。为了完成安装,你首先需要进入主板的 EFI 固件更改启动顺序为已刻录好 ISO 镜像的对应设备DVD 或者 U 盘)。
如果是通过 USB 介质来安装,你需要确保这个可以启动的 USB 设备是用支持 UEFI 兼容的工具来创建的,比如 [Rufus][4],它能将你的 USB 设备设置为 UEFI 固件所需要的 GPT 分区方案。
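如果你是在 Linux 下制作启动 U 盘,也可以用类似下面的 `dd` 命令直接写入镜像RHEL 7 的 ISO 是混合镜像,通常可以直接这样启动;命令中的 ISO 文件名和 `/dev/sdX` 设备名都是假设值,写错设备会清空其上的数据,请务必先确认):
```
$ sudo dd if=rhel-server-7.3-x86_64-dvd.iso of=/dev/sdX bs=4M status=progress && sync
```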
为了进入主板的 UEFI 固件设置面板,你需要在电脑初始化 POST (Power on Self Test通电自检) 的时候按下一个特殊键。
关于该设置需要用到特殊键你可以向主板厂商进行咨询获取。通常来说在笔记本上可能是这些键F2、F9、F10、F11 或者 F12也可能是 Fn 与这些键的组合。
此外,更改 UEFI 启动顺序前,你要确保快速启动选项 (QuickBoot/FastBoot) 和 安全启动选项 (Secure Boot) 处于关闭状态,这样才能在 EFI 固件中运行 RHEL。
有一些 UEFI 固件主板模型有这样一个选项,它让你能够以传统的 BIOS 或者 EFI CSM (Compatibility Support Module兼容支持模块) 两种模式来安装操作系统,其中 CSM 是主板固件中一个用来模拟 BIOS 环境的模块。这种类型的安装需要 U 盘以 MBR 而非 GPT 来进行分区。
此外,一旦在你的 UEFI 机器中以这两种模式之一成功安装好 RHEL 或者类似的 OS那么安装好的系统就必须以你安装时使用的模式来运行。而且你也不能够从 UEFI 模式变更到传统的 BIOS 模式,反之亦然。强行变更这两种模式会让你的系统变得不稳定、无法启动,同时还需要重新安装系统。
### RHEL 7.3 安装指南
1、 首先下载并使用合适的工具刻录 RHEL 7.3 ISO 镜像到 DVD 或者创建一个可启动的 U 盘。
给机器加电启动,把 DVD/U 盘放入合适驱动器中,并根据你的 UEFI/BIOS 类型,按下特定的启动键变更启动顺序来启动安装介质。
当安装介质被检测到之后,它会启动到 RHEL 的 GRUB 菜单。选择“Install Red Hat Enterprise Linux 7.3”并按回车继续。
[![RHEL 7.3 Boot Menu](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg)][5]
*RHEL 7.3 启动菜单*
2、 之后屏幕就会显示 RHEL 7.3 欢迎界面。该界面选择安装过程中使用的语言 (LCTT 译注:这里选的只是安装过程中使用的语言,之后的安装中才会进行最终使用的系统语言环境) ,然后按回车到下一界面。
[![Select RHEL 7.3 Language](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png)][6]
*选择 RHEL 7.3 安装过程使用的语言*
3、 下一界面中显示的是安装 RHEL 时你需要设置的所有事项的总体概览。首先点击日期和时间 (DATE & TIME) 并在地图中选择你的设备所在地区。
点击最上面的完成 (Done) 按钮来保持你的设置,并进行下一步系统设置。
[![RHEL 7.3 Installation Summary](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png)][7]
*RHEL 7.3 安装概览*
[![Select RHEL 7.3 Date and Time](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png)][8]
*选择 RHEL 7.3 日期和时间*
4、 接下来就是配置你的键盘keyboard布局并再次点击完成 (Done) 按钮返回安装主菜单。
[![Configure Keyboard Layout](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png)][9]
*配置键盘布局*
5、 紧接着选择你使用的语言支持language support并点击完成 (Done),然后进行下一步。
[![Choose Language Support](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png)][10]
*选择语言支持*
6、 安装源Installation Source保持默认就好因为本例中我们使用本地安装 (DVD/USB 镜像)然后选择要安装的软件集Software Selection
此处你可以选择基本环境 (base environment) 和附件 (Add-ons) 。由于 RHEL 常用作 Linux 服务器最小化安装Minimal Installation对于系统管理员来说则是最佳选择。
对于生产环境来说,这也是官方极力推荐的安装方式,因为我们只需要在 OS 中安装极少量软件就好了。
这也意味着高安全性、可伸缩性以及占用极少的磁盘空间。同时,通过购买订阅 (subscription) 或使用 DVD 镜像源,这里列出的其它环境和附件都可以在命令行中很容易地安装。
[![RHEL 7.3 Software Selection](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png)][11]
*RHEL 7.3 软件集选择*
7、 万一你想要安装预定义的基本环境之一比方说 Web 服务器、文件 & 打印服务器、架构服务器、虚拟化主机、带 GUI 的服务器等,直接点击选择它们,然后在右边的框选择附件,最后点击完成 (Done) 结束这一步操作即可。
[![Select Server with GUI on RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png)][12]
*选择带 GUI 的服务器*
8、 在接下来点击安装目标 (Installation Destination),这个步骤要求你为将要安装的系统进行分区、格式化文件系统并设置挂载点。
最安全的做法就是让安装器自动配置硬盘分区,这样会创建 Linux 系统所有需要用到的基本分区 (在 LVM 中创建 `/boot`、`/boot/efi`、`/(root)` 以及 `swap` 等分区),并格式化为 RHEL 7.3 默认的 XFS 文件系统。
请记住:如果安装过程是从 UEFI 固件中启动的,那么硬盘的分区表则是 GPT 分区方案。否则,如果你以 CSM 或传统 BIOS 来启动,硬盘的分区表则使用老旧的 MBR 分区方案。
假如不喜欢自动分区,你也可以选择配置你的硬盘分区表,手动创建自己需要的分区。
不论如何,本文推荐你选择自动配置分区。最后点击完成 (Done) 继续下一步。
[![Choose RHEL 7.3 Installation Drive](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png)][13]
*选择 RHEL 7.3 的安装硬盘*
9、 下一步是禁用 Kdump 服务然后配置网络。
[![Disable Kdump Feature](http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png)][14]
*禁用 Kdump 特性*
10、 在网络和主机名Network and Hostname设置你机器使用的主机名和一个描述性名称同时拖动 Ethernet 开关按钮到 `ON` 来启用网络功能。
如果你在自己的网络中有一个 DHCP 服务器,那么网络 IP 设置会自动获取和使用。
[![Configure Network Hostname](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png)][15]
*配置网络主机名称*
11、 如果要为网络接口设置静态 IP点击配置 (Configure) 按钮,然后手动设置 IP如下方截图所示。
设置好网络接口的 IP 地址之后,点击保存 (Save) 按钮,最后切换一下网络接口的 `OFF` 和 `ON` 状态,以应用刚刚设置的静态 IP。
最后,点击完成 (Done) 按钮返回到安装设置主界面。
[![Configure Network IP Address](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png)][16]
*配置网络 IP 地址*
12、 最后在安装配置主界面需要你配置的最后一项就是安全策略配置Security Policy文件了。选择并应用默认的Default安全策略然后点击完成 (Done) 返回主界面。
回顾所有的安装设置项并点击开始安装 (Begin Installation) 按钮来启动安装过程,这个过程启动之后,你就没有办法停止它了。
[![Apply Security Policy for RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png)][17]
*为 RHEL 7.3 启用安全策略*
[![Begin Installation of RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png)][18]
*开始安装 RHEL 7.3*
13、 在安装过程中你的显示器会出现用户设置 (User Settings)。首先点击 Root 密码 (Root Password) 为 root 账户设置一个高强度密码。
[![Configure User Settings](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png)][19]
*配置用户选项*
[![Set Root Account Password](http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png)][20]
*设置 Root 账户密码*
14、 最后创建一个新用户通过选中使该用户成为管理员 (Make this user administrator) 为新建的用户授权 root 权限。同时还要为这个账户设置一个高强度密码,点击完成 (Done) 返回用户设置菜单,就可以等待安装过程完成了。
[![Create New User Account](http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png)][21]
*创建新用户账户*
[![RHEL 7.3 Installation Process](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png)][22]
*RHEL 7.3 安装过程*
15、 安装过程结束并成功安装后弹出或拔掉 DVD/USB 设备,重启机器。
[![RHEL 7.3 Installation Complete](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png)][23]
*RHEL 7.3 安装完成*
[![Booting Up RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png)][24]
*启动 RHEL 7.3*
至此,安装完成。为了后期一直使用 RHEL你需要从 Red Hat 消费者门户购买一个订阅,然后在命令行 [使用订阅管理器来注册你的 RHEL 系统][25]。
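注册时大致会用到类似下面的命令(其中的用户名、密码为占位符,具体步骤请以上面链接的文章为准):
```
# subscription-manager register --username your_rhn_username --password your_password
# subscription-manager attach --auto
# subscription-manager repos --list-enabled
```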
------------------
作者简介:
Matei Cezar
![](http://2.gravatar.com/avatar/be16e54026c7429d28490cce41b1e157?s=128&d=blank&r=g)
我是一个终日沉溺于电脑的家伙,对开源的 Linux 软件非常着迷,有着 4 年 Linux 桌面发行版、服务器和 bash 编程经验。
---------------------------------------------------------------------
via: http://www.tecmint.com/red-hat-enterprise-linux-7-3-installation-guide/
作者:[Matei Cezar][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://access.redhat.com/downloads
[2]:https://linux.cn/article-8048-1.html
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/7.3_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.3_Release_Notes-Overview.html
[4]:https://rufus.akeo.ie/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png
[23]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png
[25]:http://www.tecmint.com/enable-redhat-subscription-reposiories-and-updates-for-rhel-7/

View File

@ -0,0 +1,412 @@
LXD 2.0 系列(四):资源控制
======================================
这是 [LXD 2.0 系列介绍文章][0]的第四篇。
因为 LXD 容器管理有很多命令,因此这篇文章会很长。如果你想要快速地浏览这些相同的命令,你可以[尝试下我们的在线演示][3]
![](https://linuxcontainers.org/static/img/containers.png)
### 可用资源限制
LXD 提供了各种资源限制。其中一些与容器本身相关如内存配额、CPU 限制和 I/O 优先级。而另外一些则与特定设备相关,如 I/O 带宽或磁盘用量限制。
与所有 LXD 配置一样,资源限制可以在容器运行时动态更改。某些可能无法启用,例如,如果设置的内存值小于当前内存用量,但 LXD 将会试着设置并且报告失败。
所有的限制也可以通过配置文件继承,在这种情况下每个受影响的容器将受到该限制的约束。也就是说,如果在默认配置文件中设置 `limits.memory=256MB`,则使用默认配置文件(通常是全都使用)的每个容器的内存限制为 256MB。
我们不支持资源限制池,将其中的限制由一组容器共享,因为我们没有什么好的方法通过现有的内核 API 实现这些功能。
#### 磁盘
这或许是最需要和最明显的需求。只需设置容器文件系统的大小限制,并对容器强制执行。
LXD 确实可以让你这样做!
不幸的是,这比它听起来复杂得多。 Linux 没有基于路径的配额,而大多数文件系统只有基于用户和组的配额,这对容器没有什么用处。
如果你正在使用 ZFS 或 btrfs 存储后端,这意味着现在 LXD 只能支持磁盘限制。也有可能为 LVM 实现此功能,但这取决于与它一起使用的文件系统,并且如果结合实时更新那会变得棘手起来,因为并不是所有的文件系统都允许在线增长,而几乎没有一个允许在线收缩。
#### CPU
当涉及到 CPU 的限制,我们支持 4 种不同的东西:
* 只给我 X 个 CPU 核心
在这种模式下,你让 LXD 为你选择一组核心,然后为更多的容器和 CPU 的上线/下线提供负载均衡。
容器只看到这个数量的 CPU 核心。
* 给我一组特定的 CPU 核心例如核心1、3 和 5
类似于第一种模式,但是不会做负载均衡,你会被限制在那些核心上,无论它们有多忙。
* 给我你拥有的 20% 的处理能力
在这种模式下,你可以看到所有的 CPU但调度程序将限制你使用 20% 的 CPU 时间,不过这只有在负载状态才会这样!所以如果系统不忙,你的容器可以跑得很欢。而当其他的容器也开始使用 CPU 时,它会被限制用量。
* 每测量 200ms给我 50ms并且不超过
此模式与上一个模式类似,你可以看到所有的 CPU但这一次无论系统可能是多么空闲你只能使用你设置的极限时间下的尽可能多的 CPU 时间。在没有过量使用的系统上,这可使你可以非常整齐地分割 CPU并确保这些容器的持续性能。
另外还可以将前两个中的一个与最后两个之一相结合,即请求一组 CPU然后进一步限制这些 CPU 的 CPU 时间。
除此之外,我们还有一个通用的优先级调节方式,可以告诉调度器当你处于负载状态时,两个争夺资源的容器谁会取得胜利。
#### 内存
内存听起来很简单,就是给我多少 MB 的内存!
它绝对可以那么简单。我们支持这种限制以及基于百分比的请求,比如:给我 10% 的主机内存!
另外我们在上层支持一些额外的东西。 例如,你可以选择在每个容器上打开或者关闭 swap如果打开还可以设置优先级以便你可以选择哪些容器先将内存交换到磁盘
内存限制默认是“hard”。 也就是说,当内存耗尽时,内核将会开始杀掉你的那些进程。
或者你可以将强制策略设置为“soft”在这种情况下只要没有别的进程的情况下你将被允许使用尽可能多的内存。一旦别的进程想要这块内存你将无法分配任何内存直到你低于你的限制或者主机内存再次有空余。
#### 网络 I/O
网络 I/O 可能是我们看起来最简单的限制,但是相信我,实现真的不简单!
我们支持两种限制。 第一个是对网络接口的速率限制。你可以设置入口和出口的限制或者只是设置“最大”限制然后应用到出口和入口。这个只支持“桥接”和“p2p”类型接口。
第二种是全局网络 I/O 优先级,仅当你的网络接口趋于饱和的时候再使用。
#### 块 I/O
我把最古怪的放在最后。对于用户看起来它可能简单,但有一些情况下,它的结果并不会和你的预期一样。
我们在这里支持的基本上与我在网络 I/O 中描述的相同。
你可以直接设置磁盘的读写 IO 的频率和速率,并且有一个全局的块 I/O 优先级,它会通知 I/O 调度程序更倾向哪个。
古怪的是如何设置以及在哪里应用这些限制。不幸的是,我们用于实现这些功能的底层使用的是完整的块设备。这意味着我们不能为每个路径设置每个分区的 I/O 限制。
这也意味着当使用可以支持多个块设备映射到指定的路径(带或者不带 RAID的 ZFS 或 btrfs 时,我们并不知道这个路径是哪个块设备提供的。
这意味着,完全有可能,实际上确实有可能,容器使用的多个磁盘挂载点(绑定挂载或直接挂载)可能来自于同一个物理磁盘。
这就使限制变得很奇怪。为了使限制生效LXD 具有猜测给定路径所对应块设备的逻辑,这其中包括询问 ZFS 和 btrfs 工具,甚至可以在发现一个文件系统中循环挂载的文件时递归地找出它们。
这个逻辑虽然不完美但通常会找到一组应该应用限制的块设备。LXD 接着记录并移动到下一个路径。当遍历完所有的路径,然后到了非常奇怪的部分。它会平均你为相应块设备设置的限制,然后应用这些。
这意味着你将在容器中“平均”地获得正确的速度,但这也意味着你不能对来自同一个物理磁盘的“/fast”和一个“/slow”目录应用不同的速度限制。 LXD 允许你设置它,但最后,它会给你这两个值的平均值。
### 它怎么工作?
除了网络限制是通过较旧但是良好的“tc”实现的上述大多数限制是通过 Linux 内核的 cgroup API 来实现的。
LXD 在启动时会检测你在内核中启用了哪些 cgroup并且将只应用你的内核支持的限制。如果你缺少一些 cgroup守护进程会输出警告接着你的 init 系统将会记录这些。
在 Ubuntu 16.04 上,默认情况下除了内存交换审计外将会启用所有限制,内存交换审计需要你通过`swapaccount = 1`这个内核引导参数来启用。
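以 Ubuntu 为例,启用该参数的一种常见做法(假设系统使用 GRUB 引导)大致如下,修改后需要重启才会生效:
```
# 编辑 /etc/default/grub在内核参数中追加 swapaccount=1例如
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash swapaccount=1"
# 然后更新 GRUB 配置并重启
$ sudo update-grub
$ sudo reboot
```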
### 应用这些限制
上述所有限制都能够直接或者用某个配置文件应用于容器。容器范围的限制可以使用:
```
lxc config set CONTAINER KEY VALUE
```
或对于配置文件设置:
```
lxc profile set PROFILE KEY VALUE
```
当指定特定设备时:
```
lxc config device set CONTAINER DEVICE KEY VALUE
```
或对于配置文件设置:
```
lxc profile device set PROFILE DEVICE KEY VALUE
```
有效配置键、设备类型和设备键的完整列表可以[看这里][1]。
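另外,要查看某个容器当前实际生效的配置(包括从配置文件继承来的部分),可以用类似下面的命令(容器名仅为示例):
```
lxc config show my-container
lxc config show --expanded my-container
```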
#### CPU
要限制使用任意两个 CPU 核心可以这么做:
```
lxc config set my-container limits.cpu 2
```
要指定特定的 CPU 核心,比如说第二和第四个:
```
lxc config set my-container limits.cpu 1,3
```
更加复杂的情况还可以设置范围:
```
lxc config set my-container limits.cpu 0-3,7-11
```
限制实时生效,你可以看下面的例子:
```
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
```
注意为了避免让用户空间彻底困惑lxcfs 会重排 `/proc/cpuinfo` 中的条目,使处理器编号不会出现空缺。
就像 LXD 中的一切,这些设置也可以应用在配置文件中:
```
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
```
要限制容器使用 10% 的 CPU 时间,要设置下 CPU allowance
```
lxc config set my-container limits.cpu.allowance 10%
```
或者给它一个固定的 CPU 时间切片:
```
lxc config set my-container limits.cpu.allowance 25ms/200ms
```
最后,要将容器的 CPU 优先级调到最低:
```
lxc config set my-container limits.cpu.priority 0
```
#### 内存
要直接应用内存限制运行下面的命令:
```
lxc config set my-container limits.memory 256MB
```
(支持的后缀是 KB、MB、GB、TB、PB、EB
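正如前面所说,除了绝对值,也可以按主机内存的百分比来设置(数值仅为示例):
```
lxc config set my-container limits.memory 30%
```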
要关闭容器的内存交换(默认启用):
```
lxc config set my-container limits.memory.swap false
```
告诉内核首先交换指定容器的内存:
```
lxc config set my-container limits.memory.swap.priority 0
```
如果你不想要强制的内存限制:
```
lxc config set my-container limits.memory.enforce soft
```
#### 磁盘和块 I/O
不像 CPU 和内存,磁盘和 I/O 限制是直接作用在实际的设备上的,因此你需要编辑原始设备或者屏蔽某个具体的设备。
要设置磁盘限制(需要 btrfs 或者 ZFS
```
lxc config device set my-container root size 20GB
```
比如:
```
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 179G 542M 178G 1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 20G 542M 20G 3% /
```
要限制速度,你可以:
```
lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB
```
或者限制 IO 频率:
```
lxc config device set my-container root limits.read 20Iops
lxc config device set my-container root limits.write 10Iops
```
最后,如果你是在一个过量使用的繁忙系统上,你或许想要:
```
lxc config set my-container limits.disk.priority 10
```
将那个容器的 I/O 优先级调到最高。
#### 网络 I/O
只要机制可用,网络 I/O 基本等同于块 I/O。
比如:
```
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s
2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]
stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s
2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]
```
这就是如何将一个千兆网的连接速度限制到仅仅 100Mbit/s 的!
和块 I/O 一样,你可以设置一个总体的网络优先级:
```
lxc config set my-container limits.network.priority 5
```
### 获取当前资源使用率
[LXD API][2] 可以导出目前容器资源使用情况的一点信息,你可以得到:
* 内存:当前、峰值、目前内存交换和峰值内存交换
* 磁盘:当前磁盘使用率
* 网络:每个接口传输的字节和包数。
另外如果你使用的是非常新的 LXD在写这篇文章时的 git 版本),你还可以在`lxc info`中得到这些信息:
```
stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
eth0: inet 172.17.0.101
eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
eth0: inet6 fe80::216:3eff:feec:65a8
lo: inet 127.0.0.1
lo: inet6 ::1
lxcbr0: inet 10.0.3.1
lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
zt0: inet 29.17.181.59
zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
zt0: inet6 fe80::79:e7ff:fe0d:5123
Resources:
Processes: 33
Disk usage:
root: 808.07MB
Memory usage:
Memory (current): 106.79MB
Memory (peak): 195.51MB
Swap (current): 124.00kB
Swap (peak): 124.00kB
Network usage:
lxcbr0:
Bytes received: 0 bytes
Bytes sent: 570 bytes
Packets received: 0
Packets sent: 0
zt0:
Bytes received: 1.10MB
Bytes sent: 806 bytes
Packets received: 10957
Packets sent: 10957
eth0:
Bytes received: 99.35MB
Bytes sent: 5.88MB
Packets received: 64481
Packets sent: 64481
lo:
Bytes received: 9.57kB
Bytes sent: 9.57kB
Packets received: 81
Packets sent: 81
Snapshots:
zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
```
### 总结
LXD 团队花费了几个月的时间来迭代我们使用的这些限制的语言。 它是为了在保持强大和功能明确的基础上同时保持简单。
实时地应用这些限制和通过配置文件继承,使其成为一种非常强大的工具,可以在不影响正在运行的服务的情况下实时管理服务器上的负载。
### 更多信息
LXD 的主站在: <https://linuxcontainers.org/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
如果你不想在你的机器上安装LXD你可以[在线尝试下][3]。
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/configuration.md
[2]: https://github.com/lxc/lxd/blob/master/doc/rest-api.md
[3]: https://linuxcontainers.org/lxd/try-it

View File

@ -1,6 +1,3 @@
@poodarchu 翻译中
Building a data science portfolio: Storytelling with data
========

View File

@ -1,149 +0,0 @@
ucasFL translating
PyCharm - The Best Linux Python IDE
=========
![](https://fthmb.tqn.com/AVEbzYN3BPH_8cGYkPflIx58-XE=/768x0/filters:no_upscale()/about/pycharm2-57e2d5ee5f9b586c352c7493.png)
### Introduction
In this guide I will introduce you to the PyCharm integrated development environment which can be used to develop professional applications using the Python programming language.
Python is a great programming language because it is truly cross platform and can be used to develop a single application which will run on Windows, Linux and Mac computers without having to recompile any code.
PyCharm is an editor and debugger developed by [Jetbrains][1] who are the same people who developed Resharper which is a great tool used by Windows developers for refactoring code and to make their lives easier when writing .NET code. Many of the principles of [Resharper][2] have been added to the professional version of [PyCharm][3].
### How To Install PyCharm
I have written a guide showing how to get PyCharm, download it, extract the files and run it.
[Simply click this link][4].
### The Welcome Screen
When you first run PyCharm or when you close a project you will be presented with a screen showing a list of recent projects.
You will also see the following menu options:
* Create New Project
* Open A Project
* Checkout From Version Control
There is also a configure settings option which lets you set up the default Python version and other such settings.
### Creating A New Project
When you choose to create a new project you are provided with a list of possible project types as follows:
* Pure Python
* Django
* Flask
* Google App Engine
* Pyramid
* Web2Py
* Angular CLI
* AngularJS
* Foundation
*   HTML5 Boilerplate
* React Starter Kit
* Twitter Bootstrap
* Web Starter Kit
This isn't a programming tutorial so I won't be listing what all of those project types are. If you want to create a base desktop application which will run on Windows, Linux and Mac then you can choose a Pure Python project and use QT libraries to develop graphical applications which look native to the operating system they are running on regardless as to where they were developed.
As well as choosing the project type you can also enter the name for your project and also choose the version of Python to develop against.
### Open A Project
You can open a project by clicking on the name within the recently opened projects list or you can click the open button and navigate to the folder where the project you wish to open is located.
### Checking Out From Source Control
PyCharm provides the option to check out project code from various online resources including [GitHub][5], [CVS][6], Git, [Mercurial][7] and [Subversion][8].
### The PyCharm IDE
The PyCharm IDE starts with a menu at the top and underneath this you have tabs for each open project.
On the right side of the screen are debugging options for stepping through code.
The left pane has a list of project files and external libraries.
To add a file you right-click on the project name and choose "new". You then get the option to add one of the following file types:
* File
* Directory
* Python Package
* Python File
* Jupyter Notebook
* HTML File
* Stylesheet
* JavaScript
* TypeScript
* CoffeeScript
* Gherkin
* Data Source
When you add a file, such as a python file you can start typing into the editor in the right panel.
The text is all colour coded and has bold text. A vertical line shows the indentation so you can be sure that you are tabbing correctly.
The editor also includes full intellisense which means as you start typing the names of libraries or recognised commands you can complete the commands by pressing tab.
### Debugging The Application
You can debug your application at any point by using the debugging options in the top right corner.
If you are developing a graphical application then you can simply press the green button to run the application. You can also press shift and F10.
To debug the application you can either click the button next to the green arrow or press shift and F9. You can place breakpoints in the code so that the program stops on a given line by clicking in the grey margin on the line you wish to break at.
To make a single step forward you can press F8 which steps over the code. This means it will run the code but it won't step into a function. To step into the function you would press F7. If you are in a function and want to step out to the calling function press shift and F8.
At the bottom of the screen whilst you are debugging you will see various windows such as a list of processes and threads, and variables that you are watching the values for. 
As you are stepping through code you can add a watch on a variable so that you can see when the value changes. 
Another great option is to run the code with coverage checker. The programming world has changed a lot during the years and now it is common for developers to perform test-driven development so that every change they make they can check to make sure they haven't broken another part of the system. 
The coverage checker actually helps you to run the program, perform some tests and then when you have finished it will tell you how much of the code was covered as a percentage during your test run.
There is also a tool for showing the name of a method or class, how many times the items were called, and how long was spent in that particular piece of code.
### Code Refactoring
A really powerful feature of PyCharm is the code refactoring option.
When you start to develop code little marks will appear in the right margin. If you type something which is likely to cause an error or just isn't written well then PyCharm will place a coloured marker.
Clicking on the coloured marker will tell you the issue and will offer a solution.
For example, if you have an import statement which imports a library and then don't use anything from that library not only will the code turn grey the marker will state that the library is unused.
Other errors that will appear are for good coding such as only having one blank line between an import statement and the start of a function. You will also be told when you have created a function that isn't in lowercase.
You don't have to abide by all of the PyCharm rules. Many of them are just good coding guidelines and are nothing to do with whether the code will run or not.
The code menu has other refactoring options. For example, you can perform code cleanup and you can inspect a file or project for issues.
### Summary
PyCharm is a great editor for developing Python code in Linux and there are two versions available. The community version is for the casual developer whereas the professional environment provides all the tools a developer could need for creating professional software.
--------------------------------------------------------------------------------
via: https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
作者:[Gary Newell ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com/gary-newell-2180098
[1]:https://www.jetbrains.com/
[2]:https://www.jetbrains.com/resharper/
[3]:https://www.jetbrains.com/pycharm/specials/pycharm/pycharm.html?&gclid=CjwKEAjw34i_BRDH9fbylbDJw1gSJAAvIFqU238G56Bd2sKU9EljVHs1bKKJ8f3nV--Q9knXaifD8xoCRyjw_wcB&gclsrc=aw.ds.ds&dclid=CNOy3qGQoc8CFUJ62wodEywCDg
[4]:https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
[5]:https://github.com/
[6]:http://www.linuxhowtos.org/System/cvs_tutorial.htm
[7]:https://www.mercurial-scm.org/
[8]:https://subversion.apache.org/

View File

@ -1,3 +1,5 @@
It's translated by GitFuture now.
Getting Started with HTTP/2: Part 2
============================================================
![](https://static.viget.com/_284x284_crop_center-center/ben-t-http-blog-thumb-01_360.png?mtime=20160928234634)

View File

@ -1,4 +1,3 @@
bjwrkj 翻译中..
# Suspend to Idle
### Introduction

View File

@ -1,3 +1,4 @@
Translating by cposture 20161228
# Applying the Linus Torvalds “Good Taste” Coding Requirement
In [a recent interview with Linus Torvalds][1], the creator of Linux, at approximately 14:20 in the interview, he made a quick point about coding with “good taste”. Good taste? The interviewer prodded him for details and Linus came prepared with illustrations.
@ -44,7 +45,7 @@ Again, the purpose of this code was to only initialize the values of the points
To accomplish this I initially looped over every point in the grid and used conditionals to test for the edges. This is what it looked like:
```
for (r = 0; r < GRID_SIZE; ++r) {
for (c = 0; c < GRID_SIZE; ++c) {
```

View File

@ -1,5 +1,3 @@
**************Translating by messon007******************
# Perl and the birth of the dynamic web
>The fascinating story of Perl's role in the dynamic web spans newsgroups and mailing lists, computer science labs, and continents.

View File

@ -1,4 +1,3 @@
翻译中 by zky001
How to check if port is in use on Linux or Unix
============================================================
netstat -bano | findstr /R /C:"[LISTENING]"
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[ VIVEK GITE][a]
译者:[zky001](https://github.com/zky001)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,3 +1,5 @@
Vic020
Build, Deploy and Manage Custom Apps with IBM Bluemix
============================================================

View File

@ -1,76 +0,0 @@
翻译中 by ypingcn
Moving with SQL Server to Linux? Move from SQL Server to MySQL as well!
============================================================
### On this page
1. [To have Control Over the Platform][1]
2. [Joining the Crowd][2]
3. [Microsoft isnt Open Sourcing SQL Servers Code][3]
4. [Saving on License Costs][4]
5. [Sometimes, the Specific Hardware being Used][5]
6. [Support][6]
Over the recent years, there has been a large number of individuals as well as organizations who are ditching the Windows platform for Linux platform, and this number will continue to grow as more developments in Linux are experienced. Linux has for long been the leader in Web servers as most of the web servers run on Linux, and this could be one of the reasons why the high migration is being experienced.
The reasons for this migration are numerous, ranging from greater platform stability, reliability, costs, and ownership to security, among others. As more entities migrate to the Linux platform, so does the migration from the MS SQL Server database management system to MySQL, because of the interoperability and platform independence of MySQL, as well as its low acquisition costs.
As much as the migration is to be done, the need for it should be necessitated by the business and not just for the mere pleasure of it. As such, a comprehensive feasibility and cost-benefit analysis should be carried out to know the impact that the migration will have on your business, both positive and negative.
The migration may be based on the following key factors:
### To have Control Over the Platform
Unlike in windows where you are not in full control of the releases and fixes, Linux does give you that flexibility to get fixes as and when you require them. This is preferred by developers and security personnel in that they are able to immediately apply a fix when a security threat is identified, unlike in Windows where you can only hope they release the fixes soon.
### Joining the Crowd
The Linux platform far outnumbers Windows in the number of servers that are running on it, nearly a quarter of all servers in the world, and the trend is not about to change anytime soon. Many organizations, therefore, do migrate so as to be fully on Linux rather than running two platforms concurrently, which adds up to their operating costs.
### Microsoft isnt Open Sourcing SQL Servers Code
In as much as Microsoft have announced that their next release of MSSQL server (named Denali) will be a Linux version, that will still not open their source code, meaning that their licenses will still apply, but the release will be run on Linux. This still locks out the many users who would happily take to the release if it was open source.
This still does not give an alternative to those users who are using Oracle, which is not open source; neither does it to those [using MySQL][7], which is fully open source.
### Saving on License Costs
The cost implication of licenses is a huge letdown for many users. Running a MSSQL server on Windows platform has too many licenses involved. You need licenses for:
* The windows operating system
* The MSSQL server
* Specific database tools e.g. SQL analytics tools, etc.
Unlike in Windows platform, Linux eliminates the issues of high licenses costs, and thus more appealing to users. MySQL database is also a free source even though it offers the flexibility just as MSSQL server, thus it is more preferred. Most of the database utility tools for MySQL are mostly free, unlike for MSSQL.
### Sometimes, the Specific Hardware being Used
Because Linux is developed and always being enhanced by various developers, it is independent of the hardware it operates on and thus widely used on different hardware platforms. However, as much as Microsoft has tried to ensure that Windows and MSSQL server are hardware independent; there are still some limitations in platform independence.
### Support
With Linux and MySQL, as well as other open source software, it is easier to get support for your specific needs, because many developers are involved in their development. Some of these developers may be in your locality and thus easy to reach, and online forums where you can post and discuss the issues you face are also a great help.
For commercial software, you get support based on your software agreement and the vendor's schedule, which at times may not deliver a solution within your timelines.
In every case, migrating to Linux gives you the best option and outcome you can have, by joining a stable and reliable platform that is known to be more robust than Windows. It is worth a shot.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/
作者:[Tony Branson ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#to-have-control-over-the-platform
[2]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#joining-the-crowd
[3]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#microsoft-isnrsquot-open-sourcing-sql-serverrsquos-code
[4]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#saving-on-license-costs
[5]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#sometimes-the-specific-hardware-being-used
[6]:https://www.howtoforge.com/tutorial/moving-with-sql-server-to-linux-move-from-sql-server-to-mysql-as-well/#support
[7]:http://www.scalearc.com/how-it-works/products/scalearc-for-mysql

View File

@ -1,124 +0,0 @@
translating by dongdongmian
translating by WangYihang
How to Build an Email Server on Ubuntu Linux
============================================================
![mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-stack.jpg?itok=SVMfa8WZ "mail server")
In this series, we will show how to build a reliable configurable mail server with Postfix, Dovecot, and OpenSSL on Ubuntu Linux.[Creative Commons Zero][2]Pixabay
In this fast-changing world of containers and microservices it's comforting that some things don't change, such as setting up a Linux email server. It's still a dance of many steps and knitting together several different servers, and once you put it all together it just sits there, all nice and stable, instead of winking in and out of existence like microservices. In this series, we'll put together a nice reliable configurable mail server with Postfix, Dovecot, and OpenSSL on Ubuntu Linux.
Postfix is a reliable old standby that is easier to configure and use than Sendmail, the original Unix MTA (does anyone still use Sendmail?). Exim is Debian's default MTA; it is more lightweight than Postfix and super-configurable, so we'll look at Exim in a future tutorial.
Dovecot and Courier are two popular and excellent IMAP/POP3 servers. Dovecot is more lightweight and easier to configure.
You must secure your email sessions, so we'll use OpenSSL. OpenSSL also supplies some nice tools for testing your mail server.
For simplicity, we'll set up a LAN mail server in this series. You should have LAN name services already enabled and working; see [Dnsmasq For Easy LAN Name Services][5] for some pointers. Then later, you can adapt a LAN server to an Internet-accessible server by registering your domain name and configuring your firewall accordingly. These are documented everywhere, so please do your homework and be careful.
### Terminology
Let's take a quick look at some terminology, because it is nice when we know what the heck we're talking about.
* **MTA**: Mail transfer agent, a simple mail transfer protocol (SMTP) server such as Postfix, Exim, and Sendmail. SMTP servers talk to each other
* **MUA**: Mail user agent, your local mail client such as Evolution, KMail, Claws Mail, or Thunderbird.
* **POP3**: Post-office protocol, the simplest protocol for moving messages from an SMTP server to your mail client. A POP server is simple and lightweight; you can serve thousands of users from a single box.
* **IMAP**: Interactive message access protocol. Most businesses use IMAP because messages remain on the server, so users don't have to worry about losing them. IMAP servers require a lot of memory and storage.
* **TLS**: Transport layer security, an evolution of SSL (secure sockets layer), which provides encrypted transport for SASL-authenticated logins.
* **SASL**: Simple authentication and security layer, for authenticating users. SASL does the authenticating, then TLS provides the encrypted transport of the authentication data.
* **StartTLS**: Also known as opportunistic TLS. StartTLS upgrades your plain text authentication to encrypted authentication if both servers support SSL/TLS. If one of them doesn't then it remains in cleartext. StartTLS uses the standard unencrypted ports: 25 (SMTP), 110 (POP3), and 143 (IMAP) instead of the standard encrypted ports: 465 (SMTP), 995 (POP3), and 993 (IMAP).
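OpenSSL was mentioned above as a testing tool, and its `s_client` subcommand is the one you will reach for most often once you enable TLS later in the series. This is only a sketch with a placeholder hostname; it attempts a STARTTLS handshake on port 25 and, if the server supports it, prints the certificate chain and the negotiated cipher:
```
$ openssl s_client -starttls smtp -connect myserver:25
```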
### Yes, We Still Have Sendmail
Most Linuxes still have `/usr/sbin/sendmail`. This is a holdover from the very olden days when Sendmail was the only MTA. On most distros `/usr/sbin/sendmail` is symlinked to your installed MTA. However your distro handles it, if it's there, it's on purpose.
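A quick way to see what `/usr/sbin/sendmail` points at on your own system (output varies by distro and installed MTA; this is just an illustrative check):
```
$ ls -l /usr/sbin/sendmail
$ readlink -f /usr/sbin/sendmail
```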
### Install Postfix
`apt-get install postfix` takes care of the basic Postfix installation (Figure 1). This opens a wizard that asks what kind of server you want. Select "Internet Site", even for a LAN server. It will ask for your fully qualified server domain name (e.g., myserver.mydomain.net). On a LAN server, assuming your name services are correctly configured (I keep mentioning this because people keep getting it wrong), you can use just the hostname (e.g., myserver).
![Postfix](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/postfix-1.png?itok=NJLdtICb "Postfix")
Figure 1: Postfix configuration.[Creative Commons Zero][1]Carla Schroder
Ubuntu will create a configuration file and launch three Postfix daemons: `master`, `qmgr`, and `pickup`. There is no daemon process named `postfix` itself.
```
$ ps ax
6494 ? Ss 0:00 /usr/lib/postfix/master
6497 ? S 0:00 pickup -l -t unix -u -c
6498 ? S 0:00 qmgr -l -t unix -u
```
Use Postfix's built-in syntax checker to test your configuration files. If it finds no syntax errors, it reports nothing:
```
$ sudo postfix check
[sudo] password for carla:
```
Use `netstat` to verify that Postfix is listening on port 25:
```
$ netstat -ant
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp6 0 0 :::25 :::* LISTEN
```
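If your system does not ship `netstat` (newer Ubuntu releases put it in the optional net-tools package), `ss` from iproute2 gives an equivalent view; grepping for port 25 here is just a convenience:
```
$ sudo ss -tln | grep ':25'
```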
Now let's fire up trusty old `telnet` to test:
```
$ telnet myserver 25
Trying 127.0.1.1...
Connected to myserver.
Escape character is '^]'.
220 myserver ESMTP Postfix (Ubuntu)
**EHLO myserver**
250-myserver
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
**^]**
telnet>
```
Hurrah! We have verified the server name, and that Postfix is listening and responding to requests on port 25, the SMTP port.
Type `quit` to exit `telnet`. In the example, the commands that you type to interact with your server are in bold. The output consists of ESMTP (extended SMTP) 250 status codes.
* PIPELINING allows multiple commands to flow without having to respond to each one.
* SIZE tells the maximum message size that the server accepts.
* VRFY can tell a client if a particular mailbox exists. This is often ignored as it could be a security hole.
* ETRN is for sites with irregular Internet connectivity. Such a site can use ETRN to request mail delivery from an upstream server, and Postfix can be configured to defer mail delivery to ETRN clients.
* STARTTLS (see above).
* ENHANCEDSTATUSCODES, the server supports enhanced status and error codes.
* 8BITMIME, support for 8-bit MIME, which means message bodies can use a full 8-bit character set rather than being limited to the original 7-bit ASCII.
* DSN, delivery status notification, informs you of delivery errors.
The main Postfix configuration file is `/etc/postfix/main.cf`. This is created by the installer. See [Postfix Configuration Parameters][6] for a complete listing of `main.cf` parameters. `/etc/postfix/postfix-files` describes the complete Postfix installation.
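Besides reading the file directly, the `postconf` utility can query and change individual parameters; a small sketch (the parameters shown are only examples):
```
$ postconf myhostname                                 # print a single parameter
$ sudo postconf -e 'message_size_limit = 20480000'    # set a parameter; main.cf is updated
```
`postconf -n`, used later in this series, prints only the parameters that differ from the built-in defaults.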
Come back next week for installing and testing Dovecot, and sending ourselves some messages.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/how-build-email-server-ubuntu-linux
作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/postfix-1png
[4]:https://www.linux.com/files/images/mail-stackjpg
[5]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services
[6]:http://www.postfix.org/postconf.5.html

View File

@ -0,0 +1,254 @@
Building an Email Server on Ubuntu Linux, Part 2
============================================================
### [dovecot-email.jpg][4]
![Dovecot email](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dovecot-email.jpg?itok=tY4veggw "Dovecot email")
Part 2 in this tutorial shows how to use Dovecot to move messages off your Postfix server and into your users' email inboxes.[Creative Commons Zero][2]Pixabay
In [part 1][5], we installed and tested the Postfix SMTP server. Postfix, or any SMTP server, isn't a complete mail server because all it does is move messages between SMTP servers. We need Dovecot to move messages off your Postfix server and into your users' email inboxes.
Dovecot supports the two standard mail protocols, IMAP (Internet Message Access Protocol) and POP3 (Post Office Protocol). An IMAP server retains all messages on the server. Your users have the option to download messages to their computers or access them only on the server. IMAP is convenient for users who have multiple machines. It's more work for you because you have to ensure that your server is always available, and IMAP servers require a lot of storage and memory.
POP3 is an older protocol. A POP3 server can serve many more users than an IMAP server because messages are downloaded to your users' computers. Most mail clients have the option to leave messages on the server for a certain number of days, so POP3 can behave somewhat like IMAP. But it's not IMAP, and when you do this messages are often downloaded multiple times or deleted unexpectedly.
### Install Dovecot
Fire up your trusty Ubuntu system and install Dovecot:
```
$ sudo apt-get install dovecot-imapd dovecot-pop3d
```
It installs with a working configuration and automatically starts after installation, which you can confirm with `ps ax | grep dovecot`:
```
$ ps ax | grep dovecot
15988 ? Ss 0:00 /usr/sbin/dovecot
15990 ? S 0:00 dovecot/anvil
15991 ? S 0:00 dovecot/log
```
Open your main Postfix configuration file, `/etc/postfix/main.cf`, and make sure it is configured for maildirs and not mbox mail stores; mbox is a single giant file for each user, while maildir gives each message its own file. Lots of little files are more stable and easier to manage than giant bloaty files. Add these two lines; the second line tells Postfix you want maildir format, and to create a `.Mail` directory for every user in their home directories. You can name this directory anything you want; it doesn't have to be `.Mail`:
```
mail_spool_directory = /var/mail
home_mailbox = .Mail/
```
Now tweak your Dovecot configuration. First rename the original `dovecot.conf` file to get it out of the way, because it calls a host of `conf.d` files and it is better to keep things simple while you're learning:
```
$ sudo mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot-oldconf
```
Now create a clean new `/etc/dovecot/dovecot.conf` with these contents:
```
disable_plaintext_auth = no
mail_location = maildir:~/.Mail
namespace inbox {
inbox = yes
mailbox Drafts {
special_use = \Drafts
}
mailbox Sent {
special_use = \Sent
}
mailbox Trash {
special_use = \Trash
}
}
passdb {
driver = pam
}
protocols = " imap pop3"
ssl = no
userdb {
driver = passwd
}
```
Note that `mail_location = maildir` must match the `home_mailbox` parameter in `main.cf`. Save your changes and reload both Postfix and Dovecot's configurations:
```
$ sudo postfix reload
$ sudo dovecot reload
```
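To double-check that the two settings really do agree, both daemons can print a single setting by name (doveconf accepts a setting name as a filter); a quick sketch:
```
$ postconf home_mailbox
home_mailbox = .Mail/
$ doveconf mail_location
mail_location = maildir:~/.Mail
```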
### Fast Way to Dump Configurations
Use these commands to quickly review your Postfix and Dovecot configurations:
```
$ postconf -n
$ doveconf -n
```
### Test Dovecot
Now let's put telnet to work again, and send ourselves a test message. The lines in bold are the commands that you type. `studio` is my server's hostname, so of course you must use your own:
```
$ telnet studio 25
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
220 studio.router ESMTP Postfix (Ubuntu)
EHLO studio
250-studio.router
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250 SMTPUTF8
mail from: tester@test.net
250 2.1.0 Ok
rcpt to: carla@studio
250 2.1.5 Ok
data
354 End data with <CR><LF>.<CR><LF>
Date: November 25, 2016
From: tester
Message-ID: first-test
Subject: mail server test
Hi carla,
Are you reading this? Let me know if you didn't get this.
.
250 2.0.0 Ok: queued as 0C261A1F0F
quit
221 2.0.0 Bye
Connection closed by foreign host.
```
Now query Dovecot to fetch your new message. Log in using your Linux username and password:
```
$ telnet studio 110
Trying 127.0.0.1...
Connected to studio.
Escape character is '^]'.
+OK Dovecot ready.
user carla
+OK
pass password
+OK Logged in.
stat
+OK 2 809
list
+OK 2 messages:
1 383
2 426
.
retr 2
+OK 426 octets
Return-Path: <tester@test.net>
X-Original-To: carla@studio
Delivered-To: carla@studio
Received: from studio (localhost [127.0.0.1])
by studio.router (Postfix) with ESMTP id 0C261A1F0F
for <carla@studio>; Wed, 30 Nov 2016 17:18:57 -0800 (PST)
Date: November 25, 2016
From: tester@studio.router
Message-ID: first-test
Subject: mail server test
Hi carla,
Are you reading this? Let me know if you didn't get this.
.
quit
+OK Logging out.
Connection closed by foreign host.
```
Take a moment to compare the message entered in the first example, and the message received in the second example. It is easy to spoof the return address and date, but Postfix is not fooled. Most mail clients default to displaying a minimal set of headers, but you need to read the full headers to see the true backtrace.
You can also read your messages by looking in your `~/.Mail/cur` directory. They are plain text. Mine has two test messages:
```
$ ls .Mail/cur/
1480540325.V806I28e0229M351743.studio:2,S
1480555224.V806I28e000eM41463.studio:2,S
```
### Testing IMAP
Our Dovecot configuration enables both POP3 and IMAP, so let's use telnet to test IMAP.
```
$ telnet studio imap2
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS
ID ENABLE IDLE AUTH=PLAIN] Dovecot ready.
A1 LOGIN carla password
A1 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS
ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS
THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT
CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE
QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS
BINARY MOVE SPECIAL-USE] Logged in
A2 LIST "" "*"
* LIST (\HasNoChildren) "." INBOX
A2 OK List completed (0.000 + 0.000 secs).
A3 EXAMINE INBOX
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
* OK [PERMANENTFLAGS ()] Read-only mailbox.
* 2 EXISTS
* 0 RECENT
* OK [UIDVALIDITY 1480539462] UIDs valid
* OK [UIDNEXT 3] Predicted next UID
* OK [HIGHESTMODSEQ 1] Highest
A3 OK [READ-ONLY] Examine completed (0.000 + 0.000 secs).
A4 logout
* BYE Logging out
A4 OK Logout completed.
Connection closed by foreign host
```
### Thunderbird Mail Client
This screenshot in Figure 1 shows what my messages look like in a graphical mail client on another host on my LAN.
### [thunderbird-mail.png][3]
![thunderbird mail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/thunderbird-mail.png?itok=IkWK5Ti_ "thunderbird mail")
Figure 1: Thunderbird mail.[Used with permission][1]
At this point, you have a working IMAP and POP3 mail server, and you know how to test your server. Your users will choose which protocol they want to use when they set up their mail clients. If you want to support only one mail protocol, then name just the one in your Dovecot configuration.
However, you are far from finished. This is a very simple, wide-open setup with no encryption. It also works only for users on the same system as your mail server. This is not scalable and has some security risks, such as no protection for passwords. Come back [next week ][6]to learn how to create mail users that are separate from system users, and how to add encryption.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-2
作者:[ CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/thunderbird-mailpng
[4]:https://www.linux.com/files/images/dovecot-emailjpg
[5]:https://www.linux.com/learn/how-build-email-server-ubuntu-linux
[6]:https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-3

View File

@ -0,0 +1,220 @@
Building an Email Server on Ubuntu Linux, Part 3
============================================================
### [mail-server.jpg][2]
![Mail server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mail-server.jpg?itok=Ox1SCDsV "Mail server")
In the final part of this tutorial series, we go into detail on how to set up virtual users and mailboxes in Dovecot and Postfix.[Creative Commons Zero][1]pixabay
Welcome back, me hearty Linux sysadmins! In [part 1][3] and [part 2][4] of this series, we learned how to put Postfix and Dovecot together to make a nice IMAP and POP3 mail server. Now we will learn to make virtual users so that we can manage all of our users in Dovecot.
### Sorry, No SSL. Yet.
I know I promised to show you how to set up a proper SSL-protected server. Unfortunately, I underestimated how large that topic is. So, I will realio trulio write a comprehensive how-to by next month.
For today, in this final part of this series, we'll go into detail on how to set up virtual users and mailboxes in Dovecot and Postfix. It's a bit weird to wrap your mind around, so the following examples are as simple as I can make them. We'll use plain flat files and plain-text authentication. You have the options of using database back ends and nice strong forms of encrypted authentication; see the links at the end for more information on these.
### Virtual Users
You want virtual users on your email server and not Linux system users. Using Linux system users does not scale, and it exposes their logins, and your Linux server, to unnecessary risk. Setting up virtual users requires editing configuration files in both Postfix and Dovecot. We'll start with Postfix. First, we'll start with a clean, simplified `/etc/postfix/main.cf`. Move your original `main.cf` out of the way and create a new clean one with these contents:
```
compatibility_level=2
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu/GNU)
biff = no
append_dot_mydomain = no
myhostname = localhost
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = $myhostname
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.0.0/24
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
virtual_mailbox_domains = /etc/postfix/vhosts.txt
virtual_mailbox_base = /home/vmail
virtual_mailbox_maps = hash:/etc/postfix/vmaps.txt
virtual_minimum_uid = 1000
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
virtual_transport = lmtp:unix:private/dovecot-lmtp
```
You may copy this exactly, except for the `192.168.0.0/24` parameter for `mynetworks`, as this should reflect your own local subnet.
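If you are unsure of your local subnet, a quick way to list your machine's IPv4 addresses and prefixes is:
```
$ ip -4 -o addr show
```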
Next, create the user and group `vmail`, which will own your virtual mailboxes. The virtual mailboxes are stored in `vmail's` home directory.
```
$ sudo groupadd -g 5000 vmail
$ sudo useradd -m -u 5000 -g 5000 -s /bin/bash vmail
```
Then reload the Postfix configurations:
```
$ sudo postfix reload
[sudo] password for carla:
postfix/postfix-script: refreshing the Postfix mail system
```
### Dovecot Virtual Users
We'll use Dovecot's `lmtp` protocol to connect it to Postfix. You probably need to install it:
```
$ sudo apt-get install dovecot-lmtpd
```
The last line in our example `main.cf` references `lmtp`. Copy this example `/etc/dovecot/dovecot.conf`, replacing your existing file. Again, we are using just this single file, rather than calling the files in `/etc/dovecot/conf.d`.
```
protocols = imap pop3 lmtp
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot-info.log
ssl = no
disable_plaintext_auth = no
mail_location = maildir:~/.Mail
pop3_uidl_format = %g
auth_verbose = yes
auth_mechanisms = plain
passdb {
driver = passwd-file
args = /etc/dovecot/passwd
}
userdb {
driver = static
args = uid=vmail gid=vmail home=/home/vmail/studio/%u
}
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
}
}
protocol lmtp {
postmaster_address = postmaster@studio
}
service lmtp {
user = vmail
}
```
At last, you can create the file that holds your users and passwords, `/etc/dovecot/passwd`. For simple plain-text authentication we need only our users' full email addresses and passwords:
```
alrac@studio:{PLAIN}password
layla@studio:{PLAIN}password
fred@studio:{PLAIN}password
molly@studio:{PLAIN}password
benny@studio:{PLAIN}password
```
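{PLAIN} keeps the example easy to follow, but you don't have to store clear-text passwords: Dovecot's `doveadm pw` helper can generate hashed entries instead. A minimal sketch (the scheme is up to you; SHA512-CRYPT is just one common choice):
```
$ doveadm pw -s SHA512-CRYPT
Enter new password:
Retype new password:
{SHA512-CRYPT}$6$...
```
Paste the whole `{SHA512-CRYPT}...` string into `/etc/dovecot/passwd` in place of `{PLAIN}password` for that user.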
The Dovecot virtual users are independent of the Postfix virtual users, so you will manage your users in Dovecot. Save all of your changes and restart Postfix and Dovecot:
```
$ sudo service postfix restart
$ sudo service dovecot restart
```
Now let's use good old telnet to see if Dovecot is set up correctly.
```
$ telnet studio 110
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
+OK Dovecot ready.
user molly@studio
+OK
pass password
+OK Logged in.
quit
+OK Logging out.
Connection closed by foreign host.
```
So far so good! Now let's send some test messages to our users with the `mail` command. Make sure to use the user's whole email address and not just the username.
```
$ mail benny@studio
Subject: hello and welcome!
Please enjoy your new mail account!
.
```
The period on the last line sends your message. Let's see if it landed in the correct mailbox.
```
$ sudo ls -al /home/vmail/studio/benny@studio/.Mail/new
total 16
drwx------ 2 vmail vmail 4096 Dec 14 12:39 .
drwx------ 5 vmail vmail 4096 Dec 14 12:39 ..
-rw------- 1 vmail vmail 525 Dec 14 12:39 1481747995.M696591P5790.studio,S=525,W=540
```
And there it is. It is a plain text file that we can read:
```
$ less 1481747995.M696591P5790.studio,S=525,W=540
Return-Path: <carla@localhost>
Delivered-To: benny@studio
Received: from localhost
by studio (Dovecot) with LMTP id V01ZKRuuUVieFgAABiesew
for <benny@studio>; Wed, 14 Dec 2016 12:39:55 -0800
Received: by localhost (Postfix, from userid 1000)
id 9FD9CA1F58; Wed, 14 Dec 2016 12:39:55 -0800 (PST)
Date: Wed, 14 Dec 2016 12:39:55 -0800
To: benny@studio
Subject: hello and welcome!
User-Agent: s-nail v14.8.6
Message-Id: <20161214203955.9FD9CA1F58@localhost>
From: carla@localhost (carla)
Please enjoy your new mail account!
```
You could also use telnet for testing, as in the previous segments of this series, and set up accounts in your favorite mail client, such as Thunderbird, Claws-Mail, or KMail.
### Troubleshooting
When things don't work, check your logfiles (see the configuration examples), and run `journalctl -xe`. This should give you all the information you need to spot typos, uninstalled packages, and nice search terms for Google.
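Our example `dovecot.conf` above already points Dovecot's logs at `/var/log/dovecot.log` and `/var/log/dovecot-info.log`, and on Ubuntu Postfix normally logs to `/var/log/mail.log`, so tailing all three while you run the telnet tests is a handy habit (paths assumed from the configuration shown earlier):
```
$ sudo tail -f /var/log/mail.log /var/log/dovecot.log /var/log/dovecot-info.log
```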
### What Next?
Assuming your LAN name services are correctly configured, you now have a nice usable LAN mail server. Obviously, sending messages in plain text is not optimal, and an absolute no-no for Internet mail. See [Dovecot SSL configuration][5] and [Postfix TLS Support][6]. [VirtualUserFlatFilesPostfix][7] covers TLS and database back ends. And watch for my upcoming SSL how-to. Really.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-3
作者:[ CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/mail-serverjpg
[3]:https://www.linux.com/learn/how-build-email-server-ubuntu-linux
[4]:https://www.linux.com/learn/sysadmin/building-email-server-ubuntu-linux-part-2
[5]:http://wiki.dovecot.org/SSL/DovecotConfiguration
[6]:http://www.postfix.org/TLS_README.html
[7]:http://www.postfix.org/TLS_README.html

View File

@ -1,15 +1,15 @@
#rusking translating
How to Manage Samba4 AD Infrastructure from Linux Command Line Part 2
============================================================
This tutorial will cover [some basic daily commands][2] you need to use in order to manage Samba4 AD Domain Controller infrastructure, such as adding, removing, disabling or listing users and groups.
This tutorial will cover [some basic daily commands][4] you need to use in order to manage Samba4 AD Domain Controller infrastructure, such as adding, removing, disabling or listing users and groups.
Well also take a look on how to manage domain security policy and how to bind AD users to local PAM authentication in order for AD users to be able to perform local logins on Linux Domain Controller.
#### Requirements
1. [Create an AD Infrastructure with Samba4 on Ubuntu 16.04 Part 1][1]
2. [Manage Samba4 Active Directory Infrastructure from Windows10 via RSAT Part 3][2]
3. [Manage Samba4 AD Domain Controller DNS and Group Policy from Windows Part 4][3]
### Step 1: Manage Samba AD DC from Command Line
@ -24,7 +24,7 @@ To review the entire functionality of samba-tool just type the command with root
```
[
![samba-tool - Manage Samba Administration Tool](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png)
][3]
][5]
samba-tool Manage Samba Administration Tool
@ -45,7 +45,7 @@ To add a user with several important fields required by AD, use the following sy
```
[
![Create User on Samba AD](http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png)
][4]
][6]
Create User on Samba AD
@ -56,7 +56,7 @@ Create User on Samba AD
```
[
![List Samba AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png)
][5]
][7]
List Samba AD Users
@ -106,7 +106,7 @@ List Samba AD Users
```
[
![List Samba Domain Members of Group](http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png)
][6]
][8]
List Samba Domain Members of Group
@ -126,7 +126,7 @@ To review your samba domain password settings use the below command:
```
[
![Check Samba Domain Password](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png)
][7]
][9]
Check Samba Domain Password
@ -138,7 +138,7 @@ Check Samba Domain Password
```
[
![Manage Samba Domain Password Settings](http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png)
][8]
][10]
Manage Samba Domain Password Settings
@ -164,7 +164,7 @@ winbind enum groups = yes
```
[
![Samba Authentication Using Active Directory User Accounts](http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png)
][9]
][11]
Samba Authentication Using Active Directory User Accounts
@ -176,7 +176,7 @@ $ sudo systemctl restart samba-ad-dc.service
```
[
![Check Samba Configuration for Errors](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png)
][10]
][12]
Check Samba Configuration for Errors
@ -191,13 +191,13 @@ $ sudo pam-auth-update
```
[
![Configure PAM for Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png)
][11]
][13]
Configure PAM for Samba4 AD
[
![Enable PAM Authentication Module for Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png)
][12]
][14]
Enable PAM Authentication Module for Samba4 AD Users
@ -208,7 +208,7 @@ $ sudo vi /etc/nsswitch.conf
```
[
![Add Windbind Service Switch for Samba](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png)
][13]
][15]
Add Windbind Service Switch for Samba
@ -221,7 +221,7 @@ password [success=1 default=ignore] pam_winbind.so try_first_pass
```
[
![Allow Samba AD Users to Change Passwords](http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png)
][14]
][16]
Allow Samba AD Users to Change Passwords
@ -251,13 +251,13 @@ $ wbinfo -i your_domain_user
```
[
![Check Samba4 AD Information ](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png)
][15]
][17]
Check Samba4 AD Information
[
![Check Samba4 AD User Info](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png)
][16]
][18]
Check Samba4 AD User Info
@ -271,7 +271,7 @@ Pipe getent command through a grep filter in order to narrow the results reg
```
[
![Get Samba4 AD Details](http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png)
][17]
][19]
Get Samba4 AD Details
@ -290,7 +290,7 @@ $ exit
```
[
![Check Samba4 AD User Authentication on Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png)
][18]
][20]
Check Samba4 AD User Authentication on Linux
@ -302,7 +302,7 @@ $ passwd
```
[
![Change Samba4 AD User Password](http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png)
][19]
][21]
Change Samba4 AD User Password
@ -324,7 +324,7 @@ $ sudo apt-get update
```
[
![Grant sudo Permission to Samba4 AD User](http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png)
][20]
][22]
Grant sudo Permission to Samba4 AD User
@ -340,7 +340,7 @@ Sudoers file doesnt handles very well the use of ASCII quotation marks, so
[
![Give Sudo Access to All Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png)
][21]
][23]
Give Sudo Access to All Samba4 AD Users
@ -348,11 +348,16 @@ Thats all for now! Managing Samba4 AD infrastructure can be also achieved w
To administer Samba4 AD DC through RSAT utilities, its absolutely necessary to join the Windows system into Samba4 Active Directory. This will be the subject of our next tutorial, till then stay tuned to TecMint.
------
作者简介I'am a computer addicted guy, a fan of open source and linux based system software, have about 4 years experience with Linux distributions desktop, servers and bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line
via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
作者:[Matei Cezar ][a]
作者:[Matei Cezar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
@ -360,23 +365,30 @@ via: http://www.tecmint.com/manage-samba4-active-directory-linux-command-line
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png
[2]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[3]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/
[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Administration-Tool.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/11/Create-User-on-Samba-AD.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-AD-Users.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/List-Samba-Domain-Members-of-Group.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Domain-Password.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Manage-Samba-Domain-Password-Settings.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Samba-Authentication-Using-Active-Directory-Accounts.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba-Configuration-for-Errors.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/11/PAM-Configuration-for-Samba4-AD.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Enable-PAM-Authentication-Module-for-Samba4-AD.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Windbind-Service-Switch-for-Samba.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Allow-Samba-AD-Users-to-Change-Password.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Information-of-Samba4-AD.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Info.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/11/Get-Samba4-AD-Details.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Check-Samba4-AD-User-Authentication-on-Linux.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Change-Samba4-AD-User-Password.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/11/Grant-sudo-Permission-to-Samba4-AD-User.png
[23]:http://www.tecmint.com/wp-content/uploads/2016/11/Give-Sudo-Access-to-All-Samba4-AD-Users.png
[24]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[25]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[26]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[27]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#
[28]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/#comments

View File

@ -0,0 +1,129 @@
translating---geekpi
# LXD 2.0: LXD and OpenStack [11/12]
This is the eleventh blog post in [this series about LXD 2.0][1].
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
Introduction
============================================================
First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn't able to get networking going properly.
I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!
So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).
# Requirements
This post assumes you've got a working LXD setup, providing containers with network access, and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.
Remember, we're running a full OpenStack here, this thing isn't exactly light!
# Setting up the container
OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our life easier, we'll use a privileged container.
We'll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).
Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.
```
lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem
```
There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:
```
lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe
```
Then we need to add a couple of PPAs and install conjure-up, the deployment tool we'll use to get OpenStack going.
```
lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y
```
And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:
* Use the “dir” storage backend (“zfs” doesn't work in a nested container)
* Do NOT configure IPv6 networking (conjure-up/juju don't play well with it)
```
lxc exec openstack -- lxd init
```
And that's it for the container configuration itself, now we can deploy OpenStack!
# Deploying OpenStack with conjure-up
As mentioned earlier, we'll be using conjure-up to deploy OpenStack.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.
Start it with:
```
lxc exec openstack -- sudo -u ubuntu -i conjure-up
```
* Select “OpenStack with NovaLXD”
* Then select “localhost” as the deployment target (uses LXD)
* And hit “Deploy all remaining applications”
This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you're running this on. You'll see all services getting a container allocated, then getting deployed and finally interconnected.
![Conjure-Up deploying OpenStack](https://www.stgraber.org/wp-content/uploads/2016/10/conjure-up.png)
Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.
# Access the dashboard and spawn a container
The dashboard runs inside a container, so you can't just hit it from your web browser.
The easiest way around this is to setup a NAT rule with:
```
lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>
```
Where “<ip>” is the dashboard IP address conjure-up gave you at the end of the installation.
You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon
This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you'll be greeted by the OpenStack dashboard!
![oslxd-dashboard](https://www.stgraber.org/wp-content/uploads/2016/10/oslxd-dashboard.png)
You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.
Once it's running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.
# Conclusion
OpenStack is a pretty complex piece of software, and it's not something you really want to run at home or on a single server. But it's certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.
Conjure-Up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.
It's also one of the very few cases where multiple levels of container nesting actually make sense!
--------------------------------------------------------------------------
作者简介Im Stéphane Graber. Im probably mostly known as the LXC and LXD project leader, currently working as a technical lead for LXD at Canonical Ltd. from my home in Montreal, Quebec, Canada.
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/10/26/lxd-2-0-lxd-and-openstack-1112/
作者:[Stéphane Graber ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.stgraber.org/author/stgraber/
[1]:https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

View File

@ -1,405 +0,0 @@
Part 4 - LXD 2.0: Resource control
======================================
This is the fourth blog post [in this series about LXD 2.0][0].
As there are a lot of commands involved with managing LXD containers, this post is rather long. If you'd instead prefer a quick step-by-step tour of those same commands, you can [try our online demo instead][1]!
![](https://linuxcontainers.org/static/img/containers.png)
### Available resource limits
LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.
As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.
All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.
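As a concrete sketch of that example (the syntax is covered in the “Applying some limits” section below):
```
lxc profile set default limits.memory 256MB
```
Every container using the default profile then inherits the 256MB cap, and changing the profile value later updates all of them live.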
We don't support resource limit pooling, where a limit would be shared by a group of containers; there is simply no good way to implement something like that with the existing kernel APIs.
#### Disk
This is perhaps the most requested and obvious one: simply set a size limit on the container's filesystem and have it enforced against the container.
And that's exactly what LXD lets you do!
Unfortunately this is far more complicated than it sounds. Linux doesn't have path-based quotas; instead, most filesystems only have user and group quotas, which are of little use to containers.
This means that right now LXD only supports disk limits if you're using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too, but this depends on the filesystem being used with it and gets tricky when combined with live updates, as not all filesystems allow online growth and pretty much none of them allow online shrinking.
#### CPU
When it comes to CPU limits, we support 4 different things:
* Just give me X CPUs
In this mode, you let LXD pick a bunch of cores for you and then load-balance things as more containers and CPUs go online/offline.
The container only sees that number of CPUs.
* Give me a specific set of CPUs (say, cores 1, 3 and 5)
Similar to the first mode except that no load-balancing is happening; you're stuck with those cores no matter how busy they may be.
* Give me 20% of whatever you have
In this mode, you get to see all the CPUs but the scheduler will restrict you to 20% of the CPU time, but only when under load! So if the system isn't busy, your container can have as much fun as it wants. When containers next to it start using the CPU, then it gets capped.
* Out of every measured 200ms, give me 50ms (and no more than that)
This mode is similar to the previous one in that you get to see all the CPUs but this time, you can only use as much CPU time as you set in the limit, no matter how idle the system may be. On a system without over-commit this lets you slice your CPU very neatly and guarantees constant performance to those containers.
It's also possible to combine one of the first two with one of the last two, that is, request a set of CPUs and then further restrict how much CPU time you get on those.
On top of that, we also have a generic priority knob which is used to tell the scheduler who wins when you're under load and two containers are fighting for the same resource.
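As a quick preview of how those four modes (plus the priority knob) map onto configuration keys, all of which are demonstrated later in this post:
```
lxc config set my-container limits.cpu 2                     # any 2 CPUs, load-balanced
lxc config set my-container limits.cpu 1,3,5                 # pin to specific cores
lxc config set my-container limits.cpu.allowance 20%         # 20% of CPU time, enforced only under load
lxc config set my-container limits.cpu.allowance 50ms/200ms  # hard slice: 50ms out of every 200ms
lxc config set my-container limits.cpu.priority 10           # scheduler priority under contention
```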
#### Memory
Memory sounds pretty simple, just give me X MB of RAM!
And it absolutely can be that simple. We support that kind of limits as well as percentage based requests, just give me 10% of whatever the host has!
Then we support some extra stuff on top. For example, you can choose to turn swap on and off on a per-container basis, and if it's on, set a priority so you can choose which containers will have their memory swapped out to disk first!
Oh, and memory limits are “hard” by default. That is, when you run out of memory, the kernel's out-of-memory killer will start having some fun with your processes.
Alternatively you can set the enforcement policy to “soft”, in which case you'll be allowed to use as much memory as you want so long as nothing else is using it. As soon as something else wants that memory, you won't be able to allocate anything until you're back under your limit or until the host has memory to spare again.
#### Network I/O
Network I/O is probably our simplest-looking limit; trust me, the implementation really isn't simple though!
We support two things. The first is a basic bit/s limits on network interfaces. You can set a limit of ingress and egress or just set the “max” limit which then applies to both. This is only supported for “bridged” and “p2p” type interfaces.
The second thing is a global network I/O priority which only applies when the network interface youre trying to talk through is saturated.
#### Block I/O
I kept the weirdest for last. It may look straightforward and feel like that to the user, but there are a bunch of cases where it won't exactly do what you think it should.
What we support here is basically identical to what I described in Network I/O.
You can set IOps or byte/s read and write limits directly on a disk device entry and there is a global block I/O priority which tells the I/O scheduler who to prefer.
The weirdness comes from how and where those limits are applied. Unfortunately, the underlying feature we use to implement them operates on full block devices. That means we can't set per-partition I/O limits, let alone per-path ones.
It also means that when using ZFS or btrfs, which can use multiple block devices to back a given path (with or without RAID), we effectively don't know what block device is providing a given path.
This means that it's entirely possible, in fact likely, that a container may have multiple disk entries (bind-mounts or straight mounts) which are coming from the same underlying disk.
And that's where things get weird. To make things work, LXD has logic to guess what block devices back a given path; this includes interrogating the ZFS and btrfs tools and even figuring things out recursively when it finds a loop-mounted file backing a filesystem.
That logic, while not perfect, usually yields a set of block devices that should have a limit applied. LXD then records that and moves on to the next path. When it's done looking at all the paths, it gets to the very weird part: it averages the limits you've set for every affected block device and then applies those.
That means that “on average” you'll be getting the right speed in the container, but it also means that you can't have a “/fast” and a “/slow” directory both coming from the same physical disk and with differing speed limits. LXD will let you set it up, but in the end they'll both give you the average of the two values.
### How does it all work?
Most of the limits described above are applied through the Linux kernel cgroups API. That's with the exception of the network limits, which are applied through good old “tc”.
LXD at startup time detects what cgroups are enabled in your kernel and will only apply the limits which your kernel supports. Should you be missing some cgroups, a warning will also be printed by the daemon, which will then get logged by your init system.
On Ubuntu 16.04, everything is enabled by default with the exception of swap memory accounting, which requires you to pass the “swapaccount=1” kernel boot parameter.
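If you want to see what the daemon will find on your own system, the kernel lists its cgroup controllers in `/proc/cgroups`; on a cgroup-v1 host such as Ubuntu 16.04, the presence of `memory.memsw.*` files indicates that swap accounting is on. This is just a sketch of the check, not something LXD requires you to run:
```
$ cat /proc/cgroups
$ ls /sys/fs/cgroup/memory/ | grep memsw || echo "swap accounting is off"
```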
### Applying some limits
All the limits described above are applied directly to the container or to one of its profiles. Container-wide limits are applied with:
```
lxc config set CONTAINER KEY VALUE
```
or for a profile:
```
lxc profile set PROFILE KEY VALUE
```
while device-specific ones are applied with:
```
lxc config device set CONTAINER DEVICE KEY VALUE
```
or for a profile:
```
lxc profile device set PROFILE DEVICE KEY VALUE
```
The complete list of valid configuration keys, device types and device keys can be [found here][1].
#### CPU
To just limit a container to any 2 CPUs, do:
```
lxc config set my-container limits.cpu 2
```
To pin to specific CPU cores, say the second and fourth:
```
lxc config set my-container limits.cpu 1,3
```
More complex pinning ranges like this work too:
```
lxc config set my-container limits.cpu 0-3,7-11
```
The limits are applied live, as can be seen in this example:
```
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
```
Note that to avoid utterly confusing userspace, lxcfs arranges the /proc/cpuinfo entries so that there are no gaps.
As with just about everything in LXD, those settings can also be applied in profiles:
```
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
```
To limit the CPU time of a container to 10% of the total, set the CPU allowance:
```
lxc config set my-container limits.cpu.allowance 10%
```
Or to give it a fixed slice of CPU time:
```
lxc config set my-container limits.cpu.allowance 25ms/200ms
```
And lastly, to reduce the priority of a container to a minimum:
```
lxc config set my-container limits.cpu.priority 0
```
#### Memory
To apply a straightforward memory limit run:
```
lxc config set my-container limits.memory 256MB
```
(The supported suffixes are kB, MB, GB, TB, PB and EB)
To turn swap off for the container (defaults to enabled):
```
lxc config set my-container limits.memory.swap false
```
To tell the kernel to swap this container's memory first:
```
lxc config set my-container limits.memory.swap.priority 0
```
And finally, if you don't want hard memory limit enforcement:
```
lxc config set my-container limits.memory.enforce soft
```
#### Disk and block I/O
Unlike CPU and memory, disk and I/O limits are applied to the actual device entry, so you either need to edit the original device or mask it with a more specific one.
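To see which device entries a container actually has (and which ones it only inherits from a profile), dumping the configuration of both is usually enough; `my-container` and the `default` profile are just the names used elsewhere in this post:
```
lxc config show my-container
lxc profile show default
```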
To set a disk limit (requires btrfs or ZFS):
```
lxc config device set my-container root size 20GB
```
For example:
```
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 179G 542M 178G 1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 20G 542M 20G 3% /
```
To restrict speed you can do the following:
```
lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB
```
Or to restrict IOps instead:
```
lxc config device set my-container root limits.read 20Iops
lxc config device set my-container root limits.write 10Iops
```
And lastly, if you're on a busy system with over-commit, you may want to also do:
```
lxc config set my-container limits.disk.priority 10
```
To increase the I/O priority for that container to the maximum.
#### Network I/O
Network I/O is basically identical to block I/O as far as the available knobs go.
For example:
```
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s
2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]
stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s
2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]
```
And that's how you throttle an otherwise nice gigabit connection to a mere 100Mbit/s one!
And as with block I/O, you can set an overall network priority with:
```
lxc config set my-container limits.network.priority 5
```
### Getting the current resource usage
The [LXD API][2] exports quite a bit of information on current container resource usage, you can get:
* Memory: current, peak, current swap and peak swap
* Disk: current disk usage
* Network: bytes and packets received and transferred for every interface
And now if you're running a very recent LXD (only in git at the time of this writing), you can also get all of those in “lxc info”:
```
stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
eth0: inet 172.17.0.101
eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
eth0: inet6 fe80::216:3eff:feec:65a8
lo: inet 127.0.0.1
lo: inet6 ::1
lxcbr0: inet 10.0.3.1
lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
zt0: inet 29.17.181.59
zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
zt0: inet6 fe80::79:e7ff:fe0d:5123
Resources:
Processes: 33
Disk usage:
root: 808.07MB
Memory usage:
Memory (current): 106.79MB
Memory (peak): 195.51MB
Swap (current): 124.00kB
Swap (peak): 124.00kB
Network usage:
lxcbr0:
Bytes received: 0 bytes
Bytes sent: 570 bytes
Packets received: 0
Packets sent: 0
zt0:
Bytes received: 1.10MB
Bytes sent: 806 bytes
Packets received: 10957
Packets sent: 10957
eth0:
Bytes received: 99.35MB
Bytes sent: 5.88MB
Packets received: 64481
Packets sent: 64481
lo:
Bytes received: 9.57kB
Bytes sent: 9.57kB
Packets received: 81
Packets sent: 81
Snapshots:
zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
```
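If you'd rather consume those numbers from a script, the same data is exposed by the REST API. A minimal sketch, assuming LXD's default unix socket path on Ubuntu and a curl build with --unix-socket support (container name taken from the example above):
```
# Fetch the live state (memory, disk and network counters) of the container as JSON
curl -s --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0/containers/zerotier/state
```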
### Conclusion
The LXD team spent quite a few months iterating over the language we're using for those limits. It's meant to be as simple as it can get while remaining very powerful and specific when you want it to be.
Live application of those limits and inheritance through profiles make this a very powerful tool for managing the load on your servers on the fly, without impacting the running services.
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
And if you don't want to or can't install LXD on your own machine, you can always [try it online instead][3]!
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/configuration.md
[2]: https://github.com/lxc/lxd/blob/master/doc/rest-api.md
[3]: https://linuxcontainers.org/lxd/try-it

View File

@ -1,458 +0,0 @@
Part 5 - LXD 2.0: Image management
==================================
This is the fifth blog post [in this series about LXD 2.0][0].
As there are a lot of commands involved with managing LXD containers, this post is rather long. If you'd instead prefer a quick step-by-step tour of those same commands, you can [try our online demo instead][1]!
![](https://linuxcontainers.org/static/img/containers.png)
### Container images
If you've used LXC before, you probably remember those LXC “templates”, basically shell scripts that spit out a container filesystem and a bit of configuration.
Most templates generate the filesystem by doing a full distribution bootstrapping on your local machine. This may take quite a while, won't work for all distributions and may require significant network bandwidth.
Back in LXC 1.0, I wrote a “download” template which would allow users to download pre-packaged container images, generated on a central server from the usual template scripts and then heavily compressed, signed and distributed over https. A lot of our users switched from the old style container generation to using this new, much faster and much more reliable method of creating a container.
With LXD, we're taking this one step further by being all-in on the image based workflow. All containers are created from an image and we have advanced image caching and pre-loading support in LXD to keep the image store up to date.
### Interacting with LXD images
Before digging deeper into the image format, let's quickly go through what LXD lets you do with those images.
#### Transparently importing images
All containers are created from an image. The image may have come from a remote image server and have been pulled using its full hash, short hash or an alias, but in the end, every LXD container is created from a local image.
Here are a few examples:
```
lxc launch ubuntu:14.04 c1
lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d6e c2
lxc launch ubuntu:75182b1241be c3
```
All of those refer to the same remote image (at the time of this writing). The first time one of them is run, the remote image will be imported into the local LXD image store as a cached image, and then the container will be created from it.
The next time one of those commands is run, LXD will only check that the image is still up to date (when not referring to it by its fingerprint); if it is, it will create the container without downloading anything.
Now that the image is cached in the local image store, you can also just start it from there without even checking if it's up to date:
```
lxc launch 75182b1241be c4
```
And lastly, if you have your own local image under the name “my-image”, you can just do:
```
lxc launch my-image c5
```
If you want to change some of that automatic caching and expiration behavior, there are instructions in an earlier post in this series.
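For convenience, here is a hedged sketch of the server-level keys that control that behavior (key names as documented in the LXD server configuration, values are examples only, not the defaults):
```
# How long (in days) an unused cached image is kept before being flushed
lxc config set images.remote_cache_expiry 5
# How often (in hours) LXD checks remote servers for image updates
lxc config set images.auto_update_interval 24
# Whether cached images are kept up to date automatically
lxc config set images.auto_update_cached false
```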
#### Manually importing images
##### Copying from an image server
If you want to copy some remote image into your local image store but not immediately create a container from it, you can use the “lxc image copy” command. It also lets you tweak some of the image flags, for example:
```
lxc image copy ubuntu:14.04 local:
```
This simply copies the remote image into the local image store.
If you want to be able to refer to your copy of the image by something easier to remember than its fingerprint, you can add an alias at the time of the copy:
```
lxc image copy ubuntu:12.04 local: --alias old-ubuntu
lxc launch old-ubuntu c6
```
And if you would rather just use the aliases that were set on the source server, you can ask LXD to copy them for you:
```
lxc image copy ubuntu:15.10 local: --copy-aliases
lxc launch 15.10 c7
```
All of the copies above are one-shot copies, that is, they copy the current version of the remote image into the local image store. If you want LXD to keep the image up to date, as it does for the ones stored in its cache, you need to request it with the `auto-update` flag:
```
lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update
```
##### Importing a tarball
If someone provides you with a LXD image as a single tarball, you can import it with:
```
lxc image import <tarball>
```
If you want to set an alias at import time, you can do it with:
```
lxc image import <tarball> --alias random-image
```
Now if you were provided with two tarballs, identify which one contains the LXD metadata. Usually the tarball name gives it away; if not, pick the smaller of the two, as metadata tarballs are tiny. Then import them both together with:
```
lxc image import <metadata tarball> <rootfs tarball>
```
##### Importing from a URL
“lxc image import” also works with some special URLs. If you have an https web server which serves a path with the LXD-Image-URL and LXD-Image-Hash headers set, then LXD will pull that image into its image store.
For example you can do:
```
lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64
```
When pulling the image, LXD also sets some headers which the remote server could check to return an appropriate image. Those are LXD-Server-Architectures and LXD-Server-Version.
This is meant as a poor man's image server. It can be made to work with any static web server and provides a user-friendly way to import your image.
#### Managing the local image store
Now that we have a bunch of images in our local image store, let's see what we can do with them. We've already covered the most obvious, creating containers from them, but there are a few more things you can do with the local image store.
##### Listing images
To get a list of all images in the store, just run “lxc image list”:
```
stgraber@dakara:~$ lxc image list
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| alpine-32 | 6d9c131efab3 | yes | Alpine edge (i386) (20160329_23:52) | i686 | 2.50MB | Mar 30, 2016 at 4:36am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| gentoo | 1a134c5951e0 | no | Gentoo current (amd64) (20160329_14:12) | x86_64 | 232.50MB | Mar 30, 2016 at 4:34am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| my-image | c9b6e738fae7 | no | Scientific Linux 6 x86_64 (default) (20160215_02:36) | x86_64 | 625.34MB | Mar 2, 2016 at 4:56am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
```
You can filter based on the alias or fingerprint simply by doing:
```
stgraber@dakara:~$ lxc image list amd64
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
```
Or by specifying a key=value filter of image properties:
```
stgraber@dakara:~$ lxc image list os=ubuntu
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
```
To see everything LXD knows about a given image, you can use “lxc image info”:
```
stgraber@castiana:~$ lxc image info ubuntu
Fingerprint: e8a33ec326ae7dd02331bd72f5d22181ba25401480b8e733c247da5950a7d084
Size: 139.43MB
Architecture: i686
Public: no
Timestamps:
Created: 2016/03/15 00:00 UTC
Uploaded: 2016/03/16 05:50 UTC
Expires: 2017/04/26 00:00 UTC
Properties:
version: 12.04
aliases: 12.04,p,precise
architecture: i386
description: ubuntu 12.04 LTS i386 (release) (20160315)
label: release
os: ubuntu
release: precise
serial: 20160315
Aliases:
- ubuntu
Auto update: enabled
Source:
Server: https://cloud-images.ubuntu.com/releases
Protocol: simplestreams
Alias: precise/i386
```
##### Editing images
A convenient way to edit image properties and some of the flags is to use:
```
lxc image edit <alias or fingerprint>
```
This opens up your default text editor with something like this:
```
autoupdate: true
properties:
  aliases: 14.04,default,lts,t,trusty
  architecture: amd64
  description: ubuntu 14.04 LTS amd64 (release) (20160314)
  label: release
  os: ubuntu
  release: trusty
  serial: "20160314"
  version: "14.04"
public: false
```
You can change any property you want, turn auto-update on and off or mark an image as publicly available (more on that later).
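If you'd rather script such a change than go through an editor, piping modified YAML back into the edit command is one possible approach. A sketch, assuming “lxc image edit” reads YAML from standard input like the other edit subcommands do:
```
# Flip the public flag on an image without opening an editor
lxc image show old-ubuntu | sed -e "s/^public: false/public: true/" | lxc image edit old-ubuntu
```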
##### Deleting images
Removing an image is a simple matter of running:
```
lxc image delete <alias or fingerprint>
```
Note that you don't have to remove cached entries; those will automatically be removed by LXD after they expire (by default, 10 days after they were last used).
##### Exporting images
If you want to get image tarballs from images currently in your image store, you can use “lxc image export”, like:
```
stgraber@dakara:~$ lxc image export old-ubuntu .
Output is in .
stgraber@dakara:~$ ls -lh *.tar.xz
-rw------- 1 stgraber domain admins 656 Mar 30 00:55 meta-ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
-rw------- 1 stgraber domain admins 156M Mar 30 00:55 ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
```
#### Image formats
LXD right now supports two image layouts, unified and split. Both of those are effectively LXD-specific, though the latter makes it easier to re-use the filesystem with other container or virtual machine runtimes.
LXD, being solely focused on system containers, doesn't support any of the application container “standard” image formats out there, nor do we plan to.
Our images are pretty simple: they're made of a container filesystem, a metadata file describing things like when the image was made, when it expires, what architecture it's for, … and optionally a bunch of file templates.
See this document for up to date details on the [image format][1].
##### Unified image (single tarball)
The unified image format is what LXD uses when generating images itself. They are a single big tarball, containing the container filesystem inside a “rootfs” directory, have the metadata.yaml file at the root of the tarball and any template goes into a “templates” directory.
Any compression (or none at all) can be used for that tarball. The image hash is the sha256 of the resulting compressed tarball.
##### Split image (two tarballs)
This format is most commonly used by anyone rolling their own images and who already have a compressed filesystem tarball.
They are made of two distinct tarballs. The first contains just the metadata bits that LXD uses, that is, the metadata.yaml file at the root and any template in the “templates” directory.
The second tarball contains only the container filesystem directly at its root. Most distributions already produce such tarballs as they are common for bootstrapping new machines. This image format allows re-using them unmodified.
Any compression (or none at all) can be used for either tarball, and they can absolutely use different compression algorithms. The image hash is the sha256 of the concatenation of the metadata and rootfs tarballs.
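To illustrate, you can compute either fingerprint yourself with standard tools (the file names below are placeholders for your own tarballs):
```
# Unified format: hash of the single compressed tarball
sha256sum lxd-image.tar.xz

# Split format: hash of the metadata tarball followed by the rootfs tarball
cat metadata.tar.xz rootfs.tar.xz | sha256sum
```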
##### Image metadata
A typical metadata.yaml file looks something like:
```
architecture: "i686"
creation_date: 1458040200
properties:
architecture: "i686"
description: "Ubuntu 12.04 LTS server (20160315)"
os: "ubuntu"
release: "precise"
templates:
/var/lib/cloud/seed/nocloud-net/meta-data:
when:
- start
template: cloud-init-meta.tpl
/var/lib/cloud/seed/nocloud-net/user-data:
when:
- start
template: cloud-init-user.tpl
properties:
default: |
#cloud-config
{}
/var/lib/cloud/seed/nocloud-net/vendor-data:
when:
- start
template: cloud-init-vendor.tpl
properties:
default: |
#cloud-config
{}
/etc/init/console.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty1.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty2.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty3.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty4.override:
when:
- create
template: upstart-override.tpl
```
##### Properties
The only two mandatory fields are the creation date (UNIX epoch) and the architecture. Everything else can be left unset and the image will import fine.
The extra properties are mainly there to help the user figure out what the image is about. The “description” property, for example, is what's visible in “lxc image list”. The other properties can be used by the user to search for specific images using key/value search.
Those properties can then be edited by the user through “lxc image edit”; in contrast, the creation date and architecture fields are immutable.
##### Templates
The template mechanism allows for some files in the container to be generated or re-generated at some point in the container lifecycle.
We use the pongo2 templating engine for those and we export just about everything we know about the container to the template. That way you can have custom images which use user-defined container properties or normal LXD properties to change the content of some specific files.
As you can see in the example above, we're using those in Ubuntu to seed cloud-init and to turn off some init scripts.
### Creating your own images
LXD being focused on running full Linux systems means that we expect most users to just use clean distribution images and not spin their own image.
However, there are a few cases where having your own images is useful, such as having pre-configured images of your production servers, or building your own images for a distribution or architecture that we don't build images for.
#### Turning a container into an image
The easiest way by far to build an image with LXD is to just turn a container into an image.
This can be done with:
```
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
```
You can even turn a past container snapshot into a new image:
```
lxc publish my-container/some-snapshot --alias some-image
```
#### Manually building an image
Building your own image is also pretty simple.
1. Generate a container filesystem. This entirely depends on the distribution you're using. For Ubuntu and Debian, it would be done with debootstrap.
2. Configure anything thats needed for the distribution to work properly in a container (if anything is needed).
3. Make a tarball of that container filesystem, optionally compress it.
4. Write a new metadata.yaml file based on the one described above.
5. Create another tarball containing that metadata.yaml file.
6. Import those two tarballs as a LXD image with:
```
lxc image import <metadata tarball> <rootfs tarball> --alias some-name
```
You will probably need to go through this a few times before everything works, tweaking things here and there, possibly adding some templates and properties.
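Putting those steps together, a rough end-to-end sketch might look like the following (distribution, release and file names are assumptions, adapt them to whatever you are actually building):
```
# Steps 1-3: bootstrap a filesystem and pack it (Debian/Ubuntu example, needs debootstrap and xz)
sudo debootstrap --arch=amd64 xenial rootfs http://archive.ubuntu.com/ubuntu
sudo tar -C rootfs -cJf rootfs.tar.xz .

# Steps 4-5: write a minimal metadata.yaml and pack it into its own tarball
cat > metadata.yaml << EOF
architecture: "x86_64"
creation_date: $(date +%s)
properties:
  description: "Custom Ubuntu 16.04 (xenial) image"
  os: "ubuntu"
  release: "xenial"
EOF
tar -cJf metadata.tar.xz metadata.yaml

# Step 6: import both tarballs as a split image
lxc image import metadata.tar.xz rootfs.tar.xz --alias my-custom-image
```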
### Publishing your images
All LXD daemons act as image servers. Unless told otherwise, all images loaded in the image store are marked as private and so only trusted clients can retrieve those images, but should you want to make a public image server, all you have to do is tag a few images as public and make sure your LXD daemon is listening to the network.
#### Just running a public LXD server
The easiest way to share LXD images is to run a publicly visible LXD daemon.
You typically do that by running:
```
lxc config set core.https_address "[::]:8443"
```
Remote users can then add your server as a public image server with:
```
lxc remote add <some name> <IP or DNS> --public
```
They can then use it just as they would any of the default image servers. As the remote server was added with “public”, no authentication is required and the client is restricted to images which have themselves been marked as public.
To change what images are public, just “lxc image edit” them and set the public flag to true.
#### Use a static web server
As mentioned above, “lxc image import” supports downloading from a static http server. The requirements are basically:
* The server must support HTTPS with a valid certificate, TLS 1.2 and EC ciphers
* When hitting the URL provided to “lxc image import”, the server must return an answer including the LXD-Image-Hash and LXD-Image-URL HTTP headers
If you want to make this dynamic, you can have your server look for the LXD-Server-Architectures and LXD-Server-Version HTTP headers which LXD will provide when fetching the image. This allows you to return the right image for the server's architecture.
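A quick way to check what your server is actually sending is to inspect the headers from the client side. A sketch, assuming the server answers a HEAD request the same way it answers the GET that LXD performs (the URL is the example used earlier):
```
# Show only the LXD-specific headers returned for the image URL
curl -sI https://dl.stgraber.org/lxd | grep -i '^LXD-Image'
```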
#### Build a simplestreams server
The “ubuntu:” and “ubuntu-daily:” remotes aren't using the LXD protocol (“images:” is); they instead use a different protocol called simplestreams.
simplestreams is basically an image server description format, using JSON to describe a list of products and files related to those products.
It is used by a variety of tools like OpenStack, Juju, MAAS, … to find, download or mirror system images and LXD supports it as a native protocol for image retrieval.
While certainly not the easiest way to start providing LXD images, it may be worth considering if your images can also be used by some of those other tools.
More information can be found [here][2].
### Conclusion
I hope this gave you a good idea of how LXD manages its images and how to build and distribute your own. The ability to have the exact same image easily available bit for bit on a bunch of globally distributed systems is a big step up from the old LXC days and leads the way to more reproducible infrastructure.
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
And if you don't want to or can't install LXD on your own machine, you can always [try it online instead][3]!
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/image-handling.md
[2]: https://launchpad.net/simplestreams
[3]: https://linuxcontainers.org/lxd/try-it
原文https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/

View File

@ -1,209 +0,0 @@
Part 6 - LXD 2.0: Remote hosts and container migration
=======================================================
This is the sixth blog post [in this series about LXD 2.0][0].
![](https://linuxcontainers.org/static/img/containers.png)
### Remote protocols
LXD 2.0 supports two protocols:
* LXD 1.0 API: That's the REST API used between the clients and a LXD daemon as well as between LXD daemons when copying/moving images and containers.
* Simplestreams: The Simplestreams protocol is a read-only, image-only protocol used by both the LXD client and daemon to get image information and import images from some public image servers (like the Ubuntu images).
Everything below will be using the first of those two.
### Security
Authentication for the LXD API is done through client certificate authentication over TLS 1.2 using recent ciphers. When two LXD daemons must exchange information directly, a temporary token is generated by the source daemon and transferred through the client to the target daemon. This token may only be used to access a particular stream and is immediately revoked so cannot be re-used.
To avoid Man In The Middle attacks, the client tool also sends the certificate of the source server to the target. That means that for a particular download operation, the target server is provided with the source server URL, a one-time access token for the resource it needs and the certificate that the server is supposed to be using. This prevents MITM attacks and only gives temporary access to the object of the transfer.
### Network requirements
LXD 2.0 uses a model where the target of an operation (the receiving end) is connecting directly to the source to fetch the data.
This means that you must ensure that the target server can connect to the source directly, updating any needed firewall rules along the way.
We have [a plan][1] to allow this to be reversed and also to allow proxying through the client itself for those rare cases where draconian firewalls are preventing any communication between the two hosts.
### Interacting with remote hosts
Rather than having our users always provide hostnames or IP addresses and then validate certificate information whenever they want to interact with a remote host, LXD uses the concept of “remotes”.
By default, the only real LXD remote configured is “local:” which also happens to be the default remote (so you don't have to type its name). The local remote uses the LXD REST API to talk to the local daemon over a unix socket.
### Adding a remote
Say you have two machines with LXD installed, your local machine and a remote host that well call “foo”.
First you need to make sure that “foo” is listening to the network and has a password set, so get a remote shell on it and run:
```
lxc config set core.https_address [::]:8443
lxc config set core.trust_password something-secure
```
Now on your local LXD, we just need to make it visible to the network so we can transfer containers and images from it:
```
lxc config set core.https_address [::]:8443
```
Now that the daemon configuration is done on both ends, you can add “foo” to your local client with:
```
lxc remote add foo 1.2.3.4
```
(replacing 1.2.3.4 by your IP address or FQDN)
You'll see something like this:
```
stgraber@dakara:~$ lxc remote add foo 2607:f2c0:f00f:2770:216:3eff:fee1:bd67
Certificate fingerprint: fdb06d909b77a5311d7437cabb6c203374462b907f3923cefc91dd5fce8d7b60
ok (y/n)? y
Admin password for foo:
Client certificate stored at server: foo
```
You can then list your remotes and you'll see “foo” listed there:
```
stgraber@dakara:~$ lxc remote list
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| NAME | URL | PROTOCOL | PUBLIC | STATIC |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| foo | https://[2607:f2c0:f00f:2770:216:3eff:fee1:bd67]:8443 | lxd | NO | NO |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| images | https://images.linuxcontainers.org:8443 | lxd | YES | NO |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| local (default) | unix:// | lxd | NO | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | YES | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | YES | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
```
### Interacting with it
Ok, so we have a remote server defined, what can we do with it now?
Well, just about everything you saw in the posts until now, the only difference being that you must tell LXD what host to run against.
For example:
```
lxc launch ubuntu:14.04 c1
```
Will run on the default remote (“lxc remote get-default”) which is your local host.
```
lxc launch ubuntu:14.04 foo:c1
```
Will instead run on foo.
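If you find yourself mostly working against “foo”, you can also change the default remote instead of prefixing every container name, using the standard remote subcommands:
```
lxc remote set-default foo   # unprefixed container names now refer to foo
lxc launch ubuntu:14.04 c1   # runs on foo
lxc remote set-default local # switch back to the local daemon
```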
Listing running containers on a remote host can be done with:
```
stgraber@dakara:~$ lxc list foo:
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| c1 | RUNNING | 10.245.81.95 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe43:7994 (eth0) | PERSISTENT | 0 |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
```
One thing to keep in mind is that you have to specify the remote host for both images and containers. So if you have a local image called “my-image” on “foo” and want to create a container called “c2” from it, you have to run:
```
lxc launch foo:my-image foo:c2
```
Finally, getting a shell into a remote container works just as you would expect:
```
lxc exec foo:c1 bash
```
### Copying containers
Copying containers between hosts is as easy as it sounds:
```
lxc copy foo:c1 c2
```
And you'll have a new local container called “c2” created from a copy of the remote “c1” container. This requires “c1” to be stopped first, but you could just copy a snapshot instead and do it while the source container is running:
```
lxc snapshot foo:c1 current
lxc copy foo:c1/current c3
```
### Moving containers
Unless you're doing live migration (which will be covered in a later post), you have to stop the source container prior to moving it, after which everything works as you'd expect.
```
lxc stop foo:c1
lxc move foo:c1 local:
```
This example is functionally identical to:
```
lxc stop foo:c1
lxc move foo:c1 c1
```
### How this all works
Interactions with remote containers work as you would expect: rather than using the REST API over a local Unix socket, LXD just uses the exact same API over a remote HTTPS transport.
Where it gets a bit trickier is when interaction between two daemons must occur, as is the case for copy and move.
In those cases the following happens:
1. The user runs “lxc move foo:c1 c1”.
2. The client contacts the local: remote to check for an existing “c1” container.
3. The client fetches container information from “foo”.
4. The client requests a migration token from the source “foo” daemon.
5. The client sends that migration token as well as the source URL and the certificate of “foo” to the local LXD daemon alongside the container configuration and devices.
6. The local LXD daemon then connects directly to “foo” using the provided token
A. It connects to a first control websocket
B. It negotiates the filesystem transfer protocol (zfs send/receive, btrfs send/receive or plain rsync)
C. If available locally, it unpacks the image which was used to create the source container. This is to avoid needless data transfer.
D. It then transfers the container and any of its snapshots as a delta.
7. If successful, the client then instructs “foo” to delete the source container.
### Try all this online
Don't have two machines to try remote interactions and moving/copying containers?
That's okay, you can test it all online using our [demo service][2].
The included step-by-step walkthrough even covers it!
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/issues/553
[2]: https://linuxcontainers.org/lxd/try-it/

View File

@ -1,145 +0,0 @@
Part 7 - LXD 2.0: Docker in LXD
==================================
This is the seventh blog post [in this series about LXD 2.0][0].
![](https://linuxcontainers.org/static/img/containers.png)
### Why run Docker inside LXD
As I briefly covered in the [first post of this series][1], LXD's focus is system containers. That is, we run a full unmodified Linux distribution inside our containers. LXD, for all intents and purposes, doesn't care about the workload running in the container. It just sets up the container namespaces and security policies, then spawns /sbin/init and waits for the container to stop.
Application containers such as those implemented by Docker or Rkt are pretty different in that they are used to distribute applications, will typically run a single main process inside them and be much more ephemeral than a LXD container.
Those two container types aren't mutually exclusive and we certainly see the value of using Docker containers to distribute applications. That's why we've been working hard over the past year to make it possible to run Docker inside LXD.
This means that with Ubuntu 16.04 and LXD 2.0, you can create containers for your users who will then be able to connect into them just like a normal Ubuntu system and then run Docker to install the services and applications they want.
### Requirements
There are a lot of moving pieces to make all of this work, and we got it all included in Ubuntu 16.04:
- A kernel with CGroup namespace support (4.4 Ubuntu or 4.6 mainline)
- LXD 2.0 using LXC 2.0 and LXCFS 2.0
- A custom version of Docker (or one built with all the patches that we submitted)
- A Docker image which behaves well when confined by user namespaces, or alternatively making the parent LXD container a privileged container (security.privileged=true)
### Running a basic Docker workload
Enough talking, let's run some Docker containers!
First of all, you need an Ubuntu 16.04 container which you can get with:
```
lxc launch ubuntu-daily:16.04 docker -p default -p docker
```
The “-p default -p docker” instructs LXD to apply both the “default” and “docker” profiles to the container. The default profile contains the basic network configuration while the docker profile tells LXD to load a few required kernel modules and set up some mounts for the container. The docker profile also enables container nesting.
Now let's make sure the container is up to date and install docker:
```
lxc exec docker -- apt update
lxc exec docker -- apt dist-upgrade -y
lxc exec docker -- apt install docker.io -y
```
And that's it! You've got Docker installed and running in your container.
Now let's start a basic web service made of two Docker containers:
```
stgraber@dakara:~$ lxc exec docker -- docker run --detach --name app carinamarina/hello-world-app
Unable to find image 'carinamarina/hello-world-app:latest' locally
latest: Pulling from carinamarina/hello-world-app
efd26ecc9548: Pull complete
a3ed95caeb02: Pull complete
d1784d73276e: Pull complete
72e581645fc3: Pull complete
9709ddcc4d24: Pull complete
2d600f0ec235: Pull complete
c4cf94f61cbd: Pull complete
c40f2ab60404: Pull complete
e87185df6de7: Pull complete
62a11c66eb65: Pull complete
4c5eea9f676d: Pull complete
498df6a0d074: Pull complete
Digest: sha256:6a159db50cb9c0fbe127fb038ed5a33bb5a443fcdd925ec74bf578142718f516
Status: Downloaded newer image for carinamarina/hello-world-app:latest
c8318f0401fb1e119e6c5bb23d1e706e8ca080f8e44b42613856ccd0bf8bfb0d
stgraber@dakara:~$ lxc exec docker -- docker run --detach --name web --link app:helloapp -p 80:5000 carinamarina/hello-world-web
Unable to find image 'carinamarina/hello-world-web:latest' locally
latest: Pulling from carinamarina/hello-world-web
efd26ecc9548: Already exists
a3ed95caeb02: Already exists
d1784d73276e: Already exists
72e581645fc3: Already exists
9709ddcc4d24: Already exists
2d600f0ec235: Already exists
c4cf94f61cbd: Already exists
c40f2ab60404: Already exists
e87185df6de7: Already exists
f2d249ff479b: Pull complete
97cb83fe7a9a: Pull complete
d7ce7c58a919: Pull complete
Digest: sha256:c31cf04b1ab6a0dac40d0c5e3e64864f4f2e0527a8ba602971dab5a977a74f20
Status: Downloaded newer image for carinamarina/hello-world-web:latest
d7b8963401482337329faf487d5274465536eebe76f5b33c89622b92477a670f
```
With those two Docker containers now running, we can then get the IP address of our LXD container and access the service!
```
stgraber@dakara:~$ lxc list
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| docker | RUNNING | 172.17.0.1 (docker0) | 2001:470:b368:4242:216:3eff:fe55:45f4 (eth0) | PERSISTENT | 0 |
| | | 10.178.150.73 (eth0) | | | |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
stgraber@dakara:~$ curl http://10.178.150.73
The linked container said... "Hello World!"
```
### Conclusion
That's it! It's really that simple to run Docker containers inside a LXD container.
Now as I mentioned earlier, not all Docker images will behave as well as my example; that's typically because of the extra confinement that comes with LXD, specifically the user namespace.
Only the overlayfs storage driver of Docker works in this mode. That storage driver may come with its own set of limitations which may further limit how many images will work in this environment.
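You can quickly check which storage driver the nested Docker daemon ended up using; a one-liner sketch reusing the container name from the example above:
```
lxc exec docker -- docker info | grep -i 'storage driver'
```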
If your workload doesn't work properly and you trust the user inside the LXD container, you can try:
```
lxc config set docker security.privileged true
lxc restart docker
```
That will deactivate the user namespace and run the container in privileged mode.
Note however that in this mode, root inside the container is the same uid as root on the host. There are a number of known ways for users to escape such containers and gain root privileges on the host, so you should only ever do that if you'd trust the user inside your LXD container with root privileges on the host.
### Extra information
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://www.stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/
[2]: https://linuxcontainers.org/lxd/try-it/

View File

@ -0,0 +1,220 @@
Manage Samba4 AD Domain Controller DNS and Group Policy from Windows Part 4
============================================================
Continuing the previous tutorial on [how to administer Samba4 from Windows 10 via RSAT][4], in this part we'll see how to remotely manage our Samba AD Domain Controller DNS server from Microsoft DNS Manager, how to create DNS records, how to create a Reverse Lookup Zone and how to create a domain policy via the Group Policy Management tool.
#### Requirements
1. [Create an AD Infrastructure with Samba4 on Ubuntu 16.04 Part 1][1]
2. [Manage Samba4 AD Infrastructure from Linux Command Line Part 2][2]
3. [Manage Samba4 Active Directory Infrastructure from Windows10 via RSAT Part 3][3]
### Step 1: Manage Samba DNS Server
Samba4 AD DC uses an internal DNS resolver module which is created during the initial domain provisioning (if the BIND9 DLZ module is not specifically used).
Samba4's internal DNS module supports the basic features needed for an AD Domain Controller. The domain DNS server can be managed in two ways: directly from the command line through the samba-tool interface, or remotely from a Microsoft workstation which is part of the domain, via RSAT DNS Manager.
Here, we'll cover the second method because it's more intuitive and not so prone to errors.
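For reference, the command-line alternative on the domain controller goes through the samba-tool dns subcommands. A hedged sketch (server name, zone, record name and address are placeholders matching the examples used later in this series):
```
# Add an A record for the LAN gateway, then list all records in the zone
samba-tool dns add adc1 tecmint.lan gate A 192.168.1.1 -U administrator
samba-tool dns query adc1 tecmint.lan @ ALL -U administrator
```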
1. To administer the DNS service for your domain controller via RSAT, go to your Windows machine, open Control Panel -> System and Security -> Administrative Tools and run DNS Manager utility.
Once the tool opens, it will ask you which running DNS server you want to connect to. Choose The following computer, type your domain name in the field (an IP Address or FQDN can be used as well), check the box that says Connect to the specified computer now and hit OK to open your Samba DNS service.
[
![Connect Samba4 DNS on Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Connect-Samba4-DNS-on-Windows.png)
][5]
Connect Samba4 DNS on Windows
2. In order to add a DNS record (as an example we will add an `A` record that will point to our LAN gateway), navigate to the domain's Forward Lookup Zone, right click on the right pane and choose New Host (`A` or `AAAA`).
[
![Add DNS A Record on Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-A-Record.png)
][6]
Add DNS A Record on Windows
3. In the New Host window that opens, type the name and the IP Address of your DNS resource. The FQDN will be automatically filled in for you by the DNS utility. When finished, hit the Add Host button and a pop-up window will inform you that your DNS A record has been successfully created.
Make sure you add DNS A records only for those resources in your network [configured with static IP Addresses][7]. Don't add DNS A records for hosts which are configured to acquire their network configuration from a DHCP server or whose IP Addresses change often.
[
![Configure Samba Host on Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Samba-Host-on-Windows.png)
][8]
Configure Samba Host on Windows
To update a DNS record just double click on it and write your modifications. To delete the record right click on the record and choose delete from the menu.
In the same way you can add other types of DNS records for your domain, such as CNAME records (also known as DNS alias records), MX records (very useful for mail servers) or other types of records (SPF, TXT, SRV, etc.).
### Step 2: Create a Reverse Lookup Zone
By default, Samba4 AD DC doesn't automatically add a reverse lookup zone and PTR records for your domain because these types of records are not crucial for a domain controller to function correctly.
However, a DNS reverse zone and its PTR records are crucial for the functionality of some important network services, such as e-mail services, because these types of records can be used to verify the identity of clients requesting a service.
Practically, PTR records are just the opposite of standard DNS records. The clients know the IP address of a resource and query the DNS server to find out its registered DNS name.
4. In order to create a reverse lookup zone for Samba AD DC, open DNS Manager, right click on Reverse Lookup Zones in the left pane and choose New Zone from the menu.
[
![Create Reverse Lookup DNS Zone](http://www.tecmint.com/wp-content/uploads/2016/12/Create-Reverse-Lookup-DNS-Zone.png)
][9]
Create Reverse Lookup DNS Zone
5. Next, hit the Next button and choose Primary zone as the Zone Type in the wizard.
[
![Select DNS Zone Type](http://www.tecmint.com/wp-content/uploads/2016/12/Select-DNS-Zone-Type.png)
][10]
Select DNS Zone Type
6. Next, choose To all DNS servers running on domain controllers in this domain from the AD Zone Replication Scope, choose IPv4 Reverse Lookup Zone and hit Next to continue.
[
![Select DNS for Samba Domain Controller](http://www.tecmint.com/wp-content/uploads/2016/12/Select-DNS-for-Samba-Domain-Controller.png)
][11]
Select DNS for Samba Domain Controller
[
![Add Reverse Lookup Zone Name](http://www.tecmint.com/wp-content/uploads/2016/12/Add-Reverse-Lookup-Zone-Name.png)
][12]
Add Reverse Lookup Zone Name
7. Next, type the IP network address for your LAN in the Network ID field and hit Next to continue.
All PTR records added in this zone for your resources will point back only to the 192.168.1.0/24 network portion. If you want to create a PTR record for a server that does not reside in this network segment (for example a mail server located in the 10.0.0.0/24 network), then you'll need to create a new reverse lookup zone for that network segment as well.
[
![Add IP Address of Reverse Lookup DNS Zone](http://www.tecmint.com/wp-content/uploads/2016/12/Add-IP-Address-of-Reverse-DNS-Zone.png)
][13]
Add IP Address of Reverse Lookup DNS Zone
8. On the next screen choose Allow only secure dynamic updates, hit Next to continue and, finally, hit Finish to complete the zone creation.
[
![Enable Secure Dynamic Updates](http://www.tecmint.com/wp-content/uploads/2016/12/Enable-Secure-Dynamic-Updates.png)
][14]
Enable Secure Dynamic Updates
[
![New DNS Zone Summary](http://www.tecmint.com/wp-content/uploads/2016/12/New-DNS-Zone-Summary.png)
][15]
New DNS Zone Summary
9. At this point you have a valid DNS reverse lookup zone configured for your domain. In order to add a PTR record in this zone, right click on the right pane and choose to create a PTR record for a network resource.
In this case we've created a pointer for our gateway. In order to test if the record was properly added and works as expected from the client's point of view, open a Command Prompt and issue an nslookup query against the name of the resource and another query for its IP Address.
Both queries should return the correct answer for your DNS resource.
```
nslookup gate.tecmint.lan
nslookup 192.168.1.1
ping gate
```
[
![Add DNS PTR Record and Query PTR](http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-PTR-Record-and-Query.png)
][16]
Add DNS PTR Record and Query PTR
### Step 3: Domain Group Policy Management
10. An important aspect of a domain controller is its ability to control system resources and security from a single central point. This type of task can be easily achieved in a domain controller with the help of Domain Group Policy.
Unfortunately, the only way to edit or manage group policy in a Samba domain controller is through the RSAT GPM console provided by Microsoft.
In the below example we'll see how simple it can be to manipulate group policy for our Samba domain in order to create an interactive logon banner for our domain users.
In order to access group policy console, go to Control Panel -> System and Security -> Administrative Tools and open Group Policy Management console.
Expand the fields for your domain and right click on Default Domain Policy. Choose Edit from the menu and a new window should appear.
[
![Manage Samba Domain Group Policy](http://www.tecmint.com/wp-content/uploads/2016/12/Manage-Samba-Domain-Group-Policy.png)
][17]
Manage Samba Domain Group Policy
11. In the Group Policy Management Editor window go to Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options and a new options list should appear in the right pane.
In the right pane, find the two entries presented on the below screenshot and edit them with your custom settings.
[
![Configure Samba Domain Group Policy](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Samba-Domain-Group-Policy.png)
][18]
Configure Samba Domain Group Policy
12. After finishing editing the two entries, close all windows, open an elevated Command prompt and force group policy to apply on your machine by issuing the below command:
```
gpupdate /force
```
[
![Update Samba Domain Group Policy](http://www.tecmint.com/wp-content/uploads/2016/12/Update-Samba-Domain-Group-Policy.png)
][19]
Update Samba Domain Group Policy
13. Finally, reboot your computer and you'll see the logon banner in action when you try to log on.
[
![Samba4 AD Domain Controller Logon Banner](http://www.tecmint.com/wp-content/uploads/2016/12/Samba4-Domain-Controller-User-Login.png)
][20]
Samba4 AD Domain Controller Logon Banner
That's all! Group Policy is a very complex and sensitive subject and should be treated with maximum care by system admins. Also, be aware that group policy settings won't apply in any way to Linux systems integrated into the realm.
------
作者简介:I'm a computer addicted guy, a fan of open source and Linux based system software, and have about 4 years of experience with Linux distributions, desktops, servers and bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/
作者:[Matei Cezar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
[3]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[4]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/Connect-Samba4-DNS-on-Windows.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-A-Record.png
[7]:http://www.tecmint.com/set-add-static-ip-address-in-linux/
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Samba-Host-on-Windows.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-Reverse-Lookup-DNS-Zone.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-DNS-Zone-Type.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-DNS-for-Samba-Domain-Controller.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-Reverse-Lookup-Zone-Name.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-IP-Address-of-Reverse-DNS-Zone.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Enable-Secure-Dynamic-Updates.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/12/New-DNS-Zone-Summary.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-PTR-Record-and-Query.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Manage-Samba-Domain-Group-Policy.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Samba-Domain-Group-Policy.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/Update-Samba-Domain-Group-Policy.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/Samba4-Domain-Controller-User-Login.png
[21]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/#
[22]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/#
[23]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/#
[24]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/#
[25]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/#comments

View File

@ -0,0 +1,360 @@
Manage Samba4 Active Directory Infrastructure from Windows10 via RSAT Part 3
============================================================
In this part of the [Samba4 AD DC infrastructure series][8] we will talk about how to join a Windows 10 machine to a Samba4 realm and how to administer the domain from a Windows 10 workstation.
Once a Windows 10 system has been joined to Samba4 AD DC, we can create, remove or disable domain users and groups, create new Organizational Units, create, edit and manage domain policy, and manage the Samba4 domain DNS service.
All of the above functions and other complex tasks concerning domain administration can be achieved via any modern Windows platform with the help of RSAT, the Microsoft Remote Server Administration Tools.
#### Requirements
1. [Create an AD Infrastructure with Samba4 on Ubuntu 16.04 Part 1][1]
2. [Manage Samba4 AD Infrastructure from Linux Command Line Part 2][2]
3. [Manage Samba4 AD Domain Controller DNS and Group Policy from Windows Part 4][3]
### Step 1: Configure Domain Time Synchronization
1. Before starting to administer Samba4 AD DC from Windows 10 with the help of RSAT tools, we need to take care of a crucial service required for Active Directory: [accurate time synchronization][9].
Time synchronization can be provided by the NTP daemon in most Linux distributions. The default maximum time discrepancy an AD can support is about 5 minutes.
If the time divergence is greater than 5 minutes you will start to experience various errors, most importantly concerning AD users, joined machines or share access.
To install Network Time Protocol daemon and NTP client utility in Ubuntu, execute the below command.
```
$ sudo apt-get install ntp ntpdate
```
[
![Install NTP on Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/12/Install-NTP-on-Ubuntu.png)
][10]
Install NTP on Ubuntu
2. Next, open and edit the NTP configuration file and replace the default NTP pool server list with a new list of NTP servers which are geographically located near your physical equipment.
The list of NTP servers can be obtained by visiting official NTP Pool Project webpage [http://www.pool.ntp.org/en/][11].
```
$ sudo nano /etc/ntp.conf
```
Comment the default server list by adding a `#` in front of each pool line and add the below pool lines with your proper NTP servers as illustrated on the below screenshot.
```
pool 0.ro.pool.ntp.org iburst
pool 1.ro.pool.ntp.org iburst
pool 2.ro.pool.ntp.org iburst
# Use Ubuntu's ntp server as a fallback.
pool 3.ro.pool.ntp.org
```
[
![Configure NTP Server in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-NTP-Server-in-Ubuntu.png)
][12]
Configure NTP Server in Ubuntu
3. Now, don't close the file yet. Move to the top of the file and add the below line after the driftfile statement. This setting allows your clients to query the server using AD-signed NTP requests.
```
ntpsigndsocket /var/lib/samba/ntp_signd/
```
[
![Sync AD with NTP](http://www.tecmint.com/wp-content/uploads/2016/12/Sync-AD-with-NTP.png)
][13]
Sync AD with NTP
4. Finally, move to the bottom of the file and add the below line, as illustrated on the below screenshot, which will allow network clients only to query the time on the server.
```
restrict default kod nomodify notrap nopeer mssntp
```
[
![Query Clients to NTP Server](http://www.tecmint.com/wp-content/uploads/2016/12/Query-Client-to-NTP-Server.png)
][14]
Query Clients to NTP Server
5. When finished, save and close the NTP configuration file and grant the NTP service the proper permissions to read the ntp_signd directory.
This is the system path where the Samba NTP socket is located. Afterwards, restart the NTP daemon to apply the changes and verify whether NTP has open sockets in your system's network table using the [netstat command][15] combined with the [grep filter][16].
```
$ sudo chown root:ntp /var/lib/samba/ntp_signd/
$ sudo chmod 750 /var/lib/samba/ntp_signd/
$ sudo systemctl restart ntp
$ sudo netstat -tulpn | grep ntp
```
[
![Grant Permission to NTP](http://www.tecmint.com/wp-content/uploads/2016/12/Grant-Permission-to-NTP.png)
][17]
Grant Permission to NTP
Use the ntpq command line utility with the `-p` flag to monitor the NTP daemon and print a summary of the peers' state.
```
$ ntpq -p
```
[
![Monitor NTP Server Pool](http://www.tecmint.com/wp-content/uploads/2016/12/Monitor-NTP-Server-Pool.png)
][18]
Monitor NTP Server Pool
### Step 2: Troubleshoot NTP Time Issues
6. Sometimes the NTP daemon gets stuck in calculations while trying to synchronize time with an upstream NTP server peer, resulting in the following error messages when you manually try to force time synchronization by running the ntpdate utility on the client side:
```
# ntpdate -qu adc1
ntpdate[4472]: no server suitable for synchronization found
```
[
![NTP Time Synchronization Error](http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Time-Synchronization-Error.png)
][19]
NTP Time Synchronization Error
A similar error shows up when using the ntpdate command with the `-d` flag:
```
# ntpdate -d adc1.tecmint.lan
Server dropped: Leap not in sync
```
[
![NTP Server Dropped Leap Not in Sync](http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Server-Dropped-Leap-Not-Sync.png)
][20]
NTP Server Dropped Leap Not in Sync
7. To circumvent this issue, stop the NTP service on the server and use the ntpdate client utility to manually force time synchronization with an external peer using the `-b` flag, as shown below:
```
# systemctl stop ntp.service
# ntpdate -b 2.ro.pool.ntp.org [your_ntp_peer]
# systemctl start ntp.service
# systemctl status ntp.service
```
[
![Force NTP Time Synchronization](http://www.tecmint.com/wp-content/uploads/2016/12/Force-NTP-Time-Synchronization.png)
][21]
Force NTP Time Synchronization
8. After the time has been accurately synchronized, start the NTP daemon on the server and verify from the client side whether the service is ready to serve time to local clients by issuing the following command:
```
# ntpdate -du adc1.tecmint.lan [your_adc_server]
```
[
![Verify NTP Time Synchronization](http://www.tecmint.com/wp-content/uploads/2016/12/Verify-NTP-Time-Synchronization.png)
][22]
Verify NTP Time Synchronization
By now, the NTP server should work as expected.
### Step 3: Join Windows 10 into Realm
9. As we saw in our previous tutorial, [Samba4 Active Directory can be managed from the command line using the samba-tool][23] utility, which can be accessed directly from the server's VTY console or remotely over SSH.
Another, more intuitive and flexible, alternative is to manage our Samba4 AD Domain Controller via Microsoft Remote Server Administration Tools (RSAT) from a Windows workstation joined to the domain. These tools are available for almost all modern Windows versions.
The process of joining Windows 10 or older versions of Microsoft OS to a Samba4 AD DC is very simple. First, make sure that your Windows 10 workstation has the correct Samba4 DNS IP address configured in order to query the proper realm resolver.
Open Control Panel -> Network and Internet -> Network and Sharing Center -> Ethernet card -> Properties -> IPv4 -> Properties -> Use the following DNS server addresses and manually set the Samba4 AD IP address on the network interface, as illustrated in the below screenshots.
[
![join Windows to Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-to-Samba4-AD.png)
][24]
join Windows to Samba4 AD
[
![Add DNS and Samba4 AD IP Address](http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-and-Samba4-AD-IP-Address.png)
][25]
Add DNS and Samba4 AD IP Address
Here, 192.168.1.254 is the IP Address of Samba4 AD Domain Controller responsible for DNS resolution. Replace the IP Address accordingly.
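If you prefer the command line, the same DNS setting can also be applied from an elevated PowerShell prompt with the built-in netsh utility. Treat this as a sketch only: the interface name "Ethernet" and the 192.168.1.254 address are assumptions to adapt to your own setup.
```
# Point the "Ethernet" interface at the Samba4 AD DC for DNS resolution
netsh interface ipv4 set dnsservers name="Ethernet" source=static address=192.168.1.254 register=primary
# Confirm the DNS server currently configured on the interface
netsh interface ipv4 show dnsservers name="Ethernet"
```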
10. Next, apply the network settings by hitting the OK button, open a Command Prompt and issue a ping against the generic domain name and the Samba4 host FQDN in order to test whether the realm is reachable through DNS resolution.
```
ping tecmint.lan
ping adc1.tecmint.lan
```
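Optionally, you can also check that the Active Directory service records are resolvable from the client. This is an optional sketch run from a PowerShell prompt, assuming the tecmint.lan realm used throughout this series:
```
# Query the standard AD SRV records published by the Samba4 DC
nslookup -type=SRV _ldap._tcp.tecmint.lan
nslookup -type=SRV _kerberos._tcp.tecmint.lan
```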
[
![Check Network Connectivity Between Windows and Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/12/Check-Samba4-AD-from-Windows.png)
][26]
Check Network Connectivity Between Windows and Samba4 AD
11. If the resolver correctly responds to the Windows client's DNS queries, you next need to make sure that the time is accurately synchronized with the realm.
Open Control Panel -> Clock, Language and Region -> Set Time and Date -> Internet Time tab -> Change Settings and enter your domain name in the 'Synchronize with an Internet time server' field.
Hit the Update Now button to force time synchronization with the realm and hit OK to close the window.
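The same synchronization can also be triggered from an elevated PowerShell prompt with the built-in w32tm utility. This is only an optional alternative to the GUI dialog, with tecmint.lan standing in for your own realm:
```
# Point the Windows Time service at the realm, force a resync and show the result
w32tm /config /manualpeerlist:tecmint.lan /syncfromflags:manual /update
w32tm /resync
w32tm /query /status
```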
[
![Synchronize Time with Internet Server](http://www.tecmint.com/wp-content/uploads/2016/12/Synchronize-Time-with-Internet-Server.png)
][27]
Synchronize Time with Internet Server
12. Finally, join the domain by opening System Properties -> Change -> Member of Domain, enter your domain name, hit OK, enter your domain administrator account credentials and hit OK again.
A new pop-up window should open informing you that you are now a member of the domain. Hit OK to close the pop-up window and reboot the machine in order to apply the domain changes.
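If you prefer to script the join instead of clicking through the System Properties dialog, the built-in PowerShell Add-Computer cmdlet can do the same. This is only a sketch: the tecmint.lan realm and the TECMINT\administrator account are the examples used in this series and should be adapted (you will be prompted for the password):
```
# Join the workstation to the Samba4 realm and reboot when done (run from an elevated PowerShell)
Add-Computer -DomainName "tecmint.lan" -Credential "TECMINT\administrator" -Restart
```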
The screenshots below illustrate the GUI steps.
[
![Join Windows Domain to Samba4 AD](http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-Domain-to-Samba4-AD.png)
][28]
Join Windows Domain to Samba4 AD
[
![Enter Domain Administration Login](http://www.tecmint.com/wp-content/uploads/2016/12/Enter-Domain-Administration-Login.png)
][29]
Enter Domain Administration Login
[
![Domain Joined to Samba4 AD Confirmation](http://www.tecmint.com/wp-content/uploads/2016/12/Domain-Joined-to-Samba4-AD.png)
][30]
Domain Joined to Samba4 AD Confirmation
[
![Restart Windows Server for Changes](http://www.tecmint.com/wp-content/uploads/2016/12/Restart-Windows-Server-for-Changes.png)
][31]
Restart Windows Server for Changes
13. After the restart, hit Other user and log on to Windows with a Samba4 domain account that has administrative privileges, and you should be ready to move to the next step.
[
![Login to Windows Using Samba4 AD Account](http://www.tecmint.com/wp-content/uploads/2016/12/Login-to-Windows-Using-Samba4-AD-Account.png)
][32]
Login to Windows Using Samba4 AD Account
### Step 4: Administer Samba4 AD DC with RSAT
14. Microsoft Remote Server Administration Tools (RSAT), which will be further used to administer Samba4 Active Directory, can be downloaded from the following links, depending on your Windows version:
1. Windows 10: [https://www.microsoft.com/en-us/download/details.aspx?id=45520][4]
2. Windows 8.1: [http://www.microsoft.com/en-us/download/details.aspx?id=39296][5]
3. Windows 8: [http://www.microsoft.com/en-us/download/details.aspx?id=28972][6]
4. Windows 7: [http://www.microsoft.com/en-us/download/details.aspx?id=7887][7]
Once the update standalone installer package for Windows 10 has been downloaded on your system, run the installer, wait for the installation to finish and restart the machine to apply all updates.
After the reboot, open Control Panel -> Programs (Uninstall a Program) -> Turn Windows features on or off and check all Remote Server Administration Tools.
Click OK to start the installation and after the installation process finishes, restart the system.
[
![Administer Samba4 AD from Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Administer-Samba4-AD-from-Windows.png)
][33]
Administer Samba4 AD from Windows
15. To access the RSAT tools, go to Control Panel -> System and Security -> Administrative Tools.
The tools can also be found in the Administrative Tools entry of the Start menu. Alternatively, you can open the Windows MMC and add snap-ins using the File -> Add/Remove Snap-in menu.
[
![Access Remote Server Administration Tools](http://www.tecmint.com/wp-content/uploads/2016/12/Access-Remote-Server-Administration-Tools.png)
][34]
Access Remote Server Administration Tools
The most used tools, such as AD UC, DNS and Group Policy Management, can be launched directly from the Desktop by creating shortcuts using the Send to feature from the context menu.
16. You can verify RSAT functionality by opening AD UC and listing the domain computers (the newly joined Windows machine should appear in the list), or by creating a new Organizational Unit, user or group.
Verify whether the users or groups have been properly created by issuing the wbinfo command on the Samba4 server side, as sketched below.
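A minimal verification sketch on the Samba4 server; the exact output obviously depends on the users and groups you created from AD UC:
```
$ sudo wbinfo -u              # list domain users known to winbind
$ sudo wbinfo -g              # list domain groups
$ sudo samba-tool user list   # cross-check the users directly against the AD database
```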
[
![Active Directory Users and Computers](http://www.tecmint.com/wp-content/uploads/2016/12/Active-Directory-Users-and-Computers.png)
][35]
Active Directory Users and Computers
[
![Create Organizational Units and New Users](http://www.tecmint.com/wp-content/uploads/2016/12/Create-Organizational-Unit-and-Users.png)
][36]
Create Organizational Units and New Users
[
![Confirm Samba4 AD Users](http://www.tecmint.com/wp-content/uploads/2016/12/Confirm-Samba4-AD-Users.png)
][37]
Confirm Samba4 AD Users
That's it! In the next part of this series we will cover other important aspects of Samba4 Active Directory administration via RSAT, such as how to manage the DNS server, add DNS records and create a reverse DNS lookup zone, how to manage and apply domain policy, and how to create an interactive logon banner for your domain users.
------
作者简介I'm a computer addicted guy, a fan of open source and Linux based system software, have about 4 years experience with Linux distributions desktop, servers and bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
作者:[Matei Cezar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
[3]:http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/
[4]:https://www.microsoft.com/en-us/download/details.aspx?id=45520
[5]:http://www.microsoft.com/en-us/download/details.aspx?id=39296
[6]:http://www.microsoft.com/en-us/download/details.aspx?id=28972
[7]:http://www.microsoft.com/en-us/download/details.aspx?id=7887
[8]:http://www.tecmint.com/category/samba4-active-directory/
[9]:http://www.tecmint.com/how-to-synchronize-time-with-ntp-server-in-ubuntu-linux-mint-xubuntu-debian/
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Install-NTP-on-Ubuntu.png
[11]:http://www.pool.ntp.org/en/
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-NTP-Server-in-Ubuntu.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Sync-AD-with-NTP.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Query-Client-to-NTP-Server.png
[15]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
[16]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Grant-Permission-to-NTP.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Monitor-NTP-Server-Pool.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Time-Synchronization-Error.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/NTP-Server-Dropped-Leap-Not-Sync.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Force-NTP-Time-Synchronization.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/Verify-NTP-Time-Synchronization.png
[23]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
[24]:http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-to-Samba4-AD.png
[25]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-and-Samba4-AD-IP-Address.png
[26]:http://www.tecmint.com/wp-content/uploads/2016/12/Check-Samba4-AD-from-Windows.png
[27]:http://www.tecmint.com/wp-content/uploads/2016/12/Synchronize-Time-with-Internet-Server.png
[28]:http://www.tecmint.com/wp-content/uploads/2016/12/Join-Windows-Domain-to-Samba4-AD.png
[29]:http://www.tecmint.com/wp-content/uploads/2016/12/Enter-Domain-Administration-Login.png
[30]:http://www.tecmint.com/wp-content/uploads/2016/12/Domain-Joined-to-Samba4-AD.png
[31]:http://www.tecmint.com/wp-content/uploads/2016/12/Restart-Windows-Server-for-Changes.png
[32]:http://www.tecmint.com/wp-content/uploads/2016/12/Login-to-Windows-Using-Samba4-AD-Account.png
[33]:http://www.tecmint.com/wp-content/uploads/2016/12/Administer-Samba4-AD-from-Windows.png
[34]:http://www.tecmint.com/wp-content/uploads/2016/12/Access-Remote-Server-Administration-Tools.png
[35]:http://www.tecmint.com/wp-content/uploads/2016/12/Active-Directory-Users-and-Computers.png
[36]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-Organizational-Unit-and-Users.png
[37]:http://www.tecmint.com/wp-content/uploads/2016/12/Confirm-Samba4-AD-Users.png
[38]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[39]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[40]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[41]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#
[42]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/#comments

View File

@ -1,79 +1,76 @@
# Forget Technical Debt —Here'sHowtoBuild Technical Wealth
#忘记技术债务——教你如何创造技术财富
#忘记技术债务 —— 教你如何创造技术财富
电视里正播放着《老屋》节目,[Andrea Goulet][58]和她商业上的合作伙伴正悠闲地坐在客厅里商讨着他们的战略计划。那正是大家思想的火花碰撞出创新事物的时刻。他们正在寻求一种能够实现自身价值的方式——为其它公司清理遗留代码及科技债务。他们此刻的情景,像极了电视里的剧情
电视里正播放着《老屋》节目,[Andrea Goulet][58] 和她商业上的合作伙伴正悠闲地坐在客厅里商讨着他们的战略计划。那正是大家思想的火花碰撞出创新事物的时刻。他们正在寻求一种能够实现自身价值的方式 —— 为其它公司清理<ruby>遗留代码<rt>legacy code</rt></ruby>及科技债务。他们此刻的情景,像极了电视里的场景。(译者注:《老屋》电视节目提供专业的家装,家庭改建,重新装饰,创意等等信息,与软件的改造有异曲同工之处)
“我们意识到我们现在做的工作不仅仅是清理遗留代码实际上我们是在用重建老屋的方式来重构软件让系统运行更持久更稳定更高效”Goulet说。“这让我开始思考着如何让更多的公司花钱来改善他们的代码,以便让他们的系统运行更高效。就好比为了让屋子变得更实用,你不得不使用一个全新的屋顶。这并不吸引人,但却是至关重要的,然而很多人都搞错了。“
“我们意识到我们现在做的工作不仅仅是清理遗留代码实际上我们是在用重建老屋的方式来重构软件让系统运行更持久更稳定更高效”Goulet 说。“这让我开始思考公司如何花钱来改善他们的代码,以便让他们的系统运行更高效。就好比为了让屋子变得更有价值,你不得不使用一个全新的屋顶。这并不吸引人,但却是至关重要的,然而很多人都搞错了。“
如今,她是[Corgibytes][57]公司的CEO——一家提高软件现代化和进行系统重构方面的咨询公司。她曾经见过各种各样糟糕的系统遗留代码以及不计其数的严重的科技债务事件。Goulet认为创业公司需要从偿还债务思维模式向创造科技财富的思维模式转变并且要从铲除旧代码的方式向逐步修复的方式转变。她解释了这种新的方法以及如何完成这些看似不可能完成的事情——实际上是聘用大量的工程事来完成这些工作。
如今,她是 [Corgibytes][57] 公司的 CEO —— 一家提高软件现代化和进行系统重构方面的咨询公司。她曾经见过各种各样糟糕的系统,遗留代码,以及严重的科技债务事件。Goulet 认为创业公司需要转变思维模式,不是偿还债务,而是创造科技财富,不是要铲除旧代码,而是要逐步修复代码。她解释了这种新的方法,以及如何完成这些看似不可能完成的事情 —— 实际上是聘用优秀的工程师来完成这些工作。
### 反思遗留代码
关于遗留代码最广泛的定义由Michael Feathers在他的著作[修改代码的艺术][56][][55]一书中提出:遗留代码就是没有被测试的代码。这个定义比大多数人所认为的——遗留代码仅指那些古老陈旧的系统这个说法要妥当得多。但是Goulet认为这两种定义都不够明确。“随时软件周期的生长遗留代码显得毫无用处。一两年的应用程序其代码已经进入遗留状态了”她说。“最重要的是如何提高软件质量的难易程度。”
关于遗留代码最常见的定义是由 Michael Feathers 在他的著作[<ruby>《高效利用遗留代码》<rt>Working Effectively with Legacy Code</rt></ruby>][56]一书中提出:遗留代码就是没有被测试的代码。这个定义比大多数人所认为的 —— 遗留代码仅指那些古老陈旧的系统这个说法要妥当得多。但是 Goulet 认为这两种定义都不够明确。“遗留代码与软件的年头儿毫无关系。一个两年的应用程序,其代码可能已经进入遗留状态了,”她说。“关键要看软件质量提高的难易程度。”
这意味着代码写得不够清楚,缺少解释说明,没有任何关于你写的代码构件和做出这个决定的流程。一个单元测试属于一种类型的构件,也包括所有的你写那部分代码的原因以及逻辑推理相关的文档。当你去修复代码的过程中,如果没办法搞清楚原开发者的意图,那些代码就属于遗留代码了。
这意味着代码写得不够清楚,缺少解释说明,没有包含任何关于代码构思和决策制定的流程。单元测试可以有一定帮助,但也要包括所有的写那部分代码的原因以及逻辑推理相关的文档。如果想要提升代码,但没办法搞清楚原开发者的意图 —— 那些代码就属于遗留代码了。
> 遗留代码不是技术问题,而是沟通上的问题
> **遗留代码不是技术问题,而是沟通上的问题。**
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/H4y9x4gQj61G9aK4v8Kp_Screen%20Shot%202016-08-11%20at%209.16.38%20AM.png)
如果你像Goulet所说的那样迷失在遗留代码里你会发现每一次的沟通交流过程都会变得像那条鲜为人知的[康威定律][54]所描述的一样。
如果你像 Goulet 所说的那样迷失在遗留代码里,你会发现每一次的沟通交流过程都会变得像那条[<ruby>康威定律<rt>Conways Law</rt></ruby>][54]所描述的一样。
Goulet说“这个定律认为系统的基础架构能反映出你们整个公司的组织沟通结构,如果想修复你们公司的遗留代码而没有一个好的组织沟通方式是不可能完成的。那是很多人都没注意到的一个重要环节。”
Goulet 说:“这个定律认为系统的基础架构能反映出整个公司的组织沟通结构,如果想修复公司的遗留代码而没有一个好的组织沟通方式是不可能完成的。那是很多人都没注意到的一个重要环节。”
Goulet和她的团队成员更像是考古学家一样来研究遗留系统项目。他们根据前开发者写的代码构件相关的线索来推断出他们的思想意图。然后再根据这些构件之间的关系来出新的决策。
Goulet 和她的团队成员更像是考古学家一样来研究遗留系统项目。他们根据前开发者写的代码构件相关的线索来推断出他们的思想意图。然后再根据这些构件之间的关系来出新的决策。
最重要的代码构件是什么呢?良好的代码结构、清晰的思想意图、整洁的代码。例如如果你使用了通用的名称如”foo“或”bar“来命名一个变量半年后你再返回来看这段代码时根本就看不出这个变量的用途是什么。
最重要的代码是什么样子呢?**良好的代码结构、清晰的思想意图、整洁的代码**。例如,如果使用通用的名称如 “foo” 或 “bar” 来命名一个变量,半年后再返回来看这段代码时,根本就看不出这个变量的用途是什么。
如果代码读起来很困难,可以使用源代码控制系统,这是一个非常有用的构件,因为从该构件可以看出代码的历史修改信息,这为软件开发者写明他们作出本次修改的原因提供一个很好的途径
如果代码读起来很困难,可以使用源代码控制系统,这是一个非常有用的工具,因为它可以提供代码的历史修改信息,并允许软件开发者写明他们作出本次修改的原因。
Goulet说:”我一个朋友认为对于代码注释的信息,如有需要,每一个概要部分的内容应该有推文的一半多,而代码的描述信息应该有一篇博客那么长。你得用这个方式来为你修改的代码写一个合理的说明。这也不会浪费太多的时间,并且给后期的项目开发者提供更多有用的信息,但是让人惊讶的是没人会这么做。我们经常听到一些很沮丧的开发人员在调试一段代码的过程中报怨这是谁写的这烂代码,最后发现还不是他们自己写的。“
Goulet 说:“我一个朋友认为提交代码时附带的信息,如需要,每一个概要部分的内容应该有推文的一半多,而代码的描述信息应该有一篇博客那么长。你得用这个方式来为你修改的代码写一个合理的说明。这不会浪费太多额外的时间,并且能给后期的项目开发者提供非常多的有用的信息,但是让人惊讶的是很少有人会这么做。我们经常听到一些开发人员在调试代码的过程中,很沮丧的报怨这是谁写的这烂代码,最后发现还不是他们自己写的。”
使用自动化测试对于理解程序的流程非常有用。Goulet解释道“很多人都比较认可Michael Feathers提出的关于遗留代码的定义尤其是我们与[行为驱动开发模式][53]相结合的过程中使用测试套件,比如编写测试场景,这对于理解开发者的意图来说是非常有用的工具。
使用自动化测试对于理解程序的流程非常有用。Goulet 解释道:“很多人都比较认可 Michael Feathers 提出的关于遗留代码的定义。测试套件对于理解开发者的意图来说是非常有用的工具,尤其当用来与[<ruby>行为驱动开发模式<rt>Behavior Driven Development</rt></ruby>][53]相结合时,比如编写测试场景。”
理由很简单,如果你想把遗留代码的程度降到最低,你得多注意下代码的易理解性以及将来回顾该代码的一些细节上。编写并运行单元程序、接受、认可,并且进行集成测试,写清楚注释的内容。方便以后你自己或是别人来理解你写的代码。
理由很简单,如果你想利用好遗留代码,你得多注意使代码在将来易于理解和工作的一些细节上。编写并运行单元程序、接受、认可,并且进行集成测试,写清楚注释的内容。方便以后你自己或是别人来理解你写的代码。
尽管如此,由于很多已知的和不可意料的原因,遗留代码仍然会发生。
在创业公司刚成立初期,公司经常会急于推出很多新的功能。开发人员在巨大的压力下一边完成项目交付一边测试系统缺陷。Corgibytes团队就遇到过好多公司很多年都懒得对系统做详细的测试了。
在创业公司刚成立初期,公司经常会急于推出很多新的功能。开发人员在巨大的交付压力下测试常常半途而废。Corgibytes 团队就遇到过好多公司很多年都懒得对系统做详细的测试了。
确实如此,当你急于开发出系统原型的时候,强制性地去做太多的系统测试也许意义不大。但是,一旦产品开发完成并投入使用后,你就不得投入大量的时间精力来维护及完善系统。“很多人觉得运维没什么好担心的,重要的是产品功能特性上的强大。如果真这样,当系统规模到一定程序的时候,就很难再扩展了。同时也就失去市场竞争力了。
确实如此,当你急于开发出系统原型的时候,强制性地去做太多的测试也许意义不大。但是,一旦产品开发完成并投入使用后,你就需要投入时间精力来维护及完善系统了。Goulet 说:“很多人觉得运维没什么好担心的,重要的是产品功能特性上的强大。如果真这样,当系统规模到一定程度的时候,就很难再扩展了。同时也就失去市场竞争力了。”
最后才明白过来,原来热力学第二定律对你们公司的代码也同样适用:你所面临的一切将向熵增的方向发展。你需要与混乱无序的技术债务进行一场无休无止的战斗。并且随着时间的增长,遗留代码也逐渐变成一种简单类型的债务。
最后才明白过来,原来热力学第二定律对代码也同样适用:你所面临的一切将向熵增的方向发展。你需要与混乱无序的技术债务进行一场无休无止的战斗。遗留代码随着时间的增长,也逐渐变成一种债务。
她说“我们再次拿家来做比喻。你必须坚持每天收拾餐具打扫卫生倒垃圾。如果你不这么做情况将来越来越糟糕直到有一天你不得不向HazMat团队求助。”
她说:“我们再次拿家来做比喻。你必须坚持每天收拾餐具,打扫卫生,倒垃圾。如果你不这么做,情况将来越来越糟糕,直到有一天你不得不向 HazMat 团队求助。”(译者注HazMat 团队,危害物质专队)
就跟这种情况一样Corgibytes团队接到很多公司CEO的求助电话比如Features公司的CEO在电话里抱怨道“现在我们公司的开发团队工作效率太低了三年前只需要两个星期就完成的工作现在却要花费12个星期。”
就跟这种情况一样Corgibytes 团队接到很多公司 CEO 的求助电话,比如 Features 公司的 CEO 在电话里抱怨道“现在我们公司的开发团队工作效率太低了三年前只需要两个星期就完成的工作现在却要花费12个星期。”
> 技术债务往往反应出公司运作上的问题
> **技术债务往往反应出公司运作上的问题。**
很多公司的CEO明知会发生技术债务的问题但是他们也难让其它同事相信花钱来修复那些已经存在的问题是很值的。这看起来像是在走回头路,很乏味或者没有新的产品。有些公司直到系统已经严重影响了日常工作效率时才着手去处理技术债务方面的问题,那时付出的代价就太高了。
很多公司的 CTO 明知会发生技术债务的问题,但是他们很难说服其它同事相信,花钱来修复那些已经存在的问题是值得的。这看起来像是在走回头路,很乏味或者没有新的产品。有些公司直到系统已经严重影响了日常工作效率时才着手去处理这些技术债务方面的问题,那时付出的代价就太高了。
### 忘记债务,创造技术财富
# 推荐文章
如果你想把[<ruby>重构技术债务<rt>reframe your technical debt</rt></ruby>][52] — [敏捷开发讲师 Declan Whelan 最近造出的一个术语][51] — 作为一个积累技术财富的机会,你很可能要先说服你们公司的 CEO、投资者和其它的股东接受并为之共同努力。
如果你想把[重构技术债务][52]作为一个积累技术财富的机会-[敏捷开发讲师Declan Whelan最近提到的一个术语][51]你很可能要先说服你们公司的CEO、投资者和其它的股东登上这条财富之船。
“我们没必要把技术债务想像得很可怕。当产品处于开发设计初期技术债务反而变得非常有用”Goulet 说。“当你解决一些系统遗留的技术问题时,你会充满成就感。例如,当你在自己家里安装新窗户时,你确实会花费一笔不少的钱,但是之后你每个月就可以节省 100 美元的电费。程序代码亦是如此。虽然暂时没有提高工作效率,但随时时间推移将提高生产力。”
“我们没必要把技术债务想像得很可怕。当产品处于开发设计初期技术债务反而变得非常有用”Goulet说。“当你解决一些系统遗留的技术问题时你会充满成就感。例如当你在自己家里安装新窗户时你确实会花费一笔不少的钱但是之后你每个月就可以节省100美元的电费。程序代码亦是如此。这虽然暂时没有提高工作效率但是随时时间地推移将为你们公司创造更多的生产率。“
一旦你意识到项目团队工作不再富有成效时,就需要确认下是哪些技术债务在拖后腿了。
一旦你意识到项目团队工作不再富有成效时,你必须要确认下是哪些技术债务在拖后腿了。
“我跟很多不惜一切代价招募英才的初创公司交流过,他们高薪聘请一些工程师来只为了完成更多的工作。”她说。“与此相反,他们应该找出如何让原有的每个工程师能更高效率工作的方法。你需要去解决什么样的技术债务以增加额外的生产率?”
“我跟很多不惜一切代价招募英才的初创公司交流过,他们高薪聘请一些工程师来只为了完成更多的工作。”她说。”相反的是,他们应该找出如何让原有的每个工程师都更高效率工作的方法。你需要去解决什么样的技术债务以增加额外的生产率?"
如果你改变自己的观点并且专注于创造技术财富,你将会看到产能过剩的现象,然后重新把多余的产能投入到修复更多的技术债务和遗留代码的良性循环中。你们的产品将会走得更远,发展得更好。
如果你改变自己的观点并且专注于创造技术财富,你将会看到产能过剩的现象,然后重新把多余的产能投入到修复更多的技术债务和遗留代码的的良性循环中。你们的产品将会走得更远,发展得更好。
> **别把你们公司的软件当作一个项目来看。从现在起,把它想象成一栋自己要长久居住的房子。**
> 别想着把你们公司的软件当作一个项目来看。从现在起,你把它想象成一栋自己要长久居住的房子。
“这是一个极其重要的思想观念的转变”Goulet 说。“这将带你走出短浅的思维模式,并让你比之前更加关注产品的维护工作。”
“这是一个极其重要的思想观念的转变”Goulet说。“这将带你走出短浅的思维模式并且你会比之前更加关注产品的维护工作。”
这就像对一栋房子,要实现其现代化及维护的方式有两种:小动作,表面上的更改(“我买了一块新的小地毯!”)和大改造,需要很多年才能偿还所有债务(“我想我们应替换掉所有的管道...”)。你必须考虑好两者,才能让你们已有的产品和整个团队顺利地运作起来。
就像一栋房子,要实现其现代化的改造方式有两种:小动作,表面上的更改(“我买了一块新的小地毯!”)和大改造,需要很多年才能偿还所有债务(“我假设我们将要替换掉所有的管道...")。你必须考虑好两者才能你们已有的产品和整个团队顺利地运作起来。
还需要提前预算好 —— 否则那些较大的花销将会是硬伤。定期维护是最基本的预期费用。让人震惊的是,很多公司在商务上都没把维护成本预算进来。
还需要提前预算好——否则那些较大的花销将会是硬伤。定期维护是最基本的预期费用。让人震惊的是,很多公司在商务上都没把维护成本预算进来
就是 Goulet 提出“**<ruby>软件重构<rt>software remodeling</rt></ruby>**”这个术语的原因。当你房子里的一些东西损坏的时候,你并不是铲除整个房子,从头开始重建。同样的,当你们公司出现老的,损坏的代码时,重写代码通常不是最明智的选择
这就是Goulet提出软件重构这个术语的原因。当你房子里的一些东西损坏的时候你不用铲除整个房子而是重新修复坏掉的那一部分就可以了。同样的当你们公司出现老的损坏的代码时重写代码通常不是最明智的选择。
下面是Corgibytes公司在重构客户代码用到的一些方法
下面是 Corgibytes 公司在重构客户代码用到的一些方法:
* 把大型的应用系统分解成轻量级的更易于维护的微服务。
* 相互功能模块之间降低耦合性以便于扩展。
@ -81,134 +78,132 @@ Goulet说”我一个朋友认为对于代码注释的信息如有需要
* 集合自动化测试来检查代码可用性。
* 重构或者修改代码库来提高易用性。
系统重构也进入到运维领域。比如Corgibytes公司经常推荐新客户使用[Docker][50],以便简单快速的部属新的开发环境。当你们公司有30个工程师的时候把初始化配置时间从10小时减少到10分钟对完成更多的工作很有帮助。系统重构不仅仅是应用于软件开发本身也包括如何进行系统重构。
系统重构也进入到运维领域。比如Corgibytes 公司经常推荐新客户使用 [Docker][50],以便简单快速的部署新的开发环境。当你们团队有30个工程师的时候把初始化配置时间从 10 小时减少到 10 分钟对完成更多的工作很有帮助。系统重构不仅仅是应用于软件开发本身,也包括如何进行系统重构。
如果你知道有什么新的技术能让你们的代码管理起来更容易,创建更高效,就应该把这它们写入到每年或季度项目规划中。别指望它们会自动呈现出来。但是也别给自己太大的压力来马上实施它们。Goulets看到很多公司从一开始就这些新的技术进行100%覆盖率测试而陷入困境。
如果你知道做些什么能让你们的代码管理起来更容易更高效,就应该把这它们写入到每年或季度项目规划中。别指望它们会自动呈现出来。但是也别给自己太大的压力来马上实施它们。Goulets 看到很多公司从一开始就致力于100% 覆盖率测试而陷入困境。
具体来说,每个公司都应该把以下三种类型的重构工作规划到项目建设中来:
**具体来说,每个公司都应该把以下三种类型的重构工作规划到项目建设中来:**
* 自动化测试
* 持续性交付
* 文化提升
咱们来深入的了解下每一项内容
咱们来深入的了解下每一项内容
自动化测试
**自动化测试**
“有一位客户即将进行第二轮融资,但是他们没办法在短期内招聘到足够的人才。我们帮助他们引进了一种自动化测试框架这让他们的团队在3个月的时间内工作效率翻了一倍”Goulets说。“这样他们就可以在他们的投资人面前自豪的说”我们的一个精英团队完成的任务比两个普通的团队要多。“
“有一位客户即将进行第二轮融资,但是他们没办法在短期内招聘到足够的人才。我们帮助他们引进了一种自动化测试框架,这让他们的团队在 3 个月的时间内工作效率翻了一倍”Goulets说。“这样他们就可以在他们的投资人面前自豪的说‘我们一个精英团队完成的任务比两个普通的团队要多。’
自动化测试从根本上来讲就是单个测试的组合。你可以使用单元测试再次检查某一行代码。可以使用集成测试来确保系统的不同部分都正常运行。还可以使用验收性测试来检验系统的功能特性是否跟你想像的一样。当你把这些测试写成测试脚本后,你只需要简单地用鼠标点一下按钮就可以让系统自行检验了,而不用手工的去梳理并检查每一项功能。
在产品的市场定位前就来制定自动化测试机制是有些言之过早了。但是如果你有一款信心满满的产品,并且也很依赖客户,那就更应该把这件事考虑在内了。
在产品市场尚未打开之前就来制定自动化测试机制有些言之过早。但是一旦你有一款感到满满,并且客户也很依赖的产品,就应该把这件事付诸实施了。
持续性交付
**持续性交付**
这是与自动化交付相关的工作,过去是需要人工完成。目的是当系统部分修改完成时可以迅速进行部属,并且短期内得到反馈。这给公司在其它竞争对手面前有很大的优势,尤其是在售后服务行业。
这是与自动化交付相关的工作,过去是需要人工完成。目的是当系统部分修改完成时可以迅速进行部署,并且短期内得到反馈。这使公司在其它竞争对手面前有很大的优势,尤其是在售后服务行业。
“比如说你每次部属系统时环境都很复杂。熵值无法有效控制”Goulets说。“我们曾经花了12个小时甚至更多的时间来部属一个很大的集群环境。然而想必你将来也不会经常干部属新环境这样的工作。因为太折腾人了而且还推迟了系统功能上线的时间。同时你也落后于其它公司并失去竞争力了。
“比如说你每次部署系统时环境都很复杂。熵值无法有效控制”Goulets 说。“我们曾经见过花 12 个小时甚至更多的时间来部署一个很大的集群环境。在这种情况下,你不会愿意频繁部署了。因为太折腾人了,你还会推迟系统功能上线的时间。这样,你将落后于其它公司并失去竞争力。”
在持续性改进的过程中常见的其它自动化任务包括:
**在持续性改进的过程中常见的其它自动化任务包括:**
* 在提交完成之后检查中断部分。
*   在提交完成之后检查中断部分。
* 在出现故障时进行回滚操作。
* 审查自动化代码的质量。
* 根据需求增加或减少服务器硬件资源。
* 让开发,测试及生产环境配置简单易懂。
举一个简单的例子比如说一个客户提交了一个系统Bug报告。开发团队越高效解决并修复那个Bug越好。对于开发人员来说修复Bug的挑战根本不是个事儿这本来也是他们的强项主要是系统设置上不够完善导致他们浪费太多的时间去处理bug以外的其它问题。
举一个简单的例子,比如说一个客户提交了一个系统 Bug 报告。开发团队越高效解决并修复那个 Bug 越好。对于开发人员来说,修复 Bug 的挑战根本不是个事儿,这本来也是他们的强项,主要是系统设置上不够完善导致他们浪费太多的时间去处理 bug 以外的其它问题。
使用持续改进的方式时,在你决定哪些工作应该让机器去做还是最好丢给研发去完成的时候,你会变得很严肃无情。如果选择让机器去处理,你得使其自动化完成。这样也能让研发很愉快地去解决其它有挑战性的问题。同时客户也会很高兴地看到他们报怨的问题被快速处理了。你的待修复的未完成任务数减少了,之后你就可以把更多的时间投入到运用新的方法来提高公司产品质量上了。
使用持续改进的方式时,在你决定哪些工作应该让机器去做,哪些最好交给研发去完成的时候,你会变得更干脆了。如果机器更擅长,那就使其自动化完成。这样也能让研发愉快地去解决其它有挑战性的问题。同时客户也会很高兴地看到他们报怨的问题被快速处理了。你的待修复的未完成任务数减少了,之后你就可以把更多的时间投入到运用新的方法来提高公司产品质量上了。**这是创造科技财富的一种转变。**因为开发人员可以修复 bug 后立即发布新代码,这样他们就有时间和精力做更多事。
你必须时刻问自己我应该如何为我们的客户改善产品功能如何做得更好如何让产品运行更高效Goulets说。“一旦你回答完这些问题后你就得询问下自己如何自动去完成那些需要改善的功能”
你必须时刻问自己,‘我应该如何为我们的客户改善产品功能?如何做得更好?如何让产品运行更高效?’不过还要不止于此。”Goulets 说。“一旦你回答完这些问题后,你就得询问下自己如何自动去完成那些需要改善的功能
提升企业文化
**提升企业文化**
Corgibytes公司每天都会到同样的问题一家创业公司建立了一个对开发团队毫无影响的文化环境。公司CEO抱着双臂思考着为什么这样的环境对员工没多少改变。然而事实却是公司的企业文化观念与他们是截然相反的。为了激烈你们公司的工程师,你必须全面地了解他们的工作环境。
Corgibytes 公司每天都会遇到同样的问题:一家创业公司建立了一个对开发团队毫无影响的文化环境。公司 CEO 抱着双臂思考着为什么这样的环境对员工没多少改变。然而事实却是,公司的企业文化对工作并不利。为了激励工程师,你必须全面地了解他们的工作环境。
为了证明这一点Goulet引用了作者Robert Henry说过的一段话
为了证明这一点Goulet 引用了作者 Robert Henry 说过的一段话:
> 目的不是创造艺术,而是在最美妙的状态下让艺术应运而生。
> **目的不是创造艺术,而是在最美妙的状态下让艺术应运而生。**
“也就是说你得开始思考一下你们公司的产品,“她说。”你们的企业文件就应该跟自己的产品一样。你们的目标是永远创造一个让艺术品应运而生的环境,这件艺术品就是你们公司的代码,一流的售后服务、充满幸福感的员工、良好的市场、盈利能力等等。这些都息息相关。“
“你们也要开始这样思考一下你们的软件,”她说。“你们的企业文件就类似状态。你们的目标是总能创造一个让艺术品应运而生的环境,这件艺术品就是你们公司的代码,一流的售后服务、充满幸福感的开发者、良好的市场、盈利能力等等。这些都息息相关。”
优先考虑公司的技术债务和遗留代码也是一种文化。那才是真正能让开发团队深受影响的方法。同时,这也会让你将来有更多的时间精力去完成更重要的工作。如果你不从根本上改变固有的企业文化环境,你就不可能重构公司产品。改变你所有的对产品维护及现代化上投资的态度是开始实施变革的第一步最理想情况是从公司的CEO开始转变。
优先考虑公司的技术债务和遗留代码也是一种文化。那是真正为开发团队清除障碍,以制造影响的方法。同时这也会让你将来有更多的时间精力去完成更重要的工作。如果你不从根本上改变固有的企业文化环境你就不可能重构公司产品。改变对产品维护及现代化上投资的态度是开始实施变革的第一步最理想情况是从公司的CEO开始转变。
以下是Goulet关于建立那种流态文化方面提出的建议
以下是 Goulet 关于建立那种流态文化方面提出的建议:
* 反对公司嘉奖那些加班到深夜的”英雄“。提倡高效率的工作方式。
* 了解协同开发技术比如Woody Zuill提出的[暴徒编程][44][][43]模式。
* 遵从4个[现代敏捷开发][42] 原则:用户至上、实践及快速学习、把系统安全放在首位、持续交付价值。
*   反对公司嘉奖那些加班到深夜的“英雄”。提倡高效率的工作方式。
*   了解协同开发技术,比如 Woody Zuill 提出的[<ruby>合作编程<rt>Mob Programming</rt></ruby>][44]模式。
* 遵从 4 个[现代敏捷开发][42] 原则:用户至上、实践及快速学习、把系统安全放在首位、持续交付价值。
* 每周为研发提供项目外的职业发展时间。
* 把[日工作记录]作为一种驱动开发团队主动解决问题的方式。
* 把同情心放在第一位。Corgibytes公司让员工参加[Brene Brown勇气工厂][40]的培训是非常有用的。
* 把[日工作记录][43]作为一种驱动开发团队主动解决问题的方式。
* 把同情心放在第一位。Corgibytes 公司让员工参加 [Brene Brown 勇气工厂][40]的培训是非常有用的。
”如果公司高管和投资者不支持这种文件升级方式你得从客户服务的角度去说服他们“Goulet说告诉他们通过这次调整后,最终产品将如何给公司的大客户提高更好的体验。这是你能做的一个很有力的论点。
“如果公司高管和投资者不支持这种升级方式你得从客户服务的角度去说服他们”Goulet 说,“告诉他们通过这次调整后,最终产品将如何给公司的大客户提高更好的体验。这是你能做的一个很有力的论点。
### 寻找最具天的代码重构者
### 寻找最具天的代码重构者
整个行业都认为那些顶尖的工程师不愿意干修复遗留代码的工作。他们只想着去开发新的东西。大家都说把他们留在维护部门真是太浪费人才了。
整个行业都认为顶尖的工程师不愿意干修复遗留代码的工作。他们只想着去开发新的东西。大家都说把他们留在维护部门真是太浪费人才了。
其实这些都是误解。如果你知道如何寻找到那些技术精湛的工程师以为他们提供一个愉快的工作环境,你可以安排他们来帮你解决那些最棘手的技术债务问题。
其实这些都是误解。如果你知道去哪里和如何找工程师,并为他们提供一个愉快的工作环境,你就可以找到技术非常精湛的工程师,来帮你解决那些最棘手的技术债务问题。
”每一次开会的时候我们都会问现场的同事谁喜欢去干遗留代码的工作但是也只有那么不到10%的同事会举手。“Goulet说。”但是当我跟这些人交流的过程中我发现这些工程师恰好是喜欢最具挑战性工作的人才。“
“每一次开会的时候,我们都会问现场的同事‘谁喜欢去干遗留代码的工作?’每次只有不到 10% 的同事会举手。”Goulet 说。“但是我跟这些人交流后,我发现这些工程师恰好是喜欢最具挑战性工作的人才。”
有一位客户来寻求她的帮助,他们使用国产的数据库,没有任何相关文档,也没有一种有效的方法来弄清楚他们公司的产品架构。她称那些类似于面包和黄油的一类工程师为”修正者“。在Corgibytes公司她有一支这样的修正者团队由她支配他们没啥爱好只喜欢通过研究二进制代码来解决技术问题。
有一位客户来寻求她的帮助,他们使用国产的数据库,没有任何相关文档,也没有一种有效的方法来弄清楚他们公司的产品架构。她称修理这种情况的一类工程师为“修正者”。在Corgibytes公司她有一支这样的修正者团队由她支配热衷于通过研究二进制代码来解决技术问题。
![](https://s3.amazonaws.com/marquee-test-akiaisur2rgicbmpehea/BeX5wWrESmCTaJYsuKhW_Screen%20Shot%202016-08-11%20at%209.17.04%20AM.png)
那么如何才能找到这些技术人才呢Goulet尝试过各种各样的方法其中有一些方法还是富有成效的。
那么,如何才能找到这些技术人才呢? Goulet 尝试过各种各样的方法,其中有一些方法还是富有成效的。
她创办了一个社区网站[legacycode.rocks][49]并且在招聘启示上写道:”长期招聘那些喜欢重构遗留代码的另类开发人员...如果你以从事处理遗留代码的工作为自豪,欢迎加入!
她创办了一个社区网站 [legacycode.rocks][49] 并且在招聘启示上写道:“长期招聘那些喜欢重构遗留代码的另类开发人员...如果你以从事处理遗留代码的工作为自豪,欢迎加入!
”我刚开始收到很多这些人发来邮件说,’噢,天呐,我也属于这样的开发人员!‘“她说。”我开始发布这条信息,并且告诉他们这份工作是非常有意义的,以吸引合适的人才“
“我开始收到很多人发来邮件说,‘噢,天呐,我也属于这样的开发人员!’”她说。“只需要发布这条信息,并且告诉他们这份工作是非常有意义的,就吸引了合适的人才。”
推荐文章
在招聘的过程中,她也会使用持续性交付的经验来回答那些另类开发者想知道的信息:包括详细的工作内容以及明确的要求。“我这么做的原因是因为我讨厌重复性工作。如果我收到多封邮件来咨询同一个问题,我会把答案发布在网上,我感觉自己更像是在写说明文档一样。”
在招聘的过程中,她也会使用持续性交付的经验来回答那些另类开发者想知道的信息:包括详细的工作内容以及明确的要求。我这么做的原因是因为我讨厌重复性工作。如果我收到多封邮件来咨询同一个问题,我会把答案发布在网上,我感觉自己更像是在写说明文档一样。”
但是随着时间的推移,她发现可以重新定义招聘流程来帮助她识别出更出色的候选人。比如说,她在应聘要求中写道,“公司 CEO 将会重新审查你的简历,因此请确保求职信中致意时不用写明性别。所有以‘尊敬的先生’或‘先生’开头的信件将会被当垃圾处理掉”。这些只是她的招聘初期策略。
但是随着时间的推移她注意到她会重新定义招聘流程来帮助她识别出更出色的候选人。比如说她在应聘要求中写道“公司CEO将会重新审查你的简历因此确保邮件发送给CEO时”不用写明性别。所有以“尊敬的先生或女士”开头的信件将会被当垃圾处理掉。然后这只不过是她招聘初期的策略而已。
“我开始这么做是因为很多申请人把我当成一家软件公司的男性 CEO这让我很厌烦”Goulet 说。“所以,有一天我想我应该它当作应聘要求放到网上,看有多少人注意到这个问题。令我惊讶的是,这让我过滤掉一些不太严谨的申请人。还突显出了很多擅于从事遗留代码方面工作的人。”
“我开始这么做是因为很多申请人把我当成一家软件公司的男性CEO这让我很厌烦”Goulet说。“所有有一天我想我应该它当作应聘要求放到网上看有多少人注意到这个问题。令我惊讶的是这让我过滤掉一些不太严谨的申请人。还突显出了很多擅于从事遗留代码方面工作的人。
Goulet 想起一个应聘者发邮件给我说,“我查看了你们网站的代码(我喜欢这个网站,这也是我的工作)。你们的网站架构很奇特,好像是用 PHP 写的,但是你们却运行在用 Ruby 语言写的 Jekyll 下。我真的很好奇那是什么呢。”
Goulet想起一个应聘者发邮件给我说“我查看了你们网站的代码我喜欢这个网站以及你们打招呼的方式这就是我所希望的。你们的网站架构很奇特好像是用PHP写的但是你们却运行在用Ruby语言写的Jekyll下。我真的很好奇那是什么呢。
Goulet 从她的设计师那里得知,原来,在 HTML、CSS 和 JavaScript 文件中有一个未使用的 PHP 类名她一直想解决这个问题但是一直没机会。Goulet 的回复是:“你正在找工作吗?
原来是这样的Goulet从她的设计师那里得知在HTML、CSS和JavaScript文件中有一个未使用的PHP类名她一直想解决这个问题但是一直没机会。她的回复是“你正在找工作吗
另外一名候选人注意到她曾经在一篇说明文档中使用 CTO 这个词,但是她的团队里并没有这个头衔(她的合作伙伴是 Chief Code Whisperer。这些注重细节、充满求知欲、积极主动的候选者更能引起她的注意。
另外一名候选人注意到她曾经在一篇说明文档中使用CTO这个词但是她的团队里并没有这个头衔她的合作伙伴是首席代码语者。其次是那些注重细节、充满求知欲、积极主动的候选者更能引起她的注意。
> **代码修正者不仅需要注重细节,而且这也是他们必备的品质。**
> 代码修正者不仅需要注重细节,而且这也是他们必备的品质。
让人吃惊的是Goulet 从来没有为招募最优秀的代码修正者而感到厌烦过。“大多数人都是通过我们的网站直接投递简历,但是当我们想扩大招聘范围的时候,我们会通过 [PowerToFly][48] 和 [WeWorkRemotely][47] 网站进行招聘。我现在确实不需要招募新人马了。他们需要经历一段很艰难的时期才能理解代码修正者的意义是什么。”
让人吃惊的是Goulet从来没有为招募最优秀的代码修正者而感到厌烦过。”大多数人都是通过我们的网站直接投递简历但是当我们想扩大招聘范围的时候我们会通过[PowerToFly][48]和[WeWorkRemotely][47]网站进行招聘。我现在确实不需要招募新人马了。他们需要经历一段很艰难的时期才能理解代码修正者的意义是什么。“
如果他们通过首轮面试Goulet 将会让候选者阅读一篇 Arlo Belshee 写的文章“[<ruby>命名是一个过程<rt>Naming is a Process</rt></ruby>][46]”。它讲的是非常详细的处理遗留代码的的过程。她最经典的指导方法是:“阅读完这段代码并且告诉我,你是怎么理解的。”
如果他们通过首轮面试Goulet将会让候选者阅读一篇Arlo Belshee写的文章”[命名是一个过程][46]“。它讲的是非常详细的处理遗留代码的的过程。她最经典的指导方法是:”阅读完这段代码并且告诉我,你是怎么理解的。“
她将找出对问题的理解很深刻并且也愿意接受文章里提出的观点的候选者。这对于区分有深刻理解的候选者和仅仅想获得工作的候选者中来说,是极其有用的办法。她强烈要求候选者找出一段与他操作相关的代码,来证明他是充满激情的、有主见的及善于分析问题的人。
她将找出对问题的理解很深刻并且也愿意接受文章里提出的观点候选者。这对于筛选出有坚定信念的想被雇用的候选者来说是极其有用的办法。她强力要求候选者找出一段与你操作相关的最关键的代码来证明你是充满激情的、有主见的及善于分析问题的人
最后,她会让候选者跟公司里当前的团队成员一起使用 [Exercism.io][45] 工具进行编程。这是一个开源项目,它允许开发者学习如何在不同的编程语言环境下使用一系列的测试驱动开发的练习进行编程。第一部分的协同编程课程允许候选者选择其中一种语言进行内建。下一个练习中,面试者可以选择一种语言进行编程。他们总能看到那些人处理异常的方法、随机应便的能力以及是否愿意承认某些自己不了解的技术
最后,她会让候选者跟公司里当前的团队成员一起使用[Exercism.io][45]工具进行编程。这是一个开源项目,它允许开发者学习如何在不同的编程语言环境下使用一系列的测试驱动开发的练习进行编程。第一部分的协同编程课程允许候选者选择其中一种语言进行内建。下一个练习中,面试者可以选择一种语言进行编程。他们总能看到那些人处理异常的方法、随机应便的能力以及是否愿意承认某些自己不了解 的技术
“当一个人真正的从执业者转变为大师的时候他会毫不犹豫的承认自己不知道的东西”Goulet说
“当一个人真正的从专家转变为大师的时候他才会毫不犹豫的承认自己不知道的东西“Goulet说。
让他们使用自己不熟悉的编程语言来写代码,也能衡量其坚韧不拔的毅力。“我们想听到某个人说,‘我会深入研究这个问题直到彻底解决它。’也许第二天他们仍然会跑过来跟我们说,‘我会一直留着这个问题直到我找到答案为止。’那是作为一个成功的修正者表现出来的一种气质。”
让他们使用自己不熟悉的编程语言来写代码也能衡量其坚韧不拔的毅力。”我们想听到某个人说,‘我会深入研究这个问题直到彻底解决它。“也许第二天他们仍然会跑过来跟我们说,’我会一直留着这个问题直到我找到答案为止。‘那是作为一个成功的修正者表现出来的一种气质。“
> **产品开发人员在我们这个行业很受追捧,因此很多公司也想让他们来做维护工作。这是一个误解。最优秀的维护修正者并不是最好的产品开发工程师。**
> 如果你认为产品开发人员在我们这个行业很受追捧,因此很多公司也想让他们来做维护工作。那你可错了。最优秀的维护修正者并不是最好的产品开发工程师。
如果一个有天赋的修正者在眼前Goulet懂得如何让他走向成功。下面是如何让这种类型的开发者感到幸福及高效工作的一些方式
如果一个有天赋的修正者在眼前Goulet 懂得如何让他走向成功。下面是如何让这种类型的开发者感到幸福及高效工作的一些方式:
* 给他们高度的自主权。把问题解释清楚,然后安排他们去完成,但是永不命令他们应该如何去解决问题。
* 如果他们要求升级他们的电脑配置和相关工具,尽管去满足他们。他们明白什么样的需求才能最大限度地提高工作效率。
* 帮助他们[避免更换任务][39]。他们喜欢全身心投入到某一个任务直至完成。
总之这些方法已经帮助Corgibytes公司培养出20几位对遗留代码充满激情的专业开发者。
总之,这些方法已经帮助 Corgibytes 公司培养出 20 几位对遗留代码充满激情的专业开发者。
### 稳定期没什么不好
大多数创业公司都都不想跳过他们的成长期。一些公司甚至认为成长期应该是永无止境的。而且,他们觉得也没这个必要,即便他们已经进入到了下一个阶段:稳定期。完全进入到稳定期意味着你可以利用当前的人力资源及管理方法在创造技术财富和消耗资源之间做出一个正确的选择
大多数创业公司都都不想跳过他们的成长期。一些公司甚至认为成长期应该是永无止境的。而且,他们觉得也没这个必要,即便他们已经进入到了下一个阶段:稳定期。完全进入到稳定期意味着你拥有人力资源及管理方法来创造技术财富,同时根据优先权适当支出
”在成长期和稳定期之间有个转折点就是维护人员必须要足够壮大并且你开始更公平的对待维护人员以及专注新功能的产品开发人员“Goulet说。”你们公司的产品开发完成了。现在你得让他们更加稳定地运行。“
“在成长期和稳定期之间有个转折点就是维护人员必须要足够壮大并且相对于专注新功能的产品开发人员你开始更公平的对待维护人员”Goulet说。“你们公司的产品开发完成了。现在你得让他们更加稳定地运行。”
这就意味着要把公司更多的预算分配到产品维护及现代化方面。”你不应该把产品维护当作是一个不值得关注的项目,“她说。”这必须成为你们公司固有的一种企业文化——这将帮助你们公司将来取得更大的成功。“
这就意味着要把公司更多的预算分配到产品维护及现代化方面。“你不应该把产品维护当作是一个不值得关注的项目,”她说。“这必须成为你们公司固有的一种企业文化 —— 这将帮助你们公司将来取得更大的成功。“
最终,你通过这么努力创建的技术财富将会为你的团队带来一大批全新的开发者:他们就像侦查兵一样,有充足的时间和资源去探索新的领域,挖掘新客户资源并且给公司创造更多的机遇。当你们在新的市场领域做得更广泛并且不断发展得更好——那么你们公司已经真正地进入到繁荣发展的状态了。
最终,你通过这些努力创建的技术财富,将会为你的团队带来一大批全新的开发者:他们就像侦查兵一样,有充足的时间和资源去探索新的领域,挖掘新客户资源并且给公司创造更多的机遇。当你们在新的市场领域做得更广泛并且不断发展得更好 —— 那么你们公司已经真正地进入到繁荣发展的状态了。
--------------------------------------------------------------------------------
@ -218,7 +213,7 @@ via: http://firstround.com/review/forget-technical-debt-heres-how-to-build-techn
译者:[rusking](https://github.com/rusking)
校对:[校对者ID](https://github.com/校对者ID)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -265,7 +260,7 @@ via: http://firstround.com/review/forget-technical-debt-heres-how-to-build-techn
[40]:http://www.courageworks.com/
[41]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[42]:https://www.industriallogic.com/blog/modern-agile/
[43]:http://mobprogramming.org/
[43]:http://corgibytes.com/blog/2016/08/02/how-we-use-daily-journals/
[44]:http://mobprogramming.org/
[45]:http://exercism.io/
[46]:http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/
@ -277,7 +272,6 @@ via: http://firstround.com/review/forget-technical-debt-heres-how-to-build-techn
[52]:https://www.agilealliance.org/resources/initiatives/technical-debt/
[53]:https://en.wikipedia.org/wiki/Behavior-driven_development
[54]:https://en.wikipedia.org/wiki/Conway%27s_law
[55]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[56]:https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
[57]:http://corgibytes.com/
[58]:https://www.linkedin.com/in/andreamgoulet

View File

@ -0,0 +1,149 @@
PyCharm - Linux 下最好的 Python IDE(集成开发环境)
=========
![](https://fthmb.tqn.com/AVEbzYN3BPH_8cGYkPflIx58-XE=/768x0/filters:no_upscale()/about/pycharm2-57e2d5ee5f9b586c352c7493.png)
### 介绍
在这篇指南中,我将向你介绍一个集成开发环境 - PyCharm 你可以在它上面使用 Python 编程语言开发专业应用。
Python 是一门优秀的编程语言,因为它真正实现了跨平台,用它开发的应用程序在 Windows、Linux 以及 Mac 系统上均可运行,无需重新编译任何代码。
PyCharm 是由 [Jetbrains][3] 开发的一个编辑器和调试器,[Jetbrains][3] 就是那个开发了 Resharper 的人。不得不说Resharper 是一个很优秀的工具,它被 Windows 开发者们用来重构代码,同时,它也使得 Windows 开发者们写 .NET 代码更加轻松。[Resharper][2] 的许多原则也被加入到了 [PyCharm][3] 专业版中。
### 如何安装 PyCharm
我已经写了一篇关于如何获取 PyCharm 的指南,下载,解压文件,然后运行。
[点击链接][4].
### 欢迎界面
当你第一次运行 PyCharm 或者关闭一个项目的时候,会出现一个屏幕,上面显示一系列近期项目。
你也会看到下面这些菜单选项:
* 创建新项目
* 打开项目
* 版本控制检查
还有一个配置设置选项,你可以通过它设置默认 Python 版本或者一些其他设置。
### 创建一个新项目
当你选择‘创建一个新项目’以后,它会提供下面这一系列可能的项目类型供你选择:
* Pure Python
* Django
* Flask
* Google App Engine
* Pyramid
* Web2Py
* Angular CLI
* AngularJS
* Foundation
* HTML5 Boilerplate
* React Starter Kit
* Twitter Bootstrap
* Web Starter Kit
这不是一个编程教程,所以我没必要说明这些项目类型是什么。如果你想创建一个可以运行在 Windows、Linux 和 Mac 上的简单桌面运行程序那么你可以选择 Pure Python 项目,然后使用 QT 库来开发图形应用程序,这样的图形应用程序无论在任何操作系统上运行,看起来都像是原生的,就像是在该系统上开发的一样。
选择了项目类型以后,你需要输入一个项目名字并且选择一个 Python 版本来进行开发。
### 打开一个项目
你可以通过单击‘最近打开的项目’列表中的项目名称来打开一个项目,或者,你也可以单击‘打开’,然后浏览到你想打开的项目所在的文件夹,找到该项目,然后选择‘确定’。
### 从源码控制进行查看
PyCharm 提供了从各种在线资源查看项目源码的选项,在线资源包括 [GitHub][5]、[CVS][6]、Git、[Mercurial][7] 以及 [Subversion][8]。
### PyCharm IDE(集成开发环境)
PyCharm IDE 可以通过顶部的一个菜单打开,在这个菜单下面你可以为每个打开的项目‘贴上’标签。
屏幕右方是调试选项区,可以单步运行代码。
左面板有一系列项目文件和外部库。
如果想在项目中新建一个文件,你可以‘右击’项目名字,然后选择‘新建’。然后你可以在下面这些文件类型中选择一种添加到项目中:
* 文件
* 目录
* Python 包
* Python 文件
* Jupyter 笔记
* HTML 文件
* Stylesheet
* JavaScript
* TypeScript
* CoffeeScript
* Gherkin
* 数据源
当添加了一个文件,比如 Python 文件以后,你可以在右边面板的编辑器中进行编辑。
文本是全彩色编码的,并且有黑体文本。垂直线显示缩进,从而能够确保缩进正确。
编辑器具有智能补全功能,这意味着当你输入库名字或可识别命令的时候,你可以按 'Tab' 键补全命令。
### 调试程序
你可以利用屏幕右上角的’调试选项’调试程序的任何一个地方。
如果你是在开发一个图形应用程序,你可以点击‘绿色按钮’来运行程序,你也可以通过 'shift+F10' 快捷键来运行程序。
为了调试应用程序,你可以点击紧挨着‘绿色按钮’的‘绿色箭头’或者按 shift+F9 快捷键。你可以点击一行代码的灰色边缘,从而设置断点,这样当程序运行到这行代码的时候就会停下来。
你可以按 'F8' 单步向前运行代码,这意味着你只是运行代码但无法进入函数内部,如果要进入函数内部,你可以按 'F7'。如果你想从一个函数中返回到调用函数,你可以按 'shift+F8'。
调试过程中,你会在屏幕底部看到许多窗口,比如进程和线程列表,以及你正在监视的变量。
当你运行到一行代码的时候,你可以对这行代码中出现的变量进行监视,这样当变量值改变的时候你能够看到。
另一个不错的功能是代码覆盖率检查器。在过去这些年里,编程界发生了很大的变化,现在,对于开发人员来说,进行测试驱动开发是很常见的,这样他们可以检查对程序所做的每一个改变,确保不会破坏系统的另一部分。
覆盖率检查器能够在你运行程序、执行一些测试之后,以百分比的形式告诉你测试所覆盖的代码有多少。
还有一个工具可以显示‘类函数’或‘类’的名字,以及一个项目被调用的次数和在一个特定代码片段运行所花费的时间。
### 代码重构
PyCharm 一个很强大的特性是代码重构选项。
当你开始写代码的时候,会在右边缘出现一个小标记。如果你写的代码可能出错或者写的不太好, PyCharm 会标记上一个彩色标记。
点击彩色标记将会告诉你出现的问题并提供一个解决方法。
比如,你通过一个导入语句导入了一个库,但没有使用该库中的任何东西,那么不仅这行代码会变成灰色,彩色标记还会告诉你‘该库未使用’。
对于正确的代码,也可能会出现错误提示,比如在导入语句和函数起始之间只有一个空行。当你创建了一个名称非小写的函数时它也会提示你。
你不必遵循 PyCharm 的所有规则。这些规则大部分只是好的编码准则,与你的代码是否能够正确运行无关。
代码菜单还有其他重构选项。比如,你可以进行代码清理以及检查文件或项目问题。
### 总结
PyCharm 是 Linux 系统上开发 Python 代码的一个优秀编辑器,并且有两个可用版本。社区版可供临时开发者使用,专业版则提供了开发者开发专业软件可能需要的所有工具。
--------------------------------------------------------------------------------
via: https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
作者:[Gary Newell ][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com/gary-newell-2180098
[1]:https://www.jetbrains.com/
[2]:https://www.jetbrains.com/resharper/
[3]:https://www.jetbrains.com/pycharm/specials/pycharm/pycharm.html?&gclid=CjwKEAjw34i_BRDH9fbylbDJw1gSJAAvIFqU238G56Bd2sKU9EljVHs1bKKJ8f3nV--Q9knXaifD8xoCRyjw_wcB&gclsrc=aw.ds.ds&dclid=CNOy3qGQoc8CFUJ62wodEywCDg
[4]:https://www.lifewire.com/how-to-install-the-pycharm-python-ide-in-linux-4091033
[5]:https://github.com/
[6]:http://www.linuxhowtos.org/System/cvs_tutorial.htm
[7]:https://www.mercurial-scm.org/
[8]:https://subversion.apache.org/

View File

@ -1,217 +0,0 @@
RHEL (Red Hat Enterprise Linux红帽企业级 Linux) 7.3 安装指南
=====
RHEL 是红帽公司开发维护的开源 Linux 发行版,可以运行在所有的主流 CPU 架构中。一般来说,多数的 Linux 发行版都可以免费下载、安装和使用,但对于 RHEL只有在购买了订阅之后你才能下载和使用否则只能获取到试用期为 30 天的评估版。
本文会告诉你如何在你的机器上安装最新的 RHEL 7.3,当然了,使用的是期限 30 天的评估版 ISO 镜像,自行到 [https://access.redhat.com/downloads][1] 下载。
如果你更喜欢使用 CentOS请移步 [CentOS 7.3 安装指南][2]。
欲了解 RHEL 7.3 的新特性,请参考 [版本更新日志][3].
#### 先决条件
本次安装是在支持 UEFI 的虚拟机固件上进行的。为了完成安装,你首先需要进入主板的 EFI 固件更改启动顺序为已刻录好 ISO 镜像的对应设备DVD 或者 U 盘)。
如果是通过 USB 媒介来安装,你需要确保这个可以启动的 USB 设备是用支持 UEFI 兼容的工具来创建的,比如 [Rufus][4],它能将你的 USB 设备设置为 UEFI 固件所需要的 GPT 分区方案。
为了进入主板的 UEFI 固件设置面板,你需要在电脑初始化 POST (Power on Self Test通电自检) 的时候按下一个特殊键。
关于该设置需要用到特殊键你可以向主板厂商进行咨询获取。通常来说在笔记本上可能是这些键F2、F9、F10、F11 或者 F12也可能是 Fn 与这些键的组合。
此外,更改 UEFI 启动顺序前,你要确保快速启动选项 (QuickBoot/FastBoot) 和 安全启动选项 (Secure Boot) 处于关闭状态,这样才能在 EFI 固件中运行 RHEL。
有一些 UEFI 固件主板模型有这样一个选项,它让你能够以传统的 BIOS 或者 EFI CSM (Compatibility Support Module兼容支持模块) 两种模式来安装操作系统,其中 CSM 是主板固件中一个用来模拟 BIOS 环境的模块。这种类型的安装需要 U 盘以 MBR 而非 GPT 来进行分区。
此外,一旦你在含有两种模式的 UEFI 机器中成功安装好 RHEL 或者类似的 OS那么安装好的系统就必须和你安装时使用的模式来运行。
而且,你也不能够从 UEFI 模式变更到传统的 BIOS 模式,反之亦然。强行变更这两种模式会让你的系统变得不稳定、无法启动,同时还需要重新安装系统。
### RHEL 7.3 安装指南
1. 首先,下载并使用合适的工具刻录 RHEL 7.3 ISO 镜像到 DVD 或者创建一个可启动的 U 盘。
给机器加电启动,把 DVD/U 盘放入合适的驱动器中,并按下特定的启动键更改启动顺序,从安装介质启动。
探测到安装介质之后,它会启动到 RHEL grub 菜单。选择 Install Red Hat Enterprise Linux 7.3 并按 [Enter] 继续。
[![RHEL 7.3 Boot Menu](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg)][5]
RHEL 7.3 启动菜单
2. 之后屏幕就会显示 RHEL 7.3 欢迎界面。在该界面选择安装过程中使用的语言 (LCTT 译注:这里选的只是安装过程中使用的语言,之后的安装中才会设置最终使用的系统语言环境) ,然后按 [Enter] 到下一界面。
[![Select RHEL 7.3 Language](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png)][6]
选择 RHEL 7.3 安装过程使用的语言
3. 下一界面中显示的是安装 RHEL 时你需要设置的所有事项的总体概览。首先点击日期和时间 (DATE & TIME) 并在地图中选择你的设备所在区域。
点击最上面的完成 (Done) 按钮来保持你的设置,并进行下一步系统设置。
[![RHEL 7.3 Installation Summary](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png)][7]
RHEL 7.3 安装概览
[![Select RHEL 7.3 Date and Time](http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png)][8]
选择 RHEL 7.3 日期和时间
4. 接下来,就是配置你的键盘布局并再次点击完成 (Done) 按钮返回安装主菜单。
[![Configure Keyboard Layout](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png)][9]
配置键盘布局
5. 紧接着,选择你使用的语言支持,并点击完成 (Done),然后进行下一步。
[![Choose Language Support](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png)][10]
选择语言支持
6. 安装源保持默认就好,因为本例中我们使用本地安装 (DVD/USB 镜像),然后选择要安装的软件集。
此处你对基本环境 (base environment) 和附件 (Add-ons) 进行选择。由于 RHEL 常用作 Linux 服务器,最小化安装对于系统管理员来说则是最佳选择。
对于生产环境来说,这也是官方极力推荐的安装方式,因为我们只需要在 OS 中安装极少量软件就好了。
这也意味着高安全性、可伸缩性以及占用极少的磁盘空间。同时,通过购买订阅 (subscription) 或使用 DVD 镜像源,其中列出的其他环境和附件都可以在命令行中很容易地安装。
[![RHEL 7.3 Software Selection](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png)][11]
RHEL 7.3 软件集选择
7. 万一你想要安装预定义的基本环境之一,比方说 Web 服务器、文件 & 打印服务器、基本服务器、带 GUI 的可视化主机 & 服务器等,直接点击选择它们,然后在右边的框选择附件,最后点击完成 (Done) 结束这一步操作即可。
[![Select Server with GUI on RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png)][12]
选择 带 GUI 的可视化主机 & 服务器
8. 在接下来点击安装目标 (Installation Destination),这个步骤要求你为将要安装的系统进行分区、格式化文件系统并设置挂载点。
最好的做法就是让安装器自动配置硬盘分区,这样会创建 Linux 系统所有需要用到的基本分区 (在 LVM 中 分区 `/boot`、`/boot/efi`、`/(root)` 以及 `swap` ),并格式化为 RHEL 7.3 默认的 XFS 文件系统。
请记住:如果安装进程是从 UEFI 固件中启动的,那么硬盘的分区表则是 GPT 分区方案。否则,如果你以 CSM 或传统 BIOS 来启动,硬盘的分区表则使用老旧的 MBR 分区方案。
假如不喜欢自动分区,你也可以选择配置你的硬盘分区表,手动创建自己需要的分区。
不论如何,本文推荐你选择自动配置分区。最后点击完成 (Done) 继续下一步。
[![Choose RHEL 7.3 Installation Drive](http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png)][13]
选择 RHEL 7.3 的安装硬盘
9. 下一步是禁用 Kdump 服务然后配置网络。
[![Disable Kdump Feature](http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png)][14]
禁用 Kdump 特性
10. 在网络和主机名称中,设置你机器使用的主机名和一个描述性名称,同时拖动 Ethernet 开关按钮到 `ON` 来启用网络。
如果你在自己的网络中有一个 DHCP 服务器,那么网络 IP 设置会自动获取和使用。
[![Configure Network Hostname](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png)][15]
配置网络主机名称
11. 如果要为网络接口设置静态 IP点击配置 (Configure) 按钮,然后手动设置 IP如下方截图所示。
设置好网络接口的 IP 地址之后,点击保存 (Save) 按钮,最后切换一下网络接口的 `OFF` 和 `ON` 状态已应用刚刚设置的静态 IP。
最后,点击完成 (Done) 按钮返回到安装设置主界面。
[![Configure Network IP Address](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png)][16]
配置网络 IP 地址
12. 最后,在安装配置主界面需要你配置的最后一项就是安全策略配置文件了。选择并应用默认的安全策略,然后点击完成 (Done) 返回主界面。
回顾所有的安装设置项并点击开始安装 (Begin Installation) 按钮来启动安装进程,这个进程启动之后,你就没有办法停止它了。
[![Apply Security Policy for RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png)][17]
为 RHEL 7.3 启用安全策略
[![Begin Installation of RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png)][18]
开始安装 RHEL 7.3
13. 在安装进程中,你的显示器会出现用户设置 (User Settings)。首先点击 Root 密码 (Root Password) 为 root 账户设置一个高强度密码。
[![Configure User Settings](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png)][19]
配置用户选项
[![Set Root Account Password](http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png)][20]
设置 Root 账户密码
14. 最后,创建一个新用户,通过选中使该用户成为管理员 (Make this user administrator) 为新建的用户授权 root 权限。同时还要为这个账户设置一个高强度密码,点击完成 (Done) 返回用户设置菜单,就可以等待安装进程完成了。
[![Create New User Account](http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png)][21]
创建新用户账户
[![RHEL 7.3 Installation Process](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png)][22]
RHEL 7.3 安装进程
15. 安装进程结束并成功安装后,弹出 DVD/USB 设备,重启机器。
[![RHEL 7.3 Installation Complete](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png)][23]
RHEL 7.3 安装完成
[![Booting Up RHEL 7.3](http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png)][24]
启动 RHEL 7.3
至此,安装完成。为了后期一直使用 RHEL你需要从 Red Hat 消费者门户购买一个订阅,然后在命令行 [使用订阅管理器来注册你的 RHEL 系统][25]。
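下面是一个注册命令的简单示意(假设你已经拥有红帽账户;用户名、密码等参数请替换为你自己的信息,完整流程请参考上面的链接):
```
# 使用红帽账户注册系统,并自动附加可用的订阅
$ sudo subscription-manager register --username <你的用户名> --password <你的密码> --auto-attach
# 查看当前的订阅状态
$ sudo subscription-manager status
```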
------------------
作者简介:
Matei Cezar
![](http://2.gravatar.com/avatar/be16e54026c7429d28490cce41b1e157?s=128&d=blank&r=g)
我是一个终日沉溺于电脑的家伙,对开源的 Linux 软件非常着迷,有着 4 年 Linux 桌面发行版、服务器和 bash 编程经验。
---------------------------------------------------------------------
via: http://www.tecmint.com/red-hat-enterprise-linux-7-3-installation-guide/
作者:[Matei Cezar][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://access.redhat.com/downloads
[2]:https://linux.cn/article-8048-1.html
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/7.3_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.3_Release_Notes-Overview.html
[4]:https://rufus.akeo.ie/
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Boot-Menu.jpg
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Language.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Summary.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-RHEL-7.3-Date-and-Time.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Keyboard-Layout.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-Language-Support.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Software-Selection.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/12/Select-Server-with-GUI-on-RHEL-7.3.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/12/Choose-RHEL-7.3-Installation-Drive.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/12/Disable-Kdump-Feature.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-Hostname.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Network-IP-Address.png
[17]:http://www.tecmint.com/wp-content/uploads/2016/12/Apply-Security-Policy-on-RHEL-7.3.png
[18]:http://www.tecmint.com/wp-content/uploads/2016/12/Begin-RHEL-7.3-Installation.png
[19]:http://www.tecmint.com/wp-content/uploads/2016/12/Configure-User-Settings.png
[20]:http://www.tecmint.com/wp-content/uploads/2016/12/Set-Root-Account-Password.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/12/Create-New-User-Account.png
[22]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Process.png
[23]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Installation-Complete.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/12/RHEL-7.3-Booting.png
[25]:http://www.tecmint.com/enable-redhat-subscription-reposiories-and-updates-for-rhel-7/

View File

@ -0,0 +1,354 @@
LXD 2.0 系列LXD和Juju
======================================
这是 [LXD 2.0 系列介绍文章][1]的第十篇。
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
介绍
============================================================
Juju是Canonical的服务建模和部署工具。 它支持非常广泛的云提供商,使您能够轻松地在任何云上部署任何您想要的服务。
此外Juju 2.0还支持LXD既适用于本地部署也适合开发并且可以在云实例或物理机上共同协作。
本篇文章将关注LXD的本地使用以一个没有任何Juju经验的LXD用户的角度来体验。
# 要求
本篇文章假设你已经安装了LXD 2.0并且配置完毕看前面的文章并且是在Ubuntu 16.04 LTS上运行的。
# 设置 Juju
第一件事是在Ubuntu 16.04上安装Juju 2.0。这个很简单:
```
stgraber@dakara:~$ sudo apt install juju
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
juju-2.0
Suggested packages:
juju-core
The following NEW packages will be installed:
juju juju-2.0
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 39.7 MB of archives.
After this operation, 269 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 juju-2.0 amd64 2.0~beta7-0ubuntu1.16.04.1 [39.6 MB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 juju all 2.0~beta7-0ubuntu1.16.04.1 [9,556 B]
Fetched 39.7 MB in 0s (53.4 MB/s)
Selecting previously unselected package juju-2.0.
(Reading database ... 255132 files and directories currently installed.)
Preparing to unpack .../juju-2.0_2.0~beta7-0ubuntu1.16.04.1_amd64.deb ...
Unpacking juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Selecting previously unselected package juju.
Preparing to unpack .../juju_2.0~beta7-0ubuntu1.16.04.1_all.deb ...
Unpacking juju (2.0~beta7-0ubuntu1.16.04.1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Setting up juju (2.0~beta7-0ubuntu1.16.04.1) ...
```
安装完成后我们可以使用LXD启动一个新的“控制器”。这意味着Juju不会修改你主机上的任何东西它会在LXD容器中安装它的管理服务。
现在我们创建一个“test”控制器
```
stgraber@dakara:~$ juju bootstrap localhost test
Creating Juju controller "local.test" on localhost/localhost
Bootstrapping model "admin"
Starting new instance for initial controller
Launching instance
- juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0
Installing Juju agent on bootstrap instance
Preparing for Juju GUI 2.1.2 release installation
Waiting for address
Attempting to connect to 10.178.150.72:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: cloud-utils
Installing package: cloud-image-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/2.0-beta7/juju-2.0-beta7-xenial-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Bootstrap complete, local.test now available.
```
这会花费一点时间这时你可以看到一个正在运行的一个新的LXD容器
```
stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
```
在Juju这边你可以确认它有响应并且还没有服务运行
```
stgraber@dakara:~$ juju status
[Services]
NAME STATUS EXPOSED CHARM
[Units]
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
[Machines]
ID STATE DNS INS-ID SERIES AZ
```
你也可以在浏览器中访问Juju的GUI界面
```
stgraber@dakara:~$ juju gui
Opening the Juju GUI in your browser.
If it does not open, open this URL:
https://10.178.150.72:17070/gui/97fa390d-96ad-44df-8b59-e15fdcfc636b/
```
![Juju web UI](https://www.stgraber.org/wp-content/uploads/2016/06/juju-gui.png)
尽管我更倾向使用命令行,因此我会在接下来使用。
# 部署一个minecraft服务
让我们先来一个简单的部署在一个容器中使用一个Juju单元的服务。
```
stgraber@dakara:~$ juju deploy cs:trusty/minecraft
Added charm "cs:trusty/minecraft-3" to the model.
Deploying charm "cs:trusty/minecraft-3" with the charm series "trusty".
```
返回会很快然而这不意味着服务已经启动并运行了。你应该使用“juju status”来查看
```
stgraber@dakara:~$ juju status
[Services]
NAME STATUS EXPOSED CHARM
minecraft maintenance false cs:trusty/minecraft-3
[Units]
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
minecraft/1 maintenance executing 2.0-beta7 1 10.178.150.74 (install) Installing java
[Machines]
ID STATE DNS INS-ID SERIES AZ
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty
```
我们可以看到它正在忙于在刚刚创建的LXD容器中安装java。
```
stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 | RUNNING | 10.178.150.74 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
```
过一会之后,如我们所见服务就部署完毕了:
```
stgraber@dakara:~$ juju status
[Services]
NAME STATUS EXPOSED CHARM
minecraft active false cs:trusty/minecraft-3
[Units]
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
minecraft/1 active idle 2.0-beta7 1 25565/tcp 10.178.150.74 Ready
[Machines]
ID STATE DNS INS-ID SERIES AZ
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty
```
这时你就可以启动你的Minecraft客户端了它指向10.178.150.74端口是25565。现在可以在新的minecraft服务器上玩了
当你不再需要它,只需运行:
```
stgraber@dakara:~$ juju destroy-service minecraft
```
只要等待几秒就好了。
# 部署一个更复杂的web应用
Juju的主要工作是建模复杂的服务并以可扩展的方式部署它们。
为了更好地展示让我们部署一个Juju “组合”。 这个组合是由网站API数据库静态Web服务器和反向代理组成的基本Web服务。
所以这将扩展到4个互联的LXD容器。
```
stgraber@dakara:~$ juju deploy cs:~charmers/bundle/web-infrastructure-in-a-box
added charm cs:~hp-discover/trusty/node-app-1
service api deployed (charm cs:~hp-discover/trusty/node-app-1 with the series "trusty" defined by the bundle)
annotations set for service api
added charm cs:trusty/mongodb-3
service mongodb deployed (charm cs:trusty/mongodb-3 with the series "trusty" defined by the bundle)
annotations set for service mongodb
added charm cs:~hp-discover/trusty/nginx-4
service nginx deployed (charm cs:~hp-discover/trusty/nginx-4 with the series "trusty" defined by the bundle)
annotations set for service nginx
added charm cs:~hp-discover/trusty/nginx-proxy-3
service nginx-proxy deployed (charm cs:~hp-discover/trusty/nginx-proxy-3 with the series "trusty" defined by the bundle)
annotations set for service nginx-proxy
added charm cs:~hp-discover/trusty/website-3
service website deployed (charm cs:~hp-discover/trusty/website-3 with the series "trusty" defined by the bundle)
annotations set for service website
related mongodb:database and api:mongodb
related website:nginx-engine and nginx:web-engine
related api:website and nginx-proxy:website
related nginx-proxy:website and website:website
added api/0 unit to new machine
added mongodb/0 unit to new machine
added nginx/0 unit to new machine
added nginx-proxy/0 unit to new machine
deployment of bundle "cs:~charmers/bundle/web-infrastructure-in-a-box-10" completed
```
几秒后你会看到LXD容器在运行了
```
stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-2 | RUNNING | 10.178.150.98 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-3 | RUNNING | 10.178.150.29 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-4 | RUNNING | 10.178.150.202 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 | RUNNING | 10.178.150.214 (eth0) | | PERSISTENT | 0 |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
```
几分钟后,所有的服务应该部署完毕并运行了:
```
stgraber@dakara:~$ juju status
[Services]
NAME STATUS EXPOSED CHARM
api unknown false cs:~hp-discover/trusty/node-app-1
mongodb unknown false cs:trusty/mongodb-3
nginx unknown false cs:~hp-discover/trusty/nginx-4
nginx-proxy unknown false cs:~hp-discover/trusty/nginx-proxy-3
website false cs:~hp-discover/trusty/website-3
[Relations]
SERVICE1 SERVICE2 RELATION TYPE
api mongodb database regular
api nginx-proxy website regular
mongodb mongodb replica-set peer
nginx website nginx-engine subordinate
nginx-proxy website website regular
[Units]
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
api/0 unknown idle 2.0-beta7 2 8000/tcp 10.178.150.98
mongodb/0 unknown idle 2.0-beta7 3 27017/tcp,27019/tcp,27021/tcp,28017/tcp 10.178.150.29
nginx-proxy/0 unknown idle 2.0-beta7 5 80/tcp 10.178.150.214
nginx/0 unknown idle 2.0-beta7 4 10.178.150.202
website/0 unknown idle 2.0-beta7 10.178.150.202
[Machines]
ID STATE DNS INS-ID SERIES AZ
2 started 10.178.150.98 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-2 trusty
3 started 10.178.150.29 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-3 trusty
4 started 10.178.150.202 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-4 trusty
5 started 10.178.150.214 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 trusty
```
这时你就可以在80端口访问http://10.178.150.214并且会看到一个Juju学院页面。
[
![Juju Academy web service](https://www.stgraber.org/wp-content/uploads/2016/06/juju-academy.png)
][2]
# 清理所有东西
如果你不需要Juju创建的容器并且不在乎下次需要再次启动最简单的方法是
```
stgraber@dakara:~$ juju destroy-controller test --destroy-all-models
WARNING! This command will destroy the "local.test" controller.
This includes all machines, services, data and other resources.
Continue [y/N]? y
Destroying controller
Waiting for hosted model resources to be reclaimed
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 2 machines
Waiting on 1 model
Waiting on 1 model
All hosted models reclaimed, cleaning up controller machines
```
我们用下面的方式确认:
```
stgraber@dakara:~$ lxc list juju-
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
```
# 总结
Juju 2.0内置的LXD支持使得可以用一种非常干净的方式来测试各种服务。
在Juju charm store中有很多预制的“组合”可以用来部署甚至可以用多个“charm”来组合你想要的架构。
Juju与LXD是一个完美的解决方案从一个小的Web服务到大规模的基础设施都可以简单开发这些都在你自己的机器上并且不会在你的系统上造成混乱
--------------------------------------------------------------------------
作者简介我是Stéphane Graber。我是LXC和LXD项目的领导者目前在加拿大魁北克蒙特利尔的家所在的Canonical有限公司担任LXD的技术主管。
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/06/06/lxd-2-0-lxd-and-juju-1012/
作者:[ Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.stgraber.org/author/stgraber/
[1]:https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[2]:https://www.stgraber.org/wp-content/uploads/2016/06/juju-academy.png

View File

@ -0,0 +1,473 @@
LXD 2.0 系列(五):镜像管理
======================================
这是 [LXD 2.0 系列介绍文章][0]的第五篇。
因为管理 LXD 容器涉及到很多命令,因此这篇文章会比较长。如果你想快速地浏览这些相同的命令,你可以[尝试下我们的在线演示][3]
![](https://linuxcontainers.org/static/img/containers.png)
### 容器镜像
如果你以前使用过 LXC你可能还记得那些 LXC“模板”它们基本上就是一些 shell 脚本,用来生成容器文件系统和一点配置。
大多数模板是通过在本机上执行完整的发行版自举来生成文件系统的。这可能需要相当长的时间,并非对所有的发行版都可用,另外可能还需要大量的网络带宽。
早在 LXC 1.0 的时候,我就写了一个“下载”模板,它让用户可以下载预先打包好的容器镜像:这些镜像由模板脚本在中央服务器上生成,经过高度压缩、签名,并通过 https 分发。我们的很多用户从旧式的容器生成方式切换到了这种新的、更快也更可靠的创建容器的方法。
使用LXD我们通过全面的基于镜像的工作流程向前迈进了一步。所有容器都是从镜像创建的我们在LXD中具有高级镜像缓存和预加载支持以使镜像存储保持最新。
### 与LXD镜像交互
在更深入了解镜像格式之前让我们快速了解下LXD可以让你做些什么。
#### 透明地导入镜像
所有的容器都是由镜像创建的。镜像可以来自一台远程服务器,并使用它的完整 hash、短 hash 或者别名拉取下来,但是最终每个 LXD 容器都是创建自一个本地镜像。
这有个例子:
```
lxc launch ubuntu:14.04 c1
lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d6e c2
lxc launch ubuntu:75182b1241be c3
```
所有这些命令引用的都是同一个远程镜像(在写这篇文章时)。第一次运行其中某条命令时,远程镜像会作为缓存镜像导入本地 LXD 镜像存储,接着从它创建容器。
下一次运行其中某条命令时LXD 将只检查镜像是否仍然是最新的(仅当不是用指纹引用时);如果是最新的,它将直接创建容器而不下载任何东西。
现在镜像被缓存在本地镜像存储中,你也可以从那里启动它,甚至不检查它是否是最新的:
```
lxc launch 75182b1241be c4
```
最后如果你有个名为“myimage”的本地镜像你可以
```
lxc launch my-image c5
```
如果你想要改变一些自动缓存或者过期行为,在本系列之前的文章中有一些命令。
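作为参考,相关的命令大致如下(一个简单示意,这些是本系列之前文章中介绍过的服务端配置键,具体取值按需调整):
```
# 缓存的远程镜像在最后一次使用后保留的天数
lxc config set images.remote_cache_expiry 5
# 检查镜像更新的间隔(小时)
lxc config set images.auto_update_interval 24
# 是否自动更新缓存的镜像
lxc config set images.auto_update_cached false
```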
#### 手动导入镜像
##### 从镜像服务器中复制
如果你想复制远程某个镜像到你本地镜像存储但不立即从它创建一个容器你可以使用“lxc image copy”命令。它可以让你调整一些镜像标志比如
```
lxc image copy ubuntu:14.04 local:
```
这只是简单地复制一个远程镜像到本地存储。
如果你想用比指纹更容易记住的方式来引用这份镜像副本,可以在复制时添加一个别名:
```
lxc image copy ubuntu:12.04 local: --alias old-ubuntu
lxc launch old-ubuntu c6
```
如果你想要使用源服务器上设置的别名你可以要求LXD复制下来
```
lxc image copy ubuntu:15.10 local: --copy-aliases
lxc launch 15.10 c7
```
上面的副本都是一次性的拷贝,也就是把远程镜像的当前版本复制到本地镜像存储中。如果你想要 LXD 保持这份镜像的更新(就像它对缓存的镜像所做的那样),你需要使用 `--auto-update` 标志:
```
lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update
```
##### 导入tarball
如果某人给你提供了一个单独的tarball你可以用下面的命令导入
```
lxc image import <tarball>
```
如果你想在导入时设置一个别名,你可以这么做:
```
lxc image import <tarball> --alias random-image
```
如果你拿到的是两个 tarball先识别出哪个是含有 LXD 元数据的那个。通常可以通过 tarball 的名称来判断,如果不行,就选择最小的那个(元数据 tarball 很小)。然后将它们一起导入:
```
lxc image import <metadata tarball> <rootfs tarball>
```
##### 从URL中导入
“lxc image import”也可以与指定的 URL 一起使用。如果某台 https Web 服务器在某个路径上设置了 LXD-Image-URL 和 LXD-Image-Hash 标头LXD 就会把对应的镜像拉取到镜像存储中。
可以参照例子这么做:
```
lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64
```
当拉取镜像时LXD还会设置一些标头远程服务器可以检查它们以返回适当的镜像。 它们是LXD-Server-Architectures和LXD-Server-Version。
这意味着它可以当作一个“穷人版”的镜像服务器:任何一台静态 Web 服务器都可以用一种对用户友好的方式提供你的镜像以供导入。
#### 管理本地镜像存储
现在我们本地已经有一些镜像了,让我们瞧瞧可以做些什么。我们已经涵盖了最主要的部分,从它们来创建容器,但是你还可以在本地镜像存储上做更多。
##### 列出镜像
要列出所有的镜像运行“lxc image list”
```
stgraber@dakara:~$ lxc image list
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| alpine-32 | 6d9c131efab3 | yes | Alpine edge (i386) (20160329_23:52) | i686 | 2.50MB | Mar 30, 2016 at 4:36am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| gentoo | 1a134c5951e0 | no | Gentoo current (amd64) (20160329_14:12) | x86_64 | 232.50MB | Mar 30, 2016 at 4:34am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| my-image | c9b6e738fae7 | no | Scientific Linux 6 x86_64 (default) (20160215_02:36) | x86_64 | 625.34MB | Mar 2, 2016 at 4:56am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
```
你可以通过别名或者指纹来过滤:
```
stgraber@dakara:~$ lxc image list amd64
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
```
或者指定一个镜像属性中的键值对来过滤:
```
stgraber@dakara:~$ lxc image list os=ubuntu
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
```
要了解某个镜像的所有信息你可以使用“lxc image info”
```
stgraber@castiana:~$ lxc image info ubuntu
Fingerprint: e8a33ec326ae7dd02331bd72f5d22181ba25401480b8e733c247da5950a7d084
Size: 139.43MB
Architecture: i686
Public: no
Timestamps:
Created: 2016/03/15 00:00 UTC
Uploaded: 2016/03/16 05:50 UTC
Expires: 2017/04/26 00:00 UTC
Properties:
version: 12.04
aliases: 12.04,p,precise
architecture: i386
description: ubuntu 12.04 LTS i386 (release) (20160315)
label: release
os: ubuntu
release: precise
serial: 20160315
Aliases:
- ubuntu
Auto update: enabled
Source:
Server: https://cloud-images.ubuntu.com/releases
Protocol: simplestreams
Alias: precise/i386
```
##### 编辑镜像
一个编辑镜像的属性和标志的简单方法是使用:
```
lxc image edit <alias or fingerprint>
```
这会打开默认文本编辑器,内容像这样:
```
autoupdate: true
properties:
aliases: 14.04,default,lts,t,trusty
architecture: amd64
description: ubuntu 14.04 LTS amd64 (release) (20160314)
label: release
os: ubuntu
release: trusty
serial: "20160314"
version: "14.04"
public: false
```
你可以修改任何属性,打开或者关闭自动更新,或者把一个镜像标记为公共的(后面还会讲到)。
##### 删除镜像
删除镜像只需要运行:
```
lxc image delete <alias or fingerprint>
```
注意你不必移除缓存对象它们会在过期后被LXD自动移除默认上在最后一次使用的10天后
##### 导出镜像
如果你想得到目前镜像的tarball你可以使用“lxc image export”像这样
```
stgraber@dakara:~$ lxc image export old-ubuntu .
Output is in .
stgraber@dakara:~$ ls -lh *.tar.xz
-rw------- 1 stgraber domain admins 656 Mar 30 00:55 meta-ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
-rw------- 1 stgraber domain admins 156M Mar 30 00:55 ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
```
#### 镜像格式
LXD 现在支持两种镜像布局unified单一包或者 split分离包。这两者都是 LXD 专属的有效格式,不过后者能更容易地把文件系统重用于其他容器或虚拟机运行时。
LXD专注于系统容器不支持任何应用程序容器的“标准”镜像格式我们也不打算这么做。
我们的镜像很简单,它们由容器文件系统和一个元数据文件组成,元数据文件描述了镜像的制作时间、到期时间、适用的架构,以及(可选的)一些文件模板。
有关[镜像格式][1]的最新详细信息,请参阅此文档。
##### unified镜像 (一个tarball)
unified 镜像格式是 LXD 在生成镜像时使用的格式。它是一个单独的大型 tarball容器文件系统放在 tarball 内的“rootfs”目录中metadata.yaml 文件位于 tarball 根目录如果有模板则放在“templates”目录中。
tarball可以用任何方式压缩或者不压缩。镜像散列是压缩后的tarball的sha256。
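举个简单的例子,手工打一个 unified 包并计算其指纹大致如下(仅为示意,文件与目录名都是假设的):
```
# metadata.yaml 位于根目录,容器文件系统位于 rootfstemplates/ 是可选的
tar -cJf my-unified-image.tar.xz metadata.yaml rootfs/
# 镜像指纹就是压缩后 tarball 的 sha256
sha256sum my-unified-image.tar.xz
```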
##### Split镜像 (两个tarball)
这种格式最常见于那些自己制作镜像、并且手头已经有一个压缩的文件系统 tarball 的人。
它们由两个不同的tarball组成第一个只包含LXD使用的元数据因此metadata.yaml文件在根目录任何模板都在“templates”目录。
第二个 tarball 只包含直接位于其根目录下的容器文件系统。大多数发行版已经有这样的 tarball因为它们常用于引导新机器。此镜像格式允许不加修改地直接重用它们。
两个 tarball 都可以压缩或者不压缩,甚至可以使用不同的压缩算法。镜像散列是元数据 tarball 和 rootfs tarball 按顺序拼接后的 sha256。
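下面这个小示意展示了 split 镜像指纹的计算方式(文件名是假设的,注意拼接顺序是元数据在前):
```
# split 镜像的指纹是两个 tarball 按顺序拼接后的 sha256
cat meta.tar.xz rootfs.tar.xz | sha256sum
```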
##### 镜像元数据
典型的metadata.yaml文件看起来像这样
```
architecture: "i686"
creation_date: 1458040200
properties:
architecture: "i686"
description: "Ubuntu 12.04 LTS server (20160315)"
os: "ubuntu"
release: "precise"
templates:
/var/lib/cloud/seed/nocloud-net/meta-data:
when:
- start
template: cloud-init-meta.tpl
/var/lib/cloud/seed/nocloud-net/user-data:
when:
- start
template: cloud-init-user.tpl
properties:
default: |
#cloud-config
{}
/var/lib/cloud/seed/nocloud-net/vendor-data:
when:
- start
template: cloud-init-vendor.tpl
properties:
default: |
#cloud-config
{}
/etc/init/console.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty1.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty2.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty3.override:
when:
- create
template: upstart-override.tpl
/etc/init/tty4.override:
when:
- create
template: upstart-override.tpl
```
##### 属性
仅有的两个必填字段是“creation date”UNIX 纪元时间和“architecture”。其余字段都可以不设置镜像照样可以正常导入。
额外的属性主要是帮助用户弄清楚镜像是什么。 例如“description”属性是在“lxc image list”中可见的。 用户可以使用其他属性的键/值对来搜索特定镜像。
相反这些属性用户可以通过“lxc image edit”来编辑“creation date”和“architecture”字段是不可变的。
##### 模板
模板机制允许在容器生命周期中的某一点生成或重新生成容器中的一些文件。
我们使用 pongo2 模板引擎来处理这些模板,并且会把我们所知道的关于该容器的所有信息都导出给模板。这样你就可以使用用户自定义的容器属性或常规的 LXD 属性来定制镜像,从而更改某些特定文件的内容。
正如你在上面的例子中看到的,我们在 Ubuntu 镜像中用它们来给 cloud-init 提供种子数据,并关闭一些 init 脚本。
### 创建你的镜像
LXD 专注于运行完整的 Linux 系统,这意味着我们期望大多数用户只使用干净的发行版镜像,而不是自己制作镜像。
但是有一些情况下,你有自己的镜像是有用的。 例如生产服务器上的预配置镜像,或者构建那些我们没有构建的发行版或者架构的镜像。
#### 将容器变成镜像
目前使用LXD构造镜像最简单的方法是将容器变成镜像。
可以这么做
```
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
```
你甚至可以将容器过去的某个快照snapshot变成镜像
```
lxc publish my-container/some-snapshot --alias some-image
```
#### 手动构建镜像
构建你自己的镜像也很简单。
1. 生成容器文件系统。这完全取决于你使用的发行版。对于 Ubuntu 和 Debian可以使用 debootstrap 来完成。
2. 配置好容器中正常工作所需的任何东西(如果需要的话)。
3. 制作该容器文件系统的 tarball可选择压缩它。
4. 根据上面描述的内容写一个新的 metadata.yaml 文件。
5. 创建另一个包含 metadata.yaml 文件的 tarball。
6. 用下面的命令把这两个 tarball 导入为 LXD 镜像:
```
lxc image import <metadata tarball> <rootfs tarball> --alias some-name
```
你可能需要反复尝试几次,调整一下这里或那里,或许还要添加一些模板和属性,镜像才能正常工作。
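下面是一个把上述步骤串起来的最小示意(仅作演示:以 debootstrap 为例,目录名、别名和元数据属性值都是假设的,第 2 步的容器内配置在这里省略了):
```
# 第 1 步:生成容器文件系统
sudo debootstrap xenial rootfs
# 第 3 步:打包文件系统split 格式要求文件系统直接位于 tarball 根目录)
sudo tar -C rootfs -cJf rootfs.tar.xz .
# 第 4 步:编写元数据(属性值按你的镜像实际情况填写)
cat > metadata.yaml << EOF
architecture: "x86_64"
creation_date: $(date +%s)
properties:
  description: "Ubuntu xenial (manual build)"
  os: "ubuntu"
  release: "xenial"
EOF
# 第 5 步:打包元数据
tar -cJf meta.tar.xz metadata.yaml
# 第 6 步:导入为 LXD 镜像(别名是假设的)
lxc image import meta.tar.xz rootfs.tar.xz --alias my-xenial
```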
### 发布你的镜像
所有LXD守护程序都充当镜像服务器。除非另有说明否则加载到镜像存储中的所有镜像都会被标记为私有因此只有受信任的客户端可以检索这些镜像但是如果要创建公共镜像服务器你需要做的是将一些镜像标记为公开并确保你的LXD守护进程监听网络。
#### 只运行LXD公共服务器
最简单的共享镜像的方式是运行一个公共的LXD守护进程。
你只要运行:
```
lxc config set core.https_address "[::]:8443"
```
远程用户就可以添加你的服务器作为公共服务器:
```
lxc remote add <some name> <IP or DNS> --public
```
他们就可以像使用任何默认的镜像服务器一样使用它。由于这个远程服务器是以“--public”方式添加的因此不需要身份验证并且客户端只能使用已标记为 public 的镜像。
要将镜像设置成公共的只需“lxc image edit”它们并将public标志设置为true。
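在客户端那边,把你的服务器添加为公共远端之后,就可以直接浏览其中公开的镜像了(下面的远端名称仅为示意):
```
# 列出该公共远端上所有标记为 public 的镜像(远端名称是假设的)
lxc image list my-public-server:
```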
#### 使用一台静态web服务器
如上所述“lxc image import”支持从静态http服务器下载。 基本要求是:
*服务器必须支持具有有效证书的HTTPSTLS1.2和EC密钥
*当点击“lxc image import”提供的URL时服务器必须返回一个包含LXD-Image-Hash和LXD-Image-URL的HTTP标头。
如果你想把它做成动态的,你可以让你的服务器查看 LXD 在请求镜像时发送的 LXD-Server-Architectures 和 LXD-Server-Version 这两个 HTTP 标头,这样就可以返回架构正确的镜像。
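在把 URL 交给“lxc image import”之前可以先用 curl 确认服务器确实返回了这些标头(下面的 URL 仅为示例):
```
# 检查静态服务器是否返回了 LXD 需要的标头URL 是假设的)
curl -sI https://example.com/lxd | grep -i '^lxd-image-'
```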
#### 构建一个简单流服务器
“ubuntu:”和“ubuntu-daily:”这两个远端并不使用 LXD 协议“images:”用的才是),它们使用的是一种叫做 simplestreams简单流的不同协议。
简单流基本上是一个镜像服务器的描述格式使用JSON来描述产品以及相关产品的文件列表。
各种工具(如 OpenStack、Juju、MAAS 等都用它来查找、下载或者镜像mirror系统镜像LXD 则把它作为用于镜像检索的原生协议来支持。
虽然的确不是提供LXD镜像的最简单的方法但是如果你的镜像也被其他一些工具使用那这也许值得考虑一下。
更多信息可以在[这里][2]找到。
### 总结
我希望这篇文章让你对 LXD 如何管理镜像、以及如何构建和发布自己的镜像有了一个清晰的概念。相比以前的 LXC能够在一组全球分布的系统上轻松获得完全一致的镜像是一个很大的进步也为更可复现的基础设施铺平了道路。
### 额外信息
LXD 的主站在: <https://linuxcontainers.org/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
如果你不想或者不能在你的机器上安装 LXD ,你可以在 web 上[试试在线版的 LXD][3] 。
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/blob/master/doc/image-handling.md
[2]: https://launchpad.net/simplestreams
[3]: https://linuxcontainers.org/lxd/try-it
原文https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/

View File

@ -0,0 +1,216 @@
LXD 2.0 系列(六):远程主机及容器迁移
======================================
这是 [LXD 2.0 系列介绍文章][0]的第六篇。
![](https://linuxcontainers.org/static/img/containers.png)
### 远程协议
LXD 2.0 支持两种协议:
* LXD 1.0 API这是在客户端和LXD守护进程之间使用的REST API以及在复制/移动镜像和容器时在LXD守护进程之间使用的REST API。
* SimplestreamsSimplestreams协议是LXD客户端和守护进程使用的只读、只有镜像的协议以获取镜像信息以及从一些公共镜像服务器如Ubuntu镜像导入镜像。
以下所有内容都将使用这两个协议中的第一个。
### 安全
LXD API 的验证是通过 TLS 1.2使用较新的加密算法之上的客户端证书验证来完成的。当两个 LXD 守护进程必须直接交换信息时,源守护进程会生成一个临时令牌,并通过客户端传输到目标守护进程。此令牌仅可用于访问特定的流,并且会被立即撤销,因此不能重复使用。
为了避免中间人攻击,客户端工具还将源服务器的证书发送到目标。 这意味着对于特定的下载操作目标服务器会被提供源服务器的URL、需要的资源的一次性访问令牌以及服务器应该使用的证书。 这可以防止MITM攻击并且只允许临时访问传输对象。
### 网络需求
LXD 2.0 使用的模型是:操作的目标(接收端)直接连接到源以获取数据。
这意味着你必须确保目标服务器可以直接连接到源,并根据需要更新防火墙规则。
我们有个[允许反向连接的计划][1],允许通过客户端本身代理以应对那些严格的防火墙阻止两台主机之间通信的罕见情况。
### 与远程主机交互
LXD 没有让用户在每次想与远程主机交互时都提供主机名或 IP 地址、再去验证证书信息而是使用了“remote远端”这个概念。
默认情况下唯一真正的LXD远程配置是“local:”这也是默认远程所以你不必输入它的名称。本地远程使用LXD REST API通过unix套接字与本地守护进程通信。
### 添加一台远程主机
假设你已经有两台装有LXD的机器你的本机以及远程那台我们称为“foo”的主机。
首先你需要确保“foo”正在监听网络并设置了一个密码因此在远程shell上运行
```
lxc config set core.https_address [::]:8443
lxc config set core.trust_password something-secure
```
在你本地LXD上你需要使它对网络可见这样我们可以从它传输容器和镜像
```
lxc config set core.https_address [::]:8443
```
现在两端的守护进程配置都完成了你可以把“foo”添加到你的本地客户端
```
lxc remote add foo 1.2.3.4
```
(将 1.2.3.4 替换成你的IP或者FQDN)
看上去像这样:
```
stgraber@dakara:~$ lxc remote add foo 2607:f2c0:f00f:2770:216:3eff:fee1:bd67
Certificate fingerprint: fdb06d909b77a5311d7437cabb6c203374462b907f3923cefc91dd5fce8d7b60
ok (y/n)? y
Admin password for foo:
Client certificate stored at server: foo
```
你接着可以列出远端服务器你可以在列表中看到“foo”
```
stgraber@dakara:~$ lxc remote list
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| NAME | URL | PROTOCOL | PUBLIC | STATIC |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| foo | https://[2607:f2c0:f00f:2770:216:3eff:fee1:bd67]:8443 | lxd | NO | NO |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| images | https://images.linuxcontainers.org:8443 | lxd | YES | NO |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| local (default) | unix:// | lxd | NO | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | YES | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | YES | YES |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
```
### 与它交互
好了,所以我们已经有了一台定义好的远程服务器,我们现在可以做些什么?
好了就如你看到现在的唯一的不同是你不许告诉LXD要哪台主机运行。
比如:
```
lxc launch ubuntu:14.04 c1
```
它会在默认主机“lxc remote get-default”也就是你的本机上运行。
```
lxc launch ubuntu:14.04 foo:c1
```
这个会在foo上运行。
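顺带一提如果你接下来主要在“foo”上操作也可以把它设为默认远端这样命令就不用每次都带前缀了
```
# 将 foo 设为默认远端,之后不带前缀的命令都会作用于 foo
lxc remote set-default foo
lxc remote get-default
```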
列出远程主机正在运行的容器可以这么做:
```
stgraber@dakara:~$ lxc list foo:
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| c1 | RUNNING | 10.245.81.95 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe43:7994 (eth0) | PERSISTENT | 0 |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
```
你要记住的一件事是你需要在远程主机上同时指定镜像和容器。因此如果你在“foo”上有一个“my-image”的镜像并且希望从它创建一个“c2”的容器你需要运行
```
lxc launch foo:my-image foo:c2
```
最后就如你希望的那样得到一个远程容器的shell
```
lxc exec foo:c1 bash
```
### 复制容器
在两台主机间复制容器就如它听上去那样简单:
```
lxc copy foo:c1 c2
```
你会得到一个新的本地容器“c2”它是从远程的“c1”复制过来的。这要求“c1”容器处于停止状态不过你也可以在它运行时复制它的一个快照
```
lxc snapshot foo:c1 current
lxc copy foo:c1/current c3
```
### 移动容器
除非你在做实时迁移(将在之后的文章中介绍),否则你需要在移动前先停止容器,之后的操作就和你预料的一样。
```
lxc stop foo:c1
lxc move foo:c1 local:
```
这个例子等同于:
```
lxc stop foo:c1
lxc move foo:c1 c1
```
### 这些如何工作
正如你期望的那样与远程容器交互时LXD 使用的是完全相同的 API只不过是通过远程 HTTPS 传输,而不是通过本地 Unix 套接字。
当两个守护程序之间交互时会变得有些棘手,如复制和移动的情况。
在这种情况下,会发生以下这些步骤:
1. 用户运行“lxc move foo:c1 c1”。
2.客户端联系本地远程以检查现有的“c1”容器。
3.客户端从“foo”获取容器信息。
4.客户端从源“foo”守护程序请求迁移令牌。
5.客户端将迁移令牌以及源URL和“foo”证书发送到本地LXD守护程序以及容器配置和周围设备。
6.然后本地LXD守护程序使用提供的令牌直接连接到“foo”
  A.它连接到第一个控制websocket
  B.它协商文件系统传输协议zfs发送/接收btrfs发送/接收或者纯rsync
  C.如果在本地可用,它会解压用于创建源容器的镜像。这是为了避免不必要的数据传输。
  D.然后它会将容器及其任何快照作为增量传输。
7.如果成功客户端会命令“foo”删除源容器。
### 在线尝试
没有两台机器来尝试远端交互和复制/移动容器?
没有问题,你可以使用我们的[demo服务][2]。
这里甚至还包括了一步步的指导!
### 额外信息
LXD 的主站在: <https://linuxcontainers.org/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://github.com/lxc/lxd/issues/553
[2]: https://linuxcontainers.org/lxd/try-it/

View File

@ -0,0 +1,148 @@
LXD 2.0 系列LXD中的Docker
======================================
这是 [LXD 2.0 系列介绍文章][0]的第七篇。
![](https://linuxcontainers.org/static/img/containers.png)
### 为什么在LXD中运行Docker
正如我在[系列的第一篇][1]中简要介绍的LXD 的重点是系统容器,也就是我们在容器中运行一个完全未经修改的 Linux 发行版。从本质上讲LXD 并不关心容器中运行的负载是什么。它只是设置容器命名空间和安全策略,然后生成 /sbin/init接着等待容器停止。
应用程序容器例如由Docker或Rkt实现的应用程序容器是非常不同的因为它们用于分发应用程序通常在它们内部运行单个主进程并且比LXD容器生命期更短暂。
这两种容器类型不是相互排斥的我们的确看到使用Docker容器来分发应用程序的价值。这就是为什么我们在过去一年努力工作以便让LXD中运行Docker成为可能。
这意味着使用 Ubuntu 16.04 和 LXD 2.0,你可以为你的用户创建容器,他们可以像使用一个正常的 Ubuntu 系统一样连接到这些容器,然后运行 Docker 来安装他们想要的服务和应用程序。
### 要求
要让它正常工作要做很多事情Ubuntu 16.04上已经包含了这些:
- 支持CGroup命名空间的内核4.4 Ubuntu或4.6 mainline
- 使用LXC 2.0和LXCFS 2.0的LXD 2.0
- 一个自定义版本的 Docker或者一个带上了我们提交的所有补丁构建的版本
- 一个在用户命名空间限制下也能正常工作的 Docker 镜像,或者将父 LXD 容器设置为特权容器security.privileged=true
### 运行一个基础的Docker负载
说完这些让我们开始运行Docker容器
首先你可以用下面的命令得到一个Ubuntu 16.04的容器:
```
lxc launch ubuntu-daily:16.04 docker -p default -p docker
```
“-p default -p docker”表示LXD将“default”和“docker”配置文件应用于容器。默认配置文件包含基本网络配置而docker配置文件告诉LXD加载几个必需的内核模块并为容器设置一些挂载。 docker配置文件还允许容器嵌套。
现在让我们确保容器是最新的并安装docker
```
lxc exec docker -- apt update
lxc exec docker -- apt dist-upgrade -y
lxc exec docker -- apt install docker.io -y
```
就是这样你已经安装并运行了一个Docker容器。
现在让我们用两个Docker容器开启一个基础的web服务
```
stgraber@dakara:~$ lxc exec docker -- docker run --detach --name app carinamarina/hello-world-app
Unable to find image 'carinamarina/hello-world-app:latest' locally
latest: Pulling from carinamarina/hello-world-app
efd26ecc9548: Pull complete
a3ed95caeb02: Pull complete
d1784d73276e: Pull complete
72e581645fc3: Pull complete
9709ddcc4d24: Pull complete
2d600f0ec235: Pull complete
c4cf94f61cbd: Pull complete
c40f2ab60404: Pull complete
e87185df6de7: Pull complete
62a11c66eb65: Pull complete
4c5eea9f676d: Pull complete
498df6a0d074: Pull complete
Digest: sha256:6a159db50cb9c0fbe127fb038ed5a33bb5a443fcdd925ec74bf578142718f516
Status: Downloaded newer image for carinamarina/hello-world-app:latest
c8318f0401fb1e119e6c5bb23d1e706e8ca080f8e44b42613856ccd0bf8bfb0d
stgraber@dakara:~$ lxc exec docker -- docker run --detach --name web --link app:helloapp -p 80:5000 carinamarina/hello-world-web
Unable to find image 'carinamarina/hello-world-web:latest' locally
latest: Pulling from carinamarina/hello-world-web
efd26ecc9548: Already exists
a3ed95caeb02: Already exists
d1784d73276e: Already exists
72e581645fc3: Already exists
9709ddcc4d24: Already exists
2d600f0ec235: Already exists
c4cf94f61cbd: Already exists
c40f2ab60404: Already exists
e87185df6de7: Already exists
f2d249ff479b: Pull complete
97cb83fe7a9a: Pull complete
d7ce7c58a919: Pull complete
Digest: sha256:c31cf04b1ab6a0dac40d0c5e3e64864f4f2e0527a8ba602971dab5a977a74f20
Status: Downloaded newer image for carinamarina/hello-world-web:latest
d7b8963401482337329faf487d5274465536eebe76f5b33c89622b92477a670f
```
现在这两个Docker容器已经运行了我们可以得到LXD容器的IP地址并且访问它的服务了
```
stgraber@dakara:~$ lxc list
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| docker | RUNNING | 172.17.0.1 (docker0) | 2001:470:b368:4242:216:3eff:fe55:45f4 (eth0) | PERSISTENT | 0 |
| | | 10.178.150.73 (eth0) | | | |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
stgraber@dakara:~$ curl http://10.178.150.73
The linked container said... "Hello World!"
```
### 总结
就是这样了在LXD容器中运行Docker容器真的很简单。
现在正如我前面提到的并不是所有的Docker镜像都会像我的示例一样这通常是因为LXD提供了额外的限制特别是用户命名空间。
只有Docker的overlayfs存储驱动在这种模式下工作。该存储驱动有一组自己的限制这可以进一步限制在该环境中可以有多少镜像工作。
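可以像下面这样确认嵌套的 Docker 实际使用的存储驱动(一个简单的检查,容器名沿用上文示例):
```
# 查看容器内 Docker 使用的存储驱动,预期会显示 overlay
lxc exec docker -- docker info 2>/dev/null | grep -i 'storage driver'
```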
如果您的负载无法正常工作并且您信任LXD容器中的用户你可以试下
```
lxc config set docker security.privileged true
lxc restart docker
```
这将停用用户命名空间,并以特权模式运行容器。
但是请注意,在这种模式下,容器内的 root 与主机上的 root 拥有相同的 uid。目前已经有许多已知的方法可以让用户脱离容器并获得主机上的 root 权限,所以只有当你信任 LXD 容器中的用户、可以放心让他们拥有主机 root 权限时,才应该这样做。
### 额外信息
LXD 的主站在: <https://linuxcontainers.org/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://www.stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/
[2]: https://linuxcontainers.org/lxd/try-it/

View File

@ -1,41 +1,41 @@
Part 8 - LXD 2.0: LXD in LXD
==============================
LXD 2.0 系列LXD中的LXD
======================================
This is the eighth blog post [in this series about LXD 2.0][0].
这是 [LXD 2.0 系列介绍文章][0]的第八篇。
![](https://linuxcontainers.org/static/img/containers.png)
### Introduction
### 介绍
In the previous post I covered how to run [Docker inside LXD][1] which is a good way to get access to the portfolio of application provided by Docker while running in the safety of the LXD environment.
在上一篇文章中,我介绍了如何运行[LXD中的Docker][1]这是一个很好的方式来访问由Docker提供的应用程序组合同时Docker还运行在LXD提供的安全环境中。
One use case I mentioned was offering a LXD container to your users and then have them use their container to run Docker. Well, what if they themselves want to run other Linux distributions inside their container using LXD, or even allow another group of people to have access to a Linux system by running a container for them?
我提到过的一个使用场景是为你的用户提供一个 LXD 容器,然后让他们在自己的容器里运行 Docker。那么如果他们自己也想用 LXD 在其容器中运行其他 Linux 发行版,或者甚至想通过为另一组人运行容器来让他们访问一个 Linux 系统,该怎么办呢?
Turns out, LXD makes it very simple to allow your users to run nested containers.
事实证明LXD 让用户运行嵌套容器这件事变得非常简单。
### Nesting LXD
### 嵌套LXD
The most simple case can be shown by using an Ubuntu 16.04 image. Ubuntu 16.04 cloud images come with LXD pre-installed. The daemon itself isnt running as its socket-activated so it doesnt use any resources until you actually talk to it.
最简单的情况可以使用Ubuntu 16.04镜像来展示。 Ubuntu 16.04云镜像预装了LXD。守护进程本身没有运行因为它是套接字激活的所以它不使用任何资源直到你真正使用它。
So lets start an Ubuntu 16.04 container with nesting enabled:
让我们启动一个启用了嵌套的Ubuntu 16.04容器:
```
lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
```
You can also set the security.nesting key on an existing container with:
你也可以在一个已有的容器上设置 security.nesting 键:
```
lxc config set <container name> security.nesting true
```
Or for all containers using a particular profile with:
或者对使用某个特定配置文件的所有容器设置:
```
lxc profile set <profile name> security.nesting true
```
With that container started, you can now get a shell inside it, configure LXD and spawn a container:
容器启动后你可以从容器内部得到一个shell配置LXD并生成一个容器
```
stgraber@dakara:~$ lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
@ -79,34 +79,37 @@ root@c1:~# lxc list
root@c1:~#
```
It really is that simple!
就是这样简单
### The online demo server
### 在线演示服务器
As this post is pretty short, I figured I would spend a bit of time to talk about the [demo server][2] were running. We also just reached the 10000 sessions mark earlier today!
因为这篇文章很短,我想我会花一点时间谈论我们运行中的[演示服务器][2]。我们今天早些时候刚刚达到了10000个会话
That server is basically just a normal LXD running inside a pretty beefy virtual machine with a tiny daemon implementing the REST API used by our website.
这个服务器基本上就是一个运行在相当强大的虚拟机里的普通 LXD外加一个小型守护进程实现了我们网站所使用的 REST API。
When you accept the terms of service, a new LXD container is created for you with security.nesting enabled as we saw above. You are then attached to that container as you would when using “lxc exec” except that were doing it using websockets and javascript.
当你接受服务条款时,会为你创建一个新的、启用了 security.nesting如上所述的 LXD 容器。接着你就像使用“lxc exec”那样连接到那个容器只不过我们是通过 websocket 和 javascript 来实现的。
The containers you then create inside this environment are all nested LXD containers.
You can then nest even further in there if you want to.
你在此环境中创建的容器都是嵌套的LXD容器。
如果你想,你可以进一步地嵌套。
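作为参考,在这样的容器里再启动一个嵌套容器大致是下面这个样子(一个最小示意,嵌套容器的名字是假设的;如果你的版本没有 --auto 选项,可以交互式地运行 lxd init
```
# 以下命令在启用了 security.nesting 的容器内部执行
lxd init --auto                      # 非交互式地初始化容器里的 LXD
lxc launch ubuntu:16.04 nested-c1    # 在容器里再创建一个容器
lxc list
```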
We are using the whole range of [LXD resource limitations][3] to prevent one users actions from impacting the others and pretty closely monitor the server for any sign of abuse.
我们全范围地使用了[LXD资源限制][3],以防止一个用户的行为影响其他用户,并仔细监控服务器的任何滥用迹象。
If you want to run your own similar server, you can grab the code for our website and the daemon with:
如果你想运行自己的类似的服务器,你可以获取我们的网站和守护进程的代码:
```
git clone https://github.com/lxc/linuxcontainers.org
git clone https://github.com/lxc/lxd-demo-server
```
### Extra information
### 额外信息
The main LXD website is at: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
LXD 的主站在: <https://linuxcontainers.org/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
@ -114,7 +117,7 @@ IRC support happens in: #lxcontainers on irc.freenode.net
via: https://www.stgraber.org/2016/04/14/lxd-2-0-lxd-in-lxd-812/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,36 +1,36 @@
Part 9 - LXD 2.0: Live migration
=================================
LXD 2.0 系列(九):实时迁移
======================================
This is the ninth blog post [in this series about LXD 2.0][0].
这是 [LXD 2.0 系列介绍文章][0]的第九篇。
![](https://linuxcontainers.org/static/img/containers.png)
### Introduction
### 介绍
One of the very exciting feature of LXD 2.0, albeit experimental, is the support for container checkpoint and restore.
LXD 2.0 中有一个尽管还是实验性质、但非常令人兴奋的功能,那就是支持容器检查点和恢复。
Simply put, checkpoint/restore means that the running container state can be serialized down to disk and then restored, either on the same host as a stateful snapshot of the container or on another host which equates to live migration.
简单地说,检查点/恢复意味着正在运行的容器状态可以被序列化到磁盘,然后进行恢复:既可以在同一台主机上恢复(即容器的有状态快照),也可以在另一台主机上恢复(这就等同于实时迁移)。
### Requirements
### 要求
To have access to container live migration and stateful snapshots, you need the following:
要使用容器实时迁移和有状态快照功能,你需要满足以下条件:
- A very recent Linux kernel, 4.4 or higher.
- CRIU 2.0, possibly with some cherry-picked commits depending on your exact kernel configuration.
- Run LXD directly on the host. Its not possible to use those features with container nesting.
- For migration, the target machine must at least implement the instruction set of the source, the target kernel must at least offer the same syscalls as the source and any kernel filesystem which was mounted on the source must also be mountable on the target.
- 一个最近的Linux内核4.4或更高版本。
- CRIU 2.0可能有一些cherry-pick的提交具体取决于你确切的内核配置。
- 直接在主机上运行LXD。 不能在容器嵌套下使用这些功能。
- 对于迁移,目标机器必须至少实现源的指令集,目标内核必须至少提供与源相同的系统调用,并且在源上挂载的任何内核文件系统也必须可挂载到目标主机上。
All the needed dependencies are provided by Ubuntu 16.04 LTS, in which case, all you need to do is install CRIU itself:
Ubuntu 16.04 LTS已经提供了所有需要的依赖在这种情况下您只需要安装CRIU本身
```
apt install criu
```
### Using the thing
### 使用CRIU
#### Stateful snapshots
#### 有状态快照
A normal container snapshot looks like:
一个普通的快照看上去像这样:
```
stgraber@dakara:~$ lxc snapshot c1 first
@ -38,7 +38,7 @@ stgraber@dakara:~$ lxc info c1 | grep first
first (taken at 2016/04/25 19:35 UTC) (stateless)
```
A stateful snapshot instead looks like:
一个有状态快照看上去像这样:
```
stgraber@dakara:~$ lxc snapshot c1 second --stateful
@ -46,24 +46,24 @@ stgraber@dakara:~$ lxc info c1 | grep second
second (taken at 2016/04/25 19:36 UTC) (stateful)
```
This means that all the container runtime state was serialized to disk and included as part of the snapshot. Restoring one such snapshot is done as you would a stateless one:
这意味着所有容器运行时状态都被序列化到磁盘并且作为了快照的一部分。就像你还原无状态快照那样还原一个有状态快照:
```
stgraber@dakara:~$ lxc restore c1 second
stgraber@dakara:~$
```
#### Stateful stop/start
#### 有状态的停止/启动
Say you want to reboot your server for a kernel update or similar maintenance. Rather than have to wait for all the containers to start from scratch after reboot, you can do:
比方说你需要为了升级内核或者其他类似的维护而重启服务器。与其在重启后等待所有的容器从头启动,你可以:
```
stgraber@dakara:~$ lxc stop c1 --stateful
```
The container state will be written to disk and then picked up the next time you start it.
容器状态将会写入到磁盘,会在下次启动时读取。
You can even look at what the state looks like:
你甚至可以看到像下面那样的状态:
```
root@dakara:~# tree /var/lib/lxd/containers/c1/rootfs/state/
@ -226,15 +226,15 @@ root@dakara:~# tree /var/lib/lxd/containers/c1/rootfs/state/
0 directories, 154 files
```
Restoring the container can be done with a simple:
还原容器也很简单:
```
stgraber@dakara:~$ lxc start c1
```
### Live migration
### 实时迁移
Live migration is basically the same as the stateful stop/start above, except that the container directory and configuration happens to be moved to another machine too.
实时迁移基本上与上面的有状态停止/启动相同,区别是容器目录和配置还会被移动到另一台机器上。
```
stgraber@dakara:~$ lxc list c1
@ -264,52 +264,52 @@ stgraber@dakara:~$ lxc list s-tollana:
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
```
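迁移命令本身与第六篇中介绍的“lxc move”相同下面是一个最小示意远端名称沿用上文输出中的 s-tollana实际使用时请替换为你自己的远端
```
# 将正在运行的容器 c1 实时迁移到远端 s-tollana两端都需满足前文的 CRIU 相关要求)
lxc move c1 s-tollana:
```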
### Limitations
### 限制
As I said before, checkpoint/restore of containers is still pretty new and were still very much working on this feature, fixing issues as we are made aware of them. We do need more people trying this feature and sending us feedback, I would however not recommend using this in production just yet.
正如我之前说的,容器的检查点/恢复还是非常新的功能,我们还在努力开发这个功能,并在发现问题后及时修复。我们确实需要更多的人来尝试这个功能并给我们反馈,但我还不建议在生产环境中使用它。
The current list of issues were tracking is [available on Launchpad][1].
我们跟踪的问题列表在[Launchpad上][1]。
We expect a basic Ubuntu container with a few services to work properly with CRIU in Ubuntu 16.04. However more complex containers, using device passthrough, complex network services or special storage configurations are likely to fail.
我们预期在 Ubuntu 16.04 上,一个带有几个服务的基本 Ubuntu 容器能够与 CRIU 一起正常工作。然而更复杂的容器,比如使用了设备直通、复杂网络服务或特殊存储配置的容器,很可能会失败。
Whenever possible, CRIU will fail at dump time, rather than at restore time. In such cases, the source container will keep running, the snapshot or migration will simply fail and a log file will be generated for debugging.
只要有可能CRIU会在转储时失败而不是在恢复时。在这种情况下源容器将继续运行快照或迁移将会失败并生成一个日志文件用于调试。
In rare cases, CRIU fails to restore the container, in which case the source container will still be around but will be stopped and will have to be manually restarted.
在极少数情况下CRIU无法恢复容器在这种情况下源容器仍然存在但将被停止并且必须手动重新启动。
### Sending bug reports
### 发送bug报告
Were tracking bugs related to checkpoint/restore against the CRIU Ubuntu package on Launchpad. Most of the work to fix those bugs will then happen upstream either on CRIU itself or the Linux kernel, but its easier for us to track things this way.
我们正在跟踪Launchpad上关于CRIU Ubuntu软件包的检查点/恢复相关的错误。大多数修复bug工作是在上游的CRIU或Linux内核上但是这种方式我们更容易跟踪。
To file a new bug report, head here.
要提交新的bug报告请看这里。
Please make sure to include:
请务必包括:
The command you ran and the error message as displayed to you
你运行的命令和显示给你的错误消息
- Output of “lxc info” (*)
- Output of “lxc info <container name>
- Output of “lxc config show expanded <container name>
- Output of “dmesg” (*)
- Output of “/proc/self/mountinfo” (*)
- Output of “lxc exec <container name> — cat /proc/self/mountinfo”
- Output of “uname -a” (*)
- The content of /var/log/lxd.log (*)
- The content of /etc/default/lxd-bridge (*)
- A tarball of /var/log/lxd/<container name>/ (*)
- “lxc info”的输出*
- “lxc info <container name>”的输出
- “lxc config show --expanded <container name>”的输出
- “dmesg”的输出*
- “/proc/self/mountinfo”的输出*
- “lxc exec <container name> -- cat /proc/self/mountinfo”的输出
- “uname -a”的输出*
- /var/log/lxd.log 的内容(*
- /etc/default/lxd-bridge 的内容(*
- /var/log/lxd/<container name>/ 的 tarball*
If reporting a migration bug as opposed to a stateful snapshot or stateful stop bug, please include the data for both the source and target for any of the above which has been marked with a (*).
如果报告的是迁移错误,而不是有状态快照或有状态停止的错误,请把上面标有(*)的各项在源主机和目标主机上的信息都附上。
### Extra information
### 额外信息
The CRIU website can be found at: <https://criu.org>
CRIU 的网站在: <https://criu.org>
The main LXD website is at: <https://linuxcontainers.org/lxd>
LXD 的主站在: <https://linuxcontainers.org/lxd>
Development happens on Github at: <https://github.com/lxc/lxd>
LXD 的 GitHub 仓库: <https://github.com/lxc/lxd>
Mailing-list support happens on: <https://lists.linuxcontainers.org>
LXD 的邮件列表: <https://lists.linuxcontainers.org>
IRC support happens in: #lxcontainers on irc.freenode.net
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
--------------------------------------------------------------------------------
@ -317,7 +317,7 @@ IRC support happens in: #lxcontainers on irc.freenode.net
via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
作者:[Stéphane Graber][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出