From 5b7e6604092c38ebad6a1a528f006c6c9ed27304 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 28 Jun 2016 17:14:24 +0800 Subject: [PATCH 001/471] PUB:20160518 Android Use Apps Even Without Installing Them @alim0x @carolinewuyan --- ...d Use Apps Even Without Installing Them.md | 30 +++++++++++-------- 1 file changed, 17 insertions(+), 13 deletions(-) rename {translated/talk => published}/20160518 Android Use Apps Even Without Installing Them.md (68%) diff --git a/translated/talk/20160518 Android Use Apps Even Without Installing Them.md b/published/20160518 Android Use Apps Even Without Installing Them.md similarity index 68% rename from translated/talk/20160518 Android Use Apps Even Without Installing Them.md rename to published/20160518 Android Use Apps Even Without Installing Them.md index a8aecc1d19..40cf217035 100644 --- a/translated/talk/20160518 Android Use Apps Even Without Installing Them.md +++ b/published/20160518 Android Use Apps Even Without Installing Them.md @@ -3,23 +3,23 @@ 谷歌安卓的一项新创新将可以让你无需安装即可在你的设备上使用应用程序。现在已经初具雏形。 -还记得那时候吗,某人发给你了一个链接,要求你通过安装来查看应用。 +还记得那时候吗,某人发给你了一个链接,要求你通过安装一个应用才能查看。 -是否要安装这个应用来查看一个一次性的链接,这种进退两难的选择一定让你感到很沮丧。而且,应用安装本身也会消耗你不少宝贵的时间。 +是否要安装这个应用就为了看一下链接,这种进退两难的选择一定让你感到很沮丧。而且,安装应用这个事也会消耗你不少宝贵的时间。 -上述场景可能大多数人都经历过,或者说大多数现代科技用户都经历过。尽管如此,我们都接受这是正确且合理的过程。 +上述场景可能大多数人都经历过,或者说大多数现代科技用户都经历过。尽管如此,我们都接受,认为这是天经地义的事情。 事实真的如此吗? -针对这个问题谷歌的安卓部门给出了一个全新的,开箱即用的答案: +针对这个问题谷歌的安卓部门给出了一个全新的、开箱即用的答案: -### Android Instant Apps +### Android Instant Apps (AIA) -Android Instant Apps 声称第一时间帮你摆脱这样的两难境地,让你简单地点击链接(见打开链接的示例)然后直接开始使用这个应用。 +Android Instant Apps 声称可以从一开始就帮你摆脱这样的两难境地,让你简单地点击链接(见打开链接的示例)然后直接开始使用这个应用。 -另一个真实生活场景的例子,如果你想停车但是没有停车码表的配对应用,有了 Instant Apps 在这种情况下就方便多了。 +另一个真实生活场景的例子,如果你想停车但是没有停车码表的相应应用,有了 Instant Apps 在这种情况下就方便多了。 -根据谷歌的信息,你可以简单地将你的手机和码表触碰,停车应用就会直接显示在你的屏幕上,并且准备就绪可以使用。 +根据谷歌提供的信息,你可以简单地将你的手机和码表触碰,停车应用就会直接显示在你的屏幕上,并且准备就绪可以使用。 #### 它是怎么工作的? 
@@ -30,21 +30,25 @@ Instant Apps 和你已经熟悉的应用基本相同,只有一个不同—— 这样应用就可以快速打开,让你可以完成你的目标任务。 ![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/AIA-demo.jpg) ->AIA 示例 + +*AIA 示例* ![](https://4.bp.blogspot.com/-p5WOrD6wVy8/VzyIpsDqULI/AAAAAAAADD0/xbtQjurJZ6EEji_MPaY1sLK5wVkXSvxJgCKgB/s800/B%2526H%2B-%2BDevice%2B%2528Final%2529.gif) ->B&H 图片(通过谷歌搜索) + +*B&H 图片(通过谷歌搜索)* ![](https://2.bp.blogspot.com/-q5ApCzECuNA/VzyKa9l0t2I/AAAAAAAADEI/nYhhMClDl5Y3qL5-wiOb2J2QjtGWwbF2wCLcB/s800/BuzzFeed-Device-Install%2B%2528Final%2529.gif) ->BuzzFeedVideo(通过一个共享链接) + +*BuzzFeedVideo(通过一个共享链接)* ![](https://2.bp.blogspot.com/-mVhKMMzhxms/VzyKg25ihBI/AAAAAAAADEM/dJN6_8H7qkwRyulCF7Yr2234-GGUXzC6ACLcB/s800/Park%2Band%2BPay%2B-%2BDevice%2Bwith%2BMeter%2B%2528Final%2529.gif) ->停车与支付(例)(通过 NFC) + +*停车与支付(例)(通过 NFC)* 听起来很棒,不是吗?但是其中还有很多技术方面的问题需要解决。 -比如,从安全的观点来说:如果任何应用从理论上来说都能在你的设备上运行,甚至你都不用安装它——你要怎么保证设备远离恶意软件攻击? +比如,从安全的观点来说:从理论上来说,如果任何应用都能在你的设备上运行,甚至你都不用安装它——你要怎么保证设备远离恶意软件攻击? 因此,为了消除这类威胁,谷歌还在这个项目上努力,目前只有少数合作伙伴,未来将逐步扩展。 From 2dec6e3903ae2e540674b51551a9778b7d856cf1 Mon Sep 17 00:00:00 2001 From: Purling Nayuki Date: Tue, 28 Jun 2016 17:56:47 +0800 Subject: [PATCH 002/471] Proofread 20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System --- ...-The Man Behind Ubuntu Operating System.md | 71 ++++++++++--------- 1 file changed, 36 insertions(+), 35 deletions(-) diff --git a/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md b/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md index 0d3547aeb9..a9cd35e7ca 100644 --- a/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md +++ b/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md @@ -1,117 +1,118 @@ Mark Shuttleworth – Ubuntu 操作系统背后的人 ================================================================================ 
-![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg) +![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg) -**Mark Richard Shuttleworth** 是 Ubuntu 的创始人,他也有事被称作是 Debian 背后的那个人。他出生于1973年的 Welkom,南非。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的独立非洲国家公民。 +**Mark Richard Shuttleworth** 是 Ubuntu 的创始人,有时也被称作 Debian 背后的人。他于 1973 年出生在南非的 Welkom。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的独立非洲国家公民。 -Mark 还在1996年成立了 **Thawte**,一家互联网安全企业,那是他还只是 University of Cape Town 的一名金融/IT学生。 +Mark 曾在 1996 年成立了 **Thawte**,一家互联网安全企业,那时他还只是 University of Cape Town(开普敦大学)的一名金融/IT学生。 -在2000年,Mark 创立了 HBD,一家投资公司,同时他还创立了 Shuttleworth基金会,致力于给社会中有创新性的领袖提供资助——以奖金和投资等形式。 +2000 年,Mark 创立了 HBD,一家投资公司,同时他还创立了 Shuttleworth 基金会,致力于给社会中有创新性的领袖以奖金和投资等形式提供资助。 -> "移动设备对于个人电脑行业的未来而言至关重要。比如就在这个月,数据清晰地表明相对于平板电脑的发展,传统 PC 行业正在萎缩。所以如果我们想和个人电脑行业有关系,我们必须和移动设备行业有产生联系。移动互联网行业之所以有趣,还因为在这里没有盗版 Windows 操作系统。所以如果你为你的操作系统赢得了一台设备的市场份额,这台设备会持续使用你的操作系统。在传统 PC 行业,我们时不时得和 ‘免费 Windows’ 产生竞争,这一竞争困难的非常微妙。所以我们现在的目标是围绕 Ubuntu 和移动设备——手机和平板——为用户打造更深度的生态环境。" +> "移动设备对于个人电脑行业的未来而言至关重要。比如就在这个月,相对于平板电脑的发展而言,传统 PC 行业很明显正在萎缩。所以如果我们想要涉足个人电脑产业,我们必须首先涉足移动行业。移动产业之所以有趣,还因为在这里没有盗版 Windows 操作系统。所以如果你为你的操作系统赢得了一台设备的市场份额,这台设备会持续使用你的操作系统。在传统 PC 行业,我们时不时得和“免费”的 Windows 产生竞争,这是一种非常微妙的挑战。所以我们现在的重心是围绕 Ubuntu 和移动设备——手机和平板——以图与普通用户建立更深层次的联系。" > > — Mark Shuttleworth -在2002年,在俄罗斯的 Star City 接收完为期一年的训练后,他作为 Soyuz 任务代号 TM-34 的一员飞往了国际空间站。再后来,在面向有志于航空航天或者其科学相关的南非学生群体中,内完成了推广科学,编程,数学的演讲后,Mark 创立了 **Canonical Ltd**。此后直至2013年,他一直在领导 Ubuntu 操作系统的开发。 +2002 年,他在俄罗斯的 Star City 接受了为期一年的训练,随后作为联盟号 TM-34 任务组的一员飞往了国际空间站。再后来,在发起面向有志于航空航天或者其相关学科的南非学生群体中推广科学、编程及数学的运动后,Mark 创立了 **Canonical Ltd**。此后直至2013年,他一直在领导 Ubuntu 操作系统的开发。 -现今,Shuttleworth 有英国与南非双重国籍并和18只可爱的鸭子住在英国的 Isle of Man 小岛上一处花园,一同的还有他一样可爱的女友 Claire,2 条黑母狗以及时不时经过的羊群。 +现今,Shuttleworth 拥有英国与南非双重国籍并和 18 只可爱的鸭子住在英国的 Isle of Man 小岛上的一处花园,一同的还有他可爱的女友 Claire,两条黑母狗以及时不时经过的羊群。 -> "电脑不再只是一台电子设备了。他现在是你思维的延续,以及通向他人的入口。" +> "电脑不仅仅是一台电子设备了。它现在是你思维的延续,以及通向他人的大门。" > > — Mark 
Shuttleworth ### Mark Shuttleworth 的早年生活### -正如我们之前提到的,Mark 出生在 Welkom,南非的橙色自由州。他是一名外科医生和护士学校教师的孩子。Mark 在 Western Province Preparatory School 就读并在1986年成为了学生会主席,一个学期后就读于 Rondebosch 男子高中,再之后入学 Bishops/Diocesan 学院并在1991年再次成为那里的学生会主席。 +正如我们之前提到的,Mark 出生在南非的橙色自由州 Welkom。他是一名外科医生和护士学校教师的孩子。他在 Western Province Preparatory School 就读并在 1986 年成为了学生会主席,一个学期后就读于 Rondebosch 男子高中,再之后入学 Bishops/Diocesan 学院并在 1991 年再次成为那里的学生会主席。 -Mark 在 University of Cape Town 拿到了 Bachelor of Business Science degree in the Finance and Information Systems (译者:商业科学里的双学士学位,两个学科分别是金融和信息系统),他在学校就读是住在 Smuts Hall。他,作为学生,也在那里帮助安装了学校的第一条宿舍网络。 +Mark 在 University of Cape Town 拿到了 Bachelor of Business Science degree in the Finance and Information Systems (译注:商业科学里的双学士学位,两个学科分别是金融和信息系统),他在学校就读时住在 Smuts Hall。作为学生,他也在那里帮助安装了学校的第一条宿舍网络。 ->“有无数的企业和国家佐证,引入开源政策能提高竞争力和效率。在不同层面上创造生产力对于公司和国家而言都是至关重要的。” +>“无数的企业和国家已经证明,引入开源政策能提高竞争力和效率。在不同层面上创造生产力对于公司和国家而言都是至关重要的。” > > — Mark Shuttleworth ### Mark Shuttleworth 的职业生涯 ### -Mark 在1995年创立 Thawte,公司专注于数字证书和互联网安全,然后他在1999年把公司卖给了 VeriSign,赚取了大约 5.75 亿美元。 +Mark 在 1995 年创立了 Thawte,公司专注于数字证书和互联网安全,然后在 1999 年把公司卖给了 VeriSign,赚取了大约 5.75 亿美元。 -2000年的时候,Mark 创立了 HBD 风险资本公司,这项事业成为了投资方和项目孵化器。2004年的时候,他创立了 Canonical Ltd. 以支持和鼓励自由软件开发项目的商业化,特别是 Ubuntu 操作系统的项目。直到2009年,Mark 才从 Canonical CEO 的位置上退下。 +2000 年,Mark 创立了 HBD 风险资本公司,这项事业成为了投资方和项目孵化器。2004 年,他创立了 Canonical Ltd. 
以支持和鼓励自由软件开发项目的商业化,特别是 Ubuntu 操作系统的项目。直到 2009 年,Mark 才从 Canonical CEO 的位置上退下。 -> “在 [DDC](https://en.wikipedia.org/wiki/DCC_Alliance) (译者:一个 Debian Gnu/Linux 开发者联盟) 的早期,我更倾向于让开发者做些他们自己的(内核开发)工作看看能弄出些什么。现在我们基本上已经完成了这个开发阶段了。” +> “在 [DDC](https://en.wikipedia.org/wiki/DCC_Alliance) (译者:一个 Debian GNU/Linux 开发者联盟) 的早期,我更倾向于让拥护者放手去做,看看能发展出什么。” > > — Mark Shuttleworth ### Linux、免费开源软件 与 Mark Shuttleworth ### -在90年代末,Mark 作为 Debian 系统开发者的一员参与了项目。 +在 90 年代末,Mark 曾作为一名开发者参与 Debian 操作系统项目。 -2001年,Mark 创立了 Shuttleworth 基金会,这是个扎根南非的,非赢利性,专注于赞助社会创新,免费/教育用途开源软件的基金会,赞助过的项目包括 Freedom Toaster。 +2001 年,Mark 创立了 Shuttleworth 基金会,这是个扎根南非的、非赢利性的基金会,专注于赞助社会创新、免费/教育用途开源软件,曾赞助过 Freedom Toaster。 -2004年的时候,Mark 通过出资开发 基于 Debian 的 Ubuntu 操作系统回归了免费软件界,这一切也经由他的公司,Canonical,完成。 +2004 年,Mark 通过出资开发基于 Debian 的 Ubuntu 操作系统回归了自由软件界,这一切也经由他的 Canonical 公司完成。 -2005年,Mark 出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,Mark 经常被一个朗朗上口的名字称呼——“**SABDFL (Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的能手开发这个巨大的项目,Mark 花费了6个月的时间在 Debian 的邮件列表里找到能手,这一切都是在他乘坐在南极洲的一艘破冰船——Kapitan Khlebnikov——上完成的。2005年,Mark 买下了 Impi Linux 65% 的股份。 +2005 年,Mark 出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,Mark 经常被一个朗朗上口的名字称呼——“**SABDFL (Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的能手开发这个巨大的项目,Mark 花费了 6 个月的时间从 Debian 邮件列表里找到能手,这一切都是在他乘坐在南极洲的一艘破冰船——Kapitan Khlebnikov——上完成的。同年,Mark 买下了 Impi Linux 65% 的股份。 > “我呼吁电信公司的掌权者们尽快开发出跨洲际的高效信息传输服务。” > > — Mark Shuttleworth -2006年,KDE 宣布 Shuttleworth 成为第一位 **patron** 级别赞助者——彼时 KDE 最高级别的赞助。这一赞助协议终止与2012年,取而代之的是 Kubuntu——一个运用 KDE 作为默认桌面环境的 Ubuntu 变种——的资金。 +2006 年,KDE 宣布 Shuttleworth 成为第一位 **patron** 级别赞助者——彼时 KDE 最高级别的赞助。这一赞助协议在 2012 年终止,取而代之的是一个使用 KDE 作为默认桌面环境的 Ubuntu 变种——Kubuntu 的资金。 -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/shuttleworth-kde.jpg) +![](http://www.unixmen.com/wp-content/uploads/2015/10/shuttleworth-kde.jpg) -2009年,Shuttleworth 宣布他会从 CEO 退位以更好的关注与合作伙伴,产品设计和顾客体验。Jane Silber ——2004年起公司的COO——晋升CEO。 +2009 年,Shuttleworth 宣布他会从 CEO 
退位以更好地关注合作伙伴,产品设计和顾客体验。从 2004 年起担任公司 COO 的 Jane Silber 晋升为 CEO。 -2010年,Mark 由于 Ubuntu 项目从 Open University 收到了荣誉学位。 +2010 年,Mark 由于 Ubuntu 项目从 Open University 获得了荣誉学位。 -2012年,Mark 和 Kenneth Rogoff 一同在牛津大学与 Peter Thiel 和 Garry Kasparov 就 **创新悖论**(The Innovation Enigma)展开辩论。 +2012 年,Mark 和 Kenneth Rogoff 一同在牛津大学与 Peter Thiel 和 Garry Kasparov 就**创新悖论**(The Innovation Enigma)展开辩论。 -2013年,Mark 和 Ubuntu 一同被授予 **澳大利亚反个人隐私老大哥监控奖**(Austrian anti-privacy Big Brother Award),理由为把 Ubuntu 会把 Unity 桌面的搜索框的搜索结果发往 Canonical 服务器(译者:因此侵犯了个人隐私)。而一年前的2012年,Mark 曾经申明过这一过程极具匿名性。 +2013 年,Mark 和 Ubuntu 一同被授予**澳大利亚反个人隐私老大哥监控奖**(Austrian anti-privacy Big Brother Award),理由是 Ubuntu 会把 Unity 桌面的搜索框的搜索结果发往 Canonical 服务器(译注:因此侵犯了个人隐私)。而一年前,Mark 曾经申明过这一过程进行了匿名化处理。 -> “所有主流 PC 厂家现在都提供 Ubuntu 预安装选项。所以我们和业界的合作已经相当紧密了。但那些 PC 厂家对于给买家推广新东西这件事都很紧张。如果我们可以让买家习惯 Ubuntu 的桌面/平板/手机操作系统的体验,那他们也应该更愿意买预装 Ubuntu 的设备。因为没有哪个操作系统是通过抄袭模仿获得成功的。Android 很棒,如果我们想成功的话我们必须给市场带去更新更好的东西。整个环境都有停滞发展的危险,如果我们中没有人追寻未来的话。但如果你尝试去追寻未来了,那你必须接受不是所有人对未来的预见都和你一样这一事实。” +> “所有主流 PC 厂家现在都提供 Ubuntu 预安装选项。所以我们和业界的合作已经相当紧密了。但那些 PC 厂家对于给买家推广新东西这件事都很紧张。如果我们可以让买家习惯 Ubuntu 的桌面/平板/手机操作系统的体验,那他们也应该更愿意买预装 Ubuntu 的设备。因为没有哪个操作系统是通过抄袭模仿获得成功的。Android 很棒,但如果我们想成功的话我们必须给市场带去更新更好的东西(校注:而不是改进或者模仿 Android)。如果我们中没有人追寻未来的话,我们将陷入停滞不前的危险。但如果你尝试去追寻未来了,那你必须接受不是所有人对未来的预见都和你一样这一事实。” > > — Mark Shuttleworth ### Mark Shuttleworth 的太空之旅 ### -Mark 在2002年由于作为世界第二名自费太空游客而闻名世界,同时他也是南非第一个旅行太空的人。这趟旅行 Mark 作为俄罗斯 Soyuz TM-34 的一名航空参与者加入,并支付了约两千万美元。2天后,Soyuz 太空梭抵达了国际空间站,在那里 Mark 呆了8天并参与了 AIDS 和 GENOME 研究的相关实验。2002年的晚些时候,Mark 乘坐 Soyuz TM-33 返回了地球。为了参与这趟旅行,Mark 花了一年时间准备与训练,包括7个月居住在俄罗斯的 Start City。 +Mark 在 2002 年作为世界第二名自费太空游客而闻名世界,同时他也是南非第一个旅行太空的人。这趟旅行中,Mark 作为俄罗斯联盟号 TM-34 任务的一名航空参与者加入,并为此支付了约两千万美元。2 天后,联盟号宇宙飞船抵达了国际空间站,在那里 Mark 呆了 8 天并参与了艾滋病和基因组研究的相关实验。同年晚些时候,Mark 随联盟号 TM-33 任务返回了地球。为了参与这趟旅行,Mark 花了一年时间准备与训练,其中有 7 个月居住在俄罗斯的 Start City。 -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg) 
+![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg) 在太空中,Mark 与 Nelson Mandela 和另一个南非女孩 Michelle Foster (她问 Mark 要不要娶她)通过无线电进行了交谈。Mark 回避了结婚问题,在换话题之前他说他感到很荣幸。身患绝症的 Forster 和 Nelson Mandela 通过 Dream 基金会的赞助获得了与 Mark 交谈的机会。 归来后,Mark 在世界各地做了旅行,并和各地的学生就太空之旅发表了感言。 ->“粗略的统计数据表明 Ubuntu 的实际用户依然在增长。而我们的合作方——Dell,HP,Lenovo 和其他硬件生产商,以及游戏厂商 EA,Valve 都在加入我们——这让我觉得我们在引导一项很有意义的事业。” +>“粗略的统计数据表明 Ubuntu 的实际用户依然在增长。而我们的合作方——Dell、HP、Lenovo 和其他硬件生产商,以及游戏厂商 EA、Valve 都在加入我们——这让我觉得我们在关键的领域继续领先。” > > — Mark Shuttleworth ### Mark Shuttleworth 的交通工具 ### -Mark 有他自己的私人客机,Bombardier Global Express,经常被称为 Canonical 一号,但事实上此飞机是通过 HBD 风险投资公司注册拥有的。飞机侧面的喷绘龙图案是 HBD 风投公司的吉祥物,Norman。 +Mark 有他自己的私人客机 Bombardier Global Express。虽然它经常被称为 Canonical 一号,但事实上此飞机是通过 HBD 风险投资公司注册拥有的。飞机侧面的龙图案涂装是 HBD 风投公司的吉祥物 Norman。 ### 与南非储蓄银行的法律冲突 ### -在从南非转移25亿南非兰特去往 Isle of Man 的过程中,南非储蓄银行征收了 2.5 亿南非兰特的税金。Mark 上诉了,经过冗长的法庭唇枪舌战,南非储蓄银行被勒令返还 2.5 亿征税,以及其利息。Mark 宣布他会把这 2.5 亿存入信托基金,以用于帮助上诉宪法法院的案子。 +在从南非转移 25 亿南非兰特去往 Isle of Man 的过程中,南非储蓄银行征收了 2.5 亿南非兰特的税金。Mark 上诉了,经过冗长的法庭唇枪舌战,南非储蓄银行被勒令返还 2.5 亿征税,以及其利息。Mark 宣布他会把这 2.5 亿存入信托基金,以用于帮助上诉宪法法院的案子。 -> “离境征税倒也不和宪法冲突。但离境征税的主要目的不是为了提高税收,而是通过监管资金流出来保护本国经济。” +> “离境征税倒也不和宪法冲突。但离境征税的主要目的不是提高税收,而是通过监管资金流出来保护本国经济。” > -> — 法官 Dikgang Moseneke +> — Dikgang Moseneke 法官 -2015年,南非宪法法院修正了低级法院的判决结果,并宣布了上述对于离岸征税的理解。 +2015 年,南非宪法法院修正了低级法院的判决结果,并宣布了上述对于离岸征税的理解。 ### Mark Shuttleworth 喜欢的东西 ### -Cesária Évora, mp3s,Spring, Chelsea, finally seeing something obvious for first time, coming home, Sinatra, daydreaming, sundowners, flirting, d’Urberville, string theory, Linux, particle physics, Python, reincarnation, mig-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain m banks, broadband, Alastair Reynolds, fancy dress, skinny-dipping, flashes of insight, post-adrenaline euphoria, the inexplicable, convertibles, Clifton, country roads, international space station, machine learning, artificial 
intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes. +Cesária Évora, mp3s, Spring, Chelsea, finally seeing something obvious for first time, coming home, Sinatra, daydreaming, sundowners, flirting, d’Urberville, string theory, Linux, particle physics, Python, reincarnation, mig-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain m banks, broadband, Alastair Reynolds, fancy dress, skinny-dipping, flashes of insight, post-adrenaline euphoria, the inexplicable, convertibles, Clifton, country roads, international space station, machine learning, artificial intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes. ### Shuttleworth 不喜欢的东西 ### Admin, salary negotiations, legalese, and public speaking. +(校对:和程序猿不喜欢 PM 一个道理?) -------------------------------------------------------------------------------- @@ -119,7 +120,7 @@ via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system 作者:[M.el Khamlichi][a] 译者:[Moelf](https://github.com/Moelf) -校对:[校对者ID](https://github.com/校对者ID) +校对:[PurlingNayuki](https://github.com/PurlingNayuki) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e2c29ecd550f2484476d29e66e7344948ebbb5a5 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 28 Jun 2016 20:53:08 +0800 Subject: [PATCH 003/471] PUB:20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method @alim0x --- ...Passwords With A New Trust-Based Authentication Method.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) rename {translated/talk => published}/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md (96%) diff --git a/translated/talk/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md b/published/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md similarity index 96% 
rename from translated/talk/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md rename to published/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md index 7e0e85d688..ba14bf8742 100644 --- a/translated/talk/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md +++ b/published/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md @@ -14,7 +14,8 @@ 基于这个信任分,一个需要登录认证的应用可以验证你确实可以授权登录,从而不会提示需要密码。 ![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/Abacus-to-Trust-API.jpg) ->Abacus 到 Trust API + +*Abacus 到 Trust API* ### 需要思考的地方 @@ -31,7 +32,7 @@ via: http://www.iwillfolo.com/will-google-replace-passwords-with-a-new-trust-bas 作者:[iWillFolo][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d5373fa7ec5bfdc41fc4a6aef67bb21b951741ca Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 28 Jun 2016 21:04:27 +0800 Subject: [PATCH 004/471] =?UTF-8?q?PUB:20160611=20vlock=20=E2=80=93=20A=20?= =?UTF-8?q?Smart=20Way=20to=20Lock=20User=20Virtual=20Console=20or=20Termi?= =?UTF-8?q?nal=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @martin2011qi --- ...User Virtual Console or Terminal in Linux.md | 102 ++++++++++++++++++ ...User Virtual Console or Terminal in Linux.md | 99 ----------------- 2 files changed, 102 insertions(+), 99 deletions(-) create mode 100644 published/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md delete mode 100644 translated/tech/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md diff --git a/published/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md b/published/20160611 vlock – A Smart Way to Lock User 
Virtual Console or Terminal in Linux.md new file mode 100644 index 0000000000..99c1da66ab --- /dev/null +++ b/published/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md @@ -0,0 +1,102 @@ +vlock – 一个锁定 Linux 用户虚拟控制台或终端的好方法 +======================================================================= + +虚拟控制台是 Linux 上非常重要的功能,它们给系统用户提供了 shell 提示符,以保证用户在登录和远程登录一个未安装图形界面的系统时仍能使用。 + +一个用户可以同时操作多个虚拟控制台会话,只需在虚拟控制台间来回切换即可。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/vlock-Lock-User-Terminal-in-Linux.png) + +*用 vlock 锁定 Linux 用户控制台或终端* + +这篇使用指导旨在教会大家如何使用 vlock 来锁定用户虚拟控制台和终端。 + +### vlock 是什么? + +vlock 是一个用于锁定一个或多个用户虚拟控制台用户会话的工具。在多用户系统中 vlock 扮演着重要的角色,它让用户可以在锁住自己会话的同时不影响其他用户通过其他虚拟控制台操作同一个系统。必要时,还可以锁定所有的控制台,同时禁止在虚拟控制台间切换。 + +vlock 的主要功能面向控制台会话方面,同时也支持非控制台会话的锁定,但该功能的测试还不完全。 + +### 在 Linux 上安装 vlock + +根据你的 Linux 系统选择 vlock 安装指令: + +``` +# yum install vlock [On RHEL / CentOS / Fedora] +$ sudo apt-get install vlock [On Ubuntu / Debian / Mint] +``` + +### 在 Linux 上使用 vlock + +vlock 操作选项的常规语法: + +``` +# vlock option +# vlock option plugin +# vlock option -t plugin +``` + +#### vlock 常用选项及用法: + +1、 锁定用户的当前虚拟控制台或终端会话,如下: + +``` +# vlock --current +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-User-Terminal-Session-in-Linux.png) + +*锁定 Linux 用户终端会话* + +选项 -c 或 --current,用于锁定当前的会话,该参数为运行 vlock 时的默认行为。 + +2、 锁定所有你的虚拟控制台会话,并禁用虚拟控制台间切换,命令如下: + +``` +# vlock --all +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-All-Linux-Terminal-Sessions.png) + +*锁定所有 Linux 终端会话* + +选项 -a 或 --all,用于锁定所有用户的控制台会话,并禁用虚拟控制台间切换。 + +其他的选项只有在编译 vlock 时编入了相关插件支持和引用后,才能发挥作用: + +3、 选项 -n 或 --new,调用时后,会在锁定用户的控制台会话前切换到一个新的虚拟控制台。 + +``` +# vlock --new +``` + +4、 选项 -s 或 --disable-sysrq,在禁用虚拟控制台的同时禁用 SysRq 功能,只有在与 -a 或 --all 同时使用时才起作用。 + +``` +# vlock -sa +``` + +5、 选项 -t 或 --timeout ,用以设定屏幕保护插件的 timeout 值。 + +``` +# vlock --timeout 5 +``` + +你可以使用 `-h` 或 `--help` 和 `-v` 或 `--version` 分别查看帮助消息和版本信息。 + +我们的介绍就到这了,提示一点,你可以将 vlock 的 
`~/.vlockrc` 文件包含到系统启动中,并参考入门手册[添加环境变量][1],特别是 Debian 系的用户。 + +想要找到更多或是补充一些这里没有提及的信息,可以直接在写在下方评论区。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/vlock-lock-user-virtual-console-terminal-linux/ + +作者:[Aaron Kili][a] +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/set-path-variable-linux-permanently/ diff --git a/translated/tech/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md b/translated/tech/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md deleted file mode 100644 index 26c74b019e..0000000000 --- a/translated/tech/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md +++ /dev/null @@ -1,99 +0,0 @@ -vlock – 一个锁定 Linux 用户虚拟控制台或终端的好方法 -======================================================================= - -虚拟控制台是 Linux 非常重要的功能,他们为使用系统的用户提供了 shell 提示符,以保证用户在登录和远程登录一个未安装图形界面的系统时仍能使用。 - -一个用户可以同时操作多个虚拟控制台会话,只需在虚拟控制台间来回切换即可。 - -![](http://www.tecmint.com/wp-content/uploads/2016/05/vlock-Lock-User-Terminal-in-Linux.png) ->用 vlock 锁定 Linux 用户控制台或终端 - -这篇使用指导,旨在教会大家如何使用 vlock 来锁定用户虚拟控制台和终端。 - -### vlock 是什么? - -vlock 是一个用于锁定一个或多个用户虚拟控制台用户会话的工具。在多用户系统中 vlock 是扮演着重要的角色,他让用户可以在锁住自己会话的同时不影响其他用户通过其他虚拟控制台操作同一个系统。必要时,还可以锁定所有的控制台,同时禁止在虚拟控制台间切换。 - -vlock 的主要功能面向控制台会话方面,同时也支持非控制台会话的锁定,但该功能的测试还不完全。 - -### 在 Linux 上安装 vlock - -根据你的 Linux 系统选择 vlock 安装指令: - -``` -# yum install vlock [On RHEL / CentOS / Fedora] -$ sudo apt-get install vlock [On Ubuntu / Debian / Mint] -``` - -### 在 Linux 上使用 vlock - -vlock 操作选项的常规语法: - -``` -# vlock option -# vlock option plugin -# vlock option -t plugin -``` - -#### vlock 常用选项及用法: - -1. 
锁定用户的当前虚拟控制台或终端会话,如下: - - ``` - # vlock --current - ``` - - ![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-User-Terminal-Session-in-Linux.png) - >锁定 Linux 用户终端会话 - - 选项 -c 或 --current,锁定当前的会话,该参数为运行 vlock 时的默认行为。 - -2. 锁定所有你的虚拟控制台会话,并禁用虚拟控制台间切换,命令如下: - - ``` - # vlock --all - ``` - - ![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-All-Linux-Terminal-Sessions.png) - >锁定所有 Linux 终端会话 - - 选项 -a 或 --all,锁定所有用户的控制台会话,并禁用虚拟控制台间切换。 - - 其他的选项只有在编译 vlock 时编入了相关插件支持及其引用后,才能发挥作用: - -3. 选项 -n 或 --new,调用时后,会在锁定用户的控制台会话前切换到一个新的虚拟控制台。 - - ``` - # vlock --new - ``` - -4. 选项 -s 或 --disable-sysrq,在禁用虚拟控制台的同时禁用 SysRq 功能,只有在与 -a 或 --all 同时使用时才起作用。 - - ``` - # vlock -sa - ``` - -5. 选项 -t 或 --timeout ,用以设定屏幕保护插件的 timeout 值。 - - ``` - # vlock --timeout 5 - ``` - -你可以使用 `-h` 或 `--help` 和 `-v` 或 `--version` 分别查看帮助消息和版本信息。 - -我们的介绍就到这了,提示一点,你可以将 vlock 的 `~/.vlockrc` 文件包含到系统启动中并参考入门手册[添加环境变量][1],特别是 Debian 系的用户。 - -想要找到更多或是补充一些这里没有提及的信息,可以直接在写在下方评论区。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/vlock-lock-user-virtual-console-terminal-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[martin2011qi](https://github.com/martin2011qi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/set-path-variable-linux-permanently/ From d009439a06283b0fefda6806dc9dd13a92da702b Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 28 Jun 2016 23:19:55 +0800 Subject: [PATCH 005/471] =?UTF-8?q?20160628-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...n All Distributions – Are They Any Good.md | 90 +++++++++++++++++++ 1 file changed, 90 insertions(+) create mode 100644 sources/talk/20160627 Linux Applications 
That Works On All Distributions – Are They Any Good.md

diff --git a/sources/talk/20160627 Linux Applications That Works On All Distributions – Are They Any Good.md b/sources/talk/20160627 Linux Applications That Works On All Distributions – Are They Any Good.md
new file mode 100644
index 0000000000..98967d866b
--- /dev/null
+++ b/sources/talk/20160627 Linux Applications That Works On All Distributions – Are They Any Good.md
@@ -0,0 +1,90 @@
+Linux Applications That Works On All Distributions – Are They Any Good?
+============================================================================
+
+![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Bundled-applications.jpg)
+
+
+A revisit of the Linux community’s latest ambitions – promoting decentralized applications in order to tackle distribution fragmentation.
+
+Following last week’s article: [Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful][1]?, a couple of new opinions have risen to the surface which may contain crucial information about the usefulness of such apps.
+
+### The Con Side
+
+Commenting on the subject [here][2], a [Gentoo][3] user who goes by the name Till raised a few points which hadn’t been addressed in full the last time we covered the issue.
+
+While previously we settled on merely calling it bloat, Till, on the other hand, dissects that bloat further, so as to help us better understand both its components and its consequences.
+
+Referring to such apps as “bundled applications” – since the way they work on all distributions is by shipping dependencies together with the apps themselves – Till says:
+
+>“bundles ship a lot of software that now needs to be maintained by the application developer. If library X has a security problem and needs an update, you rely on every single applications to ship correct updates in order to make your system save.”
+
+Essentially, Till raises an important security point. 
However, it doesn’t necessarily have to be tied to security alone, but can also be linked to other aspects such as system maintenance, atomic updates, etc…
+
+Furthermore, if we take that notion one step further and assume that dependency developers may cooperate, releasing their software in correlation with the apps that use them (a utopian situation), we would then see an overall slowdown of the entire platform’s development.
+
+Another problem that arises from the same point made above is that dependency transparency becomes obscured; that is, if you want to know which libraries are bundled with a certain app, you’ll have to rely on the developer to publish such data.
+
+Or, as Till puts it: “Questions like, did package XY already include the updated library Z, will be your daily bread”.
+
+For comparison, with the standard methods available on Linux nowadays (both binary and source distributions), you can easily notice which libraries are being updated upon a system update.
+
+And you can also rest assured that all other apps on the system will use them, freeing you from the need to check each app individually.
+
+Other cons that may be deduced from the term bloat include: bigger package size (each app is bundled with its dependencies), higher memory usage (no more library sharing) and also –
+
+One less filter mechanism to prevent malicious software – distribution package maintainers also serve as a filter between developers and users, helping to assure users get quality software.
+
+With bundled apps this may no longer be the case.
+
+As a final general point, Till asserts that although bundled apps are useful in some cases, for the most part they weaken the position of free software in distributions (as proprietary vendors will now be able to deliver software without sharing it in public repositories).
+
+And apart from that, they introduce many other issues. Many problems are simply moved towards the developers.
+
+### The Pro Side
+
+In contrast, another comment by a person named Sven tries to contradict the common claims that basically go against the use of bundled applications, hence justifying and promoting their use.
+
+“waste of space” – Sven claims that in today’s world we have many other things that waste disk space, such as movies stored on the hard drive, installed locales, etc…
+
+Ultimately, these things are infinitely more wasteful than a mere “100 MB to run a program you use all day … Don’t be ridiculous.”
+
+“waste of RAM” – the major points in favor are:
+
+- Shared libraries waste significantly less RAM compared to application runtime data.
+- RAM is cheap today.
+
+“security nightmare” – not every application you run is actually security-critical.
+
+Also, many applications never even see any security updates, unless on a ‘rolling distro’.
+
+In addition to Sven’s opinions, which try to stick to the pragmatic side, a few advantages were also pointed out by Till, who admits that bundled apps have their merits in certain cases:
+
+- Proprietary vendors who want to keep their code out of the public repositories will be able to do so more easily.
+- Niche applications, which are not packaged by your distribution, will now be more readily available.
+- Testing on binary distributions which do not have beta packages will become easier.
+- Freeing users from solving dependency problems.
+
+### Final Thoughts
+
+Although shedding new light onto the matter, it seems that one conclusion still stands and is accepted by all parties – bundled apps have their niche to fill in the Linux ecosystem.
+
+Nevertheless, the role that niche should take, whether a main or a marginal one, appears to be a lot clearer now, at least from a theoretical point of view.
+
+Users who are looking to keep their system as optimized as possible should, in the majority of cases, avoid using bundled apps.
+
+Whereas users who are after ease of use – meaning, doing the least work in order to maintain their systems – should, and probably would, feel very comfortable adopting the new method.
+
+--------------------------------------------------------------------------------
+
+via: http://www.iwillfolo.com/linux-applications-that-works-on-all-distributions-are-they-any-good/
+
+作者:[Editorials][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.iwillfolo.com/category/editorials/
+[1]: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/
+[2]: http://www.proli.net/2016/06/25/gnulinux-bundled-application-ramblings/
+[3]: http://www.iwillfolo.com/5-reasons-use-gentoo-linux/

From 4dd996a6a8e9c1891a1ae329a0001ea65b2e87f8 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Tue, 28 Jun 2016 23:23:31 +0800
Subject: [PATCH 006/471] =?UTF-8?q?20160628-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...And Is ‘One Fits All’ Linux Packages Useful.md | 110 ++++++++++++++++++
 1 file changed, 110 insertions(+)
 create mode 100644 sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md

diff --git a/sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md b/sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md
new file mode 100644
index 0000000000..35088e4be1
--- /dev/null
+++ b/sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md
@@ -0,0 +1,110 @@
+Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful? 
+=================================================================================
+
+![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg)
+
+An in-depth look into the new generation of packages starting to permeate the Linux ecosystem.
+
+
+Lately we’ve been hearing more and more about Ubuntu’s Snap packages and Flatpak (formerly referred to as xdg-app), created by Red Hat employee Alexander Larsson.
+
+These 2 types of next-generation packages share, in essence, the same goal and characteristics: they are standalone packages that don’t rely on 3rd-party system libraries in order to function.
+
+This new technological direction in which Linux seems to be headed automatically gives rise to questions such as: what are the advantages / disadvantages of standalone packages? Does this lead us to a better Linux overall? What are the motives behind it?
+
+To answer these questions and more, let us explore the things we know about Snap and Flatpak so far.
+
+### The Motive
+
+According to both [Flatpak][1] and [Snap][2] statements, the main motive behind them is to be able to bring one and the same version of an application to run across multiple Linux distributions.
+
+>“From the very start its primary goal has been to allow the same application to run across a myriad of Linux distributions and operating systems.” Flatpak
+
+>“… ‘snap’ universal Linux package format, enabling a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device.” Snap
+
+To be more specific, the guys behind Snap and Flatpak (S&F) believe that there’s a barrier of fragmentation on the Linux platform.
+
+A barrier which holds back the platform’s advancement by burdening developers with more, perhaps unnecessary, work to get their software running on the many distributions out there. 
+
+Therefore, as the makers of leading Linux distributions (Ubuntu & Red Hat), they wish to eliminate this barrier and strengthen the platform in general.
+
+But what more personal gains might motivate the development of S&F?
+
+#### Personal Gains?
+
+Although not officially stated anywhere, it may be assumed that by leading the effort to create a unified package format that could potentially be adopted by the vast majority of Linux distros (if not all of them), the captains of these projects could assume a key position in determining where the Linux ship sails.
+
+### The Advantages
+
+The benefits of standalone packages are diverse and can depend on different factors.
+
+Basically, however, these factors can be categorized under two distinct criteria:
+
+#### User Perspective
+
++ From a Linux user’s point of view, Snap and Flatpak both bring the possibility of installing any package (software / app) on any distribution the user is using.
+
+That is, for instance, if you’re using a less popular distribution whose repos carry only a scarce supply of packages, probably due to workforce limitations, you’ll now be able to easily and significantly increase the number of packages available to you – which is a great thing.
+
++ Also, users of popular distributions that do have many packages available in their repos will enjoy the ability to install packages that might not have worked with their current set of installed libraries.
+
+For example, a Debian user who wants to install a package from the ‘testing’ branch will not have to convert his entire system to ‘testing’ (in order for the package to run against newer libraries); rather, that user will simply be able to install only the package he wants, from whichever branch he likes and on whatever branch he’s on.
+
+The latter point was already basically possible for users who compiled their packages straight from source; however, unless they use a source-based distribution such as Gentoo, most users will see this as just an unworthy hassle.
+
++ The advanced user, or perhaps better put, the security-aware user, might feel more comfortable with this type of package as long as it comes from a reliable source, since such packages tend to provide another layer of isolation – they are generally isolated from system packages.
+
+* Both S&F are being developed with enhanced security in mind, which generally makes use of “sandboxing”, i.e. isolation, in order to prevent cases where they carry a virus which can infect the entire system, similar to the way .exe files on MS Windows may. (More on MS and S&F later)
+
+#### Developer Perspective
+
+For developers, the advantages of developing S&F packages will probably be a lot clearer than they are to the average user; some of these were already hinted at in a previous section of this post.
+
+Nonetheless, here they are:
+
++ S&F will make it easier for devs who want to develop for more than one Linux distribution by unifying the development process, therefore minimizing the amount of work a developer needs to do in order to get his app running on multiple distributions.
+
+++ Developers could therefore gain easier access to a wider range of distributions.
+
++ S&F allow devs to privately distribute their packages without being dependent on distribution maintainers to stabilize their package for each and every distro.
+
+++ Through the above, devs may gain access to direct statistics of user adoption / engagement for their software.
+
+++ Also through the above, devs could get more directly involved with users, rather than having to do so through a middleman, in this case, the distribution.
+
+### The Downsides
+
+– Bloat. Simple as that. Flatpak and Snap aren’t just magic that makes dependencies evaporate into thin air.
Rather, instead of relying on the target system to provide the required dependencies, S&F packages come with the dependencies prebuilt into them.
+
+As the saying goes, “if the mountain won’t come to Muhammad, Muhammad must go to the mountain…”
+
+– Just as the security-aware user might enjoy the extra layer of isolation of S&F packages, as long as they come from a trusted source, the less knowledgeable user, on the other hand, might be prone to the other side of the coin: the hazard of using a package from an unknown source which may contain malicious software.
+
+The above point can be said to be valid even with today’s popular methods, as PPAs, overlays, etc. might also be maintained by untrusted sources.
+
+However, with S&F packages the risk increases, since malicious software developers need to create only one version of their program in order to infect a large number of distributions, whereas without it they’d need to create multiple versions in order to adjust their malware to other distributions.
+
+### Was Microsoft Right All Along?
+
+With all that’s mentioned above in mind, it’s pretty clear that for the most part, the advantages of using S&F packages outweigh the drawbacks.
+
+At least for users of binary-based distributions, or distros not focused on being lightweight.
+
+Which eventually led me to asking the above question – could it be that Microsoft was right all along? If so, and S&F becomes the Linux standard, would you still consider Linux a Unix-like variant?
+
+Well, apparently, the best one to answer those questions is probably time.
+
+Nevertheless, I’d argue that even if not entirely right, MS certainly has a good point to its credit, and having all these methods available here on Linux out of the box is certainly a plus in my book.
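Returning to the bloat point above, here is a back-of-the-envelope sketch — every number in it is invented purely for illustration — of why duplicating dependencies inside each bundle costs disk space compared with shared system libraries:

```python
# Invented numbers, purely illustrative: compare storing common dependencies
# once (shared system libraries) vs. duplicating them inside every bundle.
apps = 10            # hypothetical number of installed applications
app_code_mb = 5      # unique application code per app, in MB
deps_mb = 60         # common dependencies each bundle would carry, in MB

shared_total = apps * app_code_mb + deps_mb        # deps stored once
bundled_total = apps * (app_code_mb + deps_mb)     # deps stored per app

print("shared libraries:", shared_total, "MB")     # 110 MB
print("standalone bundles:", bundled_total, "MB")  # 650 MB
```

Real-world numbers vary wildly with how much the bundles actually share, but the multiplication is the essence of the “bloat” objection.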
+ + +-------------------------------------------------------------------------------- + +via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/ + +作者:[Editorials][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.iwillfolo.com/category/editorials/ From 7b4d8e5cacdadb4141e56992fae127d6eab120d4 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jun 2016 00:21:09 +0800 Subject: [PATCH 007/471] =?UTF-8?q?=E4=BA=8C=E6=A0=A1?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...-The Man Behind Ubuntu Operating System.md | 108 +++++++++--------- 1 file changed, 55 insertions(+), 53 deletions(-) diff --git a/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md b/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md index a9cd35e7ca..b40048201b 100644 --- a/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md +++ b/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md @@ -1,102 +1,100 @@ -Mark Shuttleworth – Ubuntu 操作系统背后的人 +马克·沙特尔沃思 – Ubuntu 背后的那个男人 ================================================================================ + ![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg) -**Mark Richard Shuttleworth** 是 Ubuntu 的创始人,有时也被称作 Debian 背后的人。他于 1973 年出生在南非的 Welkom。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的独立非洲国家公民。 +**马克·理查德·沙特尔沃思(马克Richard Shuttleworth)** 是 Ubuntu 的创始人,也被称作 [Debian 背后的人][1]([之一][2])。他于 1973 年出生在南非的韦尔科姆(Welkom)。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的非洲独立国家的公民。 -Mark 曾在 1996 年成立了 **Thawte**,一家互联网安全企业,那时他还只是 University of Cape Town(开普敦大学)的一名金融/IT学生。 +马克曾在 1996 年成立了一家名为 **Thawte** 的互联网商务安全公司,那时他还在开普敦大学( University of Cape Town)的学习金融和信息技术。 -2000 年,Mark 创立了 HBD,一家投资公司,同时他还创立了 Shuttleworth 
基金会,致力于给社会中有创新性的领袖以奖金和投资等形式提供资助。 +2000 年,马克创立了 HBD(Here be Dragons (此处有龙/危险)的缩写,所以其吉祥物是一只龙),这是一家投资公司,同时他还创立了沙特尔沃思基金会(Shuttleworth Foundation),致力于给社会中有创新性的领袖以奖金和投资等形式提供资助。 - -> "移动设备对于个人电脑行业的未来而言至关重要。比如就在这个月,相对于平板电脑的发展而言,传统 PC 行业很明显正在萎缩。所以如果我们想要涉足个人电脑产业,我们必须首先涉足移动行业。移动产业之所以有趣,还因为在这里没有盗版 Windows 操作系统。所以如果你为你的操作系统赢得了一台设备的市场份额,这台设备会持续使用你的操作系统。在传统 PC 行业,我们时不时得和“免费”的 Windows 产生竞争,这是一种非常微妙的挑战。所以我们现在的重心是围绕 Ubuntu 和移动设备——手机和平板——以图与普通用户建立更深层次的联系。" +> "移动设备对于个人电脑行业的未来而言至关重要。比如就在这个月,相对于平板电脑的发展而言,传统 PC 行业很明显正在萎缩。所以如果我们想要涉足个人电脑产业,我们必须首先涉足移动行业。移动产业之所以有趣,是因为在这里没有盗版 Windows 操作系统的市场。所以如果你为你的操作系统赢得了一台设备的市场份额,这台设备会一直使用你的操作系统。在传统 PC 行业,我们时不时得和“免费”的 Windows 产生竞争,这是一种非常微妙的挑战。所以我们现在的重心是围绕 Ubuntu 和移动设备——手机和平板——以图与普通用户建立更深层次的联系。" > -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -2002 年,他在俄罗斯的 Star City 接受了为期一年的训练,随后作为联盟号 TM-34 任务组的一员飞往了国际空间站。再后来,在发起面向有志于航空航天或者其相关学科的南非学生群体中推广科学、编程及数学的运动后,Mark 创立了 **Canonical Ltd**。此后直至2013年,他一直在领导 Ubuntu 操作系统的开发。 - - -现今,Shuttleworth 拥有英国与南非双重国籍并和 18 只可爱的鸭子住在英国的 Isle of Man 小岛上的一处花园,一同的还有他可爱的女友 Claire,两条黑母狗以及时不时经过的羊群。 +2002 年,他在俄罗斯的星城(Star City)接受了为期一年的训练,随后作为联盟号 TM-34 任务组的一员飞往了国际空间站。再后来,在面向有志于航空航天或者其相关学科的南非学生群体发起了推广科学、编程及数学的运动后,马克 创立了 **Canonical Ltd**。此后直至2013年,他一直在领导 Ubuntu 操作系统的开发。 +现今,Shuttleworth 拥有英国与南非双重国籍并和 18 只可爱的鸭子住在英国的 Isle of Man 小岛上的一处花园,一同的还有他可爱的女友 Claire,两条黑色母狗以及时不时经过的羊群。 > "电脑不仅仅是一台电子设备了。它现在是你思维的延续,以及通向他人的大门。" > -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -### Mark Shuttleworth 的早年生活### -正如我们之前提到的,Mark 出生在南非的橙色自由州 Welkom。他是一名外科医生和护士学校教师的孩子。他在 Western Province Preparatory School 就读并在 1986 年成为了学生会主席,一个学期后就读于 Rondebosch 男子高中,再之后入学 Bishops/Diocesan 学院并在 1991 年再次成为那里的学生会主席。 +### 马克·沙特尔沃思的早年生活### -Mark 在 University of Cape Town 拿到了 Bachelor of Business Science degree in the Finance and Information Systems (译注:商业科学里的双学士学位,两个学科分别是金融和信息系统),他在学校就读时住在 Smuts Hall。作为学生,他也在那里帮助安装了学校的第一条宿舍网络。 +正如我们之前提到的,马克出生在南非的奥兰治自由邦(Orange Free State)的韦尔科姆(Welkom)。他是一名外科医生和护士学校教师的孩子。他在西部省预科学校就读并在 1986 年成为了学生会主席,一个学期后就读于 Rondebosch 男子高中,再之后入学 Bishops Diocesan 学院并在 1991 年再次成为那里的学生会主席。 + 
+马克在开普敦大学( University of Cape Town)拿到了金融和信息系统的商业科学双学士学位,他在学校就读时住在 Smuts Hall。作为学生,他也在那里帮助安装了学校的第一条宿舍互联网接入。 >“无数的企业和国家已经证明,引入开源政策能提高竞争力和效率。在不同层面上创造生产力对于公司和国家而言都是至关重要的。” > -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -### Mark Shuttleworth 的职业生涯 ### +### 马克·沙特尔沃思的职业生涯 ### -Mark 在 1995 年创立了 Thawte,公司专注于数字证书和互联网安全,然后在 1999 年把公司卖给了 VeriSign,赚取了大约 5.75 亿美元。 +马克在 1995 年创立了 Thawte,公司专注于数字证书和互联网安全,然后在 1999 年把公司卖给了 VeriSign,赚取了大约 5.75 亿美元。 -2000 年,Mark 创立了 HBD 风险资本公司,这项事业成为了投资方和项目孵化器。2004 年,他创立了 Canonical Ltd. 以支持和鼓励自由软件开发项目的商业化,特别是 Ubuntu 操作系统的项目。直到 2009 年,Mark 才从 Canonical CEO 的位置上退下。 +2000 年,马克创立了 HBD 风险资本公司,成为了商业投资人和项目孵化器。2004 年,他创立了 Canonical Ltd. 以支持和鼓励自由软件开发项目的商业化,特别是 Ubuntu 操作系统的项目。直到 2009 年,马克才从 Canonical CEO 的位置上退下。 -> “在 [DDC](https://en.wikipedia.org/wiki/DCC_Alliance) (译者:一个 Debian GNU/Linux 开发者联盟) 的早期,我更倾向于让拥护者放手去做,看看能发展出什么。” +> “在 [DDC](https://en.wikipedia.org/wiki/DCC_Alliance) (LCTT 译注:一个 Debian GNU/Linux 开发者联盟) 的早期,我更倾向于让拥护者们放手去做,看看能发展出什么。” > -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -### Linux、免费开源软件 与 Mark Shuttleworth ### +### Linux、免费开源软件与马克·沙特尔沃思 ### -在 90 年代末,Mark 曾作为一名开发者参与 Debian 操作系统项目。 +在 90 年代后期,马克曾作为一名开发者参与 Debian 操作系统项目。 -2001 年,Mark 创立了 Shuttleworth 基金会,这是个扎根南非的、非赢利性的基金会,专注于赞助社会创新、免费/教育用途开源软件,曾赞助过 Freedom Toaster。 +2001 年,马克创立了沙特尔沃思基金会,这是个扎根南非的、非赢利性的基金会,专注于赞助社会创新、免费/教育用途开源软件,曾赞助过[自由烤面包机][3](Freedom Toaster)(LCTT 译注:自由烤面包机是一个可以给用户带来的 CD/DVD 上刻录自由软件的公共信息亭)。 -2004 年,Mark 通过出资开发基于 Debian 的 Ubuntu 操作系统回归了自由软件界,这一切也经由他的 Canonical 公司完成。 +2004 年,马克通过出资开发基于 Debian 的 Ubuntu 操作系统返回了自由软件界,这一切也经由他的 Canonical 公司完成。 -2005 年,Mark 出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,Mark 经常被一个朗朗上口的名字称呼——“**SABDFL (Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的能手开发这个巨大的项目,Mark 花费了 6 个月的时间从 Debian 邮件列表里找到能手,这一切都是在他乘坐在南极洲的一艘破冰船——Kapitan Khlebnikov——上完成的。同年,Mark 买下了 Impi Linux 65% 的股份。 +2005 年,马克出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,马克经常被一个朗朗上口的名字称呼——“**SABDFL :自封的生命之仁慈独裁者(Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的高手开发这个巨大的项目,马克花费了 6 个月的时间从 Debian 
邮件列表里寻找,这一切都是在他乘坐在南极洲的一艘破冰船——赫列布尼科夫船长号(Kapitan Khlebnikov)——上完成的。同年,马克买下了 Impi Linux 65% 的股份。 > “我呼吁电信公司的掌权者们尽快开发出跨洲际的高效信息传输服务。” > -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -2006 年,KDE 宣布 Shuttleworth 成为第一位 **patron** 级别赞助者——彼时 KDE 最高级别的赞助。这一赞助协议在 2012 年终止,取而代之的是一个使用 KDE 作为默认桌面环境的 Ubuntu 变种——Kubuntu 的资金。 +2006 年,KDE 宣布沙特尔沃思成为 KDE 的**第一赞助人(first patron)**——彼时 KDE 最高级别的赞助。这一赞助协议在 2012 年终止,取而代之的是对 Kubuntu 的资金支持,这是一个使用 KDE 作为默认桌面环境的 Ubuntu 变种。 ![](http://www.unixmen.com/wp-content/uploads/2015/10/shuttleworth-kde.jpg) -2009 年,Shuttleworth 宣布他会从 CEO 退位以更好地关注合作伙伴,产品设计和顾客体验。从 2004 年起担任公司 COO 的 Jane Silber 晋升为 CEO。 +2009 年,Shuttleworth 宣布他会从 Canonical 的 CEO 上退位以更好地关注合作关系、产品设计和客户。从 2004 年起担任公司 COO 的珍妮·希比尔(Jane Silber)晋升为 CEO。 -2010 年,Mark 由于 Ubuntu 项目从 Open University 获得了荣誉学位。 +2010 年,马克由于其贡献而被开放大学(Open University)授予了荣誉学位。 -2012 年,Mark 和 Kenneth Rogoff 一同在牛津大学与 Peter Thiel 和 Garry Kasparov 就**创新悖论**(The Innovation Enigma)展开辩论。 +2012 年,马克和肯尼斯·罗格夫(Kenneth Rogoff)一同在牛津大学与彼得·蒂尔(Peter Thiel)和加里·卡斯帕罗夫(Garry Kasparov)就**创新悖论**(The Innovation Enigma)展开辩论。 -2013 年,Mark 和 Ubuntu 一同被授予**澳大利亚反个人隐私老大哥监控奖**(Austrian anti-privacy Big Brother Award),理由是 Ubuntu 会把 Unity 桌面的搜索框的搜索结果发往 Canonical 服务器(译注:因此侵犯了个人隐私)。而一年前,Mark 曾经申明过这一过程进行了匿名化处理。 +2013 年,马克和 Ubuntu 一同被授予**澳大利亚反个人隐私大哥奖**(Austrian anti-privacy Big Brother Award),理由是默认情况下, Ubuntu 会把 Unity 桌面的搜索框的搜索结果发往 Canonical 服务器(LCTT 译注:因此侵犯了个人隐私)。而一年前,马克曾经申明过这一过程进行了匿名化处理。 - -> “所有主流 PC 厂家现在都提供 Ubuntu 预安装选项。所以我们和业界的合作已经相当紧密了。但那些 PC 厂家对于给买家推广新东西这件事都很紧张。如果我们可以让买家习惯 Ubuntu 的桌面/平板/手机操作系统的体验,那他们也应该更愿意买预装 Ubuntu 的设备。因为没有哪个操作系统是通过抄袭模仿获得成功的。Android 很棒,但如果我们想成功的话我们必须给市场带去更新更好的东西(校注:而不是改进或者模仿 Android)。如果我们中没有人追寻未来的话,我们将陷入停滞不前的危险。但如果你尝试去追寻未来了,那你必须接受不是所有人对未来的预见都和你一样这一事实。” +> “所有主流 PC 厂家现在都提供 Ubuntu 预安装选项,所以我们和业界的合作已经相当紧密了。但那些 PC 厂家对于给买家推广新东西这件事都很紧张。如果我们可以让 PC 买家习惯 Ubuntu 的平板/手机操作系统的体验,那他们也应该更愿意买预装 Ubuntu 的 PC。没有哪个操作系统是通过抄袭模仿获得成功的,Android 很棒,但如果我们想成功的话我们必须给市场带去更新更好的东西(LCTT 译注:而不是改进或者模仿 Android)。如果我们中没有人追寻未来的话,我们将陷入停滞不前的危险。但如果你尝试去追寻未来了,那你必须接受不是所有人对未来的预见都和你一样这一事实。” 
> -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -### Mark Shuttleworth 的太空之旅 ### - -Mark 在 2002 年作为世界第二名自费太空游客而闻名世界,同时他也是南非第一个旅行太空的人。这趟旅行中,Mark 作为俄罗斯联盟号 TM-34 任务的一名航空参与者加入,并为此支付了约两千万美元。2 天后,联盟号宇宙飞船抵达了国际空间站,在那里 Mark 呆了 8 天并参与了艾滋病和基因组研究的相关实验。同年晚些时候,Mark 随联盟号 TM-33 任务返回了地球。为了参与这趟旅行,Mark 花了一年时间准备与训练,其中有 7 个月居住在俄罗斯的 Start City。 +### 马克·沙特尔沃思的太空之旅 ### +马克在 2002 年作为世界第二名自费太空游客而闻名世界,同时他也是南非第一个旅行太空的人。这趟旅行中,马克作为俄罗斯联盟号 TM-34 任务的一名乘员加入,并为此支付了约两千万美元。2 天后,联盟号宇宙飞船抵达了国际空间站,在那里马克呆了 8 天并参与了艾滋病和基因组研究的相关实验。同年晚些时候,马克随联盟号 TM-33 任务返回了地球。为了参与这趟旅行,马克花了一年时间准备与训练,其中有 7 个月居住在俄罗斯的星城。 ![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg) -在太空中,Mark 与 Nelson Mandela 和另一个南非女孩 Michelle Foster (她问 Mark 要不要娶她)通过无线电进行了交谈。Mark 回避了结婚问题,在换话题之前他说他感到很荣幸。身患绝症的 Forster 和 Nelson Mandela 通过 Dream 基金会的赞助获得了与 Mark 交谈的机会。 +在太空中,马克与纳尔逊·曼德拉(Nelson Mandela)和另一个 14 岁的南非女孩米歇尔·福斯特(Michelle Foster) (她问马克要不要娶她)通过无线电进行了交谈。马克礼貌地回避了这个结婚问题,在巧妙地改换话题之前他说他感到很荣幸。身患绝症的女孩福斯特通过梦想基金会( Dream foundation)的赞助获得了与马克和纳尔逊·曼德拉交谈的机会。 -归来后,Mark 在世界各地做了旅行,并和各地的学生就太空之旅发表了感言。 +归来后,马克在世界各地做了旅行,并和各地的学生就太空之旅发表了感言。 - ->“粗略的统计数据表明 Ubuntu 的实际用户依然在增长。而我们的合作方——Dell、HP、Lenovo 和其他硬件生产商,以及游戏厂商 EA、Valve 都在加入我们——这让我觉得我们在关键的领域继续领先。” +>“粗略的统计数据表明 Ubuntu 的实际用户依然在增长。而我们的合作方——戴尔、惠普、联想和其他硬件生产商,以及游戏厂商 EA、Valve 都在加入我们——这让我觉得我们在关键的领域继续领先。” > -> — Mark Shuttleworth +> — 马克·沙特尔沃思 -### Mark Shuttleworth 的交通工具 ### +### 马克·沙特尔沃思的交通工具 ### -Mark 有他自己的私人客机 Bombardier Global Express。虽然它经常被称为 Canonical 一号,但事实上此飞机是通过 HBD 风险投资公司注册拥有的。飞机侧面的龙图案涂装是 HBD 风投公司的吉祥物 Norman。 +马克有他自己的私人客机庞巴迪全球特快(Bombardier Global Express),虽然它经常被称为 Canonical 一号,但事实上此飞机是通过 HBD 风险投资公司注册拥有的。涂画在飞机侧面的龙图案是 HBD 风投公司的吉祥物 ,名叫 Norman。 -### 与南非储蓄银行的法律冲突 ### +![](http://www.leader.co.za/leadership/logos/logomarkshuttleworthdirectory_31ce.gif) -在从南非转移 25 亿南非兰特去往 Isle of Man 的过程中,南非储蓄银行征收了 2.5 亿南非兰特的税金。Mark 上诉了,经过冗长的法庭唇枪舌战,南非储蓄银行被勒令返还 2.5 亿征税,以及其利息。Mark 宣布他会把这 2.5 亿存入信托基金,以用于帮助上诉宪法法院的案子。 +### 与南非储备银行的法律冲突 ### + +在从南非转移 25 亿南非兰特去往 Isle of Man 的过程中,南非储备银行征收了 2.5 亿南非兰特的税金。马克上诉了,经过冗长的法庭唇枪舌战,南非储备银行被勒令返还 2.5 
亿征税,以及其利息。马克宣布他会把这 2.5 亿存入信托基金,以用于帮助那些上诉到宪法法院的案子。 > “离境征税倒也不和宪法冲突。但离境征税的主要目的不是提高税收,而是通过监管资金流出来保护本国经济。” @@ -105,14 +103,15 @@ Mark 有他自己的私人客机 Bombardier Global Express。虽然它经常被 2015 年,南非宪法法院修正了低级法院的判决结果,并宣布了上述对于离岸征税的理解。 -### Mark Shuttleworth 喜欢的东西 ### +### 马克·沙特尔沃思喜欢的东西 ### -Cesária Évora, mp3s, Spring, Chelsea, finally seeing something obvious for first time, coming home, Sinatra, daydreaming, sundowners, flirting, d’Urberville, string theory, Linux, particle physics, Python, reincarnation, mig-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain m banks, broadband, Alastair Reynolds, fancy dress, skinny-dipping, flashes of insight, post-adrenaline euphoria, the inexplicable, convertibles, Clifton, country roads, international space station, machine learning, artificial intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes. +Cesária Évora、mp3、春天、切尔西(Chelsea)、“恍然大悟”(finally seeing something obvious for first time)、回家、辛纳屈(Sinatra)、白日梦、暮后小酌、挑逗、苔丝(d’Urberville)、弦理论、Linux、粒子物理、Python、转世、米格-29、雪、旅行、Mozilla、酸橙果酱、激情代价(body shots)、非洲丛林、豹、拉贾斯坦邦、俄罗斯桑拿、单板滑雪、失重、Iain m 银行、宽度、阿拉斯泰尔·雷诺兹(Alastair Reynolds)、化装舞会服装、裸泳、灵机一动、肾上腺素激情消退、莫名(the inexplicable)、活动顶篷式汽车、Clifton、国家公路、国际空间站、机器学习、人工智能、维基百科、Slashdot、风筝冲浪(kitesurfing)和 Manx lanes。 -### Shuttleworth 不喜欢的东西 ### -Admin, salary negotiations, legalese, and public speaking. -(校对:和程序猿不喜欢 PM 一个道理?) 
+ +### 马克·沙特尔沃思不喜欢的东西 ### + +行政、涨工资、法律术语和公众演讲。 -------------------------------------------------------------------------------- @@ -120,8 +119,11 @@ via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system 作者:[M.el Khamlichi][a] 译者:[Moelf](https://github.com/Moelf) -校对:[PurlingNayuki](https://github.com/PurlingNayuki) +校对:[PurlingNayuki](https://github.com/PurlingNayuki), [wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.unixmen.com/author/pirat9/ +[1]:https://wiki.debian.org/PeopleBehindDebian +[2]:https://raphaelhertzog.com/2011/11/17/people-behind-debian-mark-shuttleworth-ubuntus-founder/ +[3]:https://en.wikipedia.org/wiki/Freedom_Toaster \ No newline at end of file From 3d20813c1ba8ba15bd78c6e7010116fca05d6117 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jun 2016 06:10:30 +0800 Subject: [PATCH 008/471] =?UTF-8?q?=E5=B7=B2=E5=8F=91=E5=B8=83?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Raspberry Pi projects for the classroom.md | 98 ------------------- 1 file changed, 98 deletions(-) delete mode 100644 translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md diff --git a/translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md b/translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md deleted file mode 100644 index 06191d551c..0000000000 --- a/translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md +++ /dev/null @@ -1,98 +0,0 @@ -5 个适合课堂教学的树莓派项目 -================================================================================ -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png) - -图片来源 : opensource.com - -### 1. 
Minecraft Pi ### - -![](https://opensource.com/sites/default/files/lava.png) - -上图由树莓派基金会提供。遵循 [CC BY-SA 4.0.][1] 协议。 - -Minecraft(我的世界)几乎是世界上每个青少年都极其喜爱的游戏 —— 在吸引年轻人注意力方面,它也是最具创意的游戏之一。伴随着每一个树莓派的游戏版本不仅仅是一个关于创造性思维的建筑游戏,它还带有一个编程接口,允许使用者通过 Python 代码来与 Minecraft 世界进行互动。 - -对于教师来说,Minecraft: Pi 版本是一个鼓励学生们解决遇到的问题以及通过书写代码来执行特定任务的极好方式。你可以使用 Python API - 来建造一所房子,让它跟随你到任何地方;或在你所到之处修建一座桥梁;又或者是下一场岩溶雨;或在天空中显示温度;以及其他任何你能想像到的事物。 - -可在 "[Minecraft Pi 入门][2]" 中了解更多相关内容。 - -### 2. 反应游戏和交通指示灯 ### - -![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg) - -上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。 - -在树莓派上进行物理计算是非常容易的 —— 只需将 LED 灯 和按钮连接到 GPIO 针脚上,再加上少量的代码,你就可以点亮 LED 灯并通过按按钮来控制物体。一旦你知道来执行基本操作的代码,下一步就可以随你的想像那样去做了! - -假如你知道如何让一盏灯闪烁,你就可以让三盏灯闪烁。选出三盏交通灯颜色的 LED 灯,你就可以编程出交通灯闪烁序列。假如你知道如何使用一个按钮来触发一个事件,然后你就有一个人行横道了!同时,你还可以找到诸如 [PI-TRAFFIC][4]、[PI-STOP][5]、[Traffic HAT][6] 等预先构建好的交通灯插件。 - -这不总是关于代码的 —— 它还可以被用来作为一个的练习,用以理解真实世界中的系统是如何被设计出来的。计算思维在生活中的各种情景中都是一个有用的技能。 - -![](https://opensource.com/sites/default/files/reaction-game.png) - -上图由树莓派基金会提供。遵循 [CC BY-SA 4.0][1] 协议。 - -下面尝试将两个按钮和一个 LED 灯连接起来,来制作一个二人制反应游戏 —— 让灯在一段随机的时间中点亮,然后看谁能够先按到按钮! - -想了解更多的话,请查看 [GPIO 新手指南][7]。你所需要的尽在 [CamJam EduKit 1][8]。 - -### 3. Sense HAT 像素宠物 ### - -Astro Pi— 一个增强版的树莓派 —将于今年 12 月(注:应该是去年的事了。)问世,但你并没有错过让你的手玩弄硬件的机会。Sense HAT 是一个用在 Astro Pi 任务中的感应器主板插件,且任何人都可以买到。你可以用它来做数据收集、科学实验、游戏或者更多。 观看下面这个由树莓派的 Carrie Anne 带来的 Gurl Geek Diaries 录像来开始一段美妙的旅程吧 —— 通过在 Sense HAT 的显示器上展现出你自己设计的一个动物像素宠物: - -注:youtube 视频 - - -在 "[探索 Sense HAT][9]" 中可以学到更多。 - -### 4. 红外鸟箱 ### - -![](https://opensource.com/sites/default/files/ir-bird-box.png) -上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。 - -让全班所有同学都能够参与进来的一个好的练习是 —— 在一个鸟箱中沿着某些红外线放置一个树莓派和 NoIR 照相模块,这样你就可以在黑暗中观看,然后通过网络或在网络中你可以从树莓派那里获取到视频流。等鸟进入笼子,然后你就可以在不打扰到它们的情况下观察它们。 - -在这期间,你可以学习到所有关于红外和光谱的知识,以及如何用软件来调整摄像头的焦距和控制它。 - -在 "[制作一个红外鸟箱][10]" 中你可以学到更多。 - -### 5. 
机器人 ### - -![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg) - -上图由 Low Voltage Labs 提供。遵循 [CC BY-SA 4.0][1] 协议。 - -拥有一个树莓派,一些感应器和一个感应器控制电路板,你就可以构建你自己的机器人。你可以制作各种类型的机器人,从用透明胶带和自制底盘组合在一起的简易四驱车,一直到由游戏控制器驱动的具有自我意识,带有传感器和摄像头的金属马儿。 - -学习如何直接去控制单个的发动机,例如通过 RTK Motor Controller Board (£8/$12),或者尝试新的 CamJam robotics kit (£17/$25) ,它带有发动机、轮胎和一系列的感应器 — 这些都很有价值并很有学习的潜力。 - -另外,如何你喜欢更为骨灰级别的东西,可以尝试 PiBorg 的 [4Borg][11] (£99/$150) 或 [DiddyBorg][12] (£180/$273) 或者一干到底,享受他们的 DoodleBorg 金属版 (£250/$380) — 并构建一个他们声名远扬的 [DoodleBorg tank][13](很不幸的时,这个没有卖的) 的迷你版。 - -另外请参考 [CamJam robotics kit worksheets][14]。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom - -作者:[Ben Nuttall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/bennuttall -[1]:https://creativecommons.org/licenses/by-sa/4.0/ -[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi -[3]:http://lowvoltagelabs.com/ -[4]:http://lowvoltagelabs.com/products/pi-traffic/ -[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390 -[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html -[7]:http://pythonhosted.org/gpiozero/recipes/ -[8]:http://camjam.me/?page_id=236 -[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat -[10]:https://www.raspberrypi.org/learning/infrared-bird-box/ -[11]:https://www.piborg.org/4borg -[12]:https://www.piborg.org/diddyborg -[13]:https://www.piborg.org/doodleborg -[14]:http://camjam.me/?page_id=1035#worksheets From 43f4e88ce27b1d9f9b518309eb93d8fbd4af4cb6 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jun 2016 06:31:25 +0800 Subject: [PATCH 009/471] PUB:20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE @alim0x --- ...- NEW GENERATION OF LINUX APPS 
ARE HERE.md | 60 ++++++++----------- 1 file changed, 26 insertions(+), 34 deletions(-) rename {translated/tech => published}/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md (62%) diff --git a/translated/tech/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md b/published/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md similarity index 62% rename from translated/tech/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md rename to published/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md index cc212e7a4c..4f31e9ed63 100644 --- a/translated/tech/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md +++ b/published/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md @@ -5,19 +5,19 @@ ORB:新一代 Linux 应用 我们之前讨论过[在 Ubuntu 上离线安装应用][1]。我们现在要再次讨论它。 -[Orbital Apps][2] 给我们带来了新的软件包类型,**ORB**,它带有便携软件,交互式安装向导支持,以及离线使用的能力。 +[Orbital Apps][2] 给我们带来了一种新的软件包类型 **ORB**,它具有便携软件、交互式安装向导支持,以及离线使用的能力。 -便携软件很方便。主要是因为它们能够无需任何管理员权限直接运行,也能够带着所有的设置和数据随U盘存储。而交互式的安装向导也能让我们轻松地安装应用。 +便携软件很方便。主要是因为它们能够无需任何管理员权限直接运行,也能够带着所有的设置和数据随 U 盘存储。而交互式的安装向导也能让我们轻松地安装应用。 -### 开放可运行包 OPEN RUNNABLE BUNDLE (ORB) +### 开放式可运行的打包(OPEN RUNNABLE BUNDLE) (ORB) -ORB 是一个免费和开源的包格式,它和其它包格式在很多方面有所不同。ORB 的一些特性: +ORB 是一个自由开源的包格式,它和其它包格式在很多方面有所不同。ORB 的一些特性: -- **压缩**:所有的包经过压缩,使用 squashfs,体积最多减少 60%。 -- **便携模式**:如果一个便携 ORB 应用是从可移动设备运行的,它会把所有设置和数据存储在那之上。 +- **压缩**:所有的包都经过 squashfs 压缩,体积最多可减少 60%。 +- **便携模式**:如果一个便携 ORB 应用是在可移动设备上运行的,它会把所有设置和数据存储在那之上。 - **安全**:所有的 ORB 包使用 PGP/RSA 签名,通过 TLS 1.2 分发。 - **离线**:所有的依赖都打包进软件包,所以不再需要下载依赖。 -- **开放包**:ORB 包可以作为 ISO 镜像挂载。 +- **开放式软件包**:ORB 软件包可以作为 ISO 镜像挂载。 ### 种类 @@ -26,77 +26,69 @@ ORB 应用现在有两种类别: - 便携软件 - SuperDEB -#### 1. 便携 ORB 软件 +### 1. 
便携 ORB 软件 -便携 ORB 软件可以立即运行而不需要任何的事先安装。那意味着它不需要管理员权限和依赖!你可以直接从 Orbital Apps 网站下载下来就能使用。 +便携 ORB 软件可以立即运行而不需要任何的事先安装。这意味着它不需要管理员权限,也没有依赖!你可以直接从 Orbital Apps 网站下载下来就能使用。 -并且由于它支持便携模式,你可以将它拷贝到U盘携带。它所有的设置和数据会和它一起存储在U盘。只需将U盘连接到任何运行 Ubuntu 16.04 的机器上就行了。 +并且由于它支持便携模式,你可以将它拷贝到 U 盘携带。它所有的设置和数据会和它一起存储在 U 盘。只需将 U 盘连接到任何运行 Ubuntu 16.04 的机器上就行了。 -##### 可用便携软件 +#### 可用便携软件 目前有超过 35 个软件以便携包的形式提供,包括一些十分流行的软件,比如:[Deluge][3],[Firefox][4],[GIMP][5],[Libreoffice][6],[uGet][7] 以及 [VLC][8]。 完整的可用包列表可以查阅 [便携 ORB 软件列表][9]。 -##### 使用便携软件 +#### 使用便携软件 按照以下步骤使用便携 ORB 软件: - 从 Orbital Apps 网站下载想要的软件包。 -- 将其移动到想要的位置(本地磁盘/U盘)。 +- 将其移动到想要的位置(本地磁盘/U 盘)。 - 打开存储 ORB 包的目录。 -![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-1-1024x576.jpg) - + ![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-1-1024x576.jpg) - 打开 ORB 包的属性。 -![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-2.jpg) ->给 ORB 包添加运行权限 + ![给 ORB 包添加运行权限](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-2.jpg) - 在权限标签页添加运行权限。 - 双击打开它。 等待几秒,让它准备好运行。大功告成。 -#### 2. SuperDEB +### 2. 
SuperDEB 另一种类型的 ORB 软件是 SuperDEB。SuperDEB 很简单,交互式安装向导能够让软件安装过程顺利得多。如果你不喜欢从终端或软件中心安装软件,superDEB 就是你的菜。 最有趣的部分是你安装时不需要一个互联网连接,因为所有的依赖都由安装向导打包了。 -##### 可用的 SuperDEB +#### 可用的 SuperDEB 超过 60 款软件以 SuperDEB 的形式提供。其中一些流行的有:[Chromium][10],[Deluge][3],[Firefox][4],[GIMP][5],[Libreoffice][6],[uGet][7] 以及 [VLC][8]。 完整的可用 SuperDEB 列表,参阅 [SuperDEB 列表][11]。 -##### 使用 SuperDEB 安装向导 +#### 使用 SuperDEB 安装向导 - 从 Orbital Apps 网站下载需要的 SuperDEB。 - 像前面一样给它添加**运行权限**(属性 > 权限)。 - 双击 SuperDEB 安装向导并按下列说明操作: -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-1.png) ->点击 OK + ![点击 OK](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-1.png) -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-2.png) ->输入你的密码并继续 + ![输入你的密码并继续](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-2.png) -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-3.png) ->它会开始安装… + ![它会开始安装…](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-3.png) -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-4.png) ->一会儿他就完成了… + ![一会儿它就完成了…](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-4.png) - 完成安装之后,你就可以正常使用了。 ### ORB 软件兼容性 -从 Orbital Apps 可知,它们完全适配 Ubuntu 16.04 [64 bit]。 +从 Orbital Apps 可知,它们完全适配 Ubuntu 16.04 [64 位]。 ->阅读建议:[如何在 Ubuntu 获知你的是电脑 32 位还是 64 位的][12]。 - -至于其它发行版兼容性不受保证。但我们可以说,它在所有 Ubuntu 16.04 衍生版(UbuntuMATE,UbuntuGNOME,Lubuntu,Xubuntu 等)以及基于 Ubuntu 16.04 的发行版(比如即将到来的 Linux Mint 18)上都适用。我们现在还不清楚 Orbital Apps 是否有计划拓展它的支持到其它版本 Ubuntu 或 Linux 发行版上。 +至于其它发行版兼容性则不受保证。但我们可以说,它在所有 Ubuntu 16.04 衍生版(UbuntuMATE,UbuntuGNOME,Lubuntu,Xubuntu 等)以及基于 Ubuntu 16.04 的发行版(比如即将到来的 Linux Mint 18)上都适用。我们现在还不清楚 Orbital Apps 是否有计划拓展它的支持到其它版本 Ubuntu 或 Linux 发行版上。 如果你在你的系统上经常使用便携 ORB 软件,你可以考虑安装 ORB 启动器。它不是必需的,但是推荐安装它以获取更佳的体验。最简短的 ORB 启动器安装流程是打开终端输入以下命令: @@ -116,11 +108,11 @@ wget -O - https://www.orbital-apps.com/orb.sh | bash ---------------------------------- 
-via: http://itsfoss.com/orb-linux-apps/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 +via: http://itsfoss.com/orb-linux-apps/ 作者:[Munif Tanjim][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 01ae32507b400e0096ad041392ef4d0daedb63da Mon Sep 17 00:00:00 2001 From: bazz2 Date: Wed, 29 Jun 2016 07:10:36 +0800 Subject: [PATCH 010/471] [bazz2 translating]Building Serverless App with Docker --- sources/tech/20160621 Building Serverless App with Docker.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160621 Building Serverless App with Docker.md b/sources/tech/20160621 Building Serverless App with Docker.md index 9791ee5aa6..efedc85ffd 100644 --- a/sources/tech/20160621 Building Serverless App with Docker.md +++ b/sources/tech/20160621 Building Serverless App with Docker.md @@ -1,3 +1,4 @@ +[bazz222222222] BUILDING SERVERLESS APPS WITH DOCKER ====================================== From 24fb3deab47cbe2d233d6fa4c71f242adc06503e Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jun 2016 08:17:32 +0800 Subject: [PATCH 011/471] PUB:20160510 65% of companies are contributing to open source projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Cathon 翻译的不错,语言组织很好。 --- ...es are contributing to open source projects.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) rename {translated/talk => published}/20160510 65% of companies are contributing to open source projects.md (70%) diff --git a/translated/talk/20160510 65% of companies are contributing to open source projects.md b/published/20160510 65% of companies are contributing to open source projects.md similarity index 70% rename from translated/talk/20160510 65% of companies are contributing to open source projects.md 
rename to published/20160510 65% of companies are contributing to open source projects.md index 03290b0e97..46939a4e3a 100644 --- a/translated/talk/20160510 65% of companies are contributing to open source projects.md +++ b/published/20160510 65% of companies are contributing to open source projects.md @@ -3,11 +3,11 @@ ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_openseries.png?itok=s7lXChId) -今年 Black Duck 和 North Bridge 发布了第十届年度开源软件前景调查,来检验开源软件的发展趋势。今年这份调查的亮点在于,当前主流社会对开源软件的接受程度以及过去的十年中人们对开源软件态度的变化。 +今年 Black Duck 和 North Bridge 发布了第十届年度开源软件前景调查,来调查开源软件的发展趋势。今年这份调查的亮点在于,当前主流社会对开源软件的接受程度以及过去的十年中人们对开源软件态度的变化。 -[2016 年的开源软件前景调查][1],分析了来自约3400位专家的反馈。今年的调查中,开发者发表了他们的看法,包括了大约 70% 的参与者。数据显示,安全专家的参与人数呈指数级增长,增长超过 450% 。他们的参与表明,开源社区开始逐渐关注开源软件中存在的安全问题,以及当新的技术出现时确保它们的安全性。 +[2016 年的开源软件前景调查][1],分析了来自约3400位专家的反馈。今年的调查中,开发者发表了他们的看法,大约 70% 的参与者是开发者。数据显示,安全专家的参与人数呈指数级增长,增长超过 450% 。他们的参与表明,开源社区开始逐渐关注开源软件中存在的安全问题,以及当新的技术出现时确保它们的安全性。 -Black Duck 的年度 [开源新秀奖][2] 涉及到一些新出现的技术,如 Docker 和 Kontena 容器。容器技术这一年有了巨大的发展 ———— 76% 的受访者表示,他们的企业有一些使用容器技术的规划。而 59% 的受访者正准备使用容器技术完成大量的部署,从开发与测试,到内部与外部的生产环境部署。开发者社区已经把容器技术作为一种简单快速开发的方法。 +Black Duck 的[年度开源新秀奖][2] 涉及到一些新出现的技术,如容器方面的 Docker 和 Kontena。容器技术这一年有了巨大的发展 ———— 76% 的受访者表示,他们的企业有一些使用容器技术的规划。而 59% 的受访者正准备使用容器技术完成大量的部署,从开发与测试,到内部与外部的生产环境部署。开发者社区已经把容器技术作为一种简单快速开发的方法。 调查显示,几乎每个组织都有开发者致力于开源软件,这一点毫不惊讶。当像微软和苹果这样的大公司将它们的一些解决方案开源时,开发者就获得了更多的机会来参与开源项目。我非常希望这样的趋势会延续下去,让更多的软件开发者无论在工作中,还是工作之余都可以致力于开源项目。 @@ -30,11 +30,11 @@ Black Duck 的年度 [开源新秀奖][2] 涉及到一些新出现的技术, #### 安全和管理 -一流的开源安全与管理实践的发展,并没有跟上人们使用开源不断增长的步伐。尽管备受关注的开源项目近年来爆炸式地增长,调查结果却指出: +一流的开源安全与管理实践的发展,也没有跟上人们使用开源不断增长的步伐。尽管备受关注的开源项目近年来爆炸式地增长,调查结果却指出: * 50% 的企业在选择和批准开源代码这方面没有出台正式的政策。 * 47% 的企业没有正式的流程来跟踪开源代码,这就限制了它们对开源代码的了解,以及控制开源代码的能力。 -* 超过三分之一的企业没有用于识别,跟踪,和修复重大开源安全漏洞的流程。 +* 超过三分之一的企业没有用于识别、跟踪和修复重大开源安全漏洞的流程。 #### 不断增长的开源参与者 @@ -45,16 +45,17 @@ Black Duck 的年度 [开源新秀奖][2] 涉及到一些新出现的技术, * 约三分之一的企业有专门为开源项目设置的全职岗位。 * 59% 的受访者参与开源项目以获得竞争优势。 -Black Duck 和 North 
Bridge 从今年的调查中了解了很多,如安全,政策,商业模式等。我们很兴奋能够分享这些新发现。感谢我们的合作者,以及所有参与我们调查的受访者。这是一个伟大的十年,我很高兴我们可以肯定地说,开源的未来充满了无限可能。 +Black Duck 和 North Bridge 从今年的调查中了解到了很多,如安全,政策,商业模式等。我们很兴奋能够分享这些新发现。感谢我们的合作者,以及所有参与我们调查的受访者。这是一个伟大的十年,我很高兴我们可以肯定地说,开源的未来充满了无限可能。 想要了解更多内容,可以查看完整的[调查结果][3]。 + -------------------------------------------------------------------------------- via: https://opensource.com/business/16/5/2016-future-open-source-survey 作者:[Haidee LeClair][a] 译者:[Cathon](https://github.com/Cathon) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) [a]: https://opensource.com/users/blackduck2016 [1]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results From a221770db74f303f528d0f1d2a1b27813ace74d8 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Wed, 29 Jun 2016 08:56:30 +0800 Subject: [PATCH 012/471] [translated] Building Serverless App with Docker --- ...621 Building Serverless App with Docker.md | 100 ------------------ ...621 Building Serverless App with Docker.md | 98 +++++++++++++++++ 2 files changed, 98 insertions(+), 100 deletions(-) delete mode 100644 sources/tech/20160621 Building Serverless App with Docker.md create mode 100644 translated/tech/20160621 Building Serverless App with Docker.md diff --git a/sources/tech/20160621 Building Serverless App with Docker.md b/sources/tech/20160621 Building Serverless App with Docker.md deleted file mode 100644 index efedc85ffd..0000000000 --- a/sources/tech/20160621 Building Serverless App with Docker.md +++ /dev/null @@ -1,100 +0,0 @@ -[bazz222222222] -BUILDING SERVERLESS APPS WITH DOCKER -====================================== - -Every now and then, there are waves of technology that threaten to make the previous generation of technology obsolete. There has been a lot of talk about a technique called “serverless” for writing apps. The idea is to deploy your application as a series of functions, which are called on-demand when they need to be run. 
You don’t need to worry about managing servers, and these functions scale as much as you need, because they are called on-demand and run on a cluster. - -But serverless doesn’t mean there is no Docker – in fact, Docker is serverless. You can use Docker to containerize these functions, then run them on-demand on a Swarm. Serverless is a technique for building distributed apps and Docker is the perfect platform for building them on. - -### From servers to serverless - -So how might we write applications like this? Let’s take our example [a voting application consisting of 5 services][1]: - -![](https://blog.docker.com/wp-content/uploads/Picture1.png) - -This consists of: - -- Two web frontends -- A worker for processing votes in the background -- A message queue for processing votes -- A database - -The background processing of votes is a very easy target for conversion to a serverless architecture. In the voting app, we can run a bit of code like this to run the background task: - -``` -import dockerrun -client = dockerrun.from_env() -client.run("bfirsh/serverless-record-vote-task", [voter_id, vote], detach=True) -``` - -The worker and message queue can be replaced with a Docker container that is run on-demand on a Swarm, automatically scaling to demand. - -We can even eliminate the web frontends. We can replace them with Docker containers that serve up a single HTTP request, triggered by a lightweight HTTP server that spins up Docker containers for each HTTP request. The heavy lifting has now moved the long-running HTTP server to Docker containers that run on-demand, so they can automatically scale to handle load. - -Our new architecture looks something like this: - -![](https://blog.docker.com/wp-content/uploads/Picture2.png) - -The red blocks are the continually running services and the green blocks are Docker containers that are run on-demand. 
This application has fewer long-running services that need managing, and by its very nature scales up automatically in response to demand (up to the size of your Swarm!). - -### So what can we do with this? - -There are three useful techniques here which you can use in your apps: - -1. Run functions in your code as on-demand Docker containers -2. Use a Swarm to run these on a cluster -3. Run containers from containers, by passing a Docker API socket - - -With the combination of these techniques, this opens up loads of possibilities about how you can architect your applications. Running background work is a great example of something that works well, but a whole load of other things are possible too, for example: - -- Launching a container to serve user-facing HTTP requests is probably not practical due to the latency. However – you could write a load balancer which knew how to auto-scale its own web frontends by running containers on a Swarm. -- A MongoDB container which could introspect the structure of a Swarm and launch the correct shards and replicas. - -### What’s next - -We’ve got all these radically new tools and abstractions for building apps, and we’ve barely scratched the surface of what is possible with them. We’re still building applications like we have servers that stick around for a long time, not for the future where we have Swarms that can run code on-demand anywhere in your infrastructure. - -This hopefully gives you some ideas about what you can build, but we also need your help. We have all the fundamentals to be able to start building these applications, but its still in its infrancy – we need better tooling, libraries, example apps, documentation, and so on. - -[This GitHub repository has links off to tools, libraries, examples, and blog posts][3]. Head over there if you want to learn more, and please contribute any links you have there so we can start working together on this. - -Get involved, and happy hacking! 
- -### Learn More about Docker - -- New to Docker? Try our 10 min [online tutorial][4] -- Share images, automate builds, and more with [a free Docker Hub account][5] -- Read the Docker [1.12 Release Notes][6] -- Subscribe to [Docker Weekly][7] -- Sign up for upcoming [Docker Online Meetups][8] -- Attend upcoming [Docker Meetups][9] -- Watch [DockerCon EU 2015 videos][10] -- Start [contributing to Docker][11] - - --------------------------------------------------------------------------------- - -via: https://blog.docker.com/2016/06/building-serverless-apps-with-docker/ - -作者:[Ben Firshman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://blog.docker.com/author/bfirshman/ - -[1]: https://github.com/docker/example-voting-app -[3]: https://github.com/bfirsh/serverless-docker -[4]: https://docs.docker.com/engine/understanding-docker/ -[5]: https://hub.docker.com/ -[6]: https://docs.docker.com/release-notes/ -[7]: https://www.docker.com/subscribe_newsletter/ -[8]: http://www.meetup.com/Docker-Online-Meetup/ -[9]: https://www.docker.com/community/meetup-groups -[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv -[11]: https://docs.docker.com/contributing/contributing/ - - - diff --git a/translated/tech/20160621 Building Serverless App with Docker.md b/translated/tech/20160621 Building Serverless App with Docker.md new file mode 100644 index 0000000000..9af6c5d7aa --- /dev/null +++ b/translated/tech/20160621 Building Serverless App with Docker.md @@ -0,0 +1,98 @@ +用 docker 创建 serverless 应用 +====================================== + +当今世界会时不时地出现一波科技浪潮,将以前的技术拍死在海滩上。针对 serverless 应用的概念我们已经谈了很多,它是指将你的应用程序按功能来部署,这些功能在被用到时才会启动。你不用费心去管理服务器和程序规模,因为它们会在需要的时候在一个集群中启动并运行。 + +但是 serverless 并不意味着没有 Docker 什么事儿,事实上 Docker 就是 serverless 的。你可以使用 Docker 来容器化这些功能,然后在 Swarm 中按需求来运行它们。Serverless 是一项构建分布式应用的技术,而 Docker 
是它们完美的构建平台。
+
+### 从 servers 到 serverless
+
+那如何才能写一个 serverless 应用呢?来看一下我们的例子,[5个服务组成的投票系统][1]:
+
+![](https://blog.docker.com/wp-content/uploads/Picture1.png)
+
+投票系统由下面5个服务组成:
+
+- 两个 web 前端
+- 一个后台处理投票的进程
+- 一个计票的消息队列
+- 一个数据库
+
+后台处理投票的进程很容易转换成 serverless 架构,我们可以使用以下代码来实现:
+
+```
+import dockerrun
+client = dockerrun.from_env()
+client.run("bfirsh/serverless-record-vote-task", [voter_id, vote], detach=True)
+```
+
+这个投票处理进程和消息队列可以用运行在 Swarm 上的 Docker 容器来代替,并实现按需自动扩容。
+
+我们也可以用容器替换 web 前端,使用一个轻量级 HTTP 服务器来触发容器响应一个 HTTP 请求。Docker 容器代替长期运行的 HTTP 服务器来挑起响应请求的重担,这些容器可以自动扩容来支撑大访问量。
+
+新的架构就像这样:
+
+![](https://blog.docker.com/wp-content/uploads/Picture2.png)
+
+红色框内是持续运行的服务,绿色框内是按需启动的容器。这个架构提供更少的长期运行服务让你管理,并且可以自动扩容(最大容量由你的 Swarm 决定)。
+
+### 我们可以做点什么?
+
+你可以在你的应用中使用3种技术:
+
+1. 在 Docker 容器中按需运行代码。
+2. 使用 Swarm 来部署集群。
+3. 通过使用 Docker API 套接字在容器中运行容器。
+
+结合这3种技术,你可以有很多方法搭建你的应用架构。用这种方法来部署后台环境非常有效,而在另一些场景,也可以这么玩,比如说:
+
+- 由于存在延时,使用容器实现面向用户的 HTTP 请求可能不是很合适,但你可以写一个负载均衡器,使用 Swarm 来对自己的 web 前端进行自动扩容。
+- 实现一个 MongoDB 容器,可以自检 Swarm 并且启动正确的分片和副本(LCTT 译注:分片技术为大规模并行检索提供支持,副本技术则是为数据提供冗余)。
+
+### 下一步怎么做
+
+我们提供了这些前卫的工具和概念来构建应用,但并没有深入发掘它们的功能。我们的架构里还是存在长期运行的服务,将来我们需要使用 Swarm 来把所有服务都用按需扩容的方式实现。
+
+希望本文能在你搭建架构时给你一些启发,但我们还是需要你的帮助。我们提供了所有的基本工具,但它们还不是很完善,我们需要更多更好的工具、库、应用案例、文档以及其他资料。
+
+[我们在这里发布了工具、库和文档][3]。如果想了解更多,请移步到那里,另外请贡献一些链接给我们,这样我们就能一起在这方面继续努力了。
+
+玩得愉快。
+
+### 更多关于 Docker 的资料
+
+- New to Docker? 
Try our 10 min [online tutorial][4] +- Share images, automate builds, and more with [a free Docker Hub account][5] +- Read the Docker [1.12 Release Notes][6] +- Subscribe to [Docker Weekly][7] +- Sign up for upcoming [Docker Online Meetups][8] +- Attend upcoming [Docker Meetups][9] +- Watch [DockerCon EU 2015 videos][10] +- Start [contributing to Docker][11] + + +-------------------------------------------------------------------------------- + +via: https://blog.docker.com/2016/06/building-serverless-apps-with-docker/ + +作者:[Ben Firshman][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.docker.com/author/bfirshman/ + +[1]: https://github.com/docker/example-voting-app +[3]: https://github.com/bfirsh/serverless-docker +[4]: https://docs.docker.com/engine/understanding-docker/ +[5]: https://hub.docker.com/ +[6]: https://docs.docker.com/release-notes/ +[7]: https://www.docker.com/subscribe_newsletter/ +[8]: http://www.meetup.com/Docker-Online-Meetup/ +[9]: https://www.docker.com/community/meetup-groups +[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv +[11]: https://docs.docker.com/contributing/contributing/ + + + From 9d63106e2b0aceae38d4ad550ba0a66d206175d5 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 29 Jun 2016 09:44:46 +0800 Subject: [PATCH 013/471] =?UTF-8?q?20160629-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...o record your terminal session on Linux.md | 101 ++++++++++++++++++ 1 file changed, 101 insertions(+) create mode 100644 sources/tech/20160609 How to record your terminal session on Linux.md diff --git a/sources/tech/20160609 How to record your terminal session on Linux.md b/sources/tech/20160609 How to record your terminal session on Linux.md new file mode 100644 index 0000000000..787bafea6d --- 
/dev/null
+++ b/sources/tech/20160609 How to record your terminal session on Linux.md
@@ -0,0 +1,101 @@
+How to record your terminal session on Linux
+=================================================
+
+Recording a terminal session may be important in helping someone learn a process, sharing information in an understandable way, and also presenting a series of commands in a proper manner. Whatever the purpose, there are many times when copy-pasting text from the terminal won't be very helpful while capturing a video of the process is quite far-fetched and may not always be possible. In this quick guide, we will take a look at the easiest way to record and share a terminal session in .gif format.
+
+### Prerequisites
+
+If you just want to record your terminal sessions and be able to play the recording in your terminal, or share them with people who will use a terminal for playback, then the only tool that you'll need is called “ttyrec”. Ubuntu users may install it by entering the following command in a terminal:
+
+```
+sudo apt-get install ttyrec
+```
+
+If you want to produce a .gif file from the recording and be able to share it with people who don't use the terminal, publish it on websites, or simply keep a .gif handy for when you'll need it instead of written commands, you will have to install two additional packages. The first one is “imagemagick” which you can install with:
+
+```
+sudo apt-get install imagemagick
+```
+
+and the second one is “tty2gif” which can be downloaded from here. The latter has a dependency that can be satisfied with:
+`sudo apt-get install python-opster`
+
+### Capturing
+
+To start capturing the terminal session, all you need to do is simply start with “ttyrec” + enter. This will launch the real-time recording tool which will run in the background until we enter “exit” or we press “Ctrl+D”. By default, ttyrec creates a file named “ttyrecord” in the working directory of the terminal session, which by default is “Home”.
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg)
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg)
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg)
+
+### Playing
+
+Playing the file is as simple as opening a terminal on the destination of the “ttyrecord” file and using the “ttyplay” command followed by the name of the recording (in our case it's ttyrecord but you may change this into whatever you want).
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg)
+
+This will result in the playback of the recorded session, in real-time, and with typing corrections included (all actions are recorded). This will look like a completely normal automated terminal session, but the commands and their apparent execution are obviously not really applied to the system, as they are only reproduced as a recording.
+
+It is also important to note that the playback of the terminal session recording is completely controllable. You may double the playback speed by hitting the “+” button, slow it down with the “-” button, pause it with “0”, and resume it in normal speed with “1”.
+
+### Converting into a .gif
+
+For reasons of convenience, many of us would like to convert the recorded session into a .gif file, and that is very easy to do. Here's how:
+
+First, untar the downloaded “tty2gif.tar.bz2” by opening a terminal in the download location and entering the following command:
+
+```
+tar xvfj tty2gif.tar.bz2
+```
+
+Next, copy the resulting “tty2gif.py” file onto the destination of the “ttyrecord” file (or whatever the name you've specified is), and then open a terminal on that destination and type the command:
+
+```
+python tty2gif.py typing ttyrecord
+```
+
+If you are getting errors in this step, check that you have installed the “python-opster” package. 
If errors persist, give the following two commands consecutively:
+
+```
+sudo apt-get install xdotool
+export WINDOWID=$(xdotool getwindowfocus)
+```
+
+then repeat the “python tty2gif.py typing ttyrecord” command and you should now see a number of gif files that were created in the location of the “ttyrecord” file.
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg)
+
+The next step is to unify all these gifs that correspond to individual terminal session actions into one final .gif file using the imagemagick utility. To do this, open a terminal on the destination and insert the following command:
+
+```
+convert -delay 25 -loop 0 *.gif example.gif
+```
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg)
+
+You may name the resulting file as you like (I used “example.gif”), and you may change the delay and loop settings as needed. Here is the resulting file of this quick tutorial:
+
+![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif)
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/
+
+作者:[Bill Toulas][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/howtoforgecom
+
+
+
+
+
+
+
+
+

From dd518199f8bb4eec6cdfd1c473a6b3011820b36c Mon Sep 17 00:00:00 2001
From: Chunyang Wen
Date: Wed, 29 Jun 2016 09:56:39 +0800
Subject: [PATCH 014/471] chunyang-wen translating

---
 ...art 4 - How to Use Comparison Operators with Awk in Linux.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md b/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md
index 7f3a46360f..1d0ef007af 100644
---
a/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md +++ b/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md @@ -1,3 +1,5 @@ +chunyang-wen translating + How to Use Comparison Operators with Awk in Linux =================================================== From b3bdf02802b0a892b687d8101b23706e5d1d88e2 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Wed, 29 Jun 2016 10:21:02 +0800 Subject: [PATCH 015/471] Update Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ... to Filter Text or Strings Using Pattern Specific Actions.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md b/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md index f6aeb69e57..acdfe66167 100644 --- a/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md +++ b/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md @@ -1,3 +1,5 @@ +FSSlc translating + How to Use Awk to Filter Text or Strings Using Pattern Specific Actions ========================================================================= From e0f40f33d812b7909173bf8a6bb9e84e5a8218dc Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Wed, 29 Jun 2016 11:09:16 +0800 Subject: [PATCH 016/471] =?UTF-8?q?Update=20Part=206=20-=20How=20to=20Use?= =?UTF-8?q?=20=E2=80=98next=E2=80=99=20Command=20with=20Awk=20in=20Linux.m?= =?UTF-8?q?d?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/awk/Part 6 - How to Use ‘next’ Command with 
Awk in Linux.md b/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md index f272dabcce..5ae8cca086 100644 --- a/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md +++ b/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md @@ -1,4 +1,4 @@ -How to Use ‘next’ Command with Awk in Linux +[translating]How to Use ‘next’ Command with Awk in Linux ============================================= ![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png) From e3803263a8902b17072cdb09ac545ca2c1a6216c Mon Sep 17 00:00:00 2001 From: wwy Date: Wed, 29 Jun 2016 12:06:59 +0800 Subject: [PATCH 017/471] [Translated] Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files (#4120) * finish translation * check again * remove source --- ...sions to Filter Text or String in Files.md | 214 ------------------ ...sions to Filter Text or String in Files.md | 212 +++++++++++++++++ 2 files changed, 212 insertions(+), 214 deletions(-) delete mode 100644 sources/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md create mode 100644 translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md diff --git a/sources/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md b/sources/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md deleted file mode 100644 index 6871673536..0000000000 --- a/sources/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md +++ /dev/null @@ -1,214 +0,0 @@ -translating by wwy-hust - -How to Use Awk and Regular Expressions to Filter Text or String in Files -============================================================================= - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Linux-Awk-Command-Examples.png) - -When we run certain commands 
in Unix/Linux to read or edit text from a string or file, we most times try to filter output to a given section of interest. This is where using regular expressions comes in handy. - -### What are Regular Expressions? - -A regular expression can be defined as a strings that represent several sequence of characters. One of the most important things about regular expressions is that they allow you to filter the output of a command or file, edit a section of a text or configuration file and so on. - -### Features of Regular Expression - -Regular expressions are made of: - -- Ordinary characters such as space, underscore(_), A-Z, a-z, 0-9. -- Meta characters that are expanded to ordinary characters, they include: - - `(.)` it matches any single character except a newline. - - `(*)` it matches zero or more existences of the immediate character preceding it. - - `[ character(s) ]` it matches any one of the characters specified in character(s), one can also use a hyphen (-) to mean a range of characters such as [a-f], [1-5], and so on. - - `^` it matches the beginning of a line in a file. - - `$` matches the end of line in a file. - - `\` it is an escape character. - -In order to filter text, one has to use a text filtering tool such as awk. You can think of awk as a programming language of its own. But for the scope of this guide to using awk, we shall cover it as a simple command line filtering tool. - -The general syntax of awk is: - -``` -# awk 'script' filename -``` - -Where `'script'` is a set of commands that are understood by awk and are execute on file, filename. - -It works by reading a given line in the file, makes a copy of the line and then executes the script on the line. This is repeated on all the lines in the file. - -The `'script'` is in the form `'/pattern/ action'` where pattern is a regular expression and the action is what awk will do when it finds the given pattern in a line. 
- -### How to Use Awk Filtering Tool in Linux - -In the following examples, we shall focus on the meta characters that we discussed above under the features of awk. - -#### A simple example of using awk: - -The example below prints all the lines in the file /etc/hosts since no pattern is given. - -``` -# awk '//{print}'/etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Command-Example.gif) ->Awk Prints all Lines in a File - -#### Use Awk with Pattern: - -I the example below, a pattern `localhost` has been given, so awk will match line having localhost in the `/etc/hosts` file. - -``` -# awk '/localhost/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-Command-with-Pattern.gif) ->Awk Print Given Matching Line in a File - -#### Using Awk with (.) wild card in a Pattern - -The `(.)` will match strings containing loc, localhost, localnet in the example below. - -That is to say *** l some_single_character c ***. - -``` -# awk '/l.c/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Wild-Cards.gif) ->Use Awk to Print Matching Strings in a File - -#### Using Awk with (*) Character in a Pattern - -It will match strings containing localhost, localnet, lines, capable, as in the example below: - -``` -# awk '/l*c/{print}' /etc/localhost -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Match-Strings-in-File.gif) ->Use Awk to Match Strings in File - -You will also realize that `(*)` tries to a get you the longest match possible it can detect. - -Let look at a case that demonstrates this, take the regular expression `t*t` which means match strings that start with letter `t` and end with `t` in the line below: - -``` -this is tecmint, where you get the best good tutorials, how to's, guides, tecmint. 
-``` - -You will get the following possibilities when you use the pattern `/t*t/`: - -``` -this is t -this is tecmint -this is tecmint, where you get t -this is tecmint, where you get the best good t -this is tecmint, where you get the best good tutorials, how t -this is tecmint, where you get the best good tutorials, how tos, guides, t -this is tecmint, where you get the best good tutorials, how tos, guides, tecmint -``` - -And `(*)` in `/t*t/` wild card character allows awk to choose the the last option: - -``` -this is tecmint, where you get the best good tutorials, how to's, guides, tecmint -``` - -#### Using Awk with set [ character(s) ] - -Take for example the set [al1], here awk will match all strings containing character a or l or 1 in a line in the file /etc/hosts. - -``` -# awk '/[al1]/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matching-Character.gif) ->Use-Awk to Print Matching Character in File - -The next example matches strings starting with either `K` or `k` followed by `T`: - -``` -# awk '/[Kk]T/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matched-String-in-File.gif) ->Use Awk to Print Matched String in File - -#### Specifying Characters in a Range - -Understand characters with awk: - -- `[0-9]` means a single number -- `[a-z]` means match a single lower case letter -- `[A-Z]` means match a single upper case letter -- `[a-zA-Z]` means match a single letter -- `[a-zA-Z 0-9]` means match a single letter or number - -Lets look at an example below: - -``` -# awk '/[0-9]/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-To-Print-Matching-Numbers-in-File.gif) ->Use Awk To Print Matching Numbers in File - -All the line from the file /etc/hosts contain at least a single number [0-9] in the above example. 
- -#### Use Awk with (^) Meta Character - -It matches all the lines that start with the pattern provided as in the example below: - -``` -# awk '/^fe/{print}' /etc/hosts -# awk '/^ff/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-All-Matching-Lines-with-Pattern.gif) ->Use Awk to Print All Matching Lines with Pattern - -#### Use Awk with ($) Meta Character - -It matches all the lines that end with the pattern provided: - -``` -# awk '/ab$/{print}' /etc/hosts -# awk '/ost$/{print}' /etc/hosts -# awk '/rs$/{print}' /etc/hosts -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Given-Pattern-String.gif) ->Use Awk to Print Given Pattern String - -#### Use Awk with (\) Escape Character - -It allows you to take the character following it as a literal that is to say consider it just as it is. - -In the example below, the first command prints out all line in the file, the second command prints out nothing because I want to match a line that has $25.00, but no escape character is used. - -The third command is correct since a an escape character has been used to read $ as it is. - -``` -# awk '//{print}' deals.txt -# awk '/$25.00/{print}' deals.txt -# awk '/\$25.00/{print}' deals.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Escape-Character.gif) ->Use Awk with Escape Character - -### Summary - -That is not all with the awk command line filtering tool, the examples above a the basic operations of awk. In the next parts we shall be advancing on how to use complex features of awk. Thanks for reading through and for any additions or clarifications, post a comment in the comments section. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md b/translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md new file mode 100644 index 0000000000..791c349f56 --- /dev/null +++ b/translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md @@ -0,0 +1,212 @@ +如何使用Awk和正则表达式过滤文本或文件中的字符串 +============================================================================= + +![](http://www.tecmint.com/wp-content/uploads/2016/04/Linux-Awk-Command-Examples.png) + +当我们在 Unix/Linux 下使用特定的命令从字符串或文件中读取或编辑文本时,我们经常会尝试过滤输出以得到感兴趣的部分。这时正则表达式就派上用场了。 + +### 什么是正则表达式? 
+
+正则表达式可以定义为代表若干个字符序列的字符串。它最重要的功能就是它允许你过滤一条命令或一个文件的输出,编辑文本或配置等文件的一部分。
+
+### 正则表达式的特点
+
+正则表达式由以下内容组合而成:
+
+- 普通的字符,例如空格、下划线、A-Z、a-z、0-9。
+- 可以扩展为普通字符的元字符,它们包括:
+  - `(.)` 它匹配除了换行符外的任何单个字符。
+  - `(*)` 它匹配紧接其前的那个字符的零次或多次出现。
+  - `[ character(s) ]` 它匹配任何由 character(s) 指定的一个字符,你可以使用连字符(-)代表字符区间,例如 [a-f]、[1-5]等。
+  - `^` 它匹配文件中一行的开头。
+  - `$` 它匹配文件中一行的结尾。
+  - `\` 这是一个转义字符。
+
+你必须使用类似 awk 这样的文本过滤工具来过滤文本。你还可以把 awk 当作一门独立的编程语言。但由于这个指南的适用范围是关于使用 awk 的,我会按照一个简单的命令行过滤工具来介绍它。
+
+awk 的一般语法如下:
+
+```
+# awk 'script' filename
+```
+
+此处 `'script'` 是一个由 awk 使用并应用于 filename 的命令集合。
+
+它通过读取文件中的给定的一行,复制该行的内容并在该行上执行脚本的方式工作。这个过程会在该文件中的所有行上重复。
+
+该脚本 `'script'` 中内容的格式是 `'/pattern/ action'`,其中 `pattern` 是一个正则表达式,而 `action` 是当 awk 在该行中找到此模式时应当执行的动作。
+
+### 如何在 Linux 中使用 Awk 过滤工具
+
+在下面的例子中,我们将聚焦于之前讨论过的元字符。
+
+#### 一个使用 awk 的简单示例:
+
+下面的例子打印文件 /etc/hosts 中的所有行,因为没有指定任何的模式。
+
+```
+# awk '//{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Command-Example.gif)
+>Awk 打印文件中的所有行
+
+#### 结合模式使用 Awk
+
+在下面的示例中,指定了模式 `localhost`,因此 awk 将匹配文件 `/etc/hosts` 中有 `localhost` 的那些行。
+
+```
+# awk '/localhost/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-Command-with-Pattern.gif)
+>Awk 打印文件中匹配模式的行
+
+#### 在 Awk 模式中使用通配符 (.)
+
+在下面的例子中,符号 `(.)` 将匹配包含 loc、localhost、localnet 的字符串。
+
+这里的意思是匹配 *** l 一些单个字符 c ***。
+
+```
+# awk '/l.c/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Wild-Cards.gif)
+>使用 Awk 打印文件中匹配模式的字符串
+
+#### 在 Awk 模式中使用字符 (*)
+
+在下面的例子中,将匹配包含 localhost、localnet、lines、capable 的字符串。
+
+```
+# awk '/l*c/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Match-Strings-in-File.gif)
+>使用 Awk 匹配文件中的字符串
+
+你可能也意识到 `(*)` 将会尝试匹配它可能检测到的最长的匹配。
+
+让我们看一看可以证明这一点的例子,正则表达式 `t*t` 的意思是在下面的行中匹配以 `t` 开始和 `t` 结束的字符串:
+
+```
+this is tecmint, where you get the best good tutorials, how to's, guides, tecmint.
+```
+
+当你使用模式 `/t*t/` 时,会得到如下可能的结果:
+
+```
+this is t
+this is tecmint
+this is tecmint, where you get t
+this is tecmint, where you get the best good t
+this is tecmint, where you get the best good tutorials, how t
+this is tecmint, where you get the best good tutorials, how tos, guides, t
+this is tecmint, where you get the best good tutorials, how tos, guides, tecmint
+```
+
+在 `/t*t/` 中的通配符 `(*)` 将使得 awk 选择匹配的最后一项:
+
+```
+this is tecmint, where you get the best good tutorials, how to's, guides, tecmint
+```
+
+#### 结合集合 [ character(s) ] 使用 Awk
+
+以集合 [al1] 为例,awk 将匹配文件 /etc/hosts 中所有包含字符 a 或 l 或 1 的字符串。
+
+```
+# awk '/[al1]/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matching-Character.gif)
+>使用 Awk 打印文件中匹配的字符
+
+下一个例子匹配以 `K` 或 `k` 开头,后面跟着一个 `T` 的字符串:
+
+```
+# awk '/[Kk]T/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matched-String-in-File.gif)
+>使用 Awk 打印文件中匹配的字符串
+
+#### 以范围的方式指定字符
+
+awk 所能理解的字符:
+
+- `[0-9]` 代表一个单独的数字
+- `[a-z]` 代表一个单独的小写字母
+- `[A-Z]` 代表一个单独的大写字母
+- `[a-zA-Z]` 代表一个单独的字母
+- `[a-zA-Z 0-9]` 代表一个单独的字母或数字
+
+让我们看看下面的例子:
+
+```
+# awk '/[0-9]/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-To-Print-Matching-Numbers-in-File.gif)
+>使用 Awk 打印文件中匹配的数字
+
+在上面的例子中,文件 /etc/hosts 中的所有行都至少包含一个单独的数字 [0-9]。
+
+#### 结合元字符 (\^) 使用 Awk
+
+在下面的例子中,它匹配所有以给定模式开头的行:
+
+```
+# awk '/^fe/{print}' /etc/hosts
+# awk '/^ff/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-All-Matching-Lines-with-Pattern.gif)
+>使用 Awk 打印与模式匹配的行
+
+#### 结合元字符 ($) 使用 Awk
+
+它将匹配所有以给定模式结尾的行:
+
+```
+# awk '/ab$/{print}' /etc/hosts
+# awk '/ost$/{print}' /etc/hosts
+# awk '/rs$/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Given-Pattern-String.gif)
+>使用 Awk 打印与模式匹配的字符串
+
+#### 结合转义字符 (\\) 使用 Awk
+
+它允许你将该转义字符后面的字符作为文字,即理解为其字面的意思。
+
+在下面的例子中,第一个命令打印出文件中的所有行,第二个命令中我想匹配具有 $25.00 的一行,但我并未使用转义字符,因而没有打印出任何内容。
+
+第三个命令是正确的,因为这里使用了一个转义字符来转义 $,以将其识别为字面上的 '$'(而非元字符)。
+
+```
+# awk '//{print}' deals.txt
+# awk '/$25.00/{print}' deals.txt
+# awk '/\$25.00/{print}' deals.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Escape-Character.gif)
+>结合转义字符使用 Awk
+
+### 总结
+
+以上内容并不是 Awk 命令用作过滤工具的全部,上述的示例均是 awk 的基础操作。在下面的章节中,我将进一步介绍如何使用 awk 的高级功能。感谢您的阅读,请在评论区贴出您的评论。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/
+
+作者:[Aaron Kili][a]
+译者:[wwy-hust](https://github.com/wwy-hust)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
From 829ed85c8e1e7b09b840417f4959eebd9184e9c3 Mon Sep 17 00:00:00 2001
From: kokialoves <498497353@qq.com>
Date: Wed, 29 Jun 2016 15:23:36 +0800
Subject: [PATCH 018/471] =?UTF-8?q?Delete=20Part=206=20-=20How=20to=20Use?=
 =?UTF-8?q?=20=E2=80=98next=E2=80=99=20Command=20with=20Awk=20in=20Linux.m?=
 =?UTF-8?q?d?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...to Use ‘next’ Command with Awk in Linux.md | 76 -------------------
 1 file changed, 76 deletions(-)
 delete mode 100644 sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md

diff --git a/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md b/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md
deleted file mode 100644
index f272dabcce..0000000000
--- a/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md
+++ /dev/null
@@ -1,76 +0,0 @@
-[translating]How to Use ‘next’ Command with Awk in Linux
-=============================================

-![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png)

-In
this sixth part of Awk series, we shall look at using `next` command, which tells Awk to skip all remaining patterns and expressions that you have provided, but instead read the next input line. - -The `next` command helps you to prevent executing what I would refer to as time-wasting steps in a command execution. - -To understand how it works, let us consider a file called food_list.txt that looks like this: - -``` -Food List Items -No Item_Name Price Quantity -1 Mangoes $3.45 5 -2 Apples $2.45 25 -3 Pineapples $4.45 55 -4 Tomatoes $3.45 25 -5 Onions $1.45 15 -6 Bananas $3.45 30 -``` - -Consider running the following command that will flag food items whose quantity is less than or equal to 20 with a `(*)` sign at the end of each line: - -``` -# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt - -No Item_Name Price Quantity -1 Mangoes $3.45 5 * -2 Apples $2.45 25 -3 Pineapples $4.45 55 -4 Tomatoes $3.45 25 -5 Onions $1.45 15 * -6 Bananas $3.45 30 -``` - -The command above actually works as follows: - -- First, it checks whether the quantity, fourth field of each input line is less than or equal to 20, if a value meets that condition, it is printed and flagged with the `(*)` sign at the end using expression one: `$4 <= 20` -- Secondly, it checks if the fourth field of each input line is greater than 20, and if a line meets the condition it gets printed using expression two: `$4 > 20` - -But there is one problem here, when the first expression is executed, a line that we want to flag is printed using: `{ printf "%s\t%s\n", $0,"**" ; }` and then in the same step, the second expression is also checked which becomes a time wasting factor. - -So there is no need to execute the second expression, `$4 > 20` again after printing already flagged lines that have been printed using the first expression. 
- -To deal with this problem, you have to use the `next` command as follows: - -``` -# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt - -No Item_Name Price Quantity -1 Mangoes $3.45 5 * -2 Apples $2.45 25 -3 Pineapples $4.45 55 -4 Tomatoes $3.45 25 -5 Onions $1.45 15 * -6 Bananas $3.45 30 -``` - -After a single input line is printed using `$4 <= 20` `{ printf "%s\t%s\n", $0,"*" ; next ; }`, the `next` command included will help skip the second expression `$4 > 20` `{ print $0 ;}`, so execution goes to the next input line without having to waste time on checking whether the quantity is greater than 20. - -The next command is very important is writing efficient commands and where necessary, you can always use to speed up the execution of a script. Prepare for the next part of the series where we shall look at using standard input (STDIN) as input for Awk. - -Hope you find this how to guide helpful and you can as always put your thoughts in writing by leaving a comment in the comment section below. 
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/use-next-command-with-awk-in-linux/
-
-作者:[Aaron Kili][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.tecmint.com/author/aaronkili/

From 15c2d860155ee1ecba780f658eb6323c49668d7a Mon Sep 17 00:00:00 2001
From: kokialoves <498497353@qq.com>
Date: Wed, 29 Jun 2016 15:26:36 +0800
Subject: [PATCH 019/471] Add files via upload


From 13b816e625fe9853a1562b4c01e2dfca49d75ca8 Mon Sep 17 00:00:00 2001
From: kokialoves <498497353@qq.com>
Date: Wed, 29 Jun 2016 15:37:21 +0800
Subject: [PATCH 020/471] =?UTF-8?q?Create=20Part=206=20-=20How=20to=20Use?=
 =?UTF-8?q?=20=E2=80=98next=E2=80=99=20Command=20with=20Awk=20in=20Linux.m?=
 =?UTF-8?q?d?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...to Use ‘next’ Command with Awk in Linux.md | 76 +++++++++++++++++++
 1 file changed, 76 insertions(+)
 create mode 100644 translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md

diff --git a/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md b/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md
new file mode 100644
index 0000000000..8ead60bd44
--- /dev/null
+++ b/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md
@@ -0,0 +1,76 @@
+
+如何使用AWK的‘next’命令
+=============================================
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png)
+
+在 Awk 系列的第六章, 我们来看一下 `next` 命令, 它告诉 Awk 跳过你所提供的所有剩余的模式和表达式, 直接读取下一个输入行.
+`next` 命令可以帮助你避免在执行命令的过程中运行那些浪费时间的多余步骤. 
+ +要明白它是如何工作的, 让我们来分析一下food_list.txt它看起来像这样 : + +``` +Food List Items +No Item_Name Price Quantity +1 Mangoes $3.45 5 +2 Apples $2.45 25 +3 Pineapples $4.45 55 +4 Tomatoes $3.45 25 +5 Onions $1.45 15 +6 Bananas $3.45 30 +``` + +运行下面的命令,它将在每个食物数量小于或者等于20的行后面标一个星号: + +``` +# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt + +No Item_Name Price Quantity +1 Mangoes $3.45 5 * +2 Apples $2.45 25 +3 Pineapples $4.45 55 +4 Tomatoes $3.45 25 +5 Onions $1.45 15 * +6 Bananas $3.45 30 +``` + +上面的命令实际运行如下: + +- 首先, 它用`$4 <= 20`表达式检查每个输入行的第四列是否小于或者等于20,如果满足条件, 它将在末尾打一个星号 `(*)` . +- 接着, 它用`$4 > 20`表达式检查每个输入行的第四列是否大于20,如果满足条件,显示出来. + +但是这里有一个问题, 当第一个表达式用`{ printf "%s\t%s\n", $0,"**" ; }`命令进行标注的时候在同样的步骤第二个表达式也进行了判断这样就浪费了时间. + +因此当我们已经用第一个表达式打印标志行的时候就不在需要用第二个表达式`$4 > 20`再次打印. + +要处理这个问题, 我们需要用到`next` 命令: + +``` +# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt + +No Item_Name Price Quantity +1 Mangoes $3.45 5 * +2 Apples $2.45 25 +3 Pineapples $4.45 55 +4 Tomatoes $3.45 25 +5 Onions $1.45 15 * +6 Bananas $3.45 30 +``` + +当输入行用`$4 <= 20` `{ printf "%s\t%s\n", $0,"*" ; next ; }`命令打印以后,`next`命令 将跳过第二个`$4 > 20` `{ print $0 ;}`表达式, 继续判断下一个输入行,而不是浪费时间继续判断一下是不是当前输入行还大于20. + +next命令在编写高效的命令脚本时候是非常重要的, 它可以很大的提高脚本速度. 下面我们准备来学习Awk的下一个系列了. + +希望这篇文章对你有帮助,你可以给我们留言. 
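下面是一个可以直接复制运行的完整演示(假设系统中已装有 awk;/tmp/food_list.txt 只是演示用的临时文件, 数据做了简化):

```shell
# 先生成一份简化的示例数据(演示用的临时文件)
cat > /tmp/food_list.txt <<'EOF'
No Item_Name Price Quantity
1 Mangoes $3.45 5
2 Apples $2.45 25
3 Onions $1.45 15
EOF

# 第一条规则匹配后执行 next, 直接处理下一个输入行;
# 未匹配第一条规则的行则由后面的默认规则原样打印
awk '$4 <= 20 { printf "%s\t%s\n", $0, "*"; next } { print }' /tmp/food_list.txt
```

这里第二条规则写成了捕获所有剩余行的 `{ print }`, 它与原文中的 `$4 > 20 { print $0 }` 效果相同, 因为凡是 `$4 <= 20` 的行都已经被 `next` 跳过, 不会再走到第二条规则.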
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/use-next-command-with-awk-in-linux/ + +作者:[Aaron Kili][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 97bf6bf4c369dffe7d241e6b4e84aff02c4005fc Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Wed, 29 Jun 2016 16:11:16 +0800 Subject: [PATCH 021/471] merge (#4124) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Delete Part 6 - How to Use ‘next’ Command with Awk in Linux.md * Add files via upload * Create Part 6 - How to Use ‘next’ Command with Awk in Linux.md --- ...to Use ‘next’ Command with Awk in Linux.md | 76 ------------------- ...to Use ‘next’ Command with Awk in Linux.md | 76 +++++++++++++++++++ 2 files changed, 76 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md create mode 100644 translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md diff --git a/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md b/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md deleted file mode 100644 index 5ae8cca086..0000000000 --- a/sources/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md +++ /dev/null @@ -1,76 +0,0 @@ -[translating]How to Use ‘next’ Command with Awk in Linux -============================================= - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png) - -In this sixth part of Awk series, we shall look at using `next` command, which tells Awk to skip all remaining patterns and expressions that you have provided, but instead read the next input line. 
- -The `next` command helps you to prevent executing what I would refer to as time-wasting steps in a command execution. - -To understand how it works, let us consider a file called food_list.txt that looks like this: - -``` -Food List Items -No Item_Name Price Quantity -1 Mangoes $3.45 5 -2 Apples $2.45 25 -3 Pineapples $4.45 55 -4 Tomatoes $3.45 25 -5 Onions $1.45 15 -6 Bananas $3.45 30 -``` - -Consider running the following command that will flag food items whose quantity is less than or equal to 20 with a `(*)` sign at the end of each line: - -``` -# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt - -No Item_Name Price Quantity -1 Mangoes $3.45 5 * -2 Apples $2.45 25 -3 Pineapples $4.45 55 -4 Tomatoes $3.45 25 -5 Onions $1.45 15 * -6 Bananas $3.45 30 -``` - -The command above actually works as follows: - -- First, it checks whether the quantity, fourth field of each input line is less than or equal to 20, if a value meets that condition, it is printed and flagged with the `(*)` sign at the end using expression one: `$4 <= 20` -- Secondly, it checks if the fourth field of each input line is greater than 20, and if a line meets the condition it gets printed using expression two: `$4 > 20` - -But there is one problem here, when the first expression is executed, a line that we want to flag is printed using: `{ printf "%s\t%s\n", $0,"**" ; }` and then in the same step, the second expression is also checked which becomes a time wasting factor. - -So there is no need to execute the second expression, `$4 > 20` again after printing already flagged lines that have been printed using the first expression. 
- -To deal with this problem, you have to use the `next` command as follows: - -``` -# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt - -No Item_Name Price Quantity -1 Mangoes $3.45 5 * -2 Apples $2.45 25 -3 Pineapples $4.45 55 -4 Tomatoes $3.45 25 -5 Onions $1.45 15 * -6 Bananas $3.45 30 -``` - -After a single input line is printed using `$4 <= 20` `{ printf "%s\t%s\n", $0,"*" ; next ; }`, the `next` command included will help skip the second expression `$4 > 20` `{ print $0 ;}`, so execution goes to the next input line without having to waste time on checking whether the quantity is greater than 20. - -The next command is very important is writing efficient commands and where necessary, you can always use to speed up the execution of a script. Prepare for the next part of the series where we shall look at using standard input (STDIN) as input for Awk. - -Hope you find this how to guide helpful and you can as always put your thoughts in writing by leaving a comment in the comment section below. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/use-next-command-with-awk-in-linux/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md b/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md new file mode 100644 index 0000000000..8ead60bd44 --- /dev/null +++ b/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md @@ -0,0 +1,76 @@ + +如何使用AWK的‘next’命令 +============================================= + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png) + +在Awk 系列的第六章, 我们来看一下`next`命令 ,它告诉 Awk 跳过你所提供的表达式而是读取下一个输入行. +`next` 命令帮助你阻止运行多余的步骤. + +要明白它是如何工作的, 让我们来分析一下food_list.txt它看起来像这样 : + +``` +Food List Items +No Item_Name Price Quantity +1 Mangoes $3.45 5 +2 Apples $2.45 25 +3 Pineapples $4.45 55 +4 Tomatoes $3.45 25 +5 Onions $1.45 15 +6 Bananas $3.45 30 +``` + +运行下面的命令,它将在每个食物数量小于或者等于20的行后面标一个星号: + +``` +# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt + +No Item_Name Price Quantity +1 Mangoes $3.45 5 * +2 Apples $2.45 25 +3 Pineapples $4.45 55 +4 Tomatoes $3.45 25 +5 Onions $1.45 15 * +6 Bananas $3.45 30 +``` + +上面的命令实际运行如下: + +- 首先, 它用`$4 <= 20`表达式检查每个输入行的第四列是否小于或者等于20,如果满足条件, 它将在末尾打一个星号 `(*)` . +- 接着, 它用`$4 > 20`表达式检查每个输入行的第四列是否大于20,如果满足条件,显示出来. + +但是这里有一个问题, 当第一个表达式用`{ printf "%s\t%s\n", $0,"**" ; }`命令进行标注的时候在同样的步骤第二个表达式也进行了判断这样就浪费了时间. + +因此当我们已经用第一个表达式打印标志行的时候就不在需要用第二个表达式`$4 > 20`再次打印. 
+ +要处理这个问题, 我们需要用到`next` 命令: + +``` +# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt + +No Item_Name Price Quantity +1 Mangoes $3.45 5 * +2 Apples $2.45 25 +3 Pineapples $4.45 55 +4 Tomatoes $3.45 25 +5 Onions $1.45 15 * +6 Bananas $3.45 30 +``` + +当输入行用`$4 <= 20` `{ printf "%s\t%s\n", $0,"*" ; next ; }`命令打印以后,`next`命令 将跳过第二个`$4 > 20` `{ print $0 ;}`表达式, 继续判断下一个输入行,而不是浪费时间继续判断一下是不是当前输入行还大于20. + +next命令在编写高效的命令脚本时候是非常重要的, 它可以很大的提高脚本速度. 下面我们准备来学习Awk的下一个系列了. + +希望这篇文章对你有帮助,你可以给我们留言. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/use-next-command-with-awk-in-linux/ + +作者:[Aaron Kili][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 51d14dd0b753206579be37ebff71c03fca0bf772 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jun 2016 16:43:20 +0800 Subject: [PATCH 022/471] PUB:20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System @Moelf @PurlingNayuki --- ...th--The Man Behind Ubuntu Operating System.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/talk => published}/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md (80%) diff --git a/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md b/published/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md similarity index 80% rename from translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md rename to published/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md index b40048201b..b5923ca86a 100644 --- a/translated/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md +++ b/published/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating 
System.md @@ -3,21 +3,21 @@ ![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg) -**马克·理查德·沙特尔沃思(马克Richard Shuttleworth)** 是 Ubuntu 的创始人,也被称作 [Debian 背后的人][1]([之一][2])。他于 1973 年出生在南非的韦尔科姆(Welkom)。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的非洲独立国家的公民。 +**马克·理查德·沙特尔沃思(Mark Richard Shuttleworth)** 是 Ubuntu 的创始人,也被称作 [Debian 背后的人][1]([之一][2])。他于 1973 年出生在南非的韦尔科姆(Welkom)。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的非洲独立国家的公民。 马克曾在 1996 年成立了一家名为 **Thawte** 的互联网商务安全公司,那时他还在开普敦大学( University of Cape Town)的学习金融和信息技术。 -2000 年,马克创立了 HBD(Here be Dragons (此处有龙/危险)的缩写,所以其吉祥物是一只龙),这是一家投资公司,同时他还创立了沙特尔沃思基金会(Shuttleworth Foundation),致力于给社会中有创新性的领袖以奖金和投资等形式提供资助。 +2000 年,马克创立了 HBD(Here be Dragons (此处有龙/危险)的缩写,所以其吉祥物是一只龙),这是一家投资公司,同时他还创立了沙特尔沃思基金会(Shuttleworth Foundation),致力于以奖金和投资等形式给社会中有创新性的领袖提供资助。 -> "移动设备对于个人电脑行业的未来而言至关重要。比如就在这个月,相对于平板电脑的发展而言,传统 PC 行业很明显正在萎缩。所以如果我们想要涉足个人电脑产业,我们必须首先涉足移动行业。移动产业之所以有趣,是因为在这里没有盗版 Windows 操作系统的市场。所以如果你为你的操作系统赢得了一台设备的市场份额,这台设备会一直使用你的操作系统。在传统 PC 行业,我们时不时得和“免费”的 Windows 产生竞争,这是一种非常微妙的挑战。所以我们现在的重心是围绕 Ubuntu 和移动设备——手机和平板——以图与普通用户建立更深层次的联系。" +> “移动设备对于个人电脑行业的未来而言至关重要。比如就在这个月,相对于平板电脑的发展而言,传统 PC 行业很明显正在萎缩。所以如果我们想要涉足个人电脑产业,我们必须首先涉足移动行业。移动产业之所以有趣,是因为在这里没有盗版 Windows 操作系统的市场。所以如果你为你的操作系统赢得了一台设备的市场份额,这台设备会一直使用你的操作系统。在传统 PC 行业,我们时不时得和“免费”的 Windows 产生竞争,这是一种非常微妙的挑战。所以我们现在的重心是围绕 Ubuntu 和移动设备——手机和平板——以图与普通用户建立更深层次的联系。” > > — 马克·沙特尔沃思 2002 年,他在俄罗斯的星城(Star City)接受了为期一年的训练,随后作为联盟号 TM-34 任务组的一员飞往了国际空间站。再后来,在面向有志于航空航天或者其相关学科的南非学生群体发起了推广科学、编程及数学的运动后,马克 创立了 **Canonical Ltd**。此后直至2013年,他一直在领导 Ubuntu 操作系统的开发。 -现今,Shuttleworth 拥有英国与南非双重国籍并和 18 只可爱的鸭子住在英国的 Isle of Man 小岛上的一处花园,一同的还有他可爱的女友 Claire,两条黑色母狗以及时不时经过的羊群。 +现今,沙特尔沃思拥有英国与南非双重国籍并和 18 只可爱的鸭子住在英国的 Isle of Man 小岛上的一处花园,一同的还有他可爱的女友 Claire,两条黑色母狗以及时不时经过的羊群。 -> "电脑不仅仅是一台电子设备了。它现在是你思维的延续,以及通向他人的大门。" +> “电脑不仅仅是一台电子设备了。它现在是你思维的延续,以及通向他人的大门。” > > — 马克·沙特尔沃思 @@ -41,7 +41,7 @@ > > — 马克·沙特尔沃思 -### Linux、免费开源软件与马克·沙特尔沃思 ### +### Linux、自由开源软件与马克·沙特尔沃思 ### 在 90 年代后期,马克曾作为一名开发者参与 Debian 操作系统项目。 @@ -49,7 +49,7 @@ 2004 年,马克通过出资开发基于 Debian 的 
Ubuntu 操作系统返回了自由软件界,这一切也经由他的 Canonical 公司完成。 -2005 年,马克出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,马克经常被一个朗朗上口的名字称呼——“**SABDFL :自封的生命之仁慈独裁者(Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的高手开发这个巨大的项目,马克花费了 6 个月的时间从 Debian 邮件列表里寻找,这一切都是在他乘坐在南极洲的一艘破冰船——赫列布尼科夫船长号(Kapitan Khlebnikov)——上完成的。同年,马克买下了 Impi Linux 65% 的股份。 +2005 年,马克出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,人们经常用一个朗朗上口的名字称呼他——“**SABDFL :自封的生命之仁慈独裁者(Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的高手开发这个巨大的项目,马克花费了 6 个月的时间从 Debian 邮件列表里寻找,这一切都是在他乘坐在南极洲的一艘破冰船——赫列布尼科夫船长号(Kapitan Khlebnikov)——上完成的。同年,马克买下了 Impi Linux 65% 的股份。 > “我呼吁电信公司的掌权者们尽快开发出跨洲际的高效信息传输服务。” @@ -78,7 +78,7 @@ ![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg) -在太空中,马克与纳尔逊·曼德拉(Nelson Mandela)和另一个 14 岁的南非女孩米歇尔·福斯特(Michelle Foster) (她问马克要不要娶她)通过无线电进行了交谈。马克礼貌地回避了这个结婚问题,在巧妙地改换话题之前他说他感到很荣幸。身患绝症的女孩福斯特通过梦想基金会( Dream foundation)的赞助获得了与马克和纳尔逊·曼德拉交谈的机会。 +在太空中,马克与纳尔逊·曼德拉(Nelson Mandela)和另一个 14 岁的南非女孩米歇尔·福斯特(Michelle Foster) (她问马克要不要娶她)通过无线电进行了交谈。马克礼貌地回避了这个结婚问题,但在巧妙地改换话题之前他说他感到很荣幸。身患绝症的女孩福斯特通过梦想基金会( Dream foundation)的赞助获得了与马克和纳尔逊·曼德拉交谈的机会。 归来后,马克在世界各地做了旅行,并和各地的学生就太空之旅发表了感言。 From 812a4ecc6f6e40645e5ea1d7fa0fa11383c3cb33 Mon Sep 17 00:00:00 2001 From: cposture Date: Wed, 29 Jun 2016 22:01:21 +0800 Subject: [PATCH 023/471] Translating by cposture --- sources/tech/20160512 Bitmap in Linux Kernel.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160512 Bitmap in Linux Kernel.md b/sources/tech/20160512 Bitmap in Linux Kernel.md index b7e832ba0a..adffc9d049 100644 --- a/sources/tech/20160512 Bitmap in Linux Kernel.md +++ b/sources/tech/20160512 Bitmap in Linux Kernel.md @@ -1,3 +1,4 @@ +[Translating by cposture 2016.06.29] Data Structures in the Linux Kernel ================================================================================ From 537488f5505cf31ae41038f3df9809134cc39c9f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 30 Jun 2016 06:38:45 +0800 Subject: [PATCH 024/471] PUB:20160621 
Building Serverless App with Docker @bazz2 --- ...160621 Building Serverless App with Docker.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/tech => published}/20160621 Building Serverless App with Docker.md (81%) diff --git a/translated/tech/20160621 Building Serverless App with Docker.md b/published/20160621 Building Serverless App with Docker.md similarity index 81% rename from translated/tech/20160621 Building Serverless App with Docker.md rename to published/20160621 Building Serverless App with Docker.md index 9af6c5d7aa..0b5409fc89 100644 --- a/translated/tech/20160621 Building Serverless App with Docker.md +++ b/published/20160621 Building Serverless App with Docker.md @@ -1,9 +1,9 @@ -用 docker 创建 serverless 应用 +用 Docker 创建 serverless 应用 ====================================== -当今世界会时不时地出现一波科技浪潮,将以前的技术拍死在海滩上。针对 serverless 应用的概念我们已经谈了很多,它是指将你的应用程序按功能来部署,这些功能在被用到时才会启动。你不用费心去管理服务器和程序规模,因为它们会在需要的时候在一个集群中启动并运行。 +当今世界会时不时地出现一波波科技浪潮,将以前的技术拍死在海滩上。针对 serverless 应用的概念我们已经谈了很多,它是指将你的应用程序按功能来部署,这些功能在被用到时才会启动。你不用费心去管理服务器和程序规模,因为它们会在需要的时候在一个集群中启动并运行。 -但是 serverless 并不意味着没有 Docker 什么事儿,事实上 Docker 就是 serverless 的。你可以使用 Docker 来容器化这些功能,然后在 Swarm 中按需求来运行它们。Serverless 是一项构建分布式应用的技术,而 Docker 是它们完美的构建平台。 +但是 serverless 并不意味着没有 Docker 什么事儿,事实上 Docker 就是 serverless 的。你可以使用 Docker 来容器化这些功能,然后在 Swarm 中按需求来运行它们。serverless 是一项构建分布式应用的技术,而 Docker 是它们完美的构建平台。 ### 从 servers 到 serverless @@ -28,13 +28,13 @@ client.run("bfirsh/serverless-record-vote-task", [voter_id, vote], detach=True) 这个投票处理进程和消息队列可以用运行在 Swarm 上的 Docker 容器来代替,并实现按需自动部署。 -我们也可以用容器替换 web 前端,使用一个轻量级 HTTP 服务器来触发容器响应一个 HTTP 请求。Docker 容器代替长期运行的 HTTP 服务器来挑起响应请求的重担,这些容器可以自动扩容来支撑大访问量。 +我们也可以用容器替换 web 前端,使用一个轻量级 HTTP 服务器来触发容器响应一个 HTTP 请求。Docker 容器代替长期运行的 HTTP 服务器来挑起响应请求的重担,这些容器可以自动扩容来支撑更大访问量。 新的架构就像这样: ![](https://blog.docker.com/wp-content/uploads/Picture2.png) -红色框内是持续运行的服务,绿色框内是按需启动的容器。这个架构提供更少的长期运行服务让你管理,并且可以自动扩容(最大容量由你的 Swarm 决定)。 
+红色框内是持续运行的服务,绿色框内是按需启动的容器。这个架构里需要你来管理的长期运行服务更少,并且可以自动扩容(最大容量由你的 Swarm 决定)。 ### 我们可以做点什么? @@ -51,11 +51,11 @@ client.run("bfirsh/serverless-record-vote-task", [voter_id, vote], detach=True) ### 下一步怎么做 -我们提供了这些前卫的工具和概念来构建应用,并没有深入发掘它们的功能。我们的架构里还是存在长期运行的服务,将来我们需要使用 Swarm 来把所有服务都用按需扩容的方式实现 +我们提供了这些前卫的工具和概念来构建应用,并没有深入发掘它们的功能。我们的架构里还是存在长期运行的服务,将来我们需要使用 Swarm 来把所有服务都用按需扩容的方式实现。 希望本文能在你搭建架构时给你一些启发,但我们还是需要你的帮助。我们提供了所有的基本工具,但它们还不是很完善,我们需要更多更好的工具、库、应用案例、文档以及其他资料。 -[我们在这里发布了工具、库和文档][3]。如果想了解更多,请移步到那里,另外请贡献一些链接给我们,这样我们就能一直工作了。 +[我们在这里发布了工具、库和文档][3]。如果想了解更多,请贡献给我们一些你知道的资源,以便我们能够完善这篇文章。 玩得愉快。 @@ -77,7 +77,7 @@ via: https://blog.docker.com/2016/06/building-serverless-apps-with-docker/ 作者:[Ben Firshman][a] 译者:[bazz2](https://github.com/bazz2) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 11180e4a7325b5ca739fc23f77d296066c377b52 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=91=A8=E5=AE=B6=E6=9C=AA?= Date: Thu, 30 Jun 2016 09:36:00 +0800 Subject: [PATCH 025/471] Translating by GitFuture --- ... 
10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md index e6b143cc00..acb98b2d20 100644 --- a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md +++ b/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md @@ -1,3 +1,5 @@ +Translating by GitFuture + Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04 ===================================================================================== From 6ee0c8807948f541cddfa3d4005ccb410a3c19c5 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 30 Jun 2016 10:08:09 +0800 Subject: [PATCH 026/471] =?UTF-8?q?20160630-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Flatpak brings standalone apps to Linux.md | 34 +++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 sources/tech/20160621 Flatpak brings standalone apps to Linux.md diff --git a/sources/tech/20160621 Flatpak brings standalone apps to Linux.md b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md new file mode 100644 index 0000000000..5441627710 --- /dev/null +++ b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md @@ -0,0 +1,34 @@ +Flatpak brings standalone apps to Linux +=== + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg) + +The development team behind [Flatpak][1] has [just announced the general availability][2] of the Flatpak desktop application framework. 
Flatpak (which was also known during development as xdg-app) provides the ability for an application — bundled as a Flatpak — to be installed and run easily and consistently on many different Linux distributions. Applications bundled as Flatpaks also have the ability to be sandboxed for security, isolating them from your operating system, and other applications. Check out the [Flatpak website][3], and the [press release][4] for more information on the tech that makes up the Flatpak framework.
+
+### Installing Flatpak on Fedora
+
+For users wanting to run applications bundled as Flatpaks, installation on Fedora is easy, with Flatpak already available in the official Fedora 23 and Fedora 24 repositories. The Flatpak website has [full details on installation on Fedora][5], as well as how to install on Arch, Debian, Mageia, and Ubuntu. [Many applications][6] have builds already bundled with Flatpak — including LibreOffice, and nightly builds of popular graphics applications Inkscape and GIMP.
+
+### For Application Developers
+
+If you are an application developer, the Flatpak website also contains some great resources on getting started [bundling and distributing your applications with Flatpak][7]. These resources contain information on using Flatpak SDKs to build standalone, sandboxed Flatpak applications.
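As a rough sketch of what the command-line workflow looks like on Fedora (the remote name, repository URL, and application ID here are made-up placeholders, not real endpoints; the exact commands for each application are listed on the Flatpak apps page):

```
$ sudo dnf install flatpak
$ flatpak remote-add --no-gpg-verify example https://example.com/repo
$ flatpak install example org.example.App
$ flatpak run org.example.App
```

The `--no-gpg-verify` switch is only there because the placeholder repository has no GPG key configured; real remotes normally ship one.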
+ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introducing-flatpak/ + +作者:[Ryan Lerch][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/introducing-flatpak/ +[1]: http://flatpak.org/ +[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html +[3]: http://flatpak.org/ +[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html +[5]: http://flatpak.org/getting.html +[6]: http://flatpak.org/apps.html +[7]: http://flatpak.org/developer.html From 62d8ec64c4bbda8951040ef2eb3c457845449cb0 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 30 Jun 2016 10:14:34 +0800 Subject: [PATCH 027/471] =?UTF-8?q?20160630-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... technologies in Fedora: systemd-nspawn.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/tech/20160621 Container technologies in Fedora: systemd-nspawn.md diff --git a/sources/tech/20160621 Container technologies in Fedora: systemd-nspawn.md b/sources/tech/20160621 Container technologies in Fedora: systemd-nspawn.md new file mode 100644 index 0000000000..1ec72a3c90 --- /dev/null +++ b/sources/tech/20160621 Container technologies in Fedora: systemd-nspawn.md @@ -0,0 +1,106 @@ +Container technologies in Fedora: systemd-nspawn +=== + +Welcome to the “Container technologies in Fedora” series! This is the first article in a series of articles that will explain how you can use the various container technologies available in Fedora. This first article will deal with `systemd-nspawn`. + +### What is a container? + +A container is a user-space instance which can be used to run a program or an operating system in isolation from the system hosting the container (called the host system). 
The idea is very similar to a `chroot` or a [virtual machine][1]. The processes running in a container are managed by the same kernel as the host operating system, but they are isolated from the host file system, and from the other processes. + + +### What is systemd-nspawn? + +The systemd project considers container technologies as something that should fundamentally be part of the desktop and that should integrate with the rest of the user’s systems. To this end, systemd provides `systemd-nspawn`, a tool which is able to create containers using various Linux technologies. It also provides some container management tools. + +In many ways, `systemd-nspawn` is similar to `chroot`, but is much more powerful. It virtualizes the file system, process tree, and inter-process communication of the guest system. Much of its appeal lies in the fact that it provides a number of tools, such as `machinectl`, for managing containers. Containers run by `systemd-nspawn` will integrate with the systemd components running on the host system. As an example, journal entries can be logged from a container in the host system’s journal. + +In Fedora 24, `systemd-nspawn` has been split out from the systemd package, so you’ll need to install the `systemd-container` package. As usual, you can do that with a `dnf install systemd-container`. + +### Creating the container + +Creating a container with `systemd-nspawn` is easy. Let’s say you have an application made for Debian, and it doesn’t run well anywhere else. That’s not a problem, we can make a container! To set up a container with the latest version of Debian (at this point in time, Jessie), you need to pick a directory to set up your system in. I’ll be using `~/DebianJessie` for now. + +Once the directory has been created, you need to run `debootstrap`, which you can install from the Fedora repositories. For Debian Jessie, you run the following command to initialize a Debian file system. 
+
+```
+$ debootstrap --arch=amd64 stable ~/DebianJessie
+```
+
+This assumes your architecture is x86_64. If it isn’t, you must change `amd64` to the name of your architecture. You can find your machine’s architecture with `uname -m`.
+
+Once your root directory is set up, you will start your container with the following command.
+
+```
+$ systemd-nspawn -bD ~/DebianJessie
+```
+
+You’ll be up and running within seconds. You’ll notice something as soon as you try to log in: you can’t use any accounts on your system. This is because systemd-nspawn virtualizes users. The fix is simple: remove `-b` from the previous command. You’ll boot directly to the root shell in the container. From there, you can just use `passwd` to set a password for root, or you can use `adduser` to add a new user. As soon as you’re done with that, go ahead and put the `-b` flag back. You’ll boot to the familiar login console and you log in with the credentials you set.
+
+All of this applies for any distribution you would want to run in the container, but you need to create the system using the correct package manager. For Fedora, you would use DNF instead of `debootstrap`. To set up a minimal Fedora system, you can run the following command, replacing the absolute path with wherever you want the container to be.
+
+```
+$ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release
+```
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-04-14.png)
+
+### Setting up the network
+
+You’ll notice an issue if you attempt to start a service that binds to a port currently in use on your host system. Your container is using the same network interface. Luckily, `systemd-nspawn` provides several ways to achieve separate networking from the host machine.
+
+#### Local networking
+
+The first method uses the `--private-network` flag, which only creates a loopback device by default. This is ideal for environments where you don’t need networking, such as build systems and other continuous integration systems.
+
+#### Multiple networking interfaces
+
+If you have multiple network devices, you can give one to the container with the `--network-interface` flag. To give `eno1` to my container, I would add the flag `--network-interface=eno1`. While an interface is assigned to a container, the host can’t use it at the same time. When the container is completely shut down, it will be available to the host again.
+
+#### Sharing network interfaces
+
+For those of us who don’t have spare network devices, there are other options for providing access to the container. One of those is the `--port` flag. This forwards a port on the container to the host. The format is `protocol:host:container`, where protocol is either `tcp` or `udp`, `host` is a valid port number on the host, and `container` is a valid port on the container. You can omit the protocol and specify only `host:container`. I often use something similar to `--port=2222:22`.
+
+You can enable complete, host-only networking with the `--network-veth` flag, which creates a virtual Ethernet interface between the host and the container. You can also bridge two connections with `--network-bridge`.
+
+### Using systemd components
+
+If the system in your container has D-Bus, you can use systemd’s provided utilities to control and monitor your container. Debian doesn’t include dbus in the base install. If you want to use it with Debian Jessie, you’ll want to run `apt install dbus`.
+
+#### machinectl
+
+To easily manage containers, systemd provides the `machinectl` utility. Using `machinectl`, you can log in to a container with `machinectl login name`, check the status with `machinectl status name`, reboot with `machinectl reboot name`, or power it off with `machinectl poweroff name`.
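As a quick illustration, with the `DebianJessie` container from earlier booted, the common operations might look like this (the machine name is taken from the container’s directory name):

```
$ machinectl list
$ machinectl status DebianJessie
$ machinectl login DebianJessie
$ machinectl poweroff DebianJessie
```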
+
+### Other systemd commands
+
+Most systemd commands, such as `journalctl`, `systemd-analyze`, and `systemctl`, support containers with the `--machine` option. For example, if you want to see the journals of a container named “foobar”, you can use `journalctl --machine=foobar`. You can also see the status of a service running in this container with `systemctl --machine=foobar status service`.
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-09-25.png)
+
+### Working with SELinux
+
+If you’re running with SELinux enforcing (the default in Fedora), you’ll need to set the SELinux context for your container. To do that, you need to run the following two commands on the host system.
+
+```
+$ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?"
+$ restorecon -R /path/to/container/
+```
+
+Make sure you replace “/path/to/container” with the path to your container. For my container, “DebianJessie”, I would run the following:
+
+```
+$ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?"
+$ restorecon -R /home/johnmh/DebianJessie/
+```
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
+
+作者:[John M. 
Harris, Jr.][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/ +[1]: https://en.wikipedia.org/wiki/Virtual_machine From c3b14933f388d9921ae1c3fd5fd4f1e267233dad Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 30 Jun 2016 10:22:15 +0800 Subject: [PATCH 028/471] =?UTF-8?q?20160630-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20160620 Monitor Linux With Netdata.md | 113 ++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 sources/tech/20160620 Monitor Linux With Netdata.md diff --git a/sources/tech/20160620 Monitor Linux With Netdata.md b/sources/tech/20160620 Monitor Linux With Netdata.md new file mode 100644 index 0000000000..95156ef108 --- /dev/null +++ b/sources/tech/20160620 Monitor Linux With Netdata.md @@ -0,0 +1,113 @@ +Monitor Linux With Netdata +=== + +Netdata is a real-time resource monitoring tool with a friendly web front-end developed and maintained by [FireHOL][1]. With this tool, you can read charts representing resource utilization of things like CPUs, RAM, disks, network, Apache, Postfix and more. It is similar to other monitoring software like Nagios; however, Netdata is only for real-time monitoring via a web interface. + + +### Understanding Netdata + +There’s currently no authentication, so if you’re concerned about someone getting information about the applications you’re running on your system, you should restrict who has access via a firewall policy. The UI is simplified in a way anyone could look at the graphs and understand what they’re seeing, or at least be impressed by your flashy setup. + +The web front-end is very responsive and requires no Flash plugin. The UI doesn’t clutter things up with unneeded features, but sticks to what it does. 
At first glance, it may seem a bit much with the hundreds of charts you have access to, but luckily the most commonly needed charts (i.e. CPU, RAM, network, and disk) are at the top. If you wish to drill deeper into the graphical data, all you have to do is scroll down or click on the item in the menu to the right. Netdata even allows you to control the chart with play, reset, zoom and resize with the controls on the bottom right of each chart.
+
+![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-1.png)
+>Netdata chart control
+
+When it comes down to system resources, the software doesn’t need too much either. The creators chose to write the software in C. Netdata doesn’t use much more than ~40MB of RAM.
+
+![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture.png)
+>Netdata memory usage
+
+### Download Netdata
+
+To download this software, you can head over to the [Netdata GitHub page][2]. Then click the “Clone or download” green button on the left of the page. You should then be presented with two options.
+
+#### Via the ZIP file
+
+One option is to download the ZIP file. This will include everything in the repository; however, if the repository is updated then you will need to download the ZIP file again. Once you download the ZIP file, you can use the `unzip` tool in the command line to extract the contents. Running the following command will extract the contents of the ZIP file into a “`netdata`” folder.
+
+```
+$ cd ~/Downloads
+$ unzip netdata-master.zip
+```
+
+![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-2.png)
+>Netdata unzipped
+
+You don’t need to add the `-d` option to `unzip` because the contents are inside a folder at the root of the ZIP file. If that folder weren’t at the root, `unzip` would have extracted the contents into the current directory (which can be messy).
+
+#### Via git
+
+The next option is to download the repository via git. You will, of course, need git installed on your system. 
This is usually installed by default on Fedora. If not, you can install git from the command line with the following command.
+
+```
+$ sudo dnf install git
+```
+
+After installing git, you will need to “clone” the repository to your system. To do this, run the following command.
+
+```
+$ git clone https://github.com/firehol/netdata.git
+```
+
+This will then clone (or make a copy of) the repository in the current working directory.
+
+### Install Netdata
+
+There are some packages you will need to build Netdata successfully. Luckily, it’s a single line to install the things you need ([as stated in their installation guide][3]). Running the following command in the terminal will install all of the dependencies you need to use Netdata.
+
+```
+$ dnf install zlib-devel libuuid-devel libmnl-devel gcc make git autoconf autogen automake pkgconfig
+```
+
+Once the required packages are installed, you will need to `cd` into the `netdata/` directory and run the `netdata-installer.sh` script.
+
+```
+$ sudo ./netdata-installer.sh
+```
+
+You will then be prompted to press enter to build and install the program. If you wish to continue, press enter to be on your way!
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-3-600x341.png)
+>Netdata install.
+
+If all goes well, you will have Netdata built, installed, and running on your system. The installer will also add an uninstall script in the same folder as the installer called `netdata-uninstaller.sh`. If you change your mind later, running this script will remove it from your system.
+
+You can see it running by checking its status via `systemctl`.
+
+```
+$ sudo systemctl status netdata
+```
+
+### Accessing Netdata
+
+Now that we have Netdata installed and running, you can access the web interface via port 19999. I have it running on a test machine, as shown in the screenshot below. 
+ +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-4-768x458.png) +>An overview of what Netdata running on your system looks like + +Congratulations! You now have successfully installed and have access to beautiful displays, graphs, and advanced statistics on the performance of your machine. Whether it’s for a personal machine so you can show it off to your friends or for getting deeper insight into the performance of your server, Netdata delivers on performance reporting for any system you choose. + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/monitor-linux-netdata/ + +作者:[Martino Jones][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/monitor-linux-netdata/ +[1]: https://firehol.org/ +[2]: https://github.com/firehol/netdata +[3]: https://github.com/firehol/netdata/wiki/Installation + + + + + + + + From 67d0d2a7cca68e6a72ace06fa5e1a489dbfacd98 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 30 Jun 2016 12:11:13 +0800 Subject: [PATCH 029/471] PUB:20160523 Driving cars into the future with Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @erlinux 翻译的不够认真,请下次提交译文时自己多读几遍。或者可以选择更短更简单一些的稿件。 --- ...Driving cars into the future with Linux.md | 106 +++++++++++++++++ ...Driving cars into the future with Linux.md | 108 ------------------ 2 files changed, 106 insertions(+), 108 deletions(-) create mode 100644 published/20160523 Driving cars into the future with Linux.md delete mode 100644 translated/talk/20160523 Driving cars into the future with Linux.md diff --git a/published/20160523 Driving cars into the future with Linux.md b/published/20160523 Driving cars into the future with Linux.md new file mode 100644 index 0000000000..fad7ca99f2 --- /dev/null +++ b/published/20160523 
Driving cars into the future with Linux.md
@@ -0,0 +1,106 @@
+与 Linux 一同驾车奔向未来
+===========================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY)
+
+当我驾车的时候并没有这么想过,但是我肯定喜欢一个配有这样系统的车子,它可以让我按下几个按钮就能与我的妻子、母亲以及孩子们语音通话。这样的系统也可以让我选择是否从云端、卫星广播、以及更传统的 AM/FM 收音机收听音乐流媒体。我也会得到最新的天气情况,以及它可以引导我的车载 GPS 找到抵达下一个目的地的最快路线。[车载娱乐系统(In-vehicle infotainment)][1],业界常称作 IVI,它已经普及出现在最新的汽车上了。
+
+前段时间,我乘坐飞机跨越了数百英里,然后租了一辆汽车。令人愉快的是,我发现我租赁的汽车上配置了和我自己车上类似的 IVI 技术。毫不犹豫地,我就通过蓝牙连接把我的联系人上传到了系统当中,然后打电话回家给我的家人,让他们知道我已经安全抵达了,而接待我的主人们也会知道我正在去往他们家的路上。
+
+在最近的[新闻综述][2]中,Scott Nesbitt 引述了一篇文章,说福特汽车公司因其开源的[智能设备连接(Smart Device Link)][3](SDL)从竞争对手汽车制造商中得到了足够多的回报,这个中间件框架可以用于支持移动电话。 SDL 是 [GENIVI 联盟][4]的一个项目,这个联盟是一个非营利性组织,致力于建设支持开源车载娱乐系统的中间件。据 GENIVI 的执行董事 [Steven Crumb][5] 称,他们的[成员][6]有很多,包括戴姆勒集团、现代、沃尔沃、日产、本田等等 170 个企业。
+
+为了在同行业间保持竞争力,汽车生产企业需要一个中间设备系统,以支持现代消费者所使用的各种人机界面技术。无论您使用的是 Android、iOS 还是其他设备,汽车 OEM 厂商都希望自己的产品能够支持这些。此外,这些 IVI 系统必须有足够的适应能力以支持日益变化的移动技术。OEM 厂商希望提供有价值的服务,并可以在他们的 IVI 之上增加服务,以满足他们客户的各种需求。
+
+### 步入 Linux 和开源软件
+
+除了 GENIVI 的努力之外,[Linux 基金会][7]也赞助支持了[车载 Linux(Automotive Grade Linux)][8](AGL)工作组,这是一个致力于为汽车应用寻求开源解决方案的软件基金会。虽然 AGL 初期将侧重于 IVI 系统,但是未来他们希望发展到不同的方向,包括[远程信息处理(telematics)][9]、抬头显示器(HUD)及其他控制系统等等。 现在 AGL 已经有超过 50 名成员,包括捷豹、丰田、日产,并在其[最近发布的一篇公告][10]中宣称福特、马自达、三菱、和斯巴鲁也加入了。
+
+为了了解更多信息,我们采访了这一新兴领域的两位领导人。具体来说,我们想知道 Linux 和开源软件是如何被使用的,并且它们是如何事实上改变了汽车行业的面貌。首先,我们将与 [Alison Chaiken][11] 谈谈,她是一位任职于 Peloton Technology 的软件工程师,也是一位在车载 Linux 、网络安全和信息透明化方面的专家。她曾任职于 [Alison Chaiken][11] 公司、诺基亚和斯坦福直线性加速器。然后我们和 [Steven Crumb][12] 进行了交谈,他是 GENIVI 执行董事,他之前从事于高性能计算环境(超级计算机和早期的云计算)的开源工作。他说,虽然他已经不再是一个程序员了,但是他乐于帮助企业解决在使用开源软件时的实际业务问题。
+
+### 采访 Alison Chaiken (by [Deb Nicholson][13])
+
+#### 你是如何开始对汽车软件领域感兴趣的?
+
+我曾在诺基亚从事于手机上的 [MeeGo][14] 产品,2009 年该项目被取消了。我想,我下一步怎么办?其时,我的一位同事正在从事于 [MeeGo-IVI][15],这是一个早期的车载 Linux 发行版。 “Linux 在汽车方面将有很大发展,” 我想,所以我就朝着这个方向努力。
+
+#### 你能告诉我们你在这些日子里工作在哪些方面吗? 
+ +我目前正在启动一个高级巡航控制系统的项目,它用在大型卡车上,使用实时 Linux 以提升安全性和燃油经济性。我喜欢在这方面的工作,因为没有人会反对提升货运的能力。 + +#### 近几年有几则汽车被黑的消息。开源代码方案可以帮助解决这个问题吗? + +我恰好针对这一话题准备了一次讲演,我会在南加州 Linux 2016 博览会上就 Linux 能否解决汽车上的安全问题做个讲演 ([讲演稿在此][16])。值得注意的是,GENIVI 和车载 Linux 项目已经公开了他们的代码,这两个项目可以通过 Git 提交补丁。(如果你有补丁的话),请给上游发送您的补丁!许多眼睛都盯着,bug 将无从遁形。 + +#### 执法机构和保险公司可以找到很多汽车上的数据的用途。他们获取这些信息很容易吗? + +好问题。IEEE-1609 专用短程通信标准(Dedicated Short Range Communication Standard)就是为了让汽车的 WiFi 消息可以安全、匿名地传递。不过,如果你从你的车上发推,那可能就有人能够跟踪到你。 + +#### 开发人员和公民个人可以做些什么,以在汽车技术进步的同时确保公民自由得到保护? + +电子前沿基金会( Electronic Frontier Foundation)(EFF)在关注汽车问题方面做了出色的工作,包括对哪些数据可以存储在汽车 “黑盒子”里通过官方渠道发表了看法,以及 DMCA 规定 1201 如何应用于汽车上。 + +#### 在未来几年,你觉得在汽车方面会发生哪些令人激动的发展? + +可以拯救生命的自适应巡航控制系统和防撞系统将取得长足发展。当它们大量进入汽车里面时,我相信这会使得(因车祸而导致的)死亡人数下降。如果这都不令人激动,我不知道还有什么会更令人激动。此外,像自动化停车辅助功能,将会使汽车更容易驾驶,减少汽车磕碰事故。 + +#### 我们需要做什么?人们怎样才能参与? + +车载 Linux 开发是以开源的方式开发,它运行在每个人都能买得起的廉价硬件上(如树莓派 2 和中等价位的 Renesas Porter 主板)。 GENIVI 汽车 Linux 中间件联盟通过 Git 开源了很多软件。此外,还有很酷的 [OSVehicle 开源硬件][17]汽车平台。 + +只需要不太多的预算,人们就可以参与到 Linux 软件和开放硬件中。如果您感兴趣,请加入我们在 Freenode 上的IRC #automotive 吧。 + +### 采访 Steven Crumb (by Don Watkins) + +#### GENIVI 在 IVI 方面做了哪些巨大贡献? + +GENIVI 率先通过使用自由开源软件填补了汽车行业的巨大空白,这包括 Linux、非安全关键性汽车软件(如车载娱乐系统(IVI))等。作为消费者,他们很期望在车辆上有和智能手机一样的功能,对这种支持 IVI 功能的软件的需求量成倍地增长。不过不断提升的软件数量也增加了建设 IVI 系统的成本,从而延缓了其上市时间。 + +GENIVI 使用开源软件和社区开发的模式为汽车制造商及其软件提供商节省了大量资金,从而显著地缩短了产品面市时间。我为 GENIVI 而感到激动,我们有幸引导了一场革命,在缓慢进步的汽车行业中,从高度结构化和专有的解决方案转换为以社区为基础的开发方式。我们还没有完全达成目标,但是我们很荣幸在这个可以带来实实在在好处的转型中成为其中的一份子。 + +#### 你们的主要成员怎样推动了 GENIVI 的发展方向? + +GENIVI 有很多成员和非成员致力于我们的工作。在许多开源项目中,任何公司都可以通过通过技术输出而发挥影响,包括简单地贡献代码、补丁、花点时间测试。前面说过,宝马、奔驰、现代汽车、捷豹路虎、标致雪铁龙、雷诺/日产和沃尔沃都是 GENIVI 积极的参与者和贡献者,其他的许多 OEM 厂商也在他们的汽车中采用了 IVI 解决方案,广泛地使用了 GENIVI 的软件。 + +#### 这些贡献的代码使用了什么许可证? + +GENIVI 采用了一些许可证,包括从(L)GPLv2 到 MPLv2 和 Apache2.0。我们的一些工具使用的是 Eclipse 许可证。我们有一个[公开许可策略][18],详细地说明了我们的许可证偏好。 + +#### 个人或团体如何参与其中?社区的参与对于这个项目迈向成功有多重要? 
+
+GENIVI 的开发完全是开放的([projects.genivi.org][19]),因此,欢迎任何有兴趣在汽车中使用开源软件的人参加。也就是说,公司可以通过成员的方式[加入该联盟][20],联盟以开放的方式资助其不断进行开发。GENIVI 的成员可以享受各种各样的便利,在过去六年中,已经有多达 140 家公司参与到这个全球性的社区当中。
+
+社区对于 GENIVI 是非常重要的,没有一个活跃的贡献者社区,我们不可能在这些年开发和维护这么多有价值的软件。我们努力让参与 GENIVI 变得更加简单,现在只要加入一个[邮件列表][21]就可以接触到各种软件项目中的人们。我们使用了许多开源项目采用的标准做法,并提供了高品质的工具和基础设施,以帮助开发人员宾至如归而富有成效。
+
+无论你是否熟悉汽车软件,都欢迎你加入我们的社区。人们改装汽车已经有许多年了,所以对于许多人来说,在汽车上修修改改是自然而然的做法。对于汽车来说,软件是一个新的领域,GENIVI 希望能为对汽车和开源软件有兴趣的人打开这扇门。
+
+-------------------------------
+via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb
+
+作者:[Don Watkins][a]
+译者:[erlinux](https://github.com/erlinux)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: https://en.wikipedia.org/wiki/In_car_entertainment
+[2]: https://opensource.com/life/16/1/weekly-news-jan-9
+[3]: http://projects.genivi.org/smartdevicelink/home
+[4]: http://www.genivi.org/
+[5]: https://www.linkedin.com/in/stevecrumb
+[6]: http://www.genivi.org/genivi-members
+[7]: http://www.linuxfoundation.org/
+[8]: https://www.automotivelinux.org/
+[9]: https://en.wikipedia.org/wiki/Telematics
+[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and
+[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3
+[12]: https://www.linkedin.com/in/stevecrumb
+[13]: https://opensource.com/users/eximious
+[14]: https://en.wikipedia.org/wiki/MeeGo
+[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/
+[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf
+[17]: https://www.osvehicle.com/
+[18]: http://projects.genivi.org/how
+[19]: http://projects.genivi.org/
+[20]: http://genivi.org/join
+[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects
diff --git a/translated/talk/20160523 Driving cars into the future with Linux.md 
b/translated/talk/20160523 Driving cars into the future with Linux.md deleted file mode 100644 index 9b9d69d68c..0000000000 --- a/translated/talk/20160523 Driving cars into the future with Linux.md +++ /dev/null @@ -1,108 +0,0 @@ - -驾车通往未来Linux -=========================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY) - - -当我开车的时候不认为和 Linux 有多大联系,但是我肯定我是喜欢一个配备有系统的车子,让我按几个按钮语音就可以传给我的妻子母亲以及孩子。同样,这样的系统可以让我选择是否从云端流媒体收听音乐,卫星广播,以及传统的 AM/FM 收音机。我也会得到天气更新以及可以给我的车载信息娱乐 GPS 找到最快的下一个目的地[In-vehicle infotainment][1],以及 IVI 作为行业知名产业,已经普及到最新的汽车生产商。 - -前段时间,我不得坐飞机飞跃数百英里,租一辆车。令人愉快的是,我发现我的租凭车配置了 IVI 技术。任何时候,我只要通过蓝牙连接,上传联系人到系统中,打电话回家给我的家人,让他们知道我已经安全到家了。然后“主人“会知道我再途中还是已经到他们家了。 - -在最近的 [news roundup][2],Scott Nesbitt 引用一篇文章,说福特汽车公司是由它的开源 [Smart Device Link][3](SDL)中间设备框架,对手汽车制造商,支持那个移动手机获得大量的支持。 SDL 是 [GENIVI Alliance][4] 的项目,一个非营利性的致力于建设中间件支持开源的车载信息娱乐系统。根据文献 [[Steven Crumb][5],GENIVI 执行董事,他们 [membership][6] 很广,包括 Daimler 集团,现代,沃尔沃,日产,本田等等 170 个。 - -为了在同行业中保持竞争力,汽车企业需要一个中间设备系统,可以支持当今消费者提供的各种人机界面技术。无论您拥有 Android,iOS 或其他设备,汽车 OEM 厂商希望自己的系统单位能够支持这些。此外,这些的 IVI 系统必须有足够适应能力以支持移动技术的不断下降,半衰期。 OEM 厂商要提供价值服务,并在他们的 IVI 堆栈支持各种为他们的客户添加选择。进入 Linux 和开源软件。 - -除了 GENIVI 的努力下,[Linux Foundation][7] 赞助 [Automotive Grade Linux][8](AGL)工作组,一个软件基金会,致力于寻找针对汽车应用的开源解决方案。虽然 AGL 初期将侧重于 IVI 系统,他们展望不同的分歧,包括 [telematics][9],小心显示器和其他控制系统。 AGL 有超过 50 名成员在这个时候,包括捷豹,丰田,日产,并在 [recent press release][10] 宣布福特、马自达、三菱、和斯巴鲁加入。 - - -为了了解更多信息,我们在这一新鲜兴领域采访了两位领导人。明确地来说,我们想知道是如何被使用的 Linux 和开源软件,如果它们实际上是改变汽车行业的面貌。首先,我们谈谈 [Alison Chaiken][11],在大集团技术的软件工程师和汽车 Linux 专家,网络安全和透明度。她曾任职于 [Alison Chaiken][11] 公司,诺基亚和斯坦福直线性加速器。然后我们用 [Steven Crumb][12],GENIVI 执行董事,谁得到了在开源环境高性能计算(超级计算机和早期的云计算)开始聊天。他说,虽然他再不是一个程序员了,但是他喜欢帮助企业解决开源软件的实际业务问题。 - -### 采访 Alison Chaiken (by [Deb Nicholson][13]) - -#### 你是如何开始对汽车软件空间感兴趣的? 
- -我是在诺基亚手机产品时, 2009 年该项目被取消。我想,下一步是什么?一位同事正在对 [MeeGo-IVI][15],早期的汽车 Linux 发行版。 “Linux 在汽车是大了,” 我想,所以我在朝着这个方向努力。 - -#### 你能告诉我们你这些日子工作在哪些方面? - -我目前正在启动为使用 Linux 系统增加大货车钻机的安全性和燃油经济性的先进巡航控制。我喜欢在这方面的工作,因为没有人会反对卡车得以提升。 - -#### 目前关于汽车已在近年来砍死几个人故事。开源代码方案可以帮助解决这个问题吗? - -I presented a talk on precisely this topic, on how Linux can (and cannot) contribute to security solutions in automotive at Southern California Linux Expo 2016 ([Slides][16]). Notably, GENIVI and Automotive Grade Linux have published their code and both projects take patches via Git. Please send your fixes upstream! Many eyes make all bugs shallow. -我提出的谈话正是这一主题,就如何 Linux 可以(或不可以)在南加州 2016 年世博会作出贡献的安全解决方案的 Linux汽车([Slides][16])。值得注意的是,GENIVI 和汽车级 Linux 已经公布了他们的代码,这两个项目的 Git 通过采取补丁。请上游发送您的修复!许多眼睛都盯着肤浅的bugs。 - -#### 执法机构和保险公司可以找到很多有关数据用途的驱动程序。它将如何容易成为他们获取这些信息? - -好问题。该专用短程通信标准(IEEE-1609),以保持匿名的 Wi-Fi 安全消息驱动程序。不过,如果你从你的车张贴到 Twitter,有人能够跟踪你。 - -#### 有什么可以开发人员和公民个人一起完成,以确保公民自由受到保护作为汽车技术发展的? - -电子前沿基金会(EFF)一样对汽车保持的问题上,通过什么样的数据可以存储在汽车 “黑盒子”,并在 DMCA 的规定 1201 如何应用于汽车官方渠道评论已经出色的工作了。 - -#### 在未来几年令人兴奋的事情上,那些是你看到的驱动因素? - -自适应巡航控制和防撞系统有足够的预付款来挽救生命。当他们通过运输车队的推出,我真的相信死亡人数会下降。如果这还不是令人兴奋的,我不知道是什么。此外,像自动化停车辅助功能,将会使汽车更容易驾驶,减少汽车相撞事故。 - -#### 有什么是需要人参与以及如何建造? - -汽车 Linux 级开发是开放源代码的,运行在廉价硬件(如树莓派 Pi 2 和中等价位的 Renesas Porter board),任何人都可以购买。 GENIVI 汽车 Linux 的中间设备联盟有很多软件通过 Git 的公开。此外,还有很酷的 [OSVehicle open hardware][17] 汽车平台。 - -#### 这里是 Linux 软件和开放硬件,许多方面具有中等人数预算的参与。如果您有任何疑问,加入我们在 Freenode 上 IRC#automotive。 - -### 采访 Steven Crumb (by Don Watkins) - -#### 关于GENIVI's 对 IVI 为什么那么大 ? - -GENIVI 率先通过使用自由和开源软件,包括 Linux,像车载信息娱乐(IVI)系统的非安全关键汽车软件填补了汽车行业的巨大差距。作为消费者来到期望在他们的车辆相同的功能在智能手机上的软件,以支持 IVI 功能所需的量成倍增长。软件增加量也增加了建设 IVI 系统的成本,从而延缓了上市时间。 - -GENIVI 的使用开源软件和社区发展模式节省了汽车制造商和他们的软件提供商显著大量的资金,而显著减少了产品上市时间。我很兴奋,因为 GENIVI 我们很幸运慢慢从高度结构化和专有的方法来社区为基础的方法不断发展的组织​​领导排序在汽车行业的一场革命。我们还没有完成,但它一直是一个荣幸参加正在产生实实在在的好处的转换。 - -#### 你的庞大会员怎么才可以驱动 GENIVI 方向? 
- -GENIVI 有很多会员和非会员促进我们的工作。与许多开源项目,任何公司都可以通过简单地贡献代码,修补程序和时间来检验影响的技术输出。随着中说,宝马,奔驰,现代汽车,捷豹路虎,标致雪铁龙,雷诺 / 日产和沃尔沃是所有积极采用者和贡献者 GENIVI 和其他许多 OEM 厂商已经在他们的汽车 IVI 解决方案,广泛使用 GENIVI 的软件。 - -#### 贡献的代码使用了什么许可证? - -GENIVI 采用数量的许可证从(L)GPLv2 许可,以 MPLv2 到 Apache2.0。我们的一些工具使用 Eclipse 许可证。我们有一个[public licensing policy][18],详细说明我们的许可偏好。 - -#### 一个人或一群人如何参与其中?重要的是如何对项目的持续成功的社区贡献? - -GENIVI 完全做它开放发展的在([projects.genivi.org][19]),因此,有兴趣的人在汽车使用开源软件,欢迎参加。这就是说,该联盟能够通过公司 [joining GENIVI][20] 作为成员不断发展的开放基金。 GENIVI 会员享受各种各样的福利,而不是其中最重要的是在已经发展了近六年来 140 家公司全球社区参与。 - -社区是 GENIVI 非常重要的,我们不可能生产和维护我们发展了很多年没有贡献者一个活跃的社区有价值的软件。我们努力做出贡献 GENIVI 简单,只要加入一个 [邮件列表] [21] 并连接到人们在不同的软件项目。我们使用许多开源项目采用的标准做法,并提供高质量的工具和基础设施,以帮助开发人员有宾至如归的感觉,并富有成效。 - -无论在汽车软件某人的熟悉,欢迎他们加入我们的社区。人们已经改装车多年,所以对于许多人来说,是一种天然的抽奖,任何汽车。软件是汽车的新域,GENIVI 希望成为敞开的门有兴趣的人与汽车,开源软件的工作。 - -------------------------------- -via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb - -作者:[Don Watkins][a] -译者:[erlinux](https://github.com/erlinux) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[1]: https://en.wikipedia.org/wiki/In_car_entertainment -[2]: https://opensource.com/life/16/1/weekly-news-jan-9 -[3]: http://projects.genivi.org/smartdevicelink/home -[4]: http://www.genivi.org/ -[5]: https://www.linkedin.com/in/stevecrumb -[6]: http://www.genivi.org/genivi-members -[7]: http://www.linuxfoundation.org/ -[8]: https://www.automotivelinux.org/ -[9]: https://en.wikipedia.org/wiki/Telematics -[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and -[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3 -[12]: https://www.linkedin.com/in/stevecrumb -[13]: https://opensource.com/users/eximious -[14]: https://en.wikipedia.org/wiki/MeeGo -[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/ 
-[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf -[17]: https://www.osvehicle.com/ -[18]: http://projects.genivi.org/how -[19]: http://projects.genivi.org/ -[20]: http://genivi.org/join -[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects From f94c66337bc58cb829b7d3bbb321a08f7f944c37 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Fri, 1 Jul 2016 09:22:57 +0800 Subject: [PATCH 030/471] [Translated] Awk Part 3 (#4127) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * [Translated] Awk Part 3 * 移动翻译后的文章 --- ... Strings Using Pattern Specific Actions.md | 85 ------------------- ... Strings Using Pattern Specific Actions.md | 82 ++++++++++++++++++ 2 files changed, 82 insertions(+), 85 deletions(-) delete mode 100644 sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md create mode 100644 translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md diff --git a/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md b/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md deleted file mode 100644 index acdfe66167..0000000000 --- a/sources/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md +++ /dev/null @@ -1,85 +0,0 @@ -FSSlc translating - -How to Use Awk to Filter Text or Strings Using Pattern Specific Actions -========================================================================= - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Filter-Text-or-Strings-Using-Pattern.png) - -In the third part of the Awk command series, we shall take a look at filtering text or strings based on specific patterns that a user can define. 
- -Sometimes, when filtering text, you want to indicate certain lines from an input file or lines of strings based on a given condition or using a specific pattern that can be matched. Doing this with Awk is very easy, it is one of the great features of Awk that you will find helpful. - -Let us take a look at an example below, say you have a shopping list for food items that you want to buy, called food_prices.list. It has the following list of food items and their prices. - -``` -$ cat food_prices.list -No Item_Name Quantity Price -1 Mangoes 10 $2.45 -2 Apples 20 $1.50 -3 Bananas 5 $0.90 -4 Pineapples 10 $3.46 -5 Oranges 10 $0.78 -6 Tomatoes 5 $0.55 -7 Onions 5 $0.45 -``` - -And then, you want to indicate a `(*)` sign on food items whose price is greater than $2, this can be done by running the following command: - -``` -$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Text-Using-Awk.gif) ->Print Items Whose Price is Greater Than $2 - -From the output above, you can see that the there is a `(*)` sign at the end of the lines having food items, mangoes and pineapples. If you check their prices, they are above $2. - -In this example, we have used used two patterns: - -- the first: `/ *\$[2-9]\.[0-9][0-9] */` gets the lines that have food item price greater than $2 and -- the second: `/*\$[0-1]\.[0-9][0-9] */` looks for lines with food item price less than $2. - -This is what happens, there are four fields in the file, when pattern one encounters a line with food item price greater than $2, it prints all the four fields and a `(*)` sign at the end of the line as a flag. - -The second pattern simply prints the other lines with food price less than $2 as they appear in the input file, food_prices.list. 
- -This way you can use pattern specific actions to filter out food items that are priced above $2, though there is a problem with the output, the lines that have the `(*)` sign are not formatted out like the rest of the lines making the output not clear enough. - -We saw the same problem in Part 2 of the awk series, but we can solve it in two ways: - -1. Using printf command which is a long and boring way using the command below: - -``` -$ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Printf.gif) ->Filter and Print Items Using Awk and Printf - -2. Using $0 field. Awk uses the variable 0 to store the whole input line. This is handy for solving the problem above and it is simple and fast as follows: - -``` -$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Variable.gif) ->Filter and Print Items Using Awk and Variable - -Conclusion -That’s it for now and these are simple ways of filtering text using pattern specific action that can help in flagging lines of text or strings in a file using Awk command. - -Hope you find this article helpful and remember to read the next part of the series which will focus on using comparison operators using awk tool. 
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/awk-filter-text-or-string-using-patterns/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ - diff --git a/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md b/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md new file mode 100644 index 0000000000..ef91e93575 --- /dev/null +++ b/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md @@ -0,0 +1,82 @@ +如何使用 Awk 来筛选文本或字符串 +========================================================================= + +![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Filter-Text-or-Strings-Using-Pattern.png) + +作为 Awk 命令系列的第三部分,这次我们将看一看如何基于用户定义的特定模式来筛选文本或字符串。 + +在筛选文本时,有时你可能想根据某个给定的条件或使用一个特定的可被匹配的模式,去标记某个文件或数行字符串中的某几行。使用 Awk 来完成这个任务是非常容易的,这也正是 Awk 中可能对你有所帮助的几个特色之一。 + +让我们看一看下面这个例子,比方说你有一个写有你想要购买的食物的购物清单,其名称为 food_prices.list,它所含有的食物名称及相应的价格如下所示: + +``` +$ cat food_prices.list +No Item_Name Quantity Price +1 Mangoes 10 $2.45 +2 Apples 20 $1.50 +3 Bananas 5 $0.90 +4 Pineapples 10 $3.46 +5 Oranges 10 $0.78 +6 Tomatoes 5 $0.55 +7 Onions 5 $0.45 +``` + +然后,你想使用一个 `(*)` 符号去标记那些单价大于 $2 的食物,那么你可以通过运行下面的命令来达到此目的: + +``` +$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Text-Using-Awk.gif) +>打印出单价大于 $2 的项目 + +从上面的输出你可以看到在含有 芒果(mangoes) 和 菠萝(pineapples) 的那行末尾都已经有了一个 `(*)` 标记。假如你检查它们的单价,你可以看到它们的单价的确超过了 $2 。 + +在这个例子中,我们已经使用了两个模式: + +- 第一个模式: `/ *\$[2-9]\.[0-9][0-9] */` 将会得到那些含有食物单价大于 $2 的行, +- 第二个模式: `/*\$[0-1]\.[0-9][0-9] */` 
将查找那些食物单价小于 $2 的行。
+
+上面的命令具体做了什么呢?这个文件有四个字段,当模式一匹配到含有食物单价大于 $2 的行时,它便会输出所有的四个字段并在该行末尾加上一个 `(*)` 符号来作为标记。
+
+第二个模式只是简单地把其余食物单价小于 $2 的行,按照它们在输入文件 food_prices.list 中的样子原样输出。
+
+这样你就可以使用模式来筛选出那些价格超过 $2 的食物项目,尽管上面的输出还有些问题,带有 `(*)` 符号的那些行并没有像其他行那样被格式化输出,这使得输出显得不够清晰。
+
+我们在 Awk 系列的第二部分中也看到了同样的问题,但我们可以使用下面的两种方式来解决:
+
+1. 可以像下面这样使用 printf 命令,但这样使用又长又无聊:
+
+```
+$ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Printf.gif)
+>使用 Awk 和 Printf 来筛选和输出项目
+
+2. 使用 `$0` 字段。Awk 使用变量 **0** 来存储整个输入行。对于上面的问题,这种方式非常方便,并且它还简单、快速:
+
+```
+$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Variable.gif)
+>使用 Awk 和变量来筛选和输出项目
+
+### 结论
+
+这就是全部内容了,使用 Awk 命令你便可以通过几种简单的方法去利用模式匹配来筛选文本,帮助你在一个文件中对文本或字符串的某些行做标记。
+
+希望这篇文章对你有所帮助。记得阅读这个系列的下一部分,我们将关注在 awk 工具中使用比较运算符。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/awk-filter-text-or-string-using-patterns/
+
+作者:[Aaron Kili][a]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
\ No newline at end of file

From b0d645d5dd92daf67acdfb00f5c59e55abd47a13 Mon Sep 17 00:00:00 2001
From: kokialoves <498497353@qq.com>
Date: Fri, 1 Jul 2016 11:01:06 +0800
Subject: [PATCH 031/471] merge (#4128)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Delete Part 6 - How to Use ‘next’ Command with Awk in Linux.md

* Add files via upload

* Create Part 6 - How to Use ‘next’ Command with Awk in Linux.md

From 
6ef12a2d7ac686988f7f375e8b19829920eb81ab Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 1 Jul 2016 11:04:54 +0800 Subject: [PATCH 032/471] =?UTF-8?q?=E5=BD=92=E6=A1=A3201606?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...3 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md | 0 ...210 Getting started with Docker by Dockerizing this Blog.md.md | 0 ...0160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md | 0 ...w to Enable Multiple PHP-FPM Instances with Nginx or Apache.md | 0 .../20160218 7 Steps to Start Your Linux SysAdmin Career.md | 0 .../20160218 9 Key Trends in Hybrid Cloud Computing.md | 0 published/{ => 201606}/20160218 A Linux-powered microwave oven.md | 0 .../20160218 Getting to Know Linux File Permissions.md | 0 .../20160218 Linux Systems Patched for Critical glibc Flaw.md | 0 .../20160218 Top 4 open source issue tracking tools.md | 0 ...eils a tiny 64-bit ARM processor for the Internet of Things.md | 0 ...a Center Admin Suite Should Bring Order to Containerization.md | 0 .../20160228 Two Outstanding All-in-One Linux Servers.md | 0 .../20160304 Image processing at NASA with open source tools.md | 0 ...0 65% of companies are contributing to open source projects.md | 0 ...13 How to Set Up 2-Factor Authentication for Login and sudo.md | 0 .../20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md | 0 .../20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md | 0 .../20160518 Android Use Apps Even Without Installing Them.md | 0 .../20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md | 0 .../20160523 Driving cars into the future with Linux.md | 0 .../20160524 Test Fedora 24 Beta in an OpenStack cloud.md | 0 .../20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md | 0 ...29 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md | 0 published/{ => 201606}/20160601 Apps to Snaps.md | 0 published/{ => 201606}/20160601 scp command in Linux.md | 0 .../20160605 How to Add Cron 
Jobs in Linux and Unix.md | 0 ...lace Passwords With A New Trust-Based Authentication Method.md | 0 ...A Smart Way to Lock User Virtual Console or Terminal in Linux.md | 0 .../{ => 201606}/20160621 Building Serverless App with Docker.md | 0 30 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201606}/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md (100%) rename published/{ => 201606}/20151210 Getting started with Docker by Dockerizing this Blog.md.md (100%) rename published/{ => 201606}/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md (100%) rename published/{ => 201606}/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md (100%) rename published/{ => 201606}/20160218 7 Steps to Start Your Linux SysAdmin Career.md (100%) rename published/{ => 201606}/20160218 9 Key Trends in Hybrid Cloud Computing.md (100%) rename published/{ => 201606}/20160218 A Linux-powered microwave oven.md (100%) rename published/{ => 201606}/20160218 Getting to Know Linux File Permissions.md (100%) rename published/{ => 201606}/20160218 Linux Systems Patched for Critical glibc Flaw.md (100%) rename published/{ => 201606}/20160218 Top 4 open source issue tracking tools.md (100%) rename published/{ => 201606}/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md (100%) rename published/{ => 201606}/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md (100%) rename published/{ => 201606}/20160228 Two Outstanding All-in-One Linux Servers.md (100%) rename published/{ => 201606}/20160304 Image processing at NASA with open source tools.md (100%) rename published/{ => 201606}/20160510 65% of companies are contributing to open source projects.md (100%) rename published/{ => 201606}/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md (100%) rename published/{ => 201606}/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md (100%) rename published/{ 
=> 201606}/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md (100%) rename published/{ => 201606}/20160518 Android Use Apps Even Without Installing Them.md (100%) rename published/{ => 201606}/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md (100%) rename published/{ => 201606}/20160523 Driving cars into the future with Linux.md (100%) rename published/{ => 201606}/20160524 Test Fedora 24 Beta in an OpenStack cloud.md (100%) rename published/{ => 201606}/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md (100%) rename published/{ => 201606}/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md (100%) rename published/{ => 201606}/20160601 Apps to Snaps.md (100%) rename published/{ => 201606}/20160601 scp command in Linux.md (100%) rename published/{ => 201606}/20160605 How to Add Cron Jobs in Linux and Unix.md (100%) rename published/{ => 201606}/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md (100%) rename published/{ => 201606}/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md (100%) rename published/{ => 201606}/20160621 Building Serverless App with Docker.md (100%) diff --git a/published/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md b/published/201606/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md similarity index 100% rename from published/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md rename to published/201606/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md diff --git a/published/20151210 Getting started with Docker by Dockerizing this Blog.md.md b/published/201606/20151210 Getting started with Docker by Dockerizing this Blog.md.md similarity index 100% rename from published/20151210 Getting started with Docker by Dockerizing this Blog.md.md rename to published/201606/20151210 Getting started with Docker by Dockerizing this Blog.md.md diff --git 
a/published/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md b/published/201606/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md similarity index 100% rename from published/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md rename to published/201606/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md diff --git a/published/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md b/published/201606/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md similarity index 100% rename from published/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md rename to published/201606/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md diff --git a/published/20160218 7 Steps to Start Your Linux SysAdmin Career.md b/published/201606/20160218 7 Steps to Start Your Linux SysAdmin Career.md similarity index 100% rename from published/20160218 7 Steps to Start Your Linux SysAdmin Career.md rename to published/201606/20160218 7 Steps to Start Your Linux SysAdmin Career.md diff --git a/published/20160218 9 Key Trends in Hybrid Cloud Computing.md b/published/201606/20160218 9 Key Trends in Hybrid Cloud Computing.md similarity index 100% rename from published/20160218 9 Key Trends in Hybrid Cloud Computing.md rename to published/201606/20160218 9 Key Trends in Hybrid Cloud Computing.md diff --git a/published/20160218 A Linux-powered microwave oven.md b/published/201606/20160218 A Linux-powered microwave oven.md similarity index 100% rename from published/20160218 A Linux-powered microwave oven.md rename to published/201606/20160218 A Linux-powered microwave oven.md diff --git a/published/20160218 Getting to Know Linux File Permissions.md b/published/201606/20160218 Getting to Know Linux File Permissions.md similarity index 100% rename from published/20160218 Getting to Know Linux File Permissions.md rename to published/201606/20160218 Getting to Know 
Linux File Permissions.md diff --git a/published/20160218 Linux Systems Patched for Critical glibc Flaw.md b/published/201606/20160218 Linux Systems Patched for Critical glibc Flaw.md similarity index 100% rename from published/20160218 Linux Systems Patched for Critical glibc Flaw.md rename to published/201606/20160218 Linux Systems Patched for Critical glibc Flaw.md diff --git a/published/20160218 Top 4 open source issue tracking tools.md b/published/201606/20160218 Top 4 open source issue tracking tools.md similarity index 100% rename from published/20160218 Top 4 open source issue tracking tools.md rename to published/201606/20160218 Top 4 open source issue tracking tools.md diff --git a/published/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md b/published/201606/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md similarity index 100% rename from published/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md rename to published/201606/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md diff --git a/published/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md b/published/201606/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md similarity index 100% rename from published/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md rename to published/201606/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md diff --git a/published/20160228 Two Outstanding All-in-One Linux Servers.md b/published/201606/20160228 Two Outstanding All-in-One Linux Servers.md similarity index 100% rename from published/20160228 Two Outstanding All-in-One Linux Servers.md rename to published/201606/20160228 Two Outstanding All-in-One Linux Servers.md diff --git a/published/20160304 Image processing at NASA with open source tools.md 
b/published/201606/20160304 Image processing at NASA with open source tools.md similarity index 100% rename from published/20160304 Image processing at NASA with open source tools.md rename to published/201606/20160304 Image processing at NASA with open source tools.md diff --git a/published/20160510 65% of companies are contributing to open source projects.md b/published/201606/20160510 65% of companies are contributing to open source projects.md similarity index 100% rename from published/20160510 65% of companies are contributing to open source projects.md rename to published/201606/20160510 65% of companies are contributing to open source projects.md diff --git a/published/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md b/published/201606/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md similarity index 100% rename from published/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md rename to published/201606/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md diff --git a/published/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md b/published/201606/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md similarity index 100% rename from published/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md rename to published/201606/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md diff --git a/published/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md b/published/201606/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md similarity index 100% rename from published/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md rename to published/201606/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md diff --git a/published/20160518 Android Use Apps Even Without Installing Them.md b/published/201606/20160518 Android Use Apps Even Without Installing Them.md similarity index 100% rename from published/20160518 Android Use Apps Even Without Installing 
Them.md rename to published/201606/20160518 Android Use Apps Even Without Installing Them.md diff --git a/published/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md b/published/201606/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md similarity index 100% rename from published/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md rename to published/201606/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md diff --git a/published/20160523 Driving cars into the future with Linux.md b/published/201606/20160523 Driving cars into the future with Linux.md similarity index 100% rename from published/20160523 Driving cars into the future with Linux.md rename to published/201606/20160523 Driving cars into the future with Linux.md diff --git a/published/20160524 Test Fedora 24 Beta in an OpenStack cloud.md b/published/201606/20160524 Test Fedora 24 Beta in an OpenStack cloud.md similarity index 100% rename from published/20160524 Test Fedora 24 Beta in an OpenStack cloud.md rename to published/201606/20160524 Test Fedora 24 Beta in an OpenStack cloud.md diff --git a/published/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md b/published/201606/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md similarity index 100% rename from published/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md rename to published/201606/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md diff --git a/published/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md b/published/201606/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md similarity index 100% rename from published/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md rename to published/201606/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md diff --git a/published/20160601 Apps to Snaps.md b/published/201606/20160601 Apps to Snaps.md similarity index 100% rename from published/20160601 Apps to Snaps.md rename to 
published/201606/20160601 Apps to Snaps.md diff --git a/published/20160601 scp command in Linux.md b/published/201606/20160601 scp command in Linux.md similarity index 100% rename from published/20160601 scp command in Linux.md rename to published/201606/20160601 scp command in Linux.md diff --git a/published/20160605 How to Add Cron Jobs in Linux and Unix.md b/published/201606/20160605 How to Add Cron Jobs in Linux and Unix.md similarity index 100% rename from published/20160605 How to Add Cron Jobs in Linux and Unix.md rename to published/201606/20160605 How to Add Cron Jobs in Linux and Unix.md diff --git a/published/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md b/published/201606/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md similarity index 100% rename from published/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md rename to published/201606/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md diff --git a/published/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md b/published/201606/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md similarity index 100% rename from published/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md rename to published/201606/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md diff --git a/published/20160621 Building Serverless App with Docker.md b/published/201606/20160621 Building Serverless App with Docker.md similarity index 100% rename from published/20160621 Building Serverless App with Docker.md rename to published/201606/20160621 Building Serverless App with Docker.md From 6ce3fa40c5b991e0733da9db77d59d730974c5f4 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 1 Jul 2016 11:06:17 +0800 Subject: [PATCH 033/471] PUB:20160531 Why Ubuntu-based Distros Are 
Leaders MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @vim-kakali 总体翻译的不错。个别地方理解有误,语句组织需调整。加油! --- ...31 Why Ubuntu-based Distros Are Leaders.md | 63 ++++++++++++++ ...31 Why Ubuntu-based Distros Are Leaders.md | 85 ------------------- 2 files changed, 63 insertions(+), 85 deletions(-) create mode 100644 published/20160531 Why Ubuntu-based Distros Are Leaders.md delete mode 100644 translated/talk/20160531 Why Ubuntu-based Distros Are Leaders.md diff --git a/published/20160531 Why Ubuntu-based Distros Are Leaders.md b/published/20160531 Why Ubuntu-based Distros Are Leaders.md new file mode 100644 index 0000000000..c909e753f7 --- /dev/null +++ b/published/20160531 Why Ubuntu-based Distros Are Leaders.md @@ -0,0 +1,63 @@ +为什么 Ubuntu 家族会占据 Linux 发行版的主导地位? +========================================= + +在过去的数年中,我体验了一些优秀的 Linux 发行版。给我印象深刻的是那些由强大的社区维护的发行版,而比起强大的社区,发行版的流行程度更让我印象深刻。流行的 Linux 发行版往往能吸引新用户,这通常是由于其流行而使得使用该发行版会更加容易。并非绝对如此,但一般来说是这样的。 + +说到这里,首先映入我脑海的一个发行版是 [Ubuntu][1]。其基于健壮的 [Debian][2] 发行版构建,它不仅成为了一个非常受欢迎的 Linux 发行版,而且它也衍生出了不可计数的其他分支,比如 Linux Mint 就是一个例子。在本文中,我会探讨我认为 Ubuntu 会赢得 Linux 发行版之战的原因,以及它是怎样影响到了整个 Linux 桌面领域。 + +### Ubuntu 易于使用 + +在我几年前首次尝试使用 Ubuntu 前,我更喜欢使用 KDE 桌面。在那个时期,我接触的大多是这种 KDE 桌面环境。主要原因还是 KDE 是大多数新手容易入手的 Linux 发行版中最受欢迎的。这些新手友好的发行版有 Knoppix、Simply Mepis、Xandros、Linspire 以及其它的发行版等等,这些发行版都推荐他们的用户去使用广受欢迎的 KDE。 + +现在 KDE 能满足我的需求,我也没有什么理由去折腾其他的桌面环境。有一天我的 Debian 安装失败了(由于我个人的操作不当),我决定尝试开发代号为 Dapper Drake 的 Ubuntu 版本(LCTT 译注:Ubuntu 6.06 - Dapper Drake,发布日期:2006 年 6 月 1 日),每个人都对它赞不绝口。那个时候,我对于它的印象仅限于屏幕截图,但是我想试试也挺有趣的。 + +Ubuntu Dapper Drake 给我的最大的印象是它让我很清楚地知道每个东西都在哪儿。记住,我是来自于 KDE 世界的用户,在 KDE 上要想改变菜单的设置就有 15 种方法!而 Ubuntu 上的 GNOME 实现则极具极简主义。 + +时间来到 2016 年,最新的版本号是 16.04:我们有了好几种 Ubuntu 特色版本,也有一大堆基于 Ubuntu 的发行版。所有 Ubuntu 特色版和衍生发行版共同具有的核心理念就是为易用而设计。发行版想要增大用户基数时,这就是最重要的原因。 + +### Ubuntu LTS + +过去,我几乎一直坚持使用 LTS(Long Term
Support)发行版作为我的主要桌面系统。10 月份的发行版很适合我用来测试硬盘驱动器,甚至可以装在一台老旧的手提电脑上。我这样做的原因很简单——我没有兴趣在一个正式使用的电脑上折腾短期发行版。我是个很忙的家伙,我觉得这样会浪费我的时间。 + +对于我来说,我认为 Ubuntu 提供 LTS 发行版是 Ubuntu 能够变得流行的最大的原因。这样说吧——给普罗大众提供一个桌面 Linux 发行版,这个发行版能够得到长期的有效支持就是它的优势。事实上,不只 Ubuntu 是这样,其他的分支在这一点上也做得很好。长期支持策略以及对新手的友好环境,我认为这就为 Ubuntu 的普及带来了莫大的好处。 + +### Ubuntu Snap 软件包 + +以前,用户会夸赞可以在他们的系统上使用 PPA(personal package archive,个人软件包档案)获得新的软件。不好的是,这种技术也有缺点。当它用在各种软件名称时,PPA 经常会找不到,这种情况很常见。 + +现在有了 [Snap 软件包][3]。当然这不是一个全新的概念,过去已经进行了类似的尝试。用户可以在一个长期支持版本上运行最新的软件,而不必去使用最新的 Ubuntu 发行版。虽然我认为目前还处于 Snap 软件包的早期,但是我很期待可以在一个稳定的发行版上运行的崭新的软件。 + +最明显的问题是,如果你要运行很多软件,那么 Snap 包实际会占用很多硬盘空间。不仅如此,大多数 Ubuntu 软件仍然需要由官方从 deb 包进行转换。第一个问题可以通过使用更大的硬盘空间得到解决,而后一个问题的解决则需要等待。 + +### Ubuntu 社区 + +首先,我承认大多数主要的 Linux 发行版都有强大的社区。然而,我坚信 Ubuntu 社区的成员是最多样化的,他们来自各行各业。例如,我们的论坛包括从苹果硬件支持到游戏等不同分类。特别是这些专业的讨论话题还非常广泛。 + +除了论坛,Ubuntu 也提供了一个很正式的社区组织。这个组织包括一个理事会、技术委员会、[本地社区团队][4]和开发者成员委员会。还有很多,但是这些都是我知道的社区组织部分。 + +我们还有一个 [Ubuntu 问答][5]版块。我认为,这种功能可以代替人们从论坛寻求帮助的方式,我发现在这个网站你得到有用信息的可能性更大。不仅如此,那些提供的解决方案中被选出的最精准的答案也会被写入到官方文档中。 + +### Ubuntu 的未来 + +我认为 Ubuntu 的 Unity 界面(LCTT 译注:Unity 是 Canonical 公司为 Ubuntu 操作系统的 GNOME 桌面环境开发的图形化界面)在提升桌面占有率上少有作为。我能理解其中的缘由,现在它主要做一些诸如可以使开发团队的工作更轻松的事情。但是最终,我还是认为 Unity 为 Ubuntu MATE 和 Linux Mint 的普及铺平了道路。 + +我最好奇的一点是 Ubuntu 的 IRC 和邮件列表的发展(LCTT 译注:可以在 Ubuntu LoCo Teams 的 IRC Chat 上提问关于地方团队和计划的事件的问题,也可以和一些不同团队的成员进行交流)。事实是,它们都不能像 Ubuntu 问答板块那样文档化。至于邮件列表,我一直认为这对于合作是一种很痛苦的过时方法,但这仅仅是我的个人看法——其他人可能有不同的看法,也可能会认为它很好。 + +你怎么看?你认为 Ubuntu 将来会占据主要的份额吗?也许你会认为 Arch 和 Linux Mint 或者其他的发行版会在普及度上打败 Ubuntu?
既然这样,那请大声说出你最喜爱的发行版。如果这个发行版是 Ubuntu 衍生版 ,说说你为什么更喜欢它而不是 Ubuntu 本身。如果不出意外,Ubuntu 会成为构建其他发行版的基础,我想很多人都是这样认为的。 + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/why-ubuntu-based-distros-are-leaders.html + +作者:[Matt Hartley][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.datamation.com/author/Matt-Hartley-3080.html +[1]: http://www.ubuntu.com/ +[2]: https://www.debian.org/ +[3]: http://www.datamation.com/open-source/ubuntu-snap-packages-the-good-the-bad-the-ugly.html +[4]: http://loco.ubuntu.com/ +[5]: http://askubuntu.com/ diff --git a/translated/talk/20160531 Why Ubuntu-based Distros Are Leaders.md b/translated/talk/20160531 Why Ubuntu-based Distros Are Leaders.md deleted file mode 100644 index 9d9ce0f418..0000000000 --- a/translated/talk/20160531 Why Ubuntu-based Distros Are Leaders.md +++ /dev/null @@ -1,85 +0,0 @@ -为什么 Ubuntu 家族会占据 Linux 发行版的主导地位? 
-========================================= - -在过去的数年中,我已经尝试了大量的优秀 Linux 发行版。我印象最深刻的是那些被强大的社区维护的发行版。但是这样的发行版却比他们s所属的社区更受人欢迎。流行的 Linux 发行版吸引着更多的人,通常由于这样的特点使得使用该发行版更加容易。这很明显毫无关系,但一般认为这种说法是正确的。 - - -我想到的一个发行版 [Ubuntu][1]。它属于健壮的 [Debian][2]分支,Ubuntu 不可思议的成为了受欢迎的 Linux 发行版,而且它也衍生出了其他的版本,比如 Linux Mint。在本文中,我会探讨我坚信 Ubuntu 会赢得 Linux 发行版战争的原因,以及它在整个 Linux 桌面领域有着怎样的影响力。 - - -### Ubuntu容易使用 - - -多年前我第一次尝试使用Ubuntu,在这之前我更喜欢使用 KED 桌面。在那个时期,我接触的大多是这种 KDE 桌面环境。主要原因还是 KDE 是大多数新手友好的 Linux 发行版中最受欢迎的。新手友好的发行版有 Knoppix,Simply Mepis, Xandros, Linspire等,另外一些发行版和这些发行版都指出他们的用户趋向于使用 KDE。 - - - -现在KDE能满足我的需求,也没有什么理由去折腾其他的桌面环境了。有一天我的 Debian 安装失败了(由于我个人的操作不当),我决定尝试开发代号为「整洁的公鸭(Ubuntu Dapper Drake)」的 Ubuntu 版本【译者注:ubuntu 6.06 - Dapper Drake(整洁的公鸭),发布日期:2006年6月1日】。那个时候,我对于它的印象比一个屏幕截图还要少,但是我认为它很有趣并且毫无顾忌的使用它。 - - - -Ubuntu Dapper Drake 给我的最大的印象是它的操作很简单。记住,我是来自于 KDE 世界的用户,在 KDE 上要想改变菜单的设置就有15钟方法。Ubuntu 图形界面的安装启动极具极简主义。 - -时间来到2016年,最新的版本号是16.04:我们有多种可用的 Ubuntu 衍生版本,许多的都是基于 Ubuntu 的。所有的 Ubuntu 风格和公用发行版的核心都被设计的容易使用。并且发行版想要增大用户基数的时候,这就是最重要的原因。 - - -### Ubuntu LTS - -过去,我几乎一直坚持使用 LTS(Long Term Support)发行版作为我的主要桌面系统。10月份的发行版很适合我测试硬盘驱动器,甚至把它用在一个老旧的手提电脑上。我这样做的原因很简单——我没有兴趣在一个作为实验品的电脑上折腾短期发行版。我是个很忙的家伙,我觉得这样会浪费我的时间。 - - -对于我来说,我认为 Ubuntu 提供 LTS 发行版是 Ubuntu 能够变得流行的原因。这样说吧———提供一个大众的桌面 Linux 发行版,这个发行版能够得到长期的充分支持就是它的优势。事实上,Ubuntu 的优势不只这一点,其他的分支在这一点上也做的很好。长期支持版带有一个对新手的友好环境的策略,我认为这就为 Ubuntu 的普及带来了莫大的好处。 - - -### Ubuntu Snap 包 - - -以前,用户在他们的系统上使用很多 PPA(personal package archive个人软件包档案),他们总会抱怨它获得新的软件名称的能力。不好的是,这种技术也有缺点。它工作的时候带有任意的软件名称,而 PPA 却没有发现,这种情况很常见。 - - -现在有了[Snap 包][3] 。当然这不是一个全新的概念,过去已经进行了类似的尝试。用户不必要在最新的 Ubuntu 发行版上运行最新的软件,我认为这才是 Snap 将要长期提供给 Ubuntu 用户的东西。然而我仍然认为我们将会看到 Snap 淘汰的的那一天,我很期待看到一个在稳定的发行版上运行的优秀软件。 - - - -如果你要运行很多软件,那么 Snap 包实际使用的硬盘空间很明显存在问题。不仅如此,大多数 Ubuntu 软件也是通过由官方开发的 deb 包进行管理的。当后者需要花费一些时间的时候,这个问题可以通过 Snap 使用更大的硬盘驱动器空间得到解决。 - - - -### Ubuntu 社区 - -首先,我承认大多数主要的 Linux 发行版都有强大的社区。然而,我坚信 Ubuntu 社区的成员是最多样化的,他们来自各行各业。例如,我们有一个论坛来分类不同的苹果硬件对于游戏的支持程度。这些大量的专业讨论特别广泛。 - - -除过论坛,Ubuntu 
也提供了一个很正式的社区组织。这个组织包括一个委员会,技术板块,[各地的团队LoCo teams][4](Ubuntu Local Community Teams)和开发人员板块。还有很多,但是这些都是我知道的社区组织部分。 - - -我们还有一个[Ubuntu 问答][5]板块。我认为,这种特色可以代替人们从论坛寻求帮助的方式,我发现在这个网站你得到有用信息的可能行更大。不仅如此,那些提供的解决方案中被选出的最精准的答案也会被写入到官方文档中。 - - -### Ubuntu 的未来 - - -我认为 Ubuntu 的 Unity 接口【译者注:Unity 是 Canonical 公司为 Ubuntu 操作系统的 GNOME 桌面环境开发的图形化 shell】在增加桌面舒适性上少有作为。我能理解其中的缘由,现在它主要做一些诸如可以使开发团队的工作更轻松的事情。但是最终,我还是希望 Unity 可以为 Ubuntu MATE 和 Linux Mint 的普及铺平道路。 - - -我最好奇的一点是 Ubuntu's IRC(Internet Relay Chat) 和邮件列表的发展【译者注:可以在 Ubuntu LoCo Teams IRC Chat上提问关于地方团队和计划的事件的问题,也可以和一些不同团队的成员进行交流】。事实是,他们都不能像 Ubuntu 问答板块那样为它们自己增添一些好的文档。至于邮件列表,我一直认为这对于合作是一种很痛苦的过时方法,但这仅仅是我的个人看法——其他人可能有不同的看法,也可能会认为它很好。 - -你说什么?你认为 Ubuntu 将来会剩下一点主要的使用者?也许你相信 Arch 和 Linux Mint 或者其他的发行版会在普及度上打败 Ubuntu 。 既然这样,那请大声说出你最喜爱的发行版。如果这个发行版是 Ubuntu 衍生版 ,说说你为什么更喜欢它而不是 Ubuntu 本身。如果不出意外,Ubuntu 会成为构建其他发行版的基础,我想很多人都是这样认为的。 - - --------------------------------------------------------------------------------- - -via: http://www.datamation.com/open-source/why-ubuntu-based-distros-are-leaders.html - -作者:[Matt Hartley][a] -译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.datamation.com/author/Matt-Hartley-3080.html -[1]: http://www.ubuntu.com/ -[2]: https://www.debian.org/ -[3]: http://www.datamation.com/open-source/ubuntu-snap-packages-the-good-the-bad-the-ugly.html -[4]: http://loco.ubuntu.com/ -[5]: http://askubuntu.com/ From f4f3e69fde76733524282f676ace4df6a6c7f7e6 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Fri, 1 Jul 2016 11:21:25 +0800 Subject: [PATCH 034/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...w to Read Awk Input from STDIN in Linux.md | 73 ------------------ ...w to Read Awk Input from STDIN in Linux.md | 76 +++++++++++++++++++ 2 files changed, 76 
insertions(+), 73 deletions(-) delete mode 100644 sources/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md create mode 100644 translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md diff --git a/sources/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md b/sources/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md deleted file mode 100644 index 157716481a..0000000000 --- a/sources/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md +++ /dev/null @@ -1,73 +0,0 @@ -vim-kakali translating - - -How to Read Awk Input from STDIN in Linux -============================================ - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Read-Awk-Input-from-STDIN.png) - -In the previous parts of the Awk tool series, we looked at reading input mostly from a file(s), but what if you want to read input from STDIN. -In this Part 7 of Awk series, we shall look at few examples where you can filter the output of other commands instead of reading input from a file. 
- -We shall start with the [dir utility][1] that works similar to [ls command][2], in the first example below, we use the output of `dir -l` command as input for Awk to print owner’s username, groupname and the files he/she owns in the current directory: - -``` -# dir -l | awk '{print $3, $4, $9;}' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-By-User-in-Directory.png) ->List Files Owned By User in Directory - -Take a look at another example where we [employ awk expressions][3], here, we want to print files owned by the root user by using an expression to filter strings as in the awk command below: - -``` -# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} ' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-by-Root-User.png) ->List Files Owned by Root User - -The command above includes the `(==)` comparison operator to help us filter out files in the current directory which are owned by the root user. This is achieved using the expression `$3==”root”`. - -Let us look at another example of where we use a [awk comparison operator][4] to match a certain string. - -Here, we have used the [cat utility][5] to view the contents of a file named tecmint_deals.txt and we want to view the deals of type Tech only, so we shall run the following commands: - -``` -# cat tecmint_deals.txt -# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}' -# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-Comparison-Operator-to-Match-String.png) ->Use Awk Comparison Operator to Match String - -In the example above, we have used the value `~ /pattern/` comparison operator, but there are two commands to try and bring out something very important. - -When you run the command with pattern tech nothing is printed out because there is no deal of that type, but with Tech, you get deals of type Tech. 
- -So always be careful when using this comparison operator, it is case sensitive as we have seen above. - -You can always use the output of another command instead as input for awk instead of reading input from a file, this is very simple as we have looked at in the examples above. - -Hope the examples were clear enough for you to understand, if you have any concerns, you can express them through the comment section below and remember to check the next part of the series where we shall look at awk features such as variables, numeric expressions and assignment operators. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/read-awk-input-from-stdin-in-linux/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/linux-dir-command-usage-with-examples/ -[2]: http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ -[3]: http://www.tecmint.com/combine-multiple-expressions-in-awk -[4]: http://www.tecmint.com/comparison-operators-in-awk -[5]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ - - - diff --git a/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md b/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md new file mode 100644 index 0000000000..0d0d499d00 --- /dev/null +++ b/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md @@ -0,0 +1,76 @@ + + +在 Linux 上怎么读取标准输入(STDIN)作为 Awk 的输入 +============================================ + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Read-Awk-Input-from-STDIN.png) + + +在 Awk 工具系列的前几节中,我们看到大多数操作都是从一个或多个文件读取输入,但如果你想把标准输入(STDIN)作为 Awk 的输入该怎么办呢? +在 Awk 系列的第 7 节中,我们将看几个这样的例子:筛选其他命令的输出来作为 awk 的输入,而不是从文件中读取输入。
+ + +我们先从 [dir 工具][1] 开始,dir 命令和 [ls 命令][2] 相似。在下面的第一个例子中,我们使用 `dir -l` 命令的输出作为 Awk 命令的输入,这样就可以打印出文件拥有者的用户名、所属组的组名,以及他/她在当前路径下拥有的文件: +``` +# dir -l | awk '{print $3, $4, $9;}' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-By-User-in-Directory.png) +>列出当前路径下用户拥有的文件 + + +再看一个[使用 awk 表达式][3]的例子。在这里,我们想在 awk 命令里使用一个表达式来筛选字符串,以此打印出属于 root 用户的文件。命令如下: +``` +# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} ' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-by-Root-User.png) +>列出 root 用户的文件 + + +上面的命令使用了 `(==)` 比较运算符来帮助我们筛选出当前路径下属于 root 用户的文件,这是通过 `$3=="root"` 表达式实现的。 + +让我们再看一个例子,使用 [awk 比较运算符][4] 来匹配一个特定的字符串。 + + +这里,我们用 [cat 工具][5] 查看文件 tecmint_deals.txt 的内容,并且我们只想查看 Tech 类型的交易,所以我们会运行下列命令: +``` +# cat tecmint_deals.txt +# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}' +# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-Comparison-Operator-to-Match-String.png) +>用 Awk 比较运算符匹配字符串 + + +在上面的例子中,我们使用了 `~ /pattern/` 这种形式的比较运算符,不过这两条命令也向我们展示了很重要的一点。 + +当你运行带有 tech 模式的命令时,终端没有任何输出,因为文件中并没有 tech 这种类型的交易;而运行带有 Tech 模式的命令时,你会得到 Tech 类型的交易输出。 + +所以在使用这种比较运算符的时候要时刻小心,正如我们在上面看到的那样,awk 对大小写很敏感。 + + +你随时可以使用另一个命令的输出作为 awk 的输入,以代替从文件中读取输入,就像我们在上面的例子中看到的那样简单。 + + +希望这些例子足够清晰,便于你理解 awk 的用法。如果你有任何问题,可以在下面的评论区提问。记得查看 awk 系列接下来的章节,我们将关注 awk 的一些功能,比如变量、数字表达式以及赋值运算符。
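LCTT 译注:针对上文提到的 awk 大小写敏感的问题,这里补充一个假设性的示例(非原文内容):借助 awk 内置的 `tolower()` 函数,可以先把字段统一转为小写再做匹配,从而实现不区分大小写的筛选。示例中用 `printf` 构造的几行数据只是为演示而假设的,并非原文 tecmint_deals.txt 文件的真实内容:

```shell
# 用 printf 构造几行与原文类似的示例数据(假设的数据),
# 第 4 个字段分别为 Tech、Home、tech
printf '1 Apple iPad Tech 2500\n2 Sofa Couch Home 3000\n3 Dell Laptop tech 1800\n' |
awk 'tolower($4) ~ /tech/ {print $0}'
```

这样无论交易类型写作 Tech 还是 tech,上面的命令都能匹配到(第 1、3 行会被输出)。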
+-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/read-awk-input-from-stdin-in-linux/ + +作者:[Aaron Kili][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/linux-dir-command-usage-with-examples/ +[2]: http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ +[3]: http://www.tecmint.com/combine-multiple-expressions-in-awk +[4]: http://www.tecmint.com/comparison-operators-in-awk +[5]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ + + + From b4bc04b6caa15c7a921d018dfb498698a30c1c9e Mon Sep 17 00:00:00 2001 From: Markgolzh Date: Fri, 1 Jul 2016 12:41:59 +0800 Subject: [PATCH 035/471] =?UTF-8?q?=E6=AD=A3=E5=9C=A8=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译中 --- .../tech/20160621 Flatpak brings standalone apps to Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160621 Flatpak brings standalone apps to Linux.md b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md index 5441627710..c2c1b51e7b 100644 --- a/sources/tech/20160621 Flatpak brings standalone apps to Linux.md +++ b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md @@ -1,3 +1,4 @@ +翻译中:by zky001 Flatpak brings standalone apps to Linux === @@ -19,7 +20,7 @@ If you are an application developer, the Flatpak website also contains some grea via: https://fedoramagazine.org/introducing-flatpak/ 作者:[Ryan Lerch][a] -译者:[译者ID](https://github.com/译者ID) +译者:[zky001](https://github.com/zky001) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3b7b5a5758b05e2e64f5b87cce775f12b82bd148 Mon Sep 17 00:00:00 2001 From: 
wxy Date: Sat, 2 Jul 2016 14:15:18 +0800 Subject: [PATCH 036/471] PUB:20160303 Top 5 open source command shells for Linux @mr-ping --- ... 5 open source command shells for Linux.md | 86 +++++++++++++++++++ ... 5 open source command shells for Linux.md | 86 ------------------- 2 files changed, 86 insertions(+), 86 deletions(-) create mode 100644 published/20160303 Top 5 open source command shells for Linux.md delete mode 100644 translated/tech/20160303 Top 5 open source command shells for Linux.md diff --git a/published/20160303 Top 5 open source command shells for Linux.md b/published/20160303 Top 5 open source command shells for Linux.md new file mode 100644 index 0000000000..ec6aa3517a --- /dev/null +++ b/published/20160303 Top 5 open source command shells for Linux.md @@ -0,0 +1,86 @@ +Linux 下五个顶级的开源命令行 Shell +=============================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa) + +这个世界上有两种 Linux 用户:敢于冒险的和态度谨慎的。 + +其中一类用户总是本能的去尝试任何能够戳中其痛点的新选择。他们尝试过不计其数的窗口管理器、系统发行版和几乎所有能找到的桌面插件。 + +另一类用户找到他们喜欢的东西后,会一直使用下去。他们往往喜欢所使用的系统发行版的默认配置。最先熟练掌握的文本编辑器会成为他们最钟爱的那一个。 + +作为一个使用桌面版和服务器版十五年之久的 Linux 用户,比起第一类来,我无疑属于第二类用户。我更倾向于使用现成的东西,如此一来,很多时候我就可以通过文档和示例方便地找到我所需要的使用案例。如果我决定选择使用非费标准的东西,这个切换过程一定会基于细致的研究,并且前提是来自好基友的大力推荐。 + +但这并不意味着我不喜欢尝试新事物并且查漏补失。所以最近一段时间,在我不假思索的使用了 bash shell 多年之后,决定尝试一下另外四个 shell 工具:ksh、tcsh、zsh 和 fish。这四个 shell 都可以通过我所用的 Fedora 系统的默认库轻松安装,并且他们可能已经内置在你所使用的系统发行版当中了。 + +这里对它们每个选择都稍作介绍,并且阐述下它适合做为你的下一个 Linux 命令行解释器的原因所在。 + +### bash + +首先,我们回顾一下最为熟悉的一个。 [GNU Bash][1],又名 Bourne Again Shell,它是我这些年使用过的众多 Linux 发行版的默认选择。它最初发布于 1989 年,并且轻松成长为 Linux 世界中使用最广泛的 shell,甚至常见于其他一些类 Unix 系统当中。 + +Bash 是一个广受赞誉的 shell,当你通过互联网寻找各种事情解决方法所需的文档时,总能够无一例外的发现这些文档都默认你使用的是 bash shell。但 bash 也有一些缺点存在,如果你写过 Bash 脚本就会发现我们写的代码总是得比真正所需要的多那么几行。这并不是说有什么事情是它做不到的,而是说它读写起来并不总是那么直观,至少是不够优雅。 + +如上所述,基于其巨大的安装量,并且考虑到各类专业和非专业系统管理员已经适应了它的使用方式和独特之处,至少在将来一段时间内,bash 或许会一直存在。 + +### ksh + 
+[KornShell][4],或许你对这个名字并不熟悉,但是你一定知道它的调用命令 ksh。这个替代性的 shell 于 80 年代起源于贝尔实验室,由 David Korn 所写。虽然最初是一个专有软件,但是后期版本是在 [Eclipse Public 许可][5]下发布的。
+
+ksh 的拥趸们列出了他们觉得其优越的诸多理由,包括更好的循环语法,清晰的管道退出代码,处理重复命令和关联数组的更简单的方式。它能够模拟 vi 和 emacs 的许多行为,所以如果你是一个重度文本编辑器患者,它值得你一试。最后,我发现它虽然在高级脚本方面拥有不同的体验,但在基本输入方面与 bash 如出一辙。
+
+### tcsh
+
+[tcsh][6] 衍生于 csh(Berkely Unix C shell),并且可以追溯到早期的 Unix 和计算机时代。
+
+tcsh 最大的卖点在于它的脚本语言,对于熟悉 C 语言编程的人来说,看起来会非常亲切。tcsh 的脚本编写有人喜欢,有人憎恶。但是它也有其他的技术特色,包括可以为 aliases 添加参数,各种可能迎合你偏好的默认行为,包括 tab 自动完成和将 tab 完成的工作记录下来以备后查。
+
+tcsh 以 [BSD 许可][7]发布。
+
+### zsh
+
+[zsh][8] 是另外一个与 bash 和 ksh 有着相似之处的 shell。诞生于 90 年代初,zsh 支持众多有用的新技术,包括拼写纠正、主题化、可命名的目录快捷键,在多个终端中共享同一个命令历史信息和各种相对于原来的 bash 的轻微调整。
+
+虽然部分需要遵照 GPL 许可,但 zsh 的代码和二进制文件可以在一个类似 MIT 许可证的许可下进行分发; 你可以在 [actual license][9] 中查看细节。
+
+### fish
+
+之前我访问了 [fish][10] 的主页,当看到 “好了,这是一个为 90 后而生的命令行 shell” 这条略带调侃的介绍时(fish 完成于 2005 年),我就意识到我会爱上这个交互友好的 shell 的。
+
+fish 的作者提供了若干切换过来的理由,这些理由有点小幽默并且能戳中笑点,不过还真是那么回事。这些特性包括自动建议(“注意, Netscape Navigator 4.0 来了”,LCTT 译注:NN4 是一个重要版本。),支持“惊人”的 256 色 VGA 调色,不过也有真正有用的特性,包括根据你机器上的 man 页面自动补全命令,清晰的脚本编写和基于 web 界面的配置方式。
+
+fish 的许可主要基于 GPLv2,但有些部分是在其他许可下的。你可以查看资源库来了解[完整信息][11]。
+
+***
+
+如果你想要寻找关于每个选择确切不同之处的详尽纲要,[这个网站][12]应该可以帮到你。
+
+我的立场到底是怎样的呢?好吧,最终我应该还是会重新投入 bash 的怀抱,因为对于大多数时间都在使用命令行交互的人来说,切换过程对于编写高级的脚本能带来的好处微乎其微,并且我已经习惯于使用 bash 了。
+
+但是我很庆幸做出了敞开大门并且尝试新选择的决定。我知道门外还有许许多多其他的东西。你尝试过哪些 shell,更中意哪一个?请在评论里告诉我们。
+
+---
+
+via: https://opensource.com/business/16/3/top-linux-shells
+
+作者:[Jason Baker][a]
+译者:[mr-ping](https://github.com/mr-ping)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jason-baker
+
+[1]: https://www.gnu.org/software/bash/
+[2]: http://mywiki.wooledge.org/BashPitfalls
+[3]: http://www.gnu.org/licenses/gpl.html
+[4]: http://www.kornshell.org/
+[5]: https://www.eclipse.org/legal/epl-v10.html
+[6]: http://www.tcsh.org/Welcome
+[7]: 
https://en.wikipedia.org/wiki/BSD_licenses +[8]: http://www.zsh.org/ +[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE +[10]: https://fishshell.com/ +[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING +[12]: http://hyperpolyglot.org/unix-shells + diff --git a/translated/tech/20160303 Top 5 open source command shells for Linux.md b/translated/tech/20160303 Top 5 open source command shells for Linux.md deleted file mode 100644 index d40eb58b54..0000000000 --- a/translated/tech/20160303 Top 5 open source command shells for Linux.md +++ /dev/null @@ -1,86 +0,0 @@ -最牛的五个Linux开源command shell -=============================================== - -关键字: shell , Linux , bash , zsh , fish , ksh , tcsh , license - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa) - -这个世界上有两种Linux用户:敢于冒险的和态度谨慎的。 - -其中一类用户总是本能的去尝试任何能够戳中其痛点的新选择。他们尝试过不计其数的窗口管理器、系统发行版和几乎所有能找到的桌面插件。 - -另一类用户找到他们喜欢的东西后,会一直使用下去。他们往往喜欢所使用的系统发行版的默认选项。最先熟练掌握的文本编辑器会成为他们最钟爱的那一个。 - -作为一个使用桌面版和服务器版十五年之久的Linux用户,比起第一类来,我无疑属于第二类用户。我更倾向于使用现成的东西,如此一来,很多时候我就可以通过文档和示例方便地找到我所需要的使用案例。如果我决定选择使用非费标准的东西,这个切换过程一定会基于细致的研究,并且前提是来自挚友的大力推荐。 - -但这并不意味着我不喜欢尝试新事物并且查漏补失。所以最近一段时间,在我不假思索的使用了bash shell多年之后,决定尝试一下另外四个shell工具:ksh, tcsh, zsh, 和 fish. 
这四个shell都可以通过我所以用的Fedora系统的默认库轻松安装,并且他们可能已经内置在你所使用的系统发行版当中了。 - -这里对每个选择都稍作介绍,并且阐述下它适合做为你的下一个Linux命令行解释器的原因所在。 - -### bash - -首先,我们回顾一下最为熟悉的一个。 [GNU Bash][1],又名 Bourne Again Shell,它是我这些年使用过的众多Linux发行版的默认选择。它最初发布于1989年,并且轻松成长为Linux世界中使用最广泛的shell,甚至常见于其他一些类Unix系统当中。 - -Bash是一个广受赞誉的shell,当你通过互联网寻找各种事情解决方法所需的文档时,总能够无一例外的发现这些文档都默认你使用的是bash shell。但Bash也有一些缺点存在,如果你写过Bash脚本就会发现我们写的代码总是得比真正所需要的多那么几行。这并不是说有什么事情是它做不到的,而是说它读写起来并不总是那么直观,至少是不够优雅。 - -如上所述,基于其巨大的安装量,并且考虑到各类专业和非专业系统管理员已经适应了它的使用方式和独特之处,至少在将来一段时间内,bash或许会一直存在。 - -### ksh - -[KornShell][4],或许你对这个名字并不熟悉,但是你一定知道它的调用命令 ksh。这个替代性的shell于80年代起源于贝尔实验室,由David Korn所写。虽然最初是一个专有软件,但是后期版本是在[Eclipse Public 许可][5]下发布的。 - -ksh的拥趸们列出了他们觉得其优越的诸多理由,包括更好的循环语法,清晰的管道退出代码,更简单的方式来处理重复命令和关联数组。它能够模拟vi和emacs的许多行为,所以如果你是一个重度文本编辑器患者,它值得你一试。最后,我发现它虽然在高级脚本方面拥有不同的体验,但在基本输入方面与bash如出一辙。 - -### tcsh - -[Tcsh][6]衍生于csh(Berkely Unix C shell),并且可以追溯到早期的Unix和计算本身。 - -Tcsh最大的卖点在于它的脚本语言,对于熟悉C语言编程的人来说,看起来会非常亲切。Tcsh的脚本编写有人喜欢,有人憎恶。但是它也有其他的技术特色,包括可以为aliases添加参数,各种可能迎合你偏好的默认行为,包括tab自动完成和将tab完成的工作记录下来以备后查。 - -你可以在[BSD 许可][7]下找到tcsh。 - -### zsh - -[Zsh][8]是另外一个与bash和ksh有着相似之处的shell。产生于90年代初,zsh支持众多有用的新技术,包括拼写纠正,主题化,可命名的目录快捷键,在多个终端中分享命令历史信息和各种相对于original Bourne shell的轻微调整。 - -虽然部分需要遵照GPL许可,但zsh的代码和二进制文件可以在MIT-like许可下进行分发; 你可以在 [actual license][9] 中查看细节。 - -### fish - -之前我访问了[fish][10]的主页,当看到 “好了,这是一个为90年代而生的命令行shell” 这条略带调侃的介绍时(fish完成于2005年),我就意识到我会爱上这个交互友好的shell的。 - -Fish的作者提供了若干切换过来的理由,shell中所有的不太实用的调用都有点小幽默并且能戳中笑点。这些特性包括自动建议("Watch out, Netscape Navigator 4.0"),支持“惊人”的256色VGA调色,不过也有真正有用的特性,包括根据机器的man页面自动补全命令,清除脚本和基于web的配置。 - -Fish的许可主要基于第二版GPL,但有些部分是在其他许可下的。你可以查看资源库来了解[完整信息][11] - -*** - -如果你想要寻找关于每个选择确切不同之处的详尽纲要,[这个网站][12]应该可以帮到你。 - -我的立场到底是怎样的呢?好吧,最终我应该还是会重新投入bash的怀抱,因为对于大多数时间都在使用命令行交互的人来说,切换过程对于高级脚本能带来的好处微乎其微,并且我已经习惯于使用bash了。 - -但是我很庆幸做出了敞开大门并且尝试新选择的决定。我知道门外还有许许多多其他的东西。你尝试过哪些shell,更中意哪一个?请在评论里告诉我们。 - -本文来源: https://opensource.com/business/16/3/top-linux-shells - -作者:[Jason Baker][a] -译者:[mr-ping](https://github.com/mr-ping) 
-校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jason-baker - -[1]: https://www.gnu.org/software/bash/ -[2]: http://mywiki.wooledge.org/BashPitfalls -[3]: http://www.gnu.org/licenses/gpl.html -[4]: http://www.kornshell.org/ -[5]: https://www.eclipse.org/legal/epl-v10.html -[6]: http://www.tcsh.org/Welcome -[7]: https://en.wikipedia.org/wiki/BSD_licenses -[8]: http://www.zsh.org/ -[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE -[10]: https://fishshell.com/ -[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING -[12]: http://hyperpolyglot.org/unix-shells - From 3ceeaa9bcff79f20da48d20ca1605b07c7b47918 Mon Sep 17 00:00:00 2001 From: Louis Wei Date: Sat, 2 Jul 2016 04:21:55 -0500 Subject: [PATCH 037/471] translated by wi-cuckoo (#4131) * translated by wi-cuckoo * translated by wi-cuckoo --- ...o Write and Tune Shell Scripts – Part 2.md | 285 ------------------ ...o Write and Tune Shell Scripts – Part 2.md | 285 ++++++++++++++++++ 2 files changed, 285 insertions(+), 285 deletions(-) delete mode 100644 sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md create mode 100644 translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md diff --git a/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md deleted file mode 100644 index 6a65df8d49..0000000000 --- a/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md +++ /dev/null @@ -1,285 +0,0 @@ -translating by wi-cuckoo -Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2 -=============================================================================== - -In the 
previous article of this [Python series][1] we shared a brief introduction to Python, its command-line shell, and the IDLE. We also demonstrated how to perform arithmetic calculations, how to store values in variables, and how to print back those values to the screen. Finally, we explained the concepts of methods and properties in the context of Object Oriented Programming through a practical example. - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png) ->Write Linux Shell Scripts in Python Programming - -In this guide we will discuss control flow (to choose different courses of action depending on information entered by a user, the result of a calculation, or the current value of a variable) and loops (to automate repetitive tasks) and then apply what we have learned so far to write a simple shell script that will display the operating system type, the hostname, the kernel release, version, and the machine hardware name. - -This example, although basic, will help us illustrate how we can leverage Python OOP’s capabilities to write shell scripts easier than using regular bash tools. - -In other words, we want to go from - -``` -# uname -snrvm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png) ->Check Hostname of Linux - -to - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png) ->Check Linux Hostname Using Python Script - -or - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Script-to-Check-Linux-System-Information.png) ->Script to Check Linux System Information - -Looks pretty, doesn’t it? Let’s roll up our sleeves and make it happen. - -### Control flow in Python - -As we said earlier, control flow allows us to choose different outcomes depending on a given condition. Its most simple implementation in Python is an if / else clause. 
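As a quick concrete sketch of that idea before the formal syntax (the variable and its value here are made up for illustration, not taken from the article's script):

```python
age = 17  # hypothetical input value

# Exactly one of the two branches runs, depending on the condition's truth value.
if age >= 18:
    print("Access granted")
else:
    print("Access denied")
```

Running this prints `Access denied`, since `age >= 18` evaluates to false for the value chosen above.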
- -The basic syntax is: - -``` -if condition: - # action 1 -else: - # action 2 -``` - -When condition evaluates to true, the code block below will be executed (represented by `# action 1`. Otherwise, the code under else will be run. -A condition can be any statement that can evaluate to either true or false. - -For example: - -1. 1 < 3 # true - -2. firstName == “Gabriel” # true for me, false for anyone not named Gabriel - - - In the first example we compared two values to determine if one is greater than the other. - - In the second example we compared firstName (a variable) to determine if, at the current execution point, its value is identical to “Gabriel” - - The condition and the else statement must be followed by a colon (:) - - Indentation is important in Python. Lines with identical indentation are considered to be in the same code block. - -Please note that the if / else statement is only one of the many control flow tools available in Python. We reviewed it here since we will use it in our script later. You can learn more about the rest of the tools in the [official docs][2]. - -### Loops in Python - -Simply put, a loop is a sequence of instructions or statements that are executed in order as long as a condition is true, or once per item in a list. - -The most simple loop in Python is represented by the for loop iterates over the items of a given list or string beginning with the first item and ending with the last. - -Basic syntax: - -``` -for x in example: - # do this -``` - -Here example can be either a list or a string. 
If the former, the variable named x represents each item in the list; if the latter, x represents each character in the string: - -``` ->>> rockBands = [] ->>> rockBands.append("Roxette") ->>> rockBands.append("Guns N' Roses") ->>> rockBands.append("U2") ->>> for x in rockBands: - print(x) -or ->>> firstName = "Gabriel" ->>> for x in firstName: - print(x) -``` - -The output of the above examples is shown in the following image: - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png) ->Learn Loops in Python - -### Python Modules - -For obvious reasons, there must be a way to save a sequence of Python instructions and statements in a file that can be invoked when it is needed. - -That is precisely what a module is. Particularly, the os module provides an interface to the underlying operating system and allows us to perform many of the operations we usually do in a command-line prompt. - -As such, it incorporates several methods and properties that can be called as we explained in the previous article. However, we need to import (or include) it in our environment using the import keyword: - -``` ->>> import os -``` - -Let’s print the current working directory: - -``` ->>> os.getcwd() -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png) ->Learn Python Modules - -Let’s now put all of this together (along with the concepts discussed in the previous article) to write the desired script. - -### Python Script - -It is considered good practice to start a script with a statement that indicates the purpose of the script, the license terms under which it is released, and a revision history listing the changes that have been made. Although this is more of a personal preference, it adds a professional touch to our work. - -Here’s the script that produces the output we shown at the top of this article. It is heavily commented so that you can understand what’s happening. 
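Before reading the full script, it may help to see what its central call, os.uname(), gives us to work with — a minimal sketch (the caption list mirrors the one used in the script; the pairing loop here is my own shorthand, the script itself walks an explicit index variable):

```python
import os

# os.uname() returns a 5-field result:
# (sysname, nodename, release, version, machine)
systemInfo = os.uname()

headers = ["Operating system", "Hostname", "Release", "Version", "Machine"]

# Pair each caption with its corresponding field.
for caption, field in zip(headers, systemInfo):
    print(caption + ":", field)
```

Note that os.uname() is only available on Unix-like systems, which is fine for the Linux-focused script below.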
- -Take a few minutes to go through it before proceeding. Note how we use an if / else structure to determine whether the length of each field caption is greater than the value of the field itself. - -Based on the result, we use empty characters to fill in the space between a field caption and the next. Also, we use the right number of dashes as separator between the field caption and its value below. - -``` -#!/usr/bin/python3 -# Change the above line to #!/usr/bin/python if you don't have Python 3 installed - -# Script name: uname.py -# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily -# License: GPL v3 (http://www.gnu.org/licenses/gpl.html) - -# Copyright (C) 2016 Gabriel Alejandro Cánepa -# ​Facebook / Skype / G+ / Twitter / Github: gacanepa -# Email: gacanepa (at) gmail (dot) com - -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. - -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. - -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
- -# REVISION HISTORY -# DATE VERSION AUTHOR CHANGE DESCRIPTION -# ---------- ------- -------------- -# 2016-05-28 1.0 Gabriel Cánepa Initial version - -# Import the os module -import os - -# Assign the output of os.uname() to the the systemInfo variable -# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine) -# Documentation: https://docs.python.org/3.2/library/os.html#module-os -systemInfo = os.uname() - -# This is a fixed array with the desired captions in the script output -headers = ["Operating system","Hostname","Release","Version","Machine"] - -# Initial value of the index variable. It is used to define the -# index of both systemInfo and headers in each step of the iteration. -index = 0 - -# Initial value of the caption variable. -caption = "" - -# Initial value of the values variable -values = "" - -# Initial value of the separators variable -separators = "" - -# Start of the loop -for item in systemInfo: - if len(item) < len(headers[index]): - # A string containing dashes to the length of item[index] or headers[index] - # To repeat a character(s), enclose it within quotes followed - # by the star sign (*) and the desired number of times. 
- separators = separators + "-" * len(headers[index]) + " " - caption = caption + headers[index] + " " - values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " " - else: - separators = separators + "-" * len(item) + " " - caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1) - values = values + item + " " - # Increment the value of index by 1 - index = index + 1 -# End of the loop - -# Print the variable named caption converted to uppercase -print(caption.upper()) - -# Print separators -print(separators) - -# Print values (items in systemInfo) -print(values) - -# INSTRUCTIONS: -# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions: -# chmod +x uname.py -# 2) Execute it: -# ./uname.py -``` - -Once you have saved the above script to a file, give it execute permissions and run it as indicated at the bottom of the code: - -``` -# chmod +x uname.py -# ./uname.py -``` - -If you get the following error while attempting to execute the script: - -``` --bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory -``` - -It means you don’t have Python 3 installed. If that is the case, you can either install the package or replace the interpreter line (pay special attention and be very careful if you followed the steps to update the symbolic links to the Python binaries as outlined in the previous article): - -``` -#!/usr/bin/python3 -``` - -with - -``` -#!/usr/bin/python -``` - -which will cause the installed version of Python 2 to execute the script instead. - -**Note**: This script has been tested successfully both in Python 2.x and 3.x. - -Although somewhat rudimentary, you can think of this script as a Python module. This means that you can open it in the IDLE (File → Open… → Select file): - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png) ->Open Python in IDLE - -A new window will open with the contents of the file. 
Then go to Run → Run module (or just press F5). The output of the script will be shown in the original shell: - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png) ->Run Python Script - -If you want to obtain the same results with a script written purely in Bash, you would need to use a combination of [awk][3], [sed][4], and resort to complex methods to store and retrieve items in a list (not to mention the use of tr to convert lowercase letters to uppercase). - -In addition, Python provides portability in that all Linux systems ship with at least one Python version (either 2.x or 3.x, sometimes both). Should you need to rely on a shell to accomplish the same goal, you would need to write different versions of the script based on the shell. - -This goes to show that Object Oriented Programming features can become strong allies of system administrators. - -**Note**: You can find [this python script][5] (and others) in one of my GitHub repositories. - -### Summary - -In this article we have reviewed the concepts of control flow, loops / iteration, and modules in Python. We have shown how to leverage OOP methods and properties in Python to simplify otherwise complex shell scripts. - -Do you have any other ideas you would like to test? Go ahead and write your own Python scripts and let us know if you have any questions. Don’t hesitate to drop us a line using the comment form below, and we will get back to you as soon as we can. 
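As one last sketch of that Python-versus-shell point (hypothetical strings, not part of uname.py): the case conversion that needs an external tool like tr in a shell pipeline is a single string method in Python:

```python
fields = ["operating system", "hostname", "release"]

# str.upper() covers what piping through `tr '[:lower:]' '[:upper:]'` does in a shell.
caption = " ".join(field.upper() for field in fields)
print(caption)  # OPERATING SYSTEM HOSTNAME RELEASE
```

The same pattern — a method call instead of an extra process — is what makes the uname.py script above shorter than its pure-Bash equivalent.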
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/ -[2]: http://please%20note%20that%20the%20if%20/%20else%20statement%20is%20only%20one%20of%20the%20many%20control%20flow%20tools%20available%20in%20Python.%20We%20reviewed%20it%20here%20since%20we%20will%20use%20it%20in%20our%20script%20later.%20You%20can%20learn%20more%20about%20the%20rest%20of%20the%20tools%20in%20the%20official%20docs. -[3]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/ -[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[5]: https://github.com/gacanepa/scripts/blob/master/python/uname.py diff --git a/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md new file mode 100644 index 0000000000..d7b6bf6c2e --- /dev/null +++ b/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md @@ -0,0 +1,285 @@ +学习使用 python 控制流和循环来编写和执行 Shell 脚本 —— Part 2 +====================================================================================== + +在[Python series][1]之前的文章里,我们分享了 Python的一个简介,它的命令行 shell 和 IDLE(译者注:python 自带的一个IDE)。我们也演示了如何进行数值运算,如何用变量存储值,还有如何打印那些值到屏幕上。最后,我们通过一个练习示例讲解了面向对象编程中方法和属性概念。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png) +>在 
Python 编程中写 Linux Shell 脚本
+
+本篇中,我们会讨论控制流(根据用户输入的信息,计算的结果,或者一个变量的当前值选择不同的动作行为)和循环(自动重复执行任务),接着应用到我们目前所学东西中,编写一个简单的 shell 脚本,这个脚本会显示操作系统类型,主机名,内核发行版,版本号和机器硬件名字。
+
+这个例子尽管很基础,但是会帮助我们证明,比起使用一般的 bash 工具,利用 Python OOP 的能力来编写 shell 脚本会更简单些。
+
+换句话说,我们想从这里出发
+
+```
+# uname -snrvm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png)
+> 检查 Linux 的主机名
+
+到
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png)
+> 用 Python 脚本来检查 Linux 的主机名
+
+或者
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Script-to-Check-Linux-System-Information.png)
+> 用脚本检查 Linux 系统信息
+
+看着不错,不是吗?那我们就挽起袖子,开干吧。
+
+### Python 中的控制流
+
+如我们刚说那样,控制流允许我们根据一个给定的条件,选择不同的输出结果。在 Python 中最简单的实现就是一个 if/else 语句。
+
+基本语法是这样的:
+
+```
+if condition:
+    # action 1
+else:
+    # action 2
+```
+
+当 condition 求值为真(true),下面的代码块就会被执行(`# action 1`代表的部分)。否则,else 下面的代码就会运行。
+condition 可以是任何表达式,只要可以求得值为真或者假。
+
+举个例子:
+
+1. 1 < 3 # 真
+
+2. firstName == "Gabriel" # 对 firstName 为 Gabriel 的人是真,对其他不叫 Gabriel 的人为假
+
+ - 在第一个例子中,我们比较了两个值,判断 1 是否小于 3。
+ - 在第二个例子中,我们比较了 firstName(一个变量)与字符串 “Gabriel”,看在当前执行的位置,firstName 的值是否等于该字符串。
+ - 条件和 else 表达式都必须带着一个冒号(:)。
+ - 缩进在 Python 非常重要。同样缩进下的行被认为是相同的代码块。
+
+请注意,if/else 表达式只是 Python 中许多控制流工具的一个而已。我们先在这里了解一下,后面会用在我们的脚本中。你可以在[官方文档][2]中学到更多工具。
+
+### Python 中的循环
+
+简单来说,一个循环就是一组指令或者表达式序列,可以按顺序一直执行,只要一个条件为真,或者在一个列表里一次执行一个条目。
+
+Python 中最简单的循环,就是 for 循环迭代一个给定列表的元素,或者一个字符串从第一个字符开始到最后一个字符结束。
+
+基本语句:
+
+```
+for x in example:
+    # do this
+```
+
+这里的 example 可以是一个列表或者一个字符串。如果是列表,变量 x 就代表列表中每个元素;如果是字符串,x 就代表字符串中每个字符。
+
+```
+>>> rockBands = []
+>>> rockBands.append("Roxette")
+>>> rockBands.append("Guns N' Roses")
+>>> rockBands.append("U2")
+>>> for x in rockBands:
+    print(x)
+or
+>>> firstName = "Gabriel"
+>>> for x in firstName:
+    print(x)
+```
+
+上面例子的输出如下图所示:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png)
+>学习 Python 中的循环
+
+### Python 模块
+
+很明显,必须有个途径可以保存一系列的 Python 指令和表达式到文件里,然后需要的时候再取出来。 + +准确来说模块就是这样的。特别地,os 模块提供了一个接口到操作系统的底层,允许我们做许多通常在命令行下的操作。 + +没错,os 模块包含了许多方法和属性,可以用来调用,就如我们之前文章里讲解的那样。尽管如此,我们需要使用 import 关键词导入(或者叫包含)模块到开发环境里来: + +``` +>>> import os +``` + +我们来打印出当前的工作目录: + +``` +>>> os.getcwd() +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png) +>学习 Python 模块 + +现在,让我们把所有结合在一起(包括之前文章里讨论的概念),编写需要的脚本。 + +### Python 脚本 + +以一个声明开始一个脚本是个不错的想法,表明脚本的目的,发行所依据的证书,和一个修订历史列出所做的修改。尽管这主要是个人喜好,但这会让我们的工作看起来比较专业。 + +这里有个脚本,可以输出这篇文章最前面展示的那样。脚本做了大量的注释,为了让大家可以理解发生了什么。 + +在进行下一步之前,花点时间来理解它。注意,我们是如何使用一个 if/else 结构,判断每个字段标题的长度是否比字段本身的值还大。 + +基于这个结果,我们用空字符去填充一个字段标题和下一个之间的空格。同时,我们使用一定数量的短线作为字段标题与其值之间的分割符。 + +``` +#!/usr/bin/python3 +# Change the above line to #!/usr/bin/python if you don't have Python 3 installed + +# Script name: uname.py +# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily +# License: GPL v3 (http://www.gnu.org/licenses/gpl.html) + +# Copyright (C) 2016 Gabriel Alejandro Cánepa +# ​Facebook / Skype / G+ / Twitter / Github: gacanepa +# Email: gacanepa (at) gmail (dot) com + +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . 
+
+# REVISION HISTORY
+# DATE       VERSION AUTHOR         CHANGE DESCRIPTION
+# ---------- ------- --------------
+# 2016-05-28 1.0     Gabriel Cánepa Initial version
+
+# Import the os module
+import os
+
+# Assign the output of os.uname() to the systemInfo variable
+# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine)
+# Documentation: https://docs.python.org/3.2/library/os.html#module-os
+systemInfo = os.uname()
+
+# This is a fixed array with the desired captions in the script output
+headers = ["Operating system","Hostname","Release","Version","Machine"]
+
+# Initial value of the index variable. It is used to define the
+# index of both systemInfo and headers in each step of the iteration.
+index = 0
+
+# Initial value of the caption variable.
+caption = ""
+
+# Initial value of the values variable
+values = ""
+
+# Initial value of the separators variable
+separators = ""
+
+# Start of the loop
+for item in systemInfo:
+    if len(item) < len(headers[index]):
+        # A string containing dashes to the length of item[index] or headers[index]
+        # To repeat a character(s), enclose it within quotes followed
+        # by the star sign (*) and the desired number of times.
+        separators = separators + "-" * len(headers[index]) + " "
+        caption = caption + headers[index] + " "
+        values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " "
+    else:
+        separators = separators + "-" * len(item) + " "
+        caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1)
+        values = values + item + " "
+    # Increment the value of index by 1
+    index = index + 1
+# End of the loop
+
+# Print the variable named caption converted to uppercase
+print(caption.upper())
+
+# Print separators
+print(separators)
+
+# Print values (items in systemInfo)
+print(values)
+
+# INSTRUCTIONS:
+# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions:
+# chmod +x uname.py
+# 2) Execute it:
+# ./uname.py
+```
+
+如果你已经保存上面的脚本到一个文件里,给文件执行权限,并且运行它,像代码底部描述的那样:
+
+```
+# chmod +x uname.py
+# ./uname.py
+```
+
+如果试图运行脚本时,你得到了如下的错误:
+
+```
+-bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory
+```
+
+这意味着你没有安装 Python3。如果那样的话,你要么安装 Python3 的包,要么替换解释器那行(如果你跟着下面的步骤去更新 Python 执行文件的软连接,如之前文章里概述的那样,要特别注意并且非常小心):
+
+```
+#!/usr/bin/python3
+```
+
+为
+
+```
+#!/usr/bin/python
+```
+
+这样会导致使用安装好的 Python 2 版本去执行该脚本。
+
+**注意**: 该脚本在 Python 2.x 与 Python 3.x 上都测试成功过了。
+
+尽管比较粗糙,你可以认为该脚本就是一个 Python 模块。这意味着你可以在 IDLE 中打开它(File → Open… → Select file):
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png)
+>在 IDLE 中打开 Python
+
+一个包含有文件内容的新窗口就会打开。然后执行 Run → Run module(或者按 F5)。脚本的输出就会在原 Shell 里显示出来:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png)
+>执行 Python 脚本
+
+如果你想纯粹用 bash 写一个脚本,也获得同样的结果,你可能需要结合使用 [awk][3],[sed][4],并且借助复杂的方法来存储与获得列表中的元素(更不用说还要使用 tr 命令将小写字母转为大写)。
+
+另外,Python 具有可移植性:所有的 Linux 系统都集成了至少一个 Python 版本(2.x 或者 3.x,或者两者都有)。如果你必须依赖 shell 来完成同样的目标,那么你就得针对不同的 shell 编写不同版本的脚本。
+
+这说明,面向对象编程的特性可以成为系统管理员的得力助手。
+
+**注意**:你可以在我的 Github 仓库里获得 [这个 python 脚本][5](以及其他脚本)。
+
+### 总结
+
+这篇文章里,我们讲解了 Python 中控制流,循环/迭代,和模块的概念。我们也演示了如何利用
Python 中 OOP 的方法和属性,来简化复杂的 shell 脚本。 + +你有任何其他希望去验证的想法吗?开始吧,写出自己的 Python 脚本,如果有任何问题可以咨询我们。不必犹豫,在分割线下面留下评论,我们会尽快回复你。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Gabriel Cánepa][a] +译者:[wi-cuckoo](https://github.com/wi-cuckoo) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/gacanepa/ +[1]: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/ +[2]: http://please%20note%20that%20the%20if%20/%20else%20statement%20is%20only%20one%20of%20the%20many%20control%20flow%20tools%20available%20in%20Python.%20We%20reviewed%20it%20here%20since%20we%20will%20use%20it%20in%20our%20script%20later.%20You%20can%20learn%20more%20about%20the%20rest%20of%20the%20tools%20in%20the%20official%20docs. 
+[3]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/ +[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[5]: https://github.com/gacanepa/scripts/blob/master/python/uname.py + From dc6ee9092447215559f875a3a2b872933601d082 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 3 Jul 2016 10:44:46 +0800 Subject: [PATCH 038/471] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit rename for windows --- ...20160621 Container technologies in Fedora - systemd-nspawn.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/{20160621 Container technologies in Fedora: systemd-nspawn.md => 20160621 Container technologies in Fedora - systemd-nspawn.md} (100%) diff --git a/sources/tech/20160621 Container technologies in Fedora: systemd-nspawn.md b/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md similarity index 100% rename from sources/tech/20160621 Container technologies in Fedora: systemd-nspawn.md rename to sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md From 00d5430e9f33cf89bd71d24e4d1bd4de1acb0671 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 3 Jul 2016 14:19:32 +0800 Subject: [PATCH 039/471] Translated translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md --- ...Awk to Print Fields and Columns in File.md | 54 +++++++++---------- 1 file changed, 26 insertions(+), 28 deletions(-) diff --git a/translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md b/translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md index 127cc061c1..94b914cd3d 100644 --- a/translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md +++ b/translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md @@ -1,21 +1,20 @@ -ictlyh Translating -How to Use Awk to Print Fields and Columns 
in File
+如何使用 Awk 打印文件中的字段和列
 ===================================================
 
-In this part of our [Linux Awk command series][1], we shall have a look at one of the most important features of Awk, which is field editing.
+在 [Linux Awk 命令系列介绍][1] 的这部分,我们来看一下 awk 最重要的功能之一,字段编辑。
 
-It is good to know that Awk automatically divides input lines provided to it into fields, and a field can be defined as a set of characters that are separated from other fields by an internal field separator.
+首先我们要知道 Awk 会自动把输入的行切分为字段,字段可以定义为一些字符集,这些字符集和其它字段被内部字段分隔符分离。
 
 ![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Print-Fields-and-Columns.png)
->Awk Print Fields and Columns
+>Awk 输出字段和列
 
-If you are familiar with the Unix/Linux or do [bash shell programming][2], then you should know what internal field separator (IFS) variable is. The default IFS in Awk are tab and space.
+如果你熟悉 Unix/Linux 或者懂得 [bash shell 编程][2],那么你也应该知道内部字段分隔符(IFS)变量。Awk 默认的 IFS 是 tab 和空格。
 
-This is how the idea of field separation works in Awk: when it encounters an input line, according to the IFS defined, the first set of characters is field one, which is accessed using $1, the second set of characters is field two, which is accessed using $2, the third set of characters is field three, which is accessed using $3 and so forth till the last set of character(s).
+Awk 字段切分的工作原理如下:当获得一行输入时,根据定义的 IFS,第一个字符集是字段一,用 $1 表示,第二个字符集是字段二,用 $2 表示,第三个字符集是字段三,用 $3 表示,以此类推直到最后一个字符集。
 
-To understand this Awk field editing better, let us take a look at the examples below:
+为了更好的理解 Awk 的字段编辑,让我们来看看下面的例子:
 
-**Example 1**: I have created a text file called tecmintinfo.txt.
+**示例 1**:我创建了一个名为 tecmintinfo.txt 的文件。 ``` # vi tecmintinfo.txt @@ -23,24 +22,23 @@ To understand this Awk field editing better, let us take a look at the examples ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Create-File-in-Linux.png) ->Create File in Linux +>在 Linux 中创建文件 -Then from the command line, I try to print the first, second and third fields from the file tecmintinfo.txt using the command below: +然后在命令行中使用以下命令打印 tecmintinfo.txt 文件中的第一、第二和第三个字段。 ``` $ awk '//{print $1 $2 $3 }' tecmintinfo.txt TecMint.comisthe ``` +从上面的输出中你可以看到,前三个字段的字符是按照定义的 IFS(也就是空格)打印出来的。 -From the output above, you can see that the characters from the first three fields are printed based on the IFS defined which is space: +- 字段一 “TecMint.com” 使用 $1 访问。 +- 字段二 “is” 通过 $2 访问。 +- 字段三 “the” 通过 $3 访问。 -- Field one which is “TecMint.com” is accessed using $1. -- Field two which is “is” is accessed using $2. -- Field three which is “the” is accessed using $3. +如果你注意看打印的输出,可以发现字段值之间并没有分隔开,这是 print 默认的行为。 -If you have noticed in the printed output, the field values are not separated and this is how print behaves by default. - -To view the output clearly with space between the field values, you need to add (,) operator as follows: +为了在字段值之间加入空格,你需要像下面这样添加(,)分隔符: ``` $ awk '//{print $1, $2, $3; }' tecmintinfo.txt @@ -48,11 +46,11 @@ $ awk '//{print $1, $2, $3; }' tecmintinfo.txt TecMint.com is the ``` -One important thing to note and always remember is that the use of ($) in Awk is different from its use in shell scripting. +很重要而且必须牢记的一点是,Awk 中 ($) 的使用和在 shell 脚本中不一样。 -Under shell scripting ($) is used to access the value of variables while in Awk ($) it is used only when accessing the contents of a field but not for accessing the value of variables. +在 shell 脚本中 ($) 用于获取变量的值,而在 Awk 中 ($) 只用于获取一个字段的内容,而不能用于获取变量的值。 -**Example 2**: Let us take a look at one other example using a file which contains multiple lines called my_shoping.list.
+**示例 2**:让我们再看一个使用多行文件 my_shoping.list 的例子。 ``` No Item_Name Unit_Price Quantity Price @@ -62,7 +60,7 @@ No Item_Name Unit_Price Quantity Price 4 Ethernet_Cables #30,000 4 #120,000 ``` -Say you wanted to only print Unit_Price of each item on the shopping list, you will need to run the command below: +假设你只想打印购物清单中每个物品的 Unit_Price,你需要运行下面的命令: ``` $ awk '//{print $2, $3 }' my_shopping.txt @@ -74,9 +72,9 @@ RAM_Chips #150,000 Ethernet_Cables #30,000 ``` -Awk also has a printf command that helps you to format your output is a nice way as you can see the above output is not clear enough. +Awk 也有一个 printf 命令,它能帮助你用更好的方式格式化输出,正如你看到的,上面的输出并不够清晰。 -Using printf to format output of the Item_Name and Unit_Price: +使用 printf 格式化输出 Item_Name 和 Unit_Price: ``` $ awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt @@ -88,18 +86,18 @@ RAM_Chips #150,000 Ethernet_Cables #30,000 ``` -### Summary +### 总结 -Field editing is very important when using Awk to filter text or strings, it helps you get particular data in columns in a list. And always remember that the use of ($) operator in Awk is different from that in shell scripting. +使用 Awk 进行文本和字符串过滤时,字段编辑功能非常重要,它能帮助你获取列表中特定列的数据。同时需要记住,Awk 中 ($) 操作符的用法和 shell 脚本中不一样。 -I hope the article was helpful to you and for any additional information required or questions, you can post a comment in the comment section.
+我希望这篇文章能对你有所帮助,如果你需要获取其它信息或者有任何疑问,都可以在下面的评论框中告诉我们。 -------------------------------------------------------------------------------- via: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ictlyh](https://github.com/ictlyh) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From aeebb76494651b04e75d8f325873d2ed17949ba8 Mon Sep 17 00:00:00 2001 From: pspkforever Date: Sun, 3 Jul 2016 20:52:08 +0800 Subject: [PATCH 040/471] Update 20160621 Docker Datacenter in AWS and Azure in Few Clicks.md --- ...0160621 Docker Datacenter in AWS and Azure in Few Clicks.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md b/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md index aa4a317103..d9431155ff 100644 --- a/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md +++ b/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md @@ -1,5 +1,6 @@ +translated by pspkforever DOCKER DATACENTER IN AWS AND AZURE IN A FEW CLICKS -==================================================== +=================================================== Introducing Docker Datacenter AWS Quickstart and Azure Marketplace Templates production-ready, high availability deployments in just a few clicks. 
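(编注:补充示例)上面《如何使用 Awk 打印文件中的字段和列》的译文提到,在 print 的字段列表中加入逗号可以让输出以空格分隔。作为补充,这里给出一个最小的可运行示例(其中的临时文件路径 /tmp/tecmint_demo.txt 和文件内容都是为演示而假设的),演示如何通过 awk 内置的 OFS(输出字段分隔符)变量自定义分隔符:

```shell
# 在 BEGIN 块里设置 OFS 之后,print 中以逗号分隔的字段就会用它连接输出
printf 'TecMint.com is the best\n' > /tmp/tecmint_demo.txt
awk 'BEGIN { OFS="-" } { print $1, $2, $3 }' /tmp/tecmint_demo.txt
# 输出:TecMint.com-is-the
rm -f /tmp/tecmint_demo.txt
```

注意 OFS 只作用于以逗号分隔的字段列表;像 `print $1 $2` 这样直接拼接的写法仍然不会插入任何分隔符。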
From 2a1976bbaffed5e9a4b5c13d501d3a54b3fec30f Mon Sep 17 00:00:00 2001 From: chenxinlong <237448382@qq.com> Date: Sun, 3 Jul 2016 21:01:54 +0800 Subject: [PATCH 041/471] =?UTF-8?q?[=E7=BF=BB=E8=AF=91=E4=B8=AD]20160624?= =?UTF-8?q?=20IT=20runs=20on=20the=20cloud=20and=20the=20cloud=20runs=20on?= =?UTF-8?q?=20Linux.Any=20questions=3F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... on the cloud and the cloud runs on Linux. Any questions.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md b/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md index 2a575c1a31..94f0a8af2f 100644 --- a/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md +++ b/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md @@ -1,3 +1,4 @@ +chenxinlong translating IT runs on the cloud, and the cloud runs on Linux. Any questions? =================================================================== @@ -37,7 +38,7 @@ So, just as the vast majority of Android phone and Chromebook users have no clue via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/#ftag=RSSbaffb68 作者:[Steven J. 
Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) +译者:[chenxinlong](https://github.com/chenxinlong) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9fe021efe27c6148ff98b52ef20c1561d9b0f83f Mon Sep 17 00:00:00 2001 From: crazyLinuxer <958983476@qq.com> Date: Sun, 3 Jul 2016 21:32:46 +0800 Subject: [PATCH 042/471] finish translating by kylepeng93 (#4135) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Add files via upload * Delete 给学习OpenStack基础设施的新手的入门指南.md * Delete 20160416 A newcomer's guide to navigating OpenStack Infrastructure.md * Add files via upload * Delete 给学习OpenStack基础设施的新手的入门指南.md * translated by kylepeng93 --- ... to navigating OpenStack Infrastructure.md | 88 ------------------- ... to navigating OpenStack Infrastructure.md | 65 ++++++++++++++ 2 files changed, 65 insertions(+), 88 deletions(-) delete mode 100644 sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md create mode 100644 translated/tech/A newcomer's guide to navigating OpenStack Infrastructure.md diff --git a/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md b/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md deleted file mode 100644 index 25dc193fdc..0000000000 --- a/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md +++ /dev/null @@ -1,88 +0,0 @@ -translating by kylepeng93 -A newcomer's guide to navigating OpenStack Infrastructure -=========================================================== - -New contributors to OpenStack are welcome, but having a road map for navigating within this maturing, fast-paced open source community doesn't hurt. At OpenStack Summit in Austin, [Paul Belanger][1] (Red Hat, Inc.), [Elizabeth K. 
Joseph][2] (HPE), and [Christopher Aedo][3] (IBM) will lead a session on [OpenStack Infrastructure for Beginners][4]. In this interview, they offer tips and resources to help onboard new OpenStack contributors. - -![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) - -**Your talk description says you'll be "diving into the heart of infrastructure and explain everything you need to know about the systems that keep OpenStack working." That's a tall order for a 40-minute time slot. What are the top things beginners should know about OpenStack infrastructure?** - -**Elizabeth K. Joseph (EKJ)**: We don't use GitHub for OpenStack patches. This is something that trips up a lot of new contributors because we do maintain mirrors of all our repositories on GitHub for historical reasons. Instead we use a fully open source code review and continuous integration (CI) system maintained by the OpenStack Infrastructure team. Relatedly, since we run a CI system, every change proposed to OpenStack is tested before merging. - -**Paul Belanger (PB)**: A lot of passionate people in the project, so don't get discouraged if your patch gets a -1. - -**Christopher Aedo (CA)**: The community wants to help you succeed, don't be afraid to ask questions or ask for pointers to more information to improve your understanding. - -### Which online resources would you recommend for beginners to fill in the holes for what you can't cover in your talk? - -**PB**: Definitely our [OpenStack Project Infrastructure documentation][5]. At lot of effort has been taken to keep it up to date as much as possible. Every system used in running OpenStack as a project has a dedicated page, even the OpenStack cloud the Infrastructure teams is bringing online. - -**EKJ**: I'll echo what Paul said about the Infrastructure documentation, and add that we love seeing patches from folks who are learning. 
We often don't realize what we're missing in terms of documentation until someone asks. So read, learn, and then help us fill in the gaps. You can ask questions on the [openstack-infra mailing list][6] or in our IRC channel at #openstack-infra on Freenode. - -**CA**: I love [this detailed post][7] about building images, by Ian Wienand. - -### Which "gotchas" should new OpenStack contributors look out for? - -**EKJ**: Contributing is not just about submitting new code and new features; the OpenStack community places a very high value on doing code reviews. If you want people to look at a patch you submitted, consider reviewing some of the work of others and providing clear and constructive feedback. The more your fellow contributors know about your work and see you doing reviews, the more likely you'll get your code reviewed in a timely manner. - -**CA**: I see a lot of newcomers getting tripped up with [Gerrit][8]. Read through the [developer workflow][9] in the Developers Guide, and then maybe read through it one more time. If you're not used to Gerrit, it can seem confusing and overwhelming at first, but walking through a few code reviews usually makes it all come together. Also, I'm a big fan of IRC. It can be a great place to get help, but it's best if you can maintain a persistent presence so people can answer your questions even if you're not "there" at that particular moment. (Read [IRC, the secret to success in open source][10].) You don't need to be "always on," but the ability to easily scroll back in a channel and catch up on a conversation can be invaluable. - -**PB**: I agree with both Elizabeth and Chris—Gerrit is what to look out for. It is going to be the hub of your development effort. Not only will you be submitting code for people to review, but you'll also be reviewing other contributors' code. Watch out for the Gerrit UI; it can be confusing at times. 
I'd recommend trying out [Gertty][11], which is a console-based interface to the Gerrit Code Review system, which happens to be a project driven by OpenStack Infrastructure. - -### What resources do you recommend for beginners to help them network with other OpenStack contributors? - -**PB**: For me, it was using IRC and joining the #openstack-infra channel on Freenode ([IRC logs][12]). There is a lot of fantastic information and people in that channel. You get to see the day-to-day operations of the OpenStack project, and once you know how the project works, you'll have a better understanding on how to contribute to its future. - -**CA**: I want to second that note for IRC; staying on IRC throughout the day made a huge difference for me in terms of feeling informed and connected. It's also such a great way to get help when you're stuck with someone on one of the projects—the ones with active IRC channels always have someone around willing to get your issues sorted out. - -**EKJ**: The [openstack-dev mailing list][13] is quite important for staying up to date with news about projects you're working on inside of OpenStack, so I recommend subscribing to that. The mailing list uses subject tags to separate projects, so you can instruct your email client to use those and focus on threads that impact projects you care about. Beyond online resources, many OpenStack groups have popped up all over the world that serve the needs of both users and contributors to OpenStack, and many of them routinely have talks and events with key OpenStack contributors. You can search on Meetup.com in your area, or search on [groups.openstack.org][14] to see if there is an OpenStack group in your area. Finally, there are the [OpenStack Summits][15], which happen every six months, and where we'll be giving our Infrastructure talk. 
In their current format, the summits consist of both a user conference and a developer conference in one space to talk about everything related to OpenStack, past, present, and future. - -### In which areas does OpenStack need to improve to become more beginner-friendly? - -**PB**: I think our [account-setup][16] process could be made easier for new contributors, especially how many steps are needed to submit your first patch. There is a large cost to enroll into OpenStack development model, which maybe be too much for contributors; however, once enrolled, the model works fantastic for developers. - -**CA**: We have a very pro-developer community, but the focus is on developing OpenStack itself, with less consideration given to the users of OpenStack clouds. We need to bring in application developers and encourage more people to develop things that run beautifully on OpenStack clouds, and encourage them to share those apps in the [Community App Catalog][17]. We can do this by continuing to improve our API standards and by ensuring different libraries (like libcloud, phpopencloud, and others) continue to work reliably for developers. Oh, also by sponsoring more OpenStack hackathons! All these things can ease entry for newcomers, which will lead to them sticking around. - -**EKJ**: I've worked on open source software for many years, but for a large number of OpenStack developers, this is the first open source project they've every worked on. I've found that their proprietary software background doesn't prepare them for the open source ideals, methodologies, and collaboration techniques used in an open source project. I'd love to see us do a better job of welcoming people who have this proprietary software background and working with them so they can truly understand the value of what they're working on in the open source software community. - -### I think 2016 is shaping up to be the Year of the Open Source Haiku. Explain OpenStack to beginners via Haiku. 
- -**PB**: OpenStack runs clouds If you enjoy free software Submit your first patch - -**CA**: In the near future OpenStack will rule the world Help make it happen! - -**EKJ**: OpenStack is free Deploy on your own servers And run your own cloud! - -*Paul, Elizabeth*, and Christopher will be [speaking at OpenStack Summit][18] in Austin on Monday, April 25, starting at 11:15am. - - ------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners - -作者:[linux.com][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://rikkiendsley.com/ -[1]: https://twitter.com/pabelanger -[2]: https://twitter.com/pleia2 -[3]: https://twitter.com/docaedo -[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 -[5]: http://docs.openstack.org/infra/system-config/ -[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html -[8]: https://code.google.com/p/gerrit/ -[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow -[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/ -[11]: https://pypi.python.org/pypi/gertty -[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/ -[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -[14]: https://groups.openstack.org/ -[15]: https://www.openstack.org/summit/ -[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup -[17]: https://apps.openstack.org/ -[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 diff --git a/translated/tech/A newcomer's guide to navigating OpenStack Infrastructure.md b/translated/tech/A newcomer's guide to navigating OpenStack 
Infrastructure.md new file mode 100644 index 0000000000..b22d5e1fd5 --- /dev/null +++ b/translated/tech/A newcomer's guide to navigating OpenStack Infrastructure.md @@ -0,0 +1,65 @@ +translating by kylepeng93 +给学习OpenStack基础设施的新手的入门指南 +=========================================================== + +任何一个为OpenStack贡献源码的人都会受到社区的欢迎,但是,对于一个发展趋近成熟并且快速迭代的开源社区而言,能够拥有一个新手指南并不是 +件坏事。在奥斯汀举办的OpenStack峰会上,[Paul Belanger][1](红帽公司)、[Elizabeth K. Joseph][2](HPE公司)和[Christopher Aedo][3](IBM公司)将会就针对新人的OpenStack基础设施作一场专门的会谈。在这次采访中,他们提供了一些建议和资源,来帮助新人成为OpenStack贡献者中的一员。 +![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) + +**你在谈话中表示你将“投入全部身心于基础设施,并解释关于那些维持OpenStack正常运转的系统你所需要知道的每一件事情”。要在40分钟里讲完,这可是个艰巨的任务。那么,对于学习OpenStack基础设施的新手来说,最需要知道哪些事情呢?** +**Elizabeth K. Joseph (EKJ)**: 我们没有为OpenStack使用GitHub这种提交补丁的方式,这一点会对很多新手造成困扰,尽管由于历史原因,我们还是在GitHub上保留了我们所有代码库的镜像。相反,我们使用由OpenStack基础设施团队维护的一套完全开源的代码复查和持续集成(CI)系统。与此相关的是,由于我们运行着CI系统,每一个提交给OpenStack的改变都会在被合并之前进行测试。 +**Paul Belanger (PB)**: 这个项目中的大多数人都富有激情,因此当你提交的补丁被某个人给了 -1 时不要感到沮丧。 +**Christopher Aedo (CA)**:社区很想帮助你取得成功,因此不要害怕提问,也不要害怕请人指点更多的资料来加深你的理解。 + +### 对于讲话中你无法涉及到的方面,你会向新手推荐哪些在线资源来帮助他们入门? +**PB**:当然是我们的[OpenStack项目基础设施文档][5]。我们已经付出了很大的努力,尽可能让这些文档随时保持最新。OpenStack项目运转所用到的每一个系统都有专门的页面进行说明,甚至包括基础设施团队即将上线的OpenStack云。 + +**EKJ**:关于把基础设施文档作为新手入门材料这一点,我和Paul的观点一致;另外,我们十分乐意看到正在学习的人提交补丁。我们通常不会意识到文档中缺少了哪些内容,除非它们恰好被人问起。因此,阅读、学习,然后帮助我们填补这些知识上的漏洞。你可以在[openstack-infra邮件列表][6]提出你的问题,或者在我们位于FreeNode上的#openstack-infra这个IRC频道发起你的提问。 +**CA**:我喜欢Ian Wienand写的[这篇详细的文章][7],它介绍的是构建镜像的过程。 +### OpenStack的新代码贡献者应该留意哪些“坑”(gotchas)?
+**EKJ**:向项目作出贡献并不仅仅是提交新的代码和新的特性;OpenStack社区也高度重视代码复查。如果你想要别人查看你提交的补丁,那你最好也去复查一下其他人的工作,并给出清晰且有建设性的反馈。你的同伴越了解你的工作、越常看到你做复查,他们就越有可能及时地复查你的代码。 +**CA**:我看到过大量的新手在面对Gerrit时受挫。建议通读开发者指南中的[开发者工作步骤][9],然后最好再通读一遍。如果你没用过Gerrit,一开始可能会觉得它令人困惑、无从下手。但是,跟着走完几次代码复查之后,一切就都顺了。同样,我是IRC的忠实粉丝。它是一个获得帮助的好地方,但是,你最好保持长期在线的状态,这样,尽管你在某个时刻没有出现,人们也可以回答你的问题。(阅读[IRC,开源界的成功秘诀][10]。)你不必总是“在场”,但是能够轻松地在一个频道中回看聊天记录、跟上最新的动态,这种能力非常有价值。 +**PB**:我同意Elizabeth和Chris的观点——Gerrit是需要留意的地方。它将会是你开发工作的中心。你不仅要提交代码给别人去复查,同时,你也要去复查其他贡献者的代码。留意Gerrit的用户界面,你可能一时会觉得很疑惑。我推荐新手去尝试[Gertty][11],它是Gerrit代码复查系统的一个基于控制台的终端界面,而它恰好也是OpenStack基础设施所驱动的一个项目。 +### 你对于OpenStack新手如何通过网络与其他贡献者交流有什么好的建议? +**PB**:对我来说,是通过IRC以及在Freenode上参加#openstack-infra频道([IRC日志][12])。这个频道上有很多对新手来说很有价值的信息和很棒的人。你可以看到OpenStack项目日复一日的运作情况,而一旦你知道了OpenStack项目的工作原理,你就会更清楚如何为OpenStack的未来发展作出贡献。 +**CA**:我想再为IRC说一点,整天保持在IRC上在线对我来说意义重大,因为我会感觉到被重视并且时刻保持连接。这也是一种非常好的获得帮助的方式,特别是当你卡在了项目中的某一个难题的时候,活跃的IRC频道里总会有一些人很乐意帮你解决问题。 +**EKJ**:[OpenStack开发邮件列表][13]对于时刻了解你所致力于的OpenStack项目的最新情况是非常重要的,因此我推荐一定要订阅它。邮件列表使用主题标签来区分项目,因此你可以设置你的邮件客户端来使用它,并且集中精力于你所关心的项目。除了在线资源之外,全世界范围内也成立了许多OpenStack小组,它们为OpenStack的用户和贡献者提供服务。这些小组会定期举办座谈,以及请OpenStack主要贡献者参加的一些活动。你可以在MeetUp.com上搜索你所在地区的聚会,或者在[groups.openstack.org][14]上查看你所在的地区是否存在OpenStack小组。最后,还有每六个月举办一次的[OpenStack峰会][15],我们的基础设施讲话就将在峰会上进行。就目前的形式而言,峰会由用户会议和开发者会议组成,会议内容都是和OpenStack相关的东西,包括它的过去、现在和未来。 +### OpenStack需要在哪些方面得到提升,才能让新手更加容易入门? +**PB**: 我认为我们的[账户设置(account-setup)][16]过程对于新的贡献者来说还可以变得更简单一些,特别是提交第一个补丁之前需要的步骤太多了。加入OpenStack的开发模式需要付出很大的成本,这对贡献者来说可能显得太多了;然而,一旦加入进去了,这个模式对开发者而言将会运转得十分高效。 +**CA**: 我们拥有一个非常支持开发者的社区,但是关注点主要集中在开发OpenStack本身,对OpenStack云的用户考虑得比较少。我们需要引入更多的应用开发者,鼓励更多的人去开发能在OpenStack云上完美运行的云应用程序,并鼓励他们在[社区App目录][17]上分享这些应用。我们可以通过持续改进我们的API标准、保证我们的各种库(比如libcloud、phpopencloud以及其他一些库)持续可靠地为开发者服务来实现这一目标。哦,还可以通过赞助更多的OpenStack编程马拉松(hackathon)!所有的这些事情都可以降低新人的学习门槛,这样也能让他们与这个社区之间的关系更加紧密。
+**EKJ**: 我已经致力于开源软件很多年了。但是,对于大量的OpenStack开发者而言,这是他们参与的第一个开源项目。我发现他们之前专有软件的工作背景,并没有让他们提前准备好开源项目中的理念、方法论,以及所需要的协作技巧。我乐于看到我们能够更好地欢迎这些有着专有软件背景的人,和他们一起工作,让他们真正地明白自己在开源软件社区中所做的事情的巨大价值。 +### 我认为2016年会成为“开源俳句(Haiku)年”。请用俳句向新手解释一下OpenStack。 +**PB**: OpenStack运行着云,如果你喜欢自由软件,就来提交你的第一个补丁吧。 +**CA**: 在不久的未来,OpenStack将统治世界,来帮忙实现它吧! +**EKJ**: OpenStack是自由的,把它部署在你自己的服务器上,运行你自己的云! +*Paul、Elizabeth 和 Christopher 将于4月25号星期一上午11:15在奥斯汀举办的OpenStack峰会上进行演说。* + +------------------------------------------------------------------------------ + +via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners + +作者:[linux.com][a] +译者:[kylepeng93](https://github.com/kylepeng93) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://rikkiendsley.com/ +[1]: https://twitter.com/pabelanger +[2]: https://twitter.com/pleia2 +[3]: https://twitter.com/docaedo +[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 +[5]: http://docs.openstack.org/infra/system-config/ +[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra +[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html +[8]: https://code.google.com/p/gerrit/ +[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow +[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/ +[11]: https://pypi.python.org/pypi/gertty +[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/ +[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev +[14]: https://groups.openstack.org/ +[15]: https://www.openstack.org/summit/ +[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup +[17]: https://apps.openstack.org/ +[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 From d3c75bbbfb30f512a5351f365437a81082ffe532 Mon Sep 17 00:00:00 2001 From: Chunyang Wen
Date: Mon, 4 Jul 2016 14:41:26 +0800 Subject: [PATCH 043/471] Finish tranlating Awk-Part4 by chunyang-wen (#4138) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * âFinish tranlating awk series part4 * Update Part 4 - How to Use Comparison Operators with Awk in Linux.md --- ... Comparison Operators with Awk in Linux.md | 97 ------------------- ... Comparison Operators with Awk in Linux.md | 95 ++++++++++++++++++ 2 files changed, 95 insertions(+), 97 deletions(-) delete mode 100644 sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md create mode 100644 translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md diff --git a/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md b/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md deleted file mode 100644 index 1d0ef007af..0000000000 --- a/sources/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md +++ /dev/null @@ -1,97 +0,0 @@ -chunyang-wen translating - -How to Use Comparison Operators with Awk in Linux -=================================================== - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Comparison-Operators-with-AWK.png) - -When dealing with numerical or string values in a line of text, filtering text or strings using comparison operators comes in handy for Awk command users. - -In this part of the Awk series, we shall take a look at how you can filter text or strings using comparison operators. If you are a programmer then you must already be familiar with comparison operators but those who are not, let me explain in the section below. - -### What are Comparison operators in Awk? 
- -Comparison operators in Awk are used to compare the value of numbers or strings and they include the following: - -- `>` – greater than -- `<` – less than -- `>=` – greater than or equal to -- `<=` – less than or equal to -- `==` – equal to -- `!=` – not equal to -- `some_value ~ / pattern/` – true if some_value matches pattern -- `some_value !~ / pattern/` – true if some_value does not match pattern - -Now that we have looked at the various comparison operators in Awk, let us understand them better using an example. - -In this example, we have a file named food_list.txt which is a shopping list for different food items and I would like to flag food items whose quantity is less than or equal 20 by adding `(**)` at the end of each line. - -``` -File – food_list.txt -No Item_Name Quantity Price -1 Mangoes 45 $3.45 -2 Apples 25 $2.45 -3 Pineapples 5 $4.45 -4 Tomatoes 25 $3.45 -5 Onions 15 $1.45 -6 Bananas 30 $3.45 -``` - -The general syntax for using comparison operators in Awk is: - -``` -# expression { actions; } -``` - -To achieve the above goal, I will have to run the command below: - -``` -# awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' food_list.txt - -No Item_Name` Quantity Price -1 Mangoes 45 $3.45 -2 Apples 25 $2.45 ** -3 Pineapples 5 $4.45 ** -4 Tomatoes 25 $3.45 ** -5 Onions 15 $1.45 ** -6 Bananas 30 $3.45 ** -``` - -In the above example, there are two important things that happen: - -- The first expression `{ action ; }` combination, `$3 <= 30 { printf “%s\t%s\n”, $0,”**” ; }` prints out lines with quantity less than or equal to 30 and adds a `(**)` at the end of each line. The value of quantity is accessed using `$3` field variable. -- The second expression `{ action ; }` combination, `$3 > 30 { print $0 ;}` prints out lines unchanged since their quantity is greater then `30`. 
- -One more example: - -``` -# awk '$3 <= 20 { printf "%s\t%s\n", $0,"TRUE" ; } $3 > 20 { print $0 ;} ' food_list.txt - -No Item_Name Quantity Price -1 Mangoes 45 $3.45 -2 Apples 25 $2.45 -3 Pineapples 5 $4.45 TRUE -4 Tomatoes 25 $3.45 -5 Onions 15 $1.45 TRUE -6 Bananas 30 $3.45 -``` - -In this example, we want to indicate lines with quantity less or equal to 20 with the word (TRUE) at the end. - -### Summary - -This is an introductory tutorial to comparison operators in Awk, therefore you need to try out many other options and discover more. - -In case of any problems you face or any additions that you have in mind, then drop a comment in the comment section below. Remember to read the next part of the Awk series where I will take you through compound expressions. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/comparison-operators-in-awk/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md b/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md new file mode 100644 index 0000000000..86649a52d5 --- /dev/null +++ b/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md @@ -0,0 +1,95 @@ +在 Linux 下如何使用 Awk 比较操作符 +=================================================== + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Comparison-Operators-with-AWK.png) + +对于 Awk 命令的用户来说,处理一行文本中的数字或者字符串时,使用比较运算符来过滤文本和字符串是十分方便的。 + +在 Awk 系列的此部分中,我们将探讨一下如何使用比较运算符来过滤文本或者字符串。如果你是程序员,那么你应该已经熟悉比较运算符;对于其它人,下面的部分将介绍比较运算符。 + +### Awk 中的比较运算符是什么? 
+ +Awk 中的比较运算符用于比较字符串或者数值,包括以下类型: + +- `>` – 大于 +- `<` – 小于 +- `>=` – 大于等于 +- `<=` – 小于等于 +- `==` – 等于 +- `!=` – 不等于 +- `some_value ~ / pattern/` – 如果 some_value 匹配模式 pattern,则返回 true +- `some_value !~ / pattern/` – 如果 some_value 不匹配模式 pattern,则返回 true + +现在我们通过例子来熟悉 Awk 中各种不同的比较运算符。 + +在这个例子中,我们有一个文件名为 food_list.txt 的文件,里面包括不同食物的购买列表。我想给食物数量小于或等于30的物品所在行的后面加上 `(**)`。 + +``` +File – food_list.txt +No Item_Name Quantity Price +1 Mangoes 45 $3.45 +2 Apples 25 $2.45 +3 Pineapples 5 $4.45 +4 Tomatoes 25 $3.45 +5 Onions 15 $1.45 +6 Bananas 30 $3.45 +``` + +Awk 中使用比较运算符的通用语法如下: + +``` +# expression { actions; } +``` + +为了实现刚才的目的,执行下面的命令: + +``` +# awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' food_list.txt + +No Item_Name Quantity Price +1 Mangoes 45 $3.45 +2 Apples 25 $2.45 ** +3 Pineapples 5 $4.45 ** +4 Tomatoes 25 $3.45 ** +5 Onions 15 $1.45 ** +6 Bananas 30 $3.45 ** +``` + +在刚才的例子中,发生如下两件重要的事情: + +- 第一个表达式 `{ action ; }` 组合, `$3 <= 30 { printf “%s\t%s\n”, $0,”**” ; }` 打印出数量小于等于30的行,并且在后面增加 `(**)`。物品的数量是通过 `$3` 这个域变量获得的。 +- 第二个表达式 `{ action ; }` 组合, `$3 > 30 { print $0 ;}` 原样输出数量大于 `30` 的行。 + +再举一个例子: + +``` +# awk '$3 <= 20 { printf "%s\t%s\n", $0,"TRUE" ; } $3 > 20 { print $0 ;} ' food_list.txt + +No Item_Name Quantity Price +1 Mangoes 45 $3.45 +2 Apples 25 $2.45 +3 Pineapples 5 $4.45 TRUE +4 Tomatoes 25 $3.45 +5 Onions 15 $1.45 TRUE +6 Bananas 30 $3.45 +``` + +在这个例子中,我们想通过在行的末尾增加 (TRUE) 来标记数量小于等于20的行。 + +### 总结 + +这是一篇 Awk 比较运算符的介绍性指引,因此你需要尝试其他选项,发现更多使用方法。 + +如果你遇到或者想到任何问题,请在下面评论区留下评论。请记得阅读 Awk 系列下一部分的文章,那里我将介绍组合表达式。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/comparison-operators-in-awk/ + +作者:[Aaron Kili][a] +译者:[chunyang-wen](https://github.com/chunyang-wen) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From
707906d8c929f69366da0a5bdb59fe3f9ae2c5fe Mon Sep 17 00:00:00 2001 From: Dongliang Mu Date: Mon, 4 Jul 2016 02:41:54 -0400 Subject: [PATCH 044/471] translated (#4137) * translated * move to translated directory --- ...inux developers think of Git and GitHub.md | 95 ------------------- ...inux developers think of Git and GitHub.md | 93 ++++++++++++++++++ 2 files changed, 93 insertions(+), 95 deletions(-) delete mode 100644 sources/tech/20160218 What do Linux developers think of Git and GitHub.md create mode 100644 translated/tech/20160218 What do Linux developers think of Git and GitHub.md diff --git a/sources/tech/20160218 What do Linux developers think of Git and GitHub.md b/sources/tech/20160218 What do Linux developers think of Git and GitHub.md deleted file mode 100644 index c0de7f4e6a..0000000000 --- a/sources/tech/20160218 What do Linux developers think of Git and GitHub.md +++ /dev/null @@ -1,95 +0,0 @@ -translated by mudongliang - -What do Linux developers think of Git and GitHub? -===================================================== - -**Also in today’s open source roundup: DistroWatch reviews XStream Desktop 153, and Street Fighter V is coming to Linux and SteamOS in the spring** - -## What do Linux developers think of Git and GitHub? - -The popularity of Git and GitHub among Linux developers is well established. But what do developers think of them? And should GitHub really be synonymous with Git itself? A Linux redditor recently asked about this and got some very interesting answers. - -Dontwakemeup46 asked his question: - ->I am learning Git and Github. What I am interested in is how these two are viewed by the community. That git and github are used extensively, is something I know. But are there serious issues with either Git or Github? Something that the community would love to change? 
- -[More at Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580413015211&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) - -His fellow Linux redditors responded with their thoughts about Git and GitHub: - ->Derenir: ”Github is not affliated with Git. - ->Git is made by Linus Torvalds. - ->Github hardly supports Linux. - ->Github is a corporate bordelo that tries to make money from Git. - ->[https://desktop.github.com/](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580415025712&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&type=U&out=https%3A%2F%2Fdesktop.github.com%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=https%3A%2F%2Fdesktop.github.com%2F) see here no Linux Support.” - ->**Bilog78**: ”A minor update: git hasn't been “made by Linus Torvalds” for a while. The maintainer is Junio C Hamano and the main contributors after him are Jeff King and Shawn O. Pearce.” - ->**Fearthefuture**: ”I like git but can't understand why people even use github anymore. From my point of view the only thing it does better than bitbucket are user statistics and the larger userbase. 
Bitbucket has unlimited free private repos, much better UI and very good integration with other services such as Jenkins.” - ->**Thunger**: ”Gitlab.com is also nice, especially since you can host your own instance on your own servers.” - ->**Takluyver**: ”Lots of people are familiar with the UI of Github and associated services like Travis, and lots of people already have Github accounts, so it's a good place for projects to be. People also use their Github profile as a kind of portfolio, so they're motivated to put more projects on there. Github is a de facto standard for hosting open source projects.” - ->**Tdammers**: ”Serious issue with git would be the UI, which is kind of counterintuitive, to the point that many users just stick with a handful of memorized incantations. - -Github: most serious issue here is that it's a proprietary hosted solution; you buy convenience, and the price is that your code is on someone else's server and not under your control anymore. Another common criticism of github is that its workflow isn't in line with the spirit of git itself, particularly the way pull requests work. And finally, github is monopolizing the code hosting landscape, and that's bad for diversity, which in turn is crucial for a thriving free software community.” - ->**Dies**: ”How is that the case? More importantly, if that is the case, then what's done is done and I guess we're stuck with Github since they control so many projects.” - ->**Tdammers**: ”The code is hosted on someone else's server, "someone else" in this case being github. Which, for an open-source project, is not typically a huge problem, but still, you don't control it. If you have a private project on github, then the only assurance you have that it will remain private is github's word for it. If you decide to delete things, then you can never be sure whether it's been deleted, or just hidden. 
- -Github doesn't control the projects themselves (you can always take your code and host it elsewhere, declaring the new location the "official" one), it just has deeper access to the code than the developers themselves.” - ->**Drelos**: ”I have read a lot of praises and bad stuff about Github ([here's an example](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580428524613&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fwww.wired.com%2F2015%2F06%2Fproblem-putting-worlds-code-github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=here%27s%20an%20example)) but my simple noob question is why aren't efforts towards a free and open "version" ?” - ->**Twizmwazin**: ”GitLab is sorta pushing there.” - -[More at Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580429720714&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) - -## DistroWatch reviews XStream Desktop 153 - -XStreamOS is a version of Solaris created by Sonicle. XStream Desktop brings the power of Solaris to desktop users, and distrohoppers might be interested in checking it out. DistroWatch did a full review of XStream Desktop 153 and found that it performed fairly well. 
- -Jesse Smith reports for DistroWatch: - ->I think XStream Desktop does a lot of things well. Admittedly, my trial got off to a rocky start when the operating system would not boot on my hardware and I could not get the desktop to use my display's full screen resolution when running in VirtualBox. However, after that, XStream performed fairly well. The installer works well, the operating system automatically sets up and uses boot environments, insuring we can recover the system if something goes wrong. The package management tools work well and XStream ships with a useful collection of software. - ->I did run into a few problems playing media, specifically getting audio to work. I am not sure if that is another hardware compatibility issue or a problem with the media software that ships with the operating system. On the other hand, tools such as the web browser, e-mail, productivity suite and configuration tools all worked well. - ->What I appreciate about XStream the most is that the operating system is a branch of the OpenSolaris family that is being kept up to date. Other derivatives of OpenSolaris tend to lag behind, at least with desktop software, but XStream is still shipping recent versions of Firefox and LibreOffice. - ->For me personally, XStream is missing a few components, like a printer manager, multimedia support and drivers for my specific hardware. Other aspects of the operating system are quite attractive. I like the way the developers have set up LXDE, I like the default collection of software and I especially like the way file system snapshots and boot environments are enabled out of the box. Most Linux distributions, openSUSE aside, have not caught on to the usefulness of boot environments yet and I hope it is a technology that is picked up by more projects. 
- -[More at DistroWatch](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580434172315&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fdistrowatch.com%2Fweekly.php%3Fissue%3D20160215%23xstreamos&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20DistroWatch) - -## Street Fighter V and SteamOS - -Street Fighter is one of the most well known game franchises of all time, and now [Capcom has announced](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) that Street Fighter V will be coming to Linux and SteamOS in the spring. This is great news for Linux gamers. - -Joe Parlock reports for Destructoid: - ->Are you one of the less than one percent of Steam users who play on a Linux-based system? Are you part of the even smaller percentage of people who play on Linux and are excited for Street Fighter V? Well, I’ve got some good news for you. - ->Capcom has announced via Steam that Street Fighter V will be coming to SteamOS and other Linux operating systems sometime this spring. 
It’ll come at no extra cost, so those who already own the PC build of the game will just be able to install it on Linux and be good to go. - -[More at Destructoid](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) - -Did you miss a roundup? Check the [Eye On Open home page](http://www.infoworld.com/blog/eye-on-open/) to get caught up with the latest news about open source and Linux. - ------------------------------------------------------------------------------- - -via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html - -作者:[Jim Lynch][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.infoworld.com/author/Jim-Lynch/ - diff --git a/translated/tech/20160218 What do Linux developers think of Git and GitHub.md b/translated/tech/20160218 What do Linux developers think of Git and GitHub.md new file mode 100644 index 0000000000..a0eb63506e --- /dev/null +++ b/translated/tech/20160218 What do Linux developers think of Git and GitHub.md @@ -0,0 +1,93 @@ +Linux 开发者如何看待 Git 和 Github? +===================================================== + +**同样在今日的开源摘要: DistroWatch 评估 XStream 桌面 153 版本,街头霸王 V 即将在这个春天进入 Linux 和 SteamOS** + +## Linux 开发者如何看待 Git 和 Github? 
+ +Git 和 Github 在 Linux 开发者中有很高的知名度。但是开发者如何看待它们呢?另外,Github 是不是真的和 Git 是一个意思?一个 Linux reddit 用户最近问到了这个问题,并且得到了很有意思的答案。 + +Dontwakemeup46 提问: + +> 我正在学习 Git 和 Github。我感兴趣的是社区如何看待两者?据我所知,Git 和 Github 应用十分广泛。但是 Git 或 Github 有没有严重的,社区喜欢去修改的问题呢? + +[更多见 Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580413015211&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) + +与他志同道合的 Linux reddit 用户回答了他们对于 Git 和 Github的想法: + +>Derenir: “Github 并不隶属于 Git。 + +>Git 是由 Linus Torvalds 开发的。 + +>Github 几乎不支持 Linux。 + +>Github 是一家唯利是图的,企图借助 Git 赚钱的公司。 + +>[https://desktop.github.com/](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580415025712&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&type=U&out=https%3A%2F%2Fdesktop.github.com%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=https%3A%2F%2Fdesktop.github.com%2F) 并没有支持 Linux。” + +>**Bilog78**: “一个简单的更新: Linus Torvalds 已经不再维护 Git了。维护者是 Junio C Hamano,以及 Linus 之后的主要贡献者是Jeff King 和 Shawn O. 
Pearce。”
+
+>**Fearthefuture**: “我喜欢 Git,但是不明白人们为什么还在使用 Github。在我看来,Github 比 Bitbucket 好的地方只有用户统计和更大的用户基数。Bitbucket 有无限的免费私有仓库、更好的 UI,以及与 Jenkins 之类的其他服务更好的集成。”
+
+>**Thunger**: “Gitlab.com 也很不错,特别是你可以在自己的服务器上架设自己的实例。”
+
+>**Takluyver**: “很多人熟悉 Github 的 UI 以及 Travis 之类的关联服务,并且很多人已经有 Github 账号,所以它是一个存放项目的好地方。人们也把自己的 Github 个人页当作一种作品集,所以他们有动力把更多的项目放在上面。Github 是托管开源项目的事实标准。”
+
+>**Tdammers**: “Git 的严重问题在于它的 UI,它有些违反直觉,以至于很多用户只会死记硬背地使用少数几条命令。
+
+Github:这里最严重的问题是它是一个私有的托管解决方案;你买到了方便,但代价是你的代码放在别人的服务器上,不再在你的掌控之内。另一个对 Github 的常见批评是它的工作流和 Git 本身的精神不符,特别是 pull requests 的工作方式。最后,Github 正在垄断代码托管这个领域,这对多样性是坏事,而多样性对一个繁荣的自由软件社区来说又至关重要。”
+
+>**Dies**: “怎么会是这样?更重要的是,如果真是这样,那也木已成舟,我猜我们只能被 Github 困住了,因为它们控制着如此多的项目。”
+
+>**Tdammers**: “代码托管在别人的服务器上,这里的‘别人’指的是 Github。这对于开源项目来说,通常不是什么大问题,但是,你仍然无法控制它。如果你在 Github 上有私有项目,那么它会保持私有的唯一保证就是 Github 的承诺。如果你决定删除一些东西,你无法确定它们是真的被删除了,还是只是被隐藏了。
+
+Github 并不控制这些项目本身(你总是可以拿走你的代码托管到别处,并宣布新位置是“官方”的),它只是对代码有着比开发者本身更深入的访问权。”
+
+>**Drelos**: “我已经读过大量对 Github 的赞美与批评([这是一个例子](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580428524613&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fwww.wired.com%2F2015%2F06%2Fproblem-putting-worlds-code-github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=here%27s%20an%20example)),但我的新手问题是:为什么不朝着一个自由开源的‘版本’努力呢?”
+
+>**Twizmwazin**: “GitLab 差不多就是在朝这个方向努力。”
+
+[更多见
Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580429720714&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit)
+
+## DistroWatch 评估 XStream 桌面 153 版本
+
+XStreamOS 是由 Sonicle 创建的一个 Solaris 版本。XStream 桌面将 Solaris 的强大带给了桌面用户,喜欢尝试不同发行版的用户可能会有兴趣体验一下。DistroWatch 对 XStream 桌面 153 版本做了一个很全面的评估,并且发现它表现相当好。
+
+Jesse Smith 为 DistroWatch 报道:
+
+> 我认为 XStream 桌面把很多事情都做得不错。无可否认,我的试用开局并不顺利:这个操作系统无法在我的硬件上启动,而且在 VirtualBox 中运行时,我无法让桌面使用显示器的完整分辨率。不过在那之后,XStream 的表现就相当好了。安装程序工作正常,操作系统会自动设置并使用启动环境(boot environment),确保系统出问题时我们可以恢复。软件包管理工具工作良好,XStream 还自带了一套实用的软件。
+
+> 我确实在播放媒体文件时遇到了一些问题,特别是让声音正常工作。我不确定这是另一个硬件兼容性问题,还是操作系统自带的多媒体软件的问题。另一方面,像 Web 浏览器、电子邮件、办公套件以及配置工具这些都工作得很好。
+
+> 我最欣赏 XStream 的地方是,这个操作系统是 OpenSolaris 家族中一个保持更新的分支。OpenSolaris 的其他衍生系统有落后的倾向,至少在桌面软件上是这样,但是 XStream 仍然搭载着最新版本的 Firefox 和 LibreOffice。
+
+> 对我个人来说,XStream 还缺少一些组件,比如打印机管理器、多媒体支持和适用于我的特定硬件的驱动。这个操作系统的其他方面则相当吸引人。我喜欢开发者配置 LXDE 的方式和默认的软件组合,尤其喜欢文件系统快照和启动环境开箱即用的方式。除了 openSUSE,大多数 Linux 发行版还没有意识到启动环境的用处,我希望有更多项目能采用这项技术。
+
+[更多见 DistroWatch](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580434172315&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fdistrowatch.com%2Fweekly.php%3Fissue%3D20160215%23xstreamos&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20DistroWatch)
+
+## 街头霸王 V 和
SteamOS
+
+街头霸王是最出名的游戏系列之一,并且 [Capcom 已经宣布](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) 街头霸王 V 将会在这个春天进入 Linux 和 SteamOS。这对于 Linux 游戏玩家来说是非常好的消息。
+
+Joe Parlock 为 Destructoid 报道:
+
+>你是在基于 Linux 的系统上玩游戏的、不到 1% 的 Steam 用户之一吗?你是其中比例更小的、既在 Linux 平台上玩游戏又很期待街头霸王 V 的玩家之一吗?是的话,我有一些好消息要告诉你。
+
+>Capcom 已经通过 Steam 宣布,这个春天街头霸王 V 将进入 SteamOS 以及其他 Linux 操作系统。它不需要任何额外的花费,所以那些已经拥有该游戏 PC 版的人可以直接在 Linux 上安装它,即装即玩。
+
+[更多见 Destructoid](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced)
+
+你是否错过了摘要?请查看 [Eye On Open 主页](http://www.infoworld.com/blog/eye-on-open/) 来获取关于 Linux 和开源的最新新闻。
+
+------------------------------------------------------------------------------
+
+via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html
+
+作者:[Jim Lynch][a]
+译者:[mudongliang](https://github.com/mudongliang)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/Jim-Lynch/ + From ca31a2b5decd7fde3da794d258d931e19fb33a00 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 4 Jul 2016 15:21:06 +0800 Subject: [PATCH 045/471] PUB:20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX @alim0x --- ... HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md | 51 ++++++++++--------- 1 file changed, 28 insertions(+), 23 deletions(-) rename {translated/tech => published}/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md (72%) diff --git a/translated/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md b/published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md similarity index 72% rename from translated/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md rename to published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md index c08efbc683..e6287d62e9 100644 --- a/translated/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md +++ b/published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md @@ -1,14 +1,15 @@ -在 Ubuntu Linux 中使用 WEBP 图片 +在 Ubuntu Linux 中使用 WebP 图片 ========================================= ![](http://itsfoss.com/wp-content/uploads/2016/05/support-webp-ubuntu-linux.jpg) ->简介:这篇指南会向你展示如何在 Linux 下查看 WebP 图片以及将 WebP 图片转换为 JPEG 或 PNG 格式。 -### 什么是 WEBP? +> 简介:这篇指南会向你展示如何在 Linux 下查看 WebP 图片以及将 WebP 图片转换为 JPEG 或 PNG 格式。 -Google 为图片推出 [WebP 文件格式][0]已经超过五年了。Google 说,WebP 提供有损和无损压缩,相比 JPEG 压缩,WebP 压缩文件大小能更小约 25%。 +### 什么是 WebP? 
-Google 的目标是让 WebP 成为 web 图片的新标准,但是我没能看到这一切发生。已经五年过去了,除了谷歌的生态系统以外它仍未被接受成为一个标准。但正如我们所知的,Google 对它的技术很有进取心。几个月前 Google 将 Google Plus 的所有图片改为了 WebP 格式。 +自从 Google 推出 [WebP 图片格式][0],已经过去五年了。Google 说,WebP 提供有损和无损压缩,相比 JPEG 压缩,WebP 压缩文件大小,能更小约 25%。 + +Google 的目标是让 WebP 成为 web 图片的新标准,但是并没有成为现实。已经五年过去了,除了谷歌的生态系统以外它仍未被接受成为一个标准。但正如我们所知的,Google 对它的技术很有进取心。几个月前 Google 将 Google Plus 的所有图片改为了 WebP 格式。 如果你用 Google Chrome 从 Google Plus 上下载那些图片,你会得到 WebP 图片,不论你之前上传的是 PNG 还是 JPEG。这都不是重点。真正的问题在于当你尝试着在 Ubuntu 中使用默认的 GNOME 图片查看器打开它时你会看到如下错误: @@ -17,7 +18,8 @@ Google 的目标是让 WebP 成为 web 图片的新标准,但是我没能看 > **Unrecognized image file format(未识别文件格式)** ![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-1.png) ->GNOME 图片查看器不支持 WebP 图片 + +*GNOME 图片查看器不支持 WebP 图片* 在这个教程里,我们会看到 @@ -41,7 +43,8 @@ sudo apt-get install gthumb 一旦安装完成,你就可以简单地右键点击 WebP 图片,选择 gThumb 来打开它。你现在应该可以看到如下画面: ![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-2.jpeg) ->gThumb 中显示的 WebP 图片 + +*gThumb 中显示的 WebP 图片* ### 让 gThumb 成为 Ubuntu 中 WebP 图片的默认应用 @@ -50,28 +53,30 @@ sudo apt-get install gthumb #### 步骤 1:右键点击 WebP 文件选择属性。 ![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) ->从右键菜单中选择属性 + +*从右键菜单中选择属性* #### 步骤 2:转到打开方式标签,选择 gThumb 并点击设置为默认。 ![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) ->让 gThumb 成为 Ubuntu 中 WebP 图片的默认应用 + +*让 gThumb 成为 Ubuntu 中 WebP 图片的默认应用* ### 让 gThumb 成为所有图片的默认应用 -gThumb 的功能比图片查看器更多。举个例子,你可以做一些简单的编辑,给图片添加滤镜等。添加滤镜的效率没有 XnRetro(在[ Linux 下添加类似 Instagram 滤镜效果][5]的专用工具)那么高,但它还是有一些基础的滤镜可以用。 +gThumb 的功能比图片查看器更多。举个例子,你可以做一些简单的图片编辑,给图片添加滤镜等。添加滤镜的效率没有 XnRetro(在[ Linux 下添加类似 Instagram 滤镜效果][5]的专用工具)那么高,但它还是有一些基础的滤镜可以用。 -我非常喜欢 gThumb 并且决定让它成为默认的图片查看器。如果你也想在 Ubuntu 中让 gThumb 成为所有图片的默认默认应用,遵照以下步骤操作: +我非常喜欢 gThumb 并且决定让它成为默认的图片查看器。如果你也想在 Ubuntu 中让 gThumb 成为所有图片的默认应用,遵照以下步骤操作: -#### 步骤1:打开系统设置 +步骤1:打开系统设置 ![](http://itsfoss.com/wp-content/uploads/2014/04/System_Settings_ubuntu_1404.jpeg) -#### 步骤2:转到详情(Details) 
+步骤2:转到详情(Details) ![](http://itsfoss.com/wp-content/uploads/2013/11/System_settings_Ubuntu_1.jpeg) -#### 步骤3:在这里将 gThumb 设置为图片的默认应用 +步骤3:在这里将 gThumb 设置为图片的默认应用 ![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-5.png) @@ -100,7 +105,7 @@ sudo apt-get install webp ##### 将 JPEG/PNG 转换为 WebP -我们将使用 cwebp 命令(它代表压缩为 WebP 吗?)来将 JPEG 或 PNG 文件转换为 WebP。命令格式是这样的: +我们将使用 cwebp 命令(它代表转换为 WebP 的意思吗?)来将 JPEG 或 PNG 文件转换为 WebP。命令格式是这样的: ``` cwebp -q [图片质量] [JPEG/PNG_文件名] -o [WebP_文件名] @@ -132,7 +137,7 @@ dwebp example.webp -o example.png [下载 XnConvert][1] -XnConvert 是个强大的工具,你可以用它来批量修改图片尺寸。但在这个教程里,我们只能看到如何将单个 WebP 图片转换为 PNG/JPEG。 +XnConvert 是个强大的工具,你可以用它来批量修改图片尺寸。但在这个教程里,我们只介绍如何将单个 WebP 图片转换为 PNG/JPEG。 打开 XnConvert 并选择输入文件: @@ -148,24 +153,24 @@ XnConvert 是个强大的工具,你可以用它来批量修改图片尺寸。 也许你一点都不喜欢 WebP 图片格式,也不想在 Linux 仅仅为了查看 WebP 图片而安装一个新软件。如果你不得不将 WebP 文件转换以备将来使用,这会是件更痛苦的事情。 -一个解决这个问题更简单,不那么痛苦的途径是安装一个 Chrome 扩展 Save Image as PNG。有了这个插件,你可以右键点击 WebP 图片并直接存储为 PNG 格式。 +解决这个问题的一个更简单、不那么痛苦的途径是安装一个 Chrome 扩展 Save Image as PNG。有了这个插件,你可以右键点击 WebP 图片并直接存储为 PNG 格式。 ![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-8.png) ->在 Google Chrome 中将 WebP 图片保存为 PNG 格式 -[获取 Save Image as PNG 扩展][2] +*在 Google Chrome 中将 WebP 图片保存为 PNG 格式* + +- [获取 Save Image as PNG 扩展][2] ### 你的选择是? -我希望这个详细的教程能够帮你在 Linux 上获取 WebP 支持并帮你转换 WebP 图片。你在 Linux 怎么处理 WebP 图片?你使用哪个工具?以上描述的方法中,你最喜欢哪一个? - +我希望这个详细的教程能够帮你在 Linux 上支持 WebP 并帮你转换 WebP 图片。你在 Linux 怎么处理 WebP 图片?你使用哪个工具?以上描述的方法中,你最喜欢哪一个? 
---------------------- -via: http://itsfoss.com/webp-ubuntu-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 +via: http://itsfoss.com/webp-ubuntu-linux/ 作者:[Abhishek Prakash][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 8c55d2a65499b237aa7d3792f7ea5ff3e712cd86 Mon Sep 17 00:00:00 2001 From: alim0x Date: Mon, 4 Jul 2016 19:01:38 +0800 Subject: [PATCH 046/471] [translated]How to permanently mount a Windows share on Linux --- ...manently mount a Windows share on Linux.md | 127 ------------------ ...manently mount a Windows share on Linux.md | 122 +++++++++++++++++ 2 files changed, 122 insertions(+), 127 deletions(-) delete mode 100644 sources/tech/20160624 How to permanently mount a Windows share on Linux.md create mode 100644 translated/tech/20160624 How to permanently mount a Windows share on Linux.md diff --git a/sources/tech/20160624 How to permanently mount a Windows share on Linux.md b/sources/tech/20160624 How to permanently mount a Windows share on Linux.md deleted file mode 100644 index 30b2cf71f8..0000000000 --- a/sources/tech/20160624 How to permanently mount a Windows share on Linux.md +++ /dev/null @@ -1,127 +0,0 @@ -alim0x translating - -How to permanently mount a Windows share on Linux -================================================== - ->If you get tired of having to remount Windows shares when you reboot your Linux box, read about an easy way to make those shares permanently mount. - -![](http://tr2.cbsistatic.com/hub/i/2016/06/02/e965310b-b38d-43e6-9eac-ea520992138b/68fd9ec5d6731cc405bdd27f2f42848d/linuxadminhero.jpg) ->Image: Jack Wallen - -It has never been easier for Linux to interact within a Windows network. And considering how many businesses are adopting Linux, those two platforms have to play well together. 
Fortunately, with the help of a few tools, you can easily map Windows network drives onto a Linux machine, and even ensure they are still there upon rebooting the Linux machine. - -### Before we get started - -For this to work, you will be using the command line. The process is pretty simple, but you will be editing the /etc/fstab file, so do use caution. -Also, I assume you already have Samba working properly so you can manually mount shares from a Windows network to your Linux box, and that you know the IP address of the machine hosting the share. - -Are you ready? Let's go. - -### Create your mount point - -The first thing we're going to do is create a folder that will serve as the mount point for the share. For the sake of simplicity, we'll name this folder share and we'll place it in /media. Open your terminal window and issue the command: - -``` -sudo mkdir /media/share -``` - -### A few installations - -Now we have to install the system that allows for cross-platform file sharing; this system is cifs-utils. From the terminal window, issue the command: - -``` -sudo apt-get install cifs-utils -``` - -This command will also install all of the dependencies for cifs-utils. - -Once this is installed, open up the file /etc/nsswitch.conf and look for the line: - -``` -hosts: files mdns4_minimal [NOTFOUND=return] dns -``` - -Edit that line so it looks like: - -``` -hosts: files mdns4_minimal [NOTFOUND=return] wins dns -``` - -Now you must install windbind so that your Linux machine can resolve Windows computer names on a DHCP network. From the terminal, issue this command: - -``` -sudo apt-get install libnss-windbind windbind -``` - -Restart networking with the command: - -``` -sudo service networking restart -``` - -### Mount the network drive - -Now we're going to map the network drive. This is where we must edit the /etc/fstab file. 
Before you make that first edit, back up the file with this command: - -``` -sudo cp /etc/fstab /etc/fstab.old -``` - -If you need to restore that file, issue the command: - -``` -sudo mv /etc/fstab.old /etc/fstab -``` - -Create a credentials file in your home directory called .smbcredentials. In that file, add your username and password, like so (USER is the actual username and password is the actual password): - -``` -username=USER - -password=PASSWORD -``` - -You now have to know the Group ID (GID) and User ID (UID) of the user that will be mounting the drive. Issue the command: - -``` -id USER -``` - -USER is the actual username, and you should see something like: - -``` -uid=1000(USER) gid=1000(GROUP) -``` - -USER is the actual username, and GROUP is the group name. The numbers before (USER) and (GROUP) will be used in the /etc/fstab file. - -It's time to edit the /etc/fstab file. Open that file in your editor and add the following line to the end (replace everything in ALL CAPS and the IP address of the remote machine): - -``` -//192.168.1.10/SHARE /media/share cifs credentials=/home/USER/.smbcredentials,iocharset=uft8,gid=GID,udi=UID,file_mode=0777,dir_mode=0777 0 0 -``` - -**Note**: The above should be on a single line. - -Save and close that file. Issue the command sudo mount -a and the share will be mounted. Check in /media/share and you should see the files and folders on the network share. - -### Sharing made easy - -Thanks to cifs-utils and Samba, mapping network shares is incredibly easy on a Linux machine. And now, you won't have to manually remount those shares every time your machine boots. - -For more networking tips and tricks, sign up for our Data Center newsletter. 
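As a side note, that long fstab line is easy to mistype (options like iocharset=utf8 and uid= are common typo victims). One way to avoid that is to generate the line with a small helper and review it before appending it to /etc/fstab. A rough sketch, under the same share and mount point used above (the function name build_cifs_line is just for illustration):

```shell
# build_cifs_line SHARE MOUNTPOINT USER UID GID
# Echo a cifs entry for /etc/fstab, mirroring the options described above.
build_cifs_line() {
    share="$1"; mp="$2"; user="$3"; uid="$4"; gid="$5"
    echo "$share $mp cifs credentials=/home/$user/.smbcredentials,iocharset=utf8,uid=$uid,gid=$gid,file_mode=0777,dir_mode=0777 0 0"
}

# Example: print the line first, then append it as root once it looks right:
# build_cifs_line //192.168.1.10/SHARE /media/share USER 1000 1000
# build_cifs_line //192.168.1.10/SHARE /media/share USER 1000 1000 | sudo tee -a /etc/fstab
```

After appending the line, `sudo mount -a` mounts the share as described above.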
-[SUBSCRIBE](https://secure.techrepublic.com/user/login/?regSource=newsletter-button&position=newsletter-button&appId=true&redirectUrl=http%3A%2F%2Fwww.techrepublic.com%2Farticle%2Fhow-to-permanently-mount-a-windows-share-on-linux%2F&) - --------------------------------------------------------------------------------- - -via: http://www.techrepublic.com/article/how-to-permanently-mount-a-windows-share-on-linux/ - -作者:[Jack Wallen][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.techrepublic.com/search/?a=jack+wallen - - diff --git a/translated/tech/20160624 How to permanently mount a Windows share on Linux.md b/translated/tech/20160624 How to permanently mount a Windows share on Linux.md new file mode 100644 index 0000000000..a205923fe0 --- /dev/null +++ b/translated/tech/20160624 How to permanently mount a Windows share on Linux.md @@ -0,0 +1,122 @@ +如何在 Linux 上永久挂载一个 Windows 共享 +================================================== + +> 如果你已经厌倦了每次重启 Linux 就得重新挂载 Windows 共享,读读这个让共享永久挂载的简单方法。 + +![](http://tr2.cbsistatic.com/hub/i/2016/06/02/e965310b-b38d-43e6-9eac-ea520992138b/68fd9ec5d6731cc405bdd27f2f42848d/linuxadminhero.jpg) +>图片: Jack Wallen + +在 Linux 上和一个 Windows 网络进行交互从来就不是件轻松的事情。想想多少企业正在采用 Linux,这两个平台不得不一起好好协作。幸运的是,有了一些工具的帮助,你可以轻松地将 Windows 网络驱动器映射到一台 Linux 机器上,甚至可以确保在重启 Linux 机器之后共享还在。 + +### 在我们开始之前 + +要实现这个,你需要用到命令行。过程十分简单,但你需要编辑 /etc/fstab 文件,所以小心操作。还有,我假设你已经有正常工作的 Samba 了,可以手动从 Windows 网络挂载共享到你的 Linux 机器,还知道这个共享的主机 IP 地址。 + +准备好了吗?那就开始吧。 + +### 创建你的挂载点 + +我们要做的第一件事是创建一个文件夹,他将作为共享的挂载点。为了简单起见,我们将这个文件夹命名为 share,放在 /media 之下。打开你的终端执行以下命令: + +``` +sudo mkdir /media/share +``` + +### 一些安装 + +现在我们得安装允许跨平台文件共享的系统;这个系统是 cifs-utils。在终端窗口输入: + +``` +sudo apt-get install cifs-utils +``` + +这个命令同时还会安装 cifs-utils 所有的依赖。 + +安装完成之后,打开文件 /etc/nsswitch.conf 并找到这一行: + +``` +hosts: files mdns4_minimal [NOTFOUND=return] dns +``` + 
+编辑这一行,让它看起来像这样:
+
+```
+hosts: files mdns4_minimal [NOTFOUND=return] wins dns
+```
+
+现在你必须安装 winbind,让你的 Linux 机器可以在 DHCP 网络中解析 Windows 机器名。在终端里执行:
+
+```
+sudo apt-get install libnss-winbind winbind
+```
+
+用这个命令重启网络服务:
+
+```
+sudo service networking restart
+```
+
+### 挂载网络驱动器
+
+现在我们要映射网络驱动器。这里我们必须编辑 /etc/fstab 文件。在你做第一次编辑之前,用这个命令备份一下这个文件:
+
+```
+sudo cp /etc/fstab /etc/fstab.old
+```
+
+如果你需要恢复这个文件,执行以下命令:
+
+```
+sudo mv /etc/fstab.old /etc/fstab
+```
+
+在你的主目录创建一个认证信息文件 .smbcredentials。在这个文件里添加你的用户名和密码,就像这样(USER 和 PASSWORD 是实际的用户名和密码):
+
+```
+username=USER
+
+password=PASSWORD
+```
+
+你需要知道挂载这个驱动器的用户的组 ID(GID)和用户 ID(UID)。执行命令:
+
+```
+id USER
+```
+
+USER 是实际的用户名,你应该会看到类似这样的信息:
+
+```
+uid=1000(USER) gid=1000(GROUP)
+```
+
+USER 是实际的用户名,GROUP 是组名。在(USER)和(GROUP)之前的数字将会被用在 /etc/fstab 文件之中。
+
+是时候编辑 /etc/fstab 文件了。在你的编辑器中打开那个文件并添加下面这行到文件末尾(替换以下全大写字段以及远程机器的 IP 地址):
+
+```
+//192.168.1.10/SHARE /media/share cifs credentials=/home/USER/.smbcredentials,iocharset=utf8,gid=GID,uid=UID,file_mode=0777,dir_mode=0777 0 0
+```
+
+**注意**:上面这些内容应该在同一行上。
+
+保存并关闭那个文件。执行 sudo mount -a 命令,共享将被挂载。检查一下 /media/share,你应该能看到那个网络共享上的文件和文件夹了。
+
+### 共享很简单
+
+有了 cifs-utils 和 Samba,映射网络共享在一台 Linux 机器上简单得让人难以置信。现在,你再也不用在每次机器启动的时候手动重新挂载那些共享了。
+
+想了解更多网络提示和技巧,请订阅我们的 Data Center 简报。
+[订阅](https://secure.techrepublic.com/user/login/?regSource=newsletter-button&position=newsletter-button&appId=true&redirectUrl=http%3A%2F%2Fwww.techrepublic.com%2Farticle%2Fhow-to-permanently-mount-a-windows-share-on-linux%2F&)
+
+--------------------------------------------------------------------------------
+
+via: http://www.techrepublic.com/article/how-to-permanently-mount-a-windows-share-on-linux/
+
+作者:[Jack Wallen][a]
+译者:[alim0x](https://github.com/alim0x)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.techrepublic.com/search/?a=jack+wallen
From d37b8ed2443acda0425cd583081bfac88aa6e506 Mon Sep
17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 22:13:40 +0800 Subject: [PATCH 047/471] =?UTF-8?q?20160704-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md diff --git a/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md new file mode 100644 index 0000000000..07b0f4e5a3 --- /dev/null +++ b/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md @@ -0,0 +1,64 @@ +INSTALL MATE 1.14 IN UBUNTU MATE 16.04 (XENIAL XERUS) VIA PPA +================================================================= + +MATE Desktop 1.14 is now available for Ubuntu MATE 16.04 (Xenial Xerus). According to the release [announcement][1], it took about 2 months to release MATE Desktop 1.14 in a PPA because everything has been well tested, so you shouldn't encounter any issues. + +![](https://2.bp.blogspot.com/-v38tLvDAxHg/V1k7beVd5SI/AAAAAAAAX7A/1X72bmQ3ia42ww6kJ_61R-CZ6yrYEBSpgCLcB/s400/mate114-ubuntu1604.png) + +**The PPA currently provides MATE 1.14.1 (Ubuntu MATE 16.04 ships with MATE 1.12.x by default), which includes changes such as:** + +- client-side decoration apps now render correctly in all themes; +- touchpad configuration now supports edge and two-finger scrolling independently; +- python extensions in Caja can now be managed separately; +- all three window focus modes are selectable; +- MATE Panel now has the ability to change icon sizes for menubar and menu items; +- volume and Brightness OSD can now be enabled/disabled; +- many other improvements and bug fixes. 
+ +MATE 1.14 also includes improved support for GTK+3 across the entire desktop, as well as various other GTK+3 tweaks; however, the PPA packages are built with GTK+2 "to ensure compatibility with Ubuntu MATE 16.04 and all the 3rd party MATE applets, plugins and extensions", mentions the Ubuntu MATE blog. + +A complete MATE 1.14 changelog can be found [HERE][2]. + +### Upgrade to MATE Desktop 1.14.x in Ubuntu MATE 16.04 + +To upgrade to the latest MATE Desktop 1.14.x in Ubuntu MATE 16.04 using the official Xenial MATE PPA, open a terminal and use the following commands: + +``` +sudo apt-add-repository ppa:ubuntu-mate-dev/xenial-mate +sudo apt update +sudo apt dist-upgrade +``` + +**Note**: the mate-netspeed applet will be removed when upgrading. That's because the applet is now part of the mate-applets package, so it's still available. + +Once the upgrade finishes, restart your system. That's it! + +### How to revert the changes + +If you're not satisfied with MATE 1.14, you encountered some bugs, etc., and you want to go back to the MATE version available in the official repositories, you can purge the PPA and downgrade the packages. + +To do this, use the following commands: + +``` +sudo apt install ppa-purge +sudo ppa-purge ppa:ubuntu-mate-dev/xenial-mate +``` + +After all the MATE packages are downgraded, restart the system.
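Before running `ppa-purge`, it can help to confirm which apt source files under `/etc/apt/sources.list.d` actually reference the PPA. The sketch below is only an illustration (the `ppa_sources` helper is our own name, not part of any Ubuntu tooling), with the directory parameterized so the logic can be tried anywhere:

```python
from pathlib import Path

def ppa_sources(sources_dir, pattern):
    """Return the .list files under sources_dir whose contents mention pattern."""
    return sorted(
        str(p) for p in Path(sources_dir).glob("*.list")
        if pattern in p.read_text()
    )

# On a real Ubuntu MATE 16.04 install you would call:
# ppa_sources("/etc/apt/sources.list.d", "ubuntu-mate-dev/xenial-mate")
```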
+ +via [Ubuntu MATE blog][3] + +-------------------------------------------------------------------------------- + +via: http://www.webupd8.org/2016/06/install-mate-114-in-ubuntu-mate-1604.html + +作者:[Andrew][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.webupd8.org/p/about.html +[1]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/ +[2]: http://mate-desktop.com/blog/2016-04-08-mate-1-14-released/ +[3]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/ From f0c0375835f7eb5614c71cf09d1c259bb9bb3ca2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 22:20:09 +0800 Subject: [PATCH 048/471] =?UTF-8?q?20160704-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...3 Advanced Image Processing with Python.md | 122 ++++++++++++++++++ 1 file changed, 122 insertions(+) create mode 100644 sources/tech/20160623 Advanced Image Processing with Python.md diff --git a/sources/tech/20160623 Advanced Image Processing with Python.md b/sources/tech/20160623 Advanced Image Processing with Python.md new file mode 100644 index 0000000000..b431eb72f4 --- /dev/null +++ b/sources/tech/20160623 Advanced Image Processing with Python.md @@ -0,0 +1,122 @@ +Advanced Image Processing with Python +====================================== + +![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/Image-Search-Engine.png) + +Building an image processing search engine is no easy task. There are several concepts, tools, ideas and technologies that go into it. One of the major image-processing concepts is reverse image querying (RIQ) or reverse image search. Google, Cloudera, Sumo Logic and Birst are among the top organizations to use reverse image search. 
Great for analyzing images and making use of mined data, RIQ provides a very good insight into analytics. + +### Top Companies and Reverse Image Search + +There are many top tech companies that are using RIQ to best effect. For example, Pinterest first brought in visual search in 2014. It subsequently released a white paper in 2015, revealing the architecture. Reverse image search enabled Pinterest to obtain visual features from fashion objects and display similar product recommendations. + +As is generally known, Google Images uses reverse image search, allowing users to upload an image and then search for connected images. The submitted image is analyzed and a mathematical model is built from it using advanced algorithms. The image is then compared with innumerable others in the Google databases before results are matched and similar results obtained. + +**Here is a graph representation from the OpenCV 2.4.9 Features Comparison Report:** + +![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/search-engine-graph.jpg) + +### Algorithms & Python Libraries + +Before we get down to the workings of it, let us rush through the main elements that make building an image processing search engine with Python possible: + +### Patented Algorithms + +#### SIFT (Scale-Invariant Feature Transform) Algorithm + +1. A patented technology with nonfree functionality that uses image identifiers to identify similar images, even ones captured from different angles, sizes, depths and scales, so that they are included in the search results. Check the detailed video on SIFT here. +2. SIFT correctly matches the search criteria with a large database of features from many images. +3. Matching the same images across different viewpoints and matching invariant features to obtain search results is another SIFT feature. Read more about scale-invariant keypoints here. + +#### SURF (Speeded Up Robust Features) Algorithm + +1.
[SURF][1] is also patented with nonfree functionality and is a speeded-up version of SIFT. Unlike SIFT, SURF approximates the Laplacian of Gaussian with a Box Filter. + +2. SURF relies on the determinant of the Hessian Matrix for both its location and scale. + +3. Rotation invariance is not a requisite in many applications, so skipping the orientation computation speeds up the process. + +4. SURF includes several features that improve speed at each step. Three times faster than SIFT, SURF is great with rotation and blurring. It is not as good with illumination and viewpoint change though. + +5. OpenCV, a programming function library, provides SURF functionality. SURF.compute() and SURF.detect() can be used to find descriptors and keypoints. Read more about SURF [here][2]. + +### Open Source Algorithms + +#### KAZE Algorithm + +1. KAZE is an open source 2D multiscale and novel feature detection and description algorithm for nonlinear scale spaces. Efficient techniques in Additive Operator Splitting (AOS) and variable conductance diffusion are used to build the nonlinear scale space. + +2. Multiscale image processing basics are simple: create an image’s scale space while filtering the original image with the right function over increasing time or scale. + +#### AKAZE (Accelerated-KAZE) Algorithm + +1. As the name suggests, this is a faster approach to image search, finding matching keypoints between two images. AKAZE uses a binary descriptor and a nonlinear scale space that balances accuracy and speed. + +#### BRISK (Binary Robust Invariant Scalable Keypoints) Algorithm + +1. BRISK is great for description, keypoint detection and matching. + +2. A highly adaptive algorithm, with a scale-space FAST-based detector along with a bit-string descriptor, helps speed up the search significantly. + +3. Scale-space keypoint detection and keypoint description help optimize the performance with relation to the task at hand. + +#### FREAK (Fast Retina Keypoint) + +1.
This is a novel keypoint descriptor inspired by the human eye. A binary string cascade is efficiently computed by comparing image intensities. The FREAK algorithm allows faster computing with a lower memory load as compared to BRISK, SURF and SIFT. + +#### ORB (Oriented FAST and Rotated BRIEF) + +1. A fast binary descriptor, ORB is resistant to noise and rotation invariant. ORB builds on the FAST keypoint detector and the BRIEF descriptor, elements attributed to its low cost and good performance. + +2. Apart from the fast and precise orientation component, ORB efficiently computes the oriented BRIEF features and analyzes their variance and correlation. + +### Python Libraries + +#### OpenCV + +1. OpenCV is available for both academic and commercial use. An open source machine learning and computer vision library, OpenCV makes it easy for organizations to utilize and modify code. + +2. Over 2500 optimized algorithms, including state-of-the-art machine learning and computer vision algorithms, serve various image search purposes – face detection, object identification, camera movement tracking, finding similar images in an image database, following eye movements, scenery recognition, etc. + +3. Top companies like Google, IBM, Yahoo, Sony, Honda, Microsoft and Intel make wide use of OpenCV. + +4. OpenCV offers Python, Java, C, C++ and MATLAB interfaces while supporting Windows, Linux, Mac OS and Android. + +#### Python Imaging Library (PIL) + +1. The Python Imaging Library (PIL) supports several file formats while providing image processing and graphics solutions. The open source PIL adds image processing capabilities to your Python interpreter. +2. The standard procedures for image manipulation include image enhancement, transparency and masking handling, image filtering, per-pixel manipulation, etc. + +For detailed statistics and graphs, view the OpenCV 2.4.9 Features Comparison Report [here][3].
+ +### Building an Image Search Engine + +An image search engine helps pick similar images from a prepopulated image base. The most popular among these is Google’s well known image search engine. For starters, there are various approaches to build a system like this. To mention a few: + +1. Using image extraction, image description extraction, meta data extraction and search result extraction to build an image search engine. +2. Define your image descriptor, dataset indexing, define your similarity metric and then search and rank. +3. Select the image to be searched, select the directory for carrying out the search, search the directory for all pictures, create a picture feature index, evaluate the same feature for the search picture, match the pictures in the search and obtain the matched pictures. + +Our approach basically began with comparing grayscaled versions of the images, gradually moving on to complex feature matching algorithms like SIFT and SURF, and then finally settling down to an open source solution called BRISK. All these algorithms give efficient results with minor changes in performance and latency. An engine built on these algorithms has numerous applications like analyzing graphic data for popularity statistics, identification of objects in graphic contents, and many more. + +**Example**: An image search engine needs to be built by an IT company for a client. So if a brand logo image is submitted in the search, all related brand image searches show up as results. The obtained results can also be used for analytics by the client, allowing them to estimate the brand popularity as per the geographic location. It’s still early days though; RIQ or reverse image search has not been exploited to its full extent yet. + +This concludes our article on building an image search engine using Python. Check our blog section out for the latest on technology and programming.
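The grayscale-comparison starting point described above can be sketched without any imaging library at all. The average-hash example below is a simplified, hypothetical stand-in (a real engine would first decode and resize images with OpenCV or PIL; here an "image" is just a 2D list of gray values, and the function names are ours):

```python
def average_hash(pixels):
    """Collapse a 2D grid of gray values (0-255) into a bit string:
    each pixel contributes '1' if it is brighter than the mean, else '0'."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(hash_a, hash_b):
    """Count differing bits; a smaller distance means more similar images."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

# Two tiny 2x2 "images", identical except for one pixel:
img_a = [[10, 200], [10, 200]]
img_b = [[10, 200], [10, 10]]

print(hamming_distance(average_hash(img_a), average_hash(img_b)))  # → 1
```

Ranking a dataset by this distance against the query hash is the crude version of the "define your similarity metric and then search and rank" approach listed above.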
+ +Statistics Source: OpenCV 2.4.9 Features Comparison Report (computer-vision-talks.com) + +(Guidance and additional inputs by Ananthu Nair.) + +-------------------------------------------------------------------------------- + +via: http://www.cuelogic.com/blog/advanced-image-processing-with-python/ + +作者:[Snehith Kumbla][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.cuelogic.com/blog/author/snehith-kumbla/ +[1]: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html +[2]: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf +[3]: https://docs.google.com/spreadsheets/d/1gYJsy2ROtqvIVvOKretfxQG_0OsaiFvb7uFRDu5P8hw/edit#gid=10 From 2208862a8be7c82c9a062f74c4aa51568d1b2448 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 22:23:58 +0800 Subject: [PATCH 049/471] =?UTF-8?q?20160704-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...29 USE TASK MANAGER EQUIVALENT IN LINUX.md | 52 +++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100644 sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md diff --git a/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md new file mode 100644 index 0000000000..91488f82cf --- /dev/null +++ b/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md @@ -0,0 +1,52 @@ +USE TASK MANAGER EQUIVALENT IN LINUX +==================================== + +![](https://itsfoss.com/wp-content/uploads/2016/06/Task-Manager-in-Linux.jpg) + +These are some of the most frequently asked questions by Linux beginners, “**is there a task manager for Linux**”, “how do you open task manager in Linux”? + +People who are coming from Windows know how useful the task manager is.
You press Ctrl+Alt+Del to get to the task manager in Windows. This task manager shows you all the running processes and their memory consumption. You can choose to end a process from this task manager application. + +When you have just begun with Linux, you look for a **task manager equivalent in Linux** as well. An expert Linux user prefers the command line way to find processes, memory consumption, etc., but you don’t have to go that way, at least not when you are just starting with Linux. + +All major Linux distributions have a task manager equivalent. Mostly, **it is called System Monitor**, but it actually depends on your Linux distribution and the [desktop environment][1] it uses. + +In this article, we’ll see how to find and use the task manager in Linux with GNOME as the [desktop environment][2]. + +### TASK MANAGER EQUIVALENT IN LINUX WITH GNOME DESKTOP + +While using GNOME, press the super key (Windows key) and look for System Monitor: + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-monitor-gnome-fedora.png) + +When you start the System Monitor, it shows you all the running processes and their memory consumption. + +![](https://itsfoss.com/wp-content/uploads/2016/06/fedora-system-monitor.jpeg) + +You can select a process and click on End process to kill it. + +![](https://itsfoss.com/wp-content/uploads/2016/06/kill-process-fedora.png) + +You can also see some statistics about your system in the Resources tab, such as per-core CPU consumption, memory usage, network usage, etc. + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-stats-fedora.png) + +This was the graphical way. If you want to go the command line way, just run the command ‘top’ in a terminal and you can see all the running processes and their memory consumption. You can easily [kill processes in Linux][3] from the command line. + +This is all you need to know about the task manager equivalent in Fedora Linux. I hope you find this quick tutorial helpful.
If you have questions or suggestions, feel free to ask. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/task-manager-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 + +作者:[Abhishek Prakash][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://wiki.archlinux.org/index.php/desktop_environment +[2]: https://itsfoss.com/best-linux-desktop-environments/ +[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/ From 210ed0daf4ee349b1529045b827068ff97e57276 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 22:27:27 +0800 Subject: [PATCH 050/471] =?UTF-8?q?20160704-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...DERING TO DROP 32 BIT SUPPORT IN UBUNTU.md | 42 +++++++++++++++++++ 1 file changed, 42 insertions(+) create mode 100644 sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md diff --git a/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md b/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md new file mode 100644 index 0000000000..c7a3e8a499 --- /dev/null +++ b/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md @@ -0,0 +1,42 @@ +CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU +======================================================== + +![](https://itsfoss.com/wp-content/uploads/2016/06/Ubuntu-32-bit-goes-for-a-toss-.jpg) + +Yesterday, developer [Dimitri John Ledkov][1] wrote a message on the [Ubuntu Mailing list][2] calling for the end of i386 support by Ubuntu 18.10. 
Ledkov argues that more software is being developed with 64-bit support. He is also concerned that it will be difficult to provide security support for the aging i386 architecture. + +Ledkov also argues that building i386 images is not free, but takes quite a bit of Canonical’s resources. + +>Building i386 images is not “for free”, it comes at the cost of utilizing our build farm, QA and validation time. Whilst we have scalable build-farms, i386 still requires all packages, autopackage tests, and ISOs to be revalidated across our infrastructure. As well as take up mirror space & bandwidth. + +Ledkov offers a plan where the 16.10, 17.04, and 17.10 versions of Ubuntu will continue to have i386 kernels, netboot installers, and cloud images, but drop the i386 ISOs for desktop and server. The 18.04 LTS would then drop support for i386 kernels, netboot installers, and cloud images, but still provide the ability for i386 programs to run on a 64-bit architecture. Then, 18.10 would end the i386 port and limit legacy 32-bit applications to snaps, containers, and virtual machines. + +Ledkov’s plan has not been accepted yet, but it shows a definite push toward eliminating 32-bit support. + +### GOOD NEWS + +Don’t despair yet. This will not affect the distros used to resurrect your old system. [Martin Wimpress][3], the creator of [Ubuntu MATE][4], revealed during a discussion on Google+ that these changes will only affect mainline Ubuntu. + +>The i386 archive will continue to exist into 18.04 and flavours can continue to elect to build i386 isos. There is however a security concern, in that some larger applications (Firefox, Chromium, LibreOffice) are already presenting challenges in terms of applying some security patches to older LTS releases. So flavours are being asked to be mindful of the support period they can reasonably be expected to support i386 versions for.
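If you are wondering whether a given machine would be caught by such a phase-out, here is a quick sketch (our own illustration using only the standard library; it inspects nothing beyond the environment it runs in):

```python
import platform
import struct

# platform.machine() reports the hardware/kernel architecture (e.g. x86_64 or i686),
# while the size of a C pointer tells you whether this userspace is 32- or 64-bit.
machine = platform.machine()
userspace_bits = struct.calcsize("P") * 8

print("machine: {}, userspace: {}-bit".format(machine, userspace_bits))
if userspace_bits == 32:
    print("a 32-bit userspace like this one is what the proposal would phase out")
```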
+ +### THOUGHTS + +I understand why they need to make this move from a security standpoint, but it’s going to make people move away from mainline Ubuntu to either one of the flavors or a different architecture. Thankfully, we have alternative [lightweight Linux distributions][5]. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ubuntu-32-bit-support-drop/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 + +作者:[John Paul][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[1]: https://plus.google.com/+DimitriJohnLedkov +[2]: https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2016-June/016661.html +[3]: https://twitter.com/m_wimpress +[4]: http://ubuntu-mate.org/ +[5]: https://itsfoss.com/lightweight-linux-beginners/ From 1c334e4618c06e82310b0da30c42e79f28a05a62 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 22:32:24 +0800 Subject: [PATCH 051/471] =?UTF-8?q?20160704-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Command Line History by Going Incognito.md | 124 ++++++++++++++++++ 1 file changed, 124 insertions(+) create mode 100644 sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md diff --git a/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md b/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md new file mode 100644 index 0000000000..274ff3b175 --- /dev/null +++ b/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md @@ -0,0 +1,124 @@ +How to Hide Linux Command Line History by Going Incognito +================================================================ + 
+ +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-featured.jpg) + +If you’re a Linux command line user, you’ll agree that there are times when you do not want certain commands you run to be recorded in the command line history. There could be many reasons for this. For example, you’re at a certain position in your company, and you have some privileges that you don’t want others to abuse. Or, there are some critical commands that you don’t want to run accidentally while you’re browsing the history list. + +But is there a way to control what goes into the history list and what doesn’t? Or, in other words, can we turn on a web browser-like incognito mode in the Linux command line? The answer is yes, and there are many ways to achieve this, depending on what exactly you want. In this article we will discuss some of the popular solutions available. + +Note: all the commands presented in this article have been tested on Ubuntu. + +### Different ways available + +The first two ways we’ll describe here have already been covered in [one of our previous articles][1]. If you are already aware of them, you can skip over these. However, if you aren’t aware, you’re advised to go through them carefully. + +#### 1. Insert space before command + +Yes, you read it correctly. Insert a space at the beginning of a command, and it will be ignored by the shell, meaning the command won’t be recorded in history. However, there’s a dependency – this solution will only work if the HISTCONTROL environment variable is set to “ignorespace” or “ignoreboth,” which it is by default in most cases. + +So, a command like the following: + +``` +[space]echo "this is a top secret" +``` + +Won’t appear in the history if you’ve already done this command: + +``` +export HISTCONTROL=ignorespace +``` + +The below screenshot is an example of this behavior.
+ +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-bash-command-space.png) + +The fourth “echo” command was not recorded in the history as it was run with a space in the beginning. + +#### 2. Disable the entire history for the current session + +If you want to disable the entire history for a session, you can easily do that by setting the HISTSIZE environment variable to zero before you start with your command line work. To do that, run the following command: + +``` +export HISTSIZE=0 +``` + +HISTSIZE is the number of lines (or commands) that can be stored in the history list for an ongoing bash session. By default, this variable has a set value – for example, 1000 in my case. + +So, the command mentioned above will set the environment variable’s value to zero, and consequently nothing will be stored in the history list until you close the terminal. Keep in mind that you’ll also not be able to see the previously run commands by pressing the up arrow key or running the history command. + +#### 3. Erase the entire history after you’re done + +This can be seen as an alternative to the solution mentioned in the previous section. The only difference is that in this case you run a command AFTER you’re done with all your work. The following is the command in question: + +``` +history -cw +``` + +As already mentioned, this will have the same effect as the HISTSIZE solution mentioned above. + +#### 4. Turn off history only for the work you do + +While the solutions (2 and 3) described above do the trick, they erase the entire history, something which might be undesired in many situations. There might be cases in which you want to retain the history list up until the point you start your command line work. For situations like these you need to run the following command before starting with your work: + +``` +[space]set +o history +``` + +Note: [space] represents a blank space.
+ +The above command will disable the history temporarily, meaning whatever you do after running this command will not be recorded in history, although all the stuff executed prior to the above command will be there as it is in the history list. + +To re-enable the history, run the following command: + +``` +[Space]set -o history +``` + +This brings things back to normal again, meaning any command line work done after the above command will show up in the history. + +#### 5. Delete specific commands from history + +Now suppose the history list already contains some commands that you didn’t want to be recorded. What can be done in this case? It’s simple. You can go ahead and remove them. The following is how to accomplish this: + +``` +[space]history | grep "part of command you want to remove" +``` + +The above command will output a list of matching commands (that are there in the history list) with a number [num] preceding each of them. + +Once you’ve identified the command you want to remove, just run the following command to remove that particular entry from the history list: + +``` +history -d [num] +``` + +The following screenshot is an example of this. + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-delete-specific-commands.png) + +The second ‘echo’ command was removed successfully. + +Alternatively, you can just press the up arrow key to take a walk back through the history list, and once the command of your interest appears on the terminal, just press “Ctrl + U” to totally blank the line, effectively removing it from the list. + +### Conclusion + +There are multiple ways in which you can manipulate the Linux command line history to suit your needs. Keep in mind, however, that it’s usually not a good practice to hide or remove a command from history, although it’s also not wrong, per se, but you should be aware of what you’re doing and what effects it might have. 
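If you would rather clean the saved history file in bulk than delete entries one by one, the hypothetical helper below sketches the idea. It edits the on-disk file (e.g. ~/.bash_history) directly, so it only affects commands already written out; an open bash session may rewrite that file when it exits, and the function name is ours:

```python
def scrub_history(path, pattern):
    """Rewrite the history file at path, dropping lines that contain pattern.
    Returns the number of lines removed."""
    with open(path) as f:
        lines = f.readlines()
    kept = [line for line in lines if pattern not in line]
    with open(path, "w") as f:
        f.writelines(kept)
    return len(lines) - len(kept)

# Example: scrub_history("/home/USER/.bash_history", "top secret")
```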
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-command-line-history-incognito/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.maketecheasier.com/author/himanshu/ +[1]: https://www.maketecheasier.com/command-line-history-linux/ + + + + + From c4d1ef7fb577612467d2ec614dbb56a75c42537b Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 22:48:22 +0800 Subject: [PATCH 052/471] =?UTF-8?q?20160704-6=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ervices with Python RabbitMQ and Nameko.md | 279 ++++++++++++++++++ 1 file changed, 279 insertions(+) create mode 100644 sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md diff --git a/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md b/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md new file mode 100644 index 0000000000..5048465904 --- /dev/null +++ b/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md @@ -0,0 +1,279 @@ +Microservices with Python RabbitMQ and Nameko +============================================== + +>"Micro-services is the new black" - Splitting the project into independently scalable services is currently the best option to ensure the evolution of the code. In Python there is a framework called "Nameko" which makes it very easy and powerful. + +### Micro services + +>The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. - M.
Fowler + +I recommend reading [Fowler's posts][1] to understand the theory behind it. + +#### Ok, so what does it mean? + +In brief, a Microservice Architecture exists when your system is divided into small (single context bound) responsibility blocks; those blocks don't know each other, they only share a common point of communication, generally a message queue, and they know the communication protocol and interfaces. + +#### Give me a real-life example + +>The code is available on github: take a look at service and api folders for more info. + +Consider you have a REST API; that API has an endpoint receiving some data, and you need to perform some kind of computation with that data. Instead of blocking the caller you can do it asynchronously: return a status "OK - Your request will be processed" to the caller and do it in a background task. + +Also you want to send an email notification when the computation is finished without blocking the main computing process, so it is better to delegate the "email sending" to another service. + +#### Scenario + +![](http://brunorocha.org/static/media/microservices/micro_services.png) + +### Show me the code! + +Let's create the system to understand it in practice. + +#### Environment + +We need an environment with: + +- A running RabbitMQ +- Python VirtualEnv for services +- Python VirtualEnv for API + +#### Rabbit + +The easiest way to have RabbitMQ in a development environment is running its official Docker container; considering you have Docker installed, run: + +``` +docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management +``` + +Go to the browser and access it using the credentials guest:guest; if you can log in to the RabbitMQ dashboard, it means you have it running locally for development. + +![](http://brunorocha.org/static/media/microservices/RabbitMQManagement.png) + +#### The Service environment + +Now let's create the Micro Services to consume our tasks.
We'll have one service for computing and another for mail; follow the steps below.
+
+In a shell, create the root project directory:
+
+```
+$ mkdir myproject
+$ cd myproject
+```
+
+Create and activate a virtualenv (you can also use virtualenvwrapper):
+
+```
+$ virtualenv service_env
+$ source service_env/bin/activate
+```
+
+Install the Nameko framework and yagmail:
+
+```
+(service_env)$ pip install nameko
+(service_env)$ pip install yagmail
+```
+
+#### The service code
+
+Now, with that virtualenv prepared (note that you can run the services on one server and the API on another), let's code the Nameko RPC services.
+
+We are going to put both services in a single Python module, but you can also split them into separate modules and run them on separate servers if needed.
+
+In a file called `service.py`:
+
+```
+import yagmail
+from nameko.rpc import rpc, RpcProxy
+
+
+class Mail(object):
+    name = "mail"
+
+    @rpc
+    def send(self, to, subject, contents):
+        yag = yagmail.SMTP('myname@gmail.com', 'mypassword')
+        # read the above credentials from a safe place.
+        # Tip: take a look at the Dynaconf settings module
+        yag.send(to=to.encode('utf-8'),
+                 subject=subject.encode('utf-8'),
+                 contents=[contents.encode('utf-8')])
+
+
+class Compute(object):
+    name = "compute"
+    mail = RpcProxy('mail')
+
+    @rpc
+    def compute(self, operation, value, other, email):
+        operations = {'sum': lambda x, y: int(x) + int(y),
+                      'mul': lambda x, y: int(x) * int(y),
+                      'div': lambda x, y: int(x) / int(y),
+                      'sub': lambda x, y: int(x) - int(y)}
+        try:
+            result = operations[operation](value, other)
+        except Exception as e:
+            self.mail.send.async(email, "An error occurred", str(e))
+            raise
+        else:
+            self.mail.send.async(
+                email,
+                "Your operation is complete!",
+                "The result is: %s" % result
+            )
+            return result
+```
+
+Now, with the above service definitions in place, we need to run them as a Nameko RPC service.
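Before wiring anything into a broker, note that the arithmetic dispatch inside `Compute.compute` is plain Python and can be sanity-checked with no Nameko or RabbitMQ at all. A minimal sketch (the standalone function below is illustrative, not part of the article's code; under Python 3, `div` returns a float because `/` is true division):

```python
# Standalone sketch of the operation-dispatch pattern used by the
# Compute service above; no Nameko or RabbitMQ required.
# The operation names mirror the service; everything else is illustrative.

def compute(operation, value, other):
    operations = {'sum': lambda x, y: int(x) + int(y),
                  'mul': lambda x, y: int(x) * int(y),
                  'div': lambda x, y: int(x) / int(y),
                  'sub': lambda x, y: int(x) - int(y)}
    try:
        return operations[operation](value, other)
    except KeyError:
        # the real service emails the error instead of raising ValueError
        raise ValueError("unknown operation: %r" % operation)

print(compute('sum', 30, 10))  # 40
print(compute('mul', 30, 10))  # 300
```

This is exactly what the RPC calls later in the article exercise end-to-end; isolating it makes the failure mode for an unknown operation explicit.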
+
+>NOTE: We are going to run it in a console and leave it running, but in production it is recommended to run the service under supervisord or an alternative.
+
+Run the service and leave it running in a shell:
+
+```
+(service_env)$ nameko run service --broker amqp://guest:guest@localhost
+starting services: mail, compute
+Connected to amqp://guest:**@127.0.0.1:5672//
+Connected to amqp://guest:**@127.0.0.1:5672//
+```
+
+#### Testing it
+
+Go to another shell (with the same virtualenv activated) and test it using the Nameko shell:
+
+```
+(service_env)$ nameko shell --broker amqp://guest:guest@localhost
+Nameko Python 2.7.9 (default, Apr 2 2015, 15:33:21)
+[GCC 4.9.2] shell on linux2
+Broker: amqp://guest:guest@localhost
+>>>
+```
+
+You are now in an RPC client testing shell that exposes the `n.rpc` object; play with it:
+
+```
+>>> n.rpc.mail.send("name@email.com", "testing", "Just testing")
+```
+
+The above should send an email. We can also call the compute service to test it; note that it also spawns an asynchronous mail delivery with the result.
+ +``` +>>> n.rpc.compute.compute('sum', 30, 10, "name@email.com") +40 +>>> n.rpc.compute.compute('sub', 30, 10, "name@email.com") +20 +>>> n.rpc.compute.compute('mul', 30, 10, "name@email.com") +300 +>>> n.rpc.compute.compute('div', 30, 10, "name@email.com") +3 +``` + +### Calling the micro-service through the API + +In a different shell (or even a different server) prepare the API environment + +Create and activate a virtualenv (you can also use virtualenv-wrapper) + +``` +$ virtualenv api_env +$ source api_env/bin/activate +``` + +Install Nameko, Flask and Flasgger + +``` +(api_env)$ pip install nameko +(api_env)$ pip install flask +(api_env)$ pip install flasgger +``` + +>NOTE: In api you dont need the yagmail because it is service responsability + +Lets say you have the following code in a file `api.py` + +``` +from flask import Flask, request +from flasgger import Swagger +from nameko.standalone.rpc import ClusterRpcProxy + +app = Flask(__name__) +Swagger(app) +CONFIG = {'AMQP_URI': "amqp://guest:guest@localhost"} + + +@app.route('/compute', methods=['POST']) +def compute(): + """ + Micro Service Based Compute and Mail API + This API is made with Flask, Flasgger and Nameko + --- + parameters: + - name: body + in: body + required: true + schema: + id: data + properties: + operation: + type: string + enum: + - sum + - mul + - sub + - div + email: + type: string + value: + type: integer + other: + type: integer + responses: + 200: + description: Please wait the calculation, you'll receive an email with results + """ + operation = request.json.get('operation') + value = request.json.get('value') + other = request.json.get('other') + email = request.json.get('email') + msg = "Please wait the calculation, you'll receive an email with results" + subject = "API Notification" + with ClusterRpcProxy(CONFIG) as rpc: + # asynchronously spawning and email notification + rpc.mail.send.async(email, subject, msg) + # asynchronously spawning the compute task + result = 
rpc.compute.compute.async(operation, value, other, email) + return msg, 200 + +app.run(debug=True) +``` + +Put the above API to run in a different shell or server + +``` +(api_env) $ python api.py + * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) +``` + +and then access the url you will see the Flasgger UI and you can interact with the api and start producing tasks on queue to the service to consume. + +![](http://brunorocha.org/static/media/microservices/Flasgger_API_documentation.png) + +>NOTE: You can see the shell where service is running for logging, prints and error messages. You can also access the RabbitMQ dashboard to see if there is some message in process there. + +There is a lot of more advanced things you can do with Nameko framework you can find more information on + +Let's Micro Serve! + + +-------------------------------------------------------------------------------- + +via: http://brunorocha.org/python/microservices-with-python-rabbitmq-and-nameko.html + +作者: [Bruno Rocha][a] +译者: [译者ID](https://github.com/译者ID) +校对: [校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://facebook.com/rochacbruno +[1]:http://martinfowler.com/articles/microservices.html From 0b346375d6a8e41c231f5708917daf7de5a63d20 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 4 Jul 2016 23:03:33 +0800 Subject: [PATCH 053/471] =?UTF-8?q?20160704-7=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20 Detecting cats in images with OpenCV.md | 231 ++++++++++++++++++ 1 file changed, 231 insertions(+) create mode 100644 sources/tech/20160620 Detecting cats in images with OpenCV.md diff --git a/sources/tech/20160620 Detecting cats in images with OpenCV.md b/sources/tech/20160620 Detecting cats in images with OpenCV.md new file mode 100644 index 0000000000..37a3ce7fc2 --- /dev/null +++ b/sources/tech/20160620 Detecting 
cats in images with OpenCV.md @@ -0,0 +1,231 @@ +Detecting cats in images with OpenCV +======================================= + +![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg) + +Did you know that OpenCV can detect cat faces in images…right out-of-the-box with no extras? + +I didn’t either. + +But after [Kendrick Tan broke the story][1], I had to check it out for myself…and do a little investigative work to see how this cat detector seemed to sneak its way into the OpenCV repository without me noticing (much like a cat sliding into an empty cereal box, just waiting to be discovered). + +In the remainder of this blog post, I’ll demonstrate how to use OpenCV’s cat detector to detect cat faces in images. This same technique can be applied to video streams as well. + +>Looking for the source code to this post? [Jump right to the downloads section][2]. + + +### Detecting cats in images with OpenCV + +If you take a look at the [OpenCV repository][3], specifically within the [haarcascades directory][4] (where OpenCV stores all its pre-trained Haar classifiers to detect various objects, body parts, etc.), you’ll notice two files: + +- haarcascade_frontalcatface.xml +- haarcascade_frontalcatface_extended.xml + +Both of these Haar cascades can be used detecting “cat faces” in images. In fact, I used these very same cascades to generate the example image at the top of this blog post. + +Doing a little investigative work, I found that the cascades were trained and contributed to the OpenCV repository by the legendary [Joseph Howse][5] who’s authored a good many tutorials, books, and talks on computer vision. + +In the remainder of this blog post, I’ll show you how to utilize Howse’s Haar cascades to detect cats in images. + +Cat detection code + +Let’s get started detecting cats in images with OpenCV. 
Open up a new file, name it `cat_detector.py`, and insert the following code:
+
+```
+# import the necessary packages
+import argparse
+import cv2
+
+# construct the argument parse and parse the arguments
+ap = argparse.ArgumentParser()
+ap.add_argument("-i", "--image", required=True,
+    help="path to the input image")
+ap.add_argument("-c", "--cascade",
+    default="haarcascade_frontalcatface.xml",
+    help="path to cat detector haar cascade")
+args = vars(ap.parse_args())
+```
+
+Lines 2 and 3 import our necessary Python packages while Lines 6-12 parse our command line arguments. We only require a single argument here, the input `--image` that we want to detect cat faces in using OpenCV.
+
+We can also (optionally) supply a path to our Haar cascade via the `--cascade` switch. We’ll default this path to `haarcascade_frontalcatface.xml` and assume you have the `haarcascade_frontalcatface.xml` file in the same directory as your `cat_detector.py` script.
+
+Note: I’ve conveniently included the code, cat detector Haar cascade, and example images used in this tutorial in the “Downloads” section of this blog post. If you’re new to working with Python + OpenCV (or Haar cascades), I would suggest downloading the provided .zip file to make it easier to follow along.
+
+Next, let’s detect the cats in our input image:
+
+```
+# load the input image and convert it to grayscale
+image = cv2.imread(args["image"])
+gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
+
+# load the cat detector Haar cascade, then detect cat faces
+# in the input image
+detector = cv2.CascadeClassifier(args["cascade"])
+rects = detector.detectMultiScale(gray, scaleFactor=1.3,
+    minNeighbors=10, minSize=(75, 75))
+```
+
+On Lines 15 and 16 we load our input image from disk and convert it to grayscale (a normal pre-processing step before passing the image to a Haar cascade classifier, although not strictly required).
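If you are curious what the grayscale conversion actually computes: OpenCV's color-conversion documentation gives the weighted sum Y = 0.299 R + 0.587 G + 0.114 B (the standard BT.601 luma weights). The per-pixel effect can be sketched without OpenCV installed; the rounding below is a simplification, so results may differ from `cv2.cvtColor` by a gray level:

```python
# Per-pixel sketch of the BGR -> grayscale weighting applied by
# cv2.cvtColor(image, cv2.COLOR_BGR2GRAY); illustrative only.

def bgr_to_gray(b, g, r):
    # OpenCV's documented luma weights (ITU-R BT.601)
    return int(round(0.114 * b + 0.587 * g + 0.299 * r))

print(bgr_to_gray(255, 255, 255))  # white stays 255
print(bgr_to_gray(0, 0, 255))      # pure red darkens to 76
```

The point of the weighting is that green contributes most to perceived brightness, which is why a saturated red or blue region looks much darker than a green one after conversion.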
+
+Line 20 loads our Haar cascade from disk (in this case, the cat detector) and instantiates the cv2.CascadeClassifier object.
+
+Detecting cat faces in images with OpenCV is accomplished on Lines 21 and 22 by calling the detectMultiScale method of the detector object. We pass four parameters to the detectMultiScale method, including:
+
+1. Our image, `gray`, that we want to detect cat faces in.
+2. A scaleFactor for our [image pyramid][6] used when detecting cat faces. A larger scale factor will increase the speed of the detector, but could harm our true-positive detection accuracy. Conversely, a smaller scale will slow down the detection process, but increase true-positive detections. However, this smaller scale can also increase the false-positive detection rate as well. See the “A note on Haar cascades” section of this blog post for more information.
+3. The minNeighbors parameter controls the minimum number of detected bounding boxes in a given area for the region to be considered a “cat face”. This parameter is very helpful in pruning false-positive detections.
+4. Finally, the minSize parameter is pretty self-explanatory. This value ensures that each detected bounding box is at least width x height pixels (in this case, 75 x 75).
+
+The detectMultiScale function returns `rects`, a list of 4-tuples. These tuples contain the (x, y)-coordinates and the width and height of each detected cat face.
+
+Finally, let’s draw a rectangle surrounding each cat face in the image:
+
+```
+# loop over the cat faces and draw a rectangle surrounding each
+for (i, (x, y, w, h)) in enumerate(rects):
+    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
+    cv2.putText(image, "Cat #{}".format(i + 1), (x, y - 10),
+        cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 255), 2)
+
+# show the detected cat faces
+cv2.imshow("Cat Faces", image)
+cv2.waitKey(0)
+```
+
+Given our bounding boxes (i.e., `rects`), we loop over each of them individually on Line 25.
+
+We then draw a rectangle surrounding each cat face on Line 26, while Lines 27 and 28 display a label numbering each detected cat in the image.
+
+Finally, Lines 31 and 32 display the output image on our screen.
+
+### Cat detection results
+
+To test our OpenCV cat detector, be sure to download the source code to this tutorial using the “Downloads” section at the bottom of this post.
+
+Then, after you have unzipped the archive, you should have the following three files/directories:
+
+1. `cat_detector.py`: Our Python + OpenCV script used to detect cats in images.
+2. `haarcascade_frontalcatface.xml`: The cat detector Haar cascade.
+3. `images`: A directory of testing images that we’re going to apply the cat detector cascade to.
+
+From there, execute the following command:
+
+```
+$ python cat_detector.py --image images/cat_01.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_01.jpg)
+>Figure 1: Detecting a cat face in an image, even with parts of the cat occluded
+
+Notice that we have been able to detect the cat face in the image, even though the rest of its body is obscured.
+
+Let’s try another image:
+
+```
+$ python cat_detector.py --image images/cat_02.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_02.jpg)
+>Figure 2: A second example of detecting a cat in an image with OpenCV, this time the cat face is slightly different
+
+This cat’s face is clearly different from the other one, as it’s in the middle of a “meow”. In either case, the cat detector cascade is able to correctly find the cat face in the image.
+
+The same is true for this image as well:
+
+```
+$ python cat_detector.py --image images/cat_03.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_03.jpg)
+>Figure 3: Cat detection with OpenCV and Python
+
+Our final example demonstrates detecting multiple cats in an image using OpenCV and Python:
+
+```
+$ python cat_detector.py --image images/cat_04.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
+>Figure 4: Detecting multiple cats in the same image with OpenCV
+
+Note that the Haar cascade can return bounding boxes in an order that you may not like. In this case, the middle cat is actually labeled as the third cat. You can resolve this “issue” by sorting the bounding boxes according to their (x, y)-coordinates for a consistent ordering.
+
+#### A quick note on accuracy
+
+It’s important to note that in the comments section of the .xml files, Joseph Howse details that the cat detector Haar cascades can report cat faces where there are actually human faces.
+
+In this case, he recommends performing both face detection and cat detection, then discarding any cat bounding boxes that overlap with the face bounding boxes.
+
+#### A note on Haar cascades
+
+First published in 2001 by Paul Viola and Michael Jones, [Rapid Object Detection using a Boosted Cascade of Simple Features][7] has become one of the most cited papers in computer vision.
+
+This algorithm is capable of detecting objects in images, regardless of their location and scale. And perhaps most intriguingly, the detector can run in real-time on modern hardware.
+
+In their paper, Viola and Jones focused on training a face detector; however, the framework can also be used to train detectors for arbitrary “objects”, such as cars, bananas, road signs, etc.
+
+#### The problem?
+
+The biggest problem with Haar cascades is getting the detectMultiScale parameters right, specifically scaleFactor and minNeighbors. You can easily run into situations where you need to tune both of these parameters on an image-by-image basis, which is far from ideal when utilizing an object detector.
+
+The scaleFactor variable controls your [image pyramid][8] used to detect objects at various scales of an image. If your scaleFactor is too large, then you’ll only evaluate a few layers of the image pyramid, potentially leading to you missing objects at scales that fall in between the pyramid layers.
+
+On the other hand, if you set scaleFactor too low, then you evaluate many pyramid layers. This will help you detect more objects in your image, but it (1) makes the detection process slower and (2) substantially increases the false-positive detection rate, something that Haar cascades are known for.
+
+To remedy this, we often apply [Histogram of Oriented Gradients + Linear SVM detection][9] instead.
+
+The HOG + Linear SVM framework parameters are normally much easier to tune — and best of all, HOG + Linear SVM enjoys a much smaller false-positive detection rate. The only downside is that it’s harder to get HOG + Linear SVM to run in real-time.
+
+### Interested in learning more about object detection?
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/custom_object_detector_example.jpg)
+>Figure 5: Learn how to build custom object detectors inside the PyImageSearch Gurus course.
+
+If you’re interested in learning how to train your own custom object detectors, be sure to take a look at the PyImageSearch Gurus course.
+
+Inside the course, I have 15 lessons covering 168 pages of tutorials dedicated to teaching you how to build custom object detectors from scratch. You’ll discover how to detect road signs, faces, cars (and nearly any other object) in images by applying the HOG + Linear SVM framework for object detection.
+ +To learn more about the PyImageSearch Gurus course (and grab 10 FREE sample lessons), just click the button below: + +### Summary + +In this blog post, we learned how to detect cats in images using the default Haar cascades shipped with OpenCV. These Haar cascades were trained and contributed to the OpenCV project by [Joseph Howse][9], and were originally brought to my attention [in this post][10] by Kendrick Tan. + +While Haar cascades are quite useful, we often use HOG + Linear SVM instead, as it’s a bit easier to tune the detector parameters, and more importantly, we can enjoy a much lower false-positive detection rate. + +I detail how to build custom HOG + Linear SVM object detectors to recognize various objects in images, including cars, road signs, and much more [inside the PyImageSearch Gurus course][11]. + +Anyway, I hope you enjoyed this blog post! + +Before you go, be sure to signup for the PyImageSearch Newsletter using the form below to be notified when new blog posts are published. 
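As a closing aside to the scaleFactor discussion under "The problem?" above: the speed/accuracy trade-off can be quantified, because scaleFactor fixes roughly how many pyramid levels the detector must scan. A back-of-the-envelope sketch (this is the generic geometric estimate for a sliding-window detector, not OpenCV's exact internal loop):

```python
import math

# Rough count of image-pyramid levels a sliding-window detector scans:
# the search window grows by `scale_factor` per level, from `min_size`
# until it no longer fits inside `image_size`. Generic estimate only.

def pyramid_levels(image_size, min_size, scale_factor):
    return int(math.floor(math.log(image_size / float(min_size), scale_factor))) + 1

print(pyramid_levels(600, 75, 1.05))  # 43 levels: slow but fine-grained
print(pyramid_levels(600, 75, 1.3))   # 8 levels: fast but coarse
```

Going from scaleFactor 1.3 (as used in this tutorial) down to 1.05 multiplies the number of scales scanned by roughly five, which is where both the slowdown and the extra false positives come from.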
+ +-------------------------------------------------------------------------------- + +via: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/ + +作者:[Adrian Rosebrock][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.pyimagesearch.com/author/adrian/ +[1]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html +[2]: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/# +[3]: https://github.com/Itseez/opencv +[4]: https://github.com/Itseez/opencv/tree/master/data/haarcascades +[5]: http://nummist.com/ +[6]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/ +[7]: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf +[8]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/ +[9]: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/ +[10]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html +[11]: https://www.pyimagesearch.com/pyimagesearch-gurus/ + + + From 65d69dc85245aa256e43c3b2ec8c8f7a22ca0f8b Mon Sep 17 00:00:00 2001 From: Chunyang Wen Date: Tue, 5 Jul 2016 10:25:36 +0800 Subject: [PATCH 054/471] Translating: How to Hide Linux Command Line History by Going Incognito chunyang-wen (#4140) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * âFinish tranlating awk series part4 * Update Part 4 - How to Use Comparison Operators with Awk in Linux.md * translating: How to Hide Linux Command Line History by Going Incognito --- ... 
How to Hide Linux Command Line History by Going Incognito.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md b/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md index 274ff3b175..0d5ce6e6e1 100644 --- a/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md +++ b/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md @@ -1,3 +1,4 @@ +chunyang-wen translating How to Hide Linux Command Line History by Going Incognito ================================================================ From 79c2a31368988fdd76532ee653585b8ceba5bd49 Mon Sep 17 00:00:00 2001 From: Johnny Liao Date: Tue, 5 Jul 2016 10:44:43 +0800 Subject: [PATCH 055/471] Update 20160623 Advanced Image Processing with Python.md [Translating] Advanced Image Processing with Python --- sources/tech/20160623 Advanced Image Processing with Python.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160623 Advanced Image Processing with Python.md b/sources/tech/20160623 Advanced Image Processing with Python.md index b431eb72f4..0a3a722845 100644 --- a/sources/tech/20160623 Advanced Image Processing with Python.md +++ b/sources/tech/20160623 Advanced Image Processing with Python.md @@ -1,3 +1,5 @@ +Johnny-Liao translating... 
+ Advanced Image Processing with Python ====================================== From ac384892ccaa5920a5585ab3661c2f461393d26f Mon Sep 17 00:00:00 2001 From: Xin Wang <2650454635@qq.com> Date: Tue, 5 Jul 2016 11:04:17 +0800 Subject: [PATCH 056/471] Update 20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md --- sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md index 91488f82cf..aa1e999f50 100644 --- a/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md +++ b/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md @@ -1,3 +1,4 @@ +xinglianfly translate USE TASK MANAGER EQUIVALENT IN LINUX ==================================== From 10e5d95d9c2eb23efbd600fac4495a6e9011f6f5 Mon Sep 17 00:00:00 2001 From: gitfuture Date: Tue, 5 Jul 2016 12:39:34 +0800 Subject: [PATCH 057/471] finish translating by GitFuture --- ...P 2.0 Support for Nginx on Ubuntu 16.04.md | 304 ------------------ ...P 2.0 Support for Nginx on Ubuntu 16.04.md | 302 +++++++++++++++++ 2 files changed, 302 insertions(+), 304 deletions(-) delete mode 100644 sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md create mode 100644 translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md diff --git a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md deleted file mode 100644 index acb98b2d20..0000000000 --- a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md +++ /dev/null @@ -1,304 +0,0 @@ -Translating by GitFuture - -Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on 
Ubuntu 16.04 -===================================================================================== - - -The LEMP stack is an acronym which represents is a group of packages (Linux OS, Nginx web server, MySQL\MariaDB database and PHP server-side dynamic programming language) which are used to deploy dynamic web applications and web pages. - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-with-FastCGI-on-Ubuntu-16.04.png) ->Install Nginx with MariaDB 10, PHP 7 and HTTP 2.0 Support on Ubuntu 16.04 - -This tutorial will guide you on how to install a LEMP stack (Nginx with MariaDB and PHP7) on Ubuntu 16.04 server. - -Requirements - -[Installation of Ubuntu 16.04 Server Edition][1] - -### Step 1: Install the Nginx Web Server - -#### 1. Nginx is a modern and resources efficient web server used to display web pages to visitors on the internet. We’ll start by installing Nginx web server from Ubuntu official repositories by using the [apt command line][2]. - -``` -$ sudo apt-get install nginx -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-on-Ubuntu-16.04.png) ->Install Nginx on Ubuntu 16.04 - -#### 2. Next, issue the [netstat][3] and [systemctl][4] commands in order to confirm if Nginx is started and binds on port 80. - -``` -$ netstat -tlpn -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Network-Port-Connection.png) ->Check Nginx Network Port Connection - -``` -$ sudo systemctl status nginx.service -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Service-Status.png) ->Check Nginx Service Status - -Once you have the confirmation that the server is started you can open a browser and navigate to your server IP address or DNS record using HTTP protocol in order to visit Nginx default web page. 
- -``` -http://IP-Address -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-Nginx-Webpage.png) ->Verify Nginx Webpage - -### Step 2: Enable Nginx HTTP/2.0 Protocol - -#### 3. The HTTP/2.0 protocol which is build by default in the latest release of Nginx binaries on Ubuntu 16.04 works only in conjunction with SSL and promises a huge speed improvement in loading web SSL web pages. - -To enable the protocol in Nginx on Ubuntu 16.04, first navigate to Nginx available sites configuration files and backup the default configuration file by issuing the below command. - -``` -$ cd /etc/nginx/sites-available/ -$ sudo mv default default.backup -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Backup-Nginx-Sites-Configuration-File.png) ->Backup Nginx Sites Configuration File - -#### 4. Then, using a text editor create a new default page with the below instructions: - -``` -server { - listen 443 ssl http2 default_server; - listen [::]:443 ssl http2 default_server; - - root /var/www/html; - - index index.html index.htm index.php; - - server_name 192.168.1.13; - - location / { - try_files $uri $uri/ =404; - } - - ssl_certificate /etc/nginx/ssl/nginx.crt; - ssl_certificate_key /etc/nginx/ssl/nginx.key; - - ssl_protocols TLSv1 TLSv1.1 TLSv1.2; - ssl_prefer_server_ciphers on; - ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; - ssl_dhparam /etc/nginx/ssl/dhparam.pem; - ssl_session_cache shared:SSL:20m; - ssl_session_timeout 180m; - resolver 8.8.8.8 8.8.4.4; - add_header Strict-Transport-Security "max-age=31536000; - #includeSubDomains" always; - - - location ~ \.php$ { - include snippets/fastcgi-php.conf; - fastcgi_pass unix:/run/php/php7.0-fpm.sock; - } - - location ~ /\.ht { - deny all; - } - -} - -server { - listen 80; - listen [::]:80; - server_name 192.168.1.13; - return 301 https://$server_name$request_uri; -} -``` - 
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-Nginx-HTTP-2-Protocol.png) ->Enable Nginx HTTP 2 Protocol - -The above configuration snippet enables the use of `HTTP/2.0` by adding the http2 parameter to all SSL listen directives. - -Also, the last part of the excerpt enclosed in server directive is used to redirect all non-SSL traffic to SSL/TLS default host. Also, replace the `server_name` directive to match your own IP address or DNS record (FQDN preferably). - -#### 5. Once you finished editing Nginx default configuration file with the above settings, generate and list the SSL certificate file and key by executing the below commands. - -Fill the certificate with your own custom settings and pay attention to Common Name setting to match your DNS FQDN record or your server IP address that will be used to access the web page. - -``` -$ sudo mkdir /etc/nginx/ssl -$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt -$ ls /etc/nginx/ssl/ -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Generate-SSL-Certificate-and-Key.png) ->Generate SSL Certificate and Key for Nginx - -#### 6. Also, create a strong DH cypher, which was changed on the above configuration file on `ssl_dhparam` instruction line, by issuing the below command: - -``` -$ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-Diffie-Hellman-Key.png) ->Create Diffie-Hellman Key - -#### 7. Once the `Diffie-Hellman` key has been created, verify if Nginx configuration file is correctly written and can be applied by Nginx web server and restart the daemon to reflect changes by running the below commands. - -``` -$ sudo nginx -t -$ sudo systemctl restart nginx.service -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Configuration.png) ->Check Nginx Configuration - -#### 8. 
In order to test if Nginx uses HTTP/2.0 protocol issue the below command. The presence of `h2` advertised protocol confirms that Nginx has been successfully configured to use HTTP/2.0 protocol. All modern up-to-date browsers should support this protocol by default. - -``` -$ openssl s_client -connect localhost:443 -nextprotoneg '' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Test-Nginx-HTTP-2-Protocol.png) ->Test Nginx HTTP 2.0 Protocol - -### Step 3: Install PHP 7 Interpreter - -Nginx can be used with PHP dynamic processing language interpreter to generate dynamic web content with the help of FastCGI process manager obtained by installing the php-fpm binary package from Ubuntu official repositories. - -#### 9. In order to grab PHP7.0 and the additional packages that will allow PHP to communicate with Nginx web server issue the below command on your server console: - -``` -$ sudo apt install php7.0 php7.0-fpm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-PHP-FPM-for-Ngin.png) ->Install PHP 7 and PHP-FPM for Ngin - -#### 10. Once the PHP7.0 interpreter has been successfully installed on your machine, start and check php7.0-fpm daemon by issuing the below command: - -``` -$ sudo systemctl start php7.0-fpm -$ sudo systemctl status php7.0-fpm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Start-Verify-php-fpm-Service.png) ->Start and Verify php-fpm Service - -#### 11. The current configuration file of Nginx is already configured to use PHP FastCGI process manager in order to server dynamic content. - -The server block that enables Nginx to use PHP interpreter is presented on the below excerpt, so no further modifications of default Nginx configuration file are required. 
- -``` -location ~ \.php$ { - include snippets/fastcgi-php.conf; - fastcgi_pass unix:/run/php/php7.0-fpm.sock; - } -``` - -Below is a screenshot of what instructions you need to uncomment and modify is case of an original Nginx default configuration file. - - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-PHP-FastCGI-for-Nginx.png) ->Enable PHP FastCGI for Nginx - -#### 12. To test Nginx web server relation with PHP FastCGI process manager create a PHP `info.php` test configuration file by issuing the below command and verify the settings by visiting this configuration file using the below address: `http://IP_or domain/info.php`. - -``` -$ sudo su -c 'echo "" |tee /var/www/html/info.php' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-PHP-Info-File.png) ->Create PHP Info File - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-PHP-FastCGI-Info.png) ->Verify PHP FastCGI Info - -Also check if HTTP/2.0 protocol is advertised by the server by locating the line `$_SERVER[‘SERVER_PROTOCOL’]` on PHP Variables block as illustrated on the below screenshot. - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-HTTP-2.0-Protocol-Info.png) ->Check HTTP 2.0 Protocol Info - -#### 13. In order to install extra PHP7.0 modules use the `apt search php7.0` command to find a PHP module and install it. - -Also, try to install the following PHP modules which can come in handy in case you are planning to [install WordPress][5] or other CMS. - -``` -$ sudo apt install php7.0-mcrypt php7.0-mbstring -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-Modules.png) ->Install PHP 7 Modules - -#### 14. To register the PHP extra modules just restart PHP-FPM daemon by issuing the below command. - -``` -$ sudo systemctl restart php7.0-fpm.service -``` - -### Step 4: Install MariaDB Database - -#### 15. Finally, in order to complete our LEMP stack we need the MariaDB database component to store and manage website data. 
- -Install MariaDB database management system by running the below command and restart PHP-FPM service in order to use MySQL module to access the database. - -``` -$ sudo apt install mariadb-server mariadb-client php7.0-mysql -$ sudo systemctl restart php7.0-fpm.service -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-MariaDB-for-Nginx.png) ->Install MariaDB for Nginx - -#### 16. To secure the MariaDB installation, run the security script provided by the binary package from Ubuntu repositories which will ask you set a root password, remove anonymous users, disable root login remotely and remove test database. - -Run the script by issuing the below command and answer all questions with yes. Use the below screenshot as a guide. - -``` -$ sudo mysql_secure_installation -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Secure-MariaDB-Installation-for-Nginx.png) ->Secure MariaDB Installation for Nginx - -#### 17. To configure MariaDB so that ordinary users can access the database without system sudo privileges, go to MySQL command line interface with root privileges and run the below commands on MySQL interpreter: - -``` -$ sudo mysql -MariaDB> use mysql; -MariaDB> update user set plugin=’‘ where User=’root’; -MariaDB> flush privileges; -MariaDB> exit -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/MariaDB-User-Permissions.png) ->MariaDB User Permissions - -Finally, login to MariaDB database and run an arbitrary command without root privileges by executing the below command: - -``` -$ mysql -u root -p -e 'show databases' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-MariaDB-Databases.png) ->Check MariaDB Databases - -That’ all! Now you have a **LEMP** stack configured on **Ubuntu 16.04** server that allows you to deploy complex dynamic web applications that can interact with databases. 
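Before troubleshooting any of the services above, it can save time to confirm that the stack's binaries are actually installed and on the `PATH`. A rough sketch follows; the command names are assumptions for the Ubuntu 16.04 packages and may differ on other releases:

```shell
# Command names assumed for Ubuntu 16.04 packages; adjust for your release
for cmd in nginx php-fpm7.0 mysql openssl; do
    if command -v "$cmd" > /dev/null 2>&1; then
        echo "$cmd: found"
    else
        echo "$cmd: MISSING"
    fi
done
```

Any `MISSING` line points at the package to revisit before digging into configuration files.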
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Matei Cezar ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/cezarmatei/ -[1]: http://www.tecmint.com/installation-of-ubuntu-16-04-server-edition/ -[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/ -[3]: http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ -[4]: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ -[5]: http://www.tecmint.com/install-wordpress-using-lamp-or-lemp-on-rhel-centos-fedora/ diff --git a/translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md new file mode 100644 index 0000000000..f432833a50 --- /dev/null +++ b/translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md @@ -0,0 +1,302 @@ +在 Ubuntu 16.04 为 Nginx 服务器安装 LEMP 环境(MariaDB, PHP 7 并且支持 HTTP 2.0) +===================== + +LEMP 是字首组合词,代表一组软件包(Linux OS,Nginx 网络服务器,MySQL\MariaDB 数据库和 PHP 服务端动态编程语言),它被用来搭建动态的网络应用和网页。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-with-FastCGI-on-Ubuntu-16.04.png) +>在 Ubuntu 16.04 安装 Nginx 以及 MariaDB,PHP7 并且支持 HTTP 2.0 + +这篇教程会教你怎么在 Ubuntu 16.04 的服务器上安装 LEMP (Nginx 和 MariaDB 以及 PHP7)。 + +准备 + +[安装 Ubuntu 16.04 服务器版本][1] + +### 步骤 1:安装 Nginx 服务器 + +#### 1. 
Nginx 是一个先进的、资源优化的网络服务器程序,用来向因特网上的访客展示网页。我们从 Nginx 服务器的安装开始介绍,使用 [apt 命令][2] 从 Ubuntu 的官方软件仓库中获取 Nginx 程序。 + +``` +$ sudo apt-get install nginx +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-on-Ubuntu-16.04.png) +>在 Ubuntu 16.04 安装 Nginx + +#### 2. 然后输入 [netstat][3] 和 [systemctl][4] 命令,确认 Nginx 进程已经启动并且绑定在 80 端口。 + +``` +$ netstat -tlpn +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Network-Port-Connection.png) +>检查 Nginx 网络端口连接 + +``` +$ sudo systemctl status nginx.service +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Service-Status.png) +>检查 Nginx 服务状态 + +当你确认服务进程已经启动了,你可以打开一个浏览器,使用 HTTP 协议访问你的服务器 IP 地址或者域名,浏览 Nginx 的默认网页。 + +``` +http://IP-Address +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-Nginx-Webpage.png) +>验证 Nginx 网页 + +### 步骤 2:启用 Nginx HTTP/2.0 协议 + +#### 3. HTTP/2.0 协议默认包含在 Ubuntu 16.04 最新发行版的 Nginx 二进制文件中,它只能通过 SSL 连接并且保证加载网页的速度有巨大提升。 + +要启用Nginx 的这个协议,首先找到 Nginx 提供的网站配置文件,输入下面这个命令备份配置文件。 + +``` +$ cd /etc/nginx/sites-available/ +$ sudo mv default default.backup +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Backup-Nginx-Sites-Configuration-File.png) +>备份 Nginx 的网站配置文件 + +#### 4. 
然后,用文本编辑器新建一个默认文件,输入以下内容:
+
+```
+server {
+    listen 443 ssl http2 default_server;
+    listen [::]:443 ssl http2 default_server;
+
+    root /var/www/html;
+
+    index index.html index.htm index.php;
+
+    server_name 192.168.1.13;
+
+    location / {
+        try_files $uri $uri/ =404;
+    }
+
+    ssl_certificate /etc/nginx/ssl/nginx.crt;
+    ssl_certificate_key /etc/nginx/ssl/nginx.key;
+
+    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
+    ssl_prefer_server_ciphers on;
+    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
+    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
+    ssl_session_cache shared:SSL:20m;
+    ssl_session_timeout 180m;
+    resolver 8.8.8.8 8.8.4.4;
+    add_header Strict-Transport-Security "max-age=31536000" always;
+    # 如需对子域名也生效,可改用 "max-age=31536000; includeSubDomains"
+
+    location ~ \.php$ {
+        include snippets/fastcgi-php.conf;
+        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
+    }
+
+    location ~ /\.ht {
+        deny all;
+    }
+
+}
+
+server {
+    listen 80;
+    listen [::]:80;
+    server_name 192.168.1.13;
+    return 301 https://$server_name$request_uri;
+}
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-Nginx-HTTP-2-Protocol.png)
+>启用 Nginx HTTP 2 协议
+
+上面的配置片段向所有的 SSL 监听指令中添加 http2 参数来启用 `HTTP/2.0`。
+
+添加到服务器配置的最后一段,是用来将所有非 SSL 的流量重定向到 SSL/TLS 默认主机。然后用你主机的 IP 地址或者 DNS 记录(优先 FQDN)替换掉 `server_name` 指令的值。
+
+#### 5. 当你按照以上步骤编辑完 Nginx 的默认配置文件之后,用下面这些命令来生成、查看 SSL 证书和密钥。
+
+用你自定义的设置完成证书的制作,注意将常用名(Common Name)设置成和你的 DNS FQDN 记录或者服务器 IP 地址相匹配,访问网页时使用的就是它们。
+
+```
+$ sudo mkdir /etc/nginx/ssl
+$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
+$ ls /etc/nginx/ssl/
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Generate-SSL-Certificate-and-Key.png)
+>生成 Nginx 的 SSL 证书和密钥
+
+#### 6. 
通过输入以下命令使用一个强 DH 加密算法,在之前的配置文件 `ssl_dhparam` 这一行中进行修改。 + +``` +$ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-Diffie-Hellman-Key.png) +>创建 Diffie-Hellman 密钥 + +#### 7. 当 `Diffie-Hellman` 密钥生成之后,验证 Nginx 的配置文件是否正确、能否被 Nginx 网络服务程序应用。然后运行以下命令重启守护进程来观察有什么变化。 + +``` +$ sudo nginx -t +$ sudo systemctl restart nginx.service +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Configuration.png) +>检查 Nginx 的配置 + +#### 8. 键入下面的命令来测试 Nginx 使用的是 HTTP/2.0 协议。看到协议中有 `h2` 的话,表明 Nginx 已经成功配置使用 HTTP/2.0 协议。所有最新的浏览器默认都能够支持这个协议。 + +``` +$ openssl s_client -connect localhost:443 -nextprotoneg '' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Test-Nginx-HTTP-2-Protocol.png) +>测试 Nginx HTTP 2.0 协议 + +### 第 3 步:安装 PHP 7 解释器 + +通过 FastCGI 进程管理程序的协助,Nginx 能够使用 PHP 动态语言解释器生成动态网络内容。FastCGI 能够从 Ubuntu 官方仓库中安装 php-fpm 二进制包来获取。 + +#### 9. 在你的服务器控制台里输入下面的命令来获取 PHP7.0 和扩展包,这能够让 PHP 与 Nginx 网络服务进程通信, + +``` +$ sudo apt install php7.0 php7.0-fpm +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-PHP-FPM-for-Ngin.png) +>安装 PHP 7 以及 PHP-FPM + +#### 10. 当 PHP7.0 解释器安装成功后,输入以下命令启动或者检查 php7.0-fpm 守护进程: + +``` +$ sudo systemctl start php7.0-fpm +$ sudo systemctl status php7.0-fpm +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Start-Verify-php-fpm-Service.png) +>开启、验证 php-fpm 服务 + +#### 11. 当前的 Nginx 配置文件已经配置了使用 PHP FPM 来提供动态内容。 + +下面给出的这部分服务器配置让 Nginx 能够使用 PHP 解释器,所以不需要对 Nginx 配置文件作别的修改。 + +``` +location ~ \.php$ { + include snippets/fastcgi-php.conf; + fastcgi_pass unix:/run/php/php7.0-fpm.sock; + } +``` + +下面是的截图是 Nginx 默认配置文件的内容。你可能需要对其中的代码进行修改或者取消注释。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-PHP-FastCGI-for-Nginx.png) +>启用 PHP FastCGI + +#### 12. 
要测试启用了 PHP-FPM 的 Nginx 服务器,用下面的命令创建一个 PHP 测试文件 `info.php`,接着访问 `http://IP_or_domain/info.php` 这个网址来验证配置。
+
+```
+$ sudo su -c 'echo "<?php phpinfo(); ?>" | tee /var/www/html/info.php'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-PHP-Info-File.png)
+>创建 PHP Info 文件
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-PHP-FastCGI-Info.png)
+>检查 PHP FastCGI 的信息
+
+另外,检查服务器是否宣告支持 HTTP/2.0 协议:在 PHP 变量区域中找到 `$_SERVER[‘SERVER_PROTOCOL’]` 这一行,就像下面这张截图一样。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-HTTP-2.0-Protocol-Info.png)
+>检查 HTTP2.0 协议信息
+
+#### 13. 为了安装其它的 PHP7.0 模块,可以使用 `apt search php7.0` 命令查找 PHP 模块然后安装。
+
+如果你想要 [安装 WordPress][5] 或者别的 CMS,需要安装以下的 PHP 模块,这些模块迟早有用。
+
+```
+$ sudo apt install php7.0-mcrypt php7.0-mbstring
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-Modules.png)
+>安装 PHP 7 模块
+
+#### 14. 要注册这些额外的 PHP 模块,输入下面的命令重启 PHP-FPM 守护进程。
+
+```
+$ sudo systemctl restart php7.0-fpm.service
+```
+
+### 第 4 步:安装 MariaDB 数据库
+
+#### 15. 最后,我们需要 MariaDB 数据库来存储、管理网站数据,以完成 LEMP 环境的搭建。
+
+运行下面的命令安装 MariaDB 数据库管理系统,并重启 PHP-FPM 服务以便使用 MySQL 模块与数据库通信。
+
+```
+$ sudo apt install mariadb-server mariadb-client php7.0-mysql
+$ sudo systemctl restart php7.0-fpm.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-MariaDB-for-Nginx.png)
+>安装 MariaDB
+
+#### 16. 为了保证 MariaDB 安装的安全,运行 Ubuntu 软件仓库中的二进制包所提供的安全脚本,它会要求你设置一个 root 用户密码,移除匿名用户,禁用 root 用户远程登录,并移除测试数据库。
+
+输入下面的命令运行该脚本,并对所有问题回答 yes。可参照下面的截图。
+
+```
+$ sudo mysql_secure_installation
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Secure-MariaDB-Installation-for-Nginx.png)
+>MariaDB 的安全安装
+
+#### 17. 
配置 MariaDB,以便普通用户能够不使用系统的 sudo 权限来访问数据库。用 root 用户权限打开 MySQL 命令行界面,运行下面的命令:
+
+```
+$ sudo mysql
+MariaDB> use mysql;
+MariaDB> update user set plugin='' where User='root';
+MariaDB> flush privileges;
+MariaDB> exit
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/MariaDB-User-Permissions.png)
+>MariaDB 的用户权限
+
+最后,登录到 MariaDB 数据库,通过以下命令在没有 sudo 权限的情况下执行任意一条命令:
+
+```
+$ mysql -u root -p -e 'show databases'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-MariaDB-Databases.png)
+>查看 MariaDB 数据库
+
+好了!现在你已经在 **Ubuntu 16.04** 服务器上配置好了 **LEMP** 环境,可以部署能够与数据库交互的复杂动态网络应用了。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
+
+作者:[Matei Cezar ][a]
+译者:[GitFuture](https://github.com/GitFuture)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/cezarmatei/
+[1]: http://www.tecmint.com/installation-of-ubuntu-16-04-server-edition/
+[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/
+[3]: http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
+[4]: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/
+[5]: http://www.tecmint.com/install-wordpress-using-lamp-or-lemp-on-rhel-centos-fedora/

From c1984135e0fe5ba67cc39442f562b7d0c8aa0ffc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=91=A8=E5=AE=B6=E6=9C=AA?=
Date: Tue, 5 Jul 2016 13:20:24 +0800
Subject: [PATCH 058/471] Translating by GitFuture

---
 sources/tech/20160620 Monitor Linux With Netdata.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20160620 Monitor Linux With Netdata.md b/sources/tech/20160620 Monitor Linux With Netdata.md
index 95156ef108..dfe6920639 100644
--- 
a/sources/tech/20160620 Monitor Linux With Netdata.md +++ b/sources/tech/20160620 Monitor Linux With Netdata.md @@ -1,3 +1,5 @@ +Translating by GitFuture + Monitor Linux With Netdata === From 5ee075fddd06778e15b0b87a3c3e69492f8cb489 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 5 Jul 2016 19:04:26 +0800 Subject: [PATCH 059/471] PUB:20160610 Getting started with ReactOS @name1e5s @PurlingNayuki --- .../20160610 Getting started with ReactOS.md | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) rename {translated/tech => published}/20160610 Getting started with ReactOS.md (92%) diff --git a/translated/tech/20160610 Getting started with ReactOS.md b/published/20160610 Getting started with ReactOS.md similarity index 92% rename from translated/tech/20160610 Getting started with ReactOS.md rename to published/20160610 Getting started with ReactOS.md index ffeeb9021f..8d04902798 100644 --- a/translated/tech/20160610 Getting started with ReactOS.md +++ b/published/20160610 Getting started with ReactOS.md @@ -1,9 +1,7 @@ ReactOS 新手指南 ==================================== - -ReactOS 是一个比较年轻的开源操作系统,它提供了一个和 Windows NT 类似的图形界面,并且它的目标也是提供一个与 NT 功能和应用程序兼容性差不多的系统。这个项目在没有使用任何 Unix 的情况下实现了一个类似 Wine 的用户模式。它的开发者们从头实现了 NT 的架构以及对于 FAT32 的兼容,因此它也不需要负任何法律责任。这也就是说,它不是又双叒叕一个 Linux 发行版,而是一个独特的类 Windows 系统,并且是开源世界的一部分。这份快速指南是给那些想要一个易于使用的 Windows 的开源替代品的人准备的。 - +ReactOS 是一个比较年轻的开源操作系统,它提供了一个和 Windows NT 类似的图形界面,并且它的目标也是提供一个与 NT 功能和应用程序兼容性差不多的系统。这个项目在没有使用任何 Unix 架构的情况下实现了一个类似 Wine 的用户模式。它的开发者们从头实现了 NT 的架构以及对于 FAT32 的兼容,因此它也不需要负任何法律责任。这也就是说,它不是又双叒叕一个 Linux 发行版,而是一个独特的类 Windows 系统,并且是开源世界的一部分。这份快速指南是给那些想要一个易于使用的 Windows 的开源替代品的人准备的。 ### 安装系统 @@ -31,7 +29,6 @@ ReactOS 是一个比较年轻的开源操作系统,它提供了一个和 Windo 下一步是选择分区的格式,不过现在我们只能选择 FAT32。 - ![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_6.png) 再下一步是选择安装文件夹。我就使用默认的“/ReactOS”了,应该没有问题。 @@ -96,7 +93,7 @@ ReactOS 是一个比较年轻的开源操作系统,它提供了一个和 Windo ![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_20.png) -ReactOS 
还有一个好啊,就是我们可以通过“我的电脑”来操作注册表。 +ReactOS 还有一个好的地方,就是我们可以通过“我的电脑”来操作注册表。 ![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_21.png) From c51449deb46e6a5b8aef7afde9da31ab95e6109d Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 5 Jul 2016 19:48:11 +0800 Subject: [PATCH 060/471] PUB:20160620 PowerPC gains an Android 4.4 port with Big Endian support @dongfengweixiao --- ...ndroid 4.4 port with Big Endian support.md | 106 ++++++++++++++++++ ...ndroid 4.4 port with Big Endian support.md | 103 ----------------- 2 files changed, 106 insertions(+), 103 deletions(-) create mode 100644 published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md delete mode 100644 translated/tech/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md diff --git a/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md b/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md new file mode 100644 index 0000000000..7ad127370b --- /dev/null +++ b/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md @@ -0,0 +1,106 @@ +Android 4.4 移植到了 PowerPC 架构,支持大端架构 +=========================================================== + +eInfochips(一家软件厂商) 已将将 Android 4.4 系统移植到 PowerPC 架构,它将用于一家航空电子客户用来监视引擎的健康状况的人机界面(HMI:Human Machine Interface)。 + +eInfochips 已经开发了第一个面向 PowerPC 架构的 CPU 的 Android 移植版本,并支持大端(Big Endian)架构。此移植基于 Android 开源项目 [Android Open Source Project (AOSP)] 中 Android 4.4 (KitKat) 的代码,其功能内核的版本号为 3.12.19。 + +Android 开始兴起的时候,PowerPC 正在快速丢失和 ARM 架构共同角逐的市场。高端的网络客户和其它的企业级的嵌入式工具大多运行在诸如飞思卡尔(Freescale)的 PowerQUICC 和 QorIQ 这样的 PowerPC 处理器上,但是并不是 Linux 系统。不过,有几个 Android 的移植计划。在 2009 年,飞思卡尔和 Embedded Alley(一家软件厂商,当前是 Mentor Graphics 的 Linux 团队的一部分)[宣布了针对 PowerQUICC 和 QorIQ 芯片的移植版本][15],当前由 NXP 公司构建。另一个名为 [Android-PowerPC][16] 的项目也作出了相似的工作。 + +这些努力来的都并不容易,然而,当航空公司找到 eInfochips,希望能够为他们那些基于 PowerPC 的引擎监控系统添加 Android 应用程序以改善人机界面。该公司找出了这些早期的移植版本,然而,它们都相距甚远。所以,他们不得不从头开始新的移植。 + +最主要的问题是这些移植的 
Android 版本实在是太老了,和现在的 Android 差别太大了。Embedded Alley 移植的版本为 Android 1.5 (Cupcake),它于 2009 年发布,Linux 内核版本为 2.6.28。而 Android-PowerPC 项目最后一版的移植是 Android 2.2 (Froyo),它于 2010 年发布,内核版本为 2.6.32。此外,航空公司还有一些额外的技术诉求,例如对大端架构(Big Endian)的支持,这种老式的内存访问方式仍旧应用于网络通信和电信行业。然而那些早期的移植版本仅能够支持小端(Little Endian)的内存访问。 + +### 来自 eInfochips 的全新 PowerPC 架构移植 + +eInfochips, 它最为出名的应该是那些基于 ARM/骁龙处理器的模块计算机板卡,例如 [Eragon 600][17]。 它已经完成了基于 QorIQ 的 Android 4.4 系统移植,且发布了白皮书介绍了该项目。采用该项目的航空电子设备客户仍旧不愿透露名称,目前仍旧不清楚什么时候会公开此该移植版本。 + +![](http://files.linuxgizmos.com/einfochips_porting_android_on_powerpc.jpg) + +*图片来自 eInfochips 的博客日志* + +全新的 PowerPC Android 项目包括: + +- 基于 PowerPC [e5500][1] 仿生定制 +- 基于 Android KitKat 的大端支持 +- 使用 GCC 5.2 工具链开发 +- Android 4.4 框架的 PowerPC 支持 +- PowerPC e5500 的 Android 内核版本为 3.12.19 + +根据 eInfochips 的销售经理 Sooryanarayanan Balasubramanian 描述,该航空电子客户想要使用 Android 主要是因为熟悉的界面能够缩减培训的时间,并且让程序更新和增加新程序变得更加容易。他继续解释说:“这次成功的移植了 Android,使得今后的工作仅仅需要在应用层作出修修改改,而不再向以前一样需要在所有层面之间作相互的校验。”,“这是第一次在航空航天工业作出这些尝试,这需要在设计时尽量认真。” + +通过白皮书,可以知道将 Android 移植到 PowerPC 上需要对框架、核心库、开发工具链、运行时链接器、对象链接器和开源编译工具作出大量的修改。在字节码生成阶段,移植团队决定使用便携模式(portable mode)而不是快速解释模式(fast interpreter mode)。这是因为还没有 PowerPC 可用的快速解释模式,而使用开源的 [libffi][18] 的便携模式能够支持 PowerPC。 + +同时,团队还面临着在 Android 运行时 (ART) 环境和 Dalvik 虚拟机 (DVM) 环境之间的选择。他们发现,ART 环境下的便携模式还未经测试且缺乏良好的文档支持,所以最终选择了 DVM 环境下的便携模式。 + +白皮书中还提及了其它的一些在移植过程中遇到的困难,包括重新开发工具链,重写脚本以解决 AOSP 对编译器标志“非标准”使用的问题。最终完成的移植版本提供了 37 个服务,以及提供了无界面的 Android 部署,在前端使用用户空间的模拟 UI。 + + +### 目标硬件 + +感谢来自 [eInfochips 博客日志][2] 的图片(如下图所示),让我们能够确认此 PowerPC 的 Android 移植项目的硬件平台。这个板卡为 [X-ES Xpedite 6101][3],它是一个加固级 XMC/PrPMC 夹层模组。 + +![](http://hackerboards.com/files/xes_xpedite6101-sm.jpg) + +*X-ES Xpedite 6101 照片和框图* + +X-ES Xpedite 6101 板卡拥有一个可选的 NXP 公司基于 QorIQ T 系列通信处理器(T2081、T1042 和 T1022),它们分别集成了 8 个、4 个和 2 个 e6500 核心,稍有不同的是,T2081 的处理器主频为 1.8GHz,T1042/22 的处理器主频为 1.4GHz。所有的核心都集成了 AltiVec SIMD 引擎,这也就意味着它能够提供 DSP 级别的浮点运算性能。所有以上 3 款 X-ES 板卡都能够支持最高 8GB 的 DDR3-1600 ECC SDRAM 内存。外加 512MB NOR 和 32GB 的 NAND 闪存。 + 
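文中反复出现的大端(Big Endian)/小端(Little Endian)概念,可以用一小段 shell 脚本在任意 Linux 主机上直观验证。下面是一个简单的示意(假设系统提供 `printf` 和 `od`,绝大多数发行版都满足):

```shell
# 向 od 喂入两个字节 0x01 0x00,按本机字节序解释为一个 16 位整数:
# 小端机器(如 x86)会得到 0001,大端机器(如传统 PowerPC)会得到 0100
word=$(printf '\001\000' | od -An -tx2 | tr -d ' \n')

if [ "$word" = "0001" ]; then
    echo "little endian"
else
    echo "big endian"
fi
```

在以大端模式运行的 e5500/e6500 系统上,这段脚本会输出 big endian,而这正是本次移植相对于以往仅支持小端的移植所要额外处理的地方。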
+![](http://hackerboards.com/files/nxp_qoriq_t2081_block-sm.jpg) + +*NXP T2081 框图* + +板卡的 I/O 包括一个 x4 PCI Express Gen2 通道,以及双工的千兆级网卡、 RS232/422/485 串口和 SATA 3.0 接口。此外,它可选 3 款 QorIQ 处理器,Xpedite 6101 提供了三种 [X-ES 加固等级][19],分别是额定工作温度 0 ~ 55°C, -40 ~ 70°C, 或者是 -40 ~ 85°C,且包含 3 类冲击和抗振类别。 + +此外,我们已经介绍过的基于 X-ES QorIQ 的 XMC/PrPMC 板卡包括 [XPedite6401 和 XPedite6370][20],它们支持已有的板卡级 Linux 、风河的 VxWorks(一种实时操作系统) 和 Green Hills 的 Integrity(也是一种操作系统)。 + + +### 更多信息 + +eInfochips Android PowerPC 移植白皮书可以[在此][4]下载(需要先免费注册)。 + +### 相关资料 + +- [Commercial embedded Linux distro boosts virtualization][5] +- [Freescale unveils first ARM-based QorIQ SoCs][6] +- [High-end boards run Linux on 64-bit ARM QorIQ SoCs][7] +- [Free, Open Enea Linux taps Yocto Project and Linaro code][8] +- [LynuxWorks reverts to its LynxOS roots, changes name][9] +- [First quad- and octa-core QorIQ SoCs unveiled][10] +- [Free white paper shows how Linux won embedded][11] +- [Quad-core Snapdragon COM offers three dev kit options][12] +- [Tiny COM runs Linux on quad-core 64-bit Snapdragon 410][13] +- [PowerPC based IoT gateway COM ships with Linux BSP][14] + + +-------------------------------------------------------------------------------- + +via: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/ + +作者:[Eric Brown][a] +译者:[dongfengweixiao](https://github.com/dongfengweixiao) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/ +[1]: http://linuxdevices.linuxgizmos.com/low-cost-powerquicc-chips-offer-flexible-interconnect-options/ +[2]: https://www.einfochips.com/blog/k2-categories/aerospace/presenting-a-case-for-porting-android-on-powerpc-architecture.html +[3]: http://www.xes-inc.com/products/processor-mezzanines/xpedite6101/ +[4]: http://biz.einfochips.com/portingandroidonpowerpc +[5]: 
http://hackerboards.com/commercial-embedded-linux-distro-boosts-virtualization/ +[6]: http://hackerboards.com/freescale-unveils-first-arm-based-qoriq-socs/ +[7]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/ +[8]: http://hackerboards.com/free-open-enea-linux-taps-yocto-and-linaro-code/ +[9]: http://hackerboards.com/lynuxworks-reverts-to-its-lynxos-roots-changes-name/ +[10]: http://hackerboards.com/first-quad-and-octa-core-qoriq-socs-unveiled/ +[11]: http://hackerboards.com/free-white-paper-shows-how-linux-won-embedded/ +[12]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/ +[13]: http://hackerboards.com/tiny-com-runs-linux-and-android-on-quad-core-64-bit-snapdragon-410/ +[14]: http://hackerboards.com/powerpc-based-iot-gateway-com-ships-with-linux-bsp/ +[15]: http://linuxdevices.linuxgizmos.com/android-ported-to-powerpc/ +[16]: http://www.androidppc.com/ +[17]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/ +[18]: https://sourceware.org/libffi/ +[19]: http://www.xes-inc.com/capabilities/ruggedization/ +[20]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/ diff --git a/translated/tech/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md b/translated/tech/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md deleted file mode 100644 index 3b450cb603..0000000000 --- a/translated/tech/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md +++ /dev/null @@ -1,103 +0,0 @@ -PowerPC 获得大端 Android 4.4 系统的移植 -=========================================================== - -eInfochips(一家软件厂商) 已将将 Android 4.4 系统移植到 PowerPC 架构,它将作为一家航空电子客户的人机界面(HMI:Human Machine Interface)用来监视引擎的建康状况。 - -eInfochips 已经开发了第一个面向 PowerPC 架构的 CPU 的 Android 移植版本,它使用较新的大端 Android 系统。此移植基于 Android 开源项目[Android Open Source Project (AOSP)] 中 Android 4.4 (KitKat) 的代码,其功能内核的版本号为 3.12.19。 - -Android 开始兴起的时候,PowerPC正在快速失去和 ARM 
架构共通角逐的市场。高端的网络客户和以市场为导向的嵌入式工具大多运行在诸如飞思卡尔(Freescale)的 PowerQUICC 和 QorIQ 上,而不取决于 Linux 系统。一些 Android 的移植计划最终失败,然而在 2009 年,飞思卡尔和 Embedded Alley(一家软件厂商,当前是 Mentor Graphics 的 Linux 团队的一部分)[宣布了针对 PowerQUICC 和 QorIQ 芯片的移植版本][15],当前由 NXP 公司构建。另一个名为[Android-PowerPC][16] 的项目也作出了相似的工作。 - -这些努力来的都并不容易,然而,当航空公司找到 eInfochips,希望能够为他们那些基于 PowerPC 的引擎监控系统添加 Android 应用程序以改善人机界面。此公司找出了这些早期的移植版本,然而,他们都很难达到标准。所以,他们不得不从头开始新的移植。 - -最主要的问题是这些移植的 Android 版本实在是太老了,且 very different。Embedded Alley 移植的版本为 Android 1.5 (Cupcake),它于 2009 年发布,Linux 内核版本为 2.6.28。最后一版的移植为 Android-PowerPC 项目的 Android 2.2 (Froyo)它于 2010 年发布,内核版本为 2.6.32。此外,航空公司还有一些额外的技术诉求,例如对大端的支持. 现有的存储器接入方案仍旧应用于网络通信和电信行业。然而那些早期的移植版本仅能够支持小端的存储器访问。 - -### 来自 eInfochips 的全新 PowerPC 架构移植 - -eInfochips, 它最为出名的应该是那些基于 ARM/骁龙处理器的模块计算机板卡,例如 [Eragon 600][17]。 它已经完成了基于 QorIQ 的 Android 4.4 系统移植,且发布了白皮书描述了此项目。采用该项目的航空电子设备客户仍旧不愿透露姓名,目前仍旧不清楚什么时候会公开此该移植版本。 - - -![](http://hackerboards.com/files/einfochips_porting_android_on_powerpc-sm.jpg) ->图片来自 eInfochips 的博客日志 - -- 全新的 PowerPC Android 项目包括: -- 基于 PowerPC [e5500][1] 深度定制(bionic 定制不知道什么鬼,校对的时候也可以想想怎么处理) -- 基于 Android KitKat 的大端序支持 -- 开发工具链为 Gcc 5.2 -- Android 4.4 框架的 PowerPC 支持 -- PowerPC e5500 的 Android 内核版本为 3.12.19 - -根据 eInfochips 的销售经理 Sooryanarayanan Balasubramanian 描述,航空电子客户想要使用 Android 主要是因为熟悉的界面能够缩减培训的时间,并且让程序更新和提供新的程序变得更加容易。他继续解释说:“这次成功的移植了 Android,使得今后的工作仅仅需要在应用层作出修修改改,而不再向以前一样需要在所有层之间作相互的校验。”“这是第一次在航空航天工业作出这些尝试,这需要在设计时作出尽职的调查。” - -通过白皮书,可以知道将 Android 移植到 PowerPC 上需要对框架,核心库,开发工具链,运行时链接器,对象链接器和开源编译工具作出大量的修改。在字节码生成阶段,移植团队决定使用便携模式而不是快速的解释模式。这是因为,还没有 PowerPC 可用的快速解释模式,而使用 [libffi][18] 的便携模式能够支持 PowerPC。 - -同时,团队还面临在 Android 运行时 (ART) 环境和 Dalvik 虚拟机 (DVM) 环境之间的选择。他们发现,ART 环境下的便携模式还未经测试且缺乏良好的文档支持,所以最终选择了 DVM 环境下的便携模式。 - -白皮书中还提及了其它的一些在移植过程中遇到的困难,包括重新开发工具链,重写脚本以解决 AOSP “非标准”的使用编译器标志的问题。最终,移植提供了 37 个服务,and features a headless Android deployment along with an emulated UI in user space. 
- - -### 目标硬件 - -感谢来自 [eInfochips 博客日志][2] 的图片(如下图所示),我们能够确认此 PowerPC 的 Android 移植项目的硬件平台。这个板卡为 [X-ES Xpedite 6101][3],它是固实的 XMC/PrPMC 夹层模组。 - -![](http://hackerboards.com/files/xes_xpedite6101-sm.jpg) ->X-ES Xpedite 6101 照片和框图 - -X-ES Xpedite 6101 板卡拥有可选择的 NXP 公司基于 QorIQ T系列通信处理器 T2081, T1042, 和 T1022,他们分别拥有 8 个,4 个和 2 个 e6500 核心,稍有不同的是,T2081 的处理器主频为 1.8GHz,T1042/22 的处理器主频为 1.4GHz。所有的核心都集成了 AltiVec SIMD 引擎,这也就意味着它能够提供 DSP 级别的浮点运算性能。所有以上 3 款 X-ES 板卡都能够支持最高 8GB 的 DDR3-1600 ECC SDRAM 内存。外加 512MB NOR 和 32GB 的 NAND 闪存。 - -![](http://hackerboards.com/files/nxp_qoriq_t2081_block-sm.jpg) ->NXP T2081 框图 - -板卡的 I/O 包括一个 x4 PCI Express Gen2 通到,along with dual helpings of Gigabit Ethernet, RS232/422/485 串口和 SATA 3.0 接口。此外,它可选 3 款 QorIQ 处理器,Xpedite 6101 提供了三种[X-ES 加固等级][19],分别是额定工作温度 0 ~ 55°C, -40 ~ 70°C, 或者是 -40 ~ 85°C,且包含 3 类冲击和抗振类别。 - -此外,我们已经介绍过的基于 X-ES QorIQ 的 XMC/PrPMC 板卡包括[XPedite6401 和 XPedite6370][20],它们支持已有的板卡级 Linux Linux,Wind River VxWorks(一种实时操作系统) 和 Green Hills Integrity(也是一种操作系统)。 - - -### 更多信息 - -eInfochips Android PowerPC 移植白皮书可以[在此[4]下载(需要先免费注册)。 - -### Related posts: - -- [Commercial embedded Linux distro boosts virtualization][5] -- [Freescale unveils first ARM-based QorIQ SoCs][6] -- [High-end boards run Linux on 64-bit ARM QorIQ SoCs][7] -- [Free, Open Enea Linux taps Yocto Project and Linaro code][8] -- [LynuxWorks reverts to its LynxOS roots, changes name][9] -- [First quad- and octa-core QorIQ SoCs unveiled][10] -- [Free white paper shows how Linux won embedded][11] -- [Quad-core Snapdragon COM offers three dev kit options][12] -- [Tiny COM runs Linux on quad-core 64-bit Snapdragon 410][13] -- [PowerPC based IoT gateway COM ships with Linux BSP][14] - - --------------------------------------------------------------------------------- - -via: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/ - -作者:[Eric Brown][a] -译者:[dongfengweixiao](https://github.com/dongfengweixiao) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/ -[1]: http://linuxdevices.linuxgizmos.com/low-cost-powerquicc-chips-offer-flexible-interconnect-options/ -[2]: https://www.einfochips.com/blog/k2-categories/aerospace/presenting-a-case-for-porting-android-on-powerpc-architecture.html -[3]: http://www.xes-inc.com/products/processor-mezzanines/xpedite6101/ -[4]: http://biz.einfochips.com/portingandroidonpowerpc -[5]: http://hackerboards.com/commercial-embedded-linux-distro-boosts-virtualization/ -[6]: http://hackerboards.com/freescale-unveils-first-arm-based-qoriq-socs/ -[7]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/ -[8]: http://hackerboards.com/free-open-enea-linux-taps-yocto-and-linaro-code/ -[9]: http://hackerboards.com/lynuxworks-reverts-to-its-lynxos-roots-changes-name/ -[10]: http://hackerboards.com/first-quad-and-octa-core-qoriq-socs-unveiled/ -[11]: http://hackerboards.com/free-white-paper-shows-how-linux-won-embedded/ -[12]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/ -[13]: http://hackerboards.com/tiny-com-runs-linux-and-android-on-quad-core-64-bit-snapdragon-410/ -[14]: http://hackerboards.com/powerpc-based-iot-gateway-com-ships-with-linux-bsp/ -[15]: http://linuxdevices.linuxgizmos.com/android-ported-to-powerpc/ -[16]: http://www.androidppc.com/ -[17]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/ -[18]: https://sourceware.org/libffi/ -[19]: http://www.xes-inc.com/capabilities/ruggedization/ -[20]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/ From 9ad8f6b22e8b087be9024c94f2e62e3b80f957a0 Mon Sep 17 00:00:00 2001 From: runningwater Date: Tue, 5 Jul 2016 21:59:33 +0800 Subject: [PATCH 061/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...ng is a File” and Types of Files in Linux.md | 123 +++++++++--------- 1 file changed, 62 insertions(+), 61 deletions(-) rename {sources => translated}/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md (52%) mode change 100644 => 100755 diff --git a/sources/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md b/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md old mode 100644 new mode 100755 similarity index 52% rename from sources/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md rename to translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md index e1c3ee4661..c9a1b243ff --- a/sources/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md +++ b/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md @@ -1,54 +1,53 @@ -(翻译中 by runningwater) -Explanation of “Everything is a File” and Types of Files in Linux +诠释 Linux 中“一切都是文件”概念和相应的文件类型 ==================================================================== ![](http://www.tecmint.com/wp-content/uploads/2016/05/Everything-is-a-File-in-Linux.png) ->Everything is a File and Types of Files in Linux +>Linux 系统中一切都是文件并有相应的文件类型 -That is in fact true although it is just a generalization concept, in Unix and its derivatives such as Linux, everything is considered as a file. If something is not a file, then it must be running as a process on the system. +在 Unix 和它衍生的比如 Linux 系统中,一切都可以看做文件。虽然它仅仅只是一个泛泛的概念,但这是事实。如果有不是文件的,那它一定是正运行的进程。 -To understand this, take for example the amount of space on your root (/) directory is always consumed by different types of Linux files. When you create a file or transfer a file to your system, it occupies some space on the physical disk and it is considered to be in a specific format (file type). 
+要理解这点,可以举个例子,您的根目录(/) 的空间是由不同类型的 Linux 文件所占据的。当您创建一个文件或向系统传一个文件时,它在物理磁盘上占据的一些空间,可以认为是一个特定的格式(文件类型)。 -And also the Linux system does not differentiate between files and directories, but directories do one important job, that is store other files in groups in a hierarchy for easy location. All your hardware components are represented as files and the system communicates with them using these files. +虽然 Linux 系统中文件和目录没有什么不同,但目录还有一个重要的功能,那就是有结构性的分组存储其它文件,以方便查找访问。所有的硬件部件都表示为文件,系统使用这些文件来与硬件通信。 -The idea is an important description of a great property of Linux, where input/output resources such as your documents, directories (folders in Mac OS X and Windows), keyboard, monitor, hard-drives, removable media, printers, modems, virtual terminals and also inter-process and network communication are streams of bytes defined by file system space. +这些思想是对伟大的 Linux 财产的重要阐述,因此像文档、目录(Mac OS X 和 Windows 系统下是文件夹)、键盘、监视器、硬盘、可移动媒体设备、打印机、调制解调器、虚拟终端,还有进程间通信(IPC)和网络通信等输入/输出资源都在定义在文件系统空间下的字节流。 -A notable advantage of everything being a file is that the same set of Linux tools, utilities and APIs can be used on the above input/output resources. +一切都可看作是文件,其最显著的好处是对于上面所列出的输入/输出资源,只需要相同的一套 Linux 工具、实用程序和 API。 -Although everything in Linux is a file, there are certain special files that are more than just a file for example [sockets and named pipes][1]. +虽然在 Linux 中一切都可看作是文件,但也有一些特殊的文件,比如[套接字和命令管道][1]。 -### What are the different types of files in Linux? +### Linux 文件类型的不同之处? -In Linux there are basically three types of files: +Linux 系统中有三种基本的文件类型: -- Ordinary/Regular files -- Special files -- Directories +- 普通/常规文件 +- 特殊文件 +- 目录文件 -#### Ordinary/Regular Files +#### 普通/常规文件 -These are files data contain text, data or program instructions and they are the most common type of files you can expect to find on a Linux system and they include: +它们是包含文本、数据、程序指令等数据的文件,其在 Linux 系统中是最常见的一种。包括如下: -- Readable files -- Binary files -- Image files -- Compressed files and so on. 
+- 只读文件 +- 二进制文件 +- 图像文件 +- 压缩文件等等 -#### Special Files +#### 特殊文件 -Special files include the following: +特殊文件包括以下几种: -Block files : These are device files that provide buffered access to system hardware components. They provide a method of communication with device drivers through the file system. +块文件:设备文件,对访问系统硬件部件提供了缓存接口。他们提供了一种使用文件系统与设备驱动通信的方法。 -One important aspect about block files is that they can transfer a large block of data and information at a given time. +有关于块文件一个重要的性能就是它们能在指定时间内传输大块的数据和信息。 -Listing block files sockets in a directory: +列出某目录下的块文件: ``` # ls -l /dev | grep "^b" ``` -Sample Output +输出例子 ``` brw-rw---- 1 root disk 7, 0 May 18 10:26 loop0 @@ -74,15 +73,15 @@ brw-rw---- 1 root disk 1, 5 May 18 10:26 ram5 ... ``` -Character files : These are also device files that provide unbuffered serial access to system hardware components. They work by providing a way of communication with devices by transferring data one character at a time. +字符文件: 也是设备文件,对访问系统硬件组件提供了非缓冲串行接口。它们与设备的通信工作方式是一次只传输一个字符的数据。 -Listing character files sockets in a directory: +列出某目录下的字符文件: ``` # ls -l /dev | grep "^c" ``` -Sample Output +输出例子 ``` crw------- 1 root root 10, 235 May 18 15:54 autofs @@ -114,15 +113,15 @@ crw-rw-rw- 1 root tty 5, 2 May 18 17:40 ptmx crw-rw-rw- 1 root root 1, 8 May 18 10:26 random ``` -Symbolic link files : A symbolic link is a reference to another file on the system. Therefore, symbolic link files are files that point to other files, and they can either be directories or regular files. +符号链接文件 : 符号链接是指向系统上其他文件的引用。因此,符号链接文件是指向其它文件的文件,也可以是目录或常规文件。 -Listing symbolic link sockets in a directory: +列出某目录下的符号链接文件: ``` # ls -l /dev/ | grep "^l" ``` -Sample Output +输出例子 ``` lrwxrwxrwx 1 root root 3 May 18 10:26 cdrom -> sr0 @@ -135,27 +134,27 @@ lrwxrwxrwx 1 root root 15 May 18 15:54 stdin -> /proc/self/fd/0 lrwxrwxrwx 1 root root 15 May 18 15:54 stdout -> /proc/self/fd/1 ``` -You can make symbolic links using the `ln` utility in Linux as in the example below. 
+Linux 中使用 `ln` 工具就可以创建一个符号链接文件,如下所示: ``` # touch file1.txt -# ln -s file1.txt /home/tecmint/file1.txt [create symbolic link] -# ls -l /home/tecmint/ | grep "^l" [List symbolic links] +# ln -s file1.txt /home/tecmint/file1.txt [创建符号链接文件] +# ls -l /home/tecmint/ | grep "^l" [列出符号链接文件] ``` -In the above example, I created a file called `file1.txt` in `/tmp` directory, then created the symbolic link, `/home/tecmint/file1.txt` to point to `/tmp/file1.txt`. +在上面的例子中,首先我们在 `/tmp` 目录创建了一个名叫 `file1.txt` 的文件,然后创建符号链接文件,所以 `/home/tecmint/file1.txt` 指向 `/tmp/file1.txt` 文件。 -Pipes or Named pipes : These are files that allow inter-process communication by connecting the output of one process to the input of another. +套接字和命令管道 : 连接一个进行的输出和另一个进程的输入,允许进程间通信的文件。 -A named pipe is actually a file that is used by two process to communicate with each and it acts as a Linux pipe. +命名管道实际上是一个文件,用来使两个进程彼此通信,就像一个 Linux pipe(管道) 命令一样。 -Listing pipes sockets in a directory: +列出某目录下的管道文件: ``` # ls -l | grep "^p" ``` -Sample Output +输出例子 ``` prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe1 @@ -165,62 +164,64 @@ prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe4 prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe5 ``` -You can use the mkfifo utility to create a named pipe in Linux as follows. +在 Linux 中可以使用 `mkfifo` 工具来创建一个命名管道,如下所示: ``` # mkfifo pipe1 # echo "This is named pipe1" > pipe1 ``` -In the above example, I created a named pipe called pipe1, then I passed some data to it using the [echo command][2], after that the shell became un-interactive while processing the input. +在上的例子中,我们创建了一个名叫 `pipe1` 的命名管道,然后使用 [echo 命令][2] 加入一些数据,在这操作后,要使用这些输入数据就要用非交互的 shell 了。 -Then I opened another shell and run the another command to print out what was passed to pipe. + + +然后,我们打开另外的 shell 终端,运行另外的命令来打印出刚加入管道的数据。 ``` # while read line ;do echo "This was passed-'$line' "; done Date: Wed, 6 Jul 2016 02:18:11 +0800 Subject: [PATCH 062/471] translated --- ... 
Compound Expressions with Awk in Linux.md | 81 ------------------- ... Compound Expressions with Awk in Linux.md | 79 ++++++++++++++++++ 2 files changed, 79 insertions(+), 81 deletions(-) delete mode 100644 sources/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md create mode 100644 translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md diff --git a/sources/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md b/sources/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md deleted file mode 100644 index db9a863484..0000000000 --- a/sources/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md +++ /dev/null @@ -1,81 +0,0 @@ -martin - -How to Use Compound Expressions with Awk in Linux -==================================================== - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Compound-Expressions-with-Awk.png) - -All along, we have been looking at simple expressions when checking whether a condition has been meet or not. What if you want to use more then one expression to check for a particular condition in? - -In this article, we shall take a look at the how you can combine multiple expressions referred to as compound expressions to check for a condition when filtering text or strings. - -In Awk, compound expressions are built using the `&&` referred to as `(and)` and the `||` referred to as `(or)` compound operators. - -The general syntax for compound expressions is: - -``` -( first_expression ) && ( second_expression ) -``` - -Here, `first_expression` and `second_expression` must be true to make the whole expression true. - -``` -( first_expression ) || ( second_expression) -``` - -Here, one of the expressions either `first_expression` or `second_expression` must be true for the whole expression to be true. - -**Caution**: Remember to always include the parenthesis. 
- -The expressions can be built using the comparison operators that we looked at in Part 4 of the awk series. - -Let us now get a clear understanding using an example below: - -In this example, a have a text file named `tecmint_deals.txt`, which contains a list of some amazing random Tecmint deals, it includes the name of the deal, the price and type. - -``` -TecMint Deal List -No Name Price Type -1 Mac_OS_X_Cleanup_Suite $9.99 Software -2 Basics_Notebook $14.99 Lifestyle -3 Tactical_Pen $25.99 Lifestyle -4 Scapple $19.00 Unknown -5 Nano_Tool_Pack $11.99 Unknown -6 Ditto_Bluetooth_Altering_Device $33.00 Tech -7 Nano_Prowler_Mini_Drone $36.99 Tech -``` - -Say that we want only print and flag deals that are above $20 and of type “Tech” using the (**) sign at the end of each line. - -We shall need to run the command below. - -``` -# awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' tecmint_deals.txt - -6 Ditto_Bluetooth_Altering_Device $33.00 Tech * -7 Nano_Prowler_Mini_Drone $36.99 Tech * -``` - -In this example, we have used two expressions in a compound expression: - -- First expression, `($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/)` ; checks the for lines with deals with price above `$20`, and it is only true if the value of $3 which is the price matches the pattern `/^\$[2-9][0-9]*\.[0-9][0-9]$/` -- And the second expression, `($4 == “Tech”)` ; checks whether the deal is of type “`Tech`” and it is only true if the value of `$4` equals to “`Tech`”. -Remember, a line will only be flagged with the `(**)`, if first expression and second expression are true as states the principle of the `&&` operator. - -### Summary - -Some conditions always require building compound expressions for you to match exactly what you want. When you understand the use of comparison and compound expression operators then, filtering text or strings based on some difficult conditions will become easy. 
- -Hope you find this guide useful and for any questions or additions, always remember to leave a comment and your concern will be solved accordingly. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/combine-multiple-expressions-in-awk/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md b/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md new file mode 100644 index 0000000000..ed1ba4aa7c --- /dev/null +++ b/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md @@ -0,0 +1,79 @@ +如何使用 Awk 复合表达式 +==================================================== + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Compound-Expressions-with-Awk.png) + +一直以来在查对条件是否匹配时,我们寻求的都是简单的表达式。那如果你想用超过一个表达式,来查对特定的条件呢? 
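先看一个最小的可运行例子,直观感受一下复合表达式的效果(示例中的数据是虚构的,仅用于演示):

```shell
# 打印第 2 列大于 10 且第 3 列等于 "Tech" 的行的第 1 列
printf 'A 5 Tech\nB 12 Tech\nC 30 Misc\n' | awk '($2 > 10) && ($3 == "Tech") { print $1 }'
# 输出:B
```

只有同时满足两个条件的行(这里只有 B 这一行)才会被打印出来。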
+ +本文,我们将看看如何在过滤文本和字符串时,结合多个表达式,即复合表达式,用以查对条件。 + +Awk 的复合表达式可由表示`与`的组合操作符 `&&` 和表示`或`的 `||` 构成。 + +复合表达式的常规写法如下: + +``` +( first_expression ) && ( second_expression ) +``` + +为了保证整个表达式的正确,在这里必须确保 `first_expression` 和 `second_expression` 是正确的。 + +``` +( first_expression ) || ( second_expression) +``` + +为了保证整个表达式的正确,在这里必须确保 `first_expression` 或 `second_expression` 是正确的。 + +**注意**:切记要加括号。 + +表达式可以由比较操作符构成,具体可查看 awk 系列的第四部分。 + +现在让我们通过一个例子来加深理解: + +此例中,有一个文本文件 `tecmint_deals.txt`,文本中包含着一张随机的 Tecmint 交易清单,其中包含了名称、价格和种类。 + +``` +TecMint Deal List +No Name Price Type +1 Mac_OS_X_Cleanup_Suite $9.99 Software +2 Basics_Notebook $14.99 Lifestyle +3 Tactical_Pen $25.99 Lifestyle +4 Scapple $19.00 Unknown +5 Nano_Tool_Pack $11.99 Unknown +6 Ditto_Bluetooth_Altering_Device $33.00 Tech +7 Nano_Prowler_Mini_Drone $36.99 Tech +``` + +我们只想打印出价格超过 $20 的物品,并在其中种类为 “Tech” 的物品的行末用 (**) 打上标记。 + +我们将要执行以下命令。 + +``` +# awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' tecmint_deals.txt + +6 Ditto_Bluetooth_Altering_Device $33.00 Tech * +7 Nano_Prowler_Mini_Drone $36.99 Tech * +``` + +此例,在复合表达式中我们使用了两个表达式: + +- 表达式 1:`($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/)` ;查找交易价格超过 `$20` 的行,即只有当 `$3` 也就是价格满足 `/^\$[2-9][0-9]*\.[0-9][0-9]$/` 时值才为 true。 +- 表达式 2:`($4 == “Tech”)` ;查找是否有种类为 “`Tech`”的交易,即只有当 `$4` 等于 “`Tech`” 时值才为 true。 +切记,只有当 `&&` 操作符的两端状态,也就是两个表达式都是 true 的情况下,这一行才会被打上 `(**)` 标志。 + +### 总结 + +有些时候为了匹配你的真实想法,就不得不用到复合表达式。当你掌握了比较和复合表达式操作符的用法之后,在难的文本或字符串过滤条件也能轻松解决。 + +希望本向导对你有所帮助,如果你有任何问题或者补充,可以在下方发表评论,你的问题将会得到相应的解释。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/combine-multiple-expressions-in-awk/ + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 152232980f6b3ecf536dfe8d4e543d76920e9d59 Mon Sep 17 
00:00:00 2001 From: runningwater Date: Wed, 6 Jul 2016 09:59:41 +0800 Subject: [PATCH 063/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E9=A2=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...1 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md b/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md index c7a3e8a499..ad51164ed4 100644 --- a/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md +++ b/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md @@ -1,3 +1,4 @@ +(翻译中 by runningwater) CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU ======================================================== @@ -29,7 +30,7 @@ I understand why they need to make this move from a security standpoint, but it via: https://itsfoss.com/ubuntu-32-bit-support-drop/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 作者:[John Paul][a] -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 06a314e359903d5b08a6b4aa9403f58c76fd2a71 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 6 Jul 2016 18:54:48 +0800 Subject: [PATCH 064/471] PUB:20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember @mr-ping --- ...hat Every Linux Newbies Should Remember.md | 141 ++++++++++++++++++ ...hat Every Linux Newbies Should Remember.md | 141 ------------------ 2 files changed, 141 insertions(+), 141 deletions(-) create mode 100644 published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md delete mode 100644 translated/tech/20160616 10 Basic Linux Commands That Every 
Linux Newbies Should Remember.md diff --git a/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md b/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md new file mode 100644 index 0000000000..50ec85acaa --- /dev/null +++ b/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md @@ -0,0 +1,141 @@ +Linux 新手必知必会的 10 条 Linux 基本命令 +===================================================================== + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/4225072_orig.png) + + +Linux 对我们的生活产生了巨大的冲击。至少你的安卓手机使用的就是 Linux 核心。尽管如此,在第一次开始使用 Linux 时你还是会感到难以下手。因为在 Linux 中,通常需要使用终端命令来取代 Windows 系统中的点击启动图标操作。但是不必担心,这里我们会介绍 10 个 Linux 基本命令来帮助你开启 Linux 神秘之旅。 + + +### 帮助新手走出第一步的 10 个 Linux 基本命令 + +当我们谈论 Linux 命令时,实质上是在谈论 Linux 系统本身。这短短的 10 个 Linux 基本命令不会让你变成天才或者 Linux 专家,但是能帮助你轻松开始 Linux 之旅。使用这些基本命令会帮助新手们完成 Linux 的日常任务,由于它们的使用频率如此至高,所以我更乐意称他们为 Linux 命令之王! + +让我们开始学习这 10 条 Linux 基本命令吧。 + + +#### 1. sudo + +这条命令的意思是“以超级用户的身份执行”,是 SuperUserDo 的简写,它是新手将要用到的最重要的一条 Linux 命令。当一条单行命令需要 root 权限的时候,`sudo`命令就派上用场了。你可以在每一条需要 root 权限的命令前都加上`sudo`。 + +``` +$ sudo su +``` + + +#### 2. ls (list) + + +跟其他人一样,你肯定也经常想看看目录下都有些什么东西。使用列表命令,终端会把当前工作目录下所有的文件以及文件夹展示给你。比如说,我当前处在 /home 文件夹中,我想看看 /home 文件夹中都有哪些文件和目录。 + +``` +/home$ ls +``` + + +在 /home 中执行`ls`命令将会返回类似下面的内容: + +``` +imad lost+found +``` + + +#### 3. cd + +变更目录命令(cd)是终端中总会被用到的主要命令。它是最常用到的 Linux 基本命令之一。此命令使用非常简单,当你打算从当前目录跳转至某个文件夹时,只需要将文件夹键入此命令之后即可。如果你想跳转至上层目录,只需要在此命令之后键入两个点 (..) 就可以了。 +​ +举个例子,我现在处在 /home 目录中,我想移动到 /home 目录中的 usr 文件夹下,可以通过以下命令来完成操作。 + +``` +/home $ cd usr + +/home/usr $ +``` + + +#### 4. mkdir + +只是可以切换目录还是不够完美。有时候你会想要新建一个文件夹或子文件夹。此时可以使用 mkdir 命令来完成操作。使用方法很简单,只需要把新的文件夹名跟在 mkdir 命令之后就好了。 + +``` +~$ mkdir folderName +``` + + +#### 5. 
cp + +拷贝-粘贴(copy-and-paste)是我们组织文件需要用到的重要命令。使用 `cp` 命令可以帮助你在终端当中完成拷贝-粘贴操作。首先确定你想要拷贝的文件,然后键入打算粘贴此文件的目标位置。 + +``` +$ cp src des +``` + +注意:如果目标目录对新建文件需要 root 权限时,你可以使用 `sudo` 命令来完成文件拷贝操作。 + + +#### 6. rm + +rm 命令可以帮助你移除文件甚至目录。如果不希望每删除一个文件都提示确认一次,可以用`-f`参数来强制执行。也可以使用 `-r` 参数来递归的移除文件夹。 + +``` +$ rm myfile.txt +``` + + +#### 7. apt-get + +这个命令会依据发行版的不同而有所区别。在基于 Debian 的发行版中,我们拥有 Advanced Packaging Tool(APT)包管理工具来安装、移除和升级包。apt-get 命令会帮助你安装需要在 Linux 系统中运行的软件。它是一个功能强大的命令行,可以用来帮助你对软件执行安装、升级和移除操作。 + +在其他发行版中,例如 Fedora、Centos,都各自不同的包管理工具。Fedora 之前使用的是 yum,不过现在 dnf 成了它默认的包管理工具。 + +``` +$ sudo apt-get update + +$ sudo dnf update +``` + + +#### 8. grep + +当你需要查找一个文件,但是又忘记了它具体的位置和路径时,`grep` 命令会帮助你解决这个难题。你可以提供文件的关键字,使用`grep`命令来查找到它。 + +``` +$ grep user /etc/passwd +``` + + +#### 9. cat + +作为一个用户,你应该会经常需要浏览脚本内的文本或者代码。`cat`命令是 Linux 系统的基本命令之一,它的用途就是将文件的内容展示给你。 + +``` +$ cat CMakeLists.txt +``` + + +#### 10. poweroff + +最后一个命令是 `poweroff`。有时你需要直接在终端中执行关机操作。此命令可以完成这个任务。由于关机操作需要 root 权限,所以别忘了在此命令之前添加`sudo`。 + +``` +$ sudo poweroff +``` + + +### 总结 + +如我在文章开始所言,这 10 条命令并不会让你立即成为一个 Linux 大拿,但它们会让你在初期快速上手 Linux。以这些命令为基础,给自己设置一个目标,每天学习一到三条命令,这就是此文的目的所在。在下方评论区分享有趣并且有用的命令。别忘了跟你的朋友分享此文。 + + +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember + +作者:[Commenti][a] +译者:[mr-ping](https://github.com/mr-ping) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember#comments +[1]: http://linuxandubuntu.com/home/category/linux diff --git a/translated/tech/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md b/translated/tech/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md deleted file mode 100644 index c69b7a6e54..0000000000 --- 
a/translated/tech/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md +++ /dev/null @@ -1,141 +0,0 @@ -Linux新手必知必会的10条Linux基本命令 -===================================================================== - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/4225072_orig.png) - - -[Linux][1]对我们的生活产生了巨大的冲击。至少你的安卓手机使用的就是Linux核心。尽管如此,在第一次开始使用Linux时你还是会感到难以下手。因为在Linux中,通常需要使用终端命令来取代Windows系统中的点击启动图标操作。但是不必担心,这里我们会介绍10个Linux基本命令来帮助你开启Linux神秘之旅。 - - -### 帮助新手走出第一步的10个Linux基本命令 - -当我们谈论Linux命令时,实质上是在谈论Linux系统本身。这短短的10个Linux基本命令不会让你变成天才或者Linux专家,但是能帮助你轻松开始Linux之旅。使用这些基本命令会帮助新手们完成Linux的日常任务,由于它们的使用频率如此至高,所以我更乐意称他们为Linux命令之王! - -让我们开始学习这10条Linux基本命令吧。 - - -#### 1. sudo - -这条命令的意思是“以超级用户的身份执行”,是 SuperUserDo 的简写,它是新手将要用到的最重要的一条Linux命令。当一条单行命令需要root权限的时候,`sudo`命令就派上用场了。你可以在每一条需要root权限的命令前都加上`sudo`。 - -``` -$ sudo su -``` - - -#### 2. ls (list) - - -跟其他人一样,你肯定也经常想看看目录下都有些什么东西。使用列表命令,终端会把当前工作目录下所有的文件以及文件夹展示给你。比如说,我当前处在 /home 文件夹中,我想看看 /home文件夹中都有哪些文件和目录。 - -``` -/home$ ls -``` - - -在/home中执行`ls`命令将会返回以下内容 - -``` -imad lost+found -``` - - -#### 3. cd - -变更目录命令(cd)是终端中总会被用到的主要命令。他是最常用到的Linux基本命令之一。此命令使用非常简单,当你打算从当前目录跳转至某个文件夹时,只需要将文件夹键入此命令之后即可。如果你想跳转至上层目录,只需要在此命令之后键入两个点(..)就可以了。 -​ -举个例子,我现在处在/home目录中,我想移动到/home目录中的usr文件夹下,可以通过以下命令来完成操作。 - -``` -/home $ cd usr - -/home/usr $ -``` - - -#### 4. mkdir - -只是可以切换目录还是不够完美。有时候你会想要新建一个文件夹或子文件夹。此时可以使用mkdir命令来完成操作。使用方法很简单,只需要把新的文件夹名跟在mkdir命令之后就好了。 - -``` -~$ mkdir folderName -``` - - -#### 5. cp - -拷贝-粘贴(copy-and-paste)是我们组织文件需要用到的重要命令。使用 `cp` 命令可以帮助你在终端当中完成拷贝-粘贴操作。首先确定你想要拷贝的文件,然后键入打算粘贴此文件的目标位置。 - -``` -$ cp src des -``` - -注意:如果目标目录对新建文件需要root权限时,你可以使用`sudo`命令来完成文件拷贝操作。 - - -#### 6. rm - -rm命令可以帮助你移除文件甚至目录。如果文件需要root权限才能移除,可以用`-f`参数来强制执行。也可以使用`-r`参数来递归的移除文件夹。 - -``` -$ rm myfile.txt -``` - - -#### 7. 
apt-get - -这个命令会依据发行版的不同而有所区别。在基于Debian的发行版中,我们拥有Advanced Packaging Tool(APT)包管理工具来安装、移除和升级包。apt-get命令会帮助你安装需要在Linux系统中运行的软件。它是一个功能强大的命令行,可以用来帮助你对软件执行安装、升级和移除操作。 - -在其他发行版中,例如Fedora、Centos,都各自不同的包管理工具。Fedora之前使用的是yum,不过现在dnf成了它默认的包管理工具。 - -``` -$ sudo apt-get update - -$ sudo dnf update -``` - - -#### 8. grep - -当你需要查找一个文件,但是又忘记了它具体的位置和路径时,`grep`命令会帮助你解决这个难题。你可以提供文件的关键字,使用`grep`命令来查找到它。 - -``` -$ grep user /etc/passwd -``` - - -#### 9. cat - -作为一个用户,你应该会经常需要浏览脚本内的文本或者代码。`cat`命令是Linux系统的基本命令之一,它的用途就是将文件的内容展示给你。 - -``` -$ cat CMakeLists.txt -``` - - -#### 10. poweroff - -最后一个命令是 `poweroff`。有时你需要直接在终端中执行关机操作。此命令可以完成这个任务。由于关机操作需要root权限,所以别忘了在此命令之前添加`sudo`。 - -``` -$ sudo poweroff -``` - - -### 总结 - -如我在文章开始所言,这10条命令并不会让你立即成为一个Linux大拿。它们会让你在初期快速上手Linux。以这些命令为基础,给自己设置一个目标,每天学习一到三条命令,这就是此文的目的所在。在下方评论区分享有趣并且有用的命令。别忘了跟你的朋友分享此文。 - - --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember - -作者:[Commenti][a] -译者:[mr-ping](https://github.com/mr-ping) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember#comments -[1]: http://linuxandubuntu.com/home/category/linux From 9095deb4f7fdbc910100e8ccdae643ca21c5a63e Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Jul 2016 10:00:13 +0800 Subject: [PATCH 065/471] PUB:20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7 @HaohongWANG --- ... Web server on Ubuntu 15.04 or CentOS 7.md | 213 ++++++++++++++++++ ... 
Web server on Ubuntu 15.04 or CentOS 7.md | 210 ----------------- 2 files changed, 213 insertions(+), 210 deletions(-) create mode 100644 published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md delete mode 100644 translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md diff --git a/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md b/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md new file mode 100644 index 0000000000..3db9318138 --- /dev/null +++ b/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md @@ -0,0 +1,213 @@ +如何在 Ubuntu 15.04/CentOS 7 中安装 Lighttpd Web 服务器 +================================================================================= + +Lighttpd 是一款开源 Web 服务器软件。Lighttpd 安全快速,符合行业标准,适配性强并且针对高配置环境进行了优化。相对于其它的 Web 服务器而言,Lighttpd 占用内存更少;因其对 CPU 占用小和对处理速度的优化而在效率和速度方面从众多 Web 服务器中脱颖而出。而 Lighttpd 诸如 FastCGI、CGI、认证、输出压缩、URL 重写等高级功能更是那些面临性能压力的服务器的福音。 + +以下便是我们在运行 Ubuntu 15.04 或 CentOS 7 Linux 发行版的机器上安装 Lighttpd Web 服务器的简要流程。 + +### 安装Lighttpd + +#### 使用包管理器安装 + +这里我们通过使用包管理器这种最简单的方法来安装 Lighttpd。只需以 sudo 模式在终端或控制台中输入下面的指令即可。 + +**CentOS 7** + +由于 CentOS 7.0 官方仓库中并没有提供 Lighttpd,所以我们需要在系统中安装额外的软件源 epel 仓库。使用下面的 yum 指令来安装 epel。 + + # yum install epel-release + +然后,我们需要更新系统及为 Lighttpd 的安装做前置准备。 + + # yum update + # yum install lighttpd + +![Install Lighttpd Centos](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-centos.png) + +**Ubuntu 15.04** + +Ubuntu 15.04 官方仓库中包含了 Lighttpd,所以只需更新本地仓库索引并使用 apt-get 指令即可安装 Lighttpd。 + + # apt-get update + # apt-get install lighttpd + +![Install lighttpd ubuntu](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-ubuntu.png) + +#### 从源代码安装 Lighttpd + +如果想从 Lighttpd 源码安装最新版本(例如 1.4.39),我们需要在本地编译源码并进行安装。首先我们要安装编译源码所需的依赖包。 + + # cd /tmp/ + # wget http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.39.tar.gz + +下载完成后,执行下面的指令解压缩。 + + 
# tar -zxvf lighttpd-1.4.39.tar.gz + +然后使用下面的指令进行编译。 + + # cd lighttpd-1.4.39 + # ./configure + # make + +**注:**在这份教程中,我们安装的是默认配置的 Lighttpd。其他拓展功能,如对 SSL 的支持,mod_rewrite,mod_redirect 等,需自行配置。 + +当编译完成后,我们就可以把它安装到系统中了。 + + # make install + +### 设置 Lighttpd + +如果有更高的需求,我们可以通过修改默认设置文件,如`/etc/lighttpd/lighttpd.conf`,来对 Lighttpd 进行进一步设置。 而在这份教程中我们将使用默认设置,不对设置文件进行修改。如果你曾做过修改并想检查设置文件是否出错,可以执行下面的指令。 + + # lighttpd -t -f /etc/lighttpd/lighttpd.conf + +#### 使用 CentOS 7 + +在 CentOS 7 中,我们需创建一个在 Lighttpd 默认配置文件中设置的 webroot 文件夹,例如`/src/www/htdocs`。 + + # mkdir -p /srv/www/htdocs/ + +而后将默认欢迎页面从`/var/www/lighttpd`复制至刚刚新建的目录中: + + # cp -r /var/www/lighttpd/* /srv/www/htdocs/ + +### 开启服务 + +现在,通过执行 systemctl 指令来重启 Web 服务。 + + # systemctl start lighttpd + +然后我们将它设置为伴随系统启动自动运行。 + + # systemctl enable lighttpd + +### 设置防火墙 + +如要让我们运行在 Lighttpd 上的网页或网站能在 Internet 或同一个网络内被访问,我们需要在防火墙程序中设置打开 80 端口。由于 CentOS 7 和 Ubuntu15.04 都附带 Systemd 作为默认初始化系统,所以我们默认用的都是 firewalld。如果要打开 80 端口或 http 服务,我们只需执行下面的命令: + + # firewall-cmd --permanent --add-service=http + success + # firewall-cmd --reload + success + +### 连接至 Web 服务器 + +在将 80 端口设置为默认端口后,我们就可以直接访问 Lighttpd 的默认欢迎页了。我们需要根据运行 Lighttpd 的设备来设置浏览器的 IP 地址和域名。在本教程中,我们令浏览器访问 [http://lighttpd.linoxide.com/](http://lighttpd.linoxide.com/) ,同时将该子域名指向上述 IP 地址。如此一来,我们就可以在浏览器中看到如下的欢迎页面了。 + +![Lighttpd Welcome Page](http://blog.linoxide.com/wp-content/uploads/2016/02/lighttpd-welcome-page.png) + +此外,我们可以将网站的文件添加到 webroot 目录下,并删除 Lighttpd 的默认索引文件,使我们的静态网站可以在互联网上访问。 + +如果想在 Lighttpd Web 服务器中运行 PHP 应用,请参考下面的步骤: + +### 安装 PHP5 模块 + +在 Lighttpd 成功安装后,我们需要安装 PHP 及相关模块,以在 Lighttpd 中运行 PHP5 脚本。 + +#### 使用 Ubuntu 15.04 + + # apt-get install php5 php5-cgi php5-fpm php5-mysql php5-curl php5-gd php5-intl php5-imagick php5-mcrypt php5-memcache php-pear + +#### 使用 CentOS 7 + + # yum install php php-cgi php-fpm php-mysql php-curl php-gd php-intl php-pecl-imagick php-mcrypt php-memcache php-pear lighttpd-fastcgi + +### 设置 Lighttpd 的 PHP 服务 + +如要让 PHP 与 Lighttpd 
协同工作,我们只要根据所使用的发行版执行如下对应的指令即可。 + +#### 使用 CentOS 7 + +首先要做的便是使用文件编辑器编辑 php 设置文件(例如`/etc/php.ini`)并取消掉对**cgi.fix_pathinfo=1**这一行的注释。 + + # nano /etc/php.ini + +完成上面的步骤之后,我们需要把 PHP-FPM 进程的所有权从 Apache 转移至 Lighttpd。要完成这些,首先用文件编辑器打开`/etc/php-fpm.d/www.conf`文件。 + + # nano /etc/php-fpm.d/www.conf + +然后在文件中增加下面的语句: + + user = lighttpd + group = lighttpd + +做完这些,我们保存并退出文本编辑器。然后从`/etc/lighttpd/modules.conf`设置文件中添加 FastCGI 模块。 + + # nano /etc/lighttpd/modules.conf + +然后,去掉下面语句前面的`#`来取消对它的注释。 + + include "conf.d/fastcgi.conf" + +最后我们还需在文本编辑器设置 FastCGI 的设置文件。 + + # nano /etc/lighttpd/conf.d/fastcgi.conf + +在文件尾部添加以下代码: + + fastcgi.server += ( ".php" => + (( + "host" => "127.0.0.1", + "port" => "9000", + "broken-scriptfilename" => "enable" + )) + ) + +在编辑完成后保存并退出文本编辑器即可。 + +#### 使用 Ubuntu 15.04 + +如需启用 Lighttpd 的 FastCGI,只需执行下列代码: + + # lighttpd-enable-mod fastcgi + + Enabling fastcgi: ok + Run /etc/init.d/lighttpd force-reload to enable changes + + # lighttpd-enable-mod fastcgi-php + + Enabling fastcgi-php: ok + Run `/etc/init.d/lighttpd` force-reload to enable changes + +然后,执行下列命令来重启 Lighttpd。 + + # systemctl force-reload lighttpd + +### 检测 PHP 工作状态 + +如需检测 PHP 是否按预期工作,我们需在 Lighttpd 的 webroot 目录下新建一个 php 文件。本教程中,在 Ubuntu 下 /var/www/html 目录,CentOS 下 /src/www/htdocs 目录下使用文本编辑器创建并打开 info.php。 + +**使用 CentOS 7** + + # nano /var/www/info.php + +**使用 Ubuntu 15.04** + + # nano /srv/www/htdocs/info.php + +然后只需将下面的语句添加到文件里即可。 + + + +在编辑完成后保存并推出文本编辑器即可。 + +现在,我们需根据路径 [http://lighttpd.linoxide.com/info.php](http://lighttpd.linoxide.com/info.php) 下的 info.php 文件的 IP 地址或域名,来让我们的网页浏览器指向系统上运行的 Lighttpd。如果一切都按照以上说明进行,我们将看到如下图所示的 PHP 页面信息。 + +![phpinfo lighttpd](http://blog.linoxide.com/wp-content/uploads/2016/02/phpinfo-lighttpd.png) + +### 总结 + +至此,我们已经在 CentOS 7 和 Ubuntu 15.04 Linux 发行版上成功安装了轻巧快捷并且安全的 Lighttpd Web 服务器。现在,我们已经可以上传网站文件到网站根目录、配置虚拟主机、启用 SSL、连接数据库,在我们的 Lighttpd Web 服务器上运行 Web 应用等功能了。 如果你有任何疑问,建议或反馈请在下面的评论区中写下来以让我们更好的改良 Lighttpd。谢谢! 
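顺带一提,除了用 nano 手动编辑之外,也可以用一条 shell 命令非交互地生成上文提到的 info.php 测试文件。下面是一个示例草稿(`<?php phpinfo(); ?>` 是最常见的 PHP 信息页写法;为便于演示,这里写入的是临时目录,实际使用时请换成前文对应发行版的 webroot 路径):

```shell
# 以非交互方式生成 info.php(演示用临时目录;实际部署时请替换为你的 webroot)
DOCROOT=$(mktemp -d)
printf '%s\n' '<?php phpinfo(); ?>' > "$DOCROOT/info.php"
cat "$DOCROOT/info.php"
# 输出:<?php phpinfo(); ?>
```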
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/setup-lighttpd-web-server-ubuntu-15-04-centos-7/ + +作者:[Arun Pyasi][a] +译者:[HaohongWANG](https://github.com/HaohongWANG) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ diff --git a/translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md b/translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md deleted file mode 100644 index 8840e8b973..0000000000 --- a/translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md +++ /dev/null @@ -1,210 +0,0 @@ -[Translated] Haohong Wang -如何在Ubuntu 15.04/CentOS 7中安装Lighttpd Web server -================================================================================= -Lighttpd 是一款开源Web服务器软件。Lighttpd 安全快速,符合行业标准,适配性强并且针对高配置环境进行了优化。Lighttpd因其CPU、内存占用小,针对小型CPU加载的快速适配以及出色的效率和速度而从众多Web服务器中脱颖而出。 而Lighttpd诸如FastCGI,CGI,Auth,Out-Compression,URL-Rewriting等高级功能更是那些低配置的服务器的福音。 - -以下便是在我们运行Ubuntu 15.04 或CentOS 7 Linux发行版的机器上安装Lighttpd Web服务器的简要流程。 - -### 安装Lighttpd - -#### 使用包管理器安装 - -这里我们通过使用包管理器这种最简单的方法来安装Lighttpd。只需以sudo模式在终端或控制台中输入下面的指令即可。 - -**CentOS 7** - -由于CentOS 7.0官方repo中并没有提供Lighttpd,所以我们需要在系统中安装额外的软件源epel repo。使用下面的yum指令来安装epel。 - - # yum install epel-release - -然后,我们需要更新系统及进程为Lighttpd的安装做准备。 - - # yum update - # yum install lighttpd - -![Install Lighttpd Centos](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-centos.png) - -**Ubuntu 15.04** - -Ubuntu 15.04官方repo中包含了Lighttpd,所以只需更新本地repo并使用apt-get指令即可安装Lighttpd。 - - # apt-get update - # apt-get install lighttpd - -![Install lighttpd ubuntu](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-ubuntu.png) - -#### 从源代码安装Lighttpd - -如果想从Lighttpd源码安装最新版本(例如1.4.39),我们需要在本地编译源码并进行安装。首先我们要安装编译源码所需的依赖包。 - - # cd /tmp/ - 
# wget http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.39.tar.gz - -下载完成后,执行下面的指令解压缩。 - - # tar -zxvf lighttpd-1.4.39.tar.gz - -然后使用下面的指令进行编译。 - - # cd lighttpd-1.4.39 - # ./configure - # make - -**注:**在这份教程中,我们安装的是默认配置的Lighttpd。其他诸如高级功能或拓展功能,如对SSL的支持,mod_rewrite,mod_redirect等,需自行配置。 - -当编译完成后,我们就可以把它安装到系统中了。 - - # make install - -### 设置Lighttpd - -如果有更高的需求,我们可以通过修改默认设置文件,如`/etc/lighttpd/lighttpd.conf`,来对Lighttpd进行进一步设置。 而在这份教程中我们将使用默认设置,不对设置文件进行修改。如果你曾做过修改并想检查设置文件是否出错,可以执行下面的指令。 - - # lighttpd -t -f /etc/lighttpd/lighttpd.conf - -#### 使用 CentOS 7 - -在CentOS 7中,我们需在Lighttpd默认设置中创设一个例如`/src/www/htdocs`的webroot文件夹。 - - # mkdir -p /srv/www/htdocs/ - -而后将默认欢迎页面从`/var/www/lighttpd`复制至刚刚新建的目录中: - - # cp -r /var/www/lighttpd/* /srv/www/htdocs/ - -### 开启服务 - -现在,通过执行systemctl指令来重启数据库服务。 - - # systemctl start lighttpd - -然后我们将它设置为伴随系统启动自动运行。 - - # systemctl enable lighttpd - -### 设置防火墙 - -如要让我们运行在Lighttpd上的网页和网站能在Internet或相似的网络上被访问,我们需要在防火墙程序中设置打开80端口。由于CentOS 7和Ubuntu15.04都附带Systemd作为默认初始化系统,所以我们安装firewalld作为解决方案。如果要打开80端口或http服务,我们只需执行下面的命令: - - # firewall-cmd --permanent --add-service=http - success - # firewall-cmd --reload - success - -### 连接至Web Server -在将80端口设置为默认端口后,我们就可以默认直接访问Lighttpd的欢迎页了。我们需要根据运行Lighttpd的设备来设置浏览器的IP地址和域名。在本教程中,我们令浏览器指向 [http://lighttpd.linoxide.com/](http://lighttpd.linoxide.com/) 同时将子域名指向它的IP地址。如此一来,我们就可以在浏览器中看到如下的欢迎页面了。 - -![Lighttpd Welcome Page](http://blog.linoxide.com/wp-content/uploads/2016/02/lighttpd-welcome-page.png) - -此外,我们可以将网站的文件添加到webroot目录下,并删除lighttpd的默认索引文件,使我们的静态网站链接至互联网上。 - -如果想在Lighttpd Web Server中运行PHP应用,请参考下面的步骤: - -### 安装PHP5模块 -在Lighttpd成功安装后,我们需要安装PHP及相关模块以在Lighttpd中运行PHP5脚本。 - -#### 使用 Ubuntu 15.04 - - # apt-get install php5 php5-cgi php5-fpm php5-mysql php5-curl php5-gd php5-intl php5-imagick php5-mcrypt php5-memcache php-pear - -#### 使用 CentOS 7 - - # yum install php php-cgi php-fpm php-mysql php-curl php-gd php-intl php-pecl-imagick php-mcrypt php-memcache php-pear lighttpd-fastcgi - -### 
设置Lighttpd的PHP服务 - -如要让PHP与Lighttpd协同工作,我们只要根据所使用的发行版执行如下对应的指令即可。 - -#### 使用 CentOS 7 - -首先要做的便是使用文件编辑器编辑php设置文件(例如`/etc/php.ini`)并取消掉对**cgi.fix_pathinfo=1**的注释。 - - # nano /etc/php.ini - -完成上面的步骤之后,我们需要把PHP-FPM进程的所有权从Apache转移至Lighttpd。要完成这些,首先用文件编辑器打开`/etc/php-fpm.d/www.conf`文件。 - - # nano /etc/php-fpm.d/www.conf - -然后在文件中增加下面的语句: - - user = lighttpd - group = lighttpd - -做完这些,我们保存并退出文本编辑器。然后从`/etc/lighttpd/modules.conf`设置文件中添加FastCGI模块。 - - # nano /etc/lighttpd/modules.conf - -然后,去掉下面语句前面的`#`来取消对它的注释。 - - include "conf.d/fastcgi.conf" - -最后我们还需在文本编辑器设置FastCGI的设置文件。 - - # nano /etc/lighttpd/conf.d/fastcgi.conf - -在文件尾部添加以下代码: - - fastcgi.server += ( ".php" => - (( - "host" => "127.0.0.1", - "port" => "9000", - "broken-scriptfilename" => "enable" - )) - ) - -在编辑完成后保存并退出文本编辑器即可。 - -#### 使用 Ubuntu 15.04 - -如需启用Lighttpd的FastCGI,只需执行下列代码: - - # lighttpd-enable-mod fastcgi - - Enabling fastcgi: ok - Run /etc/init.d/lighttpd force-reload to enable changes - - # lighttpd-enable-mod fastcgi-php - - Enabling fastcgi-php: ok - Run `/etc/init.d/lighttpd` force-reload to enable changes - -然后,执行下列命令来重启Lighttpd。 - - # systemctl force-reload lighttpd - -### 检测PHP工作状态 - -如需检测PHP是否按预期工作,我们需在Lighttpd的webroot目录下新建一个php文件。本教程中,在Ubuntu下/var/www/html 目录,CentOS下/src/www/htdocs目录下使用文本编辑器创建并打开info.php。 - -**使用 CentOS 7** - - # nano /var/www/info.php - -**使用 Ubuntu 15.04** - - # nano /srv/www/htdocs/info.php - -然后只需将下面的语句添加到文件里即可。 - - - -在编辑完成后保存并推出文本编辑器即可。 - -现在,我们需根据路径 [http://lighttpd.linoxide.com/info.php](http://lighttpd.linoxide.com/info.php) 下的info.php文件的IP地址或域名,来让我们的网页浏览器指向系统上运行的Lighttpd。如果一切都按照以上说明进行,我们将看到如下图所示的PHP页面信息。 - -![phpinfo lighttpd](http://blog.linoxide.com/wp-content/uploads/2016/02/phpinfo-lighttpd.png) - -### 总结 - -至此,我们已经在CentOS 7和Ubuntu 15.04 Linux 发行版上成功安装了轻巧快捷并且安全的Lighttpd Web服务器。现在,我们已经可以利用Lighttpd Web服务器来实现上传网站文件到网站根目录,配置虚拟主机,启用SSL,连接数据库,运行Web应用等功能了。 如果你有任何疑问,建议或反馈请在下面的评论区中写下来以让我们更好的改良Lighttpd。谢谢!(译注:评论网址 
http://linoxide.com/linux-how-to/setup-lighttpd-web-server-ubuntu-15-04-centos-7/ ) --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/setup-lighttpd-web-server-ubuntu-15-04-centos-7/ - -作者:[Arun Pyasi][a] -译者:[HaohongWANG](https://github.com/HaohongWANG) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ From 0974bbba79d51c90b3698b3b1306ae903880a015 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Thu, 7 Jul 2016 10:18:00 +0800 Subject: [PATCH 066/471] Mike Translating 20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md --- .../20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md index 07b0f4e5a3..238739ac0e 100644 --- a/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md +++ b/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md @@ -1,3 +1,5 @@ +MikeCoder Translating + INSTALL MATE 1.14 IN UBUNTU MATE 16.04 (XENIAL XERUS) VIA PPA ================================================================= From e830e2e57ac22d3935e86be765f69a174a3b0d08 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Jul 2016 10:27:47 +0800 Subject: [PATCH 067/471] PUB:20160301 The Evolving Market for Commercial Software Built On Open Source MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @alim0x 语言组织的很不错! 
--- ...ommercial Software Built On Open Source.md | 37 +++++++++++++++++++ ...ommercial Software Built On Open Source.md | 36 ------------------ 2 files changed, 37 insertions(+), 36 deletions(-) create mode 100644 published/20160301 The Evolving Market for Commercial Software Built On Open Source.md delete mode 100644 translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md diff --git a/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md b/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md new file mode 100644 index 0000000000..fa3448b1bb --- /dev/null +++ b/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md @@ -0,0 +1,37 @@ +构建在开源之上的商业软件市场持续成长 +===================================================================== + +![](https://www.linux.com/images/stories/41373/Structure-event-photo.jpg) + +*与会者在 Structure 上听取演讲,Structure Data 2016 也将在 UCSF Mission Bay 会议中心举办。图片来源:Structure Events。* + +如今真的很难低估开源项目对于企业软件市场的影响;开源软件的集成如此快速地形成了业界常态,我们没能捕捉到转折点也情有可原。 + +举个例子,Hadoop,改变的不止是数据分析界,它引领了新一代数据公司,它们围绕开源项目创造自己的软件,按需调整和支持那些代码,更像红帽在上世纪 90 年代和本世纪早期拥抱 Linux 那样。软件越来越多地通过公有云交付,而不是运行在购买者自己的服务器,拥有了令人惊奇的操作灵活性,但同时也带来了一些关于授权、支持以及价格之类的新问题。 + +我们多年来持续追踪这个趋势,这些话题充斥了我们的 Structure Data 会议,而今年的 Structure Data 2016 也不例外。三家围绕 Hadoop 最重要的大数据公司——Hortonworks、Cloudera 和 MapR ——的 CEO 们将会共同讨论它们是如何销售他们围绕开源项目的企业软件和服务,获利的同时回报社区项目。 + +以前在企业软件上获利是很容易的事情。一个客户购买了之后,企业供应商的一系列软件就变成了收银机,从维护合同和阶段性升级中获得近乎终生的收入,软件也越来越难以被替代,因为它已经成为了客户的业务核心。客户抱怨这种绑定,但如果它们想提高工作队伍的生产力也确实没有多少选择。 + +而现在的情况不再是这样了。尽管无数的公司还陷于在他们的基础设施上运行至关重要的巨型软件包,新的项目被部署到使用开源技术的云服务器上。这让升级功能不再需要去掉大量软件包再重新安装别的,同时也让公司按需付费,而不是为一堆永远用不到的特性买单。 + +有很多客户想要利用开源项目的优势,而又不想建立和支持一支工程师队伍来调整那些开源项目以满足自己的需求。这些客户愿意为开源项目和在这之上的专有特性之间的差异付费。 + +这对于基础设施相关的软件来说格外正确。当然,你的客户们可以自己对项目进行调整,比如 Hadoop,Spark 或 Node.js,但付费可以帮助他们自定义地打包部署如今这些重要的开源技术,而不用自己干这些活儿。只需看看 Structure Data 2016 的发言者就明白了,比如 Confluent(Kafka),Databricks(Spark),以及 
Cloudera-Hortonworks-MapR(Hadoop)三人组。 + +当然还有一个值得提到的是在出错的时候有个供应商给你背锅。如果你的工程师弄糟了开源项目的实现,那你只能怪你自己了。但是如果你和一个愿意提供服务级品质、能确保性能和正常运行时间指标的公司签订了合同,你实际上就是为得到支持、指导,以及有人背锅而买单。 + +构建在开源之上的商业软件市场的持续成长是我们在 Structure Data 上追踪多年的内容,如果这个话题正合你意,我们鼓励你加入我们,在旧金山,3 月 9 日和 10 日。 + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/enterprise/cloud-computing/889564-the-evolving-market-for-commercial-software-built-on-open-source- + +作者:[Tom Krazit][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/community/forums/person/70513 diff --git a/translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md b/translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md deleted file mode 100644 index c45b3eb760..0000000000 --- a/translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md +++ /dev/null @@ -1,36 +0,0 @@ -构建在开源之上的商业软件市场持续成长 -===================================================================== - -![](https://www.linux.com/images/stories/41373/Structure-event-photo.jpg) -> 与会者在 Structure 上听取演讲,Structure Data 2016 也将在 UCSF Mission Bay 会议中心举办。图片来源:Structure Events。 - -如今真的很难低估开源项目对于企业软件市场的影响;开源集成如此快速地形成了规范,我们没能捕捉到转折点也情有可原。 - -举个例子,Hadoop,改变的不止是数据分析的世界。它引领了新一代数据公司,它们围绕开源项目创造自己的软件,按需调整和支持那些代码,更像红帽在 90 年代和 21 世纪早期拥抱 Linux 那样。软件越来越多地通过公有云交付,而不是购买者自己的服务器,拥有了令人惊奇的操作灵活性,但同时也带来了一些关于授权,支持以及价格之类的新问题。 - -我们多年来持续追踪这个趋势,它们组成了我们的 Structure Data 会议,而 Structure Data 2016 也不例外。三家围绕 Hadoop 最重要的大数据公司——Hortonworks,Cloudera 和 MapR——的 CEO 将会共同讨论它们是如何销售他们围绕开源项目的企业软件和服务,获利的同时回报那个社区项目。 - -以前在企业软件上获利是很容易的事情。一个客户购买了之后,企业供应商的一系列大型软件就变成了它自己的收银机,从维护合同和阶段性升级中获得近乎终生的收入,软件也越来越难以被替代,因为它已经成为了客户的业务核心。客户抱怨这种绑定,但如果它们想提高工作队伍的生产力也确实没有多少选择。 - 
-而现在的情况不再是这样了。尽管无数的公司还陷于在他们的基础设施上运行至关重要的巨大软件包,新的项目被使用开源技术部署到云服务器上。这让升级功能不再需要去掉大量软件包再重新安装别的,同时也让公司按需付费,而不是为一堆永远用不到的特性买单。 - -有很多客户想要利用开源项目的优势,而又不想建立和支持一支工程师队伍来调整开源项目以满足自己的需求。这些客户愿意为开源项目和在这之上的专有特性之间的差异付费。 - -这对于基础设施相关的软件来说格外正确。当然,你的客户们可以安装他们自己对项目的调整,比如 Hadoop,Spark 或 Node.js,但付费可以帮助他们自定义包部署如今重要的开源技术而不用自己干这些活儿。只需看看 Structure Data 2016 的发言者就明白了,比如 Confluent(Kafka),Databricks(Spark),以及 Cloudera-Hortonworks-MapR(Hadoop)三人组。 - -当然还有一个值得提到的是在出错的时候有个供应商给你指责。如果你的工程师弄糟了开源项目的实现,那你只能怪你自己了。但是如果你和一个愿意保证在服务级别的特定性能和正常运行时间指标的公司签订了合同,你就是愿意为支持,指导,以及在突然出现不可避免的问题时朝你公司外的人发火的机会买单。 - -构建在开源之上的商业软件市场的持续成长是我们在 Structure Data 上追踪多年的内容,如果这个话题正合你意,我们鼓励你加入我们,在旧金山,3 月 9 日和 10 日。 - - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/enterprise/cloud-computing/889564-the-evolving-market-for-commercial-software-built-on-open-source- - -作者:[Tom Krazit ][a] -译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/community/forums/person/70513 From c8a617c9a47666c535ddda47de9e2d60dbd1f8f8 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Thu, 7 Jul 2016 10:46:19 +0800 Subject: [PATCH 068/471] finish translating "INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md" --- ...MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md | 66 ------------------- ...MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md | 62 +++++++++++++++++ 2 files changed, 62 insertions(+), 66 deletions(-) delete mode 100644 sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md create mode 100644 translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md diff --git a/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md deleted file mode 100644 index 238739ac0e..0000000000 --- a/sources/tech/20160609 INSTALL MATE 1.14 IN UBUNTU 
MATE 16.04 VIA PPA.md +++ /dev/null @@ -1,66 +0,0 @@ -MikeCoder Translating - -INSTALL MATE 1.14 IN UBUNTU MATE 16.04 (XENIAL XERUS) VIA PPA -================================================================= - -MATE Desktop 1.14 is now available for Ubuntu MATE 16.04 (Xenial Xerus). According to the release [announcement][1], it took about 2 months to release MATE Desktop 1.14 in a PPA because everything has been well tested, so you shouldn't encounter any issues. - -![](https://2.bp.blogspot.com/-v38tLvDAxHg/V1k7beVd5SI/AAAAAAAAX7A/1X72bmQ3ia42ww6kJ_61R-CZ6yrYEBSpgCLcB/s400/mate114-ubuntu1604.png) - -**The PPA currently provides MATE 1.14.1 (Ubuntu MATE 16.04 ships with MATE 1.12.x by default), which includes changes such as:** - -- client-side decoration apps now render correctly in all themes; -- touchpad configuration now supports edge and two-finger scrolling independently; -- python extensions in Caja can now be managed separately; -- all three window focus modes are selectable; -- MATE Panel now has the ability to change icon sizes for menubar and menu items; -- volume and Brightness OSD can now be enabled/disabled; -- many other improvements and bug fixes. - -MATE 1.14 also includes improved support for GTK+3 across the entire desktop, as well as various other GTK+3 tweaks however, the PPA packages are built with GTK+2 "to ensure compatibility with Ubuntu MATE 16.04 and all the 3rd party MATE applets, plugins and extensions", mentions the Ubuntu MATE blog. - -A complete MATE 1.14 changelog can be found [HERE][2]. - -### Upgrade to MATE Desktop 1.14.x in Ubuntu MATE 16.04 - -To upgrade to the latest MATE Desktop 1.14.x in Ubuntu MATE 16.04 using the official Xenial MATE PPA, open a terminal and use the following commands: - -``` -sudo apt-add-repository ppa:ubuntu-mate-dev/xenial-mate -sudo apt update -sudo apt dist-upgrade -``` - -**Note**: mate-netspeed applet will be removed when upgrading. 
That's because the applet is now part of the mate-applets package, so it's still available. - -Once the upgrade finishes, restart your system. That's it! - -### How to revert the changes - -If you're not satisfied with MATE 1.14, you encountered some bugs, etc., and you want to go back to the MATE version available in the official repositories, you can purge the PPA and downgrade the packages. - -To do this, use the following commands: - -``` -sudo apt install ppa-purge -sudo ppa-purge ppa:ubuntu-mate-dev/xenial-mate -``` - -After all the MATE packages are downgraded, restart the system. - -via [Ubuntu MATE blog][3] - --------------------------------------------------------------------------------- - -via: http://www.webupd8.org/2016/06/install-mate-114-in-ubuntu-mate-1604.html - -作者:[Andrew][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.webupd8.org/p/about.html -[1]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/ -[2]: http://mate-desktop.com/blog/2016-04-08-mate-1-14-released/ -[3]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/ diff --git a/translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md new file mode 100644 index 0000000000..63ea47c541 --- /dev/null +++ b/translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md @@ -0,0 +1,62 @@ +在 Ubuntu Mate 16.04("Xenial Xerus") 上通过 PPA 安装 Mate 1.14 +================================================================= + +Mate 桌面环境 1.14 现在可以在 Ubuntu Mate 16.04("Xenial Xerus") 上使用了。根据这个[发布版本][1]的描述, 为了全面测试 Mate 1.14,所以在 PPA 上发布 Mate 桌面环境 1.14大概了用2个月的时间。因此,你不太可能遇到安装的问题。 + +![](https://2.bp.blogspot.com/-v38tLvDAxHg/V1k7beVd5SI/AAAAAAAAX7A/1X72bmQ3ia42ww6kJ_61R-CZ6yrYEBSpgCLcB/s400/mate114-ubuntu1604.png) + +**现在 PPA 提供 Mate 1.14.1 
包含如下改变(Ubuntu Mate 16.04 默认安装的是 Mate 1.12.x):**
+
+- 客户端的装饰应用现在可以正确地在所有主题中渲染;
+- 触摸板配置现在支持独立设置边缘滚动和双指滚动;
+- 在 Caja 中的 Python 扩展可以被单独管理;
+- 所有三种窗口焦点模式都是可选的;
+- Mate Panel 中的所有菜单栏图标和菜单图标可以改变大小;
+- 音量和亮度 OSD 目前可以启用和禁用;
+- 更多的改进和 bug 修复;
+
+Mate 1.14 同时改进了整个桌面环境中对 GTK+3 的支持,包括各种各样的 GTK+3 小应用。但是,Ubuntu MATE 的博客中提到:PPA 的发行包使用 GTK+2 编译,“为了确保对 Ubuntu MATE 16.04 还有各种各样的第三方 MATE 应用、插件、扩展的支持”。
+
+MATE 1.14 的完整修改列表可以[点击此处][2]阅读。
+
+### 在 Ubuntu MATE 16.04 中升级 MATE 1.14.x
+
+在 Ubuntu MATE 16.04 中打开终端,并且输入如下命令,来从官方的 Xenial MATE PPA 升级到最新的 MATE 桌面环境:
+
+```
+sudo apt-add-repository ppa:ubuntu-mate-dev/xenial-mate
+sudo apt update
+sudo apt dist-upgrade
+```
+
+**注意**: mate-netspeed 应用将会在升级中被删除。不过该应用现在已经是 mate-applets 软件包的一部分,所以它依旧是可以使用的。
+
+一旦升级完成,请重启你的系统,享受全新的 MATE!
+
+### 如何回滚这次升级
+
+如果你并不满意 MATE 1.14,比如你遇到了一些 bug,或者你想回到 MATE 的官方源版本,可以使用如下命令清除 PPA,并将软件包降级。
+
+```
+sudo apt install ppa-purge
+sudo ppa-purge ppa:ubuntu-mate-dev/xenial-mate
+```
+
+在所有的 MATE 包降级之后,重启系统。
+
+via [Ubuntu MATE blog][3]
+
+--------------------------------------------------------------------------------
+
+via: http://www.webupd8.org/2016/06/install-mate-114-in-ubuntu-mate-1604.html
+
+作者:[Andrew][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.webupd8.org/p/about.html
+[1]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/
+[2]: http://mate-desktop.com/blog/2016-04-08-mate-1-14-released/
+[3]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/
From b81108a34c617faefc38dbf171f340ab6f92b967 Mon Sep 17 00:00:00 2001
From: martin qi
Date: Wed, 6 Jul 2016 22:04:25 -0500
Subject: [PATCH 069/471] translating

---
 ...Create LVM Using vgcreate, lvcreate and lvextend Commands.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md
b/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md index 4d2a9d7a13..390ad18e14 100644 --- a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md +++ b/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md @@ -1,3 +1,5 @@ +martin + Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands ============================================================================================ From 31ad3c1e2596310d5c76f5eb376c6928abefe2bf Mon Sep 17 00:00:00 2001 From: Ping Date: Thu, 7 Jul 2016 11:13:06 +0800 Subject: [PATCH 070/471] Translating --- .../20160304 Microservices with Python RabbitMQ and Nameko.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md b/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md index 5048465904..6879a0778b 100644 --- a/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md +++ b/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md @@ -1,3 +1,5 @@ +Translating by Ping + Microservices with Python RabbitMQ and Nameko ============================================== From 20a922daf85836a96f09d121f02b53b10fb7da99 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Jul 2016 11:33:55 +0800 Subject: [PATCH 071/471] PUB:Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @GHLandy --- ...ng and Linux Filesystem Troubleshooting.md | 94 +++++++++---------- 1 file changed, 47 insertions(+), 47 deletions(-) rename {translated/tech => published}/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md (67%) diff --git a/translated/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md 
b/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md similarity index 67% rename from translated/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md rename to published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md index 884ad9055f..c58e73d197 100644 --- a/translated/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md +++ b/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -1,10 +1,10 @@ -LFCS 第十讲:学习简单的 Shell 脚本编程和文件系统故障排除 +LFCS 系列第十讲:学习简单的 Shell 脚本编程和文件系统故障排除 ================================================================================ -Linux 基金会发起了 LFCS 认证 (Linux Foundation Certified Sysadmin, Linux 基金会认证系统管理员),这是一个全新的认证体系,主要目标是让全世界任何人都有机会考取认证。认证内容为 Linux 中间系统的管理,主要包括:系统运行和服务的维护、全面监控和分析的能力以及问题来临时何时想上游团队请求帮助的决策能力 +Linux 基金会发起了 LFCS 认证 (Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员),这是一个全新的认证体系,旨在让世界各地的人能够参与到中等水平的 Linux 系统的基本管理操作的认证考试中去,这项认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。 ![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png) -LFCS 系列第十讲 +*LFCS 系列第十讲* 请看以下视频,这里边介绍了 Linux 基金会认证程序。 @@ -18,54 +18,53 @@ LFCS 系列第十讲 首先要声明一些概念。 - Shell 是一个程序,它将命令传递给操作系统来执行。 -- Terminal 也是一个程序,作为最终用户,我们需要使用它与 Shell 来交互。比如,下边的图片是 GNOME Terminal。 +- Terminal 也是一个程序,允许最终用户使用它与 Shell 来交互。比如,下边的图片是 GNOME Terminal。 ![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png) -Gnome Terminal +*Gnome Terminal* 启动 Shell 之后,会呈现一个命令提示符 (也称为命令行) 提示我们 Shell 已经做好了准备,接受标准输入设备输入的命令,这个标准输入设备通常是键盘。 -你可以参考该系列文章的 [第一讲 使用命令创建、编辑和操作文件][1] 来温习一些常用的命令。 +你可以参考该系列文章的 [第一讲 如何在 Linux 上使用 GNU sed 等命令来创建、编辑和操作文件][1] 来温习一些常用的命令。 Linux 为提供了许多可以选用的 Shell,下面列出一些常用的: **bash 
Shell** -Bash 代表 Bourne Again Shell,它是 GNU 项目默认的 Shell。它借鉴了 Korn shell (ksh) 和 C shell (csh) 中有用的特性,并同时对性能进行了提升。它同时也是 LFCS 认证中所涵盖的风发行版中默认 Shell,也是本系列教程将使用的 Shell。 +Bash 代表 Bourne Again Shell,它是 GNU 项目默认的 Shell。它借鉴了 Korn shell (ksh) 和 C shell (csh) 中有用的特性,并同时对性能进行了提升。它同时也是 LFCS 认证中所涵盖的各发行版中默认 Shell,也是本系列教程将使用的 Shell。 **sh Shell** -Bash Shell 是一个比较古老的 shell,一次多年来都是多数类 Unix 系统的默认 shell。 +Bourne SHell 是一个比较古老的 shell,多年来一直都是很多类 Unix 系统的默认 shell。 **ksh Shell** Korn SHell (ksh shell) 也是一个 Unix shell,是贝尔实验室 (Bell Labs) 的 David Korn 在 19 世纪 80 年代初的时候开发的。它兼容 Bourne shell ,并同时包含了 C shell 中的多数特性。 - 一个 shell 脚本仅仅只是一个可执行的文本文件,里边包含一条条可执行命令。 ### 简单的 Shell 脚本编程 ### -如前所述,一个 shell 脚本就是一个纯文本文件,因此,可以使用自己喜欢的文本编辑器来创建和编辑。你可以考虑使用 vi/vim (参考本系列 [第二部分 - vi/vim 编辑器的使用][2]),它的语法高亮让我的编辑工作非常方便。 +如前所述,一个 shell 脚本就是一个纯文本文件,因此,可以使用自己喜欢的文本编辑器来创建和编辑。你可以考虑使用 vi/vim (参考本系列 [第二讲 如何安装和使用纯文本编辑器 vi/vim][2]),它的语法高亮让我的编辑工作非常方便。 输入如下命令来创建一个名为 myscript.sh 的脚本文件: # vim myscript.sh -shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/linux_shebang/)) 必须如下: +shell 脚本的第一行 (著名的 [释伴(shebang)行](https://linux.cn/article-3664-1.html)) 必须如下: #!/bin/bash -这条语句“告诉”操作系统需要用那个解释器来运行这个脚本文件之后命令。 +这条语句“告诉”操作系统需要用哪个解释器来运行这个脚本文件之后命令。 -现在可以添加需要执行的命令了。通过注释,我们可以声明每一条命令或者整个脚本的具体含义。注意,shell 会忽略掉以井号 (#) 开始的语句。 +现在可以添加需要执行的命令了。通过注释,我们可以声明每一条命令或者整个脚本的具体含义。注意,shell 会忽略掉以井号 (#) 开始的注释语句。 #!/bin/bash echo 这是关于 LFCS 认证系列的第十部分 echo 今天是 $(date +%Y-%m-%d) -编写并保存脚本之后,通过以下命令来是脚本文件称为可执行文件: +编写并保存脚本之后,通过以下命令来使脚本文件成为可执行文件: # chmod 755 myscript.sh @@ -73,17 +72,17 @@ shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/li echo $PATH -我们就会看到环境变量 ($PATH) 的具体内容:当输入命令是系统搜索可执行程序的目录,每一项之间使用冒号 (:) 隔开。称它为环境变量,是因为他本是就是 shell 环境的一部分 —— 当 shell 第每次启动时 shell 及其子进程可以获取的一系列信息。 +我们就会看到环境变量 ($PATH) 的具体内容:这是当输入命令时系统所搜索可执行程序的目录,每一项之间使用冒号 (:) 隔开。称它为环境变量,是因为它本是就是 shell 环境的一部分 —— 这是当 shell 每次启动时 shell 及其子进程可以获取的一系列信息。 当我们输入一个命令并按下回车时,shell 会搜索 $PATH 变量中列出的目录并执行第一个知道的实例。请看如下例子: ![Linux Environment 
Variables](http://www.tecmint.com/wp-content/uploads/2014/11/Environment-Variable.png) -环境变量 +*环境变量* 假如存在两个同名的可执行程序,一个在 /usr/local/bin,另一个在 /usr/bin,则会执行环境变量中最先列出的那个,并忽略另外一个。 -如果我们自己编写的脚本没有在 $PATH 变量列出目录的其中一个,则需要输入 ./filename 来执行它。而如果存储在 $PATH 变量中的任意一个目录,我们就可以像运行其他命令一样来运行之前编写的脚本了。 +如果我们自己编写的脚本没有放在 $PATH 变量列出目录中的任何一个,则需要输入 ./filename 来执行它。而如果存储在 $PATH 变量中的任意一个目录,我们就可以像运行其他命令一样来运行之前编写的脚本了。 # pwd # ./myscript.sh @@ -94,7 +93,7 @@ shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/li ![Execute Script in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Execute-Script.png) -执行脚本 +*执行脚本* #### if 条件语句 #### @@ -106,22 +105,22 @@ shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/li OTHER-COMMANDS fi -其中,CONDITION 为如下情形的任意一项 (仅列出常用的),并且达到以下条件时返回 true: +其中,CONDITION 可以是如下情形的任意一项 (仅列出常用的),并且达到以下条件时返回 true: - [ -a file ] → 指定文件存在。 - [ -d file ] → 指定文件存在,并且是一个目录。 - [ -f file ] → 指定文件存在,并且是一个普通文件。 -- [ -u file ] → 指定文件存在,并设置了 SUID。 -- [ -g file ] → 指定文件存在,并设置了 SGID。 +- [ -u file ] → 指定文件存在,并设置了 SUID 权限位。 +- [ -g file ] → 指定文件存在,并设置了 SGID 权限位。 - [ -k file ] → 指定文件存在,并设置了“黏连 (Sticky)”位。 - [ -r file ] → 指定文件存在,并且文件可读。 -- [ -s file ] → 指定文件存在,并且文件为空。 -- [ -w file ] → 指定文件存在,并且文件可写入· +- [ -s file ] → 指定文件存在,并且文件不为空。 +- [ -w file ] → 指定文件存在,并且文件可写入。 - [ -x file ] → 指定文件存在,并且可执行。 - [ string1 = string2 ] → 字符串相同。 - [ string1 != string2 ] → 字符串不相同。 -[ int1 op int2 ] 为前述列表中的一部分,紧跟着的项 (例如: -eq –> int1 与 int2 相同时返回 true) 则是 [ int1 op int2 ] 的一个子项, 其中 op 为以下比较操作符。 +[ int1 op int2 ] 为前述列表中的一部分 (例如: -eq –> int1 与 int2 相同时返回 true) ,其中比较项也可以是一个列表子项, 其中 op 为以下比较操作符。 - -eq –> int1 等于 int2 时返回 true。 - -ne –> int1 不等于 int2 时返回 true。 @@ -142,13 +141,13 @@ shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/li #### While 循环语句 #### -该循环结构会一直执行重复的命令,直到控制命令执行的退出状态值等于 0 时 (即执行成功) 停止。基本语法如下: +该循环结构会一直执行重复的命令,直到控制命令(EVALUATION_COMMAND)执行的退出状态值等于 0 时 (即执行成功) 停止。基本语法如下: while EVALUATION_COMMAND; do EXECUTE_COMMANDS; done -其中,EVALUATION_COMMAND 可以是任何能够返回成功 (0) 或失败 (0 以外的值) 
的退出状态值的命令,EXECUTE_COMMANDS 则可以是任何的程序、脚本或者 shell 结构体,包括其他的嵌套循环。 +其中,EVALUATION\_COMMAND 可以是任何能够返回成功 (0) 或失败 (0 以外的值) 的退出状态值的命令,EXECUTE\_COMMANDS 则可以是任何的程序、脚本或者 shell 结构体,包括其他的嵌套循环。 #### 综合使用 #### @@ -168,7 +167,7 @@ shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/li ![Script to Monitor Linux Services](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Services.png) -使用脚本监控 Linux 服务 +*使用脚本监控 Linux 服务* 我们编写的脚本看起来应该是这样的: @@ -188,7 +187,7 @@ shell 脚本的第一行 (著名的 [shebang 符](http://smilejay.com/2012/03/li ![Linux Service Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Script.png) -Linux 服务监控脚本 +*Linux 服务监控脚本* **我们来解释一下这个脚本的工作流程** @@ -196,21 +195,21 @@ Linux 服务监控脚本 # cat myservices.txt -2). 以上命令由圆括号括着,并在前面添加美元符,表示它需要从 myservices.txt 的记录列表中取值作为变量传递给 for 循环。 +2). 以上命令由圆括号括着,并在前面添加美元符,表示它需要从 myservices.txt 的记录列表中取值并作为变量传递给 for 循环。 -3). 对于记录列表中的每一项纪录 (即每一项纪录的服务变量),都会这行以下动作: +3). 对于记录列表中的每一项纪录 (即每一项纪录的服务变量),都会执行以下动作: # systemctl status $service | grep --quiet "running" 此时,需要在每个通用变量名 (即每一项纪录的服务变量) 的前面添加美元符,以表明它是作为变量来传递的。其输出则通过管道符传给 grep。 -其中,-quiet 选项用于阻止 grep 命令将出现 "running" 的行回显到屏幕。当 grep 捕获到 "running" 时,则会返回一个退出状态码 "0" (在 if 结构体表示为 $?),由此确认某个服务正在运行中。 +其中,-quiet 选项用于阻止 grep 命令将发现的 “running” 的行回显到屏幕。当 grep 捕获到 “running” 时,则会返回一个退出状态码 “0” (在 if 结构体表示为 $?),由此确认某个服务正在运行中。 -如果退出状态码是非零值 (即 systemctl status $service 命令中的回显中没有出现 "running"),则表明某个服务为运行。 +如果退出状态码是非零值 (即 systemctl status $service 命令中的回显中没有出现 “running”),则表明某个服务为运行。 ![Services Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Services-Monitoring-Script.png) -服务监控脚本 +*服务监控脚本* 我们可以增加一步,在开始循环之前,先确认 myservices.txt 是否存在。 @@ -236,7 +235,7 @@ Linux 服务监控脚本 你可能想把自己维护的主机写入一个文本文件,并使用脚本探测它们是否能够 ping 得通 (脚本中的 myhosts 可以随意替换为你想要的名称)。 -内置的 read shell 命令将告诉 while 循环一行行的读取 myhosts,并将读取的每行内容传给 host 变量,随后 host 变量传递给 ping 命令。 +shell 的内置 read 命令将告诉 while 循环一行行的读取 myhosts,并将读取的每行内容传给 host 变量,随后 host 变量传递给 ping 命令。 #!/bin/bash @@ -248,18 +247,18 @@ Linux 服务监控脚本 ![Script to Ping 
Servers](http://www.tecmint.com/wp-content/uploads/2014/11/Script-to-Ping-Servers.png) -使用脚本 Ping 服务器 +*使用脚本 Ping 服务器* 扩展阅读: - [Learn Shell Scripting: A Guide from Newbies to System Administrator][3] - [5 Shell Scripts to Learn Shell Programming][4] -### 文件系统排除 ### +### 文件系统排错 ### -尽管 Linux 是一个很稳定的操作系统,但仍然会因为某些原因出现崩溃 (比如以外断电等),正好你有一个 (或者更多个) 文件系统未能正确卸载,Linux 重启的时候就会自动检测其中可能发生的错误。 +尽管 Linux 是一个很稳定的操作系统,但仍然会因为某些原因出现崩溃时 (比如因为断电等),正好你有一个 (或者更多个) 文件系统未能正确卸载,Linux 重启的时候就会自动检测其中可能发生的错误。 -此外,每次系统正常启动的时候,都会在文件系统挂载之前校验它们的完整度。而这些全部都依赖于 fsck 工具 ("文件系统校验")。 +此外,每次系统正常启动的时候,都会在文件系统挂载之前校验它们的完整度。而这些全部都依赖于 fsck 工具 (“file system check,文件系统校验”)。 如果对 fsck 进行设定,它除了校验文件系统的完整性之外,还可以尝试修复错误。fsck 能否成功修复错误,取决于文件系统的损伤程度;如果可以修复,被损坏部分的文件会恢复到位于每个文件系统根目录的 lost+found。 @@ -279,13 +278,13 @@ fsck 的基本用如下: ![Scan Linux Filesystem for Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Filesystem-Errors.png) -检查文件系统错误 +*检查文件系统错误* -除了 -y 选项,我们也可以使用 -a 选项来自动修复文件系统错误,而不必做出交互式应答,并在文件系统看起来 "干净" 的情况下强制校验。 +除了 -y 选项,我们也可以使用 -a 选项来自动修复文件系统错误,而不必做出交互式应答,并在文件系统看起来 “干净” 卸载的情况下强制校验。 # fsck -af /dev/sdg1 -如果只是要找出什么地方发生了错误 (不用再检测到错误的时候修复),我们开使用 -n 选项,这样只会讲文集系统错误输出到标准输出设备。 +如果只是要找出什么地方发生了错误 (不用在检测到错误的时候修复),我们可以使用 -n 选项,这样只会将文件系统错误输出到标准输出设备上。 # fsck -n /dev/sdg1 @@ -293,11 +292,11 @@ fsck 的基本用如下: ### 总结 ### -至此,系列教程的十讲就全部结束了,全系列教程涵盖了通过 LFCS 测试所需的基础内容。 +至此,系列教程的第十讲就全部结束了,全系列教程涵盖了通过 LFCS 测试所需的基础内容。 -但显而易见的,本系列的十讲并不足以在单个主题方面做到全面描述,我们希望这一系列教程可以成为你学习的基础素材,并一直保持学习的热情。 +但显而易见的,本系列的十讲并不足以在单个主题方面做到全面描述,我们希望这一系列教程可以成为你学习的基础素材,并一直保持学习的热情(LCTT 译注:还有后继补充的几篇)。 -我们欢迎你提出任何问题或者建议,所以你可以毫不犹豫的通过以下链接联系我们。 +我们欢迎你提出任何问题或者建议,所以你可以毫不犹豫的通过以下链接联系到我们: 成为一个 [Linux 认证系统工程师][5] -------------------------------------------------------------------------------- @@ -305,12 +304,13 @@ via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-tro 作者:[Gabriel Cánepa][a] 译者:[GHLandy](https://github.com/GHLandy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]:http://www.tecmint.com/vi-editor-usage/ +[1]:https://linux.cn/article-7161-1.html +[2]:https://linux.cn/article-7165-1.html [3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/ [4]:http://www.tecmint.com/basic-shell-programming-part-ii/ +[5]:http://www.shareasale.com/r.cfm?b=768106&u=1260899&m=59485&urllink=&afftrack= From a544a71b9b93837acf5e90124ec81544d291a11d Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Thu, 7 Jul 2016 11:36:11 +0800 Subject: [PATCH 072/471] Update Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md --- ...xplore Linux with Installed Help Documentations and Tools.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md index 60903d5cce..e4cf8cb12d 100644 --- a/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md +++ b/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md @@ -1,4 +1,4 @@ -翻译申请 tresspassing2 +[translating by kokialoves] Part 12 - LFCS: How to Explore Linux with Installed Help Documentations and Tools ================================================================================== From 5929a18fcd8901faae1c7ebdceb0cc70482b2dc7 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Jul 2016 14:50:44 +0800 Subject: [PATCH 073/471] PUB:20160527 Turn Your Old Laptop into a Chromebook @alim0x --- ... 
Turn Your Old Laptop into a Chromebook.md | 39 +++++++++++-------- 1 file changed, 23 insertions(+), 16 deletions(-) rename {translated/tech => published}/20160527 Turn Your Old Laptop into a Chromebook.md (91%) diff --git a/translated/tech/20160527 Turn Your Old Laptop into a Chromebook.md b/published/20160527 Turn Your Old Laptop into a Chromebook.md similarity index 91% rename from translated/tech/20160527 Turn Your Old Laptop into a Chromebook.md rename to published/20160527 Turn Your Old Laptop into a Chromebook.md index 3c7038a2a2..df776e73b5 100644 --- a/translated/tech/20160527 Turn Your Old Laptop into a Chromebook.md +++ b/published/20160527 Turn Your Old Laptop into a Chromebook.md @@ -2,7 +2,8 @@ ======================================== ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-ready-main.jpg?itok=gtzJVSq0) ->学习如何用 CloudReady 在你的旧电脑上安装 Chrome OS + +*学习如何用 CloudReady 在你的旧电脑上安装 Chrome OS* Linux 之年就在眼前。根据[报道][1],Google 在 2016 年第一季度卖出了比苹果卖出的 Macbook 更多的 Chromebook。并且,Chromebook 即将变得更加激动人心。在 Google I/O 大会上,Google 宣布安卓 Google Play 商店将在 6 月中旬来到 Chromebook,这让用户能够在他们的 Chrome OS 设备上运行安卓应用。 @@ -16,33 +17,35 @@ Linux 之年就在眼前。根据[报道][1],Google 在 2016 年第一季度 在你开始在笔记本上安装 CloudReady 之前,你需要一些准备: -- 一个容量大于等于 4GB 的 USB 存储设备 - +- 一个容量不小于 4GB 的 USB 存储设备 - 打开 Chrome 浏览器,到 Google Chrome Store 去安装 [Chromebook Recovery Utility(Chrome 恢复工具)][3] - - 更改目标机器的 BIOS 设置以便能从 USB 启动 ### 开始 Neverware 提供两个版本的 CloudReady 镜像:32 位和 64 位。从下载页面[下载][4]合适你硬件的系统版本。 -解压下载的 zip 文件,你会得到一个 chromiumos_image.bin 文件。现在插入 U 盘并打开 Chromebook recovery utility。点击工具右上角的齿轮,选择 erase recovery media(擦除恢复媒介,如图 1)。 +解压下载的 zip 文件,你会得到一个 chromiumos_image.bin 文件。现在插入 U 盘并打开 Chromebook Recovery Utility。点击工具右上角的齿轮,选择 erase recovery media(擦除恢复媒介,如图 1)。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudready-erase.png?itok=1si1QrCL) ->图 1:选择 erase recovery media。[image:cloudready-erase] + +*图 1:选择 erase recovery media。[image:cloudready-erase]* 接下来,选择目标 USB 
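顺带一提,前面用 Chromebook Recovery Utility 把镜像写入 U 盘的步骤,在 Linux 命令行下也可以完成。下面是一个示意脚本(并非 Neverware 的官方做法):镜像文件名取自上文解压得到的 chromiumos_image.bin,而目标设备 /dev/sdX 只是假设的占位名,写入会清空整个设备,使用前务必先用 lsblk 确认。

```shell
# 示意脚本: 用命令行替代 Chromebook Recovery Utility 写盘(仅为思路演示)。
# 注意: 目标设备 /dev/sdX 只是假设的占位名, 写入会清空整个设备, 请先用 lsblk 确认!

# 校验镜像文件: 打印大小和 SHA256, 方便写盘前确认文件完整
verify_image() {
    local image="$1"
    [ -f "$image" ] || { echo "找不到镜像文件: $image" >&2; return 1; }
    echo "镜像大小: $(wc -c < "$image") 字节"
    echo "SHA256: $(sha256sum "$image" | awk '{print $1}')"
}

# 将镜像写入指定块设备(例如 /dev/sdX), 带进度显示
write_image() {
    local image="$1" target="$2"
    verify_image "$image" || return 1
    [ -b "$target" ] || { echo "$target 不是块设备, 已取消写入." >&2; return 1; }
    # bs=4M 分块写入以提高速度, status=progress 显示进度, sync 确保数据落盘
    dd if="$image" of="$target" bs=4M status=progress && sync
}
```

例如先运行 `verify_image chromiumos_image.bin` 校验,确认无误后再执行 `write_image chromiumos_image.bin /dev/sdX`。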
驱动器并把它格式化。格式化完成后,再次打开右上齿轮,这次选择 use local image(使用本地镜像)。浏览解压的 bin 文件并选中,选好 USB 驱动器,点击继续,然后点击创建按钮(图 2)。它会开始将镜像写入驱动器。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudready-create.png?itok=S1FGzRp-) ->图 2:创建 CloudReady 镜像。[Image:cloudready-create] + +*图 2:创建 CloudReady 镜像。[Image:cloudready-create]* 驱动器写好可启动的 CloudReady 之后,插到目标 PC 上并启动。系统启动进 Chromium OS 需要一小段时间。启动之后,你会看到图 3 中的界面。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-ready-install-1.jpg?itok=D6SjlIQ4) ->图 3:准备好安装 CloudReady。 + +*图 3:准备好安装 CloudReady。* ![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/cloud-ready-install-single_crop.jpg?itok=My2rUjYC) ->图 4:单系统选项。 + +*图 4:单系统选项。* 到任务栏选择 Install CloudReady(安装 CloudReady)。 @@ -53,7 +56,8 @@ Neverware 提供两个版本的 CloudReady 镜像:32 位和 64 位。从下载 按照下一步按钮说明选择安装。 ![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/cloud-ready-install-dual_crop.jpg?itok=Daywck_s) ->图 5:双系统选项。 + +*图 5:双系统选项。* 整个过程最多 20 分钟左右,这取决于存储媒介和处理能力。安装完成后,电脑会关闭并重启。 @@ -62,17 +66,20 @@ Neverware 提供两个版本的 CloudReady 镜像:32 位和 64 位。从下载 你连上无线网络之后,系统会自动查找更新并提供 Adobe Flash 安装。安装完成后,你会看到 Chromium OS 登录界面。现在你只需登录你的 Gmail 账户,开始使用你的“Chromebook”即可。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-ready-post-install-network.jpg?itok=gSX2fQZS) ->图 6:网络设置。 + +*图 6:网络设置。* ### 让 Netflix 正常工作 如果你想要播放 Netflix 或其它 DRM 保护流媒体站点,你需要做一些额外的工作。转到设置并点击安装 Widevine 插件(图 7)。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/install_widevine.png?itok=bUJaRmyx0) ->图 7:安装 Widevine。 + +*图 7:安装 Widevine。* ![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/user-agent-changer.jpg?itok=5QDCLrZk) ->图 8:安装 User Agent Switcher. 
+ +*图 8:安装 User Agent Switcher。* 现在你需要使用 user agent switcher 这个伎俩(图 8)。 @@ -96,20 +103,20 @@ Indicator Flag: "IE" 点击“添加(Add)”。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/spoof-netflix.png?itok=8DEZK4Pl) ->图 9:为 CloudReady 创建条目。 + +*图 9:为 CloudReady 创建条目。* 然后,到“permanent spoof list(永久欺骗列表)”选项中将 CloudReady Widevine 添加为 [www.netflix.com](http://www.netflix.com) 的永久 UA 串。 现在,重启机器,你就可以观看 Netflix 和其它一些服务了。 - -------------------------------------------------------------------------------- via: https://www.linux.com/learn/turn-your-old-laptop-chromebook 作者:[SWAPNIL BHARTIYA][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8c4cc8d44ae5ebffc1a7667fcf68e1838e739340 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Thu, 7 Jul 2016 17:40:22 +0800 Subject: [PATCH 074/471] Create Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md --- ...Installed Help Documentations and Tools.md | 178 ++++++++++++++++++ 1 file changed, 178 insertions(+) create mode 100644 translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md diff --git a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md new file mode 100644 index 0000000000..2d0238becf --- /dev/null +++ b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md @@ -0,0 +1,178 @@ +LFCS第十二讲: 如何使用Linux的帮助文档和工具 +================================================================================== + +由于 2016 年 2 月 2 号开始启用了新的 LFCS 考试要求, 我们在[LFCS series][1]系列添加了一些必要的内容 . 为了考试的需要, 我们强烈建议你看一下[LFCE series][2] . 
+ +![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png) +>LFCS: 了解Linux的帮助文档和工具 + +当你习惯了在命令行下进行工作, 你会发现Linux有许多文档需要你去使用和配置Linux系统. + +另一个你必须熟悉命令行帮助工具的理由是,在[LFCS][3] 和 [LFCE][4] 考试中, 你只能靠你自己和命令行工具,没有互联网也没有百度。 + +基于上面的理由, 在这一章里我们将给你一些建议来帮助你通过**Linux Foundation Certification** 考试. + +### Linux 帮助手册 + +man命令, 大体上来说就是一个工具手册. 它包含选项列表(和解释) , 甚至还提供一些例子. + +我们用**man command** 加工具名称来打开一个帮助手册以便获取更多内容. 例如: + +``` +# man diff +``` + +我们将打开`diff`的手册页, 这个工具将一行一行的对比文本文档 (如你想退出只需要轻轻的点一下Q键). + +下面我来比较两个文本文件 `file1` 和 `file2` . 这两个文本文件包含着相同版本Linux的安装包信息. + +输入`diff` 命令它将告诉我们 `file1` 和`file2` 有什么不同: + +``` +# diff file1 file2 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png) +>在Linux中比较两个文本文件 + +`<` 这个符号是说`file2`少一行. 如果是 `file1`少一行, 我们将用 `>` 符号来替代. + +接下来说, **7d6** 意思是说 文件1的**#7**行在 `file2`中被删除了 ( **24d22** 和**41d38**是同样的意思), 65,67d61告诉我们移动 **65** 到 **67** . 我们把以上步骤都做了两个文件将完全匹配. + +你还可以通过 `-y` 选项来对比两个文件: + +``` +# diff -y file1 file2 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png) +>通过列表来列出两个文件的不同 + + 当然你也可以用`diff`来比较两个二进制文件 . 如果它们完全一样, `diff` 将什么也不会输出. 否则, 他将会返回如下信息: “**Binary files X and Y differ**”. + +### –help 选项 + +`--help`选项 , 大多数命令都可以用它(并不是所有) , 他可以理解为一个命令的简单介绍. 尽管它不提供工具的详细介绍, 但是确实是一个能够快速列出程序使用信息的不错的方法. + +例如, + +``` +# sed --help +``` + +显示 sed 的每个选项的用法(sed文本流编辑器). + +一个经典的`sed`例子,替换文件字符. 用 `-i` 选项 (描述为 “**编辑文件在指定位置**”), 你可以编辑一个文件而且并不需要打开他. 如果你想要备份一个原始文件, 用 `-i` 选项 加后缀来创建一个原始文件的副本. + +例如, 替换 `lorem.txt`中的`Lorem` 为 `Tecmint` (忽略大小写) 并且创建一个新的原始文件副本, 命令如下: + +``` +# less lorem.txt | grep -i lorem +# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt +# less lorem.txt | grep -i lorem +# less lorem.txt.orig | grep -i lorem +``` + + 请注意`lorem.txt`文件中`Lorem` 都已经替换为 `Tecmint` , 并且原始的 `lorem.txt` 保存为`lorem.txt.orig`. 
+ +![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png) +>替换文件文本 + +### /usr/share/doc内的文档 +这可能是我最喜欢的方法. 如果你进入 `/usr/share/doc` 目录, 你可以看到好多Linux已经安装的工具的名称的文件夹. + +根据[Filesystem Hierarchy Standard][5](文件目录标准),这些文件夹包含了许多帮助手册没有的信息, 还有一些模板和配置文件. + +例如, 让我们来看一下 `squid-3.3.8` (版本可能会不同) 一个非常受欢迎的HTTP代理[squid cache server][6]. + +让我们用`cd`命令进入目录 : + +``` +# cd /usr/share/doc/squid-3.3.8 +``` + +列出当前文件夹列表: + +``` +# ls +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png) +>ls linux列表命令 + +你应该特别注意 `QUICKSTART` 和 `squid.conf.documented`. 这两个文件包含了Squid许多信息, . 对于别的安装包来说, 他们的名字可能不同 (有可能是 **QuickRef** 或者**00QUICKSTART**), 但原理是一样的. + +对于另外一些安装包, 比如 the Apache web server, 在`/usr/share/doc`目录提供了配置模板, 当你配置独立服务器或者虚拟主机的时候会非常有用. + +### GNU 信息文档 + +你可以把它想象为帮助手册的超级链接形式. 正如上面说的, 他不仅仅提供工具的帮助信息, 而且还是超级链接的形式(是的!在命令行中的超级链接) 你可以通过箭头按钮和回车按钮来浏览你需要的内容. + +一个典型的例子是: + +``` +# info coreutils +``` + +通过coreutils 列出当前系统的 基本文件,shell脚本和文本处理工具[basic file, shell and text manipulation utilities][7] , 你可以得到他们的详细介绍. + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png) +>Info Coreutils + +和帮助手册一样你可以按Q键退出. + +此外, GNU info 还可以像帮助手册一样使用. 例如: + +``` +# info tune2fs +``` + +它将显示 **tune2fs**的帮助手册, ext2/3/4 文件系统管理工具. + +让我们来看看怎么用**tune2fs**: + +显示 **/dev/mapper/vg00-vol_backups**文件系统信息: + +``` +# tune2fs -l /dev/mapper/vg00-vol_backups +``` + +修改文件系统标签 (修改为Backups): + +``` +# tune2fs -L Backups /dev/mapper/vg00-vol_backups +``` + +设置 `/` 自检的挂载次数 (用`-c` 选项设置 `/`的自检的挂载次数 或者用 `-i` 选项设置 自检时间 **d=days, w=weeks, and m=months**). + +``` +# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts +# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks +``` + +以上这些内容也可以通过 `--help` 选项找到, 或者查看帮助手册. + +### 摘要 + +不管你选择哪种方法,知道并且会使用它们在考试中对你是非常有用的. 你知道其它的一些方法吗? 欢迎给我们留言. 
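本讲介绍的几种查文档途径(man、--help、/usr/share/doc、GNU info)可以组合成一个小函数来依次探测。下面是一个假设性的草稿(函数名 doc_sources 和输出格式都是本文虚构的,并非标准工具),在大多数 GNU/Linux 系统上可以直接运行:

```
# 依次探测某个命令在本机有哪些文档来源(函数名与输出格式均为示例假设)
doc_sources() {
    cmd=$1
    # 1. man 手册页(man -w 只查找手册页路径,不打开)
    man -w "$cmd" >/dev/null 2>&1 && echo "man:  man $cmd"
    # 2. --help 简明帮助
    "$cmd" --help >/dev/null 2>&1 && echo "help: $cmd --help"
    # 3. /usr/share/doc 下的随包文档目录
    ls -d /usr/share/doc/"$cmd"* >/dev/null 2>&1 && echo "doc:  /usr/share/doc/$cmd*"
    # 4. GNU info 文档
    command -v info >/dev/null 2>&1 && echo "info: info $cmd"
    return 0
}

doc_sources ls
```

把 `ls` 换成 `sed`、`tune2fs` 等正文提到的工具,就能快速看到考试环境里有哪些本机文档可用。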
+ + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ + +作者:[Gabriel Cánepa][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/ +[6]: http://www.tecmint.com/configure-squid-server-in-linux/ +[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[8]: From f5f8c7b7f9f40b54018b76c74026acb7189bebd7 Mon Sep 17 00:00:00 2001 From: Mike Date: Thu, 7 Jul 2016 17:43:39 +0800 Subject: [PATCH 075/471] sources/tech/20160609 How to record your terminal session on Linux.md (#4154) * sources/tech/20160609 How to record your terminal session on Linux.md * translated/tech/20160609 How to record your terminal session on Linux.md --- ...o record your terminal session on Linux.md | 101 ----------------- ...o record your terminal session on Linux.md | 104 ++++++++++++++++++ 2 files changed, 104 insertions(+), 101 deletions(-) delete mode 100644 sources/tech/20160609 How to record your terminal session on Linux.md create mode 100644 translated/tech/20160609 How to record your terminal session on Linux.md diff --git a/sources/tech/20160609 How to record your terminal session on Linux.md b/sources/tech/20160609 How to record your terminal session on Linux.md deleted file mode 100644 index 
787bafea6d..0000000000 --- a/sources/tech/20160609 How to record your terminal session on Linux.md +++ /dev/null @@ -1,101 +0,0 @@ -How to record your terminal session on Linux -================================================= - -Recording a terminal session may be important in helping someone learn a process, sharing information in an understandable way, and also presenting a series of commands in a proper manner. Whatever the purpose, there are many times when copy-pasting text from the terminal won't be very helpful while capturing a video of the process is quite far-fetched and may not be always possible. In this quick guide, we will take a look at the easiest way to record and share a terminal session in .gif format. - -### Prerequisites - -If you just want to record your terminal sessions and be able to play the recording in your terminal, or share them with people who will use a terminal for playback, then the only tool that you'll need is called “ttyrec”. Ubuntu users may install it by inserting the following command on a terminal: - -``` -sudo apt-get install ttyrec -``` - -If you want to produce a .gif file from the recording and be able to share it with people who don't use the terminal, publish it on websites, or simply keep a .gif handy for when you'll need it instead of written commands, you will have to install two additional packages. The first one is “imagemagick” which you can install with: - -``` -sudo apt-get install imagemagick -``` - -and the second one is “tty2gif” which can be downloaded from here. The latter has a dependency that can be satisfied with: -sudo apt-get install python-opster - -### Capturing - -To start capturing the terminal session, all you need to do is simply start with “ttyrec” + enter. This will launch the real-time recording tool which will run in the background until we enter “exit” or we press “Ctrl+D”. 
By default, ttyrec creates a file named “ttyrecord” on the destination of the terminal session which by default is “Home”. - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg) - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg) - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg) - -### Playing - -Playing the file is as simple as opening a terminal on the destination of the “ttyrecord” file and using the “ttyplay” command followed by the name of the recording (in our case it's ttyrecord but you may change this into whatever you want). - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg) - -This will result in the playback of the recorded session, in real-time, and with typing corrections included (all actions are recorded). This will look like a completely normal automated terminal session, but the commands and their apparent execution are obviously not really applied to the system, as they are only reproduced as a recording. - -It is also important to note that the playback of the terminal session recording is completely controllable. You may double the playback speed by hitting the “+” button, slow it down with the “-” button, pause it with “0”, and resume it in normal speed with “1”. - -### Converting into a .gif - -For reasons of convenience, many of us would like to convert the recorded session into a .gif file, and that is very easy to do. 
Here's how: - -First, untar the downloaded “tty2gif.tar.bz2” by opening a terminal in the download location and entering the following command: - -``` -tar xvfj tty2gif.tar.bz2 -``` - -Next, copy the resulting “tty2gif.py file onto the destination of the “ttyrecord” file (or whatever the name you've specified is), and then open a terminal on that destination and type the command: - -``` -python tty2gif.py typing ttyrecord -``` - -If you are getting errors in this step, check that you have installed the “python-opster” package. If errors persist, give the following two commands consecutively: - -``` -sudo apt-get install xdotool -export WINDOWID=$(xdotool getwindowfocus) -``` - -then repeat the “python tty2gif.py typing ttyrecord ” and you should now see a number of gif files that were created on the location of the “ttyrecord” - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg) - -The next step is to unify all these gifs that correspond to individual terminal session actions into one final .gif file using the imagemagick utility. To do this, open a terminal on the destination and insert the following command: - -``` -convert -delay 25 -loop 0 *.gif example.gif -``` - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg) - -You may name the resulting file as you like (I used “example.gif”), and you may change the delay and loop settings as needed. 
Here is the resulting file of this quick tutorial: - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif) - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/ - -作者:[Bill Toulas][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twitter.com/howtoforgecom - - - - - - - - - diff --git a/translated/tech/20160609 How to record your terminal session on Linux.md b/translated/tech/20160609 How to record your terminal session on Linux.md new file mode 100644 index 0000000000..bc91a552a7 --- /dev/null +++ b/translated/tech/20160609 How to record your terminal session on Linux.md @@ -0,0 +1,104 @@ +如何在 Linux 上录制你的终端操作 +================================================= + +录制终端操作是帮助他人学习操作流程、以通俗易懂的方式分享信息、规范地展示一系列命令的好方法。不管出于什么目的,很多时候从终端复制粘贴文本并不是很有帮助,而录制视频又过于麻烦,也并非总是可行。在这篇快速指南中,我们将了解以 .gif 格式录制并分享终端会话的最简单方法。 + +### 预先要求 + +如果你只是希望能记录你的终端会话,并且能在终端进行播放或者和他人分享,那么你只需要一个叫做 'ttyrec' 的软件。Ubuntu 使用者可以通过运行这行代码进行安装: + +``` +sudo apt-get install ttyrec +``` + +如果你想把录制结果转换成一个 GIF 文件,以便发布到网站上、分享给不使用终端的人,或者只是留一份 GIF 以备不时之需,那么你需要安装额外的两个软件包。第一个就是 'imagemagick',你可以通过以下的命令安装: + +``` +sudo apt-get install imagemagick +``` + +第二个软件包就是 'tty2gif',你可以从这里下载。这个软件包需要安装如下依赖: + +``` +sudo apt-get install python-opster +``` + +### 录制 + +开始录制终端操作,你需要的仅仅是键入 'ttyrec' + 回车。这个命令会启动一个实时的录制工具,它会在后台一直运行,直到我们键入 'exit' 或者按下 'Ctrl+D' 为止。ttyrec 默认会在 'Home' 目录下创建一个名为 'ttyrecord' 的文件。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg) + +### 播放 + +播放这个文件非常简单。你只需要打开终端并且使用 'ttyplay' 命令打开
'ttyrecord' 文件即可。(在这个例子里,我们使用 ttyrecord 作为文件名,当然,你也可以对这个文件进行重命名) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg) + +然后就可以开始实时回放这次录制了。回放会重现录制时的所有操作,包括你的删除和修改。这看起来就像一个自动操作的终端,但这些命令及其执行过程显然并没有真正作用于系统,它们只是录像的回放而已。 + +还要注意的一点,播放这段录像的过程是完全可控的,你可以通过按下 '+'、'-'、'0' 或者 '1' 键来加速、减速、暂停和恢复播放。 + +### 导出成 GIF + +为了方便,我们通常会将录制结果转换为 GIF 格式,而且这也非常容易实现。以下是方法: + +首先,解压这个文件 'tty2gif.tar.bz2': + +``` +tar xvfj tty2gif.tar.bz2 +``` + +然后,将解压出的 'tty2gif.py' 文件拷贝到 'ttyrecord' 文件(或者你另外命名的那个录像文件)所在的目录,接着在这个目录下打开终端,输入命令: + +``` +python tty2gif.py typing ttyrecord +``` + +如果你出现了错误,检查一下你是否有安装 'python-opster' 包。如果还是有错误,依次执行如下两条命令进行排除: + +``` +sudo apt-get install xdotool +export WINDOWID=$(xdotool getwindowfocus) +``` + +然后重新执行 'python tty2gif.py typing ttyrecord',你将会看到在 'ttyrecord' 所在目录下生成了一系列 gif 文件。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg) + +接下来的一步就是把这些对应于终端会话各个动作的 gif 文件,用 imagemagick 工具合并成一个最终的 GIF 文件。在该目录下打开终端并输入下列命令: + +``` +convert -delay 25 -loop 0 *.gif example.gif +``` + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg) + +你可以使用任意的文件名,我喜欢用 'example.gif'。并且,你可以按需改变延时和循环设置。Enjoy.
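把上面从录制到合成 GIF 的完整流程串起来,大致相当于下面这个“干跑”脚本——它只打印将要执行的命令而不真正运行(文件名 ttyrecord、example.gif 沿用正文示例;把 run 函数换成真正执行的版本之前,需要先装好正文提到的那些工具):

```
# 录制 -> 逐帧导出 -> 合成 GIF 的流程示意(干跑:只打印命令,不实际执行)
# 把 run 改成 run() { "$@"; } 即可真正执行
run() { echo "+ $*"; }

run ttyrec ttyrecord                               # 1. 录制,键入 exit 或 Ctrl+D 结束
run ttyplay ttyrecord                              # 2.(可选)回放检查录制结果
run python tty2gif.py typing ttyrecord             # 3. 逐帧导出为多个 gif
run convert -delay 25 -loop 0 '*.gif' example.gif  # 4. 合成最终的 example.gif
```

这样可以先核对一遍每一步的命令和参数,确认无误后再实际执行。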
+ +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif) + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/ + +作者:[Bill Toulas][a] +译者:[译者ID](https://github.com/MikeCoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/howtoforgecom + + + + + + + + + From 9a0e5ef1a90ee5fe91778e0718585d2fbfb8f5a0 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Thu, 7 Jul 2016 17:45:41 +0800 Subject: [PATCH 076/471] Delete Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md --- ...Installed Help Documentations and Tools.md | 181 ------------------ 1 file changed, 181 deletions(-) delete mode 100644 sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md diff --git a/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md deleted file mode 100644 index e4cf8cb12d..0000000000 --- a/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md +++ /dev/null @@ -1,181 +0,0 @@ -[translating by kokialoves] -Part 12 - LFCS: How to Explore Linux with Installed Help Documentations and Tools -================================================================================== - -Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, your are highly encouraged to use the [LFCE series][2] as well. 
- -![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png) ->LFCS: Explore Linux with Installed Documentations and Tools – Part 12 - -Once you get used to working with the command line and feel comfortable doing so, you realize that a regular Linux installation includes all the documentation you need to use and configure the system. - -Another good reason to become familiar with command line help tools is that in the [LFCS][3] and [LFCE][4] exams, those are the only sources of information you can use – no internet browsing and no googling. It’s just you and the command line. - -For that reason, in this article we will give you some tips to effectively use the installed docs and tools in order to prepare to pass the **Linux Foundation Certification** exams. - -### Linux Man Pages - -A man page, short for manual page, is nothing less and nothing more than what the word suggests: a manual for a given tool. It contains the list of options (with explanation) that the command supports, and some man pages even include usage examples as well. - -To open a man page, use the **man command** followed by the name of the tool you want to learn more about. For example: - -``` -# man diff -``` - -will open the manual page for `diff`, a tool used to compare text files line by line (to exit, simply hit the q key.). - -Let’s say we want to compare two text files named `file1` and `file2` in Linux. These files contain the list of packages that are installed in two Linux boxes with the same distribution and version. - -Doing a `diff` between `file1` and `file2` will tell us if there is a difference between those lists: - -``` -# diff file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png) ->Compare Two Text Files in Linux - -where the `<` sign indicates lines missing in `file2`. If there were lines missing in `file1`, they would be indicated by the `>` sign instead. 
- -On the other hand, **7d6** means line **#7** in file should be deleted in order to match `file2` (same with **24d22** and **41d38**), and 65,67d61 tells us we need to remove lines **65** through **67** in file one. If we make these corrections, both files will then be identical. - -Alternatively, you can display both files side by side using the `-y` option, according to the man page. You may find this helpful to more easily identify missing lines in files: - -``` -# diff -y file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png) ->Compare and List Difference of Two Files - -Also, you can use `diff` to compare two binary files. If they are identical, `diff` will exit silently without output. Otherwise, it will return the following message: “**Binary files X and Y differ**”. - -### The –help Option - -The `--help` option, available in many (if not all) commands, can be considered a short manual page for that specific command. Although it does not provide a comprehensive description of the tool, it is an easy way to obtain information on the usage of a program and a list of its available options at a quick glance. - -For example, - -``` -# sed --help -``` - -shows the usage of each option available in sed (the stream editor). - -One of the classic examples of using `sed` consists of replacing characters in files. Using the `-i` option (described as “**edit files in place**”), you can edit a file without opening it. If you want to make a backup of the original contents as well, use the `-i` option followed by a SUFFIX to create a separate file with the original contents. 
- -For example, to replace each occurrence of the word `Lorem` with `Tecmint` (case insensitive) in `lorem.txt` and create a new file with the original contents of the file, do: - -``` -# less lorem.txt | grep -i lorem -# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt -# less lorem.txt | grep -i lorem -# less lorem.txt.orig | grep -i lorem -``` - -Please note that every occurrence of `Lorem` has been replaced with `Tecmint` in `lorem.txt`, and the original contents of `lorem.txt` has been saved to `lorem.txt.orig`. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png) ->Replace A String in Files - -### Installed Documentation in /usr/share/doc - -This is probably my favorite pick. If you go to `/usr/share/doc` and do a directory listing, you will see lots of directories with the names of the installed tools in your Linux system. - -According to the [Filesystem Hierarchy Standard][5], these directories contain useful information that might not be in the man pages, along with templates and configuration files to make configuration easier. - -For example, let’s consider `squid-3.3.8` (version may vary from distribution to distribution) for the popular HTTP proxy and [squid cache server][6]. - -Let’s `cd` into that directory: - -``` -# cd /usr/share/doc/squid-3.3.8 -``` - -and do a directory listing: - -``` -# ls -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png) ->Linux Directory Listing with ls Command - -You may want to pay special attention to `QUICKSTART` and `squid.conf.documented`. These files contain an extensive documentation about Squid and a heavily commented configuration file, respectively. For other packages, the exact names may differ (as **QuickRef** or **00QUICKSTART**, for example), but the principle is the same. 
- -Other packages, such as the Apache web server, provide configuration file templates inside `/usr/share/doc`, that will be helpful when you have to configure a standalone server or a virtual host, to name a few cases. - -### GNU info Documentation - -You can think of info documents as man pages on steroids. As such, they not only provide help for a specific tool, but also they do so with hyperlinks (yes, hyperlinks in the command line!) that allow you to navigate from a section to another using the arrow keys and Enter to confirm. - -Perhaps the most illustrative example is: - -``` -# info coreutils -``` - -Since coreutils contains the [basic file, shell and text manipulation utilities][7] which are expected to exist on every operating system, you can reasonably expect a detailed description for each one of those categories in info **coreutils**. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png) ->Info Coreutils - -As it is the case with man pages, you can exit an info document by pressing the `q` key. - -Additionally, GNU info can be used to display regular man pages as well when followed by the tool name. For example: - -``` -# info tune2fs -``` - -will return the man page of **tune2fs**, the ext2/3/4 filesystems management tool. - -And now that we’re at it, let’s review some of the uses of **tune2fs**: - -Display information about the filesystem on top of **/dev/mapper/vg00-vol_backups**: - -``` -# tune2fs -l /dev/mapper/vg00-vol_backups -``` - -Set a filesystem volume name (Backups in this case): - -``` -# tune2fs -L Backups /dev/mapper/vg00-vol_backups -``` - -Change the check intervals and `/` or mount counts (use the `-c` option to set a number of mount counts and `/` or the `-i` option to set a check interval, where **d=days, w=weeks, and m=months**). 
- -``` -# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts -# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks -``` - -All of the above options can be listed with the `--help` option, or viewed in the man page. - -### Summary - -Regardless of the method that you choose to invoke help for a given tool, knowing that they exist and how to use them will certainly come in handy in the exam. Do you know of any other tools that can be used to look up documentation? Feel free to share with the Tecmint community using the form below. - -Questions and other comments are more than welcome as well. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/ -[6]: http://www.tecmint.com/configure-squid-server-in-linux/ -[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[8]: From aebeabf6f037c2c12940bdc5e44edbaa43792b4d Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Thu, 7 Jul 2016 20:26:30 +0800 Subject: [PATCH 077/471] =?UTF-8?q?Create=20Part=2012=20-=20How=20to=20Exp?= =?UTF-8?q?lore=20Linux=20with=20Installed=20Help=20Documentati=E2=80=A6?= =?UTF-8?q?=20(#4155)?= 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Create Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md * Delete Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md --- ...Installed Help Documentations and Tools.md | 181 ------------------ ...Installed Help Documentations and Tools.md | 178 +++++++++++++++++ 2 files changed, 178 insertions(+), 181 deletions(-) delete mode 100644 sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md create mode 100644 translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md diff --git a/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md deleted file mode 100644 index e4cf8cb12d..0000000000 --- a/sources/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md +++ /dev/null @@ -1,181 +0,0 @@ -[translating by kokialoves] -Part 12 - LFCS: How to Explore Linux with Installed Help Documentations and Tools -================================================================================== - -Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, your are highly encouraged to use the [LFCE series][2] as well. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png) ->LFCS: Explore Linux with Installed Documentations and Tools – Part 12 - -Once you get used to working with the command line and feel comfortable doing so, you realize that a regular Linux installation includes all the documentation you need to use and configure the system. 
- -Another good reason to become familiar with command line help tools is that in the [LFCS][3] and [LFCE][4] exams, those are the only sources of information you can use – no internet browsing and no googling. It’s just you and the command line. - -For that reason, in this article we will give you some tips to effectively use the installed docs and tools in order to prepare to pass the **Linux Foundation Certification** exams. - -### Linux Man Pages - -A man page, short for manual page, is nothing less and nothing more than what the word suggests: a manual for a given tool. It contains the list of options (with explanation) that the command supports, and some man pages even include usage examples as well. - -To open a man page, use the **man command** followed by the name of the tool you want to learn more about. For example: - -``` -# man diff -``` - -will open the manual page for `diff`, a tool used to compare text files line by line (to exit, simply hit the q key.). - -Let’s say we want to compare two text files named `file1` and `file2` in Linux. These files contain the list of packages that are installed in two Linux boxes with the same distribution and version. - -Doing a `diff` between `file1` and `file2` will tell us if there is a difference between those lists: - -``` -# diff file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png) ->Compare Two Text Files in Linux - -where the `<` sign indicates lines missing in `file2`. If there were lines missing in `file1`, they would be indicated by the `>` sign instead. - -On the other hand, **7d6** means line **#7** in file should be deleted in order to match `file2` (same with **24d22** and **41d38**), and 65,67d61 tells us we need to remove lines **65** through **67** in file one. If we make these corrections, both files will then be identical. - -Alternatively, you can display both files side by side using the `-y` option, according to the man page. 
You may find this helpful to more easily identify missing lines in files:
-
-```
-# diff -y file1 file2
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png)
->Compare and List Difference of Two Files
-
-Also, you can use `diff` to compare two binary files. If they are identical, `diff` will exit silently without output. Otherwise, it will return the following message: “**Binary files X and Y differ**”.
-
-### The --help Option
-
-The `--help` option, available in many (if not all) commands, can be considered a short manual page for that specific command. Although it does not provide a comprehensive description of the tool, it is an easy way to obtain information on the usage of a program and a list of its available options at a quick glance.
-
-For example,
-
-```
-# sed --help
-```
-
-shows the usage of each option available in sed (the stream editor).
-
-One of the classic examples of using `sed` consists of replacing characters in files. Using the `-i` option (described as “**edit files in place**”), you can edit a file without opening it. If you want to make a backup of the original contents as well, use the `-i` option followed by a SUFFIX to create a separate file with the original contents.
-
-For example, to replace each occurrence of the word `Lorem` with `Tecmint` (case insensitive) in `lorem.txt` and create a new file with the original contents of the file, do:
-
-```
-# less lorem.txt | grep -i lorem
-# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt
-# less lorem.txt | grep -i lorem
-# less lorem.txt.orig | grep -i lorem
-```
-
-Please note that every occurrence of `Lorem` has been replaced with `Tecmint` in `lorem.txt`, and the original contents of `lorem.txt` have been saved to `lorem.txt.orig`.
-
-![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png)
->Replace A String in Files
-
-### Installed Documentation in /usr/share/doc
-
-This is probably my favorite pick. If you go to `/usr/share/doc` and do a directory listing, you will see lots of directories with the names of the installed tools in your Linux system.
-
-According to the [Filesystem Hierarchy Standard][5], these directories contain useful information that might not be in the man pages, along with templates and configuration files to make configuration easier.
-
-For example, let’s consider `squid-3.3.8` (version may vary from distribution to distribution) for the popular HTTP proxy and [squid cache server][6].
-
-Let’s `cd` into that directory:
-
-```
-# cd /usr/share/doc/squid-3.3.8
-```
-
-and do a directory listing:
-
-```
-# ls
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png)
->Linux Directory Listing with ls Command
-
-You may want to pay special attention to `QUICKSTART` and `squid.conf.documented`. These files contain extensive documentation about Squid and a heavily commented configuration file, respectively. For other packages, the exact names may differ (as **QuickRef** or **00QUICKSTART**, for example), but the principle is the same.
-
-Other packages, such as the Apache web server, provide configuration file templates inside `/usr/share/doc` that will be helpful when you have to configure a standalone server or a virtual host, to name a few cases.
-
-### GNU info Documentation
-
-You can think of info documents as man pages on steroids. As such, they not only provide help for a specific tool, but also do so with hyperlinks (yes, hyperlinks in the command line!) that allow you to navigate from one section to another using the arrow keys and Enter to confirm.
-
-Perhaps the most illustrative example is:
-
-```
-# info coreutils
-```
-
-Since coreutils contains the [basic file, shell and text manipulation utilities][7] which are expected to exist on every operating system, you can reasonably expect a detailed description for each one of those categories in info **coreutils**.
-
-![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png)
->Info Coreutils
-
-As is the case with man pages, you can exit an info document by pressing the `q` key.
-
-Additionally, GNU info can be used to display regular man pages as well when followed by the tool name. For example:
-
-```
-# info tune2fs
-```
-
-will return the man page of **tune2fs**, the ext2/3/4 filesystems management tool.
-
-And while we’re at it, let’s review some of the uses of **tune2fs**:
-
-Display information about the filesystem on top of **/dev/mapper/vg00-vol_backups**:
-
-```
-# tune2fs -l /dev/mapper/vg00-vol_backups
-```
-
-Set a filesystem volume name (Backups in this case):
-
-```
-# tune2fs -L Backups /dev/mapper/vg00-vol_backups
-```
-
-Change the check intervals and/or mount counts (use the `-c` option to set a number of mount counts and/or the `-i` option to set a check interval, where **d=days, w=weeks, and m=months**).
-
-```
-# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts
-# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks
-```
-
-All of the above options can be listed with the `--help` option, or viewed in the man page.
-
-### Summary
-
-Regardless of the method that you choose to invoke help for a given tool, knowing that they exist and how to use them will certainly come in handy in the exam. Do you know of any other tools that can be used to look up documentation? Feel free to share with the Tecmint community using the form below.
-
-Questions and other comments are more than welcome as well. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/ -[6]: http://www.tecmint.com/configure-squid-server-in-linux/ -[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[8]: diff --git a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md new file mode 100644 index 0000000000..2d0238becf --- /dev/null +++ b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md @@ -0,0 +1,178 @@ +LFCS第十二讲: 如何使用Linux的帮助文档和工具 +================================================================================== + +由于 2016 年 2 月 2 号开始启用了新的 LFCS 考试要求, 我们在[LFCS series][1]系列添加了一些必要的内容 . 为了考试的需要, 我们强烈建议你看一下[LFCE series][2] . + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png) +>LFCS: 了解Linux的帮助文档和工具 + +当你习惯了在命令行下进行工作, 你会发现Linux有许多文档需要你去使用和配置Linux系统. + +另一个你必须熟悉命令行帮助工具的理由是,在[LFCS][3] 和 [LFCE][4] 考试中, 你只能靠你自己和命令行工具,没有互联网也没有百度。 + +基于上面的理由, 在这一章里我们将给你一些建议来帮助你通过**Linux Foundation Certification** 考试. 
+
+### Linux 帮助手册
+
+man 命令,大体上来说就是一个工具手册,它包含选项列表(及其解释),甚至还提供一些例子。
+
+我们用 **man 命令名** 的形式来打开某个工具的帮助手册以便获取更多内容,例如:
+
+```
+# man diff
+```
+
+这会打开 `diff` 的手册页,这个工具可以逐行对比文本文件(如果想退出,只需按一下 Q 键)。
+
+下面我们来比较两个文本文件 `file1` 和 `file2`,这两个文件包含了同一版本 Linux 的安装包列表。
+
+输入 `diff` 命令,它将告诉我们 `file1` 和 `file2` 有什么不同:
+
+```
+# diff file1 file2
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png)
+>在 Linux 中比较两个文本文件
+
+`<` 这个符号是说 `file2` 少了一行。如果是 `file1` 少了一行,我们会用 `>` 符号来表示。
+
+接下来,**7d6** 的意思是 `file1` 的第 **7** 行需要删除才能和 `file2` 匹配(**24d22** 和 **41d38** 是同样的意思),**65,67d61** 告诉我们需要删除第 **65** 到 **67** 行。做完以上这些修改,两个文件就完全一致了。
+
+你还可以通过 `-y` 选项来并排对比两个文件:
+
+```
+# diff -y file1 file2
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png)
+>以并排方式列出两个文件的不同
+
+当然你也可以用 `diff` 来比较两个二进制文件。如果它们完全一样,`diff` 将什么也不会输出;否则,它将会返回如下信息:“**Binary files X and Y differ**”。
+
+### --help 选项
+
+`--help` 选项大多数命令都支持(但并不是所有),它可以看作是一个命令的简要说明。尽管它不提供工具的详细介绍,但确实是一个能快速了解程序用法及其选项列表的不错的方法。
+
+例如:
+
+```
+# sed --help
+```
+
+它会显示 sed(流编辑器)每个选项的用法。
+
+一个使用 `sed` 的经典例子是替换文件中的字符。使用 `-i` 选项(描述为“**就地编辑文件**”),你可以不用打开文件就编辑它。如果你还想备份原始内容,可以在 `-i` 选项后面加一个后缀,用原始内容创建一个单独的副本文件。
+
+例如,把 `lorem.txt` 中的每个 `Lorem` 替换为 `Tecmint`(忽略大小写),并且用原始内容创建一个新的副本文件,命令如下:
+
+```
+# less lorem.txt | grep -i lorem
+# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt
+# less lorem.txt | grep -i lorem
+# less lorem.txt.orig | grep -i lorem
+```
+
+请注意,`lorem.txt` 文件中的每个 `Lorem` 都已经替换为 `Tecmint`,并且 `lorem.txt` 的原始内容保存到了 `lorem.txt.orig`。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png)
+>替换文件中的字符串
+
+### /usr/share/doc 内的文档
+这可能是我最喜欢的方法。如果你进入 `/usr/share/doc` 目录并列出其内容,你可以看到许多以系统中已安装工具命名的文件夹。
+
+根据 [Filesystem Hierarchy Standard][5](文件系统层次标准),这些文件夹包含了手册页中可能没有的有用信息,还有一些能让配置更容易的模板和配置文件。
+
+例如,让我们来看一下流行的 HTTP 代理和缓存服务器 [squid cache server][6] 的文档目录 `squid-3.3.8`(版本可能因发行版而不同)。
+
+让我们用 `cd` 命令进入该目录:
+
+```
+# cd /usr/share/doc/squid-3.3.8
+```
+
+然后列出当前目录的内容:
+
+```
+# ls
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png)
+>用 ls 命令列出 Linux 目录内容
+
+你应该特别注意 `QUICKSTART` 和 `squid.conf.documented`。这两个文件分别包含了 Squid 的详细文档和带有大量注释的配置文件。对于别的安装包来说,具体的文件名可能不同(例如 **QuickRef** 或者 **00QUICKSTART**),但原理是一样的。
+
+对于另外一些安装包,比如 Apache Web 服务器,`/usr/share/doc` 目录里还提供了配置模板,当你要配置独立服务器或者虚拟主机的时候会非常有用。
+
+### GNU info 文档
+
+你可以把 info 文档看作是加强版的手册页。它不仅提供工具的帮助信息,而且还是超链接的形式(是的,在命令行中的超链接!),你可以通过方向键和回车键在各个章节之间跳转浏览你需要的内容。
+
+一个最典型的例子是:
+
+```
+# info coreutils
+```
+
+因为 coreutils 包含了每个操作系统上都应该有的[基本文件、shell 和文本处理工具][7],所以你自然可以在 info **coreutils** 中得到它们每一类的详细介绍。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png)
+>Info Coreutils
+
+和手册页一样,你可以按 Q 键退出。
+
+此外,GNU info 还可以显示普通的手册页,只需在 info 后面跟上工具名称。例如:
+
+```
+# info tune2fs
+```
+
+它将显示 **tune2fs**(ext2/3/4 文件系统管理工具)的手册页。
+
+让我们来看看 **tune2fs** 的几个用法:
+
+显示 **/dev/mapper/vg00-vol_backups** 上文件系统的信息:
+
+```
+# tune2fs -l /dev/mapper/vg00-vol_backups
+```
+
+设置文件系统的卷标(这里设置为 Backups):
+
+```
+# tune2fs -L Backups /dev/mapper/vg00-vol_backups
+```
+
+修改自检的挂载次数和/或自检时间间隔(用 `-c` 选项设置自检前的挂载次数,用 `-i` 选项设置自检的时间间隔,其中 **d 表示天、w 表示周、m 表示月**)。
+
+```
+# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts
+# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks
+```
+
+以上这些选项都可以通过 `--help` 选项列出,或者查看手册页。
+
+### 摘要
+
+不管你选择哪种方法,知道它们的存在并且会使用它们,在考试中对你会非常有用。你知道其它类似的工具吗?欢迎给我们留言分享。 
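The `/usr/share/doc` walk described above is easy to script as well. Below is a minimal Python sketch (the `find_docs` helper and the filename patterns are our own illustration, not part of the original article) that recursively collects quick-start style documentation files from a directory tree:

```python
import fnmatch
import os

# Filename patterns for the quick-start docs mentioned in the article;
# the exact names vary between packages (QUICKSTART, QuickRef, 00QUICKSTART...).
DOC_PATTERNS = ["QUICKSTART*", "*QUICKSTART", "QuickRef*", "README*", "*.documented"]

def find_docs(root, patterns=DOC_PATTERNS):
    """Walk `root` and return the sorted paths of files matching any pattern."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name, pattern) for pattern in patterns):
                matches.append(os.path.join(dirpath, name))
    return sorted(matches)

if __name__ == "__main__":
    # On a real system you would point this at /usr/share/doc;
    # os.walk simply yields nothing if the directory does not exist.
    for path in find_docs("/usr/share/doc")[:10]:
        print(path)
```

This is only a convenience on top of the manual browsing shown above; the files it finds are the same `QUICKSTART`-style documents you would open with `less`.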
+ + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ + +作者:[Gabriel Cánepa][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/ +[6]: http://www.tecmint.com/configure-squid-server-in-linux/ +[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[8]: From 330fc8a182c2d084c1fa9768c7843703417204af Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 7 Jul 2016 23:02:01 +0800 Subject: [PATCH 078/471] PUB:20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @GitFuture 翻译的很仔细,用心了。加油~ --- ...P 2.0 Support for Nginx on Ubuntu 16.04.md | 124 ++++++++++-------- 1 file changed, 72 insertions(+), 52 deletions(-) rename {translated/tech => published}/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md (60%) diff --git a/translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md similarity index 60% rename from translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 
and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md rename to published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md index f432833a50..0ff5f7a412 100644 --- a/translated/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md +++ b/published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md @@ -1,43 +1,47 @@ -在 Ubuntu 16.04 为 Nginx 服务器安装 LEMP 环境(MariaDB, PHP 7 并且支持 HTTP 2.0) +在 Ubuntu 16.04 为 Nginx 服务器安装 LEMP 环境(MariaDB,PHP 7 并支持 HTTP 2.0) ===================== -LEMP 是字首组合词,代表一组软件包(Linux OS,Nginx 网络服务器,MySQL\MariaDB 数据库和 PHP 服务端动态编程语言),它被用来搭建动态的网络应用和网页。 +LEMP 是个缩写,代表一组软件包(L:Linux OS,E:Nginx 网络服务器,M:MySQL/MariaDB 数据库和 P:PHP 服务端动态编程语言),它被用来搭建动态的网络应用和网页。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-with-FastCGI-on-Ubuntu-16.04.png) ->在 Ubuntu 16.04 安装 Nginx 以及 MariaDB,PHP7 并且支持 HTTP 2.0 + +*在 Ubuntu 16.04 安装 Nginx 以及 MariaDB,PHP7 并且支持 HTTP 2.0* 这篇教程会教你怎么在 Ubuntu 16.04 的服务器上安装 LEMP (Nginx 和 MariaDB 以及 PHP7)。 -准备 +**前置准备** -[安装 Ubuntu 16.04 服务器版本][1] +- [安装 Ubuntu 16.04 服务器版本][1] ### 步骤 1:安装 Nginx 服务器 -#### 1. Nginx 是一个先进的、资源优化的网络服务器程序,用来向因特网上的访客展示网页。我们从 Nginx 服务器的安装开始介绍,使用 [apt 命令][2] 从 Ubuntu 的官方软件仓库中获取 Nginx 程序。 +1、Nginx 是一个先进的、资源优化的 Web 服务器程序,用来向因特网上的访客展示网页。我们从 Nginx 服务器的安装开始介绍,使用 [apt 命令][2] 从 Ubuntu 的官方软件仓库中获取 Nginx 程序。 ``` $ sudo apt-get install nginx ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-on-Ubuntu-16.04.png) ->在 Ubuntu 16.04 安装 Nginx -#### 2. 
然后输入 [netstat][3] 和 [systemctl][4] 命令,确认 Nginx 进程已经启动并且绑定在 80 端口。 +*在 Ubuntu 16.04 安装 Nginx* + +2、 然后输入 [netstat][3] 和 [systemctl][4] 命令,确认 Nginx 进程已经启动并且绑定在 80 端口。 ``` $ netstat -tlpn ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Network-Port-Connection.png) ->检查 Nginx 网络端口连接 + +*检查 Nginx 网络端口连接* ``` $ sudo systemctl status nginx.service ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Service-Status.png) ->检查 Nginx 服务状态 + +*检查 Nginx 服务状态* 当你确认服务进程已经启动了,你可以打开一个浏览器,使用 HTTP 协议访问你的服务器 IP 地址或者域名,浏览 Nginx 的默认网页。 @@ -46,11 +50,12 @@ http://IP-Address ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-Nginx-Webpage.png) ->验证 Nginx 网页 + +*验证 Nginx 网页* ### 步骤 2:启用 Nginx HTTP/2.0 协议 -#### 3. HTTP/2.0 协议默认包含在 Ubuntu 16.04 最新发行版的 Nginx 二进制文件中,它只能通过 SSL 连接并且保证加载网页的速度有巨大提升。 +3、 对 HTTP/2.0 协议的支持默认包含在 Ubuntu 16.04 最新发行版的 Nginx 二进制文件中了,它只能通过 SSL 连接并且保证加载网页的速度有巨大提升。 要启用Nginx 的这个协议,首先找到 Nginx 提供的网站配置文件,输入下面这个命令备份配置文件。 @@ -60,9 +65,10 @@ $ sudo mv default default.backup ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Backup-Nginx-Sites-Configuration-File.png) ->备份 Nginx 的网站配置文件 -#### 4. 然后,用文本编辑器新建一个默认文件,输入以下内容: +*备份 Nginx 的网站配置文件* + +4、然后,用文本编辑器新建一个默认文件,输入以下内容: ``` server { @@ -113,16 +119,16 @@ server { ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-Nginx-HTTP-2-Protocol.png) ->启用 Nginx HTTP 2 协议 + +*启用 Nginx HTTP 2 协议* 上面的配置片段向所有的 SSL 监听指令中添加 http2 参数来启用 `HTTP/2.0`。 -添加到服务器配置的最后一段,是用来将所有非 SSL 的流量重定向到 SSL/TLS 默认主机。然后用你主机的 IP 地址或者 DNS 记录(优先 FQDN)替换掉 `server_name` 选项。 (directive 的翻译是指令,但我觉得翻译成选项更好) +上述添加到服务器配置的最后一段,是用来将所有非 SSL 的流量重定向到 SSL/TLS 默认主机。然后用你主机的 IP 地址或者 DNS 记录(最好用 FQDN 名称)替换掉 `server_name` 选项的参数。 -#### 5. 
当你按照以上步骤编辑完 Nginx 的默认配置文件之后,用下面这些命令来生成、查看 SSL 证书和密钥。 - -用你自定义的设置完成证书的制作,注意常用名设置成和你的 DNS FQDN 记录或者服务器 IP 地址相匹配,DNS 记录或者 IP 地址是用来访问网页的。 +5、 当你按照以上步骤编辑完 Nginx 的默认配置文件之后,用下面这些命令来生成、查看 SSL 证书和密钥。 +用你自定义的设置完成证书的制作,注意 Common Name 设置成和你的 DNS FQDN 记录或者服务器 IP 地址相匹配。 ``` $ sudo mkdir /etc/nginx/ssl @@ -131,18 +137,20 @@ $ ls /etc/nginx/ssl/ ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Generate-SSL-Certificate-and-Key.png) ->生成 Nginx 的 SSL 证书和密钥 -#### 6. 通过输入以下命令使用一个强 DH 加密算法,在之前的配置文件 `ssl_dhparam` 这一行中进行修改。 +*生成 Nginx 的 SSL 证书和密钥* + +6、 通过输入以下命令使用一个强 DH 加密算法,这会修改之前的配置文件 `ssl_dhparam` 所配置的文件。 ``` $ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048 ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-Diffie-Hellman-Key.png) ->创建 Diffie-Hellman 密钥 -#### 7. 当 `Diffie-Hellman` 密钥生成之后,验证 Nginx 的配置文件是否正确、能否被 Nginx 网络服务程序应用。然后运行以下命令重启守护进程来观察有什么变化。 +*创建 Diffie-Hellman 密钥* + +7、 当 `Diffie-Hellman` 密钥生成之后,验证 Nginx 的配置文件是否正确、能否被 Nginx 网络服务程序应用。然后运行以下命令重启守护进程来观察有什么变化。 ``` $ sudo nginx -t @@ -150,31 +158,34 @@ $ sudo systemctl restart nginx.service ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Configuration.png) ->检查 Nginx 的配置 -#### 8. 键入下面的命令来测试 Nginx 使用的是 HTTP/2.0 协议。看到协议中有 `h2` 的话,表明 Nginx 已经成功配置使用 HTTP/2.0 协议。所有最新的浏览器默认都能够支持这个协议。 +*检查 Nginx 的配置* + +8、 键入下面的命令来测试 Nginx 使用的是 HTTP/2.0 协议。看到协议中有 `h2` 的话,表明 Nginx 已经成功配置使用 HTTP/2.0 协议。所有最新的浏览器默认都能够支持这个协议。 ``` $ openssl s_client -connect localhost:443 -nextprotoneg '' ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Test-Nginx-HTTP-2-Protocol.png) ->测试 Nginx HTTP 2.0 协议 + +*测试 Nginx HTTP 2.0 协议* ### 第 3 步:安装 PHP 7 解释器 通过 FastCGI 进程管理程序的协助,Nginx 能够使用 PHP 动态语言解释器生成动态网络内容。FastCGI 能够从 Ubuntu 官方仓库中安装 php-fpm 二进制包来获取。 -#### 9. 
在你的服务器控制台里输入下面的命令来获取 PHP7.0 和扩展包,这能够让 PHP 与 Nginx 网络服务进程通信, +9、 在你的服务器控制台里输入下面的命令来获取 PHP7.0 和扩展包,这能够让 PHP 与 Nginx 网络服务进程通信。 ``` $ sudo apt install php7.0 php7.0-fpm ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-PHP-FPM-for-Ngin.png) ->安装 PHP 7 以及 PHP-FPM -#### 10. 当 PHP7.0 解释器安装成功后,输入以下命令启动或者检查 php7.0-fpm 守护进程: +*安装 PHP 7 以及 PHP-FPM* + +10、 当 PHP7.0 解释器安装成功后,输入以下命令启动或者检查 php7.0-fpm 守护进程: ``` $ sudo systemctl start php7.0-fpm @@ -182,9 +193,10 @@ $ sudo systemctl status php7.0-fpm ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Start-Verify-php-fpm-Service.png) ->开启、验证 php-fpm 服务 -#### 11. 当前的 Nginx 配置文件已经配置了使用 PHP FPM 来提供动态内容。 +*开启、验证 php-fpm 服务* + +11、 当前的 Nginx 配置文件已经配置了使用 PHP FPM 来提供动态内容。 下面给出的这部分服务器配置让 Nginx 能够使用 PHP 解释器,所以不需要对 Nginx 配置文件作别的修改。 @@ -198,26 +210,30 @@ location ~ \.php$ { 下面是的截图是 Nginx 默认配置文件的内容。你可能需要对其中的代码进行修改或者取消注释。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-PHP-FastCGI-for-Nginx.png) ->启用 PHP FastCGI -#### 12. 要测试启用了 PHP-FPM 的 Nginx 服务器,用下面的命令创建一个 PHP 测试配置文件 `info.php`。接着用 `http://IP_or domain/info.php` 这个网址来查看配置。 +*启用 PHP FastCGI* + +12、 要测试启用了 PHP-FPM 的 Nginx 服务器,用下面的命令创建一个 PHP 测试配置文件 `info.php`。接着用 `http://IP_or domain/info.php` 这个网址来查看配置。 ``` $ sudo su -c 'echo "" |tee /var/www/html/info.php' ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-PHP-Info-File.png) ->创建 PHP Info 文件 + +*创建 PHP Info 文件* ![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-PHP-FastCGI-Info.png) ->检查 PHP FastCGI 的信息 -检查服务器是否应用 HTTP/2.0 协议,定位到 PHP 变量区域中的 `$_SERVER[‘SERVER_PROTOCOL’]` 就像下面这张截图一样。(advertised by server 翻译不清楚,这里翻译成服务器应用了 HTTP/2.0 协议) +*检查 PHP FastCGI 的信息* + +检查服务器是否宣告支持 HTTP/2.0 协议,定位到 PHP 变量区域中的 `$_SERVER[‘SERVER_PROTOCOL’]` 就像下面这张截图一样。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-HTTP-2.0-Protocol-Info.png) ->检查 HTTP2.0 协议信息 -#### 13. 
为了安装其它的 PHP7.0 模块,使用 `apt search php7.0` 命令查找 php 的模块然后安装。 +*检查 HTTP2.0 协议信息* + +13、 为了安装其它的 PHP7.0 模块,使用 `apt search php7.0` 命令查找 php 的模块然后安装。 如果你想要 [安装 WordPress][5] 或者别的 CMS,需要安装以下的 PHP 模块,这些模块迟早有用。 @@ -226,9 +242,10 @@ $ sudo apt install php7.0-mcrypt php7.0-mbstring ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-Modules.png) ->安装 PHP 7 模块 -#### 14. 注册 PHP 额外的模块,输入下面的命令重启 PHP-FPM 守护进程。 +*安装 PHP 7 模块* + +14、 要注册这些额外的 PHP 模块,输入下面的命令重启 PHP-FPM 守护进程。 ``` $ sudo systemctl restart php7.0-fpm.service @@ -236,9 +253,9 @@ $ sudo systemctl restart php7.0-fpm.service ### 第 4 步:安装 MariaDB 数据库 -#### 15. 最后,我们需要 MariaDB 数据库来存储、管理网站数据来完成搭建 LEMP +15、 最后,我们需要 MariaDB 数据库来存储、管理网站数据,才算完成 LEMP 的搭建。 -运行下面的命令安装 MariaDB 数据库管理系统,重启 PHP-FPM 服务使用 MySQL 模块与数据库通信。 +运行下面的命令安装 MariaDB 数据库管理系统,重启 PHP-FPM 服务以便使用 MySQL 模块与数据库通信。 ``` $ sudo apt install mariadb-server mariadb-client php7.0-mysql @@ -246,9 +263,10 @@ $ sudo systemctl restart php7.0-fpm.service ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-MariaDB-for-Nginx.png) ->安装 MariaDB -#### 16. 为了保证 MariaDB 的安装,运行来自 Ubuntu 软件仓库中的二进制包提供的安全脚本,这会询问你设置一个根用户密码,移除匿名用户,禁用根用户远程登陆,移除测试数据库。 +*安装 MariaDB* + +16、 为了安全加固 MariaDB,运行来自 Ubuntu 软件仓库中的二进制包提供的安全脚本,这会询问你设置一个 root 密码,移除匿名用户,禁用 root 用户远程登录,移除测试数据库。 输入下面的命令运行脚本,并且确认所有的选择。参照下面的截图。 @@ -257,10 +275,10 @@ $ sudo mysql_secure_installation ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Secure-MariaDB-Installation-for-Nginx.png) ->MariaDB 的安全安装 +*MariaDB 的安全安装* -#### 17. 
配置 MariaDB 以便普通用户能够不使用 系统的 sudo 权限来访问数据库。用根用户权限打开 MySQL 命令行界面,运行下面的命令: +17、 配置 MariaDB 以便普通用户能够不使用系统的 sudo 权限来访问数据库。用 root 用户权限打开 MySQL 命令行界面,运行下面的命令: ``` $ sudo mysql @@ -271,26 +289,28 @@ MariaDB> exit ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/MariaDB-User-Permissions.png) ->MariaDB 的用户权限 -最后登陆到 MariaDB 数据库,通过以下命令不使用 root 权限执行任意一个命令: +*MariaDB 的用户权限* + +最后通过执行以下命令登录到 MariaDB 数据库,就可以不需要 root 权限而执行任意数据库内的命令: ``` $ mysql -u root -p -e 'show databases' ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-MariaDB-Databases.png) ->查看 MariaDB 数据库 + +*查看 MariaDB 数据库* 好了!现在你拥有了配置在 **Ubuntu 16.04** 服务器上的 **LEMP** 环境,你能够部署能够与数据库交互的复杂动态网络应用。 -------------------------------------------------------------------------------- -via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 +via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/ -作者:[Matei Cezar ][a] +作者:[Matei Cezar][a] 译者:[GitFuture](https://github.com/GitFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d776733b45d3533b18228db35d7ae3fd1e3edc8b Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 8 Jul 2016 00:02:48 +0800 Subject: [PATCH 079/471] PUB:20160606 Basic Git Commands You Must Know @alim0x --- ...160606 Basic Git Commands You Must Know.md | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-) rename {translated/tech => published}/20160606 Basic Git Commands You Must Know.md (75%) diff --git a/translated/tech/20160606 Basic Git Commands You Must Know.md b/published/20160606 Basic Git Commands You Must Know.md similarity index 75% rename from translated/tech/20160606 Basic Git Commands You Must Know.md rename to published/20160606 Basic Git Commands You Must Know.md index dbaf78ad95..57aa6662e2 
100644 --- a/translated/tech/20160606 Basic Git Commands You Must Know.md +++ b/published/20160606 Basic Git Commands You Must Know.md @@ -5,15 +5,15 @@ *简介:这个快速指南将向你展示所有的基础 Git 命令以及用法。你可以下载这些命令作为快速参考。* -我们在早先一篇文章中已经见过快速指南和 [Vi cheat sheet 下载][1]了。在这篇文章里,我们将会看到开始使用 Git 所需要的基础命令。 +我们在早先一篇文章中已经快速介绍过 [Vi 速查表][1]了。在这篇文章里,我们将会介绍开始使用 Git 时所需要的基础命令。 -### GIT +### Git [Git][2] 是一个分布式版本控制系统,它被用在大量开源项目中。它是在 2005 年由 Linux 创始人 [Linus Torvalds][3] 写就的。这个程序允许非线性的项目开发,并且能够通过存储在本地服务器高效处理大量数据。在这个教程里,我们将要和 Git 愉快玩耍并学习如何开始使用它。 我在这个教程里使用 Ubuntu,但你可以使用你选择的任何发行版。除了安装以外,剩下的所有命令在任何 Linux 发行版上都是一样的。 -### 安装 GIT +### 安装 Git 要安装 git 执行以下命令: @@ -23,11 +23,11 @@ sudo apt-get install git-core 在它完成下载之后,你就安装好了 Git 并且可以使用了。 -### 设置 GIT: +### 设置 Git 在 Git 安装之后,不论是从 apt-get 还是从源码安装,你需要将你的用户名和邮箱地址复制到 gitconfig 文件。你可以访问 ~/.gitconfig 这个文件。 -全新安装 Git 之后打开它会是完全空白的页面: +全新安装 Git 之后打开它会是完全空白的: ``` sudo vim ~/.gitconfig @@ -42,7 +42,7 @@ git config --global user.email user@example.com 然后你就完成设置了。现在让我们开始 Git。 -### 仓库: +### 仓库 创建一个新目录,打开它并运行以下命令: @@ -52,13 +52,11 @@ git init ![](http://itsfoss.com/wp-content/uploads/2016/05/Playing-around-git-1-1024x173.png) -这个命令会创建一个新的 git 仓库。你的本地仓库由三个 git 维护的“树”组成。 +这个命令会创建一个新的 Git 仓库(repository)。你的本地仓库由三个 Git 维护的“树”组成。 -第一个是你的**工作目录**,保存实际的文件。第二个是索引,实际上扮演的是暂存区,最后一个是 HEAD,它指向你最后一个 commit 提交。 +第一个是你的工作目录(Working Directory),保存实际的文件。第二个是索引,实际上扮演的是暂存区(staging area),最后一个是 HEAD,它指向你最后一个 commit 提交。使用 git clone /path/to/repository 签出你的仓库(从你刚创建的仓库或服务器上已存在的仓库)。 -使用 git clone /path/to/repository 签出你的仓库(从你刚创建的仓库或服务器上已存在的仓库)。 - -### 添加文件并提交: +### 添加文件并提交 你可以用以下命令添加改动: @@ -98,7 +96,7 @@ git commit -a ### 推送你的改动 -你的改动在你本地工作副本的 HEAD 中。如果你还没有从一个已存在的仓库克隆或想将你的仓库连接到远程服务器,你需要先添加它: +你的改动在你本地工作副本的 HEAD 中。如果你还没有从一个已存在的仓库克隆,或想将你的仓库连接到远程服务器,你需要先添加它: ``` git remote add origin <服务器地址> @@ -110,9 +108,9 @@ git remote add origin <服务器地址> git push -u origin master ``` -### 分支: +### 分支 -分支用于开发特性,它们之间是互相独立的。主分支 master 是你创建一个仓库时的“默认”分支。使用其它分支用于开发,在完成时将它合并回主分支。 +分支用于开发特性,分支之间是互相独立的。主分支 master 
是你创建一个仓库时的“默认”分支。使用其它分支用于开发,在完成时将它合并回主分支。 创建一个名为“mybranch”的分支并切换到它之上: @@ -144,19 +142,19 @@ git push origin <分支名> ### 更新和合并 -要将你本地仓库更新到最新提交,运行: +要将你本地仓库更新到最新的提交上,运行: ``` git pull ``` -在你的工作目录获取和合并远程变动。要合并其它分支到你的活动分支(如 master),使用: +在你的工作目录获取并合并远程变动。要合并其它分支到你的活动分支(如 master),使用: ``` git merge <分支> ``` -在这两种情况下,git 会尝试自动合并(auto-merge)改动。不幸的是,这不总是可能的,可能会导致冲突。你需要负责通过编辑 git 显示的文件,手动合并那些冲突。改动之后,你需要用以下命令将它们标记为已合并: +在这两种情况下,git 会尝试自动合并(auto-merge)改动。不幸的是,这不总是可能的,可能会导致冲突。你需要通过编辑 git 所显示的文件,手动合并那些冲突。改动之后,你需要用以下命令将它们标记为已合并: ``` git add <文件名> @@ -168,7 +166,7 @@ git add <文件名> git diff <源分支> <目标分支> ``` -### GIT 日志: +### Git 日志 你可以这么查看仓库历史: @@ -176,7 +174,7 @@ git diff <源分支> <目标分支> git log ``` -要查看每个提交一行样式的日志你可以用: +要以每个提交一行的样式查看日志,你可以用: ``` git log --pretty=oneline @@ -196,9 +194,9 @@ git log --name-status 在这整个过程中如果你需要任何帮助,你可以用 git --help。 -Git 棒不棒!!!祝贺你你已经会 git 基础了。如果你愿意的话,你可以从下面这个链接下载这些基础 Git 命令作为快速参考: +Git 棒不棒?!祝贺你你已经会 Git 基础了。如果你愿意的话,你可以从下面这个链接下载这些基础 Git 命令作为快速参考: -[下载 Git Cheat Sheet][4] +- [下载 Git 速查表][4] -------------------------------------------------------------------------------- @@ -207,7 +205,7 @@ via: http://itsfoss.com/basic-git-commands-cheat-sheet/ 作者:[Rakhi Sharma][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 55016034e0a77d8ea7606e6530c98a94c25be0b7 Mon Sep 17 00:00:00 2001 From: Cao Lu Date: Fri, 8 Jul 2016 00:26:25 +0800 Subject: [PATCH 080/471] [Translated] 20160518 Python 3 - An Intro to Encryption.md (#4156) * translated * Cathon is translating --- ...meet the IT needs of today and tomorrow.md | 1 + ...60518 Python 3 - An Intro to Encryption.md | 280 ------------------ ...60518 Python 3 - An Intro to Encryption.md | 280 ++++++++++++++++++ 3 files changed, 281 insertions(+), 280 deletions(-) delete mode 100644 sources/tech/20160518 Python 3 - An Intro to Encryption.md create mode 100644 
translated/tech/20160518 Python 3 - An Intro to Encryption.md diff --git a/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md b/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md index e4d511c5bd..1417c9fe7b 100644 --- a/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md +++ b/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md @@ -1,3 +1,4 @@ +[Cathon is translating] Training vs. hiring to meet the IT needs of today and tomorrow ================================================================ diff --git a/sources/tech/20160518 Python 3 - An Intro to Encryption.md b/sources/tech/20160518 Python 3 - An Intro to Encryption.md deleted file mode 100644 index 3a840114d3..0000000000 --- a/sources/tech/20160518 Python 3 - An Intro to Encryption.md +++ /dev/null @@ -1,280 +0,0 @@ -[Cathon is Translating...] - -Python 3: An Intro to Encryption -=================================== - -Python 3 doesn’t have very much in its standard library that deals with encryption. Instead, you get hashing libraries. We’ll take a brief look at those in the chapter, but the primary focus will be on the following 3rd party packages: PyCrypto and cryptography. We will learn how to encrypt and decrypt strings with both of these libraries. - ---- - -### Hashing - -If you need secure hashes or message digest algorithms, then Python’s standard library has you covered in the **hashlib** module. It includes the FIPS secure hash algorithms SHA1, SHA224, SHA256, SHA384, and SHA512 as well as RSA’s MD5 algorithm. Python also supports the adler32 and crc32 hash functions, but those are in the **zlib** module. - -One of the most popular uses of hashes is storing the hash of a password instead of the password itself. Of course, the hash has to be a good one or it can be decrypted. 
Another popular use case for hashes is to hash a file and then send the file and its hash separately. Then the person receiving the file can run a hash on the file to see if it matches the hash that was sent. If it does, then that means no one has changed the file in transit. 
-
-
-Let’s try creating an md5 hash:
-
-```
->>> import hashlib
->>> md5 = hashlib.md5()
->>> md5.update('Python rocks!')
-Traceback (most recent call last):
- File "", line 1, in
- md5.update('Python rocks!')
-TypeError: Unicode-objects must be encoded before hashing
->>> md5.update(b'Python rocks!')
->>> md5.digest()
-b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
-```
-
-Let’s take a moment to break this down a bit. First off, we import **hashlib** and then we create an instance of an md5 HASH object. Next we add some text to the hash object and we get a traceback. It turns out that to use the md5 hash, you have to pass it a byte string instead of a regular string. So we try that and then call its **digest** method to get our hash. If you prefer the hex digest, we can do that too:
-
-```
->>> md5.hexdigest()
-'1482ec1b2364f64e7d162a2b5b16f477'
-```
-
-There’s actually a shortcut method of creating a hash, so we’ll look at that next when we create our sha1 hash:
-
-```
->>> sha = hashlib.sha1(b'Hello Python').hexdigest()
->>> sha
-'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
-```
-
-As you can see, we can create our hash instance and call its digest method at the same time. Then we print out the hash to see what it is. I chose to use the sha1 hash as it has a nice short hash that will fit the page better. But it’s also less secure, so feel free to try one of the others.
-
----
-
-### Key Derivation
-
-Python has pretty limited support for key derivation built into the standard library. In fact, the only method that hashlib provides is the **pbkdf2_hmac** method, which is the PKCS#5 password-based key derivation function 2. It uses HMAC as its pseudorandom function. 
You might use something like this for hashing your password as it supports a salt and iterations. For example, if you were to use SHA-256 you would need a salt of at least 16 bytes and a minimum of 100,000 iterations. 
-
-As a quick aside, a salt is just random data that you use as additional input into your hash to make it harder to “unhash” your password. Basically it protects your password from dictionary attacks and pre-computed rainbow tables.
-
-Let’s look at a simple example:
-
-```
->>> import binascii
->>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
- password=b'bad_password34',
- salt=b'bad_salt',
- iterations=100000)
->>> binascii.hexlify(dk)
-b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
-```
-
-Here we create a SHA256 hash on a password using a lousy salt but with 100,000 iterations. Of course, SHA is not actually recommended for creating keys from passwords. Instead you should use something like **scrypt**. Another good option would be the 3rd party package, bcrypt. It is designed specifically with password hashing in mind.
-
----
-
-### PyCryptodome
-
-The PyCrypto package is probably the most well known 3rd party cryptography package for Python. Sadly, PyCrypto’s development stopped in 2012. Others have continued to release the latest version of PyCrypto so you can still get it for Python 3.5 if you don’t mind using a 3rd party’s binary. For example, I found some binary Python 3.5 wheels for PyCrypto on Github (https://github.com/sfbahr/PyCrypto-Wheels).
-
-Fortunately there is a fork of the project called PyCryptodome that is a drop-in replacement for PyCrypto. To install it for Linux, you can use the following pip command:
-
-
-```
-pip install pycryptodome
-```
-
-Windows is a bit different:
-
-```
-pip install pycryptodomex
-```
-
-If you run into issues, it’s probably because you don’t have the right dependencies installed or you need a compiler for Windows. 
Check out the PyCryptodome [website][1] for additional installation help or to contact support.
-
-Also worth noting is that PyCryptodome has many enhancements over the last version of PyCrypto. It is well worth your time to visit their home page and see what new features exist.
-
-### Encrypting a String
-
-Once you’re done checking their website out, we can move on to some examples. For our first trick, we’ll use DES to encrypt a string:
-
-```
->>> from Crypto.Cipher import DES
->>> key = 'abcdefgh'
->>> def pad(text):
- while len(text) % 8 != 0:
- text += ' '
- return text
->>> des = DES.new(key, DES.MODE_ECB)
->>> text = 'Python rocks!'
->>> padded_text = pad(text)
->>> encrypted_text = des.encrypt(text)
-Traceback (most recent call last):
- File "", line 1, in
- encrypted_text = des.encrypt(text)
- File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
- return self._cipher.encrypt(plaintext)
-ValueError: Input strings must be a multiple of 8 in length
->>> encrypted_text = des.encrypt(padded_text)
->>> encrypted_text
-b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ'
-```
-
-This code is a little confusing, so let’s spend some time breaking it down. First off, it should be noted that the key size for DES encryption is 8 bytes, which is why we set our key variable to an eight-character string. The string that we will be encrypting must be a multiple of 8 in length, so we create a function called **pad** that can pad any string out with spaces until it’s a multiple of 8. Next we create an instance of DES and some text that we want to encrypt. We also create a padded version of the text. Just for fun, we attempt to encrypt the original unpadded variant of the string which raises a **ValueError**. Here we learn that we need that padded string after all, so we pass that one in instead. As you can see, we now have an encrypted string! 
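As the decryption step that follows will show, padding with spaces means the recovered plaintext keeps its trailing blanks (and a message that legitimately ends in spaces could not be told apart from its padding). A common alternative, not used in this article, is PKCS#7-style padding, where every pad byte stores the pad length so it can be removed unambiguously. A quick plain-Python sketch (the `pkcs7_pad`/`pkcs7_unpad` names are ours):

```python
def pkcs7_pad(data: bytes, block_size: int = 8) -> bytes:
    """Append N copies of the byte N, where N is the number of bytes needed."""
    n = block_size - len(data) % block_size  # always 1..block_size, never zero
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    """Strip the padding by reading the count from the final byte."""
    n = padded[-1]
    if not 1 <= n <= len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS#7 padding")
    return padded[:-n]

padded = pkcs7_pad(b'Python rocks!')  # 13 bytes -> 16 bytes (two DES blocks)
print(len(padded))                    # 16
print(pkcs7_unpad(padded))            # b'Python rocks!'
```

Newer PyCryptodome releases also ship pad/unpad helpers of their own, so in real code you would not need to roll this yourself.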
- -Of course the example wouldn’t be complete if we didn’t know how to decrypt our string: - -``` ->>> des.decrypt(encrypted_text) -b'Python rocks! ' -``` - -Fortunately, that is very easy to accomplish as all we need to do is call the **decrypt** method on our des object to get our decrypted byte string back. Our next task is to learn how to encrypt and decrypt a file with PyCrypto using RSA. But first we need to create some RSA keys! - -### Create an RSA Key - -If you want to encrypt your data with RSA, then you’ll need to either have access to a public / private RSA key pair or you will need to generate your own. For this example, we will just generate our own. Since it’s fairly easy to do, we will do it in Python’s interpreter: - -``` ->>> from Crypto.PublicKey import RSA ->>> code = 'nooneknows' ->>> key = RSA.generate(2048) ->>> encrypted_key = key.exportKey(passphrase=code, pkcs=8, - protection="scryptAndAES128-CBC") ->>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f: - f.write(encrypted_key) ->>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f: - f.write(key.publickey().exportKey()) -``` - -First we import **RSA** from **Crypto.PublicKey**. Then we create a silly passcode. Next we generate an RSA key of 2048 bits. Now we get to the good stuff. To generate a private key, we need to call our RSA key instance’s **exportKey** method and give it our passcode, which PKCS standard to use and which encryption scheme to use to protect our private key. Then we write the file out to disk. - -Next we create our public key via our RSA key instance’s **publickey** method. We used a shortcut in this piece of code by just chaining the call to exportKey with the publickey method call to write it to disk as well. - -### Encrypting a File - -Now that we have both a private and a public key, we can encrypt some data and write it to a file. 
Here’s a pretty standard example:
-
-```
-from Crypto.PublicKey import RSA
-from Crypto.Random import get_random_bytes
-from Crypto.Cipher import AES, PKCS1_OAEP
-
-with open('/path/to/encrypted_data.bin', 'wb') as out_file:
-    recipient_key = RSA.import_key(
-        open('/path_to_public_key/my_rsa_public.pem').read())
-    session_key = get_random_bytes(16)
-
-    cipher_rsa = PKCS1_OAEP.new(recipient_key)
-    out_file.write(cipher_rsa.encrypt(session_key))
-
-    cipher_aes = AES.new(session_key, AES.MODE_EAX)
-    data = b'blah blah blah Python blah blah'
-    ciphertext, tag = cipher_aes.encrypt_and_digest(data)
-
-    out_file.write(cipher_aes.nonce)
-    out_file.write(tag)
-    out_file.write(ciphertext)
-```
-
-The first three lines cover our imports from PyCryptodome. Next we open up a file to write to. Then we import our public key into a variable and create a 16-byte session key. For this example we are going to be using a hybrid encryption method, so we use PKCS#1 OAEP, which is Optimal asymmetric encryption padding. This allows us to write data of arbitrary length to the file. Then we create our AES cipher, create some data and encrypt the data. This will return the encrypted text and the MAC. Finally we write out the nonce, MAC (or tag) and the encrypted text.
-
-As an aside, a nonce is an arbitrary number that is only used for cryptographic communication. They are usually random or pseudorandom numbers. For AES, it must be at least 16 bytes in length. Feel free to try opening the encrypted file in your favorite text editor. You should just see gibberish.
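The file layout used above is simply three fixed-size fields followed by the ciphertext. That framing can be illustrated with plain bytes and no cryptography at all; the field sizes below are assumptions for illustration (a 2048-bit RSA key produces a 256-byte encrypted session key):

```python
import io

# Fake field contents standing in for the real cryptographic values.
enc_session_key = b'K' * 256   # RSA-2048 output: 256 bytes (assumed)
nonce = b'N' * 16
tag = b'T' * 16
ciphertext = b'C' * 42         # arbitrary length

# Write the fields in order, as the encryption example does...
blob = io.BytesIO(enc_session_key + nonce + tag + ciphertext)

# ...and read them back by size, as the decryption code will do.
fields = [blob.read(n) for n in (256, 16, 16, -1)]
print([len(f) for f in fields])   # [256, 16, 16, 42]
```

This is exactly the parsing pattern the decryption code relies on: every field except the last has a known length, so the ciphertext can simply be "the rest of the file".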
-
-Now let’s learn how to decrypt our data:
-
-```
-from Crypto.PublicKey import RSA
-from Crypto.Cipher import AES, PKCS1_OAEP
-
-code = 'nooneknows'
-
-with open('/path/to/encrypted_data.bin', 'rb') as fobj:
-    private_key = RSA.import_key(
-        open('/path_to_private_key/my_private_rsa_key.bin').read(),
-        passphrase=code)
-
-    enc_session_key, nonce, tag, ciphertext = [ fobj.read(x)
-                                                for x in (private_key.size_in_bytes(),
-                                                16, 16, -1) ]
-
-    cipher_rsa = PKCS1_OAEP.new(private_key)
-    session_key = cipher_rsa.decrypt(enc_session_key)
-
-    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
-    data = cipher_aes.decrypt_and_verify(ciphertext, tag)
-
-print(data)
-```
-
-If you followed the previous example, this code should be pretty easy to parse. In this case, we are opening our encrypted file for reading in binary mode. Then we import our private key. Note that when you import the private key, you must give it your passcode. Otherwise you will get an error. Next we read in our file. You will note that we read in the encrypted session key first (its length matches the size of our private key in bytes), then the next 16 bytes for the nonce, which is followed by the next 16 bytes which is the tag and finally the rest of the file, which is our data.
-
-Then we need to decrypt our session key, recreate our AES key and decrypt the data.
-
-You can use PyCryptodome to do much, much more. However we need to move on and see what else we can use for our cryptographic needs in Python.
-
----
-
-### The cryptography package
-
-The **cryptography** package aims to be “cryptography for humans” much like the **requests** library is “HTTP for Humans”. The idea is that you will be able to create simple cryptographic recipes that are safe and easy-to-use. If you need to, you can drop down to low-level cryptographic primitives, which require you to know what you’re doing or you might end up creating something that’s not very secure.
-
-If you are using Python 3.5, you can install it with pip, like so:
-
-```
-pip install cryptography
-```
-
-You will see that cryptography installs a few dependencies along with itself. Assuming that they all completed successfully, we can try encrypting some text. Let’s give the **Fernet** symmetric encryption algorithm a try. The Fernet algorithm guarantees that any message you encrypt with it cannot be manipulated or read without the key you define. Fernet also supports key rotation via **MultiFernet**. Let’s take a look at a simple example:
-
-```
->>> from cryptography.fernet import Fernet
->>> cipher_key = Fernet.generate_key()
->>> cipher_key
-b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
->>> cipher = Fernet(cipher_key)
->>> text = b'My super secret message'
->>> encrypted_text = cipher.encrypt(text)
->>> encrypted_text
-(b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9f'
- b'6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8=')
->>> decrypted_text = cipher.decrypt(encrypted_text)
->>> decrypted_text
-b'My super secret message'
-```
-
-First off we need to import Fernet. Next we generate a key. We print out the key to see what it looks like. As you can see, it’s a random byte string. If you want, you can try running the **generate_key** method a few times. The result will always be different. Next we create our Fernet cipher instance using our key.
-
-Now we have a cipher we can use to encrypt and decrypt our message. The next step is to create a message worth encrypting and then encrypt it using the **encrypt** method. I went ahead and printed out the encrypted text so you can see that you can no longer read the text. To **decrypt** our super secret message, we just call decrypt on our cipher and pass it the encrypted text. The result is we get a plain text byte string of our message.
-
----
-
-### Wrapping Up
-
-This chapter barely scratched the surface of what you can do with PyCryptodome and the cryptography packages.
However it does give you a decent overview of what can be done with Python in regards to encrypting and decrypting strings and files. Be sure to read the documentation and start experimenting to see what else you can do! - ---- - -### Related Reading - -PyCrypto Wheels for Python 3 on [github][2] - -PyCryptodome [documentation][3] - -Python’s Cryptographic [Services][4] - -The cryptography package’s [website][5] - ------------------------------------------------------------------------------- - -via: http://www.blog.pythonlibrary.org/2016/05/18/python-3-an-intro-to-encryption/ - -作者:[Mike][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.blog.pythonlibrary.org/author/mld/ -[1]: http://pycryptodome.readthedocs.io/en/latest/ -[2]: https://github.com/sfbahr/PyCrypto-Wheels -[3]: http://pycryptodome.readthedocs.io/en/latest/src/introduction.html -[4]: https://docs.python.org/3/library/crypto.html -[5]: https://cryptography.io/en/latest/ diff --git a/translated/tech/20160518 Python 3 - An Intro to Encryption.md b/translated/tech/20160518 Python 3 - An Intro to Encryption.md new file mode 100644 index 0000000000..5c57264aa7 --- /dev/null +++ b/translated/tech/20160518 Python 3 - An Intro to Encryption.md @@ -0,0 +1,280 @@ +Python 3: 加密简介 +=================================== + +Python 3 的标准库中没什么用来解决加密的,不过却有用于处理哈希的库。在这里我们会对其进行一个简单的介绍,但重点会放在两个第三方的软件包: PyCrypto 和 cryptography 上。我们将学习如何使用这两个库,来加密和解密字符串。 + +--- + +### 哈希 + +如果需要用到安全哈希算法或是消息摘要算法,那么你可以使用标准库中的 **hashlib** 模块。这个模块包含了标准的安全哈希算法,包括 SHA1,SHA224,SHA256,SHA384,SHA512 以及 RSA 的 MD5 算法。Python 的 **zlib** 模块也提供 adler32 以及 crc32 哈希函数。 + +一个哈希最常见的用法是,存储密码的哈希值而非密码本身。当然了,使用的哈希函数需要稳健一点,否则容易被破解。另一个常见的用法是,计算一个文件的哈希值,然后将这个文件和它的哈希值分别发送。接受到文件的人可以计算文件的哈希值,检验是否与接受到的哈希值相符。如果两者相符,就说明文件在传送的过程中未经篡改。 + +让我们试着创建一个 md5 哈希: + +``` +>>> import hashlib +>>> md5 = hashlib.md5() +>>> md5.update('Python 
rocks!')
+Traceback (most recent call last):
+  File "", line 1, in
+    md5.update('Python rocks!')
+TypeError: Unicode-objects must be encoded before hashing
+>>> md5.update(b'Python rocks!')
+>>> md5.digest()
+b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
+```
+
+让我们花点时间一行一行来讲解。首先,我们导入 **hashlib**,然后创建一个 md5 哈希对象的实例。接着,我们向这个实例中添加一个字符串后,却得到了报错信息。原来,计算 md5 哈希时,需要使用字节形式的字符串而非普通字符串。正确添加字符串后,我们调用它的 **digest** 函数来得到哈希值。如果你想要十六进制的哈希值,也可以用以下方法:
+
+```
+>>> md5.hexdigest()
+'1482ec1b2364f64e7d162a2b5b16f477'
+```
+
+实际上,有一种精简的方法来创建哈希,下面我们看一下用这种方法创建一个 sha1 哈希:
+
+```
+>>> sha = hashlib.sha1(b'Hello Python').hexdigest()
+>>> sha
+'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
+```
+
+可以看到,我们可以同时创建一个哈希实例并且调用其 digest 函数。然后,我们打印出这个哈希值看一下。这里我使用 sha1 哈希函数作为例子,但它不是特别安全,读者可以随意尝试其他的哈希函数。
+
+
+---
+
+### 密钥导出
+
+Python 的标准库对密钥导出支持较弱。实际上,hashlib 函数库提供的唯一方法就是 **pbkdf2_hmac** 函数。它是 PKCS#5 标准中基于口令的密钥导出函数,并使用 HMAC 作为伪随机函数。因为它支持加盐和迭代操作,你可以使用类似的方法来哈希你的密码。例如,如果你打算使用 SHA-256 加密方法,你将需要至少 16 个字节的盐,以及最少 100000 次的迭代操作。
+
+简单来说,盐就是随机的数据,被用来加入到哈希的过程中,以加大破解的难度。这基本可以保护你的密码免受字典和彩虹表的攻击。
+
+让我们看一个简单的例子:
+
+```
+>>> import binascii
+>>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
+                             password=b'bad_password34',
+                             salt=b'bad_salt',
+                             iterations=100000)
+>>> binascii.hexlify(dk)
+b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
+```
+
+这里,我们用 SHA256 对一个密码进行哈希,使用了一个糟糕的盐,但经过了 100000 次迭代操作。当然,SHA 实际上并不被推荐用来创建密码的密钥。你应该使用类似 **scrypt** 的算法来替代。另一个不错的选择是使用一个叫 **bcrypt** 的第三方库。它是被专门设计出来哈希密码的。
+
+---
+
+### PyCryptodome
+
+PyCrypto 可能是 Python 中密码学方面最有名的第三方软件包。可惜的是,它的开发工作于 2012
年就已停止。其他人还在继续发布最新版本的 PyCrypto,如果你不介意使用第三方的二进制包,仍可以取得 Python 3.5 的相应版本。比如,我在 Github (https://github.com/sfbahr/PyCrypto-Wheels) 上找到了对应 Python 3.5 的 PyCrypto 二进制包。
+
+幸运的是,有一个该项目的分支 PyCryptodome 取代了 PyCrypto。为了在 Linux 上安装它,你可以使用以下 pip 命令:
+
+```
+pip install pycryptodome
+```
+
+在 Windows 系统上安装则稍有不同:
+
+```
+pip install pycryptodomex
+```
+
+如果你遇到了问题,可能是因为你没有安装正确的依赖包(译者注:如 python-devel),或者你的 Windows 系统需要一个编译器。如果你需要安装上的帮助或技术支持,可以访问 PyCryptodome 的[网站][1]。
+
+还值得注意的是,PyCryptodome 在 PyCrypto 最后版本的基础上有很多改进。非常值得去访问它们的主页,看看有什么新的特性。
+
+### 加密字符串
+
+访问了他们的主页之后,我们可以看一些例子。在第一个例子中,我们将使用 DES 算法来加密一个字符串:
+
+```
+>>> from Crypto.Cipher import DES
+>>> key = 'abcdefgh'
+>>> def pad(text):
+        while len(text) % 8 != 0:
+            text += ' '
+        return text
+>>> des = DES.new(key, DES.MODE_ECB)
+>>> text = 'Python rocks!'
+>>> padded_text = pad(text)
+>>> encrypted_text = des.encrypt(text)
+Traceback (most recent call last):
+  File "", line 1, in
+    encrypted_text = des.encrypt(text)
+  File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
+    return self._cipher.encrypt(plaintext)
+ValueError: Input strings must be a multiple of 8 in length
+>>> encrypted_text = des.encrypt(padded_text)
+>>> encrypted_text
+b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ'
+```
+
+这段代码稍有些复杂,让我们一点点来看。首先需要注意的是,DES 加密使用的密钥长度为 8 个字节,这也是我们将密钥变量设置为 8 个字符的原因。而我们需要加密的字符串的长度必须是 8 的倍数,所以我们创建了一个名为 **pad** 的函数,来给一个字符串末尾添加空格,直到它的长度是 8 的倍数。然后,我们创建了一个 DES 的实例,以及我们需要加密的文本。我们还创建了一个经过填充处理的文本。我们尝试着对未经填充处理的文本进行加密,啊欧,报错了!我们需要对经过填充处理的文本进行加密,然后得到加密的字符串。
+(译者注:encrypt 函数的参数应为 byte 类型字符串,代码为:`encrypted_text = des.encrypt(padded_text.encode('utf-8'))`)
+
+知道了如何加密,还要知道如何解密:
+
+```
+>>> des.decrypt(encrypted_text)
+b'Python rocks! 
'
+```
+
+幸运的是,解密非常容易,我们只需要调用 des 对象的 **decrypt** 方法就可以得到我们原来的 byte 类型字符串了。下一个任务是学习如何用 RSA 算法加密和解密一个文件。首先,我们需要创建一些 RSA 密钥。
+
+### 创建 RSA 密钥
+
+如果你希望使用 RSA 算法加密数据,那么你需要拥有访问 RSA 公钥和私钥的权限,否则你需要生成一组自己的密钥对。在这个例子中,我们将生成自己的密钥对。创建 RSA 密钥非常容易,所以我们将在 Python 解释器中完成。
+
+```
+>>> from Crypto.PublicKey import RSA
+>>> code = 'nooneknows'
+>>> key = RSA.generate(2048)
+>>> encrypted_key = key.exportKey(passphrase=code, pkcs=8,
+                                  protection="scryptAndAES128-CBC")
+>>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f:
+        f.write(encrypted_key)
+>>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f:
+        f.write(key.publickey().exportKey())
+```
+
+首先我们从 **Crypto.PublicKey** 包中导入 **RSA**,然后创建一个傻傻的密码。接着我们生成 2048 位的 RSA 密钥。现在我们到了关键的部分。为了生成私钥,我们需要调用 RSA 密钥实例的 **exportKey** 方法,然后传入密码、使用的 PKCS 标准,以及加密方案这三个参数。之后,我们把私钥写入磁盘的文件中。
+
+接下来,我们通过 RSA 密钥实例的 **publickey** 方法创建我们的公钥。我们使用方法链调用 publickey 和 exportKey 方法生成公钥,同样将它写入磁盘上的文件。
+
+### 加密文件
+
+有了私钥和公钥之后,我们就可以加密一些数据,并写入文件了。这儿有个比较标准的例子:
+
+```
+from Crypto.PublicKey import RSA
+from Crypto.Random import get_random_bytes
+from Crypto.Cipher import AES, PKCS1_OAEP
+
+with open('/path/to/encrypted_data.bin', 'wb') as out_file:
+    recipient_key = RSA.import_key(
+        open('/path_to_public_key/my_rsa_public.pem').read())
+    session_key = get_random_bytes(16)
+
+    cipher_rsa = PKCS1_OAEP.new(recipient_key)
+    out_file.write(cipher_rsa.encrypt(session_key))
+
+    cipher_aes = AES.new(session_key, AES.MODE_EAX)
+    data = b'blah blah blah Python blah blah'
+    ciphertext, tag = cipher_aes.encrypt_and_digest(data)
+
+    out_file.write(cipher_aes.nonce)
+    out_file.write(tag)
+    out_file.write(ciphertext)
+```
+
+代码的前三行导入 PyCryptodome 包。然后我们打开一个文件用于写入数据。接着我们导入公钥赋给一个变量,创建一个 16 字节的会话密钥。在这个例子中,我们将使用混合加密方法,即 PKCS#1 OAEP,也就是最优非对称加密填充。这允许我们向文件中写入任意长度的数据。接着我们创建 AES 加密,要加密的数据,然后加密数据。我们将得到加密的文本和消息认证码。最后,我们将随机数、消息认证码和加密的文本写入文件。
+
+顺便提一下,这里的随机数(nonce)是只在密码通信中使用一次的任意数字,通常是随机或伪随机数。对于 AES 加密,这个随机数的长度最少是 16 个字节。随意用一个你喜欢的编辑器试着打开这个被加密的文件,你应该只能看到乱码。
+
+现在让我们学习如何解密我们的数据。
+
+```
+from Crypto.PublicKey import RSA
+from Crypto.Cipher import AES, PKCS1_OAEP
+
+code = 'nooneknows'
+
+with open('/path/to/encrypted_data.bin', 'rb') as fobj:
+    private_key = RSA.import_key(
+        open('/path_to_private_key/my_private_rsa_key.bin').read(),
+        passphrase=code)
+
+    enc_session_key, nonce, tag, ciphertext = [ fobj.read(x)
+                                                for x in (private_key.size_in_bytes(),
+                                                16, 16, -1) ]
+
+    cipher_rsa = PKCS1_OAEP.new(private_key)
+    session_key = cipher_rsa.decrypt(enc_session_key)
+
+    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
+    data = cipher_aes.decrypt_and_verify(ciphertext, tag)
+
+print(data)
+```
+
+如果你认真看了上一个例子,这段代码应该很容易解析。在这里,我们先读取二进制的加密文件,然后导入私钥。注意,当你导入私钥时,需要提供一个密码,否则会出现错误。然后,我们从文件中读取数据,首先是加密的会话密钥,然后是 16 字节的随机数和 16 字节的消息认证码,最后是剩下的加密的数据。
+
+接下来我们需要解密出会话密钥,重新创建 AES 密钥,然后解密出数据。
+
+你还可以用 PyCryptodome 库做更多的事。不过我们要接着讨论在 Python 中还可以用什么来满足我们加密解密的需求。
+
+---
+
+### cryptography 包
+
+**cryptography** 的目标是成为人类易于使用的密码学包,就像 **requests** 是人类易于使用的 HTTP 库一样。这个想法使你能够创建简单、安全、易于使用的加密方案。如果有需要的话,你也可以使用一些底层的密码学基元,但这也需要你知道更多的细节,否则创建的东西将是不安全的。
+
+如果你使用的 Python 版本是 3.5,你可以使用 pip 安装,如下:
+
+```
+pip install cryptography
+```
+
+你会看到 cryptography 包还安装了一些依赖包(译者注:如 libopenssl-devel)。如果安装都顺利,我们就可以试着加密一些文本了。让我们使用 **Fernet** 对称加密算法,它保证了你加密的任何信息在没有密钥的情况下不能被篡改或读取。Fernet 还通过 **MultiFernet** 支持密钥轮换。下面让我们看一个简单的例子:
+
+```
+>>> from cryptography.fernet import Fernet
+>>> cipher_key = Fernet.generate_key()
+>>> cipher_key
+b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
+>>> cipher = Fernet(cipher_key)
+>>> text = b'My super secret message'
+>>> encrypted_text = cipher.encrypt(text)
+>>> encrypted_text
+(b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9f'
+ b'6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8=')
+>>> decrypted_text = cipher.decrypt(encrypted_text)
+>>> decrypted_text
+b'My super secret message'
+```
+
+首先我们需要导入 Fernet,然后生成一个密钥。我们输出密钥看看它是什么样儿。如你所见,它是一个随机的字节串。如果你愿意的话,可以试着多运行 **generate_key** 方法几次,生成的密钥会是不同的。然后我们使用这个密钥生成 Fernet 密码实例。
+
+现在我们有了用来加密和解密消息的密码。下一步是创建一个需要加密的消息,然后使用 **encrypt** 方法对它加密。我打印出加密的文本,然后你可以看到你不再能够读懂它。为了**解密**出我们的秘密消息,我们只需调用 decrypt 方法,并传入加密的文本作为参数。结果就是我们得到了消息字节串形式的纯文本。 + +--- + +### 小结 + +这一章仅仅浅显地介绍了 PyCryptodome 和 cryptography 这两个包的使用。不过这也确实给了你一个关于如何加密解密字符串和文件的简述。请务必阅读文档,做做实验,看看还能做些什么! + +--- + +### 相关阅读 + +[Github][2] 上 Python 3 的 PyCrypto Wheels + +PyCryptodome 的 [文档][3] + +Python’s 加密 [服务][4] + +Cryptography 包的 [官网][5] + +------------------------------------------------------------------------------ + +via: http://www.blog.pythonlibrary.org/2016/05/18/python-3-an-intro-to-encryption/ + +作者:[Mike][a] +译者:[Cathon](https://github.com/Cathon) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.blog.pythonlibrary.org/author/mld/ +[1]: http://pycryptodome.readthedocs.io/en/latest/ +[2]: https://github.com/sfbahr/PyCrypto-Wheels +[3]: http://pycryptodome.readthedocs.io/en/latest/src/introduction.html +[4]: https://docs.python.org/3/library/crypto.html +[5]: https://cryptography.io/en/latest/ From 5b80208b154d3c0aeb72e224354f57b921710b43 Mon Sep 17 00:00:00 2001 From: Ping Date: Thu, 7 Jul 2016 17:29:13 +0800 Subject: [PATCH 081/471] Finished Translating --- ...ervices with Python RabbitMQ and Nameko.md | 281 ----------------- ...ervices with Python RabbitMQ and Nameko.md | 291 ++++++++++++++++++ 2 files changed, 291 insertions(+), 281 deletions(-) delete mode 100644 sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md create mode 100644 translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md diff --git a/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md b/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md deleted file mode 100644 index 6879a0778b..0000000000 --- a/sources/tech/20160304 Microservices with Python RabbitMQ and Nameko.md +++ /dev/null @@ -1,281 +0,0 @@ -Translating by Ping - -Microservices with 
Python RabbitMQ and Nameko
-==============================================
-
->"Micro-services is the new black" - Splitting the project into independently scalable services is currently the best option to ensure the evolution of the code. In Python there is a framework called "Nameko" which makes it very easy and powerful.
-
-### Micro services
-
->The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. - M. Fowler
-
-I recommend reading [Fowler's posts][1] to understand the theory behind it.
-
-#### OK, so what does it mean?
-
-In brief, a microservice architecture exists when your system is divided into small, single-context-bound blocks of responsibility. Those blocks don't know each other; they only share a common point of communication, generally a message queue, and each knows the communication protocol and interfaces.
-
-#### Give me a real-life example
-
->The code is available on github: take a look at service and api folders for more info.
-
-Suppose you have a REST API with an endpoint that receives some data, and you need to perform some kind of computation with that data. Instead of blocking the caller, you can do it asynchronously: return a status "OK - Your request will be processed" to the caller and do the work in a background task.
-
-You may also want to send an email notification when the computation is finished without blocking the main computing process, so it is better to delegate the "email sending" to another service.
-
-#### Scenario
-
-![](http://brunorocha.org/static/media/microservices/micro_services.png)
-
-### Show me the code!
-
-Let's create the system to understand it in practice.
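Before diving into the setup, the core decoupling idea can be sketched with nothing but the standard library: the API and the compute worker share only a queue, never calling each other directly. This is a toy stand-in for Nameko and RabbitMQ, not the real thing:

```python
import queue

# A toy stand-in for the message broker: both sides know only this queue.
broker = queue.Queue()

def api_endpoint(operation, value, other):
    # Publish a task and return immediately instead of computing inline.
    broker.put({"op": operation, "x": value, "y": other})
    return "OK - Your request will be processed"

def compute_worker():
    # The worker consumes tasks from the queue; it knows nothing
    # about the API except the agreed message format.
    task = broker.get()
    ops = {"sum": lambda x, y: x + y, "mul": lambda x, y: x * y}
    return ops[task["op"]](task["x"], task["y"])

print(api_endpoint("sum", 30, 10))   # OK - Your request will be processed
print(compute_worker())              # 40
```

The rest of the article replaces this in-process queue with RabbitMQ and the hand-rolled functions with Nameko RPC services, which adds durability, distribution, and service discovery on top of the same idea.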
- -#### Environment - -We need an environment with: - -- A running RabbitMQ -- Python VirtualEnv for services -- Python VirtualEnv for API - -#### Rabbit - -The easiest way to have a RabbitMQ in development environment is running its official docker container, considering you have Docker installed run: - -``` -docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management -``` - -Go to the browser and access using credentials guest:guest if you can login to RabbitMQ dashboard it means you have it running locally for development. - -![](http://brunorocha.org/static/media/microservices/RabbitMQManagement.png) - -#### The Service environment - -Now lets create the Micro Services to consume our tasks. We'll have a service for computing and another for mail, follow the steps. - -In a shell create the root project directory - -``` -$ mkdir myproject -$ cd myproject -``` - -Create and activate a virtualenv (you can also use virtualenv-wrapper) - -``` -$ virtualenv service_env -$ source service_env/bin/activate -``` - -Install nameko framework and yagmail - -``` -(service_env)$ pip install nameko -(service_env)$ pip install yagmail -``` - -#### The service code - -Now having that virtualenv prepared (consider you can run service in a server and API in another) lets code the nameko RPC Services. - -We are going to put both services in a single python module, but you can also split in separate modules and also run them in separate servers if needed. - -In a file called `service.py` - -``` -import yagmail -from nameko.rpc import rpc, RpcProxy - - -class Mail(object): - name = "mail" - - @rpc - def send(self, to, subject, contents): - yag = yagmail.SMTP('myname@gmail.com', 'mypassword') - # read the above credentials from a safe place. 
- # Tip: take a look at Dynaconf setting module - yag.send(to=to.encode('utf-8), - subject=subject.encode('utf-8), - contents=[contents.encode('utf-8)]) - - -class Compute(object): - name = "compute" - mail = RpcProxy('mail') - - @rpc - def compute(self, operation, value, other, email): - operations = {'sum': lambda x, y: int(x) + int(y), - 'mul': lambda x, y: int(x) * int(y), - 'div': lambda x, y: int(x) / int(y), - 'sub': lambda x, y: int(x) - int(y)} - try: - result = operations[operation](value, other) - except Exception as e: - self.mail.send.async(email, "An error occurred", str(e)) - raise - else: - self.mail.send.async( - email, - "Your operation is complete!", - "The result is: %s" % result - ) - return result -``` - -Now with the above services definition we need to run it as a Nameko RPC service. - ->NOTE: We are going to run it in a console and leave it running, but in production it is recommended to put the service to run using supervisord or an alternative. - -Run the service and let it running in a shell - -``` -(service_env)$ nameko run service --broker amqp://guest:guest@localhost -starting services: mail, compute -Connected to amqp://guest:**@127.0.0.1:5672// -Connected to amqp://guest:**@127.0.0.1:5672// -``` - -#### Testing it - -Go to another shell (with the same virtenv) and test it using nameko shell - -``` -(service_env)$ nameko shell --broker amqp://guest:guest@localhost -Nameko Python 2.7.9 (default, Apr 2 2015, 15:33:21) -[GCC 4.9.2] shell on linux2 -Broker: amqp://guest:guest@localhost ->>> -``` - -You are now in the RPC client testing shell exposing the n.rpc object, play with it - -``` ->>> n.rpc.mail.send("name@email.com", "testing", "Just testing") -``` - -The above should sent an email and we can also call compute service to test it, note that it also spawns an async mail sending with result. 
- -``` ->>> n.rpc.compute.compute('sum', 30, 10, "name@email.com") -40 ->>> n.rpc.compute.compute('sub', 30, 10, "name@email.com") -20 ->>> n.rpc.compute.compute('mul', 30, 10, "name@email.com") -300 ->>> n.rpc.compute.compute('div', 30, 10, "name@email.com") -3 -``` - -### Calling the micro-service through the API - -In a different shell (or even a different server) prepare the API environment - -Create and activate a virtualenv (you can also use virtualenv-wrapper) - -``` -$ virtualenv api_env -$ source api_env/bin/activate -``` - -Install Nameko, Flask and Flasgger - -``` -(api_env)$ pip install nameko -(api_env)$ pip install flask -(api_env)$ pip install flasgger -``` - ->NOTE: In api you dont need the yagmail because it is service responsability - -Lets say you have the following code in a file `api.py` - -``` -from flask import Flask, request -from flasgger import Swagger -from nameko.standalone.rpc import ClusterRpcProxy - -app = Flask(__name__) -Swagger(app) -CONFIG = {'AMQP_URI': "amqp://guest:guest@localhost"} - - -@app.route('/compute', methods=['POST']) -def compute(): - """ - Micro Service Based Compute and Mail API - This API is made with Flask, Flasgger and Nameko - --- - parameters: - - name: body - in: body - required: true - schema: - id: data - properties: - operation: - type: string - enum: - - sum - - mul - - sub - - div - email: - type: string - value: - type: integer - other: - type: integer - responses: - 200: - description: Please wait the calculation, you'll receive an email with results - """ - operation = request.json.get('operation') - value = request.json.get('value') - other = request.json.get('other') - email = request.json.get('email') - msg = "Please wait the calculation, you'll receive an email with results" - subject = "API Notification" - with ClusterRpcProxy(CONFIG) as rpc: - # asynchronously spawning and email notification - rpc.mail.send.async(email, subject, msg) - # asynchronously spawning the compute task - result = 
rpc.compute.compute.async(operation, value, other, email) - return msg, 200 - -app.run(debug=True) -``` - -Put the above API to run in a different shell or server - -``` -(api_env) $ python api.py - * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) -``` - -and then access the url you will see the Flasgger UI and you can interact with the api and start producing tasks on queue to the service to consume. - -![](http://brunorocha.org/static/media/microservices/Flasgger_API_documentation.png) - ->NOTE: You can see the shell where service is running for logging, prints and error messages. You can also access the RabbitMQ dashboard to see if there is some message in process there. - -There is a lot of more advanced things you can do with Nameko framework you can find more information on - -Let's Micro Serve! - - --------------------------------------------------------------------------------- - -via: http://brunorocha.org/python/microservices-with-python-rabbitmq-and-nameko.html - -作者: [Bruno Rocha][a] -译者: [译者ID](https://github.com/译者ID) -校对: [校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://facebook.com/rochacbruno -[1]:http://martinfowler.com/articles/microservices.html diff --git a/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md b/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md new file mode 100644 index 0000000000..ea449e9f6c --- /dev/null +++ b/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md @@ -0,0 +1,291 @@ +基于 Python、 RabbitMQ 和 Nameko 的微服务 +============================================== + +>"微服务是一股新浪潮" - 现如今,将项目拆分成多个独立的、可扩展的服务是保障代码演变的最好选择。在 Python 的世界里,有个叫做 “Nameko” 的框架,它将微服务的实现变得简单并且强大。 + + +### 微服务 + +> 在最近的几年里,“微服务架构”如雨后春笋般涌现。它用于描述一种特定的软件应用设计方式,这种方式使得应用可以由多个独立部署的服务以服务套件的形式组成。 - M. Fowler + +推荐各位读一下 [Fowler's posts][1] 以理解它背后的原理。 + + +#### 好吧,那它究竟意味着什么呢? 
+
+简单来说,**微服务架构**可以将你的系统拆分成多个负责不同任务的小块儿,它们之间互不依赖,彼此只通过一个公共的通讯点进行交互。这个通讯点通常是已经将通讯协议和接口定义好的消息队列。
+
+
+#### 这里给大家提供一个真实案例
+
+>案例的代码可以通过github: 访问,查看 service 和 api 文件夹可以获取更多信息。
+
+想象一下,你有一个 REST API ,这个 API 有一个端点(译者注:REST 风格的 API 可以有多个端点用于处理对同一资源的不同类型的请求)用来接收数据,并且你需要将接收到的数据进行一些运算。那么相比阻塞接口调用者的请求来说,异步实现此接口是一个更好的选择。你可以先给用户返回一个 "OK - 你的请求稍后会处理" 的状态,然后在后台任务中完成运算。
+
+同样,如果你想要在不阻塞主进程的前提下,在计算完成后发送一封提醒邮件,那么将"邮件发送"委托给其他服务去做会更好一些。
+
+
+#### 场景描述
+
+![](http://brunorocha.org/static/media/microservices/micro_services.png)
+
+
+### 用代码说话:
+
+让我们将系统创建起来,在实践中理解它:
+
+
+#### 环境
+
+我们需要的环境:
+
+- 运行良好的 RabbitMQ(译者注:[RabbitMQ][2]是一个流行的消息队列实现)
+- 由 VirtualEnv 提供的 Services 虚拟环境
+- 由 VirtualEnv 提供的 API 虚拟环境
+
+
+#### Rabbit
+
+在开发环境中使用 RabbitMQ 最简单的方式就是运行其官方的 docker 容器。在你已经拥有 Docker 的情况下,运行:
+
+```
+docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management
+```
+
+在浏览器中访问 ,如果能够使用 guest:guest 验证信息登录 RabbitMQ 的控制面板,说明它已经在你的开发环境中运行起来了。
+
+![](http://brunorocha.org/static/media/microservices/RabbitMQManagement.png)
+
+
+#### 服务环境
+
+现在让我们创建微服务来消费我们的任务。其中一个服务用来执行计算任务,另一个用来发送邮件。按以下步骤执行:
+
+在 Shell 中创建项目的根目录
+
+```
+$ mkdir myproject
+$ cd myproject
+```
+
+用 virtualenv 工具创建并且激活一个虚拟环境(你也可以使用 virtualenv-wrapper)
+
+```
+$ virtualenv service_env
+$ source service_env/bin/activate
+```
+
+安装 nameko 框架和 yagmail
+
+```
+(service_env)$ pip install nameko
+(service_env)$ pip install yagmail
+```
+
+
+#### 服务的代码
+
+现在我们已经准备好了 virtualenv 所提供的虚拟环境(可以想象成我们的服务是运行在一个独立服务器上的,而我们的 API 运行在另一个服务器上),接下来让我们编码,实现 nameko 的 RPC 服务。
+
+我们会将这两个服务放在同一个 python 模块中,当然如果你乐意,也可以把它们放在单独的模块里并且当成不同的服务运行:
+
+在名为 `service.py` 的文件中
+
+```python
+import yagmail
+from nameko.rpc import rpc, RpcProxy
+
+
+class Mail(object):
+    name = "mail"
+
+    @rpc
+    def send(self, to, subject, contents):
+        yag = yagmail.SMTP('myname@gmail.com', 'mypassword')
+        # 以上的验证信息请从安全的地方进行读取
+        # 贴士: 可以去看看 Dynaconf 设置模块
+        yag.send(to=to.encode('utf-8'),
+                 subject=subject.encode('utf-8'),
+                 contents=[contents.encode('utf-8')])
+ + +class Compute(object): + name = "compute" + mail = RpcProxy('mail') + + @rpc + def compute(self, operation, value, other, email): + operations = {'sum': lambda x, y: int(x) + int(y), + 'mul': lambda x, y: int(x) * int(y), + 'div': lambda x, y: int(x) / int(y), + 'sub': lambda x, y: int(x) - int(y)} + try: + result = operations[operation](value, other) + except Exception as e: + self.mail.send.async(email, "An error occurred", str(e)) + raise + else: + self.mail.send.async( + email, + "Your operation is complete!", + "The result is: %s" % result + ) + return result +``` + +现在我们已经用以上代码定义好了两个服务,下面让我们将 Nameko RPC service 运行起来。 + +>注意:我们会在控制台中启动并运行它。但在生产环境中,建议大家使用 supervisord 替代控制台命令。 + +在 Shell 中启动并运行服务 + +``` +(service_env)$ nameko run service --broker amqp://guest:guest@localhost +starting services: mail, compute +Connected to amqp://guest:**@127.0.0.1:5672// +Connected to amqp://guest:**@127.0.0.1:5672// +``` + + +#### 测试 + +在另外一个 Shell 中(使用相同的虚拟环境),用 nameko shell 进行测试: + +``` +(service_env)$ nameko shell --broker amqp://guest:guest@localhost +Nameko Python 2.7.9 (default, Apr 2 2015, 15:33:21) +[GCC 4.9.2] shell on linux2 +Broker: amqp://guest:guest@localhost +>>> +``` + +现在你已经处在 RPC 客户端中了,Shell 的测试工作是通过 n.rpc 对象来进行的,它的使用方法如下: + +``` +>>> n.rpc.mail.send("name@email.com", "testing", "Just testing") +``` + +上边的代码会发送一封邮件,我们同样可以调用计算服务对其进行测试。需要注意的是,此测试还会附带进行异步的邮件发送。 + +``` +>>> n.rpc.compute.compute('sum', 30, 10, "name@email.com") +40 +>>> n.rpc.compute.compute('sub', 30, 10, "name@email.com") +20 +>>> n.rpc.compute.compute('mul', 30, 10, "name@email.com") +300 +>>> n.rpc.compute.compute('div', 30, 10, "name@email.com") +3 +``` + + +### 在 API 中调用微服务 + +在另外一个 Shell 中(甚至可以是另外一台服务器上),准备好 API 环境。 + +用 virtualenv 工具创建并且激活一个虚拟环境(你也可以使用virtualenv-wrapper) + +``` +$ virtualenv api_env +$ source api_env/bin/activate +``` + +安装 Nameko, Flask 和 Flasgger + +``` +(api_env)$ pip install nameko +(api_env)$ pip install flask +(api_env)$ pip install flasgger +``` + +>注意: 在 API 
中并不需要 yagmail ,因为在这里,处理邮件是服务的职责 + +创建含有以下内容的 `api.py` 文件: + +```python +from flask import Flask, request +from flasgger import Swagger +from nameko.standalone.rpc import ClusterRpcProxy + +app = Flask(__name__) +Swagger(app) +CONFIG = {'AMQP_URI': "amqp://guest:guest@localhost"} + + +@app.route('/compute', methods=['POST']) +def compute(): + """ + Micro Service Based Compute and Mail API + This API is made with Flask, Flasgger and Nameko + --- + parameters: + - name: body + in: body + required: true + schema: + id: data + properties: + operation: + type: string + enum: + - sum + - mul + - sub + - div + email: + type: string + value: + type: integer + other: + type: integer + responses: + 200: + description: Please wait the calculation, you'll receive an email with results + """ + operation = request.json.get('operation') + value = request.json.get('value') + other = request.json.get('other') + email = request.json.get('email') + msg = "Please wait the calculation, you'll receive an email with results" + subject = "API Notification" + with ClusterRpcProxy(CONFIG) as rpc: + # asynchronously spawning and email notification + rpc.mail.send.async(email, subject, msg) + # asynchronously spawning the compute task + result = rpc.compute.compute.async(operation, value, other, email) + return msg, 200 + +app.run(debug=True) +``` + +在其他的 shell 或者服务器上运行此文件 + +``` +(api_env) $ python api.py + * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) +``` + +然后访问 这个 url,就可以看到 Flasgger 的界面了,利用它可以进行 API 的交互并可以发布任务到队列以供服务进行消费。 + +![](http://brunorocha.org/static/media/microservices/Flasgger_API_documentation.png) + +>注意: 你可以在 shell 中查看到服务的运行日志,打印信息和错误信息。也可以访问 RabbitMQ 控制面板来查看消息在队列中的处理情况。 + +Nameko 框架还为我们提供了很多高级特性,你可以从 获取更多的信息。 + +别光看了,撸起袖子来,实现微服务! 
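顺带一提,前文在 nameko shell 中得到的 40、20、300、3 这几个结果,正是 Compute 服务里那张 operations 查找表的行为。这部分逻辑是纯 Python,不依赖 RabbitMQ,可以单独拿出来验证(仅为示意):

```python
# 摘自上文 service.py 的运算查找表,脱离 nameko 也能单独运行验证
operations = {'sum': lambda x, y: int(x) + int(y),
              'mul': lambda x, y: int(x) * int(y),
              'div': lambda x, y: int(x) / int(y),
              'sub': lambda x, y: int(x) - int(y)}

for op, expected in [('sum', 40), ('sub', 20), ('mul', 300), ('div', 3)]:
    result = operations[op](30, 10)
    print(op, result)
    assert result == expected
```

在动手搭建完整环境之前,先这样单测核心逻辑,有助于把"业务计算"与"消息传递"这两层关注点分开。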
+ + +-------------------------------------------------------------------------------- + +via: http://brunorocha.org/python/microservices-with-python-rabbitmq-and-nameko.html + +作者: [Bruno Rocha][a] +译者: [译者ID](http://www.mr-ping.com) +校对: [校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://facebook.com/rochacbruno +[1]:http://martinfowler.com/articles/microservices.html +[2]:http://rabbitmq.mr-ping.com/description.html From 0895b6abc926c4ff963db4099fbdaca030694c1d Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 8 Jul 2016 09:27:23 +0800 Subject: [PATCH 082/471] Add translator info --- .../20160304 Microservices with Python RabbitMQ and Nameko.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md b/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md index ea449e9f6c..8b88bfceec 100644 --- a/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md +++ b/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md @@ -281,7 +281,7 @@ Nameko 框架还为我们提供了很多高级特性,你可以从 Date: Fri, 8 Jul 2016 10:07:08 +0800 Subject: [PATCH 083/471] [translating] 20160624 Industrial SBC builds on Raspberry Pi Compute Module.md --- ...60624 Industrial SBC builds on Raspberry Pi Compute Module.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md index 17874b2cf3..9bc9a894ca 100644 --- a/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md +++ b/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md @@ -1,3 +1,4 @@ +zpl1025 Industrial SBC builds on Raspberry Pi Compute Module ===================================================== From 
a8b453e13a62571e9e90b8af58f02f8995578a4e Mon Sep 17 00:00:00 2001 From: Xin Wang <2650454635@qq.com> Date: Fri, 8 Jul 2016 12:42:08 +0800 Subject: [PATCH 084/471] Create USE TASK MANAGER EQUIVALENT IN LINUX.md (#4160) * Update 20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md * Create 20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md * Delete 20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md --- ...29 USE TASK MANAGER EQUIVALENT IN LINUX.md | 53 ------------------- ...29 USE TASK MANAGER EQUIVALENT IN LINUX.md | 52 ++++++++++++++++++ 2 files changed, 52 insertions(+), 53 deletions(-) delete mode 100644 sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md create mode 100644 translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md diff --git a/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md deleted file mode 100644 index aa1e999f50..0000000000 --- a/sources/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md +++ /dev/null @@ -1,53 +0,0 @@ -xinglianfly translate -USE TASK MANAGER EQUIVALENT IN LINUX -==================================== - -![](https://itsfoss.com/wp-content/uploads/2016/06/Task-Manager-in-Linux.jpg) - -These are some of the most frequently asked questions by Linux beginners, “**is there a task manager for Linux**“, “how do you open task manager in Linux” ? - -People who are coming from Windows know how useful is the task manager. You press the Ctrl+Alt+Del to get to task manager in Windows. This task manager shows you all the running processes and their memory consumption. You can choose to end a process from this task manager application. - -When you have just begun with Linux, you look for a **task manager equivalent in Linux** as well. An expert Linux user prefers the command line way to find processes and memory consumption etc but you don’t have to go that way, at least not when you are just starting with Linux. 
- -All major Linux distributions have a task manager equivalent. Mostly, **it is called System Monitor** but it actually depends on your Linux distribution and the [desktop environment][1] it uses. - -In this article, we’ll see how to find and use the task manager in Linux with GNOME as the [desktop environment][2]. - -### TASK MANAGER EQUIVALENT IN LINUX WITH GNOME DESKTOP - -While using GNOME, press super key (Windows Key) and look for System Monitor: - -![](https://itsfoss.com/wp-content/uploads/2016/06/system-monitor-gnome-fedora.png) - -When you start the System Monitor, it shows you all the running processes and the memory consumption by them. - -![](https://itsfoss.com/wp-content/uploads/2016/06/fedora-system-monitor.jpeg) - -You can select a process and click on End process to kill it. - -![](https://itsfoss.com/wp-content/uploads/2016/06/kill-process-fedora.png) - -You can also see some statistics about your system in the Resources tab such as CPU consumption per core basis, memory usage, network usage etc. - -![](https://itsfoss.com/wp-content/uploads/2016/06/system-stats-fedora.png) - -This was the graphical way. If you want to go command line way, just run the command ‘top’ in terminal and you can see all the running processes and their memory consumption. You can easily [kill processes in Linux][3] command line. - -This all you need to know about task manager equivalent in Fedora Linux. I hope you find this quick tutorial helpful. If you have questions or suggestions, feel free to ask. 
- - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/task-manager-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 - -作者:[Abhishek Prakash][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://wiki.archlinux.org/index.php/desktop_environment -[2]: https://itsfoss.com/best-linux-desktop-environments/ -[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/ diff --git a/translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md new file mode 100644 index 0000000000..81a2147ff5 --- /dev/null +++ b/translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md @@ -0,0 +1,52 @@ +在linux下使用任务管理器 +==================================== + +![](https://itsfoss.com/wp-content/uploads/2016/06/Task-Manager-in-Linux.jpg) + +有很多Linux初学者经常问起的问题,“**Linux有任务管理器吗**”,“**在Linux上面你是怎样打开任务管理器的呢**”? 
+ +来自Windows的用户都知道任务管理器非常有用。你按下Ctrl+Alt+Del来打开任务管理器。这个任务管理器向你展示了所有的正在运行的进程和它们消耗的内存。你可以从任务管理器程序中选择并杀死一个进程。 + +当你刚使用Linux的时候,你也会寻找一个**在Linux相当于任务管理器**的一个东西。一个Linux使用专家更喜欢使用命令行的方式查找进程和消耗的内存,但是你不必去使用这种方式,至少在你初学Linux的时候。 + +所有主流的Linux发行版都有一个类似于任务管理器的东西。大部分情况下,**它叫System Monitor**。但是它实际上依赖于你的Linux的发行版和它使用的[桌面环境][1]。 + +在这篇文章中,我们将会看到如何在使用GNOME的[桌面环境][2]的Linux上查找并使用任务管理器。 + +###在使用GNOME的桌面环境的linux上使用任务管理器 + +当你使用GNOME的时候,按下super键(Windows 键)来查找任务管理器: + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-monitor-gnome-fedora.png) + +当你启动System Monitor的时候,它会向你展示所有正在运行的进程和被它们消耗的内存。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/fedora-system-monitor.jpeg) + +你可以选择一个进程并且点击“End Process”来杀掉它。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/kill-process-fedora.png) + +你也可以在Resources标签里面看到关于一些系统的数据,例如每个cpu核心的消耗,内存的使用,网络的使用等。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-stats-fedora.png) + +这是图形化的一种方式。如果你想使用命令行,在终端里运行“top”命令然后你就可以看到所有运行的进程和他们消耗的内存。你也可以参考[使用命令行杀死进程][3]这篇文章。 + +这就是所有你需要知道的关于在Fedora Linux上任务管理器的知识。我希望这个教程帮你学到了知识,如果你有什么问题,请尽管问。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/task-manager-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 + +作者:[Abhishek Prakash][a] +译者:[xinglianfly](https://github.com/xinglianfly) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://wiki.archlinux.org/index.php/desktop_environment +[2]: https://itsfoss.com/best-linux-desktop-environments/ +[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/ From 3c3bf07eb6d07cedaa6fc1f6f54a0435583434fe Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 8 Jul 2016 13:12:27 +0800 Subject: [PATCH 085/471] Update 20160104 What is good stock portfolio management software on 
Linux.md Translating by ivo-wang --- ... What is good stock portfolio management software on Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160104 What is good stock portfolio management software on Linux.md b/sources/tech/20160104 What is good stock portfolio management software on Linux.md index 258cf104fc..b24139b9c5 100644 --- a/sources/tech/20160104 What is good stock portfolio management software on Linux.md +++ b/sources/tech/20160104 What is good stock portfolio management software on Linux.md @@ -1,3 +1,4 @@ +Translating by ivo-wang What is good stock portfolio management software on Linux ================================================================================ If you are investing in the stock market, you probably understand the importance of a sound portfolio management plan. The goal of portfolio management is to come up with the best investment plan tailored for you, considering your risk tolerance, time horizon and financial goals. Given its importance, no wonder there are no shortage of commercial portfolio management apps and stock market monitoring software, each touting various sophisticated portfolio performance tracking and reporting capabilities. From 5d68602ae859cf2dd01ccfe83bc1c9c2b122bcda Mon Sep 17 00:00:00 2001 From: chenxinlong <237448382@qq.com> Date: Fri, 8 Jul 2016 23:04:34 +0800 Subject: [PATCH 086/471] [translated] IT runs on the cloud and the cloud runs on Linux.Any questions --- ... the cloud runs on Linux. Any questions.md | 63 ------------------- ... the cloud runs on Linux. Any questions.md | 62 ++++++++++++++++++ 2 files changed, 62 insertions(+), 63 deletions(-) delete mode 100644 sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md create mode 100644 translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md diff --git a/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. 
Any questions.md b/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md deleted file mode 100644 index 94f0a8af2f..0000000000 --- a/sources/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md +++ /dev/null @@ -1,63 +0,0 @@ -chenxinlong translating -IT runs on the cloud, and the cloud runs on Linux. Any questions? -=================================================================== - ->IT is moving to the cloud. And, what powers the cloud? Linux. When even Microsoft's Azure has embraced Linux, you know things have changed. - -![](http://zdnet1.cbsistatic.com/hub/i/r/2016/06/24/7d2b00eb-783d-4202-bda2-ca65d45c460a/resize/770xauto/732db8df725ede1cc38972788de71a0b/linux-owns-cloud.jpg) ->Image: ZDNet - -Like it or lump it, the cloud is taking over IT. We've seen [the rise of the cloud over in-house IT][1] for years now. And, what powers the cloud? Linux. - -A recent survey by the [Uptime Institute][2] of 1,000 IT executives found that 50 percent of senior enterprise IT executives expect the [majority of IT workloads to reside off-premise in cloud][3] or colocation sites in the future. Of those surveyed, 23 percent expect the shift to happen next year, and 70 percent expect that shift to occur within the next four years. - -This comes as no surprise. Much as many of us still love our physical servers and racks, it often doesn't make financial sense to run your own data center. - -It's really very simple. Just compare your [capital expense (CAPEX) of running your own hardware versus the operational expenses (OPEX)][4] of using a cloud. Now, that's not to say you want to outsource everything and the kitchen sink, but most of the time and for many of your jobs you'll want to move to the cloud. - -In turn, if you're going to make the best use of the cloud, you need to know Linux. 
- -[Amazon Web Services][5], [Apache CloudStack][6], [Rackspace][7], [Google Cloud Platform][8], and [OpenStack][9] all run Linux at their hearts. The result? By 2014, [Linux server application deployments had risen to 79 percent][10] of all businesses, while Windows server app deployments had fallen to 36 percent. Linux has only gained more momentum since then. - -Even Microsoft understands this. - -In the past year alone, Azure Chief Technology Officer Mark Russinovich said, Microsoft has gone from [one in four of its Azure virtual machines running Linux][11] to [nearly one in three][12]. - -Think about that. Microsoft, which is switching to the [cloud for its main source of revenue][13], is relying on Linux for a third of its cloud business. - -Even now, both those who love Microsoft and those who hate it have trouble getting their minds around the fundamental shift of [Microsoft from a proprietary software company to an open-source][14], cloud-based service business. - -Linux's penetration into the proprietary server room is even deeper than it first appears. For example, [Docker recently announced a public beta of its Windows 10 and Mac OS X releases][15]. So, does that mean [Docker][16] is porting its eponymous container service to Windows 10 and the Mac? Nope. - -On both platforms, Docker runs within a Linux virtual machine. HyperKit on Mac OS and Hyper-V on Windows. Your interface may look like just another Mac or Windows application, but at heart your containers will still be running on Linux. - -So, just as the vast majority of Android phone and Chromebook users have no clue they're running Linux, so too will IT users continue to quietly move to Linux and the cloud. - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/#ftag=RSSbaffb68 - -作者:[Steven J. 
Vaughan-Nichols][a] -译者:[chenxinlong](https://github.com/chenxinlong) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[1]: http://www.zdnet.com/article/2014-the-year-the-cloud-killed-the-datacenter/ -[2]: https://uptimeinstitute.com/ -[3]: http://www.zdnet.com/article/move-to-cloud-accelerating-faster-than-thought-survey-finds/ -[4]: http://www.zdnet.com/article/rethinking-capex-and-opex-in-a-cloud-centric-world/ -[5]: https://aws.amazon.com/ -[6]: https://cloudstack.apache.org/ -[7]: https://www.rackspace.com/en-us -[8]: https://cloud.google.com/ -[9]: http://www.openstack.org/ -[10]: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/ -[11]: http://news.microsoft.com/bythenumbers/azure-virtual -[12]: http://www.zdnet.com/article/microsoft-nearly-one-in-three-azure-virtual-machines-now-are-running-linux/ -[13]: http://www.zdnet.com/article/microsofts-q3-azure-commercial-cloud-strong-but-earnings-revenue-light/ -[14]: http://www.zdnet.com/article/why-microsoft-is-turning-into-an-open-source-company/ -[15]: http://www.zdnet.com/article/new-docker-betas-for-azure-windows-10-now-available/ -[16]: http://www.docker.com/ - diff --git a/translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md b/translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md new file mode 100644 index 0000000000..804eb7db67 --- /dev/null +++ b/translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md @@ -0,0 +1,62 @@ +信息技术运行在云端,而云运行在 Linux 上。有什么问题吗? +=================================================================== + +>信息技术正在逐渐被迁移到云端. 
那又是什么驱动了云呢?答案是 Linux。 当连微软的 Azure 都开始拥抱 Linux 时,你就应该知道这一切都已经改变了。 + +![](http://zdnet1.cbsistatic.com/hub/i/r/2016/06/24/7d2b00eb-783d-4202-bda2-ca65d45c460a/resize/770xauto/732db8df725ede1cc38972788de71a0b/linux-owns-cloud.jpg) +>图片: ZDNet + +不管你接不接受, 云正在接管信息技术都已成现实。 我们这几年见证了 [ 云在信息技术产业内部的崛起 ][1] 。 那又是什么驱动了云呢? 答案是 Linux 。 + +[Uptime Institute][2] 最近对 1000 个 IT 执行部门进行调查并发现了约 50% 左右的高级企业的 IT 执行部门认为在将来 [ 大部分的 IT 工作内容应存储备份在云上 ][3] 或托管网站上。在这个调查中,23% 的人认为这种改变即将发生在明年,有 70% 的人则认为这种情况会在四年内出现。 + +这一点都不奇怪。 我们中的许多人仍热衷于我们的物理服务器和机架, 但一般运营一个自己的数据中心并不会产生任何的经济效益。 + +这真的非常简单。 只需要对比你 [ 运行在硬件上的个人资本费用 (CAPEX) 和使用云的操作费用 (OPEX)][4]。 但这并不是说你会想把所有的一切都外置,而是说在大部分时间内你会想把你的一些工作内容迁移到云端。 + +相应地,如果你想充分地利用云,你就得了解 Linux 。 + +[ 亚马逊 web 服务 ][5], [ Apache 的 CloudStack][6], [Rackspace][7], [ 谷歌云平台 ][8] 以及 [ OpenStack ][9] 的核心都是运行在 Linux 上的。那么结果如何?截至到 2014 年, [ 在 Linux 服务器上部署的应用达到所有企业的 79% ][10],而 Windows 服务器上部署的应用则跌到 36%。从那时起, Linux 就获得了更多的发展动力。 + +即便是微软自身也明白这一点。 + +Azure 的技术主管 Mark Russinovich 曾说,仅仅在过去的几年内微软就从 [ 四分之一的 Azure 虚拟机运行在 Linux 上 ][11] 变为 [ 将近三分之一的 Azure 虚拟机运行在 Linux 上][12]. + +试想一下。 微软, 一家正逐渐将 [ 云变为自身财政收入的主要来源 ][13] 的公司,其三分之一的云产业依靠于 Linux 。 + +即使是到目前为止, 这些不论喜欢或者不喜欢微软的人都很难想象得到 [ 微软会从一家以专利保护为基础的软件公司转变为一家开源,基于云服务的企业][14] 。 + +Linux 对于这些专用服务器机房的渗透甚至比它刚开始的时候更深了。 举个例子, [ Docker 最近发行了其在 Windows 10 和 Mac OS X 上的公测版本 ][15] 。 所以难道这意味着 [Docker][16] 将会把其同名的容器服务移植到 Windows 10 和 Mac 上吗? 并不是的。 + +在这两个平台上, Docker 只是运行在一个 Linux 虚拟机内部。 在 Mac OS 上是 HyperKit ,在 Windows 上则是 Hyper-V 。 你的图形界面可能看起来就像另一个 Mac 或 Windows 上的应用, 但在其内部核心的容器仍然是运行在 Linux 上的。 + +所以,就像大量的安卓手机和 Chromebook 的用户压根就不知道他们所运行的是 Linux 系统一样。这些信息技术的用户也会随之悄然地迁移到 Linux 和云上。 + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/#ftag=RSSbaffb68 + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[chenxinlong](https://github.com/chenxinlong) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]: http://www.zdnet.com/article/2014-the-year-the-cloud-killed-the-datacenter/ +[2]: https://uptimeinstitute.com/ +[3]: http://www.zdnet.com/article/move-to-cloud-accelerating-faster-than-thought-survey-finds/ +[4]: http://www.zdnet.com/article/rethinking-capex-and-opex-in-a-cloud-centric-world/ +[5]: https://aws.amazon.com/ +[6]: https://cloudstack.apache.org/ +[7]: https://www.rackspace.com/en-us +[8]: https://cloud.google.com/ +[9]: http://www.openstack.org/ +[10]: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/ +[11]: http://news.microsoft.com/bythenumbers/azure-virtual +[12]: http://www.zdnet.com/article/microsoft-nearly-one-in-three-azure-virtual-machines-now-are-running-linux/ +[13]: http://www.zdnet.com/article/microsofts-q3-azure-commercial-cloud-strong-but-earnings-revenue-light/ +[14]: http://www.zdnet.com/article/why-microsoft-is-turning-into-an-open-source-company/ +[15]: http://www.zdnet.com/article/new-docker-betas-for-azure-windows-10-now-available/ +[16]: http://www.docker.com/ + From f0653ae0566331e4b1d7ad58a3a6beafabe64f0f Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 9 Jul 2016 11:23:43 +0800 Subject: [PATCH 087/471] PUB:20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Moelf 翻译的不错! --- ... 
Raspberry Pi as a Secure Landing Point.md | 227 +++++++----------- 1 file changed, 85 insertions(+), 142 deletions(-) rename {translated/tech => published}/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md (50%) diff --git a/translated/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md b/published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md similarity index 50% rename from translated/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md rename to published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md index fc489163ef..7038dfd87a 100644 --- a/translated/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md +++ b/published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md @@ -1,25 +1,27 @@ -Securi-Pi: 使用树莓派作为安全跳板 +Securi-Pi:使用树莓派作为安全跳板 ================================================================================ -像很多 LinuxJournal 的读者一样,我也过上了当今非常普遍的“科技游牧”生活,在网络到网络间,从一个接入点到另一个接入点,我们身处现实世界的不同地方却始终保持统一的互联网接入端。近来我发现越来越多的网络环境开始屏蔽对外的常用端口比如 SMTP(端口25),SSH(端口22)之类的。当你走进一家咖啡馆然后想 SSH 到你的一台服务器上做点事情的时候发现端口22被屏蔽了是一件很烦的事情。 +像很多 LinuxJournal 的读者一样,我也过上了当今非常普遍的“科技游牧”生活,在网络之间,从一个接入点到另一个接入点,我们身处现实世界的不同地方却始终保持连接到互联网和日常使用的其它网络上。近来我发现越来越多的网络环境开始屏蔽对外的常用端口比如 SMTP(端口25),SSH(端口22)之类的。当你走进一家咖啡馆然后想 SSH 到你的一台服务器上做点事情的时候发现端口 22 被屏蔽了是一件很烦的事情。 -不过,我到目前为止还没发现有什么网络环境会把 HTTPS 给墙了(端口443)。在稍微配置了一下家中的树莓派 2之后,我成功地让自己能通过接入树莓派的443接口充当跳板从而让我在各种网络环境下连上想要的目标端口。简而言之,我把家中的树莓派设置成了一个 OpenVPN 的端点,SSH 端点同时也是一个 Apache 服务器——用于监听443端口上的我的接入活动并执行我预先设置好的网络策略。 +不过,我到目前为止还没发现有什么网络环境会把 HTTPS 给墙了(端口443)。在稍微配置了一下家中的树莓派 2 之后,我成功地让自己通过接入树莓派的 443 端口充当跳板,从而让我在各种网络环境下都能连上想要的目标端口。简而言之,我把家中的树莓派设置成了一个 OpenVPN 的端点和 SSH 端点,同时也是一个 Apache 服务器,所有这些服务都监听在 443 端口上,以便可以限制我不想暴露的网络服务。 -### 笔记 -此解决方案能搞定大多数有限制的网络环境,但有些防火墙会对外部流量调用深度包检查(Deep packet 
inspection),它们时常能屏蔽掉用本篇文章里的方式传输的信息。不过我到目前为止还没在这样的防火墙后测试过。同时,尽管我使用了很多基于密码学的工具(OpenVPN,HTTPS,SSH),我并没有非常严格地审计过这套配置方案(译者注:作者的意思是指这套方案能帮你绕过端口限制,但不代表你就是完全安全地连接上了树莓派)。有时候甚至 DNS 服务都会泄露你的信息,很可能在我没有考虑周到的角落里会有遗漏。我强烈不推荐把此跳板配置方案当作是万无一失的隐藏网络流量的办法,此配置只是希望能绕过一些端口限制连上网络,而不是做一些危险的事情。 +### 备注 + +此解决方案能搞定大多数有限制的网络环境,但有些防火墙会对外部流量调用深度包检查(Deep packet inspection),它们时常能屏蔽掉用本篇文章里的方式传输的信息。不过我到目前为止还没在这样的防火墙后测试过。同时,尽管我使用了很多基于密码学的工具(OpenVPN,HTTPS,SSH),我并没有非常严格地审计过这套配置方案(LCTT 译注:作者的意思是指这套方案能帮你绕过端口限制,但不代表你的活动就是完全安全的)。有时候甚至 DNS 服务都会泄露你的信息,很可能在我没有考虑周到的角落里会有遗漏。我强烈不推荐把此跳板配置方案当作是万无一失的隐藏网络流量的办法,此配置只是希望能绕过一些端口限制连上网络,而不是做一些危险的事情。 ### 起步 -让我们先从你需要什么说起,我用的是树莓派 2,装载了最新版本的 Raspbian,不过这个配置也应该能在树莓派 Model B 上运行;512MB 的内存对我们来说绰绰有余了,虽然性能可能没有树莓派 2这么好,毕竟Model B只有一颗单核心 CPU 相比于四核心的树莓派 2。我的树莓派在家里的防火墙和路由器之后,所以我还能用这个树莓派作为跳板访问家里的其他电子设备。同时这也意味着我的流量在互联网上看起来仿佛来自我家的ip地址,所以这也算某种意义上保护了我的匿名性。如果你没有树莓派,或者不想从家里运行这个服务,那你完全可以把这个配置放在一台小型云服务器上(译者:比如 IPS )。你只要确保服务器运行着基于 Debian 的 Linux 发行版即可,这份指南依然可用。 +让我们先从你需要什么说起,我用的是树莓派 2,装载了最新版本的 Raspbian,不过这个配置也应该能在树莓派 Model B 上运行;512MB 的内存对我们来说绰绰有余了,虽然性能可能没有树莓派 2这么好,毕竟相比于四核心的树莓派 2, Model B 只有一颗单核心 CPU。我的树莓派放置在家里的防火墙和路由器的后面,所以我还能用这个树莓派作为跳板访问家里的其他电子设备。同时这也意味着我的流量在互联网上看起来仿佛来自我家的 ip 地址,所以这也算某种意义上保护了我的匿名性。如果你没有树莓派,或者不想从家里运行这个服务,那你完全可以把这个配置放在一台小型云服务器上(LCTT 译注:比如 IPS )。你只要确保服务器运行着基于 Debian 的 Linux 发行版即可,这份指南依然可用。 ![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg) -图 1 树莓派,即将成为我们的加密网络端点 +*图 1 树莓派,即将成为我们的加密网络端点* ### 安装并配置 BIND -无论你是用树莓派还是一台服务器,当你成功启动之后你就可以安装 BIND 了,驱动了互联网相当一部分的域名服务软件。你将会把 BIND 仅仅作为缓存域名服务使用,而不用把它配置为用来处理来自互联网的域名请求。安装 BIND 会让你拥有一个可以被 OpenVPN 使用的 DNS 服务器。安装 BIND 十分简单,`apt-get` 就可以直接搞定: + +无论你是用树莓派还是一台服务器,当你成功启动之后你就可以安装 BIND 了,这是一个驱动了互联网相当一部分的域名服务软件。你将会把 BIND 仅仅作为缓存域名服务使用,而不用把它配置为用来处理来自互联网的域名请求。安装 BIND 会让你拥有一个可以被 OpenVPN 使用的 DNS 服务器。安装 BIND 十分简单,`apt-get` 就可以直接搞定: ``` root@test:~# apt-get install bind9 @@ -32,15 +34,13 @@ Suggested packages: bind9-doc resolvconf ufw The following NEW packages will be installed: bind9 bind9utils 
-0 upgraded, 2 newly installed, 0 to remove and - ↪0 not upgraded. +0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 490 kB of archives. -After this operation, 1,128 kB of additional disk - ↪space will be used. +After this operation, 1,128 kB of additional disk space will be used. Do you want to continue [Y/n]? y ``` -在我们能把 BIND 当做缓存域名服务器之前,还有一些小细节需要配置。两个修改都在`/etc/bind/named.conf.options`里完成。首先你要反注释掉 forwarders 这一节内容,同时你还要增加一个可以转发域名请求的目标服务器。作为例子我会用 Google 的 DNS 服务器(8.8.8.8)(译者:国内的话需要找一个替代品);文件的 forwarders 节看上去大致是这样的: +在我们把 BIND 作为缓存域名服务器之前,还有一些小细节需要配置。两个修改都在`/etc/bind/named.conf.options`里完成。首先你要取消注释掉 forwarders 这一节内容,同时你还要增加一个可以转发域名请求的目标服务器。作为例子我会用 Google 的 DNS 服务器(8.8.8.8)(LCTT 译注:国内的话需要找一个替代品);文件的 forwarders 节看上去大致是这样的: ``` @@ -49,19 +49,18 @@ forwarders { }; ``` -第二点你需要做的更改是允许来自互联网和本地局域网的 query,直接把这一行加入配置文件的低端,最后一个`}`之前就可以了: +第二点你需要做的更改是允许来自内网和本机的查询请求,直接把这一行加入配置文件的后面,记得放在最后一个`};`之前就可以了: ``` allow-query { 192.168.1.0/24; 127.0.0.0/16; }; ``` -上面那行配置会允许此 DNS 服务器接收来自网络和局域网的请求。下一步,你需要重启一下 BIND 的服务: +上面那行配置会允许此 DNS 服务器接收来自其所在的网络(在本例中,我的网络就在我的防火墙之后)和本机的请求。下一步,你需要重启一下 BIND 的服务: ``` root@test:~# /etc/init.d/bind9 restart -[....] Stopping domain name service...: bind9waiting - ↪for pid 13209 to die +[....] Stopping domain name service...: bind9waiting for pid 13209 to die . ok [ ok ] Starting domain name service...: bind9. 
``` @@ -91,12 +90,12 @@ Name: www.google.com Address: 173.194.33.180 ``` -完美!现在你的系统里已经有一个正常的域名服务在允许了,下一步我们来配置一下OpenVPN。 +完美!现在你的系统里已经有一个正常的域名服务在工作了,下一步我们来配置一下OpenVPN。 ### 安装并配置 OpenVPN -OpenVPN 是一个运用 SSL/TLS 作为密钥交换的开源 VPN 解决方案。同时它也非常便于在 Linux 环境下部署。配置 OpenVPN 可能有一点艰巨,不过在此其实你也不需要在默认的配置文件里做太多修改。首先你会需要运行一下 `apt-get` 来安装 OpenVPN: +OpenVPN 是一个运用 SSL/TLS 作为密钥交换的开源 VPN 解决方案。同时它也非常便于在 Linux 环境下部署。配置 OpenVPN 可能有一点点难,不过其实你也不需要在默认的配置文件里做太多修改。首先你需要运行一下 `apt-get` 来安装 OpenVPN: ``` @@ -110,22 +109,18 @@ Suggested packages: resolvconf The following NEW packages will be installed: liblzo2-2 libpkcs11-helper1 openvpn -0 upgraded, 3 newly installed, 0 to remove and - ↪0 not upgraded. +0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded. Need to get 621 kB of archives. -After this operation, 1,489 kB of additional disk - ↪space will be used. +After this operation, 1,489 kB of additional disk space will be used. Do you want to continue [Y/n]? y ``` -现在 OpenVPN 已经安装好了,你需要去配置它了。OpenVPN 是基于 SSL 的,并且它同时依赖于服务端和客户端两方的证书来工作。为了生成这些证书,你需要配置机器上的证书签发(CA)。幸运地,OpenVPN 在安装中自带了一些用于生成证书的脚本比如 “easy-rsa” 来帮助你加快这个过程。你将要创建一个文件目录用于放置 easy-rsa 脚本的模板: +现在 OpenVPN 已经安装好了,你需要去配置它了。OpenVPN 是基于 SSL 的,并且它同时依赖于服务端和客户端两方的证书来工作。为了生成这些证书,你需要在机器上配置一个证书签发(CA)。幸运地,OpenVPN 在安装中自带了一些用于生成证书的脚本比如 “easy-rsa” 来帮助你加快这个过程。你将要创建一个文件目录用于放置 easy-rsa 脚本,从模板目录复制过来: ``` root@test:~# mkdir /etc/openvpn/easy-rsa -root@test:~# cp -rpv - ↪/usr/share/doc/openvpn/examples/easy-rsa/2.0/* - ↪/etc/openvpn/easy-rsa/ +root@test:~# cp -rpv /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/ ``` 下一步,把 vars 文件复制一个备份: @@ -137,7 +132,6 @@ root@test:/etc/openvpn/easy-rsa# cp vars vars.bak 接下来,编辑一下 vars 以让其中的信息符合你的状态。我将以我需要编辑的信息作为例子: - ``` KEY_SIZE=4096 KEY_COUNTRY="US" @@ -147,19 +141,17 @@ KEY_ORG="Linux Journal" KEY_EMAIL="bill.childers@linuxjournal.com" ``` -下一步是 source 一下 vars ,这样系统就能把其中的信息当作环境变量处理了: +下一步是导入(source)一下 vars 中的环境变量,这样系统就能把其中的信息当作环境变量处理了: ``` root@test:/etc/openvpn/easy-rsa# source ./vars -NOTE: If you 
run ./clean-all, I will be doing a - ↪rm -rf on /etc/openvpn/easy-rsa/keys +NOTE: If you run ./clean-all, I will be doing a rm -rf on /etc/openvpn/easy-rsa/keys ``` -### 搭建CA(证书签发) +### 搭建 CA(证书签发) - -接下来你要允许一下 `clean-all` 来确保有一个清理干净的系统工作环境,紧接着你也就要做证书签发了。注意一下我修改了一些 changeme 的跳出的交互提示内容以符合我需要的安装情况: +接下来你要运行一下 `clean-all` 来确保有一个清理干净的系统工作环境,紧接着你就要做证书签发了。注意一下我修改了一些 changeme 的所提示修改的内容以符合我需要的安装情况: ``` @@ -182,10 +174,8 @@ Country Name (2 letter code) [US]: State or Province Name (full name) [CA]: Locality Name (eg, city) [Silicon Valley]: Organization Name (eg, company) [Linux Journal]: -Organizational Unit Name (eg, section) - ↪[changeme]:SecTeam -Common Name (eg, your name or your server's hostname) - ↪[changeme]:test.linuxjournal.com +Organizational Unit Name (eg, section) [changeme]:SecTeam +Common Name (eg, your name or your server's hostname) [changeme]:test.linuxjournal.com Name [changeme]:test.linuxjournal.com Email Address [bill.childers@linuxjournal.com]: ``` @@ -193,12 +183,11 @@ Email Address [bill.childers@linuxjournal.com]: ### 生成服务端证书 -一旦CA创建好了,你接着就可以生成客户端的 OpenVPN 证书了: +一旦 CA 创建好了,你接着就可以生成客户端的 OpenVPN 证书了: ``` -root@test:/etc/openvpn/easy-rsa# - ↪./build-key-server test.linuxjournal.com +root@test:/etc/openvpn/easy-rsa# ./build-key-server test.linuxjournal.com Generating a 4096 bit RSA private key ...................................................++ writing new private key to 'test.linuxjournal.com.key' @@ -215,10 +204,8 @@ Country Name (2 letter code) [US]: State or Province Name (full name) [CA]: Locality Name (eg, city) [Silicon Valley]: Organization Name (eg, company) [Linux Journal]: -Organizational Unit Name (eg, section) - ↪[changeme]:SecTeam -Common Name (eg, your name or your server's hostname) - ↪[test.linuxjournal.com]: +Organizational Unit Name (eg, section) [changeme]:SecTeam +Common Name (eg, your name or your server's hostname) [test.linuxjournal.com]: Name [changeme]:test.linuxjournal.com Email Address [bill.childers@linuxjournal.com]: @@ 
-226,8 +213,7 @@ Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: -Using configuration from - ↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf +Using configuration from /etc/openvpn/easy-rsa/openssl-1.0.0.cnf Check that the request matches the signature Signature ok The Subject's Distinguished Name is as follows @@ -238,10 +224,8 @@ organizationName :PRINTABLE:'Linux Journal' organizationalUnitName:PRINTABLE:'SecTeam' commonName :PRINTABLE:'test.linuxjournal.com' name :PRINTABLE:'test.linuxjournal.com' -emailAddress - ↪:IA5STRING:'bill.childers@linuxjournal.com' -Certificate is to be certified until Sep 1 - ↪06:23:59 2025 GMT (3650 days) +emailAddress :IA5STRING:'bill.childers@linuxjournal.com' +Certificate is to be certified until Sep 1 06:23:59 2025 GMT (3650 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y @@ -249,13 +233,12 @@ Write out database with 1 new entries Data Base Updated ``` -下一步需要用掉一些时间来生成 OpenVPN 服务器需要的 Diffie-Hellman 密钥。这个步骤在一般的桌面级 CPU 上会需要几分钟的时间,但在 ARM 构架的树莓派上,会用掉超级超级长的时间。耐心点,只要终端上的点还在跳,那么一切就在按部就班运行: +下一步需要用掉一些时间来生成 OpenVPN 服务器需要的 Diffie-Hellman 密钥。这个步骤在一般的桌面级 CPU 上会需要几分钟的时间,但在 ARM 构架的树莓派上,会用掉超级超级长的时间。耐心点,只要终端上的点还在跳,那么一切就在按部就班运行(下面的示例省略了不少的点): ``` root@test:/etc/openvpn/easy-rsa# ./build-dh -Generating DH parameters, 4096 bit long safe prime, - ↪generator 2 +Generating DH parameters, 4096 bit long safe prime, generator 2 This is going to take a long time ....................................................+ @@ -263,11 +246,10 @@ This is going to take a long time ### 生成客户端证书 -现在你要生成一下客户端用于登陆 OpenVPN 的密钥。通常来说 OpenVPN 都会被配置成使用证书验证的加密方式,在这个配置下客户端需要持有由服务端签发的一份证书: +现在你要生成一下客户端用于登录 OpenVPN 的密钥。通常来说 OpenVPN 都会被配置成使用证书验证的加密方式,在这个配置下客户端需要持有由服务端签发的一份证书: ``` -root@test:/etc/openvpn/easy-rsa# ./build-key - ↪bills-computer +root@test:/etc/openvpn/easy-rsa# ./build-key bills-computer Generating a 4096 bit RSA private key 
...................................................++ ...................................................++ @@ -285,10 +267,8 @@ Country Name (2 letter code) [US]: State or Province Name (full name) [CA]: Locality Name (eg, city) [Silicon Valley]: Organization Name (eg, company) [Linux Journal]: -Organizational Unit Name (eg, section) - ↪[changeme]:SecTeam -Common Name (eg, your name or your server's hostname) - ↪[bills-computer]: +Organizational Unit Name (eg, section) [changeme]:SecTeam +Common Name (eg, your name or your server's hostname) [bills-computer]: Name [changeme]:bills-computer Email Address [bill.childers@linuxjournal.com]: @@ -296,8 +276,7 @@ Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: -Using configuration from - ↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf +Using configuration from /etc/openvpn/easy-rsa/openssl-1.0.0.cnf Check that the request matches the signature Signature ok The Subject's Distinguished Name is as follows @@ -308,30 +287,26 @@ organizationName :PRINTABLE:'Linux Journal' organizationalUnitName:PRINTABLE:'SecTeam' commonName :PRINTABLE:'bills-computer' name :PRINTABLE:'bills-computer' -emailAddress - ↪:IA5STRING:'bill.childers@linuxjournal.com' -Certificate is to be certified until - ↪Sep 1 07:35:07 2025 GMT (3650 days) +emailAddress :IA5STRING:'bill.childers@linuxjournal.com' +Certificate is to be certified until Sep 1 07:35:07 2025 GMT (3650 days) Sign the certificate? [y/n]:y -1 out of 1 certificate requests certified, - ↪commit? [y/n]y +1 out of 1 certificate requests certified, commit? 
[y/n]y Write out database with 1 new entries Data Base Updated root@test:/etc/openvpn/easy-rsa# ``` -现在你需要再生成一个 HMAC 代码作为共享密钥来进一步增加整个加密提供的安全性: +现在你需要再生成一个 HMAC 码作为共享密钥来进一步增加整个加密提供的安全性: ``` -root@test:~# openvpn --genkey --secret - ↪/etc/openvpn/easy-rsa/keys/ta.key +root@test:~# openvpn --genkey --secret /etc/openvpn/easy-rsa/keys/ta.key ``` ### 配置服务器 -最后,你来到了需要配置 OpenVPN 服务的时候了。你需要创建一个 `/etc/openvpn/server.conf` 文件;这个配置文件的大多数地方都可以套用模板解决。设置 OpenVPN 服务的主要修改在于让它只用 TCP 而不是 UDP 链接。这是下一步所必需的---如果不是 TCP 链接那么你的服务将不能通过 端口443 运作。创建 `/etc/openvpn/server.conf` 然后把下述配置丢进去: +最后,我们到了配置 OpenVPN 服务的时候了。你需要创建一个 `/etc/openvpn/server.conf` 文件;这个配置文件的大多数地方都可以套用模板解决。设置 OpenVPN 服务的主要修改在于让它只用 TCP 而不是 UDP 链接。这是下一步所必需的---如果不是 TCP 连接那么你的服务将不能工作在端口 443 上。创建 `/etc/openvpn/server.conf` 然后把下述配置丢进去: ``` @@ -339,23 +314,15 @@ port 1194 proto tcp dev tun ca easy-rsa/keys/ca.crt -cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever - ↪your hostname was -key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key - ↪- This file should be kept secret +cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever your hostname was +key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key - This file should be kept secret management localhost 7505 dh easy-rsa/keys/dh4096.pem tls-auth /etc/openvpn/certs/ta.key 0 -server 10.8.0.0 255.255.255.0 # The server will use this - ↪subnet for clients connecting to it +server 10.8.0.0 255.255.255.0 # The server will use this subnet for clients connecting to it ifconfig-pool-persist ipp.txt -push "redirect-gateway def1 bypass-dhcp" # Forces clients - ↪to redirect all traffic through the VPN -push "dhcp-option DNS 192.168.1.1" # Tells the client to - ↪use the DNS server at 192.168.1.1 for DNS - - ↪replace with the IP address of the OpenVPN - ↪machine and clients will use the BIND - ↪server setup earlier +push "redirect-gateway def1 bypass-dhcp" # Forces clients to redirect all traffic through the VPN +push "dhcp-option DNS 192.168.1.1" # Tells the client to 
use the DNS server at 192.168.1.1 for DNS - replace with the IP address of the OpenVPN machine and clients will use the BIND server setup earlier keepalive 30 240 comp-lzo # Enable compression persist-key @@ -364,14 +331,12 @@ status openvpn-status.log verb 3 ``` -最后,你将需要在服务器上启用 IP 转发,配置 OpenVPN 为开机启动并立刻启动 OpenVPN 服务: +最后,你将需要在服务器上启用 IP 转发,配置 OpenVPN 为开机启动,并立刻启动 OpenVPN 服务: ``` -root@test:/etc/openvpn/easy-rsa/keys# echo - ↪"net.ipv4.ip_forward = 1" >> /etc/sysctl.conf -root@test:/etc/openvpn/easy-rsa/keys# sysctl -p - ↪/etc/sysctl.conf +root@test:/etc/openvpn/easy-rsa/keys# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf +root@test:/etc/openvpn/easy-rsa/keys# sysctl -p /etc/sysctl.conf net.core.wmem_max = 12582912 net.core.rmem_max = 12582912 net.ipv4.tcp_rmem = 10240 87380 12582912 @@ -387,61 +352,48 @@ net.ipv4.tcp_wmem = 10240 87380 12582912 net.ipv4.ip_forward = 0 net.ipv4.ip_forward = 1 -root@test:/etc/openvpn/easy-rsa/keys# update-rc.d - ↪openvpn defaults +root@test:/etc/openvpn/easy-rsa/keys# update-rc.d openvpn defaults update-rc.d: using dependency based boot sequencing -root@test:/etc/openvpn/easy-rsa/keys# - ↪/etc/init.d/openvpn start +root@test:/etc/openvpn/easy-rsa/keys# /etc/init.d/openvpn start [ ok ] Starting virtual private network daemon:. 
``` ### 配置 OpenVPN 客户端 +客户端的安装取决于客户端的操作系统,但你需要将之前生成的证书和密钥复制到你的客户端上,并导入你的 OpenVPN 客户端并新建一个配置文件。每种操作系统下的 OpenVPN 客户端在操作上会有些稍许不同,这也不在这篇文章的覆盖范围内,所以你最好去看看特定操作系统下的 OpenVPN 文档来获取更多信息。请参考本文档里的资源那一节。 -客户端的安装取决于客户端的操作系统,但你总会需要之前生成的证书和密钥,并导入你的 OpenVPN 客户端并新建一个配置文件。每种操作系统下的 OpenVPN 客户端在操作上会有些稍许不同,这也不在这篇文章的覆盖范围内,所以你最好去看看特定操作系统下的 OpenVPN 文档来获取更多信息。参考文档里的 Resources 章节。 +### 安装 SSLH —— "魔法"多协议切换工具 -### 安装 SSLH —— "魔法"多协议工具 - -本文章介绍的解决方案最有趣的部分就是运用 SSLH 了。SSLH 是一个多重协议工具——它可以监听443端口的流量,然后分析他们是以SSH,HTTPS 还是 OpenVPN 的通讯包,并把他们分别转发给正确的系统服务。这就是为何本解决方案可以让你绕过大多数端口封杀——你可以一直使用HTTPS通讯,介于它几乎从来不会被封杀。 +本文章介绍的解决方案最有趣的部分就是运用 SSLH 了。SSLH 是一个多重协议工具——它可以监听 443 端口的流量,然后分析他们是 SSH,HTTPS 还是 OpenVPN 的通讯包,并把它们分别转发给正确的系统服务。这就是为何本解决方案可以让你绕过大多数端口封杀——你可以一直使用 HTTPS 通讯,因为它几乎从来不会被封杀。 同样,直接 `apt-get` 安装: ``` -root@test:/etc/openvpn/easy-rsa/keys# apt-get - ↪install sslh +root@test:/etc/openvpn/easy-rsa/keys# apt-get install sslh Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: - apache2 apache2-mpm-worker apache2-utils - ↪apache2.2-bin apache2.2-common - libapr1 libaprutil1 libaprutil1-dbd-sqlite3 - ↪libaprutil1-ldap libconfig9 + apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common + libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig9 Suggested packages: - apache2-doc apache2-suexec apache2-suexec-custom - ↪openbsd-inetd inet-superserver + apache2-doc apache2-suexec apache2-suexec-custom openbsd-inetd inet-superserver The following NEW packages will be installed: - apache2 apache2-mpm-worker apache2-utils - ↪apache2.2-bin apache2.2-common - libapr1 libaprutil1 libaprutil1-dbd-sqlite3 - ↪libaprutil1-ldap libconfig9 sslh -0 upgraded, 11 newly installed, 0 to remove - ↪and 0 not upgraded. 
+ apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common + libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig9 sslh +0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded. Need to get 1,568 kB of archives. -After this operation, 5,822 kB of additional - ↪disk space will be used. +After this operation, 5,822 kB of additional disk space will be used. Do you want to continue [Y/n]? y ``` -在 SSLH 被安装之后,包管理器会询问要在 inetd 还是 standalone 模式下允许。选择 standalone 模式,因为你希望 SSLH 在它自己的进程里运行。如果你没有安装 Apache,apt包管理器会自动帮你下载并安装的,尽管它也不是完全不可或缺。如果你已经有 Apache 了,那你需要确保它只监听 localhost 端口而不是所有的端口(不然的话 SSLH 会无法运行因为 443 端口已经被 Apache 监听占用)。安装后,你会看到一个如下所示的错误信息: +在 SSLH 被安装之后,包管理器会询问要在 inetd 还是 standalone 模式下允许。选择 standalone 模式,因为你希望 SSLH 在它自己的进程里运行。如果你没有安装 Apache,apt 包管理器会自动帮你下载并安装的,尽管它也不是完全不可或缺。如果你已经有 Apache 了,那你需要确保它只监听 localhost 端口而不是所有的端口(不然的话 SSLH 会无法运行,因为 443 端口已经被 Apache 监听占用)。安装后,你会看到一个如下所示的错误信息: ``` -[....] Starting ssl/ssh multiplexer: sslhsslh disabled, - ↪please adjust the configuration to your needs -[FAIL] and then set RUN to 'yes' in /etc/default/sslh - ↪to enable it. ... failed! +[....] Starting ssl/ssh multiplexer: sslhsslh disabled, please adjust the configuration to your needs +[FAIL] and then set RUN to 'yes' in /etc/default/sslh to enable it. ... failed! failed! ``` @@ -461,20 +413,16 @@ failed! 
RUN=yes -# binary to use: forked (sslh) or single-thread - ↪(sslh-select) version +# binary to use: forked (sslh) or single-thread (sslh-select) version DAEMON=/usr/sbin/sslh -DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh - ↪127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn - ↪127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid" +DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn 127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid" ``` 保存编辑并启动 SSLH: ``` - root@test:/etc/openvpn/easy-rsa/keys# - ↪/etc/init.d/sslh start + root@test:/etc/openvpn/easy-rsa/keys# /etc/init.d/sslh start [ ok ] Starting ssl/ssh multiplexer: sslh. ``` @@ -485,26 +433,21 @@ $ ssh -p 443 root@test.linuxjournal.com root@test:~# ``` -SSLH 现在开始监听端口443 并且可以转发流量信息到 SSH,Apache 或者 OpenVPN 取决于抵达流量包的类型。这套系统现已整装待发了! +SSLH 现在开始监听端口 443 并且可以转发流量信息到 SSH、Apache 或者 OpenVPN ,这取决于抵达流量包的类型。这套系统现已整装待发了! ### 结论 -现在你可以启动 OpenVPN 并且配置你的客户端连接到服务器的 443 端口了,然后 SSLH 会从那里把流量转发到服务器的 1194 端口。但介于你正在和服务器的 443 端口通信,你的 VPN 流量不会被封锁。现在你可以舒服地坐在陌生小镇的咖啡店里,畅通无阻地通过树莓派上的 OpenVPN 浏览互联网。你顺便还给你的链接增加了一些安全性,这个额外作用也会让你的链接更安全和私密一些。享受通过安全跳板浏览互联网把! +现在你可以启动 OpenVPN 并且配置你的客户端连接到服务器的 443 端口了,然后 SSLH 会从那里把流量转发到服务器的 1194 端口。但鉴于你正在和服务器的 443 端口通信,你的 VPN 流量不会被封锁。现在你可以舒服地坐在陌生小镇的咖啡店里,畅通无阻地通过你的树莓派上的 OpenVPN 浏览互联网。你顺便还给你的链接增加了一些安全性,这个额外作用也会让你的链接更安全和私密一些。享受通过安全跳板浏览互联网把! 
-资源: +### 参考资源 -安装与配置 OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) and [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn) - -OpenVPN 客户端下载: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html) - -OpenVPN Client for iOS: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8) - -OpenVPN Client for Android: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en) - -Tunnelblick for Mac OS X (OpenVPN client): [https://tunnelblick.net](https://tunnelblick.net) - -SSLH 介绍: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) 和 [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh) +- 安装与配置 OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) 和 [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn) +- OpenVPN 客户端下载: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html) +- OpenVPN iOS 客户端: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8) +- OpenVPN Android 客户端: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en) +- Tunnelblick for Mac OS X (OpenVPN 客户端): [https://tunnelblick.net](https://tunnelblick.net) +- SSLH 介绍: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) 和 [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh) ---------- @@ -512,7 +455,7 @@ via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-lan 作者:[Bill Childers][a] 译者:[Moelf](https://github.com/Moelf) 
-校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 198b63761f251706b37fa0d2c8f09a72127b1ee2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Jul 2016 14:30:38 +0800 Subject: [PATCH 088/471] =?UTF-8?q?20160709-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...reate Your Own Shell in Python - Part I.md | 226 ++++++++++++++++++ 1 file changed, 226 insertions(+) create mode 100644 sources/tech/20160705 Create Your Own Shell in Python - Part I.md diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md new file mode 100644 index 0000000000..9c3ffa55dd --- /dev/null +++ b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -0,0 +1,226 @@ +Create Your Own Shell in Python : Part I + +I’m curious to know how a shell (like bash, csh, etc.) works internally. So, I implemented one called yosh (Your Own SHell) in Python to answer my own curiosity. The concept I explain in this article can be applied to other languages as well. + +(Note: You can find source code used in this blog post here. I distribute it with MIT license.) + +Let’s start. + +### Step 0: Project Structure + +For this project, I use the following project structure. + +``` +yosh_project +|-- yosh + |-- __init__.py + |-- shell.py +``` + +`yosh_project` is the root project folder (you can also name it just `yosh`). + +`yosh` is the package folder and `__init__.py` will make it a package named the same as the package folder name. (If you don’t write Python, just ignore it.) + +`shell.py` is our main shell file. + +### Step 1: Shell Loop + +When you start a shell, it will show a command prompt and wait for your command input. 
After it receives the command and executes it (the details will be explained later), your shell will go back to the wait loop for your next command.
+
+In `shell.py`, we start with a simple main function that calls the shell_loop() function, as follows:
+
+```
+def shell_loop():
+    # Start the loop here
+
+
+def main():
+    shell_loop()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+Then, in our `shell_loop()`, we use a status flag to indicate whether the loop should continue or stop. At the beginning of the loop, our shell will show a command prompt and wait to read command input.
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+```
+
+After that, we tokenize the command input and execute it (we’ll implement the tokenize and execute functions soon).
+
+Therefore, our shell_loop() becomes the following.
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+
+        # Tokenize the command input
+        cmd_tokens = tokenize(cmd)
+
+        # Execute the command and retrieve new status
+        status = execute(cmd_tokens)
+```
+
+That’s all of our shell loop. If we start our shell with python shell.py, it will show the command prompt. However, it will throw an error if we type a command and hit enter, because we haven’t defined the tokenize function yet.
+
+To exit the shell, try ctrl-c. I will show how to exit gracefully later.
+
+### Step 2: Tokenization
+
+When a user types a command in our shell and hits enter, the command input will be a long string containing both a command name and its arguments.
Therefore, we have to tokenize it (split a string into several tokens).
+
+It seems simple at first glance. We might use cmd.split() to separate the input by spaces. It works well for a command like `ls -a my_folder` because it splits the command into a list `['ls', '-a', 'my_folder']` which we can use easily.
+
+However, there are cases where arguments are quoted with single or double quotes, like `echo "Hello World"` or `echo 'Hello World'`. If we use cmd.split(), we will get a list of 3 tokens `['echo', '"Hello', 'World"']` instead of 2 tokens `['echo', 'Hello World']`.
+
+Fortunately, Python provides a library called shlex that handles this splitting like a charm. (Note: we could also use regular expressions, but that’s not the main point of this article.)
+
+```
+import sys
+import shlex
+
+...
+
+def tokenize(string):
+    return shlex.split(string)
+
+...
+```
+
+Then, we will send these tokens to the execution process.
+
+### Step 3: Execution
+
+This is the core and fun part of a shell. What happens when a shell executes mkdir test_dir? (Note: mkdir is a program to be executed with the argument test_dir for creating a directory named test_dir.)
+
+The first function involved in this step is execvp. Before I explain what execvp does, let’s see it in action.
+
+```
+import os
+...
+
+def execute(cmd_tokens):
+    # Execute command
+    os.execvp(cmd_tokens[0], cmd_tokens)
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+```
+
+Try running our shell again, input the command `mkdir test_dir`, then hit enter.
+
+The problem is, after we hit enter, our shell exits instead of waiting for the next command. However, the directory is correctly created.
+
+So, what does execvp really do?
+
+execvp is a variant of the system call exec. The first argument is the program name. The v indicates the second argument is a list of program arguments (variable number of arguments).
The p indicates that the PATH environment will be used for searching for the given program name. In our previous attempt, the mkdir program was found based on your PATH environment variable.
+
+(There are other variants of exec such as execv, execvpe, execl, execlp, execlpe; you can google them for more information.)
+
+exec replaces the memory of the calling process with a new program to be executed. In our case, our shell process memory was replaced by the `mkdir` program. Then, mkdir became the main process and created the test_dir directory. Finally, its process exited.
+
+The main point here is that our shell process had already been replaced by the mkdir process. That’s the reason why our shell disappeared and did not wait for the next command.
+
+Therefore, we need another system call to the rescue: fork.
+
+fork will allocate new memory and copy the current process into a new process. We call this new process the child process and the calling process the parent process. Then, the child process memory will be replaced by an exec'ed program. Therefore, our shell, which is a parent process, is safe from memory replacement.
+
+Let’s see our modified code.
+
+```
+...
+
+def execute(cmd_tokens):
+    # Fork a child shell process
+    # If the current process is a child process, its `pid` is set to `0`
+    # else the current process is a parent process and the value of `pid`
+    # is the process id of its child process.
+    pid = os.fork()
+
+    if pid == 0:
+        # Child process
+        # Replace the child shell process with the program called with exec
+        os.execvp(cmd_tokens[0], cmd_tokens)
+    elif pid > 0:
+        # Parent process
+        while True:
+            # Wait for a response status from its child process (identified with pid)
+            wpid, status = os.waitpid(pid, 0)
+
+            # Finish waiting if its child process exits normally
+            # or is terminated by a signal
+            if os.WIFEXITED(status) or os.WIFSIGNALED(status):
+                break
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+```
+
+When the parent process calls `os.fork()`, you can imagine that all the source code is copied into a new child process. At this point, the parent and child process see the same code and run in parallel.
+
+If the running code belongs to the child process, pid will be 0. Otherwise, the running code belongs to the parent process, and pid will be the process id of the child process.
+
+When os.execvp is invoked in the child process, you can imagine that all the source code of the child process is replaced by the code of the program that is being called. However, the code of the parent process is not changed.
+
+When the parent process finishes waiting for its child process to exit or be terminated, it returns the status indicating to continue the shell loop.
+
+### Run
+
+Now, you can try running our shell and enter mkdir test_dir2. It should work properly. Our main shell process is still there and waits for the next command. Try ls and you will see the created directories.
+
+However, there are some problems here.
+
+First, try cd test_dir2 and then ls. It’s supposed to enter the directory test_dir2, which is an empty directory. However, you will see that the current directory was not changed to test_dir2.
+
+Second, we still have no way to exit from our shell gracefully.
+
+We will continue to solve such problems in [Part 2][1].
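The first problem mentioned above (cd taking effect only in the forked child) can be demonstrated in isolation with a few lines of Python. This is a standalone sketch for POSIX systems, independent of the yosh code:

```python
import os

# Demonstrate why a forked child's chdir never reaches the parent:
# the child gets its own copy of the process state, including the
# current working directory.
parent_cwd = os.getcwd()

pid = os.fork()
if pid == 0:
    # Child process: change directory, then exit immediately.
    os.chdir("/")
    os._exit(0)

# Parent process: wait for the child, then check its own cwd.
os.waitpid(pid, 0)
print(os.getcwd() == parent_cwd)  # True
```

This is exactly why cd has to run inside the shell process itself, as Part 2 shows.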
+
+
+--------------------------------------------------------------------------------
+
+via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/
+
+作者:[Supasate Choochaisri][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://disqus.com/by/supasate_choochaisri/
+[1]: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/

From 69d53d49c78c2d648bf41ef064a08188ffde13d5 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Jul 2016 14:35:30 +0800
Subject: [PATCH 089/471] =?UTF-8?q?20160709-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...eate Your Own Shell in Python - Part II.md | 215 ++++++++++++++++++
 1 file changed, 215 insertions(+)
 create mode 100644 sources/tech/20160706 Create Your Own Shell in Python - Part II.md

diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md
new file mode 100644
index 0000000000..af0ec01b36
--- /dev/null
+++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md
@@ -0,0 +1,215 @@
+Create Your Own Shell in Python - Part II
+===========================================
+
+In [part 1][1], we already created a main shell loop, tokenized the command input, and executed a command using fork and exec. In this part, we will solve the remaining problems. First, `cd test_dir2` does not change our current directory. Second, we still have no way to exit from our shell gracefully.
+
+### Step 4: Built-in Commands
+
+The statement “cd test_dir2 does not change our current directory” is both true and false in some sense. It’s true in the sense that after executing the command, we are still in the same directory.
However, the directory actually is changed, but it’s changed in the child process.
+
+Remember that we fork a child process, then exec the command, which does not happen in the parent process. The result is that we just change the current directory of the child process, not the directory of the parent process.
+
+Then, the child process exits, and the parent process continues with the same intact directory.
+
+Therefore, this kind of command must be built into the shell itself. It must be executed in the shell process without forking.
+
+#### cd
+
+Let’s start with the cd command.
+
+We first create a builtins directory. Each built-in command will be put inside this directory.
+
+```
+yosh_project
+|-- yosh
+    |-- builtins
+    |   |-- __init__.py
+    |   |-- cd.py
+    |-- __init__.py
+    |-- shell.py
+```
+
+In cd.py, we implement our own cd command using the system call os.chdir.
+
+```
+import os
+from yosh.constants import *
+
+
+def cd(args):
+    os.chdir(args[0])
+
+    return SHELL_STATUS_RUN
+```
+
+Notice that we return the shell running status from a built-in function. Therefore, we move the constants into yosh/constants.py to be used across the project.
+
+```
+yosh_project
+|-- yosh
+    |-- builtins
+    |   |-- __init__.py
+    |   |-- cd.py
+    |-- __init__.py
+    |-- constants.py
+    |-- shell.py
+```
+
+In constants.py, we put the shell status constants.
+
+```
+SHELL_STATUS_STOP = 0
+SHELL_STATUS_RUN = 1
+```
+
+Now, our built-in cd is ready. Let’s modify our shell.py to handle built-in functions.
+
+```
+...
+# Import constants
+from yosh.constants import *
+
+# Hash map to store built-in function name and reference as key and value
+built_in_cmds = {}
+
+
+def tokenize(string):
+    return shlex.split(string)
+
+
+def execute(cmd_tokens):
+    # Extract command name and arguments from tokens
+    cmd_name = cmd_tokens[0]
+    cmd_args = cmd_tokens[1:]
+
+    # If the command is a built-in command, invoke its function with arguments
+    if cmd_name in built_in_cmds:
+        return built_in_cmds[cmd_name](cmd_args)
+
+    ...
+```
+
+We use a Python dictionary built_in_cmds as a hash map to store our built-in functions. In the execute function, we extract the command name and arguments. If the command name is in our hash map, we call that built-in function.
+
+(Note: built_in_cmds[cmd_name] returns the function reference that can be invoked with arguments immediately.)
+
+We are almost ready to use the built-in cd function. The last thing is to add the cd function to the built_in_cmds map.
+
+```
+...
+# Import all built-in function references
+from yosh.builtins import *
+
+...
+
+# Register a built-in function to built-in command hash map
+def register_command(name, func):
+    built_in_cmds[name] = func
+
+
+# Register all built-in commands here
+def init():
+    register_command("cd", cd)
+
+
+def main():
+    # Init shell before starting the main loop
+    init()
+    shell_loop()
+```
+
+We define the register_command function for adding a built-in function to our built-in command hash map. Then, we define the init function and register the built-in cd function there.
+
+Notice the line register_command("cd", cd). The first argument is a command name. The second argument is a reference to a function. In order to let cd, in the second argument, refer to the cd function reference in yosh/builtins/cd.py, we have to put the following line in yosh/builtins/__init__.py.
+
+```
+from yosh.builtins.cd import *
+```
+
+Therefore, in yosh/shell.py, when we import * from yosh.builtins, we get the cd function reference that is already imported by yosh.builtins.
+
+We’re done preparing our code. Let’s try it by running our shell as a module with python -m yosh.shell at the same level as the yosh directory.
+
+Now, our cd command should change our shell directory correctly, while non-built-in commands still work too. Cool.
+
+#### exit
+
+Here comes the last piece: to exit gracefully.
+
+We need a function that changes the shell status to SHELL_STATUS_STOP. That way, the shell loop will naturally break and the shell program will end and exit.
+
+Just as with cd, if we fork and exec exit in a child process, the parent process will still remain intact. Therefore, the exit function needs to be a shell built-in function.
+
+Let’s start by creating a new file called exit.py in the builtins folder.
+
+```
+yosh_project
+|-- yosh
+    |-- builtins
+    |   |-- __init__.py
+    |   |-- cd.py
+    |   |-- exit.py
+    |-- __init__.py
+    |-- constants.py
+    |-- shell.py
+```
+
+exit.py defines the exit function that simply returns the status that breaks the main loop.
+
+```
+from yosh.constants import *
+
+
+def exit(args):
+    return SHELL_STATUS_STOP
+```
+
+Then, we import the exit function reference in `yosh/builtins/__init__.py`.
+
+```
+from yosh.builtins.cd import *
+from yosh.builtins.exit import *
+```
+
+Finally, in shell.py, we register the exit command in the `init()` function.
+
+```
+...
+
+# Register all built-in commands here
+def init():
+    register_command("cd", cd)
+    register_command("exit", exit)
+
+...
+```
+
+That’s all!
+
+Try running python -m yosh.shell. Now you can enter exit to quit the program gracefully.
+
+### Final Thought
+
+I hope you enjoyed creating yosh (your own shell) as much as I did. However, my version of yosh is still in an early stage. I don’t handle several corner cases that can corrupt the shell. There are a lot of built-in commands that I don’t cover. Some non-built-in commands can also be implemented as built-in commands to improve performance (avoiding new process creation time). And, a ton of features are not yet implemented (see Common features and Differing features).
+
+I’ve provided the source code at github.com/supasate/yosh. Feel free to fork and play around.
+
+Now, it’s your turn to make it really Your Own SHell.
+
+Happy Coding!
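To see the whole built-in dispatch mechanism at a glance, here is a condensed, self-contained sketch of the pattern used above, flattened into one file instead of the yosh package layout (the function exit_builtin is renamed here only to avoid shadowing Python's built-in exit):

```python
import os

SHELL_STATUS_RUN = 1
SHELL_STATUS_STOP = 0

# Hash map from built-in command name to its function reference
built_in_cmds = {}

def register_command(name, func):
    built_in_cmds[name] = func

def cd(args):
    # Runs in the shell process itself, so the directory change sticks
    os.chdir(args[0])
    return SHELL_STATUS_RUN

def exit_builtin(args):
    # Merely flips the status flag; shell_loop sees it and stops
    return SHELL_STATUS_STOP

register_command("cd", cd)
register_command("exit", exit_builtin)

def execute(cmd_tokens):
    cmd_name, cmd_args = cmd_tokens[0], cmd_tokens[1:]
    if cmd_name in built_in_cmds:
        return built_in_cmds[cmd_name](cmd_args)
    return SHELL_STATUS_RUN  # non-built-ins would go through fork/exec here
```

With this shape, adding a new built-in is just another register_command call, which is the design choice that keeps shell.py free of per-command if/elif chains.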
+
+--------------------------------------------------------------------------------
+
+via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/
+
+作者:[Supasate Choochaisri][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://disqus.com/by/supasate_choochaisri/
+[1]: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/
+[2]: http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html
+[3]: http://www.tldp.org/LDP/intro-linux/html/x12249.html
+[4]: https://github.com/supasate/yosh

From 2f7178d48fd0eb639ce6d19b7533df67d5433b38 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 9 Jul 2016 14:57:27 +0800
Subject: [PATCH 090/471] =?UTF-8?q?20160709-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ing With an Ipad Pro and a Raspberry Pi.md | 407 ++++++++++++++++++
 1 file changed, 407 insertions(+)
 create mode 100644 sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md

diff --git a/sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md b/sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md
new file mode 100644
index 0000000000..b7b3e8d5f1
--- /dev/null
+++ b/sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md
@@ -0,0 +1,407 @@
+Backup Photos While Traveling With an iPad Pro and a Raspberry Pi
+===================================================================
+
+![](http://www.movingelectrons.net/images/bkup_photos_main.jpg)
+>Backup Photos While Traveling - Gear.
+
+### Introduction
+
+I’ve been on a quest to find the ideal travel photo backup solution for a long time.
Relying on just tossing your SD cards in your camera bag after they are full during a trip is a risky move that leaves you too exposed: SD cards can be lost or stolen, data can get corrupted, or cards can get damaged in transit. Backing up to another medium - even if it’s just another SD card - and leaving that in a safe(r) place while traveling is the best practice. Ideally, backing up to a remote location would be the way to go, but that may not be practical depending on where you are traveling to and Internet availability in the region.
+
+My requirements for the ideal backup procedure are:
+
+1. Use an iPad to manage the process instead of a laptop. I like to travel light and since most of my trips are business related (i.e. non-photography related), I’d hate to bring my personal laptop along with my business laptop. My iPad, however, is always with me, so using it as a tool just makes sense.
+2. Use as few hardware devices as practically possible.
+3. The connection between devices should be secure. I’ll be using this setup in hotels and airports, so a closed and encrypted connection between devices is ideal.
+4. The whole process should be sturdy and reliable. I’ve tried other options using router/combo devices and [it didn’t end up well][1].
+
+### The Setup
+
+I came up with a setup that meets the above criteria and is also flexible enough to expand on in the future. It involves the use of the following gear:
+
+1. [iPad Pro 9.7][2] inches. It’s the most powerful, small and lightweight iOS device at the time of writing. The Apple Pencil is not really needed, but it’s part of my gear as I do some editing on the iPad Pro while on the road. All the heavy lifting will be done by the Raspberry Pi, so any other device capable of connecting through SSH would fit the bill.
+2. [Raspberry Pi 3][3] with Raspbian installed.
+3. [Micro SD card][4] for Raspberry Pi and a Raspberry Pi [box/case][5].
+5. [128 GB Pen Drive][6].
You can go bigger, but 128 GB is enough for my use case. You can also get a portable external hard drive like [this one][7], but the Raspberry Pi may not provide enough power through its USB port, which means you would have to get a [powered USB hub][8], along with the needed cables, defeating the purpose of having a lightweight and minimalistic setup.
+6. [SD card reader][9]
+7. [SD Cards][10]. I use several as I don’t wait for one to fill up before using a different one. That allows me to spread the photos I take on a single trip amongst several cards.
+
+The following diagram shows how these devices will interact with each other.
+
+![](http://www.movingelectrons.net/images/bkup_photos_diag.jpg)
+>Backup Photos While Traveling - Process Diagram.
+
+The Raspberry Pi will be configured to act as a secured hotspot. It will create its own WPA2-encrypted WiFi network to which the iPad Pro will connect. Although there are many online tutorials to create an Ad Hoc (i.e. computer-to-computer) connection with the Raspberry Pi, which is easier to set up, such a connection is not encrypted and it’s relatively easy for other devices near you to connect to it. Therefore, I decided to go with the WiFi option.
+
+The camera’s SD card will be connected to one of the Raspberry Pi’s USB ports through an SD card reader. Additionally, a high-capacity Pen Drive (128 GB in my case) will be permanently inserted in one of the USB ports on the Raspberry Pi. I picked the [Sandisk Ultra Fit][11] because of its tiny size. The main idea is to have the Raspberry Pi back up the photos from the SD Card to the Pen Drive with the help of a Python script. The backup process will be incremental, meaning that only changes (i.e. new photos taken) will be added to the backup folder each time the script runs, making the process really fast. This is a huge advantage if you take a lot of photos or if you shoot in RAW format.
The iPad will be used to trigger the Python script and to browse the SD Card and Pen Drive as needed. + +As an added benefit, if the Raspberry Pi is connected to the Internet through a wired connection (i.e. through the Ethernet port), it will be able to share the Internet connection with the devices connected to its WiFi network. + +### 1. Raspberry Pi Configuration + +This is the part where we roll up our sleeves and get busy, as we'll be using Raspbian's command-line interface (CLI). I'll try to be as descriptive as possible so it's easy to go through the process. + +#### Install and Configure Raspbian + +Connect a keyboard, mouse and an LCD monitor to the Raspberry Pi. Insert the Micro SD in the Raspberry Pi's slot and proceed to install Raspbian per the instructions on the [official site][12]. + +After the installation is done, go to the CLI (Terminal in Raspbian) and type: + +``` +sudo apt-get update +sudo apt-get upgrade +``` + +This will upgrade all software on the machine. I configured the Raspberry Pi to connect to the local network and changed the default password as a safety measure. + +By default SSH is enabled on Raspbian, so all sections below can be done from a remote machine. I also configured RSA authentication, but that's optional. More info about it [here][13]. + +This is a screenshot of the SSH connection to the Raspberry Pi from [iTerm][14] on a Mac: + +##### Creating an Encrypted (WPA2) Access Point + +The installation is based on [this][15] article, optimized for my use case. + +##### 1. Install Packages + +We need to type the following to install the required packages: + +``` +sudo apt-get install hostapd +sudo apt-get install dnsmasq +``` + +hostapd allows us to use the built-in WiFi as an access point. dnsmasq is a combined DHCP and DNS server that's easy to configure. + +##### 2. Edit dhcpcd.conf + +Connect to the Raspberry Pi through Ethernet.
Interface configuration on the Raspberry Pi is handled by dhcpcd, so first we tell it to ignore wlan0, as it will be configured with a static IP address. + +Open up the dhcpcd configuration file with `sudo nano /etc/dhcpcd.conf` and add the following line to the bottom of the file: + +``` +denyinterfaces wlan0 +``` + +Note: This must be above any interface lines that may have been added. + +##### 3. Edit interfaces + +Now we need to configure our static IP. To do this, open up the interface configuration file with `sudo nano /etc/network/interfaces` and edit the wlan0 section so that it looks like this: + +``` +allow-hotplug wlan0 +iface wlan0 inet static + address 192.168.1.1 + netmask 255.255.255.0 + network 192.168.1.0 + broadcast 192.168.1.255 +# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf +``` + +Also, the wlan1 section was edited to be: + +``` +#allow-hotplug wlan1 +#iface wlan1 inet manual +# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf +``` + +Important: Restart dhcpcd with `sudo service dhcpcd restart` and then reload the configuration for wlan0 with `sudo ifdown eth0; sudo ifup wlan0`. + +##### 4. Configure Hostapd + +Next, we need to configure hostapd.
Create a new configuration file with `sudo nano /etc/hostapd/hostapd.conf` with the following contents: + +``` +interface=wlan0 + +# Use the nl80211 driver with the brcmfmac driver +driver=nl80211 + +# This is the name of the network +ssid=YOUR_NETWORK_NAME_HERE + +# Use the 2.4GHz band +hw_mode=g + +# Use channel 6 +channel=6 + +# Enable 802.11n +ieee80211n=1 + +# Enable QoS Support +wmm_enabled=1 + +# Enable 40MHz channels with 20ns guard interval +ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40] + +# Accept all MAC addresses +macaddr_acl=0 + +# Use WPA authentication +auth_algs=1 + +# Require clients to know the network name +ignore_broadcast_ssid=0 + +# Use WPA2 +wpa=2 + +# Use a pre-shared key +wpa_key_mgmt=WPA-PSK + +# The network passphrase +wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE + +# Use AES, instead of TKIP +rsn_pairwise=CCMP +``` + +Now, we also need to tell hostapd where to look for the config file when it starts up on boot. Open up the default configuration file with `sudo nano /etc/default/hostapd` and find the line `#DAEMON_CONF=""` and replace it with `DAEMON_CONF="/etc/hostapd/hostapd.conf"`. + +##### 5. Configure Dnsmasq + +The shipped dnsmasq config file contains tons of information on how to use it, but we won't be using all the options. I'd recommend moving it (rather than deleting it) and creating a new one with: + +``` +sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig +sudo nano /etc/dnsmasq.conf +``` + +Paste the following into the new file: + +``` +interface=wlan0 # Use interface wlan0 +listen-address=192.168.1.1 # Explicitly specify the address to listen on +bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere +server=8.8.8.8 # Forward DNS requests to Google DNS +domain-needed # Don't forward short names +bogus-priv # Never forward addresses in the non-routed address spaces. +dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time +``` + +##### 6.
Set up IPV4 forwarding + +One of the last things that we need to do is to enable packet forwarding. To do this, open up the sysctl.conf file with `sudo nano /etc/sysctl.conf`, and remove the # from the beginning of the line containing `net.ipv4.ip_forward=1`. This will enable it on the next reboot. + +We also need to share our Raspberry Pi’s internet connection to our devices connected over WiFi by the configuring a NAT between our wlan0 interface and our eth0 interface. We can do this by writing a script with the following lines. + +``` +sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE +sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT +sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT +``` + +I named the script hotspot-boot.sh and made it executable with: + +``` +sudo chmod 755 hotspot-boot.sh +``` + +The script should be executed when the Raspberry Pi boots. There are many ways to accomplish this, and this is the way I went with: + +1. Put the file in `/home/pi/scripts`. +2. Edit the rc.local file by typing `sudo nano /etc/rc.local` and place the call to the script before the line that reads exit 0 (more information [here][16]). + +This is how the rc.local file looks like after editing it. + +``` +#!/bin/sh -e +# +# rc.local +# +# This script is executed at the end of each multiuser runlevel. +# Make sure that the script will "exit 0" on success or any other +# value on error. +# +# In order to enable or disable this script just change the execution +# bits. +# +# By default this script does nothing. + +# Print the IP address +_IP=$(hostname -I) || true +if [ "$_IP" ]; then + printf "My IP address is %s\n" "$_IP" +fi + +sudo /home/pi/scripts/hotspot-boot.sh & + +exit 0 + +``` + +#### Installing Samba and NTFS Compatibility. + +We also need to install the following packages to enable the Samba protocol and allow the File Browser App to see the connected devices to the Raspberry Pi as shared folders. 
Also, ntfs-3g provides NTFS compatibility in case we decide to connect a portable hard drive to the Raspberry Pi. + +``` +sudo apt-get install ntfs-3g +sudo apt-get install samba samba-common-bin +``` + +You can follow [this][17] article for details on how to configure Samba. + +Important Note: The referenced article also goes through the process of mounting external hard drives on the Raspberry Pi. We won't be doing that because, at the time of writing, the current version of Raspbian (Jessie) auto-mounts both the SD Card and the Pendrive to `/media/pi/` when the device is turned on. The article also goes over some redundancy features that we won't be using. + +### 2. Python Script + +Now that the Raspberry Pi has been configured, we need to work on the script that will actually back up/copy our photos. Note that this script just provides a certain degree of automation to the backup process. If you have a basic knowledge of the Linux/Raspbian CLI, you can just SSH into the Raspberry Pi and copy all the photos from one device to the other yourself by creating the needed folders and using either the cp or the rsync command. We'll be using the rsync method in the script, as it's very reliable and allows for incremental backups. + +This process relies on two files: the script itself and the configuration file `backup_photos.conf`. The latter just has a couple of lines indicating where the drives are mounted and the name of the destination drive's (Pendrive's) folder. This is what it looks like: + +``` +mount folder=/media/pi/ +destination folder=PDRIVE128GB +``` + +Important: Do not add any spaces on either side of the `=` symbol, or the script will break (definitely an opportunity for improvement). + +Below is the Python script, which I named `backup_photos.py` and placed in `/home/pi/scripts/`. I included comments in between the lines of code to make it easier to follow.
+ +``` +#!/usr/bin/python3 + +import os +import sys +from sh import rsync + +''' +The script copies an SD Card mounted on /media/pi/ to a folder with the same name +created in the destination drive. The destination drive's name is defined in +the .conf file. + + +Argument: label/name of the mounted SD Card. +''' + +CONFIG_FILE = '/home/pi/scripts/backup_photos.conf' +ORIGIN_DEV = sys.argv[1] + +def create_folder(path): + + print ('attempting to create destination folder: ',path) + if not os.path.exists(path): + try: + os.mkdir(path) + print ('Folder created.') + except: + print ('Folder could not be created. Stopping.') + return + else: + print ('Folder already in path. Using that instead.') + + + +confFile = open(CONFIG_FILE,'rU') +#IMPORTANT: rU Opens the file with Universal Newline Support, +#so \n and/or \r is recognized as a new line. + +confList = confFile.readlines() +confFile.close() + + +for line in confList: + line = line.strip('\n') + + try: + name , value = line.split('=') + + if name == 'mount folder': + mountFolder = value + elif name == 'destination folder': + destDevice = value + + + except ValueError: + print ('Incorrect line format. Passing.') + pass + + +destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV +create_folder(destFolder) + +print ('Copying files...') + +# Comment out to delete files that are not in the origin: +# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder) +rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder) + +print ('Done.') +``` + +### 3. iPad Pro Configuration + +Since all the heavy-lifting will be done on the Raspberry Pi and no files will be transferred through the iPad Pro, which was a huge disadvantage in [one of the workflows I tried before][18]; we just need to install [Prompt 2][19] on the iPad to access the Raspeberry Pi through SSH. Once connected, you can either run the Python script or copy the files manually. 
+ +![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_prompt.jpg) +>SSH Connection to Raspberry Pi From iPad Using Prompt. + +Since we installed Samba, we can access USB devices connected to the Raspberry Pi in a more graphical way. You can stream videos, copy and move files between devices. [File Browser][20] is perfect for that. + +### 4. Putting it All Together + +Let's suppose that `SD32GB-03` is the label of an SD card connected to one of the USB ports on the Raspberry Pi. Also, let's suppose that `PDRIVE128GB` is the label of the Pendrive, also connected to the device and defined in the `.conf` file as indicated above. If we wanted to back up the photos on the SD Card, we would need to go through the following steps: + +1. Turn on the Raspberry Pi so that the drives are mounted automatically. +2. Connect to the WiFi network generated by the Raspberry Pi. +3. Connect to the Raspberry Pi through SSH using the [Prompt][21] App. +4. Type the following once you are connected: + +``` +python3 backup_photos.py SD32GB-03 +``` + +The first backup may take some minutes, depending on how much of the card is used. That means you need to keep the connection to the Raspberry Pi alive from the iPad. You can get around this by using the [nohup][22] command before running the script. + +``` +nohup python3 backup_photos.py SD32GB-03 & +``` + +![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_finished.png) +>iTerm Screenshot After Running Python Script. + +### Further Customization + +I installed a VNC server to access Raspbian's graphical interface from another computer or the iPad through [Remoter App][23]. I'm looking into installing [BitTorrent Sync][24] for backing up photos to a remote location while on the road, which would be the ideal setup. I'll expand this post once I have a workable solution. + +Feel free to either include your comments/questions below or reach out to me. My contact info is at the footer of this page.
+ + +-------------------------------------------------------------------------------- + +via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html + +作者:[Editor][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html +[1]: http://bit.ly/1MVVtZi +[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20 +[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20 +[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20 +[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20 +[6]: http://amzn.to/293kPqX +[7]: http://amzn.to/290syFY +[8]: http://amzn.to/290syFY +[9]: http://amzn.to/290syFY +[10]: http://amzn.to/290syFY +[11]: http://amzn.to/293kPqX +[12]: https://www.raspberrypi.org/downloads/noobs/ +[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md +[14]: https://www.iterm2.com/ +[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/ +[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md +[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/ +[18]: http://bit.ly/1MVVtZi +[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH +[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH +[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH +[22]: https://en.m.wikipedia.org/wiki/Nohup +[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH +[24]: https://getsync.com/ From 23b2147154afc4511b65f6fdbc46850acf9b3dfe Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 9 Jul 2016 15:10:43 +0800 Subject: [PATCH 091/471] 
=?UTF-8?q?20160709-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...oying a replicated Python 3 Application.md | 281 ++++++++++++++++++ 1 file changed, 281 insertions(+) create mode 100644 sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md diff --git a/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md new file mode 100644 index 0000000000..85b592b306 --- /dev/null +++ b/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md @@ -0,0 +1,281 @@ +Tutorial: Getting started with Docker Swarm and deploying a replicated Python 3 Application +============== + +At [Dockercon][1] recently [Ben Firshman][2] did a very cool presentation on building serverless apps with Docker, you can [read about it here][3] (along with watching the video). A little while back [I wrote an article][4] on building a microservice with [AWS Lambda][5]. + +Today, I want to show you how to use [Docker Swarm][6] and then deploy a simple Python Falcon REST app. Although I won’t be using [dockerrun][7] or the serverless capabilities I think you might be surprised how easy it is to deploy (replicated) Python applications (actually any sort of application: Java, Go, etc.) with Docker Swarm. + + +Note: Some of the steps I’ll show you are taken from the [Swarm Tutorial][8]. I’ve modified some things and [added a Vagrant helper repo][9] to spin up a local testing environment for Docker Swarm to utilize. Keep in mind you must be using Docker Engine 1.12 or later. At the time of this article I am using RC2 of 1.12. Keep in mind this is all build on beta software at this time, things can change. 
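Since everything below depends on Engine 1.12 or newer, it's worth sanity-checking the version string that `docker version` prints before going further. A tiny sketch of that check (a hypothetical helper, not part of the tutorial itself):

```python
def engine_supports_swarm(version):
    """Return True when a Docker Engine version string such as '1.12.0-rc2'
    or '1.11.2' is at least 1.12, the first release with built-in swarm mode."""
    core = version.split('-')[0]  # drop pre-release tags like '-rc2'
    major, minor = (int(p) for p in core.split('.')[:2])
    return (major, minor) >= (1, 12)
```

Feed it the `Version:` line from `docker version` output; anything older than 1.12 simply doesn't have the `docker swarm` subcommand.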
+ +The first thing you will want to do is to ensure you have [Vagrant][10] properly installed and working if you want to run this locally. You can also follow the steps for the most part and spin up the Docker Swarm VMs on your preferred cloud provider. + +We are going to spin up three VMs: A single docker swarm manager and two workers. + +Security Note: The Vagrantfile uses a shell script located on Docker’s test server. This is a potential security issue to run scripts you don’t have control over so make sure to [review the script][11] prior to running. + +``` +$ git clone https://github.com/chadlung/vagrant-docker-swarm +$ cd vagrant-docker-swarm +$ vagrant plugin install vagrant-vbguest +$ vagrant up +``` + +The vagrant up command will take some time to complete. + +SSH into the manager1 VM: + +``` +$ vagrant ssh manager1 +``` + +Run the following command in the manager1 ssh terminal session: + +``` +$ sudo docker swarm init --listen-addr 192.168.99.100:2377 +``` + +There will be no workers registered yet: + +``` +$ sudo docker node ls +``` + +Let’s register the two workers. Use two new terminal sessions (leave the manager1 session running): + +``` +$ vagrant ssh worker1 +``` + +Run the following command in the worker1 ssh terminal session: + +``` +$ sudo docker swarm join 192.168.99.100:2377 +``` + +Repeat those commands used for worker1 but substitute worker2. + +From the manager1 terminal run: + +``` +$ docker node ls +``` + +You should see: + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-3.15.25-PM.png) + +On the manager1 terminal let’s deploy a simple service. + +``` +sudo docker service create --replicas 1 --name pinger alpine ping google.com +``` + +That will deploy a service that will ping google.com to one of the workers (or manager, the manager can also run services but this [can also be disabled][12] if you only want workers to run containers). 
To see which node got the service, run this: + +``` +$ sudo docker service tasks pinger +``` + +The result will be similar to this: + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.23.05-PM.png) + +So we know it's on worker1. Let's go to the terminal session for worker1 and attach to the running container: + +``` +$ sudo docker ps +``` + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.25.02-PM.png) + +You can see the container id is: ae56769b9d4d + +In my case I run: + +``` +$ sudo docker attach ae56769b9d4d +``` + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.26.49-PM.png) + +You can just CTRL-C to stop the pinging. + +Go back to the manager1 terminal session and remove the pinger service: + +``` +$ sudo docker service rm pinger +``` + +Now we will move on to the part of this article where we deploy a replicated Python app. To keep this article simple and easy to follow, this will be a bare-bones, trivial service. + +The first thing you will need to do is to either add your own image to [Docker Hub][13] or use [the one I already have][14].
It's a simple Python 3 Falcon REST app that has one endpoint: /hello with a param of value=SOME_STRING + +The Python code for the [chadlung/hello-app][15] image looks like this: + +``` +import json +from wsgiref import simple_server + +import falcon + + +class HelloResource(object): + def on_get(self, req, resp): + try: + value = req.get_param('value') + + resp.content_type = 'application/json' + resp.status = falcon.HTTP_200 + resp.body = json.dumps({'message': str(value)}) + except Exception as ex: + resp.status = falcon.HTTP_500 + resp.body = str(ex) + + +if __name__ == '__main__': + app = falcon.API() + hello_resource = HelloResource() + app.add_route('/hello', hello_resource) + httpd = simple_server.make_server('0.0.0.0', 8080, app) + httpd.serve_forever() +``` + +The Dockerfile is as simple as: + +``` +FROM python:3.4.4 + +RUN pip install -U pip +RUN pip install -U falcon + +EXPOSE 8080 + +COPY . /hello-app +WORKDIR /hello-app + +CMD ["python", "app.py"] +``` + +Again, this is meant to be very trivial. You can hit the endpoint by running the image locally if you want. This gives you back: + +``` +{"message": "Fred"} +``` + +Build and deploy the hello-app to Docker Hub (modify below to use your own Docker Hub repo or [use this one][15]): + +``` +$ sudo docker build . -t chadlung/hello-app:2 +$ sudo docker push chadlung/hello-app:2 +``` + +Now we want to deploy this to the Docker Swarm we set up earlier. Go into the manager1 terminal session and run: + +``` +$ sudo docker service create -p 8080:8080 --replicas 2 --name hello-app chadlung/hello-app:2 +$ sudo docker service inspect --pretty hello-app +$ sudo docker service tasks hello-app +``` + +Now we are ready to test it out. Using any of the node's IPs in the swarm, hit the /hello endpoint. In my case I will just cURL from the manager1 terminal: + +Remember, all IPs in the swarm will work even if the service is only running on one or more nodes.
+ +``` +$ curl -v -X GET "http://192.168.99.100:8080/hello?value=Chad" +$ curl -v -X GET "http://192.168.99.101:8080/hello?value=Test" +$ curl -v -X GET "http://192.168.99.102:8080/hello?value=Docker" +``` + +Results: + +``` +* Hostname was NOT found in DNS cache +* Trying 192.168.99.101... +* Connected to 192.168.99.101 (192.168.99.101) port 8080 (#0) +> GET /hello?value=Chad HTTP/1.1 +> User-Agent: curl/7.35.0 +> Host: 192.168.99.101:8080 +> Accept: */* +> +* HTTP 1.0, assume close after body +< HTTP/1.0 200 OK +< Date: Tue, 28 Jun 2016 23:52:55 GMT +< Server: WSGIServer/0.2 CPython/3.4.4 +< content-type: application/json +< content-length: 19 +< +{"message": "Chad"} +``` + +Calling the other node from my web browser: + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png) + +If you want to see all the services running try this from the manager1 node: + +``` +$ sudo docker service ls +``` +If you want to add some visualization to all this you can install [Docker Swarm Visualizer][16] (this is very handy for presentations, etc.). 
From the manager1 terminal session run the following: + +``` +$ sudo docker run -it -d -p 5000:5000 -e HOST=192.168.99.100 -e PORT=5000 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer +``` + +Simply open a browser now and point it at: + +Results (assuming two Docker Swarm services are running): + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-30-at-2.37.28-PM.png) + +To stop the hello-app (which was replicated to two nodes) run this from the manager1 terminal session: + +``` +$ sudo docker service rm hello-app +``` + +To stop the Visualizer run this from the manager1 terminal session: + +``` +$ sudo docker ps +``` + +Get the container id, in my case it was: f71fec0d3ce1 + +Run this from the manager1 terminal session: + +``` +$ sudo docker stop f71fec0d3ce1 +``` + +Good luck with Docker Swarm, and keep in mind this article was based on the release candidate of version 1.12. + +-------------------------------------------------------------------------------- + +via: http://www.giantflyingsaucer.com/blog/?p=5923 + +作者:[Chad Lung][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.giantflyingsaucer.com/blog/?author=2 +[1]: http://dockercon.com/ +[2]: https://blog.docker.com/author/bfirshman/ +[3]: https://blog.docker.com/author/bfirshman/ +[4]: http://www.giantflyingsaucer.com/blog/?p=5730 +[5]: https://aws.amazon.com/lambda/ +[6]: https://docs.docker.com/swarm/ +[7]: https://github.com/bfirsh/dockerrun +[8]: https://docs.docker.com/engine/swarm/swarm-tutorial/ +[9]: https://github.com/chadlung/vagrant-docker-swarm +[10]: https://www.vagrantup.com/ +[11]: https://test.docker.com/ +[12]: https://docs.docker.com/engine/reference/commandline/swarm_init/ +[13]: https://hub.docker.com/ +[14]: https://hub.docker.com/r/chadlung/hello-app/ +[15]:
https://hub.docker.com/r/chadlung/hello-app/ +[16]: https://github.com/ManoMarks/docker-swarm-visualizer From d62ef3e80349a01d665c8709f784ef845de297ea Mon Sep 17 00:00:00 2001 From: Mike Date: Sat, 9 Jul 2016 18:35:01 +0800 Subject: [PATCH 092/471] =?UTF-8?q?Finish=20Translating=20=E3=80=8Atech/20?= =?UTF-8?q?160620=20Detecting=20cats=20in=20images=20with=20OpenCV.md?= =?UTF-8?q?=E3=80=8B=20(#4162)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * sources/tech/20160609 How to record your terminal session on Linux.md * translated/tech/20160609 How to record your terminal session on Linux.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * translated/tech/20160620 Detecting cats in images with OpenCV.md * remove source --- ...20 Detecting cats in images with OpenCV.md | 231 ------------------ ...20 Detecting cats in images with OpenCV.md | 228 +++++++++++++++++ 2 files changed, 228 insertions(+), 231 deletions(-) delete mode 100644 sources/tech/20160620 Detecting cats in images with OpenCV.md create mode 100644 translated/tech/20160620 Detecting cats in images with OpenCV.md diff --git a/sources/tech/20160620 Detecting cats in images with OpenCV.md b/sources/tech/20160620 Detecting cats in images with OpenCV.md deleted file mode 100644 index 37a3ce7fc2..0000000000 --- a/sources/tech/20160620 Detecting cats in images with OpenCV.md +++ /dev/null @@ -1,231 +0,0 @@ -Detecting cats in images with OpenCV -======================================= - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg) - -Did you know that OpenCV can detect cat faces in images…right out-of-the-box with no extras? - -I didn’t either. 
+ +But after [Kendrick Tan broke the story][1], I had to check it out for myself…and do a little investigative work to see how this cat detector seemed to sneak its way into the OpenCV repository without me noticing (much like a cat sliding into an empty cereal box, just waiting to be discovered). + +In the remainder of this blog post, I'll demonstrate how to use OpenCV's cat detector to detect cat faces in images. This same technique can be applied to video streams as well. + +>Looking for the source code to this post? [Jump right to the downloads section][2]. + + +### Detecting cats in images with OpenCV + +If you take a look at the [OpenCV repository][3], specifically within the [haarcascades directory][4] (where OpenCV stores all its pre-trained Haar classifiers to detect various objects, body parts, etc.), you'll notice two files: + +- haarcascade_frontalcatface.xml +- haarcascade_frontalcatface_extended.xml + +Both of these Haar cascades can be used to detect "cat faces" in images. In fact, I used these very same cascades to generate the example image at the top of this blog post. + +Doing a little investigative work, I found that the cascades were trained and contributed to the OpenCV repository by the legendary [Joseph Howse][5], who's authored a good many tutorials, books, and talks on computer vision. + +In the remainder of this blog post, I'll show you how to utilize Howse's Haar cascades to detect cats in images. + +### Cat detection code + +Let's get started detecting cats in images with OpenCV.
Open up a new file, name it `cat_detector.py`, and insert the following code: + +``` +# import the necessary packages +import argparse +import cv2 + +# construct the argument parse and parse the arguments +ap = argparse.ArgumentParser() +ap.add_argument("-i", "--image", required=True, + help="path to the input image") +ap.add_argument("-c", "--cascade", + default="haarcascade_frontalcatface.xml", + help="path to cat detector haar cascade") +args = vars(ap.parse_args()) +``` + +Lines 2 and 3 import our necessary Python packages while Lines 6-12 parse our command line arguments. We only require a single argument here, the input `--image` that we want to detect cat faces in using OpenCV. + +We can also (optionally) supply a path to our Haar cascade via the `--cascade` switch. We'll default this path to `haarcascade_frontalcatface.xml` and assume you have the `haarcascade_frontalcatface.xml` file in the same directory as your `cat_detector.py` script. + +Note: I've conveniently included the code, cat detector Haar cascade, and example images used in this tutorial in the "Downloads" section of this blog post. If you're new to working with Python + OpenCV (or Haar cascades), I would suggest downloading the provided .zip file to make it easier to follow along. + +Next, let's detect the cats in our input image: + +``` +# load the input image and convert it to grayscale +image = cv2.imread(args["image"]) +gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) + +# load the cat detector Haar cascade, then detect cat faces +# in the input image +detector = cv2.CascadeClassifier(args["cascade"]) +rects = detector.detectMultiScale(gray, scaleFactor=1.3, + minNeighbors=10, minSize=(75, 75)) +``` + +On Lines 15 and 16 we load our input image from disk and convert it to grayscale (a normal pre-processing step before passing the image to a Haar cascade classifier, although not strictly required).
+ +Line 20 loads our Haar cascade from disk (in this case, the cat detector) and instantiates the cv2.CascadeClassifier object. + +Detecting cat faces in images with OpenCV is accomplished on Lines 21 and 22 by calling the detectMultiScale method of the detector object. We pass four parameters to the detectMultiScale method, including: + +1. Our image, `gray`, that we want to detect cat faces in. +2. A scaleFactor of our [image pyramid][6] used when detecting cat faces. A larger scale factor will increase the speed of the detector, but could harm our true-positive detection accuracy. Conversely, a smaller scale will slow down the detection process, but increase true-positive detections. However, this smaller scale can also increase the false-positive detection rate as well. See the "A note on Haar cascades" section of this blog post for more information. +3. The minNeighbors parameter controls the minimum number of detected bounding boxes in a given area for the region to be considered a "cat face". This parameter is very helpful in pruning false-positive detections. +4. Finally, the minSize parameter is pretty self-explanatory. This value ensures that each detected bounding box is at least width x height pixels (in this case, 75 x 75). + +The detectMultiScale function returns `rects`, a list of 4-tuples. These tuples contain the (x, y)-coordinates and the width and height of each detected cat face. + +Finally, let's draw a rectangle surrounding each cat face in the image: + +``` +# loop over the cat faces and draw a rectangle surrounding each +for (i, (x, y, w, h)) in enumerate(rects): + cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2) + cv2.putText(image, "Cat #{}".format(i + 1), (x, y - 10), + cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 255), 2) + +# show the detected cat faces +cv2.imshow("Cat Faces", image) +cv2.waitKey(0) +``` + +Given our bounding boxes (i.e., `rects`), we loop over each of them individually on Line 25.
-
-We then draw a rectangle surrounding each cat face on Line 26, while Lines 27 and 28 display an integer, counting the number of cats in the image.
-
-Finally, Lines 31 and 32 display the output image to our screen.
-
-### Cat detection results
-
-To test our OpenCV cat detector, be sure to download the source code to this tutorial using the “Downloads” section at the bottom of this post.
-
-Then, after you have unzipped the archive, you should have the following three files/directories:
-
-1. cat_detector.py: Our Python + OpenCV script used to detect cats in images.
-2. haarcascade_frontalcatface.xml: The cat detector Haar cascade.
-3. images: A directory of testing images that we’re going to apply the cat detector cascade to.
-
-From there, execute the following command:
-
-```
-$ python cat_detector.py --image images/cat_01.jpg
-```
-
-![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_01.jpg)
->Figure 1: Detecting a cat face in an image, even with parts of the cat occluded
-
-Notice that we have been able to detect the cat face in the image, even though the rest of its body is obscured.
-
-Let’s try another image:
-
-```
-$ python cat_detector.py --image images/cat_02.jpg
-```
-
-![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_02.jpg)
->Figure 2: A second example of detecting a cat in an image with OpenCV, this time the cat face is slightly different
-
-This cat’s face is clearly different from the other one, as it’s in the middle of a “meow”. In either case, the cat detector cascade is able to correctly find the cat face in the image.
-
-The same is true for this image as well:
-
-```
-$ python cat_detector.py --image images/cat_03.jpg
-```
-
-![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_03.jpg)
->Figure 3: Cat detection with OpenCV and Python
-
-Our final example demonstrates detecting multiple cats in an image using OpenCV and Python:
-
-```
-$ python cat_detector.py --image images/cat_04.jpg
-```
-
-![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
->Figure 4: Detecting multiple cats in the same image with OpenCV
-
-Note that the Haar cascade can return bounding boxes in an order that you may not like. In this case, the middle cat is actually labeled as the third cat. You can resolve this “issue” by sorting the bounding boxes according to their (x, y)-coordinates for a consistent ordering.
-
-#### A quick note on accuracy
-
-It’s important to note that in the comments section of the .xml files, Joseph Howse details that the cat detector Haar cascades can report cat faces where there are actually human faces.
-
-In this case, he recommends performing both face detection and cat detection, then discarding any cat bounding boxes that overlap with the face bounding boxes.
-
-#### A note on Haar cascades
-
-First published by Paul Viola and Michael Jones in 2001, [Rapid Object Detection using a Boosted Cascade of Simple Features][7] has become one of the most cited papers in computer vision.
-
-This algorithm is capable of detecting objects in images, regardless of their location and scale. And perhaps most intriguing, the detector can run in real-time on modern hardware.
-
-In their paper, Viola and Jones focused on training a face detector; however, the framework can also be used to train detectors for arbitrary “objects”, such as cars, bananas, road signs, etc.
-
-#### The problem?
-
-The biggest problem with Haar cascades is getting the detectMultiScale parameters right, specifically scaleFactor and minNeighbors. You can easily run into situations where you need to tune both of these parameters on an image-by-image basis, which is far from ideal when utilizing an object detector.
-
-The scaleFactor variable controls your [image pyramid][8] used to detect objects at various scales of an image. If your scaleFactor is too large, then you’ll only evaluate a few layers of the image pyramid, potentially leading to you missing objects at scales that fall in between the pyramid layers.
-
-On the other hand, if you set scaleFactor too low, then you evaluate many pyramid layers. This will help you detect more objects in your image, but it (1) makes the detection process slower and (2) substantially increases the false-positive detection rate, something that Haar cascades are known for.
-
-To remedy this, we often apply [Histogram of Oriented Gradients + Linear SVM detection][9] instead.
-
-The HOG + Linear SVM framework parameters are normally much easier to tune — and best of all, HOG + Linear SVM enjoys a much smaller false-positive detection rate. The only downside is that it’s harder to get HOG + Linear SVM to run in real-time.
-
-### Interested in learning more about object detection?
-
-![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/custom_object_detector_example.jpg)
->Figure 5: Learn how to build custom object detectors inside the PyImageSearch Gurus course.
-
-If you’re interested in learning how to train your own custom object detectors, be sure to take a look at the PyImageSearch Gurus course.
-
-Inside the course, I have 15 lessons covering 168 pages of tutorials dedicated to teaching you how to build custom object detectors from scratch. You’ll discover how to detect road signs, faces, cars (and nearly any other object) in images by applying the HOG + Linear SVM framework for object detection.
-
-To learn more about the PyImageSearch Gurus course (and grab 10 FREE sample lessons), just click the button below:
-
-### Summary
-
-In this blog post, we learned how to detect cats in images using the default Haar cascades shipped with OpenCV. These Haar cascades were trained and contributed to the OpenCV project by [Joseph Howse][5], and were originally brought to my attention [in this post][10] by Kendrick Tan.
-
-While Haar cascades are quite useful, we often use HOG + Linear SVM instead, as it’s a bit easier to tune the detector parameters, and more importantly, we can enjoy a much lower false-positive detection rate.
-
-I detail how to build custom HOG + Linear SVM object detectors to recognize various objects in images, including cars, road signs, and much more [inside the PyImageSearch Gurus course][11].
-
-Anyway, I hope you enjoyed this blog post!
-
-Before you go, be sure to sign up for the PyImageSearch Newsletter using the form below to be notified when new blog posts are published.
-
--------------------------------------------------------------------------------
-
-via: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/
-
-作者:[Adrian Rosebrock][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.pyimagesearch.com/author/adrian/
-[1]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
-[2]: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/#
-[3]: https://github.com/Itseez/opencv
-[4]: https://github.com/Itseez/opencv/tree/master/data/haarcascades
-[5]: http://nummist.com/
-[6]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
-[7]: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
-[8]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
-[9]: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/
-[10]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
-[11]: https://www.pyimagesearch.com/pyimagesearch-gurus/
-
-
-
diff --git a/translated/tech/20160620 Detecting cats in images with OpenCV.md b/translated/tech/20160620 Detecting cats in images with OpenCV.md
new file mode 100644
index 0000000000..9b1e19fef7
--- /dev/null
+++ b/translated/tech/20160620 Detecting cats in images with OpenCV.md
@@ -0,0 +1,228 @@
+使用 OpenCV 识别图片中的猫
+=======================================
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
+
+你知道 OpenCV 可以识别图片中的猫脸吗?而且是开箱即用,不需要额外的附加组件。
+
+我之前也不知道。
+
+但是在看完 [Kendrick Tan 披露这件事的文章][1]之后,我需要亲自体验一下……去看看 OpenCV 是如何在我没有察觉到的情况下,把这个功能加入它的代码库的。
+
+在这篇博客的剩余部分,我将会展示如何使用 OpenCV 的猫检测器在图片中检测猫脸。同样的,该技术也可以用在视频流中。
+
+> 想找这篇博客的源码?[请点这][2]。
+
+
+### 使用 OpenCV 在图片中检测猫
+
+如果你看一眼 [OpenCV 的代码库][3],尤其是 [haarcascades 目录][4](OpenCV
用来存放预训练的、用于检测各种目标的 Haar 级联分类器的目录),你将会注意到这两个文件:
+
+- haarcascade_frontalcatface.xml
+- haarcascade_frontalcatface_extended.xml
+
+这两个 Haar 级联文件都将被用来在图片中检测猫脸。实际上,我使用了相同的级联模型来生成这篇博客顶端的图片。
+
+在做了一些调查工作之后,我发现训练这些级联模型并将其贡献给 OpenCV 仓库的是鼎鼎大名的 [Joseph Howse][5],他在计算机视觉领域有着很高的声望。
+
+在博客的剩余部分,我将会展示给你如何使用 Howse 的 Haar 级联模型来检测猫。
+
+让我们开工。新建一个叫 cat_detector.py 的文件,并且输入如下的代码:
+
+### 使用 OpenCV 检测猫
+
+```
+# import the necessary packages
+import argparse
+import cv2
+
+# construct the argument parse and parse the arguments
+ap = argparse.ArgumentParser()
+ap.add_argument("-i", "--image", required=True,
+	help="path to the input image")
+ap.add_argument("-c", "--cascade",
+	default="haarcascade_frontalcatface.xml",
+	help="path to cat detector haar cascade")
+args = vars(ap.parse_args())
+```
+
+第 2 和第 3 行导入了必要的 Python 包,第 6-12 行则解析我们的命令行参数。这里只需要一个必选参数 `--image`,即我们要用 OpenCV 检测猫脸的输入图片。
+
+我们也可以(可选地)通过 `--cascade` 参数指定 Haar 级联文件的路径。该路径默认为 `haarcascade_frontalcatface.xml`,同时需要保证这个文件和你的 `cat_detector.py` 在同一目录下。
+
+注意:我已经把本教程用到的代码、猫检测器的 Haar 级联文件以及样本图片打包好了,你可以在博客的 'Downloads' 部分下载到。如果你是刚刚接触 Python + OpenCV(或者 Haar 级联模型),我会建议你下载这个 zip 压缩包,方便你跟着操作。
+
+接下来,就是检测猫的时刻了:
+
+```
+# load the input image and convert it to grayscale
+image = cv2.imread(args["image"])
+gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
+
+# load the cat detector Haar cascade, then detect cat faces
+# in the input image
+detector = cv2.CascadeClassifier(args["cascade"])
+rects = detector.detectMultiScale(gray, scaleFactor=1.3,
+	minNeighbors=10, minSize=(75, 75))
+```
+
+在第 15、16 行,我们从硬盘上读取了图片,并且将其灰度化(这是把图片交给 Haar 级联分类器之前常用的预处理步骤,虽然并不是严格必须的)。
+
+第 20 行,我们从硬盘加载 Haar 级联文件,即猫检测器,并实例化 cv2.CascadeClassifier 对象。
+
+第 21、22 行通过调用 detector 对象的 detectMultiScale 方法完成了用 OpenCV 检测猫脸的工作。我们向该方法传入四个参数,包括:
+
+1. 灰度图 gray,即我们要在其中检测猫脸的图片。
+2. scaleFactor 参数,检测猫脸时所用的[图像金字塔][6]的缩放因子。较大的缩放因子会加快检测的速度,但可能降低真正例的检出率;反之,较小的缩放因子会减慢检测过程,但能提高真正例的检出率。不过,较小的缩放因子同时也会提高误检率。更多信息可以查看这篇博客的 'Haar 级联模型注意事项' 部分。
+3. minNeighbors 参数控制了一个区域要被判定为"猫脸"所需的最少检测框数量。这个参数可以很好地排除误检结果。
+4.
最后,minSize 参数的用途一目了然:它确保每个检测出来的矩形框至少有 width x height 像素大小(在本例中是 75 x 75)。
+
+detectMultiScale 函数返回 rects,这是一个四元组的列表。每个元组包含了检测到的猫脸的 (x, y) 坐标,以及宽度和高度。
+
+最后,让我们在图片上画出这些矩形来标识猫脸:
+
+```
+# loop over the cat faces and draw a rectangle surrounding each
+for (i, (x, y, w, h)) in enumerate(rects):
+	cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
+	cv2.putText(image, "Cat #{}".format(i + 1), (x, y - 10),
+		cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 255), 2)
+
+# show the detected cat faces
+cv2.imshow("Cat Faces", image)
+cv2.waitKey(0)
+```
+
+对于这些矩形框(即 rects),我们在第 25 行依次遍历它们。
+
+在第 26 行,我们在每张猫脸的周围画上一个矩形;第 27、28 行则标上一个序号,表示这是图片中的第几只猫。
+
+最后,第 31、32 行在屏幕上展示输出的图片。
+
+### 猫检测结果
+
+为了测试我们的 OpenCV 猫检测器,请在文章的最后下载本教程的源码。
+
+解压缩之后,你将会得到如下的三个文件/目录:
+
+1. cat_detector.py:我们的主程序
+2. haarcascade_frontalcatface.xml:猫检测器的 Haar 级联文件
+3. images:我们将会使用的检测图片目录。
+
+到这一步,执行以下的命令:
+
+```
+$ python cat_detector.py --image images/cat_01.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_01.jpg)
+>1. 在图片中检测出了猫脸,即使猫的身体被部分遮挡。
+
+注意,我们已经可以检测到猫脸了,即使它的其余部分被遮挡了。
+
+试下另外的一张图片:
+
+```
+$ python cat_detector.py --image images/cat_02.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_02.jpg)
+>2. 第二个检测猫脸的例子,这次的猫脸略有不同。
+
+这次的猫脸和第一次的明显不同,因为它正在"喵喵"叫。无论哪种情况,猫检测器级联模型都能正确地找到图片中的猫脸。
+
+这张图片的结果也是正确的:
+
+```
+$ python cat_detector.py --image images/cat_03.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_03.jpg)
+>3. 使用 OpenCV 和 Python 检测猫脸
+
+我们最后的一个样例就是在一张图中检测多张猫脸:
+
+```
+$ python cat_detector.py --image images/cat_04.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
+>4.
在同一张图片中使用 OpenCV 检测多只猫
+
+注意,Haar 级联模型返回的检测框并不一定是你想要的顺序。在这个例子中,中间的那只猫实际上被标记成了第三只。你可以通过按照检测框的 (x, y) 坐标对它们排序来获得一致的顺序。
+
+#### 关于精度的简要说明
+
+需要注意的是,Joseph Howse 在 .xml 文件的注释部分提到:猫脸检测器有可能会把人脸误报成猫脸。
+
+这种情况下,他推荐同时执行人脸检测和猫脸检测,然后将与人脸检测框重叠的猫脸检测框剔除掉。
+
+#### Haar 级联模型注意事项
+
+这个方法最早出现在 Paul Viola 和 Michael Jones 2001 年发表的 [Rapid Object Detection using a Boosted Cascade of Simple Features][7] 论文中,这篇开创性的工作已经成为了计算机视觉领域被引用最多的论文之一。
+
+这个算法能够检测图片中的对象,而无论其位置和大小。并且最吸引人的是,这个检测器能在现代硬件上实时运行。
+
+在他们的论文中,Viola 和 Jones 专注于训练人脸检测器;但是,这个框架也能用来训练检测各类事物的检测器,如汽车、香蕉、路标等等。
+
+#### 有问题?
+
+Haar 级联模型最大的问题,就是如何为 detectMultiScale 方法确定合适的参数,特别是 scaleFactor 和 minNeighbors。你很容易陷入一张一张图片调参数的坑,这对于使用对象检测器来说是远远称不上理想的。
+
+scaleFactor 变量控制了用于在图片的不同尺度上检测对象的[图像金字塔][8]。如果 scaleFactor 参数过大,你就只会计算图像金字塔中很少的几层,可能会漏掉尺度恰好落在金字塔层之间的目标。
+
+反过来,如果把 scaleFactor 设置得过低,你就会计算非常多的金字塔图层。这虽然可以帮助你检测到图片中更多的对象,但会(1)降低检测速度,并且(2)明显提高误检率,而这正是 Haar 级联模型出了名的缺点。
+
+为了弥补这一点,我们通常改用[方向梯度直方图(HOG)+ 线性 SVM 检测][9]。
+
+HOG + 线性 SVM 框架的参数通常更容易调优,而且最棒的是,它的误检率也低得多;唯一的缺点是更难做到实时运行。
+
+### 对对象检测感兴趣,并且希望了解更多?
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/custom_object_detector_example.jpg)
+>5.
在 PyImageSearch Gurus 课程中学习如何构建自定义的对象检测器。
+
+如果你想学习如何训练自己的自定义对象检测器,请务必去了解一下 PyImageSearch Gurus 课程。
+
+在这个课程中,我提供了 15 节课、超过 168 页的教程,来教你如何从零开始构建自定义的对象检测器。你会掌握如何应用 HOG + 线性 SVM 框架,在图片中检测路标、人脸、汽车(以及几乎任何其他对象)。
+
+### 总结
+
+在这篇博客里,我们学习了如何使用 OpenCV 默认自带的 Haar 级联模型来检测图片中的猫脸。这些 Haar 级联模型是由 [Joseph Howse][5] 训练并贡献给 OpenCV 项目的,我最初是在 Kendrick Tan 的[这篇文章][10]中注意到它们的。
+
+尽管 Haar 级联模型相当有用,但是我们也经常用 HOG + 线性 SVM 来替代,因为后者的检测器参数相对而言更容易调优,更重要的是,它的误检率也低得多。
+
+我也会在 [PyImageSearch Gurus 课程][11]中详细讲述如何构建自定义的 HOG + 线性 SVM 对象检测器,来识别包括汽车、路标在内的各种事物。
+
+不管怎样,我希望你喜欢这篇博客!
+
+在你离开之前,请使用下面的表单注册 PyImageSearch Newsletter,这样当有新博文发布时你就能第一时间收到通知。
+
+--------------------------------------------------------------------------------
+
+via: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/
+
+作者:[Adrian Rosebrock][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: http://www.pyimagesearch.com/author/adrian/
+[1]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
+[2]: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/#
+[3]: https://github.com/Itseez/opencv
+[4]: https://github.com/Itseez/opencv/tree/master/data/haarcascades
+[5]: http://nummist.com/
+[6]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
+[7]: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
+[8]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
+[9]: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/
+[10]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
+[11]: https://www.pyimagesearch.com/pyimagesearch-gurus/
+
+
+

From 781d751e3718a4ffa4a1d3c341f8b520021959df Mon Sep 17 00:00:00 2001
From: cposture 
Date: Sat, 9 Jul 2016 21:15:36 +0800
Subject: [PATCH 093/471] Translated by cposture

---
 .../tech/20160512 Bitmap in Linux Kernel.md   | 398
----------------- .../tech/20160512 Bitmap in Linux Kernel.md | 405 ++++++++++++++++++ 2 files changed, 405 insertions(+), 398 deletions(-) delete mode 100644 sources/tech/20160512 Bitmap in Linux Kernel.md create mode 100644 translated/tech/20160512 Bitmap in Linux Kernel.md diff --git a/sources/tech/20160512 Bitmap in Linux Kernel.md b/sources/tech/20160512 Bitmap in Linux Kernel.md deleted file mode 100644 index adffc9d049..0000000000 --- a/sources/tech/20160512 Bitmap in Linux Kernel.md +++ /dev/null @@ -1,398 +0,0 @@ -[Translating by cposture 2016.06.29] -Data Structures in the Linux Kernel -================================================================================ - -Bit arrays and bit operations in the Linux kernel --------------------------------------------------------------------------------- - -Besides different [linked](https://en.wikipedia.org/wiki/Linked_data_structure) and [tree](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) based data structures, the Linux kernel provides [API](https://en.wikipedia.org/wiki/Application_programming_interface) for [bit arrays](https://en.wikipedia.org/wiki/Bit_array) or `bitmap`. Bit arrays are heavily used in the Linux kernel and following source code files contain common `API` for work with such structures: - -* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) -* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) - -Besides these two files, there is also architecture-specific header file which provides optimized bit operations for certain architecture. We consider [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture, so in our case it will be: - -* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) - -header file. As I just wrote above, the `bitmap` is heavily used in the Linux kernel. 
For example a `bit array` is used to store the set of online/offline processors for systems which support [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) cpu (more about this you can read in the [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) part), a `bit array` stores the set of allocated [irqs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) during initialization of the Linux kernel, etc.
-
-So, the main goal of this part is to see how `bit arrays` are implemented in the Linux kernel. Let's start.
-
-Declaration of bit array
-================================================================================
-
-Before we look at the `API` for bitmap manipulation, we must know how to declare a bitmap in the Linux kernel. There are two common methods to declare your own bit array. The first, simple way is to declare an array of `unsigned long`. For example:
-
-```C
-unsigned long my_bitmap[8]
-```
-
-The second way is to use the `DECLARE_BITMAP` macro which is defined in the [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) header file:
-
-```C
-#define DECLARE_BITMAP(name,bits) \
-	unsigned long name[BITS_TO_LONGS(bits)]
-```
-
-We can see that the `DECLARE_BITMAP` macro takes two parameters:
-
-* `name` - name of the bitmap;
-* `bits` - amount of bits in the bitmap;
-
-and just expands to the definition of an `unsigned long` array with `BITS_TO_LONGS(bits)` elements, where the `BITS_TO_LONGS` macro converts a given number of bits to a number of `longs`, or in other words calculates how many `8`-byte elements are needed to store `bits` bits:
-
-```C
-#define BITS_PER_BYTE 8
-#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
-#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
-```
-
-So, for example `DECLARE_BITMAP(my_bitmap, 64)` will produce:
-
-```python
->>> (((64) + (64) - 1) / (64))
-1
-```
-
-and:
-
-```C
-unsigned long my_bitmap[1];
-```
-
-After we are able
to declare a bit array, we can start to use it.
-
-Architecture-specific bit operations
-================================================================================
-
-We already saw above a couple of source code and header files which provide an [API](https://en.wikipedia.org/wiki/Application_programming_interface) for the manipulation of bit arrays. The most important and widely used API of bit arrays is architecture-specific and located, as we already know, in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file.
-
-First of all let's look at the two most important functions:
-
-* `set_bit`;
-* `clear_bit`.
-
-I think that there is no need to explain what these functions do; that should already be clear from their names. Let's look at their implementations. If you look into the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, you will note that each of these functions is represented by two variants: [atomic](https://en.wikipedia.org/wiki/Linearizability) and not. Before we dive into the implementations of these functions, we must first know a little about `atomic` operations.
-
-In simple words, atomic operations guarantee that two or more operations will not be performed on the same data concurrently. The `x86` architecture provides a set of atomic instructions, for example the [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html) and [cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) instructions. Besides atomic instructions, some non-atomic instructions can be made atomic with the help of the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction. That is enough about atomic operations for now, so we can begin to consider the implementation of the `set_bit` and `clear_bit` functions.
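Before diving into the assembler, it may help to model what these helpers do in plain, non-atomic terms. The sketch below is ours, not kernel code — it just mirrors the word-and-offset arithmetic on a Python list standing in for an array of `unsigned long`:

```python
BITS_PER_LONG = 64

def set_bit(nr, bitmap):
    """Set bit nr in bitmap, a list of BITS_PER_LONG-bit words."""
    bitmap[nr // BITS_PER_LONG] |= 1 << (nr % BITS_PER_LONG)

def clear_bit(nr, bitmap):
    """Clear bit nr in bitmap."""
    bitmap[nr // BITS_PER_LONG] &= ~(1 << (nr % BITS_PER_LONG))

bitmap = [0, 0]          # a 128-bit bit array
set_bit(0, bitmap)
set_bit(65, bitmap)      # lands in the second word
clear_bit(0, bitmap)
print(bitmap)            # [0, 2]
```

Note that this models the `__set_bit`/`__clear_bit` (non-atomic) behavior; atomicity under concurrent access is exactly what the kernel's `lock`-prefixed variants add on top of this arithmetic.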
-
-First of all, let's consider the `non-atomic` variants of these functions. The names of the non-atomic `set_bit` and `clear_bit` start with a double underscore. As we already know, all of these functions are defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file and the first function is `__set_bit`:
-
-```C
-static inline void __set_bit(long nr, volatile unsigned long *addr)
-{
-	asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory");
-}
-```
-
-As we can see it takes two arguments:
-
-* `nr` - number of the bit in a bit array.
-* `addr` - address of the bit array where we need to set the bit.
-
-Note that the `addr` parameter is defined with the `volatile` keyword, which tells the compiler that the value at the given address may be changed. The implementation of `__set_bit` is pretty easy. As we can see, it just contains one line of [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) code. In our case we are using the [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) instruction, which selects the bit specified with the first operand (`nr` in our case) from the bit array, stores the value of the selected bit in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) flag of the flags register and sets this bit.
-
-Note that we can see usage of `nr`, but `addr` does not appear here. You already might guess that the secret is in `ADDR`. `ADDR` is a macro which is defined in the same header file and expands to a string which contains the value of the given address and the `+m` constraint:
-
-```C
-#define ADDR BITOP_ADDR(addr)
-#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
-```
-
-Besides the `+m`, we can see other constraints in the `__set_bit` function.
Let's look at them and try to understand what they mean:
-
-* `+m` - represents a memory operand, where `+` tells that the given operand will be an input and output operand;
-* `I` - represents an integer constant;
-* `r` - represents a register operand
-
-Besides these constraints, we can also see the `memory` keyword, which tells the compiler that this code will change values in memory. That's all. Now let's look at the same function, but its `atomic` variant. It looks more complex than its `non-atomic` variant:
-
-```C
-static __always_inline void
-set_bit(long nr, volatile unsigned long *addr)
-{
-	if (IS_IMMEDIATE(nr)) {
-		asm volatile(LOCK_PREFIX "orb %1,%0"
-			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
-			: "memory");
-	} else {
-		asm volatile(LOCK_PREFIX "bts %1,%0"
-			: BITOP_ADDR(addr) : "Ir" (nr) : "memory");
-	}
-}
-```
-
-First of all note that this function takes the same set of parameters as `__set_bit`, but is additionally marked with the `__always_inline` attribute. `__always_inline` is a macro defined in [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) and just expands to the `always_inline` attribute:
-
-```C
-#define __always_inline inline __attribute__((always_inline))
-```
-
-which means that this function will always be inlined to reduce the size of the Linux kernel image. Now let's try to understand the implementation of the `set_bit` function. First of all we check the given bit number at the beginning of the `set_bit` function.
The `IS_IMMEDIATE` macro is defined in the same [header](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) file and expands to a call of the builtin [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) function:
-
-```C
-#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr))
-```
-
-The `__builtin_constant_p` builtin function returns `1` if the given parameter is known to be constant at compile-time and returns `0` otherwise. We do not need to use the slow `bts` instruction to set a bit if the given bit number is a compile-time constant. We can just apply a [bitwise or](https://en.wikipedia.org/wiki/Bitwise_operation#OR) to the byte at the given address which contains the given bit, with a mask where the bit we want to set is `1` and the other bits are zero. Otherwise, if the given bit number is not known to be constant at compile-time, we do the same as we did in the `__set_bit` function. The `CONST_MASK_ADDR` macro:
-
-```C
-#define CONST_MASK_ADDR(nr, addr)	BITOP_ADDR((void *)(addr) + ((nr)>>3))
-```
-
-expands to the given address with an offset to the byte which contains the given bit. For example, say we have the address `0x1000` and the bit number `0x9`. So, as `0x9` is `one byte + one bit`, our address will be `addr + 1`:
-
-```python
->>> hex(0x1000 + (0x9 >> 3))
-'0x1001'
-```
-
-The `CONST_MASK` macro represents the given bit number as a byte where the bit corresponding to that number is `1` and the other bits are `0`:
-
-```C
-#define CONST_MASK(nr) (1 << ((nr) & 7))
-```
-
-```python
->>> bin(1 << (0x9 & 7))
-'0b10'
-```
-
-In the end we just apply bitwise `or` to these values. So, for example if our address contains the value `0x4097` and we need to set the `0x9` bit:
-
-```python
->>> bin(0x4097)
-'0b100000010010111'
->>> bin((0x4097 >> 0x9) | (1 << (0x9 & 7)))
-'0b100010'
-```
-
-the `ninth` bit will be set.
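Putting the two macros together, the byte-wise fast path can be modeled outside the kernel. The sketch below is ours, operating on a `bytearray` rather than on kernel memory:

```python
def set_bit_bytewise(nr, mem):
    """Model of the compile-time-constant fast path: OR a one-byte
    mask into the single byte of mem that holds bit nr."""
    mem[nr >> 3] |= 1 << (nr & 7)   # CONST_MASK_ADDR offset, CONST_MASK value

mem = bytearray(2)          # two bytes of "memory"
set_bit_bytewise(0x9, mem)
print(mem[1])               # bit 9 lives in byte 1; the mask is 0b10, so 2
```

The point of the fast path is visible here: only one byte is touched, so the whole operation is a single `orb` against memory rather than a full `bts` on a word.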
-
-Note that all of these operations are marked with `LOCK_PREFIX`, which expands to the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction which guarantees the atomicity of the operation.
-
-As we already know, besides the `set_bit` and `__set_bit` operations, the Linux kernel provides two inverse functions to clear a bit in atomic and non-atomic contexts. They are `clear_bit` and `__clear_bit`. Both of these functions are defined in the same [header file](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) and take the same set of arguments. But not only the arguments are similar; in general these functions are very similar to `set_bit` and `__set_bit`. Let's look at the implementation of the non-atomic `__clear_bit` function:
-
-```C
-static inline void __clear_bit(long nr, volatile unsigned long *addr)
-{
-	asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
-}
-```
-
-Yes. As we see, it takes the same set of arguments and contains a very similar block of inline assembler. It just uses the [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) instruction instead of `bts`. As we can understand from the function's name, it clears the given bit at the given address. The `btr` instruction acts like `bts`: it also selects the bit specified in the first operand and stores its value in the `CF` flag register, but it clears this bit in the bit array specified with the second operand.
-
-The atomic variant of `__clear_bit` is `clear_bit`:
-
-```C
-static __always_inline void
-clear_bit(long nr, volatile unsigned long *addr)
-{
-	if (IS_IMMEDIATE(nr)) {
-		asm volatile(LOCK_PREFIX "andb %1,%0"
-			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
-	} else {
-		asm volatile(LOCK_PREFIX "btr %1,%0"
-			: BITOP_ADDR(addr)
-			: "Ir" (nr));
-	}
-}
-```
-
-and as we can see it is very similar to `set_bit` and just contains two differences.
The first difference is that it uses the `btr` instruction to clear the bit where `set_bit` uses the `bts` instruction to set it. The second difference is that it uses a negated mask and the `and` instruction to clear the bit in the given byte where `set_bit` uses the `or` instruction.
-
-That's all. Now we can set and clear a bit in any bit array, and we can go on to other operations on bitmasks.
-
-The most widely used operations on bit arrays in the Linux kernel are setting and clearing bits. But besides these operations, it is useful to perform additional operations on a bit array. Yet another widely used operation in the Linux kernel is to check whether a given bit in a bit array is set or not. We can achieve this with the help of the `test_bit` macro. This macro is defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file and expands to a call of `constant_test_bit` or `variable_test_bit` depending on the bit number:
-
-```C
-#define test_bit(nr, addr)			\
-	(__builtin_constant_p((nr))		\
-	 ? constant_test_bit((nr), (addr))	\
-	 : variable_test_bit((nr), (addr)))
-```
-
-So, if `nr` is a compile-time constant, `test_bit` expands to a call of the `constant_test_bit` function, and to `variable_test_bit` otherwise. Now let's look at the implementations of these functions. Let's start with `variable_test_bit`:
-
-```C
-static inline int variable_test_bit(long nr, volatile const unsigned long *addr)
-{
-	int oldbit;
-
-	asm volatile("bt %2,%1\n\t"
-		     "sbb %0,%0"
-		     : "=r" (oldbit)
-		     : "m" (*(unsigned long *)addr), "Ir" (nr));
-
-	return oldbit;
-}
-```
-
-The `variable_test_bit` function takes a similar set of arguments as `set_bit` and the other functions take. We also may see inline assembly code here which executes the [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) and [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) instructions.
The `bt` or `bit test` instruction selects the bit specified with the first operand from the bit array specified with the second operand and stores its value in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) bit of the flags register. The second instruction, `sbb`, subtracts the first operand from the second and also subtracts the value of `CF`. So, here we write the value of the given bit from the given bit array to the `CF` bit of the flags register and execute the `sbb` instruction, which calculates `00000000 - CF` and writes the result to `oldbit`.
-
-The `constant_test_bit` function does the same as we saw in `set_bit`:
-
-```C
-static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
-{
-	return ((1UL << (nr & (BITS_PER_LONG-1))) &
-		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
-}
-```
-
-It generates a mask where the bit corresponding to the given bit number is `1` and the other bits are `0` (as we saw with `CONST_MASK`) and applies a bitwise [and](https://en.wikipedia.org/wiki/Bitwise_operation#AND) to the word which contains the given bit.
-
-The next widely used bit array related operation is to change a bit in a bit array. The Linux kernel provides two helpers for this:
-
-* `__change_bit`;
-* `change_bit`.
-
-As you already can guess, these two variants are atomic and non-atomic, as for example `set_bit` and `__set_bit`. To start, let's look at the implementation of the `__change_bit` function:
-
-```C
-static inline void __change_bit(long nr, volatile unsigned long *addr)
-{
-	asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
-}
-```
-
-Pretty easy, isn't it? The implementation of `__change_bit` is the same as `__set_bit`, but instead of the `bts` instruction, we are using [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html). This instruction selects the given bit from the given bit array, stores its value in `CF` and changes its value by applying a complement operation.
So, a bit with value `1` will be `0` and vice versa: - -```python ->>> int(not 1) -0 ->>> int(not 0) -1 -``` - -The atomic version of the `__change_bit` is the `change_bit` function: - -```C -static inline void change_bit(long nr, volatile unsigned long *addr) -{ - if (IS_IMMEDIATE(nr)) { - asm volatile(LOCK_PREFIX "xorb %1,%0" - : CONST_MASK_ADDR(nr, addr) - : "iq" ((u8)CONST_MASK(nr))); - } else { - asm volatile(LOCK_PREFIX "btc %1,%0" - : BITOP_ADDR(addr) - : "Ir" (nr)); - } -} -``` - -It is similar on `set_bit` function, but also has two differences. The first difference is `xor` operation instead of `or` and the second is `bts` instead of `bts`. - -For this moment we know the most important architecture-specific operations with bit arrays. Time to look at generic bitmap API. - -Common bit operations -================================================================================ - -Besides the architecture-specific API from the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, the Linux kernel provides common API for manipulation of bit arrays. As we know from the beginning of this part, we can find it in the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and additionally in the * [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) source code file. But before these source code files let's look into the [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) header file which provides a set of useful macro. Let's look on some of they. - -First of all let's look at following four macros: - -* `for_each_set_bit` -* `for_each_set_bit_from` -* `for_each_clear_bit` -* `for_each_clear_bit_from` - -All of these macros provide iterator over certain set of bits in a bit array. 
The first macro iterates over bits which are set, the second does the same, but starts from a certain bits. The last two macros do the same, but iterates over clear bits. Let's look on implementation of the `for_each_set_bit` macro: - -```C -#define for_each_set_bit(bit, addr, size) \ - for ((bit) = find_first_bit((addr), (size)); \ - (bit) < (size); \ - (bit) = find_next_bit((addr), (size), (bit) + 1)) -``` - -As we may see it takes three arguments and expands to the loop from first set bit which is returned as result of the `find_first_bit` function and to the last bit number while it is less than given size. - -Besides these four macros, the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) provides API for rotation of `64-bit` or `32-bit` values and etc. - -The next [header](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) file which provides API for manipulation with a bit arrays. For example it provdes two functions: - -* `bitmap_zero`; -* `bitmap_fill`. - -To clear a bit array and fill it with `1`. Let's look on the implementation of the `bitmap_zero` function: - -```C -static inline void bitmap_zero(unsigned long *dst, unsigned int nbits) -{ - if (small_const_nbits(nbits)) - *dst = 0UL; - else { - unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); - memset(dst, 0, len); - } -} -``` - -First of all we can see the check for `nbits`. The `small_const_nbits` is macro which defined in the same header [file](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) and looks: - -```C -#define small_const_nbits(nbits) \ - (__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG) -``` - -As we may see it checks that `nbits` is known constant in compile time and `nbits` value does not overflow `BITS_PER_LONG` or `64`. If bits number does not overflow amount of bits in a `long` value we can just set to zero. 
In other case we need to calculate how many `long` values do we need to fill our bit array and fill it with [memset](http://man7.org/linux/man-pages/man3/memset.3.html). - -The implementation of the `bitmap_fill` function is similar on implementation of the `biramp_zero` function, except we fill a given bit array with `0xff` values or `0b11111111`: - -```C -static inline void bitmap_fill(unsigned long *dst, unsigned int nbits) -{ - unsigned int nlongs = BITS_TO_LONGS(nbits); - if (!small_const_nbits(nbits)) { - unsigned int len = (nlongs - 1) * sizeof(unsigned long); - memset(dst, 0xff, len); - } - dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits); -} -``` - -Besides the `bitmap_fill` and `bitmap_zero` functions, the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file provides `bitmap_copy` which is similar on the `bitmap_zero`, but just uses [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) instead of [memset](http://man7.org/linux/man-pages/man3/memset.3.html). Also it provides bitwise operations for bit array like `bitmap_and`, `bitmap_or`, `bitamp_xor` and etc. We will not consider implementation of these functions because it is easy to understand implementations of these functions if you understood all from this part. Anyway if you are interested how did these function implemented, you may open [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and start to research. - -That's all. 
- -Links -================================================================================ - -* [bitmap](https://en.wikipedia.org/wiki/Bit_array) -* [linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure) -* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) -* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) -* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) -* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) -* [API](https://en.wikipedia.org/wiki/Application_programming_interface) -* [atomic operations](https://en.wikipedia.org/wiki/Linearizability) -* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html) -* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html) -* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html) -* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html) -* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html) -* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html) -* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html) -* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html) -* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) -* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html) -* [CF](https://en.wikipedia.org/wiki/FLAGS_register) -* [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) -* [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) - - ------------------------------------------------------------------------------- - -via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md - -作者:[0xAX][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://twitter.com/0xAX
diff --git a/translated/tech/20160512 Bitmap in Linux Kernel.md b/translated/tech/20160512 Bitmap in Linux Kernel.md
new file mode 100644
index 0000000000..6475b9260e
--- /dev/null
+++ b/translated/tech/20160512 Bitmap in Linux Kernel.md
@@ -0,0 +1,405 @@
+---
+date: 2016-07-09 14:42
+status: public
+title: 20160512 Bitmap in Linux Kernel
+---
+
+Linux 内核里的数据结构
+================================================================================
+
+Linux 内核中的位数组和位操作
+--------------------------------------------------------------------------------
+
+除了不同的基于[链式](https://en.wikipedia.org/wiki/Linked_data_structure)和[树](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)的数据结构以外,Linux 内核也为[位数组](https://en.wikipedia.org/wiki/Bit_array)或`位图`提供了 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。位数组在 Linux 内核里被广泛使用,并且在以下的源代码文件中包含了与这样的结构搭配使用的通用 `API`:
+
+* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c)
+* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h)
+
+除了这两个文件之外,还有体系结构特定的头文件,它们为特定的体系结构提供优化的位操作。我们将探讨 [x86_64](https://en.wikipedia.org/wiki/X86-64) 体系结构,因此在我们的例子里,它会是
+
+* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h)
+
+头文件。正如我上面所写的,`位图`在 Linux 内核中被广泛地使用。例如,`位数组`常常用于保存一组在线/离线处理器,以便系统支持[热插拔](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)的 CPU(你可以在 [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) 部分阅读更多相关知识),一个`位数组`也可以在 Linux 内核初始化等期间保存一组已分配的中断。
+
+因此,本部分的主要目的是了解位数组是如何在 Linux 内核中实现的。让我们现在开始吧。
+
+位数组声明
+================================================================================
+
+在我们开始查看位图操作的 `API` 之前,我们必须知道如何在 Linux 内核中声明它。有两种通用的方法声明位数组。第一种简单的方法是直接定义一个 `unsigned long` 数组,例如:
+
+```C
+unsigned long my_bitmap[8]
+```
+
+第二种方法,是使用 `DECLARE_BITMAP` 宏,它定义于
[include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 头文件:
+
+```C
+#define DECLARE_BITMAP(name,bits) \
+	unsigned long name[BITS_TO_LONGS(bits)]
+```
+
+我们可以看到 `DECLARE_BITMAP` 宏接受两个参数:
+
+* `name` - 位图名称;
+* `bits` - 位图中位数;
+
+并且只是展开为一个具有 `BITS_TO_LONGS(bits)` 个元素的 `unsigned long` 数组的定义。`BITS_TO_LONGS` 宏将一个给定的位数转换为 `longs` 的个数,换言之,就是计算 `bits` 中有多少个 `8` 字节元素:
+
+```C
+#define BITS_PER_BYTE 8
+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+```
+
+因此,例如 `DECLARE_BITMAP(my_bitmap, 64)` 将产生:
+
+```python
+>>> (((64) + (64) - 1) / (64))
+1
+```
+
+与:
+
+```C
+unsigned long my_bitmap[1];
+```
+
+在能够声明一个位数组之后,我们便可以使用它了。
+
+体系结构特定的位操作
+================================================================================
+
+我们已经看了以上一对源文件和头文件,它们提供了位数组操作的 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。其中重要且广泛使用的位数组 API 是体系结构特定的,位于前面已经提到的 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件中。
+
+首先让我们查看两个最重要的函数:
+
+* `set_bit`;
+* `clear_bit`.
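在深入这两个函数的实现之前,可以先用一段示意性的 Python 代码说明它们的语义效果(这只是帮助理解的模型,并非内核实现;`BITS_PER_LONG` 等名称只是沿用内核中的叫法):

```python
BITS_PER_LONG = 64

def set_bit(nr, bitmap):
    # 对应 set_bit 的语义:将位数组中第 nr 位置 1
    bitmap[nr // BITS_PER_LONG] |= 1 << (nr % BITS_PER_LONG)

def clear_bit(nr, bitmap):
    # 对应 clear_bit 的语义:将位数组中第 nr 位清 0
    bitmap[nr // BITS_PER_LONG] &= ~(1 << (nr % BITS_PER_LONG))

bitmap = [0, 0]        # 相当于 DECLARE_BITMAP(bitmap, 128)
set_bit(9, bitmap)     # 第 9 位落在第 0 个 long 中
set_bit(64, bitmap)    # 第 64 位落在第 1 个 long 中
print(hex(bitmap[0]), hex(bitmap[1]))  # 0x200 0x1
clear_bit(9, bitmap)
print(hex(bitmap[0]))  # 0x0
```

当然,内核版本是用单条 `bts`/`btr` 指令(或对单个字节的 `orb`/`andb`)完成同样的事情的。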
+ +我认为没有必要解释这些函数的作用。从它们的名字来看,这已经很清楚了。让我们直接查看它们的实现。如果你浏览 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,你将会注意到这些函数中的每一个都有[原子性](https://en.wikipedia.org/wiki/Linearizability)和非原子性两种变体。在我们开始深入这些函数的实现之前,首先,我们必须了解一些有关原子操作的知识。 + +简而言之,原子操作保证两个或以上的操作不会并发地执行同一数据。`x86` 体系结构提供了一系列原子指令,例如, [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html)、[cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) 等指令。除了原子指令,一些非原子指令可以在 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令的帮助下具有原子性。目前已经对原子操作有了充分的理解,我们可以接着探讨 `set_bit` 和 `clear_bit` 函数的实现。 + +我们先考虑函数的非原子性变体。非原子性的 `set_bit` 和 `clear_bit` 的名字以双下划线开始。正如我们所知道的,所有这些函数都定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并且第一个函数就是 `__set_bit`: + +```C +static inline void __set_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory"); +} +``` + +正如我们所看到的,它使用了两个参数: + +* `nr` - 位数组中的位号(从0开始,译者注) +* `addr` - 我们需要置位的位数组地址 + +注意,`addr` 参数使用 `volatile` 关键字定义,以告诉编译器给定地址指向的变量可能会被修改。 `__set_bit` 的实现相当简单。正如我们所看到的,它仅包含一行[内联汇编代码](https://en.wikipedia.org/wiki/Inline_assembler)。在我们的例子中,我们使用 [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) 指令,从位数组中选出一个第一操作数(我们的例子中的 `nr`),存储选出的位的值到 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 标志寄存器并设置该位(即 `nr` 指定的位置为1,译者注)。 + +注意,我们了解了 `nr` 的用法,但这里还有一个参数 `addr` 呢!你或许已经猜到秘密就在 `ADDR`。 `ADDR` 是一个定义在同一头文件的宏,它展开为一个包含给定地址和 `+m` 约束的字符串: + +```C +#define ADDR BITOP_ADDR(addr) +#define BITOP_ADDR(x) "+m" (*(volatile long *) (x)) +``` + +除了 `+m` 之外,在 `__set_bit` 函数中我们可以看到其他约束。让我们查看并试图理解它们所表示的意义: + +* `+m` - 表示内存操作数,这里的 `+` 表明给定的操作数为输入输出操作数; +* `I` - 表示整型常量; +* `r` - 表示寄存器操作数 + +除了这些约束之外,我们也能看到 `memory` 关键字,其告诉编译器这段代码会修改内存中的变量。到此为止,现在我们看看相同的原子性变体函数。它看起来比非原子性变体更加复杂: + +```C +static __always_inline void +set_bit(long nr, volatile unsigned long *addr) +{ + if (IS_IMMEDIATE(nr)) { + asm 
volatile(LOCK_PREFIX "orb %1,%0"
+			: CONST_MASK_ADDR(nr, addr)
+			: "iq" ((u8)CONST_MASK(nr))
+			: "memory");
+	} else {
+		asm volatile(LOCK_PREFIX "bts %1,%0"
+			: BITOP_ADDR(addr) : "Ir" (nr) : "memory");
+	}
+}
+```
+
+(BITOP_ADDR 的定义为:`#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))`,ORB 为字节按位或,译者注)
+
+首先注意,这个函数使用了与 `__set_bit` 相同的参数集合,但额外地使用了 `__always_inline` 属性标记。`__always_inline` 是一个定义于 [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) 的宏,并且只是展开为 `always_inline` 属性:
+
+```C
+#define __always_inline inline __attribute__((always_inline))
+```
+
+这意味着这个函数总是内联的,以减少 Linux 内核映像的大小。现在我们试着了解 `set_bit` 函数的实现。首先我们在 `set_bit` 函数的开头检查给定的位号。`IS_IMMEDIATE` 宏定义于相同[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h),并展开为 gcc 内置函数的调用:
+
+```C
+#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr))
+```
+
+如果给定的参数是编译期已知的常量,`__builtin_constant_p` 内置函数则返回 `1`,其他情况返回 `0`。假若给定的位号是编译期已知的常量,我们便无须使用效率低下的 `bts` 指令去设置位。我们只需在包含给定位的那个字节(由给定地址指出)和掩码之间执行[按位或](https://en.wikipedia.org/wiki/Bitwise_operation#OR)操作,掩码中只有给定位号对应的位为 `1`,其他位均为 `0`。在其他情况下,如果给定的位号不是编译期已知常量,我们便做和 `__set_bit` 函数一样的事。`CONST_MASK_ADDR` 宏:
+
+```C
+#define CONST_MASK_ADDR(nr, addr)	BITOP_ADDR((void *)(addr) + ((nr)>>3))
+```
+
+展开为给定地址加上一个字节偏移,该偏移指向包含给定位的字节。例如,我们拥有地址 `0x1000` 和位号 `0x9`。因为 `0x9` 是 `一个字节 + 一位`,所以我们的地址是 `addr + 1`:
+
+```python
+>>> hex(0x1000 + (0x9 >> 3))
+'0x1001'
+```
+
+`CONST_MASK` 宏将给定的位号表示为一个字节,其中位号对应的位为 `1`,其他位为 `0`:
+
+```C
+#define CONST_MASK(nr)			(1 << ((nr) & 7))
+```
+
+```python
+>>> bin(1 << (0x9 & 7))
+'0b10'
+```
+
+最后,我们在这两者上执行 `按位或` 运算。因此,假如我们的地址是 `0x4097`,并且我们需要将位号为 `9` 的位置为 1:
+
+```python
+>>> bin(0x4097)
+'0b100000010010111'
+>>> bin((0x4097 >> 0x9) | (1 << (0x9 & 7)))
+'0b100010'
+```
+
+`第 9 位` 将会被置位。(这里的 9 是从 0 开始计数的,比如0010,按照作者的意思,其中的 1 是第 1 位,译者注)
+
+注意,所有这些操作使用 `LOCK_PREFIX` 标记,其展开为 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令,保证该操作的原子性。
+
+正如我们所知,除了 `set_bit` 和 `__set_bit` 
操作之外,Linux 内核还提供了两个功能相反的函数,在原子性和非原子性的上下文中清位。它们为 `clear_bit` 和 `__clear_bit`。这两个函数都定义于同一个[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 并且使用相同的参数集合。不仅参数相似,一般而言,这些函数与 `set_bit` 和 `__set_bit` 也非常相似。让我们查看非原子性 `__clear_bit` 的实现吧:
+
+```C
+static inline void __clear_bit(long nr, volatile unsigned long *addr)
+{
+	asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
+}
+```
+
+没错,正如我们所见,`__clear_bit` 使用相同的参数集合,并包含极其相似的内联汇编代码块。它仅仅使用 [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) 指令替换 `bts`。正如我们从函数名所理解的一样,通过给定地址,它清除了给定的位。`btr` 指令表现得像 `bts`(原文这里为 btr,可能为笔误,修正为 bts,译者注)。该指令选出第一操作数指定的位,存储它的值到 `CF` 标志寄存器,并且清除第二操作数指定的位数组中的对应位。
+
+`__clear_bit` 的原子性变体为 `clear_bit`:
+
+```C
+static __always_inline void
+clear_bit(long nr, volatile unsigned long *addr)
+{
+	if (IS_IMMEDIATE(nr)) {
+		asm volatile(LOCK_PREFIX "andb %1,%0"
+			: CONST_MASK_ADDR(nr, addr)
+			: "iq" ((u8)~CONST_MASK(nr)));
+	} else {
+		asm volatile(LOCK_PREFIX "btr %1,%0"
+			: BITOP_ADDR(addr)
+			: "Ir" (nr));
+	}
+}
+```
+
+并且正如我们所看到的,它与 `set_bit` 非常相似,同时只包含了两处差异。第一处差异为 `clear_bit` 使用 `btr` 指令来清位,而 `set_bit` 使用 `bts` 指令来置位。第二处差异为 `clear_bit` 使用否定的位掩码和 `按位与` 在给定的字节上清位,而 `set_bit` 使用 `按位或` 指令。
+
+到此为止,我们可以在任何位数组置位和清位了,并且能够转到位掩码上的其他操作。
+
+在 Linux 内核位数组上最广泛使用的操作是设置和清除位,但是除了这两个操作外,位数组上其他操作也是非常有用的。Linux 内核里另一种广泛使用的操作是知晓位数组中一个给定的位是否被置位。我们能够通过 `test_bit` 宏的帮助实现这一功能。这个宏定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并展开为 `constant_test_bit` 或 `variable_test_bit` 的调用,这要取决于位号。
+
+```C
+#define test_bit(nr, addr)			\
+	(__builtin_constant_p((nr))		\
+	 ? constant_test_bit((nr), (addr))	\
+	 : variable_test_bit((nr), (addr)))
+```
+
+因此,如果 `nr` 是编译期已知常量,`test_bit` 将展开为 `constant_test_bit` 函数的调用,而其他情况则为 `variable_test_bit`。现在让我们看看这些函数的实现,我们从 `variable_test_bit` 开始看起:
+
+```C
+static inline int variable_test_bit(long nr, volatile const unsigned long *addr)
+{
+	int oldbit;
+
+	asm volatile("bt %2,%1\n\t"
+		     "sbb %0,%0"
+		     : "=r" (oldbit)
+		     : "m" (*(unsigned long *)addr), "Ir" (nr));
+
+	return oldbit;
+}
+```
+
+`variable_test_bit` 函数使用了与 `set_bit` 及其他函数相似的参数集合。我们也可以看到执行 [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) 和 [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) 指令的内联汇编代码。`bt` 或 `bit test` 指令从第二操作数指定的位数组中选出第一操作数所指定的位,并且将该位的值存进标志寄存器的 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 位。第二个指令 `sbb` 从第二操作数中减去第一操作数,再减去 `CF` 的值。因此,这里先将给定位数组中给定位号的值写进标志寄存器的 `CF` 位,再执行 `sbb` 指令计算 `00000000 - CF`,并将结果写进 `oldbit` 变量。
+
+`constant_test_bit` 函数做了和我们在 `set_bit` 所看到的一样的事:
+
+```C
+static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
+{
+	return ((1UL << (nr & (BITS_PER_LONG-1))) &
+		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
+}
+```
+
+它生成了一个位号对应位为 `1`,而其他位为 `0` 的掩码(正如我们在 `CONST_MASK` 所看到的),并将 [按位与](https://en.wikipedia.org/wiki/Bitwise_operation#AND) 应用于包含给定位号的那个 `long` 元素。
+
+下一个广泛使用的位数组相关操作是改变一个位数组中的位。为此,Linux 内核提供了两个辅助函数:
+
+* `__change_bit`;
+* `change_bit`.
+
+你可能已经猜测到,正如 `set_bit` 和 `__set_bit` 的例子一样,这两个变体分别是原子和非原子版本。首先,让我们看看 `__change_bit` 函数的实现:
+
+```C
+static inline void __change_bit(long nr, volatile unsigned long *addr)
+{
+	asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
+}
+```
+
+相当简单,不是吗?`__change_bit` 的实现和 `__set_bit` 一样,只是我们使用 [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html) 替换 `bts` 指令而已。该指令从一个给定位数组中选出一个给定位,将该位的值存进 `CF` 并使用求反操作改变它的值,因此值为 `1` 的位将变为 `0`,反之亦然:
+
+```python
+>>> int(not 1)
+0
+>>> int(not 0)
+1
+```
+
+`__change_bit` 的原子版本为 `change_bit` 函数:
+
+```C
+static inline void change_bit(long nr, volatile unsigned long *addr)
+{
+	if (IS_IMMEDIATE(nr)) {
+		asm volatile(LOCK_PREFIX "xorb %1,%0"
+			: CONST_MASK_ADDR(nr, addr)
+			: "iq" ((u8)CONST_MASK(nr)));
+	} else {
+		asm volatile(LOCK_PREFIX "btc %1,%0"
+			: BITOP_ADDR(addr)
+			: "Ir" (nr));
+	}
+}
+```
+
+它和 `set_bit` 函数很相似,但也存在两点差异。第一处差异为 `xor` 操作而不是 `or`。第二处差异为 `btc`(原文为 `bts`,为作者笔误,译者注) 而不是 `bts`。
+
+目前,我们了解了最重要的体系特定的位数组操作,是时候看看一般的位图 API 了。
+
+通用位操作
+================================================================================
+
+除了 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 中体系特定的 API 外,Linux 内核提供了操作位数组的通用 API。正如我们本部分开头所了解的一样,我们可以在 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件和 [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) 源文件中找到它。但在查看这些源文件之前,我们先看看 [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) 头文件,其提供了一系列有用的宏,让我们看看它们当中一部分。
+
+首先我们看看以下 4 个宏:
+
+* `for_each_set_bit`
+* `for_each_set_bit_from`
+* `for_each_clear_bit`
+* `for_each_clear_bit_from`
+
+所有这些宏都提供了遍历位数组中某些位集合的迭代器。第一个宏迭代那些被置位的位。第二个宏也是一样,但它是从某一确定位开始。最后两个宏做的一样,但是迭代那些被清位的位。让我们看看 `for_each_set_bit` 宏:
+
+```C
+#define for_each_set_bit(bit, addr, size) \
+	for ((bit) = find_first_bit((addr), (size));		\
+	     (bit) < (size);					\
+	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+```
+
+正如我们所看到的,它使用了三个参数,并展开为一个循环,该循环从作为 `find_first_bit` 函数返回结果的第一个置位开始到最后一个置位且小于给定大小为止。
+
+除了这四个宏,[arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 也提供了对 `64-bit` 或 `32-bit` 值进行循环移位(rotation)操作的 API 等等。
+
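`for_each_set_bit` 的迭代语义同样可以用一小段 Python 来模拟(仅为示意性模型;这里的 `find_next_bit` 是按内核同名函数的语义自行实现的辅助函数,并非内核代码):

```python
def find_next_bit(value, size, offset):
    # 在 value 的低 size 位中,从 offset 开始查找下一个置位;找不到则返回 size
    for nr in range(offset, size):
        if (value >> nr) & 1:
            return nr
    return size

def for_each_set_bit(value, size):
    # 模拟 for_each_set_bit 宏:依次产生所有被置位的位号
    nr = find_next_bit(value, size, 0)
    while nr < size:
        yield nr
        nr = find_next_bit(value, size, nr + 1)

print(list(for_each_set_bit(0b101001, 8)))  # [0, 3, 5]
```

可以看到,循环的结构与上面宏展开后的 `for` 循环一一对应:先找第一个置位,之后每次从 `bit + 1` 继续查找。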
+下一个 [头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 提供了操作位数组的 API。例如,它提供了以下两个函数:
+
+* `bitmap_zero`;
+* `bitmap_fill`.
+
+它们分别可以清零一个位数组和用 `1` 填充位数组。让我们看看 `bitmap_zero` 函数的实现:
+
+```C
+static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = 0UL;
+	else {
+		unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+		memset(dst, 0, len);
+	}
+}
+```
+
+首先我们可以看到对 `nbits` 的检查。`small_const_nbits` 是一个定义在同一[头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 的宏:
+
+```C
+#define small_const_nbits(nbits) \
+	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
+```
+
+正如我们可以看到的,它检查 `nbits` 是否为编译期已知常量,并且其值不超过 `BITS_PER_LONG` 或 `64`。如果位数目没有超过一个 `long` 变量的位数,我们可以仅仅将其设置为 0。其他情况下,我们需要计算填充位数组需要多少个 `long` 变量,然后使用 [memset](http://man7.org/linux/man-pages/man3/memset.3.html) 进行填充。
+
+`bitmap_fill` 函数的实现和 `bitmap_zero` 函数很相似,除了我们需要在给定的位数组中填写 `0xff` 或 `0b11111111`:
+
+```C
+static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
+{
+	unsigned int nlongs = BITS_TO_LONGS(nbits);
+	if (!small_const_nbits(nbits)) {
+		unsigned int len = (nlongs - 1) * sizeof(unsigned long);
+		memset(dst, 0xff, len);
+	}
+	dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
+}
+```
+
+除了 `bitmap_fill` 和 `bitmap_zero`,[include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件也提供了和 `bitmap_zero` 很相似的 `bitmap_copy`,只是仅仅使用 [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) 而不是 [memset](http://man7.org/linux/man-pages/man3/memset.3.html) 这点差异而已。它也提供了位数组的按位操作,像 `bitmap_and`、`bitmap_or`、`bitmap_xor` 等等。我们不会探讨这些函数的实现了,因为如果你理解了本部分的所有内容,这些函数的实现是很容易理解的。无论如何,如果你对这些函数是如何实现的感兴趣,你可以打开并研究 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件。
+
+本部分到此为止。
+
+链接
+================================================================================
+
+* [bitmap](https://en.wikipedia.org/wiki/Bit_array)
+* 
[linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure) +* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) +* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) +* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) +* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) +* [API](https://en.wikipedia.org/wiki/Application_programming_interface) +* [atomic operations](https://en.wikipedia.org/wiki/Linearizability) +* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html) +* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html) +* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html) +* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html) +* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html) +* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html) +* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html) +* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html) +* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) +* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html) +* [CF](https://en.wikipedia.org/wiki/FLAGS_register) +* [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) +* [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) + + +------------------------------------------------------------------------------ + +via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md + +作者:[0xAX][a] +译者:[cposture](https://github.com/cposture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/0xAX \ No newline at end of file From b1cdc7fcbe2fee9dbb5ad3264d19e028d9f3b818 Mon Sep 17 
00:00:00 2001 From: cposture Date: Sat, 9 Jul 2016 21:46:14 +0800 Subject: [PATCH 094/471] Translating by cposture --- .../tech/20160705 Create Your Own Shell in Python - Part I.md | 1 + .../tech/20160706 Create Your Own Shell in Python - Part II.md | 1 + 2 files changed, 2 insertions(+) diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md index 9c3ffa55dd..48e84381c8 100644 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.09 Create Your Own Shell in Python : Part I I’m curious to know how a shell (like bash, csh, etc.) works internally. So, I implemented one called yosh (Your Own SHell) in Python to answer my own curiosity. The concept I explain in this article can be applied to other languages as well. diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md index af0ec01b36..3154839443 100644 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.09 Create Your Own Shell in Python - Part II =========================================== From 906b6da96425b7ba345091e788643d692ce4a043 Mon Sep 17 00:00:00 2001 From: FrankXinqi Date: Sat, 9 Jul 2016 23:52:07 +0800 Subject: [PATCH 095/471] Translating How bad a boss is Linus Torvalds? 
--- sources/talk/20151117 How bad a boss is Linus Torvalds.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20151117 How bad a boss is Linus Torvalds.md b/sources/talk/20151117 How bad a boss is Linus Torvalds.md index d593dcf697..552155a18c 100644 --- a/sources/talk/20151117 How bad a boss is Linus Torvalds.md +++ b/sources/talk/20151117 How bad a boss is Linus Torvalds.md @@ -1,3 +1,4 @@ +Translating by FrankXinqi How bad a boss is Linus Torvalds? ================================================================================ ![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) From 012588071ea9ffda2377d7665f4a071d35f8b308 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sat, 9 Jul 2016 23:57:15 +0800 Subject: [PATCH 096/471] Translated by cposture (#4164) * Translated by cposture * Translating by cposture --- .../tech/20160512 Bitmap in Linux Kernel.md | 398 ----------------- ...reate Your Own Shell in Python - Part I.md | 1 + ...eate Your Own Shell in Python - Part II.md | 1 + .../tech/20160512 Bitmap in Linux Kernel.md | 405 ++++++++++++++++++ 4 files changed, 407 insertions(+), 398 deletions(-) delete mode 100644 sources/tech/20160512 Bitmap in Linux Kernel.md create mode 100644 translated/tech/20160512 Bitmap in Linux Kernel.md diff --git a/sources/tech/20160512 Bitmap in Linux Kernel.md b/sources/tech/20160512 Bitmap in Linux Kernel.md deleted file mode 100644 index adffc9d049..0000000000 --- a/sources/tech/20160512 Bitmap in Linux Kernel.md +++ /dev/null @@ -1,398 +0,0 @@ -[Translating by cposture 2016.06.29] -Data Structures in the Linux Kernel -================================================================================ - -Bit arrays and bit operations in the Linux kernel --------------------------------------------------------------------------------- - -Besides different [linked](https://en.wikipedia.org/wiki/Linked_data_structure) and 
[tree](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) based data structures, the Linux kernel provides [API](https://en.wikipedia.org/wiki/Application_programming_interface) for [bit arrays](https://en.wikipedia.org/wiki/Bit_array) or `bitmap`. Bit arrays are heavily used in the Linux kernel and following source code files contain common `API` for work with such structures: - -* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) -* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) - -Besides these two files, there is also architecture-specific header file which provides optimized bit operations for certain architecture. We consider [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture, so in our case it will be: - -* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) - -header file. As I just wrote above, the `bitmap` is heavily used in the Linux kernel. For example a `bit array` is used to store set of online/offline processors for systems which support [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) cpu (more about this you can read in the [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) part), a `bit array` stores set of allocated [irqs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) during initialization of the Linux kernel and etc. - -So, the main goal of this part is to see how `bit arrays` are implemented in the Linux kernel. Let's start. - -Declaration of bit array -================================================================================ - -Before we will look on `API` for bitmaps manipulation, we must know how to declare it in the Linux kernel. There are two common method to declare own bit array. The first simple way to declare a bit array is to array of `unsigned long`. 
For example: - -```C -unsigned long my_bitmap[8] -``` - -The second way is to use the `DECLARE_BITMAP` macro which is defined in the [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) header file: - -```C -#define DECLARE_BITMAP(name,bits) \ - unsigned long name[BITS_TO_LONGS(bits)] -``` - -We can see that `DECLARE_BITMAP` macro takes two parameters: - -* `name` - name of bitmap; -* `bits` - amount of bits in bitmap; - -and just expands to the definition of `unsigned long` array with `BITS_TO_LONGS(bits)` elements, where the `BITS_TO_LONGS` macro converts a given number of bits to number of `longs` or in other words it calculates how many `8` byte elements in `bits`: - -```C -#define BITS_PER_BYTE 8 -#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) -#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) -``` - -So, for example `DECLARE_BITMAP(my_bitmap, 64)` will produce: - -```python ->>> (((64) + (64) - 1) / (64)) -1 -``` - -and: - -```C -unsigned long my_bitmap[1]; -``` - -After we are able to declare a bit array, we can start to use it. - -Architecture-specific bit operations -================================================================================ - -We already saw above a couple of source code and header files which provide [API](https://en.wikipedia.org/wiki/Application_programming_interface) for manipulation of bit arrays. The most important and widely used API of bit arrays is architecture-specific and located as we already know in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file. - -First of all let's look at the two most important functions: - -* `set_bit`; -* `clear_bit`. - -I think that there is no need to explain what these function do. This is already must be clear from their name. Let's look on their implementation. 
If you will look into the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, you will note that each of these functions represented by two variants: [atomic](https://en.wikipedia.org/wiki/Linearizability) and not. Before we will start to dive into implementations of these functions, first of all we must to know a little about `atomic` operations. - -In simple words atomic operations guarantees that two or more operations will not be performed on the same data concurrently. The `x86` architecture provides a set of atomic instructions, for example [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html) instruction, [cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) instruction and etc. Besides atomic instructions, some of non-atomic instructions can be made atomic with the help of the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction. It is enough to know about atomic operations for now, so we can begin to consider implementation of `set_bit` and `clear_bit` functions. - -First of all, let's start to consider `non-atomic` variants of this function. Names of non-atomic `set_bit` and `clear_bit` starts from double underscore. As we already know, all of these functions are defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file and the first function is `__set_bit`: - -```C -static inline void __set_bit(long nr, volatile unsigned long *addr) -{ - asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory"); -} -``` - -As we can see it takes two arguments: - -* `nr` - number of bit in a bit array. -* `addr` - address of a bit array where we need to set bit. - -Note that the `addr` parameter is defined with `volatile` keyword which tells to compiler that value maybe changed by the given address. The implementation of the `__set_bit` is pretty easy. 
As we can see, it just contains one line of [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) code. In our case we are using the [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) instruction which selects a bit which is specified with the first operand (`nr` in our case) from the bit array, stores the value of the selected bit in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) flags register and set this bit. - -Note that we can see usage of the `nr`, but there is `addr` here. You already might guess that the secret is in `ADDR`. The `ADDR` is the macro which is defined in the same header code file and expands to the string which contains value of the given address and `+m` constraint: - -```C -#define ADDR BITOP_ADDR(addr) -#define BITOP_ADDR(x) "+m" (*(volatile long *) (x)) -``` - -Besides the `+m`, we can see other constraints in the `__set_bit` function. Let's look on they and try to understand what do they mean: - -* `+m` - represents memory operand where `+` tells that the given operand will be input and output operand; -* `I` - represents integer constant; -* `r` - represents register operand - -Besides these constraint, we also can see - the `memory` keyword which tells compiler that this code will change value in memory. That's all. Now let's look at the same function but at `atomic` variant. It looks more complex that its `non-atomic` variant: - -```C -static __always_inline void -set_bit(long nr, volatile unsigned long *addr) -{ - if (IS_IMMEDIATE(nr)) { - asm volatile(LOCK_PREFIX "orb %1,%0" - : CONST_MASK_ADDR(nr, addr) - : "iq" ((u8)CONST_MASK(nr)) - : "memory"); - } else { - asm volatile(LOCK_PREFIX "bts %1,%0" - : BITOP_ADDR(addr) : "Ir" (nr) : "memory"); - } -} -``` - -First of all note that this function takes the same set of parameters that `__set_bit`, but additionally marked with the `__always_inline` attribute. 
The `__always_inline` is macro which defined in the [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) and just expands to the `always_inline` attribute: - -```C -#define __always_inline inline __attribute__((always_inline)) -``` - -which means that this function will be always inlined to reduce size of the Linux kernel image. Now let's try to understand implementation of the `set_bit` function. First of all we check a given number of bit at the beginning of the `set_bit` function. The `IS_IMMEDIATE` macro defined in the same [header](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) file and expands to the call of the builtin [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) function: - -```C -#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr)) -``` - -The `__builtin_constant_p` builtin function returns `1` if the given parameter is known to be constant at compile-time and returns `0` in other case. We no need to use slow `bts` instruction to set bit if the given number of bit is known in compile time constant. We can just apply [bitwise or](https://en.wikipedia.org/wiki/Bitwise_operation#OR) for byte from the give address which contains given bit and masked number of bits where high bit is `1` and other is zero. In other case if the given number of bit is not known constant at compile-time, we do the same as we did in the `__set_bit` function. The `CONST_MASK_ADDR` macro: - -```C -#define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + ((nr)>>3)) -``` - -expands to the give address with offset to the byte which contains a given bit. For example we have address `0x1000` and the number of bit is `0x9`. 
So, as `0x9` is `one byte + one bit`, our address will be `addr + 1`:
-
-```python
->>> hex(0x1000 + (0x9 >> 3))
-'0x1001'
-```
-
-The `CONST_MASK` macro represents our given number of bit as a byte where the high bit is `1` and the other bits are `0`:
-
-```C
-#define CONST_MASK(nr) (1 << ((nr) & 7))
-```
-
-```python
->>> bin(1 << (0x9 & 7))
-'0b10'
-```
-
-In the end we just apply bitwise `or` to these values. So, for example, if our address is `0x4097` and we need to set the `0x9` bit:
-
-```python
->>> bin(0x4097)
-'0b100000010010111'
->>> bin((0x4097 >> 0x9) | (1 << (0x9 & 7)))
-'0b100010'
-```
-
-the `ninth` bit will be set.
-
-Note that all of these operations are marked with `LOCK_PREFIX`, which expands to the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction and guarantees the atomicity of this operation.
-
-As we already know, besides the `set_bit` and `__set_bit` operations, the Linux kernel provides two inverse functions to clear a bit in atomic and non-atomic context. They are `clear_bit` and `__clear_bit`. Both of these functions are defined in the same [header file](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) and take the same set of arguments. And not only the arguments are similar: generally these functions are very similar to `set_bit` and `__set_bit`. Let's look at the implementation of the non-atomic `__clear_bit` function:
-
-```C
-static inline void __clear_bit(long nr, volatile unsigned long *addr)
-{
-	asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
-}
-```
-
-Yes. As we see, it takes the same set of arguments and contains a very similar block of inline assembler. It just uses the [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) instruction instead of `bts`. As we can understand from the function's name, it clears a given bit at the given address. The `btr` instruction acts like `bts`.
This instruction also selects a given bit which is specified in the first operand, stores its value in the `CF` flag register and clears this bit in the given bit array which is specified with the second operand.
-
-The atomic variant of the `__clear_bit` is `clear_bit`:
-
-```C
-static __always_inline void
-clear_bit(long nr, volatile unsigned long *addr)
-{
-	if (IS_IMMEDIATE(nr)) {
-		asm volatile(LOCK_PREFIX "andb %1,%0"
-			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
-	} else {
-		asm volatile(LOCK_PREFIX "btr %1,%0"
-			: BITOP_ADDR(addr)
-			: "Ir" (nr));
-	}
-}
-```
-
-and as we can see it is very similar to `set_bit` and contains just two differences. The first difference: it uses the `btr` instruction to clear a bit, where `set_bit` uses the `bts` instruction to set a bit. The second difference: it uses a negated mask and the `and` instruction to clear a bit in the given byte, where `set_bit` uses the `or` instruction.
-
-That's all. Now we can set and clear a bit in any bit array, and we can go on to other operations on bitmasks.
-
-The most widely used operations on bit arrays in the Linux kernel are setting and clearing a bit. But besides these operations, additional operations on a bit array are useful too. Yet another widely used operation in the Linux kernel is to check whether a given bit in a bit array is set or not. We can achieve this with the help of the `test_bit` macro. This macro is defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file and expands to a call of `constant_test_bit` or `variable_test_bit`, depending on the bit number:
-
-```C
-#define test_bit(nr, addr)			\
-	(__builtin_constant_p((nr))		\
-	 ? constant_test_bit((nr), (addr))	\
-	 : variable_test_bit((nr), (addr)))
-```
-
-So, if `nr` is known to be a compile-time constant, `test_bit` will be expanded to a call of the `constant_test_bit` function, or to `variable_test_bit` otherwise.
Now let's look at the implementations of these functions. Let's start from `variable_test_bit`:
-
-```C
-static inline int variable_test_bit(long nr, volatile const unsigned long *addr)
-{
-	int oldbit;
-
-	asm volatile("bt %2,%1\n\t"
-		     "sbb %0,%0"
-		     : "=r" (oldbit)
-		     : "m" (*(unsigned long *)addr), "Ir" (nr));
-
-	return oldbit;
-}
-```
-
-The `variable_test_bit` function takes a similar set of arguments as `set_bit` and the other functions do. We also see inline assembly code here which executes the [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) and [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) instructions. The `bt` or `bit test` instruction selects a given bit, specified with the first operand, from the bit array specified with the second operand, and stores its value in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) bit of the flags register. The second instruction, `sbb`, subtracts the first operand from the second and then also subtracts the value of `CF`. So, here we write the value of the given bit number from the given bit array to the `CF` bit of the flags register and execute the `sbb` instruction, which calculates `00000000 - CF` and writes the result to `oldbit`.
-
-The `constant_test_bit` function does the same as we saw in `set_bit`:
-
-```C
-static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
-{
-	return ((1UL << (nr & (BITS_PER_LONG-1))) &
-		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
-}
-```
-
-It generates a byte where the high bit is `1` and the other bits are `0` (as we saw in `CONST_MASK`) and applies bitwise [and](https://en.wikipedia.org/wiki/Bitwise_operation#AND) to the byte which contains the given bit number.
-
-The next widely used bit array related operation is to change a bit in a bit array. The Linux kernel provides two helpers for this:
-
-* `__change_bit`;
-* `change_bit`.
-
-As you can already guess, these two variants are atomic and non-atomic, as with, for example, `set_bit` and `__set_bit`.
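The toggling behavior of these helpers can be sketched in Python — again a userspace illustration, assuming 64-bit longs, where the returned "CF" is the old value of the bit:

```python
BITS_PER_LONG = 64  # assumption: 64-bit longs

def change_bit_model(nr, bitmap):
    # models `btc`: store the old bit value (CF), then complement the bit
    word, offset = divmod(nr, BITS_PER_LONG)
    cf = (bitmap[word] >> offset) & 1
    bitmap[word] ^= 1 << offset
    return cf

bitmap = [0b1]
print(change_bit_model(0, bitmap), bin(bitmap[0]))  # 1 0b0
print(change_bit_model(0, bitmap), bin(bitmap[0]))  # 0 0b1
```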
For a start, let's look at the implementation of the `__change_bit` function:
-
-```C
-static inline void __change_bit(long nr, volatile unsigned long *addr)
-{
-	asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
-}
-```
-
-Pretty easy, isn't it? The implementation of `__change_bit` is the same as `__set_bit`, but instead of the `bts` instruction we are using [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html). This instruction selects a given bit from a given bit array, stores its value in `CF` and changes its value by applying the complement operation. So, a bit with value `1` will become `0` and vice versa:
-
-```python
->>> int(not 1)
-0
->>> int(not 0)
-1
-```
-
-The atomic version of `__change_bit` is the `change_bit` function:
-
-```C
-static inline void change_bit(long nr, volatile unsigned long *addr)
-{
-	if (IS_IMMEDIATE(nr)) {
-		asm volatile(LOCK_PREFIX "xorb %1,%0"
-			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr)));
-	} else {
-		asm volatile(LOCK_PREFIX "btc %1,%0"
-			: BITOP_ADDR(addr)
-			: "Ir" (nr));
-	}
-}
-```
-
-It is similar to the `set_bit` function, but has two differences. The first difference is the `xor` operation instead of `or` and the second is `btc` instead of `bts`.
-
-At this point we know the most important architecture-specific operations on bit arrays. Time to look at the generic bitmap API.
-
-Common bit operations
-================================================================================
-
-Besides the architecture-specific API from the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, the Linux kernel provides a common API for manipulation of bit arrays.
As we know from the beginning of this part, we can find it in the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and additionally in the * [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) source code file. But before these source code files let's look into the [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) header file which provides a set of useful macro. Let's look on some of they. - -First of all let's look at following four macros: - -* `for_each_set_bit` -* `for_each_set_bit_from` -* `for_each_clear_bit` -* `for_each_clear_bit_from` - -All of these macros provide iterator over certain set of bits in a bit array. The first macro iterates over bits which are set, the second does the same, but starts from a certain bits. The last two macros do the same, but iterates over clear bits. Let's look on implementation of the `for_each_set_bit` macro: - -```C -#define for_each_set_bit(bit, addr, size) \ - for ((bit) = find_first_bit((addr), (size)); \ - (bit) < (size); \ - (bit) = find_next_bit((addr), (size), (bit) + 1)) -``` - -As we may see it takes three arguments and expands to the loop from first set bit which is returned as result of the `find_first_bit` function and to the last bit number while it is less than given size. - -Besides these four macros, the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) provides API for rotation of `64-bit` or `32-bit` values and etc. - -The next [header](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) file which provides API for manipulation with a bit arrays. For example it provdes two functions: - -* `bitmap_zero`; -* `bitmap_fill`. - -To clear a bit array and fill it with `1`. 
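Stepping back to the `for_each_set_bit` macro shown above, its iteration pattern can be modeled with `find_first_bit`/`find_next_bit` equivalents in Python — an illustrative sketch; the kernel helpers of course operate on arrays of `unsigned long`:

```python
def find_next_bit_model(bits, size, offset):
    # models find_next_bit: index of the first set bit >= offset, or size
    for i in range(offset, size):
        if (bits >> i) & 1:
            return i
    return size

def for_each_set_bit_model(bits, size):
    # mirrors the macro: start at find_first_bit, stop when bit >= size
    bit = find_next_bit_model(bits, size, 0)   # find_first_bit
    while bit < size:
        yield bit
        bit = find_next_bit_model(bits, size, bit + 1)

print(list(for_each_set_bit_model(0b101001, 8)))  # [0, 3, 5]
```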
Let's look at the implementation of the `bitmap_zero` function:
-
-```C
-static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
-{
-	if (small_const_nbits(nbits))
-		*dst = 0UL;
-	else {
-		unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
-		memset(dst, 0, len);
-	}
-}
-```
-
-First of all we can see the check for `nbits`. The `small_const_nbits` is a macro which is defined in the same header [file](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) and looks like:
-
-```C
-#define small_const_nbits(nbits) \
-	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
-```
-
-As we may see, it checks that `nbits` is a known compile-time constant and that its value does not exceed `BITS_PER_LONG` or `64`. If the number of bits does not exceed the number of bits in a `long` value, we can just set it to zero. Otherwise we need to calculate how many `long` values we need to fill our bit array and fill it with [memset](http://man7.org/linux/man-pages/man3/memset.3.html).
-
-The implementation of the `bitmap_fill` function is similar to the implementation of the `bitmap_zero` function, except that we fill a given bit array with `0xff` values or `0b11111111`:
-
-```C
-static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
-{
-	unsigned int nlongs = BITS_TO_LONGS(nbits);
-	if (!small_const_nbits(nbits)) {
-		unsigned int len = (nlongs - 1) * sizeof(unsigned long);
-		memset(dst, 0xff, len);
-	}
-	dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
-}
-```
-
-Besides the `bitmap_fill` and `bitmap_zero` functions, the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file provides `bitmap_copy`, which is similar to `bitmap_zero` but just uses [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) instead of [memset](http://man7.org/linux/man-pages/man3/memset.3.html). It also provides bitwise operations for bit arrays like `bitmap_and`, `bitmap_or`, `bitmap_xor`, etc.
We will not consider implementation of these functions because it is easy to understand implementations of these functions if you understood all from this part. Anyway if you are interested how did these function implemented, you may open [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and start to research. - -That's all. - -Links -================================================================================ - -* [bitmap](https://en.wikipedia.org/wiki/Bit_array) -* [linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure) -* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) -* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) -* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) -* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) -* [API](https://en.wikipedia.org/wiki/Application_programming_interface) -* [atomic operations](https://en.wikipedia.org/wiki/Linearizability) -* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html) -* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html) -* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html) -* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html) -* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html) -* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html) -* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html) -* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html) -* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) -* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html) -* [CF](https://en.wikipedia.org/wiki/FLAGS_register) -* [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) -* 
[gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) - - ------------------------------------------------------------------------------- - -via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md - -作者:[0xAX][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twitter.com/0xAX diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md index 9c3ffa55dd..48e84381c8 100644 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.09 Create Your Own Shell in Python : Part I I’m curious to know how a shell (like bash, csh, etc.) works internally. So, I implemented one called yosh (Your Own SHell) in Python to answer my own curiosity. The concept I explain in this article can be applied to other languages as well. 
diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md index af0ec01b36..3154839443 100644 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.09 Create Your Own Shell in Python - Part II =========================================== diff --git a/translated/tech/20160512 Bitmap in Linux Kernel.md b/translated/tech/20160512 Bitmap in Linux Kernel.md new file mode 100644 index 0000000000..6475b9260e --- /dev/null +++ b/translated/tech/20160512 Bitmap in Linux Kernel.md @@ -0,0 +1,405 @@ +--- +date: 2016-07-09 14:42 +status: public +title: 20160512 Bitmap in Linux Kernel +--- + +Linux 内核里的数据结构 +================================================================================ + +Linux 内核中的位数组和位操作 +-------------------------------------------------------------------------------- + +除了不同的基于[链式](https://en.wikipedia.org/wiki/Linked_data_structure)和[树](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)的数据结构以外,Linux 内核也为[位数组](https://en.wikipedia.org/wiki/Bit_array)或`位图`提供了 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。位数组在 Linux 内核里被广泛使用,并且在以下的源代码文件中包含了与这样的结构搭配使用的通用 `API`: + +* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) +* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) + +除了这两个文件之外,还有体系结构特定的头文件,它们为特定的体系结构提供优化的位操作。我们将探讨 [x86_64](https://en.wikipedia.org/wiki/X86-64) 体系结构,因此在我们的例子里,它会是 + +* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) + +头文件。正如我上面所写的,`位图`在 Linux 内核中被广泛地使用。例如,`位数组`常常用于保存一组在线/离线处理器,以便系统支持[热插拔](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)的 CPU(你可以在 [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) 部分阅读更多相关知识 
),一个`位数组`可以在 Linux 内核初始化等期间保存一组已分配的中断处理。
+
+因此,本部分的主要目的是了解位数组是如何在 Linux 内核中实现的。让我们现在开始吧。
+
+位数组声明
+================================================================================
+
+在我们开始查看位图操作的 `API` 之前,我们必须知道如何在 Linux 内核中声明它。有两种通用的方法声明位数组。第一种简单的声明一个位数组的方法是,定义一个 unsigned long 的数组,例如:
+
+```C
+unsigned long my_bitmap[8]
+```
+
+第二种方法,是使用 `DECLARE_BITMAP` 宏,它定义于 [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 头文件:
+
+```C
+#define DECLARE_BITMAP(name,bits) \
+    unsigned long name[BITS_TO_LONGS(bits)]
+```
+
+我们可以看到 `DECLARE_BITMAP` 宏使用两个参数:
+
+* `name` - 位图名称;
+* `bits` - 位图中位数;
+
+并且只是使用 `BITS_TO_LONGS(bits)` 元素展开 `unsigned long` 数组的定义。 `BITS_TO_LONGS` 宏将一个给定的位数转换为 `longs` 的个数,换言之,就是计算 `bits` 中有多少个 `8` 字节元素:
+
+```C
+#define BITS_PER_BYTE 8
+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+```
+
+因此,例如 `DECLARE_BITMAP(my_bitmap, 64)` 将产生:
+
+```python
+>>> (((64) + (64) - 1) / (64))
+1
+```
+
+与:
+
+```C
+unsigned long my_bitmap[1];
+```
+
+在能够声明一个位数组之后,我们便可以使用它了。
+
+体系结构特定的位操作
+================================================================================
+
+我们已经看了以上一对源文件和头文件,它们提供了位数组操作的 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。其中重要且广泛使用的位数组 API 是体系结构特定的,位于前面已提及的头文件 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 中。
+
+首先让我们查看两个最重要的函数:
+
+* `set_bit`;
+* `clear_bit`.
+ +我认为没有必要解释这些函数的作用。从它们的名字来看,这已经很清楚了。让我们直接查看它们的实现。如果你浏览 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,你将会注意到这些函数中的每一个都有[原子性](https://en.wikipedia.org/wiki/Linearizability)和非原子性两种变体。在我们开始深入这些函数的实现之前,首先,我们必须了解一些有关原子操作的知识。 + +简而言之,原子操作保证两个或以上的操作不会并发地执行同一数据。`x86` 体系结构提供了一系列原子指令,例如, [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html)、[cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) 等指令。除了原子指令,一些非原子指令可以在 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令的帮助下具有原子性。目前已经对原子操作有了充分的理解,我们可以接着探讨 `set_bit` 和 `clear_bit` 函数的实现。 + +我们先考虑函数的非原子性变体。非原子性的 `set_bit` 和 `clear_bit` 的名字以双下划线开始。正如我们所知道的,所有这些函数都定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并且第一个函数就是 `__set_bit`: + +```C +static inline void __set_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory"); +} +``` + +正如我们所看到的,它使用了两个参数: + +* `nr` - 位数组中的位号(从0开始,译者注) +* `addr` - 我们需要置位的位数组地址 + +注意,`addr` 参数使用 `volatile` 关键字定义,以告诉编译器给定地址指向的变量可能会被修改。 `__set_bit` 的实现相当简单。正如我们所看到的,它仅包含一行[内联汇编代码](https://en.wikipedia.org/wiki/Inline_assembler)。在我们的例子中,我们使用 [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) 指令,从位数组中选出一个第一操作数(我们的例子中的 `nr`),存储选出的位的值到 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 标志寄存器并设置该位(即 `nr` 指定的位置为1,译者注)。 + +注意,我们了解了 `nr` 的用法,但这里还有一个参数 `addr` 呢!你或许已经猜到秘密就在 `ADDR`。 `ADDR` 是一个定义在同一头文件的宏,它展开为一个包含给定地址和 `+m` 约束的字符串: + +```C +#define ADDR BITOP_ADDR(addr) +#define BITOP_ADDR(x) "+m" (*(volatile long *) (x)) +``` + +除了 `+m` 之外,在 `__set_bit` 函数中我们可以看到其他约束。让我们查看并试图理解它们所表示的意义: + +* `+m` - 表示内存操作数,这里的 `+` 表明给定的操作数为输入输出操作数; +* `I` - 表示整型常量; +* `r` - 表示寄存器操作数 + +除了这些约束之外,我们也能看到 `memory` 关键字,其告诉编译器这段代码会修改内存中的变量。到此为止,现在我们看看相同的原子性变体函数。它看起来比非原子性变体更加复杂: + +```C +static __always_inline void +set_bit(long nr, volatile unsigned long *addr) +{ + if (IS_IMMEDIATE(nr)) { + asm 
volatile(LOCK_PREFIX "orb %1,%0" + : CONST_MASK_ADDR(nr, addr) + : "iq" ((u8)CONST_MASK(nr)) + : "memory"); + } else { + asm volatile(LOCK_PREFIX "bts %1,%0" + : BITOP_ADDR(addr) : "Ir" (nr) : "memory"); + } +} +``` + +(BITOP_ADDR 的定义为:`#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))`,ORB 为字节按位或,译者注) + +首先注意,这个函数使用了与 `__set_bit` 相同的参数集合,但额外地使用了 `__always_inline` 属性标记。 `__always_inline` 是一个定义于 [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) 的宏,并且只是展开为 `always_inline` 属性: + +```C +#define __always_inline inline __attribute__((always_inline)) +``` + +其意味着这个函数总是内联的,以减少 Linux 内核映像的大小。现在我们试着了解 `set_bit` 函数的实现。首先我们在 `set_bit` 函数的开头检查给定的位数量。`IS_IMMEDIATE` 宏定义于相同[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h),并展开为 gcc 内置函数的调用: + +```C +#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr)) +``` + +如果给定的参数是编译期已知的常量,`__builtin_constant_p` 内置函数则返回 `1`,其他情况返回 `0`。假若给定的位数是编译期已知的常量,我们便无须使用效率低下的 `bts` 指令去设置位。我们可以只需在给定地址指向的字节和和掩码上执行 [按位或](https://en.wikipedia.org/wiki/Bitwise_operation#OR) 操作,其字节包含给定的位,而掩码为位号高位 `1`,其他位为 0。在其他情况下,如果给定的位号不是编译期已知常量,我们便做和 `__set_bit` 函数一样的事。`CONST_MASK_ADDR` 宏: + +```C +#define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + ((nr)>>3)) +``` + +展开为带有到包含给定位的字节偏移的给定地址,例如,我们拥有地址 `0x1000` 和 位号是 `0x9`。因为 `0x9` 是 `一个字节 + 一位`,所以我们的地址是 `addr + 1`: + +```python +>>> hex(0x1000 + (0x9 >> 3)) +'0x1001' +``` + +`CONST_MASK` 宏将我们给定的位号表示为字节,位号对应位为高位 `1`,其他位为 `0`: + +```C +#define CONST_MASK(nr) (1 << ((nr) & 7)) +``` + +```python +>>> bin(1 << (0x9 & 7)) +'0b10' +``` + +最后,我们应用 `按位或` 运算到这些变量上面,因此,假如我们的地址是 `0x4097` ,并且我们需要置位号为 `9` 的位 为 1: + +```python +>>> bin(0x4097) +'0b100000010010111' +>>> bin((0x4097 >> 0x9) | (1 << (0x9 & 7))) +'0b100010' +``` + +`第 9 位` 将会被置位。(这里的 9 是从 0 开始计数的,比如0010,按照作者的意思,其中的 1 是第 1 位,译者注) + +注意,所有这些操作使用 `LOCK_PREFIX` 标记,其展开为 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令,保证该操作的原子性。 + +正如我们所知,除了 `set_bit` 和 `__set_bit` 
操作之外,Linux 内核还提供了两个功能相反的函数,在原子性和非原子性的上下文中清位。它们为 `clear_bit` 和 `__clear_bit`。这两个函数都定义于同一个[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 并且使用相同的参数集合。不仅参数相似,一般而言,这些函数与 `set_bit` 和 `__set_bit` 也非常相似。让我们查看非原子性 `__clear_bit` 的实现吧: + +```C +static inline void __clear_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("btr %1,%0" : ADDR : "Ir" (nr)); +} +``` + +没错,正如我们所见,`__clear_bit` 使用相同的参数集合,并包含极其相似的内联汇编代码块。它仅仅使用 [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) 指令替换 `bts`。正如我们从函数名所理解的一样,通过给定地址,它清除了给定的位。`btr` 指令表现得像 `bts`(原文这里为 btr,可能为笔误,修正为 bts,译者注)。该指令选出第一操作数指定的位,存储它的值到 `CF` 标志寄存器,并且清楚第二操作数指定的位数组中的对应位。 + +`__clear_bit` 的原子性变体为 `clear_bit`: + +```C +static __always_inline void +clear_bit(long nr, volatile unsigned long *addr) +{ + if (IS_IMMEDIATE(nr)) { + asm volatile(LOCK_PREFIX "andb %1,%0" + : CONST_MASK_ADDR(nr, addr) + : "iq" ((u8)~CONST_MASK(nr))); + } else { + asm volatile(LOCK_PREFIX "btr %1,%0" + : BITOP_ADDR(addr) + : "Ir" (nr)); + } +} +``` + +并且正如我们所看到的,它与 `set_bit` 非常相似,同时只包含了两处差异。第一处差异为 `clear_bit` 使用 `btr` 指令来清位,而 `set_bit` 使用 `bts` 指令来置位。第二处差异为 `clear_bit` 使用否定的位掩码和 `按位与` 在给定的字节上置位,而 `set_bit` 使用 `按位或` 指令。 + +到此为止,我们可以在任何位数组置位和清位了,并且能够转到位掩码上的其他操作。 + +在 Linux 内核位数组上最广泛使用的操作是设置和清除位,但是除了这两个操作外,位数组上其他操作也是非常有用的。Linux 内核里另一种广泛使用的操作是知晓位数组中一个给定的位是否被置位。我们能够通过 `test_bit` 宏的帮助实现这一功能。这个宏定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并展开为 `constant_test_bit` 或 `variable_test_bit` 的调用,这要取决于位号。 + +```C +#define test_bit(nr, addr) \ + (__builtin_constant_p((nr)) \ + ? 
constant_test_bit((nr), (addr)) \ + : variable_test_bit((nr), (addr))) +``` + +因此,如果 `nr` 是编译期已知常量,`test_bit` 将展开为 `constant_test_bit` 函数的调用,而其他情况则为 `variable_test_bit`。现在让我们看看这些函数的实现,我们从 `variable_test_bit` 开始看起: + +```C +static inline int variable_test_bit(long nr, volatile const unsigned long *addr) +{ + int oldbit; + + asm volatile("bt %2,%1\n\t" + "sbb %0,%0" + : "=r" (oldbit) + : "m" (*(unsigned long *)addr), "Ir" (nr)); + + return oldbit; +} +``` + +`variable_test_bit` 函数调用了与 `set_bit` 及其他函数使用的相似的参数集合。我们也可以看到执行 [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) 和 [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) 指令的内联汇编代码。`bt` 或 `bit test` 指令从第二操作数指定的位数组选出第一操作数指定的一个指定位,并且将该位的值存进标志寄存器的 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 位。第二个指令 `sbb` 从第二操作数中减去第一操作数,再减去 `CF` 的值。因此,这里将一个从给定位数组中的给定位号的值写进标志寄存器的 `CF` 位,并且执行 `sbb` 指令计算: `00000000 - CF`,并将结果写进 `oldbit` 变量。 + +`constant_test_bit` 函数做了和我们在 `set_bit` 所看到的一样的事: + +```C +static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr) +{ + return ((1UL << (nr & (BITS_PER_LONG-1))) & + (addr[nr >> _BITOPS_LONG_SHIFT])) != 0; +} +``` + +它生成了一个位号对应位为高位 `1`,而其他位为 `0` 的字节(正如我们在 `CONST_MASK` 所看到的),并将 [按位与](https://en.wikipedia.org/wiki/Bitwise_operation#AND) 应用于包含给定位号的字节。 + +下一广泛使用的位数组相关操作是改变一个位数组中的位。为此,Linux 内核提供了两个辅助函数: + +* `__change_bit`; +* `change_bit`. + +你可能已经猜测到,就拿 `set_bit` 和 `__set_bit` 例子说,这两个变体分别是原子和非原子版本。首先,让我们看看 `__change_bit` 函数的实现: + +```C +static inline void __change_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("btc %1,%0" : ADDR : "Ir" (nr)); +} +``` + +相当简单,不是吗? 
`__change_bit` 的实现和 `__set_bit` 一样,只是我们使用 [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html) 替换 `bts` 指令而已。 该指令从一个给定位数组中选出一个给定位,将该位的值存进 `CF` 并使用求反操作改变它的值,因此值为 `1` 的位将变为 `0`,反之亦然:
+
+```python
+>>> int(not 1)
+0
+>>> int(not 0)
+1
+```
+
+ `__change_bit` 的原子版本为 `change_bit` 函数:
+
+```C
+static inline void change_bit(long nr, volatile unsigned long *addr)
+{
+	if (IS_IMMEDIATE(nr)) {
+		asm volatile(LOCK_PREFIX "xorb %1,%0"
+			: CONST_MASK_ADDR(nr, addr)
+			: "iq" ((u8)CONST_MASK(nr)));
+	} else {
+		asm volatile(LOCK_PREFIX "btc %1,%0"
+			: BITOP_ADDR(addr)
+			: "Ir" (nr));
+	}
+}
+```
+
+它和 `set_bit` 函数很相似,但也存在两点差异。第一处差异为 `xor` 操作而不是 `or`。第二处差异为 `btc`(原文为 `bts`,为作者笔误,译者注) 而不是 `bts`。
+
+目前,我们了解了最重要的体系特定的位数组操作,是时候看看一般的位图 API 了。
+
+通用位操作
+================================================================================
+
+除了 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 中体系特定的 API 外,Linux 内核提供了操作位数组的通用 API。正如我们本部分开头所了解的一样,我们可以在 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件和 [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) 源文件中找到它。但在查看这些源文件之前,我们先看看 [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) 头文件,其提供了一系列有用的宏,让我们看看它们当中一部分。
+
+首先我们看看以下 4 个宏:
+
+* `for_each_set_bit`
+* `for_each_set_bit_from`
+* `for_each_clear_bit`
+* `for_each_clear_bit_from`
+
+所有这些宏都提供了遍历位数组中某些位集合的迭代器。第一个宏迭代那些被置位的位。第二个宏也是一样,但它是从某一确定位开始。最后两个宏做的一样,但是迭代那些被清位的位。让我们看看 `for_each_set_bit` 宏:
+
+```C
+#define for_each_set_bit(bit, addr, size) \
+	for ((bit) = find_first_bit((addr), (size));	\
+	     (bit) < (size);				\
+	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+```
+
+正如我们所看到的,它使用了三个参数,并展开为一个循环,该循环从作为 `find_first_bit` 函数返回结果的第一个置位开始到最后一个置位且小于给定大小为止。
+
+除了这四个宏, [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 也提供了 `64-bit` 或 `32-bit` 值循环移位(rotation)的 API 等等。
+
+下一个 [头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 提供了操作位数组的 API。例如,它提供了以下两个函数:
+
+* `bitmap_zero`;
+* `bitmap_fill`.
+
+它们分别可以清除一个位数组和用 `1` 填充位数组。让我们看看 `bitmap_zero` 函数的实现:
+
+```C
+static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = 0UL;
+	else {
+		unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+		memset(dst, 0, len);
+	}
+}
+```
+
+首先我们可以看到对 `nbits` 的检查。 `small_const_nbits` 是一个定义在同一[头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 的宏:
+
+```C
+#define small_const_nbits(nbits) \
+	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
+```
+
+正如我们可以看到的,它检查 `nbits` 是否为编译期已知常量,并且其值不超过 `BITS_PER_LONG` 或 `64`。如果位数目没有超过一个 `long` 变量的位数,我们可以仅仅设置为 0。在其他情况,我们需要计算有多少个需要填充位数组的 `long` 变量并且使用 [memset](http://man7.org/linux/man-pages/man3/memset.3.html) 进行填充。
+
+`bitmap_fill` 函数的实现和 `bitmap_zero` 函数很相似,除了我们需要在给定的位数组中填写 `0xff` 或 `0b11111111`:
+
+```C
+static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
+{
+	unsigned int nlongs = BITS_TO_LONGS(nbits);
+	if (!small_const_nbits(nbits)) {
+		unsigned int len = (nlongs - 1) * sizeof(unsigned long);
+		memset(dst, 0xff, len);
+	}
+	dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
+}
+```
+
+除了 `bitmap_fill` 和 `bitmap_zero`,[include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件也提供了和 `bitmap_zero` 很相似的 `bitmap_copy`,只是仅仅使用 [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) 而不是 [memset](http://man7.org/linux/man-pages/man3/memset.3.html) 这点差异而已。它也提供了位数组的按位操作,像 `bitmap_and`, `bitmap_or`, `bitmap_xor` 等等。我们不会探讨这些函数的实现了,因为如果你理解了本部分的所有内容,这些函数的实现是很容易理解的。无论如何,如果你对这些函数是如何实现的感兴趣,你可以打开并研究 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件。
+
+本部分到此为止。
+
+链接
+================================================================================
+
+* [bitmap](https://en.wikipedia.org/wiki/Bit_array)
+* 
[linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure) +* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) +* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) +* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) +* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) +* [API](https://en.wikipedia.org/wiki/Application_programming_interface) +* [atomic operations](https://en.wikipedia.org/wiki/Linearizability) +* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html) +* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html) +* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html) +* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html) +* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html) +* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html) +* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html) +* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html) +* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) +* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html) +* [CF](https://en.wikipedia.org/wiki/FLAGS_register) +* [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) +* [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) + + +------------------------------------------------------------------------------ + +via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md + +作者:[0xAX][a] +译者:[cposture](https://github.com/cposture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/0xAX \ No newline at end of file From 052f6bf1faedcdaeb330e150ab2187691be74fdd Mon Sep 17 
00:00:00 2001 From: chenxinlong <237448382@qq.com> Date: Sun, 10 Jul 2016 01:23:01 +0800 Subject: [PATCH 097/471] [translating]Growing a carrer alongside Linux --- .../20160316 Growing a career alongside Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md b/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md index 24799508c3..b9e5e54f0a 100644 --- a/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md +++ b/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md @@ -1,3 +1,5 @@ +chenxinlong translating + Growing a career alongside Linux ================================== @@ -38,7 +40,7 @@ The constant evolution and fun of using Linux has been a driving force for me fo via: https://opensource.com/life/16/3/my-linux-story-michael-perry 作者:[Michael Perry][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/chenxinlong) 校对:[校对者ID](https://github.com/校对者ID) [a]: https://opensource.com/users/mpmilestogo From b52e80c3dcad00c66042d7eebf2bbbe44dbc20b0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 10 Jul 2016 08:08:09 +0800 Subject: [PATCH 098/471] PUB:20160218 How to Best Manage Encryption Keys on Linux @mudongliang --- ...to Best Manage Encryption Keys on Linux.md | 104 +++++++++++++++++ ...to Best Manage Encryption Keys on Linux.md | 110 ------------------ 2 files changed, 104 insertions(+), 110 deletions(-) create mode 100644 published/20160218 How to Best Manage Encryption Keys on Linux.md delete mode 100644 translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md diff --git a/published/20160218 How to Best Manage Encryption Keys on Linux.md b/published/20160218 How to Best Manage Encryption Keys on Linux.md new file mode 100644 index 0000000000..fa0e559ff9 --- /dev/null +++ b/published/20160218 How to Best Manage Encryption Keys on Linux.md @@ -0,0 +1,104 @@ 
+在 Linux 上管理加密密钥的最佳体验 +============================================= + +存储 SSH 的加密秘钥和记住密码一直是一个让人头疼的问题。但是不幸的是,在当前这个充满了恶意黑客和攻击的世界中,基本的安全预防是必不可少的。对于许多普通用户来说,大多数人只能是记住密码,也可能寻找到一个好程序去存储密码,正如我们提醒这些用户不要在每个网站采用相同的密码。但是对于在各个 IT 领域的人们,我们需要将这个事情提高一个层面。我们需要使用像 SSH 密钥这样的加密秘钥,而不只是密码。 + +设想一个场景:我有一个运行在云上的服务器,用作我的主 git 库。我有很多台工作电脑,所有这些电脑都需要登录到这个中央服务器去做 push 与 pull 操作。这里我设置 git 使用 SSH。当 git 使用 SSH 时,git 实际上是以 SSH 的方式登录到服务器,就好像你通过 SSH 命令打开一个服务器的命令行一样。为了把这些配置好,我在我的 .ssh 目录下创建一个配置文件,其中包含一个有服务器名字、主机名、登录用户、密钥文件路径等信息的主机项。之后我可以通过输入如下命令来测试这个配置是否正确。 + + ssh gitserver + +很快我就可以访问到服务器的 bash shell。现在我可以配置 git 使用相同配置项以及存储的密钥来登录服务器。这很简单,只是有一个问题:对于每一个我要用它登录服务器的电脑,我都需要有一个密钥文件,那意味着需要密钥文件会放在很多地方。我会在当前这台电脑上存储这些密钥文件,我的其他电脑也都需要存储这些。就像那些有特别多的密码的用户一样,我们这些 IT 人员也被这些特别多的密钥文件淹没。怎么办呢? + +### 清理 + +在我们开始帮助你管理密钥之前,你需要有一些密钥应该怎么使用的基础知识,以及明白我们下面的提问的意义所在。同时,有个前提,也是最重要的,你应该知道你的公钥和私钥该放在哪里。然后假设你应该知道: + +1. 公钥和私钥之间的差异; +2. 为什么你不可以从公钥生成私钥,但是反之则可以? +3. `authorized_keys` 文件的目的以及里面包含什么内容; +4. 如何使用私钥去登录一个你的对应公钥存储在其上的 `authorized_keys` 文件中的服务器。 + +![](http://www.linux.com/images/stories/41373/key-management-diagram.png) + +这里有一个例子。当你在亚马逊的网络服务上创建一个云服务器,你必须提供一个用于连接你的服务器的 SSH 密钥。每个密钥都有一个公开的部分(公钥)和私密的部分(私钥)。你要想让你的服务器安全,乍看之下你可能应该将你的私钥放到服务器上,同时你自己带着公钥。毕竟,你不想你的服务器被公开访问,对吗?但是实际上的做法正好是相反的。 + +你应该把自己的公钥放到 AWS 服务器,同时你持有用于登录服务器的私钥。你需要保护好私钥,并让它处于你的控制之中,而不是放在一些远程服务器上,正如上图中所示。 + +原因如下:如果公钥被其他人知道了,它们不能用于登录服务器,因为他们没有私钥。进一步说,如果有人成功攻入你的服务器,他们所能找到的只是公钥,他们不可以从公钥生成私钥。同时,如果你在其他的服务器上使用了相同的公钥,他们不可以使用它去登录别的电脑。 + +这就是为什么你要把你自己的公钥放到你的服务器上以便通过 SSH 登录这些服务器。你持有这些私钥,不要让这些私钥脱离你的控制。 + +但是还有一点麻烦。试想一下我 git 服务器的例子。我需要做一些抉择。有时我登录架设在别的地方的开发服务器,而在开发服务器上,我需要连接我的 git 服务器。如何使我的开发服务器连接 git 服务器?显然是通过使用私钥,但这样就会有问题。在该场景中,需要我把私钥放置到一个架设在别的地方的服务器上,这相当危险。 + +一个进一步的场景:如果我要使用一个密钥去登录许多的服务器,怎么办?如果一个入侵者得到这个私钥,这个人就能用这个私钥得到整个服务器网络的权限,这可能带来一些严重的破坏,这非常糟糕。 + +同时,这也带来了另外一个问题,我真的应该在这些其他服务器上使用相同的密钥吗?因为我刚才描述的,那会非常危险的。 + +最后,这听起来有些混乱,但是确实有一些简单的解决方案。让我们有条理地组织一下。 + +(注意,除了登录服务器,还有很多地方需要私钥密钥,但是我提出的这个场景可以向你展示当你使用密钥时你所面对的问题。) + +### 常规口令 + 
+当你创建你的密钥时,你可以选择是否包含一个密钥使用时的口令。有了这个口令,私钥文件本身就会被口令所加密。例如,如果你有一个公钥存储在服务器上,同时你使用私钥去登录服务器的时候,你会被提示输入该口令。没有口令,这个密钥是无法使用的。或者你也可以配置你的密钥不需要口令,然后只需要密钥文件就可以登录服务器了。 + +一般来说,不使用口令对于用户来说是更方便的,但是在很多情况下我强烈建议使用口令,原因是,如果私钥文件被偷了,偷密钥的人仍然不可以使用它,除非他或者她可以找到口令。在理论上,这个将节省你很多时间,因为你可以在攻击者发现口令之前,从服务器上删除公钥文件,从而保护你的系统。当然还有一些使用口令的其它原因,但是在很多场合这个原因对我来说更有价值。(举一个例子,我的 Android 平板上有 VNC 软件。平板上有我的密钥。如果我的平板被偷了之后,我会马上从服务器上删除公钥,使得它的私钥没有作用,无论有没有口令。)但是在一些情况下我不使用口令,是因为我正在登录的服务器上没有什么有价值的数据,这取决于情境。 + +### 服务器基础设施 + +你如何设计自己服务器的基础设施将会影响到你如何管理你的密钥。例如,如果你有很多用户登录,你将需要决定每个用户是否需要一个单独的密钥。(一般来说,应该如此;你不会想在用户之间共享私钥。那样当一个用户离开组织或者失去信任时,你可以删除那个用户的公钥,而不需要必须给其他人生成新的密钥。相似地,通过共享密钥,他们能以其他人的身份登录,这就更糟糕了。)但是另外一个问题是你如何配置你的服务器。举例来说,你是否使用像 Puppet 这样工具配置大量的服务器?你是否基于你自己的镜像创建大量的服务器?当你复制你的服务器,是否每一个的密钥都一样?不同的云服务器软件允许你配置如何选择;你可以让这些服务器使用相同的密钥,也可以给每一个服务器生成一个新的密钥。 + +如果你在操作这些复制的服务器,如果用户需要使用不同的密钥登录两个不同但是大部分都一样的系统,它可能导致混淆。但是另一方面,服务器共享相同的密钥会有安全风险。或者,第三,如果你的密钥有除了登录之外的需要(比如挂载加密的驱动),那么你会在很多地方需要相同的密钥。正如你所看到的,你是否需要在不同的服务器上使用相同的密钥不是我能为你做的决定;这其中有权衡,你需要自己去决定什么是最好的。 + +最终,你可能会有: + +- 需要登录的多个服务器 +- 多个用户登录到不同的服务器,每个都有自己的密钥 +- 每个用户使用多个密钥登录到不同的服务器 + +(如果你正在别的情况下使用密钥,这个同样的普适理论也能应用于如何使用密钥,需要多少密钥,它们是否共享,你如何处理公私钥等方面。) + +### 安全方法 + +了解你的基础设施和特有的情况,你需要组合一个密钥管理方案,它会指导你如何去分发和存储你的密钥。比如,正如我之前提到的,如果我的平板被偷了,我会从我服务器上删除公钥,我希望这在平板在用于访问服务器之前完成。同样的,我会在我的整体计划中考虑以下内容: + +1. 私钥可以放在移动设备上,但是必须包含口令; +2. 必须有一个可以快速地从服务器上删除公钥的方法。 + +在你的情况中,你可能决定你不想在自己经常登录的系统上使用口令;比如,这个系统可能是一个开发者一天登录多次的测试机器。这没有问题,但是你需要调整一点你的规则。你可以添加一条规则:不可以通过移动设备登录该机器。换句话说,你需要根据自己的状况构建你的准则,不要假设某个方案放之四海而皆准。 + +### 软件 + +至于软件,令人吃惊的是,现实世界中并没有很多好的、可靠的存储和管理私钥的软件解决方案。但是应该有吗?考虑下这个,如果你有一个程序存储你所有服务器的全部密钥,并且这个程序被一个快捷的密钥锁住,那么你的密钥就真的安全了吗?或者类似的,如果你的密钥被放置在你的硬盘上,用于 SSH 程序快速访问,密钥管理软件是否真正提供了任何保护吗? 
+ +但是对于整体基础设施和创建/管理公钥来说,有许多的解决方案。我已经提到了 Puppet,在 Puppet 的世界中,你可以创建模块以不同的方式管理你的服务器。这个想法是服务器是动态的,而且不需要精确地复制彼此。[这里有一个聪明的方法](http://manuel.kiessling.net/2014/03/26/building-manageable-server-infrastructures-with-puppet-part-4/),在不同的服务器上使用相同的密钥,但是对于每一个用户使用不同的 Puppet 模块。这个方案可能适合你,也可能不适合你。 + +或者,另一个选择就是完全换个不同的档位。在 Docker 的世界中,你可以采取一个不同的方式,正如[关于 SSH 和 Docker 博客](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)所描述的那样。 + +但是怎么样管理私钥?如果你搜索过的话,你无法找到很多可以选择的软件,原因我之前提到过;私钥存放在你的硬盘上,一个管理软件可能无法提到更多额外的安全。但是我使用这种方法来管理我的密钥: + +首先,我的 `.ssh/config` 文件中有很多的主机项。我要登录的都有一个主机项,但是有时我对于一个单独的主机有不止一项。如果我有很多登录方式,就会出现这种情况。对于放置我的 git 库的服务器来说,我有两个不同的登录项;一个限制于 git,另一个用于一般用途的 bash 访问。这个为 git 设置的登录选项在机器上有极大的限制。还记得我之前说的我存储在远程开发机器上的 git 密钥吗?好了。虽然这些密钥可以登录到我其中一个服务器,但是使用的账号是被严格限制的。 + +其次,大部分的私钥都包含口令。(对于需要多次输入口令的情况,考虑使用 [ssh-agent](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)。) + +再次,我有一些我想要更加小心地保护的服务器,我不会把这些主机项放在我的 host 文件中。这更加接近于社会工程方面,密钥文件还在,但是可能需要攻击者花费更长的时间去找到这个密钥文件,分析出来它们对应的机器。在这种情况下,我就需要手动打出来一条长长的 SSH 命令。(没那么可怕。) + +同时你可以看出来我没有使用任何特别的软件去管理这些私钥。 + +## 无放之四海而皆准的方案 + +我们偶尔会在 linux.com 收到一些问题,询问管理密钥的好软件的建议。但是退一步看,这个问题事实上需要重新思考,因为没有一个普适的解决方案。你问的问题应该基于你自己的状况。你是否简单地尝试找到一个位置去存储你的密钥文件?你是否寻找一个方法去管理多用户问题,其中每个人都需要将他们自己的公钥插入到 `authorized_keys` 文件中? 
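+顺带补充一个示意:前文「软件」一节提到,可以在 `.ssh/config` 中为同一台服务器配置多个不同的主机项(一个仅限 git 使用,一个用于一般的 bash 访问)。那样的配置大致是下面这种形式(注意其中的主机名、用户名和密钥文件路径都是为了说明而虚构的示例,并非原文作者的真实配置):

```
# 仅用于 git 操作的受限登录项(示例)
Host gitserver
    HostName git.example.com
    User git
    IdentityFile ~/.ssh/git_only_key

# 同一台主机上用于一般 bash 访问的登录项(示例)
Host gitserver-shell
    HostName git.example.com
    User admin
    IdentityFile ~/.ssh/admin_key
```

配置好之后,`ssh gitserver` 和 `ssh gitserver-shell` 就会以不同的账号、不同的密钥登录同一台主机。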
+ +通过这篇文章,我已经囊括了这方面的基础知识,希望到此你明白如何管理你的密钥,并且,只有当你问出了正确的问题,无论你寻找任何软件(甚至你需要另外的软件),它都会出现。 + +------------------------------------------------------------------------------ + +via: http://www.linux.com/learn/tutorials/838235-how-to-best-manage-encryption-keys-on-linux + +作者:[Jeff Cogswell][a] +译者:[mudongliang](https://github.com/mudongliang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linux.com/community/forums/person/62256 diff --git a/translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md b/translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md deleted file mode 100644 index b833d94658..0000000000 --- a/translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md +++ /dev/null @@ -1,110 +0,0 @@ -Linux 如何最好地管理加密密钥 -============================================= - -![](http://www.linux.com/images/stories/41373/key-management-diagram.png) - -存储 SSH 的加密秘钥以及记住密码一直是一个让人头疼的问题。但是不幸的是,在当前充满了恶意黑客和攻击的世界中,基本的安全预警是必不可少的。对于大量的普通用户,它相当于简单地记住密码,也可能寻找一个好程序去存储密码,正如我们提醒这些用户不要在每个网站都有相同的密码。但是对于在各个 IT 领域的我们,我们需要将这个提高一个层次。我们不得不处理加密秘钥,比如 SSH 密钥,而不只是密码。 - -设想一个场景:我有一个运行在云上的服务器,用于我的主 git 库。我有很多台工作电脑。所有电脑都需要登陆中央服务器去 push 与 pull。我设置 git 使用 SSH。当 git 使用 SSH, git 实际上以相同的方式登陆服务器,就好像你通过 SSH 命令打开一个服务器的命令行。为了配置所有内容,我在我的 .ssh 目录下创建一个配置文件,其中包含一个有服务器名字,主机名,登陆用户,密钥文件的路径的主机项。之后我可以通过输入命令来测试这个配置。 - ->ssh gitserver - -很快我得到了服务器的 bash shell。现在我可以配置 git 使用相同项与存储的密钥来登陆服务器。很简单,除了一个问题,对于每一个我用于登陆服务器的电脑,我需要有一个密钥文件。那意味着需要不止一个密钥文件。当前这台电脑和我的其他电脑都存储有这些密钥文件。同样的,用户每天有特大量的密码,对于我们 IT人员,很容易结束这特大量的密钥文件。怎么办呢? - -## 清理 - -在开始使用程序去帮助你管理你的密钥之前,你不得不在你的密码应该怎么处理和我们问的问题是否有意义这两个方面打下一些基础。同时,这需要第一,也是最重要的,你明白你的公钥和私钥的使用位置。我将设想你知道: - -1. 公钥和私钥之间的差异; - -2. 为什么你不可以从公钥生成私钥,但是你可以逆向生成? - -3. `authorized_keys` 文件的目的以及它的内容; - -4. 
你如何使用私钥去登陆服务器,其中服务器上的 `authorized_keys` 文件中存有相应的公钥; - -这里有一个例子。当你在亚马逊的网络服务上创建一个云服务器,你必须提供一个 SSH 密码,用于连接你的服务器。每一个密钥有一个公开的部分,和私密的部分。因为你想保持你的服务器安全,乍看之下你可能要将你的私钥放到服务器上,同时你自己带着公钥。毕竟,你不想你的服务器被公开访问,对吗?但是实际上这是逆向的。 - -你把自己的公钥放到 AWS 服务器,同时你持有你自己的私钥用于登陆服务器。你保护私钥,同时保持私钥在自己一方,而不是在一些远程服务器上,正如上图中所示。 - -原因如下:如果公钥公之于众,他们不可以登陆服务器,因为他们没有私钥。进一步说,如果有人成功攻入你的服务器,他们所能找到的只是公钥。你不可以从公钥生成私钥。同时如果你在其他的服务器上使用相同的密钥,他们不可以使用它去登陆别的电脑。 - -这就是为什么你把你自己的公钥放到你的服务器上以便通过 SSH 登陆这些服务器。你持有这些私钥,不要让这些私钥脱离你的控制。 - -但是还有麻烦。试想一下我 git 服务器的例子。我要做一些决定。有时我登陆架设在别的地方的开发服务器。在开发服务器上,我需要连接我的 git 服务器。如何使我的开发服务器连接 git 服务器?通过使用私钥。同时这里面还有麻烦。这个场景需要我把私钥放置到一个架设在别的地方的服务器上。这相当危险。 - -一个进一步的场景:如果我要使用一个密钥去登陆许多的服务器,怎么办?如果一个入侵者得到这个私钥,他或她将拥有私钥,并且得到服务器的全部虚拟网络的权限,同时准备做一些严重的破坏。这一点也不好。 - -同时那当然会带来一个别的问题,我真的应该在这些其他服务器上使用相同的密钥?因为我刚才描述的,那会非常危险的。 - -最后,这听起来有些混乱,但是有一些简单的解决方案。让我们有条理地组织一下: - -(注意你有很多地方需要密钥登陆服务器,但是我提出这个作为一个场景去向你展示当你处理密钥的时候你面对的问题) - -## 关于口令句 - -当你创建你的密钥时,你可以选择是否包含一个口令字,这个口令字会在使用私钥的时候是必不可少的。有了这个口令字,私钥文件本身会被口令字加密。例如,如果你有一个公钥存储在服务器上,同时你使用私钥去登陆服务器的时候,你会被提示,输入口令字。没有口令字,这个密钥是无法使用的。或者,你可以配置你的密钥不需要口令字。然后所有你需要的只是用于登陆服务器的密钥文件。 - -普遍上,不使用口令字对于用户来说是更容易的,但是我强烈建议在很多情况下使用口令字,原因是,如果私钥文件被偷了,偷密钥的人仍然不可以使用它,除非他或者她可以找到口令字。在理论上,这个将节省你很多时间,因为你可以在攻击者发现口令字之前,从服务器上删除公钥文件,从而保护你的系统。还有一些别的原因去使用口令字,但是这个原因对我来说在很多场合更有价值。(举一个例子,我的 Android 平板上有 VNC 软件。平板上有我的密钥。如果我的平板被偷了之后,我会马上从服务器上删除公钥,使得它的私钥没有作用,无论有没有口令字。)但是在一些情况下我不使用口令字,是因为我正在登陆的服务器上没有什么有价值的数据。它取决于情境。 - -## 服务器基础设施 - -你如何设置自己服务器的基础设置将会影响到你如何管理你的密钥。例如,如果你有很多用户登陆,你将需要决定每个用户是否需要一个单独的密钥。(普遍来说,他们应该;你不会想要用户之间共享私钥。那样当一个用户离开组织或者失去信任时,你可以删除那个用户的公钥,而不需要必须给其他人生成新的密钥。相似地,通过共享密钥,他们能以其他人的身份登录,这就更坏了。)但是另外一个问题,你如何配置你的服务器。你是否使用工具,比如 Puppet,配置大量的服务器?同时你是否基于你自己的镜像创建大量的服务器?当你复制你的服务器,你是否需要为每个人设置相同的密钥?不同的云服务器软件允许你配置这个;你可以让这些服务器使用相同的密钥,或者给每一个生成一个新的密钥。 - -如果你在处理复制的服务器,它可能导致混淆如果用户需要使用不同的密钥登陆两个不同的系统。但是另一方面,服务器共享相同的密钥会有安全风险。或者,第三,如果你的密钥有除了登陆之外的需要(比如挂载加密的驱动),之后你会在很多地方需要相同的密钥。正如你所看到的,你是否需要在不同的服务器上使用相同的密钥不是我为你做的决定;这其中有权衡,而且你需要去决定什么是最好的。 - -最终,你可能会有: - -- 需要登录的多个服务器 - -- 多个用户登陆不同的服务器,每个都有自己的密钥 - -- 每个用户多个密钥当他们登陆不同的服务器的时候 - 
-(如果你正在别的情况下使用密钥,相同的普遍概念会应用于如何使用密钥,需要多少密钥,他们是否共享,你如何处理密钥的私密部分和公开部分。) - -## 安全方法 - -知道你的基础设施和独一无二的情况,你需要组合一个密钥管理方案,它会引导你去分发和存储你的密钥。比如,正如我之前提到的,如果我的平板被偷了,我会从我服务器上删除公钥,期望在平板在用于访问服务器。同样的,我会在我的整体计划中考虑以下内容: - -1. 移动设备上的私钥没有问题,但是必须包含口令字; - -2. 必须有一个方法可以快速地从服务器上删除公钥。 - -在你的情况中,你可能决定,你不想在自己经常登录的系统上使用口令字;比如,这个系统可能是一个开发者一天登录多次的测试机器。这没有问题,但是你需要调整你的规则。你可能添加一条规则,不可以通过移动设备登录机器。换句话说,你需要根据自己的状况构建你的协议,不要假设某个方案放之四海而皆准。 - -## 软件 - -至于软件,毫不意外,现实世界中并没有很多好的,可用的软件解决方案去存储和管理你的私钥。但是应该有吗?考虑到这个,如果你有一个程序存储你所有服务器的全部密钥,并且这个程序被一个核心密钥锁住,那么你的密钥就真的安全了吗?或者,同样的,如果你的密钥被放置在你的硬盘上,用于 SSH 程序快速访问,那样一个密钥管理软件是否真正提供了任何保护吗? - -但是对于整体基础设施和创建,管理公钥,有许多的解决方案。我已经提到了 Puppet。在 Puppet 的世界中,你创建模块来以不同的方式管理你的服务器。这个想法是服务器是动态的,而且不必要准确地复制其他机器。[这里有一个聪明的途径](http://manuel.kiessling.net/2014/03/26/building-manageable-server-infrastructures-with-puppet-part-4/),它在不同的服务器上使用相同的密钥,但是对于每一个用户使用不同的 Puppet 模块。这个方案可能适合你,也可能不适合你。 - -或者,另一个选项就是完全换挡。在 Docker 的世界中,你可以采取一个不同的方式,正如[关于 SSH 和 Docker 博客](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)所描述的。 - -但是怎么样管理私钥?如果你搜索,你无法找到很多的软件选择,原因我之前提到过;密钥存放在你的硬盘上,一个管理软件可能无法提到很多额外的安全。但是我确实使用这种方法来管理我的密钥: - -首先,我的 `.ssh/config` 文件中有很多的主机项。我有一个我登陆的主机项,但是有时我对于一个单独的主机有不止一项。如果我有很多登陆,那种情况就会发生。对于架设我的 git 库的服务器,我有两个不同的登陆项;一个限制于 git,另一个为普遍目的的 bash 访问。这个为 git 设置的登陆选项在机器上有极大的限制。还记得我之前说的关于我存在于远程开发机器上的 git 密钥吗?好了。虽然这些密钥可以登陆到我其中一个服务器,但是使用的账号是被严格限制的。 - -其次,大部分的私钥都包含口令字。(对于处理不得不多次输入口令字的情况,考虑使用 [ssh-agent](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)。) - -再次,我确实有许多服务器,我想要更加小心地防御,并且在我 host 文件中并没有这样的项。这更加接近于社交工程方面,因为密钥文件还存在于那里,但是可能需要攻击者花费更长的时间去定位这个密钥文件,分析出来他们攻击的机器。在这些例子中,我只是手动打出来长的 SSH 命令。(这真不怎么坏。) - -同时你可以看出来我没有使用任何特别的软件去管理这些私钥。 - -## 无放之四海而皆准的方案 - -我们偶然间收到 linux.com 的问题,关于管理密钥的好软件的建议。但是让我们后退一步。这个问题事实上需要重新定制,因为没有一个普适的解决方案。你问的问题基于你自己的状况。你是否简单地尝试找到一个位置去存储你的密钥文件?你是否寻找一个方法去管理多用户问题,其中每个人都需要将他们自己的公钥插入到 `authorized_keys` 文件中? 
- -通过这篇文章,我已经囊括了基础知识,希望到此你明白如何管理你的密钥,并且,只有当你问了正确的问题,无论你寻找任何软件(甚至你需要另外的软件),它都会出现。 - ------------------------------------------------------------------------------- - -via: http://www.linux.com/learn/tutorials/838235-how-to-best-manage-encryption-keys-on-linux - -作者:[Jeff Cogswell][a] -译者:[mudongliang](https://github.com/mudongliang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linux.com/community/forums/person/62256 From 881615ade8c5473f69836b0f654f4c675dbe67bf Mon Sep 17 00:00:00 2001 From: FrankXinqi Date: Sun, 10 Jul 2016 13:18:33 +0800 Subject: [PATCH 099/471] Delete 20151117 How bad a boss is Linus Torvalds.md --- ...151117 How bad a boss is Linus Torvalds.md | 78 ------------------- 1 file changed, 78 deletions(-) delete mode 100644 sources/talk/20151117 How bad a boss is Linus Torvalds.md diff --git a/sources/talk/20151117 How bad a boss is Linus Torvalds.md b/sources/talk/20151117 How bad a boss is Linus Torvalds.md deleted file mode 100644 index 552155a18c..0000000000 --- a/sources/talk/20151117 How bad a boss is Linus Torvalds.md +++ /dev/null @@ -1,78 +0,0 @@ -Translating by FrankXinqi -How bad a boss is Linus Torvalds? -================================================================================ -![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) - -*Linus Torvalds addressed a packed auditorium of Linux enthusiasts during his speech at the LinuxWorld show in San Jose, California, on August 10, 1999. Credit: James Niccolai* - -**It depends on context. In the world of software development, he’s what passes for normal. The question is whether that situation should be allowed to continue.** - -I've known Linus Torvalds, Linux's inventor, for over 20 years. We're not chums, but we like each other. - -Lately, Torvalds has been getting a lot of flack for his management style. 
Linus doesn't suffer fools gladly. He has one way of judging people in his business of developing the Linux kernel: How good is your code? - -Nothing else matters. As Torvalds said earlier this year at the Linux.conf.au Conference, "I'm not a nice person, and I don't care about you. [I care about the technology and the kernel][1] -- that's what's important to me." - -Now, I can deal with that kind of person. If you can't, you should avoid the Linux kernel community, where you'll find a lot of this kind of meritocratic thinking. Which is not to say that I think everything in Linuxland is hunky-dory and should be impervious to calls for change. A meritocracy I can live with; a bastion of male dominance where women are subjected to scorn and disrespect is a problem. - -That's why I see the recent brouhaha about Torvalds' management style -- or more accurately, his total indifference to the personal side of management -- as nothing more than standard operating procedure in the world of software development. And at the same time, I see another instance that has come to light as evidence of a need for things to really change. - -The first situation arose with the [release of Linux 4.3][2], when Torvalds used the Linux Kernel Mailing List to tear into a developer who had inserted some networking code that Torvalds thought was -- well, let's say "crappy." "[[A]nd it generates [crappy] code.][3] It looks bad, and there's no reason for it." He goes on in this vein for quite a while. Besides the word "crap" and its earthier synonym, he uses the word "idiotic" pretty often. - -Here's the thing, though. He's right. I read the code. It's badly written and it does indeed seem to have been designed to use the new "overflow_usub()" function just for the sake of using it. - -Now, some people see this diatribe as evidence that Torvalds is a bad-tempered bully. I see a perfectionist who, within his field, doesn't put up with crap. 
- -Many people have told me that this is not how professional programmers should act. People, have you ever worked with top developers? That's exactly how they act, at Apple, Microsoft, Oracle and everywhere else I've known them. - -I've heard Steve Jobs rip a developer to pieces. I've cringed while a senior Oracle developer lead tore into a room of new programmers like a piranha through goldfish. - -In Accidental Empires, his classic book on the rise of PCs, Robert X. Cringely described Microsoft's software management style when Bill Gates was in charge as a system where "Each level, from Gates on down, screams at the next, goading and humiliating them." Ah, yes, that's the Microsoft I knew and hated. - -The difference between the leaders at big proprietary software companies and Torvalds is that he says everything in the open for the whole world to see. The others do it in private conference rooms. I've heard people claim that Torvalds would be fired in their company. Nope. He'd be right where he is now: on top of his programming world. - -Oh, and there's another difference. If you get, say, Larry Ellison mad at you, you can kiss your job goodbye. When you get Torvalds angry at your work, you'll get yelled at in an email. That's it. - -You see, Torvalds isn't anyone's boss. He's the guy in charge of a project with about 10,000 contributors, but he has zero hiring and firing authority. He can hurt your feelings, but that's about it. - -That said, there is a serious problem within both open-source and proprietary software development circles. No matter how good a programmer you are, if you're a woman, the cards are stacked against you. - -No case shows this better than that of Sarah Sharp, an Intel developer and formerly a top Linux programmer. 
[In a post on her blog in October][4], she explained why she had stopped contributing to the Linux kernel more than a year earlier: "I finally realized that I could no longer contribute to a community where I was technically respected, but I could not ask for personal respect.... I did not want to work professionally with people who were allowed to get away with subtle sexist or homophobic jokes." - -Who can blame her? I can't. Torvalds, like almost every software manager I've ever known, I'm sorry to say, has permitted a hostile work environment. - -He would probably say that it's not his job to ensure that Linux contributors behave with professionalism and mutual respect. He's concerned with the code and nothing but the code. - -As Sharp wrote: - -> I have the utmost respect for the technical efforts of the Linux kernel community. They have scaled and grown a project that is focused on maintaining some of the highest coding standards out there. The focus on technical excellence, in combination with overloaded maintainers, and people with different cultural and social norms, means that Linux kernel maintainers are often blunt, rude, or brutal to get their job done. Top Linux kernel developers often yell at each other in order to correct each other's behavior. -> -> That's not a communication style that works for me. … -> -> Many senior Linux kernel developers stand by the right of maintainers to be technically and personally brutal. Even if they are very nice people in person, they do not want to see the Linux kernel communication style change. - -She's right. - -Where I differ from other observers is that I don't think that this problem is in any way unique to Linux or open-source communities. With five years of work in the technology business and 25 years as a technology journalist, I've seen this kind of immature boy behavior everywhere. - -It's not Torvalds' fault. He's a technical leader with a vision, not a manager. 
The real problem is that there seems to be no one in the software development universe who can set a supportive tone for teams and communities. - -Looking ahead, I hope that companies and organizations, such as the Linux Foundation, can find a way to empower community managers or other managers to encourage and enforce civil behavior. - -We won't, unfortunately, find that kind of managerial finesse in our pure technical or business leaders. It's not in their DNA. - --------------------------------------------------------------------------------- - -via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html - -作者:[Steven J. Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html -[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/ -[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html -[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/ From 6af4748c8c461292252d40bab071a9e9290248cc Mon Sep 17 00:00:00 2001 From: FrankXinqi Date: Sun, 10 Jul 2016 13:21:17 +0800 Subject: [PATCH 100/471] Finish Translating How bad a boss is Linus Torvalds --- ...151117 How bad a boss is Linus Torvalds.md | 80 +++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 translated/talk/20151117 How bad a boss is Linus Torvalds.md diff --git a/translated/talk/20151117 How bad a boss is Linus Torvalds.md b/translated/talk/20151117 How bad a boss is Linus Torvalds.md new file mode 100644 index 0000000000..5c7375c190 --- /dev/null +++ b/translated/talk/20151117 How bad a boss is Linus Torvalds.md @@ -0,0 +1,80 @@ +Translated by FrankXinqi 
+ +Linus Torvalds 作为一个老板有多么糟糕? +================================================================================ +![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) + +*1999 年 8 月 10 日,加利福尼亚州圣何塞市,在 LinuxWorld Show 上 Linus Torvalds 在一个坐满 Linux 爱好者的礼堂中发表了一篇演讲。图片来源:James Niccolai* + +**这取决于所处的领域。在软件开发的世界中,他的风格就算是正常的。问题是,这种情况是否应该被允许继续下去?** + +Linus Torvalds 是 Linux 的发明者,我认识他超过 20 年了。我们不是密友,但是我们欣赏彼此。 + +最近,Linus Torvalds 因为他的管理风格遭到了严厉的抨击。Linus 无法容忍胡来的人。在 Linux 内核开发这件事上,他评判人的标准只有一个:你的代码写得有多好? + +没有什么比这更重要了。正如 Linus 今年早些时候在 Linux.conf.au 大会上说的那样,「我不是一个友好的人,并且我不关心你。[我关心的是技术和内核][1] -- 那才是对我重要的东西。」 + +现在,这种人我可以打交道。如果你不能,你应当避开 Linux 内核社区,因为在那里你会遇到许多抱有这种精英思想的人。这并不是说我认为 Linux 世界里一切都好得无可挑剔、不需要任何改变的呼声。精英体制我可以接受;但一个让女性经常受到蔑视和无礼对待的男性主导的堡垒,就是一个问题了。 + +这就是为什么在我看来,最近围绕 Linus 管理风格 -- 或者更准确地说,他对管理中人性一面的完全漠视 -- 的争吵,只不过是软件开发世界里的标准做事方式罢了。与此同时,我也看到了另一个浮出水面的事例,它证明有些事情确实需要改变。 + +第一件事出现在 [Linux 4.3 发布][2]的时候,Linus 在 Linux 内核邮件列表上狠狠地痛骂了一位开发者,因为对方插入了一段 Linus 认为 -- 好吧,我们姑且称之为「垃圾」的网络代码。「[而且它会生成垃圾代码。][3]这看起来很糟糕,而且完全没有理由这样做。」他用这种口吻持续骂了相当长的篇幅。除了「垃圾」及其更粗俗的同义词之外,他还频繁地使用了「愚蠢的」这个词。 + +但是,事情是这样的:他是对的。我读了那段代码。代码确实写得很烂,而且看起来开发者只是为了用上新的「overflow_usub()」函数才特意使用它的。 + +现在,一些人把 Linus 的这番谩骂看作他脾气暴躁而且恃强凌弱的证据。而我看到的是一个完美主义者,他在自己的领域里容不下垃圾。 + +许多人告诉我,这不是专业程序员应当有的行为。各位,你们和顶尖的开发者一起工作过吗?据我所知,在 Apple、Microsoft、Oracle 以及其他所有我了解的地方,他们就是这样做事的。 + +我曾听过 Steve Jobs 把一个开发者骂得体无完肤。我也曾蜷缩在一旁,看着 Oracle 的一位高级开发主管像食人鱼冲进金鱼群那样痛斥满屋子的新程序员。 + +在意外的电脑帝国,在 Robert X. 
Cringely 关于 PC 崛起的经典书籍中,他把 Bill Gates 掌权时期微软的软件管理风格描述为这样一套体系:『从盖茨开始,每一个层级都会向下一级怒吼,刺激他们,甚至羞辱他们。』啊,是的,这就是我认识并且厌恶的那个微软。 + +大型私有软件公司的领导人与 Linus 的区别在于,Linus 说的一切都是公开的,全世界都看得见,而其他人是在不对外开放的会议室里这么干的。我听人说,Linus 要是在他们公司早就被开除了。不会的。他会正好待在他现在所在的位置上:他那个编程世界的最顶端。 + +哦,还有另一个区别。比如说,如果你惹恼了 Larry Ellison(Oracle 的首席执行官),你就可以和你的工作说再见了。而如果你的工作惹恼了 Linus,你会在邮件里挨一顿骂。仅此而已。 + +你看,Linus 不是任何人的老板。他负责着一个有大约 10,000 名贡献者的项目,但他没有任何雇用和解聘的权力。他能做的只是伤害你的感情,仅此而已。 + +话虽如此,开源和私有软件开发圈子中确实都存在一个严重的问题。不管你是一个多么优秀的程序员,如果你是女性,形势就是对你不利的。 + +没有哪个例子比 Sarah Sharp 的遭遇更能说明这一点了,她是 Intel 的开发者,曾经是顶尖的 Linux 程序员。[在她十月份的一篇博客][4]中,她解释了为什么她在一年多以前就停止了对 Linux 内核的贡献:『我最终意识到,我无法再为这样一个社区做贡献了:在那里,我虽然在技术上受到尊重,却无法要求对个人的尊重……我不想和那些开了带有性别歧视或恐同意味的玩笑却可以不受追究的人共事。』 + +谁能责怪她呢?我不能。很遗憾,我不得不说,Linus 和几乎每一个我认识的软件经理一样,纵容了这种充满敌意的工作环境。 + +他大概会说,确保 Linux 贡献者举止专业、相互尊重不是他的工作。他只关心代码,除了代码什么都不关心。 + +就像 Sarah Sharp 写的那样: + + +> 我对 Linux 内核社区的技术努力抱有最大的敬意。他们发展并壮大了一个致力于维持业界最高编码标准的项目。对卓越技术的专注,加上超负荷工作的维护者,以及来自不同文化和社会规范的人们,意味着 Linux 内核的维护者们为了完成工作常常直言不讳、粗鲁甚至残酷。顶尖的 Linux 内核开发者为了纠正彼此的行为,常常对彼此大喊大叫。 +> +> 这不是一种适合我的沟通方式。…… +> +> 许多资深的 Linux 内核开发者坚持认为,维护者有权在技术上和人际上表现得残酷。即使他们本人是非常友好的人,他们也不想看到 Linux 内核的交流风格发生改变。 + +她是对的。 + +我与其他评论者的不同之处在于,我不认为这个问题是 Linux 或开源社区所独有的。凭着五年技术行业的工作经历和 25 年技术记者的从业经历,我在任何地方都见过这种不成熟的大男孩行为。 + +这不是 Linus 的错。他不是一个经理,而是一个有远见的技术领袖。真正的问题似乎在于,整个软件开发的世界里,没有人能够为团队和社区营造一种相互支持的氛围。 + +展望未来,我希望像 Linux Foundation 这样的公司和组织,能够找到一种方式,授权社区经理或其他管理者去鼓励并强制推行文明的行为。 + +遗憾的是,我们无法在纯技术或纯商业的领导人身上找到那种管理上的细腻。这不在他们的基因里。 + +-------------------------------------------------------------------------------- + +via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[FrankXinqi](https://github.com/FrankXinqi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html +[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/ +[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html +[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/ From a703d7b4d351d4e0b90541d39b1e8057fa3210cc Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 10 Jul 2016 15:34:57 +0800 Subject: =?UTF-8?q?20160710-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r game with python and asyncio - part 1.md | 72 +++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md diff --git a/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md new file mode 100644 index 0000000000..62af26b481 --- /dev/null +++ b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md @@ -0,0 +1,72 @@ +Writing online multiplayer game with python and asyncio - part 1 +=================================================================== + +Have you ever combined async with Python? Here I’ll tell you how to do it and show it on a [working example][1] - a popular Snake game, designed for multiple players. + +[Play game][2] + +### 1. Introduction + +Massive multiplayer online games are undoubtedly one of the main trends of our century, in both tech and cultural domains. 
And while for a long time writing a server for an MMO game was associated with massive budgets and complex low-level programming techniques, things have been changing rapidly in recent years. Modern frameworks based on dynamic languages allow handling thousands of parallel user connections on moderate hardware. At the same time, the HTML5 and WebSockets standards enabled the creation of real-time graphics-based game clients that run directly in a web browser, without any extensions. + +Python may not be the most popular tool for creating scalable non-blocking servers, especially compared to the popularity of node.js in this area. But the latest versions of Python are aimed at changing this. The introduction of the [asyncio][3] library and the special [async/await][4] syntax makes asynchronous code look as straightforward as regular blocking code, which now makes Python a worthy choice for asynchronous programming. So I will try to utilize these new features to demonstrate a way to create an online multiplayer game. + +### 2. Getting asynchronous + +A game server should handle the maximum possible number of parallel user connections and process them all in real time. And the typical solution - creating threads - doesn't solve the problem in this case. Running thousands of threads requires the CPU to switch between them all the time (this is called context switching), which creates big overhead, making it very inefficient. It is even worse with processes, because, in addition, they occupy too much memory. In Python there is one more problem - the regular Python interpreter (CPython) is not designed to be multithreaded; it aims to achieve maximum performance for single-threaded apps instead. That's why it uses the GIL (global interpreter lock), a mechanism which doesn't allow multiple threads to run Python code at the same time, in order to prevent uncontrolled usage of the same shared objects. 
Normally the interpreter switches to another thread when the currently running thread is waiting for something, usually a response from I/O (like a response from a web server, for example). This allows having non-blocking I/O operations in your app, because every operation blocks only one thread instead of blocking the whole server. However, it also makes the general multithreading idea nearly useless, because it doesn't allow you to execute Python code in parallel, even on a multi-core CPU. At the same time, it is completely possible to have non-blocking I/O in one single thread, thus eliminating the need for heavy context switching. + +Actually, single-threaded non-blocking I/O is a thing you can do in pure Python. All you need is the standard [select][5] module, which allows you to write an event loop waiting for I/O from non-blocking sockets. However, this approach requires you to define all the app logic in one place, and soon your app becomes a very complex state machine. There are frameworks that simplify this task; the most popular are [tornado][6] and [twisted][7]. They are utilized to implement complex protocols using callback methods (and this is similar to node.js). The framework runs its own event loop, invoking your callbacks on the defined events. And while this may be a way to go for some, it still requires programming in callback style, which makes your code fragmented. Compare this to just writing synchronous code and running multiple copies concurrently, like we would do with normal threads. Why wouldn't this be possible in one thread? + +And this is where the concept of microthreads comes in. The idea is to have concurrently running tasks in one thread. When you call a blocking function in one task, behind the scenes it calls a "manager" (or "scheduler") that runs an event loop. And when there is some event ready to process, the manager passes execution to a task waiting for it. 
That task will also run until it reaches a blocking call, and then it will return execution to a manager again. + +>Microthreads are also called lightweight threads or green threads (a term which came from Java world). Tasks which are running concurrently in pseudo-threads are called tasklets, greenlets or coroutines. + +One of the first implementations of microthreads in Python was [Stackless Python][8]. It got famous because it is used in a very successful online game [EVE online][9]. This MMO game boasts about a persistent universe, where thousands of players are involved in different activities, all happening in the real time. Stackless is a standalone Python interpreter which replaces standard function calling stack and controls the flow directly to allow minimum possible context-switching expenses. Though very effective, this solution remained less popular than "soft" libraries that work with a standard interpreter. Packages like [eventlet][10] and [gevent][11] come with patching of a standard I/O library in the way that I/O function pass execution to their internal event loop. This allows turning normal blocking code into non-blocking in a very simple way. The downside of this approach is that it is not obvious from the code, which calls are non-blocking. A newer version of Python introduced native coroutines as an advanced form of generators. Later in Python 3.4 they included asyncio library which relies on native coroutines to provide single-thread concurrency. But only in python 3.5 coroutines became an integral part of python language, described with the new keywords async and await. 
Here is a simple example, which illustrates using asyncio to run concurrent tasks: + +``` +import asyncio + +async def my_task(seconds): + print("start sleeping for {} seconds".format(seconds)) + await asyncio.sleep(seconds) + print("end sleeping for {} seconds".format(seconds)) + +all_tasks = asyncio.gather(my_task(1), my_task(2)) +loop = asyncio.get_event_loop() +loop.run_until_complete(all_tasks) +loop.close() +``` + +We launch two tasks, one sleeps for 1 second, the other - for 2 seconds. The output is: + +``` +start sleeping for 1 seconds +start sleeping for 2 seconds +end sleeping for 1 seconds +end sleeping for 2 seconds +``` + +As you can see, coroutines do not block each other - the second task starts before the first is finished. This is happening because asyncio.sleep is a coroutine which returns execution to a scheduler until the time will pass. In the next section, we will use coroutine-based tasks to create a game loop. + +-------------------------------------------------------------------------------- + +via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ + +作者:[Kyrylo Subbotin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ +[1]: http://snakepit-game.com/ +[2]: http://snakepit-game.com/ +[3]: https://docs.python.org/3/library/asyncio.html +[4]: https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-492 +[5]: https://docs.python.org/2/library/select.html +[6]: http://www.tornadoweb.org/ +[7]: http://twistedmatrix.com/ +[8]: http://www.stackless.com/ +[9]: http://www.eveonline.com/ +[10]: http://eventlet.net/ +[11]: http://www.gevent.org/ From 29ea3e80ac4610c3de0fdf91989deabc010980a7 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 10 Jul 2016 15:35:13 +0800 Subject: 
[PATCH 102/471] =?UTF-8?q?20160710-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r game with python and asyncio - part 2.md | 233 ++++++++++++++++++ 1 file changed, 233 insertions(+) create mode 100644 sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md diff --git a/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md b/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md new file mode 100644 index 0000000000..a33b0b0fdc --- /dev/null +++ b/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md @@ -0,0 +1,233 @@ +Writing online multiplayer game with python and asyncio - Part 2 +================================================================== + +![](https://7webpages.com/media/cache/fd/d1/fdd1f8f8bbbf4166de5f715e6ed0ac00.gif) + +Have you ever made an asynchronous Python app? Here I’ll tell you how to do it and in the next part, show it on a [working example][1] - a popular Snake game, designed for multiple players. + +see the intro and theory about how to [Get Asynchronous [part 1]][2] + +[Play the game][3] + +### 3. Writing game loop + +The game loop is a heart of every game. It runs continuously to get player's input, update state of the game and render the result on the screen. In online games the loop is divided into client and server parts, so basically there are two loops which communicate over the network. Usually, client role is to get player's input, such as keypress or mouse movement, pass this data to a server and get back the data to render. The server side is processing all the data coming from players, updating game's state, doing necessary calculations to render next frame and passes back the result, such as new placement of game objects. It is very important not to mix client and server roles without a solid reason. 
If you start doing game logic calculations on the client side, you can easily go out of sync with other clients, and your game can also be cheated by simply passing any data from the client side. + +A game loop iteration is often called a tick. Tick is an event meaning that current game loop iteration is over and the data for the next frame(s) is ready. +In the next examples we will use the same client, which connects to a server from a web page using WebSocket. It runs a simple loop which passes pressed keys' codes to the server and displays all messages that come from the server. [Client source code is located here][4]. + +#### Example 3.1: Basic game loop + +[Example 3.1 source code][5] + +We will use [aiohttp][6] library to create a game server. It allows creating web servers and clients based on asyncio. A good thing about this library is that it supports normal http requests and websockets at the same time. So we don't need other web servers to render game's html page. + +Here is how we run the server: + +``` +app = web.Application() +app["sockets"] = [] + +asyncio.ensure_future(game_loop(app)) + +app.router.add_route('GET', '/connect', wshandler) +app.router.add_route('GET', '/', handle) + +web.run_app(app) +``` + +web.run_app is a handy shortcut to create server's main task and to run asyncio event loop with its run_forever() method. I suggest you check the source code of this method to see how the server is actually created and terminated. + +An app is a dict-like object which can be used to share data between connected clients. We will use it to store a list of connected sockets. This list is then used to send notification messages to all connected clients. A call to asyncio.ensure_future() will schedule our main game_loop task which sends a 'tick' message to clients every 2 seconds. This task will run concurrently in the same asyncio event loop along with our web server.
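The same scheduling pattern, a background game_loop task running alongside per-client tasks in one event loop, can be tried without aiohttp. In this stripped-down sketch, plain asyncio.Queue objects stand in for the websockets, and asyncio.run replaces the explicit loop setup shown earlier; all names here are illustrative:

```python
import asyncio

async def game_loop(app, ticks=3):
    # broadcast a "tick" message to every connected client queue
    for n in range(ticks):
        for q in app["sockets"]:
            q.put_nowait("tick %d" % n)
        await asyncio.sleep(0.01)

async def client(q, ticks=3):
    # each client just collects the broadcast messages
    return [await q.get() for _ in range(ticks)]

async def main():
    app = {"sockets": [asyncio.Queue(), asyncio.Queue()]}
    loop_task = asyncio.ensure_future(game_loop(app))  # scheduled, not awaited yet
    received = await asyncio.gather(*(client(q) for q in app["sockets"]))
    await loop_task
    return received

received = asyncio.run(main())
print(received)  # [['tick 0', 'tick 1', 'tick 2'], ['tick 0', 'tick 1', 'tick 2']]
```

Both clients receive every tick, because the broadcaster and the client tasks interleave in the same event loop.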
+ +There are 2 web request handlers: handle just serves a html page and wshandler is our main websocket server's task which handles interaction with game clients. With every connected client a new wshandler task is launched in the event loop. This task adds client's socket to the list, so that game_loop task may send messages to all the clients. Then it echoes every keypress back to the client with a message. + +In the launched tasks we are running worker loops over the main event loop of asyncio. A switch between tasks happens when one of them uses await statement to wait for a coroutine to finish. For instance, asyncio.sleep just passes execution back to a scheduler for a given amount of time, and ws.receive() is waiting for a message from websocket, while the scheduler may switch to some other task. + +After you open the main page in a browser and connect to the server, just try to press some keys. Their codes will be echoed back from the server and every 2 seconds this message will be overwritten by game loop's 'tick' message which is sent to all clients. + +So we have just created a server which is processing client's keypresses, while the main game loop is doing some work in the background and updates all clients periodically. + +#### Example 3.2: Starting game loop by request + +[Example 3.2 source code][7] + +In the previous example a game loop was running continuously all the time during the life of the server. But in practice, there is usually no sense to run game loop when no one is connected. Also, there may be different game "rooms" running on one server. In this concept one player "creates" a game session (a match in a multiplayer game or a raid in MMO for example) so other players may join it. Then a game loop runs while the game session continues. + +In this example we use a global flag to check if a game loop is running, and we start it when the first player connects. In the beginning, a game loop is not running, so the flag is set to False. 
A game loop is launched from the client's handler: + +``` + if app["game_is_running"] == False: + asyncio.ensure_future(game_loop(app)) +``` + +This flag is then set to True at the start of game loop() and then back to False in the end, when all clients are disconnected. + +#### Example 3.3: Managing tasks + +[Example 3.3 source code][8] + +This example illustrates working with task objects. Instead of storing a flag, we store game loop's task directly in our application's global dict. This may be not an optimal thing to do in a simple case like this, but sometimes you may need to control already launched tasks. +``` + if app["game_loop"] is None or \ + app["game_loop"].cancelled(): + app["game_loop"] = asyncio.ensure_future(game_loop(app)) +``` + +Here ensure_future() returns a task object that we store in a global dict; and when all users disconnect, we cancel it with + +``` + app["game_loop"].cancel() +``` + +This cancel() call tells scheduler not to pass execution to this coroutine anymore and sets its state to cancelled which then can be checked by cancelled() method. And here is one caveat worth to mention: when you have external references to a task object and exception happens in this task, this exception will not be raised. Instead, an exception is set to this task and may be checked by exception() method. Such silent fails are not useful when debugging a code. Thus, you may want to raise all exceptions instead. To do so you need to call result() method of unfinished task explicitly. This can be done in a callback: + +``` + app["game_loop"].add_done_callback(lambda t: t.result()) +``` + +Also if we are going to cancel this task in our code and we don't want to have CancelledError exception, it has a point checking its "cancelled" state: +``` + app["game_loop"].add_done_callback(lambda t: t.result() + if not t.cancelled() else None) +``` + +Note that this is required only if you store a reference to your task objects. 
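The behaviour described above, a silently stored exception and the cancelled state, is easy to verify in isolation. Here is a minimal self-contained sketch (asyncio.run is used for brevity, and the coroutine name is invented):

```python
import asyncio

async def fragile():
    await asyncio.sleep(0)
    raise RuntimeError("boom")

async def main():
    # 1. An exception in a referenced task is stored, not raised:
    t1 = asyncio.ensure_future(fragile())
    await asyncio.sleep(0.01)      # let the task fail silently
    stored = t1.exception()        # retrieve the stored exception explicitly

    # 2. A cancelled task reports cancelled() == True:
    t2 = asyncio.ensure_future(fragile())
    t2.cancel()
    try:
        await t2
    except asyncio.CancelledError:
        pass
    return type(stored).__name__, t2.cancelled()

result = asyncio.run(main())
print(result)  # ('RuntimeError', True)
```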
In the previous examples all exceptions are raised directly without additional callbacks. + +#### Example 3.4: Waiting for multiple events + +[Example 3.4 source code][9] + +In many cases, you need to wait for multiple events inside client's handler. Beside a message from a client, you may need to wait for different types of things to happen. For instance, if your game's time is limited, you may wait for a signal from timer. Or, you may wait for a message from other process using pipes. Or, for a message from a different server in the network, using a distributed messaging system. + +This example is based on example 3.1 for simplicity. But in this case we use Condition object to synchronize game loop with connected clients. We do not keep a global list of sockets here as we are using sockets only within the handler. When game loop iteration ends, we notify all clients using Condition.notify_all() method. This method allows implementing publish/subscribe pattern within asyncio event loop. + +To wait for two events in the handler, first, we wrap awaitable objects in a task using ensure_future() + +``` + if not recv_task: + recv_task = asyncio.ensure_future(ws.receive()) + if not tick_task: + await tick.acquire() + tick_task = asyncio.ensure_future(tick.wait()) +``` + +Before we can call Condition.wait(), we need to acquire a lock behind it. That is why, we call tick.acquire() first. This lock is then released after calling tick.wait(), so other coroutines may use it too. But when we get a notification, a lock will be acquired again, so we need to release it calling tick.release() after received notification. + +We are using asyncio.wait() coroutine to wait for two tasks. + +``` + done, pending = await asyncio.wait( + [recv_task, + tick_task], + return_when=asyncio.FIRST_COMPLETED) +``` + +It blocks until either of tasks from the list is completed. Then it returns 2 lists: tasks which are done and tasks which are still running. 
If the task is done, we set it to None so it may be created again on the next iteration. + +#### Example 3.5: Combining with threads + +[Example 3.5 source code][10] + +In this example we combine asyncio loop with threads by running the main game loop in a separate thread. As I mentioned before, it's not possible to perform real parallel execution of python code with threads because of GIL. So it is not a good idea to use other thread to do heavy calculations. However, there is one reason to use threads with asyncio: this is the case when you need to use other libraries which do not support asyncio. Using these libraries in the main thread will simply block execution of the loop, so the only way to use them asynchronously is to run in a different thread. + +We run game loop using run_in_executor() method of asyncio loop and ThreadPoolExecutor. Note that game_loop() is not a coroutine anymore. It is a function that is executed in another thread. However, we need to interact with the main thread to notify clients on the game events. And while asyncio itself is not threadsafe, it has methods which allow running your code from another thread. These are call_soon_threadsafe() for normal functions and run_coroutine_threadsafe() for coroutines. We will put a code which notifies clients about game's tick to notify() coroutine and runs it in the main event loop from another thread. + +``` +def game_loop(asyncio_loop): + print("Game loop thread id {}".format(threading.get_ident())) + async def notify(): + print("Notify thread id {}".format(threading.get_ident())) + await tick.acquire() + tick.notify_all() + tick.release() + + while 1: + task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop) + # blocking the thread + sleep(1) + # make sure the task has finished + task.result() +``` + +When you launch this example, you will see that "Notify thread id" is equal to "Main thread id", this is because notify() coroutine is executed in the main thread. 
While sleep(1) call is executed in another thread, and, as a result, it will not block the main event loop. + +#### Example 3.6: Multiple processes and scaling up + +[Example 3.6 source code][11] + +One threaded server may work well, but it is limited to one CPU core. To scale the server beyond one core, we need to run multiple processes containing their own event loops. So we need a way for processes to interact with each other by exchanging messages or sharing game's data. Also in games, it is often required to perform heavy calculations, such as path finding and alike. These tasks are sometimes not possible to complete quickly within one game tick. It is not recommended to perform time-consuming calculations in coroutines, as it will block event processing, so in this case, it may be reasonable to pass the heavy task to other process running in parallel. + +The easiest way to utilize multiple cores is to launch multiple single core servers, like in the previous examples, each on a different port. You can do this with supervisord or similar process-controller system. Then, you may use a load balancer, such as HAProxy, to distribute connecting clients between the processes. There are different ways for processes to interact wich each other. One is to use network-based systems, which allows you to scale to multiple servers as well. There are already existing adapters to use popular messaging and storage systems with asyncio. Here are some examples: + +- [aiomcache][12] for memcached client +- [aiozmq][13] for zeroMQ +- [aioredis][14] for Redis storage and pub/sub + +You can find many other packages like this on github and pypi, most of them have "aio" prefix. + +Using network services may be effective to store persistent data and exchange some kind of messages. But its performance may be not enough if you need to perform real-time data processing that involves inter-process communications. In this case, a more appropriate way may be using standard unix pipes. 
asyncio has support for pipes and there is a [very low-level example of the server which uses pipes][15] in aiohttp repository. + +In the current example, we will use python's high-level [multiprocessing][16] library to instantiate new process to perform heavy calculations on a different core and to exchange messages with this process using multiprocessing.Queue. Unfortunately, the current implementation of multiprocessing is not compatible with asyncio. So every blocking call will block the event loop. But this is exactly the case where threads will be helpful because if we run multiprocessing code in a different thread, it will not block our main thread. All we need is to put all inter-process communications to another thread. This example illustrates this technique. It is very similar to multi-threading example above, but we create a new process from a thread. + +``` +def game_loop(asyncio_loop): + # coroutine to run in main thread + async def notify(): + await tick.acquire() + tick.notify_all() + tick.release() + + queue = Queue() + + # function to run in a different process + def worker(): + while 1: + print("doing heavy calculation in process {}".format(os.getpid())) + sleep(1) + queue.put("calculation result") + + Process(target=worker).start() + + while 1: + # blocks this thread but not main thread with event loop + result = queue.get() + print("getting {} in process {}".format(result, os.getpid())) + task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop) + task.result() +``` + +Here we run worker() function in another process. It contains a loop doing heavy calculations and putting results to the queue, which is an instance of multiprocessing.Queue. Then we get the results and notify clients in the main event loop from a different thread, exactly as in the example 3.5. This example is very simplified, it doesn't have a proper termination of the process. Also, in a real game, we would probably use the second queue to pass data to the worker. 
+ +There is a project called [aioprocessing][17], which is a wrapper around multiprocessing that makes it compatible with asyncio. However, it uses exactly the same approach as described in this example - creating processes from threads. It will not give you any advantage, other than hiding these tricks behind a simple interface. Hopefully, in the next versions of Python, we will get a multiprocessing library based on coroutines and supports asyncio. + +>Important! If you are going to run another asyncio event loop in a different thread or sub-process created from main thread/process, you need to create a loop explicitly, using asyncio.new_event_loop(), otherwise, it will not work. + +-------------------------------------------------------------------------------- + +via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/ + +作者:[Kyrylo Subbotin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/ +[1]: http://snakepit-game.com/ +[2]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ +[3]: http://snakepit-game.com/ +[4]: https://github.com/7WebPages/snakepit-game/blob/master/simple/index.html +[5]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_basic.py +[6]: http://aiohttp.readthedocs.org/ +[7]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_handler.py +[8]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_global.py +[9]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_wait.py +[10]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_thread.py +[11]: 
https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_process.py +[12]: https://github.com/aio-libs/aiomcache +[13]: https://github.com/aio-libs/aiozmq +[14]: https://github.com/aio-libs/aioredis +[15]: https://github.com/KeepSafe/aiohttp/blob/master/examples/mpsrv.py +[16]: https://docs.python.org/3.5/library/multiprocessing.html +[17]: https://github.com/dano/aioprocessing From 873e8caa692723e7671aed69b4f44a6fd7316c0a Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 10 Jul 2016 15:53:05 +0800 Subject: [PATCH 103/471] =?UTF-8?q?20160710-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r game with python and asyncio - part 3.md | 137 ++++++++++++++++++ 1 file changed, 137 insertions(+) create mode 100644 sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md diff --git a/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md b/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md new file mode 100644 index 0000000000..90cf71c466 --- /dev/null +++ b/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md @@ -0,0 +1,137 @@ +Writing online multiplayer game with python and asyncio - Part 3 +================================================================= + +![](https://7webpages.com/media/cache/17/81/178135a6db5074c72a1394d31774c658.gif) + +In this series, we are making an asynchronous Python app on the example of a multiplayer [Snake game][1]. The previous article focused on [Writing Game Loop][2] and Part 1 was covering how to [Get Asynchronous][3]. + +You can find the code [here][4]. + +### 4. Making a complete game + +![](https://7webpages.com/static/img/14chs7.gif) + +#### 4.1 Project's overview + +In this part, we will review a design of a complete online game. It is a classic snake game with added multiplayer. 
You can try it yourself at (). The source code is located [in github repository][5]. The game consists of the following files: + +- [server.py][6] - a server handling main game loop and connections. +- [game.py][7] - a main Game class, which implements game's logic and most of the game's network protocol. +- [player.py][8] - Player class, containing individual player's data and snake's representation. This one is responsible for getting player's input and moving the snake accordingly. +- [datatypes.py][9] - basic data structures. +- [settings.py][10] - game settings, with descriptions in the comments. +- [index.html][11] - all html and javascript client part in one file. + +#### 4.2 Inside a game loop + +Multiplayer snake game is a good example to learn because of its simplicity. All snakes move to one position every single frame, and frames are changing at a very slow rate, allowing you to watch how the game engine actually works. There is no instant reaction to player's keypresses because of the slow speed. A pressed key is remembered and then taken into account while calculating the next frame at the end of game loop's iteration. + +> Modern action games are running at much higher frame rates and often frame rates of server and client are not equal. Client frame rate usually depends on the client hardware performance, while server frame rate is fixed. A client may render several frames after getting the data corresponding to one "game tick". This allows creating smooth animations, which are only limited by client's performance. In this case, a server should pass not only current positions of the objects but also their moving directions, speeds and velocities. And while client frame rate is called FPS (frames per second), server frame rate is called TPS (ticks per second). In this snake game example both values are equal, and one frame displayed by a client is calculated within one server's tick.
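A fixed server tick rate (TPS) like the one described in the note can be sketched as a fixed-timestep loop; the 10 TPS value and the function name here are illustrative:

```python
import time

TPS = 10                 # server ticks per second
TICK = 1.0 / TPS

def run_ticks(num_ticks, update):
    # schedule ticks against absolute deadlines, so a slow update()
    # call does not make the whole loop drift over time
    deadline = time.monotonic()
    for tick in range(num_ticks):
        update(tick)
        deadline += TICK
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

frames = []
run_ticks(5, frames.append)
print(frames)  # [0, 1, 2, 3, 4]
```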
+ +We will use textmode-like play field, which is, in fact, a html table with one-char cells. All objects of the game are displayed with characters of different colors placed in table's cells. Most of the time client passes pressed keys' codes to the server and gets back play field updates with every "tick". An update from server consists of messages representing characters to render along with their coordinates and colors. So we are keeping all game logic on the server and we are sending to client only rendering data. In addition, we minimize the possibilities to hack the game by substituting its information sent over the network. + +#### 4.3 How does it work? + +The server in this game is close to Example 3.2 for simplicity. But instead of having a global list of connected websockets, we have one server-wide Game object. A Game instance contains a list of Player objects (inside self._players attribute) which represents players connected to this game, their personal data and websocket objects. Having all game-related data in a Game object also allows us to have multiple game rooms if we want to add such feature. In this case, we need to maintain multiple Game objects, one per game started. + +All interactions between server and clients are done with messages encoded in json. Message from the client containing only a number is interpreted as a code of the key pressed by the player. Other messages from client are sent in the following format: + +``` +[command, arg1, arg2, ... argN ] +``` + +Messages from server are sent as a list because there is often a bunch of messages to send at once (rendering data mostly): + +``` +[[command, arg1, arg2, ... argN ], ... ] +``` + +At the end of every game loop iteration, the next frame is calculated and sent to all the clients. Of course, we are not sending complete frame every time, but only a list of changes for the next frame. + +Note that players are not joining the game immediately after connecting to the server. 
The connection starts in "spectator" mode, so one can watch how others are playing if the game is already started, or see the "game over" screen from the previous game session. Then a player may press the "Join" button to join the existing game or to create a new game if the game is not currently running (no other active players). In the latter case, the play field is cleared before the start. + +The play field is stored in the Game._world attribute, which is a 2d array made of nested lists. It is used to keep the game field's state internally. Each element of the array represents a field's cell which is then rendered to an html table cell. It has a type of Char, which is a namedtuple consisting of a character and color. It is important to keep the play field in sync with all the connected clients, so all updates to the play field should be made only along with sending corresponding messages to the clients. This is performed by the Game.apply_render() method. It receives a list of Draw objects, which is then used to update the play field internally and also to send render messages to clients. + +We are using namedtuple not only because it is a good way to represent simple data structures, but also because it takes less space compared to a dict when sent in a json message. If you are sending complex data structures in a real game app, it is recommended to serialize them into a plain and shorter format or even pack them in a binary format (such as bson instead of json) to minimize network traffic. + +The Player object contains the snake's representation in a deque object. This data type is similar to a list but is more effective for adding and removing elements on its sides, so it is ideal to represent a moving snake. The main method of the class is Player.render_move(); it returns rendering data to move the player's snake to the next position. Basically, it renders the snake's head in the new position and removes the last element where the tail was in the previous frame.
In case the snake has eaten a digit and has to grow, a tail is not moving for a corresponding number of frames. The snake rendering data is used in Game.next_frame() method of the main class, which implements all game logic. This method renders all snake moves and checks for obstacles in front of every snake and also spawns digits and "stones". It is called directly from game_loop() to generate the next frame at every "tick". + +In case there is an obstacle in front of snake's head, a Game.game_over() method is called from Game.next_frame(). It notifies all connected clients about the dead snake (which is turned into stones by player.render_game_over() method) and updates top scores table. Player object's alive flag is set to False, so this player will be skipped when rendering the next frames, until joining the game once again. In case there are no more snakes alive, a "game over" message is rendered at the game field. Also, the main game loop will stop and set game.running flag to False, which will cause a game field to be cleared when some player will press "Join" button next time. + +Spawning of digits and stones is also happening while rendering every next frame, and it is determined by random values. A chance to spawn a digit or a stone can be changed in settings.py along with some other values. Note that digit spawning is happening for every live snake in the play field, so the more snakes are there, the more digits will appear, and they all will have enough food to consume. + +#### 4.4 Network protocol +List of messages sent from client + +Command | Parameters |Description +:-- |:-- |:-- +new_player | [name] |Setting player's nickname +join | |Player is joining the game + + +List of messages sent from server + +Command | Parameters |Description +:-- |:-- |:-- +handshake |[id] |Assign id to a player +world |[[(char, color), ...], ...] 
|Initial play field (world) map +reset_world | |Clean up world map, replacing all characters with spaces +render |[x, y, char, color] |Display character at position +p_joined |[id, name, color, score] |New player joined the game +p_gameover |[id] |Game ended for a player +p_score |[id, score] |Setting score for a player +top_scores |[[name, score, color], ...] |Update top scores table + +Typical messages exchange order + +Client -> Server |Server -> Client |Server -> All clients |Commentaries +:-- |:-- |:-- |:-- +new_player | | |Name passed to server + |handshake | |ID assigned + |world | |Initial world map passed + |top_scores | |Recent top scores table passed +join | | |Player pressed "Join", game loop started + | |reset_world |Command clients to clean up play field + | |render, render, ... |First game tick, first frame rendered +(key code) | | |Player pressed a key + | |render, render, ... |Second frame rendered + | |p_score |Snake has eaten a digit + | |render, render, ... |Third frame rendered + | | |... Repeat for a number of frames ... + | |p_gameover |Snake died when trying to eat an obstacle + | |top_scores |Updated top scores table (if updated) + +### 5. Conclusion + +To tell the truth, I really enjoy using the latest asynchronous capabilities of Python. The new syntax really makes a difference, so async code is now easily readable. It is obvious which calls are non-blocking and when the green thread switching is happening. So now I can claim with confidence that Python is a good tool for asynchronous programming. + +SnakePit has become very popular at 7WebPages team, and if you decide to take a break at your company, please, don’t forget to leave a feedback for us, say, on [Twitter][12] or [Facebook][13] . 
+ +Get to know more from: + + + +-------------------------------------------------------------------------------- + +via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-part-3/ + +作者:[Saheetha Shameer][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-part-3/ +[1]: http://snakepit-game.com/ +[2]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/ +[3]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ +[4]: https://github.com/7WebPages/snakepit-game +[5]: https://github.com/7WebPages/snakepit-game +[6]: https://github.com/7WebPages/snakepit-game/blob/master/server.py +[7]: https://github.com/7WebPages/snakepit-game/blob/master/game.py +[8]: https://github.com/7WebPages/snakepit-game/blob/master/player.py +[9]: https://github.com/7WebPages/snakepit-game/blob/master/datatypes.py +[10]: https://github.com/7WebPages/snakepit-game/blob/master/settings.py +[11]: https://github.com/7WebPages/snakepit-game/blob/master/index.html +[12]: https://twitter.com/7WebPages +[13]: https://www.facebook.com/7WebPages/ From b60e48372495b9d92e370cfc17c97b5df5c7b4c4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 10 Jul 2016 16:02:42 +0800 Subject: [PATCH 104/471] =?UTF-8?q?20160710-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
The People I Follow On Twitter Are Men.md | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md

diff --git a/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md
new file mode 100644
index 0000000000..7aeab64d2f
--- /dev/null
+++ b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md
@@ -0,0 +1,87 @@
+72% Of The People I Follow On Twitter Are Men
+===============================================
+
+![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/abacus.jpg)
+
+At least, that's my estimate. Twitter does not ask users their gender, so I [have written a program that guesses][1] based on their names. Among those who follow me, the distribution is even worse: 83% are men. None are gender-nonbinary as far as I can tell.
+
+The way to fix the first number is not mysterious: I should notice and seek out more women experts tweeting about my interests, and follow them.
+
+The second number, on the other hand, I can merely influence, but I intend to improve it as well. My network on Twitter should represent the software industry's diverse future, not its unfair present.
+
+### How Did I Measure It?
+
+I set out to estimate the gender distribution of the people I follow—my "friends" in Twitter's jargon—and found it surprisingly hard. [Twitter analytics][2] readily shows me the converse, an estimate of my followers' gender:
+
+![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/twitter-analytics.png)
+
+So, Twitter analytics divides my followers' accounts among male, female, and unknown, and tells me the ratio of the first two groups. (Gender-nonbinary folk are absent here—they're lumped in with the Twitter accounts of organizations, and those whose gender is simply unknown.) But Twitter doesn't tell me the ratio of my friends.
That [which is measured improves][3], so I searched for a service that would measure this number for me, and found [FollowerWonk][4]. + +FollowerWonk guesses my friends are 71% men. Is this a good guess? For the sake of validation, I compare FollowerWonk's estimate of my followers to Twitter's estimate: + +**Twitter analytics** + + |men |women +:-- |:-- |:-- +Followers |83% |17% + +**FollowerWonk** + + |men |women +:-- |:-- |:-- +Followers |81% |19% +Friends I follow |72% |28% + +My followers show up 81% male here, close to the Twitter analytics number. So far so good. If FollowerWonk and Twitter agree on the gender ratio of my followers, that suggests FollowerWonk's estimate of the people I follow (which Twitter doesn't analyze) is reasonably good. With it, I can make a habit of measuring my numbers, and improve them. + +At $30 a month, however, checking my friends' gender distribution with FollowerWonk is a pricey habit. I don't need all its features anyhow. Can I solve only the gender-distribution problem economically? + +Since FollowerWonk's numbers seem reasonable, I tried to reproduce them. Using Python and [some nice Philadelphians' Twitter][5] API wrapper, I began downloading the profiles of all my friends and followers. I immediately found that Twitter's rate limits are miserly, so I randomly sampled only a subset of users instead. + +I wrote a rudimentary program that searches for a pronoun announcement in each of my friends' profiles. For example, a profile description that includes "she/her" probably belongs to a woman, a description with "they/them" is probably nonbinary. But most don't state their pronouns: for these, the best gender-correlated information is the "name" field: for example, @gvanrossum's name field is "Guido van Rossum", and the first name "Guido" suggests that @gvanrossum is male. Where pronouns were not announced, I decided to use first names to estimate my numbers. 
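
That heuristic is easy to sketch in a few lines of Python. (This is only an illustration: the helper name and the two-entry name table are hypothetical, and the real script consults a full name database rather than a hard-coded sample.)

```python
# Illustrative sketch of the guessing heuristic: announced pronouns win,
# otherwise fall back to a lookup of the first name.
NAME_TABLE = {"guido": "male", "ada": "female"}  # hypothetical sample data

def guess_gender(description, name):
    """Prefer pronouns stated in the profile; fall back to the first name."""
    text = description.lower()
    if "she/her" in text:
        return "female"
    if "he/him" in text:
        return "male"
    if "they/them" in text:
        return "nonbinary"
    first_name = name.split()[0].lower() if name.strip() else ""
    return NAME_TABLE.get(first_name, "unknown")

print(guess_gender("Pythonista, she/her", "Alex Smith"))  # female
print(guess_gender("BDFL emeritus", "Guido van Rossum"))  # male
print(guess_gender("", "Brooklyn Zen Center"))            # unknown
```

Note the last call: with no pronouns and no name-table hit, the only honest answer is "unknown", which is why name-based guessing can only ever be a rough estimate.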
+ +My script passes parts of each name to the SexMachine library to guess gender. [SexMachine][6] has predictable downfalls, like mistaking "Brooklyn Zen Center" for a woman named "Brooklyn", but its estimates are as good as FollowerWonk's and Twitter's: + + + + |nonbinary |men |women |no gender,unknown +:-- |:-- |:-- |:-- |:-- +Friends I follow |1 |168 |66 |173 + |0% |72% |28% | +Followers |0 |459 |108 |433 + |0% |81% |19% | + +(Based on all 408 friends and a sample of 1000 followers.) + +### Know Your Number + +I want you to check your Twitter network's gender distribution, too. So I've deployed "Proportional" to PythonAnywhere's handy service for $10 a month: + +> + +The application may rate-limit you or otherwise fail, so use it gently. The [code is on GitHub][7]. It includes a command-line tool, as well. + +Who is represented in your network on Twitter? Are you speaking and listening to the same unfairly distributed group who have been talking about software for the last few decades, or does your network look like the software industry of the future? Let's know our numbers and improve them. + + + + + +-------------------------------------------------------------------------------- + +via: https://emptysqua.re/blog/gender-of-twitter-users-i-follow/ + +作者:[A. 
Jesse Jiryu Davis][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://disqus.com/by/AJesseJiryuDavis/ +[1]: https://www.proporti.onl/ +[2]: https://analytics.twitter.com/ +[3]: http://english.stackexchange.com/questions/14952/that-which-is-measured-improves +[4]: https://moz.com/followerwonk/ +[5]: https://github.com/bear/python-twitter/graphs/contributors +[6]: https://pypi.python.org/pypi/SexMachine/ +[7]: https://github.com/ajdavis/twitter-gender-distribution From 7c1bf8b1e3407d70fbaa229d44cc815ccfa1a16a Mon Sep 17 00:00:00 2001 From: Mike Date: Sun, 10 Jul 2016 22:49:50 +0800 Subject: [PATCH 105/471] Tutorial: Getting started with Docker Swarm and deploying a replicated Python 3 Application (#4168) * sources/tech/20160609 How to record your terminal session on Linux.md * translated/tech/20160609 How to record your terminal session on Linux.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * translated/tech/20160620 Detecting cats in images with OpenCV.md * remove source * sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md --- ...ker Swarm and deploying a replicated Python 3 Application.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md index 85b592b306..0e96160c77 100644 --- a/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md +++ 
b/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md @@ -1,3 +1,5 @@ +MikeCoder 2016.07.10 Translating + Tutorial: Getting started with Docker Swarm and deploying a replicated Python 3 Application ============== From 9e7161a2d5cfa30ddaf3eb44f43f01b17cf63b2b Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 11 Jul 2016 08:07:48 +0800 Subject: [PATCH 106/471] PUB:20160629 USE TASK MANAGER EQUIVALENT IN LINUX MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @xinglianfly 不错,继续加油~ --- ...29 USE TASK MANAGER EQUIVALENT IN LINUX.md | 51 ++++++++++++++++++ ...29 USE TASK MANAGER EQUIVALENT IN LINUX.md | 52 ------------------- 2 files changed, 51 insertions(+), 52 deletions(-) create mode 100644 published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md delete mode 100644 translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md diff --git a/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md new file mode 100644 index 0000000000..444f37af43 --- /dev/null +++ b/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md @@ -0,0 +1,51 @@ +在 Linux 下使用任务管理器 +==================================== + +![](https://itsfoss.com/wp-content/uploads/2016/06/Task-Manager-in-Linux.jpg) + +有很多 Linux 初学者经常问起的问题,“**Linux 有任务管理器吗?**”,“**怎样在 Linux 上打开任务管理器呢?**” + +来自 Windows 的用户都知道任务管理器非常有用。你可以在 Windows 中按下 `Ctrl+Alt+Del` 打开任务管理器。这个任务管理器向你展示了所有的正在运行的进程和它们消耗的内存,你可以从任务管理器程序中选择并杀死一个进程。 + +当你刚使用 Linux 的时候,你也会寻找一个**在 Linux 相当于任务管理器**的一个东西。一个 Linux 使用专家更喜欢使用命令行的方式查找进程和消耗的内存等等,但是你不用必须使用这种方式,至少在你初学 Linux 的时候。 + +所有主流的 Linux 发行版都有一个类似于任务管理器的东西。大部分情况下,它叫系统监视器(System Monitor),不过实际上它依赖于你的 Linux 的发行版及其使用的[桌面环境][1]。 + +在这篇文章中,我们将会看到如何在以 GNOME 为[桌面环境][2]的 Linux 上找到并使用任务管理器。 + +### 在使用 GNOME 桌面环境的 Linux 上的任务管理器等价物 + +使用 GNOME 时,按下 super 键(Windows 键)来查找任务管理器: + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-monitor-gnome-fedora.png) + 
+当你启动系统监视器的时候,它会向你展示所有正在运行的进程及其消耗的内存。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/fedora-system-monitor.jpeg) + +你可以选择一个进程并且点击“终止进程(End Process)”来杀掉它。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/kill-process-fedora.png) + +你也可以在资源(Resources)标签里面看到关于一些统计数据,例如 CPU 的每个核心的占用,内存用量、网络用量等。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-stats-fedora.png) + +这是图形化的方式。如果你想使用命令行,在终端里运行“top”命令然后你就可以看到所有运行的进程及其消耗的内存。你也可以很容易地使用命令行[杀死进程][3]。 + +这就是关于在 Fedora Linux 上任务管理器的知识。我希望这个教程帮你学到了知识,如果你有什么问题,请尽管问。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/task-manager-linux/ + +作者:[Abhishek Prakash][a] +译者:[xinglianfly](https://github.com/xinglianfly) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://wiki.archlinux.org/index.php/desktop_environment +[2]: https://itsfoss.com/best-linux-desktop-environments/ +[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/ diff --git a/translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md deleted file mode 100644 index 81a2147ff5..0000000000 --- a/translated/tech/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md +++ /dev/null @@ -1,52 +0,0 @@ -在linux下使用任务管理器 -==================================== - -![](https://itsfoss.com/wp-content/uploads/2016/06/Task-Manager-in-Linux.jpg) - -有很多Linux初学者经常问起的问题,“**Linux有任务管理器吗**”,“**在Linux上面你是怎样打开任务管理器的呢**”? 
- -来自Windows的用户都知道任务管理器非常有用。你按下Ctrl+Alt+Del来打开任务管理器。这个任务管理器向你展示了所有的正在运行的进程和它们消耗的内存。你可以从任务管理器程序中选择并杀死一个进程。 - -当你刚使用Linux的时候,你也会寻找一个**在Linux相当于任务管理器**的一个东西。一个Linux使用专家更喜欢使用命令行的方式查找进程和消耗的内存,但是你不必去使用这种方式,至少在你初学Linux的时候。 - -所有主流的Linux发行版都有一个类似于任务管理器的东西。大部分情况下,**它叫System Monitor**。但是它实际上依赖于你的Linux的发行版和它使用的[桌面环境][1]。 - -在这篇文章中,我们将会看到如何在使用GNOME的[桌面环境][2]的Linux上查找并使用任务管理器。 - -###在使用GNOME的桌面环境的linux上使用任务管理器 - -当你使用GNOME的时候,按下super键(Windows 键)来查找任务管理器: - -![](https://itsfoss.com/wp-content/uploads/2016/06/system-monitor-gnome-fedora.png) - -当你启动System Monitor的时候,它会向你展示所有正在运行的进程和被它们消耗的内存。 - -![](https://itsfoss.com/wp-content/uploads/2016/06/fedora-system-monitor.jpeg) - -你可以选择一个进程并且点击“End Process”来杀掉它。 - -![](https://itsfoss.com/wp-content/uploads/2016/06/kill-process-fedora.png) - -你也可以在Resources标签里面看到关于一些系统的数据,例如每个cpu核心的消耗,内存的使用,网络的使用等。 - -![](https://itsfoss.com/wp-content/uploads/2016/06/system-stats-fedora.png) - -这是图形化的一种方式。如果你想使用命令行,在终端里运行“top”命令然后你就可以看到所有运行的进程和他们消耗的内存。你也可以参考[使用命令行杀死进程][3]这篇文章。 - -这就是所有你需要知道的关于在Fedora Linux上任务管理器的知识。我希望这个教程帮你学到了知识,如果你有什么问题,请尽管问。 - - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/task-manager-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 - -作者:[Abhishek Prakash][a] -译者:[xinglianfly](https://github.com/xinglianfly) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://wiki.archlinux.org/index.php/desktop_environment -[2]: https://itsfoss.com/best-linux-desktop-environments/ -[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/ From 10df0b843dc68efb2186563cd26c36899aaf4e64 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 11 Jul 2016 09:30:49 +0800 Subject: [PATCH 107/471] =?UTF-8?q?20160711-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...o Encrypt a Flash Drive Using VeraCrypt.md | 100 ++++++++++++++++++
 1 file changed, 100 insertions(+)
 create mode 100644 sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md

diff --git a/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
new file mode 100644
index 0000000000..2dd2ae024f
--- /dev/null
+++ b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
@@ -0,0 +1,100 @@
+How to Encrypt a Flash Drive Using VeraCrypt
+============================================
+
+Many security experts prefer open source software like VeraCrypt, which can be used to encrypt flash drives, because of its readily available source code.
+
+Encryption is a smart idea for protecting data on a USB flash drive, as we covered in our piece that described [how to encrypt a flash drive][1] using Microsoft BitLocker.
+
+But what if you do not want to use BitLocker?
+
+You may be concerned that because Microsoft's source code is not available for inspection, it could be susceptible to security "backdoors" used by the government or others. Because source code for open source software is widely shared, many security experts feel open source software is far less likely to have any backdoors.
+
+Fortunately, there are several open source encryption alternatives to BitLocker.
+
+If you need to be able to encrypt and access files on any Windows machine, as well as on computers running Apple OS X or Linux, the open source [VeraCrypt][2] offers an excellent alternative.
+
+VeraCrypt is derived from TrueCrypt, a well-regarded open source encryption software product that has now been discontinued. But the code for TrueCrypt was audited and no major security flaws were found. In addition, it has since been improved in VeraCrypt.
+
+Versions exist for Windows, OS X and Linux.
+ +Encrypting a USB flash drive with VeraCrypt is not as straightforward as it is with BitLocker, but it still only takes a few minutes. + +### Encrypting Flash Drive with VeraCrypt in 8 Steps + +After [downloading VeraCrypt][3] for your operating system: + +Start VeraCrypt, and click on Create Volume to start the VeraCrypt Volume Creation Wizard. + +![](http://www.esecurityplanet.com/imagesvr_ce/6246/Vera0.jpg) + +The VeraCrypt Volume Creation Wizard allows you to create an encrypted file container on the flash drive which sits along with other unencrypted files, or you can choose to encrypt the entire flash drive. For the moment, we will choose to encrypt the entire flash drive. + +![](http://www.esecurityplanet.com/imagesvr_ce/6703/Vera1.jpg) + +On the next screen, choose Standard VeraCrypt Volume. + +![](http://www.esecurityplanet.com/imagesvr_ce/835/Vera2.jpg) + +Select the drive letter of the flash drive you want to encrypt (in this case O:). + +![](http://www.esecurityplanet.com/imagesvr_ce/9427/Vera3.jpg) + +Choose the Volume Creation Mode. If your flash drive is empty or you want to delete everything it contains, choose the first option. If you want to keep any existing files, choose the second option. + +![](http://www.esecurityplanet.com/imagesvr_ce/7828/Vera4.jpg) + +This screen allows you to choose your encryption options. If you are unsure of which to choose, leave the default settings of AES and SHA-512. + +![](http://www.esecurityplanet.com/imagesvr_ce/5918/Vera5.jpg) + +After confirming the Volume Size screen, enter and re-enter the password you want to use to encrypt your data. + +![](http://www.esecurityplanet.com/imagesvr_ce/3850/Vera6.jpg) + +To work effectively, VeraCrypt must draw from a pool of entropy or "randomness." To generate this pool, you'll be asked to move your mouse around in a random fashion for about a minute. 
Once the bar has turned green, or preferably when it reaches the far right of the screen, click Format to finish creating your encrypted drive. + +![](http://www.esecurityplanet.com/imagesvr_ce/7468/Vera8.jpg) + +### Using a Flash Drive Encrypted with VeraCrypt + +When you want to use an encrypted flash drive, first insert the drive in the computer and start VeraCrypt. + +Then select an unused drive letter (such as z:) and click Auto-Mount Devices. + +![](http://www.esecurityplanet.com/imagesvr_ce/2016/Vera10.jpg) + +Enter your password and click OK. + +![](http://www.esecurityplanet.com/imagesvr_ce/8222/Vera11.jpg) + +The mounting process may take a few minutes, after which your unencrypted drive will become available with the drive letter you selected previously. + +### VeraCrypt Traveler Disk Setup + +If you set up a flash drive with an encrypted container rather than encrypting the whole drive, you also have the option to create what VeraCrypt calls a traveler disk. This installs a copy of VeraCrypt on the USB flash drive itself, so when you insert the drive in another Windows computer you can run VeraCrypt automatically from the flash drive; there is no need to install it on the computer. + +You can set up a flash drive to be a Traveler Disk by choosing Traveler Disk SetUp from the Tools menu of VeraCrypt. + +![](http://www.esecurityplanet.com/imagesvr_ce/5812/Vera12.jpg) + +It is worth noting that in order to run VeraCrypt from a Traveler Disk on a computer, you must have administrator privileges on that computer. While that may seem to be a limitation, no confidential files can be opened safely on a computer that you do not control, such as one in a business center. + +>Paul Rubens has been covering enterprise technology for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch. 
+
+--------------------------------------------------------------------------------
+
+via: http://www.esecurityplanet.com/open-source-security/how-to-encrypt-flash-drive-using-veracrypt.html
+
+作者:[Paul Rubens][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.esecurityplanet.com/author/3700/Paul-Rubens
+[1]: http://www.esecurityplanet.com/views/article.php/3880616/How-to-Encrypt-a-USB-Flash-Drive.htm
+[2]: http://www.esecurityplanet.com/open-source-security/veracrypt-a-worthy-truecrypt-alternative.html
+[3]: https://veracrypt.codeplex.com/releases/view/619351
+
+
+

From f27d68468252699c606181a6f91be7bd40f0dfef Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 11 Jul 2016 09:37:47 +0800
Subject: [PATCH 108/471] =?UTF-8?q?20160711-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ork on Ubuntu Linux 14.04 and 16.04 LTS.md | 143 ++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md

diff --git a/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md b/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
new file mode 100644
index 0000000000..b3d40a94c2
--- /dev/null
+++ b/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
@@ -0,0 +1,143 @@
+How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS
+=======================================================================
+
+> I am a new Ubuntu Linux 16.04 LTS user. How do I set up a network bridge on a host server powered by the Ubuntu 14.04 LTS or 16.04 LTS operating system?
+
+![](http://s0.cyberciti.org/images/category/old/ubuntu-logo.jpg)
+
+Bridged networking is nothing but a simple technique to connect to the outside network through a physical interface. It is useful for LXC/KVM/Xen/container virtualization and other virtual interfaces. The virtual interfaces appear as regular hosts to the rest of the network. In this tutorial I will explain how to configure a Linux bridge with the bridge-utils (brctl) command-line utility on an Ubuntu server.
+
+### Our sample bridged networking
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/my-br0-br1-setup.jpg)
+>Fig.01: Sample Ubuntu Bridged Networking Setup For Kvm/Xen/LXC Containers (br0)
+
+In this example eth0 and eth1 are the physical network interfaces. eth0 is connected to the LAN and eth1 is attached to the upstream ISP router/Internet.
+
+### Install bridge-utils
+
+Type the following [apt-get command][1] to install the bridge-utils package:
+
+```
+$ sudo apt-get install bridge-utils
+```
+
+OR
+
+```
+$ sudo apt install bridge-utils
+```
+
+Sample outputs:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/ubuntu-install-bridge-utils.jpg)
+>Fig.02: Ubuntu Linux install bridge-utils package
+
+### Creating a network bridge on the Ubuntu server
+
+Edit `/etc/network/interfaces` using a text editor such as nano or vi, enter:
+
+```
+$ sudo cp /etc/network/interfaces /etc/network/interfaces.bakup-1-july-2016
+$ sudo vi /etc/network/interfaces
+```
+
+Let us set up eth1 and map it to br1, enter (delete or comment out all existing eth1 entries):
+
+```
+# br1 setup with static wan IPv4 with ISP router as gateway
+auto br1
+iface br1 inet static
+ address 208.43.222.51
+ network 208.43.222.48
+ netmask 255.255.255.248
+ broadcast 208.43.222.55
+ gateway 208.43.222.49
+ bridge_ports eth1
+ bridge_stp off
+ bridge_fd 0
+ bridge_maxwait 0
+```
+
+To set up eth0 and map it to br0, enter (delete or comment out all existing eth0 entries):
+
+```
+auto br0
+iface br0 inet static
+ address 10.18.44.26
+ netmask 255.255.255.192
+ broadcast 10.18.44.63
+ dns-nameservers 10.0.80.11 10.0.80.12
+ # set static route for LAN
+ post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
+ post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
+ bridge_ports eth0
+ bridge_stp off
+ bridge_fd 0
+ bridge_maxwait 0
+```
+
+### A note about br0 and DHCP
+
+DHCP config options:
+
+```
+auto br0
+iface br0 inet dhcp
+ bridge_ports eth0
+ bridge_stp off
+ bridge_fd 0
+ bridge_maxwait 0
+```
+
+Save and close the file.
+
+### Restart the server or networking service
+
+You need to reboot the server, or type the following command to restart the networking service (this may not work over an SSH-based session):
+
+```
+$ sudo systemctl restart networking
+```
+
+If you are using Ubuntu 14.04 LTS or an older system that is not systemd based, enter:
+
+```
+$ sudo /etc/init.d/networking restart
+```
+
+### Verify connectivity
+
+Use the ping/ip commands to verify that both LAN and WAN interfaces are reachable:
+
+```
+# See br0 and br1
+ip a show
+# See routing info
+ip r
+# ping public site
+ping -c 2 cyberciti.biz
+# ping lan server
+ping -c 2 10.0.80.12
+```
+
+Sample outputs:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/br0-br1-eth0-eth1-configured-on-ubuntu.jpg)
+>Fig.03: Verify Bridging Ethernet Connections
+
+Now you can configure XEN/KVM/LXC containers to use br0 and br1 to reach the internet or the private LAN directly. There is no need to set up special routing or iptables SNAT rules.
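
One last tip: hand-written static stanzas are easy to get wrong, because ifupdown never checks that the address, netmask and broadcast values agree with each other. A quick sanity check with Python's standard ipaddress module (in the standard library since Python 3.3) is one way to validate a stanza before restarting networking. The values below use the WAN address from the example above with a /29 netmask:

```python
import ipaddress

# Check that address, network and broadcast agree for a static stanza.
iface = ipaddress.ip_interface("208.43.222.51/255.255.255.248")
print(iface.network)                    # 208.43.222.48/29
print(iface.network.broadcast_address)  # 208.43.222.55

# The default gateway must live inside the same network.
gateway = ipaddress.ip_address("208.43.222.49")
print(gateway in iface.network)         # True
```

If the printed network or broadcast address disagrees with what is written in `/etc/network/interfaces`, fix the stanza first; a mismatched netmask is a classic cause of a host that can ping the LAN but not its gateway.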
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/how-to-create-bridge-interface-ubuntu-linux/
+
+作者:[VIVEK GITE][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/nixcraft
+[1]: http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html

From 673ca254163c71fd49a37045e31ed2ad48ac6a3c Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 11 Jul 2016 09:55:10 +0800
Subject: [PATCH 109/471] =?UTF-8?q?20160711-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...0160630 What makes up the Fedora kernel.md | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 sources/tech/20160630 What makes up the Fedora kernel.md

diff --git a/sources/tech/20160630 What makes up the Fedora kernel.md b/sources/tech/20160630 What makes up the Fedora kernel.md
new file mode 100644
index 0000000000..95b61a201a
--- /dev/null
+++ b/sources/tech/20160630 What makes up the Fedora kernel.md
@@ -0,0 +1,33 @@
+What makes up the Fedora kernel?
+====================================
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/kernel-945x400.png)
+
+Every Fedora system runs a kernel. Many pieces of code come together to make this a reality.
+
+Each release of the Fedora kernel starts with a baseline release from the [upstream community][1]. This is often called a ‘vanilla’ kernel. The upstream kernel is the standard. The goal is to have as much code upstream as possible. This makes it easier for bug fixes and API updates to happen, and it means more people review the code. In an ideal world, Fedora would be able to take the kernel straight from kernel.org and send that out to all users.
+
+Realistically, using the vanilla kernel isn’t complete enough for Fedora. Some features Fedora users want may not be available. The [Fedora kernel][2] that users actually receive contains a number of patches on top of the vanilla kernel. These patches are considered ‘out of tree’. Many of these patches will not remain out of tree very long. If patches are available to fix an issue, they may be pulled into the Fedora tree so the fix can go out to users faster. When the kernel is rebased to a new version, the patches are removed if they are already in the new version.
+
+Some patches remain in the Fedora kernel tree for an extended period of time. A good example of patches that fall into this category are the secure boot patches. These patches provide a feature Fedora wants to support even though the upstream community has not yet accepted them. It takes effort to keep these patches up to date, so Fedora tries to minimize the number of patches that are carried without being accepted by an upstream kernel maintainer.
+
+Generally, the best way to get a patch included in the Fedora kernel is to send it to the [Linux Kernel Mailing List (LKML)][3] first and then ask for it to be included in Fedora. If a patch has been accepted by a maintainer, it stands a very high chance of being included in the Fedora kernel tree. Patches that come from places like GitHub which have not been submitted to LKML are unlikely to be taken into the tree. It’s important to send the patches to LKML first to ensure Fedora is carrying the correct patches in its tree. Without the community review, Fedora could end up carrying patches which are buggy and cause problems.
+
+The Fedora kernel contains code from many places. All of it is necessary to give the best experience possible.
+ + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/makes-fedora-kernel/ + +作者:[Laura Abbott][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/makes-fedora-kernel/ +[1]: http://www.kernel.org/ +[2]: http://pkgs.fedoraproject.org/cgit/rpms/kernel.git/ +[3]: http://www.labbott.name/blog/2015/10/02/the-art-of-communicating-with-lkml/ From 015283e8d711271e9ef5064ac3df173c4083d287 Mon Sep 17 00:00:00 2001 From: Mike Date: Mon, 11 Jul 2016 11:26:41 +0800 Subject: [PATCH 110/471] translated/tech/20160609 How to record your terminal session on Linux.md (#4169) * sources/tech/20160609 How to record your terminal session on Linux.md * translated/tech/20160609 How to record your terminal session on Linux.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * sources/tech/20160620 Detecting cats in images with OpenCV.md * translated/tech/20160620 Detecting cats in images with OpenCV.md * remove source * sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md * sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md --- ...oying a replicated Python 3 Application.md | 283 ------------------ ...oying a replicated Python 3 Application.md | 282 +++++++++++++++++ 2 files changed, 282 insertions(+), 283 deletions(-) delete mode 100644 sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md create mode 100644 translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 
Application.md diff --git a/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md deleted file mode 100644 index 0e96160c77..0000000000 --- a/sources/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md +++ /dev/null @@ -1,283 +0,0 @@ -MikeCoder 2016.07.10 Translating - -Tutorial: Getting started with Docker Swarm and deploying a replicated Python 3 Application -============== - -At [Dockercon][1] recently [Ben Firshman][2] did a very cool presentation on building serverless apps with Docker, you can [read about it here][3] (along with watching the video). A little while back [I wrote an article][4] on building a microservice with [AWS Lambda][5]. - -Today, I want to show you how to use [Docker Swarm][6] and then deploy a simple Python Falcon REST app. Although I won’t be using [dockerrun][7] or the serverless capabilities I think you might be surprised how easy it is to deploy (replicated) Python applications (actually any sort of application: Java, Go, etc.) with Docker Swarm. - - -Note: Some of the steps I’ll show you are taken from the [Swarm Tutorial][8]. I’ve modified some things and [added a Vagrant helper repo][9] to spin up a local testing environment for Docker Swarm to utilize. Keep in mind you must be using Docker Engine 1.12 or later. At the time of this article I am using RC2 of 1.12. Keep in mind this is all build on beta software at this time, things can change. - -The first thing you will want to do is to ensure you have [Vagrant][10] properly installed and working if you want to run this locally. You can also follow the steps for the most part and spin up the Docker Swarm VMs on your preferred cloud provider. - -We are going to spin up three VMs: A single docker swarm manager and two workers. 
- -Security Note: The Vagrantfile uses a shell script located on Docker’s test server. This is a potential security issue to run scripts you don’t have control over so make sure to [review the script][11] prior to running. - -``` -$ git clone https://github.com/chadlung/vagrant-docker-swarm -$ cd vagrant-docker-swarm -$ vagrant plugin install vagrant-vbguest -$ vagrant up -``` - -The vagrant up command will take some time to complete. - -SSH into the manager1 VM: - -``` -$ vagrant ssh manager1 -``` - -Run the following command in the manager1 ssh terminal session: - -``` -$ sudo docker swarm init --listen-addr 192.168.99.100:2377 -``` - -There will be no workers registered yet: - -``` -$ sudo docker node ls -``` - -Let’s register the two workers. Use two new terminal sessions (leave the manager1 session running): - -``` -$ vagrant ssh worker1 -``` - -Run the following command in the worker1 ssh terminal session: - -``` -$ sudo docker swarm join 192.168.99.100:2377 -``` - -Repeat those commands used for worker1 but substitute worker2. - -From the manager1 terminal run: - -``` -$ docker node ls -``` - -You should see: - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-3.15.25-PM.png) - -On the manager1 terminal let’s deploy a simple service. - -``` -sudo docker service create --replicas 1 --name pinger alpine ping google.com -``` - -That will deploy a service that will ping google.com to one of the workers (or manager, the manager can also run services but this [can also be disabled][12] if you only want workers to run containers). To see which node got the service run this: - -``` -$ sudo docker service tasks pinger -``` - -Result will be similar to this: - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.23.05-PM.png) - -So we know its on worker1. 
Let’s go to the terminal session for worker1 and attach to the running container: - -``` -$ sudo docker ps -``` - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.25.02-PM.png) - -You can see the container id is: ae56769b9d4d - -In my case I run: - -``` -$ sudo docker attach ae56769b9d4d -``` - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.26.49-PM.png) - -You can just CTRL-C to stop the pinging. - -Go back to the manager1 terminal session and remove the pinger service: - -``` -$ sudo docker service rm pinger -``` - -Now we will move onto deploying a replicated Python app part of this article. Please keep in mind in order to keep this article simple and easy to follow this will be a bare bones trivial service. - -The first thing you will need to do is to either add your own image to [Docker Hub][13] or use [the one I already have][14]. Its a simple Python 3 Falcon REST app that has one endpoint: /hello with a param of value=SOME_STRING - -The Python code for the [chadlung/hello-app][15] image looks like this: - -``` -import json -from wsgiref import simple_server - -import falcon - - -class HelloResource(object): - def on_get(self, req, resp): - try: - value = req.get_param('value') - - resp.content_type = 'application/json' - resp.status = falcon.HTTP_200 - resp.body = json.dumps({'message': str(value)}) - except Exception as ex: - resp.status = falcon.HTTP_500 - resp.body = str(ex) - - -if __name__ == '__main__': - app = falcon.API() - hello_resource = HelloResource() - app.add_route('/hello', hello_resource) - httpd = simple_server.make_server('0.0.0.0', 8080, app) - httpd.serve_forever() -``` - -The Dockerfile is as simple as: - -``` -FROM python:3.4.4 - -RUN pip install -U pip -RUN pip install -U falcon - -EXPOSE 8080 - -COPY . /hello-app -WORKDIR /hello-app - -CMD ["python", "app.py"] -``` - -Again, this is meant to be very trivial. 
You can hit the endpoint by running the image locally if you want: - -This gives you back: - -``` -{"message": "Fred"} -``` - -Build and deploy the hellp-app to Docker Hub (modify below to use your own Docker Hub repo or [use this one][15]): - -``` -$ sudo docker build . -t chadlung/hello-app:2 -$ sudo docker push chadlung/hello-app:2 -``` - -Now we want to deploy this to the Docker Swarm we set up earlier. Go into the manager1 terminal session and run: - -``` -$ sudo docker service create -p 8080:8080 --replicas 2 --name hello-app chadlung/hello-app:2 -$ sudo docker service inspect --pretty hello-app -$ sudo docker service tasks hello-app -``` - -Now we are ready to test it out. Using any of the node’s IPs in the swarm hit the /hello endpoint. In my case I will just cURL from the manager1 terminal: - -Remember, all IPs in the swarm will work even if the service is only running on one or more nodes. - -``` -$ curl -v -X GET "http://192.168.99.100:8080/hello?value=Chad" -$ curl -v -X GET "http://192.168.99.101:8080/hello?value=Test" -$ curl -v -X GET "http://192.168.99.102:8080/hello?value=Docker" -``` - -Results: - -``` -* Hostname was NOT found in DNS cache -* Trying 192.168.99.101... 
-* Connected to 192.168.99.101 (192.168.99.101) port 8080 (#0) -> GET /hello?value=Chad HTTP/1.1 -> User-Agent: curl/7.35.0 -> Host: 192.168.99.101:8080 -> Accept: */* -> -* HTTP 1.0, assume close after body -< HTTP/1.0 200 OK -< Date: Tue, 28 Jun 2016 23:52:55 GMT -< Server: WSGIServer/0.2 CPython/3.4.4 -< content-type: application/json -< content-length: 19 -< -{"message": "Chad"} -``` - -Calling the other node from my web browser: - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png) - -If you want to see all the services running try this from the manager1 node: - -``` -$ sudo docker service ls -``` -If you want to add some visualization to all this you can install [Docker Swarm Visualizer][16] (this is very handy for presentations, etc.). From the manager1 terminal session run the following: - -![]($ sudo docker run -it -d -p 5000:5000 -e HOST=192.168.99.100 -e PORT=5000 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer) - -Simply open a browser now and point it at: - -Results (assuming running two Docker Swarm services): - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-30-at-2.37.28-PM.png) - -To stop the hello-app (which was replicated to two nodes) run this from the manager1 terminal session: - -``` -$ sudo docker service rm hello-app -``` - -To stop the Visualizer run this from the manager1 terminal session: - -``` -$ sudo docker ps -``` - -Get the container id, in my case it was: f71fec0d3ce1 - -Run this from the manager1 terminal session:: - -``` -$ sudo docker stop f71fec0d3ce1 -``` - -Good luck with Docker Swarm and keep in kind this article was based on the release candidate of version 1.12. 
-
--------------------------------------------------------------------------------
-
-via: http://www.giantflyingsaucer.com/blog/?p=5923
-
-作者:[Chad Lung][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.giantflyingsaucer.com/blog/?author=2
-[1]: http://dockercon.com/
-[2]: https://blog.docker.com/author/bfirshman/
-[3]: https://blog.docker.com/author/bfirshman/
-[4]: http://www.giantflyingsaucer.com/blog/?p=5730
-[5]: https://aws.amazon.com/lambda/
-[6]: https://docs.docker.com/swarm/
-[7]: https://github.com/bfirsh/dockerrun
-[8]: https://docs.docker.com/engine/swarm/swarm-tutorial/
-[9]: https://github.com/chadlung/vagrant-docker-swarm
-[10]: https://www.vagrantup.com/
-[11]: https://test.docker.com/
-[12]: https://docs.docker.com/engine/reference/commandline/swarm_init/
-[13]: https://hub.docker.com/
-[14]: https://hub.docker.com/r/chadlung/hello-app/
-[15]: https://hub.docker.com/r/chadlung/hello-app/
-[16]: https://github.com/ManoMarks/docker-swarm-visualizer
diff --git a/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md
new file mode 100644
index 0000000000..b06c56b46e
--- /dev/null
+++ b/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md
@@ -0,0 +1,282 @@
+教程:开始学习如何使用 Docker Swarm 部署可扩展的 Python3 应用
+==============
+
+[Ben Firshman][2]最近在 [Dockercon][1] 做了一个关于使用 Docker 构建无服务器应用的演讲,你可以[在这里查看详情][3](还可以观看视频)。之后,我写了[一篇文章][4],介绍如何使用 [AWS Lambda][5] 构建微服务系统。
+
+今天,我想展示给你的是如何使用 [Docker Swarm][6] 部署一个简单的 Python Falcon REST 应用。尽管我不会使用 [dockerrun][7] 或者其他无服务器特性,但你可能会惊讶:使用 Docker Swarm 部署(多副本的)Python 应用(其实 Java、Go 等任何应用都一样)是如此简单。
+
+注意:这里展示的部分步骤截取自 [Swarm Tutorial][8]。我已经修改了部分内容,并且[在 Vagrant 
辅助仓库][9]中添加了用于搭建 Docker Swarm 本地测试环境的配置。请确保你使用的是 1.12 或更高版本的 Docker 引擎。我写这篇文章的时候,使用的是 1.12RC2 版本的 Docker。请注意,它目前还是测试版软件,后续可能还会有变动。
+
+如果你想在本地运行,你要做的第一件事就是保证正确安装了 [Vagrant][10]。你也可以按照如下步骤,使用你最喜欢的云服务提供商来部署 Docker Swarm 虚拟机系统。
+
+我们将会使用三台 VM:一个简单的 Docker Swarm 管理节点和两台 worker。
+
+安全注意事项:Vagrantfile 代码中包含了部分位于 Docker 测试服务器上的 shell 脚本。运行你无法控制的脚本存在潜在的安全风险,所以请确保在运行之前[审查过这部分脚本][11]。
+
+```
+$ git clone https://github.com/chadlung/vagrant-docker-swarm
+$ cd vagrant-docker-swarm
+$ vagrant plugin install vagrant-vbguest
+$ vagrant up
+```
+
+`vagrant up` 命令可能会花很长的时间来执行。
+
+SSH 登录进入 manager1 虚拟机:
+
+```
+$ vagrant ssh manager1
+```
+
+在 manager1 的终端中执行如下命令:
+
+```
+$ sudo docker swarm init --listen-addr 192.168.99.100:2377
+```
+
+现在还没有 worker 注册上来:
+
+```
+$ sudo docker node ls
+```
+
+通过两个新的终端会话(保持 manager1 的会话继续运行),我们来注册两个 worker。
+
+```
+$ vagrant ssh worker1
+```
+
+在 worker1 上执行如下命令:
+
+```
+$ sudo docker swarm join 192.168.99.100:2377
+```
+
+在 worker2 上重复这些命令。
+
+在 manager1 上执行这个命令:
+
+```
+$ docker node ls
+```
+
+你将会看到:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-3.15.25-PM.png)
+
+开始在 manager1 的终端里,部署一个简单的服务。
+
+```
+sudo docker service create --replicas 1 --name pinger alpine ping google.com
+```
+
+这个命令将会部署一个服务,它会在其中一台 worker 机器上 ping google.com。(manager 节点也可以运行服务,不过如果你只想让 worker 来运行容器的话,[也可以禁止这一点][12]。)可以使用如下命令,查看哪些节点正在执行服务:
+
+```
+$ sudo docker service tasks pinger
+```
+
+结果会和这个比较类似:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.23.05-PM.png)
+
+所以,我们知道了服务正跑在 worker1 上。我们可以回到 worker1 的会话里,然后进入正在运行的容器:
+
+```
+$ sudo docker ps
+```
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.25.02-PM.png)
+
+你可以看到容器的 id 是: ae56769b9d4d
+
+在我的例子中,我运行的是如下的命令:
+
+```
+$ sudo docker attach ae56769b9d4d
+```
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.26.49-PM.png)
+
+只需按 CTRL-C 就可以停止 ping。
+
+回到 manager1,并且移除 pinger 服务。
+
+```
+$ sudo docker service rm pinger
+```
+
+现在,我们将会部署可复制的 Python 应用。请记住,为了让文章简洁易懂,这里部署的只是一个非常简单的应用。
+
+你需要做的第一件事就是将镜像放到 [Docker Hub][13] 上,或者使用我[已经上传的一个][14]。这是一个简单的 Python 3 Falcon REST 应用。它只有一个简单的接口:/hello,接受一个 value 参数。
+
+[chadlung/hello-app][15] 的 Python 代码看起来像这样:
+
+```
+import json
+from wsgiref import simple_server
+
+import falcon
+
+
+class HelloResource(object):
+    def on_get(self, req, resp):
+        try:
+            value = req.get_param('value')
+
+            resp.content_type = 'application/json'
+            resp.status = falcon.HTTP_200
+            resp.body = json.dumps({'message': str(value)})
+        except Exception as ex:
+            resp.status = falcon.HTTP_500
+            resp.body = str(ex)
+
+
+if __name__ == '__main__':
+    app = falcon.API()
+    hello_resource = HelloResource()
+    app.add_route('/hello', hello_resource)
+    httpd = simple_server.make_server('0.0.0.0', 8080, app)
+    httpd.serve_forever()
+```
+
+Dockerfile 很简单:
+
+```
+FROM python:3.4.4
+
+RUN pip install -U pip
+RUN pip install -U falcon
+
+EXPOSE 8080
+
+COPY . /hello-app
+WORKDIR /hello-app
+
+CMD ["python", "app.py"]
+```
+
+再说明一次,这个应用非常简单。如果你想,也可以在本地运行镜像来访问这个接口:
+
+这将返回如下结果:
+
+```
+{"message": "Fred"}
+```
+
+在 Docker Hub 上构建和部署这个 hello-app(修改成你自己的 Docker Hub 仓库或者[用这个][15]):
+
+```
+$ sudo docker build . 
-t chadlung/hello-app:2
+$ sudo docker push chadlung/hello-app:2
+```
+
+现在,我们可以将应用部署到之前的 Docker Swarm 了。登录 manager1 终端,并且执行:
+
+```
+$ sudo docker service create -p 8080:8080 --replicas 2 --name hello-app chadlung/hello-app:2
+$ sudo docker service inspect --pretty hello-app
+$ sudo docker service tasks hello-app
+```
+
+现在,我们已经可以测试了。使用 Swarm 中任意一个节点的 IP 来访问 /hello 接口。在本例中,我在 manager1 的终端里使用 curl 命令:
+
+注意,Swarm 中的所有 IP 都可以工作,即使服务只运行在其中一台或几台节点上。
+
+```
+$ curl -v -X GET "http://192.168.99.100:8080/hello?value=Chad"
+$ curl -v -X GET "http://192.168.99.101:8080/hello?value=Test"
+$ curl -v -X GET "http://192.168.99.102:8080/hello?value=Docker"
+```
+
+结果就是:
+
+```
+* Hostname was NOT found in DNS cache
+* Trying 192.168.99.101...
+* Connected to 192.168.99.101 (192.168.99.101) port 8080 (#0)
+> GET /hello?value=Chad HTTP/1.1
+> User-Agent: curl/7.35.0
+> Host: 192.168.99.101:8080
+> Accept: */*
+>
+* HTTP 1.0, assume close after body
+< HTTP/1.0 200 OK
+< Date: Tue, 28 Jun 2016 23:52:55 GMT
+< Server: WSGIServer/0.2 CPython/3.4.4
+< content-type: application/json
+< content-length: 19
+<
+{"message": "Chad"}
+```
+
+从浏览器中访问其他节点:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png)
+
+如果你想看运行的所有服务,你可以在 manager1 节点上运行如下命令:
+
+```
+$ sudo docker service ls
+```
+
+如果你想添加可视化控制平台,可以安装 [Docker Swarm Visualizer][16](它在做演示等场合非常方便)。在 manager1 的终端中执行如下命令:
+
+```
+$ sudo docker run -it -d -p 5000:5000 -e HOST=192.168.99.100 -e PORT=5000 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer
+```
+
+打开你的浏览器,并且访问:
+
+结果(假设已经运行了两个 Docker Swarm 服务):
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-30-at-2.37.28-PM.png)
+
+要停止 hello-app(它已经被复制到了两个节点上),可以在 manager1 上执行这个命令:
+
+```
+$ sudo docker service rm hello-app
+```
+
+要停止 Visualizer,先在 manager1 的终端中执行:
+
+```
+$ sudo docker ps
+```
+
+获得容器的 ID,这里是: f71fec0d3ce1
+
+然后在 manager1 的终端会话中执行这个命令:
+
+```
+$ sudo docker stop f71fec0d3ce1
+``` + +祝你使用 Docker Swarm。这篇文章主要是以1.12版本来进行描述的。 + +-------------------------------------------------------------------------------- + +via: http://www.giantflyingsaucer.com/blog/?p=5923 + +作者:[Chad Lung][a] +译者:[译者ID](https://github.com/MikeCoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.giantflyingsaucer.com/blog/?author=2 +[1]: http://dockercon.com/ +[2]: https://blog.docker.com/author/bfirshman/ +[3]: https://blog.docker.com/author/bfirshman/ +[4]: http://www.giantflyingsaucer.com/blog/?p=5730 +[5]: https://aws.amazon.com/lambda/ +[6]: https://docs.docker.com/swarm/ +[7]: https://github.com/bfirsh/dockerrun +[8]: https://docs.docker.com/engine/swarm/swarm-tutorial/ +[9]: https://github.com/chadlung/vagrant-docker-swarm +[10]: https://www.vagrantup.com/ +[11]: https://test.docker.com/ +[12]: https://docs.docker.com/engine/reference/commandline/swarm_init/ +[13]: https://hub.docker.com/ +[14]: https://hub.docker.com/r/chadlung/hello-app/ +[15]: https://hub.docker.com/r/chadlung/hello-app/ +[16]: https://github.com/ManoMarks/docker-swarm-visualizer From 3dce8e6bd0fcc5454e85f05a5c099df1aba4f050 Mon Sep 17 00:00:00 2001 From: gitfuture Date: Mon, 11 Jul 2016 12:32:06 +0800 Subject: [PATCH 111/471] finish translating by GitFuture --- .../20160620 Monitor Linux With Netdata.md | 115 ------------------ .../20160620 Monitor Linux With Netdata.md | 112 +++++++++++++++++ 2 files changed, 112 insertions(+), 115 deletions(-) delete mode 100644 sources/tech/20160620 Monitor Linux With Netdata.md create mode 100644 translated/tech/20160620 Monitor Linux With Netdata.md diff --git a/sources/tech/20160620 Monitor Linux With Netdata.md b/sources/tech/20160620 Monitor Linux With Netdata.md deleted file mode 100644 index dfe6920639..0000000000 --- a/sources/tech/20160620 Monitor Linux With Netdata.md +++ /dev/null @@ -1,115 +0,0 @@ -Translating by GitFuture - -Monitor 
Linux With Netdata -=== - -Netdata is a real-time resource monitoring tool with a friendly web front-end developed and maintained by [FireHOL][1]. With this tool, you can read charts representing resource utilization of things like CPUs, RAM, disks, network, Apache, Postfix and more. It is similar to other monitoring software like Nagios; however, Netdata is only for real-time monitoring via a web interface. - - -### Understanding Netdata - -There’s currently no authentication, so if you’re concerned about someone getting information about the applications you’re running on your system, you should restrict who has access via a firewall policy. The UI is simplified in a way anyone could look at the graphs and understand what they’re seeing, or at least be impressed by your flashy setup. - -The web front-end is very responsive and requires no Flash plugin. The UI doesn’t clutter things up with unneeded features, but sticks to what it does. At first glance, it may seem a bit much with the hundreds of charts you have access to, but luckily the most commonly needed charts (i.e. CPU, RAM, network, and disk) are at the top. If you wish to drill deeper into the graphical data, all you have to do is scroll down or click on the item in the menu to the right. Netdata even allows you to control the chart with play, reset, zoom and resize with the controls on the bottom right of each chart. - -![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-1.png) ->Netdata chart control - -When it comes down to system resources, the software doesn’t need too much either. The creators choose to write the software in C. Netdata doesn’t use much more than ~40MB of RAM. - -![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture.png) ->Netdata memory usage - -### Download Netdata - -To download this software, you can head over to [Netdata GitHub page][2]. Then click the “Clone or download” green button on the left of the page. 
You should then be presented with two options. - -#### Via the ZIP file - -One option is to download the ZIP file. This will include everything in the repository; however, if the repository is updated then you will need to download the ZIP file again. Once you download the ZIP file, you can use the `unzip` tool in the command line to extract the contents. Running the following command will extract the contents of the ZIP file into a “`netdata`” folder. - -``` -$ cd ~/Downloads -$ unzip netdata-master.zip -``` - -![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-2.png) ->Netdata unzipped - -ou don’t need to add the `-d` option in unzip because their content is inside a folder at the root of the ZIP file. If they didn’t have that folder at the root, unzip would have extracted the contents in the current directory (which can be messy). - -#### Via git - -The next option is to download the repository via git. You will, of course, need git installed on your system. This is usually installed by default on Fedora. If not, you can install git from the command line with the following command. - -``` -$ sudo dnf install git -``` - -After installing git, you will need to “clone” the repository to your system. To do this, run the following command. - -``` -$ git clone https://github.com/firehol/netdata.git -``` - -This will then clone (or make a copy of) the repository in the current working directory. - -### Install Netdata - -There are some packages you will need to build Netdata successfully. Luckily, it’s a single line to install the things you need ([as stated in their installation guide][3]). Running the following command in the terminal will install all of the dependencies you need to use Netdata. - -``` -$ dnf install zlib-devel libuuid-devel libmnl-devel gcc make git autoconf autogen automake pkgconfig -``` - -Once the required packages are installed, you will need to cd into the netdata/ directory and run the netdata-installer.sh script. 
- -``` -$ sudo ./netdata-installer.sh -``` - -You will then be prompted to press enter to build and install the program. If you wish to continue, press enter to be on your way! - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-3-600x341.png) ->Netdata install. - -If all goes well, you will have Netdata built, installed, and running on your system. The installer will also add an uninstall script in the same folder as the installer called `netdata-uninstaller.sh`. If you change your mind later, running this script will remove it from your system. - -You can see it running by checking its status via systemctl. - -``` -$ sudo systemctl status netdata -``` - -### Accessing Netdata - -Now that we have Netdata installed and running, you can access the web interface via port 19999. I have it running on a test machine, as shown in the screenshot below. - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-4-768x458.png) ->An overview of what Netdata running on your system looks like - -Congratulations! You now have successfully installed and have access to beautiful displays, graphs, and advanced statistics on the performance of your machine. Whether it’s for a personal machine so you can show it off to your friends or for getting deeper insight into the performance of your server, Netdata delivers on performance reporting for any system you choose. 
-
-
--------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/monitor-linux-netdata/
-
-作者:[Martino Jones][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/monitor-linux-netdata/
-[1]: https://firehol.org/
-[2]: https://github.com/firehol/netdata
-[3]: https://github.com/firehol/netdata/wiki/Installation
-
-
-
-
-
-
-
-
diff --git a/translated/tech/20160620 Monitor Linux With Netdata.md b/translated/tech/20160620 Monitor Linux With Netdata.md
new file mode 100644
index 0000000000..bc453d51ee
--- /dev/null
+++ b/translated/tech/20160620 Monitor Linux With Netdata.md
@@ -0,0 +1,112 @@
+用 Netdata 监控 Linux
+=======
+
+Netdata 是一个实时的资源监控工具,它拥有基于 web 的友好界面,由 [FireHOL][1] 开发和维护。通过这个工具,你可以通过图表来了解 CPU、RAM、硬盘、网络、Apache、Postfix 等软硬件的资源使用情况。它很像 Nagios 等别的监控软件;不过,Netdata 只支持通过 web 界面进行实时监控。
+
+### 了解 Netdata
+
+目前 Netdata 还没有身份验证机制,如果你担心别人能从你的电脑上获取相关信息的话,你应该设置防火墙规则来限制访问。UI 经过了简化,任何人都能看懂这些图表并理解它们的含义,至少也会对你这套炫酷的配置印象深刻。
+
+它的 web 前端响应很快,而且不需要 Flash 插件。UI 很整洁,没有多余功能的干扰,专注于自己的本职工作。乍一看,上百个图表可能会让你觉得有点眼花缭乱,幸运的是绝大多数常用的图表数据(像 CPU、RAM、网络和硬盘)都在顶部。如果你想深入了解图形化数据,你只需要下滑滚动条,或者点击右边菜单里的项目。通过每个图表右下方的按钮,Netdata 还能让你控制图表的显示、重置、缩放。
+
+![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-1.png)
+>Netdata 图表控制
+
+Netdata 并不会占用多少系统资源,它占用的内存不会超过 40MB,因为这个软件是用 C 语言写的。
+
+![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture.png)
+>Netdata 显示的内存使用情况
+
+### 下载 Netdata
+
+要下载这个软件,你可以访问 [Netdata GitHub page][2],然后点击页面左边绿色的 "Clone or download" 按钮。你应该能看到两个选项。
+
+#### 通过 ZIP 文件下载
+
+第一种方法是下载 ZIP 文件,它包含了仓库里的所有东西。但是如果仓库更新了,你需要重新下载 ZIP 文件。下载完 ZIP 文件后,你要用 `unzip` 命令行工具来解压文件。运行下面的命令能把 ZIP 文件的内容解压到 `netdata` 文件夹。
+
+```
+$ cd ~/Downloads
+$ unzip netdata-master.zip
+```
+
+![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-2.png)
+>解压 Netdata
+
+没必要在 unzip 命令后加上 `-d` 
选项,因为所有文件都放在 ZIP 文件根部的一个文件夹里面。如果没有那个文件夹,unzip 会把所有东西都解压到当前目录下面(这会让文件非常混乱)。
+
+#### 通过 Git 下载
+
+还有一种方式是通过 git 下载整个仓库。当然,这需要你的系统装有 git。Fedora 系统默认安装了 git。如果没有安装,你可以用下面的命令在命令行里安装 git。
+
+```
+$ sudo dnf install git
+```
+
+安装好 git 后,你要把仓库 “clone” 到你的系统里。运行下面的命令。
+
+```
+$ git clone https://github.com/firehol/netdata.git
+```
+
+这个命令会在当前工作目录克隆(或者说复制一份)仓库。
+
+### 安装 Netdata
+
+成功构建 Netdata 需要一些软件包。还好,一行命令就可以装好你所需要的东西(如其[安装指南][3]所述)。在命令行运行下面的命令就能满足安装 Netdata 需要的所有依赖关系。
+
+```
+$ dnf install zlib-devel libuuid-devel libmnl-devel gcc make git autoconf autogen automake pkgconfig
+```
+
+当所有需要的软件包都安装好了,你就 cd 到 netdata/ 目录,运行 netdata-installer.sh 脚本。
+
+```
+$ sudo ./netdata-installer.sh
+```
+
+然后就会提示你按回车键,开始安装程序。如果要继续的话,就按下回车吧。
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-3-600x341.png)
+>Netdata 的安装。
+
+如果一切顺利,你的系统上就已经安装并且运行了 Netdata。安装脚本还会在安装程序所在的文件夹里添加一个卸载脚本,叫做 `netdata-uninstaller.sh`。如果你以后不想使用 Netdata,运行这个脚本可以把 Netdata 从你的系统里卸载掉。
+
+你可以通过 systemctl 查看它的运行状态。
+
+```
+$ sudo systemctl status netdata
+```
+
+### 使用 Netdata
+
+既然我们已经安装并且运行了 Netdata,你就能够通过 19999 端口来访问 web 界面。下面的截图是我在一个测试机器上运行的 Netdata。
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-4-768x458.png)
+>关于 Netdata 运行时的概览
+
+恭喜!现在你已经成功安装了 Netdata,能够看到漂亮的界面、图表,以及关于机器性能的高级统计数据。无论是想向朋友炫耀你的个人电脑,还是想深入了解服务器的性能,Netdata 都能在你选择的任何系统上提供出色的性能报告。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/monitor-linux-netdata/
+
+作者:[Martino Jones][a]
+译者:[GitFuture](https://github.com/GitFuture)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/monitor-linux-netdata/
+[1]: https://firehol.org/
+[2]: https://github.com/firehol/netdata
+[3]: https://github.com/firehol/netdata/wiki/Installation
+
+
+
+
+
+
+
+
From ae39cc08f7934f4542971b29f8082d3ce1e7b261 Mon Sep 17 
00:00:00 2001 From: Xin Wang <2650454635@qq.com> Date: Mon, 11 Jul 2016 15:44:02 +0800 Subject: [PATCH 112/471] Update 20160524 Writing online multiplayer game with python and asyncio - part 1.md --- ...g online multiplayer game with python and asyncio - part 1.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md index 62af26b481..95122771ac 100644 --- a/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md +++ b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md @@ -1,3 +1,4 @@ +xinglianfly translate Writing online multiplayer game with python and asyncio - part 1 =================================================================== From 5e914658077a5aec5b83f365f13819032878fe9a Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 11 Jul 2016 20:24:51 +0800 Subject: [PATCH 113/471] =?UTF-8?q?20160711-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...18 An Introduction to Mocking in Python.md | 490 ++++++++++++++++++ 1 file changed, 490 insertions(+) create mode 100644 sources/tech/20160618 An Introduction to Mocking in Python.md diff --git a/sources/tech/20160618 An Introduction to Mocking in Python.md b/sources/tech/20160618 An Introduction to Mocking in Python.md new file mode 100644 index 0000000000..e9c5c847cb --- /dev/null +++ b/sources/tech/20160618 An Introduction to Mocking in Python.md @@ -0,0 +1,490 @@ +An Introduction to Mocking in Python +===================================== + +This article is about mocking in python, + +**How to Run Unit Tests Without Testing Your Patience** + +More often than not, the software we write directly interacts with what we would label as “dirty” services. 
In layman’s terms: services that are crucial to our application, but whose interactions have intended but undesired side-effects—that is, undesired in the context of an autonomous test run.
+
+For example: perhaps we’re writing a social app and want to test out our new ‘Post to Facebook feature’, but don’t want to actually post to Facebook every time we run our test suite.
+
+The Python unittest library includes a subpackage named unittest.mock—or if you declare it as a dependency, simply mock—which provides extremely powerful and useful means by which to mock and stub out these undesired side-effects.
+
+Note: mock is [newly included][1] in the standard library as of Python 3.3; prior distributions will have to use the Mock library downloadable via [PyPI][2].
+
+### Fear System Calls
+
+To give you another example, and one that we’ll run with for the rest of the article, consider system calls. It’s not difficult to see that these are prime candidates for mocking: whether you’re writing a script to eject a CD drive, a web server which removes antiquated cache files from /tmp, or a socket server which binds to a TCP port, these calls all feature undesired side-effects in the context of your unit-tests.
+
+>As a developer, you care more that your library successfully called the system function for ejecting a CD as opposed to experiencing your CD tray open every time a test is run.
+
+As a developer, you care more that your library successfully called the system function for ejecting a CD (with the correct arguments, etc.) as opposed to actually experiencing your CD tray open every time a test is run. (Or worse, multiple times, as multiple tests reference the eject code during a single unit-test run!)
+
+Likewise, keeping your unit-tests efficient and performant means keeping as much “slow code” out of the automated test runs, namely filesystem and network access.
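The note above about mock's import location is commonly handled with a guarded import, so the same test suite runs both on Python 3.3+ and on older interpreters that use the PyPI package; a minimal sketch (the alias is just a convention):

```python
# Prefer the stdlib location (Python 3.3+); fall back to the
# separately installed "mock" package on older interpreters.
try:
    from unittest import mock
except ImportError:
    import mock

# Either way, the same API is exposed:
stub = mock.Mock(return_value=42)
stub("anything")
stub.assert_called_with("anything")
```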
+ +For our first example, we’ll refactor a standard Python test case from original form to one using mock. We’ll demonstrate how writing a test case with mocks will make our tests smarter, faster, and able to reveal more about how the software works. + +### A Simple Delete Function + +We all need to delete files from our filesystem from time to time, so let’s write a function in Python which will make it a bit easier for our scripts to do so. + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import os + +def rm(filename): + os.remove(filename) +``` + +Obviously, our rm method at this point in time doesn’t provide much more than the underlying os.remove method, but our codebase will improve, allowing us to add more functionality here. + +Let’s write a traditional test case, i.e., without mocks: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from mymodule import rm + +import os.path +import tempfile +import unittest + +class RmTestCase(unittest.TestCase): + + tmpfilepath = os.path.join(tempfile.gettempdir(), "tmp-testfile") + + def setUp(self): + with open(self.tmpfilepath, "wb") as f: + f.write("Delete me!") + + def test_rm(self): + # remove the file + rm(self.tmpfilepath) + # test that it was actually removed + self.assertFalse(os.path.isfile(self.tmpfilepath), "Failed to remove the file.") +``` + +Our test case is pretty simple, but every time it is run, a temporary file is created and then deleted. Additionally, we have no way of testing whether our rm method properly passes the argument down to the os.remove call. We can assume that it does based on the test above, but much is left to be desired. 
+
+### Refactoring with Mocks
+
+Let’s refactor our test case using mock:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from mymodule import rm
+
+import mock
+import unittest
+
+class RmTestCase(unittest.TestCase):
+
+    @mock.patch('mymodule.os')
+    def test_rm(self, mock_os):
+        rm("any path")
+        # test that rm called os.remove with the right parameters
+        mock_os.remove.assert_called_with("any path")
+```
+
+With these refactors, we have fundamentally changed the way that the test operates. Now, we have an insider, an object we can use to verify the functionality of another.
+
+### Potential Pitfalls
+
+One of the first things that should stick out is that we’re using the mock.patch method decorator to mock an object located at mymodule.os, and injecting that mock into our test case method. Wouldn’t it make more sense to just mock os itself, rather than the reference to it at mymodule.os?
+
+Well, Python is somewhat of a sneaky snake when it comes to imports and managing modules. At runtime, the mymodule module has its own os which is imported into its own local scope in the module. Thus, if we mock os, we won’t see the effects of the mock in the mymodule module.
+
+The mantra to keep repeating is this:
+
+> Mock an item where it is used, not where it came from.
+
+If you need to mock the tempfile module for myproject.app.MyElaborateClass, you probably need to apply the mock to myproject.app.tempfile, as each module keeps its own imports.
+
+With that pitfall out of the way, let’s keep mocking.
+
+### Adding Validation to ‘rm’
+
+The rm method defined earlier is quite oversimplified. We’d like to have it validate that a path exists and is a file before just blindly attempting to remove it. Let’s refactor rm to be a bit smarter:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import os.path
+
+def rm(filename):
+    if os.path.isfile(filename):
+        os.remove(filename)
+```
+
+Great. Now, let’s adjust our test case to keep coverage up. 
+ +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from mymodule import rm + +import mock +import unittest + +class RmTestCase(unittest.TestCase): + + @mock.patch('mymodule.os.path') + @mock.patch('mymodule.os') + def test_rm(self, mock_os, mock_path): + # set up the mock + mock_path.isfile.return_value = False + + rm("any path") + + # test that the remove call was NOT called. + self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") + + # make the file 'exist' + mock_path.isfile.return_value = True + + rm("any path") + + mock_os.remove.assert_called_with("any path") +``` + +Our testing paradigm has completely changed. We now can verify and validate internal functionality of methods without any side-effects. + +### File-Removal as a Service + +So far, we’ve only been working with supplying mocks for functions, but not for methods on objects or cases where mocking is necessary for sending parameters. Let’s cover object methods first. + +We’ll begin with a refactor of the rm method into a service class. There really isn’t a justifiable need, per se, to encapsulate such a simple function into an object, but it will at the very least help us demonstrate key concepts in mock. 
Let’s refactor:

```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import os.path
+
+class RemovalService(object):
+    """A service for removing objects from the filesystem."""
+
+    def rm(self, filename):
+        if os.path.isfile(filename):
+            os.remove(filename)
+```
+
+You’ll notice that not much has changed in our test case:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from mymodule import RemovalService
+
+import mock
+import unittest
+
+class RemovalServiceTestCase(unittest.TestCase):
+
+    @mock.patch('mymodule.os.path')
+    @mock.patch('mymodule.os')
+    def test_rm(self, mock_os, mock_path):
+        # instantiate our service
+        reference = RemovalService()
+
+        # set up the mock
+        mock_path.isfile.return_value = False
+
+        reference.rm("any path")
+
+        # test that the remove call was NOT called.
+        self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.")
+
+        # make the file 'exist'
+        mock_path.isfile.return_value = True
+
+        reference.rm("any path")
+
+        mock_os.remove.assert_called_with("any path")
+```
+
+Great, so we now know that the RemovalService works as planned. Let’s create another service which declares it as a dependency:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import os.path
+
+class RemovalService(object):
+    """A service for removing objects from the filesystem."""
+
+    def rm(self, filename):
+        if os.path.isfile(filename):
+            os.remove(filename)
+
+
+class UploadService(object):
+
+    def __init__(self, removal_service):
+        self.removal_service = removal_service
+
+    def upload_complete(self, filename):
+        self.removal_service.rm(filename)
+```
+
+Since we already have test coverage on the RemovalService, we’re not going to validate internal functionality of the rm method in our tests of UploadService. Rather, we’ll simply test (without side-effects, of course) that UploadService calls the RemovalService.rm method, which we know “just works™” from our previous test case.
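The shape of such a collaboration test can be sketched first with a bare Mock standing in for the dependency. This is an illustrative aside, using the standard-library `unittest.mock` equivalent of the `mock` package; the service class is repeated here so the snippet is self-contained:

```python
from unittest import mock

class UploadService(object):
    """Same dependency-injected service as above."""
    def __init__(self, removal_service):
        self.removal_service = removal_service

    def upload_complete(self, filename):
        self.removal_service.rm(filename)

# Stand in for RemovalService with a plain Mock, then verify the collaboration.
fake_removal = mock.Mock()
service = UploadService(fake_removal)
service.upload_complete("my uploaded file")
fake_removal.rm.assert_called_with("my uploaded file")
```

The two concrete options that follow refine this idea so the stand-in also respects the real RemovalService API.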
+ +There are two ways to go about this: + +1. Mock out the RemovalService.rm method itself. +2. Supply a mocked instance in the constructor of UploadService. + +As both methods are often important in unit-testing, we’ll review both. + +### Option 1: Mocking Instance Methods + +The mock library has a special method decorator for mocking object instance methods and properties, the @mock.patch.object decorator: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from mymodule import RemovalService, UploadService + +import mock +import unittest + +class RemovalServiceTestCase(unittest.TestCase): + + @mock.patch('mymodule.os.path') + @mock.patch('mymodule.os') + def test_rm(self, mock_os, mock_path): + # instantiate our service + reference = RemovalService() + + # set up the mock + mock_path.isfile.return_value = False + + reference.rm("any path") + + # test that the remove call was NOT called. + self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") + + # make the file 'exist' + mock_path.isfile.return_value = True + + reference.rm("any path") + + mock_os.remove.assert_called_with("any path") + + +class UploadServiceTestCase(unittest.TestCase): + + @mock.patch.object(RemovalService, 'rm') + def test_upload_complete(self, mock_rm): + # build our dependencies + removal_service = RemovalService() + reference = UploadService(removal_service) + + # call upload_complete, which should, in turn, call `rm`: + reference.upload_complete("my uploaded file") + + # check that it called the rm method of any RemovalService + mock_rm.assert_called_with("my uploaded file") + + # check that it called the rm method of _our_ removal_service + removal_service.rm.assert_called_with("my uploaded file") +``` + +Great! We’ve validated that the UploadService successfully calls our instance’s rm method. Notice anything interesting in there? The patching mechanism actually replaced the rm method of all RemovalService instances in our test method. 
That means that we can actually inspect the instances themselves. If you want to see more, try dropping in a breakpoint in your mocking code to get a good feel for how the patching mechanism works.

+### Pitfall: Decorator Order
+
+When using multiple decorators on your test methods, order is important, and it’s kind of confusing. Basically, when mapping decorators to method parameters, [work backwards][3]. Consider this example:
+
+```
+@mock.patch('mymodule.sys')
+@mock.patch('mymodule.os')
+@mock.patch('mymodule.os.path')
+def test_something(self, mock_os_path, mock_os, mock_sys):
+    pass
+```
+
+Notice how our parameters are matched to the reverse order of the decorators? That’s partly because of [the way that Python works][4]. With multiple method decorators, here’s the order of execution in pseudocode:
+
+```
+patch_sys(patch_os(patch_os_path(test_something)))
+```
+
+Since the patch to sys is the outermost patch, it will be executed last, making it the last parameter in the actual test method arguments. Take note of this well and use a debugger when running your tests to make sure that the right parameters are being injected in the right order.
+
+### Option 2: Creating Mock Instances
+
+Instead of mocking the specific instance method, we could just supply a mocked instance to UploadService with its constructor. I prefer option 1 above, as it’s a lot more precise, but there are many cases where option 2 might be efficient or necessary.
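The decorator-order rule above can be verified with a small runnable sketch. This is illustrative only; it patches attributes of the real `os` module via the standard-library `unittest.mock`, and the patches are undone once the call returns:

```python
import os
import os.path
from unittest import mock

@mock.patch('os.remove')        # outermost decorator -> injected LAST
@mock.patch('os.path.isfile')   # innermost decorator -> injected FIRST
def check_order(mock_isfile, mock_remove):
    # While the patches are active, the module attributes ARE the mocks,
    # in the reverse order of the decorators.
    assert os.path.isfile is mock_isfile
    assert os.remove is mock_remove
    return "ok"
```

Calling `check_order()` returns without raising, confirming the reverse mapping.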
Let’s refactor our test again:

```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from mymodule import RemovalService, UploadService
+
+import mock
+import unittest
+
+class RemovalServiceTestCase(unittest.TestCase):
+
+    @mock.patch('mymodule.os.path')
+    @mock.patch('mymodule.os')
+    def test_rm(self, mock_os, mock_path):
+        # instantiate our service
+        reference = RemovalService()
+
+        # set up the mock
+        mock_path.isfile.return_value = False
+
+        reference.rm("any path")
+
+        # test that the remove call was NOT called.
+        self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.")
+
+        # make the file 'exist'
+        mock_path.isfile.return_value = True
+
+        reference.rm("any path")
+
+        mock_os.remove.assert_called_with("any path")
+
+
+class UploadServiceTestCase(unittest.TestCase):
+
+    def test_upload_complete(self):
+        # build our dependencies
+        mock_removal_service = mock.create_autospec(RemovalService)
+        reference = UploadService(mock_removal_service)
+
+        # call upload_complete, which should, in turn, call `rm`:
+        reference.upload_complete("my uploaded file")
+
+        # test that it called the rm method
+        mock_removal_service.rm.assert_called_with("my uploaded file")
+```
+
+In this example, we haven’t even had to patch any functionality; we simply create an auto-spec for the RemovalService class, and then inject this instance into our UploadService to validate the functionality.
+
+The [mock.create_autospec][5] method creates a functionally equivalent instance to the provided class. What this means, practically speaking, is that when the returned instance is interacted with, it will raise exceptions if used in illegal ways. More specifically, if a method is called with the wrong number of arguments, an exception will be raised. This is extremely important as refactors happen. As a library changes, tests break and that is expected.
Without using an auto-spec, our tests will still pass even though the underlying implementation is broken.

+### Pitfall: The mock.Mock and mock.MagicMock Classes
+
+The mock library also includes two important classes upon which most of the internal functionality is built: [mock.Mock][6] and mock.MagicMock. When given a choice to use a mock.Mock instance, a mock.MagicMock instance, or an auto-spec, always favor using an auto-spec, as it helps keep your tests sane for future changes. This is because mock.Mock and mock.MagicMock accept all method calls and property assignments regardless of the underlying API. Consider the following use case:
+
+```
+class Target(object):
+    def apply(self, value):
+        return value
+
+def method(target, value):
+    return target.apply(value)
+```
+
+We can test this with a mock.Mock instance like this:
+
+```
+class MethodTestCase(unittest.TestCase):
+
+    def test_method(self):
+        target = mock.Mock()
+
+        method(target, "value")
+
+        target.apply.assert_called_with("value")
+```
+
+This logic seems sane, but let’s modify the Target.apply method to take more parameters:
+
+```
+class Target(object):
+    def apply(self, value, are_you_sure):
+        if are_you_sure:
+            return value
+        else:
+            return None
+```
+
+Re-run your test, and you’ll find that it still passes. That’s because it isn’t built against your actual API. This is why you should always use the create_autospec method and the autospec parameter with the @patch and @patch.object decorators.
+
+### Real-World Example: Mocking a Facebook API Call
+
+To finish up, let’s write a more applicable real-world example, one which we mentioned in the introduction: posting a message to Facebook. We’ll write a nice wrapper class and a corresponding test case.
+
+```
+import facebook
+
+class SimpleFacebook(object):
+
+    def __init__(self, oauth_token):
+        self.graph = facebook.GraphAPI(oauth_token)
+
+    def post_message(self, message):
+        """Posts a message to the Facebook wall."""
+        self.graph.put_object("me", "feed", message=message)
+```
+
+Here’s our test case, which checks that we post the message without actually posting the message:
+
+```
+import facebook
+import simple_facebook
+import mock
+import unittest
+
+class SimpleFacebookTestCase(unittest.TestCase):
+
+    @mock.patch.object(facebook.GraphAPI, 'put_object', autospec=True)
+    def test_post_message(self, mock_put_object):
+        sf = simple_facebook.SimpleFacebook("fake oauth token")
+        sf.post_message("Hello World!")
+
+        # verify: with autospec, the instance is recorded as the first argument
+        mock_put_object.assert_called_with(sf.graph, "me", "feed", message="Hello World!")
+```
+
+As we’ve seen so far, it’s really simple to start writing smarter tests with mock in Python.
+
+### Conclusion
+
+Python’s mock library, if a little confusing to work with, is a game-changer for [unit-testing][7]. We’ve demonstrated common use-cases for getting started using mock in unit-testing, and hopefully this article will help [Python developers][8] overcome the initial hurdles and write excellent, tested code.
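As a final illustrative sketch (an addition, not from the article itself), here is the earlier `Target.apply` pitfall rerun end to end with the standard-library `unittest.mock`, showing a bare Mock accepting a stale call while an autospec'd mock rejects it:

```python
from unittest import mock

class Target(object):
    # The "new" API: apply grew a second required parameter.
    def apply(self, value, are_you_sure):
        return value if are_you_sure else None

# A bare Mock silently accepts the stale one-argument call.
loose = mock.Mock()
loose.apply("value")  # no error, even though the real API changed

# An autospec'd instance enforces the real signature.
strict = mock.create_autospec(Target, instance=True)
try:
    strict.apply("value")  # missing are_you_sure
    signature_enforced = False
except TypeError:
    signature_enforced = True

assert signature_enforced
```

The autospec'd mock raises `TypeError` exactly where a refactor would have broken real callers, which is the behavior that keeps test suites honest.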
+ +-------------------------------------------------------------------------------- + +via: http://slviki.com/index.php/2016/06/18/introduction-to-mocking-in-python/ + +作者:[Dasun Sucharith][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.slviki.com/ +[1]: http://www.python.org/dev/peps/pep-0417/ +[2]: https://pypi.python.org/pypi/mock +[3]: http://www.voidspace.org.uk/python/mock/patch.html#nesting-patch-decorators +[4]: http://docs.python.org/2/reference/compound_stmts.html#function-definitions +[5]: http://www.voidspace.org.uk/python/mock/helpers.html#autospeccing +[6]: http://www.voidspace.org.uk/python/mock/mock.html +[7]: http://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters +[8]: http://www.toptal.com/python + + + + + + + + + From 1431c59c8d4c5fb4701d9cb82bbfc1cad8ee307f Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 11 Jul 2016 20:35:31 +0800 Subject: [PATCH 114/471] =?UTF-8?q?20160711-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ook Messenger bot with Python and Flask.md | 319 ++++++++++++++++++ 1 file changed, 319 insertions(+) create mode 100644 sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md diff --git a/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md new file mode 100644 index 0000000000..1f3fbe1ed3 --- /dev/null +++ b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md @@ -0,0 +1,319 @@ +How to build and deploy a Facebook Messenger bot with Python and Flask, a tutorial +========================================================================== + +This is my log of how I built a simple 
Facebook Messenger bot. The functionality is really simple: it’s an echo bot that will just print back to the user what they write.

+This is something akin to the Hello World example for servers, the echo server.
+
+The goal of the project is not to build the best Messenger bot, but rather to get a feel for what it takes to build a minimal bot and how everything comes together.
+
+- [Tech Stack][1]
+- [Bot Architecture][2]
+- [The Bot Server][3]
+- [Deploying to Heroku][4]
+- [Creating the Facebook App][5]
+- [Conclusion][6]
+
+### Tech Stack
+
+The tech stack that was used is:
+
+- [Heroku][7] for back end hosting. The free tier is more than enough for a tutorial of this level. The echo bot does not require any sort of data persistence, so a database was not used.
+- [Python][8] was the language of choice. The version that was used is 2.7; however, it can easily be ported to Python 3 with minor alterations.
+- [Flask][9] as the web development framework. It’s a very lightweight framework that’s perfect for small scale projects/microservices.
+- Finally, the [Git][10] version control system was used for code maintenance and to deploy to Heroku.
+- Worth mentioning: [Virtualenv][11]. This Python tool is used to create “environments” clean of Python libraries so you can install only the necessary requirements and minimize the app footprint.
+
+### Bot Architecture
+
+A Messenger bot consists of a server that responds to two types of requests:
+
+- GET requests are used for authentication. They are sent by Messenger with an authentication code that you register on FB.
+- POST requests are used for the actual communication. The typical workflow is that Messenger initiates the communication by sending a POST request with the data of the message sent by the user; we will handle it and send a POST request of our own back.
If that one is completed successfully (a 200 OK status is returned), we also respond with a 200 OK code to the initial Messenger request.

+For this tutorial the app will be hosted on Heroku, which provides a nice and easy interface to deploy apps. As mentioned, the free tier will suffice for this tutorial.
+
+After the app has been deployed and is running, we’ll create a Facebook app and link it to our app so that Messenger knows where to send the requests that are meant for our bot.
+
+### The Bot Server
+
+The basic server code was taken from the following [Chatbot][12] project by GitHub user [hult (Magnus Hult)][13], with a few modifications to the code to only echo messages and a couple of bugfixes I came across. This is the final version of the server code:
+
+```
+from flask import Flask, request
+import json
+import requests
+
+app = Flask(__name__)
+
+# This needs to be filled with the Page Access Token that will be provided
+# by the Facebook App that will be created.
+PAT = ''
+
+@app.route('/', methods=['GET'])
+def handle_verification():
+    print "Handling Verification."
+    if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me':
+        print "Verification successful!"
+        return request.args.get('hub.challenge', '')
+    else:
+        print "Verification failed!"
+        return 'Error, wrong validation token'
+
+@app.route('/', methods=['POST'])
+def handle_messages():
+    print "Handling Messages"
+    payload = request.get_data()
+    print payload
+    for sender, message in messaging_events(payload):
+        print "Incoming from %s: %s" % (sender, message)
+        send_message(PAT, sender, message)
+    return "ok"
+
+def messaging_events(payload):
+    """Generate tuples of (sender_id, message_text) from the
+    provided payload.
+
+    """
+    data = json.loads(payload)
+    messaging_events = data["entry"][0]["messaging"]
+    for event in messaging_events:
+        if "message" in event and "text" in event["message"]:
+            yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape')
+        else:
+            yield event["sender"]["id"], "I can't echo this"
+
+
+def send_message(token, recipient, text):
+    """Send the message text to recipient with id recipient.
+    """
+
+    r = requests.post("https://graph.facebook.com/v2.6/me/messages",
+        params={"access_token": token},
+        data=json.dumps({
+            "recipient": {"id": recipient},
+            "message": {"text": text.decode('unicode_escape')}
+        }),
+        headers={'Content-type': 'application/json'})
+    if r.status_code != requests.codes.ok:
+        print r.text
+
+if __name__ == '__main__':
+    app.run()
+```
+
+Let’s break down the code. The first part is the imports that will be needed:
+
+```
+from flask import Flask, request
+import json
+import requests
+```
+
+Next we define the two functions (using the Flask-specific app.route decorators) that will handle the GET and POST requests to our bot.
+
+```
+@app.route('/', methods=['GET'])
+def handle_verification():
+    print "Handling Verification."
+    if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me':
+        print "Verification successful!"
+        return request.args.get('hub.challenge', '')
+    else:
+        print "Verification failed!"
+        return 'Error, wrong validation token'
+```
+
+The verify_token object that is being sent by Messenger will be declared by us when we create the Facebook app. We have to validate the one we receive against the one we declared. Finally, we return the “hub.challenge” back to Messenger.
+
+The function that handles the POST requests is a bit more interesting.
+
+```
+@app.route('/', methods=['POST'])
+def handle_messages():
+    print "Handling Messages"
+    payload = request.get_data()
+    print payload
+    for sender, message in messaging_events(payload):
+        print "Incoming from %s: %s" % (sender, message)
+        send_message(PAT, sender, message)
+    return "ok"
+```
+
+When called, we grab the message payload and use the messaging_events function to break it down and extract the sender user id and the actual message sent, generating a Python iterator that we can loop over. Notice that in each request sent by Messenger it is possible to have more than one message.
+
+```
+def messaging_events(payload):
+    """Generate tuples of (sender_id, message_text) from the
+    provided payload.
+    """
+    data = json.loads(payload)
+    messaging_events = data["entry"][0]["messaging"]
+    for event in messaging_events:
+        if "message" in event and "text" in event["message"]:
+            yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape')
+        else:
+            yield event["sender"]["id"], "I can't echo this"
+```
+
+While iterating over each message, we call the send_message function, which performs the POST request back to Messenger using the Facebook Graph messages API. During this time we still have not responded to the original Messenger request, which we are blocking. This can lead to timeouts and 5XX errors.
+
+The above was spotted during an outage caused by a bug I came across, which occurred when a user sent emojis, which are actual Unicode IDs that Python was mis-encoding. We ended up sending back garbage.
+
+This POST request back to Messenger would never finish, and that in turn would cause 5XX status codes to be returned to the original request, rendering the service unusable.
+
+This was fixed by escaping the messages with `encode('unicode_escape')` and then, just before we send the message back, decoding them with `decode('unicode_escape')`.
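The escape/unescape round-trip can be sketched in isolation. This is an illustrative aside written in Python 3 form, where `encode('unicode_escape')` returns bytes; the article's server targets Python 2, but the idea is the same:

```python
# An emoji is a non-ASCII code point of the kind that once broke the echo server.
message = u"thumbs up \U0001F44D"

# Escaping turns it into plain ASCII that survives the round trip to Messenger...
escaped = message.encode('unicode_escape')

# ...and decoding just before sending restores the original text.
restored = escaped.decode('unicode_escape')

assert restored == message
assert escaped.isascii()
```

The decode step appears in the `send_message` helper shown next, just before the POST to the Graph API.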
+
+```
+def send_message(token, recipient, text):
+    """Send the message text to recipient with id recipient.
+    """
+
+    r = requests.post("https://graph.facebook.com/v2.6/me/messages",
+        params={"access_token": token},
+        data=json.dumps({
+            "recipient": {"id": recipient},
+            "message": {"text": text.decode('unicode_escape')}
+        }),
+        headers={'Content-type': 'application/json'})
+    if r.status_code != requests.codes.ok:
+        print r.text
+```
+
+### Deploying to Heroku
+
+Once the code was built to my liking, it was time for the next step:
+deploy the app.
+
+Sure, but how?
+
+I have deployed apps to Heroku before (mainly Rails); however, I was always following a tutorial of some sort, so the configuration had already been created. In this case, though, I had to start from scratch.
+
+Fortunately, the official [Heroku documentation][14] came to the rescue. The article explains nicely the bare minimum required for running an app.
+
+Long story short, what we need besides our code are two files. The first file is the “requirements.txt” file, which is a list of the library dependencies required to run the application.
+
+The second file required is the “Procfile”. This file is there to inform Heroku how to run our service. Again, the bare minimum needed for this file is the following:
+
+>web: gunicorn echoserver:app
+
+The way this will be interpreted by Heroku is that our app is started by running the echoserver.py file, and the app will be using gunicorn as the web server. The reason we are using an additional web server is performance-related and is explained in the above Heroku documentation:
+
+>Web applications that process incoming HTTP requests concurrently make much more efficient use of dyno resources than web applications that only process one request at a time. Because of this, we recommend using web servers that support concurrent request processing whenever developing and running production services.
+
+>The Django and Flask web frameworks feature convenient built-in web servers, but these blocking servers only process a single request at a time. If you deploy with one of these servers on Heroku, your dyno resources will be underutilized and your application will feel unresponsive.
+
+>Gunicorn is a pure-Python HTTP server for WSGI applications. It allows you to run any Python application concurrently by running multiple Python processes within a single dyno. It provides a perfect balance of performance, flexibility, and configuration simplicity.
+
+Going back to our “requirements.txt” file, let’s see how it binds with the Virtualenv tool that was mentioned.
+
+At any time, your development machine may have a number of Python libraries installed. When deploying applications, you don’t want these libraries loaded, as it makes it hard to tell which ones you actually use.
+
+What Virtualenv does is create a new blank virtual environment so that you can install only the libraries that your app requires.
+
+You can check which libraries are currently installed by running the following command:
+
+```
+kostis@KostisMBP ~ $ pip freeze
+cycler==0.10.0
+Flask==0.10.1
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+matplotlib==1.5.1
+numpy==1.10.4
+pyparsing==2.1.0
+python-dateutil==2.5.0
+pytz==2015.7
+requests==2.10.0
+scipy==0.17.0
+six==1.10.0
+virtualenv==15.0.1
+Werkzeug==0.11.10
+```
+
+Note: The pip tool should already be installed on your machine along with Python.
+
+If not, check the [official site][15] for how to install it.
+
+Now let’s use Virtualenv to create a new blank environment. First we create a new folder for our project, and change directory into it:
+
+```
+kostis@KostisMBP projects $ mkdir echoserver
+kostis@KostisMBP projects $ cd echoserver/
+kostis@KostisMBP echoserver $
+```
+
+Now let’s create a new environment called echobot.
To activate it, run the following source command; checking with pip freeze, we can see that the new environment is empty.

+```
+kostis@KostisMBP echoserver $ virtualenv echobot
+kostis@KostisMBP echoserver $ source echobot/bin/activate
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+(echobot) kostis@KostisMBP echoserver $
+```
+
+We can start installing the libraries required. The ones we’ll need are flask, gunicorn, and requests; with them installed, we create the requirements.txt file:
+
+```
+(echobot) kostis@KostisMBP echoserver $ pip install flask
+(echobot) kostis@KostisMBP echoserver $ pip install gunicorn
+(echobot) kostis@KostisMBP echoserver $ pip install requests
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+click==6.6
+Flask==0.11
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+requests==2.10.0
+Werkzeug==0.11.10
+(echobot) kostis@KostisMBP echoserver $ pip freeze > requirements.txt
+```
+
+After all the above have been run, we create the echoserver.py file with the Python code and the Procfile with the command that was mentioned, and we should end up with the following files/folders:
+
+```
+(echobot) kostis@KostisMBP echoserver $ ls
+Procfile    echobot    echoserver.py    requirements.txt
+```
+
+We are now ready to upload to Heroku. We need to do two things. The first is to install the Heroku Toolbelt if it’s not already installed on your system (go to [Heroku][16] for details). The second is to create a new Heroku app through the [web interface][17].
+
+Click on the big plus sign on the top right and select “Create new app”.
+ + + + + + + + +-------------------------------------------------------------------------------- + +via: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/ + +作者:[Konstantinos Tsaprailis][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/kostistsaprailis +[1]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#tech-stack +[2]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#bot-architecture +[3]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#the-bot-server +[4]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#deploying-to-heroku +[5]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#creating-the-facebook-app +[6]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#conclusion +[7]: https://www.heroku.com +[8]: https://www.python.org +[9]: http://flask.pocoo.org +[10]: https://git-scm.com +[11]: https://virtualenv.pypa.io/en/stable +[12]: https://github.com/hult/facebook-chatbot-python +[13]: https://github.com/hult +[14]: https://devcenter.heroku.com/articles/python-gunicorn +[15]: https://pip.pypa.io/en/stable/installing +[16]: https://toolbelt.heroku.com +[17]: https://dashboard.heroku.com/apps + + From 0f727514320c4fb03fb57f9ef07c727c9f53213a Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 11 Jul 2016 20:39:40 +0800 Subject: [PATCH 115/471] =?UTF-8?q?20160711-6=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit

---
 ...e text format for the visually impaired.md | 46 +++++++++++++++++++
 1 file changed, 46 insertions(+)
 create mode 100644 sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md

diff --git a/sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md b/sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md
new file mode 100644
index 0000000000..4bf5ccaf42
--- /dev/null
+++ b/sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md
@@ -0,0 +1,46 @@
+DAISY: A Linux-compatible text format for the visually impaired
+=================================================================
+
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/osdc-lead_books.png?itok=K8wqfPT5)
+>Image by Kate Ter Haar. Modified by opensource.com. CC BY-SA 2.0.
+
+If you're blind or visually impaired like I am, you usually require various levels of hardware or software to do things that people who can see take for granted. One among these is specialized formats for reading print books: Braille (if you know how to read it) or specialized text formats such as DAISY.
+
+### What is DAISY?
+
+DAISY stands for Digital Accessible Information System. It's an open standard used almost exclusively by the blind to read textbooks, periodicals, newspapers, fiction, you name it. It was founded in the mid '90s by [The DAISY Consortium][1], a group of organizations dedicated to producing a set of standards that would allow text to be marked up in a way that would make it easy to read, skip around in, annotate, and otherwise manipulate in much the same way a sighted user would.
+
+The current version, DAISY 3.0, was released in mid-2005 and is a complete rewrite of the standard. It was created with the goal of making it much easier to write books complying with it.
It's worth noting that DAISY can support plain text only, audio recordings (in PCM Wave or MPEG Layer III format) only, or a combination of text and audio. Specialized software can read these books and allow users to set bookmarks and navigate a book as easily as a sighted person would with a print book. + +### How does DAISY work? + +DAISY, regardless of the specific version, works a bit like this: You have your main navigation file (ncc.html in DAISY 2.02) that contains metadata about the book, such as author's name, copyright date, how many pages the book has, etc. This file is a valid XML document in the case of DAISY 3.0, with DTD (document type definition) files being highly recommended to be included with each book. + +In the navigation control file is markup describing precise positions—either text caret offsets in the case of text navigation or time down to the millisecond in the case of audio recordings—that allows the software to skip to that exact point in the book much as a sighted person would turn to a chapter page. It's worth noting that this navigation control file only contains positions for the main, and largest, elements of a book. + +The smaller elements are handled by SMIL (synchronized multimedia integration language) files. These files contain position points for each chapter in the book. The level of navigation depends heavily on how well the book was marked up. Think of it like this: If a print book has no chapter headings, you will have a hard time figuring out which chapter you're in. If a DAISY book is badly marked up, you might only be able to navigate to the start of the book, or possibly only to the table of contents. If a book is marked up badly enough (or missing markup entirely), your DAISY reading software is likely to simply ignore it. + +### Why the need for specialized software? + +You may be wondering why, if DAISY is little more than HTML, XML, and audio files, you would need specialized software to read and manipulate it. 
Technically speaking, you don't. The specialized software is mostly for convenience. In Linux, for example, a simple web browser can be used to open the books and read them. If you click on the XML file in a DAISY 3 book, all the software will generally do is read the spines of the books you give it access to and create a list of them that you click on to open. If a book is badly marked up, it won't show up in this list. + +Producing DAISY is another matter entirely, and usually requires either specialized software or enough knowledge of the specifications to modify general-purpose software to parse it. + +### Conclusion + +Fortunately, DAISY is a dying standard. While it is very good at what it does, the need for specialized software to produce it has set us apart from the normal sighted world, where readers use a variety of formats to read their books electronically. This is why the DAISY consortium has succeeded DAISY with EPUB, version 3, which supports what are called media overlays. This is basically an EPUB book with optional audio or video. Since EPUB shares a lot of DAISY's XML markup, some software that can read DAISY can see EPUB books but usually cannot read them. This means that once the websites that provide books for us switch over to this open format, we will have a much larger selection of software to read our books. 
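To make the navigation-file idea above concrete, here is a small illustrative sketch. The `ncc.html` fragment below is invented for demonstration (real books carry far more metadata), and the parsing uses nothing but Python's standard library:

```python
from html.parser import HTMLParser

# A made-up DAISY 2.02-style ncc.html fragment: headings whose links
# point into SMIL files, as described above.
NCC = """
<h1 id="ch1"><a href="chapter1.smil#tcp1">Chapter 1</a></h1>
<h2 id="ch1s1"><a href="chapter1.smil#tcp2">Section 1.1</a></h2>
<h1 id="ch2"><a href="chapter2.smil#tcp1">Chapter 2</a></h1>
"""

class NavParser(HTMLParser):
    """Collect (smil_target, title) pairs from the heading links."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.href = None
        self.entries = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.entries.append((self.href, data.strip()))

parser = NavParser()
parser.feed(NCC)
for target, title in parser.entries:
    print(target, "->", title)
# chapter1.smil#tcp1 -> Chapter 1
# chapter1.smil#tcp2 -> Section 1.1
# chapter2.smil#tcp1 -> Chapter 2
```

A DAISY reader does essentially this, then resolves each SMIL target to a text offset or an audio timestamp.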
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/5/daisy-linux-compatible-text-format-visually-impaired + +作者:[Kendell Clark][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kendell-clark +[1]: http://www.daisy.org From d60d5ab51956646d56e2743a8f70a71da9fb0d84 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 11 Jul 2016 20:45:31 +0800 Subject: [PATCH 116/471] =?UTF-8?q?20160711-7=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20160711 Getting started with Git.md | 139 ++++++++++++++++++ 1 file changed, 139 insertions(+) create mode 100644 sources/tech/20160711 Getting started with Git.md diff --git a/sources/tech/20160711 Getting started with Git.md b/sources/tech/20160711 Getting started with Git.md new file mode 100644 index 0000000000..032e5e3510 --- /dev/null +++ b/sources/tech/20160711 Getting started with Git.md @@ -0,0 +1,139 @@ +Getting started with Git +========================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) +>Image by : opensource.com + + +In the introduction to this series we learned who should use Git, and what it is for. Today we will learn how to clone public Git repositories, and how to extract individual files without cloning the whole works. + +Since Git is so popular, it makes life a lot easier if you're at least familiar with it at a basic level. If you can grasp the basics (and you can, I promise!), then you'll be able to download whatever you need, and maybe even contribute stuff back. 
And that, after all, is what open source is all about: having access to the code that makes up the software you run, the freedom to share it with others, and the right to change it as you please. Git makes this whole process easy, as long as you're comfortable with Git.
+
+So let's get comfortable with Git.
+
+### Read and write
+
+Broadly speaking, there are two ways to interact with a Git repository: you can read from it, or you can write to it. It's just like a file: sometimes you open a document just to read it, and other times you open a document because you need to make changes.
+
+In this article, we'll cover reading from a Git repository. We'll tackle the subject of writing back to a Git repository in a later article.
+
+### Git or GitHub?
+
+A word of clarification: Git is not the same as GitHub (or GitLab, or Bitbucket). Git is a command-line program, so it looks like this:
+
+```
+$ git
+usage: git [--version] [--help] [-C <path>]
+           [-p | --paginate | --no-pager] [--bare]
+           [--git-dir=<path>] <command> [<args>]
+
+```
+
+As Git is open source, lots of smart people have built infrastructures around it which, in themselves, have become very popular.
+
+My articles about Git teach pure Git first, because if you understand what Git is doing then you can maintain an indifference to what front end you are using. However, my articles also include common ways of accomplishing each task through popular Git services, since that's probably what you'll encounter first.
+
+### Installing Git
+
+To install Git on Linux, grab it from your distribution's software repository. BSD users should find Git in the Ports tree, in the devel section.
+
+For non-open source operating systems, go to the [project site][1] and follow the instructions. Once installed, there should be no difference between Linux, BSD, and Mac OS X commands.
Windows users will have to adapt Git commands to match the Windows file system, or install Cygwin to run Git natively, without getting tripped up by Windows file system conventions. + +### Afternoon tea with Git + +Not every one of us needs to adopt Git into our daily lives right away. Sometimes, the most interaction you have with Git is to visit a repository of code, download a file or two, and then leave. On the spectrum of getting to know Git, this is more like afternoon tea than a proper dinner party. You make some polite conversation, you get the information you need, and then you part ways without the intention of speaking again for at least another three months. + +And that's OK. + +Generally speaking, there are two ways to access Git: via command line, or by any one of the fancy Internet technologies providing quick and easy access through the web browser. + +Say you want to install a trash bin for use in your terminal because you've been burned one too many times by the rm command. You've heard about Trashy, which calls itself "a sane intermediary to the rm command", and you want to look over its documentation before you install it. Lucky for you, [Trashy is hosted publicly on GitLab.com][2]. + +### Landgrab + +The first way we'll work with this Git repository is a sort of landgrab method: we'll clone the entire thing, and then sort through the contents later. Since the repository is hosted with a public Git service, there are two ways to do this: on the command line, or through a web interface. + +To grab an entire repository with Git, use the git clone command with the URL of the Git repository. If you're not clear on what the right URL is, the repository should tell you. GitLab gives you a copy-and-paste repository URL [for Trashy][3]. + +![](https://opensource.com/sites/default/files/1_gitlab-url.jpg) + +You might notice that on some services, both SSH and HTTPS links are provided. You can use SSH only if you have write permissions to the repository. 
Otherwise, you must use the HTTPS URL. + +Once you have the right URL, cloning the repository is pretty simple. Just git clone the URL, and optionally name the directory to clone it into. The default behaviour is to clone the git directory to your current directory; for example, 'trashy.git' gets put in your current location as 'trashy'. I use the .clone extension as a shorthand for repositories that are read-only, and the .git extension as shorthand for repositories I can read and write, but that's not by any means an official mandate. + +``` +$ git clone https://gitlab.com/trashy/trashy.git trashy.clone +Cloning into 'trashy.clone'... +remote: Counting objects: 142, done. +remote: Compressing objects: 100% (91/91), done. +remote: Total 142 (delta 70), reused 103 (delta 47) +Receiving objects: 100% (142/142), 25.99 KiB | 0 bytes/s, done. +Resolving deltas: 100% (70/70), done. +Checking connectivity... done. +``` + +Once the repository has been cloned successfully, you can browse files in it just as you would any other directory on your computer. + +The other way to get a copy of the repository is through the web interface. Both GitLab and GitHub provide a snapshot of any repository in a .zip file. GitHub has a big green download button, but on GitLab, look for an inconspicuous download button on the far right of your browser window: + +![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) + +### Pick and choose + +An alternate method of obtaining a file from a Git repository is to find the file you're after and pluck it right out of the repository. This method is only supported via web interfaces, which is essentially you looking at someone else's clone of a repository; you can think of it as a sort of HTTP shared directory. 
+
+The problem with using this method is that you might find that certain files don't actually exist in a raw Git repository, as a file might only exist in its complete form after a make command builds the file, which won't happen until you download the repository, read the README or INSTALL file, and run the command. Assuming, however, that you are sure a file does exist and you just want to go into the repository, grab it, and walk away, you can do that.
+
+In GitLab and GitHub, click the Files link for a file view, view the file in Raw mode, and use your web browser's save function, e.g. in Firefox, File > Save Page As. In a GitWeb repository (a web view of personal git repositories used by some who prefer to host Git themselves), the Raw view link is in the file listing view.
+
+![](https://opensource.com/sites/default/files/1_webgit-file.jpg)
+
+### Best practices
+
+Generally, cloning an entire Git repository is considered the right way of interacting with Git. There are a few reasons for this. Firstly, a clone is easy to keep updated with the git pull command, so you won't have to keep going back to some web site for a new copy of a file each time an improvement has been made. Secondly, should you happen to make an improvement yourself, then it is easier to submit those changes to the original author if it is all nice and tidy in a Git repository.
+
+For now, it's probably enough to just practice going out and finding interesting Git repositories and cloning them to your drive. As long as you know the basics of using a terminal, then it's not hard to do. Don't know the basics of terminal usage? Give me five more minutes of your time.
+
+### Terminal basics
+
+The first thing to understand is that all files have a path. That makes sense; if I told you to open a file for me on a regular non-terminal day, you'd have to get to where that file is on your drive, and you'd do that by navigating a bunch of computer windows until you reached that file.
For example, maybe you'd click your home directory > Pictures > InktoberSketches > monkey.kra.
+
+In that scenario, we could say that the file monkey.kra has the path $HOME/Pictures/InktoberSketches/monkey.kra.
+
+In the terminal, unless you're doing special sysadmin work, your file paths are generally going to start with $HOME (or, if you're lazy, just the ~ character) followed by a list of folders up to the filename itself. This is analogous to whatever icons you click in your GUI to reach the file or folder.
+
+If you want to clone a Git repository into your Documents directory, then you could open a terminal and run this command:
+
+```
+$ git clone https://gitlab.com/foo/bar.git $HOME/Documents/bar.clone
+```
+
+Once that is complete, you can open a file manager window, navigate to your Documents folder, and you'll find the bar.clone directory waiting for you.
+
+If you want to get a little more advanced, you might revisit that repository at some later date, and try a git pull to see if there have been updates to the project:
+
+```
+$ cd $HOME/Documents/bar.clone
+$ pwd
+$HOME/Documents/bar.clone
+$ git pull
+```
+
+For now, that's all the terminal commands you need to get started, so go out and explore. The more you do it, the better you get at it, and that is, at least give or take a vowel, the name of the game.
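The clone-once-then-pull routine from the "Best practices" section is easy to automate. Here is an illustrative Python sketch (the helper name and example paths are mine, not part of Git) that only chooses the right command to run:

```python
import os.path

def sync_command(url, dest):
    """Return the git command that keeps `dest` in sync with `url`:
    a fresh clone if dest is not a repository yet, a pull otherwise."""
    if os.path.isdir(os.path.join(dest, ".git")):
        return ["git", "-C", dest, "pull"]
    return ["git", "clone", url, dest]

dest = os.path.expanduser("~/Documents/bar.clone")
print(sync_command("https://gitlab.com/foo/bar.git", dest))
```

The list form is ready to hand to a process runner such as subprocess.run().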
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/stumbling-git + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://git-scm.com/download +[2]: https://gitlab.com/trashy/trashy +[3]: https://gitlab.com/trashy/trashy.git + From 93bb30d7e4d0f40f711aa8304fbbfa389fba163d Mon Sep 17 00:00:00 2001 From: cposture Date: Tue, 12 Jul 2016 00:41:32 +0800 Subject: [PATCH 117/471] translating partly --- ...reate Your Own Shell in Python - Part I.md | 41 ++++++++++--------- 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md index 48e84381c8..0b7e415f2a 100644 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -1,15 +1,15 @@ -Translating by cposture 2016.07.09 -Create Your Own Shell in Python : Part I +使用 Python 创建你自己的 Shell:Part I +========================================== -I’m curious to know how a shell (like bash, csh, etc.) works internally. So, I implemented one called yosh (Your Own SHell) in Python to answer my own curiosity. The concept I explain in this article can be applied to other languages as well. +我很好奇一个 shell (像 bash,csh 等)内部是如何工作的。为了满足自己的好奇心,我使用 Python 实现了一个名为 yosh (Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。 -(Note: You can find source code used in this blog post here. I distribute it with MIT license.) +(提示:你可以发布于此的博文中找到使用的源代码,代码以 MIT 许可发布) -Let’s start. +让我们开始吧。 -### Step 0: Project Structure +### 步骤 0:项目结构 -For this project, I use the following project structure. 
+对于此项目,我使用了以下的项目结构。

```
yosh_project
|-- yosh
   |-- __init__.py
   |-- shell.py
```

-`yosh_project` is the root project folder (you can also name it just `yosh`).
+`yosh_project` 为项目根目录(你也可以把它简单地命名为 `yosh`)。

-`yosh` is the package folder and `__init__.py` will make it a package named the same as the package folder name. (If you don't write Python, just ignore it.)
+`yosh` 为包目录,其中的 `__init__.py` 会将该目录标记为一个与目录同名的包(如果你不写 Python,可以忽略它)。

-`shell.py` is our main shell file.
+`shell.py` 是我们的主脚本文件。

-### Step 1: Shell Loop
+### 步骤 1:Shell 循环

-When you start a shell, it will show a command prompt and wait for your command input. After it receives the command and executes it (the detail will be explained later), your shell will be back to the wait loop for your next command.
+当你启动一个 shell 时,它会显示一个命令提示符并等待用户输入命令。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会回到循环,等待下一条指令。

-In `shell.py`, we start by a simple main function calling the shell_loop() function as follows:
+在 `shell.py` 中,我们会以一个简单的 main 函数开始,该函数调用了 shell_loop() 函数,如下:

```
def shell_loop():
    # Start the loop here


def main():
    shell_loop()


if __name__ == "__main__":
    main()
```

-Then, in our `shell_loop()`, we use a status flag to indicate whether the loop should continue or stop. In the beginning of the loop, our shell will show a command prompt and wait to read command input.
+接着,在 `shell_loop()` 中,我们使用一个状态标志来指示循环是继续还是停止。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。

```
import sys

def shell_loop():
    # Start the loop here
    status = SHELL_STATUS_RUN
    while status == SHELL_STATUS_RUN:
        # Display a command prompt
        sys.stdout.write('> ')
        sys.stdout.flush()

        # Read command input
        cmd = sys.stdin.readline()
```

-After that, we tokenize the command input and execute it (we'll implement the tokenize and execute functions soon).
+之后,我们切分并执行命令输入(我们将马上实现命令切分和执行函数)。

-Therefore, our shell_loop() will be the following.
+因此,我们的 shell_loop() 会是如下这样:

```
import sys

def shell_loop():
    status = SHELL_STATUS_RUN
    while status == SHELL_STATUS_RUN:
        # Display a command prompt
        sys.stdout.write('> ')
        sys.stdout.flush()

        # Read command input
        cmd = sys.stdin.readline()

        # Tokenize the command input
        cmd_tokens = tokenize(cmd)

        # Execute the command and retrieve new status
        status = execute(cmd_tokens)
```

-That's all of our shell loop. If we start our shell with python shell.py, it will show the command prompt.
However, it will throw an error if we type a command and hit enter because we don't define tokenize function yet.
+这就是我们整个 shell 循环。如果我们使用 python shell.py 命令启动 shell,它会显示命令提示符。然而,如果我们输入命令并按回车,它将会抛出错误,因为我们还没有定义命令切分函数。
+
-To exit the shell, try ctrl-c. I will tell how to exit gracefully later.
+为了退出 shell,可以尝试输入 ctrl-c。稍后我将解释如何以优雅的方式退出 shell。
+
-### Step 2: Tokenization
+### 步骤 2:命令切分
+
-When a user types a command in our shell and hits enter. The command input will be a long string containing both a command name and its arguments. Therefore, we have to tokenize it (split a string into several tokens).
+当用户在我们的 shell 中输入命令并按下回车键时,该命令将会是一个同时包含命令名称及其参数的长字符串。因此,我们必须切分该字符串(将它分割为多个标记)。
+
+乍一看,这似乎很简单。我们或许可以使用 cmd.split(),以空格分割输入。
It seems simple at first glance. We might use cmd.split() to separate the input by spaces. It works well for a command like `ls -a my_folder` because it splits the command into a list `['ls', '-a', 'my_folder']` which we can use them easily. However, there are some cases that some arguments are quoted with single or double quotes like `echo "Hello World"` or `echo 'Hello World'`. If we use cmd.split(), we will get a list of 3 tokens `['echo', '"Hello', 'World"']` instead of 2 tokens `['echo', 'Hello World']`.
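对于上面提到的引号切分问题,Python 标准库中的 `shlex` 模块可以按照 shell 的引号规则正确处理。下面只是一个演示用的小例子,并非本文实现的一部分:

```python
import shlex

cmd = 'echo "Hello World"'

# 普通的 split() 会把引号内的内容也切开
print(cmd.split())       # ['echo', '"Hello', 'World"']

# shlex.split() 按 shell 的引号规则切分
print(shlex.split(cmd))  # ['echo', 'Hello World']
```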
From ef29f31a9bd72e1c1d902ff924473833369a3496 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 12 Jul 2016 09:33:12 +0800 Subject: [PATCH 118/471] =?UTF-8?q?20160712-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntrol your DigitalOcean cloud instances.md | 70 +++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md diff --git a/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md new file mode 100644 index 0000000000..e24e381b38 --- /dev/null +++ b/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md @@ -0,0 +1,70 @@ +Using Vagrant to control your DigitalOcean cloud instances +========================================================= + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/fedora-vagrant-do-945x400.jpg) + +[Vagrant][1] is an application to create and support virtual development environments using virtual machines. Fedora has [official support for Vagrant][2] with libvirt on your local system. [DigitalOcean][3] is a cloud provider that provides a one-click deployment of a Fedora Cloud instance to an all-SSD server in under a minute. During the [recent Cloud FAD][4] in Raleigh, the Fedora Cloud team packaged a new plugin for Vagrant which enables Fedora users to keep up cloud instances in DigitalOcean using local Vagrantfiles. + +### How to use this plugin + +First step is to install the package in the command line. + +``` +$ sudo dnf install -y vagrant-digitalocean +``` + +After installing the plugin, the next task is to create the local Vagrantfile. An example is provided below. 
+
+```
+$ mkdir digitalocean
+$ cd digitalocean
+$ cat Vagrantfile
+Vagrant.configure('2') do |config|
+  config.vm.hostname = 'dropletname.kushaldas.in'
+  # Alternatively, use provider.name below to set the Droplet name. config.vm.hostname takes precedence.
+
+  config.vm.provider :digital_ocean do |provider, override|
+    override.ssh.private_key_path = '/home/kdas/.ssh/id_rsa'
+    override.vm.box = 'digital_ocean'
+    override.vm.box_url = "https://github.com/devopsgroup-io/vagrant-digitalocean/raw/master/box/digital_ocean.box"
+
+    provider.token = 'Your AUTH Token'
+    provider.image = 'fedora-23-x64'
+    provider.region = 'nyc2'
+    provider.size = '512mb'
+    provider.ssh_key_name = 'Kushal'
+  end
+end
+```
+
+### Notes about Vagrant DigitalOcean plugin
+
+A few points to remember about the SSH key naming scheme: if you already have the key uploaded to DigitalOcean, make sure that the provider.ssh_key_name matches the name of the existing key in their server. The provider.image details are found at the [DigitalOcean documentation][5]. The AUTH token is created on the control panel within the Apps & API section.
+
+You can then get the instance up with the following command.
+
+```
+$ vagrant up --provider=digital_ocean
+```
+
+This command will fire up the instance in the DigitalOcean server. You can then SSH into the box by using the vagrant ssh command. Run vagrant destroy to destroy the instance.
+ + + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/using-vagrant-digitalocean-cloud/ + +作者:[Kushal Das][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://kushal.id.fedoraproject.org/ +[1]: https://www.vagrantup.com/ +[2]: https://fedoramagazine.org/running-vagrant-fedora-22/ +[3]: https://www.digitalocean.com/ +[4]: https://communityblog.fedoraproject.org/fedora-cloud-fad-2016/ +[5]: https://developers.digitalocean.com/documentation/v2/#create-a-new-droplet From 9dabbb29de61bae3271d4df2900333a9b57df6c1 Mon Sep 17 00:00:00 2001 From: zpl Date: Tue, 12 Jul 2016 11:15:30 +0800 Subject: [PATCH 119/471] [translated] 20160624 Industrial SBC builds on Raspberry Pi Compute Module.md --- ...C builds on Raspberry Pi Compute Module.md | 72 ------------------- ...C builds on Raspberry Pi Compute Module.md | 71 ++++++++++++++++++ 2 files changed, 71 insertions(+), 72 deletions(-) delete mode 100644 sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md create mode 100644 translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md diff --git a/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md deleted file mode 100644 index 9bc9a894ca..0000000000 --- a/sources/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md +++ /dev/null @@ -1,72 +0,0 @@ -zpl1025 -Industrial SBC builds on Raspberry Pi Compute Module -===================================================== - -![](http://hackerboards.com/files/embeddedmicro_mypi-thm.jpg) - -On Kickstarter, a “MyPi” industrial SBC using the RPi Compute Module offers a mini-PCIe slot, serial port, wide-range power, and modular expansion. 
- -You might wonder why in 2016 someone would introduce a sandwich-style single board computer built around the aging, ARM11 based COM version of the original Raspberry Pi, the [Raspberry Pi Compute Module][1]. First off, there are still plenty of industrial applications that don’t need much CPU horsepower, and second, the Compute Module is still the only COM based on Raspberry Pi hardware, although the cheaper, somewhat COM-like [Raspberry Pi Zero][2], which has the same 700MHz processor, comes close. - -![](http://hackerboards.com/files/embeddedmicro_mypi-sm.jpg) - -![](http://hackerboards.com/files/embeddedmicro_mypi_encl-sm.jpg) - ->MyPi with COM and I/O add-ons (left), and in its optional industrial enclosure - -In addition, Embedded Micro Technology says its SBC is also designed to support a swap-in for a promised Raspberry Pi Compute Module upgrade built around the Raspberry Pi 3’s quad-core, Cortex-A53 Broadcom BCM2837 SoC. Since this product could arrive any week now, it’s unclear how that all sorts out for Kickstarter backers. Still, it’s nice to know you’re somewhat futureproof, even if you have to pay for the upgrade. - -The MyPi is not the only new commercial embedded device based on the Raspberry Pi Compute Module. Pigeon Computers launched a [Pigeon RB100][3] industrial automation controller based on the COM in May. Most such devices arrived in 2014, however, shortly after the COM arrived, including the [Techbase Modberry][4]. - -The MyPi is over a third of the way toward its $21,696 funding goal with 30 days to go. An early bird package starts at $119, with shipments in September. Other kit options include a $187 version that includes the $30 Raspberry Pi Compute Module, as well as various cables. Kits are also available with add-on boards and an industrial enclosure. 
- -![](http://hackerboards.com/files/embeddedmicro_mypi_baseboard-sm.jpg) - -![](http://hackerboards.com/files/embeddedmicro_mypi_detail-sm.jpg) - ->MyPi baseboard without COM or add-ons (left) and its port details - -The Raspberry Pi Compute Module starts the MyPi off with the Broadcom BCM2835 SoC, 512MB RAM, and 4GB eMMC flash. The MyPi adds to this with a microSD card slot, an HDMI port, two USB 2.0 ports, a 10/100 Ethernet port, and a similarly coastline RS232 port (via USB). - -![](http://hackerboards.com/files/embeddedmicro_mypi_angle1-sm.jpg) - -![](http://hackerboards.com/files/embeddedmicro_mypi_angle2.jpg) - ->Two views of the MyPi board with RPi and mini-PCIe modules installed - -The MyPi is further equipped with a mini-PCIe socket, which is said to be “USB only and intended for use of modems available in the mPCIe form factor.” A SIM card slot is also available. Dual standard Raspberry Pi camera connectors are onboard along with an audio out interface, a battery-backed RTC, and LEDs. The SBC has a wide-range, 9-23V DC input. - -The MyPi is designed for Raspberry Pi hackers who have stacked so many HAT add-on boards, they can no longer work with them effectively or stuff them inside and industrial enclosure, says Embedded Micro. The MyPi supports HATs, but also offers the company’s own “ASIO” (Application Specific I/O) add-on modules, which route their I/O back to the carrier board, which, in turn, connects it to the 8-pin, green, Phoenix-style, industrial I/O connector (labeled “ASIO Out”) on the board’s edge, as illustrated in the diagram below. 
- -![](http://hackerboards.com/files/embeddedmicro_mypi_io-sm.jpg) ->MyPi’s modular expansion interface - -As the Kickstarter page explains it: “Rather than have a plug in HAT card with IO signal connectors poking out on all sides, instead we take these same IO signals back down a second output connector which is directly connected to the green industrial connector.” Additionally, “by simply using extended length interface pins on the card (raising it up) you can expand the IO set further — all without using any cable assemblies!” says Embedded Micro. - -![](http://hackerboards.com/files/embeddedmicro_mypi_with_iocards-sm.jpg) ->MyPi and its optional I/O add-on cards - -The company offers a line of hardened ASIO plug-in cards for the MyPi, as shown above. These initially include CAN-Bus, 4-20mA transducer signals, RS485, Narrow Band RF, and more. - -### Further information - -The MyPi is available on Kickstarter starting at a 79-Pound ($119) early bird package (without the Raspberry Pi Compute Module) through July 23, with shipments due in September. More information may be found on the [MyPi Kickstarter page][5] and the [Embedded Micro Technology website][6]. 
- - --------------------------------------------------------------------------------- - -via: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ - -作者:[Eric Brown][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ -[1]: http://hackerboards.com/raspberry-pi-morphs-into-30-dollar-com/ -[2]: http://hackerboards.com/pi-zero-tweak-adds-camera-connector-keeps-5-price/ -[3]: http://hackerboards.com/automation-controller-runs-linux-on-raspberry-pi-com/ -[4]: http://hackerboards.com/automation-controller-taps-raspberry-pi-compute-module/ -[5]: https://www.kickstarter.com/projects/410598173/mypi-industrial-strength-raspberry-pi-for-iot-proj -[6]: http://www.embeddedpi.com/ diff --git a/translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md new file mode 100644 index 0000000000..361dc746ee --- /dev/null +++ b/translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md @@ -0,0 +1,71 @@ +用树莓派计算模块搭建的工业单板计算机 +===================================================== + +![](http://hackerboards.com/files/embeddedmicro_mypi-thm.jpg) + +在 Kickstarter 众筹网站上,一个叫“MyPi”的项目用 RPi 计算模块制作了一款 SBC(Single Board Computer 单板计算机),提供 mini-PCIe 插槽,串口,宽范围输入电源,以及模块扩展等功能。 + +你也许觉得奇怪,都 2016 年了,为什么还会有人发布这样一款长得有点像三明治,用过时的基于 ARM11 的原版树莓派 COM(Compuer on Module,模块化计算机)版本,[树莓派计算模块][1],构建的单板计算机。首先,目前仍然有大量工业应用不需要太多 CPU 处理能力,第二,树莓派计算模块仍是目前仅有的基于树莓派硬件的 COM,虽然更便宜,有点像 COM 并采用 700MHz 处理器的 [零号树莓派][2],快要发布了。 + +![](http://hackerboards.com/files/embeddedmicro_mypi-sm.jpg) + +![](http://hackerboards.com/files/embeddedmicro_mypi_encl-sm.jpg) + +>安装了 COM 和 I/O 组件的 MyPi(左),装入了可选的工业外壳中 + +另外,Embedded Micro Technology 还表示它的 SBC 还设计成支持和承诺的树莓派计算模块升级版互换,采用树莓派 3 的四核,Cortex-A53 博通 BCM2837 
SoC。因为这个产品很快就会到货,不确定他们怎么能及时为 Kickstarter 赞助者处理好这一切。不过,以后能支持也挺不错,就算要为这个升级付费也可以接受。
+
+MyPi 并不是唯一一款新的基于树莓派计算模块的商业嵌入式设备。Pigeon Computers 在五月份启动了 [Pigeon RB100][3] 的项目,是一个基于 COM 的工业自动化控制器。不过,包括 [Techbase Modberry][4] 在内的这一类设备大都出现在 2014 年 COM 发布之后的一小段时间内。
+
+MyPi 的目标是 30 天内筹集 $21,696,目前已经实现了三分之一。早期参与包的价格 $119 起,九月份发货。其他选项有 $187 版本,里面包含了价值 $30 的树莓派计算模块,以及各种线缆。套件里还有各种插件板以及工业外壳可选。
+
+![](http://hackerboards.com/files/embeddedmicro_mypi_baseboard-sm.jpg)
+
+![](http://hackerboards.com/files/embeddedmicro_mypi_detail-sm.jpg)
+
+>不带 COM 和插件板的 MyPi 主板(左)以及它的接口定义
+
+树莓派计算模块能给 MyPi 带来博通 BCM2835 SoC,512MB 内存,以及 4GB eMMC 存储空间。MyPi 主板扩展了一个 microSD 卡槽,一个 HDMI 接口,两个 USB 2.0 接口,一个 10/100 以太网口,还有一个类似网口的 RS232 端口(通过 USB)。
+
+![](http://hackerboards.com/files/embeddedmicro_mypi_angle1-sm.jpg)
+
+![](http://hackerboards.com/files/embeddedmicro_mypi_angle2.jpg)
+
+>插上树莓派计算模块和 mini-PCIe 模块的 MyPi 的两个视角
+
+MyPi 还将配备一个 mini-PCIe 插槽,据说“只支持 USB,以及只适用 mPCIe 形式的调制解调器”。还带有一个 SIM 卡插槽。板上还有双标准的树莓派摄像头接口,一个音频输出接口,自带备用电池的 RTC,LED 灯。还支持宽范围的 9-23V 直流输入。
+
+Embedded Micro 表示,MyPi 是为那些堆叠了太多 HAT 扩展板、以致无法有效工作或无法装入工业外壳的树莓派爱好者设计的。MyPi 支持 HAT,另外还提供了公司自己定义的“ASIO”(特定应用接口)插件模块,它会将自己的 I/O 扩展到主板上,主板再将它们连到主板边上的 8 脚绿色凤凰式(Phoenix-style)工业 I/O 连接器(标记了“ASIO Out”)上,在下面图片里有描述。
+
+![](http://hackerboards.com/files/embeddedmicro_mypi_io-sm.jpg)
+>MyPi 的模块扩展接口
+
+就像 Kickstarter 页面里描述的:“比起在板边插满带 IO 信号接头的 HAT 板,我们有意将同样的 IO 信号接到另一个接头,它直接接到绿色的工业接头上。”另外,“通过简单地延长卡上的插脚长度(抬高),你将来可以直接扩展 IO 集 - 这些都不需要任何排线!”Embedded Micro 表示。
+
+![](http://hackerboards.com/files/embeddedmicro_mypi_with_iocards-sm.jpg)
+>MyPi 和它的可选 I/O 插件板卡
+
+公司为 MyPi 提供了一系列加固的 ASIO 插件板,如上图所示。这些一开始会包括 CAN 总线,4-20mA 传感器信号,RS485,窄带 RF,等等。
+
+### 更多信息
+
+MyPi 在 Kickstarter 上提供了 7 月 23 日到期的 79 英镑($119)早期参与包(不包括树莓派计算模块),预计九月份发货。更多信息请查看 [Kickstarter 上 MyPi 的页面][5] 以及 [Embedded Micro Technology 官网][6]。


--------------------------------------------------------------------------------

via: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/

作者:[Eric
Brown][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ +[1]: http://hackerboards.com/raspberry-pi-morphs-into-30-dollar-com/ +[2]: http://hackerboards.com/pi-zero-tweak-adds-camera-connector-keeps-5-price/ +[3]: http://hackerboards.com/automation-controller-runs-linux-on-raspberry-pi-com/ +[4]: http://hackerboards.com/automation-controller-taps-raspberry-pi-compute-module/ +[5]: https://www.kickstarter.com/projects/410598173/mypi-industrial-strength-raspberry-pi-for-iot-proj +[6]: http://www.embeddedpi.com/ From d20392fb56786226dfbc9d77ad52ab3bf19c4865 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 12 Jul 2016 14:20:28 +0800 Subject: [PATCH 120/471] =?UTF-8?q?20160712-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160309 Let’s Build A Web Server. Part 1.md | 150 ++++++++++++++++++ 1 file changed, 150 insertions(+) create mode 100644 translated/tech/20160309 Let’s Build A Web Server. Part 1.md diff --git a/translated/tech/20160309 Let’s Build A Web Server. Part 1.md b/translated/tech/20160309 Let’s Build A Web Server. Part 1.md new file mode 100644 index 0000000000..4c8048786d --- /dev/null +++ b/translated/tech/20160309 Let’s Build A Web Server. Part 1.md @@ -0,0 +1,150 @@ +Let’s Build A Web Server. Part 1. +===================================== + +Out for a walk one day, a woman came across a construction site and saw three men working. She asked the first man, “What are you doing?” Annoyed by the question, the first man barked, “Can’t you see that I’m laying bricks?” Not satisfied with the answer, she asked the second man what he was doing. 
The second man answered, “I’m building a brick wall.” Then, turning his attention to the first man, he said, “Hey, you just passed the end of the wall. You need to take off that last brick.” Again not satisfied with the answer, she asked the third man what he was doing. And the man said to her while looking up in the sky, “I am building the biggest cathedral this world has ever known.” While he was standing there and looking up in the sky the other two men started arguing about the errant brick. The man turned to the first two men and said, “Hey guys, don’t worry about that brick. It’s an inside wall, it will get plastered over and no one will ever see that brick. Just move on to another layer.”1 + +The moral of the story is that when you know the whole system and understand how different pieces fit together (bricks, walls, cathedral), you can identify and fix problems faster (errant brick). + +What does it have to do with creating your own Web server from scratch? + +I believe to become a better developer you MUST get a better understanding of the underlying software systems you use on a daily basis and that includes programming languages, compilers and interpreters, databases and operating systems, web servers and web frameworks. And, to get a better and deeper understanding of those systems you MUST re-build them from scratch, brick by brick, wall by wall. + +Confucius put it this way: + +>“I hear and I forget.” + +![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_hear.png) + +>“I see and I remember.” + +![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_see.png) + +>“I do and I understand.” + +![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_do.png) + +I hope at this point you’re convinced that it’s a good idea to start re-building different software systems to learn how they work. + +In this three-part series I will show you how to build your own basic Web server. Let’s get started. + +First things first, what is a Web server? 
+ +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_response.png) + +In a nutshell it’s a networking server that sits on a physical server (oops, a server on a server) and waits for a client to send a request. When it receives a request, it generates a response and sends it back to the client. The communication between a client and a server happens using HTTP protocol. A client can be your browser or any other software that speaks HTTP. + +What would a very simple implementation of a Web server look like? Here is my take on it. The example is in Python but even if you don’t know Python (it’s a very easy language to pick up, try it!) you still should be able to understand concepts from the code and explanations below: + +``` +import socket + +HOST, PORT = '', 8888 + +listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) +listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) +listen_socket.bind((HOST, PORT)) +listen_socket.listen(1) +print 'Serving HTTP on port %s ...' % PORT +while True: + client_connection, client_address = listen_socket.accept() + request = client_connection.recv(1024) + print request + + http_response = """\ +HTTP/1.1 200 OK + +Hello, World! +""" + client_connection.sendall(http_response) + client_connection.close() +``` + +Save the above code as webserver1.py or download it directly from GitHub and run it on the command line like this + +``` +$ python webserver1.py +Serving HTTP on port 8888 … +``` + +Now type in the following URL in your Web browser’s address bar http://localhost:8888/hello, hit Enter, and see magic in action. You should see “Hello, World!” displayed in your browser like this: + +![](https://ruslanspivak.com/lsbaws-part1/browser_hello_world.png) + +Just do it, seriously. I will wait for you while you’re testing it. + +Done? Great. Now let’s discuss how it all actually works. + +First let’s start with the Web address you’ve entered. 
It’s called a URL and here is its basic structure:
+
+![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_URL_Web_address.png)
+
+This is how you tell your browser the address of the Web server it needs to find and connect to and the page (path) on the server to fetch for you. Before your browser can send an HTTP request though, it first needs to establish a TCP connection with the Web server. Then it sends an HTTP request over the TCP connection to the server and waits for the server to send an HTTP response back. And when your browser receives the response it displays it, in this case it displays “Hello, World!”
+
+Let’s explore in more detail how the client and the server establish a TCP connection before sending HTTP requests and responses. To do that they both use so-called sockets. Instead of using a browser directly you are going to simulate your browser manually by using telnet on the command line.
+
+On the same computer you’re running the Web server fire up a telnet session on the command line specifying a host to connect to localhost and the port to connect to 8888 and then press Enter:
+
+```
+$ telnet localhost 8888
+Trying 127.0.0.1 …
+Connected to localhost.
+```
+
+At this point you’ve established a TCP connection with the server running on your local host and are ready to send and receive HTTP messages. In the picture below you can see a standard procedure a server has to go through to be able to accept new TCP connections.
+
+![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_socket.png)
+
+In the same telnet session type GET /hello HTTP/1.1 and hit Enter:
+
+```
+$ telnet localhost 8888
+Trying 127.0.0.1 …
+Connected to localhost.
+GET /hello HTTP/1.1
+
+HTTP/1.1 200 OK
+Hello, World!
+```
+
+You’ve just manually simulated your browser! You sent an HTTP request and got an HTTP response back. 
This is the basic structure of an HTTP request: + +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_anatomy.png) + +The HTTP request consists of the line indicating the HTTP method (GET, because we are asking our server to return us something), the path /hello that indicates a “page” on the server we want and the protocol version. + +For simplicity’s sake our Web server at this point completely ignores the above request line. You could just as well type in any garbage instead of “GET /hello HTTP/1.1” and you would still get back a “Hello, World!” response. + +Once you’ve typed the request line and hit Enter the client sends the request to the server, the server reads the request line, prints it and returns the proper HTTP response. + +Here is the HTTP response that the server sends back to your client (telnet in this case): + +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_response_anatomy.png) + +Let’s dissect it. The response consists of a status line HTTP/1.1 200 OK, followed by a required empty line, and then the HTTP response body. + +The response status line HTTP/1.1 200 OK consists of the HTTP Version, the HTTP status code and the HTTP status code reason phrase OK. When the browser gets the response, it displays the body of the response and that’s why you see “Hello, World!” in your browser. + +And that’s the basic model of how a Web server works. To sum it up: The Web server creates a listening socket and starts accepting new connections in a loop. The client initiates a TCP connection and, after successfully establishing it, the client sends an HTTP request to the server and the server responds with an HTTP response that gets displayed to the user. To establish a TCP connection both clients and servers use sockets. + +Now you have a very basic working Web server that you can test with your browser or some other HTTP client. 
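That “other HTTP client” doesn’t have to be a browser or telnet; the whole conversation fits in a few lines of Python’s socket module. Here is a sketch that plays the client role programmatically. It bundles a throwaway one-request server thread only so the snippet runs standalone; in practice you would point the client at webserver1.py listening on port 8888 instead.

```python
import socket
import threading

def one_shot_server(listen_socket):
    # Stand-in for webserver1.py: answer a single request, then quit
    conn, _ = listen_socket.accept()
    conn.recv(1024)
    conn.sendall(b'HTTP/1.1 200 OK\r\n\r\nHello, World!\n')
    conn.close()

listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_socket.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
listen_socket.listen(1)
port = listen_socket.getsockname()[1]
threading.Thread(target=one_shot_server, args=(listen_socket,)).start()

# The client side: open a TCP connection, write an HTTP request as plain
# text, and read back the HTTP response, exactly what telnet let us do
client = socket.create_connection(('127.0.0.1', port))
client.sendall(b'GET /hello HTTP/1.1\r\nHost: 127.0.0.1\r\n\r\n')
response = client.recv(1024).decode()
client.close()
print(response)
```

To talk to the real server, run webserver1.py in another terminal, drop the throwaway thread, and connect to `('localhost', 8888)`.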
As you’ve seen and hopefully tried, you can also be a human HTTP client too, by using telnet and typing HTTP requests manually. + +Here’s a question for you: “How do you run a Django application, Flask application, and Pyramid application under your freshly minted Web server without making a single change to the server to accommodate all those different Web frameworks?” + +I will show you exactly how in Part 2 of the series. Stay tuned. + +BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on topics I just covered. Subscribe to the mailing list to get the latest updates about the book and the release date. + +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbaws-part1/ + +作者:[Ruslan][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://linkedin.com/in/ruslanspivak/ + + + From 2359828cf75142ed6ff3dd37282e21db733f5d40 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 12 Jul 2016 14:28:24 +0800 Subject: [PATCH 121/471] =?UTF-8?q?20160712-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160406 Let’s Build A Web Server. Part 2.md | 427 ++++++++++++++++++ 1 file changed, 427 insertions(+) create mode 100644 translated/tech/20160406 Let’s Build A Web Server. Part 2.md diff --git a/translated/tech/20160406 Let’s Build A Web Server. Part 2.md b/translated/tech/20160406 Let’s Build A Web Server. Part 2.md new file mode 100644 index 0000000000..482352ac9a --- /dev/null +++ b/translated/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -0,0 +1,427 @@ +Let’s Build A Web Server. Part 2. 
+=================================== + +Remember, in Part 1 I asked you a question: “How do you run a Django application, Flask application, and Pyramid application under your freshly minted Web server without making a single change to the server to accommodate all those different Web frameworks?” Read on to find out the answer. + +In the past, your choice of a Python Web framework would limit your choice of usable Web servers, and vice versa. If the framework and the server were designed to work together, then you were okay: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_before_wsgi.png) + +But you could have been faced (and maybe you were) with the following problem when trying to combine a server and a framework that weren’t designed to work together: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_after_wsgi.png) + +Basically you had to use what worked together and not what you might have wanted to use. + +So, how do you then make sure that you can run your Web server with multiple Web frameworks without making code changes either to the Web server or to the Web frameworks? And the answer to that problem became the Python Web Server Gateway Interface (or WSGI for short, pronounced “wizgy”). + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_idea.png) + +WSGI allowed developers to separate choice of a Web framework from choice of a Web server. Now you can actually mix and match Web servers and Web frameworks and choose a pairing that suits your needs. You can run Django, Flask, or Pyramid, for example, with Gunicorn or Nginx/uWSGI or Waitress. Real mix and match, thanks to the WSGI support in both servers and frameworks: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interop.png) + +So, WSGI is the answer to the question I asked you in Part 1 and repeated at the beginning of this article. 
Your Web server must implement the server portion of a WSGI interface and all modern Python Web Frameworks already implement the framework side of the WSGI interface, which allows you to use them with your Web server without ever modifying your server’s code to accommodate a particular Web framework. + +Now you know that WSGI support by Web servers and Web frameworks allows you to choose a pairing that suits you, but it is also beneficial to server and framework developers because they can focus on their preferred area of specialization and not step on each other’s toes. Other languages have similar interfaces too: Java, for example, has Servlet API and Ruby has Rack. + +It’s all good, but I bet you are saying: “Show me the code!” Okay, take a look at this pretty minimalistic WSGI server implementation: + +``` +# Tested with Python 2.7.9, Linux & Mac OS X +import socket +import StringIO +import sys + + +class WSGIServer(object): + + address_family = socket.AF_INET + socket_type = socket.SOCK_STREAM + request_queue_size = 1 + + def __init__(self, server_address): + # Create a listening socket + self.listen_socket = listen_socket = socket.socket( + self.address_family, + self.socket_type + ) + # Allow to reuse the same address + listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + # Bind + listen_socket.bind(server_address) + # Activate + listen_socket.listen(self.request_queue_size) + # Get server host name and port + host, port = self.listen_socket.getsockname()[:2] + self.server_name = socket.getfqdn(host) + self.server_port = port + # Return headers set by Web framework/Web application + self.headers_set = [] + + def set_app(self, application): + self.application = application + + def serve_forever(self): + listen_socket = self.listen_socket + while True: + # New client connection + self.client_connection, client_address = listen_socket.accept() + # Handle one request and close the client connection. 
Then + # loop over to wait for another client connection + self.handle_one_request() + + def handle_one_request(self): + self.request_data = request_data = self.client_connection.recv(1024) + # Print formatted request data a la 'curl -v' + print(''.join( + '< {line}\n'.format(line=line) + for line in request_data.splitlines() + )) + + self.parse_request(request_data) + + # Construct environment dictionary using request data + env = self.get_environ() + + # It's time to call our application callable and get + # back a result that will become HTTP response body + result = self.application(env, self.start_response) + + # Construct a response and send it back to the client + self.finish_response(result) + + def parse_request(self, text): + request_line = text.splitlines()[0] + request_line = request_line.rstrip('\r\n') + # Break down the request line into components + (self.request_method, # GET + self.path, # /hello + self.request_version # HTTP/1.1 + ) = request_line.split() + + def get_environ(self): + env = {} + # The following code snippet does not follow PEP8 conventions + # but it's formatted the way it is for demonstration purposes + # to emphasize the required variables and their values + # + # Required WSGI variables + env['wsgi.version'] = (1, 0) + env['wsgi.url_scheme'] = 'http' + env['wsgi.input'] = StringIO.StringIO(self.request_data) + env['wsgi.errors'] = sys.stderr + env['wsgi.multithread'] = False + env['wsgi.multiprocess'] = False + env['wsgi.run_once'] = False + # Required CGI variables + env['REQUEST_METHOD'] = self.request_method # GET + env['PATH_INFO'] = self.path # /hello + env['SERVER_NAME'] = self.server_name # localhost + env['SERVER_PORT'] = str(self.server_port) # 8888 + return env + + def start_response(self, status, response_headers, exc_info=None): + # Add necessary server headers + server_headers = [ + ('Date', 'Tue, 31 Mar 2015 12:54:48 GMT'), + ('Server', 'WSGIServer 0.2'), + ] + self.headers_set = [status, response_headers + 
server_headers] + # To adhere to WSGI specification the start_response must return + # a 'write' callable. We simplicity's sake we'll ignore that detail + # for now. + # return self.finish_response + + def finish_response(self, result): + try: + status, response_headers = self.headers_set + response = 'HTTP/1.1 {status}\r\n'.format(status=status) + for header in response_headers: + response += '{0}: {1}\r\n'.format(*header) + response += '\r\n' + for data in result: + response += data + # Print formatted response data a la 'curl -v' + print(''.join( + '> {line}\n'.format(line=line) + for line in response.splitlines() + )) + self.client_connection.sendall(response) + finally: + self.client_connection.close() + + +SERVER_ADDRESS = (HOST, PORT) = '', 8888 + + +def make_server(server_address, application): + server = WSGIServer(server_address) + server.set_app(application) + return server + + +if __name__ == '__main__': + if len(sys.argv) < 2: + sys.exit('Provide a WSGI application object as module:callable') + app_path = sys.argv[1] + module, application = app_path.split(':') + module = __import__(module) + application = getattr(module, application) + httpd = make_server(SERVER_ADDRESS, application) + print('WSGIServer: Serving HTTP on port {port} ...\n'.format(port=PORT)) + httpd.serve_forever() +``` + +It’s definitely bigger than the server code in Part 1, but it’s also small enough (just under 150 lines) for you to understand without getting bogged down in details. The above server also does more - it can run your basic Web application written with your beloved Web framework, be it Pyramid, Flask, Django, or some other Python WSGI framework. + +Don’t believe me? Try it and see for yourself. Save the above code as webserver2.py or download it directly from GitHub. If you try to run it without any parameters it’s going to complain and exit. 
+ +``` +$ python webserver2.py +Provide a WSGI application object as module:callable +``` + +It really wants to serve your Web application and that’s where the fun begins. To run the server the only thing you need installed is Python. But to run applications written with Pyramid, Flask, and Django you need to install those frameworks first. Let’s install all three of them. My preferred method is by using virtualenv. Just follow the steps below to create and activate a virtual environment and then install all three Web frameworks. + +``` +$ [sudo] pip install virtualenv +$ mkdir ~/envs +$ virtualenv ~/envs/lsbaws/ +$ cd ~/envs/lsbaws/ +$ ls +bin include lib +$ source bin/activate +(lsbaws) $ pip install pyramid +(lsbaws) $ pip install flask +(lsbaws) $ pip install django +``` + +At this point you need to create a Web application. Let’s start with Pyramid first. Save the following code as pyramidapp.py to the same directory where you saved webserver2.py or download the file directly from GitHub: + +``` +from pyramid.config import Configurator +from pyramid.response import Response + + +def hello_world(request): + return Response( + 'Hello world from Pyramid!\n', + content_type='text/plain', + ) + +config = Configurator() +config.add_route('hello', '/hello') +config.add_view(hello_world, route_name='hello') +app = config.make_wsgi_app() +``` + +Now you’re ready to serve your Pyramid application with your very own Web server: + +``` +(lsbaws) $ python webserver2.py pyramidapp:app +WSGIServer: Serving HTTP on port 8888 ... +``` + +You just told your server to load the ‘app’ callable from the python module ‘pyramidapp’ Your server is now ready to take requests and forward them to your Pyramid application. The application only handles one route now: the /hello route. 
Type http://localhost:8888/hello address into your browser, press Enter, and observe the result: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_pyramid.png) + +You can also test the server on the command line using the ‘curl’ utility: + +``` +$ curl -v http://localhost:8888/hello +... +``` + +Check what the server and curl prints to standard output. + +Now onto Flask. Let’s follow the same steps. + +``` +from flask import Flask +from flask import Response +flask_app = Flask('flaskapp') + + +@flask_app.route('/hello') +def hello_world(): + return Response( + 'Hello world from Flask!\n', + mimetype='text/plain' + ) + +app = flask_app.wsgi_app +``` + +Save the above code as flaskapp.py or download it from GitHub and run the server as: + +``` +(lsbaws) $ python webserver2.py flaskapp:app +WSGIServer: Serving HTTP on port 8888 ... +``` + +Now type in the http://localhost:8888/hello into your browser and press Enter: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_flask.png) + +Again, try ‘curl’ and see for yourself that the server returns a message generated by the Flask application: + +``` +$ curl -v http://localhost:8888/hello +... +``` + +Can the server also handle a Django application? Try it out! It’s a little bit more involved, though, and I would recommend cloning the whole repo and use djangoapp.py, which is part of the GitHub repository. Here is the source code which basically adds the Django ‘helloworld’ project (pre-created using Django’s django-admin.py startproject command) to the current Python path and then imports the project’s WSGI application. + +``` +import sys +sys.path.insert(0, './helloworld') +from helloworld import wsgi + + +app = wsgi.application +``` + +Save the above code as djangoapp.py and run the Django application with your Web server: + +``` +(lsbaws) $ python webserver2.py djangoapp:app +WSGIServer: Serving HTTP on port 8888 ... 
+``` + +Type in the following address and press Enter: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_django.png) + +And as you’ve already done a couple of times before, you can test it on the command line, too, and confirm that it’s the Django application that handles your requests this time around: + +``` +$ curl -v http://localhost:8888/hello +... +``` + +Did you try it? Did you make sure the server works with those three frameworks? If not, then please do so. Reading is important, but this series is about rebuilding and that means you need to get your hands dirty. Go and try it. I will wait for you, don’t worry. No seriously, you must try it and, better yet, retype everything yourself and make sure that it works as expected. + +Okay, you’ve experienced the power of WSGI: it allows you to mix and match your Web servers and Web frameworks. WSGI provides a minimal interface between Python Web servers and Python Web Frameworks. It’s very simple and it’s easy to implement on both the server and the framework side. The following code snippet shows the server and the framework side of the interface: + +``` +def run_application(application): + """Server code.""" + # This is where an application/framework stores + # an HTTP status and HTTP response headers for the server + # to transmit to the client + headers_set = [] + # Environment dictionary with WSGI/CGI variables + environ = {} + + def start_response(status, response_headers, exc_info=None): + headers_set[:] = [status, response_headers] + + # Server invokes the ‘application' callable and gets back the + # response body + result = application(environ, start_response) + # Server builds an HTTP response and transmits it to the client + … + +def app(environ, start_response): + """A barebones WSGI app.""" + start_response('200 OK', [('Content-Type', 'text/plain')]) + return ['Hello world!'] + +run_application(app) +``` + +Here is how it works: + +1. 
The framework provides an ‘application’ callable (The WSGI specification doesn’t prescribe how that should be implemented) +2. The server invokes the ‘application’ callable for each request it receives from an HTTP client. It passes a dictionary ‘environ’ containing WSGI/CGI variables and a ‘start_response’ callable as arguments to the ‘application’ callable. +3. The framework/application generates an HTTP status and HTTP response headers and passes them to the ‘start_response’ callable for the server to store them. The framework/application also returns a response body. +4. The server combines the status, the response headers, and the response body into an HTTP response and transmits it to the client (This step is not part of the specification but it’s the next logical step in the flow and I added it for clarity) + +And here is a visual representation of the interface: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interface.png) + +So far, you’ve seen the Pyramid, Flask, and Django Web applications and you’ve seen the server code that implements the server side of the WSGI specification. You’ve even seen the barebones WSGI application code snippet that doesn’t use any framework. + +The thing is that when you write a Web application using one of those frameworks you work at a higher level and don’t work with WSGI directly, but I know you’re curious about the framework side of the WSGI interface, too because you’re reading this article. So, let’s create a minimalistic WSGI Web application/Web framework without using Pyramid, Flask, or Django and run it with your server: + +``` +def app(environ, start_response): + """A barebones WSGI application. 
+ + This is a starting point for your own Web framework :) + """ + status = '200 OK' + response_headers = [('Content-Type', 'text/plain')] + start_response(status, response_headers) + return ['Hello world from a simple WSGI application!\n'] +``` + +Again, save the above code in wsgiapp.py file or download it from GitHub directly and run the application under your Web server as: + +``` +(lsbaws) $ python webserver2.py wsgiapp:app +WSGIServer: Serving HTTP on port 8888 ... +``` + +Type in the following address and press Enter. This is the result you should see: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_simple_wsgi_app.png) + +You just wrote your very own minimalistic WSGI Web framework while learning about how to create a Web server! Outrageous. + +Now, let’s get back to what the server transmits to the client. Here is the HTTP response the server generates when you call your Pyramid application using an HTTP client: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response.png) + +The response has some familiar parts that you saw in Part 1 but it also has something new. It has, for example, four HTTP headers that you haven’t seen before: Content-Type, Content-Length, Date, and Server. Those are the headers that a response from a Web server generally should have. None of them are strictly required, though. The purpose of the headers is to transmit additional information about the HTTP request/response. + +Now that you know more about the WSGI interface, here is the same HTTP response with some more information about what parts produced it: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response_explanation.png) + +I haven’t said anything about the ‘environ’ dictionary yet, but basically it’s a Python dictionary that must contain certain WSGI and CGI variables prescribed by the WSGI specification. The server takes the values for the dictionary from the HTTP request after parsing the request. 
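A nice consequence of this design is worth pausing on: nothing in the interface requires a network. You can drive the barebones application by hand with a made-up ‘environ’ and your own ‘start_response’. The sketch below fills in only a handful of the variables the specification requires, just enough to exercise the app:

```python
import io
import sys

def app(environ, start_response):
    """The barebones WSGI application from wsgiapp.py."""
    status = '200 OK'
    response_headers = [('Content-Type', 'text/plain')]
    start_response(status, response_headers)
    return ['Hello world from a simple WSGI application!\n']

# A hand-assembled environ for "GET /hello": a small sample of the
# WSGI/CGI variables, not the full set the specification prescribes
environ = {
    'REQUEST_METHOD': 'GET',
    'PATH_INFO': '/hello',
    'SERVER_NAME': 'localhost',
    'SERVER_PORT': '8888',
    'wsgi.version': (1, 0),
    'wsgi.url_scheme': 'http',
    'wsgi.input': io.BytesIO(b''),
    'wsgi.errors': sys.stderr,
}

captured = {}
def start_response(status, response_headers, exc_info=None):
    # Plays the server's part: just record what the app hands back
    captured['status'] = status
    captured['headers'] = response_headers

body = app(environ, start_response)
print(captured['status'], body)
```

This is, incidentally, how WSGI applications are unit-tested without starting a server at all.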
This is what the contents of the dictionary look like: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_environ.png) + +A Web framework uses the information from that dictionary to decide which view to use based on the specified route, request method etc., where to read the request body from and where to write errors, if any. + +By now you’ve created your own WSGI Web server and you’ve made Web applications written with different Web frameworks. And, you’ve also created your barebones Web application/Web framework along the way. It’s been a heck of a journey. Let’s recap what your WSGI Web server has to do to serve requests aimed at a WSGI application: + +- First, the server starts and loads an ‘application’ callable provided by your Web framework/application +- Then, the server reads a request +- Then, the server parses it +- Then, it builds an ‘environ’ dictionary using the request data +- Then, it calls the ‘application’ callable with the ‘environ’ dictionary and a ‘start_response’ callable as parameters and gets back a response body. +- Then, the server constructs an HTTP response using the data returned by the call to the ‘application’ object and the status and response headers set by the ‘start_response’ callable. +- And finally, the server transmits the HTTP response back to the client + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_server_summary.png) + +That’s about all there is to it. You now have a working WSGI server that can serve basic Web applications written with WSGI compliant Web frameworks like Django, Flask, Pyramid, or your very own WSGI framework. The best part is that the server can be used with multiple Web frameworks without any changes to the server code base. Not bad at all. + +Before you go, here is another question for you to think about, “How do you make your server handle more than one request at a time?” + +Stay tuned and I will show you a way to do that in Part 3. Cheers! 
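As a preview of that question: one classic family of answers is to give every connection its own thread or process, and Python’s standard library lets you bolt threading onto an HTTP server with a mix-in class. The sketch below assumes Python 3 module names and is only an illustration of the idea, not necessarily the route Part 3 takes:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'Hello from a threaded server!\n')

    def log_message(self, format, *args):
        pass  # keep the demo quiet

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Each incoming request is handled in its own thread."""

server = ThreadedHTTPServer(('127.0.0.1', 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Several clients can now be answered concurrently
responses = [
    urllib.request.urlopen('http://127.0.0.1:%d/hello' % port).read()
    for _ in range(3)
]
server.shutdown()
print(responses)
```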
+ +BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on topics I just covered. Subscribe to the mailing list to get the latest updates about the book and the release date. + +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbaws-part2/ + +作者:[Ruslan][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/rspivak/ + + + + + + From ec68fcb53855146fdbb507d5ef3b25f551fd6f0c Mon Sep 17 00:00:00 2001 From: Chunyang Wen Date: Tue, 12 Jul 2016 14:30:58 +0800 Subject: [PATCH 122/471] Translated: How to Hide Linux Command Line History by Going Incognito (#4174) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * âFinish tranlating awk series part4 * Update Part 4 - How to Use Comparison Operators with Awk in Linux.md * translating: How to Hide Linux Command Line History by Going Incognito * finish translating How to Hide Linux Command Line History by Going Incognito * update translator --- ...Command Line History by Going Incognito.md | 125 ------------------ ...Command Line History by Going Incognito.md | 124 +++++++++++++++++ 2 files changed, 124 insertions(+), 125 deletions(-) delete mode 100644 sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md create mode 100644 translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md diff --git a/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md b/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md deleted file mode 100644 index 0d5ce6e6e1..0000000000 --- a/sources/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md +++ /dev/null @@ -1,125 +0,0 @@ 
-chunyang-wen translating -How to Hide Linux Command Line History by Going Incognito -================================================================ - -![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-featured.jpg) - -If you’re a Linux command line user, you’ll agree that there are times when you do not want certain commands you run to be recorded in the command line history. There could be many reasons for this. For example, you’re at a certain position in your company, and you have some privileges that you don’t want others to abuse. Or, there are some critical commands that you don’t want to run accidentally while you’re browsing the history list. - -But is there a way to control what goes into the history list and what doesn’t? Or, in other words, can we turn on a web browser-like incognito mode in the Linux command line? The answer is yes, and there are many ways to achieve this, depending on what exactly you want. In this article we will discuss some of the popular solutions available. - -Note: all the commands presented in this article have been tested on Ubuntu. - -### Different ways available - -The first two ways we’ll describe here have already been covered in [one of our previous articles][1]. If you are already aware of them, you can skip over these. However, if you aren’t aware, you’re advised to go through them carefully. - -#### 1. Insert space before command - -Yes, you read it correctly. Insert a space in the beginning of a command, and it will be ignored by the shell, meaning the command won’t be recorded in history. However, there’s a dependency – the said solution will only work if the HISTCONTROL environment variable is set to “ignorespace” or “ignoreboth,” which is by default in most cases. 
- -So, a command like the following: - -``` -[space]echo "this is a top secret" -``` - -Won’t appear in the history if you’ve already done this command: - -``` -export HISTCONTROL=ignorespace -``` - -The below screenshot is an example of this behavior. - -![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-bash-command-space.png) - -The fourth “echo” command was not recorded in the history as it was run with a space in the beginning. - -#### 2. Disable the entire history for the current session - -If you want to disable the entire history for a session, you can easily do that by setting the HISTSIZE environment variable to zero before you start with your command line work. To do that run the following command: - -``` -export HISTSIZE=0 -``` - -HISTSIZE is the number of lines (or commands) that can be stored in the history list for an ongoing bash session. By default, this variable has a set value – for example, 1000 in my case. - -So, the command mentioned above will set the environment variable’s value to zero, and consequently nothing will be stored in the history list until you close the terminal. Keep in mind that you’ll also not be able to see the previously run commands by pressing the up arrow key or running the history command. - -#### 3. Erase the entire history after you’re done - -This can be seen as an alternative to the solution mentioned in the previous section. The only difference is that in this case you run a command AFTER you’re done with all your work. The following is the command in question: - -``` -history -cw -``` - -As already mentioned, this will have the same effect as the HISTSIZE solution mentioned above. - -#### 4. Turn off history only for the work you do - -While the solutions (2 and 3) described above do the trick, they erase the entire history, something which might be undesired in many situations.
There might be cases in which you want to retain the history list up until the point you start your command line work. For situations like these you need to run the following command before starting with your work: - -``` -[space]set +o history -``` - -Note: [space] represents a blank space. - -The above command will disable the history temporarily, meaning whatever you do after running this command will not be recorded in history, although all the stuff executed prior to the above command will be there as it is in the history list. - -To re-enable the history, run the following command: - -``` -[Space]set -o history -``` - -This brings things back to normal again, meaning any command line work done after the above command will show up in the history. - -#### 5. Delete specific commands from history - -Now suppose the history list already contains some commands that you didn’t want to be recorded. What can be done in this case? It’s simple. You can go ahead and remove them. The following is how to accomplish this: - -``` -[space]history | grep "part of command you want to remove" -``` - -The above command will output a list of matching commands (that are there in the history list) with a number [num] preceding each of them. - -Once you’ve identified the command you want to remove, just run the following command to remove that particular entry from the history list: - -``` -history -d [num] -``` - -The following screenshot is an example of this. - -![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-delete-specific-commands.png) - -The second ‘echo’ command was removed successfully. - -Alternatively, you can just press the up arrow key to take a walk back through the history list, and once the command of your interest appears on the terminal, just press “Ctrl + U” to totally blank the line, effectively removing it from the list. 
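If you’d like to rehearse this delete flow without touching your real history, the following sketch seeds a throwaway history list with `history -s` (which appends an entry without executing it); the entries and the pattern are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: delete one entry from a seeded, throwaway history list.
HISTFILE=/dev/null
secret='this is a top secret'     # defined before recording starts
set -o history                    # scripts only record history when asked to
history -c                        # start from an empty list
history -s 'ls -l /var/log'
history -s "echo $secret"
history -s 'cd /tmp'

num=$(history | grep -F "$secret" | awk '{print $1; exit}')
history -d "$num"                 # drop just that entry

remaining=$(history)
echo "$remaining"
```

Afterwards the secret entry is gone while everything else (like `cd /tmp`) is still listed.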
- -### Conclusion - -There are multiple ways in which you can manipulate the Linux command line history to suit your needs. Keep in mind, however, that it’s usually not a good practice to hide or remove a command from history, although it’s also not wrong, per se, but you should be aware of what you’re doing and what effects it might have. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/linux-command-line-history-incognito/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier - -作者:[Himanshu Arora][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.maketecheasier.com/author/himanshu/ -[1]: https://www.maketecheasier.com/command-line-history-linux/ - - - - - diff --git a/translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md b/translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md new file mode 100644 index 0000000000..9ef701559b --- /dev/null +++ b/translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md @@ -0,0 +1,124 @@ +如何进入无痕模式进而隐藏 Linux 的命令行历史 +================================================================ + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-featured.jpg) + +如果你是 Linux 命令行的用户,你会同意有的时候你不希望某些命令记录在你的命令行历史中。其中原因可能很多。例如,你在公司处于某个职位,你有一些不希望被其它人滥用的特权。亦或者有些特别重要的命令,你不希望在你浏览历史列表时误执行。 + +然而,有方法可以控制哪些命令进入历史列表,哪些不进入吗?或者换句话说,我们在 Linux 终端中可以开启像浏览器一样的无痕模式吗?答案是肯定的,而且根据你想要的具体目标,有很多实现方法。在这篇文章中,我们将讨论一些行之有效的方法。 + +注意:文中出现的所有命令都在 Ubuntu 下测试过。 + +### 不同的可行方法 + +前面两种方法已经在之前[一篇文章][1]中描述了。如果你已经了解,这部分可以略过。然而,如果你不了解,建议仔细阅读。 + +#### 1. 
在命令前插入空格 + +是的,没看错。在命令前面插入空格,这条命令会被终端忽略,也就意味着它不会出现在历史记录中。但是这种方法有个前提,只有在你的环境变量 HISTCONTROL 设置为 "ignorespace" 或者 "ignoreboth" 才会起作用。在大多数情况下,这个是默认值。 + +所以,像下面的命令: + +``` +[space]echo "this is a top secret" +``` + +如果你之前已经执行过下面这条命令,它就不会出现在历史记录中: + +``` +export HISTCONTROL=ignorespace +``` + +下面的截图是这种方式的一个例子。 + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-bash-command-space.png) + +第四个 "echo" 命令因为前面有空格,它没有被记录到历史中。 + +#### 2. 禁用当前会话的所有历史记录 + +如果你想禁用某个会话的所有历史,你可以简单地在开始命令行工作前将环境变量 HISTSIZE 的值设置为 0。执行下面的命令即可: + +``` +export HISTSIZE=0 +``` + +HISTSIZE 表示对于 bash 会话其历史中可以保存命令的条数。默认情况下,它设置了一个非零值,例如在我的电脑上,它的值为 1000。 + +所以上面所提到的命令将其值设置为 0,结果就是直到你关闭终端,没有东西会存储在历史记录中。记住同样你也不能通过按向上的箭头按键来执行之前的命令,也不能运行 history 命令。 + +#### 3. 工作结束后清除整个历史 + +这可以看作是前一部分所提方案的另外一种实现。唯一的区别是在你完成所有工作之后执行这个命令。下面是刚讨论的命令: + +``` +history -cw +``` + +刚才已经提到,这个和 HISTSIZE 方法有相同效果。 + +#### 4. 只针对你的工作关闭历史记录 + +虽然前面描述的方法(2 和 3)可以实现目的,但它们会清除整个历史,这在很多情况下可能不是我们所期望的。有时候你可能想保存直到你开始命令行工作之间的历史记录。类似的需求需要在你开始工作前执行下述命令: + +``` +[space]set +o history +``` + +备注:[space] 表示空格。 + +上面的命令会临时禁用历史功能,这意味着在这命令之后你执行的所有操作都不会记录到历史中,然而这个命令之前的所有东西都会原样记录在历史列表中。 + +要重新开启历史功能,执行下面的命令: + +``` +[Space]set -o history +``` + +它将环境恢复原状,也就是你完成你的工作,执行上述命令之后的命令都会出现在历史中。 + +#### 5.
从历史记录中删除指定的命令 + +现在假设历史记录中有一些命令你不希望被记录。这种情况下我们怎么办?很简单。直接动手删除它们。通过下面的命令来删除: + +``` +[space]history | grep "part of command you want to remove" +``` + +上面的命令会输出历史记录中匹配的命令,每一条前面会有个数字。 + +一旦你找到你想删除的命令,执行下面的命令,从历史记录中删除那个指定的项: + +``` +history -d [num] +``` + +下面是这个例子的截图。 + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-delete-specific-commands.png) + +第二个 ‘echo’命令被成功的删除了。 + +同样的,你可以使用向上的箭头一直往回翻看历史记录。当你发现你感兴趣的命令出现在终端上时,按下 “Ctrl + U”清除整行,也会从历史记录中删除它。 + +### 总结 + +有多种不同的方法可以操作 Linux 命令行历史来满足你的需求。然而请记住,从历史中隐藏或者删除命令通常不是一个好习惯,尽管本质上这并没有错。但是你必须知道你在做什么,以及可能产生的后果。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-command-line-history-incognito/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier + +作者:[Himanshu Arora][a] +译者:[chunyang-wen](https://github.com/chunyang-wen) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.maketecheasier.com/author/himanshu/ +[1]: https://www.maketecheasier.com/command-line-history-linux/ + + + + + From bdd3fb33e5e8fde49e01a1b701dc21b2dd1e4315 Mon Sep 17 00:00:00 2001 From: KS Date: Tue, 12 Jul 2016 15:52:43 +0800 Subject: [PATCH 123/471] Update 20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md --- ... 
and deploy a Facebook Messenger bot with Python and Flask.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md index 1f3fbe1ed3..6ab74e3527 100644 --- a/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md +++ b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md @@ -1,3 +1,4 @@ +wyangsun translating How to build and deploy a Facebook Messenger bot with Python and Flask, a tutorial ========================================================================== From 9595b6a7edda06476c75a1bb0e2d2bd4c9e0a492 Mon Sep 17 00:00:00 2001 From: CHL Date: Tue, 12 Jul 2016 17:34:52 +0800 Subject: [PATCH 124/471] [Translating]72% Of The People I Follow On Twitter Are Men (#4176) Translating by Flowsnow! 72% Of The People I Follow On Twitter Are Men --- .../20160623 72% Of The People I Follow On Twitter Are Men.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md index 7aeab64d2f..71a23511e9 100644 --- a/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md +++ b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md @@ -1,3 +1,4 @@ +Translating by Flowsnow! 
72% Of The People I Follow On Twitter Are Men =============================================== From 9101a462726211d3be2529f590bbb673f9accbaa Mon Sep 17 00:00:00 2001 From: martin qi Date: Tue, 12 Jul 2016 22:41:03 +0800 Subject: [PATCH 125/471] translated --- ...gcreate, lvcreate and lvextend Commands.md | 208 ------------------ ...gcreate, lvcreate and lvextend Commands.md | 206 +++++++++++++++++ 2 files changed, 206 insertions(+), 208 deletions(-) delete mode 100644 sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md create mode 100644 translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md diff --git a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md b/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md deleted file mode 100644 index 390ad18e14..0000000000 --- a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md +++ /dev/null @@ -1,208 +0,0 @@ -martin - -Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands -============================================================================================ - -Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, your are highly encouraged to use the [LFCE series][2] as well. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png) ->LFCS: Manage LVM and Create LVM Partition – Part 11 - -One of the most important decisions while installing a Linux system is the amount of storage space to be allocated for system files, home directories, and others. 
If you make a mistake at that point, growing a partition that has run out of space can be burdensome and somewhat risky. - -**Logical Volumes Management** (also known as **LVM**), which have become a default for the installation of most (if not all) Linux distributions, have numerous advantages over traditional partitioning management. Perhaps the most distinguishing feature of LVM is that it allows logical divisions to be resized (reduced or increased) at will without much hassle. - -The structure of the LVM consists of: - -* One or more entire hard disks or partitions are configured as physical volumes (PVs). -* A volume group (**VG**) is created using one or more physical volumes. You can think of a volume group as a single storage unit. -* Multiple logical volumes can then be created in a volume group. Each logical volume is somewhat equivalent to a traditional partition – with the advantage that it can be resized at will as we mentioned earlier. - -In this article we will use three disks of **8 GB** each (**/dev/sdb**, **/dev/sdc**, and **/dev/sdd**) to create three physical volumes. You can either create the PVs directly on top of the device, or partition it first. - -Although we have chosen to go with the first method, if you decide to go with the second (as explained in [Part 4 – Create Partitions and File Systems in Linux][3] of this series) make sure to configure each partition as type `8e`. - -### Creating Physical Volumes, Volume Groups, and Logical Volumes - -To create physical volumes on top of **/dev/sdb**, **/dev/sdc**, and **/dev/sdd**, do: - -``` -# pvcreate /dev/sdb /dev/sdc /dev/sdd -``` - -You can list the newly created PVs with: - -``` -# pvs -``` - -and get detailed information about each PV with: - -``` -# pvdisplay /dev/sdX -``` - -(where **X** is b, c, or d) - -If you omit `/dev/sdX` as parameter, you will get information about all the PVs. 
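As a side note, the columnar output of these tools is easy to post-process in scripts. The sketch below sums PV sizes from a hard-coded sample shaped like `pvs --noheadings --units g` output (hypothetical values, so it runs without root or real block devices):

```shell
#!/usr/bin/env bash
# Hard-coded sample standing in for `pvs --noheadings --units g` output,
# so the parsing can be demonstrated without root or real devices.
sample='  /dev/sdb   vg00 lvm2 a--  8.00g 8.00g
  /dev/sdc   vg00 lvm2 a--  8.00g 8.00g
  /dev/sdd   vg00 lvm2 a--  8.00g 8.00g'

# PSize is the second-to-last column; strip the unit and add everything up.
total=$(echo "$sample" | awk '{size = $(NF-1); gsub(/g$/, "", size); sum += size}
                              END {printf "%.2f", sum}')
echo "total raw PV capacity: ${total}g"
```

The same pattern works for `vgs` and `lvs`, whose reports use the same column-oriented layout.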
- -To create a volume group named `vg00` using `/dev/sdb` and `/dev/sdc` (we will save `/dev/sdd` for later to illustrate the possibility of adding other devices to expand storage capacity when needed): - -``` -# vgcreate vg00 /dev/sdb /dev/sdc -``` - -As it was the case with physical volumes, you can also view information about this volume group by issuing: - -``` -# vgdisplay vg00 -``` - -Since `vg00` is formed with two **8 GB** disks, it will appear as a single **16 GB** drive: - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png) ->List LVM Volume Groups - -When it comes to creating logical volumes, the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use. - -For example, let’s create two LVs named `vol_projects` (**10 GB**) and `vol_backups` (remaining space), which we can use later to store project documentation and system backups, respectively. - -The `-n` option is used to indicate a name for the LV, whereas `-L` sets a fixed size and `-l` (lowercase L) is used to indicate a percentage of the remaining space in the container VG. - -``` -# lvcreate -n vol_projects -L 10G vg00 -# lvcreate -n vol_backups -l 100%FREE vg00 -``` - -As before, you can view the list of LVs and basic information with: - -``` -# lvs -``` - -and detailed information with - -``` -# lvdisplay -``` - -To view information about a single **LV**, use **lvdisplay** with the **VG** and **LV** as parameters, as follows: - -``` -# lvdisplay vg00/vol_projects -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png) ->List Logical Volume - -In the image above we can see that the LVs were created as storage devices (refer to the LV Path line). Before each logical volume can be used, we need to create a filesystem on top of it. 
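When there are more than a couple of volumes to create, it can help to generate the `lvcreate` invocations from a small plan and review them before running anything. This is a dry-run sketch: nothing below touches LVM, the commands are only printed (the plan entries mirror the example above):

```shell
#!/usr/bin/env bash
# Dry run: print the lvcreate commands from a small plan instead of
# typing them one by one. Nothing here is executed against LVM.
vg=vg00
plan='vol_projects -L 10G
vol_backups -l 100%FREE'

cmds=$(printf '%s\n' "$plan" | while read -r name flag size; do
    printf 'lvcreate -n %s %s %s %s\n' "$name" "$flag" "$size" "$vg"
done)
printf '%s\n' "$cmds"
```

Once the printed commands look right, they can be run one by one as root (or piped to a shell) to actually create the volumes.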
- -We’ll use ext4 as an example here since it allows us both to increase and reduce the size of each LV (as opposed to xfs that only allows to increase the size): - -``` -# mkfs.ext4 /dev/vg00/vol_projects -# mkfs.ext4 /dev/vg00/vol_backups -``` - -In the next section we will explain how to resize logical volumes and add extra physical storage space when the need arises to do so. - -### Resizing Logical Volumes and Extending Volume Groups - -Now picture the following scenario. You are starting to run out of space in `vol_backups`, while you have plenty of space available in `vol_projects`. Due to the nature of LVM, we can easily reduce the size of the latter (say **2.5 GB**) and allocate it for the former, while resizing each filesystem at the same time. - -Fortunately, this is as easy as doing: - -``` -# lvreduce -L -2.5G -r /dev/vg00/vol_projects -# lvextend -l +100%FREE -r /dev/vg00/vol_backups -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png) ->Resize Reduce Logical Volume and Volume Group - -It is important to include the minus `(-)` or plus `(+)` signs while resizing a logical volume. Otherwise, you’re setting a fixed size for the LV instead of resizing it. - -It can happen that you arrive at a point when resizing logical volumes cannot solve your storage needs anymore and you need to buy an extra storage device. Keeping it simple, you will need another disk. We are going to simulate this situation by adding the remaining PV from our initial setup (`/dev/sdd`). 
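To pin down that sign rule before moving on, here is the same calculation spelled out with plain integer arithmetic (an illustration only — no LVM commands are involved; sizes are kept in hundredths of a GiB to avoid floating point):

```shell
#!/usr/bin/env bash
# Illustration of the sign rule: with a sign the argument is a delta,
# without a sign it is an absolute target size.
current=1250            # the LV is currently 12.50 GiB
change=250              # we pass 2.5G on the command line

with_sign=$((current - change))   # lvreduce -L -2.5G: shrink BY 2.5 -> 10.00 GiB
without_sign=$change              # lvreduce -L 2.5G:  set size TO 2.50 GiB

printf 'lvreduce -L -2.5G -> %d.%02d GiB\n' $((with_sign / 100)) $((with_sign % 100))
printf 'lvreduce -L  2.5G -> %d.%02d GiB\n' $((without_sign / 100)) $((without_sign % 100))
```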
- -To add `/dev/sdd` to `vg00`, do - -``` -# vgextend vg00 /dev/sdd -``` - -If you run vgdisplay `vg00` before and after the previous command, you will see the increase in the size of the VG: - -``` -# vgdisplay vg00 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png) ->Check Volume Group Disk Size - -Now you can use the newly added space to resize the existing LVs according to your needs, or to create additional ones as needed. - -### Mounting Logical Volumes on Boot and on Demand - -Of course there would be no point in creating logical volumes if we are not going to actually use them! To better identify a logical volume we will need to find out what its `UUID` (a non-changing attribute that uniquely identifies a formatted storage device) is. - -To do that, use blkid followed by the path to each device: - -``` -# blkid /dev/vg00/vol_projects -# blkid /dev/vg00/vol_backups -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png) ->Find Logical Volume UUID - -Create mount points for each LV: - -``` -# mkdir /home/projects -# mkdir /home/backups -``` - -and insert the corresponding entries in `/etc/fstab` (make sure to use the UUIDs obtained before): - -``` -UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0 -UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0 -``` - -Then save the changes and mount the LVs: - -``` -# mount -a -# mount | grep home -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png) ->Find Logical Volume UUID - -When it comes to actually using the LVs, you will need to assign proper `ugo+rwx` permissions as explained in [Part 8 – Manage Users and Groups in Linux][4] of this series. - -### Summary - -In this article we have introduced [Logical Volume Management][5], a versatile tool to manage storage devices that provides scalability. 
When combined with RAID (which we explained in [Part 6 – Create and Manage RAID in Linux][6] of this series), you can enjoy not only scalability (provided by LVM) but also redundancy (offered by RAID). - -In this type of setup, you will typically find `LVM` on top of `RAID`, that is, configure RAID first and then configure LVM on top of it. - -If you have questions about this article, or suggestions to improve it, feel free to reach us using the comment form below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[3]: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ -[4]: http://www.tecmint.com/manage-users-and-groups-in-linux/ -[5]: http://www.tecmint.com/create-lvm-storage-in-linux/ -[6]: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/ diff --git a/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md b/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md new file mode 100644 index 0000000000..84b9db7968 --- /dev/null +++ b/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md @@ -0,0 +1,206 @@ +LFCS 系列第十一讲:如何使用命令 vgcreate、lvcreate 和 lvextend 管理和创建 LVM +============================================================================================ + +由于 LFCS 考试中的一些改变已在 2016 年 2 月 2 日生效,我们添加了一些必要的专题到 [LFCS 
系列][1]。我们也非常推荐备考的同学,同时阅读 [LFCE 系列][2]。 + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png) +>LFCS:管理 LVM 和创建 LVM 分区 + +在安装 Linux 系统的时候要做的最重要的决定之一便是给系统文件,home 目录等分配空间。在这个地方犯了错,再要增长空间不足的分区,那样既麻烦又有风险。 + +**逻辑卷管理** (即 **LVM**)相较于传统的分区管理有许多优点,已经成为大多数(如果不能说全部的话) Linux 发行版安装时的默认选择。LVM 最大的优点应该是能方便的按照你的意愿调整(减小或增大)逻辑分区的大小。 + +LVM 的组成结构: + +* 把一块或多块硬盘或者一个或多个分区配置成物理卷(PV)。 +* 一个用一个或多个物理卷创建出的卷组(**VG**)。可以把一个卷组想象成一个单独的存储单元。 +* 在一个卷组上可以创建多个逻辑卷。每个逻辑卷相当于一个传统意义上的分区 —— 优点是它的大小可以根据需求重新调整大小,正如之前提到的那样。 + +本文,我们将使用三块 **8 GB** 的磁盘(**/dev/sdb**、**/dev/sdc** 和 **/dev/sdd**)分别创建三个物理卷。你既可以直接在设备上创建 PV,也可以先分区再创建。 + +在这里我们选择第一种方式,如果你决定使用第二种(可以参考本系列[第四讲:创建分区和文件系统][3])确保每个分区的类型都是 `8e`。 + +### 创建物理卷,卷组和逻辑卷 + +要在 **/dev/sdb**、**/dev/sdc** 和 **/dev/sdd** 上创建物理卷,运行: + +``` +# pvcreate /dev/sdb /dev/sdc /dev/sdd +``` + +你可以列出新创建的 PV ,通过: + +``` +# pvs +``` + +并得到每个 PV 的详细信息,通过: + +``` +# pvdisplay /dev/sdX +``` + +(**X** 即 b、c 或 d) + +如果没有输入 `/dev/sdX` ,那么你将得到所有 PV 的信息。 + +使用 `/dev/sdb` 和 `/dev/sdc` 创建卷组,命名为 `vg00` (在需要时是可以通过添加其他设备来扩展空间的,我们等到说明这点的时候再用,所以暂时先保留 `/dev/sdd`): + +``` +# vgcreate vg00 /dev/sdb /dev/sdc +``` + +就像物理卷那样,你也可以查看卷组的信息,通过: + +``` +# vgdisplay vg00 +``` + +由于 `vg00` 是由两个 **8 GB** 的磁盘组成的,所以它将会显示成一个 **16 GB** 的硬盘: + +![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png) +>LVM 卷组列表 + +当谈到创建逻辑卷,空间的分配必须考虑到当下和以后的需求。根据每个逻辑卷的用途来命名是一个好的做法。 + +举个例子,让我们创建两个 LV,命名为 `vol_projects` (**10 GB**) 和 `vol_backups` (剩下的空间), 在日后分别用于部署项目文件和系统备份。 + +参数 `-n` 用于为 LV 指定名称,而 `-L` 用于设定固定的大小,还有 `-l` (小写的 L)用于按 VG 中剩余空间的百分比指定大小。 + +``` +# lvcreate -n vol_projects -L 10G vg00 +# lvcreate -n vol_backups -l 100%FREE vg00 +``` + +和之前一样,你可以查看 LV 的列表和基础信息,通过: + +``` +# lvs +``` + +或是详细信息,通过: + +``` +# lvdisplay +``` + +若要查看单个 **LV** 的信息,使用 **lvdisplay** 加上 **VG** 和 **LV** 作为参数,如下: + +``` +# lvdisplay vg00/vol_projects +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png) +>逻辑卷列表 + +如上图,我们看到 LV
已经被创建成存储设备了(参考 LV Path line)。在使用每个逻辑卷之前,需要先在上面创建文件系统。 + +这里我们拿 ext4 来做举例,因为对于每个 LV 的大小, ext4 既可以增大又可以减小(相对的 xfs 就只允许增大): + +``` +# mkfs.ext4 /dev/vg00/vol_projects +# mkfs.ext4 /dev/vg00/vol_backups +``` + +我们将在下一节向大家说明,如何调整逻辑卷的大小并在需要的时候添加额外的外部存储空间。 + +### 调整逻辑卷大小和扩充卷组 + +现在设想以下场景。`vol_backups` 中的空间即将用完,而 `vol_projects` 中还有富余的空间。由于 LVM 的特性,我们可以轻易的减小后者的大小(比方说 **2.5 GB**),并将其分配给前者,与此同时调整每个文件系统的大小。 + +幸运的是这很简单,只需: + +``` +# lvreduce -L -2.5G -r /dev/vg00/vol_projects +# lvextend -l +100%FREE -r /dev/vg00/vol_backups +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png) +>减小逻辑卷和卷组 + +在调整逻辑卷的时候,其中包含的减号 `(-)` 或加号 `(+)` 是十分重要的。否则 LV 将会被设置成指定的大小,而非调整指定大小。 + +有些时候,你可能会遭遇那种无法仅靠调整逻辑卷的大小就可以解决的问题,那时你就需要购置额外的存储设备了,你可能需要再加一块硬盘。这里我们将通过添加之前配置时预留的 PV (`/dev/sdd`),用以模拟这种情况。 + +想把 `/dev/sdd` 加到 `vg00`,执行: + +``` +# vgextend vg00 /dev/sdd +``` + +如果你在运行上条命令的前后执行 vgdisplay `vg00` ,你就会看出 VG 的大小增加了。 + +``` +# vgdisplay vg00 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png) +>查看卷组磁盘大小 + +现在,你可以使用新加的空间,按照你的需求调整现有 LV 的大小,或者创建一个新的 LV。 + +### 在启动和需求时挂载逻辑卷 + +当然,如果我们不打算实际的使用逻辑卷,那么创建它们就变得毫无意义了。为了更好的识别逻辑卷,我们需要找出它的 `UUID` (用于识别一个格式化存储设备的唯一且不变的属性)。 + +要做到这点,可使用 blkid 加每个设备的路径来实现: + +``` +# blkid /dev/vg00/vol_projects +# blkid /dev/vg00/vol_backups +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png) +>寻找逻辑卷的 UUID + +为每个 LV 创建挂载点: + +``` +# mkdir /home/projects +# mkdir /home/backups +``` + +并在 `/etc/fstab` 插入相应的条目(确保使用之前获得的UUID): + +``` +UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0 +UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0 +``` + +保存并挂载 LV: + +``` +# mount -a +# mount | grep home +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Mount-Logical-Volumes-on-Linux-1.png) +>挂载逻辑卷 + +在涉及到 LV 的实际使用时,你还需要按照曾在本系列[第八讲:管理用户和用户组][4]中讲解的那样,为其设置合适的 `ugo+rwx`。 + +### 总结 + +本文介绍了 
[逻辑卷管理][5],一个用于管理可扩展存储设备的多功能工具。与 RAID(曾在本系列讲解过的 [第六讲:组装分区为RAID设备——创建和管理系统备份][6])结合使用,你将同时体验到(LVM 带来的)可扩展性和(RAID 提供的)冗余。 + +在这类的部署中,你通常会在 `RAID` 上发现 `LVM`,这就是说,要先配置好 RAID 然后它在上面配置 LVM。 + +如果你对本问有任何的疑问和建议,可以直接在下方的评论区告诉我们。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-and-create-lvm-parition-using-vgcreate-lvcreate-and-lvextend/ + +作者:[Gabriel Cánepa][a] +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/gacanepa/ +[1]: https://linux.cn/article-7161-1.html +[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[3]: https://linux.cn/article-7187-1.html +[4]: https://linux.cn/article-7418-1.html +[5]: http://www.tecmint.com/create-lvm-storage-in-linux/ +[6]: https://linux.cn/article-7229-1.html From 31e8877ddf1ea8a13a6f209c81c7dead2b84e05b Mon Sep 17 00:00:00 2001 From: Cinlen Chan <237448382@qq.com> Date: Wed, 13 Jul 2016 09:20:18 +0800 Subject: [PATCH 126/471] [translated]Growing a carrer alongside Linux (#4179) * [translated]Growing a carrer alongside Linux * [translated]Growing a career alongside Linux --- ...160316 Growing a career alongside Linux.md | 51 ------------------- ...160316 Growing a career alongside Linux.md | 49 ++++++++++++++++++ 2 files changed, 49 insertions(+), 51 deletions(-) delete mode 100644 sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md create mode 100644 translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md diff --git a/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md b/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md deleted file mode 100644 index b9e5e54f0a..0000000000 --- a/sources/talk/my-open-source-story/20160316 Growing a career 
alongside Linux.md +++ /dev/null @@ -1,51 +0,0 @@ -chenxinlong translating - -Growing a career alongside Linux -================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT) - -My Linux story started in 1998 and continues today. Back then, I worked for The Gap managing thousands of desktops running [OS/2][1] (and a few years later, [Warp 3.0][2]). As an OS/2 guy, I was really happy then. The desktops hummed along and it was quite easy to support thousands of users with the tools the GAP had built. Changes were coming, though. - -In November of 1998, I received an invitation to join a brand new startup which would focus on Linux in the enterprise. This startup became quite famous as [Linuxcare][2]. - -### My time at Linuxcare - -I had played with Linux a bit, but had never considered delivering it to enterprise customers. Mere months later (which is a turn of the corner in startup time and space), I was managing a line of business that let enterprises get their hardware, software, and even books certified on a few flavors of Linux that were popular back then. - -I supported customers like IBM, Dell, and HP in ensuring their hardware ran Linux successfully. You hear a lot now about preloading Linux on hardware today, but way back then I was invited to Dell to discuss getting a laptop certified to run Linux for an upcoming trade show. Very exciting times! We also supported IBM and HP on a number of certification efforts that spanned a few years. - -Linux was changing fast, much like it always has. It gained hardware support for more key devices like sound, network, graphics. At around that time, I shifted from RPM-based systems to [Debian][3] for my personal use. - -### Using Linux through the years - -Fast forward some years and I worked at a number of companies that did Linux as hardened appliances, Linux as custom software, and Linux in the data center. 
By the mid 2000s, I was busy doing consulting for that rather large software company in Redmond around some analysis and verification of Linux compared to their own solutions. My personal use had not changed though—I would still run Debian testing systems on anything I could. - -I really appreciated the flexibility of a distribution that floated and was forever updated. Debian is one of the most fun and well supported distributions and has the best community I've ever been a part of. - -When I look back at my own adoption of Linux, I remember with fondness the numerous Linux Expo trade shows in San Jose, San Francisco, Boston, and New York in the early and mid 2000's. At Linuxcare we always did fun and funky booths, and walking the show floor always resulted in getting re-acquainted with old friends. Rumors of work were always traded, and the entire thing underscored the fun of using Linux in real endeavors. - -The rise of virtualization and cloud has really made the use of Linux even more interesting. When I was with Linuxcare, we partnered with a small 30-person company in Palo Alto. We would drive to their offices and get things ready for a trade show that they would attend with us. Who would have ever known that little startup would become VMware? - -I have so many stories, and there were so many people I was so fortunate to meet and work with. Linux has evolved in so many ways and has become so important. And even with its increasing importance, Linux is still fun to use. I think its openness and the ability to modify it has contributed to a legion of new users, which always astounds me. - -### Today - -I've moved away from doing mainstream Linux things over the past five years. I manage large scale infrastructure projects that include a variety of OSs (both proprietary and open), but my heart has always been with Linux. - -The constant evolution and fun of using Linux has been a driving force for me for over the past 18 years. 
I started with the 2.0 Linux kernel and have watched it become what it is now. It's a remarkable thing. An organic thing. A cool thing. - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/3/my-linux-story-michael-perry - -作者:[Michael Perry][a] -译者:[译者ID](https://github.com/chenxinlong) -校对:[校对者ID](https://github.com/校对者ID) - -[a]: https://opensource.com/users/mpmilestogo -[1]: https://en.wikipedia.org/wiki/OS/2 -[2]: https://archive.org/details/IBMOS2Warp3Collection -[3]: https://en.wikipedia.org/wiki/Linuxcare -[4]: https://www.debian.org/ -[5]: diff --git a/translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md b/translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md new file mode 100644 index 0000000000..f7ddefb92c --- /dev/null +++ b/translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md @@ -0,0 +1,49 @@ +培养一个 Linux 职业生涯 +================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT) + +我与 Linux 的故事开始于 1998 年,一直延续到今天。 当时我在 Gap 公司工作,管理着成千台运行着 [OS/2][1] 系统的台式机 ( 并且在随后的几年里是 [Warp 3.0][2])。 作为一个 OS/2 的使用者,那时我非常高兴。 随着这些台式机的嗡鸣,我们使用 Gap 开发的工具轻而易举地就能支撑起对成千的用户的服务支持。 然而,一切都将改变了。 + +在 1998 年的 11 月, 我收到邀请加入一个新成立的公司,这家公司将专注于 Linux 上。 这就是后来非常出名的 [Linuxcare][2]. 
+
+### 我在 Linuxcare 的时光
+
+我之前接触过一些 Linux,但从未想过要把它提供给企业客户。仅仅几个月之后(这成了一个时空上的转折点),我就已经在管理一条完整的业务线,帮助企业在当时流行的各种 Linux 发行版上获得软件、硬件乃至认证方面的支持。
+
+我为客户提供 IBM、Dell、HP 等厂商产品的支持,以确保他们的硬件能够成功地运行 Linux。今天你们应该都听过许多关于硬件预装 Linux 的事,但当时我就受邀前往 Dell,讨论为即将到来的贸易展在一台笔记本电脑上运行 Linux 的事宜。这是多么激动人心的时刻!在随后的几年里,我们也支持了 IBM 和 HP 的多项认证工作。
+
+Linux 变化得非常快,并且一直如此。它也获得了更多关键设备的支持,比如声音、网络和图形设备。大约在这个时期,我把个人使用的系统从基于 RPM 的发行版换成了 [Debian][4]。
+
+### 使用 Linux 的这些年
+
+那些年里,我在一些将 Linux 用于硬件设备、客户端软件以及数据中心的公司工作。到了 2000 年代中期,我忙于为 Redmond 的那家相当庞大的软件公司做咨询,围绕 Linux 与他们自家解决方案进行对比分析与验证。不过,我个人的使用习惯并没有改变:我仍会在任何能装的机器上运行 Debian 的 testing 系统。
+
+我非常欣赏这种滚动演进、永远保持更新的发行版的灵活性。Debian 是最有趣、支持最好的发行版之一,它还拥有我参与过的最棒的社区。
+
+回顾自己使用 Linux 的历程,我总会深情地想起 2000 年代早中期在圣何塞、旧金山、波士顿和纽约举办的多场 Linux Expo 展会。在 Linuxcare 时,我们总会布置有趣而时髦的展位,而逛展时也总能与老朋友重逢。大家不断交换着工作机会的小道消息,这一切都凸显了在真实工作中使用 Linux 的乐趣。
+
+虚拟化和云的兴起让 Linux 的使用变得更加有趣。我在 Linuxcare 的时候,我们与 Palo Alto 一家只有 30 人的小公司合作。我们会开车去他们的办公室,为他们将与我们一同参加的贸易展做准备。谁能想到,这家小小的创业公司会成为后来的 VMware?
+
+我还有许多故事可讲,也很幸运能结识这么多人并与他们共事。Linux 以各种方式不断演进,变得如此重要。而即便其重要性与日俱增,Linux 用起来依然充满乐趣。我认为,它的开放性和可修改性为它带来了大批新用户,这一点总是令我惊叹。
+
+### 现在
+
+过去五年里,我逐渐淡出了 Linux 的主流工作。我管理着包含多种操作系统(既有专有系统,也有开源系统)的大规模基础设施项目,但我的心始终与 Linux 同在。
+
+Linux 的不断演进和使用乐趣,在过去 18 年里一直是驱动我前行的动力。我从 2.0 版的 Linux 内核开始,一路看着它成长为如今的模样。这是一个卓越的东西、有生命力的东西、很酷的东西。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/3/my-linux-story-michael-perry
+
+作者:[Michael Perry][a]
+译者:[chenxinlong](https://github.com/chenxinlong)
+校对:[校对者ID](https://github.com/校对者ID)
+
+[a]: https://opensource.com/users/mpmilestogo
+[1]: https://en.wikipedia.org/wiki/OS/2
+[2]: https://archive.org/details/IBMOS2Warp3Collection
+[3]: https://en.wikipedia.org/wiki/Linuxcare
+[4]: https://www.debian.org/
+[5]: 

From bfa39b05dd36e7fbe3b05ba3a25aabca7f47f999 Mon Sep 17 00:00:00 2001
From: cposture 
Date: Wed, 13 Jul 2016 09:27:55 +0800
Subject: [PATCH 127/471] translating partly 75

---
 ...reate Your Own 
Shell in Python - Part I.md | 27 ++++++++++--------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md index 0b7e415f2a..67e5809b3d 100644 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -101,12 +101,12 @@ def shell_loop(): 当一个用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的很长的字符串。因此,我们必须切分该字符串(分割一个字符串为多个标记)。 -咋一看它似乎很简单。我们或许可以使用 cmd.split(),用空格分割输入。 -It seems simple at first glance. We might use cmd.split() to separate the input by spaces. It works well for a command like `ls -a my_folder` because it splits the command into a list `['ls', '-a', 'my_folder']` which we can use them easily. +咋一看似乎很简单。我们或许可以使用 cmd.split(),用空格分割输入。它对类似 `ls -a my_folder` 的命令起作用,因为它能够将命令分割为一个列表 `['ls', '-a', 'my_folder']`,这样我们便能轻易处理它们了。 -However, there are some cases that some arguments are quoted with single or double quotes like `echo "Hello World"` or `echo 'Hello World'`. If we use cmd.split(), we will get a list of 3 tokens `['echo', '"Hello', 'World"']` instead of 2 tokens `['echo', 'Hello World']`. +然而,也有一些类似 `echo "Hello World"` 或 `echo 'Hello World'` 以单引号或双引号引用参数的情况。如果我们使用 cmd.spilt,我们将会得到一个存有 3 个标记的列表 `['echo', '"Hello', 'World"']` 而不是 2 个标记 `['echo', 'Hello World']`。 + +幸运的是,Python 提供了一个名为 shlex 的库,能够帮助我们效验如神地分割命令。(提示:我们也可以使用正则表达式,但它不是本文的重点。) -Fortunately, Python provides a library called shlex that helps us split like a charm. (Note: we can also use regular expression but it’s not the main point of this article.) ``` import sys @@ -120,13 +120,13 @@ def tokenize(string): ... ``` -Then, we will send these tokens to the execution process. +然后我们将这些标记发送到执行过程。 -### Step 3: Execution +### 步骤 3:执行 -This is the core and fun part of a shell. What happened when a shell executes mkdir test_dir? 
(Note: mkdir is a program to be executed with arguments test_dir for creating a directory named test_dir.) +这是 shell 中核心和有趣的一部分。当 shell 执行 mkdir test_dir 时,发生了什么?(提示:midir 是一个带有 test_dir 参数的执行程序,用于创建一个名为 test_dir 的目录。) -The first function involved in this step is execvp. Before I explain what execvp does, let’s see it in action. +execvp 是涉及这一步的首个函数。在我们解释 execvp 所做的事之前,让我们看看它的实际效果。 ``` import os @@ -142,15 +142,16 @@ def execute(cmd_tokens): ... ``` -Try running our shell again and input a command `mkdir test_dir`, then, hit enter. +再次尝试运行我们的 shell,并输入 `mkdir test_dir` 命令,接着按下回车键。 -The problem is, after we hit enter, our shell exits instead of waiting for the next command. However, the directory is correctly created. +在我们敲下回车键之后,问题是我们的 shell 会直接退出而不是等待下一个命令。然而,目标正确地被创建。 -So, what execvp really does? +因此,execvp 实际上做了什么? -execvp is a variant of a system call exec. The first argument is the program name. The v indicates the second argument is a list of program arguments (variable number of arguments). The p indicates the PATH environment will be used for searching for the given program name. In our previous attempt, the mkdir program was found based on your PATH environment variable. +execvp 是系统调用 exec 的一个变体。第一个参数是程序名字。v 表示第二个参数是一个程序参数列表(可变参数)。p 表示环境变量 PATH 会被用于搜索给定的程序名字。在我们上一次的尝试中,可以在你的 PATH 环境变量查找到 mkdir 程序。 + +(还有其他 exec 变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。) -(There are other variants of exec such as execv, execvpe, execl, execlp, execlpe; you can google them for more information.) exec replaces the current memory of a calling process with a new process to be executed. In our case, our shell process memory was replaced by `mkdir` program. Then, mkdir became the main process and created the test_dir directory. Finally, its process exited. 
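
To make the fork/exec/waitpid sequence that the patches above keep refining concrete, here is a minimal standalone sketch. It is illustrative only, not part of the yosh sources tracked by this patch series, and it assumes a Unix-like system, since `os.fork` is unavailable on native Windows:

```python
import os
import sys


def run(cmd_tokens):
    """Fork, exec the command in the child, and wait for it in the parent."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(cmd_tokens[0], cmd_tokens)
        except OSError:
            os._exit(127)  # conventional "command not found" status
    # Parent: block until the child exits normally or dies from a signal.
    while True:
        _, status = os.waitpid(pid, 0)
        if os.WIFEXITED(status) or os.WIFSIGNALED(status):
            break
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    return 128 + os.WTERMSIG(status)  # common shell convention for signal deaths


if __name__ == "__main__":
    print(run(["echo", "hello from the child"]))  # the child prints; the parent reports 0
```

Only the forked child is replaced by `execvp`, which is why the calling process survives to wait for the next command.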
From a0bab457e91524872b1803ff490b436ab143ae63 Mon Sep 17 00:00:00 2001 From: Chunyang Wen Date: Wed, 13 Jul 2016 11:07:17 +0800 Subject: [PATCH 128/471] Translating: writing online game part2 & 3 (#4180) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * âFinish tranlating awk series part4 * Update Part 4 - How to Use Comparison Operators with Awk in Linux.md * translating: How to Hide Linux Command Line History by Going Incognito * finish translating How to Hide Linux Command Line History by Going Incognito * update translator * translating game part2 and 3 --- ...g online multiplayer game with python and asyncio - part 2.md | 1 + ...g online multiplayer game with python and asyncio - part 3.md | 1 + 2 files changed, 2 insertions(+) diff --git a/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md b/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md index a33b0b0fdc..75df13d9b5 100644 --- a/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md +++ b/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md @@ -1,3 +1,4 @@ +chunyang-wen translating Writing online multiplayer game with python and asyncio - Part 2 ================================================================== diff --git a/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md b/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md index 90cf71c466..3c38af72ac 100644 --- a/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md +++ b/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md @@ -1,3 +1,4 @@ +chunyang-wen translating Writing online multiplayer game with python and asyncio - Part 3 ================================================================= From 
97bdb52dc246b07480649ea4d70b6c0df55f0e96 Mon Sep 17 00:00:00 2001 From: cposture Date: Wed, 13 Jul 2016 13:48:03 +0800 Subject: [PATCH 129/471] Translated by cposture --- ...reate Your Own Shell in Python - Part I.md | 31 +++++++++---------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md index 67e5809b3d..74cac3887e 100644 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -152,16 +152,15 @@ execvp 是系统调用 exec 的一个变体。第一个参数是程序名字。v (还有其他 exec 变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。) +exec 会用即将运行的新进程替换调用进程的当前内存。在我们的例子中,我们的 shell 进程内存会被替换为 `mkdir` 程序。接着,mkdir 成为主进程并创建 test_dir 目录。最后该进程退出。 -exec replaces the current memory of a calling process with a new process to be executed. In our case, our shell process memory was replaced by `mkdir` program. Then, mkdir became the main process and created the test_dir directory. Finally, its process exited. +这里的重点在于我们的 shell 进程已经被 mkdir 进程所替换。这就是我们的 shell 消失且不会等待下一条命令的原因。 -The main point here is that our shell process was replaced by mkdir process already. That’s the reason why our shell disappeared and did not wait for the next command. +因此,我们需要其他的系统调用来解决问题:fork -Therefore, we need another system call to rescue: fork. +fork 会开辟新的内存并拷贝当前进程到一个新的进程。我们称这个新的进程为子进程,调用者进程为父进程。然后,子进程内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,可以免受内存替换的危险。 -fork will allocate new memory and copy the current process into a new process. We called this new process as child process and the caller process as parent process. Then, the child process memory will be replaced by a execed program. Therefore, our shell, which is a parent process, is safe from memory replacement. - -Let’s see our modified code. +让我们看看已修改的代码。 ``` ... @@ -194,25 +193,25 @@ def execute(cmd_tokens): ... 
``` -When the parent process call `os.fork()`, you can imagine that all source code is copied into a new child process. At this point, the parent and child process see the same code and run in parallel. +当我们的父进程调用 `os.fork()`时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,并且并行运行着。 -If the running code is belong to the child process, pid will be 0. Else, the running code is belong to the parent process, pid will be the process id of the child process. +如果运行的代码属于子进程,pid 将为 0。否则,如果运行的代码属于父进程,pid 将会是子进程的进程 id。 -When os.execvp is invoked in the child process, you can imagine like all the source code of the child process is replaced by the code of a program that is being called. However, the code of the parent process is not changed. +当 os.execvp 在子进程中被调用时,你可以想象子进程的所有源代码被替换为正被调用程序的代码。然而父进程的代码不会被改变。 -When the parent process finishes waiting its child process to exit or be terminated, it returns the status indicating to continue the shell loop. +当父进程完成等待子进程退出或终止时,它会返回一个状态,指示继续 shell 循环。 -### Run +### 运行 -Now, you can try running our shell and enter mkdir test_dir2. It should work properly. Our main shell process is still there and waits for the next command. Try ls and you will see the created directories. +现在,你可以尝试运行我们的 shell 并输入 mkdir test_dir2。它应该可以正确执行。我们的主 shell 进程仍然存在并等待下一条命令。尝试执行 ls,你可以看到已创建的目录。 -However, there are some problems here. +但是,这里仍有许多问题。 -First, try cd test_dir2 and then ls. It’s supposed to enter the directory test_dir2 which is an empty directory. However, you will see that the directory was not changed into test_dir2. +第一,尝试执行 cd test_dir2,接着执行 ls。它应该会进入到一个空的 test_dir2 目录。然而,你将会看到目录没有变为 test_dir2。 -Second, we still have no way to exit from our shell gracefully. +第二,我们仍然没有办法优雅地退出我们的 shell。 -We will continue to solve such problems in [Part 2][1]. 
+我们将会在 [Part 2][1] 解决诸如此类的问题。 -------------------------------------------------------------------------------- From c544898d9927eb375711356a35cf74c079abe39f Mon Sep 17 00:00:00 2001 From: cposture Date: Wed, 13 Jul 2016 15:03:42 +0800 Subject: [PATCH 130/471] Translated by cposture --- ...reate Your Own Shell in Python - Part I.md | 228 ------------------ ...reate Your Own Shell in Python - Part I.md | 228 ++++++++++++++++++ 2 files changed, 228 insertions(+), 228 deletions(-) delete mode 100644 sources/tech/20160705 Create Your Own Shell in Python - Part I.md create mode 100644 translated/tech/20160705 Create Your Own Shell in Python - Part I.md diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md deleted file mode 100644 index 74cac3887e..0000000000 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ /dev/null @@ -1,228 +0,0 @@ -使用 Python 创建你自己的 Shell:Part I -========================================== - -我很好奇一个 shell (像 bash,csh 等)内部是如何工作的。为了满足自己的好奇心,我使用 Python 实现了一个名为 yosh (Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。 - -(提示:你可以发布于此的博文中找到使用的源代码,代码以 MIT 许可发布) - -让我们开始吧。 - -### 步骤 0:项目结构 - -对于此项目,我使用了以下的项目结构。 - -``` -yosh_project -|-- yosh - |-- __init__.py - |-- shell.py -``` - -`yosh_project` 为项目根目录(你也可以把它简单地命名为 `yosh`)。 - -`yosh` 为包目录,并且 `__init__.py` 将会使一个包名等同于包目录名字(如果你不写 Python,可以忽略它) - -`shell.py` 是我们的主脚本文件。 - -### 步骤 1:Shell 循环 - -当你启动一个 shell,它会显示一个命令提示符同时等待用户输入命令。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会回到循环,等待下一条指令。 - -在 `shell.py`,我们会以一个简单的 mian 函数开始,该函数调用了 shell_loop() 函数,如下: - -``` -def shell_loop(): - # Start the loop here - - -def main(): - shell_loop() - - -if __name__ == "__main__": - main() -``` - -接着,在 `shell_loop()`,为了指示循环是否继续或停止,我们使用了一个状态标志。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。 - -``` -import sys - -SHELL_STATUS_RUN = 1 -SHELL_STATUS_STOP = 0 - - -def shell_loop(): - status = SHELL_STATUS_RUN - - while status == 
SHELL_STATUS_RUN: - # Display a command prompt - sys.stdout.write('> ') - sys.stdout.flush() - - # Read command input - cmd = sys.stdin.readline() -``` - -之后,我们切分命令输入并进行执行(我们将马上解释命令切分和执行函数)。 - -因此,我们的 shell_loop() 会是如下这样: - -``` -import sys - -SHELL_STATUS_RUN = 1 -SHELL_STATUS_STOP = 0 - - -def shell_loop(): - status = SHELL_STATUS_RUN - - while status == SHELL_STATUS_RUN: - # Display a command prompt - sys.stdout.write('> ') - sys.stdout.flush() - - # Read command input - cmd = sys.stdin.readline() - - # Tokenize the command input - cmd_tokens = tokenize(cmd) - - # Execute the command and retrieve new status - status = execute(cmd_tokens) -``` - -这就是我们整个 shell 循环。如果我们使用 python shell.py 命令启动 shell,它会显示命令提示符。然而如果我们输入命令并按回车,它将会抛出错误,因为我们还没定义命令切分函数。 - -为了退出 shell,可以尝试输入 ctrl-c。稍后我将解释如何以优雅的形式退出 shell。 - -### 步骤 2:命令切分 - -当一个用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的很长的字符串。因此,我们必须切分该字符串(分割一个字符串为多个标记)。 - -咋一看似乎很简单。我们或许可以使用 cmd.split(),用空格分割输入。它对类似 `ls -a my_folder` 的命令起作用,因为它能够将命令分割为一个列表 `['ls', '-a', 'my_folder']`,这样我们便能轻易处理它们了。 - -然而,也有一些类似 `echo "Hello World"` 或 `echo 'Hello World'` 以单引号或双引号引用参数的情况。如果我们使用 cmd.spilt,我们将会得到一个存有 3 个标记的列表 `['echo', '"Hello', 'World"']` 而不是 2 个标记 `['echo', 'Hello World']`。 - -幸运的是,Python 提供了一个名为 shlex 的库,能够帮助我们效验如神地分割命令。(提示:我们也可以使用正则表达式,但它不是本文的重点。) - - -``` -import sys -import shlex - -... - -def tokenize(string): - return shlex.split(string) - -... -``` - -然后我们将这些标记发送到执行过程。 - -### 步骤 3:执行 - -这是 shell 中核心和有趣的一部分。当 shell 执行 mkdir test_dir 时,发生了什么?(提示:midir 是一个带有 test_dir 参数的执行程序,用于创建一个名为 test_dir 的目录。) - -execvp 是涉及这一步的首个函数。在我们解释 execvp 所做的事之前,让我们看看它的实际效果。 - -``` -import os -... - -def execute(cmd_tokens): - # Execute command - os.execvp(cmd_tokens[0], cmd_tokens) - - # Return status indicating to wait for next command in shell_loop - return SHELL_STATUS_RUN - -... -``` - -再次尝试运行我们的 shell,并输入 `mkdir test_dir` 命令,接着按下回车键。 - -在我们敲下回车键之后,问题是我们的 shell 会直接退出而不是等待下一个命令。然而,目标正确地被创建。 - -因此,execvp 实际上做了什么? 
- -execvp 是系统调用 exec 的一个变体。第一个参数是程序名字。v 表示第二个参数是一个程序参数列表(可变参数)。p 表示环境变量 PATH 会被用于搜索给定的程序名字。在我们上一次的尝试中,可以在你的 PATH 环境变量查找到 mkdir 程序。 - -(还有其他 exec 变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。) - -exec 会用即将运行的新进程替换调用进程的当前内存。在我们的例子中,我们的 shell 进程内存会被替换为 `mkdir` 程序。接着,mkdir 成为主进程并创建 test_dir 目录。最后该进程退出。 - -这里的重点在于我们的 shell 进程已经被 mkdir 进程所替换。这就是我们的 shell 消失且不会等待下一条命令的原因。 - -因此,我们需要其他的系统调用来解决问题:fork - -fork 会开辟新的内存并拷贝当前进程到一个新的进程。我们称这个新的进程为子进程,调用者进程为父进程。然后,子进程内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,可以免受内存替换的危险。 - -让我们看看已修改的代码。 - -``` -... - -def execute(cmd_tokens): - # Fork a child shell process - # If the current process is a child process, its `pid` is set to `0` - # else the current process is a parent process and the value of `pid` - # is the process id of its child process. - pid = os.fork() - - if pid == 0: - # Child process - # Replace the child shell process with the program called with exec - os.execvp(cmd_tokens[0], cmd_tokens) - elif pid > 0: - # Parent process - while True: - # Wait response status from its child process (identified with pid) - wpid, status = os.waitpid(pid, 0) - - # Finish waiting if its child process exits normally - # or is terminated by a signal - if os.WIFEXITED(status) or os.WIFSIGNALED(status): - break - - # Return status indicating to wait for next command in shell_loop - return SHELL_STATUS_RUN - -... 
-``` - -当我们的父进程调用 `os.fork()`时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,并且并行运行着。 - -如果运行的代码属于子进程,pid 将为 0。否则,如果运行的代码属于父进程,pid 将会是子进程的进程 id。 - -当 os.execvp 在子进程中被调用时,你可以想象子进程的所有源代码被替换为正被调用程序的代码。然而父进程的代码不会被改变。 - -当父进程完成等待子进程退出或终止时,它会返回一个状态,指示继续 shell 循环。 - -### 运行 - -现在,你可以尝试运行我们的 shell 并输入 mkdir test_dir2。它应该可以正确执行。我们的主 shell 进程仍然存在并等待下一条命令。尝试执行 ls,你可以看到已创建的目录。 - -但是,这里仍有许多问题。 - -第一,尝试执行 cd test_dir2,接着执行 ls。它应该会进入到一个空的 test_dir2 目录。然而,你将会看到目录没有变为 test_dir2。 - -第二,我们仍然没有办法优雅地退出我们的 shell。 - -我们将会在 [Part 2][1] 解决诸如此类的问题。 - - --------------------------------------------------------------------------------- - -via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ - -作者:[Supasate Choochaisri][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://disqus.com/by/supasate_choochaisri/ -[1]: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ diff --git a/translated/tech/20160705 Create Your Own Shell in Python - Part I.md b/translated/tech/20160705 Create Your Own Shell in Python - Part I.md new file mode 100644 index 0000000000..b54d0bff29 --- /dev/null +++ b/translated/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -0,0 +1,228 @@ +使用 Python 创建你自己的 Shell:Part I +========================================== + +我很想知道一个 shell (像 bash,csh 等)内部是如何工作的。为了满足自己的好奇心,我使用 Python 实现了一个名为 **yosh** (Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。 + +(提示:你可以在[这里](https://github.com/supasate/yosh)查找本博文使用的源代码,代码以 MIT 许可证发布。在 Mac OS X 10.11.5 上,我使用 Python 2.7.10 和 3.4.3 进行了测试。它应该可以运行在其他类 Unix 环境,比如 Linux 和 Windows 上的 Cygwin。) + +让我们开始吧。 + +### 步骤 0:项目结构 + +对于此项目,我使用了以下的项目结构。 + +``` +yosh_project +|-- yosh + |-- __init__.py + |-- shell.py +``` + +`yosh_project` 为项目根目录(你也可以把它简单命名为 `yosh`)。 + +`yosh` 为包目录,且 `__init__.py` 可以使它成为与包目录名字相同的包(如果你不写 Python,可以忽略它。) + +`shell.py` 
是我们主要的脚本文件。
+
+### 步骤 1:Shell 循环
+
+当你启动一个 shell 时,它会显示一个命令提示符并等待你的命令输入。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会重新回到循环,等待下一条指令。
+
+在 `shell.py`,我们会以一个简单的 main 函数开始,该函数调用了 `shell_loop()` 函数,如下:
+
+```
+def shell_loop():
+    # Start the loop here
+
+
+def main():
+    shell_loop()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+接着,在 `shell_loop()`,为了指示循环是否继续或停止,我们使用了一个状态标志。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+```
+
+之后,我们切分命令输入并进行执行(我们即将实现`命令切分`和`执行`函数)。
+
+因此,我们的 `shell_loop()` 会是如下这样:
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+
+        # Tokenize the command input
+        cmd_tokens = tokenize(cmd)
+
+        # Execute the command and retrieve new status
+        status = execute(cmd_tokens)
+```
+
+这就是我们整个 shell 循环。如果我们使用 `python shell.py` 启动我们的 shell,它会显示命令提示符。然而如果我们输入命令并按回车,它会抛出错误,因为我们还没定义`命令切分`函数。
+
+为了退出 shell,可以尝试输入 ctrl-c。稍后我将解释如何以优雅的形式退出 shell。
+
+### 步骤 2:命令切分
+
+当用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的很长的字符串。因此,我们必须切分该字符串(分割一个字符串为多个标记)。
+
+乍一看似乎很简单。我们或许可以使用 `cmd.split()`,以空格分割输入。它对类似 `ls -a my_folder` 的命令起作用,因为它能够将命令分割为一个列表 `['ls', '-a', 'my_folder']`,这样我们便能轻易处理它们了。
+
+然而,也有一些类似 `echo "Hello World"` 或 `echo 'Hello World'` 以单引号或双引号引用参数的情况。如果我们使用 `cmd.split()`,我们将会得到一个存有 3 个标记的列表 `['echo', '"Hello', 'World"']` 而不是 2 个标记的列表 `['echo', 'Hello World']`。
+
+幸运的是,Python 提供了一个名为 `shlex` 的库,它能够帮助我们完美地分割命令。(提示:我们也可以使用正则表达式,但它不是本文的重点。)
+
+
+```
+import sys
+import shlex
+
+...
+
+def tokenize(string):
+    return shlex.split(string)
+
+...
+```
+
+然后我们将这些标记发送到执行过程。
+
+### 步骤 3:执行
+
+这是 shell 中核心和有趣的一部分。当 shell 执行 `mkdir test_dir` 时,到底发生了什么?(提示:`mkdir` 是一个带有 `test_dir` 参数的执行程序,用于创建一个名为 `test_dir` 的目录。)
+
+`execvp` 是涉及这一步的首个函数。在我们解释 `execvp` 所做的事之前,让我们看看它的实际效果。
+
+```
+import os
+...
+
+def execute(cmd_tokens):
+    # Execute command
+    os.execvp(cmd_tokens[0], cmd_tokens)
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+```
+
+再次尝试运行我们的 shell,并输入 `mkdir test_dir` 命令,接着按下回车键。
+
+在我们敲下回车键之后,问题是我们的 shell 会直接退出而不是等待下一个命令。然而,目录被正确地创建了。
+
+因此,`execvp` 实际上做了什么?
+
+`execvp` 是系统调用 `exec` 的一个变体。第一个参数是程序名字。`v` 表示第二个参数是一个程序参数列表(可变参数)。`p` 表示环境变量 `PATH` 会被用于搜索给定的程序名字。在我们上一次的尝试中,它将会基于我们的 `PATH` 环境变量查找 `mkdir` 程序。
+
+(还有其他 `exec` 变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。)
+
+`exec` 会用即将运行的新进程替换调用进程的当前内存。在我们的例子中,我们的 shell 进程内存会被替换为 `mkdir` 程序。接着,`mkdir` 成为主进程并创建 `test_dir` 目录。最后该进程退出。
+
+这里的重点在于**我们的 shell 进程已经被 `mkdir` 进程所替换**。这就是我们的 shell 消失且不会等待下一条命令的原因。
+
+因此,我们需要其他的系统调用来解决问题:`fork`。
+
+`fork` 会开辟新的内存并拷贝当前进程到一个新的进程。我们称这个新的进程为**子进程**,调用者进程为**父进程**。然后,子进程内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,可以免受内存替换的危险。
+
+让我们看看修改后的代码。
+
+```
+...
+
+def execute(cmd_tokens):
+    # Fork a child shell process
+    # If the current process is a child process, its `pid` is set to `0`
+    # else the current process is a parent process and the value of `pid`
+    # is the process id of its child process.
+    pid = os.fork()
+
+    if pid == 0:
+        # Child process
+        # Replace the child shell process with the program called with exec
+        os.execvp(cmd_tokens[0], cmd_tokens)
+    elif pid > 0:
+        # Parent process
+        while True:
+            # Wait response status from its child process (identified with pid)
+            wpid, status = os.waitpid(pid, 0)
+
+            # Finish waiting if its child process exits normally
+            # or is terminated by a signal
+            if os.WIFEXITED(status) or os.WIFSIGNALED(status):
+                break
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+``` + +当我们的父进程调用 `os.fork()`时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,且并行运行着。 + +如果运行的代码属于子进程,`pid` 将为 `0`。否则,如果运行的代码属于父进程,`pid` 将会是子进程的进程 id。 + +当 `os.execvp` 在子进程中被调用时,你可以想象子进程的所有源代码被替换为正被调用程序的代码。然而父进程的代码不会被改变。 + +当父进程完成等待子进程退出或终止时,它会返回一个状态,指示继续 shell 循环。 + +### 运行 + +现在,你可以尝试运行我们的 shell 并输入 `mkdir test_dir2`。它应该可以正确执行。我们的主 shell 进程仍然存在并等待下一条命令。尝试执行 `ls`,你可以看到已创建的目录。 + +但是,这里仍有许多问题。 + +第一,尝试执行 `cd test_dir2`,接着执行 `ls`。它应该会进入到一个空的 `test_dir2` 目录。然而,你将会看到目录并没有变为 `test_dir2`。 + +第二,我们仍然没有办法优雅地退出我们的 shell。 + +我们将会在 [Part 2][1] 解决诸如此类的问题。 + + +-------------------------------------------------------------------------------- + +via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ + +作者:[Supasate Choochaisri][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://disqus.com/by/supasate_choochaisri/ +[1]: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ From 4cd961ced359ddd54ed620f95a5e16f718c9872a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Wed, 13 Jul 2016 16:41:30 +0800 Subject: [PATCH 131/471] Translated by cposture (#4181) * Translated by cposture * Translating by cposture * translating partly * translating partly 75 * Translated by cposture * Translated by cposture --- ...reate Your Own Shell in Python - Part I.md | 227 ----------------- ...reate Your Own Shell in Python - Part I.md | 228 ++++++++++++++++++ 2 files changed, 228 insertions(+), 227 deletions(-) delete mode 100644 sources/tech/20160705 Create Your Own Shell in Python - Part I.md create mode 100644 translated/tech/20160705 Create Your Own Shell in Python - Part I.md diff --git a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md b/sources/tech/20160705 Create Your Own Shell in Python - Part I.md deleted file mode 100644 index 
48e84381c8..0000000000 --- a/sources/tech/20160705 Create Your Own Shell in Python - Part I.md +++ /dev/null @@ -1,227 +0,0 @@ -Translating by cposture 2016.07.09 -Create Your Own Shell in Python : Part I - -I’m curious to know how a shell (like bash, csh, etc.) works internally. So, I implemented one called yosh (Your Own SHell) in Python to answer my own curiosity. The concept I explain in this article can be applied to other languages as well. - -(Note: You can find source code used in this blog post here. I distribute it with MIT license.) - -Let’s start. - -### Step 0: Project Structure - -For this project, I use the following project structure. - -``` -yosh_project -|-- yosh - |-- __init__.py - |-- shell.py -``` - -`yosh_project` is the root project folder (you can also name it just `yosh`). - -`yosh` is the package folder and `__init__.py` will make it a package named the same as the package folder name. (If you don’t write Python, just ignore it.) - -`shell.py` is our main shell file. - -### Step 1: Shell Loop - -When you start a shell, it will show a command prompt and wait for your command input. After it receives the command and executes it (the detail will be explained later), your shell will be back to the wait loop for your next command. - -In `shell.py`, we start by a simple main function calling the shell_loop() function as follows: - -``` -def shell_loop(): - # Start the loop here - - -def main(): - shell_loop() - - -if __name__ == "__main__": - main() -``` - -Then, in our `shell_loop()`, we use a status flag to indicate whether the loop should continue or stop. In the beginning of the loop, our shell will show a command prompt and wait to read command input. 
- -``` -import sys - -SHELL_STATUS_RUN = 1 -SHELL_STATUS_STOP = 0 - - -def shell_loop(): - status = SHELL_STATUS_RUN - - while status == SHELL_STATUS_RUN: - # Display a command prompt - sys.stdout.write('> ') - sys.stdout.flush() - - # Read command input - cmd = sys.stdin.readline() -``` - -After that, we tokenize the command input and execute it (we’ll implement the tokenize and execute functions soon). - -Therefore, our shell_loop() will be the following. - -``` -import sys - -SHELL_STATUS_RUN = 1 -SHELL_STATUS_STOP = 0 - - -def shell_loop(): - status = SHELL_STATUS_RUN - - while status == SHELL_STATUS_RUN: - # Display a command prompt - sys.stdout.write('> ') - sys.stdout.flush() - - # Read command input - cmd = sys.stdin.readline() - - # Tokenize the command input - cmd_tokens = tokenize(cmd) - - # Execute the command and retrieve new status - status = execute(cmd_tokens) -``` - -That’s all of our shell loop. If we start our shell with python shell.py, it will show the command prompt. However, it will throw an error if we type a command and hit enter because we don’t define tokenize function yet. - -To exit the shell, try ctrl-c. I will tell how to exit gracefully later. - -### Step 2: Tokenization - -When a user types a command in our shell and hits enter. The command input will be a long string containing both a command name and its arguments. Therefore, we have to tokenize it (split a string into several tokens). - -It seems simple at first glance. We might use cmd.split() to separate the input by spaces. It works well for a command like `ls -a my_folder` because it splits the command into a list `['ls', '-a', 'my_folder']` which we can use them easily. - -However, there are some cases that some arguments are quoted with single or double quotes like `echo "Hello World"` or `echo 'Hello World'`. If we use cmd.split(), we will get a list of 3 tokens `['echo', '"Hello', 'World"']` instead of 2 tokens `['echo', 'Hello World']`. 
- -Fortunately, Python provides a library called shlex that helps us split like a charm. (Note: we can also use regular expression but it’s not the main point of this article.) - -``` -import sys -import shlex - -... - -def tokenize(string): - return shlex.split(string) - -... -``` - -Then, we will send these tokens to the execution process. - -### Step 3: Execution - -This is the core and fun part of a shell. What happened when a shell executes mkdir test_dir? (Note: mkdir is a program to be executed with arguments test_dir for creating a directory named test_dir.) - -The first function involved in this step is execvp. Before I explain what execvp does, let’s see it in action. - -``` -import os -... - -def execute(cmd_tokens): - # Execute command - os.execvp(cmd_tokens[0], cmd_tokens) - - # Return status indicating to wait for next command in shell_loop - return SHELL_STATUS_RUN - -... -``` - -Try running our shell again and input a command `mkdir test_dir`, then, hit enter. - -The problem is, after we hit enter, our shell exits instead of waiting for the next command. However, the directory is correctly created. - -So, what execvp really does? - -execvp is a variant of a system call exec. The first argument is the program name. The v indicates the second argument is a list of program arguments (variable number of arguments). The p indicates the PATH environment will be used for searching for the given program name. In our previous attempt, the mkdir program was found based on your PATH environment variable. - -(There are other variants of exec such as execv, execvpe, execl, execlp, execlpe; you can google them for more information.) - -exec replaces the current memory of a calling process with a new process to be executed. In our case, our shell process memory was replaced by `mkdir` program. Then, mkdir became the main process and created the test_dir directory. Finally, its process exited. 
- -The main point here is that our shell process was replaced by mkdir process already. That’s the reason why our shell disappeared and did not wait for the next command. - -Therefore, we need another system call to rescue: fork. - -fork will allocate new memory and copy the current process into a new process. We called this new process as child process and the caller process as parent process. Then, the child process memory will be replaced by a execed program. Therefore, our shell, which is a parent process, is safe from memory replacement. - -Let’s see our modified code. - -``` -... - -def execute(cmd_tokens): - # Fork a child shell process - # If the current process is a child process, its `pid` is set to `0` - # else the current process is a parent process and the value of `pid` - # is the process id of its child process. - pid = os.fork() - - if pid == 0: - # Child process - # Replace the child shell process with the program called with exec - os.execvp(cmd_tokens[0], cmd_tokens) - elif pid > 0: - # Parent process - while True: - # Wait response status from its child process (identified with pid) - wpid, status = os.waitpid(pid, 0) - - # Finish waiting if its child process exits normally - # or is terminated by a signal - if os.WIFEXITED(status) or os.WIFSIGNALED(status): - break - - # Return status indicating to wait for next command in shell_loop - return SHELL_STATUS_RUN - -... -``` - -When the parent process call `os.fork()`, you can imagine that all source code is copied into a new child process. At this point, the parent and child process see the same code and run in parallel. - -If the running code is belong to the child process, pid will be 0. Else, the running code is belong to the parent process, pid will be the process id of the child process. - -When os.execvp is invoked in the child process, you can imagine like all the source code of the child process is replaced by the code of a program that is being called. 
However, the code of the parent process is not changed. - -When the parent process finishes waiting its child process to exit or be terminated, it returns the status indicating to continue the shell loop. - -### Run - -Now, you can try running our shell and enter mkdir test_dir2. It should work properly. Our main shell process is still there and waits for the next command. Try ls and you will see the created directories. - -However, there are some problems here. - -First, try cd test_dir2 and then ls. It’s supposed to enter the directory test_dir2 which is an empty directory. However, you will see that the directory was not changed into test_dir2. - -Second, we still have no way to exit from our shell gracefully. - -We will continue to solve such problems in [Part 2][1]. - - --------------------------------------------------------------------------------- - -via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ - -作者:[Supasate Choochaisri][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://disqus.com/by/supasate_choochaisri/ -[1]: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ diff --git a/translated/tech/20160705 Create Your Own Shell in Python - Part I.md b/translated/tech/20160705 Create Your Own Shell in Python - Part I.md new file mode 100644 index 0000000000..b54d0bff29 --- /dev/null +++ b/translated/tech/20160705 Create Your Own Shell in Python - Part I.md @@ -0,0 +1,228 @@ +使用 Python 创建你自己的 Shell:Part I +========================================== + +我很想知道一个 shell (像 bash,csh 等)内部是如何工作的。为了满足自己的好奇心,我使用 Python 实现了一个名为 **yosh** (Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。 + +(提示:你可以在[这里](https://github.com/supasate/yosh)查找本博文使用的源代码,代码以 MIT 许可证发布。在 Mac OS X 10.11.5 上,我使用 Python 2.7.10 和 3.4.3 进行了测试。它应该可以运行在其他类 Unix 环境,比如 Linux 和 Windows 上的 Cygwin。) 
+
+让我们开始吧。
+
+### 步骤 0:项目结构
+
+对于此项目,我使用了以下的项目结构。
+
+```
+yosh_project
+|-- yosh
+   |-- __init__.py
+   |-- shell.py
+```
+
+`yosh_project` 为项目根目录(你也可以把它简单命名为 `yosh`)。
+
+`yosh` 为包目录,且 `__init__.py` 可以使它成为与包目录名字相同的包(如果你不写 Python,可以忽略它。)
+
+`shell.py` 是我们主要的脚本文件。
+
+### 步骤 1:Shell 循环
+
+当启动一个 shell 时,它会显示一个命令提示符并等待你输入命令。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会重新回到循环,等待下一条指令。
+
+在 `shell.py` 中,我们会以一个简单的 main 函数开始,该函数调用 shell_loop() 函数,如下:
+
+```
+def shell_loop():
+    # Start the loop here
+
+
+def main():
+    shell_loop()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+接着,在 `shell_loop()` 中,为了指示循环是继续还是停止,我们使用了一个状态标志。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+```
+
+之后,我们切分命令输入并执行它(我们很快就会实现`命令切分`和`执行`函数)。
+
+因此,我们的 shell_loop() 会是如下这样:
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+
+        # Tokenize the command input
+        cmd_tokens = tokenize(cmd)
+
+        # Execute the command and retrieve new status
+        status = execute(cmd_tokens)
+```
+
+这就是我们整个 shell 循环。如果我们使用 `python shell.py` 启动我们的 shell,它会显示命令提示符。然而,如果我们输入命令并按回车,它会抛出错误,因为我们还没定义`命令切分`函数。
+
+为了退出 shell,可以尝试输入 ctrl-c。稍后我将解释如何以优雅的方式退出 shell。
+
+### 步骤 2:命令切分
+
+当用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的长字符串。因此,我们必须切分该字符串(把一个字符串分割为多个标记)。
+
+乍一看似乎很简单。我们或许可以使用 `cmd.split()`,以空格分割输入。它对类似 `ls -a my_folder` 的命令起作用,因为它能够将命令分割为一个列表 `['ls', '-a', 'my_folder']`,这样我们便能轻易处理它们了。
+
+然而,也有一些类似 `echo "Hello World"` 或 `echo 'Hello World'` 以单引号或双引号引用参数的情况。如果我们使用 cmd.split(),我们将会得到一个存有 3 个标记的列表 `['echo', '"Hello', 'World"']` 而不是 2 个标记的列表
`['echo', 'Hello World']`。
+
+幸运的是,Python 提供了一个名为 `shlex` 的库,它能够帮助我们漂亮地完成命令切分。(提示:我们也可以使用正则表达式,但这不是本文的重点。)
+
+
+```
+import sys
+import shlex
+
+...
+
+def tokenize(string):
+    return shlex.split(string)
+
+...
+```
+
+然后我们将这些标记发送到执行进程。
+
+### 步骤 3:执行
+
+这是 shell 中核心且有趣的一部分。当 shell 执行 `mkdir test_dir` 时,到底发生了什么?(提示:`mkdir` 是一个带有 `test_dir` 参数的可执行程序,用于创建一个名为 `test_dir` 的目录。)
+
+`execvp` 是这一步涉及的第一个函数。在解释 `execvp` 所做的事之前,让我们先看看它的实际效果。
+
+```
+import os
+...
+
+def execute(cmd_tokens):
+    # Execute command
+    os.execvp(cmd_tokens[0], cmd_tokens)
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+```
+
+再次尝试运行我们的 shell,输入 `mkdir test_dir` 命令,接着按下回车键。
+
+问题来了:在我们敲下回车键之后,我们的 shell 会直接退出,而不是等待下一条命令。然而,目录却被正确地创建了。
+
+那么,`execvp` 实际上做了什么?
+
+`execvp` 是系统调用 `exec` 的一个变体。第一个参数是程序名字。`v` 表示第二个参数是一个程序参数列表(可变数量的参数)。`p` 表示将使用环境变量 `PATH` 来搜索给定的程序名字。在我们上一次的尝试中,它就是基于 `PATH` 环境变量找到了 `mkdir` 程序。
+
+(`exec` 还有其他变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。)
+
+`exec` 会用一个即将运行的新进程替换调用进程的当前内存。在我们的例子中,我们的 shell 进程内存被替换为了 `mkdir` 程序。接着,`mkdir` 成为主进程并创建了 `test_dir` 目录。最后该进程退出。
+
+这里的重点在于**我们的 shell 进程已经被 `mkdir` 进程所替换**。这就是我们的 shell 消失且不再等待下一条命令的原因。
+
+因此,我们需要另一个系统调用来解决问题:`fork`。
+
+`fork` 会分配新的内存,并把当前进程拷贝为一个新的进程。我们称这个新的进程为**子进程**,调用者进程为**父进程**。然后,子进程的内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,就不会受到内存替换的影响。
+
+让我们看看修改后的代码。
+
+```
+...
+
+def execute(cmd_tokens):
+    # Fork a child shell process
+    # If the current process is a child process, its `pid` is set to `0`
+    # else the current process is a parent process and the value of `pid`
+    # is the process id of its child process.
+ pid = os.fork() + + if pid == 0: + # Child process + # Replace the child shell process with the program called with exec + os.execvp(cmd_tokens[0], cmd_tokens) + elif pid > 0: + # Parent process + while True: + # Wait response status from its child process (identified with pid) + wpid, status = os.waitpid(pid, 0) + + # Finish waiting if its child process exits normally + # or is terminated by a signal + if os.WIFEXITED(status) or os.WIFSIGNALED(status): + break + + # Return status indicating to wait for next command in shell_loop + return SHELL_STATUS_RUN + +... +``` + +当我们的父进程调用 `os.fork()`时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,且并行运行着。 + +如果运行的代码属于子进程,`pid` 将为 `0`。否则,如果运行的代码属于父进程,`pid` 将会是子进程的进程 id。 + +当 `os.execvp` 在子进程中被调用时,你可以想象子进程的所有源代码被替换为正被调用程序的代码。然而父进程的代码不会被改变。 + +当父进程完成等待子进程退出或终止时,它会返回一个状态,指示继续 shell 循环。 + +### 运行 + +现在,你可以尝试运行我们的 shell 并输入 `mkdir test_dir2`。它应该可以正确执行。我们的主 shell 进程仍然存在并等待下一条命令。尝试执行 `ls`,你可以看到已创建的目录。 + +但是,这里仍有许多问题。 + +第一,尝试执行 `cd test_dir2`,接着执行 `ls`。它应该会进入到一个空的 `test_dir2` 目录。然而,你将会看到目录并没有变为 `test_dir2`。 + +第二,我们仍然没有办法优雅地退出我们的 shell。 + +我们将会在 [Part 2][1] 解决诸如此类的问题。 + + +-------------------------------------------------------------------------------- + +via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ + +作者:[Supasate Choochaisri][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://disqus.com/by/supasate_choochaisri/ +[1]: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ From 4526db9b0b542be19dc2e2d018d852630870aab7 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Wed, 13 Jul 2016 17:15:35 +0800 Subject: [PATCH 132/471] sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md --- ...sing Vagrant to control your DigitalOcean cloud instances.md | 2 ++ 1 file changed, 2 insertions(+) diff --git 
a/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md index e24e381b38..5a0e1ffda0 100644 --- a/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md +++ b/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md @@ -1,3 +1,5 @@ +MikeCoder Translating... + Using Vagrant to control your DigitalOcean cloud instances ========================================================= From 86b788968cb3ebd800af02a994f31b30d0f12ab9 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 12 Jul 2016 13:18:34 +0800 Subject: [PATCH 133/471] PUB:20160218 What do Linux developers think of Git and GitHub @mudongliang --- ...inux developers think of Git and GitHub.md | 110 ++++++++++++++++++ ...inux developers think of Git and GitHub.md | 93 --------------- 2 files changed, 110 insertions(+), 93 deletions(-) create mode 100644 published/20160218 What do Linux developers think of Git and GitHub.md delete mode 100644 translated/tech/20160218 What do Linux developers think of Git and GitHub.md diff --git a/published/20160218 What do Linux developers think of Git and GitHub.md b/published/20160218 What do Linux developers think of Git and GitHub.md new file mode 100644 index 0000000000..1824c6e19e --- /dev/null +++ b/published/20160218 What do Linux developers think of Git and GitHub.md @@ -0,0 +1,110 @@ +Linux 开发者如何看待 Git 和 Github? +===================================================== + +### Linux 开发者如何看待 Git 和 Github? + +Git 和 Github 在 Linux 开发者中有很高的知名度。但是开发者如何看待它们呢?另外,Github 是不是真的和 Git 是一个意思?一个 Linux reddit 用户最近问到了这个问题,并且得到了很有意思的答案。 + +Dontwakemeup46 提问: + +> 我正在学习 Git 和 Github。我感兴趣社区如何看待两者?据我所知,Git 和 Github 应用十分广泛。但是 Git 或 Github 有没有严重的不足?社区喜欢去改变些什么呢? 
+ +[更多见 Reddit](https://www.reddit.com/r/linux/comments/45jy59/the_popularity_of_git_and_github/) + +与他志同道合的 Linux reddit 用户回答了他们对于 Git 和 Github的观点: + +>**Derenir**: “Github 并不附属于 Git。 + +> Git 是由 Linus Torvalds 开发的。 + +> Github 几乎不支持 Linux。 + +> Github 是一家企图借助 Git 赚钱的公司。 + +> https://desktop.github.com/ 并没有支持 Linux。” + +--- +>**Bilog78**: “一个小的补充: Linus Torvalds 已经不再维护 Git了。维护者是 Junio C Hamano,以及 在他之后的主要贡献者是 Jeff King 和 Shawn O. Pearce。” + +--- + +>**Fearthefuture**: “我喜欢 Git,但是不明白人们为什么还要使用 Github。从我的角度,Github 比 Bitbucket 好的一点是用户统计和更大的用户基础。Bitbucket 有无限的免费私有库,更好的 UI,以及更好地集成了其他服务,比如说 Jenkins。” + +--- + +>**Thunger**: “Gitlab.com 也很不错,特别是你可以在自己的服务器上架设自己的实例。” + +--- + +>**Takluyver**: “很多人熟悉 Github 的 UI 以及相关联的服务,比如说 Travis 。并且很多人都有 Github 账号,所以它是存储项目的一个很好的地方。人们也使用他们的 Github 个人信息页作为一种求职用的作品选辑,所以他们很积极地将更多的项目放在这里。Github 是一个存放开源项目的事实标准。” + +--- + +>**Tdammers**: “Git 严重问题在于 UI,它有些违反直觉,以至于很多用户只能达到使用一些容易记住的咒语的程度。” + +> Github:最严重的问题在于它是商业托管的解决方案;你买了方便,但是代价是你的代码在别人的服务器上面,已经不在你的掌控范围之内了。另一个对于 Github 的普遍批判是它的工作流和 Git 本身的精神不符,特别是 pull requests 工作的方式。最后, Github 垄断了代码的托管环境,同时对于多样性是很不好的,这反过来对于旺盛的免费软件社区很重要。” + +--- + +>**Dies**: “更重要的是,如果一旦是这样,按照现状来说,我猜我们会被 Github 所困,因为它们控制如此多的项目。” + +--- + +>**Tdammers**: “代码托管在别人的服务器上,这里"别人"指的是 Github。这对于开源项目来说,并不是什么太大的问题,但是尽管如此,你无法控制它。如果你在 Github 上有私有项目,“它将保持私有”的唯一的保险只是 Github 的承诺而已。如果你决定删除东西,你不能确定东西是否被删除了,或者只是隐藏了。 + +Github 并不自己控制这些项目(你总是可以拿走你的代码,然后托管到别的地方,声明新位置是“官方”的),它只是有比开发者本身有更深的使用权。” + +--- + +>**Drelos**: “我已经读了大量的关于 Github 的赞美与批评。(这里有一个[例子](http://www.wired.com/2015/06/problem-putting-worlds-code-github/)),但是我的幼稚问题是为什么不向一个免费开源的版本努力呢?” + +--- + +>**Twizmwazin**: “Gitlab 的源码就存在这里” + +--- + +[更多见 Reddit](https://www.reddit.com/r/linux/comments/45jy59/the_popularity_of_git_and_github/) + +### DistroWatch 评估 XStream 桌面 153 版本 + +XStreamOS 是一个由 Sonicle 创建的 Solaris 的一个版本。XStream 桌面将 Solaris 的强大带给了桌面用户,同时新手用户很可能有兴趣体验一下。DistroWatch 对于 XStream 桌面 153 版本做了一个很全面的评估,并且发现它运行相当好。 + +Jesse Smith 为 DistroWatch 报道: + +> 我认为 XStream 
桌面做好了很多事情。诚然,当操作系统无法在我的硬件上启动,同时当运行在 VirtualBox 中时我无法使得桌面使用我显示器的完整分辨率,我的开端并不很成功。不过,除此之外,XStream 表现的很好。安装器工作的很好,该系统自动设置和使用了引导环境(boot environments),这让我们可以在发生错误时恢复该系统。包管理器有工作的不错, XStream 带了一套有用的软件。 + +> 我确实在播放多媒体文件时遇见一些问题,特别是使声卡工作。我不确定这是不是又一个硬件兼容问题,或者是该操作系统自带的多媒体软件的问题。另一方面,像 Web 浏览器,电子邮件,生产工具套件以及配置工具这样的工作的很好。 + +> 我最欣赏 XStream 的地方是这个操作系统是 OpenSolaris 家族的一个使用保持最新的分支。OpenSolaris 的其他衍生系统有落后的倾向,但是至少在桌面软件上,XStream 搭载最新版本的火狐和 LibreOffice。 + +> 对我个人来说,XStream 缺少一些组件,比如打印机管理器,多媒体支持和我的特定硬件的驱动。这个操作系统的其他方面也是相当吸引人的。我喜欢开发者搭配了 LXDE,也喜欢它的默认软件集,以及我最喜欢文件系统快照和启动环境开箱即用的方式。大多数的 Linux 发行版,openSUSE 除外,并没有利用好引导环境(boot environments)的用途。我希望它是一个被更多项目采用的技术。 + +[更多见 DistroWatch](http://distrowatch.com/weekly.php?issue=20160215#xstreamos) + +### 街头霸王 V 和 SteamOS + +街头霸王是最出名的游戏之一,并且 [Capcom 已经宣布](http://steamcommunity.com/games/310950/announcements/detail/857177755595160250) 街头霸王 V 将会在这个春天进入 Linux 和 StreamOS。这对于 Linux 游戏者是非常好的消息。 + +Joe Parlock 为 Destructoid 报道: + +> 你是不足 1% 的那些在 Linux 系统上玩游戏的 Stream 用户吗?你是更少数的那些在 Linux 平台上玩游戏,同时也很喜欢街头霸王 V 的人之一吗?是的话,我有一些好消息要告诉你。 + +> Capcom 已经宣布,这个春天街头霸王 V 通过 Stream 进入 StreamOS 以及其他 Linux 发行版。它无需任何额外的花费,所以那些已经在自己的个人电脑上安装了该游戏的人可以很容易在 Linux 上安装它并玩了。 + +[更多 Destructoid](https://www.destructoid.com/street-fighter-v-is-coming-to-linux-and-steamos-this-spring-341531.phtml) + +你是否错过了摘要?检查 [开源之眼的主页](http://www.infoworld.com/blog/eye-on-open/) 来获得关于 Linux 和开源的最新的新闻。 + +------------------------------------------------------------------------------ + +via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html + +作者:[Jim Lynch][a] +译者:[mudongliang](https://github.com/mudongliang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/Jim-Lynch/ + diff --git a/translated/tech/20160218 What do Linux developers think of Git and GitHub.md b/translated/tech/20160218 What do Linux developers think of Git and GitHub.md deleted 
file mode 100644 index a0eb63506e..0000000000 --- a/translated/tech/20160218 What do Linux developers think of Git and GitHub.md +++ /dev/null @@ -1,93 +0,0 @@ -Linux 开发者如何看待 Git 和 Github? -===================================================== - -**同样在今日的开源摘要: DistroWatch 评估 XStream 桌面 153 版本,街头霸王 V 即将在这个春天进入 Linux 和 SteamOS** - -## Linux 开发者如何看待 Git 和 Github? - -Git 和 Github 在 Linux 开发者中有很高的知名度。但是开发者如何看待它们呢?另外,Github 是不是真的和 Git 是一个意思?一个 Linux reddit 用户最近问到了这个问题,并且得到了很有意思的答案。 - -Dontwakemeup46 提问: - -> 我正在学习 Git 和 Github。我感兴趣的是社区如何看待两者?据我所知,Git 和 Github 应用十分广泛。但是 Git 或 Github 有没有严重的,社区喜欢去修改的问题呢? - -[更多见 Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580413015211&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) - -与他志同道合的 Linux reddit 用户回答了他们对于 Git 和 Github的想法: - ->Derenir: “Github 并不隶属于 Git。 - ->Git 是由 Linus Torvalds 开发的。 - ->Github 几乎不支持 Linux。 - ->Github 是一家唯利是图的,企图借助 Git 赚钱的公司。 - 
->[https://desktop.github.com/](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580415025712&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&type=U&out=https%3A%2F%2Fdesktop.github.com%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=https%3A%2F%2Fdesktop.github.com%2F) 并没有支持 Linux。” - ->**Bilog78**: “一个简单的更新: Linus Torvalds 已经不再维护 Git了。维护者是 Junio C Hamano,以及 Linus 之后的主要贡献者是Jeff King 和 Shawn O. Pearce。” - ->**Fearthefuture**: “我喜欢 Git,但是不明白人们为什么还要使用 Github。从我的角度,Github 比 Bitbucket 好的一点是用户统计和更大的用户基础。Bitbucket 有无限的免费私有库,更好的 UI,以及更好地继承其他服务,比如说 Jenkins。” - ->**Thunger**: “Gitlab.com 也很不错,特别是你可以在自己的服务器上架设自己的实例。” - ->**Takluyver**: “很多人熟悉 Github 的 UI 以及相关联的服务,像 Travis 。并且很多人都有 Github 账号,所以它是一个很好地存储项目的地方。人们也使用他们的 Github 简况作为一种求职用的作品选辑,所以他们很积极地将更多的项目放在这里。Github 是一个事实上的,存放开源项目的标准。” - ->**Tdammers**: “Git 严重问题在于 UI,它有些违反直觉,到很多用户只使用一些容易记住的咒语程度。” - -Github:最严重的问题在于它是私人拥有的解决方案;你买了方便,但是代价是你的代码在别人的服务器上面,已经不在你的掌控范围之内了。另一个对于 Github 的普遍批判是它的工作流和 Git 本身的精神不符,特别是 pull requests 工作的方式。最后, Github 垄断代码的托管环境,同时对于多样性是很不好的,这反过来对于旺盛的免费软件社区很重要。” - ->**Dies**: “更重要的是,如果一旦是这样,做过的都做过了,并且我猜我们会被 Github 所困,因为它们控制如此多的项目。” - ->**Tdammers**: “代码托管在别人的服务器上,别人指的是 Github。这对于开源项目来说,并不是什么太大的问题,但是仍然,你无法控制它。如果你在 Github 上有私有项目,唯一的保险在于它将保持私有是 Github 的承诺。如果你决定删除东西,你不能确定东西是否被删除了,或者只是隐藏了。 - -Github 并不自己控制这些项目(你总是可以拿走你的代码,然后托管到别的地方,声明新位置是“官方”的),它只是有比开发者本身有更深的使用权。” - ->**Drelos**: “我已经读了大量的关于 Github 
的赞美与批评。(这里有一个[例子](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580428524613&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fwww.wired.com%2F2015%2F06%2Fproblem-putting-worlds-code-github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=here%27s%20an%20example)),但是我的新手问题是为什么不向一个免费开源的版本努力呢?” - ->**Twizmwazin**: “Gitlab 的源码就存在这里” - -[更多见 Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580429720714&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) - -## DistroWatch 评估 XStream 桌面 153 版本 - -XStreamOS 是一个由 Sonicle 创建的 Solaris 的一个版本。XStream 桌面将 Solaris 的强大带给了桌面用户,同时新手用户很可能有兴趣体验一下。DistroWatch 对于 XStream 桌面 153 版本做了一个很全面的评估,并且发现它运行相当好。 - -Jesse Smith 为 DistroWatch 报道: - -> 我认为 XStream 桌面做好了很多事情。无可否认地,我的实验陷入了头晕目眩的状态当操作系统无法在我的硬件上启动。同时,当运行在 VirtualBox 中我无法使得桌面使用我显示器的完整分辨率。 - -> 我确实在播放多媒体文件时遇见一些问题,特别是使声卡工作。我不确定是不是另外一个硬件兼容问题,或者一个关于操作系统自带的多媒体软件的问题。另一方面,像 Web 浏览器,电子邮件,生产工具套件以及配置工具这样的工作都工作的很好。 - -> 我最欣赏 XStream 的地方是这个操作系统是 OpenSolaris 家族的一个使用保持最新的分支。OpenSolaris 的其他衍生系统有落后的倾向,至少在桌面软件上,但是 XStream 仍然搭载最新版本的火狐和 LibreOffice。 - ->对我个人来说,XStream 缺少一些组件,比如打印机管理器,多媒体支持和我特定硬件的驱动。这个操作系统的其他方面也是相当吸引人的。我喜欢开发者搭建 LXDE 的方式,软件的默认组合,以及我最喜欢文件系统快照和启动环境开箱即用的方式。大多数的 
Linux 发行版,openSUSE 除外,并没有使得启动环境的有用性流行起来。我希望它是一个被更多项目采用的技术。 - -[更多见 DistroWatch](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580434172315&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fdistrowatch.com%2Fweekly.php%3Fissue%3D20160215%23xstreamos&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20DistroWatch) - -## 街头霸王 V 和 SteamOS - -街头霸王是最出名的游戏之一,并且 [Capcom 已经宣布](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) 街头霸王 V 将会在这个春天进入 Linux 和 StreamOS。这对于 Linux 游戏者是非常好的消息。 - -Joe Parlock 为 Destructoid 报道: - ->你是少于 1% 的,在 Linux 系统上玩游戏的 Stream 用户吗?你是更少百分比的,在 Linux 平台上玩游戏,同时很喜欢街头霸王 V 的人之一吗?是的话,我有一些好消息要告诉你。 - ->Capcom 已经宣布,这个春天街头霸王 V 通过 Stream 进入 StreamOS 以及其他 Linux 发行版。它无需任何额外的成本,所以那些已经个人电脑建立的游戏的人可以很容易在 Linux 上安装它,并且运行良好。 - -[更多 
Destructoid](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) - -你是否错过了摘要?检查 [Eye On Open home page](http://www.infoworld.com/blog/eye-on-open/) 来获得关于 Linux 和开源的最新的新闻。 - ------------------------------------------------------------------------------- - -via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html - -作者:[Jim Lynch][a] -译者:[mudongliang](https://github.com/mudongliang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.infoworld.com/author/Jim-Lynch/ - From 039c403c225967381e061c1f18ca212f62b0232d Mon Sep 17 00:00:00 2001 From: ezio Date: Wed, 13 Jul 2016 21:17:55 +0800 Subject: [PATCH 134/471] recyle --- ...er Tools With Wercker’s Open Source CLI.md | 80 ------- .../20160314 15 podcasts for FOSS fans.md | 79 ------- sources/tech/20160314 Healthy Open Source.md | 212 ------------------ 3 files changed, 371 deletions(-) delete mode 100644 sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md delete mode 100644 sources/tech/20160314 15 podcasts for FOSS fans.md delete mode 100644 sources/tech/20160314 Healthy Open Source.md diff --git a/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md b/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s 
Open Source CLI.md deleted file mode 100644 index 81f7467719..0000000000 --- a/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md +++ /dev/null @@ -1,80 +0,0 @@ -Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI -=========================================== - -For enterprises, containers offer more efficient build environments, cloud-native applications and migration from legacy systems to the cloud. But enterprise adoption of the technology -- Docker specifically -- has been hampered by, among other issues, [a lack of mature developer tools][1]. - -Amsterdam-based [Wercker][2] is one of many early-stage companies looking to meet the need for better tools with its cloud platform for automating microservices and application development, based on Docker. - -The company [announced a $4.5 million Series A][3] funding round this month, which will help it ramp up development on an upcoming on-premise enterprise product. Key to its success, however, will be building a community around its newly [open-sourced CLI][4] tool. Wercker must quickly integrate with myriad other container technologies -- open source Kubernetes and Mesos among them -- to remain competitive in the evolving container space. - -“By open sourcing our CLI technology, we hope to get to dev-prod parity faster and turn “build once, ship anywhere” into an automated reality,” said Wercker CEO and founder Micha Hernández van Leuffen. - -I reached out to van Leuffen to learn more about the company, its CLI tool, and how it’s planning to help grow the pool of enterprise customers actually using containers in production. Below is an edited version of the interview. - -### Linux.com: Can you briefly tell us about Wercker? - -van Leuffen: Wercker is a container-centric platform for automating the development of microservices and applications. 
- -With Wercker’s Docker-based infrastructure, teams can increase developer velocity with custom automation pipelines using steps that produce containers as artifacts. Once the build passes, users can continue to deploy the steps as specified in the wercker.yml. Continuously repeating these steps allows teams to work in small increments, making it easy to debug and ship faster. - -![](https://www.linux.com/images/stories/66866/wercker-cli.png) - -### Linux.com: How does it help developers? - -van Leuffen: The Wercker CLI helps developers attain greater dev-prod parity. They’re able to release faster and more often because they are developing, building and testing in an environment very similar to that in production. We’ve open sourced the exact same program that we execute in the Wercker cloud platform to run your pipelines. - -### Linux.com: Can you point out some of the features and advantages of your tool as compared to competitors? - -van Leuffen: Unlike some of our competitors, we’re not just offering Docker support. With Wercker, the Docker container is the unit of work. All jobs run inside containers, and each build artifact can be a Docker container. - -Wercker’s Docker container pipeline is completely customizable. A ‘pipeline’ refers to any automated workflow, for instance, a build or deploy pipeline. In those workflows, you want to execute tasks: install dependencies, test your code, push your container, or create a slack notification when something fails, for example. We call these tasks ‘steps,’ and there is no limit to the types of steps created. In fact, we have a marketplace of steps built by the Wercker community. So if you’ve built a step that fits my workflow, I can use that in my pipeline. - -Our Docker container pipelines adapt to any developer workflow. Users can use any Docker container out there — not just those made by or for Wercker. Whether the container is on Docker Hub or a private registry such as CoreOS’s Quay, it works with Wercker. 
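Such a pipeline is declared in a project's `wercker.yml`. As a rough sketch of what that can look like for a Python project — the box image, step names, and commands below are illustrative assumptions, not details taken from the interview:

```yaml
# Illustrative wercker.yml sketch; step names and commands are assumed.
box: python:2.7              # the Docker container the pipeline runs in
build:
  steps:
    - script:
        name: install dependencies
        code: pip install -r requirements.txt
    - script:
        name: run tests
        code: python -m unittest discover
```

Each `script` entry here is one of the "steps" described above; community-published steps from the marketplace can be referenced in the same list.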
- -Our competitors range from the classic CI/CD tools to larger-scale DevOps solutions like CloudBees. - -### Linux.com: How does it integrate with other cloud technologies? - -van Leuffen: Wercker is vendor-agnostic and can automate development with any cloud platform or service. We work closely with ecosystem partners like Mesosphere, Kubernetes and CoreOS to make integrations as seamless as possible. We also recently partnered with Atlassian to integrate the Wercker platform with Bitbucket. More than 3 million Bitbucket users can install the Wercker Pipeline Viewer and view build status directly from their dashboard. - -### Linux.com: Why did you open source the Wercker CLI tool? - -van Leuffen: Open sourcing the Wercker CLI will help us stay ahead of the curve and strengthen the developer community. The market landscape is changing fast; developers are expected to release more frequently, using infrastructure of increasing complexity. While Docker has solved a lot of infrastructure problems, developer teams are still looking for the perfect tools to test, build and deploy rapidly. - -The Wercker community is already experimenting with these new tools: Kubernetes, Mesosphere, CoreOS. It makes sense to tap that community to create integrations that work with our technology – and make that process as frictionless as possible. By open sourcing our CLI technology, we hope to get to dev-prod parity faster and turn “build once, ship anywhere” into an automated reality. - -### Linux.com: You recently raised over $4.5 million, so how is this fund being used for product development? - -van Leuffen: We’re focused on building out our commercial team and bringing an enterprise product to market. We’ve had a lot of inbound interest from the enterprise looking for VPC and on-premise solutions. While the enterprise is still largely in the discovery stage, we can see the market shifting toward containers. 
Enterprise software devs need to release often, just like the small, agile teams with whom they are increasingly competing. We need to prove containers can scale, and that Wercker has the organizational permissions and the automation suite to make that process as efficient as possible. - -In addition to continuing to invest in our product, we’ll be focusing our resources on market education and developer evangelism. Developer teams are still looking for the right mix of tools to test, build and deploy rapidly (including Kubernetes, Mesosphere, CoreOS, etc.). As an ecosystem, we need to do more to educate and provide the tutorials and resources to help developers succeed in this changing landscape. - -### Linux.com: What products do you offer and who is your target audience? - -van Leuffen: We currently offer one service level of our product Wercker; however, we’re developing an enterprise offering. Current organizations using Wercker range from startups, such as Open Listings, to larger companies and big agencies, like Pivotal Labs. - - -### Linux.com: What does this recently open-sourced CLI do? - -van Leuffen: Using the Wercker Command Line Interface (CLI), developers can spin up Docker containers on their desktop, automate their build and deploy processes and then deploy them to various cloud providers, like AWS, and scheduler and orchestration platforms, such as Mesosphere and Kubernetes. - -The Wercker Command Line Interface is available as an open source project on GitHub and runs on both OSX and Linux machines. 
- - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/enterprise/systems-management/887177-achieving-enterprise-ready-container-tools-with-werckers-open-source-cli - -作者:[Swapnil Bhartiya][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/community/forums/person/61003 -[1]:http://thenewstack.io/adopting-containers-enterprise/ -[2]:http://wercker.com/ -[3]:http://venturebeat.com/2016/01/28/wercker-raises-4-5-million-open-sources-its-command-line-tool/ -[4]:https://github.com/wercker/wercker - - diff --git a/sources/tech/20160314 15 podcasts for FOSS fans.md b/sources/tech/20160314 15 podcasts for FOSS fans.md deleted file mode 100644 index 222861427e..0000000000 --- a/sources/tech/20160314 15 podcasts for FOSS fans.md +++ /dev/null @@ -1,79 +0,0 @@ -15 podcasts for FOSS fans -============================= - -keyword : FOSS , podcast - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/oss_podcasts.png?itok=3KwxsunX) - -I listen to a lot of podcasts. A lot. On my phone's podcatcher, I am subscribed to around 60 podcasts... and I think that only eight of those have podfaded (died). Unsurprisingly, a fairly sizeable proportion of those remaining alive-and-well subscriptions are shows with a specific interest or relevance to open source software. As I seek to resurrect my own comatose podcast from the nebulous realm of podfadery, I thought it would be great for us as a community to share what we're listening to. - ->Quick digression: I understand that there are a lot of "pod"-prefixed words in that first paragraph. Furthermore, I also know that the term itself is related to a proprietary device that, by most accounts, isn't even used for listening to these web-based audio broadcasts. 
However, the term 'webcast' died in the nineties and 'oggcast' never gathered a substantial foothold among the listening public. As such, in order to ensure that the most people actually know what I'm referring to, I'm essentially forced to use the web-anachronistic, but publicly recognized term, podcast. - -I should also mention that a number of these shows involve grown-ups using grown-up language (i.e. swearing). I've tried to indicate which shows these are by putting a red E next to their names, but please do your own due diligence if you're concerned about listening to these shows at work or with children around. - -The following lists are podcasts that I keep in heavy rotation (each sublist is listed in alphabetical order). In the first list are the ones I think of as my "general coverage" shows. They tend to either discuss general topics related to free and open source software, or they give a survey of multiple open source projects from one episode to the next. - -- [Bad Voltage][1] E — Regular contributor and community moderator here on Opensource.com, Jono Bacon, shares hosting dutes on this podcast with Jeremy Garcia, Stuart Langridge, and Bryan Lunduke, four friends with a variety of digressing and intersecting opinions. That's the most interesting part of the show for me. Of course, they also do product reviews and cover timely news relevant to free and open source software, but it's the banter that I stick around for. - -- [FLOSS Weekly][2] — The Twit network of podcasts is a long-time standby in technology broadcasts. Hosted by Randal Schwartz, FLOSS Weekly focuses on covering one open source project each week, typically by interviewing someone relevant in the development of that project. It's a really good show for getting exposed to new open source tools... or learning more about the programs you're already familiar with. 
- -- [Free as in Freedom][3] — Hosted by Bradley Kuhn and Karen Sandler, this show has a specific focus on legal and policy matters as it relates to both specific free and open source projects, as well as open culture in general. The show seems to have gone on a bit of a hiatus since its last episode in November of 2015, but I for one am immensely hopeful that Free as in Freedom emerges victoriously from its battle with being podfaded and returns to its regular bi-weekly schedule. - -- [GNU World Order][4] — I think that this show can be best described as a free and open source variety show. Solo host, Klaatu, spends the majority of each show going in-depth at nearly tutorial level with a whole range of specific software tools and workflows. It's a really friendly way to get an open source neophyte up to speed with everything from understanding SSH to playing with digital painting and video. And there's a video component to the show, too, which certainly helps make some of these topics easier to follow. - -- [Hacker Public Radio][5] — This is just a well-executed version of a fantastic concept. Hacker Public Radio (HPR) is a community-run daily (well, working-week daily) podcast with a focus on "anything of interest to hackers." Sure there are wide swings in audio quality from show to show, but it's an open platform where anyone can share what they know (or what they think) in that topic space. Show topics include 3D printing, hardware hacking, conference interviews, and more. There are even long-running tutorial series and an audio book club. The monthly recap episodes are particularly useful if you're having trouble picking a place to start. And best of all, you can record your own episode and add it to the schedule. In fact, they actively encourage it. - -My next list of open source podcasts is a bit more specific to particular topics or software packages in the free and open source ecosystem.
- -- [Blender Podcast][6] — Although this podcast is very specific to one particular application—Blender, in case you couldn't guess—many of the topics are relevant to issues faced by users and developers of other open source software programs. Hosts Thomas Dinges and Campbell Barton—both on the core development team for Blender—discuss the latest happenings in the Blender community, sometimes with a guest. The release schedule is a bit sporadic, but one of the things I really like about this particular show is the fact that they talk about both user issues and developer issues... and the various intersections of the two. It's a great way for each part of the community to gain insight from the other. - -- [Sunday Morning Linux Review][7] — As its name indicates, SMLR offers a weekly review of topics relevant to Linux. Since around the end of last year, the show has seen a bit of a restructuring. However, that has not detracted from its quality. Tony Bemus, Mary Tomich, and Tom Lawrence deliver a lot of good information, and you can catch them recording their shows live through their website (if you happen to have free time on your Sundays). - -- [LinuxLUGcast][8] — The LinuxLUGcast is a community podcast that's really a recording of an online Linux Users Group (LUG) that meets on the first and third Friday of each month. The group meets (and records) via Mumble and discussions range from home builds with single-board computers like the Raspberry Pi to getting help with trying out a new distro. The LUG is open to everyone, but there is a rotating cast of regulars who've made themselves (and their IRC handles) recognizable fixtures on the show. (Full disclosure: I'm a regular on this one) - -- [The Open EdTech Podcast][9] — Thaj Sara's Open EdTech Podcast is a fairly new show that so far only has three episodes.
However, since there's a really sizeable community of open source users in the field of education (both in teaching and in IT), this show serves an important and underserved segment of our community. I've spoken with Thaj via email and he assures me that new episodes are in the pipe. He just needs to set aside the time to edit them. - -- [The Linux Action Show][10] — It would be remiss of me to make a list of open source podcasts and not mention one of the stalwart fixtures in the space: The Linux Action Show. Chris Fisher and Noah Chelliah discuss current news as it pertains to Linux and open source topics while at the same time giving feature attention to specific projects or their own experiences using various open source tools. - -This next section is what I'm going to term my "honorable mention" section. These shows are either new or have a more tangential focus on open source software and culture. In any case, I still think readers of Opensource.com would enjoy listening to these shows. - -- [Blender Institute Podcast][11] — The Blender Institute—the more commercial creative production spin-off from the Blender Foundation—started hosting their own weekly podcast a few months ago. In the show, artists (and now a developer!) working at the Institute discuss the open content projects they're working on, answer questions about using Blender, and give great insight into how things go (or occasionally don't go) in their day-to-day work. - -- [Geek News Radio][12] E — There was a tangible sense of loss about a year ago when the hosts of Linux Outlaws hung up their mics. Well good news! A new show has sprung from its ashes. In episodes of Geek News Radio, Fab Scherschel and Dave Nicholas have a wider focus than Linux Outlaws did. Rather than being an actual news podcast, it's more akin to an informal discussion among friends about video games, movies, technology, and open source (of course).
- -- [Geekrant][13] — Formerly known as the Everyday Linux Podcast, this show was rebranded at the start of the year to reflect the kind of content that the hosts Mark Cockrell, Seth Anderson, and Chris Neves were already discussing. They do discuss open source software and culture, but they also give their own spin and opinions on topics of interest in general geek culture. Topics have a range that includes everything from popular media to network security. (P.S. Opensource.com content manager Jen Wike Huger was a guest on Episode 164.) - -- [Open Source Creative][14] E — In case you haven't read my little bio blurb, I also have my own podcast. In this show, I talk about news and topics that are [hopefully] of interest to artists and creatives who use free and open source tools. I record it during my work commute so episode length varies with traffic, and I haven't quite figured out a good way to do interviews safely, but if you listen while you're on your way to work, it'll be like we're carpooling. The show has been on a bit of hiatus for almost a year, but I've committed to making sure it comes back... and soon. - -- [Still Untitled][15] E — As you may have noticed from most of the selections on this list, I tend to lean toward the indie side of the spectrum, preferring to listen to shows by people with less of a "name." That said, this show really hits a good place for me. Hosts Adam Savage, Norman Chan, and Will Smith talk about all manner of interesting and geeky things. From Adam's adventures with Mythbusters to maker builds and book reviews, there's rarely ever a show that hasn't been fun for me to listen to. - -So there you go! I'm always looking for more interesting shows to listen to on my commute (as I'm sure many others are). What suggestions or recommendations do you have?
- - - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/3/open-source-podcasts - -作者:[Jason van Gumster][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jason-van-gumster -[1]: http://badvoltage.org/ -[2]: https://twit.tv/shows/floss-weekly -[3]: http://faif.us/ -[4]: http://gnuworldorder.info/ -[5]: http://hackerpublicradio.org/ -[6]: https://blender-podcast.org/ -[7]: http://smlr.us/ -[8]: http://linuxlugcast.com/ -[9]: http://openedtechpodcast.com/ -[10]: http://www.jupiterbroadcasting.com/tag/linux-action-show/ -[11]: http://podcast.blender.institute/ -[12]: http://sixgun.org/geeknewsradio -[13]: http://elementopie.com/geekrant-episodes -[14]: http://monsterjavaguns.com/podcast -[15]: http://www.tested.com/still-untitled-the-adam-savage-project/ diff --git a/sources/tech/20160314 Healthy Open Source.md b/sources/tech/20160314 Healthy Open Source.md deleted file mode 100644 index b36b516758..0000000000 --- a/sources/tech/20160314 Healthy Open Source.md +++ /dev/null @@ -1,212 +0,0 @@ -Healthy Open Source -============================ - -keyword: Node.js , opensource , project management , software - -*A walkthrough of the Node.js Foundation’s base contribution policy*. - -A lot has changed since io.js and Node.js merged under the Node.js Foundation. The most impressive change, and probably the change that is most relevant to the rest of the community and to open source in general, is the growth in contributors and committers to the project. - -A few years ago, Node.js had just a few committers (contributors with write access to the repository in order to merge code and triage bugs). 
The maintenance overhead for the few committers on Node.js Core was overwhelming and the project began to see a decline in both committers and outside contribution. This resulted in a corresponding decline in releases. - -Today, the Node.js project is divided into many components with a full org size of well over 400 members. Node.js Core now has over 50 committers and over 100 contributors per month. - -Through this growth we’ve found many tools that help scale the human infrastructure around an Open Source project. We also identified a few core values we believe are fundamental to modern Open Source: transparency, participation, and efficacy. As we continue to scale the way we do Open Source we try to find a balance of these values and adapt the practices we find help to fit the needs of each component of the Node.js project. - -Now that Node.js is in a good place, the foundation is looking to promote this kind of sustainability in the ecosystem. Part of this is a new umbrella for additional projects to enter the foundation, of which [Express was recently admitted][1], and the creation of this new contribution policy. - -This contribution policy is not universal. It’s meant as a starting point. Additions and alterations to this policy are encouraged so that the process used by each project fits its needs and can continue to change shape as the project grows and faces new challenges. - -The [current version][2] is hosted in the Node.js Foundation. We expect to iterate on this over time and encourage people to [log issues][3] with questions and feedback regarding the policy for future iterations. - -This document describes a very simple process suitable for most projects in the Node.js ecosystem. Projects are encouraged to adopt this whether they are hosted in the Node.js Foundation or not. - -The Node.js project is organized into over a hundred repositories and a few dozen Working Groups. 
There are large variations in contribution policy between many of these components because each one has different constraints. This document is a minimalist version of the processes and philosophy we’ve found works best everywhere. - -We believe that contributors should own their projects, and that includes contribution policies like this. While new foundation projects start with this policy we expect many of them to alter it or possibly diverge from it entirely to suit their own specific needs. - -The goal of this document is to create a contribution process that: - -* Encourages new contributions. - -* Encourages contributors to remain involved. - -* Avoids unnecessary processes and bureaucracy whenever possible. - -* Creates a transparent decision making process which makes it clear how contributors can be involved in decision making. - -Most contribution processes are created by maintainers who feel overwhelmed by outside contributions. These documents have traditionally been about processes that make life easier for a small group of maintainers, often at the cost of attracting new contributors. - -We’ve gone the opposite direction. The purpose of this policy is to gain contributors, to retain them as much as possible, and to use a much larger and growing contributor base to manage the corresponding influx of contributions. - -As projects mature, there’s a tendency to become top heavy and overly hierarchical as a means of quality control and this is enforced through process. We use process to add transparency that encourages participation which grows the code review pool which leads to better quality control. - -This document is based on much prior art in the Node.js community, io.js, and the Node.js project. - -This document is based on what we’ve learned growing the Node.js project.
Not just the core project, which has been a massive undertaking, but also much smaller sub-projects like the website which have very different needs and, as a result, very different processes. - -When we began these reforms in the Node.js project, we were taking a lot of inspiration from the broader Node.js ecosystem. In particular, Rod Vagg’s [OPEN Open Source policy][4]. Rod’s work in levelup and nan is the basis for what we now call “liberal contribution policies.” - -### Vocabulary - -* A **Contributor** is any individual creating or commenting on an issue or pull request. - -* A **Committer** is a subset of contributors who have been given write access to the repository. - -* A **TC (Technical Committee)** is a group of committers representing the required technical expertise to resolve rare disputes. - -Every person who shows up to comment on an issue or submit code is a member of a project’s community. Just being able to see them means that they have crossed the line from being a user to being a contributor. - -Typically open source projects have had a single distinction for those that have write access to the repository and those empowered with decision making. We’ve found this to be inadequate and have separated this into two distinctions which we’ll dive into more a bit later. - -![](https://www.linux.com/images/stories/66866/healthy_1.png) - -Looking at the community in and around a project as a bunch of concentric circles helps to visualize this. - -In the outermost circle are users, a subset of those users are contributors, a subset of contributors become committers who can merge code and triage issues. Finally, a smaller group of trusted experts who only get pulled in to the hard problems and can act as a tie-breaker in disputes. - -This is what a healthy project should look like. As the demands on the project from increased users rise, so do the contributors, and as contributors increase more are converted into committers.
As the committer base grows, more of them rise to the level of expertise where they should be involved in higher level decision making. - -![](https://www.linux.com/images/stories/66866/healthy-2.png) - -If these groups don’t grow in proportion to each other they can’t carry the load imposed on them by outward growth. A project’s ability to convert people from each of these groups is the only way it can stay healthy if its user base is growing. - -This is what unhealthy projects look like in their earliest stages of dysfunction, but imagine that the committers bubble is so small you can’t actually read the word “committers” in it, and imagine this is a logarithmic scale. - -A massive user base is pushing a lot of contributions onto a very small number of maintainers. - -This is when maintainers build processes and barriers to new contributions as a means to manage the workload. Often the problems the project is facing will be attributed to the tools the project is using, especially GitHub. - -In Node.js we had all the same problems, resolved them without a change in tooling, and today manage a growing workload much larger than most projects, and GitHub has not been a bottleneck. - -We know what happens to unhealthy projects over a long enough time period, more maintainers leave, contributions eventually fall, and **if we’re lucky** users leave it. When we aren’t so lucky adoption continues and years later we’re plagued with security and stability issues in widely adopted software that can’t be effectively maintained. - -The number of users a project has is a poor indicator of the health of the project, often it is the most used software that suffers the biggest contribution crisis. - -### Logging - -Log an issue for any question or problem you might have. When in doubt, log an issue; any additional policies about what to include will be provided in the responses. The only exception is security disclosures which should be sent privately.
- -The first sentence is surprisingly controversial. A lot of maintainers complain that there isn’t a more heavy handed way of forcing people to read a document before they log an issue on GitHub. We have documents all over projects in the Node.js Foundation about writing good bug reports but, first and foremost, we encourage people to log something and try to avoid putting barriers in the way of that. - -Sure, we get bad bugs, but we have a ton of contributors who can immediately work with people who log them to educate them on better practices and treat it as an opportunity to educate. This is why we have documentation on writing good bugs, in order to educate contributors, not as a barrier to entry. - -Creating barriers to entry just reduces the number of people there’s a chance to identify, educate and potentially grow into greater contributors. - -Of course, never log a public issue about a security disclosure, ever. This is a bit vague about the best private venue because we can’t determine that for every project that adopts this policy, but we’re working on a responsible disclosure mechanism for the broader community (stay tuned). - -Committers may direct you to another repository, ask for additional clarifications, and add appropriate metadata before the issue is addressed. - -For smaller projects this isn’t a big deal but in Node.js we’ve had to continually break off work into other, more specific, repositories just to keep the volume on a single repo manageable. But all anyone has to do when someone puts something in the wrong place is direct them to the right one. - -Another benefit of growing the committer base is that there’s more people to deal with little things, like redirecting issues to other repos, or adding metadata to issues and PRs. This allows developers who are more specialized to focus on just a narrow subset of work rather than triaging issues. 
- -Please be courteous, respectful, and every participant is expected to follow the project’s Code of Conduct. - -One thing that can burn out a project is when people show up with a lot of hostility and entitlement. Most of the time this sentiment comes from a feeling that their input isn’t valued. No matter what, a few people will show up who are used to more hostile environments and it’s good to have these kinds of expectations explicit and written down. - -And each project should have a Code of Conduct, which is an extension of these expectations that makes people feel safe and respected. - -### Contributions - -Any change to resources in this repository must be through pull requests. This applies to all changes to documentation, code, binary files, etc. Even long term committers and TC members must use pull requests. - -No pull request can be merged without being reviewed. - -Every change needs to be a pull request. - -A Pull Request captures the entire discussion and review of a change. Allowing some subset of committers to slip things in without a Pull Request gives the impression to potential contributors that they can’t be involved in the project because they don’t have access to a behind the scenes process or culture. - -This isn’t just a good practice, it’s a necessity in order to be transparent enough to attract new contributors. - -For non-trivial contributions, pull requests should sit for at least 36 hours to ensure that contributors in other timezones have time to review. Consideration should also be given to weekends and other holiday periods to ensure active committers all have reasonable time to become involved in the discussion and review process if they wish. - -Part of being open and inviting to more contributors is making the process accessible to people in timezones all over the world.
We don’t want to add an artificial delay in small doc changes but any change that needs a bit of consideration needs to give people in different parts of the world time to consider it. - -In Node.js we actually have an even longer timeline than this, 48 hours on weekdays and 72 on weekends. That might be too much for smaller projects so it is shorter in this base policy but as a project grows it may want to increase this as well. - -The default for each contribution is that it is accepted once no committer has an objection. During review committers may also request that a specific contributor who is most versed in a particular area gives an “LGTM” before the PR can be merged. There is no additional “sign off” process for contributions to land. Once all issues brought by committers are addressed it can be landed by any committer. - -A key part of the liberal contribution policies we’ve been building is an inversion of the typical code review process. Rather than the default mode for a change to be rejected until enough people sign off, we make the default for every change to land. This puts the onus on reviewers to note exactly what adjustments need to be made in order for it to land. - -For new contributors it’s a big leap just to get that initial code up and sent. Viewing the code review process as a series of small adjustments and education, rather than a quality control hierarchy, does a lot to encourage and retain these new contributors. - -It’s important not to build processes that encourage a project to be too top heavy, with a few people needing to sign off on every change. Instead, we just mention any committer that we think should weigh in on a specific review. In Node.js we have people who are the experts on OpenSSL, any change to crypto is going to need an LGTM from them. This kind of expertise forms naturally as a project grows and this is a good way to work with it without burning people out.
- -In the case of an objection being raised in a pull request by another committer, all involved committers should seek to arrive at a consensus by way of addressing concerns being expressed by discussion, compromise on the proposed change, or withdrawal of the proposed change. - -This is what we call a lazy consensus seeking process. Most review comments and adjustments are uncontroversial and the process should optimize for getting them in without unnecessary process. When there is disagreement, try to reach an easy consensus among the committers. More than 90% of the time this is simple, easy and obvious. - -If a contribution is controversial and committers cannot agree about how to get it to land or if it should land then it should be escalated to the TC. TC members should regularly discuss pending contributions in order to find a resolution. It is expected that only a small minority of issues be brought to the TC for resolution and that discussion and compromise among committers be the default resolution mechanism. - -For the minority of changes that are controversial and don’t reach an easy consensus we escalate that to the TC. These are rare but when they do happen it’s good to reach a resolution quickly rather than letting things fester. Contentious issues tend to get a lot of attention, especially by those more casually involved in the project or even entirely outside of it, but they account for a relatively small amount of what the project does every day. - -### Becoming a Committer - -All contributors who land a non-trivial contribution should be on-boarded in a timely manner, and added as a committer, and be given write access to the repository. - -This is where we diverge sharply from open source tradition. - -Projects have historically guarded commit rights to their version control system. This made a lot of sense when we were using version control systems like subversion. 
A single contributor can inadvertently mess up a project pretty badly in older version control systems, but not so much in git. In git, there isn’t a lot that can’t be fixed and so most of the quality controls we put on guarding access are no longer necessary. - -Not every committer has the rights to release or make high level decisions, so we can be much more liberal about giving out commit rights. That increases the committer base for code review and bug triage. With a wider range of expertise in the committer pool, smaller changes are reviewed and adjusted without the intervention of the more technical contributors, who can spend their time on reviews only they can do. - -This is the key to scaling contribution growth: committer growth. - -Committers are expected to follow this policy and continue to send pull requests, go through proper review, and have other committers merge their pull requests. - -This part is entirely redundant, but on purpose. Just a reminder even once someone is a committer their changes still flow through the same process they followed before. - -### TC Process - -The TC uses a “consensus seeking” process for issues that are escalated to the TC. The group tries to find a resolution that has no open objections among TC members. If a consensus cannot be reached that has no objections then a majority wins vote is called. It is also expected that the majority of decisions made by the TC are via a consensus seeking process and that voting is only used as a last-resort. - -The best solution tends to be the one everyone can agree to so you would think that consensus systems would be the norm. However, **pure consensus** systems incentivize obstructionism which we need to avoid. - -In pure consensus everyone essentially has a veto. So, if I don’t want something to happen I’m in a strong position of power over everyone that wants something to happen. They have to convince me, and I don’t have to convince anyone else of anything.
- -To avoid this we use a system called “consensus seeking” which has a long history outside of open source. It’s quite simple: just attempt to reach a consensus; if a consensus can’t be reached then call for a majority wins vote. - -Just the fact that a vote **is a possibility** means that people can’t be obstructionists, whether someone favors a change or not, they have to convince their peers and if they aren’t willing to put in the work to convince their peers then they probably don’t involve themselves in that decision at all. - -The way these incentives play out is pretty impressive. We started using this process in io.js and adopted it in Node.js when we merged into the foundation. In that entire time we’ve never actually had to call for a vote, just the fact that we could is enough to keep everyone working together to find a solution and move forward. - -Resolution may involve returning the issue to committers with suggestions on how to move forward towards a consensus. It is not expected that a meeting of the TC will resolve all issues on its agenda during that meeting and may prefer to continue the discussion happening among the committers. - -A TC tries to resolve things in a timely manner so that people can make progress but often it’s better to provide some additional guidance that pushes the greater contributorship towards resolution without being heavy handed. - -Avoid creating big decision hierarchies. Instead, invest in a broad, growing and empowered contributorship that can make progress without intervention. We need to view a constant need for intervention by a few people to make any and every tough decision as the biggest obstacle to healthy Open Source. - -Members can be added to the TC at any time. Any committer can nominate another committer to the TC and the TC uses its standard consensus seeking process to evaluate whether or not to add this new member.
Members who do not participate consistently at the level of a majority of the other members are expected to resign. - -The TC just uses the same consensus seeking process for adding new members as it uses for everything else. - -It’s a good idea to encourage committers to nominate people to the TC and not just wait around for TC members to notice the impact some people are having. Listening to the broader committers about who they see as having a big impact keeps the TC’s perspective in line with the rest of the project. - -As a project grows it’s important to add people from a variety of skill sets. If people are doing a lot of docs work, or test work, treat the investment they are making as equally valuable as the hard technical stuff. - -Projects should have the same ladder, user -> contributor -> committers -> TC member, for every skill set they want to build into the project to keep it healthy. - -I often see long time maintainers worry about adding people who don’t understand every part of the project, as if they have to be involved in every decision. The reality is that people do know their limitations and want to defer hard decisions to people they know have more experience. - -Thanks to Greg [Wallace][5] and ashley [williams][6].
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/biz-os/governance/892141-healthy-open-source - -作者:[Mikeal Rogers][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/community/forums/person/66928 - - -[1]: https://medium.com/@nodejs/node-js-foundation-to-add-express-as-an-incubator-project-225fa3008f70#.mc30mvj4m -[2]: https://github.com/nodejs/TSC/blob/master/BasePolicies/CONTRIBUTING.md -[3]: https://github.com/nodejs/TSC/issues -[4]: https://github.com/Level/community/blob/master/CONTRIBUTING.md -[5]: https://medium.com/@gtewallaceLF -[6]: https://medium.com/@ag_dubs From baf49bf420ca24b09ef8a8483e17e0e3ecbebe7e Mon Sep 17 00:00:00 2001 From: ezio Date: Wed, 13 Jul 2016 21:22:12 +0800 Subject: [PATCH 135/471] =?UTF-8?q?20160712=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20160309 Let’s Build A Web Server. Part 1.md | 0 .../tech/20160406 Let’s Build A Web Server. Part 2.md | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename {translated => sources}/tech/20160309 Let’s Build A Web Server. Part 1.md (100%) rename {translated => sources}/tech/20160406 Let’s Build A Web Server. Part 2.md (100%) diff --git a/translated/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md similarity index 100% rename from translated/tech/20160309 Let’s Build A Web Server. Part 1.md rename to sources/tech/20160309 Let’s Build A Web Server. Part 1.md diff --git a/translated/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md similarity index 100% rename from translated/tech/20160406 Let’s Build A Web Server. 
Part 2.md rename to sources/tech/20160406 Let’s Build A Web Server. Part 2.md From 2a32e17bff87eef4905cbc13288f94da59f18b56 Mon Sep 17 00:00:00 2001 From: Mike Date: Wed, 13 Jul 2016 21:38:56 +0800 Subject: [PATCH 136/471] translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md (#4183) * translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md * translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md * translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md * translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md --- ...ntrol your DigitalOcean cloud instances.md | 22 +++++++++---------- 1 file changed, 10 insertions(+), 12 deletions(-) rename {sources => translated}/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md (52%) diff --git a/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md similarity index 52% rename from sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md rename to translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md index 5a0e1ffda0..a9094445b5 100644 --- a/sources/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md +++ b/translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md @@ -1,21 +1,19 @@ -MikeCoder Translating... - -Using Vagrant to control your DigitalOcean cloud instances +使用 Vagrant 控制你的 DigitalOcean 云主机 ========================================================= ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/fedora-vagrant-do-945x400.jpg) -[Vagrant][1] is an application to create and support virtual development environments using virtual machines. 
Fedora has [official support for Vagrant][2] with libvirt on your local system. [DigitalOcean][3] is a cloud provider that provides a one-click deployment of a Fedora Cloud instance to an all-SSD server in under a minute. During the [recent Cloud FAD][4] in Raleigh, the Fedora Cloud team packaged a new plugin for Vagrant which enables Fedora users to keep up cloud instances in DigitalOcean using local Vagrantfiles. +[Vagrant][1] 是一个创建和管理支持虚拟机开发环境的应用。Fedora 已经[官方支持 Vagrant][2]。[DigitalOcean][3]是一个提供一键部署的云计算服务提供商(主要是 Fedora 的服务实例)。在[最近的 Raleigh 举办的云计算 FAD][4]中,Fedora 云计算队伍已经打包了一个 Vagrant 的新的插件。它能够帮助用户通过使用本地的 Vagrantfile 来管理 DigitalOcean 实例。 -### How to use this plugin +### 如何使用 Vagrant DigitalOcean 插件 -First step is to install the package in the command line. +第一步是安装 vagrant DigitalOcean 的插件软件包。 ``` $ sudo dnf install -y vagrant-digitalocean ``` -After installing the plugin, the next task is to create the local Vagrantfile. An example is provided below. +安装 结束之后,下一个任务是创建本地的 Vagrantfile 文件。下面是一个例子。 ``` $ mkdir digitalocean @@ -39,17 +37,17 @@ Vagrant.configure('2') do |config| end ``` -### Notes about Vagrant DigitalOcean plugin +### Vagrant DigitalOcean 插件 -A few points to remember about the SSH key naming scheme: if you already have the key uploaded to DigitalOcean, make sure that the provider.ssh_key_name matches the name of the existing key in their server. The provider.image details are found at the [DigitalOcean documentation][5]. The AUTH token is created on the control panel within the Apps & API section. +一定要记住几个 SSH 命令规范:如果你已经在 DigitalOcean 上传了秘钥,请确保 provider.ssh_key_name 和已经在服务器中的名字吻合。provider.image 具体的文档可以在[DigitalOcean documentation][5]找到。认证 Token 可以在控制管理器的 Apps & API 区域找到。 -You can then get the instance up with the following command. +你可以通过以下命令来实例化一台主机。 ``` $ vagrant up --provider=digital_ocean ``` -This command will fire up the instance in the DigitalOcean server. You can then SSH into the box by using vagrant ssh command. 
Run vagrant destroy to destroy the instance. +这个命令会启动一台 DigitalOcean 的服务器实例。你可以使用 vagrant ssh 命令来 ssh 登陆进入这个实例。执行 vagrant destroy 来废弃这个实例。 @@ -59,7 +57,7 @@ This command will fire up the instance in the DigitalOcean server. You can then via: https://fedoramagazine.org/using-vagrant-digitalocean-cloud/ 作者:[Kushal Das][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/MikeCoder) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 48da0c5ea0743e0813cab3964c9dcbc8de16432e Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 13 Jul 2016 22:44:11 +0800 Subject: [PATCH 137/471] PUB:20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA @MikeCoder --- ...MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) rename {translated/tech => published}/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md (58%) diff --git a/translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md similarity index 58% rename from translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md rename to published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md index 63ea47c541..a7932ee269 100644 --- a/translated/tech/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md +++ b/published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md @@ -1,23 +1,23 @@ -在 Ubuntu Mate 16.04("Xenial Xerus") 上通过 PPA 安装 Mate 1.14 +在 Ubuntu Mate 16.04 上通过 PPA 升级 Mate 1.14 ================================================================= -Mate 桌面环境 1.14 现在可以在 Ubuntu Mate 16.04("Xenial Xerus") 上使用了。根据这个[发布版本][1]的描述, 为了全面测试 Mate 1.14,所以在 PPA 上发布 Mate 桌面环境 1.14大概了用2个月的时间。因此,你不太可能遇到安装的问题。 +Mate 桌面环境 1.14 现在可以在 Ubuntu Mate 16.04 ("Xenial Xerus") 上使用了。根据这个[发布版本][1]的描述,为了全面测试 Mate 1.14,所以 Mate 桌面环境 1.14 已经在 PPA 上发布 2 个月了。因此,你不太可能遇到安装的问题。 
![](https://2.bp.blogspot.com/-v38tLvDAxHg/V1k7beVd5SI/AAAAAAAAX7A/1X72bmQ3ia42ww6kJ_61R-CZ6yrYEBSpgCLcB/s400/mate114-ubuntu1604.png) -**现在 PPA 提供 Mate 1.14.1 包含如下改变(Ubuntu Mate 16.04 默认安装的是 Mate 1.12.x):** +**现在 PPA 提供 Mate 1.14.1 包含如下改变(Ubuntu Mate 16.04 默认安装的是 Mate 1.12.x):** -- 客户端的装饰应用现在可以正确的在所有主题中渲染; -- 触摸板配置现在支持边缘操作和双指的滚动; -- 在 Caja 中的 Python 扩展可以被单独管理; -- 所有三个窗口焦点模式都是是可选的; -- Mate Panel 中的所有菜单栏图标和菜单图标可以改变大小; -- 音量和亮度 OSD 目前可以启用和禁用; -- 更多的改进和 bug 修改; +- 客户端的装饰应用现在可以正确的在所有主题中渲染; +- 触摸板配置现在支持边缘操作和双指滚动; +- 在 Caja 中的 Python 扩展可以被单独管理; +- 所有三个窗口焦点模式都是可选的; +- Mate Panel 中的所有菜单栏图标和菜单图标可以改变大小; +- 音量和亮度 OSD 目前可以启用和禁用; +- 更多的改进和 bug 修改; -Mate 1.14 同时改进了整个桌面环境中对 GTK+3 的支持,包括各种各项的 GTK+3 小应用。但是,Ubuntu MATE 的博客中提到:PPA 的发行包使用 GTK+2编译"为了确保对 Ubuntu MATE 16.05还有各种各样的第三方 MATE 应用,插件,扩展的支持"。 +Mate 1.14 同时改进了整个桌面环境中对 GTK+ 3 的支持,包括各种 GTK+3 小应用。但是,Ubuntu MATE 的博客中提到:PPA 的发行包使用 GTK+ 2 编译是“为了确保对 Ubuntu MATE 16.04 还有各种各样的第三方 MATE 应用、插件、扩展的支持"。 -MATE 1.14 的完成修改列表[点击此处][2]阅读。 +MATE 1.14 的完整修改列表[点击此处][2]阅读。 ### 在 Ubuntu MATE 16.04 中升级 MATE 1.14.x @@ -29,7 +29,7 @@ sudo apt update sudo apt dist-upgrade ``` -**注意**: mate-netspeed 应用将会在升级中删除。因为该应用现在已经是 mate-applets 应用报的一部分,所以他依旧是可以获得的。 +**注意**: mate-netspeed 应用将会在升级中删除。因为该应用现在已经是 mate-applets 应用报的一部分,所以它依旧是可以使用的。 一旦升级完成,请重启你的系统,享受全新的 MATE! @@ -51,8 +51,8 @@ via [Ubuntu MATE blog][3] via: http://www.webupd8.org/2016/06/install-mate-114-in-ubuntu-mate-1604.html 作者:[Andrew][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1477657f553de2d5556e165483d110908eebde5d Mon Sep 17 00:00:00 2001 From: cposture Date: Wed, 13 Jul 2016 22:56:30 +0800 Subject: [PATCH 138/471] Translating by cposture --- sources/tech/20160309 Let’s Build A Web Server. Part 1.md | 1 + sources/tech/20160406 Let’s Build A Web Server. 
Part 2.md | 1 + 2 files changed, 2 insertions(+) diff --git a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md index 4c8048786d..47f8bfdcc7 100644 --- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md +++ b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 1. ===================================== diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md index 482352ac9a..5cba11dd64 100644 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 2. =================================== From e282fdeb20f54e706e71831caddd338df30e5ba3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Wed, 13 Jul 2016 23:17:51 +0800 Subject: [PATCH 139/471] Translated by cposture (#4184) * Translated by cposture * Translating by cposture * translating partly * translating partly 75 * Translated by cposture * Translated by cposture * Translating by cposture --- sources/tech/20160309 Let’s Build A Web Server. Part 1.md | 1 + sources/tech/20160406 Let’s Build A Web Server. Part 2.md | 1 + 2 files changed, 2 insertions(+) diff --git a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md index 4c8048786d..47f8bfdcc7 100644 --- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md +++ b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 1. ===================================== diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. 
Part 2.md index 482352ac9a..5cba11dd64 100644 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -1,3 +1,4 @@ +Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 2. =================================== From 7d95caa62a3cf6bd7af810d71ceadf001f5b21c7 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 14 Jul 2016 14:06:54 +0800 Subject: [PATCH 140/471] PUB:20160609 How to record your terminal session on Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @MikeCoder 这篇原文有较大疏漏,已做了订正。 --- ...o record your terminal session on Linux.md | 98 +++++++++++++++++ ...o record your terminal session on Linux.md | 104 ------------------ 2 files changed, 98 insertions(+), 104 deletions(-) create mode 100644 published/20160609 How to record your terminal session on Linux.md delete mode 100644 translated/tech/20160609 How to record your terminal session on Linux.md diff --git a/published/20160609 How to record your terminal session on Linux.md b/published/20160609 How to record your terminal session on Linux.md new file mode 100644 index 0000000000..f30e4db2dc --- /dev/null +++ b/published/20160609 How to record your terminal session on Linux.md @@ -0,0 +1,98 @@ +如何在 Linux 上录制你的终端操作 +================================================= + +录制一个终端操作可能是一个帮助他人学习 Linux 、展示一系列正确命令行操作的和分享知识的通俗易懂方法。不管是出于什么目的,从终端复制粘贴文本需要重复很多次,而录制视频的过程也是相当麻烦,有时候还不能录制。在这次的文章中,我们将简单的了解一下以 gif 格式记录和分享终端会话的方法。 + +### 预先要求 + +如果你只是希望能记录你的终端会话,并且能在终端进行回放或者和他人分享,那么你只需要一个叫做:ttyrec 的软件。Ubuntu 用户可以通过运行这行代码进行安装: + +``` +sudo apt-get install ttyrec +``` + +如果你想将生成的视频转换成一个 gif 文件,这样能够和那些不使用终端的人分享,就可以发布到网站上去,或者你只是想做一个 gif 方便使用而不想写命令。那么你需要安装额外的两个软件包。第一个就是 imagemagick , 你可以通过以下的命令安装: + +``` +sudo apt-get install imagemagick +``` + +第二个软件包就是:tty2gif.py,访问其[项目网站][1]下载。这个软件包需要安装如下依赖: + +``` +sudo apt-get install python-opster +``` + +### 录制 + +开始录制终端操作,你需要的仅仅是键入 `ttyprec` 
,然后回车。这个命令将会在后台运行一个实时的记录工具。我们可以通过键入`exit`或者`ctrl+d`来停止。ttyrec 默认会在主目录下创建一个`ttyrecord`的文件。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg) + +### 回放 + +回放这个文件非常简单。你只需要打开终端并且使用 `ttyplay` 命令打开 `ttyrecord` 文件即可。(在这个例子里,我们使用 ttyrecord 作为文件名,当然,你也可以改成你用的文件名) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg) + +然后就可以开始播放这个文件。这个视频记录了所有的操作,包括你的删除,修改。这看起来像一个拥有自我意识的终端,但是这个命令执行的过程并不是只是为了给系统看,而是为了更好的展现给人。 + +注意一点,播放这个记录是完全可控的,你可以通过点击 `+` 或者 `-` 进行加速减速,或者 `0`和 `1` 暂停和恢复播放。 + +### 导出成 GIF + +为了方便,我们通常会将视频记录转换为 gif 格式,并且,这个非常容易做到。以下是方法: + +将之前下载的 tty2gif.py 这个文件拷贝到 ttyprecord 文件(或者你命名的那个视频文件)相同的目录,然后在这个目录下打开终端,输入命令: + +``` +python tty2gif.py typing ttyrecord +``` + +如果出现了错误,检查一下你是否有安装 python-opster 包。如果还是有错误,使用如下命令进行排除。 + +``` +sudo apt-get install xdotool +export WINDOWID=$(xdotool getwindowfocus) +``` + +然后重复这个命令 `python tty2gif.py` 并且你将会看到在 ttyrecord 目录下多了一些 gif 文件。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg) + +接下来的一步就是整合所有的 gif 文件,将他打包成一个 gif 文件。我们通过使用 imagemagick 工具。输入下列命令: + +``` +convert -delay 25 -loop 0 *.gif example.gif +``` + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg) + +你可以使用任意的文件名,我用的是 example.gif。 并且,你可以改变这个延时和循环时间。 Enjoy。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif) + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/ + +作者:[Bill Toulas][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/howtoforgecom +[1]: https://bitbucket.org/antocuni/tty2gif/raw/61d5596c916512ce5f60fcc34f02c686981e6ac6/tty2gif.py + + + + + + + + diff --git a/translated/tech/20160609 How to record your terminal session on Linux.md b/translated/tech/20160609 How to record your terminal session on Linux.md deleted file mode 100644 index bc91a552a7..0000000000 --- a/translated/tech/20160609 How to record your terminal session on Linux.md +++ /dev/null @@ -1,104 +0,0 @@ -如何在 Linux 上录制你的终端操作 -================================================= - -录制一次终端操作可能是一个帮助他人学习 Linux 、展示一系列正确命令行操作的和分享知识的通俗易懂方法。不管是什么目的,大多数情况下从终端复制粘贴文本从终端不会是很有帮助而且录制视频的过程也是相当困难。在这次的文章中,我们将简单的了解一下在终端会话中用 GIF 格式录制视频的方法。 - -### 预先要求 - -如果你只是希望能记录你的终端会话,并且能在终端进行播放或者和他人分享,那么你只需要一个叫做:'ttyrec' 的软件。Ubuntu 使用者可以通过运行这行代码进行安装: - -``` -sudo apt-get install ttyrec -``` - -如果你想将生成的视频转换成一个 GIF 文件,并且能够和那些希望发布在网站上或者不使用终端或者只是简单的想保留一个 GIF 的用户分享。那么你需要安装额外的两个软件包。第一个就是'imagemagick', 你可以通过以下的命令安装: - -``` -sudo apt-get install imagemagick -``` - -第二个软件包就是:'tty2gif',你可以从这里下载。这个软件包需要安装如下依赖: - -``` -sudo apt-get install python-opster -``` - -### 录制 - -开始录制中毒啦操作,你需要的仅仅是键入'ttyprec' + 回车。这个命令将会在后台运行一个实时的记录工具。我们可以通过键入'exit'或者'ctrl+d'来停止。ttyrec 默认会在'Home'目录下创建一个'ttyrecord'的文件。 - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg) - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg) - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg) - -### 播放 - -播放这个文件非常简单。你只需要打开终端并且使用 'ttyplay' 命令打开 'ttyrecord' 文件即可。(在这个例子里,我们使用 ttyrecord 作为文件名,当然,你也可以对这个文件进行重命名) - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg) - -然后就可以开始播放这个文件。这个视频记录了所有的操作,包括你的删除,修改。这看起来想一个拥有自我意识的终端,但是这个命令执行的过程并不是只是为了给系统看,而是为了更好的展现给人。 - -注意的一点,播放这个记录是完全可控的,你可以通过点击 '+' 或者 '-' 或者 '0', 或者 '1' 按钮加速、减速、暂停、和恢复播放。 - -### 导出成 GIF - -为了方便,我们通常会将视频记录转换为 GIF 
格式,并且,这个也非常方便可以实现。以下是方法: - -首先,解压这个文件 'tty2gif.tar.bz2': - -``` -tar xvfj tty2gif.tar.bz2 -``` - -然后,将 'tty2gif.py' 这个文件拷贝到 'ttyprecord' 文件同目录(或者你命名的那个视频文件), 然后在这个目录下打开终端,输入命令: - -``` -python tty2gif.py typing ttyrecord -``` - -如果你出现了错误,检查一下你是否有安装 'python-opster' 包。如果还是有错误,使用如下命令进行排除。 - -``` -sudo apt-get install xdotool -export WINDOWID=$(xdotool getwindowfocus) -``` - -然后重复这个命令 'python tty2gif.py' 并且你将会看到在 ttyrecord 目录下多了一大串的 gif 文件。 - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg) - -接下来的一步就是整合所有的 gif 文件,将他打包成一个 GIF 文件。我们通过使用 imagemagick 工具。输入下列命令: - -``` -convert -delay 25 -loop 0 *.gif example.gif -``` - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg) - -你可以任意的文件名,我喜欢用 'example.gif'。 并且,你可以改变这个延时和循环时间。 Enjoy. - -![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif) - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/ - -作者:[Bill Toulas][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twitter.com/howtoforgecom - - - - - - - - - From 534cc841761d982b7dec166c382b9853c1f56b49 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 14 Jul 2016 14:47:18 +0800 Subject: [PATCH 141/471] PUB:20160625 How to Hide Linux Command Line History by Going Incognito @chunyang-wen --- ...Command Line History by Going Incognito.md | 38 ++++++++++--------- 1 file changed, 20 insertions(+), 18 deletions(-) rename {translated/tech => published}/20160625 How to Hide Linux Command Line History by Going Incognito.md (60%) diff --git a/translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md b/published/20160625 How to Hide Linux Command Line History by Going Incognito.md 
similarity index 60% rename from translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md rename to published/20160625 How to Hide Linux Command Line History by Going Incognito.md index 9ef701559b..14bcdc2c7b 100644 --- a/translated/tech/20160625 How to Hide Linux Command Line History by Going Incognito.md +++ b/published/20160625 How to Hide Linux Command Line History by Going Incognito.md @@ -1,9 +1,9 @@ -如何进入无痕模式进而隐藏 Linux 的命令行历史 +如何隐藏你的 Linux 的命令行历史 ================================================================ ![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-featured.jpg) -如果你是 Linux 命令行的用户,你会同意有的时候你不希望某些命令记录在你的命令行历史中。其中原因可能很多。例如,你在公司处于某个职位,你有一些不希望被其它人滥用的特权。亦或者有些特别重要的命令,你不希望在你浏览历史列表时误执行。 +如果你是 Linux 命令行的用户,有的时候你可能不希望某些命令记录在你的命令行历史中。原因可能很多,例如,你在公司担任某个职位,你有一些不希望被其它人滥用的特权。亦或者有些特别重要的命令,你不希望在你浏览历史列表时误执行。 然而,有方法可以控制哪些命令进入历史列表,哪些不进入吗?或者换句话说,我们在 Linux 终端中可以开启像浏览器一样的无痕模式吗?答案是肯定的,而且根据你想要的具体目标,有很多实现方法。在这篇文章中,我们将讨论一些行之有效的方法。 @@ -15,15 +15,15 @@ #### 1. 在命令前插入空格 -是的,没看错。在命令前面插入空格,这条命令会被终端忽略,也就意味着它不会出现在历史记录中。但是这种方法有个前提,只有在你的环境变量 HISTCONTROL 设置为 "ignorespace" 或者 "ignoreboth" 才会起作用。在大多数情况下,这个是默认值。 +是的,没看错。在命令前面插入空格,这条命令会被 shell 忽略,也就意味着它不会出现在历史记录中。但是这种方法有个前提,只有在你的环境变量 `HISTCONTROL` 设置为 "ignorespace" 或者 "ignoreboth" 才会起作用。在大多数情况下,这个是默认值。 -所以,像下面的命令: +所以,像下面的命令(LCTT 译注:这里`[space]`表示输入一个空格): ``` [space]echo "this is a top secret" ``` -你执行后,它不会出现在历史记录中。 +如果你之前执行过如下设置环境变量的命令,那么上述命令不会出现在历史记录中。 ``` export HISTCONTROL = ignorespace @@ -37,35 +37,35 @@ export HISTCONTROL = ignorespace #### 2. 
禁用当前会话的所有历史记录 -如果你想禁用某个会话所有历史,你可以简单地在开始命令行工作前清除环境变量 HISTSIZE 的值。执行下面的命令来清除其值: +如果你想禁用某个会话所有历史,你可以在开始命令行工作前简单地清除环境变量 HISTFILESIZE 的值即可。执行下面的命令来清除其值: ``` -export HISTFILE=0 +export HISTFILESIZE=0 ``` -HISTFILE 表示对于 bash 会话其历史中可以保存命令的个数。默认情况,它设置了一个非零值,例如 在我的电脑上,它的值为 1000。 +HISTFILESIZE 表示对于 bash 会话其历史文件中可以保存命令的个数(行数)。默认情况,它设置了一个非零值,例如在我的电脑上,它的值为 1000。 -所以上面所提到的命令将其值设置为 0,结果就是直到你关闭终端,没有东西会存储在历史记录中。记住同样你也不能通过按向上的箭头按键来执行之前的命令,也不能运行 history 命令。 +所以上面所提到的命令将其值设置为 0,结果就是直到你关闭终端,没有东西会存储在历史记录中。记住同样你也不能通过按向上的箭头按键或运行 history 命令来看到之前执行的命令。 #### 3. 工作结束后清除整个历史 -这可以看作是前一部分所提方案的另外一种实现。唯一的区别是在你完成所有工作之后执行这个命令。下面是刚讨论的命令: +这可以看作是前一部分所提方案的另外一种实现。唯一的区别是在你完成所有工作之后执行这个命令。下面是刚说到的命令: ``` history -cw ``` -刚才已经提到,这个和 HISTFILE 方法有相同效果。 +刚才已经提到,这个和 HISTFILESIZE 方法有相同效果。 #### 4. 只针对你的工作关闭历史记录 -虽然前面描述的方法(2 和 3)可以实现目的,它们清除整个历史,在很多情况下,有些可能不是我们所期望的。有时候你可能想保存直到你开始命令行工作之间的历史记录。类似的需求需要在你开始工作前执行下述命令: +虽然前面描述的方法(2 和 3)可以实现目的,它们可以清除整个历史,在很多情况下,有些可能不是我们所期望的。有时候你可能想保存直到你开始命令行工作之间的历史记录。对于这样的需求,你开始在工作前执行下述命令: ``` [space]set +o history ``` -备注:[space] 表示空格。 +备注:[space] 表示空格。并且由于空格的缘故,该命令本身也不会被记录。 上面的命令会临时禁用历史功能,这意味着在这命令之后你执行的所有操作都不会记录到历史中,然而这个命令之前的所有东西都会原样记录在历史列表中。 @@ -75,14 +75,14 @@ history -cw [Space]set -o history ``` -它将环境恢复原状,也就是你完成你的工作,执行上述命令之后的命令都会出现在历史中。 +它将环境恢复原状,也就是你完成了你的工作,执行上述命令之后的命令都会出现在历史中。 #### 5. 
从历史记录中删除指定的命令 -现在假设历史记录中有一些命令你不希望被记录。这种情况下我们怎么办?很简单。直接动手删除它们。通过下面的命令来删除: +现在假设历史记录中已经包含了一些你不希望记录的命令。这种情况下我们怎么办?很简单。直接动手删除它们。通过下面的命令来删除: ``` -[space]history | grep "part of command you want to remove" +history | grep "part of command you want to remove" ``` 上面的命令会输出历史记录中匹配的命令,每一条前面会有个数字。 @@ -99,6 +99,8 @@ history -d [num] 第二个 ‘echo’命令被成功的删除了。 +(LCTT 译注:如果你不希望上述命令本身也被记录进历史中,你可以在上述命令前加个空格) + 同样的,你可以使用向上的箭头一直往回翻看历史记录。当你发现你感兴趣的命令出现在终端上时,按下 “Ctrl + U”清除整行,也会从历史记录中删除它。 ### 总结 @@ -107,11 +109,11 @@ history -d [num] -------------------------------------------------------------------------------- -via: https://www.maketecheasier.com/linux-command-line-history-incognito/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier +via: https://www.maketecheasier.com/linux-command-line-history-incognito/ 作者:[Himanshu Arora][a] 译者:[chunyang-wen](https://github.com/chunyang-wen) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e200f3a65410dcfff736257298f2097605959c07 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 14 Jul 2016 22:18:43 +0800 Subject: [PATCH 142/471] =?UTF-8?q?=E6=A0=A1=E5=AF=B9=E5=AE=8C=E6=88=90?= =?UTF-8?q?=EF=BC=8C=E7=A7=BB=E5=88=B0publish?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
6 Amazing Linux Distributions For Kids.md | 33 +++++++++---------- 1 file changed, 15 insertions(+), 18 deletions(-) rename {translated/tech => published}/20160616 6 Amazing Linux Distributions For Kids.md (58%) diff --git a/translated/tech/20160616 6 Amazing Linux Distributions For Kids.md b/published/20160616 6 Amazing Linux Distributions For Kids.md similarity index 58% rename from translated/tech/20160616 6 Amazing Linux Distributions For Kids.md rename to published/20160616 6 Amazing Linux Distributions For Kids.md index 7a8169cd6a..79f3aff154 100644 --- a/translated/tech/20160616 6 Amazing Linux Distributions For Kids.md +++ b/published/20160616 6 Amazing Linux Distributions For Kids.md @@ -1,28 +1,26 @@ -[Translated by HaohongWANG] -//校对爸爸辛苦了!>_< 惊艳!6款面向儿童的 Linux 发行版 ====================================== -毫无疑问未来是属于 Linux 和开源的。为了实现这样的未来、使Linux占据一席之地,人们已经着手从尽可能低的水平开始开发面向儿童的Linux发行版,并尝试教授他们如何使用Linux操作系统。 +毫无疑问未来是属于 Linux 和开源的。为了实现这样的未来、使 Linux 占据一席之地,人们已经着手开发尽可能简单的、面向儿童的 Linux 发行版,并尝试教导他们如何使用 Linux 操作系统。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Linux-Distros-For-Kids.png) >面向儿童的 Linux 发行版 -Linux 是一款非常强大的操作系统,原因之一便是它驱动了互联网上绝大多数的服务器。但出于对其用户友好性的担忧,坊间时常展开有关于 Linux 应如何取代 Mac OS X 或 Windows 的辩论。而我认为用户应该接受 Linux 来见识它真正的威力。 +Linux 是一款非常强大的操作系统,原因之一便是它支撑了互联网上绝大多数的服务器。但出于对其用户友好性的担忧,坊间时常展开有关于 Linux 应如何取代 Mac OS X 或 Windows 的争论。而我认为用户应该接受 Linux 来见识它真正的威力。 -如今,Linux 运行在绝大多数设备上,从智能手机到平板电脑,笔记本电脑,工作站,服务器,超级计算机,再到汽车,航空管制系统,甚至电冰箱,都有 Linux 的身影。正如我在开篇所说,有了这一切, Linux 是未来的操作系统。 +如今,Linux 运行在绝大多数设备上,从智能手机到平板电脑,笔记本电脑,工作站,服务器,超级计算机,再到汽车,航空管制系统,电冰箱,到处都有 Linux 的身影。正如我在开篇所说,有了这一切, Linux 是未来的操作系统。 >参考阅读: [30 Big Companies and Devices Running on Linux][1] -未来是属于孩子们的,教育要从娃娃抓起。所以,要让小孩子尽早地学习计算机、了解 Linux 、接触科学技术。这是改变未来图景的最好方法。 +未来是属于孩子们的,教育要从娃娃抓起。所以,要让小孩子尽早地学习计算机,而 Linux 就是其中一个重要的部分。 -一个常见的现象是,当儿童在一个适合他的环境中学习时,好奇心和早期学习的能力会使他自己养成喜好探索的性格。 +对小孩来说,一个常见的现象是,当他们在一个适合他的环境中学习时,好奇心和早期学习的能力会使他自己养成喜好探索的性格。 说了这么多儿童应该学习 Linux 的原因,接下来我就列出这些令人激动的发行版。你可以把它们推荐给小孩子来帮助他们开始学习使用 Linux 。 ### Sugar on a 
Stick -Sugar on a Stick (译注:“糖在棒上”)是 Sugar Labs 旗下的工程,Sugar Labs 是一个由志愿者领导的非盈利组织。这一发行版旨在设计大量的免费工具来使儿童在探索、发现、创造中认知自己的思想。 +Sugar on a Stick (译注:“糖棒”)是 Sugar 实验室旗下的工程,Sugar 实验室是一个由志愿者领导的非盈利组织。这一发行版旨在设计大量的免费工具来促进儿童在探索中学会技能,发现、创造,并将这些反映到自己的思想上。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Neighborhood-View.png) >Sugar Neighborhood 界面 @@ -36,31 +34,31 @@ Sugar on a Stick (译注:“糖在棒上”)是 Sugar Labs 旗下的工程,S ### Edubuntu -Edubuntu 是基于当下最流行的发行版 Ubuntu 而开发的一款草根发行版。主要致力于降低学校、家庭和社区安装、使用 Ubuntu 自由软件的难度。 +Edubuntu 是基于当下最流行的发行版 Ubuntu 而开发的一款非官方发行版。主要致力于降低学校、家庭和社区安装、使用 Ubuntu 自由软件的难度。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Edubuntu-Apps.jpg) >Edubuntu 桌面应用 -它的桌面应用由来自不同组织的学生、教师、家长、一些利益相关者甚至黑客来提供。他们都笃信社区的发展和知识的共享是自由学习和自由分享的基石。 +它是由来自不同组织的学生、教师、家长、一些利益相关者甚至黑客支持的,这些人都坚信自由的学习和共享知识能够提高自己和社区的发展。 -该项目的主要目标是组建一款安装、管理软件难度低的操作系统以增长使用 Linux 学习和教育的用户数量。 +该项目的主要目标是通过组建一款能够降低安装、管理软件难度的操作系统来增强学习和教育水平。 访问主页: ### Doudou Linux -Doudou Linux 是专为方便儿童使用而设计的发行版,能在构建中激发儿童的创造性思维。它提供了简单但是颇具教育意义的应用来使儿童在应用过程中学习发现新的知识。 +Doudou Linux 是专为方便孩子们在建设创新思维时使用计算机而设计的发行版。它提供了简单但是颇具教育意义的应用来使儿童在应用过程中学习发现新的知识。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Doudou-Linux.png) >Doudou Linux -其最引人注目的一点便是内容过滤功能,顾名思义,它能够阻止孩童访问网络上的禁止内容。如果想要更进一步的儿童保护功能,Doudou Linux 还提供了互联网用户隐私功能,能够去除网页中的特定加载内容。 +其最引人注目的一点便是内容过滤功能,顾名思义,它能够阻止孩童访问网络上的限制性内容。如果想要更进一步的儿童保护功能,Doudou Linux 还提供了互联网用户隐私功能,能够去除网页中的特定加载内容。 访问主页: ### LinuxKidX -这是一款整合了许多专为儿童的教育类软件的 Slackware Linux 的 LiveCD。它使用 KDE 作为默认桌面环境并配置了诸如 Ktouch 打字指导,Kstars 虚拟天文台,Kalzium 元素周期表和 KwordQuiz 单词测试等应用。 +这是一款整合了许多专为儿童设计的教育软件的基于 Slackware Linux 发行版的 LiveCD。它使用 KDE 作为默认桌面环境并配置了诸如 Ktouch 打字指导,Kstars 虚拟天文台,Kalzium 元素周期表和 KwordQuiz 单词测试等应用。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/LinuxKidX.jpg) >LinuxKidX @@ -85,7 +83,7 @@ Ubermix 还具有5分钟快速安装和快速恢复等功能,可以给小孩 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Qimo-Linux.png) >Qimo Linux -你仍然可以在 Ubuntu 或者其他的 Linux 发行版中找到大多数儿童游戏。正如这些开发商所说,他们不仅在为儿童制作教育软件,同时也在开发增长儿童文化水平的安卓应用。 +你仍然可以在 Ubuntu 或者其他的 Linux 
发行版中找到大多数儿童游戏。正如这些开发商所说,他们并不是要终止为孩子们开发教育软件,而是在开发能够提高儿童读写能力的 android 应用。 如果你想进一步了解,可以移步他们的官方网站。 @@ -93,7 +91,7 @@ Ubermix 还具有5分钟快速安装和快速恢复等功能,可以给小孩 以上这些便是我所知道的面向儿童的Linux发行版,或有缺漏,欢迎评论补充。 -如果你想探讨桌面 Linux 的发展前景或是如何引导儿童接触 Linux ,欢迎与我联系。 +如果你想让我们知道你对如何想儿童介绍 Linux 或者你对未来的 Linux ,特别是桌面计算机上的 Linux,欢迎与我联系。 -------------------------------------------------------------------------------- @@ -102,12 +100,11 @@ via: http://www.tecmint.com/best-linux-distributions-for-kids/?utm_source=feedbu 作者:[Aaron Kili][a] 译者:[HaohongWANG](https://github.com/HaohongWANG) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Ezio](https://github.com/oska874) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: http://www.tecmint.com/author/aaronkili/ - [1]: http://www.tecmint.com/big-companies-and-devices-running-on-gnulinux/ From 78607ba1576473e50c30936a96a9d94706381621 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 14 Jul 2016 22:30:26 +0800 Subject: [PATCH 143/471] PUB:20160620 Monitor Linux With Netdata MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @GitFuture 总体翻译的不错~ --- .../20160620 Monitor Linux With Netdata.md | 34 +++++++++++-------- 1 file changed, 20 insertions(+), 14 deletions(-) rename {translated/tech => published}/20160620 Monitor Linux With Netdata.md (63%) diff --git a/translated/tech/20160620 Monitor Linux With Netdata.md b/published/20160620 Monitor Linux With Netdata.md similarity index 63% rename from translated/tech/20160620 Monitor Linux With Netdata.md rename to published/20160620 Monitor Linux With Netdata.md index bc453d51ee..989abf0d66 100644 --- a/translated/tech/20160620 Monitor Linux With Netdata.md +++ b/published/20160620 Monitor Linux With Netdata.md @@ -1,29 +1,33 @@ 用 Netdata 监控 Linux ======= -Netdata 是一个实时的资源监控工具,它拥有基于 web 的友好界面,由 [FireHQL][1] 开发和维护。通过这个工具,你可以通过图表来了解 CPU,RAM,硬盘,网络,Apache, Postfix等软硬件的资源使用情况。它很像 Nagios 等别的监控软件;但是,Netdata 仅仅支持通过网络接口进行实时监控。 
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/netdata-945x400.png) + +Netdata 是一个实时的资源监控工具,它拥有基于 web 的友好界面,由 [FireHQL][1] 开发和维护。通过这个工具,你可以通过图表来了解 CPU,RAM,硬盘,网络,Apache, Postfix 等软硬件的资源使用情况。它很像 Nagios 等别的监控软件;但是,Netdata 仅仅支持通过 Web 界面进行实时监控。 ### 了解 Netdata -目前 Netdata 还没有验证机制,如果你担心别人能从你的电脑上获取相关信息的话,你应该设置防火墙规则来限制访问。UI 是简化版的,以便用户查看和理解他们看到的表格数据,至少你能够记住它的快速安装。 +目前 Netdata 还没有验证机制,如果你担心别人能从你的电脑上获取相关信息的话,你应该设置防火墙规则来限制访问。UI 很简单,所以任何人看懂图形并理解他们看到的结果,至少你会对它的快速安装印象深刻。 -它的 web 前端响应很快,而且不需要 Flash 插件。 UI 很整洁,保持着 Netdata 应有的特性。粗略的看,你能够看到很多图表,幸运的是绝大多数常用的图表数据(像 CPU,RAM,网络和硬盘)都在顶部。如果你想深入了解图形化数据,你只需要下滑滚动条,或者点击在右边菜单的项目。通过每个图表的右下方的按钮, Netdata 还能让你控制图表的显示,重置,缩放。 +它的 web 前端响应很快,而且不需要 Flash 插件。 UI 很整洁,保持着 Netdata 应有的特性。第一眼看上去,你能够看到很多图表,幸运的是绝大多数常用的图表数据(像 CPU,RAM,网络和硬盘)都在顶部。如果你想深入了解图形化数据,你只需要下滑滚动条,或者点击在右边菜单的项目。通过每个图表的右下方的按钮, Netdata 还能让你控制图表的显示,重置,缩放。 ![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-1.png) ->Netdata 图表控制 + +*Netdata 图表控制* Netdata 并不会占用多少系统资源,它占用的内存不会超过 40MB。因为这个软件是作者用 C 语言写的。 ![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture.png) ->Netdata 显示的内存使用情况 + +*Netdata 显示的内存使用情况* ### 下载 Netdata -要下载这个软件,你可以从访问 [Netdata GitHub page][2]。然后点击页面左边绿色的 "Clone or download" 按钮 。你应该能看到两个选项。 +要下载这个软件,你可以访问 [Netdata 的 GitHub 页面][2],然后点击页面左边绿色的 "Clone or download" 按钮 。你应该能看到以下两个选项: #### 通过 ZIP 文件下载 -另一种方法是下载 ZIP 文件。它包含在仓库的所有东西。但是如果仓库更新了,你需要重新下载 ZIP 文件。下载完 ZIP 文件后,你要用 `unzip` 命令行工具来解压文件。运行下面的命令能把 ZIP 文件的内容解压到 `netdata` 文件夹。 +一种方法是下载 ZIP 文件。它包含仓库里的所有东西。但是如果仓库更新了,你需要重新下载 ZIP 文件。下载完 ZIP 文件后,你要用 `unzip` 命令行工具来解压文件。运行下面的命令能把 ZIP 文件的内容解压到 `netdata` 文件夹。 ``` $ cd ~/Downloads @@ -31,9 +35,10 @@ $ unzip netdata-master.zip ``` ![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-2.png) ->解压 Netdata -没必要在 unzip 命令后加上 `-d` 选项,因为文件都是是放在 ZIP 文件里的一个文件夹里面。如果没有那个文件夹, unzip 会把所有东西都解压到当前目录下面(这会让文件非常混乱)。 +*解压 Netdata* + +没必要在 unzip 命令后加上 `-d` 选项,因为文件都是是放在 ZIP 文件的根文件夹里面。如果没有那个文件夹, unzip 会把所有东西都解压到当前目录下面(这会让文件非常混乱)。 #### 通过 Git 下载 @@ -53,7 +58,7 @@ $ git 
clone https://github.com/firehol/netdata.git ### 安装 Netdata -有些软件包是你成功构造 Netdata 时候需要的。 还好,一行命令就可以安装你所需要的东西([as stated in their installation guide][3])。在命令行运行下面的命令就能满足安装 Netdata 需要的所有依赖关系。 +有些软件包是你成功构造 Netdata 时候需要的。 还好,一行命令就可以安装你所需要的东西([这写在它的安装文档中][3])。在命令行运行下面的命令就能满足安装 Netdata 需要的所有依赖关系。 ``` $ dnf install zlib-devel libuuid-devel libmnl-devel gcc make git autoconf autogen automake pkgconfig @@ -68,7 +73,8 @@ $ sudo ./netdata-installer.sh 然后就会提示你按回车键,开始安装程序。如果要继续的话,就按下回车吧。 ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-3-600x341.png) ->Netdata 的安装。 + +*Netdata 的安装* 如果一切顺利,你的系统上就已经安装并且运行了 Netdata。安装脚本还会在相应的文件夹里添加一个卸载脚本,叫做 `netdata-uninstaller.sh`。如果你以后不想使用 Netdata,运行这个脚本可以从你的系统里面卸载掉 Netdata。 @@ -83,10 +89,10 @@ $ sudo systemctl status netdata 既然我们已经安装并且运行了 Netdata,你就能够通过 19999 端口来访问 web 界面。下面的截图是我在一个测试机器上运行的 Netdata。 ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-4-768x458.png) ->关于 Netdata 运行时的概览 -恭喜!你已经成功安装并且能够看到关于你的机器性能的完美显示,图形和高级的统计数据。无论是否是你个人的机器,你都可以向你的朋友们炫耀,因为你能够深入的了解你的服务器性能,Netdata 在任何机器上的性能报告都非常出色。 +*关于 Netdata 运行时的概览* +恭喜!你已经成功安装并且能够看到漂亮的外观和图形,以及你的机器性能的高级统计数据。无论是否是你个人的机器,你都可以向你的朋友们炫耀,因为你能够深入的了解你的服务器性能,Netdata 在任何机器上的性能报告都非常出色。 -------------------------------------------------------------------------------- @@ -94,7 +100,7 @@ via: https://fedoramagazine.org/monitor-linux-netdata/ 作者:[Martino Jones][a] 译者:[GitFuture](https://github.com/GitFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 64231f948727ba4728239ece766b15fcdd37c3a3 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 14 Jul 2016 22:35:10 +0800 Subject: [PATCH 144/471] =?UTF-8?q?=E6=A0=A1=E5=AF=B9=E5=AE=8C=E6=88=90?= =?UTF-8?q?=EF=BC=8C=E5=8F=AF=E4=BB=A5=E5=8F=91=E5=B8=83?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntrol your DigitalOcean cloud instances.md | 22 
+++++++++---------- 1 file changed, 10 insertions(+), 12 deletions(-) rename {translated/tech => published}/20160708 Using Vagrant to control your DigitalOcean cloud instances.md (55%) diff --git a/translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md similarity index 55% rename from translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md rename to published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md index a9094445b5..b07a1cb5ed 100644 --- a/translated/tech/20160708 Using Vagrant to control your DigitalOcean cloud instances.md +++ b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md @@ -3,17 +3,17 @@ ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/fedora-vagrant-do-945x400.jpg) -[Vagrant][1] 是一个创建和管理支持虚拟机开发环境的应用。Fedora 已经[官方支持 Vagrant][2]。[DigitalOcean][3]是一个提供一键部署的云计算服务提供商(主要是 Fedora 的服务实例)。在[最近的 Raleigh 举办的云计算 FAD][4]中,Fedora 云计算队伍已经打包了一个 Vagrant 的新的插件。它能够帮助用户通过使用本地的 Vagrantfile 来管理 DigitalOcean 实例。 +[Vagrant][1] 是一个使用虚拟机创建和支持虚拟开发环境的应用。Fedora 官方已经在本地系统上通过库 `libvirt` [支持 Vagrant][2]。[DigitalOcean][3]是一个提供一键部署 Fedora 云服务实例到全固态存储服务器的云计算服务提供商。在[最近的 Raleigh 举办的 FAD 大会][4]中,Fedora 云计算队伍已经打包了一个 Vagrant 的新的插件,它能够帮助 Fedora 用户通过使用本地的 Vagrantfile 文件来管理 DigitalOcean 上的云服务实例。 -### 如何使用 Vagrant DigitalOcean 插件 +### 如何使用这个插件 -第一步是安装 vagrant DigitalOcean 的插件软件包。 +第一步在命令行下是安装软件。 ``` $ sudo dnf install -y vagrant-digitalocean ``` -安装 结束之后,下一个任务是创建本地的 Vagrantfile 文件。下面是一个例子。 +安装 结束之后,下一步是创建本地的 Vagrantfile 文件。下面是一个例子。 ``` $ mkdir digitalocean @@ -37,20 +37,18 @@ Vagrant.configure('2') do |config| end ``` -### Vagrant DigitalOcean 插件 +### Vagrant DigitalOcean 插件的注意事项 -一定要记住几个 SSH 命令规范:如果你已经在 DigitalOcean 上传了秘钥,请确保 provider.ssh_key_name 和已经在服务器中的名字吻合。provider.image 具体的文档可以在[DigitalOcean documentation][5]找到。认证 Token 可以在控制管理器的 Apps & API 区域找到。 -你可以通过以下命令来实例化一台主机。 
+一定要记住的几个关于 SSH 的关键命名规范 : 如果你已经在 DigitalOcean 上传了秘钥,请确保 `provider.ssh_key_name` 和已经在服务器中的名字吻合。 `provider.image` 具体的文档可以在[DigitalOcean documentation][5]找到。在控制面板上的 `App & API` 部分可以创建认证令牌。 + +你可以使用下面的命令启动一个实例。 ``` $ vagrant up --provider=digital_ocean ``` -这个命令会启动一台 DigitalOcean 的服务器实例。你可以使用 vagrant ssh 命令来 ssh 登陆进入这个实例。执行 vagrant destroy 来废弃这个实例。 - - - +这个命令会在 DigitalOcean 的启动一个服务器实例。然后你就可以使用 `vagrant ssh` 命令来 `ssh` 登陆进入这个实例。执行 vagrant destroy 来删除这个实例。 -------------------------------------------------------------------------------- @@ -58,7 +56,7 @@ via: https://fedoramagazine.org/using-vagrant-digitalocean-cloud/ 作者:[Kushal Das][a] 译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Ezio](https://github.com/oska874) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6f0b38d1f4b21b7a9775755d61a2451c3780ca29 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 14 Jul 2016 23:00:31 +0800 Subject: [PATCH 145/471] =?UTF-8?q?=E6=A0=A1=E5=AF=B9=E5=AE=8C=E6=88=90?= =?UTF-8?q?=EF=BC=8C=E5=8F=AF=E4=BB=A5=E5=8F=91=E5=B8=83?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
SBC builds on Raspberry Pi Compute Module.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/tech => published}/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md (66%) diff --git a/translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md similarity index 66% rename from translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md rename to published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md index 361dc746ee..d79d24652b 100644 --- a/translated/tech/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md +++ b/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md @@ -3,9 +3,9 @@ ![](http://hackerboards.com/files/embeddedmicro_mypi-thm.jpg) -在 Kickstarter 众筹网站上,一个叫“MyPi”的项目用 RPi 计算模块制作了一款 SBC(Single Board Computer 单板计算机),提供 mini-PCIe 插槽,串口,宽范围输入电源,以及模块扩展等功能。 +在 Kickstarter 众筹网站上,一个叫 “MyPi” 的项目用 RPi 计算模块制作了一款 SBC(译注: Single Board Computer 单板计算机),提供一个 mini-PCIe 插槽,串口,宽范围输入电源,以及模块扩展等功能。 -你也许觉得奇怪,都 2016 年了,为什么还会有人发布这样一款长得有点像三明治,用过时的基于 ARM11 的原版树莓派 COM(Compuer on Module,模块化计算机)版本,[树莓派计算模块][1],构建的单板计算机。首先,目前仍然有大量工业应用不需要太多 CPU 处理能力,第二,树莓派计算模块仍是目前仅有的基于树莓派硬件的 COM,虽然更便宜,有点像 COM 并采用 700MHz 处理器的 [零号树莓派][2],快要发布了。 +你也许觉得奇怪,都 2016 年了,为什么还会有人发布这样一款长得有点像三明治,用过时的基于 ARM11 的 COM (译注: Compuer on Module,模块化计算机)版本的原版树莓派,[树莓派计算模块][1],来构建单板计算机。首先,目前仍然有大量工业应用不需要太多 CPU 处理能力,第二,树莓派计算模块仍是目前仅有的基于树莓派硬件的 COM,虽然更便宜,有点像 COM 并采用 700MHz 处理器的 [零号树莓派][2]。 ![](http://hackerboards.com/files/embeddedmicro_mypi-sm.jpg) @@ -13,7 +13,7 @@ >安装了 COM 和 I/O 组件的 MyPi(左),装入了可选的工业外壳中 -另外,Embedded Micro Technology 还表示它的 SBC 还设计成支持和承诺的树莓派计算模块升级版互换,采用树莓派 3 的四核,Cortex-A53 博通 BCM2837 SoC。因为这个产品最近很快就会到货,不确定他们怎么能及时为 Kickstarter 赞助者处理好这一切。不过,以后能支持也挺不错,就算要为这个升级付费也可以接受。 +另外,Embedded Micro Technology 还表示它的 SBC 还设计成支持升级替换成允许的树莓派计算模块 —— 采用了树莓派 3 的四核、Cortex-A53 博通 BCM2837处理器的 
SoC。因为这个产品最近很快就会到货,不确定他们怎么能及时为 Kickstarter 赞助者处理好这一切。不过,以后能支持也挺不错,就算要为这个升级付费也可以接受。 MyPi 并不是唯一一款新的基于树莓派计算模块的商业嵌入式设备。Pigeon Computers 在五月份启动了 [Pigeon RB100][3] 的项目,是一个基于 COM 的工业自动化控制器。不过,包括 [Techbase Modberry][4] 在内的这一类设备大都出现在 2014 年 COM 发布之后的一小段时间内。 @@ -25,7 +25,7 @@ MyPi 的目标是 30 天内筹集 $21,696,目前已经实现了三分之一。 >不带 COM 和插件板的 MyPi 主板(左)以及它的接口定义 -树莓派计算模块能给 MyPi 带来博通 BCM2835 Soc,512MB 内存,以及 4GB eMMC 存储空间。MyPi 主板扩展了一个 microSD 卡槽,一个 HDMI 接口,两个 USB 2.0 接口,一个 10/100 以太网口,还有一个类似网口的 RS232 端口(通过 USB)。 +树莓派计算模块能给 MyPi 带来博通 BCM2835 Soc,512MB 内存,以及 4GB eMMC 存储空间。MyPi 主板扩展了一个 microSD 卡槽,一个 HDMI 接口,两个 USB 2.0 接口,一个 10/100M 以太网口,还有一个像网口的 RS232 端口(通过 USB 连接)。 ![](http://hackerboards.com/files/embeddedmicro_mypi_angle1-sm.jpg) @@ -35,17 +35,17 @@ MyPi 的目标是 30 天内筹集 $21,696,目前已经实现了三分之一。 MyPi 还将配备一个 mini-PCIe 插槽,据说“只支持 USB,以及只适用 mPCIe 形式的调制解调器”。还带有一个 SIM 卡插槽。板上还有双标准的树莓派摄像头接口,一个音频输出接口,自带备用电池的 RTC,LED 灯。还支持宽范围的 9-23V 直流输入。 -Embedded Micro 表示,MyPi 是为那些树莓派爱好者们设计的,他们堆积了太多 HAT 外接板,已经不能有效地工作了,或者不能很好地装入工业外壳里。MyPi 支持 HAT,另外还提供了公司自己定义的“ASIO”(特定应用接口)插件模块,它会将自己的 I/O 扩展到主板上,主板再将它们连到主板边上的 8 脚,绿色,工业 I/O 连接器(标记了“ASIO Out”)上,在下面图片里有描述。 +Embedded Micro 表示,MyPi 是为那些树莓派爱好者们设计的,他们堆积了太多 HAT 外接板,已经不能有效地工作了,或者不能很好地装入工业外壳里。MyPi 支持 HAT,另外还提供了公司自己定义的 “ASIO” (特定应用接口)插件模块,它会将自己的 I/O 扩展到载板上,载板再将它们连到载板边上的 8-pin,绿色,凤凰式工业 I/O 连接器(标记了“ASIO Out”)上,在下面图片里有描述。 ![](http://hackerboards.com/files/embeddedmicro_mypi_io-sm.jpg) >MyPi 的模块扩展接口 -就像 Kickstarter 页面里描述的:“比起在板边插满带 IO 信号接头的 HAT 板,我们有意将同样的 IO 信号接到另一个接头,它直接接到绿色的工业接头上。”另外,“通过简单地延长卡上的插脚长度(抬高),你将来可以直接扩展 IO 集 - 这些都不需要任何排线!”Embedded Micro 表示。 +就像 Kickstarter 页面里描述的:“比起在板边插满带 IO 信号接头的 HAT 板,我们更愿意把同样的 IO 信号接到另一个接头,它直接接到绿色的工业接头上。” 另外,“通过简单地延长卡上的插脚长度(抬高),你将来可以直接扩展 IO 集 - 这些都不需要任何排线!”Embedded Micro 表示。 ![](http://hackerboards.com/files/embeddedmicro_mypi_with_iocards-sm.jpg) >MyPi 和它的可选 I/O 插件板卡 -公司为 MyPi 提供了一系列可靠的 ASIO 插件板,像上面展示的。这些一开始会包括 CAN 总线,4-20mA 传感器信号,RS485,窄带 RF,等等。 +像上面展示的,这家公司为 MyPi 提供了一系列可靠的 ASIO 插卡,。一开始这些会包括 CAN 总线,4-20mA 传感器信号,RS485,窄带 RF,等等。 ### 更多信息 @@ -58,7 +58,7 @@ 
via: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ 作者:[Eric Brown][a] 译者:[zpl1025](https://github.com/zpl1025) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Ezio](https://github.com/oska874) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2f19bee95fd0bb828463f0f331636d2809b4fc1e Mon Sep 17 00:00:00 2001 From: cposture Date: Fri, 15 Jul 2016 14:11:18 +0800 Subject: [PATCH 146/471] Translating partly 50 --- ...eate Your Own Shell in Python - Part II.md | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md index 3154839443..2733288475 100644 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -1,26 +1,25 @@ -Translating by cposture 2016.07.09 -Create Your Own Shell in Python - Part II +使用 Python 创建你自己的 Shell:Part II =========================================== -In [part 1][1], we already created a main shell loop, tokenized command input, and executed a command by fork and exec. In this part, we will solve the remaining problmes. First, `cd test_dir2` does not change our current directory. Second, we still have no way to exit from our shell gracefully. +在[part 1][1],我们已经创建了一个主要的 shell 循环、切分了的命令输入,以及通过 fork 和 exec 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 -### Step 4: Built-in Commands +### 步骤 4:内置命令 -The statement “cd test_dir2 does not change our current directory” is true and false in some senses. It’s true in the sense that after executing the command, we are still at the same directory. However, the directory is actullay changed, but, it’s changed in the child process. 
+“cd test_dir2 无法修改我们的当前目录” 这句话是对的,但在某种意义上也是错的。在执行完该命令之后,我们仍然处在同一目录,从这个意义上讲,它是对的。然而,目录实际上已经被修改,只不过它是在子进程中被修改。 -Remember that we fork a child process, then, exec the command which does not happen on a parent process. The result is we just change the current directory of a child process, not the directory of a parent process. +还记得我们 fork 了一个子进程,然后执行命令,执行命令的过程没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。 -Then, the child process exits, and the parent process continues with the same intact directory. +然后子进程退出,且父进程在原封不动的目录下继续运行。 -Therefore, this kind of commands must be built-in with the shell itself. It must be executed in the shell process without forking. +因此,这类与 shell 自己相关的命令必须是内置命令。它必须在 shell 进程中执行而没有分叉(forking)。 #### cd -Let’s start with cd command. +让我们从 cd 命令开始。 -We first create a builtins directory. Each built-in command will be put inside this directory. +我们首先创建一个内置目录。每一个内置命令都会被放进这个目录中。 -``` +```shell yosh_project |-- yosh |-- builtins @@ -30,9 +29,9 @@ yosh_project |-- shell.py ``` -In cd.py, we implement our own cd command by using a system call os.chdir. +在 cd.py,我们通过使用系统调用 os.chdir 实现自己的 cd 命令。 -``` +```python import os from yosh.constants import * @@ -43,9 +42,9 @@ def cd(args): return SHELL_STATUS_RUN ``` -Notice that we return shell running status from a built-in function. Therefore, we move constants into yosh/constants.py to be used across the project. +注意,我们会从内置函数返回 shell 的运行状态。所以,为了能够在项目中继续使用常量,我们将它们移至 yosh/constants.py。 -``` +```shell yosh_project |-- yosh |-- builtins @@ -56,16 +55,16 @@ yosh_project |-- shell.py ``` -In constants.py, we put shell status constants here. +在 constants.py,我们将状态常量放在这里。 -``` +```python SHELL_STATUS_STOP = 0 SHELL_STATUS_RUN = 1 ``` -Now, our built-in cd is ready. Let’s modify our shell.py to handle built-in functions. +现在,我们的内置 cd 已经准备好了。让我们修改 shell.py 来处理这些内置函数。 -``` +```python ... # Import constants from yosh.constants import * @@ -90,6 +89,7 @@ def execute(cmd_tokens): ... 
``` +我们使用一个 python 字典变量 built_in_cmds 作为哈希映射(a hash map),以存储我们的内置函数。在 execute 函数,我们提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 We use a Python dictionary built_in_cmds as a hash map to store our built-in functions. In execute function, we extract command name and arguments. If the command name is in our hash map, we call that built-in function. (Note: built_in_cmds[cmd_name] returns the function reference that can be invoked with arguments immediately.) From d1d49d8b7f6e3528ad48c8ad9d4835bff0cee816 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 15 Jul 2016 17:09:17 +0800 Subject: [PATCH 147/471] PUB:20160708 Using Vagrant to control your DigitalOcean cloud instances @MikeCoder @oska874 --- ...grant to control your DigitalOcean cloud instances.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md index b07a1cb5ed..541bcda865 100644 --- a/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md +++ b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md @@ -3,7 +3,7 @@ ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/fedora-vagrant-do-945x400.jpg) -[Vagrant][1] 是一个使用虚拟机创建和支持虚拟开发环境的应用。Fedora 官方已经在本地系统上通过库 `libvirt` [支持 Vagrant][2]。[DigitalOcean][3]是一个提供一键部署 Fedora 云服务实例到全固态存储服务器的云计算服务提供商。在[最近的 Raleigh 举办的 FAD 大会][4]中,Fedora 云计算队伍已经打包了一个 Vagrant 的新的插件,它能够帮助 Fedora 用户通过使用本地的 Vagrantfile 文件来管理 DigitalOcean 上的云服务实例。 +[Vagrant][1] 是一个使用虚拟机创建和支持虚拟开发环境的应用。Fedora 官方已经在本地系统上通过库 `libvirt` [支持 Vagrant][2]。[DigitalOcean][3] 是一个提供一键部署 Fedora 云服务实例到全 SSD 服务器的云计算服务提供商。在[最近的 Raleigh 举办的 FAD 大会][4]中,Fedora 云计算队伍为 Vagrant 打包了一个新的插件,它能够帮助 Fedora 用户通过使用本地的 Vagrantfile 文件来管理 DigitalOcean 上的云服务实例。 ### 如何使用这个插件 @@ -39,8 +39,7 @@ end ### Vagrant DigitalOcean 插件的注意事项 - -一定要记住的几个关于 SSH 的关键命名规范 : 如果你已经在 DigitalOcean 上传了秘钥,请确保 `provider.ssh_key_name` 
和已经在服务器中的名字吻合。 `provider.image` 具体的文档可以在[DigitalOcean documentation][5]找到。在控制面板上的 `App & API` 部分可以创建认证令牌。 +一定要记住的几个关于 SSH 的关键命名规范 : 如果你已经在 DigitalOcean 上传了秘钥,请确保 `provider.ssh_key_name` 和已经在服务器中的名字吻合。 `provider.image` 具体的文档可以在[DigitalOcean documentation][5]找到。在控制面板上的 `App & API` 部分可以创建 AUTH 令牌。 你可以使用下面的命令启动一个实例。 @@ -48,14 +47,14 @@ end $ vagrant up --provider=digital_ocean ``` -这个命令会在 DigitalOcean 的启动一个服务器实例。然后你就可以使用 `vagrant ssh` 命令来 `ssh` 登陆进入这个实例。执行 vagrant destroy 来删除这个实例。 +这个命令会在 DigitalOcean 的启动一个服务器实例。然后你就可以使用 `vagrant ssh` 命令来 `ssh` 登录进入这个实例。可以执行 `vagrant destroy` 来删除这个实例。 -------------------------------------------------------------------------------- via: https://fedoramagazine.org/using-vagrant-digitalocean-cloud/ 作者:[Kushal Das][a] -译者:[译者ID](https://github.com/MikeCoder) +译者:[MikeCoder](https://github.com/MikeCoder) 校对:[Ezio](https://github.com/oska874) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From fddec9ee4e0e81a6fc15006bf4b8adc2d5d81b21 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 15 Jul 2016 17:44:11 +0800 Subject: [PATCH 148/471] PUB:20160624 Industrial SBC builds on Raspberry Pi Compute Module @zpl1025 @oska874 --- ...C builds on Raspberry Pi Compute Module.md | 27 ++++++++++++------- 1 file changed, 17 insertions(+), 10 deletions(-) diff --git a/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md index d79d24652b..cf916044a6 100644 --- a/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md +++ b/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md @@ -3,17 +3,19 @@ ![](http://hackerboards.com/files/embeddedmicro_mypi-thm.jpg) -在 Kickstarter 众筹网站上,一个叫 “MyPi” 的项目用 RPi 计算模块制作了一款 SBC(译注: Single Board Computer 单板计算机),提供一个 mini-PCIe 插槽,串口,宽范围输入电源,以及模块扩展等功能。 +在 Kickstarter 众筹网站上,一个叫 “MyPi” 的项目用树莓派计算模块制作了一款 SBC(LCTT 译注: Single Board Computer 单板计算机),提供一个 
mini-PCIe 插槽,串口,宽范围输入电源,以及模块扩展等功能。 -你也许觉得奇怪,都 2016 年了,为什么还会有人发布这样一款长得有点像三明治,用过时的基于 ARM11 的 COM (译注: Compuer on Module,模块化计算机)版本的原版树莓派,[树莓派计算模块][1],来构建单板计算机。首先,目前仍然有大量工业应用不需要太多 CPU 处理能力,第二,树莓派计算模块仍是目前仅有的基于树莓派硬件的 COM,虽然更便宜,有点像 COM 并采用 700MHz 处理器的 [零号树莓派][2]。 +你也许觉得奇怪,都 2016 年了,为什么还会有人发布这样一款长得有点像三明治,用过时的 ARM11 构建的 COM (LCTT 译注: Compuer on Module,模块化计算机)版本的树莓派单板计算机:[树莓派计算模块][1]。原因是这样的,首先,目前仍然有大量工业应用不需要太多 CPU 处理能力,第二,树莓派计算模块仍是目前仅有的基于树莓派硬件的 COM,虽然更便宜、有点像 COM 并采用同样的 700MHz 处理器的 [零号树莓派][2] 也很类似。 ![](http://hackerboards.com/files/embeddedmicro_mypi-sm.jpg) +*安装了 COM 和 I/O 组件的 MyPi* + ![](http://hackerboards.com/files/embeddedmicro_mypi_encl-sm.jpg) ->安装了 COM 和 I/O 组件的 MyPi(左),装入了可选的工业外壳中 +*装入了可选的工业外壳中* -另外,Embedded Micro Technology 还表示它的 SBC 还设计成支持升级替换成允许的树莓派计算模块 —— 采用了树莓派 3 的四核、Cortex-A53 博通 BCM2837处理器的 SoC。因为这个产品最近很快就会到货,不确定他们怎么能及时为 Kickstarter 赞助者处理好这一切。不过,以后能支持也挺不错,就算要为这个升级付费也可以接受。 +另外,Embedded Micro Technology 还表示它的 SBC 还设计成可升级替换为支持的树莓派计算模块 —— 采用了树莓派 3 的四核、Cortex-A53 博通 BCM2837处理器的 SoC。因为这个产品最近很快就会到货,不确定他们怎么能及时为 Kickstarter 赞助者处理好这一切。不过,以后能支持也挺不错,就算要为这个升级付费也可以接受。 MyPi 并不是唯一一款新的基于树莓派计算模块的商业嵌入式设备。Pigeon Computers 在五月份启动了 [Pigeon RB100][3] 的项目,是一个基于 COM 的工业自动化控制器。不过,包括 [Techbase Modberry][4] 在内的这一类设备大都出现在 2014 年 COM 发布之后的一小段时间内。 @@ -21,29 +23,35 @@ MyPi 的目标是 30 天内筹集 $21,696,目前已经实现了三分之一。 ![](http://hackerboards.com/files/embeddedmicro_mypi_baseboard-sm.jpg) +*不带 COM 和插件板的 MyPi 主板* + ![](http://hackerboards.com/files/embeddedmicro_mypi_detail-sm.jpg) ->不带 COM 和插件板的 MyPi 主板(左)以及它的接口定义 +*以及它的接口定义* 树莓派计算模块能给 MyPi 带来博通 BCM2835 Soc,512MB 内存,以及 4GB eMMC 存储空间。MyPi 主板扩展了一个 microSD 卡槽,一个 HDMI 接口,两个 USB 2.0 接口,一个 10/100M 以太网口,还有一个像网口的 RS232 端口(通过 USB 连接)。 ![](http://hackerboards.com/files/embeddedmicro_mypi_angle1-sm.jpg) +*插上树莓派计算模块和 mini-PCIe 模块的 MyPi 的两个视角* + ![](http://hackerboards.com/files/embeddedmicro_mypi_angle2.jpg) ->插上树莓派计算模块和 mini-PCIe 模块的 MyPi 的两个视角 +*插上树莓派计算模块和 mini-PCIe 模块的 MyPi 的两个视角* MyPi 还将配备一个 mini-PCIe 插槽,据说“只支持 USB,以及只适用 mPCIe 形式的调制解调器”。还带有一个 SIM 
卡插槽。板上还有双标准的树莓派摄像头接口,一个音频输出接口,自带备用电池的 RTC,LED 灯。还支持宽范围的 9-23V 直流输入。 -Embedded Micro 表示,MyPi 是为那些树莓派爱好者们设计的,他们堆积了太多 HAT 外接板,已经不能有效地工作了,或者不能很好地装入工业外壳里。MyPi 支持 HAT,另外还提供了公司自己定义的 “ASIO” (特定应用接口)插件模块,它会将自己的 I/O 扩展到载板上,载板再将它们连到载板边上的 8-pin,绿色,凤凰式工业 I/O 连接器(标记了“ASIO Out”)上,在下面图片里有描述。 +Embedded Micro 表示,MyPi 是为那些树莓派爱好者们设计的,他们拼接了太多 HAT 外接板,已经不能有效地工作了,或者不能很好地装入工业外壳里。MyPi 支持 HAT,另外还提供了公司自己定义的 “ASIO” (特定应用接口)插件模块,它会将自己的 I/O 扩展到载板上,载板再将它们连到载板边上的 8针的绿色凤凰式工业 I/O 连接器(标记了“ASIO Out”)上,在下面图片里有描述。 ![](http://hackerboards.com/files/embeddedmicro_mypi_io-sm.jpg) ->MyPi 的模块扩展接口 + +*MyPi 的模块扩展接口* 就像 Kickstarter 页面里描述的:“比起在板边插满带 IO 信号接头的 HAT 板,我们更愿意把同样的 IO 信号接到另一个接头,它直接接到绿色的工业接头上。” 另外,“通过简单地延长卡上的插脚长度(抬高),你将来可以直接扩展 IO 集 - 这些都不需要任何排线!”Embedded Micro 表示。 ![](http://hackerboards.com/files/embeddedmicro_mypi_with_iocards-sm.jpg) ->MyPi 和它的可选 I/O 插件板卡 + +*MyPi 和它的可选 I/O 插件板卡* 像上面展示的,这家公司为 MyPi 提供了一系列可靠的 ASIO 插卡,。一开始这些会包括 CAN 总线,4-20mA 传感器信号,RS485,窄带 RF,等等。 @@ -51,7 +59,6 @@ Embedded Micro 表示,MyPi 是为那些树莓派爱好者们设计的,他们 MyPi 在 Kickstarter 上提供了 7 月 23 日到期的 79 英镑($119)早期参与包(不包括树莓派计算模块),预计九月份发货。更多信息请查看 [Kickstarter 上 MyPi 的页面][5] 以及 [Embedded Micro Technology 官网][6]。 - -------------------------------------------------------------------------------- via: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ From 5b0200b6aab9291ec7c5ea59c2d3e1508d033d72 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 15 Jul 2016 20:20:07 +0800 Subject: [PATCH 149/471] PUB:20160624 How to permanently mount a Windows share on Linux @alim0x --- ...ermanently mount a Windows share on Linux.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) rename {translated/tech => published}/20160624 How to permanently mount a Windows share on Linux.md (83%) diff --git a/translated/tech/20160624 How to permanently mount a Windows share on Linux.md b/published/20160624 How to permanently mount a Windows share on Linux.md similarity index 83% rename from translated/tech/20160624 How to permanently mount a 
Windows share on Linux.md rename to published/20160624 How to permanently mount a Windows share on Linux.md index a205923fe0..1782a76241 100644 --- a/translated/tech/20160624 How to permanently mount a Windows share on Linux.md +++ b/published/20160624 How to permanently mount a Windows share on Linux.md @@ -4,13 +4,14 @@ > 如果你已经厌倦了每次重启 Linux 就得重新挂载 Windows 共享,读读这个让共享永久挂载的简单方法。 ![](http://tr2.cbsistatic.com/hub/i/2016/06/02/e965310b-b38d-43e6-9eac-ea520992138b/68fd9ec5d6731cc405bdd27f2f42848d/linuxadminhero.jpg) ->图片: Jack Wallen -在 Linux 上和一个 Windows 网络进行交互从来就不是件轻松的事情。想想多少企业正在采用 Linux,这两个平台不得不一起好好协作。幸运的是,有了一些工具的帮助,你可以轻松地将 Windows 网络驱动器映射到一台 Linux 机器上,甚至可以确保在重启 Linux 机器之后共享还在。 +*图片: Jack Wallen* + +在 Linux 上和一个 Windows 网络进行交互从来就不是件轻松的事情。想想多少企业正在采用 Linux,需要在这两个平台上彼此协作。幸运的是,有了一些工具的帮助,你可以轻松地将 Windows 网络驱动器映射到一台 Linux 机器上,甚至可以确保在重启 Linux 机器之后共享还在。 ### 在我们开始之前 -要实现这个,你需要用到命令行。过程十分简单,但你需要编辑 /etc/fstab 文件,所以小心操作。还有,我假设你已经有正常工作的 Samba 了,可以手动从 Windows 网络挂载共享到你的 Linux 机器,还知道这个共享的主机 IP 地址。 +要实现这个,你需要用到命令行。过程十分简单,但你需要编辑 /etc/fstab 文件,所以小心操作。还有,我假设你已经让 Samba 正常工作了,可以手动从 Windows 网络挂载共享到你的 Linux 机器,还知道这个共享的主机 IP 地址。 准备好了吗?那就开始吧。 @@ -22,7 +23,7 @@ sudo mkdir /media/share ``` -### 一些安装 +### 安装一些软件 现在我们得安装允许跨平台文件共享的系统;这个系统是 cifs-utils。在终端窗口输入: @@ -44,7 +45,7 @@ hosts: files mdns4_minimal [NOTFOUND=return] dns hosts: files mdns4_minimal [NOTFOUND=return] wins dns ``` -现在你必须安装 windbind 让你的 Linux 机器可以在 DHCP 网络中解析 Windows 机器名。在终端里执行: +现在你需要安装 windbind 让你的 Linux 机器可以在 DHCP 网络中解析 Windows 机器名。在终端里执行: ``` sudo apt-get install libnss-windbind windbind @@ -70,7 +71,7 @@ sudo cp /etc/fstab /etc/fstab.old sudo mv /etc/fstab.old /etc/fstab ``` -在你的主目录创建一个认证信息文件 .smbcredentials。在这个文件里添加你的用户名和密码,就像这样(USER 和 PASSWORD 是实际的用户名和密码): +在你的主目录创建一个认证信息文件 .smbcredentials。在这个文件里添加你的用户名和密码,就像这样(USER 和 PASSWORD 替换为实际的用户名和密码): ``` username=USER @@ -84,7 +85,7 @@ password=PASSWORD id USER ``` -USER 是实际的用户名,你应该会看到类似这样的信息: +USER 是你的实际用户名,你应该会看到类似这样的信息: ``` uid=1000(USER) gid=1000(GROUP) @@ -115,7 +116,7 @@ via: 
http://www.techrepublic.com/article/how-to-permanently-mount-a-windows-shar 作者:[Jack Wallen][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 683f34b235fe2bad7b105dda48b7c4dbca85528c Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 15 Jul 2016 22:23:26 +0800 Subject: [PATCH 150/471] PUB:20160220 Best Cloud Services For Linux To Replace Copy @cposture --- ...loud Services For Linux To Replace Copy.md | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) rename {translated/tech => published}/20160220 Best Cloud Services For Linux To Replace Copy.md (76%) diff --git a/translated/tech/20160220 Best Cloud Services For Linux To Replace Copy.md b/published/20160220 Best Cloud Services For Linux To Replace Copy.md similarity index 76% rename from translated/tech/20160220 Best Cloud Services For Linux To Replace Copy.md rename to published/20160220 Best Cloud Services For Linux To Replace Copy.md index ce9380b600..a43c5776e8 100644 --- a/translated/tech/20160220 Best Cloud Services For Linux To Replace Copy.md +++ b/published/20160220 Best Cloud Services For Linux To Replace Copy.md @@ -3,9 +3,9 @@ ![](http://itsfoss.com/wp-content/uploads/2016/02/Linux-cloud-services.jpg) -云存储服务 Copy 即将关闭,我们 Linux 用户是时候该寻找其他优秀的** Copy 之外的 Linux 云存储服务**。 +云存储服务 Copy 已经关闭,我们 Linux 用户是时候该寻找其他优秀的** Copy 之外的 Linux 云存储服务**。 -全部文件将会在 2016年5月1号 被删除。如果你是 Copy 的用户,你应该保存你的文件并将它们移至其他地方。 +全部文件会在 2016年5月1号 被删除。如果你是 Copy 的用户,你应该保存你的文件并将它们移至其他地方。 在过去的两年里,Copy 已经成为了我最喜爱的云存储。它为我提供了大量的免费空间并且带有桌面平台的原生应用程序,包括 Linux 和移动平台如 iOS 和 Android。 @@ -13,16 +13,16 @@ 当我从 Copy.com 看到它即将关闭的消息,我的担忧成真了。事实上,Copy 并不孤独。它的母公司 [Barracuda Networks](https://www.barracuda.com/)正经历一段困难时期并且已经[雇佣 Morgan Stanely 寻找 合适的卖家](http://www.bloomberg.com/news/articles/2016-02-01/barracuda-networks-said-to-work-with-morgan-stanley-to-seek-sale)(s) -无论什么理由,我们所知道的是 Copy 
将会成为历史,我们需要寻找相似的**优秀的 Linux 云服务**。我之所以强调 Linux 是因为其他流行的云存储服务,如[微软的OneDrive](https://onedrive.live.com/about/en-us/) 和 [Google Drive](https://www.google.com/drive/) 都没有提供本地 Linux 客户端。这是微软预计的事情,但是谷歌对 Linux 的冷漠令人震惊。 +无论什么理由,我们所知道的是 Copy 将会成为历史,我们需要寻找相似的**优秀的 Linux 云服务**。我之所以强调 Linux 是因为其他流行的云存储服务,如[微软的 OneDrive](https://onedrive.live.com/about/en-us/) 和 [Google Drive](https://www.google.com/drive/) 都没有提供本地 Linux 客户端。微软并没有出乎我们的预料,但是[谷歌对 Linux 的冷漠][1]令人震惊。 ## Linux 下 Copy 的最佳替代者 -现在,作为一个 Linux 存储,在云存储中你需要什么?让我们猜猜: +什么样的云服务才适合作为 Linux 下的存储服务?让我们猜猜: -- 大量的免费空间。毕竟,个人用户无法每月支付巨额款项。 -- 原生的 Linux 客户端。因此你能够使用提供的服务,方便地同步文件,而不用做一些特殊的调整或者定时执行脚本。 -- 其他桌面系统的客户端,比如 Windows 和 OS X。便携性是必要的,并且同步设备间的文件是一种很好的缓解。 -- Android 和 iOS 的移动应用程序。在今天的现代世界里,你需要连接所有设备。 +- 大量的免费空间。毕竟,个人用户无法支付每月的巨额款项。 +- 原生的 Linux 客户端。以便你能够方便的在服务器之间同步文件,而不用做一些特殊的调整或者定时执行脚本。 +- 其他桌面系统的客户端,比如 Windows 和 OS X。移动性是必要的,并且同步设备间的文件也很有必要。 +- 基于 Android 和 iOS 的移动应用程序。在今天的现代世界里,你需要连接所有设备。 我不将自托管的云服务计算在内,比如 OwnCloud 或 [Seafile](https://www.seafile.com/en/home/) ,因为它们需要自己建立和运行一个服务器。这不适合所有想要类似 Copy 的云服务的家庭用户。 @@ -32,7 +32,7 @@ ![](http://itsfoss.com/wp-content/uploads/2016/02/Mega-Linux.jpg) -如果你是一个 It’s FOSS 的普通读者,你可能已经看过我之前的一篇有关[Mega on Linux](http://itsfoss.com/install-mega-cloud-storage-linux/)的文章。这种云服务由[Megaupload scandal](https://en.wikipedia.org/wiki/Megaupload) 公司下臭名昭著的[Kim Dotcom](https://en.wikipedia.org/wiki/Kim_Dotcom)提供。这也使一些用户怀疑它,因为 Kim Dotcom 已经很长一段时间成为美国当局的目标。 +如果你是一个 It’s FOSS 的普通读者,你可能已经看过我之前的一篇有关 [Mega on Linux](http://itsfoss.com/install-mega-cloud-storage-linux/)的文章。这种云服务由 [Megaupload scandal](https://en.wikipedia.org/wiki/Megaupload) 公司下臭名昭著的 [Kim Dotcom](https://en.wikipedia.org/wiki/Kim_Dotcom) 提供。这也使一些用户怀疑它,因为 Kim Dotcom 已经很长一段时间成为美国当局的目标。 Mega 拥有方便免费云服务下你所期望的一切。它给每个个人用户提供 50 GB 的免费存储空间。提供Linux 和其他平台下的原生客户端,并带有端到端的加密。原生的 Linux 客户端运行良好,可以无缝地跨平台同步。你也能在浏览器上查看操作你的文件。 @@ -74,7 +74,7 @@ Hubic 拥有一些不错的功能。除了简单的用户界面、文件共享 ![](http://itsfoss.com/wp-content/uploads/2016/02/pCloud-Linux.jpeg) -pCloud 
是另一款欧洲的发行软件,但这一次从瑞士横跨法国边境。专注于加密和安全,pCloud 为每一个注册者提供 10 GB 的免费存储空间。你可以通过邀请好友、在社交媒体上分享链接等方式将空间增加至 20 GB。 +pCloud 是另一款欧洲的发行软件,但这一次跨过了法国边境,它来自瑞士。专注于加密和安全,pCloud 为每一个注册者提供 10 GB 的免费存储空间。你可以通过邀请好友、在社交媒体上分享链接等方式将空间增加至 20 GB。 它拥有云服务的所有标准特性,例如文件共享、同步、选择性同步等等。pCloud 也有跨平台原生客户端,当然包括 Linux。 @@ -128,8 +128,9 @@ via: http://itsfoss.com/cloud-services-linux/ 作者:[ABHISHEK][a] 译者:[cposture](https://github.com/cposture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/google-hates-desktop-linux/ \ No newline at end of file From 8079ec325040447803b2b1fe499b57a61b88a65d Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 15 Jul 2016 22:28:50 +0800 Subject: [PATCH 151/471] =?UTF-8?q?=E6=81=A2=E5=A4=8D=E6=96=87=E4=BB=B6?= =?UTF-8?q?=E5=90=8D?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @kylepeng93 --- ...newcomer's guide to navigating OpenStack Infrastructure.md} | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) rename translated/tech/{A newcomer's guide to navigating OpenStack Infrastructure.md => 20160416 A newcomer's guide to navigating OpenStack Infrastructure.md} (99%) diff --git a/translated/tech/A newcomer's guide to navigating OpenStack Infrastructure.md b/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md similarity index 99% rename from translated/tech/A newcomer's guide to navigating OpenStack Infrastructure.md rename to translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md index b22d5e1fd5..25353ef462 100644 --- a/translated/tech/A newcomer's guide to navigating OpenStack Infrastructure.md +++ b/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md @@ -1,4 +1,3 @@ -translating by kylepeng93 给学习OpenStack基础设施的新手的入门指南 
=========================================================== @@ -39,7 +38,7 @@ translating by kylepeng93 via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners 作者:[linux.com][a] -译者:[译者ID](https://github.com/译者ID) +译者:[kylepeng93](https://github.com/kylepeng93) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 31a898b314bc71e8638a55df2e9f9822d9008d2f Mon Sep 17 00:00:00 2001 From: cposture Date: Sat, 16 Jul 2016 00:02:06 +0800 Subject: [PATCH 152/471] Translting partly 70 --- .../20160706 Create Your Own Shell in Python - Part II.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md index 2733288475..26f0625368 100644 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -89,12 +89,11 @@ def execute(cmd_tokens): ... ``` -我们使用一个 python 字典变量 built_in_cmds 作为哈希映射(a hash map),以存储我们的内置函数。在 execute 函数,我们提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 -We use a Python dictionary built_in_cmds as a hash map to store our built-in functions. In execute function, we extract command name and arguments. If the command name is in our hash map, we call that built-in function. +我们使用一个 python 字典变量 built_in_cmds 作为哈希映射(a hash map),以存储我们的内置函数。我们在 execute 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 -(Note: built_in_cmds[cmd_name] returns the function reference that can be invoked with arguments immediately.) +(提示:built_in_cmds[cmd_name] 返回能直接使用参数调用的函数引用的。) -We are almost ready to use the built-in cd function. The last thing is to add cd function into the built_in_cmds map. +我们差不多准备好使用内置的 cd 函数了。最后一步是将 cd 函数添加到 built_in_cmds 映射中。 ``` ... 
@@ -119,6 +118,7 @@ def main(): shell_loop() ``` +我们定义 register_command 函数以添加一个内置函数到我们内置的命令哈希映射。 We define register_command function for adding a built-in function to our built-in commmand hash map. Then, we define init function and register the built-in cd function there. Notice the line register_command("cd", cd). The first argument is a command name. The second argument is a reference to a function. In order to let cd, in the second argument, refer to the cd function reference in yosh/builtins/cd.py, we have to put the following line in yosh/builtins/__init__.py. From 0eeb6216837084d151202801bc689acfc4f691c8 Mon Sep 17 00:00:00 2001 From: cposture Date: Sat, 16 Jul 2016 01:04:01 +0800 Subject: [PATCH 153/471] Translating partly 75:wq --- ...20160706 Create Your Own Shell in Python - Part II.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md index 26f0625368..934011e0cf 100644 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -118,16 +118,19 @@ def main(): shell_loop() ``` -我们定义 register_command 函数以添加一个内置函数到我们内置的命令哈希映射。 -We define register_command function for adding a built-in function to our built-in commmand hash map. Then, we define init function and register the built-in cd function there. +我们定义 register_command 函数以添加一个内置函数到我们内置的命令哈希映射。接着,我们定义 init 函数并且在这里注册内置的 cd 函数。 -Notice the line register_command("cd", cd). The first argument is a command name. The second argument is a reference to a function. In order to let cd, in the second argument, refer to the cd function reference in yosh/builtins/cd.py, we have to put the following line in yosh/builtins/__init__.py. 
+注意这行 register_command("cd", cd) 。第一个参数为命令的名字。第二个参数为一个函数引用。为了能够让第二个参数 cd 引用到 yosh/builtins/cd.py 中的cd 函数引用,我们必须将以下这行代码放在 yosh/builtins/__init__.py 文件中。 ``` from yosh.builtins.cd import * ``` + +因此,在 yosh/shell.py 中,当我们从 yosh.builtins 导入 * 时,我们可以得到已经通过 yosh.builtins +被导入的 cd 函数引用。 Therefore, in yosh/shell.py, when we import * from yosh.builtins, we get cd function reference that is already imported by yosh.builtins. +我们已经准备好了代码。 We’ve done preparing our code. Let’s try by running our shell as a module python -m yosh.shell at the same level as the yosh directory. Now, our cd command should change our shell directory correctly while non-built-in commands still work too. Cool. From 89d9ed97814565e48fc03badce1090df909a1b82 Mon Sep 17 00:00:00 2001 From: cposture Date: Sat, 16 Jul 2016 11:03:10 +0800 Subject: [PATCH 154/471] Translated by cposture --- ...eate Your Own Shell in Python - Part II.md | 63 +++++++++---------- 1 file changed, 30 insertions(+), 33 deletions(-) diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md index 934011e0cf..0f0cd6a878 100644 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -1,7 +1,7 @@ 使用 Python 创建你自己的 Shell:Part II =========================================== -在[part 1][1],我们已经创建了一个主要的 shell 循环、切分了的命令输入,以及通过 fork 和 exec 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 +在[part 1][1] 中,我们已经创建了一个主要的 shell 循环、切分了的命令输入,以及通过 `fork` 和 `exec` 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 ### 步骤 4:内置命令 @@ -9,15 +9,15 @@ 还记得我们 fork 了一个子进程,然后执行命令,执行命令的过程没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。 -然后子进程退出,且父进程在原封不动的目录下继续运行。 +然后子进程退出,而父进程在原封不动的目录下继续运行。 因此,这类与 shell 自己相关的命令必须是内置命令。它必须在 shell 进程中执行而没有分叉(forking)。 #### cd -让我们从 cd 命令开始。 +让我们从 `cd` 命令开始。 -我们首先创建一个内置目录。每一个内置命令都会被放进这个目录中。 +我们首先创建一个 
`builtins` 目录。每一个内置命令都会被放进这个目录中。 ```shell yosh_project @@ -29,7 +29,7 @@ yosh_project |-- shell.py ``` -在 cd.py,我们通过使用系统调用 os.chdir 实现自己的 cd 命令。 +在 `cd.py` 中,我们通过使用系统调用 `os.chdir` 实现自己的 `cd` 命令。 ```python import os @@ -42,7 +42,7 @@ def cd(args): return SHELL_STATUS_RUN ``` -注意,我们会从内置函数返回 shell 的运行状态。所以,为了能够在项目中继续使用常量,我们将它们移至 yosh/constants.py。 +注意,我们会从内置函数返回 shell 的运行状态。所以,为了能够在项目中继续使用常量,我们将它们移至 `yosh/constants.py`。 ```shell yosh_project @@ -55,14 +55,14 @@ yosh_project |-- shell.py ``` -在 constants.py,我们将状态常量放在这里。 +在 `constants.py` 中,我们将状态常量都放在这里。 ```python SHELL_STATUS_STOP = 0 SHELL_STATUS_RUN = 1 ``` -现在,我们的内置 cd 已经准备好了。让我们修改 shell.py 来处理这些内置函数。 +现在,我们的内置 `cd` 已经准备好了。让我们修改 `shell.py` 来处理这些内置函数。 ```python ... @@ -89,11 +89,11 @@ def execute(cmd_tokens): ... ``` -我们使用一个 python 字典变量 built_in_cmds 作为哈希映射(a hash map),以存储我们的内置函数。我们在 execute 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 +我们使用一个 python 字典变量 `built_in_cmds` 作为哈希映射(hash map),以存储我们的内置函数。我们在 `execute` 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 -(提示:built_in_cmds[cmd_name] 返回能直接使用参数调用的函数引用的。) +(提示:`built_in_cmds[cmd_name]` 返回能直接使用参数调用的函数引用的。) -我们差不多准备好使用内置的 cd 函数了。最后一步是将 cd 函数添加到 built_in_cmds 映射中。 +我们差不多准备好使用内置的 `cd` 函数了。最后一步是将 `cd` 函数添加到 `built_in_cmds` 映射中。 ``` ... 
@@ -118,32 +118,29 @@ def main(): shell_loop() ``` -我们定义 register_command 函数以添加一个内置函数到我们内置的命令哈希映射。接着,我们定义 init 函数并且在这里注册内置的 cd 函数。 +我们定义了 `register_command` 函数,以添加一个内置函数到我们内置的命令哈希映射。接着,我们定义 `init` 函数并且在这里注册内置的 `cd` 函数。 -注意这行 register_command("cd", cd) 。第一个参数为命令的名字。第二个参数为一个函数引用。为了能够让第二个参数 cd 引用到 yosh/builtins/cd.py 中的cd 函数引用,我们必须将以下这行代码放在 yosh/builtins/__init__.py 文件中。 +注意这行 `register_command("cd", cd)` 。第一个参数为命令的名字。第二个参数为一个函数引用。为了能够让第二个参数 `cd` 引用到 `yosh/builtins/cd.py` 中的 `cd` 函数引用,我们必须将以下这行代码放在 `yosh/builtins/__init__.py` 文件中。 ``` from yosh.builtins.cd import * ``` -因此,在 yosh/shell.py 中,当我们从 yosh.builtins 导入 * 时,我们可以得到已经通过 yosh.builtins -被导入的 cd 函数引用。 -Therefore, in yosh/shell.py, when we import * from yosh.builtins, we get cd function reference that is already imported by yosh.builtins. +因此,在 `yosh/shell.py` 中,当我们从 `yosh.builtins` 导入 `*` 时,我们可以得到已经通过 `yosh.builtins` 导入的 `cd` 函数引用。 -我们已经准备好了代码。 -We’ve done preparing our code. Let’s try by running our shell as a module python -m yosh.shell at the same level as the yosh directory. +我们已经准备好了代码。让我们尝试在 `yosh` 同级目录下以模块形式运行我们的 shell,`python -m yosh.shell`。 -Now, our cd command should change our shell directory correctly while non-built-in commands still work too. Cool. +现在,`cd` 命令可以正确修改我们的 shell 目录了,同时非内置命令仍然可以工作。非常好! #### exit -Here comes the last piece: to exit gracefully. +最后一块终于来了:优雅地退出。 -We need a function that changes the shell status to be SHELL_STATUS_STOP. So, the shell loop will naturally break and the shell program will end and exit. +我们需要一个可以修改 shell 状态为 `SHELL_STATUS_STOP` 的函数。这样,shell 循环可以自然地结束,shell 将到达终点而退出。 -As same as cd, if we fork and exec exit in a child process, the parent process will still remain inact. Therefore, the exit function is needed to be a shell built-in function. +和 `cd` 一样,如果我们在子进程中 fork 和执行 `exit` 函数,其对父进程是不起作用的。因此,`exit` 函数需要成为一个 shell 内置函数。 -Let’s start by creating a new file called exit.py in the builtins folder. 
+让我们从这开始:在 `builtins` 目录下创建一个名为 `exit.py` 的新文件。 ``` yosh_project @@ -157,7 +154,7 @@ yosh_project |-- shell.py ``` -The exit.py defines the exit function that just returns the status to break the main loop. +`exit.py` 定义了一个 `exit` 函数,该函数仅仅返回一个可以退出主循环的状态。 ``` from yosh.constants import * @@ -167,14 +164,14 @@ def exit(args): return SHELL_STATUS_STOP ``` -Then, we import the exit function reference in `yosh/builtins/__init__.py`. +然后,我们导入位于 `yosh/builtins/__init__.py` 文件的 `exit` 函数引用。 ``` from yosh.builtins.cd import * from yosh.builtins.exit import * ``` -Finally, in shell.py, we register the exit command in `init()` function. +最后,我们在 `shell.py` 中的 `init()` 函数注册 `exit` 命令。 ``` @@ -188,17 +185,17 @@ def init(): ... ``` -That’s all! +到此为止! -Try running python -m yosh.shell. Now you can enter exit to quit the program gracefully. +尝试执行 `python -m yosh.shell`。现在你可以输入 `exit` 优雅地退出程序了。 -### Final Thought +### 最后的想法 -I hope you enjoy creating yosh (your own shell) like I do. However, my version of yosh is still in an early stage. I don’t handle several corner cases that can corrupt the shell. There are a lot of built-in commands that I don’t cover. Some non-built-in commands can also be implemented as built-in commands to improve performance (avoid new process creation time). And, a ton of features are not yet implemented (see Common features and Differing features). +我希望你能像我一样享受创建 `yosh` (**y**our **o**wn **sh**ell)的过程。但我的 `yosh` 版本仍处于早期阶段。我没有处理一些会使 shell 崩溃的极端状况。还有很多我没有覆盖的内置命令。为了提高性能,一些非内置命令也可以实现为内置命令(避免新进程创建时间)。同时,大量的功能还没有实现(请看 [公共特性](http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html) 和 [不同特性](http://www.tldp.org/LDP/intro-linux/html/x12249.html)) -I’ve provided the source code at github.com/supasate/yosh. Feel free to fork and play around. +我已经在 github.com/supasate/yosh 中提供了源代码。请随意 fork 和尝试。 -Now, it’s your turn to make it real Your Own SHell. +现在该是创建你真正自己拥有的 Shell 的时候了。 Happy Coding! @@ -207,7 +204,7 @@ Happy Coding! 
via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ 作者:[Supasate Choochaisri][a] -译者:[译者ID](https://github.com/译者ID) +译者:[cposture](https://github.com/cposture) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From fc7e5a6dec6dea4bc018f29beaca8e944f4594c1 Mon Sep 17 00:00:00 2001 From: cposture Date: Sat, 16 Jul 2016 11:04:29 +0800 Subject: [PATCH 155/471] Translated by cposture --- .../tech/20160706 Create Your Own Shell in Python - Part II.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20160706 Create Your Own Shell in Python - Part II.md (100%) diff --git a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md b/translated/tech/20160706 Create Your Own Shell in Python - Part II.md similarity index 100% rename from sources/tech/20160706 Create Your Own Shell in Python - Part II.md rename to translated/tech/20160706 Create Your Own Shell in Python - Part II.md From d7dd2e863b6d397c1fbefe9bd4283f56bbfc7286 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sat, 16 Jul 2016 20:53:33 +0800 Subject: [PATCH 156/471] Translated by cposture (#4187) * Translated by cposture * Translating by cposture * translating partly * translating partly 75 * Translated by cposture * Translated by cposture * Translating by cposture * Translating partly 50 * Translting partly 70 * Translating partly 75:wq * Translated by cposture * Translated by cposture --- ...eate Your Own Shell in Python - Part II.md | 216 ------------------ ...eate Your Own Shell in Python - Part II.md | 216 ++++++++++++++++++ 2 files changed, 216 insertions(+), 216 deletions(-) delete mode 100644 sources/tech/20160706 Create Your Own Shell in Python - Part II.md create mode 100644 translated/tech/20160706 Create Your Own Shell in Python - Part II.md diff --git a/sources/tech/20160706 Create Your Own Shell in Python 
- Part II.md b/sources/tech/20160706 Create Your Own Shell in Python - Part II.md deleted file mode 100644 index 3154839443..0000000000 --- a/sources/tech/20160706 Create Your Own Shell in Python - Part II.md +++ /dev/null @@ -1,216 +0,0 @@ -Translating by cposture 2016.07.09 -Create Your Own Shell in Python - Part II -=========================================== - -In [part 1][1], we already created a main shell loop, tokenized command input, and executed a command by fork and exec. In this part, we will solve the remaining problmes. First, `cd test_dir2` does not change our current directory. Second, we still have no way to exit from our shell gracefully. - -### Step 4: Built-in Commands - -The statement “cd test_dir2 does not change our current directory” is true and false in some senses. It’s true in the sense that after executing the command, we are still at the same directory. However, the directory is actullay changed, but, it’s changed in the child process. - -Remember that we fork a child process, then, exec the command which does not happen on a parent process. The result is we just change the current directory of a child process, not the directory of a parent process. - -Then, the child process exits, and the parent process continues with the same intact directory. - -Therefore, this kind of commands must be built-in with the shell itself. It must be executed in the shell process without forking. - -#### cd - -Let’s start with cd command. - -We first create a builtins directory. Each built-in command will be put inside this directory. - -``` -yosh_project -|-- yosh - |-- builtins - | |-- __init__.py - | |-- cd.py - |-- __init__.py - |-- shell.py -``` - -In cd.py, we implement our own cd command by using a system call os.chdir. - -``` -import os -from yosh.constants import * - - -def cd(args): - os.chdir(args[0]) - - return SHELL_STATUS_RUN -``` - -Notice that we return shell running status from a built-in function. 
Therefore, we move constants into yosh/constants.py to be used across the project. - -``` -yosh_project -|-- yosh - |-- builtins - | |-- __init__.py - | |-- cd.py - |-- __init__.py - |-- constants.py - |-- shell.py -``` - -In constants.py, we put shell status constants here. - -``` -SHELL_STATUS_STOP = 0 -SHELL_STATUS_RUN = 1 -``` - -Now, our built-in cd is ready. Let’s modify our shell.py to handle built-in functions. - -``` -... -# Import constants -from yosh.constants import * - -# Hash map to store built-in function name and reference as key and value -built_in_cmds = {} - - -def tokenize(string): - return shlex.split(string) - - -def execute(cmd_tokens): - # Extract command name and arguments from tokens - cmd_name = cmd_tokens[0] - cmd_args = cmd_tokens[1:] - - # If the command is a built-in command, invoke its function with arguments - if cmd_name in built_in_cmds: - return built_in_cmds[cmd_name](cmd_args) - - ... -``` - -We use a Python dictionary built_in_cmds as a hash map to store our built-in functions. In execute function, we extract command name and arguments. If the command name is in our hash map, we call that built-in function. - -(Note: built_in_cmds[cmd_name] returns the function reference that can be invoked with arguments immediately.) - -We are almost ready to use the built-in cd function. The last thing is to add cd function into the built_in_cmds map. - -``` -... -# Import all built-in function references -from yosh.builtins import * - -... - -# Register a built-in function to built-in command hash map -def register_command(name, func): - built_in_cmds[name] = func - - -# Register all built-in commands here -def init(): - register_command("cd", cd) - - -def main(): - # Init shell before starting the main loop - init() - shell_loop() -``` - -We define register_command function for adding a built-in function to our built-in commmand hash map. Then, we define init function and register the built-in cd function there. 
- -Notice the line register_command("cd", cd). The first argument is a command name. The second argument is a reference to a function. In order to let cd, in the second argument, refer to the cd function reference in yosh/builtins/cd.py, we have to put the following line in yosh/builtins/__init__.py. - -``` -from yosh.builtins.cd import * -``` -Therefore, in yosh/shell.py, when we import * from yosh.builtins, we get cd function reference that is already imported by yosh.builtins. - -We’ve done preparing our code. Let’s try by running our shell as a module python -m yosh.shell at the same level as the yosh directory. - -Now, our cd command should change our shell directory correctly while non-built-in commands still work too. Cool. - -#### exit - -Here comes the last piece: to exit gracefully. - -We need a function that changes the shell status to be SHELL_STATUS_STOP. So, the shell loop will naturally break and the shell program will end and exit. - -As same as cd, if we fork and exec exit in a child process, the parent process will still remain inact. Therefore, the exit function is needed to be a shell built-in function. - -Let’s start by creating a new file called exit.py in the builtins folder. - -``` -yosh_project -|-- yosh - |-- builtins - | |-- __init__.py - | |-- cd.py - | |-- exit.py - |-- __init__.py - |-- constants.py - |-- shell.py -``` - -The exit.py defines the exit function that just returns the status to break the main loop. - -``` -from yosh.constants import * - - -def exit(args): - return SHELL_STATUS_STOP -``` - -Then, we import the exit function reference in `yosh/builtins/__init__.py`. - -``` -from yosh.builtins.cd import * -from yosh.builtins.exit import * -``` - -Finally, in shell.py, we register the exit command in `init()` function. - - -``` -... - -# Register all built-in commands here -def init(): - register_command("cd", cd) - register_command("exit", exit) - -... -``` - -That’s all! - -Try running python -m yosh.shell. 
Now you can enter exit to quit the program gracefully. - -### Final Thought - -I hope you enjoy creating yosh (your own shell) like I do. However, my version of yosh is still in an early stage. I don’t handle several corner cases that can corrupt the shell. There are a lot of built-in commands that I don’t cover. Some non-built-in commands can also be implemented as built-in commands to improve performance (avoid new process creation time). And, a ton of features are not yet implemented (see Common features and Differing features). - -I’ve provided the source code at github.com/supasate/yosh. Feel free to fork and play around. - -Now, it’s your turn to make it real Your Own SHell. - -Happy Coding! - --------------------------------------------------------------------------------- - -via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ - -作者:[Supasate Choochaisri][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://disqus.com/by/supasate_choochaisri/ -[1]: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ -[2]: http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html -[3]: http://www.tldp.org/LDP/intro-linux/html/x12249.html -[4]: https://github.com/supasate/yosh diff --git a/translated/tech/20160706 Create Your Own Shell in Python - Part II.md b/translated/tech/20160706 Create Your Own Shell in Python - Part II.md new file mode 100644 index 0000000000..0f0cd6a878 --- /dev/null +++ b/translated/tech/20160706 Create Your Own Shell in Python - Part II.md @@ -0,0 +1,216 @@ +使用 Python 创建你自己的 Shell:Part II +=========================================== + +在[part 1][1] 中,我们已经创建了一个主要的 shell 循环、切分了的命令输入,以及通过 `fork` 和 `exec` 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 + +### 步骤 4:内置命令 + +“cd test_dir2 无法修改我们的当前目录” 
这句话是对的,但在某种意义上也是错的。在执行完该命令之后,我们仍然处在同一目录,从这个意义上讲,它是对的。然而,目录实际上已经被修改,只不过它是在子进程中被修改。 + +还记得我们 fork 了一个子进程,然后执行命令,执行命令的过程没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。 + +然后子进程退出,而父进程在原封不动的目录下继续运行。 + +因此,这类与 shell 自己相关的命令必须是内置命令。它必须在 shell 进程中执行而没有分叉(forking)。 + +#### cd + +让我们从 `cd` 命令开始。 + +我们首先创建一个 `builtins` 目录。每一个内置命令都会被放进这个目录中。 + +```shell +yosh_project +|-- yosh + |-- builtins + | |-- __init__.py + | |-- cd.py + |-- __init__.py + |-- shell.py +``` + +在 `cd.py` 中,我们通过使用系统调用 `os.chdir` 实现自己的 `cd` 命令。 + +```python +import os +from yosh.constants import * + + +def cd(args): + os.chdir(args[0]) + + return SHELL_STATUS_RUN +``` + +注意,我们会从内置函数返回 shell 的运行状态。所以,为了能够在项目中继续使用常量,我们将它们移至 `yosh/constants.py`。 + +```shell +yosh_project +|-- yosh + |-- builtins + | |-- __init__.py + | |-- cd.py + |-- __init__.py + |-- constants.py + |-- shell.py +``` + +在 `constants.py` 中,我们将状态常量都放在这里。 + +```python +SHELL_STATUS_STOP = 0 +SHELL_STATUS_RUN = 1 +``` + +现在,我们的内置 `cd` 已经准备好了。让我们修改 `shell.py` 来处理这些内置函数。 + +```python +... +# Import constants +from yosh.constants import * + +# Hash map to store built-in function name and reference as key and value +built_in_cmds = {} + + +def tokenize(string): + return shlex.split(string) + + +def execute(cmd_tokens): + # Extract command name and arguments from tokens + cmd_name = cmd_tokens[0] + cmd_args = cmd_tokens[1:] + + # If the command is a built-in command, invoke its function with arguments + if cmd_name in built_in_cmds: + return built_in_cmds[cmd_name](cmd_args) + + ... +``` + +我们使用一个 python 字典变量 `built_in_cmds` 作为哈希映射(hash map),以存储我们的内置函数。我们在 `execute` 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 + +(提示:`built_in_cmds[cmd_name]` 返回能直接使用参数调用的函数引用的。) + +我们差不多准备好使用内置的 `cd` 函数了。最后一步是将 `cd` 函数添加到 `built_in_cmds` 映射中。 + +``` +... +# Import all built-in function references +from yosh.builtins import * + +... 
+ +# Register a built-in function to built-in command hash map +def register_command(name, func): + built_in_cmds[name] = func + + +# Register all built-in commands here +def init(): + register_command("cd", cd) + + +def main(): + # Init shell before starting the main loop + init() + shell_loop() +``` + +我们定义了 `register_command` 函数,以添加一个内置函数到我们内置的命令哈希映射。接着,我们定义 `init` 函数并且在这里注册内置的 `cd` 函数。 + +注意这行 `register_command("cd", cd)` 。第一个参数为命令的名字。第二个参数为一个函数引用。为了能够让第二个参数 `cd` 引用到 `yosh/builtins/cd.py` 中的 `cd` 函数引用,我们必须将以下这行代码放在 `yosh/builtins/__init__.py` 文件中。 + +``` +from yosh.builtins.cd import * +``` + +因此,在 `yosh/shell.py` 中,当我们从 `yosh.builtins` 导入 `*` 时,我们可以得到已经通过 `yosh.builtins` 导入的 `cd` 函数引用。 + +我们已经准备好了代码。让我们尝试在 `yosh` 同级目录下以模块形式运行我们的 shell,`python -m yosh.shell`。 + +现在,`cd` 命令可以正确修改我们的 shell 目录了,同时非内置命令仍然可以工作。非常好! + +#### exit + +最后一块终于来了:优雅地退出。 + +我们需要一个可以修改 shell 状态为 `SHELL_STATUS_STOP` 的函数。这样,shell 循环可以自然地结束,shell 将到达终点而退出。 + +和 `cd` 一样,如果我们在子进程中 fork 和执行 `exit` 函数,其对父进程是不起作用的。因此,`exit` 函数需要成为一个 shell 内置函数。 + +让我们从这开始:在 `builtins` 目录下创建一个名为 `exit.py` 的新文件。 + +``` +yosh_project +|-- yosh + |-- builtins + | |-- __init__.py + | |-- cd.py + | |-- exit.py + |-- __init__.py + |-- constants.py + |-- shell.py +``` + +`exit.py` 定义了一个 `exit` 函数,该函数仅仅返回一个可以退出主循环的状态。 + +``` +from yosh.constants import * + + +def exit(args): + return SHELL_STATUS_STOP +``` + +然后,我们导入位于 `yosh/builtins/__init__.py` 文件的 `exit` 函数引用。 + +``` +from yosh.builtins.cd import * +from yosh.builtins.exit import * +``` + +最后,我们在 `shell.py` 中的 `init()` 函数注册 `exit` 命令。 + + +``` +... + +# Register all built-in commands here +def init(): + register_command("cd", cd) + register_command("exit", exit) + +... +``` + +到此为止! 
+ +尝试执行 `python -m yosh.shell`。现在你可以输入 `exit` 优雅地退出程序了。 + +### 最后的想法 + +我希望你能像我一样享受创建 `yosh` (**y**our **o**wn **sh**ell)的过程。但我的 `yosh` 版本仍处于早期阶段。我没有处理一些会使 shell 崩溃的极端状况。还有很多我没有覆盖的内置命令。为了提高性能,一些非内置命令也可以实现为内置命令(避免新进程创建时间)。同时,大量的功能还没有实现(请看 [公共特性](http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html) 和 [不同特性](http://www.tldp.org/LDP/intro-linux/html/x12249.html)) + +我已经在 github.com/supasate/yosh 中提供了源代码。请随意 fork 和尝试。 + +现在该是创建你真正自己拥有的 Shell 的时候了。 + +Happy Coding! + +-------------------------------------------------------------------------------- + +via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ + +作者:[Supasate Choochaisri][a] +译者:[cposture](https://github.com/cposture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://disqus.com/by/supasate_choochaisri/ +[1]: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ +[2]: http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html +[3]: http://www.tldp.org/LDP/intro-linux/html/x12249.html +[4]: https://github.com/supasate/yosh From 010d4439c5bd850854785d736d885fe844fceb7b Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 16 Jul 2016 21:29:27 +0800 Subject: [PATCH 157/471] PUB:20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options @GitFuture --- ... 
16.04 And Use Its Command line Options.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) rename {translated/tech => published}/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md (66%) diff --git a/translated/tech/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md b/published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md similarity index 66% rename from translated/tech/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md rename to published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md index fec5b55c6c..b177995009 100644 --- a/translated/tech/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md +++ b/published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md @@ -1,17 +1,17 @@ -在 Ubuntu 16.04 上安装使用 VBoxManage 以及 VBoxManage 命令行选项的用法 +在 Linux 上安装使用 VirtualBox 的命令行管理界面 VBoxManage ================= -VirtualBox 拥有一套命令行工具,然后你可以使用 VirtualBox 的命令行界面 (CLI) 对远端无界面的服务器上的虚拟机进行管理操作。在这篇教程中,你将会学到如何在没有 GUI 的情况下使用 VBoxManage 创建、启动一个虚拟机。VBoxManage 是 VirtualBox 的命令行界面,你可以在你的主机操作系统的命令行中来用它实现对 VirtualBox 的所有操作。VBoxManage 拥有图形化用户界面所支持的全部功能,而且它支持的功能远不止这些。它提供虚拟引擎的所有功能,甚至包含 GUI 还不能实现的那些功能。如果你想尝试不同的用户界面而不仅仅是 GUI,或者更改虚拟机更多高级和实验性的配置,那么你就需要用到命令行。 +VirtualBox 拥有一套命令行工具,你可以使用 VirtualBox 的命令行界面 (CLI) 对远程无界面的服务器上的虚拟机进行管理操作。在这篇教程中,你将会学到如何在没有 GUI 的情况下使用 VBoxManage 创建、启动一个虚拟机。VBoxManage 是 VirtualBox 的命令行界面,你可以在你的主机操作系统的命令行中用它来实现对 VirtualBox 的所有操作。VBoxManage 拥有图形化用户界面所支持的全部功能,而且它支持的功能远不止这些。它提供虚拟引擎的所有功能,甚至包含 GUI 还不能实现的那些功能。如果你想尝试下不同的用户界面而不仅仅是 GUI,或者更改虚拟机更多高级和实验性的配置,那么你就需要用到命令行。 -当你想要在 VirtualBox 上创建或运行虚拟机时,你会发现 VBoxManage 非常有用,你只需要使用远程主机的终端就够了。这对于服务器来说是一种常见的情形,因为在服务器上需要进行虚拟机的远程操作。 +当你想要在 VirtualBox 上创建或运行虚拟机时,你会发现 VBoxManage 
非常有用,你只需要使用远程主机的终端就够了。这对于需要远程管理虚拟机的服务器来说是一种常见的情形。

### 准备工作

-在开始使用 VBoxManage 的命令行工具前,确保在运行着 Ubuntu 16.04 的服务器上,你拥有超级用户的权限或者你能够使用 sudo 命令,而且你已经在服务器上安装了 Oracle Virtual Box。 然后你需要安装 VirtualBox 扩展包,这是运行远程桌面环境,访问无界面启动虚拟机所必须的。(headless的翻译拿不准,翻译为无界面启动)
+在开始使用 VBoxManage 的命令行工具前,确保在运行着 Ubuntu 16.04 的服务器上,你拥有超级用户的权限或者你能够使用 sudo 命令,而且你已经在服务器上安装了 Oracle Virtual Box。 然后你需要安装 VirtualBox 扩展包,这是运行 VRDE 远程桌面环境,访问无界面虚拟机所必须的。

### 安装 VBoxManage

-通过 [Virtual Box Download Page][1] 这个链接,你能够获取你所需要的软件扩展包的最新版本,扩展包的版本和你安装的 VirtualBox 版本需要一致!
+通过 [Virtual Box 下载页][1] 这个链接,你能够获取你所需要的软件扩展包的最新版本,扩展包的版本和你安装的 VirtualBox 版本需要一致!

![](http://linuxpitstop.com/wp-content/uploads/2016/06/12.png)

@@ -71,11 +71,11 @@ $ VBoxManage modifyvm Ubuntu10.10 --memory 512
$ VBoxManage storagectl Ubuntu16.04 --name IDE --add ide --controller PIIX4 --bootable on
```

-这里的 “storagect1” 是给虚拟机创建存储控制器的,“--name” 指定了虚拟机里需要创建、更改或者移除的存储控制器的名称。“--add” 选项指明系统总线类型,可选的选项有 ide / sata / scsi / floppy,存储控制器必须要连接到系统总线。“--controller” 选择主板的类型,主板需要根据需要的存储控制器选择,可选的选项有 LsiLogic / LSILogicSAS / BusLogic / IntelAhci / PIIX3 / PIIX4 / ICH6 / I82078。最后的 “--bootable” 表示控制器是否可以引导。
+这里的 “storagectl” 是给虚拟机创建存储控制器的,“--name” 指定了虚拟机里需要创建、更改或者移除的存储控制器的名称。“--add” 选项指明存储控制器所需要连接到的系统总线类型,可选的选项有 ide / sata / scsi / floppy。“--controller” 选择主板的类型,主板需要根据需要的存储控制器选择,可选的选项有 LsiLogic / LSILogicSAS / BusLogic / IntelAhci / PIIX3 / PIIX4 / ICH6 / I82078。最后的 “--bootable” 表示控制器是否可以引导系统。

-上面的命令创建了叫做 IDE 的存储控制器。然后虚拟设备就能通过 “storageattach” 命令连接到控制器。
+上面的命令创建了叫做 IDE 的存储控制器。之后虚拟介质就能通过 “storageattach” 命令连接到该控制器。

-然后运行下面这个命令来创建一个叫做 SATA 的存储控制器,它将会连接到硬盘镜像上。
+然后运行下面这个命令来创建一个叫做 SATA 的存储控制器,它将会连接到之后的硬盘镜像上。

```
$ VBoxManage storagectl Ubuntu16.04 --name SATA --add sata --controller IntelAhci --bootable on
@@ -87,7 +87,7 @@ $ VBoxManage storagectl Ubuntu16.04 --name SATA --add sata --controller IntelAhc
$ VBoxManage storageattach Ubuntu16.04 --storagectl SATA --port 0 --device 0 --type hdd --medium "your_iso_filepath"
```

-用媒体把 SATA 存储控制器连接到 Ubuntu16.04 虚拟机中,也就是之前创建的虚拟硬盘镜像里。
+这将把 SATA 存储控制器及介质(比如之前创建的虚拟磁盘镜像)连接到 Ubuntu16.04 虚拟机中。

运行下面的命令添加像网络连接,音频之类的功能。

@@ -120,9 +120,9 @@ $VBoxManage controlvm

![](http://linuxpitstop.com/wp-content/uploads/2016/06/81.png)

-完结!
+### 完结

-从这篇文章中,我们了解了 Oracle Virtual Box 中一个十分实用的工具,就是 VBoxManage,包含了 VBoxManage 的安装和在 Ubuntu 16.04 系统上的使用。文章包含详细的教程, 通过 VBoxManage 中实用的命令来创建和管理虚拟机。希望这篇文章对你有帮助,另外别忘了分享你的评论或者建议。
+从这篇文章中,我们了解了 Oracle Virtual Box 中一个十分实用的工具 VBoxManage,文章包含了 VBoxManage 的安装和在 Ubuntu 16.04 系统上的使用,包括通过 VBoxManage 中实用的命令来创建和管理虚拟机。希望这篇文章对你有帮助,另外别忘了分享你的评论或者建议。

--------------------------------------------------------------------------------

via: http://linuxpitstop.com/install-and-use-command-line-tool-vboxmanage-on-ubu

作者:[Kashif][a]
译者:[GitFuture](https://github.com/GitFuture)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From bf4f2549256f9facb25f9cb6d8fd8a2a1039dcfd Mon Sep 17 00:00:00 2001
From: cposture
Date: Sun, 17 Jul 2016 09:03:58 +0800
Subject: [PATCH 158/471] =?UTF-8?q?=E7=BD=91=E4=B8=8A=E5=B7=B2=E6=9C=89?=
 =?UTF-8?q?=E7=BF=BB=E8=AF=91?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 sources/tech/20160309 Let’s Build A Web Server. Part 1.md | 1 -
 sources/tech/20160406 Let’s Build A Web Server. Part 2.md | 1 -
 2 files changed, 2 deletions(-)

diff --git a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md
index 47f8bfdcc7..4c8048786d 100644
--- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md
+++ b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md
@@ -1,4 +1,3 @@
-Translating by cposture 2016.07.13
Let’s Build A Web Server. Part 1.
=====================================

diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. 
Part 2.md index 5cba11dd64..482352ac9a 100644 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -1,4 +1,3 @@ -Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 2. =================================== From 477810acabe5817ed8d5e1e2a4d486b8dd1c393e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sun, 17 Jul 2016 10:58:41 +0800 Subject: [PATCH 159/471] =?UTF-8?q?=E7=BD=91=E4=B8=8A=E5=B7=B2=E6=9C=89?= =?UTF-8?q?=E7=BF=BB=E8=AF=91=20(#4188)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Translated by cposture * Translating by cposture * translating partly * translating partly 75 * Translated by cposture * Translated by cposture * Translating by cposture * Translating partly 50 * Translting partly 70 * Translating partly 75:wq * Translated by cposture * Translated by cposture * 网上已有翻译 --- sources/tech/20160309 Let’s Build A Web Server. Part 1.md | 1 - sources/tech/20160406 Let’s Build A Web Server. Part 2.md | 1 - 2 files changed, 2 deletions(-) diff --git a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md index 47f8bfdcc7..4c8048786d 100644 --- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md +++ b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md @@ -1,4 +1,3 @@ -Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 1. ===================================== diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md index 5cba11dd64..482352ac9a 100644 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -1,4 +1,3 @@ -Translating by cposture 2016.07.13 Let’s Build A Web Server. Part 2. 
=================================== From c601a53f765e1a597b577fc620b1a637e98a8d72 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 17 Jul 2016 16:46:39 +0800 Subject: [PATCH 160/471] PUB:20160304 Microservices with Python RabbitMQ and Nameko @mr-ping --- ...ervices with Python RabbitMQ and Nameko.md | 35 ++++++++----------- 1 file changed, 14 insertions(+), 21 deletions(-) rename {translated/tech => published}/20160304 Microservices with Python RabbitMQ and Nameko.md (81%) diff --git a/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md b/published/20160304 Microservices with Python RabbitMQ and Nameko.md similarity index 81% rename from translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md rename to published/20160304 Microservices with Python RabbitMQ and Nameko.md index 8b88bfceec..d70afff960 100644 --- a/translated/tech/20160304 Microservices with Python RabbitMQ and Nameko.md +++ b/published/20160304 Microservices with Python RabbitMQ and Nameko.md @@ -1,4 +1,4 @@ -基于 Python、 RabbitMQ 和 Nameko 的微服务 +用 Python、 RabbitMQ 和 Nameko 实现微服务 ============================================== >"微服务是一股新浪潮" - 现如今,将项目拆分成多个独立的、可扩展的服务是保障代码演变的最好选择。在 Python 的世界里,有个叫做 “Nameko” 的框架,它将微服务的实现变得简单并且强大。 @@ -8,19 +8,17 @@ > 在最近的几年里,“微服务架构”如雨后春笋般涌现。它用于描述一种特定的软件应用设计方式,这种方式使得应用可以由多个独立部署的服务以服务套件的形式组成。 - M. Fowler -推荐各位读一下 [Fowler's posts][1] 以理解它背后的原理。 - +推荐各位读一下 [Fowler 的文章][1] 以理解它背后的原理。 #### 好吧,那它究竟意味着什么呢? 
-简单来说,**微服务架构**可以将你的系统拆分成多个负责不同任务的小块儿,它们之间互不依赖,各自只提供用于通讯的通用指向。这个指向通常是已经将通讯协议和接口定义好的消息队列。 - +简单来说,**微服务架构**可以将你的系统拆分成多个负责不同任务的小的(单一上下文内)功能块(responsibilities blocks),它们彼此互无感知,各自只提供用于通讯的通用指向(common point)。这个指向通常是已经将通讯协议和接口定义好的消息队列。 #### 这里给大家提供一个真实案例 ->案例的代码可以通过github: 访问,查看 service 和 api 文件夹可以获取更多信息。 +> 案例的代码可以通过 github: 访问,查看 service 和 api 文件夹可以获取更多信息。 -想象一下,你有一个 REST API ,这个 API 有一个端点(译者注:REST 风格的 API 可以有多个端点用于处理对同一资源的不同类型的请求)用来接受数据,并且你需要将接收到的数据进行一些运算。那么相比阻塞接口调用者的请求来说,异步实现此接口是一个更好的选择。你可以先给用户返回一个 "OK - 你的请求稍后会处理" 的状态,然后在后台任务中完成运算。 +想象一下,你有一个 REST API ,这个 API 有一个端点(LCTT 译注:REST 风格的 API 可以有多个端点用于处理对同一资源的不同类型的请求)用来接受数据,并且你需要将接收到的数据进行一些运算工作。那么相比阻塞接口调用者的请求来说,异步实现此接口是一个更好的选择。你可以先给用户返回一个 "OK - 你的请求稍后会处理" 的状态,然后在后台任务中完成运算。 同样,如果你想要在不阻塞主进程的前提下,在计算完成后发送一封提醒邮件,那么将“邮件发送”委托给其他服务去做会更好一些。 @@ -30,20 +28,18 @@ ![](http://brunorocha.org/static/media/microservices/micro_services.png) -### 用代码说话: +### 用代码说话 让我们将系统创建起来,在实践中理解它: - #### 环境 我们需要的环境: -- 运行良好的 RabbitMQ(译者注:[RabbitMQ][2]是一个流行的消息队列实现) +- 运行良好的 RabbitMQ(LCTT 译注:[RabbitMQ][2] 是一个流行的消息队列实现) - 由 VirtualEnv 提供的 Services 虚拟环境 - 由 VirtualEnv 提供的 API 虚拟环境 - #### Rabbit 在开发环境中使用 RabbitMQ 最简单的方式就是运行其官方的 docker 容器。在你已经拥有 Docker 的情况下,运行: @@ -56,10 +52,9 @@ docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:567 ![](http://brunorocha.org/static/media/microservices/RabbitMQManagement.png) - #### 服务环境 -现在让我们创建微服务来消费我们的任务。其中一个服务用来执行计算任务,另一个用来发送邮件。按以下步骤执行: +现在让我们创建微服务来满足我们的任务需要。其中一个服务用来执行计算任务,另一个用来发送邮件。按以下步骤执行: 在 Shell 中创建项目的根目录 @@ -82,7 +77,6 @@ $ source service_env/bin/activate (service_env)$ pip install yagmail ``` - #### 服务的代码 现在我们已经准备好了 virtualenv 所提供的虚拟环境(可以想象成我们的服务是运行在一个独立服务器上的,而我们的 API 运行在另一个服务器上),接下来让我们编码,实现 nameko 的 RPC 服务。 @@ -135,7 +129,7 @@ class Compute(object): 现在我们已经用以上代码定义好了两个服务,下面让我们将 Nameko RPC service 运行起来。 ->注意:我们会在控制台中启动并运行它。但在生产环境中,建议大家使用 supervisord 替代控制台命令。 +> 注意:我们会在控制台中启动并运行它。但在生产环境中,建议大家使用 supervisord 替代控制台命令。 在 Shell 中启动并运行服务 @@ -149,7 +143,7 @@ Connected to amqp://guest:**@127.0.0.1:5672// 
#### 测试 -在另外一个 Shell 中(使用相同的虚拟环境),用 nameko shell 进行测试: +在另外一个 Shell 中(使用相同的虚拟环境),用 nameko shell 进行测试: ``` (service_env)$ nameko shell --broker amqp://guest:guest@localhost @@ -178,19 +172,18 @@ Broker: amqp://guest:guest@localhost 3 ``` - ### 在 API 中调用微服务 在另外一个 Shell 中(甚至可以是另外一台服务器上),准备好 API 环境。 -用 virtualenv 工具创建并且激活一个虚拟环境(你也可以使用virtualenv-wrapper) +用 virtualenv 工具创建并且激活一个虚拟环境(你也可以使用 virtualenv-wrapper) ``` $ virtualenv api_env $ source api_env/bin/activate ``` -安装 Nameko, Flask 和 Flasgger +安装 Nameko、 Flask 和 Flasgger ``` (api_env)$ pip install nameko @@ -269,7 +262,7 @@ app.run(debug=True) ![](http://brunorocha.org/static/media/microservices/Flasgger_API_documentation.png) ->注意: 你可以在 shell 中查看到服务的运行日志,打印信息和错误信息。也可以访问 RabbitMQ 控制面板来查看消息在队列中的处理情况。 +> 注意: 你可以在 shell 中查看到服务的运行日志,打印信息和错误信息。也可以访问 RabbitMQ 控制面板来查看消息在队列中的处理情况。 Nameko 框架还为我们提供了很多高级特性,你可以从 获取更多的信息。 @@ -282,7 +275,7 @@ via: http://brunorocha.org/python/microservices-with-python-rabbitmq-and-nameko. 作者: [Bruno Rocha][a] 译者: [mr-ping](http://www.mr-ping.com) -校对: [校对者ID](https://github.com/校对者ID) +校对: [wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c4d4ce4e8bcbc6883ef7ca7d40e5c05ecab857d9 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 17 Jul 2016 17:36:09 +0800 Subject: [PATCH 161/471] PUB:20160511 An introduction to data processing with Cassandra and Spark @KevinSJ --- ...ata processing with Cassandra and Spark.md | 45 +++++++++++++++++ ...ata processing with Cassandra and Spark.md | 49 ------------------- 2 files changed, 45 insertions(+), 49 deletions(-) create mode 100644 published/20160511 An introduction to data processing with Cassandra and Spark.md delete mode 100644 translated/tech/20160511 An introduction to data processing with Cassandra and Spark.md diff --git a/published/20160511 An introduction to data processing with Cassandra and Spark.md b/published/20160511 An introduction to data processing with Cassandra and 
Spark.md new file mode 100644 index 0000000000..bec55c2e7c --- /dev/null +++ b/published/20160511 An introduction to data processing with Cassandra and Spark.md @@ -0,0 +1,45 @@ +Cassandra 和 Spark 数据处理一窥 +============================================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28) + +Apache Cassandra 数据库近来引起了很多的兴趣,这主要源于现代云端软件对于可用性及性能方面的要求。 + +那么,Apache Cassandra 是什么?它是一种为高可用性及线性可扩展性优化的分布式的联机交易处理 (OLTP) 数据库。具体说到 Cassandra 的用途时,可以想想你希望贴近用户的系统,比如说让我们的用户进行交互的系统、需要保证实时可用的程序等等,如:产品目录,物联网,医疗系统,以及移动应用。对这些程序而言,下线时间意味着利润降低甚至导致其他更坏的结果。Netfilix 是这个在 2008 年开源的项目的早期使用者,他们对此项目的贡献以及带来的成功让这个项目名声大噪。 + +Cassandra 于2010年成为了 Apache 软件基金会的顶级项目,并从此之后就流行起来。现在,只要你有 Cassadra 的相关知识,找工作时就能轻松不少。想想看,NoSQL 语言和开源技术能达到企业级 SQL 技术的高度,真让人觉得十分疯狂而又不可思议的。这引出了一个问题。是什么让它如此的流行? + +因为采用了[亚马逊发表的 Dynamo 论文][1]中率先提出的设计,Cassandra 有能力在大规模的硬件及网络故障时保持实时在线。由于采用了点对点模式,在没有单点故障的情况下,我们能幸免于机架故障甚至全网中断。我们能在不影响用户体验的前提下处理数据中心故障。一个能考虑到故障的分布式系统才是一个没有后顾之忧的分布式系统,因为老实说,故障是迟早会发生的。有了 Cassandra, 我们可以直面残酷的生活并将之融入数据库的结构和功能中。 + +我们能猜到你现在在想什么,“但我只有关系数据库相关背景,难道这样的转变不会很困难吗?”这问题的答案介于是和不是之间。使用 Cassandra 建立数据模型对有关系数据库背景的开发者而言是轻车熟路。我们使用表格来建立数据模型,并使用 CQL ( Cassandra 查询语言)来查询数据库。然而,与 SQL 不同的是,Cassandra 支持更加复杂的数据结构,例如嵌套和用户自定义类型。举个例子,当要储存对一个小猫照片的点赞数目时,我们可以将整个数据储存在一个包含照片本身的集合之中从而获得更快的顺序查找而不是建立一个独立的表。这样的表述在 CQL 中十分的自然。在我们照片表中,我们需要记录名字,URL以及给此照片点赞过的人。 + +![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png) + +在一个高性能系统中,毫秒级处理都能对用户体验和客户维系产生影响。昂贵的 JOIN 操作制约了我们通过增加不可预见的网络调用而扩容的能力。当我们将数据反范式化使其能通过尽可能少的请求就可获取时,我们即可从磁盘空间成本的降低中获益并获得可预期的、高性能应用。我们将反范式化同 Cassandra 一同介绍是因为它提供了很有吸引力的的折衷方案。 + +很明显,我们不会局限于对于小猫照片的点赞数量。Canssandra 是一款为高并发写入优化的方案。这使其成为需要时常吞吐数据的大数据应用的理想解决方案。实时应用和物联网方面的应用正在稳步增长,无论是需求还是市场表现,我们也会不断的利用我们收集到的数据来寻求改进技术应用的方式。 + +这就引出了我们的下一步,我们已经提到了如何以一种现代的、性价比高的方式储存数据,但我们应该如何获得更多的动力呢?具体而言,当我们收集到了所需的数据,我们应该怎样处理呢?如何才能有效的分析几百 TB 的数据呢?如何才能实时的对我们所收集到的信息进行反馈,并在几秒而不是几小时的时间利作出决策呢?Apache Spark 
将给我们答案。
+
+Spark 是大数据变革中的下一步。Hadoop 和 MapReduce 都是革命性的产品,它们让大数据界获得了分析所有我们所取得的数据的机会。Spark 对性能的大幅提升及对代码复杂度的大幅降低则将大数据分析提升到了另一个高度。通过 Spark,我们能大批量的处理计算,对流处理进行快速反应,通过机器学习作出决策,并通过图遍历来理解复杂的递归关系。这并非只是为你的客户提供快捷可靠的应用程序连接(Cassandra 已经提供了这样的功能),这更是能洞悉 Cassandra 所储存的数据,作出更加合理的商业决策并同时更好地满足客户需求。
+
+你可以看看 [Spark-Cassandra Connector][2](开源)并动手试试。若想了解更多关于这两种技术的信息,我们强烈推荐名为 [DataStax Academy][3] 的自学课程。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing
+
+作者:[Jon Haddad][a],[Dani Traphagen][b]
+译者:[KevinSJ](https://github.com/KevinSJ)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/rustyrazorblade
+[b]: https://opensource.com/users/dtrapezoid
+[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
+[2]: https://github.com/datastax/spark-cassandra-connector
+[3]: https://academy.datastax.com/
+[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162
+[5]: https://twitter.com/dtrapezoid
+[6]: https://twitter.com/rustyrazorblade
diff --git a/translated/tech/20160511 An introduction to data processing with Cassandra and Spark.md b/translated/tech/20160511 An introduction to data processing with Cassandra and Spark.md
deleted file mode 100644
index 0786996e39..0000000000
--- a/translated/tech/20160511 An introduction to data processing with Cassandra and Spark.md
+++ /dev/null
@@ -1,49 +0,0 @@
-Cassandra 和 Spark 数据处理入门
-==============================================================
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28)
-
-Apache Cassandra 数据库近来引起了很多的兴趣,这主要源于现代云端软件对于可用性及性能方面的要求。
-
-那么,Apache Cassandra 是什么?它是一种为高可用性及线性可扩展性优化的分布式的联机交易处理 (OLTP) 数据库。当人们想知道 Cassandra
的用途时,可以想想你想要的离客户近的系统。这j最终是我们的用户进行交互的系统。需要保证实时可用的程序:产品目录,IoT,医疗系统,以及移动应用。对这些程序而言,下线时间意味着利润降低甚至导致其他更坏的结果。Netfilix 是这个于2008年开源的项目的早期使用者,他们对此项目的贡献以及带来的成功让这个项目名声大噪。 - -Cassandra 于2010年成为了 Apache 软件基金会的顶级项目,在这之后就开始变得流行。现在,只要你有 Cassadra 的相关知识,找工作时就能轻松不少。光是想想一个 NoSQL 语言和开源技术能达到如此企业级 SQL 的高度就觉得这是十分疯狂而又不可思议的。这引出了一个问题。是什么让它如此的流行? - -因为采用了首先在[亚马逊发表的 Dynamo 论文][1]提出的设计,Cassandra 有能力在大规模的硬件及网络故障时保持实时在线。由于采用了点对点模式,在没有单点故障的情况下,我们能幸免于机架故障甚至完全网络分区。我们能在不影响用户体验的前提下处理数据中心故障。一个能考虑到故障的分布式系统才是一个没有后顾之忧的分布式系统,因为老实说,故障是迟早会发生的。有了 Cassandra, 我们可疑直面残酷的生活并将之融入数据库的结构和功能中。 - - - -我们能猜到你现在在想什么,“但我只有关系数据库相关背景,难道这样的转变不会很困难吗?"这问题的答案介于是和不是之间。使用 Cassandra 建立数据模型对有关系数据库背景的开发者而言是轻车熟路。我们使用表格来建立数据模型,并使用 CQL 或者 Cassandra 查询语言来查询数据库。然而,与 SQL 不同的是,Cassandra 支持更加复杂的数据结构,例如多重和用户自定义类型。举个例子,当要储存对一个小猫照片的点赞数目时,我们可以将整个数据储存在一个包含照片本身的集合之中从而获得更快的顺序查找而不是建立一个独立的表。这样的表述在 CQL 中十分的自然。在我们照片表中,我们需要记录名字,URL以及给此照片点赞过的人。 - -![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png) - -在一个高性能系统中,毫秒对用户体验和客户保留都能产生影响。昂贵的 JOIN 制约了我们通过增加不可预见的网络调用而扩容的能力。当我们将数据反规范化使其能在尽可能少的请求中被获取到时,我们即可从磁盘空间花费的降低中获益并获得可预测的,高性能应用。我们将反规范化同 Cassandra 一同介绍是因为它提供了很有吸引力的的折衷方案。 - -很明显,我们不会局限于对于小猫照片的点赞数量。Canssandra 是一款个为并发高写入优化的方案。这使其成为需要时常吞吐数据的大数据应用的理想解决方案。市场上的时序和 IoT 的使用场景正在以稳定的速度在需求和亮相方面增加,我们也在不断探寻优化我们所收集到的数据以求提升我们的技术应用(注:这句翻的非常别扭,求校队) - - -这就引出了我们的下一步,我们已经提到了如何以一种现代的,性价比高的方式储存数据,但我们应该如何获得更多的马力呢?具体而言,当我们收集到了所需的数据,我们应该怎样处理呢?如何才能有效的分析几百 TB 的数据呢?如何才能在实时的对我们所收集到的信息进行反馈并在几秒而不是几小时的时间利作出决策呢?Apache Spark 将给我们答案。 - - -Spark 是大数据变革中的下一步。 Hadoop 和 MapReduce 都是革命性的产品,他们让大数据界获得了分析所有我们所取得的数据的机会。Spark 对性能的大幅提升及对代码复杂度的大幅降低则将大数据分析提升到了另一个高度。通过 Spark,我们能大批量的处理计算,对流处理进行快速反映,通过机器学习作出决策并理解通过对图的遍历理解复杂的递归关系。这并非只是为你的客户提供与快捷可靠的应用程序连接(Cassandra 已经提供了这样的功能),这更是能一探 Canssandra 所储存的数据并作出更加合理的商业决策同时更好地满足客户需求。 - -你可以看看 [Spark-Cassandra Connector][2] (open source) 并动手试试。若想了解更多关于这两种技术的信息,我们强烈推荐名为 [DataStax Academy][3] 的自学课程 - --------------------------------------------------------------------------------- - -via: 
https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing - -作者:[Jon Haddad][a],[Dani Traphagen][b] -译者:[KevinSJ](https://github.com/KevinSJ) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twitter.com/rustyrazorblade -[b]: https://opensource.com/users/dtrapezoid -[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf -[2]: https://github.com/datastax/spark-cassandra-connector -[3]: https://academy.datastax.com/ -[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162 -[5]: https://twitter.com/dtrapezoid -[6]: https://twitter.com/rustyrazorblade From 54a1329118987442f7a0b3cfa2d5f27d570e14c5 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 17 Jul 2016 23:18:23 +0800 Subject: [PATCH 162/471] PUB:Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files @wwy-hust --- ...sions to Filter Text or String in Files.md | 74 +++++++++++-------- 1 file changed, 42 insertions(+), 32 deletions(-) rename {translated/tech/awk => published}/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md (72%) diff --git a/translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md b/published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md similarity index 72% rename from translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md rename to published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md index 791c349f56..1c1b5577c9 100644 --- a/translated/tech/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md +++ b/published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md @@ -1,28 +1,28 @@ -如何使用Awk和正则表达式过滤文本或文件中的字符串 +awk 系列:如何使用 awk 
和正则表达式过滤文本或文件中的字符串 ============================================================================= ![](http://www.tecmint.com/wp-content/uploads/2016/04/Linux-Awk-Command-Examples.png) -当我们在 Unix/Linux 下使用特定的命令从字符串或文件中读取或编辑文本时,我们经常会尝试过滤输出以得到感兴趣的部分。这时正则表达式就派上用场了。 +当我们在 Unix/Linux 下使用特定的命令从字符串或文件中读取或编辑文本时,我们经常需要过滤输出以得到感兴趣的部分。这时正则表达式就派上用场了。 ### 什么是正则表达式? -正则表达式可以定义为代表若干个字符序列的字符串。它最重要的功能就是它允许你过滤一条命令或一个文件的输出,编辑文本或配置等文件的一部分。 +正则表达式可以定义为代表若干个字符序列的字符串。它最重要的功能之一就是它允许你过滤一条命令或一个文件的输出、编辑文本或配置文件的一部分等等。 ### 正则表达式的特点 正则表达式由以下内容组合而成: -- 普通的字符,例如空格、下划线、A-Z、a-z、0-9。 -- 可以扩展为普通字符的元字符,它们包括: +- **普通字符**,例如空格、下划线、A-Z、a-z、0-9。 +- 可以扩展为普通字符的**元字符**,它们包括: - `(.)` 它匹配除了换行符外的任何单个字符。 - - `(*)` 它匹配零个或多个在其之前的立即字符。 - - `[ character(s) ]` 它匹配任何由 character(s) 指定的一个字符,你可以使用连字符(-)代表字符区间,例如 [a-f]、[1-5]等。 + - `(*)` 它匹配零个或多个在其之前紧挨着的字符。 + - `[ character(s) ]` 它匹配任何由其中的字符/字符集指定的字符,你可以使用连字符(-)代表字符区间,例如 [a-f]、[1-5]等。 - `^` 它匹配文件中一行的开头。 - `$` 它匹配文件中一行的结尾。 - `\` 这是一个转义字符。 -你必须使用类似 awk 这样的文本过滤工具来过滤文本。你还可以把 awk 当作一个用于自身的编程语言。但由于这个指南的适用范围是关于使用 awk 的,我会按照一个简单的命令行过滤工具来介绍它。 +你必须使用类似 awk 这样的文本过滤工具来过滤文本。你还可以把 awk 自身当作一个编程语言。但由于这个指南的适用范围是关于使用 awk 的,我会按照一个简单的命令行过滤工具来介绍它。 awk 的一般语法如下: @@ -30,13 +30,13 @@ awk 的一般语法如下: # awk 'script' filename ``` -此处 `'script'` 是一个由 awk 使用并应用于 filename 的命令集合。 +此处 `'script'` 是一个由 awk 可以理解并应用于 filename 的命令集合。 -它通过读取文件中的给定的一行,复制该行的内容并在该行上执行脚本的方式工作。这个过程会在该文件中的所有行上重复。 +它通过读取文件中的给定行,复制该行的内容并在该行上执行脚本的方式工作。这个过程会在该文件中的所有行上重复。 该脚本 `'script'` 中内容的格式是 `'/pattern/ action'`,其中 `pattern` 是一个正则表达式,而 `action` 是当 awk 在该行中找到此模式时应当执行的动作。 -### 如何在 Linux 中使用 Awk 过滤工具 +### 如何在 Linux 中使用 awk 过滤工具 在下面的例子中,我们将聚焦于之前讨论过的元字符。 @@ -45,13 +45,14 @@ awk 的一般语法如下: 下面的例子打印文件 /etc/hosts 中的所有行,因为没有指定任何的模式。 ``` -# awk '//{print}'/etc/hosts +# awk '//{print}' /etc/hosts ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Command-Example.gif) ->Awk 打印文件中的所有行 -#### 结合模式使用 Awk +*awk 打印文件中的所有行* + +#### 结合模式使用 awk 在下面的示例中,指定了模式 `localhost`,因此 awk 将匹配文件 `/etc/hosts` 中有 `localhost` 的那些行。 @@ -60,22 +61,24 @@ awk 的一般语法如下: ``` 
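顺带一提,如果暂时不方便拿系统里真实的 /etc/hosts 做试验,也可以像下面这样用 printf 构造两行假设的主机记录,通过管道交给 awk 来验证同样的模式匹配(示例数据纯属虚构):

```shell
# 构造两行假设的主机记录,只过滤出包含 localhost 的那一行
printf '127.0.0.1 localhost\n192.168.1.10 fileserver\n' | awk '/localhost/{print}'
# 输出:
# 127.0.0.1 localhost
```

awk 逐行读取标准输入,只有匹配模式 `/localhost/` 的行才会执行 `print` 动作,效果与直接读取文件完全一致。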
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-Command-with-Pattern.gif) ->Awk 打印文件中匹配模式的行 -#### 在 Awk 模式中使用通配符 (.) +*awk 打印文件中匹配模式的行* + +#### 在 awk 模式中使用通配符 (.) 在下面的例子中,符号 `(.)` 将匹配包含 loc、localhost、localnet 的字符串。 -这里的意思是匹配 *** l 一些单个字符 c ***。 +这里的正则表达式的意思是匹配 **l一个字符c**。 ``` # awk '/l.c/{print}' /etc/hosts ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Wild-Cards.gif) ->使用 Awk 打印文件中匹配模式的字符串 -#### 在 Awk 模式中使用字符 (*) +*使用 awk 打印文件中匹配模式的字符串* + +#### 在 awk 模式中使用字符 (*) 在下面的例子中,将匹配包含 localhost、localnet、lines, capable 的字符串。 @@ -84,7 +87,8 @@ awk 的一般语法如下: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Match-Strings-in-File.gif) ->使用 Awk 匹配文件中的字符串 + +*使用 awk 匹配文件中的字符串* 你可能也意识到 `(*)` 将会尝试匹配它可能检测到的最长的匹配。 @@ -112,7 +116,7 @@ this is tecmint, where you get the best good tutorials, how tos, guides, tecmint this is tecmint, where you get the best good tutorials, how to's, guides, tecmint ``` -#### 结合集合 [ character(s) ] 使用 Awk +#### 结合集合 [ character(s) ] 使用 awk 以集合 [al1] 为例,awk 将匹配文件 /etc/hosts 中所有包含字符 a 或 l 或 1 的字符串。 @@ -121,7 +125,8 @@ this is tecmint, where you get the best good tutorials, how to's, guides, tecmin ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matching-Character.gif) ->使用 Awk 打印文件中匹配的字符 + +*使用 awk 打印文件中匹配的字符* 下一个例子匹配以 `K` 或 `k` 开始头,后面跟着一个 `T` 的字符串: @@ -130,7 +135,8 @@ this is tecmint, where you get the best good tutorials, how to's, guides, tecmin ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matched-String-in-File.gif) ->使用 Awk 打印文件中匹配的字符 + +*使用 awk 打印文件中匹配的字符* #### 以范围的方式指定字符 @@ -149,11 +155,12 @@ awk 所能理解的字符: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-To-Print-Matching-Numbers-in-File.gif) ->使用 Awk 打印文件中匹配的数字 + +*使用 awk 打印文件中匹配的数字* 在上面的例子中,文件 /etc/hosts 中的所有行都至少包含一个单独的数字 [0-9]。 -#### 结合元字符 (\^) 使用 Awk +#### 结合元字符 (\^) 使用 awk 在下面的例子中,它匹配所有以给定模式开头的行: @@ -163,9 +170,10 @@ awk 所能理解的字符: ``` 
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-All-Matching-Lines-with-Pattern.gif) ->使用 Awk 打印与模式匹配的行 -#### 结合元字符 ($) 使用 Awk +*使用 awk 打印与模式匹配的行* + +#### 结合元字符 ($) 使用 awk 它将匹配所有以给定模式结尾的行: @@ -176,9 +184,10 @@ awk 所能理解的字符: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Given-Pattern-String.gif) ->使用 Awk 打印与模式匹配的字符串 -#### 结合转义字符 (\\) 使用 Awk +*使用 awk 打印与模式匹配的字符串* + +#### 结合转义字符 (\\) 使用 awk 它允许你将该转义字符后面的字符作为文字,即理解为其字面的意思。 @@ -193,11 +202,12 @@ awk 所能理解的字符: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Escape-Character.gif) ->结合转义字符使用 Awk + +*结合转义字符使用 awk* ### 总结 -以上内容并不是 Awk 命令用做过滤工具的全部,上述的示例均是 awk 的基础操作。在下面的章节中,我将进一步介绍如何使用 awk 的高级功能。感谢您的阅读,请在评论区贴出您的评论。 +以上内容并不是 awk 命令用做过滤工具的全部,上述的示例均是 awk 的基础操作。在下面的章节中,我将进一步介绍如何使用 awk 的高级功能。感谢您的阅读,请在评论区贴出您的评论。 -------------------------------------------------------------------------------- @@ -205,7 +215,7 @@ via: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files 作者:[Aaron Kili][a] 译者:[wwy-hust](https://github.com/wwy-hust) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0912e81191e4adcdfe7dda75e37e2645b23ae64b Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 17 Jul 2016 23:31:58 +0800 Subject: [PATCH 163/471] PUB:Part 2 - How to Use Awk to Print Fields and Columns in File MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ictlyh @Cathon 这篇选稿正好和系列选题中重复了,我就综合校对了。@oska874 --- ...Awk to Print Fields and Columns in File.md | 0 ...Awk to Print Fields and Columns in File.md | 33 ++++++++++--------- 2 files changed, 18 insertions(+), 15 deletions(-) rename {translated/tech => published}/20160425 How to Use Awk to Print Fields and Columns in File.md (100%) rename {translated/tech/awk => published}/Part 2 - How to Use Awk to Print Fields and Columns in File.md (72%) diff --git 
a/translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md b/published/20160425 How to Use Awk to Print Fields and Columns in File.md similarity index 100% rename from translated/tech/20160425 How to Use Awk to Print Fields and Columns in File.md rename to published/20160425 How to Use Awk to Print Fields and Columns in File.md diff --git a/translated/tech/awk/Part 2 - How to Use Awk to Print Fields and Columns in File.md b/published/Part 2 - How to Use Awk to Print Fields and Columns in File.md similarity index 72% rename from translated/tech/awk/Part 2 - How to Use Awk to Print Fields and Columns in File.md rename to published/Part 2 - How to Use Awk to Print Fields and Columns in File.md index c0b526b925..69f372d099 100644 --- a/translated/tech/awk/Part 2 - How to Use Awk to Print Fields and Columns in File.md +++ b/published/Part 2 - How to Use Awk to Print Fields and Columns in File.md @@ -1,18 +1,19 @@ -如何使用 Awk 输出文本中的字段和列 +awk 系列:如何使用 awk 输出文本中的字段和列 ====================================================== -在 Awk 系列的这一节中,我们将看到 Awk 最重要的特性之一,字段编辑。 +在 Awk 系列的这一节中,我们将看到 awk 最重要的特性之一,字段编辑。 -需要知道的是,Awk 能够自动将输入的行,分隔为若干字段。每一个字段就是一组字符,它们和其他的字段由一个内部字段分隔符分隔开来。 +首先我们要知道,Awk 能够自动将输入的行,分隔为若干字段。每一个字段就是一组字符,它们和其他的字段由一个内部字段分隔符分隔开来。 ![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Print-Fields-and-Columns.png) ->Awk Print Fields and Columns -如果你熟悉 Unix/Linux 或者使用 [bash 脚本][1]编过程,那么你应该知道什么是内部字段分隔符(IFS)变量。Awk 中默认的 IFS 是制表符和空格。 +*Awk 输出字段和列* -Awk 中的字段分隔符的工作流程如下:当读到一行输入时,将它按照指定的 IFS 分割为不同字段,第一组字符就是字段一,可以通过 $1 来访问,第二组字符就是字段二,可以通过 $2 来访问,第三组字符就是字段三,可以通过 $3 来访问,以此类推,直到最后一组字符。 +如果你熟悉 Unix/Linux 或者懂得 [bash shell 编程][1],那么你应该知道什么是内部字段分隔符(IFS)变量。awk 中默认的 IFS 是制表符和空格。 -为了更好地理解 Awk 的字段编辑,让我们看一个下面的例子: +awk 中的字段分隔符的工作原理如下:当读到一行输入时,将它按照指定的 IFS 分割为不同字段,第一组字符就是字段一,可以通过 $1 来访问,第二组字符就是字段二,可以通过 $2 来访问,第三组字符就是字段三,可以通过 $3 来访问,以此类推,直到最后一组字符。 + +为了更好地理解 awk 的字段编辑,让我们看一个下面的例子: **例 1**:我创建了一个名为 tecmintinfo.txt 的文本文件。 @@ -22,7 +23,8 @@ Awk 中的字段分隔符的工作流程如下:当读到一行输入时,将 ``` 
![](http://www.tecmint.com/wp-content/uploads/2016/04/Create-File-in-Linux.png) ->在 Linux 上创建一个文件 + +*在 Linux 上创建一个文件* 然后在命令行中,我试着使用下面的命令从文本 tecmintinfo.txt 中输出第一个,第二个,以及第三个字段。 @@ -47,15 +49,16 @@ $ awk '//{print $1, $2, $3; }' tecmintinfo.txt TecMint.com is the ``` -需要记住而且非常重要的是,`($)` 在 Awk 和在 shell 脚本中的使用是截然不同的! +需要记住而且非常重要的是,`($)` 在 awk 和在 shell 脚本中的使用是截然不同的! -在 shell 脚本中,`($)` 被用来获取变量的值。而在 Awk 中,`($)` 只有在获取字段的值时才会用到,不能用于获取变量的值。 +在 shell 脚本中,`($)` 被用来获取变量的值。而在 awk 中,`($)` 只有在获取字段的值时才会用到,不能用于获取变量的值。 **例 2**:让我们再看一个例子,用到了一个名为 my_shoping.list 的包含多行的文件。 + ``` No Item_Name Unit_Price Quantity Price 1 Mouse #20,000 1 #20,000 -2 Monitor #500,000 1 #500,000 +2 Monitor #500,000 1 #500,000 3 RAM_Chips #150,000 2 #300,000 4 Ethernet_Cables #30,000 4 #120,000 ``` @@ -72,7 +75,7 @@ RAM_Chips #150,000 Ethernet_Cables #30,000 ``` -可以看到上面的输出不够清晰,Awk 还有一个 `printf` 的命令,可以帮助你将输出格式化。 +可以看到上面的输出不够清晰,awk 还有一个 `printf` 的命令,可以帮助你将输出格式化。 使用 `printf` 来格式化 Item_Name 和 Unit_Price 的输出: @@ -88,7 +91,7 @@ Ethernet_Cables #30,000 ### 总结 -使用 Awk 过滤文本或字符串时,字段编辑的功能是非常重要的。它能够帮助你从一个表的数据中得到特定的列。一定要记住的是,Awk 中 `($)` 操作符的用法与其在 shell 脚本中的用法是不同的! +使用 awk 过滤文本或字符串时,字段编辑的功能是非常重要的。它能够帮助你从一个表的数据中得到特定的列。一定要记住的是,awk 中 `($)` 操作符的用法与其在 shell 脚本中的用法是不同的! 
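作为补充,下面用一小段内联的示例数据(取自上文购物清单的前两行,手工内联,仅为演示用的假设数据)把字段访问和 `printf` 格式化串起来,可以直接在任何带 awk 的系统上运行:

```shell
# 两列示例数据:商品名($1)与单价($2),用 printf 按 10 个字符宽度左对齐输出
printf 'Mouse #20,000\nMonitor #500,000\n' | awk '{printf "%-10s %s\n", $1, $2}'
# 输出:
# Mouse      #20,000
# Monitor    #500,000
```

其中 `%-10s` 表示把第一个字段左对齐填充到 10 个字符宽,这正是 `print` 做不到、而 `printf` 擅长的格式化输出。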
希望这篇文章对您有所帮助。如有任何疑问,可以在评论区域发表评论。 @@ -97,8 +100,8 @@ Ethernet_Cables #30,000 via: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/ 作者:[Aaron Kili][a] -译者:[Cathon](https://github.com/Cathon) -校对:[校对者ID](https://github.com/校对者ID) +译者:[Cathon](https://github.com/Cathon),[ictlyh](https://github.com/ictlyh) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 4de2a5d1ca8ef3163fce13695db35c3e4e227d7f Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Mon, 18 Jul 2016 00:48:56 +0800 Subject: [PATCH 164/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20160531 The Anatomy of a Linux User.md | 57 ------------------ .../20160531 The Anatomy of a Linux User.md | 58 +++++++++++++++++++ 2 files changed, 58 insertions(+), 57 deletions(-) delete mode 100644 sources/talk/20160531 The Anatomy of a Linux User.md create mode 100644 translated/talk/20160531 The Anatomy of a Linux User.md diff --git a/sources/talk/20160531 The Anatomy of a Linux User.md b/sources/talk/20160531 The Anatomy of a Linux User.md deleted file mode 100644 index b403758f2a..0000000000 --- a/sources/talk/20160531 The Anatomy of a Linux User.md +++ /dev/null @@ -1,57 +0,0 @@ -vim-kakali translating - -The Anatomy of a Linux User -================================ - - -**Some new GNU/Linux users understand right away that Linux isn’t Windows. Others never quite get it. The best distro designers strive to keep both in mind.** - -### The Heart of Linux - -Nicky isn’t outwardly remarkable in any way. She’s a thirtysomething who decided to go back to school later in life than most. She spent six years in the Navy until she decided a job offer from an old friend would be a better bet than a career in the armed forces. That happens a lot in any of the post-war military service branches. 
It was at that job where I met her. She was the regional manager for an eight state trucking broker and I was driving for a meat packing outfit in Dallas. - -![](http://i2.wp.com/fossforce.com/wp-content/uploads/2016/05/anatomy.jpg?w=525) - -We became good friends in 2006, Nicky and me. She’s an outgoing spirit, curious about almost anyone whose path she crosses. We had an ongoing Friday night date to go fight in an indoor laser combat arena. It wasn’t rare for us to burn through three 30 minute sessions in a row. Maybe it wasn’t as cheap as a paint ball arena, but it was climate controlled and had a horror game feel to it. It was during one of those outings that she asked me if I could fix her computer. - -She knew about my efforts to get computers into the homes of disadvantaged kids and I kidded her about paying into Bill Gates’ 401K plan when she complained about her computer becoming too slow. Nicky figured this was as good a time as any to see what Linux was all about. - -Her computer was a decent machine, a mid 2005 Asus desktop with a Dell 19″ monitor. Unfortunately, it had all the obligatory toolbars and popups that a Windows computer can collect when not properly tended. After getting all of the files from the computer, we began the process of installing Linux. We sat together during the install process and I made sure she understood the partitioning process. Inside of an hour, she had a bright new and shiny PCLinuxOS desktop. - -She remarked often, as she navigated her way through her new system, at how beautiful the system looked. She wasn’t mentioning this as an aside; she was almost hypnotized by the sleek beauty in front of her. She remarked that her screen “shimmered” with beauty. That’s something I took away from our install session and have made sure to deploy on every Linux computer I’ve installed since. I want the screen to shimmer for everyone. 
- -The first week or so, she called or emailed me with the usual questions, but the one that was probably the most important was wanting to know how to save her OpenOffice documents so colleagues could read them. This is key when teaching anyone Linux or Open/LibreOffice. Most people just obey the first popup, allow the document to be saved in Open Document Format and get their fingers bit in the process. - -There was a story going around a year or so ago about a high school kid who claimed he flunked an exam when his professor couldn’t open the file containing his paper. It made for some blustery comments from readers who couldn’t decide who was more of a moron, the kid for not having a clue or his professor for not having a ummm… clue of his own. - -I know some college professors and each and every one of them could figure out how to open an ODF file. Heck, even as much as Microsoft can be grade A, blue-ribbon proprietary jerks, I think Microsoft Office has been able to open an ODT or ODF file for a while now. I can’t say for sure since I haven’t used Microsoft Office much since 2005. - -Even in the bad ol’ days, when Microsoft was openly and flagrantly shoving their way onto enterprise desktops via their vendor lock-in, I never had a problem when conducting business or collaborating with users of Microsoft Office, because I became pro-active and never assumed. I would email the person or people I was to work with and ask what version of Office they were using. From that information, I could make sure to save my documents in a format they could readily open and read. - -But back to Nicky, who put a lot of time into learning about her Linux computer. I was surprised by her enthusiasm. - -Learning how to use Linux on the desktop is made much simpler when the person doing the learning realizes that all habits and tools for using Windows are to be left at the door. 
Even after telling our Reglue kids this, more often than not when I come back to do a check-up with them there is some_dodgy_file.exe on the desktop or in the download folder. - -While we are in the general vicinity of discussing files, let’s talk about doing updates. For a long time I was dead set against having multiple program installers or updaters on the same computer. In the case of Mint, it was decided to disable the update ability completely within Synaptic and that frosted my flakes. But while for us older folks dpkg and apt are our friends, wise heads have prevailed and have come to understand that the command line doesn’t often seem warm and welcoming to new users. - -I frothed at the mouth and raged against the machine over the crippling of Synaptic until it was ‘splained to me. Do you remember when you were just starting out and had full admin rights to your brand new Linux install? Remember when you combed through the massive amounts of software listed in Synaptic? Remember how you began check marking every cool program you found? Do you remember how many of those cool programs started with the letters “lib”? - -Yeah, me too. I installed and broke a few brand new installations until I found out that those LIB files were the nuts and bolts of the application and not the application itself. That’s why the genius’ behind Linux Mint and Ubuntu have created smart, pretty-to-look-at and easy-to-use application installers. Synaptic is still there for us old heads, but for the people coming up behind us, there are just too many ways to leave a system open to major borks by installing lib files and the like. In the new installers, those files are tucked away and not even shown to the user. And really, that’s the way it should be. - -Unless you are charging for support calls that is. - -There are a lot of smarts built into today’s Linux distros and I applaud those folks because they make my job easier. Not every new user is a Nicky. 
She was pretty much an install and forget project for me, and she is in the minority. The majority of new Linux users can be needy at times. - -That’s okay. They are the ones who will be teaching their kids how to use Linux. - --------------------------------------------------------------------------------- - -via: http://fossforce.com/2016/05/anatomy-linux-user/ - -作者:[Ken Starks][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://linuxlock.blogspot.com/ diff --git a/translated/talk/20160531 The Anatomy of a Linux User.md b/translated/talk/20160531 The Anatomy of a Linux User.md new file mode 100644 index 0000000000..a404697347 --- /dev/null +++ b/translated/talk/20160531 The Anatomy of a Linux User.md @@ -0,0 +1,58 @@ + + +一个 Linux 用户的故事 +================================ + + + +**一些新的 GNU/Linux 用户都很清楚的知道 Linux 不是 Windows .其他很多人都不是很清楚的知道.最好的发行版设计者努力保持新的思想** + +### Linux 的核心 + +不管怎么说,Nicky 都不是那种表面上看起来很值得注意的人.她已经三十岁了,却决定回到学校学习.她在海军待了6年时间直到她的老友给她一份新的工作,而且这份工作比她在军队的工作还好.在过去的军事分支服务期间发生了很多事情.我认识她还是她在军队工作的时候.她是8个州的货车运输业协商区域的管理者.那会我在达拉斯跑肉品包装工具的运输. +![](http://i2.wp.com/fossforce.com/wp-content/uploads/2016/05/anatomy.jpg?w=525) + + +Nicky 和我在 2006 年成为了好朋友.她很外向,并且有很强的好奇心,她几乎走过每个运输商走的路线.一个星期五晚上我们在灯光下有一场很激烈的争论,像这样的 达 30 分钟的争论在我们之间并不少见.或许这并不比油漆一个球场省事,但是气氛还是可以控制的住的,感觉就像是一个很恐怖的游戏.在这次争论的时候她问到我是否可以修复她的电脑. + +她知道我为了一些贫穷的孩子能拥有他们自己的电脑做出的努力,当她抱怨她的电脑很慢的时候,我提到了参加 Bill Gate 的 401k 计划[译注:这个计划是为那些为内部收益代码做过贡献的人们提供由税收定义的养老金账户.].Nicky 说这是了解 Linux 的最佳时间. + +她的电脑相当好,它是一个带有 Dell 19 显示器的华硕电脑.不好的是,当不需要一些东西的时候,这个 Windows 电脑会强制性的显示所有的工具条和文本菜单.我们把电脑上的文件都做了备份之后就开始安装 Linux 了.我们一起完成了安装,并且我确信她知道了如何分区.不到一个小时,她的电脑上就有了一个漂亮的 PCLinuxOS 桌面. + +她会经常谈论她使用新系统的方法,系统看起来多么漂亮.她不曾提及的是,她几乎被她面前的井然有序的漂亮桌面吸引.她说她的桌面带有漂亮的"微光".这是我在安装系统期间特意设置的.我每次在安装 Linux 的时候都会进行这样的配置.我想让每个 Linux 用户的桌面都配置这个漂亮的微光. 
+
+大概在第一周左右,她打电话或者发邮件问了我一些常见的问题,其中最重要的一个,是想知道怎样保存她的 OpenOffice 文档,好让同事们也能读取。教任何人使用 Linux 或者 Open/LibreOffice 的时候,这一点都是关键。大多数人只会顺从第一个弹出的提示,把文档保存为开放文档格式(Open Document Format),结果在这个过程中吃了苦头。
+
+
+大约一年前,流传着这样一个故事:一个高中生声称他的期末考试没有通过,因为教授打不开存有他论文的文件。读者们为此发表了许多激烈的评论,却拿不准谁更糊涂:是全然不懂的学生,还是同样全然不懂的教授。
+
+我认识一些大学教授,他们每一个人都知道怎么打开 ODF 文件。见鬼,即便微软算得上是头号的专有软件恶霸,我想 Microsoft Office 现在也早就能打开 ODT 或 ODF 文件了。不过我不敢确定,毕竟从 2005 年起我就很少使用 Microsoft Office 了。
+
+甚至在那些糟糕的旧日子里,当微软借助厂商锁定明目张胆地挤进企业桌面的时候,我同 Microsoft Office 用户做生意、展开合作也从来没有遇到过问题,因为我会未雨绸缪,从不想当然。我会先给要合作的人发邮件,询问他们使用的 Office 版本,这样就能确保以他们可以直接打开、阅读的格式保存我的文档。
+
+再说回 Nicky,她花了很多时间学习她的 Linux 系统,她的热情让我感到惊讶。
+
+当学习者意识到应当把使用 Windows 的所有习惯和工具统统留在门外时,学习桌面 Linux 就会简单得多。即便我们这样叮嘱过 Reglue 的孩子们,当我回访检查时,还是经常会在桌面或下载文件夹里发现 some_dodgy_file.exe 这样的文件。
+
+既然说到了文件,我们也顺便谈谈更新。很长时间以来,我都坚决反对在同一台电脑上安装多个软件安装器或更新器。以 Mint 为例,它决定在 Synaptic 中完全禁用更新功能,这让我非常恼火。不过,虽然对我们这些老家伙而言 dpkg 和 apt 是老朋友,明智的人们还是达成了共识:对新用户来说,命令行往往显得不那么亲切友好。
+
+在有人向我解释清楚之前,我曾对 Synaptic 被削弱这件事暴跳如雷。你还记得你刚刚入门、对崭新的 Linux 系统拥有完全管理员权限的时候吗?记得你在 Synaptic 中翻看海量的软件列表的时候吗?记得你开始把发现的每一个很酷的程序都打上勾的时候吗?你还记得这些程序里有多少是以“lib”开头的吗?
+
+是的,我也是这样。我装坏过好几个全新的系统,后来才发现那些 lib 文件是应用程序的螺母和螺栓,而不是应用程序本身。这就是为什么 Linux Mint 和 Ubuntu 背后的天才们开发出了智能、漂亮又易用的应用程序安装器。Synaptic 依然为我们这些老家伙保留着,但对后来的人而言,随意安装 lib 文件之类的做法实在有太多途径把系统彻底搞坏。在新的安装器中,这些文件被妥善地收了起来,甚至根本不会展示给用户。说真的,本来就应该这样。
+
+当然,除非你是靠技术支持电话收费挣钱的。
+
+如今的 Linux 发行版里有许多聪明的设计,我为这些开发者鼓掌,因为他们让我的工作轻松了不少。不是每一个新用户都像 Nicky 这样。对我而言,她基本上是个“装好就不用再管”的项目,而这样的用户是少数。大多数新的 Linux 用户时不时都会需要帮助。
+
+这没什么不好,他们正是将来要教自己的孩子使用 Linux 的人。
+-------------------------------------------------------------------------------- + +via: http://fossforce.com/2016/05/anatomy-linux-user/ + +作者:[Ken Starks][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://linuxlock.blogspot.com/ From 54f396f110e974810a9a09db5bc811c027f04ea6 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Mon, 18 Jul 2016 01:02:23 +0800 Subject: [PATCH 165/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rvalds Talks IoT Smart Devices Security Concerns and More.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md index 8744d5d38c..156fda88db 100644 --- a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md +++ b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md @@ -1,3 +1,5 @@ +vim-kakali translating + Linus Torvalds Talks IoT, Smart Devices, Security Concerns, and More[video] =========================================================================== From 4fbb8d306e88284f8ce12846efd72ca8fc45e7ab Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 18 Jul 2016 01:02:38 +0800 Subject: [PATCH 166/471] PUB:20151117 How bad a boss is Linus Torvalds MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FrankXinqi 翻译的不错,不过这种文章确实比较难翻译,我尝试做了校对,但是有些地方也不是很有把握和到位。 --- ...151117 How bad a boss is Linus Torvalds.md | 80 +++++++++++++++++++ ...151117 How bad a boss is Linus Torvalds.md | 80 ------------------- 2 files changed, 80 insertions(+), 80 deletions(-) create mode 100644 published/20151117 How bad a boss 
is Linus Torvalds.md delete mode 100644 translated/talk/20151117 How bad a boss is Linus Torvalds.md diff --git a/published/20151117 How bad a boss is Linus Torvalds.md b/published/20151117 How bad a boss is Linus Torvalds.md new file mode 100644 index 0000000000..b35ba67827 --- /dev/null +++ b/published/20151117 How bad a boss is Linus Torvalds.md @@ -0,0 +1,80 @@ +Linus Torvalds 是一个糟糕的老板吗? +================================================================================ + +![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) + +*1999 年 8 月 10 日,加利福尼亚州圣何塞市,在 LinuxWorld Show 上 Linus Torvalds 在一个坐满 Linux 爱好者的礼堂中发表了一篇演讲。图片来自:James Niccolai* + +**这取决于所处的领域。在软件开发的世界中,他也是个普通人。问题是,这种情况是否应该继续下去?** + +Linus Torvalds 是 Linux 的发明者,我认识他超过 20 年了。我们不是密友,但是我们欣赏彼此。 + +最近,因为 Linus Torvalds 的管理风格,他正遭到严厉的炮轰。Linus 无法忍受胡来的人。“代码的质量有多好?”是他在 Linux 内核的开发过程中评判人的一种方式。 + +没有什么比这个更重要了。正如 Linus 今年(2015年)早些时候在 Linux.conf.au 会议上说的那样,“我不是一个友好的人,我也不在意你。对我重要的是『[我所关心的技术和内核][1]』。” + +现在我也可以和这种只关心技术的人打交道了。如果你不能,你应当避免参加 Linux 内核会议,因为在那里你会遇到许多有这种精英思想的人。这不代表我认为在 Linux 领域所有东西都是极好的,并且不应该受到其他影响而带来改变。我能够和一个精英待在一起;而在一个男性做主导的大城堡中遇到的问题是,女性经常受到蔑视和无礼的对待。 + +这就是我看到的最近关于 Linus 管理风格所引发争论的原因 -- 或者更准确的说,他对于个人管理方面是完全冷漠的 -- 就像是在软件开发世界的标准操作流程一样。与此同时,我看到了揭示了这个事情需要改变的另外一个证据。 + +第一次是在 [Linux 4.3 发布][2]的时候出现的这个情况,Linus 使用 Linux 内核邮件列表来狠狠的数落了一个插入了一些网络方面的代码的开发者——这些代码很“烂”,“[生成了如此烂的代码][3]。这看起来太糟糕了,并且完全没有理由这样做。”他继续咆哮了半天。这里使用“烂”这个词,相对他早期使用的“愚蠢的”这个同义词来说还算好的。 + +但是,事情就是这样。Linus 是对的。我读了代码后,发现代码确实很烂,并且开发者只是为了用新的“overflow_usub()” 函数而用的。 + +现在,一些人把 Linus 的这种谩骂的行为看作他脾气不好而且恃强凌弱的证据。我见过一个完美主义者,在他的领域中,他无法忍受这种糟糕。 + +许多人告诉我,这不是一个专业的程序员应当有的行为。群众们,你曾经和最优秀的开发者一起工作过吗?据我所知道的,在 Apple,Microsoft,Oracle 这就是他们的行为。 + +我曾经听过 Steve Jobs 攻击一个开发者,就像要把他撕成碎片那样。我也被一个 Oracle 的高级开发者攻击一屋子的新开发者吓到过,就像食人鱼穿过一群金鱼那样。 + +在 Robert X. 
Cringely 关于 PC 崛起的经典书籍《[意外帝国(Accidental Empires)][5]》中,他这样描述了微软的软件管理风格,比尔·盖茨像计算机系统一样管理他们,“比尔·盖茨是最高等级,从他开始每一个等级依次递减,上级会向下级叫嚷,刺激他们,甚至羞辱他们。”
+
+Linus 和所有大型的商业软件公司的领导人不同的是,Linus 说在这里所有的东西是向全世界公开的。而其他人是在自己的会议室中做东西的。我听有人说 Linus 在那种公司中可能会被开除。这是不可能的。他会处于他现在所处的地位,他在编程世界的最顶端。
+
+但是,这里有另外一个不同。如果 Larry Ellison (Oracle 的首席执行官)向你发火,你就别想在这里干了。如果 Linus 向你发火,你会在邮件中收到他的责骂。这就是差别。
+
+你知道的,Linus 不是任何人的老板。他完全没有雇佣和解聘的权利,他只是负责着有 10000 个贡献者的一个项目而已。他仅仅能做的就是从心理上伤害你。
+
+这说明,在开源软件开发圈和商业软件开发圈中同时存在一个非常严重的问题。不管你是一个多么好的编程者,如果你是一个女性,你的这个身份就是对你不利的。
+
+这种情况并没有在 Sarah Sharp 的身上有任何好转,她现在是一个 Intel 的开发者,以前是一个顶尖的 Linux 程序员。[在她博客上 10 月份的一个帖子中][4],她解释道:“我最终发现,我不能够再为 Linux 社区做出贡献了。因为在那里,我虽然能够得到技术上的尊重,却得不到个人的尊重……我不想专职于同那些有着轻微的性别歧视或开同性恋玩笑的人一起工作。”
+
+谁会责怪她呢?我不会。很抱歉,我必须说,Linus 就像所有我见过的软件经理一样,是他造成了这种不利的工作环境。
+
+他可能会说,确保 Linux 的贡献者都表现出专业精神和相互尊重不应该是他的工作。除了代码以外,他不关心任何其他事情。
+
+就像 Sarah Sharp 写的那样:
+
+> 我对于 Linux 内核社区做出的技术努力表示最大尊重。他们在那维护一些最高标准的代码,以此来平衡并且发展一个项目。他们专注于优秀的技术,以及超过负荷的维护人员,他们有不同的文化背景和社会规范,这些意味着这些 Linux 内核维护者说话非常直率、粗鲁,或者为了完成他们的任务而不讲道理。顶尖的 Linux 内核开发者经常为了使别人改正行为而向他们大喊大叫。
+>
+> 这种事情发生在我身上,但它不是一种有效的沟通方式。
+>
+> 许多高级的 Linux 内核开发者支持那些技术上和人性上不讲道理的维护者的权利。即使他们自己是非常友好的人,他们不想看到 Linux 内核交流方式改变。
+
+她是对的。
+
+我和其他观察者不同的是,我不认为这个问题对于 Linux 或开源社区在任何方面有特殊之处。作为一个从事技术商业工作超过五年和有着 25 年技术工作经历的记者,我见多了这种不成熟的小孩子行为。
+
+这不是 Linus 的错误。他不是一个经理,他是一个有想象力的技术领导者。看起来真正的问题是,在软件开发领域没有人能够用一种支持的语气来对待团队和社区。
+
+展望未来,我希望像 Linux 基金会这样的公司和组织,能够找到一种方式去授权社区经理或其他经理来鼓励并且强制实施民主的行为。
+
+非常遗憾的是,我们不能够在我们这种纯技术或纯商业的领导人中找到这种管理策略。它不存在于这些人的基因中。
+
+--------------------------------------------------------------------------------
+
+via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html
+
+作者:[Steven J.
Vaughan-Nichols][a] +译者:[FrankXinqi](https://github.com/FrankXinqi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html +[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/ +[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html +[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/ +[5]:https://www.amazon.cn/Accidental-Empires-Cringely-Robert-X/dp/0887308554/479-5308016-9671450?ie=UTF8&qid=1447101469&ref_=sr_1_1&tag=geo-23 \ No newline at end of file diff --git a/translated/talk/20151117 How bad a boss is Linus Torvalds.md b/translated/talk/20151117 How bad a boss is Linus Torvalds.md deleted file mode 100644 index 5c7375c190..0000000000 --- a/translated/talk/20151117 How bad a boss is Linus Torvalds.md +++ /dev/null @@ -1,80 +0,0 @@ -Translated by FrankXinqi - -Linus Torvalds作为一个老板有多么糟糕? 
-================================================================================ -![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) - -*1999 年 8 月 10 日,加利福尼亚州圣何塞市,在 LinuxWorld Show 上 Linus Torvalds 在一个坐满 Linux 爱好者的礼堂中发表了一篇演讲。作者:James Niccolai* - -**这取决于所处的领域。在软件开发的世界中,他变得更加平庸。问题是,这种情况是否应该被允许继续?** - -Linus Torvalds 是 Linux 的发明者,我认识他超过 20 年了。我们不是密友,但是我们欣赏彼此。 - -最近,因为 Linus Torvalds 的管理风格,他正遭到严厉的炮轰。Linus 无法忍受胡来的人。「代码的质量有多好?」是他在 Linux 内核的开发过程中评判人的一种方式。 - -没有什么比这个更重要了。正如 Linus 今年(1999年)早些时候在 Linux.conf.au 会议上说的那样,「我不是一个友好的人,并且我不关心你。对我重要的是『[我所关心的技术和内核][1]』。」 - -现在我也可以和这只关心技术的这一类人打交道了。如果你不能,你应当避免参加 Linux 内核会议,因为在那里你会遇到许多有这种精英思想的人。这不代表我认为在 Linux 领域所有东西都是极好的,并且应该不受其他影响的来激起改变。我能够一起生活的一个精英;在一个男性做主导的大城堡中遇到的问题是,女性经常受到蔑视和无礼的对待。 - -这就是我看到的最近关于 Linus 管理风格所引发社会争吵的原因 -- 或者更准确的说,他对于个人管理方面是完全冷漠的 -- 就像不过是在软件开发世界的标准操作流程一样。与此同时,我看到了另外一个非常需要被改变的事实,它必须作为证据公开。 - -第一次是在 [Linux 4.3 发布][2]的时候出现的这个情况,Linus 使用 Linux 内核邮件列表来狠狠的攻击一个插入了非常糟糕并且没有价值的网络代码的开发者。「[这段代码导致了非常糟糕并且没有价值的代码。][3]这看起来太糟糕了,并且完全没有理由这样做。」当他说到这里的时候,他沉默了很长时间。除了使用「非常糟糕并且没有价值」这个词,他在早期使用「愚蠢的」这个同义词是相对较好的。 - -但是,事情就是这样。Linus 是对的。我读了代码后,发现代码确实很烂并且开发者是为了使用新的「overflow_usub()」 函数而使用的。 - -现在,一些人把 Linus 的这种谩骂的行为看作他脾气不好而且恃强凌弱的证据。我见过一个完美主义者,在他的领域中,他无法忍受这种糟糕。 - -许多人告诉我,这不是一个专业的程序员应当有的行为。人们,你曾经和最优秀的开发者一起工作过吗?据我所知道的,在 Apple,Microsoft,Oracle 这就是他们的行为。 - -我曾经听过 Steve Jobs 攻击一个开发者,像把他撕成碎片那样。我大为不快,当一个 Oracle 的高级开发者攻击一屋子的新开发者的时候就像食人鱼穿过一群金鱼那样。 - -在意外的电脑帝国,在 Robert X. 
Cringely 关于 PCs 崛起的经典书籍中,他这样描述 Bill Gates 的微软软件管理风格,Bill Gates 像计算机系统一样管理他们,『比尔盖茨是最高等级,从他开始每一个等级依次递减,上级会向下级叫嚷,刺激他们,甚至羞辱他们。』 - -Linus 和所有大型的私有软件公司的领导人不同的是,Linus 说在这里所有的东西是向全世界公开的。而其他人是在私有的会议室中做东西的。我听有人说 Linus 在那种公司中可能会被开除。这是不可能的。他会在正确的地方就像现在这样,他在编程世界的最顶端。 - -但是,这里有另外一个不同。如果 Larry Ellison (Oracle的首席执行官)向你发火,你就别想在这里干了。如果 Linus 向你发火,你会在邮件中收到他的责骂。这就是差别。 - -你知道的,Linus 不是任何人的老板。他完全没有雇佣和解聘的权利,他只是负责着有 10,000 个贡献者的一个项目而已。他仅仅能做的就是从心理上伤害你。 - -这说明,在开源软件开发圈和私有软件开发圈中同时存在一个非常严重的问题。不管你是一个多么好的编程者,如果你是一个女性,你的这个身份就是对你不利的。 - -这种情况并没有在 Sarah Sharp 的身上有任何好转,她现在是一个Intel的开发者,以前是一个顶尖的Linux程序员。[在她博客10月份的一个帖子中][4],她解释道:『我最终发现,我不能够再为Linux社区做出贡献了。因为在在那里,我虽然能够得到技术上的尊重,却得不到个人的尊重……我不想专职于同那些轻微的性别歧视者或开同性恋玩笑的人一起工作。』 - -谁能责怪她呢?我不能。我非常伤心的说,Linus 就像所有我见过的软件经理一样,是他造成了这种不利的工作环境。 - -他可能会说,确保 Linux 的贡献者都表现出专业精神和相互尊重不应该是他的工作。除了代码以外,他不关系任何其他事情。 - -就像Sarah Sharp写的那样: - - -> 我对于 Linux 内核社区做出的技术努力表现出非常的尊重。他们在那维护一些最高标准的代码,以此来平衡并且发展一个项目。他们专注于优秀的技术,却带有过量的维护人员,他们有不同的文化背景和社会规范,这些意味着这些 Linux 内核维护者说话非常直率,粗鲁或者为了完成他们的任务而不讲道理。顶尖的 Linux 内核开发者经常为了使别人改正行为而向他们大喊大叫。 -> -> 这种事情发生在我身上,但它不是一种有效的沟通方式。 -> -> 许多高级的 Linux 内核开发者支持那些技术上和人性上不讲道理的维护者的权利。即使他们是非常友好的人,他们不想看到 Linux 内核交流方式改变。 - -她是对的。 - -我和其他调查者不同的是,我不认为这个问题对于 Linux 或开源社区在任何方面有特殊之处。作为一个从事技术商业工作超过五年和有着 25 年技术工作经历的记者,我随处可见这种不成熟的男孩的行为。 - -这不是 Linus 的错误。他不是一个经理,他是一个有想象力的技术领导者。看起来真正的问题是,在软件开发领域没有人能够用一种支持的语气来对待团队和社区。 - -展望未来,我希望像 Linux Foundation 这样的公司和组织,能够找到一种方式去授权社区经理或其他经理来鼓励并且强制实施民主的行为。 - -非常遗憾的是,我们不能够在我们这种纯技术或纯商业的领导人中找到这种管理策略。它不存在于这些人的基因中。 - --------------------------------------------------------------------------------- - -via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html - -作者:[Steven J. 
Vaughan-Nichols][a] -译者:[FrankXinqi](https://github.com/FrankXinqi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html -[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/ -[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html -[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/ From 71310166ab32c538f150bcc7dcc9a33c89a0e822 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 18 Jul 2016 09:34:07 +0800 Subject: [PATCH 167/471] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160309 Let’s Build A Web Server. Part 1.md | 150 ------------------ 1 file changed, 150 deletions(-) delete mode 100644 sources/tech/20160309 Let’s Build A Web Server. Part 1.md diff --git a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md deleted file mode 100644 index 4c8048786d..0000000000 --- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md +++ /dev/null @@ -1,150 +0,0 @@ -Let’s Build A Web Server. Part 1. -===================================== - -Out for a walk one day, a woman came across a construction site and saw three men working. She asked the first man, “What are you doing?” Annoyed by the question, the first man barked, “Can’t you see that I’m laying bricks?” Not satisfied with the answer, she asked the second man what he was doing. The second man answered, “I’m building a brick wall.” Then, turning his attention to the first man, he said, “Hey, you just passed the end of the wall. 
You need to take off that last brick.” Again not satisfied with the answer, she asked the third man what he was doing. And the man said to her while looking up in the sky, “I am building the biggest cathedral this world has ever known.” While he was standing there and looking up in the sky the other two men started arguing about the errant brick. The man turned to the first two men and said, “Hey guys, don’t worry about that brick. It’s an inside wall, it will get plastered over and no one will ever see that brick. Just move on to another layer.”1 - -The moral of the story is that when you know the whole system and understand how different pieces fit together (bricks, walls, cathedral), you can identify and fix problems faster (errant brick). - -What does it have to do with creating your own Web server from scratch? - -I believe to become a better developer you MUST get a better understanding of the underlying software systems you use on a daily basis and that includes programming languages, compilers and interpreters, databases and operating systems, web servers and web frameworks. And, to get a better and deeper understanding of those systems you MUST re-build them from scratch, brick by brick, wall by wall. - -Confucius put it this way: - ->“I hear and I forget.” - -![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_hear.png) - ->“I see and I remember.” - -![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_see.png) - ->“I do and I understand.” - -![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_do.png) - -I hope at this point you’re convinced that it’s a good idea to start re-building different software systems to learn how they work. - -In this three-part series I will show you how to build your own basic Web server. Let’s get started. - -First things first, what is a Web server? 
- -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_response.png) - -In a nutshell it’s a networking server that sits on a physical server (oops, a server on a server) and waits for a client to send a request. When it receives a request, it generates a response and sends it back to the client. The communication between a client and a server happens using HTTP protocol. A client can be your browser or any other software that speaks HTTP. - -What would a very simple implementation of a Web server look like? Here is my take on it. The example is in Python but even if you don’t know Python (it’s a very easy language to pick up, try it!) you still should be able to understand concepts from the code and explanations below: - -``` -import socket - -HOST, PORT = '', 8888 - -listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) -listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) -listen_socket.bind((HOST, PORT)) -listen_socket.listen(1) -print 'Serving HTTP on port %s ...' % PORT -while True: - client_connection, client_address = listen_socket.accept() - request = client_connection.recv(1024) - print request - - http_response = """\ -HTTP/1.1 200 OK - -Hello, World! -""" - client_connection.sendall(http_response) - client_connection.close() -``` - -Save the above code as webserver1.py or download it directly from GitHub and run it on the command line like this - -``` -$ python webserver1.py -Serving HTTP on port 8888 … -``` - -Now type in the following URL in your Web browser’s address bar http://localhost:8888/hello, hit Enter, and see magic in action. You should see “Hello, World!” displayed in your browser like this: - -![](https://ruslanspivak.com/lsbaws-part1/browser_hello_world.png) - -Just do it, seriously. I will wait for you while you’re testing it. - -Done? Great. Now let’s discuss how it all actually works. - -First let’s start with the Web address you’ve entered. 
It’s called an URL and here is its basic structure: - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_URL_Web_address.png) - -This is how you tell your browser the address of the Web server it needs to find and connect to and the page (path) on the server to fetch for you. Before your browser can send a HTTP request though, it first needs to establish a TCP connection with the Web server. Then it sends an HTTP request over the TCP connection to the server and waits for the server to send an HTTP response back. And when your browser receives the response it displays it, in this case it displays “Hello, World!” - -Let’s explore in more detail how the client and the server establish a TCP connection before sending HTTP requests and responses. To do that they both use so-called sockets. Instead of using a browser directly you are going to simulate your browser manually by using telnet on the command line. - -On the same computer you’re running the Web server fire up a telnet session on the command line specifying a host to connect to localhost and the port to connect to 8888 and then press Enter: - -``` -$ telnet localhost 8888 -Trying 127.0.0.1 … -Connected to localhost. -``` - -At this point you’ve established a TCP connection with the server running on your local host and ready to send and receive HTTP messages. In the picture below you can see a standard procedure a server has to go through to be able to accept new TCP connections. - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_socket.png) - -In the same telnet session type GET /hello HTTP/1.1 and hit Enter: - -``` -$ telnet localhost 8888 -Trying 127.0.0.1 … -Connected to localhost. -GET /hello HTTP/1.1 - -HTTP/1.1 200 OK -Hello, World! -``` - -You’ve just manually simulated your browser! You sent an HTTP request and got an HTTP response back. 
This is the basic structure of an HTTP request: - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_anatomy.png) - -The HTTP request consists of the line indicating the HTTP method (GET, because we are asking our server to return us something), the path /hello that indicates a “page” on the server we want and the protocol version. - -For simplicity’s sake our Web server at this point completely ignores the above request line. You could just as well type in any garbage instead of “GET /hello HTTP/1.1” and you would still get back a “Hello, World!” response. - -Once you’ve typed the request line and hit Enter the client sends the request to the server, the server reads the request line, prints it and returns the proper HTTP response. - -Here is the HTTP response that the server sends back to your client (telnet in this case): - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_response_anatomy.png) - -Let’s dissect it. The response consists of a status line HTTP/1.1 200 OK, followed by a required empty line, and then the HTTP response body. - -The response status line HTTP/1.1 200 OK consists of the HTTP Version, the HTTP status code and the HTTP status code reason phrase OK. When the browser gets the response, it displays the body of the response and that’s why you see “Hello, World!” in your browser. - -And that’s the basic model of how a Web server works. To sum it up: The Web server creates a listening socket and starts accepting new connections in a loop. The client initiates a TCP connection and, after successfully establishing it, the client sends an HTTP request to the server and the server responds with an HTTP response that gets displayed to the user. To establish a TCP connection both clients and servers use sockets. - -Now you have a very basic working Web server that you can test with your browser or some other HTTP client. 
As you’ve seen and hopefully tried, you can also be a human HTTP client too, by using telnet and typing HTTP requests manually. - -Here’s a question for you: “How do you run a Django application, Flask application, and Pyramid application under your freshly minted Web server without making a single change to the server to accommodate all those different Web frameworks?” - -I will show you exactly how in Part 2 of the series. Stay tuned. - -BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on topics I just covered. Subscribe to the mailing list to get the latest updates about the book and the release date. - --------------------------------------------------------------------------------- - -via: https://ruslanspivak.com/lsbaws-part1/ - -作者:[Ruslan][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://linkedin.com/in/ruslanspivak/ - - - From 65a6b871815b907d197d2d73189d99d13f084bdf Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 18 Jul 2016 09:34:17 +0800 Subject: [PATCH 168/471] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160406 Let’s Build A Web Server. Part 2.md | 427 ------------------ 1 file changed, 427 deletions(-) delete mode 100644 sources/tech/20160406 Let’s Build A Web Server. Part 2.md diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md deleted file mode 100644 index 482352ac9a..0000000000 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ /dev/null @@ -1,427 +0,0 @@ -Let’s Build A Web Server. Part 2. 
-=================================== - -Remember, in Part 1 I asked you a question: “How do you run a Django application, Flask application, and Pyramid application under your freshly minted Web server without making a single change to the server to accommodate all those different Web frameworks?” Read on to find out the answer. - -In the past, your choice of a Python Web framework would limit your choice of usable Web servers, and vice versa. If the framework and the server were designed to work together, then you were okay: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_before_wsgi.png) - -But you could have been faced (and maybe you were) with the following problem when trying to combine a server and a framework that weren’t designed to work together: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_after_wsgi.png) - -Basically you had to use what worked together and not what you might have wanted to use. - -So, how do you then make sure that you can run your Web server with multiple Web frameworks without making code changes either to the Web server or to the Web frameworks? And the answer to that problem became the Python Web Server Gateway Interface (or WSGI for short, pronounced “wizgy”). - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_idea.png) - -WSGI allowed developers to separate choice of a Web framework from choice of a Web server. Now you can actually mix and match Web servers and Web frameworks and choose a pairing that suits your needs. You can run Django, Flask, or Pyramid, for example, with Gunicorn or Nginx/uWSGI or Waitress. Real mix and match, thanks to the WSGI support in both servers and frameworks: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interop.png) - -So, WSGI is the answer to the question I asked you in Part 1 and repeated at the beginning of this article. 
Your Web server must implement the server portion of a WSGI interface and all modern Python Web Frameworks already implement the framework side of the WSGI interface, which allows you to use them with your Web server without ever modifying your server’s code to accommodate a particular Web framework. - -Now you know that WSGI support by Web servers and Web frameworks allows you to choose a pairing that suits you, but it is also beneficial to server and framework developers because they can focus on their preferred area of specialization and not step on each other’s toes. Other languages have similar interfaces too: Java, for example, has Servlet API and Ruby has Rack. - -It’s all good, but I bet you are saying: “Show me the code!” Okay, take a look at this pretty minimalistic WSGI server implementation: - -``` -# Tested with Python 2.7.9, Linux & Mac OS X -import socket -import StringIO -import sys - - -class WSGIServer(object): - - address_family = socket.AF_INET - socket_type = socket.SOCK_STREAM - request_queue_size = 1 - - def __init__(self, server_address): - # Create a listening socket - self.listen_socket = listen_socket = socket.socket( - self.address_family, - self.socket_type - ) - # Allow to reuse the same address - listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - # Bind - listen_socket.bind(server_address) - # Activate - listen_socket.listen(self.request_queue_size) - # Get server host name and port - host, port = self.listen_socket.getsockname()[:2] - self.server_name = socket.getfqdn(host) - self.server_port = port - # Return headers set by Web framework/Web application - self.headers_set = [] - - def set_app(self, application): - self.application = application - - def serve_forever(self): - listen_socket = self.listen_socket - while True: - # New client connection - self.client_connection, client_address = listen_socket.accept() - # Handle one request and close the client connection. 
Then - # loop over to wait for another client connection - self.handle_one_request() - - def handle_one_request(self): - self.request_data = request_data = self.client_connection.recv(1024) - # Print formatted request data a la 'curl -v' - print(''.join( - '< {line}\n'.format(line=line) - for line in request_data.splitlines() - )) - - self.parse_request(request_data) - - # Construct environment dictionary using request data - env = self.get_environ() - - # It's time to call our application callable and get - # back a result that will become HTTP response body - result = self.application(env, self.start_response) - - # Construct a response and send it back to the client - self.finish_response(result) - - def parse_request(self, text): - request_line = text.splitlines()[0] - request_line = request_line.rstrip('\r\n') - # Break down the request line into components - (self.request_method, # GET - self.path, # /hello - self.request_version # HTTP/1.1 - ) = request_line.split() - - def get_environ(self): - env = {} - # The following code snippet does not follow PEP8 conventions - # but it's formatted the way it is for demonstration purposes - # to emphasize the required variables and their values - # - # Required WSGI variables - env['wsgi.version'] = (1, 0) - env['wsgi.url_scheme'] = 'http' - env['wsgi.input'] = StringIO.StringIO(self.request_data) - env['wsgi.errors'] = sys.stderr - env['wsgi.multithread'] = False - env['wsgi.multiprocess'] = False - env['wsgi.run_once'] = False - # Required CGI variables - env['REQUEST_METHOD'] = self.request_method # GET - env['PATH_INFO'] = self.path # /hello - env['SERVER_NAME'] = self.server_name # localhost - env['SERVER_PORT'] = str(self.server_port) # 8888 - return env - - def start_response(self, status, response_headers, exc_info=None): - # Add necessary server headers - server_headers = [ - ('Date', 'Tue, 31 Mar 2015 12:54:48 GMT'), - ('Server', 'WSGIServer 0.2'), - ] - self.headers_set = [status, response_headers + 
server_headers] - # To adhere to WSGI specification the start_response must return - # a 'write' callable. We simplicity's sake we'll ignore that detail - # for now. - # return self.finish_response - - def finish_response(self, result): - try: - status, response_headers = self.headers_set - response = 'HTTP/1.1 {status}\r\n'.format(status=status) - for header in response_headers: - response += '{0}: {1}\r\n'.format(*header) - response += '\r\n' - for data in result: - response += data - # Print formatted response data a la 'curl -v' - print(''.join( - '> {line}\n'.format(line=line) - for line in response.splitlines() - )) - self.client_connection.sendall(response) - finally: - self.client_connection.close() - - -SERVER_ADDRESS = (HOST, PORT) = '', 8888 - - -def make_server(server_address, application): - server = WSGIServer(server_address) - server.set_app(application) - return server - - -if __name__ == '__main__': - if len(sys.argv) < 2: - sys.exit('Provide a WSGI application object as module:callable') - app_path = sys.argv[1] - module, application = app_path.split(':') - module = __import__(module) - application = getattr(module, application) - httpd = make_server(SERVER_ADDRESS, application) - print('WSGIServer: Serving HTTP on port {port} ...\n'.format(port=PORT)) - httpd.serve_forever() -``` - -It’s definitely bigger than the server code in Part 1, but it’s also small enough (just under 150 lines) for you to understand without getting bogged down in details. The above server also does more - it can run your basic Web application written with your beloved Web framework, be it Pyramid, Flask, Django, or some other Python WSGI framework. - -Don’t believe me? Try it and see for yourself. Save the above code as webserver2.py or download it directly from GitHub. If you try to run it without any parameters it’s going to complain and exit. 
- -``` -$ python webserver2.py -Provide a WSGI application object as module:callable -``` - -It really wants to serve your Web application and that’s where the fun begins. To run the server the only thing you need installed is Python. But to run applications written with Pyramid, Flask, and Django you need to install those frameworks first. Let’s install all three of them. My preferred method is by using virtualenv. Just follow the steps below to create and activate a virtual environment and then install all three Web frameworks. - -``` -$ [sudo] pip install virtualenv -$ mkdir ~/envs -$ virtualenv ~/envs/lsbaws/ -$ cd ~/envs/lsbaws/ -$ ls -bin include lib -$ source bin/activate -(lsbaws) $ pip install pyramid -(lsbaws) $ pip install flask -(lsbaws) $ pip install django -``` - -At this point you need to create a Web application. Let’s start with Pyramid first. Save the following code as pyramidapp.py to the same directory where you saved webserver2.py or download the file directly from GitHub: - -``` -from pyramid.config import Configurator -from pyramid.response import Response - - -def hello_world(request): - return Response( - 'Hello world from Pyramid!\n', - content_type='text/plain', - ) - -config = Configurator() -config.add_route('hello', '/hello') -config.add_view(hello_world, route_name='hello') -app = config.make_wsgi_app() -``` - -Now you’re ready to serve your Pyramid application with your very own Web server: - -``` -(lsbaws) $ python webserver2.py pyramidapp:app -WSGIServer: Serving HTTP on port 8888 ... -``` - -You just told your server to load the ‘app’ callable from the python module ‘pyramidapp’ Your server is now ready to take requests and forward them to your Pyramid application. The application only handles one route now: the /hello route. 
Type http://localhost:8888/hello address into your browser, press Enter, and observe the result: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_pyramid.png) - -You can also test the server on the command line using the ‘curl’ utility: - -``` -$ curl -v http://localhost:8888/hello -... -``` - -Check what the server and curl prints to standard output. - -Now onto Flask. Let’s follow the same steps. - -``` -from flask import Flask -from flask import Response -flask_app = Flask('flaskapp') - - -@flask_app.route('/hello') -def hello_world(): - return Response( - 'Hello world from Flask!\n', - mimetype='text/plain' - ) - -app = flask_app.wsgi_app -``` - -Save the above code as flaskapp.py or download it from GitHub and run the server as: - -``` -(lsbaws) $ python webserver2.py flaskapp:app -WSGIServer: Serving HTTP on port 8888 ... -``` - -Now type in the http://localhost:8888/hello into your browser and press Enter: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_flask.png) - -Again, try ‘curl’ and see for yourself that the server returns a message generated by the Flask application: - -``` -$ curl -v http://localhost:8888/hello -... -``` - -Can the server also handle a Django application? Try it out! It’s a little bit more involved, though, and I would recommend cloning the whole repo and use djangoapp.py, which is part of the GitHub repository. Here is the source code which basically adds the Django ‘helloworld’ project (pre-created using Django’s django-admin.py startproject command) to the current Python path and then imports the project’s WSGI application. - -``` -import sys -sys.path.insert(0, './helloworld') -from helloworld import wsgi - - -app = wsgi.application -``` - -Save the above code as djangoapp.py and run the Django application with your Web server: - -``` -(lsbaws) $ python webserver2.py djangoapp:app -WSGIServer: Serving HTTP on port 8888 ... 
-``` - -Type in the following address and press Enter: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_django.png) - -And as you’ve already done a couple of times before, you can test it on the command line, too, and confirm that it’s the Django application that handles your requests this time around: - -``` -$ curl -v http://localhost:8888/hello -... -``` - -Did you try it? Did you make sure the server works with those three frameworks? If not, then please do so. Reading is important, but this series is about rebuilding and that means you need to get your hands dirty. Go and try it. I will wait for you, don’t worry. No seriously, you must try it and, better yet, retype everything yourself and make sure that it works as expected. - -Okay, you’ve experienced the power of WSGI: it allows you to mix and match your Web servers and Web frameworks. WSGI provides a minimal interface between Python Web servers and Python Web Frameworks. It’s very simple and it’s easy to implement on both the server and the framework side. The following code snippet shows the server and the framework side of the interface: - -``` -def run_application(application): - """Server code.""" - # This is where an application/framework stores - # an HTTP status and HTTP response headers for the server - # to transmit to the client - headers_set = [] - # Environment dictionary with WSGI/CGI variables - environ = {} - - def start_response(status, response_headers, exc_info=None): - headers_set[:] = [status, response_headers] - - # Server invokes the ‘application' callable and gets back the - # response body - result = application(environ, start_response) - # Server builds an HTTP response and transmits it to the client - … - -def app(environ, start_response): - """A barebones WSGI app.""" - start_response('200 OK', [('Content-Type', 'text/plain')]) - return ['Hello world!'] - -run_application(app) -``` - -Here is how it works: - -1. 
The framework provides an ‘application’ callable (The WSGI specification doesn’t prescribe how that should be implemented) -2. The server invokes the ‘application’ callable for each request it receives from an HTTP client. It passes a dictionary ‘environ’ containing WSGI/CGI variables and a ‘start_response’ callable as arguments to the ‘application’ callable. -3. The framework/application generates an HTTP status and HTTP response headers and passes them to the ‘start_response’ callable for the server to store them. The framework/application also returns a response body. -4. The server combines the status, the response headers, and the response body into an HTTP response and transmits it to the client (This step is not part of the specification but it’s the next logical step in the flow and I added it for clarity) - -And here is a visual representation of the interface: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interface.png) - -So far, you’ve seen the Pyramid, Flask, and Django Web applications and you’ve seen the server code that implements the server side of the WSGI specification. You’ve even seen the barebones WSGI application code snippet that doesn’t use any framework. - -The thing is that when you write a Web application using one of those frameworks you work at a higher level and don’t work with WSGI directly, but I know you’re curious about the framework side of the WSGI interface, too because you’re reading this article. So, let’s create a minimalistic WSGI Web application/Web framework without using Pyramid, Flask, or Django and run it with your server: - -``` -def app(environ, start_response): - """A barebones WSGI application. 
- - This is a starting point for your own Web framework :) - """ - status = '200 OK' - response_headers = [('Content-Type', 'text/plain')] - start_response(status, response_headers) - return ['Hello world from a simple WSGI application!\n'] -``` - -Again, save the above code in wsgiapp.py file or download it from GitHub directly and run the application under your Web server as: - -``` -(lsbaws) $ python webserver2.py wsgiapp:app -WSGIServer: Serving HTTP on port 8888 ... -``` - -Type in the following address and press Enter. This is the result you should see: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_simple_wsgi_app.png) - -You just wrote your very own minimalistic WSGI Web framework while learning about how to create a Web server! Outrageous. - -Now, let’s get back to what the server transmits to the client. Here is the HTTP response the server generates when you call your Pyramid application using an HTTP client: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response.png) - -The response has some familiar parts that you saw in Part 1 but it also has something new. It has, for example, four HTTP headers that you haven’t seen before: Content-Type, Content-Length, Date, and Server. Those are the headers that a response from a Web server generally should have. None of them are strictly required, though. The purpose of the headers is to transmit additional information about the HTTP request/response. - -Now that you know more about the WSGI interface, here is the same HTTP response with some more information about what parts produced it: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response_explanation.png) - -I haven’t said anything about the ‘environ’ dictionary yet, but basically it’s a Python dictionary that must contain certain WSGI and CGI variables prescribed by the WSGI specification. The server takes the values for the dictionary from the HTTP request after parsing the request. 
This is what the contents of the dictionary look like: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_environ.png) - -A Web framework uses the information from that dictionary to decide which view to use based on the specified route, request method etc., where to read the request body from and where to write errors, if any. - -By now you’ve created your own WSGI Web server and you’ve made Web applications written with different Web frameworks. And, you’ve also created your barebones Web application/Web framework along the way. It’s been a heck of a journey. Let’s recap what your WSGI Web server has to do to serve requests aimed at a WSGI application: - -- First, the server starts and loads an ‘application’ callable provided by your Web framework/application -- Then, the server reads a request -- Then, the server parses it -- Then, it builds an ‘environ’ dictionary using the request data -- Then, it calls the ‘application’ callable with the ‘environ’ dictionary and a ‘start_response’ callable as parameters and gets back a response body. -- Then, the server constructs an HTTP response using the data returned by the call to the ‘application’ object and the status and response headers set by the ‘start_response’ callable. -- And finally, the server transmits the HTTP response back to the client - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_server_summary.png) - -That’s about all there is to it. You now have a working WSGI server that can serve basic Web applications written with WSGI compliant Web frameworks like Django, Flask, Pyramid, or your very own WSGI framework. The best part is that the server can be used with multiple Web frameworks without any changes to the server code base. Not bad at all. - -Before you go, here is another question for you to think about, “How do you make your server handle more than one request at a time?” - -Stay tuned and I will show you a way to do that in Part 3. Cheers! 
- -BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on topics I just covered. Subscribe to the mailing list to get the latest updates about the book and the release date. - --------------------------------------------------------------------------------- - -via: https://ruslanspivak.com/lsbaws-part2/ - -作者:[Ruslan][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://github.com/rspivak/ - - - - - - From 907ffd4c3432fd5a95a93348f8367db6d7128e6f Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 18 Jul 2016 10:00:06 +0800 Subject: [PATCH 169/471] =?UTF-8?q?20160718-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20160711 Getting Started with Git.md | 134 ++++++++++++++++++ 1 file changed, 134 insertions(+) create mode 100644 sources/tech/20160711 Getting Started with Git.md diff --git a/sources/tech/20160711 Getting Started with Git.md b/sources/tech/20160711 Getting Started with Git.md new file mode 100644 index 0000000000..cf67119f28 --- /dev/null +++ b/sources/tech/20160711 Getting Started with Git.md @@ -0,0 +1,134 @@ +Getting started with Git +====================== + + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) + +In the introduction to this series we learned who should use Git, and what it is for. Today we will learn how to clone public Git repositories, and how to extract individual files without cloning the whole works. + +Since Git is so popular, it makes life a lot easier if you're at least familiar with it at a basic level. If you can grasp the basics (and you can, I promise!), then you'll be able to download whatever you need, and maybe even contribute stuff back. 
 And that, after all, is what open source is all about: having access to the code that makes up the software you run, the freedom to share it with others, and the right to change it as you please. Git makes this whole process easy, as long as you're comfortable with Git.
+
+So let's get comfortable with Git.
+
+### Read and write
+
+Broadly speaking, there are two ways to interact with a Git repository: you can read from it, or you can write to it. It's just like a file: sometimes you open a document just to read it, and other times you open a document because you need to make changes.
+
+In this article, we'll cover reading from a Git repository. We'll tackle the subject of writing back to a Git repository in a later article.
+
+### Git or GitHub?
+
+A word of clarification: Git is not the same as GitHub (or GitLab, or Bitbucket). Git is a command-line program, so it looks like this:
+
+```
+$ git
+usage: git [--version] [--help] [-C <path>]
+           [-p | --paginate | --no-pager] [--bare]
+           [--git-dir=<path>] <command> [<args>]
+```
+
+As Git is open source, lots of smart people have built infrastructures around it which, in themselves, have become very popular.
+
+My articles about Git teach pure Git first, because if you understand what Git is doing then you can remain indifferent to which front end you are using. However, my articles also include common ways of accomplishing each task through popular Git services, since that's probably what you'll encounter first.
+
+### Installing Git
+
+To install Git on Linux, grab it from your distribution's software repository. BSD users should find Git in the Ports tree, in the devel section.
+
+For non-open source operating systems, go to the project site and follow the instructions. Once installed, there should be no difference between Linux, BSD, and Mac OS X commands. Windows users will have to adapt Git commands to match the Windows file system, or install Cygwin to run Git natively, without getting tripped up by Windows file system conventions.
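
Once installed, it's worth a quick check that Git is actually on your path; here's a small sketch you can paste into a terminal (the package-manager commands in the fallback message are common examples, not something this article prescribes):

```shell
# Check whether Git is installed, and print its version if it is
if command -v git >/dev/null 2>&1; then
    git --version
else
    echo "Git not found; grab it from your distribution's software repository,"
    echo "e.g. 'sudo apt-get install git' (Debian/Ubuntu) or 'sudo dnf install git' (Fedora)."
fi
```

If the output looks something like `git version 2.x`, you're all set.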
+ +### Afternoon tea with Git + +Not every one of us needs to adopt Git into our daily lives right away. Sometimes, the most interaction you have with Git is to visit a repository of code, download a file or two, and then leave. On the spectrum of getting to know Git, this is more like afternoon tea than a proper dinner party. You make some polite conversation, you get the information you need, and then you part ways without the intention of speaking again for at least another three months. + +And that's OK. + +Generally speaking, there are two ways to access Git: via command line, or by any one of the fancy Internet technologies providing quick and easy access through the web browser. + +Say you want to install a trash bin for use in your terminal because you've been burned one too many times by the rm command. You've heard about Trashy, which calls itself "a sane intermediary to the rm command", and you want to look over its documentation before you install it. Lucky for you, [Trashy is hosted publicly on GitLab.com][1]. + +### Landgrab + +The first way we'll work with this Git repository is a sort of landgrab method: we'll clone the entire thing, and then sort through the contents later. Since the repository is hosted with a public Git service, there are two ways to do this: on the command line, or through a web interface. + +To grab an entire repository with Git, use the git clone command with the URL of the Git repository. If you're not clear on what the right URL is, the repository should tell you. GitLab gives you a copy-and-paste repository URL [for Trashy][2]. + +![](https://opensource.com/sites/default/files/1_gitlab-url.jpg) + +You might notice that on some services, both SSH and HTTPS links are provided. You can use SSH only if you have write permissions to the repository. Otherwise, you must use the HTTPS URL. + +Once you have the right URL, cloning the repository is pretty simple. 
Just git clone the URL, and optionally name the directory to clone it into. The default behaviour is to clone the git directory to your current directory; for example, 'trashy.git' gets put in your current location as 'trashy'. I use the .clone extension as a shorthand for repositories that are read-only, and the .git extension as shorthand for repositories I can read and write, but that's not by any means an official mandate. + +``` +$ git clone https://gitlab.com/trashy/trashy.git trashy.clone +Cloning into 'trashy.clone'... +remote: Counting objects: 142, done. +remote: Compressing objects: 100% (91/91), done. +remote: Total 142 (delta 70), reused 103 (delta 47) +Receiving objects: 100% (142/142), 25.99 KiB | 0 bytes/s, done. +Resolving deltas: 100% (70/70), done. +Checking connectivity... done. +``` + +Once the repository has been cloned successfully, you can browse files in it just as you would any other directory on your computer. + +The other way to get a copy of the repository is through the web interface. Both GitLab and GitHub provide a snapshot of any repository in a .zip file. GitHub has a big green download button, but on GitLab, look for an inconspicuous download button on the far right of your browser window: + +![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) + +### Pick and choose + +An alternate method of obtaining a file from a Git repository is to find the file you're after and pluck it right out of the repository. This method is only supported via web interfaces, which is essentially you looking at someone else's clone of a repository; you can think of it as a sort of HTTP shared directory. + +The problem with using this method is that you might find that certain files don't actually exist in a raw Git repository, as a file might only exist in its complete form after a make command builds the file, which won't happen until you download the repository, read the README or INSTALL file, and run the command. 
 Assuming, however, that you are sure a file does exist and you just want to go into the repository, grab it, and walk away, you can do that.
+
+In GitLab and GitHub, click the Files link for a file view, view the file in Raw mode, and use your web browser's save function, e.g. in Firefox, File > Save Page As. In a GitWeb repository (a web view of personal git repositories used by some who prefer to host git themselves), the Raw view link is in the file listing view.
+
+![](https://opensource.com/sites/default/files/1_webgit-file.jpg)
+
+### Best practices
+
+Generally, cloning an entire Git repository is considered the right way of interacting with Git. There are a few reasons for this. Firstly, a clone is easy to keep updated with the git pull command, so you won't have to keep going back to some web site for a new copy of a file each time an improvement has been made. Secondly, should you happen to make an improvement yourself, then it is easier to submit those changes to the original author if it is all nice and tidy in a Git repository.
+
+For now, it's probably enough to just practice going out and finding interesting Git repositories and cloning them to your drive. As long as you know the basics of using a terminal, then it's not hard to do. Don't know the basics of terminal usage? Give me five more minutes of your time.
+
+### Terminal basics
+
+The first thing to understand is that all files have a path. That makes sense; if I told you to open a file for me on a regular non-terminal day, you'd have to get to where that file is on your drive, and you'd do that by navigating a bunch of computer windows until you reached that file. For example, maybe you'd click your home directory > Pictures > InktoberSketches > monkey.kra.
+
+In that scenario, we could say that the file monkey.kra has the path $HOME/Pictures/InktoberSketches/monkey.kra.
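
You can watch the shell do this expansion itself; here's a quick sketch (the .kra file is only an example and doesn't need to exist, so we merely print the path):

```shell
# $HOME and the ~ shorthand both expand to your home directory,
# so these two commands print the same absolute path
echo "$HOME/Pictures/InktoberSketches/monkey.kra"
echo ~/"Pictures/InktoberSketches/monkey.kra"
```

Both lines print the same path.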
+ +In the terminal, unless you're doing special sysadmin work, your file paths are generally going to start with $HOME (or, if you're lazy, just the ~ character) followed by a list of folders up to the filename itself. This is analogous to whatever icons you click in your GUI to reach the file or folder. + +If you want to clone a Git repository into your Documents directory, then you could open a terminal and run this command: + +``` +$ git clone https://gitlab.com/foo/bar.git $HOME/Documents/bar.clone +``` + +Once that is complete, you can open a file manager window, navigate to your Documents folder, and you'll find the bar.clone directory waiting for you. + +If you want to get a little more advanced, you might revisit that repository at some later date, and try a git pull to see if there have been updates to the project: + +``` +$ cd $HOME/Documents/bar.clone +$ pwd +bar.clone +$ git pull +``` + +For now, that's all the terminal commands you need to get started, so go out and explore. The more you do it, the better you get at it, and that is, at least give or take a vowel, the name of the game. 
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/7/stumbling-git
+
+作者:[Seth Kenlon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[1]: https://gitlab.com/trashy/trashy
+[2]: https://gitlab.com/trashy/trashy.git

From cdabd546a481dbbee266622f36d041668b4d519f Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Mon, 18 Jul 2016 10:04:22 +0800
Subject: [PATCH 170/471] =?UTF-8?q?20160718-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...latform Discourse On Ubuntu Linux 16.04.md | 101 ++++++++++++++++++
 1 file changed, 101 insertions(+)
 create mode 100644 sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md

diff --git a/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
new file mode 100644
index 0000000000..a2521673c9
--- /dev/null
+++ b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
@@ -0,0 +1,101 @@
+How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04
+===============================================================================
+
+Discourse is an open source discussion platform that can work as a mailing list, a chat room, and a forum as well. It is a popular tool and a modern-day implementation of a successful discussion platform. On the server side it is built with Ruby on Rails and uses Postgres on the backend; it also makes use of Redis caching to reduce loading times. On the client side it runs in the browser using JavaScript.
 It is a pretty well-optimized and well-structured tool. It also offers converter plugins to migrate your existing discussion boards/forums, such as vBulletin, phpBB, Drupal, and SMF, to Discourse. In this article, we will learn how to install Discourse on the Ubuntu operating system.
+
+It is developed with security in mind, so spammers and hackers might not get lucky with this application. It works well with all modern devices, and adjusts its display settings accordingly for mobile devices and tablets.
+
+### Installing Discourse on Ubuntu 16.04
+
+Let’s get started! The minimum system RAM to run Discourse is 1 GB, and the officially supported installation process for Discourse requires Docker to be installed on our Linux system. Besides Docker, it also requires Git. We can fulfill both requirements by simply running the following command in our system’s terminal.
+
+```
+wget -qO- https://get.docker.com/ | sh
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/124.png)
+
+It shouldn’t take long to install Docker and Git. As soon as the installation is complete, create a directory for Discourse inside the /var partition of your system (you can choose any other partition here too).
+
+```
+mkdir /var/discourse
+```
+
+Now clone Discourse’s GitHub repository into this newly created directory.
+
+```
+git clone https://github.com/discourse/discourse_docker.git /var/discourse
+```
+
+Go into the cloned directory.
+
+```
+cd /var/discourse
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/314.png)
+
+You should be able to locate the “discourse-setup” script here; simply run it to initiate the installation wizard for Discourse.
+
+```
+./discourse-setup
+```
+
+**Side note: Please make sure you have a working email server set up before attempting to install Discourse.**
+
+The installation wizard will ask you the following six questions.
+
+```
+Hostname for your Discourse?
+Email address for admin account?
+SMTP server address?
+SMTP user name?
+SMTP port [587]:
+SMTP password? []:
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/411.png)
+
+Once you supply this information, it will ask for confirmation. If everything is fine, hit “Enter” and the installation process will take off.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/511.png)
+
+Sit back and relax! It will take a fair amount of time to complete the installation, so grab a cup of coffee, and keep an eye out for any error messages.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/610.png)
+
+Here is what the successful completion of the installation process should look like.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/710.png)
+
+Now launch your web browser. If the hostname for your Discourse installation resolves properly to an IP address, you can use the hostname in the browser; otherwise, use your IP address to open the Discourse page. Here is what you should see:
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/85.png)
+
+That’s it! Create a new account using the “Sign Up” option and you should be good to go with your Discourse setup.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/106.png)
+
+### Conclusion
+
+It is an easy-to-set-up application that works flawlessly. It is equipped with all the required features of a modern-day discussion board. It is available under the General Public License and is a 100% open source product. Its simplicity, ease of use, power, and long feature list are the most important features of this tool. Hope you enjoyed this article. Questions? Do let us know in the comments.
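
One last practical aside (my own addition, not part of the official instructions): the Docker-based setup wants to bind the standard web ports, so a quick pre-flight check that ports 80 and 443 are free can save a failed run. A rough sketch, assuming a Bash shell with /dev/tcp support:

```shell
# Rough check that the web ports Discourse wants are not already in use
for port in 80 443; do
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
        echo "Port $port is already in use"
    else
        echo "Port $port looks free"
    fi
done
```

If a port is reported as in use, stop whatever service holds it (often an existing web server) before running ./discourse-setup.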
+
+--------------------------------------------------------------------------------
+
+via: http://linuxpitstop.com/install-discourse-on-ubuntu-linux-16-04/
+
+作者:[Aun][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://linuxpitstop.com/author/aun/

From ee03325846bfd9d1d543a167bbbd8e51746931a2 Mon Sep 17 00:00:00 2001
From: Ezio 
Date: Mon, 18 Jul 2016 10:09:07 +0800
Subject: [PATCH 171/471] =?UTF-8?q?20160718-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...20160627 Linux Practicality vs Activism.md | 72 +++++++++++++++++++
 1 file changed, 72 insertions(+)
 create mode 100644 sources/tech/20160627 Linux Practicality vs Activism.md

diff --git a/sources/tech/20160627 Linux Practicality vs Activism.md b/sources/tech/20160627 Linux Practicality vs Activism.md
new file mode 100644
index 0000000000..fc4f5eff26
--- /dev/null
+++ b/sources/tech/20160627 Linux Practicality vs Activism.md
@@ -0,0 +1,72 @@
+Linux Practicality vs Activism
+==================================
+
+>Is Linux actually more practical than other OSes, or is there some higher-minded reason to use it?
+
+One of the greatest things about running Linux is the freedom it provides. Where the division among the Linux community appears is in how we value this freedom.
+
+For some, the freedom enjoyed by using Linux is freedom from vendor lock-in or high software costs. Most would call this a practical consideration. Other users would tell you that the freedom they enjoy is software freedom. This means embracing Linux distributions that support the [Free Software Movement][1], completely avoiding proprietary software and all things related to it.
+
+In this article, I'll walk you through some of the differences between these two freedoms and how they affect Linux usage.
+
+### The problem with proprietary
+
+One thing most Linux users have in common is their preference for avoiding proprietary software. For practical enthusiasts like myself, it's a matter of how I spend my money, the ability to control my software, and avoiding vendor lock-in. Granted, I'm not a coder...so my tweaks to my installed software are pretty mild. But there are instances where a minor tweak to an application can mean the difference between it working and it not working.
+
+Then there are Linux enthusiasts who opt to avoid proprietary software because they feel it's unethical to use it. Usually the main concern here is that using proprietary software takes away or simply obstructs your personal freedom. Users in this corner prefer to use Linux distributions and software that support the [Free Software philosophy][2]. While it's similar to and often directly confused with Open Source concepts, [there are differences][3].
+
+So here's the issue: users such as myself tend to put convenience over the ideals of pure software freedom. Don't get me wrong, folks like me prefer to use software that meets the ideals behind Free Software, but we are also more likely to make concessions in order to accomplish specific tasks.
+
+Both types of Linux enthusiasts prefer using non-proprietary solutions. But Free Software advocates won't use proprietary software at all, whereas the practical user will rely on the best tool with the best performance. This means there are instances where the practical user is willing to run a proprietary application or code on their non-proprietary operating system.
+
+In the end, both user types enjoy using what Linux has to offer. But our reasons for doing so tend to vary. Some have argued that this is a matter of ignorance on the part of those who don't support Free Software. I disagree, and believe it's a matter of practical convenience. Users who prefer practical convenience simply aren't concerned about the politics of their software.
+ +### Practical Convenience + +When you ask most people why they use the operating system they use, it's usually tied in with practical convenience. Examples of this convenience might include "it's what I've always used" down to "it runs the software I need." Other folks might take this a step further and explain it's not so much the software that drives their OS preference, as the familiarity of the OS in question. And finally, there are specialty "niche tasks" or hardware compatibility issues that also provide good reasons for using one OS over another. + +This might surprise many of you, but the single biggest reason I run desktop Linux today is due to familiarity. Even though I provide support for Windows and OS X for others, it's actually quite frustrating to use these operating systems as they're simply not what my muscle memory is used to. I like to believe this allows me to empathize with Linux newcomers, as I too know how off-putting it can be to step into the realm of the unfamiliar. My point here is this – familiarity has value. And familiarity also powers practical convenience as well. + +Now if we compare this to the needs of a Free Software advocate, you'll find those folks are willing to learn something new and perhaps even more challenging if it translates into them avoiding using non-free software. It's actually something I've always admired about this type of user. Their willingness to take the path less followed to stick to their principles is, in my opinion, admirable. + +### The price of freedom + +One area I don't envy is the extra work involved in making sure a Free Software advocate is always using Linux distros and hardware that respect their digital freedom according to the standards set forth by the [Free Software Foundation][4]. This means the Linux kernel needs to be free from proprietary blobs for driver support and the hardware in question doesn't require any proprietary code whatsoever. 
Certainly not impossible, but it's pretty close. + +The absolute best scenario a Free Software advocate can shoot for is hardware that is "freedom-compatible." There are vendors out there that can meet this need, however most of them are offering hardware that relies on Linux compatible proprietary firmware. Great for the practical user, a show-stopper for the Free Software advocate. + +What all of this translates into is that the advocate must be far more vigilant than the practical Linux enthusiast. This isn't necessarily a negative thing per se, however it's a consideration if one is planning on jumping onto the Free Software approach to computing. Practical users, by contrast, can use any software or hardware that happens to be Linux compatible without a second thought. I don't know about you, but in my eyes this seems a bit easier to me. + +### Defining software freedom + +This part is going to get some folks upset as I personally don't subscribe to the belief that there's only one flavor of software freedom. From where I stand, I think true freedom is being able to soak in all the available data on a given issue and then come to terms with the approach that best suits that person's lifestyle. + +So for me, I prefer using Linux distributions that provide me with the desktop that meets all of my needs. This includes the use of non-proprietary software and proprietary software. Even though it's fair to suggest that the proprietary software restricts my personal freedom, I must counter this by pointing out that I had the freedom to use it in the first place. One might even call this freedom of choice. + +Perhaps this too, is why I find myself identifying more with the ideals of Open Source Software instead of sticking with the ideals behind the Free Software movement. I prefer to stand with the group that doesn't spend their time telling me how I'm wrong for using what works best for me. 
It's been my experience that the Open Source crowd is merely interested in sharing the merits of software freedom without the passion for Free Software idealism. + +I think the concept of Free Software is great. And to those who need to be active in software politics and point out the flaws of using proprietary software to folks, then I think Linux ([GNU/Linux][5]) activism is a good fit. Where practical users such as myself tend to change course from Free Software Linux advocates is in our presentation. + +When I present Linux on the desktop, I share my passion for its practical merits. And if I'm successful and they enjoy the experience, I allow the user to discover the Free Software perspective on their own. I've found most people use Linux on their computers not because they want to embrace software freedom, rather because they simply want the best user experience possible. Perhaps I'm alone in this, it's hard to say. + +What say you? Are you a Free Software Advocate? Perhaps you're a fan of using proprietary software/code on your desktop Linux distribution? Hit the Comments and share your Linux desktop experiences. 
+ + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/linux-practicality-vs-activism.html + +作者:[Matt Hartley][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.datamation.com/author/Matt-Hartley-3080.html +[1]: https://en.wikipedia.org/wiki/Free_software_movement +[2]: https://www.gnu.org/philosophy/free-sw.en.html +[3]: https://www.gnu.org/philosophy/free-software-for-freedom.en.html +[4]: https://en.wikipedia.org/wiki/Free_Software_Foundation +[5]: https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy + + From 5c03cc4530bfb2a63a368850bcfa4d8143c15f42 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 18 Jul 2016 10:10:33 +0800 Subject: [PATCH 172/471] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20160711 Getting Started with Git.md | 134 ------------------ 1 file changed, 134 deletions(-) delete mode 100644 sources/tech/20160711 Getting Started with Git.md diff --git a/sources/tech/20160711 Getting Started with Git.md b/sources/tech/20160711 Getting Started with Git.md deleted file mode 100644 index cf67119f28..0000000000 --- a/sources/tech/20160711 Getting Started with Git.md +++ /dev/null @@ -1,134 +0,0 @@ -Getting started with Git -====================== - - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) - -In the introduction to this series we learned who should use Git, and what it is for. Today we will learn how to clone public Git repositories, and how to extract individual files without cloning the whole works. - -Since Git is so popular, it makes life a lot easier if you're at least familiar with it at a basic level. 
If you can grasp the basics (and you can, I promise!), then you'll be able to download whatever you need, and maybe even contribute stuff back. And that, after all, is what open source is all about: having access to the code that makes up the software you run, the freedom to share it with others, and the right to change it as you please. Git makes this whole process easy, as long as you're comfortable with Git. - -So let's get comfortable with Git. - -### Read and write - -Broadly speaking, there are two ways to interact with a Git repository: you can read from it, or you can write to it. It's just like a file: sometimes you open a document just to read it, and other times you open a document because you need to make changes. - -In this article, we'll cover reading from a Git repository. We'll tackle the subject of writing back to a Git repository in a later article. - -### Git or GitHub? - -A word of clarification: Git is not the same as GitHub (or GitLab, or Bitbucket). Git is a command-line program, so it looks like this: - -``` -$ git -usage: Git [--version] [--help] [-C ] - [-p | --paginate | --no-pager] [--bare] - [--Git-dir=] [] -``` - -As Git is open source, lots of smart people have built infrastructures around it which, in themselves, have become very popular. - -My articles about Git teach pure Git first, because if you understand what Git is doing then you can maintain an indifference to what front end you are using. However, my articles also include common ways of accomplishing each task through popular Git services, since that's probably what you'll encounter first. - -### Installing Git - -To install Git on Linux, grab it from your distribution's software repository. BSD users should find Git in the Ports tree, in the devel section. - -For non-open source operating systems, go to the project site and follow the instructions. Once installed, there should be no difference between Linux, BSD, and Mac OS X commands. 
Windows users will have to adapt Git commands to match the Windows file system, or install Cygwin to run Git natively, without getting tripped up by Windows file system conventions. - -### Afternoon tea with Git - -Not every one of us needs to adopt Git into our daily lives right away. Sometimes, the most interaction you have with Git is to visit a repository of code, download a file or two, and then leave. On the spectrum of getting to know Git, this is more like afternoon tea than a proper dinner party. You make some polite conversation, you get the information you need, and then you part ways without the intention of speaking again for at least another three months. - -And that's OK. - -Generally speaking, there are two ways to access Git: via command line, or by any one of the fancy Internet technologies providing quick and easy access through the web browser. - -Say you want to install a trash bin for use in your terminal because you've been burned one too many times by the rm command. You've heard about Trashy, which calls itself "a sane intermediary to the rm command", and you want to look over its documentation before you install it. Lucky for you, [Trashy is hosted publicly on GitLab.com][1]. - -### Landgrab - -The first way we'll work with this Git repository is a sort of landgrab method: we'll clone the entire thing, and then sort through the contents later. Since the repository is hosted with a public Git service, there are two ways to do this: on the command line, or through a web interface. - -To grab an entire repository with Git, use the git clone command with the URL of the Git repository. If you're not clear on what the right URL is, the repository should tell you. GitLab gives you a copy-and-paste repository URL [for Trashy][2]. - -![](https://opensource.com/sites/default/files/1_gitlab-url.jpg) - -You might notice that on some services, both SSH and HTTPS links are provided. You can use SSH only if you have write permissions to the repository. 
Otherwise, you must use the HTTPS URL. - -Once you have the right URL, cloning the repository is pretty simple. Just git clone the URL, and optionally name the directory to clone it into. The default behaviour is to clone the git directory to your current directory; for example, 'trashy.git' gets put in your current location as 'trashy'. I use the .clone extension as a shorthand for repositories that are read-only, and the .git extension as shorthand for repositories I can read and write, but that's not by any means an official mandate. - -``` -$ git clone https://gitlab.com/trashy/trashy.git trashy.clone -Cloning into 'trashy.clone'... -remote: Counting objects: 142, done. -remote: Compressing objects: 100% (91/91), done. -remote: Total 142 (delta 70), reused 103 (delta 47) -Receiving objects: 100% (142/142), 25.99 KiB | 0 bytes/s, done. -Resolving deltas: 100% (70/70), done. -Checking connectivity... done. -``` - -Once the repository has been cloned successfully, you can browse files in it just as you would any other directory on your computer. - -The other way to get a copy of the repository is through the web interface. Both GitLab and GitHub provide a snapshot of any repository in a .zip file. GitHub has a big green download button, but on GitLab, look for an inconspicuous download button on the far right of your browser window: - -![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) - -### Pick and choose - -An alternate method of obtaining a file from a Git repository is to find the file you're after and pluck it right out of the repository. This method is only supported via web interfaces, which is essentially you looking at someone else's clone of a repository; you can think of it as a sort of HTTP shared directory. 
-
-The problem with using this method is that you might find that certain files don't actually exist in a raw Git repository, as a file might only exist in its complete form after a make command builds the file, which won't happen until you download the repository, read the README or INSTALL file, and run the command. Assuming, however, that you are sure a file does exist and you just want to go into the repository, grab it, and walk away, you can do that.
-
-In GitLab and GitHub, click the Files link for a file view, view the file in Raw mode, and use your web browser's save function, e.g. in Firefox, File > Save Page As. In a GitWeb repository (a web view of personal git repositories used by some who prefer to host git themselves), the Raw view link is in the file listing view.
-
-![](https://opensource.com/sites/default/files/1_webgit-file.jpg)
-
-### Best practices
-
-Generally, cloning an entire Git repository is considered the right way of interacting with Git. There are a few reasons for this. Firstly, a clone is easy to keep updated with the git pull command, so you won't have to keep going back to some web site for a new copy of a file each time an improvement has been made. Secondly, should you happen to make an improvement yourself, then it is easier to submit those changes to the original author if it is all nice and tidy in a Git repository.
-
-For now, it's probably enough to just practice going out and finding interesting Git repositories and cloning them to your drive. As long as you know the basics of using a terminal, then it's not hard to do. Don't know the basics of terminal usage? Give me five more minutes of your time.
-
-### Terminal basics
-
-The first thing to understand is that all files have a path. That makes sense; if I told you to open a file for me on a regular non-terminal day, you'd have to get to where that file is on your drive, and you'd do that by navigating a bunch of computer windows until you reached that file.
For example, maybe you'd click your home directory > Pictures > InktoberSketches > monkey.kra.
-
-In that scenario, we could say that the file monkey.kra has the path $HOME/Pictures/InktoberSketches/monkey.kra.
-
-In the terminal, unless you're doing special sysadmin work, your file paths are generally going to start with $HOME (or, if you're lazy, just the ~ character) followed by a list of folders up to the filename itself. This is analogous to whatever icons you click in your GUI to reach the file or folder.
-
-If you want to clone a Git repository into your Documents directory, then you could open a terminal and run this command:
-
-```
-$ git clone https://gitlab.com/foo/bar.git $HOME/Documents/bar.clone
-```
-
-Once that is complete, you can open a file manager window, navigate to your Documents folder, and you'll find the bar.clone directory waiting for you.
-
-If you want to get a little more advanced, you might revisit that repository at some later date, and try a git pull to see if there have been updates to the project:
-
-```
-$ cd $HOME/Documents/bar.clone
-$ pwd
-/home/you/Documents/bar.clone
-$ git pull
-```
-
-For now, that's all the terminal commands you need to get started, so go out and explore. The more you do it, the better you get at it, and that is, at least give or take a vowel, the name of the game.
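The foo/bar repository in the snippets above is a placeholder, so those commands won't run as printed. If you want to rehearse the whole clone-and-pull cycle without a network connection, a local directory can stand in for the hosted repository. This is only a sketch, assuming nothing beyond a stock git install; every path and name in it (upstream, bar.clone) is illustrative:

```shell
# Rehearse clone and pull offline: a local directory stands in for
# the hosted repository. All names here are placeholders.
set -e
workdir=$(mktemp -d)

# Create a stand-in "upstream" repository with one committed file.
git init --quiet "$workdir/upstream"
cd "$workdir/upstream"
git config user.email "you@example.com"   # needed to commit in a fresh repo
git config user.name "You"
echo "first draft" > README.md
git add README.md
git commit --quiet -m "Initial commit"

# Clone it, naming the clone directory explicitly, as in the article.
git clone --quiet "$workdir/upstream" "$workdir/bar.clone"

# Upstream gains a new commit...
echo "second draft" >> README.md
git commit --quiet -am "Update README"

# ...and git pull brings the clone up to date.
cd "$workdir/bar.clone"
git pull --quiet
tail -n 1 README.md
```

Because everything lives under a throwaway mktemp directory, you can run this as many times as you like without touching your real Documents folder.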
- --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/stumbling-git - -作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]: https://gitlab.com/trashy/trashy -[2]: https://gitlab.com/trashy/trashy.git From c718bcbd5f2531e5fa6f320639e56941233e9ce3 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 18 Jul 2016 10:11:41 +0800 Subject: [PATCH 173/471] =?UTF-8?q?20160718-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/{tech => talk}/20160627 Linux Practicality vs Activism.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => talk}/20160627 Linux Practicality vs Activism.md (100%) diff --git a/sources/tech/20160627 Linux Practicality vs Activism.md b/sources/talk/20160627 Linux Practicality vs Activism.md similarity index 100% rename from sources/tech/20160627 Linux Practicality vs Activism.md rename to sources/talk/20160627 Linux Practicality vs Activism.md From df964529448c4014fcace29cf948f8232ad1ce83 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 18 Jul 2016 10:20:19 +0800 Subject: [PATCH 174/471] =?UTF-8?q?20160718-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160706 What is Git.md | 122 +++++++++++++++++++++++++++ 1 file changed, 122 insertions(+) create mode 100644 sources/tech/20160706 What is Git.md diff --git a/sources/tech/20160706 What is Git.md b/sources/tech/20160706 What is Git.md new file mode 100644 index 0000000000..5726bce405 --- /dev/null +++ b/sources/tech/20160706 What is Git.md @@ -0,0 +1,122 @@ +What is Git +=========== + +Welcome to my series on learning how to use the Git version control system! 
In this introduction to the series, you will learn what Git is for and who should use it.
+
+If you're just starting out in the open source world, you're likely to come across a software project that keeps its code in, and possibly releases it for use by way of, Git. In fact, whether you know it or not, you're certainly using software right now that is developed using Git: the Linux kernel (which drives the website you're on right now, if not the desktop or mobile phone you're accessing it on), Firefox, Chrome, and many more projects share their codebase with the world in a Git repository.
+
+On the other hand, all the excitement and hype over Git tends to make things a little muddy. Can you only use Git to share your code with others, or can you use Git in the privacy of your own home or business? Do you have to have a GitHub account to use Git? Why use Git at all? What are the benefits of Git? Is Git the only option?
+
+So forget what you know or what you think you know about Git, and let's take it from the beginning.
+
+### What is version control?
+
+Git is, first and foremost, a version control system (VCS). There are many version control systems out there: CVS, SVN, Mercurial, Fossil, and, of course, Git.
+
+Git serves as the foundation for many services, like GitHub and GitLab, but you can use Git without using any other service. This means that you can use Git privately or publicly.
+
+If you have ever collaborated on anything digital with anyone, then you know how it goes. It starts out simple: you have your version, and you send it to your partner. They make some changes, so now there are two versions, and they send their suggestions back to you. You integrate their changes into your version, and now there is one version again.
+
+Then it gets worse: while you change your version further, your partner makes more changes to their version.
Now you have three versions; the merged copy that you both worked on, the version you changed, and the version your partner has changed.
+
+As Jason van Gumster points out in his article, [Even artists need version control][1], this syndrome tends to happen in individual settings as well. In both art and science, it's not uncommon to develop a trial version of something; a version of your project that might make it a lot better, or that might fail miserably. So you create file names like project_justTesting.kdenlive and project_betterVersion.kdenlive, and then project_best_FINAL.kdenlive, but with the inevitable allowance for project_FINAL-alternateVersion.kdenlive, and so on.
+
+Whether it's a change to a for loop or an editing change, it happens to the best of us. That is where a good version control system makes life easier.
+
+### Git snapshots
+
+Git takes snapshots of a project, and stores those snapshots as unique versions.
+
+If you go off in a direction with your project that you decide was the wrong direction, you can just roll back to the last good version and continue along an alternate path.
+
+If you're collaborating, then when someone sends you changes, you can merge those changes into your working branch, and then your collaborator can grab the merged version of the project and continue working from the new current version.
+
+Git isn't magic, so conflicts do occur ("You changed the last line of the book, but I deleted that line entirely; how do we resolve that?"), but on the whole, Git enables you to manage the many potential variants of a single work, retaining the history of all the changes, and even allows for parallel versions.
+
+### Git distributes
+
+Working on a project on separate machines is complex, because you want to have the latest version of a project while you work, make your own changes, and share your changes with your collaborators.
The default method of doing this tends to be clunky online file sharing services, or old school email attachments, both of which are inefficient and error-prone.
+
+Git is designed for distributed development. If you're involved with a project, you can clone the project's Git repository, and then work on it as if it were the only copy in existence. Then, with a few simple commands, you can pull in any changes from other contributors, and you can also push your changes over to someone else. Now there is no confusion about who has what version of a project, or whose changes exist where. It is all locally developed, and pushed and pulled toward a common target (or not, depending on how the project chooses to develop).
+
+### Git interfaces
+
+In its natural state, Git is an application that runs in the Linux terminal. However, as it is well-designed and open source, developers all over the world have devised other ways to access it.
+
+It is free, available to anyone for $0, and comes in packages on Linux, BSD, Illumos, and other Unix-like operating systems. It looks like this:
+
+```
+$ git --version
+git version 2.5.3
+```
+
+Probably the most well-known Git interfaces are web-based: sites like GitHub, the open source GitLab, Savannah, BitBucket, and SourceForge all offer online code hosting to maximise the public and social aspect of open source along with, in varying degrees, browser-based GUIs to minimise the learning curve of using Git. This is what the GitLab interface looks like:
+
+![](https://opensource.com/sites/default/files/0_gitlab.png)
+
+Additionally, it is possible that a Git service or independent developer may even have a custom Git frontend that is not HTML-based, which is particularly handy if you don't live with a browser eternally open. The most transparent integration comes in the form of file manager support. The KDE file manager, Dolphin, can show the Git status of a directory, and even generate commits, pushes, and pulls.
+
+![](https://opensource.com/sites/default/files/0_dolphin.jpg)
+
+[Sparkleshare][2] uses Git as a foundation for its own Dropbox-style file sharing interface.
+
+![](https://opensource.com/sites/default/files/0_sparkleshare_1.jpg)
+
+For more, see the (long) page on the official [Git wiki][3] listing projects with graphical interfaces to Git.
+
+### Who should use Git?
+
+You should! The real question is when? And what for?
+
+### When should I use Git, and what should I use it for?
+
+To get the most out of Git, you need to think a little bit more than usual about file formats.
+
+Git is designed to manage source code, which in most languages consists of lines of text. Of course, Git doesn't know if you're feeding it source code or the next Great American Novel, so as long as it breaks down to text, Git is a great option for managing and tracking versions.
+
+But what is text? If you write something in an office application like LibreOffice, then you're probably not generating raw text. There is usually a wrapper around complex applications like that which encapsulates the raw text in XML markup and then in a zip container, as a way to ensure that all of the assets for your office file are available when you send that file to someone else. Strangely, though, something that you might expect to be very complex, like the save files for a [Kdenlive][4] project, or an SVG from [Inkscape][5], are actually raw XML files that can easily be managed by Git.
+
+If you use Unix, you can check to see what a file is made of with the file command:
+
+```
+$ file ~/path/to/my-file.blah
+my-file.blah: ASCII text
+$ file ~/path/to/different-file.kra
+different-file.kra: Zip data (MIME type "application/x-krita")
+```
+
+If unsure, you can view the contents of a file with the head command:
+
+```
+$ head ~/path/to/my-file.blah
+```
+
+If you see text that is mostly readable by you, then it is probably a file made of text.
If you see garbage with some familiar text characters here and there, it is probably not made of text.
+
+Make no mistake: Git can manage other formats of files, but it treats them as blobs. The difference is that in a text file, two Git snapshots (or commits, as we call them) might be, say, three lines different from each other. If you have a photo that has been altered between two different commits, how can Git express that change? It can't, really, because photographs are not made of any kind of sensible text that can just be inserted or removed. I wish photo editing were as easy as just changing some text from "ugly greenish-blue" to "blue-with-fluffy-clouds" but it truly is not.
+
+People check in blobs, like PNG icons or a spreadsheet or a flowchart, to Git all the time, so if you're working in Git then don't be afraid to do that. Know that it's not sensible to do that with huge files, though. If you are working on a project that does generate both text files and large blobs (a common scenario with video games, which have equal parts source code and graphical and audio assets), then you can do one of two things: either invent your own solution, such as pointers to a shared network drive, or use a Git add-on like Joey Hess's excellent [git annex][6], or the [Git-Media][7] project.
+
+So you see, Git really is for everyone. It is a great way to manage versions of your files, it is a powerful tool, and it is not as scary as it first seems.
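The clone, push, and pull cycle described in this article can also be rehearsed end to end on a single machine: a bare repository stands in for the shared server, and two clones stand in for two collaborators. This is a minimal sketch, assuming only a stock git install; the names alice, bob, and project.git are placeholders, and the branch name is pinned to master so the example behaves the same on any Git version:

```shell
# A bare repository plays the shared server; two clones play two collaborators.
set -e
workdir=$(mktemp -d)

git init --quiet --bare "$workdir/project.git"
# Pin the default branch name so the demo is deterministic across Git versions.
git --git-dir="$workdir/project.git" symbolic-ref HEAD refs/heads/master

# The first collaborator clones the (empty) shared repository, commits, and pushes.
git clone --quiet "$workdir/project.git" "$workdir/alice"
cd "$workdir/alice"
git config user.email "alice@example.com"
git config user.name "Alice"
git symbolic-ref HEAD refs/heads/master
echo "draft one" > notes.txt
git add notes.txt
git commit --quiet -m "Add notes"
git push --quiet origin master

# The second collaborator clones the shared repository and gets that work immediately.
git clone --quiet "$workdir/project.git" "$workdir/bob"

# The first collaborator pushes another change...
echo "draft two" >> notes.txt
git commit --quiet -am "Revise notes"
git push --quiet origin master

# ...and the second collaborator pulls it in with a single command.
cd "$workdir/bob"
git pull --quiet origin master
tail -n 1 notes.txt
```

This is exactly the push-and-pull-toward-a-common-target flow described under "Git distributes", just with local directories in place of separate machines.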
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/resources/what-is-git + +作者:[Seth Kenlon ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://opensource.com/life/16/2/version-control-isnt-just-programmers +[2]: http://sparkleshare.org/ +[3]: https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools#Graphical_Interfaces +[4]: https://opensource.com/life/11/11/introduction-kdenlive +[5]: http://inkscape.org/ +[6]: https://git-annex.branchable.com/ +[7]: https://github.com/alebedev/git-media + + + + From 905d6d43298bf7e2392a5a9c80f743ec426a58e5 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Mon, 18 Jul 2016 10:28:16 +0800 Subject: [PATCH 175/471] Update 20160706 What is Git.md translating --- sources/tech/20160706 What is Git.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160706 What is Git.md b/sources/tech/20160706 What is Git.md index 5726bce405..f98de42fd3 100644 --- a/sources/tech/20160706 What is Git.md +++ b/sources/tech/20160706 What is Git.md @@ -1,3 +1,4 @@ +translating by cvsher What is Git =========== From 9e291a8b126bb76d8a19712c1afe7ee8c6f1c3bb Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 18 Jul 2016 11:42:51 +0800 Subject: [PATCH 176/471] PUB:20160624 IT runs on the cloud and the cloud runs on Linux. Any questions MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @chenxinlong 翻译的不错。IT 我考虑还是不翻译,这样含义广泛些。 --- ... the cloud runs on Linux. Any questions.md | 63 +++++++++++++++++++ ... the cloud runs on Linux. Any questions.md | 62 ------------------ 2 files changed, 63 insertions(+), 62 deletions(-) create mode 100644 published/20160624 IT runs on the cloud and the cloud runs on Linux. 
Any questions.md delete mode 100644 translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md diff --git a/published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md b/published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md new file mode 100644 index 0000000000..17af8d85d9 --- /dev/null +++ b/published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md @@ -0,0 +1,63 @@ +IT 运行在云端,而云运行在 Linux 上。你怎么看? +=================================================================== + +> IT 正在逐渐迁移到云端。那又是什么驱动了云呢?答案是 Linux。 当连微软的 Azure 都开始拥抱 Linux 时,你就应该知道这一切都已经改变了。 + +![](http://zdnet1.cbsistatic.com/hub/i/r/2016/06/24/7d2b00eb-783d-4202-bda2-ca65d45c460a/resize/770xauto/732db8df725ede1cc38972788de71a0b/linux-owns-cloud.jpg) + +*图片: ZDNet* + +不管你接不接受, 云正在接管 IT 已经成为现实。 我们这几年见证了 [ 云在内部 IT 的崛起 ][1] 。 那又是什么驱动了云呢? 答案是 Linux 。 + +[Uptime Institute][2] 最近对 1000 个 IT 决策者进行了调查,发现约 50% 左右的资深企业 IT 决策者认为在将来[大部分的 IT 工作应该放在云上 ][3] 或托管网站上。在这个调查中,23% 的人认为这种改变即将发生在明年,有 70% 的人则认为这种情况会在四年内出现。 + +这一点都不奇怪。 我们中的许多人仍热衷于我们的物理服务器和机架, 但一般运营一个自己的数据中心并不会产生任何的经济效益。 + +很简单, 只需要对比你[运行在你自己的硬件上的资本费用(CAPEX)和使用云的业务费用(OPEX)][4]即可。 但这并不是说你应该把所有的东西都一股脑外包出去,而是说在大多数情况下你应该把许多工作都迁移到云端。 + +相应地,如果你想充分地利用云,你就得了解 Linux 。 + +[亚马逊的 AWS][5]、 [Apache CloudStack][6]、 [Rackspace][7]、[谷歌的 GCP][8] 以及 [ OpenStack ][9] 的核心都是运行在 Linux 上的。那么结果如何?截至到 2014 年, [在 Linux 服务器上部署的应用达到所有企业的 79% ][10],而 在 Windows 服务器上部署的则跌到 36%。从那时起, Linux 就获得了更多的发展动力。 + +即便是微软自身也明白这一点。 + +Azure 的技术主管 Mark Russinovich 曾说,仅仅在过去的几年内微软就从[四分之一的 Azure 虚拟机运行在 Linux 上][11] 变为[将近三分之一的 Azure 虚拟机运行在 Linux 上][12]。 + +试想一下。微软,一家正逐渐将[云变为自身财政收入的主要来源][13] 的公司,其三分之一的云产业依靠于 Linux 。 + +即使是到目前为止, 这些不论喜欢或者不喜欢微软的人都很难想象得到[微软会从一家以商业软件为基础的软件公司转变为一家开源的、基于云服务的企业][14] 。 + +Linux 对于这些专用服务器机房的渗透甚至比它刚开始的时候更深了。 举个例子, [Docker 最近发行了其在 Windows 10 和 Mac OS X 上的公测版本 ][15] 。 这难道是意味着 [Docker][16] 将会把其同名的容器服务移植到 Windows 10 和 Mac 上吗? 
并不是的。 + +在这两个平台上, Docker 只是运行在一个 Linux 虚拟机内部。 在 Mac OS 上是 HyperKit ,在 Windows 上则是 Hyper-V 。 在图形界面上可能看起来就像另一个 Mac 或 Windows 上的应用, 但在其内部的容器仍然是运行在 Linux 上的。 + +所以,就像大量的安卓手机和 Chromebook 的用户压根就不知道他们所运行的是 Linux 系统一样。这些 IT 用户也会随之悄然地迁移到 Linux 和云上。 + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/ + +作者:[Steven J. Vaughan-Nichols][a] +译者:[chenxinlong](https://github.com/chenxinlong) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]: http://www.zdnet.com/article/2014-the-year-the-cloud-killed-the-datacenter/ +[2]: https://uptimeinstitute.com/ +[3]: http://www.zdnet.com/article/move-to-cloud-accelerating-faster-than-thought-survey-finds/ +[4]: http://www.zdnet.com/article/rethinking-capex-and-opex-in-a-cloud-centric-world/ +[5]: https://aws.amazon.com/ +[6]: https://cloudstack.apache.org/ +[7]: https://www.rackspace.com/en-us +[8]: https://cloud.google.com/ +[9]: http://www.openstack.org/ +[10]: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/ +[11]: http://news.microsoft.com/bythenumbers/azure-virtual +[12]: http://www.zdnet.com/article/microsoft-nearly-one-in-three-azure-virtual-machines-now-are-running-linux/ +[13]: http://www.zdnet.com/article/microsofts-q3-azure-commercial-cloud-strong-but-earnings-revenue-light/ +[14]: http://www.zdnet.com/article/why-microsoft-is-turning-into-an-open-source-company/ +[15]: http://www.zdnet.com/article/new-docker-betas-for-azure-windows-10-now-available/ +[16]: http://www.docker.com/ + diff --git a/translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md b/translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. 
Any questions.md deleted file mode 100644 index 804eb7db67..0000000000 --- a/translated/talk/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md +++ /dev/null @@ -1,62 +0,0 @@ -信息技术运行在云端,而云运行在 Linux 上。有什么问题吗? -=================================================================== - ->信息技术正在逐渐被迁移到云端. 那又是什么驱动了云呢?答案是 Linux。 当连微软的 Azure 都开始拥抱 Linux 时,你就应该知道这一切都已经改变了。 - -![](http://zdnet1.cbsistatic.com/hub/i/r/2016/06/24/7d2b00eb-783d-4202-bda2-ca65d45c460a/resize/770xauto/732db8df725ede1cc38972788de71a0b/linux-owns-cloud.jpg) ->图片: ZDNet - -不管你接不接受, 云正在接管信息技术都已成现实。 我们这几年见证了 [ 云在信息技术产业内部的崛起 ][1] 。 那又是什么驱动了云呢? 答案是 Linux 。 - -[Uptime Institute][2] 最近对 1000 个 IT 执行部门进行调查并发现了约 50% 左右的高级企业的 IT 执行部门认为在将来 [ 大部分的 IT 工作内容应存储备份在云上 ][3] 或托管网站上。在这个调查中,23% 的人认为这种改变即将发生在明年,有 70% 的人则认为这种情况会在四年内出现。 - -这一点都不奇怪。 我们中的许多人仍热衷于我们的物理服务器和机架, 但一般运营一个自己的数据中心并不会产生任何的经济效益。 - -这真的非常简单。 只需要对比你 [ 运行在硬件上的个人资本费用 (CAPEX) 和使用云的操作费用 (OPEX)][4]。 但这并不是说你会想把所有的一切都外置,而是说在大部分时间内你会想把你的一些工作内容迁移到云端。 - -相应地,如果你想充分地利用云,你就得了解 Linux 。 - -[ 亚马逊 web 服务 ][5], [ Apache 的 CloudStack][6], [Rackspace][7], [ 谷歌云平台 ][8] 以及 [ OpenStack ][9] 的核心都是运行在 Linux 上的。那么结果如何?截至到 2014 年, [ 在 Linux 服务器上部署的应用达到所有企业的 79% ][10],而 Windows 服务器上部署的应用则跌到 36%。从那时起, Linux 就获得了更多的发展动力。 - -即便是微软自身也明白这一点。 - -Azure 的技术主管 Mark Russinovich 曾说,仅仅在过去的几年内微软就从 [ 四分之一的 Azure 虚拟机运行在 Linux 上 ][11] 变为 [ 将近三分之一的 Azure 虚拟机运行在 Linux 上][12]. - -试想一下。 微软, 一家正逐渐将 [ 云变为自身财政收入的主要来源 ][13] 的公司,其三分之一的云产业依靠于 Linux 。 - -即使是到目前为止, 这些不论喜欢或者不喜欢微软的人都很难想象得到 [ 微软会从一家以专利保护为基础的软件公司转变为一家开源,基于云服务的企业][14] 。 - -Linux 对于这些专用服务器机房的渗透甚至比它刚开始的时候更深了。 举个例子, [ Docker 最近发行了其在 Windows 10 和 Mac OS X 上的公测版本 ][15] 。 所以难道这意味着 [Docker][16] 将会把其同名的容器服务移植到 Windows 10 和 Mac 上吗? 
并不是的。 - -在这两个平台上, Docker 只是运行在一个 Linux 虚拟机内部。 在 Mac OS 上是 HyperKit ,在 Windows 上则是 Hyper-V 。 你的图形界面可能看起来就像另一个 Mac 或 Windows 上的应用, 但在其内部核心的容器仍然是运行在 Linux 上的。 - -所以,就像大量的安卓手机和 Chromebook 的用户压根就不知道他们所运行的是 Linux 系统一样。这些信息技术的用户也会随之悄然地迁移到 Linux 和云上。 - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/#ftag=RSSbaffb68 - -作者:[Steven J. Vaughan-Nichols][a] -译者:[chenxinlong](https://github.com/chenxinlong) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[1]: http://www.zdnet.com/article/2014-the-year-the-cloud-killed-the-datacenter/ -[2]: https://uptimeinstitute.com/ -[3]: http://www.zdnet.com/article/move-to-cloud-accelerating-faster-than-thought-survey-finds/ -[4]: http://www.zdnet.com/article/rethinking-capex-and-opex-in-a-cloud-centric-world/ -[5]: https://aws.amazon.com/ -[6]: https://cloudstack.apache.org/ -[7]: https://www.rackspace.com/en-us -[8]: https://cloud.google.com/ -[9]: http://www.openstack.org/ -[10]: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/ -[11]: http://news.microsoft.com/bythenumbers/azure-virtual -[12]: http://www.zdnet.com/article/microsoft-nearly-one-in-three-azure-virtual-machines-now-are-running-linux/ -[13]: http://www.zdnet.com/article/microsofts-q3-azure-commercial-cloud-strong-but-earnings-revenue-light/ -[14]: http://www.zdnet.com/article/why-microsoft-is-turning-into-an-open-source-company/ -[15]: http://www.zdnet.com/article/new-docker-betas-for-azure-windows-10-now-available/ -[16]: http://www.docker.com/ - From 0bd5d26d9170e27beafa287241ed6374476f3932 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Mon, 18 Jul 2016 12:40:03 +0800 Subject: [PATCH 177/471] Update 20160706 What is Git.md 
(#4191) translating --- sources/tech/20160706 What is Git.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160706 What is Git.md b/sources/tech/20160706 What is Git.md index 5726bce405..f98de42fd3 100644 --- a/sources/tech/20160706 What is Git.md +++ b/sources/tech/20160706 What is Git.md @@ -1,3 +1,4 @@ +translating by cvsher What is Git =========== From 18750b90b113ef40f79dec66ab352b9cb427dcc6 Mon Sep 17 00:00:00 2001 From: Johnny Liao Date: Mon, 18 Jul 2016 16:42:13 +0800 Subject: [PATCH 178/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?(#4192)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Finish Advanced Image Processing with Python. * Remove the source file. --- ...3 Advanced Image Processing with Python.md | 124 ------------------ ...3 Advanced Image Processing with Python.md | 124 ++++++++++++++++++ 2 files changed, 124 insertions(+), 124 deletions(-) delete mode 100644 sources/tech/20160623 Advanced Image Processing with Python.md create mode 100644 translated/tech/20160623 Advanced Image Processing with Python.md diff --git a/sources/tech/20160623 Advanced Image Processing with Python.md b/sources/tech/20160623 Advanced Image Processing with Python.md deleted file mode 100644 index 0a3a722845..0000000000 --- a/sources/tech/20160623 Advanced Image Processing with Python.md +++ /dev/null @@ -1,124 +0,0 @@ -Johnny-Liao translating... - -Advanced Image Processing with Python -====================================== - -![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/Image-Search-Engine.png) - -Building an image processing search engine is no easy task. There are several concepts, tools, ideas and technologies that go into it. One of the major image-processing concepts is reverse image querying (RIQ) or reverse image search. Google, Cloudera, Sumo Logic and Birst are among the top organizations to use reverse image search. 
Great for analyzing images and making use of mined data, RIQ provides a very good insight into analytics.
-
-### Top Companies and Reverse Image Search
-
-There are many top tech companies that are using RIQ to best effect. For example, Pinterest first brought in visual search in 2014. It subsequently released a white paper in 2015, revealing the architecture. Reverse image search enabled Pinterest to obtain visual features from fashion objects and display similar product recommendations.
-
-As is generally known, Google images uses reverse image search allowing users to upload an image and then search for connected images. The submitted image is analyzed and a mathematical model is made out of it, using advanced algorithms. The image is then compared with innumerable others in the Google databases before results are matched and similar results obtained.
-
-**Here is a graph representation from the OpenCV 2.4.9 Features Comparison Report:**
-
-![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/search-engine-graph.jpg)
-
-### Algorithms & Python Libraries
-
-Before we get down to the workings of it, let us rush through the main elements that make building an image processing search engine with Python possible:
-
-### Patented Algorithms
-
-#### SIFT (Scale-Invariant Feature Transform) Algorithm
-
-1. A patented technology with nonfree functionality that uses image identifiers to identify a similar image, even those photographed from different angles, sizes, depths and scales, so that they are included in the search results. Check the detailed video on SIFT here.
-2. SIFT correctly matches the search criteria with a large database of features from many images.
-3. Matching same images with different viewpoints and matching invariant features to obtain search results is another SIFT feature. Read more about scale-invariant keypoints here.
-
-#### SURF (Speeded Up Robust Features) Algorithm
-
-1.
[SURF][1] is also patented with nonfree functionality and a more ‘speeded’ up version of SIFT. Unlike SIFT, SURF approximates the Laplacian of Gaussian with a Box Filter.
-
-2. SURF relies on the determinant of the Hessian Matrix for both its location and scale.
-
-3. Rotation invariance is not a requisite in many applications. So not finding this orientation speeds up the process.
-
-4. SURF includes several features that improve the speed at each step. Three times faster than SIFT, SURF is great with rotation and blurring. It is not as great in illumination and viewpoint change though.
-
-5. Open CV, a programming function library, provides SURF functionalities. SURF.compute() and SURF.detect() can be used to find descriptors and keypoints. Read more about SURF [here][2].
-
-### Open Source Algorithms
-
-#### KAZE Algorithm
-
-1. KAZE is an open source 2D multiscale and novel feature detection and description algorithm in nonlinear scale spaces. Efficient techniques in Additive Operator Splitting (AOS) and variable conductance diffusion are used to build the nonlinear scale space.
-
-2. Multiscale image processing basics are simple – creating an image's scale space while filtering the original image with the right function over increasing time or scale.
-
-#### AKAZE (Accelerated-KAZE) Algorithm
-
-1. As the name suggests, this is a faster mode of image search, finding matching keypoints between two images. AKAZE uses a binary descriptor and nonlinear scale space that balances accuracy and speed.
-
-#### BRISK (Binary Robust Invariant Scalable Keypoints) Algorithm
-
-1. BRISK is great for description, keypoint detection and matching.
-
-2. A highly adaptive algorithm: a scale-space FAST-based detector along with a bit-string descriptor helps speed up the search significantly.
-
-3. Scale-space keypoint detection and keypoint description help optimize the performance with relation to the task at hand.
-
-#### FREAK (Fast Retina Keypoint)
-
-1.
This is a novel keypoint descriptor inspired by the human eye. A cascade of binary strings is efficiently computed by an image intensity comparison. The FREAK algorithm allows faster computing with a lower memory load as compared to BRISK, SURF and SIFT.
-
-#### ORB (Oriented FAST and Rotated BRIEF)
-
-1. A fast binary descriptor, ORB is resistant to noise and rotation invariant. ORB builds on the FAST keypoint detector and the BRIEF descriptor, elements attributed to its low cost and good performance.
-
-2. Apart from the fast and precise orientation component, efficiently computing the oriented BRIEF, and analyzing the variance and correlation of oriented BRIEF features, is another ORB feature.
-
-### Python Libraries
-
-#### Open CV
-
-1. OpenCV is available for both academic and commercial use. An open source machine learning and computer vision library, OpenCV makes it easy for organizations to utilize and modify code.
-
-2. Over 2500 optimized algorithms, including state-of-the-art machine learning and computer vision algorithms, serve various image search purposes – face detection, object identification, camera movement tracking, finding similar images from an image database, following eye movements, scenery recognition, etc.
-
-3. Top companies like Google, IBM, Yahoo, Sony, Honda, Microsoft and Intel make wide use of OpenCV.
-
-4. OpenCV uses Python, Java, C, C++ and MATLAB interfaces while supporting Windows, Linux, Mac OS and Android.
-
-#### Python Imaging Library (PIL)
-
-1. The Python Imaging Library (PIL) supports several file formats while providing image processing and graphics solutions. The open source PIL adds image processing capabilities to your Python interpreter.
-2. The standard procedure for image manipulation includes image enhancing, transparency and masking handling, image filtering, per-pixel manipulation, etc.
-
-For detailed statistics and graphs, view the OpenCV 2.4.9 Features Comparison Report [here][3].
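One detail worth knowing about the binary descriptors above (BRISK, FREAK, ORB): they are compared with the Hamming distance rather than a Euclidean metric, which is what makes them so fast, since matching reduces to an XOR and a bit count. A minimal sketch in plain Python, using toy 16-bit descriptors as stand-ins for the much longer bit strings these algorithms actually emit:

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(query, candidates):
    """Return the candidate descriptor closest to the query in Hamming distance."""
    return min(candidates, key=lambda c: hamming(query, c))

# Toy 16-bit descriptors standing in for real 512-bit BRISK/FREAK/ORB output.
query = 0b1011001110001111
candidates = [
    0b1011001110001011,  # differs from the query by a single bit
    0b0100110001110000,  # nearly the bitwise opposite of the query
    0b1111111100000000,
]

best = match(query, candidates)
print(f"best match: {best:016b}, distance {hamming(query, best)}")
```

A real matcher does exactly this over thousands of descriptors per image (OpenCV's brute-force matcher with a Hamming norm, for instance), just with hardware popcount instead of a Python loop.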
-
-### Building an Image Search Engine
-
-An image search engine helps pick similar images from a prepopulated set of images. The most popular among these is Google's well known image search engine. For starters, there are various approaches to build a system like this. To mention a few:
-
-1. Using image extraction, image description extraction, meta data extraction and search result extraction to build an image search engine.
-2. Define your image descriptor, index your dataset, define your similarity metric, and then search and rank.
-3. Select the image to be searched, select the directory for carrying out the search, search that directory for all pictures, create a picture feature index, evaluate the same feature for the search picture, match the pictures in the search, and obtain the matched pictures.
-
-Our approach basically began with comparing grayscaled versions of the images, gradually moving on to complex feature matching algorithms like SIFT and SURF, and then finally settling down to an open source solution called BRISK. All these algorithms give efficient results with minor changes in performance and latency. An engine built on these algorithms has numerous applications, like analyzing graphic data for popularity statistics, identification of objects in graphic contents, and many more.
-
-**Example**: An image search engine needs to be built by an IT company for a client. So if a brand logo image is submitted in the search, all related brand image searches show up as results. The obtained results can also be used for analytics by the client, allowing them to estimate the brand popularity as per the geographic location. It's still early days though; RIQ or reverse image search has not been exploited to its full extent yet.
-
-This concludes our article on building an image search engine using Python. Check our blog section out for the latest on technology and programming.
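The grayscale comparison that the approach above starts from can be sketched in a few lines of plain Python. This is only an illustration: the tiny hard-coded 2x2 pixel grids stand in for real decoded image files, and the 0.299/0.587/0.114 luma weights are one common RGB-to-grayscale conversion:

```python
def to_gray(pixels):
    """Convert rows of (r, g, b) pixels to grayscale with standard luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in pixels]

def distance(img_a, img_b):
    """Mean absolute difference between two equally sized grayscale images."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    return sum(abs(a - b) for a, b in zip(flat_a, flat_b)) / len(flat_a)

def rank(query, library):
    """Sort a library of (name, image) pairs from most to least similar."""
    gray_query = to_gray(query)
    return sorted(library, key=lambda item: distance(gray_query, to_gray(item[1])))

# Tiny 2x2 "images": one nearly identical to the query, one very different.
query = [[(200, 10, 10), (190, 12, 9)], [(198, 11, 8), (201, 9, 12)]]
near  = [[(199, 11, 10), (191, 12, 10)], [(197, 10, 8), (200, 10, 11)]]
far   = [[(0, 0, 255), (0, 0, 250)], [(5, 0, 255), (0, 5, 245)]]

results = rank(query, [("far", far), ("near", near)])
print([name for name, _ in results])  # most similar candidate comes first
```

Descriptor-based matchers like BRISK replace this whole-image pixel comparison with local features, which is what buys invariance to scale, rotation and partial occlusion, but the retrieve-and-rank skeleton stays the same.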
- -Statistics Source: OpenCV 2.4.9 Features Comparison Report (computer-vision-talks.com) - -(Guidance and additional inputs by Ananthu Nair.) - --------------------------------------------------------------------------------- - -via: http://www.cuelogic.com/blog/advanced-image-processing-with-python/ - -作者:[Snehith Kumbla][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.cuelogic.com/blog/author/snehith-kumbla/ -[1]: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html -[2]: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf -[3]: https://docs.google.com/spreadsheets/d/1gYJsy2ROtqvIVvOKretfxQG_0OsaiFvb7uFRDu5P8hw/edit#gid=10 diff --git a/translated/tech/20160623 Advanced Image Processing with Python.md b/translated/tech/20160623 Advanced Image Processing with Python.md new file mode 100644 index 0000000000..73f8751225 --- /dev/null +++ b/translated/tech/20160623 Advanced Image Processing with Python.md @@ -0,0 +1,124 @@ +Python高级图像处理 +====================================== + +![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/Image-Search-Engine.png) + +构建图像搜索引擎并不是一件容易的任务。这里有几个概念、工具、想法和技术需要实现。主要的图像处理概念之一是逆图像查询(RIQ)或者说逆图像搜索。Google、Cloudera、Sumo Logic 和 Birst等公司在使用逆图像搜索中名列前茅。通过分析图像和使用数据挖掘RIQ提供了很好的洞察分析能力。 + +### 顶级公司与逆图像搜索 + +有很多顶级的技术公司使用RIQ来产生最好的收益。例如:在2014年Pinterest第一次带来了视觉搜索。随后在2015年发布了一份白皮书,揭示了其架构。逆图像搜索让Pinterest获得对时尚对象的视觉特征和显示类似的产品建议的能力。 + +众所周知,谷歌图片使用逆图像搜索允许用户上传一张图片然后搜索相关联的图片。通过使用先进的算法对提交的图片进行分析和数学建模。然后和谷歌数据库中无数的其他图片进行比较得到相似的结果。 + +**这是opencv 2.4.9特征比较报告一个图表:** + +![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/search-engine-graph.jpg) + +### 算法 & Python库 + +在我们使用它工作之前,让我们过一遍构建图像搜索引擎的Python库的主要元素: + +### 专利算法 + +#### 尺度不变特征变换算法(SIFT - Scale-Invariant Feature Transform) + +1. 非自由功能的一个专利技术,利用图像识别符,以识别相似图像,甚至那些从不同的角度,大小,深度和规模点击,它们被包括在搜索结果中。[点击这里][4]查看SIFT详细视频。 +2. 
SIFT能与从许多图片中提取的特征的大型数据库正确的匹配搜索条件。 +3. 从不同的方面来匹配相同的图像和匹配不变特征来获得搜索结果是SIFT的另一个特征。了解更多关于尺度不变[关键点][5] + +#### 加速鲁棒特征(SURF - Speeded Up Robust Features)算法 + +1. [SURF][1] 也是一种非自由功能的专利技术而且还是一种“加速”的SIFT版本。不像SIFT, + +2. SURF依赖于Hessian矩阵行列式的位置和尺度。 + +3. 在许多应用中,旋转不变性是不是一个必要条件。所以找不到这个方向的速度加快了这个过程。 + +4. SURF包括几种特性,在每一步的速度提高。比SIFT快三倍的速度,SIFT擅长旋转和模糊化。然而它不擅长处理照明和变换视角。 + +5. Open CV,一个程序功能库提供SURF相似的功能,SURF.compute()和SURF.Detect()可以用来找到描述符和要点。阅读更多关于SURF[点击这里][2] + +### 开源算法 + +#### KAZE 算法 + +1. KAZE是一个开源的非线性尺度空间的二维多尺度和新的特征检测和描述算法。在加性算子分裂的有效技术(AOS)和可变电导扩散是用来建立非线性尺度空间。 + +2. 多尺度图像处理基础是简单的创建一个图像的尺度空间,同时用正确的函数过滤原始图像,提高时间或规模。 + +#### AKAZE (Accelerated-KAZE) 算法 + +1. 顾名思义,这是一个更快的图像搜索方式,找到匹配的关键点在两幅图像之间。AKAZE 使用二进制描述符和非线性尺度空间来平衡精度和速度。 + +#### BRISK (Binary Robust Invariant Scalable Keypoints) 算法 + +1. BRISK 非常适合描述关键点的检测与匹配。 + +2. 是一种高度自适应的算法,基于尺度空间的快速检测器和一个位字符串描述符,有助于加快搜索显着。 + +3. 尺度空间关键点检测与关键点描述帮助优化手头相关任务的性能 + +#### FREAK (Fast Retina Keypoint) + +1. 这个新的关键点描述的灵感来自人的眼睛。通过图像强度比能有效地计算一个二进制串级联。FREAK算法相比BRISK,SURF和SIFT算法有更快的计算与较低的内存负载。 + +#### ORB (Oriented FAST and Rotated BRIEF) + +1. 快速的二进制描述符,ORB具有抗噪声和旋转不变性。ORB建立在FAST关键点检测器和BRIEF描述符之上,有成本低、性能好的元素属性。 + +2. 除了快速和精确的定位元件,有效地计算定向的BRIEF,分析变动和面向BRIEF特点相关,是另一个ORB的特征。 + +### Python库 + +#### Open CV + +1. Open CV提供学术和商业用途。一个开源的机器学习和计算机视觉库,OpenCV便于组织利用和修改代码。 + +2. 超过2500个优化算法,包括国家最先进的机器学习和计算机视觉算法服务与各种图像搜索--人脸检测、目标识别、摄像机目标跟踪,从图像数据库中寻找类似图像、眼球运动跟随、风景识别等。 + +3. 像谷歌,IBM,雅虎,IBM,索尼,本田,微软和英特尔这样的大公司广泛的使用OpenCV。 + +4. OpenCV拥有python,java,C,C++和MATLAB接口,同时支持Windows,Linux,Mac OS和Android。 + +#### Python图像库 (PIL) + +1. Python图像库(PIL)支持多种文件格式,同时提供图像处理和图形解决方案。开源的PIL为你的Python解释器添加图像处理能力。 +2. 标准的图像处理能力包括图像增强、透明和屏蔽作用、图像过滤、像素操作等。 + +详细的数据和图表,请看OpenCV 2.4.9 特征比较报告。[这里][3] + +### 构建图像搜索引擎 + +图像搜索引擎可以从预置集图像库选择相似的图像。其中最受欢迎的是谷歌的著名的图像搜索引擎。对于初学者来说,有不同的方法来建立这样的系统。提几个如下: + +1. 采用图像提取、图像描述提取、元数据提取和搜索结果提取,建立图像搜索引擎。 +2. 定义你的图像描述符,数据集索引,定义你的相似性度量,然后搜索和排名。 +3. 
选择要搜索的图像,选择用于进行搜索的目录,所有图片的搜索目录,创建图片特征索引,评估相同的搜索图片的功能,在搜索中匹配图片,并获得匹配的图片。 + +我们的方法基本上就比较grayscaled版本的图像,逐渐移动到复杂的特征匹配算法如SIFT和SURF,最后沉淀下来的是开源的解决方案称为BRISK。所有这些算法提供了有效的结果,在性能和延迟的细微变化。建立在这些算法上的引擎有许多应用,如分析流行统计的图形数据,在图形内容中识别对象,以及更多。 + +**例如**:一个图像搜索引擎需要由一个IT公司作为客户机来建立。因此,如果一个品牌的标志图像被提交在搜索中,所有相关的品牌形象搜索显示结果。所得到的结果也可以通过客户端分析,使他们能够根据地理位置估计品牌知名度。但它还比较年轻,RIQ或反向图像搜索尚未被完全挖掘利用。 + +这就结束了我们的文章,使用Python构建图像搜索引擎。浏览我们的博客部分来查看最新的编程技术。 + +数据来源:OpenCV 2.4.9 特征比较报告(computer-vision-talks.com) + +(Ananthu Nair 的指导与补充) + +-------------------------------------------------------------------------------- + +via: http://www.cuelogic.com/blog/advanced-image-processing-with-python/ + +作者:[Snehith Kumbla][a] +译者:[Johnny-Liao](https://github.com/Johnny-Liao) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.cuelogic.com/blog/author/snehith-kumbla/ +[1]: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html +[2]: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf +[3]: https://docs.google.com/spreadsheets/d/1gYJsy2ROtqvIVvOKretfxQG_0OsaiFvb7uFRDu5P8hw/edit#gid=10 +[4]: https://www.youtube.com/watch?v=NPcMS49V5hg +[5]: https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf From f8d44e69787be799ffb24a5768570adc6688ebfe Mon Sep 17 00:00:00 2001 From: Mike Date: Tue, 19 Jul 2016 09:58:49 +0800 Subject: [PATCH 179/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?How=20To=20Setup=20Bridge=20(br0)=20Network=20on=20Ubuntu=20Lin?= =?UTF-8?q?ux=2014.04=20and=2016.04=20LTS.md=20(#4194)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md * translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md * translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md 
--- ...ork on Ubuntu Linux 14.04 and 16.04 LTS.md | 53 ++++++++++--------- 1 file changed, 27 insertions(+), 26 deletions(-) rename {sources => translated}/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md (55%) diff --git a/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md b/translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md similarity index 55% rename from sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md rename to translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md index b3d40a94c2..ed2f37235a 100644 --- a/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md +++ b/translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md @@ -1,48 +1,49 @@ -How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS +如何在 Ubuntu 14.04 和 16.04 上建立网桥(br0) ======================================================================= -> am a new Ubuntu Linux 16.04 LTS user. How do I setup a network bridge on the host server powered by Ubuntu 14.04 LTS or 16.04 LTS operating system? +> 作为一个 Ubuntu 16.04 LTS 的初学者。如何在 Ubuntu 14.04 和 16.04 的主机上建立网桥呢? ![](http://s0.cyberciti.org/images/category/old/ubuntu-logo.jpg) -A Bridged networking is nothing but a simple technique to connect to the outside network through the physical interface. It is useful for LXC/KVM/Xen/Containers virtualization and other virtual interfaces. The virtual interfaces appear as regular hosts to the rest of the network. In this tutorial I will explain how to configure a Linux bridge with bridge-utils (brctl) command line utility on Ubuntu server. 
+顾名思义,网桥的作用是通过物理接口连接内部和外部网络。对于虚拟端口或者 LXC/KVM/Xen/容器来说,这非常有用。通过网桥虚拟端口看起来是网络上的一个常规设备。在这个教程中,我将会介绍如何在 Ubuntu 服务器上通过 bridge-utils(brctl) 命令行来配置 Linux 网桥。 -### Our sample bridged networking +### 网桥网络示例 ![](http://s0.cyberciti.org/uploads/faq/2016/07/my-br0-br1-setup.jpg) ->Fig.01: Sample Ubuntu Bridged Networking Setup For Kvm/Xen/LXC Containers (br0) +>Fig.01: Kvm/Xen/LXC 容器网桥实例 (br0) In this example eth0 and eth1 is the physical network interface. eth0 connected to the LAN and eth1 is attached to the upstream ISP router/Internet. +在这个例子中,eth0 和 eth1 是物理网络接口。eth0 连接着局域网,eth1 连接着上游路由器/网络。 -### Install bridge-utils +### 安装 bridge-utils -Type the following [apt-get command][1] to install the bridge-utils: +使用[apt-get 命令][1] 安装 bridge-utils: ``` $ sudo apt-get install bridge-utils ``` -OR +或者 ```` $ sudo apt install bridge-utils ``` -Sample outputs: +样例输出: ![](http://s0.cyberciti.org/uploads/faq/2016/07/ubuntu-install-bridge-utils.jpg) ->Fig.02: Ubuntu Linux install bridge-utils package +>Fig.02: Ubuntu 安装 bridge-utils 包 -### Creating a network bridge on the Ubuntu server +### 在 Ubuntu 服务器上创建网桥 -Edit `/etc/network/interfaces` using a text editor such as nano or vi, enter: +使用你熟悉的文本编辑器修改 `/etc/network/interfaces` ,例如 vi 或者 nano : ``` $ sudo cp /etc/network/interfaces /etc/network/interfaces.bakup-1-july-2016 $ sudo vi /etc/network/interfaces ``` -Let us setup eth1 and map it to br1, enter (delete or comment out all eth1 entries): +接下来设置 eth1 并且将他绑定到 br1 ,输入(删除或者注释所有 eth1 相关配置): ``` # br1 setup with static wan IPv4 with ISP router as gateway @@ -59,7 +60,7 @@ iface br1 inet static bridge_maxwait 0 ``` -To setup eth0 and map it to br0, enter (delete or comment out all eth1 entries): +接下来设置 eth0 并将它绑定到 br0,输入(删除或者注释所有 eth1 相关配置): ``` auto br0 @@ -77,9 +78,9 @@ iface br0 inet static bridge_maxwait 0 ``` -### A note about br0 and DHCP +### 关于 br0 和 DHCP 的一点说明 -DHCP config options: +DHCP 的配置选项: ``` auto br0 @@ -90,25 +91,25 @@ iface br0 inet dhcp bridge_maxwait 0 ``` -Save and 
close the file.
+保存并且关闭文件。

-### Restart the server or networking service
+### 重启服务器或者网络服务

-You need to reboot the server or type the following command to restart the networking service (this may not work on SSH based session):
+你需要重启服务器,或者输入下列命令来重启网络服务(在基于 SSH 登录的会话中,这可能不管用):

```
$ sudo systemctl restart networking
```

-If you are using Ubuntu 14.04 LTS or older not systemd based system, enter:
+如果你正在使用 Ubuntu 14.04 LTS 或者更老的、不基于 systemd 的系统,输入:

```
$ sudo /etc/init.d/networking restart
```

-### Verify connectivity
+### 验证网络配置成功

-Use the ping/ip commands to verify that both LAN and WAN interfaces are reachable:
+使用 ping/ip 命令来验证 LAN 和 WAN 网络接口运行正常:

```
# See br0 and br1
ip a show
@@ -120,12 +121,12 @@ ping -c 2 cyberciti.biz
ping -c 2 10.0.80.12
```

-Sample outputs:
+样例输出:

![](http://s0.cyberciti.org/uploads/faq/2016/07/br0-br1-eth0-eth1-configured-on-ubuntu.jpg)
->Fig.03: Verify Bridging Ethernet Connections
+>Fig.03: 验证网桥的以太网连接

-Now, you can configure XEN/KVM/LXC containers to use br0 and br1 to reach directly to the internet or private lan. No need to setup special routing or iptables SNAT rules. 
+现在,你就可以配置 br0 和 br1 来让 XEN/KVM/LXC 容器访问因特网或者私有局域网了。再也没有必要去设置特定路由或者 iptables 的 SNAT 规则了。 -------------------------------------------------------------------------------- @@ -133,7 +134,7 @@ Now, you can configure XEN/KVM/LXC containers to use br0 and br1 to reach direct via: http://www.cyberciti.biz/faq/how-to-create-bridge-interface-ubuntu-linux/ 作者:[VIVEK GITE][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/MikeCoder) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 34b778b550e76c35e0998c6535f9028357688607 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Tue, 19 Jul 2016 11:05:48 +0800 Subject: [PATCH 180/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...mart Devices Security Concerns and More.md | 49 ------------------- ...mart Devices Security Concerns and More.md | 48 ++++++++++++++++++ 2 files changed, 48 insertions(+), 49 deletions(-) delete mode 100644 sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md create mode 100644 translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md diff --git a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md deleted file mode 100644 index 156fda88db..0000000000 --- a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md +++ /dev/null @@ -1,49 +0,0 @@ -vim-kakali translating - -Linus Torvalds Talks IoT, Smart Devices, Security Concerns, and More[video] -=========================================================================== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elc-linus-b.jpg?itok=6WwnCSjL) ->Dirk Hohndel interviews Linus 
Torvalds at ELC. - -For the first time in the 11-year history of the [Embedded Linux Conference (ELC)][0], held in San Diego, April 4-6, the keynotes included a discussion with Linus Torvalds. The creator and lead overseer of the Linux kernel, and “the reason we are all here,” in the words of his interviewer, Intel Chief Linux and Open Source Technologist Dirk Hohndel, seemed upbeat about the state of Linux in embedded and Internet of Things applications. Torvalds very presence signaled that embedded Linux, which has often been overshadowed by Linux desktop, server, and cloud technologies, had come of age. - -![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/elc-linus_0.jpg?itok=FNPIDe8k) ->Linus Torvalds speaking at Embedded Linux Conference. - -IoT was the main topic at ELC, which included an OpenIoT Summit track, and the chief topic in the Torvalds interview. - -“Maybe you won’t see Linux at the IoT leaf nodes, but anytime you have a hub, you will need it,” Torvalds told Hohndel. “You need smart devices especially if you have 23 [IoT standards]. If you have all these stupid devices that don’t necessarily run Linux, and they all talk with slightly different standards, you will need a lot of smart devices. We will never have one completely open standard, one ring to rule them all, but you will have three of four major protocols, and then all these smart hubs that translate.” - -Torvalds remained customarily philosophical when Hohndel asked about the gaping security holes in IoT. “I don’t worry about security because there’s not a lot we can do,” he said. “IoT is unpatchable -- it’s a fact of life.” - -The Linux creator seemed more concerned about the lack of timely upstream contributions from one-off embedded projects, although he noted there have been significant improvements in recent years, partially due to consolidation on hardware. 
- -“The embedded world has traditionally been hard to interact with as an open source developer, but I think that’s improving,” Torvalds said. “The ARM community has become so much better. Kernel people can now actually keep up with some of the hardware improvements. It’s improving, but we’re not nearly there yet.” - -Torvalds admitted to being more at home on the desktop than in embedded and to having “two left hands” when it comes to hardware. - -“I’ve destroyed things with a soldering iron many times,” he said. “I’m not really set up to do hardware.” On the other hand, Torvalds guessed that if he were a teenager today, he would be fiddling around with a Raspberry Pi or BeagleBone. “The great part is if you’re not great at soldering, you can just buy a new one.” - -Meanwhile, Torvalds vowed to continue fighting for desktop Linux for another 25 years. “I’ll wear them down,” he said with a smile. - -Watch the full video, below. - -Get the Latest on Embedded Linux and IoT. Access 150+ recorded sessions from Embedded Linux Conference 2016. [Watch Now][1]. 
- -[video](https://youtu.be/tQKUWkR-wtM) - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security-concerns-and-more-video - -作者:[ERIC BROWN][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/ericstephenbrown -[0]: http://events.linuxfoundation.org/events/embedded-linux-conference -[1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom - diff --git a/translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md new file mode 100644 index 0000000000..876f84f68e --- /dev/null +++ b/translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md @@ -0,0 +1,48 @@ + +Linus Torvalds 谈及物联网,智能设备,安全连接等问题[video] +=========================================================================== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elc-linus-b.jpg?itok=6WwnCSjL) +>Dirk Hohndel 在嵌入式大会上采访 Linus Torvalds 。 + + + [嵌入式大会(Embedded Linux Conference)][0] 从在 San Diego 【译注:圣迭戈,美国加利福尼亚州的一个太平洋沿岸城市。】开始举办到现在已经有 11 年了,在 4 月 4 日到 6 日,Linus Torvalds 加入了会议的主题讨论。他是 Linux 内核的缔造者和最高决策者,也是“我们都在这里的原因”,在采访他的对话中,英特尔的 Linux 和开源技术总监 Dirk Hohndel 谈到了 Linux 在嵌入式和物联网应用程序领域的快速发展前景。Torvalds 很少出席嵌入式 Linux 大会,这些大会经常被 Linux 桌面、服务器和云技术夺去光芒。 +![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/elc-linus_0.jpg?itok=FNPIDe8k) +>Linus Torvalds 在嵌入式 Linux 大会上的演讲。 + + +物联网是嵌入式大会的主题,也包括未来开放物联网的最高发展方向(OpenIoT Summit),这是采访 Torvalds 的主要话题。 + +Torvalds 对 Hohndel 说到,“或许你不会在物联网设备上看到 Linux 的影子,但是在你有一个核心设备的时候,你就会需要它。你需要智能设备尤其在你有 23 [物联网标准]的时候。如果你全部使用低级设备,它们没必要一定运行 Linux 
,它们采用的标准稍微有点不同,所以你需要很多智能设备。我们将来也不会有一个完全开放的统一标准,不会有一个“统御一切”的标准,但是你会需要三四种主要协议,然后由这些智能中枢设备在它们之间做转换。”
+
+当 Hohndel 问及物联网的巨大安全漏洞的时候,Torvalds 神情如常。他说:“我不担心安全问题,因为我们能做的不是很多,物联网设备是没法打补丁的,这是必须面对的事实。”
+
+Linux 缔造者看起来更关心的是一次性的嵌入式项目缺少及时的上游贡献,尽管他也指出,近年来这方面已经有了显著的改善,部分得益于硬件上的整合。
+
+Torvalds 说:“嵌入式领域历来就很难与开源开发者有所联系,但是我认为这些都在发生改变,ARM 社区已经变得非常优秀。内核开发者现在实际上能跟上一些硬件的改进了。情况在不断变好,但我们还远没有达到理想的程度。”
+
+Torvalds 承认他在家更常用桌面系统而不是嵌入式系统,并且在摆弄硬件的时候他有“两只左手”。
+
+“我已经用电烙铁弄坏了很多东西。”他说道,“我真的不适合搞硬件开发。”另一方面,Torvalds 设想如果他现在是个年轻人,他可能会去摆弄 Raspberry Pi(树莓派)和 BeagleBone(猎兔犬板)【译注:Beagle板实际是由TI支持的一个以教育(STEP)为目的的开源项目】。“妙就妙在,如果你不擅长焊接,大不了就再买一块新的板子。”
+
+同时,Torvalds 也承诺要为 Linux 桌面再奋斗 25 年。他笑着说:“我会把他们磨垮的。”
+
+下面,请看完整视频。
+
+获取关于嵌入式 Linux 和物联网的最新信息。观看 2016 年嵌入式 Linux 大会的 150 多场会议录像。[现在观看][1]。
+[video](https://youtu.be/tQKUWkR-wtM)
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security-concerns-and-more-video
+
+作者:[ERIC BROWN][a]
+译者:[vim-kakali](https://github.com/vim-kakali)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/ericstephenbrown
+[0]: http://events.linuxfoundation.org/events/embedded-linux-conference
+[1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom
+

From c99281a284c0f7180db0773e4135c20c765b0bbf Mon Sep 17 00:00:00 2001
From: vim-kakali <1799225723@qq.com>
Date: Tue, 19 Jul 2016 11:40:01 +0800
Subject: [PATCH 181/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...takes charge of Linux desktop and IoT software distribution.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md b/sources/talk/20160614 ​Ubuntu Snap takes 
charge of Linux desktop and IoT software distribution.md index 6992f76156..667c49b056 100644 --- a/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md +++ b/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md @@ -1,3 +1,6 @@ +vim-kakali translating + + Ubuntu Snap takes charge of Linux desktop and IoT software distribution =========================================================================== From dbdd6b234a0b22f69f873dc3cc4547c8c2ee3330 Mon Sep 17 00:00:00 2001 From: "Cathon.ZHD" Date: Tue, 19 Jul 2016 22:18:03 +0800 Subject: [PATCH 182/471] Tanslation Finished --- ...meet the IT needs of today and tomorrow.md | 63 ------------------- ...meet the IT needs of today and tomorrow.md | 59 +++++++++++++++++ 2 files changed, 59 insertions(+), 63 deletions(-) delete mode 100644 sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md create mode 100644 translated/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md diff --git a/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md b/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md deleted file mode 100644 index 1417c9fe7b..0000000000 --- a/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md +++ /dev/null @@ -1,63 +0,0 @@ -[Cathon is translating] -Training vs. hiring to meet the IT needs of today and tomorrow -================================================================ - -![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf) - -In the digital era, IT skills requirements are in a constant state of flux thanks to the constant change of the tools and technologies companies need to keep pace. It’s not easy for companies to find and hire talent with coveted skills that will enable them to innovate. 
Meanwhile, training internal staff to take on new skills and challenges takes time that is often in short supply. - -[Sandy Hill][1] is quite familiar with the various skills required across a variety of IT disciplines. As the director of IT for [Pegasystems][2], she is responsible for IT teams involved in areas ranging from application development to data center operations. What’s more, Pegasystems develops applications to help sales, marketing, service and operations teams streamline operations and connect with customers, which means she has to grasp the best way to use IT resources internally, and the IT challenges the company’s customers face. - -![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png) - -**The Enterprisers Project (TEP): How has the emphasis you put on training changed in recent years?** - -**Hill**: We’ve been growing exponentially over the past couple of years so now we’re implementing more global processes and procedures. With that comes the training aspect of making sure everybody is on the same page. - -Most of our focus has shifted to training staff on new products and tools that get implemented to drive innovation and enhance end user productivity. For example, we’ve implemented an asset management system; we didn’t have one before. So we had to do training globally instead of hiring someone who already knew the product. As we’re growing, we’re also trying to maintain a tight budget and flat headcount. So we’d rather internally train than try to hire new people. - -**TEP: Describe your approach to training. What are some of the ways you help employees evolve their skills?** - -**Hill**: I require each staff member to have a technical and non-technical training goal, which are tracked and reported on as part of their performance review. 
Their technical goal needs to align within their job function, and the non-technical goal can be anything from focusing on sharpening one of their soft skills to learning something outside of their area of expertise. I perform yearly staff evaluations to see where the gaps and shortages are so that teams remain well-rounded. - -**TEP: To what extent have your training initiatives helped quell recruitment and retention issues?** - -**Hill**: Keeping our staff excited about learning new technologies keeps their skill sets sharp. Having the staff know that we value them, and we are vested in their professional growth and development motivates them. - -**TEP: What sorts of training have you found to be most effective?** - -**Hill**: We use several different training methods that we’ve found to be effective. With new or special projects, we try to incorporate a training curriculum led by the vendor as part of the project rollout. If that’s not an option, we use off-site training. We also purchase on-line training packages, and I encourage my staff to attend at least one conference per year to keep up with what’s new in the industry. - -**TEP**: For what sorts of skills have you found it’s better to hire new people than train existing staff? - -**Hill**: It depends on the project. In one recent initiative, trying to implement OpenStack, we didn’t have internal expertise at all. So we aligned with a consulting firm that specialized in that area. We utilized their expertise on-site to help run the project and train internal team members. It was a massive undertaking to get internal people to learn the skills they needed while also doing their day-to-day jobs. - -The consultant helped us determine the headcount we needed to be proficient. This allowed us to assess our staff to see if gaps remained, which would require additional training or hiring. And we did end up hiring some of the contractors. 
But the alternative was to send some number of FTEs (full-time employees) for 6 to 8 weeks of training, and our pipeline of projects wouldn’t allow that. - -**TEP: In thinking about some of your most recent hires, what skills did they have that are especially attractive to you?** - -**Hill**: In recent hires, I’ve focused on soft skills. In addition to having solid technical skills, they need to be able to communicate effectively, work in teams and have the ability to persuade, negotiate and resolve conflicts. - -IT people in general kind of keep to themselves; they’re often not the most social people. Now, where IT is more integrated throughout the organization, the ability to give useful updates and status reports to other business units is critical to show that IT is an active presence and to be successful. - - - --------------------------------------------------------------------------------- - -via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/ - -作者:[ Paul Desmond][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://enterprisersproject.com/user/paul-desmond -[1]: https://enterprisersproject.com/user/sandy-hill -[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c| - - - - - - diff --git a/translated/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md b/translated/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md new file mode 100644 index 0000000000..fc7794f305 --- /dev/null +++ b/translated/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md @@ -0,0 +1,59 @@ +Training vs. 
hiring to meet the IT needs of today and tomorrow
+培训还是雇人,来满足当今和未来的 IT 需求
+================================================================
+
+![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
+
+在数字化时代,由于企业需要不断跟上工具和技术更新换代的步伐,对 IT 技能的需求也稳定增长。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间。而且,这也往往满足不了需求。
+
+[Sandy Hill][1] 对多种 IT 学科涉及到的多项技术都很熟悉。她作为 [Pegasystems][2] 项目的 IT 主管,负责的 IT 团队涉及的领域从应用的部署到数据中心的运营。更重要的是,Pegasystems 开发应用来帮助销售、市场、服务以及运营团队简化操作,联系客户。这意味着她需要掌握和利用 IT 内部资源的最佳方法,面对公司客户遇到的 IT 挑战。
+
+![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png)
+
+**企业家项目(TEP):这些年你是如何调整培训重心的?**
+
+**Hill**:在过去的几十年中,我们经历了爆炸式的发展,所以现在我们要实现更多的全球化进程。随之而来的培训方面,将确保每个人都在同一起跑线上。
+
+我们大多的关注点已经转移到培养员工使用新的产品和工具上,这些新产品和工具的实现,能够推动创新,并提高工作效率。例如,我们实现了资产管理系统;以前我们是没有的。因此我们需要为全部员工做培训,而不是雇佣那些已经知道该产品的人。当我们正在发展的时候,我们也试图保持紧张的预算和稳定的职员总数。所以,我们更愿意在内部培训而不是雇佣新人。
+
+**TEP:说说培训方法吧,你是怎样帮助你的员工发展他们的技能?**
+
+**Hill**:我要求每一位员工制定一个技术性的和非技术性的训练目标。这作为他们绩效评估的一部分。他们的技术性目标需要与他们的工作职能相符,非技术性目标则着重发展一项软技能,或是学一些专业领域之外的东西。我每年对职员进行一次评估,看看差距和不足之处,以使团队保持全面发展。
+
+**TEP:你的训练计划能够在多大程度上减轻招聘和保留职员的问题?**
+
+**Hill**:使我们的职员对学习新的技术保持兴奋,让他们的技能更好。让职员知道我们重视他们并且让他们在擅长的领域成长和发展,以此激励他们。
+
+**TEP:你有没有发现哪种培训是最有效的?**
+
+**Hill**:我们使用几种不同的、已被证明有效的培训方法。当有新的或特殊的项目时,我们尝试加入一套由供应商主导的培训课程,作为项目的一部分。要是这个方法不能实现,我们将进行异地培训。我们也会购买一些在线的培训课程。我也鼓励职员每年参加至少一次会议,以了解行业的动向。
+
+**TEP:你有没有发现有哪些技能,雇佣新人要比培训现有员工要好?**
+
+**Hill**:这和项目有关。有一个最近的计划,试图实现 OpenStack,而我们根本没有这方面的专家。所以我们与一家从事这一领域的咨询公司合作。我们利用他们的专业知识帮助我们运行项目,并现场培训我们的内部团队成员。让内部员工学习他们需要的技能,同时还要完成他们每天的工作,这是一项艰巨的任务。
+
+顾问帮助我们确定我们需要的对某一技术熟练的员工人数。这使我们能够对员工进行评估,看看是否存在缺口。如果存在人员上的缺口,我们还需要额外的培训或是员工招聘。我们也确实雇佣了一些承包商。另一个选择是让一些全职员工进行为期六至八周的培训,但我们的项目模式不容许这么做。
+
+**TEP:想一下你最近雇佣的员工,他们的哪些技能特别能够吸引到你?**
+
+**Hill**:在最近的招聘中,我侧重于软技能。除了扎实的技术能力外,他们需要能够在团队中进行有效的沟通和工作,要有说服他人、谈判和解决冲突的能力。
+
+IT 人一向独来独往。他们一般不是社交最多的人。现在,IT 越来越整合到组织中,它为其他业务部门提供有用的更新报告和状态报告的能力是至关重要的,这也表明 IT 是积极的存在,并将取得成功。
+
+-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2016/6/training-vs-hiring-meet-it-needs-today-and-tomorrow + +作者:[Paul Desmond][a] +译者:[Cathon](https://github.com/Cathon) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://enterprisersproject.com/user/paul-desmond +[1]: https://enterprisersproject.com/user/sandy-hill +[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c| + + + + From c0f397c0519f2b58bc72a6dac300cc4e09770651 Mon Sep 17 00:00:00 2001 From: Xin Wang <2650454635@qq.com> Date: Wed, 20 Jul 2016 09:48:45 +0800 Subject: [PATCH 183/471] Writing online multiplayer game with python and asyncio - part 1 (#4198) * Create 20160524 Writing online multiplayer game with python and asyncio - part 1.md * Delete 20160524 Writing online multiplayer game with python and asyncio - part 1.md --- ...r game with python and asyncio - part 1.md | 73 ------------------- ...r game with python and asyncio - part 1.md | 71 ++++++++++++++++++ 2 files changed, 71 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md create mode 100644 translated/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md diff --git a/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md deleted file mode 100644 index 95122771ac..0000000000 --- a/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md +++ /dev/null @@ -1,73 +0,0 @@ -xinglianfly translate -Writing online multiplayer game with python and asyncio - 
part 1 -=================================================================== - -Have you ever combined async with Python? Here I’ll tell you how to do it and show it on a [working example][1] - a popular Snake game, designed for multiple players. - -[Play gmae][2] - -### 1. Introduction - -Massive multiplayer online games are undoubtedly one of the main trends of our century, in both tech and cultural domains. And while for a long time writing a server for a MMO game was associated with massive budgets and complex low-level programming techniques, things are rapidly changing in the recent years. Modern frameworks based on dynamic languages allow handling thousands of parallel user connections on moderate hardware. At the same time, HTML 5 and WebSockets standards enabled the creation of real-time graphics-based game clients that run directly in web browser, without any extensions. - -Python may be not the most popular tool for creating scalable non-blocking servers, especially comparing to node.js popularity in this area. But the latest versions of Python are aimed to change this. The introduction of [asyncio][3] library and a special [async/await][4] syntax makes asynchronous code look as straightforward as regular blocking code, which now makes Python a worthy choice for asynchronous programming. So I will try to utilize these new features to demonstrate a way to create an online multiplayer game. - -### 2. Getting asynchronous - -A game server should handle a maximum possible number of parallel users' connections and process them all in real time. And a typical solution - creating threads, doesn't solve a problem in this case. Running thousands of threads requires CPU to switch between them all the time (it is called context switching), which creates big overhead, making it very ineffective. Even worse with processes, because, in addition, they do occupy too much memory. 
In Python there is even one more problem - regular Python interpreter (CPython) is not designed to be multithreaded, it aims to achieve maximum performance for single-threaded apps instead. That's why it uses GIL (global interpreter lock), a mechanism which doesn't allow multiple threads to run Python code at the same time, to prevent uncontrolled usage of the same shared objects. Normally the interpreter switches to another thread when currently running thread is waiting for something, usually a response from I/O (like a response from web server for example). This allows having non-blocking I/O operations in your app, because every operation blocks only one thread instead of blocking the whole server. However, it also makes general multithreading idea nearly useless, because it doesn't allow you to execute python code in parallel, even on multi-core CPU. While at the same time it is completely possible to have non-blocking I/O in one single thread, thus eliminating the need of heavy context-switching. - -Actually, a single-threaded non-blocking I/O is a thing you can do in pure python. All you need is a standard [select][5] module which allows you to write an event loop waiting for I/O from non-blocking sockets. However, this approach requires you to define all the app logic in one place, and soon your app becomes a very complex state-machine. There are frameworks that simplify this task, most popular are [tornado][6] and [twisted][7]. They are utilized to implement complex protocols using callback methods (and this is similar to node.js). The framework runs its own event loop invoking your callbacks on the defined events. And while this may be a way to go for some, it still requires programming in callback style, what makes your code fragmented. Compare this to just writing synchronous code and running multiple copies concurrently, like we would do with normal threads. Why wouldn't this be possible in one thread? 
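To make the select-based approach concrete, here is a minimal single-threaded event loop sketch (an illustration added for this edit, not code from the article): two in-process socket pairs stand in for two client connections, and the standard select module multiplexes them so that serving one client never blocks the other.

```python
import select
import socket

# Two in-process socket pairs simulate two remote client connections.
pairs = [socket.socketpair() for _ in range(2)]
for _client, server in pairs:
    server.setblocking(False)  # the server side must never block the loop

# Each "client" sends a request before the loop starts.
for i, (client, _server) in enumerate(pairs):
    client.sendall(b"ping %d" % i)

# The event loop: wait until at least one socket is readable, handle
# every ready socket, and repeat until all clients have been served.
handled = 0
server_socks = [server for _client, server in pairs]
while handled < len(pairs):
    readable, _, _ = select.select(server_socks, [], [])
    for sock in readable:
        data = sock.recv(1024)          # ready, so this will not block
        sock.sendall(b"echo: " + data)  # reply without stalling other clients
        handled += 1

echoed = []
for client, server in pairs:
    echoed.append(client.recv(1024).decode())
    client.close()
    server.close()

print(echoed)  # ['echo: ping 0', 'echo: ping 1']
```

Frameworks like tornado and twisted wrap exactly this kind of loop and dispatch the readable events to your callbacks.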
- -And this is where the concept of microthreads come in. The idea is to have concurrently running tasks in one thread. When you call a blocking function in one task, behind the scenes it calls a "manager" (or "scheduler") that runs an event loop. And when there is some event ready to process, a manager passes execution to a task waiting for it. That task will also run until it reaches a blocking call, and then it will return execution to a manager again. - ->Microthreads are also called lightweight threads or green threads (a term which came from Java world). Tasks which are running concurrently in pseudo-threads are called tasklets, greenlets or coroutines. - -One of the first implementations of microthreads in Python was [Stackless Python][8]. It got famous because it is used in a very successful online game [EVE online][9]. This MMO game boasts about a persistent universe, where thousands of players are involved in different activities, all happening in the real time. Stackless is a standalone Python interpreter which replaces standard function calling stack and controls the flow directly to allow minimum possible context-switching expenses. Though very effective, this solution remained less popular than "soft" libraries that work with a standard interpreter. Packages like [eventlet][10] and [gevent][11] come with patching of a standard I/O library in the way that I/O function pass execution to their internal event loop. This allows turning normal blocking code into non-blocking in a very simple way. The downside of this approach is that it is not obvious from the code, which calls are non-blocking. A newer version of Python introduced native coroutines as an advanced form of generators. Later in Python 3.4 they included asyncio library which relies on native coroutines to provide single-thread concurrency. But only in python 3.5 coroutines became an integral part of python language, described with the new keywords async and await. 
Here is a simple example, which illustrates using asyncio to run concurrent tasks: - -``` -import asyncio - -async def my_task(seconds): - print("start sleeping for {} seconds".format(seconds)) - await asyncio.sleep(seconds) - print("end sleeping for {} seconds".format(seconds)) - -all_tasks = asyncio.gather(my_task(1), my_task(2)) -loop = asyncio.get_event_loop() -loop.run_until_complete(all_tasks) -loop.close() -``` - -We launch two tasks, one sleeps for 1 second, the other - for 2 seconds. The output is: - -``` -start sleeping for 1 seconds -start sleeping for 2 seconds -end sleeping for 1 seconds -end sleeping for 2 seconds -``` - -As you can see, coroutines do not block each other - the second task starts before the first is finished. This is happening because asyncio.sleep is a coroutine which returns execution to a scheduler until the time will pass. In the next section, we will use coroutine-based tasks to create a game loop. - --------------------------------------------------------------------------------- - -via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ - -作者:[Kyrylo Subbotin][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ -[1]: http://snakepit-game.com/ -[2]: http://snakepit-game.com/ -[3]: https://docs.python.org/3/library/asyncio.html -[4]: https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-492 -[5]: https://docs.python.org/2/library/select.html -[6]: http://www.tornadoweb.org/ -[7]: http://twistedmatrix.com/ -[8]: http://www.stackless.com/ -[9]: http://www.eveonline.com/ -[10]: http://eventlet.net/ -[11]: http://www.gevent.org/ diff --git a/translated/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md 
b/translated/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md new file mode 100644 index 0000000000..26ddea6b1e --- /dev/null +++ b/translated/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md @@ -0,0 +1,71 @@ +使用python 和asyncio编写在线多人游戏 - 第1部分 +=================================================================== + +你曾经把async和python关联起来过吗?在这里我将告诉你怎样做,而且在[working example][1]这个例子里面展示-一个流行的贪吃蛇游戏,这是为多人游戏而设计的。 +[Play game][2] + +###1.简介 + +在技术和文化领域,大量的多人在线游戏毋庸置疑是我们这个世界的主流之一。同时,为一个MMO游戏写一个服务器一般和大量的预算与低水平的编程技术相关,在最近这几年,事情发生了很大的变化。基于动态语言的现代框架允许在稳健的硬件上面处理大量并发的用户连接。同时,HTML5 和 WebSockets 标准允许基于实时的图形游戏直接在web浏览器上创建客户端,而不需要任何的扩展。 + +对于创建可扩展非堵塞的服务器,Python可能不是最受欢迎的工具,尤其是和在这个领域最受欢迎的node.js相比。但是最近版本的python打算改变这种现状。[asyncio][3]的介绍和一个特别的[async/await][4] 语法使得异步代码看起来像常规的阻塞代码,这使得python成为一个值得信赖的异步编程语言。所以我将尝试利用这些新特点来创建一个多人在线游戏。 + +###2.异步 +一个游戏服务器应该处理最大数量的用户的并发连接和实时处理这些连接。一个典型的解决方案----创建线程,然而在这种情况下并不能解决这个问题。运行上千的线程需要CPU在它们之间不停的切换(这叫做上下文切换),这将开销非常大,效率很低下。更糟糕的是,因为,此外,它们会占用大量的内存。在python中,还有一个问题,python的解释器(CPython)并不是针对多线程设计的,它主要针对于单线程实现最大数量的行为。这就是为什么它使用GIL(global interpreter lock),一个不允许同时运行多线程python代码的架构,来防止共享物体的不可控用法。正常情况下当当前线程正在等待的时候,解释器转换到另一个线程,通常是一个I/O的响应(像一个服务器的响应一样)。这允许在你的应用中有非阻塞I/O,因为每一个操作仅仅堵塞一个线程而不是堵塞整个服务器。然而,这也使得通常的多线程变得无用,因为它不允许你并发执行python代码,即使是在多核心的cpu上。同时在单线程中拥有非阻塞IO是完全有可能的,因而消除了经常切换上下文的需要。 + +实际上,你可以用纯python代码来实现一个单线程的非阻塞IO。你所需要的只是标准的[select][5]模块,这个模块可以让你写一个事件循环来等待未阻塞的socket的io。然而,这个方法需要你在一个地方定义所有app的逻辑,不久之后,你的app就会变成非常复杂的状态机。有一些框架可以简化这个任务,比较流行的是[tornade][6] 和 [twisted][7]。他们被用来使用回调方法实现复杂的协议(这和node.js比较相似)。这个框架运行在他自己的事件循环中,这个事件在定义的事件上调用你的回调。并且,这或许是一些情况的解决方案,但是它仍然需要使用回调的方式编程,这使你的代码碎片化。和写同步代码并且并发执行多个副本相比,就像我们会在普通的线程上做一样。这为什么在单个线程上是不可能的呢? + +这就是为什么microthread出现的原因。这个想法是为了在一个线程上并发执行任务。当你在一个任务中调用阻塞的方法时,有一个叫做"manager" (或者“scheduler”)的东西在执行事件循环。当有一些事件准备处理的时候,一个manager会让等这个事件的“任务”单元去执行,直到自己停了下来。然后执行完之后就返回那个管理器(manager)。 + +>Microthreads are also called lightweight threads or green threads (a term which came from Java world). 
Tasks which are running concurrently in pseudo-threads are called tasklets, greenlets or coroutines.(Microthreads 也会被称为lightweight threads 或者 green threads(java中的一个术语)。在伪线程中并发执行的任务叫做tasklets,greenlets或者coroutines). + +microthreads的其中一种实现在python中叫做[Stackless Python][8]。这个被用在了一个叫[EVE online][9]的非常有名的在线游戏中,所以它变得非常有名。这个MMO游戏自称说在一个持久的宇宙中,有上千个玩家在做不同的活动,这些都是实时发生的。Stackless 是一个单独的python解释器,它代替了标准的栈调用并且直接控制流来减少上下文切换的开销。尽管这非常有效,这个解决方案不如使用标准解释器的“soft”库有名。像[eventlet][10]和[gevent][11] 的方式配备了标准的I / O库的补丁的I / O功能在内部事件循环执行。这使得将正常的阻塞代码转变成非阻塞的代码变得简单。这种方法的一个缺点是从代码看这并不明显,这被称为非阻塞。Python的新的版本介绍了本地协同程序作为生成器的高级形式。在Python 的3.4版本中,引入了asyncio库,这个库依赖于本地协同程序来提供单线程并发。但是在Python 3.5 协同程序变成了Python语言的一部分,使用新的关键字 async 和 await 来描述。这是一个简单的例子,这表明了使用asyncio来运行 并发任务。 + +``` +import asyncio + +async def my_task(seconds): + print("start sleeping for {} seconds".format(seconds)) + await asyncio.sleep(seconds) + print("end sleeping for {} seconds".format(seconds)) + +all_tasks = asyncio.gather(my_task(1), my_task(2)) +loop = asyncio.get_event_loop() +loop.run_until_complete(all_tasks) +loop.close() +``` + +我们启动了两个任务,一个睡眠1秒钟,另一个睡眠2秒钟,输出如下: + +``` +start sleeping for 1 seconds +start sleeping for 2 seconds +end sleeping for 1 seconds +end sleeping for 2 seconds +``` + +正如你所看到的,协同程序不会阻塞彼此-----第二个任务在第一个结束之前启动。这发生的原因是asyncio.sleep是协同程序,它会返回一个调度器的执行直到时间过去。在下一节中, +我们将会使用coroutine-based的任务来创建一个游戏循环。 + +-------------------------------------------------------------------------------- + +via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ + +作者:[Kyrylo Subbotin][a] +译者:[xinglianfly](https://github.com/xinglianfly) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ +[1]: http://snakepit-game.com/ +[2]: http://snakepit-game.com/ +[3]: 
https://docs.python.org/3/library/asyncio.html +[4]: https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-492 +[5]: https://docs.python.org/2/library/select.html +[6]: http://www.tornadoweb.org/ +[7]: http://twistedmatrix.com/ +[8]: http://www.stackless.com/ +[9]: http://www.eveonline.com/ +[10]: http://eventlet.net/ +[11]: http://www.gevent.org/ From b3dd7a29100ab5cb9abc3e36519a29f557ffc71d Mon Sep 17 00:00:00 2001 From: ivo_wang Date: Tue, 19 Jul 2016 20:49:15 -0500 Subject: [PATCH 184/471] =?UTF-8?q?Update=2020160104=20What=20is=20good=20?= =?UTF-8?q?stock=20portfolio=20management=20software=20on=20L=E2=80=A6=20(?= =?UTF-8?q?#4197)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * a little * move * moveall * about half * finish * finish * finished * Translating by ivo-wang Translating by ivo-wang --- ... portfolio management software on Linux.md | 110 ------------------ ...g may prevent the demise of Moore's Law.md | 1 + ... portfolio management software on Linux.md | 109 +++++++++++++++++ 3 files changed, 110 insertions(+), 110 deletions(-) delete mode 100644 sources/tech/20160104 What is good stock portfolio management software on Linux.md create mode 100644 translated/tech/20160104 What is good stock portfolio management software on Linux.md diff --git a/sources/tech/20160104 What is good stock portfolio management software on Linux.md b/sources/tech/20160104 What is good stock portfolio management software on Linux.md deleted file mode 100644 index b24139b9c5..0000000000 --- a/sources/tech/20160104 What is good stock portfolio management software on Linux.md +++ /dev/null @@ -1,110 +0,0 @@ -Translating by ivo-wang -What is good stock portfolio management software on Linux -================================================================================ -If you are investing in the stock market, you probably understand the importance of a sound portfolio management plan. 
The goal of portfolio management is to come up with the best investment plan tailored for you, considering your risk tolerance, time horizon and financial goals. Given its importance, no wonder there are no shortage of commercial portfolio management apps and stock market monitoring software, each touting various sophisticated portfolio performance tracking and reporting capabilities. - -For those of you Linux aficionados who are looking for a **good open-source portfolio management tool** to manage and track your stock portfolio on Linux, I would highly recommend a Java-based portfolio manager called [JStock][1]. If you are not a big Java fan, you might be turned off by the fact that JStock runs on a heavyweight JVM. At the same time I am sure many people will appreciate the fact that JStock is instantly accessible on every Linux platform with JRE installed. No hoops to jump through to make it work on your Linux environment. - -The day is gone when "open-source" means "cheap" or "subpar". Considering that JStock is just a one-man job, JStock is impressively packed with many useful features as a portfolio management tool, and all that credit goes to Yan Cheng Cheok! For example, JStock supports price monitoring via watchlists, multiple portfolios, custom/built-in stock indicators and scanners, support for 27 different stock markets and cross-platform cloud backup/restore. JStock is available on multiple platforms (Linux, OS X, Android and Windows), and you can save and restore your JStock portfolios seamlessly across different platforms via cloud backup/restore. - -Sounds pretty neat, huh? Now I am going to show you how to install and use JStock in more detail. - -### Install JStock on Linux ### - -Since JStock is written in Java, you must [install JRE][2] to run it. Note that JStock requires JRE 1.7 or higher. If your JRE version does not meet this requirement, JStock will fail with the following error. 
- - Exception in thread "main" java.lang.UnsupportedClassVersionError: org/yccheok/jstock/gui/JStock : Unsupported major.minor version 51.0 - -Once you install JRE on your Linux, download the latest JStock release from the official website, and launch it as follows. - - $ wget https://github.com/yccheok/jstock/releases/download/release_1-0-7-13/jstock-1.0.7.13-bin.zip - $ unzip jstock-1.0.7.13-bin.zip - $ cd jstock - $ chmod +x jstock.sh - $ ./jstock.sh - -In the rest of the tutorial, let me demonstrate several useful features of JStock. - -### Monitor Stock Price Movements via Watchlist ### - -On JStock you can monitor stock price movement and automatically get notified by creating one or more watchlists. In each watchlist, you can add multiple stocks you are interested in. Then add your alert thresholds under "Fall Below" and "Rise Above" columns, which correspond to minimum and maximum stock prices you want to set, respectively. - -![](https://c2.staticflickr.com/2/1588/23795349969_37f4b0f23c_c.jpg) - -For example, if you set minimum/maximum prices of AAPL stock to $102 and $115.50, you will be alerted via desktop notifications if the stock price goes below $102 or moves higher than $115.50 at any time. - -You can also enable email alert option, so that you will instead receive email notifications for such price events. To enable email alerts, go to "Options" menu. Under "Alert" tab, turn on "Send message to email(s)" box, and enter your Gmail account. Once you go through Gmail authorization steps, JStock will start sending email alerts to that Gmail account (and optionally CC to any third-party email address). - -![](https://c2.staticflickr.com/2/1644/24080560491_3aef056e8d_b.jpg) - -### Manage Multiple Portfolios ### - -JStock allows you to manage multiple portfolios. This feature is useful if you are using multiple stock brokers. You can create a separate portfolio for each broker and manage your buy/sell/dividend transactions on a per-broker basis. 
You can switch different portfolios by choosing a particular portfolio under "Portfolio" menu. The following screenshot shows a hypothetical portfolio. - -![](https://c2.staticflickr.com/2/1646/23536385433_df6c036c9a_c.jpg) - -Optionally you can enable broker fee option, so that you can enter any broker fees, stamp duty and clearing fees for each buy/sell transaction. If you are lazy, you can enable fee auto-calculation and enter fee schedules for each brokering firm from the option menu beforehand. Then JStock will automatically calculate and enter fees when you add transactions to your portfolio. - -![](https://c2.staticflickr.com/2/1653/24055085262_0e315c3691_b.jpg) - -### Screen Stocks with Built-in/Custom Indicators ### - -If you are doing any technical analysis on stocks, you may want to screen stocks based on various criteria (so-called "stock indicators"). For stock screening, JStock offers several [pre-built technical indicators][3] that capture upward/downward/reversal trends of individual stocks. The following is a list of available indicators. - -- Moving Average Convergence Divergence (MACD) -- Relative Strength Index (RSI) -- Money Flow Index (MFI) -- Commodity Channel Index (CCI) -- Doji -- Golden Cross, Death Cross -- Top Gainers/Losers - -To install any pre-built indicator, go to "Stock Indicator Editor" tab on JStock. Then click on "Install" button in the right-side panel. Choose "Install from JStock server" option, and then install any indicator(s) you want. - -![](https://c2.staticflickr.com/2/1476/23867534660_b6a9c95a06_c.jpg) - -Once one or more indicators are installed, you can scan stocks using them. Go to "Stock Indicator Scanner" tab, click on "Scan" button at the bottom, and choose any indicator. - -![](https://c2.staticflickr.com/2/1653/24137054996_e8fcd10393_c.jpg) - -Once you select the stocks to scan (e.g., NYSE, NASDAQ), JStock will perform scan, and show a list of stocks captured by the indicator. 
- -![](https://c2.staticflickr.com/2/1446/23795349889_0f1aeef608_c.jpg) - -Besides pre-built indicators, you can also define custom indicator(s) on your own with a GUI-based indicator editor. The following example screens for stocks whose current price is less than or equal to its 60-day average price. - -![](https://c2.staticflickr.com/2/1605/24080560431_3d26eac6b5_c.jpg) - -### Cloud Backup and Restore between Linux and Android JStock ### - -Another nice feature of JStock is cloud backup and restore. JStock allows you to save and restore your portfolios/watchlists via Google Drive, and this features works seamlessly across different platforms (e.g., Linux and Android). For example, if you saved your JStock portfolios to Google Drive on Android, you can restore them on Linux version of JStock. - -![](https://c2.staticflickr.com/2/1537/24163165565_bb47e04d6c_c.jpg) - -![](https://c2.staticflickr.com/2/1556/23536385333_9ed1a75d72_c.jpg) - -If you don't see your portfolios/watchlists after restoring from Google Drive, make sure that your country is correctly set under "Country" menu. - -JStock Android free version is available from [Google Play store][4]. You will need to upgrade to premium version for one-time payment if you want to use its full features (e.g., cloud backup, alerts, charts). I think the premium version is definitely worth it. - -![](https://c2.staticflickr.com/2/1687/23867534720_18b917028c_c.jpg) - -As a final note, I should mention that its creator, Yan Cheng Cheok, is pretty active in JStock development, and quite responsive in addressing any bugs. Kudos to him! - -What do you think of JStock as portfolio tracking software? 
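The custom indicator described above (current price at or below the 60-day average) boils down to a simple computation. Here is a rough Python sketch of that screen, using fabricated price data for illustration; this is not JStock's actual implementation:

```python
def screen_below_average(price_history, window=60):
    """Return tickers whose latest price is <= the average of their last `window` prices."""
    hits = []
    for ticker, prices in price_history.items():
        recent = prices[-window:]            # last `window` closing prices
        average = sum(recent) / len(recent)  # simple moving average
        if prices[-1] <= average:            # the screen condition
            hits.append(ticker)
    return hits

# Fabricated closing prices (most recent last), just to exercise the screen.
history = {
    "AAPL": [110.0] * 59 + [102.0],  # dipped below its 60-day average
    "GOOG": [700.0] * 59 + [735.0],  # trading above its 60-day average
}

print(screen_below_average(history))  # ['AAPL']
```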
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/stock-portfolio-management-software-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://jstock.org/ -[2]:http://ask.xmodulo.com/install-java-runtime-linux.html -[3]:http://jstock.org/ma_indicator.html -[4]:https://play.google.com/store/apps/details?id=org.yccheok.jstock.gui diff --git a/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md b/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md index 66548921e2..0dbd772a31 100644 --- a/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md +++ b/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md @@ -1,3 +1,4 @@ +Translating by ivo-wang Microfluidic cooling may prevent the demise of Moore's Law ============================================================ diff --git a/translated/tech/20160104 What is good stock portfolio management software on Linux.md b/translated/tech/20160104 What is good stock portfolio management software on Linux.md new file mode 100644 index 0000000000..32e1c8fb2a --- /dev/null +++ b/translated/tech/20160104 What is good stock portfolio management software on Linux.md @@ -0,0 +1,109 @@ +Translating by ivo-wang +What is good stock portfolio management software on Linux +linux上那些不错的管理股票组合投资软件 +================================================================================ +如果你在股票市场做投资,那么你可能非常清楚管理组合投资的计划有多重要。管理组合投资的目标是依据你能承受的风险,时间层面的长短和资金盈利的目标去为你量身打造的一种投资计划。鉴于这类软件的重要性,难怪从不缺乏商业性质的app和股票行情检测软件,每一个都可以兜售复杂的组合投资以及跟踪报告功能。 + +对于这些linux爱好者们,我们找到了一些 **好用的开源组合投资管理工具** 
用来在linux上管理和跟踪股票的组合投资,这里高度推荐一个基于java编写的管理软件[JStock][1]。如果你不是一个java粉,你不得不面对这样一个事实JStock需要运行在重型的JVM环境上。同时我相信许多人非常欣赏JStock,安装JRE以后它可以非常迅速的安装在各个linux平台上。没有障碍能阻止你将它安装在你的linux环境中。 + +开源就意味着免费或标准低下的时代已经过去了。鉴于JStock只是一个个人完成的产物,作为一个组合投资管理软件它最令人印象深刻的是包含了非常多实用的功能,以上所有的荣誉属于它的作者Yan Cheng Cheok!例如,JStock 支持通过监视列表去监控价格,多种组合投资,按习惯/按固定 做股票指示与相关扫描,支持27个不同的股票市场和交易平台云端备份/还原。JStock支持多平台部署(Linux, OS X, Android 和 Windows),你可以通过云端保存你的JStock记录,它可以无缝的备份还原到其他的不同平台上面。 + +现在我将向你展示如何安装以及使用过程的一些具体细节。 + +### 在Linux上安装JStock ### + +因为JStock使用Java编写,所以必须[安装 JRE][2]才能让它运行起来.小提示JStock 需要JRE1.7或更高版本。如你的JRE版本不能满足这个需求,JStock将会安装失败然后出现下面的报错。 + + Exception in thread "main" java.lang.UnsupportedClassVersionError: org/yccheok/jstock/gui/JStock : Unsupported major.minor version 51.0 + + +一旦你安装了JRE在你的linux上,从官网下载最新的发布的JStock,然后加载启动它。 + + $ wget https://github.com/yccheok/jstock/releases/download/release_1-0-7-13/jstock-1.0.7.13-bin.zip + $ unzip jstock-1.0.7.13-bin.zip + $ cd jstock + $ chmod +x jstock.sh + $ ./jstock.sh + +教程的其他部分,让我来给大家展示一些JStock的实用功能 + +### 监视监控列表股票价格的波动 ### + +使用JStock你可以创建一个或多个监视列表,它可以自动的监视股票价格的波动并给你提供相应的通知。在每一个监视列表里面你可以添加多个感兴趣的股票进去。之后添加你的警戒值在"Fall Below"和"Rise Above"的表格里,分别是在设定最低价格和最高价格。 + +![](https://c2.staticflickr.com/2/1588/23795349969_37f4b0f23c_c.jpg) + +例如你设置了AAPL股票的最低/最高价格分别是$102 和 $115.50,你将在价格低于$102或高于$115.50的任意时间在桌面得到通知。 + +你也可以设置邮件通知,之后你将收到一些价格信息的邮件通知。设置邮件通知在栏的"Options"选项。在"Alert"标签,打开"Send message to email(s)",填入你的Gmail账户。一旦完成Gmail认证步骤,JStock将开始发送邮件通知到你的Gmail账户(也可以设置其他的第三方邮件地址) +![](https://c2.staticflickr.com/2/1644/24080560491_3aef056e8d_b.jpg) + +### 管理多个组合投资 ### + +JStock能够允许你管理多个组合投资。这个功能对于股票经纪人是非常实用的。你可以为经纪人创建一个投资项去管理你的 买入/卖出/红利 用来了解每一个经纪人的业务情况。你也可以切换不同的组合项目通过选择一个特殊项目在"Portfolio"菜单里面。下面是一张截图用来展示一个意向投资 +![](https://c2.staticflickr.com/2/1646/23536385433_df6c036c9a_c.jpg) + +因为能够设置付给经纪人小费的选项,所以你能付给经纪人任意的小费,印花税以及清空每一比交易的小费。如果你非常懒,你也可以在菜单里面设置自动计算小费和给每一个经纪人固定的小费。在完成交易之后JStock将自动的计算并发送小费。 + +![](https://c2.staticflickr.com/2/1653/24055085262_0e315c3691_b.jpg) + +### 
显示固定/自选股票提示 ### + +如果你要做一些股票的技术分析,你可能需要不同股票的指数(这里叫做“平均股指”),对于股票的跟踪,JStock提供多个[预设技术指示器][3] 去获得股票上涨/下跌/逆转指数的趋势。下面的列表里面是一些可用的指示。 +- 异同平均线(MACD) +- 相对强弱指数 (RSI) +- 货币流通指数 (MFI) +- 顺势指标 (CCI) +- 十字线 +- 黄金交叉线, 死亡交叉线 +- 涨幅/跌幅 + +开启预设指示器能需要在JStock中点击"Stock Indicator Editor"标签。之后点击右侧面板中的安装按钮。选择"Install from JStock server"选项,之后安装你想要的指示器。 + +![](https://c2.staticflickr.com/2/1476/23867534660_b6a9c95a06_c.jpg) + +一旦安装了一个或多个指示器,你可以用他们来扫描股票。选择"Stock Indicator Scanner"标签,点击底部的"Scan"按钮,选择需要的指示器。 + +![](https://c2.staticflickr.com/2/1653/24137054996_e8fcd10393_c.jpg) + +当你选择完需要扫描的股票(例如e.g., NYSE, NASDAQ)以后,JStock将执行扫描,并将捕获的结果通过列表的形式展现在指示器上面。 + +![](https://c2.staticflickr.com/2/1446/23795349889_0f1aeef608_c.jpg) + +除了预设指示器以外,你也可以使用一个图形化的工具来定义自己的指示器。下面这张图例中展示的是当前价格小于或等于60天平均价格 + +![](https://c2.staticflickr.com/2/1605/24080560431_3d26eac6b5_c.jpg) + +### 云备份还原Linux 和 Android JStock ### + +另一个非常棒的功能是JStock可以支持云备份还原。Jstock也可以把你的组合投资/监视列表备份还原在 Google Drive,这个功能可以实现在不同平台(例如Linux和Android)上无缝穿梭。举个例子,如果你把Android Jstock组合投资的信息保存在Google Drive上,你可以在Linux班级本上还原他们。 + +![](https://c2.staticflickr.com/2/1537/24163165565_bb47e04d6c_c.jpg) + +![](https://c2.staticflickr.com/2/1556/23536385333_9ed1a75d72_c.jpg) + +如果你在从Google Drive还原之后不能看到你的投资信息以及监视列表,请确认你的国家信息与“Country”菜单里面设置的保持一致。 + +JStock的安卓免费版可以从[Google Play Store][4]获取到。如果你需要完整的功能(比如云备份,通知,图表等),你需要一次性支付费用升级到高级版。我想高级版肯定有它的价值所在。 + +![](https://c2.staticflickr.com/2/1687/23867534720_18b917028c_c.jpg) + +写在最后,我应该说一下它的作者,Yan Cheng Cheok,他是一个十分活跃的开发者,有bug及时反馈给他。最后多有的荣耀都属于他一个人!!! + +关于JStock这个组合投资跟踪软件你有什么想法呢? 
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/stock-portfolio-management-software-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/ivo-wang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://jstock.org/ +[2]:http://ask.xmodulo.com/install-java-runtime-linux.html +[3]:http://jstock.org/ma_indicator.html +[4]:https://play.google.com/store/apps/details?id=org.yccheok.jstock.gui From 9a1040eefb0441dc4cbc663d93ced4501f4b9ded Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 20 Jul 2016 10:08:23 +0800 Subject: [PATCH 185/471] =?UTF-8?q?20160720-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...EST TEXT EDITORS FOR LINUX COMMAND LINE.md | 92 +++++++++++++++++++ 1 file changed, 92 insertions(+) create mode 100644 sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md diff --git a/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md b/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md new file mode 100644 index 0000000000..5e6a971fd9 --- /dev/null +++ b/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md @@ -0,0 +1,92 @@ +BEST TEXT EDITORS FOR LINUX COMMAND LINE +========================================== + +![](https://itsfoss.com/wp-content/uploads/2016/07/Best-Command-Line-Text-Editors-for-Linux.jpg) + +A text editor is a must have application for any operating system. We have no dearth of [best modern editors for Linux][1]. But those are GUI based editors. + +As you know, the real power of Linux lies in the command line. And when you are working in command line, you would need a text editor that could work right inside the terminal. + +For that purpose, today we are going to make a list of best command line text editors for Linux. 
+
+### [VIM][2]
+
+If you’ve been on Linux for quite some time, you must have heard about Vim. Vim is an extensively configurable, cross-platform and highly efficient text editor.
+
+Almost every Linux distribution comes with Vim pre-installed. It is extremely popular for its wide range of features.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/vim.png)
+>Vim User Interface
+
+Vim can be quite agonizing for first-time users. I remember the first time I tried to edit a text file with Vim, I was completely puzzled. I couldn’t type a single letter in it and, the funny part is, I couldn’t even figure out how to close this thing. If you are going to use Vim, you have to be determined to climb a very steep learning curve.
+
+But after you have gone through all that, combed through some documentation and remembered its commands and shortcuts, you will find that the hassle was worth it. You can bend Vim to your will: customize its interface however you see fit, and give your workflow a boost by using various user scripts, plugins and so on. Vim supports syntax highlighting, macro recording and action history.
+
+As stated on the official site:
+
+>**Vim: The power tool for everyone!**
+
+It is completely up to you how you will use it. You can just use it for simple text editing, or you can customize it to behave as a full-fledged IDE.
+
+### [GNU EMACS][3]
+
+GNU Emacs is undoubtedly one of the most powerful text editors out there. If you have heard about both Vim and Emacs, you should know that each of these editors has a very loyal fan base, and the fans are often very serious about their text editor of choice. You can find lots of humor and stuff on the internet about it:
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/vi-emacs-768x426.png)
+>Vim vs Emacs
+
+Emacs is cross-platform and has both a command-line and a graphical user interface.
It is also very rich with various features and, most importantly, extensible.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/emacs.png)
+>Emacs User Interface
+
+Just like Vim, Emacs comes with a steep learning curve. But once you master it, you can completely leverage its power. Emacs can handle just about any type of text file. The interface is customizable to suit your workflow. It supports macro recording and shortcuts.
+
+The unique power of Emacs is that it can be transformed into something completely different from a text editor. There is a large collection of modules that can transform the application for use in completely different scenarios, like a calendar, news reader, word processor etc. You can even play games in Emacs!
+
+### [NANO][5]
+
+When it comes to simplicity, Nano is the one. Unlike Vim or Emacs, the learning curve for Nano is almost flat.
+
+If you want to simply create & edit a text file and get on with your life, look no further than Nano.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/nano.png)
+>Nano User Interface
+
+The shortcuts available in Nano are displayed at the bottom of the user interface. Nano includes only the basic functions of a text editor.
+
+It is minimal and perfectly suitable for editing system & configuration files. For those who don’t need advanced features from a command-line text editor, Nano is the perfect match.
+
+### OTHERS
+
+There is one more editor I’d like to mention:
+
+[The Nice Editor (ne)][6]: The official site says,
+
+>If you have the resources and the patience to use emacs or the right mental twist to use vi then probably ne is not for you.
+
+Basically, ne offers many of the advanced features of Vim or Emacs, including scripting and macro recording. But it comes with more intuitive controls and a less steep learning curve.
+
+### WHAT DO YOU THINK?
+ +I know that if you are a seasoned Linux user, you’ll say these are the obvious candidates for the list of best command line text editors for Linux. Therefore, I would like to ask you, if there are some other command line text editors for Linux that you want to share with us? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/command-line-text-editors-linux/?utm_source=newsletter&utm_medium=email&utm_campaign=ubuntu_forums_hacked_new_skype_for_linux_and_more_linux_stories + +作者:[Munif Tanjim][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[2]: http://www.vim.org/ +[3]: https://www.gnu.org/software/emacs/ +[4]: https://itsfoss.com/download-linux-wallpapers-cheat-sheets/ +[5]: http://www.nano-editor.org/ +[6]: http://ne.di.unimi.it/ From 5cfa37d4e01949e3152f202c6e312ce8b8026b47 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 20 Jul 2016 10:23:33 +0800 Subject: [PATCH 186/471] =?UTF-8?q?20160720-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ramming or Source Code Editors on Linux.md | 304 ++++++++++++++++++ 1 file changed, 304 insertions(+) create mode 100644 sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md diff --git a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md new file mode 100644 index 0000000000..105ee90faa --- /dev/null +++ b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md @@ -0,0 +1,304 @@ +18 Best IDEs for C/C++ Programming or Source Code Editors on Linux 
+====================================================================== + +C++, an extension of the well-known C language, is an excellent, powerful and general-purpose programming language that offers modern and generic programming features for developing large-scale applications ranging from video games and search engines to other computer software and operating systems. + +C++ is highly reliable and also enables low-level memory manipulation for more advanced programming requirements. + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-IDE-Editors.png) + +There are several text editors out there that programmers can use to write C/C++ code, but IDEs have come up to offer comprehensive facilities and components for easy and ideal programming. + +In this article, we shall look at some of the best IDEs you can find on the Linux platform for C++ or any other programming. + +### 1. Netbeans for C/C++ Development + +Netbeans is a free, open-source and popular cross-platform IDE for C/C++ and many other programming languages. It is fully extensible using community-developed plugins. + +It includes project types and templates for C/C++ and you can build applications using static and dynamic libraries. Additionally, you can reuse existing code to create your projects, and also use the drag-and-drop feature to import binary files into it to build applications from the ground up. + +Let us look at some of its features: + +- The C/C++ editor is well integrated with the multi-session [GNU GDB][1] debugger tool.
+- Support for code assistance +- C++11 support +- Create and run C/C++ tests from within +- Qt toolkit support +- Support for automatic packaging of compiled application into .tar, .zip and many more archive files +- Support for multiple compilers such as GNU, Clang/LLVM, Cygwin, Oracle Solaris Studio and MinGW +- Support for remote development +- File navigation +- Source inspection + +![](http://www.tecmint.com/wp-content/uploads/2016/06/NetBeans-IDE.png) +>Visit Homepage: + +### 2. Code::Blocks + +Code::Blocks is a free, highly extensible and configurable, cross-platform C++ IDE built to offer users the most demanded and ideal features. It delivers a consistent user interface and feel. + +And most importantly, you can extend its functionality by using plugins developed by users; some of the plugins are part of the Code::Blocks release, while many are not, written by individual users who are not part of the Code::Blocks development team. + +Its features are categorized into compiler, debugger and interface features and these include: + +- Multiple compiler support including GCC, Clang, Borland C++ 5.5, Digital Mars plus many more +- Very fast, no need for makefiles +- Multi-target projects +- Workspace that supports combining of projects +- Interfaces GNU GDB +- Support for full breakpoints including code breakpoints, data breakpoints, breakpoint conditions plus many more +- Display of local function symbols and arguments +- Custom memory dump and syntax highlighting +- Customizable and extensible interface plus many more other features including those added through user-built plugins + +![](http://www.tecmint.com/wp-content/uploads/2016/06/CodeBlocks-IDE-for-Linux.png) +>Visit Homepage: + +### 3. Eclipse CDT (C/C++ Development Tooling) + +Eclipse is a well-known open-source, cross-platform IDE in the programming arena. It offers users a great GUI with support for drag-and-drop functionality for easy arrangement of interface elements.
+ +The Eclipse CDT is a project based on the primary Eclipse platform and it provides a fully functional C/C++ IDE with the following features: + +- Supports project creation +- Managed build for various toolchains +- Standard make build +- Source navigation +- Several knowledge tools such as call graph, type hierarchy, in-built browser, macro definition browser +- Code editor with support for syntax highlighting +- Support for folding and hyperlink navigation +- Source code refactoring plus code generation +- Tools for visual debugging such as memory, registers +- Disassembly viewers and many more + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Eclipse-IDE-for-Linux.png) +>Visit Homepage: + +### 4. CodeLite IDE + +CodeLite is also a free, open-source, cross-platform IDE designed and built specifically for C/C++, JavaScript (Node.js) and PHP programming. + +Some of its main features include: + +- Code completion, and it offers two code completion engines +- Supports several compilers including GCC, clang/VC++ +- Displays errors as code glossary +- Clickable errors via build tab +- Support for the LLDB next-generation debugger +- GDB support +- Support for refactoring +- Code navigation +- Remote development using built-in SFTP +- Source control plugins +- RAD (Rapid Application Development) tool for developing wxWidgets-based apps plus many more features + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Codelite-IDE.png) +>Visit Homepage: + +### 6. Bluefish Editor + +Bluefish is more than just a normal editor; it is a lightweight, fast editor that offers programmers IDE-like features for developing websites, writing scripts and software code. It is multi-platform, runs on Linux, Mac OSX, FreeBSD, OpenBSD, Solaris and Windows, and also supports many programming languages including C/C++.
+ +It is feature-rich, including the ones listed below: + +- Multiple document interface +- Supports recursive opening of files based on filename patterns or content patterns +- Offers a very powerful search and replace functionality +- Snippet sidebar +- Support for integrating external filters of your own, pipe documents using commands such as awk, sed, sort plus custom built scripts +- Supports full screen editing +- Site uploader and downloader +- Multiple encoding support and many more other features + +![](http://www.tecmint.com/wp-content/uploads/2016/06/BlueFish-IDE-Editor-for-Linux.png) +>Visit Homepage: + +### 7. Brackets Code Editor + +Brackets is a modern and open-source text editor designed specifically for web design and development. It is highly extensible through plugins; C/C++ programmers can use it by installing the C/C++/Objective-C pack extension, which is designed to enhance C/C++ code writing and to offer IDE-like features. + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Brackets-Code-Editor-for-Linux.png) +>Visit Homepage: + +### 8. Atom Code Editor + +Atom is also a modern, open-source, multi-platform text editor that can run on Linux, Windows or Mac OS X. It is also hackable down to its base, therefore users can customize it to meet their code writing demands. + +It is fully featured and some of its main features include: + +- Built-in package manager +- Smart auto-completion +- In-built file browser +- Find and replace functionality and many more + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Atom-Code-Editor-for-Linux.png) +>Visit Homepage: https://atom.io/ +>Installation Instructions: + +### 9. Sublime Text Editor + +Sublime Text is a well-refined, multi-platform text editor designed and developed for code, markup and prose. You can use it for writing C/C++ code, and it offers a great user interface.
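The external-filter support listed under Bluefish above is plain Unix plumbing: the editor simply pipes the document through commands such as awk, sed or sort. The same effect can be previewed in a shell with made-up sample text:

```shell
# Sort three sample lines and turn them into a bulleted list,
# just as an editor's "pipe through external command" filter would
printf 'cherry\napple\nbanana\n' | sort | sed 's/^/- /'
# prints "- apple", "- banana", "- cherry"
```

Any pipeline built this way can then be registered as a custom filter in editors that expose the feature.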
+ +Its feature list comprises: + +- Multiple selections +- Command palette +- Goto Anything functionality +- Distraction-free mode +- Split editing +- Instant project switching support +- Highly customizable +- Plugin API support based on Python plus other small features + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Sublime-Code-Editor-for-Linux.png) +>Visit Homepage: +>Installation Instructions: + +### 10. JetBrains CLion + +CLion is a non-free, powerful and cross-platform IDE for C/C++ programming. It is a fully integrated C/C++ development environment for programmers, providing CMake as a project model, an embedded terminal window and a keyboard-oriented approach to code writing. + +It also offers a smart and modern code editor plus many more exciting features to enable an ideal code writing environment and these features include: + +- Supports several languages other than C/C++ +- Easy navigation to symbol declarations or context usage +- Code generation and refactoring +- Editor customization +- On-the-fly code analysis +- An integrated code debugger +- Supports Git, Subversion, Mercurial, CVS, Perforce (via plugin) and TFS +- Seamlessly integrates with Google test frameworks +- Support for Vim text editor via Vim-emulation plugin + +![](http://www.tecmint.com/wp-content/uploads/2016/06/JetBains-CLion-IDE.png) +>Visit Homepage: + +### 11. Microsoft’s Visual Studio Code Editor + +Visual Studio is a rich, fully integrated, cross-platform development environment that runs on Linux, Windows and Mac OS X. It was recently made open-source to Linux users and it has redefined code editing, offering users every tool needed for building every app for multiple platforms including Windows, Android, iOS and the web. + +It is feature-packed, with features categorized under application development, application lifecycle management, and extend and integrate features. You can read a comprehensive features list from the Visual Studio website.
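The CMake project model that CLion uses, mentioned above, revolves around a CMakeLists.txt file; a minimal sketch, with a made-up project name and source file, can be generated from the shell:

```shell
# Write a minimal CMakeLists.txt of the kind a CMake-based IDE manages
# (the project name "demo" and source file "main.cpp" are hypothetical)
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.0)
project(demo)
add_executable(demo main.cpp)
EOF
cat CMakeLists.txt
```

Opening a directory containing such a file is typically all a CMake-aware IDE needs to import the project.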
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Visual-Studio-Code-Editor.png) +>Visit Homepage: + +### 12. KDevelop + +KDevelop is just another free, open-source and cross-platform IDE that works on Linux, Solaris, FreeBSD, Windows, Mac OSX and other Unix-like operating systems. It is based on the KDevPlatform, KDE and Qt libraries. KDevelop is highly extensible through plugins and feature rich with the following notable features: + +- Support for Clang-based C/C++ plugin +- KDE 4 config migration support +- Revival of Oketa plugin support +- Support for different line editings in various views and plugins +- Support for Grep view and Uses widget to save vertical space plus many more + +![](http://www.tecmint.com/wp-content/uploads/2016/06/KDevelop-IDE-Editor.png) +>Visit Homepage: + +### 13. Geany IDE + +Geany is a free, fast, lightweight and cross-platform IDE developed to work with few dependencies and also operate independently from popular Linux desktops such as GNOME and KDE. It requires the GTK2 libraries for functionality. + +Its features list consists of the following: + +- Support for syntax highlighting +- Code folding +- Call tips +- Symbol name auto completion +- Symbol lists +- Code navigation +- A simple project management tool +- In-built system to compile and run a user’s code +- Extensible through plugins + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Geany-IDE-for-Linux.png) +>Visit Homepage: + +### 14. Anjuta DevStudio + +Anjuta DevStudio is a simple yet powerful GNOME software development studio that supports several programming languages including C/C++. + +It offers advanced programming tools such as project management, GUI designer, interactive debugger, application wizard, source editor, version control plus so many other facilities.
In addition to the above features, Anjuta DevStudio also has some other great IDE features and these include: + +- Simple user interface +- Extensible with plugins +- Integrated Glade for WYSIWYG UI development +- Project wizards and templates +- Integrated GDB debugger +- In-built file manager +- Integrated DevHelp for context-sensitive programming help +- Source code editor with features such as syntax highlighting, smart indentation, auto-indentation, code folding/hiding, text zooming plus many more + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Anjuta-DevStudio-for-Linux.png) +>Visit Homepage: + +### 15. The GNAT Programming Studio + +The GNAT Programming Studio is a free, easy-to-use IDE designed and developed to unify the interaction between a developer and his/her code and software. + +It is built for ideal programming by facilitating source navigation while highlighting the important sections and ideas of a program. It is also designed to offer a high level of programming comfort, enabling users to develop comprehensive systems from the ground up. + +It is feature rich with the following features: + +- Intuitive user interface +- Developer friendly +- Multi-lingual and multi-platform +- Flexible MDI (multiple document interface) +- Highly customizable +- Fully extensible with preferred tools + +![](http://www.tecmint.com/wp-content/uploads/2016/06/GNAT-Programming-Studio.jpg) +>Visit Homepage: + +### 16. Qt Creator + +It is a non-free, cross-platform IDE designed for the creation of connected devices, UIs and applications. Qt Creator enables users to do more of creation than actual coding of applications. + +It can be used to create mobile and desktop applications, and also connected embedded devices.
+ +Some of its features include: + +- Sophisticated code editor +- Support for version control +- Project and build management tools +- Multi-screen and multi-platform support for easy switching between build targets plus many more + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Qt-Creator.png) +>Visit Homepage: + +### 17. Emacs Editor + +Emacs is a free, powerful, highly extensible and customizable, cross-platform text editor you can use on Linux, Solaris, FreeBSD, NetBSD, OpenBSD, Windows and Mac OS X. + +The core of Emacs is also an interpreter for Emacs Lisp, which is a dialect of the Lisp programming language. As of this writing, the latest release of GNU Emacs is version 24.5 and the fundamental and notable features of Emacs include: + +- Content-aware editing modes +- Full Unicode support +- Highly customizable using GUI or Emacs Lisp code +- A packaging system for downloading and installing extensions +- Ecosystem of functionalities beyond normal text editing including project planner, mail, calendar and news reader plus many more +- A complete built-in documentation plus user tutorials and many more + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Emacs-Editor.png) +>Visit Homepage: https://www.gnu.org/software/emacs/ + +### 18. VI/VIM Editor + +Vim, an improved version of the VI editor, is a free, powerful, popular and highly configurable text editor. It is built to enable efficient text editing, and offers exciting editor features for Unix/Linux users; therefore, it is also a good option for writing and editing C/C++ code. + +Generally, IDEs offer more programming comfort than traditional text editors, therefore it is always a good idea to use them. They come with exciting features and offer a comprehensive development environment; sometimes programmers are caught up between choosing the best IDE to use for C/C++ programming.
+ +There are many other IDEs out there that you can download from the Internet, but trying out several of them can help you find the one that suits your needs. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/best-linux-ide-editors-source-code-editors/ + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/debug-source-code-in-linux-using-gdb/ From e791c68d54912b69de11463eb79f0a402077928a Mon Sep 17 00:00:00 2001 From: ivo_wang Date: Wed, 20 Jul 2016 13:01:03 +0800 Subject: [PATCH 187/471] Update 20160104 What is good stock portfolio management software on Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 更新译者id --- ...What is good stock portfolio management software on Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20160104 What is good stock portfolio management software on Linux.md b/translated/tech/20160104 What is good stock portfolio management software on Linux.md index 32e1c8fb2a..7e0c8a05fd 100644 --- a/translated/tech/20160104 What is good stock portfolio management software on Linux.md +++ b/translated/tech/20160104 What is good stock portfolio management software on Linux.md @@ -97,7 +97,7 @@ JStock的安卓免费版可以从[Google Play Store][4]获取到。如果你需 via: http://xmodulo.com/stock-portfolio-management-software-linux.html 作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/ivo-wang) +译者:[ivo-wang](https://github.com/ivo-wang) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 50bbf61ef9777a82374d99a043fa346f36243fad Mon Sep 17 00:00:00 2001 From: chenzhijun <522858454@qq.com> Date: Wed, 20 Jul 2016 21:33:02 +0800 Subject:
[PATCH 188/471] Translating by chenzhijun --- .../tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md b/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md index 5e6a971fd9..80f4b1439c 100644 --- a/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md +++ b/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md @@ -1,3 +1,6 @@ +Translating by chenzhijun + + BEST TEXT EDITORS FOR LINUX COMMAND LINE ========================================== From 35f4c778177360089743cfbf035d75157cdb6cd0 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 20 Jul 2016 23:07:08 +0800 Subject: [PATCH 189/471] PUB:Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions @FSSlc --- ... Strings Using Pattern Specific Actions.md | 29 ++++++++++--------- 1 file changed, 16 insertions(+), 13 deletions(-) rename {translated/tech/awk => published}/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md (72%) diff --git a/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md b/published/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md similarity index 72% rename from translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md rename to published/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md index ef91e93575..53e9aabd68 100644 --- a/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md +++ b/published/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md @@ -1,11 +1,11 @@ -如何使用 Awk 来筛选文本或字符串 +awk 系列:如何使用 awk 按模式筛选文本或字符串 ========================================================================= 
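在进入正文之前,可以先用一段示例数据在 shell 中快速体验这种“按模式筛选”的效果(下面的数据纯属示例):

```shell
# 构造两行示例数据,用 awk 的模式匹配进行筛选:
# 单价高于 $2 的行在行尾追加 "*" 标记,其余行原样输出
printf 'mangoes $3.45\napples $0.50\n' | \
  awk '/\$[2-9]\.[0-9][0-9]/ { print $0 " *" } /\$[0-1]\.[0-9][0-9]/ { print }'
# 输出:mangoes $3.45 * 以及 apples $0.50
```

文中的例子将用一个名为 food_prices.list 的购物清单文件演示同样的思路。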
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Filter-Text-or-Strings-Using-Pattern.png) -作为 Awk 命令系列的第三部分,这次我们将看一看如何基于用户定义的特定模式来筛选文本或字符串。 +作为 awk 命令系列的第三部分,这次我们将看一看如何基于用户定义的特定模式来筛选文本或字符串。 -在筛选文本时,有时你可能想根据某个给定的条件或使用一个特定的可被匹配的模式,去标记某个文件或数行字符串中的某几行。使用 Awk 来完成这个任务是非常容易的,这也正是 Awk 中可能对你有所帮助的几个特色之一。 +在筛选文本时,有时你可能想根据某个给定的条件或使用一个可被匹配的特定模式,去标记某个文件或数行字符串中的某几行。使用 awk 来完成这个任务是非常容易的,这也正是 awk 中可能对你有所帮助的几个功能之一。 让我们看一看下面这个例子,比方说你有一个写有你想要购买的食物的购物清单,其名称为 food_prices.list,它所含有的食物名称及相应的价格如下所示: @@ -28,9 +28,10 @@ $ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0- ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Text-Using-Awk.gif) ->打印出单价大于 $2 的项目 -从上面的输出你可以看到在含有 芒果(mangoes) 和 菠萝(pineapples) 的那行末尾都已经有了一个 `(*)` 标记。假如你检查它们的单价,你可以看到它们的单价的确超过了 $2 。 +*打印出单价大于 $2 的项目* + +从上面的输出你可以看到在含有 芒果(mangoes) 和菠萝(pineapples)的那行末尾都已经有了一个 `(*)` 标记。假如你检查它们的单价,你可以看到它们的单价的确超过了 $2 。 在这个例子中,我们已经使用了两个模式: @@ -39,33 +40,35 @@ $ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0- 上面的命令具体做了什么呢?这个文件有四个字段,当模式一匹配到含有食物单价大于 $2 的行时,它便会输出所有的四个字段并在该行末尾加上一个 `(*)` 符号来作为标记。 -第二个模式只是简单地输出其他含有食物单价小于 $2 的行,因为它们出现在输入文件 food_prices.list 中。 +第二个模式只是简单地输出其他含有食物单价小于 $2 的行,按照它们出现在输入文件 food_prices.list 中的样子。 这样你就可以使用模式来筛选出那些价格超过 $2 的食物项目,尽管上面的输出还有些问题,带有 `(*)` 符号的那些行并没有像其他行那样被格式化输出,这使得输出显得不够清晰。 -我们在 Awk 系列的第二部分中也看到了同样的问题,但我们可以使用下面的两种方式来解决: +我们在 awk 系列的第二部分中也看到了同样的问题,但我们可以使用下面的两种方式来解决: -1. 可以像下面这样使用 printf 命令,但这样使用又长又无聊: +1、可以像下面这样使用 printf 命令,但这样使用又长又无聊: ``` $ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Printf.gif) ->使用 Awk 和 Printf 来筛选和输出项目 -2. 
使用 `$0` 字段。Awk 使用变量 **0** 来存储整个输入行。对于上面的问题,这种方式非常方便,并且它还简单、快速: +*使用 Awk 和 Printf 来筛选和输出项目* + +2、 使用 `$0` 字段。Awk 使用变量 **0** 来存储整个输入行。对于上面的问题,这种方式非常方便,并且它还简单、快速: ``` $ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list ``` ![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Variable.gif) ->使用 Awk 和变量来筛选和输出项目 + +*使用 Awk 和变量来筛选和输出项目* ### 结论 -这就是全部内容了,使用 Awk 命令你便可以通过几种简单的方法去利用模式匹配来筛选文本,帮助你在一个文件中对文本或字符串的某些行做标记。 +这就是全部内容了,使用 awk 命令你便可以通过几种简单的方法去利用模式匹配来筛选文本,帮助你在一个文件中对文本或字符串的某些行做标记。 希望这篇文章对你有所帮助。记得阅读这个系列的下一部分,我们将关注在 awk 工具中使用比较运算符。 @@ -75,7 +78,7 @@ via: http://www.tecmint.com/awk-filter-text-or-string-using-patterns/ 作者:[Aaron Kili][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 38f868b22f6cf03a66e52259e385aef17d9c34ca Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 20 Jul 2016 23:08:01 +0800 Subject: [PATCH 190/471] =?UTF-8?q?=E7=A7=BB=E5=8A=A8=E5=88=B0=E7=9B=AE?= =?UTF-8?q?=E5=BD=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...k and Regular Expressions to Filter Text or String in Files.md | 0 ...Part 2 - How to Use Awk to Print Fields and Columns in File.md | 0 ...wk to Filter Text or Strings Using Pattern Specific Actions.md | 0 3 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => awk}/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md (100%) rename published/{ => awk}/Part 2 - How to Use Awk to Print Fields and Columns in File.md (100%) rename published/{ => awk}/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md (100%) diff --git a/published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md b/published/awk/Part 1 - How to Use Awk and Regular 
Expressions to Filter Text or String in Files.md similarity index 100% rename from published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md rename to published/awk/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md diff --git a/published/Part 2 - How to Use Awk to Print Fields and Columns in File.md b/published/awk/Part 2 - How to Use Awk to Print Fields and Columns in File.md similarity index 100% rename from published/Part 2 - How to Use Awk to Print Fields and Columns in File.md rename to published/awk/Part 2 - How to Use Awk to Print Fields and Columns in File.md diff --git a/published/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md b/published/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md similarity index 100% rename from published/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md rename to published/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md From 83bde07deb1e230de7f82ef82992fe622231ebd2 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 20 Jul 2016 23:32:20 +0800 Subject: [PATCH 191/471] PUB:Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md @martin2011qi --- ...gcreate, lvcreate and lvextend Commands.md | 37 +++++++++++-------- 1 file changed, 22 insertions(+), 15 deletions(-) rename {translated/tech => published}/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md (88%) diff --git a/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md b/published/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md similarity index 88% rename from translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md rename to 
published/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md index 84b9db7968..49ac0e34f4 100644 --- a/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md +++ b/published/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md @@ -1,14 +1,15 @@ LFCS 系列第十一讲:如何使用命令 vgcreate、lvcreate 和 lvextend 管理和创建 LVM -============================================================================================ +======================================================================================== 由于 LFCS 考试中的一些改变已在 2016 年 2 月 2 日生效,我们添加了一些必要的专题到 [LFCS 系列][1]。我们也非常推荐备考的同学,同时阅读 [LFCE 系列][2]。 ![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png) ->LFCS:管理 LVM 和创建 LVM 分区 -在安装 Linux 系统的时候要做的最重要的决定之一便是给系统文件,home 目录等分配空间。在这个地方犯了错,再要增长空间不足的分区,那样既麻烦又有风险。 +*LFCS:管理 LVM 和创建 LVM 分区* -**逻辑卷管理** (即 **LVM**)相较于传统的分区管理有许多优点,已经成为大多数(如果不能说全部的话) Linux 发行版安装时的默认选择。LVM 最大的优点应该是能方便的按照你的意愿调整(减小或增大)逻辑分区的大小。 +在安装 Linux 系统的时候要做的最重要的决定之一便是给系统文件、home 目录等分配空间。在这个地方犯了错,再要扩大空间不足的分区,那样既麻烦又有风险。 + +**逻辑卷管理** (**LVM**)相较于传统的分区管理有许多优点,已经成为大多数(如果不能说全部的话) Linux 发行版安装时的默认选择。LVM 最大的优点应该是能方便的按照你的意愿调整(减小或增大)逻辑分区的大小。 LVM 的组成结构: @@ -16,7 +17,7 @@ LVM 的组成结构: * 一个用一个或多个物理卷创建出的卷组(**VG**)。可以把一个卷组想象成一个单独的存储单元。 * 在一个卷组上可以创建多个逻辑卷。每个逻辑卷相当于一个传统意义上的分区 —— 优点是它的大小可以根据需求重新调整大小,正如之前提到的那样。 -本文,我们将使用三块 **8 GB** 的磁盘(**/dev/sdb**、**/dev/sdc** 和 **/dev/sdd**)分别创建三个物理卷。你既可以直接在设备上创建 PV,也可以先分区在创建。 +本文,我们将使用三块 **8 GB** 的磁盘(**/dev/sdb**、**/dev/sdc** 和 **/dev/sdd**)分别创建三个物理卷。你既可以直接在整个设备上创建 PV,也可以先分区在创建。 在这里我们选择第一种方式,如果你决定使用第二种(可以参考本系列[第四讲:创建分区和文件系统][3])确保每个分区的类型都是 `8e`。 @@ -59,7 +60,8 @@ LVM 的组成结构: 由于 `vg00` 是由两个 **8 GB** 的磁盘组成的,所以它将会显示成一个 **16 GB** 的硬盘: ![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png) ->LVM 卷组列表 + +*LVM 卷组列表* 当谈到创建逻辑卷,空间的分配必须考虑到当下和以后的需求。根据每个逻辑卷的用途来命名是一个好的做法。 @@ -78,7 +80,7 @@ LVM 的组成结构: # lvs ``` -或是详细信息,通过: 
+或是查看详细信息,通过: ``` # lvdisplay @@ -91,9 +93,10 @@ LVM 的组成结构: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png) ->逻辑卷列表 -如上图,我们看到 LV 已经被创建成存储设备了(参考 LV Path line)。在使用每个逻辑卷之前,需要先在上面创建文件系统。 +*逻辑卷列表* + +如上图,我们看到 LV 已经被创建成存储设备了(参考 LV Path 那一行)。在使用每个逻辑卷之前,需要先在上面创建文件系统。 这里我们拿 ext4 来做举例,因为对于每个 LV 的大小, ext4 既可以增大又可以减小(相对的 xfs 就只允许增大): @@ -116,7 +119,8 @@ LVM 的组成结构: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png) ->减小逻辑卷和卷组 + +*减小逻辑卷和卷组* 在调整逻辑卷的时候,其中包含的减号 `(-)` 或加号 `(+)` 是十分重要的。否则 LV 将会被设置成指定的大小,而非调整指定大小。 @@ -135,7 +139,8 @@ LVM 的组成结构: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png) ->查看卷组磁盘大小 + +*查看卷组磁盘大小* 现在,你可以使用新加的空间,按照你的需求调整现有 LV 的大小,或者创建一个新的 LV。 @@ -151,7 +156,8 @@ LVM 的组成结构: ``` ![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png) ->寻找逻辑卷的 UUID + +*寻找逻辑卷的 UUID* 为每个 LV 创建挂载点: @@ -175,7 +181,8 @@ UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0 ``` ![](http://www.tecmint.com/wp-content/uploads/2016/03/Mount-Logical-Volumes-on-Linux-1.png) ->挂载逻辑卷 + +*挂载逻辑卷* 在涉及到 LV 的实际使用时,你还需要按照曾在本系列[第八讲:管理用户和用户组][4]中讲解的那样,为其设置合适的 `ugo+rwx`。 @@ -193,7 +200,7 @@ via: http://www.tecmint.com/manage-and-create-lvm-parition-using-vgcreate-lvcrea 作者:[Gabriel Cánepa][a] 译者:[martin2011qi](https://github.com/martin2011qi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -202,5 +209,5 @@ via: http://www.tecmint.com/manage-and-create-lvm-parition-using-vgcreate-lvcrea [2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ [3]: https://linux.cn/article-7187-1.html [4]: https://linux.cn/article-7418-1.html -[5]: http://www.tecmint.com/create-lvm-storage-in-linux/ +[5]: https://linux.cn/article-3965-1.html [6]: https://linux.cn/article-7229-1.html From 
8174e5f671921e3ae2bb131c39a2b25eb613f5b9 Mon Sep 17 00:00:00 2001 From: "zenghow.fw" Date: Thu, 21 Jul 2016 01:02:58 +0800 Subject: [PATCH 192/471] =?UTF-8?q?=E7=94=B3=E9=A2=86=E4=BB=BB=E5=8A=A1?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...IDEs for C+C++ Programming or Source Code Editors on Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md index 105ee90faa..01fb7c5c22 100644 --- a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md +++ b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md @@ -1,3 +1,4 @@ +translating by fw8899 18 Best IDEs for C/C++ Programming or Source Code Editors on Linux ====================================================================== From 20e234e4b4dc946915d963183a1c559338145a59 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 21 Jul 2016 01:44:59 +0800 Subject: [PATCH 193/471] PUB:20160620 Detecting cats in images with OpenCV MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @MikeCoder 翻译中,如无特殊原因,应当尽量忠实原文 --- ...20 Detecting cats in images with OpenCV.md | 227 +++++++++++++++++ ...20 Detecting cats in images with OpenCV.md | 228 ------------------ 2 files changed, 227 insertions(+), 228 deletions(-) create mode 100644 published/20160620 Detecting cats in images with OpenCV.md delete mode 100644 translated/tech/20160620 Detecting cats in images with OpenCV.md diff --git a/published/20160620 Detecting cats in images with OpenCV.md b/published/20160620 Detecting cats in images with OpenCV.md new file mode 100644 index 0000000000..55b80cc51e --- /dev/null +++ b/published/20160620 Detecting cats in images with OpenCV.md @@ -0,0 +1,227 @@ +使用 OpenCV 识别图片中的猫咪 
+======================================= + +![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg) + +你知道 OpenCV 可以识别在图片中小猫的脸吗?而且是拿来就能用,不需要其它的库之类的。 + +之前我也不知道。 + +但是在 [Kendrick Tan 曝出这个功能][1]后,我需要亲自体验一下……去看看到 OpenCV 是如何在我没有察觉到的情况下,将这一个功能添加进了他的软件库(就像一只悄悄溜进空盒子的猫咪一样,等待别人发觉)。 + +下面,我将会展示如何使用 OpenCV 的猫咪检测器在图片中识别小猫的脸。同样的,该技术也可以用在视频流中。 + +### 使用 OpenCV 在图片中检测猫咪 + +如果你查找过 [OpenCV 的代码仓库][3],尤其是在 [haarcascades 目录][4]里(OpenCV 在这里保存处理它预先训练好的 Haar 分类器,以检测各种物体、身体部位等), 你会看到这两个文件: + +- haarcascade_frontalcatface.xml +- haarcascade\_frontalcatface\_extended.xml + +这两个 Haar Cascade 文件都将被用来在图片中检测小猫的脸。实际上,我使用了相同的 cascades 分类器来生成这篇博文顶端的图片。 + +在做了一些调查工作之后,我发现这些 cascades 分类器是由鼎鼎大名的 [Joseph Howse][5]训练和贡献给 OpenCV 仓库的,他写了很多很棒的教程和书籍,在计算机视觉领域有着很高的声望。 + +下面,我将会展示给你如何使用 Howse 的 Haar cascades 分类器来检测图片中的小猫。 + +### 猫咪检测代码 + +让我们开始使用 OpenCV 来检测图片中的猫咪。新建一个叫 cat_detector.py 的文件,并且输入如下的代码: + +``` +# import the necessary packages +import argparse +import cv2 + +# construct the argument parse and parse the arguments +ap = argparse.ArgumentParser() +ap.add_argument("-i", "--image", required=True, + help="path to the input image") +ap.add_argument("-c", "--cascade", + default="haarcascade_frontalcatface.xml", + help="path to cat detector haar cascade") +args = vars(ap.parse_args()) +``` + +第 2 和第 3 行主要是导入了必要的 python 包。6-12 行用于解析我们的命令行参数。我们仅要求一个必需的参数 `--image` ,它是我们要使用 OpenCV 检测猫咪的图片。 + +我们也可以(可选的)通过 `--cascade` 参数指定我们的 Haar cascade 分类器的路径。默认使用 `haarcascades_frontalcatface.xml`,假定这个文件和你的 `cat_detector.py` 在同一目录下。 + +注意:我已经打包了猫咪的检测代码,还有在这个教程里的样本图片。你可以在博文原文的 “下载” 部分下载到。如果你是刚刚接触 Python+OpenCV(或者 Haar cascade),我建议你下载这个 zip 压缩包,这个会方便你跟着教程学习。 + +接下来,就是检测猫的时刻了: + +``` +# load the input image and convert it to grayscale +image = cv2.imread(args["image"]) +gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) + +# load the cat detector Haar cascade, then detect cat faces +# in the input image +detector = cv2.CascadeClassifier(args["cascade"]) +rects = detector.detectMultiScale(gray, 
scaleFactor=1.3,
+	minNeighbors=10, minSize=(75, 75))
+```
+
+在 15、16 行,我们从硬盘上读取了图片,并且进行灰度化(这是一个在将图片传给 Haar cascade 分类器之前的常用的图片预处理步骤,尽管不是必须的)。
+
+20 行,从硬盘加载 Haar cascade 分类器,即猫咪检测器,并且实例化 `cv2.CascadeClassifier` 对象。
+
+在 21、22 行通过调用 `detector` 的 `detectMultiScale` 方法使用 OpenCV 完成猫脸检测。我们给 `detectMultiScale` 方法传递了四个参数,包括:
+
+1. 图片 `gray`,我们要在该图片中检测猫脸。
+2. 检测猫脸时的[图片金字塔][6]的检测粒度 `scaleFactor`。更大的粒度将会加快检测的速度,但是会对检测准确性(true-positive)产生影响。相反的,一个更小的粒度会增加检测的时间,但是会提高准确性(true-positive)。但是,细粒度也会增加误报率(false-positive)。你可以看这篇博文的“Haar cascades 注意事项”部分来获得更多的信息。
+3. `minNeighbors` 参数控制了检定框的最少数量,即在给定区域内被判断为猫脸的最少数量。这个参数可以很好的排除误报(false-positive)结果。
+4. 最后,`minSize` 参数不言自明。这个值描述每个检定框的最小宽高尺寸(单位是像素),这个例子中就是 75\*75。
+
+`detectMultiScale` 函数会返回 `rects`,这是一个 4 元组列表。这些元组包含了每个检测到的猫脸的 (x,y) 坐标值,还有宽度、高度。
+
+最后,让我们在图片上画下这些矩形来标识猫脸:
+
+```
+# loop over the cat faces and draw a rectangle surrounding each
+for (i, (x, y, w, h)) in enumerate(rects):
+	cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
+	cv2.putText(image, "Cat #{}".format(i + 1), (x, y - 10),
+		cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 255), 2)
+
+# show the detected cat faces
+cv2.imshow("Cat Faces", image)
+cv2.waitKey(0)
+```
+
+有了这些检定框(即 `rects`)的数据,我们在 25 行依次遍历它们。
+
+在 26 行,我们在每张猫脸的周围画上一个矩形。27、28 行则在猫脸上标出序号,即这是图片中的第几只猫。
+
+最后,31、32 行在屏幕上展示了输出的图片。
+
+### 猫咪检测结果
+
+为了测试我们的 OpenCV 猫咪检测器,可以在原文的最后,下载教程的源码。
+
+然后,在你解压缩之后,你将会得到如下的三个文件/目录:
+
+1. cat_detector.py:我们的主程序
+2. haarcascade_frontalcatface.xml:猫咪检测器 Haar cascade
+3. images:我们将会使用的检测图片目录。
+
+到这一步,执行以下的命令:
+
+```
+$ python cat_detector.py --image images/cat_01.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_01.jpg)
+
+*图 1. 在图片中检测猫脸,甚至是猫咪部分被遮挡了。*
+
+注意,我们已经可以检测猫脸了,即使它的其余部分是被遮挡的。
+
+试下另外的一张图片:
+
+```
+python cat_detector.py --image images/cat_02.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_02.jpg)
+
+*图 2. 
使用 OpenCV 检测猫脸的第二个例子,这次猫脸稍有不同。*
+
+这次的猫脸和第一次的明显不同,因为它正在发出“喵呜”叫声的当中。这种情况下,我们依旧能检测到正确的猫脸。
+
+下面这张图片的检测结果也是正确的:
+
+```
+$ python cat_detector.py --image images/cat_03.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_03.jpg)
+
+*图 3. 使用 OpenCV 和 python 检测猫脸*
+
+我们最后的一个样例就是在一张图中检测多张猫脸:
+
+```
+$ python cat_detector.py --image images/cat_04.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
+
+*图 4. 在同一张图片中使用 OpenCV 检测多只猫*
+
+注意,Haar cascade 返回的检定框不一定是以你预期的顺序。这种情况下,中间的那只猫会被标记成第三只。你可以通过判断它们的 (x, y) 坐标来自己排序这些检定框。
+
+#### 关于精度的说明
+
+在这个 xml 文件中的注释非常重要,Joseph Howse 提到了这个猫脸检测器有可能会将人脸识别成猫脸。
+
+这种情况下,他推荐使用两种检测器(人脸 & 猫脸),然后将出现在人脸识别结果中的结果剔除掉。
+
+#### Haar cascades 注意事项
+
+这个方法首先出现在 Paul Viola 和 Michael Jones 2001 年发表的 [Rapid Object Detection using a Boosted Cascade of Simple Features][7] 论文中。现在它已经成为了计算机视觉领域引用最多的论文之一。
+
+这个算法能够识别图片中的对象,无论它们的位置和比例如何。而且最令人感兴趣的或许是它能在现有的硬件条件下实现实时检测。
+
+在他们的论文中,Viola 和 Jones 关注的是训练人脸检测器;但是,这个框架也能用来检测各类事物,如汽车、香蕉、路标等等。
+
+#### 问题是?
+
+Haar cascades 最大的问题就是如何确定 `detectMultiScale` 方法的正确参数,特别是 `scaleFactor` 和 `minNeighbors` 参数。你很容易陷入一张一张图片调参数的坑,这个就是该对象检测器很难被实用化的原因。
+
+这个 `scaleFactor` 变量控制了用来检测对象的[图像金字塔][8]中各层的缩放比例。如果 `scaleFactor` 参数过大,你就只需要检测图像金字塔中较少的层,这可能会导致你漏掉一些尺寸恰好落在金字塔相邻层之间的对象。
+
+换句话说,如果 `scaleFactor` 参数过低,你会检测过多的金字塔图层。这虽然可能帮助你检测到更多的对象,但是它会造成计算速度的降低,还会**明显**提高误报率。Haar cascades 分类器就是这样。
+
+为了避免这个,我们通常使用 [Histogram of Oriented Gradients + 线性 SVM 检测][9] 替代。
+
+上述的 HOG + 线性 SVM 框架的参数更容易调优,而且误报率也更低,但是唯一不好的地方是无法实时运算。
+
+### 对对象识别感兴趣?并且希望了解更多?
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/custom_object_detector_example.jpg)
+
+*图 5. 
在 PyImageSearch Gurus 课程中学习如何构建自定义的对象识别器。*
+
+如果你对学习如何训练自己的自定义对象识别器感兴趣,请务必要去了解下 PyImageSearch Gurus 课程。
+
+在这个课程中,我提供了 15 节课,覆盖了超过 168 页的教程,来教你如何从 0 开始构建自定义的对象识别器。你会掌握如何应用 HOG + 线性 SVM 框架来构建自己的对象识别器来识别路标、面孔、汽车(以及几乎其它任何东西)。
+
+要学习 PyImageSearch Gurus 课程(有 10 节示例免费课程),点此:https://www.pyimagesearch.com/pyimagesearch-gurus/?src=post-cat-detection
+
+### 总结
+
+在这篇博文里,我们学习了如何使用 OpenCV 默认就有的 Haar cascades 分类器来识别图片中的猫脸。这些 Haar cascades 是由 [Joseph Howse][5] 训练并贡献给 OpenCV 项目的。我是在 Kendrick Tan 的[这篇文章][10]中开始注意到这个。
+
+尽管 Haar cascades 相当有用,但是我们也经常用 HOG + 线性 SVM 替代。因为后者相对而言更容易使用,并且可以有效地降低误报率。
+
+我也会[在 PyImageSearch Gurus 课程中][11]详细地讲述如何构建定制的 HOG + 线性 SVM 对象识别器,来识别包括汽车、路标在内的各种事物。
+
+不管怎样,我希望你喜欢这篇博文。
+
+--------------------------------------------------------------------------------
+
+via: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/
+
+作者:[Adrian Rosebrock][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.pyimagesearch.com/author/adrian/
+[1]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
+[2]: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/#
+[3]: https://github.com/Itseez/opencv
+[4]: https://github.com/Itseez/opencv/tree/master/data/haarcascades
+[5]: http://nummist.com/
+[6]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
+[7]: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
+[8]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
+[9]: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/
+[10]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
+[11]: https://www.pyimagesearch.com/pyimagesearch-gurus/
+
+
+
diff --git a/translated/tech/20160620 Detecting cats in images with OpenCV.md 
b/translated/tech/20160620 Detecting cats in images with OpenCV.md deleted file mode 100644 index 9b1e19fef7..0000000000 --- a/translated/tech/20160620 Detecting cats in images with OpenCV.md +++ /dev/null @@ -1,228 +0,0 @@ -使用 OpenCV 识别图片中的猫 -======================================= - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg) - -你知道 OpenCV 可以识别在图片中识别猫脸吗?还是在开箱即用的情况下,无需多余的附件。 - -我也不知道。 - -但是在看完'[Kendrick Tan broke the story][1]'这个故事之后, 我需要亲自体验一下...去看看到OpenCV 是如何在我没有察觉到的情况下,将这一个功能添加进了他的软件库。 - -作为这个博客的大纲,我将会展示如何使用 OpenCV 的猫检测器在图片中识别猫脸。同样的,你也可以在视频流中使用该技术。 - -> 想找这篇博客的源码?[请点这][2]。 - - -### 使用 OpenCV 在图片中检测猫 - -如果你看一眼[OpenCV 的代码库][3],尤其是在[haarcascades 目录][4](OpenCV 用来保存处理他对多种目标检测的Cascade预先训练的级联图像分类), 你将会注意到这两个文件: - -- haarcascade_frontalcatface.xml -- haarcascade_frontalcatface_extended.xml - -这两个 Haar Cascade 文件都将被用来在图片中检测猫脸。实际上,我使用了相同的方式来生成这篇博客顶端的图片。 - -在做了一些调查工作之后,我发现训练这些记过并且将其提供给 OpenCV 仓库的是鼎鼎大名的 [Joseph Howse][5],他在计算机视觉领域有着很高的声望。 - -在博客的剩余部分,我将会展示给你如何使用 Howse 的 Haar 级联模型来检测猫。 - -让我们开工。新建一个叫 cat_detector.py 的文件,并且输入如下的代码: - -### 使用 OpenCVPython 来检测猫 - -``` -# import the necessary packages -import argparse -import cv2 - -# construct the argument parse and parse the arguments -ap = argparse.ArgumentParser() -ap.add_argument("-i", "--image", required=True, - help="path to the input image") -ap.add_argument("-c", "--cascade", - default="haarcascade_frontalcatface.xml", - help="path to cat detector haar cascade") -args = vars(ap.parse_args()) -``` - -第2和第3行主要是导入了必要的 python 包。6-12行主要是我们的命令行参数。我们在这只需要使用单独的参数'--image'。 - -我们可以指定一个 Haar cascade 的路径通过 `--cascade` 参数。默认使用 `haarcascades_frontalcatface.xml`,同时需要保证这个文件和你的 `cat_detector.py` 在同一目录下。 - -注意:我已经打包了猫的检测代码,还有在这个教程里的样本图片。你可以在博客的'Downloads' 部分下载到。如果你是刚刚接触 Python+OpenCV(或者 Haar 级联模型), 我会建议你下载 zip 压缩包,这个会方便你进行操作。 - -接下来,就是检测猫的时刻了: - -``` -# load the input image and convert it to grayscale -image = cv2.imread(args["image"]) -gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - 
-# load the cat detector Haar cascade, then detect cat faces -# in the input image -detector = cv2.CascadeClassifier(args["cascade"]) -rects = detector.detectMultiScale(gray, scaleFactor=1.3, - minNeighbors=10, minSize=(75, 75)) -``` - -在15,16行,我们从硬盘上读取了图片,并且进行灰度化(一个常用的图片预处理,方便 Haar cascade 进行分类,尽管不是必须) - -20行,我们加载了Haar casacade,即猫检测器,并且初始化了 cv2.CascadeClassifier 对象。 - -使用 OpenCV 检测猫脸的步骤是21,22行,通过调用 detectMultiScale 方法。我们使用四个参数来调用。包括: - -1. 灰度化的图片,即样本图片。 -2. scaleFactor 参数,[图片金字塔][6]使用的检测猫脸时的检测粒度。一个更大的粒度将会加快检测的速度,但是会对准确性产生影响。相反的,一个更小的粒度将会影响检测的时间,但是会增加正确性。但是,细粒度也会增加错误的检测数量。你可以看博客的 'Haar 级联模型笔记' 部分来获得更多的信息。 -3. minNeighbors 参数控制了检测的最小数量,即在给定区域最小的检测猫脸的次数。这个参数很好的可以排除错误的检测结果。 -4. 最后,minSize 参数很好的自我说明了用途。即最后图片的最小大小,这个例子中就是 75\*75 - -detectMultiScale 函数 return rects,这是一个4维数组链表。这些item 中包含了猫脸的(x,y)坐标值,还有宽度,高度。 - -最后,让我们在图片上画下这些矩形来标识猫脸: - -``` -# loop over the cat faces and draw a rectangle surrounding each -for (i, (x, y, w, h)) in enumerate(rects): - cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2) - cv2.putText(image, "Cat #{}".format(i + 1), (x, y - 10), - cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 255), 2) - -# show the detected cat faces -cv2.imshow("Cat Faces", image) -cv2.waitKey(0) -``` - -给相关的区域(举个例子,rects),我们在25行依次历遍它。 - -在26行,我们在每张猫脸的周围画上一个矩形。27,28行展示了一个整数,即图片中猫的数量。 - -最后,31,32行在屏幕上展示了输出的图片。 - -### 猫检测结果 - -为了测试我们的 OpenCV 毛检测器,可以在文章的最后,下载教程的源码。 - -然后,在你解压缩之后,你将会得到如下的三个文件/目录: - -1. cat_detector.py:我们的主程序 -2. haarcascade_frontalcatface.xml: Haar cascade 猫检测资源 -3. images:我们将会使用的检测图片目录。 - -到这一步,执行以下的命令: - -使用 OpenCVShell 检测猫。 - -``` -$ python cat_detector.py --image images/cat_01.jpg -``` - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_01.jpg) ->1. 在图片中检测猫脸,甚至是猫的一部分。 - -注意,我们已经可以检测猫脸了,即使他的其余部分是被隐藏的。 - -试下另外的一张图片: - -``` -python cat_detector.py --image images/cat_02.jpg -``` - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_02.jpg) ->2. 
第二个例子就是在略微不同的猫脸中检测。 - -这次的猫脸和第一次的明显不同,因为它在'Meow'的中央。这种情况下,我们依旧能检测到正确的猫脸。 - -这张图片的结果也是正确的: - -``` -$ python cat_detector.py --image images/cat_03.jpg -``` - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_03.jpg) ->3. 使用 OpenCV 和 python 检测猫脸 - -我们最后的一个样例就是在一张图中检测多张猫脸: - -``` -$ python cat_detector.py --image images/cat_04.jpg -``` - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg) ->Figure 4: Detecting multiple cats in the same image with OpenCV ->4. 在同一张图片中使用 OpenCV 检测多只猫 - -注意,Haar cascade 的返回值并不是有序的。这种情况下,中间的那只猫会被标记成第三只。你可以通过判断他们的(x, y)坐标来自己排序。 - -#### 精度的 Tips - -xml 文件中的注释,非常重要,Joseph Hower 提到了猫 脸检测器有可能会将人脸识别成猫脸。 - -这种情况下,他推荐使用两种检测器(人脸&猫脸),然后将出现在人脸识别结果中的结果剔除掉。 - -#### Haar 级联模型注意事项 - -这个方法首先出现在 Paul Viola 和 Michael Jones 2001 年发布的 [Rapid Object Detection using a Boosted Cascade of Simple Features] 论文中。现在它已经成为了计算机识别领域引用最多的成果之一。 - -这个算法能够识别图片中的对象,无论地点,规模。并且,他也能在现有的硬件条件下实现实时计算。 - -在他们的论文中,Viola 和 Jones 关注在训练人脸检测器;但是,这个框架也能用来检测各类事物,如汽车,香蕉,路标等等。 - -#### 有问题? - -Haar 级联模型最大的问题就是如何确定 detectMultiScale 方法的参数正确。特别是 scaleFactor 和 minNeighbors 参数。你很容易陷入,一张一张图片调参数的坑,这个就是该模型很难被实用化的原因。 - -这个 scaleFactor 变量控制了用来检测图片各种对象的[图像棱锥图][8]。如何参数过大,你就会得到更少的特征值,这会导致你无法在图层中识别一些目标。 - -换句话说,如果参数过低,你会检测出过多的图层。这虽然可以能帮助你检测更多的对象。但是他会造成计算速度的降低还会提高错误率。 - -为了避免这个,我们通常使用[Histogram of Oriented Gradients + Linear SVM detection][9]。 - -HOG + 线性 SVM 框架,它的参数更加容易的进行调优。而且也有更低的错误识别率,但是最大的缺点及时无法实时运算。 - -### 对对象识别感兴趣?并且希望了解更多? - -![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/custom_object_detector_example.jpg) ->5. 
在 PyImageSearch Gurus 课程中学习如何构建自定义的对象识别器。 - -如果你对学习如何训练自己的自定义对象识别器,请务必要去学习 PyImageSearch Gurus 的课程。 - -在这个课程中,我提供了15节课还有超过168页的教程,来教你如何从0开始构建自定义的对象识别器。你会掌握如何应用 HOG+线性 SVM 计算框架来构建自己的对象识别器。 - -### 总结 - -在这篇博客里,我们学习了如何使用默认的 Haar 级联模型来识别图片中的猫脸。这些 Haar casacades 是通过[Joseph Howse][9] 贡献给 OpenCV 项目的。我是在[这篇文章][10]中开始注意到这个。 - -尽管 Haar 级联模型相当有用,但是我们也经常用 HOG 和 线性 SVM 替代。因为后者相对而言更容易使用,并且可以有效地降低错误的识别概率。 - -我也会在[在 PyImageSearch Gurus 的课程中][11]详细的讲述如何使用 HOG 和线性 SVM 对象识别器,来识别包括汽车,路标在内的各种事物。 - -不管怎样,我希望你享受这篇博客。 - -在你离开之前,确保你会使用这下面的表单注册 PyImageSearch Newsletter。这样你能收到最新的消息。 - --------------------------------------------------------------------------------- - -via: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/ - -作者:[Adrian Rosebrock][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.pyimagesearch.com/author/adrian/ -[1]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html -[2]: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/# -[3]: https://github.com/Itseez/opencv -[4]: https://github.com/Itseez/opencv/tree/master/data/haarcascades -[5]: http://nummist.com/ -[6]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/ -[7]: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf -[8]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/ -[9]: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/ -[10]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html -[11]: https://www.pyimagesearch.com/pyimagesearch-gurus/ - - - From 552b4d3cfd324bf904208853ed9af150520be66c Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 21 Jul 2016 17:11:25 +0800 Subject: [PATCH 194/471] =?UTF-8?q?20160721-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...1 5 tricks for getting started with Vim.md | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 sources/tech/20160721 5 tricks for getting started with Vim.md diff --git a/sources/tech/20160721 5 tricks for getting started with Vim.md b/sources/tech/20160721 5 tricks for getting started with Vim.md new file mode 100644 index 0000000000..8f622770f5 --- /dev/null +++ b/sources/tech/20160721 5 tricks for getting started with Vim.md @@ -0,0 +1,60 @@ +5 tricks for getting started with Vim +===================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/BUSINESS_peloton.png?itok=nuMbW9d3) + +For years, I've wanted to learn Vim, now my preferred Linux text editor and a favorite open source tool among developers and system administrators. And when I say learn, I mean really learn. Master is probably too strong a word, but I'd settle for advanced proficiency. For most of my years using Linux, my skillset included the ability to open a file, use the arrow keys to navigate up and down, switch into insert mode, change some text, save, and exit. + +But that's like minimum-viable-Vim. My skill level enabled me edit text documents from the terminal, but hasn't actually empowered me with any of the text editing super powers I've always imagined were possible. And it didn't justify using Vim over the totally capable Pico or Nano. + +So why learn Vim at all? Because I do spend an awful lot of time editing text, and I know I could be more efficient at it. And why not Emacs, or a more modern editor like Atom? Because Vim works for me, and at least I have some minimal experience in it. And perhaps, importantly, because it's rare that I encounter a system that I'm working on which doesn't have Vim or it's less-improved cousin (vi) available on it already. 
If you've always had a desire to learn Emacs, more power to you—I hope the Emacs-analog of these tips will prove useful to you, too. + +A few weeks in to this concentrated effort to up my Vim-use ability, the number one tip I have to share is that you actually must use the tool. While it seems like a piece of advice straight from Captain Obvious, I actually found it considerably harder than I expected to stay in the program. Most of my work happens inside of a web browser, and I had to untrain my trigger-like opening of (Gedit) every time I needed to edit a block of text outside of a browser. Gedit had made its way to my quick launcher, and so step one was removing this shortcut and putting Vim there instead. + +I've tried a number of things that have helped me learn. Here's a few of them I would recommend if you're looking to learn as well. + +### Vimtutor + +Sometimes the best place to get started isn't far from the application itself. I found Vimtutor, a tiny application that is basically a tutorial in a text file that you edit as you learn, to be as helpful as anything else in showing me the basics of the commands I had skipped learning through the years. Vimtutor is typically found everywhere Vim is, and is an easy install from your package manager if it's not already on your system. + +### GVim + +I know not everyone will agree with this one, but I found it useful to stop using the version of Vim that lives in my terminal and start using GVim for my basic editing needs. Naysayers will argue that it encourages using the mouse in an environment designed for keyboards, but I found it helpful to be able to quickly find the command I was looking for in a drop-down menu, reminding myself of the correct command, and then executing it with a keyboard. The alternative was often frustration at the inability to figure out how to do something, which is not a good feeling to be under constantly as you struggle to learn a new editor. 
No, stopping every few minutes to read a man page or use a search engine to remind you of a key sequence is not the best way to learn something new. + +### Keyboard maps + +Along with switching to GVim, I also found it handy to have a keyboard "cheat sheet" handy to remind me of the basic keystrokes. There are many available on the web that you can download, print, and set beside your station, but I opted for buying a set of stickers for my laptop keyboard. They were less than ten dollars US and had the added bonus of being a subtle reminder every time I used the laptop to at least try out one new thing as I edited. + +### Vimium + +As I mentioned, I live in the web browser most of the day. One of the tricks I've found helpful to reinforce the Vim way of navigation is to use [Vimium][1], an open source extension for Chrome that makes Chrome mimick the shortcuts used by Vim. I've found the fewer times I switch contexts for the keyboard shortcuts I'm using, the more likely I am to actually use them. Similar extensions, like [Vimerator][2], exist for Firefox. + +### Other human beings + +Without a doubt, there's no better way to get help learning something new than to get advice, feedback, and solutions from other people who have gone down a path before you. + +If you live in a larger urban area, there might be a Vim meetup group near you. Otherwise, the place to be is the #vim channel on Freenode IRC. One of the more popular channels on Freenode, the #vim channel is always full of helpful individuals willing to offer help with your problems. I find it interesting just to listen to the chatter and see what sorts of problems others are trying to solve to see what I'm missing out on. + +------ + +And so what to make of this effort? So far, so good. 
The time spent has probably yet to pay for itself in terms of time saved, but I'm always mildly surprised and amused when I find myself with a new reflex, jumping words with the right keypress sequence, or some similarly small feat. I can at least see that every day, the investment is bringing itself a little closer to payoff. + +These aren't the only tricks for learning Vim, by far. I also like to point people towards [Vim Adventures][3], an online game in which you navigate using the Vim keystrokes. And just the other day I came across a marvelous visual learning tool at [Vimgifs.com][4], which is exactly what you might expect it to be: illustrated examples with Vim so small they fit nicely in a gif. + +Have you invested the time to learn Vim, or really, any program with a keyboard-heavy interface? What worked for you, and, did you think the effort was worth it? Has your productivity changed as much as you thought it would? Lets share stories in the comments below. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/tips-getting-started-vim + +作者:[Jason Baker ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jason-baker +[1]: https://github.com/philc/vimium +[2]: http://www.vimperator.org/ +[3]: http://vim-adventures.com/ +[4]: http://vimgifs.com/ From 4355b496dccbdab0150f406edb83ce1c1d0f66eb Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 21 Jul 2016 17:20:25 +0800 Subject: [PATCH 195/471] =?UTF-8?q?20160721-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0718 Creating your first Git repository.md | 177 ++++++++++++++++++ 1 file changed, 177 insertions(+) create mode 100644 sources/tech/20160718 Creating your first Git repository.md diff --git 
a/sources/tech/20160718 Creating your first Git repository.md b/sources/tech/20160718 Creating your first Git repository.md new file mode 100644 index 0000000000..0a9df8740b --- /dev/null +++ b/sources/tech/20160718 Creating your first Git repository.md @@ -0,0 +1,177 @@ +Creating your first Git repository +====================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open_abstract_pieces.jpg?itok=ZRt0Db00) + +Now it is time to learn how to create your own Git repository, and how to add files and make commits. + +In the previous installments in this series, you learned how to interact with Git as an end user; you were the aimless wanderer who stumbled upon an open source project's website, cloned a repository, and moved on with your life. You learned that interacting with Git wasn't as confusing as you may have thought it would be, and maybe you've been convinced that it's time to start leveraging Git for your own work. + +While Git is definitely the tool of choice for major software projects, it doesn't only work with major software projects. It can manage your grocery lists (if they're that important to you, and they may be!), your configuration files, a journal or diary, a novel in progress, and even source code! + +And it is well worth doing; after all, when have you ever been angry that you have a backup copy of something that you've just mangled beyond recognition? + +Git can't work for you unless you use it, and there's no time like the present. Or, translated to Git, "There is no push like origin HEAD". You'll understand that later, I promise. + +### The audio recording analogy + +We tend to speak of computer imaging in terms of snapshots because most of us can identify with the idea of having a photo album filled with particular moments in time. It may be more useful, however, to think of Git more like an analogue audio recording. 
+ +A traditional studio tape deck, in case you're unfamiliar, has a few components: it contains the reels that turn either forward or in reverse, tape to preserve sound waves, and a playhead to record or detect sound waves on tape and present them to the listener. + +In addition to playing a tape forward, you can rewind it to get back to a previous point in the tape, or fast-forward to skip ahead to a later point. + +Imagine a band in the 1970s recording to tape. You can imagine practising a song over and over until all the parts are perfect, and then laying down a track. First, you record the drums, and then the bass, and then the guitar, and then the vocals. Each time you record, the studio engineer rewinds the tape and puts it into loop mode so that it plays the previous part as you play yours; that is, if you're on bass, you get to hear the drums in the background as you play, and then the guitarist hears the drums and bass (and cowbell) and so on. On each loop, you play over the part, and then on the following loop, the engineer hits the record button and lays the performance down on tape. + +You can also copy and swap out a reel of tape entirely, should you decide to do a re-mix of something you're working on. + +Now that I've hopefully painted a vivid Roger Dean-quality image of studio life in the 70s, let's translate that into Git. + +### Create a Git repository + +The first step is to go out and buy some tape for our virtual tape deck. In Git terms, that's the repository ; it's the medium or domain where all the work is going to live. + +Any directory can become a Git repository, but to begin with let's start a fresh one. It takes three commands: + +- Create the directory (you can do that in your GUI file manager, if you prefer). +- Visit that directory in a terminal. +- Initialise it as a directory managed by Git. + +Specifically, run these commands: + +``` +$ mkdir ~/jupiter # make directory +$ cd ~/jupiter # change into the new directory +$ git init . 
# initialise your new Git repo +``` + +Is this example, the folder jupiter is now an empty but valid Git repository. + +That's all it takes. You can clone the repository, you can go backward and forward in history (once it has a history), create alternate timelines, and everything else Git can normally do. + +Working inside the Git repository is the same as working in any directory; create files, copy files into the directory, save files into it. You can do everything as normal; Git doesn't get involved until you involve it. + +In a local Git repository, a file can have one of three states: + +- Untracked: a file you create in a repository, but not yet added to Git. +- Tracked: a file that has been added to Git. +- Staged: a tracked file that has been changed and added to Git's commit queue. + +Any file that you add to a Git repository starts life out as an untracked file. The file exists on your computer, but you have not told Git about it yet. In our tape deck analogy, the tape deck isn't even turned on yet; the band is just noodling around in the studio, nowhere near ready to record yet. + +That is perfectly acceptable, and Git will let you know when it happens: + +``` +$ echo "hello world" > foo +$ git status +On branch master +Untracked files: +(use "git add ..." to include in what will be committed) + foo +nothing added but untracked files present (use "git add" to track) +``` + +As you can see, Git also tells you how to start tracking files. + +### Git without Git + +Creating a repository in GitHub or GitLab is a lot more clicky and pointy. It isn't difficult; you click the New Repository button and follow the prompts. + +It is a good practice to include a README file so that people wandering by have some notion of what your repository is for, and it is a little more satisfying to clone a non-empty repository. 
+ +Cloning the repository is no different than usual, but obtaining permission to write back into that repository on GitHub is slightly more complex, because in order to authenticate to GitHub you must have an SSH key. If you're on Linux, create one with this command: + +``` +$ ssh-keygen +``` + +Then copy your new key, which is plain text. You can open it in a plain text editor, or use the cat command: + +``` +$ cat ~/.ssh/id_rsa.pub +``` + +Now paste your key into [GitHub's SSH configuration][1], or your [GitLab configuration][2]. + +As long as you clone your GitHub project via SSH, you'll be able to write back to your repository. + +Alternately, you can use GitHub's file uploader interface to add files without even having Git on your system. + +![](https://opensource.com/sites/default/files/2_githubupload.jpg) + +### Tracking files + +As the output of git status tells you, if you want Git to start tracking a file, you must git add it. The git add action places a file in a special staging area, where files wait to be committed, or preserved for posterity in a snapshot. The point of a git add is to differentiate between files that you want to have included in a snapshot, and the new or temporary files you want Git to, at least for now, ignore. + +In our tape deck analogy, this action turns the tape deck on and arms it for recording. You can picture the tape deck with the record and pause button pushed, or in a playback loop awaiting the next track to be laid down. + +Once you add a file, Git will identify it as a tracked file: + +``` +$ git add foo +$ git status +On branch master +Changes to be committed: +(use "git reset HEAD ..." to unstage) +new file: foo +``` + +Adding a file to Git's tracking system is not making a recording. It just puts a file on the stage in preparation for recording. 
You can still change a file after you've added it; it's being tracked and remains staged, so you can continue to refine it or change it before committing it to tape (but be warned; you're NOT recording yet, so if you break something in a file that was perfect, there's no going back in time yet, because you never got that perfect moment on tape). + +If you decide that the file isn't really ready to be recorded in the annals of Git history, then you can unstage something, just as the Git message described: + +``` +$ git reset HEAD foo +``` + +This, in effect, disarms the tape deck from being ready to record, and you're back to just noodling around in the studio. + +### The big commit + +At some point, you're going to want to commit something; in our tape deck analogy, that means finally pressing record and laying a track down on tape. + +At different stages of a project's life, how often you press that record button varies. For example, if you're hacking your way through a new Python toolkit and finally manage to get a window to appear, then you'll certainly want to commit so you have something to fall back on when you inevitably break it later as you try out new display options. But if you're working on a rough draft of some new graphics in Inkscape, you might wait until you have something you want to develop from before committing. Ultimately, though, it's up to you how often you commit; Git doesn't "cost" that much and hard drives these days are big, so in my view, the more the better. + +A commit records all staged files in a repository. Git only records files that are tracked, that is, any file that you did a git add on at some point in the past. and that have been modified since the previous commit. If no previous commit exists, then all tracked files are included in the commit because they went from not existing to existing, which is a pretty major modification from Git's point-of-view. 
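If you want to double-check exactly what the tape deck is armed to capture before you press record, Git can show you. Below is a minimal sketch; it builds a throwaway repository with `mktemp` (a stand-in for the `~/jupiter` example above, so the snippet runs anywhere) and then uses two standard Git commands, `git status --short` and `git diff --cached`, to inspect the stage:

```shell
#!/bin/sh
set -e

# Stand-in repository so the inspection commands below have something to look at
cd "$(mktemp -d)"
git init -q .
echo "hello world" > foo
git add foo

# What is staged, and exactly what content will the next commit record?
git status --short    # "A  foo" means foo is staged as a new file
git diff --cached     # the staged changes, line by line
```

The long-form `git status` output shown earlier carries the same information; the short form is just easier to scan.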
+ +To make a commit, run this command: + +``` +$ git commit -m 'My great project, first commit.' +``` + +This preserves all files committed for posterity (or, if you speak Gallifreyan, they become "fixed points in time"). You can see not only the commit event, but also the reference pointer back to that commit in your Git log: + +``` +$ git log --oneline +55df4c2 My great project, first commit. +``` + +For a more detailed report, just use git log without the --oneline option. + +The reference number for the commit in this example is 55df4c2. It's called a commit hash and it represents all of the new material you just recorded, overlaid onto previous recordings. If you need to "rewind" back to that point in history, you can use that hash as a reference. + +You can think of a commit hash as [SMPTE timecode][3] on an audio tape, or if we bend the analogy a little, one of those big gaps between songs on a vinyl record, or track numbers on a CD. + +As you change files further and add them to the stage, and ultimately commit them, you accrue new commit hashes, each of which serve as pointers to different versions of your production. + +And that's why they call Git a version control system, Charlie Brown. + +In the next article, we'll explore everything you need to know about the Git HEAD, and we'll nonchalantly reveal the secret of time travel. No big deal, but you'll want to read it (or maybe you already have?). 
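Before the credits roll, here is the whole session from this article condensed into one runnable sketch. The `mktemp` directory stands in for `~/jupiter` so the snippet can run anywhere, and the `-c user.name`/`-c user.email` flags supply a throwaway identity in case Git has none configured; everything else is exactly the commands introduced above:

```shell
#!/bin/sh
set -e

cd "$(mktemp -d)"            # stand-in for ~/jupiter
git init .                   # initialise the repository

echo "hello world" > foo     # a new, untracked file
git add foo                  # track it and put it on the stage

git -c user.name="demo" -c user.email="demo@example.com" \
    commit -m 'My great project, first commit.'

git log --oneline            # one line per commit: hash, then message
```

Create, track, stage, commit: four commands, and the tape now has its first track.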
+ + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/creating-your-first-git-repository + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://github.com/settings/keys +[2]: https://gitlab.com/profile/keys +[3]: http://slackermedia.ml/handbook/doku.php?id=timecode From 18ba0c49a3bcc31d7aab8974984c92bb2bc82f31 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 21 Jul 2016 17:29:05 +0800 Subject: [PATCH 196/471] =?UTF-8?q?20160721-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...User Space What We Did for Kernel Space.md | 94 +++++++++++++++++++ 1 file changed, 94 insertions(+) create mode 100644 sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md diff --git a/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md b/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md new file mode 100644 index 0000000000..72d79aaede --- /dev/null +++ b/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md @@ -0,0 +1,94 @@ +Doing for User Space What We Did for Kernel Space +======================================================= + +I believe the best and worst thing about Linux is its hard distinction between kernel space and user space. + +Without that distinction, Linux never would have become the most leveraged operating system in the world. Today, Linux has the largest range of uses for the largest number of users—most of whom have no idea they are using Linux when they search for something on Google or poke at their Android phones. Even Apple stuff wouldn't be what it is (for example, using BSD in its computers) were it not for Linux's success. 
+ +Not caring about user space is a feature of Linux kernel development, not a bug. As Linus put it on our 2003 Geek Cruise, "I only do kernel stuff...I don't know what happens outside the kernel, and I don't much care. What happens inside the kernel I care about." After Andrew Morton gave me additional schooling on the topic a couple years later on another Geek Cruise, I wrote: + +>Kernel space is where the Linux species lives. User space is where Linux gets put to use, along with a lot of other natural building materials. The division between kernel space and user space is similar to the division between natural materials and stuff humans make out of those materials. + +A natural outcome of this distinction, however, is for Linux folks to stay relatively small as a community while the world outside depends more on Linux every second. So, in hope that we can enlarge our number a bit, I want to point us toward two new things. One is already hot, and the other could be. + +The first is [blockchain][1], made famous as the distributed ledger used by Bitcoin, but useful for countless other purposes as well. At the time of this writing, interest in blockchain is [trending toward the vertical][2]. + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f1.png) +>Figure 1. Google Trends for Blockchain + +The second is self-sovereign identity. To explain that, let me ask who and what you are. + +If your answers come from your employer, your doctor, the Department of Motor Vehicles, Facebook, Twitter or Google, they are each administrative identifiers: entries in namespaces each of those organizations control, entirely for their own convenience. As Timothy Ruff of [Evernym][3] explains, "You don't exist for them. Only your identifier does." It's the dependent variable. The independent variable—the one controlling the identifier—is the organization. 
+ +If your answer comes from your self, we have a wide-open area for a new development category—one where, finally, we can be set fully free in the connected world. + +The first person to explain this, as far as I know, was [Devon Loffreto][4] He wrote "What is 'Sovereign Source Authority'?" in February 2012, on his blog, [The Moxy Tongue][5]. In "[Self-Sovereign Identity][6]", published in February 2016, he writes: + +>Self-Sovereign Identity must emit directly from an individual human life, and not from within an administrative mechanism...self-Sovereign Identity references every individual human identity as the origin of source authority. A self-Sovereign identity produces an administrative trail of data relations that begin and resolve to individual humans. Every individual human may possess a self-Sovereign identity, and no person or abstraction of any type created may alter this innate human Right. A self-Sovereign identity is the root of all participation as a valued social being within human societies of any type. + +To put this in Linux terms, only the individual has root for his or her own source identity. In the physical world, this is a casual thing. For example, my own portfolio of identifiers includes: + +- David Allen Searls, which my parents named me. +- David Searls, the name I tend to use when I suspect official records are involved. +- Dave, which is what most of my relatives and old friends call me. +- Doc, which is what most people call me. + +As the sovereign source authority over the use of those, I can jump from one to another in different contexts and get along pretty well. But, that's in the physical world. In the virtual one, it gets much more complicated. In addition to all the above, I am @dsearls (my Twitter handle) and dsearls (my handle in many other net-based services). I am also burdened by having my ability to relate contained within hundreds of different silos, each with their own logins and passwords. 
+ +You can get a sense of how bad this is by checking the list of logins and passwords on your browser. On Firefox alone, I have hundreds of them. Many are defunct (since my collection dates back to Netscape days), but I would guess that I still have working logins to hundreds of companies I need to deal with from time to time. For all of them, I'm the dependent variable. It's not the other way around. Even the term "user" testifies to the subordinate dependency that has become a primary fact of life in the connected world. + +Today, the only easy way to bridge namespaces is via the compromised convenience of "Log in with Facebook" or "Log in with Twitter". In both of those cases, each of us is even less ourselves or in any kind of personal control over how we are known (if we wish to be knowable at all) to other entities in the connected world. + +What we have needed from the start are personal systems for instantiating our sovereign selves and choosing how to reveal and protect ourselves when dealing with others in the connected world. For lack of that ability, we are deep in a metastasized mess that Shoshana Zuboff calls "surveillance capitalism", which she says is: + +>...unimaginable outside the inscrutable high velocity circuits of Google's digital universe, whose signature feature is the Internet and its successors. While the world is riveted by the showdown between Apple and the FBI, the real truth is that the surveillance capabilities being developed by surveillance capitalists are the envy of every state security agency. + +Then she asks, "How can we protect ourselves from its invasive power?" + +I suggest self-sovereign identity. I believe it is only there that we have both safety from unwelcome surveillance and an Archimedean place to stand in the world. From that place, we can assert full agency in our dealings with others in society, politics and business. + +I came to this provisional conclusion during [ID2020][7], a gathering at the UN on May. 
It was gratifying to see Devon Loffreto there, since he's the guy who got the sovereign ball rolling in 2013. Here's [what I wrote about][8] it at the time, with pointers to Devon's earlier posts (such as one sourced above). + +Here are three for the field's canon: + +- "[Self-Sovereign Identity][9]" by Devon Loffreto. +- "[System or Human First][10]" by Devon Loffreto. +- "[The Path to Self-Sovereign Identity][11]" by Christopher Allen. + +A one-pager from Evernym, [digi.me][12], [iRespond][13] and [Respect Network][14] also was circulated there, contrasting administrative identity (which it calls the "current model") with the self-sovereign one. In it is the graphic shown in Figure 2. + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f2.jpg) +>Figure 2. Current Model of Identity vs. Self-Sovereign Identity + +The [platform][15] for this is Sovrin, explained as a "Fully open-source, attribute-based, sovereign identity graph platform on an advanced, dedicated, permissioned, distributed ledger" There's a [white paper][16] too. The code is called [plenum][17], and it's at GitHub. + +Here—and places like it—we can do for user space what we've done for the last quarter century for kernel space. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-space + +作者:[Doc Searls][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxjournal.com/users/doc-searls +[1]: https://en.wikipedia.org/wiki/Block_chain_%28database%29 +[2]: https://www.google.com/trends/explore#q=blockchain +[3]: http://evernym.com/ +[4]: https://twitter.com/nzn +[5]: http://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html +[6]: http://www.moxytongue.com/2016/02/self-sovereign-identity.html +[7]: http://www.id2020.org/ +[8]: http://blogs.harvard.edu/doc/2013/10/14/iiw-challenge-1-sovereign-identity-in-the-great-silo-forest +[9]: http://www.moxytongue.com/2016/02/self-sovereign-identity.html +[10]: http://www.moxytongue.com/2016/05/system-or-human.html +[11]: http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html +[12]: https://get.digi.me/ +[13]: http://irespond.com/ +[14]: https://www.respectnetwork.com/ +[15]: http://evernym.com/technology +[16]: http://evernym.com/assets/doc/Identity-System-Essentials.pdf?v=167284fd65 +[17]: https://github.com/evernym/plenum From e5a45840f45b4f00be6cc83b14e36b0f737cfa80 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 22 Jul 2016 13:45:54 +0800 Subject: [PATCH 197/471] =?UTF-8?q?20160722-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ice Efficiency at Instagram with Python.md | 73 +++++++++++++++++++ 1 file changed, 73 insertions(+) create mode 100644 sources/tech/20160602 Web Service Efficiency at Instagram with Python.md diff --git a/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md b/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md new file 
mode 100644 index 0000000000..f637331336 --- /dev/null +++ b/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md @@ -0,0 +1,73 @@ +Web Service Efficiency at Instagram with Python +=============================================== + +Instagram currently features the world’s largest deployment of the Django web framework, which is written entirely in Python. We initially chose to use Python because of its reputation for simplicity and practicality, which aligns well with our philosophy of “do the simple thing first.” But simplicity can come with a tradeoff: efficiency. Instagram has doubled in size over the last two years and recently crossed 500 million users, so there is a strong need to maximize web service efficiency so that our platform can continue to scale smoothly. In the past year we’ve made our efficiency program a priority, and over the last six months we’ve been able to maintain our user growth without adding new capacity to our Django tiers. In this post, we’ll share some of the tools we built and how we use them to optimize our daily deployment flow. + +### Why Efficiency? + +Instagram, like all software, is limited by physical constraints like servers and datacenter power. With these constraints in mind, there are two main goals we want to achieve with our efficiency program: + +1. Instagram should be able to serve traffic normally with continuous code rollouts in the case of lost capacity in one data center region, due to natural disaster, regional network issues, etc. +2. Instagram should be able to freely roll out new products and features without being blocked by capacity. + +To meet these goals, we realized we needed to persistently monitor our system and battle regression. + +### Defining Efficiency + +Web services are usually bottlenecked by available CPU time on each server. Efficiency in this context means using the same amount of CPU resources to do more work, a.k.a, processing more user requests per second (RPS). 
As we look for ways to optimize, our first challenge is trying to quantify our current efficiency. Up to this point, we were approximating efficiency using ‘Average CPU time per requests,’ but there were two inherent limitations to using this metric: + +1. Diversity of devices. Using CPU time for measuring CPU resources is not ideal because it is affected by both CPU models and CPU loads. +2. Request impacts data. Measuring CPU resource per request is not ideal because adding and removing light or heavy requests would also impact the efficiency metric using the per-requests measurement. + +Compared to CPU time, CPU instruction is a better metric, as it reports the same numbers regardless of CPU models and CPU loads for the same request. Instead of linking all our data to each user request, we chose to use a ‘per active user’ metric. We eventually landed on measuring efficiency by using ‘CPU instruction per active user during peak minute.’ With our new metric established, our next step was to learn more about our regressions by profiling Django. + +### Profiling the Django Service + +There are two major questions we want to answer by profiling our Django web service: + +1. Does a CPU regression happen? +2. What causes the CPU regression and how do we fix it? + +To answer the first question, we need to track the CPU-instruction-per-active-user metric. If this metric increases, we know a CPU regression has occurred. + +The tool we built for this purpose is called Dynostats. Dynostats utilizes Django middleware to sample user requests by a certain rate, recording key efficiency and performance metrics such as the total CPU instructions, end to end requests latency, time spent on accessing memcache and database services, etc. On the other hand, each request has multiple metadata that we can use for aggregation, such as the endpoint name, the HTTP return code of the request, the server name that serves this request, and the latest commit hash on the request. 
Having two aspects for a single request record is especially powerful because we can slice and dice on various dimensions that help us narrow down the cause of any CPU regression. For example, we can aggregate all requests by their endpoint names as shown in the time series chart below, where it is very obvious to spot if any regression happens on a specific endpoint. + +![](https://d262ilb51hltx0.cloudfront.net/max/800/1*3iouYiAchYBwzF-v0bALMw.png) + +CPU instructions matter for measuring efficiency — and they’re also the hardest to get. Python does not have common libraries that support direct access to the CPU hardware counters (CPU hardware counters are the CPU registers that can be programmed to measure performance metrics, such as CPU instructions). Linux kernel, on the other hand, provides the perf_event_open system call. Bridging through Python ctypes enables us to call the syscall function in standard C library, which also provides C compatible data types for programming the hardware counters and reading data from them. + + With Dynostats, we can already find CPU regressions and dig into the cause of the CPU regression, such as which endpoint gets impacted most, who committed the changes that actually cause the CPU regression, etc. However, when a developer is notified that their changes have caused a CPU regression, they usually have a hard time finding the problem. If it was obvious, the regression probably wouldn’t have been committed in the first place! + + That’s why we needed a Python profiler that the developer can use to find the root cause of the regression (once Dynostats identifies it). Instead of starting from scratch, we decided to make slight alterations to cProfile, a readily available Python profiler. The cProfile module normally provides a set of statistics describing how long and how often various parts of a program were executed. 
Instead of measuring in time, we took cProfile and replaced the timer with a CPU instruction counter that reads from hardware counters. The data is created at the end of the sampled requests and sent to some data pipelines. We also send metadata similar to what we have in Dynostats, such as server name, cluster, region, endpoint name, etc. +On the other side of the data pipeline, we created a tailer to consume the data. The main functionality of the tailer is to parse the cProfile stats data and create entities that represent Python function-level CPU instructions. By doing so, we can aggregate CPU instructions by Python functions, making it easier to tell which functions contribute to CPU regression. + +### Monitoring and Alerting Mechanism + +At Instagram, we [deploy our backend 30–50 times a day][1]. Any one of these deployments can contain troublesome CPU regressions. Since each rollout usually includes at least one diff, it is easy to identify the cause of any regression. Our efficiency monitoring mechanism includes scanning the CPU instruction in Dynostats before and after each rollout, and sending out alerts when the change exceeds a certain threshold. For the CPU regressions happening over longer periods of time, we also have a detector to scan daily and weekly changes for the most heavily loaded endpoints. + + Deploying new changes is not the only thing that can trigger a CPU regression. In many cases, the new features or new code paths are controlled by global environment variables (GEV). There are very common practices for rolling out new features to a subset of users on a planned schedule. We added this information as extra metadata fields for each request in Dynostats and cProfile stats data. Grouping requests by those fields reveal possible CPU regressions caused by turning the GEVs. This enables us to catch CPU regressions before they can impact performance. + +### What’s Next? 
+ +Dynostats and our customized cProfile, along with the monitoring and alerting mechanism we’ve built to support them, can effectively identify the culprit for most CPU regressions. These developments have helped us recover more than 50% of unnecessary CPU regressions, which would have otherwise gone unnoticed. + + There are still areas where we can improve and make it easier to embed into Instagram’s daily deployment flow: + +1. The CPU instruction metric is supposed to be more stable than other metrics like CPU time, but we still observe variances that make our alerting noisy. Keeping signal:noise ratio reasonably low is important so that developers can focus on the real regressions. This could be improved by introducing the concept of confidence intervals and only alarm when it is high. For different endpoints, the threshold of variation could also be set differently. +2. One limitation for detecting CPU regressions by GEV change is that we have to manually enable the logging of those comparisons in Dynostats. As the number of GEVs increases and more features are developed, this wont scale well. Instead, we could leverage an automatic framework that schedules the logging of these comparisons and iterates through all GEVs, and send alerts when regressions are detected. +3. cProfile needs some enhancement to handle wrapper functions and their children functions better. + +With the work we’ve put into building the efficiency framework for Instagram’s web service, we are confident that we will keep scaling our service infrastructure using Python. We’ve also started to invest more into the Python language itself, and are beginning to explore moving our Python from version 2 to 3. We will continue to explore this and more experiments to keep improving both infrastructure and developer efficiency, and look forward to sharing more soon. 
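The pluggable timer described in the article is part of cProfile's public API (`cProfile.Profile(timer, timeunit)`). The sketch below swaps in a nanosecond wall clock purely to show that hook; Instagram's customized profiler (not shown here) read CPU instruction counts from hardware counters at this same point. The workload function and names are illustrative assumptions:

```python
import cProfile
import io
import pstats
import time

def ns_timer():
    # Stand-in for a hardware-counter read; returns an integer count.
    return time.perf_counter_ns()

def busy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# timeunit tells cProfile how to scale the integer counts the timer returns.
prof = cProfile.Profile(timer=ns_timer, timeunit=1e-9)
prof.enable()
busy(100_000)
prof.disable()

buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
# The per-function rows are what a "tailer" would parse and aggregate.
print(next(line for line in report.splitlines() if "busy" in line))
```

Each row of the report maps to a single Python function, which is exactly the granularity a tailer needs in order to aggregate counts and point at the function responsible for a regression.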
+ +-------------------------------------------------------------------------------- + +via: https://engineering.instagram.com/web-service-efficiency-at-instagram-with-python-4976d078e366#.tiakuoi4p + +作者:[Min Ni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://engineering.instagram.com/@InstagramEng?source=post_header_lockup +[1]: https://engineering.instagram.com/continuous-deployment-at-instagram-1e18548f01d1#.p5adp7kcz From e219dff74a7a39404d57c7d9df59aaffeeac0b01 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 22 Jul 2016 13:59:57 +0800 Subject: [PATCH 198/471] PUB:Part 4 - How to Use Comparison Operators with Awk in Linux @chunyang-wen --- ... Comparison Operators with Awk in Linux.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) rename {translated/tech => published}/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md (61%) diff --git a/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md b/published/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md similarity index 61% rename from translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md rename to published/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md index 86649a52d5..4ed7d8ed02 100644 --- a/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md +++ b/published/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md @@ -1,15 +1,15 @@ -在 Linux 下如何使用 Awk 比较操作符 +awk 系列:如何使用 awk 比较操作符 =================================================== ![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Comparison-Operators-with-AWK.png) -对于 Awk 命令的用户来说,处理一行文本中的数字或者字符串时,使用比较运算符来过滤文本和字符串是十分方便的。 +对于 使用 awk 命令的用户来说,处理一行文本中的数字或者字符串时,使用比较运算符来过滤文本和字符串是十分方便的。 -在 Awk 
系列的此部分中,我们将探讨一下如何使用比较运算符来过滤文本或者字符串。如果你是程序员,那么你应该已经熟悉比较运算符;对于其它人,下面的部分将介绍比较运算符。 +在 awk 系列的此部分中,我们将探讨一下如何使用比较运算符来过滤文本或者字符串。如果你是程序员,那么你应该已经熟悉了比较运算符;对于其它人,下面的部分将介绍比较运算符。 -### Awk 中的比较运算符是什么? +### awk 中的比较运算符是什么? -Awk 中的比较运算符用于比较字符串和或者数值,包括以下类型: +awk 中的比较运算符用于比较字符串和或者数值,包括以下类型: - `>` – 大于 - `<` – 小于 @@ -17,12 +17,12 @@ Awk 中的比较运算符用于比较字符串和或者数值,包括以下类 - `<=` – 小于等于 - `==` – 等于 - `!=` – 不等于 -- `some_value ~ / pattern/` – 如果some_value匹配模式pattern,则返回true -- `some_value !~ / pattern/` – 如果some_value不匹配模式pattern,则返回true +- `some_value ~ / pattern/` – 如果 some_value 匹配模式 pattern,则返回 true +- `some_value !~ / pattern/` – 如果 some_value 不匹配模式 pattern,则返回 true -现在我们通过例子来熟悉 Awk 中各种不同的比较运算符。 +现在我们通过例子来熟悉 awk 中各种不同的比较运算符。 -在这个例子中,我们有一个文件名为 food_list.txt 的文件,里面包括不同食物的购买列表。我想给食物数量小于或等于30的物品所在行的后面加上`(**)` +在这个例子中,我们有一个文件名为 food_list.txt 的文件,里面包括不同食物的购买列表。我想给食物数量小于或等于 30 的物品所在行的后面加上`(**)` ``` File – food_list.txt @@ -38,7 +38,7 @@ No Item_Name Quantity Price Awk 中使用比较运算符的通用语法如下: ``` -# expression { actions; } +# 表达式 { 动作; } ``` 为了实现刚才的目的,执行下面的命令: @@ -57,8 +57,8 @@ No Item_Name` Quantity Price 在刚才的例子中,发生如下两件重要的事情: -- 第一个表达式 `{ action ; }` 组合, `$3 <= 30 { printf “%s\t%s\n”, $0,”**” ; }` 打印出数量小于等于30的行,并且在后面增加`(**)`。物品的数量是通过 `$3`这个域变量获得的。 -- 第二个表达式 `{ action ; }` 组合, `$3 > 30 { print $0 ;}` 原样输出数量小于等于 `30` 的行。 +- 第一个“表达式 {动作;}”组合中, `$3 <= 30 { printf “%s\t%s\n”, $0,”**” ; }` 打印出数量小于等于30的行,并且在后面增加`(**)`。物品的数量是通过 `$3` 这个域变量获得的。 +- 第二个“表达式 {动作;}”组合中, `$3 > 30 { print $0 ;}` 原样输出数量小于等于 `30` 的行。 再举一个例子: @@ -78,9 +78,9 @@ No Item_Name Quantity Price ### 总结 -这是一篇对 Awk 中的比较运算符介绍性的指引,因此你需要尝试其他选项,发现更多使用方法。 +这是一篇对 awk 中的比较运算符介绍性的指引,因此你需要尝试其他选项,发现更多使用方法。 -如果你遇到或者想到任何问题,请在下面评论区留下评论。请记得阅读 Awk 系列下一部分的文章,那里我将介绍组合表达式。 +如果你遇到或者想到任何问题,请在下面评论区留下评论。请记得阅读 awk 系列下一部分的文章,那里我将介绍组合表达式。 -------------------------------------------------------------------------------- @@ -88,7 +88,7 @@ via: http://www.tecmint.com/comparison-operators-in-awk/ 作者:[Aaron Kili][a] 译者:[chunyang-wen](https://github.com/chunyang-wen) 
-校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c62bae310232f409360cd80cbe08a2c8e56bc424 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 22 Jul 2016 14:14:48 +0800 Subject: [PATCH 199/471] PUB:Part 5 - How to Use Compound Expressions with Awk in Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @martin2011qi 一看没写译者的就是你。。。 --- ... Compound Expressions with Awk in Linux.md | 31 ++++++++++--------- 1 file changed, 16 insertions(+), 15 deletions(-) rename {translated/tech => published}/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md (65%) diff --git a/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md b/published/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md similarity index 65% rename from translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md rename to published/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md index ed1ba4aa7c..ebe288b020 100644 --- a/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md +++ b/published/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md @@ -1,31 +1,31 @@ -如何使用 Awk 复合表达式 +awk 系列:如何使用 awk 复合表达式 ==================================================== ![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Compound-Expressions-with-Awk.png) -一直以来在查对条件是否匹配时,我们寻求的都是简单的表达式。那如果你想用超过一个表达式,来查对特定的条件呢? +一直以来在查对条件是否匹配时,我们使用的都是简单的表达式。那如果你想用超过一个表达式来查对特定的条件呢? 
本文,我们将看看如何在过滤文本和字符串时,结合多个表达式,即复合表达式,用以查对条件。 -Awk 的复合表达式可由表示`与`的组合操作符 `&&` 和表示`或`的 `||` 构成。 +awk 的复合表达式可由表示“与”的组合操作符 `&&` 和表示“或”的 `||` 构成。 复合表达式的常规写法如下: ``` -( first_expression ) && ( second_expression ) +( 第一个表达式 ) && ( 第二个表达式 ) ``` -为了保证整个表达式的正确,在这里必须确保 `first_expression` 和 `second_expression` 是正确的。 +这里只有当“第一个表达式” 和“第二个表达式”都是真值时整个表达式才为真。 ``` -( first_expression ) || ( second_expression) +( 第一个表达式 ) || ( 第二个表达式) ``` -为了保证整个表达式的正确,在这里必须确保 `first_expression` 或 `second_expression` 是正确的。 +这里只要“第一个表达式” 为真或“第二个表达式”为真,整个表达式就为真。 **注意**:切记要加括号。 -表达式可以由比较操作符构成,具体可查看 awk 系列的第四部分。 +表达式可以由比较操作符构成,具体可查看[ awk 系列的第四节][1]。 现在让我们通过一个例子来加深理解: @@ -43,7 +43,7 @@ No Name Price Type 7 Nano_Prowler_Mini_Drone $36.99 Tech ``` -我们只想打印出价格超过 $20 的物品,并在其中种类为 “Tech” 的物品的行末用 (**) 打上标记。 +我们只想打印出价格超过 $20 且其种类为 “Tech” 的物品,在其行末用 (*) 打上标记。 我们将要执行以下命令。 @@ -56,13 +56,13 @@ No Name Price Type 此例,在复合表达式中我们使用了两个表达式: -- 表达式 1:`($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/)` ;查找交易价格超过 `$20` 的行,即只有当 `$3` 也就是价格满足 `/^\$[2-9][0-9]*\.[0-9][0-9]$/` 时值才为 true。 -- 表达式 2:`($4 == “Tech”)` ;查找是否有种类为 “`Tech`”的交易,即只有当 `$4` 等于 “`Tech`” 时值才为 true。 -切记,只有当 `&&` 操作符的两端状态,也就是两个表达式都是 true 的情况下,这一行才会被打上 `(**)` 标志。 +- 表达式 1:`($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/)` ;查找交易价格超过 `$20` 的行,即只有当 `$3` 也就是价格满足 `/^\$[2-9][0-9]*\.[0-9][0-9]$/` 时值才为真值。 +- 表达式 2:`($4 == “Tech”)` ;查找是否有种类为 “`Tech`”的交易,即只有当 `$4` 等于 “`Tech`” 时值才为真值。 +切记,只有当 `&&` 操作符的两端状态,也就是两个表达式都是真值的情况下,这一行才会被打上 `(*)` 标志。 ### 总结 -有些时候为了匹配你的真实想法,就不得不用到复合表达式。当你掌握了比较和复合表达式操作符的用法之后,在难的文本或字符串过滤条件也能轻松解决。 +有些时候为了真正符合你的需求,就不得不用到复合表达式。当你掌握了比较和复合表达式操作符的用法之后,复杂的文本或字符串过滤条件也能轻松解决。 希望本向导对你有所帮助,如果你有任何问题或者补充,可以在下方发表评论,你的问题将会得到相应的解释。 @@ -71,9 +71,10 @@ No Name Price Type via: http://www.tecmint.com/combine-multiple-expressions-in-awk/ 作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: 
http://www.tecmint.com/author/aaronkili/ +[1]: https://linux.cn/article-7602-1.html \ No newline at end of file From 01afd251be931dd214dc180552906759e5cbc3b8 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 22 Jul 2016 14:37:29 +0800 Subject: [PATCH 200/471] PUB:20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse @GitFuture --- ...ve on Linux with google-drive-ocamlfuse.md | 64 +++++++++++++++++ ...ve on Linux with google-drive-ocamlfuse.md | 68 ------------------- 2 files changed, 64 insertions(+), 68 deletions(-) create mode 100644 published/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md delete mode 100644 translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md diff --git a/published/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md b/published/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md new file mode 100644 index 0000000000..e993dc5e58 --- /dev/null +++ b/published/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md @@ -0,0 +1,64 @@ +教你用 google-drive-ocamlfuse 在 Linux 上挂载 Google Drive +===================== + +> 如果你在找一个方便的方式在 Linux 机器上挂载你的 Google Drive 文件夹, Jack Wallen 将教你怎么使用 google-drive-ocamlfuse 来挂载 Google Drive。 + +![](http://tr4.cbsistatic.com/hub/i/2016/05/18/ee5d7b81-e5be-4b24-843d-d3ca99230a63/651be96ac8714698f8100afa6883e64d/linuxcloudhero.jpg) + +*图片来源: Jack Wallen* + +Google 还没有发行 Linux 版本的 Google Drive 应用,尽管现在有很多方法从 Linux 中访问你的 Drive 文件。 + +如果你喜欢界面化的工具,你可以选择 Insync。如果你喜欢用命令行,有很多像 Grive2 这样的工具,和更容易使用的以 Ocaml 语言编写的基于 FUSE 的文件系统。我将会用后面这种方式演示如何在 Linux 桌面上挂载你的 Google Drive。尽管这是通过命令行完成的,但是它的用法会简单到让你吃惊。它太简单了以至于谁都能做到。 + +这个系统的特点: + +- 对普通文件/文件夹有完全的读写权限 +- 对于 Google Docs,sheets,slides 这三个应用只读 +- 能够访问 Drive 回收站(.trash) +- 处理重复文件功能 +- 支持多个帐号 + +让我们接下来完成 google-drive-ocamlfuse 在 Ubuntu 16.04 桌面的安装,然后你就能够访问云盘上的文件了。 + +### 安装 + +1. 打开终端。 +2. 
用 `sudo add-apt-repository ppa:alessandro-strada/ppa` 命令添加必要的 PPA +3. 出现提示的时候,输入你的 root 密码并按下回车。 +4. 用 `sudo apt-get update` 命令更新应用。 +5. 输入 `sudo apt-get install google-drive-ocamlfuse` 命令安装软件。 + +### 授权 + +接下来就是授权 google-drive-ocamlfuse,让它有权限访问你的 Google 账户。先回到终端窗口敲下命令 `google-drive-ocamlfuse`,这个命令将会打开一个浏览器窗口,它会提示你登陆你的 Google 帐号或者如果你已经登陆了 Google 帐号,它会询问是否允许 google-drive-ocamlfuse 访问 Google 账户。如果你还没有登录,先登录然后点击“允许”。接下来的窗口(在 Ubuntu 16.04 桌面上会出现,但不会出现在 Elementary OS Freya 桌面上)将会询问你是否授给 gdfuse 和 OAuth2 Endpoint 访问你的 Google 账户的权限,再次点击“允许”。然后出现的窗口就会告诉你等待授权令牌下载完成,这个时候就能最小化浏览器了。当你的终端提示如下图一样的内容,你就能知道令牌下载完了,并且你已经可以挂载 Google Drive 了。 + +![](http://tr4.cbsistatic.com/hub/i/r/2016/05/18/a493122b-445f-4aca-8974-5ec41192eede/resize/620x/6ae5907ad2c08dc7620b7afaaa9e389c/googledriveocamlfuse3.png) + +*应用已经得到授权,你可以进行后面的工作。* + +### 挂载 Google Drive + +在挂载 Google Drive 之前,你得先创建一个文件夹,作为挂载点。在终端里,敲下`mkdir ~/google-drive`命令在你的家目录下创建一个新的文件夹。最后敲下命令`google-drive-ocamlfuse ~/google-drive`将你的 Google Drive 挂载到 google-drive 文件夹中。 + +这时你可以查看本地 google-drive 文件夹中包含的 Google Drive 文件/文件夹。你可以把 Google Drive 当作本地文件系统来进行工作。 + +当你想卸载 google-drive 文件夹,输入命令 `fusermount -u ~/google-drive`。 + +### 没有 GUI,但它特别好用 + +我发现这个特别的系统非常容易使用,在同步 Google Drive 时它出奇的快,并且这可以作为一种本地备份你的 Google Drive 账户的巧妙方式。(LCTT 译注:然而首先你得能使用……) + +试试 google-drive-ocamlfuse,看看你能用它做出什么有趣的事。 + +-------------------------------------------------------------------------------- + +via: http://www.techrepublic.com/article/how-to-mount-your-google-drive-on-linux-with-google-drive-ocamlfuse/ + +作者:[Jack Wallen][a] +译者:[GitFuture](https://github.com/GitFuture) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.techrepublic.com/search/?a=jack+wallen diff --git a/translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md b/translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md deleted 
file mode 100644 index b64fa51ab0..0000000000 --- a/translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md +++ /dev/null @@ -1,68 +0,0 @@ -教你用 google-drive-ocamlfuse 在 Linux 上挂载 Google Drive -===================== - ->如果你在找一个方便的方式在 Linux 机器上挂载你的 Google Drive 文件夹, Jack Wallen 将教你怎么使用 google-drive-ocamlfuse 来挂载 Google Drive。 - -![](http://tr4.cbsistatic.com/hub/i/2016/05/18/ee5d7b81-e5be-4b24-843d-d3ca99230a63/651be96ac8714698f8100afa6883e64d/linuxcloudhero.jpg) ->图片来源: Jack Wallen - -Google 还没有发行 Linux 版本的 Google Drive 应用,尽管现在有很多方法从 Linux 中访问你的 Drive 文件。 -(注:不清楚 app 需不需要翻译成应用,这里翻译了) - -如果你喜欢界面化的工具,你可以选择 Insync。如果你喜欢用命令行,这有很多工具,像 Grive2 和用 Ocaml 语言编写的、非常容易使用的、基于 FUSE 的系统(注:there are tools such as Grive2 and the incredibly easy to use FUSE-based system written in Ocaml. 这一句感觉翻译不出来)。我将会用后面这种方式演示如何在 Linux 桌面上挂载你的 Google Drive。尽管这是通过命令行完成的,但是它的用法会简单到让你吃惊。它太简单了以至于谁都能做到。 - -系统特点: - -- 对普通文件/文件夹有完全的读写权限 -- 对于 Google Docs,sheets,slides 这三个应用只读 -- 能够访问 Drive 回收站(.trash) -- 处理重复文件功能 -- 支持多个帐号 - -接下来完成 google-drive-ocamlfuse 在 Ubuntu 16.04 桌面的安装,然后你就能够访问云盘上的文件了。 - -### 安装 - -1. 打开终端。 -2. 用`sudo add-apt-repository ppa:alessandro-strada/ppa`命令添加必要的 PPA -3. 出现提示的时候,输入密码并按下回车。 -4. 用`sudo apt-get update`命令更新应用。 -5. 
输入`sudo apt-get install google-drive-ocamlfuse`命令安装软件。 -(注:这里,我把所有的命令加上着重标记了) - -### 授权 - -接下来就是授权 google-drive-ocamlfuse,让它有权限访问你的 Google 账户。先回到终端窗口敲下命令 google-drive-ocamlfuse,这个命令将会打开一个浏览器窗口,它会提示你登陆你的 Google 帐号或者如果你已经登陆了 Google 帐号,它会询问是否允许 google-drive-ocamlfuse 访问 Google 账户。如果你还没有登陆,先登陆然后点击允许。接下来的窗口(在 Ubuntu 16.04 桌面上会出现,但不会出现在基本系统 Freya 桌面上)将会询问你是否授给 gdfuse 和 OAuth2 Endpoint访问你的 Google 账户的权限,再次点击允许。然后出现的窗口就会告诉你等待授权令牌下载完成,这个时候就能最小化浏览器了。当你的终端提示像图 A 一样的内容,你就能知道令牌下载完了,并且你已经可以挂载 Google Drive 了。 - -**图 A** - -![](http://tr4.cbsistatic.com/hub/i/r/2016/05/18/a493122b-445f-4aca-8974-5ec41192eede/resize/620x/6ae5907ad2c08dc7620b7afaaa9e389c/googledriveocamlfuse3.png) ->图片来源: Jack Wallen - -**应用已经得到授权,你可以进行后面的工作。** - -### 挂载 Google Drive - -在挂载 Google Drive 之前,你得先创建一个文件夹,作为挂载点。在终端里,敲下`mkdir ~/google-drive`命令在你的家目录下创建一个新的文件夹。最后敲下命令`google-drive-ocamlfuse ~/google-drive`将你的 Google Drive 挂载到 google-drive 文件夹中。 - -这时你可以查看本地 google-drive 文件夹中包含的 Google Drive 文件/文件夹。你能够把 Google Drive 当作本地文件系统来进行工作。 - -当你想 卸载 google-drive 文件夹,输入命令 `fusermount -u ~/google-drive`。 - -### 没有 GUI,但它特别好用 - -我发现这个特别的系统非常容易使用,在同步 Google Drive 时它出奇的快,并且这可以作为一种巧妙的方式备份你的 Google Drive 账户。 - -试试 google-drive-ocamlfuse,看看你能用它做出什么有趣的事。 - --------------------------------------------------------------------------------- - -via: http://www.techrepublic.com/article/how-to-mount-your-google-drive-on-linux-with-google-drive-ocamlfuse/ - -作者:[Jack Wallen ][a] -译者:[GitFuture](https://github.com/GitFuture) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.techrepublic.com/search/?a=jack+wallen From f5170c812cc733e5c51a3dbf92c7d62f21125f99 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Fri, 22 Jul 2016 14:46:04 +0800 Subject: [PATCH 201/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...inux desktop 
and IoT software distribution.md | 113 ---------------- ...inux desktop and IoT software distribution.md | 121 ++++++++++++++++++ 2 files changed, 121 insertions(+), 113 deletions(-) delete mode 100644 sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md create mode 100644 translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md diff --git a/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md b/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md deleted file mode 100644 index 667c49b056..0000000000 --- a/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md +++ /dev/null @@ -1,113 +0,0 @@ -vim-kakali translating - - -Ubuntu Snap takes charge of Linux desktop and IoT software distribution -=========================================================================== - -[Canonical][28] and [Ubuntu][29] founder Mark Shuttleworth said in an interview that he hadn't planned on an announcement about Ubuntu's new [Snap app package format][30]. But then in a matter of a few months, developers from multiple Linux distributions and companies announced they would use Snap as a universal Linux package format. - -![](http://zdnet2.cbsistatic.com/hub/i/r/2016/06/14/a9b2a139-3cd4-41bf-8e10-180cb9450134/resize/770xauto/adc7d16a46167565399ecdb027dd1416/ubuntu-snap.jpg) ->Linux distributors, ISVs, and companies are all adopting Ubuntu Snap to distribute and update programs across all Linux varieties. - -Why? Because Snap enables a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device. 
According to Olli Ries, head of Canonical's Ubuntu client platform products and releases: - ->The [security mechanisms in Snap packages][1] allow us to open up the platform for much faster iteration across all our flavors as Snap applications are isolated from the rest of the system. Users can install a Snap without having to worry whether it will have an impact on their other apps or their system. - -Of course, as Matthew Garrett, a former Linux kernel developer and CoreOS security developer, has pointed out: If you [use Snap with an insecure program, such as the X11][2] window system, you don't actually gain any security. - -Shuttleworth agrees with Garrett but points out that you can control how Snap applications interact with the rest of this system. So, for example, a web browser can be contained within a secure Snap, which uses the Ubuntu packaged [openssl][3] Transport Layer Security (TLS) and Secure Sockets Layer (SSL) library. In addition, even if something does break into the browser instance, it still can't get to the underlying operating system. - -Many companies agree. [Dell][4], [Samsung][5], [Mozilla][6], [Krita][7], [Mycroft][8], and [Horizon Computing][9] are adopting Snap. [Arch Linux][10], [Debian][11], [Gentoo][12], and [OpenWrt][13] developers have also embraced Snaps and are adding it to their Linux distributions - -Snap packages, aka "Snaps", now work natively on Arch, Debian, Fedora, Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Unity, and Xubuntu. Snap is being validated on CentOS, Elementary, Gentoo, Mint, OpenSUSE, and Red Hat Enterprise Linux (RHEL), and are easy to enable on other Linux distributions. - -These distributions are adopting Snaps, Shuttleworth explained, because "Snaps bring those apps to every Linux desktop, server, device or cloud machine, giving users freedom to choose any Linux distribution while retaining access to the best apps." 
- -Taken together these distributions represent the vast majority of common Linux desktop, server and cloud distributions. Why would they switch from their existing package management systems? "One nice feature of Snaps is support for edge and beta channels, which allow users to opt-in to the pre-release developer versions of software or stick with the latest stable versions." explained Tim Jester-Pfadt, an Arch Linux contributor. - -In addition to the Linux distributors, independent software vendors (ISVs) are embracing Snap since it greatly simplifies third-party Linux app distribution and security maintenance. For example, [The Document Foundation][14] will be making the popular open-source office suite [LibreOffice][15] available as a Snap. - -Thorsten Behrens, co-founder of The Document Foundation explained: - ->Our objective is to make LibreOffice easily available to as many users as possible. Snaps enable our users to get the freshest LibreOffice releases across different desktops and distributions quickly, easily and consistently. As a bonus, it should help our release engineers to eventually move away from bespoke, home-grown and ancient Linux build solutions, towards something that is collectively maintained. - -In a statement, Nick Nguyen, Mozilla's [Firefox][16] VP, added: - ->We strive to offer users a great experience and make Firefox available across many platforms, devices and operating systems. With the introduction of Snaps, continually optimizing Firefox will become possible, providing Linux users the most up-to-date features. - -Boudewijn Rempt, project lead at the [Krita Foundation][17], a KDE-based graphics program, said: - ->Maintaining DEB packages in a private repository was complex and time consuming, snaps are much easier to maintain, package and distribute. Putting the snap in the store was particularly simple, this is the most streamlined app store I have published software in. 
[Krita 3.0][18] has just been released as a snap which will be updated automatically as newer versions become available. - -It's not just Linux desktop programmers who are excited by Snap. Internet of Things (IoT) and embedded developers are also grabbing on to Snap with both hands. - -Because Snaps are isolated from one another to help with data security, and can be updated or rolled back automatically, they are ideal for devices. Multiple vendors have launched snappy IoT devices, enabling a new class of "smart edge" device with IoT app store. Snappy devices receive automatic updates for the base OS, together with updates to the apps installed on the device. - -Dell, which according to Shuttleworth was one of the first IoT vendors to see the power of Snap, will be using Snap in its devices. - -"We believe Snaps address the security risks and manageability challenges associated with deploying and running multiple third party applications on a single IoT Gateway," said Jason Shepherd, Dell's Director of IoT Strategy and Partnerships. "This trusted and universal app format is essential for Dell, our IoT Solutions Partners and commercial customers to build a scalable, IT-ready, and vibrant ecosystem of IoT applications." - -It's simple, explained OpenWrt developer Matteo Croce. "Snaps deliver new applications to OpenWrt while leaving the core OS unchanged.... Snaps are a faster way to deliver a wider range of software to supported OpenWrt access points and routers." - -Shuttleworth doesn't see Snaps replacing existing Linux package systems such as [RPM][19] and [DEB][20]. Instead he sees it as being complementary to them. Snaps will sit alongside the native package. Each distribution has its own mechanisms to provide and update the core operating system and its updates. 
What Snap brings to the table is universal apps that cannot interfere with the base operating system - -Each Snap is confined using a range of kernel isolation and security mechanisms, tailored to the Snap application's needs. A careful review process ensures that snaps only receive the permissions they require to operate. Users will not have to make complex security decisions when installing the snap. - -Since Snaps are essentially self-contained zip files that can be quickly executed in place, "Snaps are much easier to create than traditional Linux packages, and allow us to evolve dependencies independent of the base operating system, so we can easily provide the very best and latest Chinese Linux apps to users across all distributions," explained Jack Yu, leader of the popular [Chinese Ubuntu Kylin][21] team. - -The snap format, designed by Canonical, is handled by [snapd][22]. Its development work is done on [GitHub][23]. Porting snapd to a wide range of Linux distributions has proven straightforward, and the community has grown to include contributors from a wide range of Linux backgrounds. - -Snap packages are created with the snapcrafttool. The home of the project is [snapcraft.io][24], which includes a tour and step-by-step guides to Snap creation, along with documentation for users and contributors to the project. Snaps can be built from existing distribution packages, but are more commonly built from source for optimization and size efficiency. - -Unless you're an Ubuntu power-user or serious Linux developer you may not have heard of Snap. In the future, anyone who does work with Linux on any platform will know the program. It's well on its way to becoming a major -- perhaps the most important of all -- Linux application installation and upgrade mechanism. 
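A snapcraft recipe is a single YAML file; for illustration, a minimal sketch of one (the `hello` app, its command path, and the field values are hypothetical, following 2016-era snapcraft conventions rather than anything taken from this article):

```shell
# Write a minimal, illustrative snapcraft.yaml (the "hello" app is hypothetical):
cat > snapcraft.yaml <<'EOF'
name: hello
version: "1.0"
summary: A minimal example snap
description: Shows the basic layout of a snapcraft recipe.
confinement: strict

apps:
  hello:
    command: bin/hello

parts:
  hello:
    plugin: autotools
    source: .
EOF

# Running `snapcraft` in this directory would then build the .snap package;
# here we just confirm the confinement field was written:
grep '^confinement:' snapcraft.yaml
```

The `confinement: strict` field is what enables the isolation discussed above; a snap declared `strict` only receives the interfaces it is granted.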
- -#### Related Stories: - -- [Linux expert Matthew Garrett: Ubuntu 16.04's new Snap format is a security risk][25] -- [Ubuntu Linux 16.04 is here][26] -- [Microsoft and Canonical partner to bring Ubuntu to Windows 10][27] - - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/ubuntu-snap-takes-charge-of-linux-desktop-and-iot-software-distribution/ - -作者:[Steven J. Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[28]: http://www.canonical.com/ -[29]: http://www.ubuntu.com/ -[30]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ -[1]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ -[2]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ -[3]: https://www.openssl.org/ -[4]: http://www.dell.com/en-us/ -[5]: http://www.samsung.com/us/ -[6]: http://www.mozilla.com/ -[7]: https://krita.org/en/ -[8]: https://mycroft.ai/ -[9]: http://www.horizon-computing.com/ -[10]: https://www.archlinux.org/ -[11]: https://www.debian.org/ -[12]: https://www.gentoo.org/ -[13]: https://openwrt.org/ -[14]: https://www.documentfoundation.org/ -[15]: https://www.libreoffice.org/download/libreoffice-fresh/ -[16]: https://www.mozilla.org/en-US/firefox/new/ -[17]: https://krita.org/en/about/krita-foundation/ -[18]: https://krita.org/en/item/krita-3-0-released/ -[19]: http://rpm5.org/ -[20]: https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html -[21]: http://www.ubuntu.com/desktop/ubuntu-kylin -[22]: https://launchpad.net/ubuntu/+source/snapd -[23]: https://github.com/snapcore/snapd -[24]: http://snapcraft.io/ -[25]: 
http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ -[26]: http://www.zdnet.com/article/ubuntu-linux-16-04-is-here/ -[27]: http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/ - - diff --git a/translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md b/translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md new file mode 100644 index 0000000000..f7b3e5590c --- /dev/null +++ b/translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md @@ -0,0 +1,121 @@ + + +Ubuntu Snap 软件包接管 Linux 桌面和 IoT 的软件分发 +=========================================================================== + + +[Canonical][28] 和 [Ubuntu][29] 创始人 Mark Shuttleworth 在一次采访中说,他原本没有打算就 Ubuntu 的新 [Snap 程序包格式][30] 作什么发布。但是 +在之后短短几个月内,多个 Linux 发行版和公司的开发者都宣布他们会把 Snap 作为通用的 Linux 程序包格式。 +![](http://zdnet2.cbsistatic.com/hub/i/r/2016/06/14/a9b2a139-3cd4-41bf-8e10-180cb9450134/resize/770xauto/adc7d16a46167565399ecdb027dd1416/ubuntu-snap.jpg) +>Linux 发行版、独立软件开发商(ISV:Independent Software Vendor)和公司都采用 Ubuntu Snap 在各种 Linux 系统上分发和更新程序。 + +为什么呢?因为 Snap 能让同一个二进制程序包完美、安全地在任何 Linux 台式机、服务器、云或设备上运行。Canonical 的 Ubuntu 客户端平台产品和发布负责人 Olli Ries 说: + + +>[Snap 程序包的安全机制][1] 让我们可以放开平台,在各种版本上更快地迭代,因为 Snap 应用是与系统的其余部分相互隔离的。用户安装一个 Snap 的时候,也不用担心它是否会影响其他的应用程序或操作系统。 + + +当然了,前 Linux 内核开发者、CoreOS【译注:CoreOS是一种操作系统,于2013年十二月发布,它的设计旨在关注开源操作系统内核的新兴使用——用于大量基于云计算的虚拟服务器。】安全开发者 Matthew Garrett 指出:如果你 [将 Snap 用在 X11][2] 视窗系统【译注:X11也叫做 X Window 系统,X Window 系统 ( X11 或 X )是一种位图显示的视窗系统 。它是在 Unix 和类 Unix 操作系统 ,以及 OpenVMS 上建立图形用户界面的标准工具包和协议,并可用于几乎所有已有的现代操作系统。】这样不安全的程序上,实际上你并不能得到额外的安全性。 + + +Shuttleworth 同意 Garrett 的观点,但是他也指出,你可以控制 Snap 应用如何与系统的其余部分交互。比如,一个 web 浏览器可以装在一个安全的 Snap 程序包里,这个 Snap 使用 Ubuntu 打包的 [openssl][3]【译注:OpenSSL 是一个强大的安全套接字层密码库,囊括主要的密码算法、常用的密钥和证书封装管理功能及 SSL 协议,并提供丰富的应用程序供测试或其它目的使用。】安全传输层(TLS,Transport Layer 
Security)和安全套接字层(SSL,Secure Sockets Layer)库。除此之外,即使有程序攻破了浏览器实例,它依然无法触及底层的操作系统。 + +很多公司也这样认为。[戴尔][4]、[三星][5]、[Mozilla][6]、[Krita][7]【译注:Krita 是一个位图形编辑软件,KOffice 套装的一部分,包含一个绘画程序和照片编辑器。Krita 是自由软件,并根据 GNU 通用公共许可证发布。】、[Mycroft][8]【译注:Mycroft 是一个开源 AI 智能家居平台,配置 Raspberry Pi 2 和 Arduino 控制器,应该就是以夏洛克福尔摩斯的哥哥为名的。】以及 [地平线计算][9]【译注:地平线计算解决方案,为客户提供优质的硬件架构为其运行云平台。】都将使用 Snap。[Arch Linux][10]、[Debian][11]、[Gentoo][12] 和 [OpenWrt][13]【译注:OpenWrt 可以被描述为一个嵌入式的 Linux 发行版】的开发团队也已经拥抱了 Snap,会把 Snap 加入到他们各自的发行版中。 + +Snap 包又叫做“Snaps”,现在可以在 Arch、Debian、Fedora、Kubuntu、Lubuntu、Ubuntu GNOME、Ubuntu Kylin、Ubuntu MATE、Ubuntu Unity 和 Xubuntu 上原生运行。Snap 正在 CentOS、Elementary、Gentoo、Mint、OpenSUSE 和 Red Hat Enterprise Linux (RHEL) 上予以验证,也很容易在其他 Linux 发行版上启用。 + +这些发行版都采用 Snaps,Shuttleworth 解释说:“Snaps 把这些应用程序带到了每台 Linux 桌面、服务器、设备和云主机上,让用户可以自由选择任何 Linux 发行版,同时仍能获得最好的应用程序。” + +把这些发行版放在一起,就涵盖了绝大多数主流的 Linux 桌面、服务器和云发行版。它们为什么要从现有的包管理系统转向 Snap 呢?Arch Linux 的贡献者 Tim Jester-Pfadt 解释说:“Snaps 的一个优点是支持 edge 和 beta 通道,用户可以选择尝试预发布的开发者版本,也可以一直使用最新的稳定版本。” +除了这些 Linux 发行版,独立软件开发商(ISV,Independent Software Vendor)也因为 Snap 大大简化了第三方 Linux 应用程序的分发和安全维护而拥抱了它。例如,[文档基金会][14] 将会把广受欢迎的开源 office 套件 [LibreOffice][15] 制作成 Snap 程序包发布。 + +文档基金会的联合创始人 Thorsten Behrens 这样说: + +>我们的目标是让尽可能多的人能方便地用上 LibreOffice。Snaps 使我们的用户能够在不同的桌面系统和发行版上更快捷、更容易并且一致地获取最新的 LibreOffice 版本。此外,它也能帮助我们的发布工程师最终摆脱量身定制的、自产的陈旧 Linux 构建方案,转向由大家共同维护的方案。 + +Mozilla 的 [火狐][16] 副总裁(VP,Vice President)Nick Nguyen 在一份声明中提到: + +>我们力求为用户提供良好的使用体验,并且使火狐浏览器能够在更多平台、设备和操作系统上运行。随着 Snaps 的引入,对火狐浏览器的持续优化成为可能,能够为 Linux 用户提供最新的特性。 + +基于 KDE 的图形程序 [Krita Foundation][17] 的项目负责人 Boudewijn Rempt 说: + + +>在私有仓库中维护 DEB 包既复杂又耗费时间。Snaps 更容易维护、打包和分发。把 Snap 放进软件商店也特别容易,这是我发布过软件的商店中最顺畅的一个。[Krita 3.0][18] 刚刚作为一个 snap 程序包发行,有新版本时它会自动更新。 + +不仅是 Linux 桌面程序的开发者对 Snap 感到兴奋。物联网(IoT)和嵌入式开发者也双手拥抱了 Snap。 + + +Snaps 彼此隔离,以保障数据安全,它们还可以自动更新或回滚,这对于硬件设备是极好的。多家厂商都已推出了运行 snappy 的物联网设备【译注:snappy 指使用 Snap 包的 Ubuntu Core 系统。】,这是一类带有物联网应用商店的新型“智能边缘”设备。Snappy 设备能够自动接收基础系统的更新,连同安装在设备上的应用程序也会一起得到更新。 + +据 Shuttleworth 说,戴尔是最早看到 Snap 巨大潜力的物联网厂商之一,它决定在自己的设备上使用 Snap。 + 
+戴尔公司的物联网战略和合作伙伴主管 Jason Shepherd 说:“我们认为,Snaps 能够解决在单一物联网网关上部署和运行多个第三方应用程序所带来的安全风险和可管理性挑战。这种可信赖的、通用的应用程序格式,正是戴尔、我们的物联网解决方案合作伙伴和商业客户构建一个可扩展的、IT 就绪的、充满活力的物联网应用生态系统所必需的。” + + +OpenWrt 的开发者 Matteo Croce 说:“这很简单,Snaps 可以在保持核心操作系统不变的情况下向 OpenWrt 交付新的应用程序……Snaps 是向受支持的 OpenWrt 接入点和路由器交付更多软件的更快方式。” + +Shuttleworth 并不认为 Snaps 会取代 [RPM][19] 和 [DEB][20] 这些现有的 Linux 程序包格式。相反,他认为二者将会相辅相成,Snaps 将会与原生程序包共存。每个发行版都有自己提供和更新核心操作系统的机制,Snap 带来的是不会干扰基础操作系统的通用应用程序。 + +每个 Snap 都通过一系列为其需求量身定制的内核隔离和安全机制加以限制。谨慎的审查过程确保 Snap 仅仅获得完成其操作所需的权限。这样,用户在安装 Snap 的时候就不必考虑复杂的安全问题。 + +Snap 实际上是自包含的 zip 文件,能够迅速地就地执行。广受欢迎的 [中标麒麟][21] 团队的负责人 Jack Yu 说:“Snaps 比传统的 Linux 包更容易构建,还可以让依赖项独立于基础操作系统演进,所以我们可以轻松地为所有发行版的用户提供最好、最新的国产 Linux 应用程序。” + +Snap 程序包格式由 Canonical 设计,由 [snapd][22] 处理,其开发工作在 [GitHub][23] 上进行。将 snapd 移植到众多 Linux 发行版的工作已被证明是简单直接的,社区也吸纳了来自各种 Linux 背景的大量贡献者。 + + +Snap 程序包使用 snapcraft 工具来构建。项目主页是 [snapcraft.io][24] 网站,上面有构建 Snap 的导览和逐步指南,以及面向项目用户和贡献者的文档。Snap 可以基于现有的发行版程序包构建,但为了优化和减小体积,更常使用源代码来构建。 + +如果你不是 Ubuntu 的重度用户或者资深的 Linux 开发者,你可能还没有听说过 Snap。未来,在任何平台上用 Linux 完成工作的人都会知道这个软件。它正走在成为一种主流的,也许是最重要的 Linux 应用程序安装和更新机制的路上。 + + +#### 相关内容: + + +- [Linux 专家 Matthew Garrett:Ubuntu 16.04 的新 Snap 程序包格式存在安全风险][25] +- [Ubuntu Linux 16.04 发布][26] +- [Microsoft 和 Canonical 合作使 Ubuntu 可以在 Windows 10 上运行][27] + + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/ubuntu-snap-takes-charge-of-linux-desktop-and-iot-software-distribution/ + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[28]: http://www.canonical.com/ +[29]: http://www.ubuntu.com/ +[30]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ +[1]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ +[2]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ +[3]: https://www.openssl.org/ +[4]: http://www.dell.com/en-us/ +[5]: http://www.samsung.com/us/ +[6]: http://www.mozilla.com/ +[7]: https://krita.org/en/ +[8]: https://mycroft.ai/ +[9]: http://www.horizon-computing.com/ +[10]: https://www.archlinux.org/ +[11]: https://www.debian.org/ +[12]: https://www.gentoo.org/ +[13]: https://openwrt.org/ +[14]: https://www.documentfoundation.org/ +[15]: https://www.libreoffice.org/download/libreoffice-fresh/ +[16]: https://www.mozilla.org/en-US/firefox/new/ +[17]: https://krita.org/en/about/krita-foundation/ +[18]: https://krita.org/en/item/krita-3-0-released/ +[19]: http://rpm5.org/ +[20]: https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html +[21]: http://www.ubuntu.com/desktop/ubuntu-kylin +[22]: https://launchpad.net/ubuntu/+source/snapd +[23]: https://github.com/snapcore/snapd +[24]: http://snapcraft.io/ +[25]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ +[26]: http://www.zdnet.com/article/ubuntu-linux-16-04-is-here/ +[27]: http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/ + + From 6bbfb73ec6235eca3178c16f3669f9af119fa58c Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Fri, 22 Jul 2016 22:54:05 +0800 Subject: [PATCH 202/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160718 Creating your first Git repository.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/sources/tech/20160718 Creating your first Git repository.md b/sources/tech/20160718 Creating your first Git repository.md index 0a9df8740b..4ec9b0aace 100644 --- a/sources/tech/20160718 Creating your first Git repository.md +++ b/sources/tech/20160718 Creating your first Git repository.md @@ -1,3 +1,7 @@ +vim-kakali translating + + + Creating your first Git repository ====================================== From 2e2d0c993d48c44d981d23190a8e3f1b9ca82658 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Sat, 23 Jul 2016 11:40:01 +0800 Subject: [PATCH 203/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=2020160711=20Getting?= =?UTF-8?q?=20started=20with=20Git.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160711 Getting started with Git.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160711 Getting started with Git.md b/sources/tech/20160711 Getting started with Git.md index 032e5e3510..b748a160da 100644 --- a/sources/tech/20160711 Getting started with Git.md +++ b/sources/tech/20160711 Getting started with Git.md @@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + Getting started with Git ========================= From 1e15ffd9c20c00ed666b396f8a1153d1ef0014e0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 23 Jul 2016 18:50:28 +0800 Subject: [PATCH 204/471] PUB:20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS @MikeCoder --- ...ork on Ubuntu Linux 14.04 and 16.04 LTS.md | 48 +++++++++---------- 1 file changed, 24 insertions(+), 24 deletions(-) rename {translated/tech => published}/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md (67%) diff --git a/translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md b/published/20160701 
How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md similarity index 67% rename from translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md rename to published/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md index ed2f37235a..61fba7be98 100644 --- a/translated/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md +++ b/published/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md @@ -1,23 +1,21 @@ -如何在 Ubuntu 14.04 和 16.04 上建立网桥(br0) +如何在 Ubuntu 上建立网桥 ======================================================================= > 作为一个 Ubuntu 16.04 LTS 的初学者。如何在 Ubuntu 14.04 和 16.04 的主机上建立网桥呢? -![](http://s0.cyberciti.org/images/category/old/ubuntu-logo.jpg) +顾名思义,网桥的作用是通过物理接口连接内部和外部网络。对于虚拟端口或者 LXC/KVM/Xen/容器来说,这非常有用。网桥虚拟端口看起来是网络上的一个常规设备。在这个教程中,我将会介绍如何在 Ubuntu 服务器上通过 bridge-utils (brctl) 命令行来配置 Linux 网桥。 -顾名思义,网桥的作用是通过物理接口连接内部和外部网络。对于虚拟端口或者 LXC/KVM/Xen/容器来说,这非常有用。通过网桥虚拟端口看起来是网络上的一个常规设备。在这个教程中,我将会介绍如何在 Ubuntu 服务器上通过 bridge-utils(brctl) 命令行来配置 Linux 网桥。 - -### 网桥网络示例 +### 网桥化的网络示例 ![](http://s0.cyberciti.org/uploads/faq/2016/07/my-br0-br1-setup.jpg) ->Fig.01: Kvm/Xen/LXC 容器网桥实例 (br0) -In this example eth0 and eth1 is the physical network interface. eth0 connected to the LAN and eth1 is attached to the upstream ISP router/Internet. 
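文中的网桥最终要写成 /etc/network/interfaces 里的配置段;下面是一个生成这种配置段的小 POSIX shell 函数示意(`bridge_gen` 这个函数名及其参数是为演示而假设的,并非文中提到的工具;地址取自文中 br0 的例子):

```shell
# 生成 /etc/network/interfaces 风格的网桥配置段
# (示意代码:bridge_gen 是为演示而假设的函数名,并非系统自带命令)
bridge_gen() {  # 用法: bridge_gen <网桥名> <物理接口> <IP 地址> <子网掩码>
    printf 'auto %s\n' "$1"
    printf 'iface %s inet static\n' "$1"
    printf '    address %s\n' "$3"
    printf '    netmask %s\n' "$4"
    printf '    bridge_ports %s\n' "$2"
    printf '    bridge_stp off\n'
    printf '    bridge_fd 0\n'
    printf '    bridge_maxwait 0\n'
}

# 按文中 br0 的参数生成配置(桥接 eth0):
bridge_gen br0 eth0 10.18.44.26 255.255.255.192
```

实际使用时,把输出追加到 /etc/network/interfaces(修改前先备份)并重启 networking 服务;网桥名、接口和地址请换成你自己环境中的值。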
-在这个例子中,eth0 和 eth1 是物理网络接口。eth0 连接着局域网,eth1 连接着上游路由器/网络。 +*图 01: Kvm/Xen/LXC 容器网桥示例 (br0)* + +在这个例子中,eth0 和 eth1 是物理网络接口。eth0 连接着局域网,eth1 连接着上游路由器和互联网。 ### 安装 bridge-utils -使用[apt-get 命令][1] 安装 bridge-utils: +使用 [apt-get 命令][1] 安装 bridge-utils: ``` $ sudo apt-get install bridge-utils @@ -32,7 +30,8 @@ $ sudo apt install bridge-utils 样例输出: ![](http://s0.cyberciti.org/uploads/faq/2016/07/ubuntu-install-bridge-utils.jpg) ->Fig.02: Ubuntu 安装 bridge-utils 包 + +*图 02: Ubuntu 安装 bridge-utils 包* ### 在 Ubuntu 服务器上创建网桥 @@ -43,10 +42,10 @@ $ sudo cp /etc/network/interfaces /etc/network/interfaces.bakup-1-july-2016 $ sudo vi /etc/network/interfaces ``` -接下来设置 eth1 并且将他绑定到 br1 ,输入(删除或者注释所有 eth1 相关配置): +接下来设置 eth1 并且将它映射到 br1 ,输入如下(删除或者注释所有 eth1 相关配置): ``` -# br1 setup with static wan IPv4 with ISP router as gateway +# br1 使用静态公网 IP 地址,并以 ISP 的路由器作为网关 auto br1 iface br1 inet static address 208.43.222.51 @@ -60,7 +59,7 @@ iface br1 inet static bridge_maxwait 0 ``` -接下来设置 eth0 并将它绑定到 br0,输入(删除或者注释所有 eth1 相关配置): +接下来设置 eth0 并将它映射到 br0,输入如下(删除或者注释所有 eth0 相关配置): ``` auto br0 @@ -70,8 +69,8 @@ iface br0 inet static broadcast 10.18.44.63 dns-nameservers 10.0.80.11 10.0.80.12 # set static route for LAN - post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1 - post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1 + post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1 + post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1 bridge_ports eth0 bridge_stp off bridge_fd 0 @@ -80,7 +79,7 @@ iface br0 inet static ### 关于 br0 和 DHCP 的一点说明 -DHCP 的配置选项: +如果使用 DHCP ,配置选项是这样的: ``` auto br0 @@ -95,7 +94,7 @@ iface br0 inet dhcp ### 重启服务器或者网络服务 -你需要重启服务器或者输入下列命令来重启网络服务(在 SSH 登陆的会话中这可能不管用): +你需要重启服务器或者输入下列命令来重启网络服务(在 SSH 登录的会话中这可能不管用): ``` $ sudo systemctl restart networking @@ -111,20 +110,21 @@ $ sudo /etc/init.d/restart networking 使用 ping/ip 命令来验证 LAN 和 WAN 网络接口运行正常: ``` -# See br0 and br1 +# 查看 br0 和 br1 ip a show -# See routing info +# 查看路由信息 ip r -# 
ping public site +# ping 外部站点 ping -c 2 cyberciti.biz -# ping lan server +# ping 局域网服务器 ping -c 2 10.0.80.12 ``` 样例输出: ![](http://s0.cyberciti.org/uploads/faq/2016/07/br0-br1-eth0-eth1-configured-on-ubuntu.jpg) ->Fig.03: 验证网桥的以太网连接 + +*图 03: 验证网桥的以太网连接* 现在,你就可以配置 br0 和 br1 来让 XEN/KVM/LXC 容器访问因特网或者私有局域网了。再也没有必要去设置特定路由或者 iptables 的 SNAT 规则了。 @@ -134,8 +134,8 @@ ping -c 2 10.0.80.12 via: http://www.cyberciti.biz/faq/how-to-create-bridge-interface-ubuntu-linux/ 作者:[VIVEK GITE][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5a5b861a035bf1d4485c8d2c068bfbe82bf5f44f Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 23 Jul 2016 22:30:01 +0800 Subject: [PATCH 205/471] =?UTF-8?q?20160723-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... GTK 3 Tiling Terminal Emulator for Linux.md | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md diff --git a/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md new file mode 100644 index 0000000000..f593b5f119 --- /dev/null +++ b/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md @@ -0,0 +1,131 @@ +Terminix – A New GTK 3 Tiling Terminal Emulator for Linux +============================================================ + +There are [multiple terminal emulators][1] you can find on the Linux platform today, with each of them offering users some remarkable features. + +But sometimes, we find it difficult to choose which terminal emulator to work with, depending on our preferences. 
In this overview, we shall cover one exciting terminal emulator for Linux called Terminix. + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Emulator-for-Linux.png) +>Terminix Terminal Emulator for Linux + +Terminix is a tiling terminal emulator that uses VTE GTK+ 3 widget. It is developed using GTK 3 with aims of conforming to GNOME HIG (Human Interface Guidelines). Additionally, this application has been tested on GNOME and Unity desktops, although users have also tested it successfully on various other Linux desktops environments. + +Just like the rest of Linux terminal emulators, Terminix comes with some illustrious features and these include: + +- Enables users to layout terminals in any style by splitting them vertically or horizontally +- Supports drag and drop functionality to re-arrange terminals +- Supports detaching of terminals from windows using drag and drop +- Supports input synchronization between terminals, therefore commands typed in one terminal can be reproduced in another +- Terminal grouping can be saved and loaded from disk +- Supports transparent backgrounds +- Allows use of background images +- Supports automatic profile switches based on hostname and directory +- Also supports notification for out of view process completion +- Color schemes stored in files and new files can be created for custom color schemes + +### How to Install Terminix on Linux Systems + +Let us now uncover the steps you can follow to install Terminix on the various Linux distributions, but before we move any further, we have to list the various requirements for Terminix to work on Linux. + +#### Dependencies + +To work very well, the application requires the following libraries: + +- GTK 3.14 and above +- GTK VTE 0.42 and above +- Dconf +- GSettings +- Nautilus-Python for Nautilus integration + +If you have all the above requirements on your system, then proceed to install Terminix as follows. 
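Before proceeding, you can sanity-check the minimums listed above from a script; a minimal POSIX sh sketch (`ver_ge` is a helper written for this example, and the pkg-config module names `gtk+-3.0` and `vte-2.91` are the usual ones but should be verified on your distribution):

```shell
# Rough pre-flight check for the minimums listed above (GTK >= 3.14, VTE >= 0.42),
# written in plain POSIX sh. On a real system, replace the stand-in values with
# `pkg-config --modversion gtk+-3.0` and `pkg-config --modversion vte-2.91`
# (module names assumed; verify them on your distribution).

ver_ge() {  # succeeds when $1 >= $2, comparing dotted version numbers
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | tail -n1)" = "$1" ]
}

gtk_ver="3.18.9"   # stand-in for: pkg-config --modversion gtk+-3.0
vte_ver="0.42.5"   # stand-in for: pkg-config --modversion vte-2.91

ver_ge "$gtk_ver" "3.14" && echo "GTK OK ($gtk_ver)"
ver_ge "$vte_ver" "0.42" && echo "VTE OK ($vte_ver)"
```

If either check fails, upgrade the library from your distribution's repositories before attempting the install steps below.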
+ +#### On RHEL/CentOS 7 and Fedora 22-24 + +First, you need to add the package repository by creating a file `/etc/yum.repos.d/terminix.repo` using your favorite text editor as follows. + +``` +# vi /etc/yum.repos.d/terminix.repo +``` + +Then copy and paste the text below into the file above: + +``` +[heikoada-terminix] +name=Copr repo for terminix owned by heikoada +baseurl=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/fedora-$releasever-$basearch/ +skip_if_unavailable=True +gpgcheck=1 +gpgkey=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/pubkey.gpg +enabled=1 +enabled_metadata=1 +``` + +Save the file and exit. + +Then update your system and install Terminix as shown: + +``` +---------------- On RHEL/CentOS 7 ---------------- +# yum update +# yum install terminix + +---------------- On Fedora 22-24 ---------------- +# dnf update +# dnf install terminix +``` + +#### On Ubuntu 16.04-14.04 and Linux Mint 18-17 + +There is no official package for Debian/Ubuntu based distributions, but you can install it manually using the commands below: + +``` +$ wget -c https://github.com/gnunn1/terminix/releases/download/1.1.1/terminix.zip +$ sudo unzip terminix.zip -d / +$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas/ +``` + +OpenSUSE users can install Terminix from the default repository and Arch Linux users can install the [AUR Terminix package][2]. + +### Terminix Screenshot Tour + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal.png) +>Terminix Terminal + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Settings.png) +>Terminix Terminal Settings + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Tabs.png) +>Terminix Multiple Terminal Tabs + +### How to Uninstall or Remove Terminix + +In case you installed it manually and want to remove it, then you can follow the steps below to uninstall it. 
Download the uninstall.sh script from the GitHub repository, make it executable and then run it: + +``` +$ wget -c https://raw.githubusercontent.com/gnunn1/terminix/master/uninstall.sh +$ chmod +x uninstall.sh +$ sudo sh uninstall.sh +``` + +But if you installed it using a package manager, then you can use the package manager to uninstall it. + +Visit the [Terminix GitHub][3] repository. + +In this overview, we have looked at an important Linux terminal emulator that is just an alternative to the multiple terminal emulators out there. Having installed it, you can try out the different features and also compare it with the rest that you have probably used. + +Importantly, for any questions or extra information that you have about Terminix, please use the comment section below and do not forget to also give us feedback about your experience with it. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/terminix-tiling-terminal-emulator-for-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/linux-terminal-emulators/ +[2]: https://aur.archlinux.org/packages/terminix +[3]: https://github.com/gnunn1/terminix From adfee21210e8968399b80c40899dc416653503a2 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Sun, 24 Jul 2016 13:01:52 +0800 Subject: [PATCH 206/471] =?UTF-8?q?sources/tech/20160722=20Terminix=20?= =?UTF-8?q?=E2=80=93=20A=20New=20GTK=203=20Tiling=20Terminal=20Emulator=20?= =?UTF-8?q?for=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git 
a/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md index f593b5f119..5ce9c05d75 100644 --- a/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md +++ b/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md @@ -1,3 +1,5 @@ +MikeCoder Translating + Terminix – A New GTK 3 Tiling Terminal Emulator for Linux ============================================================ From 92d1ac9dfb01f71c0ffe09e5814ffaf4cf817155 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Sun, 24 Jul 2016 14:15:01 +0800 Subject: [PATCH 207/471] =?UTF-8?q?translated/tech/20160722=20Terminix=20?= =?UTF-8?q?=E2=80=93=20A=20New=20GTK=203=20Tiling=20Terminal=20Emulator=20?= =?UTF-8?q?for=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... GTK 3 Tiling Terminal Emulator for Linux.md | 133 ------------------ ... GTK 3 Tiling Terminal Emulator for Linux.md | 133 ++++++++++++++++++ 2 files changed, 133 insertions(+), 133 deletions(-) delete mode 100644 sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md create mode 100644 translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md diff --git a/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md deleted file mode 100644 index 5ce9c05d75..0000000000 --- a/sources/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md +++ /dev/null @@ -1,133 +0,0 @@ -MikeCoder Translating - -Terminix – A New GTK 3 Tiling Terminal Emulator for Linux -============================================================ - -There are [multiple terminal emulators][1] you can find on the Linux platform today, with each of them offering users some remarkable features. 
- -But sometimes, we find it difficult to choose which terminal emulator to work with, depending on our preferences. In this overview, we shall cover one exciting terminal emulator for Linux called Terminix. - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Emulator-for-Linux.png) ->Terminix Terminal Emulator for Linux - -Terminix is a tiling terminal emulator that uses VTE GTK+ 3 widget. It is developed using GTK 3 with aims of conforming to GNOME HIG (Human Interface Guidelines). Additionally, this application has been tested on GNOME and Unity desktops, although users have also tested it successfully on various other Linux desktops environments. - -Just like the rest of Linux terminal emulators, Terminix comes with some illustrious features and these include: - -- Enables users to layout terminals in any style by splitting them vertically or horizontally -- Supports drag and drop functionality to re-arrange terminals -- Supports detaching of terminals from windows using drag and drop -- Supports input synchronization between terminals, therefore commands typed in one terminal can be reproduced in another -- Terminal grouping can be saved and loaded from disk -- Supports transparent backgrounds -- Allows use of background images -- Supports automatic profile switches based on hostname and directory -- Also supports notification for out of view process completion -- Color schemes stored in files and new files can be created for custom color schemes - -### How to Install Terminix on Linux Systems - -Let us now uncover the steps you can follow to install Terminix on the various Linux distributions, but before we move any further, we have to list the various requirements for Terminix to work on Linux. 
- -#### Dependencies - -To work very well, the application requires the following libraries: - -- GTK 3.14 and above -- GTK VTE 0.42 and above -- Dconf -- GSettings -- Nautilus-Python for Nautilus integration - -If you have all the above requirements on your system, then proceed to install Terminix as follows. - -#### On RHEL/CentOS 7 and Fedora 22-24 - -First, you need to add the package repository by creating a file `/etc/yum.repos.d/terminix.repo` using your favorite text editor as follows. - -``` -# vi /etc/yum.repos.d/terminix.repo -``` - -Then copy and paste the text below into the file above: - -``` -[heikoada-terminix] -name=Copr repo for terminix owned by heikoada -baseurl=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/fedora-$releasever-$basearch/ -skip_if_unavailable=True -gpgcheck=1 -gpgkey=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/pubkey.gpg -enabled=1 -enabled_metadata=1 -``` - -Save the file and exit. - -Then update your system and install Terminix as shown: - -``` ----------------- On RHEL/CentOS 7 ---------------- -# yum update -# yum install terminix - ----------------- On Fedora 22-24 ---------------- -# dnf update -# dnf install terminix -``` - -#### On Ubuntu 16.04-14.04 and Linux Mint 18-17 - -There is no official package for Debian/Ubuntu based distributions, but you can install it manually using the commands below: - -``` -$ wget -c https://github.com/gnunn1/terminix/releases/download/1.1.1/terminix.zip -$ sudo unzip terminix.zip -d / -$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas/ -``` - -OpenSUSE users can install Terminix from the default repository and Arch Linux users can install the [AUR Terminix package][2]. 
- -### Terminix Screenshot Tour - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal.png) ->Terminix Terminal - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Settings.png) ->Terminix Terminal Settings - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Tabs.png) ->Terminix Multiple Terminal Tabs - -### How to Uninstall or Remove Terminix - -In case you installed it manually and want to remove it, then you can follow the steps below to uninstall it. Download the uninstall.sh from Github repository, make it executable and then run it: - -``` -$ wget -c https://github.com/gnunn1/terminix/blob/master/uninstall.sh -$ chmod +x uninstall.sh -$ sudo sh uninstall.sh -``` - -But if you installed it using a package manager, then you can use the package manager to uninstall it. - -Visit the [Terminix Github][3] repository - -In this overview, we have looked at an important Linux terminal emulator that is just an alternative to the multiple terminal emulators out there. Having installed it you can try out the different features and also compare it with the rest that you have probably used. - -Importantly, for any questions or extra information that you have about Terminix, please use the comment section below and do not forget to also give us feedback about your experience with it. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/terminix-tiling-terminal-emulator-for-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/linux-terminal-emulators/ -[2]: https://aur.archlinux.org/packages/terminix -[3]: https://github.com/gnunn1/terminix diff --git a/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md new file mode 100644 index 0000000000..9aa106dfe0 --- /dev/null +++ b/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md @@ -0,0 +1,133 @@ +Terminix - 一个很赞的基于 GTK3 的 Linux 终端模拟器 +============================================================ + +现在,你可以很容易的找到[大量的 Linux 终端模拟器][1],每一个都可以给用户留下深刻的印象。 + +但是,很多时候,我们会很难根据我们的喜好来选择一款来作为日常终端模拟器。这篇文章中,我们将会推荐一款叫做 Terminix 的令人激动的终端模拟机。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Emulator-for-Linux.png) +>Terminix Linux 终端模拟器 + +Terminix 是一个使用 VTE GTK+ 3 插件的终端模拟器。使用 GTK 3 开发的原因主要是为了符合 GNOME HIG(Human Interface Guidelines) 标准。另外,Terminix 已经在 GNOME 和 Unity 桌面环境下完成了测试。同时,志愿者也在其他的 Linux 桌面环境下进行了测试。 + +和其他的终端模拟器一样,Terminix 有着很多著名的特点,列表如下: + +- 允许用户进行自定义的垂直或者水平分屏。 +- 支持功能性的拖拽来进行重新布局 +- 支持拖拽Tab 的方式,新建终端窗口 +- 支持终端之间的输入同步,因此,命令可以在一个终端输入,同时另一个终端进行展示 +- 终端的分组配置可以保存在硬盘 +- 支持透明背景 +- 允许使用背景图片 +- 支持通过主机名和目录来自动切换配置 +- 支持来自其他进程的通知信息 +- 配色模式使用文件存储,同时支持自定义配色方案 + +### 如何在 Linux 系统上安装 Terminix + +现在来详细说明一下在不同的 Linux 发行版本上安装 Terminix 的步骤。首先,在此列出 Terminix 在 Linux 所需要的环境需求。 + +#### 依赖组件 + +为了正常运行,该应用需要使用如下库: + +- GTK 3.14 或者以上版本 +- GTK VTE 0.42 或者以上版本 +- Dconf +- 
GSettings +- Nautilus 的 iNautilus-Python 插件 + +如果你已经满足了如上的系统 要求,接下来就是安装 Terminix 的步骤。 + +#### 在 RHEL/CentOS 7 或者 Fedora 22-24 上 + +首先,你需要将包仓库通过新建文件 `/etc/yum.repos.d/terminix.repo` 的方式,然后使用你最喜欢的文本编辑器来进行编辑。 + +``` +# vi /etc/yum.repos.d/terminix.repo +``` + +然后拷贝如下的文字,并且拷贝到我们刚新建的文件中: + +``` +[heikoada-terminix] +name=Copr repo for terminix owned by heikoada +baseurl=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/fedora-$releasever-$basearch/ +skip_if_unavailable=True +gpgcheck=1 +gpgkey=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/pubkey.gpg +enabled=1 +enabled_metadata=1 +``` + +保存文件并退出。 + +然后更新你的系统,并且安装 Terminix,步骤如下: + +``` +---------------- On RHEL/CentOS 7 ---------------- +# yum update +# yum install terminix + +---------------- On Fedora 22-24 ---------------- +# dnf update +# dnf install terminix +``` + +#### 在 Ubuntu 16.04-14.04 和 Linux Mint 18-17 + +虽然没有基于 Debian/Ubuntu 发行版本的官方的软件包,但是你依旧可以通过如下的命令手动安装。 + +``` +$ wget -c https://github.com/gnunn1/terminix/releases/download/1.1.1/terminix.zip +$ sudo unzip terminix.zip -d / +$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas/ +``` + +OpenSUSE users can install Terminix from the default repository and Arch Linux users can install the [AUR Terminix package][2]. +OpenSUSE 用户可以从默认的仓库中安装 Terminix,Arch Linux 用户也可以安装 [AUR Terminix 软件包][2]。 + +### Terminix 截图教程 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal.png) +>Terminix Terminal + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Settings.png) +>Terminix Terminal Settings + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Tabs.png) +>Terminix Multiple Terminal Tabs + +### 如何卸载删除 Terminix + +In case you installed it manually and want to remove it, then you can follow the steps below to uninstall it. 
Download the uninstall.sh from Github repository, make it executable and then run it: +如果你是手动安装的 Terminix 并且想要删除他,那么你可以参照如下的步骤来卸载它。从 Github 仓库上下载 uninstall.sh,并且给它可执行权限并且执行它: + +``` +$ wget -c https://github.com/gnunn1/terminix/blob/master/uninstall.sh +$ chmod +x uninstall.sh +$ sudo sh uninstall.sh +``` + +但是如果你是通过包管理器安装的 Terminix,你依旧可以使用包管理器来卸载它。 + +请浏览 [Terminix Github][3] 仓库 + +在这个总览中,我们在众多优秀的终端模拟器中了解了一个重要的 Linux 终端模拟器。你可以尝试着去体验下它的新特性,并且可以将它和你现在使用的终端进行比较。 + +重要的一点,如果你想得到关于 Terminix 的更多信息或者有疑问,请使用评论区,而且不要忘了,给我一个关于你使用体验的反馈。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/terminix-tiling-terminal-emulator-for-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/MikeCoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/linux-terminal-emulators/ +[2]: https://aur.archlinux.org/packages/terminix +[3]: https://github.com/gnunn1/terminix From 8e9f9135f5002b0e2abbfeca85b5330cb8c80d05 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Sun, 24 Jul 2016 14:15:18 +0800 Subject: [PATCH 208/471] =?UTF-8?q?translated/tech/20160722=20Terminix=20?= =?UTF-8?q?=E2=80=93=20A=20New=20GTK=203=20Tiling=20Terminal=20Emulator=20?= =?UTF-8?q?for=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md index 9aa106dfe0..2a74e62ca1 100644 --- a/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal 
Emulator for Linux.md +++ b/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md @@ -14,7 +14,7 @@ Terminix 是一个使用 VTE GTK+ 3 插件的终端模拟器。使用 GTK 3 开 - 允许用户进行自定义的垂直或者水平分屏。 - 支持功能性的拖拽来进行重新布局 -- 支持拖拽Tab 的方式,新建终端窗口 +- 支持拖拽 Tab 的方式,新建终端窗口 - 支持终端之间的输入同步,因此,命令可以在一个终端输入,同时另一个终端进行展示 - 终端的分组配置可以保存在硬盘 - 支持透明背景 From 82661ad8bd1578b735c8dc6d5b499f9df3e726cb Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 24 Jul 2016 15:02:56 +0800 Subject: [PATCH 209/471] PUB:20160519 The future of sharing integrating Pydio and ownCloud @martin2011qi --- ... sharing integrating Pydio and ownCloud.md | 65 +++++++++++++++++++ ... sharing integrating Pydio and ownCloud.md | 65 ------------------- 2 files changed, 65 insertions(+), 65 deletions(-) create mode 100644 published/20160519 The future of sharing integrating Pydio and ownCloud.md delete mode 100644 translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md diff --git a/published/20160519 The future of sharing integrating Pydio and ownCloud.md b/published/20160519 The future of sharing integrating Pydio and ownCloud.md new file mode 100644 index 0000000000..5c1b998508 --- /dev/null +++ b/published/20160519 The future of sharing integrating Pydio and ownCloud.md @@ -0,0 +1,65 @@ +共享的未来:Pydio 与 ownCloud 的联合 +========================================================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_darwincloud_520x292_0311LL.png?itok=5yWIaEDe) + +*图片来源 : opensource.com* + +开源共享生态圈内容纳了许多各异的项目,它们每一个都给出了自己的解决方案,且每一个都不按套路来。有很多原因导致你选择开源的解决方案,而非 Dropbox、Google Drive、iCloud 或 OneDrive 这些商业的解决方案。这些商业的解决方案虽然能让你不必为如何管理数据担心,但也理所应当的带着种种限制,其中就包括对于原有基础结构的控制和整合不足。 + +对于用户而言仍有相当一部分可供选择的文件分享和同步的替代品,其中就包括了 Pydio 和 ownCloud。 + +### Pydio + +Pydio (Put your data in orbit 把你的数据放上轨道) 项目由一位作曲家 Charles du Jeu 发起,起初他只是需要一种与乐队成员分享大型音频文件的方法。[Pydio][1] 是一种文件分享与同步的解决方案,综合了多存储后端,设计时还同时考虑了开发者和系统管理员两方面。在世界各地有逾百万的下载量,已被翻译成 27 种语言。 + +项目在刚开始的时候便开源了,先是在 
[SourceForge][2] 上茁壮的成长,现在已在 [GitHub][3] 上安了家。
+
+用户界面基于 Google 的 [Material 设计风格][4]。用户可以使用现有的传统文件基础结构或是根据预估的需求部署 Pydio,并通过 web、桌面和移动端应用随时随地地管理自己的东西。对于管理员来说,细粒度的访问权限绝对是配置访问时的利器。
+
+在 [Pydio 社区][5],你可以找到许多让你快速上手的资源。Pydio 网站 [对于如何为 Pydio GitHub 仓库贡献][6] 给出了明确的指导方案。[论坛][7]中也包含了开发者板块和社区。
+
+### ownCloud
+
+[ownCloud][8] 在世界各地拥有逾 8 百万的用户,它是一个开源、自行管理的文件同步共享技术。同步客户端支持所有主流平台,也支持通过 web 界面以 WebDAV 方式访问。ownCloud 拥有简单的使用界面、强大的管理工具,以及大规模的共享及协作功能,以满足用户管理数据时的需求。
+
+ownCloud 的开放式架构是通过 API 和为应用提供平台来实现可扩展性的。迄今已有逾 300 款应用,功能包括处理像日历、联系人、邮件、音乐、密码、笔记等诸多数据类型。ownCloud 由一个拥有数百位贡献者的国际化社区开发,安全,并且能为小到一台树莓派、大到数百万用户的 PB 级存储集群量身定制。
+
+### 联合共享 (Federated sharing)
+
+文件共享开始转向团队合作时代,而标准化为合作提供了坚实的土壤。
+
+联合共享(Federated sharing)是一个由 [OpenCloudMesh][9] 项目提供的新的开放标准,它是朝这个方向迈出的一步。首先,它支持在遵循该标准的服务器(比如 Pydio 和 ownCloud)之间分享文件和文件夹。
+
+ownCloud 7 率先引入该标准,这种服务器到服务器的分享方式可以让你挂载远程服务器上共享的文件,实际上就是创建你自己的云上之云。你可以直接为其它支持联合共享的服务器上的用户创建共享链接。
+
+实现这个新的 API 允许存储解决方案之间更深层次的集成,同时保留了原有平台的安全、控制和特性。
+
+“交换和共享文件是当下和未来不可或缺的东西。”ownCloud 的创始人 Frank Karlitschek 说道:“正因如此,采用联合和分布的方式而非集中的数据孤岛就显得至关重要。联合共享的设计初衷便是在保证安全和用户隐私的同时追求分享的无缝、至简之道。”
+
+### 下一步是什么呢? 
+ +正如 OpenCloudMesh 做的那样,将会通过像 Pydio 和 ownCloud 这样的机构和公司,合作推广这一文件共享的新开放标准。ownCloud 9 已经引入联合的服务器之间交换用户列表的功能,让你的用户们在你的服务器上享有和你同样的无缝体验。将来,一个中央地址簿服务(联合的)集合,用以检索其他联合云 ID 的构想可能会把云间合作推向一个新的高度。 + +这一举措无疑有助于日益开放的技术社区中的那些成员方便地讨论,开发,并推动“OCM 分享 API”作为一个厂商中立协议。所有领导 OCM 项目的合作伙伴都全心致力于开放 API 的设计原则,并欢迎其他开源的文件分享和同步社区参与并加入其中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/16/5/sharing-files-pydio-owncloud + +作者:[ben van 't ende][a] +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/benvantende +[1]: https://pydio.com/ +[2]: https://sourceforge.net/projects/ajaxplorer/ +[3]: https://github.com/pydio/ +[4]: https://www.google.com/design/spec/material-design/introduction.html +[5]: https://pydio.com/en/community +[6]: https://pydio.com/en/community/contribute +[7]: https://pydio.com/forum/f +[8]: https://owncloud.org/ +[9]: https://wiki.geant.org/display/OCM/Open+Cloud+Mesh diff --git a/translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md b/translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md deleted file mode 100644 index bf5a25c77e..0000000000 --- a/translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md +++ /dev/null @@ -1,65 +0,0 @@ -分享的未来:整合 Pydio 与 ownCloud -========================================================= - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_darwincloud_520x292_0311LL.png?itok=5yWIaEDe) ->图片来源 : -opensource.com - -开源共享生态圈内容纳了许多各异的项目,他们每一个都能提供出自己的解决方案,且每一个都不按套路来。有很多原因导致你选择开源的解决方案,而非 Dropbox、Google Drive、iCloud 或 OneDrive 这些商业的解决方案。这些商业的解决方案虽然能让你不必为如何管理数据担心,但也理所应当的带着种种限制,其中就包含对于原有基础结构的控制和整合不足。 - -对于用户而言仍有相当一部分文件分享和同步的替代品可供选择,其中就包括了 Pydio 和 ownCloud。 - -### Pydio - -Pydio (Put your data in 
orbit 把你的数据放上轨道) 项目由一位作曲家 Charles du Jeu 发起,起初他也只是需要一种与乐队成员分享大型音频文件的方法。[Pydio][1] 是一种文件分享与同步的解决方案,综合了多存储后端,设计时还同时考虑了开发者和系统管理员两方面。在世界各地有逾百万的下载量,已被翻译成 27 种语言。 - -项目在很开始的时候便开源了,先是在 [SourceForge][2] 上茁壮的成长,现在已在 [GitHub][3] 上安了家.。 - -用户界面基于 Google 的 [Material 设计][4]。用户可以使用现有的传统的文件基础结构或是本地部署 Pydio,并通过 web、桌面和移动端应用随时随地地管理自己的东西。对于管理员来说,细粒度的访问权限绝对是配置访问时的利器。 - -在 [Pydio 社区][5],你可以找到许多让你增速的资源。Pydio 网站 [对于如何为 Pydio GitHub 仓库贡献][6] 给出了明确的指导方案。[论坛][7]中也包含了开发者板块和社区。 - -### ownCloud - -[ownCloud][8] 在世界各地拥有逾 8 百万的用户,并且开源,支持自托管文件同步,且共享技术。同步客户端支持所有主流平台并支持 WebDAV 通过 web 界面实现。ownCloud 拥有简单的使用界面,强大的管理工具,和大规模的共享及协作功能——以满足用户管理数据时的需求。 - -ownCloud 的开放式架构是通过 API 和为应用提供平台来实现可扩展性的。迄今已有逾 300 款应用,功能包括处理像日历、联系人、邮件、音乐、密码、笔记等诸多数据类型。ownCloud 由一个数百位贡献者的国际化的社区开发,安全,并且能做到为小到一个树莓派大到好几百万用户的 PB 级存储集群量身定制。 - -### 联合共享 (Federated sharing) - -文件共享开始转向团队合作时代,而标准化为合作提供了坚实的土壤。 - -联合共享——一个由 [OpenCloudMesh][9] 项目提供的新开放标准,就是在这个方向迈出的一步。先不说别的,在支持该标准的服务端上,可以像 Pydio 和 ownCloud 那样分享文件和文件夹。 - -ownCloud 7 率先引入,这种服务端到服务端的分享方式可以让你挂载远程服务端上共享的文件,实际上就是创建你所有云的云。你可以直接创建共享链接,让用户在其他支持联合云共享的服务端上使用。 - -实现这个新的 API 允许存储解决方案之间更深层次的集成,同时保留了原有平台的安全,控制和属性。 - -“交换和共享文件是当下和未来不可或缺的东西。”ownCloud 的创始人 Frank Karlitschek 说道:“正因如此,采用联合和分布的方式而非集中的数据孤岛就显得至关重要。[联合共享]的设计初衷便是在保证安全和用户隐私的同时追求分享的无缝、至简之道。” - -### 下一步是什么呢? 
- -正如 OpenCloudMesh 做的那样,将会通过像 Pydio 和 ownCloud 这样的机构和公司,合作推广这一文件共享的新开放标准。ownCloud 9 已经引入联合服务端间交换用户列表的功能,让你的用户在你的服务器上享有和你同样的无缝体验。将来,一个中央地址簿服务(联合!)集合,用以检索其他联合云 ID 的想法可能会把云间合作推向一个新的高度。 - -这一举措无疑有助于日益开放的技术社区中的那些成员方便地讨论,开发,并推动“OCM 分享 API”作为一个厂商中立协议。所有领导 OCM 项目的合作伙伴都全心致力于开放 API 的设计原理,并欢迎其他开源的文件分享和同步社区参与并加入其中。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/5/sharing-files-pydio-owncloud - -作者:[ben van 't ende][a] -译者:[martin2011qi](https://github.com/martin2011qi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/benvantende -[1]: https://pydio.com/ -[2]: https://sourceforge.net/projects/ajaxplorer/ -[3]: https://github.com/pydio/ -[4]: https://www.google.com/design/spec/material-design/introduction.html -[5]: https://pydio.com/en/community -[6]: https://pydio.com/en/community/contribute -[7]: https://pydio.com/forum/f -[8]: https://owncloud.org/ -[9]: https://wiki.geant.org/display/OCM/Open+Cloud+Mesh From d8037c5dabfd1e56c729faa4778ea2d5dd52ac7c Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 24 Jul 2016 18:28:21 +0800 Subject: [PATCH 210/471] =?UTF-8?q?PUB:Part=206=20-=20How=20to=20Use=20?= =?UTF-8?q?=E2=80=98next=E2=80=99=20Command=20with=20Awk=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @kokialoves 整体翻译的不错,有些细节需要注意。另外还有就是标点符号的使用。 --- ...to Use ‘next’ Command with Awk in Linux.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) rename {translated/tech => published}/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md (59%) diff --git a/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md b/published/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md similarity index 59% rename from translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in 
Linux.md rename to published/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md index 8ead60bd44..c2d277fc17 100644 --- a/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md +++ b/published/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md @@ -1,13 +1,13 @@ - -如何使用AWK的‘next’命令 +awk 系列:如何使用 awk 的 ‘next’ 命令 ============================================= ![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png) -在Awk 系列的第六章, 我们来看一下`next`命令 ,它告诉 Awk 跳过你所提供的表达式而是读取下一个输入行. -`next` 命令帮助你阻止运行多余的步骤. +在 awk 系列的第六节,我们来看一下`next`命令 ,它告诉 awk 跳过你所提供的所有剩下的模式和表达式,直接处理下一个输入行。 -要明白它是如何工作的, 让我们来分析一下food_list.txt它看起来像这样 : +`next` 命令帮助你阻止运行命令执行过程中多余的步骤。 + +要明白它是如何工作的, 让我们来分析一下 food_list.txt 它看起来像这样: ``` Food List Items @@ -20,7 +20,7 @@ No Item_Name Price Quantity 6 Bananas $3.45 30 ``` -运行下面的命令,它将在每个食物数量小于或者等于20的行后面标一个星号: +运行下面的命令,它将在每个食物数量小于或者等于 20 的行后面标一个星号: ``` # awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt @@ -36,14 +36,14 @@ No Item_Name Price Quantity 上面的命令实际运行如下: -- 首先, 它用`$4 <= 20`表达式检查每个输入行的第四列是否小于或者等于20,如果满足条件, 它将在末尾打一个星号 `(*)` . -- 接着, 它用`$4 > 20`表达式检查每个输入行的第四列是否大于20,如果满足条件,显示出来. +- 首先,它用`$4 <= 20`表达式检查每个输入行的第四列(数量(Quantity))是否小于或者等于 20,如果满足条件,它将在末尾打一个星号 `(*)`。 +- 接着,它用`$4 > 20`表达式检查每个输入行的第四列是否大于20,如果满足条件,显示出来。 但是这里有一个问题, 当第一个表达式用`{ printf "%s\t%s\n", $0,"**" ; }`命令进行标注的时候在同样的步骤第二个表达式也进行了判断这样就浪费了时间. -因此当我们已经用第一个表达式打印标志行的时候就不在需要用第二个表达式`$4 > 20`再次打印. +因此当我们已经用第一个表达式打印标志行的时候就不再需要用第二个表达式`$4 > 20`再次打印。 -要处理这个问题, 我们需要用到`next` 命令: +要处理这个问题, 我们需要用到`next` 命令: ``` # awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt @@ -57,11 +57,11 @@ No Item_Name Price Quantity 6 Bananas $3.45 30 ``` -当输入行用`$4 <= 20` `{ printf "%s\t%s\n", $0,"*" ; next ; }`命令打印以后,`next`命令 将跳过第二个`$4 > 20` `{ print $0 ;}`表达式, 继续判断下一个输入行,而不是浪费时间继续判断一下是不是当前输入行还大于20. 
+当输入行用`$4 <= 20` `{ printf "%s\t%s\n", $0,"*" ; next ; }`命令打印以后,`next`命令将跳过第二个`$4 > 20` `{ print $0 ;}`表达式,继续判断下一个输入行,而不是浪费时间继续判断一下是不是当前输入行还大于 20。 -next命令在编写高效的命令脚本时候是非常重要的, 它可以很大的提高脚本速度. 下面我们准备来学习Awk的下一个系列了. +`next`命令在编写高效的命令脚本时候是非常重要的,它可以提高脚本速度。本系列的下一部分我们将来学习如何使用 awk 来处理标准输入(STDIN)。 -希望这篇文章对你有帮助,你可以给我们留言. +希望这篇文章对你有帮助,你可以给我们留言。 -------------------------------------------------------------------------------- @@ -69,7 +69,7 @@ via: http://www.tecmint.com/use-next-command-with-awk-in-linux/ 作者:[Aaron Kili][a] 译者:[kokialoves](https://github.com/kokialoves) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a01adedd95a0b5ff3b43ac39ffd11ff554eec376 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 24 Jul 2016 18:40:34 +0800 Subject: [PATCH 211/471] PUB:Part 7 - How to Read Awk Input from STDIN in Linux @vim-kakali --- ...w to Read Awk Input from STDIN in Linux.md | 75 ++++++++++++++++++ ...w to Read Awk Input from STDIN in Linux.md | 76 ------------------- 2 files changed, 75 insertions(+), 76 deletions(-) create mode 100644 published/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md delete mode 100644 translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md diff --git a/published/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md b/published/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md new file mode 100644 index 0000000000..2730691459 --- /dev/null +++ b/published/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md @@ -0,0 +1,75 @@ +awk 系列:awk 怎么从标准输入(STDIN)读取输入 +============================================ + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Read-Awk-Input-from-STDIN.png) + +在 awk 系列的前几节,我们看到大多数操作都是从一个文件或多个文件读取输入,或者你想要把标准输入作为 awk 的输入。 + +在 awk 系列的第七节中,我们将会看到几个例子,你可以筛选其他命令的输出代替从一个文件读取输入作为 awk 的输入。 + +我们首先从使用 [dir 命令][1]开始,它类似于 [ls 命令][2],在第一个例子下面,我们使用 `dir -l` 命令的输出作为 awk 
命令的输入,这样就可以打印出文件拥有者的用户名、所属组组名以及在当前路径下他/她拥有的文件。
+
+```
+# dir -l | awk '{print $3, $4, $9;}'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-By-User-in-Directory.png)
+
+*列出当前路径下的用户文件*
+
+
+再来看另一个例子,我们[使用 awk 表达式][3],在这里,我们想要在 awk 命令里使用一个表达式筛选出字符串,以打印出属于 root 用户的文件。命令如下:
+
+```
+# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} '
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-by-Root-User.png)
+
+*列出 root 用户的文件*
+
+上面的命令包含了 `(==)` 来进行比较操作,这帮助我们在当前路径下筛选出 root 用户的文件。这是通过使用 `$3=="root"` 表达式实现的。
+
+让我们再看另一个例子,我们使用一个 [awk 比较运算符][4] 来匹配一个确定的字符串。
+
+这里,我们使用了 [cat 命令][5] 来浏览文件名为 tecmint_deals.txt 的文件内容,并且我们想要仅仅查看有字符串 Tech 的部分,所以我们会运行下列命令:
+
+```
+# cat tecmint_deals.txt
+# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}'
+# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-Comparison-Operator-to-Match-String.png)
+
+*用 Awk 比较运算符匹配字符串*
+
+在上面的例子中,我们已经用了参数为 `~ /匹配字符/` 的比较操作,而上面的两个命令也给我们展示了一个很重要的问题。
+
+当你运行带有 tech 字符串的命令时终端没有输出,因为在文件中没有 tech 这种字符串;但是运行带有 Tech 字符串的命令,你却会得到包含 Tech 的输出。
+
+所以你在进行这种比较操作的时候应该时刻注意这种问题,正如我们在上面看到的那样,awk 对大小写很敏感。
+
+你总是可以使用另一个命令的输出作为 awk 命令的输入来代替从一个文件中读取输入,这就像我们在上面看到的那样简单。
+
+希望这些例子足够简单,可以使你理解 awk 的用法。如果你有任何问题,你可以在下面的评论区提问,记得查看 awk 系列接下来的章节内容,我们将关注 awk 的一些功能,比如变量、数字表达式以及赋值运算符。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/read-awk-input-from-stdin-in-linux/
+
+作者:[Aaron Kili][a]
+译者:[vim-kakali](https://github.com/vim-kakali)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
+[1]: http://www.tecmint.com/linux-dir-command-usage-with-examples/
+[2]: http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
+[3]: https://linux.cn/article-7599-1.html
+[4]: https://linux.cn/article-7602-1.html
+[5]: 
http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ + + + diff --git a/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md b/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md deleted file mode 100644 index 0d0d499d00..0000000000 --- a/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md +++ /dev/null @@ -1,76 +0,0 @@ - - -在 Linux 上怎么读取标准输入(STDIN)作为 Awk 的输入 -============================================ - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Read-Awk-Input-from-STDIN.png) - - -在 Awk 工具系列的前几节,我们看到大多数操作都是从一个文件或多个文件读取输入,或者你想要把标准输入作为 Awk 的输入. -在 Awk 系列的第7节中,我们将会看到几个例子,这些例子都是关于你可以筛选其他命令的输出代替从一个文件读取输入作为 awk 的输入. - - -我们开始使用 [dir utility][1] , dir 命令和 [ls 命令][2] 相似,在第一个例子下面,我们使用 'dir -l' 命令的输出作为 Awk 命令的输入,这样就可以打印出文件拥有者的用户名,所属组组名以及在当前路径下他/她拥有的文件. -``` -# dir -l | awk '{print $3, $4, $9;}' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-By-User-in-Directory.png) ->列出当前路径下的用户文件 - - -看另一个例子,我们 [使用 awk 表达式][3] ,在这里,我们想要在 awk 命令里使用一个表达式筛选出字符串,通过这样来打印出 root 用户的文件.命令如下: -``` -# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} ' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-by-Root-User.png) ->列出 root 用户的文件 - - -上面的命令包含了 '(==)' 来进行比较操作,这帮助我们在当前路径下筛选出 root 用户的文件.这种方法的实现是通过使用 '$3=="root"' 表达式. - -让我们再看另一个例子,我们使用一个 [awk 比较运算符][4] 来匹配一个确定的字符串. - - -现在,我们已经用了 [cat utility][5] 来浏览文件名为 tecmint_deals.txt 的文件内容,并且我们想要仅仅查看有字符串 Tech 的部分,所以我们会运行下列命令: -``` -# cat tecmint_deals.txt -# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}' -# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-Comparison-Operator-to-Match-String.png) ->用 Awk 比较运算符匹配字符串 - - -在上面的例子中,我们已经用了参数为 `~ /匹配字符/` 的比较操作,但是上面的两个命令给我们展示了一些很重要的问题. - -当你运行带有 tech 字符串的命令时终端没有输出,因为在文件中没有 tech 这种字符串,但是运行带有 Tech 字符串的命令,你却会得到包含 Tech 的输出. - -所以你应该在进行这种比较操作的时候时刻注意这种问题,正如我们在上面看到的那样, awk 对大小写很敏感. 
- - -你可以一直使用另一个命令的输出作为 awk 命令的输入来代替从一个文件中读取输入,这就像我们在上面看到的那样简单. - - -希望这些例子足够简单可以使你理解 awk 的用法,如果你有任何问题,你可以在下面的评论区提问,记得查看 awk 系列接下来的章节内容,我们将关注 awk 的一些功能,比如变量,数字表达式以及赋值运算符. --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/read-awk-input-from-stdin-in-linux/ - -作者:[Aaron Kili][a] -译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/linux-dir-command-usage-with-examples/ -[2]: http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ -[3]: http://www.tecmint.com/combine-multiple-expressions-in-awk -[4]: http://www.tecmint.com/comparison-operators-in-awk -[5]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ - - - From b51eea158ccb55b5d9127836930a064781ea2076 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 24 Jul 2016 19:03:46 +0800 Subject: [PATCH 212/471] =?UTF-8?q?PUB:20160722=20Terminix=20=E2=80=93=20A?= =?UTF-8?q?=20New=20GTK=203=20Tiling=20Terminal=20Emulator=20for=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @MikeCoder --- ... GTK 3 Tiling Terminal Emulator for Linux.md | 136 ++++++++++++++++++ ... 
GTK 3 Tiling Terminal Emulator for Linux.md | 133 ----------------- 2 files changed, 136 insertions(+), 133 deletions(-) create mode 100644 published/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md delete mode 100644 translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md diff --git a/published/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/published/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md new file mode 100644 index 0000000000..e9ec640549 --- /dev/null +++ b/published/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md @@ -0,0 +1,136 @@ +Terminix:一个很赞的基于 GTK3 的平铺式 Linux 终端模拟器 +============================================================ + +现在,你可以很容易的找到[大量的 Linux 终端模拟器][1],每一个都可以给用户留下深刻的印象。 + +但是,很多时候,我们会很难根据我们的喜好来找到一款心仪的日常使用的终端模拟器。这篇文章中,我们将会推荐一款叫做 Terminix 的令人激动的终端模拟机。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Emulator-for-Linux.png) + +*Terminix Linux 终端模拟器* + +Terminix 是一个使用 VTE GTK+ 3 组件的平铺式终端模拟器。使用 GTK 3 开发的原因主要是为了符合 GNOME HIG(人机接口 Human Interface Guidelines) 标准。另外,Terminix 已经在 GNOME 和 Unity 桌面环境下测试过了,也有用户在其他的 Linux 桌面环境下测试成功。 + +和其他的终端模拟器一样,Terminix 有着很多知名的特征,列表如下: + +- 允许用户进行任意的垂直或者水平分屏 +- 支持拖拽功能来进行重新排布终端 +- 支持使用拖拽的方式终端从窗口中将脱离出来 +- 支持终端之间的输入同步,因此,可以在一个终端输入命令,而在另一个终端同步复现 +- 终端的分组配置可以保存在硬盘,并再次加载 +- 支持透明背景 +- 允许使用背景图片 +- 基于主机和目录来自动切换配置 +- 支持进程完成的通知信息 +- 配色方案采用文件存储,同时支持自定义配色方案 + +### 如何在 Linux 系统上安装 Terminix + +现在来详细说明一下在不同的 Linux 发行版本上安装 Terminix 的步骤。首先,在此列出 Terminix 在 Linux 所需要的环境需求。 + +#### 依赖组件 + +为了正常运行,该应用需要使用如下库: + +- GTK 3.14 或者以上版本 +- GTK VTE 0.42 或者以上版本 +- Dconf +- GSettings +- Nautilus 的 iNautilus-Python 插件 + +如果你已经满足了如上的系统要求,接下来就是安装 Terminix 的步骤。 + +#### 在 RHEL/CentOS 7 或者 Fedora 22-24 上 + +首先,你需要通过新建文件 `/etc/yum.repos.d/terminix.repo` 来增加软件仓库,使用你最喜欢的文本编辑器来进行编辑: + +``` +# vi /etc/yum.repos.d/terminix.repo +``` + +然后拷贝如下的文字到我们刚新建的文件中: + +``` +[heikoada-terminix] +name=Copr repo for terminix owned 
by heikoada
+baseurl=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/fedora-$releasever-$basearch/
+skip_if_unavailable=True
+gpgcheck=1
+gpgkey=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/pubkey.gpg
+enabled=1
+enabled_metadata=1
+```
+
+保存文件并退出。
+
+然后更新你的系统,并且安装 Terminix,步骤如下:
+
+```
+---------------- On RHEL/CentOS 7 ----------------
+# yum update
+# yum install terminix
+
+---------------- On Fedora 22-24 ----------------
+# dnf update
+# dnf install terminix
+```
+
+#### 在 Ubuntu 16.04-14.04 和 Linux Mint 18-17
+
+虽然没有基于 Debian/Ubuntu 发行版本的官方软件包,但是你依旧可以通过如下的命令手动安装。
+
+```
+$ wget -c https://github.com/gnunn1/terminix/releases/download/1.1.1/terminix.zip
+$ sudo unzip terminix.zip -d /
+$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas/
+```
+
+#### 其它 Linux 发行版
+
+OpenSUSE 用户可以从默认仓库中安装 Terminix,Arch Linux 用户也可以安装 [AUR Terminix 软件包][2]。
+
+### Terminix 截图教程
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal.png)
+
+*Terminix 终端*
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Settings.png)
+
+*Terminix 终端设置*
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Tabs.png)
+
+*Terminix 多终端界面*
+
+### 如何卸载 Terminix
+
+
+如果你是手动安装的 Terminix 并且想要删除它,那么你可以参照如下的步骤来卸载它。从 [Github 仓库][3]上下载 uninstall.sh,给它可执行权限并执行它:
+
+```
+$ wget -c https://github.com/gnunn1/terminix/blob/master/uninstall.sh
+$ chmod +x uninstall.sh
+$ sudo sh uninstall.sh
+```
+
+但是如果你是通过包管理器安装的 Terminix,你可以使用包管理器来卸载它。
+
+在这篇介绍中,我们在众多优秀的终端模拟器中发现了一款值得关注的 Linux 终端模拟器。你可以尝试着去体验下它的新特性,并且可以将它和你现在使用的终端进行比较。
+
+重要的一点,如果你想得到更多信息或者有疑问,请使用评论区,而且不要忘了,给我一个关于你使用体验的反馈。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/terminix-tiling-terminal-emulator-for-linux/
+
+作者:[Aaron Kili][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/linux-terminal-emulators/ +[2]: https://aur.archlinux.org/packages/terminix +[3]: https://github.com/gnunn1/terminix diff --git a/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md deleted file mode 100644 index 2a74e62ca1..0000000000 --- a/translated/tech/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md +++ /dev/null @@ -1,133 +0,0 @@ -Terminix - 一个很赞的基于 GTK3 的 Linux 终端模拟器 -============================================================ - -现在,你可以很容易的找到[大量的 Linux 终端模拟器][1],每一个都可以给用户留下深刻的印象。 - -但是,很多时候,我们会很难根据我们的喜好来选择一款来作为日常终端模拟器。这篇文章中,我们将会推荐一款叫做 Terminix 的令人激动的终端模拟机。 - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Emulator-for-Linux.png) ->Terminix Linux 终端模拟器 - -Terminix 是一个使用 VTE GTK+ 3 插件的终端模拟器。使用 GTK 3 开发的原因主要是为了符合 GNOME HIG(Human Interface Guidelines) 标准。另外,Terminix 已经在 GNOME 和 Unity 桌面环境下完成了测试。同时,志愿者也在其他的 Linux 桌面环境下进行了测试。 - -和其他的终端模拟器一样,Terminix 有着很多著名的特点,列表如下: - -- 允许用户进行自定义的垂直或者水平分屏。 -- 支持功能性的拖拽来进行重新布局 -- 支持拖拽 Tab 的方式,新建终端窗口 -- 支持终端之间的输入同步,因此,命令可以在一个终端输入,同时另一个终端进行展示 -- 终端的分组配置可以保存在硬盘 -- 支持透明背景 -- 允许使用背景图片 -- 支持通过主机名和目录来自动切换配置 -- 支持来自其他进程的通知信息 -- 配色模式使用文件存储,同时支持自定义配色方案 - -### 如何在 Linux 系统上安装 Terminix - -现在来详细说明一下在不同的 Linux 发行版本上安装 Terminix 的步骤。首先,在此列出 Terminix 在 Linux 所需要的环境需求。 - -#### 依赖组件 - -为了正常运行,该应用需要使用如下库: - -- GTK 3.14 或者以上版本 -- GTK VTE 0.42 或者以上版本 -- Dconf -- GSettings -- Nautilus 的 iNautilus-Python 插件 - -如果你已经满足了如上的系统 要求,接下来就是安装 Terminix 的步骤。 - -#### 在 RHEL/CentOS 7 或者 Fedora 22-24 上 - -首先,你需要将包仓库通过新建文件 `/etc/yum.repos.d/terminix.repo` 的方式,然后使用你最喜欢的文本编辑器来进行编辑。 - -``` -# vi /etc/yum.repos.d/terminix.repo -``` - -然后拷贝如下的文字,并且拷贝到我们刚新建的文件中: - -``` -[heikoada-terminix] -name=Copr repo for terminix owned by heikoada 
-baseurl=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/fedora-$releasever-$basearch/ -skip_if_unavailable=True -gpgcheck=1 -gpgkey=https://copr-be.cloud.fedoraproject.org/results/heikoada/terminix/pubkey.gpg -enabled=1 -enabled_metadata=1 -``` - -保存文件并退出。 - -然后更新你的系统,并且安装 Terminix,步骤如下: - -``` ----------------- On RHEL/CentOS 7 ---------------- -# yum update -# yum install terminix - ----------------- On Fedora 22-24 ---------------- -# dnf update -# dnf install terminix -``` - -#### 在 Ubuntu 16.04-14.04 和 Linux Mint 18-17 - -虽然没有基于 Debian/Ubuntu 发行版本的官方的软件包,但是你依旧可以通过如下的命令手动安装。 - -``` -$ wget -c https://github.com/gnunn1/terminix/releases/download/1.1.1/terminix.zip -$ sudo unzip terminix.zip -d / -$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas/ -``` - -OpenSUSE users can install Terminix from the default repository and Arch Linux users can install the [AUR Terminix package][2]. -OpenSUSE 用户可以从默认的仓库中安装 Terminix,Arch Linux 用户也可以安装 [AUR Terminix 软件包][2]。 - -### Terminix 截图教程 - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal.png) ->Terminix Terminal - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Settings.png) ->Terminix Terminal Settings - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Terminix-Terminal-Tabs.png) ->Terminix Multiple Terminal Tabs - -### 如何卸载删除 Terminix - -In case you installed it manually and want to remove it, then you can follow the steps below to uninstall it. 
Download the uninstall.sh from Github repository, make it executable and then run it: -如果你是手动安装的 Terminix 并且想要删除他,那么你可以参照如下的步骤来卸载它。从 Github 仓库上下载 uninstall.sh,并且给它可执行权限并且执行它: - -``` -$ wget -c https://github.com/gnunn1/terminix/blob/master/uninstall.sh -$ chmod +x uninstall.sh -$ sudo sh uninstall.sh -``` - -但是如果你是通过包管理器安装的 Terminix,你依旧可以使用包管理器来卸载它。 - -请浏览 [Terminix Github][3] 仓库 - -在这个总览中,我们在众多优秀的终端模拟器中了解了一个重要的 Linux 终端模拟器。你可以尝试着去体验下它的新特性,并且可以将它和你现在使用的终端进行比较。 - -重要的一点,如果你想得到关于 Terminix 的更多信息或者有疑问,请使用评论区,而且不要忘了,给我一个关于你使用体验的反馈。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/terminix-tiling-terminal-emulator-for-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/linux-terminal-emulators/ -[2]: https://aur.archlinux.org/packages/terminix -[3]: https://github.com/gnunn1/terminix From 576fe07b3eb8d88a5364d9d952caf583f72fbdba Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 24 Jul 2016 21:04:10 +0800 Subject: [PATCH 213/471] =?UTF-8?q?20160724-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r managing your project's issue tracker.md | 101 ++++++++++++++++++ 1 file changed, 101 insertions(+) create mode 100644 sources/talk/20160718 Tips for managing your project's issue tracker.md diff --git a/sources/talk/20160718 Tips for managing your project's issue tracker.md b/sources/talk/20160718 Tips for managing your project's issue tracker.md new file mode 100644 index 0000000000..1b89fe5851 --- /dev/null +++ b/sources/talk/20160718 Tips for managing your project's issue tracker.md @@ -0,0 +1,101 @@ +Tips for 
managing your project's issue tracker
+==============================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_opennature_3.png?itok=30fRGfpv)
+
+Issue-tracking systems are important for many open source projects, and there are many open source tools that provide this functionality, but many projects opt to use GitHub's built-in issue tracker.
+
+Its simple structure makes it easy for others to weigh in, but issues are really only as good as you make them.
+
+Without a process, your repository can become unwieldy, overflowing with duplicate issues, vague feature requests, or confusing bug reports. Project maintainers can become burdened by the organizational load, and it can become difficult for new contributors to understand where priorities lie.
+
+In this article, I'll discuss how to take your GitHub issues from good to great.
+
+### The issue as user story
+
+My team spoke with open source expert [Jono Bacon][1]—author of [The Art of Community][2], a strategy consultant, and former Director of Community at GitHub—who said that high-quality issues are at the core of helping a project succeed. He says that while some see issues as merely a big list of problems you have to tend to, well-managed, triaged, and labeled issues can provide incredible insight into your code, your community, and where the problem spots are.
+
+"At the point of submission of an issue, the user likely has little patience or interest in providing expansive detail. As such, you should make it as easy as possible to get the most useful information from them in the shortest time possible," Jono Bacon said.
+
+A consistent structure can take a lot of burden off project maintainers, particularly for open source projects. We've found that encouraging a user story approach helps make clarity a constant.
The common structure for a user story addresses the "who, what, and why" of a feature: As a [user type], I want to [task] so that [goal]. + +Here's what that looks like in practice: + +>As a customer, I want to create an account so that I can make purchases. + +We suggest sticking that user story in the issue's title. You can also set up [issue templates][3] to keep things consistent. + +![](https://opensource.com/sites/default/files/resize/issuetemplate-new-520x293.png) +> Issue templates bring consistency to feature requests. + +The point is to make the issue well-defined for everyone involved: it identifies the audience (or user), the action (or task), and the outcome (or goal) as simply as possible. There's no need to obsess over this structure, though; as long as the what and why of a story are easy to spot, you're good. + +### Qualities of a good issue + +Not all issues are created equal—as any OSS contributor or maintainer can attest. A well-formed issue meets these qualities outlined in [The Agile Samurai][4]. + +Ask yourself if it is... + +- something of value to customers +- avoids jargon or mumbo jumbo; a non-expert should be able to understand it +- "slices the cake," which means it goes end-to-end to deliver something of value +- independent from other issues if possible; dependent issues reduce flexibility of scope +- negotiable, meaning there are usually several ways to get to the stated goal +- small and easily estimable in terms of time and resources required +- measurable; you can test for results + +### What about everything else? Working with constraints + +If an issue is difficult to measure or doesn't seem feasible to complete within a short time period, you can still work with it. Some people call these "constraints." + +For example, "the product needs to be fast" doesn't fit the story template, but it is non-negotiable. But how fast is fast? 
Vague requirements don't meet the criteria of a "good issue", but if you further define these concepts—for example, "the product needs to be fast" can be "each page needs to load within 0.5 seconds"—you can work with it more easily. Constraints can be seen as internal metrics of success, or a landmark to shoot for. Your team should test for them periodically. + +### What's inside your issue? + +In agile, user stories typically include acceptance criteria or requirements. In GitHub, I suggest using markdown checklists to outline any tasks that make up an issue. Issues should get more detail as they move up in priority. + +Say you're creating an issue around a new homepage for a website. The sub-tasks for that task might look something like this. + +![](https://opensource.com/sites/default/files/resize/markdownchecklist-520x255.png) +>Use markdown checklists to split a complicated issue into several parts. + +If necessary, link to other issues to further define a task. (GitHub makes this really easy.) + +Defining features as granularly as possible makes it easier to track progress, test for success, and ultimately ship valuable code more frequently. + +Once you've gathered some data points in the form of issues, you can use APIs to glean deeper insight into the health of your project. + +"The GitHub API can be hugely helpful here in identifying patterns and trends in your issues," Bacon said. "With some creative data science, you can identify problem spots in your code, active members of your community, and other useful insights." + +Some issue management tools provide APIs that add additional context, like time estimates or historical progress. + +### Getting others on board + +Once your team decides on an issue structure, how do you get others to buy in? Think of your repo's ReadMe.md file as your project's "how-to." 
It should clearly define what your project does (ideally using searchable language) and explain how others can contribute (by submitting requests, bug reports, suggestions, or by contributing code itself.) + +![](https://opensource.com/sites/default/files/resize/readme-520x184.png) +>Edit your ReadMe file with clear instructions for new collaborators. + +This is the perfect spot to share your GitHub issue guidelines. If you want feature requests to follow the user story format, share that here. If you use a tracking tool to organize your product backlog, share the badge so others can gain visibility. + +"Issue templates, sensible labels, documentation for how to file issues, and ensuring your issues get triaged and responded to quickly are all important" for your open source project, Bacon said. + +Remember: It's not about adding process for the process' sake. It's about setting up a structure that makes it easy for others to discover, understand, and feel confident contributing to your community. + +"Focus your community growth efforts not just on growing the number of programmers, but also [on] people interested in helping issues be accurate, up to date, and a source of active conversation and productive problem solving," Bacon said. 
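Bacon's "creative data science" suggestion above doesn't need much tooling to get started. Here is a minimal, illustrative shell sketch: it assumes you have already exported your issues' labels to a file called labels.txt, one label per line (for example, via the GitHub issues API and a JSON tool of your choice); the sample data below is made up.

```shell
# Create a hypothetical sample of exported issue labels, one per line.
cat > labels.txt <<'EOF'
bug
enhancement
bug
question
bug
EOF

# Tally the labels so the most common issue types float to the top.
sort labels.txt | uniq -c | sort -rn
```

On a real repository, the same one-liner run over a few hundred issues quickly shows whether bugs, feature requests, or support questions dominate your tracker.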
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/how-take-your-projects-github-issues-good-great + +作者:[Matt Butler][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mattzenhub +[1]: http://www.jonobacon.org/ +[2]: http://www.artofcommunityonline.org/ +[3]: https://help.github.com/articles/creating-an-issue-template-for-your-repository/ +[4]: https://www.amazon.ca/Agile-Samurai-Masters-Deliver-Software/dp/1934356581 From c9e9fc908f32a7e77352cc59020f063c447fdcc8 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 24 Jul 2016 21:13:17 +0800 Subject: [PATCH 214/471] =?UTF-8?q?20160724-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/talk/20160620 5 SSH Hardening Tips.md | 125 ++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 sources/talk/20160620 5 SSH Hardening Tips.md diff --git a/sources/talk/20160620 5 SSH Hardening Tips.md b/sources/talk/20160620 5 SSH Hardening Tips.md new file mode 100644 index 0000000000..ad6741ab26 --- /dev/null +++ b/sources/talk/20160620 5 SSH Hardening Tips.md @@ -0,0 +1,125 @@ +5 SSH Hardening Tips +====================== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1188510_1920_0.jpg?itok=ocPCL_9G) +>Make your OpenSSH sessions more secure with these simple tips. +> Creative Commons Zero + +When you look at your SSH server logs, chances are they are full of attempted logins from entities of ill intent. Here are 5 general ways (along with several specific tactics) to make your OpenSSH sessions more secure. + +### 1. Make Password Auth Stronger + +Password logins are convenient, because you can log in from any machine anywhere. But they are vulnerable to brute-force attacks. 
Try these tactics for strengthening your password logins.
+
+- Use a password generator, such as pwgen. pwgen takes several options; the most useful is password length (e.g., pwgen 12 generates a 12-character password).
+
+- Never reuse a password. Ignore all the bad advice about not writing down your passwords, and keep a notebook with your logins written in it. If you don't believe me that this is a good idea, then believe security guru [Bruce Schneier][1]. If you're reasonably careful, nobody will ever find your notebook, and it is immune from online attacks.
+
+- You can add extra protection to your login notebook by obscuring the logins recorded in your notebook with character substitution or padding. Use a simple, easily-memorable convention such as padding your passwords with two extra random characters, or use a single simple character substitution such as # for *.
+
+- Use a non-standard listening port on your SSH server. Yes, this is old advice, and it's still good. Examine your logs; chances are that port 22 is the standard attack point, with few attacks on other ports.
+
+- Use [Fail2ban][2] to dynamically protect your server from brute force attacks.
+
+- Create non-standard usernames. Never ever enable a remote root login, and avoid "admin".
+
+### 2. Fix Too Many Authentication Failures
+
+When my ssh logins fail with "Too many authentication failures for carla" error messages, it makes me feel bad. I know I shouldn't take it personally, but it still stings. But, as my wise granny used to say, hurt feelings don't fix the problem. The cure for this is to force a password-based login in your ~/.ssh/config file. If this file does not exist, first create the ~/.ssh/ directory:
+
+```
+$ mkdir ~/.ssh
+$ chmod 700 ~/.ssh
+```
+
+Then create the `~/.ssh/config` file in a text editor and enter these lines, using your own remote hostname (note that the stanza must start with the `Host` keyword, so the option applies only to that host):
+
+```
+Host remote.site.com
+    PubkeyAuthentication no
+```
+
+### 3.
Use Public Key Authentication
+
+Public Key authentication is much stronger than password authentication, because it is immune to brute-force password attacks, but it’s less convenient because it relies on RSA key pairs. To begin, you create a public/private key pair. Next, the private key goes on your client computer, and you copy the public key to the remote server that you want to log into. You can log in to the remote server only from computers that have your private key. Your private key is just as sensitive as your house key; anyone who has possession of it can access your accounts. You can add a strong layer of protection by putting a passphrase on your private key.
+
+Using RSA key pairs is a great tool for managing multiple users. When a user leaves, disable their login by deleting their public key from the server.
+
+This example creates a new key pair of 3072 bits strength, which is stronger than the default 2048 bits, and gives it a unique name so you know what server it belongs to:
+
+```
+$ ssh-keygen -t rsa -b 3072 -f id_mailserver
+```
+
+This creates two new keys, id_mailserver and id_mailserver.pub. id_mailserver is your private key -- do not share this! Now securely copy your public key to your remote server with the ssh-copy-id command. You must already have a working SSH login on the remote server:
+
+```
+$ ssh-copy-id -i id_mailserver.pub user@remoteserver
+
+/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
+user@remoteserver's password:
+
+Number of key(s) added: 1
+
+Now try logging into the machine, with: "ssh 'user@remoteserver'"
+and check to make sure that only the key(s) you wanted were added.
+```
+
+ssh-copy-id ensures that you will not accidentally copy your private key.
Test your new key login by copying the example from your command output, with single quotes:
+
+```
+$ ssh 'user@remoteserver'
+```
+
+It should log you in using your new key, and if you set a password on your private key, it will prompt you for it.
+
+### 4. Disable Password Logins
+
+Once you have tested and verified your public key login, disable password logins so that your remote server is not vulnerable to brute force password attacks. Do this in the /etc/ssh/sshd_config file on your remote server with this line:
+
+```
+PasswordAuthentication no
+```
+
+Then restart your SSH daemon.
+
+### 5. Set Up Aliases -- They’re Fast and Cool
+
+You can set up aliases for remote logins that you use a lot, so instead of logging in with something like "ssh -p 2222 username@remote.site.with.long-name", you can use "ssh remote1". Set it up like this in your ~/.ssh/config file:
+
+```
+Host remote1
+HostName remote.site.with.long-name
+Port 2222
+User username
+PubkeyAuthentication no
+```
+
+If you are using public key authentication, it looks like this:
+
+```
+Host remote1
+HostName remote.site.with.long-name
+Port 2222
+User username
+IdentityFile ~/.ssh/id_remoteserver
+```
+
+The [OpenSSH documentation][3] is long and detailed, but after you have mastered basic SSH use, you'll find it's very useful and contains a trove of cool things you can do with OpenSSH.
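Tip 1 mentions Fail2ban only in passing, so here is a minimal sketch of a matching jail, which would go in /etc/fail2ban/jail.local. Treat the numbers as illustrative assumptions to adapt: as written, a client that fails 3 logins within 10 minutes is banned for an hour. The port should match your non-standard listening port if you changed it.

```
[sshd]
enabled  = true
port     = 2222
maxretry = 3
findtime = 600
bantime  = 3600
```

After restarting Fail2ban, you can check that the jail is active with `fail2ban-client status sshd`.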
+ + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/5-ssh-hardening-tips + +作者:[CARLA SCHRODER][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/cschroder +[1]: https://www.schneier.com/blog/archives/2005/06/write_down_your.html +[2]: http://www.fail2ban.org/wiki/index.php/Main_Page +[3]: http://www.openssh.com/ From 327472d1073b92a68dd71695ef9b27912ab7efb5 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 24 Jul 2016 21:27:53 +0800 Subject: [PATCH 215/471] =?UTF-8?q?20160724-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...a GUI - How to live in a Linux terminal.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md diff --git a/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md b/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md new file mode 100644 index 0000000000..f9633eb966 --- /dev/null +++ b/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md @@ -0,0 +1,108 @@ +Who needs a GUI? How to live in a Linux terminal +================================================= + +![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-1-100669790-orig.jpg) + +### The best Linux shell apps for handling common functions + +Ever consider the idea of living entirely in a Linux terminal? No graphical desktop. No modern GUI software. Just text—and nothing but text—inside a Linux shell. It may not be easy, but it’s absolutely doable. [I recently tried living completely in a Linux shell for 30 days][1]. 
What follows are my favorite shell applications for handling some of the most common bits of computer functionality (web browsing, word processing, etc.). With a few obvious holes. Because being text-only is hard.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-2-100669791-orig.png)
+
+### Emailing within a Linux terminal
+
+For emailing in a terminal, we are spoiled for choice. Many people recommend [mutt][2] and [notmuch][3]. Both of those are powerful and excellent, but I prefer [alpine][4]. Why? Not only does it work well, but it’s also much more of a familiar interface if you are used to GUI email software like Thunderbird.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-3-100669837-orig.jpg)
+
+### Web browsing within a Linux terminal
+
+I have one word for you: [w3m][5]. Well, I suppose that’s not even really a word. But w3m is definitely my terminal web browser of choice. It renders things fairly well and is powerful enough to even let you post to sites such as Google Plus (albeit, not in a terribly fun way). Lynx may be the de facto text-based web browser, but w3m is my favorite.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-4-100669838-orig.jpg)
+
+### Text editing within a Linux terminal
+
+For editing simple text files, I have one application that I straight-up love. No, not emacs. Also, definitely not vim. For editing of a text file or jotting down some notes, I like nano. Yes, nano. It’s simple, easy to learn and pleasant to use. Are there pieces of software with more features? Sure. But nano is just delightful.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-5-100669839-orig.jpg)
+
+### Word processing within a Linux terminal
+
+In a shell—with nothing but text—there really isn’t a huge difference between a “text editor” and a “word processor.” But being as I do a lot of writing, having a piece of software built specifically for long-form writing is a definite must. My favorite is [wordgrinder][6]. It has just enough tools to make me happy, a nice menu-driven interface (with hot-keys), and it supports multiple file types, including OpenDocument, HTML and a bunch of other ones.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-6-100669795-orig.jpg)
+
+### Music playing within a Linux terminal
+
+When it comes to playing music (mp3, Ogg, etc.) from a shell, one piece of software is king: [cmus][7]. It supports every conceivable file format. It’s super easy to use and incredibly fast and light on system resource usage. So clean. So streamlined. This is what a good music player should be like.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-7-100669796-orig.jpg)
+
+### Instant messaging within a Linux terminal
+
+When I realized how well I could instant message from the terminal, my head exploded. You know Pidgin, the multi-protocol IM client? Well, it has a version for the terminal, called “[finch][8],” that allows you to connect to multiple networks and chat with multiple people at once. The interface is even similar to Pidgin. Just amazing. Use Google Hangouts? Try [hangups][9]. It has a nice tabbed interface and works amazingly well. Seriously. Other than needing perhaps some emoji and inline pictures, instant messaging from the shell is a great experience.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-8-100669797-orig.jpg)
+
+### Tweeting within a Linux terminal
+
+No joke. Twitter, in your terminal, thanks to [rainbowstream][10].
I hit a few bugs here and there, but overall, it works rather well. Not as well as the website itself—and not as well as the official mobile clients—but, come on, this is Twitter in a shell. Even if it has one or two rough edges, this is pretty stinkin’ cool.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-9-100669798-orig.jpg)
+
+### Reddit-ing within a Linux terminal
+
+Spending time on Reddit from the comforts of the command line feels right somehow. And with [rtv][11], it’s a rather pleasant experience. Reading. Commenting. Voting. It all works. The experience isn’t actually all that different than the website itself.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-10-100669799-orig.jpg)
+
+### Process managing within a Linux terminal
+
+Use [htop][12]. It’s like top—only better and prettier. Sometimes I just leave htop up and running all the time. Just because. In that regard, it’s like a music visualizer—only for RAM and CPU usage.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-11-100669800-orig.png)
+
+### File managing within a Linux terminal
+
+Just because you’re in a text-based shell doesn’t mean you don’t enjoy the finer things in life. Like having a nice file browser and manager. In that regard, [Midnight Commander][13] is a pretty doggone great one.
+
+![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-12-100669801-orig.png)
+
+### Terminal managing within a Linux terminal
+
+If you spend much time in the shell, you’re going to need a terminal multiplexer. Basically it’s a piece of software that lets you split up your terminal session into a customizable grid, allowing you to use and see multiple terminal applications at the same time. It’s a tiled window manager for your shell. My favorite is [tmux][14]. But [GNU Screen][15] is also quite nice. It might take a few minutes to learn how to use it, but once you do, you’ll be glad you did.
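If you end up settling on tmux, a few lines in ~/.tmux.conf make the "tiled window manager for your shell" idea feel natural. This is only a starting-point sketch; the | and - split bindings are personal suggestions added alongside tmux's defaults, not defaults themselves:

```
# ~/.tmux.conf: split panes with | and - (in addition to the default " and %)
bind | split-window -h
bind - split-window -v

# Move between panes with Vim-style keys
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R

# Mouse support for selecting and resizing panes (tmux 2.1+)
set -g mouse on
```

Reload it from inside a running session with `tmux source-file ~/.tmux.conf`.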
+ +![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-13-100669802-orig.jpg) + +### Presentation-ing within a Linux terminal + +LibreOffice, Google Slides or, gasp, PowerPoint. I spend a lot of time in presentation software. The fact that one exists for the shell pleases me greatly. It’s called, appropriately, “[text presentation program][16].” There are no images (obviously), just a simple program for displaying slides put together in a simple markup language. It may not let you embed pictures of cats, but you’ll earn some serious nerd-cred for doing an entire presentation from the terminal. + +-------------------------------------------------------------------------------- + +via: http://www.networkworld.com/article/3091139/linux/who-needs-a-gui-how-to-live-in-a-linux-terminal.html#slide1 + +作者:[Bryan Lunduke][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.networkworld.com/author/Bryan-Lunduke/ +[1]: http://www.networkworld.com/article/3083268/linux/30-days-in-a-terminal-day-0-the-adventure-begins.html +[2]: https://en.wikipedia.org/wiki/Mutt_(email_client) +[3]: https://notmuchmail.org/ +[4]: https://en.wikipedia.org/wiki/Alpine_(email_client) +[5]: https://en.wikipedia.org/wiki/W3m +[6]: http://cowlark.com/wordgrinder/index.html +[7]: https://en.wikipedia.org/wiki/Cmus +[8]: https://developer.pidgin.im/wiki/Using%20Finch +[9]: https://github.com/tdryer/hangups +[10]: http://www.rainbowstream.org/ +[11]: https://github.com/michael-lazar/rtv +[12]: http://hisham.hm/htop/ +[13]: https://en.wikipedia.org/wiki/Midnight_Commander +[14]: https://tmux.github.io/ +[15]: https://en.wikipedia.org/wiki/GNU_Screen +[16]: http://www.ngolde.de/tpp.html From f86827e35298d60e00596183d72f371fde78f725 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 24 Jul 2016 21:38:21 +0800 Subject: [PATCH 216/471] 
=?UTF-8?q?20160724-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...IDEO EDITING SOFTWARE FOR LINUX IN 2016.md | 148 ++++++++++++++++++
 1 file changed, 148 insertions(+)
 create mode 100644 sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md

diff --git a/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md b/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md
new file mode 100644
index 0000000000..44c66359f9
--- /dev/null
+++ b/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md
@@ -0,0 +1,148 @@
+TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016
+=====================================================
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/linux-video-ditor-software.jpg)
+
+Brief: Tiwo discusses the best video editors for Linux, their pros and cons, and the installation method for Ubuntu-based distros in this article.
+
+We have discussed the [best photo management applications for Linux][1] and the [best code editors for Linux][2] in similar articles in the past. Today we shall see the best video editing software for Linux.
+
+When asked about free video editing software, Windows Movie Maker and iMovie are what most people often suggest.
+
+Unfortunately, neither of them is available for GNU/Linux. But you don’t need to worry about it; we have pooled together a list of the best free video editors for you.
+
+### BEST VIDEO EDITOR APPS FOR LINUX
+
+Let’s have a look at the top 5 best free video editing software for Linux below:
+
+#### 1. KDENLIVE
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg)
+
+[Kdenlive][3] is a free and [open source][4] video editor from KDE that provides dual video monitors, a multi-track timeline, a clip list, customizable layout support, basic effects, and basic transitions.
+It supports a wide variety of file formats and a wide range of camcorders and cameras, including low-resolution camcorders (raw and AVI DV editing), MPEG-2, MPEG-4 and H.264 AVCHD (small cameras and camcorders), high-resolution camcorder files (including HDV and AVCHD camcorders), and professional camcorders (including XDCAM-HD™ streams, IMX™ (D10) streams, DVCAM (D10), DVCAM, DVCPRO™, DVCPRO50™ streams and DNxHD™ streams).
+
+You can install it from the terminal by running the following command:
+
+```
+sudo apt-get install kdenlive
+```
+
+Or, open Ubuntu Software Center and search for Kdenlive.
+
+#### 2. OPENSHOT
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg)
+
+[OpenShot][5] is the second choice in our list of Linux video editing software. OpenShot helps you create films with support for transitions, effects and adjusting audio levels, and of course, it supports most formats and codecs.
+
+You can also export your film to DVD, upload to YouTube, Vimeo, Xbox 360, and many other common formats. OpenShot is simpler than Kdenlive. So if you need a video editor with a simple UI, OpenShot is a good choice.
+
+The latest version is 2.0.7. You can install OpenShot video editor by running the following command from a terminal window:
+
+```
+sudo apt-get install openshot
+```
+
+The download is about 25 MB, and it takes about 70 MB of disk space once installed.
+
+#### 3. FLOWBLADE MOVIE EDITOR
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg)
+
+[Flowblade Movie Editor][6] is a multitrack non-linear video editor for Linux. It is free and open source. It comes with a stylish and modern user interface.
+
+Written in Python, it is designed to be fast and precise. Flowblade has focused on providing the best possible experience on Linux and other free platforms, so there’s no Windows or OS X version for now.
+
+To install Flowblade in Ubuntu and other Ubuntu-based systems, use the command below:
+
+```
+sudo apt-get install flowblade
+```
+
+#### 4. LIGHTWORKS
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg)
+
+If you are looking for video editing software with more features, this is the answer. [Lightworks][7] is a cross-platform professional video editor, available for Linux, Mac OS X and Windows.
+
+It is an award-winning professional [non-linear editing][8] (NLE) software that supports resolutions up to 4K as well as video in SD and HD formats.
+
+This application comes in two versions:
+
+- Lightworks Free
+- Lightworks Pro
+
+The free version doesn’t support export to Vimeo (H.264/MPEG-4) or YouTube (H.264/MPEG-4) at up to 2160p (4K UHD), to Blu-ray, or to H.264/MP4 with a configurable bitrate setting, while the Pro version does. The Pro version also has more features, such as higher resolution support, 4K and Blu-ray support, etc.
+
+##### HOW TO INSTALL LIGHTWORKS?
+
+Unlike the other video editors, installing Lightworks is not as straightforward as running a single command. Don’t worry, it’s not that complicated either.
+
+- Step 1 – You can get the package from the [Lightworks Downloads Page][9]. The package is about 79.5 MB.
+
+>Please note: There’s no Linux 32-bit support.
+
+- Step 2 – Once downloaded, you can install it using the [Gdebi package installer][10]. Gdebi automatically downloads the dependencies:
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/Installing-lightworks-on-ubuntu.jpg)
+
+- Step 3 – Now you can open it from the Ubuntu dashboard, or your Linux distro’s menu.
+
+- Step 4 – It asks for an account when you use it for the first time. Click the “Not Registered?” button to register. Don’t worry, it’s free!
+
+- Step 5 – After your account has been verified, log in.
+
+Now Lightworks is ready to use.
+
+Need Lightworks video tutorials? Get them at the [Lightworks video tutorials page][11].
+
+#### 5. BLENDER
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg)
+
+Blender is a professional, industry-grade, open source, cross-platform video editor. It is popular for 3D work. Blender has been used in several Hollywood movies, including the Spider-Man series.
+
+Although it was originally designed for 3D modeling, it can also be used for video editing, with input capabilities for a variety of formats. The Video Editor includes:
+
+- Live preview, luma waveform, chroma vectorscope and histogram displays
+- Audio mixing, syncing, scrubbing and waveform visualization
+- Up to 32 slots for adding video, images, audio, scenes, masks and effects
+- Speed control, adjustment layers, transitions, keyframes, filters and more.
+
+The latest version can be downloaded from the [Blender Download Page][12].
+
+### WHICH IS THE BEST VIDEO EDITING SOFTWARE?
+
+If you need a simple video editor, OpenShot, Kdenlive or Flowblade is a good choice. These are suitable for beginners and for systems with standard specifications.
+
+If you have a high-end computer and need advanced features, you can go with Lightworks. And if you are looking for even more advanced capabilities, Blender has got your back.
+
+So that’s all I can write about the 5 best video editing software options for Linux distributions such as Ubuntu, Linux Mint, Elementary, and others. Share with us which video editor you like the most.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-video-editing-software-linux/ + +作者:[Tiwo Satriatama][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/tiwo/ +[1]: https://itsfoss.com/linux-photo-management-software/ +[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[3]: https://kdenlive.org/ +[4]: https://itsfoss.com/tag/open-source/ +[5]: http://www.openshot.org/ +[6]: http://jliljebl.github.io/flowblade/ +[7]: https://www.lwks.com/ +[8]: https://en.wikipedia.org/wiki/Non-linear_editing_system +[9]: https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206 +[10]: https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[11]: https://www.lwks.com/videotutorials +[12]: https://www.blender.org/download/ + + + From 2eb5bc2855c8d0a8048a096942f5ace7383d978c Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 24 Jul 2016 22:15:33 +0800 Subject: [PATCH 217/471] =?UTF-8?q?20160724-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ilter and Edit - Demonstrated in Pandas.md | 327 ++++++++++++++++++ 1 file changed, 327 insertions(+) create mode 100644 sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md diff --git a/sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md b/sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md new file mode 100644 index 0000000000..348f66196f --- /dev/null +++ b/sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md @@ -0,0 +1,327 @@ +Excel “Filter and Edit” - Demonstrated in Pandas +================================================== + +![](http://pbpython.com/images/Boolean-Indexing-Example.png) + +### Introduction + +I have heard from various 
people that my [previous][1] [articles][2] on common Excel tasks in pandas were useful in helping new pandas users translate Excel processes into equivalent pandas code. This article will continue that tradition by illustrating various pandas indexing examples using Excel’s Filter function as a model for understanding the process.
+
+One of the first things most new pandas users learn is basic data filtering. Even after working with pandas over the past few months, I recently realized that there was another benefit to the pandas filtering approach that I was not using in my day-to-day work. Namely, you can filter on a given set of columns but update another set of columns using a simplified pandas syntax. This is similar to what I’ll call the “Filter and Edit” process in Excel.
+
+This article will walk through some examples of filtering a pandas DataFrame and updating the data based on various criteria. Along the way, I will explain some more about pandas’ indexing and how to use indexing methods such as .loc, .ix and .iloc to quickly and easily update a subset of data based on simple or complex criteria.
+
+### Excel: “Filter and Edit”
+
+Outside of the Pivot Table, one of the top go-to tools in Excel is the Filter. This simple tool allows a user to quickly filter and sort the data by various numeric, text and formatting criteria. Here is a basic screenshot of some sample data with data filtered by several different criteria:
+
+![](http://pbpython.com/images/filter-example.png)
+
+The Filter process is intuitive and easy to grasp for even the most novice Excel user. I have also noticed that people will use this feature to select rows of data, then update additional columns based on the row criteria. The example below shows what I’m describing:
+
+![](http://pbpython.com/images/commission-example.png)
+
+In the example, I have filtered the data on Account Number, SKU and Unit Price.
Then I manually added a Commission_Rate column and typed 0.01 into each cell. The benefit of this approach is that it is easy to understand and can help someone manage relatively complex data without writing long Excel formulas or getting into VBA. The downside of this approach is that it is not repeatable and can be difficult for someone from the outside to understand which criteria were used for any filter.
+
+For instance, if you look at the screenshot above, there is no obvious way to tell what is filtered without looking at each column. Fortunately, we can do something very similar in pandas. Not surprisingly, it is easy in pandas to execute this “Filter and Edit” model with simple and clean code.
+
+### Boolean Indexing
+
+Now that you have a feel for the problem, I want to step through some details of boolean indexing in pandas. This is an important concept to understand if you want to understand pandas’ [Indexing and Selecting of Data][3] in the broadest sense. This idea may seem a little complex to the new pandas user (and maybe too basic for experienced users) but I think it is important to take some time and understand it. If you grasp this concept, the basic process of working with data in pandas will be more straightforward.
+
+Pandas supports indexing (or selecting data) by using labels, position-based integers or a list of boolean values (True/False). Using a list of boolean values to select rows is called boolean indexing and will be the focus of the rest of this article.
+
+I find that my pandas workflow tends to focus mostly on using lists of boolean values for selecting my data. In other words, when I create pandas DataFrames, I tend to keep the default index in the DataFrame. Therefore the index is not really meaningful on its own and not straightforward for selecting data.
+
+>Key Point
+> Boolean indexing is one (of several) powerful and useful ways of selecting rows of data in pandas.
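To make the three selection styles just mentioned (labels, positions and boolean lists) concrete, here is a minimal sketch on a throwaway DataFrame; the data and labels below are made up for illustration and are not part of the article's running example:

```python
import pandas as pd

# Tiny DataFrame with string labels as the index
df = pd.DataFrame({"sales": [100, 200, 300]},
                  index=["alpha", "beta", "gamma"])

by_label = df.loc["beta", "sales"]     # label-based: row "beta", column "sales"
by_position = df.iloc[1, 0]            # position-based: second row, first column
by_boolean = df[[False, True, False]]  # boolean list: keep only the second row

print(by_label)                # 200
print(by_position)             # 200
print(list(by_boolean.index))  # ['beta']
```

All three expressions reach the same cell or row; which one you use depends on whether you are thinking in labels, positions or conditions.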
+
+Let’s look at some example DataFrames to help clarify what a boolean index in pandas does.
+
+First, we will create a very small DataFrame purely from a Python list and use it to show how boolean indexing works.
+
+```
+import pandas as pd
+sales = [('account', ['Jones LLC', 'Alpha Co', 'Blue Inc', 'Mega Corp']),
+         ('Total Sales', [150, 200, 75, 300]),
+         ('Country', ['US', 'UK', 'US', 'US'])]
+df = pd.DataFrame.from_items(sales)
+```
+
+ |account |Total Sales |Country
+:--|:--|:--|:--
+0 |Jones LLC |150 |US
+1 |Alpha Co |200 |UK
+2 |Blue Inc |75 |US
+3 |Mega Corp |300 |US
+
+Notice how the values 0-3 get automatically assigned to the rows? Those are the indices and they are not particularly meaningful in this data set but are useful to pandas and are important to understand for other use cases not described below.
+
+When we refer to boolean indexing, we simply mean that we can pass in a list of True or False values representing each row we want to view.
+
+In this case, if we want to view the data for Jones LLC, Blue Inc and Mega Corp, we can see that the True/False list would look like this:
+
+```
+indices = [True, False, True, True]
+```
+
+It should be no surprise that you can pass this list to your DataFrame and it will only display the rows where our value is True:
+
+```
+df[indices]
+```
+
+ |account |Total Sales |Country
+:--|:--|:--|:--
+0 |Jones LLC |150 |US
+2 |Blue Inc |75 |US
+3 |Mega Corp |300 |US
+
+Here is a visual of what just happened:
+
+![](http://pbpython.com/images/Boolean-Indexing-Example.png)
+
+This manual list creation of the index works but obviously is not scalable or very useful for anything more than a trivial data set. Fortunately pandas makes it very easy to create these boolean indexes using a simple query language that should be familiar to someone who has used Python (or any language for that matter).
+
+For an example, let’s look at all sales lines from the US.
If we execute a Python expression based on the Country column:
+
+```
+df.Country == 'US'
+```
+
+```
+0 True
+1 False
+2 True
+3 True
+Name: Country, dtype: bool
+```
+
+The example shows how pandas will take your traditional Python logic, apply it to a DataFrame and return a list of boolean values. This list of boolean values can then be passed to the DataFrame to get the corresponding rows of data.
+
+In real code, you would not do this two-step process. The shorthand method for doing this would typically look like this:
+
+```
+df[df["Country"] == 'US']
+```
+
+ |account |Total Sales |Country
+:--|:--|:--|:--
+0 |Jones LLC |150| US
+2 |Blue Inc |75 |US
+3 |Mega Corp| 300| US
+
+While this concept is simple, you can write fairly complex logic to filter your data using the power of Python.
+
+>Key Point
+>In this example, `df[df.Country == 'US']` is equivalent to `df[df["Country"] == 'US']`. The ‘.’ notation is cleaner but will not work when there are spaces in your column names.
+
+### Selecting the Columns
+
+Now that we have figured out how to select rows of data, how can we control which columns to display? In the example above, there’s no obvious way to do that. Pandas can support this use case using three types of location-based indexing: .loc, .iloc, and .ix. These functions also allow us to select columns in addition to the row selection we have seen so far.
+
+There is a lot of confusion about when to use .loc, .iloc, or .ix. The quick summary of the difference is that:
+
+- .loc is used for label indexing
+- .iloc is used for position-based integers
+- .ix is a shortcut that will try to use labels (like .loc) but will fall back to position-based integers (like .iloc)
+
+So, the question is, which one should I use? I will profess that I get tripped up sometimes on this one too. I have found that I use .loc most frequently.
Mainly because my data does not lend itself to meaningful position-based indexing (in other words, I rarely find myself needing .iloc) so I stick with .loc.
+
+To be fair, each of these methods does have its place and is useful in many situations. One area in particular is when dealing with MultiIndex DataFrames. I will not cover that topic in this article - maybe in a future post.
+
+Now that we have covered this topic, let’s show how to filter a DataFrame on values in a row and select specific columns to display.
+
+Continuing with our example, what if we just want to show the account names that correspond to our index? Using .loc it is simple:
+
+```
+df.loc[[True, True, False, True], "account"]
+```
+
+```
+0 Jones LLC
+1 Alpha Co
+3 Mega Corp
+Name: account, dtype: object
+```
+
+If you would like to see multiple columns, just pass a list:
+
+```
+df.loc[[True, True, False, True], ["account", "Country"]]
+```
+
+ | account |Country
+:--|:--|:--
+0 |Jones LLC| US
+1 |Alpha Co |UK
+3 |Mega Corp |US
+
+The real power is when you create more complex queries on your data. In this case, let’s show all account names and Countries where sales > 200:
+
+```
+df.loc[df["Total Sales"] > 200, ["account", "Country"]]
+```
+
+ |account| Country
+ :--|:--|:--
+3 |Mega Corp| US
+
+This process can be thought of as somewhat equivalent to Excel’s Filter we discussed above. You have the added benefit that you can also limit the number of columns you retrieve, not just the rows.
+
+### Editing Columns
+
+All of this is good background but where this process really shines is when you use a similar approach for updating one or more columns based on a row selection.
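One caution worth adding here (my aside, not from the original walk-through): do the row filter and the column selection in a single .loc call when assigning. Chained indexing such as `df[df["Total Sales"] > 100]["rate"] = 0.05` may write to a temporary copy and trigger pandas' SettingWithCopyWarning, silently leaving the original DataFrame unchanged. A minimal sketch with made-up numbers:

```python
import pandas as pd

df = pd.DataFrame({"Total Sales": [150, 200, 75],
                   "rate": [0.02, 0.02, 0.02]})

# One .loc call: boolean row filter and column label together,
# so the assignment updates the original DataFrame in place
df.loc[df["Total Sales"] > 100, "rate"] = 0.05

print(df["rate"].tolist())  # [0.05, 0.05, 0.02]
```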
+
+For one simple example, let’s add a commission rate column to our data:
+
+```
+df["rate"] = 0.02
+```
+
+ | account |Total Sales| Country |rate
+:--|:--|:--|:--|:--
+0 |Jones LLC |150 |US |0.02
+1 |Alpha Co |200 |UK |0.02
+2 |Blue Inc |75 |US |0.02
+3 |Mega Corp |300 |US |0.02
+
+Let’s say that if you sold more than 100, your rate is 5%. The basic process is to set up a boolean index to select the rows, then assign the new value to the rate column:
+
+```
+df.loc[df["Total Sales"] > 100, ["rate"]] = .05
+```
+
+ | account |Total Sales| Country| rate
+:--|:--|:--|:--|:--
+0 |Jones LLC |150| US| 0.05
+1 |Alpha Co |200 |UK |0.05
+2| Blue Inc |75| US |0.02
+3 |Mega Corp |300| US| 0.05
+
+Hopefully, if you stepped through this article, this will make sense and will help you understand how this syntax works. Now you have the fundamentals of the “Filter and Edit” approach. The final section will show this process in a little more detail in Excel and pandas.
+
+### Bringing It All Together
+
+For the final example, we will create a simple commissions calculator using the following rules:
+
+- All commissions are calculated at the transaction level
+- The base commission on all sales is 2%
+- All shirts will get a commission of 2.5%
+- A special program is going on where selling > 10 belts in one transaction gets a 4% commission
+- There is a special bonus of $250 plus a 4.5% commission for all shoe sales > $1000 in a single transaction
+
+In order to do this in Excel, using the Filter and Edit approach:
+
+- Add a commission column with 2%
+- Add a bonus column of $0
+- Filter on shirts and change the value to 2.5%
+- Clear the filter
+- Filter for belts and quantity > 10 and change the value to 4%
+- Clear the filter
+- Filter for shoes > $1000 and add commission and bonus values of 4.5% and $250 respectively
+
+I am not going to show a screenshot of each step but here is the last filter:
+
+![](http://pbpython.com/images/filter-2.png)
+
+This approach is simple enough to
manipulate in Excel but it is not very repeatable nor audit-able. There are certainly other approaches to accomplish this in Excel - such as formulas or VBA. However, this Filter and Edit approach is common and is illustrative of the pandas logic. + +Now, let’s walk through the whole example in pandas. + +First, read in the [Excel file][4] and add a column with the 2% default rate: + +``` +import pandas as pd +df = pd.read_excel("https://github.com/chris1610/pbpython/blob/master/data/sample-sales-reps.xlsx?raw=true") +df["commission"] = .02 +df.head() +``` + + | account number| customer name| sales rep| sku |category |quantity |unit price| ext price| date| commission +:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:-- +0 |680916 |Mueller and Sons |Loring Predovic |GP-14407 | Belt |19 |88.49 |1681.31 |2015-11-17 05:58:34| 0.02 +1 |680916 |Mueller and Sons |Loring Predovic |FI-01804| Shirt |3| 78.07| 234.21 |2016-02-13 04:04:11 |0.02 +2 |530925 |Purdy and Sons| Teagan O’Keefe |EO-54210 |Shirt |19 |30.21 |573.99 |2015-08-11 12:44:38 |0.02 +3 |14406| Harber, Lubowitz and Fahey| Esequiel Schinner| NZ-99565| Shirt| 12| 90.29 |1083.48 |2016-01-23 02:15:50 |0.02 +4 |398620| Brekke Ltd |Esequiel Schinner |NZ-99565 |Shirt |5| 72.64 |363.20 |2015-08-10 07:16:03 |0.02 + +The next commission rule is for all shirts to get 2.5% and Belt sales > 10 get a 4% rate: + +``` +df.loc[df["category"] == "Shirt", ["commission"]] = .025 +df.loc[(df["category"] == "Belt") & (df["quantity"] >= 10), ["commission"]] = .04 +df.head() +``` + +| account number |customer name |sales rep| sku |category |quantity| unit price| ext price |date |commission + :--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:-- +0 |680916| Mueller and Sons| Loring Predovic| GP-14407| Belt| 19 |88.49| 1681.31 |2015-11-17 05:58:34| 0.040 +1 |680916| Mueller and Sons| Loring Predovic| FI-01804| Shirt |3 |78.07 |234.21| 2016-02-13 04:04:11 |0.025 +2 |530925 |Purdy and Sons |Teagan O’Keefe| EO-54210 |Shirt| 19 |30.21 |573.99 
|2015-08-11 12:44:38| 0.025 +3 |14406| Harber, Lubowitz and Fahey| Esequiel Schinner| NZ-99565| Shirt| 12| 90.29| 1083.48| 2016-01-23 02:15:50 |0.025 +4 |398620| Brekke Ltd| Esequiel Schinner| NZ-99565| Shirt| 5 |72.64 |363.20| 2015-08-10 07:16:03 |0.025 + +The final commission rule is to add the special bonus: + +``` +df["bonus"] = 0 +df.loc[(df["category"] == "Shoes") & (df["ext price"] >= 1000 ), ["bonus", "commission"]] = 250, 0.045 + +# Display a sample of rows that show this bonus +df.ix[3:7] +``` + +| account number| customer name |sales rep| sku |category| quantity| unit price| ext price| date| commission| bonus + :--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:-- +3| 14406| Harber, Lubowitz and Fahey| Esequiel Schinner| NZ-99565| Shirt |12 |90.29| 1083.48 |2016-01-23 02:15:50 |0.025 |0 +4| 398620| Brekke Ltd| Esequiel Schinner |NZ-99565| Shirt| 5| 72.64| 363.20| 2015-08-10 07:16:03| 0.025| 0 +5| 282122| Connelly, Abshire and Von Beth| Skiles| GJ-90272| Shoes| 20| 96.62| 1932.40 |2016-03-17 10:19:05 |0.045 |250 +6 |398620 |Brekke Ltd| Esequiel Schinner |DU-87462 |Shirt| 10| 67.64 |676.40| 2015-11-25 22:05:36| 0.025| 0 +7| 218667| Jaskolski-O’Hara| Trish Deckow| DU-87462| Shirt |11| 91.86| 1010.46 |2016-04-24 15:05:58| 0.025| 0 + +In order to do the commissions calculation: + +``` +# Calculate the compensation for each row +df["comp"] = df["commission"] * df["ext price"] + df["bonus"] + +# Summarize and round the results by sales rep +df.groupby(["sales rep"])["comp"].sum().round(2) +``` + +``` +sales rep +Ansley Cummings 2169.76 +Beth Skiles 3028.60 +Esequiel Schinner 10451.21 +Loring Predovic 10108.60 +Shannen Hudson 5275.66 +Teagan O'Keefe 7989.52 +Trish Deckow 5807.74 +Name: comp, dtype: float64 +``` + +If you are interested, an example notebook is hosted on [github][5]. + +### Conclusion + +Thanks for reading through the article. 
I find that one of the biggest challenges for new users in learning how to use pandas is figuring out how to use their Excel-based knowledge to build an equivalent pandas-based solution. In many cases the pandas solution is going to be more robust, faster, easier to audit and more powerful. However, the learning curve can take some time. I hope that this example showing how to solve a problem using Excel’s Filter tool will be a useful guide for those just starting on this pandas journey. Good luck! + +-------------------------------------------------------------------------------- + +via: http://pbpython.com/excel-filter-edit.html + +作者:[Chris Moffitt ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://pbpython.com/author/chris-moffitt.html +[1]: http://pbpython.com/excel-pandas-comp.html +[2]: http://pbpython.com/excel-pandas-comp-2.html +[3]: http://pandas.pydata.org/pandas-docs/stable/indexing.html +[4]: https://github.com/chris1610/pbpython/blob/master/data/sample-sales-reps.xlsx?raw=true +[5]: https://github.com/chris1610/pbpython/blob/master/notebooks/Commissions-Example.ipynb + From 1298ca2fc50b142ee8bd95ae7de9cb12e22ca2b2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 24 Jul 2016 22:21:01 +0800 Subject: [PATCH 218/471] =?UTF-8?q?20160724-6=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20160628 Python 101 An Intro to urllib.md | 198 ++++++++++++++++++ 1 file changed, 198 insertions(+) create mode 100644 sources/tech/20160628 Python 101 An Intro to urllib.md diff --git a/sources/tech/20160628 Python 101 An Intro to urllib.md b/sources/tech/20160628 Python 101 An Intro to urllib.md new file mode 100644 index 0000000000..38620bf17b --- /dev/null +++ b/sources/tech/20160628 Python 101 An Intro to urllib.md @@ -0,0 +1,198 @@ +Python 101: An Intro to 
urllib
+=================================
+
+The urllib module in Python 3 is a collection of modules that you can use for working with URLs. If you are coming from a Python 2 background, you will note that in Python 2 you had urllib and urllib2. These are now a part of the urllib package in Python 3. The current version of urllib is made up of the following modules:
+
+- urllib.request
+- urllib.error
+- urllib.parse
+- urllib.robotparser
+
+We will be covering each part individually except for urllib.error. The official documentation actually recommends that you might want to check out the 3rd party library, requests, for a higher-level HTTP client interface. However, I believe that it can be useful to know how to open URLs and interact with them without using a 3rd party library, and it may also help you appreciate why the requests package is so popular.
+
+---
+
+### urllib.request
+
+The urllib.request module is primarily used for opening and fetching URLs. Let’s take a look at some of the things you can do with the urlopen function:
+
+```
+>>> import urllib.request
+>>> url = urllib.request.urlopen('https://www.google.com/')
+>>> url.geturl()
+'https://www.google.com/'
+>>> url.info()
+
+>>> header = url.info()
+>>> header.as_string()
+('Date: Fri, 24 Jun 2016 18:21:19 GMT\n'
+ 'Expires: -1\n'
+ 'Cache-Control: private, max-age=0\n'
+ 'Content-Type: text/html; charset=ISO-8859-1\n'
+ 'P3P: CP="This is not a P3P policy!
See ' + 'https://www.google.com/support/accounts/answer/151657?hl=en for more info."\n' + 'Server: gws\n' + 'X-XSS-Protection: 1; mode=block\n' + 'X-Frame-Options: SAMEORIGIN\n' + 'Set-Cookie: ' + 'NID=80=tYjmy0JY6flsSVj7DPSSZNOuqdvqKfKHDcHsPIGu3xFv41LvH_Jg6LrUsDgkPrtM2hmZ3j9V76pS4K_cBg7pdwueMQfr0DFzw33SwpGex5qzLkXUvUVPfe9g699Qz4cx9ipcbU3HKwrRYA; ' + 'expires=Sat, 24-Dec-2016 18:21:19 GMT; path=/; domain=.google.com; HttpOnly\n' + 'Alternate-Protocol: 443:quic\n' + 'Alt-Svc: quic=":443"; ma=2592000; v="34,33,32,31,30,29,28,27,26,25"\n' + 'Accept-Ranges: none\n' + 'Vary: Accept-Encoding\n' + 'Connection: close\n' + '\n') +>>> url.getcode() +200 +``` + +Here we import our module and ask it to open Google’s URL. Now we have an HTTPResponse object that we can interact with. The first thing we do is call the geturl method which will return the URL of the resource that was retrieved. This is useful for finding out if we followed a redirect. + +Next we call info, which will return meta-data about the page, such as headers. Because of this, we assign that result to our headers variable and then call its as_string method. This prints out the header we received from Google. You can also get the HTTP response code by calling getcode, which in this case was 200, which means it worked successfully. + +If you’d like to see the HTML of the page, you can call the read method on the url variable we created. I am not reproducing that here as the output will be quite long. + +Please note that the request object defaults to a GET request unless you specify the data parameter. Should you pass in the data parameter, then the request object will issue a POST request instead. + +--- + +### Downloading a file + +A typical use case for the urllib package is for downloading a file. 
Let’s find out a couple of ways we can accomplish this task: + +``` +>>> import urllib.request +>>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip' +>>> response = urllib.request.urlopen(url) +>>> data = response.read() +>>> with open('/home/mike/Desktop/test.zip', 'wb') as fobj: +... fobj.write(data) +... +``` + +Here we just open a URL that leads us to a zip file stored on my blog. Then we read the data and write it out to disk. An alternate way to accomplish this is to use urlretrieve: + +``` +>>> import urllib.request +>>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip' +>>> tmp_file, header = urllib.request.urlretrieve(url) +>>> with open('/home/mike/Desktop/test.zip', 'wb') as fobj: +... with open(tmp_file, 'rb') as tmp: +... fobj.write(tmp.read()) +``` + +The urlretrieve method will copy a network object to a local file. The file it copies to is randomly named and goes into the temp directory unless you use the second parameter to urlretrieve where you can actually specify where you want the file saved. This will save you a step and make your code much simpler: + +``` +>>> import urllib.request +>>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip' +>>> urllib.request.urlretrieve(url, '/home/mike/Desktop/blog.zip') +('/home/mike/Desktop/blog.zip', + ) +``` + +As you can see, it returns the location of where it saved the file and the header information from the request. + +### Specifying Your User Agent + +When you visit a website with your browser, the browser tells the website who it is. This is called the user-agent string. Python’s urllib identifies itself as Python-urllib/x.y where the x and y are major and minor version numbers of Python. Some websites won’t recognize this user-agent string and will behave in strange ways or not work at all. 
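If you are curious what the default string looks like on your machine, you can inspect it without making a network request. This snippet is my addition rather than something from the original article; the version number in the output will vary with your Python install:

```python
import urllib.request

# build_opener returns an OpenerDirector whose default headers
# include the stock Python-urllib user-agent string
opener = urllib.request.build_opener()
print(opener.addheaders)  # e.g. [('User-agent', 'Python-urllib/3.5')]
```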
Fortunately, it’s easy for you to set up your own custom user-agent string:
+
+```
+>>> import urllib.request
+>>> user_agent = ' Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0'
+>>> url = 'http://www.whatsmyua.com/'
+>>> headers = {'User-Agent': user_agent}
+>>> request = urllib.request.Request(url, headers=headers)
+>>> with urllib.request.urlopen(request) as response:
+...     with open('/home/mdriscoll/Desktop/user_agent.html', 'wb') as out:
+...         out.write(response.read())
+```
+
+Here we set our user agent to Mozilla Firefox and we set our URL to www.whatsmyua.com, which will tell us what it thinks our user-agent string is. Then we create a Request instance using our URL and headers and pass that to urlopen. Finally we save the result. If you open the result file, you will see that we successfully changed our user-agent string. Feel free to try out a few different strings with this code to see how it will change.
+
+---
+
+### urllib.parse
+
+The urllib.parse library is your standard interface for breaking up URL strings and combining them back together. You can use it to convert a relative URL to an absolute URL, for example. Let’s try using it to parse a URL that includes a query:
+
+```
+>>> from urllib.parse import urlparse
+>>> result = urlparse('https://duckduckgo.com/?q=python+stubbing&t=canonical&ia=qa')
+>>> result
+ParseResult(scheme='https', netloc='duckduckgo.com', path='/', params='', query='q=python+stubbing&t=canonical&ia=qa', fragment='')
+>>> result.netloc
+'duckduckgo.com'
+>>> result.geturl()
+'https://duckduckgo.com/?q=python+stubbing&t=canonical&ia=qa'
+>>> result.port
+None
+```
+
+Here we import the urlparse function and pass it a URL that contains a search query for the duckduckgo website. My query was to look up articles on “python stubbing”. As you can see, it returned a ParseResult object that you can use to learn more about the URL.
For example, you can get the port information (None in this case), the network location, path and much more.
+
+### Submitting a Web Form
+
+This module also holds the urlencode method, which is great for passing data to a URL. A typical use case for the urllib.parse library is submitting a web form. Let’s find out how you might do that by having the duckduckgo search engine look for Python:
+
+```
+>>> import urllib.request
+>>> import urllib.parse
+>>> data = urllib.parse.urlencode({'q': 'Python'})
+>>> data
+'q=Python'
+>>> url = 'http://duckduckgo.com/html/'
+>>> full_url = url + '?' + data
+>>> response = urllib.request.urlopen(full_url)
+>>> with open('/home/mike/Desktop/results.html', 'wb') as f:
+...     f.write(response.read())
+```
+
+This is pretty straightforward. Basically we want to submit a query to duckduckgo ourselves using Python instead of a browser. To do that, we need to construct our query string using urlencode. Then we put that together to create a fully qualified URL and use urllib.request to submit the form. We then grab the result and save it to disk.
+
+---
+
+### urllib.robotparser
+
+The robotparser module is made up of a single class, RobotFileParser. This class will answer questions about whether or not a specific user agent can fetch a URL that has a published robots.txt file. The robots.txt file will tell a web scraper or robot what parts of the server should not be accessed. Let’s take a look at a simple example using ArsTechnica’s website:
+
+```
+>>> import urllib.robotparser
+>>> robot = urllib.robotparser.RobotFileParser()
+>>> robot.set_url('http://arstechnica.com/robots.txt')
+None
+>>> robot.read()
+None
+>>> robot.can_fetch('*', 'http://arstechnica.com/')
+True
+>>> robot.can_fetch('*', 'http://arstechnica.com/cgi-bin/')
+False
+```
+
+Here we import the robot parser class and create an instance of it. Then we pass it a URL that specifies where the website’s robots.txt file resides. Next we tell our parser to read the file.
Now that that’s done, we give it a couple of different URLs to find out which ones we can crawl and which ones we can’t. We quickly see that we can access the main site, but not the cgi-bin. + +--- + +### Wrapping Up + +You have reached the point that you should be able to use Python’s urllib package competently. We learned how to download a file, submit a web form, change our user agent and access a robots.txt file in this chapter. The urllib has a lot of additional functionality that is not covered here, such as website authentication. However, you might want to consider switching to the requests library before trying to do authentication with urllib as the requests implementation is a lot easier to understand and debug. I also want to note that Python has support for Cookies via its http.cookies module although that is also wrapped quite well in the requests package. You should probably consider trying both to see which one makes the most sense to you. + +-------------------------------------------------------------------------------- + +via: http://www.blog.pythonlibrary.org/2016/06/28/python-101-an-intro-to-urllib/ + +作者:[Mike][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.blog.pythonlibrary.org/author/mld/ + + + + + + + From c08fa449ffea3b606d142e12267435ca801655fd Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Sun, 24 Jul 2016 22:26:30 +0800 Subject: [PATCH 219/471] sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md --- ...0160706 Doing for User Space What We Did for Kernel Space.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md b/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md index 72d79aaede..743114c481 100644 --- a/sources/tech/20160706 Doing for User Space What We Did for Kernel 
Space.md +++ b/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md @@ -1,3 +1,5 @@ +MikeCoder Translating... + Doing for User Space What We Did for Kernel Space ======================================================= From 62559cb71df24ed3190ffb0baab6f4b510d4bc27 Mon Sep 17 00:00:00 2001 From: maywanting Date: Sun, 24 Jul 2016 23:16:10 +0800 Subject: [PATCH 220/471] 20160721 5 tricks for getting started with Vim.md --- sources/tech/20160721 5 tricks for getting started with Vim.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160721 5 tricks for getting started with Vim.md b/sources/tech/20160721 5 tricks for getting started with Vim.md index 8f622770f5..417b51ed45 100644 --- a/sources/tech/20160721 5 tricks for getting started with Vim.md +++ b/sources/tech/20160721 5 tricks for getting started with Vim.md @@ -1,3 +1,5 @@ +maywanting + 5 tricks for getting started with Vim ===================================== From 5baefa4cef06ad73c8e4c943b49aeb1dc312d0ad Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 25 Jul 2016 09:57:14 +0800 Subject: [PATCH 221/471] PUB:Part 12 - How to Explore Linux with Installed Help Documentations and Tools MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @kokialoves 辛苦了,有两点需要注意:1、标点符号要用中文的;2、忠实原文。 --- ...Installed Help Documentations and Tools.md | 184 ++++++++++++++++++ ...Installed Help Documentations and Tools.md | 178 ----------------- 2 files changed, 184 insertions(+), 178 deletions(-) create mode 100644 published/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md delete mode 100644 translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md diff --git a/published/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/published/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md new file mode 100644 index 
0000000000..bc5e764637
--- /dev/null
+++ b/published/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md
@@ -0,0 +1,184 @@
+LFCS 系列第十二讲:如何使用 Linux 的帮助文档和工具
+==================================================================================
+
+由于 2016 年 2 月 2 号开始启用了新的 LFCS 考试要求,我们在 [LFCS 系列][1]中添加了一些必要的内容。为了考试的需要,我们强烈建议你看一下 [LFCE 系列][2]。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png)
+
+*LFCS: 了解 Linux 的帮助文档和工具*
+
+当你习惯了在命令行下进行工作,你会发现 Linux 已经有了许多使用和配置 Linux 系统所需要的文档。
+
+另一个你必须熟悉命令行帮助工具的理由是,在 [LFCS][3] 和 [LFCE][4] 考试中,它们是你唯一能够使用的信息来源,没有互联网也没有百度。你只能依靠你自己和命令行。
+
+基于上面的理由,在这一章里我们将给你一些建议,让你可以有效地使用这些已安装的文档和工具,以帮助你通过 **Linux 基金会认证**考试。
+
+### Linux 帮助手册(man)
+
+man 是 manual(手册)的缩写,正如其名字所揭示的那样:一个给定工具的帮助手册。它包含了命令所支持的选项列表(以及解释),有些工具甚至还提供一些使用范例。
+
+我们用 **man 命令** 跟上你想要了解的工具名称来打开一个帮助手册。例如:
+
+```
+# man diff
+```
+
+这将打开 `diff` 的手册页,这个工具将逐行对比文本文件(如你想退出只需要轻轻地点一下 q 键)。
+
+下面我来比较两个文本文件 `file1` 和 `file2`。这两个文本文件包含了使用同一个 Linux 发行版相同版本安装的两台机器上的安装包列表。
+
+输入 `diff` 命令,它将告诉我们 `file1` 和 `file2` 有什么不同:
+
+```
+# diff file1 file2
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png)
+
+*在 Linux 中比较两个文本文件*
+
+`<` 这个符号是说 `file2` 缺失的行。如果是 `file1` 缺失,我们将用 `>` 符号来替代指示。
+
+另外,**7d6** 意思是说 `file1` 的第 **7** 行要删除了才能和 `file2` 一致(**24d22** 和 **41d38** 也是同样的意思),**65,67d61** 告诉我们需要删除从第 **65** 行到第 **67** 行。完成了以上步骤,这两个文件就完全一致了。
+
+此外,根据 man 手册说明,你还可以通过 `-y` 选项来以并排两列的方式显示文件。你可以发现这对于你找到两个文件间的不同更加方便容易。
+
+```
+# diff -y file1 file2
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png)
+
+*比较并列出两个文件的不同*
+
+此外,你也可以用 `diff` 来比较两个二进制文件。如果它们完全一样,`diff` 将什么也不会输出。否则,它将会返回如下信息:“**Binary files X and Y differ**”。
+
+### --help 选项
+
+`--help` 选项,大多数命令都支持它(并不是所有),它可以理解为一个命令的简短帮助手册。尽管它没有提供工具的详细介绍,但确实是一个能够快速列出程序所支持的选项的不错的方法。
+
+例如,
+
+```
+# sed --help
+```
+
+将显示 sed(流编辑器)所支持的每个选项。
+
+`sed` 命令的一个典型用法是替换文件中的字符。用 `-i` 选项(意思是
“**原地编辑文件**”),你可以编辑一个文件而不需要打开它。如果你想要同时备份一个原始文件,用 `-i` 选项加后缀来创建一个原始文件的副本。
+
+例如,替换 `lorem.txt` 中的 `Lorem` 为 `Tecmint`(忽略大小写),并且创建一个原文件的备份副本,命令如下:
+
+```
+# less lorem.txt | grep -i lorem
+# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt
+# less lorem.txt | grep -i lorem
+# less lorem.txt.orig | grep -i lorem
+```
+
+请注意 `lorem.txt` 文件中的 `Lorem` 都已经替换为 `Tecmint`,并且原文件 `lorem.txt` 被保存为 `lorem.txt.orig`。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png)
+
+*替换文件中的文本*
+
+### /usr/share/doc 内的文档
+
+这可能是我最喜欢的方法。如果你进入 `/usr/share/doc` 目录并列出该目录,你可以看到许多以已安装在你的 Linux 上的工具为名称的文件夹。
+
+根据[文件系统层级标准][5],这些文件夹包含了许多帮助手册所没有的信息,还有一些可以使配置更方便的模板和配置文件。
+
+例如,让我们来看一下 `squid-3.3.8`(不同发行版的版本可能会不同),这是一个非常受欢迎的 HTTP 代理和 [squid 缓存服务器][6]。
+
+让我们用 `cd` 命令进入目录:
+
+```
+# cd /usr/share/doc/squid-3.3.8
+```
+
+列出当前文件夹列表:
+
+```
+# ls
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png)
+
+*使用 ls 列出目录*
+
+你应该特别注意 `QUICKSTART` 和 `squid.conf.documented`。这两个文件分别包含了 Squid 的详细文档及其经过详细备注的配置文件。对于别的安装包来说,具体的名字可能不同(有可能是 **QuickRef** 或者 **00QUICKSTART**),但意思是一样的。
+
+对于另外一些安装包,比如 Apache web 服务器,在 `/usr/share/doc` 目录里提供了配置模板,当你配置独立服务器或者虚拟主机的时候会非常有用。
+
+### GNU 信息文档
+
+你可以把它看做帮助手册的“开挂版”。它不仅仅提供工具的帮助信息,而且还是超级链接的形式(没错,在命令行中的超级链接),你可以通过箭头按钮从一个章节导航到另外一个章节,并按下回车按钮来确认。
+
+一个典型的例子是:
+
+```
+# info coreutils
+```
+
+因为 coreutils 包含了每个系统中都有的基本文件、shell 和文本处理工具,你自然可以从 coreutils 的 info 文档中得到它们的详细介绍。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png)
+
+*Info Coreutils*
+
+和帮助手册一样,你可以按 q 键退出。
+
+此外,GNU info 还可以显示标准的帮助手册。例如:
+
+```
+# info tune2fs
+```
+
+它将显示 **tune2fs** 的帮助手册,这是一个 ext2/3/4 文件系统管理工具。
+
+现在我们看过了文档,再来试试怎么用 **tune2fs**:
+
+显示 **/dev/mapper/vg00-vol_backups** 文件系统信息:
+
+```
+# tune2fs -l /dev/mapper/vg00-vol_backups
+```
+
+修改文件系统标签(修改为 Backups):
+
+```
+# tune2fs -L Backups /dev/mapper/vg00-vol_backups
+```
+
+设置文件系统的自检间隔及挂载计数(用 `-c` 选项设置挂载计数间隔,用 `-i` 选项设置自检时间间隔,这里 **d 表示天,w 表示周,m 表示月**)。
+
+```
+# tune2fs -c 150
/dev/mapper/vg00-vol_backups # 每 150 次挂载检查一次 +# tune2fs -i 6w /dev/mapper/vg00-vol_backups # 每 6 周检查一次 +``` + +以上这些内容也可以通过 `--help` 选项找到,或者查看帮助手册。 + +### 摘要 + +不管你选择哪种方法,知道并且会使用它们在考试中对你是非常有用的。你知道其它的一些方法吗? 欢迎给我们留言。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/explore-linux-installed-help-documentation-and-tools/ + +作者:[Gabriel Cánepa][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]: https://linux.cn/article-7161-1.html +[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[3]: https://linux.cn/article-7161-1.html +[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[5]: https://linux.cn/article-6132-1.html +[6]: http://www.tecmint.com/configure-squid-server-in-linux/ +[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ + diff --git a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md deleted file mode 100644 index 2d0238becf..0000000000 --- a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md +++ /dev/null @@ -1,178 +0,0 @@ -LFCS第十二讲: 如何使用Linux的帮助文档和工具 -================================================================================== - -由于 2016 年 2 月 2 号开始启用了新的 LFCS 考试要求, 我们在[LFCS series][1]系列添加了一些必要的内容 . 为了考试的需要, 我们强烈建议你看一下[LFCE series][2] . - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png) ->LFCS: 了解Linux的帮助文档和工具 - -当你习惯了在命令行下进行工作, 你会发现Linux有许多文档需要你去使用和配置Linux系统. 
- -另一个你必须熟悉命令行帮助工具的理由是,在[LFCS][3] 和 [LFCE][4] 考试中, 你只能靠你自己和命令行工具,没有互联网也没有百度。 - -基于上面的理由, 在这一章里我们将给你一些建议来帮助你通过**Linux Foundation Certification** 考试. - -### Linux 帮助手册 - -man命令, 大体上来说就是一个工具手册. 它包含选项列表(和解释) , 甚至还提供一些例子. - -我们用**man command** 加工具名称来打开一个帮助手册以便获取更多内容. 例如: - -``` -# man diff -``` - -我们将打开`diff`的手册页, 这个工具将一行一行的对比文本文档 (如你想退出只需要轻轻的点一下Q键). - -下面我来比较两个文本文件 `file1` 和 `file2` . 这两个文本文件包含着相同版本Linux的安装包信息. - -输入`diff` 命令它将告诉我们 `file1` 和`file2` 有什么不同: - -``` -# diff file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png) ->在Linux中比较两个文本文件 - -`<` 这个符号是说`file2`少一行. 如果是 `file1`少一行, 我们将用 `>` 符号来替代. - -接下来说, **7d6** 意思是说 文件1的**#7**行在 `file2`中被删除了 ( **24d22** 和**41d38**是同样的意思), 65,67d61告诉我们移动 **65** 到 **67** . 我们把以上步骤都做了两个文件将完全匹配. - -你还可以通过 `-y` 选项来对比两个文件: - -``` -# diff -y file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png) ->通过列表来列出两个文件的不同 - - 当然你也可以用`diff`来比较两个二进制文件 . 如果它们完全一样, `diff` 将什么也不会输出. 否则, 他将会返回如下信息: “**Binary files X and Y differ**”. - -### –help 选项 - -`--help`选项 , 大多数命令都可以用它(并不是所有) , 他可以理解为一个命令的简单介绍. 尽管它不提供工具的详细介绍, 但是确实是一个能够快速列出程序使用信息的不错的方法. - -例如, - -``` -# sed --help -``` - -显示 sed 的每个选项的用法(sed文本流编辑器). - -一个经典的`sed`例子,替换文件字符. 用 `-i` 选项 (描述为 “**编辑文件在指定位置**”), 你可以编辑一个文件而且并不需要打开他. 如果你想要备份一个原始文件, 用 `-i` 选项 加后缀来创建一个原始文件的副本. - -例如, 替换 `lorem.txt`中的`Lorem` 为 `Tecmint` (忽略大小写) 并且创建一个新的原始文件副本, 命令如下: - -``` -# less lorem.txt | grep -i lorem -# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt -# less lorem.txt | grep -i lorem -# less lorem.txt.orig | grep -i lorem -``` - - 请注意`lorem.txt`文件中`Lorem` 都已经替换为 `Tecmint` , 并且原始的 `lorem.txt` 保存为`lorem.txt.orig`. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png) ->替换文件文本 - -### /usr/share/doc内的文档 -这可能是我最喜欢的方法. 如果你进入 `/usr/share/doc` 目录, 你可以看到好多Linux已经安装的工具的名称的文件夹. - -根据[Filesystem Hierarchy Standard][5](文件目录标准),这些文件夹包含了许多帮助手册没有的信息, 还有一些模板和配置文件. 
- -例如, 让我们来看一下 `squid-3.3.8` (版本可能会不同) 一个非常受欢迎的HTTP代理[squid cache server][6]. - -让我们用`cd`命令进入目录 : - -``` -# cd /usr/share/doc/squid-3.3.8 -``` - -列出当前文件夹列表: - -``` -# ls -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png) ->ls linux列表命令 - -你应该特别注意 `QUICKSTART` 和 `squid.conf.documented`. 这两个文件包含了Squid许多信息, . 对于别的安装包来说, 他们的名字可能不同 (有可能是 **QuickRef** 或者**00QUICKSTART**), 但原理是一样的. - -对于另外一些安装包, 比如 the Apache web server, 在`/usr/share/doc`目录提供了配置模板, 当你配置独立服务器或者虚拟主机的时候会非常有用. - -### GNU 信息文档 - -你可以把它想象为帮助手册的超级链接形式. 正如上面说的, 他不仅仅提供工具的帮助信息, 而且还是超级链接的形式(是的!在命令行中的超级链接) 你可以通过箭头按钮和回车按钮来浏览你需要的内容. - -一个典型的例子是: - -``` -# info coreutils -``` - -通过coreutils 列出当前系统的 基本文件,shell脚本和文本处理工具[basic file, shell and text manipulation utilities][7] , 你可以得到他们的详细介绍. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png) ->Info Coreutils - -和帮助手册一样你可以按Q键退出. - -此外, GNU info 还可以像帮助手册一样使用. 例如: - -``` -# info tune2fs -``` - -它将显示 **tune2fs**的帮助手册, ext2/3/4 文件系统管理工具. - -让我们来看看怎么用**tune2fs**: - -显示 **/dev/mapper/vg00-vol_backups**文件系统信息: - -``` -# tune2fs -l /dev/mapper/vg00-vol_backups -``` - -修改文件系统标签 (修改为Backups): - -``` -# tune2fs -L Backups /dev/mapper/vg00-vol_backups -``` - -设置 `/` 自检的挂载次数 (用`-c` 选项设置 `/`的自检的挂载次数 或者用 `-i` 选项设置 自检时间 **d=days, w=weeks, and m=months**). - -``` -# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts -# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks -``` - -以上这些内容也可以通过 `--help` 选项找到, 或者查看帮助手册. - -### 摘要 - -不管你选择哪种方法,知道并且会使用它们在考试中对你是非常有用的. 你知道其它的一些方法吗? 欢迎给我们留言. 
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[kokialoves](https://github.com/kokialoves) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/ -[6]: http://www.tecmint.com/configure-squid-server-in-linux/ -[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[8]: From e8690b1517c51fb2074de4324bf52d944309f79d Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 25 Jul 2016 10:00:24 +0800 Subject: [PATCH 222/471] =?UTF-8?q?20160725-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...O CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) create mode 100644 sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md diff --git a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md new file mode 100644 index 0000000000..37692dfd7b --- /dev/null +++ b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md @@ -0,0 +1,67 @@ +HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU +============================================== + 
+![](https://itsfoss.com/wp-content/uploads/2016/07/change-default-applications-ubuntu.jpg)
+
+Brief: This beginner’s guide shows you how to change the default applications in Ubuntu Linux.
+
+Installing [VLC media player][1] is one of the first few [things to do after installing Ubuntu 16.04][2] for me. One thing I do after installing VLC is to make it the default application so that I can open a video file with VLC when I double-click it.
+
+As a beginner, you may need to know how to change any default application in Ubuntu, and this is what I am going to show you in today’s tutorial.
+
+### CHANGE DEFAULT APPLICATIONS IN UBUNTU
+
+The methods mentioned here are valid for all versions of Ubuntu, be it Ubuntu 12.04, Ubuntu 14.04 or Ubuntu 16.04. There are basically two ways you can change the default applications in Ubuntu:
+
+- via system settings
+- via right click menu
+
+#### 1. CHANGE DEFAULT APPLICATIONS IN UBUNTU FROM SYSTEM SETTINGS
+
+Go to Unity Dash and search for System Settings:
+
+![](https://itsfoss.com/wp-content/uploads/2013/11/System_Settings_Ubuntu.jpeg)
+
+In the System Settings, click on the Details option:
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-detail-ubuntu.jpeg)
+
+In here, from the left side pane, select Default Applications. You will see the options to change the default applications in the right side pane.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-default-applications.jpeg)
+
+As you can see, there are only a few kinds of default applications that can be changed here. You can change the default applications for the web browser, email client, calendar app, music, videos and photos here. What about other kinds of applications?
+
+Don’t worry. To change the default applications of other kinds, we’ll use the option in the right click menu.
+
+#### 2.
CHANGE DEFAULT APPLICATIONS IN UBUNTU FROM RIGHT CLICK MENU + +If you have ever used Windows, you might be aware of the “open with” option in the right click menu that allows changing the default applications. We have something similar in Ubuntu Linux as well. + +Right click on the file that you want to open in a non-default application. Go to properties. + +![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) +>Select Properties from Right Click menu + +And in here, you can select the application that you want to use and set it as default. + +![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) +>Making gThumb the default application for WebP images in Ubuntu + +Easy peasy. Isn’t it? Once you do that, all the files of the same kind will be opened with your chosen default application. + +I hope you found this beginners tutorial to change default applications in Ubuntu helpful. If you have any questions or suggestions, feel free to drop a comment below. 
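Behind the scenes, both of these GUI methods record your choice in a plain-text file, `~/.config/mimeapps.list`, which maps a MIME type to a `.desktop` file. If you are comfortable with a text editor, you can make the same change by hand — a minimal sketch (the desktop-file names below are examples and may differ on your system):

```
[Default Applications]
video/mp4=vlc.desktop
image/webp=gthumb.desktop
```

The same association can also be set from the terminal with `xdg-mime default vlc.desktop video/mp4` (from the xdg-utils package), which is handy when scripting a fresh install.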
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/change-default-applications-ubuntu/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 + +作者:[Abhishek Prakash][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: http://www.videolan.org/vlc/index.html +[2]: https://itsfoss.com/things-to-do-after-installing-ubuntu-16-04/ From 97131e94b16eb49615197b87d2c8a270346502bf Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 25 Jul 2016 10:10:34 +0800 Subject: [PATCH 223/471] =?UTF-8?q?20160725-2=20=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md diff --git a/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md new file mode 100644 index 0000000000..4232e6cda8 --- /dev/null +++ b/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md @@ -0,0 +1,71 @@ +GNU KHATA: OPEN SOURCE ACCOUNTING SOFTWARE +============================================ + +Being an active Linux enthusiast, I usually introduce my friends to Linux, help them choose the best distro to suit their needs, and finally get them set with open source alternative software for their work. + +But in one case, I was pretty helpless. My uncle, who is a freelance accountant, uses a set of some pretty sophisticated paid software for work. And I wasn’t sure if I’d find anything under FOSS for him, until Yesterday. + +Abhishek was suggesting me some [cool apps][1] to check out and this particular one, GNU Khata stuck out. 
+
+[GNU Khata][2] is an accounting tool. Or shall I say a collection of accounting tools? It is like the [Evernote][3] of money management. It is so versatile that it can be used for anything from personal finance management to large scale business management, from store inventory management to corporate tax work.
+
+One interesting fact for you. ‘Khata’ in Hindi and other Indian languages means account, and hence this accounting software is called GNU Khata.
+
+### INSTALLATION
+
+There are many installation instructions floating around the internet which actually install the older web app version of GNU Khata. Currently, GNU Khata is available only for Debian/Ubuntu and their derivatives. I suggest you follow the steps given on the official GNU Khata website to install the updated standalone version. Let me give them out real quick.
+
+- Download the installer [here][4].
+- Open the terminal in the download location.
+- Copy and paste the code below into the terminal and run it.
+
+```
+sudo chmod 755 GNUKhatasetup.run
+sudo ./GNUKhatasetup.run
+```
+
+- That’s it. Open GNU Khata from the dash or the application menu.
+
+### FIRST LAUNCH
+
+GNU Khata opens up in the browser and displays the following page.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-1.jpg)
+
+Fill in the organization name, case and organization type, financial year and click on proceed to go to the admin setup page.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-2.jpg)
+
+Carefully feed in your name, password, security question and the answer and click on “create and login”.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-3.jpg)
+
+You’re all set now. Use the menu bar to start using GNU Khata to manage your finances. It’s that easy.
+
+### DOES GNU KHATA REALLY RIVAL THE PAID ACCOUNTING SOFTWARE IN THE MARKET?
+
+To begin with, GNU Khata keeps it all simple. The menu bar up top is very conveniently organized to help you work faster and better.
You can choose to manage different accounts and projects and access them easily. [Their Website][5] states that GNU Khata can be “easily transformed into Indian languages”. Also, did you know that GNU Khata can be used on the cloud too? + +All the major accounting tools like ledgers, project statements, statement of affairs etc are formatted in a professional manner and are made available in both instantly presentable as well as customizable formats. It makes accounting and inventory management look so easy. + +The Project is very actively evolving, seeks feedback and guidance from practicing accountants to make improvements in the software. Considering the maturity, ease of use and the absence of a price tag, GNU Khata can be the perfect assistant in bookkeeping. + +Let us know what you think about GNU Khata in the comments below. + + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/using-gnu-khata/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 + +作者:[Aquil Roshan][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/aquil/ +[1]: https://itsfoss.com/category/apps/ +[2]: http://www.gnukhata.in/ +[3]: https://evernote.com/ +[4]: https://cloud.openmailbox.org/index.php/s/L8ppsxtsFq1345E/download +[5]: http://www.gnukhata.in/ From 62b8fb40c76e1d3b689a100bdbe4474c1f5c6947 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Mon, 25 Jul 2016 12:03:23 +0800 Subject: [PATCH 224/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=EF=BC=9A20160711=20Getting=20started=20with=20Git.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20160711 Getting started with Git.md | 141 ------------------ .../tech/20160711 Getting started with Git.md | 
130 ++++++++++++++++ 2 files changed, 130 insertions(+), 141 deletions(-) delete mode 100644 sources/tech/20160711 Getting started with Git.md create mode 100644 translated/tech/20160711 Getting started with Git.md diff --git a/sources/tech/20160711 Getting started with Git.md b/sources/tech/20160711 Getting started with Git.md deleted file mode 100644 index b748a160da..0000000000 --- a/sources/tech/20160711 Getting started with Git.md +++ /dev/null @@ -1,141 +0,0 @@ -Being translated by ChrisLeeGit - -Getting started with Git -========================= - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) ->Image by : opensource.com - - -In the introduction to this series we learned who should use Git, and what it is for. Today we will learn how to clone public Git repositories, and how to extract individual files without cloning the whole works. - -Since Git is so popular, it makes life a lot easier if you're at least familiar with it at a basic level. If you can grasp the basics (and you can, I promise!), then you'll be able to download whatever you need, and maybe even contribute stuff back. And that, after all, is what open source is all about: having access to the code that makes up the software you run, the freedom to share it with others, and the right to change it as you please. Git makes this whole process easy, as long as you're comfortable with Git. - -So let's get comfortable with Git. - -### Read and write - -Broadly speaking, there are two ways to interact with a Git repository: you can read from it, or you can write to it. It's just like a file: sometimes you open a document just to read it, and other times you open a document because you need to make changes. - -In this article, we'll cover reading from a Git repository. We'll tackle the subject of writing back to a Git repository in a later article. - -### Git or GitHub? 
- -A word of clarification: Git is not the same as GitHub (or GitLab, or Bitbucket). Git is a command-line program, so it looks like this: - -``` -$ git -usage: Git [--version] [--help] [-C ] - [-p | --paginate | --no-pager] [--bare] - [--Git-dir=] [] - -``` - -As Git is open source, lots of smart people have built infrastructures around it which, in themselves, have become very popular. - -My articles about Git teach pure Git first, because if you understand what Git is doing then you can maintain an indifference to what front end you are using. However, my articles also include common ways of accomplishing each task through popular Git services, since that's probably what you'll encounter first. - -### Installing Git - -To install Git on Linux, grab it from your distribution's software repository. BSD users should find Git in the Ports tree, in the devel section. - -For non-open source operating systems, go to the [project site][1] and follow the instructions. Once installed, there should be no difference between Linux, BSD, and Mac OS X commands. Windows users will have to adapt Git commands to match the Windows file system, or install Cygwin to run Git natively, without getting tripped up by Windows file system conventions. - -### Afternoon tea with Git - -Not every one of us needs to adopt Git into our daily lives right away. Sometimes, the most interaction you have with Git is to visit a repository of code, download a file or two, and then leave. On the spectrum of getting to know Git, this is more like afternoon tea than a proper dinner party. You make some polite conversation, you get the information you need, and then you part ways without the intention of speaking again for at least another three months. - -And that's OK. - -Generally speaking, there are two ways to access Git: via command line, or by any one of the fancy Internet technologies providing quick and easy access through the web browser. 
- -Say you want to install a trash bin for use in your terminal because you've been burned one too many times by the rm command. You've heard about Trashy, which calls itself "a sane intermediary to the rm command", and you want to look over its documentation before you install it. Lucky for you, [Trashy is hosted publicly on GitLab.com][2]. - -### Landgrab - -The first way we'll work with this Git repository is a sort of landgrab method: we'll clone the entire thing, and then sort through the contents later. Since the repository is hosted with a public Git service, there are two ways to do this: on the command line, or through a web interface. - -To grab an entire repository with Git, use the git clone command with the URL of the Git repository. If you're not clear on what the right URL is, the repository should tell you. GitLab gives you a copy-and-paste repository URL [for Trashy][3]. - -![](https://opensource.com/sites/default/files/1_gitlab-url.jpg) - -You might notice that on some services, both SSH and HTTPS links are provided. You can use SSH only if you have write permissions to the repository. Otherwise, you must use the HTTPS URL. - -Once you have the right URL, cloning the repository is pretty simple. Just git clone the URL, and optionally name the directory to clone it into. The default behaviour is to clone the git directory to your current directory; for example, 'trashy.git' gets put in your current location as 'trashy'. I use the .clone extension as a shorthand for repositories that are read-only, and the .git extension as shorthand for repositories I can read and write, but that's not by any means an official mandate. - -``` -$ git clone https://gitlab.com/trashy/trashy.git trashy.clone -Cloning into 'trashy.clone'... -remote: Counting objects: 142, done. -remote: Compressing objects: 100% (91/91), done. -remote: Total 142 (delta 70), reused 103 (delta 47) -Receiving objects: 100% (142/142), 25.99 KiB | 0 bytes/s, done. 
-Resolving deltas: 100% (70/70), done. -Checking connectivity... done. -``` - -Once the repository has been cloned successfully, you can browse files in it just as you would any other directory on your computer. - -The other way to get a copy of the repository is through the web interface. Both GitLab and GitHub provide a snapshot of any repository in a .zip file. GitHub has a big green download button, but on GitLab, look for an inconspicuous download button on the far right of your browser window: - -![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) - -### Pick and choose - -An alternate method of obtaining a file from a Git repository is to find the file you're after and pluck it right out of the repository. This method is only supported via web interfaces, which is essentially you looking at someone else's clone of a repository; you can think of it as a sort of HTTP shared directory. - -The problem with using this method is that you might find that certain files don't actually exist in a raw Git repository, as a file might only exist in its complete form after a make command builds the file, which won't happen until you download the repository, read the README or INSTALL file, and run the command. Assuming, however, that you are sure a file does exist and you just want to go into the repository, grab it, and walk away, you can do that. - -In GitLab and GitHub, click the Files link for a file view, view the file in Raw mode, and use your web browser's save function, e.g. in Firefox, File > Save Page As. In a GitWeb repository (a web view of personal git repositories used some who prefer to host git themselves), the Raw view link is in the file listing view. - -![](https://opensource.com/sites/default/files/1_webgit-file.jpg) - -### Best practices - -Generally, cloning an entire Git repository is considered the right way of interacting with Git. There are a few reasons for this. 
Firstly, a clone is easy to keep updated with the git pull command, so you won't have to keep going back to some web site for a new copy of a file each time an improvement has been made. Secondly, should you happen to make an improvement yourself, then it is easier to submit those changes to the original author if it is all nice and tidy in a Git repository. - -For now, it's probably enough to just practice going out and finding interesting Git repositories and cloning them to your drive. As long as you know the basics of using a terminal, then it's not hard to do. Don't know the basics of terminal usage? Give me five more minutes of your time. - -### Terminal basics - -The first thing to understand is that all files have a path. That makes sense; if I told you to open a file for me on a regular non-terminal day, you'd have to get to where that file is on your drive, and you'd do that by navigating a bunch of computer windows until you reached that file. For example, maybe you'd click your home directory > Pictures > InktoberSketches > monkey.kra. - -In that scenario, we could say that the file monkeysketch.kra has the path $HOME/Pictures/InktoberSketches/monkey.kra. - -In the terminal, unless you're doing special sysadmin work, your file paths are generally going to start with $HOME (or, if you're lazy, just the ~ character) followed by a list of folders up to the filename itself. This is analogous to whatever icons you click in your GUI to reach the file or folder. - -If you want to clone a Git repository into your Documents directory, then you could open a terminal and run this command: - -``` -$ git clone https://gitlab.com/foo/bar.git -$HOME/Documents/bar.clone -``` - -Once that is complete, you can open a file manager window, navigate to your Documents folder, and you'll find the bar.clone directory waiting for you. 
- -If you want to get a little more advanced, you might revisit that repository at some later date, and try a git pull to see if there have been updates to the project: - -``` -$ cd $HOME/Documents/bar.clone -$ pwd -bar.clone -$ git pull -``` - -For now, that's all the terminal commands you need to get started, so go out and explore. The more you do it, the better you get at it, and that is, at least give or take a vowel, the name of the game. - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/stumbling-git - -作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]: https://git-scm.com/download -[2]: https://gitlab.com/trashy/trashy -[3]: https://gitlab.com/trashy/trashy.git - diff --git a/translated/tech/20160711 Getting started with Git.md b/translated/tech/20160711 Getting started with Git.md new file mode 100644 index 0000000000..85c159945d --- /dev/null +++ b/translated/tech/20160711 Getting started with Git.md @@ -0,0 +1,130 @@ +Git 入门指南 +========================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) +>Image by : opensource.com + +在这个系列的介绍中,我们学习到了谁应该使用 Git,以及 Git 是用来做什么的。今天,我们将学习如何克隆公共的 Git 仓库,以及如何提取出独立的文件而不用克隆整个仓库。 + +由于 Git 如此流行,因而如果你能够至少熟悉一些基础的 Git 知识也能为你的生活带来很多便捷。如果你可以掌握 Git 基础(你可以的,我发誓!),那么你将能够下载任何你需要的东西,甚至还可能做一些贡献作为回馈。毕竟,那就是开源的精髓所在:你拥有获取你使用的软件代码的权利,拥有和他人分享的自由,以及只要你愿意就可以修改它的权利。只要你熟悉了 Git,它就可以让这一切都变得很容易。 + +那么,让我们一起来熟悉 Git 吧。 + +### 读和写 +一般来说,有两种方法可以和 Git 仓库交互:你可以从仓库中读取,或者你也能够向仓库中写入。它就像一个文件:有时候你打开一个文档只是为了阅读它,而其它时候你打开文档是因为你需要做些改动。 + +本文仅讲解如何从 Git 仓库读取。我们将会在后面的一篇文章中讲解如何向 Git 仓库写回的主题。 + +### Git 还是 GitHub? 
+一句话澄清:Git 不同于 GitHub(或 GitLab,或 Bitbucket)。Git 是一个命令行程序,所以它就像下面这样: + +``` +$ git +usage: Git [--version] [--help] [-C ] + [-p | --paginate | --no-pager] [--bare] + [--Git-dir=] [] + +``` + +由于 Git 是开源的,所以就有许多聪明人围绕它构建了基础软件;这些基础软件,包括在他们自己身边,都已经变得非常流行了。 + +我的文章系列将首先教你纯粹的 Git 知识,因为一旦你理解了 Git 在做什么,那么你就无需关心正在使用的前端工具是什么了。然而,我的文章系列也将涵盖通过流行的 Git 服务完成每项任务的常用方法,因为那些将可能是你首先会遇到的。 + +### 安装 Git +在 Linux 系统上,你可以从所使用的发行版软件仓库中获取并安装 Git。BSD 用户应当在 Ports 树的 devel 部分查找 Git。 + +对于闭源的操作系统,请前往 [项目网站][1] 并根据说明安装。一旦安装后,在 Linux、BSD 和 Mac OS X 上的命令应当没有任何差别。Windows 用户需要调整 Git 命令,从而和 Windows 文件系统相匹配,或者安装 Cygwin 以原生的方式运行 Git,而不受 Windows 文件系统转换问题的羁绊。 + +### 下午茶和 Git +并非每个人都需要立刻将 Git 加入到我们的日常生活中。有些时候,你和 Git 最多的交互就是访问一个代码库,下载一两个文件,然后就不用它了。以这样的方式看待 Git,它更像是下午茶而非一次正式的宴会。你进行一些礼节性的交谈,获得了需要的信息,然后你就会离开,至少接下来的三个月你不再想这样说话。 + +当然,那是可以的。 + +一般来说,有两种方法访问 Git:使用命令行,或者使用一种神奇的因特网技术通过 web 浏览器快速轻松地访问。 + +假设你想要在终端中安装并使用一个回收站,因为你已经被 rm 命令毁掉太多次了。你已经听说过 Trashy 了,它称自己为「理智的 rm 命令媒介」,并且你想在安装它之前阅读它的文档。幸运的是,[Trashy 公开地托管在 GitLab.com][2]。 + +### Landgrab +我们工作的第一步是对这个 Git 仓库使用 landgrab 排序方法:我们会克隆这个完整的仓库,然后会根据内容排序。由于该仓库是托管在公共的 Git 服务平台上,所以有两种方式来完成工作:使用命令行,或者使用 web 界面。 + +要想使用 Git 获取整个仓库,就要使用 git clone 命令和 Git 仓库的 URL 作为参数。如果你不清楚正确的 URL 是什么,仓库应该会告诉你的。GitLab 为你提供了 [Trashy][3] 仓库的拷贝-粘贴 URL。 + +![](https://opensource.com/sites/default/files/1_gitlab-url.jpg) + +你也许注意到了,在某些服务平台上,会同时提供 SSH 和 HTTPS 链接。只有当你拥有仓库的写权限时,你才可以使用 SSH。否则的话,你必须使用 HTTPS URL。 + +一旦你获得了正确的 URL,克隆仓库是非常容易的。就是 git clone 这个 URL 即可,可选项是可以指定要克隆到的目录。默认情况下会将 git 目录克隆到你当前所在的位置;例如,'trashy.git' 表示将仓库克隆到你当前位置的 'trashy' 目录。我使用 .clone 扩展名标记那些只读的仓库,使用 .git 扩展名标记那些我可以读写的仓库,但那无论如何也不是官方要求的。 + +``` +$ git clone https://gitlab.com/trashy/trashy.git trashy.clone +Cloning into 'trashy.clone'... +remote: Counting objects: 142, done. +remote: Compressing objects: 100% (91/91), done. +remote: Total 142 (delta 70), reused 103 (delta 47) +Receiving objects: 100% (142/142), 25.99 KiB | 0 bytes/s, done. +Resolving deltas: 100% (70/70), done. +Checking connectivity... done. 
+``` + +一旦成功地克隆了仓库,你就可以像对待你电脑上任何其它目录那样浏览仓库中的文件。 + +另外一种获得仓库拷贝的方式是使用 web 界面。GitLab 和 GitHub 都会提供一个 .zip 格式的仓库快照文件。GitHub 有一个大的绿色下载按钮,但是在 GitLab 中,可以浏览器的右侧找到并不显眼的下载按钮。 + +![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) + +### 挑选和选择 +另外一种从 Git 仓库中获取文件的方法是找到你想要的文件,然后把它从仓库中拽出来。只有 web 界面才提供这种方法,本质上来说,你看到的是别人仓库的克隆;你可以把它想象成一个 HTTP 共享目录。 + +使用这种方法的问题是,你也许会发现某些文件并不存在于原始仓库中,因为完整形式的文件可能只有在执行 make 命令后才能构建,那只有你下载了完整的仓库,阅读了 README 或者 INSTALL 文件,然后运行相关命令之后才会产生。不过,假如你确信文件存在,而你只想进入仓库,获取那个文件,然后离开的话,你就可以那样做。 + +在 GitLab 和 GitHub 中,单击文件链接,并在 Raw 模式下查看,然后使用你的 web 浏览器的保存功能,例如:在 Firefox 中,文件 > 保存页面为。在一个 GitWeb 仓库中(一些更喜欢自己托管 git 的人使用的私有 git 仓库 web 查看器),Raw 查看链接在文件列表视图中。 + +![](https://opensource.com/sites/default/files/1_webgit-file.jpg) + +### 最佳实践 +通常认为,和 Git 交互的正确方式是克隆完整的 Git 仓库。这样认为是有几个原因的。首先,可以使用 git pull 命令轻松地使克隆仓库保持更新,这样你就不必在每次文件改变时就重回 web 站点获得一份全新的拷贝。第二,你碰巧需要做些改进,只要保持仓库整洁,那么你可以非常轻松地向原来的作者提交所做的变更。 + +现在,可能是时候练习查找感兴趣的 Git 仓库,然后将它们克隆到你的硬盘中了。只要你了解使用终端的基础知识,那就不会太难做到。还不知道终端使用基础吗?那再给多我 5 分钟时间吧。 + +### 终端使用基础 +首先要知道的是,所有的文件都有一个路径。这是有道理的;如果我让你在常规的非终端环境下为我打开一个文件,你就要导航到文件在你硬盘的位置,并且直到你找到那个文件,你要浏览一大堆窗口。例如,你也许要点击你的家目录 > 图片 > InktoberSketches > monkey.kra。 + +在那样的场景下,我们可以说文件 monkeysketch.kra 的路径是:$HOME/图片/InktoberSketches/monkey.kra。 + +在终端中,除非你正在处理一些特殊的系统管理员任务,你的文件路径通常是以 $HOME 开头的(或者,如果你很懒,就使用 ~ 字符),后面紧跟着一些列的文件夹直到文件名自身。 +这就和你在 GUI 中点击各种图标直到找到相关的文件或文件夹类似。 + +如果你想把 Git 仓库克隆到你的文档目录,那么你可以打开一个终端然后运行下面的命令: + +``` +$ git clone https://gitlab.com/foo/bar.git +$HOME/文档/bar.clone +``` +一旦克隆完成,你可以打开一个文件管理器窗口,导航到你的文档文件夹,然后你就会发现 bar.clone 目录正在等待着你访问。 + +如果你想要更高级点,你或许会在以后再次访问那个仓库,可以尝试使用 git pull 命令来查看项目有没有更新: + +``` +$ cd $HOME/文档/bar.clone +$ pwd +bar.clone +$ git pull +``` + +到目前为止,你需要了解的所有终端命令就是那些了,那就出去探索吧。你实践得越多,Git 掌握得就越好(孰能生巧),那就是游戏的名称,至少它教会了你一些基础(give or take a vowel)。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/stumbling-git + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/chrisleegit) 
+校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://git-scm.com/download +[2]: https://gitlab.com/trashy/trashy +[3]: https://gitlab.com/trashy/trashy.git + From a9862bebe083e1006bbd4894f8404116a4baf120 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 25 Jul 2016 13:54:47 +0800 Subject: [PATCH 225/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160628 Python 101 An Intro to urllib.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160628 Python 101 An Intro to urllib.md b/sources/tech/20160628 Python 101 An Intro to urllib.md index 38620bf17b..3fb88971db 100644 --- a/sources/tech/20160628 Python 101 An Intro to urllib.md +++ b/sources/tech/20160628 Python 101 An Intro to urllib.md @@ -1,3 +1,5 @@ +ezio is translating + Python 101: An Intro to urllib ================================= From 67e14806563e377a942dac330be9692b9e9c7713 Mon Sep 17 00:00:00 2001 From: Locez Date: Mon, 25 Jul 2016 00:59:28 -0500 Subject: [PATCH 226/471] translating by locez --- .../20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md index 37692dfd7b..1b4c576d1b 100644 --- a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md +++ b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md @@ -1,3 +1,4 @@ +translating by Locez HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU ============================================== From ab2415b83ec6d7c8168449e8215a5aeed453d7be Mon Sep 17 00:00:00 2001 From: joVoV <704451873@qq.com> Date: Mon, 25 Jul 2016 14:15:00 +0800 Subject: [PATCH 227/471] 
=?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/talk/20160627 Linux Practicality vs Activism.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20160627 Linux Practicality vs Activism.md b/sources/talk/20160627 Linux Practicality vs Activism.md index fc4f5eff26..195a9fdf5c 100644 --- a/sources/talk/20160627 Linux Practicality vs Activism.md +++ b/sources/talk/20160627 Linux Practicality vs Activism.md @@ -1,3 +1,5 @@ +jovov 正在翻译。。。 + Linux Practicality vs Activism ================================== From 180ba8b6386a1601e05c8d2ed6cb8395689a330d Mon Sep 17 00:00:00 2001 From: Locez Date: Mon, 25 Jul 2016 15:12:08 +0800 Subject: [PATCH 228/471] translated --- ...O CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 51 +++++++++---------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md index 1b4c576d1b..b6c9ae0cf7 100644 --- a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md +++ b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md @@ -1,64 +1,63 @@ -translating by Locez -HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU +怎样在 UBUNTU 中修改默认程序 ============================================== ![](https://itsfoss.com/wp-content/uploads/2016/07/change-default-applications-ubuntu.jpg) -Brief: This beginners guide shows you how to change the default applications in Ubuntu Linux. +简介: 这个新手指南会向你展示如何在 Ubuntu Linux 中修改默认程序 -Installing [VLC media player][1] is one of the first few [things to do after installing Ubuntu 16.04][2] for me. One thing I do after installing VLC is to make it the default application so that I can open video file with VLC when I double click it. 
+对于我来说,安装 [VLC 多媒体播放器][1]是[安装完 Ubuntu 16.04 该做的事][2]中最先做的几件事之一。为了能够使我双击一个视频就用 VLC 打开,在我安装完 VLC 之后我会设置它为默认程序。 -As a beginner, you may need to know how to change any default application in Ubuntu and this is what I am going to show you in today’s tutorial. +作为一个新手,你需要知道如何在 Ubuntu 中修改任何默认程序,这也是我今天在这篇指南中所要讲的。 -### CHANGE DEFAULT APPLICATIONS IN UBUNTU +### 在 UBUNTU 中修改默认程序 +这里提及的方法适用于所有的 Ubuntu 12.04,Ubuntu 14.04 和Ubuntu 16.04。在 Ubuntu 中,这里有两种基本的方法可以修改默认程序: -The methods mentioned here are valid for all versions of Ubuntu be it Ubuntu 12.04, Ubuntu 14.04 or Ubuntu 16.04. There are basically two ways you can change the default applications in Ubuntu: +- 通过系统设置 +- 通过右键菜单 -- via system settings -- via right click menu +#### 1.通过系统设置修改 Ubuntu 的默认程序 -#### 1. CHANGE DEFAULT APPLICATIONS IN UBUNTU FROM SYSTEM SETTINGS - -Go to Unity Dash and search for System Settings: +进入 Unity 面板并且搜索系统设置(System Settings): ![](https://itsfoss.com/wp-content/uploads/2013/11/System_Settings_Ubuntu.jpeg) -In the System Settings, click on the Details option: +在系统设置(System Settings)中,选择详细选项(Details): ![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-detail-ubuntu.jpeg) -In here, from the left side pane, select Default Applications. You will see the option to change the default applications in the right side pane. + +在左边的面板中选择默认程序(Default Applications),你会发现在右边的面板中可以修改默认程序。 ![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-default-applications.jpeg) -As you can see, there are only a few kinds of default applications that can be changed here. You can change the default applications for web browser, email client, calendar app, music, videos and photo here. What about other kinds of applications? +正如看到的那样,这里只有少数几类的默认程序可以被改变。你可以在这里改变浏览器,邮箱客户端,日历,音乐,视频和相册的默认程序。那其他类型的默认程序怎么修改? -Don’t worry. To change the default applications of other kinds, we’ll use the option in the right click menu. +不要担心,为了修改其他类型的默认程序,我们会用到右键菜单。 -#### 2. 
CHANGE DEFAULT APPLICATIONS IN UBUNTU FROM RIGHT CLICK MENU +#### 2.通过右键菜单修改默认程序 -If you have ever used Windows, you might be aware of the “open with” option in the right click menu that allows changing the default applications. We have something similar in Ubuntu Linux as well. -Right click on the file that you want to open in a non-default application. Go to properties. +如果你使用过 Windows 系统,你应该看见过右键菜单的“打开方式”,可以通过这个来修改默认程序。我们在 Ubuntu 中也有相似的方法。 + +右键一个还没有设置默认打开程序的文件,选择“属性(properties)” ![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) ->Select Properties from Right Click menu +>从右键菜单中选择属性 -And in here, you can select the application that you want to use and set it as default. +在这里,你可以选择使用什么程序打开,并且设置为默认程序。 ![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) ->Making gThumb the default application for WebP images in Ubuntu +>在 Ubuntu 中设置打开 WebP 图片的默认程序为 gThumb -Easy peasy. Isn’t it? Once you do that, all the files of the same kind will be opened with your chosen default application. - -I hope you found this beginners tutorial to change default applications in Ubuntu helpful. If you have any questions or suggestions, feel free to drop a comment below. 
+小菜一碟不是么?一旦你做完这些,所有同样类型的文件都会用你选择的默认程序打开。 +我很希望这个新手指南对你在修改 Ubuntu 的默认程序时有帮助。如果你有任何的疑问或者建议,可以随时在下面评论。 -------------------------------------------------------------------------------- via: https://itsfoss.com/change-default-applications-ubuntu/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 作者:[Abhishek Prakash][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/locez) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From b0aacf4336353088db849f02b32e4edfa2562379 Mon Sep 17 00:00:00 2001 From: Locez Date: Mon, 25 Jul 2016 15:13:12 +0800 Subject: [PATCH 229/471] translated --- .../20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md index b6c9ae0cf7..2192b71600 100644 --- a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md +++ b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md @@ -57,7 +57,7 @@ via: https://itsfoss.com/change-default-applications-ubuntu/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 作者:[Abhishek Prakash][a] -译者:[译者ID](https://github.com/locez) +译者:[Locez](https://github.com/locez) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From deb9c2fd5350f642fc2e64b9f1f3bc4e12f54660 Mon Sep 17 00:00:00 2001 From: Locez Date: Mon, 25 Jul 2016 15:19:36 +0800 Subject: [PATCH 230/471] move to translated --- ...O CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) create mode 100644 translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md diff --git 
a/translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md new file mode 100644 index 0000000000..2192b71600 --- /dev/null +++ b/translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md @@ -0,0 +1,67 @@ +怎样在 UBUNTU 中修改默认程序 +============================================== + +![](https://itsfoss.com/wp-content/uploads/2016/07/change-default-applications-ubuntu.jpg) + +简介: 这个新手指南会向你展示如何在 Ubuntu Linux 中修改默认程序 + +对于我来说,安装 [VLC 多媒体播放器][1]是[安装完 Ubuntu 16.04 该做的事][2]中最先做的几件事之一。为了能够使我双击一个视频就用 VLC 打开,在我安装完 VLC 之后我会设置它为默认程序。 + +作为一个新手,你需要知道如何在 Ubuntu 中修改任何默认程序,这也是我今天在这篇指南中所要讲的。 + +### 在 UBUNTU 中修改默认程序 +这里提及的方法适用于所有的 Ubuntu 12.04,Ubuntu 14.04 和Ubuntu 16.04。在 Ubuntu 中,这里有两种基本的方法可以修改默认程序: + +- 通过系统设置 +- 通过右键菜单 + +#### 1.通过系统设置修改 Ubuntu 的默认程序 + +进入 Unity 面板并且搜索系统设置(System Settings): + +![](https://itsfoss.com/wp-content/uploads/2013/11/System_Settings_Ubuntu.jpeg) + +在系统设置(System Settings)中,选择详细选项(Details): + +![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-detail-ubuntu.jpeg) + + +在左边的面板中选择默认程序(Default Applications),你会发现在右边的面板中可以修改默认程序。 + +![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-default-applications.jpeg) + +正如看到的那样,这里只有少数几类的默认程序可以被改变。你可以在这里改变浏览器,邮箱客户端,日历,音乐,视频和相册的默认程序。那其他类型的默认程序怎么修改? 
+ +不要担心,为了修改其他类型的默认程序,我们会用到右键菜单。 + +#### 2.通过右键菜单修改默认程序 + + +如果你使用过 Windows 系统,你应该看见过右键菜单的“打开方式”,可以通过这个来修改默认程序。我们在 Ubuntu 中也有相似的方法。 + +右键一个还没有设置默认打开程序的文件,选择“属性(properties)” + +![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) +>从右键菜单中选择属性 + +在这里,你可以选择使用什么程序打开,并且设置为默认程序。 + +![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) +>在 Ubuntu 中设置打开 WebP 图片的默认程序为 gThumb + +小菜一碟不是么?一旦你做完这些,所有同样类型的文件都会用你选择的默认程序打开。 +我很希望这个新手指南对你在修改 Ubuntu 的默认程序时有帮助。如果你有任何的疑问或者建议,可以随时在下面评论。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/change-default-applications-ubuntu/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 + +作者:[Abhishek Prakash][a] +译者:[Locez](https://github.com/locez) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: http://www.videolan.org/vlc/index.html +[2]: https://itsfoss.com/things-to-do-after-installing-ubuntu-16-04/ From 28a8b5c6afe4523b8e15c0537dc0fee7974fd76a Mon Sep 17 00:00:00 2001 From: Locez Date: Mon, 25 Jul 2016 15:20:50 +0800 Subject: [PATCH 231/471] translated and delete source --- ...O CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 67 ------------------- 1 file changed, 67 deletions(-) delete mode 100644 sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md diff --git a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md deleted file mode 100644 index 2192b71600..0000000000 --- a/sources/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md +++ /dev/null @@ -1,67 +0,0 @@ -怎样在 UBUNTU 中修改默认程序 -============================================== - -![](https://itsfoss.com/wp-content/uploads/2016/07/change-default-applications-ubuntu.jpg) - -简介: 
这个新手指南会向你展示如何在 Ubuntu Linux 中修改默认程序 - -对于我来说,安装 [VLC 多媒体播放器][1]是[安装完 Ubuntu 16.04 该做的事][2]中最先做的几件事之一。为了能够使我双击一个视频就用 VLC 打开,在我安装完 VLC 之后我会设置它为默认程序。 - -作为一个新手,你需要知道如何在 Ubuntu 中修改任何默认程序,这也是我今天在这篇指南中所要讲的。 - -### 在 UBUNTU 中修改默认程序 -这里提及的方法适用于所有的 Ubuntu 12.04,Ubuntu 14.04 和Ubuntu 16.04。在 Ubuntu 中,这里有两种基本的方法可以修改默认程序: - -- 通过系统设置 -- 通过右键菜单 - -#### 1.通过系统设置修改 Ubuntu 的默认程序 - -进入 Unity 面板并且搜索系统设置(System Settings): - -![](https://itsfoss.com/wp-content/uploads/2013/11/System_Settings_Ubuntu.jpeg) - -在系统设置(System Settings)中,选择详细选项(Details): - -![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-detail-ubuntu.jpeg) - - -在左边的面板中选择默认程序(Default Applications),你会发现在右边的面板中可以修改默认程序。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-default-applications.jpeg) - -正如看到的那样,这里只有少数几类的默认程序可以被改变。你可以在这里改变浏览器,邮箱客户端,日历,音乐,视频和相册的默认程序。那其他类型的默认程序怎么修改? - -不要担心,为了修改其他类型的默认程序,我们会用到右键菜单。 - -#### 2.通过右键菜单修改默认程序 - - -如果你使用过 Windows 系统,你应该看见过右键菜单的“打开方式”,可以通过这个来修改默认程序。我们在 Ubuntu 中也有相似的方法。 - -右键一个还没有设置默认打开程序的文件,选择“属性(properties)” - -![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) ->从右键菜单中选择属性 - -在这里,你可以选择使用什么程序打开,并且设置为默认程序。 - -![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) ->在 Ubuntu 中设置打开 WebP 图片的默认程序为 gThumb - -小菜一碟不是么?一旦你做完这些,所有同样类型的文件都会用你选择的默认程序打开。 -我很希望这个新手指南对你在修改 Ubuntu 的默认程序时有帮助。如果你有任何的疑问或者建议,可以随时在下面评论。 - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/change-default-applications-ubuntu/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 - -作者:[Abhishek Prakash][a] -译者:[Locez](https://github.com/locez) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: http://www.videolan.org/vlc/index.html -[2]: 
https://itsfoss.com/things-to-do-after-installing-ubuntu-16-04/ From 9539049bf2ba6bb2f76aa6020bd954a090e82ea8 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Mon, 25 Jul 2016 16:21:18 +0800 Subject: [PATCH 232/471] Update 20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md --- ...Source Discussion Platform Discourse On Ubuntu Linux 16.04.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md index a2521673c9..738c3dbf55 100644 --- a/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md +++ b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md @@ -1,3 +1,4 @@ +[translating by kokialoves] How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04 =============================================================================== From 958f533847bfdded7e8a50d96a853cb932bb0b10 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Mon, 25 Jul 2016 18:31:51 +0800 Subject: [PATCH 233/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...60621 Container technologies in Fedora - systemd-nspawn.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md b/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md index 1ec72a3c90..6d3e08ac91 100644 --- a/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md +++ b/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md @@ -1,3 +1,5 @@ +Being translated by [ChrisLeeGit](https://github.com/chrisleegit) + Container technologies in 
Fedora: systemd-nspawn === @@ -97,7 +99,7 @@ $ restorecon -R /home/johnmh/DebianJessie/ via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/ 作者:[John M. Harris, Jr.][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c7c7ca71ad54930dfa59a2e51d250440f45a6db8 Mon Sep 17 00:00:00 2001 From: Name1e5s <836401406@qq.com> Date: Mon, 25 Jul 2016 20:34:25 +0800 Subject: [PATCH 234/471] name1e5s translating --- ...60627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md b/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md index 44c66359f9..4faee7eaef 100644 --- a/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md +++ b/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md @@ -1,3 +1,5 @@ +name1e5s translating + TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016 ===================================================== From 85f2b50b2509d313fea84508025a580aee9010e3 Mon Sep 17 00:00:00 2001 From: cinlen <237448382@qq.com> Date: Mon, 25 Jul 2016 22:33:50 +0800 Subject: [PATCH 235/471] [translating]Who needs a GUI? 
How to live in a Linux terminal (#4219) * [translated]Growing a carrer alongside Linux * [translated]Growing a career alongside Linux * [translating]Who needs a GUI - How to live in a Linux terminal --- ...160606 Who needs a GUI - How to live in a Linux terminal.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md b/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md index f9633eb966..148d758216 100644 --- a/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md +++ b/sources/talk/20160606 Who needs a GUI - How to live in a Linux terminal.md @@ -1,3 +1,4 @@ +chenxinlong translating Who needs a GUI? How to live in a Linux terminal ================================================= @@ -84,7 +85,7 @@ LibreOffice, Google Slides or, gasp, PowerPoint. I spend a lot of time in presen via: http://www.networkworld.com/article/3091139/linux/who-needs-a-gui-how-to-live-in-a-linux-terminal.html#slide1 作者:[Bryan Lunduke][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/chenxinlong) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 574ae23673ec096484b72043bbe1dda619ef2052 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Tue, 26 Jul 2016 06:38:14 +0800 Subject: [PATCH 236/471] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E5=88=9D=E8=AF=91?= =?UTF-8?q?=E6=96=87=E6=A1=A3?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- translated/tech/20160711 Getting started with Git.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/translated/tech/20160711 Getting started with Git.md b/translated/tech/20160711 Getting started with Git.md index 85c159945d..afa6b7382b 100644 --- a/translated/tech/20160711 Getting started with Git.md +++ b/translated/tech/20160711 Getting started with Git.md @@ -1,8 +1,8 
@@ -Git 入门指南 +初步了解 Git ========================= ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) ->Image by : opensource.com +> 图片来源:opensource.com 在这个系列的介绍中,我们学习到了谁应该使用 Git,以及 Git 是用来做什么的。今天,我们将学习如何克隆公共的 Git 仓库,以及如何提取出独立的文件而不用克隆整个仓库。 @@ -72,7 +72,7 @@ Checking connectivity... done. ![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) -### 挑选和选择 +### 仔细挑选 另外一种从 Git 仓库中获取文件的方法是找到你想要的文件,然后把它从仓库中拽出来。只有 web 界面才提供这种方法,本质上来说,你看到的是别人仓库的克隆;你可以把它想象成一个 HTTP 共享目录。 使用这种方法的问题是,你也许会发现某些文件并不存在于原始仓库中,因为完整形式的文件可能只有在执行 make 命令后才能构建,那只有你下载了完整的仓库,阅读了 README 或者 INSTALL 文件,然后运行相关命令之后才会产生。不过,假如你确信文件存在,而你只想进入仓库,获取那个文件,然后离开的话,你就可以那样做。 @@ -111,14 +111,14 @@ bar.clone $ git pull ``` -到目前为止,你需要了解的所有终端命令就是那些了,那就出去探索吧。你实践得越多,Git 掌握得就越好(孰能生巧),那就是游戏的名称,至少它教会了你一些基础(give or take a vowel)。 +到目前为止,你需要初步了解的所有终端命令就是那些了,那就去探索吧。你实践得越多,Git 掌握得就越好(孰能生巧),那就是游戏的名称,至少给了或取了一个元音。 -------------------------------------------------------------------------------- via: https://opensource.com/life/16/7/stumbling-git 作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/chrisleegit) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 2ffb09b907d090d7cdefadaad2b59a94ca4114be Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 26 Jul 2016 09:00:31 +0800 Subject: [PATCH 237/471] PUB:Part 1 - LXD 2.0--Introduction to LXD MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @oska874 @PurlingNayuki 我说这篇怎么没啥可校对的,原来已经校对过了。 --- .../Part 1 - LXD 2.0--Introduction to LXD.md | 69 +++++++++---------- 1 file changed, 33 insertions(+), 36 deletions(-) rename {translated/tech => published}/LXD/Part 1 - LXD 2.0--Introduction to LXD.md (61%) diff --git a/translated/tech/LXD/Part 1 - LXD 2.0--Introduction to LXD.md b/published/LXD/Part 1 - LXD 2.0--Introduction to LXD.md similarity index 61% 
rename from translated/tech/LXD/Part 1 - LXD 2.0--Introduction to LXD.md rename to published/LXD/Part 1 - LXD 2.0--Introduction to LXD.md index 011f385881..11c7220964 100644 --- a/translated/tech/LXD/Part 1 - LXD 2.0--Introduction to LXD.md +++ b/published/LXD/Part 1 - LXD 2.0--Introduction to LXD.md @@ -1,4 +1,4 @@ -Part 1 - LXD 2.0: LXD 入门 +LXD 2.0 系列(一):LXD 入门 ====================================== 这是 [LXD 2.0 系列介绍文章][1]的第一篇。 @@ -20,12 +20,11 @@ LXD 最主要的目标就是使用 Linux 容器而不是硬件虚拟化向用户 LXD 聚焦于系统容器,通常也被称为架构容器。这就是说 LXD 容器实际上如在裸机或虚拟机上运行一般运行了一个完整的 Linux 操作系统。 -这些容器一般基于一个干净的发布镜像并会长时间运行。传统的配置管理工具和部署工具可以如在虚拟机、云和物理机器上一样与 LXD 一起使用。 +这些容器一般基于一个干净的发布镜像并会长时间运行。传统的配置管理工具和部署工具可以如在虚拟机、云实例和物理机器上一样与 LXD 一起使用。 -相对的, Docker 关注于短期的、无状态的最小容器,这些容器通常并不会升级或者重新配置,而是作为一个整体被替换掉。这就使得 Docker 及类似项目更像是一种软件发布机制,而不是一个机器管理工具。 - -这两种模型并不是完全互斥的。你完全可以使用 LXD 为你的用户提供一个完整的 Linux 系统,而他们可以在 LXD 内安装 Docker 来运行他们想要的软件。 +相对的, Docker 关注于短期的、无状态的、最小化的容器,这些容器通常并不会升级或者重新配置,而是作为一个整体被替换掉。这就使得 Docker 及类似项目更像是一种软件发布机制,而不是一个机器管理工具。 +这两种模型并不是完全互斥的。你完全可以使用 LXD 为你的用户提供一个完整的 Linux 系统,然后他们可以在 LXD 内安装 Docker 来运行他们想要的软件。 #### 为什么要用 LXD? 
@@ -35,56 +34,55 @@ LXD 聚焦于系统容器,通常也被称为架构容器。这就是说 LXD 我们把 LXD 作为解决这些缺陷的一个很好的机会。作为一个长时间运行的守护进程, LXD 可以绕开 LXC 的许多限制,比如动态资源限制、无法进行容器迁移和高效的在线迁移;同时,它也为创造新的默认体验提供了机会:默认开启安全特性,对用户更加友好。 - ### LXD 的主要组件 -LXD 是由几个主要组件构成的,这些组件都是 LXD 目录结构、命令行客户端和 API 结构体里下可见的。 +LXD 是由几个主要组件构成的,这些组件都出现在 LXD 目录结构、命令行客户端和 API 结构体里。 #### 容器 LXD 中的容器包括以下及部分: -- 根文件系统 +- 根文件系统(rootfs) +- 配置选项列表,包括资源限制、环境、安全选项等等 - 设备:包括磁盘、unix 字符/块设备、网络接口 - 一组继承而来的容器配置文件 -- 属性(容器架构,暂时的或持久的,容器名) -- 运行时状态(当时为了记录检查点、恢复时用到了 CRIU时) +- 属性(容器架构、暂时的还是持久的、容器名) +- 运行时状态(当用 CRIU 来中断/恢复时) #### 快照 容器快照和容器是一回事,只不过快照是不可修改的,只能被重命名,销毁或者用来恢复系统,但是无论如何都不能被修改。 -值得注意的是,因为我们允许用户保存容器的运行时状态,这就有效的为我们提供了“有状态”的快照的功能。这就是说我们可以使用快照回滚容器的 CPU 和内存。 +值得注意的是,因为我们允许用户保存容器的运行时状态,这就有效的为我们提供了“有状态”的快照的功能。这就是说我们可以使用快照回滚容器的状态,包括快照当时的 CPU 和内存状态。 #### 镜像 LXD 是基于镜像实现的,所有的 LXD 容器都是来自于镜像。容器镜像通常是一些纯净的 Linux 发行版的镜像,类似于你们在虚拟机和云实例上使用的镜像。 -所以就可以「发布」容器:使用容器制作一个镜像并在本地或者远程 LXD 主机上使用。 +所以可以「发布」一个容器:使用容器制作一个镜像并在本地或者远程 LXD 主机上使用。 -镜像通常使用全部或部分 sha256 哈希码来区分。因为输入长长的哈希码对用户来说不好,所以镜像可以使用几个自身的属性来区分,这就使得用户在镜像商店里方便搜索镜像。别名也可以用来 1 对 1 地把对用户友好的名字映射到某个镜像的哈希码。 +镜像通常使用全部或部分 sha256 哈希码来区分。因为输入长长的哈希码对用户来说不方便,所以镜像可以使用几个自身的属性来区分,这就使得用户在镜像商店里方便搜索镜像。也可以使用别名来一对一地将一个用户好记的名字映射到某个镜像的哈希码上。 LXD 安装时已经配置好了三个远程镜像服务器(参见下面的远程一节): -- “ubuntu:” 提供稳定版的 Ubuntu 镜像 -- “ubuntu-daily:” 提供每天构建出来的 Ubuntu -- “images:” 社区维护的镜像服务器,提供一系列的 Linux 发布版,使用的是上游 LXC 的模板 +- “ubuntu”:提供稳定版的 Ubuntu 镜像 +- “ubuntu-daily”:提供 Ubuntu 的每日构建镜像 +- “images”: 社区维护的镜像服务器,提供一系列的其它 Linux 发布版,使用的是上游 LXC 的模板 LXD 守护进程会从镜像上次被使用开始自动缓存远程镜像一段时间(默认是 10 天),超过时限后这些镜像才会失效。 此外, LXD 还会自动更新远程镜像(除非指明不更新),所以本地的镜像会一直是最新版的。 - #### 配置 -配置文件是一种在一处定义容器配置和容器设备,然后应用到一系列容器的方法。 +配置文件是一种在一个地方定义容器配置和容器设备,然后将其应用到一系列容器的方法。 -一个容器可以被应用多个配置文件。当构建最终容器配置时(即通常的扩展配置),这些配置文件都会按照他们定义顺序被应用到容器上,当有重名的配置时,新的会覆盖掉旧的。然后本地容器设置会在这些基础上应用,覆盖所有来自配置文件的选项。 +一个容器可以被应用多个配置文件。当构建最终容器配置时(即通常的扩展配置),这些配置文件都会按照他们定义顺序被应用到容器上,当有重名的配置键或设备时,新的会覆盖掉旧的。然后本地容器设置会在这些基础上应用,覆盖所有来自配置文件的选项。 LXD 自带两种预配置的配置文件: -- 「 default 」配置是自动应用在所有容器之上,除非用户提供了一系列替代的配置文件。目前这个配置文件只做一件事,为容器定义 eth0 网络设备。 -- 「 docker” 」配置是一个允许你在容器里运行 
Docker 容器的配置文件。它会要求 LXD 加载一些需要的内核模块以支持容器嵌套并创建一些设备入口。 +- “default”配置是自动应用在所有容器之上,除非用户提供了一系列替代的配置文件。目前这个配置文件只做一件事,为容器定义 eth0 网络设备。 +- “docker”配置是一个允许你在容器里运行 Docker 容器的配置文件。它会要求 LXD 加载一些需要的内核模块以支持容器嵌套并创建一些设备。 #### 远程 @@ -92,14 +90,14 @@ LXD 自带两种预配置的配置文件: 默认情况下,我们的命令行客户端会与下面几个预定义的远程服务器通信: -- local:(默认的远程服务器,使用 UNIX socket 和本地的 LXD 守护进程通信) -- ubuntu:( Ubuntu 镜像服务器,提供稳定版的 Ubuntu 镜像) -- ubuntu-daily:( Ubuntu 镜像服务器,提供每天构建出来的 Ubuntu ) -- images:( images.linuxcontainers.org 镜像服务器) +- local:默认的远程服务器,使用 UNIX socket 和本地的 LXD 守护进程通信 +- ubuntu:Ubuntu 镜像服务器,提供稳定版的 Ubuntu 镜像 +- ubuntu-daily:Ubuntu 镜像服务器,提供 Ubuntu 的每日构建版 +- images:images.linuxcontainers.org 的镜像服务器 所有这些远程服务器的组合都可以在命令行客户端里使用。 -你也可以添加任意数量的远程 LXD 主机来监听网络。匿名的开放镜像服务器,或者通过认证可以管理远程容器的镜像服务器,都可以添加进来。 +你也可以添加任意数量的远程 LXD 主机,并配置它们监听网络。匿名的开放镜像服务器,或者通过认证可以管理远程容器的镜像服务器,都可以添加进来。 正是这种远程机制使得与远程镜像服务器交互及在主机间复制、移动容器成为可能。 @@ -107,30 +105,29 @@ LXD 自带两种预配置的配置文件: 我们设计 LXD 时的一个核心要求,就是在不修改现代 Linux 发行版的前提下,使容器尽可能的安全。 -LXD 使用的、通过使用 LXC 库实现的主要安全特性有: +LXD 通过使用 LXC 库实现的主要安全特性有: -- 内核名字空间。尤其是用户名字空间,它让容器和系统剩余部分完全分离。LXD 默认使用用户名字空间(和 LXC 相反),并允许用户在需要的时候以容器为单位打开或关闭。 +- 内核名字空间。尤其是用户名字空间,它让容器和系统剩余部分完全分离。LXD 默认使用用户名字空间(和 LXC 相反),并允许用户在需要的时候以容器为单位关闭(将容器标为“特权的”)。 - Seccomp 系统调用。用来隔离潜在危险的系统调用。 -- AppArmor:对 mount、socket、ptrace 和文件访问提供额外的限制。特别是限制跨容器通信。 +- AppArmor。对 mount、socket、ptrace 和文件访问提供额外的限制。特别是限制跨容器通信。 - Capabilities。阻止容器加载内核模块,修改主机系统时间,等等。 -- CGroups。限制资源使用,防止对主机的 DoS 攻击。 +- CGroups。限制资源使用,防止针对主机的 DoS 攻击。 +为了对用户友好,LXD 构建了一个新的配置语言把大部分的这些特性都抽象封装起来,而不是如 LXC 一般直接将这些特性暴露出来。举了例子,一个用户可以告诉 LXD 把主机设备放进容器而不需要手动检查他们的主/次设备号来手动更新 CGroup 策略。 -为了对用户友好 , LXD 构建了一个新的配置语言把大部分的这些特性都抽象封装起来,而不是如 LXC 一般直接将这些特性暴露出来。举了例子,一个用户可以告诉 LXD 把主机设备放进容器而不需要手动检查他们的主/次设备号来更新 CGroup 策略。 - -和 LXD 本身通信是基于使用 TLS 1.2 保护的链路,这些链路只允许使用有限的几个被允许的密钥。当和那些经过系统证书认证之外的主机通信时, LXD 会提示用户验证主机的远程足迹(SSH 方式),然后把足迹缓存起来以供以后使用。 +和 LXD 本身通信是基于使用 TLS 1.2 保护的链路,只允许使用有限的几个被允许的密钥算法。当和那些经过系统证书认证之外的主机通信时, LXD 会提示用户验证主机的远程指纹(SSH 方式),然后把指纹缓存起来以供以后使用。 ### REST 接口 -LXD 的工作都是通过 REST 接口实现的。在客户端和守护进程之间并没有其他的通讯手段。 +LXD 
的工作都是通过 REST 接口实现的。在客户端和守护进程之间并没有其他的通讯渠道。 -REST 接口可以通过本地的 unix socket 访问,这只需要经过组认证,或者经过 HTTP 套接字使用客户端认证进行通信。 +REST 接口可以通过本地的 unix socket 访问,这只需要经过用户组认证,或者经过 HTTP 套接字使用客户端认证进行通信。 REST 接口的结构能够和上文所说的不同的组件匹配,是一种简单、直观的使用方法。 当需要一种复杂的通信机制时, LXD 将会进行 websocket 协商完成剩余的通信工作。这主要用于交互式终端会话、容器迁移和事件通知。 -LXD 2.0 附带了 1.0 版的稳定 API。虽然我们在 1.0 版 API 添加了额外的特性,但是这不会在 1.0 版 API 的端点里破坏向后兼容性,因为我们会声明额外的 API 扩展使得客户端可以找到新的接口。 +LXD 2.0 附带了 1.0 版的稳定 API。虽然我们在 1.0 版 API 添加了额外的特性,但是这不会在 1.0 版 API 端点里破坏向后兼容性,因为我们会声明额外的 API 扩展使得客户端可以找到新的接口。 ### 容器规模化 From 170200ad4e794553e1410982afc6d2b213e547d3 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 26 Jul 2016 09:03:56 +0800 Subject: [PATCH 238/471] =?UTF-8?q?=E8=BF=99=E7=AF=87=E5=B7=B2=E7=BB=8F?= =?UTF-8?q?=E5=8F=91=E5=B8=83=E8=BF=87=EF=BC=8C=E5=B9=B6=E7=A7=BB=E5=8A=A8?= =?UTF-8?q?=E5=88=B0=20PUB=20=E4=B8=8B=E4=BA=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @kokialoves @locez --- ...Installed Help Documentations and Tools.md | 178 ------------------ 1 file changed, 178 deletions(-) delete mode 100644 translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md diff --git a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md deleted file mode 100644 index 2d0238becf..0000000000 --- a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md +++ /dev/null @@ -1,178 +0,0 @@ -LFCS第十二讲: 如何使用Linux的帮助文档和工具 -================================================================================== - -由于 2016 年 2 月 2 号开始启用了新的 LFCS 考试要求, 我们在[LFCS series][1]系列添加了一些必要的内容 . 为了考试的需要, 我们强烈建议你看一下[LFCE series][2] . - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png) ->LFCS: 了解Linux的帮助文档和工具 - -当你习惯了在命令行下进行工作, 你会发现Linux有许多文档需要你去使用和配置Linux系统. 
- -另一个你必须熟悉命令行帮助工具的理由是,在[LFCS][3] 和 [LFCE][4] 考试中, 你只能靠你自己和命令行工具,没有互联网也没有百度。 - -基于上面的理由, 在这一章里我们将给你一些建议来帮助你通过**Linux Foundation Certification** 考试. - -### Linux 帮助手册 - -man命令, 大体上来说就是一个工具手册. 它包含选项列表(和解释) , 甚至还提供一些例子. - -我们用**man command** 加工具名称来打开一个帮助手册以便获取更多内容. 例如: - -``` -# man diff -``` - -我们将打开`diff`的手册页, 这个工具将一行一行的对比文本文档 (如你想退出只需要轻轻的点一下Q键). - -下面我来比较两个文本文件 `file1` 和 `file2` . 这两个文本文件包含着相同版本Linux的安装包信息. - -输入`diff` 命令它将告诉我们 `file1` 和`file2` 有什么不同: - -``` -# diff file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png) ->在Linux中比较两个文本文件 - -`<` 这个符号是说`file2`少一行. 如果是 `file1`少一行, 我们将用 `>` 符号来替代. - -接下来说, **7d6** 意思是说 文件1的**#7**行在 `file2`中被删除了 ( **24d22** 和**41d38**是同样的意思), 65,67d61告诉我们移动 **65** 到 **67** . 我们把以上步骤都做了两个文件将完全匹配. - -你还可以通过 `-y` 选项来对比两个文件: - -``` -# diff -y file1 file2 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png) ->通过列表来列出两个文件的不同 - - 当然你也可以用`diff`来比较两个二进制文件 . 如果它们完全一样, `diff` 将什么也不会输出. 否则, 他将会返回如下信息: “**Binary files X and Y differ**”. - -### –help 选项 - -`--help`选项 , 大多数命令都可以用它(并不是所有) , 他可以理解为一个命令的简单介绍. 尽管它不提供工具的详细介绍, 但是确实是一个能够快速列出程序使用信息的不错的方法. - -例如, - -``` -# sed --help -``` - -显示 sed 的每个选项的用法(sed文本流编辑器). - -一个经典的`sed`例子,替换文件字符. 用 `-i` 选项 (描述为 “**编辑文件在指定位置**”), 你可以编辑一个文件而且并不需要打开他. 如果你想要备份一个原始文件, 用 `-i` 选项 加后缀来创建一个原始文件的副本. - -例如, 替换 `lorem.txt`中的`Lorem` 为 `Tecmint` (忽略大小写) 并且创建一个新的原始文件副本, 命令如下: - -``` -# less lorem.txt | grep -i lorem -# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt -# less lorem.txt | grep -i lorem -# less lorem.txt.orig | grep -i lorem -``` - - 请注意`lorem.txt`文件中`Lorem` 都已经替换为 `Tecmint` , 并且原始的 `lorem.txt` 保存为`lorem.txt.orig`. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png) ->替换文件文本 - -### /usr/share/doc内的文档 -这可能是我最喜欢的方法. 如果你进入 `/usr/share/doc` 目录, 你可以看到好多Linux已经安装的工具的名称的文件夹. - -根据[Filesystem Hierarchy Standard][5](文件目录标准),这些文件夹包含了许多帮助手册没有的信息, 还有一些模板和配置文件. 
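回顾一下前面 sed 的例子: `-i` 加后缀做原地替换并保留原始副本的效果, 可以用下面这个小脚本自己动手验证一下(这里假设使用的是 GNU sed, 文件路径和内容只是示例):

```shell
# 先造一个演示文件
printf 'Lorem ipsum\nlorem ipsum\n' > /tmp/lorem-demo.txt

# gI 表示全局替换并忽略大小写; -i.orig 会把原始文件备份为 .orig 副本
sed -i.orig 's/Lorem/Tecmint/gI' /tmp/lorem-demo.txt

echo '替换后的文件:'
cat /tmp/lorem-demo.txt
echo '保留的原始副本:'
cat /tmp/lorem-demo.txt.orig
```

运行后可以看到, /tmp/lorem-demo.txt 里两处 Lorem(不分大小写)都换成了 Tecmint, 而 /tmp/lorem-demo.txt.orig 保持原样.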
- -例如, 让我们来看一下 `squid-3.3.8` (版本可能会不同) 一个非常受欢迎的HTTP代理[squid cache server][6]. - -让我们用`cd`命令进入目录 : - -``` -# cd /usr/share/doc/squid-3.3.8 -``` - -列出当前文件夹列表: - -``` -# ls -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png) ->ls linux列表命令 - -你应该特别注意 `QUICKSTART` 和 `squid.conf.documented`. 这两个文件包含了Squid许多信息, . 对于别的安装包来说, 他们的名字可能不同 (有可能是 **QuickRef** 或者**00QUICKSTART**), 但原理是一样的. - -对于另外一些安装包, 比如 the Apache web server, 在`/usr/share/doc`目录提供了配置模板, 当你配置独立服务器或者虚拟主机的时候会非常有用. - -### GNU 信息文档 - -你可以把它想象为帮助手册的超级链接形式. 正如上面说的, 他不仅仅提供工具的帮助信息, 而且还是超级链接的形式(是的!在命令行中的超级链接) 你可以通过箭头按钮和回车按钮来浏览你需要的内容. - -一个典型的例子是: - -``` -# info coreutils -``` - -通过coreutils 列出当前系统的 基本文件,shell脚本和文本处理工具[basic file, shell and text manipulation utilities][7] , 你可以得到他们的详细介绍. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png) ->Info Coreutils - -和帮助手册一样你可以按Q键退出. - -此外, GNU info 还可以像帮助手册一样使用. 例如: - -``` -# info tune2fs -``` - -它将显示 **tune2fs**的帮助手册, ext2/3/4 文件系统管理工具. - -让我们来看看怎么用**tune2fs**: - -显示 **/dev/mapper/vg00-vol_backups**文件系统信息: - -``` -# tune2fs -l /dev/mapper/vg00-vol_backups -``` - -修改文件系统标签 (修改为Backups): - -``` -# tune2fs -L Backups /dev/mapper/vg00-vol_backups -``` - -设置 `/` 自检的挂载次数 (用`-c` 选项设置 `/`的自检的挂载次数 或者用 `-i` 选项设置 自检时间 **d=days, w=weeks, and m=months**). - -``` -# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts -# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks -``` - -以上这些内容也可以通过 `--help` 选项找到, 或者查看帮助手册. - -### 摘要 - -不管你选择哪种方法,知道并且会使用它们在考试中对你是非常有用的. 你知道其它的一些方法吗? 欢迎给我们留言. 
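文中提到的这几条获取帮助的途径也可以串起来用. 下面以 ls 为例(假设是带有 GNU coreutils 的常见 Linux 环境, 输出内容因发行版而异):

```shell
# 1. 命令自带的简要帮助: 只看第一行的用法说明
ls --help | head -n 1

# 2. man 手册页所在的路径(未安装 man 时给出提示)
man -w ls 2>/dev/null || echo '本机没有安装 man 手册'

# 3. /usr/share/doc 下的附带文档目录(没有时给出提示)
ls -d /usr/share/doc/coreutils* 2>/dev/null || echo '没有找到附带文档目录'
```

这样在考试或者没有网络的环境里, 也能很快把一个命令的几种文档来源都过一遍.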
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[kokialoves](https://github.com/kokialoves) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/ -[6]: http://www.tecmint.com/configure-squid-server-in-linux/ -[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[8]: From 4702262225de41e4eb8d7bbabce1711f968f976f Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Tue, 26 Jul 2016 09:47:57 +0800 Subject: [PATCH 239/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?(#4222)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Delete 20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md * Create 20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md --- ...latform Discourse On Ubuntu Linux 16.04.md | 102 ------------------ ...latform Discourse On Ubuntu Linux 16.04.md | 101 +++++++++++++++++ 2 files changed, 101 insertions(+), 102 deletions(-) delete mode 100644 sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md create mode 100644 translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse 
On Ubuntu Linux 16.04.md diff --git a/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md deleted file mode 100644 index 738c3dbf55..0000000000 --- a/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md +++ /dev/null @@ -1,102 +0,0 @@ -[translating by kokialoves] -How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04 -=============================================================================== - -Discourse is an open source discussion platform, that can work as a mailing list, a chat room and a forum as well. It is a popular tool and modern day implementation of a successful discussion platform. On server side, it is built using Ruby on Rails and uses Postgres on the backend, it also makes use of Redis caching to reduce the loading times, while on client’s side end, it runs in browser using Java Script. It is a pretty well optimized and well structured tool. It also offers converter plugins to migrate your existing discussion boards / forums like vBulletin, phpBB, Drupal, SMF etc to Discourse. In this article, we will be learning how to install Discourse on Ubuntu operating system. - -It is developed by keeping security in mind, so spammers and hackers might not be lucky with this application. It works well with all modern devices, and adjusts its display setting accordingly for mobile devices and tablets. - -### Installing Discourse on Ubuntu 16.04 - -Let’s get started ! the minimum system RAM to run Discourse is 1 GB and the officially supported installation process for Discourse requires dockers to be installed on our Linux system. Besides dockers, it also requires Git. We can fulfill these two requirements by simply running the following command on our system’s terminal. 
- -``` -wget -qO- https://get.docker.com/ | sh -``` - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/124.png) - -It shouldn’t take longer to complete the installation for Docker and Git, as soon its installation process is complete, create a directory for Discourse inside /var partition of your system (You can choose any other partition here too). - -``` -mkdir /var/discourse -``` - -Now clone the Discourse’s Github repository to this newly created directory. - -``` -git clone https://github.com/discourse/discourse_docker.git /var/discourse -``` - -Go into the cloned directory. - -``` -cd /var/discourse -``` - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/314.png) - -You should be able to locate “discourse-setup” script file here, simply run this script to initiate the installation wizard for Discourse. - -``` -./discourse-setup -``` - -**Side note: Please make sure you have a ready email server setup before attempting install for discourse.** - -Installation wizard will ask you following six questions. - -``` -Hostname for your Discourse? -Email address for admin account? -SMTP server address? -SMTP user name? -SMTP port [587]: -SMTP password? []: -``` - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/411.png) - -Once you supply these information, it will ask for the confirmation, if everything is fine, hit “Enter” and installation process will take off. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/511.png) - -Sit back and relax! it will take sweet amount of time to complete the installation, grab a cup of coffee, and keep an eye for any error messages. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/610.png) - -Here is how the successful completion of the installation process should look alike. 
- -![](http://linuxpitstop.com/wp-content/uploads/2016/06/710.png) - -Now launch your web browser, if the hostname for discourse installation resolves properly to IP, then you can use your hostname in browser , otherwise use your IP address to launch the Discourse page. Here is what you should see: - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/85.png) - -That’s it, create new account by using “Sign Up” option and you should be good to go with your Discourse setup. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/106.png) - -### Conclusion - -It is an easy to setup application and works flawlessly. It is equipped with all required features of modern day discussion board. It is available under General Public License and is 100% open source product. The simplicity, easy of use, powerful and long feature list are the most important feathers of this tool. Hope you enjoyed this article, Question? do let us know in comments please. - --------------------------------------------------------------------------------- - -via: http://linuxpitstop.com/install-discourse-on-ubuntu-linux-16-04/ - -作者:[Aun][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://linuxpitstop.com/author/aun/ - - - - - - - - diff --git a/translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md new file mode 100644 index 0000000000..742fd381dd --- /dev/null +++ b/translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md @@ -0,0 +1,101 @@ +如何在 Ubuntu Linux 16.04上安装开源的Discourse论坛 +=============================================================================== + +Discourse 是一个开源的论坛, 它可以以邮件列表, 聊天室或者论坛等多种形式工作. 它是一个广受欢迎的现代的论坛工具. 
在服务端, 它使用 Ruby on Rails 和 Postgres 搭建, 并且使用 Redis 缓存来减少加载时间, 在客户端, 它以 JavaScript 在浏览器中运行. 它是一个优化良好、结构清晰的工具. 它还提供了转换插件, 可以把你现有的 vBulletin, phpBB, Drupal, SMF 等论坛迁移到 Discourse. 在这篇文章中, 我们将学习如何在 Ubuntu 操作系统下安装 Discourse.
+
+它在开发时就充分考虑了安全性, 所以垃圾信息发送者和黑客们不能轻易得手. 它能很好地支持各种现代设备, 并会相应地调整手机和平板上的显示设置.
+
+### 在 Ubuntu 16.04 上安装 Discourse
+
+让我们开始吧! 运行 Discourse 最少需要 1GB 内存, 并且官方支持的安装过程需要系统中已经装好 Docker. 除了 Docker 之外, 它还需要安装 Git. 这两个要求只需要在终端里运行下面的命令就可以满足.
+
+```
+wget -qO- https://get.docker.com/ | sh
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/124.png)
+
+用不了多久就能安装好 Docker 和 Git. 安装结束以后, 在系统的 /var 分区创建一个 Discourse 文件夹 (当然你也可以选择其他的分区).
+
+```
+mkdir /var/discourse
+```
+
+现在我们来把 Discourse 的 Github 仓库克隆到这个新建的文件夹.
+
+```
+git clone https://github.com/discourse/discourse_docker.git /var/discourse
+```
+
+进入克隆下来的文件夹.
+
+```
+cd /var/discourse
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/314.png)
+
+你可以在这里找到 “discourse-setup” 脚本文件, 运行这个脚本进行 Discourse 的初始化.
+
+```
+./discourse-setup
+```
+
+**注意: 在安装 Discourse 之前, 请确保你已经准备好了一个可用的邮件服务器.**
+
+安装向导将会问你以下六个问题.
+
+```
+Hostname for your Discourse?
+Email address for admin account?
+SMTP server address?
+SMTP user name?
+SMTP port [587]:
+SMTP password? []:
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/411.png)
+
+当你提交了以上信息以后, 它会让你确认, 如果一切正常, 按下回车, 安装就开始了.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/511.png)
+
+现在坐下来放松一下, 安装会花上一段时间, 倒杯咖啡, 同时留意屏幕上有没有错误信息.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/610.png)
+
+安装成功以后看起来应该像这样.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/710.png)
+
+现在打开浏览器, 如果已经为 Discourse 做好了域名解析, 你可以使用你的域名来访问 Discourse 页面, 否则你只能使用 IP 地址了. 你将看到如下信息:
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/85.png)
+
+到这里就完成了, 用 “Sign Up” 选项创建一个新的管理员账户吧.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/106.png)
+
+### 结论
+
+它安装简便, 运行起来也毫无问题. 它拥有现代论坛所需的全部功能, 以通用公共许可证 (GPL) 发布, 是 100% 的开源产品. 简单、易用、强大而丰富的功能是这个工具最大的亮点. 希望你喜欢这篇文章, 如果有问题, 可以给我们留言.
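顺带一提, 安装向导收集的这几项信息最终会写进 /var/discourse/containers/app.yml. 其中的环境变量部分大致如下(以下只是示意, 域名和邮箱都是占位值, 具体键名请以你本机生成的文件为准):

```yaml
env:
  DISCOURSE_HOSTNAME: 'discourse.example.com'
  DISCOURSE_DEVELOPER_EMAILS: 'admin@example.com'
  DISCOURSE_SMTP_ADDRESS: 'smtp.example.com'
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: 'user@example.com'
  DISCOURSE_SMTP_PASSWORD: 'changeme'
```

以后想修改这些设置时, 编辑该文件, 然后在 /var/discourse 目录下重新构建一次容器(`./launcher rebuild app`)即可让修改生效.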
+ +-------------------------------------------------------------------------------- + +via: http://linuxpitstop.com/install-discourse-on-ubuntu-linux-16-04/ + +作者:[Aun][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://linuxpitstop.com/author/aun/ + + + + + + + + From 57c3eb5ad1532b0e8ff0ade934aaf318536e3cb2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 26 Jul 2016 09:51:57 +0800 Subject: [PATCH 240/471] =?UTF-8?q?20160726-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...60722 7 Best Markdown Editors for Linux.md | 155 ++++++++++++++++++ 1 file changed, 155 insertions(+) create mode 100644 sources/tech/20160722 7 Best Markdown Editors for Linux.md diff --git a/sources/tech/20160722 7 Best Markdown Editors for Linux.md b/sources/tech/20160722 7 Best Markdown Editors for Linux.md new file mode 100644 index 0000000000..afe303021d --- /dev/null +++ b/sources/tech/20160722 7 Best Markdown Editors for Linux.md @@ -0,0 +1,155 @@ +7 Best Markdown Editors for Linux +====================================== + +In this article, we shall review some of the best Markdown editors you can install and use on your Linux desktop. There are numerous Markdown editors you can find for Linux but here, we want to unveil possibly the best you may choose to work with. + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Best-Linux-Markdown-Editors.png) +>Best Linux Markdown Editors + +For starters, Markdown is a simple and lightweight tool written in Perl, that enables users to write plain text format and covert it to valid HTML (or XHTML). It is literally an easy-to-read, easy-to-write plain text language and a software tool for text-to-HTML conversion. + +Hoping that you have a slight understanding of what Markdown is, let us proceed to list the editors. + +### 1. 
Atom
+
+Atom is a modern, cross-platform, open-source and very powerful text editor that can work on Linux, Windows and Mac OS X operating systems. Users can customize it down to its base, without altering any configuration files.
+
+It is designed with some illustrious features and these include:
+
+- Comes with a built-in package manager
+- Smart auto-completion functionality
+- Offers multiple panes
+- Supports find and replace functionality
+- Includes a file system browser
+- Easily customizable themes
+- Highly extensible using open-source packages and many more
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Atom-Markdown-Editor-for-Linux.png)
+
+Visit Homepage: 
+
+### 2. GNU Emacs
+
+Emacs is one of the popular open-source text editors you can find on the Linux platform today. It is a great editor for the Markdown language, and it is highly extensible and customizable.
+
+It's comprehensively developed with the following amazing features:
+
+- Comes with extensive built-in documentation, including tutorials for beginners
+- Full Unicode support for probably all human scripts
+- Supports content-aware text-editing modes
+- Includes syntax coloring for multiple file types
+- It's highly customizable using Emacs Lisp code or a GUI
+- Offers a packaging system for downloading and installing various extensions plus so much more
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Emacs-Markdown-Editor-for-Linux.png)
+>Emacs Markdown Editor for Linux
+
+Visit Homepage: 
+
+### 3. Remarkable
+
+Remarkable is possibly the best Markdown editor you can find on Linux, and it also works on the Windows operating system. It is indeed a remarkable and fully featured Markdown editor that offers users some exciting features. 
+
+Some of its remarkable features include:
+
+- Supports live preview
+- Supports exporting to PDF and HTML
+- Also offers Github Markdown
+- Supports custom CSS
+- It also supports syntax highlighting
+- Offers keyboard shortcuts
+- Highly customizable and many more
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Remarkable-Markdown-Editor-for-Linux.png)
+>Remarkable Markdown Editor for Linux
+
+Visit Homepage: 
+
+### 4. Haroopad
+
+Haroopad is an extensively built, cross-platform Markdown document processor for Linux, Windows and Mac OS X. It enables users to write expert-level documents in numerous formats, including email, reports, blog posts, presentations and many more.
+
+It is fully featured with the following notable features:
+
+- Easily imports content
+- Also exports to numerous formats
+- Broadly supports blogging and mailing
+- Supports several mathematical expressions
+- Supports Github flavored Markdown and extensions
+- Offers users some exciting themes, skins and UI components plus so much more
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Haroopad-Markdown-Editor-for-Linux.png)
+>Haroopad Markdown Editor for Linux
+
+Visit Homepage: 
+
+### 5. ReText
+
+ReText is a simple, lightweight and powerful Markdown editor for Linux and several other POSIX-compatible operating systems. It also doubles as a reStructuredText editor, and has the following attributes:
+
+- Simple and intuitive GUI
+- It is highly customizable; users can customize file syntax and configuration options
+- Also supports several color schemes
+- Supports use of multiple mathematical formulas
+- Enables export extensions and many more
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/ReText-Markdown-Editor-for-Linux.png)
+>ReText Markdown Editor for Linux
+
+Visit Homepage: 
+
+### 6. UberWriter
+
+UberWriter is a simple and easy-to-use Markdown editor for Linux; its development was highly influenced by iA Writer for Mac OS X. 
It is also feature rich, with these remarkable features:
+
+- Uses pandoc to perform all text-to-HTML conversions
+- Offers a clean UI
+- Offers a distraction-free mode, highlighting a user's last sentence
+- Supports spellcheck
+- Also supports full screen mode
+- Supports exporting to PDF, HTML and RTF using pandoc
+- Enables syntax highlighting and mathematical functions plus many more
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/UberWriter-Markdown-Editor-for-Linux.png)
+>UberWriter Markdown Editor for Linux
+
+Visit Homepage: 
+
+### 7. Mark My Words
+
+Mark My Words is also a lightweight yet powerful Markdown editor. It's a relatively new editor, therefore it offers a handful of features, including syntax highlighting and a simple and intuitive GUI.
+
+The following are some of the awesome features yet to be bundled into the application:
+
+- Live preview support
+- Markdown parsing and file IO
+- State management
+- Support for exporting to PDF and HTML
+- Monitoring files for changes
+- Support for preferences
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/MarkMyWords-Markdown-Editor-for-Linux.png)
+>MarkMyWords Markdown Editor for Linux
+
+Visit Homepage: 
+
+### Conclusion
+
+Having walked through the list above, you probably know which Markdown editors and document processors to download and install on your Linux desktop for now.
+
+Note that what we consider to be the best here may reasonably not be the best for you. If you know an exciting Markdown editor that is missing from the list and deserves to be mentioned here, share your thoughts via the feedback section below. 
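As a side note, to make the text-to-HTML idea mentioned at the beginning of this article concrete, here is a toy sketch in Python. It handles only headings and bold text and is purely illustrative; real Markdown converters do far more:

```python
import re

def md_to_html(line):
    """Convert one line of a tiny Markdown subset (headings, bold) to HTML."""
    heading = re.match(r"(#{1,6})\s+(.*)", line)
    if heading:
        level = len(heading.group(1))
        return "<h{0}>{1}</h{0}>".format(level, heading.group(2))
    # Replace **bold** spans with <strong> tags
    return re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)

print(md_to_html("# 7 Best Markdown Editors"))  # <h1>7 Best Markdown Editors</h1>
print(md_to_html("Atom is **hackable**"))       # Atom is <strong>hackable</strong>
```

Any of the editors above will render this kind of input as formatted HTML in their preview panes.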
+ + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/best-markdown-editors-for-linux/ + +作者:[Aaron Kili |][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ + + From 1d3df1c128a5b8ee6957c16a32f7f1a34ab4739f Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 26 Jul 2016 10:26:40 +0800 Subject: [PATCH 241/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20160628 Python 101 An Intro to urllib.md | 200 ------------------ .../20160628 Python 101 An Intro to urllib.md | 193 +++++++++++++++++ 2 files changed, 193 insertions(+), 200 deletions(-) delete mode 100644 sources/tech/20160628 Python 101 An Intro to urllib.md create mode 100644 translated/tech/20160628 Python 101 An Intro to urllib.md diff --git a/sources/tech/20160628 Python 101 An Intro to urllib.md b/sources/tech/20160628 Python 101 An Intro to urllib.md deleted file mode 100644 index 3fb88971db..0000000000 --- a/sources/tech/20160628 Python 101 An Intro to urllib.md +++ /dev/null @@ -1,200 +0,0 @@ -ezio is translating - -Python 101: An Intro to urllib -================================= - -The urllib module in Python 3 is a collection of modules that you can use for working with URLs. If you are coming from a Python 2 background you will note that in Python 2 you had urllib and urllib2. These are now a part of the urllib package in Python 3. The current version of urllib is made up of the following modules: - -- urllib.request -- urllib.error -- urllib.parse -- urllib.rebotparser - -We will be covering each part individually except for urllib.error. 
The official documentation actually recommends that you might want to check out the 3rd party library, requests, for a higher-level HTTP client interface. However, I believe that it can be useful to know how to open URLs and interact with them without using a 3rd party and it may also help you appreciate why the requests package is so popular. - ---- - -### urllib.request - -The urllib.request module is primarily used for opening and fetching URLs. Let’s take a look at some of the things you can do with the urlopen function: - -``` ->>> import urllib.request ->>> url = urllib.request.urlopen('https://www.google.com/') ->>> url.geturl() -'https://www.google.com/' ->>> url.info() - ->>> header = url.info() ->>> header.as_string() -('Date: Fri, 24 Jun 2016 18:21:19 GMT\n' - 'Expires: -1\n' - 'Cache-Control: private, max-age=0\n' - 'Content-Type: text/html; charset=ISO-8859-1\n' - 'P3P: CP="This is not a P3P policy! See ' - 'https://www.google.com/support/accounts/answer/151657?hl=en for more info."\n' - 'Server: gws\n' - 'X-XSS-Protection: 1; mode=block\n' - 'X-Frame-Options: SAMEORIGIN\n' - 'Set-Cookie: ' - 'NID=80=tYjmy0JY6flsSVj7DPSSZNOuqdvqKfKHDcHsPIGu3xFv41LvH_Jg6LrUsDgkPrtM2hmZ3j9V76pS4K_cBg7pdwueMQfr0DFzw33SwpGex5qzLkXUvUVPfe9g699Qz4cx9ipcbU3HKwrRYA; ' - 'expires=Sat, 24-Dec-2016 18:21:19 GMT; path=/; domain=.google.com; HttpOnly\n' - 'Alternate-Protocol: 443:quic\n' - 'Alt-Svc: quic=":443"; ma=2592000; v="34,33,32,31,30,29,28,27,26,25"\n' - 'Accept-Ranges: none\n' - 'Vary: Accept-Encoding\n' - 'Connection: close\n' - '\n') ->>> url.getcode() -200 -``` - -Here we import our module and ask it to open Google’s URL. Now we have an HTTPResponse object that we can interact with. The first thing we do is call the geturl method which will return the URL of the resource that was retrieved. This is useful for finding out if we followed a redirect. - -Next we call info, which will return meta-data about the page, such as headers. 
Because of this, we assign that result to our headers variable and then call its as_string method. This prints out the header we received from Google. You can also get the HTTP response code by calling getcode, which in this case was 200, which means it worked successfully. - -If you’d like to see the HTML of the page, you can call the read method on the url variable we created. I am not reproducing that here as the output will be quite long. - -Please note that the request object defaults to a GET request unless you specify the data parameter. Should you pass in the data parameter, then the request object will issue a POST request instead. - ---- - -### Downloading a file - -A typical use case for the urllib package is for downloading a file. Let’s find out a couple of ways we can accomplish this task: - -``` ->>> import urllib.request ->>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip' ->>> response = urllib.request.urlopen(url) ->>> data = response.read() ->>> with open('/home/mike/Desktop/test.zip', 'wb') as fobj: -... fobj.write(data) -... -``` - -Here we just open a URL that leads us to a zip file stored on my blog. Then we read the data and write it out to disk. An alternate way to accomplish this is to use urlretrieve: - -``` ->>> import urllib.request ->>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip' ->>> tmp_file, header = urllib.request.urlretrieve(url) ->>> with open('/home/mike/Desktop/test.zip', 'wb') as fobj: -... with open(tmp_file, 'rb') as tmp: -... fobj.write(tmp.read()) -``` - -The urlretrieve method will copy a network object to a local file. The file it copies to is randomly named and goes into the temp directory unless you use the second parameter to urlretrieve where you can actually specify where you want the file saved. 
This will save you a step and make your code much simpler: - -``` ->>> import urllib.request ->>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip' ->>> urllib.request.urlretrieve(url, '/home/mike/Desktop/blog.zip') -('/home/mike/Desktop/blog.zip', - ) -``` - -As you can see, it returns the location of where it saved the file and the header information from the request. - -### Specifying Your User Agent - -When you visit a website with your browser, the browser tells the website who it is. This is called the user-agent string. Python’s urllib identifies itself as Python-urllib/x.y where the x and y are major and minor version numbers of Python. Some websites won’t recognize this user-agent string and will behave in strange ways or not work at all. Fortunately, it’s easy for you to set up your own custom user-agent string: - -``` ->>> import urllib.request ->>> user_agent = ' Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0' ->>> url = 'http://www.whatsmyua.com/' ->>> headers = {'User-Agent': user_agent} ->>> request = urllib.request.Request(url, headers=headers) ->>> with urllib.request.urlopen(request) as response: -... with open('/home/mdriscoll/Desktop/user_agent.html', 'wb') as out: -... out.write(response.read()) -``` - -Here we set up our user agent to Mozilla FireFox and we set out URL to which will tell us what it thinks our user-agent string is. Then we create a Request instance using our url and headers and pass that to urlopen. Finally we save the result. If you open the result file, you will see that we successfully changed our user-agent string. Feel free to try out a few different strings with this code to see how it will change. - ---- - -### urllib.parse - -The urllib.parse library is your standard interface for breaking up URL strings and combining them back together. You can use it to convert a relative URL to an absolute URL, for example. 
Let’s try using it to parse a URL that includes a query: - -``` ->>> from urllib.parse import urlparse ->>> result = urlparse('https://duckduckgo.com/?q=python+stubbing&t=canonical&ia=qa') ->>> result -ParseResult(scheme='https', netloc='duckduckgo.com', path='/', params='', query='q=python+stubbing&t=canonical&ia=qa', fragment='') ->>> result.netloc -'duckduckgo.com' ->>> result.geturl() -'https://duckduckgo.com/?q=python+stubbing&t=canonical&ia=qa' ->>> result.port -None -``` - -Here we import the urlparse function and pass it an URL that contains a search query to the duckduckgo website. My query was to look up articles on “python stubbing”. As you can see, it returned a ParseResult object that you can use to learn more about the URL. For example, you can get the port information (None in this case), the network location, path and much more. - -### Submitting a Web Form - -This module also holds the urlencode method, which is great for passing data to a URL. A typical use case for the urllib.parse library is submitting a web form. Let’s find out how you might do that by having the duckduckgo search engine look for Python: - -``` ->>> import urllib.request ->>> import urllib.parse ->>> data = urllib.parse.urlencode({'q': 'Python'}) ->>> data -'q=Python' ->>> url = 'http://duckduckgo.com/html/' ->>> full_url = url + '?' + data ->>> response = urllib.request.urlopen(full_url) ->>> with open('/home/mike/Desktop/results.html', 'wb') as f: -... f.write(response.read()) -``` - -This is pretty straightforward. Basically we want to submit a query to duckduckgo ourselves using Python instead of a browser. To do that, we need to construct our query string using urlencode. Then we put that together to create a fully qualified URL and use urllib.request to submit the form. We then grab the result and save it to disk. - ---- - -### urllib.robotparser - -The robotparser module is made up of a single class, RobotFileParser. 
This class will answer questions about whether or not a specific user agent can fetch a URL that has a published robot.txt file. The robots.txt file will tell a web scraper or robot what parts of the server should not be accessed. Let’s take a look at a simple example using ArsTechnica’s website: - -``` ->>> import urllib.robotparser ->>> robot = urllib.robotparser.RobotFileParser() ->>> robot.set_url('http://arstechnica.com/robots.txt') -None ->>> robot.read() -None ->>> robot.can_fetch('*', 'http://arstechnica.com/') -True ->>> robot.can_fetch('*', 'http://arstechnica.com/cgi-bin/') -False -``` - -Here we import the robot parser class and create an instance of it. Then we pass it a URL that specifies where the website’s robots.txt file resides. Next we tell our parser to read the file. Now that that’s done, we give it a couple of different URLs to find out which ones we can crawl and which ones we can’t. We quickly see that we can access the main site, but not the cgi-bin. - ---- - -### Wrapping Up - -You have reached the point that you should be able to use Python’s urllib package competently. We learned how to download a file, submit a web form, change our user agent and access a robots.txt file in this chapter. The urllib has a lot of additional functionality that is not covered here, such as website authentication. However, you might want to consider switching to the requests library before trying to do authentication with urllib as the requests implementation is a lot easier to understand and debug. I also want to note that Python has support for Cookies via its http.cookies module although that is also wrapped quite well in the requests package. You should probably consider trying both to see which one makes the most sense to you. 
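One nice property of the urllib.parse functions covered above is that they work entirely offline, which makes them easy to experiment with. A quick sketch (the URL here is only an example; nothing is actually fetched):

```python
from urllib.parse import urlencode, urlparse, urljoin

# Build a query string, then take the resulting URL apart again
params = urlencode({"q": "python urllib", "t": "canonical"})
full_url = "https://duckduckgo.com/html/?" + params

parsed = urlparse(full_url)
print(parsed.netloc)  # duckduckgo.com
print(parsed.query)   # q=python+urllib&t=canonical

# urljoin resolves a relative reference against a base URL
print(urljoin(full_url, "/robots.txt"))  # https://duckduckgo.com/robots.txt
```

Playing with these pieces first makes the network-facing examples above much easier to follow.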
-
---------------------------------------------------------------------------------
-
-via: http://www.blog.pythonlibrary.org/2016/06/28/python-101-an-intro-to-urllib/
-
-作者:[Mike][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.blog.pythonlibrary.org/author/mld/
-
-
-
-
-
-
-
diff --git a/translated/tech/20160628 Python 101 An Intro to urllib.md b/translated/tech/20160628 Python 101 An Intro to urllib.md
new file mode 100644
index 0000000000..641ac2baf1
--- /dev/null
+++ b/translated/tech/20160628 Python 101 An Intro to urllib.md
@@ -0,0 +1,193 @@
+Python 101: urllib 简介
+=================================
+
+Python 3 的 urllib 模块是一组可以处理 URL 的模块集合。如果你有 Python 2 背景,那么你就会注意到 Python 2 中有 urllib 和 urllib2 两个版本的模块。这些现在都是 Python 3 的 urllib 包的一部分。当前版本的 urllib 包括下面几部分:
+
+- urllib.request
+- urllib.error
+- urllib.parse
+- urllib.robotparser
+
+接下来我们会分开讨论除了 urllib.error 以外的几部分。官方文档实际上推荐你尝试第三方库 requests,它提供了更高层的 HTTP 客户端接口。然而我依然认为,知道如何不依赖第三方库打开 URL 并与之进行交互是很有用的,而且这也可以帮助你理解为什么 requests 包是如此的流行。
+
+---
+
+### urllib.request
+
+urllib.request 模块主要是用来打开和获取 URL 的。让我们看看用 urlopen 函数可以做哪些事:
+
+
+```
+>>> import urllib.request
+>>> url = urllib.request.urlopen('https://www.google.com/')
+>>> url.geturl()
+'https://www.google.com/'
+>>> url.info()
+
+>>> header = url.info()
+>>> header.as_string()
+('Date: Fri, 24 Jun 2016 18:21:19 GMT\n'
+ 'Expires: -1\n'
+ 'Cache-Control: private, max-age=0\n'
+ 'Content-Type: text/html; charset=ISO-8859-1\n'
+ 'P3P: CP="This is not a P3P policy! 
See '
+ 'https://www.google.com/support/accounts/answer/151657?hl=en for more info."\n'
+ 'Server: gws\n'
+ 'X-XSS-Protection: 1; mode=block\n'
+ 'X-Frame-Options: SAMEORIGIN\n'
+ 'Set-Cookie: '
+ 'NID=80=tYjmy0JY6flsSVj7DPSSZNOuqdvqKfKHDcHsPIGu3xFv41LvH_Jg6LrUsDgkPrtM2hmZ3j9V76pS4K_cBg7pdwueMQfr0DFzw33SwpGex5qzLkXUvUVPfe9g699Qz4cx9ipcbU3HKwrRYA; '
+ 'expires=Sat, 24-Dec-2016 18:21:19 GMT; path=/; domain=.google.com; HttpOnly\n'
+ 'Alternate-Protocol: 443:quic\n'
+ 'Alt-Svc: quic=":443"; ma=2592000; v="34,33,32,31,30,29,28,27,26,25"\n'
+ 'Accept-Ranges: none\n'
+ 'Vary: Accept-Encoding\n'
+ 'Connection: close\n'
+ '\n')
+>>> url.getcode()
+200
+```
+
+在这里我们导入了需要的模块,然后告诉它打开 Google 的 URL。现在我们就有了一个可以交互的 HTTPResponse 对象。我们要做的第一件事是调用方法 geturl ,它会返回所获取资源的 URL。这可以让我们发现 URL 是否进行了重定向。
+
+接下来调用 info ,它会返回网页的元数据,比如头信息。因此,我们可以将结果赋给我们的 headers 变量,然后调用它的方法 as_string 。就可以打印出我们从 Google 收到的头信息。你也可以通过 getcode 得到网页的 HTTP 响应码,当前情况下就是 200,意思是正常工作。
+
+如果你想看看网页的 HTML 代码,你可以调用变量 url 的方法 read。我不准备再现这个过程,因为输出结果太长了。
+
+请注意 request 对象默认是 GET 请求,除非你指定了它的 data 参数。如果你给它传递了 data 参数,request 对象就会变成 POST 请求。
+
+---
+
+### 下载文件
+
+urllib 一个典型的应用场景是下载文件。让我们看看几种可以完成这个任务的方法:
+
+```
+>>> import urllib.request
+>>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip'
+>>> response = urllib.request.urlopen(url)
+>>> data = response.read()
+>>> with open('/home/mike/Desktop/test.zip', 'wb') as fobj:
+... fobj.write(data)
+...
+```
+
+这个例子中我们打开一个保存在我的博客上的 zip 压缩文件的 URL。然后我们读出数据并将数据写到磁盘。一个替代方法是使用 urlretrieve :
+
+```
+>>> import urllib.request
+>>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip'
+>>> tmp_file, header = urllib.request.urlretrieve(url)
+>>> with open('/home/mike/Desktop/test.zip', 'wb') as fobj:
+... with open(tmp_file, 'rb') as tmp:
+... 
fobj.write(tmp.read())
+```
+
+方法 urlretrieve 会把网络对象拷贝到本地文件。除非你使用 urlretrieve 的第二个参数指定了你要保存文件的路径,否则这个文件在本地是随机命名的并且是保存在临时文件夹。这个可以为你节省一步操作,并且使代码看起来更简单:
+
+```
+>>> import urllib.request
+>>> url = 'http://www.blog.pythonlibrary.org/wp-content/uploads/2012/06/wxDbViewer.zip'
+>>> urllib.request.urlretrieve(url, '/home/mike/Desktop/blog.zip')
+('/home/mike/Desktop/blog.zip',
+ )
+```
+
+如你所见,它返回了文件保存的路径,以及从请求得来的头信息。
+
+### 设置你的用户代理
+
+当你使用浏览器访问网页时,浏览器会告诉网站它是谁。这就是所谓的 user-agent 字段。Python 的 urllib 会表示它自己为 Python-urllib/x.y , 其中 x 和 y 是你使用的 Python 的主、次版本号。有一些网站不认识这个用户代理字段,网站就可能会有奇怪的表现或者根本不能正常工作。幸运的是你可以很轻松地设置你自己的 user-agent 字段。
+
+```
+>>> import urllib.request
+>>> user_agent = ' Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0'
+>>> url = 'http://www.whatsmyua.com/'
+>>> headers = {'User-Agent': user_agent}
+>>> request = urllib.request.Request(url, headers=headers)
+>>> with urllib.request.urlopen(request) as response:
+... with open('/home/mdriscoll/Desktop/user_agent.html', 'wb') as out:
+... 
out.write(response.read())
+```
+
+这里设置我们的用户代理为 Mozilla FireFox ,然后我们访问 whatsmyua.com ,它会告诉我们它识别出的我们的 user-agent 字段。之后我们用这个 url 和头信息创建了一个 Request 实例,并将它传给 urlopen 。最后我们保存这个结果。如果你打开这个结果,你会看到我们成功地修改了自己的 user-agent 字段。使用这段代码尽情地尝试不同的值来看看它是如何改变的。
+
+---
+
+### urllib.parse
+
+urllib.parse 库是用来拆分和组合 URL 字符串的标准接口。比如,你可以使用它来转换一个相对的 URL 为绝对的 URL。让我们试试用它来转换一个包含查询的 URL :
+
+
+```
+>>> from urllib.parse import urlparse
+>>> result = urlparse('https://duckduckgo.com/?q=python+stubbing&t=canonical&ia=qa')
+>>> result
+ParseResult(scheme='https', netloc='duckduckgo.com', path='/', params='', query='q=python+stubbing&t=canonical&ia=qa', fragment='')
+>>> result.netloc
+'duckduckgo.com'
+>>> result.geturl()
+'https://duckduckgo.com/?q=python+stubbing&t=canonical&ia=qa'
+>>> result.port
+None
+```
+
+这里我们导入了函数 urlparse , 并且把一个包含搜索查询 duckduckgo 的 URL 作为参数传给它。我查询的是关于 "python stubbing" 的文章。如你所见,它返回了一个 ParseResult 对象,你可以用这个对象了解更多关于 URL 的信息。举个例子,你可以获取到端口信息(此处为 None)、网络位置、路径和很多其他东西。
+
+### 提交一个 Web 表单
+
+这个模块还有一个方法 urlencode 可以向 URL 传输数据。 urllib.parse 的一个典型使用场景是提交 Web 表单。让我们通过搜索引擎 duckduckgo 搜索 Python 来看看这个功能是怎么工作的。
+
+```
+>>> import urllib.request
+>>> import urllib.parse
+>>> data = urllib.parse.urlencode({'q': 'Python'})
+>>> data
+'q=Python'
+>>> url = 'http://duckduckgo.com/html/'
+>>> full_url = url + '?' + data
+>>> response = urllib.request.urlopen(full_url)
+>>> with open('/home/mike/Desktop/results.html', 'wb') as f:
+... 
f.write(response.read())
+```
+
+这个例子很直接。基本上我们想使用 Python 而不是浏览器向 duckduckgo 提交一个查询。要完成这个我们需要使用 urlencode 构建我们的查询字符串。然后我们把这个字符串和网址拼接成一个完整正确的 URL ,然后使用 urllib.request 提交这个表单。最后我们就获取到了结果然后保存到磁盘上。
+
+---
+
+### urllib.robotparser
+
+robotparser 模块是由一个单独的类 —— RobotFileParser —— 构成的。这个类可以回答这样的问题:某个特定的用户代理是否可以获取已经发布了 robots.txt 文件的网站上的某个 URL。robots.txt 文件会告诉网络爬虫或者机器人当前网站的哪些部分是不允许被访问的。让我们看一个简单的例子:
+
+```
+>>> import urllib.robotparser
+>>> robot = urllib.robotparser.RobotFileParser()
+>>> robot.set_url('http://arstechnica.com/robots.txt')
+None
+>>> robot.read()
+None
+>>> robot.can_fetch('*', 'http://arstechnica.com/')
+True
+>>> robot.can_fetch('*', 'http://arstechnica.com/cgi-bin/')
+False
+```
+
+这里我们导入了 robot 分析器类,然后创建一个实例。然后我们给它传递一个表明网站 robots.txt 位置的 URL 。接下来我们告诉分析器来读取这个文件。读取完成之后,我们给了它一组不同的 URL,让它找出哪些是我们可以爬取的,而哪些是不能爬取的。我们很快就看到我们可以访问主站,但是不能访问 cgi-bin 路径。
+
+---
+
+### 总结一下
+
+现在你就有能力使用 Python 的 urllib 包了。在这一节里,我们学习了如何下载文件,提交 Web 表单,修改自己的用户代理以及访问 robots.txt。 urllib 还有一大堆附加功能没有在这里提及,比如网站授权。不过,你可能会考虑在使用 urllib 进行认证之前先切换到 requests 库,因为 requests 已经以更易用和易调试的方式实现了这些功能。我同时也希望提醒你 Python 已经通过 http.cookies 模块支持 Cookies 了,虽然 requests 包也很好地封装了这个功能。你可以考虑同时试试这两个库,看看哪个最适合你。
+
+--------------------------------------------------------------------------------
+
+via: http://www.blog.pythonlibrary.org/2016/06/28/python-101-an-intro-to-urllib/
+
+作者:[Mike][a]
+译者:[Ezio](https://github.com/oska874)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.blog.pythonlibrary.org/author/mld/

From 91e04d1ca4ec0b72dc404850c2f08f921615d20c Mon Sep 17 00:00:00 2001
From: Mike
Date: Tue, 26 Jul 2016 16:32:01 +0800
Subject: [PATCH 242/471] translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md (#4223)

---
 ...User Space What We Did for Kernel Space.md | 96 -------------------
 ...User Space What We Did for Kernel Space.md | 95 ++++++++++++++++++
 2 files changed, 95 insertions(+), 96 deletions(-)
 delete mode 
100644 sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md create mode 100644 translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md diff --git a/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md b/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md deleted file mode 100644 index 743114c481..0000000000 --- a/sources/tech/20160706 Doing for User Space What We Did for Kernel Space.md +++ /dev/null @@ -1,96 +0,0 @@ -MikeCoder Translating... - -Doing for User Space What We Did for Kernel Space -======================================================= - -I believe the best and worst thing about Linux is its hard distinction between kernel space and user space. - -Without that distinction, Linux never would have become the most leveraged operating system in the world. Today, Linux has the largest range of uses for the largest number of users—most of whom have no idea they are using Linux when they search for something on Google or poke at their Android phones. Even Apple stuff wouldn't be what it is (for example, using BSD in its computers) were it not for Linux's success. - -Not caring about user space is a feature of Linux kernel development, not a bug. As Linus put it on our 2003 Geek Cruise, "I only do kernel stuff...I don't know what happens outside the kernel, and I don't much care. What happens inside the kernel I care about." After Andrew Morton gave me additional schooling on the topic a couple years later on another Geek Cruise, I wrote: - ->Kernel space is where the Linux species lives. User space is where Linux gets put to use, along with a lot of other natural building materials. The division between kernel space and user space is similar to the division between natural materials and stuff humans make out of those materials. 
- -A natural outcome of this distinction, however, is for Linux folks to stay relatively small as a community while the world outside depends more on Linux every second. So, in hope that we can enlarge our number a bit, I want to point us toward two new things. One is already hot, and the other could be. - -The first is [blockchain][1], made famous as the distributed ledger used by Bitcoin, but useful for countless other purposes as well. At the time of this writing, interest in blockchain is [trending toward the vertical][2]. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f1.png) ->Figure 1. Google Trends for Blockchain - -The second is self-sovereign identity. To explain that, let me ask who and what you are. - -If your answers come from your employer, your doctor, the Department of Motor Vehicles, Facebook, Twitter or Google, they are each administrative identifiers: entries in namespaces each of those organizations control, entirely for their own convenience. As Timothy Ruff of [Evernym][3] explains, "You don't exist for them. Only your identifier does." It's the dependent variable. The independent variable—the one controlling the identifier—is the organization. - -If your answer comes from your self, we have a wide-open area for a new development category—one where, finally, we can be set fully free in the connected world. - -The first person to explain this, as far as I know, was [Devon Loffreto][4] He wrote "What is 'Sovereign Source Authority'?" in February 2012, on his blog, [The Moxy Tongue][5]. In "[Self-Sovereign Identity][6]", published in February 2016, he writes: - ->Self-Sovereign Identity must emit directly from an individual human life, and not from within an administrative mechanism...self-Sovereign Identity references every individual human identity as the origin of source authority. 
A self-Sovereign identity produces an administrative trail of data relations that begin and resolve to individual humans. Every individual human may possess a self-Sovereign identity, and no person or abstraction of any type created may alter this innate human Right. A self-Sovereign identity is the root of all participation as a valued social being within human societies of any type. - -To put this in Linux terms, only the individual has root for his or her own source identity. In the physical world, this is a casual thing. For example, my own portfolio of identifiers includes: - -- David Allen Searls, which my parents named me. -- David Searls, the name I tend to use when I suspect official records are involved. -- Dave, which is what most of my relatives and old friends call me. -- Doc, which is what most people call me. - -As the sovereign source authority over the use of those, I can jump from one to another in different contexts and get along pretty well. But, that's in the physical world. In the virtual one, it gets much more complicated. In addition to all the above, I am @dsearls (my Twitter handle) and dsearls (my handle in many other net-based services). I am also burdened by having my ability to relate contained within hundreds of different silos, each with their own logins and passwords. - -You can get a sense of how bad this is by checking the list of logins and passwords on your browser. On Firefox alone, I have hundreds of them. Many are defunct (since my collection dates back to Netscape days), but I would guess that I still have working logins to hundreds of companies I need to deal with from time to time. For all of them, I'm the dependent variable. It's not the other way around. Even the term "user" testifies to the subordinate dependency that has become a primary fact of life in the connected world. - -Today, the only easy way to bridge namespaces is via the compromised convenience of "Log in with Facebook" or "Log in with Twitter". 
In both of those cases, each of us is even less ourselves or in any kind of personal control over how we are known (if we wish to be knowable at all) to other entities in the connected world. - -What we have needed from the start are personal systems for instantiating our sovereign selves and choosing how to reveal and protect ourselves when dealing with others in the connected world. For lack of that ability, we are deep in a metastasized mess that Shoshana Zuboff calls "surveillance capitalism", which she says is: - ->...unimaginable outside the inscrutable high velocity circuits of Google's digital universe, whose signature feature is the Internet and its successors. While the world is riveted by the showdown between Apple and the FBI, the real truth is that the surveillance capabilities being developed by surveillance capitalists are the envy of every state security agency. - -Then she asks, "How can we protect ourselves from its invasive power?" - -I suggest self-sovereign identity. I believe it is only there that we have both safety from unwelcome surveillance and an Archimedean place to stand in the world. From that place, we can assert full agency in our dealings with others in society, politics and business. - -I came to this provisional conclusion during [ID2020][7], a gathering at the UN on May. It was gratifying to see Devon Loffreto there, since he's the guy who got the sovereign ball rolling in 2013. Here's [what I wrote about][8] it at the time, with pointers to Devon's earlier posts (such as one sourced above). - -Here are three for the field's canon: - -- "[Self-Sovereign Identity][9]" by Devon Loffreto. -- "[System or Human First][10]" by Devon Loffreto. -- "[The Path to Self-Sovereign Identity][11]" by Christopher Allen. - -A one-pager from Evernym, [digi.me][12], [iRespond][13] and [Respect Network][14] also was circulated there, contrasting administrative identity (which it calls the "current model") with the self-sovereign one. 
In it is the graphic shown in Figure 2. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f2.jpg) ->Figure 2. Current Model of Identity vs. Self-Sovereign Identity - -The [platform][15] for this is Sovrin, explained as a "Fully open-source, attribute-based, sovereign identity graph platform on an advanced, dedicated, permissioned, distributed ledger" There's a [white paper][16] too. The code is called [plenum][17], and it's at GitHub. - -Here—and places like it—we can do for user space what we've done for the last quarter century for kernel space. - --------------------------------------------------------------------------------- - -via: https://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-space - -作者:[Doc Searls][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linuxjournal.com/users/doc-searls -[1]: https://en.wikipedia.org/wiki/Block_chain_%28database%29 -[2]: https://www.google.com/trends/explore#q=blockchain -[3]: http://evernym.com/ -[4]: https://twitter.com/nzn -[5]: http://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html -[6]: http://www.moxytongue.com/2016/02/self-sovereign-identity.html -[7]: http://www.id2020.org/ -[8]: http://blogs.harvard.edu/doc/2013/10/14/iiw-challenge-1-sovereign-identity-in-the-great-silo-forest -[9]: http://www.moxytongue.com/2016/02/self-sovereign-identity.html -[10]: http://www.moxytongue.com/2016/05/system-or-human.html -[11]: http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html -[12]: https://get.digi.me/ -[13]: http://irespond.com/ -[14]: https://www.respectnetwork.com/ -[15]: http://evernym.com/technology -[16]: http://evernym.com/assets/doc/Identity-System-Essentials.pdf?v=167284fd65 -[17]: https://github.com/evernym/plenum diff --git 
a/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md b/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md new file mode 100644 index 0000000000..0b31016003 --- /dev/null +++ b/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md @@ -0,0 +1,95 @@ +在用户空间做我们会在内核空间做的事情 +======================================================= + +我相信,Linux 最好也是最坏的事情,就是内核空间和用户空间之间的巨大差别。 + +但是如果抛开这个区别,Linux 可能也不会成为世界上影响力最大的操作系统。如今,Linux 已经拥有世界上最大数量的用户,和最大范围的应用。尽管大多数用户并不知道,当他们进行谷歌搜索,或者触摸安卓手机的时候,他们其实正在使用 Linux。如果不是 Linux 的巨大成功,Apple 公司也可能并不会成为现在这样(苹果在他们的电脑产品中使用 BSD 发行版)。 + +不用担心,用户空间是 Linux 内核开发中的一个特性,并不是一个缺陷。正如 Linus 在 2003 的极客巡航中提到的那样,“”我只做内核相关技术……我并不知道内核之外发生的事情,而且我并不关心。我只关注内核部分发生的事情。” 在 Andrew Morton 在多年之后的另一个极客巡航上给我上了另外的一课,我写到: + +> 内核空间是 Linux 核心存在的地方。用户空间是使用 Linux 时使用的空间,和其他的自然的建筑材料一样。内核空间和用户空间的区别,和自然材料和人类从中生产的人造材料的区别很类似。 + +这个区别的自然而然的结果,就是尽管外面的世界一刻也离不开 Linux, 但是 Linux 社区还是保持相对较小。所以,为了增加我们社区团体的数量,我希望指出两件事情。第一件已经非常火热,另外一件可能热门。 + +第一件事情就是 [blockchain][1],出自著名的分布式货币,比特币之手。当你正在阅读这篇文章的同时,对 blockchain 的[兴趣已经直线上升][2]。 + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f1.png) +> 图1. 
谷歌 Blockchain 的趋势 + +第二件事就是自主身份。为了解释这个,让我先来问你,你是谁或者你是什么。 + +如果你从你的雇员,你的医生,或者车管所,Facebook,Twitter 或者谷歌上得到答案,你就会发现他们每一个都有明显的社会性: 为了他们自己的便利,在进入这些机构的控制前,他们都会添加自己的命名空间。正如 Timothy Ruff 在 [Evernym][3] 中解释的,”你并不为了他们而存在,你只为了自己的身份而活。“。你的身份可能会变化,但是唯一不变的就是控制着身份的人,也就是这个组织。 + +如果你的答案出自你自己,我们就有一个广大空间来发展一个新的领域,在这个领域中,我们完全自由。 + +第一个解释这个的人,据我所知,是 [Devon Loffreto][4]。在 2012 年 2 月,在的他的博客中,他写道 ”什么是' Sovereign Source Authority'?“,[Moxy Tongue][5]。在他发表在 2016 年 2 月的 "[Self-Sovereign Identity][6]" 中,他写道: + +> 自主身份必须是独立个人提出的,并且不包含社会因素。。。自主身份源于每个个体对其自身本源的认识。 一个自主身份可以为个体带来新的社会面貌。每个个体都可能为自己生成一个自主身份,并且这并不会改变固有的人权。使用自主身份机制是所有参与者参与的基石,并且 依旧可以同各种形式的人类社会保持联系。 + +为了将这个发布在 Linux 条款中,只有个人才能为他或她设定一个自己的开源社区身份。这在现实实践中,这只是一个非常偶然的事件。举个例子,我自己的身份包括: + +- David Allen Searls,我父母会这样叫我。 +- David Searls,正式场合下我会这么称呼自己。 +- Dave,我的亲戚和好朋友会这么叫我。 +- Doc,大多数人会这么叫我。 + +在上述提到的身份认证中,我可以在不同的情景中轻易的转换。但是,这只是在现实世界中。在虚拟世界中,这就变得非常困难。除了上述的身份之外,我还可以是 @dsearls(我的 twitter 账号) 和 dsearls (其他的网络账号)。然而为了记住成百上千的不同账号的登录名和密码,我已经不堪重负。 + +你可以在你的浏览器上感受到这个糟糕的体验。在火狐上,我有成百上千个用户名密码。很多已经废弃(很多都是从 Netscape 时代遗留下来的),但是我依旧假设我有时会有大量的工作账号需要处理。对于这些,我只是被动接受者。没有其他的解决方法。甚至一些安全较低的用户认证,已经成为了现实世界中不可缺少的一环。 + +现在,最简单的方式来联系账号,就是通过 "Log in with Facebook" 或者 "Login in with Twitter" 来进行身份认证。在这些例子中,我们中的每一个甚至并不是真正意义上的自己,或者某种程度上是我们希望被大家认识的自己(如果我们希望被其他人认识的话)。 + +我们从一开始就需要的是一个可以实体化我们的自主身份和交流时选择如何保护和展示自身的个人系统。因为缺少这个能力,我们现在陷入混乱。Shoshana Zuboff 称之为 "监视资本主义",她如此说道: + +>...难以想象,在见证了互联网和获得了的巨大成功的谷歌背后。世界因 Apple 和 FBI 的对决而紧密联系在一起。真相就是,被热衷于监视的资本家开发监视系统,是每一个国家安全机构真正的恶。 + +然后,她问道,”我们怎样才能保护自己远离他人的影响?“ + +我建议使用自主身份。我相信这是我们唯一的方式,来保证我们从一个被监视的世界中脱离出来。以此为基础,我们才可以完全无顾忌的和社会,政治,商业上的人交流。 + +我在五月联合国举行的 [ID2020][7] 会议中总结了这个临时的结论。很高兴,Devon Loffreto 也在那,自从他在2013年被选为作为轮值主席之后。这就是[我曾经写的一些文章][8],引用了 Devon 的早期博客(比如上面的原文)。 + +这有三篇这个领域的准则: + +- "[Self-Sovereign Identity][9]" - Devon Loffreto. +- "[System or Human First][10]" - Devon Loffreto. +- "[The Path to Self-Sovereign Identity][11]" - Christopher Allen. 
+ +从Evernym 的简要说明中,[digi.me][12], [iRespond][13] 和 [Respect Network][14] 也被包括在内。自主身份和社会身份 (也被称为”current model“) 的对比结果,显示在图二中。 + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f2.jpg) +> 图 2. Current Model 身份 vs. 自主身份 + +为此而生的[平台][15]就是 Sovrin,也被解释为“”依托于先进技术的,授权机制的,分布式货币上的一个完全开源,基于标识,声明身份的图平台“ 同时,这也有一本[白皮书][16]。代号为 [plenum][17],而且它在 Github 上。 + +在这-或者其他类似的地方-我们就可以在用户空间中重现我们在上一个的四分之一世纪中已经做过的事情。 + + +-------------------------------------------------------------------------------- + +via: https://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-space + +作者:[Doc Searls][a] +译者:[译者ID](https://github.com/MikeCoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxjournal.com/users/doc-searls +[1]: https://en.wikipedia.org/wiki/Block_chain_%28database%29 +[2]: https://www.google.com/trends/explore#q=blockchain +[3]: http://evernym.com/ +[4]: https://twitter.com/nzn +[5]: http://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html +[6]: http://www.moxytongue.com/2016/02/self-sovereign-identity.html +[7]: http://www.id2020.org/ +[8]: http://blogs.harvard.edu/doc/2013/10/14/iiw-challenge-1-sovereign-identity-in-the-great-silo-forest +[9]: http://www.moxytongue.com/2016/02/self-sovereign-identity.html +[10]: http://www.moxytongue.com/2016/05/system-or-human.html +[11]: http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html +[12]: https://get.digi.me/ +[13]: http://irespond.com/ +[14]: https://www.respectnetwork.com/ +[15]: http://evernym.com/technology +[16]: http://evernym.com/assets/doc/Identity-System-Essentials.pdf?v=167284fd65 +[17]: https://github.com/evernym/plenum From c2110a39900f819f0690479eb8f638004fc7b35b Mon Sep 17 00:00:00 2001 From: joVoV <704451873@qq.com> Date: Tue, 26 Jul 2016 16:34:52 +0800 Subject: [PATCH 243/471] 
=?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20160627 Linux Practicality vs Activism.md | 74 ------------------- ...20160627 Linux Practicality vs Activism.md | 71 ++++++++++++++++++ 2 files changed, 71 insertions(+), 74 deletions(-) delete mode 100644 sources/talk/20160627 Linux Practicality vs Activism.md create mode 100644 translated/talk/20160627 Linux Practicality vs Activism.md diff --git a/sources/talk/20160627 Linux Practicality vs Activism.md b/sources/talk/20160627 Linux Practicality vs Activism.md deleted file mode 100644 index 195a9fdf5c..0000000000 --- a/sources/talk/20160627 Linux Practicality vs Activism.md +++ /dev/null @@ -1,74 +0,0 @@ -jovov 正在翻译。。。 - -Linux Practicality vs Activism -================================== - ->Is Linux actually more practical than other OSes, or is there some higher minded reason to use it? - -One of the greatest things about running Linux is the freedom it provides. Where the division among the Linux community appears is in how we value this freedom. - -For some, the freedom enjoyed by using Linux is the freedom from vendor lock-in or high software costs. Most would call this a practical consideration. Others users would tell you the freedom they enjoy is software freedom. This means embracing Linux distributions that support the [Free Software Movement][1], avoiding proprietary software completely and all things related. - -In this article, I'll walk you through some of the differences between these two freedoms and how they affect Linux usage. - -### The problem with proprietary - -One thing most Linux users have in common is their preference for avoiding proprietary software. For practical enthusiasts like myself, it's a matter of how I spend my money, the ability to control my software and avoiding vendor lock-in. Granted, I'm not a coder...so my tweaks to my installed software are pretty mild. 
But there are instances where a minor tweak to an application can mean the difference between it working and it not working. - -Then there are Linux enthusiasts who opt to avoid proprietary software because they feel it's unethical to use it. Usually the main concern here is that using proprietary software takes away or simply obstructs your personal freedom. Users in this corner prefer to use Linux distributions and software that support the [Free Software philosophy][2]. While it's similar to and often directly confused with Open Source concepts, [there are differences][3]. - -So here's the issue: Users such as myself tend to put convenience over the ideals of pure software freedom. Don't get me wrong, folks like me prefer to use software that meets the ideals behind Free Software, but we also are more likely to make concessions in order to accomplish specific tasks. - -Both types of Linux enthusiasts prefer using non-proprietary solutions. But Free Software advocates won't use proprietary at all, where as the practical user will rely on the best tool with the best performance. This means there are instances where the practical user is willing to run a proprietary application or code on their non-proprietary operating system. - -In the end, both user types enjoy using what Linux has to offer. But our reasons for doing so tend to vary. Some have argued that this is a matter of ignorance with those who don't support Free Software. I disagree and believe it's a matter of practical convenience. Users who prefer practical convenience simply aren't concerned about the politics of their software. - -### Practical Convenience - -When you ask most people why they use the operating system they use, it's usually tied in with practical convenience. Examples of this convenience might include "it's what I've always used" down to "it runs the software I need." 
Other folks might take this a step further and explain it's not so much the software that drives their OS preference, as the familiarity of the OS in question. And finally, there are specialty "niche tasks" or hardware compatibility issues that also provide good reasons for using one OS over another. - -This might surprise many of you, but the single biggest reason I run desktop Linux today is due to familiarity. Even though I provide support for Windows and OS X for others, it's actually quite frustrating to use these operating systems as they're simply not what my muscle memory is used to. I like to believe this allows me to empathize with Linux newcomers, as I too know how off-putting it can be to step into the realm of the unfamiliar. My point here is this – familiarity has value. And familiarity also powers practical convenience as well. - -Now if we compare this to the needs of a Free Software advocate, you'll find those folks are willing to learn something new and perhaps even more challenging if it translates into them avoiding using non-free software. It's actually something I've always admired about this type of user. Their willingness to take the path less followed to stick to their principles is, in my opinion, admirable. - -### The price of freedom - -One area I don't envy is the extra work involved in making sure a Free Software advocate is always using Linux distros and hardware that respect their digital freedom according to the standards set forth by the [Free Software Foundation][4]. This means the Linux kernel needs to be free from proprietary blobs for driver support and the hardware in question doesn't require any proprietary code whatsoever. Certainly not impossible, but it's pretty close. - -The absolute best scenario a Free Software advocate can shoot for is hardware that is "freedom-compatible." 
There are vendors out there that can meet this need, however most of them are offering hardware that relies on Linux compatible proprietary firmware. Great for the practical user, a show-stopper for the Free Software advocate. - -What all of this translates into is that the advocate must be far more vigilant than the practical Linux enthusiast. This isn't necessarily a negative thing per se, however it's a consideration if one is planning on jumping onto the Free Software approach to computing. Practical users, by contrast, can use any software or hardware that happens to be Linux compatible without a second thought. I don't know about you, but in my eyes this seems a bit easier to me. - -### Defining software freedom - -This part is going to get some folks upset as I personally don't subscribe to the belief that there's only one flavor of software freedom. From where I stand, I think true freedom is being able to soak in all the available data on a given issue and then come to terms with the approach that best suits that person's lifestyle. - -So for me, I prefer using Linux distributions that provide me with the desktop that meets all of my needs. This includes the use of non-proprietary software and proprietary software. Even though it's fair to suggest that the proprietary software restricts my personal freedom, I must counter this by pointing out that I had the freedom to use it in the first place. One might even call this freedom of choice. - -Perhaps this too, is why I find myself identifying more with the ideals of Open Source Software instead of sticking with the ideals behind the Free Software movement. I prefer to stand with the group that doesn't spend their time telling me how I'm wrong for using what works best for me. It's been my experience that the Open Source crowd is merely interested in sharing the merits of software freedom without the passion for Free Software idealism. - -I think the concept of Free Software is great. 
And to those who need to be active in software politics and point out the flaws of using proprietary software to folks, then I think Linux ([GNU/Linux][5]) activism is a good fit. Where practical users such as myself tend to change course from Free Software Linux advocates is in our presentation. - -When I present Linux on the desktop, I share my passion for its practical merits. And if I'm successful and they enjoy the experience, I allow the user to discover the Free Software perspective on their own. I've found most people use Linux on their computers not because they want to embrace software freedom, rather because they simply want the best user experience possible. Perhaps I'm alone in this, it's hard to say. - -What say you? Are you a Free Software Advocate? Perhaps you're a fan of using proprietary software/code on your desktop Linux distribution? Hit the Comments and share your Linux desktop experiences. - - --------------------------------------------------------------------------------- - -via: http://www.datamation.com/open-source/linux-practicality-vs-activism.html - -作者:[Matt Hartley][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.datamation.com/author/Matt-Hartley-3080.html -[1]: https://en.wikipedia.org/wiki/Free_software_movement -[2]: https://www.gnu.org/philosophy/free-sw.en.html -[3]: https://www.gnu.org/philosophy/free-software-for-freedom.en.html -[4]: https://en.wikipedia.org/wiki/Free_Software_Foundation -[5]: https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy - - diff --git a/translated/talk/20160627 Linux Practicality vs Activism.md b/translated/talk/20160627 Linux Practicality vs Activism.md new file mode 100644 index 0000000000..81135bf56f --- /dev/null +++ b/translated/talk/20160627 Linux Practicality vs Activism.md @@ -0,0 +1,71 @@ +Linux 的实用性 VS 行动主义 +================================== 
+ +>我们使用 Linux 是因为它比其他操作系统更实用,还是其他更高级的理由呢? + +其中一件关于运行 Linux 的最伟大的事情之一就是它所提供的自由。凡出现在 Linux 社区之间的划分在于我们如何珍惜这种自由。 + +一些人认为,通过使用 Linux 所享有的自由是从供应商锁定或高软件成本的自由。大多数人会称这个是一个实际的考虑。而其他用户会告诉你,他们享受的是自由软件的自由。那就意味着拥抱支持 [开源软件运动][1] 的 Linux 发行版,完全避免专有软件和所有相关的东西。 + + +在这篇文章中,我将带你比较这两种自由的区别,以及他们如何影响 Linux 的使用。 + +### 专有的问题 + +大多数的用户有一个共同的一点是他们的喜欢避免专有软件。对于像我这样的实际的爱好者来说,这是一个我怎么样花我的钱,来控制我的软件和避免供应商锁定的问题。当然,我不是一个程序员……所以我调整我的安装软件是十分温柔的。但也有一些个别情况,一个应用程序的小调整可以意味着它的工作和不工作的区别。 + +还有就是选择避开专有软件的Linux爱好者,因为他们觉得这是不道德的使用。通常这里主要的问题是使用专有软件会带走或者干脆阻碍你的个人自由。像这些用户更喜欢使用的Linux发行版和软件来支持 [自由软件理念][2] 。虽然它类似于开源的概念并经常直接与之混淆,[这里有些差异][3] 。 + +因此,这里有个问题:像我这样的用户往往以其便利掩盖了其纯软件自由的理想化。不要误会我的意思,像我这样的人更喜欢使用符合自由软件背后的理想软件,但我们也更有可能做出让步,以完成特定的任务。 + +这两种类型的 Linux 爱好者都喜欢使用非专有的解决方案。但是,自由软件倡导者根本不会去使用所有权,在那里作为实际的用户将依靠具有最佳性能的最佳工具。这意味着,在有些情况下的实际用户愿意来运行他们的非专有操作系统上的专有应用或代码实例。 + +最终,这两种类型的用户都喜欢使用 Linux 所提供的。但是,我们这样做的原因往往会有所不同。有人认为那些不支持自由软件的人是无知的。我不同意,我认为它是实用方便性的问题。那些喜欢实用方便性的用户根本不关心他们软件的政治问题。 + +### 实用方便性 + +当你问起绝大多数的人为什么使用他们现在的操作系统,回答通常都集中于实用方便性。这种关于方便性的例子可能包括“它是我一直使用的东西”、“它运行的软件是我需要的”。 其他人可能进一步解释说,并没有那么多软件影响他们对操作系统的偏好和熟悉程度,最后,有“利基任务”或硬件兼容性问题也提供了很好的理由让我们用这个操作系统而不是另一个。 + +这可能会让你们中许多人很惊讶,但我今天运行的桌面 Linux 最大的一个原因是由于熟悉。即使我为别人提供对 Windows 和 OS X 的支持,但实际上我是相当沮丧地使用这些操作系统,因为它们根本就不是我记忆中的那样习惯用法。我相信这可以让我对那些 Linux 新手表示同情,因为我太懂得踏入陌生的领域是怎样的让人倒胃口了。我的观点是这样的 —— 熟悉具有价值。而且熟悉同样使得实用方便性变得有力量。 + +现在,如果我们把它和一个自由软件倡导者的需求来比较,你会发现那些人都愿意学习新的东西,甚至更具挑战性,去学习那些若转化成为他们所避免使用的非自由软件。这就是我经常赞美的那种用户,我认为他们愿意采取最少路径来遵循坚持他们的原则是十分值得赞赏的。 + +### 自由的价值 + +我不羡慕那些自由软件倡导者的一个地方,就是根据 [自由软件基金会][4] 所规定的标准需要确保他们可以一直使用 Linux 发行版和硬件,以便于尊重他们的数字自由。这意味着 Linux 内核需要摆脱专有的斑点的驱动支持和不需要任何专有代码的硬件。当然不是不可能的,但它很接近。 + +一个自由软件倡导者可以达到的最好的情况是硬件是“自由兼容”的。有些供应商,可以满足这一需求,但他们大多是提供依赖于 Linux 兼容专有固件的硬件。伟大的实际用户对自由软件倡导者来说是个搅局者。 + +那么这一切意味着的是,倡导者必须比实际的 Linux 爱好者,更加警惕。这本身并不一定是消极的,但如果是打算用自由软件的方法来计算的话那就值得考虑了。通过对比,实用的用户可以专心地使用与 Linux 兼容的任何软件或硬件。我不知道你是怎么想的,但在我眼中是更轻松一点的。 + +### 定义自由软件 + +这一部分可能会让一部分人失望,因为我不相信自由软件只有一种。从我的立场,我认为真正的自由是能够在一个给定的情况里沉浸在所有可用的数据里,然后用最适合这个人的生活方式的途径来达成协议。 + +所以对我来说,我更喜欢使用的 Linux 
桌面,满足了我所有的需求,这包括非专有软件和专有软件的使用。尽管说专有软件限制了我的个人自由也有道理,但我必须反驳这一点,因为我有选择用或不用它的自由,即选择的自由。
+
+或许,这也就是为什么我发现自己更认同开源软件的理想,而不是坚持自由软件运动背后的理念的原因。我更愿意和那些不会花时间告诉我“我用错了”的人群在一起。我的经验是,开源人群只是有兴趣分享自由软件的优点,而没有自由软件那种理想主义的激情。
+
+我觉得自由软件的概念实在是太棒了。对那些需要活跃于软件政治、向人们指出使用专有软件的弊端的人来说,我认为 Linux ( [GNU/Linux][5] ) 行动主义是一个不错的选择。而像我这样的实际型用户与自由软件倡导者分道扬镳的地方,就在于我们的介绍方式。
+
+当我介绍桌面 Linux 时,我会富有激情地分享它的实际优点。如果我成功了,他们也享受这一体验,我就让用户自己去发现自由软件的观点。我发现,大多数人在电脑上使用 Linux,不是因为他们想拥抱软件自由,而是因为他们只是想要尽可能好的用户体验。也许只有我是这样想的,这很难说。
+
+那么你呢?你是一个自由软件倡导者吗?或者你是个喜欢在桌面 Linux 发行版上使用专有软件/代码的粉丝?欢迎评论,分享您的 Linux 桌面体验!
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.datamation.com/open-source/linux-practicality-vs-activism.html
+
+作者:[Matt Hartley][a]
+译者:[joVoV](https://github.com/joVoV)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.datamation.com/author/Matt-Hartley-3080.html
+[1]: https://en.wikipedia.org/wiki/Free_software_movement
+[2]: https://www.gnu.org/philosophy/free-sw.en.html
+[3]: https://www.gnu.org/philosophy/free-software-for-freedom.en.html
+[4]: https://en.wikipedia.org/wiki/Free_Software_Foundation
+[5]: https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy
\ No newline at end of file
From 2982eaea05cd7496e216a36e0f3e4662418905ad Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=91=A8=E5=AE=B6=E6=9C=AA?=
Date: Tue, 26 Jul 2016 17:12:11 +0800
Subject: [PATCH 244/471] Translating by GitFuture [07.26]

---
 .../20160705 How to Encrypt a Flash Drive Using VeraCrypt.md  | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
index 2dd2ae024f..27a0b950bd 100644
--- a/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
+++ b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
@@ -1,3 +1,5 @@ 
+Translating by GitFuture [07.26] + How to Encrypt a Flash Drive Using VeraCrypt ============================================ From 9e955cfb872c86914a5e2693803b1ddad52a77eb Mon Sep 17 00:00:00 2001 From: Locez Date: Tue, 26 Jul 2016 07:36:41 -0500 Subject: [PATCH 245/471] translating by locez --- sources/tech/20160722 7 Best Markdown Editors for Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160722 7 Best Markdown Editors for Linux.md b/sources/tech/20160722 7 Best Markdown Editors for Linux.md index afe303021d..b4b05a41f5 100644 --- a/sources/tech/20160722 7 Best Markdown Editors for Linux.md +++ b/sources/tech/20160722 7 Best Markdown Editors for Linux.md @@ -1,3 +1,4 @@ +translating by Locez 7 Best Markdown Editors for Linux ====================================== From f50bf5b4f7d953ca0aa23b4822adbf3326fa59e3 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Wed, 27 Jul 2016 15:36:59 +0800 Subject: [PATCH 246/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...technologies in Fedora - systemd-nspawn.md | 108 ------------------ ...technologies in Fedora - systemd-nspawn.md | 99 ++++++++++++++++ 2 files changed, 99 insertions(+), 108 deletions(-) delete mode 100644 sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md create mode 100644 translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md diff --git a/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md b/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md deleted file mode 100644 index 6d3e08ac91..0000000000 --- a/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md +++ /dev/null @@ -1,108 +0,0 @@ -Being translated by [ChrisLeeGit](https://github.com/chrisleegit) - -Container technologies in Fedora: systemd-nspawn -=== - -Welcome to the “Container technologies in Fedora” series! 
This is the first article in a series of articles that will explain how you can use the various container technologies available in Fedora. This first article will deal with `systemd-nspawn`. - -### What is a container? - -A container is a user-space instance which can be used to run a program or an operating system in isolation from the system hosting the container (called the host system). The idea is very similar to a `chroot` or a [virtual machine][1]. The processes running in a container are managed by the same kernel as the host operating system, but they are isolated from the host file system, and from the other processes. - - -### What is systemd-nspawn? - -The systemd project considers container technologies as something that should fundamentally be part of the desktop and that should integrate with the rest of the user’s systems. To this end, systemd provides `systemd-nspawn`, a tool which is able to create containers using various Linux technologies. It also provides some container management tools. - -In many ways, `systemd-nspawn` is similar to `chroot`, but is much more powerful. It virtualizes the file system, process tree, and inter-process communication of the guest system. Much of its appeal lies in the fact that it provides a number of tools, such as `machinectl`, for managing containers. Containers run by `systemd-nspawn` will integrate with the systemd components running on the host system. As an example, journal entries can be logged from a container in the host system’s journal. - -In Fedora 24, `systemd-nspawn` has been split out from the systemd package, so you’ll need to install the `systemd-container` package. As usual, you can do that with a `dnf install systemd-container`. - -### Creating the container - -Creating a container with `systemd-nspawn` is easy. Let’s say you have an application made for Debian, and it doesn’t run well anywhere else. That’s not a problem, we can make a container! 
To set up a container with the latest version of Debian (at this point in time, Jessie), you need to pick a directory to set up your system in. I’ll be using `~/DebianJessie` for now. - -Once the directory has been created, you need to run `debootstrap`, which you can install from the Fedora repositories. For Debian Jessie, you run the following command to initialize a Debian file system. - -``` -$ debootstrap --arch=amd64 stable ~/DebianJessie -``` - -This assumes your architecture is x86_64. If it isn’t, you must change `amd64` to the name of your architecture. You can find your machine’s architecture with `uname -m`. - -Once your root directory is set up, you will start your container with the following command. - -``` -$ systemd-nspawn -bD ~/DebianJessie -``` - -You’ll be up and running within seconds. You’ll notice something as soon as you try to log in: you can’t use any accounts on your system. This is because systemd-nspawn virtualizes users. The fix is simple: remove -b from the previous command. You’ll boot directly to the root shell in the container. From there, you can just use passwd to set a password for root, or you can use adduser to add a new user. As soon as you’re done with that, go ahead and put the -b flag back. You’ll boot to the familiar login console and you log in with the credentials you set. - -All of this applies for any distribution you would want to run in the container, but you need to create the system using the correct package manager. For Fedora, you would use DNF instead of debootstrap. To set up a minimal Fedora system, you can run the following command, replacing the absolute path with wherever you want the container to be. 
- -``` -$ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release -``` - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-04-14.png) - -### Setting up the network - -You’ll notice an issue if you attempt to start a service that binds to a port currently in use on your host system. Your container is using the same network interface. Luckily, `systemd-nspawn` provides several ways to achieve separate networking from the host machine. - -#### Local networking - -The first method uses the `--private-network` flag, which only creates a loopback device by default. This is ideal for environments where you don’t need networking, such as build systems and other continuous integration systems. - -#### Multiple networking interfaces - -If you have multiple network devices, you can give one to the container with the `--network-interface` flag. To give `eno1` to my container, I would add the flag `--network-interface=eno1`. While an interface is assigned to a container, the host can’t use it at the same time. When the container is completely shut down, it will be available to the host again. - -#### Sharing network interfaces - -For those of us who don’t have spare network devices, there are other options for providing access to the container. One of those is the `--port` flag. This forwards a port on the container to the host. The format is `protocol:host:container`, where protocol is either `tcp` or `udp`, `host` is a valid port number on the host, and `container` is a valid port on the container. You can omit the protocol and specify only `host:container`. I often use something similar to `--port=2222:22`. - -You can enable complete, host-only networking with the `--network-veth` flag, which creates a virtual Ethernet interface between the host and the container. You can also bridge two connections with `--network-bridge`. 
- -### Using systemd components - -If the system in your container has D-Bus, you can use systemd’s provided utilities to control and monitor your container. Debian doesn’t include dbus in the base install. If you want to use it with Debian Jessie, you’ll want to run `apt install dbus`. - -#### machinectl - -To easily manage containers, systemd provides the machinectl utility. Using machinectl, you can log in to a container with machinectl login name, check the status with machinectl status name, reboot with machinectl reboot name, or power it off with machinectl poweroff name. - -### Other systemd commands - -Most systemd commands, such as journalctl, systemd-analyze, and systemctl, support containers with the `--machine` option. For example, if you want to see the journals of a container named “foobar”, you can use journalctl `--machine=foobar`. You can also see the status of a service running in this container with `systemctl --machine=foobar` status service. - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-09-25.png) - -### Working with SELinux - -If you’re running with SELinux enforcing (the default in Fedora), you’ll need to set the SELinux context for your container. To do that, you need to run the following two commands on the host system. - -``` -$ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?" -$ restorecon -R /path/to/container/ -``` - -Make sure you replace “/path/to/container” with the path to your container. For my container, “DebianJessie”, I would run the following: - -``` -$ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?" -$ restorecon -R /home/johnmh/DebianJessie/ -``` - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/ - -作者:[John M. 
Harris, Jr.][a]
-译者:[ChrisLeeGit](https://github.com/chrisleegit)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
-[1]: https://en.wikipedia.org/wiki/Virtual_machine
diff --git a/translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md b/translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md
new file mode 100644
index 0000000000..f47d17556a
--- /dev/null
+++ b/translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md
@@ -0,0 +1,99 @@
+Fedora 中的容器技术:systemd-nspawn
+===
+
+欢迎来到“Fedora 中的容器技术”系列!本文是该系列文章中的第一篇,它将说明你可以怎样使用 Fedora 中各种可用的容器技术。本文将介绍 `systemd-nspawn` 的相关知识。
+
+### 容器是什么?
+一个容器就是一个用户空间实例,它能够在与托管容器的系统(叫做宿主系统)隔离的环境中运行一个程序或者一个操作系统。这和 `chroot` 或 [虚拟机][1] 的思想非常类似。
+运行在容器中的进程是由与宿主操作系统相同的内核来管理的,但它们是与宿主文件系统以及其它进程隔离开的。
+
+
+### 什么是 systemd-nspawn?
+systemd 项目认为应当将容器技术变成桌面的基础部分,并且应当和用户的其余系统集成在一起。为此,systemd 提供了 `systemd-nspawn`,这款工具能够使用多种 Linux 技术创建容器。它也提供了一些容器管理工具。
+
+`systemd-nspawn` 和 `chroot` 在许多方面都是类似的,但是前者更加强大。它虚拟化了文件系统、进程树以及客户系统中的进程间通信。它的吸引力很大程度上在于它提供了很多用于管理容器的工具,例如 `machinectl`。由 `systemd-nspawn` 运行的容器将会与运行在宿主系统上的 systemd 组件集成。举例来说,一个容器的日志可以输出到宿主系统的日志中。
+
+在 Fedora 24 上,`systemd-nspawn` 已经从 systemd 软件包分离出去了,所以你需要安装 `systemd-container` 软件包。一如往常,你可以使用 `dnf install systemd-container` 进行安装。
+
+### 创建容器
+使用 `systemd-nspawn` 创建一个容器是很容易的。假设你有一个专门为 Debian 打造的应用,它无法在其它发行版上正常运行。这不是问题,我们可以创建一个容器!想要用最新版本的 Debian(当前是 Jessie)设置一个容器,你需要挑选一个目录来放置你的系统。我暂时将使用目录 `~/DebianJessie`。
+
+一旦你创建完目录,你需要运行 `debootstrap`,你可以从 Fedora 仓库中安装它。对于 Debian Jessie,可以运行下面的命令来初始化一个 Debian 文件系统。
+
+```
+$ debootstrap --arch=amd64 stable ~/DebianJessie
+```
+
+以上命令假定你的架构是 x86_64。如果不是的话,你必须将 `amd64` 换成你的架构名称。你可以使用 `uname -m` 得知你的机器架构。
+
+一旦设置好你的根目录,你就可以使用下面的命令来启动你的容器。
+
+```
+$ systemd-nspawn -bD ~/DebianJessie
+```
+
+容器将会在数秒后准备好并运行,你一尝试登录就会注意到一件事:你无法在你的系统上使用任何账户。这是因为 `systemd-nspawn` 
虚拟化了用户。修复的方法很简单:将之前命令中的 `-b` 移除即可。你将直接进入容器的 root shell。此时,你可以直接使用 `passwd` 命令为 root 设置密码,或者使用 `adduser` 命令添加一个新用户。一旦设置好密码或添加好用户,你就可以把 `-b` 标志添加回去继续了。你会进入熟悉的登录控制台,然后用设置好的凭证登录进去。
+
+以上对于任意你想在容器中运行的发行版都适用,但前提是你需要使用正确的包管理器创建系统。对于 Fedora,你应使用 DNF 而非 `debootstrap`。想要设置一个最小化的 Fedora 系统,你可以运行下面的命令,记得将绝对路径替换成你希望容器存放的位置。
+
+```
+$ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release
+```
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-04-14.png)
+
+### 设置网络
+如果你尝试启动一个服务,而它绑定的端口正被宿主机使用,你就会注意到这个问题:你的容器正在使用和宿主机相同的网络接口。
+幸运的是,`systemd-nspawn` 提供了几种方法可以将容器网络与宿主机分开。
+
+#### 本地网络
+
+第一种方法是使用 `--private-network` 标志,它默认仅创建一个回环设备。这对于你不需要使用网络的环境是非常理想的,例如构建系统和其它持续集成系统。
+
+#### 多个网络接口
+
+如果你有多个网络设备,你可以使用 `--network-interface` 标志给容器分配一个接口。想要给我的容器分配 `eno1`,我会添加标志 `--network-interface=eno1`。当某个接口分配给一个容器后,宿主机就不能同时使用那个接口了。只有当容器彻底关闭后,宿主机才可以使用那个接口。
+
+
+#### 共享网络接口
+对于我们中那些并没有额外的网络设备的人来说,还有其它方法可以访问容器。一种就是使用 `--port` 标志。这会将容器中的一个端口转发到宿主机。使用格式是 `协议:宿主机:容器`,这里的协议可以是 `tcp` 或者 `udp`,`宿主机` 是宿主机的一个合法端口,`容器` 则是容器中的一个合法端口。你可以省略协议,只指定 `宿主机:容器`。我通常的用法类似 `--port=2222:22`。
+
+你可以使用 `--network-veth` 启用完全的、仅宿主机模式的网络,这会在宿主机和容器之间创建一个虚拟的网络接口。你也可以使用 `--network-bridge` 桥接二者的连接。
+
+### 使用 systemd 组件
+如果你容器中的系统含有 D-Bus,你可以使用 systemd 提供的实用工具来控制并监视你的容器。基础安装的 Debian 并不包含 `dbus`。如果你想在 Debian Jessie 中使用 `dbus`,你需要运行命令 `apt install dbus`。
+
+#### machinectl
+为了能够轻松地管理容器,systemd 提供了 `machinectl` 实用工具。你可以使用 `machinectl login name` 登录到一个容器中、使用 `machinectl status name` 检查状态、使用 `machinectl reboot name` 重启容器,或者使用 `machinectl poweroff name` 关闭容器。
+
+### 其它 systemd 命令
+多数 systemd 命令,例如 `journalctl`、`systemd-analyze` 和 `systemctl`,都支持用 `--machine` 选项来指定容器。例如,如果你想查看一个名为 "foobar" 的容器的日志,你可以使用 `journalctl --machine=foobar`。你也可以使用 `systemctl --machine=foobar status service` 来查看运行在这个容器中的服务状态。
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-09-25.png)
+
+### 和 SELinux 一起工作 
+如果你要使用 SELinux 强制模式(Fedora 的默认模式),你需要为你的容器设置 SELinux 安全上下文。要这么做,你需要在宿主系统上运行下面两行命令。
+
+```
+$ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?"
+$ restorecon -R /path/to/container/
+```
+确保将 "/path/to/container" 替换为你的容器路径。对于我的容器 "DebianJessie",我会运行下面的命令:
+
+```
+$ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?"
+$ restorecon -R /home/johnmh/DebianJessie/
+```
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/container-technologies-fedora-systemd-nspawn/
+
+作者:[John M. Harris, Jr.][a]
+译者:[ChrisLeeGit](https://github.com/chrisleegit)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/container-technologies-fedora-systemd-nspawn/
+[1]: https://en.wikipedia.org/wiki/Virtual_machine
From a10295803858cb583d50738335458f194fb47a99 Mon Sep 17 00:00:00 2001
From: Locez
Date: Wed, 27 Jul 2016 21:34:49 +0800
Subject: [PATCH 247/471] translated and delete source

---
 ...60722 7 Best Markdown Editors for Linux.md | 156 ------------------
 1 file changed, 156 deletions(-)
 delete mode 100644 sources/tech/20160722 7 Best Markdown Editors for Linux.md

diff --git a/sources/tech/20160722 7 Best Markdown Editors for Linux.md b/sources/tech/20160722 7 Best Markdown Editors for Linux.md
deleted file mode 100644
index b4b05a41f5..0000000000
--- a/sources/tech/20160722 7 Best Markdown Editors for Linux.md
+++ /dev/null
@@ -1,156 +0,0 @@
-translating by Locez
-7 Best Markdown Editors for Linux
-======================================
-
-In this article, we shall review some of the best Markdown editors you can install and use on your Linux desktop. There are numerous Markdown editors you can find for Linux but here, we want to unveil possibly the best you may choose to work with. 
- -![](http://www.tecmint.com/wp-content/uploads/2016/07/Best-Linux-Markdown-Editors.png) ->Best Linux Markdown Editors - -For starters, Markdown is a simple and lightweight tool written in Perl, that enables users to write plain text format and covert it to valid HTML (or XHTML). It is literally an easy-to-read, easy-to-write plain text language and a software tool for text-to-HTML conversion. - -Hoping that you have a slight understanding of what Markdown is, let us proceed to list the editors. - -### 1. Atom - -Atom is a modern, cross-platform, open-source and very powerful text editor that can work on Linux, Windows and Mac OS X operating systems. Users can customize it down to its base, minus altering any configuration files. - -It is designed with some illustrious features and these include: - -- Comes with a built-in package manager -- Smart auto-completion functionality -- Offers multiple panes -- Supports find and replace functionality -- Includes a file system browser -- Easily customizable themes -- Highly extensible using open-source packages and many more - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Atom-Markdown-Editor-for-Linux.png) - -Visit Homepage: - -### 2. GNU Emacs - -Emacs is one of the popular open-source text editors you can find on the Linux platform today. It is a great editor for Markdown language, which is highly extensible and customizable. 
- -It’s comprehensively developed with the following amazing features: - -- Comes with an extensive built-in documentation including tutorials for beginners -- Full Unicode support for probably all human scripts -- Supports content-aware text-editing modes -- Includes syntax coloring for multiple file types -- Its highly customizable using Emacs Lisp code or GUI -- Offers a packaging system for downloading and installing various extensions plus so much more - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Emacs-Markdown-Editor-for-Linux.png) ->Emacs Markdown Editor for Linux - -Visit Homepage: - -### 3. Remarkable - -Remarkable is possibly the best Markdown editor you can find on Linux, it also works on Windows operating system. It is indeed a remarkable and fully featured Markdown editor that offers users some exciting features. - -Some of its remarkable features include: - -- Supports live preview -- Supports exporting to PDF and HTML -- Also offers Github Markdown -- Supports custom CSS -- It also supports syntax highlighting -- Offers keyboard shortcuts -- Highly customizable plus and many more - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Remarkable-Markdown-Editor-for-Linux.png) ->Remarkable Markdown Editor for Linux - -Visit Homepage: - -### 4. Haroopad - -Haroopad is an extensively built, cross-platform Markdown document processor for Linux, Windows and Mac OS X. It enables users to write expert-level documents of numerous formats including email, reports, blogs, presentations, blog posts and many more. 
- -It is fully featured with the following notable features: - -- Easily imports content -- Also exports to numerous formats -- Broadly supports blogging and mailing -- Supports several mathematical expressions -- Supports Github flavored Markdown and extensions -- Offers users some exciting themes, skins and UI components plus so much more - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Haroopad-Markdown-Editor-for-Linux.png) ->Haroopad Markdown Editor for Linux - -Visit Homepage: - -### 5. ReText - -ReText is a simple, lightweight and powerful Markdown editor for Linux and several other POSIX-compatible operating systems. It also doubles as a reStructuredText editor, and has the following attributes: - -- Simple and intuitive GUI -- It is highly customizable, users can customize file syntax and configuration options -- Also supports several color schemes -- Supports use of multiple mathematical formulas -- Enables export extensions and many more - -![](http://www.tecmint.com/wp-content/uploads/2016/07/ReText-Markdown-Editor-for-Linux.png) ->ReText Markdown Editor for Linux - -Visit Homepage: - -### 6. UberWriter - -UberWriter is a simple and easy-to-use Markdown editor for Linux, it’s development was highly influenced by iA writer for Mac OS X. It is also feature rich with these remarkable features: - -- Uses pandoc to perform all text-to-HTML conversions -- Offers a clean UI -- Offers a distraction free mode, highlighting a users last sentence -- Supports spellcheck -- Also supports full screen mode -- Supports exporting to PDF, HTML and RTF using pandoc -- Enables syntax highlighting and mathematical functions plus many more - -![](http://www.tecmint.com/wp-content/uploads/2016/07/UberWriter-Markdown-Editor-for-Linux.png) ->UberWriter Markdown Editor for Linux - -Visit Homepage: - -### 7. Mark My Words - -Mark My Words is a also lightweight yet powerful Markdown editor. 
It’s a relatively new editor, therefore offers a handful of features including syntax highlighting, simple and intuitive GUI. - -The following are some of the awesome features yet to be bundled into the application: - -- Live preview support -- Markdown parsing and file IO -- State management -- Support for exporting to PDF and HTML -- Monitoring files for changes -- Support for preferences - -![](http://www.tecmint.com/wp-content/uploads/2016/07/MarkMyWords-Markdown-Editor-for-Linux.png) ->MarkMyWords Markdown Editor for-Linux - -Visit Homepage: - -### Conclusion - -Having walked through the list above, you probably know what Markdown editors and document processors to download and install on your Linux desktop for now. - -Note that what we consider to be the best here may reasonably not be the best for you, therefore, you can reveal to us exciting Markdown editors that you think are missing in the list and have earned the right to be mentioned here by sharing your thoughts via the feedback section below. 
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/best-markdown-editors-for-linux/ - -作者:[Aaron Kili |][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ - - From 4bbcd800086bb809f2aad07b403f3378d3ef229b Mon Sep 17 00:00:00 2001 From: Locez Date: Wed, 27 Jul 2016 21:38:07 +0800 Subject: [PATCH 248/471] [translated] 7 Best Markdown Editors for Linux.md --- .../tech/7 Best Markdown Editors for Linux.md | 158 ++++++++++++++++++ 1 file changed, 158 insertions(+) create mode 100644 translated/tech/7 Best Markdown Editors for Linux.md diff --git a/translated/tech/7 Best Markdown Editors for Linux.md b/translated/tech/7 Best Markdown Editors for Linux.md new file mode 100644 index 0000000000..8cf7cdd45f --- /dev/null +++ b/translated/tech/7 Best Markdown Editors for Linux.md @@ -0,0 +1,158 @@ +Linux 上 7 个最好的 Markdown 编辑器 +====================================== + +在这篇文章中,我们会重温一些可以在 Linux 上安装使用的最好的 Markdown 编辑器。 你可以找到非常多的 Linux 平台上的 Markdown 编辑器,但是在这里我们将尽可能地推荐你选择最好的。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Best-Linux-Markdown-Editors.png) +>Best Linux Markdown Editors + + +首先,Markdown 是一个用 Perl 写的简单、轻量的工具。它可以将用户写的纯文本转为可用的 HTML (或 XHTML)。它实际上是一门易读,易写的纯文本语言,以及一个用于将文本转为 HTML 的转换工具。 + + +希望你先对 Markdown 有一个稍微的了解,接下来让我们逐一列出这些编辑器。 +### 1. Atom + + + +Atom 是一个现代的,跨平台,开源且强有力的文本编辑器,它可以运行在 Linux, Windows 和 MAC OS X 等操作系统上。用户可以在它的基础上进行定制,删减修改任何配置文件。 + +它包含了一些非常杰出的特性: + + +- 内置软件包管理器 +- 智能自动补全功能 +- 提供多窗口操作 +- 支持查找替换功能 +- 包含一个文件系统浏览器 +- 轻松自定义主题 +- 开源、高度扩展性的软件包等 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Atom-Markdown-Editor-for-Linux.png) + + +访问主页: + +### 2. 
GNU Emacs
+
+Emacs 是 Linux 平台上一款的流行文本编辑器。它是一个非常棒的,具备高扩展性和定制性的 Markdown 语言编辑器。
+
+
+它综合了以下这些神奇的特性:
+
+- 带有丰富的内置文档,包括适合初学者的教程
+- 有完整的 UnicodeU支持,可显示所有的人类符号
+- 支持内容识别的文本编辑模式
+- 包括多种文件类型的语法高亮
+- 可用 Emacs Lisp 或 GUI 对其进行高度定制
+- 提供了一个包系统可用来下载安装各种扩展等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Emacs-Markdown-Editor-for-Linux.png)
+>Emacs Markdown Editor for Linux
+
+访问主页: 
+
+### 3. Remarkable
+
+Remarkable 可能是 Linux 上最好的 Markdown 编辑器了,它也适用于 Windows 操作系统。它的确是一个卓越且功能齐全的 Markdown 编辑器,为用户提供了一些令人兴奋的特性。
+
+一些卓越的特性:
+
+- 支持实时预览
+- 支持导出 PDF 和 HTML
+- 支持 Github Markdown 语法
+- 支持定制 CSS
+- 支持语法高亮
+- 提供键盘快捷键
+- 高可定制性和其他
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Remarkable-Markdown-Editor-for-Linux.png)
+>Remarkable Markdown Editor for Linux
+
+访问主页: 
+
+### 4. Haroopad
+
+Haroopad 是为 Linux、Windows 和 Mac OS X 构建的跨平台 Markdown 文档处理程序。用户可以用它来书写许多专家级格式的文档,包括电子邮件、报告、博客、演示文稿和博客文章等等。
+
+
+功能齐全且具备以下的亮点:
+
+- 轻松导入内容
+- 支持导出多种格式
+- 广泛支持博客和邮件
+- 支持许多数学表达式
+- 支持 Github Markdown 扩展
+- 为用户提供了一些令人兴奋的主题、皮肤和 UI 组件等等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Haroopad-Markdown-Editor-for-Linux.png)
+>Haroopad Markdown Editor for Linux
+
+访问主页: 
+### 5. ReText
+
+ReText 是为 Linux 和其它几个 POSIX 兼容操作系统提供的简单、轻量、强大的 Markdown 编辑器。它还可以作为一个 reStructuredText 编辑器,并且具有以下的特性:
+
+- 简单直观的 GUI
+- 具备高定制性,用户可以自定义语法文件和配置选项
+- 支持多种配色方案
+- 支持使用多种数学公式
+- 支持导出扩展等等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/ReText-Markdown-Editor-for-Linux.png)
+>ReText Markdown Editor for Linux
+
+访问主页: 
+### 6. UberWriter
+
+UberWriter 是一个简单、易用的 Linux Markdown 编辑器。它的开发受 Mac OS X 上的 iA writer 影响很大,同样它也具备这些卓越的特性:
+
+- 使用 pandoc 完成所有文本到 HTML 的转换
+- 提供了一个简洁的 UI 界面
+- 提供了一种无干扰写作模式,会高亮显示用户的最后一句话
+- 支持拼写检查
+- 支持全屏模式
+- 支持用 pandoc 导出 PDF、HTML 和 RTF
+- 支持语法高亮和数学函数等等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/UberWriter-Markdown-Editor-for-Linux.png)
+>UberWriter Markdown Editor for Linux
+
+访问主页: 
+### 7. 
Mark My Words
+
+Mark My Words 同样也是一个轻量、强大的 Markdown 编辑器。它是一个相对较新的编辑器,因此目前只提供了为数不多的几个功能,包括语法高亮、简洁直观的 UI 等。
+
+
+下面是一些很棒、但尚未集成到应用中的功能:
+
+- 实时预览
+- Markdown 解析和文件 IO
+- 状态管理
+- 支持导出 PDF 和 HTML
+- 监测文件的修改
+- 支持首选项
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/MarkMyWords-Markdown-Editor-for-Linux.png)
+>MarkMyWords Markdown Editor for Linux
+
+访问主页: 
+
+### 结论
+
+通过上面的列表,你大概已经知道要为你的 Linux 桌面下载、安装什么样的 Markdown 编辑器和文档处理程序了。
+
+请注意,我们认为最好的编辑器,对你来说未必就是最好的选择。因此,你可以通过下面的反馈部分,告诉我们那些你认为值得列入这个列表、但我们未提及的优秀 Markdown 编辑器。
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/best-markdown-editors-for-linux/
+
+作者:[Aaron Kili][a]
+译者:[Locez](https://github.com/locez)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
From 6553b0e28615ddd39e2f8b16a87230a3ed05c1c8 Mon Sep 17 00:00:00 2001
From: Locez
Date: Wed, 27 Jul 2016 21:40:28 +0800
Subject: [PATCH 249/471] Rename 7 Best Markdown Editors for Linux.md to 20160722 7 Best Markdown Editors for Linux.md

---
 ...for Linux.md => 20160722 7 Best Markdown Editors for Linux.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename translated/tech/{7 Best Markdown Editors for Linux.md => 20160722 7 Best Markdown Editors for Linux.md} (100%)

diff --git a/translated/tech/7 Best Markdown Editors for Linux.md b/translated/tech/20160722 7 Best Markdown Editors for Linux.md
similarity index 100%
rename from translated/tech/7 Best Markdown Editors for Linux.md
rename to translated/tech/20160722 7 Best Markdown Editors for Linux.md
From e896363804f9995a08c681206cef2c15d77fec99 Mon Sep 17 00:00:00 2001
From: wxy
Date: Wed, 27 Jul 2016 22:27:56 +0800
Subject: [PATCH 250/471] PUB:20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU @locez

---
 ...O CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 22 ++++++++++---------
 1 file changed, 12 
insertions(+), 10 deletions(-) rename {translated/tech => published}/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md (81%) diff --git a/translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/published/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md similarity index 81% rename from translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md rename to published/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md index 2192b71600..b485632d6c 100644 --- a/translated/tech/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md +++ b/published/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md @@ -1,15 +1,16 @@ -怎样在 UBUNTU 中修改默认程序 +怎样在 Ubuntu 中修改默认程序 ============================================== ![](https://itsfoss.com/wp-content/uploads/2016/07/change-default-applications-ubuntu.jpg) -简介: 这个新手指南会向你展示如何在 Ubuntu Linux 中修改默认程序 +> 简介: 这个新手指南会向你展示如何在 Ubuntu Linux 中修改默认程序 对于我来说,安装 [VLC 多媒体播放器][1]是[安装完 Ubuntu 16.04 该做的事][2]中最先做的几件事之一。为了能够使我双击一个视频就用 VLC 打开,在我安装完 VLC 之后我会设置它为默认程序。 作为一个新手,你需要知道如何在 Ubuntu 中修改任何默认程序,这也是我今天在这篇指南中所要讲的。 ### 在 UBUNTU 中修改默认程序 + 这里提及的方法适用于所有的 Ubuntu 12.04,Ubuntu 14.04 和Ubuntu 16.04。在 Ubuntu 中,这里有两种基本的方法可以修改默认程序: - 通过系统设置 @@ -25,43 +26,44 @@ ![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-detail-ubuntu.jpeg) - 在左边的面板中选择默认程序(Default Applications),你会发现在右边的面板中可以修改默认程序。 ![](https://itsfoss.com/wp-content/uploads/2016/07/System-settings-default-applications.jpeg) -正如看到的那样,这里只有少数几类的默认程序可以被改变。你可以在这里改变浏览器,邮箱客户端,日历,音乐,视频和相册的默认程序。那其他类型的默认程序怎么修改? +正如看到的那样,这里只有少数几类的默认程序可以被改变。你可以在这里改变浏览器、邮箱客户端、日历、音乐、视频和相册的默认程序。那其他类型的默认程序怎么修改? 
不要担心,为了修改其他类型的默认程序,我们会用到右键菜单。 #### 2.通过右键菜单修改默认程序 - 如果你使用过 Windows 系统,你应该看见过右键菜单的“打开方式”,可以通过这个来修改默认程序。我们在 Ubuntu 中也有相似的方法。 右键一个还没有设置默认打开程序的文件,选择“属性(properties)” ![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) ->从右键菜单中选择属性 + +*从右键菜单中选择属性* 在这里,你可以选择使用什么程序打开,并且设置为默认程序。 ![](https://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) ->在 Ubuntu 中设置打开 WebP 图片的默认程序为 gThumb + +*在 Ubuntu 中设置打开 WebP 图片的默认程序为 gThumb* 小菜一碟不是么?一旦你做完这些,所有同样类型的文件都会用你选择的默认程序打开。 + 我很希望这个新手指南对你在修改 Ubuntu 的默认程序时有帮助。如果你有任何的疑问或者建议,可以随时在下面评论。 -------------------------------------------------------------------------------- -via: https://itsfoss.com/change-default-applications-ubuntu/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 +via: https://itsfoss.com/change-default-applications-ubuntu/ 作者:[Abhishek Prakash][a] 译者:[Locez](https://github.com/locez) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/abhishek/ [1]: http://www.videolan.org/vlc/index.html -[2]: https://itsfoss.com/things-to-do-after-installing-ubuntu-16-04/ +[2]: https://linux.cn/article-7453-1.html From 89182f569304b02318986dc0bedf1db13f6d94ad Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Wed, 27 Jul 2016 22:56:09 +0800 Subject: [PATCH 251/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E8=AF=B7?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160630 What makes up the Fedora kernel.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160630 What makes up the Fedora kernel.md b/sources/tech/20160630 What makes up the Fedora kernel.md index 95b61a201a..603aefa575 100644 --- a/sources/tech/20160630 What makes up the Fedora kernel.md +++ b/sources/tech/20160630 What makes up the Fedora kernel.md 
@@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + What makes up the Fedora kernel? ==================================== @@ -22,7 +24,7 @@ The Fedora kernel contains code from many places. All of it is necessary to give via: https://fedoramagazine.org/makes-fedora-kernel/ 作者:[Laura Abbott][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ffef0b3a3d2629298a21cb3b75422fb8fad419b2 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 27 Jul 2016 23:22:16 +0800 Subject: [PATCH 252/471] PUB:20160722 7 Best Markdown Editors for Linux @locez --- ...60722 7 Best Markdown Editors for Linux.md | 96 +++++++++++++------ 1 file changed, 69 insertions(+), 27 deletions(-) rename {translated/tech => published}/20160722 7 Best Markdown Editors for Linux.md (59%) diff --git a/translated/tech/20160722 7 Best Markdown Editors for Linux.md b/published/20160722 7 Best Markdown Editors for Linux.md similarity index 59% rename from translated/tech/20160722 7 Best Markdown Editors for Linux.md rename to published/20160722 7 Best Markdown Editors for Linux.md index 8cf7cdd45f..7d1ffe0c86 100644 --- a/translated/tech/20160722 7 Best Markdown Editors for Linux.md +++ b/published/20160722 7 Best Markdown Editors for Linux.md @@ -1,25 +1,22 @@ -Linux 上 7 个最好的 Markdown 编辑器 +Linux 上 10 个最好的 Markdown 编辑器 ====================================== -在这篇文章中,我们会重温一些可以在 Linux 上安装使用的最好的 Markdown 编辑器。 你可以找到非常多的 Linux 平台上的 Markdown 编辑器,但是在这里我们将尽可能地推荐你选择最好的。 +在这篇文章中,我们会点评一些可以在 Linux 上安装使用的最好的 Markdown 编辑器。 你可以找到非常多的 Linux 平台上的 Markdown 编辑器,但是在这里我们将尽可能地为您推荐那些最好的。 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Best-Linux-Markdown-Editors.png) ->Best Linux Markdown Editors +*Best Linux Markdown Editors* -首先,Markdown 是一个用 Perl 写的简单、轻量的工具。它可以将用户写的纯文本转为可用的 HTML (或 XHTML)。它实际上是一门易读,易写的纯文本语言,以及一个用于将文本转为 HTML 的转换工具。 - +对于不了解 Markdown 
的人做个简单介绍,Markdown 是由著名的 Aaron Swartz 和 John Gruber 发明的标记语言,其最初的解析器是一个用 Perl 写的简单、轻量的[同名工具][1]。它可以将用户写的纯文本转为可用的 HTML(或 XHTML)。它实际上是一门易读,易写的纯文本语言,以及一个用于将文本转为 HTML 的转换工具。 希望你先对 Markdown 有一个稍微的了解,接下来让我们逐一列出这些编辑器。 + ### 1. Atom - - -Atom 是一个现代的,跨平台,开源且强有力的文本编辑器,它可以运行在 Linux, Windows 和 MAC OS X 等操作系统上。用户可以在它的基础上进行定制,删减修改任何配置文件。 +Atom 是一个现代的、跨平台、开源且强大的文本编辑器,它可以运行在 Linux、Windows 和 MAC OS X 等操作系统上。用户可以在它的基础上进行定制,删减修改任何配置文件。 它包含了一些非常杰出的特性: - - 内置软件包管理器 - 智能自动补全功能 - 提供多窗口操作 @@ -30,31 +27,32 @@ Atom 是一个现代的,跨平台,开源且强有力的文本编辑器,它 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Atom-Markdown-Editor-for-Linux.png) +*Atom Markdown Editor for Linux* 访问主页: ### 2. GNU Emacs -Emacs 是 Linux 平台上一款的流行文本编辑器。它是一个非常棒的,具备高扩展性和定制性的 Markdown 语言编辑器。 - +Emacs 是 Linux 平台上一款的流行文本编辑器。它是一个非常棒的、具备高扩展性和定制性的 Markdown 语言编辑器。 它综合了以下这些神奇的特性: - 带有丰富的内置文档,包括适合初学者的教程 -- 有完整的 UnicodeU支持,可显示所有的人类符号 +- 有完整的 Unicode 支持,可显示所有的人类符号 - 支持内容识别的文本编辑模式 - 包括多种文件类型的语法高亮 - 可用 Emacs Lisp 或 GUI 对其进行高度定制 - 提供了一个包系统可用来下载安装各种扩展等 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Emacs-Markdown-Editor-for-Linux.png) ->Emacs Markdown Editor for Linux + +*Emacs Markdown Editor for Linux* 访问主页: ### 3. 
Remarkable -Remarkable 可能是 Linux 上最好的 Markdown 编辑器了,它也适用于 Windows 操作系统。它的确是是一个卓越且功能齐全的 Markdown 编辑器,为用户提供了一些令人兴奋的特性。 +Remarkable 可能是 Linux 上最好的 Markdown 编辑器了,它也适用于 Windows 操作系统。它的确是是一个卓越且功能齐全的 Markdown 编辑器,为用户提供了一些令人激动的特性。 一些卓越的特性: @@ -67,7 +65,8 @@ Remarkable 可能是 Linux 上最好的 Markdown 编辑器了,它也适用于 - 高可定制性和其他 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Remarkable-Markdown-Editor-for-Linux.png) ->Remarkable Markdown Editor for Linux + +*Remarkable Markdown Editor for Linux* 访问主页: @@ -75,7 +74,6 @@ Remarkable 可能是 Linux 上最好的 Markdown 编辑器了,它也适用于 Haroopad 是为 Linux,Windows 和 Mac OS X 构建的跨平台 Markdown 文档处理程序。用户可以用它来书写许多专家级格式的文档,包括电子邮件、报告、博客、演示文稿和博客文章等等。 - 功能齐全且具备以下的亮点: - 轻松导入内容 @@ -86,9 +84,11 @@ Haroopad 是为 Linux,Windows 和 Mac OS X 构建的跨平台 Markdown 文档 - 为用户提供了一些令人兴奋的主题、皮肤和 UI 组件等等 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Haroopad-Markdown-Editor-for-Linux.png) ->Haroopad Markdown Editor for Linux + +*Haroopad Markdown Editor for Linux* 访问主页: + ### 5. ReText ReText 是为 Linux 和其它几个 POSIX 兼容操作系统提供的简单、轻量、强大的 Markdown 编辑器。它还可以作为一个 reStructuredText 编辑器,并且具有以下的特性: @@ -100,29 +100,32 @@ ReText 是为 Linux 和其它几个 POSIX 兼容操作系统提供的简单、 - 启用导出扩展等等 ![](http://www.tecmint.com/wp-content/uploads/2016/07/ReText-Markdown-Editor-for-Linux.png) ->ReText Markdown Editor for Linux + +*ReText Markdown Editor for Linux* 访问主页: + ### 6. UberWriter UberWriter 是一个简单、易用的 Linux Markdown 编辑器。它的开发受 Mac OS X 上的 iA writer 影响很大,同样它也具备这些卓越的特性: -- 使用 pandoc 将所有的文本转为 HTML +- 使用 pandoc 进行所有的文本到 HTML 的转换 - 提供了一个简洁的 UI 界面 -- 提供了一种自由的分发模式,高亮用户的最后一句话 +- 提供了一种专心(distraction free)模式,高亮用户最后的句子 - 支持拼写检查 - 支持全屏模式 - 支持用 pandoc 导出 PDF、HTML 和 RTF - 启用语法高亮和数学函数等等 ![](http://www.tecmint.com/wp-content/uploads/2016/07/UberWriter-Markdown-Editor-for-Linux.png) ->UberWriter Markdown Editor for Linux + +*UberWriter Markdown Editor for Linux* 访问主页: + ### 7. 
Mark My Words -Mark My Words 同样也是一个轻量、强大的 Markdown 编辑器。它是一个相对比较新的编辑器,因此提供了大量的功能,包含语法高亮、简单和直观的 UI 等。 - +Mark My Words 同样也是一个轻量、强大的 Markdown 编辑器。它是一个相对比较新的编辑器,因此提供了包含语法高亮在内的大量的功能,简单和直观的 UI。 下面是一些棒极了,但还未捆绑到应用中的功能: @@ -131,13 +134,47 @@ Mark My Words 同样也是一个轻量、强大的 Markdown 编辑器。它是 - 状态管理 - 支持导出 PDF 和 HTML - 监测文件的修改 -- 支持首选项 +- 支持首选项设置 ![](http://www.tecmint.com/wp-content/uploads/2016/07/MarkMyWords-Markdown-Editor-for-Linux.png) ->MarkMyWords Markdown Editor for-Linux + +*MarkMyWords Markdown Editor for-Linux* 访问主页: +### 8. Vim-Instant-Markdown 插件 + +Vim 是 Linux 上的一个久经考验的强大、流行而开源的文本编辑器。它用于编程极棒。它也高度支持插件功能,可以让用户为其增加一些其它功能,包括 Markdown 预览。 + +有好几种 Vim 的 Markdown 预览插件,但是 [Vim-Instant-Markdown][2] 的表现最佳。 + +###9. Bracket-MarkdownPreview 插件 + +Brackets 是一个现代、轻量、开源且跨平台的文本编辑器。它特别为 Web 设计和开发而构建。它的一些重要功能包括:支持内联编辑器、实时预览、预处理支持及更多。 + +它也是通过插件高度可扩展的,你可以使用 [Bracket-MarkdownPreview][3] 插件来编写和预览 Markdown 文档。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Brackets-Markdown-Plugin.png) + +*Brackets Markdown Plugin Preview* + +### 10. 
SublimeText-Markdown 插件 + +Sublime Text 是一个精心打造的、流行的、跨平台文本编辑器,用于代码、markdown 和普通文本。它的表现极佳,包括如下令人兴奋的功能: + +- 简洁而美观的 GUI +- 支持多重选择 +- 提供专心模式 +- 支持窗体分割编辑 +- 通过 Python 插件 API 支持高度插件化 +- 完全可定制化,提供命令查找模式 + +[SublimeText-Markdown][4] 插件是一个支持格式高亮的软件包,带有一些漂亮的颜色方案。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/SublimeText-Markdown-Plugin-Preview.png) + +*SublimeText Markdown Plugin Preview* + ### 结论 通过上面的列表,你大概已经知道要为你的 Linux 桌面下载、安装什么样的 Markdown 编辑器和文档处理程序了。 @@ -149,10 +186,15 @@ Mark My Words 同样也是一个轻量、强大的 Markdown 编辑器。它是 via: http://www.tecmint.com/best-markdown-editors-for-linux/ -作者:[Aaron Kili |][a] +作者:[Aaron Kili][a] 译者:[Locez](https://github.com/locez) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: http://www.tecmint.com/author/aaronkili/ +[1]: https://daringfireball.net/projects/markdown/ +[2]: https://github.com/suan/vim-instant-markdown +[3]: https://github.com/gruehle/MarkdownPreview +[4]: https://github.com/SublimeText-Markdown/MarkdownEditing + From 090244d91cd3d38f2be5d62ff01961ad25359e1d Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 28 Jul 2016 09:35:02 +0800 Subject: [PATCH 253/471] =?UTF-8?q?20160728-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ic Expressions and Assignment Operators.md | 271 ++++++++++++++++++ 1 file changed, 271 insertions(+) create mode 100644 sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md diff --git a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md new file mode 100644 index 0000000000..be536df74f --- /dev/null +++ b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, 
Numeric Expressions and Assignment Operators.md
@@ -0,0 +1,271 @@
+Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators – part8
+=======================================================================================
+
+The [Awk command series][1] is getting exciting, I believe; in the previous seven parts, we walked through some fundamentals of Awk that you need to master to enable you to perform some basic text or string filtering in Linux.
+
+Starting with this part, we shall dive into advanced areas of Awk to handle more complex text or string filtering operations. Therefore, we are going to cover Awk features such as variables, numeric expressions and assignment operators.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Variables-Numeric-Expressions-Assignment-Operators.png)
+>Learn Awk Variables, Numeric Expressions and Assignment Operators
+
+These concepts are not substantially different from the ones you have probably encountered in many programming languages before, such as shell, C, Python and many others, so there is no need to worry much about this topic; we are simply revising the common ideas of using these features.
+
+This will probably be one of the easiest Awk command sections to understand, so sit back and let's get going.
+
+### 1. Awk Variables
+
+In any programming language, a variable is a placeholder which stores a value. When you create a variable in a program file, as the file is executed, some space is created in memory that will store the value you specify for the variable.
+ +You can define Awk variables in the same way you define shell variables as follows: + +``` +variable_name=value +``` + +In the syntax above: + +- `variable_name`: is the name you give a variable +- `value`: the value stored in the variable + +Let’s look at some examples below: + +``` +computer_name=”tecmint.com” +port_no=”22” +email=”admin@tecmint.com” +server=”computer_name” +``` + +Take a look at the simple examples above, in the first variable definition, the value `tecmint.com` is assigned to the variable `computer_name`. + +Furthermore, the value 22 is assigned to the variable port_no, it is also possible to assign the value of one variable to another variable as in the last example where we assigned the value of computer_name to the variable server. + +If you can recall, right from [part 2 of this Awk series][2] were we covered field editing, we talked about how Awk divides input lines into fields and uses standard field access operator, $ to read the different fields that have been parsed. We can also use variables to store the values of fields as follows. + +``` +first_name=$2 +second_name=$3 +``` + +In the examples above, the value of first_name is set to second field and second_name is set to the third field. + +As an illustration, consider a file named names.txt which contains a list of an application’s users indicating their first and last names plus gender. 
Using the [cat command][3], we can view the contents of the file as follows: + +``` +$ cat names.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Content-Using-cat-Command.png) +>List File Content Using cat Command + +Then, we can also use the variables first_name and second_name to store the first and second names of the first user on the list as by running the Awk command below: + +``` +$ awk '/Aaron/{ first_name=$2 ; second_name=$3 ; print first_name, second_name ; }' names.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Variables-Using-Awk-Command.png) +>Store Variables Using Awk Command + +Let us also take a look at another case, when you issue the command `uname -a` on your terminal, it prints out all your system information. + +The second field contains your `hostname`, therefore we can store the hostname in a variable called hostname and print it using Awk as follows: + +``` +$ uname -a +$ uname -a | awk '{hostname=$2 ; print hostname ; }' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Command-Output-to-Variable-Using-Awk.png) +>Store Command Output to Variable Using Awk + +### 2. Numeric Expressions + +In Awk, numeric expressions are built using the following numeric operators: + +- `*` : multiplication operator +- `+` : addition operator +- `/` : division operator +- `-` : subtraction operator +- `%` : modulus operator +- `^` : exponentiation operator + +The syntax for a numeric expressions is: + +``` +$ operand1 operator operand2 +``` + +In the form above, operand1 and operand2 can be numbers or variable names, and operator is any of the operators above. + +Below are some examples to demonstrate how to build numeric expressions: + +``` +counter=0 +num1=5 +num2=10 +num3=num2-num1 +counter=counter+1 +``` + +To understand the use of numeric expressions in Awk, we shall consider the following example below, with the file domains.txt which contains all domains owned by Tecmint. 
+
+```
+news.tecmint.com
+tecmint.com
+linuxsay.com
+windows.tecmint.com
+tecmint.com
+news.tecmint.com
+tecmint.com
+linuxsay.com
+tecmint.com
+news.tecmint.com
+tecmint.com
+linuxsay.com
+windows.tecmint.com
+tecmint.com
+```
+
+To view the contents of the file, use the command below:
+
+```
+$ cat domains.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png)
+>View Contents of File
+
+If we want to count the number of times the domain tecmint.com appears in the file, we can write a simple script to do that as follows:
+
+```
+#!/bin/bash
+for file in $@; do
+if [ -f $file ] ; then
+#print out filename
+echo "File is: $file"
+#print a number incrementally for every line containing tecmint.com
+awk '/^tecmint.com/ { counter=counter+1 ; printf "%s\n", counter ; }' $file
+else
+#print error info in case input is not a file
+echo "$file is not a file, please specify a file." >&2 && exit 1
+fi
+done
+#terminate script with exit code 0 in case of successful execution
+exit 0
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Shell-Script-to-Count-a-String-in-File.png)
+>Shell Script to Count a String or Text in File
+
+After creating the script, save it and make it executable. When we run it with the file domains.txt as our input, we get the following output:
+
+```
+$ ./script.sh ~/domains.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-To-Count-String.png)
+>Script to Count String or Text
+
+From the output of the script, there are 6 lines in the file domains.txt which contain tecmint.com; to confirm that, you can manually count them.
+
+### 3.
Assignment Operators + +The last Awk feature we shall cover is assignment operators, there are several assignment operators in Awk and these include the following: + +- `*=` : multiplication assignment operator +- `+=` : addition assignment operator +- `/=` : division assignment operator +- `-=` : subtraction assignment operator +- `%=` : modulus assignment operator +- `^=` : exponentiation assignment operator + +The simplest syntax of an assignment operation in Awk is as follows: + +``` +$ variable_name=variable_name operator operand +``` + +Examples: + +``` +counter=0 +counter=counter+1 +num=20 +num=num-1 +``` + +You can use the assignment operators above to shorten assignment operations in Awk, consider the previous examples, we could perform the assignment in the following form: + +``` +variable_name operator=operand +counter=0 +counter+=1 +num=20 +num-=1 +``` + +Therefore, we can alter the Awk command in the shell script we just wrote above using += assignment operator as follows: + +``` +#!/bin/bash +for file in $@; do +if [ -f $file ] ; then +#print out filename +echo "File is: $file" +#print a number incrementally for every line containing tecmint.com +awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file +else +#print error info incase input is not a file +echo "$file is not a file, please specify a file." >&2 && exit 1 +fi +done +#terminate script with exit code 0 in case of successful execution +exit 0 +``` + + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Alter-Shell-Script.png) +>Alter Shell Script + +In this segment of the [Awk series][4], we covered some powerful Awk features, that is variables, building numeric expressions and using assignment operators, plus some few illustrations of how we can actually use them. + +These concepts are not any different from the one in other programming languages but there may be some significant distinctions under Awk programming. 
+ +In part 9, we shall look at more Awk features that is special patterns: BEGIN and END. Until then, stay connected to Tecmint. + + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/learn-awk-variables-numeric-expressions-and-assignment-operators/ + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ + + + + + + + + + + + + + + + + + + + +[1]: http://www.tecmint.com/category/awk-command/ +[2]: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/ +[3]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ +[4]: http://www.tecmint.com/category/awk-command/ From 3e3543ad84824a7684ebb75023f4371533738a78 Mon Sep 17 00:00:00 2001 From: range Date: Thu, 28 Jul 2016 09:41:43 +0800 Subject: [PATCH 254/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Variables, Numeric Expressions and Assignment Operators.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md index be536df74f..d7cfd4c064 100644 --- a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md +++ b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md @@ -1,4 +1,8 @@ +vim-kakali translating + + Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators – part8 + ======================================================================================= The [Awk command series][1] is getting 
exciting exciting, I believe; in the previous seven parts, we walked through some fundamentals of Awk that you need to master to enable you to perform some basic text or string filtering in Linux.
From 0212c3012db43a18551f8a5355028eef4bae7680 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Thu, 28 Jul 2016 09:47:33 +0800
Subject: [PATCH 255/471] =?UTF-8?q?20160728-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ... A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 37 +++++++++++++++++
 1 file changed, 37 insertions(+)
 create mode 100644 sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
diff --git a/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
new file mode 100644
index 0000000000..7c40028c60
--- /dev/null
+++ b/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
@@ -0,0 +1,37 @@
+YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER
+=====================================================
+
+Canonical, the parent company of Ubuntu, has put a lot of effort into popularizing Linux. No matter how much you dislike Ubuntu, you have to accept that it has actually helped to make “Linux for human beings”. Ubuntu and its derivatives are the most used Linux distributions.
+
+In an effort to further popularize Ubuntu Linux, Canonical has put a website in place where you can use a [demo version of Ubuntu][1]. It will help you get the look and feel of Ubuntu. With that, it will be a little easier for new users to make a decision about using Ubuntu.
+
+You may argue that a live USB of Ubuntu will be better as it provides a far better experience without installing. I agree, but keep in mind the effort of downloading the ISO, creating a live USB, changing the boot configuration and then using the live USB to try it out. It’s tedious and not everyone is going to do that.
An online tour is a simpler way out.
+
+So, what do you see in this online Ubuntu demo tour? Not a lot, actually.
+
+You can browse files, you can use Unity Dash, you can experience Ubuntu Software Center, even install a few apps (they won’t really be installed), browse the File Manager and other such things. But that’s all. In my opinion, it’s pretty nifty and does what it intends to do, and that is provide a quick look at this popular OS.
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo.jpeg)
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-1.jpeg)
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-2.jpeg)
+
+If a friend or family member ever tells you they are interested in trying Linux and would like to experience it before installing, you can send them the link below:
+
+[Ubuntu Online Tour][0]
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-online-demo/?utm_source=newsletter&utm_medium=email&utm_campaign=linux_and_open_source_stories_this_week
+
+作者:[Abhishek Prakash][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对ID](https://github.com/校对ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[0]: http://tour.ubuntu.com/en/
+[1]: http://tour.ubuntu.com/en/
From b8628099334cba9e402d33d92f4ce65584c5506f Mon Sep 17 00:00:00 2001
From: kokialoves <498497353@qq.com>
Date: Thu, 28 Jul 2016 10:04:41 +0800
Subject: [PATCH 256/471] Update 20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
---
 ...0160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 1 +
 1 file changed, 1 insertion(+)
diff --git a/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/sources/tech/20160721 YOU CAN
TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md +++ b/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md @@ -1,3 +1,4 @@ +[translating by kokialoves] YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER ===================================================== From e3e519c8a808f9d2d2be7bcafdbc4e9009162c6b Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 28 Jul 2016 11:13:49 +0800 Subject: [PATCH 257/471] PUB:20160705 Create Your Own Shell in Python - Part I @cposture --- ...reate Your Own Shell in Python - Part I.md | 82 +++++++++---------- 1 file changed, 41 insertions(+), 41 deletions(-) rename {translated/tech => published}/20160705 Create Your Own Shell in Python - Part I.md (67%) diff --git a/translated/tech/20160705 Create Your Own Shell in Python - Part I.md b/published/20160705 Create Your Own Shell in Python - Part I.md similarity index 67% rename from translated/tech/20160705 Create Your Own Shell in Python - Part I.md rename to published/20160705 Create Your Own Shell in Python - Part I.md index b54d0bff29..3aee650db0 100644 --- a/translated/tech/20160705 Create Your Own Shell in Python - Part I.md +++ b/published/20160705 Create Your Own Shell in Python - Part I.md @@ -1,7 +1,7 @@ -使用 Python 创建你自己的 Shell:Part I +使用 Python 创建你自己的 Shell (一) ========================================== -我很想知道一个 shell (像 bash,csh 等)内部是如何工作的。为了满足自己的好奇心,我使用 Python 实现了一个名为 **yosh** (Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。 +我很想知道一个 shell (像 bash,csh 等)内部是如何工作的。于是为了满足自己的好奇心,我使用 Python 实现了一个名为 **yosh** (Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。 (提示:你可以在[这里](https://github.com/supasate/yosh)查找本博文使用的源代码,代码以 MIT 许可证发布。在 Mac OS X 10.11.5 上,我使用 Python 2.7.10 和 3.4.3 进行了测试。它应该可以运行在其他类 Unix 环境,比如 Linux 和 Windows 上的 Cygwin。) @@ -20,15 +20,15 @@ yosh_project `yosh_project` 为项目根目录(你也可以把它简单命名为 `yosh`)。 -`yosh` 为包目录,且 `__init__.py` 可以使它成为与包目录名字相同的包(如果你不写 Python,可以忽略它。) +`yosh` 为包目录,且 `__init__.py` 可以使它成为与包的目录名字相同的包(如果你不用 Python 编写的话,可以忽略它。) `shell.py` 是我们主要的脚本文件。 ### 步骤 
1:Shell 循环 -当启动一个 shell,它会显示一个命令提示符并等待你的命令输入。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会重新回到循环,等待下一条指令。 +当启动一个 shell,它会显示一个命令提示符并等待你的命令输入。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会重新回到这里,并循环等待下一条指令。 -在 `shell.py`,我们会以一个简单的 mian 函数开始,该函数调用了 shell_loop() 函数,如下: +在 `shell.py` 中,我们会以一个简单的 main 函数开始,该函数调用了 shell_loop() 函数,如下: ``` def shell_loop(): @@ -43,7 +43,7 @@ if __name__ == "__main__": main() ``` -接着,在 `shell_loop()`,为了指示循环是否继续或停止,我们使用了一个状态标志。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。 +接着,在 `shell_loop()` 中,为了指示循环是否继续或停止,我们使用了一个状态标志。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。 ``` import sys @@ -56,15 +56,15 @@ def shell_loop(): status = SHELL_STATUS_RUN while status == SHELL_STATUS_RUN: - # Display a command prompt + ### 显示命令提示符 sys.stdout.write('> ') sys.stdout.flush() - # Read command input + ### 读取命令输入 cmd = sys.stdin.readline() ``` -之后,我们切分命令输入并进行执行(我们即将实现`命令切分`和`执行`函数)。 +之后,我们切分命令(tokenize)输入并进行执行(execute)(我们即将实现 `tokenize` 和 `execute` 函数)。 因此,我们的 shell_loop() 会是如下这样: @@ -79,33 +79,33 @@ def shell_loop(): status = SHELL_STATUS_RUN while status == SHELL_STATUS_RUN: - # Display a command prompt + ### 显示命令提示符 sys.stdout.write('> ') sys.stdout.flush() - # Read command input + ### 读取命令输入 cmd = sys.stdin.readline() - # Tokenize the command input + ### 切分命令输入 cmd_tokens = tokenize(cmd) - # Execute the command and retrieve new status + ### 执行该命令并获取新的状态 status = execute(cmd_tokens) ``` -这就是我们整个 shell 循环。如果我们使用 `python shell.py` 启动我们的 shell,它会显示命令提示符。然而如果我们输入命令并按回车,它会抛出错误,因为我们还没定义`命令切分`函数。 +这就是我们整个 shell 循环。如果我们使用 `python shell.py` 启动我们的 shell,它会显示命令提示符。然而如果我们输入命令并按回车,它会抛出错误,因为我们还没定义 `tokenize` 函数。 为了退出 shell,可以尝试输入 ctrl-c。稍后我将解释如何以优雅的形式退出 shell。 -### 步骤 2:命令切分 +### 步骤 2:命令切分(tokenize) -当用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的很长的字符串。因此,我们必须切分该字符串(分割一个字符串为多个标记)。 +当用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的长字符串。因此,我们必须切分该字符串(分割一个字符串为多个元组)。 咋一看似乎很简单。我们或许可以使用 `cmd.split()`,以空格分割输入。它对类似 `ls -a my_folder` 的命令起作用,因为它能够将命令分割为一个列表 `['ls', '-a', 'my_folder']`,这样我们便能轻易处理它们了。 然而,也有一些类似 
`echo "Hello World"` 或 `echo 'Hello World'` 以单引号或双引号引用参数的情况。如果我们使用 cmd.spilt,我们将会得到一个存有 3 个标记的列表 `['echo', '"Hello', 'World"']` 而不是 2 个标记的列表 `['echo', 'Hello World']`。 -幸运的是,Python 提供了一个名为 `shlex` 的库,它能够帮助我们效验如神地分割命令。(提示:我们也可以使用正则表达式,但它不是本文的重点。) +幸运的是,Python 提供了一个名为 `shlex` 的库,它能够帮助我们如魔法般地分割命令。(提示:我们也可以使用正则表达式,但它不是本文的重点。) ``` @@ -120,23 +120,23 @@ def tokenize(string): ... ``` -然后我们将这些标记发送到执行进程。 +然后我们将这些元组发送到执行进程。 ### 步骤 3:执行 -这是 shell 中核心和有趣的一部分。当 shell 执行 `mkdir test_dir` 时,到底发生了什么?(提示: `mkdir` 是一个带有 `test_dir` 参数的执行程序,用于创建一个名为 `test_dir` 的目录。) +这是 shell 中核心而有趣的一部分。当 shell 执行 `mkdir test_dir` 时,到底发生了什么?(提示: `mkdir` 是一个带有 `test_dir` 参数的执行程序,用于创建一个名为 `test_dir` 的目录。) -`execvp` 是涉及这一步的首个函数。在我们解释 `execvp` 所做的事之前,让我们看看它的实际效果。 +`execvp` 是这一步的首先需要的函数。在我们解释 `execvp` 所做的事之前,让我们看看它的实际效果。 ``` import os ... def execute(cmd_tokens): - # Execute command + ### 执行命令 os.execvp(cmd_tokens[0], cmd_tokens) - # Return status indicating to wait for next command in shell_loop + ### 返回状态以告知在 shell_loop 中等待下一个命令 return SHELL_STATUS_RUN ... @@ -144,11 +144,11 @@ def execute(cmd_tokens): 再次尝试运行我们的 shell,并输入 `mkdir test_dir` 命令,接着按下回车键。 -在我们敲下回车键之后,问题是我们的 shell 会直接退出而不是等待下一个命令。然而,目标正确地被创建。 +在我们敲下回车键之后,问题是我们的 shell 会直接退出而不是等待下一个命令。然而,目录正确地创建了。 因此,`execvp` 实际上做了什么? -`execvp` 是系统调用 `exec` 的一个变体。第一个参数是程序名字。`v` 表示第二个参数是一个程序参数列表(可变参数)。`p` 表示环境变量 `PATH` 会被用于搜索给定的程序名字。在我们上一次的尝试中,它将会基于我们的 `PATH` 环境变量查找`mkdir` 程序。 +`execvp` 是系统调用 `exec` 的一个变体。第一个参数是程序名字。`v` 表示第二个参数是一个程序参数列表(参数数量可变)。`p` 表示将会使用环境变量 `PATH` 搜索给定的程序名字。在我们上一次的尝试中,它将会基于我们的 `PATH` 环境变量查找`mkdir` 程序。 (还有其他 `exec` 变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。) @@ -158,7 +158,7 @@ def execute(cmd_tokens): 因此,我们需要其他的系统调用来解决问题:`fork`。 -`fork` 会开辟新的内存并拷贝当前进程到一个新的进程。我们称这个新的进程为**子进程**,调用者进程为**父进程**。然后,子进程内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,可以免受内存替换的危险。 +`fork` 会分配新的内存并拷贝当前进程到一个新的进程。我们称这个新的进程为**子进程**,调用者进程为**父进程**。然后,子进程内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,可以免受内存替换的危险。 让我们看看修改的代码。 @@ -166,34 +166,34 @@ def execute(cmd_tokens): ... 
def execute(cmd_tokens): - # Fork a child shell process - # If the current process is a child process, its `pid` is set to `0` - # else the current process is a parent process and the value of `pid` - # is the process id of its child process. + ### 分叉一个子 shell 进程 + ### 如果当前进程是子进程,其 `pid` 被设置为 `0` + ### 否则当前进程是父进程的话,`pid` 的值 + ### 是其子进程的进程 ID。 pid = os.fork() if pid == 0: - # Child process - # Replace the child shell process with the program called with exec + ### 子进程 + ### 用被 exec 调用的程序替换该子进程 os.execvp(cmd_tokens[0], cmd_tokens) elif pid > 0: - # Parent process + ### 父进程 while True: - # Wait response status from its child process (identified with pid) + ### 等待其子进程的响应状态(以进程 ID 来查找) wpid, status = os.waitpid(pid, 0) - # Finish waiting if its child process exits normally - # or is terminated by a signal + ### 当其子进程正常退出时 + ### 或者其被信号中断时,结束等待状态 if os.WIFEXITED(status) or os.WIFSIGNALED(status): break - # Return status indicating to wait for next command in shell_loop + ### 返回状态以告知在 shell_loop 中等待下一个命令 return SHELL_STATUS_RUN ... 
``` -当我们的父进程调用 `os.fork()`时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,且并行运行着。 +当我们的父进程调用 `os.fork()` 时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,且并行运行着。 如果运行的代码属于子进程,`pid` 将为 `0`。否则,如果运行的代码属于父进程,`pid` 将会是子进程的进程 id。 @@ -205,13 +205,13 @@ def execute(cmd_tokens): 现在,你可以尝试运行我们的 shell 并输入 `mkdir test_dir2`。它应该可以正确执行。我们的主 shell 进程仍然存在并等待下一条命令。尝试执行 `ls`,你可以看到已创建的目录。 -但是,这里仍有许多问题。 +但是,这里仍有一些问题。 第一,尝试执行 `cd test_dir2`,接着执行 `ls`。它应该会进入到一个空的 `test_dir2` 目录。然而,你将会看到目录并没有变为 `test_dir2`。 第二,我们仍然没有办法优雅地退出我们的 shell。 -我们将会在 [Part 2][1] 解决诸如此类的问题。 +我们将会在 [第二部分][1] 解决诸如此类的问题。 -------------------------------------------------------------------------------- @@ -219,8 +219,8 @@ def execute(cmd_tokens): via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ 作者:[Supasate Choochaisri][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[cposture](https://github.com/cposture) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2ab589bf9da6bf62162970e1dc9c2d6ca150a810 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 28 Jul 2016 11:44:36 +0800 Subject: [PATCH 258/471] PUB:20160706 Create Your Own Shell in Python - Part II @cposture --- ...eate Your Own Shell in Python - Part II.md | 43 ++++++++----------- 1 file changed, 19 insertions(+), 24 deletions(-) rename {translated/tech => published}/20160706 Create Your Own Shell in Python - Part II.md (76%) diff --git a/translated/tech/20160706 Create Your Own Shell in Python - Part II.md b/published/20160706 Create Your Own Shell in Python - Part II.md similarity index 76% rename from translated/tech/20160706 Create Your Own Shell in Python - Part II.md rename to published/20160706 Create Your Own Shell in Python - Part II.md index 0f0cd6a878..9e66d5643c 100644 --- a/translated/tech/20160706 Create Your Own Shell in Python - Part II.md +++ b/published/20160706 Create Your Own Shell in Python - Part II.md 
@@ -1,17 +1,17 @@ -使用 Python 创建你自己的 Shell:Part II +使用 Python 创建你自己的 Shell(下) =========================================== -在[part 1][1] 中,我们已经创建了一个主要的 shell 循环、切分了的命令输入,以及通过 `fork` 和 `exec` 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 +在[上篇][1]中,我们已经创建了一个 shell 主循环、切分了命令输入,以及通过 `fork` 和 `exec` 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 ### 步骤 4:内置命令 -“cd test_dir2 无法修改我们的当前目录” 这句话是对的,但在某种意义上也是错的。在执行完该命令之后,我们仍然处在同一目录,从这个意义上讲,它是对的。然而,目录实际上已经被修改,只不过它是在子进程中被修改。 +“`cd test_dir2` 无法修改我们的当前目录” 这句话是对的,但在某种意义上也是错的。在执行完该命令之后,我们仍然处在同一目录,从这个意义上讲,它是对的。然而,目录实际上已经被修改,只不过它是在子进程中被修改。 -还记得我们 fork 了一个子进程,然后执行命令,执行命令的过程没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。 +还记得我们分叉(fork)了一个子进程,然后执行命令,执行命令的过程没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。 然后子进程退出,而父进程在原封不动的目录下继续运行。 -因此,这类与 shell 自己相关的命令必须是内置命令。它必须在 shell 进程中执行而没有分叉(forking)。 +因此,这类与 shell 自己相关的命令必须是内置命令。它必须在 shell 进程中执行而不是在分叉中(forking)。 #### cd @@ -35,7 +35,6 @@ yosh_project import os from yosh.constants import * - def cd(args): os.chdir(args[0]) @@ -66,23 +65,21 @@ SHELL_STATUS_RUN = 1 ```python ... -# Import constants +### 导入常量 from yosh.constants import * -# Hash map to store built-in function name and reference as key and value +### 使用哈希映射来存储内建的函数名及其引用 built_in_cmds = {} - def tokenize(string): return shlex.split(string) - def execute(cmd_tokens): - # Extract command name and arguments from tokens + ### 从元组中分拆命令名称与参数 cmd_name = cmd_tokens[0] cmd_args = cmd_tokens[1:] - # If the command is a built-in command, invoke its function with arguments + ### 如果该命令是一个内建命令,使用参数调用该函数 if cmd_name in built_in_cmds: return built_in_cmds[cmd_name](cmd_args) @@ -91,29 +88,29 @@ def execute(cmd_tokens): 我们使用一个 python 字典变量 `built_in_cmds` 作为哈希映射(hash map),以存储我们的内置函数。我们在 `execute` 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 -(提示:`built_in_cmds[cmd_name]` 返回能直接使用参数调用的函数引用的。) +(提示:`built_in_cmds[cmd_name]` 返回能直接使用参数调用的函数引用。) 我们差不多准备好使用内置的 `cd` 函数了。最后一步是将 `cd` 函数添加到 `built_in_cmds` 映射中。 ``` ... 
-# Import all built-in function references +### 导入所有内建函数引用 from yosh.builtins import * ... -# Register a built-in function to built-in command hash map +### 注册内建函数到内建命令的哈希映射中 def register_command(name, func): built_in_cmds[name] = func -# Register all built-in commands here +### 在此注册所有的内建命令 def init(): register_command("cd", cd) def main(): - # Init shell before starting the main loop + ###在开始主循环之前初始化 shell init() shell_loop() ``` @@ -138,7 +135,7 @@ from yosh.builtins.cd import * 我们需要一个可以修改 shell 状态为 `SHELL_STATUS_STOP` 的函数。这样,shell 循环可以自然地结束,shell 将到达终点而退出。 -和 `cd` 一样,如果我们在子进程中 fork 和执行 `exit` 函数,其对父进程是不起作用的。因此,`exit` 函数需要成为一个 shell 内置函数。 +和 `cd` 一样,如果我们在子进程中分叉并执行 `exit` 函数,其对父进程是不起作用的。因此,`exit` 函数需要成为一个 shell 内置函数。 让我们从这开始:在 `builtins` 目录下创建一个名为 `exit.py` 的新文件。 @@ -159,7 +156,6 @@ yosh_project ``` from yosh.constants import * - def exit(args): return SHELL_STATUS_STOP ``` @@ -173,11 +169,10 @@ from yosh.builtins.exit import * 最后,我们在 `shell.py` 中的 `init()` 函数注册 `exit` 命令。 - ``` ... -# Register all built-in commands here +### 在此注册所有的内建命令 def init(): register_command("cd", cd) register_command("exit", exit) @@ -193,7 +188,7 @@ def init(): 我希望你能像我一样享受创建 `yosh` (**y**our **o**wn **sh**ell)的过程。但我的 `yosh` 版本仍处于早期阶段。我没有处理一些会使 shell 崩溃的极端状况。还有很多我没有覆盖的内置命令。为了提高性能,一些非内置命令也可以实现为内置命令(避免新进程创建时间)。同时,大量的功能还没有实现(请看 [公共特性](http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html) 和 [不同特性](http://www.tldp.org/LDP/intro-linux/html/x12249.html)) -我已经在 github.com/supasate/yosh 中提供了源代码。请随意 fork 和尝试。 +我已经在 https://github.com/supasate/yosh 中提供了源代码。请随意 fork 和尝试。 现在该是创建你真正自己拥有的 Shell 的时候了。 @@ -205,12 +200,12 @@ via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-pyt 作者:[Supasate Choochaisri][a] 译者:[cposture](https://github.com/cposture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://disqus.com/by/supasate_choochaisri/ -[1]: 
https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ +[1]: https://linux.cn/article-7624-1.html [2]: http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html [3]: http://www.tldp.org/LDP/intro-linux/html/x12249.html [4]: https://github.com/supasate/yosh From dab654979b756ec209e172a61a2b1c1840940130 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Thu, 28 Jul 2016 11:54:39 +0800 Subject: [PATCH 259/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?(#4233)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Delete 20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md * Create 20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md --- ... A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 38 ------------------- ... A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 37 ++++++++++++++++++ 2 files changed, 37 insertions(+), 38 deletions(-) delete mode 100644 sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md create mode 100644 translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md diff --git a/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md deleted file mode 100644 index 6f52422a06..0000000000 --- a/sources/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md +++ /dev/null @@ -1,38 +0,0 @@ -[translating by kokialoves] -YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER -===================================================== - -Canonical, the parent company of Ubuntu, has put a lot of effort in popularizing Linux. No matter how much you dislike Ubuntu, but you have to accept that it has actually helped to make “Linux for human beings”. Ubuntu and its derivatives are the most used Linux distributions. 
- -In an effort to further popularize Ubuntu Linux, Canonical has put a website in place where you can use a [demo version of Ubuntu][1]. It will help you get the look and feel of Ubuntu. With that, it will be a little easier to make a decision about using Ubuntu for new users. - -You may argue that a live USB of Ubuntu will be better as it provides a far better experience without installing. I agree to that but keep in mind the effort to download the ISO, to create a live USB, change boot configuration and then using the live USB to experience. It’s tedious and not everyone is going to do that. An online tour is a simpler way out. - -So, what do you see in this online Ubuntu demo tour. Not a lot actually. - -You can browse files, you can use Unity Dash, you can experience Ubuntu Software Center, even install a few apps (they won’t be really installed), browse the File Manager and other such things. But that’s all. But in my opinion, it’s pretty nifty and does what it intends to do and that is provide a quick look at this popular OS. 
- -![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo.jpeg) - -![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-1.jpeg) - -![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-2.jpeg) - -If a friend or family ever asks you that he is interested in trying Linux and would like to experience it before installing, you can send her/him this link below: - -[Ubuntu Online Tour][0] - - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/ubuntu-online-demo/?utm_source=newsletter&utm_medium=email&utm_campaign=linux_and_open_source_stories_this_week - -作者:[Abhishek Prakash][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对ID](https://github.com/校对ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[0]: http://tour.ubuntu.com/en/ -[1]: http://tour.ubuntu.com/en/ diff --git a/translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md new file mode 100644 index 0000000000..d79969e3e7 --- /dev/null +++ b/translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md @@ -0,0 +1,37 @@ +你能在浏览器中运行UBUNTU +===================================================== + +Canonical, Ubuntu的母公司, 为Linux推广做了很多努力. 无论你有多么不喜欢 Ubuntu, 你必须承认它对 “Linux 易用性”的影响. Ubuntu 以及其衍生是应用最多的Linux版本 . + +为了进一步推广 Ubuntu Linux, Canonical 把它放到了浏览器里你可以再任何地方使用 [demo version of Ubuntu][1]. 它将帮你更好的体验 Ubuntu. 以便让新人更容易决定是否使用. + +你可能争辩说USB版的linux更好. 我同意但是你要知道你要下载ISO, 创建USB驱动, 修改配置文件. 并不是每个人都乐意这么干的. 在线体验是一个更好的选择. + +因此, 你能在Ubuntu在线看到什么. 实际上并不多. + +你可以浏览文件, 你可以使用 Unity Dash, 浏览 Ubuntu Software Center, 甚至装几个 apps (当然它们不会真的安装), 看一看文件浏览器 和其它一些东西. 以上就是全部了. 
但是在我看来,它很漂亮,也达到了它的目的,那就是让人快速一览这个流行的操作系统。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo.jpeg)
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-1.jpeg)
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-2.jpeg)
+
+如果你的朋友或者家人想在安装之前先试试 Linux,你可以把下面的链接发给他们:
+
+[Ubuntu Online Tour][0]
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-online-demo/?utm_source=newsletter&utm_medium=email&utm_campaign=linux_and_open_source_stories_this_week
+
+作者:[Abhishek Prakash][a]
+译者:[kokialoves](https://github.com/kokialoves)
+校对:[校对ID](https://github.com/校对ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[0]: http://tour.ubuntu.com/en/
+[1]: http://tour.ubuntu.com/en/

From 49dde28c2e619cee0d8d541d3c8016af0b40379c Mon Sep 17 00:00:00 2001
From: KS
Date: Thu, 28 Jul 2016 11:55:18 +0800
Subject: [PATCH 260/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91=20?=
 =?UTF-8?q?(#4232)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Create 20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md

* Delete 20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md

---
 ...ook Messenger bot with Python and Flask.md | 320 ------------------
 ...ook Messenger bot with Python and Flask.md | 319 +++++++++++++++++
 2 files changed, 319 insertions(+), 320 deletions(-)
 delete mode 100644 sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md
 create mode 100644 translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md

diff --git a/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md
deleted file mode 100644 index 6ab74e3527..0000000000 --- a/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md +++ /dev/null @@ -1,320 +0,0 @@ -wyangsun translating -How to build and deploy a Facebook Messenger bot with Python and Flask, a tutorial -========================================================================== - -This is my log of how I built a simple Facebook Messenger bot. The functionality is really simple, it’s an echo bot that will just print back to the user what they write. - -This is something akin to the Hello World example for servers, the echo server. - -The goal of the project is not to build the best Messenger bot, but rather to get a feel for what it takes to build a minimal bot and how everything comes together. - -- [Tech Stack][1] -- [Bot Architecture][2] -- [The Bot Server][3] -- [Deploying to Heroku][4] -- [Creating the Facebook App][5] -- [Conclusion][6] - -### Tech Stack - -The tech stack that was used is: - -- [Heroku][7] for back end hosting. The free-tier is more than enough for a tutorial of this level. The echo bot does not require any sort of data persistence so a database was not used. -- [Python][8] was the language of choice. The version that was used is 2.7 however it can easily be ported to Python 3 with minor alterations. -- [Flask][9] as the web development framework. It’s a very lightweight framework that’s perfect for small scale projects/microservices. -- Finally the [Git][10] version control system was used for code maintenance and to deploy to Heroku. -- Worth mentioning: [Virtualenv][11]. This python tool is used to create “environments” clean of python libraries so you can only install the necessary requirements and minimize the app footprint. - -### Bot Architecture - -Messenger bots are constituted by a server that responds to two types of requests: - -- GET requests are being used for authentication. 
They are sent by Messenger with an authentication code that you register on FB. -- POST requests are being used for the actual communication. The typical workflow is that the bot will initiate the communication by sending the POST request with the data of the message sent by the user, we will handle it, send a POST request of our own back. If that one is completed successfully (a 200 OK status is returned) we also respond with a 200 OK code to the initial Messenger request. -For this tutorial the app will be hosted on Heroku, which provides a nice and easy interface to deploy apps. As mentioned the free tier will suffice for this tutorial. - -After the app has been deployed and is running, we’ll create a Facebook app and link it to our app so that messenger knows where to send the requests that are meant for our bot. - -### The Bot Server -The basic server code was taken from the following [Chatbot][12] project by Github user [hult (Magnus Hult)][13], with a few modifications to the code to only echo messages and a couple bugfixes I came across. This is the final version of the server code: - -``` -from flask import Flask, request -import json -import requests - -app = Flask(__name__) - -# This needs to be filled with the Page Access Token that will be provided -# by the Facebook App that will be created. -PAT = '' - -@app.route('/', methods=['GET']) -def handle_verification(): - print "Handling Verification." - if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': - print "Verification successful!" - return request.args.get('hub.challenge', '') - else: - print "Verification failed!" 
- return 'Error, wrong validation token' - -@app.route('/', methods=['POST']) -def handle_messages(): - print "Handling Messages" - payload = request.get_data() - print payload - for sender, message in messaging_events(payload): - print "Incoming from %s: %s" % (sender, message) - send_message(PAT, sender, message) - return "ok" - -def messaging_events(payload): - """Generate tuples of (sender_id, message_text) from the - provided payload. - """ - data = json.loads(payload) - messaging_events = data["entry"][0]["messaging"] - for event in messaging_events: - if "message" in event and "text" in event["message"]: - yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') - else: - yield event["sender"]["id"], "I can't echo this" - - -def send_message(token, recipient, text): - """Send the message text to recipient with id recipient. - """ - - r = requests.post("https://graph.facebook.com/v2.6/me/messages", - params={"access_token": token}, - data=json.dumps({ - "recipient": {"id": recipient}, - "message": {"text": text.decode('unicode_escape')} - }), - headers={'Content-type': 'application/json'}) - if r.status_code != requests.codes.ok: - print r.text - -if __name__ == '__main__': - app.run() -``` - -Let’s break down the code. The first part is the imports that will be needed: - -``` -from flask import Flask, request -import json -import requests -``` - -Next we define the two functions (using the Flask specific app.route decorators) that will handle the GET and POST requests to our bot. - -``` -@app.route('/', methods=['GET']) -def handle_verification(): - print "Handling Verification." - if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': - print "Verification successful!" - return request.args.get('hub.challenge', '') - else: - print "Verification failed!" 
- return 'Error, wrong validation token' -``` - -The verify_token object that is being sent by Messenger will be declared by us when we create the Facebook app. We have to validate the one we are being have against itself. Finally we return the “hub.challenge” back to Messenger. - -The function that handles the POST requests is a bit more interesting. - -``` -@app.route('/', methods=['POST']) -def handle_messages(): - print "Handling Messages" - payload = request.get_data() - print payload - for sender, message in messaging_events(payload): - print "Incoming from %s: %s" % (sender, message) - send_message(PAT, sender, message) - return "ok" -``` - -When called we grab the massage payload, use function messaging_events to break it down and extract the sender user id and the actual message sent, generating a python iterator that we can loop over. Notice that in each request sent by Messenger it is possible to have more than one messages. - -``` -def messaging_events(payload): - """Generate tuples of (sender_id, message_text) from the - provided payload. - """ - data = json.loads(payload) - messaging_events = data["entry"][0]["messaging"] - for event in messaging_events: - if "message" in event and "text" in event["message"]: - yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') - else: - yield event["sender"]["id"], "I can't echo this" -``` - -While iterating over each message we call the send_message function and we perform the POST request back to Messnger using the Facebook Graph messages API. During this time we still have not responded to the original Messenger request which we are blocking. This can lead to timeouts and 5XX errors. - -The above was spotted during an outage due to a bug I came across, which was occurred when the user was sending emojis which are actual unicode ids, however Python was miss-encoding. We ended up sending back garbage. 
- -This POST request back to Messenger would never finish, and that in turn would cause 5XX status codes to be returned to the original request, rendering the service unusable. - -This was fixed by escaping the messages with `encode('unicode_escape')` and then just before we sent back the message decode it with `decode('unicode_escape')`. - -``` -def send_message(token, recipient, text): - """Send the message text to recipient with id recipient. - """ - - r = requests.post("https://graph.facebook.com/v2.6/me/messages", - params={"access_token": token}, - data=json.dumps({ - "recipient": {"id": recipient}, - "message": {"text": text.decode('unicode_escape')} - }), - headers={'Content-type': 'application/json'}) - if r.status_code != requests.codes.ok: - print r.text -``` - -### Deploying to Heroku - -Once the code was built to my liking it was time for the next step. -Deploy the app. - -Sure, but how? - -I have deployed apps before to Heroku (mainly Rails) however I was always following a tutorial of some sort, so the configuration has already been created. In this case though I had to start from scratch. - -Fortunately it was the official [Heroku documentation][14] to the rescue. The article explains nicely the bare minimum required for running an app. - -Long story short, what we need besides our code are two files. The first file is the “requirements.txt” file which is a list of of the library dependencies required to run the application. - -The second file required is the “Procfile”. This file is there to inform the Heroku how to run our service. Again the bare minimum needed for this file is the following: - ->web: gunicorn echoserver:app - -The way this will be interpreted by heroku is that our app is started by running the echoserver.py file and the app will be using gunicorn as the web server. 
The reason we are using an additional webserver is performance related and is explained in the above Heroku documentation: - ->Web applications that process incoming HTTP requests concurrently make much more efficient use of dyno resources than web applications that only process one request at a time. Because of this, we recommend using web servers that support concurrent request processing whenever developing and running production services. - ->The Django and Flask web frameworks feature convenient built-in web servers, but these blocking servers only process a single request at a time. If you deploy with one of these servers on Heroku, your dyno resources will be underutilized and your application will feel unresponsive. - ->Gunicorn is a pure-Python HTTP server for WSGI applications. It allows you to run any Python application concurrently by running multiple Python processes within a single dyno. It provides a perfect balance of performance, flexibility, and configuration simplicity. - -Going back to our “requirements.txt” file let’s see how it binds with the Virtualenv tool that was mentioned. - -At anytime, your developement machine may have a number of python libraries installed. When deploying applications you don’t want to have these libraries loaded as it makes it hard to make out which ones you actually use. - -What Virtualenv does is create a new blank virtual enviroment so that you can only install the libraries that your app requires. - -You can check which libraries are currently installed by running the following command: - -``` -kostis@KostisMBP ~ $ pip freeze -cycler==0.10.0 -Flask==0.10.1 -gunicorn==19.6.0 -itsdangerous==0.24 -Jinja2==2.8 -MarkupSafe==0.23 -matplotlib==1.5.1 -numpy==1.10.4 -pyparsing==2.1.0 -python-dateutil==2.5.0 -pytz==2015.7 -requests==2.10.0 -scipy==0.17.0 -six==1.10.0 -virtualenv==15.0.1 -Werkzeug==0.11.10 -``` - -Note: The pip tool should already be installed on your machine along with Python. 
- -If not check the [official site][15] for how to install it. - -Now let’s use Virtualenv to create a new blank enviroment. First we create a new folder for our project, and change dir into it: - -``` -kostis@KostisMBP projects $ mkdir echoserver -kostis@KostisMBP projects $ cd echoserver/ -kostis@KostisMBP echoserver $ -``` - -Now let’s create a new enviroment called echobot. To activate it you run the following source command, and checking with pip freeze we can see that it’s now empty. - -``` -kostis@KostisMBP echoserver $ virtualenv echobot -kostis@KostisMBP echoserver $ source echobot/bin/activate -(echobot) kostis@KostisMBP echoserver $ pip freeze -(echobot) kostis@KostisMBP echoserver $ -``` - -We can start installing the libraries required. The ones we’ll need are flask, gunicorn, and requests and with them installed we create the requirements.txt file: - -``` -(echobot) kostis@KostisMBP echoserver $ pip install flask -(echobot) kostis@KostisMBP echoserver $ pip install gunicorn -(echobot) kostis@KostisMBP echoserver $ pip install requests -(echobot) kostis@KostisMBP echoserver $ pip freeze -click==6.6 -Flask==0.11 -gunicorn==19.6.0 -itsdangerous==0.24 -Jinja2==2.8 -MarkupSafe==0.23 -requests==2.10.0 -Werkzeug==0.11.10 -(echobot) kostis@KostisMBP echoserver $ pip freeze > requirements.txt -``` - -After all the above have been run, we create the echoserver.py file with the python code and the Procfile with the command that was mentioned, and we should end up with the following files/folders: - -``` -(echobot) kostis@KostisMBP echoserver $ ls -Procfile echobot echoserver.py requirements.txt -``` - -We are now ready to upload to Heroku. We need to do two things. The first is to install the Heroku toolbet if it’s not already installed on your system (go to [Heroku][16] for details). The second is to create a new Heroku app through the [web interface][17]. - -Click on the big plus sign on the top right and select “Create new app”. 
- - - - - - - - --------------------------------------------------------------------------------- - -via: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/ - -作者:[Konstantinos Tsaprailis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://github.com/kostistsaprailis -[1]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#tech-stack -[2]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#bot-architecture -[3]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#the-bot-server -[4]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#deploying-to-heroku -[5]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#creating-the-facebook-app -[6]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#conclusion -[7]: https://www.heroku.com -[8]: https://www.python.org -[9]: http://flask.pocoo.org -[10]: https://git-scm.com -[11]: https://virtualenv.pypa.io/en/stable -[12]: https://github.com/hult/facebook-chatbot-python -[13]: https://github.com/hult -[14]: https://devcenter.heroku.com/articles/python-gunicorn -[15]: https://pip.pypa.io/en/stable/installing -[16]: https://toolbelt.heroku.com -[17]: https://dashboard.heroku.com/apps - - diff --git a/translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md new file mode 100644 index 0000000000..5ad4e62816 
--- /dev/null
+++ b/translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md
@@ -0,0 +1,319 @@
+如何用 Python 和 Flask 创建并部署一个 Facebook 信使机器人(教程)
+==========================================================================
+
+这是我构建一个简单的 Facebook 信使机器人的记录。它的功能很简单:这是一个回显机器人,只会把用户发来的内容原样回复给用户。
+
+回显服务器类似于服务器界的 “Hello World” 例子。
+
+这个项目的目的不是构建最好的信使机器人,而是了解构建一个最小化的机器人需要做些什么,以及各个部分是如何组合到一起的。
+
+- [技术栈][1]
+- [机器人架构][2]
+- [机器人服务器][3]
+- [部署到 Heroku][4]
+- [创建 Facebook 应用][5]
+- [结论][6]
+
+### 技术栈
+
+使用到的技术栈:
+
+- [Heroku][7] 用作后端托管。其免费层级对于本教程这种级别的应用来说足够了。回显机器人不需要任何形式的数据持久化,所以不需要数据库。
+- [Python][8] 是我们选用的语言。使用的版本是 2.7,不过它只需少量改动就能轻松移植到 Python 3。
+- [Flask][9] 作为 Web 开发框架。它是非常轻量的框架,非常适合小型项目或微服务。
+- 最后,[Git][10] 版本控制系统用来维护代码和部署到 Heroku。
+- 值得一提的还有 [Virtualenv][11]。这个 Python 工具用来创建干净的 Python 库“环境”,这样你就可以只安装必要的依赖,最小化应用的体积。
+
+### 机器人架构
+
+信使机器人由一个响应两类请求的服务器组成:
+
+- GET 请求用于认证。信使会携带你在 FB 上注册的验证码发出这类请求。
+- POST 请求用于实际的通信。典型的工作流是:信使通过向我们发送带有用户所发消息数据的 POST 请求来发起通信,我们处理该请求,然后发回一个我们自己的 POST 请求。如果后者完全成功(返回 200 OK 状态),我们也以 200 OK 状态码响应最初的信使请求。
+本教程的应用将托管到 Heroku,它提供了一个优雅而简单的界面来部署应用。如前所述,免费层级足以满足本教程的需要。
+
+在应用部署并运行之后,我们将创建一个 Facebook 应用,并把它关联到我们的应用上,这样信使就知道应该把发给我们机器人的请求发送到哪里。
+
+### 机器人服务器
+
+基本的服务器代码取自 Github 用户 [hult(Magnus Hult)][13] 的 [Chatbot][12] 项目,我对代码做了少量修改,使它只回显消息,并修正了几个我遇到的 bug。下面是最终版本的服务器代码:
+
+```
+from flask import Flask, request
+import json
+import requests
+
+app = Flask(__name__)
+
+# 这里需要填写页面访问令牌(Page Access Token),
+# 该令牌将由稍后创建的 Facebook 应用提供。
+PAT = ''
+
+@app.route('/', methods=['GET'])
+def handle_verification():
+    print "Handling Verification."
+    if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me':
+        print "Verification successful!"
+        return request.args.get('hub.challenge', '')
+    else:
+        print "Verification failed!"
+ return 'Error, wrong validation token' + +@app.route('/', methods=['POST']) +def handle_messages(): + print "Handling Messages" + payload = request.get_data() + print payload + for sender, message in messaging_events(payload): + print "Incoming from %s: %s" % (sender, message) + send_message(PAT, sender, message) + return "ok" + +def messaging_events(payload): + """Generate tuples of (sender_id, message_text) from the + provided payload. + """ + data = json.loads(payload) + messaging_events = data["entry"][0]["messaging"] + for event in messaging_events: + if "message" in event and "text" in event["message"]: + yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') + else: + yield event["sender"]["id"], "I can't echo this" + + +def send_message(token, recipient, text): + """Send the message text to recipient with id recipient. + """ + + r = requests.post("https://graph.facebook.com/v2.6/me/messages", + params={"access_token": token}, + data=json.dumps({ + "recipient": {"id": recipient}, + "message": {"text": text.decode('unicode_escape')} + }), + headers={'Content-type': 'application/json'}) + if r.status_code != requests.codes.ok: + print r.text + +if __name__ == '__main__': + app.run() +``` + +让我们分解代码。第一部分是引入所需: + +``` +from flask import Flask, request +import json +import requests +``` + +接下来我们定义两个函数(使用 Flask 特定的 app.route 装饰器),用来处理到我们的机器人的 GET 和 POST 请求。 + +``` +@app.route('/', methods=['GET']) +def handle_verification(): + print "Handling Verification." + if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': + print "Verification successful!" + return request.args.get('hub.challenge', '') + else: + print "Verification failed!" 
+        return 'Error, wrong validation token'
+```
+
+信使发来的 verify_token 对象,是我们在创建 Facebook 应用时自己声明的。我们必须校验收到的令牌与我们声明的是否一致。最后我们把 “hub.challenge” 返回给信使。
+
+处理 POST 请求的函数更有趣一些。
+
+```
+@app.route('/', methods=['POST'])
+def handle_messages():
+    print "Handling Messages"
+    payload = request.get_data()
+    print payload
+    for sender, message in messaging_events(payload):
+        print "Incoming from %s: %s" % (sender, message)
+        send_message(PAT, sender, message)
+    return "ok"
+```
+
+该函数被调用时,我们先抓取消息负载,再使用 messaging_events 函数来分解它,提取出发送者的身份和实际发送的消息,生成一个可以循环遍历的 Python 迭代器。请注意,信使发来的每个请求中都有可能包含多于一条消息。
+
+```
+def messaging_events(payload):
+    """Generate tuples of (sender_id, message_text) from the
+    provided payload.
+    """
+    data = json.loads(payload)
+    messaging_events = data["entry"][0]["messaging"]
+    for event in messaging_events:
+        if "message" in event and "text" in event["message"]:
+            yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape')
+        else:
+            yield event["sender"]["id"], "I can't echo this"
+```
+
+在迭代每条消息的过程中,我们调用 send_message 函数,通过 Facebook Graph messages 接口执行一个发回信使的 POST 请求。在此期间,我们始终没有响应被我们阻塞的原始信使请求。这可能导致超时和 5XX 错误。
+
+上述问题是我在一次服务中断期间,因为遇到一个 bug 才发现的:当用户发送表情符号(它们实际上是 unicode 标识)时,Python 发生了错误编码,我们最终发回了乱码。
+
+这个发回信使的 POST 请求永远无法完成,进而导致原始请求收到 5XX 状态码,使服务不可用。
+
+解决办法是:先用 `encode('unicode_escape')` 转义消息,然后在发回消息之前用 `decode('unicode_escape')` 解码。
+
+```
+def send_message(token, recipient, text):
+    """Send the message text to recipient with id recipient.
+    """
+
+    r = requests.post("https://graph.facebook.com/v2.6/me/messages",
+        params={"access_token": token},
+        data=json.dumps({
+            "recipient": {"id": recipient},
+            "message": {"text": text.decode('unicode_escape')}
+        }),
+        headers={'Content-type': 'application/json'})
+    if r.status_code != requests.codes.ok:
+        print r.text
+```
+
+### 部署到 Heroku
+
+一旦代码达到了我想要的样子,就可以进行下一步:部署应用。
+
+当然,但是该怎么做呢?
+
+我之前在 Heroku 上部署过应用(主要是 Rails),不过那时我总是按照某篇教程来做,所以配置都是现成的。而这一次,我必须从头开始。
+
+幸运的是,有官方的 [Heroku 文档][14]来帮忙。这篇文章很好地说明了运行应用所需的最低限度的内容。
+
+长话短说,除了我们的代码之外,我们还需要两个文件。第一个文件是 “requirements.txt”,它列出了运行应用所依赖的库。
+
+需要的第二个文件是 “Procfile”。这个文件告诉 Heroku 如何运行我们的服务。同样,这个文件所需的最低限度的内容如下:
+
+>web: gunicorn echoserver:app
+
+Heroku 对它的解读是:我们的应用通过运行 echoserver.py 启动,并且应用将使用 gunicorn 作为 Web 服务器。我们使用一个额外的 Web 服务器是出于性能方面的考虑,上面的 Heroku 文档中有相应的解释:
+
+>并发处理传入 HTTP 请求的 Web 应用,比一次只处理一个请求的 Web 应用更能有效地利用 dyno 资源。出于这个原因,我们建议在开发和运行生产服务时使用支持并发请求处理的 Web 服务器。
+
+>Django 和 Flask 这两个 Web 框架很方便地内建了 Web 服务器,但是这些阻塞式服务器一个时刻只能处理一个请求。如果你在 Heroku 上用这种服务器部署,你的 dyno 资源将得不到充分利用,你的应用会让人感觉反应迟钝。
+
+>Gunicorn 是一个用于 WSGI 应用的纯 Python HTTP 服务器。它允许你在一个 dyno 内通过运行多个 Python 进程的方式并发地运行任意 Python 应用。它在性能、灵活性和配置简易性之间取得了完美的平衡。
+
+回到前面提到的 “requirements.txt” 文件,让我们看看它是如何与 Virtualenv 工具配合的。
+
+任何时候,你的开发机器上可能都安装着许多 Python 库。部署应用时,你不想把这些库全部带上,因为那样很难弄清楚你实际用到了哪些库。
+
+Virtualenv 会创建一个新的空白虚拟环境,因此你可以只安装你的应用需要的库。
+
+你可以用下面的命令检查当前安装了哪些库:
+
+```
+kostis@KostisMBP ~ $ pip freeze
+cycler==0.10.0
+Flask==0.10.1
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+matplotlib==1.5.1
+numpy==1.10.4
+pyparsing==2.1.0
+python-dateutil==2.5.0
+pytz==2015.7
+requests==2.10.0
+scipy==0.17.0
+six==1.10.0
+virtualenv==15.0.1
+Werkzeug==0.11.10
+```
+
+注意:pip 工具应该已经与 Python 一起安装在你的机器上。
+
+如果没有,请查看[官方网站][15]了解如何安装它。
+
+现在让我们使用 Virtualenv 来创建一个新的空白环境。首先我们给我们的工程创建一个新文件夹,然后进入到该目录下:
+
+```
+kostis@KostisMBP projects $ mkdir echoserver
+kostis@KostisMBP projects $ cd echoserver/
+kostis@KostisMBP echoserver $
+```
+
+现在来创建一个叫做 echobot 的新环境。运行下面的 source 命令激活它,然后使用 pip freeze 检查,我们能看到它现在是空的。
+
+```
+kostis@KostisMBP echoserver $ virtualenv echobot
+kostis@KostisMBP echoserver $ source echobot/bin/activate
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+(echobot) kostis@KostisMBP echoserver $
+```
+
+现在可以安装需要的库了。我们需要的是 flask、gunicorn 和 requests,安装好它们之后,我们就创建 requirements.txt 文件:
+
+```
+(echobot) kostis@KostisMBP echoserver $ pip install flask
+(echobot) kostis@KostisMBP echoserver $ pip install gunicorn
+(echobot) 
kostis@KostisMBP echoserver $ pip install requests
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+click==6.6
+Flask==0.11
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+requests==2.10.0
+Werkzeug==0.11.10
+(echobot) kostis@KostisMBP echoserver $ pip freeze > requirements.txt
+```
+
+完成上述操作后,我们创建包含 Python 代码的 echoserver.py 文件,再创建内容为前面提到的那条命令的 Procfile,最终我们应该得到下面这些文件/文件夹:
+
+```
+(echobot) kostis@KostisMBP echoserver $ ls
+Procfile echobot echoserver.py requirements.txt
+```
+
+我们现在准备好上传到 Heroku 了。我们需要做两件事。第一件是安装 Heroku toolbelt(如果你的系统中还没有安装的话,详见 [Heroku][16])。第二件是通过[网页界面][17]创建一个新的 Heroku 应用。
+
+点击右上角的大加号,然后选择 “Create new app”。
+
+
+
+
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/
+
+作者:[Konstantinos Tsaprailis][a]
+译者:[wyangsun](https://github.com/wyangsun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://github.com/kostistsaprailis
+[1]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#tech-stack
+[2]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#bot-architecture
+[3]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#the-bot-server
+[4]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#deploying-to-heroku
+[5]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#creating-the-facebook-app
+[6]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#conclusion
+[7]: https://www.heroku.com
+[8]: 
https://www.python.org +[9]: http://flask.pocoo.org +[10]: https://git-scm.com +[11]: https://virtualenv.pypa.io/en/stable +[12]: https://github.com/hult/facebook-chatbot-python +[13]: https://github.com/hult +[14]: https://devcenter.heroku.com/articles/python-gunicorn +[15]: https://pip.pypa.io/en/stable/installing +[16]: https://toolbelt.heroku.com +[17]: https://dashboard.heroku.com/apps + + From d31c18f5a45e58df54066e34310a58ad2efe2836 Mon Sep 17 00:00:00 2001 From: range Date: Thu, 28 Jul 2016 12:53:36 +0800 Subject: [PATCH 261/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0718 Creating your first Git repository.md | 181 ---------------- ...0718 Creating your first Git repository.md | 199 ++++++++++++++++++ 2 files changed, 199 insertions(+), 181 deletions(-) delete mode 100644 sources/tech/20160718 Creating your first Git repository.md create mode 100644 translated/tech/20160718 Creating your first Git repository.md diff --git a/sources/tech/20160718 Creating your first Git repository.md b/sources/tech/20160718 Creating your first Git repository.md deleted file mode 100644 index 4ec9b0aace..0000000000 --- a/sources/tech/20160718 Creating your first Git repository.md +++ /dev/null @@ -1,181 +0,0 @@ -vim-kakali translating - - - -Creating your first Git repository -====================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open_abstract_pieces.jpg?itok=ZRt0Db00) - -Now it is time to learn how to create your own Git repository, and how to add files and make commits. - -In the previous installments in this series, you learned how to interact with Git as an end user; you were the aimless wanderer who stumbled upon an open source project's website, cloned a repository, and moved on with your life. 
You learned that interacting with Git wasn't as confusing as you may have thought it would be, and maybe you've been convinced that it's time to start leveraging Git for your own work. - -While Git is definitely the tool of choice for major software projects, it doesn't only work with major software projects. It can manage your grocery lists (if they're that important to you, and they may be!), your configuration files, a journal or diary, a novel in progress, and even source code! - -And it is well worth doing; after all, when have you ever been angry that you have a backup copy of something that you've just mangled beyond recognition? - -Git can't work for you unless you use it, and there's no time like the present. Or, translated to Git, "There is no push like origin HEAD". You'll understand that later, I promise. - -### The audio recording analogy - -We tend to speak of computer imaging in terms of snapshots because most of us can identify with the idea of having a photo album filled with particular moments in time. It may be more useful, however, to think of Git more like an analogue audio recording. - -A traditional studio tape deck, in case you're unfamiliar, has a few components: it contains the reels that turn either forward or in reverse, tape to preserve sound waves, and a playhead to record or detect sound waves on tape and present them to the listener. - -In addition to playing a tape forward, you can rewind it to get back to a previous point in the tape, or fast-forward to skip ahead to a later point. - -Imagine a band in the 1970s recording to tape. You can imagine practising a song over and over until all the parts are perfect, and then laying down a track. First, you record the drums, and then the bass, and then the guitar, and then the vocals. 
Each time you record, the studio engineer rewinds the tape and puts it into loop mode so that it plays the previous part as you play yours; that is, if you're on bass, you get to hear the drums in the background as you play, and then the guitarist hears the drums and bass (and cowbell) and so on. On each loop, you play over the part, and then on the following loop, the engineer hits the record button and lays the performance down on tape. - -You can also copy and swap out a reel of tape entirely, should you decide to do a re-mix of something you're working on. - -Now that I've hopefully painted a vivid Roger Dean-quality image of studio life in the 70s, let's translate that into Git. - -### Create a Git repository - -The first step is to go out and buy some tape for our virtual tape deck. In Git terms, that's the repository ; it's the medium or domain where all the work is going to live. - -Any directory can become a Git repository, but to begin with let's start a fresh one. It takes three commands: - -- Create the directory (you can do that in your GUI file manager, if you prefer). -- Visit that directory in a terminal. -- Initialise it as a directory managed by Git. - -Specifically, run these commands: - -``` -$ mkdir ~/jupiter # make directory -$ cd ~/jupiter # change into the new directory -$ git init . # initialise your new Git repo -``` - -Is this example, the folder jupiter is now an empty but valid Git repository. - -That's all it takes. You can clone the repository, you can go backward and forward in history (once it has a history), create alternate timelines, and everything else Git can normally do. - -Working inside the Git repository is the same as working in any directory; create files, copy files into the directory, save files into it. You can do everything as normal; Git doesn't get involved until you involve it. 
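
If you're curious what that third command actually did, the answer lives in a hidden directory: everything Git records about your repository is kept under `.git` inside the project folder. A quick sanity check looks like this (a sketch; the exact wording of the `git status` report varies between Git versions):

```
$ cd ~/jupiter
$ ls -a        # the hidden .git directory holds all of Git's bookkeeping
$ git status   # even an empty repository reports its branch and a clean state
```

Remove that `.git` directory and the folder becomes an ordinary directory again, history and all.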
- -In a local Git repository, a file can have one of three states: - -- Untracked: a file you create in a repository, but not yet added to Git. -- Tracked: a file that has been added to Git. -- Staged: a tracked file that has been changed and added to Git's commit queue. - -Any file that you add to a Git repository starts life out as an untracked file. The file exists on your computer, but you have not told Git about it yet. In our tape deck analogy, the tape deck isn't even turned on yet; the band is just noodling around in the studio, nowhere near ready to record yet. - -That is perfectly acceptable, and Git will let you know when it happens: - -``` -$ echo "hello world" > foo -$ git status -On branch master -Untracked files: -(use "git add ..." to include in what will be committed) - foo -nothing added but untracked files present (use "git add" to track) -``` - -As you can see, Git also tells you how to start tracking files. - -### Git without Git - -Creating a repository in GitHub or GitLab is a lot more clicky and pointy. It isn't difficult; you click the New Repository button and follow the prompts. - -It is a good practice to include a README file so that people wandering by have some notion of what your repository is for, and it is a little more satisfying to clone a non-empty repository. - -Cloning the repository is no different than usual, but obtaining permission to write back into that repository on GitHub is slightly more complex, because in order to authenticate to GitHub you must have an SSH key. If you're on Linux, create one with this command: - -``` -$ ssh-keygen -``` - -Then copy your new key, which is plain text. You can open it in a plain text editor, or use the cat command: - -``` -$ cat ~/.ssh/id_rsa.pub -``` - -Now paste your key into [GitHub's SSH configuration][1], or your [GitLab configuration][2]. - -As long as you clone your GitHub project via SSH, you'll be able to write back to your repository. 
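
If you have already cloned your project over HTTPS, there's no need to throw the clone away; you can point it at the SSH address instead. This is a sketch with a hypothetical user and repository name — substitute your own:

```
$ git remote set-url origin git@github.com:example-user/jupiter.git
$ git remote -v   # both fetch and push should now list the SSH form
```

The SSH form of a GitHub address is always `git@github.com:` followed by the user and repository path.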
- -Alternately, you can use GitHub's file uploader interface to add files without even having Git on your system. - -![](https://opensource.com/sites/default/files/2_githubupload.jpg) - -### Tracking files - -As the output of git status tells you, if you want Git to start tracking a file, you must git add it. The git add action places a file in a special staging area, where files wait to be committed, or preserved for posterity in a snapshot. The point of a git add is to differentiate between files that you want to have included in a snapshot, and the new or temporary files you want Git to, at least for now, ignore. - -In our tape deck analogy, this action turns the tape deck on and arms it for recording. You can picture the tape deck with the record and pause button pushed, or in a playback loop awaiting the next track to be laid down. - -Once you add a file, Git will identify it as a tracked file: - -``` -$ git add foo -$ git status -On branch master -Changes to be committed: -(use "git reset HEAD ..." to unstage) -new file: foo -``` - -Adding a file to Git's tracking system is not making a recording. It just puts a file on the stage in preparation for recording. You can still change a file after you've added it; it's being tracked and remains staged, so you can continue to refine it or change it before committing it to tape (but be warned; you're NOT recording yet, so if you break something in a file that was perfect, there's no going back in time yet, because you never got that perfect moment on tape). - -If you decide that the file isn't really ready to be recorded in the annals of Git history, then you can unstage something, just as the Git message described: - -``` -$ git reset HEAD foo -``` - -This, in effect, disarms the tape deck from being ready to record, and you're back to just noodling around in the studio. 
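
Because staging records the file as it was at the moment you ran git add, you can watch the staged version and your working copy drift apart. Here is a small sketch, reusing the foo file from the earlier examples; `git diff --cached` shows what is staged, while plain `git diff` shows what you have changed since staging:

```
$ echo "hello world" > foo
$ git add foo                  # stage this version of foo
$ echo "goodbye world" >> foo  # keep editing after staging
$ git diff --cached foo        # the staged content: just the first line
$ git diff foo                 # the edit made after staging: the second line
```

Running git add foo again would move the newer version onto the stage.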
- -### The big commit - -At some point, you're going to want to commit something; in our tape deck analogy, that means finally pressing record and laying a track down on tape. - -At different stages of a project's life, how often you press that record button varies. For example, if you're hacking your way through a new Python toolkit and finally manage to get a window to appear, then you'll certainly want to commit so you have something to fall back on when you inevitably break it later as you try out new display options. But if you're working on a rough draft of some new graphics in Inkscape, you might wait until you have something you want to develop from before committing. Ultimately, though, it's up to you how often you commit; Git doesn't "cost" that much and hard drives these days are big, so in my view, the more the better. - -A commit records all staged files in a repository. Git only records files that are tracked, that is, any file that you did a git add on at some point in the past. and that have been modified since the previous commit. If no previous commit exists, then all tracked files are included in the commit because they went from not existing to existing, which is a pretty major modification from Git's point-of-view. - -To make a commit, run this command: - -``` -$ git commit -m 'My great project, first commit.' -``` - -This preserves all files committed for posterity (or, if you speak Gallifreyan, they become "fixed points in time"). You can see not only the commit event, but also the reference pointer back to that commit in your Git log: - -``` -$ git log --oneline -55df4c2 My great project, first commit. -``` - -For a more detailed report, just use git log without the --oneline option. - -The reference number for the commit in this example is 55df4c2. It's called a commit hash and it represents all of the new material you just recorded, overlaid onto previous recordings. 
If you need to "rewind" back to that point in history, you can use that hash as a reference. - -You can think of a commit hash as [SMPTE timecode][3] on an audio tape, or if we bend the analogy a little, one of those big gaps between songs on a vinyl record, or track numbers on a CD. - -As you change files further and add them to the stage, and ultimately commit them, you accrue new commit hashes, each of which serve as pointers to different versions of your production. - -And that's why they call Git a version control system, Charlie Brown. - -In the next article, we'll explore everything you need to know about the Git HEAD, and we'll nonchalantly reveal the secret of time travel. No big deal, but you'll want to read it (or maybe you already have?). - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/creating-your-first-git-repository - -作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]: https://github.com/settings/keys -[2]: https://gitlab.com/profile/keys -[3]: http://slackermedia.ml/handbook/doku.php?id=timecode diff --git a/translated/tech/20160718 Creating your first Git repository.md b/translated/tech/20160718 Creating your first Git repository.md new file mode 100644 index 0000000000..35382aba55 --- /dev/null +++ b/translated/tech/20160718 Creating your first Git repository.md @@ -0,0 +1,199 @@ + + +建立你的第一个仓库 +====================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open_abstract_pieces.jpg?itok=ZRt0Db00) + + +现在是时候学习怎样创建你自己的仓库了,还有怎样增加文件和完成提交。 + +在本系列前面文章的安装过程中,你已经学习了作为一个目标用户怎样与 Git 进行交互;你就像一个漫无目的的流浪者一样偶然发现了一个开源项目网站,然后克隆了仓库,Git 走进了你的生活。学习怎样和 Git 进行交互并不像你想的那样困难,或许你并不确信现在是否应该使用 Git 完成你的工作。 + + +Git 
被认为是选择大多软件项目的工具,它不仅能够完成大多软件项目的工作;它也能管理你杂乱项目的列表(如果他们不重要,也可以这样说!),你的配置文件,一个日记,项目进展日志,甚至源代码! + +使用 Git 是很有必要的,毕竟,你肯定有因为一个备份文件不能够辨认出版本信息而烦恼的时候。 + +你不使用 Git,它也就不会为你工作,或者也可以把 Git 理解为“没有任何推送就像源头指针一样”【译注: HEAD 可以理解为“头指针”,是当前工作区的“基础版本”,当执行提交时, HEAD 指向的提交将作为新提交的父提交。】。我保证,你很快就会对 Git 有所了解 。 + + +### 类比于录音 + + +我们更喜欢谈论快照上的图像,因为很多人都可以通过一个相册很快辨认出每个照片上特有的信息。这可能很有用,然而,我认为 Git 更像是在进行声音的记录。 + +传统的录音机,可能你对于它的部件不是很清楚:它包含转轴并且正转或反转,使用磁带保存声音波形,通过放音头记录声音并保存到磁带上然后播放给收听者。 + + +除了往前退磁带,你也可以把磁带多绕几圈到磁带前面的部分,或快进跳过前面的部分到最后。 + +想象一下 70 年代的磁带录制的声音。你可能想到那会正在反复练习一首歌直到非常完美,它们最终被记录下来了。起初,你记录了鼓声,低音,然后是吉他声,还有其他的声音。每次你录音,工作人员都会把磁带重绕并设置为环绕模式,这样在你演唱的时候录音磁带就会播放之前录制的声音。如果你是低音歌唱,你唱歌的时候就需要把有鼓声的部分作为背景音乐,然后就是吉他声、鼓声、低音(和牛铃声【译注:一种打击乐器,状如四棱锥。】)等等。在每一环,你完成了整个部分,到了下一环,工作人员就开始在磁带上制作你的演唱作品。 + + +你也可以拷贝或换出整个磁带,这是你需要继续录音并且进行多次混合的时候需要做的。 + + +现在我希望对于上述 70 年代的录音工作的描述足够生动,我们就可以把 Git 的工作想象成一个录音磁带了。 + + +### 新建一个 Git 仓库 + + +首先得为我们的虚拟的录音机买一些磁带。在 Git 术语中,这就是仓库;它是完成所有工作的基础,也就是说这里是存放 Git 文件的地方(即 Git 工作区)。 + +任何目录都可以是一个 Git 仓库,但是在开始的时候需要进行一次更新。需要下面三个命令: + + +- 创建目录(如果你喜欢的话,你可以在你的 GUI 文件管理器里面完成。) +- 在终端里查看目录。 +- 初始化这个目录使它可以被 Git管理。 + + +特别是运行如下代码: + +``` +$ mkdir ~/jupiter # 创建目录 +$ cd ~/jupiter # 进入目录 +$ git init . 
# 初始化你的新 Git 工作区 +``` + + +在这个例子中,文件夹 jupiter 是空的但却成为了你的 Git 仓库。 + +有了仓库接下来的事件就按部就班了。你可以克隆项目仓库,你可以在一个历史点前后来回穿梭(前提是你有一个历史点),创建可交替时间线,然后剩下的工作 Git 就都能正常完成了。 + + +在 Git 仓库里面工作和在任何目录里面工作都是一样的;在仓库中新建文件,复制文件,保存文件。你可以像平常一样完成工作;Git 并不复杂,除非你把它想复杂了。 + +在本地的 Git 仓库中,一个文件可以有下面这三种状态: +- 未跟踪文件:你在仓库里新建了一个文件,但是你没有把文件加入到 Git 的提交任务(提交暂存区,stage)中。 +- 已跟踪文件:已经加入到 Git 暂存区的文件。 +- 暂存区文件:存在于暂存区的文件已经加入到 Git 的提交队列中。 + + +任何你新加入到 Git 仓库中的文件都是未跟踪文件。文件还保存在你的电脑硬盘上,但是你没有告诉 Git 这是需要提交的文件,就像我们的录音机,如果你没有打开录音机;乐队开始演唱了,但是录音机并没有准备录音。 + +不用担心,Git 会告诉你存在的问题并提示你怎么解决: +``` +$ echo "hello world" > foo +$ git status +位于您当前工作的分支 master 上 +未跟踪文件: +(使用 "git add " 更新要提交的内容) + foo +没有任何提交任务,但是存在未跟踪文件(用 "git add" 命令加入到提交任务) +``` + + +你看到了,Git 会提醒你怎样把文件加入到提交任务中。 + +### 不使用 Git 命令进行 Git 操作 + + +在 GitHub 或 GitLab(译注:GitLab 是一个用于仓库管理系统的开源项目。使用Git作为代码管理工具,并在此基础上搭建起来的web服务。)上创建一个仓库大多是使用鼠标点击完成的。这不会很难,你单击 New Repository 这个按钮就会很快创建一个仓库。 + +在仓库中新建一个 README 文件是一个好习惯,这样人们在浏览你的仓库的时候就可以知道你的仓库基于什么项目,更有用的是通过 README 文件可以确定克隆的是否为一个非空仓库。 + + +克隆仓库通常很简单,但是在 GitHub 上获取仓库改动权限就不简单了,为了进行用户验证你必须有一个 SSH 秘钥。如果你使用 Linux 系统,通过下面的命令可以生成一个秘钥: +``` +$ ssh-keygen +``` + + +复制纯文本文件里的秘钥。你可以使用一个文本编辑器打开它,也可以使用 cat 命令: + +``` +$ cat ~/.ssh/id_rsa.pub +``` + + +现在把你的秘钥拷贝到 [GitHub SSH 配置文件][1] 中,或者 [GitLab 配置文件[2]。 + +如果你通过使用 SSH 模式克隆了你的项目,就可以在你的仓库开始工作了。 + +另外,如果你的系统上没有安装 Git 的话也可以使用 GitHub 的文件上传接口来克隆仓库。 + +![](https://opensource.com/sites/default/files/2_githubupload.jpg) + + +### 跟踪文件 + +命令 git status 的输出会告诉你如果你想让 git 跟踪一个文件,你必须使用命令 git add 把它加入到提交任务中。这个命令把文件存在了暂存区,暂存区存放的都是等待提交的文件,或者把仓库保存为一个快照。git add 命令的最主要目的是为了区分你已经保存在仓库快照里的文件,还有新建的或你想提交的临时文件,至少现在,你都不用为它们之间的不同之处而费神了。 + +类比大型录音机,这个动作就像打开录音机开始准备录音一样。你可以按已经录音的录音机上的 pause 按钮来完成推送,或者拉下重置环等待开始跟踪下一个文件。 + +如果你把文件加入到提交任务中,Git 会自动标识为跟踪文件: + +``` +$ git add foo +$ git status +位于您当前工作的分支 master 上 +下列修改将被提交: +(使用 "git reset HEAD ..." 
将下列改动撤出提交任务) +新增文件:foo +``` + + +加入文件到提交任务中并不会生成一个记录。这仅仅是为了之后方便记录而把文件存放到暂存区。在你把文件加入到提交任务后仍然可以修改文件;文件会被标记为跟踪文件并且存放到暂存区,所以你在最终提交之前都可以改动文件或撤出提交任务(但是请注意:你并没有记录文件,所以如果你完全改变了文件就没有办法撤销了,因为你没有记住最终修改的准确时间。)。 + +如果你决定不把文件记录到 Git 历史列表中,那么你可以撤出提交任务,在 Git 中是这样做的: +``` +$ git reset HEAD foo +``` + + +这实际上就是删除了录音机里面的录音,你只是在工作区转了一圈而已而已。 + +### 大型提交 + +有时候,你会需要完成很多提交;我们以录音机类比,这就好比按下录音键并最终按下保存键一样。 + +在一个项目从建立到完成,你会按记录键无数次。比如,如果你通过你的方式使用一个新的 Python 工具包并且最终实现了窗口展示,然后你就很肯定的提交了文件,但是不可避免的你最后会发生一些错误,现在你却不能撤销你的提交操作了。 + +一次提交会记录仓库中所有的暂存区文件。Git 只记录加入到提交任务中的文件,也就是说在过去某个时刻你使用 git add 命令加入到暂存区的所有文件。还有从先前的提交开始被改动的文件。如果没有其他的提交,所有的跟踪文件都包含在这次提交中,因为在浏览 Git 历史点的时候,它们没有存在于仓库中。 + +完成一次提交需要运行下面的命令: +``` +$ git commit -m 'My great project, first commit.' +``` + +这就保存了所有需要在仓库中提交的文件(或者,如果你说到 Gallifreyan【译注:英国电视剧《神秘博士》里的时间领主使用的一种优雅的语言,】,它们可能就是“固定的时间点” )。你不仅能看到整个提交记录,还能通过 git log 命令查看修改日志找到提交时的版本号: +``` +$ git log --oneline +55df4c2 My great project, first commit. +``` + + +如果想浏览更多信息,只需要使用不带 --oneline 选项的 git log 命令。 + + +在这个例子中提交时的版本号是 55df4c2。它被叫做 commit hash(译注:一个SHA-1生成的哈希码,用于表示一个git commit对象。),它表示着刚才你的提交包含的所有改动,覆盖了先前的记录。如果你想要“倒回”到你的提交历史点上就可以用这个 commit hash 作为依据。 + +你可以把 commit hash 想象成一个声音磁带上的 [SMPTE timecode][3],或者再夸张一点,这就是好比一个黑胶唱片两首不同的歌之间的不同点,或是一个 CD 上的轨段编号。 + + +你在很久前改动了文件并且把它们加入到提交任务中,最终完成提交,这就会生成新的 commit hashes,每个 commit hashes 标示的历史点都代表着你的产品不同的版本。 + + +这就是 Charlie Brown 把 Git 称为版本控制系统的原因。 + +在接下来的文章中,我们将会讨论你需要知道的关于 Git HEAD 的一切,我们不准备讨论关于 Git 的提交历史问题。基本不会提及,但是你可能会需要了解它(或许你已经有所了解?)。 + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/creating-your-first-git-repository + +作者:[Seth Kenlon][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://github.com/settings/keys +[2]: https://gitlab.com/profile/keys +[3]: 
http://slackermedia.ml/handbook/doku.php?id=timecode From e05b6e265869065180e827fe2b02da50377036ed Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Thu, 28 Jul 2016 15:55:58 +0800 Subject: [PATCH 262/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0160630 What makes up the Fedora kernel.md | 35 ------------------- ...0160630 What makes up the Fedora kernel.md | 31 ++++++++++++++++ 2 files changed, 31 insertions(+), 35 deletions(-) delete mode 100644 sources/tech/20160630 What makes up the Fedora kernel.md create mode 100644 translated/tech/20160630 What makes up the Fedora kernel.md diff --git a/sources/tech/20160630 What makes up the Fedora kernel.md b/sources/tech/20160630 What makes up the Fedora kernel.md deleted file mode 100644 index 603aefa575..0000000000 --- a/sources/tech/20160630 What makes up the Fedora kernel.md +++ /dev/null @@ -1,35 +0,0 @@ -Being translated by ChrisLeeGit - -What makes up the Fedora kernel? -==================================== - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/kernel-945x400.png) - -Every Fedora system runs a kernel. Many pieces of code come together to make this a reality. - -Each release of the Fedora kernel starts with a baseline release from the [upstream community][1]. This is often called a ‘vanilla’ kernel. The upstream kernel is the standard. The goal is to have as much code upstream as possible. This makes it easier for bug fixes and API updates to happen as well as having more people review the code. In an ideal world, Fedora would be able to to take the kernel straight from kernel.org and send that out to all users. - -Realistically, using the vanilla kernel isn’t complete enough for Fedora. Some features Fedora users want may not be available. The [Fedora kernel][2] that users actually receive contains a number of patches on top of the vanilla kernel. 
These patches are considered ‘out of tree’. Many of these patches will not exist out of tree patches very long. If patches are available to fix an issue, the patches may be pulled in to the Fedora tree so the fix can go out to users faster. When the kernel is rebased to a new version, the patches will be removed if they are in the new version. - -Some patches remain in the Fedora kernel tree for an extended period of time. A good example of patches that fall into this category are the secure boot patches. These patches provide a feature Fedora wants to support even though the upstream community has not yet accepted them. It takes effort to keep these patches up to date so Fedora tries to minimize the number of patches that are carried without being accepted by an upstream kernel maintainer. - -Generally, the best way to get a patch included in the Fedora kernel is to send it to the ]Linux Kernel Mailing List (LKML)][3] first and then ask for it to be included in Fedora. If a patch has been accepted by a maintainer it stands a very high chance of being included in the Fedora kernel tree. Patches that come from places like github which have not been submitted to LKML are unlikely to be taken into the tree. It’s important to send the patches to LKML first to ensure Fedora is carrying the correct patches in its tree. Without the community review, Fedora could end up carrying patches which are buggy and cause problems. - -The Fedora kernel contains code from many places. All of it is necessary to give the best experience possible. 
- - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/makes-fedora-kernel/ - -作者:[Laura Abbott][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/makes-fedora-kernel/ -[1]: http://www.kernel.org/ -[2]: http://pkgs.fedoraproject.org/cgit/rpms/kernel.git/ -[3]: http://www.labbott.name/blog/2015/10/02/the-art-of-communicating-with-lkml/ diff --git a/translated/tech/20160630 What makes up the Fedora kernel.md b/translated/tech/20160630 What makes up the Fedora kernel.md new file mode 100644 index 0000000000..954c4cb440 --- /dev/null +++ b/translated/tech/20160630 What makes up the Fedora kernel.md @@ -0,0 +1,31 @@ +Fedora 内核是由什么构成的? +==================================== + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/kernel-945x400.png) + +每个 Fedora 系统都运行着一个内核。许多代码段组合在一起使之成为现实。 + +每个 Fedora 内核都起始于一个来自于 [上游社区][1] 的基线版本,通常称之为 vanilla 内核。上游内核就是标准。(Fedora 的)目标是包含尽可能多的上游代码,这样使得 bug 修复和 API 更新更加容易,同时也会有更多的人审查代码。理想情况下,Fedora 能够直接从 kernel.org 获得内核,然后发送给所有用户。 + +现实情况是,使用 vanilla 内核并不能完全满足 Fedora。Vanilla 内核可能并不支持一些 Fedora 用户希望拥有的功能。用户接收的 [Fedora 内核] 是在 vanilla 内核之上打了很多补丁的内核。这些补丁被认为“不在树上”。许多这些位于补丁树之外的补丁都不会存在太久。如果某补丁能够修复一个问题,那么该补丁可能会被合并到 Fedora 树,以便用户能够更快地收到修复。当内核变基到一个新版本时,在新版本中的补丁都将被清除。 + +一些补丁会在 Fedora 内核树上存在很长时间。一个很好的例子是,安全启动补丁就是这类补丁。这些补丁提供了 Fedora 希望支持的功能,即使上游社区还没有接受它们。保持这些补丁更新是需要付出很多努力的,所以 Fedora 尝试减少不被上游内核维护者接受的补丁数量。 + +通常来说,想要在 Fedora 内核中获得一个补丁的最佳方法是先给 [Linux 内核邮件列表(LKML)][3] 发送补丁,然后请求将该补丁包含到 Fedora 中。如果某个维护者接受了补丁,就意味着 Fedora 内核树中将来很有可能会包含该补丁。一些来自于 GitHub 等地方的还没有提交给 LKML 的补丁是不可能进入内核树的。首先向 LKML 发送补丁是非常重要的,它能确保 Fedora 内核树中携带的补丁是功能正常的。如果没有社区审查,Fedora 最终携带的补丁将会充满 bug 并会导致问题。 + +Fedora 内核中包含的代码来自许多地方。一切都需要提供最佳的体验。 + +-------------------------------------------------------------------------------- + +via: 
https://fedoramagazine.org/makes-fedora-kernel/ + +作者:[Laura Abbott][a] +译者:[ChrisLeeGit](https://github.com/chrisleegit) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/makes-fedora-kernel/ +[1]: http://www.kernel.org/ +[2]: http://pkgs.fedoraproject.org/cgit/rpms/kernel.git/ +[3]: http://www.labbott.name/blog/2015/10/02/the-art-of-communicating-with-lkml/ From 018abadd1c026f23d7607880765102b4546b840f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 28 Jul 2016 16:55:15 +0800 Subject: [PATCH 263/471] PUB:20160621 Container technologies in Fedora - systemd-nspawn MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ChrisLeeGit 翻译的很好! --- ...technologies in Fedora - systemd-nspawn.md | 38 +++++++++++-------- 1 file changed, 22 insertions(+), 16 deletions(-) rename {translated/tech => published}/20160621 Container technologies in Fedora - systemd-nspawn.md (61%) diff --git a/translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md b/published/20160621 Container technologies in Fedora - systemd-nspawn.md similarity index 61% rename from translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md rename to published/20160621 Container technologies in Fedora - systemd-nspawn.md index f47d17556a..69cbf688d8 100644 --- a/translated/tech/20160621 Container technologies in Fedora - systemd-nspawn.md +++ b/published/20160621 Container technologies in Fedora - systemd-nspawn.md @@ -4,19 +4,20 @@ Fedora 中的容器技术:systemd-nspawn 欢迎来到“Fedora 中的容器技术”系列!本文是该系列文章中的第一篇,它将说明你可以怎样使用 Fedora 中各种可用的容器技术。本文将学习 `systemd-nspawn` 的相关知识。 ### 容器是什么? 
-一个容器就是一个用户空间实例,它能够在与托管容器的系统(叫做宿主系统)隔离的环境中运行一个程序或者一个操作系统。这和 `chroot` 或 [虚拟机][1] 的思想非常类似。 -运行在容器中的进程是由与宿主操作系统相同的内核来管理的,但它们是与宿主文件系统以及其它进程隔离开的。 +一个容器就是一个用户空间实例,它能够在与托管容器的系统(叫做宿主系统)相隔离的环境中运行一个程序或者一个操作系统。这和 `chroot` 或 [虚拟机][1] 的思想非常类似。运行在容器中的进程是由与宿主操作系统相同的内核来管理的,但它们是与宿主文件系统以及其它进程隔离开的。 ### 什么是 systemd-nspawn? -systemd 项目认为应当将容器技术变成桌面的基础部分,并且应当和剩余的用户系统集成在一起。为此,systemd 提供了 `systemd-nspawn`,这款工具能够使用多种 Linux 技术创建容器。它也提供了一些容器管理工具。 -`systemd-nspawn` 和 `chroot` 在许多方面都是类似的,但是前者更加强大。它虚拟化了文件系统、进程树以及客户系统中的进程间通信。它的引力在于它提供了很多用于管理容器的工具,例如 `machinectl`。由 `systemd-nspawn` 运行的容器将会与 systemd 组件一同运行在宿主系统上。举例来说,一个容器的日志可以输出到宿主系统的日志中。 +systemd 项目认为应当将容器技术变成桌面的基础部分,并且应当和用户的其余系统集成在一起。为此,systemd 提供了 `systemd-nspawn`,这款工具能够使用多种 Linux 技术创建容器。它也提供了一些容器管理工具。 -在 Fedora 24 上,`systemd-nspawn` 已经和 systemd 软件包分开了,所以你需要安装 `systemd-container` 软件包。一如往常,你可以使用 `dnf install systemd-container` 进行安装。 +`systemd-nspawn` 和 `chroot` 在许多方面都是类似的,但是前者更加强大。它虚拟化了文件系统、进程树以及客户系统中的进程间通信。它的吸引力在于它提供了很多用于管理容器的工具,例如用来管理容器的 `machinectl`。由 `systemd-nspawn` 运行的容器将会与 systemd 组件一同运行在宿主系统上。举例来说,一个容器的日志可以输出到宿主系统的日志中。 + +在 Fedora 24 上,`systemd-nspawn` 已经从 systemd 软件包分离出来了,所以你需要安装 `systemd-container` 软件包。一如往常,你可以使用 `dnf install systemd-container` 进行安装。 ### 创建容器 -使用 `systemd-nspawn` 创建一个容器是很容易的。假设你有一个专门为 Debian 创造的应用,并且无法在其它地方正常运行。那并不是一个问题,我们可以创造一个容器!为了设置容器使用最新版本的 Debian(此时是 Jessie),你需要挑选一个目录来放置你的系统。我暂时将使用目录 `~/DebianJessie`。 + +使用 `systemd-nspawn` 创建一个容器是很容易的。假设你有一个专门为 Debian 创造的应用,并且无法在其它发行版中正常运行。那并不是一个问题,我们可以创造一个容器!为了设置容器使用最新版本的 Debian(现在是 Jessie),你需要挑选一个目录来放置你的系统。我暂时将使用目录 `~/DebianJessie`。 一旦你创建完目录,你需要运行 `debootstrap`,你可以从 Fedora 仓库中安装它。对于 Debian Jessie,你运行下面的命令来初始化一个 Debian 文件系统。 @@ -32,9 +33,9 @@ $ debootstrap --arch=amd64 stable ~/DebianJessie $ systemd-nspawn -bD ~/DebianJessie ``` -容器将会在数秒后准备好并运行,当你一尝试登录就会注意到:你无法在你的系统上使用任何账户。这是因为 `systemd-nspawn` 虚拟化了用户。修复的方法很简单:将之前的命令中的 `-b` 移除即可。你将直接进入容器的 root shell。此时,你只能使用 `passwd` 命令为 root 设置密码,或者使用 `adduser` 命令添加一个新用户。一旦设置好密码或添加好用户,你就可以把 `-b` 标志添加回去然后继续了。你会进入到熟悉的登录控制台,然后你使用设置好的认证信息登录进去。 
+容器将会在数秒后准备好并运行,当你试图登录时就会注意到:你无法使用你的系统上任何账户。这是因为 `systemd-nspawn` 虚拟化了用户。修复的方法很简单:将之前的命令中的 `-b` 移除即可。你将直接进入容器的 root 用户的 shell。此时,你只能使用 `passwd` 命令为 root 设置密码,或者使用 `adduser` 命令添加一个新用户。一旦设置好密码或添加好用户,你就可以把 `-b` 标志添加回去然后继续了。你会进入到熟悉的登录控制台,然后你使用设置好的认证信息登录进去。 -以上对于任意你想在容器中运行的发行版都适用,但前提是你需要使用正确的包管理器创建系统。对于 Fedora,你应使用 DNF 而非 `debootstrap`。想要设置一个最小化的 Fedora 系统,你可以运行下面的命令,要将绝对路径替换成任何你希望容器存放的位置。 +以上对于任意你想在容器中运行的发行版都适用,但前提是你需要使用正确的包管理器创建系统。对于 Fedora,你应使用 DNF 而非 `debootstrap`。想要设置一个最小化的 Fedora 系统,你可以运行下面的命令,要将“/absolute/path/”替换成任何你希望容器存放的位置。 ``` $ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release @@ -43,8 +44,8 @@ $ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-04-14.png) ### 设置网络 -如果你尝试启动一个服务,但它绑定了你宿主机正在使用的端口,你将会注意到这个问题:你的容器正在使用和宿主机相同的网络接口。 -幸运的是,`systemd-nspawn` 提供了几种方法可以将网络从宿主机分开。 + +如果你尝试启动一个服务,但它绑定了你宿主机正在使用的端口,你将会注意到这个问题:你的容器正在使用和宿主机相同的网络接口。幸运的是,`systemd-nspawn` 提供了几种可以将网络从宿主机分开的方法。 #### 本地网络 @@ -52,33 +53,38 @@ $ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd #### 多个网络接口 -如果你有多个网络接口设备,你可以使用 `--network-interface` 标志给容器分配一个接口。想要给我的容器分配 `eno1`,我会添加标志 `--network-interface=eno1`。当某个接口分配给一个容器后,宿主机就不能同时使用那个接口了。只有当容器彻底关闭后,宿主机才可以使用那个接口。 - +如果你有多个网络接口设备,你可以使用 `--network-interface` 标志给容器分配一个接口。想要给我的容器分配 `eno1`,我会添加选项 `--network-interface=eno1`。当某个接口分配给一个容器后,宿主机就不能同时使用那个接口了。只有当容器彻底关闭后,宿主机才可以使用那个接口。 #### 共享网络接口 -对于我们中那些并没有额外的网络设备的人来说,还有其它方法可以访问容器。一种就是使用 `--port` 标志。这会将容器中的一个端口定向到宿主机。使用格式是 `协议:宿主机:容器`,这里的协议可以是 `tcp` 或者 `udp`,`宿主机` 是宿主机的一个合法端口,`容器` 则是容器中的一个合法端口。你可以省略协议,只指定 `宿主机:容器`。我通常的用法类似 `--port=2222:22`。 + +对于我们中那些并没有额外的网络设备的人来说,还有其它方法可以访问容器。一种就是使用 `--port` 选项。这会将容器中的一个端口定向到宿主机。使用格式是 `协议:宿主机端口:容器端口`,这里的协议可以是 `tcp` 或者 `udp`,`宿主机端口` 是宿主机的一个合法端口,`容器端口` 则是容器中的一个合法端口。你可以省略协议,只指定 `宿主机端口:容器端口`。我通常的用法类似 `--port=2222:22`。 你可以使用 `--network-veth` 启用完全的、仅宿主机模式的网络,这会在宿主机和容器之间创建一个虚拟的网络接口。你也可以使用 
`--network-bridge` 桥接二者的连接。 ### 使用 systemd 组件 + 如果你容器中的系统含有 D-Bus,你可以使用 systemd 提供的实用工具来控制并监视你的容器。基础安装的 Debian 并不包含 `dbus`。如果你想在 Debian Jessie 中使用 `dbus`,你需要运行命令 `apt install dbus`。 #### machinectl + 为了能够轻松地管理容器,systemd 提供了 `machinectl` 实用工具。使用 `machinectl`,你可以使用 `machinectl login name` 登录到一个容器中、使用 `machinectl status name`检查状态、使用 `machinectl reboot name` 启动容器或者使用 `machinectl poweroff name` 关闭容器。 ### 其它 systemd 命令 -多数 systemd 命令,例如 `journalctl`, `systemd-analyze` 和 `systemctl`,都支持使用了 `--machine` 选项的容器。例如,如果你想查看一个名为 "foobar" 的容器日志,你可以使用 `journalctl --machine=foobar`。你也可以使用 `systemctl --machine=foobar status service` 来查看运行在这个容器中的服务状态。 + +多数 systemd 命令,例如 `journalctl`, `systemd-analyze` 和 `systemctl`,都支持使用 `--machine` 选项来指定容器。例如,如果你想查看一个名为 “foobar” 的容器的日志,你可以使用 `journalctl --machine=foobar`。你也可以使用 `systemctl --machine=foobar status service` 来查看运行在这个容器中的服务状态。 ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-09-25.png) ### 和 SELinux 一起工作 + 如果你要使用 SELinux 强制模式(Fedora 默认模式),你需要为你的容器设置 SELinux 环境。想要那样的话,你需要在宿主系统上运行下面两行命令。 ``` $ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?" $ restorecon -R /path/to/container/ ``` -确保使用你的容器路径替换 "/path/to/container"。对于我的容器 "DebianJessie",我会运行下面的命令: + +确保使用你的容器路径替换 “/path/to/container”。对于我的容器 "DebianJessie",我会运行下面的命令: ``` $ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?" @@ -91,7 +97,7 @@ via: https://fedoramagazine.org/container-technologies-fedora-systemd-nspawn/ 作者:[John M. 
Harris, Jr.][a] 译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6bd966ef31cf12b07239b3dcc946d3a671c2e350 Mon Sep 17 00:00:00 2001 From: kakali <1799225723@qq.com> Date: Thu, 28 Jul 2016 17:49:55 +0800 Subject: [PATCH 264/471] vim-kakali translated (#4236) * vim-kakali translated * vim-kakali translating --- ...riables, Numeric Expressions and Assignment Operators.md | 2 +- .../tech/20160718 Creating your first Git repository.md | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md index d7cfd4c064..dd2494529d 100644 --- a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md +++ b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md @@ -1,4 +1,4 @@ -vim-kakali translating +translating by vim-kakali Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators – part8 diff --git a/translated/tech/20160718 Creating your first Git repository.md b/translated/tech/20160718 Creating your first Git repository.md index 35382aba55..722fe2ea56 100644 --- a/translated/tech/20160718 Creating your first Git repository.md +++ b/translated/tech/20160718 Creating your first Git repository.md @@ -122,7 +122,7 @@ $ cat ~/.ssh/id_rsa.pub 命令 git status 的输出会告诉你如果你想让 git 跟踪一个文件,你必须使用命令 git add 把它加入到提交任务中。这个命令把文件存在了暂存区,暂存区存放的都是等待提交的文件,或者把仓库保存为一个快照。git add 命令的最主要目的是为了区分你已经保存在仓库快照里的文件,还有新建的或你想提交的临时文件,至少现在,你都不用为它们之间的不同之处而费神了。 -类比大型录音机,这个动作就像打开录音机开始准备录音一样。你可以按已经录音的录音机上的 pause 按钮来完成推送,或者拉下重置环等待开始跟踪下一个文件。 
+类比大型录音机,这个动作就像打开录音机开始准备录音一样。你可以按已经录音的录音机上的 pause 按钮来完成推送,或者按下重置按钮等待开始跟踪下一个文件。 如果你把文件加入到提交任务中,Git 会自动标识为跟踪文件: @@ -150,7 +150,7 @@ $ git reset HEAD foo 有时候,你会需要完成很多提交;我们以录音机类比,这就好比按下录音键并最终按下保存键一样。 -在一个项目从建立到完成,你会按记录键无数次。比如,如果你通过你的方式使用一个新的 Python 工具包并且最终实现了窗口展示,然后你就很肯定的提交了文件,但是不可避免的你最后会发生一些错误,现在你却不能撤销你的提交操作了。 +在一个项目从建立到完成,你会按记录键无数次。比如,如果你通过你的方式使用一个新的 Python 工具包并且最终实现了窗口展示,然后你就很肯定的提交了文件,但是不可避免的会发生一些错误,现在你却不能撤销你的提交操作了。 一次提交会记录仓库中所有的暂存区文件。Git 只记录加入到提交任务中的文件,也就是说在过去某个时刻你使用 git add 命令加入到暂存区的所有文件。还有从先前的提交开始被改动的文件。如果没有其他的提交,所有的跟踪文件都包含在这次提交中,因为在浏览 Git 历史点的时候,它们没有存在于仓库中。 @@ -171,7 +171,7 @@ $ git log --oneline 在这个例子中提交时的版本号是 55df4c2。它被叫做 commit hash(译注:一个SHA-1生成的哈希码,用于表示一个git commit对象。),它表示着刚才你的提交包含的所有改动,覆盖了先前的记录。如果你想要“倒回”到你的提交历史点上就可以用这个 commit hash 作为依据。 -你可以把 commit hash 想象成一个声音磁带上的 [SMPTE timecode][3],或者再夸张一点,这就是好比一个黑胶唱片两首不同的歌之间的不同点,或是一个 CD 上的轨段编号。 +你可以把 commit hash 想象成一个声音磁带上的 [SMPTE timecode][3],或者再夸张一点,这就是好比一个黑胶唱片上两首不同的歌之间的不同点,或是一个 CD 上的轨段编号。 你在很久前改动了文件并且把它们加入到提交任务中,最终完成提交,这就会生成新的 commit hashes,每个 commit hashes 标示的历史点都代表着你的产品不同的版本。 From 6c6fff61eff8c6e703f58d33164f9973ae6815fe Mon Sep 17 00:00:00 2001 From: Christopher L Date: Thu, 28 Jul 2016 21:57:04 +0800 Subject: [PATCH 265/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...figure and Troubleshoot Grand Unified Bootloader (GRUB).md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md b/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md index cf24c51b58..2d1cbef467 100644 --- a/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md +++ b/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md @@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + Part 13 - LFCS: How to Configure and 
Troubleshoot Grand Unified Bootloader (GRUB) ===================================================================================== @@ -167,7 +169,7 @@ Do you have questions or comments? Don’t hesitate to let us know using the com via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ 作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 89de15b01650ef16a9110a53ffa168f8a7c1379f Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Fri, 29 Jul 2016 00:42:03 +0800 Subject: [PATCH 266/471] OPEN SOURCE ACCOUNTING SOFTWARE --- sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md index 4232e6cda8..edc260a1bc 100644 --- a/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md +++ b/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md @@ -1,3 +1,5 @@ +MikeCoder Translating... 
+ GNU KHATA: OPEN SOURCE ACCOUNTING SOFTWARE ============================================ From 193ab20ecbba980fd60b06b3a1a0a1ae5279e184 Mon Sep 17 00:00:00 2001 From: gitfuture Date: Fri, 29 Jul 2016 11:21:35 +0800 Subject: [PATCH 267/471] Finished Translating --- ...o Encrypt a Flash Drive Using VeraCrypt.md | 102 ------------------ ...o Encrypt a Flash Drive Using VeraCrypt.md | 100 +++++++++++++++++ 2 files changed, 100 insertions(+), 102 deletions(-) delete mode 100644 sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md create mode 100644 translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md diff --git a/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md deleted file mode 100644 index 27a0b950bd..0000000000 --- a/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md +++ /dev/null @@ -1,102 +0,0 @@ -Translating by GitFuture [07.26] - -How to Encrypt a Flash Drive Using VeraCrypt -============================================ - -Many security experts prefer open source software like VeraCrypt, which can be used to encrypt flash drives, because of its readily available source code. - -Encryption is a smart idea for protecting data on a USB flash drive, as we covered in our piece that described ]how to encrypt a flash drive][1] using Microsoft BitLocker. - -But what if you do not want to use BitLocker? - -You may be concerned that because Microsoft's source code is not available for inspection, it could be susceptible to security "backdoors" used by the government or others. Because source code for open source software is widely shared, many security experts feel open source software is far less likely to have any backdoors. - -Fortunately, there are several open source encryption alternatives to BitLocker. 
- -If you need to be able to encrypt and access files on any Windows machine, as well as computers running Apple OS X or Linux, the open source [VeraCrypt][2] offers an excellent alternative. - -VeraCrypt is derived from TrueCrypt, a well-regarded open source encryption software product that has now been discontinued. But the code for TrueCrypt was audited and no major security flaws were found. In addition, it has since been improved in VeraCrypt. - -Versions exist for Windows, OS X and Linux. - -Encrypting a USB flash drive with VeraCrypt is not as straightforward as it is with BitLocker, but it still only takes a few minutes. - -### Encrypting Flash Drive with VeraCrypt in 8 Steps - -After [downloading VeraCrypt][3] for your operating system: - -Start VeraCrypt, and click on Create Volume to start the VeraCrypt Volume Creation Wizard. - -![](http://www.esecurityplanet.com/imagesvr_ce/6246/Vera0.jpg) - -The VeraCrypt Volume Creation Wizard allows you to create an encrypted file container on the flash drive which sits along with other unencrypted files, or you can choose to encrypt the entire flash drive. For the moment, we will choose to encrypt the entire flash drive. - -![](http://www.esecurityplanet.com/imagesvr_ce/6703/Vera1.jpg) - -On the next screen, choose Standard VeraCrypt Volume. - -![](http://www.esecurityplanet.com/imagesvr_ce/835/Vera2.jpg) - -Select the drive letter of the flash drive you want to encrypt (in this case O:). - -![](http://www.esecurityplanet.com/imagesvr_ce/9427/Vera3.jpg) - -Choose the Volume Creation Mode. If your flash drive is empty or you want to delete everything it contains, choose the first option. If you want to keep any existing files, choose the second option. - -![](http://www.esecurityplanet.com/imagesvr_ce/7828/Vera4.jpg) - -This screen allows you to choose your encryption options. If you are unsure of which to choose, leave the default settings of AES and SHA-512. 
- -![](http://www.esecurityplanet.com/imagesvr_ce/5918/Vera5.jpg) - -After confirming the Volume Size screen, enter and re-enter the password you want to use to encrypt your data. - -![](http://www.esecurityplanet.com/imagesvr_ce/3850/Vera6.jpg) - -To work effectively, VeraCrypt must draw from a pool of entropy or "randomness." To generate this pool, you'll be asked to move your mouse around in a random fashion for about a minute. Once the bar has turned green, or preferably when it reaches the far right of the screen, click Format to finish creating your encrypted drive. - -![](http://www.esecurityplanet.com/imagesvr_ce/7468/Vera8.jpg) - -### Using a Flash Drive Encrypted with VeraCrypt - -When you want to use an encrypted flash drive, first insert the drive in the computer and start VeraCrypt. - -Then select an unused drive letter (such as z:) and click Auto-Mount Devices. - -![](http://www.esecurityplanet.com/imagesvr_ce/2016/Vera10.jpg) - -Enter your password and click OK. - -![](http://www.esecurityplanet.com/imagesvr_ce/8222/Vera11.jpg) - -The mounting process may take a few minutes, after which your unencrypted drive will become available with the drive letter you selected previously. - -### VeraCrypt Traveler Disk Setup - -If you set up a flash drive with an encrypted container rather than encrypting the whole drive, you also have the option to create what VeraCrypt calls a traveler disk. This installs a copy of VeraCrypt on the USB flash drive itself, so when you insert the drive in another Windows computer you can run VeraCrypt automatically from the flash drive; there is no need to install it on the computer. - -You can set up a flash drive to be a Traveler Disk by choosing Traveler Disk SetUp from the Tools menu of VeraCrypt. - -![](http://www.esecurityplanet.com/imagesvr_ce/5812/Vera12.jpg) - -It is worth noting that in order to run VeraCrypt from a Traveler Disk on a computer, you must have administrator privileges on that computer. 
While that may seem to be a limitation, no confidential files can be opened safely on a computer that you do not control, such as one in a business center. - ->Paul Rubens has been covering enterprise technology for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch. - --------------------------------------------------------------------------------- - -via: http://www.esecurityplanet.com/open-source-security/how-to-encrypt-flash-drive-using-veracrypt.html - -作者:[Paul Rubens ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.esecurityplanet.com/author/3700/Paul-Rubens -[1]: http://www.esecurityplanet.com/views/article.php/3880616/How-to-Encrypt-a-USB-Flash-Drive.htm -[2]: http://www.esecurityplanet.com/open-source-security/veracrypt-a-worthy-truecrypt-alternative.html -[3]: https://veracrypt.codeplex.com/releases/view/619351 - - - diff --git a/translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md b/translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md new file mode 100644 index 0000000000..a049bfd73c --- /dev/null +++ b/translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md @@ -0,0 +1,100 @@ +用 VeraCrypt 加密闪存盘 +============================================ + +很多安全专家偏好像 VeraCrypt 这类能够用来加密闪存盘的开源软件。因为获取它的源代码很简单。 + +保护 USB闪存盘里的数据,加密是一个聪明的方法,正如我们在使用 Microsoft 的 BitLocker [加密闪存盘][1] 一文中提到的。 + +但是如果你不想用 BitLocker 呢? 
+ +你可能有顾虑,因为你不能够查看 Microsoft 的程序源码,那么它容易被植入用于政府或其它用途的“后门”。由于开源软件的源码是公开的,很多安全专家认为开源软件很少藏有后门。 + +还好,有几个开源加密软件能作为 BitLocker 的替代。 + +要是你需要在 Windows 系统,苹果的 OS X 系统或者 Linux 系统上加密以及访问文件,开源软件 [VeraCrypt][2] 提供绝佳的选择。 + +VeraCrypt 源于 TrueCrypt。TrueCrypt是一个备受好评的开源加密软件,尽管它现在已经停止维护了。但是 TrueCrypt 的代码通过了审核,没有发现什么重要的安全漏洞。另外,它已经在 VeraCrypt 中进行了改善。 + +Windows,OS X 和 Linux 系统的版本都有。 + +用 VeraCrypt 加密 USB 闪存盘不像用 BitLocker 那么简单,但是它只要几分钟就好了。 + +### 用 VeraCrypt 加密闪存盘的 8 个步骤 + +对应操作系统 [下载 VeraCrypt][3] 之后: + +打开 VeraCrypt,点击 Create Volume,进入 VeraCrypt 的创建卷的向导程序(VeraCrypt Volume Creation Wizard)。(注:VeraCrypt Volume Creation Wizard 首字母全大写,不清楚是否需要翻译,之后有很多首字母大写的词,都以括号标出) + +![](http://www.esecurityplanet.com/imagesvr_ce/6246/Vera0.jpg) + +VeraCrypt 创建卷向导(VeraCrypt Volume Creation Wizard)允许你在闪存盘里新建一个加密文件容器,这与其它未加密文件是独立的。或者你也可以选择加密整个闪存盘。这个时候你就选加密整个闪存盘就行。 + +![](http://www.esecurityplanet.com/imagesvr_ce/6703/Vera1.jpg) + +然后选择标准模式(Standard VeraCrypt Volume)。 + +![](http://www.esecurityplanet.com/imagesvr_ce/835/Vera2.jpg) + +选择你想加密的闪存盘的驱动器卷标(这里是 O:)。 + +![](http://www.esecurityplanet.com/imagesvr_ce/9427/Vera3.jpg) + +选择创建卷标模式(Volume Creation Mode)。如果你的闪存盘是空的,或者你想要删除它里面的所有东西,选第一个。要么你想保持所有现存的文件,选第二个就好了。 + +![](http://www.esecurityplanet.com/imagesvr_ce/7828/Vera4.jpg) + +这一步允许你选择加密选项。要是你不确定选哪个,就用默认的 AES 和 SHA-512 设置。 + +![](http://www.esecurityplanet.com/imagesvr_ce/5918/Vera5.jpg) + +确定了卷标容量后,输入并确认你想要用来加密数据密码。 + +![](http://www.esecurityplanet.com/imagesvr_ce/3850/Vera6.jpg) + +要有效工作,VeraCrypt 要从一个熵或者“随机数”池中取出一个随机数。要初始化这个池,你将被要求随机地移动鼠标一分钟。一旦进度条变绿了,或者更方便的是等到进度条到了屏幕右边足够远的时候,点击 “Format” 来结束创建加密盘。 + +![](http://www.esecurityplanet.com/imagesvr_ce/7468/Vera8.jpg) + +### 用 VeraCrypt 使用加密过的闪存盘 + +当你想要使用一个加密了的闪存盘,先插入闪存盘到电脑上,启动 VeraCrypt。 + +然后选择一个没有用过的卷标(比如 z:),点击自动挂载设备(Auto-Mount Devices)。 + +![](http://www.esecurityplanet.com/imagesvr_ce/2016/Vera10.jpg) + +输入密码,点击确定。 + +![](http://www.esecurityplanet.com/imagesvr_ce/8222/Vera11.jpg) + +挂载过程需要几分钟,这之后你的解密盘就能通过你先前选择的盘符进行访问了。 + +### VeraCrypt 移动硬盘安装步骤 + 
+如果你设置闪存盘的时候,选择的是加密过的容器而不是加密整个盘,你可以选择创建 VeraCrypt 称为移动盘(Traveler Disk)的设备。这会复制安装一个 VeraCrypt 在 USB 闪存盘。当你在别的 Windows 电脑上插入 U 盘时,就能从 U 盘自动运行 VeraCrypt;也就是说没必要在新电脑上安装 VeraCrypt。 + +你可以设置闪存盘作为一个移动硬盘(Traveler Disk),在 VeraCrypt 的工具栏(Tools)菜单里选择 Traveler Disk SetUp 就行了。 + +![](http://www.esecurityplanet.com/imagesvr_ce/5812/Vera12.jpg) + +要从移动盘(Traveler Disk)上运行 VeraCrypt,你必须要有那台电脑的管理员权限,这不足为奇。尽管这看起来是个限制,机密文件无法在不受控制的电脑上安全打开,比如在一个商务中心的电脑上。 + +>Paul Rubens 从事技术行业已经超过 20 年。这期间他为英国和国际主要的出版社,包括 《The Economist》《The Times》《Financial Times》《The BBC》《Computing》和《ServerWatch》等出版社写过文章, + +-------------------------------------------------------------------------------- + +via: http://www.esecurityplanet.com/open-source-security/how-to-encrypt-flash-drive-using-veracrypt.html + +作者:[Paul Rubens ][a] +译者:[GitFuture](https://github.com/GitFuture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.esecurityplanet.com/author/3700/Paul-Rubens +[1]: http://www.esecurityplanet.com/views/article.php/3880616/How-to-Encrypt-a-USB-Flash-Drive.htm +[2]: http://www.esecurityplanet.com/open-source-security/veracrypt-a-worthy-truecrypt-alternative.html +[3]: https://veracrypt.codeplex.com/releases/view/619351 + + + From 0921361a3fd8732a180d5b748fb721086bd3d7a9 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 29 Jul 2016 13:35:24 +0800 Subject: [PATCH 268/471] =?UTF-8?q?20160729-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...f Earth as Your Linux Desktop Wallpaper.md | 38 +++++++++++++++++++ 1 file changed, 38 insertions(+) create mode 100644 sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md diff --git a/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md new file 
mode 100644
index 0000000000..1fd552e70c
--- /dev/null
+++ b/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md
@@ -0,0 +1,38 @@
+Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper
+=================================================================
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/07/Screen-Shot-2016-07-26-at-16.36.47-1.jpg)
+
+Bored of looking at the same desktop background? Here’s something that’s (almost) out of this world.
+
+‘[Himawaripy][1]‘ is a small Python 3 script that fetches a near-real time picture of Earth taken by the [Japanese Himawari 8 weather satellite][2] and sets it as your desktop background.
+
+Once installed you can set the app to run as a cron job every 10 minutes (in the background, naturally) so that it can fetch and set a realtime picture of Earth as your desktop wallpaper.
+
+Because Himawari-8 is a geostationary satellite, you’re only ever going to see images of Earth from above Australasia — but with real-time weather patterns, cloud formations and lighting, it still makes for a spectacular scene, even if a view of things above the UK would be better for me!
+
+Advanced settings allow you to configure the quality of the images pulled from the satellite, but keep in mind that any increase in quality will result in an increased file size and a longer download wait!
+
+Lastly, while this script is very similar to many others that we’ve covered over the years, it is up-to-date and working.
+
+### Get Himawaripy
+
+Himawaripy has been tested on a range of desktop environments, including Unity, LXDE, i3, MATE and a host of others. It is free, open-source software but is not entirely straightforward to set up and configure.
+
+Find all instructions on getting the app installed and set up (hint: there’s no one-click installer) on the project’s GitHub page.
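The cron setup described above comes down to a single crontab entry. Here is a sketch of what it might look like (added with `crontab -e`); the `python3 -m himawaripy` invocation is an assumption on my part, so substitute whatever command launches the script on your system:

```
# m    h  dom mon dow  command
*/10   *  *   *   *    /usr/bin/python3 -m himawaripy
```

One caveat worth knowing: cron jobs don't inherit your desktop session's environment, and wallpaper-setting tools often need variables such as `DBUS_SESSION_BUS_ADDRESS`, so if the job runs but the background never changes, that environment is the first thing to check.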
+ +[Real time earth wallpaper script on GitHub][0] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ + +作者:[ JOEY-ELIJAH SNEDDON][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://plus.google.com/117485690627814051450/?rel=author +[1]: https://github.com/boramalper/himawaripy +[2]: https://en.wikipedia.org/wiki/Himawari_8 +[0]: https://github.com/boramalper/himawaripy From 27b76b256636123ea14f7d823615ec8dd158f5db Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 29 Jul 2016 13:43:09 +0800 Subject: [PATCH 269/471] =?UTF-8?q?20160729-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20160715 bc - Command line calculator.md | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 sources/tech/20160715 bc - Command line calculator.md diff --git a/sources/tech/20160715 bc - Command line calculator.md b/sources/tech/20160715 bc - Command line calculator.md new file mode 100644 index 0000000000..adf854e468 --- /dev/null +++ b/sources/tech/20160715 bc - Command line calculator.md @@ -0,0 +1,131 @@ +bc: Command line calculator +============================ + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/07/bc-calculator-945x400.jpg) + +If you run a graphical desktop environment, you probably point and click your way to a calculator when you need one. The Fedora Workstation, for example, includes the Calculator tool. It features several different operating modes that allow you to do, for example, complex math or financial calculations. But did you know the command line also offers a similar calculator called bc? + +The bc utility gives you everything you expect from a scientific, financial, or even simple calculator. 
What’s more, it can be scripted from the command line if needed. This allows you to use it in shell scripts, in case you need to do more complex math. + +Because bc is used by some other system software, like CUPS printing services, it’s probably installed on your Fedora system already. You can check with this command: + +``` +dnf list installed bc +``` + +If you don’t see it for some reason, you can install the package with this command: + +``` +sudo dnf install bc +``` + +### Doing simple math with bc + +One way to use bc is to enter the calculator’s own shell. There you can run many calculations in a row. When you enter, the first thing that appears is a notice about the program: + +``` +$ bc +bc 1.06.95 +Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. +This is free software with ABSOLUTELY NO WARRANTY. +For details type `warranty'. +``` + +Now you can type in calculations or commands, one per line: + +``` +1+1 +``` + +The calculator helpfully answers: + +``` +2 +``` + +You can perform other commands here. You can use addition (+), subtraction (-), multiplication (*), division (/), parentheses, exponents (^), and so forth. Note that the calculator respects all expected conventions such as order of operations. Try these examples: + +``` +(4+7)*2 +4+7*2 +``` + +To exit, send the “end of input” signal with the key combination Ctrl+D. + +Another way is to use the echo command to send calculations or commands. Here’s the calculator equivalent of “Hello, world,” using the shell’s pipe function (|) to send output from echo into bc: + +``` +echo '1+1' | bc +``` + +You can send more than one calculation using the shell pipe, with a semicolon to separate entries. The results are returned on separate lines. + +``` +echo '1+1; 2+2' | bc +``` + +### Scale + +The bc calculator uses the concept of scale, or the number of digits after a decimal point, in some calculations. The default scale is 0. Division operations always use the scale setting. 
So if you don’t set scale, you may get unexpected answers: + +``` +echo '3/2' | bc +echo 'scale=3; 3/2' | bc +``` + +Multiplication uses a more complex decision for scale: + +``` +echo '3*2' | bc +echo '3*2.0' | bc +``` + +Meanwhile, addition and subtraction are more as expected: + +``` +echo '7-4.15' | bc +``` + +### Other base number systems + +Another useful function is the ability to use number systems other than base-10 (decimal). For instance, you can easily do hexadecimal or binary math. Use the ibase and obase commands to set input and output base systems between base-2 and base-16. Remember that once you use ibase, any number you enter is expected to be in the new declared base. + +To do hexadecimal to decimal conversions or math, you can use a command like this. Note the hexadecimal digits above 9 must be in uppercase (A-F): + +``` +echo 'ibase=16; A42F' | bc +echo 'ibase=16; 5F72+C39B' | bc +``` + +To get results in hexadecimal, set the obase as well: + +``` +echo 'obase=16; ibase=16; 5F72+C39B' | bc +``` + +Here’s a trick, though. If you’re doing these calculations in the shell, how do you switch back to input in base-10? The answer is to use ibase, but you must set it to the equivalent of decimal number 10 in the current input base. For instance, if ibase was set to hexadecimal, enter: + +``` +ibase=A +``` + +Once you do this, all input numbers are now decimal again, so you can enter obase=10 to reset the output base system. + +### Conclusion + +This is only the beginning of what bc can do. It also allows you to define functions, variables, and loops for complex calculations and programs. You can save these programs as text files on your system to run whenever you need. You can find numerous resources on the web that offer examples and additional function libraries. Happy calculating! 
+
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/
+
+作者:[Paul W. Frields][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[1]: http://phodd.net/gnu-bc/

From 8236318652e2e8187e386b6eb97bbcba8fe62114 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Fri, 29 Jul 2016 13:49:10 +0800
Subject: [PATCH 270/471] =?UTF-8?q?20160729-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...0160722 Keeweb A Linux Password Manager.md | 62 +++++++++++++++++++
 1 file changed, 62 insertions(+)
 create mode 100644 sources/tech/20160722 Keeweb A Linux Password Manager.md

diff --git a/sources/tech/20160722 Keeweb A Linux Password Manager.md b/sources/tech/20160722 Keeweb A Linux Password Manager.md
new file mode 100644
index 0000000000..b829ea107e
--- /dev/null
+++ b/sources/tech/20160722 Keeweb A Linux Password Manager.md
@@ -0,0 +1,62 @@
+Keeweb A Linux Password Manager
+================================
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb_1.png?608)
+
+Today we are depending on more and more online services. Each online service we sign up for lets us set a password, and this way we end up with hundreds of passwords to remember. It is easy for anyone to forget some of them. In this article I am going to talk about Keeweb, a Linux password manager that can store all your passwords securely, either online or offline.
+
+When we talk about Linux password managers, there are many to choose from. Password managers like [Keepass][1] and [Encryptr, a Zero-knowledge system based password manager][2] have already been covered on LinuxAndUbuntu.
Keeweb is another password manager for Linux, and it is the one we are going to look at in this article.
+
+### Keeweb can store passwords offline or online
+
+Keeweb is a cross-platform password manager. It can store all your passwords offline and sync them with your own cloud storage service, such as OneDrive, Google Drive or Dropbox. Keeweb does not have an online database of its own to sync your passwords.
+
+To connect your online storage with Keeweb, just click More and then click the service that you want to use.
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb.png?685)
+
+Now Keeweb will prompt you to sign in to your drive. After signing in, authorize Keeweb to use your account.
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/authenticate-dropbox-with-keeweb_orig.jpg?649)
+
+### Store passwords with Keeweb
+
+It is very easy to store your passwords with Keeweb. You can encrypt your password file with a complex password. Keeweb also allows you to lock the file with a key file, but I don't recommend it: if somebody gets your key file, it takes only a click to unlock your passwords file.
+
+#### Create Passwords
+
+To create a new password, simply click the '+' sign and you will be presented with the fields to fill in. You can create more entries if you want.
+
+#### Search Passwords
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/search-passwords_orig.png)
+
+Keeweb has a library of icons so that you can find any particular password entry easily. You can change the colour of the icons, download more icons and even import icons from your computer. When it comes to finding passwords, the search comes in very handy.
+
+Passwords for similar services can be grouped, so that you can find them all in one place, in one folder. You can also tag passwords to sort them into different categories.
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/tags-passwords-in-keeweb.png?283)
+
+### Themes
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/themes.png?304)
+
+If you prefer light themes, such as white or high contrast, you can change the theme from Settings > General > Themes. There are four themes available: two dark and two light.
+
+### Don't You Like Linux Password Managers? No Problem!
+
+I have already posted about two other Linux password managers, Keepass and Encryptr, and there were arguments on Reddit and other social media, with people both for and against using any password manager. In this article I want to make clear that it is our responsibility to keep safe the file our passwords are stored in. I think password managers like Keepass and Keeweb are good to use, as they don't store your passwords in the cloud. These password managers create a file, and you can store it on your hard drive or encrypt it with apps like VeraCrypt. I myself don't use or recommend services that store passwords in their own database.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/
+
+作者:[author][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.linuxandubuntu.com/home/keeweb-a-linux-password-manager
+[1]: http://www.linuxandubuntu.com/home/keepass-password-management-tool-creates-strong-passwords-and-keeps-them-secure
+[2]: http://www.linuxandubuntu.com/home/encryptr-zero-knowledge-system-based-password-manager-for-linux

From 231a1636b91ee1568c7aac5f6681f3101e3d4661 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Fri, 29 Jul 2016 13:55:13 +0800
Subject: [PATCH 271/471] =?UTF-8?q?20160729-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...r With Multiple Terminals In One Window.md | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md

diff --git a/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md b/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md
new file mode 100644
index 0000000000..eb50d539a1
--- /dev/null
+++ b/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md
@@ -0,0 +1,85 @@
+Terminator A Linux Terminal Emulator With Multiple Terminals In One Window
+=============================================================================
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/lots-of-terminals-in-terminator_1.jpg?659)
+
+Each Linux distribution has a default terminal emulator for interacting with the system through commands. But the default terminal app might not be perfect for you.
There are so many terminal apps that offer more functionality, letting you perform several tasks at once and speeding up your work. One such terminal emulator is Terminator, a free terminal emulator for your Linux system that supports multiple terminals in one window.
+
+### What Is A Linux Terminal Emulator?
+
+A Linux terminal emulator is a program that lets you interact with the shell. All Linux distributions come with a default terminal app that lets you pass commands to the shell.
+
+### Terminator, A Free Linux Terminal App
+
+Terminator is a Linux terminal emulator that provides several features your default terminal app does not support. It gives you the ability to create multiple terminals in one window and speed up your work. Besides multiple windows, it allows you to change other properties such as the terminal font, font colour, background colour and so on. Let's see how we can install and use Terminator on different Linux distributions.
+
+### How To Install Terminator In Linux?
+
+#### Install Terminator In Ubuntu Based Distributions
+
+Terminator is available in the default Ubuntu repository, so you don't need to add any additional PPA. Just use APT or the Software app to install it on Ubuntu.
+
+```
+sudo apt-get install terminator
+```
+
+In case Terminator is not available in your default repository, just compile Terminator from source code.
+
+[DOWNLOAD SOURCE CODE][1]
+
+Download the Terminator source code and extract it on your desktop. Now open your default terminal and cd into the extracted folder.
+
+Now use the following command to install Terminator:
+
+```
+sudo ./setup.py install
+```
+
+#### Install Terminator In Fedora & Other Derivatives
+
+```
+dnf install terminator
+```
+
+#### Install Terminator In OpenSuse
+
+[INSTALL IN OPENSUSE][2]
+
+### How To Use Multiple Terminals In One Window?
+
+After you have installed Terminator, you can open multiple terminals in one window: simply right click and divide.
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/multiple-terminals-in-terminator_orig.jpg)
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/multiple-terminals-in-terminator-emulator.jpg?697)
+
+You can create as many terminals as you want, as long as you can manage them.
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/lots-of-terminals-in-terminator.jpg?706)
+
+### Customise Terminals
+
+Right click the terminal and click Properties. Now you can customise the fonts, font colour, title colour & background, and terminal font colour & background.
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/customize-terminator-interface.jpg?702)
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/free-terminal-emulator_orig.jpg)
+
+### Conclusion & What Is Your Favorite Terminal Emulator?
+
+Terminator is an advanced terminal emulator that also lets you customise the interface. If you have not yet switched from your default terminal emulator, just try this one. I know you'll like it. If you're using any other free terminal emulator, let us know your favorite. Also, don't forget to share this article with your friends; perhaps they are searching for something like this.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/terminator-a-linux-terminal-emulator-with-multiple-terminals-in-one-window
+
+作者:[author][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.linuxandubuntu.com/home/terminator-a-linux-terminal-emulator-with-multiple-terminals-in-one-window
+[1]: https://launchpad.net/terminator/+download
+[2]: http://software.opensuse.org/download.html?project=home%3AKorbi123&package=terminator
From 1b6e4f293dda28a37848ae4453c060efa8dbaad6 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Fri, 29 Jul 2016 14:15:33 +0800
Subject: [PATCH 272/471] =?UTF-8?q?20160729-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...pt-get on Ubuntu Linux 16.04 LTS server.md | 218 ++++++++++++++++++
 1 file changed, 218 insertions(+)
 create mode 100644 sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md

diff --git a/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md b/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md
new file mode 100644
index 0000000000..460dabc03c
--- /dev/null
+++ b/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md
@@ -0,0 +1,218 @@
+How to use multiple connections to speed up apt-get/apt on Ubuntu Linux 16.04 LTS server
+=========================================================================================
+
+How do I speed up my apt-get or apt command to download packages from multiple repos on an Ubuntu Linux 16.04 or 14.04 LTS server?
+
+You need to use the apt-fast shell script wrapper.
It should speed up apt-get command/apt command and aptitude command by downloading packages with multiple connections per package. All packages are downloaded simultaneously in parallel. It uses aria2c as default download accelerator. + +### Install apt-fast tool + +Type the following command on Ubuntu Linux 14.04 and later versions: + +``` +$ sudo add-apt-repository ppa:saiarcot895/myppa +``` + +Sample outputs: + +![](http://s0.cyberciti.org/uploads/faq/2016/07/install-apt-fast-repo.jpg) + +Update your repo: + +``` +$ sudo apt-get update +``` + +OR + +``` +$ sudo apt update +``` + +![](http://s0.cyberciti.org/uploads/faq/2016/07/install-apt-fast-command.jpg) + +Install apt-fast shell wrapper: + +``` +$ sudo apt-get -y install apt-fast +``` + +OR + +``` +$ sudo apt -y install apt-fast +``` + +Sample outputs: + + +``` +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following additional packages will be installed: + aria2 libc-ares2 libssh2-1 +Suggested packages: + aptitude +The following NEW packages will be installed: + apt-fast aria2 libc-ares2 libssh2-1 +0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. +Need to get 1,282 kB of archives. +After this operation, 4,786 kB of additional disk space will be used. +Do you want to continue? 
[Y/n] y
+Get:1 http://01.archive.ubuntu.com/ubuntu xenial/universe amd64 libssh2-1 amd64 1.5.0-2 [70.3 kB]
+Get:2 http://ppa.launchpad.net/saiarcot895/myppa/ubuntu xenial/main amd64 apt-fast all 1.8.3~137+git7b72bb7-0ubuntu1~ppa3~xenial1 [34.4 kB]
+Get:3 http://01.archive.ubuntu.com/ubuntu xenial/main amd64 libc-ares2 amd64 1.10.0-3 [33.9 kB]
+Get:4 http://01.archive.ubuntu.com/ubuntu xenial/universe amd64 aria2 amd64 1.19.0-1build1 [1,143 kB]
+54% [4 aria2 486 kB/1,143 kB 42%] 20.4 kB/s 32s
+```
+
+### Configure apt-fast
+
+You will be prompted as follows (a value between 5 and 16 must be entered):
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/max-connection-10.jpg)
+
+And:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/apt-fast-confirmation-box.jpg)
+
+You can also edit the settings directly:
+
+```
+$ sudo vi /etc/apt-fast.conf
+```
+
+>**Please note that this tool is not for slow network connections; it is for fast network connections. If you have a slow connection to the Internet, you are not going to benefit from this tool.**
+
+### How do I use the apt-fast command?
+
+The syntax is:
+
+```
+apt-fast command
+apt-fast [options] command
+```
+
+#### To retrieve new lists of packages using apt-fast
+
+```
+sudo apt-fast update
+```
+
+#### To perform an upgrade using apt-fast
+
+```
+sudo apt-fast upgrade
+```
+
+
+#### To perform a distribution upgrade (release or forced kernel upgrade), enter:
+
+```
+$ sudo apt-fast dist-upgrade
+```
+
+#### To install new packages
+
+The syntax is:
+
+```
+sudo apt-fast install pkg
+```
+
+For example, to install the nginx package, enter:
+
+```
+$ sudo apt-fast install nginx
+```
+
+Sample outputs:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/sudo-apt-fast-install.jpg)
+
+#### To remove packages
+
+```
+$ sudo apt-fast remove pkg
+$ sudo apt-fast remove nginx
+```
+
+#### To remove packages and their config files too
+
+```
+$ sudo apt-fast purge pkg
+$ sudo apt-fast purge nginx
+```
+
+#### To automatically remove all unused packages, enter:
+
+```
+$ sudo apt-fast autoremove
+```
+
+#### To download source archives
+
+```
+$ sudo apt-fast source pkgNameHere
+```
+
+#### To erase downloaded archive files
+
+```
+$ sudo apt-fast clean
+```
+
+#### To erase old downloaded archive files
+
+```
+$ sudo apt-fast autoclean
+```
+
+#### To verify that there are no broken dependencies
+
+```
+$ sudo apt-fast check
+```
+
+#### To download the binary package into the current directory
+
+```
+$ sudo apt-fast download pkgNameHere
+$ sudo apt-fast download nginx
+```
+
+Sample outputs:
+
+```
+[#7bee0c 0B/0B CN:1 DL:0B]
+07/26 15:35:42 [NOTICE] Verification finished successfully. file=/home/vivek/nginx_1.10.0-0ubuntu0.16.04.2_all.deb
+07/26 15:35:42 [NOTICE] Download complete: /home/vivek/nginx_1.10.0-0ubuntu0.16.04.2_all.deb
+Download Results:
+gid |stat|avg speed |path/URI
+======+====+===========+=======================================================
+7bee0c|OK | n/a|/home/vivek/nginx_1.10.0-0ubuntu0.16.04.2_all.deb
+Status Legend:
+(OK):download completed.
+``` + +#### To download and display the changelog for the given package + +``` +$ sudo apt-fast changelog pkgNameHere +$ sudo apt-fast changelog nginx +``` + + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introducing-flatpak/ + +作者:[VIVEK GITE][a] +译者:[zky001](https://github.com/zky001) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.cyberciti.biz/tips/about-us From bd4ec60a566239d0d14beee337f7939d6e51f157 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 29 Jul 2016 14:41:49 +0800 Subject: [PATCH 273/471] PUB:20160630 What makes up the Fedora kernel MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ChrisLeeGit 好棒!翻译的精确而优雅。 --- .../20160630 What makes up the Fedora kernel.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) rename {translated/tech => published}/20160630 What makes up the Fedora kernel.md (70%) diff --git a/translated/tech/20160630 What makes up the Fedora kernel.md b/published/20160630 What makes up the Fedora kernel.md similarity index 70% rename from translated/tech/20160630 What makes up the Fedora kernel.md rename to published/20160630 What makes up the Fedora kernel.md index 954c4cb440..67c26ee93c 100644 --- a/translated/tech/20160630 What makes up the Fedora kernel.md +++ b/published/20160630 What makes up the Fedora kernel.md @@ -3,11 +3,11 @@ Fedora 内核是由什么构成的? 
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/kernel-945x400.png) -每个 Fedora 系统都运行着一个内核。许多代码段组合在一起使之成为现实。 +每个 Fedora 系统都运行着一个内核。许多代码片段组合在一起使之成为现实。 -每个 Fedora 内核都起始于一个来自于 [上游社区][1] 的基线版本,通常称之为 vanilla 内核。上游内核就是标准。(Fedora 的)目标是包含尽可能多的上游代码,这样使得 bug 修复和 API 更新更加容易,同时也会有更多的人审查代码。理想情况下,Fedora 能够直接从 kernel.org 获得内核,然后发送给所有用户。 +每个 Fedora 内核都起始于一个来自于[上游社区][1]的基线版本——通常称之为 vanilla 内核。上游内核就是标准。(Fedora 的)目标是包含尽可能多的上游代码,这样使得 bug 修复和 API 更新更加容易,同时也会有更多的人审查代码。理想情况下,Fedora 能够直接获取 kernel.org 的内核,然后发送给所有用户。 -现实情况是,使用 vanilla 内核并不能完全满足 Fedora。Vanilla 内核可能并不支持一些 Fedora 用户希望拥有的功能。用户接收的 [Fedora 内核] 是在 vanilla 内核之上打了很多补丁的内核。这些补丁被认为“不在树上”。许多这些位于补丁树之外的补丁都不会存在太久。如果某补丁能够修复一个问题,那么该补丁可能会被合并到 Fedora 树,以便用户能够更快地收到修复。当内核变基到一个新版本时,在新版本中的补丁都将被清除。 +现实情况是,使用 vanilla 内核并不能完全满足 Fedora。Vanilla 内核可能并不支持一些 Fedora 用户希望拥有的功能。用户接收的 [Fedora 内核] 是在 vanilla 内核之上打了很多补丁的内核。这些补丁被认为“不在树上(out of tree)”。许多这些位于补丁树之外的补丁都不会存在太久。如果某补丁能够修复一个问题,那么该补丁可能会被合并到 Fedora 树,以便用户能够更快地收到修复。当内核变基到一个新版本时,在新版本中的补丁都将被清除。 一些补丁会在 Fedora 内核树上存在很长时间。一个很好的例子是,安全启动补丁就是这类补丁。这些补丁提供了 Fedora 希望支持的功能,即使上游社区还没有接受它们。保持这些补丁更新是需要付出很多努力的,所以 Fedora 尝试减少不被上游内核维护者接受的补丁数量。 @@ -21,7 +21,7 @@ via: https://fedoramagazine.org/makes-fedora-kernel/ 作者:[Laura Abbott][a] 译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a8048d99cc63ff10bbbf7e7676ba43274a08d195 Mon Sep 17 00:00:00 2001 From: Mike Date: Fri, 29 Jul 2016 16:08:32 +0800 Subject: [PATCH 274/471] translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md (#4240) * translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md * translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md --- ...0160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 73 ------------------- ...0160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 71 ++++++++++++++++++ 2 files changed, 71 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20160718 OPEN 
SOURCE ACCOUNTING SOFTWARE.md create mode 100644 translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md diff --git a/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md deleted file mode 100644 index edc260a1bc..0000000000 --- a/sources/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md +++ /dev/null @@ -1,73 +0,0 @@ -MikeCoder Translating... - -GNU KHATA: OPEN SOURCE ACCOUNTING SOFTWARE -============================================ - -Being an active Linux enthusiast, I usually introduce my friends to Linux, help them choose the best distro to suit their needs, and finally get them set with open source alternative software for their work. - -But in one case, I was pretty helpless. My uncle, who is a freelance accountant, uses a set of some pretty sophisticated paid software for work. And I wasn’t sure if I’d find anything under FOSS for him, until Yesterday. - -Abhishek was suggesting me some [cool apps][1] to check out and this particular one, GNU Khata stuck out. - -[GNU Khata][2] is an accounting tool. Or shall I say a collection of accounting tools? It is like the [Evernote][3] of economy management. It is so versatile that it can be used from personal Finance management to large scale business management, from store inventory management to corporate tax works. - -One interesting fact for you. ‘Khata’ in Hindi and other Indian languages means account and hence this accounting software is called GNU Khata. - -### INSTALLATION - -There are many installation instructions floating around the internet which actually install the older web app version of GNU Khata. Currently, GNU Khata is available only for Debian/Ubuntu and their derivatives. I suggest you follow the steps given in GNU Khata official Website to install the updated standalone. Let me give them out real quick. - -- Download the installer [here][4]. -- Open the terminal in download location. 
-- Copy and paste the below code in terminal and run. - -``` -sudo chmod 755 GNUKhatasetup.run -sudo ./GNUKhatasetup.run -``` - -- That’s it. Open the GNU Khata from the dash or the application menu. -### FIRST LAUNCH - -GNU Khata opens up in the browser and displays the following page. - -![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-1.jpg) - -Fill in the Organization name, case and organization type, financial year and click on proceed to go to the admin setup page. - -![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-2.jpg) - -Carefully feed in your name, password, security question and the answer and click on “create and login”. - -![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-3.jpg) - -You’re all set now. Use the Menu bar to start using GNU Khata to manage your finances. It’s that easy. - -### DOES GNU KHATA REALLY RIVAL THE PAID ACCOUNTING SOFTWARE IN THE MARKET? - -To begin with, GNU Khata keeps it all simple. The menu bar up top is very conveniently organized to help you work faster and better. You can choose to manage different accounts and projects and access them easily. [Their Website][5] states that GNU Khata can be “easily transformed into Indian languages”. Also, did you know that GNU Khata can be used on the cloud too? - -All the major accounting tools like ledgers, project statements, statement of affairs etc are formatted in a professional manner and are made available in both instantly presentable as well as customizable formats. It makes accounting and inventory management look so easy. - -The Project is very actively evolving, seeks feedback and guidance from practicing accountants to make improvements in the software. Considering the maturity, ease of use and the absence of a price tag, GNU Khata can be the perfect assistant in bookkeeping. - -Let us know what you think about GNU Khata in the comments below. 
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/using-gnu-khata/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
-
-作者:[Aquil Roshan][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/aquil/
-[1]: https://itsfoss.com/category/apps/
-[2]: http://www.gnukhata.in/
-[3]: https://evernote.com/
-[4]: https://cloud.openmailbox.org/index.php/s/L8ppsxtsFq1345E/download
-[5]: http://www.gnukhata.in/
diff --git a/translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
new file mode 100644
index 0000000000..8093f8eae2
--- /dev/null
+++ b/translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
@@ -0,0 +1,71 @@
+GNU KHATA:开源的会计管理软件
+============================================
+
+作为一个活跃的 Linux 爱好者,我经常向我的朋友们介绍 Linux,帮助他们选择最适合他们的发行版本,同时也会帮助他们安装一些适用于他们工作的开源软件。
+
+但是,有一次我却很无奈。我的叔叔是一位自由职业的会计师,他的工作要用到一套相当复杂的付费软件。我原本不确定能否在开源软件中为他找到可以替代的软件,直到昨天。
+
+Abhishek 给我推荐了一些[很酷的软件][1],其中 GNU Khata 这一款脱颖而出。
+
+[GNU Khata][2] 是一个会计工具,或者说是一系列会计工具的集合?它就像财务管理方面的 [Evernote][3]。它的应用非常广泛,从个人财务管理到大型公司的管理,从店铺存货管理到税务计算,都可以有效处理。
+
+告诉你一个有趣的事实:“Khata”在印地语和其他印度语言中是“账户”的意思,所以这个会计软件叫做 GNU Khata。
+
+### 安装
+
+互联网上有很多关于老旧版本的 Khata 安装的介绍。现在,GNU Khata 已经可以在 Debian/Ubuntu 及它们的衍生版本中使用。我建议你按照如下 GNU Khata 官网的步骤来安装。我们来一次快速的入门。
+
+- 从[这里][4]下载安装器。
+- 在下载目录打开终端。
+- 复制粘贴以下命令到终端并执行。
+
+```
+sudo chmod 755 GNUKhatasetup.run
+sudo ./GNUKhatasetup.run
+```
+
+- 这样就完成了,从你的 Dash 或者应用菜单中启动 GNU Khata 吧。
+### 第一次启动
+
+GNU Khata 在浏览器中打开,并且展现以下的画面。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-1.jpg)
+
+填写组织的名字、组织类型和财务年度,然后点击 proceed 按钮进入管理员设置页面。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-2.jpg)
+
+仔细填写你的姓名、密码、安全问题及其答案,然后点击“create and login”。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-3.jpg)
+
+你已经全部设置完成了。使用菜单栏开始用 GNU Khata 管理你的财务吧。就是这么简单。
+
+### GNU KHATA 真的是市面上付费会计应用的竞争对手吗?
+
+首先,GNU Khata 让一切变得简单。顶部的菜单栏组织得十分合理,可以帮助你更快更好地工作。你可以选择管理不同的账户和项目,并且随时轻松切换。[官网][5]表示,GNU Khata 可以“轻松地转换成各种印度语言”。另外,你知道 GNU Khata 也可以在云端使用吗?
+
+账簿、项目报表、财务状况表等所有主要的会计工具,都以专业的方式排版,既可以直接用于展示,也支持自定义格式。这让会计和库存管理变得十分简单。
+
+这个项目正在积极地发展,开发者会向执业会计师征求反馈和指导来改进这个软件。考虑到它的成熟度、易用性,再加上免费,GNU Khata 可以成为完美的记账助手。
+
+请在评论框里留言吧,让我们知道你是如何看待 GNU Khata 的。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/using-gnu-khata/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
+
+作者:[Aquil Roshan][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/aquil/
+[1]: https://itsfoss.com/category/apps/
+[2]: http://www.gnukhata.in/
+[3]: https://evernote.com/
+[4]: https://cloud.openmailbox.org/index.php/s/L8ppsxtsFq1345E/download
+[5]: http://www.gnukhata.in/
From cfe454e814f9ca9d3e27151870871b5840e9b257 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 29 Jul 2016 19:11:15 +0800
Subject: [PATCH 275/471] translating

---
 ...
Real Time Photo of Earth as Your Linux Desktop Wallpaper.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md index 1fd552e70c..9c0f673900 100644 --- a/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md +++ b/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md @@ -1,3 +1,5 @@ +Translating---geekpi + Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper ================================================================= From 6a44b74276c3da5d38d03d3265a919d6c9263537 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 29 Jul 2016 19:11:51 +0800 Subject: [PATCH 276/471] translating --- ...ions-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md b/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md index 460dabc03c..6803f2ec47 100644 --- a/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md +++ b/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md @@ -1,3 +1,5 @@ +translating---geekpi + How to use multiple connections to speed up apt-get/apt on Ubuntu Linux 16.04 LTS server ========================================================================================= From f8ad5216d1ca06009f04d4bf180a6ad9e127efc7 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 29 Jul 2016 20:45:47 +0800 Subject: [PATCH 277/471] translated --- ...f Earth as Your Linux Desktop Wallpaper.md | 40 ------------------- ...f Earth as Your Linux Desktop Wallpaper.md | 39 ++++++++++++++++++ 2 files changed, 39 
insertions(+), 40 deletions(-) delete mode 100644 sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md create mode 100644 translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md diff --git a/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md deleted file mode 100644 index 9c0f673900..0000000000 --- a/sources/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md +++ /dev/null @@ -1,40 +0,0 @@ -Translating---geekpi - -Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper -================================================================= - -![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/07/Screen-Shot-2016-07-26-at-16.36.47-1.jpg) - -Bored of looking at the same desktop background? Here’s something that’s (almost) out of this world. - -‘[Himawaripy][1]‘ is a small Python 3 script that fetches a near-real time picture of Earth taken by the [Japanese Himawari 8 weather satellite][2] and sets it as your desktop background. - -Once installed you can set the app to run as a cron job every 10 minutes (in the background, naturally) so that it can fetch and set a realtime picture of Earth as your desktop wallpaper. - -Because Himawari-8 is a geostationary satellite you’re only ever going to images of the earth as seen from above Australasia — but with real time weather patterns, cloud formations and lighting it’s still makes for spectacular scene, even if seeing things above the UK would be better for me! - -Advanced settings allow you to configure the quality of the images pulled from the satellite , but keep in mind that any increase in quality will result in an increased file size, and a longer download wait! - -Lastly, while this script is very similar to many others that we’ve covered over the years it is up-to-date and working. 
-
-Get Himawaripy
-Himawaripy has been tested on a range of desktop environments, including Unity, LXDE, i3, MATE and a host of other desktop environments. It is free, open-source software but is not entirely straightforward to set up and configure.
-
-Find all instructions on getting the app installed and set up (hint: there’s no one-click installer) on the project’s GitHub page.
-
-[Real time earth wallpaper script on GitHub][0]
-
--------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/
-
-作者:[ JOEY-ELIJAH SNEDDON][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://plus.google.com/117485690627814051450/?rel=author
-[1]: https://github.com/boramalper/himawaripy
-[2]: https://en.wikipedia.org/wiki/Himawari_8
-[0]: https://github.com/boramalper/himawaripy
diff --git a/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md
new file mode 100644
index 0000000000..8dc73bf150
--- /dev/null
+++ b/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md
@@ -0,0 +1,39 @@
+为你的Linux桌面设置一张真实的地球照片
+=================================================================
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/07/Screen-Shot-2016-07-26-at-16.36.47-1.jpg)
+
+厌倦了天天看同样的桌面背景了么?这里有个(几乎)“超出这个世界”的好东西。
+
+‘[Himawaripy][1]‘是一个Python 3脚本,它会近乎实时地抓取由[日本Himawari 8气象卫星][2]拍摄的地球照片,并将它设置成你的桌面背景。
+
+安装完成后,你可以将它设置成每10分钟运行一次的定时任务(自然是在后台运行),这样它就可以实时地取回地球的照片并设置成背景了。
+
+因为Himawari-8是一颗地球静止轨道卫星,你只能看到澳大利亚上空视角的地球图片,但是实时的天气形态、云团和光照仍然构成了壮丽的景象,尽管对我来说,能看到英国上空会更好!
+
+高级设置允许你配置从卫星取回的图片质量,但是要记住,提高图片质量会增加文件大小,下载等待时间也会更长!
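顺便说明一下“近乎实时”背后的取图逻辑:Himawari-8 的图像按每 10 分钟一帧发布,取图脚本通常会把当前 UTC 时间减去一个发布延迟,再向下取整到 10 分钟边界,用得到的时间戳拼出图片地址。下面是一段概念性的示意代码,并非 himawaripy 的实际实现;其中的 URL 是虚构的占位,延迟值也只是假设,真实端点请以 himawaripy 源码为准。

```python
from datetime import datetime, timedelta, timezone

def latest_frame_time(now=None, latency_min=20):
    """向下取整到最近的 10 分钟边界,并先减去假设的发布延迟。"""
    now = now or datetime.now(timezone.utc)
    t = now - timedelta(minutes=latency_min)
    return t.replace(minute=t.minute - t.minute % 10, second=0, microsecond=0)

def frame_url(t):
    # 虚构的 URL 模式,仅作占位示意,并非真实接口
    return ("https://himawari.example/img/D531106/1d/550/"
            f"{t:%Y/%m/%d/%H%M%S}_0_0.png")

if __name__ == "__main__":
    t = latest_frame_time(datetime(2016, 7, 26, 16, 47, 31, tzinfo=timezone.utc))
    print(frame_url(t))  # 时间部分为 162000(16:47 减 20 分钟后取整到 16:20)
```

把这样的时间戳代入抓取脚本,就能总是取到最近一帧已经发布的图像。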
+
+最后,虽然这个脚本与我们这些年介绍过的许多同类脚本很相似,但它一直保持更新,可以正常使用。
+
+获取Himawaripy
+
+Himawaripy已经在一系列的桌面环境中测试过了,包括Unity、LXDE、i3、MATE等许多桌面环境。它是自由开源软件,但是安装和配置并不是一步到位的。
+
+请在项目的GitHub页面上查找安装和设置的完整说明(提示:没有一键安装器)。
+
+[GitHub上的实时地球壁纸脚本][0]
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/
+
+作者:[ JOEY-ELIJAH SNEDDON][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://plus.google.com/117485690627814051450/?rel=author
+[1]: https://github.com/boramalper/himawaripy
+[2]: https://en.wikipedia.org/wiki/Himawari_8
+[0]: https://github.com/boramalper/himawaripy
From b2d8ab5f01db13d78b83987fe8fb9812692f652a Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 30 Jul 2016 00:03:05 +0800
Subject: [PATCH 278/471] PUB:20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@kokialoves 请注意标点符号。
---
 ...latform Discourse On Ubuntu Linux 16.04.md | 101 ++++++++++++++++++
 ...latform Discourse On Ubuntu Linux 16.04.md | 101 ------------------
 2 files changed, 101 insertions(+), 101 deletions(-)
 create mode 100644 published/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
 delete mode 100644 translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md

diff --git a/published/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/published/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
new file mode 100644
index 0000000000..45c5d634dc
--- /dev/null
+++ b/published/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
@@ -0,0 +1,101 @@
+如何在 Ubuntu Linux
16.04上安装开源的 Discourse 论坛 +=============================================================================== + +Discourse 是一个开源的论坛,它可以以邮件列表、聊天室或者论坛等多种形式工作。它是一个广受欢迎的现代的论坛工具。在服务端,它使用 Ruby on Rails 和 Postgres 搭建, 并且使用 Redis 缓存来减少读取时间 , 在客户端,它使用支持 Java Script 的浏览器。它非常容易定制,结构良好,并且它提供了转换插件,可以对你现存的论坛、公告板进行转换,例如: vBulletin、phpBB、Drupal、SMF 等等。在这篇文章中,我们将学习在 Ubuntu 操作系统下安装 Discourse。 + +它以安全作为设计思想,所以发垃圾信息的人和黑客们不能轻易的实现其企图。它能很好的支持各种现代设备,并可以相应的调整以手机和平板的显示。 + +### 在 Ubuntu 16.04 上安装 Discourse + +让我们开始吧 ! 最少需要 1G 的内存,并且官方支持的安装过程需要已经安装了 docker。 说到 docker,它还需要安装Git。要满足以上的两点要求我们只需要运行下面的命令: + +``` +wget -qO- https://get.docker.com/ | sh +``` + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/124.png) + +用不了多久就安装好了 docker 和 Git,安装结束以后,在你的系统上的 /var 分区创建一个 Discourse 文件夹(当然你也可以选择其他的分区)。 + +``` +mkdir /var/discourse +``` + +现在我们来克隆 Discourse 的 Github 仓库到这个新建的文件夹。 + +``` +git clone https://github.com/discourse/discourse_docker.git /var/discourse +``` + +进入这个克隆的文件夹。 + +``` +cd /var/discourse +``` + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/314.png) + +你将看到“discourse-setup” 脚本文件,运行这个脚本文件进行 Discourse 的初始化。 + +``` +./discourse-setup +``` + +**备注: 在安装 discourse 之前请确保你已经安装好了邮件服务器。** + +安装向导将会问你以下六个问题: + +``` +Hostname for your Discourse? +Email address for admin account? +SMTP server address? +SMTP user name? +SMTP port [587]: +SMTP password? 
[]: +``` + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/411.png) + +当你提交了以上信息以后, 它会让你提交确认, 如果一切都很正常,点击回车以后安装开始。 + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/511.png) + +现在“坐等放宽”,需要花费一些时间来完成安装,倒杯咖啡,看看有什么错误信息没有。 + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/610.png) + +安装成功以后看起来应该像这样。 + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/710.png) + +现在打开浏览器,如果已经做了域名解析,你可以使用你的域名来连接 Discourse 页面 ,否则你只能使用IP地址了。你将看到如下信息: + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/85.png) + +就是这个,点击 “Sign Up” 选项创建一个新的账户,然后进行你的 Discourse 设置。 + +![](http://linuxpitstop.com/wp-content/uploads/2016/06/106.png) + +### 结论 + +它安装简便,运行完美。 它拥有现代论坛所有必备功能。它以 GPL 发布,是完全开源的产品。简单、易用、以及特性丰富是它的最大特点。希望你喜欢这篇文章,如果有问题,你可以给我们留言。 + +-------------------------------------------------------------------------------- + +via: http://linuxpitstop.com/install-discourse-on-ubuntu-linux-16-04/ + +作者:[Aun][a] +译者:[kokialoves](https://github.com/kokialoves) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://linuxpitstop.com/author/aun/ + + + + + + + + diff --git a/translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md deleted file mode 100644 index 742fd381dd..0000000000 --- a/translated/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md +++ /dev/null @@ -1,101 +0,0 @@ -如何在 Ubuntu Linux 16.04上安装开源的Discourse论坛 -=============================================================================== - -Discourse 是一个开源的论坛, 它可以以邮件列表, 聊天室或者论坛等多种形式工作. 它是一个广受欢迎的现代的论坛工具. 在服务端,它使用Ruby on Rails 和 Postgres 搭建, 并且使用Redis caching 减少读取时间 , 在客户端, 它用浏览器的Java Script运行. 它是一个非常好的定制和构架工具. 并且它提供了转换插件对你现存的论坛进行转换例如: vBulletin, phpBB, Drupal, SMF 等等. 在这篇文章中, 我们将学习在Ubuntu操作系统下安装 Discourse. 
- -它是基于安全开发的, 黑客们不能轻易的发现漏洞. 它能很好的支持各个平台, 相应的调整手机和平板的显示设置. - -### Installing Discourse on Ubuntu 16.04 - -让我们开始吧 ! 最少需要1G的内存并且你要保证dockers已经安装了. 说到dockers, 它还需要安装Git. 要达到以上的两点要求我们只需要运行下面的命令. - -``` -wget -qO- https://get.docker.com/ | sh -``` - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/124.png) - -用不了多久就安装好了Docker 和 Git, 安装结束以后, 创建一个 Discourse 文件夹在 /var 分区 (当然你也可以选择其他的分区). - -``` -mkdir /var/discourse -``` - -现在我们来克隆 Discourse’s Github 项目到这个新建的文件夹. - -``` -git clone https://github.com/discourse/discourse_docker.git /var/discourse -``` - -进入克隆文件夹. - -``` -cd /var/discourse -``` - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/314.png) - -你将 看到“discourse-setup” 脚本文件, 运行这个脚本文件进行Discourse的初始化. - -``` -./discourse-setup -``` - -**Side note: 在安装discourse之前请确保你已经安装了邮件服务器.** - -安装向导将会问你以下六个问题. - -``` -Hostname for your Discourse? -Email address for admin account? -SMTP server address? -SMTP user name? -SMTP port [587]: -SMTP password? []: -``` - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/411.png) - -当你提交了以上信息以后, 它会让你提交确认, 恩一切都很正常, 点击回车以后安装开始. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/511.png) - -现在坐下来,倒杯茶,看看有什么错误信息没有. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/610.png) - -安装成功以后看起来应该像这样. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/710.png) - -现在打开浏览器, 如果已经做了域名解析, 你可以使用你的域名来连接Discourse页面 , 否则你只能使用IP地址了. 你讲看到如下信息: - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/85.png) - -就是这个, 用 “Sign Up” 选项创建一个新管理账户. - -![](http://linuxpitstop.com/wp-content/uploads/2016/06/106.png) - -### 结论 - -它是安装简便安全易用的. 它拥有当前所有论坛功能. 它支持所有的开源产品. 简单, 易用, 各类实用的功能. 希望你喜欢这篇文章你可以给我们留言. 
- --------------------------------------------------------------------------------- - -via: http://linuxpitstop.com/install-discourse-on-ubuntu-linux-16-04/ - -作者:[Aun][a] -译者:[kokialoves](https://github.com/kokialoves) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://linuxpitstop.com/author/aun/ - - - - - - - - From 8c8bba067989fd9c536a9aaf7bc118f3bc1c2033 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 30 Jul 2016 06:32:57 +0800 Subject: [PATCH 279/471] translated --- ...f Earth as Your Linux Desktop Wallpaper.md | 2 +- ...pt-get on Ubuntu Linux 16.04 LTS server.md | 74 +++++++++---------- 2 files changed, 37 insertions(+), 39 deletions(-) rename {sources => translated}/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md (64%) diff --git a/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md index 8dc73bf150..d05fd97eb2 100644 --- a/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md +++ b/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md @@ -28,7 +28,7 @@ Himawaripy已经在一系列的桌面环境中都测试过了,包括Unity、LX via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ 作者:[ JOEY-ELIJAH SNEDDON][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md b/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md similarity index 64% rename from 
sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md rename to translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md index 6803f2ec47..4824bd7f7d 100644 --- a/sources/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md +++ b/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md @@ -1,31 +1,29 @@ -translating---geekpi - -How to use multiple connections to speed up apt-get/apt on Ubuntu Linux 16.04 LTS server +如何在Ubuntu Linux 16.04 LTS中使用多条连接加速apt-get/apt ========================================================================================= -ow do I speed up my apt-get or apt command to download packages from multiple repos on a Ubuntu Linux 16.04 or 14.04 LTS server? +我该如何在Ubuntu Linux 16.04或者14.04 LTS中从多个仓库中下载包来加速apt-get或者apt命令? -You need to use apt-fast shell script wrapper. It should speed up apt-get command/apt command and aptitude command by downloading packages with multiple connections per package. All packages are downloaded simultaneously in parallel. It uses aria2c as default download accelerator. 
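在进入配置之前,先示意一下“多条连接”的原理:aria2c 会把同一个 .deb 文件按字节范围切成若干段,各段用独立连接并行下载,最后拼接成完整文件。下面是一段概念性的示意代码,只演示这种按字节范围切分的思路,并非 apt-fast 或 aria2 的实际实现:

```python
def split_ranges(total_size, connections):
    """把 total_size 字节平均切成 connections 段,返回每段的 (起, 止) 闭区间。"""
    if total_size < 1 or connections < 1:
        raise ValueError("total_size 和 connections 都必须为正数")
    base, extra = divmod(total_size, connections)
    ranges, start = [], 0
    for i in range(connections):
        length = base + (1 if i < extra else 0)
        if length == 0:
            break  # 段数多于字节数时,去掉空段
        ranges.append((start, start + length - 1))  # HTTP Range 头使用闭区间
        start += length
    return ranges

# 以上文示例输出中 1,143 kB 的 aria2 包为例,切成 4 段:
for r in split_ranges(1_143_000, 4):
    print(r)  # 依次打印 (0, 285749)、(285750, 571499)、(571500, 857249)、(857250, 1142999)
```

apt-fast 大致就是把 apt 要下载的每个包交给 aria2c 按这种思路并行取回,这也是它只对快速网络有意义的原因:瓶颈在单连接限速时,多开连接才有收益。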
+你需要使用到apt-fast这个脚本。它会通过多个连接同时下载一个包来加速apt-get/apt和aptitude命令。所有的包都会同时下载。它使用aria2c作为默认的下载加速。 -### Install apt-fast tool +### 安装 apt-fast 工具 -Type the following command on Ubuntu Linux 14.04 and later versions: +在Ubuntu Linux 14.04或者之后的版本尝试下面的命令: ``` $ sudo add-apt-repository ppa:saiarcot895/myppa ``` -Sample outputs: +示例输出: ![](http://s0.cyberciti.org/uploads/faq/2016/07/install-apt-fast-repo.jpg) -Update your repo: +更新你的仓库: ``` $ sudo apt-get update ``` -OR +或者 ``` $ sudo apt update @@ -33,19 +31,19 @@ $ sudo apt update ![](http://s0.cyberciti.org/uploads/faq/2016/07/install-apt-fast-command.jpg) -Install apt-fast shell wrapper: +安装 apt-fast: ``` $ sudo apt-get -y install apt-fast ``` -OR +或者 ``` $ sudo apt -y install apt-fast ``` -Sample outputs: +示例输出: ``` @@ -69,122 +67,122 @@ Get:4 http://01.archive.ubuntu.com/ubuntu xenial/universe amd64 aria2 amd64 1.19 54% [4 aria2 486 kB/1,143 kB 42%] 20.4 kB/s 32s ``` -### Configure apt-fast +### 配置 apt-fast -You will be prompted as follows (a value between 5 and 16 must be entered): +你将会得到下面的提示(必须输入一个5到16的数值): ![](http://s0.cyberciti.org/uploads/faq/2016/07/max-connection-10.jpg) -And: +并且 ![](http://s0.cyberciti.org/uploads/faq/2016/07/apt-fast-confirmation-box.jpg) -You can edit settings directly too: +你可以直接编辑设置: ``` $ sudo vi /etc/apt-fast.conf ``` ->**Please note that this tool is not for slow network connections; it is for fast network connections. If you have a slow connection to the Internet, you are not going to benefit by this tool.** +>**请注意这个工具并不是给慢速网络连接的,它是给快速网络连接的。如果你的网速慢,那么你将无法从这个工具中得到好处。** -### How do I use apt-fast command? +### 我该怎么使用 apt-fast 命令? 
-The syntax is: +语法是: ``` apt-fast command apt-fast [options] command ``` -#### To retrieve new lists of packages using apt-fast +#### 使用apt-fast取回新的包列表 ``` sudo apt-fast update ``` -#### To perform an upgrade using apt-fast +#### 使用apt-fast执行升级 ``` sudo apt-fast upgrade ``` -#### To perform distribution upgrade (release or force kernel upgrade), enter: +#### 执行发行版升级(发布或者强制内核升级),输入: ``` $ sudo apt-fast dist-upgrade ``` -#### To install new packages +#### 安装新的包 -The syntax is: +语法是: ``` sudo apt-fast install pkg ``` -For example, install nginx package, enter: +比如要安装nginx,输入: ``` $ sudo apt-fast install nginx ``` -Sample outputs: +示例输出: ![](http://s0.cyberciti.org/uploads/faq/2016/07/sudo-apt-fast-install.jpg) -#### To remove packages +#### 删除包 ``` $ sudo apt-fast remove pkg $ sudo apt-fast remove nginx ``` -#### To remove packages and its config files too +#### 删除包和它的配置文件 ``` $ sudo apt-fast purge pkg $ sudo apt-fast purge nginx ``` -#### To remove automatically all unused packages, enter: +#### 删除所有未使用的包 ``` $ sudo apt-fast autoremove ``` -#### To Download source archives +#### 下载源码包 ``` $ sudo apt-fast source pkgNameHere ``` -#### To erase downloaded archive files +#### 清理下载的文件 ``` $ sudo apt-fast clean ``` -#### To erase old downloaded archive files +#### 清理旧的下载文件 ``` $ sudo apt-fast autoclean ``` -#### To verify that there are no broken dependencies +#### 验证没有破坏的依赖 ``` $ sudo apt-fast check ``` -#### To download the binary package into the current directory +#### 下载二进制包到当前目录 ``` $ sudo apt-fast download pkgNameHere $ sudo apt-fast download nginx ``` -Sample outputs: +示例输出: ``` [#7bee0c 0B/0B CN:1 DL:0B] @@ -198,7 +196,7 @@ Status Legend: (OK):download completed. 
``` -#### To download and display the changelog for the given package +#### 下载并显示指定包的changelog ``` $ sudo apt-fast changelog pkgNameHere @@ -212,7 +210,7 @@ $ sudo apt-fast changelog nginx via: https://fedoramagazine.org/introducing-flatpak/ 作者:[VIVEK GITE][a] -译者:[zky001](https://github.com/zky001) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 59a3e3f759a1d8a72857f77c2600391f290ba036 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 30 Jul 2016 06:41:09 +0800 Subject: [PATCH 280/471] Update 20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md --- ...ions-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md b/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md index 4824bd7f7d..e36813cdc9 100644 --- a/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md +++ b/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md @@ -3,7 +3,7 @@ 我该如何在Ubuntu Linux 16.04或者14.04 LTS中从多个仓库中下载包来加速apt-get或者apt命令? 
-你需要使用到apt-fast这个脚本。它会通过多个连接同时下载一个包来加速apt-get/apt和aptitude命令。所有的包都会同时下载。它使用aria2c作为默认的下载加速。 +你需要使用到apt-fast这个shell封装器。它会通过多个连接同时下载一个包来加速apt-get/apt和aptitude命令。所有的包都会同时下载。它使用aria2c作为默认的下载加速。 ### 安装 apt-fast 工具 From 89dd3858da807582e575a98b2c97759976278ee2 Mon Sep 17 00:00:00 2001 From: Christopher L Date: Sat, 30 Jul 2016 16:02:11 +0800 Subject: [PATCH 281/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160722 Keeweb A Linux Password Manager.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160722 Keeweb A Linux Password Manager.md b/sources/tech/20160722 Keeweb A Linux Password Manager.md index b829ea107e..b55de2a4cb 100644 --- a/sources/tech/20160722 Keeweb A Linux Password Manager.md +++ b/sources/tech/20160722 Keeweb A Linux Password Manager.md @@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + Keeweb A Linux Password Manager ================================ @@ -52,7 +54,7 @@ I have already posted about two other Linux password managers, Keepass and Encry via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ 作者:[author][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e0a9969aac1cd53e6b47fef5fa67f0f4ae8e92ac Mon Sep 17 00:00:00 2001 From: cposture Date: Sat, 30 Jul 2016 22:09:51 +0800 Subject: [PATCH 282/471] Translating 20%: --- ...18 An Introduction to Mocking in Python.md | 138 +++++++++--------- 1 file changed, 67 insertions(+), 71 deletions(-) diff --git a/sources/tech/20160618 An Introduction to Mocking in Python.md b/sources/tech/20160618 An Introduction to Mocking in Python.md index 182f596431..9c6c912b60 100644 --- a/sources/tech/20160618 An Introduction to Mocking in Python.md +++ 
b/sources/tech/20160618 An Introduction to Mocking in Python.md @@ -1,34 +1,36 @@ -Translating by cposture 2016-07-29 -An Introduction to Mocking in Python +Mock 在 Python 中的使用介绍 ===================================== +http://www.oschina.net/translate/an-introduction-to-mocking-in-python?cmp +本文讲述的是 Python 中 Mock 的使用 -This article is about mocking in python, +**如何在避免测试你的耐心的情景下执行单元测试** -**How to Run Unit Tests Without Testing Your Patience** +通常,我们编写的软件会直接与我们称之为肮脏无比的服务交互。用外行人的话说:交互已设计好的服务对我们的应用程序很重要,但是这会带来我们不希望的副作用,也就是那些在我们自己测试的时候不希望的功能。例如:我们正在写一个社交 app,并且想要测试一下我们 "发布到 Facebook" 的新功能,但是不想每次运行测试集的时候真的发布到 Facebook。 -More often than not, the software we write directly interacts with what we would label as “dirty” services. In layman’s terms: services that are crucial to our application, but whose interactions have intended but undesired side-effects—that is, undesired in the context of an autonomous test run.For example: perhaps we’re writing a social app and want to test out our new ‘Post to Facebook feature’, but don’t want to actually post to Facebook every time we run our test suite. -The Python unittest library includes a subpackage named unittest.mock—or if you declare it as a dependency, simply mock—which provides extremely powerful and useful means by which to mock and stub out these undesired side-effects. +Python 的单元测试库包含了一个名为 unittest.mock 或者可以称之为依赖的子包,简言之为 mock——其提供了极其强大和有用的方法,通过它们可以模拟和打桩我们不希望的副作用。 + >Source | -Note: mock is [newly included][1] in the standard library as of Python 3.3; prior distributions will have to use the Mock library downloadable via [PyPI][2]. +注意:mock [最近收录][1]到了 Python 3.3 的标准库中;先前发布的版本必须通过 [PyPI][2] 下载 Mock 库。 +### ### Fear System Calls -To give you another example, and one that we’ll run with for the rest of the article, consider system calls. 
It’s not difficult to see that these are prime candidates for mocking: whether you’re writing a script to eject a CD drive, a web server which removes antiquated cache files from /tmp, or a socket server which binds to a TCP port, these calls all feature undesired side-effects in the context of your unit-tests. +再举另一个例子,思考一个我们会在余文讨论的系统调用。不难发现,这些系统调用都是主要的模拟对象:无论你是正在写一个可以弹出 CD 驱动的脚本,还是一个用来删除 /tmp 下过期的缓存文件的 Web 服务,这些调用都是在你的单元测试上下文中不希望的副作用。 ->As a developer, you care more that your library successfully called the system function for ejecting a CD as opposed to experiencing your CD tray open every time a test is run. +> 作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数,而不是切身经历 CD 托盘每次在测试执行的时候都打开了。 -As a developer, you care more that your library successfully called the system function for ejecting a CD (with the correct arguments, etc.) as opposed to actually experiencing your CD tray open every time a test is run. (Or worse, multiple times, as multiple tests reference the eject code during a single unit-test run!) +作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数(使用了正确的参数等等),而不是切身经历 CD 托盘每次在测试执行的时候都打开了。(或者更糟糕的是,很多次,在一个单元测试运行期间多个测试都引用了弹出代码!) -Likewise, keeping your unit-tests efficient and performant means keeping as much “slow code” out of the automated test runs, namely filesystem and network access. +同样,保持你的单元测试的效率和性能意味着需要让如此多的 "缓慢代码" 远离自动测试,比如文件系统和网络访问。 -For our first example, we’ll refactor a standard Python test case from original form to one using mock. We’ll demonstrate how writing a test case with mocks will make our tests smarter, faster, and able to reveal more about how the software works. +对于我们首个例子,我们要从原始形式到使用 mock 地重构一个标准 Python 测试用例。我们会演示如何使用 mock 写一个测试用例使我们的测试更加智能、快速,并且能展示更多关于我们软件的工作原理。 -### A Simple Delete Function +### 一个简单的删除函数 -We all need to delete files from our filesystem from time to time, so let’s write a function in Python which will make it a bit easier for our scripts to do so. 
+有时,我们都需要从文件系统中删除文件,因此,让我们在 Python 中写一个可以使我们的脚本更加轻易完成此功能的函数。 ``` #!/usr/bin/env python @@ -40,9 +42,9 @@ def rm(filename): os.remove(filename) ``` -Obviously, our rm method at this point in time doesn’t provide much more than the underlying os.remove method, but our codebase will improve, allowing us to add more functionality here. +很明显,我们的 rm 方法此时无法提供比相关 os.remove 方法更多的功能,但我们的基础代码会逐步改善,允许我们在这里添加更多的功能。 -Let’s write a traditional test case, i.e., without mocks: +让我们写一个传统的测试用例,即,没有使用 mock: ``` #!/usr/bin/env python @@ -61,7 +63,7 @@ class RmTestCase(unittest.TestCase): def setUp(self): with open(self.tmpfilepath, "wb") as f: f.write("Delete me!") - + def test_rm(self): # remove the file rm(self.tmpfilepath) @@ -69,9 +71,11 @@ class RmTestCase(unittest.TestCase): self.assertFalse(os.path.isfile(self.tmpfilepath), "Failed to remove the file.") ``` -Our test case is pretty simple, but every time it is run, a temporary file is created and then deleted. Additionally, we have no way of testing whether our rm method properly passes the argument down to the os.remove call. We can assume that it does based on the test above, but much is left to be desired. +我们的测试用例相当简单,但是当它每次运行的时候,它都会创建一个临时文件并且随后删除。此外,我们没有办法测试我们的 rm 方法是否正确地将我们的参数向下传递给 os.remove 调用。我们可以基于以上的测试认为它做到了,但还有很多需要改进的地方。 -Refactoring with MocksLet’s refactor our test case using mock: +### 使用 Mock 重构 + +让我们使用 mock 重构我们的测试用例: ``` #!/usr/bin/env python @@ -83,7 +87,7 @@ import mock import unittest class RmTestCase(unittest.TestCase): - + @mock.patch('mymodule.os') def test_rm(self, mock_os): rm("any path") @@ -91,10 +95,11 @@ class RmTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any path") ``` -With these refactors, we have fundamentally changed the way that the test operates. Now, we have an insider, an object we can use to verify the functionality of another. 
+使用这些重构,我们从根本上改变了该测试用例的运行方式。现在,我们有一个可以用于验证其他功能的内部对象。 -### Potential Pitfalls +### 潜在陷阱 +第一件需要注意的事情就是,我们使用了用于模拟 mock.patch 方法的装饰器位于mymodule.os One of the first things that should stick out is that we’re using the mock.patch method decorator to mock an object located at mymodule.os, and injecting that mock into our test case method. Wouldn’t it make more sense to just mock os itself, rather than the reference to it at mymodule.os? Well, Python is somewhat of a sneaky snake when it comes to imports and managing modules. At runtime, the mymodule module has its own os which is imported into its own local scope in the module. Thus, if we mock os, we won’t see the effects of the mock in the mymodule module. @@ -135,23 +140,23 @@ import mock import unittest class RmTestCase(unittest.TestCase): - + @mock.patch('mymodule.os.path') @mock.patch('mymodule.os') def test_rm(self, mock_os, mock_path): # set up the mock mock_path.isfile.return_value = False - + rm("any path") - + # test that the remove call was NOT called. self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") - + # make the file 'exist' mock_path.isfile.return_value = True - + rm("any path") - + mock_os.remove.assert_called_with("any path") ``` @@ -190,26 +195,26 @@ import mock import unittest class RemovalServiceTestCase(unittest.TestCase): - + @mock.patch('mymodule.os.path') @mock.patch('mymodule.os') def test_rm(self, mock_os, mock_path): # instantiate our service reference = RemovalService() - + # set up the mock mock_path.isfile.return_value = False - + reference.rm("any path") - + # test that the remove call was NOT called. 
self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") - + # make the file 'exist' mock_path.isfile.return_value = True - + reference.rm("any path") - + mock_os.remove.assert_called_with("any path") ``` @@ -228,13 +233,13 @@ class RemovalService(object): def rm(self, filename): if os.path.isfile(filename): os.remove(filename) - + class UploadService(object): def __init__(self, removal_service): self.removal_service = removal_service - + def upload_complete(self, filename): self.removal_service.rm(filename) ``` @@ -262,29 +267,29 @@ import mock import unittest class RemovalServiceTestCase(unittest.TestCase): - + @mock.patch('mymodule.os.path') @mock.patch('mymodule.os') def test_rm(self, mock_os, mock_path): # instantiate our service reference = RemovalService() - + # set up the mock mock_path.isfile.return_value = False - + reference.rm("any path") - + # test that the remove call was NOT called. self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") - + # make the file 'exist' mock_path.isfile.return_value = True - + reference.rm("any path") - + mock_os.remove.assert_called_with("any path") - - + + class UploadServiceTestCase(unittest.TestCase): @mock.patch.object(RemovalService, 'rm') @@ -292,13 +297,13 @@ class UploadServiceTestCase(unittest.TestCase): # build our dependencies removal_service = RemovalService() reference = UploadService(removal_service) - + # call upload_complete, which should, in turn, call `rm`: reference.upload_complete("my uploaded file") - + # check that it called the rm method of any RemovalService mock_rm.assert_called_with("my uploaded file") - + # check that it called the rm method of _our_ removal_service removal_service.rm.assert_called_with("my uploaded file") ``` @@ -339,39 +344,39 @@ import mock import unittest class RemovalServiceTestCase(unittest.TestCase): - + @mock.patch('mymodule.os.path') @mock.patch('mymodule.os') def test_rm(self, mock_os, mock_path): # 
instantiate our service reference = RemovalService() - + # set up the mock mock_path.isfile.return_value = False - + reference.rm("any path") - + # test that the remove call was NOT called. self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") - + # make the file 'exist' mock_path.isfile.return_value = True - + reference.rm("any path") - + mock_os.remove.assert_called_with("any path") - - + + class UploadServiceTestCase(unittest.TestCase): def test_upload_complete(self, mock_rm): # build our dependencies mock_removal_service = mock.create_autospec(RemovalService) reference = UploadService(mock_removal_service) - + # call upload_complete, which should, in turn, call `rm`: reference.upload_complete("my uploaded file") - + # test that it called the rm method mock_removal_service.rm.assert_called_with("my uploaded file") ``` @@ -427,7 +432,7 @@ To finish up, let’s write a more applicable real-world example, one which we m import facebook class SimpleFacebook(object): - + def __init__(self, oauth_token): self.graph = facebook.GraphAPI(oauth_token) @@ -445,7 +450,7 @@ import mock import unittest class SimpleFacebookTestCase(unittest.TestCase): - + @mock.patch.object(facebook.GraphAPI, 'put_object', autospec=True) def test_post_message(self, mock_put_object): sf = simple_facebook.SimpleFacebook("fake oauth token") @@ -480,12 +485,3 @@ via: http://slviki.com/index.php/2016/06/18/introduction-to-mocking-in-python/ [6]: http://www.voidspace.org.uk/python/mock/mock.html [7]: http://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters [8]: http://www.toptal.com/python - - - - - - - - - From 9f91c8b545c4e0d0f3d9080e8ecbcb1fe4b35167 Mon Sep 17 00:00:00 2001 From: cposture Date: Sat, 30 Jul 2016 23:38:30 +0800 Subject: [PATCH 283/471] Translating 21% --- .../tech/20160618 An Introduction to Mocking in Python.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160618 An Introduction to Mocking in 
Python.md b/sources/tech/20160618 An Introduction to Mocking in Python.md index 9c6c912b60..64c26b8ec5 100644 --- a/sources/tech/20160618 An Introduction to Mocking in Python.md +++ b/sources/tech/20160618 An Introduction to Mocking in Python.md @@ -99,15 +99,19 @@ class RmTestCase(unittest.TestCase): ### 潜在陷阱 -第一件需要注意的事情就是,我们使用了用于模拟 mock.patch 方法的装饰器位于mymodule.os +第一件需要注意的事情就是,我们使用了位于 mymodule.os 且用于模拟对象的 mock.patch 方法装饰器,并且将该 mock 注入到我们的测试用例方法。相比在 mymodule.os 引用它,那么只是模拟 os 本身,会不会更有意义? One of the first things that should stick out is that we’re using the mock.patch method decorator to mock an object located at mymodule.os, and injecting that mock into our test case method. Wouldn’t it make more sense to just mock os itself, rather than the reference to it at mymodule.os? +当然,当涉及到导入和管理模块,Python 的用法非常灵活。在运行时,mymodule 模块拥有被导入到本模块局部作用域的 os。因此,如果我们模拟 os,我们是看不到模拟在 mymodule 模块中的作用的。 Well, Python is somewhat of a sneaky snake when it comes to imports and managing modules. At runtime, the mymodule module has its own os which is imported into its own local scope in the module. Thus, if we mock os, we won’t see the effects of the mock in the mymodule module. +这句话需要深刻地记住: The mantra to keep repeating is this: +> 模拟测试一个项目,只需要了解它用在哪里,而不是它从哪里来。 > Mock an item where it is used, not where it came from. + If you need to mock the tempfile module for myproject.app.MyElaborateClass, you probably need to apply the mock to myproject.app.tempfile, as each module keeps its own imports. With that pitfall out of the way, let’s keep mocking. 
From a236f0fac4ec7d07f9c3bc8924f96e5039d5082b Mon Sep 17 00:00:00 2001 From: maywanting Date: Sun, 31 Jul 2016 00:11:26 +0800 Subject: [PATCH 284/471] finish the translation of 20160721 5 tricks for getting started with Vim.md --- ...1 5 tricks for getting started with Vim.md | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 translated/tech/20160721 5 tricks for getting started with Vim.md diff --git a/translated/tech/20160721 5 tricks for getting started with Vim.md b/translated/tech/20160721 5 tricks for getting started with Vim.md new file mode 100644 index 0000000000..7e9808d6d0 --- /dev/null +++ b/translated/tech/20160721 5 tricks for getting started with Vim.md @@ -0,0 +1,60 @@ +Vim 学习的 5 个技巧 +===================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/BUSINESS_peloton.png?itok=nuMbW9d3) + +多年来,我一直想学 Vim 。如今 Vim 是我最喜欢的 Linux 文本编辑器,也是开发者和系统管理者最喜爱的开源工具。我说的学习,指的是真正意义上的学习。想要精通确实很难,所以我只想要达到熟练的水平。根据我多年使用 Linux 的经验,我会的也仅仅只是打开一个文件,使用上下左右箭头按键来移动光标,切换到 insert 模式,更改一些文本,保存,然后退出。 + +但那只是 Vim 的最基本操作。我可以在终端修改文本文档,但是无法使用任何一个我想象中强大的文本处理功能。这样我无法说明 Vim 完全优于 Pico 和 Nano 。 + +所以到底为什么要学习 Vim ?因为我需要花费相当多的时间用于编辑文本,而且有很大的效率提升空间。为什么不选择 Emacs ,或者是更为现代化的编辑器例如 Atom ?因为 Vim 适合我,至少我有一丁点的使用经验。而且,很重要的一点就是,在我需要处理的系统上很少碰见没有装 Vim 或者它的简化版(Vi)。如果你有强烈的欲望想学习 Emacs ,我希望这些对于 Emacs 同类编辑器的建议能对你有所帮助。 + +花了几周的时间专注提高我的 Vim 使用技巧之后,我想分享的第一个建议就是必须使用工具。虽然这看起来就是明知故问的回答,但事实上比我所想的在代码层面上还要困难。我的大多数工作是在网页浏览器上进行的,而且我每次都得有针对性的用 Gedit 打开并修改一段浏览器之外的文本。Gedit 需要快捷键来启动,所以第一步就是移出快捷键然后替换成 Vim 的快捷键。 + +为了跟更好的学习 Vim ,我尝试了很多。如果你也正想学习,以下列举了一些作为推荐。 + +### Vimtutor + +通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 Vimtutor ,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 上到处都是 Vim 影子,如果你的系统上没有 Vim ,Vimtutor可以很容易从你的包管理器上下载。 + +### GVim + +我知道并不是每个人都看好这个,但就是它让我从使用在终端的 Vim 转战到使用 GVim 来满足我基本编辑需求。反对者表示 GVim 鼓励使用鼠标,而 Vim 主要是为键盘党设计的。但是我能通过 GVim 的下拉菜单快速找到想找的指令,并且 GVim 可以提醒我正确的指令然后通过敲键盘执行它。努力学习一个新的编辑器然后陷入无法解决的困境,这种感觉并不好受。每隔几分钟读一下 man 
出来的文字或者使用搜索引擎来提醒你指令也并不是最好的学习新事务的方法。 + +### Keyboard maps + +当我转战 GVim ,我发现有一个键盘的“作弊表”来提醒我最基础的按键很是便利。网上有很多这种可用的表,你可以下载,打印,然后贴在你身边的某一处地方。但是为了我的笔记本键盘,我选择买一沓便签纸。这些便签纸在美国不到10美元,而且当我使用键盘编辑文本尝试新的命令的时候,可以随时提醒我。 + +### Vimium + +上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium](1)来用增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。当我有意识的使用快捷键切换文本的次数越少时,这说明我越来越多的使用这些快捷键。同样的扩展 Firefox 上也有,例如 [Vimerator](2)。 + +### 人 + +毫无疑问,最好的学习方法就是求助于在你之前探索过的人,让他给你建议、反馈和解决方法。 + +如果你住在一个大城市,那么附近可能会有一个 Vim meetup 组,不然就是在 Freenode IRC 上的 #vim 频道。#vim 频道是 Freenode 上最活跃的频道之一,那上面可以针对你个人的问题来提供帮助。听上面的人发发牢骚或者看看别人尝试解决自己没有遇到过的问题,仅仅是这样我都觉得很有趣。 + +------ + +所以是什么成就了现在?如今便是极好。为它所花的时间是否值得就在于之后它为你节省了多少时间。但是我经常收到意外的惊喜与快乐,当我发现一个新的按键指令来复制、跳过词,或者一些相似的小技巧。每天我至少可以看见,一点点回报,正在逐渐配得上当初的付出。 + +学习 Vim 并不仅仅只有这些建议,还有很多。我很喜欢指引别人去 [Vim Advantures](3),它是一种只能使用 Vim 的快捷键的在线游戏。而且某天我发现了一个非常神奇的虚拟学习工具,在 [Vimgifts.com](4),那上面有明确的你想要的:用一个 gif 动图来描述,使用一点点 Vim 操作来达到他们想要的。 + +你有花时间学习 Vim 吗?或者有遇到任何关于按键复杂的插件的问题吗?什么适合你,你认为这些努力值得吗?效率的提高有达到你的预期?分享你们的故事在下面的评论区吧。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/tips-getting-started-vim + +作者:[Jason Baker ][a] +译者:[maywanting](https://github.com/maywanting) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jason-baker +[1]: https://github.com/philc/vimium +[2]: http://www.vimperator.org/ +[3]: http://vim-adventures.com/ +[4]: http://vimgifs.com/ From 15019d3beb05f47dfafc302ec178301df9506e5e Mon Sep 17 00:00:00 2001 From: maywanting Date: Sun, 31 Jul 2016 00:22:24 +0800 Subject: [PATCH 285/471] change the small sign --- ...1 5 tricks for getting started with Vim.md | 62 ------------------- ...1 5 tricks for getting started with Vim.md | 14 ++--- 2 files changed, 7 insertions(+), 69 deletions(-) delete mode 100644 sources/tech/20160721 5 tricks for getting started with Vim.md diff --git 
a/sources/tech/20160721 5 tricks for getting started with Vim.md b/sources/tech/20160721 5 tricks for getting started with Vim.md deleted file mode 100644 index 417b51ed45..0000000000 --- a/sources/tech/20160721 5 tricks for getting started with Vim.md +++ /dev/null @@ -1,62 +0,0 @@ -maywanting - -5 tricks for getting started with Vim -===================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/BUSINESS_peloton.png?itok=nuMbW9d3) - -For years, I've wanted to learn Vim, now my preferred Linux text editor and a favorite open source tool among developers and system administrators. And when I say learn, I mean really learn. Master is probably too strong a word, but I'd settle for advanced proficiency. For most of my years using Linux, my skillset included the ability to open a file, use the arrow keys to navigate up and down, switch into insert mode, change some text, save, and exit. - -But that's like minimum-viable-Vim. My skill level enabled me edit text documents from the terminal, but hasn't actually empowered me with any of the text editing super powers I've always imagined were possible. And it didn't justify using Vim over the totally capable Pico or Nano. - -So why learn Vim at all? Because I do spend an awful lot of time editing text, and I know I could be more efficient at it. And why not Emacs, or a more modern editor like Atom? Because Vim works for me, and at least I have some minimal experience in it. And perhaps, importantly, because it's rare that I encounter a system that I'm working on which doesn't have Vim or it's less-improved cousin (vi) available on it already. If you've always had a desire to learn Emacs, more power to you—I hope the Emacs-analog of these tips will prove useful to you, too. - -A few weeks in to this concentrated effort to up my Vim-use ability, the number one tip I have to share is that you actually must use the tool. 
While it seems like a piece of advice straight from Captain Obvious, I actually found it considerably harder than I expected to stay in the program. Most of my work happens inside of a web browser, and I had to untrain my trigger-like opening of (Gedit) every time I needed to edit a block of text outside of a browser. Gedit had made its way to my quick launcher, and so step one was removing this shortcut and putting Vim there instead. - -I've tried a number of things that have helped me learn. Here's a few of them I would recommend if you're looking to learn as well. - -### Vimtutor - -Sometimes the best place to get started isn't far from the application itself. I found Vimtutor, a tiny application that is basically a tutorial in a text file that you edit as you learn, to be as helpful as anything else in showing me the basics of the commands I had skipped learning through the years. Vimtutor is typically found everywhere Vim is, and is an easy install from your package manager if it's not already on your system. - -### GVim - -I know not everyone will agree with this one, but I found it useful to stop using the version of Vim that lives in my terminal and start using GVim for my basic editing needs. Naysayers will argue that it encourages using the mouse in an environment designed for keyboards, but I found it helpful to be able to quickly find the command I was looking for in a drop-down menu, reminding myself of the correct command, and then executing it with a keyboard. The alternative was often frustration at the inability to figure out how to do something, which is not a good feeling to be under constantly as you struggle to learn a new editor. No, stopping every few minutes to read a man page or use a search engine to remind you of a key sequence is not the best way to learn something new. - -### Keyboard maps - -Along with switching to GVim, I also found it handy to have a keyboard "cheat sheet" handy to remind me of the basic keystrokes. 
There are many available on the web that you can download, print, and set beside your station, but I opted for buying a set of stickers for my laptop keyboard. They were less than ten dollars US and had the added bonus of being a subtle reminder every time I used the laptop to at least try out one new thing as I edited. - -### Vimium - -As I mentioned, I live in the web browser most of the day. One of the tricks I've found helpful to reinforce the Vim way of navigation is to use [Vimium][1], an open source extension for Chrome that makes Chrome mimick the shortcuts used by Vim. I've found the fewer times I switch contexts for the keyboard shortcuts I'm using, the more likely I am to actually use them. Similar extensions, like [Vimerator][2], exist for Firefox. - -### Other human beings - -Without a doubt, there's no better way to get help learning something new than to get advice, feedback, and solutions from other people who have gone down a path before you. - -If you live in a larger urban area, there might be a Vim meetup group near you. Otherwise, the place to be is the #vim channel on Freenode IRC. One of the more popular channels on Freenode, the #vim channel is always full of helpful individuals willing to offer help with your problems. I find it interesting just to listen to the chatter and see what sorts of problems others are trying to solve to see what I'm missing out on. - ------- - -And so what to make of this effort? So far, so good. The time spent has probably yet to pay for itself in terms of time saved, but I'm always mildly surprised and amused when I find myself with a new reflex, jumping words with the right keypress sequence, or some similarly small feat. I can at least see that every day, the investment is bringing itself a little closer to payoff. - -These aren't the only tricks for learning Vim, by far. I also like to point people towards [Vim Adventures][3], an online game in which you navigate using the Vim keystrokes. 
And just the other day I came across a marvelous visual learning tool at [Vimgifs.com][4], which is exactly what you might expect it to be: illustrated examples with Vim so small they fit nicely in a gif. - -Have you invested the time to learn Vim, or really, any program with a keyboard-heavy interface? What worked for you, and, did you think the effort was worth it? Has your productivity changed as much as you thought it would? Lets share stories in the comments below. - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/tips-getting-started-vim - -作者:[Jason Baker ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jason-baker -[1]: https://github.com/philc/vimium -[2]: http://www.vimperator.org/ -[3]: http://vim-adventures.com/ -[4]: http://vimgifs.com/ diff --git a/translated/tech/20160721 5 tricks for getting started with Vim.md b/translated/tech/20160721 5 tricks for getting started with Vim.md index 7e9808d6d0..3dbc910d8d 100644 --- a/translated/tech/20160721 5 tricks for getting started with Vim.md +++ b/translated/tech/20160721 5 tricks for getting started with Vim.md @@ -3,19 +3,19 @@ Vim 学习的 5 个技巧 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/BUSINESS_peloton.png?itok=nuMbW9d3) -多年来,我一直想学 Vim 。如今 Vim 是我最喜欢的 Linux 文本编辑器,也是开发者和系统管理者最喜爱的开源工具。我说的学习,指的是真正意义上的学习。想要精通确实很难,所以我只想要达到熟练的水平。根据我多年使用 Linux 的经验,我会的也仅仅只是打开一个文件,使用上下左右箭头按键来移动光标,切换到 insert 模式,更改一些文本,保存,然后退出。 +多年来,我一直想学 Vim。如今 Vim 是我最喜欢的 Linux 文本编辑器,也是开发者和系统管理者最喜爱的开源工具。我说的学习,指的是真正意义上的学习。想要精通确实很难,所以我只想要达到熟练的水平。根据我多年使用 Linux 的经验,我会的也仅仅只是打开一个文件,使用上下左右箭头按键来移动光标,切换到 insert 模式,更改一些文本,保存,然后退出。 -但那只是 Vim 的最基本操作。我可以在终端修改文本文档,但是无法使用任何一个我想象中强大的文本处理功能。这样我无法说明 Vim 完全优于 Pico 和 Nano 。 +但那只是 Vim 的最基本操作。我可以在终端修改文本文档,但是无法使用任何一个我想象中强大的文本处理功能。这样我无法说明 Vim 
完全优于 Pico 和 Nano。 -所以到底为什么要学习 Vim ?因为我需要花费相当多的时间用于编辑文本,而且有很大的效率提升空间。为什么不选择 Emacs ,或者是更为现代化的编辑器例如 Atom ?因为 Vim 适合我,至少我有一丁点的使用经验。而且,很重要的一点就是,在我需要处理的系统上很少碰见没有装 Vim 或者它的简化版(Vi)。如果你有强烈的欲望想学习 Emacs ,我希望这些对于 Emacs 同类编辑器的建议能对你有所帮助。 +所以到底为什么要学习 Vim?因为我需要花费相当多的时间用于编辑文本,而且有很大的效率提升空间。为什么不选择 Emacs,或者是更为现代化的编辑器例如 Atom?因为 Vim 适合我,至少我有一丁点的使用经验。而且,很重要的一点就是,在我需要处理的系统上很少碰见没有装 Vim 或者它的简化版(Vi)。如果你有强烈的欲望想学习 Emacs,我希望这些对于 Emacs 同类编辑器的建议能对你有所帮助。 花了几周的时间专注提高我的 Vim 使用技巧之后,我想分享的第一个建议就是必须使用工具。虽然这看起来就是明知故问的回答,但事实上比我所想的在代码层面上还要困难。我的大多数工作是在网页浏览器上进行的,而且我每次都得有针对性的用 Gedit 打开并修改一段浏览器之外的文本。Gedit 需要快捷键来启动,所以第一步就是移出快捷键然后替换成 Vim 的快捷键。 -为了跟更好的学习 Vim ,我尝试了很多。如果你也正想学习,以下列举了一些作为推荐。 +为了跟更好的学习 Vim,我尝试了很多。如果你也正想学习,以下列举了一些作为推荐。 ### Vimtutor -通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 Vimtutor ,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 上到处都是 Vim 影子,如果你的系统上没有 Vim ,Vimtutor可以很容易从你的包管理器上下载。 +通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 Vimtutor,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 上到处都是 Vim 影子,如果你的系统上没有 Vim,Vimtutor可以很容易从你的包管理器上下载。 ### GVim @@ -23,11 +23,11 @@ Vim 学习的 5 个技巧 ### Keyboard maps -当我转战 GVim ,我发现有一个键盘的“作弊表”来提醒我最基础的按键很是便利。网上有很多这种可用的表,你可以下载,打印,然后贴在你身边的某一处地方。但是为了我的笔记本键盘,我选择买一沓便签纸。这些便签纸在美国不到10美元,而且当我使用键盘编辑文本尝试新的命令的时候,可以随时提醒我。 +当我转战 GVim,我发现有一个键盘的“作弊表”来提醒我最基础的按键很是便利。网上有很多这种可用的表,你可以下载,打印,然后贴在你身边的某一处地方。但是为了我的笔记本键盘,我选择买一沓便签纸。这些便签纸在美国不到10美元,而且当我使用键盘编辑文本尝试新的命令的时候,可以随时提醒我。 ### Vimium -上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium](1)来用增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。当我有意识的使用快捷键切换文本的次数越少时,这说明我越来越多的使用这些快捷键。同样的扩展 Firefox 上也有,例如 [Vimerator](2)。 +上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium](1) 来用增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。当我有意识的使用快捷键切换文本的次数越少时,这说明我越来越多的使用这些快捷键。同样的扩展 Firefox 上也有,例如 [Vimerator](2)。 ### 人 From e0444ab80940a97472434d38e7f235e9f4a6a219 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Sun, 31 Jul 2016 00:26:03 +0800 Subject: [PATCH 286/471] Create 20160706 What is Git.md --- translated/tech/20160706 What is Git.md | 118 
++++++++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 translated/tech/20160706 What is Git.md diff --git a/translated/tech/20160706 What is Git.md b/translated/tech/20160706 What is Git.md new file mode 100644 index 0000000000..8a43ef717d --- /dev/null +++ b/translated/tech/20160706 What is Git.md @@ -0,0 +1,118 @@ +什么是Git +=========== + +欢迎阅读本系列关于Git版本控制系统的教程!通过本文的介绍,你将会了解到Git的用途及谁该使用Git。 + +如果你刚步入开源的世界,你很有可能会遇到一些在Git上托管代码或者通过Git发布可用版本的开源软件。事实上,不管你知道与否,你都在使用基于Git进行版本管理的软件:Linux内核(就算你没有在手机或者电脑上使用Linux,你正在访问的网站也是运行在Linux系统上的),Firefox,Chrome等其他很多项目都在Git上和世界各地开发者共享他们的代码。 + +换个角度来说,你是否仅仅通过Git就可以和其他人共享你的代码?你是否可以在家里或者企业里私有化的使用Git?你必须要通过一个GitHub账号来使用Git吗?为什么要使用Git呢?Git的优势又是什么?Git是我唯一的选择吗?这些关于Git的疑问常常会把我们搞得一头雾水。 + +因此,忘记你以前所知的Git,我们重新走进Git世界的大门。 + +### 什么是版本控制系统? + +现在市面上有很多不同的版本控制系统:CVS,SVN,Mercurial,Fossil 当然还有 Git。 + +很多像 GitHub 和 GitLab 这样的服务是以Git为基础的,但是你也可以只使用Git而无需使用其他额外的服务。这意味着你可以以私有或者公有的方式来使用Git。 + +如果你曾经和其他人有过任何数字方面的合作,你就会知道传统版本管理的工作流程。开始是很简单的:你有一个原始的版本,你把这个版本发送给你的同事,他们在接收到的版本上做了些修改,现在你们有两个版本了,然后他们把他们手上修改过的版本发回来给你。你把他们的修改合并到你手上的版本中,现在两个版本又合并成一个最新的版本了。 + +然后,你修改了你手上最新的版本,同时,你的同事也修改了他们手上合并前的版本。现在你们有3个不同的版本了,分别是合并后最新的版本,你修改后的版本,你同事手上继续修改过的版本。至此,你们的版本管理工作开始变得越来越混乱了。 + +正如 Jason van Gumster 在他的文章《[即使是艺术家也需要版本控制][1]》中所指出的,这种情况在个人独自工作时也同样常见。无论是艺术家还是科学家,在一些想法上开发出实验版本都是比较常见的;在你的项目中,可能有某个版本大获成功,把项目推向一个新的高度,也可能有某个版本惨遭失败。因此,最终你不可避免的会创建出一堆名为project_justTesting.kdenlive、project_betterVersion.kdenlive、project_best_FINAL.kdenlive、project_FINAL-alternateVersion.kdenlive等类似名称的文件。 + +不管你是修改一个for循环,还是一些简单的文本编辑,一个好的版本控制系统都会让我们的生活更加的轻松。 + +### Git快照 + +Git会为每个项目创建快照,并且为项目的每个版本都保存一个唯一的快照。 + +如果你将项目带领到了一个错误的方向上,你可以回退到上一个正确的版本,并且开始尝试另一个可行的方向。 + +如果你是和别人合作开发,当有人向你发送他们的修改时,你可以将这些修改合并到你的工作分支中,然后你的同事就可以获取到合并后的最新版本,并在此基础上继续工作。 + +Git并不是魔法,因此冲突还是会发生的("你修改了某文件的最后一行,但是我把这行整行都删除了;我们怎样处理这些冲突呢?"),但是总体而言,Git会为你保留所有更改的历史版本,甚至允许并行版本。这为你保留了以任何方式处理冲突的能力。 + +### 分布式Git +
+在不同的机器上为同一个项目工作是一件复杂的事情。因为在你开始工作时,你想要获得项目的最新版本,然后此基础上进行修改,最后向你的同事共享这些改动。传统的方法是通过笨重的在线文件共享服务或者老旧的电邮附件,但是这两种方式都是效率低下且容易出错。 + +Git天生是为分布式工作设计的。如果你要参与到某个项目中,你可以克隆(clone)该项目的Git仓库,然后就像这个项目只有你本地一个版本一样对项目进行修改。最后使用一些简单的命令你就可以拉取(pull)其他开发者的修改,或者你可以把你的修改推送(push)给别人。现在不用担心谁手上的是最新的版本,或者谁的版本又存放在哪里等这些问题了。全部人都是在本地进行开发,然后向共同的目标推送或者拉取更新。(或者不是共同的目标,这取决于项目的开发方式)。 + +### Git 界面 + +最原始的Git是运行在Linux终端上的应用软件。然而,得益于Git是开源的,并且拥有良好的设计,世界各地的开发者都可以为Git设计不同的问接入界面。 + +Git完全是免费的,并且附带在Linux,BSD,Illumos 和其他类unix系统中,Git命令看起来像: + +``` +$ git --version +git version 2.5.3 +``` + +可能最著名的Git访问界面是基于网页的,像GitHub,开源的GitLab,Savannah,BitBucket和SourceForge这些网站都是基于网页端的Git界面。这些站点为面向公众和面向社会的开源软件提供了最大限度的代码托管服务。在一定程度上,基于浏览器的图形界面(GUI)可以尽量的减缓Git的学习曲线。下面的GitLab接口的截图: + +![](https://opensource.com/sites/default/files/0_gitlab.png) + +再者,第三方Git服务提供商或者独立开发者甚至在Git的基础上开发出不是基于HTML的定制化前端接口。此类接口让你可以不用打开浏览器就可以方便的使用Git进行版本管理。其中对用户最透明的方式是直接集成到文件管理器中。KDE文件管理器,Dolphin 可以直接在目录中显示Git状态,甚至支持提交,推送和拉取更新操作。 + +![](https://opensource.com/sites/default/files/0_dolphin.jpg) + +[Sparkleshare][2]使用Git作为其Dropbox类文件共享接口的基础。 + +![](https://opensource.com/sites/default/files/0_sparkleshare_1.jpg) + +想了解更多的内容,可以查看[Git wiki][3],此页面中展示了很多Git的图形界面项目。 + +### 谁应该使用Git? + +就是你!我们更应该关心的问题是什么时候使用Git?和用Git来干嘛? + +### 我应该在什么时候使用Git呢?我要用Git来干嘛呢? 
+ +想更深入的学习Git,我们必须比平常考虑更多关于文件格式的问题。 + +Git是为了管理源代码而设计的,在大多数编程语言中,源代码就意味着一行行的文本。当然,Git并不知道你把这些文本当成是源代码还是下一部伟大的美式小说。因此,只要文件内容是以文本构成的,使用Git来跟踪和管理其版本就是一个很好的选择了。 + +但是什么是文本呢?如果你在像Libre Office这类办公软件中编辑一些内容,通常并不会产生纯文本内容。因为通常复杂的应用软件都会对原始的文本内容进行一层封装,就如把原始文本内容用XML标记语言包装起来,然后封装在Zip容器中。这种对原始文本内容进行一层封装的做法可以保证当你把文件发送给其他人时,他们可以看到你在办公软件中编辑的内容及特定的文本效果。奇怪的是,虽然通常你的需求可能会很复杂,就像保存[Kdenlive][4]项目文件,或者保存从[Inkscape][5]导出的SVG文件,但是,事实上使用Git管理像XML文本这样的纯文本内容是最简单的。 + +如果你在使用Unix系统,你可以使用 file 命令来查看文件内容构成: + +``` +$ file ~/path/to/my-file.blah +my-file.blah: ASCII text +$ file ~/path/to/different-file.kra: Zip data (MIME type "application/x-krita") +``` + +如果还是不确定,你可以使用 head 命令来查看文件内容: + +``` +$ head ~/path/to/my-file.blah +``` + +如果输出的文本你基本能看懂,这个文件就很有可能是文本文件。如果你仅仅在一堆乱码中偶尔看到几个熟悉的字符,那么这个文件就可能不是文本文件了。 + +准确的说:Git可以管理其他格式的文件,但是它会把这些文件当成二进制大对象(blob)。两者的区别是,在文本文件中,Git可以明确的告诉你在这两个快照(或者说提交)间有3行是修改过的。但是如果你在两个提交间对一张图片进行了编辑操作,Git会怎么指出这种修改呢?实际上,因为图片并不是以某种可以增加或删除的有意义的文本构成,因此Git并不能明确的描述这种变化。当然我个人是非常希望图片的编辑可以像把文本"丑陋的蓝绿色"修改成"漂浮着蓬松白云的天蓝色"一样的简单,但是事实上图片的编辑并没有这么简单。 + +经常有人在Git上记录png图标、电子表格或者流程图这类二进制大型对象。尽管我们知道在Git上管理此类大型文件并不直观,但是,如果你需要使用Git来管理此类文件,你也并不需要过多的担心。如果你参与的项目同时生成文本文件和二进制大文件对象(如视频游戏中常见的场景,这些和源代码同样重要的图像和音频材料),那么你有两条路可以走:要么开发出你自己的解决方案,就如使用指向共享网络驱动器的指针;要么使用Git插件,如Joey Hess开发的[git annex][6], 或者 [Git-Media][7] 项目。 + +你看,Git真的是一个任何人都可以使用的工具。它是你进行文件版本管理的一个强大而且好用的工具,同时它并没有你开始认为的那么可怕。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/resources/what-is-git + +作者:[Seth Kenlon ][a] +译者:[cvsher](https://github.com/cvsher) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://opensource.com/life/16/2/version-control-isnt-just-programmers +[2]: http://sparkleshare.org/ +[3]: https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools#Graphical_Interfaces +[4]: https://opensource.com/life/11/11/introduction-kdenlive
+[5]: http://inkscape.org/ +[6]: https://git-annex.branchable.com/ +[7]: https://github.com/alebedev/git-media From 82d69780088789c52fd132ca3c2e8b69820ba9d7 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Sun, 31 Jul 2016 00:38:49 +0800 Subject: [PATCH 287/471] Update 20160706 What is Git.md --- translated/tech/20160706 What is Git.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20160706 What is Git.md b/translated/tech/20160706 What is Git.md index 8a43ef717d..f2f01b8d39 100644 --- a/translated/tech/20160706 What is Git.md +++ b/translated/tech/20160706 What is Git.md @@ -41,7 +41,7 @@ Git天生是为分布式工作设计的。如果你要参与到某个项目中 ### Git 界面 -最原始的Git是运行在Linux终端上的应用软件。然而,得益于Git是开源的,并且拥有良好的设计,世界各地的开发者都可以为Git设计不同的问接入界面。 +最原始的Git是运行在Linux终端上的应用软件。然而,得益于Git是开源的,并且拥有良好的设计,世界各地的开发者都可以为Git设计不同的接入界面。 Git完全是免费的,并且附带在Linux,BSD,Illumos 和其他类unix系统中,Git命令看起来像: @@ -54,7 +54,7 @@ git version 2.5.3 ![](https://opensource.com/sites/default/files/0_gitlab.png) -再者,第三方Git服务提供商或者独立开发者甚至在Git的基础上开发出不是基于HTML的定制化前端接口。此类接口让你可以不用打开浏览器就可以方便的使用Git进行版本管理。其中对用户最透明的方式是直接集成到文件管理器中。KDE文件管理器,Dolphin 可以直接在目录中显示Git状态,甚至支持提交,推送和拉取更新操作。 +再者,第三方Git服务提供商或者独立开发者甚至可以在Git的基础上开发出不是基于HTML的定制化前端接口。此类接口让你可以不用打开浏览器就可以方便的使用Git进行版本管理。其中对用户最透明的方式是直接集成到文件管理器中。KDE文件管理器,Dolphin 可以直接在目录中显示Git状态,甚至支持提交,推送和拉取更新操作。 ![](https://opensource.com/sites/default/files/0_dolphin.jpg) From 5260df0109f620e8362516743bb1fcb7464a37ec Mon Sep 17 00:00:00 2001 From: maywanting Date: Sun, 31 Jul 2016 01:13:30 +0800 Subject: [PATCH 288/471] the last version --- ...20160721 5 tricks for getting started with Vim.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/translated/tech/20160721 5 tricks for getting started with Vim.md b/translated/tech/20160721 5 tricks for getting started with Vim.md index 3dbc910d8d..67d64ced2f 100644 --- a/translated/tech/20160721 5 tricks for getting started with Vim.md +++ b/translated/tech/20160721 5 tricks for getting started with Vim.md 
@@ -5,7 +5,7 @@ Vim 学习的 5 个技巧 多年来,我一直想学 Vim。如今 Vim 是我最喜欢的 Linux 文本编辑器,也是开发者和系统管理者最喜爱的开源工具。我说的学习,指的是真正意义上的学习。想要精通确实很难,所以我只想要达到熟练的水平。根据我多年使用 Linux 的经验,我会的也仅仅只是打开一个文件,使用上下左右箭头按键来移动光标,切换到 insert 模式,更改一些文本,保存,然后退出。 -但那只是 Vim 的最基本操作。我可以在终端修改文本文档,但是无法使用任何一个我想象中强大的文本处理功能。这样我无法说明 Vim 完全优于 Pico 和 Nano。 +但那只是 Vim 的最基本操作。Vim 可以让我在终端修改文本,但是它并没有任何一个我想象中强大的文本处理功能。这样我无法说明 Vim 完全优于 Pico 和 Nano。 所以到底为什么要学习 Vim?因为我需要花费相当多的时间用于编辑文本,而且有很大的效率提升空间。为什么不选择 Emacs,或者是更为现代化的编辑器例如 Atom?因为 Vim 适合我,至少我有一丁点的使用经验。而且,很重要的一点就是,在我需要处理的系统上很少碰见没有装 Vim 或者它的简化版(Vi)。如果你有强烈的欲望想学习 Emacs,我希望这些对于 Emacs 同类编辑器的建议能对你有所帮助。 @@ -15,11 +15,11 @@ Vim 学习的 5 个技巧 ### Vimtutor -通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 Vimtutor,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 上到处都是 Vim 影子,如果你的系统上没有 Vim,Vimtutor可以很容易从你的包管理器上下载。 +通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 Vimtutor,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 上到处都是 Vim 影子,如果你的系统上没有 Vimtutor,Vimtutor 可以很容易从你的包管理器上下载。 ### GVim -我知道并不是每个人都看好这个,但就是它让我从使用在终端的 Vim 转战到使用 GVim 来满足我基本编辑需求。反对者表示 GVim 鼓励使用鼠标,而 Vim 主要是为键盘党设计的。但是我能通过 GVim 的下拉菜单快速找到想找的指令,并且 GVim 可以提醒我正确的指令然后通过敲键盘执行它。努力学习一个新的编辑器然后陷入无法解决的困境,这种感觉并不好受。每隔几分钟读一下 man 出来的文字或者使用搜索引擎来提醒你指令也并不是最好的学习新事务的方法。 +我知道并不是每个人都认同这个,但就是它让我从使用在终端的 Vim 转战到使用 GVim 来满足我基本编辑需求。反对者表示 GVim 鼓励使用鼠标,而 Vim 主要是为键盘党设计的。但是我能通过 GVim 的下拉菜单快速找到想找的指令,并且 GVim 可以提醒我正确的指令然后通过敲键盘执行它。努力学习一个新的编辑器然后陷入无法解决的困境,这种感觉并不好受。每隔几分钟读一下 man 出来的文字或者使用搜索引擎来提醒你指令也并不是最好的学习新事物的方法。 ### Keyboard maps @@ -27,7 +27,7 @@ Vim 学习的 5 个技巧 ### Vimium -上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium](1) 来用增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。当我有意识的使用快捷键切换文本的次数越少时,这说明我越来越多的使用这些快捷键。同样的扩展 Firefox 上也有,例如 [Vimerator](2)。 +上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium][1] 来增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。当我有意识的使用快捷键切换文本的次数越少时,这说明我越来越多的使用这些快捷键。同样的扩展 Firefox 上也有,例如 [Vimperator][2]。 ### 人 @@ -39,9 +39,9 @@ Vim 学习的 5 个技巧 所以是什么成就了现在?如今便是极好。为它所花的时间是否值得就在于之后它为你节省了多少时间。但是我经常收到意外的惊喜与快乐,当我发现一个新的按键指令来复制、跳过词,或者一些相似的小技巧。每天我至少可以看见,一点点回报,正在逐渐配得上当初的付出。 -学习 Vim 并不仅仅只有这些建议,还有很多。我很喜欢指引别人去 [Vim Advantures](3),它是一种只能使用 Vim 的快捷键的在线游戏。而且某天我发现了一个非常神奇的虚拟学习工具,在 [Vimgifts.com](4),那上面有明确的你想要的:用一个 gif 动图来描述,使用一点点 Vim 操作来达到他们想要的。 +学习 Vim 并不仅仅只有这些建议,还有很多。我很喜欢指引别人去 [Vim Adventures][3],它是一种只能使用 Vim 的快捷键的在线游戏。而且某天我发现了一个非常神奇的可视化学习工具 [Vimgifs.com][4],它正如你所期望的那样:用一个个小巧的 gif 动图来演示 Vim 的操作示例。 -你有花时间学习 Vim 吗?或者有遇到任何关于按键复杂的插件的问题吗?什么适合你,你认为这些努力值得吗?效率的提高有达到你的预期?分享你们的故事在下面的评论区吧。 +你有花时间学习 Vim 吗?或者有在其他以键盘操作为主的程序上投入过时间吗?那些经过你努力后掌握的工具,你认为这些努力值得吗?效率的提高有达到你的预期?分享你们的故事在下面的评论区吧。 -------------------------------------------------------------------------------- From 28f0a31b6e1057c66a4971c26fb3a0495ab3324f Mon Sep 17 00:00:00 2001 From: sevenot <469147394@qq.com> Date: Sun, 31 Jul 2016 01:45:28 +0800 Subject: [PATCH 289/471] sevenot translating --- ...ux Terminal Emulator With Multiple Terminals In One Window.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md b/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md index eb50d539a1..2f1b393c80 100644 --- a/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md +++ b/sources/tech/20160724 Terminator A Linux Terminal Emulator With Multiple Terminals In One Window.md @@ -1,3 +1,4 @@ +sevenot translating Terminator A Linux Terminal Emulator With Multiple Terminals In One Window ============================================================================= From 35dea8011103be3f36c44ac055622f2e9163f74c Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Sun, 31 Jul 2016 07:24:44 +0800 Subject: [PATCH 290/471] Delete 20160706 What is Git.md --- sources/tech/20160706 What is Git.md | 123 --------------------------- 1 file changed, 123 deletions(-) delete mode 100644 sources/tech/20160706 What is Git.md diff --git a/sources/tech/20160706 What is Git.md b/sources/tech/20160706 What is Git.md deleted file
mode 100644 index f98de42fd3..0000000000 --- a/sources/tech/20160706 What is Git.md +++ /dev/null @@ -1,123 +0,0 @@ -translating by cvsher -What is Git -=========== - -Welcome to my series on learning how to use the Git version control system! In this introduction to the series, you will learn what Git is for and who should use it. - -If you're just starting out in the open source world, you're likely to come across a software project that keeps its code in, and possibly releases it for use, by way of Git. In fact, whether you know it or not, you're certainly using software right now that is developed using Git: the Linux kernel (which drives the website you're on right now, if not the desktop or mobile phone you're accessing it on), Firefox, Chrome, and many more projects share their codebase with the world in a Git repository. - -On the other hand, all the excitement and hype over Git tends to make things a little muddy. Can you only use Git to share your code with others, or can you use Git in the privacy of your own home or business? Do you have to have a GitHub account to use Git? Why use Git at all? What are the benefits of Git? Is Git the only option? - -So forget what you know or what you think you know about Git, and let's take it from the beginning. - -### What is version control? - -Git is, first and foremost, a version control system (VCS). There are many version control systems out there: CVS, SVN, Mercurial, Fossil, and, of course, Git. - -Git serves as the foundation for many services, like GitHub and GitLab, but you can use Git without using any other service. This means that you can use Git privately or publicly. - -If you have ever collaborated on anything digital with anyone, then you know how it goes. It starts out simple: you have your version, and you send it to your partner. They make some changes, so now there are two versions, and send the suggestions back to you. 
You integrate their changes into your version, and now there is one version again. - -Then it gets worse: while you change your version further, your partner makes more changes to their version. Now you have three versions; the merged copy that you both worked on, the version you changed, and the version your partner has changed. - -As Jason van Gumster points out in his article, [Even artists need version control][1], this syndrome tends to happen in individual settings as well. In both art and science, it's not uncommon to develop a trial version of something; a version of your project that might make it a lot better, or that might fail miserably. So you create file names like project_justTesting.kdenlive and project_betterVersion.kdenlive, and then project_best_FINAL.kdenlive, but with the inevitable allowance for project_FINAL-alternateVersion.kdenlive, and so on. - -Whether it's a change to a for loop or an editing change, it happens to the best of us. That is where a good version control system makes life easier. - -### Git snapshots - -Git takes snapshots of a project, and stores those snapshots as unique versions. - -If you go off in a direction with your project that you decide was the wrong direction, you can just roll back to the last good version and continue along an alternate path. - -If you're collaborating, then when someone sends you changes, you can merge those changes into your working branch, and then your collaborator can grab the merged version of the project and continue working from the new current version. - -Git isn't magic, so conflicts do occur ("You changed the last line of the book, but I deleted that line entirely; how do we resolve that?"), but on the whole, Git enables you to manage the many potential variants of a single work, retaining the history of all the changes, and even allows for parallel versions.
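The conflict scenario just described can be sketched in a few lines of code. This is a toy illustration only (Git's real merge operates on diff hunks via a three-way diff3 strategy, not naive line-by-line comparison), and the function and sample data below are invented for the example:

```python
# Toy sketch of a line-level three-way merge (invented for illustration;
# Git's real merge works on diff hunks, not line-by-line like this).
# Assumes all three versions have the same number of lines, for brevity.
def three_way_merge(base, mine, theirs):
    merged, conflicts = [], []
    for i, (b, m, t) in enumerate(zip(base, mine, theirs)):
        if m == t:            # both sides agree (or neither touched the line)
            merged.append(m)
        elif m == b:          # only "theirs" changed this line
            merged.append(t)
        elif t == b:          # only "mine" changed this line
            merged.append(m)
        else:                 # both changed it differently: a conflict
            conflicts.append(i)
            merged.append(m)  # keep "mine", but report the conflict
    return merged, conflicts

base   = ["line one", "line two", "line three"]
mine   = ["line one", "line 2",   "line three"]
theirs = ["line one", "line two", "LINE THREE"]
print(three_way_merge(base, mine, theirs))
# (['line one', 'line 2', 'LINE THREE'], [])
```

When the two sides edit different lines, the merge is automatic; only when both edit the same line does the tool have to ask a human, which is exactly the "last line of the book" situation above.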
- -### Git distributes - -Working on a project on separate machines is complex, because you want to have the latest version of a project while you work, makes your own changes, and share your changes with your collaborators. The default method of doing this tends to be clunky online file sharing services, or old school email attachments, both of which are inefficient and error-prone. - -Git is designed for distributed development. If you're involved with a project you can clone the project's Git repository, and then work on it as if it was the only copy in existence. Then, with a few simple commands, you can pull in any changes from other contributors, and you can also push your changes over to someone else. Now there is no confusion about who has what version of a project, or whose changes exist where. It is all locally developed, and pushed and pulled toward a common target (or not, depending on how the project chooses to develop). - -### Git interfaces - -In its natural state, Git is an application that runs in the Linux terminal. However, as it is well-designed and open source, developers all over the world have designed other ways to access it. - -It is free, available to anyone for $0, and comes in packages on Linux, BSD, Illumos, and other Unix-like operating systems. It looks like this: - -``` -$ git --version -git version 2.5.3 -``` - -Probably the most well-known Git interfaces are web-based: sites like GitHub, the open source GitLab, Savannah, BitBucket, and SourceForge all offer online code hosting to maximise the public and social aspect of open source along with, in varying degrees, browser-based GUIs to minimise the learning curve of using Git. 
This is what the GitLab interface looks like: - -![](https://opensource.com/sites/default/files/0_gitlab.png) - -Additionally, it is possible that a Git service or independent developer may even have a custom Git frontend that is not HTML-based, which is particularly handy if you don't live with a browser eternally open. The most transparent integration comes in the form of file manager support. The KDE file manager, Dolphin, can show the Git status of a directory, and even generate commits, pushes, and pulls. - -![](https://opensource.com/sites/default/files/0_dolphin.jpg) - -[Sparkleshare][2] uses Git as a foundation for its own Dropbox-style file sharing interface. - -![](https://opensource.com/sites/default/files/0_sparkleshare_1.jpg) - -For more, see the (long) page on the official [Git wiki][3] listing projects with graphical interfaces to Git. - -### Who should use Git? - -You should! The real question is when? And what for? - -### When should I use Git, and what should I use it for? - -To get the most out of Git, you need to think a little bit more than usual about file formats. - -Git is designed to manage source code, which in most languages consists of lines of text. Of course, Git doesn't know if you're feeding it source code or the next Great American Novel, so as long as it breaks down to text, Git is a great option for managing and tracking versions. - -But what is text? If you write something in an office application like Libre Office, then you're probably not generating raw text. There is usually a wrapper around complex applications like that which encapsulate the raw text in XML markup and then in a zip container, as a way to ensure that all of the assets for your office file are available when you send that file to someone else. Strangely, though, something that you might expect to be very complex, like the save files for a [Kdenlive][4] project, or an SVG from [Inkscape][5], are actually raw XML files that can easily be managed by Git. 
- -If you use Unix, you can check to see what a file is made of with the file command: - -``` -$ file ~/path/to/my-file.blah -my-file.blah: ASCII text -$ file ~/path/to/different-file.kra: Zip data (MIME type "application/x-krita") -``` - -If unsure, you can view the contents of a file with the head command: - -``` -$ head ~/path/to/my-file.blah -``` - -If you see text that is mostly readable by you, then it is probably a file made of text. If you see garbage with some familiar text characters here and there, it is probably not made of text. - -Make no mistake: Git can manage other formats of files, but it treats them as blobs. The difference is that in a text file, two Git snapshots (or commits, as we call them) might be, say, three lines different from each other. If you have a photo that has been altered between two different commits, how can Git express that change? It can't, really, because photographs are not made of any kind of sensible text that can just be inserted or removed. I wish photo editing were as easy as just changing some text from "ugly greenish-blue" to "blue-with-fluffy-clouds" but it truly is not. - -People check in blobs, like PNG icons or a spreadsheet or a flowchart, to Git all the time, so if you're working in Git then don't be afraid to do that. Know that it's not sensible to do that with huge files, though. If you are working on a project that does generate both text files and large blobs (a common scenario with video games, which have equal parts source code to graphical and audio assets), then you can do one of two things: either invent your own solution, such as pointers to a shared network drive, or use a Git add-on like Joey Hess's excellent [git annex][6], or the [Git-Media][7] project. - -So you see, Git really is for everyone. It is a great way to manage versions of your files, it is a powerful tool, and it is not as scary as it first seems.
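The `file` and `head` checks above can be approximated in code. Below is a minimal sketch of the common heuristic "no NUL byte in the first block means text"; it is close in spirit to Git's own binary detection, while `file(1)` consults a much richer magic database. The file names are invented for the demo:

```python
# Heuristic sketch of the text-vs-binary check done above with file/head:
# treat a file as text if its first block contains no NUL byte.
def looks_like_text(path, blocksize=8000):
    with open(path, "rb") as f:
        return b"\x00" not in f.read(blocksize)

with open("note.txt", "w") as f:
    f.write("plain old ASCII text\n")
with open("blob.png", "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR")  # PNG header bytes

print(looks_like_text("note.txt"))  # True
print(looks_like_text("blob.png"))  # False
```

A file that passes this check will diff line-by-line nicely in Git; one that fails it will be stored and compared as a blob.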
- --------------------------------------------------------------------------------- - -via: https://opensource.com/resources/what-is-git - -作者:[Seth Kenlon ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]: https://opensource.com/life/16/2/version-control-isnt-just-programmers -[2]: http://sparkleshare.org/ -[3]: https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools#Graphical_Interfaces -[4]: https://opensource.com/life/11/11/introduction-kdenlive -[5]: http://inkscape.org/ -[6]: https://git-annex.branchable.com/ -[7]: https://github.com/alebedev/git-media - - - - From fafb7ca2a5cb862f22497fd03586cd18bd9566a2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 31 Jul 2016 11:52:41 +0800 Subject: [PATCH 291/471] =?UTF-8?q?20160731-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160509 Android vs. iPhone Pros and Cons.md | 86 +++++++++++++++++++ 1 file changed, 86 insertions(+) create mode 100644 sources/talk/20160509 Android vs. iPhone Pros and Cons.md diff --git a/sources/talk/20160509 Android vs. iPhone Pros and Cons.md b/sources/talk/20160509 Android vs. iPhone Pros and Cons.md new file mode 100644 index 0000000000..92feeba307 --- /dev/null +++ b/sources/talk/20160509 Android vs. iPhone Pros and Cons.md @@ -0,0 +1,86 @@ +Android vs. iPhone: Pros and Cons +=================================== + +>When comparing Android vs. iPhone, clearly Android has certain advantages even as the iPhone is superior in some key ways. But ultimately, which is better? + +The question of Android vs. iPhone is a personal one. + +Take myself, for example. I'm someone who has used both Android and the iPhone iOS. I'm well aware of the strengths of both platforms along with their weaknesses. 
Because of this, I decided to share my perspective regarding these two mobile platforms. Additionally, we'll take a look at my impressions of the new Ubuntu mobile platform and where it stacks up. + +### What iPhone gets right + +Even though I'm a full time Android user these days, I do recognize the areas where the iPhone got it right. First, Apple has a better record in updating their devices. This is especially true for older devices running iOS. With Android, if it's not a “Google blessed” Nexus...it better be a higher end carrier supported phone. Otherwise, you're going to find updates are either sparse or non-existent. + +Another area where the iPhone does well is apps availability. Expanding on that: iPhone apps almost always have a cleaner look to them. This isn't to say that Android apps are ugly, rather, they may not have an expected flow and consistency found with iOS. Two examples of exclusivity and great iOS-only layout would have to be [Dark Sky][1] (weather) and [Facebook Paper][2]. + +Then there is the backup process. Android can, by default, back stuff up to Google. But that doesn't help much with application data! By contrast, iCloud can essentially make a full backup of your iOS device. + +### Where iPhone loses me + +The biggest indisputable issue I have with the iPhone is more of a hardware limitation than a software one. That issue is storage. + +Look, with most Android phones, I can buy a smaller capacity phone and then add an SD card later. This does two things: First, I can use the SD card to store a lot of media files. Second, I can even use the SD card to store "some" of my apps. Apple has nothing that will touch this. + +Another area where the iPhone loses me is in the lack of choice it provides. Backing up your device? Hope you like iTunes or iCloud. For someone like myself who uses Linux, this means my ONLY option would be to use iCloud. 
+ +To be ultimately fair, there are additional solutions for your iPhone if you're willing to jailbreak it. But that's not what this article is about. Same goes for rooting Android. This article is addressing a vanilla setup for both platforms. + +Finally, let us not forget this little treat – [iTunes decides to delete a user's music][3] because it was seen as a duplication of Apple Music contents...or something along those lines. Not iPhone specific? I disagree, as that music would have very well ended up onto the iPhone at some point. I can say with great certainty that in no universe would I ever put up with this kind of nonsense! + +![](http://www.datamation.com/imagesvr_ce/5552/mobile-abstract-icon-200x150.jpg) +>The Android vs. iPhone debate depends on what features matter the most to you. + +### What Android gets right + +The biggest thing Android gives me that the iPhone doesn't: choice. Choices in applications, devices and overall layout of how my phone works. + +I love desktop widgets! To iPhone users, they may seem really silly. But I can tell you that they save me from opening up applications as I can see the desired data without the extra hassle. Another similar feature I love is being able to install custom launchers instead of my phone's default! + +Finally, I can utilize tools like [Airdroid][4] and [Tasker][5] to add full computer-like functionality to my smart phone. Airdroid allows me treat my Android phone like a computer with file management and SMS with anyone – this becomes a breeze to use with my mouse and keyboard. Tasker is awesome in that I can setup "recipes" to connect/disconnect, put my phone into meeting mode or even put itself into power saving mode when I set the parameters to do so. I can even set it to launch applications when I arrive at specific destinations. + +### Where Android loses me + +Backup options are limited to specific user data, not a full clone of your phone. 
Without rooting, you're either left out in the wind or you must look to the Android SDK for solutions. Expecting casual users to either root their phone or run the SDK for a complete (I mean everything) Android backup is a joke. + +Yes, Google's backup service will backup Google app data, along with other related customizations. But it's nowhere near as complete as what we see with the iPhone. To accomplish something similar to what the iPhone enjoys, I've found you're going to either be rooting your Android phone or connecting it to a Windows PC to utilize some random program. + +To be fair, however, I believe Nexus owners benefit from a [full backup service][6] that is device specific. Sorry, but Google's default backup is not cutting it. Same applies for adb backups via your PC – they don't always restore things as expected. + +Wait, it gets better. Now after a lot of failed let downs and frustration, I found that there was one app that looked like it "might" offer a glimmer of hope, it's called Helium. Unlike other applications I found to be misleading and frustrating with their limitations, [Helium][7] initially looked like it was the backup application Google should have been offering all along -- emphasis on "looked like." Sadly, it was a huge let down. Not only did I need to connect it to my computer for a first run, it didn't even work using their provided Linux script. After removing their script, I settling for a good old fashioned adb backup...to my Linux PC. Fun facts: You will need to turn on a laundry list of stuff in developer tools, plus if you run the Twilight app, that needs to be turned off. It took me a bit to put this together when the backup option for adb on my phone wasn't responding. + +At the end of the day, Android has ample options for non-rooted users to backup superficial stuff like contacts, SMS and other data easily. But a deep down phone backup is best left to a wired connection and adb from my experience. 
+ +### Ubuntu will save us? + +With the good and the bad examined between the two major players in the mobile space, there's a lot of hope that we're going to see good things from Ubuntu on the mobile front. Well, thus far, it's been pretty lackluster. + +I like what the developers are doing with the OS and I certainly love the idea of a third option for mobile besides iPhone and Android. Unfortunately, though, it's not that popular on the phone and the tablet received a lot of bad press due to subpar hardware and a lousy demonstration that made its way onto YouTube. + +To be fair, I've had subpar experiences with iPhone and Android, too, in the past. So this isn't a dig on Ubuntu. But until it starts showing up with a ready to go ecosystem of functionality that matches what Android and iOS offer, it's not something I'm terribly interested in yet. At a later date, perhaps, I'll feel like the Ubuntu phones are ready to meet my needs. + +### Android vs. iPhone bottom line: Why Android wins long term + +Despite its painful shortcomings, Android treats me like an adult. It doesn't lock me into only two methods for backing up my data. Yes, some of Android's limitations are due to the fact that it's focused on letting me choose how to handle my data. But, I also get to choose my own device, add storage on a whim. Android enables me to do a lot of cool stuff that the iPhone simply isn't capable of doing. + +At its core, Android gives non-root users greater access to the phone's functionality. For better or worse, it's a level of freedom that I think people are gravitating towards. Now there are going to be many of you who swear by the iPhone thanks to efforts like the [libimobiledevice][8] project. But take a long hard look at all the stuff Apple blocks Linux users from doing...then ask yourself – is it really worth it as a Linux user? Hit the Comments, share your thoughts on Android, iPhone or Ubuntu. 
+ +------------------------------------------------------------------------------ + +via: http://www.datamation.com/mobile-wireless/android-vs.-iphone-pros-and-cons.html + +作者:[Matt Hartley][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.datamation.com/author/Matt-Hartley-3080.html +[1]: http://darkskyapp.com/ +[2]: https://www.facebook.com/paper/ +[3]: https://blog.vellumatlanta.com/2016/05/04/apple-stole-my-music-no-seriously/ +[4]: https://www.airdroid.com/ +[5]: http://tasker.dinglisch.net/ +[6]: https://support.google.com/nexus/answer/2819582?hl=en +[7]: https://play.google.com/store/apps/details?id=com.koushikdutta.backup&hl=en +[8]: http://www.libimobiledevice.org/ + From 419bb28718c5277cf84db526207d2cf63f23dde1 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 31 Jul 2016 12:01:35 +0800 Subject: [PATCH 292/471] =?UTF-8?q?20160731-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Use Awk Special Patterns begin and end.md | 166 ++++++++++++++++++ 1 file changed, 166 insertions(+) create mode 100644 sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md diff --git a/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md b/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md new file mode 100644 index 0000000000..f64e834ca1 --- /dev/null +++ b/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md @@ -0,0 +1,166 @@ +Learn How to Use Awk Special Patterns ‘BEGIN and END’ – Part 9 +=============================================================== + +In Part 8 of this Awk series, we introduced some powerful Awk command features, that is variables, numeric expressions and assignment operators. 
+
+As we advance, in this segment, we shall cover more Awk features, and that is the special patterns: `BEGIN` and `END`.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Patterns-BEGIN-and-END.png)
+>Learn Awk Patterns BEGIN and END
+
+These special features will prove helpful as we try to expand on and explore more methods of building complex Awk operations.
+
+To get started, let us cast our minds back to the introduction of the Awk series; remember, when we started this series, I pointed out that the general syntax of running an Awk command is:
+
+```
+# awk 'script' filenames
+```
+
+And in the syntax above, the Awk script has the form:
+
+```
+/pattern/ { actions }
+```
+
+When you consider the pattern in the script, it is normally a regular expression; additionally, the pattern can also be one of the special patterns `BEGIN` and `END`. Therefore, we can also write an Awk command in the form below:
+
+```
+awk '
+BEGIN { actions }
+/pattern/ { actions }
+/pattern/ { actions }
+……….
+END { actions }
+' filenames
+```
+
+In the event that you use the special patterns `BEGIN` and `END` in an Awk script, this is what each of them means:
+
+- `BEGIN` pattern: means that Awk will execute the action(s) specified in `BEGIN` once before any input lines are read.
+- `END` pattern: means that Awk will execute the action(s) specified in `END` before it actually exits.
+
+And the flow of execution of an Awk command script which contains these special patterns is as follows:
+
+- When the `BEGIN` pattern is used in a script, all the actions for `BEGIN` are executed once before any input line is read.
+- Then an input line is read and parsed into the different fields.
+- Next, each of the non-special patterns specified is compared with the input line for a match; when a match is found, the action(s) for that pattern are then executed. This stage will be repeated for all the patterns you have specified.
+- Next, stages 2 and 3 are repeated for all input lines.
+- When all input lines have been read and dealt with, if you specified the `END` pattern, its action(s) will be executed.
+
+You should always remember this sequence of execution when working with the special patterns to achieve the best results in an Awk operation.
+
+To understand it all, let us illustrate using the example from Part 8, about the list of domains owned by Tecmint, as stored in a file named domains.txt.
+
+```
+news.tecmint.com
+tecmint.com
+linuxsay.com
+windows.tecmint.com
+tecmint.com
+news.tecmint.com
+tecmint.com
+linuxsay.com
+tecmint.com
+news.tecmint.com
+tecmint.com
+linuxsay.com
+windows.tecmint.com
+tecmint.com
+```
+
+```
+$ cat ~/domains.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png)
+>View Contents of File
+
+In this example, we want to count the number of times the domain `tecmint.com` is listed in the file domains.txt. So we wrote a small shell script to help us do that, using the idea of variables, numeric expressions and assignment operators, which has the following content:
+
+```
+#!/bin/bash
+for file in $@; do
+if [ -f $file ] ; then
+#print out filename
+echo "File is: $file"
+#print a number incrementally for every line containing tecmint.com
+awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file
+else
+#print error info in case input is not a file
+echo "$file is not a file, please specify a file." >&2 && exit 1
+fi
+done
+#terminate script with exit code 0 in case of successful execution
+exit 0
+```
+
+Let us now employ the two special patterns `BEGIN` and `END` in the Awk command in the script above as follows:
+
+We shall alter the script:
+
+```
+awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file
+```
+
+To:
+
+```
+awk ' BEGIN { print "The number of times tecmint.com appears in the file is:" ; }
+/^tecmint.com/ { counter+=1 ; }
+END { printf "%s\n", counter ; }
+' $file
+```
+
+After making the changes to the Awk command, the complete shell script now looks like this:
+
+```
+#!/bin/bash
+for file in $@; do
+if [ -f $file ] ; then
+#print out filename
+echo "File is: $file"
+#print the total number of times tecmint.com appears in the file
+awk ' BEGIN { print "The number of times tecmint.com appears in the file is:" ; }
+/^tecmint.com/ { counter+=1 ; }
+END { printf "%s\n", counter ; }
+' $file
+else
+#print error info in case input is not a file
+echo "$file is not a file, please specify a file." >&2 && exit 1
+fi
+done
+#terminate script with exit code 0 in case of successful execution
+exit 0
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-BEGIN-and-END-Patterns.png)
+>Awk BEGIN and END Patterns
+
+When we run the script above, it will first of all print the location of the file domains.txt, then the Awk command script is executed, where the `BEGIN` special pattern helps us print out the message “`The number of times tecmint.com appears in the file is:`” before any input lines are read from the file.
+
+Then our pattern, `/^tecmint.com/`, is compared against every input line, and the action `{ counter+=1 ; }` is executed for each matching input line; this counts the number of times `tecmint.com` appears in the file.
+
+Finally, the `END` pattern will print the total number of times the domain `tecmint.com` appears in the file.
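
If you want to experiment with `BEGIN` and `END` on their own, the shell wrapper is not required; the same counting logic runs as a single Awk command. Here is a minimal, self-contained sketch (the four-line sample file below is hypothetical test data, not the domains.txt used in this series):

```shell
# Create a tiny sample file (hypothetical data, for illustration only).
printf 'tecmint.com\nnews.tecmint.com\ntecmint.com\nlinuxsay.com\n' > sample.txt

# BEGIN runs once before any input is read, the /pattern/ action runs
# once per matching line, and END runs once after the last line.
awk 'BEGIN { print "The number of times tecmint.com appears in the file is:" }
/^tecmint\.com/ { counter += 1 }
END { printf "%s\n", counter }' sample.txt
```

Run against the sample file above, this prints the message from the `BEGIN` block followed by `2`, since only the two lines that begin with `tecmint.com` match the pattern.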
+
+```
+$ ./script.sh ~/domains.txt
+```
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-to-Count-Number-of-Times-String-Appears.png)
+>Script to Count Number of Times String Appears
+
+To conclude, we walked through more Awk features, exploring the concepts of the special patterns `BEGIN` and `END`.
+
+As I pointed out before, these Awk features will help us build more complex text filtering operations. There is more to cover under Awk features, and in Part 10, we shall approach the idea of Awk built-in variables, so stay connected.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/learn-use-awk-special-patterns-begin-and-end/
+
+作者:[Aaron Kili][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对ID](https://github.com/校对ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/

From 4a6dae274e72492ba7c2bfcbcdbecf3e7dc54541 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sun, 31 Jul 2016 12:07:54 +0800
Subject: [PATCH 293/471] =?UTF-8?q?20160731-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Learn How to Use Awk Built-in Variables.md | 119 ++++++++++++++++++
 1 file changed, 119 insertions(+)
 create mode 100644 sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md

diff --git a/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md b/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md
new file mode 100644
index 0000000000..4cbecac8c9
--- /dev/null
+++ b/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md
@@ -0,0 +1,119 @@
+Learn How to Use Awk Built-in Variables – Part 10
+=================================================
+
+As we uncover more Awk features, in this part of the series, we shall walk through the concept of built-in
variables in Awk. There are two types of variables you can use in Awk: user-defined variables, which we covered in Part 8, and built-in variables.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Built-in-Variables-Examples.png)
+>Awk Built in Variables Examples
+
+Built-in variables have values already defined in Awk, but we can also carefully alter those values. The built-in variables include:
+
+- `FILENAME` : current input file name (do not change the variable name)
+- `NR` : number of the current input line (that is, input line 1, 2, 3 and so on; do not change the variable name)
+- `NF` : number of fields in the current input line (do not change the variable name)
+- `OFS` : output field separator
+- `FS` : input field separator
+- `ORS` : output record separator
+- `RS` : input record separator
+
+Let us proceed to illustrate the use of some of the Awk built-in variables above:
+
+To read the filename of the current input file, you can use the `FILENAME` built-in variable as follows:
+
+```
+$ awk ' { print FILENAME } ' ~/domains.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-FILENAME-Variable.png)
+>Awk FILENAME Variable
+
+You will realize that the filename is printed out for each input line; that is the default behavior of Awk when you use the `FILENAME` built-in variable.
+
+When using `NR` to count the number of lines (records) in an input file, remember that it also counts the empty lines, as we shall see in the example below.
+
+When we view the file domains.txt using the cat command, it contains 14 lines with text and 2 empty lines:
+
+```
+$ cat ~/domains.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Print-Contents-of-File.png)
+>Print Contents of File
+
+
+```
+$ awk ' END { print "Number of records in file is: ", NR } ' ~/domains.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Lines.png)
+>Awk Count Number of Lines
+
+To count the number of fields in a record or line, we use the NF built-in variable as follows:
+
+```
+$ cat ~/names.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Contents.png)
+>List File Contents
+
+```
+$ awk '{ print "Record:",NR,"has",NF,"fields" ; }' ~/names.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Fields-in-File.png)
+>Awk Count Number of Fields in File
+
+Next, you can also specify an input field separator using the FS built-in variable; it defines how Awk divides input lines into fields.
+
+The default value for FS is space and tab, but we can change the value of FS to any character that will instruct Awk to divide input lines accordingly.
+
+There are two methods to do this:
+
+- one method is to use the FS built-in variable
+- and the second is to invoke the -F Awk option
+
+Consider the file /etc/passwd on a Linux system; the fields in this file are divided using the : character, so we can specify it as the new input field separator when we want to filter out certain fields, as in the following examples:
+
+We can use the `-F` option as follows:
+
+```
+$ awk -F':' '{ print $1, $4 ;}' /etc/passwd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Filter-Fields-in-Password-File.png)
+>Awk Filter Fields in Password File
+
+Optionally, we can also take advantage of the FS built-in variable as below:
+
+```
+$ awk ' BEGIN { FS=":" ; } { print $1, $4 ; } ' /etc/passwd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Filter-Fields-in-File-Using-Awk.png)
+>Filter Fields in File Using Awk
+
+To specify an output field separator, use the OFS built-in variable; it defines how the output fields will be separated, using the character we choose, as in the example below:
+
+```
+$ awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Add-Separator-to-Field-in-File.png)
+>Add Separator to Field in File
+
+In this Part 10, we have explored the idea of using Awk built-in variables, which come with predefined values. We can also change these values, though it is not recommended to do so unless you know what you are doing, with adequate understanding.
+
+After this, we shall progress to cover how we can use shell variables in Awk command operations, so stay connected to Tecmint.
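
As a small taste of that upcoming topic (this example is my own sketch, not part of the original series), the most common way to hand a shell variable to an Awk program is the `-v` option, which copies the shell value into an Awk variable before any input is read:

```shell
# A shell variable we want Awk to be able to read.
min_fields=3

# -v name=value defines the Awk variable "min_fields" before input is read;
# here we keep only the lines that have at least that many fields.
printf 'one two\none two three\nfour five six seven\n' \
  | awk -v min_fields="$min_fields" 'NF >= min_fields { print $0 }'
```

With the sample input above, only the two lines containing three or more fields are printed.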
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/awk-built-in-variables-examples/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From b6f2ecb0404edbd7074f9ffd3da07bd5f421a55f Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 31 Jul 2016 12:19:36 +0800 Subject: [PATCH 294/471] =?UTF-8?q?20160731-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Audio files with Octave 4.0.0 on Ubuntu.md | 138 ++++++++++++++++++ 1 file changed, 138 insertions(+) create mode 100644 sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md diff --git a/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md b/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md new file mode 100644 index 0000000000..80300deb6b --- /dev/null +++ b/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md @@ -0,0 +1,138 @@ +Scientific Audio Processing, Part I - How to read and write Audio files with Octave 4.0.0 on Ubuntu +================ + +Octave, the equivalent software to Matlab in Linux, has a number of functions and commands that allow the acquisition, recording, playback and digital processing of audio signals for entertainment applications, research, medical, or any other science areas. 
In this tutorial, we will use Octave V4.0.0 in Ubuntu and will start with reading audio files, then proceed to writing and playing signals to emulate sounds used in a wide range of activities.
+
+Note that the main focus of this tutorial is not to install or learn to use established audio processing software, but rather to understand how it works from the point of view of design and audio engineering.
+
+### Prerequisites
+
+The first step is to install Octave. Run the following commands in a terminal to add the Octave PPA in Ubuntu and install the software.
+
+```
+sudo apt-add-repository ppa:octave/stable
+sudo apt-get update
+sudo apt-get install octave
+```
+
+### Step 1: Opening Octave
+
+In this step, we open the software by clicking on its icon; we can change the work directory by clicking on the File Browser dropdown.
+
+![](https://www.howtoforge.com/images/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/initial.png)
+
+### Step 2: Audio Info
+
+The command "audioinfo" shows us relevant information about the audio file that we will process.
+
+```
+>> info = audioinfo ('testing.ogg')
+```
+
+![](https://www.howtoforge.com/images/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/audioinfo.png)
+
+### Step 3: Reading an audio File
+
+In this tutorial, I will read and use ogg files, for which it is feasible to read characteristics like the sampling rate, audio type (stereo or mono), number of channels, etc. I should mention that, for the purposes of this tutorial, all the commands used will be executed in the terminal window of Octave. First, we have to save the ogg file in a variable. Note: it's important that the file is in the work path of Octave.
+
+```
+>> file='yourfile.ogg'
+```
+
+```
+>> [M, fs] = audioread(file)
+```
+
+Where M is a matrix of one or two columns, depending on the number of channels, and fs is the sampling frequency.
+
+![](https://www.howtoforge.com/images/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/reading.png)
+
+![](https://www.howtoforge.com/images/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/matrix.png)
+
+![](https://www.howtoforge.com/images/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/big/frequency.png)
+
+There are some options that we can use for reading audio files, such as:
+
+```
+>> [y, fs] = audioread (filename, samples)
+
+>> [y, fs] = audioread (filename, datatype)
+
+>> [y, fs] = audioread (filename, samples, datatype)
+```
+
+Where samples specifies the starting and ending frames, and datatype specifies the data type to return. We can assign values to these arguments; for example, to read the first second of audio:
+
+```
+>> samples = [1, fs]
+
+>> [y, fs] = audioread (filename, samples)
+```
+
+And about datatype:
+
+```
+>> [y,Fs] = audioread(filename,'native')
+```
+
+If the value is 'native' then the type of data depends on how the data is stored in the audio file.
+
+### Step 4: Writing an audio file
+
+Creating the ogg file:
+
+For this purpose, we are going to generate an ogg file with values from a cosine. The sampling frequency that I will use is 44100 samples per second and the file will last for 10 seconds. The frequency of the cosine signal is 440 Hz.
+
+```
+>> filename='cosine.ogg';
+>> fs=44100;
+>> t=0:1/fs:10;
+>> w=2*pi*440*t;
+>> signal=cos(w);
+>> audiowrite(filename, signal, fs);
+```
+
+This creates a file named 'cosine.ogg' in our workspace that contains the cosine signal.
+
+![](https://www.howtoforge.com/images/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/cosinefile.png)
+
+If we play the 'cosine.ogg' file then this will reproduce a 440 Hz tone, which corresponds to the musical note 'A'. If we want to see the values saved in the file we have to 'read' the file with the 'audioread' function. In a further tutorial, we will see how to write an audio file with two channels.
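
As a side note (this summary is mine, not from the original tutorial), the Step 4 snippet is simply sampling a continuous 440 Hz cosine at the rate fs:

```latex
s(t) = \cos(2\pi f t), \quad f = 440~\mathrm{Hz}
\qquad\Longrightarrow\qquad
s[n] = \cos\!\left(\frac{2\pi f n}{f_s}\right), \quad f_s = 44100~\mathrm{Hz},\ n = 0, 1, \ldots, 10 f_s
```

Since the range `0:1/fs:10` includes both endpoints, the `signal` vector holds 10 × 44100 + 1 = 441001 samples.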
+ +### Step 5: Playing an audio file + +Octave, by default, has an audio player that we can use for testing purposes. Use the following functions as example: + +``` + >> [y,fs]=audioread('yourfile.ogg'); +>> player=audioplayer(y, fs, 8) + + scalar structure containing the fields: + + BitsPerSample = 8 + CurrentSample = 0 + DeviceID = -1 + NumberOfChannels = 1 + Running = off + SampleRate = 44100 + TotalSamples = 236473 + Tag = + Type = audioplayer + UserData = [](0x0) +>> play(player); +``` + +In the next parts of the tutorial, we will see advanced audio processing features and possible use cases for scientific and commercial use. + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-read-and-write-audio-files-with-octave-4-in-ubuntu/ + +作者:[David Duarte][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/intent/follow?original_referer=https%3A%2F%2Fwww.howtoforge.com%2Ftutorial%2Fhow-to-read-and-write-audio-files-with-octave-4-in-ubuntu%2F&ref_src=twsrc%5Etfw®ion=follow_link&screen_name=howtoforgecom&tw_p=followbutton + + From db95b0e90111e70434454575b4790dc94557bce4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 31 Jul 2016 13:16:10 +0800 Subject: [PATCH 295/471] =?UTF-8?q?20160731-0=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 长文,机器学习 --- ...ce portfolio - Machine learning project.md | 845 ++++++++++++++++++ 1 file changed, 845 insertions(+) create mode 100644 sources/tech/20160705 Building a data science portfolio - Machine learning project.md diff --git a/sources/tech/20160705 Building a data science portfolio - Machine learning project.md b/sources/tech/20160705 Building a data science portfolio - Machine learning project.md new file mode 100644 index 
0000000000..ca57b2e83c --- /dev/null +++ b/sources/tech/20160705 Building a data science portfolio - Machine learning project.md @@ -0,0 +1,845 @@ +Building a data science portfolio: Machine learning project +=========================================================== + +>This is the third in a series of posts on how to build a Data Science Portfolio. If you like this and want to know when the next post in the series is released, you can [subscribe at the bottom of the page][1]. + +Data science companies are increasingly looking at portfolios when making hiring decisions. One of the reasons for this is that a portfolio is the best way to judge someone’s real-world skills. The good news for you is that a portfolio is entirely within your control. If you put some work in, you can make a great portfolio that companies are impressed by. + +The first step in making a high-quality portfolio is to know what skills to demonstrate. The primary skills that companies want in data scientists, and thus the primary skills they want a portfolio to demonstrate, are: + +- Ability to communicate +- Ability to collaborate with others +- Technical competence +- Ability to reason about data +- Motivation and ability to take initiative + +Any good portfolio will be composed of multiple projects, each of which may demonstrate 1-2 of the above points. This is the third post in a series that will cover how to make a well-rounded data science portfolio. In this post, we’ll cover how to make the second project in your portfolio, and how to build an end to end machine learning project. At the end, you’ll have a project that shows your ability to reason about data, and your technical competence. [Here’s][2] the completed project if you want to take a look. + +### An end to end project + +As a data scientist, there are times when you’ll be asked to take a dataset and figure out how to [tell a story with it][3]. 
In times like this, it’s important to communicate very well, and walk through your process. Tools like Jupyter notebook, which we used in a previous post, are very good at helping you do this. The expectation here is that the deliverable is a presentation or document summarizing your findings.
+
+However, there are other times when you’ll be asked to create a project that has operational value. A project with operational value directly impacts the day-to-day operations of a company, and will be used more than once, and often by multiple people. A task like this might be “create an algorithm to forecast our churn rate”, or “create a model that can automatically tag our articles”. In cases like this, storytelling is less important than technical competence. You need to be able to take a dataset, understand it, then create a set of scripts that can process that data. It’s often important that these scripts run quickly, and use minimal system resources like memory. It’s very common that these scripts will be run several times, so the deliverable becomes the scripts themselves, not a presentation. The deliverable is often integrated into operational flows, and may even be user-facing.
+
+The main components of building an end to end project are:
+
+- Understanding the context
+- Exploring the data and figuring out the nuances
+- Creating a well-structured project, so it’s easy to integrate into operational flows
+- Writing high-performance code that runs quickly and uses minimal system resources
+- Documenting the installation and usage of your code well, so others can use it
+
+In order to effectively create a project of this kind, we’ll need to work with multiple files. Using a text editor like [Atom][4] or an IDE like [PyCharm][5] is highly recommended. These tools will allow you to jump between files, and edit files of different types, like markdown files, Python files, and csv files. Structuring your project so it’s easy to version control and upload to collaborative coding tools like [Github][6] is also useful.
+
+![](https://www.dataquest.io/blog/images/end_to_end/github.png)
+>This project on Github.
+
+We’ll use our editing tools along with libraries like [Pandas][7] and [scikit-learn][8] in this post. We’ll make extensive use of Pandas [DataFrames][9], which make it easy to read in and work with tabular data in Python.
+
+### Finding good datasets
+
+A good dataset for an end to end portfolio project can be hard to find. [The dataset][10] needs to be sufficiently large that memory and performance constraints come into play. It also needs to potentially be operationally useful. For instance, this dataset, which contains data on the admission criteria, graduation rates, and future earnings of graduates for US colleges, would be a great dataset to use to tell a story. However, as you think about the dataset, it becomes clear that there isn’t enough nuance to build a good end to end project with it. For example, you could tell someone their potential future earnings if they went to a specific college, but that would be a quick lookup without enough nuance to demonstrate technical competence. You could also figure out if colleges with higher admissions standards tend to have graduates who earn more, but that would be more storytelling than operational.
+
+These memory and performance constraints tend to come into play when you have more than a gigabyte of data, and when you have some nuance to what you want to predict, which involves running algorithms over the dataset.
+
+A good operational dataset enables you to build a set of scripts that transform the data, and answer dynamic questions. A good example would be a dataset of stock prices. You would be able to predict the prices for the next day, and keep feeding new data to the algorithm as the markets close. This would enable you to make trades, and potentially even profit. This wouldn’t be telling a story – it would be adding direct value.
+
+Some good places to find datasets like this are:
+
+- [/r/datasets][11] – a subreddit that has hundreds of interesting datasets.
+- [Google Public Datasets][12] – public datasets available through Google BigQuery.
+- [Awesome datasets][13] – a list of datasets, hosted on Github.
+
+As you look through these datasets, think about what questions someone might want answered with the dataset, and think if those questions are one-time (“how did housing prices correlate with the S&P 500?”), or ongoing (“can you predict the stock market?”). The key here is to find questions that are ongoing, and require the same code to be run multiple times with different inputs (different data).
+
+For the purposes of this post, we’ll look at [Fannie Mae Loan Data][14]. Fannie Mae is a government sponsored enterprise in the US that buys mortgage loans from other lenders. It then bundles these loans up into mortgage-backed securities and resells them. This enables lenders to make more mortgage loans, and creates more liquidity in the market. This theoretically leads to more homeownership, and better loan terms. From a borrower’s perspective, things stay largely the same, though.
+
+Fannie Mae releases two types of data – data on loans it acquires, and data on how those loans perform over time. In the ideal case, someone borrows money from a lender, then repays the loan until the balance is zero. However, some borrowers miss multiple payments, which can cause foreclosure. Foreclosure is when the house is seized by the bank because mortgage payments cannot be made. Fannie Mae tracks which loans have missed payments on them, and which loans needed to be foreclosed on. This data is published quarterly, and lags the current date by 1 year. As of this writing, the most recent dataset that’s available is from the first quarter of 2015.
+ +Acquisition data, which is published when the loan is acquired by Fannie Mae, contains information on the borrower, including credit score, and information on their loan and home. Performance data, which is published every quarter after the loan is acquired, contains information on the payments being made by the borrower, and the foreclosure status, if any. A loan that is acquired may have dozens of rows in the performance data. A good way to think of this is that the acquisition data tells you that Fannie Mae now controls the loan, and the performance data contains a series of status updates on the loan. One of the status updates may tell us that the loan was foreclosed on during a certain quarter. + +![](https://www.dataquest.io/blog/images/end_to_end/foreclosure.jpg) +>A foreclosed home being sold. + +### Picking an angle + +There are a few directions we could go in with the Fannie Mae dataset. We could: + +- Try to predict the sale price of a house after it’s foreclosed on. +- Predict the payment history of a borrower. +- Figure out a score for each loan at acquisition time. + +The important thing is to stick to a single angle. Trying to focus on too many things at once will make it hard to make an effective project. It’s also important to pick an angle that has sufficient nuance. Here are examples of angles without much nuance: + +- Figuring out which banks sold loans to Fannie Mae that were foreclosed on the most. +- Figuring out trends in borrower credit scores. +- Exploring which types of homes are foreclosed on most often. +- Exploring the relationship between loan amounts and foreclosure sale prices + +All of the above angles are interesting, and would be great if we were focused on storytelling, but aren’t great fits for an operational project. + +With the Fannie Mae dataset, we’ll try to predict whether a loan will be foreclosed on in the future by only using information that was available when the loan was acquired. 
In effect, we’ll create a “score” for any mortgage that will tell us if Fannie Mae should buy it or not. This will give us a nice foundation to build on, and will be a great portfolio piece. + +### Understanding the data + +Let’s take a quick look at the raw data files. Here are the first few rows of the acquisition data from quarter 1 of 2012: + +``` +100000853384|R|OTHER|4.625|280000|360|02/2012|04/2012|31|31|1|23|801|N|C|SF|1|I|CA|945||FRM| +100003735682|R|SUNTRUST MORTGAGE INC.|3.99|466000|360|01/2012|03/2012|80|80|2|30|794|N|P|SF|1|P|MD|208||FRM|788 +100006367485|C|PHH MORTGAGE CORPORATION|4|229000|360|02/2012|04/2012|67|67|2|36|802|N|R|SF|1|P|CA|959||FRM|794 +``` + +Here are the first few rows of the performance data from quarter 1 of 2012: + +``` +100000853384|03/01/2012|OTHER|4.625||0|360|359|03/2042|41860|0|N|||||||||||||||| +100000853384|04/01/2012||4.625||1|359|358|03/2042|41860|0|N|||||||||||||||| +100000853384|05/01/2012||4.625||2|358|357|03/2042|41860|0|N|||||||||||||||| +``` + +Before proceeding too far into coding, it’s useful to take some time and really understand the data. This is more critical in operational projects – because we aren’t interactively exploring the data, it can be harder to spot certain nuances unless we find them upfront. In this case, the first step is to read the materials on the Fannie Mae site: + +- [Overview][15] +- [Glossary of useful terms][16] +- [FAQs][17] +- [Columns in the Acquisition and Performance files][18] +- [Sample Acquisition data file][19] +- [Sample Performance data file][20] + +After reading through these files, we know some key facts that will help us: + +- There’s an Acquisition file and a Performance file for each quarter, starting from the year 2000 to present. There’s a 1 year lag in the data, so the most recent data is from 2015 as of this writing. +- The files are in text format, with a pipe (|) as a delimiter. +- The files don’t have headers, but we have a list of what each column is. 
+- Altogether, the files contain data on 22 million loans.
+- Because the Performance files contain information on loans acquired in previous years, there will be more performance data for loans acquired in earlier years (i.e. loans acquired in 2014 won’t have much performance history).
+
+These small bits of information will save us a ton of time as we figure out how to structure our project and work with the data.
+
+### Structuring the project
+
+Before we start downloading and exploring the data, it’s important to think about how we’ll structure the project. When building an end-to-end project, our primary goals are:
+
+- Creating a solution that works
+- Having a solution that runs quickly and uses minimal resources
+- Enabling others to easily extend our work
+- Making it easy for others to understand our code
+- Writing as little code as possible
+
+In order to achieve these goals, we’ll need to structure our project well. A well-structured project follows a few principles:
+
+- Separates data files and code files.
+- Separates raw data from generated data.
+- Has a README.md file that walks people through installing and using the project.
+- Has a requirements.txt file that contains all the packages needed to run the project.
+- Has a single settings.py file that contains any settings that are used in other files.
+ - For example, if you are reading the same file from multiple Python scripts, it’s useful to have them all import settings and get the file name from a centralized place.
+- Has a .gitignore file that prevents large or secret files from being committed.
+- Breaks each step in our task into a separate file that can be executed separately.
+ - For example, we may have one file for reading in the data, one for creating features, and one for making predictions.
+- Stores intermediate values. For example, one script may output a file that the next script can read.
+    - This enables us to make changes in our data processing flow without recalculating everything.
+
+Our file structure will look something like this shortly:
+
+```
+loan-prediction
+├── data
+├── processed
+├── .gitignore
+├── README.md
+├── requirements.txt
+├── settings.py
+```
+
+### Creating the initial files
+
+To start with, we’ll need to create a loan-prediction folder. Inside that folder, we’ll need to make a data folder and a processed folder. The first will store our raw data, and the second will store any intermediate calculated values.
+
+Next, we’ll make a .gitignore file. A .gitignore file will make sure certain files are ignored by git and not pushed to Github. One good example of such a file is the .DS_Store file created by OSX in every folder. A good starting point for a .gitignore file is here. We’ll also want to ignore the data files because they are very large, and the Fannie Mae terms prevent us from redistributing them, so we should add two lines to the end of our file:
+
+```
+data
+processed
+```
+
+[Here’s][21] an example .gitignore file for this project.
+
+Next, we’ll need to create README.md, which will help people understand the project. .md indicates that the file is in markdown format. Markdown enables you to write plain text, but also to add some fancy formatting if you want. [Here’s][22] a guide on markdown. If you upload a file called README.md to Github, Github will automatically process the markdown, and show it to anyone who views the project. [Here’s][23] an example.
+
+For now, we just need to put a simple description in README.md:
+
+```
+Loan Prediction
+-----------------------
+
+Predict whether or not loans acquired by Fannie Mae will go into foreclosure. Fannie Mae acquires loans from other lenders as a way of inducing them to lend more. Fannie Mae releases data on the loans it has acquired and their performance afterwards [here](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html).
+```
+
+Now, we can create a requirements.txt file. This will make it easy for other people to install our project. We don’t know exactly what libraries we’ll be using yet, but here’s a good starting point:
+
+```
+pandas
+matplotlib
+scikit-learn
+numpy
+ipython
+scipy
+```
+
+The above libraries are the most commonly used for data analysis tasks in Python, and it’s fair to assume that we’ll be using most of them. [Here’s][24] an example requirements file for this project.
+
+After creating requirements.txt, you should install the packages. For this post, we’ll be using Python 3. If you don’t have Python installed, you should look into using [Anaconda][25], a Python installer that also installs all the packages listed above.
+
+Finally, we can just make a blank settings.py file, since we don’t have any settings for our project yet.
+
+### Acquiring the data
+
+Once we have the skeleton of our project, we can get the raw data.
+
+Fannie Mae has some restrictions around acquiring the data, so you’ll need to sign up for an account. You can find the download page [here][26]. After creating an account, you’ll be able to download as few or as many loan data files as you want. The files are in zip format, and are reasonably large after decompression.
+
+For the purposes of this blog post, we’ll download everything from Q1 2012 to Q1 2015, inclusive. We’ll then need to unzip all of the files. After unzipping the files, remove the original .zip files. At the end, the loan-prediction folder should look something like this:
+
+```
+loan-prediction
+├── data
+│   ├── Acquisition_2012Q1.txt
+│   ├── Acquisition_2012Q2.txt
+│   ├── Performance_2012Q1.txt
+│   ├── Performance_2012Q2.txt
+│   └── ...
+├── processed
+├── .gitignore
+├── README.md
+├── requirements.txt
+├── settings.py
+```
+
+After downloading the data, you can use the head and tail shell commands to look at the lines in the files. Do you see any columns that aren’t needed?
It might be useful to consult the [pdf of column names][27] while doing this. + +### Reading in the data + +There are two issues that make our data hard to work with right now: + +- The acquisition and performance datasets are segmented across multiple files. +- Each file is missing headers. + +Before we can get started on working with the data, we’ll need to get to the point where we have one file for the acquisition data, and one file for the performance data. Each of the files will need to contain only the columns we care about, and have the proper headers. One wrinkle here is that the performance data is quite large, so we should try to trim some of the columns if we can. + +The first step is to add some variables to settings.py, which will contain the paths to our raw data and our processed data. We’ll also add a few other settings that will be useful later on: + +``` +DATA_DIR = "data" +PROCESSED_DIR = "processed" +MINIMUM_TRACKING_QUARTERS = 4 +TARGET = "foreclosure_status" +NON_PREDICTORS = [TARGET, "id"] +CV_FOLDS = 3 +``` + +Putting the paths in settings.py will put them in a centralized place and make them easy to change down the line. When referring to the same variables in multiple files, it’s easier to put them in a central place than edit them in every file when you want to change them. [Here’s][28] an example settings.py file for this project. + +The second step is to create a file called assemble.py that will assemble all the pieces into 2 files. When we run python assemble.py, we’ll get 2 data files in the processed directory. + +We’ll then start writing code in assemble.py. 
We’ll first need to define the headers for each file, so we’ll need to look at [pdf of column names][29] and create lists of the columns in each Acquisition and Performance file: + +``` +HEADERS = { + "Acquisition": [ + "id", + "channel", + "seller", + "interest_rate", + "balance", + "loan_term", + "origination_date", + "first_payment_date", + "ltv", + "cltv", + "borrower_count", + "dti", + "borrower_credit_score", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "unit_count", + "occupancy_status", + "property_state", + "zip", + "insurance_percentage", + "product_type", + "co_borrower_credit_score" + ], + "Performance": [ + "id", + "reporting_period", + "servicer_name", + "interest_rate", + "balance", + "loan_age", + "months_to_maturity", + "maturity_date", + "msa", + "delinquency_status", + "modification_flag", + "zero_balance_code", + "zero_balance_date", + "last_paid_installment_date", + "foreclosure_date", + "disposition_date", + "foreclosure_costs", + "property_repair_costs", + "recovery_costs", + "misc_costs", + "tax_costs", + "sale_proceeds", + "credit_enhancement_proceeds", + "repurchase_proceeds", + "other_foreclosure_proceeds", + "non_interest_bearing_balance", + "principal_forgiveness_balance" + ] +} +``` + +The next step is to define the columns we want to keep. Since all we’re measuring on an ongoing basis about the loan is whether or not it was ever foreclosed on, we can discard many of the columns in the performance data. We’ll need to keep all the columns in the acquisition data, though, because we want to maximize the information we have about when the loan was acquired (after all, we’re predicting if the loan will ever be foreclosed or not at the point it’s acquired). Discarding columns will enable us to save disk space and memory, while also speeding up our code. 
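As a side note: when the wanted columns are known up front, pandas can also drop fields at parse time via read_csv’s usecols parameter, so the discarded columns are never materialized in memory at all. Here is a minimal sketch using the first few fields of the sample performance rows shown earlier (the in-memory buffer stands in for a real file):

```python
import io

import pandas as pd

# Stand-in for a pipe-delimited, headerless performance file; only the
# first four fields are included here for brevity.
sample_file = io.StringIO(
    "100000853384|03/01/2012|OTHER|4.625\n"
    "100000853384|04/01/2012||4.625\n"
)

# usecols keeps just the named columns, and the parser skips the rest.
data = pd.read_csv(
    sample_file,
    sep="|",
    header=None,
    names=["id", "reporting_period", "servicer_name", "interest_rate"],
    usecols=["id", "reporting_period"],
)
print(data.columns.tolist())  # ['id', 'reporting_period']
```

For this project we’ll stick with reading everything the headers describe and then subsetting, driven by the column lists defined next: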
+ +``` +SELECT = { + "Acquisition": HEADERS["Acquisition"], + "Performance": [ + "id", + "foreclosure_date" + ] +} +``` + +Next, we’ll write a function to concatenate the data sets. The below code will: + +- Import a few needed libraries, including settings. +- Define a function concatenate, that: + - Gets the names of all the files in the data directory. + - Loops through each file. + - If the file isn’t the right type (doesn’t start with the prefix we want), we ignore it. + - Reads the file into a [DataFrame][30] with the right settings using the Pandas [read_csv][31] function. + - Sets the separator to | so the fields are read in correctly. + - The data has no header row, so sets header to None to indicate this. + - Sets names to the right value from the HEADERS dictionary – these will be the column names of our DataFrame. + - Picks only the columns from the DataFrame that we added in SELECT. +- Concatenates all the DataFrames together. +- Writes the concatenated DataFrame back to a file. + +``` +import os +import settings +import pandas as pd + +def concatenate(prefix="Acquisition"): + files = os.listdir(settings.DATA_DIR) + full = [] + for f in files: + if not f.startswith(prefix): + continue + + data = pd.read_csv(os.path.join(settings.DATA_DIR, f), sep="|", header=None, names=HEADERS[prefix], index_col=False) + data = data[SELECT[prefix]] + full.append(data) + + full = pd.concat(full, axis=0) + + full.to_csv(os.path.join(settings.PROCESSED_DIR, "{}.txt".format(prefix)), sep="|", header=SELECT[prefix], index=False) +``` + +We can call the above function twice with the arguments Acquisition and Performance to concatenate all the acquisition and performance files together. The below code will: + +- Only execute if the script is called from the command line with python assemble.py. 
+- Concatenate all the files, and result in two files: + - `processed/Acquisition.txt` + - `processed/Performance.txt` + +``` +if __name__ == "__main__": + concatenate("Acquisition") + concatenate("Performance") +``` + +We now have a nice, compartmentalized assemble.py that’s easy to execute, and easy to build off of. By decomposing the problem into pieces like this, we make it easy to build our project. Instead of one messy script that does everything, we define the data that will pass between the scripts, and make them completely separate from each other. When you’re working on larger projects, it’s a good idea to do this, because it makes it much easier to change individual pieces without having unexpected consequences on unrelated pieces of the project. + +Once we finish the assemble.py script, we can run python assemble.py. You can find the complete assemble.py file [here][32]. + +This will result in two files in the processed directory: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... +├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +├── .gitignore +├── assemble.py +├── README.md +├── requirements.txt +├── settings.py +``` + +### Computing values from the performance data + +The next step we’ll take is to calculate some values from processed/Performance.txt. All we want to do is to predict whether or not a property is foreclosed on. To figure this out, we just need to check if the performance data associated with a loan ever has a foreclosure_date. If foreclosure_date is None, then the property was never foreclosed on. In order to avoid including loans with little performance history in our sample, we’ll also want to count up how many rows exist in the performance file for each loan. This will let us filter loans without much performance history from our training data. 
+ +One way to think of the loan data and the performance data is like this: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/001.png) + +As you can see above, each row in the Acquisition data can be related to multiple rows in the Performance data. In the Performance data, foreclosure_date will appear in the quarter when the foreclosure happened, so it should be blank prior to that. Some loans are never foreclosed on, so all the rows related to them in the Performance data have foreclosure_date blank. + +We need to compute foreclosure_status, which is a Boolean that indicates whether a particular loan id was ever foreclosed on, and performance_count, which is the number of rows in the performance data for each loan id. + +There are a few different ways to compute the counts we want: + +- We could read in all the performance data, then use the Pandas groupby method on the DataFrame to figure out the number of rows associated with each loan id, and also if the foreclosure_date is ever not None for the id. + - The upside of this method is that it’s easy to implement from a syntax perspective. + - The downside is that reading in all 129236094 lines in the data will take a lot of memory, and be extremely slow. +- We could read in all the performance data, then use apply on the acquisition DataFrame to find the counts for each id. + - The upside is that it’s easy to conceptualize. + - The downside is that reading in all 129236094 lines in the data will take a lot of memory, and be extremely slow. +- We could iterate over each row in the performance dataset, and keep a separate dictionary of counts. + - The upside is that the dataset doesn’t need to be loaded into memory, so it’s extremely fast and memory-efficient. + - The downside is that it will take slightly longer to conceptualize and implement, and we need to parse the rows manually. + +Loading in all the data will take quite a bit of memory, so let’s go with the third option above. 
All we need to do is to iterate through all the rows in the Performance data, while keeping a dictionary of counts per loan id. In the dictionary, we’ll keep track of how many times the id appears in the performance data, as well as if foreclosure_date is ever not None. This will give us foreclosure_status and performance_count. + +We’ll create a new file called annotate.py, and add in code that will enable us to compute these values. In the below code, we’ll: + +- Import needed libraries. +- Define a function called count_performance_rows. + - Open processed/Performance.txt. This doesn’t read the file into memory, but instead opens a file handler that can be used to read in the file line by line. + - Loop through each line in the file. + - Split the line on the delimiter (|) + - Check if the loan_id is not in the counts dictionary. + - If not, add it to counts. + - Increment performance_count for the given loan_id because we’re on a row that contains it. + - If date is not None, then we know that the loan was foreclosed on, so set foreclosure_status appropriately. 
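Before the full function, it’s worth seeing the per-line parsing on its own. Because our trimmed Performance.txt has exactly two pipe-separated fields, unpacking the split into two names is safe. The helper name and sample values below are illustrative only:

```python
def parse_performance_line(line):
    # Each line is "id|foreclosure_date"; the date field is empty for
    # loans that were never foreclosed on.
    loan_id, date = line.split("|")
    return int(loan_id), len(date.strip()) > 0

print(parse_performance_line("100000853384|\n"))            # (100000853384, False)
print(parse_performance_line("100000853384|03/01/2015\n"))  # (100000853384, True)
```

The function below applies exactly this unpacking while accumulating the counts dictionary.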
+ +``` +import os +import settings +import pandas as pd + +def count_performance_rows(): + counts = {} + with open(os.path.join(settings.PROCESSED_DIR, "Performance.txt"), 'r') as f: + for i, line in enumerate(f): + if i == 0: + # Skip header row + continue + loan_id, date = line.split("|") + loan_id = int(loan_id) + if loan_id not in counts: + counts[loan_id] = { + "foreclosure_status": False, + "performance_count": 0 + } + counts[loan_id]["performance_count"] += 1 + if len(date.strip()) > 0: + counts[loan_id]["foreclosure_status"] = True + return counts +``` + +### Getting the values + +Once we create our counts dictionary, we can make a function that will extract values from the dictionary if a loan_id and a key are passed in: + +``` +def get_performance_summary_value(loan_id, key, counts): + value = counts.get(loan_id, { + "foreclosure_status": False, + "performance_count": 0 + }) + return value[key] +``` + +The above function will return the appropriate value from the counts dictionary, and will enable us to assign a foreclosure_status value and a performance_count value to each row in the Acquisition data. The [get][33] method on dictionaries returns a default value if a key isn’t found, so this enables us to return sensible default values if a key isn’t found in the counts dictionary. + +### Annotating the data + +We’ve already added a few functions to annotate.py, but now we can get into the meat of the file. We’ll need to convert the acquisition data into a training dataset that can be used in a machine learning algorithm. This involves a few things: + +- Converting all columns to numeric. +- Filling in any missing values. +- Assigning a performance_count and a foreclosure_status to each row. +- Removing any rows that don’t have a lot of performance history (where performance_count is low). + +Several of our columns are strings, which aren’t useful to a machine learning algorithm. 
However, they are actually categorical variables, where there are a few different category codes, like R, S, and so on. We can convert these columns to numeric by assigning a number to each category label:
+
+![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/002.png)
+
+Converting the columns this way will allow us to use them in our machine learning algorithm.
+
+Some of the columns also contain dates (first_payment_date and origination_date). We can split these dates into 2 columns each:
+
+![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/003.png)
+
+In the below code, we’ll transform the Acquisition data. We’ll define a function that:
+
+- Creates a foreclosure_status column in acquisition by getting the values from the counts dictionary.
+- Creates a performance_count column in acquisition by getting the values from the counts dictionary.
+- Converts each of the following columns from a string column to an integer column:
+    - channel
+    - seller
+    - first_time_homebuyer
+    - loan_purpose
+    - property_type
+    - occupancy_status
+    - property_state
+    - product_type
+- Converts first_payment_date and origination_date to 2 columns each:
+    - Splits the column on the forward slash.
+    - Assigns the first part of the split list to a month column.
+    - Assigns the second part of the split list to a year column.
+    - Deletes the column.
+    - At the end, we’ll have first_payment_month, first_payment_year, origination_month, and origination_year.
+- Fills any missing values in acquisition with -1.
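To see the two conversions in isolation before they appear inside the full function, here they are applied to a tiny, made-up DataFrame:

```python
import pandas as pd

toy = pd.DataFrame({
    "channel": ["R", "C", "R"],
    "origination_date": ["02/2012", "01/2012", "02/2012"],
})

# String labels -> integer codes; categories sort alphabetically, so
# here "C" becomes 0 and "R" becomes 1.
toy["channel"] = toy["channel"].astype("category").cat.codes

# "month/year" -> two numeric columns, then drop the original.
toy["origination_year"] = pd.to_numeric(toy["origination_date"].str.split("/").str.get(1))
toy["origination_month"] = pd.to_numeric(toy["origination_date"].str.split("/").str.get(0))
del toy["origination_date"]

print(toy["channel"].tolist())            # [1, 0, 1]
print(toy["origination_month"].tolist())  # [2, 1, 2]
```

One caveat with per-column category codes: the label-to-number mapping depends on the values present, which is fine here because we encode the full dataset in a single pass.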
+ +``` +def annotate(acquisition, counts): + acquisition["foreclosure_status"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "foreclosure_status", counts)) + acquisition["performance_count"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "performance_count", counts)) + for column in [ + "channel", + "seller", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "occupancy_status", + "property_state", + "product_type" + ]: + acquisition[column] = acquisition[column].astype('category').cat.codes + + for start in ["first_payment", "origination"]: + column = "{}_date".format(start) + acquisition["{}_year".format(start)] = pd.to_numeric(acquisition[column].str.split('/').str.get(1)) + acquisition["{}_month".format(start)] = pd.to_numeric(acquisition[column].str.split('/').str.get(0)) + del acquisition[column] + + acquisition = acquisition.fillna(-1) + acquisition = acquisition[acquisition["performance_count"] > settings.MINIMUM_TRACKING_QUARTERS] + return acquisition +``` + +### Pulling everything together + +We’re almost ready to pull everything together, we just need to add a bit more code to annotate.py. In the below code, we: + +- Define a function to read in the acquisition data. +- Define a function to write the processed data to processed/train.csv +- If this file is called from the command line, like python annotate.py: + - Read in the acquisition data. + - Compute the counts for the performance data, and assign them to counts. + - Annotate the acquisition DataFrame. + - Write the acquisition DataFrame to train.csv. 
+
+```
+def read():
+    acquisition = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "Acquisition.txt"), sep="|")
+    return acquisition
+
+def write(acquisition):
+    acquisition.to_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"), index=False)
+
+if __name__ == "__main__":
+    acquisition = read()
+    counts = count_performance_rows()
+    acquisition = annotate(acquisition, counts)
+    write(acquisition)
+```
+
+Once you’re done updating the file, make sure to run it with python annotate.py to generate the train.csv file. You can find the complete annotate.py file [here][34].
+
+The folder should now look like this:
+
+```
+loan-prediction
+├── data
+│   ├── Acquisition_2012Q1.txt
+│   ├── Acquisition_2012Q2.txt
+│   ├── Performance_2012Q1.txt
+│   ├── Performance_2012Q2.txt
+│   └── ...
+├── processed
+│   ├── Acquisition.txt
+│   ├── Performance.txt
+│   ├── train.csv
+├── .gitignore
+├── annotate.py
+├── assemble.py
+├── README.md
+├── requirements.txt
+├── settings.py
+```
+
+### Finding an error metric
+
+We’re done with generating our training dataset, and now we just need to do the final step: generating predictions. We’ll need to figure out an error metric, as well as how we want to evaluate our data. In this case, there are many more loans that aren’t foreclosed on than are, so typical accuracy measures don’t make much sense.
+
+If we read in the training data, and check the counts in the foreclosure_status column, here’s what we get:
+
+```
+import os
+import pandas as pd
+import settings
+
+train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"))
+train["foreclosure_status"].value_counts()
+```
+
+```
+False    4635982
+True        1585
+Name: foreclosure_status, dtype: int64
+```
+
+Since so few of the loans were foreclosed on, just checking the percentage of labels that were correctly predicted will mean that we can make a machine learning model that predicts False for every row, and still gets a very high accuracy.
Instead, we’ll want to use a metric that takes the class imbalance into account, and ensures that we predict foreclosures accurately. We don’t want too many false positives, where we predict that a loan will be foreclosed on even though it won’t, or too many false negatives, where we predict that a loan won’t be foreclosed on, but it is. Of these two, false negatives are more costly for Fannie Mae, because they’re buying loans where they may not be able to recoup their investment.
+
+We’ll define false negative rate as the number of loans where the model predicts no foreclosure but the loan was actually foreclosed on, divided by the number of total loans that were actually foreclosed on. This is the percentage of actual foreclosures that the model “missed”. Here’s a diagram:
+
+![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/004.png)
+
+In the diagram above, 1 loan was predicted as not being foreclosed on, but it actually was. If we divide this by the number of loans that were actually foreclosed on, 2, we get the false negative rate, 50%. We’ll use this as our error metric, so we can evaluate our model’s performance.
+
+### Setting up the classifier for machine learning
+
+We’ll use cross validation to make predictions. With cross validation, we’ll divide our data into 3 groups. Then we’ll do the following:
+
+- Train a model on groups 1 and 2, and use the model to make predictions for group 3.
+- Train a model on groups 1 and 3, and use the model to make predictions for group 2.
+- Train a model on groups 2 and 3, and use the model to make predictions for group 1.
+
+Splitting it up into groups this way means that we never train a model using the same data we’re making predictions for. This avoids overfitting. If we overfit, we’ll get a falsely low false negative rate, which makes it hard to improve our algorithm or use it in the real world.
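The three-group scheme above can be sketched in a few lines of plain Python. The "model" here is a deliberately silly stand-in that just memorizes its training mean; the point is only that each row is predicted by a model fitted without that row's group:

```python
def cross_validated_predictions(values, folds=3):
    predictions = [None] * len(values)
    for fold in range(folds):
        # "Train" on every group except the held-out one (groups are
        # assigned round-robin by index here for simplicity)...
        train = [v for i, v in enumerate(values) if i % folds != fold]
        model = sum(train) / len(train)  # stand-in for a fitted model
        # ...then predict only the held-out group.
        for i in range(len(values)):
            if i % folds == fold:
                predictions[i] = model
    return predictions

print(cross_validated_predictions([1, 2, 3]))  # [2.5, 2.0, 1.5]
```

Real cross validation also shuffles, and can stratify by class; for that we'll lean on scikit-learn.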
+ +[Scikit-learn][35] has a function called [cross_val_predict][36] which will make it easy to perform cross validation. + +We’ll also need to pick an algorithm to use to make predictions. We need a classifier that can do [binary classification][37]. The target variable, foreclosure_status only has two values, True and False. + +We’ll use [logistic regression][38], because it works well for binary classification, runs extremely quickly, and uses little memory. This is due to how the algorithm works – instead of constructing dozens of trees, like a random forest, or doing expensive transformations, like a support vector machine, logistic regression has far fewer steps involving fewer matrix operations. + +We can use the [logistic regression classifier][39] algorithm that’s implemented in scikit-learn. The only thing we need to pay attention to is the weights of each class. If we weight the classes equally, the algorithm will predict False for every row, because it is trying to minimize errors. However, we care much more about foreclosures than we do about loans that aren’t foreclosed on. Thus, we’ll pass balanced to the class_weight keyword argument of the [LogisticRegression][40] class, to get the algorithm to weight the foreclosures more to account for the difference in the counts of each class. This will ensure that the algorithm doesn’t predict False for every row, and instead is penalized equally for making errors in predicting either class. + +### Making predictions + +Now that we have the preliminaries out of the way, we’re ready to make predictions. We’ll create a new file called predict.py that will use the train.csv file we created in the last step. The below code will: + +- Import needed libraries. +- Create a function called cross_validate that: + - Creates a logistic regression classifier with the right keyword arguments. + - Creates a list of columns that we want to use to train the model, removing id and foreclosure_status. 
+    - Runs cross validation across the train DataFrame.
+    - Returns the predictions.
+
+
+```
+import os
+import settings
+import pandas as pd
+from sklearn import cross_validation
+from sklearn.linear_model import LogisticRegression
+from sklearn import metrics
+
+def cross_validate(train):
+    clf = LogisticRegression(random_state=1, class_weight="balanced")
+
+    predictors = train.columns.tolist()
+    predictors = [p for p in predictors if p not in settings.NON_PREDICTORS]
+
+    predictions = cross_validation.cross_val_predict(clf, train[predictors], train[settings.TARGET], cv=settings.CV_FOLDS)
+    return predictions
+```
+
+### Predicting error
+
+Now, we just need to write a few functions to compute error. The below code will:
+
+- Create a function called compute_error that:
+    - Uses scikit-learn to compute a simple accuracy score (the percentage of predictions that matched the actual foreclosure_status values).
+- Create a function called compute_false_negatives that:
+    - Combines the target and the predictions into a DataFrame for convenience.
+    - Finds the false negative rate.
+- Create a function called compute_false_positives that:
+    - Combines the target and the predictions into a DataFrame for convenience.
+    - Finds the false positive rate.
+        - Finds the number of loans that weren’t foreclosed on that the model predicted would be foreclosed on.
+        - Divides by the total number of loans that weren’t foreclosed on.
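As a sanity check on the definition, the 50% example from the diagram earlier works out like this with plain lists (1 means foreclosed; the values are made up):

```python
target      = [1, 1, 0, 0]  # two loans actually foreclosed on
predictions = [1, 0, 0, 0]  # the model catches one foreclosure, misses one

# False negatives: actually foreclosed, but predicted as not foreclosed.
false_negatives = sum(1 for t, p in zip(target, predictions) if t == 1 and p == 0)
actual_foreclosures = sum(1 for t in target if t == 1)
print(false_negatives / actual_foreclosures)  # 0.5
```

Note that the implementations below add 1 to each denominator, which looks like a guard against dividing by zero when a class never occurs; it nudges the reported rates slightly below the exact definition.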
+ +``` +def compute_error(target, predictions): + return metrics.accuracy_score(target, predictions) + +def compute_false_negatives(target, predictions): + df = pd.DataFrame({"target": target, "predictions": predictions}) + return df[(df["target"] == 1) & (df["predictions"] == 0)].shape[0] / (df[(df["target"] == 1)].shape[0] + 1) + +def compute_false_positives(target, predictions): + df = pd.DataFrame({"target": target, "predictions": predictions}) + return df[(df["target"] == 0) & (df["predictions"] == 1)].shape[0] / (df[(df["target"] == 0)].shape[0] + 1) +``` + +### Putting it all together + +Now, we just have to put the functions together in predict.py. The below code will: + +- Read in the dataset. +- Compute cross validated predictions. +- Compute the 3 error metrics above. +- Print the error metrics. + +``` +def read(): + train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv")) + return train + +if __name__ == "__main__": + train = read() + predictions = cross_validate(train) + error = compute_error(train[settings.TARGET], predictions) + fn = compute_false_negatives(train[settings.TARGET], predictions) + fp = compute_false_positives(train[settings.TARGET], predictions) + print("Accuracy Score: {}".format(error)) + print("False Negatives: {}".format(fn)) + print("False Positives: {}".format(fp)) +``` + +Once you’ve added the code, you can run python predict.py to generate predictions. Running everything shows that our false negative rate is .26, which means that of the foreclosed loans, we missed predicting 26% of them. This is a good start, but can use a lot of improvement! + +You can find the complete predict.py file [here][41]. + +Your file tree should now look like this: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... 
+├── processed
+│   ├── Acquisition.txt
+│   ├── Performance.txt
+│   ├── train.csv
+├── .gitignore
+├── annotate.py
+├── assemble.py
+├── predict.py
+├── README.md
+├── requirements.txt
+├── settings.py
+```
+
+### Writing up a README
+
+Now that we’ve finished our end-to-end project, we just have to write up a README.md file so that other people know what we did, and how to replicate it. A typical README.md for a project should include these sections:
+
+- A high level overview of the project, and what the goals are.
+- Where to download any needed data or materials.
+- Installation instructions.
+    - How to install the requirements.
+- Usage instructions.
+    - How to run the project.
+    - What you should see after each step.
+- How to contribute to the project.
+    - Good next steps for extending the project.
+
+[Here’s][42] a sample README.md for this project.
+
+### Next steps
+
+Congratulations, you’re done making an end-to-end machine learning project! You can find a complete example project [here][43]. It’s a good idea to upload your project to [Github][44] once you’ve finished it, so others can see it as part of your portfolio.
+
+There are still quite a few angles left to explore with this data. Broadly, we can split them up into 3 categories – extending this project and making it more accurate, finding other columns to predict, and exploring the data. Here are some ideas:
+
+- Generate more features in annotate.py.
+- Switch algorithms in predict.py.
+- Try using more data from Fannie Mae than we used in this post.
+- Add in a way to make predictions on future data. The code we wrote will still work if we add more data, so we can add more past or future data.
+- Try seeing if you can predict if a bank should have issued the loan originally (vs if Fannie Mae should have acquired the loan).
+    - Remove any columns from train that the bank wouldn’t have known at the time of issuing the loan.
+    - Some columns are known when Fannie Mae bought the loan, but not before.
+    - Make predictions.
+- Explore seeing if you can predict columns other than foreclosure_status.
+    - Can you predict how much the property will be worth at sale time?
+- Explore the nuances between performance updates.
+    - Can you predict how many times the borrower will be late on payments?
+    - Can you map out the typical loan lifecycle?
+- Map out data on a state by state or zip code by zip code level.
+    - Do you see any interesting patterns?
+
+If you build anything interesting, please let us know in the comments!
+
+If you liked this, you might like to read the other posts in our ‘Build a Data Science Portfolio’ series:
+
+- [Storytelling with data][45].
+- [How to set up a data science blog][46].
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.dataquest.io/blog/data-science-portfolio-machine-learning/
+
+作者:[Vik Paruchuri][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对ID](https://github.com/校对ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.dataquest.io/blog
+[1]: https://www.dataquest.io/blog/data-science-portfolio-machine-learning/#email-signup
+[2]: https://github.com/dataquestio/loan-prediction
+[3]: https://www.dataquest.io/blog/data-science-portfolio-project/
+[4]: https://atom.io/
+[5]: https://www.jetbrains.com/pycharm/
+[6]: https://github.com/
+[7]: http://pandas.pydata.org/
+[8]: http://scikit-learn.org/
+[9]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
+[10]: https://collegescorecard.ed.gov/data/
+[11]: https://reddit.com/r/datasets
+[12]: https://cloud.google.com/bigquery/public-data/#usa-names
+[13]: https://github.com/caesar0301/awesome-public-datasets
+[14]: http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html
+[15]: http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html
+[16]: 
https://loanperformancedata.fanniemae.com/lppub-docs/lppub_glossary.pdf +[17]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_faq.pdf +[18]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_file_layout.pdf +[19]: https://loanperformancedata.fanniemae.com/lppub-docs/acquisition-sample-file.txt +[20]: https://loanperformancedata.fanniemae.com/lppub-docs/performance-sample-file.txt +[21]: https://github.com/dataquestio/loan-prediction/blob/master/.gitignore +[22]: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet +[23]: https://github.com/dataquestio/loan-prediction +[24]: https://github.com/dataquestio/loan-prediction/blob/master/requirements.txt +[25]: https://www.continuum.io/downloads +[26]: https://loanperformancedata.fanniemae.com/lppub/index.html +[27]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_file_layout.pdf +[28]: https://github.com/dataquestio/loan-prediction/blob/master/settings.py +[29]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_file_layout.pdf +[30]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html +[31]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html +[32]: https://github.com/dataquestio/loan-prediction/blob/master/assemble.py +[33]: https://docs.python.org/3/library/stdtypes.html#dict.get +[34]: https://github.com/dataquestio/loan-prediction/blob/master/annotate.py +[35]: http://scikit-learn.org/ +[36]: http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_predict.html +[37]: https://en.wikipedia.org/wiki/Binary_classification +[38]: https://en.wikipedia.org/wiki/Logistic_regression +[39]: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html +[40]: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html +[41]: https://github.com/dataquestio/loan-prediction/blob/master/predict.py +[42]: 
https://github.com/dataquestio/loan-prediction/blob/master/README.md +[43]: https://github.com/dataquestio/loan-prediction +[44]: https://www.github.com/ +[45]: https://www.dataquest.io/blog/data-science-portfolio-project/ +[46]: https://www.dataquest.io/blog/how-to-setup-a-data-science-blog/ From 6d8fcc7fc1ff9f72e367c960e1b4e2105bf00fb2 Mon Sep 17 00:00:00 2001 From: cposture Date: Sun, 31 Jul 2016 13:18:35 +0800 Subject: [PATCH 296/471] Translating 50% --- ...18 An Introduction to Mocking in Python.md | 24 ++++++++++--------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/sources/tech/20160618 An Introduction to Mocking in Python.md b/sources/tech/20160618 An Introduction to Mocking in Python.md index 64c26b8ec5..d28fae272d 100644 --- a/sources/tech/20160618 An Introduction to Mocking in Python.md +++ b/sources/tech/20160618 An Introduction to Mocking in Python.md @@ -99,26 +99,26 @@ class RmTestCase(unittest.TestCase): ### 潜在陷阱 -第一件需要注意的事情就是,我们使用了位于 mymodule.os 且用于模拟对象的 mock.patch 方法装饰器,并且将该 mock 注入到我们的测试用例方法。相比在 mymodule.os 引用它,那么只是模拟 os 本身,会不会更有意义? +第一件需要注意的事情就是,我们使用了位于 mymodule.os 且用于模拟对象的 mock.patch 方法装饰器,并且将该 mock 注入到我们的测试用例方法。相比在 mymodule.os 引用它,那么只是模拟 os 本身,会不会更有意义呢? One of the first things that should stick out is that we’re using the mock.patch method decorator to mock an object located at mymodule.os, and injecting that mock into our test case method. Wouldn’t it make more sense to just mock os itself, rather than the reference to it at mymodule.os? 当然,当涉及到导入和管理模块,Python 的用法非常灵活。在运行时,mymodule 模块拥有被导入到本模块局部作用域的 os。因此,如果我们模拟 os,我们是看不到模拟在 mymodule 模块中的作用的。 -Well, Python is somewhat of a sneaky snake when it comes to imports and managing modules. At runtime, the mymodule module has its own os which is imported into its own local scope in the module. Thus, if we mock os, we won’t see the effects of the mock in the mymodule module. 
这句话需要深刻地记住: -The mantra to keep repeating is this: > 模拟测试一个项目,只需要了解它用在哪里,而不是它从哪里来。 > Mock an item where it is used, not where it came from. - +如果你需要为 myproject.app.MyElaborateClass 模拟 tempfile 模块,你可能需要 If you need to mock the tempfile module for myproject.app.MyElaborateClass, you probably need to apply the mock to myproject.app.tempfile, as each module keeps its own imports. +先将那个陷阱置身事外,让我们继续模拟。 With that pitfall out of the way, let’s keep mocking. -### Adding Validation to ‘rm’ +### 向 ‘rm’ 中加入验证 + +之前定义的 rm 方法相当的简单。在盲目地删除之前,我们倾向于拿它来验证一个路径是否存在,并验证其是否是一个文件。让我们重构 rm 使其变得更加智能: -The rm method defined earlier is quite oversimplified. We’d like to have it validate that a path exists and is a file before just blindly attempting to remove it. Let’s refactor rm to be a bit smarter: ``` #!/usr/bin/env python @@ -132,7 +132,7 @@ def rm(filename): os.remove(filename) ``` -Great. Now, let’s adjust our test case to keep coverage up. +很好。现在,让我们调整测试用例来保持测试的覆盖程度。 ``` #!/usr/bin/env python @@ -164,13 +164,13 @@ class RmTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any path") ``` -Our testing paradigm has completely changed. We now can verify and validate internal functionality of methods without any side-effects. +我们的测试用例完全改变了。现在我们可以在没有任何副作用下核实并验证方法的内部功能。 -### File-Removal as a Service +### 将文件删除作为服务 -So far, we’ve only been working with supplying mocks for functions, but not for methods on objects or cases where mocking is necessary for sending parameters. Let’s cover object methods first. +到目前为止,我们只是对函数功能提供模拟测试,并没对需要传递参数的对象和实例的方法进行模拟测试。接下来我们将介绍如何对对象的方法进行模拟测试。 -We’ll begin with a refactor of the rm method into a service class. There really isn’t a justifiable need, per se, to encapsulate such a simple function into an object, but it will at the very least help us demonstrate key concepts in mock. 
Let’s refactor: +首先,我们将rm方法重构成一个服务类。实际上将这样一个简单的函数转换成一个对象,在本质上,这不是一个合理的需求,但它能够帮助我们了解mock的关键概念。让我们开始重构: ``` #!/usr/bin/env python @@ -187,6 +187,7 @@ class RemovalService(object): os.remove(filename) ``` +### 你会注意到我们的测试用例没有太大的变化 ### You’ll notice that not much has changed in our test case: ``` @@ -222,6 +223,7 @@ class RemovalServiceTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any path") ``` +很好,我们知道 RemovalService 会如期工作。接下来让我们创建另一个服务,将其声明为一个依赖 Great, so we now know that the RemovalService works as planned. Let’s create another service which declares it as a dependency: ``` From c716df7f93993b1e043e0e53a2bc68dcae5cc4b0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 31 Jul 2016 15:23:05 +0800 Subject: [PATCH 297/471] PUB:20160104 What is good stock portfolio management software on Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ivo-wang 翻译中要采用中文的语句习惯。 --- ... portfolio management software on Linux.md | 111 ++++++++++++++++++ ... 
portfolio management software on Linux.md | 109 ----------------- 2 files changed, 111 insertions(+), 109 deletions(-) create mode 100644 published/20160104 What is good stock portfolio management software on Linux.md delete mode 100644 translated/tech/20160104 What is good stock portfolio management software on Linux.md diff --git a/published/20160104 What is good stock portfolio management software on Linux.md b/published/20160104 What is good stock portfolio management software on Linux.md new file mode 100644 index 0000000000..15c5091b6a --- /dev/null +++ b/published/20160104 What is good stock portfolio management software on Linux.md @@ -0,0 +1,111 @@ +JStock:Linux 上不错的股票投资组合管理软件 +================================================================================ + +如果你在股票市场做投资,那么你可能非常清楚投资组合管理计划有多重要。管理投资组合的目标是依据你能承受的风险,时间层面的长短和资金盈利的目标去为你量身打造的一种投资计划。鉴于这类软件的重要性,因此从来不会缺乏商业性的 app 和股票行情检测软件,每一个都可以兜售复杂的投资组合以及跟踪报告功能。 + +对于我们这些 Linux 爱好者们,我也找到了一些**好用的开源投资组合管理工具**,用来在 Linux 上管理和跟踪股票的投资组合,这里高度推荐一个基于 java 编写的管理软件 [JStock][1]。如果你不是一个 java 粉,也许你会放弃它,JStock 需要运行在沉重的 JVM 环境上。但同时,在每一个安装了 JRE 的环境中它都可以马上运行起来,在你的 Linux 环境中它会运行的很顺畅。 + +“开源”就意味着免费或标准低下的时代已经过去了。鉴于 JStock 只是一个个人完成的产物,作为一个投资组合管理软件它最令人印象深刻的是包含了非常多实用的功能,以上所有的荣誉属于它的作者 Yan Cheng Cheok!例如,JStock 支持通过监视列表去监控价格,多种投资组合,自选/内置的股票指标与相关监测,支持27个不同的股票市场和跨平台的云端备份/还原。JStock 支持多平台部署(Linux, OS X, Android 和 Windows),你可以通过云端保存你的 JStock 投资组合,并通过云平台无缝的备份/还原到其他的不同平台上面。 + +现在我将向你展示如何安装以及使用过程的一些具体细节。 + +### 在 Linux 上安装 JStock ### + +因为 JStock 使用Java编写,所以必须[安装 JRE][2]才能让它运行起来。小提示,JStock 需要 JRE1.7 或更高版本。如你的 JRE 版本不能满足这个需求,JStock 将会运行失败然后出现下面的报错。 + + Exception in thread "main" java.lang.UnsupportedClassVersionError: org/yccheok/jstock/gui/JStock : Unsupported major.minor version 51.0 + + +在你的 Linux 上安装好了 JRE 之后,从其官网下载最新的发布的 JStock,然后加载启动它。 + + $ wget https://github.com/yccheok/jstock/releases/download/release_1-0-7-13/jstock-1.0.7.13-bin.zip + $ unzip jstock-1.0.7.13-bin.zip + $ cd jstock + $ chmod +x jstock.sh + $ ./jstock.sh + 
+教程的其他部分，让我来给大家展示一些 JStock 的实用功能。
+
+### 监视监控列表中股票价格的波动 ###
+
+使用 JStock 你可以创建一个或多个监视列表，它可以自动地监视股票价格的波动并给你提供相应的通知。在每一个监视列表里面你可以添加多个感兴趣的股票进去。之后在“Fall Below”和“Rise Above”的表格里添加你的警戒值，分别设定该股票的最低价格和最高价格。
+
+![](https://c2.staticflickr.com/2/1588/23795349969_37f4b0f23c_c.jpg)
+
+例如你设置了 AAPL 股票的最低/最高价格分别是 $102 和 $115.50，只要在价格低于 $102 或高于 $115.50 时你就会得到桌面通知。
+
+你也可以设置邮件通知，这样你将收到一些价格信息的邮件通知。设置邮件通知在“Options”菜单里，在“Alert”标签中，打开“Send message to email(s)”，填入你的 Gmail 账户。一旦完成 Gmail 认证步骤，JStock 就会开始发送邮件通知到你的 Gmail 账户（也可以设置其他的第三方邮件地址）。
+
+![](https://c2.staticflickr.com/2/1644/24080560491_3aef056e8d_b.jpg)
+
+### 管理多个投资组合 ###
+
+JStock 允许你管理多个投资组合。这个功能对于你使用多个股票经纪人时是非常实用的。你可以为每个经纪人创建一个投资组合去管理你的“买入/卖出/红利”，用来了解每一个经纪人的业务情况。你也可以在“Portfolio”菜单里面选择特定的投资组合来切换不同的组合项目。下面是一张截图，用来展示一个假设的投资组合。
+
+![](https://c2.staticflickr.com/2/1646/23536385433_df6c036c9a_c.jpg)
+
+你也可以设置付给中介的费用，你可以为每笔买卖交易设置中介费、印花税以及结算费。如果你比较懒，你也可以在选项菜单里面启用自动费用计算，并提前为每一家经纪事务所设置费用方案。当你为你的投资组合增加交易之后，JStock 将自动计算并计入费用。
+
+![](https://c2.staticflickr.com/2/1653/24055085262_0e315c3691_b.jpg)
+
+### 使用内置/自选股票指标来监控 ###
+
+如果你要做一些股票的技术分析，你可能需要基于各种不同的标准来监控股票（这里叫做“股票指标”）。对于股票的跟踪，JStock 提供多个[预设的技术指示器][3]去获得股票上涨/下跌/逆转指数的趋势。下面的列表里面是一些可用的指标。
+
+- 平滑异同移动平均线(MACD)
+- 相对强弱指标 (RSI)
+- 资金流向指标 (MFI)
+- 顺势指标 (CCI)
+- 十字线
+- 黄金交叉线，死亡交叉线
+- 涨幅/跌幅
+
+开启预设指示器需要在 JStock 中点击“Stock Indicator Editor”标签。之后点击右侧面板中的安装按钮。选择“Install from JStock server”选项，之后安装你想要的指示器。
+
+![](https://c2.staticflickr.com/2/1476/23867534660_b6a9c95a06_c.jpg)
+
+一旦安装了一个或多个指示器，你可以用它们来扫描股票。选择“Stock Indicator Scanner”标签，点击底部的“Scan”按钮，选择需要的指示器。
+
+![](https://c2.staticflickr.com/2/1653/24137054996_e8fcd10393_c.jpg)
+
+当你选择完需要扫描的股票（例如, NYSE, NASDAQ）以后，JStock 将执行该扫描，并将该指示器捕获的结果通过列表展现。
+
+![](https://c2.staticflickr.com/2/1446/23795349889_0f1aeef608_c.jpg)
+
+除了预设指示器以外，你也可以使用一个图形化的工具来定义自己的指示器。下面这张图例用于监控当前价格小于或等于 60 天平均价格的股票。
+
+![](https://c2.staticflickr.com/2/1605/24080560431_3d26eac6b5_c.jpg)
+
+### 通过云在 Linux 和 Android JStock 之间备份/恢复 ###
+
+另一个非常棒的功能是 JStock 支持云备份恢复。JStock 可以通过 Google Drive
把你的投资组合/监视列表在云上备份和恢复,这个功能可以实现在不同平台上无缝穿梭。如果你在两个不同的平台之间来回切换使用 Jstock,这种跨平台备份和还原非常有用。我在 Linux 桌面和 Android 手机上测试过我的 Jstock 投资组合,工作的非常漂亮。我在 Android 上将 Jstock 投资组合信息保存到 Google Drive 上,然后我可以在我的 Linux 版的 Jstock 上恢复它。如果能够自动同步到云上,而不用我手动地触发云备份/恢复就更好了,十分期望这个功能出现。 + +![](https://c2.staticflickr.com/2/1537/24163165565_bb47e04d6c_c.jpg) + +![](https://c2.staticflickr.com/2/1556/23536385333_9ed1a75d72_c.jpg) + +如果你在从 Google Drive 还原之后不能看到你的投资信息以及监视列表,请确认你的国家信息与“Country”菜单里面设置的保持一致。 + +JStock 的安卓免费版可以从 [Google Play Store][4] 获取到。如果你需要完整的功能(比如云备份,通知,图表等),你需要一次性支付费用升级到高级版。我认为高级版物有所值。 + +![](https://c2.staticflickr.com/2/1687/23867534720_18b917028c_c.jpg) + +写在最后,我应该说一下它的作者,Yan Cheng Cheok,他是一个十分活跃的开发者,有bug及时反馈给他。这一切都要感谢他!!! + +关于 JStock 这个投资组合跟踪软件你有什么想法呢? + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/stock-portfolio-management-software-Linux.html + +作者:[Dan Nanni][a] +译者:[ivo-wang](https://github.com/ivo-wang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://Linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://jstock.org/ +[2]:http://ask.xmodulo.com/install-java-runtime-Linux.html +[3]:http://jstock.org/ma_indicator.html +[4]:https://play.google.com/store/apps/details?id=org.yccheok.jstock.gui diff --git a/translated/tech/20160104 What is good stock portfolio management software on Linux.md b/translated/tech/20160104 What is good stock portfolio management software on Linux.md deleted file mode 100644 index 7e0c8a05fd..0000000000 --- a/translated/tech/20160104 What is good stock portfolio management software on Linux.md +++ /dev/null @@ -1,109 +0,0 @@ -Translating by ivo-wang -What is good stock portfolio management software on Linux -linux上那些不错的管理股票组合投资软件 -================================================================================ 
-如果你在股票市场做投资,那么你可能非常清楚管理组合投资的计划有多重要。管理组合投资的目标是依据你能承受的风险,时间层面的长短和资金盈利的目标去为你量身打造的一种投资计划。鉴于这类软件的重要性,难怪从不缺乏商业性质的app和股票行情检测软件,每一个都可以兜售复杂的组合投资以及跟踪报告功能。 - -对于这些linux爱好者们,我们找到了一些 **好用的开源组合投资管理工具** 用来在linux上管理和跟踪股票的组合投资,这里高度推荐一个基于java编写的管理软件[JStock][1]。如果你不是一个java粉,你不得不面对这样一个事实JStock需要运行在重型的JVM环境上。同时我相信许多人非常欣赏JStock,安装JRE以后它可以非常迅速的安装在各个linux平台上。没有障碍能阻止你将它安装在你的linux环境中。 - -开源就意味着免费或标准低下的时代已经过去了。鉴于JStock只是一个个人完成的产物,作为一个组合投资管理软件它最令人印象深刻的是包含了非常多实用的功能,以上所有的荣誉属于它的作者Yan Cheng Cheok!例如,JStock 支持通过监视列表去监控价格,多种组合投资,按习惯/按固定 做股票指示与相关扫描,支持27个不同的股票市场和交易平台云端备份/还原。JStock支持多平台部署(Linux, OS X, Android 和 Windows),你可以通过云端保存你的JStock记录,它可以无缝的备份还原到其他的不同平台上面。 - -现在我将向你展示如何安装以及使用过程的一些具体细节。 - -### 在Linux上安装JStock ### - -因为JStock使用Java编写,所以必须[安装 JRE][2]才能让它运行起来.小提示JStock 需要JRE1.7或更高版本。如你的JRE版本不能满足这个需求,JStock将会安装失败然后出现下面的报错。 - - Exception in thread "main" java.lang.UnsupportedClassVersionError: org/yccheok/jstock/gui/JStock : Unsupported major.minor version 51.0 - - -一旦你安装了JRE在你的linux上,从官网下载最新的发布的JStock,然后加载启动它。 - - $ wget https://github.com/yccheok/jstock/releases/download/release_1-0-7-13/jstock-1.0.7.13-bin.zip - $ unzip jstock-1.0.7.13-bin.zip - $ cd jstock - $ chmod +x jstock.sh - $ ./jstock.sh - -教程的其他部分,让我来给大家展示一些JStock的实用功能 - -### 监视监控列表股票价格的波动 ### - -使用JStock你可以创建一个或多个监视列表,它可以自动的监视股票价格的波动并给你提供相应的通知。在每一个监视列表里面你可以添加多个感兴趣的股票进去。之后添加你的警戒值在"Fall Below"和"Rise Above"的表格里,分别是在设定最低价格和最高价格。 - -![](https://c2.staticflickr.com/2/1588/23795349969_37f4b0f23c_c.jpg) - -例如你设置了AAPL股票的最低/最高价格分别是$102 和 $115.50,你将在价格低于$102或高于$115.50的任意时间在桌面得到通知。 - -你也可以设置邮件通知,之后你将收到一些价格信息的邮件通知。设置邮件通知在栏的"Options"选项。在"Alert"标签,打开"Send message to email(s)",填入你的Gmail账户。一旦完成Gmail认证步骤,JStock将开始发送邮件通知到你的Gmail账户(也可以设置其他的第三方邮件地址) -![](https://c2.staticflickr.com/2/1644/24080560491_3aef056e8d_b.jpg) - -### 管理多个组合投资 ### - -JStock能够允许你管理多个组合投资。这个功能对于股票经纪人是非常实用的。你可以为经纪人创建一个投资项去管理你的 买入/卖出/红利 用来了解每一个经纪人的业务情况。你也可以切换不同的组合项目通过选择一个特殊项目在"Portfolio"菜单里面。下面是一张截图用来展示一个意向投资 -![](https://c2.staticflickr.com/2/1646/23536385433_df6c036c9a_c.jpg) - 
-因为能够设置付给经纪人小费的选项,所以你能付给经纪人任意的小费,印花税以及清空每一比交易的小费。如果你非常懒,你也可以在菜单里面设置自动计算小费和给每一个经纪人固定的小费。在完成交易之后JStock将自动的计算并发送小费。 - -![](https://c2.staticflickr.com/2/1653/24055085262_0e315c3691_b.jpg) - -### 显示固定/自选股票提示 ### - -如果你要做一些股票的技术分析,你可能需要不同股票的指数(这里叫做“平均股指”),对于股票的跟踪,JStock提供多个[预设技术指示器][3] 去获得股票上涨/下跌/逆转指数的趋势。下面的列表里面是一些可用的指示。 -- 异同平均线(MACD) -- 相对强弱指数 (RSI) -- 货币流通指数 (MFI) -- 顺势指标 (CCI) -- 十字线 -- 黄金交叉线, 死亡交叉线 -- 涨幅/跌幅 - -开启预设指示器能需要在JStock中点击"Stock Indicator Editor"标签。之后点击右侧面板中的安装按钮。选择"Install from JStock server"选项,之后安装你想要的指示器。 - -![](https://c2.staticflickr.com/2/1476/23867534660_b6a9c95a06_c.jpg) - -一旦安装了一个或多个指示器,你可以用他们来扫描股票。选择"Stock Indicator Scanner"标签,点击底部的"Scan"按钮,选择需要的指示器。 - -![](https://c2.staticflickr.com/2/1653/24137054996_e8fcd10393_c.jpg) - -当你选择完需要扫描的股票(例如e.g., NYSE, NASDAQ)以后,JStock将执行扫描,并将捕获的结果通过列表的形式展现在指示器上面。 - -![](https://c2.staticflickr.com/2/1446/23795349889_0f1aeef608_c.jpg) - -除了预设指示器以外,你也可以使用一个图形化的工具来定义自己的指示器。下面这张图例中展示的是当前价格小于或等于60天平均价格 - -![](https://c2.staticflickr.com/2/1605/24080560431_3d26eac6b5_c.jpg) - -### 云备份还原Linux 和 Android JStock ### - -另一个非常棒的功能是JStock可以支持云备份还原。Jstock也可以把你的组合投资/监视列表备份还原在 Google Drive,这个功能可以实现在不同平台(例如Linux和Android)上无缝穿梭。举个例子,如果你把Android Jstock组合投资的信息保存在Google Drive上,你可以在Linux班级本上还原他们。 - -![](https://c2.staticflickr.com/2/1537/24163165565_bb47e04d6c_c.jpg) - -![](https://c2.staticflickr.com/2/1556/23536385333_9ed1a75d72_c.jpg) - -如果你在从Google Drive还原之后不能看到你的投资信息以及监视列表,请确认你的国家信息与“Country”菜单里面设置的保持一致。 - -JStock的安卓免费版可以从[Google Play Store][4]获取到。如果你需要完整的功能(比如云备份,通知,图表等),你需要一次性支付费用升级到高级版。我想高级版肯定有它的价值所在。 - -![](https://c2.staticflickr.com/2/1687/23867534720_18b917028c_c.jpg) - -写在最后,我应该说一下它的作者,Yan Cheng Cheok,他是一个十分活跃的开发者,有bug及时反馈给他。最后多有的荣耀都属于他一个人!!! - -关于JStock这个组合投资跟踪软件你有什么想法呢? 
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/stock-portfolio-management-software-linux.html - -作者:[Dan Nanni][a] -译者:[ivo-wang](https://github.com/ivo-wang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://jstock.org/ -[2]:http://ask.xmodulo.com/install-java-runtime-linux.html -[3]:http://jstock.org/ma_indicator.html -[4]:https://play.google.com/store/apps/details?id=org.yccheok.jstock.gui From 60ba5b6d27f61fdea3cabf8a1669f707ca9f92f5 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 31 Jul 2016 18:06:20 +0800 Subject: [PATCH 298/471] PUB:20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER @kokialoves --- ... A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 36 ++++++++++++++++++ ... A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 37 ------------------- 2 files changed, 36 insertions(+), 37 deletions(-) create mode 100644 published/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md delete mode 100644 translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md diff --git a/published/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/published/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md new file mode 100644 index 0000000000..0fd776c078 --- /dev/null +++ b/published/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md @@ -0,0 +1,36 @@ +在浏览器中体验 Ubuntu +===================================================== + +[Ubuntu][2] 的背后的公司 [Canonical][1] 为 Linux 推广做了很多努力。无论你有多么不喜欢 Ubuntu,你必须承认它对 “Linux 易用性”的影响。Ubuntu 以及其衍生是使用最多的 Linux 版本。 + +为了进一步推广 Ubuntu Linux,Canonical 把它放到了浏览器里,你可以在任何地方使用这个 [Ubuntu 演示版][0]。 它将帮你更好的体验 Ubuntu,以便让新人更容易决定是否使用它。 + +你可能争辩说 USB 版的 Linux 更好。我同意,但是你要知道你要下载 ISO,创建 USB 启动盘,修改配置文件,然后才能使用这个 USB 启动盘来体验。这么乏味并不是每个人都乐意这么干的。 在线体验是一个更好的选择。 + +那么,你能在 Ubuntu 在线看到什么。实际上并不多。 + +你可以浏览文件,你可以使用 Unity 
Dash，浏览 Ubuntu 软件中心，甚至装几个应用（当然它们不会真的安装），看一看文件浏览器和其它一些东西。以上就是全部了。但是在我看来，这已经做得很好了，让你知道它是什么样子的，对这个流行的操作系统有个直接感受。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo.jpeg)
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-1.jpeg)
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-2.jpeg)
+
+如果你的朋友或者家人对试试 Linux 抱有兴趣，但是想在安装前先体验一下，你可以给他们以下链接：[Ubuntu 在线导览][0]。
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-online-demo/
+
+作者：[Abhishek Prakash][a]
+译者：[kokialoves](https://github.com/kokialoves)
+校对：[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[0]: http://tour.ubuntu.com/en/
+[1]: http://www.canonical.com/
+[2]: http://www.ubuntu.com/
diff --git a/translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
deleted file mode 100644
index d79969e3e7..0000000000
--- a/translated/tech/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
+++ /dev/null
@@ -1,37 +0,0 @@
-你能在浏览器中运行UBUNTU
-=====================================================
-
-Canonical, Ubuntu的母公司, 为Linux推广做了很多努力. 无论你有多么不喜欢 Ubuntu, 你必须承认它对 “Linux 易用性”的影响. Ubuntu 以及其衍生是应用最多的Linux版本 .
-
-为了进一步推广 Ubuntu Linux, Canonical 把它放到了浏览器里你可以再任何地方使用 [demo version of Ubuntu][1]. 它将帮你更好的体验 Ubuntu. 以便让新人更容易决定是否使用.
-
-你可能争辩说USB版的linux更好. 我同意但是你要知道你要下载ISO, 创建USB驱动, 修改配置文件. 并不是每个人都乐意这么干的. 在线体验是一个更好的选择.
-
-因此, 你能在Ubuntu在线看到什么. 实际上并不多.
-
-你可以浏览文件, 你可以使用 Unity Dash, 浏览 Ubuntu Software Center, 甚至装几个 apps (当然它们不会真的安装), 看一看文件浏览器 和其它一些东西. 以上就是全部了.
但是在我看来, 它是非常漂亮的 - -![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo.jpeg) - -![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-1.jpeg) - -![](https://itsfoss.com/wp-content/uploads/2016/07/Ubuntu-online-demo-2.jpeg) - -如果你的朋友或者家人想试试Linux又不乐意安装, 你可以给他们以下链接: - -[Ubuntu Online Tour][0] - - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/ubuntu-online-demo/?utm_source=newsletter&utm_medium=email&utm_campaign=linux_and_open_source_stories_this_week - -作者:[Abhishek Prakash][a] -译者:[kokialoves](https://github.com/kokialoves) -校对:[校对ID](https://github.com/校对ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[0]: http://tour.ubuntu.com/en/ -[1]: http://tour.ubuntu.com/en/ From 1132d343cdc7f88dcfc3f8ac8a09a712fe2df86e Mon Sep 17 00:00:00 2001 From: Mike Date: Sun, 31 Jul 2016 18:17:42 +0800 Subject: [PATCH 299/471] What containers and unikernels can learn from Arduino and Raspberry Pi (#4251) * translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md * translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md * sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md --- ...rs and unikernels can learn from Arduino and Raspberry Pi.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md b/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md index a1d6257d4d..ff81871630 100644 --- a/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md +++ b/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md @@ -1,3 +1,5 @@ +MikeCoder Translating... 
+
 What containers and unikernels can learn from Arduino and Raspberry Pi
==========================================================================

From 601aec7515ab2e66e3e16b4c4945e76e3c12cc36 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 31 Jul 2016 18:40:17 +0800
Subject: [PATCH 300/471] PUB:20160718 OPEN SOURCE ACCOUNTING SOFTWARE
 @MikeCoder

---
 ...0160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 80 +++++++++++++++++++
 ...0160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 71 ----------------
 2 files changed, 80 insertions(+), 71 deletions(-)
 create mode 100644 published/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
 delete mode 100644 translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md

diff --git a/published/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/published/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
new file mode 100644
index 0000000000..7598de6bf1
--- /dev/null
+++ b/published/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
@@ -0,0 +1,80 @@
+GNU KHATA：开源的会计管理软件
+============================================
+
+作为一个活跃的 Linux 爱好者，我经常向我的朋友们介绍 Linux，帮助他们选择最适合他们的发行版本，同时也会帮助他们安装一些适用于他们工作的开源软件。
+
+但是这一次，我就变得很无奈。我的叔叔，他是一个自由职业的会计师。他有一系列用于会计工作的漂亮而成熟的付费软件。我不那么确定我能在开源软件中找到这么一款可以替代的软件——直到昨天。
+
+Abhishek 给我推荐了一些[很酷的软件][1]，而其中 GNU Khata 脱颖而出。
+
+[GNU Khata][2] 是一个会计工具。或者，我应该说成是一系列的会计工具集合？它就像经济管理方面的 [Evernote][3] 一样。它的应用是如此之广，以至于它不但可以用于个人的财务管理，也可以用于大型公司的管理，从店铺存货管理到税率计算，都可以有效处理。
+
+有个有趣的地方，Khata 这个词在印度或者是其他的印度语国家中意味着账户，所以这个会计软件叫做 GNU Khata。
+
+### 安装
+
+互联网上有很多关于旧的 Web 版本的 Khata 安装介绍。现在，GNU Khata 只能用在 Debian/Ubuntu 和它们的衍生版本中。我建议你按照 GNU Khata 官网给出的如下步骤来安装。我们来快速过一下。
+
+- 从[这里][4]下载安装器。
+- 在下载目录打开终端。
+- 粘贴复制以下的代码到终端，并且执行。
+
+```
+sudo chmod 755 GNUKhatasetup.run
+sudo ./GNUKhatasetup.run
+```
+
+这就结束了，从你的 Dash 或者是应用菜单中启动 GNU Khata 吧。
+
+### 第一次启动
+
+GNU Khata 在浏览器中打开，并且展现以下的画面。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-1.jpg)
+
+填写组织的名字、组织形式、财务年度并且点击 proceed 按钮进入管理设置页面。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-2.jpg)
+
+仔细填写你的用户名、密码、安全问题及其答案,并且点击“create and login”。 + +![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-3.jpg) + +你已经全部设置完成了。使用菜单栏来开始使用 GNU Khata 来管理你的财务吧。这很容易。 + +### 移除 GNU KHATA + +如果你不想使用 GNU Khata 了,你可以执行如下命令移除: + +``` +sudo apt-get remove --auto-remove gnukhata-core-engine +``` + +你也可以通过新立得软件管理来删除它。 + +### GNU KHATA 真的是市面上付费会计应用的竞争对手吗? + +首先,GNU Khata 以简化为设计原则。顶部的菜单栏组织的很方便,可以帮助你有效的进行工作。你可以选择管理不同的账户和项目,并且切换非常容易。[它们的官网][5]表明,GNU Khata 可以“像说印度语一样方便”(LCTT 译注:原谅我,这个软件作者和本文作者是印度人……)。同时,你知道 GNU Khata 也可以在云端使用吗? + +所有的主流的账户管理工具,比如分类账簿、项目报表、财务报表等等都用专业的方式整理,并且支持自定义格式和即时展示。这让会计和仓储管理看起来如此的简单。 + +这个项目正在积极的发展,正在寻求实操中的反馈以帮助这个软件更加进步。考虑到软件的成熟性、使用的便利性还有免费的情况,GNU Khata 可能会成为你最好的账簿助手。 + +请在评论框里留言吧,让我们知道你是如何看待 GNU Khata 的。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/using-gnu-khata/ + +作者:[Aquil Roshan][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/aquil/ +[1]: https://itsfoss.com/category/apps/ +[2]: http://www.gnukhata.in/ +[3]: https://evernote.com/ +[4]: https://cloud.openmailbox.org/index.php/s/L8ppsxtsFq1345E/download +[5]: http://www.gnukhata.in/ diff --git a/translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md deleted file mode 100644 index 8093f8eae2..0000000000 --- a/translated/tech/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md +++ /dev/null @@ -1,71 +0,0 @@ -GNU KHATA:开源的会计管理软件 -============================================ - -作为一个活跃的 Linux 爱好者,我经常向我的朋友们介绍 Linux,帮助他们选择最适合他们的发行版本,同时也会帮助他们安装一些适用于他们工作的开源软件。 - -但是,又一次,我就变得很无奈。我的叔叔,他是一个自由职业的会计师。他会有一系列的为了会计工作的付费软件。我就不那么确定,我能在在开软软件中找到这么一款可以替代的软件。直到昨天。 - -Abhishek 给我推荐了一些[很酷的软件][1],并且,GNU Khata 这个特殊的一款,脱颖而出。 - -[GNU Khata][2] 是一个会计工具。 
或者,我可以说成是一系列的会计工具集合?这像经济管理方面的[Evernote][3]。他的应用是如此之广,以至于他可以处理个人的财务管理,到大型公司的管理,从店铺存货管理到税率计算,都可以有效处理。 - -对你来说一个有趣的点,'Khata' 在印度或者是其他的印度语国家中意味着账户,所以这个会计软件叫做 GNU Khata。 - -### 安装 - -互联网上有很多关于老旧版本的 Khata 安装的介绍。现在,GNU Khata 已经可以在 Debian/Ubuntu 和他们的衍生产中得到。我建议你按照如下 GNU Khata 官网的步骤来安装。我们来一次快速的入门。 - -- 从[这][4]下载安装器。 -- 在下载目录打开终端。 -- 粘贴复制以下的代码到终端,并且执行。 - -``` -sudo chmod 755 GNUKhatasetup.run -sudo ./GNUKhatasetup.run -``` - -- 这就结束了,从你的 Dash 或者是应用菜单中启动 GNU Khata 吧。 -### 第一次启动 - -GNU Khata 在浏览器中打开,并且展现以下的画面。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-1.jpg) - -填写组织的名字和组织形式,经济年份并且点击 proceed 按钮进入管理设置页面。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-2.jpg) - -仔细填写你的姓名,密码,安全问题和他的答案,并且点击“create and login”。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/GNU-khata-3.jpg) - -你已经全部设置完成了。使用菜单栏来开始使用 GNU Khata 来管理你的经济吧。这很容易。 - -### GNU KHATA 真的是市面上付费会计应用的竞争对手吗? - -首先,GNU Khata 让所有的事情变得简单。顶部的菜单栏被方便的组织,可以帮助你有效的进行工作。 你可以选择管理不同的账户和项目,并且切换非常容易。[他们的官网][5]表明,GNU Khata 可以“方便的转变成印度语”。同时,你知道 GNU Khata 也可以在云端使用吗? 
- -所有的主流的账户管理工具,将账簿分类,项目介绍,法规介绍等等用专业的方式整理,并且支持自定义整理和实时显示。这让会计和进存储管理看起来如此的简单。 - -这个项目正在积极的发展,从实际的操作中提交反馈来帮助这个软件更加进步。考虑到软件的成熟性,使用的便利性还有免费的 tag。GNU Khata 可能会成为最好的账簿助手。 - -请在评论框里留言吧,让我们知道你是如何看待 GNU Khata 的。 - - - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/using-gnu-khata/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 - -作者:[Aquil Roshan][a] -译者:[MikeCoder](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/aquil/ -[1]: https://itsfoss.com/category/apps/ -[2]: http://www.gnukhata.in/ -[3]: https://evernote.com/ -[4]: https://cloud.openmailbox.org/index.php/s/L8ppsxtsFq1345E/download -[5]: http://www.gnukhata.in/ From 9c8a11a995865b85bf04bbdb59d7b7b582da1de5 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 31 Jul 2016 19:07:19 +0800 Subject: [PATCH 301/471] PUB:20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper @geekpi --- ...f Earth as Your Linux Desktop Wallpaper.md | 90 +++++++++++++++++++ ...f Earth as Your Linux Desktop Wallpaper.md | 39 -------- 2 files changed, 90 insertions(+), 39 deletions(-) create mode 100644 published/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md delete mode 100644 translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md diff --git a/published/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/published/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md new file mode 100644 index 0000000000..40ba237020 --- /dev/null +++ b/published/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md @@ -0,0 +1,90 @@ +为你的 Linux 桌面设置一张实时的地球照片 +================================================================= + 
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/07/Screen-Shot-2016-07-26-at-16.36.47-1.jpg)
+
+厌倦了看同样的桌面背景了么？这里有一个（可能是）世界上最棒的东西。
+
+‘[Himawaripy][1]’ 是一个 Python 3 小脚本，它会抓取由[日本 Himawari 8 气象卫星][2]拍摄的接近实时的地球照片，并将它设置成你的桌面背景。
+
+安装完成后，你可以将它设置成每 10 分钟运行的定时任务（自然，它要在后台运行），这样它就可以实时地取回地球的照片并设置成背景了。
+
+因为 Himawari-8 是一颗同步轨道卫星，你只能看到澳大利亚上空的地球的图片——但是它实时的天气形态、云团和光线仍使它很壮丽，对我而言要是能看到英国上方的就更好了！
+
+高级设置允许你配置从卫星取回的图片质量，但是要记住增加图片质量会增加文件大小及更长的下载等待！
+
+最后，虽然这个脚本与我们提到过的其他脚本类似，它还仍保持更新及可用。
+
+### 获取 Himawaripy
+
+Himawaripy 已经在一系列的桌面环境中都测试过了，包括 Unity、LXDE、i3、MATE 和其他桌面环境。它是自由开源软件，但是整体来说安装及配置不太简单。
+
+在该项目的 [Github 主页][0]上可以找到安装和设置该应用程序的所有指导（提示：没有一键安装功能）。
+
+- [实时地球壁纸脚本的 GitHub 主页][0]
+
+### 安装及使用
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/07/Screen-Shot-2016-07-26-at-16.46.13-750x143.png)
+
+一些读者请我在本文中补充一下一步步安装该应用的步骤。以下所有步骤都在其 GitHub 主页上，这里再贴一遍。
+
+1、下载及解压 Himawaripy
+
+这是最容易的步骤。点击下面的下载链接，然后下载最新版本，并解压到你的下载目录里面。
+
+ - [下载 Himawaripy 主干文件（.zip 格式）][3]
+
+2、安装 python3-setuptools
+
+你需要手工来安装这个软件包，Ubuntu 里面默认没有安装它：
+
+```
+sudo apt install python3-setuptools
+```
+
+3、安装 Himawaripy
+
+在终端中，你需要切换到之前解压的目录中，并运行如下安装命令：
+
+```
+cd ~/Downloads/himawaripy-master
+sudo python3 setup.py install
+```
+
+4、看看它是否可以运行并下载最新的实时图片：
+```
+himawaripy
+```
+5、设置定时任务
+
+如果你希望该脚本可以在后台自动运行并更新（如果你需要手动更新，只需要运行 ‘himawaripy’ 即可）
+
+在终端中运行：
+```
+crontab -e
+```
+在其中新加一行（默认每 10 分钟运行一次）
+```
+*/10 * * * * /usr/local/bin/himawaripy
+```
+关于[配置定时任务][4]可以在 Ubuntu Wiki 上找到更多信息。
+
+该脚本安装后你不需要不断运行它，它会自动地每十分钟在后台运行一次。
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2016/07/set-real-time-earth-wallpaper-ubuntu-desktop
+
+作者：[JOEY-ELIJAH SNEDDON][a]
+译者：[geekpi](https://github.com/geekpi)
+校对：[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://plus.google.com/117485690627814051450/?rel=author
+[1]: https://github.com/boramalper/himawaripy
+[2]: 
https://en.wikipedia.org/wiki/Himawari_8 +[0]: https://github.com/boramalper/himawaripy +[3]: https://github.com/boramalper/himawaripy/archive/master.zip +[4]: https://help.ubuntu.com/community/CronHowto \ No newline at end of file diff --git a/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md deleted file mode 100644 index d05fd97eb2..0000000000 --- a/translated/tech/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md +++ /dev/null @@ -1,39 +0,0 @@ -为你的Linux桌面设置一张真实的地球照片 -================================================================= - -![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/07/Screen-Shot-2016-07-26-at-16.36.47-1.jpg) - -厌倦了看同样的桌面背景了么?这里有一个几乎这世界上最好的东西。 - -‘[Himawaripy][1]‘是一个Python 3脚本,它会接近实时抓取由[日本Himawari 8气象卫星][2]拍摄的地球照片,并将它设置成你的桌面背景。 - -安装完成后,你可以将它设置成每10分钟运行的任务(自然地它是在后台运行),这样它就可以实时地取回地球的照片并设置成背景了。 - -因为Himawari-8是一颗同步轨道卫星,你只能看到澳大利亚上空的地球的图片-但是它实时的天气形态、云团和光线仍使它很壮丽,即使对我而言在看到英国上方的更好! - -高级设置允许你配置从卫星取回的图片质量,但是要记住增加图片质量会增加文件大小及更长的下载等待! 
- -最后,虽然这个脚本与其他我们提到过的其他类似,它还仍保持更新及可用。 - -获取Himawaripy - -Himawaripy已经在一系列的桌面环境中都测试过了,包括Unity、LXDE、i3、MATE和其他桌面环境。它是免费、开源软件但是并不能直接设置及配置。 - -在Github上查找获取安装的应用程序和设置的所有指令()提示:有没有一键安装)上。 - -[GitHub上的实时地球壁纸脚本][0] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ - -作者:[ JOEY-ELIJAH SNEDDON][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://plus.google.com/117485690627814051450/?rel=author -[1]: https://github.com/boramalper/himawaripy -[2]: https://en.wikipedia.org/wiki/Himawari_8 -[0]: https://github.com/boramalper/himawaripy From d592e68b58f12b4f81d258e424941677c22030a5 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Sun, 31 Jul 2016 19:24:56 +0800 Subject: [PATCH 302/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=EF=BC=9AHow=20to=20Configure=20and=20Troubleshoot=20Grand=20Un?= =?UTF-8?q?ified=20Bootloader=20(GRUB)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...leshoot Grand Unified Bootloader (GRUB).md | 187 ------------------ ...leshoot Grand Unified Bootloader (GRUB).md | 184 +++++++++++++++++ 2 files changed, 184 insertions(+), 187 deletions(-) delete mode 100644 sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md create mode 100644 translated/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md diff --git a/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md b/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md deleted file mode 100644 index 2d1cbef467..0000000000 --- a/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md 
+++ /dev/null @@ -1,187 +0,0 @@ -Being translated by ChrisLeeGit - -Part 13 - LFCS: How to Configure and Troubleshoot Grand Unified Bootloader (GRUB) -===================================================================================== - -Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, your are highly encouraged to use the [LFCE series][2] as well. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Configure-Troubleshoot-Grub-Boot-Loader.png) ->LFCS: Configure and Troubleshoot Grub Boot Loader – Part 13 - -In this article we will introduce you to GRUB and explain why a boot loader is necessary, and how it adds versatility to the system. - -The [Linux boot process][3] from the time you press the power button of your computer until you get a fully-functional system follows this high-level sequence: - -* 1. A process known as **POST** (**Power-On Self Test**) performs an overall check on the hardware components of your computer. -* 2. When **POST** completes, it passes the control over to the boot loader, which in turn loads the Linux kernel in memory (along with **initramfs**) and executes it. The most used boot loader in Linux is the **GRand Unified Boot loader**, or **GRUB** for short. -* 3. The kernel checks and accesses the hardware, and then runs the initial process (mostly known by its generic name “**init**”) which in turn completes the system boot by starting services. - -In Part 7 of this series (“[SysVinit, Upstart, and Systemd][4]”) we introduced the [service management systems and tools][5] used by modern Linux distributions. You may want to review that article before proceeding further. - -### Introducing GRUB Boot Loader - -Two major **GRUB** versions (**v1** sometimes called **GRUB Legacy** and **v2**) can be found in modern systems, although most distributions use **v2** by default in their latest versions. 
Only **Red Hat Enterprise Linux 6** and its derivatives still use **v1** today. - -Thus, we will focus primarily on the features of **v2** in this guide. - -Regardless of the **GRUB** version, a boot loader allows the user to: - -* 1). modify the way the system behaves by specifying different kernels to use, -* 2). choose between alternate operating systems to boot, and -* 3). add or edit configuration stanzas to change boot options, among other things. - -Today, **GRUB** is maintained by the **GNU** project and is well documented in their website. You are encouraged to use the [GNU official documentation][6] while going through this guide. - -When the system boots you are presented with the following **GRUB** screen in the main console. Initially, you are prompted to choose between alternate kernels (by default, the system will boot using the latest kernel) and are allowed to enter a **GRUB** command line (with `c`) or edit the boot options (by pressing the `e` key). - -![](http://www.tecmint.com/wp-content/uploads/2016/03/GRUB-Boot-Screen.png) ->GRUB Boot Screen - -One of the reasons why you would consider booting with an older kernel is a hardware device that used to work properly and has started “acting up” after an upgrade (refer to [this link][7] in the AskUbuntu forums for an example). - -The **GRUB v2** configuration is read on boot from `/boot/grub/grub.cfg` or `/boot/grub2/grub.cfg`, whereas `/boot/grub/grub.conf` or `/boot/grub/menu.lst` are used in **v1**. These files are NOT to be edited by hand, but are modified based on the contents of `/etc/default/grub` and the files found inside `/etc/grub.d`. 
- -In a **CentOS 7**, here’s the configuration file that is created when the system is first installed: - -``` -GRUB_TIMEOUT=5 -GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" -GRUB_DEFAULT=saved -GRUB_DISABLE_SUBMENU=true -GRUB_TERMINAL_OUTPUT="console" -GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet" -GRUB_DISABLE_RECOVERY="true" -``` - -In addition to the online documentation, you can also find the GNU GRUB manual using info as follows: - -``` -# info grub -``` - -If you’re interested specifically in the options available for /etc/default/grub, you can invoke the configuration section directly: - -``` -# info -f grub -n 'Simple configuration' -``` - -Using the command above you will find out that `GRUB_TIMEOUT` sets the time between the moment when the initial screen appears and the system automatic booting begins unless interrupted by the user. When this variable is set to `-1`, boot will not be started until the user makes a selection. - -When multiple operating systems or kernels are installed in the same machine, `GRUB_DEFAULT` requires an integer value that indicates which OS or kernel entry in the GRUB initial screen should be selected to boot by default. 
The list of entries can be viewed not only in the splash screen shown above, but also using the following command: - -### In CentOS and openSUSE: - -``` -# awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg -``` - -### In Ubuntu: - -``` -# awk -F\' '$1=="menuentry " {print $2}' /boot/grub/grub.cfg -``` - -In the example shown in the below image, if we wish to boot with the kernel version **3.10.0-123.el7.x86_64** (4th entry), we need to set `GRUB_DEFAULT` to `3` (entries are internally numbered beginning with zero) as follows: - -``` -GRUB_DEFAULT=3 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Boot-System-with-Old-Kernel-Version.png) ->Boot System with Old Kernel Version - -One final GRUB configuration variable that is of special interest is `GRUB_CMDLINE_LINUX`, which is used to pass options to the kernel. The options that can be passed through GRUB to the kernel are well documented in the [Kernel Parameters file][8] and in [man 7 bootparam][9]. - -Current options in my **CentOS 7** server are: - -``` -GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet" -``` - -Why would you want to modify the default kernel parameters or pass extra options? In simple terms, there may be times when you need to tell the kernel certain hardware parameters that it may not be able to determine on its own, or to override the values that it would detect. - -This happened to me not too long ago when I tried **Vector Linux**, a derivative of **Slackware**, on my 10-year old laptop. After installation it did not detect the right settings for my video card so I had to modify the kernel options passed through GRUB in order to make it work. - -Another example is when you need to bring the system to single-user mode to perform maintenance tasks. 
You can do this by appending the word single to `GRUB_CMDLINE_LINUX` and rebooting: - -``` -GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet single" -``` - -After editing `/etc/defalt/grub`, you will need to run `update-grub` (Ubuntu) or `grub2-mkconfig -o /boot/grub2/grub.cfg` (**CentOS** and **openSUSE**) afterwards to update `grub.cfg` (otherwise, changes will be lost upon boot). - -This command will process the boot configuration files mentioned earlier to update `grub.cfg`. This method ensures changes are permanent, while options passed through GRUB at boot time will only last during the current session. - -### Fixing Linux GRUB Issues - -If you install a second operating system or if your GRUB configuration file gets corrupted due to human error, there are ways you can get your system back on its feet and be able to boot again. - -In the initial screen, press `c` to get a GRUB command line (remember that you can also press `e` to edit the default boot options), and use help to bring the available commands in the GRUB prompt: - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Fix-Grub-Issues-in-Linux.png) ->Fix Grub Configuration Issues in Linux - -We will focus on **ls**, which will list the installed devices and filesystems, and we will examine what it finds. In the image below we can see that there are 4 hard drives (`hd0` through `hd3`). - -Only `hd0` seems to have been partitioned (as evidenced by msdos1 and msdos2, where 1 and 2 are the partition numbers and msdos is the partitioning scheme). - -Let’s now examine the first partition on `hd0` (**msdos1**) to see if we can find GRUB there. 
This approach will allow us to boot Linux and there use other high level tools to repair the configuration file or reinstall GRUB altogether if it is needed: - -``` -# ls (hd0,msdos1)/ -``` - -As we can see in the highlighted area, we found the `grub2` directory in this partition: - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Grub-Configuration.png) ->Find Grub Configuration - -Once we are sure that GRUB resides in (**hd0,msdos1**), let’s tell GRUB where to find its configuration file and then instruct it to attempt to launch its menu: - -``` -set prefix=(hd0,msdos1)/grub2 -set root=(hd0,msdos1) -insmod normal -normal -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-and-Launch-Grub-Menu.png) ->Find and Launch Grub Menu - -Then in the GRUB menu, choose an entry and press **Enter** to boot using it. Once the system has booted you can issue the `grub2-install /dev/sdX` command (change `sdX` with the device you want to install GRUB on). The boot information will then be updated and all related files be restored. - -``` -# grub2-install /dev/sdX -``` - -Other more complex scenarios are documented, along with their suggested fixes, in the [Ubuntu GRUB2 Troubleshooting guide][10]. The concepts explained there are valid for other distributions as well. - -### Summary - -In this article we have introduced you to GRUB, indicated where you can find documentation both online and offline, and explained how to approach an scenario where a system has stopped booting properly due to a bootloader-related issue. - -Fortunately, GRUB is one of the tools that is best documented and you can easily find help either in the installed docs or online using the resources we have shared in this article. - -Do you have questions or comments? Don’t hesitate to let us know using the comment form below. We look forward to hearing from you! 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ -[3]: http://www.tecmint.com/linux-boot-process/ -[4]: http://www.tecmint.com/linux-boot-process-and-manage-services/ -[5]: http://www.tecmint.com/best-linux-log-monitoring-and-management-tools/ -[6]: http://www.gnu.org/software/grub/manual/ -[7]: http://askubuntu.com/questions/82140/how-can-i-boot-with-an-older-kernel-version -[8]: https://www.kernel.org/doc/Documentation/kernel-parameters.txt -[9]: http://man7.org/linux/man-pages/man7/bootparam.7.html -[10]: https://help.ubuntu.com/community/Grub2/Troubleshooting diff --git a/translated/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md b/translated/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md new file mode 100644 index 0000000000..d8b7b294e2 --- /dev/null +++ b/translated/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md @@ -0,0 +1,184 @@ +LFCS 系列第十三讲:如何配置并排除 GNU 引导加载程序(GRUB)故障 +===================================================================================== + +由于 LFCS 考试需求的变动已于 2016 年 2 月 2 日生效,因此我们向 [LFCS 系列][1] 添加了一些必要的话题。为了准备认证考试,我们也强烈推荐你去看 [LFCE 系列][2]。 + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Configure-Troubleshoot-Grub-Boot-Loader.png) +>LFCS 系列第十三讲:配置并排除 Grub 引导加载程序故障。 + +本文将会向你介绍 GRUB 的知识,并会说明你为什么需要一个引导加载程序,以及它是如何增强系统通用性的。 + +[Linux 引导过程][3] 
是从你按下你的电脑电源键开始,直到你拥有一个全功能的系统为止,整个过程遵循着这样的高层次顺序: + +* 1. 一个叫做 **POST**(**上电自检**)的过程会对你的电脑硬件组件做全面的检查。 +* 2. 当 **POST** 完成后,它会把控制权转交给引导加载程序,接下来引导加载程序会将 Linux 内核(以及 **initramfs**)加载到内存中并执行。 +* 3. 内核首先检查并访问硬件,然后运行初始进程(主要以它的通用名 **init** 而为人熟知),接下来初始进程会启动一些服务,最后完成系统启动过程。 + +在该系列的第七讲(“[SysVinit, Upstart, 和 Systemd][4]”)中,我们介绍了现代 Linux 发行版使用的一些服务管理系统和工具。在继续学习之前,你可能想要回顾一下那一讲的知识。 + +### GRUB 引导加载程序介绍 + +在现代系统中,你会发现有两种主要的 **GRUB** 版本(一种是偶尔被称为 **GRUB Legacy** 的 **v1** 版本,另一种则是 **v2** 版本),虽说多数最新版本的发行版系统都默认使用了 **v2** 版本。如今,只有 **红帽企业版 Linux 6** 及其衍生系统仍在使用 **v1** 版本。 + +因此,在本指南中,我们将着重关注 **v2** 版本的功能。 + +不管 **GRUB** 的版本是什么,一个引导加载程序都允许用户: + +* 1). 通过指定使用不同的内核来修改系统的表现方式; +* 2). 从多个操作系统中选择一个启动; +* 3). 添加或编辑配置节点来改变启动选项等。 + +如今,**GNU** 项目负责维护 **GRUB**,并在它们的网站上提供了丰富的文档。当你在阅读这篇指南时,我们强烈建议你看下 [GNU 官方文档][6]。 + +当系统引导时,你会在主控制台看到如下的 **GRUB** 画面。最开始,你可以根据提示在多个内核版本中选择一个内核(默认情况下,系统将会使用最新的内核启动),并且可以进入 **GRUB** 命令行模式(使用 `c` 键),或者编辑启动项(按下 `e` 键)。 + +![](http://www.tecmint.com/wp-content/uploads/2016/03/GRUB-Boot-Screen.png) +> GRUB 启动画面 + +你会考虑使用一个旧版内核启动的原因之一是之前工作正常的某个硬件设备在一次升级后出现了“怪毛病(acting up)”(例如,你可以参考 AskUbuntu 论坛中的 [这条链接][7])。 + +**GRUB v2** 的配置文件会在启动时从 `/boot/grub/grub.cfg` 或 `/boot/grub2/grub.cfg` 文件中读取,而 **GRUB v1** 使用的配置文件则来自 `/boot/grub/grub.conf` 或 `/boot/grub/menu.lst`。这些文件不能直接手动编辑,而是根据 `/etc/default/grub` 的内容和 `/etc/grub.d` 目录中的文件来修改的。 + +在 **CentOS 7** 上,当系统最初完成安装后,会生成如下的配置文件: + +``` +GRUB_TIMEOUT=5 +GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" +GRUB_DEFAULT=saved +GRUB_DISABLE_SUBMENU=true +GRUB_TERMINAL_OUTPUT="console" +GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet" +GRUB_DISABLE_RECOVERY="true" +``` + +除了在线文档外,你也可以使用下面的命令查阅 GNU GRUB 手册: + +``` +# info grub +``` + +如果你对 `/etc/default/grub` 文件中的可用选项特别感兴趣的话,你可以直接查阅配置一节的帮助文档: + +``` +# info -f grub -n 'Simple configuration' +``` + +使用上述命令,你会发现 `GRUB_TIMEOUT`
用于设置启动画面出现和系统自动开始启动(除非被用户中断)之间的时间。当该变量值为 `-1` 时,除非用户主动做出选择,否则不会开始启动。 + +当同一台机器上安装了多个操作系统或内核后,`GRUB_DEFAULT` 就需要用一个整数来指定 GRUB 启动画面默认选择启动的操作系统或内核条目。我们既可以通过上述启动画面查看启动条目列表,也可以使用下面的命令: + +### 在 CentOS 和 openSUSE 系统上 + +``` +# awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg +``` + +### 在 Ubuntu 系统上 + +``` +# awk -F\' '$1=="menuentry " {print $2}' /boot/grub/grub.cfg +``` + +如下图所示的例子中,如果我们想要使用版本为 `3.10.0-123.el7.x86_64` 的内核(第四个条目),我们需要将 `GRUB_DEFAULT` 设置为 `3`(条目从零开始编号),如下所示: + +``` +GRUB_DEFAULT=3 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Boot-System-with-Old-Kernel-Version.png) +> 使用旧版内核启动系统 + +最后一个需要特别关注的 GRUB 配置变量是 `GRUB_CMDLINE_LINUX`,它是用来给内核传递选项的。我们可以在 [内核参数文件][8] 和 [man 7 bootparam][9] 中找到能够通过 GRUB 传递给内核的选项的详细文档。 + +我的 **CentOS 7** 服务器上当前的选项是: + +``` +GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet" +``` +为什么你希望修改默认的内核参数或者传递额外的选项呢?简单来说,在很多情况下,你需要告诉内核某些由内核自身无法判断的硬件参数,或者是覆盖一些内核会检测的值。 + +不久之前,就在我身上发生过这样的事情,当时我在自己已用了 10 年的老笔记本上尝试衍生自 **Slackware** 的 **Vector Linux**。完成安装后,内核并没有检测出我的显卡的正确配置,所以我不得不通过 GRUB 传递修改过的内核选项来让它工作。 + +另外一个例子是当你需要将系统切换到单用户模式以执行维护工作时。为此,你可以在 `GRUB_CMDLINE_LINUX` 变量中直接追加 `single`,然后重启即可: + +``` +GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet single" +``` + +编辑完 `/etc/default/grub` 之后,你需要运行 `update-grub` (在 Ubuntu 上)或者 `grub2-mkconfig -o /boot/grub2/grub.cfg` (在 **CentOS** 和 **openSUSE** 上)命令来更新 `grub.cfg` 文件(否则,改动会在系统启动时丢失)。 + +这条命令会处理早先提到的一些启动配置文件来更新 `grub.cfg` 文件。这种方法可以确保改动持久化,而在启动时刻通过 GRUB 传递的选项仅在当前会话期间有效。 + +### 修复 Linux GRUB 问题 + +如果你安装了第二个操作系统,或者由于人为失误而导致你的 GRUB 配置文件损坏了,依然有一些方法可以让你恢复并能够再次启动系统。 + +在启动画面中按下 `c` 键进入 GRUB 命令行模式(记住,你也可以按下 `e` 键编辑默认启动选项),并可以在 GRUB 提示中输入 `help` 命令获得可用命令: + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Fix-Grub-Issues-in-Linux.png) +> 修复
Linux 的 Grub 配置问题 + +我们将会着重关注 **ls** 命令,它会列出已安装的设备和文件系统,并且我们将会看看它可以查找什么。在下面的图片中,我们可以看到有 4 块硬盘(`hd0` 到 `hd3`)。 + +看起来只有 `hd0` 已经分区了(msdos1 和 msdos2 可以证明,这里的 1 和 2 是分区号,msdos 则是分区方案)。 + +现在我们来看看能否在第一个分区 `hd0`(**msdos1**)上找到 GRUB。这种方法允许我们启动 Linux,并且使用高级工具修复配置文件,或者在必要时干脆重新安装 GRUB: + +``` +# ls (hd0,msdos1)/ +``` + +从高亮区域可以发现,`grub2` 目录就在这个分区: + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Grub-Configuration.png) +> 查找 Grub 配置 + +一旦我们确信了 GRUB 位于 (**hd0, msdos1**),那就让我们告诉 GRUB 该去哪儿查找它的配置文件并指示它去尝试启动它的菜单: + +``` +set prefix=(hd0,msdos1)/grub2 +set root=(hd0,msdos1) +insmod normal +normal +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-and-Launch-Grub-Menu.png) +> 查找并启动 Grub 菜单 + +然后,在 GRUB 菜单中,选择一个条目并按下 **Enter** 键以使用它启动。系统成功启动后,你就可以运行 `grub2-install /dev/sdX` 命令修复问题了(将 `sdX` 改成你想要安装 GRUB 的设备)。然后启动信息将会更新,并且所有相关文件都会得到恢复。 + +``` +# grub2-install /dev/sdX +``` + +其它更加复杂的情景及其修复建议都记录在 [Ubuntu GRUB2 故障排除指南][10] 中。该指南中阐述的概念对于其它发行版也是有效的。 + +### 总结 + +本文向你介绍了 GRUB,并指导你可以在何处找到线上和线下的文档,同时说明了如何面对由于引导加载相关的问题而导致系统无法正常启动的情况。 + +幸运的是,GRUB 是文档支持非常丰富的工具之一,你可以使用我们在文中分享的资源非常轻松地获取已安装的文档或在线文档。 + +你有什么问题或建议吗?请不要犹豫,使用下面的评论框告诉我们吧。我们期待着来自你的回复!
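顺便补充一个可以放心练习的小演示:上文用于列出启动条目的 awk 命令不必直接在真实的 `/boot/grub2/grub.cfg` 上试验,可以先对一份自己构造的示例文件运行。下面的 `menuentry` 内容是为演示而虚构的,并非真实系统中的配置:

```shell
# 构造一个虚构的 grub.cfg 片段(仅为演示 awk 命令的行为,内容是假设的)
cat > sample-grub.cfg <<'EOF'
menuentry 'CentOS Linux 7 (3.10.0-327.el7.x86_64) 7 (Core)' --class centos {
        linux16 /vmlinuz-3.10.0-327.el7.x86_64 ro rhgb quiet
}
menuentry 'CentOS Linux 7 Rescue' --class centos {
        linux16 /vmlinuz-0-rescue ro
}
EOF

# 与正文相同的命令:以单引号作为字段分隔符(-F\'),
# 只保留第一个字段恰好是 "menuentry " 的行,并打印引号内的条目标题
awk -F\' '$1=="menuentry " {print $2}' sample-grub.cfg
# 输出:
# CentOS Linux 7 (3.10.0-327.el7.x86_64) 7 (Core)
# CentOS Linux 7 Rescue
```

按输出顺序从 0 开始给条目编号,得到的整数就是 `GRUB_DEFAULT` 应当设置的值。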
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/configure-and-troubleshoot-grub-boot-loader-linux/ + +作者:[Gabriel Cánepa][a] +译者:[ChrisLeeGit](https://github.com/chrisleegit) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/gacanepa/ +[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/ +[3]: http://www.tecmint.com/linux-boot-process/ +[4]: http://www.tecmint.com/linux-boot-process-and-manage-services/ +[5]: http://www.tecmint.com/best-linux-log-monitoring-and-management-tools/ +[6]: http://www.gnu.org/software/grub/manual/ +[7]: http://askubuntu.com/questions/82140/how-can-i-boot-with-an-older-kernel-version +[8]: https://www.kernel.org/doc/Documentation/kernel-parameters.txt +[9]: http://man7.org/linux/man-pages/man7/bootparam.7.html +[10]: https://help.ubuntu.com/community/Grub2/Troubleshooting From a99e94384d18bc39e73223dc62b37d182c897909 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Sun, 31 Jul 2016 21:51:00 +0800 Subject: [PATCH 303/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=EF=BC=9AKeeweb=20A=20Linux=20Password=20Manager?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0160722 Keeweb A Linux Password Manager.md | 64 ------------------- ...0160722 Keeweb A Linux Password Manager.md | 62 ++++++++++++++++++ 2 files changed, 62 insertions(+), 64 deletions(-) delete mode 100644 sources/tech/20160722 Keeweb A Linux Password Manager.md create mode 100644 translated/tech/20160722 Keeweb A Linux Password Manager.md diff --git a/sources/tech/20160722 Keeweb A Linux Password Manager.md b/sources/tech/20160722 Keeweb A Linux Password Manager.md deleted file mode 100644 index 
b55de2a4cb..0000000000 --- a/sources/tech/20160722 Keeweb A Linux Password Manager.md +++ /dev/null @@ -1,64 +0,0 @@ -Being translated by ChrisLeeGit - -Keeweb A Linux Password Manager -================================ - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb_1.png?608) - -Today we are depending on more and more online services. Each online service we sign up for, let us set a password and this way we have to remember hundreds of passwords. In this case, it is easy for anyone to forget passwords. In this article I am going to talk about Keeweb, a Linux password manager that can store all your passwords securely either online or offline. - -When we talk about Linux password managers, there are so many. Password managers like, [Keepass][1] and [Encryptr, a Zero-knowledge system based password manager][2] have already been talked about on LinuxAndUbuntu. Keeweb is another password manager for Linux that we are going to see in this article. - -### Keeweb can store passwords offline or online - -Keeweb is a cross-platform password manager. It can store all your passwords offline and sync it with your own cloud storage services like OneDrive, Google Drive, Dropbox etc. Keeweb does not have online database of its own to sync your passwords. - -To connect your online storage with Keeweb, just click more and click the service that you want to use. - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb.png?685) - -Now Keeweb will prompt you to sign in to your drive. After sign in authenticate Keeweb to use your account. - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/authenticate-dropbox-with-keeweb_orig.jpg?649) - -### Store passwords with Keeweb - -It is very easy to store your passwords with Keeweb. You can encrypt your password file with a complex password. Keeweb also allows you to lock file with a key file but I don't recommend it. 
If somebody gets your key file, it takes only a click to unlock your passwords file. - -#### Create Passwords - -To create a new password simply click the '+' sign and you will be presented all entries to fill up. You can create more entries if you want. - -#### Search Passwords - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/search-passwords_orig.png) - -Keeweb has a library of icons so that you can find any particular password entry easily. You can change the color of icons, download more icons and even import icons from your computer. When talking about finding passwords the search comes very handy. ​ - -Passwords of similar services can be grouped so that you can find them all at one place in one folder. You can also tag passwords to store them all in different categories. - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/tags-passwords-in-keeweb.png?283) - -### Themes - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/themes.png?304) - -If you like light themes like white or high contrast then you can change theme from Settings > General > Themes. There are four themes available, two are dark and two are light. - -### Dont' you Like Linux Passwords Manager? No problem! - -I have already posted about two other Linux password managers, Keepass and Encryptr and there were arguments on Reddit, and other social media. There were people against using any password manager and vice-versa. In this article I want to clear out that it is our responsibility to save the file that passwords are stored in. I think Password managers like Keepass and Keeweb are good to use as they don't store your passwords in the cloud. These password managers create a file and you can store it on your hard drive or encrypt it with apps like VeraCrypt. I myself don't use or recommend to use services that store passwords in their own database. 
- -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ - -作者:[author][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linuxandubuntu.com/home/keeweb-a-linux-password-manager -[1]: http://www.linuxandubuntu.com/home/keepass-password-management-tool-creates-strong-passwords-and-keeps-them-secure -[2]: http://www.linuxandubuntu.com/home/encryptr-zero-knowledge-system-based-password-manager-for-linux diff --git a/translated/tech/20160722 Keeweb A Linux Password Manager.md b/translated/tech/20160722 Keeweb A Linux Password Manager.md new file mode 100644 index 0000000000..3e32d1001c --- /dev/null +++ b/translated/tech/20160722 Keeweb A Linux Password Manager.md @@ -0,0 +1,62 @@ +Linux 密码管理器:Keeweb +================================ + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb_1.png?608) + +如今,我们依赖于越来越多的线上服务。我们每注册一个线上服务,就要设置一个密码;这样一来,每个人都很容易忘记密码。我将在本文中介绍 Keeweb,它是一款 Linux 密码管理器,可以将你所有的密码安全地存储在线上或线下。 + +当谈及 Linux 密码管理器时,我们会发现有很多这样的软件。我们已经在 LinuxAndUbuntu 上讨论过像 [Keepass][1] 和 [Encryptr,一个基于零知识系统的密码管理器][2] 这样的密码管理器。Keeweb 则是另外一款我们将在本文讲解的 Linux 密码管理器。 + +### Keeweb 可以在线下或线上存储密码 + +Keeweb 是一款跨平台的密码管理器。它可以在线下存储你所有的密码,并且能够同步到你自己的云存储服务上,例如 OneDrive、Google Drive、Dropbox 等。Keeweb 并没有它自己的用于同步你密码的在线数据库。 + +要使用 Keeweb 连接你的线上存储服务,只需要点击“更多”,然后再点击你想要使用的服务即可。 + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb.png?685) + +现在,Keeweb 会提示你登录到你的云盘。登录成功后,给 Keeweb 授权使用你的账户。 + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/authenticate-dropbox-with-keeweb_orig.jpg?649) + +### 使用 Keeweb 存储密码 + +使用 Keeweb 存储你的密码是非常容易的。你可以使用一个复杂的密码加密你的密码文件。Keeweb 也允许你使用一个密钥文件来锁定密码文件,但是我并不推荐这种方式。如果某个家伙拿到了你的密钥文件,他只需要简单点击一下就可以解锁你的密码文件。 + +#### 创建密码 + +想要创建一个新的密码,你只需要简单地点击 `+`
号,然后你就会看到所有需要填写的输入框。如果你想的话,还可以创建更多的条目。 + +#### 搜索密码 + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/search-passwords_orig.png) + +Keeweb 拥有一个图标库,这样你就可以轻松地找到任何特定的密码条目。你可以改变图标的颜色、下载更多的图标,甚至可以直接从你的电脑中导入图标。这在查找密码时非常方便。 + +相似服务的密码可以分组,这样你就可以在同一个文件夹中同时找到它们。你也可以给密码打上标签并把它们存放在不同分类中。 + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/tags-passwords-in-keeweb.png?283) + +### 主题 + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/themes.png?304) + +如果你喜欢类似于白色或者高对比度的亮色主题,你可以在“设置 > 通用 > 主题”中修改。Keeweb 有四款可供选择的主题,其中两款为暗色,另外两款为亮色。 + +### 不喜欢 Linux 密码管理器?没问题! + +我已经发表过文章介绍了另外两款 Linux 密码管理器,它们分别是 Keepass 和 Encryptr,在 Reddit 和其它社交媒体上有些关于它们的争论。有些人反对使用任何密码管理器,反之亦然。在本文中,我想要澄清的是,保管存放密码的文件是我们自己的责任。我认为像 Keepass 和 Keeweb 这样的密码管理器是非常好用的,因为它们并没有自己的云来存放你的密码。这些密码管理器会创建一个文件,然后你可以将它存放在你的硬盘上,或者使用像 VeraCrypt 这样的应用给它加密。我个人不使用也不推荐使用那些将密码存储在它们自己数据库的服务。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/keeweb-a-linux-password-manager + +作者:[author][a] +译者:[ChrisLeeGit](https://github.com/chrisleegit) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxandubuntu.com/home/keeweb-a-linux-password-manager +[1]: http://www.linuxandubuntu.com/home/keepass-password-management-tool-creates-strong-passwords-and-keeps-them-secure +[2]: http://www.linuxandubuntu.com/home/encryptr-zero-knowledge-system-based-password-manager-for-linux From 35a7b4b8d36880eccc22b65d2540d1afdf04e5d5 Mon Sep 17 00:00:00 2001 From: alim0x Date: Sun, 31 Jul 2016 22:20:32 +0800 Subject: [PATCH 304/471] [translating]Implementing Mandatory Access Control with SELinux or AppArmor in Linux --- ...andatory Access Control with SELinux or AppArmor in Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md
b/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md index 65c8657dff..c4c6aecec6 100644 --- a/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md +++ b/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md @@ -1,3 +1,5 @@ +alim0x translating + Implementing Mandatory Access Control with SELinux or AppArmor in Linux =========================================================================== From b6719d8cc4c0c141c014628b89c5ae2cec9accab Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Sun, 31 Jul 2016 22:58:17 +0800 Subject: [PATCH 305/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...9 - Learn How to Use Awk Special Patterns begin and end.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md b/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md index f64e834ca1..b3f99e6d4f 100644 --- a/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md +++ b/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md @@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + Learn How to Use Awk Special Patterns ‘BEGIN and END’ – Part 9 =============================================================== @@ -158,7 +160,7 @@ As I pointed out before, these Awk features will help us build more complex text via: http://www.tecmint.com/learn-use-awk-special-patterns-begin-and-end/ 作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对ID](https://github.com/校对ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 705f188aa63789937172d89bf488e737c1979aa3 Mon 
Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Mon, 1 Aug 2016 02:07:10 +0800 Subject: [PATCH 306/471] translated --- ...ic Expressions and Assignment Operators.md | 275 ------------------ ...ic Expressions and Assignment Operators.md | 275 ++++++++++++++++++ 2 files changed, 275 insertions(+), 275 deletions(-) delete mode 100644 sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md create mode 100644 translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md diff --git a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md b/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md deleted file mode 100644 index dd2494529d..0000000000 --- a/sources/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md +++ /dev/null @@ -1,275 +0,0 @@ -translating by vim-kakali - - -Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators – part8 - -======================================================================================= - -The [Awk command series][1] is getting exciting I believe, in the previous seven parts, we walked through some fundamentals of Awk that you need to master to enable you perform some basic text or string filtering in Linux. - -Starting with this part, we shall dive into advance areas of Awk to handle more complex text or string filtering operations. Therefore, we are going to cover Awk features such as variables, numeric expressions and assignment operators. 
- -![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Variables-Numeric-Expressions-Assignment-Operators.png) ->Learn Awk Variables, Numeric Expressions and Assignment Operators - -These concepts are not comprehensively distinct from the ones you may have probably encountered in many programming languages before such shell, C, Python plus many others, so there is no need to worry much about this topic, we are simply revising the common ideas of using these mentioned features. - -This will probably be one of the easiest Awk command sections to understand, so sit back and lets get going. - -### 1. Awk Variables - -In any programming language, a variable is a place holder which stores a value, when you create a variable in a program file, as the file is executed, some space is created in memory that will store the value you specify for the variable. - -You can define Awk variables in the same way you define shell variables as follows: - -``` -variable_name=value -``` - -In the syntax above: - -- `variable_name`: is the name you give a variable -- `value`: the value stored in the variable - -Let’s look at some examples below: - -``` -computer_name=”tecmint.com” -port_no=”22” -email=”admin@tecmint.com” -server=”computer_name” -``` - -Take a look at the simple examples above, in the first variable definition, the value `tecmint.com` is assigned to the variable `computer_name`. - -Furthermore, the value 22 is assigned to the variable port_no, it is also possible to assign the value of one variable to another variable as in the last example where we assigned the value of computer_name to the variable server. - -If you can recall, right from [part 2 of this Awk series][2] were we covered field editing, we talked about how Awk divides input lines into fields and uses standard field access operator, $ to read the different fields that have been parsed. We can also use variables to store the values of fields as follows. 
- -``` -first_name=$2 -second_name=$3 -``` - -In the examples above, the value of first_name is set to second field and second_name is set to the third field. - -As an illustration, consider a file named names.txt which contains a list of an application’s users indicating their first and last names plus gender. Using the [cat command][3], we can view the contents of the file as follows: - -``` -$ cat names.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Content-Using-cat-Command.png) ->List File Content Using cat Command - -Then, we can also use the variables first_name and second_name to store the first and second names of the first user on the list as by running the Awk command below: - -``` -$ awk '/Aaron/{ first_name=$2 ; second_name=$3 ; print first_name, second_name ; }' names.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Variables-Using-Awk-Command.png) ->Store Variables Using Awk Command - -Let us also take a look at another case, when you issue the command `uname -a` on your terminal, it prints out all your system information. - -The second field contains your `hostname`, therefore we can store the hostname in a variable called hostname and print it using Awk as follows: - -``` -$ uname -a -$ uname -a | awk '{hostname=$2 ; print hostname ; }' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Command-Output-to-Variable-Using-Awk.png) ->Store Command Output to Variable Using Awk - -### 2. Numeric Expressions - -In Awk, numeric expressions are built using the following numeric operators: - -- `*` : multiplication operator -- `+` : addition operator -- `/` : division operator -- `-` : subtraction operator -- `%` : modulus operator -- `^` : exponentiation operator - -The syntax for a numeric expressions is: - -``` -$ operand1 operator operand2 -``` - -In the form above, operand1 and operand2 can be numbers or variable names, and operator is any of the operators above. 
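Two operators from the list above that the article's own examples never exercise are `%` (modulus) and `^` (exponentiation); they take field values as operands just like the others. A minimal sketch, assuming a POSIX awk:

```shell
# $1 % $2 is the remainder of 7 divided by 2 (1);
# $1 ^ $2 is 7 raised to the power 2 (49).
printf '7 2\n' | awk '{ print $1 % $2, $1 ^ $2 }'
```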
- -Below are some examples to demonstrate how to build numeric expressions: - -``` -counter=0 -num1=5 -num2=10 -num3=num2-num1 -counter=counter+1 -``` - -To understand the use of numeric expressions in Awk, we shall consider the following example below, with the file domains.txt which contains all domains owned by Tecmint. - -``` -news.tecmint.com -tecmint.com -linuxsay.com -windows.tecmint.com -tecmint.com -news.tecmint.com -tecmint.com -linuxsay.com -tecmint.com -news.tecmint.com -tecmint.com -linuxsay.com -windows.tecmint.com -tecmint.com -``` - -To view the contents of the file, use the command below: - -``` -$ cat domains.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png) ->View Contents of File - -If we want to count the number of times the domain tecmint.com appears in the file, we can write a simple script to do that as follows: - -``` -#!/bin/bash -for file in $@; do -if [ -f $file ] ; then -#print out filename -echo "File is: $file" -#print a number incrementally for every line containing tecmint.com -awk '/^tecmint.com/ { counter=counter+1 ; printf "%s\n", counter ; }' $file -else -#print error info incase input is not a file -echo "$file is not a file, please specify a file." >&2 && exit 1 -fi -done -#terminate script with exit code 0 in case of successful execution -exit 0 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Shell-Script-to-Count-a-String-in-File.png) ->Shell Script to Count a String or Text in File - -After creating the script, save it and make it executable, when we run it with the file, domains.txt as out input, we get the following output: - -``` -$ ./script.sh ~/domains.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-To-Count-String.png) ->Script to Count String or Text - -From the output of the script, there are 6 lines in the file domains.txt which contain tecmint.com, to confirm that you can manually count them. - -### 3. 
Assignment Operators - -The last Awk feature we shall cover is assignment operators, there are several assignment operators in Awk and these include the following: - -- `*=` : multiplication assignment operator -- `+=` : addition assignment operator -- `/=` : division assignment operator -- `-=` : subtraction assignment operator -- `%=` : modulus assignment operator -- `^=` : exponentiation assignment operator - -The simplest syntax of an assignment operation in Awk is as follows: - -``` -$ variable_name=variable_name operator operand -``` - -Examples: - -``` -counter=0 -counter=counter+1 -num=20 -num=num-1 -``` - -You can use the assignment operators above to shorten assignment operations in Awk, consider the previous examples, we could perform the assignment in the following form: - -``` -variable_name operator=operand -counter=0 -counter+=1 -num=20 -num-=1 -``` - -Therefore, we can alter the Awk command in the shell script we just wrote above using += assignment operator as follows: - -``` -#!/bin/bash -for file in $@; do -if [ -f $file ] ; then -#print out filename -echo "File is: $file" -#print a number incrementally for every line containing tecmint.com -awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file -else -#print error info incase input is not a file -echo "$file is not a file, please specify a file." >&2 && exit 1 -fi -done -#terminate script with exit code 0 in case of successful execution -exit 0 -``` - - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Alter-Shell-Script.png) ->Alter Shell Script - -In this segment of the [Awk series][4], we covered some powerful Awk features, that is variables, building numeric expressions and using assignment operators, plus some few illustrations of how we can actually use them. - -These concepts are not any different from the one in other programming languages but there may be some significant distinctions under Awk programming. 
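As a further illustration of the shorthand forms, the sketch below keeps a running product with `*=` (an editor's example, assuming a POSIX awk; the BEGIN and END patterns it relies on are the subject of part 9):

```shell
# Multiply the first field of every input line into 'prod',
# which starts at 1; for the input 2, 3, 4 this prints 24.
printf '2\n3\n4\n' | awk 'BEGIN { prod = 1 } { prod *= $1 } END { print prod }'
```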
- -In part 9, we shall look at more Awk features that is special patterns: BEGIN and END. Until then, stay connected to Tecmint. - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/learn-awk-variables-numeric-expressions-and-assignment-operators/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对ID](https://github.com/校对ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ - - - - - - - - - - - - - - - - - - - -[1]: http://www.tecmint.com/category/awk-command/ -[2]: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/ -[3]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ -[4]: http://www.tecmint.com/category/awk-command/ diff --git a/translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md b/translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md new file mode 100644 index 0000000000..a5eb6b6df4 --- /dev/null +++ b/translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md @@ -0,0 +1,275 @@ + + +第 8 节--学习怎样使用 Awk 变量,数值表达式以及赋值运算符 + +======================================================================================= + + +我相信 [Awk 命令系列][1] 将会令人兴奋不已,在系列的前几节我们讨论了在 Linux 中处理文件和筛选字符串需要的基本 Awk 命令。 + + +在这一部分,我们会对处理更复杂的文件和筛选字符串操作需要的更高级的命令进行讨论。因此,我们将会看到关于 Awk 的一些特性诸如变量,数值表达式和赋值运算符。 +![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Variables-Numeric-Expressions-Assignment-Operators.png) +>学习 Awk 变量,数值表达式和赋值运算符 + +你可能已经在很多编程语言中接触过它们,比如 shell,C,Python等;这些概念在理解上和这些语言没有什么不同,所以在这一小节中你不用担心很难理解,我们将会简短的提及常用的一些 Awk 特性。 + +这一小节可能是 Awk 命令里最容易理解的部分,所以放松点,我们开始吧。 +### 1. 
Awk 变量 + + +在任何编程语言中,当你在程序中新建一个变量的时候这个变量就是一个存储了值的占位符,程序一运行就占用了一些内存空间,你为变量赋的值会存储在这些内存空间上。 + + +你可以像定义 shell 变量那样定义 Awk 变量,如下所示: +``` +variable_name=value +``` + +上面的语法: + +- `variable_name`: 为定义的变量的名字 +- `value`: 为变量赋的值 + +再看下面的一些例子: +``` +computer_name=”tecmint.com” +port_no=”22” +email=”admin@tecmint.com” +server=”computer_name” +``` + +观察上面的简单的例子,在定义第一个变量的时候,值 'tecmint.com' 被赋给了 'computer_name' 变量。 + + +此外,值 22 也被赋给了 port_no 变量,把一个变量的值赋给另一个变量也是可以的,在最后的例子中我们把变量 computer_name 的值赋给了变量 server。 + +你可以看看 [本系列的第 2 节][2] 中提到的字段编辑,我们讨论了 Awk 怎样将输入的行分隔为若干字段,并且使用标准的字段访问操作符 $ 来读取解析出来的各个字段。我们也可以像下面这样使用变量来存储字段的值。 + +``` +first_name=$2 +second_name=$3 +``` + +在上面的例子中,变量 first_name 的值设置为第二个字段,second_name 的值设置为第三个字段。 + + +再举个例子,有一个名为 names.txt 的文件,这个文件包含了一个应用程序的用户列表,这个用户列表显示了用户的名字和姓氏以及性别。可以使用 [cat 命令][3] 查看文件内容: +``` +$ cat names.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Content-Using-cat-Command.png) +>使用 cat 命令查看列表文件内容 + + +然后,我们也可以使用下面的 Awk 命令把列表中第一个用户的第一个和第二个名字分别存储到变量 first_name 和 second_name 上: +``` +$ awk '/Aaron/{ first_name=$2 ; second_name=$3 ; print first_name, second_name ; }' names.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Variables-Using-Awk-Command.png) +>使用 Awk 命令为变量赋值 + + +再看一个例子,当你在终端运行 'uname -a' 时,它可以打印出所有的系统信息。 + +第二个字段包含了你的 'hostname',因此,我们可以像下面这样把它赋给一个叫做 hostname 的变量并且用 Awk 打印出来。 +``` +$ uname -a +$ uname -a | awk '{hostname=$2 ; print hostname ; }' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Command-Output-to-Variable-Using-Awk.png) +>使用 Awk 把命令的输出赋给变量 + +### 2.
数值表达式 + +在 Awk 中,数值表达式由下面的数值运算符构成: + +- `*` : 乘法运算符 +- `+` : 加法运算符 +- `/` : 除法运算符 +- `-` : 减法运算符 +- `%` : 取模运算符 +- `^` : 指数运算符 + + +数值表达式的语法是: +``` +$ operand1 operator operand2 +``` + +上面的 operand1 和 operand2 可以是数值或变量名,运算符可以是上面列出的任意一种。 + +下面是一些展示怎样使用数值表达式的例子: +``` +counter=0 +num1=5 +num2=10 +num3=num2-num1 +counter=counter+1 +``` + + +理解了 Awk 中数值表达式的用法,我们就可以看下面的例子了,文件 domains.txt 里包括了所有属于 Tecmint 的域名。 +``` +news.tecmint.com +tecmint.com +linuxsay.com +windows.tecmint.com +tecmint.com +news.tecmint.com +tecmint.com +linuxsay.com +tecmint.com +news.tecmint.com +tecmint.com +linuxsay.com +windows.tecmint.com +tecmint.com +``` + +可以使用下面的命令查看文件的内容: +``` +$ cat domains.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png) +>查看文件内容 + + +如果想要计算出域名 tecmint.com 在文件中出现的次数,我们就可以通过写一个简单的脚本实现这个功能: +``` +#!/bin/bash +for file in $@; do +if [ -f $file ] ; then +#print out filename +echo "File is: $file" +#print a number incrementally for every line containing tecmint.com +awk '/^tecmint.com/ { counter=counter+1 ; printf "%s\n", counter ; }' $file +else +#print error info incase input is not a file +echo "$file is not a file, please specify a file." >&2 && exit 1 +fi +done +#terminate script with exit code 0 in case of successful execution +exit 0 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Shell-Script-to-Count-a-String-in-File.png) +>计算一个字符串或文本在文件中出现次数的 shell 脚本 + + +写完脚本后保存并赋予执行权限,当我们以文件 domains.txt 作为输入运行脚本时,我们会得到下面的输出: + +``` +$ ./script.sh ~/domains.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-To-Count-String.png) +>计算字符串或文本出现次数的脚本 + + +从脚本执行后的输出中,可以看到在文件 domains.txt 中包含域名 tecmint.com 的地方有 6 行,你可以自己计算进行验证。 + +### 3.
赋值操作符 + +我们要说的最后的 Awk 特性是赋值运算符,下面列出的只是 Awk 中的部分赋值运算符: + +- `*=` : 乘法赋值运算符 +- `+=` : 加法赋值运算符 +- `/=` : 除法赋值运算符 +- `-=` : 减法赋值运算符 +- `%=` : 取模赋值运算符 +- `^=` : 指数赋值运算符 + +下面是 Awk 中最简单的一个赋值操作的语法: +``` +$ variable_name=variable_name operator operand +``` + + +例子: + +``` +counter=0 +counter=counter+1 +num=20 +num=num-1 +``` + + +你可以在 Awk 中使用上面的赋值操作符使命令更简短,从先前的例子中,我们可以使用下面这种格式进行赋值操作: +``` +variable_name operator=operand +counter=0 +counter+=1 +num=20 +num-=1 +``` + + +因此,我们可以在 shell 脚本中改变 Awk 命令,使用上面提到的 += 操作符: +``` +#!/bin/bash +for file in $@; do +if [ -f $file ] ; then +#print out filename +echo "File is: $file" +#print a number incrementally for every line containing tecmint.com +awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file +else +#print error info incase input is not a file +echo "$file is not a file, please specify a file." >&2 && exit 1 +fi +done +#terminate script with exit code 0 in case of successful execution +exit 0 +``` + + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Alter-Shell-Script.png) +>改变了的 shell 脚本 + + +在 [Awk 系列][4] 的这一部分,我们讨论了一些有用的 Awk 特性,有变量,使用数值表达式和赋值运算符,还有一些使用它们的实例。 + +这些概念和其他的编程语言没有任何不同,但是可能在 Awk 中有一些意义上的区别。 + +在本系列的第 9 节,我们会学习更多的 Awk 特性,比如特殊模式:BEGIN 和 END。在那之前,请继续关注 Tecmint。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/learn-awk-variables-numeric-expressions-and-assignment-operators/ + +作者:[Aaron Kili][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ + + + + + + + + + + + + + + + + + + + +[1]: http://www.tecmint.com/category/awk-command/ +[2]: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/ +[3]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ +[4]: http://www.tecmint.com/category/awk-command/ From
7acdcd342428534d6482b2f32e5c9b1791ca92fa Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 1 Aug 2016 07:25:05 +0800 Subject: [PATCH 307/471] =?UTF-8?q?=E5=BD=92=E6=A1=A3=20201607?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../{ => 201607}/20151117 How bad a boss is Linus Torvalds.md | 0 ...Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md | 0 ...4 What is good stock portfolio management software on Linux.md | 0 .../20160218 How to Best Manage Encryption Keys on Linux.md | 0 .../20160218 What do Linux developers think of Git and GitHub.md | 0 ...ow to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md | 0 .../20160220 Best Cloud Services For Linux To Replace Copy.md | 0 ...volving Market for Commercial Software Built On Open Source.md | 0 .../20160303 Top 5 open source command shells for Linux.md | 0 .../20160304 Microservices with Python RabbitMQ and Nameko.md | 0 ...20160425 How to Use Awk to Print Fields and Columns in File.md | 0 ...An introduction to data processing with Cassandra and Spark.md | 0 ...160519 The future of sharing integrating Pydio and ownCloud.md | 0 .../20160527 Turn Your Old Laptop into a Chromebook.md | 0 ...DB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md | 0 .../20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md | 0 .../{ => 201607}/20160531 Why Ubuntu-based Distros Are Leaders.md | 0 ...ount your Google Drive on Linux with google-drive-ocamlfuse.md | 0 ...VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md | 0 .../{ => 201607}/20160606 Basic Git Commands You Must Know.md | 0 .../20160609 How to record your terminal session on Linux.md | 0 .../20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md | 0 published/{ => 201607}/20160610 Getting started with ReactOS.md | 0 ...sic Linux Commands That Every Linux Newbies Should Remember.md | 0 .../20160616 6 Amazing Linux Distributions For Kids.md | 0 .../{ => 201607}/20160620 Detecting cats in images 
with OpenCV.md | 0 published/{ => 201607}/20160620 Monitor Linux With Netdata.md | 0 ...0 PowerPC gains an Android 4.4 port with Big Endian support.md | 0 .../20160621 Container technologies in Fedora - systemd-nspawn.md | 0 .../20160624 How to permanently mount a Windows share on Linux.md | 0 ...uns on the cloud and the cloud runs on Linux. Any questions.md | 0 ...160624 Industrial SBC builds on Raspberry Pi Compute Module.md | 0 ...5 How to Hide Linux Command Line History by Going Incognito.md | 0 ... Source Discussion Platform Discourse On Ubuntu Linux 16.04.md | 0 .../{ => 201607}/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md | 0 .../{ => 201607}/20160630 What makes up the Fedora kernel.md | 0 ...up Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md | 0 .../20160705 Create Your Own Shell in Python - Part I.md | 0 .../20160706 Create Your Own Shell in Python - Part II.md | 0 ... Using Vagrant to control your DigitalOcean cloud instances.md | 0 .../{ => 201607}/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md | 0 ...20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md | 0 .../{ => 201607}/20160722 7 Best Markdown Editors for Linux.md | 0 .../20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md | 0 ...2 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md | 0 ... 
a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md | 0 46 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201607}/20151117 How bad a boss is Linus Torvalds.md (100%) rename published/{ => 201607}/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md (100%) rename published/{ => 201607}/20160104 What is good stock portfolio management software on Linux.md (100%) rename published/{ => 201607}/20160218 How to Best Manage Encryption Keys on Linux.md (100%) rename published/{ => 201607}/20160218 What do Linux developers think of Git and GitHub.md (100%) rename published/{ => 201607}/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md (100%) rename published/{ => 201607}/20160220 Best Cloud Services For Linux To Replace Copy.md (100%) rename published/{ => 201607}/20160301 The Evolving Market for Commercial Software Built On Open Source.md (100%) rename published/{ => 201607}/20160303 Top 5 open source command shells for Linux.md (100%) rename published/{ => 201607}/20160304 Microservices with Python RabbitMQ and Nameko.md (100%) rename published/{ => 201607}/20160425 How to Use Awk to Print Fields and Columns in File.md (100%) rename published/{ => 201607}/20160511 An introduction to data processing with Cassandra and Spark.md (100%) rename published/{ => 201607}/20160519 The future of sharing integrating Pydio and ownCloud.md (100%) rename published/{ => 201607}/20160527 Turn Your Old Laptop into a Chromebook.md (100%) rename published/{ => 201607}/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md (100%) rename published/{ => 201607}/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md (100%) rename published/{ => 201607}/20160531 Why Ubuntu-based Distros Are Leaders.md (100%) rename published/{ => 201607}/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md (100%) rename published/{ => 201607}/20160603 How To Install And Use 
VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md (100%) rename published/{ => 201607}/20160606 Basic Git Commands You Must Know.md (100%) rename published/{ => 201607}/20160609 How to record your terminal session on Linux.md (100%) rename published/{ => 201607}/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md (100%) rename published/{ => 201607}/20160610 Getting started with ReactOS.md (100%) rename published/{ => 201607}/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md (100%) rename published/{ => 201607}/20160616 6 Amazing Linux Distributions For Kids.md (100%) rename published/{ => 201607}/20160620 Detecting cats in images with OpenCV.md (100%) rename published/{ => 201607}/20160620 Monitor Linux With Netdata.md (100%) rename published/{ => 201607}/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md (100%) rename published/{ => 201607}/20160621 Container technologies in Fedora - systemd-nspawn.md (100%) rename published/{ => 201607}/20160624 How to permanently mount a Windows share on Linux.md (100%) rename published/{ => 201607}/20160624 IT runs on the cloud and the cloud runs on Linux. 
Any questions.md (100%) rename published/{ => 201607}/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md (100%) rename published/{ => 201607}/20160625 How to Hide Linux Command Line History by Going Incognito.md (100%) rename published/{ => 201607}/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md (100%) rename published/{ => 201607}/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md (100%) rename published/{ => 201607}/20160630 What makes up the Fedora kernel.md (100%) rename published/{ => 201607}/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md (100%) rename published/{ => 201607}/20160705 Create Your Own Shell in Python - Part I.md (100%) rename published/{ => 201607}/20160706 Create Your Own Shell in Python - Part II.md (100%) rename published/{ => 201607}/20160708 Using Vagrant to control your DigitalOcean cloud instances.md (100%) rename published/{ => 201607}/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md (100%) rename published/{ => 201607}/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md (100%) rename published/{ => 201607}/20160722 7 Best Markdown Editors for Linux.md (100%) rename published/{ => 201607}/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md (100%) rename published/{ => 201607}/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md (100%) rename published/{ => 201607}/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md (100%) diff --git a/published/20151117 How bad a boss is Linus Torvalds.md b/published/201607/20151117 How bad a boss is Linus Torvalds.md similarity index 100% rename from published/20151117 How bad a boss is Linus Torvalds.md rename to published/201607/20151117 How bad a boss is Linus Torvalds.md diff --git a/published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md b/published/201607/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md 
similarity index 100% rename from published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md rename to published/201607/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md diff --git a/published/20160104 What is good stock portfolio management software on Linux.md b/published/201607/20160104 What is good stock portfolio management software on Linux.md similarity index 100% rename from published/20160104 What is good stock portfolio management software on Linux.md rename to published/201607/20160104 What is good stock portfolio management software on Linux.md diff --git a/published/20160218 How to Best Manage Encryption Keys on Linux.md b/published/201607/20160218 How to Best Manage Encryption Keys on Linux.md similarity index 100% rename from published/20160218 How to Best Manage Encryption Keys on Linux.md rename to published/201607/20160218 How to Best Manage Encryption Keys on Linux.md diff --git a/published/20160218 What do Linux developers think of Git and GitHub.md b/published/201607/20160218 What do Linux developers think of Git and GitHub.md similarity index 100% rename from published/20160218 What do Linux developers think of Git and GitHub.md rename to published/201607/20160218 What do Linux developers think of Git and GitHub.md diff --git a/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md b/published/201607/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md similarity index 100% rename from published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md rename to published/201607/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md diff --git a/published/20160220 Best Cloud Services For Linux To Replace Copy.md b/published/201607/20160220 Best Cloud Services For Linux To Replace Copy.md similarity index 100% rename from published/20160220 Best Cloud Services For Linux To Replace Copy.md rename to 
published/201607/20160220 Best Cloud Services For Linux To Replace Copy.md diff --git a/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md b/published/201607/20160301 The Evolving Market for Commercial Software Built On Open Source.md similarity index 100% rename from published/20160301 The Evolving Market for Commercial Software Built On Open Source.md rename to published/201607/20160301 The Evolving Market for Commercial Software Built On Open Source.md diff --git a/published/20160303 Top 5 open source command shells for Linux.md b/published/201607/20160303 Top 5 open source command shells for Linux.md similarity index 100% rename from published/20160303 Top 5 open source command shells for Linux.md rename to published/201607/20160303 Top 5 open source command shells for Linux.md diff --git a/published/20160304 Microservices with Python RabbitMQ and Nameko.md b/published/201607/20160304 Microservices with Python RabbitMQ and Nameko.md similarity index 100% rename from published/20160304 Microservices with Python RabbitMQ and Nameko.md rename to published/201607/20160304 Microservices with Python RabbitMQ and Nameko.md diff --git a/published/20160425 How to Use Awk to Print Fields and Columns in File.md b/published/201607/20160425 How to Use Awk to Print Fields and Columns in File.md similarity index 100% rename from published/20160425 How to Use Awk to Print Fields and Columns in File.md rename to published/201607/20160425 How to Use Awk to Print Fields and Columns in File.md diff --git a/published/20160511 An introduction to data processing with Cassandra and Spark.md b/published/201607/20160511 An introduction to data processing with Cassandra and Spark.md similarity index 100% rename from published/20160511 An introduction to data processing with Cassandra and Spark.md rename to published/201607/20160511 An introduction to data processing with Cassandra and Spark.md diff --git a/published/20160519 The future of sharing 
integrating Pydio and ownCloud.md b/published/201607/20160519 The future of sharing integrating Pydio and ownCloud.md similarity index 100% rename from published/20160519 The future of sharing integrating Pydio and ownCloud.md rename to published/201607/20160519 The future of sharing integrating Pydio and ownCloud.md diff --git a/published/20160527 Turn Your Old Laptop into a Chromebook.md b/published/201607/20160527 Turn Your Old Laptop into a Chromebook.md similarity index 100% rename from published/20160527 Turn Your Old Laptop into a Chromebook.md rename to published/201607/20160527 Turn Your Old Laptop into a Chromebook.md diff --git a/published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/published/201607/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md similarity index 100% rename from published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md rename to published/201607/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md diff --git a/published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md b/published/201607/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md similarity index 100% rename from published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md rename to published/201607/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md diff --git a/published/20160531 Why Ubuntu-based Distros Are Leaders.md b/published/201607/20160531 Why Ubuntu-based Distros Are Leaders.md similarity index 100% rename from published/20160531 Why Ubuntu-based Distros Are Leaders.md rename to published/201607/20160531 Why Ubuntu-based Distros Are Leaders.md diff --git a/published/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md b/published/201607/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md similarity index 100% rename from 
published/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md
rename to published/201607/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md
diff --git a/published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md b/published/201607/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md
similarity index 100%
rename from published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md
rename to published/201607/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md
diff --git a/published/20160606 Basic Git Commands You Must Know.md b/published/201607/20160606 Basic Git Commands You Must Know.md
similarity index 100%
rename from published/20160606 Basic Git Commands You Must Know.md
rename to published/201607/20160606 Basic Git Commands You Must Know.md
diff --git a/published/20160609 How to record your terminal session on Linux.md b/published/201607/20160609 How to record your terminal session on Linux.md
similarity index 100%
rename from published/20160609 How to record your terminal session on Linux.md
rename to published/201607/20160609 How to record your terminal session on Linux.md
diff --git a/published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/published/201607/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md
similarity index 100%
rename from published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md
rename to published/201607/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md
diff --git a/published/20160610 Getting started with ReactOS.md b/published/201607/20160610 Getting started with ReactOS.md
similarity index 100%
rename from published/20160610 Getting started with ReactOS.md
rename to published/201607/20160610 Getting started with ReactOS.md
diff --git a/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md b/published/201607/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md
similarity index 100%
rename from published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md
rename to published/201607/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md
diff --git a/published/20160616 6 Amazing Linux Distributions For Kids.md b/published/201607/20160616 6 Amazing Linux Distributions For Kids.md
similarity index 100%
rename from published/20160616 6 Amazing Linux Distributions For Kids.md
rename to published/201607/20160616 6 Amazing Linux Distributions For Kids.md
diff --git a/published/20160620 Detecting cats in images with OpenCV.md b/published/201607/20160620 Detecting cats in images with OpenCV.md
similarity index 100%
rename from published/20160620 Detecting cats in images with OpenCV.md
rename to published/201607/20160620 Detecting cats in images with OpenCV.md
diff --git a/published/20160620 Monitor Linux With Netdata.md b/published/201607/20160620 Monitor Linux With Netdata.md
similarity index 100%
rename from published/20160620 Monitor Linux With Netdata.md
rename to published/201607/20160620 Monitor Linux With Netdata.md
diff --git a/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md b/published/201607/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md
similarity index 100%
rename from published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md
rename to published/201607/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md
diff --git a/published/20160621 Container technologies in Fedora - systemd-nspawn.md b/published/201607/20160621 Container technologies in Fedora - systemd-nspawn.md
similarity index 100%
rename from published/20160621 Container technologies in Fedora - systemd-nspawn.md
rename to published/201607/20160621 Container technologies in Fedora - systemd-nspawn.md
diff --git a/published/20160624 How to permanently mount a Windows share on Linux.md b/published/201607/20160624 How to permanently mount a Windows share on Linux.md
similarity index 100%
rename from published/20160624 How to permanently mount a Windows share on Linux.md
rename to published/201607/20160624 How to permanently mount a Windows share on Linux.md
diff --git a/published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md b/published/201607/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md
similarity index 100%
rename from published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md
rename to published/201607/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md
diff --git a/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/published/201607/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md
similarity index 100%
rename from published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md
rename to published/201607/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md
diff --git a/published/20160625 How to Hide Linux Command Line History by Going Incognito.md b/published/201607/20160625 How to Hide Linux Command Line History by Going Incognito.md
similarity index 100%
rename from published/20160625 How to Hide Linux Command Line History by Going Incognito.md
rename to published/201607/20160625 How to Hide Linux Command Line History by Going Incognito.md
diff --git a/published/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/published/201607/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
similarity index 100%
rename from published/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
rename to published/201607/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
diff --git a/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/published/201607/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md
similarity index 100%
rename from published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md
rename to published/201607/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md
diff --git a/published/20160630 What makes up the Fedora kernel.md b/published/201607/20160630 What makes up the Fedora kernel.md
similarity index 100%
rename from published/20160630 What makes up the Fedora kernel.md
rename to published/201607/20160630 What makes up the Fedora kernel.md
diff --git a/published/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md b/published/201607/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
similarity index 100%
rename from published/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
rename to published/201607/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
diff --git a/published/20160705 Create Your Own Shell in Python - Part I.md b/published/201607/20160705 Create Your Own Shell in Python - Part I.md
similarity index 100%
rename from published/20160705 Create Your Own Shell in Python - Part I.md
rename to published/201607/20160705 Create Your Own Shell in Python - Part I.md
diff --git a/published/20160706 Create Your Own Shell in Python - Part II.md b/published/201607/20160706 Create Your Own Shell in Python - Part II.md
similarity index 100%
rename from published/20160706 Create Your Own Shell in Python - Part II.md
rename to published/201607/20160706 Create Your Own Shell in Python - Part II.md
diff --git a/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/published/201607/20160708 Using Vagrant to control your DigitalOcean cloud instances.md
similarity index 100%
rename from published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md
rename to published/201607/20160708 Using Vagrant to control your DigitalOcean cloud instances.md
diff --git a/published/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md b/published/201607/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
similarity index 100%
rename from published/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
rename to published/201607/20160718 OPEN SOURCE ACCOUNTING SOFTWARE.md
diff --git a/published/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md b/published/201607/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
similarity index 100%
rename from published/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
rename to published/201607/20160721 YOU CAN TRY A DEMO UBUNTU VERSION IN A WEB BROWSER.md
diff --git a/published/20160722 7 Best Markdown Editors for Linux.md b/published/201607/20160722 7 Best Markdown Editors for Linux.md
similarity index 100%
rename from published/20160722 7 Best Markdown Editors for Linux.md
rename to published/201607/20160722 7 Best Markdown Editors for Linux.md
diff --git a/published/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md b/published/201607/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md
similarity index 100%
rename from published/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md
rename to published/201607/20160722 HOW TO CHANGE DEFAULT APPLICATIONS IN UBUNTU.md
diff --git a/published/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md b/published/201607/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md
similarity index 100%
rename from published/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md
rename to published/201607/20160722 Terminix – A New GTK 3 Tiling Terminal Emulator for Linux.md
diff --git a/published/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md b/published/201607/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md
similarity index 100%
rename from published/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md
rename to published/201607/20160726 Set a Real Time Photo of Earth as Your Linux Desktop Wallpaper.md

From ac0320c3f9ec1358316d0f4a022b7c848c6bf3ce Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 1 Aug 2016 07:25:42 +0800
Subject: [PATCH 308/471] PUB:20160706 What is Git
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@cvsher 这篇翻译的很用心,很好!
---
 published/20160706 What is Git.md       | 118 ++++++++++++++++++++++++
 translated/tech/20160706 What is Git.md | 118 ------------------------
 2 files changed, 118 insertions(+), 118 deletions(-)
 create mode 100644 published/20160706 What is Git.md
 delete mode 100644 translated/tech/20160706 What is Git.md

diff --git a/published/20160706 What is Git.md b/published/20160706 What is Git.md
new file mode 100644
index 0000000000..f0fa7dded8
--- /dev/null
+++ b/published/20160706 What is Git.md
@@ -0,0 +1,118 @@
+Git 系列(一):什么是 Git
+===========
+
+欢迎阅读本系列关于如何使用 Git 版本控制系统的教程!通过本文的介绍,你将会了解到 Git 的用途及谁该使用 Git。
+
+如果你刚步入开源的世界,你很有可能会遇到一些在 Git 上托管代码或者发布使用版本的开源软件。事实上,不管你知道与否,你都在使用基于 Git 进行版本管理的软件:Linux 内核(就算你没有在手机或者电脑上使用 Linux,你正在访问的网站也是运行在 Linux 系统上的),Firefox、Chrome 等其他很多项目都通过 Git 代码库和世界各地开发者共享他们的代码。
+
+换个角度来说,你是否仅仅通过 Git 就可以和其他人共享你的代码?你是否可以在家里或者企业里私有化的使用 Git?你必须要通过一个 GitHub 账号来使用 Git 吗?为什么要使用 Git 呢?Git 的优势又是什么?Git 是我唯一的选择吗?这些关于 Git 的疑问会把我们搞得一头雾水。
+
+因此,忘记你以前所知的 Git,让我们重新走进 Git 世界的大门。
+
+### 什么是版本控制系统?
+
+Git 首先是一个版本控制系统。现在市面上有很多不同的版本控制系统:CVS、SVN、Mercurial、Fossil 当然还有 Git。
+
+很多像 GitHub 和 GitLab 这样的服务是以 Git 为基础的,但是你也可以只使用 Git 而无需使用其他额外的服务。这意味着你可以以私有或者公有的方式来使用 Git。
+
+如果你曾经和其他人有过任何电子文件方面的合作,你就会知道传统版本管理的工作流程。开始是很简单的:你有一个原始的版本,你把这个版本发送给你的同事,他们在接收到的版本上做了些修改,现在你们有两个版本了,然后他们把他们手上修改过的版本发回来给你。你把他们的修改合并到你手上的版本中,现在两个版本又合并成一个最新的版本了。
+
+然后,你修改了你手上最新的版本,同时,你的同事也修改了他们手上合并前的版本。现在你们有 3 个不同的版本了,分别是合并后最新的版本,你修改后的版本,你同事手上继续修改过的版本。至此,你们的版本管理工作开始变得越来越混乱了。
+
+正如 Jason van Gumster 在他的文章 [即使是艺术家也需要版本控制][1] 中所指出的,这种趋势也出现在了个人用户身上。无论是艺术家还是科学家,开发某种实验版本并不鲜见;在你的项目中,可能有某个版本大获成功,把项目推向一个新的高度,也可能有某个版本惨遭失败。因此,最终你不可避免的会创建出一堆名为 project\_justTesting.kdenlive、project\_betterVersion.kdenlive、project\_best\_FINAL.kdenlive、project\_FINAL-alternateVersion.kdenlive 等类似名称的文件。
+
+不管你是修改一个 for 循环,还是一些简单的文本编辑,一个好的版本控制系统都会让我们的生活更加的轻松。
+
+### Git 快照
+
+Git 可以为项目创建快照,并且将这些快照存储为一个个唯一的版本。
+
+如果你将项目带领到了一个错误的方向上,你可以回退到上一个正确的版本,并且开始尝试另一个可行的方向。
+
+如果你是和别人合作开发,当有人向你发送他们的修改时,你可以将这些修改合并到你的工作分支中,然后你的同事就可以获取到合并后的最新版本,并在此基础上继续工作。
+
+Git 并不是魔法,因此冲突还是会发生的(“你修改了某文件的最后一行,但是我把这行整行都删除了;我们怎样处理这些冲突呢?”),但是总体而言,Git 会为你保留所有更改的历史版本,甚至允许并行版本。这为你保留了以任何方式处理冲突的能力。
+
+### 分布式 Git
+
+在不同的机器上为同一个项目工作是一件复杂的事情。因为在你开始工作时,你想要获得项目的最新版本,然后在此基础上进行修改,最后向你的同事共享这些改动。传统的方法是通过笨重的在线文件共享服务或者老旧的电邮附件,但是这两种方式都是效率低下且容易出错。
+
+Git 天生是为分布式工作设计的。如果你要参与到某个项目中,你可以克隆(clone)该项目的 Git 仓库,然后就像这个项目只有你本地一个版本一样对项目进行修改。最后使用一些简单的命令你就可以拉取(pull)其他开发者的修改,或者你可以把你的修改推送(push)给别人。现在不用担心谁手上的是最新的版本,或者谁的版本又存放在哪里等这些问题了。所有人都是在本地进行开发,然后向共同的目标推送或者拉取更新。(或者不是共同的目标,这取决于项目的开发方式)。
+
+### Git 界面
+
+最原始的 Git 是运行在 Linux 终端上的应用软件。然而,得益于 Git 是开源的,并且拥有良好的设计,世界各地的开发者都可以为 Git 设计不同的访问界面。
+
+Git 完全是免费的,并且已经打包在 Linux、BSD、Illumos 和其他类 Unix 系统中,Git 命令看起来像这样:
+
+```
+$ git --version
+git version 2.5.3
+```
+
+可能最著名的 Git 访问界面是基于网页的,像 GitHub、开源的 GitLab、Savannah、BitBucket 和 SourceForge 这些网站都是基于网页端的 Git 界面。这些站点为面向公众和面向社会的开源软件提供了最大限度的代码托管服务。在一定程度上,基于浏览器的图形界面(GUI)可以尽量的减缓 Git 的学习曲线。下面是 GitLab 界面的截图:
+
+![](https://opensource.com/sites/default/files/0_gitlab.png)
+
+再者,第三方 Git 服务提供商或者独立开发者甚至可以在 Git 的基础上开发出不是基于 HTML 的定制化前端界面。此类界面让你可以不用打开浏览器就可以方便的使用 Git 进行版本管理。其中对用户最透明的方式是直接集成到文件管理器中。KDE 文件管理器 Dolphin 可以直接在目录中显示 Git 状态,甚至支持提交,推送和拉取更新操作。
+
+![](https://opensource.com/sites/default/files/0_dolphin.jpg)
+
+[Sparkleshare][2] 使用 Git 作为其 Dropbox 式的文件共享界面的基础。
+
+![](https://opensource.com/sites/default/files/0_sparkleshare_1.jpg)
+
+想了解更多的内容,可以查看 [Git wiki][3],这个(长长的)页面中展示了很多 Git 的图形界面项目。
+
+### 谁应该使用 Git?
+
+就是你!我们更应该关心的问题是什么时候使用 Git?和用 Git 来干嘛?
+
+### 我应该在什么时候使用 Git 呢?我要用 Git 来干嘛呢?
+
+想更深入的学习 Git,我们必须比平常考虑更多关于文件格式的问题。
+
+Git 是为了管理源代码而设计的,在大多数编程语言中,源代码就意味着一行行的文本。当然,Git 并不知道你把这些文本当成是源代码还是下一部伟大的美式小说。因此,只要文件内容是以文本构成的,使用 Git 来跟踪和管理其版本就是一个很好的选择了。
+
+但是什么是文本呢?如果你在像 Libre Office 这类办公软件中编辑一些内容,通常并不会产生纯文本内容。因为通常复杂的应用软件都会对原始的文本内容进行一层封装,就如把原始文本内容用 XML 标记语言包装起来,然后封装在 Zip 包中。这种对原始文本内容进行一层封装的做法可以保证当你把文件发送给其他人时,他们可以看到你在办公软件中编辑的内容及特定的文本效果。奇怪的是,虽然,通常你的需求可能会很复杂,就像保存 [Kdenlive][4] 项目文件,或者保存从 [Inkscape][5] 导出的 SVG 文件,但是,事实上使用 Git 管理像 XML 文本这样的纯文本内容是最简单的。
+
+如果你在使用 Unix 系统,你可以使用 `file` 命令来查看文件内容构成:
+
+```
+$ file ~/path/to/my-file.blah
+my-file.blah: ASCII text
+$ file ~/path/to/different-file.kra: Zip data (MIME type "application/x-krita")
+```
+
+如果还是不确定,你可以使用 `head` 命令来查看文件内容:
+
+```
+$ head ~/path/to/my-file.blah
+```
+
+如果输出的文本你基本能看懂,这个文件就很有可能是文本文件。如果你仅仅在一堆乱码中偶尔看到几个熟悉的字符,那么这个文件就可能不是文本文件了。
+
+准确的说:Git 可以管理其他格式的文件,但是它会把这些文件当成二进制大对象(blob)。两者的区别是,在文本文件中,Git 可以明确的告诉你在这两个快照(或者说提交)间有 3 行是修改过的。但是如果你在两个提交(commit)之间对一张图片进行了编辑操作,Git 会怎么指出这种修改呢?实际上,因为图片并不是以某种可以增加或删除的有意义的文本构成,因此 Git 并不能明确的描述这种变化。当然我个人是非常希望图片的编辑可以像把文本“丑陋的蓝绿色”修改成“漂浮着蓬松白云的天蓝色”一样的简单,但是事实上图片的编辑并没有这么简单。
+
+经常有人在 Git 上放入 png 图标、电子表格或者流程图这类二进制大型对象(blob)。尽管,我们知道在 Git 上管理此类大型文件并不直观,但是,如果你需要使用 Git 来管理此类文件,你也并不需要过多的担心。如果你参与的项目同时生成文本文件和二进制大文件对象(如视频游戏中常见的场景,这些和源代码同样重要的图像和音频材料),那么你有两条路可以走:要么开发出你自己的解决方案,就如使用指向共享网络驱动器的引用;要么使用 Git 插件,如 Joey Hess 开发的 [git annex][6],以及 [Git-Media][7] 项目。
+
+你看,Git 真的是一个任何人都可以使用的工具。它是你进行文件版本管理的一个强大而且好用的工具,同时它并没有你开始认为的那么可怕。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/resources/what-is-git
+
+作者:[Seth Kenlon][a]
+译者:[cvsher](https://github.com/cvsher)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[1]: https://opensource.com/life/16/2/version-control-isnt-just-programmers
+[2]: http://sparkleshare.org/
+[3]: https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools#Graphical_Interfaces
+[4]: https://opensource.com/life/11/11/introduction-kdenlive
+[5]: http://inkscape.org/
+[6]: https://git-annex.branchable.com/
+[7]: https://github.com/alebedev/git-media
diff --git a/translated/tech/20160706 What is Git.md b/translated/tech/20160706 What is Git.md
deleted file mode 100644
index f2f01b8d39..0000000000
--- a/translated/tech/20160706 What is Git.md
+++ /dev/null
@@ -1,118 +0,0 @@
-什么是Git
-===========
-
-欢迎阅读本系列关于Git版本控制系统的教程!通过本文的介绍,你将会了解到Git的用途及谁该使用Git。
-
-如果你刚步入开源的世界,你很有可能会遇到一些在Git上托管代码或者发布使用版本的开源软件。事实上,不管你知道与否,你都在使用基于Git进行版本管理的软件:linux内核(就算你没有在手机或者电脑上使用linux,你正在访问的网站也是运行在linux系统上的),Firefox,Chrome等其他很多项目都在Git上和世界各地开发者共享他们的代码。
-
-换个角度来说,你是否仅仅通过Git就可以和其他人共享你的代码?你是否可以在家里或者企业里私有化的使用Git?你必须要通过一个GitHub账号来使用Git吗?为什么要使用Git呢?Git的优势又是什么?Git是我唯一的选择吗?这对Git所有的疑问都会把我们搞的一脑浆糊。
-
-因此,忘记你以前所知的Git,我们从新走进Git世界的大门。
-
-### 什么是版本控制系统?
-
-现在市面上有很多不同的版本控制系统:CVS,SVN,Mercurial,Fossil 当然还有 Git。
-
-很多像 GitHub 和 GitLab 这样的服务是以Git为基础的,但是你也可以只使用Git而无需使用其他额外的服务。这意味着你可以以私有或者公有的方式来使用Git。
-
-如果你曾经和其他人有过任何数字方面的合作,你就会知道传统版本管理的工作流程。开始是很简单的:你有一个原始的版本,你把这个版本发送给你的同事,他们在接收到的版本上做了些修改,现在你们有两个版本了,然后他们把他们手上修改过的版本发回来给你。你把他们的修改合并到你手上的版本中,现在两个版本又合并成一个最新的版本了。
-
-然后,你修改了你手上最新的版本,同时,你的同事也修改了他们手上合并前的版本。现在你们有3个不同的版本了,分别是合并后最新的版本,你修改后的版本,你同事手上继续修改过的版本。至此,你们的版本管理工作开始变得越来越混乱了。
-
-正如 Jason van Gumster 在他的文章中指出 [即使是艺术家也需要版本控制][1],而且这也有向个人设置方向发展的趋势。无论是艺术家还是科学家,在一些想法上开发出实验版本都是比较常见的;在你的项目中,可能有某个版本大获成功,把项目推向一个新的高度,也可能有某个版本惨遭失败。因此,最终你不可避免的会创建出一堆名为project_justTesting.kdenlive、project_betterVersion.kdenlive、project_best_FINAL.kdenlive、project_FINAL-alternateVersion.kdenlive等类似名称的文件。
-
-不管你是修改一个for循环,还是一些简单的文本编辑,一个好的版本控制系统都会让我们的生活更加的轻松。
-
-### Git快照
-
-Git会为每个项目创建快照,并且为项目的每个版本都保存一个唯一的快照。
-
-如果你将项目带领到了一个错误的方向上,你可以回退到上一个正确的版本,并且 开始尝试另一个可行的方向。
-
-如果你是和别人合作开发,当有人向你发送他们的修改时,你可以将这些修改合并到你的工作分支中,然后你的同事就可以获取到合并后的最新版本,并在此基础上继续工作。
-
-Git并不是魔法,因此冲突还是会发生的(“你修改了某文件的最后一行,但是我把这行整行都删除了;我们怎样处理这些冲突呢?”),但是总体而言,Git会为你保留了所有更改的历史版本,甚至允许并行版本。这为你保留了以任何方式处理冲突的能力。
-
-### 分布式Git
-
-在不同的机器上为同一个项目工作是一件复杂的事情。因为在你开始工作时,你想要获得项目的最新版本,然后此基础上进行修改,最后向你的同事共享这些改动。传统的方法是通过笨重的在线文件共享服务或者老旧的电邮附件,但是这两种方式都是效率低下且容易出错。
-
-Git天生是为分布式工作设计的。如果你要参与到某个项目中,你可以克隆(clone)该项目的Git仓库,然后就像这个项目只有你本地一个版本一样对项目进行修改。最后使用一些简单的命令你就可以拉取(pull)其他开发者的修改,或者你可以把你的修改推送(push)给别人。现在不用担心谁手上的是最新的版本,或者谁的版本又存放在哪里等这些问题了。全部人都是在本地进行开发,然后向共同的目标推送或者拉取更新。(或者不是共同的目标,这取决于项目的开发方式)。
-
-### Git 界面
-
-最原始的Git是运行在Linux终端上的应用软件。然而,得益于Git是开源的,并且拥有良好的设计,世界各地的开发者都可以为Git设计不同的接入界面。
-
-Git完全是免费的,并且附带在Linux,BSD,Illumos 和其他类unix系统中,Git命令看起来像:
-
-```
-$ git --version
-git version 2.5.3
-```
-
-可能最著名的Git访问界面是基于网页的,像GitHub,开源的GitLab,Savannah,BitBucket和SourceForge这些网站都是基于网页端的Git界面。这些站点为面向公众和面向社会的开源软件提供了最大限度的代码托管服务。在一定程度上,基于浏览器的图形界面(GUI)可以尽量的减缓Git的学习曲线。下面的GitLab接口的截图:
-
-![](https://opensource.com/sites/default/files/0_gitlab.png)
-
-再者,第三方Git服务提供商或者独立开发者甚至可以在Git的基础上开发出不是基于HTML的定制化前端接口。此类接口让你可以不用打开浏览器就可以方便的使用Git进行版本管理。其中对用户最透明的方式是直接集成到文件管理器中。KDE文件管理器,Dolphin 可以直接在目录中显示Git状态,甚至支持提交,推送和拉取更新操作。
-
-![](https://opensource.com/sites/default/files/0_dolphin.jpg)
-
-[Sparkleshare][2]使用Git作为其Dropbox类文件共享接口的基础。
-
-![](https://opensource.com/sites/default/files/0_sparkleshare_1.jpg)
-
-想了解更多的内容,可以查看[Git wiki][3],此页面中展示了很多Git的图形界面项目。
-
-### 谁应该使用Git?
-
-就是你!我们更应该关心的问题是什么时候使用Git?和用Git来干嘛?
-
-### 我应该在什么时候使用Git呢?我要用Git来干嘛呢?
-
-想更深入的学习Git,我们必须比平常考虑更多关于文件格式的问题。
-
-Git是为了管理源代码而设计的,在大多数编程语言中,源代码就意味者一行行的文本。当然,Git并不知道你把这些文本当成是源代码还是下一部伟大的美式小说。因此,只要文件内容是以文本构成的,使用Git来跟踪和管理其版本就是一个很好的选择了。
-
-但是什么是文本呢?如果你在像Libre Office这类办公软件中编辑一些内容,通常并不会产生纯文本内容。因为通常复杂的应用软件都会对原始的文本内容进行一层封装,就如把原始文本内容用XML标记语言包装起来,然后封装在Zip容器中。这种对原始文本内容进行一层封装的做法可以保证当你把文件发送给其他人时,他们可以看到你在办公软件中编辑的内容及特定的文本效果。奇怪的是,虽然,通常你的需求可能会很复杂,就像保存[Kdenlive][4]项目文件,或者保存从[Inkscape][5]导出的SVG文件,但是,事实上使用Git管理像XML文本这样的纯文本类容是最简单的。
-
-如果你在使用Unix系统,你可以使用 file 命令来查看文件内容构成:
-
-```
-$ file ~/path/to/my-file.blah
-my-file.blah: ASCII text
-$ file ~/path/to/different-file.kra: Zip data (MIME type "application/x-krita")
-```
-
-如果还是不确定,你可以使用 head 命令来查看文件内容:
-
-```
-$ head ~/path/to/my-file.blah
-```
-
-如果输出的文本你基本能看懂,这个文件就很有可能是文本文件。如果你仅仅在一堆乱码中偶尔看到几个熟悉的字符,那么这个文件就可能不是文本文件了。
-
-准确的说:Git可以管理其他格式的文件,但是它会把这些文件当成二进制大对象(blob)。两者的区别是,在文本文件中,Git可以明确的告诉你在这两个快照(或者说提交)间有3行是修改过的。但是如果你在两个提交间对一张图片进行的编辑操作,Git会怎么指出这种修改呢?实际上,因为图片并不是以某种可以增加或删除的有意义的文本构成,因此Git并不能明确的描述这种变化。当然我个人是非常希望图片的编辑可以像把本文"丑陋的蓝绿色"修改成"漂浮着蓬松白云的天蓝色"一样的简单,但是事实上图片的编辑并没有这么简单。
-
-经常有人在Git上记录png图标、电子表格或者流程图这类二进制大型对象。尽管,我们知道在Git上管理此类大型文件并不直观,但是,如果你需要使用Git来管理此类文件,你也并不需要过多的担心。如果你参与的项目同时生成文本文件和二进制大文件对象(如视频游戏中常见的场景,这些和源代码同样重要的图像和音频材料),那么你有两条路可以走:要么开发出你自己的解决方案,就如使用指向共享网络驱动器的指针;要么使用Git插件,如Joey Hess开发的[git annex][6], 或者 [Git-Media][7] 项目。
-
-你看,Git真的是一个任何人都可以使用的工具。它是你进行文件版本管理的一个强大而且好用工具,同时它并没有你开始认为的那么可怕。
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/resources/what-is-git
-
-作者:[Seth Kenlon][a]
-译者:[cvsher](https://github.com/cvsher)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[1]: https://opensource.com/life/16/2/version-control-isnt-just-programmers
-[2]: http://sparkleshare.org/
-[3]: https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools#Graphical_Interfaces
-[4]: https://opensource.com/life/11/11/introduction-kdenlive
-[5]: http://inkscape.org/
-[6]: https://git-annex.branchable.com/
-[7]: https://github.com/alebedev/git-media

From 3b3cea5dc788eb6f20a20a927509e86c0ea76cf5 Mon Sep 17 00:00:00 2001
From: may
Date: Mon, 1 Aug 2016 08:58:56 +0800
Subject: [PATCH 309/471] translating by maywanting
---
 sources/talk/20160620 5 SSH Hardening Tips.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/talk/20160620 5 SSH Hardening Tips.md b/sources/talk/20160620 5 SSH Hardening Tips.md
index ad6741ab26..e1a7353807 100644
--- a/sources/talk/20160620 5 SSH Hardening Tips.md
+++ b/sources/talk/20160620 5 SSH Hardening Tips.md
@@ -1,3 +1,5 @@
+translating by maywanting
+
 5 SSH Hardening Tips
 ======================

From 7047bb29cf12b05c2b15c8a348b2ac5cb7a6432a Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 1 Aug 2016 07:42:23 +0800
Subject: [PATCH 310/471] PUB:20160705 How to Encrypt a Flash Drive Using VeraCrypt

@GitFuture
---
 ...o Encrypt a Flash Drive Using VeraCrypt.md | 26 +++++++++----------
 1 file changed, 13 insertions(+), 13 deletions(-)
 rename {translated/tech => published}/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md (72%)

diff --git a/translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md b/published/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
similarity index 72%
rename from translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
rename to published/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
index a049bfd73c..71a2e01a24 100644
--- a/translated/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
+++ b/published/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
@@ -1,29 +1,29 @@
 用 VeraCrypt 加密闪存盘
 ============================================

-很多安全专家偏好像 VeraCrypt 这类能够用来加密闪存盘的开源软件。因为获取它的源代码很简单。
+很多安全专家偏好像 VeraCrypt 这类能够用来加密闪存盘的开源软件,是因为可以获取到它的源代码。

-保护 USB闪存盘里的数据,加密是一个聪明的方法,正如我们在使用 Microsoft 的 BitLocker [加密闪存盘][1] 一文中提到的。
+保护 USB 闪存盘里的数据,加密是一个聪明的方法,正如我们在使用 Microsoft 的 BitLocker [加密闪存盘][1] 一文中提到的。

 但是如果你不想用 BitLocker 呢?

-你可能有顾虑,因为你不能够查看 Microsoft 的程序源码,那么它容易被植入用于政府或其它用途的“后门”。由于开源软件的源码是公开的,很多安全专家认为开源软件很少藏有后门。
+你可能有顾虑,因为你不能够查看 Microsoft 的程序源码,那么它容易被植入用于政府或其它用途的“后门”。而由于开源软件的源码是公开的,很多安全专家认为开源软件很少藏有后门。

 还好,有几个开源加密软件能作为 BitLocker 的替代。

 要是你需要在 Windows 系统,苹果的 OS X 系统或者 Linux 系统上加密以及访问文件,开源软件 [VeraCrypt][2] 提供绝佳的选择。

-VeraCrypt 源于 TrueCrypt。TrueCrypt是一个备受好评的开源加密软件,尽管它现在已经停止维护了。但是 TrueCrypt 的代码通过了审核,没有发现什么重要的安全漏洞。另外,它已经在 VeraCrypt 中进行了改善。
+VeraCrypt 源于 TrueCrypt。TrueCrypt 是一个备受好评的开源加密软件,尽管它现在已经停止维护了。但是 TrueCrypt 的代码通过了审核,没有发现什么重要的安全漏洞。另外,在 VeraCrypt 中对它进行了改善。

 Windows,OS X 和 Linux 系统的版本都有。

-用 VeraCrypt 加密 USB 闪存盘不像用 BitLocker 那么简单,但是它只要几分钟就好了。
+用 VeraCrypt 加密 USB 闪存盘不像用 BitLocker 那么简单,但是它也只要几分钟就好了。

 ### 用 VeraCrypt 加密闪存盘的 8 个步骤

-对应操作系统 [下载 VeraCrypt][3] 之后:
+对应你的操作系统 [下载 VeraCrypt][3] 之后:

-打开 VeraCrypt,点击 Create Volume,进入 VeraCrypt 的创建卷的向导程序(VeraCrypt Volume Creation Wizard)。(注:VeraCrypt Volume Creation Wizard 首字母全大写,不清楚是否需要翻译,之后有很多首字母大写的词,都以括号标出)
+打开 VeraCrypt,点击 Create Volume,进入 VeraCrypt 的创建卷的向导程序(VeraCrypt Volume Creation Wizard)。

 ![](http://www.esecurityplanet.com/imagesvr_ce/6246/Vera0.jpg)
@@ -39,7 +39,7 @@ VeraCrypt 创建卷向导(VeraCrypt Volume Creation Wizard)允许你在闪
 ![](http://www.esecurityplanet.com/imagesvr_ce/9427/Vera3.jpg)

-选择创建卷标模式(Volume Creation Mode)。如果你的闪存盘是空的,或者你想要删除它里面的所有东西,选第一个。要么你想保持所有现存的文件,选第二个就好了。
+选择创建卷模式(Volume Creation Mode)。如果你的闪存盘是空的,或者你想要删除它里面的所有东西,选第一个。要么你想保持所有现存的文件,选第二个就好了。

 ![](http://www.esecurityplanet.com/imagesvr_ce/7828/Vera4.jpg)
@@ -47,7 +47,7 @@ VeraCrypt 创建卷向导(VeraCrypt Volume Creation Wizard)允许你在闪
 ![](http://www.esecurityplanet.com/imagesvr_ce/5918/Vera5.jpg)

-确定了卷标容量后,输入并确认你想要用来加密数据密码。
+确定了卷容量后,输入并确认你想要用来加密数据的密码。

 ![](http://www.esecurityplanet.com/imagesvr_ce/3850/Vera6.jpg)
@@ -71,7 +71,7 @@ VeraCrypt 创建卷向导(VeraCrypt Volume Creation Wizard)允许你在闪
 ### VeraCrypt 移动硬盘安装步骤

-如果你设置闪存盘的时候,选择的是加密过的容器而不是加密整个盘,你可以选择创建 VeraCrypt 称为移动盘(Traveler Disk)的设备。这会复制安装一个 VeraCrypt 在 USB 闪存盘。当你在别的 Windows 电脑上插入 U 盘时,就能从 U 盘自动运行 VeraCrypt;也就是说没必要在新电脑上安装 VeraCrypt。
+如果你设置闪存盘的时候,选择的是加密过的容器而不是加密整个盘,你可以选择创建 VeraCrypt 称为移动盘(Traveler Disk)的设备。这会复制安装一个 VeraCrypt 到 USB 闪存盘。当你在别的 Windows 电脑上插入 U 盘时,就能从 U 盘自动运行 VeraCrypt;也就是说没必要在新电脑上安装 VeraCrypt。

 你可以设置闪存盘作为一个移动硬盘(Traveler Disk),在 VeraCrypt 的工具栏(Tools)菜单里选择 Traveler Disk SetUp 就行了。
@@ -79,15 +79,15 @@ VeraCrypt 创建卷向导(VeraCrypt Volume Creation Wizard)允许你在闪
 要从移动盘(Traveler Disk)上运行 VeraCrypt,你必须要有那台电脑的管理员权限,这不足为奇。尽管这看起来是个限制,机密文件无法在不受控制的电脑上安全打开,比如在一个商务中心的电脑上。

->Paul Rubens 从事技术行业已经超过 20 年。这期间他为英国和国际主要的出版社,包括 《The Economist》《The Times》《Financial Times》《The BBC》《Computing》和《ServerWatch》等出版社写过文章,
+> 本文作者 Paul Rubens 从事技术行业已经超过 20 年。这期间他为英国和国际主要的出版社,包括 《The Economist》《The Times》《Financial Times》《The BBC》《Computing》和《ServerWatch》等出版社写过文章,

 --------------------------------------------------------------------------------

 via: http://www.esecurityplanet.com/open-source-security/how-to-encrypt-flash-drive-using-veracrypt.html

-作者:[Paul Rubens ][a]
+作者:[Paul Rubens][a]
 译者:[GitFuture](https://github.com/GitFuture)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 6da38f56d75ba8092dbdd3690459cdc3839393d8 Mon Sep 17 00:00:00 2001
From: ezio
Date: Mon, 1 Aug 2016 21:06:19 +0800
Subject: [PATCH 311/471] =?UTF-8?q?=E7=BB=84=E9=98=9F=E7=BF=BB=E8=AF=91=20?=
 =?UTF-8?q?Building=20a=20data=20science=20portfolio=20-=20Machine=20learn?=
 =?UTF-8?q?ing=20project?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 sources/team_test/README.md                   |   4 +
 ...ce portfolio - Machine learning project.md |  79 +++++++
 ...ce portfolio - Machine learning project.md | 114 ++++++++++
 ...ce portfolio - Machine learning project.md | 194 ++++++++++++++++++
 ...ce portfolio - Machine learning project.md |  80 ++++++++
 ...ce portfolio - Machine learning project.md | 166 +++++++++++++++
 ...ce portfolio - Machine learning project.md | 173 ++++++++++++++++
 7 files changed, 810 insertions(+)
 create mode 100644 sources/team_test/README.md
 create mode 100644 sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md
 create mode 100644 sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md
 create mode 100644 sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md
 create mode 100644 sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md
 create mode 100644 sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md
 create mode 100644 sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md

diff --git a/sources/team_test/README.md b/sources/team_test/README.md
new file mode 100644
index 0000000000..d2bdd305f8
--- /dev/null
+++ b/sources/team_test/README.md
@@ -0,0 +1,4 @@
+组队翻译 : 《Building a data science portfolio: Machine learning project》
+本次组织者 : @选题-oska874
+参加译者 : @译者-vim-kakali @译者-Noobfish @译者-zky001 @译者-kokialoves @译者-ideas4u @译者-cposture
+分配方式 : 原文按大致长度分成 6 部分,参与者自由选择,先到先选,如有疑问联系 @选题-oska874
diff --git a/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md
new file mode 100644
index 0000000000..70d9b65f92
--- /dev/null
+++ b/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md
@@ -0,0 +1,79 @@
+>This is the third in a series of posts on how to build a Data Science Portfolio. If you like this and want to know when the next post in the series is released, you can [subscribe at the bottom of the page][1].
+
+Data science companies are increasingly looking at portfolios when making hiring decisions. One of the reasons for this is that a portfolio is the best way to judge someone’s real-world skills. The good news for you is that a portfolio is entirely within your control. If you put some work in, you can make a great portfolio that companies are impressed by.
+
+The first step in making a high-quality portfolio is to know what skills to demonstrate. The primary skills that companies want in data scientists, and thus the primary skills they want a portfolio to demonstrate, are:
+
+- Ability to communicate
+- Ability to collaborate with others
+- Technical competence
+- Ability to reason about data
+- Motivation and ability to take initiative
+
+Any good portfolio will be composed of multiple projects, each of which may demonstrate 1-2 of the above points. This is the third post in a series that will cover how to make a well-rounded data science portfolio. In this post, we’ll cover how to make the second project in your portfolio, and how to build an end to end machine learning project. At the end, you’ll have a project that shows your ability to reason about data, and your technical competence. [Here’s][2] the completed project if you want to take a look.
+
+### An end to end project
+
+As a data scientist, there are times when you’ll be asked to take a dataset and figure out how to [tell a story with it][3]. In times like this, it’s important to communicate very well, and walk through your process. Tools like Jupyter notebook, which we used in a previous post, are very good at helping you do this. The expectation here is that the deliverable is a presentation or document summarizing your findings.
+
+However, there are other times when you’ll be asked to create a project that has operational value. A project with operational value directly impacts the day-to-day operations of a company, and will be used more than once, and often by multiple people. A task like this might be “create an algorithm to forecast our churn rate”, or “create a model that can automatically tag our articles”. In cases like this, storytelling is less important than technical competence. You need to be able to take a dataset, understand it, then create a set of scripts that can process that data. It’s often important that these scripts run quickly, and use minimal system resources like memory. It’s very common that these scripts will be run several times, so the deliverable becomes the scripts themselves, not a presentation. The deliverable is often integrated into operational flows, and may even be user-facing.
+
+The main components of building an end to end project are:
+
+- Understanding the context
+- Exploring the data and figuring out the nuances
+- Creating a well-structured project, so it’s easy to integrate into operational flows
+- Writing high-performance code that runs quickly and uses minimal system resources
+- Documenting the installation and usage of your code well, so others can use it
+
+In order to effectively create a project of this kind, we’ll need to work with multiple files. Using a text editor like [Atom][4], or an IDE like [PyCharm][5] is highly recommended. These tools will allow you to jump between files, and edit files of different types, like markdown files, Python files, and csv files. Structuring your project so it’s easy to version control and upload to collaborative coding tools like [Github][6] is also useful.
+
+![](https://www.dataquest.io/blog/images/end_to_end/github.png)
+>This project on Github.
+
+We’ll use our editing tools along with libraries like [Pandas][7] and [scikit-learn][8] in this post. We’ll make extensive use of Pandas [DataFrames][9], which make it easy to read in and work with tabular data in Python.
+
+### Finding good datasets
+
+A good dataset for an end to end portfolio project can be hard to find. [The dataset][10] needs to be sufficiently large that memory and performance constraints come into play. It also needs to potentially be operationally useful. For instance, this dataset, which contains data on the admission criteria, graduation rates, and graduate future earnings for US colleges, would be a great dataset to use to tell a story. However, as you think about the dataset, it becomes clear that there isn’t enough nuance to build a good end to end project with it. For example, you could tell someone their potential future earnings if they went to a specific college, but that would be a quick lookup without enough nuance to demonstrate technical competence. You could also figure out if colleges with higher admissions standards tend to have graduates who earn more, but that would be more storytelling than operational.
+
+These memory and performance constraints tend to come into play when you have more than a gigabyte of data, and when you have some nuance to what you want to predict, which involves running algorithms over the dataset.
+
+A good operational dataset enables you to build a set of scripts that transform the data, and answer dynamic questions. A good example would be a dataset of stock prices. You would be able to predict the prices for the next day, and keep feeding new data to the algorithm as the markets closed. This would enable you to make trades, and potentially even profit. This wouldn’t be telling a story – it would be adding direct value.
+
+Some good places to find datasets like this are:
+
+- [/r/datasets][11] – a subreddit that has hundreds of interesting datasets.
+- [Google Public Datasets][12] – public datasets available through Google BigQuery.
+- [Awesome datasets][13] – a list of datasets, hosted on Github.
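Since the whole project will lean on that DataFrame workflow, here is a minimal sketch of what reading this kind of data looks like. It is illustrative only: the two inline records are sample acquisition rows from this dataset (shown in full later in this series), and a real run would point `pd.read_csv` at the downloaded pipe-delimited files instead of an in-memory string:

```python
import io

import pandas as pd

# Two sample acquisition records: pipe-delimited, with no header row.
raw = """100000853384|R|OTHER|4.625|280000|360|02/2012|04/2012|31|31|1|23|801|N|C|SF|1|I|CA|945||FRM|
100003735682|R|SUNTRUST MORTGAGE INC.|3.99|466000|360|01/2012|03/2012|80|80|2|30|794|N|P|SF|1|P|MD|208||FRM|788
"""

# sep="|" matches the delimiter in the raw data; header=None because the
# files ship without column names, so Pandas numbers the columns itself.
acquisition = pd.read_csv(io.StringIO(raw), sep="|", header=None)

print(acquisition.shape)
```

Running this prints `(2, 23)`: each record carries 23 fields, and the empty spots between pipes come through as `NaN`, which matters later when cleaning the data.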
+ +As you look through these datasets, think about what questions someone might want answered with the dataset, and think if those questions are one-time (“how did housing prices correlate with the S&P 500?”), or ongoing (“can you predict the stock market?”). The key here is to find questions that are ongoing, and require the same code to be run multiple times with different inputs (different data). + +For the purposes of this post, we’ll look at [Fannie Mae Loan Data][14]. Fannie Mae is a government sponsored enterprise in the US that buys mortgage loans from other lenders. It then bundles these loans up into mortgage-backed securities and resells them. This enables lenders to make more mortgage loans, and creates more liquidity in the market. This theoretically leads to more homeownership, and better loan terms. From a borrowers perspective, things stay largely the same, though. + +Fannie Mae releases two types of data – data on loans it acquires, and data on how those loans perform over time. In the ideal case, someone borrows money from a lender, then repays the loan until the balance is zero. However, some borrowers miss multiple payments, which can cause foreclosure. Foreclosure is when the house is seized by the bank because mortgage payments cannot be made. Fannie Mae tracks which loans have missed payments on them, and which loans needed to be foreclosed on. This data is published quarterly, and lags the current date by 1 year. As of this writing, the most recent dataset that’s available is from the first quarter of 2015. + +Acquisition data, which is published when the loan is acquired by Fannie Mae, contains information on the borrower, including credit score, and information on their loan and home. Performance data, which is published every quarter after the loan is acquired, contains information on the payments being made by the borrower, and the foreclosure status, if any. A loan that is acquired may have dozens of rows in the performance data. 
A good way to think of this is that the acquisition data tells you that Fannie Mae now controls the loan, and the performance data contains a series of status updates on the loan. One of the status updates may tell us that the loan was foreclosed on during a certain quarter. + +![](https://www.dataquest.io/blog/images/end_to_end/foreclosure.jpg) +>A foreclosed home being sold. + +### Picking an angle + +There are a few directions we could go in with the Fannie Mae dataset. We could: + +- Try to predict the sale price of a house after it’s foreclosed on. +- Predict the payment history of a borrower. +- Figure out a score for each loan at acquisition time. + +The important thing is to stick to a single angle. Trying to focus on too many things at once will make it hard to make an effective project. It’s also important to pick an angle that has sufficient nuance. Here are examples of angles without much nuance: + +- Figuring out which banks sold loans to Fannie Mae that were foreclosed on the most. +- Figuring out trends in borrower credit scores. +- Exploring which types of homes are foreclosed on most often. +- Exploring the relationship between loan amounts and foreclosure sale prices + +All of the above angles are interesting, and would be great if we were focused on storytelling, but aren’t great fits for an operational project. + +With the Fannie Mae dataset, we’ll try to predict whether a loan will be foreclosed on in the future by only using information that was available when the loan was acquired. In effect, we’ll create a “score” for any mortgage that will tell us if Fannie Mae should buy it or not. This will give us a nice foundation to build on, and will be a great portfolio piece. 
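To make the idea of a "score" concrete: one common way to get a score out of a classifier is to use its predicted probability of the positive class. This is only a sketch on synthetic data (the features and labels below are made up), not the model we'll actually build:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic acquisition-time features and a synthetic foreclosure label.
rng = np.random.RandomState(0)
X = rng.rand(200, 3)             # e.g. scaled credit score, LTV, DTI
y = (X[:, 0] < 0.5).astype(int)  # pretend the first feature drives foreclosure

clf = LogisticRegression().fit(X, y)

# The predicted probability of the positive class acts as a per-loan "score".
scores = clf.predict_proba(X)[:, 1]
print(scores[:3])
```

A lender (or Fannie Mae) could then rank loans by this score, or threshold it to make a buy/don't-buy decision.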
+ diff --git a/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md new file mode 100644 index 0000000000..2b429aaab8 --- /dev/null +++ b/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md @@ -0,0 +1,114 @@ +### Understanding the data + +Let’s take a quick look at the raw data files. Here are the first few rows of the acquisition data from quarter 1 of 2012: + +``` +100000853384|R|OTHER|4.625|280000|360|02/2012|04/2012|31|31|1|23|801|N|C|SF|1|I|CA|945||FRM| +100003735682|R|SUNTRUST MORTGAGE INC.|3.99|466000|360|01/2012|03/2012|80|80|2|30|794|N|P|SF|1|P|MD|208||FRM|788 +100006367485|C|PHH MORTGAGE CORPORATION|4|229000|360|02/2012|04/2012|67|67|2|36|802|N|R|SF|1|P|CA|959||FRM|794 +``` + +Here are the first few rows of the performance data from quarter 1 of 2012: + +``` +100000853384|03/01/2012|OTHER|4.625||0|360|359|03/2042|41860|0|N|||||||||||||||| +100000853384|04/01/2012||4.625||1|359|358|03/2042|41860|0|N|||||||||||||||| +100000853384|05/01/2012||4.625||2|358|357|03/2042|41860|0|N|||||||||||||||| +``` + +Before proceeding too far into coding, it’s useful to take some time and really understand the data. This is more critical in operational projects – because we aren’t interactively exploring the data, it can be harder to spot certain nuances unless we find them upfront. In this case, the first step is to read the materials on the Fannie Mae site: + +- [Overview][15] +- [Glossary of useful terms][16] +- [FAQs][17] +- [Columns in the Acquisition and Performance files][18] +- [Sample Acquisition data file][19] +- [Sample Performance data file][20] + +After reading through these files, we know some key facts that will help us: + +- There’s an Acquisition file and a Performance file for each quarter, starting from the year 2000 to present. 
There’s a 1-year lag in the data, so the most recent data is from 2015 as of this writing.
+- The files are in text format, with a pipe (|) as a delimiter.
+- The files don’t have headers, but we have a list of what each column is.
+- Altogether, the files contain data on 22 million loans.
+- Because the Performance files contain information on loans acquired in previous years, there will be more performance data for loans acquired in earlier years (i.e., loans acquired in 2014 won’t have much performance history).
+
+These small bits of information will save us a ton of time as we figure out how to structure our project and work with the data.
+
+### Structuring the project
+
+Before we start downloading and exploring the data, it’s important to think about how we’ll structure the project. When building an end-to-end project, our primary goals are:
+
+- Creating a solution that works
+- Having a solution that runs quickly and uses minimal resources
+- Enabling others to easily extend our work
+- Making it easy for others to understand our code
+- Writing as little code as possible
+
+In order to achieve these goals, we’ll need to structure our project well. A well-structured project follows a few principles:
+
+- Separates data files and code files.
+- Separates raw data from generated data.
+- Has a README.md file that walks people through installing and using the project.
+- Has a requirements.txt file that contains all the packages needed to run the project.
+- Has a single settings.py file that contains any settings that are used in other files.
+  - For example, if you are reading the same file from multiple Python scripts, it’s useful to have them all import settings and get the file name from a centralized place.
+- Has a .gitignore file that prevents large or secret files from being committed.
+- Breaks each step in our task into a separate file that can be executed separately.
+ - For example, we may have one file for reading in the data, one for creating features, and one for making predictions. +- Stores intermediate values. For example, one script may output a file that the next script can read. + - This enables us to make changes in our data processing flow without recalculating everything. + +Our file structure will look something like this shortly: + +``` +loan-prediction +├── data +├── processed +├── .gitignore +├── README.md +├── requirements.txt +├── settings.py +``` + +### Creating the initial files + +To start with, we’ll need to create a loan-prediction folder. Inside that folder, we’ll need to make a data folder and a processed folder. The first will store our raw data, and the second will store any intermediate calculated values. + +Next, we’ll make a .gitignore file. A .gitignore file will make sure certain files are ignored by git and not pushed to Github. One good example of such a file is the .DS_Store file created by OSX in every folder. A good starting point for a .gitignore file is here. We’ll also want to ignore the data files because they are very large, and the Fannie Mae terms prevent us from redistributing them, so we should add two lines to the end of our file: + +``` +data +processed +``` + +[Here’s][21] an example .gitignore file for this project. + +Next, we’ll need to create README.md, which will help people understand the project. .md indicates that the file is in markdown format. Markdown enables you write plain text, but also add some fancy formatting if you want. [Here’s][22] a guide on markdown. If you upload a file called README.md to Github, Github will automatically process the markdown, and show it to anyone who views the project. [Here’s][23] an example. + +For now, we just need to put a simple description in README.md: + +``` +Loan Prediction +----------------------- + +Predict whether or not loans acquired by Fannie Mae will go into foreclosure. 
Fannie Mae acquires loans from other lenders as a way of inducing them to lend more. Fannie Mae releases data on the loans it has acquired and their performance afterwards [here](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html).
+```
+
+Now, we can create a requirements.txt file. This will make it easy for other people to install our project. We don’t know exactly what libraries we’ll be using yet, but here’s a good starting point:
+
+```
+pandas
+matplotlib
+scikit-learn
+numpy
+ipython
+scipy
+```
+
+The above libraries are the most commonly used for data analysis tasks in Python, and it’s fair to assume that we’ll be using most of them. [Here’s][24] an example requirements file for this project.
+
+After creating requirements.txt, you should install the packages. For this post, we’ll be using Python 3. If you don’t have Python installed, you should look into using [Anaconda][25], a Python installer that also installs all the packages listed above.
+
+Finally, we can just make a blank settings.py file, since we don’t have any settings for our project yet.
+
diff --git a/sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md
new file mode 100644
index 0000000000..31fb6f0fc9
--- /dev/null
+++ b/sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md
@@ -0,0 +1,194 @@
+### Acquiring the data
+
+Once we have the skeleton of our project, we can get the raw data.
+
+Fannie Mae has some restrictions around acquiring the data, so you’ll need to sign up for an account. You can find the download page [here][26]. After creating an account, you’ll be able to download as few or as many loan data files as you want. The files are in zip format, and are reasonably large after decompression.
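The decompression step can be scripted instead of done by hand. Here's a small sketch using Python's built-in `zipfile` module (it assumes the archives were downloaded into the `data` folder from our project layout; the function name is ours):

```python
import os
import zipfile

def extract_archives(directory):
    """Extract every .zip in `directory` in place, then delete the archive."""
    extracted = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".zip"):
            continue
        path = os.path.join(directory, name)
        with zipfile.ZipFile(path) as archive:
            archive.extractall(directory)
        os.remove(path)
        extracted.append(name)
    return extracted

if __name__ == "__main__":
    if os.path.isdir("data"):
        print(extract_archives("data"))
```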
+ +For the purposes of this blog post, we’ll download everything from Q1 2012 to Q1 2015, inclusive. We’ll then need to unzip all of the files. After unzipping the files, remove the original .zip files. At the end, the loan-prediction folder should look something like this: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... +├── processed +├── .gitignore +├── README.md +├── requirements.txt +├── settings.py +``` + +After downloading the data, you can use the head and tail shell commands to look at the lines in the files. Do you see any columns that aren’t needed? It might be useful to consult the [pdf of column names][27] while doing this. + +### Reading in the data + +There are two issues that make our data hard to work with right now: + +- The acquisition and performance datasets are segmented across multiple files. +- Each file is missing headers. + +Before we can get started on working with the data, we’ll need to get to the point where we have one file for the acquisition data, and one file for the performance data. Each of the files will need to contain only the columns we care about, and have the proper headers. One wrinkle here is that the performance data is quite large, so we should try to trim some of the columns if we can. + +The first step is to add some variables to settings.py, which will contain the paths to our raw data and our processed data. We’ll also add a few other settings that will be useful later on: + +``` +DATA_DIR = "data" +PROCESSED_DIR = "processed" +MINIMUM_TRACKING_QUARTERS = 4 +TARGET = "foreclosure_status" +NON_PREDICTORS = [TARGET, "id"] +CV_FOLDS = 3 +``` + +Putting the paths in settings.py will put them in a centralized place and make them easy to change down the line. 
When referring to the same variables in multiple files, it’s easier to put them in a central place than edit them in every file when you want to change them. [Here’s][28] an example settings.py file for this project. + +The second step is to create a file called assemble.py that will assemble all the pieces into 2 files. When we run python assemble.py, we’ll get 2 data files in the processed directory. + +We’ll then start writing code in assemble.py. We’ll first need to define the headers for each file, so we’ll need to look at [pdf of column names][29] and create lists of the columns in each Acquisition and Performance file: + +``` +HEADERS = { + "Acquisition": [ + "id", + "channel", + "seller", + "interest_rate", + "balance", + "loan_term", + "origination_date", + "first_payment_date", + "ltv", + "cltv", + "borrower_count", + "dti", + "borrower_credit_score", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "unit_count", + "occupancy_status", + "property_state", + "zip", + "insurance_percentage", + "product_type", + "co_borrower_credit_score" + ], + "Performance": [ + "id", + "reporting_period", + "servicer_name", + "interest_rate", + "balance", + "loan_age", + "months_to_maturity", + "maturity_date", + "msa", + "delinquency_status", + "modification_flag", + "zero_balance_code", + "zero_balance_date", + "last_paid_installment_date", + "foreclosure_date", + "disposition_date", + "foreclosure_costs", + "property_repair_costs", + "recovery_costs", + "misc_costs", + "tax_costs", + "sale_proceeds", + "credit_enhancement_proceeds", + "repurchase_proceeds", + "other_foreclosure_proceeds", + "non_interest_bearing_balance", + "principal_forgiveness_balance" + ] +} +``` + +The next step is to define the columns we want to keep. Since all we’re measuring on an ongoing basis about the loan is whether or not it was ever foreclosed on, we can discard many of the columns in the performance data. 
We’ll need to keep all the columns in the acquisition data, though, because we want to maximize the information we have about when the loan was acquired (after all, we’re predicting if the loan will ever be foreclosed or not at the point it’s acquired). Discarding columns will enable us to save disk space and memory, while also speeding up our code. + +``` +SELECT = { + "Acquisition": HEADERS["Acquisition"], + "Performance": [ + "id", + "foreclosure_date" + ] +} +``` + +Next, we’ll write a function to concatenate the data sets. The below code will: + +- Import a few needed libraries, including settings. +- Define a function concatenate, that: + - Gets the names of all the files in the data directory. + - Loops through each file. + - If the file isn’t the right type (doesn’t start with the prefix we want), we ignore it. + - Reads the file into a [DataFrame][30] with the right settings using the Pandas [read_csv][31] function. + - Sets the separator to | so the fields are read in correctly. + - The data has no header row, so sets header to None to indicate this. + - Sets names to the right value from the HEADERS dictionary – these will be the column names of our DataFrame. + - Picks only the columns from the DataFrame that we added in SELECT. +- Concatenates all the DataFrames together. +- Writes the concatenated DataFrame back to a file. 
+ +``` +import os +import settings +import pandas as pd + +def concatenate(prefix="Acquisition"): + files = os.listdir(settings.DATA_DIR) + full = [] + for f in files: + if not f.startswith(prefix): + continue + + data = pd.read_csv(os.path.join(settings.DATA_DIR, f), sep="|", header=None, names=HEADERS[prefix], index_col=False) + data = data[SELECT[prefix]] + full.append(data) + + full = pd.concat(full, axis=0) + + full.to_csv(os.path.join(settings.PROCESSED_DIR, "{}.txt".format(prefix)), sep="|", header=SELECT[prefix], index=False) +``` + +We can call the above function twice with the arguments Acquisition and Performance to concatenate all the acquisition and performance files together. The below code will: + +- Only execute if the script is called from the command line with python assemble.py. +- Concatenate all the files, and result in two files: + - `processed/Acquisition.txt` + - `processed/Performance.txt` + +``` +if __name__ == "__main__": + concatenate("Acquisition") + concatenate("Performance") +``` + +We now have a nice, compartmentalized assemble.py that’s easy to execute, and easy to build off of. By decomposing the problem into pieces like this, we make it easy to build our project. Instead of one messy script that does everything, we define the data that will pass between the scripts, and make them completely separate from each other. When you’re working on larger projects, it’s a good idea to do this, because it makes it much easier to change individual pieces without having unexpected consequences on unrelated pieces of the project. + +Once we finish the assemble.py script, we can run python assemble.py. You can find the complete assemble.py file [here][32]. + +This will result in two files in the processed directory: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... 
+├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +├── .gitignore +├── assemble.py +├── README.md +├── requirements.txt +├── settings.py +``` diff --git a/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md new file mode 100644 index 0000000000..66db19bc54 --- /dev/null +++ b/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md @@ -0,0 +1,80 @@ +### Computing values from the performance data + +The next step we’ll take is to calculate some values from processed/Performance.txt. All we want to do is to predict whether or not a property is foreclosed on. To figure this out, we just need to check if the performance data associated with a loan ever has a foreclosure_date. If foreclosure_date is None, then the property was never foreclosed on. In order to avoid including loans with little performance history in our sample, we’ll also want to count up how many rows exist in the performance file for each loan. This will let us filter loans without much performance history from our training data. + +One way to think of the loan data and the performance data is like this: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/001.png) + +As you can see above, each row in the Acquisition data can be related to multiple rows in the Performance data. In the Performance data, foreclosure_date will appear in the quarter when the foreclosure happened, so it should be blank prior to that. Some loans are never foreclosed on, so all the rows related to them in the Performance data have foreclosure_date blank. + +We need to compute foreclosure_status, which is a Boolean that indicates whether a particular loan id was ever foreclosed on, and performance_count, which is the number of rows in the performance data for each loan id. 
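On a toy in-memory sample, the two values we're after are easy to read off with pandas (this only works because the frame is tiny; the full performance file is far too large to load this way, as discussed next):

```python
import pandas as pd

# A miniature version of the processed performance data: id and foreclosure_date.
performance = pd.DataFrame({
    "id": [100, 100, 100, 101, 101],
    "foreclosure_date": [None, None, "01/2014", None, None],
})

# performance_count: rows per loan id; foreclosure_status: any non-blank date.
summary = performance.groupby("id")["foreclosure_date"].agg(
    performance_count="size",
    foreclosure_status=lambda dates: dates.notnull().any(),
)
print(summary)
```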
+ +There are a few different ways to compute the counts we want: + +- We could read in all the performance data, then use the Pandas groupby method on the DataFrame to figure out the number of rows associated with each loan id, and also if the foreclosure_date is ever not None for the id. + - The upside of this method is that it’s easy to implement from a syntax perspective. + - The downside is that reading in all 129236094 lines in the data will take a lot of memory, and be extremely slow. +- We could read in all the performance data, then use apply on the acquisition DataFrame to find the counts for each id. + - The upside is that it’s easy to conceptualize. + - The downside is that reading in all 129236094 lines in the data will take a lot of memory, and be extremely slow. +- We could iterate over each row in the performance dataset, and keep a separate dictionary of counts. + - The upside is that the dataset doesn’t need to be loaded into memory, so it’s extremely fast and memory-efficient. + - The downside is that it will take slightly longer to conceptualize and implement, and we need to parse the rows manually. + +Loading in all the data will take quite a bit of memory, so let’s go with the third option above. All we need to do is to iterate through all the rows in the Performance data, while keeping a dictionary of counts per loan id. In the dictionary, we’ll keep track of how many times the id appears in the performance data, as well as if foreclosure_date is ever not None. This will give us foreclosure_status and performance_count. + +We’ll create a new file called annotate.py, and add in code that will enable us to compute these values. In the below code, we’ll: + +- Import needed libraries. +- Define a function called count_performance_rows. + - Open processed/Performance.txt. This doesn’t read the file into memory, but instead opens a file handler that can be used to read in the file line by line. + - Loop through each line in the file. 
+ - Split the line on the delimiter (|) + - Check if the loan_id is not in the counts dictionary. + - If not, add it to counts. + - Increment performance_count for the given loan_id because we’re on a row that contains it. + - If date is not None, then we know that the loan was foreclosed on, so set foreclosure_status appropriately. + +``` +import os +import settings +import pandas as pd + +def count_performance_rows(): + counts = {} + with open(os.path.join(settings.PROCESSED_DIR, "Performance.txt"), 'r') as f: + for i, line in enumerate(f): + if i == 0: + # Skip header row + continue + loan_id, date = line.split("|") + loan_id = int(loan_id) + if loan_id not in counts: + counts[loan_id] = { + "foreclosure_status": False, + "performance_count": 0 + } + counts[loan_id]["performance_count"] += 1 + if len(date.strip()) > 0: + counts[loan_id]["foreclosure_status"] = True + return counts +``` + +### Getting the values + +Once we create our counts dictionary, we can make a function that will extract values from the dictionary if a loan_id and a key are passed in: + +``` +def get_performance_summary_value(loan_id, key, counts): + value = counts.get(loan_id, { + "foreclosure_status": False, + "performance_count": 0 + }) + return value[key] +``` + +The above function will return the appropriate value from the counts dictionary, and will enable us to assign a foreclosure_status value and a performance_count value to each row in the Acquisition data. The [get][33] method on dictionaries returns a default value if a key isn’t found, so this enables us to return sensible default values if a key isn’t found in the counts dictionary. 
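The fallback behavior of `get` is worth a tiny illustration, since it's what keeps the annotation step from raising a KeyError for loans that never appear in the performance file (the ids and values below are made up):

```python
counts = {100: {"foreclosure_status": True, "performance_count": 12}}
default = {"foreclosure_status": False, "performance_count": 0}

# A loan id we've seen gets its real record...
seen = counts.get(100, default)["performance_count"]
# ...while an unseen id falls back to the default instead of raising KeyError.
unseen = counts.get(999, default)["foreclosure_status"]
print(seen, unseen)  # 12 False
```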
+ + + diff --git a/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md new file mode 100644 index 0000000000..64be5f9492 --- /dev/null +++ b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md @@ -0,0 +1,166 @@ +### Annotating the data + +We’ve already added a few functions to annotate.py, but now we can get into the meat of the file. We’ll need to convert the acquisition data into a training dataset that can be used in a machine learning algorithm. This involves a few things: + +- Converting all columns to numeric. +- Filling in any missing values. +- Assigning a performance_count and a foreclosure_status to each row. +- Removing any rows that don’t have a lot of performance history (where performance_count is low). + +Several of our columns are strings, which aren’t useful to a machine learning algorithm. However, they are actually categorical variables, where there are a few different category codes, like R, S, and so on. We can convert these columns to numeric by assigning a number to each category label: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/002.png) + +Converting the columns this way will allow us to use them in our machine learning algorithm. + +Some of the columns also contain dates (first_payment_date and origination_date). We can split these dates into 2 columns each: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/003.png) +In the below code, we’ll transform the Acquisition data. We’ll define a function that: + +- Creates a foreclosure_status column in acquisition by getting the values from the counts dictionary. +- Creates a performance_count column in acquisition by getting the values from the counts dictionary. 
+- Converts each of the following columns from a string column to an integer column: + - channel + - seller + - first_time_homebuyer + - loan_purpose + - property_type + - occupancy_status + - property_state + - product_type +- Converts first_payment_date and origination_date to 2 columns each: + - Splits the column on the forward slash. + - Assigns the first part of the split list to a month column. + - Assigns the second part of the split list to a year column. + - Deletes the column. + - At the end, we’ll have first_payment_month, first_payment_year, origination_month, and origination_year. +- Fills any missing values in acquisition with -1. + +``` +def annotate(acquisition, counts): + acquisition["foreclosure_status"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "foreclosure_status", counts)) + acquisition["performance_count"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "performance_count", counts)) + for column in [ + "channel", + "seller", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "occupancy_status", + "property_state", + "product_type" + ]: + acquisition[column] = acquisition[column].astype('category').cat.codes + + for start in ["first_payment", "origination"]: + column = "{}_date".format(start) + acquisition["{}_year".format(start)] = pd.to_numeric(acquisition[column].str.split('/').str.get(1)) + acquisition["{}_month".format(start)] = pd.to_numeric(acquisition[column].str.split('/').str.get(0)) + del acquisition[column] + + acquisition = acquisition.fillna(-1) + acquisition = acquisition[acquisition["performance_count"] > settings.MINIMUM_TRACKING_QUARTERS] + return acquisition +``` + +### Pulling everything together + +We’re almost ready to pull everything together, we just need to add a bit more code to annotate.py. In the below code, we: + +- Define a function to read in the acquisition data. 
+- Define a function to write the processed data to processed/train.csv +- If this file is called from the command line, like python annotate.py: + - Read in the acquisition data. + - Compute the counts for the performance data, and assign them to counts. + - Annotate the acquisition DataFrame. + - Write the acquisition DataFrame to train.csv. + +``` +def read(): + acquisition = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "Acquisition.txt"), sep="|") + return acquisition + +def write(acquisition): + acquisition.to_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"), index=False) + +if __name__ == "__main__": + acquisition = read() + counts = count_performance_rows() + acquisition = annotate(acquisition, counts) + write(acquisition) +``` + +Once you’re done updating the file, make sure to run it with python annotate.py, to generate the train.csv file. You can find the complete annotate.py file [here][34]. + +The folder should now look like this: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... +├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +│ ├── train.csv +├── .gitignore +├── annotate.py +├── assemble.py +├── README.md +├── requirements.txt +├── settings.py +``` + +### Finding an error metric + +We’re done with generating our training dataset, and now we’ll just need to do the final step, generating predictions. We’ll need to figure out an error metric, as well as how we want to evaluate our data. In this case, there are many more loans that aren’t foreclosed on than are, so typical accuracy measures don’t make much sense. 
+
+If we read in the training data, and check the counts in the foreclosure_status column, here’s what we get:
+
+```
+import os
+import pandas as pd
+import settings
+
+train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"))
+train["foreclosure_status"].value_counts()
+```
+
+```
+False    4635982
+True        1585
+Name: foreclosure_status, dtype: int64
+```
+
+Since so few of the loans were foreclosed on, just checking the percentage of labels that were correctly predicted would mean that we could make a machine learning model that predicts False for every row, and still get a very high accuracy. Instead, we’ll want to use a metric that takes the class imbalance into account, and ensures that we predict foreclosures accurately. We don’t want too many false positives, where we predict that a loan will be foreclosed on even though it won’t, or too many false negatives, where we predict that a loan won’t be foreclosed on, but it is. Of these two, false negatives are more costly for Fannie Mae, because they’re buying loans where they may not be able to recoup their investment.
+
+We’ll define false negative rate as the number of loans where the model predicts no foreclosure but the loan was actually foreclosed on, divided by the total number of loans that were actually foreclosed on. This is the percentage of actual foreclosures that the model “missed”. Here’s a diagram:
+
+![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/004.png)
+
+In the diagram above, 1 loan was predicted as not being foreclosed on, but it actually was. If we divide this by the number of loans that were actually foreclosed on, 2, we get the false negative rate, 50%. We’ll use this as our error metric, so we can evaluate our model’s performance.
+
+### Setting up the classifier for machine learning
+
+We’ll use cross validation to make predictions. With cross validation, we’ll divide our data into 3 groups.
Then we’ll do the following: + +- Train a model on groups 1 and 2, and use the model to make predictions for group 3. +- Train a model on groups 1 and 3, and use the model to make predictions for group 2. +- Train a model on groups 2 and 3, and use the model to make predictions for group 1. + +Splitting it up into groups this way means that we never train a model using the same data we’re making predictions for. This avoids overfitting. If we overfit, we’ll get a falsely low false negative rate, which makes it hard to improve our algorithm or use it in the real world. + +[Scikit-learn][35] has a function called [cross_val_predict][36] which will make it easy to perform cross validation. + +We’ll also need to pick an algorithm to use to make predictions. We need a classifier that can do [binary classification][37]. The target variable, foreclosure_status only has two values, True and False. + +We’ll use [logistic regression][38], because it works well for binary classification, runs extremely quickly, and uses little memory. This is due to how the algorithm works – instead of constructing dozens of trees, like a random forest, or doing expensive transformations, like a support vector machine, logistic regression has far fewer steps involving fewer matrix operations. + +We can use the [logistic regression classifier][39] algorithm that’s implemented in scikit-learn. The only thing we need to pay attention to is the weights of each class. If we weight the classes equally, the algorithm will predict False for every row, because it is trying to minimize errors. However, we care much more about foreclosures than we do about loans that aren’t foreclosed on. Thus, we’ll pass balanced to the class_weight keyword argument of the [LogisticRegression][40] class, to get the algorithm to weight the foreclosures more to account for the difference in the counts of each class. 
This will ensure that the algorithm doesn’t predict False for every row, and instead is penalized equally for making errors in predicting either class. + + + + diff --git a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md new file mode 100644 index 0000000000..e99087b496 --- /dev/null +++ b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md @@ -0,0 +1,173 @@ +### Setting up the classifier for machine learning + +We’ll use cross validation to make predictions. With cross validation, we’ll divide our data into 3 groups. Then we’ll do the following: + +- Train a model on groups 1 and 2, and use the model to make predictions for group 3. +- Train a model on groups 1 and 3, and use the model to make predictions for group 2. +- Train a model on groups 2 and 3, and use the model to make predictions for group 1. + +Splitting it up into groups this way means that we never train a model using the same data we’re making predictions for. This avoids overfitting. If we overfit, we’ll get a falsely low false negative rate, which makes it hard to improve our algorithm or use it in the real world. + +[Scikit-learn][35] has a function called [cross_val_predict][36] which will make it easy to perform cross validation. + +We’ll also need to pick an algorithm to use to make predictions. We need a classifier that can do [binary classification][37]. The target variable, foreclosure_status only has two values, True and False. + +We’ll use [logistic regression][38], because it works well for binary classification, runs extremely quickly, and uses little memory. This is due to how the algorithm works – instead of constructing dozens of trees, like a random forest, or doing expensive transformations, like a support vector machine, logistic regression has far fewer steps involving fewer matrix operations. 
+ +We can use the [logistic regression classifier][39] algorithm that’s implemented in scikit-learn. The only thing we need to pay attention to is the weights of each class. If we weight the classes equally, the algorithm will predict False for every row, because it is trying to minimize errors. However, we care much more about foreclosures than we do about loans that aren’t foreclosed on. Thus, we’ll pass balanced to the class_weight keyword argument of the [LogisticRegression][40] class, to get the algorithm to weight the foreclosures more to account for the difference in the counts of each class. This will ensure that the algorithm doesn’t predict False for every row, and instead is penalized equally for making errors in predicting either class. + +### Making predictions + +Now that we have the preliminaries out of the way, we’re ready to make predictions. We’ll create a new file called predict.py that will use the train.csv file we created in the last step. The below code will: + +- Import needed libraries. +- Create a function called cross_validate that: + - Creates a logistic regression classifier with the right keyword arguments. + - Creates a list of columns that we want to use to train the model, removing id and foreclosure_status. + - Run cross validation across the train DataFrame. + - Return the predictions. + + +``` +import os +import settings +import pandas as pd +from sklearn import cross_validation +from sklearn.linear_model import LogisticRegression +from sklearn import metrics + +def cross_validate(train): + clf = LogisticRegression(random_state=1, class_weight="balanced") + + predictors = train.columns.tolist() + predictors = [p for p in predictors if p not in settings.NON_PREDICTORS] + + predictions = cross_validation.cross_val_predict(clf, train[predictors], train[settings.TARGET], cv=settings.CV_FOLDS) + return predictions +``` + +### Predicting error + +Now, we just need to write a few functions to compute error. 
The below code will: + +- Create a function called compute_error that: + - Uses scikit-learn to compute a simple accuracy score (the percentage of predictions that matched the actual foreclosure_status values). +- Create a function called compute_false_negatives that: + - Combines the target and the predictions into a DataFrame for convenience. + - Finds the false negative rate. +- Create a function called compute_false_positives that: + - Combines the target and the predictions into a DataFrame for convenience. + - Finds the false positive rate. + - Finds the number of loans that weren’t foreclosed on that the model predicted would be foreclosed on. + - Divide by the total number of loans that weren’t foreclosed on. + +``` +def compute_error(target, predictions): + return metrics.accuracy_score(target, predictions) + +def compute_false_negatives(target, predictions): + df = pd.DataFrame({"target": target, "predictions": predictions}) + return df[(df["target"] == 1) & (df["predictions"] == 0)].shape[0] / (df[(df["target"] == 1)].shape[0] + 1) + +def compute_false_positives(target, predictions): + df = pd.DataFrame({"target": target, "predictions": predictions}) + return df[(df["target"] == 0) & (df["predictions"] == 1)].shape[0] / (df[(df["target"] == 0)].shape[0] + 1) +``` + + +### Putting it all together + +Now, we just have to put the functions together in predict.py. The below code will: + +- Read in the dataset. +- Compute cross validated predictions. +- Compute the 3 error metrics above. +- Print the error metrics. 
+ +``` +def read(): + train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv")) + return train + +if __name__ == "__main__": + train = read() + predictions = cross_validate(train) + error = compute_error(train[settings.TARGET], predictions) + fn = compute_false_negatives(train[settings.TARGET], predictions) + fp = compute_false_positives(train[settings.TARGET], predictions) + print("Accuracy Score: {}".format(error)) + print("False Negatives: {}".format(fn)) + print("False Positives: {}".format(fp)) +``` + +Once you’ve added the code, you can run python predict.py to generate predictions. Running everything shows that our false negative rate is .26, which means that of the foreclosed loans, we missed predicting 26% of them. This is a good start, but can use a lot of improvement! + +You can find the complete predict.py file [here][41]. + +Your file tree should now look like this: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... +├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +│ ├── train.csv +├── .gitignore +├── annotate.py +├── assemble.py +├── predict.py +├── README.md +├── requirements.txt +├── settings.py +``` + +### Writing up a README + +Now that we’ve finished our end to end project, we just have to write up a README.md file so that other people know what we did, and how to replicate it. A typical README.md for a project should include these sections: + +- A high level overview of the project, and what the goals are. +- Where to download any needed data or materials. +- Installation instructions. + - How to install the requirements. +- Usage instructions. + - How to run the project. + - What you should see after each step. +- How to contribute to the project. + - Good next steps for extending the project. + +[Here’s][42] a sample README.md for this project. 
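Before moving on, it is worth sanity-checking the error formulas by hand. The sketch below is a pure-Python rewrite of the compute_false_negatives and compute_false_positives logic, run on made-up numbers (not the real Fannie Mae data). Note how the "+ 1" in each denominator, carried over from the pandas versions, guards against division by zero when a class is absent.

```python
# Pure-Python check of the false negative / false positive rate formulas.
# Target uses 1 for "foreclosed" and 0 for "not foreclosed".

def false_negative_rate(target, predictions):
    fn = sum(1 for t, p in zip(target, predictions) if t == 1 and p == 0)
    return fn / (sum(1 for t in target if t == 1) + 1)

def false_positive_rate(target, predictions):
    fp = sum(1 for t, p in zip(target, predictions) if t == 0 and p == 1)
    return fp / (sum(1 for t in target if t == 0) + 1)

target      = [1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 0, 1, 0, 1, 0, 0, 0, 0]

print(false_negative_rate(target, predictions))  # 1 missed foreclosure / (3 + 1) = 0.25
print(false_positive_rate(target, predictions))  # 1 false alarm / (6 + 1), about 0.143
```

On a heavily imbalanced dataset like this one, watching these two rates separately is far more informative than accuracy alone, which is why the project reports all three.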
+
+### Next steps
+
+Congratulations, you’re done making an end to end machine learning project! You can find a complete example project [here][43]. It’s a good idea to upload your project to [Github][44] once you’ve finished it, so others can see it as part of your portfolio.
+
+There are still quite a few angles left to explore with this data. Broadly, we can split them up into 3 categories – extending this project and making it more accurate, finding other columns to predict, and exploring the data. Here are some ideas:
+
+- Generate more features in annotate.py.
+- Switch algorithms in predict.py.
+- Try using more data from Fannie Mae than we used in this post.
+- Add in a way to make predictions on future data. The code we wrote will still work if we add more data, so we can add more past or future data.
+- Try seeing if you can predict if a bank should have issued the loan originally (vs if Fannie Mae should have acquired the loan).
+ - Remove any columns from train that the bank wouldn’t have known at the time of issuing the loan.
+ - Some columns are known when Fannie Mae bought the loan, but not before.
+ - Make predictions.
+- Explore seeing if you can predict columns other than foreclosure_status.
+ - Can you predict how much the property will be worth at sale time?
+- Explore the nuances between performance updates.
+ - Can you predict how many times the borrower will be late on payments?
+ - Can you map out the typical loan lifecycle?
+- Map out data on a state by state or zip code by zip code level.
+ - Do you see any interesting patterns?
+
+If you build anything interesting, please let us know in the comments!
+
+If you liked this, you might like to read the other posts in our ‘Build a Data Science Portfolio’ series:
+
+- [Storytelling with data][45].
+- [How to set up a data science blog][46].
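One of the ideas above is switching algorithms in predict.py; before doing that, it helps to understand what the class_weight="balanced" setting was contributing. In scikit-learn, the balanced heuristic weights each class by n_samples / (n_classes * class_count). A small pure-Python sketch with made-up class counts (not the real foreclosure rate):

```python
from collections import Counter

def balanced_class_weights(labels):
    # scikit-learn's "balanced" heuristic:
    #   weight(class) = n_samples / (n_classes * count(class))
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {cls: n_samples / (n_classes * cnt) for cls, cnt in counts.items()}

# Hypothetical class balance: foreclosures are rare.
labels = [False] * 95 + [True] * 5
print(balanced_class_weights(labels))
```

With 95 non-foreclosures and 5 foreclosures, the rare True class gets weight 10.0 versus roughly 0.53 for False, so a missed foreclosure is penalized about 19 times as heavily. Any replacement algorithm will need an equivalent mechanism, or it will fall back to predicting False for every row.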
From 54212848c6ee922a9919e6e84e77bb9e5effbcb6 Mon Sep 17 00:00:00 2001 From: ezio Date: Mon, 1 Aug 2016 21:07:52 +0800 Subject: [PATCH 312/471] fix typo --- sources/team_test/README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/team_test/README.md b/sources/team_test/README.md index d2bdd305f8..0798a592ef 100644 --- a/sources/team_test/README.md +++ b/sources/team_test/README.md @@ -1,4 +1,7 @@ 组队翻译 : 《Building a data science portfolio: Machine learning project》 + 本次组织者 : @选题-oska874 + 参加译者 : @译者-vim-kakali @译者-Noobfish @译者-zky001 @译者-kokialoves @译者-ideas4u @译者-cposture + 分配方式 : 原文按大致长度分成 6 部分,参与者自由选择,先到先选,如有疑问联系 @选题-oska874 From 1b666b6fe8d3537162b6e353e75136e23c9ddd57 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Mon, 1 Aug 2016 21:33:36 +0800 Subject: [PATCH 313/471] vim-kakali translating --- ...ing a data science portfolio - Machine learning project.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md index 66db19bc54..a9af49b188 100644 --- a/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md @@ -1,3 +1,7 @@ +vim-kakali translating + + + ### Computing values from the performance data The next step we’ll take is to calculate some values from processed/Performance.txt. All we want to do is to predict whether or not a property is foreclosed on. To figure this out, we just need to check if the performance data associated with a loan ever has a foreclosure_date. If foreclosure_date is None, then the property was never foreclosed on. In order to avoid including loans with little performance history in our sample, we’ll also want to count up how many rows exist in the performance file for each loan. 
This will let us filter loans without much performance history from our training data. From 1325b98f208b7a4355061b17bb319daf61efdf97 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Mon, 1 Aug 2016 21:56:50 +0800 Subject: [PATCH 314/471] vim-kakali translating --- ... read and write Audio files with Octave 4.0.0 on Ubuntu.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md b/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md index 80300deb6b..f35e0724a6 100644 --- a/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md +++ b/sources/tech/20160616 Part 1 - Scientific Audio Processing How to read and write Audio files with Octave 4.0.0 on Ubuntu.md @@ -1,3 +1,7 @@ +vim-kakali translating + + + Scientific Audio Processing, Part I - How to read and write Audio files with Octave 4.0.0 on Ubuntu ================ From cb90401f21b819bb641106eedfcac48a7b408195 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 1 Aug 2016 23:51:51 +0800 Subject: [PATCH 315/471] PUB:20160711 Getting started with Git MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ChrisLeeGit 翻译的不错~ --- .../20160711 Getting started with Git.md | 41 ++++++++++++------- 1 file changed, 26 insertions(+), 15 deletions(-) rename {translated/tech => published}/20160711 Getting started with Git.md (77%) diff --git a/translated/tech/20160711 Getting started with Git.md b/published/20160711 Getting started with Git.md similarity index 77% rename from translated/tech/20160711 Getting started with Git.md rename to published/20160711 Getting started with Git.md index afa6b7382b..a490e59e2d 100644 --- a/translated/tech/20160711 Getting started with Git.md +++ b/published/20160711 Getting started with 
Git.md @@ -2,20 +2,23 @@ ========================= ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P) -> 图片来源:opensource.com -在这个系列的介绍中,我们学习到了谁应该使用 Git,以及 Git 是用来做什么的。今天,我们将学习如何克隆公共的 Git 仓库,以及如何提取出独立的文件而不用克隆整个仓库。 +*图片来源:opensource.com* + +在这个系列的[介绍篇][4]中,我们学习到了谁应该使用 Git,以及 Git 是用来做什么的。今天,我们将学习如何克隆公共 Git 仓库,以及如何提取出独立的文件而不用克隆整个仓库。 由于 Git 如此流行,因而如果你能够至少熟悉一些基础的 Git 知识也能为你的生活带来很多便捷。如果你可以掌握 Git 基础(你可以的,我发誓!),那么你将能够下载任何你需要的东西,甚至还可能做一些贡献作为回馈。毕竟,那就是开源的精髓所在:你拥有获取你使用的软件代码的权利,拥有和他人分享的自由,以及只要你愿意就可以修改它的权利。只要你熟悉了 Git,它就可以让这一切都变得很容易。 那么,让我们一起来熟悉 Git 吧。 ### 读和写 + 一般来说,有两种方法可以和 Git 仓库交互:你可以从仓库中读取,或者你也能够向仓库中写入。它就像一个文件:有时候你打开一个文档只是为了阅读它,而其它时候你打开文档是因为你需要做些改动。 本文仅讲解如何从 Git 仓库读取。我们将会在后面的一篇文章中讲解如何向 Git 仓库写回的主题。 ### Git 还是 GitHub? + 一句话澄清:Git 不同于 GitHub(或 GitLab,或 Bitbucket)。Git 是一个命令行程序,所以它就像下面这样: ``` @@ -31,29 +34,32 @@ usage: Git [--version] [--help] [-C ] 我的文章系列将首先教你纯粹的 Git 知识,因为一旦你理解了 Git 在做什么,那么你就无需关心正在使用的前端工具是什么了。然而,我的文章系列也将涵盖通过流行的 Git 服务完成每项任务的常用方法,因为那些将可能是你首先会遇到的。 ### 安装 Git + 在 Linux 系统上,你可以从所使用的发行版软件仓库中获取并安装 Git。BSD 用户应当在 Ports 树的 devel 部分查找 Git。 -对于闭源的操作系统,请前往 [项目网站][1] 并根据说明安装。一旦安装后,在 Linux、BSD 和 Mac OS X 上的命令应当没有任何差别。Windows 用户需要调整 Git 命令,从而和 Windows 文件系统相匹配,或者安装 Cygwin 以原生的方式运行 Git,而不受 Windows 文件系统转换问题的羁绊。 +对于闭源的操作系统,请前往其[项目官网][1],并根据说明安装。一旦安装后,在 Linux、BSD 和 Mac OS X 上的命令应当没有任何差别。Windows 用户需要调整 Git 命令,从而和 Windows 文件系统相匹配,或者安装 Cygwin 以原生的方式运行 Git,而不受 Windows 文件系统转换问题的羁绊。 + +### Git 下午茶 -### 下午茶和 Git 并非每个人都需要立刻将 Git 加入到我们的日常生活中。有些时候,你和 Git 最多的交互就是访问一个代码库,下载一两个文件,然后就不用它了。以这样的方式看待 Git,它更像是下午茶而非一次正式的宴会。你进行一些礼节性的交谈,获得了需要的信息,然后你就会离开,至少接下来的三个月你不再想这样说话。 当然,那是可以的。 一般来说,有两种方法访问 Git:使用命令行,或者使用一种神奇的因特网技术通过 web 浏览器快速轻松地访问。 -假设你想要在终端中安装并使用一个回收站,因为你已经被 rm 命令毁掉太多次了。你已经听说过 Trashy 了,它称自己为「理智的 rm 命令媒介」,并且你想在安装它之前阅读它的文档。幸运的是,[Trashy 公开地托管在 GitLab.com][2]。 +假设你想要给终端安装一个回收站,因为你已经被 rm 命令毁掉太多次了。你可能听说过 Trashy ,它称自己为「理智的 rm 命令中间人」,也许你想在安装它之前阅读它的文档。幸运的是,[Trashy 公开地托管在 GitLab.com][2]。 ### Landgrab + 我们工作的第一步是对这个 Git 仓库使用 landgrab 
排序方法:我们会克隆这个完整的仓库,然后会根据内容排序。由于该仓库是托管在公共的 Git 服务平台上,所以有两种方式来完成工作:使用命令行,或者使用 web 界面。 -要想使用 Git 获取整个仓库,就要使用 git clone 命令和 Git 仓库的 URL 作为参数。如果你不清楚正确的 URL 是什么,仓库应该会告诉你的。GitLab 为你提供了 [Trashy][3] 仓库的拷贝-粘贴 URL。 +要想使用 Git 获取整个仓库,就要使用 git clone 命令和 Git 仓库的 URL 作为参数。如果你不清楚正确的 URL 是什么,仓库应该会告诉你的。GitLab 为你提供了 [Trashy][3] 仓库的用于拷贝粘贴的 URL。 ![](https://opensource.com/sites/default/files/1_gitlab-url.jpg) 你也许注意到了,在某些服务平台上,会同时提供 SSH 和 HTTPS 链接。只有当你拥有仓库的写权限时,你才可以使用 SSH。否则的话,你必须使用 HTTPS URL。 -一旦你获得了正确的 URL,克隆仓库是非常容易的。就是 git clone 这个 URL 即可,可选项是可以指定要克隆到的目录。默认情况下会将 git 目录克隆到你当前所在的位置;例如,'trashy.git' 表示将仓库克隆到你当前位置的 'trashy' 目录。我使用 .clone 扩展名标记那些只读的仓库,使用 .git 扩展名标记那些我可以读写的仓库,但那无论如何也不是官方要求的。 +一旦你获得了正确的 URL,克隆仓库是非常容易的。就是 git clone 该 URL 即可,以及一个可选的指定要克隆到的目录。默认情况下会将 git 目录克隆到你当前所在的目录;例如,'trashy.git' 将会克隆到你当前位置的 'trashy' 目录。我使用 .clone 扩展名标记那些只读的仓库,而使用 .git 扩展名标记那些我可以读写的仓库,不过这并不是官方要求的。 ``` $ git clone https://gitlab.com/trashy/trashy.git trashy.clone @@ -68,30 +74,34 @@ Checking connectivity... done. 一旦成功地克隆了仓库,你就可以像对待你电脑上任何其它目录那样浏览仓库中的文件。 -另外一种获得仓库拷贝的方式是使用 web 界面。GitLab 和 GitHub 都会提供一个 .zip 格式的仓库快照文件。GitHub 有一个大的绿色下载按钮,但是在 GitLab 中,可以浏览器的右侧找到并不显眼的下载按钮。 +另外一种获得仓库拷贝的方式是使用 web 界面。GitLab 和 GitHub 都会提供一个 .zip 格式的仓库快照文件。GitHub 有一个大大的绿色下载按钮,但是在 GitLab 中,可以在浏览器的右侧找到并不显眼的下载按钮。 ![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg) ### 仔细挑选 -另外一种从 Git 仓库中获取文件的方法是找到你想要的文件,然后把它从仓库中拽出来。只有 web 界面才提供这种方法,本质上来说,你看到的是别人仓库的克隆;你可以把它想象成一个 HTTP 共享目录。 + +另外一种从 Git 仓库中获取文件的方法是找到你想要的文件,然后把它从仓库中拽出来。只有 web 界面才提供这种方法,本质上来说,你看到的是别人的仓库克隆;你可以把它想象成一个 HTTP 共享目录。 使用这种方法的问题是,你也许会发现某些文件并不存在于原始仓库中,因为完整形式的文件可能只有在执行 make 命令后才能构建,那只有你下载了完整的仓库,阅读了 README 或者 INSTALL 文件,然后运行相关命令之后才会产生。不过,假如你确信文件存在,而你只想进入仓库,获取那个文件,然后离开的话,你就可以那样做。 -在 GitLab 和 GitHub 中,单击文件链接,并在 Raw 模式下查看,然后使用你的 web 浏览器的保存功能,例如:在 Firefox 中,文件 > 保存页面为。在一个 GitWeb 仓库中(一些更喜欢自己托管 git 的人使用的私有 git 仓库 web 查看器),Raw 查看链接在文件列表视图中。 +在 GitLab 和 GitHub 中,单击文件链接,并在 Raw 模式下查看,然后使用你的 web 浏览器的保存功能,例如:在 Firefox 中,“文件” \> “保存页面为”。在一个 GitWeb 仓库中(这是一个某些更喜欢自己托管 git 的人使用的私有 git 仓库 web 
查看器),Raw 查看链接在文件列表视图中。 ![](https://opensource.com/sites/default/files/1_webgit-file.jpg) ### 最佳实践 + 通常认为,和 Git 交互的正确方式是克隆完整的 Git 仓库。这样认为是有几个原因的。首先,可以使用 git pull 命令轻松地使克隆仓库保持更新,这样你就不必在每次文件改变时就重回 web 站点获得一份全新的拷贝。第二,你碰巧需要做些改进,只要保持仓库整洁,那么你可以非常轻松地向原来的作者提交所做的变更。 -现在,可能是时候练习查找感兴趣的 Git 仓库,然后将它们克隆到你的硬盘中了。只要你了解使用终端的基础知识,那就不会太难做到。还不知道终端使用基础吗?那再给多我 5 分钟时间吧。 +现在,可能是时候练习查找感兴趣的 Git 仓库,然后将它们克隆到你的硬盘中了。只要你了解使用终端的基础知识,那就不会太难做到。还不知道基本的终端使用方式吗?那再给多我 5 分钟时间吧。 ### 终端使用基础 + 首先要知道的是,所有的文件都有一个路径。这是有道理的;如果我让你在常规的非终端环境下为我打开一个文件,你就要导航到文件在你硬盘的位置,并且直到你找到那个文件,你要浏览一大堆窗口。例如,你也许要点击你的家目录 > 图片 > InktoberSketches > monkey.kra。 -在那样的场景下,我们可以说文件 monkeysketch.kra 的路径是:$HOME/图片/InktoberSketches/monkey.kra。 +在那样的场景下,文件 monkeysketch.kra 的路径是:$HOME/图片/InktoberSketches/monkey.kra。 在终端中,除非你正在处理一些特殊的系统管理员任务,你的文件路径通常是以 $HOME 开头的(或者,如果你很懒,就使用 ~ 字符),后面紧跟着一些列的文件夹直到文件名自身。 + 这就和你在 GUI 中点击各种图标直到找到相关的文件或文件夹类似。 如果你想把 Git 仓库克隆到你的文档目录,那么你可以打开一个终端然后运行下面的命令: @@ -100,6 +110,7 @@ Checking connectivity... done. $ git clone https://gitlab.com/foo/bar.git $HOME/文档/bar.clone ``` + 一旦克隆完成,你可以打开一个文件管理器窗口,导航到你的文档文件夹,然后你就会发现 bar.clone 目录正在等待着你访问。 如果你想要更高级点,你或许会在以后再次访问那个仓库,可以尝试使用 git pull 命令来查看项目有没有更新: @@ -111,7 +122,7 @@ bar.clone $ git pull ``` -到目前为止,你需要初步了解的所有终端命令就是那些了,那就去探索吧。你实践得越多,Git 掌握得就越好(孰能生巧),那就是游戏的名称,至少给了或取了一个元音。 +到目前为止,你需要初步了解的所有终端命令就是那些了,那就去探索吧。你实践得越多,Git 掌握得就越好(熟能生巧),这是重点,也是事情的本质。 -------------------------------------------------------------------------------- @@ -119,7 +130,7 @@ via: https://opensource.com/life/16/7/stumbling-git 作者:[Seth Kenlon][a] 译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -127,4 +138,4 @@ via: https://opensource.com/life/16/7/stumbling-git [1]: https://git-scm.com/download [2]: https://gitlab.com/trashy/trashy [3]: https://gitlab.com/trashy/trashy.git - +[4]: https://linux.cn/article-7639-1.html From 
6605d61db2546c2523362a1d801fad568cf28145 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 2 Aug 2016 00:01:02 +0800 Subject: [PATCH 316/471] =?UTF-8?q?=E7=BB=84=E9=98=9F=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?Building=20a=20data=20science=20portfolio=20-=20Machine=20learn?= =?UTF-8?q?ing=20project?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ence portfolio - Machine learning project.md | 17 ----------------- 1 file changed, 17 deletions(-) diff --git a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md index e99087b496..86ecbe127d 100644 --- a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md @@ -1,20 +1,3 @@ -### Setting up the classifier for machine learning - -We’ll use cross validation to make predictions. With cross validation, we’ll divide our data into 3 groups. Then we’ll do the following: - -- Train a model on groups 1 and 2, and use the model to make predictions for group 3. -- Train a model on groups 1 and 3, and use the model to make predictions for group 2. -- Train a model on groups 2 and 3, and use the model to make predictions for group 1. - -Splitting it up into groups this way means that we never train a model using the same data we’re making predictions for. This avoids overfitting. If we overfit, we’ll get a falsely low false negative rate, which makes it hard to improve our algorithm or use it in the real world. - -[Scikit-learn][35] has a function called [cross_val_predict][36] which will make it easy to perform cross validation. - -We’ll also need to pick an algorithm to use to make predictions. We need a classifier that can do [binary classification][37]. The target variable, foreclosure_status only has two values, True and False. 
- -We’ll use [logistic regression][38], because it works well for binary classification, runs extremely quickly, and uses little memory. This is due to how the algorithm works – instead of constructing dozens of trees, like a random forest, or doing expensive transformations, like a support vector machine, logistic regression has far fewer steps involving fewer matrix operations. - -We can use the [logistic regression classifier][39] algorithm that’s implemented in scikit-learn. The only thing we need to pay attention to is the weights of each class. If we weight the classes equally, the algorithm will predict False for every row, because it is trying to minimize errors. However, we care much more about foreclosures than we do about loans that aren’t foreclosed on. Thus, we’ll pass balanced to the class_weight keyword argument of the [LogisticRegression][40] class, to get the algorithm to weight the foreclosures more to account for the difference in the counts of each class. This will ensure that the algorithm doesn’t predict False for every row, and instead is penalized equally for making errors in predicting either class. 
### Making predictions From 263406bc4d937c1988c6f2a02c037141731c54b2 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 2 Aug 2016 07:04:24 +0800 Subject: [PATCH 317/471] Update 20160715 bc - Command line calculator.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备进行翻译。 --- sources/tech/20160715 bc - Command line calculator.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160715 bc - Command line calculator.md b/sources/tech/20160715 bc - Command line calculator.md index adf854e468..a85ebc4985 100644 --- a/sources/tech/20160715 bc - Command line calculator.md +++ b/sources/tech/20160715 bc - Command line calculator.md @@ -1,3 +1,5 @@ +FSSlc translating + bc: Command line calculator ============================ From 19ed340a8d11ba269efc7b18c4df21e699d80922 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Tue, 2 Aug 2016 08:46:55 +0800 Subject: [PATCH 318/471] Update part 5 - Building a data science portfolio - Machine learning project.md --- ...ilding a data science portfolio - Machine learning project.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md index 64be5f9492..722aa8f286 100644 --- a/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md @@ -1,3 +1,4 @@ +[translating by kokialoves] ### Annotating the data We’ve already added a few functions to annotate.py, but now we can get into the meat of the file. We’ll need to convert the acquisition data into a training dataset that can be used in a machine learning algorithm. 
This involves a few things: From 6846cec1e9cef45d8a31951fe3fc47ba2bb5cbce Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Tue, 2 Aug 2016 09:45:45 +0800 Subject: [PATCH 319/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=EF=BC=9ALearn=20How=20to=20Use=20Awk=20Special=20Patterns=20be?= =?UTF-8?q?gin=20and=20end?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Use Awk Special Patterns begin and end.md | 168 ------------------ ... Use Awk Special Patterns begin and end.md | 166 +++++++++++++++++ 2 files changed, 166 insertions(+), 168 deletions(-) delete mode 100644 sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md create mode 100644 translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md diff --git a/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md b/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md deleted file mode 100644 index b3f99e6d4f..0000000000 --- a/sources/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md +++ /dev/null @@ -1,168 +0,0 @@ -Being translated by ChrisLeeGit - -Learn How to Use Awk Special Patterns ‘BEGIN and END’ – Part 9 -=============================================================== - -In Part 8 of this Awk series, we introduced some powerful Awk command features, that is variables, numeric expressions and assignment operators. - -As we advance, in this segment, we shall cover more Awk features, and that is the special patterns: `BEGIN` and `END`. - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Patterns-BEGIN-and-END.png) ->Learn Awk Patterns BEGIN and END - -These special features will prove helpful as we try to expand on and explore more methods of building complex Awk operations. 
- -To get started, let us drive our thoughts back to the introduction of the Awk series, remember when we started this series, I pointed out that the general syntax of a running an Awk command is: - -``` -# awk 'script' filenames -``` - -And in the syntax above, the Awk script has the form: - -``` -/pattern/ { actions } -``` - -When you consider the pattern in the script, it is normally a regular expression, additionally, you can also think of pattern as special patterns `BEGIN` and `END`. Therefore, we can also write an Awk command in the form below: - -``` -awk ' -BEGIN { actions } -/pattern/ { actions } -/pattern/ { actions } -………. -END { actions } -' filenames -``` - -In the event that you use the special patterns: `BEGIN` and `END` in an Awk script, this is what each of them means: - -- `BEGIN` pattern: means that Awk will execute the action(s) specified in `BEGIN` once before any input lines are read. -- `END` pattern: means that Awk will execute the action(s) specified in `END` before it actually exits. - -And the flow of execution of the an Awk command script which contains these special patterns is as follows: - -- When the `BEGIN` pattern is used in a script, all the actions for `BEGIN` are executed once before any input line is read. -- Then an input line is read and parsed into the different fields. -- Next, each of the non-special patterns specified is compared with the input line for a match, when a match is found, the action(s) for that pattern are then executed. This stage will be repeated for all the patterns you have specified. -- Next, stage 2 and 3 are repeated for all input lines. -- When all input lines have been read and dealt with, in case you specify the `END` pattern, the action(s) will be executed. - -You should always remember this sequence of execution when working with the special patterns to achieve the best results in an Awk operation. 
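The BEGIN / per-line / END execution order described here can be seen with a single toy pipeline (independent of the shell script developed below, and using made-up input):

```shell
# BEGIN runs once before any input, the pattern-action runs per matching
# line, and END runs once after all input has been read.
printf 'tecmint.com\nlinuxsay.com\ntecmint.com\n' |
awk 'BEGIN { print "counting..." } /^tecmint.com/ { counter += 1 } END { print counter }'
# prints:
# counting...
# 2
```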
- -To understand it all, let us illustrate using the example from part 8, about the list of domains owned by Tecmint, as stored in a file named domains.txt. - -``` -news.tecmint.com -tecmint.com -linuxsay.com -windows.tecmint.com -tecmint.com -news.tecmint.com -tecmint.com -linuxsay.com -tecmint.com -news.tecmint.com -tecmint.com -linuxsay.com -windows.tecmint.com -tecmint.com -``` - -``` -$ cat ~/domains.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png) ->View Contents of File - -In this example, we want to count the number of times the domain `tecmint.com` is listed in the file domains.txt. So we wrote a small shell script to help us do that using the idea of variables, numeric expressions and assignment operators which has the following content: - -``` -#!/bin/bash -for file in $@; do -if [ -f $file ] ; then -#print out filename -echo "File is: $file" -#print a number incrementally for every line containing tecmint.com -awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file -else -#print error info incase input is not a file -echo "$file is not a file, please specify a file." 
>&2 && exit 1 -fi -done -#terminate script with exit code 0 in case of successful execution -exit 0 -``` - -Let us now employ the two special patterns: `BEGIN` and `END` in the Awk command in the script above as follows: - -We shall alter the script: - -``` -awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file -``` - -To: - -``` -awk ' BEGIN { print "The number of times tecmint.com appears in the file is:" ; } -/^tecmint.com/ { counter+=1 ; } -END { printf "%s\n", counter ; } -' $file -``` - -After making the changes to the Awk command, the complete shell script now looks like this: - -``` -#!/bin/bash -for file in $@; do -if [ -f $file ] ; then -#print out filename -echo "File is: $file" -#print the total number of times tecmint.com appears in the file -awk ' BEGIN { print "The number of times tecmint.com appears in the file is:" ; } -/^tecmint.com/ { counter+=1 ; } -END { printf "%s\n", counter ; } -' $file -else -#print error info incase input is not a file -echo "$file is not a file, please specify a file." >&2 && exit 1 -fi -done -#terminate script with exit code 0 in case of successful execution -exit 0 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-BEGIN-and-END-Patterns.png) ->Awk BEGIN and END Patterns - -When we run the script above, it will first of all print the location of the file domains.txt, then the Awk command script is executed, where the `BEGIN` special pattern helps us print out the message “`The number of times tecmint.com appears in the file is:`” before any input lines are read from the file. - -Then our pattern, `/^tecmint.com/` is compared against every input line and the action, `{ counter+=1 ; }` is executed for each input line, which counts the number of times `tecmint.com` appears in the file. - -Finally, the `END` pattern will print the total the number of times the domain `tecmint.com` appears in the file. 
- -``` -$ ./script.sh ~/domains.txt -``` -![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-to-Count-Number-of-Times-String-Appears.png) ->Script to Count Number of Times String Appears - -To conclude, we walked through more Awk features exploring on the concepts of special pattern: `BEGIN` and `END`. - -As I pointed out before, these Awk features will help us build more complex text filtering operations, there is more to cover under Awk features and in part 10, we shall approach the idea of Awk built-in variables, so stay connected. - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/learn-use-awk-special-patterns-begin-and-end/ - -作者:[Aaron Kili][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对ID](https://github.com/校对ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md b/translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md new file mode 100644 index 0000000000..378811776a --- /dev/null +++ b/translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md @@ -0,0 +1,166 @@ +awk 系列:如何使用 awk 的特殊模式 BEGIN 和 END +=============================================================== +在 awk 系列的第八节,我们介绍了一些强大的 awk 命令功能,它们是变量、数字表达式和赋值运算符。 + +本节我们将学习更多的 awk 功能,即 awk 的特殊模式:`BEGIN` 和 `END`。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Patterns-BEGIN-and-END.png) +> 学习 awk 的模式 BEGIN 和 END + +随着我们逐渐展开,并探索出更多构建复杂 awk 操作的方法,将会证明 awk 的这些特殊功能的是多么强大。 + +开始前,先让我们回顾一下 awk 系列的介绍,记得当我们开始这个系列时,我就指出 awk 指令的通用语法是这样的: + +``` +# awk 'script' filenames +``` + +在上述语法中,awk 脚本拥有这样的形式: + +``` +/pattern/ { actions } +``` + +当你看脚本中的模式(`/pattern`)时,你会发现它通常是一个正则表达式,此外,你也可以将模式(`/pattern`)当成特殊模式 `BEGIN` 和 `END`。 +因此,我们也能按照下面的形式编写一条 awk 
命令: + +``` +awk ' +BEGIN { actions } +/pattern/ { actions } +/pattern/ { actions } +………. +END { actions } +' filenames +``` + +假如你在 awk 脚本中使用了特殊模式:`BEGIN` 和 `END`,以下则是它们对应的含义: + +- `BEGIN` 模式:是指 awk 将在读取任何输入行之前立即执行 `BEGIN` 中指定的动作。 +- `END` 模式:是指 awk 将在它正式退出前执行 `END` 中指定的动作。 + +含有这些特殊模式的 awk 命令脚本的执行流程如下: + +- 当在脚本中使用了 `BEGIN` 模式,则 `BEGIN` 中所有的动作都会在读取任何输入行之前执行。 +- 然后,读入一个输入行并解析成不同的段。 +- 接下来,每一条指定的非特殊模式都会和输入行进行比较匹配,当匹配成功后,就会执行模式对应的动作。对所有你指定的模式重复此执行该步骤。 +- 再接下来,对于所有输入行重复执行步骤 2 和 步骤 3。 +- 当读取并处理完所有输入行后,假如你指定了 `END` 模式,那么将会执行相应的动作。 + +当你使用特殊模式时,想要在 awk 操作中获得最好的结果,你应当记住上面的执行顺序。 + +为了便于理解,让我们使用第八节的例子进行演示,那个例子是关于 Tecmint 拥有的域名列表,并保存在一个叫做 domains.txt 的文件中。 + +``` +news.tecmint.com +tecmint.com +linuxsay.com +windows.tecmint.com +tecmint.com +news.tecmint.com +tecmint.com +linuxsay.com +tecmint.com +news.tecmint.com +tecmint.com +linuxsay.com +windows.tecmint.com +tecmint.com +``` + +``` +$ cat ~/domains.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png) +> 查看文件内容 + +在这个例子中,我们希望统计出 domains.txt 文件中域名 `tecmint.com` 出现的次数。所以,我们编写了一个简单的 shell 脚本帮助我们完成任务,它使用了变量、数学表达式和赋值运算符的思想,脚本内容如下: + +``` +#!/bin/bash +for file in $@; do +if [ -f $file ] ; then +# 输出文件名 +echo "File is: $file" +# 输出一个递增的数字记录包含 tecmint.com 的行数 +awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file +else +# 若输入不是文件,则输出错误信息 +echo "$file 不是一个文件,请指定一个文件。" >&2 && exit 1 +fi +done +# 成功执行后使用退出代码 0 终止脚本 +exit 0 +``` + +现在让我们像下面这样在上述脚本的 awk 命令中应用这两个特殊模式:`BEGIN` 和 `END`: + +我们应当把脚本: + +``` +awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file +``` + +改成: + +``` +awk ' BEGIN { print "文件中出现 tecmint.com 的次数是:" ; } +/^tecmint.com/ { counter+=1 ; } +END { printf "%s\n", counter ; } +' $file +``` + +在修改了 awk 命令之后,现在完整的 shell 脚本就像下面这样: + +``` +#!/bin/bash +for file in $@; do +if [ -f $file ] ; then +# 输出文件名 +echo "File is: $file" +# 输出文件中 tecmint.com 出现的总次数 +awk ' BEGIN { print "文件中出现 tecmint.com 的次数是:" ; } +/^tecmint.com/ { counter+=1 ; } +END { printf 
"%s\n", counter ; } +' $file +else +# 若输入不是文件,则输出错误信息 +echo "$file 不是一个文件,请指定一个文件。" >&2 && exit 1 +fi +done +# 成功执行后使用退出代码 0 终止脚本 +exit 0 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-BEGIN-and-END-Patterns.png) +> awk 模式 BEGIN 和 END + +当我们运行上面的脚本时,它会首先输出 domains.txt 文件的位置,然后执行 awk 命令脚本,该命令脚本中的特殊模式 `BEGIN` 将会在从文件读取任何行之前帮助我们输出这样的消息“`文件中出现 tecmint.com 的次数是:`”。 + +接下来,我们的模式 `/^tecmint.com/` 会在每个输入行中进行比较,对应的动作 `{ counter+=1 ; }` 会在每个匹配成功的行上执行,它会统计出 `tecmint.com` 在文件中出现的次数。 + +最终,`END` 模式将会输出域名 `tecmint.com` 在文件中出现的总次数。 + +``` +$ ./script.sh ~/domains.txt +``` +![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-to-Count-Number-of-Times-String-Appears.png) +> 用于统计字符串出现次数的脚本 + +最后总结一下,我们在本节中演示了更多的 awk 功能,并学习了特殊模式 `BEGIN` 和 `END` 的概念。 + +正如我之前所言,这些 awk 功能将会帮助我们构建出更复杂的文本过滤操作。第十节将会给出更多的 awk 功能,我们将会学习 awk 内置变量的思想,所以,请继续保持关注。 + + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/learn-use-awk-special-patterns-begin-and-end/ + +作者:[Aaron Kili][a] +译者:[ChrisLeeGit](https://github.com/chrisleegit) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 99c74fdce7d8c3b23b2659bdcbdd616f77f94a6d Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Tue, 2 Aug 2016 11:39:25 +0800 Subject: [PATCH 320/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?Part=2010=20-=20Learn=20How=20to=20Use=20Awk=20Built-in=20Varia?= =?UTF-8?q?bles?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160725 Part 10 - Learn How to Use Awk Built-in Variables.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md b/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md index 4cbecac8c9..9b66a0cbfe 100644 --- 
a/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md +++ b/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md @@ -1,3 +1,4 @@ +Being translated by ChrisLeeGit Learn How to Use Awk Built-in Variables – Part 10 ================================================= @@ -111,7 +112,7 @@ After this, we shall progress to cover how we can use shell variables in Awk com via: http://www.tecmint.com/awk-built-in-variables-examples/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对ID](https://github.com/校对ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 020d5f489bd5e3ad3c0c20b1aa9c500641dc3c17 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Tue, 2 Aug 2016 12:00:37 +0800 Subject: [PATCH 321/471] part 5 - Building a data science portfolio - Machine learning project.md (#4264) * Delete part 5 - Building a data science portfolio - Machine learning project.md * part 5 - Building a data science portfolio - Machine learning project.md --- ...ce portfolio - Machine learning project.md | 146 ++++++++---------- 1 file changed, 61 insertions(+), 85 deletions(-) diff --git a/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md index 722aa8f286..99e1046e70 100644 --- a/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md @@ -1,44 +1,38 @@ -[translating by kokialoves] -### Annotating the data +注解数据 +我们已经在annotate.py中添加了一些功能, 现在我们来看一看数据文件. 我们需要将采集到的数据转换到training dataset来进行机器学习的训练. 
这涉及到以下几件事情:
+
+- 转换所有列为数字。
+- 填充缺失值。
+- 为每一行分配 performance_count 和 foreclosure_status。
+- 移除执行历史记录很少的行(即 performance_count 较低的行)。
+
+我们有几个列是字符串类型的,看起来对机器学习算法来说并不是很有用。然而,它们实际上是分类变量,其中有一些不同的类别代码,例如 R、S 等等。我们可以给每个类别标签分配一个数字,从而把这些列转换为数值:
-![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/002.png)
+
+通过这种方法转换的列,我们就可以在机器学习算法中使用了。
+
+还有一些包含日期的列(first_payment_date 和 origination_date)。我们可以将每个日期拆分成两列:
+
+在下面的代码中,我们将转换采集到的数据。我们将定义一个函数,它会:
-![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/003.png)
-In the below code, we’ll transform the Acquisition data. We’ll define a function that:
-
-- Creates a foreclosure_status column in acquisition by getting the values from the counts dictionary.
-- Creates a performance_count column in acquisition by getting the values from the counts dictionary.
-- Converts each of the following columns from a string column to an integer column: - - channel - - seller - - first_time_homebuyer - - loan_purpose - - property_type - - occupancy_status - - property_state - - product_type -- Converts first_payment_date and origination_date to 2 columns each: - - Splits the column on the forward slash. - - Assigns the first part of the split list to a month column. - - Assigns the second part of the split list to a year column. - - Deletes the column. - - At the end, we’ll have first_payment_month, first_payment_year, origination_month, and origination_year. -- Fills any missing values in acquisition with -1. - -``` +在采集到的数据中创建foreclosure_status列 . +在采集到的数据中创建performance_count列. +将下面的string列转换为integer列: +channel +seller +first_time_homebuyer +loan_purpose +property_type +occupancy_status +property_state +product_type +转换first_payment_date 和 origination_date 为两列: +通过斜杠分离列. +将第一部分分离成月清单. +将第二部分分离成年清单. +删除这一列. +最后, 我们得到 first_payment_month, first_payment_year, origination_month, and origination_year. +所有缺失值填充为-1. def annotate(acquisition, counts): acquisition["foreclosure_status"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "foreclosure_status", counts)) acquisition["performance_count"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "performance_count", counts)) @@ -63,25 +57,21 @@ def annotate(acquisition, counts): acquisition = acquisition.fillna(-1) acquisition = acquisition[acquisition["performance_count"] > settings.MINIMUM_TRACKING_QUARTERS] return acquisition -``` -### Pulling everything together +聚合到一起 +我们差不多准备就绪了, 我们只需要再在annotate.py添加一点点代码. 在下面代码中, 我们将: -We’re almost ready to pull everything together, we just need to add a bit more code to annotate.py. In the below code, we: - -- Define a function to read in the acquisition data. 
-- Define a function to write the processed data to processed/train.csv -- If this file is called from the command line, like python annotate.py: - - Read in the acquisition data. - - Compute the counts for the performance data, and assign them to counts. - - Annotate the acquisition DataFrame. - - Write the acquisition DataFrame to train.csv. - -``` +定义一个函数来读取采集的数据. +定义一个函数来写入数据到/train.csv +如果我们在命令行运行annotate.py来读取更新过的数据文件,它将做如下事情: +读取采集到的数据. +计算数据性能. +注解数据. +将注解数据写入到train.csv. def read(): acquisition = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "Acquisition.txt"), sep="|") return acquisition - + def write(acquisition): acquisition.to_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"), index=False) @@ -90,13 +80,11 @@ if __name__ == "__main__": counts = count_performance_rows() acquisition = annotate(acquisition, counts) write(acquisition) -``` -Once you’re done updating the file, make sure to run it with python annotate.py, to generate the train.csv file. You can find the complete annotate.py file [here][34]. +修改完成以后为了确保annotate.py能够生成train.csv文件. 你可以在这里找到完整的 annotate.py file [here][34]. -The folder should now look like this: +文件夹结果应该像这样: -``` loan-prediction ├── data │ ├── Acquisition_2012Q1.txt @@ -114,54 +102,42 @@ loan-prediction ├── README.md ├── requirements.txt ├── settings.py -``` -### Finding an error metric +找到标准 +我们已经完成了training dataset的生成, 现在我们需要最后一步, 生成预测. 我们需要找到错误的标准, 以及该如何评估我们的数据. 在这种情况下, 因为有很多的贷款没有收回, 所以根本不可能做到精确的计算. -We’re done with generating our training dataset, and now we’ll just need to do the final step, generating predictions. We’ll need to figure out an error metric, as well as how we want to evaluate our data. In this case, there are many more loans that aren’t foreclosed on than are, so typical accuracy measures don’t make much sense. 
+我们需要读取数据,并且检查 foreclosure_status 列的计数,我们将得到如下信息:

```
import pandas as pd
import settings

train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"))
train["foreclosure_status"].value_counts()
```

```
False 4635982
True 1585
Name: foreclosure_status, dtype: int64
```

+因为被收回的贷款只占极少数,如果仅仅检查标签被正确预测的百分比,那么即使模型把每一行都预测为 False,也仍然能得到非常高的准确率。因此,我们要使用一个考虑到类别不平衡的度量,以确保我们能准确地预测贷款是否会被收回。我们既不想要太多的假阳性(预测贷款会被收回,但实际上没有),也不想要太多的假阴性(预测贷款不会被收回,但实际上被收回了)。在这两者之中,假阴性对 Fannie Mae 来说代价更大,因为他们买入的这些贷款可能无法收回投资。

-We’ll define false negative rate as the number of loans where the model predicts no foreclosure but the loan was actually foreclosed on, divided by the number of total loans that were actually foreclosed on. This is the percentage of actual foreclosures that the model “Missed”. Here’s a diagram:
-
-![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/004.png)
-
-In the diagram above, 1 loan was predicted as not being foreclosed on, but it actually was. If we divide this by the number of loans that were actually foreclosed on, 2, we get the false negative rate, 50%. We’ll use this as our error metric, so we can evaluate our model’s performance.
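The false negative rate arithmetic described above is simple enough to sketch directly. This is only an illustration: the helper name and the four sample labels below are made up, not part of the project's code.

```python
# Illustrative sketch of the error metric above: of the loans that were
# actually foreclosed on, what fraction did the model miss?
def false_negative_rate(actual, predicted):
    missed = sum(1 for a, p in zip(actual, predicted) if a and not p)
    actual_foreclosures = sum(1 for a in actual if a)
    return missed / actual_foreclosures

# Two loans were actually foreclosed on; the model caught one and missed one.
actual = [True, True, False, False]
predicted = [True, False, False, False]
print(false_negative_rate(actual, predicted))  # 0.5
```

This mirrors the diagram's arithmetic: one missed foreclosure divided by two actual foreclosures gives 0.5, i.e. a 50% false negative rate.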
- -### Setting up the classifier for machine learning - -We’ll use cross validation to make predictions. With cross validation, we’ll divide our data into 3 groups. Then we’ll do the following: - -- Train a model on groups 1 and 2, and use the model to make predictions for group 3. -- Train a model on groups 1 and 3, and use the model to make predictions for group 2. -- Train a model on groups 2 and 3, and use the model to make predictions for group 1. - -Splitting it up into groups this way means that we never train a model using the same data we’re making predictions for. This avoids overfitting. If we overfit, we’ll get a falsely low false negative rate, which makes it hard to improve our algorithm or use it in the real world. - -[Scikit-learn][35] has a function called [cross_val_predict][36] which will make it easy to perform cross validation. - -We’ll also need to pick an algorithm to use to make predictions. We need a classifier that can do [binary classification][37]. The target variable, foreclosure_status only has two values, True and False. - -We’ll use [logistic regression][38], because it works well for binary classification, runs extremely quickly, and uses little memory. This is due to how the algorithm works – instead of constructing dozens of trees, like a random forest, or doing expensive transformations, like a support vector machine, logistic regression has far fewer steps involving fewer matrix operations. - -We can use the [logistic regression classifier][39] algorithm that’s implemented in scikit-learn. The only thing we need to pay attention to is the weights of each class. If we weight the classes equally, the algorithm will predict False for every row, because it is trying to minimize errors. However, we care much more about foreclosures than we do about loans that aren’t foreclosed on. 
Thus, we’ll pass balanced to the class_weight keyword argument of the [LogisticRegression][40] class, to get the algorithm to weight the foreclosures more to account for the difference in the counts of each class. This will ensure that the algorithm doesn’t predict False for every row, and instead is penalized equally for making errors in predicting either class.
+所以,我们将把假阴性率(false negative rate)定义为:模型预测不会被收回、但实际上被收回了的贷款数量,除以实际被收回的贷款总数。这就是模型“漏掉”的实际收回贷款的百分比。下面看这个图表:
+通过上面的图表,有 1 笔贷款被预测为不会被收回,但它实际上被收回了。如果我们用它除以实际被收回的贷款总数 2,就得到 50% 的假阴性率。我们将把它作为我们的误差度量,这样就可以评估模型的性能了。
+
+### 设置机器学习分类器
+
+我们将使用交叉验证来做预测。通过交叉验证法,我们将数据分为 3 组,然后按照下面的方法来做:
+
+- 用第 1、2 组训练模型,然后用它为第 3 组做预测。
+- 用第 1、3 组训练模型,然后用它为第 2 组做预测。
+- 用第 2、3 组训练模型,然后用它为第 1 组做预测。
+
+像这样把数据分组,意味着我们永远不会用与预测目标相同的数据来训练模型,从而避免了过拟合(overfitting)。如果发生过拟合,我们将得到虚低的假阴性率,这会使我们难以改进算法或将其应用到现实世界中。
+
+[Scikit-learn][35] 有一个叫做 [cross_val_predict][36] 的函数,它可以让交叉验证变得很容易。
+
+我们还需要选择一个算法来做预测。我们需要一个能做二元分类([binary classification][37])的分类器,因为目标变量 foreclosure_status 只有 True 和 False 两个值。
+
+我们将使用逻辑回归([logistic regression][38]),因为它很适合二元分类,运行极快,并且占用内存很小。这是由它的工作方式决定的:它不像随机森林那样构造几十棵树,也不像支持向量机那样做开销很大的变换,逻辑回归涉及的步骤和矩阵运算都要少得多。
+
+我们可以使用 scikit-learn 中实现的逻辑回归分类器([logistic regression classifier][39])算法。唯一需要注意的是各个类的权重。如果我们给各个类同样的权重,算法将会把每一行都预测为 False,因为它总是试图最小化误差。然而,比起那些没有被收回的贷款,我们更关心会被收回的贷款。因此,我们将把 balanced 传给 [LogisticRegression][40] 类的 class_weight 关键字参数,让算法给被收回的贷款更大的权重,以消除两个类在数量上的差异。这将确保算法不会把每一行都预测为 False,而是在对任何一个类做出错误预测时受到同等的惩罚。
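上面描述的 3 折交叉验证预测流程可以用一小段纯 Python 来示意。注意这只是一个示意性的草图:其中的“多数类模型”是我们虚构的替身,项目中实际使用的是 scikit-learn 的 cross_val_predict 搭配逻辑回归:

```python
# Illustrative 3-fold cross-validated prediction: each row is predicted by a
# model trained only on the other two folds, so no model sees its own rows.
def cross_val_predict_3fold(rows, labels, train, predict):
    n = len(rows)
    folds = [list(range(i, n, 3)) for i in range(3)]  # simple round-robin split
    out = [None] * n
    for held_out in range(3):
        train_idx = [i for f in range(3) if f != held_out for i in folds[f]]
        model = train([rows[i] for i in train_idx], [labels[i] for i in train_idx])
        for i in folds[held_out]:
            out[i] = predict(model, rows[i])
    return out

# A stand-in "classifier" that just predicts the majority training label.
def train_majority(rows, labels):
    return max(set(labels), key=labels.count)

def predict_majority(model, row):
    return model

labels = [False, False, True, False, False, False]
print(cross_val_predict_3fold(list(range(6)), labels, train_majority, predict_majority))
# [False, False, False, False, False, False]
```

注意输出结果:在类别不平衡的数据上,这个不加权重的朴素模型把每一行都预测成了 False,这正是正文中提到的、需要用 class_weight 参数来抵消的失败模式。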
From 725d3a6a81c30c27799bc055a116d7439da3138b Mon Sep 17 00:00:00 2001 From: FSSlc Date: Tue, 2 Aug 2016 13:33:49 +0800 Subject: [PATCH 322/471] [Translated]20160715 bc-Command line calculator.md --- .../20160715 bc - Command line calculator.md | 133 ------------------ .../20160715 bc - Command line calculator.md | 129 +++++++++++++++++ 2 files changed, 129 insertions(+), 133 deletions(-) delete mode 100644 sources/tech/20160715 bc - Command line calculator.md create mode 100644 translated/tech/20160715 bc - Command line calculator.md diff --git a/sources/tech/20160715 bc - Command line calculator.md b/sources/tech/20160715 bc - Command line calculator.md deleted file mode 100644 index a85ebc4985..0000000000 --- a/sources/tech/20160715 bc - Command line calculator.md +++ /dev/null @@ -1,133 +0,0 @@ -FSSlc translating - -bc: Command line calculator -============================ - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/07/bc-calculator-945x400.jpg) - -If you run a graphical desktop environment, you probably point and click your way to a calculator when you need one. The Fedora Workstation, for example, includes the Calculator tool. It features several different operating modes that allow you to do, for example, complex math or financial calculations. But did you know the command line also offers a similar calculator called bc? - -The bc utility gives you everything you expect from a scientific, financial, or even simple calculator. What’s more, it can be scripted from the command line if needed. This allows you to use it in shell scripts, in case you need to do more complex math. - -Because bc is used by some other system software, like CUPS printing services, it’s probably installed on your Fedora system already. 
You can check with this command: - -``` -dnf list installed bc -``` - -If you don’t see it for some reason, you can install the package with this command: - -``` -sudo dnf install bc -``` - -### Doing simple math with bc - -One way to use bc is to enter the calculator’s own shell. There you can run many calculations in a row. When you enter, the first thing that appears is a notice about the program: - -``` -$ bc -bc 1.06.95 -Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. -This is free software with ABSOLUTELY NO WARRANTY. -For details type `warranty'. -``` - -Now you can type in calculations or commands, one per line: - -``` -1+1 -``` - -The calculator helpfully answers: - -``` -2 -``` - -You can perform other commands here. You can use addition (+), subtraction (-), multiplication (*), division (/), parentheses, exponents (^), and so forth. Note that the calculator respects all expected conventions such as order of operations. Try these examples: - -``` -(4+7)*2 -4+7*2 -``` - -To exit, send the “end of input” signal with the key combination Ctrl+D. - -Another way is to use the echo command to send calculations or commands. Here’s the calculator equivalent of “Hello, world,” using the shell’s pipe function (|) to send output from echo into bc: - -``` -echo '1+1' | bc -``` - -You can send more than one calculation using the shell pipe, with a semicolon to separate entries. The results are returned on separate lines. - -``` -echo '1+1; 2+2' | bc -``` - -### Scale - -The bc calculator uses the concept of scale, or the number of digits after a decimal point, in some calculations. The default scale is 0. Division operations always use the scale setting. 
So if you don’t set scale, you may get unexpected answers: - -``` -echo '3/2' | bc -echo 'scale=3; 3/2' | bc -``` - -Multiplication uses a more complex decision for scale: - -``` -echo '3*2' | bc -echo '3*2.0' | bc -``` - -Meanwhile, addition and subtraction are more as expected: - -``` -echo '7-4.15' | bc -``` - -### Other base number systems - -Another useful function is the ability to use number systems other than base-10 (decimal). For instance, you can easily do hexadecimal or binary math. Use the ibase and obase commands to set input and output base systems between base-2 and base-16. Remember that once you use ibase, any number you enter is expected to be in the new declared base. - -To do hexadecimal to decimal conversions or math, you can use a command like this. Note the hexadecimal digits above 9 must be in uppercase (A-F): - -``` -echo 'ibase=16; A42F' | bc -echo 'ibase=16; 5F72+C39B' | bc -``` - -To get results in hexadecimal, set the obase as well: - -``` -echo 'obase=16; ibase=16; 5F72+C39B' | bc -``` - -Here’s a trick, though. If you’re doing these calculations in the shell, how do you switch back to input in base-10? The answer is to use ibase, but you must set it to the equivalent of decimal number 10 in the current input base. For instance, if ibase was set to hexadecimal, enter: - -``` -ibase=A -``` - -Once you do this, all input numbers are now decimal again, so you can enter obase=10 to reset the output base system. - -### Conclusion - -This is only the beginning of what bc can do. It also allows you to define functions, variables, and loops for complex calculations and programs. You can save these programs as text files on your system to run whenever you need. You can find numerous resources on the web that offer examples and additional function libraries. Happy calculating! 
- - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ - -作者:[Paul W. Frields][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/pfrields/ -[1]: http://phodd.net/gnu-bc/ diff --git a/translated/tech/20160715 bc - Command line calculator.md b/translated/tech/20160715 bc - Command line calculator.md new file mode 100644 index 0000000000..d5dc711909 --- /dev/null +++ b/translated/tech/20160715 bc - Command line calculator.md @@ -0,0 +1,129 @@ +bc : 一个命令行计算器 +============================ + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/07/bc-calculator-945x400.jpg) + +假如你运行在一个图形桌面环境中,当你需要一个计算器时,你可能只需要一路进行点击便可以找到一个计算器。例如,Fedora 工作站中就已经包含了一个名为 `Calculator` 的工具。它有着几种不同的操作模式,例如,你可以进行复杂的数学运算或者金融运算。但是,你知道吗,命令行也提供了一个与之相似的名为 `bc` 的工具? + +`bc` 工具可以为你提供你期望一个科学计算器、金融计算器或者是简单的计算器所能提供的所有功能。另外,假如需要的话,它还可以从命令行中被脚本化。这使得当你需要做复杂的数学运算时,你可以在 shell 脚本中使用它。 + +因为 bc 被其他的系统软件所使用,例如 CUPS 打印服务,它可能已经在你的 Fedora 系统中被安装了。你可以使用下面这个命令来进行检查: + +``` +dnf list installed bc +``` + +假如因为某些原因你没有在上面命令的输出中看到它,你可以使用下面的这个命令来安装它: + +``` +sudo dnf install bc +``` + +### 用 bc 做一些简单的数学运算 + +使用 bc 的一种方式是进入它自己的 shell。在那里你可以在一行中做许多次计算。但在你键入 bc 后,首先出现的是有关这个程序的警告: + +``` +$ bc +bc 1.06.95 +Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. +This is free software with ABSOLUTELY NO WARRANTY. +For details type `warranty'. 
+``` + +现在你可以按照每行一个输入运算式或者命令了: + +``` +1+1 +``` + +bc 会回答上面计算式的答案是: + +``` +2 +``` + +在这里你还可以执行其他的命令。你可以使用 加(+)、减(-)、乘(*)、除(/)、圆括号、指数符号(^) 等等。请注意 bc 同样也遵循所有约定俗成的运算规定,例如运算的先后顺序。你可以试试下面的例子: + +``` +(4+7)*2 +4+7*2 +``` + +若要离开 bc 可以通过按键组合 `Ctrl+D` 来发送 “输入结束”信号给 bc 。 + +使用 bc 的另一种方式是使用 `echo` 命令来传递运算式或命令。下面这个示例类似于计算器中的 "Hello, world" 例子,使用 shell 的管道函数(|) 来将 `echo` 的输出传入 `bc` 中: + +``` +echo '1+1' | bc +``` + +使用 shell 的管道,你可以发送不止一个运算操作,你需要使用分号来分隔不同的运算。结果将在不同的行中返回。 + +``` +echo '1+1; 2+2' | bc +``` + +### 精度 + +在某些计算中,bc 会使用精度的概念,即小数点后面的数字位数。默认的精度是 0。除法操作总是使用精度的设定。所以,如果你没有设置精度,有可能会带来意想不到的答案: + +``` +echo '3/2' | bc +echo 'scale=3; 3/2' | bc +``` + +乘法使用一个更复杂的精度选择机制: + +``` +echo '3*2' | bc +echo '3*2.0' | bc +``` + +同时,加法和减法的相关运算则与之相似: + +``` +echo '7-4.15' | bc +``` + +### 其他进制系统 + +bc 的另一个有用的功能是可以使用除 十进制以外的其他计数系统。例如,你可以轻松地做十六进制或二进制的数学运算。可以使用 `ibase` 和 `obase` 命令来分别设定输入和输出的进制系统。需要记住的是一旦你使用了 `ibase`,之后你输入的任何数字都将被认为是在新定义的进制系统中。 + +要做十六进制数到十进制数的转换或运算,你可以使用类似下面的命令。请注意大于 9 的十六进制数必须是大写的(A-F): + +``` +echo 'ibase=16; A42F' | bc +echo 'ibase=16; 5F72+C39B' | bc +``` + +若要使得结果是十六进制数,则需要设定 `obase` : + +``` +echo 'obase=16; ibase=16; 5F72+C39B' | bc +``` + +下面是一个小技巧。假如你在 shell 中做这些运算,怎样才能使得输入重新为十进制数呢?答案是使用 `ibase` 命令,但你必须设定它为在当前进制中与十进制中的 10 等价的值。例如,假如 `ibase` 被设定为十六进制,你需要输入: + +``` +ibase=A +``` + +一旦你执行了上面的命令,所有输入的数字都将是十进制的了,接着你便可以输入 `obase=10` 来重置输出的进制系统。 + +### 结论 + +上面所提到的只是 bc 所能做到的基础。它还允许你为某些复杂的运算和程序定义函数、变量和循环结构。你可以在你的系统中将这些程序保存为文本文件以便你在需要的时候使用。你还可以在网上找到更多的资源,它们提供了更多的例子以及额外的函数库。快乐地计算吧! + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ + +作者:[Paul W. 
Frields][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/pfrields/ +[1]: http://phodd.net/gnu-bc/ From a49a41f4118638a4b0e60ca491691aa577c8adf5 Mon Sep 17 00:00:00 2001 From: cposture Date: Tue, 2 Aug 2016 13:35:33 +0800 Subject: [PATCH 323/471] Translating by cposture --- ...lding a data science portfolio - Machine learning project.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md index 86ecbe127d..3bec1d0a98 100644 --- a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md @@ -1,4 +1,4 @@ - +Translating by cposture 2016-08-02 ### Making predictions Now that we have the preliminaries out of the way, we’re ready to make predictions. We’ll create a new file called predict.py that will use the train.csv file we created in the last step. 
The below code will: From 38e37b6e10377cb56045ae51075f2f53ba1e37b0 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 2 Aug 2016 14:05:14 +0800 Subject: [PATCH 324/471] PUB:20160721 5 tricks for getting started with Vim MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @maywanting 翻译的不错,不过有个别语句我有不同理解,你可以看看,如有不同意见,欢迎讨论改进。 --- ...1 5 tricks for getting started with Vim.md | 60 +++++++++++++++++++ ...1 5 tricks for getting started with Vim.md | 60 ------------------- 2 files changed, 60 insertions(+), 60 deletions(-) create mode 100644 published/20160721 5 tricks for getting started with Vim.md delete mode 100644 translated/tech/20160721 5 tricks for getting started with Vim.md diff --git a/published/20160721 5 tricks for getting started with Vim.md b/published/20160721 5 tricks for getting started with Vim.md new file mode 100644 index 0000000000..684a1d6295 --- /dev/null +++ b/published/20160721 5 tricks for getting started with Vim.md @@ -0,0 +1,60 @@ +Vim 起步的五个技巧 +===================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/BUSINESS_peloton.png?itok=nuMbW9d3) + +多年来,我一直想学 Vim。如今 Vim 是我最喜欢的 Linux 文本编辑器,也是开发者和系统管理者最喜爱的开源工具。我说的学习,指的是真正意义上的学习。想要精通确实很难,所以我只想要达到熟练的水平。我使用了这么多年的 Linux ,我会的也仅仅只是打开一个文件,使用上下左右箭头按键来移动光标,切换到插入模式,更改一些文本,保存,然后退出。 + +但那只是 Vim 的最最基本的操作。我的技能水平只能让我在终端使用 Vim 修改文本,但是它并没有任何一个我想象中强大的文本处理功能。这样我完全无法用 Vim 发挥出胜出 Pico 和 Nano 的能力。 + +所以到底为什么要学习 Vim?因为我花费了相当多的时间用于编辑文本,而且我知道还有很大的效率提升空间。为什么不选择 Emacs,或者是更为现代化的编辑器例如 Atom?因为 Vim 适合我,至少我有一丁点的使用经验。而且,很重要的一点就是,在我需要处理的系统上很少碰见没有装 Vim 或者它的弱化版(Vi)。如果你有强烈的欲望想学习对你来说更给力的 Emacs,我希望这些对于 Emacs 同类编辑器的建议能对你有所帮助。 + +花了几周的时间专注提高我的 Vim 使用技巧之后,我想分享的第一个建议就是必须使用它。虽然这看起来就是明知故问的回答,但事实上它比我所预想的计划要困难一些。我的大多数工作是在网页浏览器上进行的,而且每次我需要在浏览器之外打开并编辑一段文本时,就需要避免下意识地打开 Gedit。Gedit 已经放在了我的快速启动栏中,所以第一步就是移除这个快捷方式,然后替换成 Vim 的。 + +为了更好的学习 Vim,我尝试了很多。如果你也正想学习,以下列举了一些作为推荐。 + +### Vimtutor + +通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 
Vimtutor,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 一般在有 Vim 的地方都能找到它,如果你的系统上没有 Vimtutor,Vimtutor 可以很容易从你的包管理器上安装。 + +### GVim + +我知道并不是每个人都认同这个,但就是它让我从使用终端中的 Vim 转战到使用 GVim 来满足我基本编辑需求。反对者表示 GVim 鼓励使用鼠标,而 Vim 主要是为键盘党设计的。但是我能通过 GVim 的下拉菜单快速找到想找的指令,并且 GVim 可以提醒我正确的指令然后通过敲键盘执行它。努力学习一个新的编辑器然后陷入无法解决的困境,这种感觉并不好受。每隔几分钟读一下 man 出来的文字或者使用搜索引擎来提醒你该用的按键序列也并不是最好的学习新事物的方法。 + +### 键盘表 + +当我转战 GVim,我发现有一个键盘的“速查表”来提醒我最基础的按键很是便利。网上有很多这种可用的表,你可以下载、打印,然后贴在你身边的某一处地方。但是为了我的笔记本键盘,我选择买一沓便签纸。这些便签纸在美国不到 10 美元,当我使用键盘编辑文本,尝试新的命令的时候,可以随时提醒我。 + +### Vimium + +上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium][1] 来用增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。我发现我只用了几次使用快捷键切换上下文,就好像比之前更熟悉这些快捷键了。同样的扩展 Firefox 上也有,例如 [Vimerator][2]。 + +### 其它人 + +毫无疑问,最好的学习方法就是求助于在你之前探索过的人,让他给你建议、反馈和解决方法。 + +如果你住在一个大城市,那么附近可能会有一个 Vim meetup 小组,或者还有 Freenode IRC 上的 #vim 频道。#vim 频道是 Freenode 上最活跃的频道之一,那上面可以针对你个人的问题来提供帮助。听上面的人发发牢骚或者看看别人尝试解决自己没有遇到过的问题,仅仅是这样我都觉得很有趣。 + +------ + +那么,现在怎么样了?到现在为止还不错。为它所花的时间是否值得就在于之后它为你节省了多少时间。但是当我发现一个新的按键序列可以来跳过词,或者一些相似的小技巧,我经常会收获意外的惊喜与快乐。每天我至少可以看见,一点点的回报,正在逐渐配得上当初的付出。 + +学习 Vim 并不仅仅只有这些建议,还有很多。我很喜欢指引别人去 [Vim Advantures][3],它是一种使用 Vim 按键方式进行移动的在线游戏。而在另外一天我在 [Vimgifts.com][4] 发现了一个非常神奇的虚拟学习工具,那可能就是你真正想要的:用一个小小的 gif 动图来描述 Vim 操作。 + +你有花时间学习 Vim 吗?或者是任何需要大量键盘操作的程序?那些经过你努力后掌握的工具,你认为这些努力值得吗?效率的提高有没有达到你的预期?分享你们的故事在下面的评论区吧。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/tips-getting-started-vim + +作者:[Jason Baker][a] +译者:[maywanting](https://github.com/maywanting) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jason-baker +[1]: https://github.com/philc/vimium +[2]: http://www.vimperator.org/ +[3]: http://vim-adventures.com/ +[4]: http://vimgifs.com/ diff --git a/translated/tech/20160721 5 tricks for getting started with Vim.md b/translated/tech/20160721 5 tricks for getting 
started with Vim.md deleted file mode 100644 index 67d64ced2f..0000000000 --- a/translated/tech/20160721 5 tricks for getting started with Vim.md +++ /dev/null @@ -1,60 +0,0 @@ -Vim 学习的 5 个技巧 -===================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/BUSINESS_peloton.png?itok=nuMbW9d3) - -多年来,我一直想学 Vim。如今 Vim 是我最喜欢的 Linux 文本编辑器,也是开发者和系统管理者最喜爱的开源工具。我说的学习,指的是真正意义上的学习。想要精通确实很难,所以我只想要达到熟练的水平。根据我多年使用 Linux 的经验,我会的也仅仅只是打开一个文件,使用上下左右箭头按键来移动光标,切换到 insert 模式,更改一些文本,保存,然后退出。 - -但那只是 Vim 的最基本操作。Vim 可以让我在终端修改文本,但是它并没有任何一个我想象中强大的文本处理功能。这样我无法说明 Vim 完全优于 Pico 和 Nano。 - -所以到底为什么要学习 Vim?因为我需要花费相当多的时间用于编辑文本,而且有很大的效率提升空间。为什么不选择 Emacs,或者是更为现代化的编辑器例如 Atom?因为 Vim 适合我,至少我有一丁点的使用经验。而且,很重要的一点就是,在我需要处理的系统上很少碰见没有装 Vim 或者它的简化版(Vi)。如果你有强烈的欲望想学习 Emacs,我希望这些对于 Emacs 同类编辑器的建议能对你有所帮助。 - -花了几周的时间专注提高我的 Vim 使用技巧之后,我想分享的第一个建议就是必须使用工具。虽然这看起来就是明知故问的回答,但事实上比我所想的在代码层面上还要困难。我的大多数工作是在网页浏览器上进行的,而且我每次都得有针对性的用 Gedit 打开并修改一段浏览器之外的文本。Gedit 需要快捷键来启动,所以第一步就是移出快捷键然后替换成 Vim 的快捷键。 - -为了跟更好的学习 Vim,我尝试了很多。如果你也正想学习,以下列举了一些作为推荐。 - -### Vimtutor - -通常如何开始学习最好就是使用应用本身。我找到一个小的应用叫 Vimtutor,当你在学习编辑一个文本时它能辅导你一些基础知识,它向我展示了很多我这些年都忽视的基础命令。Vimtutor 上到处都是 Vim 影子,如果你的系统上没有 Vimtutor,Vimtutor可以很容易从你的包管理器上下载。 - -### GVim - -我知道并不是每个人都认同这个,但就是它让我从使用在终端的 Vim 转战到使用 GVim 来满足我基本编辑需求。反对者表示 GVim 鼓励使用鼠标,而 Vim 主要是为键盘党设计的。但是我能通过 GVim 的下拉菜单快速找到想找的指令,并且 GVim 可以提醒我正确的指令然后通过敲键盘执行它。努力学习一个新的编辑器然后陷入无法解决的困境,这种感觉并不好受。每隔几分钟读一下 man 出来的文字或者使用搜索引擎来提醒你指令也并不是最好的学习新事务的方法。 - -### Keyboard maps - -当我转战 GVim,我发现有一个键盘的“作弊表”来提醒我最基础的按键很是便利。网上有很多这种可用的表,你可以下载,打印,然后贴在你身边的某一处地方。但是为了我的笔记本键盘,我选择买一沓便签纸。这些便签纸在美国不到10美元,而且当我使用键盘编辑文本尝试新的命令的时候,可以随时提醒我。 - -### Vimium - -上文提到,我工作都在浏览器上进行。其中一条我觉得很有帮助的建议就是,使用 [Vimium][1] 来用增强使用 Vim 的体验。Vimium 是 Chrome 浏览器上的一个开源插件,能用 Vim 的指令快捷操作 Chrome。当我有意识的使用快捷键切换文本的次数越少时,这说明我越来越多的使用这些快捷键。同样的扩展 Firefox 上也有,例如 [Vimerator][2]。 - -### 人 - -毫无疑问,最好的学习方法就是求助于在你之前探索过的人,让他给你建议、反馈和解决方法。 - -如果你住在一个大城市,那么附近可能会有一个 Vim meetup 组,不然就是在 Freenode IRC 上的 #vim 频道。#vim 频道是 Freenode 
上最活跃的频道之一,那上面可以针对你个人的问题来提供帮助。听上面的人发发牢骚或者看看别人尝试解决自己没有遇到过的问题,仅仅是这样我都觉得很有趣。 - ------- - -所以是什么成就了现在?如今便是极好。为它所花的时间是否值得就在于之后它为你节省了多少时间。但是我经常收到意外的惊喜与快乐,当我发现一个新的按键指令来复制、跳过词,或者一些相似的小技巧。每天我至少可以看见,一点点回报,正在逐渐配得上当初的付出。 - -学习 Vim 并不仅仅只有这些建议,还有很多。我很喜欢指引别人去 [Vim Advantures][3],它是一种只能使用 Vim 的快捷键的在线游戏。而且某天我发现了一个非常神奇的虚拟学习工具,在 [Vimgifts.com][4],那上面有明确的你想要的:用一个 gif 动图来描述,使用一点点 Vim 操作来达到他们想要的。 - -你有花时间学习 Vim 吗?或者有大量键盘操作交互体验的程序上投资时间吗?那些经过你努力后掌握的工具,你认为这些努力值得吗?效率的提高有达到你的预期?分享你们的故事在下面的评论区吧。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/tips-getting-started-vim - -作者:[Jason Baker ][a] -译者:[maywanting](https://github.com/maywanting) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jason-baker -[1]: https://github.com/philc/vimium -[2]: http://www.vimperator.org/ -[3]: http://vim-adventures.com/ -[4]: http://vimgifs.com/ From d4e82db38c29b847f19e2b9118e5ee8a49311780 Mon Sep 17 00:00:00 2001 From: WEIYUE XIE Date: Tue, 2 Aug 2016 19:15:14 +0800 Subject: [PATCH 325/471] =?UTF-8?q?Update=20and=20rename=20part=202=20-=20?= =?UTF-8?q?Building=20a=20data=20science=20portfolio=20-=20Machine=20learn?= =?UTF-8?q?ing=20project.md=20to=20=E7=BF=BB=E8=AF=91=E4=B8=AD=20ideas4u?= =?UTF-8?q?=20part=202=20-=20Building=20a=20data=20science=20portfolio=20-?= =?UTF-8?q?=20Machine=20learning=20project.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ideas4u 翻译中 --- ...data science portfolio - Machine learning project.md} | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) rename sources/team_test/{part 2 - Building a data science portfolio - Machine learning project.md => 翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md} (91%) diff --git a/sources/team_test/part 2 - Building a data science portfolio - Machine learning 
project.md b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md similarity index 91% rename from sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md rename to sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md index 2b429aaab8..4a4fe73553 100644 --- a/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md @@ -1,7 +1,7 @@ -### Understanding the data +### 理解数据 Let’s take a quick look at the raw data files. Here are the first few rows of the acquisition data from quarter 1 of 2012: - +让我们来简单看一下原始数据文件。下面是2012年1季度采集数据的前几行。 ``` 100000853384|R|OTHER|4.625|280000|360|02/2012|04/2012|31|31|1|23|801|N|C|SF|1|I|CA|945||FRM| 100003735682|R|SUNTRUST MORTGAGE INC.|3.99|466000|360|01/2012|03/2012|80|80|2|30|794|N|P|SF|1|P|MD|208||FRM|788 @@ -10,6 +10,7 @@ Let’s take a quick look at the raw data files. Here are the first few rows of Here are the first few rows of the performance data from quarter 1 of 2012: +下面是2012年1季度执行数据的前几行 ``` 100000853384|03/01/2012|OTHER|4.625||0|360|359|03/2042|41860|0|N|||||||||||||||| 100000853384|04/01/2012||4.625||1|359|358|03/2042|41860|0|N|||||||||||||||| @@ -17,10 +18,14 @@ Here are the first few rows of the performance data from quarter 1 of 2012: ``` Before proceeding too far into coding, it’s useful to take some time and really understand the data. This is more critical in operational projects – because we aren’t interactively exploring the data, it can be harder to spot certain nuances unless we find them upfront. 
In this case, the first step is to read the materials on the Fannie Mae site: - +在开始编码之前,花些时间真正理解数据是值得的。这对于操作项目优为重要,因为我们没有交互式探索数据,将很难察觉到细微的差别除非我们在前期发现他们。在这种情况下,第一个步骤是阅读房利美站点的资料: - [Overview][15] +- [概述][15] - [Glossary of useful terms][16] +- [用用的术语表][16] - [FAQs][17] +- [问答][17] +- [Columns in the Acquisition and Performance files][18] - [Columns in the Acquisition and Performance files][18] - [Sample Acquisition data file][19] - [Sample Performance data file][20] From 4110b9b4fa4f9753a42de5723cc832d0fa67d79a Mon Sep 17 00:00:00 2001 From: WEIYUE XIE Date: Tue, 2 Aug 2016 20:30:46 +0800 Subject: [PATCH 326/471] =?UTF-8?q?Update=20=E7=BF=BB=E8=AF=91=E4=B8=AD=20?= =?UTF-8?q?ideas4u=20part=202=20-=20Building=20a=20data=20science=20portfo?= =?UTF-8?q?lio=20-=20Machine=20learning=20project.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译2 --- ...ience portfolio - Machine learning project.md | 47 ++++++++----------- 1 file changed, 20 insertions(+), 27 deletions(-) diff --git a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md index 4a4fe73553..27c7604922 100644 --- a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md @@ -1,54 +1,47 @@ ### 理解数据 -Let’s take a quick look at the raw data files. 
Here are the first few rows of the acquisition data from quarter 1 of 2012: -让我们来简单看一下原始数据文件。下面是2012年1季度采集数据的前几行。 +我们来简单看一下原始数据文件。下面是2012年1季度前几行的采集数据。 ``` 100000853384|R|OTHER|4.625|280000|360|02/2012|04/2012|31|31|1|23|801|N|C|SF|1|I|CA|945||FRM| 100003735682|R|SUNTRUST MORTGAGE INC.|3.99|466000|360|01/2012|03/2012|80|80|2|30|794|N|P|SF|1|P|MD|208||FRM|788 100006367485|C|PHH MORTGAGE CORPORATION|4|229000|360|02/2012|04/2012|67|67|2|36|802|N|R|SF|1|P|CA|959||FRM|794 ``` -Here are the first few rows of the performance data from quarter 1 of 2012: - -下面是2012年1季度执行数据的前几行 +下面是2012年1季度的前几行执行数据 ``` 100000853384|03/01/2012|OTHER|4.625||0|360|359|03/2042|41860|0|N|||||||||||||||| 100000853384|04/01/2012||4.625||1|359|358|03/2042|41860|0|N|||||||||||||||| 100000853384|05/01/2012||4.625||2|358|357|03/2042|41860|0|N|||||||||||||||| ``` - -Before proceeding too far into coding, it’s useful to take some time and really understand the data. This is more critical in operational projects – because we aren’t interactively exploring the data, it can be harder to spot certain nuances unless we find them upfront. In this case, the first step is to read the materials on the Fannie Mae site: 在开始编码之前,花些时间真正理解数据是值得的。这对于操作项目优为重要,因为我们没有交互式探索数据,将很难察觉到细微的差别除非我们在前期发现他们。在这种情况下,第一个步骤是阅读房利美站点的资料: -- [Overview][15] - [概述][15] -- [Glossary of useful terms][16] -- [用用的术语表][16] -- [FAQs][17] +- [有用的术语表][16] - [问答][17] -- [Columns in the Acquisition and Performance files][18] -- [Columns in the Acquisition and Performance files][18] -- [Sample Acquisition data file][19] -- [Sample Performance data file][20] - -After reading through these files, we know some key facts that will help us: - -- There’s an Acquisition file and a Performance file for each quarter, starting from the year 2000 to present. There’s a 1 year lag in the data, so the most recent data is from 2015 as of this writing. -- The files are in text format, with a pipe (|) as a delimiter. 
-- The files don’t have headers, but we have a list of what each column is. -- All together, the files contain data on 22 million loans. -- Because the Performance files contain information on loans acquired in previous years, there will be more performance data for loans acquired in earlier years (ie loans acquired in 2014 won’t have much performance history). - -These small bits of information will save us a ton of time as we figure out how to structure our project and work with the data. +- [采集和执行文件中的列][18] +- [采集数据文件样本][19] +- [执行数据文件样本][20] +在看完这些文件后后,我们了解到一些能帮助我们的关键点: +- 从2000年到现在,每季度都有一个采集和执行文件,因数据是滞后一年的,所以到目前为止最新数据是2015年的。 +- 这些文件是文本格式的,采用管道符号“|”进行分割。 +- 这些文件是没有表头的,但我们有文件各列的名称。 +- 所有一起,文件包含2200万个贷款的数据。 +由于执行文件包含过去几年获得的贷款的信息,在早些年获得的贷款将有更多的执行数据(即在2014获得的贷款没有多少历史执行数据)。 +这些小小的信息将会为我们节省很多时间,因为我们知道如何构造我们的项目和利用这些数据。 ### Structuring the project - +### 构造项目 Before we start downloading and exploring the data, it’s important to think about how we’ll structure the project. When building an end-to-end project, our primary goals are: - +在我们开始下载和探索数据之前,先想一想将如何构造项目是很重要的。当建立端到端项目时,我们的主要目标是: - Creating a solution that works +- 创建一个可行解决方案 - Having a solution that runs quickly and uses minimal resources +- 有一个快速运行且占用最小资源的解决方案 - Enabling others to easily extend our work +- 容易可扩展 - Making it easy for others to understand our code +- 容易理解的代码 - Writing as little code as possible +- 写尽量少的代码 In order to achieve these goals, we’ll need to structure our project well. 
A well structured project follows a few principles: From 2f0e2db40c5bde9e80db4e50433b0bce719d2485 Mon Sep 17 00:00:00 2001 From: WEIYUE XIE Date: Tue, 2 Aug 2016 21:09:04 +0800 Subject: [PATCH 327/471] =?UTF-8?q?Update=20=E7=BF=BB=E8=AF=91=E4=B8=AD=20?= =?UTF-8?q?ideas4u=20part=202=20-=20Building=20a=20data=20science=20portfo?= =?UTF-8?q?lio=20-=20Machine=20learning=20project.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 保存进度3 --- ...ience portfolio - Machine learning project.md | 31 ++++++++----------- 1 file changed, 13 insertions(+), 18 deletions(-) diff --git a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md index 27c7604922..370a9853e5 100644 --- a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md @@ -28,33 +28,28 @@ - 所有一起,文件包含2200万个贷款的数据。 由于执行文件包含过去几年获得的贷款的信息,在早些年获得的贷款将有更多的执行数据(即在2014获得的贷款没有多少历史执行数据)。 这些小小的信息将会为我们节省很多时间,因为我们知道如何构造我们的项目和利用这些数据。 -### Structuring the project + ### 构造项目 -Before we start downloading and exploring the data, it’s important to think about how we’ll structure the project. When building an end-to-end project, our primary goals are: 在我们开始下载和探索数据之前,先想一想将如何构造项目是很重要的。当建立端到端项目时,我们的主要目标是: -- Creating a solution that works - 创建一个可行解决方案 -- Having a solution that runs quickly and uses minimal resources - 有一个快速运行且占用最小资源的解决方案 -- Enabling others to easily extend our work - 容易可扩展 -- Making it easy for others to understand our code -- 容易理解的代码 -- Writing as little code as possible +- 写容易理解的代码 - 写尽量少的代码 -In order to achieve these goals, we’ll need to structure our project well. A well structured project follows a few principles: - -- Separates data files and code files. 
-- Separates raw data from generated data. -- Has a README.md file that walks people through installing and using the project. -- Has a requirements.txt file that contains all the packages needed to run the project. -- Has a single settings.py file that contains any settings that are used in other files. - - For example, if you are reading the same file from multiple Python scripts, it’s useful to have them all import settings and get the file name from a centralized place. -- Has a .gitignore file that prevents large or secret files from being committed. -- Breaks each step in our task into a separate file that can be executed separately. +为了实现这些目标,需要对我们的项目进行良好的构造。一个结构良好的项目遵循几个原则: +- 分离数据文件和代码文件 +- 从原始数据中分离生成的数据。 +- 有一个README.md文件帮助人们安装和使用该项目。 +- 有一个requirements.txt文件列明项目运行所需的所有包。 +- 有一个单独的settings.py 文件列明其它文件中使用的所有的设置 + - 例如,如果从多个Python脚本读取相同的文件,把它们全部import设置和从一个集中的地方获得文件名是有用的。 +- 有一个.gitignore文件,防止大的或秘密文件被提交。 +- 分解任务中每一步可以单独执行的步骤到单独的文件中。 - For example, we may have one file for reading in the data, one for creating features, and one for making predictions. + - 例如, - Stores intermediate values. For example, one script may output a file that the next script can read. + - This enables us to make changes in our data processing flow without recalculating everything. 
Our file structure will look something like this shortly: From c430acb471201c8b9e02c69ced00267861a332a4 Mon Sep 17 00:00:00 2001 From: Noobfish Date: Tue, 2 Aug 2016 21:12:45 +0800 Subject: [PATCH 328/471] Translating by noobfish since 2016.08.02 --- ...ng a data science portfolio - Machine learning project.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md index 70d9b65f92..439be578ce 100644 --- a/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md @@ -1,3 +1,8 @@ ++@noobfish translating since Aug 2nd,2016. ++ ++ + + >This is the third in a series of posts on how to build a Data Science Portfolio. If you like this and want to know when the next post in the series is released, you can [subscribe at the bottom of the page][1]. Data science companies are increasingly looking at portfolios when making hiring decisions. One of the reasons for this is that a portfolio is the best way to judge someone’s real-world skills. The good news for you is that a portfolio is entirely within your control. If you put some work in, you can make a great portfolio that companies are impressed by. 
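The part 2 translation in the patches above notes that the acquisition and performance files are pipe-delimited (`|`) and carry no header row, so column names must be supplied separately. A minimal standard-library sketch of parsing such a record is below — the shortened sample row and the column names are illustrative assumptions, not the official Fannie Mae column list, and the project itself reads these files with pandas (`pd.read_csv(..., sep="|")`):

```python
import csv
import io

# The files are pipe-delimited with no header row, so column names are
# supplied out of band. These names are illustrative placeholders only,
# not the official Fannie Mae column list.
COLUMNS = ["id", "channel", "seller", "interest_rate", "balance"]

# A shortened sample record in the same shape as the article's data.
sample = "100000853384|R|OTHER|4.625|280000\n"

def read_pipe_delimited(text, columns):
    """Parse headerless, pipe-delimited text into a list of dicts."""
    reader = csv.reader(io.StringIO(text), delimiter="|")
    return [dict(zip(columns, row)) for row in reader]

rows = read_pipe_delimited(sample, COLUMNS)
print(rows[0]["id"], rows[0]["interest_rate"])  # → 100000853384 4.625
```

The same `delimiter="|"` idea is what the project's `sep="|"` argument to pandas expresses; keeping the column list in one place (e.g. `settings.py`) matches the centralized-settings principle the translation describes.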
From a088e129d23496de91278e9d53b9291b06b83c01 Mon Sep 17 00:00:00 2001 From: WEIYUE XIE Date: Tue, 2 Aug 2016 21:34:22 +0800 Subject: [PATCH 329/471] =?UTF-8?q?Update=20=E7=BF=BB=E8=AF=91=E4=B8=AD=20?= =?UTF-8?q?ideas4u=20part=202=20-=20Building=20a=20data=20science=20portfo?= =?UTF-8?q?lio=20-=20Machine=20learning=20project.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...a science portfolio - Machine learning project.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md index 370a9853e5..b311cd814d 100644 --- a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md @@ -24,7 +24,7 @@ 在看完这些文件后后,我们了解到一些能帮助我们的关键点: - 从2000年到现在,每季度都有一个采集和执行文件,因数据是滞后一年的,所以到目前为止最新数据是2015年的。 - 这些文件是文本格式的,采用管道符号“|”进行分割。 -- 这些文件是没有表头的,但我们有文件各列的名称。 +- 这些文件是没有表头的,但我们有文件列明各列的名称。 - 所有一起,文件包含2200万个贷款的数据。 由于执行文件包含过去几年获得的贷款的信息,在早些年获得的贷款将有更多的执行数据(即在2014获得的贷款没有多少历史执行数据)。 这些小小的信息将会为我们节省很多时间,因为我们知道如何构造我们的项目和利用这些数据。 @@ -46,13 +46,13 @@ - 例如,如果从多个Python脚本读取相同的文件,把它们全部import设置和从一个集中的地方获得文件名是有用的。 - 有一个.gitignore文件,防止大的或秘密文件被提交。 - 分解任务中每一步可以单独执行的步骤到单独的文件中。 - - For example, we may have one file for reading in the data, one for creating features, and one for making predictions. - - 例如, -- Stores intermediate values. For example, one script may output a file that the next script can read. + - 例如,我们将有一个文件用于读取数据,一个用于创建特征,一个用于做出预测。 +- 保存中间结果,例如,一个脚本可输出下一个脚本可读取的文件。 - - This enables us to make changes in our data processing flow without recalculating everything. 
+ - 这使我们无需重新计算就可以在数据处理流程中进行更改。 + -Our file structure will look something like this shortly: +我们的文件结构大体如下: ``` loan-prediction @@ -64,8 +64,7 @@ loan-prediction ├── settings.py ``` -### Creating the initial files - +### 创建初始文件 To start with, we’ll need to create a loan-prediction folder. Inside that folder, we’ll need to make a data folder and a processed folder. The first will store our raw data, and the second will store any intermediate calculated values. Next, we’ll make a .gitignore file. A .gitignore file will make sure certain files are ignored by git and not pushed to Github. One good example of such a file is the .DS_Store file created by OSX in every folder. A good starting point for a .gitignore file is here. We’ll also want to ignore the data files because they are very large, and the Fannie Mae terms prevent us from redistributing them, so we should add two lines to the end of our file: From 108dde60df911aef8edc5c81906557c78d4f578c Mon Sep 17 00:00:00 2001 From: WEIYUE XIE Date: Tue, 2 Aug 2016 21:40:28 +0800 Subject: [PATCH 330/471] =?UTF-8?q?Update=20and=20rename=20=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=E4=B8=AD=20ideas4u=20part=202=20-=20Building=20a=20da?= =?UTF-8?q?ta=20science=20portfolio=20-=20Machine=20learning=20project.md?= =?UTF-8?q?=20to=20part=202=20-=20Building=20a=20data=20science=20portfoli?= =?UTF-8?q?o=20-=20Machine=20learning=20project.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 占坑 --- ...Building a data science portfolio - Machine learning project.md} | 1 + 1 file changed, 1 insertion(+) rename sources/team_test/{翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md => part 2 - Building a data science portfolio - Machine learning project.md} (99%) diff --git a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md 
similarity index 99% rename from sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md rename to sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md index b311cd814d..19e1fed3a7 100644 --- a/sources/team_test/翻译中 ideas4u part 2 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md @@ -1,4 +1,5 @@ +翻译中 by ideas4u ### 理解数据 我们来简单看一下原始数据文件。下面是2012年1季度前几行的采集数据。 ``` From 044fb2dbd56ca01617538d61abc4f37d842d2552 Mon Sep 17 00:00:00 2001 From: WEIYUE XIE Date: Tue, 2 Aug 2016 23:19:10 +0800 Subject: [PATCH 331/471] part 2 - Building a data science portfolio - Machine learning project.md (#4270) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update part 2 - Building a data science portfolio - Machine learning project.md save changes 5 * Update part 2 - Building a data science portfolio - Machine learning project.md 初稿完成了。 --- ...ce portfolio - Machine learning project.md | 27 +++++++------------ 1 file changed, 9 insertions(+), 18 deletions(-) diff --git a/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md index 19e1fed3a7..ff979de34f 100644 --- a/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 2 - Building a data science portfolio - Machine learning project.md @@ -66,21 +66,16 @@ loan-prediction ``` ### 创建初始文件 -To start with, we’ll need to create a loan-prediction folder. Inside that folder, we’ll need to make a data folder and a processed folder. The first will store our raw data, and the second will store any intermediate calculated values. - -Next, we’ll make a .gitignore file. 
A .gitignore file will make sure certain files are ignored by git and not pushed to Github. One good example of such a file is the .DS_Store file created by OSX in every folder. A good starting point for a .gitignore file is here. We’ll also want to ignore the data files because they are very large, and the Fannie Mae terms prevent us from redistributing them, so we should add two lines to the end of our file: - +首先,我们需要创建一个loan-prediction文件夹,在此文件夹下面,再创建一个data文件夹和一个processed文件夹。data文件夹存放原始数据,processed文件夹存放所有的中间计算结果。 +其次,创建.gitignore文件,.gitignore文件将保证某些文件被git忽略而不会被推送至github。关于这个文件的一个好的例子是由OSX在每一个文件夹都会创建的.DS_Store文件,.gitignore文件一个很好的起点就是在这了。我们还想忽略数据文件因为他们实在是太大了,同时房利美的条文禁止我们重新分发该数据文件,所以我们应该在我们的文件后面添加以下2行: ``` data processed ``` -[Here’s][21] an example .gitignore file for this project. - -Next, we’ll need to create README.md, which will help people understand the project. .md indicates that the file is in markdown format. Markdown enables you write plain text, but also add some fancy formatting if you want. [Here’s][22] a guide on markdown. If you upload a file called README.md to Github, Github will automatically process the markdown, and show it to anyone who views the project. [Here’s][23] an example. - -For now, we just need to put a simple description in README.md: - +这是该项目的一个关于.gitignore文件的例子。 +再次,我们需要创建README.md文件,它将帮助人们理解该项目。后缀.md表示这个文件采用markdown格式。Markdown使你能够写纯文本文件,同时还可以添加你想要的梦幻格式。这是关于markdown的导引。如果你上传一个叫README.md的文件至Github,Github会自动处理该markdown,同时展示给浏览该项目的人。 +至此,我们仅需在README.md文件中添加简单的描述: ``` Loan Prediction ----------------------- @@ -88,8 +83,7 @@ Loan Prediction Predict whether or not loans acquired by Fannie Mae will go into foreclosure. Fannie Mae acquires loans from other lenders as a way of inducing them to lend more. Fannie Mae releases data on the loans it has acquired and their performance afterwards [here](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html). ``` -Now, we can create a requirements.txt file. 
This will make it easy for other people to install our project. We don’t know exactly what libraries we’ll be using yet, but here’s a good starting point: - +现在,我们可以创建requirements.txt文件了。这会唯其它人可以很方便地安装我们的项目。我们还不知道我们将会具体用到哪些库,但是以下几个库是一个很好的开始: ``` pandas matplotlib @@ -99,9 +93,6 @@ ipython scipy ``` -The above libraries are the most commonly used for data analysis tasks in Python, and its fair to assume that we’ll be using most of them. [Here’s][24] an example requirements file for this project. - -After creating requirements.txt, you should install the packages. For this post, we’ll be using Python 3. If you don’t have Python installed, you should look into using [Anaconda][25], a Python installer that also installs all the packages listed above. - -Finally, we can just make a blank settings.py file, since we don’t have any settings for our project yet. - +以上几个是在python数据分析任务中最常用到的库。可以认为我们将会用到大部分这些库。这里是【24】该项目requirements文件的一个例子。 + 创建requirements.txt文件之后,你应该安装包了。我们将会使用python3.如果你没有安装python,你应该考虑使用 [Anaconda][25],一个python安装程序,同时安装了上面列出的所有包。 +最后,我们可以建立一个空白的settings.py文件,因为我们的项目还没有任何设置。 From 307cd103c0370581b85a869913f16b1963b6b3b6 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Wed, 3 Aug 2016 09:58:43 +0800 Subject: [PATCH 332/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?(#4271)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Delete part 5 - Building a data science portfolio - Machine learning project.md * Create part 5 - Building a data science portfolio - Machine learning project.md --- ...ce portfolio - Machine learning project.md | 106 +++++++++++------- 1 file changed, 63 insertions(+), 43 deletions(-) diff --git a/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md index 99e1046e70..a92bb01900 100644 --- a/sources/team_test/part 5 - Building a 
data science portfolio - Machine learning project.md +++ b/sources/team_test/part 5 - Building a data science portfolio - Machine learning project.md @@ -1,38 +1,44 @@ -注解数据 -我们已经在annotate.py中添加了一些功能, 现在我们来看一看数据文件. 我们需要将采集到的数据转换到training dataset来进行机器学习的训练. 这涉及到以下几件事情: -转换所以列数字. -填充缺失值. -分配 performance_count 和 foreclosure_status. -移除出现次数很少的行(performance_count 计数低). -我们有几个列是strings类型的, 看起来对于机器学习算法来说并不是很有用. 然而, 他们实际上是分类变量, 其中有很多不同的类别代码, 例如R,S等等. 我们可以把这些类别标签转换为数值: +### 注解数据 +我们已经在annotate.py中添加了一些功能, 现在我们来看一看数据文件. 我们需要将采集到的数据转换到训练数据表来进行机器学习的训练. 这涉及到以下几件事情: +- 转换所以列数字. +- 填充缺失值. +- 分配 performance_count 和 foreclosure_status. +- 移除出现次数很少的行(performance_count 计数低). + +我们有几个列是文本类型的, 看起来对于机器学习算法来说并不是很有用. 然而, 他们实际上是分类变量, 其中有很多不同的类别代码, 例如R,S等等. 我们可以把这些类别标签转换为数值: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/002.png) 通过这种方法转换的列我们可以应用到机器学习算法中. 还有一些包含日期的列 (first_payment_date 和 origination_date). 我们可以将这些日期放到两个列中: - 在下面的代码中, 我们将转换采集到的数据. 我们将定义一个函数如下: +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/003.png) +在下面的代码中, 我们将转换采集到的数据. 我们将定义一个函数如下: -在采集到的数据中创建foreclosure_status列 . -在采集到的数据中创建performance_count列. -将下面的string列转换为integer列: -channel -seller -first_time_homebuyer -loan_purpose -property_type -occupancy_status -property_state -product_type -转换first_payment_date 和 origination_date 为两列: -通过斜杠分离列. -将第一部分分离成月清单. -将第二部分分离成年清单. -删除这一列. -最后, 我们得到 first_payment_month, first_payment_year, origination_month, and origination_year. -所有缺失值填充为-1. +- 在采集到的数据中创建foreclosure_status列 . +- 在采集到的数据中创建performance_count列. +- 将下面的string列转换为integer列: + - channel + - seller + - first_time_homebuyer + - loan_purpose + - property_type + - occupancy_status + - property_state + - product_type +- 转换first_payment_date 和 origination_date 为两列: + - 通过斜杠分离列. + - 将第一部分分离成月清单. + - 将第二部分分离成年清单. + - 删除这一列. + - 最后, 我们得到 first_payment_month, first_payment_year, origination_month, and origination_year. +- 所有缺失值填充为-1. 
+ +``` def annotate(acquisition, counts): acquisition["foreclosure_status"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "foreclosure_status", counts)) acquisition["performance_count"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "performance_count", counts)) @@ -57,21 +63,25 @@ def annotate(acquisition, counts): acquisition = acquisition.fillna(-1) acquisition = acquisition[acquisition["performance_count"] > settings.MINIMUM_TRACKING_QUARTERS] return acquisition +``` + +### 聚合到一起 -聚合到一起 我们差不多准备就绪了, 我们只需要再在annotate.py添加一点点代码. 在下面代码中, 我们将: -定义一个函数来读取采集的数据. -定义一个函数来写入数据到/train.csv -如果我们在命令行运行annotate.py来读取更新过的数据文件,它将做如下事情: -读取采集到的数据. -计算数据性能. -注解数据. -将注解数据写入到train.csv. +- 定义一个函数来读取采集的数据. +- 定义一个函数来写入数据到/train.csv +- 如果我们在命令行运行annotate.py来读取更新过的数据文件,它将做如下事情: + - 读取采集到的数据. + - 计算数据性能. + - 注解数据. + - 将注解数据写入到train.csv. + +``` def read(): acquisition = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "Acquisition.txt"), sep="|") return acquisition - + def write(acquisition): acquisition.to_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"), index=False) @@ -80,11 +90,13 @@ if __name__ == "__main__": counts = count_performance_rows() acquisition = annotate(acquisition, counts) write(acquisition) +``` 修改完成以后为了确保annotate.py能够生成train.csv文件. 你可以在这里找到完整的 annotate.py file [here][34]. 文件夹结果应该像这样: +``` loan-prediction ├── data │ ├── Acquisition_2012Q1.txt @@ -102,37 +114,45 @@ loan-prediction ├── README.md ├── requirements.txt ├── settings.py +``` -找到标准 -我们已经完成了training dataset的生成, 现在我们需要最后一步, 生成预测. 我们需要找到错误的标准, 以及该如何评估我们的数据. 在这种情况下, 因为有很多的贷款没有收回, 所以根本不可能做到精确的计算. +### 找到标准 + +我们已经完成了训练数据表的生成, 现在我们需要最后一步, 生成预测. 我们需要找到错误的标准, 以及该如何评估我们的数据. 在这种情况下, 因为有很多的贷款没有收回, 所以根本不可能做到精确的计算. 
我们需要读取数据, 并且计算foreclosure_status列, 我们将得到如下信息: +``` import pandas as pd import settings train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv")) train["foreclosure_status"].value_counts() +``` +``` False 4635982 True 1585 Name: foreclosure_status, dtype: int64 +``` 因为只有一点点贷款收回, 通过百分比标签来建立的机器学习模型会把每行都设置为Fasle, 所以我们在这里要考虑每个样本的不平衡性,确保我们做出的预测是准确的. 我们不想要这么多假的false, 我们将预计贷款收回但是它并没有收回, 我们预计贷款不会回收但是却回收了. 通过以上两点, Fannie Mae的false太多了, 因此显示他们可能无法收回投资. 所以我们将定义一个百分比,就是模型预测没有收回但是实际上收回了, 这个数除以总的负债回收总数. 这个负债回收百分比模型实际上是“没有的”. 下面看这个图表: - +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/004.png) 通过上面的图表, 1个负债预计不会回收, 也确实没有回收. 如果我们将这个数除以总数, 2, 我们将得到false的概率为50%. 我们将使用这个标准, 因此我们可以评估一下模型的性能. -设置机器学习分类器 +### 设置机器学习分类器 + 我们使用交叉验证预测. 通过交叉验证法, 我们将数据分为3组. 按照下面的方法来做: -Train a model on groups 1 and 2, and use the model to make predictions for group 3. -Train a model on groups 1 and 3, and use the model to make predictions for group 2. -Train a model on groups 2 and 3, and use the model to make predictions for group 1. -将它们分割到不同的组 ,这意味着我们永远不会用相同的数据来为预测训练模型. 这样就避免了 overfitting. 如果我们overfit, 我们将得到很低的false概率, 这使得我们难以改进算法或者应用到现实生活中. +- Train a model on groups 1 and 2, and use the model to make predictions for group 3. +- Train a model on groups 1 and 3, and use the model to make predictions for group 2. +- Train a model on groups 2 and 3, and use the model to make predictions for group 1. + +将它们分割到不同的组 ,这意味着我们永远不会用相同的数据来为预测训练模型. 这样就避免了overfitting(过拟合). 如果我们overfit(过拟合), 我们将得到很低的false概率, 这使得我们难以改进算法或者应用到现实生活中. [Scikit-learn][35] 有一个叫做 [cross_val_predict][36] 他可以帮助我们理解交叉算法. From bfa16485d321e1a1d8b0f7d34ed9a8a13390bace Mon Sep 17 00:00:00 2001 From: joVoV <704451873@qq.com> Date: Wed, 3 Aug 2016 18:03:35 +0800 Subject: [PATCH 333/471] =?UTF-8?q?=E6=AD=A3=E5=9C=A8=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/talk/20160509 Android vs. 
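The part 5 translation above makes two evaluation points: with heavily imbalanced labels a model that always predicts "no foreclosure" still scores high accuracy, so the text instead measures the share of actual foreclosures the model misses; and predictions are produced with 3-fold cross-validation so no row is predicted by a model trained on it. A pure-Python sketch of both ideas — the toy labels are invented for demonstration, and the real project uses scikit-learn's `cross_val_predict` on `train.csv` as the text notes:

```python
def missed_foreclosure_rate(actual, predicted):
    """Fraction of actual foreclosures the model failed to predict
    (false negatives / actual foreclosures) — the metric the text describes."""
    foreclosures = sum(1 for a in actual if a)
    missed = sum(1 for a, p in zip(actual, predicted) if a and not p)
    return missed / foreclosures if foreclosures else 0.0

def three_fold_indices(n):
    """Split row indices into 3 groups; train on two, predict the third."""
    return [list(range(i, n, 3)) for i in range(3)]

# Imbalanced toy labels, like the 4,635,982 False vs 1,585 True split above.
actual = [False] * 9 + [True]
always_false = [False] * len(actual)  # the naive majority-class "model"

accuracy = sum(p == a for p, a in zip(always_false, actual)) / len(actual)
print(accuracy)                                        # → 0.9 (looks good)
print(missed_foreclosure_rate(actual, always_false))   # → 1.0 (misses every foreclosure)
```

The metric matches the text's worked example: one foreclosure predicted not to happen, out of two actual foreclosures, gives a 50% miss rate.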
iPhone Pros and Cons.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20160509 Android vs. iPhone Pros and Cons.md b/sources/talk/20160509 Android vs. iPhone Pros and Cons.md index 92feeba307..9977d46d35 100644 --- a/sources/talk/20160509 Android vs. iPhone Pros and Cons.md +++ b/sources/talk/20160509 Android vs. iPhone Pros and Cons.md @@ -1,6 +1,7 @@ Android vs. iPhone: Pros and Cons =================================== +正在翻译 by jovov >When comparing Android vs. iPhone, clearly Android has certain advantages even as the iPhone is superior in some key ways. But ultimately, which is better? The question of Android vs. iPhone is a personal one. From 60ad576a3a0ac8645b098cfa417b82926879ba0a Mon Sep 17 00:00:00 2001 From: Purling Nayuki Date: Thu, 4 Aug 2016 00:25:46 +0800 Subject: [PATCH 334/471] Proofread 20160706 Doing for User Space What We Did for Kernel Space --- ...User Space What We Did for Kernel Space.md | 44 +++++++++---------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md b/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md index 0b31016003..a62f9ad09f 100644 --- a/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md +++ b/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md @@ -3,51 +3,51 @@ 我相信,Linux 最好也是最坏的事情,就是内核空间和用户空间之间的巨大差别。 -但是如果抛开这个区别,Linux 可能也不会成为世界上影响力最大的操作系统。如今,Linux 已经拥有世界上最大数量的用户,和最大范围的应用。尽管大多数用户并不知道,当他们进行谷歌搜索,或者触摸安卓手机的时候,他们其实正在使用 Linux。如果不是 Linux 的巨大成功,Apple 公司也可能并不会成为现在这样(苹果在他们的电脑产品中使用 BSD 发行版)。 +但是如果没有这个区别,Linux 可能也不会成为世界上影响力最大的操作系统。如今,Linux 的使用范围在世界上是最大的,而这些应用又有着世界上最大的用户群——尽管大多数用户并不知道,当他们进行谷歌搜索或者触摸安卓手机的时候,他们其实正在使用 Linux。如果不是 Linux 的巨大成功,Apple 公司也可能并不会成为现在这样(即在他们的电脑产品中使用 BSD 的技术)(注:Linux 获得成功后,Apple 曾与 Linus 协商使用 Linux 核心作为 Apple 电脑的操作系统并帮助开发的事宜,但遭到拒绝。因此,Apple 转向使用许可证更为宽松的 BSD 。)。 -不用担心,用户空间是 Linux 内核开发中的一个特性,并不是一个缺陷。正如 Linus 在 2003 
的极客巡航中提到的那样,“”我只做内核相关技术……我并不知道内核之外发生的事情,而且我并不关心。我只关注内核部分发生的事情。” 在 Andrew Morton 在多年之后的另一个极客巡航上给我上了另外的一课,我写到: +不(需要)关注用户空间是 Linux 内核开发中的一个特点而非缺陷。正如 Linus 在 2003 的极客巡航中提到的那样,“我只做内核相关技术……我并不知道内核之外发生的事情,而且我并不关心。我只关注内核部分发生的事情。” 多年之后的另一个极客巡航上, Andrew Morton 给我上了另外的一课,这之后我写道: -> 内核空间是 Linux 核心存在的地方。用户空间是使用 Linux 时使用的空间,和其他的自然的建筑材料一样。内核空间和用户空间的区别,和自然材料和人类从中生产的人造材料的区别很类似。 +> Linux 存在于内核空间,而在用户空间中被使用,和其他的自然的建筑材料一样。内核空间和用户空间的区别,和自然材料与人类从中生产的人造材料的区别很类似。 -这个区别的自然而然的结果,就是尽管外面的世界一刻也离不开 Linux, 但是 Linux 社区还是保持相对较小。所以,为了增加我们社区团体的数量,我希望指出两件事情。第一件已经非常火热,另外一件可能热门。 +这个区别的自然而然的结果,就是尽管外面的世界一刻也离不开 Linux, 但是 Linux 社区还是保持相对较小。所以,为了增加哪怕一点我们社区团体的规模,我希望指出两件事情。第一件已经非常火了,另外一件可能会火起来。 -第一件事情就是 [blockchain][1],出自著名的分布式货币,比特币之手。当你正在阅读这篇文章的同时,对 blockchain 的[兴趣已经直线上升][2]。 +第一件事情就是 [blockchain][1],出自著名的分布式货币——比特币之手。当你正在阅读这篇文章的同时,人们对 blockchain 的[关注度正在直线上升][2]。 ![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f1.png) -> 图1. 谷歌 Blockchain 的趋势 +> 图1. Blockchain 的谷歌搜索趋势 -第二件事就是自主身份。为了解释这个,让我先来问你,你是谁或者你是什么。 +第二件事就是自主身份。为了解释这个概念,让我先来问你:你是谁,你来自哪里? 
-如果你从你的雇员,你的医生,或者车管所,Facebook,Twitter 或者谷歌上得到答案,你就会发现他们每一个都有明显的社会性: 为了他们自己的便利,在进入这些机构的控制前,他们都会添加自己的命名空间。正如 Timothy Ruff 在 [Evernym][3] 中解释的,”你并不为了他们而存在,你只为了自己的身份而活。“。你的身份可能会变化,但是唯一不变的就是控制着身份的人,也就是这个组织。 +如果你从你的雇员、你的医生或者车管所,Facebook、Twitter 或者谷歌上得到答案,你就会发现它们都是行政身份——机构完全以自己的便利为原因设置这些身份和职位。正如 [Evernym][3] 的 Timothy Ruff 所说,“你并不因组织而存在,但你的身份却因此存在。”身份是个因变量。自变量——即控制着身份的变量——是(你所在的)组织。 如果你的答案出自你自己,我们就有一个广大空间来发展一个新的领域,在这个领域中,我们完全自由。 -第一个解释这个的人,据我所知,是 [Devon Loffreto][4]。在 2012 年 2 月,在的他的博客中,他写道 ”什么是' Sovereign Source Authority'?“,[Moxy Tongue][5]。在他发表在 2016 年 2 月的 "[Self-Sovereign Identity][6]" 中,他写道: +据我所知,第一个解释这个的人是 [Devon Loffreto][4]。在 2012 年 2 月,他在博客[Moxy Tongue][5] 中写道:“什么是'Sovereign Source Authority'?”。在发表于 2016 年 2 月的 "[Self-Sovereign Identity][6]" 中,他写道: -> 自主身份必须是独立个人提出的,并且不包含社会因素。。。自主身份源于每个个体对其自身本源的认识。 一个自主身份可以为个体带来新的社会面貌。每个个体都可能为自己生成一个自主身份,并且这并不会改变固有的人权。使用自主身份机制是所有参与者参与的基石,并且 依旧可以同各种形式的人类社会保持联系。 +> 自主身份必须是独立个人提出的,并且不包含社会因素……自主身份源于每个个体对其自身本源的认识。 一个自主身份可以为个体带来新的社会面貌。每个个体都可能为自己生成一个自主身份,并且这并不会改变固有的人权。使用自主身份机制是所有参与者参与的基石,并且依旧可以同各种形式的人类社会保持联系。 -为了将这个发布在 Linux 条款中,只有个人才能为他或她设定一个自己的开源社区身份。这在现实实践中,这只是一个非常偶然的事件。举个例子,我自己的身份包括: +将这个概念放在 Linux 领域中,只有个人才能为他或她设定一个自己的开源社区身份。这在现实实践中,这只是一个非常正从的事件。举个例子,我自己的身份包括: - David Allen Searls,我父母会这样叫我。 - David Searls,正式场合下我会这么称呼自己。 - Dave,我的亲戚和好朋友会这么叫我。 - Doc,大多数人会这么叫我。 -在上述提到的身份认证中,我可以在不同的情景中轻易的转换。但是,这只是在现实世界中。在虚拟世界中,这就变得非常困难。除了上述的身份之外,我还可以是 @dsearls(我的 twitter 账号) 和 dsearls (其他的网络账号)。然而为了记住成百上千的不同账号的登录名和密码,我已经不堪重负。 +作为承认以上称呼的自主身份来源,我可以在不同的情景中轻易的转换。但是,这只是在现实世界中。在虚拟世界中,这就变得非常困难。除了上述的身份之外,我还可以是 @dsearls(我的 twitter 账号) 和 dsearls (其他的网络账号)。然而为了记住成百上千的不同账号的登录名和密码,我已经不堪重负。 -你可以在你的浏览器上感受到这个糟糕的体验。在火狐上,我有成百上千个用户名密码。很多已经废弃(很多都是从 Netscape 时代遗留下来的),但是我依旧假设我有时会有大量的工作账号需要处理。对于这些,我只是被动接受者。没有其他的解决方法。甚至一些安全较低的用户认证,已经成为了现实世界中不可缺少的一环。 +你可以在你的浏览器上感受到这个糟糕的体验。在火狐上,我有成百上千个用户名密码。很多已经废弃(很多都是从 Netscape 时代遗留下来的),但是我想会有大量的工作账号需要处理。对于这些,我只是被动接受者。没有其他的解决方法。甚至一些安全较低的用户认证,已经成为了现实世界中不可缺少的一环。 -现在,最简单的方式来联系账号,就是通过 "Log in with Facebook" 或者 "Login in with Twitter" 
来进行身份认证。在这些例子中,我们中的每一个甚至并不是真正意义上的自己,或者某种程度上是我们希望被大家认识的自己(如果我们希望被其他人认识的话)。 +现在,最简单的方式来联系账号,就是通过 "Log in with Facebook" 或者 "Login in with Twitter" 来进行身份认证。在这种情况下,我们中的每一个甚至并不是真正意义上的自己,甚至(如果我们希望被其他人认识的话)缺乏对其他实体如何认识我们的控制。 我们从一开始就需要的是一个可以实体化我们的自主身份和交流时选择如何保护和展示自身的个人系统。因为缺少这个能力,我们现在陷入混乱。Shoshana Zuboff 称之为 "监视资本主义",她如此说道: ->...难以想象,在见证了互联网和获得了的巨大成功的谷歌背后。世界因 Apple 和 FBI 的对决而紧密联系在一起。真相就是,被热衷于监视的资本家开发监视系统,是每一个国家安全机构真正的恶。 +>...难以想象,在见证了互联网和获得了的巨大成功的谷歌背后。世界因 Apple 和 FBI 的对决而紧密联系在一起。讲道理,热衷于监视的资本家开发的监视系统是每一个国家安全机构都渴望的。 然后,她问道,”我们怎样才能保护自己远离他人的影响?“ -我建议使用自主身份。我相信这是我们唯一的方式,来保证我们从一个被监视的世界中脱离出来。以此为基础,我们才可以完全无顾忌的和社会,政治,商业上的人交流。 +我建议使用自主身份。我相信这是我们唯一的既可以保证我们从监视中逃脱、又可以使我们有一个有序的世界的办法。以此为基础,我们才可以完全无顾忌地和社会,政治,商业上的人交流。 -我在五月联合国举行的 [ID2020][7] 会议中总结了这个临时的结论。很高兴,Devon Loffreto 也在那,自从他在2013年被选为作为轮值主席之后。这就是[我曾经写的一些文章][8],引用了 Devon 的早期博客(比如上面的原文)。 +我在五月联合国举行的 [ID2020][7] 会议中总结了这个临时的结论。很高兴,Devon Loffreto 也在那,他于 2013 年推动了自主身份的创立。这是[我那时写的一些文章][8],引用了 Devon 的早期博客(比如上面的原文)。 这有三篇这个领域的准则: @@ -55,14 +55,14 @@ - "[System or Human First][10]" - Devon Loffreto. - "[The Path to Self-Sovereign Identity][11]" - Christopher Allen. -从Evernym 的简要说明中,[digi.me][12], [iRespond][13] 和 [Respect Network][14] 也被包括在内。自主身份和社会身份 (也被称为”current model“) 的对比结果,显示在图二中。 +从Evernym 的简要说明中,[digi.me][12], [iRespond][13] 和 [Respect Network][14] 也被包括在内。自主身份和社会身份 (也被称为“current model”) 的对比结果,显示在图二中。 ![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f2.jpg) > 图 2. Current Model 身份 vs. 
自主身份 -为此而生的[平台][15]就是 Sovrin,也被解释为“”依托于先进技术的,授权机制的,分布式货币上的一个完全开源,基于标识,声明身份的图平台“ 同时,这也有一本[白皮书][16]。代号为 [plenum][17],而且它在 Github 上。 +Sovrin 就是为此而生的[平台][15],它阐述自己为一个“依托于先进、专用、经授权、分布式平台的,完全开源、基于标识的身份声明图平台”。同时,这也有一本[白皮书][16]。它的代码名为 [plenum][17],并且公开在 Github 上。 -在这-或者其他类似的地方-我们就可以在用户空间中重现我们在上一个的四分之一世纪中已经做过的事情。 +在这里——或者其他类似的地方——我们就可以在用户空间中重现我们在过去 25 年中在内核空间做过的事情。 -------------------------------------------------------------------------------- @@ -70,8 +70,8 @@ via: https://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-space 作者:[Doc Searls][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[PurlingNayuki](https://github.com/PurlingNayuki) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0c45724b1a7e2d01edc34aa835cfa8eefa7604b2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 4 Aug 2016 09:20:37 +0800 Subject: [PATCH 335/471] =?UTF-8?q?20160804-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...w to restore older file versions in Git.md | 181 ++++++++++++++++++ 1 file changed, 181 insertions(+) create mode 100644 sources/tech/20160726 How to restore older file versions in Git.md diff --git a/sources/tech/20160726 How to restore older file versions in Git.md b/sources/tech/20160726 How to restore older file versions in Git.md new file mode 100644 index 0000000000..a8fc789672 --- /dev/null +++ b/sources/tech/20160726 How to restore older file versions in Git.md @@ -0,0 +1,181 @@ +How to restore older file versions in Git +============================================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB) + +In today's article you will learn how to find out where you are in the history of your project, how to restore older file versions, and how to make Git branches so you can 
safely conduct wild experiments. + +Where you are in the history of your Git project, much like your location in the span of a rock album, is determined by a marker called HEAD (like the playhead of a tape recorder or record player). To move HEAD around in your own Git timeline, use the git checkout command. + +There are two ways to use the git checkout command. A common use is to restore a file from a previous commit, and you can also rewind your entire tape reel and go in an entirely different direction. + +### Restore a file + +This happens when you realize you've utterly destroyed an otherwise good file. We all do it; we get a file to a great place, we add and commit it, and then we decide that what it really needs is one last adjustment, and the file ends up completely unrecognizable. + +To restore it to its former glory, use git checkout from the last known commit, which is HEAD: + +``` +$ git checkout HEAD filename +``` + +If you accidentally committed a bad version of a file and need to yank a version from even further back in time, look in your Git log to see your previous commits, and then check it out from the appropriate commit: + +``` +$ git log --oneline +79a4e5f bad take +f449007 The second commit +55df4c2 My great project, first commit. + +$ git checkout 55df4c2 filename + +``` + +Now the older version of the file is restored into your current position. (You can see your current status at any time with the git status command.) You need to add the file because it has changed, and then commit it: + +``` +$ git add filename +$ git commit -m 'restoring filename from first commit.' +``` + +Look in your Git log to verify what you did: + +``` +$ git log --oneline +d512580 restoring filename from first commit +79a4e5f bad take +f449007 The second commit +55df4c2 My great project, first commit. +``` + +Essentially, you have rewound the tape and are taping over a bad take. So you need to re-record the good take. 
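That rewind-and-re-record cycle is easy to rehearse in a throwaway repository before you rely on it. Here is a minimal sketch; the file name, commit messages, and identity settings are all made up for the demo:

```shell
# Build a scratch repository containing a good take and a bad take.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"  # throwaway identity so the commits work anywhere
git config user.name "You"

echo "good take" > song.txt
git add song.txt
git commit -q -m 'My great project, first commit.'

echo "bad take" > song.txt
git add song.txt
git commit -q -m 'bad take'

# Pull the file back out of the previous commit, then re-commit it.
git checkout HEAD~1 song.txt
git add song.txt
git commit -q -m 'restoring song.txt from first commit'

cat song.txt  # the good take is back
```

HEAD~1 here simply means "one commit before HEAD"; an explicit hash copied from git log --oneline works exactly the same way.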
+ +### Rewind the timeline + +The other way to check out a file is to rewind the entire Git project. This introduces the idea of branches, which are, in a way, alternate takes of the same song. + +When you go back in history, you rewind your Git HEAD to a previous version of your project. This example rewinds all the way back to your original commit: + +``` +$ git log --oneline +d512580 restoring filename from first commit +79a4e5f bad take +f449007 The second commit +55df4c2 My great project, first commit. + +$ git checkout 55df4c2 +``` + +When you rewind the tape in this way, if you hit the record button and go forward, you are destroying your future work. By default, Git assumes you do not want to do this, so it detaches HEAD from the project and lets you work as needed without accidentally recording over something you have recorded later. + +If you look at your previous version and realise suddenly that you want to re-do everything, or at least try a different approach, then the safe way to do that is to create a new branch. You can think of this process as trying out a different version of the same song, or creating a remix. The original material exists, but you're branching off and doing your own version for fun. + +To get your Git HEAD back down on blank tape, make a new branch: + +``` +$ git checkout -b remix +Switched to a new branch 'remix' +``` + +Now you've moved back in time, with an alternate and clean workspace in front of you, ready for whatever changes you want to make. + +You can do the same thing without moving in time. Maybe you're perfectly happy with how your progress is going, but would like to switch to a temporary workspace just to try some crazy ideas out. That's a perfectly acceptable workflow, as well: + +``` +$ git status +On branch master +nothing to commit, working directory clean + +$ git checkout -b crazy_idea +Switched to a new branch 'crazy_idea' +``` + +Now you have a clean workspace where you can sandbox some crazy new ideas. 
Once you're done, you can either keep your changes, or you can forget they ever existed and switch back to your master branch. + +To forget your ideas in shame, change back to your master branch and pretend your new branch doesn't exist: + +``` +$ git checkout master +``` + +To keep your crazy ideas and pull them back into your master branch, change back to your master branch and merge your new branch: + +``` +$ git checkout master +$ git merge crazy_idea +``` + +Branches are powerful aspects of git, and it's common for developers to create a new branch immediately after cloning a repository; that way, all of their work is contained on their own branch, which they can submit for merging to the master branch. Git is pretty flexible, so there's no "right" or "wrong" way (even a master branch can be distinguished from what remote it belongs to), but branching makes it easy to separate tasks and contributions. Don't get too carried away, but between you and me, you can have as many Git branches as you please. They're free! + +### Working with remotes + +So far you've maintained a Git repository in the comfort and privacy of your own home, but what about when you're working with other people? + +There are several different ways to set Git up so that many people can work on a project at once, so for now we'll focus on working on a clone, whether you got that clone from someone's personal Git server or their GitHub page, or from a shared drive on the same network. + +The only difference between working on your own private Git repository and working on something you want to share with others is that at some point, you need to push your changes to someone else's repository. We call the repository you are working in a local repository, and any other repository a remote. + +When you clone a repository with read and write permissions from another source, your clone inherits the remote from whence it came as its origin. 
You can see a clone's remote:
+
+```
+$ git remote --verbose
+origin seth@example.com:~/myproject.Git (fetch)
+origin seth@example.com:~/myproject.Git (push)
+```
+
+Having a remote origin is handy because it is functionally an offsite backup, and it also allows someone else to be working on the project.
+
+If your clone didn't inherit a remote origin, or if you choose to add one later, use the git remote command, giving the new remote a name (origin, here) and its location:
+
+```
+$ git remote add origin seth@example.com:~/myproject.Git
+```
+
+If you have changed files and want to send them to your remote origin, and have read and write permissions to the repository, use git push. The first time you push changes, you must also send your branch information. It is a good practice to not work on master, unless you've been told to do so:
+
+```
+$ git checkout -b seth-dev
+$ git add exciting-new-file.txt
+$ git commit -m 'first push to remote'
+$ git push -u origin HEAD
+```
+
+This pushes your current location (HEAD, naturally) and the branch it exists on to the remote. After you've pushed your branch once, you can drop the -u option:
+
+```
+$ git add another-file.txt
+$ git commit -m 'another push to remote'
+$ git push origin HEAD
+```
+
+### Merging branches
+
+When you're working alone in a Git repository you can merge test branches into your master branch whenever you want. When working in tandem with a contributor, you'll probably want to review their changes before merging them into your master branch:
+
+```
+$ git checkout contributor
+$ git pull
+$ less blah.txt # review the changed files
+$ git checkout master
+$ git merge contributor
+```
+
+If you are using GitHub or GitLab or something similar, the process is different. There, it is traditional to fork the project and treat it as though it is your own repository. You can work in the repository and send changes to your GitHub or GitLab account without getting permission from anyone, because it's your repository.
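When working from a fork, it is common to keep the original project reachable as a second remote, conventionally named upstream, next to the origin that points at your fork. The sketch below fakes that arrangement with local directories standing in for real hosting URLs; all paths and names are invented for the demo:

```shell
# Two bare repositories stand in for the hosted original project and your fork.
original=$(mktemp -d)/original.git
fork=$(mktemp -d)/fork.git
work=$(mktemp -d)/myproject

git init -q --bare "$original"
git clone -q --bare "$original" "$fork"  # "forking" is essentially a server-side copy

git clone -q "$fork" "$work"             # your working clone; origin points at the fork
cd "$work"
git remote add upstream "$original"      # keep the original project reachable too

git remote --verbose
```

From here, git pull upstream master fetches whatever lands in the original project, while git push origin HEAD publishes your own branches to the fork; the pull request itself still happens on the web service.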
+ +If you want the person you forked it from to receive your changes, you create a pull request, which uses the web service's backend to send patches to the real owner, and allows them to review and pull in your changes. + +Forking a project is usually done on the web service, but the Git commands to manage your copy of the project are the same, even the push process. Then it's back to the web service to open a pull request, and the job is done. + +In our next installment we'll look at some convenience add-ons to help you integrate Git comfortably into your everyday workflow. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/how-restore-older-file-versions-git + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth From c5ab34a90b78a432e272ff42af3643cebde43da0 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 4 Aug 2016 09:25:55 +0800 Subject: [PATCH 336/471] =?UTF-8?q?20160804-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20160802 3 graphical tools for Git.md | 117 ++++++++++++++++++ 1 file changed, 117 insertions(+) create mode 100644 sources/tech/20160802 3 graphical tools for Git.md diff --git a/sources/tech/20160802 3 graphical tools for Git.md b/sources/tech/20160802 3 graphical tools for Git.md new file mode 100644 index 0000000000..c09afed48b --- /dev/null +++ b/sources/tech/20160802 3 graphical tools for Git.md @@ -0,0 +1,117 @@ +3 graphical tools for Git +============================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/BUSINESS_meritladder.png?itok=4CAH2wV0) + +In this article, we'll take a look at some convenience add-ons to help you integrate Git comfortably into your 
everyday workflow. + +I learned Git before many of these fancy interfaces existed, and my workflow is frequently text-based anyway, so most of the inbuilt conveniences of Git suit me pretty well. It is always best, in my opinion, to understand how Git works natively. However, it is always nice to have options, so these are some of the ways you can start using Git outside of the terminal. + +### Git in KDE Dolphin + +I am a KDE user, if not always within the Plasma desktop, then as my application layer in Fluxbox. Dolphin is an excellent file manager with lots of options and plenty of secret little features. Particularly useful are all the plugins people develop for it, one of which is a nearly-complete Git interface. Yes, you can manage your Git repositories natively from the comfort of your own desktop. + +But first, you'll need to make sure the add-ons are installed. Some distros come with a filled-to-the-brim KDE, while others give you just the basics, so if you don't see the Git options in the next few steps, search your repository for something like dolphin-extras or dolphin-plugins. + +To activate Git integration, go to the Settings menu in any Dolphin window and select Configure Dolphin. + +In the Configure Dolphin window, click on the Services icon in the left column. + +In the Services panel, scroll through the list of available plugins until you find Git. + +![](https://opensource.com/sites/default/files/4_dolphinconfig.jpg) + +Save your changes and close your Dolphin window. When you re-launch Dolphin, navigate to a Git repository and have a look around. Notice that all icons now have emblems: green boxes for committed files, solid green boxes for modified files, no icon for untracked files, and so on. + +Your right-click menu now has contextual Git options when invoked inside a Git repository. You can initiate a checkout, push or pull when clicking inside a Dolphin window, and you can even do a git add or git remove on your files. 
+ +![](https://opensource.com/sites/default/files/4_dolphingit.jpg) + +You can't clone a repository or change remote paths in Dolphin, but will have to drop to a terminal, which is just an F4 away. + +Frankly, this feature of KDE is so kool [sic] that this article could just end here. The integration of Git in your native file manager makes working with Git almost transparent; everything you need to do just happens no matter what stage of the process you are in. Git in the terminal, and Git waiting for you when you switch to the GUI. It is perfection. + +But wait, there's more! + +### Sparkleshare + +From the other side of the desktop pond comes SparkleShare, a project that uses a file synchronization model ("like Dropbox!") that got started by some GNOME developers. It is not integrated into any specific part of GNOME, so you can use it on all platforms. + +If you run Linux, install SparkleShare from your software repository. Other operating systems should download from the SparkleShare website. You can safely ignore the instructions on the SparkleShare website, which are for setting up a SparkleShare server, which is not what we will do here. You certainly can set up a SparkleShare server if you want, but SparkleShare is compatible with any Git repository, so you don't need to create your own server. + +After it is installed, launch SparkleShare from your applications menu. Step through the setup wizard, which is two steps plus a brief tutorial, and optionally set SparkleShare as a startup item for your desktop. + +![](https://opensource.com/sites/default/files/4_sparklesetup.jpg) + +An orange SparkleShare directory is now in your system tray. Currently, SparkleShare is oblivious to anything on your computer, so you need to add a hosted project. + +To add a directory for SparkleShare to track, click the SparkleShare icon in your system tray and select Add Hosted Project. 
+
+![](https://opensource.com/sites/default/files/4_sparklehost.jpg)
+
+SparkleShare can work with self-hosted Git projects, or projects hosted on public Git services like GitHub and Bitbucket. For full access, you'll probably need to use the Client ID that SparkleShare provides to you. This is an SSH key acting as the authentication token for the service you use for hosting, including your own Git server, which should also use SSH public key authentication rather than password login. Copy the Client ID into the authorized_keys file of your Git user on your server, or into the SSH key panel of your Git host.
+
+After configuring the host you want to use, SparkleShare downloads the Git project, including, at your option, the commit history. Find the files in ~/SparkleShare.
+
+Unlike Dolphin's Git integration, SparkleShare is unnervingly invisible. When you make a change, it quietly syncs the change to your remote project. For many people, that is a huge benefit: all the power of Git with none of the maintenance. To me, it is unsettling, because I like to govern what I commit and which branch I use.
+
+SparkleShare may not be for everyone, but it is a powerful and simple Git solution that shows how different open source projects fit together in perfect harmony to create something unique.
+
+### Git-cola
+
+Yet another model of working with Git repositories is less native and more of a monitoring approach; rather than using an integrated application to interact directly with your Git project, you can use a desktop client to monitor changes in your project and deal with each change in whatever way you choose. An advantage to this approach is focus. You might not care about all 125 files in your project when only three of them are actively being worked on, so it is helpful to bring them to the forefront.
+
+If you thought there were a lot of Git web hosts out there, you haven't seen anything yet. [Git clients for your desktop][1] are a dime-a-dozen.
In fact, Git actually ships with an inbuilt graphical Git client. The most cross-platform and most configurable of them all is the open source Git-cola client, written in Python and Qt. + +If you're on Linux, Git-cola may be in your software repository. Otherwise, just download it from the site and install it: + +``` +$ python setup.py install +``` + +When Git-cola launches, you're given three buttons to open an existing repository, create a new repo, or clone an existing repository. + +Whichever you choose, at some point you end up with a Git repository. Git-cola, and indeed most desktop clients that I've used, don't try to be your interface into your repository; they leave that up to your normal operating system tools. In other words, I might start a repository with Git-cola, but then I would open that repository in Thunar or Emacs to start my work. Leaving Git-cola open as a monitor works quite well, because as you create new files, or change existing ones, they appear in Git-cola's Status panel. + +The default layout of Git-cola is a little non-linear. I prefer to move from left-to-right, and because Git-cola happens to be very configurable, you're welcome to change your layout. I set mine up so that the left-most panel is Status, showing any changes made to my current branch, then to the right, a Diff panel in case I want to review a change, and the Actions panel for quick-access buttons to common tasks, and finally the right-most panel is a Commit panel where I can write commit messages. + +![](https://opensource.com/sites/default/files/4_gitcola.jpg) + +Even if you use a different layout, this is the general flow of Git-cola: + +Changes appear in the Status panel. Right-click a change entry, or select a file and click the Stage button in the Action panel, to stage a file. + +A staged file's icon changes to a green triangle to indicate that it has been both modified and staged. 
You can unstage a file by right-clicking and selecting Unstage Selected, or by clicking the Unstage button in the Actions panel. + +Review your changes in the Diff panel. + +When you are ready to commit, enter a commit message and click the Commit button. + +There are other buttons in the Actions panel for other common tasks like a git pull or git push. The menus round out the task list, with dedicated actions for branching, reviewing diffs, rebasing, and a lot more. + +I tend to think of Git-cola as a kind of floating panel for my file manager (and I only use Git-cola when Dolphin is not available). On one hand, it's less interactive than a fully integrated and Git-aware file manager, but on the other, it offers practically everything that raw Git does, so it's actually more powerful. + +There are plenty of graphical Git clients. Some are paid software with no source code available, others are viewers only, others attempt to reinvent Git with special terms that are specific to the client ("sync" instead of "push"..?), and still others are platform-specific. Git-Cola has consistently been the easiest to use on any platform, and the one that stays closest to pure Git so that users learn Git whilst using it, and experts feel comfortable with the interface and terminology. + +### Git or graphical? + +I don't generally use graphical tools to access Git; mostly I use the ones I've discussed when helping other people find a comfortable interface for themselves. At the end of the day, though, it comes down to what fits with how you work. I like terminal-based Git because it integrates well with Emacs, but on a day where I'm working mostly in Inkscape, I might naturally fall back to using Git in Dolphin because I'm in Dolphin anyway. + +It's up to you how you use Git; the most important thing to remember is that Git is meant to make your life easier and those crazy ideas you have for your work safer to try out. 
Get familiar with the way Git works, and then use Git from whatever angle you find works best for you. + +In our next installment, we will learn how to set up and manage a Git server, including user access and management, and running custom scripts. + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/8/graphical-tools-git + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://git-scm.com/downloads/guis From 569747cdee068155773a7a6b5b8b9841ecb73ac1 Mon Sep 17 00:00:00 2001 From: tianfeiyu Date: Thu, 4 Aug 2016 09:28:11 +0800 Subject: [PATCH 337/471] Update 20160726 How to restore older file versions in Git.md --- .../tech/20160726 How to restore older file versions in Git.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160726 How to restore older file versions in Git.md b/sources/tech/20160726 How to restore older file versions in Git.md index a8fc789672..675167aaad 100644 --- a/sources/tech/20160726 How to restore older file versions in Git.md +++ b/sources/tech/20160726 How to restore older file versions in Git.md @@ -1,3 +1,4 @@ +translated by strugglingyouth How to restore older file versions in Git ============================================= From a42cbc7cf6c3e5e7af7a32ef5782a3e6d41f1ccd Mon Sep 17 00:00:00 2001 From: David Dai Date: Thu, 4 Aug 2016 11:16:47 +0800 Subject: [PATCH 338/471] =?UTF-8?q?Traslating=2020160309=20Let=E2=80=99s?= =?UTF-8?q?=20Build=20A=20Web=20Server.=20Part=201.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160309 Let’s Build A Web Server. Part 1.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160309 Let’s Build A Web Server. 
Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md index 4c8048786d..63f08f754d 100644 --- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md +++ b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md @@ -1,3 +1,4 @@ +Translating by StdioA Let’s Build A Web Server. Part 1. ===================================== From ae01fb620328fa04a45b336282b2ef234c1ee169 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 3 Aug 2016 20:17:49 +0800 Subject: [PATCH 339/471] PUB:20160602 How to build and deploy a Facebook Messenger bot with Python and Flask MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @wyangsun 只翻译了一半。。。 --- ...ook Messenger bot with Python and Flask.md | 392 ++++++++++++++++++ ...ook Messenger bot with Python and Flask.md | 319 -------------- 2 files changed, 392 insertions(+), 319 deletions(-) create mode 100644 published/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md delete mode 100644 translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md diff --git a/published/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/published/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md new file mode 100644 index 0000000000..e2191b88c1 --- /dev/null +++ b/published/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md @@ -0,0 +1,392 @@ +如何用 Python 和 Flask 建立部署一个 Facebook Messenger 机器人 +========================================================================== + +这是我建立一个简单的 Facebook Messenger 机器人的记录。功能很简单,它是一个回显机器人,只是打印回用户写了什么。 + +回显服务器类似于服务器的“Hello World”例子。 + +这个项目的目的不是建立最好的 Messenger 机器人,而是让你了解如何建立一个小型机器人和每个事物是如何整合起来的。 + +- [技术栈][1] +- [机器人架构][2] +- [机器人服务器][3] +- [部署到 Heroku][4] +- [创建 Facebook 应用][5] +- [结论][6] + +### 技术栈 + +使用到的技术栈: + +- [Heroku][7] 做后端主机。免费级足够这个等级的教程。回显机器人不需要任何种类的数据持久,所以不需要数据库。 +- [Python][8] 是我们选择的语言。版本选择 
2.7,虽然它移植到 Pyhton 3 很容易,只需要很少的改动。 +- [Flask][9] 作为网站开发框架。它是非常轻量的框架,用在小型工程或微服务是非常完美的。 +- 最后 [Git][10] 版本控制系统用来维护代码和部署到 Heroku。 +- 值得一提:[Virtualenv][11]。这个 python 工具是用来创建清洁的 python 库“环境”的,这样你可以只安装必要的需求和最小化应用的大小。 + +### 机器人架构 + +Messenger 机器人是由一个响应两种请求的服务器组成的: + +- GET 请求被用来认证。他们与你注册的 FaceBook 认证码一同被 Messenger 发出。 +- POST 请求被用来实际的通信。典型的工作流是,机器人将通过用户发送带有消息数据的 POST 请求而建立通信,然后我们将处理这些数据,并发回我们的 POST 请求。如果这个请求完全成功(返回一个 200 OK 状态码),我们也将响应一个 200 OK 状态码给初始的 Messenger请求。 + +这个教程应用将托管到 Heroku,它提供了一个优雅而简单的部署应用的接口。如前所述,免费级可以满足这个教程。 + +在应用已经部署并且运行后,我们将创建一个 Facebook 应用然后连接它到我们的应用,以便 Messenger 知道发送请求到哪,这就是我们的机器人。 + +### 机器人服务器 + +基本的服务器代码可以在 Github 用户 [hult(Magnus Hult)][13] 的 [Chatbot][12] 项目上获取,做了一些只回显消息的代码修改和修正了一些我遇到的错误。最终版本的服务器代码如下: + +``` +from flask import Flask, request +import json +import requests + +app = Flask(__name__) + +### 这需要填写被授予的页面通行令牌(PAT) +### 它由将要创建的 Facebook 应用提供。 +PAT = '' + +@app.route('/', methods=['GET']) +def handle_verification(): + print "Handling Verification." + if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': + print "Verification successful!" + return request.args.get('hub.challenge', '') + else: + print "Verification failed!" + return 'Error, wrong validation token' + +@app.route('/', methods=['POST']) +def handle_messages(): + print "Handling Messages" + payload = request.get_data() + print payload + for sender, message in messaging_events(payload): + print "Incoming from %s: %s" % (sender, message) + send_message(PAT, sender, message) + return "ok" + +def messaging_events(payload): + """Generate tuples of (sender_id, message_text) from the + provided payload. 
+ """ + data = json.loads(payload) + messaging_events = data["entry"][0]["messaging"] + for event in messaging_events: + if "message" in event and "text" in event["message"]: + yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') + else: + yield event["sender"]["id"], "I can't echo this" + + +def send_message(token, recipient, text): + """Send the message text to recipient with id recipient. + """ + + r = requests.post("https://graph.facebook.com/v2.6/me/messages", + params={"access_token": token}, + data=json.dumps({ + "recipient": {"id": recipient}, + "message": {"text": text.decode('unicode_escape')} + }), + headers={'Content-type': 'application/json'}) + if r.status_code != requests.codes.ok: + print r.text + +if __name__ == '__main__': + app.run() +``` + +让我们分解代码。第一部分是引入所需的依赖: + +``` +from flask import Flask, request +import json +import requests +``` + +接下来我们定义两个函数(使用 Flask 特定的 app.route 装饰器),用来处理到我们的机器人的 GET 和 POST 请求。 + +``` +@app.route('/', methods=['GET']) +def handle_verification(): + print "Handling Verification." + if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': + print "Verification successful!" + return request.args.get('hub.challenge', '') + else: + print "Verification failed!" + return 'Error, wrong validation token' +``` + +当我们创建 Facebook 应用时,verify_token 对象将由我们声明的 Messenger 发送。我们必须自己来校验它。最后我们返回“hub.challenge”给 Messenger。 + +处理 POST 请求的函数更有意思一些: + +``` +@app.route('/', methods=['POST']) +def handle_messages(): + print "Handling Messages" + payload = request.get_data() + print payload + for sender, message in messaging_events(payload): + print "Incoming from %s: %s" % (sender, message) + send_message(PAT, sender, message) + return "ok" +``` + +当被调用时,我们抓取消息载荷,使用函数 messaging_events 来拆解它,并且提取发件人身份和实际发送的消息,生成一个可以循环处理的 python 迭代器。请注意 Messenger 发送的每个请求有可能多于一个消息。 + +``` +def messaging_events(payload): + """Generate tuples of (sender_id, message_text) from the + provided payload. 
+ """ + data = json.loads(payload) + messaging_events = data["entry"][0]["messaging"] + for event in messaging_events: + if "message" in event and "text" in event["message"]: + yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') + else: + yield event["sender"]["id"], "I can't echo this" +``` + +对每个消息迭代时,我们会调用 send_message 函数,然后我们使用 Facebook Graph messages API 对 Messenger 发回 POST 请求。在这期间我们一直没有回应我们阻塞的原始 Messenger请求。这会导致超时和 5XX 错误。 + +上述情况是我在解决遇到错误时发现的,当用户发送表情时实际上是发送的 unicode 标识符,但是被 Python 错误的编码了,最终我们发回了一些乱码。 + +这个发回 Messenger 的 POST 请求将永远不会完成,这会导致给初始请求返回 5xx 状态码,显示服务不可用。 + +通过使用 `encode('unicode_escape')` 封装消息,然后在我们发送回消息前用 `decode('unicode_escape')` 解码消息就可以解决。 + +``` +def send_message(token, recipient, text): + """Send the message text to recipient with id recipient. + """ + + r = requests.post("https://graph.facebook.com/v2.6/me/messages", + params={"access_token": token}, + data=json.dumps({ + "recipient": {"id": recipient}, + "message": {"text": text.decode('unicode_escape')} + }), + headers={'Content-type': 'application/json'}) + if r.status_code != requests.codes.ok: + print r.text +``` + +### 部署到 Heroku + +一旦代码已经建立成我想要的样子时就可以进行下一步。部署应用。 + +那么,该怎么做? 
+
+我之前在 Heroku 上部署过应用(主要是 Rails),然而我总是遵循某种教程做的,所用的配置是创建好了的。而在本文的情况下,我就需要从头开始。
+
+幸运的是有官方 [Heroku 文档][14]来帮忙。这篇文档很好地说明了运行应用程序所需的最低限度。
+
+长话短说,我们需要的除了我们的代码还有两个文件。第一个文件是“requirements.txt”,它列出了运行应用所依赖的库。
+
+需要的第二个文件是“Procfile”。这个文件通知 Heroku 如何运行我们的服务。此外这个文件只需要一点点内容:
+
+```
+web: gunicorn echoserver:app
+```
+
+Heroku 对它的解读是,我们的应用通过运行 echoserver.py 启动,并且应用将使用 gunicorn 作为 Web 服务器。我们使用一个额外的网站服务器是因为与性能相关,在上面的 Heroku 文档里对此解释了:
+
+> Web 应用程序并发处理传入的 HTTP 请求比一次只处理一个请求的 Web 应用程序能更有效地利用 dyno 的资源。由于这个原因,我们建议使用支持并发请求的 Web 服务器来部署和运行产品级服务。
+
+> Django 和 Flask web 框架提供了一个方便的内建 Web 服务器,但是这些阻塞式服务器一个时刻只能处理一个请求。如果你部署这种服务到 Heroku 上,你的 dyno 就会资源利用率低下,应用会感觉反应迟钝。
+
+> Gunicorn 是一个纯 Python 的 HTTP 服务器,用于 WSGI 应用。允许你在单独一个 dyno 内通过运行多 Python 进程的方式来并发地运行各种 Python 应用。它在性能、灵活性和配置简易性方面取得了完美的平衡。
+
+回到我们之前提到过的“requirements.txt”文件,让我们看看它如何结合 Virtualenv 工具。
+
+很多情况下,你的开发机器也许已经安装了很多 python 库。当部署应用时你不想全部加载那些库,但是辨认出你实际使用哪些库很困难。
+
+Virtualenv 可以创建一个新的空白虚拟环境,以便你可以只安装你应用所需要的库。
+
+你可以运行如下命令来检查当前安装了哪些库:
+
+```
+kostis@KostisMBP ~ $ pip freeze
+cycler==0.10.0
+Flask==0.10.1
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+matplotlib==1.5.1
+numpy==1.10.4
+pyparsing==2.1.0
+python-dateutil==2.5.0
+pytz==2015.7
+requests==2.10.0
+scipy==0.17.0
+six==1.10.0
+virtualenv==15.0.1
+Werkzeug==0.11.10
+```
+
+注意:pip 工具应该已经与 Python 一起安装在你的机器上。如果没有,查看[官方网站][15]如何安装它。
+
+现在让我们使用 Virtualenv 来创建一个新的空白环境。首先我们给我们的项目创建一个新文件夹,然后进到目录下:
+
+```
+kostis@KostisMBP projects $ mkdir echoserver
+kostis@KostisMBP projects $ cd echoserver/
+kostis@KostisMBP echoserver $
+```
+
+现在来创建一个叫做 echobot 的新环境。运行下面的 source 命令激活它,然后使用 pip freeze 检查,我们能看到现在是空的。
+
+```
+kostis@KostisMBP echoserver $ virtualenv echobot
+kostis@KostisMBP echoserver $ source echobot/bin/activate
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+(echobot) kostis@KostisMBP echoserver $
+```
+
+我们可以安装需要的库。我们需要的是 flask、gunicorn 和 requests,它们被安装后我们就创建 requirements.txt 文件:
+
+```
+(echobot) kostis@KostisMBP echoserver $ pip install flask
+(echobot) kostis@KostisMBP echoserver $ pip install gunicorn
+(echobot) kostis@KostisMBP echoserver $ pip install requests
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+click==6.6
+Flask==0.11
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+requests==2.10.0
+Werkzeug==0.11.10
+(echobot) kostis@KostisMBP echoserver $ pip freeze > requirements.txt
+```
+
+上述完成之后,我们用 python 代码创建 echoserver.py 文件,然后用之前提到的命令创建 Procfile,我们最终的文件/文件夹如下:
+
+```
+(echobot) kostis@KostisMBP echoserver $ ls
+Procfile echobot echoserver.py requirements.txt
+```
+
+我们现在准备上传到 Heroku。我们需要做两件事。第一是如果还没有安装 Heroku Toolbelt,就安装它(详见 [Heroku][16])。第二是通过 Heroku [网页界面][17]创建一个新的 Heroku 应用。
+
+点击右上的大加号然后选择“Create new app”。
+
+![](http://tsaprailis.com/assets/create_app.png)
+
+为你的应用选择一个名字,然后点击“Create App”。
+
+![](http://tsaprailis.com/assets/create.png)
+
+你将会重定向到你的应用的控制面板,在那里你可以找到如何部署你的应用到 Heroku 的细节说明。
+
+```
+(echobot) kostis@KostisMBP echoserver $ heroku login
+(echobot) kostis@KostisMBP echoserver $ git init
+(echobot) kostis@KostisMBP echoserver $ heroku git:remote -a 
+(echobot) kostis@KostisMBP echoserver $ git add .
+(echobot) kostis@KostisMBP echoserver $ git commit -m "Initial commit"
+(echobot) kostis@KostisMBP echoserver (master) $ git push heroku master
+...
+remote: https://.herokuapp.com/ deployed to Heroku
+...
+(echobot) kostis@KostisMBP echoserver (master) $ heroku config:set WEB_CONCURRENCY=3
+```
+
+如上,当你推送你的修改到 Heroku 之后,你会得到一个用于公开访问你新创建的应用的 URL。保存该 URL,下一步需要它。
+
+### 创建这个 Facebook 应用
+
+让我们的机器人可以工作的最后一步是创建这个我们将连接到其上的 Facebook 应用。Facebook 通常要求每个应用都有一个相关页面,所以我们来[创建一个][18]。
+
+接下来我们去 [Facebook 开发者专页][19],点击右上角的“My Apps”按钮并选择“Add a New App”。不要选择建议的那个,而是点击“basic setup”。填入需要的信息并点击“Create App Id”,然后你会重定向到新的应用页面。
+
+![](http://tsaprailis.com/assets/facebook_app.png)
+
+
+在 “Products” 菜单之下,点击“+ Add Product”,然后在“Messenger”下点击“Get Started”。跟随这些步骤设置 Messenger,当完成后你就可以设置你的 webhooks 了。Webhook 简单来说就是供你的服务接收请求所用的 URL。点击 “Setup Webhooks” 按钮,并添加该 Heroku 应用的 URL (你之前保存的那个)。在校验令牌(Verify Token)中写入 ‘my_voice_is_my_password_verify_me’。你可以写入任何你要的内容,但是不管你在这里写入的是什么内容,要确保同时修改代码中 handle_verification 函数。然后勾选 “messages” 选项。
+
+![](http://tsaprailis.com/assets/webhooks.png)
+
+点击“Verify and Save”就完成了。Facebook 将访问该 Heroku 应用并校验它。如果不工作,可以试试运行:
+
+```
+(echobot) kostis@KostisMBP echoserver $ heroku logs -t
+```
+
+然后看看日志中是否有错误。如果发现错误,Google 搜索一下可能是最快的解决方法。
+
+最后一步是取得页面访问令牌(Page Access Token,PAT),它可以将该 Facebook 应用与你创建好的页面连接起来。
+
+![](http://tsaprailis.com/assets/PAT.png)
+
+从下拉列表中选择你创建好的页面。这会在“Page Access Token”(PAT)下面生成一个字符串。点击复制它,然后编辑 echoserver.py 文件,将其贴入 PAT 变量中。然后在 Git 中添加、提交并推送该修改。
+
+```
+(echobot) kostis@KostisMBP echoserver (master) $ git add .
+(echobot) kostis@KostisMBP echoserver (master) $ git commit -m "Initial commit"
+(echobot) kostis@KostisMBP echoserver (master) $ git push heroku master
+```
+
+最后,在 Webhooks 菜单下再次选择你的页面并点击“Subscribe”。
+
+![](http://tsaprailis.com/assets/subscribe.png)
+
+现在去访问你的页面并建立会话:
+
+![](http://tsaprailis.com/assets/success.png)
+
+成功了,机器人回显了!
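如果校验通过了但消息回显失败,也可以先在本地单独验证负载解析这一环。下面是一个简单的自测示意(其中的消息负载是假设的示例数据,字段结构参照上文 messaging_events 函数所解析的格式):

```shell
# 构造一个假设的 Messenger 消息负载(仅作演示,结构与上文代码解析的一致)
payload='{"entry":[{"messaging":[{"sender":{"id":"1234"},"message":{"text":"hello"}}]}]}'

# 按 messaging_events 的取值路径提取(发送者 ID,消息文本)
echo "$payload" | python3 -c '
import json, sys
data = json.load(sys.stdin)
for event in data["entry"][0]["messaging"]:
    print(event["sender"]["id"], event["message"]["text"])
'
```

如果这里能正确输出发送者 ID 和消息文本,说明解析逻辑没有问题,排查时就可以把注意力放在网络、URL 或 PAT 配置上。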
+ +注意:除非你要将这个机器人用在 Messenger 上测试,否则你就是机器人唯一响应的那个人。如果你想让其他人也试试它,到 [Facebook 开发者专页][19]中,选择你的应用、角色,然后添加你要添加的测试者。 + +###总结 + +这对于我来说是一个非常有用的项目,希望它可以指引你找到开始的正确方向。[官方的 Facebook 指南][20]有更多的资料可以帮你学到更多。 + +你可以在 [Github][21] 上找到该项目的代码。 + +如果你有任何评论、勘误和建议,请随时联系我。 + + +-------------------------------------------------------------------------------- + +via: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/ + +作者:[Konstantinos Tsaprailis][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/kostistsaprailis +[1]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#tech-stack +[2]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#bot-architecture +[3]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#the-bot-server +[4]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#deploying-to-heroku +[5]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#creating-the-facebook-app +[6]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#conclusion +[7]: https://www.heroku.com +[8]: https://www.python.org +[9]: http://flask.pocoo.org +[10]: https://git-scm.com +[11]: https://virtualenv.pypa.io/en/stable +[12]: https://github.com/hult/facebook-chatbot-python +[13]: https://github.com/hult +[14]: https://devcenter.heroku.com/articles/python-gunicorn +[15]: https://pip.pypa.io/en/stable/installing +[16]: https://toolbelt.heroku.com +[17]: https://dashboard.heroku.com/apps +[18]: 
https://www.facebook.com/pages/create +[19]: https://developers.facebook.com/ +[20]: https://developers.facebook.com/docs/messenger-platform/implementation +[21]: https://github.com/kostistsaprailis/messenger-bot-tutorial \ No newline at end of file diff --git a/translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md deleted file mode 100644 index 5ad4e62816..0000000000 --- a/translated/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md +++ /dev/null @@ -1,319 +0,0 @@ -如何用Python和Flask建立部署一个Facebook信使机器人教程 -========================================================================== - -这是我建立一个简单的Facebook信使机器人的记录。功能很简单,他是一个回显机器人,只是打印回用户写了什么。 - -回显服务器类似于服务器的“Hello World”例子。 - -这个项目的目的不是建立最好的信使机器人,而是获得建立一个小型机器人和每个事物是如何整合起来的感觉。 - -- [技术栈][1] -- [机器人架构][2] -- [机器人服务器][3] -- [部署到 Heroku][4] -- [创建 Facebook 应用][5] -- [结论][6] - -### 技术栈 - -使用到的技术栈: - -- [Heroku][7] 做后端主机。免费层足够这个等级的教程。回显机器人不需要任何种类的数据持久,所以不需要数据库。 -- [Python][8] 是选择的一个语言。版本选择2.7,虽然它移植到 Pyhton 3 很容易,只需要很少的改动。 -- [Flask][9] 作为网站开发框架。它是非常轻量的框架,用在小型工程或微服务是完美的。 -- 最后 [Git][10] 版本控制系统用来维护代码和部署到 Heroku。 -- 值得一提:[Virtualenv][11]。这个 python 工具是用来创建清洁的 python 库“环境”的,你只用安装必要的需求和最小化的应用封装。 - -### 机器人架构 - -信使机器人是由一个服务器组成,响应两种请求: - -- GET 请求被用来认证。他们与你注册的 FB 认证码一同被信使发出。 -- POST 请求被用来真实的通信。传统的工作流是,机器人将通过发送 POST 请求与用户发送的消息数据建立通信,我们将处理它,发送一个我们自己的 POST 请求回去。如果这一个完全成功(返回一个200 OK 状态)我们也响应一个200 OK 码给初始信使请求。 -这个教程应用将托管到Heroku,他提供了一个很好并且简单的接口来部署应用。如前所述,免费层可以满足这个教程。 - -在应用已经部署并且运行后,我们将创建一个 Facebook 应用然后连接它到我们的应用,以便信使知道发送请求到哪,这就是我们的机器人。 - -### 机器人服务器 - -基本的服务器代码可以在Github用户 [hult(Magnus Hult)][13] 的 [Chatbot][12] 工程上获取,经过一些代码修改只回显消息和一些我遇到的错误更正。最终版本的服务器代码: - -``` -from flask import Flask, request -import json -import requests - -app = Flask(__name__) - -# 这需要填写被授予的页面通行令牌 -# 通过 Facebook 应用创建令牌。 -PAT = '' - -@app.route('/', methods=['GET']) -def 
handle_verification(): - print "Handling Verification." - if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': - print "Verification successful!" - return request.args.get('hub.challenge', '') - else: - print "Verification failed!" - return 'Error, wrong validation token' - -@app.route('/', methods=['POST']) -def handle_messages(): - print "Handling Messages" - payload = request.get_data() - print payload - for sender, message in messaging_events(payload): - print "Incoming from %s: %s" % (sender, message) - send_message(PAT, sender, message) - return "ok" - -def messaging_events(payload): - """Generate tuples of (sender_id, message_text) from the - provided payload. - """ - data = json.loads(payload) - messaging_events = data["entry"][0]["messaging"] - for event in messaging_events: - if "message" in event and "text" in event["message"]: - yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') - else: - yield event["sender"]["id"], "I can't echo this" - - -def send_message(token, recipient, text): - """Send the message text to recipient with id recipient. - """ - - r = requests.post("https://graph.facebook.com/v2.6/me/messages", - params={"access_token": token}, - data=json.dumps({ - "recipient": {"id": recipient}, - "message": {"text": text.decode('unicode_escape')} - }), - headers={'Content-type': 'application/json'}) - if r.status_code != requests.codes.ok: - print r.text - -if __name__ == '__main__': - app.run() -``` - -让我们分解代码。第一部分是引入所需: - -``` -from flask import Flask, request -import json -import requests -``` - -接下来我们定义两个函数(使用 Flask 特定的 app.route 装饰器),用来处理到我们的机器人的 GET 和 POST 请求。 - -``` -@app.route('/', methods=['GET']) -def handle_verification(): - print "Handling Verification." - if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': - print "Verification successful!" - return request.args.get('hub.challenge', '') - else: - print "Verification failed!" 
- return 'Error, wrong validation token' -``` - -当我们创建 Facebook 应用时声明由信使发送的 verify_token 对象。我们必须对自己进行认证。最后我们返回“hub.challenge”给信使。 - -处理 POST 请求的函数更有趣 - -``` -@app.route('/', methods=['POST']) -def handle_messages(): - print "Handling Messages" - payload = request.get_data() - print payload - for sender, message in messaging_events(payload): - print "Incoming from %s: %s" % (sender, message) - send_message(PAT, sender, message) - return "ok" -``` - -当调用我们抓取的消息负载时,使用函数 messaging_events 来中断它并且提取发件人身份和真实发送消息,生成一个 python 迭代器循环遍历。请注意信使发送的每个请求有可能多于一个消息。 - -``` -def messaging_events(payload): - """Generate tuples of (sender_id, message_text) from the - provided payload. - """ - data = json.loads(payload) - messaging_events = data["entry"][0]["messaging"] - for event in messaging_events: - if "message" in event and "text" in event["message"]: - yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') - else: - yield event["sender"]["id"], "I can't echo this" -``` - -迭代完每个消息时我们调用send_message函数然后我们执行POST请求回给使用Facebook图形消息接口信使。在这期间我们一直没有回应我们阻塞的原始信使请求。这会导致超时和5XX错误。 - -上述的发现错误是中断期间我偶然发现的,当用户发送表情时其实的是当成了 unicode 标识,无论如何 Python 发生了误编码。我们以发送回垃圾结束。 - -这个 POST 请求回到信使将不会结束,这会导致发生5xx状态返回给原始的请求,显示服务不可用。 - -通过使用`encode('unicode_escape')`转义消息然后在我们发送回消息前用`decode('unicode_escape')`解码消息就可以解决。 - -``` -def send_message(token, recipient, text): - """Send the message text to recipient with id recipient. - """ - - r = requests.post("https://graph.facebook.com/v2.6/me/messages", - params={"access_token": token}, - data=json.dumps({ - "recipient": {"id": recipient}, - "message": {"text": text.decode('unicode_escape')} - }), - headers={'Content-type': 'application/json'}) - if r.status_code != requests.codes.ok: - print r.text -``` - -### 部署到 Heroku - -一旦代码已经建立成我想要的样子时就可以进行下一步。部署应用。 - -当然,但是怎么做? 
- -我之前已经部署了应用到 Heroku (主要是 Rails)然而我总是遵循某种教程,所以配置已经创建。在这种情况下,尽管我必须从头开始。 - -幸运的是有官方[Heroku文档][14]来帮忙。这篇文章很好地说明了运行应用程序所需的最低限度。 - -长话短说,我们需要的除了我们的代码还有两个文件。第一个文件是“requirements.txt”他列出了运行应用所依赖的库。 - -需要的第二个文件是“Procfile”。这个文件通知 Heroku 如何运行我们的服务。此外这个文件最低限度如下: - ->web: gunicorn echoserver:app - -heroku解读他的方式是我们的应用通过运行 echoserver.py 开始并且应用将使用 gunicorn 作为网站服务器。我们使用一个额外的网站服务器是因为与性能相关并在上面的Heroku文档里解释了: - ->Web 应用程序并发处理传入的HTTP请求比一次只处理一个请求的Web应用程序,更有效利地用动态资源。由于这个原因,我们建议使用支持并发请求的 web 服务器处理开发和生产运行的服务。 - ->Django 和 Flask web 框架特性方便内建 web 服务器,但是这些阻塞式服务器一个时刻只处理一个请求。如果你部署这种服务到 Heroku上,你的动态资源不会充分使用并且你的应用会感觉迟钝。 - ->Gunicorn 是一个纯 Python HTTP 的 WSGI 引用服务器。允许你在单独一个动态资源内通过并发运行多 Python 进程的方式运行任一 Python 应用。它提供了一个完美性能,弹性,简单配置的平衡。 - -回到我们提到的“requirements.txt”文件,让我们看看它如何结合 Virtualenv 工具。 - -在任何时候,你的开发机器也许有若干已安装的 python 库。当部署应用时你不想这些库被加载因为很难辨认出你实际使用哪些库。 - -Virtualenv 创建一个新的空白虚拟环境,因此你可以只安装你应用需要的库。 - -你可以检查当前安装使用哪些库的命令如下: - -``` -kostis@KostisMBP ~ $ pip freeze -cycler==0.10.0 -Flask==0.10.1 -gunicorn==19.6.0 -itsdangerous==0.24 -Jinja2==2.8 -MarkupSafe==0.23 -matplotlib==1.5.1 -numpy==1.10.4 -pyparsing==2.1.0 -python-dateutil==2.5.0 -pytz==2015.7 -requests==2.10.0 -scipy==0.17.0 -six==1.10.0 -virtualenv==15.0.1 -Werkzeug==0.11.10 -``` - -注意:pip 工具应该已经与 Python 一起安装在你的机器上。 - -如果没有,查看[官方网站][15]如何安装他。 - -现在让我们使用 Virtualenv 来创建一个新的空白环境。首先我们给我们的工程创建一个新文件夹,然后进到目录下: - -``` -kostis@KostisMBP projects $ mkdir echoserver -kostis@KostisMBP projects $ cd echoserver/ -kostis@KostisMBP echoserver $ -``` - -现在来创建一个叫做 echobot 新的环境。运行下面的 source 命令激活它,然后使用 pip freeze 检查,我们能看到现在是空的。 - -``` -kostis@KostisMBP echoserver $ virtualenv echobot -kostis@KostisMBP echoserver $ source echobot/bin/activate -(echobot) kostis@KostisMBP echoserver $ pip freeze -(echobot) kostis@KostisMBP echoserver $ -``` - -我们可以安装需要的库。我们需要是 flask,gunicorn,和 requests,他们被安装完我们就创建 requirements.txt 文件: - -``` -(echobot) kostis@KostisMBP echoserver $ pip install flask -(echobot) kostis@KostisMBP echoserver $ pip install gunicorn -(echobot) 
kostis@KostisMBP echoserver $ pip install requests -(echobot) kostis@KostisMBP echoserver $ pip freeze -click==6.6 -Flask==0.11 -gunicorn==19.6.0 -itsdangerous==0.24 -Jinja2==2.8 -MarkupSafe==0.23 -requests==2.10.0 -Werkzeug==0.11.10 -(echobot) kostis@KostisMBP echoserver $ pip freeze > requirements.txt -``` - -毕竟上文已经被运行,我们用 python 代码创建 echoserver.py 文件然后用之前提到的命令创建 Procfile,我们应该以下面的文件/文件夹结束: - -``` -(echobot) kostis@KostisMBP echoserver $ ls -Procfile echobot echoserver.py requirements.txt -``` - -我们现在准备上传到 Heroku。我们需要做两件事。第一是安装 Heroku toolbet 如果你还没安装到你的系统中(详细看[Heroku][16])。第二通过[网页接口][17]创建一个新的 Heroku 应用。 - -点击右上的大加号然后选择“Create new app”。 - - - - - - - - --------------------------------------------------------------------------------- - -via: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/ - -作者:[Konstantinos Tsaprailis][a] -译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://github.com/kostistsaprailis -[1]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#tech-stack -[2]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#bot-architecture -[3]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#the-bot-server -[4]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#deploying-to-heroku -[5]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#creating-the-facebook-app -[6]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#conclusion -[7]: https://www.heroku.com -[8]: 
https://www.python.org -[9]: http://flask.pocoo.org -[10]: https://git-scm.com -[11]: https://virtualenv.pypa.io/en/stable -[12]: https://github.com/hult/facebook-chatbot-python -[13]: https://github.com/hult -[14]: https://devcenter.heroku.com/articles/python-gunicorn -[15]: https://pip.pypa.io/en/stable/installing -[16]: https://toolbelt.heroku.com -[17]: https://dashboard.heroku.com/apps - - From e5ff58184b2cd9608d0c8d559fc94c6cfc25ca6c Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 4 Aug 2016 16:36:43 +0800 Subject: [PATCH 340/471] PUB:20160718 Creating your first Git repository MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @vim-kakali 校对累死我了——这个作者写的真累。 --- ...0718 Creating your first Git repository.md | 176 ++++++++++++++++ ...0718 Creating your first Git repository.md | 199 ------------------ 2 files changed, 176 insertions(+), 199 deletions(-) create mode 100644 published/20160718 Creating your first Git repository.md delete mode 100644 translated/tech/20160718 Creating your first Git repository.md diff --git a/published/20160718 Creating your first Git repository.md b/published/20160718 Creating your first Git repository.md new file mode 100644 index 0000000000..d1a8ca2840 --- /dev/null +++ b/published/20160718 Creating your first Git repository.md @@ -0,0 +1,176 @@ +Git 系列(三):建立你的第一个 Git 仓库 +====================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open_abstract_pieces.jpg?itok=ZRt0Db00) + +现在是时候学习怎样创建你自己的 Git 仓库了,还有怎样增加文件和完成提交。 + +在本系列[前面的文章][4]中,你已经学习了怎样作为一个最终用户与 Git 进行交互;你就像一个漫无目的的流浪者一样偶然发现了一个开源项目网站,克隆了仓库,然后你就可以继续钻研它了。你知道了和 Git 进行交互并不像你想的那样困难,或许你只是需要被说服现在去使用 Git 完成你的工作罢了。 + +虽然 Git 确实是被许多重要软件选作版本控制工具,但是并不是仅能用于这些重要软件;它也能管理你购物清单(如果它们对你来说很重要的话,当然可以了!)、你的配置文件、周报或日记、项目进展日志、甚至源代码! 
+ +使用 Git 是很有必要的,毕竟,你肯定有过因为一个备份文件不能够辨认出版本信息而抓狂的时候。 + +Git 无法帮助你,除非你开始使用它,而现在就是开始学习和使用它的最好时机。或者,用 Git 的话来说,“没有其他的 `push` 能像 `origin HEAD` 一样有帮助了”(千里之行始于足下的意思)。我保证,你很快就会理解这一点的。 + +### 类比于录音 + +我们经常用名词“快照”来指代计算机上的镜像,因为很多人都能够对插满了不同时光的照片的相册充满了感受。这很有用,不过,我认为 Git 更像是进行一场录音。 + +也许你不太熟悉传统的录音棚卡座式录音机,它包括几个部件:一个可以正转或反转的转轴、保存声音波形的磁带,可以通过拾音头在磁带上记录声音波形,或者检测到磁带上的声音波形并播放给听众。 + +除了往前播放磁带,你也可以把磁带倒回到之前的部分,或快进跳过后面的部分。 + +想象一下上世纪 70 年代乐队录制磁带的情形。你可以想象到他们一遍遍地练习歌曲,直到所有部分都非常完美,然后记录到音轨上。起初,你会录下鼓声,然后是低音,再然后是吉他声,最后是主唱。每次你录音时,录音棚工作人员都会把磁带倒带,然后进入循环模式,这样它就会播放你之前录制的部分。比如说如果你正在录制低音,你就会在背景音乐里听到鼓声,就像你自己在击鼓一样,然后吉他手在录制时会听到鼓声、低音(和牛铃声)等等。在每个循环中,你都会录制一部分,在接下来的循环中,工作人员就会按下录音按钮将其合并记录到磁带中。 + +你也可以拷贝或换下整个磁带,如果你要对你的作品重新混音的话。 + +现在我希望对于上述的上世纪 70 年代的录音工作的描述足够生动,这样我们就可以把 Git 的工作想象成一个录音工作了。 + +### 新建一个 Git 仓库 + +首先得为我们的虚拟的录音机买一些磁带。用 Git 的话说,这些磁带就是*仓库*;它是完成所有工作的基础,也就是说这里是存放 Git 文件的地方(即 Git 工作区)。 + +任何目录都可以成为一个 Git 仓库,但是让我们从一个新目录开始。这需要下面三个命令: + +- 创建目录(如果你喜欢的话,你可以在你的图形化的文件管理器里面完成。) +- 在终端里切换到目录。 +- 将其初始化成一个 Git 管理的目录。 + +也就是运行如下代码: + +``` +$ mkdir ~/jupiter # 创建目录 +$ cd ~/jupiter # 进入目录 +$ git init . # 初始化你的新 Git 工作区 +``` + +在这个例子中,文件夹 jupiter 是一个空的但是合法的 Git 仓库。 + +有了仓库接下来的事情就可以按部就班进行了。你可以克隆该仓库,你可以在一个历史点前后来回穿梭(前提是你有一个历史点),创建交替的时间线,以及做 Git 能做的其它任何事情。 + +在 Git 仓库里面工作和在任何目录里面工作都是一样的,可以在仓库中新建文件、复制文件、保存文件。你可以像平常一样做各种事情;Git 并不复杂,除非你把它想复杂了。 + +在本地的 Git 仓库中,一个文件可以有以下这三种状态: + +- 未跟踪文件(Untracked):你在仓库里新建了一个文件,但是你没有把文件加入到 Git 的管理之中。 +- 已跟踪文件(Tracked):已经加入到 Git 管理的文件。 +- 暂存区文件(Staged):被修改了的已跟踪文件,并加入到 Git 的提交队列中。 + +任何你新加入到 Git 仓库中的文件都是未跟踪文件。这些文件保存在你的电脑硬盘上,但是你没有告诉 Git 这是需要管理的文件,用我们的录音机来类比,就是录音机还没打开;乐队就开始在录音棚里忙碌了,但是录音机并没有准备录音。 + +不用担心,Git 会在出现这种情况时告诉你: + +``` +$ echo "hello world" > foo +$ git status +On branch master +Untracked files: +(use "git add ..." 
to include in what will be committed) + foo +nothing added but untracked files present (use "git add" to track) +``` + +你看到了,Git 会提醒你怎样把文件加入到提交任务中。 + +### 不使用 Git 命令进行 Git 操作 + +在 GitHub 或 GitLab 上创建一个仓库只需要用鼠标点几下即可。这并不难,你单击“New Repository”这个按钮然后跟着提示做就可以了。 + +在仓库中包括一个“README”文件是一个好习惯,这样人们在浏览你的仓库的时候就可以知道你的仓库是干什么的,更有用的是可以让你在克隆一个有东西的仓库前知道它有些什么。 + +克隆仓库通常很简单,但是在 GitHub 上获取仓库改动权限就稍微复杂一些,为了通过 GitHub 验证你必须有一个 SSH 密钥。如果你使用 Linux 系统,可以通过下面的命令生成: + +``` +$ ssh-keygen +``` + +然后复制你的新密钥的内容,它是纯文本文件,你可以使用一个文本编辑器打开它,也可以使用如下 cat 命令查看: + +``` +$ cat ~/.ssh/id_rsa.pub +``` + +现在把你的密钥粘贴到 [GitHub SSH 配置文件][1] 中,或者 [GitLab 配置文件][2]。 + +如果你通过使用 SSH 模式克隆了你的项目,你就可以将修改写回到你的仓库了。 + +另外,如果你的系统上没有安装 Git 的话也可以使用 GitHub 的文件上传接口来添加文件。 + +![](https://opensource.com/sites/default/files/2_githubupload.jpg) + +### 跟踪文件 + +正如命令 `git status` 的输出告诉你的那样,如果你想让 git 跟踪一个文件,你必须使用命令 `git add` 把它加入到提交任务中。这个命令把文件存在了暂存区,这里存放的都是等待提交的文件,或者也可以用在快照中。在将文件包括到快照中,和添加要 Git 管理的新的或临时文件时,`git add` 命令的目的是不同的,不过至少现在,你不用为它们之间的不同之处而费神。 + +类比录音机,这个动作就像打开录音机开始准备录音一样。你可以想象为对已经在录音的录音机按下暂停按钮,或者倒回开头等着记录下个音轨。 + +当你把文件添加到 Git 管理中,它会标识其为已跟踪文件: + +``` +$ git add foo +$ git status +On branch master +Changes to be committed: +(use "git reset HEAD ..." 
to unstage) +new file: foo +``` + +加入文件到提交任务中并不是“准备录音”。这仅仅是将该文件置于准备录音的状态。在你添加文件后,你仍然可以修改该文件;它只是被标记为**已跟踪**和**处于暂存区**,所以在它被写到“磁带”前你可以将它撤出或修改它(当然你也可以再次将它加入来做些修改)。但是请注意:你还没有在磁带中记录该文件,所以如果弄坏了一个之前还是好的文件,你是没有办法恢复的,因为你没有在“磁带”中记下那个文件还是好着的时刻。 + +如果你最后决定不把文件记录到 Git 历史列表中,那么你可以撤销提交任务,在 Git 中是这样做的: + +``` +$ git reset HEAD foo +``` + +这实际上就是解除了录音机的准备录音状态,你只是在录音棚中转了一圈而已。 + +### 大型提交 + +有时候,你想要提交一些内容到仓库;我们以录音机类比,这就好比按下录音键然后记录到磁带中一样。 + +在一个项目所经历的不同阶段中,你会按下这个“记录键”无数次。比如,如果你尝试了一个新的 Python 工具包并且最终实现了窗口呈现功能,然后你肯定要进行提交,以便你在实验新的显示选项时搞砸了可以回退到这个阶段。但是如果你在 Inkscape 中画了一些图形草样,在提交前你可能需要等到已经有了一些要开发的内容。尽管你可能提交了很多次,但是 Git 并不会浪费很多,也不会占用太多磁盘空间,所以在我看来,提交的越多越好。 + +`commit` 命令会“记录”仓库中所有的暂存区文件。Git 只“记录”已跟踪的文件,即,在过去某个时间点你使用 `git add` 命令加入到暂存区的所有文件,以及从上次提交后被改动的文件。如果之前没有过提交,那么所有跟踪的文件都包含在这次提交中,以 Git 的角度来看,这是一次非常重要的修改,因为它们从没放到仓库中变成了放进去。 + +完成一次提交需要运行下面的命令: + +``` +$ git commit -m 'My great project, first commit.' +``` + +这就保存了所有提交的文件,之后可以用于其它操作(或者,用英国电视剧《神秘博士》中时间领主所讲的 Gallifreyan 语说,它们成为了“固定的时间点” )。这不仅是一个提交事件,也是一个你在 Git 日志中找到该提交的引用指针: + +``` +$ git log --oneline +55df4c2 My great project, first commit. 
+``` + +如果想浏览更多信息,只需要使用不带 `--oneline` 选项的 `git log` 命令。 + +在这个例子中提交时的引用号码是 55df4c2。它被叫做“提交哈希(commit hash)”(LCTT 译注:这是一个 SHA-1 算法生成的哈希码,用于表示一个 git 提交对象),它代表着刚才你的提交所包含的所有新改动,覆盖到了先前的记录上。如果你想要“倒回”到你的提交历史点上,就可以用这个哈希作为依据。 + +你可以把这个哈希想象成一个声音磁带上的 [SMPTE 时间码][3],或者再形象一点,这就是好比一个黑胶唱片上两首不同的歌之间的空隙,或是一个 CD 上的音轨编号。 + +当你改动了文件之后并且把它们加入到提交任务中,最终完成提交,这就会生成新的提交哈希,它们每一个所标示的历史点都代表着你的产品不同的版本。 + +这就是 Charlie Brown 这样的音乐家们为什么用 Git 作为版本控制系统的原因。 + +在接下来的文章中,我们将会讨论关于 Git HEAD 的各个方面,我们会真正地向你揭示时间旅行的秘密。不用担心,你只需要继续读下去就行了(或许你已经在读了?)。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/creating-your-first-git-repository + +作者:[Seth Kenlon][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://github.com/settings/keys +[2]: https://gitlab.com/profile/keys +[3]: http://slackermedia.ml/handbook/doku.php?id=timecode +[4]: https://linux.cn/article-7641-1.html diff --git a/translated/tech/20160718 Creating your first Git repository.md b/translated/tech/20160718 Creating your first Git repository.md deleted file mode 100644 index 722fe2ea56..0000000000 --- a/translated/tech/20160718 Creating your first Git repository.md +++ /dev/null @@ -1,199 +0,0 @@ - - -建立你的第一个仓库 -====================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open_abstract_pieces.jpg?itok=ZRt0Db00) - - -现在是时候学习怎样创建你自己的仓库了,还有怎样增加文件和完成提交。 - -在本系列前面文章的安装过程中,你已经学习了作为一个目标用户怎样与 Git 进行交互;你就像一个漫无目的的流浪者一样偶然发现了一个开源项目网站,然后克隆了仓库,Git 走进了你的生活。学习怎样和 Git 进行交互并不像你想的那样困难,或许你并不确信现在是否应该使用 Git 完成你的工作。 - - -Git 被认为是选择大多软件项目的工具,它不仅能够完成大多软件项目的工作;它也能管理你杂乱项目的列表(如果他们不重要,也可以这样说!),你的配置文件,一个日记,项目进展日志,甚至源代码! 
- -使用 Git 是很有必要的,毕竟,你肯定有因为一个备份文件不能够辨认出版本信息而烦恼的时候。 - -你不使用 Git,它也就不会为你工作,或者也可以把 Git 理解为“没有任何推送就像源头指针一样”【译注: HEAD 可以理解为“头指针”,是当前工作区的“基础版本”,当执行提交时, HEAD 指向的提交将作为新提交的父提交。】。我保证,你很快就会对 Git 有所了解 。 - - -### 类比于录音 - - -我们更喜欢谈论快照上的图像,因为很多人都可以通过一个相册很快辨认出每个照片上特有的信息。这可能很有用,然而,我认为 Git 更像是在进行声音的记录。 - -传统的录音机,可能你对于它的部件不是很清楚:它包含转轴并且正转或反转,使用磁带保存声音波形,通过放音头记录声音并保存到磁带上然后播放给收听者。 - - -除了往前退磁带,你也可以把磁带多绕几圈到磁带前面的部分,或快进跳过前面的部分到最后。 - -想象一下 70 年代的磁带录制的声音。你可能想到那会正在反复练习一首歌直到非常完美,它们最终被记录下来了。起初,你记录了鼓声,低音,然后是吉他声,还有其他的声音。每次你录音,工作人员都会把磁带重绕并设置为环绕模式,这样在你演唱的时候录音磁带就会播放之前录制的声音。如果你是低音歌唱,你唱歌的时候就需要把有鼓声的部分作为背景音乐,然后就是吉他声、鼓声、低音(和牛铃声【译注:一种打击乐器,状如四棱锥。】)等等。在每一环,你完成了整个部分,到了下一环,工作人员就开始在磁带上制作你的演唱作品。 - - -你也可以拷贝或换出整个磁带,这是你需要继续录音并且进行多次混合的时候需要做的。 - - -现在我希望对于上述 70 年代的录音工作的描述足够生动,我们就可以把 Git 的工作想象成一个录音磁带了。 - - -### 新建一个 Git 仓库 - - -首先得为我们的虚拟的录音机买一些磁带。在 Git 术语中,这就是仓库;它是完成所有工作的基础,也就是说这里是存放 Git 文件的地方(即 Git 工作区)。 - -任何目录都可以是一个 Git 仓库,但是在开始的时候需要进行一次更新。需要下面三个命令: - - -- 创建目录(如果你喜欢的话,你可以在你的 GUI 文件管理器里面完成。) -- 在终端里查看目录。 -- 初始化这个目录使它可以被 Git管理。 - - -特别是运行如下代码: - -``` -$ mkdir ~/jupiter # 创建目录 -$ cd ~/jupiter # 进入目录 -$ git init . 
# 初始化你的新 Git 工作区 -``` - - -在这个例子中,文件夹 jupiter 是空的但却成为了你的 Git 仓库。 - -有了仓库接下来的事件就按部就班了。你可以克隆项目仓库,你可以在一个历史点前后来回穿梭(前提是你有一个历史点),创建可交替时间线,然后剩下的工作 Git 就都能正常完成了。 - - -在 Git 仓库里面工作和在任何目录里面工作都是一样的;在仓库中新建文件,复制文件,保存文件。你可以像平常一样完成工作;Git 并不复杂,除非你把它想复杂了。 - -在本地的 Git 仓库中,一个文件可以有下面这三种状态: -- 未跟踪文件:你在仓库里新建了一个文件,但是你没有把文件加入到 Git 的提交任务(提交暂存区,stage)中。 -- 已跟踪文件:已经加入到 Git 暂存区的文件。 -- 暂存区文件:存在于暂存区的文件已经加入到 Git 的提交队列中。 - - -任何你新加入到 Git 仓库中的文件都是未跟踪文件。文件还保存在你的电脑硬盘上,但是你没有告诉 Git 这是需要提交的文件,就像我们的录音机,如果你没有打开录音机;乐队开始演唱了,但是录音机并没有准备录音。 - -不用担心,Git 会告诉你存在的问题并提示你怎么解决: -``` -$ echo "hello world" > foo -$ git status -位于您当前工作的分支 master 上 -未跟踪文件: -(使用 "git add " 更新要提交的内容) - foo -没有任何提交任务,但是存在未跟踪文件(用 "git add" 命令加入到提交任务) -``` - - -你看到了,Git 会提醒你怎样把文件加入到提交任务中。 - -### 不使用 Git 命令进行 Git 操作 - - -在 GitHub 或 GitLab(译注:GitLab 是一个用于仓库管理系统的开源项目。使用Git作为代码管理工具,并在此基础上搭建起来的web服务。)上创建一个仓库大多是使用鼠标点击完成的。这不会很难,你单击 New Repository 这个按钮就会很快创建一个仓库。 - -在仓库中新建一个 README 文件是一个好习惯,这样人们在浏览你的仓库的时候就可以知道你的仓库基于什么项目,更有用的是通过 README 文件可以确定克隆的是否为一个非空仓库。 - - -克隆仓库通常很简单,但是在 GitHub 上获取仓库改动权限就不简单了,为了进行用户验证你必须有一个 SSH 秘钥。如果你使用 Linux 系统,通过下面的命令可以生成一个秘钥: -``` -$ ssh-keygen -``` - - -复制纯文本文件里的秘钥。你可以使用一个文本编辑器打开它,也可以使用 cat 命令: - -``` -$ cat ~/.ssh/id_rsa.pub -``` - - -现在把你的秘钥拷贝到 [GitHub SSH 配置文件][1] 中,或者 [GitLab 配置文件[2]。 - -如果你通过使用 SSH 模式克隆了你的项目,就可以在你的仓库开始工作了。 - -另外,如果你的系统上没有安装 Git 的话也可以使用 GitHub 的文件上传接口来克隆仓库。 - -![](https://opensource.com/sites/default/files/2_githubupload.jpg) - - -### 跟踪文件 - -命令 git status 的输出会告诉你如果你想让 git 跟踪一个文件,你必须使用命令 git add 把它加入到提交任务中。这个命令把文件存在了暂存区,暂存区存放的都是等待提交的文件,或者把仓库保存为一个快照。git add 命令的最主要目的是为了区分你已经保存在仓库快照里的文件,还有新建的或你想提交的临时文件,至少现在,你都不用为它们之间的不同之处而费神了。 - -类比大型录音机,这个动作就像打开录音机开始准备录音一样。你可以按已经录音的录音机上的 pause 按钮来完成推送,或者按下重置按钮等待开始跟踪下一个文件。 - -如果你把文件加入到提交任务中,Git 会自动标识为跟踪文件: - -``` -$ git add foo -$ git status -位于您当前工作的分支 master 上 -下列修改将被提交: -(使用 "git reset HEAD ..." 
将下列改动撤出提交任务) -新增文件:foo -``` - - -加入文件到提交任务中并不会生成一个记录。这仅仅是为了之后方便记录而把文件存放到暂存区。在你把文件加入到提交任务后仍然可以修改文件;文件会被标记为跟踪文件并且存放到暂存区,所以你在最终提交之前都可以改动文件或撤出提交任务(但是请注意:你并没有记录文件,所以如果你完全改变了文件就没有办法撤销了,因为你没有记住最终修改的准确时间。)。 - -如果你决定不把文件记录到 Git 历史列表中,那么你可以撤出提交任务,在 Git 中是这样做的: -``` -$ git reset HEAD foo -``` - - -这实际上就是删除了录音机里面的录音,你只是在工作区转了一圈而已而已。 - -### 大型提交 - -有时候,你会需要完成很多提交;我们以录音机类比,这就好比按下录音键并最终按下保存键一样。 - -在一个项目从建立到完成,你会按记录键无数次。比如,如果你通过你的方式使用一个新的 Python 工具包并且最终实现了窗口展示,然后你就很肯定的提交了文件,但是不可避免的会发生一些错误,现在你却不能撤销你的提交操作了。 - -一次提交会记录仓库中所有的暂存区文件。Git 只记录加入到提交任务中的文件,也就是说在过去某个时刻你使用 git add 命令加入到暂存区的所有文件。还有从先前的提交开始被改动的文件。如果没有其他的提交,所有的跟踪文件都包含在这次提交中,因为在浏览 Git 历史点的时候,它们没有存在于仓库中。 - -完成一次提交需要运行下面的命令: -``` -$ git commit -m 'My great project, first commit.' -``` - -这就保存了所有需要在仓库中提交的文件(或者,如果你说到 Gallifreyan【译注:英国电视剧《神秘博士》里的时间领主使用的一种优雅的语言,】,它们可能就是“固定的时间点” )。你不仅能看到整个提交记录,还能通过 git log 命令查看修改日志找到提交时的版本号: -``` -$ git log --oneline -55df4c2 My great project, first commit. -``` - - -如果想浏览更多信息,只需要使用不带 --oneline 选项的 git log 命令。 - - -在这个例子中提交时的版本号是 55df4c2。它被叫做 commit hash(译注:一个SHA-1生成的哈希码,用于表示一个git commit对象。),它表示着刚才你的提交包含的所有改动,覆盖了先前的记录。如果你想要“倒回”到你的提交历史点上就可以用这个 commit hash 作为依据。 - -你可以把 commit hash 想象成一个声音磁带上的 [SMPTE timecode][3],或者再夸张一点,这就是好比一个黑胶唱片上两首不同的歌之间的不同点,或是一个 CD 上的轨段编号。 - - -你在很久前改动了文件并且把它们加入到提交任务中,最终完成提交,这就会生成新的 commit hashes,每个 commit hashes 标示的历史点都代表着你的产品不同的版本。 - - -这就是 Charlie Brown 把 Git 称为版本控制系统的原因。 - -在接下来的文章中,我们将会讨论你需要知道的关于 Git HEAD 的一切,我们不准备讨论关于 Git 的提交历史问题。基本不会提及,但是你可能会需要了解它(或许你已经有所了解?)。 - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/creating-your-first-git-repository - -作者:[Seth Kenlon][a] -译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]: https://github.com/settings/keys -[2]: https://gitlab.com/profile/keys -[3]: 
http://slackermedia.ml/handbook/doku.php?id=timecode
From aca86e505b95b8aeb0de6968a7165478199206d9 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Thu, 4 Aug 2016 21:13:55 +0800
Subject: [PATCH 341/471] =?UTF-8?q?20160804-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...How to Allow Awk to Use Shell Variables.md | 95 +++++++++++++++++++
 1 file changed, 95 insertions(+)
 create mode 100644 sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md

diff --git a/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md b/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md
new file mode 100644
index 0000000000..b407050da7
--- /dev/null
+++ b/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md
@@ -0,0 +1,95 @@
+How to Allow Awk to Use Shell Variables – Part 11
+==================================================
+
+When we write shell scripts, we normally include other smaller programs or commands such as Awk operations in our scripts. In the case of Awk, we have to find ways of passing some values from the shell to Awk operations.
+
+This can be done by using shell variables within Awk commands, and in this part of the series, we shall learn how to allow Awk to use shell variables that may contain values we want to pass to Awk commands.
+
+There are two possible ways you can enable Awk to use shell variables:
+
+### 1. Using Shell Quoting
+
+Let us take a look at an example to illustrate how you can actually use shell quoting to substitute the value of a shell variable in an Awk command. In this example, we want to search for a username in the file /etc/passwd, filter and print the user’s account information.
+
+Therefore, we can write a `test.sh` script with the following content:
+
+```
+#!/bin/bash
+
+#read user input
+read -p "Please enter username:" username
+
+#search for username in /etc/passwd file and print details on the screen
+cat /etc/passwd | awk "/$username/ "' { print $0 }'
+```
+
+Thereafter, save the file and exit.
+
+Interpretation of the Awk command in the test.sh script above:
+
+```
+cat /etc/passwd | awk "/$username/ "' { print $0 }'
+```
+
+`"/$username/ "` – shell quoting used to substitute the value of the shell variable username in the Awk command. The value of username is the pattern to be searched for in the file /etc/passwd.
+
+Note that the double quote is outside the Awk script, `‘{ print $0 }’`.
+
+Then make the script executable and run it as follows:
+
+```
+$ chmod +x test.sh
+$ ./test.sh
+```
+
+After running the script, you will be prompted to enter a username; type a valid username and hit Enter. You will view the user’s account details from the /etc/passwd file as below:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/08/Shell-Script-to-Find-Username-in-Passwd-File.png)
+>Shell Script to Find Username in Password File
+
+### 2. Using Awk’s Variable Assignment
+
+This method is much simpler and better in comparison to method one above. Considering the example above, we can run a simple command to accomplish the job. Under this method, we use the -v option to assign a shell variable to an Awk variable.
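In its simplest form, the idea behind `-v` looks like this (a throw-away example using echo instead of a file):

```shell
# The shell variable's value is copied into the Awk variable 'name',
# which is then usable inside the Awk program.
user="aaron"
echo "hello aaron" | awk -v name="$user" '$0 ~ name { print "matched:", $0 }'
```

Since the pattern matching happens entirely inside Awk, no shell quoting tricks are needed here.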
+
+Firstly, create a shell variable, username, and assign it the name that we want to search for in the /etc/passwd file:
+
+```
+username="aaronkilik"
+```
+
+Then type the command below and hit Enter:
+
+```
+# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/08/Find-Username-in-Password-File-Using-Awk.png)
+>Find Username in Password File Using Awk
+
+Explanation of the above command:
+
+- `-v` – Awk option to declare a variable
+- `username` – is the shell variable
+- `name` – is the Awk variable
+
+Let us take a careful look at `$0 ~ name` inside the Awk script, `' $0 ~ name {print $0}'`. Remember, when we covered Awk comparison operators in Part 4 of this series, one of the comparison operators was value ~ pattern, which means: true if value matches the pattern.
+
+The output (`$0`) of the cat command piped to Awk matches the pattern `(aaronkilik)`, which is the name we are searching for in /etc/passwd; as a result, the comparison operation is true. The line containing the user’s account information is then printed on the screen.
+
+### Conclusion
+
+We have covered an important section of Awk features that can help us use shell variables within Awk commands. Many times, you will write small Awk programs or commands within shell scripts and therefore, you need to have a clear understanding of how to use shell variables within Awk commands.
+
+In the next part of the Awk series, we shall dive into yet another critical section of Awk features, that is flow control statements. So stay tuned, and let’s keep learning and sharing.
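As a recap, both methods can be tried out in one self-contained script. The sketch below uses a small, made-up sample file instead of /etc/passwd, so it is safe to run anywhere:

```shell
#!/bin/bash
# Create a small sample file in /etc/passwd format (made-up data, for demo only)
cat > sample_passwd.txt <<'EOF'
root:x:0:0:root:/root:/bin/bash
aaronkilik:x:1000:1000:Aaron Kilik:/home/aaronkilik:/bin/bash
tecmint:x:1001:1001:Tecmint:/home/tecmint:/bin/bash
EOF

username="aaronkilik"

# Method 1: shell quoting - the double-quoted part is expanded by the shell
awk "/$username/ "'{ print $0 }' sample_passwd.txt

# Method 2: Awk variable assignment via -v
awk -v name="$username" '$0 ~ name { print $0 }' sample_passwd.txt

rm -f sample_passwd.txt
```

Both commands should print the same line for the user aaronkilik.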
+ + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 8bc5f01bcc13547081445b448858d1a81351c6b5 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 5 Aug 2016 09:16:12 +0800 Subject: [PATCH 342/471] PUB:Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators @vim-kakali --- ...ic Expressions and Assignment Operators.md | 121 ++++++++---------- 1 file changed, 55 insertions(+), 66 deletions(-) rename translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md => published/awk/Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md (59%) diff --git a/translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md b/published/awk/Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md similarity index 59% rename from translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md rename to published/awk/Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md index a5eb6b6df4..1f44f3c37a 100644 --- a/translated/tech/awk/20160714 Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md +++ b/published/awk/Part 8 - Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators.md @@ -1,27 +1,24 @@ - - -第 8 节--学习怎样使用 Awk 变量,数值表达式以及赋值运算符 - +awk 系列:怎样使用 awk 变量、数值表达式以及赋值运算符 
======================================================================================= +我觉得 [awk 系列][1] 将会越来越好,在本系列的前七节我们讨论了在 Linux 中处理文件和筛选字符串所需要的一些 awk 命令基础。 -我相信 [Awk 命令系列][1] 将会令人兴奋不已,在系列的前几节我们讨论了在 Linux 中处理文件和筛选字符串需要的基本 Awk 命令。 +在这一部分,我们将会进入 awk 更高级的部分,使用 awk 处理更复杂的文本和进行字符串过滤操作。因此,我们将会讲到 Awk 的一些特性,诸如变量、数值表达式和赋值运算符。 - -在这一部分,我们会对处理更复杂的文件和筛选字符串操作需要的更高级的命令进行讨论。因此,我们将会看到关于 Awk 的一些特性诸如变量,数值表达式和赋值运算符。 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Learn-Awk-Variables-Numeric-Expressions-Assignment-Operators.png) ->学习 Awk 变量,数值表达式和赋值运算符 -你可能已经在很多编程语言中接触过它们,比如 shell,C,Python等;这些概念在理解上和这些语言没有什么不同,所以在这一小节中你不用担心很难理解,我们将会简短的提及常用的一些 Awk 特性。 +*学习 Awk 变量,数值表达式和赋值运算符* + +你可能已经在很多编程语言中接触过它们,比如 shell,C,Python 等;这些概念在理解上和这些语言没有什么不同,所以在这一小节中你不用担心很难理解,我们将会简短的提及常用的一些 awk 特性。 + +这一小节可能是 awk 命令里最容易理解的部分,所以放松点,我们开始吧。 -这一小节可能是 Awk 命令里最容易理解的部分,所以放松点,我们开始吧。 ### 1. Awk 变量 - -在任何编程语言中,当你在程序中新建一个变量的时候这个变量就是一个存储了值的占位符,程序一运行就占用了一些内存空间,你为变量赋的值会存储在这些内存空间上。 - +在很多编程语言中,变量就是一个存储了值的占位符,当你在程序中新建一个变量的时候,程序一运行就会在内存中创建一些空间,你为变量赋的值会存储在这些内存空间上。 你可以像下面这样定义 shell 变量一样定义 Awk 变量: + ``` variable_name=value ``` @@ -32,56 +29,59 @@ variable_name=value - `value`: 为变量赋的值 再看下面的一些例子: + ``` computer_name=”tecmint.com” port_no=”22” email=”admin@tecmint.com” -server=”computer_name” +server=computer_name ``` 观察上面的简单的例子,在定义第一个变量的时候,值 'tecmint.com' 被赋给了 'computer_name' 变量。 +此外,值 22 也被赋给了 port\_no 变量,把一个变量的值赋给另一个变量也是可以的,在最后的例子中我们把变量 computer\_name 的值赋给了变量 server。 -此外,值 22 也被赋给了 port_no 变量,把一个变量的值赋给另一个变量也是可以的,在最后的例子中我们把变量 computer_name 的值赋给了变量 server。 - -你可以看看 [本系列的第 2 节][2] 中提到的字段编辑,我们讨论了 Awk 怎样将输入的行分隔为若干字段并且使用标准的字段进行输入操作 ,$ 访问不同的被分配的字段。我们也可以像下面这样使用变量为字段赋值。 +你可以看看[本系列的第 2 节][2]中提到的字段编辑,我们讨论了 awk 怎样将输入的行分隔为若干字段并且使用标准字段访问操作符 `$` 来访问拆分出来的不同字段。我们也可以像下面这样使用变量为字段赋值。 ``` first_name=$2 second_name=$3 ``` -在上面的例子中,变量 first_name 的值设置为第二个字段,second_name 的值设置为第三个字段。 +在上面的例子中,变量 first\_name 的值设置为第二个字段,second\_name 的值设置为第三个字段。 +再举个例子,有一个名为 names.txt 的文件,这个文件包含了一个应用程序的用户列表,这个用户列表包含了用户的名和姓以及性别。可以使用 [cat 命令][3] 查看文件内容: 
-再举个例子,有一个名为 names.txt 的文件,这个文件包含了一个应用程序的用户列表,这个用户列表显示了用户的名字和曾用名以及性别。可以使用 [cat 命令][3] 查看文件内容: ``` $ cat names.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Content-Using-cat-Command.png) ->使用 cat 命令查看列表文件内容 +*使用 cat 命令查看列表文件内容* + +然后,我们也可以使用下面的 awk 命令把列表中第一个用户的第一个和第二个名字分别存储到变量 first\_name 和 second\_name 上: -然后,我们也可以使用下面的 Awk 命令把列表中第一个用户的第一个和第二个名字分别存储到变量 first_name 和 second_name 上: ``` $ awk '/Aaron/{ first_name=$2 ; second_name=$3 ; print first_name, second_name ; }' names.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Variables-Using-Awk-Command.png) ->使用 Awk 命令为变量赋值 +*使用 Awk 命令为变量赋值* 再看一个例子,当你在终端运行 'uname -a' 时,它可以打印出所有的系统信息。 -第二个字段包含了你的 'hostname',因此,我们可以像下面这样把它赋给一个叫做 hostname 的变量并且用 Awk 打印出来。 +第二个字段包含了你的主机名,因此,我们可以像下面这样把它赋给一个叫做 hostname 的变量并且用 awk 打印出来。 + ``` $ uname -a $ uname -a | awk '{hostname=$2 ; print hostname ; }' ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Store-Command-Output-to-Variable-Using-Awk.png) ->使用 Awk 把命令的输出赋给变量 + +*使用 Awk 把命令的输出赋给变量* ### 2. 
数值表达式 @@ -94,8 +94,8 @@ $ uname -a | awk '{hostname=$2 ; print hostname ; }' - `%` : 取模运算符 - `^` : 指数运算符 - 数值表达式的语法是: + ``` $ operand1 operator operand2 ``` @@ -103,6 +103,7 @@ $ operand1 operator operand2 上面的 operand1 和 operand2 可以是数值和变量,运算符可以是上面列出的任意一种。 下面是一些展示怎样使用数值表达式的例子: + ``` counter=0 num1=5 @@ -112,7 +113,8 @@ counter=counter+1 ``` -理解了 Awk 中数值表达式的用法,我们就可以看下面的例子了,文件 domians.txt 里包括了所有属于 Tecmint 的域名。 +要理解 Awk 中数值表达式的用法,我们可以看看下面的例子,文件 domians.txt 里包括了所有属于 Tecmint 的域名。 + ``` news.tecmint.com tecmint.com @@ -130,16 +132,19 @@ windows.tecmint.com tecmint.com ``` -可以使用下面的命令查看文件的内容; +可以使用下面的命令查看文件的内容: + ``` $ cat domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png) ->查看文件内容 + +*查看文件内容* 如果想要计算出域名 tecmint.com 在文件中出现的次数,我们就可以通过写一个简单的脚本实现这个功能: + ``` #!/bin/bash for file in $@; do @@ -158,7 +163,8 @@ exit 0 ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Shell-Script-to-Count-a-String-in-File.png) ->计算一个字符串或文本在文件中出现次数的 shell 脚本 + +*计算一个字符串或文本在文件中出现次数的 shell 脚本* 写完脚本后保存并赋予执行权限,当我们使用文件运行脚本的时候,文件 domains.txt 作为脚本的输入,我们会得到下面的输出: @@ -168,23 +174,24 @@ $ ./script.sh ~/domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-To-Count-String.png) ->计算字符串或文本出现次数的脚本 +*计算字符串或文本出现次数的脚本* 从脚本执行后的输出中,可以看到在文件 domains.txt 中包含域名 tecmint.com 的地方有 6 行,你可以自己计算进行验证。 ### 3. 
赋值操作符 -我们要说的最后的 Awk 特性是赋值运算符,下面列出的只是 Awk 中的部分赋值运算符: +我们要说的最后的 Awk 特性是赋值操作符,下面列出的只是 awk 中的部分赋值运算符: -- `*=` : 乘法赋值运算符 -- `+=` : 加法赋值运算符 -- `/=` : 除法赋值运算符 -- `-=` : 减法赋值运算符 -- `%=` : 取模赋值运算符 -- `^=` : 指数赋值运算符 +- `*=` : 乘法赋值操作符 +- `+=` : 加法赋值操作符 +- `/=` : 除法赋值操作符 +- `-=` : 减法赋值操作符 +- `%=` : 取模赋值操作符 +- `^=` : 指数赋值操作符 下面是 Awk 中最简单的一个赋值操作的语法: + ``` $ variable_name=variable_name operator operand ``` @@ -199,8 +206,8 @@ num=20 num=num-1 ``` +你可以使用在 awk 中使用上面的赋值操作符使命令更简短,从先前的例子中,我们可以使用下面这种格式进行赋值操作: -你可以使用在 Awk 中使用上面的赋值操作符使命令更简短,从先前的例子中,我们可以使用下面这种格式进行赋值操作: ``` variable_name operator=operand counter=0 @@ -209,8 +216,8 @@ num=20 num-=1 ``` +因此,我们可以在 shell 脚本中改变 awk 命令,使用上面提到的 += 操作符: -因此,我们可以在 shell 脚本中改变 Awk 命令,使用上面提到的 += 操作符: ``` #!/bin/bash for file in $@; do @@ -230,14 +237,15 @@ exit 0 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Alter-Shell-Script.png) ->改变了的 shell 脚本 + +*修改了的 shell 脚本* -在 [Awk 系列][4] 的这一部分,我们讨论了一些有用的 Awk 特性,有变量,使用数值表达式和赋值运算符,还有一些使用他们的实例。 +在 [awk 系列][4] 的这一部分,我们讨论了一些有用的 awk 特性,有变量,使用数值表达式和赋值运算符,还有一些使用它们的实例。 -这些概念和其他的编程语言没有任何不同,但是可能在 Awk 中有一些意义上的区别。 +这些概念和其他的编程语言没有任何不同,但是可能在 awk 中有一些意义上的区别。 -在本系列的第 9 节,我们会学习更多的 Awk 特性,比如特殊格式: BEGIN 和 END。这也会与 Tecmit 有联系。 +在本系列的第 9 节,我们会学习更多的 awk 特性,比如特殊格式: BEGIN 和 END。请继续关注。 -------------------------------------------------------------------------------- @@ -245,31 +253,12 @@ via: http://www.tecmint.com/learn-awk-variables-numeric-expressions-and-assignme 作者:[Aaron Kili][a] 译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对ID](https://github.com/校对ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: http://www.tecmint.com/author/aaronkili/ - - - - - - - - - - - - - - - - - - - -[1]: http://www.tecmint.com/category/awk-command/ -[2]: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/ +[1]: https://linux.cn/article-7586-1.html +[2]: https://linux.cn/article-7587-1.html [3]: 
http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ -[4]: http://www.tecmint.com/category/awk-command/ +[4]: https://linux.cn/article-7586-1.html From f183752cc349b4d421ae1fde90fca2f0c661c752 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 5 Aug 2016 09:50:32 +0800 Subject: [PATCH 343/471] =?UTF-8?q?20160805-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/talk/20160803 The revenge of Linux.md | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 sources/talk/20160803 The revenge of Linux.md diff --git a/sources/talk/20160803 The revenge of Linux.md b/sources/talk/20160803 The revenge of Linux.md new file mode 100644 index 0000000000..57a1ebe27c --- /dev/null +++ b/sources/talk/20160803 The revenge of Linux.md @@ -0,0 +1,36 @@ +The revenge of Linux +======================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/penguin%20swimming.jpg?itok=mfhEdRdM) + +In the beginning of Linux they laughed at it and didn't think it could do anything. Now Linux is everywhere! + +I was a junior at college studying Computer Engineering in Brazil, and working at the same time in a global auditing and consulting company as a sysadmin. The company decided to implement some enterprise resource planning (ERP) software with an Oracle database. I got training in Digital UNIX OS (DEC Alpha), and it blew my mind. + +The UNIX system was very powerful and gave us absolute control of the machine: storage systems, networking, applications, and everything. + +I started writing lots of scripts in ksh and Bash to automate backup, file transfer, extract, transform, load (ETL) operations, automate DBA routines, and created many other services that came up from different projects. Moreover, doing database and operating system tuning gave me a good understanding of how to get the best from a server. 
At that time, I used Windows 95 on my PC, and I would have loved to have Digital UNIX in my PC box, or even Solaris or HP-UX, but those UNIX systems were made to run on specific hardware. I read all the documentation that came with the systems, looked for additional books to get more information, and tested crazy ideas in our development environment. + +Later in college, I heard about Linux from my colleagues. I downloaded it back in the time of dialup internet, and I was very excited. The idea of having my standard PC with a UNIX-like system was amazing! + +As Linux is made to run in any general PC hardware, unlike UNIX systems, at the beginning it was really hard to get it working. Linux was just for sysadmins and geeks. I even made adjustments in drivers using C language to get it running. My previous experience with UNIX made me feel at home when I was compiling the Linux kernel, troubleshooting, and so on. It was very challenging for Linux to work with any unexpected hardware setups, as opposed to closed systems that fit just some specific hardware. + +I have been seeing Linux get space in data centers. Some adventurous sysadmins start boxes to help in everyday tasks for monitoring and managing the infrastructure, and then Linux gets more space as DNS and DHCP servers, printer management, and file servers. There used to be lots of FUD (fear, uncertainty and doubt) and criticism about Linux for the enterprise: Who is the owner of it? Who supports it? Are there applications for it? + +But nowadays it seems the revenge of Linux is everywhere! From developer's PCs to huge enterprise servers; we can find it in smart phones, watches, and in the Internet of Things (IoT) devices such as Raspberry Pi. Even Mac OS X has a kind of prompt with commands we are used to. Microsoft is making its own distribution, runs it at Azure, and then... Windows 10 is going to get Bash on it. 
+ +The interesting thing is that the IT market creates and quickly replaces new technologies, but the knowledge we got with old systems such as Digital UNIX, HP-UX, and Solaris is still useful and relevant with Linux, for business and just for fun. Now we can get absolute control of our systems, and use it at its maximum capabilities. Moreover Linux has an enthusiastic community! + +I really recommend that youngsters looking for a career in computing learn Linux. It does not matter what your branch in IT is. If you learn deeply how a standard home PC works, you can get in front of any big box talking basically the same language. With Linux you learn fundamental computing, and build skills that work anywhere in IT. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/8/revenge-linux + +作者:[Daniel Carvalho][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/danielscarvalho From 90d137399556eedc51bf3b3ce3c975269e8ab145 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 5 Aug 2016 14:53:20 +0800 Subject: [PATCH 344/471] PUB:20160316 Growing a career alongside Linux @chenxinlong --- ...160316 Growing a career alongside Linux.md | 49 +++++++++++++++++++ ...160316 Growing a career alongside Linux.md | 49 ------------------- 2 files changed, 49 insertions(+), 49 deletions(-) create mode 100644 published/20160316 Growing a career alongside Linux.md delete mode 100644 translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md diff --git a/published/20160316 Growing a career alongside Linux.md b/published/20160316 Growing a career alongside Linux.md new file mode 100644 index 0000000000..d536dea1bb --- /dev/null +++ b/published/20160316 Growing a career alongside Linux.md @@ -0,0 +1,49 @@ +伴随 Linux 成长的职业生涯 
+==================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT)
+
+我与 Linux 的故事开始于 1998 年,一直延续到今天。 当时我在 Gap 公司工作,管理着成千台运行着 [OS/2][1] 系统的台式机 ( 在随后的几年里变成了 [Warp 3.0][2])。 作为一个 OS/2 的粉丝,那时我非常喜欢那个时候。 随着这些台式机的嗡鸣,我们使用 Gap 开发的工具轻而易举地就能支撑起对成千的用户的服务支持。 然而,一切都将改变了。
+
+在 1998 年的 11 月, 我收到邀请加入一个新成立的公司,这家公司将专注于企业级 Linux 上。 这就是后来非常出名的 [Linuxcare][3]。
+
+### 我在 Linuxcare 的时光
+
+我曾经接触过一些 Linux , 但我从未想过要把它提供给企业客户。仅仅几个月后 ( 从这里开始成为了空间和时间上的转折点 ), 我就在管理一整条线的业务,让企业获得他们的软件,硬件,甚至是证书认证等各种在当时非常盛行的 Linux 服务。
+
+我支持的客户包括像 IBM ,Dell ,HP 这样的厂商以确保他们的硬件能够成功的运行 Linux 。 今天你们应该都听过许多关于在硬件上预装 Linux 的事, 但当时 Dell 邀请我去讨论为即将到来的贸易展上将 Linux 运行在认证的笔记本电脑上。 这是多么激动人心的时刻 !同时我们在之后几年内也支持了 IBM 和 HP 等多项认证工作。
+
+Linux 变化得非常快,并且它总是这样。 它也获得了更多的关键设备的支持,比如声音,网络和图形。在这段时间, 我把个人使用的系统从基于 RPM 的系统换成了 [Debian][4] 。
+
+### 使用 Linux 的这些年
+
+几年前我在一些做 Linux 硬件设备、Linux 定制软件以及 Linux 数据中心的公司工作。而在 2000 年代中期的时候,那时我正忙于为那些在雷蒙德附近(微软公司所在地)的大一些的软件公司做咨询工作,为他们对比 Linux 解决方案及其自己的解决方案做分析和验证。 我个人使用的系统一直没有改变,我仍会在尽可能的情况下运行 Debian 测试系统。
+
+我真的非常欣赏发行版的灵活性和永久更新状态。 Debian 是我所使用过的最有趣且拥有良好支持的发行版,并且它拥有最好的社区,而我是社区的一份子。
+
+当我回首我使用 Linux 的这几年,我仍记得大约在 2000 年代前期和中期的时候在圣何塞,旧金山,波士顿和纽约召开的那些 Linux Expo 大会。在 Linuxcare 时我们总是会摆一些有趣而且时髦的展位,在那边逛的时候总会碰到一些老朋友。这一切工作都是需要付出代价的,所有的这一切都是在努力地强调使用 Linux 的乐趣。
+
+虚拟化和云的崛起也让 Linux 变得更加有趣。 当我在 Linuxcare 的时候, 我们常和斯坦福大学附近的帕洛阿尔托的一个约 30 人左右的小公司在一块。我们会开车到他们的办公处,然后帮他们准备和我们一起参加展览的东西。 谁会想得到这个小小的初创公司会成就后来的 VMware ?
+ +我还有许多的故事,能认识这些人并和他们一起工作我感到很幸运。 Linux 在各方面都不断发展且变得尤为重要。 并且甚至随着它重要性的提升,它使用起来仍然非常有趣。 我认为它的开放性和可定制能力给它带来了大量的新用户,这也是让我感到非常震惊的一点。 + +### 现在 + +在过去的五年里我的工作重心逐渐离开 Linux。 我所管理的大规模基础设施项目中包含着许多不同的操作系统 ( 包括非开源的和开源的 ), 但我的心一直以来都是和 Linux 在一起的。 + +在使用 Linux 过程中的乐趣和不断进步是在过去的 18 年里一直驱动我的动力。我从 Linux 2.0 内核开始看着它变成现在的这样。 Linux 是一个卓越的、生机勃勃的且非常酷的东西。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/3/my-linux-story-michael-perry + +作者:[Michael Perry][a] +译者:[chenxinlong](https://github.com/chenxinlong) +校对:[wxy](https://github.com/wxy) + +[a]: https://opensource.com/users/mpmilestogo +[1]: https://en.wikipedia.org/wiki/OS/2 +[2]: https://archive.org/details/IBMOS2Warp3Collection +[3]: https://en.wikipedia.org/wiki/Linuxcare +[4]: https://www.debian.org/ +[5]: diff --git a/translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md b/translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md deleted file mode 100644 index f7ddefb92c..0000000000 --- a/translated/talk/my-open-source-story/20160316 Growing a career alongside Linux.md +++ /dev/null @@ -1,49 +0,0 @@ -培养一个 Linux 职业生涯 -================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT) - -我与 Linux 的故事开始于 1998 年,一直延续到今天。 当时我在 Gap 公司工作,管理着成千台运行着 [OS/2][1] 系统的台式机 ( 并且在随后的几年里是 [Warp 3.0][2])。 作为一个 OS/2 的使用者,那时我非常高兴。 随着这些台式机的嗡鸣,我们使用 Gap 开发的工具轻而易举地就能支撑起对成千的用户的服务支持。 然而,一切都将改变了。 - -在 1998 年的 11 月, 我收到邀请加入一个新成立的公司,这家公司将专注于 Linux 上。 这就是后来非常出名的 [Linuxcare][2]. 
- -### 我在 Linuxcare 的时光 - -我曾经有接触过一些 Linux , 但我从未想过要把它提供给企业客户。仅仅几个月后 ( 从这里开始成为了空间和时间上的转折点 ), 我就在管理一整条线的业务,让企业获得他们的软件,硬件,甚至是证书认证等不同风格且在当时非常盛行的的 Linux 服务。 - -我向我的客户提供像 IBM ,Dell ,HP 等产品以确保他们的硬件能够成功的运行 Linux 。 今天你们应该都听过许多关于在硬件上预载 Linux 的事, 但当时我被邀请到 Dell 去讨论有关于为即将到来的贸易展在笔记本电脑上运行 Linux 的相关事宜。 这是多么激动人心的时刻 !同时我们也在有效期的几年内支持 IBM 和 HP 等多项认证工作。 - -Linux 变化得非常快,并且它总是这样。 它也获得了更多的关键设备的支持,比如声音,网络和图形。在这段时间, 我把个人使用的系统从基于 RPM 的系统换成了 [Debian][3] 。 - -### 使用 Linux 的这些年 - -几年前我在一些以 Linux 为硬设备,客户软件以及数据中心的公司工作。而在二十世纪中期的时候,那时我正在忙着做咨询,而那些在 Redmond 的大型软件公司正围绕着 Linux 做一些分析和验证以和自己的解决方案做对比。 我个人使用的系统一直改变,我仍会在任何能做到的时候运行 Debian 测试系统。 - -我真的非常欣赏其发行版本的灵活性和永久更新状态。 Debian 是我所使用过的最有趣且拥有良好支持的发行版本且拥有最好的社区的版本之一。 - -当我回首我使用 Linux 的这几年,我仍记得大约在二十世纪前期和中期的时候在圣何塞,旧金山,波士顿和纽约的那些 Linux 的展览会。在 Linuxcare 时我们总是会做一些有趣而且时髦的展览位,在那边逛的时候总会碰到一些老朋友。这一切工作都是需要付出代价的,所有的这一切都是在努力地强调使用 Linux 的乐趣。 - -同时,虚拟化和云的崛起也让 Linux 变得更加有趣。 当我在 Linuxcare 的时候, 我们常和 Palo Alto 的一个约 30 人左右的小公司在一块。我们会开车到他们的办公处然后准备一个展览并且他们也会参与进来。 谁会想得到这个小小的开端会成就后来的 VMware ? - -我还有许多的故事,能认识这些人并和他们一起工作我感到很幸运。 Linux 在各方面都不断发展且变得尤为重要。 并且甚至随着它重要性的提升,它使用起来仍然非常有趣。 我认为它的开放性和可定制能力给它带来了大量的新用户,这也是让我感到非常震惊的一点。 - -### 现在 - -在过去的五年里我逐渐离开 Linux 的主流事物。 我所管理的大规模基础设施项目中包含着许多不同的操作系统 ( 包括非开源的和开源的 ), 但我的心一直以来都是和 Linux 在一起的。 - -在使用 Linux 过程中的乐趣和不断进步是在过去的 18 年里一直驱动我的动力。我从 Linux 2.0 内核开始看着它变成现在的这样。 Linux 是一个卓越,有机且非常酷的东西。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/3/my-linux-story-michael-perry - -作者:[Michael Perry][a] -译者:[译者ID](https://github.com/chenxinlong) -校对:[校对者ID](https://github.com/校对者ID) - -[a]: https://opensource.com/users/mpmilestogo -[1]: https://en.wikipedia.org/wiki/OS/2 -[2]: https://archive.org/details/IBMOS2Warp3Collection -[3]: https://en.wikipedia.org/wiki/Linuxcare -[4]: https://www.debian.org/ -[5]: From bca82aabae1c3c0f718b3aa484356c4a3377bfaa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Fri, 5 Aug 2016 17:24:54 +0800 Subject: [PATCH 345/471] Translated by 
cposture (#4275) * 75 * Translated by cposture --- ...18 An Introduction to Mocking in Python.md | 114 +++++++++--------- 1 file changed, 56 insertions(+), 58 deletions(-) rename {sources => translated}/tech/20160618 An Introduction to Mocking in Python.md (53%) diff --git a/sources/tech/20160618 An Introduction to Mocking in Python.md b/translated/tech/20160618 An Introduction to Mocking in Python.md similarity index 53% rename from sources/tech/20160618 An Introduction to Mocking in Python.md rename to translated/tech/20160618 An Introduction to Mocking in Python.md index d28fae272d..b8daaed937 100644 --- a/sources/tech/20160618 An Introduction to Mocking in Python.md +++ b/translated/tech/20160618 An Introduction to Mocking in Python.md @@ -1,32 +1,34 @@ Mock 在 Python 中的使用介绍 ===================================== -http://www.oschina.net/translate/an-introduction-to-mocking-in-python?cmp + 本文讲述的是 Python 中 Mock 的使用 -**如何在避免测试你的耐心的情景下执行单元测试** +**如何在避免测试你的耐心的情况下执行单元测试** -通常,我们编写的软件会直接与我们称之为肮脏无比的服务交互。用外行人的话说:交互已设计好的服务对我们的应用程序很重要,但是这会带来我们不希望的副作用,也就是那些在我们自己测试的时候不希望的功能。例如:我们正在写一个社交 app,并且想要测试一下我们 "发布到 Facebook" 的新功能,但是不想每次运行测试集的时候真的发布到 Facebook。 +很多时候,我们编写的软件会直接与那些被标记为肮脏无比的服务交互。用外行人的话说:交互已设计好的服务对我们的应用程序很重要,但是这会给我们带来不希望的副作用,也就是那些在一个自动化测试运行的上下文中不希望的功能。 + +例如:我们正在写一个社交 app,并且想要测试一下 "发布到 Facebook" 的新功能,但是不想每次运行测试集的时候真的发布到 Facebook。 -Python 的单元测试库包含了一个名为 unittest.mock 或者可以称之为依赖的子包,简言之为 mock——其提供了极其强大和有用的方法,通过它们可以模拟和打桩我们不希望的副作用。 +Python 的 `unittest` 库包含了一个名为 `unittest.mock` 或者可以称之为依赖的子包,简称为 +`mock` —— 其提供了极其强大和有用的方法,通过它们可以模拟和打桩来去除我们不希望的副作用。 >Source | -注意:mock [最近收录][1]到了 Python 3.3 的标准库中;先前发布的版本必须通过 [PyPI][2] 下载 Mock 库。 +> 注意:`mock` [最近收录][1]到了 Python 3.3 的标准库中;先前发布的版本必须通过 [PyPI][2] 下载 Mock 库。 -### -### Fear System Calls +### 恐惧系统调用 -再举另一个例子,思考一个我们会在余文讨论的系统调用。不难发现,这些系统调用都是主要的模拟对象:无论你是正在写一个可以弹出 CD 驱动的脚本,还是一个用来删除 /tmp 下过期的缓存文件的 Web 服务,这些调用都是在你的单元测试上下文中不希望的副作用。 +再举另一个例子,思考一个我们会在余文讨论的系统调用。不难发现,这些系统调用都是主要的模拟对象:无论你是正在写一个可以弹出 CD 驱动的脚本,还是一个用来删除 /tmp 下过期的缓存文件的 Web 服务,或者一个绑定到 TCP 端口的 
socket 服务器,这些调用都是在你的单元测试上下文中不希望的副作用。 > 作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数,而不是切身经历 CD 托盘每次在测试执行的时候都打开了。 作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数(使用了正确的参数等等),而不是切身经历 CD 托盘每次在测试执行的时候都打开了。(或者更糟糕的是,很多次,在一个单元测试运行期间多个测试都引用了弹出代码!) -同样,保持你的单元测试的效率和性能意味着需要让如此多的 "缓慢代码" 远离自动测试,比如文件系统和网络访问。 +同样,保持单元测试的效率和性能意味着需要让如此多的 "缓慢代码" 远离自动测试,比如文件系统和网络访问。 -对于我们首个例子,我们要从原始形式到使用 mock 地重构一个标准 Python 测试用例。我们会演示如何使用 mock 写一个测试用例使我们的测试更加智能、快速,并且能展示更多关于我们软件的工作原理。 +对于首个例子,我们要从原始形式到使用 `mock` 重构一个标准 Python 测试用例。我们会演示如何使用 mock 写一个测试用例,使我们的测试更加智能、快速,并展示更多关于我们软件的工作原理。 ### 一个简单的删除函数 @@ -42,9 +44,9 @@ def rm(filename): os.remove(filename) ``` -很明显,我们的 rm 方法此时无法提供比相关 os.remove 方法更多的功能,但我们的基础代码会逐步改善,允许我们在这里添加更多的功能。 +很明显,我们的 `rm` 方法此时无法提供比 `os.remove` 方法更多的相关功能,但我们可以在这里添加更多的功能,使我们的基础代码逐步改善。 -让我们写一个传统的测试用例,即,没有使用 mock: +让我们写一个传统的测试用例,即,没有使用 `mock`: ``` #!/usr/bin/env python @@ -71,7 +73,7 @@ class RmTestCase(unittest.TestCase): self.assertFalse(os.path.isfile(self.tmpfilepath), "Failed to remove the file.") ``` -我们的测试用例相当简单,但是当它每次运行的时候,它都会创建一个临时文件并且随后删除。此外,我们没有办法测试我们的 rm 方法是否正确地将我们的参数向下传递给 os.remove 调用。我们可以基于以上的测试认为它做到了,但还有很多需要改进的地方。 +我们的测试用例相当简单,但是在它每次运行的时候,它都会创建一个临时文件并且随后删除。此外,我们没有办法测试我们的 `rm` 方法是否正确地将我们的参数向下传递给 `os.remove` 调用。我们可以基于以上的测试认为它做到了,但还有很多需要改进的地方。 ### 使用 Mock 重构 @@ -95,29 +97,25 @@ class RmTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any path") ``` -使用这些重构,我们从根本上改变了该测试用例的运行方式。现在,我们有一个可以用于验证其他功能的内部对象。 +使用这些重构,我们从根本上改变了测试用例的操作方式。现在,我们有一个可以用于验证其他功能的内部对象。 ### 潜在陷阱 -第一件需要注意的事情就是,我们使用了位于 mymodule.os 且用于模拟对象的 mock.patch 方法装饰器,并且将该 mock 注入到我们的测试用例方法。相比在 mymodule.os 引用它,那么只是模拟 os 本身,会不会更有意义呢? -One of the first things that should stick out is that we’re using the mock.patch method decorator to mock an object located at mymodule.os, and injecting that mock into our test case method. Wouldn’t it make more sense to just mock os itself, rather than the reference to it at mymodule.os? 
+第一件需要注意的事情就是,我们使用了 `mock.patch` 方法装饰器,用于模拟位于 `mymodule.os` 的对象,并且将 mock 注入到我们的测试用例方法。那么只是模拟 `os` 本身,而不是 `mymodule.os` 下 `os` 的引用(注意 `@mock.patch('mymodule.os')` 便是模拟 `mymodule.os` 下的 `os`,译者注),会不会更有意义呢? -当然,当涉及到导入和管理模块,Python 的用法非常灵活。在运行时,mymodule 模块拥有被导入到本模块局部作用域的 os。因此,如果我们模拟 os,我们是看不到模拟在 mymodule 模块中的作用的。 +当然,当涉及到导入和管理模块,Python 的用法非常灵活。在运行时,`mymodule` 模块拥有被导入到本模块局部作用域的 `os`。因此,如果我们模拟 `os`,我们是看不到 mock 在 `mymodule` 模块中的作用的。 这句话需要深刻地记住: > 模拟测试一个项目,只需要了解它用在哪里,而不是它从哪里来。 -> Mock an item where it is used, not where it came from. -如果你需要为 myproject.app.MyElaborateClass 模拟 tempfile 模块,你可能需要 -If you need to mock the tempfile module for myproject.app.MyElaborateClass, you probably need to apply the mock to myproject.app.tempfile, as each module keeps its own imports. +如果你需要为 `myproject.app.MyElaborateClass` 模拟 `tempfile` 模块,你可能需要将 mock 用于 `myproject.app.tempfile`,而其他模块保持自己的导入。 先将那个陷阱置身事外,让我们继续模拟。 -With that pitfall out of the way, let’s keep mocking. ### 向 ‘rm’ 中加入验证 -之前定义的 rm 方法相当的简单。在盲目地删除之前,我们倾向于拿它来验证一个路径是否存在,并验证其是否是一个文件。让我们重构 rm 使其变得更加智能: +之前定义的 rm 方法相当的简单。在盲目地删除之前,我们倾向于验证一个路径是否存在,并验证其是否是一个文件。让我们重构 rm 使其变得更加智能: ``` @@ -132,7 +130,7 @@ def rm(filename): os.remove(filename) ``` -很好。现在,让我们调整测试用例来保持测试的覆盖程度。 +很好。现在,让我们调整测试用例来保持测试的覆盖率。 ``` #!/usr/bin/env python @@ -168,9 +166,9 @@ class RmTestCase(unittest.TestCase): ### 将文件删除作为服务 -到目前为止,我们只是对函数功能提供模拟测试,并没对需要传递参数的对象和实例的方法进行模拟测试。接下来我们将介绍如何对对象的方法进行模拟测试。 +到目前为止,我们只是将 mock 应用在函数上,并没应用在需要传递参数的对象和实例的方法。我们现在开始涵盖对象的方法。 -首先,我们将rm方法重构成一个服务类。实际上将这样一个简单的函数转换成一个对象,在本质上,这不是一个合理的需求,但它能够帮助我们了解mock的关键概念。让我们开始重构: +首先,我们将 `rm` 方法重构成一个服务类。实际上将这样一个简单的函数转换成一个对象,在本质上这不是一个合理的需求,但它能够帮助我们了解 `mock` 的关键概念。让我们开始重构: ``` #!/usr/bin/env python @@ -187,8 +185,7 @@ class RemovalService(object): os.remove(filename) ``` -### 你会注意到我们的测试用例没有太大的变化 -### You’ll notice that not much has changed in our test case: +你会注意到我们的测试用例没有太大变化: ``` #!/usr/bin/env python @@ -223,8 +220,8 @@ class RemovalServiceTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any 
path") ``` -很好,我们知道 RemovalService 会如期工作。接下来让我们创建另一个服务,将其声明为一个依赖 -Great, so we now know that the RemovalService works as planned. Let’s create another service which declares it as a dependency: +很好,我们知道 `RemovalService` 会如期工作。接下来让我们创建另一个服务,将 `RemovalService` 声明为它的一个依赖: +: ``` #!/usr/bin/env python @@ -250,18 +247,18 @@ class UploadService(object): self.removal_service.rm(filename) ``` -Since we already have test coverage on the RemovalService, we’re not going to validate internal functionality of the rm method in our tests of UploadService. Rather, we’ll simply test (without side-effects, of course) that UploadService calls the RemovalService.rm method, which we know “just works™” from our previous test case. +因为我们的测试覆盖了 `RemovalService`,因此我们不会对我们测试用例中 `UploadService` 的内部函数 `rm` 进行验证。相反,我们将调用 `UploadService` 的 `RemovalService.rm` 方法来进行简单测试(当然没有其他副作用),我们通过之前的测试用例便能知道它可以正确地工作。 -There are two ways to go about this: +这里有两种方法来实现测试: -1. Mock out the RemovalService.rm method itself. -2. Supply a mocked instance in the constructor of UploadService. +1. 模拟 RemovalService.rm 方法本身。 +2. 在 UploadService 的构造函数中提供一个模拟实例。 -As both methods are often important in unit-testing, we’ll review both. +因为这两种方法都是单元测试中非常重要的方法,所以我们将同时对这两种方法进行回顾。 -### Option 1: Mocking Instance Methods +### 方法 1:模拟实例的方法 -The mock library has a special method decorator for mocking object instance methods and properties, the @mock.patch.object decorator: +`mock` 库有一个特殊的方法装饰器,可以模拟对象实例的方法和属性,即 `@mock.patch.object decorator` 装饰器: ``` #!/usr/bin/env python @@ -314,31 +311,32 @@ class UploadServiceTestCase(unittest.TestCase): removal_service.rm.assert_called_with("my uploaded file") ``` -Great! We’ve validated that the UploadService successfully calls our instance’s rm method. Notice anything interesting in there? The patching mechanism actually replaced the rm method of all RemovalService instances in our test method. That means that we can actually inspect the instances themselves. 
If you want to see more, try dropping in a breakpoint in your mocking code to get a good feel for how the patching mechanism works. +非常棒!我们验证了 UploadService 成功调用了我们实例的 rm 方法。你是否注意到一些有趣的地方?这种修补机制(patching mechanism)实际上替换了我们测试用例中的所有 `RemovalService` 实例的 `rm` 方法。这意味着我们可以检查实例本身。如果你想要了解更多,可以试着在你模拟的代码下断点,以对这种修补机制的原理获得更好的认识。 -### Pitfall: Decorator Order +### 陷阱:装饰顺序 + +当我们在测试方法中使用多个装饰器,其顺序是很重要的,并且很容易混乱。基本上,当装饰器被映射到方法参数时,[装饰器的工作顺序是反向的][3]。思考这个例子: -When using multiple decorators on your test methods, order is important, and it’s kind of confusing. Basically, when mapping decorators to method parameters, [work backwards][3]. Consider this example: ``` -@mock.patch('mymodule.sys') + @mock.patch('mymodule.sys') @mock.patch('mymodule.os') @mock.patch('mymodule.os.path') def test_something(self, mock_os_path, mock_os, mock_sys): pass ``` -Notice how our parameters are matched to the reverse order of the decorators? That’s partly because of [the way that Python works][4]. With multiple method decorators, here’s the order of execution in pseudocode: +注意到我们的参数和装饰器的顺序是反向匹配了吗?这多多少少是由 [Python 的工作方式][4] 导致的。这里是使用多个装饰器的情况下它们执行顺序的伪代码: ``` patch_sys(patch_os(patch_os_path(test_something))) ``` -Since the patch to sys is the outermost patch, it will be executed last, making it the last parameter in the actual test method arguments. Take note of this well and use a debugger when running your tests to make sure that the right parameters are being injected in the right order. +因为 sys 补丁位于最外层,所以它最晚执行,使得它成为实际测试方法参数的最后一个参数。请特别注意这一点,并且在运行你的测试用例时,使用调试器来保证正确的参数以正确的顺序注入。 -### Option 2: Creating Mock Instances +### 方法 2:创建 Mock 实例 -Instead of mocking the specific instance method, we could instead just supply a mocked instance to UploadService with its constructor. I prefer option 1 above, as it’s a lot more precise, but there are many cases where option 2 might be efficient or necessary. 
Let’s refactor our test again: +我们可以使用构造函数为 UploadService 提供一个 Mock 实例,而不是模拟特定的实例方法。我更推荐方法 1,因为它更加精确,但在多数情况,方法 2 或许更加有效和必要。让我们再次重构测试用例: ``` #!/usr/bin/env python @@ -387,13 +385,13 @@ class UploadServiceTestCase(unittest.TestCase): mock_removal_service.rm.assert_called_with("my uploaded file") ``` -In this example, we haven’t even had to patch any functionality, we simply create an auto-spec for the RemovalService class, and then inject this instance into our UploadService to validate the functionality. +在这个例子中,我们甚至不需要补充任何功能,只需为 `RemovalService` 类创建一个 auto-spec,然后将实例注入到我们的 `UploadService` 以验证功能。 -The [mock.create_autospec][5] method creates a functionally equivalent instance to the provided class. What this means, practically speaking, is that when the returned instance is interacted with, it will raise exceptions if used in illegal ways. More specifically, if a method is called with the wrong number of arguments, an exception will be raised. This is extremely important as refactors happen. As a library changes, tests break and that is expected. Without using an auto-spec, our tests will still pass even though the underlying implementation is broken. +`mock.create_autospec` 方法为类提供了一个同等功能实例。实际上来说,这意味着在使用返回的实例进行交互的时候,如果使用了非法的方式将会引发异常。更具体地说,如果一个方法被调用时的参数数目不正确,将引发一个异常。这对于重构来说是非常重要。当一个库发生变化的时候,中断测试正是所期望的。如果不使用 auto-spec,尽管底层的实现已经被破坏,我们的测试仍然会通过。 -### Pitfall: The mock.Mock and mock.MagicMock Classes +### 陷阱:mock.Mock 和 mock.MagicMock 类 -The mock library also includes two important classes upon which most of the internal functionality is built upon: [mock.Mock][6] and mock.MagicMock. When given a choice to use a mock.Mock instance, a mock.MagicMock instance, or an auto-spec, always favor using an auto-spec, as it helps keep your tests sane for future changes. This is because mock.Mock and mock.MagicMock accept all method calls and property assignments regardless of the underlying API. 
Consider the following use case: +`mock` 库包含了两个重要的类 [mock.Mock](http://www.voidspace.org.uk/python/mock/mock.html) 和 [mock.MagicMock](http://www.voidspace.org.uk/python/mock/magicmock.html#magic-mock),大多数内部函数都是建立在这两个类之上的。当在选择使用 `mock.Mock` 实例,`mock.MagicMock` 实例或 auto-spec 的时候,通常倾向于选择使用 auto-spec,因为对于未来的变化,它更能保持测试的健全。这是因为 `mock.Mock` 和 `mock.MagicMock` 会无视底层的 API,接受所有的方法调用和属性赋值。比如下面这个用例: ``` class Target(object): @@ -404,7 +402,7 @@ def method(target, value): return target.apply(value) ``` -We can test this with a mock.Mock instance like this: +我们可以像下面这样使用 mock.Mock 实例进行测试: ``` class MethodTestCase(unittest.TestCase): @@ -417,7 +415,7 @@ class MethodTestCase(unittest.TestCase): target.apply.assert_called_with("value") ``` -This logic seems sane, but let’s modify the Target.apply method to take more parameters: +这个逻辑看似合理,但如果我们修改 `Target.apply` 方法接受更多参数: ``` class Target(object): @@ -428,11 +426,11 @@ class Target(object): return None ``` -Re-run your test, and you’ll find that it still passes. That’s because it isn’t built against your actual API. This is why you should always use the create_autospec method and the autospec parameter with the @patch and @patch.object decorators. +重新运行你的测试,你会发现它仍能通过。这是因为它不是针对你的 API 创建的。这就是为什么你总是应该使用 `create_autospec` 方法,并且在使用 `@patch`和 `@patch.object` 装饰方法时使用 `autospec` 参数。 -### Real-World Example: Mocking a Facebook API Call +### 现实例子:模拟 Facebook API 调用 -To finish up, let’s write a more applicable real-world example, one which we mentioned in the introduction: posting a message to Facebook. We’ll write a nice wrapper class and a corresponding test case. 
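
Since this passage covers the mock.Mock pitfall, a compact runnable sketch may help; it assumes the standard library's `unittest.mock` (the same API as the standalone `mock` package, which the article notes was absorbed into Python 3.3+) and reuses the refactored two-argument `Target.apply` from above:

```python
from unittest import mock


class Target(object):
    # Refactored signature: apply now requires a second argument
    def apply(self, value, are_you_sure):
        if are_you_sure:
            return value
        return None


def method(target, value):
    # Still written against the old one-argument API: this is the bug
    return target.apply(value)


# A plain Mock accepts any call, so a stale test would still pass silently
loose = mock.Mock()
method(loose, "value")

# An auto-specced mock enforces the real signature and surfaces the bug
strict = mock.create_autospec(Target)
try:
    method(strict, "value")
    print("unexpectedly passed")
except TypeError:
    print("TypeError raised, as expected")
```

Running it shows the plain Mock letting the stale call through while the auto-spec raises TypeError; this is exactly why `create_autospec` (or `autospec=True` with `@patch`) keeps tests honest as an API evolves.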
+为了完成,我们写一个更加适用的现实例子,一个在介绍中提及的功能:发布消息到 Facebook。我将写一个不错的包装类及其对应的测试用例。 ``` import facebook @@ -447,7 +445,7 @@ class SimpleFacebook(object): self.graph.put_object("me", "feed", message=message) ``` -Here’s our test case, which checks that we post the message without actually posting the message: +这是我们的测试用例,它可以检查我们发布的消息,而不是真正地发布消息: ``` import facebook @@ -466,18 +464,18 @@ class SimpleFacebookTestCase(unittest.TestCase): mock_put_object.assert_called_with(message="Hello World!") ``` -As we’ve seen so far, it’s really simple to start writing smarter tests with mock in Python. +正如我们所看到的,在 Python 中,通过 mock,我们可以非常容易地动手写一个更加智能的测试用例。 -### Mocking in python Conclusion +### Python Mock 总结 -Python’s mock library, if a little confusing to work with, is a game-changer for [unit-testing][7]. We’ve demonstrated common use-cases for getting started using mock in unit-testing, and hopefully this article will help [Python developers][8] overcome the initial hurdles and write excellent, tested code. +对 [单元测试][7] 来说,Python 的 `mock` 库可以说是一个游戏变革者,即使对于它的使用还有点困惑。我们已经演示了单元测试中常见的用例以开始使用 `mock`,并希望这篇文章能够帮助 [Python 开发者][8] 克服初期的障碍,写出优秀、经受过考验的代码。 -------------------------------------------------------------------------------- via: http://slviki.com/index.php/2016/06/18/introduction-to-mocking-in-python/ 作者:[Dasun Sucharith][a] -译者:[译者ID](https://github.com/译者ID) +译者:[cposture](https://github.com/cposture) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 06bc98749eb7f6b462f8f39acd641cd3ed20ed76 Mon Sep 17 00:00:00 2001 From: joVoV <704451873@qq.com> Date: Fri, 5 Aug 2016 17:25:10 +0800 Subject: [PATCH 346/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?(#4276)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160509 Android vs. iPhone Pros and Cons.md | 87 ------------------- ...160509 Android vs. 
iPhone Pros and Cons.md | 87 +++++++++++++++++++ 2 files changed, 87 insertions(+), 87 deletions(-) delete mode 100644 sources/talk/20160509 Android vs. iPhone Pros and Cons.md create mode 100644 translated/talk/20160509 Android vs. iPhone Pros and Cons.md diff --git a/sources/talk/20160509 Android vs. iPhone Pros and Cons.md b/sources/talk/20160509 Android vs. iPhone Pros and Cons.md deleted file mode 100644 index 9977d46d35..0000000000 --- a/sources/talk/20160509 Android vs. iPhone Pros and Cons.md +++ /dev/null @@ -1,87 +0,0 @@ -Android vs. iPhone: Pros and Cons -=================================== - -正在翻译 by jovov ->When comparing Android vs. iPhone, clearly Android has certain advantages even as the iPhone is superior in some key ways. But ultimately, which is better? - -The question of Android vs. iPhone is a personal one. - -Take myself, for example. I'm someone who has used both Android and the iPhone iOS. I'm well aware of the strengths of both platforms along with their weaknesses. Because of this, I decided to share my perspective regarding these two mobile platforms. Additionally, we'll take a look at my impressions of the new Ubuntu mobile platform and where it stacks up. - -### What iPhone gets right - -Even though I'm a full time Android user these days, I do recognize the areas where the iPhone got it right. First, Apple has a better record in updating their devices. This is especially true for older devices running iOS. With Android, if it's not a “Google blessed” Nexus...it better be a higher end carrier supported phone. Otherwise, you're going to find updates are either sparse or non-existent. - -Another area where the iPhone does well is apps availability. Expanding on that: iPhone apps almost always have a cleaner look to them. This isn't to say that Android apps are ugly, rather, they may not have an expected flow and consistency found with iOS. 
Two examples of exclusivity and great iOS-only layout would have to be [Dark Sky][1] (weather) and [Facebook Paper][2]. - -Then there is the backup process. Android can, by default, back stuff up to Google. But that doesn't help much with application data! By contrast, iCloud can essentially make a full backup of your iOS device. - -### Where iPhone loses me - -The biggest indisputable issue I have with the iPhone is more of a hardware limitation than a software one. That issue is storage. - -Look, with most Android phones, I can buy a smaller capacity phone and then add an SD card later. This does two things: First, I can use the SD card to store a lot of media files. Second, I can even use the SD card to store "some" of my apps. Apple has nothing that will touch this. - -Another area where the iPhone loses me is in the lack of choice it provides. Backing up your device? Hope you like iTunes or iCloud. For someone like myself who uses Linux, this means my ONLY option would be to use iCloud. - -To be ultimately fair, there are additional solutions for your iPhone if you're willing to jailbreak it. But that's not what this article is about. Same goes for rooting Android. This article is addressing a vanilla setup for both platforms. - -Finally, let us not forget this little treat – [iTunes decides to delete a user's music][3] because it was seen as a duplication of Apple Music contents...or something along those lines. Not iPhone specific? I disagree, as that music would have very well ended up onto the iPhone at some point. I can say with great certainty that in no universe would I ever put up with this kind of nonsense! - -![](http://www.datamation.com/imagesvr_ce/5552/mobile-abstract-icon-200x150.jpg) ->The Android vs. iPhone debate depends on what features matter the most to you. - -### What Android gets right - -The biggest thing Android gives me that the iPhone doesn't: choice. Choices in applications, devices and overall layout of how my phone works. 
- -I love desktop widgets! To iPhone users, they may seem really silly. But I can tell you that they save me from opening up applications as I can see the desired data without the extra hassle. Another similar feature I love is being able to install custom launchers instead of my phone's default! - -Finally, I can utilize tools like [Airdroid][4] and [Tasker][5] to add full computer-like functionality to my smart phone. Airdroid allows me treat my Android phone like a computer with file management and SMS with anyone – this becomes a breeze to use with my mouse and keyboard. Tasker is awesome in that I can setup "recipes" to connect/disconnect, put my phone into meeting mode or even put itself into power saving mode when I set the parameters to do so. I can even set it to launch applications when I arrive at specific destinations. - -### Where Android loses me - -Backup options are limited to specific user data, not a full clone of your phone. Without rooting, you're either left out in the wind or you must look to the Android SDK for solutions. Expecting casual users to either root their phone or run the SDK for a complete (I mean everything) Android backup is a joke. - -Yes, Google's backup service will backup Google app data, along with other related customizations. But it's nowhere near as complete as what we see with the iPhone. To accomplish something similar to what the iPhone enjoys, I've found you're going to either be rooting your Android phone or connecting it to a Windows PC to utilize some random program. - -To be fair, however, I believe Nexus owners benefit from a [full backup service][6] that is device specific. Sorry, but Google's default backup is not cutting it. Same applies for adb backups via your PC – they don't always restore things as expected. - -Wait, it gets better. Now after a lot of failed let downs and frustration, I found that there was one app that looked like it "might" offer a glimmer of hope, it's called Helium. 
Unlike other applications I found to be misleading and frustrating with their limitations, [Helium][7] initially looked like it was the backup application Google should have been offering all along -- emphasis on "looked like." Sadly, it was a huge let down. Not only did I need to connect it to my computer for a first run, it didn't even work using their provided Linux script. After removing their script, I settling for a good old fashioned adb backup...to my Linux PC. Fun facts: You will need to turn on a laundry list of stuff in developer tools, plus if you run the Twilight app, that needs to be turned off. It took me a bit to put this together when the backup option for adb on my phone wasn't responding. - -At the end of the day, Android has ample options for non-rooted users to backup superficial stuff like contacts, SMS and other data easily. But a deep down phone backup is best left to a wired connection and adb from my experience. - -### Ubuntu will save us? - -With the good and the bad examined between the two major players in the mobile space, there's a lot of hope that we're going to see good things from Ubuntu on the mobile front. Well, thus far, it's been pretty lackluster. - -I like what the developers are doing with the OS and I certainly love the idea of a third option for mobile besides iPhone and Android. Unfortunately, though, it's not that popular on the phone and the tablet received a lot of bad press due to subpar hardware and a lousy demonstration that made its way onto YouTube. - -To be fair, I've had subpar experiences with iPhone and Android, too, in the past. So this isn't a dig on Ubuntu. But until it starts showing up with a ready to go ecosystem of functionality that matches what Android and iOS offer, it's not something I'm terribly interested in yet. At a later date, perhaps, I'll feel like the Ubuntu phones are ready to meet my needs. - -### Android vs. 
iPhone bottom line: Why Android wins long term - -Despite its painful shortcomings, Android treats me like an adult. It doesn't lock me into only two methods for backing up my data. Yes, some of Android's limitations are due to the fact that it's focused on letting me choose how to handle my data. But, I also get to choose my own device, add storage on a whim. Android enables me to do a lot of cool stuff that the iPhone simply isn't capable of doing. - -At its core, Android gives non-root users greater access to the phone's functionality. For better or worse, it's a level of freedom that I think people are gravitating towards. Now there are going to be many of you who swear by the iPhone thanks to efforts like the [libimobiledevice][8] project. But take a long hard look at all the stuff Apple blocks Linux users from doing...then ask yourself – is it really worth it as a Linux user? Hit the Comments, share your thoughts on Android, iPhone or Ubuntu. - ------------------------------------------------------------------------------- - -via: http://www.datamation.com/mobile-wireless/android-vs.-iphone-pros-and-cons.html - -作者:[Matt Hartley][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.datamation.com/author/Matt-Hartley-3080.html -[1]: http://darkskyapp.com/ -[2]: https://www.facebook.com/paper/ -[3]: https://blog.vellumatlanta.com/2016/05/04/apple-stole-my-music-no-seriously/ -[4]: https://www.airdroid.com/ -[5]: http://tasker.dinglisch.net/ -[6]: https://support.google.com/nexus/answer/2819582?hl=en -[7]: https://play.google.com/store/apps/details?id=com.koushikdutta.backup&hl=en -[8]: http://www.libimobiledevice.org/ - diff --git a/translated/talk/20160509 Android vs. iPhone Pros and Cons.md b/translated/talk/20160509 Android vs. 
iPhone Pros and Cons.md
new file mode 100644
index 0000000000..efbef243b9
--- /dev/null
+++ b/translated/talk/20160509 Android vs. iPhone Pros and Cons.md
@@ -0,0 +1,87 @@
+Android 和 iPhone 的优缺点
+===================================
+
+
+>当我们比较 Android 与 iPhone 的时候,很显然 iPhone 和 Android 各自在一些关键方面具有一定的优势,但是,究竟哪个更好呢?
+
+Android 与 iPhone 哪个好,这是一个见仁见智的问题。
+
+就拿我来说,这两个我都用。我深知这两个平台的优劣势。所以,我决定分享一下我对这两个移动平台的看法,另外,也谈谈我对新的 Ubuntu 移动平台的印象和它的竞争力。
+
+### iPhone 的优点
+
+虽然如今我是个十足的 Android 用户,但我必须承认 iPhone 在某些方面做得确实不错。首先,苹果公司在设备更新方面有着更好的记录,对于运行 iOS 的旧设备来说尤其如此。反观 Android,如果不是谷歌“亲生”的 Nexus,那最好是一款有运营商支持的高端手机,否则,你会发现更新少得可怜,甚至根本没有。
+
+iPhone 做得很好的另一个方面是应用的可用性。进一步说:iPhone 应用几乎总是有着更简洁的外观。这并不是说 Android 应用就难看,而是说它们可能没有 iOS 上那种预期中的流畅性和一致性。[Dark Sky][1](天气)和 [Facebook Paper][2] 就是 iOS 独占、且布局出色的两个例子。
+
+再有就是备份过程。Android 默认可以把数据备份到谷歌,但这对应用数据帮助不大。相比之下,iCloud 基本上可以对你的 iOS 设备做完整备份。
+
+### iPhone 令我失望的地方
+
+我对 iPhone 最大的、无可争辩的不满,与其说是软件限制,不如说是硬件限制,那就是存储问题。
+
+你看,对于大多数 Android 手机,我可以买一个容量较小的型号,以后再加装 SD 卡。这带来了两个好处:第一,我可以用 SD 卡存储大量媒体文件;其次,我甚至可以用 SD 卡存储“一部分”应用。苹果完全做不到这一点。
+
+另一个 iPhone 让我失望的地方是它提供的选择很少。想备份你的设备?希望你喜欢 iTunes 或 iCloud。而对像我这样使用 Linux 的人来说,这意味着我唯一的选择就是 iCloud。
+
+平心而论,如果你愿意越狱,你的 iPhone 还有其他的解决方案,但那不是这篇文章要讨论的。对 Android 进行 root 也是一样。本文针对的是两个平台的原生设置。
+
+最后,让我们不要忘记这件小事——[iTunes 决定删除用户的音乐][3],因为它被判定与 Apple Music 的内容重复……或者类似的理由。这不是 iPhone 特有的问题?我不同意,因为这些音乐迟早很可能会同步到 iPhone 上。我可以非常肯定地说,无论如何我都不会忍受这种废话!
+
+![](http://www.datamation.com/imagesvr_ce/5552/mobile-abstract-icon-200x150.jpg)
+> Android 和 iPhone 的对决取决于哪些功能对你来说最重要。
+
+### Android 的优点
+
+Android 给我的最大的东西,就是 iPhone 给不了的:选择。这包括对应用程序的选择、对设备的选择,以及对手机整体工作方式的选择。
+
+我爱桌面小部件!对 iPhone 用户来说,它们也许看上去很傻。但我可以告诉你,它们可以让我不用打开应用程序就能看到所需的数据,省去了额外的麻烦。另一个我喜欢的类似功能,是可以安装自定义的启动器(launcher)来替代手机默认的那个!
+
+最后,我可以利用 [Airdroid][4] 和 [Tasker][5] 这样的工具,给我的智能手机添加类似完整电脑的功能。AirDroid 让我可以像操作电脑一样管理 Android 手机上的文件、与任何人收发短信——用鼠标和键盘来做这些事轻而易举。Tasker 则非常出色,我可以用它设定各种“配方”来自动连接或断开连接、让手机进入会议模式,甚至在满足我设定的条件时自动进入省电模式。我还可以设置在到达特定地点时自动启动某些应用。
+
+### Android 让我失望的地方
+
+Android 的备份选项仅限于特定的用户数据,而不是手机的完整克隆。如果不 root,你要么只能听天由命,要么得去 Android SDK 里找解决方案。指望普通用户去 root 手机、或者运行 SDK 来实现完整的(我是说一切数据的)Android 备份,简直是个笑话。
+
+是的,谷歌的备份服务可以备份谷歌应用的数据,以及其他相关的自定义设置。但它远不及我们在 iPhone 上看到的那么完整。要实现类似 iPhone 的效果,我发现你要么得 root 你的安卓手机,要么得把它连到 Windows PC 上借助某些程序来完成。
+
+不过,公平地说,我相信 Nexus 用户可以享受到针对其设备的[完整备份服务][6]。抱歉,但谷歌的默认备份确实不够用。通过 PC 使用 adb 备份也一样——它们并不总能按预期恢复数据。
+
+等等,还有更“精彩”的。在经历了许多失望和挫折之后,我发现有一款应用看起来“似乎”带来了一线希望,它叫 Helium。[Helium][7] 不像我发现的其他那些误导人、限制多得令人沮丧的应用,它最初看起来就像是谷歌早就应该提供的备份应用——重点是“看起来像”。可悲的是,它让人大失所望。我不仅首次运行时需要把它连接到电脑上,而且它甚至无法使用它们提供的 Linux 脚本。删掉那个脚本之后,我还是用老老实实的 adb(Android Debug Bridge)备份到了我的 Linux PC 上。有趣的事实:你需要在开发者选项里打开一长串设置,另外如果你在运行 Twilight 这类应用,得先把它关掉。当手机上 adb 的备份选项没有响应时,我花了点时间才把这些弄明白。
+
+最终,对非 root 用户来说,Android 也提供了轻松备份联系人、短信等简单数据的选择。但以我的经验,要做深度的手机备份,还是得靠有线连接和 adb。
+
+### Ubuntu 会救我们吗?
+
+在移动领域,在仔细审视了两大主要玩家的优劣之后,我们非常希望能从 Ubuntu 这个新面孔上看到些好东西。但迄今为止,它的表现相当平淡。
+
+我喜欢开发者们在这个系统上所做的努力,我也确实喜欢在 iPhone 和 Android 之外有第三种选择的想法。但不幸的是,它在手机和平板上都还不怎么受欢迎,而且由于不达标的硬件和 YouTube 上一段糟糕的演示视频,它还收获了不少负面新闻。
+
+公平地说,我以前用 iPhone 和 Android 时也有过不愉快的体验,所以这并不是专门挖苦 Ubuntu。但在它拿出一个足以与 Android 和 iOS 抗衡的、功能完备的生态系统之前,它还不是我特别感兴趣的东西。日后,也许我会觉得 Ubuntu 手机已经能满足我的需要了。
+
+### Android 与 iPhone 的结论:为什么 Android 能长期胜出
+
+撇开那些令人头疼的缺点不谈,Android 起码把我当作一个成年人来对待。它没有把我困在仅有的两种备份数据的方式里。是的,Android 的某些限制正是源于它把“如何处理数据”的选择权交给了我。但同时,我也可以选择自己的设备,随时扩充存储。Android 让我能做很多很酷的、iPhone 根本做不到的事情。
+
+从本质上说,Android 给了非 root 用户更大程度的手机功能使用权。无论好坏,这是一种人们正在向往的自由。当然,多亏了 [libimobiledevice][8] 这类项目的努力,你们中还会有很多人对 iPhone 赞誉有加。但请好好看看苹果都阻止 Linux 用户做了哪些事……然后问问自己:作为一个 Linux 用户,这真的值得吗?欢迎在评论中分享你对 iPhone、Android 或 Ubuntu 的看法。
+
+------------------------------------------------------------------------------
+
+via: http://www.datamation.com/mobile-wireless/android-vs.-iphone-pros-and-cons.html
+
+作者:[Matt Hartley][a]
+译者:[jovov](https://github.com/jovov)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.datamation.com/author/Matt-Hartley-3080.html
+[1]: http://darkskyapp.com/
+[2]: https://www.facebook.com/paper/
+[3]: https://blog.vellumatlanta.com/2016/05/04/apple-stole-my-music-no-seriously/
+[4]: https://www.airdroid.com/
+[5]: http://tasker.dinglisch.net/
+[6]: https://support.google.com/nexus/answer/2819582?hl=en
+[7]: https://play.google.com/store/apps/details?id=com.koushikdutta.backup&hl=en
+[8]: http://www.libimobiledevice.org/
+

From ac188917fb2c85de001f073b1554cc32c59167c0 Mon Sep 17 00:00:00 2001
From: ChrisLeeGit
Date: Sat, 6 Aug 2016 09:31:29 +0800
Subject: [PATCH 347/471]
 =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91=20?=
 =?UTF-8?q?=20Part=2010=20-=20Learn=20How=20to=20Use=20Awk=20Built-in=20Va?=
 =?UTF-8?q?riables?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...Learn How to Use Awk 
Built-in Variables.md | 120 ------------------ ...Learn How to Use Awk Built-in Variables.md | 119 +++++++++++++++++ 2 files changed, 119 insertions(+), 120 deletions(-) delete mode 100644 sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md create mode 100644 translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md diff --git a/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md b/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md deleted file mode 100644 index 9b66a0cbfe..0000000000 --- a/sources/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md +++ /dev/null @@ -1,120 +0,0 @@ -Being translated by ChrisLeeGit -Learn How to Use Awk Built-in Variables – Part 10 -================================================= - -As we uncover the section of Awk features, in this part of the series, we shall walk through the concept of built-in variables in Awk. There are two types of variables you can use in Awk, these are; user-defined variables, which we covered in Part 8 and built-in variables. 
- -![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Built-in-Variables-Examples.png) ->Awk Built in Variables Examples - -Built-in variables have values already defined in Awk, but we can also carefully alter those values, the built-in variables include: - -- `FILENAME` : current input file name( do not change variable name) -- `FR` : number of the current input line (that is input line 1, 2, 3… so on, do not change variable name) -- `NF` : number of fields in current input line (do not change variable name) -- `OFS` : output field separator -- `FS` : input field separator -- `ORS` : output record separator -- `RS` : input record separator - -Let us proceed to illustrate the use of some of the Awk built-in variables above: - -To read the filename of the current input file, you can use the `FILENAME` built-in variable as follows: - -``` -$ awk ' { print FILENAME } ' ~/domains.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-FILENAME-Variable.png) ->Awk FILENAME Variable - -You will realize that, the filename is printed out for each input line, that is the default behavior of Awk when you use `FILENAME` built-in variable. - -Using `NR` to count the number of lines (records) in an input file, remember that, it also counts the empty lines, as we shall see in the example below. 
- -When we view the file domains.txt using cat command, it contains 14 lines with text and empty 2 lines: - -``` -$ cat ~/domains.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Print-Contents-of-File.png) ->Print Contents of File - - -``` -$ awk ' END { print "Number of records in file is: ", NR } ' ~/domains.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Lines.png) ->Awk Count Number of Lines - -To count the number of fields in a record or line, we use the NR built-in variable as follows: - -``` -$ cat ~/names.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Contents.png) ->List File Contents - -``` -$ awk '{ print "Record:",NR,"has",NF,"fields" ; }' ~/names.txt -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Fields-in-File.png) ->Awk Count Number of Fields in File - -Next, you can also specify an input field separator using the FS built-in variable, it defines how Awk divides input lines into fields. - -The default value for FS is space and tab, but we can change the value of FS to any character that will instruct Awk to divide input lines accordingly. 
- -There are two methods to do this: - -- one method is to use the FS built-in variable -- and the second is to invoke the -F Awk option - -Consider the file /etc/passwd on a Linux system, the fields in this file are divided using the : character, so we can specify it as the new input field separator when we want to filter out certain fields as in the following examples: - -We can use the `-F` option as follows: - -``` -$ awk -F':' '{ print $1, $4 ;}' /etc/passwd -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Filter-Fields-in-Password-File.png) ->Awk Filter Fields in Password File - -Optionally, we can also take advantage of the FS built-in variable as below: - -``` -$ awk ' BEGIN { FS=“:” ; } { print $1, $4 ; } ' /etc/passwd -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Filter-Fields-in-File-Using-Awk.png) ->Filter Fields in File Using Awk - -To specify an output field separator, use the OFS built-in variable, it defines how the output fields will be separated using the character we use as in the example below: - -``` -$ awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/07/Add-Separator-to-Field-in-File.png) ->Add Separator to Field in File - -In this Part 10, we have explored the idea of using Awk built-in variables which come with predefined values. But we can also change these values, though, it is not recommended to do so unless you know what you are doing, with adequate understanding. - -After this, we shall progress to cover how we can use shell variables in Awk command operations, therefore, stay connected to Tecmint. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/awk-built-in-variables-examples/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对ID](https://github.com/校对ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md b/translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md new file mode 100644 index 0000000000..321b1dc588 --- /dev/null +++ b/translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md @@ -0,0 +1,119 @@ +awk 系列:如何使用 awk 内置变量 +================================================= + +我们将逐渐揭开 awk 功能的神秘面纱,在本节中,我们将介绍 awk 内置(built-in)变量的概念。你可以在 awk 中使用两种类型的变量,它们是:用户自定义(user-defined)变量(我们在第八节中已经介绍了)和内置变量。 + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Built-in-Variables-Examples.png) +> awk 内置变量示例 + +awk 内置变量已经有预先定义的值了,但我们也可以谨慎地修改这些值,awk 内置变量包括: + +- `FILENAME` : 当前输入文件名称(不要修改变量名) +- `NR` : 当前输入行编号(是指输入行 1,2,3……等,不要改变变量名) +- `NF` : 当前输入行段编号(不要修改变量名) +- `OFS` : 输出段分隔符 +- `FS` : 输入段分隔符 +- `ORS` : 输出记录分隔符 +- `RS` : 输入记录分隔符 + +让我们继续演示一些使用上述 awk 内置变量的方法: + +想要读取当前输入文件的名称,你可以使用 `FILENAME` 内置变量,如下: + +``` +$ awk ' { print FILENAME } ' ~/domains.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-FILENAME-Variable.png) +> awk FILENAME 变量 + +你会看到,每一行都会对应输出一次文件名,那是你使用 `FILENAME` 内置变量时 awk 默认的行为。 + +我们可以使用 `NR` 来统计一个输入文件的行数(记录),谨记,它也会计算空行,正如我们将要在下面的例子中看到的那样。 + +当我们使用 cat 命令查看文件 domains.txt 时,会发现它有 14 行文本和 2 个空行: + +``` +$ cat ~/domains.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/07/Print-Contents-of-File.png) +> 输出文件内容 + + +``` +$ awk ' END { print "文件记录数是:", NR } ' ~/domains.txt +``` + 
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Lines.png)
+> awk 统计行数
+
+想要统计一条记录(即一行)中的字段数,我们可以像下面那样使用 NF 内置变量:
+
+```
+$ cat ~/names.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Contents.png)
+> 列出文件内容
+
+```
+$ awk '{ print "记录:",NR,"有",NF,"段" ; }' ~/names.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Fields-in-File.png)
+> awk 统计文件中的段数
+
+接下来,你也可以使用 FS 内置变量指定一个输入文件分隔符,它会定义 awk 如何将输入行划分成段。
+
+FS 默认值为空格和 TAB,但我们也能将 FS 值修改为任何字符来让 awk 根据情况分隔输入行。
+
+有两种方法可以达到目的:
+
+- 第一种方法是使用 FS 内置变量
+- 第二种方法是使用 awk 的 -F 选项
+
+来看 Linux 系统上的 `/etc/passwd` 文件,该文件中的各段是使用 `:` 分隔的,因此,当我们想要过滤出某些段时,可以将 `:` 指定为新的输入段分隔符,示例如下:
+
+我们可以使用 `-F` 选项,如下:
+
+```
+$ awk -F':' '{ print $1, $4 ;}' /etc/passwd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Filter-Fields-in-Password-File.png)
+> awk 过滤密码文件中的各段
+
+此外,我们也可以利用 FS 内置变量,如下:
+
+```
+$ awk ' BEGIN { FS=":" ; } { print $1, $4 ; } ' /etc/passwd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Filter-Fields-in-File-Using-Awk.png)
+> 使用 awk 过滤文件中的各段
+
+使用 OFS 内置变量来指定一个输出段分隔符,它会定义如何使用指定的字符分隔输出段,示例如下:
+
+```
+$ awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/07/Add-Separator-to-Field-in-File.png)
+> 向文件中的段添加分隔符
+
+在第十节(即本节)中,我们已经学习了使用含有预定义值的 awk 内置变量的理念。但我们也能够修改这些值,虽然并不推荐这样做,除非你晓得自己在做什么,并且充分理解(这些变量值)。
+
+此后,我们将继续学习如何在 awk 命令操作中使用 shell 变量,所以,请继续关注 Tecmint。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/awk-built-in-variables-examples/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
+
+作者:[Aaron Kili][a]
+译者:[ChrisLeeGit](https://github.com/chrisleegit)
+校对:[校对ID](https://github.com/校对ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: 
http://www.tecmint.com/author/aaronkili/ From dde1507ae311ec40a716a570728282d76c6d7f13 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Sat, 6 Aug 2016 13:59:06 +0800 Subject: [PATCH 348/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91Par?= =?UTF-8?q?t=2011=20-=20How=20to=20Allow=20Awk=20to=20Use=20Shell=20Variab?= =?UTF-8?q?les?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...60802 Part 11 - How to Allow Awk to Use Shell Variables.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md b/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md index b407050da7..5930cadb64 100644 --- a/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md +++ b/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md @@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + How to Allow Awk to Use Shell Variables – Part 11 ================================================== @@ -87,7 +89,7 @@ In the next part of the Awk series, we shall dive into yet another critical sect via: http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对ID](https://github.com/校对ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 05dd878226858c5e8191cb683e5c182a9a5421eb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sat, 6 Aug 2016 15:34:54 +0800 Subject: [PATCH 349/471] Translated by cposture (#4279) * 75 * Translated by cposture * Translated by cposture * Translated by cposture --- ...ce portfolio - Machine learning project.md | 130 +++++++++--------- 1 file changed, 62 insertions(+), 68 deletions(-) diff 
--git a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md index 3bec1d0a98..2ea994c119 100644 --- a/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 6 - Building a data science portfolio - Machine learning project.md @@ -1,17 +1,15 @@ -Translating by cposture 2016-08-02 -### Making predictions +### 做出预测 -Now that we have the preliminaries out of the way, we’re ready to make predictions. We’ll create a new file called predict.py that will use the train.csv file we created in the last step. The below code will: +既然完成了前期准备,我们可以开始准备做出预测了。我将创建一个名为 predict.py 的新文件,它会使用我们在最后一步创建的 train.csv 文件。下面的代码: -- Import needed libraries. -- Create a function called cross_validate that: - - Creates a logistic regression classifier with the right keyword arguments. - - Creates a list of columns that we want to use to train the model, removing id and foreclosure_status. - - Run cross validation across the train DataFrame. - - Return the predictions. +- 导入所需的库 +- 创建一个名为 `cross_validate` 的函数: + - 使用正确的关键词参数创建逻辑回归分类器(logistic regression classifier) + - 创建移除了 `id` 和 `foreclosure_status` 属性的用于训练模型的列 + - 跨 `train` 数据帧使用交叉验证 + - 返回预测结果 - -``` +```python import os import settings import pandas as pd @@ -29,22 +27,19 @@ def cross_validate(train): return predictions ``` -### Predicting error +### 预测误差 -Now, we just need to write a few functions to compute error. The below code will: +现在,我们仅仅需要写一些函数来计算误差。下面的代码: -- Create a function called compute_error that: - - Uses scikit-learn to compute a simple accuracy score (the percentage of predictions that matched the actual foreclosure_status values). -- Create a function called compute_false_negatives that: - - Combines the target and the predictions into a DataFrame for convenience. - - Finds the false negative rate. 
-- Create a function called compute_false_positives that: - - Combines the target and the predictions into a DataFrame for convenience. - - Finds the false positive rate. - - Finds the number of loans that weren’t foreclosed on that the model predicted would be foreclosed on. - - Divide by the total number of loans that weren’t foreclosed on. +- 创建函数 `compute_error`: + - 使用 scikit-learn 计算一个简单的精确分数(与实际 `foreclosure_status` 值匹配的预测百分比) +- 创建函数 `compute_false_negatives`: + - 为了方便,将目标和预测结果合并到一个数据帧 + - 查找漏报率 + - 找到原本应被预测模型取消但没有取消的贷款数目 + - 除以没被取消的贷款总数目 -``` +```python def compute_error(target, predictions): return metrics.accuracy_score(target, predictions) @@ -57,21 +52,20 @@ def compute_false_positives(target, predictions): return df[(df["target"] == 0) & (df["predictions"] == 1)].shape[0] / (df[(df["target"] == 0)].shape[0] + 1) ``` +### 聚合到一起 -### Putting it all together +现在,我们可以把函数都放在 `predict.py`。下面的代码: -Now, we just have to put the functions together in predict.py. The below code will: +- 读取数据集 +- 计算交叉验证预测 +- 计算上面的 3 个误差 +- 打印误差 -- Read in the dataset. -- Compute cross validated predictions. -- Compute the 3 error metrics above. -- Print the error metrics. - -``` +```python def read(): train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv")) return train - + if __name__ == "__main__": train = read() predictions = cross_validate(train) @@ -83,11 +77,11 @@ if __name__ == "__main__": print("False Positives: {}".format(fp)) ``` -Once you’ve added the code, you can run python predict.py to generate predictions. Running everything shows that our false negative rate is .26, which means that of the foreclosed loans, we missed predicting 26% of them. This is a good start, but can use a lot of improvement! +一旦你添加完代码,你可以运行 `python predict.py` 来产生预测结果。运行结果向我们展示漏报率为 `.26`,这意味着我们没能预测 `26%` 的取消贷款。这是一个好的开始,但仍有很多改善的地方! -You can find the complete predict.py file [here][41]. 
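(编注:为了让上面的漏报率公式更直观,下面用纯 Python 在一组虚构的小数据上手算一遍。这不是原文代码,也没有使用 pandas,仅作示意:)

```python
# 编注示例:在虚构的小样本上计算漏报率,逻辑与上文的误差函数一致。
target      = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 表示贷款实际被取消(foreclosed)
predictions = [1, 0, 0, 1, 0, 0, 0, 1]   # 模型的预测结果

false_negatives = sum(
    1 for t, p in zip(target, predictions) if t == 1 and p == 0
)
actual_positives = sum(1 for t in target if t == 1)

# 与原文一样,在分母上加 1,以避免除零错误
fn_rate = false_negatives / (actual_positives + 1)
print("false negatives:", false_negatives)   # 2
print("fn rate:", fn_rate)                   # 2 / 5 = 0.4
```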
+你可以在[这里][41]找到完整的 `predict.py` 文件 -Your file tree should now look like this: +你的文件树现在看起来像下面这样: ``` loan-prediction @@ -110,47 +104,47 @@ loan-prediction ├── settings.py ``` -### Writing up a README +### 撰写 README -Now that we’ve finished our end to end project, we just have to write up a README.md file so that other people know what we did, and how to replicate it. A typical README.md for a project should include these sections: +既然我们完成了端到端的项目,那么我们可以撰写 README.md 文件了,这样其他人便可以知道我们做的事,以及如何复制它。一个项目典型的 README.md 应该包括这些部分: -- A high level overview of the project, and what the goals are. -- Where to download any needed data or materials. -- Installation instructions. - - How to install the requirements. -- Usage instructions. - - How to run the project. - - What you should see after each step. -- How to contribute to the project. - - Good next steps for extending the project. +- 一个高水准的项目概览,并介绍项目目的 +- 任何必需的数据和材料的下载地址 +- 安装命令 + - 如何安装要求依赖 +- 使用命令 + - 如何运行项目 + - 每一步之后会看到的结果 +- 如何为这个项目作贡献 + - 扩展项目的下一步计划 -[Here’s][42] a sample README.md for this project. +[这里][42] 是这个项目的一个 README.md 样例。 -### Next steps +### 下一步 -Congratulations, you’re done making an end to end machine learning project! You can find a complete example project [here][43]. It’s a good idea to upload your project to [Github][44] once you’ve finished it, so others can see it as part of your portfolio. +恭喜你完成了端到端的机器学习项目!你可以在[这里][43]找到一个完整的示例项目。一旦你完成了项目,把它上传到 [Github][44] 是一个不错的主意,这样其他人也可以看到你的文件夹的部分项目。 -There are still quite a few angles left to explore with this data. Broadly, we can split them up into 3 categories – extending this project and making it more accurate, finding other columns to predict, and exploring the data. Here are some ideas: +这里仍有一些留待探索数据的角度。总的来说,我们可以把它们分割为 3 类 - 扩展这个项目并使它更加精确,发现预测其他列并探索数据。这是其中一些想法: -- Generate more features in annotate.py. -- Switch algorithms in predict.py. -- Try using more data from Fannie Mae than we used in this post. -- Add in a way to make predictions on future data. 
The code we wrote will still work if we add more data, so we can add more past or future data.
-- Try seeing if you can predict if a bank should have issued the loan originally (vs if Fannie Mae should have acquired the loan).
- - Remove any columns from train that the bank wouldn't have known at the time of issuing the loan.
- - Some columns are known when Fannie Mae bought the loan, but not before.
- - Make predictions.
-- Explore seeing if you can predict columns other than foreclosure_status.
- - Can you predict how much the property will be worth at sale time?
-- Explore the nuances between performance updates.
- - Can you predict how many times the borrower will be late on payments?
- - Can you map out the typical loan lifecycle?
-- Map out data on a state by state or zip code by zip code level.
- - Do you see any interesting patterns?
+- 在 `annotate.py` 中生成更多的特征
+- 切换 `predict.py` 中的算法
+- 尝试使用比本文所用的更多的 `Fannie Mae` 数据
+- 添加对未来数据进行预测的方法。我们所写的代码在添加更多数据时仍然可以工作，因此我们可以添加更多过去或未来的数据。
+- 尝试看看你能否预测一家银行当初是否应该发放这笔贷款（而非 `Fannie Mae` 是否应该购入这笔贷款）
+ - 移除 train 中银行在放贷时还无法知道的任何列
+ - 有些列在 Fannie Mae 购买贷款时是已知的，但在此之前是未知的
+ - 做出预测
+- 探索你能否预测除 foreclosure_status 之外的其他列
+ - 你能否预测房产在出售时的价值？
+- 探索各次性能更新之间的细微差别
+ - 你能否预测借款人会逾期还款多少次？
+ - 你能否标出典型的贷款生命周期？
+- 按州或邮政编码的级别标出数据
+ - 你看到一些有趣的模式了吗？

-If you build anything interesting, please let us know in the comments!
+如果你构建出了任何有趣的东西，请在评论中告诉我们！

-If you liked this, you might like to read the other posts in our ‘Build a Data Science Porfolio’ series:
+如果你喜欢这篇文章，或许你也会喜欢阅读 ‘Build a Data Science Portfolio’ 系列的其他文章：

- [Storytelling with data][45].
- [How to setup up a data science blog][46].
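上面“下一步”列表中提到的“添加对未来数据进行预测的方法”，可以先用一个简化的草图来体会其思路。注意：下面的示例并非本文项目 `predict.py` 的原始代码——其中的 `LogisticRegression` 模型、`train_model`/`predict_future` 这两个函数名，以及 `loan_amount`、`borrower_count` 等列名，都只是演示用的假设。它展示的是“用全部训练数据拟合一次模型，之后对新到的数据行复用”这一做法：

```python
# 假设性示意（非原项目 predict.py 的实现）：
# 用全部训练数据拟合一个 scikit-learn 分类器，之后即可对新数据复用。
import pandas as pd
from sklearn.linear_model import LogisticRegression


def train_model(train, target_col="foreclosure_status"):
    # 除目标列之外的所有列都作为特征（演示用的简化假设）
    predictors = train.columns.drop(target_col)
    clf = LogisticRegression()
    clf.fit(train[predictors], train[target_col])
    return clf, predictors


def predict_future(clf, predictors, future_rows):
    # future_rows 是尚未见过的新数据，列结构与训练数据一致
    return clf.predict(future_rows[predictors])


if __name__ == "__main__":
    # 用虚构的小数据集代替 processed/train.csv，便于独立运行
    train = pd.DataFrame({
        "loan_amount": [100, 200, 150, 300, 120, 280],
        "borrower_count": [1, 2, 1, 2, 1, 2],
        "foreclosure_status": [0, 1, 0, 1, 0, 1],
    })
    clf, predictors = train_model(train)
    future = pd.DataFrame({"loan_amount": [110, 290], "borrower_count": [1, 2]})
    print(predict_future(clf, predictors, future))
```

这样做的好处是：以后添加更多过去或未来的数据时，训练与预测两步都不需要改动。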
From e2ac066705b7ad6e4d1502a476782638c9915cdd Mon Sep 17 00:00:00 2001 From: Frank Zhang Date: Sat, 6 Aug 2016 17:45:22 +0800 Subject: [PATCH 350/471] [translating] 20160802 3 graphical tools for Git.md --- sources/tech/20160802 3 graphical tools for Git.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160802 3 graphical tools for Git.md b/sources/tech/20160802 3 graphical tools for Git.md index c09afed48b..579ec09fdc 100644 --- a/sources/tech/20160802 3 graphical tools for Git.md +++ b/sources/tech/20160802 3 graphical tools for Git.md @@ -1,3 +1,4 @@ +zpl1025 3 graphical tools for Git ============================= From 7fe1182dc58e269230e8c95dcb1e418c85337947 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 6 Aug 2016 17:52:02 +0800 Subject: [PATCH 351/471] PUB:20160706 Doing for User Space What We Did for Kernel Space @MikeCoder @PurlingNayuki --- ...User Space What We Did for Kernel Space.md | 37 ++++++++++--------- 1 file changed, 20 insertions(+), 17 deletions(-) rename {translated/tech => published}/20160706 Doing for User Space What We Did for Kernel Space.md (59%) diff --git a/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md b/published/20160706 Doing for User Space What We Did for Kernel Space.md similarity index 59% rename from translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md rename to published/20160706 Doing for User Space What We Did for Kernel Space.md index a62f9ad09f..f9dfb89593 100644 --- a/translated/tech/20160706 Doing for User Space What We Did for Kernel Space.md +++ b/published/20160706 Doing for User Space What We Did for Kernel Space.md @@ -1,45 +1,46 @@ 在用户空间做我们会在内核空间做的事情 ======================================================= -我相信,Linux 最好也是最坏的事情,就是内核空间和用户空间之间的巨大差别。 +我相信,Linux 最好也是最坏的事情,就是内核空间(kernel space)和用户空间(user space)之间的巨大差别。 -但是如果没有这个区别,Linux 可能也不会成为世界上影响力最大的操作系统。如今,Linux 的使用范围在世界上是最大的,而这些应用又有着世界上最大的用户群——尽管大多数用户并不知道,当他们进行谷歌搜索或者触摸安卓手机的时候,他们其实正在使用 Linux。如果不是 
Linux 的巨大成功,Apple 公司也可能并不会成为现在这样(即在他们的电脑产品中使用 BSD 的技术)(注:Linux 获得成功后,Apple 曾与 Linus 协商使用 Linux 核心作为 Apple 电脑的操作系统并帮助开发的事宜,但遭到拒绝。因此,Apple 转向使用许可证更为宽松的 BSD 。)。 +如果没有这个区别,Linux 可能也不会成为世界上影响力最大的操作系统。如今,Linux 的使用范围在世界上是最大的,而这些应用又有着世界上最大的用户群——尽管大多数用户并不知道,当他们进行谷歌搜索或者触摸安卓手机的时候,他们其实正在使用 Linux。如果不是 Linux 的巨大成功,Apple 公司也可能并不会成为现在这样(即在他们的电脑产品中使用 BSD 的技术)(LCTT 译注:Linux 获得成功后,Apple 曾与 Linus 协商使用 Linux 核心作为 Apple 电脑的操作系统并帮助开发的事宜,但遭到拒绝。因此,Apple 转向使用许可证更为宽松的 BSD 。)。 -不(需要)关注用户空间是 Linux 内核开发中的一个特点而非缺陷。正如 Linus 在 2003 的极客巡航中提到的那样,“我只做内核相关技术……我并不知道内核之外发生的事情,而且我并不关心。我只关注内核部分发生的事情。” 多年之后的另一个极客巡航上, Andrew Morton 给我上了另外的一课,这之后我写道: +不(需要)关注用户空间是 Linux 内核开发中的一个特点而非缺陷。正如 Linus 在 2003 年的[极客巡航(Geek Cruise)][18]中提到的那样,“我只做内核相关的东西……我并不知道内核之外发生的事情,而且我也并不关心。我只关注内核部分发生的事情。” 多年之后的[另一次极客巡航][19]上, Andrew Morton 给我上了另外的一课,这之后我写道: -> Linux 存在于内核空间,而在用户空间中被使用,和其他的自然的建筑材料一样。内核空间和用户空间的区别,和自然材料与人类从中生产的人造材料的区别很类似。 +> 内核空间是Linux 所在的地方,而用户空间是 Linux 与其它的“自然材料”一起使用的地方。内核空间和用户空间的区别,和自然材料与人类用其生产的人造材料的区别很类似。 -这个区别的自然而然的结果,就是尽管外面的世界一刻也离不开 Linux, 但是 Linux 社区还是保持相对较小。所以,为了增加哪怕一点我们社区团体的规模,我希望指出两件事情。第一件已经非常火了,另外一件可能会火起来。 +这个区别是自然而然的结果,就是尽管外面的世界一刻也离不开 Linux, 但是 Linux 社区还是保持相对较小。所以,为了增加哪怕一点我们社区团体的规模,我希望指出两件事情。第一件已经非常火了,另外一件可能会火起来。 -第一件事情就是 [blockchain][1],出自著名的分布式货币——比特币之手。当你正在阅读这篇文章的同时,人们对 blockchain 的[关注度正在直线上升][2]。 +第一件事情就是 [区块链(blockchain)][1],出自著名的分布式货币——比特币之手。当你正在阅读这篇文章的同时,人们对区块链的[关注度正在直线上升][2]。 ![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f1.png) -> 图1. Blockchain 的谷歌搜索趋势 -第二件事就是自主身份。为了解释这个概念,让我先来问你:你是谁,你来自哪里? +*图1. 区块链的谷歌搜索趋势* -如果你从你的雇员、你的医生或者车管所,Facebook、Twitter 或者谷歌上得到答案,你就会发现它们都是行政身份——机构完全以自己的便利为原因设置这些身份和职位。正如 [Evernym][3] 的 Timothy Ruff 所说,“你并不因组织而存在,但你的身份却因此存在。”身份是个因变量。自变量——即控制着身份的变量——是(你所在的)组织。 +第二件事就是自主身份(self-sovereign identity)。为了解释这个概念,让我先来问你:你是谁,你来自哪里? 
+ +如果你从你的老板、你的医生或者车管所,Facebook、Twitter 或者谷歌上得到答案,你就会发现它们都是行政身份(administrative identifiers)——这些机构完全以自己的便利为原因设置这些身份和职位。正如一家区块链技术公司 [Evernym][3] 的 Timothy Ruff 所说,“你并不因组织而存在,但你的身份却因此存在。”身份是个因变量。自变量——即控制着身份的变量——是(你所在的)组织。 如果你的答案出自你自己,我们就有一个广大空间来发展一个新的领域,在这个领域中,我们完全自由。 -据我所知,第一个解释这个的人是 [Devon Loffreto][4]。在 2012 年 2 月,他在博客[Moxy Tongue][5] 中写道:“什么是'Sovereign Source Authority'?”。在发表于 2016 年 2 月的 "[Self-Sovereign Identity][6]" 中,他写道: +据我所知,第一个解释这个的人是 [Devon Loffreto][4]。在 2012 年 2 月,他在博客 [Moxy Tongue][5] 中写道:“什么是 'Sovereign Source Authority'?”。在发表于 2016 年 2 月的 “[Self-Sovereign Identity][6]” 一文中,他写道: > 自主身份必须是独立个人提出的,并且不包含社会因素……自主身份源于每个个体对其自身本源的认识。 一个自主身份可以为个体带来新的社会面貌。每个个体都可能为自己生成一个自主身份,并且这并不会改变固有的人权。使用自主身份机制是所有参与者参与的基石,并且依旧可以同各种形式的人类社会保持联系。 -将这个概念放在 Linux 领域中,只有个人才能为他或她设定一个自己的开源社区身份。这在现实实践中,这只是一个非常正从的事件。举个例子,我自己的身份包括: +将这个概念放在 Linux 领域中,只有个人才能为他或她设定一个自己的开源社区身份。这在现实实践中,这只是一个非常正常的事件。举个例子,我自己的身份包括: - David Allen Searls,我父母会这样叫我。 - David Searls,正式场合下我会这么称呼自己。 - Dave,我的亲戚和好朋友会这么叫我。 - Doc,大多数人会这么叫我。 -作为承认以上称呼的自主身份来源,我可以在不同的情景中轻易的转换。但是,这只是在现实世界中。在虚拟世界中,这就变得非常困难。除了上述的身份之外,我还可以是 @dsearls(我的 twitter 账号) 和 dsearls (其他的网络账号)。然而为了记住成百上千的不同账号的登录名和密码,我已经不堪重负。 +作为承认以上称呼的自主身份来源,我可以在不同的情景中轻易的转换。但是,这只是在现实世界中。在虚拟世界中,这就变得非常困难。除了上述的身份之外,我还可以是 @dsearls (我的 twitter 账号) 和 dsearls (其他的网络账号)。然而为了记住成百上千的不同账号的登录名和密码,我已经不堪重负。 你可以在你的浏览器上感受到这个糟糕的体验。在火狐上,我有成百上千个用户名密码。很多已经废弃(很多都是从 Netscape 时代遗留下来的),但是我想会有大量的工作账号需要处理。对于这些,我只是被动接受者。没有其他的解决方法。甚至一些安全较低的用户认证,已经成为了现实世界中不可缺少的一环。 -现在,最简单的方式来联系账号,就是通过 "Log in with Facebook" 或者 "Login in with Twitter" 来进行身份认证。在这种情况下,我们中的每一个甚至并不是真正意义上的自己,甚至(如果我们希望被其他人认识的话)缺乏对其他实体如何认识我们的控制。 +现在,最简单的方式来联系账号,就是通过 “Log in with Facebook” 或者 “Login in with Twitter” 来进行身份认证。在这种情况下,我们中的每一个甚至并不是真正意义上的自己,甚至(如果我们希望被其他人认识的话)缺乏对其他实体如何认识我们的控制。 -我们从一开始就需要的是一个可以实体化我们的自主身份和交流时选择如何保护和展示自身的个人系统。因为缺少这个能力,我们现在陷入混乱。Shoshana Zuboff 称之为 "监视资本主义",她如此说道: +我们从一开始就需要的是一个可以实体化我们的自主身份和交流时选择如何保护和展示自身的个人系统。因为缺少这个能力,我们现在陷入混乱。Shoshana Zuboff 称之为 “监视资本主义”,她如此说道: >...难以想象,在见证了互联网和获得了的巨大成功的谷歌背后。世界因 Apple 和 FBI 
的对决而紧密联系在一起。讲道理,热衷于监视的资本家开发的监视系统是每一个国家安全机构都渴望的。 @@ -55,16 +56,16 @@ - "[System or Human First][10]" - Devon Loffreto. - "[The Path to Self-Sovereign Identity][11]" - Christopher Allen. -从Evernym 的简要说明中,[digi.me][12], [iRespond][13] 和 [Respect Network][14] 也被包括在内。自主身份和社会身份 (也被称为“current model”) 的对比结果,显示在图二中。 +从 Evernym 的简要说明中,[digi.me][12]、 [iRespond][13] 和 [Respect Network][14] 也被包括在内。自主身份和社会身份 (也被称为“当前模式(current model)”) 的对比结果,显示在图二中。 ![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12042f2.jpg) -> 图 2. Current Model 身份 vs. 自主身份 + +*图 2. 当前模式身份 vs. 自主身份* Sovrin 就是为此而生的[平台][15],它阐述自己为一个“依托于先进、专用、经授权、分布式平台的,完全开源、基于标识的身份声明图平台”。同时,这也有一本[白皮书][16]。它的代码名为 [plenum][17],并且公开在 Github 上。 在这里——或者其他类似的地方——我们就可以在用户空间中重现我们在过去 25 年中在内核空间做过的事情。 - -------------------------------------------------------------------------------- via: https://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-space @@ -93,3 +94,5 @@ via: https://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-sp [15]: http://evernym.com/technology [16]: http://evernym.com/assets/doc/Identity-System-Essentials.pdf?v=167284fd65 [17]: https://github.com/evernym/plenum +[18]: http://www.linuxjournal.com/article/6427 +[19]: http://www.linuxjournal.com/article/8664 \ No newline at end of file From 824b4e85232bd9d1613def115457fcd8100e181f Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 6 Aug 2016 17:54:46 +0800 Subject: [PATCH 352/471] =?UTF-8?q?=E5=B7=B2=E6=A0=A1=E5=AF=B9=E5=8F=91?= =?UTF-8?q?=E5=B8=83?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eate Your Own Shell in Python - Part II.md | 216 ------------------ 1 file changed, 216 deletions(-) delete mode 100644 translated/tech/20160706 Create Your Own Shell in Python - Part II.md diff --git a/translated/tech/20160706 Create Your Own Shell in Python - Part II.md b/translated/tech/20160706 Create Your Own Shell in Python - Part II.md 
deleted file mode 100644 index 88f6a0fba8..0000000000 --- a/translated/tech/20160706 Create Your Own Shell in Python - Part II.md +++ /dev/null @@ -1,216 +0,0 @@ -使用 Python 创建你自己的 Shell:Part II -=========================================== - -在 [part 1][1] 中,我们已经创建了一个主要的 shell 循环、切分了的命令输入,以及通过 `fork` 和 `exec` 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。 - -### 步骤 4:内置命令 - -“cd test_dir2 无法修改我们的当前目录” 这句话是对的,但在某种意义上也是错的。在执行完该命令之后,我们仍然处在同一目录,从这个意义上讲,它是对的。然而,目录实际上已经被修改,只不过它是在子进程中被修改。 - -还记得我们 fork 了一个子进程,然后执行命令,执行命令的过程没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。 - -然后子进程退出,而父进程在原封不动的目录下继续运行。 - -因此,这类与 shell 自己相关的命令必须是内置命令。它必须在 shell 进程中执行而没有分叉(forking)。 - -#### cd - -让我们从 `cd` 命令开始。 - -我们首先创建一个 `builtins` 目录。每一个内置命令都会被放进这个目录中。 - -```shell -yosh_project -|-- yosh - |-- builtins - | |-- __init__.py - | |-- cd.py - |-- __init__.py - |-- shell.py -``` - -在 `cd.py` 中,我们通过使用系统调用 `os.chdir` 实现自己的 `cd` 命令。 - -```python -import os -from yosh.constants import * - - -def cd(args): - os.chdir(args[0]) - - return SHELL_STATUS_RUN -``` - -注意,我们会从内置函数返回 shell 的运行状态。所以,为了能够在项目中继续使用常量,我们将它们移至 `yosh/constants.py`。 - -```shell -yosh_project -|-- yosh - |-- builtins - | |-- __init__.py - | |-- cd.py - |-- __init__.py - |-- constants.py - |-- shell.py -``` - -在 `constants.py` 中,我们将状态常量都放在这里。 - -```python -SHELL_STATUS_STOP = 0 -SHELL_STATUS_RUN = 1 -``` - -现在,我们的内置 `cd` 已经准备好了。让我们修改 `shell.py` 来处理这些内置函数。 - -```python -... -# Import constants -from yosh.constants import * - -# Hash map to store built-in function name and reference as key and value -built_in_cmds = {} - - -def tokenize(string): - return shlex.split(string) - - -def execute(cmd_tokens): - # Extract command name and arguments from tokens - cmd_name = cmd_tokens[0] - cmd_args = cmd_tokens[1:] - - # If the command is a built-in command, invoke its function with arguments - if cmd_name in built_in_cmds: - return built_in_cmds[cmd_name](cmd_args) - - ... 
-``` - -我们使用一个 python 字典变量 `built_in_cmds` 作为哈希映射(hash map),以存储我们的内置函数。我们在 `execute` 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。 - -(提示:`built_in_cmds[cmd_name]` 返回能直接使用参数调用的函数引用的。) - -我们差不多准备好使用内置的 `cd` 函数了。最后一步是将 `cd` 函数添加到 `built_in_cmds` 映射中。 - -``` -... -# Import all built-in function references -from yosh.builtins import * - -... - -# Register a built-in function to built-in command hash map -def register_command(name, func): - built_in_cmds[name] = func - - -# Register all built-in commands here -def init(): - register_command("cd", cd) - - -def main(): - # Init shell before starting the main loop - init() - shell_loop() -``` - -我们定义了 `register_command` 函数,以添加一个内置函数到我们内置的命令哈希映射。接着,我们定义 `init` 函数并且在这里注册内置的 `cd` 函数。 - -注意这行 `register_command("cd", cd)` 。第一个参数为命令的名字。第二个参数为一个函数引用。为了能够让第二个参数 `cd` 引用到 `yosh/builtins/cd.py` 中的 `cd` 函数引用,我们必须将以下这行代码放在 `yosh/builtins/__init__.py` 文件中。 - -``` -from yosh.builtins.cd import * -``` - -因此,在 `yosh/shell.py` 中,当我们从 `yosh.builtins` 导入 `*` 时,我们可以得到已经通过 `yosh.builtins` 导入的 `cd` 函数引用。 - -我们已经准备好了代码。让我们尝试在 `yosh` 同级目录下以模块形式运行我们的 shell,`python -m yosh.shell`。 - -现在,`cd` 命令可以正确修改我们的 shell 目录了,同时非内置命令仍然可以工作。非常好! - -#### exit - -最后一块终于来了:优雅地退出。 - -我们需要一个可以修改 shell 状态为 `SHELL_STATUS_STOP` 的函数。这样,shell 循环可以自然地结束,shell 将到达终点而退出。 - -和 `cd` 一样,如果我们在子进程中 fork 和执行 `exit` 函数,其对父进程是不起作用的。因此,`exit` 函数需要成为一个 shell 内置函数。 - -让我们从这开始:在 `builtins` 目录下创建一个名为 `exit.py` 的新文件。 - -``` -yosh_project -|-- yosh - |-- builtins - | |-- __init__.py - | |-- cd.py - | |-- exit.py - |-- __init__.py - |-- constants.py - |-- shell.py -``` - -`exit.py` 定义了一个 `exit` 函数,该函数仅仅返回一个可以退出主循环的状态。 - -``` -from yosh.constants import * - - -def exit(args): - return SHELL_STATUS_STOP -``` - -然后,我们导入位于 `yosh/builtins/__init__.py` 文件的 `exit` 函数引用。 - -``` -from yosh.builtins.cd import * -from yosh.builtins.exit import * -``` - -最后,我们在 `shell.py` 中的 `init()` 函数注册 `exit` 命令。 - - -``` -... 
- -# Register all built-in commands here -def init(): - register_command("cd", cd) - register_command("exit", exit) - -... -``` - -到此为止! - -尝试执行 `python -m yosh.shell`。现在你可以输入 `exit` 优雅地退出程序了。 - -### 最后的想法 - -我希望你能像我一样享受创建 `yosh` (**y**our **o**wn **sh**ell)的过程。但我的 `yosh` 版本仍处于早期阶段。我没有处理一些会使 shell 崩溃的极端状况。还有很多我没有覆盖的内置命令。为了提高性能,一些非内置命令也可以实现为内置命令(避免新进程创建时间)。同时,大量的功能还没有实现(请看 [公共特性](http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html) 和 [不同特性](http://www.tldp.org/LDP/intro-linux/html/x12249.html)) - -我已经在 github.com/supasate/yosh 中提供了源代码。请随意 fork 和尝试。 - -现在该是创建你真正自己拥有的 Shell 的时候了。 - -Happy Coding! - --------------------------------------------------------------------------------- - -via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/ - -作者:[Supasate Choochaisri][a] -译者:[cposture](https://github.com/cposture) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://disqus.com/by/supasate_choochaisri/ -[1]: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/ -[2]: http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html -[3]: http://www.tldp.org/LDP/intro-linux/html/x12249.html -[4]: https://github.com/supasate/yosh From b0f2444bf204778cf530e20037c2b5e4458a3075 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 6 Aug 2016 18:24:41 +0800 Subject: [PATCH 353/471] PUB:20160715 bc - Command line calculator @FSSlc --- .../20160715 bc - Command line calculator.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) rename {translated/tech => published}/20160715 bc - Command line calculator.md (57%) diff --git a/translated/tech/20160715 bc - Command line calculator.md b/published/20160715 bc - Command line calculator.md similarity index 57% rename from translated/tech/20160715 bc - Command line calculator.md rename to published/20160715 bc - Command line calculator.md index 
d5dc711909..bf96f9c464 100644 --- a/translated/tech/20160715 bc - Command line calculator.md +++ b/published/20160715 bc - Command line calculator.md @@ -3,11 +3,11 @@ bc : 一个命令行计算器 ![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/07/bc-calculator-945x400.jpg) -假如你运行在一个图形桌面环境中,当你需要一个计算器时,你可能只需要一路进行点击便可以找到一个计算器。例如,Fedora 工作站中就已经包含了一个名为 `Calculator` 的工具。它有着几种不同的操作模式,例如,你可以进行复杂的数学运算或者金融运算。但是,你知道吗,命令行也提供了一个与之相似的名为 `bc` 的工具? +假如你在一个图形桌面环境中需要一个计算器时,你可能只需要一路进行点击便可以找到一个计算器。例如,Fedora 工作站中就已经包含了一个名为 `Calculator` 的工具。它有着几种不同的操作模式,例如,你可以进行复杂的数学运算或者金融运算。但是,你知道吗,命令行也提供了一个与之相似的名为 `bc` 的工具? -`bc` 工具可以为你提供你期望一个科学计算器、金融计算器或者是简单的计算器所能提供的所有功能。另外,假如需要的话,它还可以从命令行中被脚本化。这使得当你需要做复杂的数学运算时,你可以在 shell 脚本中使用它。 +`bc` 工具可以为你提供的功能可以满足你对科学计算器、金融计算器或者是简单计算器的期望。另外,假如需要的话,它还可以从命令行中被脚本化。这使得当你需要做复杂的数学运算时,你可以在 shell 脚本中使用它。 -因为 bc 被其他的系统软件所使用,例如 CUPS 打印服务,它可能已经在你的 Fedora 系统中被安装了。你可以使用下面这个命令来进行检查: +因为 bc 也被用于其他的系统软件,例如 CUPS 打印服务,所以它可能已经在你的 Fedora 系统中被安装了。你可以使用下面这个命令来进行检查: ``` dnf list installed bc @@ -21,7 +21,7 @@ sudo dnf install bc ### 用 bc 做一些简单的数学运算 -使用 bc 的一种方式是进入它自己的 shell。在那里你可以在一行中做许多次计算。但在你键入 bc 后,首先出现的是有关这个程序的警告: +使用 bc 的一种方式是进入它自己的 shell。在那里你可以按行进行许多次计算。当你键入 bc 后,首先出现的是有关这个程序的警告: ``` $ bc @@ -43,16 +43,16 @@ bc 会回答上面计算式的答案是: 2 ``` -在这里你还可以执行其他的命令。你可以使用 加(+)、减(-)、乘(*)、除(/)、圆括号、指数符号(^) 等等。请注意 bc 同样也遵循所有约定俗成的运算规定,例如运算的先后顺序。你可以试试下面的例子: +在这里你还可以执行其他的命令。你可以使用 加(+)、减(-)、乘(*)、除(/)、圆括号、指数符号(\^) 等等。请注意 bc 同样也遵循所有约定俗成的运算规则,例如运算的先后顺序。你可以试试下面的例子: ``` (4+7)*2 4+7*2 ``` -若要离开 bc 可以通过按键组合 `Ctrl+D` 来发送 “输入结束”信号给 bc 。 +若要退出 bc 可以通过按键组合 `Ctrl+D` 来发送 “输入结束”信号给 bc 。 -使用 bc 的另一种方式是使用 `echo` 命令来传递运算式或命令。下面这个示例类似于计算器中的 "Hello, world" 例子,使用 shell 的管道函数(|) 来将 `echo` 的输出传入 `bc` 中: +使用 bc 的另一种方式是使用 `echo` 命令来传递运算式或命令。下面这个示例就是计算器中的 “Hello, world” 例子,使用 shell 的管道函数(|) 来将 `echo` 的输出传入 `bc` 中: ``` echo '1+1' | bc @@ -88,7 +88,7 @@ echo '7-4.15' | bc ### 其他进制系统 -bc 的另一个有用的功能是可以使用除 十进制以外的其他计数系统。例如,你可以轻松地做十六进制或二进制的数学运算。可以使用 `ibase` 和 `obase` 命令来分别设定输入和输出的进制系统。需要记住的是一旦你使用了 `ibase`,之后你输入的任何数字都将被认为是在新定义的进制系统中。 +bc 
的另一个有用的功能是可以使用除了十进制以外的其他计数系统。例如,你可以轻松地做十六进制或二进制的数学运算。可以使用 `ibase` 和 `obase` 命令来分别设定输入和输出的进制系统。需要记住的是一旦你使用了 `ibase`,之后你输入的任何数字都将被认为是在新定义的进制系统中。 要做十六进制数到十进制数的转换或运算,你可以使用类似下面的命令。请注意大于 9 的十六进制数必须是大写的(A-F): @@ -103,7 +103,7 @@ echo 'ibase=16; 5F72+C39B' | bc echo 'obase=16; ibase=16; 5F72+C39B' | bc ``` -下面是一个小技巧。假如你在 shell 中做这些运算,怎样才能使得输入重新为十进制数呢?答案是使用 `ibase` 命令,但你必须设定它为在当前进制中与十进制中的 10 等价的值。例如,假如 `ibase` 被设定为十六进制,你需要输入: +下面是一个小技巧。假如你在 shell 中做这些十六进制运算,怎样才能使得输入重新为十进制数呢?答案是使用 `ibase` 命令,但你必须设定它为在当前进制中与十进制中的 10 等价的值。例如,假如 `ibase` 被设定为十六进制,你需要输入: ``` ibase=A @@ -117,11 +117,11 @@ ibase=A -------------------------------------------------------------------------------- -via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ +via: https://fedoramagazine.org/bc-command-line-calculator/ 作者:[Paul W. Frields][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From b45e17341fe8ea717b7710133299c85022b5bb81 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 6 Aug 2016 18:42:44 +0800 Subject: [PATCH 354/471] PUB:Part 9 - Learn How to Use Awk Special Patterns begin and end @ChrisLeeGit --- ... 
Use Awk Special Patterns begin and end.md | 40 ++++++++++--------- 1 file changed, 21 insertions(+), 19 deletions(-) rename translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md => published/awk/Part 9 - Learn How to Use Awk Special Patterns begin and end.md (78%) diff --git a/translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md b/published/awk/Part 9 - Learn How to Use Awk Special Patterns begin and end.md similarity index 78% rename from translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md rename to published/awk/Part 9 - Learn How to Use Awk Special Patterns begin and end.md index 378811776a..5791ffb0f9 100644 --- a/translated/tech/awk/20160719 Part 9 - Learn How to Use Awk Special Patterns begin and end.md +++ b/published/awk/Part 9 - Learn How to Use Awk Special Patterns begin and end.md @@ -21,8 +21,7 @@ awk 系列:如何使用 awk 的特殊模式 BEGIN 和 END /pattern/ { actions } ``` -当你看脚本中的模式(`/pattern`)时,你会发现它通常是一个正则表达式,此外,你也可以将模式(`/pattern`)当成特殊模式 `BEGIN` 和 `END`。 -因此,我们也能按照下面的形式编写一条 awk 命令: +你通常会发现脚本中的模式(`/pattern/`)是一个正则表达式,不过你也可以将模式使用特殊模式 `BEGIN` 和 `END`。因此,我们也能按照下面的形式编写一条 awk 命令: ``` awk ' @@ -41,11 +40,11 @@ END { actions } 含有这些特殊模式的 awk 命令脚本的执行流程如下: -- 当在脚本中使用了 `BEGIN` 模式,则 `BEGIN` 中所有的动作都会在读取任何输入行之前执行。 -- 然后,读入一个输入行并解析成不同的段。 -- 接下来,每一条指定的非特殊模式都会和输入行进行比较匹配,当匹配成功后,就会执行模式对应的动作。对所有你指定的模式重复此执行该步骤。 -- 再接下来,对于所有输入行重复执行步骤 2 和 步骤 3。 -- 当读取并处理完所有输入行后,假如你指定了 `END` 模式,那么将会执行相应的动作。 +1. 当在脚本中使用了 `BEGIN` 模式,则 `BEGIN` 中所有的动作都会在读取任何输入行之前执行。 +2. 然后,读入一个输入行并解析成不同的段。 +3. 接下来,每一条指定的非特殊模式都会和输入行进行比较匹配,当匹配成功后,就会执行模式对应的动作。对所有你指定的模式重复此执行该步骤。 +4. 再接下来,对于所有输入行重复执行步骤 2 和 步骤 3。 +5. 
当读取并处理完所有输入行后,假如你指定了 `END` 模式,那么将会执行相应的动作。 当你使用特殊模式时,想要在 awk 操作中获得最好的结果,你应当记住上面的执行顺序。 @@ -73,7 +72,8 @@ $ cat ~/domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/View-Contents-of-File.png) -> 查看文件内容 + +*查看文件内容* 在这个例子中,我们希望统计出 domains.txt 文件中域名 `tecmint.com` 出现的次数。所以,我们编写了一个简单的 shell 脚本帮助我们完成任务,它使用了变量、数学表达式和赋值运算符的思想,脚本内容如下: @@ -81,16 +81,16 @@ $ cat ~/domains.txt #!/bin/bash for file in $@; do if [ -f $file ] ; then -# 输出文件名 +### 输出文件名 echo "File is: $file" -# 输出一个递增的数字记录包含 tecmint.com 的行数 +### 输出一个递增的数字记录包含 tecmint.com 的行数 awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file else -# 若输入不是文件,则输出错误信息 +### 若输入不是文件,则输出错误信息 echo "$file 不是一个文件,请指定一个文件。" >&2 && exit 1 fi done -# 成功执行后使用退出代码 0 终止脚本 +### 成功执行后使用退出代码 0 终止脚本 exit 0 ``` @@ -117,24 +117,25 @@ END { printf "%s\n", counter ; } #!/bin/bash for file in $@; do if [ -f $file ] ; then -# 输出文件名 +### 输出文件名 echo "File is: $file" -# 输出文件中 tecmint.com 出现的总次数 +### 输出文件中 tecmint.com 出现的总次数 awk ' BEGIN { print "文件中出现 tecmint.com 的次数是:" ; } /^tecmint.com/ { counter+=1 ; } END { printf "%s\n", counter ; } ' $file else -# 若输入不是文件,则输出错误信息 +### 若输入不是文件,则输出错误信息 echo "$file 不是一个文件,请指定一个文件。" >&2 && exit 1 fi done -# 成功执行后使用退出代码 0 终止脚本 +### 成功执行后使用退出代码 0 终止脚本 exit 0 ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-BEGIN-and-END-Patterns.png) -> awk 模式 BEGIN 和 END + +*awk 模式 BEGIN 和 END* 当我们运行上面的脚本时,它会首先输出 domains.txt 文件的位置,然后执行 awk 命令脚本,该命令脚本中的特殊模式 `BEGIN` 将会在从文件读取任何行之前帮助我们输出这样的消息“`文件中出现 tecmint.com 的次数是:`”。 @@ -146,7 +147,8 @@ exit 0 $ ./script.sh ~/domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Script-to-Count-Number-of-Times-String-Appears.png) -> 用于统计字符串出现次数的脚本 + +*用于统计字符串出现次数的脚本* 最后总结一下,我们在本节中演示了更多的 awk 功能,并学习了特殊模式 `BEGIN` 和 `END` 的概念。 @@ -159,7 +161,7 @@ via: http://www.tecmint.com/learn-use-awk-special-patterns-begin-and-end/ 作者:[Aaron Kili][a] 译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对ID](https://github.com/校对ID) 
+校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1200da79093ba4c9085703508a3b5d4b064c6577 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Sat, 6 Aug 2016 22:03:19 +0800 Subject: [PATCH 355/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?Part=2011=20-=20How=20to=20Allow=20Awk=20to=20Use=20Shell=20Var?= =?UTF-8?q?iables?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...How to Allow Awk to Use Shell Variables.md | 97 ------------------- ...How to Allow Awk to Use Shell Variables.md | 95 ++++++++++++++++++ 2 files changed, 95 insertions(+), 97 deletions(-) delete mode 100644 sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md create mode 100644 translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md diff --git a/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md b/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md deleted file mode 100644 index 5930cadb64..0000000000 --- a/sources/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md +++ /dev/null @@ -1,97 +0,0 @@ -Being translated by ChrisLeeGit - -How to Allow Awk to Use Shell Variables – Part 11 -================================================== - -When we write shell scripts, we normally include other smaller programs or commands such as Awk operations in our scripts. In the case of Awk, we have to find ways of passing some values from the shell to Awk operations. - -This can be done by using shell variables within Awk commands, and in this part of the series, we shall learn how to allow Awk to use shell variables that may contain values we want to pass to Awk commands. - -There possibly two ways you can enable Awk to use shell variables: - -### 1. 
Using Shell Quoting - -Let us take a look at an example to illustrate how you can actually use shell quoting to substitute the value of a shell variable in an Awk command. In this example, we want to search for a username in the file /etc/passwd, filter and print the user’s account information. - -Therefore, we can write a `test.sh` script with the following content: - -``` -#!/bin/bash - -#read user input -read -p "Please enter username:" username - -#search for username in /etc/passwd file and print details on the screen -cat /etc/passwd | awk "/$username/ "' { print $0 }' -``` - -Thereafter, save the file and exit. - -Interpretation of the Awk command in the test.sh script above: - -``` -cat /etc/passwd | awk "/$username/ "' { print $0 }' -``` - -`"/$username/ "` – shell quoting used to substitute value of shell variable username in Awk command. The value of username is the pattern to be searched in the file /etc/passwd. - -Note that the double quote is outside the Awk script, `‘{ print $0 }’`. - -Then make the script executable and run it as follows: - -``` -$ chmod +x test.sh -$ ./text.sh -``` - -After running the script, you will be prompted to enter a username, type a valid username and hit Enter. You will view the user’s account details from the /etc/passwd file as below: - -![](http://www.tecmint.com/wp-content/uploads/2016/08/Shell-Script-to-Find-Username-in-Passwd-File.png) ->Shell Script to Find Username in Password File - -### 2. Using Awk’s Variable Assignment - -This method is much simpler and better in comparison to method one above. Considering the example above, we can run a simple command to accomplish the job. Under this method, we use the -v option to assign a shell variable to a Awk variable. 
- -Firstly, create a shell variable, username and assign it the name that we want to search in the /etc/passswd file: - -``` -username="aaronkilik" -``` - -Then type the command below and hit Enter: - -``` -# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/08/Find-Username-in-Password-File-Using-Awk.png) ->Find Username in Password File Using Awk - -Explanation of the above command: - -- `-v` – Awk option to declare a variable -- `username` – is the shell variable -- `name` – is the Awk variable -Let us take a careful look at `$0 ~ name` inside the Awk script, `' $0 ~ name {print $0}'`. Remember, when we covered Awk comparison operators in Part 4 of this series, one of the comparison operators was value ~ pattern, which means: true if value matches the pattern. - -The `output($0)` of cat command piped to Awk matches the pattern `(aaronkilik)` which is the name we are searching for in /etc/passwd, as a result, the comparison operation is true. The line containing the user’s account information is then printed on the screen. - -### Conclusion - -We have covered an important section of Awk features, that can help us use shell variables within Awk commands. Many times, you will write small Awk programs or commands within shell scripts and therefore, you need to have a clear understanding of how to use shell variables within Awk commands. - -In the next part of the Awk series, we shall dive into yet another critical section of Awk features, that is flow control statements. So stay tunned and let’s keep learning and sharing. 
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对ID](https://github.com/校对ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ diff --git a/translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md b/translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md new file mode 100644 index 0000000000..e4ec78abd4 --- /dev/null +++ b/translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md @@ -0,0 +1,95 @@ +awk 系列:如何让 awk 使用 Shell 变量 +================================================== + +当我们编写 shell 脚本时,我们通常会在脚本中包含其它小程序或命令,例如 awk 操作。对于 awk 而言,我们需要找一些将某些值从 shell 传递到 awk 操作中的方法。 + +我们可以通过在 awk 命令中使用 shell 变量达到目的,在 awk 系列的这一节中,我们将学习如何让 awk 使用 shell 变量,这些变量可能包含我们希望传递给 awk 命令的值。 + +有两种可能的方法可以让 awk 使用 shell 变量: + +### 1. 
使用 Shell 引用 + +让我们用一个示例来演示如何在一条 awk 命令中使用 shell 引用替代一个 shell 变量。在该示例中,我们希望在文件 /etc/passwd 中搜索一个用户名,过滤并输出用户的账户信息。 + +因此,我们可以编写一个 `test.sh` 脚本,内容如下: + +``` +#!/bin/bash + +# 读取用户名 +read -p "请输入用户名:" username + +# 在 /etc/passwd 中搜索用户名,然后在屏幕上输出详细信息 +cat /etc/passwd | awk "/$username/ "' { print $0 }' +``` + +然后,保存文件并退出。 + +上述 `test.sh` 脚本中 awk 命令的说明: + +``` +cat /etc/passwd | awk "/$username/ "' { print $0 }' +``` + +`"/$username/ "`:在 awk 命令中使用 shell 引用来替代 shell 变量 `username` 的值。`username` 的值就是要在文件 /etc/passwd 中搜索的模式。 + +注意,双引号位于 awk 脚本 `'{ print $0 }'` 之外。 + +接下来给脚本添加可执行权限并运行它,操作如下: + +``` +$ chmod +x test.sh +$ ./text.sh +``` + +运行脚本后,它会提示你输入一个用户名,然后你输入一个合法的用户名并回车。你将会看到来自 /etc/passwd 文件中详细的用户账户信息,如下图所示: + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Shell-Script-to-Find-Username-in-Passwd-File.png) +> *在 Password 文件中查找用户名的 shell 脚本* + +### 2. 使用 awk 进行变量赋值 + +和上面介绍的方法相比,该方法更加单,并且更好。考虑上面的示例,我们可以运行一条简单的命令来完成同样的任务。 +在该方法中,我们使用 `-v` 选项将一个 shell 变量的值赋给一个 awk 变量。 +首先,创建一个 shell 变量 `username`,然后给它赋予一个我们希望在 /etc/passwd 文件中搜索的名称。 + +``` +username="aaronkilik" +``` +然后输入下面的命令并回车: + +``` +# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Find-Username-in-Password-File-Using-Awk.png) +> *使用 awk 在 Password 文件中查找用户名* + +上述命令的说明: + +- `-v`:awk 选项之一,用于声明一个变量 +- `username`:是 shell 变量 +- `name`:是 awk 变量 + +让我们仔细瞧瞧 awk 脚本 `' $0 ~ name {print $0}'` 中的 `$0 ~ name`。还记得么,当我们在 awk 系列第四节中介绍 awk 比较运算符时,`value ~ pattern` 便是比较运算符之一,它是指:如果 `value` 匹配了 `pattern` 则返回 `true`。 + +cat 命令通过管道传给 awk 的 `output($0)` 与模式 `(aaronkilik)` 匹配,该模式即为我们在 /etc/passwd 中搜索的名称,最后,比较操作返回 `true`。接下来会在屏幕上输出包含用户账户信息的行。 + +### 结论 + +我们已经介绍了 awk 功能的一个重要部分,它能帮助我们在 awk 命令中使用 shell 变量。很多时候,你都会在 shell 脚本中编写小的 awk 程序或命令,因此,你需要清晰地理解如何在 awk 命令中使用 shell 变量。 + +在 awk 系列的下一个部分,我们将会深入学习 awk 功能的另外一个关键部分,即流程控制语句。所以请继续保持关注,并让我们坚持学习与分享。 + + +-------------------------------------------------------------------------------- + +via: 
http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[ChrisLeeGit](https://github.com/chrisleegit) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 4b2273b2da59f601ec0a48010b348c0c8fb41c07 Mon Sep 17 00:00:00 2001 From: vim-kakali <1799225723@qq.com> Date: Sat, 6 Aug 2016 23:56:44 +0800 Subject: [PATCH 356/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ce portfolio - Machine learning project.md | 64 +++++++++---------- 1 file changed, 31 insertions(+), 33 deletions(-) diff --git a/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md index a9af49b188..9e6aa7c1fe 100644 --- a/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 4 - Building a data science portfolio - Machine learning project.md @@ -1,44 +1,42 @@ -vim-kakali translating +### 计算运用数据中的值 +接下来我们会计算过程数据或运用数据中的值。我们要做的就是推测这些数据代表的贷款是否被收回。如果能够计算出来,我们只要看一下包含贷款的运用数据的参数 foreclosure_date 就可以了。如果这个参数的值是 None ,那么这些贷款肯定没有收回。为了避免我们的样例中存在少量的运用数据,我们会计算出运用数据中有贷款数据的行的行数。这样我们就能够从我们的训练数据中筛选出贷款数据,排除了一些运用数据。 -### Computing values from the performance data - -The next step we’ll take is to calculate some values from processed/Performance.txt. All we want to do is to predict whether or not a property is foreclosed on. To figure this out, we just need to check if the performance data associated with a loan ever has a foreclosure_date. If foreclosure_date is None, then the property was never foreclosed on. 
In order to avoid including loans with little performance history in our sample, we’ll also want to count up how many rows exist in the performance file for each loan. This will let us filter loans without much performance history from our training data.
-
-One way to think of the loan data and the performance data is like this:
+下面是一种看待贷款数据和运用数据之间关系的方法:

![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/001.png)

-As you can see above, each row in the Acquisition data can be related to multiple rows in the Performance data. In the Performance data, foreclosure_date will appear in the quarter when the foreclosure happened, so it should be blank prior to that. Some loans are never foreclosed on, so all the rows related to them in the Performance data have foreclosure_date blank.
-We need to compute foreclosure_status, which is a Boolean that indicates whether a particular loan id was ever foreclosed on, and performance_count, which is the number of rows in the performance data for each loan id.

+在上面的表格中,采集数据中的每一行数据都与运用数据中的多行数据有联系。在运用数据中,foreclosure_date 会出现在贷款被收回的那个季度,因此在此之前该字段应该是空白的。一些贷款从未被收回,所以在运用数据中与它们相关的行里,foreclosure_date 字段都是空白的。

+我们需要计算 foreclosure_status 的值,它的值是布尔类型,可以表示一个特定的贷款 id 是否被收回过,还有一个参数 performance_count ,它记录了运用数据中每个贷款 id 出现的行数。

-There are a few different ways to compute the counts we want:
+计算这些行数有多种不同的方法:

-- We could read in all the performance data, then use the Pandas groupby method on the DataFrame to figure out the number of rows associated with each loan id, and also if the foreclosure_date is ever not None for the id.
- - The upside of this method is that it’s easy to implement from a syntax perspective.
- - The downside is that reading in all 129236094 lines in the data will take a lot of memory, and be extremely slow.
-- We could read in all the performance data, then use apply on the acquisition DataFrame to find the counts for each id.
- - The upside is that it’s easy to conceptualize. 
- - The downside is that reading in all 129236094 lines in the data will take a lot of memory, and be extremely slow.
-- We could iterate over each row in the performance dataset, and keep a separate dictionary of counts.
- - The upside is that the dataset doesn’t need to be loaded into memory, so it’s extremely fast and memory-efficient.
- - The downside is that it will take slightly longer to conceptualize and implement, and we need to parse the rows manually.

+- 我们能够读取所有的运用数据,然后我们用 Pandas 的 groupby 方法在数据框中计算出与每个贷款 id 有关的行的行数,然后就可以查看贷款 id 的 foreclosure_date 值是否为 None 。
+    - 这种方法的优点是从语法上来说容易实现。
+    - 它的缺点是需要读取所有的 129236094 行数据,这样就会占用大量内存,并且运行起来极慢。
+- 我们可以读取所有的运用数据,然后在采集数据框上使用 apply 去计算每个贷款 id 出现的次数。
+    - 这种方法的优点是容易理解。
+    - 缺点是需要读取所有的 129236094 行数据。这样会占用大量内存,并且运行起来极慢。
+- 我们可以迭代访问运用数据中的每一行数据,并单独维护一个计数字典。
+    - 这种方法的优点是数据不需要被加载到内存中,所以运行起来会很快且不需要占用内存。
+    - 缺点是这样的话理解和实现上可能有点耗费时间,我们需要对每一行数据进行语法分析。

-Loading in all the data will take quite a bit of memory, so let’s go with the third option above. All we need to do is to iterate through all the rows in the Performance data, while keeping a dictionary of counts per loan id. In the dictionary, we’ll keep track of how many times the id appears in the performance data, as well as if foreclosure_date is ever not None. This will give us foreclosure_status and performance_count.
+加载所有的数据会非常耗费内存,所以我们采用上面的第三种方法。我们要做的就是迭代运用数据中的每一行数据,同时为每一个贷款 id 维护一个计数字典。在这个字典中,我们会记录每个贷款 id 在运用数据中出现的次数,以及 foreclosure_date 是否曾经不是 None 。这样我们就能得到 foreclosure_status 和 performance_count 的值。

-We’ll create a new file called annotate.py, and add in code that will enable us to compute these values. In the below code, we’ll:

-- Import needed libraries.
-- Define a function called count_performance_rows.
- - Open processed/Performance.txt. This doesn’t read the file into memory, but instead opens a file handler that can be used to read in the file line by line.
- - Loop through each line in the file. 
- - Split the line on the delimiter (|)
- - Check if the loan_id is not in the counts dictionary.
- - If not, add it to counts.
- - Increment performance_count for the given loan_id because we’re on a row that contains it.
- - If date is not None, then we know that the loan was foreclosed on, so set foreclosure_status appropriately.
+我们会新建一个 annotate.py 文件,并在其中加入能够计算这些值的代码。在下面的代码中,我们会:
+
+- 导入需要的库
+- 定义一个函数 count_performance_rows 。
+ - 打开 processed/Performance.txt 文件。这不会将文件读入内存,而是打开一个文件标识符,这个标识符可以用来以行为单位读取文件。
+ - 迭代文件的每一行数据。
+ - 使用分隔符(|)分开每行的不同数据。
+ - 检查 loan_id 是否在计数字典中。
+ - 如果不存在,就将它加入计数字典。
+ - loan_id 的 performance_count 参数自增 1 次,因为我们这次迭代的行包含它。
+ - 如果日期不是 None ,我们就会知道贷款被收回了,然后为 foreclosure_status 设置合适的值。

```
import os
@@ -65,9 +63,9 @@ def count_performance_rows():
 return counts
```

-### Getting the values
+### 获取值

-Once we create our counts dictionary, we can make a function that will extract values from the dictionary if a loan_id and a key are passed in:
+只要我们创建了计数字典,我们就可以使用一个函数,通过一个 loan_id 和一个 key 从字典中提取到需要的参数的值:

```
def get_performance_summary_value(loan_id, key, counts):
@@ -78,7 +76,7 @@ def get_performance_summary_value(loan_id, key, counts):
 return value[key]
```

-The above function will return the appropriate value from the counts dictionary, and will enable us to assign a foreclosure_status value and a performance_count value to each row in the Acquisition data. The [get][33] method on dictionaries returns a default value if a key isn’t found, so this enables us to return sensible default values if a key isn’t found in the counts dictionary. 
- + +上面的函数会从计数字典中返回合适的值,我们也能够为采集数据中的每一行赋一个 foreclosure_status 值和一个 performance_count 值。如果键不存在,字典的 [get][33] 方法会返回一个默认值,所以在字典中不存在键的时候我们就可以得到一个可知的默认值。 From 4049083f78cfc68108e6a484c06ddbb0b678b83b Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 7 Aug 2016 08:40:08 +0800 Subject: [PATCH 357/471] PUB:20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More @vim-kakali --- ...mart Devices Security Concerns and More.md | 49 +++++++++++++++++++ ...mart Devices Security Concerns and More.md | 48 ------------------ 2 files changed, 49 insertions(+), 48 deletions(-) create mode 100644 published/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md delete mode 100644 translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md diff --git a/published/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/published/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md new file mode 100644 index 0000000000..78af0dc3de --- /dev/null +++ b/published/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md @@ -0,0 +1,49 @@ +Linus Torvalds 谈及物联网、智能设备、安全连接等问题 +=========================================================================== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elc-linus-b.jpg?itok=6WwnCSjL) + +*Dirk Hohndel 在嵌入式大会上采访 Linus Torvalds 。* + + +4 月 4 日到 6 日,在圣迭戈召开的[嵌入式 Linux 大会(Embedded Linux Conference)][0](ELC) 从首次举办到现在已经有 11 年了,该会议包括了与 Linus Torvalds 的主题讨论。作为 Linux 内核的缔造者和最高决策者——用采访他的英特尔 Linux 和开源技术总监 Dirk Hohndel 的话说,“(他是)我们聚在一起的理由”——他对 Linux 在嵌入式和物联网应用程序领域的发展表示乐观。Torvalds 很明确地力挺了嵌入式 Linux,它被 Linux 桌面、服务器和云技术这些掩去光芒已经很多年了。 + +![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/elc-linus_0.jpg?itok=FNPIDe8k) + +*Linus Torvalds 在嵌入式 Linux 大会上的演讲。* + +物联网是嵌入式大会的主题,在 OpenIoT 峰会讲演中谈到了,在 Torvalds 的访谈中也是主要话题。 + +Torvalds 对 Hohndel 说到,“或许你不会在物联网末端设备上看到 Linux 
的影子,但是在你有一个中心设备的时候,你就会需要它。尤其是物联网标准都有 23 个的时候,你就更需要智能设备了。如果你全部使用的是低级设备,它们没必要一定运行 Linux;如果它们采用的标准稍有差异,你就需要很多的智能设备。我们将来也不会有一个完全开放的标准来将这些物联网设备统一到一起,但是我们会有 3/4 的主要协议是一样的,然后那些智能的中心设备就可以对它们进行互相转换。” + +当 Hohndel 问及在物联网的巨大安全漏洞的时候,Torvalds 神情如常。他说:“我不担心安全问题因为我们能做的不是很多,物联网(设备)是不能更新的,这是我们面对的事实。" + +Linux 缔造者看起来更关心的是一次性嵌入式项目缺少对上游的及时贡献,尽管他注意到近年来这些有了一些显著改善,特别是在硬件整合方面。 + +“嵌入式领域历来就很难与开源开发者有所联系,但是我认为这些都在发生改变。”Torvalds 说:“ARM 社区变得越来越好了。内核维护者实际上现在也能跟上了一些硬件的更新换代。一切都在变好,但是还不够。” + +Torvalds 承认他在家经常使用桌面系统而不是嵌入式系统,并且对硬件不是很熟悉。 + +“我已经用电烙铁弄坏了很多东西。”他说到。“我真的不适合搞硬件开发。”另一方面,Torvalds 设想如果他现在是个年轻人,他可能也在摆弄 Raspberry Pi 和 BeagleBone(猎兔犬板)。“最棒的是你不需要精通焊接,你只需要买个新的板子就行。” + +同时,Torvalds 也承诺他要为 Linux 桌面再奋斗一个 25 年。他笑着说:“我要为它工作一生。” + +下面,请看完整[视频](https://youtu.be/tQKUWkR-wtM)。 + + +要获取关于嵌入式 Linux 和物联网的最新信息,请访问 2016 年嵌入式 Linux 大会 150+ 分钟的会议全程。[现在观看][1]。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security-concerns-and-more-video + +作者:[ERIC BROWN][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/ericstephenbrown +[0]: http://events.linuxfoundation.org/events/embedded-linux-conference +[1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom + diff --git a/translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md deleted file mode 100644 index 876f84f68e..0000000000 --- a/translated/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md +++ /dev/null @@ -1,48 +0,0 @@ - -Linus Torvalds 谈及物联网,智能设备,安全连接等问题[video] -=========================================================================== - 
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elc-linus-b.jpg?itok=6WwnCSjL) ->Dirk Hohndel 在嵌入式大会上采访 Linus Torvalds 。 - - - [嵌入式大会(Embedded Linux Conference)][0] 从在 San Diego 【译注:圣迭戈,美国加利福尼亚州的一个太平洋沿岸城市。】开始举办到现在已经有 11 年了,在 4 月 4 日到 6 日,Linus Torvalds 加入了会议的主题讨论。他是 Linux 内核的缔造者和最高决策者,也是“我们都在这里的原因”,在采访他的对话中,英特尔的 Linux 和开源技术总监 Dirk Hohndel 谈到了 Linux 在嵌入式和物联网应用程序领域的快速发展前景。Torvalds 很少出席嵌入式 Linux 大会,这些大会经常被 Linux 桌面、服务器和云技术夺去光芒。 -![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/elc-linus_0.jpg?itok=FNPIDe8k) ->Linus Torvalds 在嵌入式 Linux 大会上的演讲。 - - -物联网是嵌入式大会的主题,也包括未来开放物联网的最高发展方向(OpenIoT Summit),这是采访 Torvalds 的主要话题。 - -Torvalds 对 Hohndel 说到,“或许你不会在物联网设备上看到 Linux 的影子,但是在你有一个核心设备的时候,你就会需要它。你需要智能设备尤其在你有 23 [物联网标准]的时候。如果你全部使用低级设备,它们没必要一定运行 Linux ,它们采用的标准稍微有点不同,所以你需要很多智能设备。我们将来也不会有一个完全开放的标准,只是给它一个统一的标准,但是你将会需要 4 分之 3 的主要协议,它们都是这些智能核心的转化形式。” - -当 Hohndel 问及在物联网的巨大安全漏洞的时候, Torvalds 神情如常。他说:“我不担心安全问题因为我们能做的不是很多,物联网如果遭受攻击是无法挽回的-这是事实。" - -Linux 缔造者看起来更关心的是一次性的嵌入式项目缺少及时的上游贡献,尽管他注意到近年来这些都有一些本质上的提升,特别是在硬件上的发展。 - -Torvalds 说:”嵌入式领域历来就很难与开源开发者有所联系,但是我认为这些都在发生改变,ARM 团队也已经很优秀了。内核维护者事实上也看到了硬件性能的提升。一切都在变好,但是昨天却不是这样的。” - -Torvalds 承认他在家经常使用桌面系统而不是嵌入式系统,并且在使用硬件的时候他有“两只左手”。 - -“我已经用电烙铁弄坏了很多东西。”他说到。“我真的不适合搞硬件开发。”;另一方面,Torvalds 设想如果他现在是个年轻人,他可能被 Raspberry Pi(树莓派) 和 BeagleBone(猎兔犬板)【译注:Beagle板实际是由TI支持的一个以教育(STEP)为目的的开源项目】欺骗。“最主要是原因是如果你善于焊接,那么你就仅仅是买到了一个新的板子。” - -同时,Torvalds 也承诺他要为 Linux 桌面再奋斗一个 25 年。他笑着说:“我要为它工作一生。” - -下面,请看完整视频。 - -获取关于嵌入式 Linux 和物联网的最新信息。进入 2016 年嵌入式 Linux 大会 150+ 分钟的会议全程。[现在观看][1]. 
-[video](https://youtu.be/tQKUWkR-wtM) - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security-concerns-and-more-video - -作者:[ERIC BROWN][a] -译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/ericstephenbrown -[0]: http://events.linuxfoundation.org/events/embedded-linux-conference -[1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom - From 41d774bfe8ee0627ab16d039ae7d409f61e8b456 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 7 Aug 2016 08:40:43 +0800 Subject: [PATCH 358/471] PUB:20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server @geekpi --- ...pt-get on Ubuntu Linux 16.04 LTS server.md | 29 +++++++++---------- 1 file changed, 13 insertions(+), 16 deletions(-) rename {translated/tech => published}/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md (78%) diff --git a/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md b/published/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md similarity index 78% rename from translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md rename to published/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md index e36813cdc9..d60a04cf8a 100644 --- a/translated/tech/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md +++ b/published/20160727-How-to-use-multiple-connections-to-speed-up-apt-get on Ubuntu Linux 16.04 LTS server.md @@ -1,13 +1,13 @@ -如何在Ubuntu Linux 
16.04 LTS中使用多条连接加速apt-get/apt -========================================================================================= +如何在 Ubuntu Linux 16.04 LTS 中使用多个连接加速 apt-get/apt +================================================= -我该如何在Ubuntu Linux 16.04或者14.04 LTS中从多个仓库中下载包来加速apt-get或者apt命令? +我该如何加速在 Ubuntu Linux 16.04 或者 14.04 LTS 上从多个仓库中下载包的 apt-get 或者 apt 命令? -你需要使用到apt-fast这个shell封装器。它会通过多个连接同时下载一个包来加速apt-get/apt和aptitude命令。所有的包都会同时下载。它使用aria2c作为默认的下载加速。 +你需要使用到 apt-fast 这个 shell 封装器。它会通过多个连接同时下载一个包来加速 apt-get/apt 和 aptitude 命令。所有的包都会同时下载。它使用 aria2c 作为默认的下载加速器。 ### 安装 apt-fast 工具 -在Ubuntu Linux 14.04或者之后的版本尝试下面的命令: +在 Ubuntu Linux 14.04 或者之后的版本尝试下面的命令: ``` $ sudo add-apt-repository ppa:saiarcot895/myppa @@ -45,7 +45,6 @@ $ sudo apt -y install apt-fast 示例输出: - ``` Reading package lists... Done Building dependency tree @@ -77,13 +76,13 @@ Get:4 http://01.archive.ubuntu.com/ubuntu xenial/universe amd64 aria2 amd64 1.19 ![](http://s0.cyberciti.org/uploads/faq/2016/07/apt-fast-confirmation-box.jpg) -你可以直接编辑设置: +你也可以直接编辑设置: ``` $ sudo vi /etc/apt-fast.conf ``` ->**请注意这个工具并不是给慢速网络连接的,它是给快速网络连接的。如果你的网速慢,那么你将无法从这个工具中得到好处。** +> **请注意这个工具并不是给慢速网络连接的,它是给快速网络连接的。如果你的网速慢,那么你将无法从这个工具中得到好处。** ### 我该怎么使用 apt-fast 命令? @@ -94,13 +93,13 @@ apt-fast command apt-fast [options] command ``` -#### 使用apt-fast取回新的包列表 +#### 使用 apt-fast 取回新的包列表 ``` sudo apt-fast update ``` -#### 使用apt-fast执行升级 +#### 使用 apt-fast 执行升级 ``` sudo apt-fast upgrade @@ -121,7 +120,7 @@ $ sudo apt-fast dist-upgrade sudo apt-fast install pkg ``` -比如要安装nginx,输入: +比如要安装 nginx,输入: ``` $ sudo apt-fast install nginx @@ -196,22 +195,20 @@ Status Legend: (OK):download completed. 
``` -#### 下载并显示指定包的changelog +#### 下载并显示指定包的 changelog ``` $ sudo apt-fast changelog pkgNameHere $ sudo apt-fast changelog nginx ``` - - -------------------------------------------------------------------------------- -via: https://fedoramagazine.org/introducing-flatpak/ +via: http://www.cyberciti.biz/faq/how-to-speed-up-apt-get-apt-command-ubuntu-linux/ 作者:[VIVEK GITE][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d3054ff6454f175b63bd6b290de8a792e6efd340 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 7 Aug 2016 10:00:41 +0800 Subject: [PATCH 359/471] PUB:20160502 The intersection of Drupal, IoT, and open hardware @zpl1025 --- ...ction of Drupal, IoT, and open hardware.md | 61 ++++++++++++++++++ ...ction of Drupal, IoT, and open hardware.md | 62 ------------------- 2 files changed, 61 insertions(+), 62 deletions(-) create mode 100644 published/20160502 The intersection of Drupal, IoT, and open hardware.md delete mode 100644 translated/tech/20160502 The intersection of Drupal, IoT, and open hardware.md diff --git a/published/20160502 The intersection of Drupal, IoT, and open hardware.md b/published/20160502 The intersection of Drupal, IoT, and open hardware.md new file mode 100644 index 0000000000..168bbce778 --- /dev/null +++ b/published/20160502 The intersection of Drupal, IoT, and open hardware.md @@ -0,0 +1,61 @@ +Drupal、IoT 和开源硬件之间的交集 +======================================================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/drupal_blue_gray_lead.jpeg?itok=t7W_KD-D) + + +来认识一下 [Amber Matz][1],她是来自 Lullabot Education 旗下的 [Drupalize.Me][3] 的产品经理以及培训师。当她没有倒腾 Arduino、Raspberry Pi 以及电子穿戴设备时,通常会在波特兰 Drupal 用户组里担任辩论主持人。 + +在即将举行的 [DrupalCon NOLA][3] 大会上,Amber 将主持一个关于 Drupal 和 IoT 的主题。如果你会去参加,也想了解下开源硬件,IoT 和 Drupal 
之间的交集,那这个将很合适。如果你去不了新奥尔良的现场也没关系,Amber 还分享了许多很酷的事情。在这次采访中,她讲述了自己参与 Drupal 的原因,一些她自己喜欢的开源硬件项目,以及 IoT 和 Drupal 的未来。 + +![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) + +**你是怎么加入 Drupal 社区的?** + +在这之前,我在一家大型非盈利性机构市场部的“网站管理部”工作,飞快地批量生产出各种定制 PHP/MySQL 表单。最终我厌烦了这一切,并开始在网上寻找更好的方式。然后我找到了 Drupal 6 并开始沉迷进去。过了几年,在一次跳槽之后,我发现了波特兰 Drupal 用户组,然后在里面找了一份全职的 Drupal 开发者工作。我一直经常参加在波特兰的聚会,在那里我找到了大量的社区、朋友和专业方面的发展。一个偶然的机会,我在 Lullabot 找了一份培训师的工作,为 Drupalize.Me 提供内容。现在,我管理着 Drupalize.Me 的内容输出,负责编撰 Drupal 8 相关的内容,还很大程度地参与到波特兰 Drupal 社区中。我是今年的协调员,寻找并安排演讲者们。 + +**我们想知道:什么是 Arduino 原型,你是怎么找到它的,以及你用 Arduino 做过的最酷的事是什么?** + +Arduino,Raspberry Pi,以及可穿戴电子设备,这些年到处都能听到这些术语。我在几年前通过 Becky Stern 的 YouTube 秀(最近由 Becky 继续主持,每周三播出)发现了 [Adafruit 的可穿戴电子设备][4]。我被那些可穿戴设备迷住了,还订了一套 LED 缝制工具,不过没做出任何东西。我不太适合它。我没有任何电子相关的背景,而且在我被那些项目吸引的时候,我根本不知道怎么做出那样的东西,它似乎看上去太遥远了。 + +后来,我在 Coursera 上找到了一个“物联网”专题。(很时髦,对吧?)我很快就喜欢上了。我最终找到了 Arduino 是什么的解释,以及所有这些其他的重要术语和概念。我订了一套推荐的 Arduino 初学者套件,还附带了一本如何上手的小册子。当我第一次让 LED 闪烁的时候,开心极了。我在圣诞节以及之后有两个星期的假期,然而我什么都没干,就一直根据初学者小册子给 Arduino 电路编程。很奇怪我觉得很放松!我太喜欢了。 + +在一月份的时候,我开始构思我自己的原型设备。在知道我需要主持公司培训的开场白时,我用五个 LED 灯和 Arduino 搭建了一个开场白视觉计时器的原型。 + +![](https://opensource.com/sites/default/files/resize/amber-arduino-lightning-talk-timer-400x400.jpg) + +这是一次巨大的成功。我还做了我的第一个可穿戴项目,一件会发光的连帽衫,使用了和 Arduino IDE 兼容的 Gemma 微控制器,一个小的圆形可缝制部件,然后用可导电的线缝起来,将一个滑动可变电阻和衣服帽口的收缩绳连在一起,用来控制缝到帽子里的五个 NeoPixel 灯的颜色。这就是我对原型设计的看法:做一些很好玩也可能会有点实际用途的疯狂项目。 + +**Drupal 和 IoT 带来的最大机遇是什么??** + +IoT 与 Web Service 以及 Drupal 分层趋势实际并没有太大差别。就是将数据从一个东西传送到另一个东西,然后将数据转换成一些有用的东西。但数据是如何送达?能用来做点什么?你觉得现在就有一大堆现成的解决方案、应用、中间层,以及 API 吗?采用 IoT,这只会继续成几何指数级的增长。我觉得,给我任何一个设备或“东西”,总有办法来将它连接到互联网上,有很多办法。而且有大量现成的代码库来帮助创客们将他们的数据从一个东西传到另一个东西。 + +那么 Drupal 在这里处于什么位置?首先,Web services 将是第一个明显的地方。但作为一个创客,我不希望将时间花在编写 Drupal 的订制模块上。我想要的是即插即用!所以我很高兴出现这样的模块能连接 IoT 云端 API 和服务,比如 ThingSpeak,Adafruit.io,IFTTT,以及其他的。我觉得也有一个很好的商业机会,在 Drupal 里构建一套 IoT 云服务,允许用户发送和存储他们的传感器数据,并可以制成表格和图像,还可以写一些插件可以响应特定数据或阙值。每一个 IoT 云 API 
服务都是一个细分的机会,所以能留下很大空间给其他人。 + +**这次 DrupalCon 你有哪些期待?** + +我喜欢与 Drupal 上的朋友重逢,认识一些新的人,还能见到 Lullabot 和 Drupalize.Me 的同事(我们是分布式的公司)!Drupal 8 有太多东西可以去探索了,我们给我们的客户们提供了海量的培训资料。所以,我很期待参与一些 Drupal 8 相关的主题,以及跟上最新的开发进度。最后,我对新奥尔良也很感兴趣!我曾经在 2004 年去过,很期待将这次将看到哪些改变。 + +**谈一谈你这次 DrupalCon 上的演讲:“超越闪烁:将 Drupal 加到你的 IoT 游乐场中”。别人为什么要参加?他们最重要的收获会是什么?** + +我的主题的标题是,“超越闪烁:将 Drupal 加到你的 IoT 游乐场中”,假设我们所有人都处在同一进度和层次,你不需要了解任何关于 Arduino、物联网、甚至是 Drupal,都能跟上。我将从用 Arduino 让 LED 灯闪烁开始,然后我会谈一下我自己在这里面的最大收获:玩、学、教和做。我会列出一些曾经激励过我的例子,它们也很有希望能激发和鼓励其他听众去尝试一下。然后,就是展示时间! + +首先,第一个东西。它是一个建筑提醒信号灯。在这个展示里,我会说明如何将信号灯连到互联网上,以及如何响应从云 API 服务收到的数据。然后,第二个东西。它是一个蒸汽朋克风格 iPhone 外壳形式的“天气手表”。有一个小型 LED 矩阵用来显示我的天气的图标,一个气压和温度传感器,一个 GPS 模块,以及一个 Bluetooth LE 模块,都连接到一个 Adafruit Flora 微控制器上。第二个东西能通过蓝牙连接到我的 iPhone 上的一个应用,并将天气和位置数据通过 MQTT 协议发到 Adafruit.io 的服务器!然后,在 Drupal 这边,我会从云端下载这些数据,更新天气信息,然后更新地图。所以大家也能体验一下通过web service、地图和 Drupal 8 的功能块所能做的事情。 + +学习和制作这些展示原型是一次烧脑的探险,我也希望有人能参与这个主题并感染一点我对这种技术交叉的传染性热情!我很兴奋能分享一些我的发现。 + +------------------------------------------------------------------------------ + +via: https://opensource.com/business/16/5/drupalcon-interview-amber-matz + +作者:[Jason Hibbets][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jhibbets +[1]: https://www.drupal.org/u/amber-himes-matz +[2]: https://drupalize.me/ +[3]: https://events.drupal.org/neworleans2016/ +[4]: https://www.adafruit.com/beckystern diff --git a/translated/tech/20160502 The intersection of Drupal, IoT, and open hardware.md b/translated/tech/20160502 The intersection of Drupal, IoT, and open hardware.md deleted file mode 100644 index 7cfffd953f..0000000000 --- a/translated/tech/20160502 The intersection of Drupal, IoT, and open hardware.md +++ /dev/null @@ -1,62 +0,0 @@ -Drupal, IoT 和开源硬件的交叉点 -======================================================= - 
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/drupal_blue_gray_lead.jpeg?itok=t7W_KD-D) - - -认识一下 [Amber Matz][1],来自由 Lullabot Education 提供的 [Drupalize.Me][3] 的生产经理以及培训师。当她没有倒腾 Arduino,Raspberry Pi 以及电子穿戴设备时,通常会在波特兰 Drupal 用户组里和主持人争论。 - -在即将举行的 [DrupalCon NOLA][3] 大会上,Amber 将主持一个关于 Drupal 和 IoT 的主题。如果你会去参加,也想了解下开源硬件,IoT 和 Drupal 之间的交叉点,那这个将很合适。如果你去不了新奥尔良的现场也没关系,Amber 还分享了许多很酷的事情。在这次采访中,她讲述了自己参与 Drupal 的原因,一些她自己喜欢的开源硬件项目,以及 IoT 和 Drupal 的未来。 - -![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) - -**你是怎么加入 Drupal 社区的?** - -在这之前,我在一家大型非盈利性机构市场部的“站长办公室”工作,大量产出没人喜欢的定制 PHP/MySQL 表格。终于我觉得这样很烦并开始在网上寻找更好的方式。然后我找到了 Drupal 6 并开始自己沉迷进去。多年以后,在准备职业转换的时候,发现了波特兰 Drupal 用户组,然后在里面找了一份全职的 Drupal 开发者工作。我一直经常参加在波特兰的聚会,我觉得它是一种很好的社区,交友,以及专业开发的资源。一个偶然的机会,我在 Lullabot 找了一份培训师的工作为 Drupalize.Me 提供内容。现在,我管理着 Drupalize.Me 的内容管道,创建 Drupal 8 的内容,还很大程度地参与到波特兰 Drupal 社区中。我是今年的协调员,寻找并规划演讲者。 - -**我们得明白:什么是 Arduino 原型,你是怎么找到它的,以及你用 Arduino 做过的最酷的事是什么?** - -Arduino,Raspberry Pi,以及可穿戴电子设备,这些年到处都能听到这些术语。我在几年前通过 Becky Stern YouTube 秀(最近由 Becky 继续主持,每周三播出)发现了 [Adafruit 的可穿戴电子设备][4]。我被那些可穿戴设备迷住了,还订了一套 LED 缝制工具,不过没做出任何东西。我就是没搞懂。我没有任何电子相关的背景,而且在我被那些项目吸引的时候,我根本不知道怎么做出那样的东西。看上去太遥远了。 - -后来,我找到一个 Coursera 的“物联网”专题。(很时髦,对吧?)但我很快就喜欢上了。我最终找到了 Arduino 是什么的解释,以及所有这些其他的重要术语和概念。我订了一套推荐的 Arduino 初学者套件,还附带了一本如何上手的小册子。当我第一次让 LED 闪烁的时候,开心极了。我在圣诞节以及之后有两个星期的假期,然后我什么都没干,就一直根据初学者小册子给 Arduino 电路编程。很奇怪我觉得很放松!我太喜欢了。 - -一月份的时候,我开始构思我自己的原型设备。在知道我要主持公司培训的开场白时,我用五个 LED 灯和 Arduino 搭建了一个开场白视觉计时器。 - -![](https://opensource.com/sites/default/files/resize/amber-arduino-lightning-talk-timer-400x400.jpg) - -这是一次巨大的成功。我还做了我的第一个可穿戴项目,一件会发光的连帽衫,使用了和 Arduino IDE 兼容的 Gemma 微控制器,一个小的圆形可缝制部件,然后用可导电的线缝起来,将一个滑动可变电阻和衣服帽口的收缩绳连在一起,用来控制缝到帽子里的五个 NeoPixel 灯的颜色。这就是我对原型设计的看法:开展一些很好玩也可能会有点实际用途的疯狂项目。 - -**Drupal 和 IoT 带来的最大机遇是什么??** - -IoT 和网站服务以及 Drupal 分层趋势实际并没有太大差别。就是将数据从一个物体传送到另一个物体,然后将数据转换成一些有用的东西。但数据是如何送达?能用来做点什么?你觉得现在就有一大堆现成的解决方案,应用,中间层,以及 API?采用 
IoT,这只会继续成指数增长。我觉得,给我任何一个设备或“物体”,需要只用一种方式来将它连接到因特网的无线可能上。然后有现成的各种代码库来帮助制作者将他们的数据从一个物体送到另一个物体。 - -那么 Drupal 在这里处于什么位置?首先,网站服务将是第一个明显的地方。但作为一个制作者,我不希望将时间花在编写 Drupal 的订制模块上。我想要的是即插即用!所以我很高兴出现这样的模块能连接 IoT 云端 API 和服务,比如 ThingSpeak,Adafruit.io,IFTTT,以及其他的。我觉得也有一个很好的商业机会,在 Drupal 里构建一套 IoT 云服务,允许用户发送和存储他们的传感器数据,并可以制成表格和图像,还可以写一些插件可以响应特定数据或阙值。每一个 IoT 云 API 服务都是一个细分的机会,所以能留下很大空间给其他人。 - -**这次 DrupalCon 你有哪些期待?** - -我喜欢重新联系 Drupal 上的朋友,认识一些新的人,还能见到 Lullabot 和 Drupalize.Me 的同事(我们是分布式的公司)!Drupal 8 有太多东西可以去探索了,不可抗拒地要帮我们的客户收集培训资料。所以,我很期待参与一些 Drupal 8 相关的主题,以及跟上最新的开发活动。最后,我对新奥尔良也很感兴趣!我曾经在 2004 年去过,很期待将这次将看到哪些改变。 - -**谈一谈你这次 DrupalCon 上的演讲,超越闪烁:将 Drupal 加到你的 IoT 游乐场中。别人为什么要参与?他们最重要的收获会是什么?** - -我的主题的标题是,超越闪烁:将 Drupal 加到你的 IoT 游乐场中,本身有很多假设,我将让所有人都放在同一速度和层次。你不需要了解任何关于 Arduino,物联网,甚至是 Drupal,都能跟上。我将从用 Arduino 让 LED 灯闪烁开始,然后我会谈一下我自己在这里面的最大收获:玩,学,教,和做。我会列出一些曾经激发过我的例子,它们也很有希望能激发和鼓励其他听众去尝试一下。然后,就是展示时间! - -首先,第一个东西。它是一个构建提醒信号灯。在这个展示里,我会说明如何将信号灯连到互联网上,以及如何响应从云 API 服务收到的数据。然后,第二个东西。它是一个蒸汽朋克风格 iPhone 外壳形式的“天气手表”。有一个小型 LED 矩阵用来显示我的天气的图标,一个气压和温度传感器,一个 GPS 模块,以及一个 Bluetooth LE 模块,都连接到一个 Adafruit Flora 微控制器上。第二个东西能通过蓝牙连接到我的 iPhone 上的一个应用,并将天气和位置数据通过 MQTT 协议发到 Adafruit.io 的服务器!然后,在 Drupal 这边,我会从云端下载这些数据,根据天气更新一个功能块,然后更新地图。所以大家也能体验一下通过网站服务,地图和 Drupal 8 的功能块所能做的事情。 - -学习和制作这些展示原型是一次烧脑的探险,我也希望有人能参与这个主题并感染一点我对这个技术交叉的传染性热情!我很兴奋能分享一些我的发现。 - - ------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/5/drupalcon-interview-amber-matz - -作者:[Jason Hibbets][a] -译者:[zpl1025](https://github.com/zpl1025) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jhibbets -[1]: https://www.drupal.org/u/amber-himes-matz -[2]: https://drupalize.me/ -[3]: https://events.drupal.org/neworleans2016/ -[4]: https://www.adafruit.com/beckystern From 0a978c116682d2acff7a7a8fb37b447ad4630578 Mon Sep 17 00:00:00 2001 From: Stdio A Date: Sun, 7 Aug 
2016 11:33:58 +0800 Subject: [PATCH 360/471] =?UTF-8?q?Finish=2020160309=20Let=E2=80=99s=20Bui?= =?UTF-8?q?ld=20A=20Web=20Server.=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160309 Let’s Build A Web Server. Part 1.md | 151 ------------------ ...160309 Let’s Build A Web Server. Part 1.md | 150 +++++++++++++++++ 2 files changed, 150 insertions(+), 151 deletions(-) delete mode 100644 sources/tech/20160309 Let’s Build A Web Server. Part 1.md create mode 100644 translated/tech/20160309 Let’s Build A Web Server. Part 1.md diff --git a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md b/sources/tech/20160309 Let’s Build A Web Server. Part 1.md deleted file mode 100644 index 63f08f754d..0000000000 --- a/sources/tech/20160309 Let’s Build A Web Server. Part 1.md +++ /dev/null @@ -1,151 +0,0 @@ -Translating by StdioA -Let’s Build A Web Server. Part 1. -===================================== - -Out for a walk one day, a woman came across a construction site and saw three men working. She asked the first man, “What are you doing?” Annoyed by the question, the first man barked, “Can’t you see that I’m laying bricks?” Not satisfied with the answer, she asked the second man what he was doing. The second man answered, “I’m building a brick wall.” Then, turning his attention to the first man, he said, “Hey, you just passed the end of the wall. You need to take off that last brick.” Again not satisfied with the answer, she asked the third man what he was doing. And the man said to her while looking up in the sky, “I am building the biggest cathedral this world has ever known.” While he was standing there and looking up in the sky the other two men started arguing about the errant brick. The man turned to the first two men and said, “Hey guys, don’t worry about that brick. It’s an inside wall, it will get plastered over and no one will ever see that brick. 
Just move on to another layer.”1 - -The moral of the story is that when you know the whole system and understand how different pieces fit together (bricks, walls, cathedral), you can identify and fix problems faster (errant brick). - -What does it have to do with creating your own Web server from scratch? - -I believe to become a better developer you MUST get a better understanding of the underlying software systems you use on a daily basis and that includes programming languages, compilers and interpreters, databases and operating systems, web servers and web frameworks. And, to get a better and deeper understanding of those systems you MUST re-build them from scratch, brick by brick, wall by wall. - -Confucius put it this way: - ->“I hear and I forget.” - -![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_hear.png) - ->“I see and I remember.” - -![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_see.png) - ->“I do and I understand.” - -![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_do.png) - -I hope at this point you’re convinced that it’s a good idea to start re-building different software systems to learn how they work. - -In this three-part series I will show you how to build your own basic Web server. Let’s get started. - -First things first, what is a Web server? - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_response.png) - -In a nutshell it’s a networking server that sits on a physical server (oops, a server on a server) and waits for a client to send a request. When it receives a request, it generates a response and sends it back to the client. The communication between a client and a server happens using HTTP protocol. A client can be your browser or any other software that speaks HTTP. - -What would a very simple implementation of a Web server look like? Here is my take on it. The example is in Python but even if you don’t know Python (it’s a very easy language to pick up, try it!) 
you still should be able to understand concepts from the code and explanations below: - -``` -import socket - -HOST, PORT = '', 8888 - -listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) -listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) -listen_socket.bind((HOST, PORT)) -listen_socket.listen(1) -print 'Serving HTTP on port %s ...' % PORT -while True: - client_connection, client_address = listen_socket.accept() - request = client_connection.recv(1024) - print request - - http_response = """\ -HTTP/1.1 200 OK - -Hello, World! -""" - client_connection.sendall(http_response) - client_connection.close() -``` - -Save the above code as webserver1.py or download it directly from GitHub and run it on the command line like this - -``` -$ python webserver1.py -Serving HTTP on port 8888 … -``` - -Now type in the following URL in your Web browser’s address bar http://localhost:8888/hello, hit Enter, and see magic in action. You should see “Hello, World!” displayed in your browser like this: - -![](https://ruslanspivak.com/lsbaws-part1/browser_hello_world.png) - -Just do it, seriously. I will wait for you while you’re testing it. - -Done? Great. Now let’s discuss how it all actually works. - -First let’s start with the Web address you’ve entered. It’s called an URL and here is its basic structure: - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_URL_Web_address.png) - -This is how you tell your browser the address of the Web server it needs to find and connect to and the page (path) on the server to fetch for you. Before your browser can send a HTTP request though, it first needs to establish a TCP connection with the Web server. Then it sends an HTTP request over the TCP connection to the server and waits for the server to send an HTTP response back. 
And when your browser receives the response it displays it, in this case it displays “Hello, World!” - -Let’s explore in more detail how the client and the server establish a TCP connection before sending HTTP requests and responses. To do that they both use so-called sockets. Instead of using a browser directly you are going to simulate your browser manually by using telnet on the command line. - -On the same computer you’re running the Web server fire up a telnet session on the command line specifying a host to connect to localhost and the port to connect to 8888 and then press Enter: - -``` -$ telnet localhost 8888 -Trying 127.0.0.1 … -Connected to localhost. -``` - -At this point you’ve established a TCP connection with the server running on your local host and ready to send and receive HTTP messages. In the picture below you can see a standard procedure a server has to go through to be able to accept new TCP connections. - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_socket.png) - -In the same telnet session type GET /hello HTTP/1.1 and hit Enter: - -``` -$ telnet localhost 8888 -Trying 127.0.0.1 … -Connected to localhost. -GET /hello HTTP/1.1 - -HTTP/1.1 200 OK -Hello, World! -``` - -You’ve just manually simulated your browser! You sent an HTTP request and got an HTTP response back. This is the basic structure of an HTTP request: - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_anatomy.png) - -The HTTP request consists of the line indicating the HTTP method (GET, because we are asking our server to return us something), the path /hello that indicates a “page” on the server we want and the protocol version. - -For simplicity’s sake our Web server at this point completely ignores the above request line. You could just as well type in any garbage instead of “GET /hello HTTP/1.1” and you would still get back a “Hello, World!” response. 
- -Once you’ve typed the request line and hit Enter the client sends the request to the server, the server reads the request line, prints it and returns the proper HTTP response. - -Here is the HTTP response that the server sends back to your client (telnet in this case): - -![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_response_anatomy.png) - -Let’s dissect it. The response consists of a status line HTTP/1.1 200 OK, followed by a required empty line, and then the HTTP response body. - -The response status line HTTP/1.1 200 OK consists of the HTTP Version, the HTTP status code and the HTTP status code reason phrase OK. When the browser gets the response, it displays the body of the response and that’s why you see “Hello, World!” in your browser. - -And that’s the basic model of how a Web server works. To sum it up: The Web server creates a listening socket and starts accepting new connections in a loop. The client initiates a TCP connection and, after successfully establishing it, the client sends an HTTP request to the server and the server responds with an HTTP response that gets displayed to the user. To establish a TCP connection both clients and servers use sockets. - -Now you have a very basic working Web server that you can test with your browser or some other HTTP client. As you’ve seen and hopefully tried, you can also be a human HTTP client too, by using telnet and typing HTTP requests manually. - -Here’s a question for you: “How do you run a Django application, Flask application, and Pyramid application under your freshly minted Web server without making a single change to the server to accommodate all those different Web frameworks?” - -I will show you exactly how in Part 2 of the series. Stay tuned. - -BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on topics I just covered. 
Subscribe to the mailing list to get the latest updates about the book and the release date. - --------------------------------------------------------------------------------- - -via: https://ruslanspivak.com/lsbaws-part1/ - -作者:[Ruslan][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://linkedin.com/in/ruslanspivak/ - - - diff --git a/translated/tech/20160309 Let’s Build A Web Server. Part 1.md b/translated/tech/20160309 Let’s Build A Web Server. Part 1.md new file mode 100644 index 0000000000..8ccee84871 --- /dev/null +++ b/translated/tech/20160309 Let’s Build A Web Server. Part 1.md @@ -0,0 +1,150 @@ +搭个 Web 服务器:第一部分 +===================================== + +一天,有一个正在散步的妇人恰好路过一个建筑工地,看到三个正在工作的工人。她问第一个人:“你在做什么?”第一个人没好气地喊道:“你没看到我在砌砖吗?”妇人对这个答案不满意,于是问第二个人:“你在做什么?”第二个人回答说:“我在建一堵砖墙。”说完,他转向第一个人,跟他说:“嗨,你把墙砌过头了。去把刚刚那块砖弄下来!”然而,妇人对这个答案依然不满意,于是又问了第三个人相同的问题。第三个人仰头看着天,对她说:“我在建造世界上最大的教堂。”当他回答时,第一个人和第二个人在为刚刚砌错的砖而争吵。他转向那两个人,说:“不用管那块砖了。这堵墙在室内,它会被水泥填平,没人会看见它的。去砌下一层吧。” + +这个故事告诉我们:如果你能够理解整个系统的构造,了解系统的各个部件如何相互结合(如砖、墙还有整个教堂),你就能够更快地定位及修复问题(那块砌错的砖)。 + +如果你想从头开始创造一个 Web 服务器,那么你需要做些什么呢? + +我相信,如果你想成为一个更好的开发者,你**必须**对日常使用的软件系统的内部结构有更深的理解,包括编程语言、编译器与解释器、数据库及操作系统、Web 服务器及 Web 框架。而且,为了更好更深入地理解这些系统,你**必须**从头开始,用一砖一瓦来重新构建这个系统。 + +孔子曾经用这几句话来表达这种思想: + +>“不闻不若闻之。” + +![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_hear.png) + +>“闻之不若见之。” + +![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_see.png) + +>“知之不若行之。” + +![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_do.png) + +我希望你现在能够意识到,重新建造一个软件系统来了解它的工作方式是一个好主意。 + +在这个由三篇文章组成的系列中,我将会教你构建你自己的 Web 服务器。我们开始吧~ + +先说首要问题:Web 服务器是什么? 
+ +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_response.png) + +简而言之,它是一个运行在一个物理服务器上的网络服务器(啊呀,服务器套服务器),等待客户端向其发送请求。当它接收请求后,会生成一个响应,并回送至客户端。客户端和服务端之间通过 HTTP 协议来实现相互交流。客户端可以是你的浏览器,也可以是使用 HTTP 协议的其它任何软件。 + +最简单的 Web 服务器实现应该是什么样的呢?这里我给出我的实现。这个例子由 Python 写成,即使你没听说过 Python(它是一门超级容易上手的语言,快去试试看!),你也应该能够从代码及注释中理解其中的理念: + +``` +import socket + +HOST, PORT = '', 8888 + +listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) +listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) +listen_socket.bind((HOST, PORT)) +listen_socket.listen(1) +print 'Serving HTTP on port %s ...' % PORT +while True: + client_connection, client_address = listen_socket.accept() + request = client_connection.recv(1024) + print request + + http_response = """\ +HTTP/1.1 200 OK + +Hello, World! +""" + client_connection.sendall(http_response) + client_connection.close() +``` + +将以上代码保存为 webserver1.py,或者直接从 GitHub 上下载这个文件。然后,在命令行中运行这个程序。像这样: + +``` +$ python webserver1.py +Serving HTTP on port 8888 … +``` + +现在,在你的网页浏览器的地址栏中输入 URL:http://localhost:8888/hello ,敲一下回车,然后来见证奇迹。你应该看到“Hello, World!”显示在你的浏览器中,就像下图那样: + +![](https://ruslanspivak.com/lsbaws-part1/browser_hello_world.png) + +说真的,快去试一试。你做实验的时候,我会等着你的。 + +完成了?不错。现在我们来讨论一下它实际上是怎么工作的。 + +首先我们从你刚刚输入的 Web 地址开始。它叫 URL,这是它的基本结构: + +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_URL_Web_address.png) + +URL 是一个 Web 服务器的地址,浏览器用这个地址来寻找并连接 Web 服务器,并将上面的内容返回给你。在你的浏览器能够发送 HTTP 请求之前,它需要与 Web 服务器建立一个 TCP 连接。然后会在 TCP 连接中发送 HTTP 请求,并等待服务器返回 HTTP 响应。当你的浏览器收到响应后,就会显示其内容,在上面的例子中,它显示了“Hello, World!”。 + +我们来进一步探索在发送 HTTP 请求之前,客户端与服务器建立 TCP 连接的过程。为了建立链接,它们使用了所谓“套接字”。我们现在不直接使用浏览器发送请求,而在命令行中使用 telnet 来人工模拟这个过程。 + +在你运行 Web 服务器的电脑上,在命令行中建立一个 telnet 会话,指定一个本地域名,使用端口 8888,然后按下回车: + +``` +$ telnet localhost 8888 +Trying 127.0.0.1 … +Connected to localhost. 
+``` + +这个时候,你已经与运行在你本地主机的服务器建立了一个 TCP 连接。在下图中,你可以看到一个服务器从头开始,到能够建立 TCP 连接的基本过程。 + +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_socket.png) + +在同一个 telnet 会话中,输入 GET /hello HTTP/1.1,然后输入回车: + +``` +$ telnet localhost 8888 +Trying 127.0.0.1 … +Connected to localhost. +GET /hello HTTP/1.1 + +HTTP/1.1 200 OK +Hello, World! +``` + +你刚刚手动模拟了你的浏览器!你发送了 HTTP 请求,并且收到了一个 HTTP 应答。下面是一个 HTTP 请求的基本结构: + +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_anatomy.png) + +HTTP 请求的第一行由三部分组成:HTTP 方法(GET,因为我们想让我们的服务器返回一些内容),以及标明所需页面的路径 /hello,还有协议版本。 + +为了简单一些,我们刚刚构建的 Web 服务器完全忽略了上面的请求内容。你也可以试着输入一些无用内容而不是“GET /hello HTTP/1.1”,但你仍然会收到一个“Hello, World!”响应。 + +一旦你输入了请求行并敲了回车,客户端就会将请求发送至服务器;服务器读取请求行,就会返回相应的 HTTP 响应。 + +下面是服务器返回客户端(在上面的例子里是 telnet)的响应内容: + +![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_response_anatomy.png) + +我们来解析它。这个响应由三部分组成:一个状态行 HTTP/1.1 200 OK,后面跟着一个空行,再下面是响应正文。 + +HTTP 响应的状态行 HTTP/1.1 200 OK 包含 HTTP 版本号,HTTP 状态码以及 HTTP 状态短语“OK”。当浏览器收到响应后,它会将响应正文显示出来,这也就是为什么你会在浏览器中看到“Hello, World!”。 + +以上就是 Web 服务器的基本工作模型。总结一下:Web 服务器创建一个处于监听状态的套接字,循环接收新的连接。客户端建立 TCP 连接成功后,会向服务器发送 HTTP 请求,然后服务器会以一个 HTTP 响应做应答,客户端会将 HTTP 的响应内容显示给用户。为了建立 TCP 连接,客户端和服务端均会使用套接字。 + +现在,你应该了解了 Web 服务器的基本工作方式,你可以使用浏览器或其它 HTTP 客户端进行试验。如果你尝试过、观察过,你应该也能够使用 telnet,人工编写 HTTP 请求,成为一个人形 HTTP 客户端。 + +现在留一个小问题:“你要如何在不对程序做任何改动的情况下,在你刚刚搭建起来的 Web 服务器上适配 Django, Flask 或 Pyramid 应用呢? 
+ +我会在本系列的第二部分中来详细讲解。敬请期待。 + +顺便,我在撰写《搭个 Web 服务器:从头开始》。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅邮件列表,你就可以获取到这本书的最新进展,以及发布日期。 + +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbaws-part1/ + +作者:[Ruslan][a] +译者:[StdioA](https://github.com/StdioA) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://linkedin.com/in/ruslanspivak/ + + + From 32123d2936adb91d2cb785c7da8c70c1af8df9f4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sun, 7 Aug 2016 14:38:52 +0800 Subject: [PATCH 361/471] =?UTF-8?q?=E8=BF=94=E5=9B=9E=E6=96=87=E7=AB=A0=20?= =?UTF-8?q?(#4284)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 75 * Translated by cposture * Translated by cposture * Translated by cposture * 应 erlinux 的要求,把未翻译完的文章移到 source * 应 erlinux 的要求,把未翻译完的文章移到 source,同时删除 translated文章 --- ... Rapid prototyping with docker-compose.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) rename {translated => sources}/tech/20160512 Rapid prototyping with docker-compose.md (66%) diff --git a/translated/tech/20160512 Rapid prototyping with docker-compose.md b/sources/tech/20160512 Rapid prototyping with docker-compose.md similarity index 66% rename from translated/tech/20160512 Rapid prototyping with docker-compose.md rename to sources/tech/20160512 Rapid prototyping with docker-compose.md index a32f894d4b..0c67223697 100644 --- a/translated/tech/20160512 Rapid prototyping with docker-compose.md +++ b/sources/tech/20160512 Rapid prototyping with docker-compose.md @@ -1,36 +1,36 @@ -使用docker快速组成样品机 + +Rapid prototyping with docker-compose ======================================== -在写前,我们将看看 Node.js 样机 ** 找寻树莓派 PI Zero ** 的供应在英国三个主要销售. 
+In this write-up we'll look at a Node.js prototype for **finding stock of the Raspberry PI Zero** from three major outlets in the UK. -我写的代码,黑客部署到 Azure Ubuntu 虚拟机一个晚上就可以到位。Docker 和 docker-compose 工具做出调配和更新过程非常快。 +I wrote the code and deployed it to an Ubuntu VM in Azure within a single evening of hacking. Docker and the docker-compose tool made the deployment and update process extremely quick. -### 建立链接? +### Remember linking? - -如果您已经通过 [动手 Docker 教程指南] [1] 那么你已有在命令行建立 Docker 容器的经验。链接一个Redis 服务器计数器节点在命令行上可能是这样: +If you've already been through the [Hands-On Docker tutorial][1] then you will have experience linking Docker containers on the command line. Linking a Node hit counter to a Redis server on the command line may look like this: ``` $ docker run -d -P --name redis1 $ docker run -d hit_counter -p 3000:3000 --link redis1:redis ``` -现在,假设应用程序中有三个等级 +Now imagine your application has three tiers -- Web 前端 -- 批次层处理长时间运行的任务 -- Redis 或 MongoDB 数据库 +- Web front-end +- Batch tier for processing long running tasks +- Redis or mongo database -通过 `--link` 管理几个容器,但可能失效,可以添加多层级或容器到应用程序。 +Explicit linking through `--link` is just about manageable with a couple of containers, but can get out of hand as we add more tiers or containers to the application. -### 键入 docker 撰写 +### Enter docker-compose ![](http://blog.alexellis.io/content/images/2016/05/docker-compose-logo-01.png) ->Docker 撰写图标 +>Docker Compose logo -docker-compose 工具是标准的 docker工具箱的一部分,也可以单独下载。它提供了丰富功能,通过一个纯文本YAML文件配置所有应用程序组件。 +The docker-compose tool is part of the standard Docker Toolbox and can also be downloaded separately. It provides a rich set of features to configure all of an application's parts through a plain-text YAML file. 
-上述提供了一个例子: +The above example would look like this: ``` version: "2.0" @@ -43,18 +43,18 @@ services: - 3000:3000 ``` -从Docker 1.10起,我们可以充分利用网络来帮助我们在多个主机进行扩展覆盖。在此之前,仅通过单个主机工作。“docker-compose scale” 命令可用于更多计算能力有需要时。 +From Docker 1.10 onwards we can take advantage of network overlays to help us scale out across multiple hosts. Prior to this linking only worked across a single host. The `docker-compose scale` command can be used to bring on more computing power as the need arises. ->参考docker.com上关于"docker-compose" +>View the [docker-compose][2] reference on docker.com -### 真实例子:树莓派 PI 到货通知 +### Real-world example: Raspberry PI Stock Alert ![](http://blog.alexellis.io/content/images/2016/05/Raspberry_Pi_Zero_ver_1-3_1_of_3_large.JPG) ->新版树莓派 PI Zero V1.3 图片提供来自Pimoroni +>The new Raspberry PI Zero v1.3 image courtesy of Pimoroni -树莓派 PI Zero - 巨大的轰动一个微型计算机具有一个1GHz 处理器 和 512MB 内存能够运行完整 Linux,Docker,Node.js,Ruby 和许多流行的开源工具。一个关于 PI Zero 的好消息是,成本只有5美元。这也意味着,存量迅速抢购一空。 +There is a huge buzz around the Raspberry PI Zero - a tiny microcomputer with a 1GHz CPU and 512MB RAM capable of running full Linux, Docker, Node.js, Ruby and many other popular open-source tools. One of the best things about the PI Zero is that costs only 5 USD. That also means that stock gets snapped up really quickly. 
-*如果您想尝试Docker 或集群在PI看看下面的教程。* +*If you want to try Docker or Swarm on the PI check out the tutorial below.* >[Docker Swarm on the PI Zero][3] @@ -127,7 +127,7 @@ Preview as of 16th of May 2016 via: http://blog.alexellis.io/rapid-prototype-docker-compose/ 作者:[Alex Ellis][a] -译者:[erlinux](https://github.com/erlinux) +译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -139,3 +139,4 @@ via: http://blog.alexellis.io/rapid-prototype-docker-compose/ [4]: https://github.com/alexellis/pi_zero_stock [5]: https://github.com/alexellis/pi_zero_stock [6]: http://stockalert.alexellis.io/ + From 39eb115a31c076f0c0deee4689c172a39c387644 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 7 Aug 2016 21:18:46 +0800 Subject: [PATCH 362/471] PUB:20160722 Keeweb A Linux Password Manager @ChrisLeeGit --- ...0160722 Keeweb A Linux Password Manager.md | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) rename {translated/tech => published}/20160722 Keeweb A Linux Password Manager.md (67%) diff --git a/translated/tech/20160722 Keeweb A Linux Password Manager.md b/published/20160722 Keeweb A Linux Password Manager.md similarity index 67% rename from translated/tech/20160722 Keeweb A Linux Password Manager.md rename to published/20160722 Keeweb A Linux Password Manager.md index 3e32d1001c..6838c3105d 100644 --- a/translated/tech/20160722 Keeweb A Linux Password Manager.md +++ b/published/20160722 Keeweb A Linux Password Manager.md @@ -1,17 +1,17 @@ -Linux 密码管理器:Keeweb +Linux 下的密码管理器:Keeweb ================================ ![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb_1.png?608) -如今,我们依赖于越来越多的线上服务。我们每注册一个线上服务,就要设置一个密码;如此,我们就不得不记住数以百计的密码。这样对于每个人来说,都很容易忘记密码。我将在本文中介绍 Keeweb,它是一款 Linux 密码管理器,可以将你所有的密码安全地存储在线上或线下。 +如今,我们依赖于越来越多的线上服务。我们每注册一个线上服务,就要设置一个密码;如此,我们就不得不记住数以百计的密码。这样对于每个人来说,都很容易忘记密码。我将在本文中介绍 Keeweb,它是一款 Linux 密码管理器,可以为你离线或在线地安全存储所有的密码。 当谈及 
Linux 密码管理器时,我们会发现有很多这样的软件。我们已经在 LinuxAndUbuntu 上讨论过像 [Keepass][1] 和 [Encryptr,一个基于零知识系统的密码管理器][2] 这样的密码管理器。Keeweb 则是另外一款我们将在本文讲解的 Linux 密码管理器。

### Keeweb 可以离线或在线存储密码

Keeweb 是一款跨平台的密码管理器。它可以离线存储你所有的密码,并且能够同步到你自己的云存储服务上,例如 OneDrive、Google Drive、Dropbox 等。Keeweb 并没有提供它自己的在线数据库来同步你的密码。

要使用 Keeweb 连接你的线上存储服务,只需要点击界面中的“more”,然后再点击你想要使用的服务即可。

![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/keeweb.png?685)

#### 创建密码

想要创建一个新的密码,你只需要简单地点击 `+` 号,然后你就会看到所有需要填充的输入框。根据你的需要创建更多的密码记录。

#### 搜索密码

![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/search-passwords_orig.png)

Keeweb 拥有一个图标库,这样你就可以轻松地找到各种特定的密码记录。你可以改变图标的颜色、下载更多的图标,甚至可以直接从你的电脑中导入图标。这对于密码搜索来说,异常好使。

相似的服务的密码可以分组,这样你就可以在一个文件夹里找到它们。你也可以给密码打上标签并把它们存放在不同分类中。

![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/tags-passwords-in-keeweb.png?283)

### 不喜欢 Linux 密码管理器?没问题!
我已经发表过文章介绍了另外两款 Linux 密码管理器,它们分别是 Keepass 和 Encryptr,在 Reddit 和其它社交媒体上有些关于它们的争论。有些人反对使用任何密码管理器,也有人持相反意见。在本文中,我想要澄清的是,存放密码文件是我们自己的责任。我认为像 keepass 和 Keeweb 这样的密码管理器是非常好用的,因为它们并没有自己的云来存放你的密码。这些密码管理器会创建一个文件,然后你可以将它存放在你的硬盘上,或者使用像 VeraCrypt 这样的应用给它加密。我个人不使用也不推荐使用那些将密码存储在它们自己数据库的服务。

--------------------------------------------------------------------------------

via: http://www.linuxandubuntu.com/home/keeweb-a-linux-password-manager

译者:[ChrisLeeGit](https://github.com/chrisleegit)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 1b1830a6ed4fca74dca18dd5f54798448413d629 Mon Sep 17 00:00:00 2001
From: tianfeiyu
Date: Mon, 8 Aug 2016 10:20:33 +0800
Subject: [PATCH 363/471] Create 20160726 How to restore older file versions in Git.md

---
 ...w to restore older file versions in Git.md | 188 ++++++++++++++++++
 1 file changed, 188 insertions(+)
 create mode 100644 translated/tech/20160726 How to restore older file versions in Git.md

diff --git a/translated/tech/20160726 How to restore older file versions in Git.md b/translated/tech/20160726 How to restore older file versions in Git.md
new file mode 100644
index 0000000000..ccf9863e14
--- /dev/null
+++ b/translated/tech/20160726 How to restore older file versions in Git.md
@@ -0,0 +1,188 @@
+
在 Git 中进行版本回退
=============================================

![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB)

在这篇文章中,你将学到如何查看项目中的历史版本,如何进行版本回退,以及如何使用 Git 分支,你可以大胆地进行尝试。


在你的 Git 项目中,你的位置就像是一个滚动的标签,由一个被称为 HEAD 的标记(如磁带录音机或留声机的播放头)来确定。要让 HEAD 指向你想要的时间点,需要使用 git
checkout 命令。


git checkout 命令的使用方式有两种。最常见的用途是恢复一个以前提交过的文件,你也可以用它切换到另一个分支。


### 恢复一个文件


当你意识到一个文件被你完全改乱了,我们不得不将它恢复到以前的某个位置,然后添加并提交,因此我们需要的是该文件最后一次修改的位置,然后替换文件。


查看最后一次提交的 HEAD,然后使用 git checkout 将其恢复到以前的版本:

```
$ git checkout HEAD filename
```

如果回退后的文件依然有问题,使用 git log 查看你更早的提交,然后切换到正确的版本:

```
$ git log --oneline
79a4e5f bad take
f449007 The second commit
55df4c2 My great project, first commit.

$ git checkout 55df4c2 filename

```

现在,以前的文件恢复到了你当前的位置。(你可以用 git status 命令查看你当前的状态)然后添加刚改变的文件再进行提交:

```
$ git add filename
$ git commit -m 'restoring filename from first commit.'
```

使用 git log 验证你所提交的:

```
$ git log --oneline
d512580 restoring filename from first commit
79a4e5f bad take
f449007 The second commit
55df4c2 My great project, first commit.
```

从本质上讲,你已经倒回了磁带,盖掉了坏的录音,所以接下来你需要重新录制一段好的。

### 回退时间线

恢复文件的另一种方式是回退整个 Git 项目。这里会引入分支的概念,分支在某种程度上就像是同一首歌的不同录音版本。

你要将 Git HEAD 回退到以前的版本才能回到历史提交。这个例子将回到最初的提交处:

```
$ git log --oneline
d512580 restoring filename from first commit
79a4e5f bad take
f449007 The second commit
55df4c2 My great project, first commit.

$ git checkout 55df4c2
```

当你以这种方式回退后,如果直接开始提交,就会覆盖掉后来的工作。Git 默认假定你不想这样做,所以将 HEAD 从项目中分离出来,让你可以在不覆盖后续记录的情况下随意操作。

如果你想看看以前的版本,想要重新做或者尝试不同的方法,那么安全一点的方式就是创建一个新的分支。可以将这个过程想象为尝试同一首歌曲的不同版本,或者创建一个混音版。原始素材依然存在,你只是分支出去做一个自己的版本玩玩。

把你的 Git HEAD 回退到另一个起点处:

```
$ git checkout -b remix
Switched to a new branch 'remix'
```

现在你已经切换到了另一个分支,你当前的工作区是干净的,准备开始工作吧。

也可以不用改变时间线来做同样的事情。也许你对当前的进展很满意,只是想切换到一个临时的工作区,尝试一些疯狂的想法。这种工作流程也是完全可以接受的,请看:

```
$ git status
On branch master
nothing to commit, working directory clean

$ git checkout -b crazy_idea
Switched to a new branch 'crazy_idea'
```

现在你有一个干净的工作空间,在这里你可以尝试一些疯狂的想法。一旦你完成了,可以保留你的改变,或者丢弃它们,并切换回你的主分支。

若要放弃你的想法,切换到你的主分支,假装新分支不存在:

```
$ git checkout master
```

想要保留这些疯狂的想法,就需要把它们合并回主分支:切换到主分支,然后合并新分支:

```
$ git checkout master
$ git merge crazy_idea
```

git 的分支功能很强大,在克隆仓库后为开发人员创建一个新分支是很常见的;这样,他们所有的工作都在自己的分支上,完成后可以提交并合并到主分支。Git 是很灵活的,所以没有“正确”或“错误”的方式(甚至主分支也可以按其所属的远程仓库来区分),但分支易于分离任务和贡献代码。私下说一句,你想创建多少个 Git 分支都可以,它们是免费的!

### 远程协作

到目前为止你已经在自己的家目录下维护着一个 Git 仓库,但如何与其他人协同工作呢?

有好几种不同的方式来设置 Git 以便让多人可以同时在一个项目上工作,我们目前先关注基于克隆仓库的工作方式:无论你是从某人的 Git 服务器或 GitHub 主页克隆的仓库,还是使用了局域网中的共享存储,都是类似的。

在自己的私有仓库上工作和在共享仓库上工作,唯一的不同是你需要在某个时刻把你的改变推送到别人的仓库。我们把工作的仓库称之为本地仓库,其他仓库称为远程仓库。


当你以读写的方式克隆一个仓库时,克隆的仓库会把它的来源继承为名为 origin 的远程仓库。你可以看看克隆的远程仓库:

```
$ git remote --verbose
origin seth@example.com:~/myproject.Git (fetch)
origin seth@example.com:~/myproject.Git (push)
```

有一个 origin 远程库非常有用,因为它有异地备份的功能,并允许其他人在该项目上工作。

如果克隆没有继承 origin 远程库,或者如果你选择以后再添加,可以使用 git remote 命令:

```
$ git remote add origin seth@example.com:~/myproject.Git
```

如果你修改了文件,想把它们发到有读写权限的 origin 远程库,使用 git push。第一次推送改变,必须发送分支信息。不直接在主分支上工作是一个很好的做法,除非你被要求这样做:

```
$ git checkout -b seth-dev
$ git add exciting-new-file.txt
$ git commit -m 'first push to remote'
$ git push -u origin HEAD
```

它会把你当前的位置(HEAD)及其所在的分支推送到远程。当推送过一次后,以后每次推送可以不使用 -u 选项:

```
$ git add another-file.txt
$ git commit -m 'another push to remote'
$ git push origin HEAD
```

### 合并分支

当你独自在一个 Git 仓库上工作时,你可以随时合并任意测试分支到主分支。当团队协作时,你可能会想先检查他们的改变,然后再将它们合并到主分支:

```
$ git checkout contributor
$ git pull
$ less blah.txt # 检查改变的文件
$ git checkout master
$ git merge contributor
```

如果你正在使用 GitHub 或 GitLab 之类的服务,过程会有所不同。在那里,传统的做法是先复刻(fork)这个项目,并把它当作你自己的仓库来对待。你可以在仓库里工作,把改变提交到你自己的 GitHub 或 GitLab 帐户,而不需要任何人的许可,因为这是你自己的仓库。

如果你想让被你复刻的项目的拥有者接收你的改变,就需要创建一个拉取请求(pull request),它通过 Web 服务的后端把补丁发送给真正的拥有者,并允许他们审查和拉取你的改变。

复刻项目通常是在 Web 服务上完成的,而用 Git 命令管理你的项目副本的方式是一样的,甚至包括推送的过程。之后回到 Web 服务上打开一个拉取请求,工作就完成了。

下一部分我们将整合一些有用的插件到 Git 中来帮你轻松地完成日常工作。

--------------------------------------------------------------------------------

via: https://opensource.com/life/16/7/how-restore-older-file-versions-git

作者:[Seth Kenlon][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
From c9682e0c75005ff6d54edc8f9c7125de1c9dd312 Mon Sep 17 00:00:00 2001
From: tianfeiyu
Date: Mon, 8 Aug 2016 10:21:59 +0800
Subject:
[PATCH 364/471] Delete 20160726 How to restore older file versions in Git.md --- ...w to restore older file versions in Git.md | 182 ------------------ 1 file changed, 182 deletions(-) delete mode 100644 sources/tech/20160726 How to restore older file versions in Git.md diff --git a/sources/tech/20160726 How to restore older file versions in Git.md b/sources/tech/20160726 How to restore older file versions in Git.md deleted file mode 100644 index 675167aaad..0000000000 --- a/sources/tech/20160726 How to restore older file versions in Git.md +++ /dev/null @@ -1,182 +0,0 @@ -translated by strugglingyouth -How to restore older file versions in Git -============================================= - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB) - -In today's article you will learn how to find out where you are in the history of your project, how to restore older file versions, and how to make Git branches so you can safely conduct wild experiments. - -Where you are in the history of your Git project, much like your location in the span of a rock album, is determined by a marker called HEAD (like the playhead of a tape recorder or record player). To move HEAD around in your own Git timeline, use the git checkout command. - -There are two ways to use the git checkout command. A common use is to restore a file from a previous commit, and you can also rewind your entire tape reel and go in an entirely different direction. - -### Restore a file - -This happens when you realize you've utterly destroyed an otherwise good file. We all do it; we get a file to a great place, we add and commit it, and then we decide that what it really needs is one last adjustment, and the file ends up completely unrecognizable. 
- -To restore it to its former glory, use git checkout from the last known commit, which is HEAD: - -``` -$ git checkout HEAD filename -``` - -If you accidentally committed a bad version of a file and need to yank a version from even further back in time, look in your Git log to see your previous commits, and then check it out from the appropriate commit: - -``` -$ git log --oneline -79a4e5f bad take -f449007 The second commit -55df4c2 My great project, first commit. - -$ git checkout 55df4c2 filename - -``` - -Now the older version of the file is restored into your current position. (You can see your current status at any time with the git status command.) You need to add the file because it has changed, and then commit it: - -``` -$ git add filename -$ git commit -m 'restoring filename from first commit.' -``` - -Look in your Git log to verify what you did: - -``` -$ git log --oneline -d512580 restoring filename from first commit -79a4e5f bad take -f449007 The second commit -55df4c2 My great project, first commit. -``` - -Essentially, you have rewound the tape and are taping over a bad take. So you need to re-record the good take. - -### Rewind the timeline - -The other way to check out a file is to rewind the entire Git project. This introduces the idea of branches, which are, in a way, alternate takes of the same song. - -When you go back in history, you rewind your Git HEAD to a previous version of your project. This example rewinds all the way back to your original commit: - -``` -$ git log --oneline -d512580 restoring filename from first commit -79a4e5f bad take -f449007 The second commit -55df4c2 My great project, first commit. - -$ git checkout 55df4c2 -``` - -When you rewind the tape in this way, if you hit the record button and go forward, you are destroying your future work. 
By default, Git assumes you do not want to do this, so it detaches HEAD from the project and lets you work as needed without accidentally recording over something you have recorded later. - -If you look at your previous version and realise suddenly that you want to re-do everything, or at least try a different approach, then the safe way to do that is to create a new branch. You can think of this process as trying out a different version of the same song, or creating a remix. The original material exists, but you're branching off and doing your own version for fun. - -To get your Git HEAD back down on blank tape, make a new branch: - -``` -$ git checkout -b remix -Switched to a new branch 'remix' -``` - -Now you've moved back in time, with an alternate and clean workspace in front of you, ready for whatever changes you want to make. - -You can do the same thing without moving in time. Maybe you're perfectly happy with how your progress is going, but would like to switch to a temporary workspace just to try some crazy ideas out. That's a perfectly acceptable workflow, as well: - -``` -$ git status -On branch master -nothing to commit, working directory clean - -$ git checkout -b crazy_idea -Switched to a new branch 'crazy_idea' -``` - -Now you have a clean workspace where you can sandbox some crazy new ideas. Once you're done, you can either keep your changes, or you can forget they ever existed and switch back to your master branch. 
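As an aside, this whole branch-for-experiments dance can be scripted end to end. The following Python 3 sketch is my own and assumes a `git` binary on your PATH: it builds a throwaway repository, commits, branches off to `crazy_idea`, and merges the experiment back. It reads the initial branch name instead of hard-coding `master`, since newer Git versions may default to `main`.

```python
import os
import subprocess
import tempfile

def git(*args, repo):
    """Run one git command inside `repo` and return its stdout."""
    result = subprocess.run(("git",) + args, cwd=repo, check=True,
                            capture_output=True, text=True)
    return result.stdout

def demo_branch_merge():
    repo = tempfile.mkdtemp()
    git("init", repo=repo)
    git("config", "user.email", "demo@example.com", repo=repo)
    git("config", "user.name", "Demo", repo=repo)
    with open(os.path.join(repo, "file.txt"), "w") as f:
        f.write("a good take\n")
    git("add", "file.txt", repo=repo)
    git("commit", "-m", "first commit", repo=repo)
    # Detect the initial branch name rather than assuming 'master'.
    start_branch = git("symbolic-ref", "--short", "HEAD", repo=repo).strip()
    git("checkout", "-b", "crazy_idea", repo=repo)     # sandbox branch
    with open(os.path.join(repo, "wild.txt"), "w") as f:
        f.write("a wild experiment\n")
    git("add", "wild.txt", repo=repo)
    git("commit", "-m", "experiment", repo=repo)
    git("checkout", start_branch, repo=repo)           # back to the main line
    git("merge", "crazy_idea", repo=repo)              # keep the experiment
    return git("log", "--oneline", repo=repo)

if __name__ == "__main__":
    print(demo_branch_merge())
```

The final log shows both commits on the starting branch, which is exactly the keep-your-changes outcome described next.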
- -To forget your ideas in shame, change back to your master branch and pretend your new branch doesn't exist: - -``` -$ git checkout master -``` - -To keep your crazy ideas and pull them back into your master branch, change back to your master branch and merge your new branch: - -``` -$ git checkout master -$ git merge crazy_idea -``` - -Branches are powerful aspects of git, and it's common for developers to create a new branch immediately after cloning a repository; that way, all of their work is contained on their own branch, which they can submit for merging to the master branch. Git is pretty flexible, so there's no "right" or "wrong" way (even a master branch can be distinguished from what remote it belongs to), but branching makes it easy to separate tasks and contributions. Don't get too carried away, but between you and me, you can have as many Git branches as you please. They're free! - -### Working with remotes - -So far you've maintained a Git repository in the comfort and privacy of your own home, but what about when you're working with other people? - -There are several different ways to set Git up so that many people can work on a project at once, so for now we'll focus on working on a clone, whether you got that clone from someone's personal Git server or their GitHub page, or from a shared drive on the same network. - -The only difference between working on your own private Git repository and working on something you want to share with others is that at some point, you need to push your changes to someone else's repository. We call the repository you are working in a local repository, and any other repository a remote. - -When you clone a repository with read and write permissions from another source, your clone inherits the remote from whence it came as its origin. 
You can see a clone's remote: - -``` -$ git remote --verbose -origin seth@example.com:~/myproject.Git (fetch) -origin seth@example.com:~/myproject.Git (push) -``` - -Having a remote origin is handy because it is functionally an offsite backup, and it also allows someone else to be working on the project. - -If your clone didn't inherit a remote origin, or if you choose to add one later, use the git remote command: - -``` -$ git remote add seth@example.com:~/myproject.Git -``` - -If you have changed files and want to send them to your remote origin, and have read and write permissions to the repository, use git push. The first time you push changes, you must also send your branch information. It is a good practice to not work on master, unless you've been told to do so: - -``` -$ git checkout -b seth-dev -$ git add exciting-new-file.txt -$ git commit -m 'first push to remote' -$ git push -u origin HEAD -``` - -This pushes your current location (HEAD, naturally) and the branch it exists on to the remote. After you've pushed your branch once, you can drop the -u option: - -``` -$ git add another-file.txt -$ git commit -m 'another push to remote' -$ git push origin HEAD -``` - -### Merging branches - -When you're working alone in a Git repository you can merge test branches into your master branch whenever you want. When working in tandem with a contributor, you'll probably want to review their changes before merging them into your master branch: - -``` -$ git checkout contributor -$ git pull -$ less blah.txt # review the changed files -$ git checkout master -$ git merge contributor -``` - -If you are using GitHub or GitLab or something similar, the process is different. There, it is traditional to fork the project and treat it as though it is your own repository. You can work in the repository and send changes to your GitHub or GitLab account without getting permission from anyone, because it's your repository. 
- -If you want the person you forked it from to receive your changes, you create a pull request, which uses the web service's backend to send patches to the real owner, and allows them to review and pull in your changes. - -Forking a project is usually done on the web service, but the Git commands to manage your copy of the project are the same, even the push process. Then it's back to the web service to open a pull request, and the job is done. - -In our next installment we'll look at some convenience add-ons to help you integrate Git comfortably into your everyday workflow. - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/how-restore-older-file-versions-git - -作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth From a35ac1e11429c9144d7f37b972c83ffb619f7e63 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Mon, 8 Aug 2016 12:56:47 +0800 Subject: [PATCH 365/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91Web?= =?UTF-8?q?=20Service=20Efficiency=20at=20Instagram=20with=20Python?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0160602 Web Service Efficiency at Instagram with Python.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md b/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md index f637331336..b3cf1f4192 100644 --- a/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md +++ b/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md @@ -1,3 +1,5 @@ +Being translated by ChrisLeeGit + Web Service Efficiency at Instagram with Python =============================================== @@ -64,7 +66,7 @@ With the work 
we’ve put into building the efficiency framework for Instagram

via: https://engineering.instagram.com/web-service-efficiency-at-instagram-with-python-4976d078e366#.tiakuoi4p

作者:[Min Ni][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[ChrisLeeGit](https://github.com/chrisleegit)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 663745585c2fe69538436b10e1720ce9f184c178 Mon Sep 17 00:00:00 2001
From: Markgolzh
Date: Mon, 8 Aug 2016 13:48:47 +0800
Subject: [PATCH 366/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
 =?UTF-8?q?=EF=BC=8Dby=20zky001?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

翻译文件
---
 ...ce portfolio - Machine learning project.md | 189 ++++++++++++++++++
 1 file changed, 189 insertions(+)
 create mode 100644 sources/team_test/part 3Building a data science portfolio - Machine learning project.md

diff --git a/sources/team_test/part 3Building a data science portfolio - Machine learning project.md b/sources/team_test/part 3Building a data science portfolio - Machine learning project.md
new file mode 100644
index 0000000000..8160ef18b6
--- /dev/null
+++ b/sources/team_test/part 3Building a data science portfolio - Machine learning project.md
@@ -0,0 +1,189 @@
+### 获取数据
+
+一旦我们拥有项目的基本架构,我们就可以去获得原始数据。
+
+Fannie Mae 对获取数据有一些限制,所以你需要去注册一个账户。在创建完账户之后,你可以[在这里][26]找到下载页面,按照你所需要的下载非常少量或者很多的借款数据文件。文件格式是zip,在解压后当然是非常大的。
+
+为了达到我们这个博客文章的目的,我们将要下载从2012年1季度到2015年1季度的所有数据。接着我们需要解压所有的文件。解压过后,删掉原来的.zip格式的文件。最后,借款预测文件夹看起来应该像下面的一样:
+```
+loan-prediction
+├── data
+│   ├── Acquisition_2012Q1.txt
+│   ├── Acquisition_2012Q2.txt
+│   ├── Performance_2012Q1.txt
+│   ├── Performance_2012Q2.txt
+│   └── ...
+├── processed +├── .gitignore +├── README.md +├── requirements.txt +├── settings.py +``` + +在下载完数据后,你可以在shell命令行中使用head和tail命令去查看文件中的行数据,你看到任何的不需要的列数据了吗?在做这件事的同时查阅[列名称的pdf文件][27]可能是有用的事情。 +### 读入数据 + +有两个问题让我们的数据难以现在就使用: +- 收购数据和业绩数据被分割在多个文件中 +- 每个文件都缺少标题 + +在我们开始使用数据之前,我们需要首先明白我们要在哪里去存一个收购数据的文件,同时到哪里去存储一个业绩数据的文件。每个文件仅仅需要包括我们关注的那些数据列,同时拥有正确的标题。这里有一个小问题是业绩数据非常大,因此我们需要尝试去修剪一些数据列。 + +第一步是向settings.py文件中增加一些变量,这个文件中同时也包括了我们原始数据的存放路径和处理出的数据存放路径。我们同时也将添加其他一些可能在接下来会用到的设置数据: +``` +DATA_DIR = "data" +PROCESSED_DIR = "processed" +MINIMUM_TRACKING_QUARTERS = 4 +TARGET = "foreclosure_status" +NON_PREDICTORS = [TARGET, "id"] +CV_FOLDS = 3 +``` + +把路径设置在settings.py中将使它们放在一个集中的地方,同时使其修改更加的容易。当提到在多个文件中的相同的变量时,你想改变它的话,把他们放在一个地方比分散放在每一个文件时更加容易。[这里的][28]是一个这个工程的示例settings.py文件 + +第二步是创建一个文件名为assemble.py,它将所有的数据分为2个文件。当我们运行Python assemble.py,我们在处理数据文件的目录会获得2个数据文件。 + +接下来我们开始写assemble.py文件中的代码。首先我们需要为每个文件定义相应的标题,因此我们需要查看[列名称的pdf文件][29]同时创建在每一个收购数据和业绩数据的文件的列数据的列表: +``` +HEADERS = { + "Acquisition": [ + "id", + "channel", + "seller", + "interest_rate", + "balance", + "loan_term", + "origination_date", + "first_payment_date", + "ltv", + "cltv", + "borrower_count", + "dti", + "borrower_credit_score", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "unit_count", + "occupancy_status", + "property_state", + "zip", + "insurance_percentage", + "product_type", + "co_borrower_credit_score" + ], + "Performance": [ + "id", + "reporting_period", + "servicer_name", + "interest_rate", + "balance", + "loan_age", + "months_to_maturity", + "maturity_date", + "msa", + "delinquency_status", + "modification_flag", + "zero_balance_code", + "zero_balance_date", + "last_paid_installment_date", + "foreclosure_date", + "disposition_date", + "foreclosure_costs", + "property_repair_costs", + "recovery_costs", + "misc_costs", + "tax_costs", + "sale_proceeds", + "credit_enhancement_proceeds", + "repurchase_proceeds", + "other_foreclosure_proceeds", + "non_interest_bearing_balance", + 
"principal_forgiveness_balance" + ] +} +``` + + +接下来一步是定义我们想要保持的数据列。我们将需要保留收购数据中的所有列数据,丢弃列数据将会使我们节省下内存和硬盘空间,同时也会加速我们的代码。 +``` +SELECT = { + "Acquisition": HEADERS["Acquisition"], + "Performance": [ + "id", + "foreclosure_date" + ] +} +``` + +下一步,我们将编写一个函数来连接数据集。下面的代码将: +- 引用一些需要的库,包括设置。 +- 定义一个函数concatenate, 目的是: + - 获取到所有数据目录中的文件名. + - 在每个文件中循环. + - 如果文件不是正确的格式 (不是以我们需要的格式作为开头), 我们将忽略它. + - 把文件读入一个[数据帧][30] 伴随着正确的设置通过使用Pandas [读取csv][31]函数. + - 在|处设置分隔符以便所有的字段能被正确读出. + - 数据没有标题行,因此设置标题为None来进行标示. + - 从HEADERS字典中设置正确的标题名称 – 这将会是我们数据帧中的数据列名称. + - 通过SELECT来选择我们加入数据的数据帧中的列. +- 把所有的数据帧共同连接在一起. +- 把已经连接好的数据帧写回一个文件. + +``` +import os +import settings +import pandas as pd + +def concatenate(prefix="Acquisition"): + files = os.listdir(settings.DATA_DIR) + full = [] + for f in files: + if not f.startswith(prefix): + continue + + data = pd.read_csv(os.path.join(settings.DATA_DIR, f), sep="|", header=None, names=HEADERS[prefix], index_col=False) + data = data[SELECT[prefix]] + full.append(data) + + full = pd.concat(full, axis=0) + + full.to_csv(os.path.join(settings.PROCESSED_DIR, "{}.txt".format(prefix)), sep="|", header=SELECT[prefix], index=False) +``` + +我们可以通过调用上面的函数,通过传递的参数收购和业绩两次以将所有收购和业绩文件连接在一起。下面的代码将: +- 仅仅在脚本被在命令行中通过python assemble.py被唤起而执行. +- 将所有的数据连接在一起,并且产生2个文件: + - `processed/Acquisition.txt` + - `processed/Performance.txt` + +``` +if __name__ == "__main__": + concatenate("Acquisition") + concatenate("Performance") +``` + +我们现在拥有了一个漂亮的,划分过的assemble.py文件,它很容易执行,也容易被建立。通过像这样把问题分解为一块一块的,我们构建工程就会变的容易许多。不用一个可以做所有的凌乱的脚本,我们定义的数据将会在多个脚本间传递,同时使脚本间完全的0耦合。当你正在一个大的项目中工作,这样做是一个好的想法,因为这样可以更佳容易的修改其中的某一部分而不会引起其他项目中不关联部分产生超出预期的结果。 + +一旦我们完成assemble.py脚本文件, 我们可以运行python assemble.py命令. 你可以查看完整的assemble.py文件[在这里][32]. + +这将会在处理结果目录下产生2个文件: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... 
+├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +├── .gitignore +├── assemble.py +├── README.md +├── requirements.txt +├── settings.py +``` + From d0189a1cc583c498a595868e2cbb19c5d749c5fc Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 8 Aug 2016 19:25:02 +0800 Subject: [PATCH 367/471] PUB:20160725 Part 10 - Learn How to Use Awk Built-in Variables.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ChrisLeeGit 你的译文质量很好,谢谢你的贡献。就此文,有一点需要说明的:1、内置变量列表那里,原文有些表述是不清晰的,也不好确定其原意,所以从译文中删除了,以 免引起读者疑惑。(我估计原意应该是“最好不要修改该变量值”,但是和原文相去较远)2、文末提及“请继续关注 Tecmint”,我们并不是避讳提及 Tecmint (原文链接保留,并明确表明本文是译文),但是发表在 LC 的话就有点感觉突兀,所以做了修改。 3、“段”修改为“字段”。以上,请了解。 --- ...Learn How to Use Awk Built-in Variables.md | 62 +++++++++++-------- 1 file changed, 36 insertions(+), 26 deletions(-) rename translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md => published/awk/Part 10 - Learn How to Use Awk Built-in Variables.md (64%) diff --git a/translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md b/published/awk/Part 10 - Learn How to Use Awk Built-in Variables.md similarity index 64% rename from translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md rename to published/awk/Part 10 - Learn How to Use Awk Built-in Variables.md index 321b1dc588..70f494fb90 100644 --- a/translated/tech/awk/20160725 Part 10 - Learn How to Use Awk Built-in Variables.md +++ b/published/awk/Part 10 - Learn How to Use Awk Built-in Variables.md @@ -1,18 +1,19 @@ awk 系列:如何使用 awk 内置变量 ================================================= -我们将逐渐揭开 awk 功能的神秘面纱,在本节中,我们将介绍 awk 内置(built-in)变量的概念。你可以在 awk 中使用两种类型的变量,它们是:用户自定义(user-defined)变量(我们在第八节中已经介绍了)和内置变量。 +我们将逐渐揭开 awk 功能的神秘面纱,在本节中,我们将介绍 awk 内置(built-in)变量的概念。你可以在 awk 中使用两种类型的变量,它们是:用户自定义(user-defined)变量(我们在[第八节][1]中已经介绍了)和内置变量。 ![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Built-in-Variables-Examples.png) -> awk 内置变量示例 + +*awk 内置变量示例* awk 
内置变量已经有预先定义的值了,但我们也可以谨慎地修改这些值,awk 内置变量包括: -- `FILENAME` : 当前输入文件名称(不要修改变量名) -- `NR` : 当前输入行编号(是指输入行 1,2,3……等,不要改变变量名) -- `NF` : 当前输入行段编号(不要修改变量名) -- `OFS` : 输出段分隔符 -- `FS` : 输入段分隔符 +- `FILENAME` : 当前输入文件名称 +- `NR` : 当前输入行编号(是指输入行 1,2,3……等) +- `NF` : 当前输入行的字段编号 +- `OFS` : 输出字段分隔符 +- `FS` : 输入字段分隔符 - `ORS` : 输出记录分隔符 - `RS` : 输入记录分隔符 @@ -25,7 +26,8 @@ $ awk ' { print FILENAME } ' ~/domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-FILENAME-Variable.png) -> awk FILENAME 变量 + +*awk FILENAME 变量* 你会看到,每一行都会对应输出一次文件名,那是你使用 `FILENAME` 内置变量时 awk 默认的行为。 @@ -38,42 +40,46 @@ $ cat ~/domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Print-Contents-of-File.png) -> 输出文件内容 + +*输出文件内容* ``` -$ awk ' END { print "文件记录数是:", NR } ' ~/domains.txt +$ awk ' END { print "Number of records in file is: ", NR } ' ~/domains.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Lines.png) -> awk 统计行数 -想要统计一条记录或一行中的段数,我们可以像下面那样使用 NR 内置变量: +*awk 统计行数* + +想要统计一条记录或一行中的字段数,我们可以像下面那样使用 NR 内置变量: ``` $ cat ~/names.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/List-File-Contents.png) -> 列出文件内容 + +*列出文件内容* ``` -$ awk '{ print "记录:",NR,"有",NF,"段" ; }' ~/names.txt +$ awk '{ "Record:",NR,"has",NF,"fields" ; }' ~/names.txt ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Count-Number-of-Fields-in-File.png) -> awk 统计文件中的段数 -接下来,你也可以使用 FS 内置变量指定一个输入文件分隔符,它会定义 awk 如何将输入行划分成段。 +*awk 统计文件中的字段数* -FS 默认值为空格和 TAB,但我们也能将 FS 值修改为任何字符来让 awk 根据情况分隔输入行。 +接下来,你也可以使用 FS 内置变量指定一个输入文件分隔符,它会定义 awk 如何将输入行划分成字段。 + +FS 默认值为“空格”和“制表符”,但我们也能将 FS 值修改为任何字符来让 awk 根据情况切分输入行。 有两种方法可以达到目的: - 第一种方法是使用 FS 内置变量 - 第二种方法是使用 awk 的 -F 选项 -来看 Linux 系统上的 `/etc/passwd` 文件,该文件中的各段是使用 `:` 分隔的,因此,当我们想要过滤出某些段时,可以将 `:` 指定为新的输入段分隔符,示例如下: +来看 Linux 系统上的 `/etc/passwd` 文件,该文件中的各字段是使用 `:` 分隔的,因此,当我们想要过滤出某些字段时,可以将 `:` 指定为新的输入字段分隔符,示例如下: 我们可以使用 `-F` 选项,如下: @@ -82,7 +88,8 @@ $ awk -F':' '{ print $1, $4 ;}' /etc/passwd ``` 
![](http://www.tecmint.com/wp-content/uploads/2016/07/Awk-Filter-Fields-in-Password-File.png) -> awk 过滤密码文件中的各段 + +*awk 过滤密码文件中的各字段* 此外,我们也可以利用 FS 内置变量,如下: @@ -91,29 +98,32 @@ $ awk ' BEGIN { FS=“:” ; } { print $1, $4 ; } ' /etc/passwd ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Filter-Fields-in-File-Using-Awk.png) -> 使用 awk 过滤文件中的各段 -使用 OFS 内置变量来指定一个输出段分隔符,它会定义如何使用指定的字符分隔输出段,示例如下: +*使用 awk 过滤文件中的各字段* + +使用 OFS 内置变量来指定一个用于输出的字段分隔符,它会定义如何使用指定的字符分隔输出字段,示例如下: ``` $ awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd ``` ![](http://www.tecmint.com/wp-content/uploads/2016/07/Add-Separator-to-Field-in-File.png) -> 向文件中的段添加分隔符 -在第十节(即本节)中,我们已经学习了使用含有预定义值的 awk 内置变量的理念。但我们也能够修改这些值,虽然并不推荐这样做,除非你晓得自己在做什么,并且充分理解(这些变量值)。 +*向文件中的字段添加分隔符* -此后,我们将继续学习如何在 awk 命令操作中使用 shell 变量,所以,请继续关注 Tecmint。 +在本节中,我们已经学习了使用含有预定义值的 awk 内置变量的理念。但我们也能够修改这些值,虽然并不推荐这样做,除非你明白自己在做什么,并且充分理解(这些变量值)。 + +此后,我们将继续学习如何在 awk 命令操作中使用 shell 变量,所以,请继续关注我们。 -------------------------------------------------------------------------------- -via: http://www.tecmint.com/awk-built-in-variables-examples/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 +via: http://www.tecmint.com/awk-built-in-variables-examples/ 作者:[Aaron Kili][a] 译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对ID](https://github.com/校对ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: http://www.tecmint.com/author/aaronkili/ +[1]: https://linux.cn/article-7650-1.html From d6fcd00f942ae449783fc4200afeff6645bc5fb8 Mon Sep 17 00:00:00 2001 From: Stdio A Date: Mon, 8 Aug 2016 21:04:30 +0800 Subject: [PATCH 368/471] =?UTF-8?q?Translate=2020160406=20Let=E2=80=99s=20?= =?UTF-8?q?Build=20A=20Web=20Server.=20Part=202?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160406 Let’s Build A Web Server. 
Part 2.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md index 482352ac9a..15593c5085 100644 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -1,3 +1,5 @@ +Translating by StdioA + Let’s Build A Web Server. Part 2. =================================== From d34a8f088ddf2727786a85f747242319dbd98ba7 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 8 Aug 2016 21:16:39 +0800 Subject: [PATCH 369/471] =?UTF-8?q?PUB:20160309=20Let=E2=80=99s=20Build=20?= =?UTF-8?q?A=20Web=20Server.=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @StdioA 翻译的不错,轻松诙谐。继续加油将本系列搞定! --- ...160309 Let’s Build A Web Server. Part 1.md | 40 +++++++++---------- 1 file changed, 20 insertions(+), 20 deletions(-) rename {translated/tech => published}/20160309 Let’s Build A Web Server. Part 1.md (76%) diff --git a/translated/tech/20160309 Let’s Build A Web Server. Part 1.md b/published/20160309 Let’s Build A Web Server. Part 1.md similarity index 76% rename from translated/tech/20160309 Let’s Build A Web Server. Part 1.md rename to published/20160309 Let’s Build A Web Server. Part 1.md index 8ccee84871..d55d5720d2 100644 --- a/translated/tech/20160309 Let’s Build A Web Server. Part 1.md +++ b/published/20160309 Let’s Build A Web Server. 
Part 1.md @@ -1,4 +1,4 @@ -搭个 Web 服务器:第一部分 +搭个 Web 服务器(一) ===================================== 一天,有一个正在散步的妇人恰好路过一个建筑工地,看到三个正在工作的工人。她问第一个人:“你在做什么?”第一个人没好气地喊道:“你没看到我在砌砖吗?”妇人对这个答案不满意,于是问第二个人:“你在做什么?”第二个人回答说:“我在建一堵砖墙。”说完,他转向第一个人,跟他说:“嗨,你把墙砌过头了。去把刚刚那块砖弄下来!”然而,妇人对这个答案依然不满意,于是又问了第三个人相同的问题。第三个人仰头看着天,对她说:“我在建造世界上最大的教堂。”当他回答时,第一个人和第二个人在为刚刚砌错的砖而争吵。他转向那两个人,说:“不用管那块砖了。这堵墙在室内,它会被水泥填平,没人会看见它的。去砌下一层吧。” @@ -9,17 +9,17 @@ 我相信,如果你想成为一个更好的开发者,你**必须**对日常使用的软件系统的内部结构有更深的理解,包括编程语言、编译器与解释器、数据库及操作系统、Web 服务器及 Web 框架。而且,为了更好更深入地理解这些系统,你**必须**从头开始,用一砖一瓦来重新构建这个系统。 -孔子曾经用这几句话来表达这种思想: +荀子曾经用这几句话来表达这种思想: ->“不闻不若闻之。” +>“不闻不若闻之。(I hear and I forget.)” ![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_hear.png) ->“闻之不若见之。” +>“闻之不若见之。(I see and I remember.)” ![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_see.png) ->“知之不若行之。” +>“知之不若行之。(I do and I understand.)” ![](https://ruslanspivak.com/lsbasi-part4/LSBAWS_confucius_do.png) @@ -59,7 +59,7 @@ Hello, World! client_connection.close() ``` -将以上代码保存为 webserver1.py,或者直接从 GitHub 上下载这个文件。然后,在命令行中运行这个程序。像这样: +将以上代码保存为 webserver1.py,或者直接从 [GitHub][1] 上下载这个文件。然后,在命令行中运行这个程序。像这样: ``` $ python webserver1.py @@ -72,15 +72,15 @@ Serving HTTP on port 8888 … 说真的,快去试一试。你做实验的时候,我会等着你的。 -完成了?不错。现在我们来讨论一下它实际上是怎么工作的。 +完成了?不错!现在我们来讨论一下它实际上是怎么工作的。 -首先我们从你刚刚输入的 Web 地址开始。它叫 URL,这是它的基本结构: +首先我们从你刚刚输入的 Web 地址开始。它叫 [URL][2],这是它的基本结构: ![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_URL_Web_address.png) URL 是一个 Web 服务器的地址,浏览器用这个地址来寻找并连接 Web 服务器,并将上面的内容返回给你。在你的浏览器能够发送 HTTP 请求之前,它需要与 Web 服务器建立一个 TCP 连接。然后会在 TCP 连接中发送 HTTP 请求,并等待服务器返回 HTTP 响应。当你的浏览器收到响应后,就会显示其内容,在上面的例子中,它显示了“Hello, World!”。 -我们来进一步探索在发送 HTTP 请求之前,客户端与服务器建立 TCP 连接的过程。为了建立链接,它们使用了所谓“套接字”。我们现在不直接使用浏览器发送请求,而在命令行中使用 telnet 来人工模拟这个过程。 +我们来进一步探索在发送 HTTP 请求之前,客户端与服务器建立 TCP 连接的过程。为了建立链接,它们使用了所谓“套接字(socket)”。我们现在不直接使用浏览器发送请求,而在命令行中使用 `telnet` 来人工模拟这个过程。 在你运行 Web 服务器的电脑上,在命令行中建立一个 telnet 会话,指定一个本地域名,使用端口 8888,然后按下回车: @@ -94,7 +94,7 @@ Connected to localhost. 
![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_socket.png) -在同一个 telnet 会话中,输入 GET /hello HTTP/1.1,然后输入回车: +在同一个 telnet 会话中,输入 `GET /hello HTTP/1.1`,然后输入回车: ``` $ telnet localhost 8888 @@ -106,11 +106,11 @@ HTTP/1.1 200 OK Hello, World! ``` -你刚刚手动模拟了你的浏览器!你发送了 HTTP 请求,并且收到了一个 HTTP 应答。下面是一个 HTTP 请求的基本结构: +你刚刚手动模拟了你的浏览器(的工作)!你发送了 HTTP 请求,并且收到了一个 HTTP 应答。下面是一个 HTTP 请求的基本结构: ![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_request_anatomy.png) -HTTP 请求的第一行由三部分组成:HTTP 方法(GET,因为我们想让我们的服务器返回一些内容),以及标明所需页面的路径 /hello,还有协议版本。 +HTTP 请求的第一行由三部分组成:HTTP 方法(`GET`,因为我们想让我们的服务器返回一些内容),以及标明所需页面的路径 `/hello`,还有协议版本。 为了简单一些,我们刚刚构建的 Web 服务器完全忽略了上面的请求内容。你也可以试着输入一些无用内容而不是“GET /hello HTTP/1.1”,但你仍然会收到一个“Hello, World!”响应。 @@ -120,19 +120,19 @@ HTTP 请求的第一行由三部分组成:HTTP 方法(GET,因为我们想 ![](https://ruslanspivak.com/lsbaws-part1/LSBAWS_HTTP_response_anatomy.png) -我们来解析它。这个响应由三部分组成:一个状态行 HTTP/1.1 200 OK,后面跟着一个空行,再下面是响应正文。 +我们来解析它。这个响应由三部分组成:一个状态行 `HTTP/1.1 200 OK`,后面跟着一个空行,再下面是响应正文。 -HTTP 响应的状态行 HTTP/1.1 200 OK 包含 HTTP 版本号,HTTP 状态码以及 HTTP 状态短语“OK”。当浏览器收到响应后,它会将响应正文显示出来,这也就是为什么你会在浏览器中看到“Hello, World!”。 +HTTP 响应的状态行 HTTP/1.1 200 OK 包含了 HTTP 版本号,HTTP 状态码以及 HTTP 状态短语“OK”。当浏览器收到响应后,它会将响应正文显示出来,这也就是为什么你会在浏览器中看到“Hello, World!”。 以上就是 Web 服务器的基本工作模型。总结一下:Web 服务器创建一个处于监听状态的套接字,循环接收新的连接。客户端建立 TCP 连接成功后,会向服务器发送 HTTP 请求,然后服务器会以一个 HTTP 响应做应答,客户端会将 HTTP 的响应内容显示给用户。为了建立 TCP 连接,客户端和服务端均会使用套接字。 -现在,你应该了解了 Web 服务器的基本工作方式,你可以使用浏览器或其它 HTTP 客户端进行试验。如果你尝试过、观察过,你应该也能够使用 telnet,人工编写 HTTP 请求,成为一个人形 HTTP 客户端。 +现在,你应该了解了 Web 服务器的基本工作方式,你可以使用浏览器或其它 HTTP 客户端进行试验。如果你尝试过、观察过,你应该也能够使用 telnet,人工编写 HTTP 请求,成为一个“人形” HTTP 客户端。 -现在留一个小问题:“你要如何在不对程序做任何改动的情况下,在你刚刚搭建起来的 Web 服务器上适配 Django, Flask 或 Pyramid 应用呢? 
+现在留一个小问题:“你要如何在不对程序做任何改动的情况下,在你刚刚搭建起来的 Web 服务器上适配 Django, Flask 或 Pyramid 应用呢?” 我会在本系列的第二部分中来详细讲解。敬请期待。 -顺便,我在撰写《搭个 Web 服务器:从头开始》。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅邮件列表,你就可以获取到这本书的最新进展,以及发布日期。 +顺便,我在撰写一本名为《搭个 Web 服务器:从头开始》的书。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅邮件列表,你就可以获取到这本书的最新进展,以及发布日期。 -------------------------------------------------------------------------------- @@ -140,11 +140,11 @@ via: https://ruslanspivak.com/lsbaws-part1/ 作者:[Ruslan][a] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://linkedin.com/in/ruslanspivak/ - - +[1]: https://github.com/rspivak/lsbaws/blob/master/part1/webserver1.py +[2]: http://en.wikipedia.org/wiki/Uniform_resource_locator From cb803378c0fe36c684705ef1766f0dc6e5263561 Mon Sep 17 00:00:00 2001 From: Markgolzh Date: Mon, 8 Aug 2016 23:32:24 +0800 Subject: [PATCH 370/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删掉原文 --- ...Flatpak brings standalone apps to Linux.md | 35 ------------------- 1 file changed, 35 deletions(-) delete mode 100644 sources/tech/20160621 Flatpak brings standalone apps to Linux.md diff --git a/sources/tech/20160621 Flatpak brings standalone apps to Linux.md b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md deleted file mode 100644 index c2c1b51e7b..0000000000 --- a/sources/tech/20160621 Flatpak brings standalone apps to Linux.md +++ /dev/null @@ -1,35 +0,0 @@ -翻译中:by zky001 -Flatpak brings standalone apps to Linux -=== - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg) - -The development team behind [Flatpak][1] has [just announced the general availability][2] of the Flatpak desktop application framework. 
Flatpak (which was also known during development as xdg-app) provides the ability for an application — bundled as a Flatpak — to be installed and run easily and consistently on many different Linux distributions. Applications bundled as Flatpaks also have the ability to be sandboxed for security, isolating them from your operating system, and other applications. Check out the [Flatpak website][3], and the [press release][4] for more information on the tech that makes up the Flatpak framework. - -### Installing Flatpak on Fedora - -For users wanting to run applications bundled as Flatpaks, installation on Fedora is easy, with Flatpak already available in the official Fedora 23 and Fedora 24 repositories. The Flatpak website has [full details on installation on Fedora][5], as well as how to install on Arch, Debian, Mageia, and Ubuntu. [Many applications][6] have builds already bundled with Flatpak — including LibreOffice, and nightly builds of popular graphics applications Inkscape and GIMP. - -### For Application Developers - -If you are an application developer, the Flatpak website also contains some great resources on getting started [bundling and distributing your applications with Flatpak][7]. These resources contain information on using Flakpak SDKs to build standalone, sandboxed Flatpak applications. 
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/introducing-flatpak/
-
-作者:[Ryan Lerch][a]
-译者:[zky001](https://github.com/zky001)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/introducing-flatpak/
-[1]: http://flatpak.org/
-[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html
-[3]: http://flatpak.org/
-[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html
-[5]: http://flatpak.org/getting.html
-[6]: http://flatpak.org/apps.html
-[7]: http://flatpak.org/developer.html

From 51c318d8f16617a90be7b7264c64786dd7e552ea Mon Sep 17 00:00:00 2001
From: Markgolzh
Date: Mon, 8 Aug 2016 23:33:44 +0800
Subject: [PATCH 371/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

中文翻译部分
---
 ...Flatpak brings standalone apps to Linux.md | 36 +++++++++++++++++++
 1 file changed, 36 insertions(+)
 create mode 100644 translated/tech/20160621 Flatpak brings standalone apps to Linux.md

diff --git a/translated/tech/20160621 Flatpak brings standalone apps to Linux.md b/translated/tech/20160621 Flatpak brings standalone apps to Linux.md
new file mode 100644
index 0000000000..cad3fff909
--- /dev/null
+++ b/translated/tech/20160621 Flatpak brings standalone apps to Linux.md
@@ -0,0 +1,36 @@
+翻译中:by zky001
+Flatpak给Linux带来了独立的应用
+===
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg)
+
+
+在[Flatpak][1]背后的开发团队[刚刚宣布了Flatpak桌面应用框架的正式可用][2]。Flatpak(开发期间也被称为xdg-app)让打包为Flatpak的应用程序可以轻松而一致地安装并运行在许多不同的Linux发行版上。打包为Flatpak的应用程序还具有沙盒安全能力,可以将它们与你的操作系统和其他应用程序隔离开。查看[Flatpak网站][3]和[新闻稿][4]来获取更多有关Flatpak框架的技术信息。
+### 在Fedora中安装Flatpak
+对于想要运行以Flatpak格式打包的应用程序的用户,在Fedora上安装是很容易的,Flatpak已经可以在官方的Fedora 23和Fedora
24版本源中获得。Flatpak网站有[完整的在Fedora上安装的细节][5],同时也有如何在Arch、Debian、Mageia和Ubuntu中安装的方法。[许多应用][6]已经有了用Flatpak打包的构建版本,包括LibreOffice,以及流行图形应用Inkscape和GIMP的每夜构建版。
+### 对应用开发者
+
+如果你是一个应用开发者,Flatpak网站也包含许多有关于[使用Flatpak打包和分发应用程序][7]的优秀资源。这些资源包括了使用Flatpak SDK来构建独立的、沙盒化的Flatpak应用程序。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/introducing-flatpak/
+
+作者:[Ryan Lerch][a]
+译者:[zky001](https://github.com/zky001)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/introducing-flatpak/
+[1]: http://flatpak.org/
+[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html
+[3]: http://flatpak.org/
+[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html
+[5]: http://flatpak.org/getting.html
+[6]: http://flatpak.org/apps.html
+[7]: http://flatpak.org/developer.html
+
+

From 7766872f2f0953237bfa8a06f0bb84700070f386 Mon Sep 17 00:00:00 2001
From: Purling Nayuki
Date: Tue, 9 Aug 2016 01:25:56 +0800
Subject: [PATCH 372/471] Proofread 20160531 The Anatomy of a Linux User

---
 .../20160531 The Anatomy of a Linux User.md   | 42 +++++++++----------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/translated/talk/20160531 The Anatomy of a Linux User.md b/translated/talk/20160531 The Anatomy of a Linux User.md
index a404697347..116bd88b41 100644
--- a/translated/talk/20160531 The Anatomy of a Linux User.md
+++ b/translated/talk/20160531 The Anatomy of a Linux User.md
@@ -1,57 +1,55 @@
-
-
-一个 Linux 用户的故事
+深入理解 Linux 用户
 ================================
-**一些新的 GNU/Linux 用户都很清楚的知道 Linux 不是 Windows .其他很多人都不是很清楚的知道.最好的发行版设计者努力保持新的思想**
+**一些新的 GNU/Linux 用户都很清楚地知道 Linux 不是 Windows。但其他人对此都不甚了解。最好的发行版设计者努力保持新的思想。**
 
 ### Linux 的核心
 
-不管怎么说,Nicky 都不是那种表面上看起来很值得注意的人.她已经三十岁了,却决定回到学校学习.她在海军待了6年时间直到她的老友给她一份新的工作,而且这份工作比她在军队的工作还好.在过去的军事分支服务期间发生了很多事情.我认识她还是她在军队工作的时候.她是8个州的货车运输业协商区域的管理者.那会我在达拉斯跑肉品包装工具的运输.
+不管怎么说,Nicky 看起来都不引人注目。她已经三十岁了,却决定回到学校学习。她在海军待了 6 年,后来接受了一份老友给她的新工作,她认为这份工作比她在军队的工作更有前途。认定其他工作更有前途而换工作的事情在战后的军事后勤处非常常见。我正是因此而认识的她。她那时是一个有 8 个州参与的的货车运输业协商组织的区域经理。那会我在达拉斯跑肉品包装工具的运输。 ![](http://i2.wp.com/fossforce.com/wp-content/uploads/2016/05/anatomy.jpg?w=525) -Nicky 和我在 2006 年成为了好朋友.她很外向,并且有很强的好奇心,她几乎走过每个运输商走的路线.一个星期五晚上我们在灯光下有一场很激烈的争论,像这样的 达 30 分钟的争论在我们之间并不少见.或许这并不比油漆一个球场省事,但是气氛还是可以控制的住的,感觉就像是一个很恐怖的游戏.在这次争论的时候她问到我是否可以修复她的电脑. +Nicky 和我在 2006 年成为了好朋友。她很外向,对所有她跑过的线路的拥有者。我们经常星期五晚上我们相约去一家室内激光枪战中心打真人 CS。像这样连续激战半个小时对我们来说并不罕见。或许这并不像彩弹游戏[译注:一种军事游戏,双方以汽枪互射彩色染料弹丸,对方被击中后衣服上会留下彩色印渍即表示“被消灭”(必应词典)]一样便宜,但是气氛还是可以控制得住的,而且令人感觉比较恐怖。某次活动的时候,她问我能否帮她维修电脑。 -她知道我为了一些贫穷的孩子能拥有他们自己的电脑做出的努力,当她抱怨她的电脑很慢的时候,我提到了参加 Bill Gate 的 401k 计划[译注:这个计划是为那些为内部收益代码做过贡献的人们提供由税收定义的养老金账户.].Nicky 说这是了解 Linux 的最佳时间. +她知道我为了一些贫穷的孩子能拥有他们自己的电脑做出的努力,当她抱怨她的电脑很慢的时候,我开玩笑地提到了参加比尔盖茨的 401k 计划[译注:这个计划是为那些为内部收益代码做过贡献的人们提供由税收定义的养老金账户。]。Nicky 说这是了解 Linux 的最佳时间。 -她的电脑相当好,它是一个带有 Dell 19 显示器的华硕电脑.不好的是,当不需要一些东西的时候,这个 Windows 电脑会强制性的显示所有的工具条和文本菜单.我们把电脑上的文件都做了备份之后就开始安装 Linux 了.我们一起完成了安装,并且我确信她知道了如何分区.不到一个小时,她的电脑上就有了一个漂亮的 PCLinuxOS 桌面. +她的电脑相当好,它是一个带有 Dell 19'' 显示器的华硕电脑。不幸的是,即使当不需要一些东西的时候,这台 Windows 电脑也会强制性的显示所有的工具条和文本菜单。我们把电脑上的文件都做了备份之后就开始安装 Linux 了。我们一起完成了安装,并且确信她知道了如何分区。不到一个小时,她的电脑上就有了一个漂亮的 PCLinuxOS 桌面。 -她会经常谈论她使用新系统的方法,系统看起来多么漂亮.她不曾提及的是,她几乎被她面前的井然有序的漂亮桌面吸引.她说她的桌面带有漂亮的"微光".这是我在安装系统期间特意设置的.我每次在安装 Linux 的时候都会进行这样的配置.我想让每个 Linux 用户的桌面都配置这个漂亮的微光. +在她操作新系统时,她经常评论这个系统看起来多么漂亮。她并非随口一说;她为眼前光鲜亮丽的桌面着了魔。她说她的桌面带有漂亮的"微光"[译注:即文字闪光效果,shimmer]。这是我在安装系统期间特意设置的。我每次安装 Linux 的时候都会进行这样的配置。我想让每个 Linux 用户的桌面都拥有这个漂亮的微光。 -大概第一周左右,她给我打电话或者发邮件问了一些常规的问题,但是最主要的问题还是她想知道怎样保存她打开的 Office 文件(OpenOffice)以至于她的同事也可以读这些文件.教一个人使用 Linux 或者 Open/LibreOffice 的时候最重要的就是教她保存文件.大多数用户仅仅看到第一个提示,只需要手指轻轻一点就可以以打开文件模式( Open Document Format )保存. 
+大概第一周左右,她通过电话和邮件问了我一些常规问题,而最主要的问题还是她想知道如何保存她OpenOffice 文件才可以让她的同事也可以读这些文件。教一个人使用 Linux 或者 Open/LibreOffice 的时候最重要的就是教她保存文件。大多数用户直接点了保存,结果就用默认的开放文档格式(Open Document Format)保存了,这让他们吃了不少苦头。 -大约一年前或者更久,一个高中生说他没有通过期末考试,因为教授不能打开他写着论文的文件.这引来不能决定谁对谁错的读者的激烈评论.这个高中生和教授都不知道这件事该怪谁. +大约一年前或者更久,一个高中生说他没有通过期末考试,因为教授不能打开他上交的 ODF 格式的论文。这引来了一些读者的激烈评论,大家都不知道这件事该怪谁。 -我知道一些大学教授甚至每个人都能够打开一个 ODF 文件.见鬼,很少有像微软这样优秀的公司,我想 Microsoft Office 现在已经能打开 ODT 或者 ODF 文件了.我也不能确保,毕竟我最近一次用 Microsoft Office 是在 2005 年. +我知道一些大学教授甚至每个人都能够打开一个 ODF 文件。另外,我想 Microsoft Office 现在已经能打开 ODT 或者 ODF 文件了。不过我也不确定,毕竟我最近一次用 Microsoft Office 是在 2005 年。 -甚至在困难时期,当 Microsoft 很受欢迎并且很热衷于为使用他们系统的厂商的企业桌面上安装他们自己的软件的时候,我和一些使用 Microsoft Office 的用户的产品生意和合作从来没有出现过问题,因为我会提前想到可能出现的问题并且不会有侥幸心理.我会发邮件给他们询问他们正在使用的 Office 版本.这样,我就可以确保以他们能够读写的格式保存文件. +甚至在过去糟糕的日子里,当 Microsoft 很受欢迎并且很热衷于为使用他们系统的厂商的企业桌面上安装他们自己的软件的时候,我和一些 Microsoft Office 用户的产品生意和合作从来没有出现过问题,因为我会提前想到可能出现的问题并且不会有侥幸心理。我会发邮件给他们询问他们正在使用的 Office 版本。这样,我就可以确保以他们能够读写的格式保存文件。 -说到 Nicky ,她花了很多时间学习她的 Linux 系统.我很惊奇于她的热情. +说回 Nicky ,她花了很多时间学习她的 Linux 系统。我很惊奇于她的热情。 -当人们意识到所有的使用 Windows 的习惯和工具都要被抛弃的时候,学习 Linux 系统也会很容易.甚至在我们谈论第一次用的系统时,我查看这些系统的桌面或者下载文件夹大多都找不到 some_dodgy_file.exe 这样的文件. +当人们意识到所有的使用 Windows 的习惯和工具都要被抛弃的时候,学习 Linux 系统也会很容易。甚至在告诉孩子们如何使用之后,他们都不会试图下载和执行 .exe 文件。 -在我们通常讨论这些文件的时候,我们也会提及关于更新的问题.很长时间我都不会在一台电脑上反复设置去完成多种程序的更新和安装.比如 Mint ,它没有带有 Synaptic [译注:一个 Linux 管理图像程序包]的完整更新方法,这让我失去兴趣.但是我们的老成员 dpkg 和 apt 是我们的好朋友,聪明的领导者已经得到肯定并且认识到命令行看起来不是那么舒服,同时欢迎新的用户加入. +在我们通常讨论这些文件的时候,我们也会提及关于更新的问题。很长时间我都不会在一台电脑上反复设置去完成多种程序的更新和安装。比如 Mint,它禁用了 Synaptic[译注:一个 Linux 管理图像程序包]的完整(系统)更新功能,这让我失去兴趣。但是即便我们仍然在使用 dpkg 和 apt,智能的前端已经开始占据上风,并使大家开始意识到命令行对心用户来说并不那么温馨而友好。 -我很生气,强烈反对机器对 Synaptic 的削弱,最后我放弃使用它.你记得你第一次使用的 Linux 发行版吗?记得你什么时候在 Synaptic 中详细查看大量的软件列表吗?记得你怎样开始检查并标记每个你发现的很酷的程序吗?你记得有多少这样的程序开始都是使用"lib"这样的文件吗? +我曾严正抗议并强烈谴责 Synaptic 功能上的削弱。你记得你第一次使用的 Linux 发行版吗?记得你什么时候在 Synaptic 中详细查看大量的软件列表吗?记得你怎样开始检查并标记每个你发现的很酷的程序吗?你记得有多少这样的程序开始都是使用"lib"这样的文件吗? 
-是的,我也是。我安装并且查看了一些新的安装程序,直到我发现那些库文件是应用程序的螺母和螺栓,而不是应用程序本身.这就是为什么这些聪明的开发者在 Linux Mint 和 Ubuntu 之后创造了聪明、漂亮和易于使用的应用程序的安装程序.Synaptic 仍然是我们的老大,但是对于一些后来者,安装像 lib 文件这样的方式需要打开大量的文件夹,很多这样的事情都会导致他们放弃使用这个系统.在新的安装程序中,这些文件会被放在一个文件夹中甚至不会展示给用户.总之,这也是该有的解决方法. +是的,我也是。我安装又弄坏了好次 Linux,后来我才发现那些库文件是应用程序的螺母和螺栓,而不是应用程序本身。这就是 Linux Mint 和 Ubuntu 幕后那些聪明的开发者创造了智能、漂亮和易于使用的应用程序的安装程序的理由。Synaptic 仍然是我们的老大,但是对于一些后来者,安装像 lib 文件这样的方式需要打开大量的文件夹,很多这样的事情都会导致他们放弃使用这个系统。在新的安装程序中,这些文件会被折叠起来,不会展示给用户。讲真,这也是它应该做的。 -除非你要改变应该支持的需求. +除非你要改变应该支持的需求。 -现在的 Linux 发行版中有很多有用的软件,我也很感谢这些开发者,因为他们,我的工作变得容易.不是每一个 Linux 新用户都像 Nicky 这样富有热情.她相当不错的完成了安装过程并且达到了忘我的状态.像她这样极具热情的毕竟是少数.大多数新的 Linux 用户也就是在需要的时候才这样用心. +现在的 Linux 发行版中藏了很多智慧的结晶,我也很感谢这些开发者,因为他们,我的工作变得容易。不是每一个 Linux 新用户都像 Nicky 这样富有学习能力和热情。她对我来说就是一个“安装后不管”的项目,只需要为她解答一些问题,其他的她会自己研究解决。像她这样极具学习能力和热情的用户的毕竟是少数。这样的 Linux 新人任何时候都是珍稀物种。 -很不错,他们都是要教自己的孩子使用 Linux 的人. +很不错,他们都是要教自己的孩子使用 Linux 的人。 -------------------------------------------------------------------------------- via: http://fossforce.com/2016/05/anatomy-linux-user/ 作者:[Ken Starks][a] 译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对者ID](https://github.com/校对者ID) +校对:[PurlingNayuki](https://github.com/PurlingNayuki) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 77e4d18eab0343c0717be399e6c4ab9a22e2ce9c Mon Sep 17 00:00:00 2001 From: Markgolzh Date: Tue, 9 Aug 2016 01:31:22 +0800 Subject: [PATCH 373/471] =?UTF-8?q?zky001=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译中by zky001 --- .../20160615 Excel Filter and Edit - Demonstrated in Pandas.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md b/sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md index 348f66196f..4e9c973066 100644 --- a/sources/tech/20160615 Excel Filter and Edit - Demonstrated in Pandas.md +++ b/sources/tech/20160615 
Excel Filter and Edit - Demonstrated in Pandas.md @@ -1,3 +1,4 @@ +翻译中 by-zky001 Excel “Filter and Edit” - Demonstrated in Pandas ================================================== @@ -313,7 +314,7 @@ Thanks for reading through the article. I find that one of the biggest challenge via: http://pbpython.com/excel-filter-edit.html 作者:[Chris Moffitt ][a] -译者:[译者ID](https://github.com/译者ID) +译者:[zky001](https://github.com/zky001) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 27aa4c166fb3b000648b3eb77f75dfc549227d22 Mon Sep 17 00:00:00 2001 From: Purling Nayuki Date: Tue, 9 Aug 2016 01:35:54 +0800 Subject: [PATCH 374/471] Proofread 20160531 The Anatomy of a Linux User and correct some translations --- translated/talk/20160531 The Anatomy of a Linux User.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/translated/talk/20160531 The Anatomy of a Linux User.md b/translated/talk/20160531 The Anatomy of a Linux User.md index 116bd88b41..99e3f0acc5 100644 --- a/translated/talk/20160531 The Anatomy of a Linux User.md +++ b/translated/talk/20160531 The Anatomy of a Linux User.md @@ -32,11 +32,11 @@ Nicky 和我在 2006 年成为了好朋友。她很外向,对所有她跑过 当人们意识到所有的使用 Windows 的习惯和工具都要被抛弃的时候,学习 Linux 系统也会很容易。甚至在告诉孩子们如何使用之后,他们都不会试图下载和执行 .exe 文件。 -在我们通常讨论这些文件的时候,我们也会提及关于更新的问题。很长时间我都不会在一台电脑上反复设置去完成多种程序的更新和安装。比如 Mint,它禁用了 Synaptic[译注:一个 Linux 管理图像程序包]的完整(系统)更新功能,这让我失去兴趣。但是即便我们仍然在使用 dpkg 和 apt,智能的前端已经开始占据上风,并使大家开始意识到命令行对心用户来说并不那么温馨而友好。 +在我们通常讨论这些文件的时候,我们也会提及关于更新的问题。很长时间我都不会在一台电脑上反复设置去完成多种程序的更新和安装。比如 Mint,它禁用了 Synaptic[译注:一个 Linux 管理图像程序包]的完整(系统)更新功能,这让我失去兴趣。但是即便我们仍然在使用 dpkg 和 apt,睿智的玩家已经开始意识到命令行对新用户来说并不那么温馨而友好。 -我曾严正抗议并强烈谴责 Synaptic 功能上的削弱。你记得你第一次使用的 Linux 发行版吗?记得你什么时候在 Synaptic 中详细查看大量的软件列表吗?记得你怎样开始检查并标记每个你发现的很酷的程序吗?你记得有多少这样的程序开始都是使用"lib"这样的文件吗? +我曾严正抗议并强烈谴责 Synaptic 功能上的削弱。你记得你第一次使用的 Linux 发行版吗?记得你什么时候在 Synaptic 中详细查看大量的软件列表吗?记得你怎样开始标记你发现的每个很酷的程序吗?你记得有多少这样的程序都是以字母"lib"开头的吗? 
-是的,我也是。我安装又弄坏了好次 Linux,后来我才发现那些库文件是应用程序的螺母和螺栓,而不是应用程序本身。这就是 Linux Mint 和 Ubuntu 幕后那些聪明的开发者创造了智能、漂亮和易于使用的应用程序的安装程序的理由。Synaptic 仍然是我们的老大,但是对于一些后来者,安装像 lib 文件这样的方式需要打开大量的文件夹,很多这样的事情都会导致他们放弃使用这个系统。在新的安装程序中,这些文件会被折叠起来,不会展示给用户。讲真,这也是它应该做的。 +我也曾做过那样的事。我安装又弄坏了好次 Linux,后来我才发现那些库(lib)文件是应用程序的螺母和螺栓,而不是应用程序本身。这就是 Linux Mint 和 Ubuntu 幕后那些聪明的开发者创造了智能、漂亮和易于使用的应用程序的安装程序的理由。Synaptic 仍然是我们这些老玩家爱用的工具,但是对于一些新手,Synaptic 中有太多的方式可以让他们安装库文件和其他类似的包,把系统弄乱。在新的安装程序中,这些包会被折叠起来,不会展示给用户。讲真,这也是它应该做的。 除非你要改变应该支持的需求。 From e19a22994cc7bcc7e74bb2d7f4d09311e564a9a5 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 9 Aug 2016 08:21:55 +0800 Subject: [PATCH 375/471] PUB:20160628 Python 101 An Intro to urllib @oska874 --- .../20160628 Python 101 An Intro to urllib.md | 29 +++++++++---------- 1 file changed, 14 insertions(+), 15 deletions(-) rename {translated/tech => published}/20160628 Python 101 An Intro to urllib.md (66%) diff --git a/translated/tech/20160628 Python 101 An Intro to urllib.md b/published/20160628 Python 101 An Intro to urllib.md similarity index 66% rename from translated/tech/20160628 Python 101 An Intro to urllib.md rename to published/20160628 Python 101 An Intro to urllib.md index 641ac2baf1..2ba5ea85b8 100644 --- a/translated/tech/20160628 Python 101 An Intro to urllib.md +++ b/published/20160628 Python 101 An Intro to urllib.md @@ -1,7 +1,7 @@ -Python 101: urllib 简介 +Python 学习:urllib 简介 ================================= -Python 3 的 urllib 模块是一堆可以处理 URL 的组件集合。如果你有 Python 2 背景,那么你就会注意到 Python 2 中有 urllib 和 urllib2 两个版本的模块。这些现在都是 Python 3 的 urllib 包的一部分。当前版本的 urllib 包括下面几部分: +Python 3 的 urllib 模块是一堆可以处理 URL 的组件集合。如果你有 Python 2 的知识,那么你就会注意到 Python 2 中有 urllib 和 urllib2 两个版本的模块。这些现在都是 Python 3 的 urllib 包的一部分。当前版本的 urllib 包括下面几部分: - urllib.request - urllib.error @@ -10,11 +10,10 @@ Python 3 的 urllib 模块是一堆可以处理 URL 的组件集合。如果你 接下来我们会分开讨论除了 urllib.error 以外的几部分。官方文档实际推荐你尝试第三方库, requests,一个高级的 HTTP 客户端接口。然而我依然认为知道如何不依赖第三方库打开 URL 并与之进行交互是很有用的,而且这也可以帮助你理解为什么 requests 包是如此的流行。 ---- ### 
urllib.request
 
-urllib.request 模块期初是用来打开和获取 URL 的。让我们看看你可以用函数 urlopen可以做的事:
+urllib.request 模块起初是用来打开和获取 URL 的。让我们看看用函数 urlopen 可以做的事:
 
 ```
@@ -50,11 +49,11 @@ urllib.request 模块期初是用来打开和获取 URL 的。让我们看看你
 
 在这里我们包含了需要的模块,然后告诉它打开 Google 的 URL。现在我们就有了一个可以交互的 HTTPResponse 对象。我们要做的第一件事是调用方法 geturl ,它会返回根据 URL 获取的资源。这可以让我们发现 URL 是否进行了重定向。
 
-接下来调用 info ,它会返回网页的元数据,比如头信息。因此,我们可以将结果赋给我们的 headers 变量,然后调用它的方法 as_string 。就可以打印出我们从 Google 收到的头信息。你也可以通过 getcode 得到网页的 HTTP 响应码,当前情况下就是 200,意思是正常工作。
+接下来调用 info ,它会返回网页的元数据,比如响应头信息。因此,我们可以将结果赋给我们的 headers 变量,然后调用它的方法 as_string 。就可以打印出我们从 Google 收到的头信息。你也可以通过 getcode 得到网页的 HTTP 响应码,当前情况下就是 200,意思是正常工作。
 
 如果你想看看网页的 HTML 代码,你可以调用变量 url 的方法 read。我不准备再现这个过程,因为输出结果太长了。
 
-请注意 request 对象默认是 GET 请求,除非你指定它的 data 参数。你应该给它传递 data 参数,这样 request 对象才会变成 POST 请求。
+请注意 request 对象默认发起 GET 请求,除非你指定了它的 data 参数。如果你给它传递了 data 参数,request 对象就会变成 POST 请求。
 
 ---
 
@@ -72,7 +71,7 @@ urllib 一个典型的应用场景是下载文件。让我们看看几种可以
 
 ...
 ```
 
-这个例子中我们打开一个保存在我的博客上的 zip 压缩文件的 URL。然后我们读出数据并将数据写到磁盘。一个替代方法是使用 urlretrieve :
+这个例子中我们打开一个保存在我的博客上的 zip 压缩文件的 URL。然后我们读出数据并将数据写到磁盘。一个替代此操作的方案是使用 urlretrieve :
 
 ```
 >>> import urllib.request
@@ -83,7 +82,7 @@ urllib 一个典型的应用场景是下载文件。让我们看看几种可以
fobj.write(tmp.read())
 ```
 
-方法 urlretrieve 会把网络对象拷贝到本地文件。除非你在使用 urlretrieve 的第二个参数指定你要保存文件的路径,否则这个文件在本地是随机命名的并且是保存在临时文件夹。这个可以为你节省一步操作,并且使代码开起来更简单:
+方法 urlretrieve 会把网络对象拷贝到本地文件。除非你在使用 urlretrieve 的第二个参数指定你要保存文件的路径,否则这个文件将被拷贝到临时文件夹中一个随机命名的文件里。这个可以为你节省一步操作,并且使代码看起来更简单:
 
 ```
 >>> import urllib.request
@@ -97,7 +96,7 @@ urllib 一个典型的应用场景是下载文件。让我们看看几种可以
 
 ### 设置你的用户代理
 
-当你使用浏览器访问网页时,浏览器会告诉网站它是谁。这就是所谓的 user-agent 字段。Python 的 urllib 会表示他自己为 Python-urllib/x.y , 其中 x 和 y 是你使用的 Python 的主、次版本号。有一些网站不认识这个用户代理字段,然后网站的可能会有奇怪的表现或者根本不能正常工作。辛运的是你可以很轻松的设置你自己的 user-agent 字段。
+当你使用浏览器访问网页时,浏览器会告诉网站它是谁。这就是所谓的 user-agent (用户代理)字段。Python 的 urllib 会表示它自己为 Python-urllib/x.y , 其中 x 和 y 是你使用的 Python 的主、次版本号。有一些网站不认识这个用户代理字段,然后网站可能会有奇怪的表现或者根本不能正常工作。幸运的是你可以很轻松地设置你自己的 user-agent 字段。
 
 ```
 >>> import urllib.request
@@ -132,7 +131,7 @@ ParseResult(scheme='https', netloc='duckduckgo.com', path='/', params='', query=
 
 None
 ```
 
-这里我们导入了函数 urlparse , 并且把一个包含搜索查询 duckduckgo 的 URL 作为参数传给它。我的查询的关于 “python stubbing” 的文章。如你所见,它返回了一个 ParseResult 对象,你可以用这个对象了解更多关于 URL 的信息。举个例子,你可以获取到短信息(此处的没有端口信息),网络位置,路径和很多其他东西。
+这里我们导入了函数 urlparse , 并且把一个包含搜索查询字串的 duckduckgo 的 URL 作为参数传给它。我的查询字串是搜索关于 “python stubbing” 的文章。如你所见,它返回了一个 ParseResult 对象,你可以用这个对象了解更多关于 URL 的信息。举个例子,你可以获取到端口信息(本例中没有端口信息)、网络位置、路径和很多其它东西。
 
 ### 提交一个 Web 表单
 
@@ -151,13 +150,13 @@ None
f.write(response.read())
 ```
 
-这个例子很直接。基本上我们想使用 Python 而不是浏览器向 duckduckgo 提交一个查询。要完成这个我们需要使用 urlencode 构建我们的查询字符串。然后我们把这个字符串和网址拼接成一个完整的正确 URL ,然后使用 urllib.request 提交这个表单。最后我们就获取到了结果然后保存到磁盘上。
+这个例子很直接。基本上我们是使用 Python 而不是浏览器向 duckduckgo 提交了一个查询。要完成这个我们需要使用 urlencode 构建我们的查询字符串。然后我们把这个字符串和网址拼接成一个完整而正确的 URL ,然后使用 urllib.request 提交这个表单。最后我们就获取到了结果然后保存到磁盘上。
 
 ---
 
 ### urllib.robotparser
 
-robotparser 模块是由一个单独的类 —— RobotFileParser —— 构成的。这个类会回答诸如一个特定的用户代理可以获取已经设置 robot.txt 的网站。 robot.txt 文件会告诉网络爬虫或者机器人当前网站的那些部分是不允许被访问的。让我们看一个简单的例子:
+robotparser 模块是由一个单独的类 RobotFileParser 构成的。这个类可以回答诸如“某个特定的用户代理能否抓取已经设置了 robot.txt 的网站的某个 URL”这样的问题。 robot.txt 文件会告诉网络爬虫或者机器人当前网站的哪些部分是不允许被访问的。让我们看一个简单的例子:
 
 ```
 >>> import urllib.robotparser
@@ -172,13 +171,13 @@ True
 
 False
 ```
 
-这里我们导入了 robot 分析器类,然后创建一个实例。然后我们给它传递一个表明网站 robots.txt 位置的 URL 。接下来我们告诉分析器来读取这个文件。现在就完成了,我们给它了一组不同的 URL 让它找出那些我们可以爬取而那些不能爬取。我们很快就看到我们可以访问主站但是不包括 cgi-bin 路径。
+这里我们导入了 robot 分析器类,然后创建一个实例。然后我们给它传递一个表明网站 robots.txt 位置的 URL 。接下来我们告诉分析器来读取这个文件。完成后,我们给了它一组不同的 URL,让它找出哪些我们可以爬取而哪些不能爬取。我们很快就看到我们可以访问主站但是不能访问 cgi-bin 路径。
 
 ---
 
 ### 总结一下
 
-现在你就有能力使用 Python 的 urllib 包了。在这一节里,我们学习了如何下载文件,提交 Web 表单,修改自己的用户代理以及访问 robots.txt。 urllib 还有一大堆附加功能没有在这里提及,比如网站授权。然后你可能会考虑在使用 urllib 进行认证之前切换到 requests 库,因为 requests 已经以更易用和易调试的方式实现了这些功能。我同时也希望提醒你 Python 已经通过 http.cookies 模块支持 Cookies 了,虽然在 request 包里也很好的封装了这个功能。你应该可能考虑同时试试两个来决定那个最适合你。
+现在你就有能力使用 Python 的 urllib 包了。在这一节里,我们学习了如何下载文件、提交 Web 表单、修改自己的用户代理以及访问 robots.txt。 urllib 还有一大堆附加功能没有在这里提及,比如网站身份认证。你可能会考虑在使用 urllib 进行身份认证之前切换到 requests 库,因为 requests 已经以更易用和易调试的方式实现了这些功能。我同时也希望提醒你 Python 已经通过 http.cookies 模块支持 Cookies 了,虽然在 request 包里也很好地封装了这个功能。你可以考虑同时试试这两个库,来决定哪个最适合你。
 
--------------------------------------------------------------------------------
 
via: http://www.blog.pythonlibrary.org/2016/06/28/python-101-an-intro-to-urllib/
 
作者:[Mike][a]
译者:[Ezio](https://github.com/oska874)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
本文由
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From d658889532bf7d3ea7c7c6ac689d3e7ab2674e75 Mon Sep 17 00:00:00 2001
From: Felix Yan 
Date: Tue, 9 Aug 2016 14:14:26 +0800
Subject: [PATCH 376/471] =?UTF-8?q?20160809-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...cker Containers using Grafana on Ubuntu.md | 253 ++++++++++++++++++
 1 file changed, 253 insertions(+)
 create mode 100644 sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md

diff --git a/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md b/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md
new file mode 100644
index 0000000000..cafd93ab19
--- /dev/null
+++ b/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md
@@ -0,0 +1,253 @@
+How to Monitor Docker Containers using Grafana on Ubuntu
+================================================================================
+
+Grafana is an open source, feature-rich metrics dashboard. It is very useful for visualizing large-scale measurement data. It provides a powerful and elegant way to create, share, and explore data and dashboards from your disparate metric databases.
+
+It supports a wide variety of graphing options for ultimate flexibility. Furthermore, it supports many different storage backends for your Data Source. Each Data Source has a specific Query Editor that is customized for the features and capabilities that the particular Data Source exposes. The following datasources are officially supported by Grafana: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch and Cloudwatch.
+
+The query language and capabilities of each Data Source are obviously very different.
You can combine data from multiple Data Sources onto a single Dashboard, but each Panel is tied to a specific Data Source that belongs to a particular Organization. It supports authenticated login and a basic role-based access control implementation. It is deployed as a single software installation which is written in Go and Javascript.
+
+In this article, I'll explain how to install Grafana on a docker container in Ubuntu 16.04 and configure docker monitoring using this software.
+
+### Pre-requisites ###
+
+- A server with Docker installed
+
+### Installing Grafana ###
+
+We can build our Grafana in a docker container. There is an official docker image available for building Grafana. Please run this command to build a Grafana container.
+
+    root@ubuntu:~# docker run -i -p 3000:3000 grafana/grafana
+
+    Unable to find image 'grafana/grafana:latest' locally
+    latest: Pulling from grafana/grafana
+    5c90d4a2d1a8: Pull complete
+    b1a9a0b6158e: Pull complete
+    acb23b0d58de: Pull complete
+    Digest: sha256:34ca2f9c7986cb2d115eea373083f7150a2b9b753210546d14477e2276074ae1
+    Status: Downloaded newer image for grafana/grafana:latest
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Starting Grafana" logger=main version=3.1.0 commit=v3.1.0 compiled=2016-07-12T06:42:28+0000
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Path Home" logger=settings
path=/usr/share/grafana
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
+    t=2016-07-27T15:20:19+0000 lvl=info msg="Initializing DB" logger=sqlstore dbtype=sqlite3
+
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2"
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2"
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2"
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3"
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3"
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Created default admin user: [admin]"
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Starting plugin search" logger=plugins
+    t=2016-07-27T15:20:20+0000 lvl=info msg="Server Listening" logger=server address=0.0.0.0:3000 protocol=http subUrl=
+
+We can confirm the working of the Grafana container by running this command `docker ps -a` or by accessing it by URL `http://Docker IP:3000`
+
+All Grafana configuration settings are defined using environment variables, which is very useful when using container technology. The Grafana configuration file is located at /etc/grafana/grafana.ini.
+
+### Understanding the Configuration ###
+
+Grafana has a number of configuration options that can be specified in its .ini configuration file or using environment variables, as mentioned before.
+
+#### Config file locations ####
+
+Normal config file locations.
+
+- Default configuration from : $WORKING_DIR/conf/defaults.ini
+- Custom configuration from : $WORKING_DIR/conf/custom.ini
+
+PS : When you install Grafana using the deb or rpm packages or docker images, your configuration file is located at /etc/grafana/grafana.ini
+
+#### Understanding the config variables ####
+
+Let's see some of the variables in the configuration file below:
+
+`instance_name` : It's the name of the grafana server instance. Its default value is fetched from ${HOSTNAME}, which will be replaced with the environment variable HOSTNAME; if that is empty or does not exist, Grafana will try to use system calls to get the machine name.
+
+`[paths]`
+
+`data` : It's the path where Grafana stores the sqlite3 database (when used), file based sessions (when used), and other data.
+
+`logs` : It's where Grafana stores the logs.
+
+Both these paths are usually specified via command line in the init.d scripts or the systemd service file.
+
+`[server]`
+
+`http_addr` : The IP address to bind the application. If it's left empty it will bind to all interfaces.
+
+`http_port` : The port to which the application is bind to, defaults is 3000. You can redirect your 80 port to 3000 using the below command.
+
+    $iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
+
+`root_url` : This is the URL used to access Grafana from a web browser.
+
+`cert_file` : Path to the certificate file (if protocol is set to https).
+
+`cert_key` : Path to the certificate key file (if protocol is set to https).
+
+`[database]`
+
+Grafana uses a database to store its users and dashboards and other information. By default it is configured to use sqlite3 which is an embedded database included in the main Grafana binary.
+
+`type`
+You can choose mysql, postgres or sqlite3 as per your requirement.
+
+`path`
+It's applicable only for the sqlite3 database. The file path where the database will be stored.
+
+`host`
+It's applicable only to MySQL or Postgres.
It includes the IP or hostname and port. For example, for MySQL running on the same host as Grafana: host = 127.0.0.1:3306
+
+`name`
+The name of the Grafana database. Leave it set to grafana or some other name.
+
+`user`
+The database user (not applicable for sqlite3).
+
+`password`
+The database user's password (not applicable for sqlite3).
+
+`ssl_mode`
+For Postgres, use either disable, require or verify-full. For MySQL, use either true, false, or skip-verify.
+
+`ca_cert_path`
+(MySQL only) The path to the CA certificate to use. On many Linux systems, certs can be found in /etc/ssl/certs.
+
+`client_key_path`
+(MySQL only) The path to the client key. Only if server requires client authentication.
+
+`client_cert_path`
+(MySQL only) The path to the client cert. Only if server requires client authentication.
+
+`server_cert_name`
+(MySQL only) The common name field of the certificate used by the mysql server. Not necessary if ssl_mode is set to skip-verify.
+
+`[security]`
+
+`admin_user` : It is the name of the default Grafana admin user. The default name set is admin.
+
+`admin_password` : It is the password of the default Grafana admin. It is set on first-run. The default password is admin.
+
+`login_remember_days` : The number of days the keep me logged in / remember me cookie lasts.
+
+`secret_key` : It is used for signing keep me logged in / remember me cookies.
+
+### Essential components for setting up Monitoring ###
+
+We use the below components to create our Docker Monitoring system.
+
+`cAdvisor` : It is otherwise called Container Advisor. It provides its users with an understanding of the resource usage and performance characteristics. It collects, aggregates, processes and exports information about the running containers. You can go through this documentation for more information about this.
+
+`InfluxDB` : It is a time series, metrics, and analytic database. We use this datasource for setting up our monitoring.
cAdvisor displays only real time information and doesn’t store the metrics. InfluxDB helps to store the monitoring information which cAdvisor provides in order to display a time range other than real time.
+
+`Grafana Dashboard` : It allows us to combine all the pieces of information together visually. This powerful Dashboard allows us to run queries against the data store InfluxDB and chart them accordingly in a beautiful layout.
+
+### Installation of Docker Monitoring ###
+
+We need to install each of these components one by one in our docker system.
+
+#### Installing InfluxDB ####
+
+We can use this command to pull the InfluxDB image and set up an InfluxDB container.
+
+    root@ubuntu:~# docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -e PRE_CREATE_DB=cadvisor --name influxsrv tutum/influxdb:0.8.8
+    Unable to find image 'tutum/influxdb:0.8.8' locally
+    0.8.8: Pulling from tutum/influxdb
+    a3ed95caeb02: Already exists
+    23efb549476f: Already exists
+    aa2f8df21433: Already exists
+    ef072d3c9b41: Already exists
+    c9f371853f28: Already exists
+    a248b0871c3c: Already exists
+    749db6d368d0: Already exists
+    7d7c7d923e63: Pull complete
+    e47cc7808961: Pull complete
+    1743b6eeb23f: Pull complete
+    Digest: sha256:8494b31289b4dbc1d5b444e344ab1dda3e18b07f80517c3f9aae7d18133c0c42
+    Status: Downloaded newer image for tutum/influxdb:0.8.8
+    d3b6f7789e0d1d01fa4e0aacdb636c221421107d1df96808ecbe8e241ceb1823
+
+    -p 8083:8083 : user interface, log in with username-admin, pass-admin
+    -p 8086:8086 : interaction with other application
+    --name influxsrv : container have name influxsrv, use to cAdvisor link it.
+
+You can test your InfluxDB installation by calling this URL >>http://45.79.148.234:8083 and login with user/password as "root".
+
+![InfluxDB Administration 2016-08-01 14-10-08](http://blog.linoxide.com/wp-content/uploads/2016/07/InfluxDB-Administration-2016-08-01-14-10-08-1-1024x530.png)
+
+We can create our required databases from this tab.
+ +![createDB influx](http://blog.linoxide.com/wp-content/uploads/2016/07/createDB-influx-1024x504.png) + +#### Installing cAdvisor #### + +Our next step is to install cAdvisor container and link it to the InfluxDB container. You can use this command to create it. + + root@ubuntu:~# docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --link influxsrv:influxsrv --name=cadvisor google/cadvisor:latest -storage_driver_db=cadvisor -storage_driver_host=influxsrv:8086 + Unable to find image 'google/cadvisor:latest' locally + latest: Pulling from google/cadvisor + 09d0220f4043: Pull complete + 151807d34af9: Pull complete + 14cd28dce332: Pull complete + Digest: sha256:8364c7ab7f56a087b757a304f9376c3527c8c60c848f82b66dd728980222bd2f + Status: Downloaded newer image for google/cadvisor:latest + 3bfdf7fdc83872485acb06666a686719983a1172ac49895cd2a260deb1cdde29 + root@ubuntu:~# + + --publish=8080:8080 : user interface + --link=influxsrv:influxsrv: link to container influxsrv + -storage_driver=influxdb: set the storage driver as InfluxDB + Specify what InfluxDB instance to push data to: + -storage_driver_host=influxsrv:8086: The ip:port of the database. Default is ‘localhost:8086’ + -storage_driver_db=cadvisor: database name. Uses db ‘cadvisor’ by default + +You can test our cAdvisor installation by calling this URL >>http://45.79.148.234:8080. This will provide you the statistics of your Docker host and containers. + +![cAdvisor - Docker Containers 2016-08-01 14-24-18](http://blog.linoxide.com/wp-content/uploads/2016/07/cAdvisor-Docker-Containers-2016-08-01-14-24-18-776x1024.png) + +#### Installing the Grafana Dashboard #### + +Finally, we need to install the Grafana Dashboard and link to the InfluxDB. You can run this command to setup that. 
+
+    root@ubuntu:~# docker run -d -p 3000:3000 -e INFLUXDB_HOST=localhost -e INFLUXDB_PORT=8086 -e INFLUXDB_NAME=cadvisor -e INFLUXDB_USER=root -e INFLUXDB_PASS=root --link influxsrv:influxsrv --name grafana grafana/grafana
+    f3b7598529202b110e4e6b998dca6b6e60e8608d75dcfe0d2b09ae408f43684a
+
+Now we can login to Grafana and configure the Data Sources. Navigate to http://45.79.148.234:3000 or just http://45.79.148.234:
+
+Username - admin
+Password - admin
+
+Once we've installed Grafana, we can connect it to InfluxDB. Log in to the Dashboard and click on the Grafana icon (Fireball) in the upper left hand corner of the panel. Click on Data Sources to configure.
+
+![addingdatabsource](http://blog.linoxide.com/wp-content/uploads/2016/08/addingdatabsource-1-1024x804.png)
+
+Now you can add your new Graph to the default Datasource, InfluxDB.
+
+![panelgraph](http://blog.linoxide.com/wp-content/uploads/2016/08/panelgraph-1024x576.png)
+
+We can edit and modify our query by adjusting our graph in the Metric tab.
+
+![Grafana - Grafana Dashboard 2016-08-01 14-53-40](http://blog.linoxide.com/wp-content/uploads/2016/08/Grafana-Grafana-Dashboard-2016-08-01-14-53-40-1024x504.png)
+
+![Grafana - Grafana Dashboard](http://blog.linoxide.com/wp-content/uploads/2016/08/Grafana-Grafana-Dashboard-1024x509.png)
+
+You can get [more information][1] on docker monitoring here. Thank you for reading this. I would appreciate your valuable comments and suggestions on this. Hope you have a wonderful day!
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/monitor-docker-containers-grafana-ubuntu/ + +作者:[Saheetha Shameer][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/saheethas/ \ No newline at end of file From ba719f40e335bd6ef94bf2368d381813f5fc1be7 Mon Sep 17 00:00:00 2001 From: Felix Yan Date: Tue, 9 Aug 2016 15:08:12 +0800 Subject: [PATCH 377/471] =?UTF-8?q?=E4=BF=AE=E6=AD=A3=E4=BB=A3=E7=A0=81?= =?UTF-8?q?=E5=9D=97=E6=A0=B7=E5=BC=8F=E3=80=81=E8=A1=A5=E5=85=85=E9=93=BE?= =?UTF-8?q?=E6=8E=A5?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...cker Containers using Grafana on Ubuntu.md | 143 ++++++++++-------- 1 file changed, 77 insertions(+), 66 deletions(-) diff --git a/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md b/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md index cafd93ab19..3d7b171788 100644 --- a/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md +++ b/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md @@ -17,35 +17,37 @@ In this article, I'll explain on how to install Grafana on a docker container in We can build our Grafana in a docker container. There is an official docker image available for building Grafana. Please run this command to build a Grafana container. 
- root@ubuntu:~# docker run -i -p 3000:3000 grafana/grafana +``` +root@ubuntu:~# docker run -i -p 3000:3000 grafana/grafana - Unable to find image 'grafana/grafana:latest' locally - latest: Pulling from grafana/grafana - 5c90d4a2d1a8: Pull complete - b1a9a0b6158e: Pull complete - acb23b0d58de: Pull complete - Digest: sha256:34ca2f9c7986cb2d115eea373083f7150a2b9b753210546d14477e2276074ae1 - Status: Downloaded newer image for grafana/grafana:latest - t=2016-07-27T15:20:19+0000 lvl=info msg="Starting Grafana" logger=main version=3.1.0 commit=v3.1.0 compiled=2016-07-12T06:42:28+0000 - t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini - t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini - t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.data=/var/lib/grafana" - t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.logs=/var/log/grafana" - t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins" - t=2016-07-27T15:20:19+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana - t=2016-07-27T15:20:19+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana - t=2016-07-27T15:20:19+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana - t=2016-07-27T15:20:19+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins - t=2016-07-27T15:20:19+0000 lvl=info msg="Initializing DB" logger=sqlstore dbtype=sqlite3 +Unable to find image 'grafana/grafana:latest' locally +latest: Pulling from grafana/grafana +5c90d4a2d1a8: Pull complete +b1a9a0b6158e: Pull complete +acb23b0d58de: Pull complete +Digest: sha256:34ca2f9c7986cb2d115eea373083f7150a2b9b753210546d14477e2276074ae1 +Status: Downloaded newer 
image for grafana/grafana:latest +t=2016-07-27T15:20:19+0000 lvl=info msg="Starting Grafana" logger=main version=3.1.0 commit=v3.1.0 compiled=2016-07-12T06:42:28+0000 +t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini +t=2016-07-27T15:20:19+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini +t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.data=/var/lib/grafana" +t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.logs=/var/log/grafana" +t=2016-07-27T15:20:19+0000 lvl=info msg="Config overriden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins" +t=2016-07-27T15:20:19+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana +t=2016-07-27T15:20:19+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana +t=2016-07-27T15:20:19+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana +t=2016-07-27T15:20:19+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins +t=2016-07-27T15:20:19+0000 lvl=info msg="Initializing DB" logger=sqlstore dbtype=sqlite3 - t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2" - t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2" - t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2" - t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3" - t=2016-07-27T15:20:20+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3" - t=2016-07-27T15:20:20+0000 lvl=info msg="Created default admin user: [admin]" - t=2016-07-27T15:20:20+0000 lvl=info msg="Starting plugin search" 
logger=plugins
+t=2016-07-27T15:20:20+0000 lvl=info msg="Server Listening" logger=server address=0.0.0.0:3000 protocol=http subUrl=
+```
 
 We can confirm the working of the Grafana container by running this command `docker ps -a` or by accessing it by URL `http://Docker IP:3000`
 
@@ -84,7 +86,9 @@ Both these paths are usually specified via command line in the init.d scripts or
 
 `http_port` : The port to which the application is bind to, defaults is 3000. You can redirect your 80 port to 3000 using the below command.
 
-    $iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
+```
+$iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
+```
 
 `root_url` : This is the URL used to access Grafana from a web browser.
 
@@ -157,26 +161,28 @@ We need to install each of these components one by one in our docker system.
 
 We can use this command to pull the InfluxDB image and set up an InfluxDB container.
- root@ubuntu:~# docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -e PRE_CREATE_DB=cadvisor --name influxsrv tutum/influxdb:0.8.8 - Unable to find image 'tutum/influxdb:0.8.8' locally - 0.8.8: Pulling from tutum/influxdb - a3ed95caeb02: Already exists - 23efb549476f: Already exists - aa2f8df21433: Already exists - ef072d3c9b41: Already exists - c9f371853f28: Already exists - a248b0871c3c: Already exists - 749db6d368d0: Already exists - 7d7c7d923e63: Pull complete - e47cc7808961: Pull complete - 1743b6eeb23f: Pull complete - Digest: sha256:8494b31289b4dbc1d5b444e344ab1dda3e18b07f80517c3f9aae7d18133c0c42 - Status: Downloaded newer image for tutum/influxdb:0.8.8 - d3b6f7789e0d1d01fa4e0aacdb636c221421107d1df96808ecbe8e241ceb1823 +``` +root@ubuntu:~# docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -e PRE_CREATE_DB=cadvisor --name influxsrv tutum/influxdb:0.8.8 +Unable to find image 'tutum/influxdb:0.8.8' locally +0.8.8: Pulling from tutum/influxdb +a3ed95caeb02: Already exists +23efb549476f: Already exists +aa2f8df21433: Already exists +ef072d3c9b41: Already exists +c9f371853f28: Already exists +a248b0871c3c: Already exists +749db6d368d0: Already exists +7d7c7d923e63: Pull complete +e47cc7808961: Pull complete +1743b6eeb23f: Pull complete +Digest: sha256:8494b31289b4dbc1d5b444e344ab1dda3e18b07f80517c3f9aae7d18133c0c42 +Status: Downloaded newer image for tutum/influxdb:0.8.8 +d3b6f7789e0d1d01fa4e0aacdb636c221421107d1df96808ecbe8e241ceb1823 - -p 8083:8083 : user interface, log in with username-admin, pass-admin - -p 8086:8086 : interaction with other application - --name influxsrv : container have name influxsrv, use to cAdvisor link it. + -p 8083:8083 : user interface, log in with username-admin, pass-admin + -p 8086:8086 : interaction with other application + --name influxsrv : container have name influxsrv, use to cAdvisor link it. 
+``` You can test your InfluxDB installation by calling this URL >>http://45.79.148.234:8083 and login with user/password as "root". @@ -190,23 +196,25 @@ We can create our required databases from this tab. Our next step is to install cAdvisor container and link it to the InfluxDB container. You can use this command to create it. - root@ubuntu:~# docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --link influxsrv:influxsrv --name=cadvisor google/cadvisor:latest -storage_driver_db=cadvisor -storage_driver_host=influxsrv:8086 - Unable to find image 'google/cadvisor:latest' locally - latest: Pulling from google/cadvisor - 09d0220f4043: Pull complete - 151807d34af9: Pull complete - 14cd28dce332: Pull complete - Digest: sha256:8364c7ab7f56a087b757a304f9376c3527c8c60c848f82b66dd728980222bd2f - Status: Downloaded newer image for google/cadvisor:latest - 3bfdf7fdc83872485acb06666a686719983a1172ac49895cd2a260deb1cdde29 - root@ubuntu:~# +``` +root@ubuntu:~# docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --link influxsrv:influxsrv --name=cadvisor google/cadvisor:latest -storage_driver_db=cadvisor -storage_driver_host=influxsrv:8086 +Unable to find image 'google/cadvisor:latest' locally +latest: Pulling from google/cadvisor +09d0220f4043: Pull complete +151807d34af9: Pull complete +14cd28dce332: Pull complete +Digest: sha256:8364c7ab7f56a087b757a304f9376c3527c8c60c848f82b66dd728980222bd2f +Status: Downloaded newer image for google/cadvisor:latest +3bfdf7fdc83872485acb06666a686719983a1172ac49895cd2a260deb1cdde29 +root@ubuntu:~# - --publish=8080:8080 : user interface - --link=influxsrv:influxsrv: link to container influxsrv - -storage_driver=influxdb: set the storage driver as InfluxDB - Specify what InfluxDB instance to push data to: - 
-storage_driver_host=influxsrv:8086: The ip:port of the database. Default is ‘localhost:8086’ - -storage_driver_db=cadvisor: database name. Uses db ‘cadvisor’ by default + --publish=8080:8080 : user interface + --link=influxsrv:influxsrv: link to container influxsrv + -storage_driver=influxdb: set the storage driver as InfluxDB + Specify what InfluxDB instance to push data to: + -storage_driver_host=influxsrv:8086: The ip:port of the database. Default is ‘localhost:8086’ + -storage_driver_db=cadvisor: database name. Uses db ‘cadvisor’ by default +``` You can test our cAdvisor installation by calling this URL >>http://45.79.148.234:8080. This will provide you the statistics of your Docker host and containers. @@ -216,8 +224,10 @@ You can test our cAdvisor installation by calling this URL >>http://45.79.148.23 Finally, we need to install the Grafana Dashboard and link to the InfluxDB. You can run this command to setup that. - root@ubuntu:~# docker run -d -p 3000:3000 -e INFLUXDB_HOST=localhost -e INFLUXDB_PORT=8086 -e INFLUXDB_NAME=cadvisor -e INFLUXDB_USER=root -e INFLUXDB_PASS=root --link influxsrv:influxsrv --name grafana grafana/grafana - f3b7598529202b110e4e6b998dca6b6e60e8608d75dcfe0d2b09ae408f43684a +``` +root@ubuntu:~# docker run -d -p 3000:3000 -e INFLUXDB_HOST=localhost -e INFLUXDB_PORT=8086 -e INFLUXDB_NAME=cadvisor -e INFLUXDB_USER=root -e INFLUXDB_PASS=root --link influxsrv:influxsrv --name grafana grafana/grafana +f3b7598529202b110e4e6b998dca6b6e60e8608d75dcfe0d2b09ae408f43684a +``` Now we can login to Grafana and configure the Data Sources. 
Navigate to http://45.79.148.234:3000 or just http://45.79.148.234: @@ -250,4 +260,5 @@ via: http://linoxide.com/linux-how-to/monitor-docker-containers-grafana-ubuntu/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/saheethas/ \ No newline at end of file +[a]:http://linoxide.com/author/saheethas/ +[1]:https://github.com/vegasbrianc/docker-monitoring \ No newline at end of file From bc1739c098ed9f41a9e88b8c4ac8b2d4705ee866 Mon Sep 17 00:00:00 2001 From: chenzhijun <522858454@qq.com> Date: Tue, 9 Aug 2016 15:55:48 +0800 Subject: [PATCH 378/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C20160717=20B?= =?UTF-8?q?EST=20TEXT=20EDITORS=20FOR=20LINUX=20COMMAND=20LINE?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...EST TEXT EDITORS FOR LINUX COMMAND LINE.md | 95 ------------------- ...EST TEXT EDITORS FOR LINUX COMMAND LINE.md | 93 ++++++++++++++++++ 2 files changed, 93 insertions(+), 95 deletions(-) delete mode 100644 sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md create mode 100644 translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md diff --git a/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md b/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md deleted file mode 100644 index 80f4b1439c..0000000000 --- a/sources/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md +++ /dev/null @@ -1,95 +0,0 @@ -Translating by chenzhijun - - -BEST TEXT EDITORS FOR LINUX COMMAND LINE -========================================== - -![](https://itsfoss.com/wp-content/uploads/2016/07/Best-Command-Line-Text-Editors-for-Linux.jpg) - -A text editor is a must have application for any operating system. We have no dearth of [best modern editors for Linux][1]. But those are GUI based editors. - -As you know, the real power of Linux lies in the command line. 
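Since the Grafana container above is wired to InfluxDB purely through environment variables and a `--link`, it can help to assemble the command from shell variables so the names and credentials live in one place. A minimal sketch (the container name, database, and credentials below are the ones used in this article — substitute your own):

```shell
# Build the `docker run` line for Grafana from variables instead of a
# hard-coded one-liner. This sketch only assembles and prints the
# command; drop the final `echo` and run the string to execute it.
INFLUX_CONTAINER=influxsrv
INFLUX_DB=cadvisor
INFLUX_USER=root
INFLUX_PASS=root

GRAFANA_CMD="docker run -d -p 3000:3000 \
 -e INFLUXDB_HOST=localhost -e INFLUXDB_PORT=8086 \
 -e INFLUXDB_NAME=$INFLUX_DB -e INFLUXDB_USER=$INFLUX_USER -e INFLUXDB_PASS=$INFLUX_PASS \
 --link $INFLUX_CONTAINER:$INFLUX_CONTAINER --name grafana grafana/grafana"

echo "$GRAFANA_CMD"
```

Keeping the InfluxDB settings in variables like this makes it harder for the Grafana container's `INFLUXDB_*` values to drift out of sync with the cAdvisor flags shown earlier.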
And when you are working in command line, you would need a text editor that could work right inside the terminal. - -For that purpose, today we are going to make a list of best command line text editors for Linux. - -### [VIM][2] - -If you’re on Linux for quite some time, you must have heard about Vim. Vim is an extensively configurable, cross-platform and highly efficient text editor. - -Almost every Linux distribution comes with Vim pre-installed. It is extremely popular for its wide range of features. - -![](https://itsfoss.com/wp-content/uploads/2016/07/vim.png) ->Vim User Interface - -Vim can be quite agonizing for first-time users. I remember the first time I tried to edit a text file with Vim, I was completely puzzled. I couldn’t type a single letter on it and the funny part is, I couldn’t even figure out how to close this thing. If you are going to use Vim, you have to be determined for climbing up a very steep learning curve. - -But after you have gone through all that, combed through some documentations, remembered its commands and shortcuts you will find that the hassle was worth it. You can bend Vim to your will – customizing its interface however it seems fit to you, give your workflow a boost by using various user scripts, plugins and so on. Vim supports syntax highlighting, macro recording and action history. - -As in the official site, it is stated that, - ->**Vim: The power tool for everyone!** - -It is completely up to you how you will use it. You can just use it for simple text editing, or you can customize it to behave as a full-fledged IDE. - -### [GNU EMACS][3] - -GNU Emacs is undoubtedly one of the most powerful text editor out there. If you have heard about both Vim and Emacs, you should know that both of these editors have a very loyal fan-base and often they are very serious about their text editor of choice. 
And you can find lots of humor and stuff on the internet about it: - -Suggested Read [Download Linux Wallpapers That Are Also Cheat Sheets][4] - -![](https://itsfoss.com/wp-content/uploads/2016/07/vi-emacs-768x426.png) ->Vim vs Emacs - -Emacs is cross-platform and has both command-line and graphical user interface. It is also very rich with various features and, most importantly, extensible. - -![](https://itsfoss.com/wp-content/uploads/2016/07/emacs.png) ->Emacs User Interface - -Just as Vim, Emacs too comes with a steep learning curve. But once you master it, you can completely leverage its power. Emacs can handle just about any types of text files. The interface is customizable to suit your workflow. It supports macro recording and shortcuts. - -The unique power of Emacs is that it can be transformed into something completely different from a text editor. There is a large collection of modules that can transform the application for using in completely different scenarios, like – calendar, news reader, word processor etc. You can even play games in Emacs! - -### [NANO][5] - -When it comes to simplicity, Nano is the one. Unlike Vim or Emacs, the learning curve for nano is almost flat. - -If you want to simply create & edit a text file and get on with your life, look no further than Nano. - -![](https://itsfoss.com/wp-content/uploads/2016/07/nano.png) ->Nano User Interface - -The shortcuts available on Nano are displayed at the bottom of the user interface. Nano includes only the basic functions of a text editor. - -It is minimal and perfectly suitable for editing system & configuration files. For those who doesn’t need advanced features from a command-line text editor, Nano is the perfect match. - -### OTHERS - -There is one more editor I’d like to mention: - -[The Nice Editor (ne)][6]: The official site says, - ->If you have the resources and the patience to use emacs or the right mental twist to use vi then probably ne is not for you. 
- -Basically, ne features many advanced features like Vim or Emacs, including – scripting and macro recording. But it comes with a more intuitive control and not so steep learning curve. - -### WHAT DO YOU THINK? - -I know that if you are a seasoned Linux user, you’ll say these are the obvious candidates for the list of best command line text editors for Linux. Therefore, I would like to ask you, if there are some other command line text editors for Linux that you want to share with us? - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/command-line-text-editors-linux/?utm_source=newsletter&utm_medium=email&utm_campaign=ubuntu_forums_hacked_new_skype_for_linux_and_more_linux_stories - -作者:[Munif Tanjim][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/munif/ -[1]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ -[2]: http://www.vim.org/ -[3]: https://www.gnu.org/software/emacs/ -[4]: https://itsfoss.com/download-linux-wallpapers-cheat-sheets/ -[5]: http://www.nano-editor.org/ -[6]: http://ne.di.unimi.it/ diff --git a/translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md b/translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md new file mode 100644 index 0000000000..fdb8d34098 --- /dev/null +++ b/translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md @@ -0,0 +1,93 @@ + +Linux命令行下的优秀文本编辑软件 +========================================== + +![](https://itsfoss.com/wp-content/uploads/2016/07/Best-Command-Line-Text-Editors-for-Linux.jpg) + +文本编辑软件在任何操作系统上都是必备的软件。我们不缺乏在 Linux 上非常现代化的编辑软件,但是他们都是基于 GUI (图形界面)的编辑软件。 + +正如你所了解的,Linux 真正的魅力在于命令行。另外当你正在用命令行工作时,你可能需要一个可以在控制台窗口就可以运行的文本编辑器。 + +正因为这个目的,我们准备了一个基于 Linux 命令行的文本编辑器清单。 + +### [VIM][2] + +如果你已经使用 Linux 有一段时间,那么你肯定听到过 Vim 。Vim 是一个高可配、跨平台、高效率的文本编辑器。 
+ +几乎所有的 Linux 发行版本都已经内置了 Vim ,由于它的丰富特性已经变得非常流行的。 + +![](https://itsfoss.com/wp-content/uploads/2016/07/vim.png) +>Vim 用户窗口 + +Vim 可能会让第一次使用它的人感到非常痛苦。我记得我第一次尝试使用 Vim 编辑一个文本文件时,我是非常困惑的。我不能用 Vim 输入一个字母,更有趣的是,我甚至不知道该怎么关闭它。如果你准备使用 Vim ,你需要有决心跨过一个曲折的学习路线。 + +一旦你经历过那些,通过梳理一些文档,记住它的命令和快捷键,你会发现这段学习经历是非常值得的。你可以将 Vim 按照你的意愿进行改造--配置一个让你看起来舒服的界面,通过使用脚本或者插件等来提高工作效率。Vim 支持语句高亮,宏记录和操作记录。 + +在Vim官网上,它是这样介绍的: + +>**Vim: The power tool for everyone!** + +如何使用它完全取决于你。你可以仅仅使用它作为文本编辑器,或者你可以将它打造成一个完善的IDE( Integrated Development Environment:集成开发环境)。 + +### [GNU EMACS][3] + +GNU Emacs 毫无疑问是一个非常强大的文本编辑器。如果你听说过 Vim 和 Emacs ,你应该知道这两个编辑器都拥有非常忠诚的粉丝基础,并且他们对于文本编辑器的选择非常看重。你也可以在互联网上找到大量关于他们的段子: + +建议阅读 [Download Linux Wallpapers That Are Also Cheat Sheets][4] + +![](https://itsfoss.com/wp-content/uploads/2016/07/vi-emacs-768x426.png) +>Vim vs Emacs + +Emacs 是一个拥有图形界面和命令行界面并且跨平台的软件。它也拥有非常多的特性,更重要的是,它是支持扩展的。 + +![](https://itsfoss.com/wp-content/uploads/2016/07/emacs.png) +>Emacs 用户界面 + +像Vim一样,Emacs 也需要经历一个曲折的学习路线。但是一旦你掌握它,你就能知道它的强大。Emacs可以处理几乎所有类型文本文件。它的界面可以定制成适合你的工作流。它也支持宏记录和快捷键。 + +Emacs 独特的特性是它可以转换成和文本编辑器完全不同的的东西。这里有大量的模块集可是使它在不同的场景下成为不同的应用,例如-计算器,新闻阅读,文字处理器等。你甚至都可以在 Emacs 里面玩游戏。 + +### [NANO][5] + +如果说到简易方便的软件,Nano 就是一个。 不像 Vim 和 Emacs , nano 的学习曲线是平滑的。 + +如果在生活中你仅仅是想创建和编辑一个文本文件,Nano 估计是最适合你的了。 + +![](https://itsfoss.com/wp-content/uploads/2016/07/nano.png) +>Nano 用户界面 + +Nano 可用的快捷键都在用户界面的下方展示出来了。Nano 仅仅拥有最基础的文本编辑软件的功能。 + +它非常小巧并且非常适合编辑系统和配置文件。对于那些不需要命令行编辑器功能的人来说,Nano是完美配备。 + +### 其它 +这里还有一些我想要提及其它编辑器: + +[The Nice Editor (ne)][6]: 官网是这样介绍的: + +>如果你有相关的资源或者耐心来使用 Emacs 或者正确的心理准备来使用 Vim ,那么 ne 可能不适合你。 + +基本上 ne 拥有像 Vim 和 Emacs 一样多的先进功能,包括--脚本和宏记录。但是它有更为直观的操作方式和平滑的学习路线。 + + +### 你认为呢? + +我知道如果你是一个熟练的 Linux 用户,你可以会说上面列举的 Linux 最好的命令行编辑器清单上的候选者都是非常明显的。因此我想跟你说,如果你还知道其他的 Linux 命令行文本编辑器你是否愿意跟我们一同分享? 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/command-line-text-editors-linux/?utm_source=newsletter&utm_medium=email&utm_campaign=ubuntu_forums_hacked_new_skype_for_linux_and_more_linux_stories + +作者:[Munif Tanjim][a] +译者:[chenzhijun](https://github.com/chenzhijun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[2]: http://www.vim.org/ +[3]: https://www.gnu.org/software/emacs/ +[4]: https://itsfoss.com/download-linux-wallpapers-cheat-sheets/ +[5]: http://www.nano-editor.org/ +[6]: http://ne.di.unimi.it/ From 224a9f96ec5b31cafc926e8686f6f48bc4354c4f Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Tue, 9 Aug 2016 16:02:14 +0800 Subject: [PATCH 379/471] translated/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md --- ...can learn from Arduino and Raspberry Pi.md | 53 ------------------- ...can learn from Arduino and Raspberry Pi.md | 51 ++++++++++++++++++ 2 files changed, 51 insertions(+), 53 deletions(-) delete mode 100644 sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md create mode 100644 translated/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md diff --git a/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md b/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md deleted file mode 100644 index ff81871630..0000000000 --- a/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md +++ /dev/null @@ -1,53 +0,0 @@ -MikeCoder Translating... 
- -What containers and unikernels can learn from Arduino and Raspberry Pi -========================================================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/bus-containers.png?itok=vM7_7vs0) - - -Just the other day, I was speaking with a friend who is a mechanical engineer. He works on computer assisted braking systems for semi trucks and mentioned that his company has [Arduinos][1] all over the office. The idea is to encourage people to quickly experiment with new ideas. He also mentioned that Arduinos are more expensive than printed circuits. I was surprised by his comment about price, because coming from the software side of things, my perceptions of Arduinos was that they cost less than designing a specialized circuit. - -I had always viewed [Arduinos][2] and [Raspberry Pi][3] as these cool, little, specialized devices that can be used to make all kinds of fun gadgets. I came from the software side of the world and have always considered Linux on x86 and x86-64 "general purpose." The truth is, Arduinos are not specialized. In fact, they are very general purpose. They are fairly small, fairly cheap, and extremely flexible—that's why they caught on like wildfire. They have all kinds of I/O ports and expansion cards. They allow a maker to go out and build something cool really quickly. They even allow companies to build new products quickly. - -The unit price for an Arduino is much higher than a printed circuit, but time to a minimum viable idea is much lower. With a printed circuit, the unit price can be driven much lower but the upfront capital investment is much higher. So, long story short, the answer is—it depends. - -### Unikernels, rump kernels, and container hosts - -Enter unikernels, rump kernels, and minimal Linux distributions—these operating systems are purpose-built for specific use cases. These specialized operating systems are kind of like printed circuits. 
They require some up-front investment in planning and design to utilize, but could provide a great performance increase when deploying a specific workload at scale. - -Minimal operating systems such as Red Hat Enterprise Linux Atomic or CoreOS are purpose-built to run containers. They are small, quick, easily configured at boot time, and run containers quite well. The downside is that it requires extra engineering to add third-party extensions such as monitoring agents or tools for virtualization. Some side-loaded tooling needs redesigned as super-privileged containers. This extra engineering could be worth it if you are building a big enough container environment, but might not be necessary to just try out containers. - -Containers provide the ability to run standard workloads (things built on [glibc][4], etc.). The advantage is that the workload artifact (Docker image) can be built and tested on your desktop and deployed in production on completely different hardware or in the cloud with confidence that it will run with the same characteristics. In the production environment, container hosts are still configured by the operations teams, but the application is controlled by the developer. This is a sort of a best of both worlds. - -Unikernels and rump kernels are also purpose-built, but go a step further. The entire operating system is configured at build time by the developer or architect. This has benefits and challenges. - -One benefit is that the developer can control a lot about how the workload will run. Theoretically, a developer could try out [different TCP stacks][5] for different performance characteristics and choose the best one. The developer can configure the IP address ahead of time or have the system configure itself at boot with DHCP. The developer can also cut out anything that is not necessary for their application. There is also the promise of increased performance because of less [context switching][6]. 
- -There are also challenges with unikernels. Currently, there is a lot of tooling missing. It's much like a printed circuit world right now. A developer has to invest a lot of time and energy discerning if all of the right libraries exist, or they have to change the way their application works. There may also be challenges with how the "embedded" operating system is configured at runtime. Finally, every time a major change is made to the OS, it requires [going back to the developer][7] to change it. This is not a clean separation between development and operations, so I envision some organizational changes being necessary to truly adopt this model. - -### Conclusion - -There is a lot of interesting buzz around specialized container hosts, rump kernels, and unikernels because they hold the potential to revolutionize certain workloads (embedded, cloud, etc.). Keep your eye on this exciting, fast moving space, but cautiously. - -Currently, unikernels seem quite similar to building printed circuits. They require a lot of upfront investment to utilize and are very specialized, providing benefits for certain workloads. In the meantime containers are quite interesting even for conventional workloads and don't require as much investment. Typically an operations team should be able to port an application to containers, whereas it takes real re-engineering to port an application to unikernels and the industry is still not quite sure what workloads can be ported to unikernels. - -Here's to an exciting future of containers, rump kernels, and unikernels! 
- --------------------------------------- -via: https://opensource.com/business/16/5/containers-unikernels-learn-arduino-raspberry-pi - -作者:[Scott McCarty][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/fatherlinux -[1]: https://opensource.com/resources/what-arduino -[2]: https://opensource.com/life/16/4/arduino-day-3-projects -[3]: https://opensource.com/resources/what-raspberry-pi -[4]: https://en.wikipedia.org/wiki/GNU_C_Library -[5]: http://www.eetasia.com/ARTICLES/2001JUN/2001JUN18_NTEK_CT_AN5.PDF -[6]: https://en.wikipedia.org/wiki/Context_switch -[7]: http://developers.redhat.com/blog/2016/05/18/3-reasons-i-should-build-my-containerized-applications-on-rhel-and-openshift/ diff --git a/translated/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md b/translated/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md new file mode 100644 index 0000000000..958cf7ee67 --- /dev/null +++ b/translated/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md @@ -0,0 +1,51 @@ +容器和 Unikernel 能从 Raspberry Pi(树莓派)和 Arduino 学到什么 +========================================================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/bus-containers.png?itok=vM7_7vs0) + + +某一天,我和我的一个机械工程师朋友聊天的时候。 他最近在做一个给半挂卡车的电子辅助刹车系统,他提到他们公司的办公室里都是 [Arduinos][1]。这主要是方便员工可以快速的对新的想法进行实验。他也提到了,Arduinos 其实比自己画电路板更加昂贵。对此,我感到非常震惊。因为我从软件行业得到的印象是 Arduinos 比定制电路板更加便宜。 + +我常常把 [Arduinos][2] 和 [Raspberry Pi][3] 看做是可以制作非常有趣设备的小型,Cool,特别的组件。我主要是从事软件行业,并且常常想让 Linux 在 x86 和 x86-64 设备上都可以执行一致。真相就是,Arduinos 并不特殊。实际上,他们非常通用。他们相当的小,便宜,但是非常得灵活。这就是为什么他们向野火一样流行起来。他们有所有种类的输入输出设备,和扩展卡。他们能让制作者快速的构建非常 Cool 的设备。他们甚至可以让公司可以快速的开发产品。 + +一整套 Arduino 
的价格比批量生产的电路板高了很多,但是,看不见的时间成本却低了很多。当电路板大规模生产的时候,价格可以控制得很低,但是,之前的研发费用却高了很多。所以,长话短说,答案就是——视情况而定。
+ +-------------------------------------- +via: https://opensource.com/business/16/5/containers-unikernels-learn-arduino-raspberry-pi + +作者:[Scott McCarty][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux +[1]: https://opensource.com/resources/what-arduino +[2]: https://opensource.com/life/16/4/arduino-day-3-projects +[3]: https://opensource.com/resources/what-raspberry-pi +[4]: https://en.wikipedia.org/wiki/GNU_C_Library +[5]: http://www.eetasia.com/ARTICLES/2001JUN/2001JUN18_NTEK_CT_AN5.PDF +[6]: https://en.wikipedia.org/wiki/Context_switch +[7]: http://developers.redhat.com/blog/2016/05/18/3-reasons-i-should-build-my-containerized-applications-on-rhel-and-openshift/ From ea8b62122676037e9990aa97d1e37ae86a35132c Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 9 Aug 2016 21:41:17 +0800 Subject: [PATCH 380/471] =?UTF-8?q?20160809-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...729 Best Password Manager all platform.md | 459 ++++++++++++++++++ 1 file changed, 459 insertions(+) create mode 100644 sources/tech/20160729 Best Password Manager all platform.md diff --git a/sources/tech/20160729 Best Password Manager all platform.md b/sources/tech/20160729 Best Password Manager all platform.md new file mode 100644 index 0000000000..09c62bf93b --- /dev/null +++ b/sources/tech/20160729 Best Password Manager all platform.md @@ -0,0 +1,459 @@ +Best Password Manager — For Windows, Linux, Mac, Android, iOS and Enterprise +============================== + +![](https://4.bp.blogspot.com/-uMOdpnxBV9w/V5x4YW54SbI/AAAAAAAAo_E/o-gUmO46UB0Ji2IMzd_xdY5pVsCcJnFwQCLcB/s1600/free-best-password-manager-2016.png) + +When it comes to safeguarding your Internet security, installing an antivirus software or running a [Secure Linux OS][1] on 
your system does not mean you are safe enough from all kinds of cyber-threats. + +Today majority of Internet users are vulnerable to cyber attacks, not because they aren't using any best antivirus software or other security measures, but because they are using [weak passwords][2] to secure their online accounts. + +Passwords are your last lines of defense against online threats. Just look back to some recent data breaches and cyber attacks, including high-profile [data breach at OPM][3] (United States Office of Personnel Management) and the extra-marital affair site [Ashley Madison][4], that led to the exposure of hundreds of millions of records online. + +Although you can not control data breaches, it is still important to create strong passwords that can withstand dictionary and [brute-force attacks][5]. + +You see, the longer and more complex your password is, the much harder it is crack. + +### How to Stay Secure Online? + +Security researchers have always advised online users to create long, complex and different passwords for their various online accounts. So, if one site is breached, your other accounts on other websites are secure enough from being hacked. + +Ideally, your strong password should be at least 16 characters long, should contain a combination of digits, symbols, uppercase letters and lowercase letters and most importantly the most secure password is one you don't even know. + +The password should be free of repetition and not contain any dictionary word, pronoun, your username or ID, and any other predefined letter or number sequences. + +I know this is a real pain to memorize such complex password strings and unless we are human supercomputers, remembering different passwords for several online accounts is not an easy task. + +The issue is that today people subscribe to a lot of online sites and services, and it's usually hard to create and remember different passwords for every single account. 
+ +But, Luckily to make this whole process easy, there's a growing market for password managers for PCs and phones that can significantly reduce your password memorizing problem, along with the cure for your bad habit of setting weak passwords. + +### What is Password Manager? + +![](https://1.bp.blogspot.com/-LY7pI45tMq0/V5r_XV083RI/AAAAAAAAo6M/MivILg0_4Vs7UgLKZJqM5vhvYujQCCcpgCLcB/s1600/best-password-manager-software.png) + +Password Manager software has come a very long way in the past few years and is an excellent system that both allows you to create complex passwords for different sites and remember them. + +A password manager is just software that creates, stores and organizes all your passwords for your computers, websites, applications and networks. + +Password managers that generate passwords and double as a form filler are also available in the market, which has the ability to enter your username and password automatically into login forms on websites. + +So, if you want super secure passwords for your multiple online accounts, but you do not want to memorize them all, Password Manager is the way to go. + +### How does a Password Manager work? + +Typically, Password Manager software works by generating long, complex, and, most importantly, unique password strings for you, and then stores them in encrypted form to protect the confidential data from hackers with physical access to your PC or mobile device. + +The encrypted file is accessible only through a master password. So, all you need to do is remember just one master password to open your password manager or vault and unlock all your other passwords. + +? + +However, you need to make sure your master password is extra-secure of at least 16 characters long. + +### Which is the Best Password Manager? How to Choose? + +I've long recommended password managers, but most of our readers always ask: + +- Which password manager is best? +- Which password manager is the most secure? Help! 
+ +So, today I'm introducing you some of the best Password Manager currently available in the market for Windows, Linux, Mac, Android, iOS and Enterprise. + +Before choosing a good password manager for your devices, you should check these following features: + +- Cross-Platform Application +- Works with zero-knowledge model +- Offers two-factor authentication (multi-factor authentication) + +Note: Once adopted, start relying on your password manager because if you are still using weak passwords for your important online accounts, nobody can save you from malicious hackers. + +### Best Password Managers for Windows + +![](https://2.bp.blogspot.com/-8MUEI5RctdA/V5sDM_oCoaI/AAAAAAAAo60/LX4AktoS_f0JeYDORSqmDZMfmsbOa6QnACLcB/s1600/Best-Password-Manager-for-Windows.png) + +Windows users are most vulnerable to cyber attacks because Windows operating system has always been the favorite target of hackers. So, it is important for Windows users to make use of a good password manager. + +Some other best password manager for windows: Keeper, Password Safe, LockCrypt, 1Password, and Dashlane. + +- 1. Keeper Password Manager (Cross-Platform) + +![](https://1.bp.blogspot.com/-9ISKGyTAX9U/V5xec18I21I/AAAAAAAAo8E/i8IqZXXpDwMGe8wnI6Adj3qduR_Qm5o3ACLcB/s1600/keeper-Password-Manager-for-mac-os-x.png) + +Keeper is a secure, easy-to-use and robust password manager for your Windows, Mac, iPhone, iPad, and iPod devices. + +Using military-grade 256-bit AES encryption, Keeper password manager keeps your data safe from prying eyes. + +It has a secure digital vault for protecting and managing your passwords, as well as other secret information. Keeper password manager application supports Two-factor authentication and available for every major operating system. + +There is also an important security feature, called Self-destruct, which if enabled, will delete all records from your device if the incorrect master password is entered more than five times incorrectly. 
+ +But you don't need worry, as this action will not delete the backup records stored on Keeper's Cloud Security Vault. + +Download Keeper Password Manager: [Windows, Linux and Mac][6] | [iOS][7] | [Android][8] | [Kindle][9] + +- 2. Dashlane Password Manager (Cross-Platform) + +![](https://3.bp.blogspot.com/-2BuFpcAe9K8/V5xjugOWPuI/AAAAAAAAo9A/wpooAjcH74EzxfNJwrFu-Mcn0IkwiRGjACLcB/s1600/Dashlane-Password-Manager-for-Android.png) + +DashLane Password Manager software is a little newer, but it offers great features for almost every platform. + +DashLane password manager works by encrypting your personal info and accounts' passwords with AES-256 encryption on a local machine, and then syncs your details with its online server, so that you can access your accounts database from anywhere. + +The best part of DashLane is that it has an automatic password changer that can change your accounts' passwords for you without having to deal with it yourself. + +DashLane Password Manager app for Android gives you the secure password management tools right to your Android phone: your password vault and form auto-filler for online stores and other sites. + +DashLane Password Manager app for Android is completely free to use on a single device and for accessing multiple devices, you can buy a premium version of the app. + +Download DashLane Password Manager: [Windows][10] and [Mac][11] | [iOS][12] | [Android][13] + +- 3. LastPass Password Manager (Cross-Platform) + +![](https://3.bp.blogspot.com/--o_hWTgXh2M/V5sAjw7FlYI/AAAAAAAAo6U/Ajmvt0rgRAQE3M_YeYurpbsUoLBN8OTLwCLcB/s1600/LastPass-Password-Manager-for-Windows.png) + +LastPass is one of the best Password Manager for Windows users, though it comes with the extension, mobile app, and even desktop app support for all the browsers and operating systems. 
+ +LastPass is an incredibly powerful cloud-based password manager software that encrypts your personal info and accounts' passwords with AES-256 bit encryption and even offers a variety of two-factor authentication options in order to ensure no one else can log into your password vault. + +LastPass Password Manager comes for free as well as a premium with a fingerprint reader support. + +Download LastPass Password Manager: [Windows, Mac, and Linux][14] | [iOS][15] | [Android][16] + +### Best Password Manager for Mac OS X + +![](https://2.bp.blogspot.com/-lEim3E-0wcg/V5sFhOVYK7I/AAAAAAAAo7A/z6Lp8_ULdJAD8ErZ1a-FevXPO8nR3JKNACLcB/s1600/Best-Password-Manager-for-mac-os-x.png) + +People often say that Mac computers are more secure than Windows and that "Macs don't get viruses," but it is not entirely correct. + +As proof, you can read our previous articles on cyber attacks against Mac and iOs users, and then decide yourself that you need a password manager or not. + +Some other best password manager for Mac OS X: 1Password, Dashlane, LastPass, OneSafe, PwSafe. + +- 1. LogMeOnce Password Manager (Cross-Platform) + +![](https://4.bp.blogspot.com/-fl64fXK2bdA/V5sHL215j_I/AAAAAAAAo7M/fbn4EsrQMkU3tWWfiAsWlTgKKXb0oEzlwCLcB/s1600/LogMeOnce-Password-Manager-for-Mac-os-x.png) + +LogMeOnce Password Management Suite is one of the best password manager for Mac OS X, as well as syncs your passwords across Windows, iOS, and Android devices. + +LogMeOnce is one of the best Premium and Enterprise Password Management Software that offers a wide variety of features and options, including Mugshot feature. + +If your phone is ever stolen, LogMeOnce Mugshot feature tracks the location of the thief and also secretly takes a photo of the intruder when trying to gain access to your account without permission. 
+ +LogmeOnce protects your passwords with military-grade AES-256 encryption technology and offers Two-factor authentication to ensure that even with the master password in hand, a thief hacks your account. + +Download LogMeOnce Password Manager: [Windows and Mac][17] | [iOS][18] | [Android][19] + +- 2. KeePass Password Manager (Cross-Platform) + +![](https://4.bp.blogspot.com/-XWwdG1z9sDw/V5sA7azAy6I/AAAAAAAAo6c/dkkfMRuxDoE_gi5OMRvDOUFq15P5NRO6QCLcB/s1600/Keepass-Password-Manager-for-Windows.png) + +Although LastPass is one of the best password manager, some people are not comfortable with a cloud-based password manager. + +KeePass is a popular password manager application for Windows, but there are browser extensions and mobile apps for KeePass as well. + +KeePass password manager for Windows stores your accounts' passwords on your PC, so you remain in control of them, and also on Dropbox, so you can access it using multiple devices. + +KeePass encrypts your passwords and login info using the most secure encryption algorithms currently known: AES 256-bit encryption by default, or optional, Twofish 256-bit encryption. + +KeePass is not just free, but it is also open source, which means its code and integrity can be examined by anyone, adding a degree of confidence. + +Download KeePass Password Manager: [Windows and Linux][20] | [Mac][21] | [iOS][22] | [Android][23] +3. Apple iCloud Keychain + +![](https://4.bp.blogspot.com/-vwY_dmsKIBg/V5xfhIZGxxI/AAAAAAAAo8M/OjPrBsp9GysF-bK3oqHtW74hKNYO61W9QCLcB/s1600/Apple-iCloud-Keychain-Security.png) + +Apple introduced the iCloud Keychain password management system as a convenient way to store and automatically sync all your login credentials, Wi-Fi passwords, and credit card numbers securely across your approved Apple devices, including Mac OS X, iPhone, and iPad. 
+
+Your secret data in Keychain is encrypted with 256-bit AES (Advanced Encryption Standard) and secured with elliptic curve asymmetric cryptography and key wrapping.
+
+Also, iCloud Keychain generates new, unique and strong passwords for you to use to protect your computer and accounts.
+
+Major limitation: Keychain doesn't work with browsers other than Apple Safari.
+
+Also Read: [How to Setup iCloud Keychain][24]?
+
+### Best Password Manager for Linux
+
+![](https://1.bp.blogspot.com/-2zDAqYEQTQA/V5xgbo_OcQI/AAAAAAAAo8Y/hWzGLW7R4vse3QnpCM5-qmSHLtzK5M1VACLcB/s1600/best-Password-Manager-for-linux.png)
+
+No doubt, some Linux distributions are among the safest operating systems in existence, but as I said above, adopting Linux doesn't completely protect your online accounts from hackers.
+
+There are a number of cross-platform password managers available that sync all your accounts' passwords across all your devices, such as LastPass, KeePass, and RoboForm.
+
+Below I have listed two popular and secure open source password managers for Linux:
+
+- 1. SpiderOak Encryptr Password Manager (Cross-Platform)
+
+![](https://4.bp.blogspot.com/-SZkmP7dpXZM/V5xiKeuT4KI/AAAAAAAAo8s/QhvfBz3OX78IUit_HLym0sdKxlz99qFfgCLcB/s1600/SpiderOak-Encryptr-Password-Manager-for-linux.png)
+
+SpiderOak's Encryptr Password Manager is a zero-knowledge cloud-based password manager that encrypts and protects your passwords using the Crypton JavaScript framework, developed by SpiderOak and recommended by Edward Snowden.
+
+It is a cross-platform, open-source and free password manager that uses end-to-end encryption and works perfectly on Ubuntu, Debian, Linux Mint, and other Linux distributions.
+
+The Encryptr Password Manager application itself is very simple and comes with some basic features.
+
+Encryptr software lets you encrypt three types of content: passwords, credit card numbers and, in general, any text or keys.
+
+Download Encryptr Password Manager: [Windows, Linux and Mac][25] | [iOS][26] | [Android][27]
+
+- 2. EnPass Password Manager (Cross-Platform)
+
+![](https://4.bp.blogspot.com/-_IF81t9rL7U/V5xhBIPUSHI/AAAAAAAAo8c/6-kbLXTl2G0EESTH4sP9KvZLzTFlCyypACLcB/s1600/EnPass-Password-Manager-for-Linux.png)
+
+Enpass is an excellent security-oriented Linux password manager that works perfectly on other platforms too. Enpass lets you back up and restore stored passwords using third-party cloud services, including Google Drive, Dropbox, OneDrive, or OwnCloud.
+
+It provides a high level of security, protecting your data with a master password and encrypting it with 256-bit AES using the open-source encryption engine SQLCipher, before uploading the backup onto the cloud.
+
+"We do not host your Enpass data on our servers. So, no signup is required for us. Your data is only stored on your device," EnPass says.
+
+Additionally, by default, Enpass locks itself every minute when you leave your computer unattended and clears clipboard memory every 30 seconds to prevent your passwords from being stolen by any other malicious software.
+
+Download EnPass Password Manager: [Windows][28], [Linux][29] | [Mac][30] | [iOS][31] | [Android][32]
+
+- 3. RoboForm Password Manager (Cross-Platform)
+
+![](https://3.bp.blogspot.com/-g8Qf9V1EdqU/V5sBkDk614I/AAAAAAAAo6k/5ZTr9LyosU82F16GxajewvU4sWYyJFq5gCLcB/s1600/Roboform-Password-Manager-for-Windows.png)
+
+You can easily find good password managers for Windows OS, but RoboForm Free Password Manager software goes a step further.
+
+Besides creating complex passwords and remembering them for you, RoboForm also offers a smart form filler feature to save your time while browsing the Web.
+
+RoboForm encrypts your login info and accounts' passwords using military-grade AES encryption with a key that is obtained from your RoboForm Master Password.
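The pattern RoboForm describes, deriving an AES key from the user's master password, is typically implemented with a key-derivation function such as PBKDF2. Here is a minimal, hypothetical sketch using only Python's standard library; the salt handling and the iteration count are illustrative assumptions, not RoboForm's actual parameters:

```python
import hashlib
import os

def derive_key(master_password, salt, iterations=200_000):
    # Stretch the master password into a 32-byte (256-bit) key,
    # suitable for AES-256, using PBKDF2-HMAC-SHA256.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations, dklen=32
    )

salt = os.urandom(16)  # random per-vault salt, stored alongside the encrypted data
key = derive_key("correct horse battery staple", salt)
print(len(key) * 8)    # 256 -- the derived key length in bits
```

The iteration count exists purely to slow down brute-force guessing of the master password; real products tune it (and the KDF itself) to their own threat model.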
+
+RoboForm is available for browsers like Internet Explorer, Chrome, and Firefox as well as mobile platforms, with apps available for iOS, Android, and Windows Phone.
+
+Download RoboForm Password Manager: [Windows and Mac][33] | [Linux][34] | [iOS][35] | [Android][36]
+
+### Best Password Manager for Android
+
+![](https://1.bp.blogspot.com/-1PXI2KDrDEU/V5xigbW8lgI/AAAAAAAAo8w/Zv5hrdOcbSU7LA0kYrNpvJ1rxjg7EoOewCLcB/s1600/best-Password-Manager-for-android.png)
+
+More than half of the world's population today is using Android devices, so it becomes necessary for Android users to secure their online accounts from hackers who are always seeking access to these devices.
+
+Some of the best Password Manager apps for Android include 1Password, Keeper, DashLane, EnPass, OneSafe, mSecure and SplashID Safe.
+
+- 1. 1Password Password Manager (Cross-Platform)
+
+![](https://4.bp.blogspot.com/--w3s9SoWgYA/V5xjJwVRUTI/AAAAAAAAo84/BSucybvPdtUKYYcRtDbn-_2cOz-mfMA9gCLcB/s1600/1password-Password-Manager-for-android.png)
+
+1Password Password Manager app for Android is one of the best apps for managing all your accounts' passwords.
+
+1Password password manager app creates strong, unique and secure passwords for every account, remembers them all for you, and logs you in with just a single tap.
+
+1Password password manager software secures your logins and passwords with AES-256 bit encryption, and syncs them to all of your devices via your Dropbox account, or stores them locally for any other application to sync if you choose.
+
+Recently, the Android version of the 1Password password manager app added fingerprint support for unlocking all of your passwords instead of using your master password.
+
+Download 1Password Password Manager: [Windows and Mac][37] | [iOS][38] | [Android][39]
+
+- 2. mSecure Password Manager (Cross-Platform)
+
+![](https://4.bp.blogspot.com/-nvjjS2dWfPc/V5xkEdAOYvI/AAAAAAAAo9I/EDGfA5hzacIq46gWG-6BD2UPHwQAHD-pgCLcB/s1600/mSecure-password-manager-for-android.png)
+
+Like other popular password manager solutions, mSecure Password Manager for Android automatically generates secure passwords for you and stores them using 256-bit Blowfish encryption.
+
+The catchy and unique feature that mSecure Password Manager provides is its ability to self-destruct the database after 5, 10, or 20 failed attempts (as per your preference) to input the right password.
+
+You can also sync all of your devices with Dropbox, or via a private Wi-Fi network. In either case, all your data is transmitted safely and securely between devices regardless of the security of your cloud account.
+
+Download mSecure Password Manager software: [Windows and Mac][40] | [iOS][41] | [Android][42]
+
+### Best Password Manager for iOS
+
+![](https://4.bp.blogspot.com/-SOXYw_9mFq0/V5xk6Kl8-DI/AAAAAAAAo9Q/AMbEl_t3HjAJ4ZX7gLVoa33z-myE4bK5wCLcB/s1600/best-Password-Manager-for-ios-iphone.png)
+
+As I said, Apple's iOS is also prone to cyber attacks, so you can use some of the best password manager apps for iOS to secure your online accounts, including Keeper, OneSafe, Enpass, mSecure, LastPass, RoboForm, SplashID Safe and LoginBox Pro.
+
+- 1. OneSafe Password Manager (Cross-Platform)
+
+![](https://2.bp.blogspot.com/-HPEJpqeOs00/V5xlSh7OUxI/AAAAAAAAo9Y/d5qkOy3BieMSxjGrnrnH4fvzUzAzDqhCgCLcB/s1600/onesafe-password-manager-for-ios.png)
+
+OneSafe is one of the best Password Manager apps for iOS devices that lets you store not only your accounts' passwords but also sensitive documents, credit card details, photos, and more.
+
+OneSafe password manager app for iOS encrypts your data behind a master password, with AES-256 encryption — the highest level available on mobile — and Touch ID. There is also an option for additional passwords for given folders.
+
+OneSafe password manager for iOS also offers an in-app browser that supports autofill of logins, so that you don't need to enter your login details every time.
+
+Besides this, OneSafe also provides advanced security for your accounts' passwords with features like auto-lock, intrusion detection, self-destruct mode, decoy safe and double protection.
+
+Download OneSafe Password Manager: [iOS][43] | [Mac][44] | [Android][45] | [Windows][46]
+
+- 2. SplashID Safe Password Manager (Cross-Platform)
+
+![](https://1.bp.blogspot.com/-FcNub2p-QNE/V5xmDW7QXvI/AAAAAAAAo9o/23VuGUAMCYYS64kKlUqBcfx3JIfBr5gTgCLcB/s1600/SplashID-Safe-password-manager-for-ios.png)
+
+SplashID Safe is one of the oldest and best password manager tools for iOS that allows users to securely store their login data and other sensitive information in an encrypted record.
+
+All your information, including website logins, credit card and social security data, photos and file attachments, is protected with 256-bit encryption.
+
+SplashID Safe Password Manager app for iOS also provides a web autofill option, meaning you will not have to bother copy-pasting your passwords at login.
+
+The free version of the SplashID Safe app comes with basic record storage functionality, though you can opt for premium subscriptions that provide cross-device syncing among other premium features.
+
+Download SplashID Safe Password Manager: [Windows and Mac][47] | [iOS][48] | [Android][49]
+
+- 3. LoginBox Pro Password Manager
+
+![](https://3.bp.blogspot.com/-4GzhwZFXDHQ/V5xogkDk49I/AAAAAAAAo90/69rmVdKD-VUG0kHJXIqE2x-mVlWZEDrYwCLcB/s1600/LoginBox-Pro-Password-Manager-for-ios.png)
+
+LoginBox Pro is another great password manager app for iOS devices. The app provides a single tap login to any website you visit, making it one of the safest and fastest ways to sign in to password-protected internet sites.
+
+LoginBox Password Manager app for iOS combines a password manager and a browser.
+
+From the moment you download it, all your login actions, including entering information, tapping buttons, checking boxes, or answering security questions, are automatically completed by the LoginBox Password Manager app.
+
+For security, the LoginBox Password Manager app uses hardware-accelerated AES encryption and a passcode to encrypt your data and save it on your device itself.
+
+Download LoginBox Password Manager: [iOS][50] | [Android][51]
+
+### Best Online Password Managers
+
+Using an online password manager tool is the easiest way to keep your personal and private information safe and secure from hackers and people with malicious intent.
+
+Here I have listed some of the best online password managers that you can rely on to keep yourself safe online:
+
+- 1. Google Online Password Manager
+
+![](https://2.bp.blogspot.com/-HCSzj5tKgwY/V5xqVjjtfgI/AAAAAAAAo-A/OYcgv-S5wmQlAskF1jrEGQAy98ogMnXTgCLcB/s1600/google-online-password-manager.png)
+
+Did you know Google has its own homebrew dedicated password manager?
+
+Google Chrome has a built-in password manager tool that offers you an option to save your password whenever you sign in to a website or web service using Chrome.
+
+All of your stored accounts' passwords are synced with your Google Account, making them available across all of your devices using the same Google Account.
+
+Chrome password manager lets you manage all your accounts' passwords from the Web.
+
+So, if you prefer using a different browser, like Microsoft Edge on Windows 10 or Safari on iPhone, just visit [passwords.google.com][52], and you'll see a list of all the passwords you have saved with Chrome. Google's two-factor authentication protects this list.
+
+- 2. Clipperz Online Password Manager
+
+![](https://2.bp.blogspot.com/-gs8b_N_k6CA/V5xrvzbUIKI/AAAAAAAAo-M/vsTXHZNErkQu6g8v9V1R2FxLkdppZq_GACLcB/s1600/Clipperz-Online-Password-Manager.png)
+
+Clipperz is a free, cross-platform online password manager that does not require you to download any software. The Clipperz online password manager uses a bookmarklet or sidebar to create and use direct logins.
+
+Clipperz also offers an offline password manager version of its software that allows you to download your passwords to an [encrypted disk][53] or a USB drive so you can take them with you while traveling and access your accounts' passwords when you are offline.
+
+Other features of the Clipperz online password manager include a password strength indicator, application locking, SSL secure connections, one-time passwords and a password generator.
+
+The Clipperz online password manager can work on any computer that runs a browser with JavaScript enabled.
+
+- 3. Passpack Online Password Manager
+
+![](https://4.bp.blogspot.com/-ng91nPnzbWI/V5xsarl2mqI/AAAAAAAAo-Q/zJlFK-63vugeoyymDL26c5mPiWNsGQjuACLcB/s1600/Passpack-Free-Online-Password-Manager.png)
+
+Passpack is an excellent online password manager with a competitive collection of features that creates, stores and manages passwords for your different online accounts.
+
+The PassPack online password manager also allows you to share your passwords safely with your family or coworkers for managing multiple projects, team members, clients, and employees easily.
+
+Your usernames and passwords for different accounts are encrypted with AES-256 encryption on PassPack's servers, so that even hackers with access to its servers cannot read your login information.
+
+Download the PassPack online password manager toolbar to your web browser and navigate the web normally. Whenever you log into any password-protected site, PassPack saves your login data so that you do not have to save your username and password manually on its site.
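Several of the managers covered here (Clipperz, 1Password, mSecure, RoboForm) advertise a built-in password generator. The essential requirement is that every character comes from a cryptographically secure random source rather than a general-purpose PRNG. A minimal sketch in Python, purely illustrative and not any vendor's actual implementation:

```python
import secrets
import string

def generate_password(length=16):
    # The secrets module draws from the OS CSPRNG, unlike the
    # general-purpose `random` module, so it is safe for credentials.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password on every run
```

A 16-character password drawn from this roughly 94-symbol alphabet gives on the order of 100 bits of entropy, which is far beyond anything a human would memorize, and exactly why a manager that remembers it for you is useful.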
+
+### Best Enterprise Password Manager
+
+Over the course of the last 12 months, we've seen some of the biggest data breaches in the history of the Internet, and year-over-year the growth is heating up.
+
+According to statistics, a majority of employees don't even know how to protect themselves online, which puts their company's business at risk.
+
+To keep password sharing secure within an organization, there exist some password management tools specially designed for enterprise use, such as Vaultier, CommonKey, Meldium, PassWork, and Zoho Vault.
+
+- 1. Meldium Enterprise Password Manager Software
+
+![](https://3.bp.blogspot.com/-3rKr3KUpuiQ/V5xs8JR7pVI/AAAAAAAAo-c/VF1tmKbwPzoJmNvA3Ym69CizG7_VqM6ywCLcB/s1600/Meldium-Enterprise-Password-Manager.png)
+
+LogMeIn's Meldium password management tool comes with a one-click single sign-on solution that helps businesses access web apps securely and quickly.
+
+It automatically logs users into apps and websites without typing usernames and passwords, and also tracks password usage within your organization.
+
+Meldium password manager is perfect for sharing accounts within your team without sharing the actual passwords, which helps organizations protect themselves from phishing attacks.
+
+- 2. Zoho Vault Password Management Software
+
+![](https://2.bp.blogspot.com/-J-N_1wOYxmI/V5xtrz42QWI/AAAAAAAAo-o/QF4n4QAF7ZMBd7uIRdjM6Hdd1MHwsXWQACLcB/s1600/zoho-vault--Enterprise-Password-Manager.png)
+
+Zoho Vault is one of the best password managers for enterprise users, helping your team share passwords and other sensitive information fast and securely while monitoring each user's usage.
+
+All your team members need to download is the Zoho browser extension. Zoho Vault password manager will automatically fill passwords from your team's shared vault.
+
+Zoho Vault also provides features that let you monitor your team's password usage and security level so that you can know who is using which login.
+
+The Zoho Vault enterprise-level package even alerts you whenever a password is changed or accessed.
+
+### For Extra Security, Use 2-Factor Authentication
+
+![](https://4.bp.blogspot.com/-jDnJBDoibtQ/V5xuHVHukRI/AAAAAAAAo-w/1Erjgk-IvKs__TXwYDz-8Groz9hWEElZgCLcB/s1600/two-factor-authentication-password-security.png)
+
+No matter how strong your password is, there still remains a possibility that hackers will find some way or another to hack into your account.
+
+Two-factor authentication is designed to fight this issue. Instead of just one password, it requires you to enter a second passcode, which is sent either to your mobile number via SMS or to your email address via email.
+
+So, I recommend you enable two-factor authentication now, along with using password manager software, to secure your online accounts and sensitive information from hackers.
+
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://thehackernews.com/2016/07/best-password-manager.html
+
+作者:[Swati Khandelwal][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://thehackernews.com/2016/07/best-password-manager.html#author-info
+
+[1]: http://thehackernews.com/2016/03/subgraph-secure-operating-system.html
+[2]: http://thehackernews.com/2016/01/password-security-manager.html
+[3]: http://thehackernews.com/2015/09/opm-hack-fingerprint.html
+[4]: http://thehackernews.com/2015/08/ashley-madison-accounts-leaked-online.html
+[5]: http://thehackernews.com/2013/05/cracking-16-character-strong-passwords.html
+[6]: https://keepersecurity.com/download.html
+[7]: https://itunes.apple.com/us/app/keeper-password-manager-digital/id287170072?mt=8
+[8]: https://play.google.com/store/apps/details?id=com.callpod.android_apps.keeper
+[9]: http://www.amazon.com/gp/mas/dl/android?p=com.callpod.android_apps.keeper
+[10]: 
https://www.dashlane.com/download +[11]: https://www.dashlane.com/passwordmanager/mac-password-manager +[12]: https://itunes.apple.com/in/app/dashlane-free-secure-password/id517914548?mt=8 +[13]: https://play.google.com/store/apps/details?id=com.dashlane&hl=en +[14]: https://lastpass.com/misc_download2.php +[15]: https://itunes.apple.com/us/app/lastpass-for-premium-customers/id324613447?mt=8&ign-mpt=uo%3D4 +[16]: https://play.google.com/store/apps/details?id=com.lastpass.lpandroid +[17]: https://www.logmeonce.com/download/ +[18]: https://itunes.apple.com/us/app/logmeonce-free-password-manager/id972000703?ls=1&mt=8 +[19]: https://play.google.com/store/apps/details?id=log.me.once + +[20]: http://keepass.info/download.html +[21]: https://itunes.apple.com/us/app/kypass-companion/id555293879?ls=1&mt=12 +[22]: https://itunes.apple.com/de/app/ikeepass/id299697688?mt=8 +[23]: https://play.google.com/store/apps/details?id=keepass2android.keepass2android +[24]: https://support.apple.com/en-in/HT204085 +[25]: https://spideroak.com/opendownload +[26]: https://itunes.apple.com/us/app/spideroak/id360584371?mt=8 +[27]: https://play.google.com/store/apps/details?id=com.spideroak.android +[28]: https://www.enpass.io/download-enpass-for-windows/ +[29]: https://www.enpass.io/download-enpass-linux/ +[30]: https://itunes.apple.com/app/enpass-password-manager-best/id732710998?mt=12 +[31]: https://itunes.apple.com/us/app/enpass-password-manager/id455566716?mt=8 +[32]: https://play.google.com/store/apps/details?id=io.enpass.app&hl=en +[33]: http://www.roboform.com/download +[34]: http://www.roboform.com/for-linux +[35]: https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=331787573&mt=8 +[36]: https://play.google.com/store/apps/details?id=com.siber.roboform +[37]: https://1password.com/downloads/ +[38]: https://itunes.apple.com/in/app/1password-password-manager/id568903335?mt=8 +[39]: https://play.google.com/store/apps/details?id=com.agilebits.onepassword&hl=en +[40]: 
https://www.msecure.com/desktop-app/ +[41]: https://itunes.apple.com/in/app/msecure-password-manager/id292411902?mt=8 +[42]: https://play.google.com/store/apps/details?id=com.mseven.msecure&hl=en +[43]: https://itunes.apple.com/us/app/onesafe/id455190486?ls=1&mt=8 +[44]: https://itunes.apple.com/us/app/onesafe-secure-password-manager/id595543758?ls=1&mt=12 +[45]: https://play.google.com/store/apps/details?id=com.lunabee.onesafe +[46]: https://www.microsoft.com/en-us/store/apps/onesafe/9wzdncrddtx9 +[47]: https://splashid.com/downloads.php +[48]: https://itunes.apple.com/app/splashid-safe-password-manager/id284334840?mt=8 +[49]: https://play.google.com/store/apps/details?id=com.splashidandroid&hl=en +[50]: https://itunes.apple.com/app/loginbox-pro/id579954762?mt=8 +[51]: https://play.google.com/store/apps/details?id=com.mygosoftware.android.loginbox +[52]: https://passwords.google.com/ +[53]: http://thehackernews.com/2014/01/Kali-linux-Self-Destruct-nuke-password.html + From be32b533e38a96b0281df148a12d464a9fb04b36 Mon Sep 17 00:00:00 2001 From: ChrisLeeGit Date: Tue, 9 Aug 2016 22:09:04 +0800 Subject: [PATCH 381/471] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?How=20to=20Monitor=20Docker=20Containers=20using=20Grafana=20on?= =?UTF-8?q?=20Ubuntu=20=E5=A4=9A=E8=B0=A2?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...w to Monitor Docker Containers using Grafana on Ubuntu.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md b/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md index 3d7b171788..acdfff0b4f 100644 --- a/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md +++ b/sources/tech/20160809 How to Monitor Docker Containers using Grafana on Ubuntu.md @@ -1,3 +1,4 @@ +Being translated by ChrisLeeGit How to Monitor Docker Containers using 
Grafana on Ubuntu ================================================================================ @@ -255,10 +256,10 @@ You can get [more information][1] on docker monitoring here. Thank you for readi via: http://linoxide.com/linux-how-to/monitor-docker-containers-grafana-ubuntu/ 作者:[Saheetha Shameer][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ChrisLeeGit](https://github.com/chrisleegit) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/saheethas/ -[1]:https://github.com/vegasbrianc/docker-monitoring \ No newline at end of file +[1]:https://github.com/vegasbrianc/docker-monitoring From e12eb3a0054fbcbe7a0a72f58d4653421efb3ffd Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 9 Aug 2016 22:11:49 +0800 Subject: [PATCH 382/471] =?UTF-8?q?20160809-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/Part 3 - Let’s Build A Web Server.md | 1012 +++++++++++++++++ 1 file changed, 1012 insertions(+) create mode 100644 sources/tech/Part 3 - Let’s Build A Web Server.md diff --git a/sources/tech/Part 3 - Let’s Build A Web Server.md b/sources/tech/Part 3 - Let’s Build A Web Server.md new file mode 100644 index 0000000000..40a9ccd0b7 --- /dev/null +++ b/sources/tech/Part 3 - Let’s Build A Web Server.md @@ -0,0 +1,1012 @@ +Part 3 - Let’s Build A Web Server +===================================== + +>“We learn most when we have to invent” —Piaget + +In Part 2 you created a minimalistic WSGI server that could handle basic HTTP GET requests. And I asked you a question, “How can you make your server handle more than one request at a time?” In this article you will find the answer. So, buckle up and shift into high gear. You’re about to have a really fast ride. Have your Linux, Mac OS X (or any *nix system) and Python ready. All source code from the article is available on [GitHub][1]. 
+ +First let’s remember what a very basic Web server looks like and what the server needs to do to service client requests. The server you created in Part 1 and Part 2 is an iterative server that handles one client request at a time. It cannot accept a new connection until after it has finished processing a current client request. Some clients might be unhappy with it because they will have to wait in line, and for busy servers the line might be too long. + +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it1.png) + +Here is the code of the iterative server [webserver3a.py][2]: + +``` +##################################################################### +# Iterative server - webserver3a.py # +# # +# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X # +##################################################################### +import socket + +SERVER_ADDRESS = (HOST, PORT) = '', 8888 +REQUEST_QUEUE_SIZE = 5 + + +def handle_request(client_connection): + request = client_connection.recv(1024) + print(request.decode()) + http_response = b"""\ +HTTP/1.1 200 OK + +Hello, World! +""" + client_connection.sendall(http_response) + + +def serve_forever(): + listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + listen_socket.bind(SERVER_ADDRESS) + listen_socket.listen(REQUEST_QUEUE_SIZE) + print('Serving HTTP on port {port} ...'.format(port=PORT)) + + while True: + client_connection, client_address = listen_socket.accept() + handle_request(client_connection) + client_connection.close() + +if __name__ == '__main__': + serve_forever() +``` + +To observe your server handling only one client request at a time, modify the server a little bit and add a 60 second delay after sending a response to a client. The change is only one line to tell the server process to sleep for 60 seconds. 
+ +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it2.png) + +And here is the code of the sleeping server [webserver3b.py][3]: + +``` +######################################################################### +# Iterative server - webserver3b.py # +# # +# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X # +# # +# - Server sleeps for 60 seconds after sending a response to a client # +######################################################################### +import socket +import time + +SERVER_ADDRESS = (HOST, PORT) = '', 8888 +REQUEST_QUEUE_SIZE = 5 + + +def handle_request(client_connection): + request = client_connection.recv(1024) + print(request.decode()) + http_response = b"""\ +HTTP/1.1 200 OK + +Hello, World! +""" + client_connection.sendall(http_response) + time.sleep(60) # sleep and block the process for 60 seconds + + +def serve_forever(): + listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + listen_socket.bind(SERVER_ADDRESS) + listen_socket.listen(REQUEST_QUEUE_SIZE) + print('Serving HTTP on port {port} ...'.format(port=PORT)) + + while True: + client_connection, client_address = listen_socket.accept() + handle_request(client_connection) + client_connection.close() + +if __name__ == '__main__': + serve_forever() +``` + +Start the server with: + +``` +$ python webserver3b.py +``` + +Now open up a new terminal window and run the curl command. You should instantly see the “Hello, World!” string printed on the screen: + +``` +$ curl http://localhost:8888/hello +Hello, World! +``` + +And without delay open up a second terminal window and run the same curl command: + + +``` +$ curl http://localhost:8888/hello +``` + +If you’ve done that within 60 seconds then the second curl should not produce any output right away and should just hang there. The server shouldn’t print a new request body on its standard output either. 
Here is how it looks on my Mac (the window at the bottom right corner highlighted in yellow shows the second curl command hanging, waiting for the connection to be accepted by the server):
+
+![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it3.png)
+
+After you've waited long enough (more than 60 seconds) you should see the first curl terminate and the second curl print "Hello, World!" on the screen, then hang for 60 seconds, and then terminate:
+
+![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it4.png)
+
+The way it works is that the server finishes servicing the first curl client request and then it starts handling the second request only after it sleeps for 60 seconds. It all happens sequentially, or iteratively, one step, or in our case one client request, at a time.
+
+Let's talk about the communication between clients and servers for a bit. In order for two programs to communicate with each other over a network, they have to use sockets. And you saw sockets both in Part 1 and Part 2. But what is a socket?
+
+![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_socket.png)
+
+A socket is an abstraction of a communication endpoint and it allows your program to communicate with another program using file descriptors. In this article I'll be talking specifically about TCP/IP sockets on Linux/Mac OS X. An important notion to understand is the TCP socket pair.
+
+>The socket pair for a TCP connection is a 4-tuple that identifies two endpoints of the TCP connection: the local IP address, local port, foreign IP address, and foreign port. A socket pair uniquely identifies every TCP connection on a network. 
The two values that identify each endpoint, an IP address and a port number, are often called a socket.[1 +][4] + +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_socketpair.png) + +So, the tuple {10.10.10.2:49152, 12.12.12.3:8888} is a socket pair that uniquely identifies two endpoints of the TCP connection on the client and the tuple {12.12.12.3:8888, 10.10.10.2:49152} is a socket pair that uniquely identifies the same two endpoints of the TCP connection on the server. The two values that identify the server endpoint of the TCP connection, the IP address 12.12.12.3 and the port 8888, are referred to as a socket in this case (the same applies to the client endpoint). + +The standard sequence a server usually goes through to create a socket and start accepting client connections is the following: + +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_server_socket_sequence.png) + +1. The server creates a TCP/IP socket. This is done with the following statement in Python: + +``` +listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) +``` + +2. The server might set some socket options (this is optional, but you can see that the server code above does just that to be able to re-use the same address over and over again if you decide to kill and re-start the server right away). + +``` +listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) +``` + +3. Then, the server binds the address. The bind function assigns a local protocol address to the socket. With TCP, calling bind lets you specify a port number, an IP address, both, or neither.[1][4] + +``` +listen_socket.bind(SERVER_ADDRESS) +``` + +4. Then, the server makes the socket a listening socket + +``` +listen_socket.listen(REQUEST_QUEUE_SIZE) +``` + +The listen method is only called by servers. It tells the kernel that it should accept incoming connection requests for this socket. 
+
+After that's done, the server starts accepting client connections one connection at a time in a loop. When there is a connection available the accept call returns the connected client socket. Then, the server reads the request data from the connected client socket, prints the data on its standard output and sends a message back to the client. Then, the server closes the client connection and it is ready again to accept a new client connection.
+
+Here is what a client needs to do to communicate with the server over TCP/IP:
+
+![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_client_socket_sequence.png)
+
+Here is the sample code for a client to connect to your server, send a request and print the response:
+
+```
+import socket
+
+# create a socket and connect to a server
+sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+sock.connect(('localhost', 8888))
+
+# send and receive some data
+sock.sendall(b'test')
+data = sock.recv(1024)
+print(data.decode())
+```
+
+After creating the socket, the client needs to connect to the server. This is done with the connect call:
+
+```
+sock.connect(('localhost', 8888))
+```
+
+The client only needs to provide the remote IP address or host name and the remote port number of a server to connect to.
+
+You've probably noticed that the client doesn't call bind and accept. The client doesn't need to call bind because the client doesn't care about the local IP address and the local port number. The TCP/IP stack within the kernel automatically assigns the local IP address and the local port when the client calls connect. The local port is called an ephemeral port, i.e. a short-lived port.
+
+![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_ephemeral_port.png)
+
+A port on a server that identifies a well-known service that a client connects to is called a well-known port (for example, 80 for HTTP and 22 for SSH). 
Fire up your Python shell and make a client connection to the server you run on localhost and see what ephemeral port the kernel assigns to the socket you’ve created (start the server webserver3a.py or webserver3b.py before trying the following example):

```
>>> import socket
>>> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> sock.connect(('localhost', 8888))
>>> host, port = sock.getsockname()[:2]
>>> host, port
('127.0.0.1', 60589)
```

In the case above the kernel assigned the ephemeral port 60589 to the socket.

There are some other important concepts that I need to cover quickly before I get to answer the question from Part 2. You will see shortly why this is important. The two concepts are that of a process and a file descriptor.

What is a process? A process is just an instance of an executing program. When the server code is executed, for example, it’s loaded into memory and an instance of that executing program is called a process. The kernel records a bunch of information about the process - its process ID would be one example - to keep track of it. When you run your iterative server webserver3a.py or webserver3b.py you run just one process.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_server_process.png)

Start the server webserver3b.py in a terminal window:

```
$ python webserver3b.py
```

And in a different terminal window use the ps command to get the information about that process:

```
$ ps | grep webserver3b | grep -v grep
7182 ttys003    0:00.04 python webserver3b.py
```

The ps command shows you that you have indeed run just one Python process webserver3b. When a process gets created the kernel assigns a process ID to it, PID. In UNIX, every user process also has a parent that, in turn, has its own process ID called parent process ID, or PPID for short.
I assume that you run a BASH shell by default and when you start the server, a new process gets created with a PID and its parent PID is set to the PID of the BASH shell.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_ppid_pid.png)

Try it out and see for yourself how it all works. Fire up your Python shell again, which will create a new process, and then get the PID of the Python shell process and the parent PID (the PID of your BASH shell) using os.getpid() and os.getppid() system calls. Then, in another terminal window run the ps command and grep for the PPID (parent process ID, which in my case is 3148). In the screenshot below you can see an example of a parent-child relationship between my child Python shell process and the parent BASH shell process on my Mac OS X:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_pid_ppid_screenshot.png)

Another important concept to know is that of a file descriptor. So what is a file descriptor? A file descriptor is a non-negative integer that the kernel returns to a process when it opens an existing file, creates a new file or when it creates a new socket. You’ve probably heard that in UNIX everything is a file. The kernel refers to the open files of a process by a file descriptor. When you need to read or write a file you identify it with the file descriptor. Python gives you high-level objects to deal with files (and sockets) and you don’t have to use file descriptors directly to identify a file but, under the hood, that’s how files and sockets are identified in UNIX: by their integer file descriptors.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_process_descriptors.png)

By default, UNIX shells assign file descriptor 0 to the standard input of a process, file descriptor 1 to the standard output of the process and file descriptor 2 to the standard error.
![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it_default_descriptors.png)

As I mentioned before, even though Python gives you a high-level file or file-like object to work with, you can always use the fileno() method on the object to get the file descriptor associated with the file. Back to your Python shell to see how you can do that:

```
>>> import sys
>>> sys.stdin
<open file '<stdin>', mode 'r' at 0x102beb0c0>
>>> sys.stdin.fileno()
0
>>> sys.stdout.fileno()
1
>>> sys.stderr.fileno()
2
```

And while working with files and sockets in Python, you’ll usually be using a high-level file/socket object, but there may be times where you need to use a file descriptor directly. Here is an example of how you can write a string to the standard output using a write system call that takes a file descriptor integer as a parameter (the session above is Python 2; in Python 3, os.write expects bytes, so you would pass b'hello\n'):

```
>>> import sys
>>> import os
>>> res = os.write(sys.stdout.fileno(), 'hello\n')
hello
```

And here is an interesting part - which should not be surprising to you anymore because you already know that everything is a file in Unix - your socket also has a file descriptor associated with it. Again, when you create a socket in Python you get back an object and not a non-negative integer, but you can always get direct access to the integer file descriptor of the socket with the fileno() method that I mentioned earlier.

```
>>> import socket
>>> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> sock.fileno()
3
```

One more thing I wanted to mention: have you noticed that in the second example of the iterative server webserver3b.py, when the server process was sleeping for 60 seconds you could still connect to the server with the second curl command? Sure, the curl didn’t output anything right away and it was just hanging out there but how come the server was not accepting a connection at the time and the client was not rejected right away, but instead was able to connect to the server?
The answer to that is the listen method of a socket object and its BACKLOG argument, which I called REQUEST_QUEUE_SIZE in the code. The BACKLOG argument determines the size of a queue within the kernel for incoming connection requests. When the server webserver3b.py was sleeping, the second curl command that you ran was able to connect to the server because the kernel had enough space available in the incoming connection request queue for the server socket.

While increasing the BACKLOG argument does not magically turn your server into a server that can handle multiple client requests at a time, it is important to have a fairly large backlog parameter for busy servers so that the accept call would not have to wait for a new connection to be established but could grab the new connection off the queue right away and start processing a client request without delay.

Whoo-hoo! You’ve covered a lot of ground. Let’s quickly recap what you’ve learned (or refreshed if it’s all basics to you) so far.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_checkpoint.png)

- Iterative server
- Server socket creation sequence (socket, bind, listen, accept)
- Client connection creation sequence (socket, connect)
- Socket pair
- Socket
- Ephemeral port and well-known port
- Process
- Process ID (PID), parent process ID (PPID), and the parent-child relationship.
- File descriptors
- The meaning of the BACKLOG argument of the listen socket method

Now I am ready to answer the question from Part 2: “How can you make your server handle more than one request at a time?” Or put another way, “How do you write a concurrent server?”

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc2_service_clients.png)

The simplest way to write a concurrent server under Unix is to use a fork() system call.
![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_fork.png)

Here is the code of your new shiny concurrent server webserver3c.py that can handle multiple client requests at the same time (as in our iterative server example webserver3b.py, every child process sleeps for 60 secs):

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_it2.png)

```
###########################################################################
# Concurrent server - webserver3c.py                                      #
#                                                                         #
# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X        #
#                                                                         #
# - Child process sleeps for 60 seconds after handling a client's request #
# - Parent and child processes close duplicate descriptors                #
#                                                                         #
###########################################################################
import os
import socket
import time

SERVER_ADDRESS = (HOST, PORT) = '', 8888
REQUEST_QUEUE_SIZE = 5


def handle_request(client_connection):
    request = client_connection.recv(1024)
    print(
        'Child PID: {pid}. Parent PID {ppid}'.format(
            pid=os.getpid(),
            ppid=os.getppid(),
        )
    )
    print(request.decode())
    http_response = b"""\
HTTP/1.1 200 OK

Hello, World!
"""
    client_connection.sendall(http_response)
    time.sleep(60)


def serve_forever():
    listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_socket.bind(SERVER_ADDRESS)
    listen_socket.listen(REQUEST_QUEUE_SIZE)
    print('Serving HTTP on port {port} ...'.format(port=PORT))
    print('Parent PID (PPID): {pid}\n'.format(pid=os.getpid()))

    while True:
        client_connection, client_address = listen_socket.accept()
        pid = os.fork()
        if pid == 0:  # child
            listen_socket.close()  # close child copy
            handle_request(client_connection)
            client_connection.close()
            os._exit(0)  # child exits here
        else:  # parent
            client_connection.close()  # close parent copy and loop over

if __name__ == '__main__':
    serve_forever()
```

Before diving in and discussing how fork works, try it, and see for yourself that the server can indeed handle multiple client requests at the same time, unlike its iterative counterparts webserver3a.py and webserver3b.py. Start the server on the command line with:

```
$ python webserver3c.py
```

And try the same two curl commands you’ve tried before with the iterative server and see for yourself that, now, even though the server child process sleeps for 60 seconds after serving a client request, it doesn’t affect other clients because they are served by different and completely independent processes. You should see your curl commands output “Hello, World!” instantly and then hang for 60 secs. You can keep on running as many curl commands as you want (well, almost as many as you want :) and all of them will output the server’s response “Hello, World” immediately and without any noticeable delay. Try it.

The most important point to understand about fork() is that you call fork once but it returns twice: once in the parent process and once in the child process. When you fork a new process the process ID returned to the child process is 0.
When the fork returns in the parent process it returns the child’s PID.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc2_how_fork_works.png)

I still remember how fascinated I was by fork when I first read about it and tried it. It looked like magic to me. Here I was reading a sequential code and then “boom!”: the code cloned itself and now there were two instances of the same code running concurrently. I thought it was nothing short of magic, seriously.

When a parent forks a new child, the child process gets a copy of the parent’s file descriptors:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc2_shared_descriptors.png)

You’ve probably noticed that the parent process in the code above closed the client connection:

```
else:  # parent
    client_connection.close()  # close parent copy and loop over
```

So how come a child process is still able to read the data from a client socket if its parent closed the very same socket? The answer is in the picture above. The kernel uses descriptor reference counts to decide whether to close a socket or not. It closes the socket only when its descriptor reference count becomes 0. When your server creates a child process, the child gets the copy of the parent’s file descriptors and the kernel increments the reference counts for those descriptors. In the case of one parent and one child, the descriptor reference count would be 2 for the client socket and when the parent process in the code above closes the client connection socket, it merely decrements its reference count which becomes 1, not small enough to cause the kernel to close the socket.
The child process also closes the duplicate copy of the parent’s listen_socket because the child doesn’t care about accepting new client connections, it cares only about processing requests from the established client connection:

```
listen_socket.close()  # close child copy
```

I’ll talk about what happens if you do not close duplicate descriptors later in the article.

As you can see from the source code of your concurrent server, the sole role of the server parent process now is to accept a new client connection, fork a new child process to handle that client request, and loop over to accept another client connection, and nothing more. The server parent process does not process client requests - its children do.

A little aside. What does it mean when we say that two events are concurrent?

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc2_concurrent_events.png)

When we say that two events are concurrent we usually mean that they happen at the same time. As a shorthand that definition is fine, but you should remember the strict definition:

>Two events are concurrent if you cannot tell by looking at the program which will happen first.[2][5]

Again, it’s time to recap the main ideas and concepts you’ve covered so far.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_checkpoint.png)

- The simplest way to write a concurrent server in Unix is to use the fork() system call
- When a process forks a new process it becomes a parent process to that newly forked child process.
- Parent and child share the same file descriptors after the call to fork.
- The kernel uses descriptor reference counts to decide whether to close the file/socket or not
- The role of a server parent process: all it does now is accept a new connection from a client, fork a child to handle the client request, and loop over to accept a new client connection.
Let’s see what is going to happen if you don’t close duplicate socket descriptors in the parent and child processes. Here is a modified version of the concurrent server where the server does not close duplicate descriptors, webserver3d.py:

```
###########################################################################
# Concurrent server - webserver3d.py                                      #
#                                                                         #
# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X        #
###########################################################################
import os
import socket

SERVER_ADDRESS = (HOST, PORT) = '', 8888
REQUEST_QUEUE_SIZE = 5


def handle_request(client_connection):
    request = client_connection.recv(1024)
    http_response = b"""\
HTTP/1.1 200 OK

Hello, World!
"""
    client_connection.sendall(http_response)


def serve_forever():
    listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_socket.bind(SERVER_ADDRESS)
    listen_socket.listen(REQUEST_QUEUE_SIZE)
    print('Serving HTTP on port {port} ...'.format(port=PORT))

    clients = []
    while True:
        client_connection, client_address = listen_socket.accept()
        # store the reference otherwise it's garbage collected
        # on the next loop run
        clients.append(client_connection)
        pid = os.fork()
        if pid == 0:  # child
            listen_socket.close()  # close child copy
            handle_request(client_connection)
            client_connection.close()
            os._exit(0)  # child exits here
        else:  # parent
            # client_connection.close()
            print(len(clients))

if __name__ == '__main__':
    serve_forever()
```

Start the server with:

```
$ python webserver3d.py
```

Use curl to connect to the server:

```
$ curl http://localhost:8888/hello
Hello, World!
```

Okay, the curl printed the response from the concurrent server but it did not terminate and kept hanging. What is happening here?
The server no longer sleeps for 60 seconds: its child process actively handles a client request, closes the client connection and exits, but the client curl still does not terminate.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc3_child_is_active.png)

So why does the curl not terminate? The reason is the duplicate file descriptors. When the child process closed the client connection, the kernel decremented the reference count of that client socket and the count became 1. The server child process exited, but the client socket was not closed by the kernel because the reference count for that socket descriptor was not 0, and, as a result, the termination packet (called FIN in TCP/IP parlance) was not sent to the client and the client stayed on the line, so to speak. There is also another problem. If your long-running server doesn’t close duplicate file descriptors, it will eventually run out of available file descriptors:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc3_out_of_descriptors.png)

Stop your server webserver3d.py with Control-C and check out the default resources available to your server process set up by your shell with the shell built-in command ulimit:

```
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3842
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 3842
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```

As you can see above, the maximum number of open file descriptors (open files) available to the server process on my Ubuntu box is 1024.
Now let’s see how your server can run out of available file descriptors if it doesn’t close duplicate descriptors. In an existing or new terminal window, set the maximum number of open file descriptors for your server to be 256:

```
$ ulimit -n 256
```

Start the server webserver3d.py in the same terminal where you’ve just run the $ ulimit -n 256 command:

```
$ python webserver3d.py
```

and use the following client client3.py to test the server.

```
#####################################################################
# Test client - client3.py                                          #
#                                                                   #
# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X  #
#####################################################################
import argparse
import errno
import os
import socket


SERVER_ADDRESS = 'localhost', 8888
REQUEST = b"""\
GET /hello HTTP/1.1
Host: localhost:8888

"""


def main(max_clients, max_conns):
    socks = []
    for client_num in range(max_clients):
        pid = os.fork()
        if pid == 0:
            for connection_num in range(max_conns):
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.connect(SERVER_ADDRESS)
                sock.sendall(REQUEST)
                socks.append(sock)
                print(connection_num)
            os._exit(0)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Test client for LSBAWS.',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        '--max-conns',
        type=int,
        default=1024,
        help='Maximum number of connections per client.'
    )
    parser.add_argument(
        '--max-clients',
        type=int,
        default=1,
        help='Maximum number of clients.'
    )
    args = parser.parse_args()
    main(args.max_clients, args.max_conns)
```

In a new terminal window, start the client3.py and tell it to create 300 simultaneous connections to the server:

```
$ python client3.py --max-clients=300
```

Soon enough your server will explode.
Here is a screenshot of the exception on my box:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc3_too_many_fds_exc.png)

The lesson is clear - your server should close duplicate descriptors. But even if you close duplicate descriptors, you are not out of the woods yet because there is another problem with your server, and that problem is zombies!

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc3_zombies.png)

Yes, your server code actually creates zombies. Let’s see how. Start up your server again:

```
$ python webserver3d.py
```

Run the following curl command in another terminal window:

```
$ curl http://localhost:8888/hello
```

And now run the ps command to show running Python processes. This the example of ps output on my Ubuntu box:

```
$ ps auxw | grep -i python | grep -v grep
vagrant   9099  0.0  1.2  31804  6256 pts/0    S+   16:33   0:00 python webserver3d.py
vagrant   9102  0.0  0.0      0     0 pts/0    Z+   16:33   0:00 [python] <defunct>
```

Do you see the second line above where it says the status of the process with PID 9102 is Z+ and the name of the process is <defunct>? That’s our zombie there. The problem with zombies is that you can’t kill them.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc3_kill_zombie.png)

Even if you try to kill zombies with $ kill -9 <pid>, they will survive. Try it and see for yourself.

What is a zombie anyway and why does our server create them? A zombie is a process that has terminated, but its parent has not waited for it and has not received its termination status yet. When a child process exits before its parent, the kernel turns the child process into a zombie and stores some information about the process for its parent process to retrieve later. The information stored is usually the process ID, the process termination status, and the resource usage by the process. Okay, so zombies serve a purpose, but if your server doesn’t take care of these zombies your system will get clogged up.
Let’s see how that happens. First stop your running server and, in a new terminal window, use the ulimit command to set the maximum number of user processes to 400 (and make sure to set open files to a high number, let’s say 500, too):

```
$ ulimit -u 400
$ ulimit -n 500
```

Start the server webserver3d.py in the same terminal where you’ve just run the $ ulimit -u 400 command:

```
$ python webserver3d.py
```

In a new terminal window, start the client3.py and tell it to create 500 simultaneous connections to the server:

```
$ python client3.py --max-clients=500
```

And, again, soon enough your server will blow up with an OSError: Resource temporarily unavailable exception when it tries to create a new child process, but it can’t because it has reached the limit for the maximum number of child processes it’s allowed to create. Here is a screenshot of the exception on my box:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc3_resource_unavailable.png)

As you can see, zombies create problems for your long-running server if it doesn’t take care of them. I will discuss shortly how the server should deal with that zombie problem.

Let’s recap the main points you’ve covered so far:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_checkpoint.png)

- If you don’t close duplicate descriptors, the clients won’t terminate because the client connections won’t get closed.
- If you don’t close duplicate descriptors, your long-running server will eventually run out of available file descriptors (max open files).
- When you fork a child process and it exits and the parent process doesn’t wait for it and doesn’t collect its termination status, it becomes a zombie.
- Zombies need to eat something and, in our case, it’s memory. Your server will eventually run out of available processes (max user processes) if it doesn’t take care of zombies.
- You can’t kill a zombie, you need to wait for it.

So what do you need to do to take care of zombies?
You need to modify your server code to wait for zombies to get their termination status. You can do that by modifying your server to call a wait system call. Unfortunately, that’s far from ideal because if you call wait and there is no terminated child process the call to wait will block your server, effectively preventing your server from handling new client connection requests. Are there any other options? Yes, there are, and one of them is the combination of a signal handler with the wait system call.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc4_signaling.png)

Here is how it works. When a child process exits, the kernel sends a SIGCHLD signal. The parent process can set up a signal handler to be asynchronously notified of that SIGCHLD event and then it can wait for the child to collect its termination status, thus preventing the zombie process from being left around.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part_conc4_sigchld_async.png)

By the way, an asynchronous event means that the parent process doesn’t know ahead of time that the event is going to happen.

Modify your server code to set up a SIGCHLD event handler and wait for a terminated child in the event handler.
The code is available in the webserver3e.py file:

```
###########################################################################
# Concurrent server - webserver3e.py                                      #
#                                                                         #
# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X        #
###########################################################################
import os
import signal
import socket
import time

SERVER_ADDRESS = (HOST, PORT) = '', 8888
REQUEST_QUEUE_SIZE = 5


def grim_reaper(signum, frame):
    pid, status = os.wait()
    print(
        'Child {pid} terminated with status {status}'
        '\n'.format(pid=pid, status=status)
    )


def handle_request(client_connection):
    request = client_connection.recv(1024)
    print(request.decode())
    http_response = b"""\
HTTP/1.1 200 OK

Hello, World!
"""
    client_connection.sendall(http_response)
    # sleep to allow the parent to loop over to 'accept' and block there
    time.sleep(3)


def serve_forever():
    listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_socket.bind(SERVER_ADDRESS)
    listen_socket.listen(REQUEST_QUEUE_SIZE)
    print('Serving HTTP on port {port} ...'.format(port=PORT))

    signal.signal(signal.SIGCHLD, grim_reaper)

    while True:
        client_connection, client_address = listen_socket.accept()
        pid = os.fork()
        if pid == 0:  # child
            listen_socket.close()  # close child copy
            handle_request(client_connection)
            client_connection.close()
            os._exit(0)
        else:  # parent
            client_connection.close()

if __name__ == '__main__':
    serve_forever()
```

Start the server:

```
$ python webserver3e.py
```

Use your old friend curl to send a request to the modified concurrent server:

```
$ curl http://localhost:8888/hello
```

Look at the server:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc4_eintr.png)

What just happened? The call to accept failed with the error EINTR.
![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc4_eintr_error.png)

The parent process was blocked in the accept call when the child process exited, which caused a SIGCHLD event; that in turn activated the signal handler, and when the signal handler finished, the accept system call got interrupted:

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc4_eintr_accept.png)

Don’t worry, it’s a pretty simple problem to solve, though. All you need to do is to re-start the accept system call. Here is the modified version of the server webserver3f.py that handles that problem:

```
###########################################################################
# Concurrent server - webserver3f.py                                      #
#                                                                         #
# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X        #
###########################################################################
import errno
import os
import signal
import socket

SERVER_ADDRESS = (HOST, PORT) = '', 8888
REQUEST_QUEUE_SIZE = 1024


def grim_reaper(signum, frame):
    pid, status = os.wait()


def handle_request(client_connection):
    request = client_connection.recv(1024)
    print(request.decode())
    http_response = b"""\
HTTP/1.1 200 OK

Hello, World!
"""
    client_connection.sendall(http_response)


def serve_forever():
    listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_socket.bind(SERVER_ADDRESS)
    listen_socket.listen(REQUEST_QUEUE_SIZE)
    print('Serving HTTP on port {port} ...'.format(port=PORT))

    signal.signal(signal.SIGCHLD, grim_reaper)

    while True:
        try:
            client_connection, client_address = listen_socket.accept()
        except IOError as e:
            code, msg = e.args
            # restart 'accept' if it was interrupted
            if code == errno.EINTR:
                continue
            else:
                raise

        pid = os.fork()
        if pid == 0:  # child
            listen_socket.close()  # close child copy
            handle_request(client_connection)
            client_connection.close()
            os._exit(0)
        else:  # parent
            client_connection.close()  # close parent copy and loop over


if __name__ == '__main__':
    serve_forever()
```

Start the updated server webserver3f.py:

```
$ python webserver3f.py
```

Use curl to send a request to the modified concurrent server:

```
$ curl http://localhost:8888/hello
```

See? No EINTR exceptions any more. Now, verify that there are no more zombies either and that your SIGCHLD event handler with wait call took care of terminated children. To do that, just run the ps command and see for yourself that there are no more Python processes with Z+ status (no more <defunct> processes). Great! It feels safe without zombies running around.

![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_checkpoint.png)

- If you fork a child and don’t wait for it, it becomes a zombie.
- Use the SIGCHLD event handler to asynchronously wait for a terminated child to get its termination status
- When using an event handler you need to keep in mind that system calls might get interrupted and you need to be prepared for that scenario

Okay, so far so good. No problems, right? Well, almost.
Try your webserver3f.py again, but instead of making one request with curl use client3.py to create 128 simultaneous connections: + +``` +$ python client3.py --max-clients 128 +``` + +Now run the ps command again + +``` +$ ps auxw | grep -i python | grep -v grep +``` + +and see that, oh boy, zombies are back again! + +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc5_zombies_again.png) + +What went wrong this time? When you ran 128 simultaneous clients and established 128 connections, the child processes on the server handled the requests and exited almost at the same time causing a flood of SIGCHLD signals being sent to the parent process. The problem is that the signals are not queued and your server process missed several signals, which left several zombies running around unattended: + +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc5_signals_not_queued.png) + +The solution to the problem is to set up a SIGCHLD event handler but instead of wait use a waitpid system call with a WNOHANG option in a loop to make sure that all terminated child processes are taken care of. Here is the modified server code, webserver3g.py: + +``` +########################################################################### +# Concurrent server - webserver3g.py # +# # +# Tested with Python 2.7.9 & Python 3.4 on Ubuntu 14.04 & Mac OS X # +########################################################################### +import errno +import os +import signal +import socket + +SERVER_ADDRESS = (HOST, PORT) = '', 8888 +REQUEST_QUEUE_SIZE = 1024 + + +def grim_reaper(signum, frame): + while True: + try: + pid, status = os.waitpid( + -1, # Wait for any child process + os.WNOHANG # Do not block and return EWOULDBLOCK error + ) + except OSError: + return + + if pid == 0: # no more zombies + return + + +def handle_request(client_connection): + request = client_connection.recv(1024) + print(request.decode()) + http_response = b"""\ +HTTP/1.1 200 OK + +Hello, World! 
+""" + client_connection.sendall(http_response) + + +def serve_forever(): + listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + listen_socket.bind(SERVER_ADDRESS) + listen_socket.listen(REQUEST_QUEUE_SIZE) + print('Serving HTTP on port {port} ...'.format(port=PORT)) + + signal.signal(signal.SIGCHLD, grim_reaper) + + while True: + try: + client_connection, client_address = listen_socket.accept() + except IOError as e: + code, msg = e.args + # restart 'accept' if it was interrupted + if code == errno.EINTR: + continue + else: + raise + + pid = os.fork() + if pid == 0: # child + listen_socket.close() # close child copy + handle_request(client_connection) + client_connection.close() + os._exit(0) + else: # parent + client_connection.close() # close parent copy and loop over + +if __name__ == '__main__': + serve_forever() +``` + +Start the server: + +``` +$ python webserver3g.py +``` + +Use the test client client3.py: + +``` +$ python client3.py --max-clients 128 +``` + +And now verify that there are no more zombies. Yay! Life is good without zombies :) + +![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_conc5_no_zombies.png) + +Congratulations! It’s been a pretty long journey but I hope you liked it. Now you have your own simple concurrent server and the code can serve as a foundation for your further work towards a production grade Web server. + +I’ll leave it as an exercise for you to update the WSGI server from Part 2 and make it concurrent. You can find the modified version here. But look at my code only after you’ve implemented your own version. You have all the necessary information to do that. So go and just do it :) + +What’s next? As Josh Billings said, + +>“Be like a postage stamp — stick to one thing until you get there.” + +Start mastering the basics. Question what you already know. And always dig deeper. 
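One way to dig deeper is to replay the zombie scenario outside the server. The standalone sketch below is illustrative only (not from the article) and POSIX-specific, since it calls `os.fork`: it spawns a batch of children that exit almost simultaneously, and the `waitpid`/`WNOHANG` loop still reaps every one of them even when several exits are coalesced into a single SIGCHLD delivery:

```python
import os
import signal
import time

reaped = []

def grim_reaper(signum, frame):
    # Drain *all* ready children: standard signals are not queued, so one
    # SIGCHLD delivery may stand in for several children exiting at once.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError:   # ECHILD: no children left at all
            return
        if pid == 0:      # children exist, but none has exited yet
            return
        reaped.append(pid)

signal.signal(signal.SIGCHLD, grim_reaper)

NUM_CHILDREN = 32
for _ in range(NUM_CHILDREN):
    if os.fork() == 0:    # child: exit immediately to trigger SIGCHLD
        os._exit(0)

# Parent: poll until the handler has reaped everything (or we time out).
deadline = time.time() + 10
while len(reaped) < NUM_CHILDREN and time.time() < deadline:
    time.sleep(0.01)

print(len(reaped))
```

Replace the `waitpid` loop with a single `os.wait()` call and rerun it a few times: some children will stay unreaped, which is precisely the flood-of-signals failure the server code guards against.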
+
+![](https://ruslanspivak.com/lsbaws-part3/lsbaws_part3_dig_deeper.png)
+
+>“If you learn only methods, you’ll be tied to your methods. But if you learn principles, you can devise your own methods.” —Ralph Waldo Emerson
+
+Below is a list of books that I’ve drawn on for most of the material in this article. They will help you broaden and deepen your knowledge about the topics I’ve covered. I highly recommend you get those books somehow: borrow them from your friends, check them out from your local library, or just buy them on Amazon. They are the keepers:
+
+1. [Unix Network Programming, Volume 1: The Sockets Networking API (3rd Edition)][6]
+2. [Advanced Programming in the UNIX Environment, 3rd Edition][7]
+3. [The Linux Programming Interface: A Linux and UNIX System Programming Handbook][8]
+4. [TCP/IP Illustrated, Volume 1: The Protocols (2nd Edition) (Addison-Wesley Professional Computing Series)][9]
+5. [The Little Book of SEMAPHORES (2nd Edition): The Ins and Outs of Concurrency Control and Common Mistakes][10]. Also available for free on the author’s site [here][11].
+
+BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on the topics I just covered. Subscribe to the mailing list to get the latest updates about the book and the release date.
+ +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbaws-part3/ + +作者:[Ruslan][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/rspivak/ + +[1]: https://github.com/rspivak/lsbaws/blob/master/part3/ +[2]: https://github.com/rspivak/lsbaws/blob/master/part3/webserver3a.py +[3]: https://github.com/rspivak/lsbaws/blob/master/part3/webserver3b.py +[4]: https://ruslanspivak.com/lsbaws-part3/#fn:1 +[5]: https://ruslanspivak.com/lsbaws-part3/#fn:2 +[6]: http://www.amazon.com/gp/product/0131411551/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0131411551&linkCode=as2&tag=russblo0b-20&linkId=2F4NYRBND566JJQL +[7]: http://www.amazon.com/gp/product/0321637739/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0321637739&linkCode=as2&tag=russblo0b-20&linkId=3ZYAKB537G6TM22J +[8]: http://www.amazon.com/gp/product/1593272200/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1593272200&linkCode=as2&tag=russblo0b-20&linkId=CHFOMNYXN35I2MON +[9]: http://www.amazon.com/gp/product/0321336313/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0321336313&linkCode=as2&tag=russblo0b-20&linkId=K467DRFYMXJ5RWAY +[10]: http://www.amazon.com/gp/product/1441418687/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1441418687&linkCode=as2&tag=russblo0b-20&linkId=QFOAWARN62OWTWUG +[11]: http://greenteapress.com/semaphores/ From 1459084ebe4d7549a0db8426c2ae4b2651716e2a Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 10 Aug 2016 01:36:10 +0800 Subject: [PATCH 383/471] PUB:20160531 The Anatomy of a Linux User MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @vim-kakali @PurlingNayuki 尽力了,可能还有不足或谬误的地方吧。 --- .../20160531 The Anatomy of a Linux User.md | 54 ++++++++++++++++++ .../20160531 The Anatomy of a Linux 
User.md | 56 ------------------- 2 files changed, 54 insertions(+), 56 deletions(-) create mode 100644 published/20160531 The Anatomy of a Linux User.md delete mode 100644 translated/talk/20160531 The Anatomy of a Linux User.md diff --git a/published/20160531 The Anatomy of a Linux User.md b/published/20160531 The Anatomy of a Linux User.md new file mode 100644 index 0000000000..750df665f5 --- /dev/null +++ b/published/20160531 The Anatomy of a Linux User.md @@ -0,0 +1,54 @@ +深入理解 Linux 用戶 +================================ + +**一些新的 GNU/Linux 用户很清楚 Linux 不是 Windows,但其他人对此则不甚了解,而最好的发行版设计者们则会谨记着这两种人的存在。** + +### Linux 的核心 + +不管怎么说,Nicky 看起来都不太引人注目。她已经三十岁了,却决定在离开学校多年后回到学校学习。她在海军待了六年,后来接受了一份老友给她的新工作,想试试这份工作会不会比她在军队的工作更有前途。这种换工作的事情在战后的军事后勤处非常常见。我正是因此而认识的她。她那时是一个八个州的货车运输业中介组织的区域经理,而那会我在达拉斯跑肉品包装工具的运输。 + +![](http://i2.wp.com/fossforce.com/wp-content/uploads/2016/05/anatomy.jpg?w=525) + +Nicky 和我在 2006 年成为了好朋友。她很外向,几乎每一个途经她负责的线路上的人她都乐于接触。我们经常星期五晚上相约去一家室内激光枪战中心打真人 CS。像这样一次就打三个的半小时战役对我们来说并不鲜见。或许这并不像彩弹游戏(LCTT 译注:一种军事游戏,双方以汽枪互射彩色染料弹丸,对方被击中后衣服上会留下彩色印渍即表示“被消灭”——必应词典)一样便宜,但是它很有临场感,还稍微有点恐怖游戏的感觉。某次活动的时候,她问我能否帮她维修电脑。 + +她知道我在为了让一些贫穷的孩子能拥有他们自己的电脑而奔走,当她抱怨她的电脑很慢的时候,我开玩笑地说她可以给比尔盖茨的 401k 计划交钱了(LCTT 译注:401k 计划始于20世纪80年代初,是一种由雇员、雇主共同缴费建立起来的完全基金式的养老保险制度。此处隐喻需要购买新电脑而向比尔盖茨的微软公司付费软件费用。)。Nicky 却说这是了解 Linux 的最佳时间。 + +她的电脑是品牌机,是个带有 Dell 19'' 显示器的 2005 年中款的华硕电脑。不幸的是,这台电脑没有好好照料,上面充斥着所能找到的各种关都关不掉的工具栏和弹窗软件。我们把电脑上的文件都做了备份之后就开始安装 Linux 了。我们一起完成了安装,并且确信她知道了如何分区。不到一个小时,她的电脑上就有了一个“金闪闪”的 PCLinuxOS 桌面。 + +在她操作新系统时,她经常评论这个系统看起来多么漂亮。她并非随口一说;她为眼前光鲜亮丽的桌面着了魔。她说她的桌面漂亮的就像化了“彩妆”一样。这是我在安装系统期间特意设置的,我每次安装 Linux 的时候都会把它打扮的漂漂亮亮的。我希望这些桌面让每个人看起来都觉得漂亮。 + +大概第一周左右,她通过电话和邮件问了我一些常规问题,而最主要的问题还是她想知道如何保存她 OpenOffice 文件才可以让她的同事也可以打开这些文件。教一个人使用 Linux 或者 Open/LibreOffice 的时候最重要的就是教她保存文件。大多数用户在弹出的对话框中直接点了保存,结果就用默认的开放文档格式(Open Document Format)保存了,这让他们吃了不少苦头。 + +曾经有过这么一件事,大约一年前或者更久,一个高中生说他没有通过期末考试,因为教授不能打开包含他的论文的文件。这引来了一些读者的激烈评论,大家都不知道这件事该怪谁,这孩子没错,而他的教授,似乎也没错。 + +我认识的一些大学教授他们每一个人都知道怎么打开 ODF 文件。另外,那个该死的微软在这方面做得真 XX 的不错,我觉得微软 Office 
现在已经能打开 ODT 或者 ODF 文件了。不过我也不确定,毕竟我从 2005 年就没用过 Microsoft Office 了。 + +甚至在过去糟糕的日子里,微软公开而悍然地通过产品绑架的方式来在企业桌面领域推行他们的软件时,我和一些微软 Office 的用户在开展业务和洽谈合作时从来没有出现过问题,因为我会提前想到可能出现的问题并且不会有侥幸心理。我会发邮件给他们询问他们正在使用的 Office 版本。这样,我就可以确保以他们能够读写的格式保存文件。 + +说回 Nicky ,她花了很多时间学习她的 Linux 系统。我很惊奇于她的热情。 + +当人们意识到需要抛弃所有的 Windows 的使用习惯和工具的时候,学习 Linux 系统就会很容易。甚至在告诉那些淘气的孩子们如何使用之后,再次回来检查的时候,他们都不会试图把 .exe 文件下载到桌面上或某个下载文件夹。 + +在我们通常讨论这些文件的时候,我们也会提及关于更新的问题。长久以来我一直反对在一台机器上有多个软件安装系统和更新管理软件。以 Mint 来说,它完全禁用了 Synaptic 中的更新功能,这让我失去兴趣。但是即便对于我们这些仍然在使用 dpkg 和 apt 的老家伙们来说,睿智的脑袋也已经开始意识到命令行对新用户来说并不那么温馨而友好。 + +我曾严正抗议并强烈谴责 Synaptic 功能上的削弱,直到它说服了我。你记得什么时候第一次使用的新打造的 Linux 发行版,并拥有了最高管理权限吗? 你记得什么时候对 Synaptic 中列出的大量软件进行过梳理吗?你记得怎样开始安装每个你发现的很酷的程序吗?你记得有多少这样的程序都是以字母"lib"开头的吗? + +我也曾做过那样的事。我安装又弄坏了好几次 Linux,后来我才发现那些库(lib)文件是应用程序的螺母和螺栓,而不是应用程序本身。这就是 Linux Mint 和 Ubuntu 幕后那些聪明的开发者创造了智能、漂亮和易用的应用安装器的原因。Synaptic 仍然是我们这些老玩家爱用的工具,但是对于那些在我们之后才来的新手来说,有太多的方式可以让他们安装库文件和其他类似的包。在新的安装程序中,这些文件的显示会被折叠起来,不会展示给用户。真的,这才是它应该做的。 + +除非你要准备好了打很多支持电话。 + +现在的 Linux 发行版中藏了很多智慧的结晶,我也很感谢这些开发者们,因为他们,我的工作变得更容易。不是每一个 Linux 新用户都像 Nicky 这样富有学习能力和热情。她对我来说就是一个“装好就行”的项目,只需要为她解答一些问题,其它的她会自己研究解决。像她这样极具学习能力和热情的用户的毕竟是少数。这样的 Linux 新人任何时候都是珍稀物种。 + +很不错,他们都是要教自己的孩子使用 Linux 的人。 + +-------------------------------------------------------------------------------- + +via: http://fossforce.com/2016/05/anatomy-linux-user/ + +作者:[Ken Starks][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[PurlingNayuki](https://github.com/PurlingNayuki), [wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://linuxlock.blogspot.com/ diff --git a/translated/talk/20160531 The Anatomy of a Linux User.md b/translated/talk/20160531 The Anatomy of a Linux User.md deleted file mode 100644 index 99e3f0acc5..0000000000 --- a/translated/talk/20160531 The Anatomy of a Linux User.md +++ /dev/null @@ -1,56 +0,0 @@ -深入理解 Linux 用戶 -================================ - - - -**一些新的 GNU/Linux 用户都很清楚地知道 Linux 不是 
Windows。但其他人对此都不甚了解。最好的发行版设计者努力保持新的思想。** - -### Linux 的核心 - -不管怎么说,Nicky 看起来都不引人注目。她已经三十岁了,却决定回到学校学习。她在海军待了 6 年,后来接受了一份老友给她的新工作,她认为这份工作比她在军队的工作更有前途。认定其他工作更有前途而换工作的事情在战后的军事后勤处非常常见。我正是因此而认识的她。她那时是一个有 8 个州参与的的货车运输业协商组织的区域经理。那会我在达拉斯跑肉品包装工具的运输。 -![](http://i2.wp.com/fossforce.com/wp-content/uploads/2016/05/anatomy.jpg?w=525) - - -Nicky 和我在 2006 年成为了好朋友。她很外向,对所有她跑过的线路的拥有者。我们经常星期五晚上我们相约去一家室内激光枪战中心打真人 CS。像这样连续激战半个小时对我们来说并不罕见。或许这并不像彩弹游戏[译注:一种军事游戏,双方以汽枪互射彩色染料弹丸,对方被击中后衣服上会留下彩色印渍即表示“被消灭”(必应词典)]一样便宜,但是气氛还是可以控制得住的,而且令人感觉比较恐怖。某次活动的时候,她问我能否帮她维修电脑。 - -她知道我为了一些贫穷的孩子能拥有他们自己的电脑做出的努力,当她抱怨她的电脑很慢的时候,我开玩笑地提到了参加比尔盖茨的 401k 计划[译注:这个计划是为那些为内部收益代码做过贡献的人们提供由税收定义的养老金账户。]。Nicky 说这是了解 Linux 的最佳时间。 - -她的电脑相当好,它是一个带有 Dell 19'' 显示器的华硕电脑。不幸的是,即使当不需要一些东西的时候,这台 Windows 电脑也会强制性的显示所有的工具条和文本菜单。我们把电脑上的文件都做了备份之后就开始安装 Linux 了。我们一起完成了安装,并且确信她知道了如何分区。不到一个小时,她的电脑上就有了一个漂亮的 PCLinuxOS 桌面。 - -在她操作新系统时,她经常评论这个系统看起来多么漂亮。她并非随口一说;她为眼前光鲜亮丽的桌面着了魔。她说她的桌面带有漂亮的"微光"[译注:即文字闪光效果,shimmer]。这是我在安装系统期间特意设置的。我每次安装 Linux 的时候都会进行这样的配置。我想让每个 Linux 用户的桌面都拥有这个漂亮的微光。 - -大概第一周左右,她通过电话和邮件问了我一些常规问题,而最主要的问题还是她想知道如何保存她OpenOffice 文件才可以让她的同事也可以读这些文件。教一个人使用 Linux 或者 Open/LibreOffice 的时候最重要的就是教她保存文件。大多数用户直接点了保存,结果就用默认的开放文档格式(Open Document Format)保存了,这让他们吃了不少苦头。 - - -大约一年前或者更久,一个高中生说他没有通过期末考试,因为教授不能打开他上交的 ODF 格式的论文。这引来了一些读者的激烈评论,大家都不知道这件事该怪谁。 - -我知道一些大学教授甚至每个人都能够打开一个 ODF 文件。另外,我想 Microsoft Office 现在已经能打开 ODT 或者 ODF 文件了。不过我也不确定,毕竟我最近一次用 Microsoft Office 是在 2005 年。 - -甚至在过去糟糕的日子里,当 Microsoft 很受欢迎并且很热衷于为使用他们系统的厂商的企业桌面上安装他们自己的软件的时候,我和一些 Microsoft Office 用户的产品生意和合作从来没有出现过问题,因为我会提前想到可能出现的问题并且不会有侥幸心理。我会发邮件给他们询问他们正在使用的 Office 版本。这样,我就可以确保以他们能够读写的格式保存文件。 - -说回 Nicky ,她花了很多时间学习她的 Linux 系统。我很惊奇于她的热情。 - -当人们意识到所有的使用 Windows 的习惯和工具都要被抛弃的时候,学习 Linux 系统也会很容易。甚至在告诉孩子们如何使用之后,他们都不会试图下载和执行 .exe 文件。 - -在我们通常讨论这些文件的时候,我们也会提及关于更新的问题。很长时间我都不会在一台电脑上反复设置去完成多种程序的更新和安装。比如 Mint,它禁用了 Synaptic[译注:一个 Linux 管理图像程序包]的完整(系统)更新功能,这让我失去兴趣。但是即便我们仍然在使用 dpkg 和 apt,睿智的玩家已经开始意识到命令行对新用户来说并不那么温馨而友好。 - -我曾严正抗议并强烈谴责 Synaptic 功能上的削弱。你记得你第一次使用的 Linux 发行版吗?记得你什么时候在 Synaptic 
中详细查看大量的软件列表吗?记得你怎样开始标记你发现的每个很酷的程序吗?你记得有多少这样的程序都是以字母"lib"开头的吗? - -我也曾做过那样的事。我安装又弄坏了好次 Linux,后来我才发现那些库(lib)文件是应用程序的螺母和螺栓,而不是应用程序本身。这就是 Linux Mint 和 Ubuntu 幕后那些聪明的开发者创造了智能、漂亮和易于使用的应用程序的安装程序的理由。Synaptic 仍然是我们这些老玩家爱用的工具,但是对于一些新手,Synaptic 中有太多的方式可以让他们安装库文件和其他类似的包,把系统弄乱。在新的安装程序中,这些包会被折叠起来,不会展示给用户。讲真,这也是它应该做的。 - -除非你要改变应该支持的需求。 - -现在的 Linux 发行版中藏了很多智慧的结晶,我也很感谢这些开发者,因为他们,我的工作变得容易。不是每一个 Linux 新用户都像 Nicky 这样富有学习能力和热情。她对我来说就是一个“安装后不管”的项目,只需要为她解答一些问题,其他的她会自己研究解决。像她这样极具学习能力和热情的用户的毕竟是少数。这样的 Linux 新人任何时候都是珍稀物种。 - -很不错,他们都是要教自己的孩子使用 Linux 的人。 --------------------------------------------------------------------------------- - -via: http://fossforce.com/2016/05/anatomy-linux-user/ - -作者:[Ken Starks][a] -译者:[vim-kakali](https://github.com/vim-kakali) -校对:[PurlingNayuki](https://github.com/PurlingNayuki) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://linuxlock.blogspot.com/ From 1804bbae80b1bfd79505b4f33b0db747aa09799b Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 10 Aug 2016 09:39:02 +0800 Subject: [PATCH 384/471] PUB:Part 11 - How to Allow Awk to Use Shell Variables @ChrisLeeGit --- ...How to Allow Awk to Use Shell Variables.md | 23 +++++++++++-------- 1 file changed, 14 insertions(+), 9 deletions(-) rename translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md => published/awk/Part 11 - How to Allow Awk to Use Shell Variables.md (81%) diff --git a/translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md b/published/awk/Part 11 - How to Allow Awk to Use Shell Variables.md similarity index 81% rename from translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md rename to published/awk/Part 11 - How to Allow Awk to Use Shell Variables.md index e4ec78abd4..7caf10ec28 100644 --- a/translated/tech/awk/20160802 Part 11 - How to Allow Awk to Use Shell Variables.md +++ b/published/awk/Part 11 - How to Allow Awk to Use Shell Variables.md @@ -5,21 +5,23 
@@ awk 系列:如何让 awk 使用 Shell 变量 我们可以通过在 awk 命令中使用 shell 变量达到目的,在 awk 系列的这一节中,我们将学习如何让 awk 使用 shell 变量,这些变量可能包含我们希望传递给 awk 命令的值。 +![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Shell-Variables-in-Awk.png) + 有两种可能的方法可以让 awk 使用 shell 变量: ### 1. 使用 Shell 引用 -让我们用一个示例来演示如何在一条 awk 命令中使用 shell 引用替代一个 shell 变量。在该示例中,我们希望在文件 /etc/passwd 中搜索一个用户名,过滤并输出用户的账户信息。 +让我们用一个示例来演示如何在一条 awk 命令中使用 shell 引用来替代一个 shell 变量。在该示例中,我们希望在文件 /etc/passwd 中搜索一个用户名,过滤并输出用户的账户信息。 因此,我们可以编写一个 `test.sh` 脚本,内容如下: ``` #!/bin/bash -# 读取用户名 +### 读取用户名 read -p "请输入用户名:" username -# 在 /etc/passwd 中搜索用户名,然后在屏幕上输出详细信息 +### 在 /etc/passwd 中搜索用户名,然后在屏幕上输出详细信息 cat /etc/passwd | awk "/$username/ "' { print $0 }' ``` @@ -31,7 +33,7 @@ cat /etc/passwd | awk "/$username/ "' { print $0 }' cat /etc/passwd | awk "/$username/ "' { print $0 }' ``` -`"/$username/ "`:在 awk 命令中使用 shell 引用来替代 shell 变量 `username` 的值。`username` 的值就是要在文件 /etc/passwd 中搜索的模式。 +`"/$username/ "`:该 shell 引用用于在 awk 命令中替换 shell 变量 `username` 的值。`username` 的值就是要在文件 /etc/passwd 中搜索的模式。 注意,双引号位于 awk 脚本 `'{ print $0 }'` 之外。 @@ -45,12 +47,14 @@ $ ./text.sh 运行脚本后,它会提示你输入一个用户名,然后你输入一个合法的用户名并回车。你将会看到来自 /etc/passwd 文件中详细的用户账户信息,如下图所示: ![](http://www.tecmint.com/wp-content/uploads/2016/08/Shell-Script-to-Find-Username-in-Passwd-File.png) -> *在 Password 文件中查找用户名的 shell 脚本* + +*在 Password 文件中查找用户名的 shell 脚本* ### 2. 
使用 awk 进行变量赋值 和上面介绍的方法相比,该方法更加单,并且更好。考虑上面的示例,我们可以运行一条简单的命令来完成同样的任务。 在该方法中,我们使用 `-v` 选项将一个 shell 变量的值赋给一个 awk 变量。 + 首先,创建一个 shell 变量 `username`,然后给它赋予一个我们希望在 /etc/passwd 文件中搜索的名称。 ``` @@ -63,7 +67,8 @@ username="aaronkilik" ``` ![](http://www.tecmint.com/wp-content/uploads/2016/08/Find-Username-in-Password-File-Using-Awk.png) -> *使用 awk 在 Password 文件中查找用户名* + +*使用 awk 在 Password 文件中查找用户名* 上述命令的说明: @@ -71,7 +76,7 @@ username="aaronkilik" - `username`:是 shell 变量 - `name`:是 awk 变量 -让我们仔细瞧瞧 awk 脚本 `' $0 ~ name {print $0}'` 中的 `$0 ~ name`。还记得么,当我们在 awk 系列第四节中介绍 awk 比较运算符时,`value ~ pattern` 便是比较运算符之一,它是指:如果 `value` 匹配了 `pattern` 则返回 `true`。 +让我们仔细瞧瞧 awk 脚本 `' $0 ~ name {print $0}'` 中的 `$0 ~ name`。还记得么,当我们在 awk 系列第四节中介绍 awk 比较运算符时,`value ~ pattern` 便是比较运算符之一,它是指:如果 `value` 匹配了 `pattern` 则返回 `true`。 cat 命令通过管道传给 awk 的 `output($0)` 与模式 `(aaronkilik)` 匹配,该模式即为我们在 /etc/passwd 中搜索的名称,最后,比较操作返回 `true`。接下来会在屏幕上输出包含用户账户信息的行。 @@ -84,11 +89,11 @@ cat 命令通过管道传给 awk 的 `output($0)` 与模式 `(aaronkilik)` 匹 -------------------------------------------------------------------------------- -via: http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 +via: http://www.tecmint.com/use-shell-script-variable-in-awk/ 作者:[Aaron Kili][a] 译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对ID](https://github.com/校对ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 18f72540c2b8ae3dd2459c727378e025cefff07f Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 10 Aug 2016 10:32:09 +0800 Subject: [PATCH 385/471] =?UTF-8?q?PUB:20160615=20Explanation=20of=20?= =?UTF-8?q?=E2=80=9CEverything=20is=20a=20File=E2=80=9D=20and=20Types=20of?= =?UTF-8?q?=20Files=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @runningwater --- ...ng is a File” and Types of Files in Linux.md | 
58 +++++++++---------- 1 file changed, 28 insertions(+), 30 deletions(-) rename {translated/talk => published}/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md (75%) diff --git a/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md b/published/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md similarity index 75% rename from translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md rename to published/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md index c9a1b243ff..62379377cd 100755 --- a/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md +++ b/published/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md @@ -2,15 +2,16 @@ ==================================================================== ![](http://www.tecmint.com/wp-content/uploads/2016/05/Everything-is-a-File-in-Linux.png) ->Linux 系统中一切都是文件并有相应的文件类型 + +*Linux 系统中一切都是文件并有相应的文件类型* 在 Unix 和它衍生的比如 Linux 系统中,一切都可以看做文件。虽然它仅仅只是一个泛泛的概念,但这是事实。如果有不是文件的,那它一定是正运行的进程。 -要理解这点,可以举个例子,您的根目录(/) 的空间是由不同类型的 Linux 文件所占据的。当您创建一个文件或向系统传一个文件时,它在物理磁盘上占据的一些空间,可以认为是一个特定的格式(文件类型)。 +要理解这点,可以举个例子,您的根目录(/)的空间充斥着不同类型的 Linux 文件。当您创建一个文件或向系统传输一个文件时,它会在物理磁盘上占据的一些空间,而且是一个特定的格式(文件类型)。 -虽然 Linux 系统中文件和目录没有什么不同,但目录还有一个重要的功能,那就是有结构性的分组存储其它文件,以方便查找访问。所有的硬件部件都表示为文件,系统使用这些文件来与硬件通信。 +虽然 Linux 系统中文件和目录没有什么不同,但目录还有一个重要的功能,那就是有结构性的分组存储其它文件,以方便查找访问。所有的硬件组件都表示为文件,系统使用这些文件来与硬件通信。 -这些思想是对伟大的 Linux 财产的重要阐述,因此像文档、目录(Mac OS X 和 Windows 系统下是文件夹)、键盘、监视器、硬盘、可移动媒体设备、打印机、调制解调器、虚拟终端,还有进程间通信(IPC)和网络通信等输入/输出资源都在定义在文件系统空间下的字节流。 +这些思想是对 Linux 中的各种事物的重要阐述,因此像文档、目录(Mac OS X 和 Windows 系统下称之为文件夹)、键盘、监视器、硬盘、可移动媒体设备、打印机、调制解调器、虚拟终端,还有进程间通信(IPC)和网络通信等输入/输出资源都是定义在文件系统空间下的字节流。 一切都可看作是文件,其最显著的好处是对于上面所列出的输入/输出资源,只需要相同的一套 Linux 工具、实用程序和 API。 @@ -28,7 +29,7 @@ Linux 系统中有三种基本的文件类型: 它们是包含文本、数据、程序指令等数据的文件,其在 Linux 系统中是最常见的一种。包括如下: -- 只读文件 +- 可读文件 - 二进制文件 - 
图像文件 - 压缩文件等等 @@ -37,7 +38,7 @@ Linux 系统中有三种基本的文件类型: 特殊文件包括以下几种: -块文件:设备文件,对访问系统硬件部件提供了缓存接口。他们提供了一种使用文件系统与设备驱动通信的方法。 +块文件(block):设备文件,对访问系统硬件部件提供了缓存接口。它们提供了一种通过文件系统与设备驱动通信的方法。 有关于块文件一个重要的性能就是它们能在指定时间内传输大块的数据和信息。 @@ -73,7 +74,7 @@ brw-rw---- 1 root disk 1, 5 May 18 10:26 ram5 ... ``` -字符文件: 也是设备文件,对访问系统硬件组件提供了非缓冲串行接口。它们与设备的通信工作方式是一次只传输一个字符的数据。 +字符文件(Character): 也是设备文件,对访问系统硬件组件提供了非缓冲串行接口。它们与设备的通信工作方式是一次只传输一个字符的数据。 列出某目录下的字符文件: @@ -113,7 +114,7 @@ crw-rw-rw- 1 root tty 5, 2 May 18 17:40 ptmx crw-rw-rw- 1 root root 1, 8 May 18 10:26 random ``` -符号链接文件 : 符号链接是指向系统上其他文件的引用。因此,符号链接文件是指向其它文件的文件,也可以是目录或常规文件。 +符号链接文件(Symbolic link) : 符号链接是指向系统上其他文件的引用。因此,符号链接文件是指向其它文件的文件,那些文件可以是目录或常规文件。 列出某目录下的符号链接文件: @@ -142,11 +143,11 @@ Linux 中使用 `ln` 工具就可以创建一个符号链接文件,如下所 # ls -l /home/tecmint/ | grep "^l" [列出符号链接文件] ``` -在上面的例子中,首先我们在 `/tmp` 目录创建了一个名叫 `file1.txt` 的文件,然后创建符号链接文件,所以 `/home/tecmint/file1.txt` 指向 `/tmp/file1.txt` 文件。 +在上面的例子中,首先我们在 `/tmp` 目录创建了一个名叫 `file1.txt` 的文件,然后创建符号链接文件,将 `/home/tecmint/file1.txt` 指向 `/tmp/file1.txt` 文件。 -套接字和命令管道 : 连接一个进行的输出和另一个进程的输入,允许进程间通信的文件。 +管道(Pipes)和命令管道(Named pipes) : 将一个进程的输出连接到另一个进程的输入,从而允许进程间通信(IPC)的文件。 -命名管道实际上是一个文件,用来使两个进程彼此通信,就像一个 Linux pipe(管道) 命令一样。 +命名管道实际上是一个文件,用来使两个进程彼此通信,就像一个 Linux 管道一样。 列出某目录下的管道文件: @@ -154,7 +155,7 @@ Linux 中使用 `ln` 工具就可以创建一个符号链接文件,如下所 # ls -l | grep "^p" ``` -输出例子 +输出例子: ``` prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe1 @@ -171,19 +172,17 @@ prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe5 # echo "This is named pipe1" > pipe1 ``` -在上的例子中,我们创建了一个名叫 `pipe1` 的命名管道,然后使用 [echo 命令][2] 加入一些数据,在这操作后,要使用这些输入数据就要用非交互的 shell 了。 +在上的例子中,我们创建了一个名叫 `pipe1` 的命名管道,然后使用 [echo 命令][2] 加入一些数据,这之后在处理输入的数据时 shell 就变成非交互式的了(LCTT 译注:被管道占住了)。 - - -然后,我们打开另外的 shell 终端,运行另外的命令来打印出刚加入管道的数据。 +然后,我们打开另外一个 shell 终端,运行另外的命令来打印出刚加入管道的数据。 ``` # while read line ;do echo "This was passed-'$line' "; done Date: Wed, 10 Aug 2016 12:06:00 +0800 Subject: [PATCH 386/471] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91=20?= 
=?UTF-8?q?Web=20Service=20Efficiency=20at=20Instagram=20with=20Python?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ice Efficiency at Instagram with Python.md | 75 ------------------ ...ice Efficiency at Instagram with Python.md | 78 +++++++++++++++++++ 2 files changed, 78 insertions(+), 75 deletions(-) delete mode 100644 sources/tech/20160602 Web Service Efficiency at Instagram with Python.md create mode 100644 translated/tech/20160602 Web Service Efficiency at Instagram with Python.md diff --git a/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md b/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md deleted file mode 100644 index b3cf1f4192..0000000000 --- a/sources/tech/20160602 Web Service Efficiency at Instagram with Python.md +++ /dev/null @@ -1,75 +0,0 @@ -Being translated by ChrisLeeGit - -Web Service Efficiency at Instagram with Python -=============================================== - -Instagram currently features the world’s largest deployment of the Django web framework, which is written entirely in Python. We initially chose to use Python because of its reputation for simplicity and practicality, which aligns well with our philosophy of “do the simple thing first.” But simplicity can come with a tradeoff: efficiency. Instagram has doubled in size over the last two years and recently crossed 500 million users, so there is a strong need to maximize web service efficiency so that our platform can continue to scale smoothly. In the past year we’ve made our efficiency program a priority, and over the last six months we’ve been able to maintain our user growth without adding new capacity to our Django tiers. In this post, we’ll share some of the tools we built and how we use them to optimize our daily deployment flow. - -### Why Efficiency? - -Instagram, like all software, is limited by physical constraints like servers and datacenter power. 
With these constraints in mind, there are two main goals we want to achieve with our efficiency program: - -1. Instagram should be able to serve traffic normally with continuous code rollouts in the case of lost capacity in one data center region, due to natural disaster, regional network issues, etc. -2. Instagram should be able to freely roll out new products and features without being blocked by capacity. - -To meet these goals, we realized we needed to persistently monitor our system and battle regression. - -### Defining Efficiency - -Web services are usually bottlenecked by available CPU time on each server. Efficiency in this context means using the same amount of CPU resources to do more work, a.k.a, processing more user requests per second (RPS). As we look for ways to optimize, our first challenge is trying to quantify our current efficiency. Up to this point, we were approximating efficiency using ‘Average CPU time per requests,’ but there were two inherent limitations to using this metric: - -1. Diversity of devices. Using CPU time for measuring CPU resources is not ideal because it is affected by both CPU models and CPU loads. -2. Request impacts data. Measuring CPU resource per request is not ideal because adding and removing light or heavy requests would also impact the efficiency metric using the per-requests measurement. - -Compared to CPU time, CPU instruction is a better metric, as it reports the same numbers regardless of CPU models and CPU loads for the same request. Instead of linking all our data to each user request, we chose to use a ‘per active user’ metric. We eventually landed on measuring efficiency by using ‘CPU instruction per active user during peak minute.’ With our new metric established, our next step was to learn more about our regressions by profiling Django. - -### Profiling the Django Service - -There are two major questions we want to answer by profiling our Django web service: - -1. Does a CPU regression happen? -2. 
What causes the CPU regression and how do we fix it? - -To answer the first question, we need to track the CPU-instruction-per-active-user metric. If this metric increases, we know a CPU regression has occurred. - -The tool we built for this purpose is called Dynostats. Dynostats utilizes Django middleware to sample user requests by a certain rate, recording key efficiency and performance metrics such as the total CPU instructions, end to end requests latency, time spent on accessing memcache and database services, etc. On the other hand, each request has multiple metadata that we can use for aggregation, such as the endpoint name, the HTTP return code of the request, the server name that serves this request, and the latest commit hash on the request. Having two aspects for a single request record is especially powerful because we can slice and dice on various dimensions that help us narrow down the cause of any CPU regression. For example, we can aggregate all requests by their endpoint names as shown in the time series chart below, where it is very obvious to spot if any regression happens on a specific endpoint. - -![](https://d262ilb51hltx0.cloudfront.net/max/800/1*3iouYiAchYBwzF-v0bALMw.png) - -CPU instructions matter for measuring efficiency — and they’re also the hardest to get. Python does not have common libraries that support direct access to the CPU hardware counters (CPU hardware counters are the CPU registers that can be programmed to measure performance metrics, such as CPU instructions). Linux kernel, on the other hand, provides the perf_event_open system call. Bridging through Python ctypes enables us to call the syscall function in standard C library, which also provides C compatible data types for programming the hardware counters and reading data from them. 
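The article stops short of showing the bridging code itself. A minimal sketch of the idea follows; this is not Instagram's implementation: the struct layout and the syscall number 298 assume x86-64 Linux, `count_instructions` is an invented helper name, and the function returns `None` wherever perf counters cannot be opened (containers, restrictive `perf_event_paranoid` settings, other architectures):

```python
import ctypes
import ctypes.util
import os
import struct

# Constants from <linux/perf_event.h> / <sys/syscall.h> (x86-64 Linux).
PERF_TYPE_HARDWARE = 0
PERF_COUNT_HW_INSTRUCTIONS = 1
PERF_EVENT_IOC_ENABLE = 0x2400
PERF_EVENT_IOC_DISABLE = 0x2401
PERF_EVENT_IOC_RESET = 0x2403
SYS_perf_event_open = 298

class perf_event_attr(ctypes.Structure):
    # C-compatible layout of struct perf_event_attr (first 112 bytes).
    _fields_ = [
        ('type', ctypes.c_uint32), ('size', ctypes.c_uint32),
        ('config', ctypes.c_uint64), ('sample_period', ctypes.c_uint64),
        ('sample_type', ctypes.c_uint64), ('read_format', ctypes.c_uint64),
        ('flags', ctypes.c_uint64),  # bitfield: disabled, exclude_kernel, ...
        ('wakeup_events', ctypes.c_uint32), ('bp_type', ctypes.c_uint32),
        ('config1', ctypes.c_uint64), ('config2', ctypes.c_uint64),
        ('branch_sample_type', ctypes.c_uint64),
        ('sample_regs_user', ctypes.c_uint64),
        ('sample_stack_user', ctypes.c_uint32), ('clockid', ctypes.c_int32),
        ('sample_regs_intr', ctypes.c_uint64),
        ('aux_watermark', ctypes.c_uint32), ('reserved', ctypes.c_uint32),
    ]

libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)

def count_instructions(fn):
    """Run fn() and return the user-space CPU instructions it executed,
    or None when the perf counters cannot be opened here."""
    attr = perf_event_attr()
    attr.type = PERF_TYPE_HARDWARE
    attr.size = ctypes.sizeof(attr)
    attr.config = PERF_COUNT_HW_INSTRUCTIONS
    attr.flags = (1 << 0) | (1 << 5) | (1 << 6)  # disabled|exclude_kernel|exclude_hv
    fd = libc.syscall(SYS_perf_event_open, ctypes.byref(attr), 0, -1, -1, 0)
    if fd < 0:
        return None
    try:
        libc.ioctl(fd, PERF_EVENT_IOC_RESET, 0)
        libc.ioctl(fd, PERF_EVENT_IOC_ENABLE, 0)
        fn()
        libc.ioctl(fd, PERF_EVENT_IOC_DISABLE, 0)
        return struct.unpack('q', os.read(fd, 8))[0]  # counter value, u64
    finally:
        os.close(fd)

count = count_instructions(lambda: sum(x * x for x in range(100000)))
print(count)  # a large instruction count, or None if unavailable
```

Reading the counter around a request, instead of timing the request, is what makes the metric stable across CPU models and load levels.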
- - With Dynostats, we can already find CPU regressions and dig into the cause of the CPU regression, such as which endpoint gets impacted most, who committed the changes that actually cause the CPU regression, etc. However, when a developer is notified that their changes have caused a CPU regression, they usually have a hard time finding the problem. If it was obvious, the regression probably wouldn’t have been committed in the first place! - - That’s why we needed a Python profiler that the developer can use to find the root cause of the regression (once Dynostats identifies it). Instead of starting from scratch, we decided to make slight alterations to cProfile, a readily available Python profiler. The cProfile module normally provides a set of statistics describing how long and how often various parts of a program were executed. Instead of measuring in time, we took cProfile and replaced the timer with a CPU instruction counter that reads from hardware counters. The data is created at the end of the sampled requests and sent to some data pipelines. We also send metadata similar to what we have in Dynostats, such as server name, cluster, region, endpoint name, etc. -On the other side of the data pipeline, we created a tailer to consume the data. The main functionality of the tailer is to parse the cProfile stats data and create entities that represent Python function-level CPU instructions. By doing so, we can aggregate CPU instructions by Python functions, making it easier to tell which functions contribute to CPU regression. - -### Monitoring and Alerting Mechanism - -At Instagram, we [deploy our backend 30–50 times a day][1]. Any one of these deployments can contain troublesome CPU regressions. Since each rollout usually includes at least one diff, it is easy to identify the cause of any regression. 
Our efficiency monitoring mechanism includes scanning the CPU instruction in Dynostats before and after each rollout, and sending out alerts when the change exceeds a certain threshold. For the CPU regressions happening over longer periods of time, we also have a detector to scan daily and weekly changes for the most heavily loaded endpoints. - - Deploying new changes is not the only thing that can trigger a CPU regression. In many cases, the new features or new code paths are controlled by global environment variables (GEV). There are very common practices for rolling out new features to a subset of users on a planned schedule. We added this information as extra metadata fields for each request in Dynostats and cProfile stats data. Grouping requests by those fields reveal possible CPU regressions caused by turning the GEVs. This enables us to catch CPU regressions before they can impact performance. - -### What’s Next? - -Dynostats and our customized cProfile, along with the monitoring and alerting mechanism we’ve built to support them, can effectively identify the culprit for most CPU regressions. These developments have helped us recover more than 50% of unnecessary CPU regressions, which would have otherwise gone unnoticed. - - There are still areas where we can improve and make it easier to embed into Instagram’s daily deployment flow: - -1. The CPU instruction metric is supposed to be more stable than other metrics like CPU time, but we still observe variances that make our alerting noisy. Keeping signal:noise ratio reasonably low is important so that developers can focus on the real regressions. This could be improved by introducing the concept of confidence intervals and only alarm when it is high. For different endpoints, the threshold of variation could also be set differently. -2. One limitation for detecting CPU regressions by GEV change is that we have to manually enable the logging of those comparisons in Dynostats. 
As the number of GEVs increases and more features are developed, this wont scale well. Instead, we could leverage an automatic framework that schedules the logging of these comparisons and iterates through all GEVs, and send alerts when regressions are detected. -3. cProfile needs some enhancement to handle wrapper functions and their children functions better. - -With the work we’ve put into building the efficiency framework for Instagram’s web service, we are confident that we will keep scaling our service infrastructure using Python. We’ve also started to invest more into the Python language itself, and are beginning to explore moving our Python from version 2 to 3. We will continue to explore this and more experiments to keep improving both infrastructure and developer efficiency, and look forward to sharing more soon. - --------------------------------------------------------------------------------- - -via: https://engineering.instagram.com/web-service-efficiency-at-instagram-with-python-4976d078e366#.tiakuoi4p - -作者:[Min Ni][a] -译者:[ChrisLeeGit](https://github.com/chrisleegit) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://engineering.instagram.com/@InstagramEng?source=post_header_lockup -[1]: https://engineering.instagram.com/continuous-deployment-at-instagram-1e18548f01d1#.p5adp7kcz diff --git a/translated/tech/20160602 Web Service Efficiency at Instagram with Python.md b/translated/tech/20160602 Web Service Efficiency at Instagram with Python.md new file mode 100644 index 0000000000..b35fad81eb --- /dev/null +++ b/translated/tech/20160602 Web Service Efficiency at Instagram with Python.md @@ -0,0 +1,78 @@ +Instagram Web 服务效率与 Python +=============================================== + +Instagram 目前是世界上最大规模部署 Django web 框架(该框架完全使用 Python 编写)的主角。我们最初选用 Python 是因为它久负盛名的简洁性与实用性,这非常符合我们的哲学思想——“先做简单的事情”。但简洁性也会带来效率方面的折衷。Instagram 的规模在过去两年中已经翻番,并且最近已突破 5 
亿用户,所以急需最大程度地提升 web 服务效率以便我们的平台能够继续顺利地扩大。在过去的一年,我们已经将效率计划(efficiency program)提上日程,并在过去的六个月,我们已经能够做到无需向我们的 Django 层(Django tiers)添加新的容量来维护我们的用户增长。我们将在本文分享一些由我们构建的工具以及如何使用它们来优化我们的日常部署流程。 + +### 为何需要提升效率? + +Instagram,正如所有的软件,受限于像服务器和数据中心能源这样的样物理限制。鉴于这些限制,在我们的效率计划中有两个我们希望实现的主要目标: + +1. Instagram 应当能够利用持续代码发布提供正常地通信服务,防止因为自然灾害、区域性网络问题等造成某一个数据中心区丢失。 +2. Instagram 应当能够自由地滚动发布新产品和新功能,不必因容量而受阻。 + +想要实现这些目标,我们意识到我们需要持续不断地监控我们的系统并在战斗中回归(battle regression)。 + + +### 定义效率 + +Web 服务器的瓶颈通常在于每台服务器上可用的 CPU 时间。在这种环境下,效率就意味着利用相同的 CPU 资源完成更多的任务,也就是说,每秒处理更多的用户请求(requests per second, RPS)。当我们寻找优化方法时,我们面临的第一个最大的挑战就是尝试量化我们当前的效率。到目前为止,我们一直在使用“每次请求的平均 CPU 时间”来评估效率,但使用这种指标也有其固有限制: + +1. 设备多样性。使用 CPU 时间来测量 CPU 资源并非理想方案,因为它同时受到 CPU 模型与 CPU 负载影响。 +1. 请求影响数据。测量每次请求的 CPU 资源并非理想方案,因为在使用每次请求测量(per-request measurement)方案时,添加或移除轻量级或重量级的请求也会影响到效率指标。 + +相对于 CPU 时间来说,CPU 指令是一种更好的指标,因为对于相同的请求,它会报告相同的数字,不管 CPU 模型和 CPU 负载情况如何。我们选择使用了一种叫做”每个活动用户(per active user)“的指标,而不是将我们所有的数据链接到每个用户请求上。我们最终采用”每个活动用户在高峰期间的 CPU 指令(CPU instruction per active user during peak minute)“来测量效率。我们建立好新的度量标准后,下一步就是通过对 Django 的分析来学习更多关于我们的回归(our regressions)。 + +### Django 服务分析 + +通过分析我们的 Django web 服务,我们希望回答两个主要问题: + +1. 一次 CPU 回归会发生吗? +2. 是什么导致了 CPU 回归问题发生以及我们该怎样修复它? 
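为了更直观地理解上面的分析思路,下面用一小段 Python 代码示意“每个活动用户的 CPU 指令”这一指标的计算方式。这只是一个帮助理解的极简草图,其中的函数名、参数和数据都是本文为举例而假设的,并非 Instagram 的实际实现:

```python
def cpu_instructions_per_active_user(sampled_instructions, sampling_rate, active_users):
    """由采样数据估算“每个活动用户的 CPU 指令”。

    sampled_instructions:采样到的各请求消耗的 CPU 指令数列表
    sampling_rate:采样率(0.5 表示采样了 50% 的请求)
    active_users:高峰时段的活跃用户数
    """
    # 先由采样外推出总指令数,再摊到每个活跃用户头上
    total_instructions = sum(sampled_instructions) / sampling_rate
    return total_instructions / active_users

# 用法示例:3 个采样请求、50% 采样率、50 个活跃用户
print(cpu_instructions_per_active_user([1000, 1200, 800], 0.5, 50))  # 120.0
```

只要该指标随时间上升,就说明发生了 CPU 回归,值得进一步用分析器定位原因。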
+ +想要回答第一个问题,我们需要追踪“每个活动用户的 CPU 指令(CPU-instruction-per-active-user)”指标。如果该指标增加,我们就知道一次 CPU 回归已经发生了。 + +我们为此构建的工具叫做 Dynostats。Dynostats 利用 Django 中间件以一定的速率采样用户请求,记录关键的效率以及性能指标,例如 CPU 总指令数、端到端请求时延、花费在访问内存缓存(memcache)和数据库服务的时间等。另一方面,每个请求都有很多可用于聚合的元数据(metadata),例如端点名称、HTTP 请求返回码、服务该请求的服务器名称以及请求中最新提交的哈希值(hash)。单个请求记录的这两方面信息非常强大,因为我们可以在不同的维度上对数据进行切分,从而帮助我们缩小导致 CPU 回归的原因的范围。例如,我们可以根据端点名称聚合所有请求,正如下面的时间序列图所示,从图中可以清晰地看出在特定端点上是否发生了回归。 + +![](https://d262ilb51hltx0.cloudfront.net/max/800/1*3iouYiAchYBwzF-v0bALMw.png) + +CPU 指令对测量效率很重要——当然,它们也很难获得。Python 并没有支持直接访问 CPU 硬件计数器(CPU 硬件计数器是指可编程 CPU 寄存器,用于测量性能指标,例如 CPU 指令)的公共库。另一方面,Linux 内核提供了 `perf_event_open` 系统调用。Python 的 ctypes 桥接技术能够让我们调用标准 C 库中的系统调用函数,它也为我们提供了兼容 C 的数据类型,从而可以对硬件计数器编程并从中读取数据。 + +使用 Dynostats,我们已经可以找出 CPU 回归,并探究 CPU 回归发生的原因,例如哪个端点受到的影响最多,谁提交了真正会导致 CPU 回归的变更等。然而,当开发者收到他们的变更已经导致一次 CPU 回归发生的通知时,他们通常难以找出问题所在。如果问题很明显,那么回归可能一开始就不会被提交了! + +这就是为何我们需要一个 Python 分析器,从而使开发者能够使用它找出回归(一旦 Dynostats 发现了它)发生的根本原因。我们没有从头写一个分析器,而是决定对一个现成的 Python 分析器 cProfile 做适当的修改。cProfile 模块通常会提供一个统计集合来描述程序不同部分的执行时间和执行频率。我们将 cProfile 的定时器(timer)替换成了一个从硬件计数器读取数据的 CPU 指令计数器,以此取代对时间的测量。我们在采样请求后产生数据并把数据发送到数据流水线。我们也会发送一些和 Dynostats 中类似的元数据,例如服务器名称、集群、区域、端点名称等。 +在数据流水线的另一边,我们创建了一个消费数据的尾随者(tailer)。尾随者的主要功能是解析 cProfile 的统计数据,并创建能够表示 Python 函数级别的 CPU 指令的实体。如此,我们能够通过 Python 函数来聚合 CPU 指令,从而更加方便地找出是什么函数导致了 CPU 回归。 + +### 监控与警报机制 + +在 Instagram,我们 [每天部署 30-50 次后端服务][1]。这些部署中的任何一个都可能引发 CPU 回归问题。因为每次部署通常都包含至少一个差异(diff),所以找出导致回归的变更是比较容易的。我们的效率监控机制包括在每次发布前后扫描 Dynostats 中的 CPU 指令,并且当变更超出某个阈值时发出警报。对于较长时间内发生的 CPU 回归,我们还有一个检测器,会对负载最重的那些端点做每日和每周的变更扫描。 + +部署新的变更并非触发 CPU 回归的唯一情况。在许多情况下,新的功能和新的代码路径都由全局环境变量(global environment variables,GEV)控制。按照计划的时间表将新功能发布给一部分用户,是非常常见的做法。我们在 Dynostats 和 cProfile 统计数据中为每个请求添加了这个信息作为额外的元数据字段。按这些字段对请求分组,可以揭示出因切换 GEV 而导致的潜在 CPU 回归。这让我们能够在它们对性能造成影响前就捕获到 CPU 回归。 + + +### 接下来是什么?
+ +Dynostats 和我们定制的 cProfile,以及我们建立去支持它们的监控和警报机制能够有效地找出大多数导致 CPU 回归的元凶。这些进展已经帮助我们恢复了超过 50% 的不必要的 CPU 回归,否则我们就根本不会知道。 + +我们仍然还有一些可以提升的方面并可以更加便捷将它们地加入到 Instagram 的日常部署流程中: + +1. CPU 指令指标应该要比其它指标如 CPU 时间更加稳定,但我们仍然观察了让我们头疼的差异。保持信号“信噪比(noise ratio)”合理地低是非常重要的,这样开发者们就可以集中于真实的回归上。这可以通过引入置信区间(confidence intervals)的概念来提升,并在信噪比过高时发出警报。针对不同的端点,变化的阈值也可以设置为不同值。 +2. 通过更改 GEV 来探测 CPU 回归的一个限制就是我们要在 Dynostats 中手动启用这些比较的日志输出。当 GEV 逐渐增加,越来越多的功能被开发出来,这就不便于扩展了。 +作为替代,我们能够利用一个自动化框架来调度这些比较的日志输出,并对所有的 GEV 进行遍历,然后当检查到回归时就发出警告。 +3. cProfile 需要一些增强以便更好地处理装饰器函数以及它们的子函数。 + +鉴于我们在为 Instagram 的 web 服务构建效率框架中所投入的工作,所以我们对于将来使用 Python 继续扩展我们的服务很有信心。 +我们也开始向 Python 语言自身投入更多,并且开始探索从 Python 2 转移 Python 3 之道。我们将会继续探索并做更多的实验以继续提升基础设施与开发者效率,我们期待着很快能够分享更多的经验。 + + +-------------------------------------------------------------------------------- + +via: https://engineering.instagram.com/web-service-efficiency-at-instagram-with-python-4976d078e366#.tiakuoi4p + +作者:[Min Ni][a] +译者:[ChrisLeeGit](https://github.com/chrisleegit) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://engineering.instagram.com/@InstagramEng?source=post_header_lockup +[1]: https://engineering.instagram.com/continuous-deployment-at-instagram-1e18548f01d1#.p5adp7kcz From ae99559d8b8a770d8478a76f87d0fc8bed324b06 Mon Sep 17 00:00:00 2001 From: alim0x Date: Wed, 10 Aug 2016 21:09:37 +0800 Subject: [PATCH 387/471] [translated]Implementing Mandatory Access Control ...with SELinux or AppArmor in Linux --- ...ntrol with SELinux or AppArmor in Linux.md | 250 ------------------ ...ntrol with SELinux or AppArmor in Linux.md | 247 +++++++++++++++++ 2 files changed, 247 insertions(+), 250 deletions(-) delete mode 100644 sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md create mode 100644 translated/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md diff --git 
a/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md b/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md deleted file mode 100644 index c4c6aecec6..0000000000 --- a/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md +++ /dev/null @@ -1,250 +0,0 @@ -alim0x translating - -Implementing Mandatory Access Control with SELinux or AppArmor in Linux -=========================================================================== - -To overcome the limitations of and to increase the security mechanisms provided by standard ugo/rwx permissions and [access control lists][1], the United States National Security Agency (NSA) devised a flexible Mandatory Access Control (MAC) method known as SELinux (short for Security Enhanced Linux) in order to restrict among other things, the ability of processes to access or perform other operations on system objects (such as files, directories, network ports, etc) to the least permission possible, while still allowing for later modifications to this model. - -![](http://www.tecmint.com/wp-content/uploads/2016/06/SELinux-AppArmor-Security-Hardening-Linux.png) ->SELinux and AppArmor Security Hardening Linux - -Another popular and widely-used MAC is AppArmor, which in addition to the features provided by SELinux, includes a learning mode that allows the system to “learn” how a specific application behaves, and to set limits by configuring profiles for safe application usage. - -In CentOS 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by default (more on this in the next section), as opposed to openSUSE and Ubuntu which use AppArmor. - -In this article we will explain the essentials of SELinux and AppArmor and how to use one of these tools for your benefit depending on your chosen distribution. 
- -### Introduction to SELinux and How to Use it on CentOS 7 - -Security Enhanced Linux can operate in two different ways: - -- Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that control the security engine. -- Permissive: SELinux does not deny access, but denials are logged for actions that would have been denied if running in enforcing mode. - -SELinux can also be disabled. Although it is not an operation mode itself, it is still an option. However, learning how to use this tool is better than just ignoring it. Keep it in mind! - -To display the current mode of SELinux, use getenforce. If you want to toggle the operation mode, use setenforce 0 (to set it to Permissive) or setenforce 1 (Enforcing). - -Since this change will not survive a reboot, you will need to edit the /etc/selinux/config file and set the SELINUX variable to either enforcing, permissive, or disabled in order to achieve persistence across reboots: - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Enable-Disable-SELinux-Mode.png) ->How to Enable and Disable SELinux Mode - -On a side note, if getenforce returns Disabled, you will have to edit /etc/selinux/config with the desired operation mode and reboot. Otherwise, you will not be able to set (or toggle) the operation mode with setenforce. - -One of the typical uses of setenforce consists of toggling between SELinux modes (from enforcing to permissive or the other way around) to troubleshoot an application that is misbehaving or not working as expected. If it works after you set SELinux to Permissive mode, you can be confident you’re looking at a SELinux permissions issue. - -Two classic cases where we will most likely have to deal with SELinux are: - -- Changing the default port where a daemon listens on. -- Setting the DocumentRoot directive for a virtual host outside of /var/www/html. - -Let’s take a look at these two cases using the following examples. 
- -#### EXAMPLE 1: Changing the default port for the sshd daemon - -One of the first thing most system administrators do in order to secure their servers is change the port where the SSH daemon listens on, mostly to discourage port scanners and external attackers. To do this, we use the Port directive in `/etc/ssh/sshd_config` followed by the new port number as follows (we will use port 9999 in this case): - -``` -Port 9999 -``` - -After attempting to restart the service and checking its status we will see that it failed to start: - -``` -# systemctl restart sshd -# systemctl status sshd -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-sshd-Service-Status.png) ->Check SSH Service Status - -If we take a look at /var/log/audit/audit.log, we will see that sshd was prevented from starting on port 9999 by SELinux because that is a reserved port for the JBoss Management service (SELinux log messages include the word “AVC” so that they might be easily identified from other messages): - -``` -# cat /var/log/audit/audit.log | grep AVC | tail -1 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-Linux-Audit-Logs.png) ->Check Linux Audit Logs - -At this point most people would probably disable SELinux but we won’t. We will see that there’s a way for SELinux, and sshd listening on a different port, to live in harmony together. Make sure you have the policycoreutils-python package installed and run: - -``` -# yum install policycoreutils-python -``` - -To view a list of the ports where SELinux allows sshd to listen on. 
In the following image we can also see that port 9999 was reserved for another service and thus we can’t use it to run another service for the time being: - -``` -# semanage port -l | grep ssh -``` - -Of course we could choose another port for SSH, but if we are certain that we will not need to use this specific machine for any JBoss-related services, we can then modify the existing SELinux rule and assign that port to SSH instead: - -``` -# semanage port -m -t ssh_port_t -p tcp 9999 -``` - -After that, we can use the first semanage command to check if the port was correctly assigned, or the -lC options (short for list custom): - -``` -# semanage port -lC -# semanage port -l | grep ssh -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Assign-Port-to-SSH.png) ->Assign Port to SSH - -We can now restart SSH and connect to the service using port 9999. Note that this change WILL survive a reboot. - -#### EXAMPLE 2: Choosing a DocumentRoot outside /var/www/html for a virtual host - -If you need to [set up a Apache virtual host][2] using a directory other than /var/www/html as DocumentRoot (say, for example, `/websrv/sites/gabriel/public_html`): - -``` -DocumentRoot “/websrv/sites/gabriel/public_html” -``` - -Apache will refuse to serve the content because the index.html has been labeled with the default_t SELinux type, which Apache can’t access: - -``` -# wget http://localhost/index.html -# ls -lZ /websrv/sites/gabriel/public_html/index.html -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Labeled-default_t-SELinux-Type.png) ->Labeled as default_t SELinux Type - -As with the previous example, you can use the following command to verify that this is indeed a SELinux-related issue: - -``` -# cat /var/log/audit/audit.log | grep AVC | tail -1 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-Logs-for-SELinux-Issues.png) ->Check Logs for SELinux Issues - -To change the label of /websrv/sites/gabriel/public_html recursively to 
httpd_sys_content_t, do: - -``` -# semanage fcontext -a -t httpd_sys_content_t "/websrv/sites/gabriel/public_html(/.*)?" -``` - -The above command will grant Apache read-only access to that directory and its contents. - -Finally, to apply the policy (and make the label change effective immediately), do: - -``` -# restorecon -R -v /websrv/sites/gabriel/public_html -``` - -Now you should be able to access the directory: - -``` -# wget http://localhost/index.html -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Access-Apache-Directory.png) ->Access Apache Directory - -For more information on SELinux, refer to the Fedora 22 [SELinux and Administrator guide][3]. - - -### Introduction to AppArmor and How to Use it on OpenSUSE and Ubuntu - -The operation of AppArmor is based on profiles defined in plain text files where the allowed permissions and access control rules are set. Profiles are then used to place limits on how applications interact with processes and files in the system. - -A set of profiles is provided out-of-the-box with the operating system, whereas others can be put in place either automatically by applications when they are installed or manually by the system administrator. - -Like SELinux, AppArmor runs profiles in two modes. In enforce mode, applications are given the minimum permissions that are necessary for them to run, whereas in complain mode AppArmor allows an application to take restricted actions and saves the “complaints” resulting from that operation to a log (/var/log/kern.log, /var/log/audit/audit.log, and other logs inside /var/log/apparmor). - -These logs will show through lines with the word audit in them errors that would occur should the profile be run in enforce mode. Thus, you can try out an application in complain mode and adjust its behavior before running it under AppArmor in enforce mode. 
- -The current status of AppArmor can be shown using: - -``` -$ sudo apparmor_status -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-AppArmor-Status.png) ->Check AppArmor Status - -The image above indicates that the profiles /sbin/dhclient, /usr/sbin/, and /usr/sbin/tcpdump are in enforce mode (that is true by default in Ubuntu). - -Since not all applications include the associated AppArmor profiles, the apparmor-profiles package, which provides other profiles that have not been shipped by the packages they provide confinement for. By default, they are configured to run in complain mode so that system administrators can test them and choose which ones are desired. - -We will make use of apparmor-profiles since writing our own profiles is out of the scope of the LFCS [certification][4]. However, since profiles are plain text files, you can view them and study them in preparation to create your own profiles in the future. - -AppArmor profiles are stored inside /etc/apparmor.d. Let’s take a look at the contents of that directory before and after installing apparmor-profiles: - -``` -$ ls /etc/apparmor.d -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/06/View-AppArmor-Directory-Content.png) ->View AppArmor Directory Content - -If you execute sudo apparmor_status again, you will see a longer list of profiles in complain mode. You can now perform the following operations: - -To switch a profile currently in enforce mode to complain mode: - -``` -$ sudo aa-complain /path/to/file -``` - -and the other way around (complain –> enforce): - -``` -$ sudo aa-enforce /path/to/file -``` - -Wildcards are allowed in the above cases. For example, - -``` -$ sudo aa-complain /etc/apparmor.d/* -``` - -will place all profiles inside /etc/apparmor.d into complain mode, whereas - -``` -$ sudo aa-enforce /etc/apparmor.d/* -``` - -will switch all profiles to enforce mode. 
- -To entirely disable a profile, create a symbolic link in the /etc/apparmor.d/disabled directory: - -``` -$ sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/ -``` - -For more information on AppArmor, please refer to the [official AppArmor wiki][5] and to the documentation [provided by Ubuntu][6]. - -### Summary - -In this article we have gone through the basics of SELinux and AppArmor, two well-known MACs. When to use one or the other? To avoid difficulties, you may want to consider sticking with the one that comes with your chosen distribution. In any event, they will help you place restrictions on processes and access to system resources to increase the security in your servers. - -Do you have any questions, comments, or suggestions about this article? Feel free to let us know using the form below. Don’t hesitate to let us know if you have any questions or comments. - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/secure-files-using-acls-in-linux/ -[2]: http://www.tecmint.com/apache-virtual-hosting-in-centos/ -[3]: https://docs.fedoraproject.org/en-US/Fedora/22/html/SELinux_Users_and_Administrators_Guide/index.html -[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[5]: http://wiki.apparmor.net/index.php/Main_Page -[6]: https://help.ubuntu.com/community/AppArmor - - - diff --git a/translated/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md b/translated/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md new file mode 100644 index 
0000000000..82dd1a3484 --- /dev/null +++ b/translated/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md @@ -0,0 +1,247 @@ +在 Linux 上用 SELinux 或 AppArmor 实现强制访问控制 +=========================================================================== + +为了克服标准 用户-组-其他/读-写-执行 权限以及[访问控制列表][1]的限制以及加强安全机制,美国国家安全局(NSA)设计出一个灵活的强制访问控制(MAC)方法 SELinux(Security Enhanced Linux 的缩写),来限制其他事物,在仍然允许对这个模型后续修改的情况下,让进程尽可能以最小权限访问或在系统对象(如文件,文件夹,网络端口等)上执行其他操作。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/SELinux-AppArmor-Security-Hardening-Linux.png) +>SELinux 和 AppArmor 加固 Linux 安全 + +另一个流行并且广泛使用的 MAC 是 AppArmor,相比于 SELinux 它提供额外的特性,包括一个学习模型,让系统“学习”一个特定应用的行为,通过配置文件设置限制实现安全的应用使用。 + +在 CentOS 7 中,SELinux 合并进了内核并且默认启用强制(Enforcing)模式(下一节会介绍这方面更多的内容),与使用 AppArmor 的 openSUSE 和 Ubuntu 完全不同。 + +在这篇文章中我们会解释 SELinux 和 AppArmor 的本质以及如何在你选择的发行版上使用这两个工具之一并从中获益。 + +### SELinux 介绍以及如何在 CentOS 7 中使用 + +Security Enhanced Linux 可以以两种不同模式运行: + +- 强制(Enforcing):SELinux 基于 SELinux 策略规则拒绝访问,一个指导准则集合控制安全引擎。 +- 宽容(Permissive):SELinux 不拒绝访问,但如果在强制模式下会被拒绝的操作会被记录下来。 + +SELinux 也能被禁用。尽管这不是它的一个操作模式,不过也是一个选项。但学习如何使用这个工具强过只是忽略它。时刻牢记这一点! 
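文中提到的持久化设置保存在 /etc/selinux/config 里(形如 `SELINUX=enforcing` 的一行)。下面这段 Python 只是一个帮助理解该配置格式的极简示意,函数名是本文为举例而假设的;实际管理 SELinux 请使用 getenforce/setenforce 等系统工具:

```python
def selinux_config_mode(config_text):
    """从 /etc/selinux/config 风格的文本中解析 SELINUX 变量的取值。"""
    mode = None
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue  # 跳过注释行
        if line.startswith("SELINUX="):
            mode = line.split("=", 1)[1].strip()
    return mode

sample = """# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
"""
print(selinux_config_mode(sample))  # enforcing
```

注意 `SELINUXTYPE=` 这一行不会被误认成 `SELINUX=`,因为前缀匹配中包含了等号。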
+ +使用 getenforce 命令来显示 SELinux 的当前模式。如果你想要更改模式,使用 setenforce 0(设置为宽容模式)或 setenforce 1(设置为强制模式)。 + +因为这些设置重启后就失效了,你需要编辑 /etc/selinux/config 配置文件,将 SELINUX 变量设置为 enforcing、permissive 或 disabled,才能让设置在重启后仍然有效: + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Enable-Disable-SELinux-Mode.png) +>如何启用和禁用 SELinux 模式 + +还有一点要注意,如果 getenforce 返回 Disabled,你得编辑 /etc/selinux/config 配置文件,设置为你想要的操作模式并重启。否则你无法用 setenforce 设置(或切换)操作模式。 + +setenforce 的典型用法之一是在 SELinux 模式之间切换(从强制到宽容或相反),以定位一个应用是否行为不端或没有像预期一样工作。如果它在你将 SELinux 设置为宽容模式后正常工作,你就可以确定你遇到的是 SELinux 权限问题。 + +使用 SELinux 时,我们最可能需要处理的两种典型案例: + +- 改变一个守护进程监听的默认端口。 +- 给一个虚拟主机设置 /var/www/html 以外的文档根目录。 + +让我们用以下例子来看看这两种情况。 + +#### 例 1:更改 sshd 守护进程的默认端口 + +大部分系统管理员为了加强服务器安全,首先要做的事情之一就是更改 SSH 守护进程监听的端口,主要是为了阻止端口扫描器和外部攻击者。要达到这个目的,我们要把 `/etc/ssh/sshd_config` 中的 Port 值改为新的端口号(我们在这里以端口 9999 为例): + +``` +Port 9999 +``` + +在尝试重启服务并检查它的状态之后,我们会看到它启动失败: + +``` +# systemctl restart sshd +# systemctl status sshd +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-sshd-Service-Status.png) +>检查 SSH 服务状态 + +如果我们看看 /var/log/audit/audit.log,就会看到 sshd 被 SELinux 阻止在端口 9999 上启动,因为它是 JBoss 管理服务的保留端口(SELinux 日志信息包含了词语“AVC”,所以应该很容易把它同其他信息区分开来): + +``` +# cat /var/log/audit/audit.log | grep AVC | tail -1 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-Linux-Audit-Logs.png) +>检查 Linux 审计日志 + +在这种情况下大部分人可能会禁用 SELinux,但我们不这么做。我们会看到有办法让 SELinux 和监听其他端口的 sshd 和谐共处。首先确保你有 policycoreutils-python 这个包,执行: + +``` +# yum install policycoreutils-python +``` + +用下面的命令可以查看 SELinux 允许 sshd 监听的端口列表。在接下来的图片中我们还能看到端口 9999 是为另一个服务保留的,所以我们暂时无法用它来运行其他服务: + +``` +# semanage port -l | grep ssh +``` + +当然我们可以给 SSH 选择其他端口,但如果我们确定我们不会使用这台机器跑任何 JBoss 相关的服务,我们就可以修改 SELinux 已有的规则,转而把那个端口分配给 SSH: + +``` +# semanage port -m -t ssh_port_t -p tcp 9999 +``` + +在那之后,我们可以用第一个 semanage 命令检查端口是否正确分配了,或用 -lC 参数(list custom 的简称): + +``` +# semanage port -lC +# semanage port -l | grep ssh +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Assign-Port-to-SSH.png) +>给 SSH
分配端口 + +我们现在可以重启 SSH 服务并通过端口 9999 连接了。注意这个更改重启之后依然有效。 + +#### 例 2:给一个虚拟主机设置 /var/www/html 以外的文档根路径值 + +如果你需要用除 /var/www/html 以外目录作为文档根目录[设置一个 Apache 虚拟主机][2](也就是说,比如 `/websrv/sites/gabriel/public_html`): + +``` +DocumentRoot “/websrv/sites/gabriel/public_html” +``` + +Apache 会拒绝提供内容,因为 index.html 已经被标记为了 default_t SELinux 类型,Apache 无法访问它: + +``` +# wget http://localhost/index.html +# ls -lZ /websrv/sites/gabriel/public_html/index.html +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Labeled-default_t-SELinux-Type.png) +>被标记为 default_t SELinux 类型 + +和之前的例子一样,你可以用以下命令验证这是不是 SELinux 相关的问题: + +``` +# cat /var/log/audit/audit.log | grep AVC | tail -1 +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-Logs-for-SELinux-Issues.png) +>检查日志确定是不是 SELinux 的问题 + +要将 /websrv/sites/gabriel/public_html 整个目录内容标记为 httpd_sys_content_t,执行: + +``` +# semanage fcontext -a -t httpd_sys_content_t "/websrv/sites/gabriel/public_html(/.*)?" +``` + +上面这个命令会赋予 Apache 对那个目录以及其内容的读取权限。 + +最后,要应用这条策略(并让更改的标记立即生效),执行: + +``` +# restorecon -R -v /websrv/sites/gabriel/public_html +``` + +现在你应该可以访问这个目录了: + +``` +# wget http://localhost/index.html +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Access-Apache-Directory.png) +>访问 Apache 目录 + +要获取关于 SELinux 的更多信息,参阅 Fedora 22 [SELinux 以及 管理员指南][3]。 + + +### AppArmor 介绍以及如何在 OpenSUSE 和 Ubuntu 上使用它 + +AppArmor 的操作是基于纯文本文件的规则定义,该文件中含有允许权限和访问控制规则。安全配置文件用来限制应用程序如何与系统中的进程和文件进行交互。 + +系统初始就提供了一系列的配置文件,但其他的也可以由应用程序安装的时候设置或由系统管理员手动设置。 + +像 SELinux 一样,AppArmor 以两种模式运行。在 enforce 模式下,应用被赋予它们运行所需要的最小权限,但在 complain 模式下 AppArmor 允许一个应用执行有限的操作并将操作造成的“抱怨”记录到日志里(/var/log/kern.log,/var/log/audit/audit.log,和其它在 /var/log/apparmor 中的日志)。 + +日志中会显示配置文件在强制模式下运行时会产生错误的记录,它们中带有审计这个词。因此,你可以在 AppArmor 的 enforce 模式下运行之前,先在 complain 模式下尝试运行一个应用并调整它的行为。 + +可以用这个命令显示 AppArmor 的当前状态: + +``` +$ sudo apparmor_status +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-AppArmor-Status.png) +>查看 AppArmor 的状态 + +上面的图片指明配置 
/sbin/dhclient,/usr/sbin/,和 /usr/sbin/tcpdump 在 enforce 模式下(在 Ubuntu 下默认就是这样的)。 + +因为不是所有的应用都包含相关的 AppArmor 配置,apparmor-profiles 包提供了其它配置给没有提供限制的包。默认它们配置在 complain 模式下运行以便系统管理员能够测试并选择一个所需要的配置。 + +我们将会利用 apparmor-profiles,因为写一份我们自己的配置已经超出了 LFCS [认证][4]的范围了。但是,由于配置都是纯文本文件,你可以查看并学习它们,为以后创建自己的配置做准备。 + +AppArmor 配置保存在 /etc/apparmor.d 中。让我们来看看这个文件夹在安装 apparmor-profiles 之前和之后有什么不同: + +``` +$ ls /etc/apparmor.d +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/View-AppArmor-Directory-Content.png) +>查看 AppArmor 文件夹内容 + +如果你再次执行 sudo apparmor_status,你会在 complain 模式看到更长的配置文件列表。你现在可以执行下列操作: + +将当前在 enforce 模式下的配置文件切换到 complain 模式: + +``` +$ sudo aa-complain /path/to/file +``` + +以及相反的操作(complain –> enforce): + +``` +$ sudo aa-enforce /path/to/file +``` + +上面这些例子是允许使用通配符的。举个例子: + +``` +$ sudo aa-complain /etc/apparmor.d/* +``` + +会将 /etc/apparmor.d 中的所有配置文件设置为 complain 模式,反之 + +``` +$ sudo aa-enforce /etc/apparmor.d/* +``` + +会将所有配置文件设置为 enforce 模式。 + +要完全禁用一个配置,在 /etc/apparmor.d/disabled 目录中创建一个符号链接: + +``` +$ sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/ +``` + +要获取关于 AppArmor 的更多信息,参阅[官方 AppArmor wiki][5] 以及 [Ubuntu 提供的][6]文档。 + +### 总结 + +在这篇文章中我们学习了一些 SELinux 和 AppArmor 这两个著名强制访问控制系统的基本知识。什么时候使用两者中的一个或是另一个?为了避免提高难度,你可能需要考虑专注于你选择的发行版自带的那一个。不管怎样,它们会帮助你限制进程和系统资源的访问,以提高你服务器的安全性。 + +关于本文你有任何的问题,评论,或建议,欢迎在下方发表。不要犹豫,让我们知道你是否有疑问或评论。 + + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/ + +作者:[Gabriel Cánepa][a] +译者:[alim0x](https://github.com/alim0x) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/gacanepa/ +[1]: http://www.tecmint.com/secure-files-using-acls-in-linux/ +[2]: http://www.tecmint.com/apache-virtual-hosting-in-centos/ +[3]: 
https://docs.fedoraproject.org/en-US/Fedora/22/html/SELinux_Users_and_Administrators_Guide/index.html +[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[5]: http://wiki.apparmor.net/index.php/Main_Page +[6]: https://help.ubuntu.com/community/AppArmor + + From 65138a5a1f57e4c375ce7fd4ab6d6478611cf9c6 Mon Sep 17 00:00:00 2001 From: CHL Date: Thu, 11 Aug 2016 00:35:28 +0800 Subject: [PATCH 388/471] [Translating by Flowsnow] [Translating by Flowsnow!]Best Password Manager for ALL Platforms --- sources/tech/20160729 Best Password Manager all platform.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160729 Best Password Manager all platform.md b/sources/tech/20160729 Best Password Manager all platform.md index 09c62bf93b..1c90462aae 100644 --- a/sources/tech/20160729 Best Password Manager all platform.md +++ b/sources/tech/20160729 Best Password Manager all platform.md @@ -1,3 +1,4 @@ +Translating by Flowsnow! Best Password Manager — For Windows, Linux, Mac, Android, iOS and Enterprise ============================== From 12235f6d3cdc00d1bf88952f8d2b9606922cf233 Mon Sep 17 00:00:00 2001 From: CHL Date: Thu, 11 Aug 2016 10:39:31 +0800 Subject: [PATCH 389/471] [Translated]72% Of The People I Follow On Twitter Are Men (#4301) * [Translating]72% Of The People I Follow On Twitter Are Men Translating by Flowsnow! 72% Of The People I Follow On Twitter Are Men * update --- ... The People I Follow On Twitter Are Men.md | 88 ------------------- ... 
The People I Follow On Twitter Are Men.md | 83 +++++++++++++++++ 2 files changed, 83 insertions(+), 88 deletions(-) delete mode 100644 sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md create mode 100644 translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md diff --git a/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md deleted file mode 100644 index 71a23511e9..0000000000 --- a/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md +++ /dev/null @@ -1,88 +0,0 @@ -Translating by Flowsnow! -72% Of The People I Follow On Twitter Are Men -=============================================== - -![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/abacus.jpg) - -At least, that's my estimate. Twitter does not ask users their gender, so I [have written a program that guesses][1] based on their names. Among those who follow me, the distribution is even worse: 83% are men. None are gender-nonbinary as far as I can tell. - -The way to fix the first number is not mysterious: I should notice and seek more women experts tweeting about my interests, and follow them. - -The second number, on the other hand, I can merely influence, but I intend to improve it as well. My network on Twitter should represent of the software industry's diverse future, not its unfair present. - -### How Did I Measure It? - -I set out to estimate the gender distribution of the people I follow—my "friends" in Twitter's jargon—and found it surprisingly hard. [Twitter analytics][2] readily shows me the converse, an estimate of my followers' gender: - -![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/twitter-analytics.png) - -So, Twitter analytics divides my followers' accounts among male, female, and unknown, and tells me the ratio of the first two groups. 
(Gender-nonbinary folk are absent here—they're lumped in with the Twitter accounts of organizations, and those whose gender is simply unknown.) But Twitter doesn't tell me the ratio of my friends. That [which is measured improves][3], so I searched for a service that would measure this number for me, and found [FollowerWonk][4]. - -FollowerWonk guesses my friends are 71% men. Is this a good guess? For the sake of validation, I compare FollowerWonk's estimate of my followers to Twitter's estimate: - -**Twitter analytics** - - |men |women -:-- |:-- |:-- -Followers |83% |17% - -**FollowerWonk** - - |men |women -:-- |:-- |:-- -Followers |81% |19% -Friends I follow |72% |28% - -My followers show up 81% male here, close to the Twitter analytics number. So far so good. If FollowerWonk and Twitter agree on the gender ratio of my followers, that suggests FollowerWonk's estimate of the people I follow (which Twitter doesn't analyze) is reasonably good. With it, I can make a habit of measuring my numbers, and improve them. - -At $30 a month, however, checking my friends' gender distribution with FollowerWonk is a pricey habit. I don't need all its features anyhow. Can I solve only the gender-distribution problem economically? - -Since FollowerWonk's numbers seem reasonable, I tried to reproduce them. Using Python and [some nice Philadelphians' Twitter][5] API wrapper, I began downloading the profiles of all my friends and followers. I immediately found that Twitter's rate limits are miserly, so I randomly sampled only a subset of users instead. - -I wrote a rudimentary program that searches for a pronoun announcement in each of my friends' profiles. For example, a profile description that includes "she/her" probably belongs to a woman, a description with "they/them" is probably nonbinary. 
But most don't state their pronouns: for these, the best gender-correlated information is the "name" field: for example, @gvanrossum's name field is "Guido van Rossum", and the first name "Guido" suggests that @gvanrossum is male. Where pronouns were not announced, I decided to use first names to estimate my numbers. - -My script passes parts of each name to the SexMachine library to guess gender. [SexMachine][6] has predictable downfalls, like mistaking "Brooklyn Zen Center" for a woman named "Brooklyn", but its estimates are as good as FollowerWonk's and Twitter's: - - - - |nonbinary |men |women |no gender,unknown -:-- |:-- |:-- |:-- |:-- -Friends I follow |1 |168 |66 |173 - |0% |72% |28% | -Followers |0 |459 |108 |433 - |0% |81% |19% | - -(Based on all 408 friends and a sample of 1000 followers.) - -### Know Your Number - -I want you to check your Twitter network's gender distribution, too. So I've deployed "Proportional" to PythonAnywhere's handy service for $10 a month: - -> - -The application may rate-limit you or otherwise fail, so use it gently. The [code is on GitHub][7]. It includes a command-line tool, as well. - -Who is represented in your network on Twitter? Are you speaking and listening to the same unfairly distributed group who have been talking about software for the last few decades, or does your network look like the software industry of the future? Let's know our numbers and improve them. - - - - - --------------------------------------------------------------------------------- - -via: https://emptysqua.re/blog/gender-of-twitter-users-i-follow/ - -作者:[A. 
Jesse Jiryu Davis][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://disqus.com/by/AJesseJiryuDavis/
-[1]: https://www.proporti.onl/
-[2]: https://analytics.twitter.com/
-[3]: http://english.stackexchange.com/questions/14952/that-which-is-measured-improves
-[4]: https://moz.com/followerwonk/
-[5]: https://github.com/bear/python-twitter/graphs/contributors
-[6]: https://pypi.python.org/pypi/SexMachine/
-[7]: https://github.com/ajdavis/twitter-gender-distribution
diff --git a/translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md b/translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md
new file mode 100644
index 0000000000..e836bfe4ab
--- /dev/null
+++ b/translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md
@@ -0,0 +1,83 @@
+在推特上我关注的人72%都是男性
+===============================================
+
+![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/abacus.jpg)
+
+至少,这是我的估计。推特并不会询问用户的性别,因此我[写了一个程序][1],根据姓名猜测他们的性别。在我的关注者当中,性别分布甚至更糟:83% 是男性。而且据我所知,剩下的也并不全是女性。
+
+修正第一个数字并没有什么玄妙:我会有意寻找更多与我的兴趣领域相关的女性专家,并且关注她们。
+
+另一方面,对于第二个数字,我能施加的影响就比较有限了,但我同样打算改进它。我在推特上的关系网应该代表的是软件行业多元化的未来,而不是不公平的现状。
+
+### 我应该怎么估算呢
+
+我先从估算我关注的人(推特上的术语是“朋友”)的性别分布开始,结果发现这件事格外地难。[推特的分析][2]显示的却是反方向的数据,即对我的关注者的性别估算:
+
+![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/twitter-analytics.png)
+
+也就是说,推特的分析将我的关注者分成了三类:男性、女性、未知,并且展示了前两组的比例。(非二元性别的人没有在这里体现——他们被归入了组织机构的推特账号以及性别完全未知的那一类当中。)但是我关注的人的性别比例,推特并没有告诉我。所谓“[有测量才有改进][3]”,于是我开始寻找能帮我测量这个数字的服务,最终发现了 [FollowerWonk][4]。
+
+FollowerWonk 估算我关注的人里面有 71% 是男性。这个估算准确吗?
为了验证,我将 FollowerWonk 对我的关注者的估算与推特自己的估算进行了对比,结果如下:
+
+**推特分析**
+
+| | 男性 | 女性 |
+| --------- | ---- | ---- |
+| **我的关注者** | 83% | 17% |
+
+**FollowerWonk**
+
+| | 男性 | 女性 |
+| --------- | ---- | ---- |
+| **我的关注者** | 81% | 19% |
+| **我关注的人** | 72% | 28% |
+
+FollowerWonk 的分析显示我的关注者中 81% 是男性,很接近推特分析的数字。到目前为止一切顺利。既然 FollowerWonk 和推特在我的关注者的性别比例上是一致的,这就表明它对我关注的人(这部分推特并不提供分析)的估算也应当是比较可靠的。借助它,我就能养成测量这些数字的习惯,并不断改进。
+
+然而,使用 FollowerWonk 检测我关注的人的性别分布一个月需要 30 美元,这真是一个昂贵的习惯,况且我并不需要它的全部功能。我能更经济地只解决性别分布这一个问题吗?
+
+既然 FollowerWonk 的估算数字看起来比较合理,我就试着自己复现这些数字。使用 Python 和[一些好心的费城人写的 Twitter API 封装类][5](LCTT 译注:这个 Twitter API 封装类是由 Mike Taylor 等一批费城人在 GitHub 上开源的一个项目),我开始下载我所有关注的人和所有关注者的简介。我马上就发现推特的速率限制非常吝啬,因此我改为随机采样了一部分用户。
+
+我写了一个初步的程序,在我关注的每个人的简介中搜索性别代词声明。例如,如果简介中包含“she/her”这样的字眼,这个人很可能是女性;如果包含“they/them”,那么这个人很可能是非二元性别。但是大多数人的简介中并不会声明代词。对于这类简介,和性别关联最紧密的信息就是“姓名”一栏了。例如:@gvanrossum 的姓名一栏是“Guido van Rossum”,“Guido”这个名字表明 @gvanrossum 是男性。在没有代词声明的情况下,我决定用名字来估算这些数字。
+
+我的脚本把每个名字的一部分传给 SexMachine 库来猜测性别。[SexMachine][6] 有一些可以预见的失误,比如把“Brooklyn Zen Center”错当成一个名叫“Brooklyn”的女性,但它的估算结果与 FollowerWonk 和推特的相比还是合理的:
+
+| | 非二元 | 男性 | 女性 | 性别未知 |
+| ----- | ---- | ---- | ---- | ----- |
+| 我关注的人 | 1 | 168 | 66 | 173 |
+| | 0% | 72% | 28% | |
+| 我的关注者 | 0 | 459 | 108 | 433 |
+| | 0% | 81% | 19% | |
+
+(数据基于我关注的全部 408 人和对 1000 名关注者的抽样。)
+
+### 了解你的数字
+
+我希望你们也测一测自己推特关系网的性别分布。所以我花每月 10 美元把“Proportional”这个应用部署到了便利的 PythonAnywhere 服务上:
+
+>
+
+这个应用可能会限制你的使用频率,也可能出现其它故障,因此请温柔地对待它。[代码][7]放在 GitHub 上,其中还包括一个命令行工具。
+
+你的推特关系网里代表的是谁?你交谈和倾听的对象,还是过去几十年里一直主导软件话题的那个分布不公的群体吗?还是说你的关系网看起来更像软件行业的未来?让我们了解自己的数字,并且改善它们。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://emptysqua.re/blog/gender-of-twitter-users-i-follow/
+
+作者:[A. 
Jesse Jiryu Davis][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://disqus.com/by/AJesseJiryuDavis/ +[1]: https://www.proporti.onl/ +[2]: https://analytics.twitter.com/ +[3]: http://english.stackexchange.com/questions/14952/that-which-is-measured-improves +[4]: https://moz.com/followerwonk/ +[5]: https://github.com/bear/python-twitter/graphs/contributors +[6]: https://pypi.python.org/pypi/SexMachine/ +[7]: https://github.com/ajdavis/twitter-gender-distribution \ No newline at end of file From 301b313e78c3a84b882e5c46ca1a910721e77d79 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 11 Aug 2016 13:57:58 +0800 Subject: [PATCH 390/471] PUB:20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @chenzhijun 翻译的不错。 --- ...EST TEXT EDITORS FOR LINUX COMMAND LINE.md | 94 +++++++++++++++++++ ...EST TEXT EDITORS FOR LINUX COMMAND LINE.md | 93 ------------------ 2 files changed, 94 insertions(+), 93 deletions(-) create mode 100644 published/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md delete mode 100644 translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md diff --git a/published/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md b/published/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md new file mode 100644 index 0000000000..f2a76f10bb --- /dev/null +++ b/published/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md @@ -0,0 +1,94 @@ +Linux 命令行下的最佳文本编辑器 +========================================== + +![](https://itsfoss.com/wp-content/uploads/2016/07/Best-Command-Line-Text-Editors-for-Linux.jpg) + +文本编辑软件在任何操作系统上都是必备的软件。我们在 Linux 上不缺乏[非常现代化的编辑软件][1],但是它们都是基于 GUI(图形界面)的编辑软件。 + +正如你所了解的,Linux 真正的魅力在于命令行。当你正在用命令行工作时,你就需要一个可以在控制台窗口运行的文本编辑器。 + +正因为这个目的,我们准备了一个基于 Linux 命令行的文本编辑器清单。 + +### [VIM][2] + +如果你已经使用 Linux 有一段时间,那么你肯定听到过 
Vim。Vim 是一个高度可配置的、跨平台的、高效率的文本编辑器。
+
+几乎所有的 Linux 发行版都内置了 Vim,由于其特性之丰富,它已经变得非常流行了。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/vim.png)
+
+*Vim 用户界面*
+
+Vim 可能会让第一次使用它的人感到非常痛苦。我记得我第一次尝试使用 Vim 编辑一个文本文件时,我是非常困惑的。我不能用 Vim 输入一个字母,更有趣的是,我甚至不知道该怎么关闭它。如果你准备使用 Vim,你需要有决心跨过一条陡峭的学习曲线。
+
+但是一旦你经历过了那些,通过梳理一些文档,记住它的命令和快捷键,你会发现这段学习经历是非常值得的。你可以将 Vim 按照你的意愿进行改造:配置一个让你看起来舒服的界面,通过使用脚本或者插件等来提高工作效率。Vim 支持语法高亮、宏记录和操作历史。
+
+在 Vim 官网上,它是这样介绍的:
+
+>**Vim: The power tool for everyone!**
+
+如何使用它完全取决于你。你可以仅仅使用它作为文本编辑器,或者你可以将它打造成一个完善的 IDE(Integrated Development Environment,集成开发环境)。
+
+### [GNU EMACS][3]
+
+GNU Emacs 毫无疑问是最强大的文本编辑器之一。如果你听说过 Vim 和 Emacs,你应该知道这两个编辑器都拥有非常忠诚的粉丝基础,并且他们对于文本编辑器的选择非常看重。你也可以在互联网上找到大量关于它们的段子:
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/vi-emacs-768x426.png)
+
+*Vim vs Emacs*
+
+Emacs 是一个跨平台的、既有图形界面也有命令行界面的软件。它也拥有非常多的特性,更重要的是,可扩展!
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/emacs.png)
+
+*Emacs 用户界面*
+
+像 Vim 一样,Emacs 也有一条陡峭的学习曲线。但是一旦你掌握了它,你就能完全体会到它的强大。Emacs 可以处理几乎所有类型的文本文件。它的界面可以定制以适应你的工作流。它也支持宏记录和快捷键。
+
+Emacs 独特的特性是它可以“变形”成和文本编辑器完全不同的东西。有大量的模块可使它在不同的场景下成为不同的应用,例如:计算器、新闻阅读器、文字处理器等。你甚至可以在 Emacs 里面玩游戏。
+
+### [NANO][5]
+
+如果说到简易方便的软件,Nano 就是一个。不像 Vim 和 Emacs,Nano 的学习曲线是平滑的。
+
+如果你仅仅是想创建和编辑一个文本文件,不想给自己找太多挑战,Nano 估计是最适合你的了。
+
+![](https://itsfoss.com/wp-content/uploads/2016/07/nano.png)
+
+*Nano 用户界面*
+
+Nano 可用的快捷键都在用户界面的下方展示出来了。Nano 仅仅拥有最基础的文本编辑功能。
+
+它非常小巧,非常适合编辑系统配置文件。对于那些不需要复杂的命令行编辑功能的人来说,Nano 是完美之选。
+
+### 其它
+
+这里还有一些我想要提及的其它编辑器:
+
+[The Nice Editor (ne)][6]:官网是这样介绍的:
+
+> 如果你有足够的资源和耐心去使用 Emacs,或者有使用 Vim 所需的那种思维方式,那么 ne 可能不适合你。
+
+基本上 ne 拥有像 Vim 和 Emacs 一样多的高级功能,包括:脚本和宏记录。但是它的操作方式更为直观,学习曲线也更平滑。
+
+### 你认为呢?
+
+我知道,如果你是一个熟练的 Linux 用户,你可能会说还有很多编辑器应该被列入“Linux 最好的命令行编辑器”清单。那么,如果你还知道其它的 Linux 命令行文本编辑器,何不跟我们分享一下? 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/command-line-text-editors-linux/ + +作者:[Munif Tanjim][a] +译者:[chenzhijun](https://github.com/chenzhijun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]: https://linux.cn/article-7468-1.html +[2]: http://www.vim.org/ +[3]: https://www.gnu.org/software/emacs/ +[4]: https://itsfoss.com/download-linux-wallpapers-cheat-sheets/ +[5]: http://www.nano-editor.org/ +[6]: http://ne.di.unimi.it/ diff --git a/translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md b/translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md deleted file mode 100644 index fdb8d34098..0000000000 --- a/translated/tech/20160717 BEST TEXT EDITORS FOR LINUX COMMAND LINE.md +++ /dev/null @@ -1,93 +0,0 @@ - -Linux命令行下的优秀文本编辑软件 -========================================== - -![](https://itsfoss.com/wp-content/uploads/2016/07/Best-Command-Line-Text-Editors-for-Linux.jpg) - -文本编辑软件在任何操作系统上都是必备的软件。我们不缺乏在 Linux 上非常现代化的编辑软件,但是他们都是基于 GUI (图形界面)的编辑软件。 - -正如你所了解的,Linux 真正的魅力在于命令行。另外当你正在用命令行工作时,你可能需要一个可以在控制台窗口就可以运行的文本编辑器。 - -正因为这个目的,我们准备了一个基于 Linux 命令行的文本编辑器清单。 - -### [VIM][2] - -如果你已经使用 Linux 有一段时间,那么你肯定听到过 Vim 。Vim 是一个高可配、跨平台、高效率的文本编辑器。 - -几乎所有的 Linux 发行版本都已经内置了 Vim ,由于它的丰富特性已经变得非常流行的。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/vim.png) ->Vim 用户窗口 - -Vim 可能会让第一次使用它的人感到非常痛苦。我记得我第一次尝试使用 Vim 编辑一个文本文件时,我是非常困惑的。我不能用 Vim 输入一个字母,更有趣的是,我甚至不知道该怎么关闭它。如果你准备使用 Vim ,你需要有决心跨过一个曲折的学习路线。 - -一旦你经历过那些,通过梳理一些文档,记住它的命令和快捷键,你会发现这段学习经历是非常值得的。你可以将 Vim 按照你的意愿进行改造--配置一个让你看起来舒服的界面,通过使用脚本或者插件等来提高工作效率。Vim 支持语句高亮,宏记录和操作记录。 - -在Vim官网上,它是这样介绍的: - ->**Vim: The power tool for everyone!** - -如何使用它完全取决于你。你可以仅仅使用它作为文本编辑器,或者你可以将它打造成一个完善的IDE( Integrated Development Environment:集成开发环境)。 - -### [GNU EMACS][3] - -GNU Emacs 毫无疑问是一个非常强大的文本编辑器。如果你听说过 Vim 和 Emacs 
,你应该知道这两个编辑器都拥有非常忠诚的粉丝基础,并且他们对于文本编辑器的选择非常看重。你也可以在互联网上找到大量关于他们的段子: - -建议阅读 [Download Linux Wallpapers That Are Also Cheat Sheets][4] - -![](https://itsfoss.com/wp-content/uploads/2016/07/vi-emacs-768x426.png) ->Vim vs Emacs - -Emacs 是一个拥有图形界面和命令行界面并且跨平台的软件。它也拥有非常多的特性,更重要的是,它是支持扩展的。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/emacs.png) ->Emacs 用户界面 - -像Vim一样,Emacs 也需要经历一个曲折的学习路线。但是一旦你掌握它,你就能知道它的强大。Emacs可以处理几乎所有类型文本文件。它的界面可以定制成适合你的工作流。它也支持宏记录和快捷键。 - -Emacs 独特的特性是它可以转换成和文本编辑器完全不同的的东西。这里有大量的模块集可是使它在不同的场景下成为不同的应用,例如-计算器,新闻阅读,文字处理器等。你甚至都可以在 Emacs 里面玩游戏。 - -### [NANO][5] - -如果说到简易方便的软件,Nano 就是一个。 不像 Vim 和 Emacs , nano 的学习曲线是平滑的。 - -如果在生活中你仅仅是想创建和编辑一个文本文件,Nano 估计是最适合你的了。 - -![](https://itsfoss.com/wp-content/uploads/2016/07/nano.png) ->Nano 用户界面 - -Nano 可用的快捷键都在用户界面的下方展示出来了。Nano 仅仅拥有最基础的文本编辑软件的功能。 - -它非常小巧并且非常适合编辑系统和配置文件。对于那些不需要命令行编辑器功能的人来说,Nano是完美配备。 - -### 其它 -这里还有一些我想要提及其它编辑器: - -[The Nice Editor (ne)][6]: 官网是这样介绍的: - ->如果你有相关的资源或者耐心来使用 Emacs 或者正确的心理准备来使用 Vim ,那么 ne 可能不适合你。 - -基本上 ne 拥有像 Vim 和 Emacs 一样多的先进功能,包括--脚本和宏记录。但是它有更为直观的操作方式和平滑的学习路线。 - - -### 你认为呢? - -我知道如果你是一个熟练的 Linux 用户,你可以会说上面列举的 Linux 最好的命令行编辑器清单上的候选者都是非常明显的。因此我想跟你说,如果你还知道其他的 Linux 命令行文本编辑器你是否愿意跟我们一同分享? 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/command-line-text-editors-linux/?utm_source=newsletter&utm_medium=email&utm_campaign=ubuntu_forums_hacked_new_skype_for_linux_and_more_linux_stories - -作者:[Munif Tanjim][a] -译者:[chenzhijun](https://github.com/chenzhijun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/munif/ -[1]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ -[2]: http://www.vim.org/ -[3]: https://www.gnu.org/software/emacs/ -[4]: https://itsfoss.com/download-linux-wallpapers-cheat-sheets/ -[5]: http://www.nano-editor.org/ -[6]: http://ne.di.unimi.it/ From 33a0fa95c2247f393d6dc725a98b51121fb2bdc0 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 11 Aug 2016 15:07:17 +0800 Subject: [PATCH 391/471] =?UTF-8?q?20190811-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160809 How to build your own Git server.md | 223 ++++++++++++++++++ 1 file changed, 223 insertions(+) create mode 100644 sources/tech/20160809 How to build your own Git server.md diff --git a/sources/tech/20160809 How to build your own Git server.md b/sources/tech/20160809 How to build your own Git server.md new file mode 100644 index 0000000000..ac5b229d96 --- /dev/null +++ b/sources/tech/20160809 How to build your own Git server.md @@ -0,0 +1,223 @@ +How to build your own Git server +==================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/bus-big-data.png?itok=sOQHDuID) + +Now we will learn how to build a Git server, and how to write custom Git hooks to trigger specific actions on certain events (such as notifications), and publishing your code to a website. + +Up until now, the focus has been interacting with Git as a user. 
In this article I'll discuss the administration of Git, and the design of a flexible Git infrastructure. You might think it sounds like a euphemism for "advanced Git techniques" or "only read this if you're a super-nerd", but actually none of these tasks require advanced knowledge or any special training beyond an intermediate understanding of how Git works, and in some cases a little bit of knowledge about Linux.

### Shared Git server

Creating your own shared Git server is surprisingly simple, and in many cases well worth the trouble. Not only does it ensure that you always have access to your code, it also opens doors to stretching the reach of Git with extensions such as personal Git hooks, unlimited data storage, and continuous integration and deployment.

If you know how to use Git and SSH, then you already know how to create a Git server. The way Git is designed, the moment you create or clone a repository, you have already set up half the server. Then enable SSH access to the repository, and anyone with access can use your repo as the basis for a new clone.

However, that's a little ad hoc. With some planning you can construct a well-designed Git server with about the same amount of effort, but with better scalability.

First things first: identify your users, both current and future. If you're the only user then no changes are necessary, but if you intend to invite contributors aboard, then you should allow for a dedicated shared system user for your developers.

Assuming that you have a server available (if not, that's not exactly a problem Git can help with, but CentOS on a Raspberry Pi 3 is a good start), the first step is to enable SSH logins using only SSH key authorization. This is much stronger than password logins because it is immune to brute-force attacks, and disabling a user is as simple as deleting their key.

Once you have SSH key authorization enabled, create the gituser.
This is a shared user for all of your authorized users:

```
$ su -c 'adduser gituser'
```

Then switch over to that user, and create a ~/.ssh framework with the appropriate permissions. This is important because, for your own protection, SSH will default to failure if you set the permissions too liberally.

```
$ su - gituser
$ mkdir .ssh && chmod 700 .ssh
$ touch .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
```

The authorized_keys file holds the SSH public keys of all developers you give permission to work on your Git project. Your developers must create their own SSH key pairs and send you their public keys. Copy the public keys into the gituser's authorized_keys file. For instance, for a developer called Bob, run these commands:

```
$ cat ~/path/to/id_rsa.bob.pub >> \
/home/gituser/.ssh/authorized_keys
```

As long as developer Bob has the private key that matches the public key he sent you, Bob can access the server as gituser.

However, you don't really want to give your developers access to your server, even if only as gituser. You only want to give them access to the Git repository. For this very reason, Git provides a limited shell called, appropriately, git-shell. Run these commands as root to add git-shell to your system, and then make it the default shell for your gituser:

```
# grep git-shell /etc/shells || su -c \
"echo `which git-shell` >> /etc/shells"
# su -c 'usermod -s git-shell gituser'
```

Now the gituser can only use SSH to push and pull Git repositories, and cannot access a login shell. You should add yourself to the corresponding group for the gituser, which in our example server is also gituser.

For example:

```
# usermod -a -G gituser seth
```

The only step remaining is to make a Git repository. Since no one is going to interact with it directly on the server (that is, you're not going to SSH to the server and work directly in this repository), make it a bare repository.
If you want to use the repo on the server to get work done, you'll clone it from where it lives and work on it in your home directory.

Strictly speaking, you don't have to make this a bare repository; it would work as a normal repo. However, a bare repository has no *working tree* (that is, no branch is ever in a `checkout` state). This is important because remote users are not permitted to push to an active branch (how would you like it if you were working in a `dev` branch and suddenly someone pushed changes into your workspace?). Since a bare repo can have no active branch, that won't ever be an issue.

You can place this repository anywhere you please, just as long as the users and groups you want to grant permission to access it can do so. You do NOT want to store the directory in a user's home directory, for instance, because the permissions there are pretty strict; use a common shared location instead, such as /opt or /usr/local/share.

Create a bare repository as root:

```
# git init --bare /opt/jupiter.git
# chown -R gituser:gituser /opt/jupiter.git
# chmod -R 770 /opt/jupiter.git
```

Now any user who is either authenticated as gituser or is in the gituser group can read from and write to the jupiter.git repository. Try it out on a local machine:

```
$ git clone gituser@example.com:/opt/jupiter.git jupiter.clone
Cloning into 'jupiter.clone'...
Warning: you appear to have cloned an empty repository.
```

Remember: developers MUST have their public SSH key entered into the authorized_keys file of gituser, or if they have accounts on the server (as you would), then they must be members of the gituser group.

### Git hooks

One of the nice things about running your own Git server is that it makes Git hooks available. Git hosting services sometimes provide a hook-like interface, but they don't give you true Git hooks with access to the file system.
A Git hook is a script that gets executed at some point during a Git process; a hook can be executed when a repository is about to receive a commit, or after it has accepted a commit, or before it receives a push, or after a push, and so on. + +It is a simple system: any executable script placed in the .git/hooks directory, using a standard naming scheme, is executed at the designated time. When a script should be executed is determined by the name; a pre-push script is executed before a push, a post-receive script is executed after a commit has been received, and so on. It's more or less self-documenting. + +Scripts can be written in any language; if you can execute a language's hello world script on your system, then you can use that language to script a Git hook. By default, Git ships with some samples but does not have any enabled. + +Want to see one in action? It's easy to get started. First, create a Git repository if you don't already have one: + +``` +$ mkdir jupiter +$ cd jupiter +$ git init . +``` + +Then write a "hello world" Git hook. Since I use tcsh at work for legacy support, I'll stick with that as my scripting language, but feel free to use your preferred language (Bash, Python, Ruby, Perl, Rust, Swift, Go) instead: + +``` +$ echo "#\!/bin/tcsh" > .git/hooks/post-commit +$ echo "echo 'POST-COMMIT SCRIPT TRIGGERED'" >> \ +~/jupiter/.git/hooks/post-commit +$ chmod +x ~/jupiter/.git/hooks/post-commit +``` + +Now test it out: + +``` +$ echo "hello world" > foo.txt +$ git add foo.txt +$ git commit -m 'first commit' +! POST-COMMIT SCRIPT TRIGGERED +[master (root-commit) c8678e0] first commit +1 file changed, 1 insertion(+) +create mode 100644 foo.txt +``` + +And there you have it: your first functioning Git hook. + +### The famous push-to-web hook + +A popular use of Git hooks is to automatically push changes to a live, in-production web server directory. 
It is a great way to ditch FTP, retain full version control of what is in production, and integrate and automate publication of content.

If done correctly, it works brilliantly and is, in a way, exactly how web publishing should have been done all along. It is that good. I don't know who came up with the idea initially, but the first I heard of it was from my Emacs and Git mentor, Bill von Hagen at IBM. His article remains the definitive introduction to the process: [Git changes the game of distributed Web development][1].

### Git variables

Each Git hook gets a different set of variables relevant to the Git action that triggered it. You may or may not need to use those variables; it depends on what you're writing. If all you want is a generic email alerting you that someone pushed something, then you don't need specifics, and probably don't even need to write the script as the existing samples may work for you. If you want to see the commit message and author of a commit in that email, then your script becomes more demanding.

Git hooks aren't run by the user directly, so figuring out how to gather important information can be confusing. In fact, a Git hook script is just like any other script, accepting arguments from stdin in the same way that Bash, Python, C++, and anything else does. The difference is that we aren't providing that input ourselves, so to use it you need to know what to expect.

Before writing a Git hook, look at the samples that Git provides in your project's .git/hooks directory. The pre-push.sample file, for instance, states in the comments section:

```
# $1 -- Name of the remote to which the push is being done
# $2 -- URL to which the push is being done
# If pushing without using a named remote those arguments will be equal. 
+# +# Information about commit is supplied as lines +# to the standard input in this form: +# +``` + +Not all samples are that clear, and documentation on what hook gets what variable is still a little sparse (unless you want to read the source code of Git), but if in doubt, you can learn a lot from the [trials of other users][2] online, or just write a basic script and echo $1, $2, $3, and so on. + +### Branch detection example + +I have found that a common requirement in production instances is a hook that triggers specific events based on what branch is being affected. Here is an example of how to tackle such a task. + +First of all, Git hooks are not, themselves, version controlled. That is, Git doesn't track its own hooks because a Git hook is part of Git, not a part of your repository. For that reason, a Git hook that oversees commits and pushes probably make most sense living in a bare repository on your Git server, rather than as a part of your local repositories. + +Let's write a hook that runs upon post-receive (that is, after a commit has been received). The first step is to identify the branch name: + +``` +#!/bin/tcsh + +foreach arg ( $< ) + set argv = ( $arg ) + set refname = $1 +end +``` + +This for-loop reads in the first arg ($1) and then loops again to overwrite that with the value of the second ($2), and then again with the third ($3). There is a better way to do that in Bash: use the read command and put the values into an array. However, this being tcsh and the variable order being predictable, it's safe to hack through it. 
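For comparison, that Bash approach might be sketched as below. This is my own illustration rather than code from the original: the function wrapper and the echoed message are inventions for the example, and I strip the `refs/heads/` prefix with parameter expansion instead of an array, though `read -r -a` would also capture the fields into an array as described.

```shell
#!/bin/bash
# Sketch: the same stdin-reading logic in Bash. A post-receive hook
# receives one line per updated ref on standard input, in the form:
#   <old-value> <new-value> <ref-name>
# The function wrapper exists only so the loop is easy to test; a real
# hook would simply call it (or inline the loop) at the bottom.
process_refs() {
    local oldrev newrev refname branch
    while read -r oldrev newrev refname; do
        # Strip the refs/heads/ prefix to recover the branch name;
        # `git rev-parse --symbolic --abbrev-ref "$refname"` answers the same.
        branch=${refname#refs/heads/}
        echo "Updated branch: $branch"
    done
}
```

In the real hook file, the script would end with a bare `process_refs` so the loop consumes the input Git supplies.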
When we have the refname of what is being committed, we can use Git to discover the human-readable name of the branch:

```
set branch = `git rev-parse --symbolic --abbrev-ref $refname`
echo $branch #DEBUG
```

And then compare the branch name to the keywords we want to base the action on:

```
if ( "$branch" == "master" ) then
  echo "Branch detected: master"
  git \
    --work-tree=/path/to/where/you/want/to/copy/stuff/to \
    checkout -f $branch || echo "master fail"
else if ( "$branch" == "dev" ) then
  echo "Branch detected: dev"
  git \
    --work-tree=/path/to/where/you/want/to/copy/stuff/to \
    checkout -f $branch || echo "dev fail"
else
  echo "Your push was successful."
  echo "Private branch detected. No action triggered."
endif
```

Make the script executable:

```
$ chmod +x ~/jupiter/.git/hooks/post-receive
```

Now when a user commits to the server's master branch, the code is copied to an in-production directory, a commit to the dev branch gets copied someplace else, and any other branch triggers no action.

It's just as simple to create a pre-commit script that, for instance, checks to see if someone is trying to push to a branch that they should not be pushing to, or to parse commit messages for approval strings, and so on.

Git hooks can get complex, and they can be confusing due to the level of abstraction that working through Git imposes, but they're a powerful system that allows you to design all manner of actions in your Git infrastructure. They're worth dabbling in, if only to become familiar with the process, and worth mastering if you're a serious Git user or full-time Git admin.

In our next and final article in this series, we will learn how to use Git to manage non-text binary blobs, such as audio and graphics files.
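As a footnote to the branch-permission idea mentioned above: on the server, that check usually lives in an update hook, which Git runs once per ref with the ref name as its first argument; a nonzero exit refuses the push. Here is a minimal sketch of mine, with "refs/heads/production" as an invented protected-branch name:

```shell
#!/bin/bash
# Sketch: refuse pushes to a protected branch from a server-side
# "update" hook. Git invokes the hook as:
#   update <refname> <oldrev> <newrev>
# and rejects the ref update if the hook exits nonzero.
# The check is wrapped in a function so it can be tested in isolation.
check_ref() {
    local refname=$1
    if [ "$refname" = "refs/heads/production" ]; then
        echo "Direct pushes to production are not permitted." >&2
        return 1
    fi
    return 0
}
# A real hook file would end with:
#   check_ref "$1" || exit 1
```

Drop a script like this into the bare repository's hooks/update and make it executable, just like the post-receive script above.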
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6 + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: http://www.ibm.com/developerworks/library/wa-git/ +[2]: https://www.analysisandsolutions.com/code/git-hooks-summary-cheat-sheet.htm From ae223bd445cb4abb5a0e70d8aba0b3296b7bcf1e Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 11 Aug 2016 15:12:55 +0800 Subject: [PATCH 392/471] =?UTF-8?q?20160811-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20160809 7 reasons to love Vim.md | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 sources/talk/20160809 7 reasons to love Vim.md diff --git a/sources/talk/20160809 7 reasons to love Vim.md b/sources/talk/20160809 7 reasons to love Vim.md new file mode 100644 index 0000000000..868a1f5773 --- /dev/null +++ b/sources/talk/20160809 7 reasons to love Vim.md @@ -0,0 +1,46 @@ +7 reasons to love Vim +==================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_OpenSourceExperience_520x292_cm.png?itok=APna2N9Y) + +When I started using the vi text editor, I hated it. I thought it was the most painful and counter-intuitive editor ever designed. But I'd decided I had to learn the thing, because if you're using Unix, vi was everywhere and was the only editor you were guaranteed to have access to. That was back in 1998, but it remains true today—vi is available, usually as part of the base install, on almost every Linux distribution in existence. 
+ +It took about a month before I could do anything with any proficiency in vi and I still didn't love it, but by then I'd realized that there was an insanely powerful editor hiding behind this bizarre facade. So I stuck with it, and eventually found out that once you know what you're doing, it's an incredibly fast editor. + +The name "vi" is short for "visual." When vi originated, line editing was the norm and being able to display and edit multiple lines at once was unusual. Vim, a contraction of "Vi IMproved" and originally released by Bram Moolenaar in 1991, has become the dominant vi clone and continued to extend the capabilities of an already powerful editor. Vim's powerful regex and ":" command-line syntax started in the world of line editing and teletypes. + +Vim, with its 40 years of history, has had time to develop a massive and complex bag of tricks that even the most knowledgeable users don't fully grasp. Here are a few reasons to love Vim: + +1. Colour schemes: You probably know Vim has colour syntax highlighting. Did you know you can download literally hundreds of colour schemes? [Find some of the better ones here][1]. +2. You never need to take your hands off the keyboard or reach for the mouse. +3. Vi or Vim is everywhere. Even [OpenWRT][2] has vi (okay, it's [BusyBox][3], but it works). +4. Vimscript: You've probably remapped a few keys, but did you know that Vim has its own programming language? You can rewrite the behaviour of your editor, or create language-specific editor extensions. (Recently I've spent time customizing Vim's behaviour with Ansible.) The best entry point to the language is Steve Losh's brilliant [Learn Vimscript the Hard Way][4]. +5. Vim has plugins. Use [Vundle][5] (my choice) or [Pathogen][6] to manage your plugins to improve Vim's capabilities. +6. Plugins to integrate git (or your VCS of choice) into Vim are available. +7. 
The online community is huge and active, and if you ask your question about Vim online, it will be answered. + +The irony of my original hatred of vi is that I'd been bouncing from editor to editor for five years, always looking for "something better." I never hated any editor as much as I hated vi, and now I've stuck with it for 17 years because I can no longer imagine a better editor. Well, maybe a little better: Go try Neovim—it's the future. It looks like Bram Moolenaar will be merging most of Neovim into Vim version 8, which will mean a 30% reduction in the code base, better tab completion, real async, built-in terminal, built-in mouse support, and complete compatibility. + +In his [LinuxCon talk][7] in Toronto, Giles will explain some of the features you may have missed in the welter of extensions and improvements added in the past four decades. The class isn't for beginners, so if you don't know why "hjklia:wq" are important, this probably isn't the talk for you. He'll also cover a bit about the history of vi, because knowing some history helps to understand how we've ended up where we are now. Attend his talk to find out how to make your favourite editor better and faster. 
+ + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/16/8/7-reasons-love-vim + +作者:[Giles Orr][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/gilesorr +[1]: http://www.gilesorr.com/blog/vim-colours.html +[2]: https://www.openwrt.org/ +[3]: https://busybox.net/ +[4]: http://learnvimscriptthehardway.stevelosh.com/ +[5]: https://github.com/VundleVim/Vundle.vim +[6]: https://github.com/tpope/vim-pathogen +[7]: http://sched.co/7JWz From 365cb7cac015fa4e806abaa9ac9034e9a9ecce83 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 11 Aug 2016 15:19:43 +0800 Subject: [PATCH 393/471] =?UTF-8?q?20160811-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...n to Linux containers and image signing.md | 62 +++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 sources/tech/20160810 A brief introduction to Linux containers and image signing.md diff --git a/sources/tech/20160810 A brief introduction to Linux containers and image signing.md b/sources/tech/20160810 A brief introduction to Linux containers and image signing.md new file mode 100644 index 0000000000..c91eab815f --- /dev/null +++ b/sources/tech/20160810 A brief introduction to Linux containers and image signing.md @@ -0,0 +1,62 @@ +A brief introduction to Linux containers and image signing +==================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/containers_2015-1-osdc-lead.png?itok=E1imOYe4) + +Fundamentally, all major software, even open source, was designed before image-based containers. This means that putting software inside of containers is fundamentally a platform migration. 
This also means that some programs are easy to migrate into containers, [while others are more difficult][1]. + +I started working with image-based containers nearly 3.5 years ago. In this time I have containerized a ton of applications. I have learned what's real, and what is superstition. Today, I'd like to give a brief introduction to how Linux containers are designed and talk briefly about image signing. + +### How Linux containers are designed + +What most people find confusing about image-based Linux containers is that it's really about breaking an operating system into two parts: [the kernel and the user space][2]. In a traditional operating system, the kernel runs on the hardware and you never interact with it directly. The user space is what you actually interact with, and this includes all the files, libraries, and programs that you see when you look at a file browser or run the ls command. When you use the ifconfig command to change an IP address, you are actually leveraging a user space program to make kernel changes to the TCP stack. This often blows people's minds if they haven't studied [Linux/Unix fundamentals][3]. + +Historically, the libraries in the user space supported programs that interacted with the kernel (ifconfig, sysctl, tuned-adm) and user-facing programs such as web servers or databases. Everything was dumped together in a single filesystem hierarchy. Users could inspect the /sbin or /lib directories and see all of the applications and libraries that support the operating system itself, or inspect the /usr/sbin or /usr/lib directories to see all of the user-facing programs and libraries (check out the [Filesystem Hierarchy Standard][4]). The problem with this model was that there was never complete isolation between operating system programs and business-supporting applications. Programs in /usr/bin might rely on libraries which live in /lib. If an application owner needed to change something, it could break the operating system.
Conversely, if the team in charge of doing security updates needed to change a library, it could (and often did) break business-facing applications. It was a mess. + +With image-based containers such as Docker, LXD, and RKT, an application owner can package and modify all of the dependencies in /sbin, /lib, /usr/bin, and /usr/lib without worrying about breaking the underlying operating system. Essentially, using image-based containers cleanly isolates the operating system into two parts, again, the kernel and the user space. Now dev and ops can update things independently of each other, kinda... + +There is some serious confusion though. Often, each application owner (or developer) doesn't want to be responsible for updating application dependencies such as openssl, glibc, or hardening underlying components, such as XML parsers, or JVMs, or dealing with performance settings. Historically, these problems were delegated to the operations team. Since we are packing a lot of dependencies in the container, the delegation of responsibility for all of the pieces in the container is still a real problem for many organizations. + +### Migrating existing applications to Linux containers + +Putting software inside of containers is basically a platform migration. I'd like to highlight what makes it difficult to migrate some applications into containers. + +Developers now have complete control over what's in /sbin, /lib, /usr/bin, and /usr/lib. But one of the challenges they face is that they still need to put data and configuration in folders such as /etc or /var/lib. With image-based containers, this is a bad idea. We really want good separation of code, configuration, and data. We want the developers to provide the code in the container, but we want the data and configuration to come from the environment, e.g. development, testing, or production.
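In practice, "configuration from the environment" often literally means environment variables injected when the container starts. A minimal stdlib sketch of that pattern (the variable names and defaults here are illustrative assumptions, not from any particular platform):

```python
import os

def load_config(environ=os.environ):
    """Read runtime configuration from the environment, with safe defaults.

    The image ships only code; values such as the database host differ
    between development, testing, and production, so the platform injects
    them at container start (e.g. `docker run -e DB_HOST=...`).
    """
    return {
        "db_host": environ.get("DB_HOST", "localhost"),
        "db_port": int(environ.get("DB_PORT", "5432")),
        "debug": environ.get("DEBUG", "0") == "1",
    }

if __name__ == "__main__":
    # Simulate the platform injecting production settings.
    print(load_config({"DB_HOST": "db.prod.internal", "DEBUG": "1"}))
```

The same idea extends to mounted files: the code reads a well-known path such as /etc/myapp/config, and the platform bind-mounts an environment-specific file there.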
+ +This means we need to mount some files from /etc or directories from /var/lib when we (or better, the platform) instantiate a container. This will allow us to move the containers around and still get its configuration and data from the environment. Cool, right? Well, there is a problem, that means we have to be able to isolate configuration and data cleanly. Many modern open source programs like Apache, MySQL, MongoDB, or Nginx do this by default, [but many home-grown, legacy, or proprietary programs are not designed to do this by default][5]. This is a major pain point for many organizations. A best practice for developers would be to start architecting new applications and migrating legacy code so that configuration and data are cleanly isolated. + +### Introduction to image signing + +Trust is a major issue with containers. Container image signing allows a user to add a digital fingerprint to an image. This fingerprint can later be cryptographically tested to verify trust. This allows the user of a container image to verify the source and trust the container image. + +The container community uses the words “container image" quite a lot, but this nomenclature can be quite confusing. Docker, LXD, and RKT operate on the concept of pulling remote files and running them as a container. Each of these technologies treats containers images in different ways. LXD pulls a single container image with a single layer, while Docker and RKT use Open Container Image (OCI)-based images which can be made up of multiple layers. Worse, different teams or even organization may be responsible for different layers of a container image. Implicit in the concept of a container image is the concept of a Container Image Format. Having a standard image format such as OCI will allow an ecosystem to flourish around container scanning, signing, and movement between cloud providers. + +Now on to signing. 
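Signing builds on content addressing: OCI images reference every layer by a digest such as `sha256:<hex>`, so before any signature is checked, a consumer can verify that the bytes it pulled are the bytes the manifest named. A stdlib sketch of that digest check (only the `sha256:` prefix convention is assumed here; real clients also verify signatures over the manifest itself):

```python
import hashlib

def digest(blob: bytes) -> str:
    """Return an OCI-style content address for a blob."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, expected: str) -> bool:
    """Check that downloaded bytes match the digest the manifest promised."""
    return digest(blob) == expected

if __name__ == "__main__":
    layer = b"fake layer contents"
    ref = digest(layer)
    print(verify(layer, ref))         # unmodified layer passes
    print(verify(layer + b"!", ref))  # any tampering changes the digest
```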
+ +One of the problems with containers is we package a bunch of code, binaries, and libraries into a container image. Once we package the code, we share it with essentially fancy file servers which we call Registry Servers. Once the code is shared, it is basically anonymous without some form of cryptographic signing. Worse yet, container images are often made up of image layers which are controlled by different people or teams of people. Each team needs to have the ability to check the last team's work, add their work, and then put their stamp of approval on it. They then need to send it on to the next team. + +The final user of the container image (really made up of multiple images) really needs to check the chain of custody. They need to verify trust with every team that added files to the container image. It is critical for end users to have confidence about every single layer of the container image. + +Scott McCarty will give a talk called [Containers for Grownups: Migrating Traditional & Existing Applications][6] at ContainerCon on August 24. Talk attendees will gain a new understanding of how containers work, and be able to leverage their current architectural knowledge to the world of containers. He will teach attendees which applications are easy to put in containers and why, and he'll explain which types of programs are more difficult and why. He will provide tons of examples and help attendees gain confidence in building and migrating their own applications into containers. 
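That hand-off can be pictured as each team computing a cryptographic stamp over everything it received plus everything it added. Production image signing uses asymmetric keys (e.g. GPG) so that anyone can verify without holding a secret; the sketch below substitutes stdlib HMAC purely to keep the example self-contained:

```python
import hashlib
import hmac

def stamp(previous: bytes, added: bytes, team_key: bytes) -> bytes:
    """One team's approval: a MAC over the prior state plus its additions."""
    return hmac.new(team_key, previous + added, hashlib.sha256).digest()

def check(previous: bytes, added: bytes, team_key: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(stamp(previous, added, team_key), sig)

if __name__ == "__main__":
    # The OS team stamps the base layer; the app team stamps on top of it.
    base_sig = stamp(b"", b"base layer", b"os-team-secret")
    app_sig = stamp(base_sig, b"app layer", b"app-team-secret")
    # The end user re-checks every link in the chain of custody.
    print(check(b"", b"base layer", b"os-team-secret", base_sig))
    print(check(base_sig, b"app layer", b"app-team-secret", app_sig))
```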
+ + + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/bus/16/8/introduction-linux-containers-and-image-signing + +作者:[Scott McCarty][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux +[1]: http://rhelblog.redhat.com/2016/04/21/architecting-containers-part-4-workload-characteristics-and-candidates-for-containerization/ +[2]: http://rhelblog.redhat.com/2015/07/29/architecting-containers-part-1-user-space-vs-kernel-space/ +[3]: http://rhelblog.redhat.com/tag/architecting-containers/ +[4]: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard +[5]: http://rhelblog.redhat.com/2016/04/21/architecting-containers-part-4-workload-characteristics-and-candidates-for-containerization/ +[6]: https://lcccna2016.sched.org/event/7JUc/containers-for-grownups-migrating-traditional-existing-applications-scott-mccarty-red-hat From 2287cdd1af0867418817bd867fded6f2d785b50e Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 11 Aug 2016 16:04:15 +0800 Subject: [PATCH 394/471] PUB:20160621 Flatpak brings standalone apps to Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @zky001 翻译应该再细心一些,并自己通读几遍。 --- ...Flatpak brings standalone apps to Linux.md | 35 ++++++++++++++++++ ...Flatpak brings standalone apps to Linux.md | 36 ------------------- 2 files changed, 35 insertions(+), 36 deletions(-) create mode 100644 published/20160621 Flatpak brings standalone apps to Linux.md delete mode 100644 translated/tech/20160621 Flatpak brings standalone apps to Linux.md diff --git a/published/20160621 Flatpak brings standalone apps to Linux.md b/published/20160621 Flatpak brings standalone apps to Linux.md new file mode 100644 index 0000000000..488c404f45 --- /dev/null +++ b/published/20160621 Flatpak brings standalone 
apps to Linux.md @@ -0,0 +1,35 @@ +Flatpak 为 Linux 带来了独立应用 +================ + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg) + +[Flatpak][1] 的开发团队[宣布了][2] Flatpak 桌面应用框架已经可用了。 Flatpak (以前在开发时名为 xdg-app)为应用提供了捆绑为一个 Flatpak 软件包的能力,可以让应用在很多 Linux 发行版上都以轻松而一致的体验来安装和运行。将应用程序捆绑成 Flatpak 为其提供了沙盒安全环境,可以将它们与操作系统和彼此之间相互隔离。查看 [Flatpak 网站][3]上的[发布公告][4]来了解关于 Flatpak 框架技术的更多信息。 + +### 在 Fedora 中安装 Flatpak + +如果用户想要运行以 Flatpak 格式打包的应用,在 Fedora 上安装是很容易的,Flatpak 格式已经可以在官方的 Fedora 23 和 Fedora 24 仓库中获得。Flatpak 网站上有[在 Fedora 上安装的完整细节][5],同时也有如何在 Arch、 Debian、Mageia 和 Ubuntu 中安装的方法。[许多的应用][6]已经使用 Flatpak 打包构建了,这包括 LibreOffice、Inkscape 和 GIMP。 + +### 对应用开发者 + +如果你是一个应用开发者,Flatpak 网站也包含许多有关于[使用 Flatpak 打包和分发应用程序][7]的重要资料。这些资料中包括了使用 Flakpak SDK 构建独立的、沙盒化的 Flakpak 应用程序的信息。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introducing-flatpak/ + +作者:[Ryan Lerch][a] +译者:[zky001](https://github.com/zky001) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/introducing-flatpak/ +[1]: http://flatpak.org/ +[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html +[3]: http://flatpak.org/ +[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html +[5]: http://flatpak.org/getting.html +[6]: http://flatpak.org/apps.html +[7]: http://flatpak.org/developer.html + + diff --git a/translated/tech/20160621 Flatpak brings standalone apps to Linux.md b/translated/tech/20160621 Flatpak brings standalone apps to Linux.md deleted file mode 100644 index cad3fff909..0000000000 --- a/translated/tech/20160621 Flatpak brings standalone apps to Linux.md +++ /dev/null @@ -1,36 +0,0 @@ -翻译中:by zky001 -Flatpak给Linux带来了独立的应用 -=== - -![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg) - - -在[Flatpak][1]背后的开发团队已经[刚刚宣布Flatpak作为桌面应用开发框架基本的能力][2]。 
Flatpak为一个应用提供了这种能力—提供了一个应用程序的能力,应用程序捆绑为Flatpak 被安装在许多不同的Linux发行版,并且它可以很轻易地运行。应用程序捆绑的Flatpaks也有沙盒安全能力,可以从你的操作系统隔离它们和其他应用程序。查看[Flatpak网站][3],[网站指导手册][4]来获取更多的关于Flatpak框架的技术。 - -### 在Fedora中安装Flatpak - -对于想要以打包格式Flatpaks来运行程序的话,安装在Fedora上是很容易的,Flatpak格式已经可以在官方的Fedora 23和Fedora 24版本源中获得。Flatpak网站有[完整的在Fedora上安装的细节][5],同时也有如何在Arch, Debian,Mageia,和Ubuntu中安装的方法。[许多的应用][6]已经使用Flatpak来进行打包构建的程序-包括LibreOffice,Inkscape和GIMP。 -### 对应用开发者 - -如果你是一个应用开发者,Flatpak网站也包含许多有关于[使用Flatpak打包和分发应用程序][7]的优秀资源。这些资源包括了使用Flakpak SDKs来构建独立的,沙河的Flakpak应用程序。 - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/introducing-flatpak/ - -作者:[Ryan Lerch][a] -译者:[zky001](https://github.com/zky001) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/introducing-flatpak/ -[1]: http://flatpak.org/ -[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html -[3]: http://flatpak.org/ -[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html -[5]: http://flatpak.org/getting.html -[6]: http://flatpak.org/apps.html -[7]: http://flatpak.org/developer.html - - From d5f08e4b6256b56510898df7b7be7c4ba73e0b50 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 11 Aug 2016 21:12:39 +0800 Subject: [PATCH 395/471] PUB:20151208 6 creative ways to use ownCloud MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @GHLandy 翻译的不错。 --- ...0151208 6 creative ways to use ownCloud.md | 96 +++++++++++++++++++ ...0151208 6 creative ways to use ownCloud.md | 95 ------------------ 2 files changed, 96 insertions(+), 95 deletions(-) create mode 100644 published/20151208 6 creative ways to use ownCloud.md delete mode 100644 translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md diff --git a/published/20151208 6 creative ways to use ownCloud.md b/published/20151208 6 creative ways to use 
ownCloud.md new file mode 100644 index 0000000000..02824a4255 --- /dev/null +++ b/published/20151208 6 creative ways to use ownCloud.md @@ -0,0 +1,96 @@ +ownCloud 的六大神奇用法 +================================================================================ + +![Yearbook cover 2015](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/osdc-open-source-yearbook-lead1-inc0335020sw-201511-01.png) + +(图片来源:Opensource.com) + +[ownCloud][1] 是一个自行托管的开源文件同步和共享服务器。就像“行业老大” Dropbox、Google Drive、Box 和其他的同类服务一样,ownCloud 也可以让你访问自己的文件、日历、联系人和其他数据。你可以在自己设备之间同步任意数据(或部分数据)并分享给其他人。然而,ownCloud 要比其它的商业解决方案更棒,可以[将 ownCloud 运行在自己的服务器][2]而不是其它人的服务器上。 + +现在,让我们一起来看看在 ownCloud 上的六个创造性的应用方式。其中一些是由于 ownCloud 的开源才得以完成,而另外的则是 ownCloud 自身特有的功能。 + +### 1. 可扩展的 ownCloud “派”集群 ### + +由于 ownCloud 是开源的,你可以选择将它运行在自己的服务器中,或者从你信任的服务商那里获取空间——没必要将你的文件存储在那些大公司的服务器中,谁知他们将你的文件存储到哪里去。[点击此处查看部分 ownCloud 服务商][3],或者下载该服务软件到你的虚拟主机中[搭建自己的服务器][4]. + +![](https://opensource.com/sites/default/files/images/life-uploads/banana-pi-owncloud-cluster.jpg) + +*拍摄: Jörn Friedrich Dreyer. [CC BY-SA 4.0.][5]* + +我们见过最具创意的事情就是架设[香蕉派集群][6]和[树莓派集群][7]。ownCloud 的扩展性通常用于支持成千上万的用户,但有些人则将它往不同方向发展,通过将多个微型系统集群在一起,就可以创建出运行速度超快的 ownCloud。酷毙了! + +### 2. 密码同步 ### + +为了让 ownCloud 更容易扩展,我们将它变得超级的模块化,甚至还有一个 [ownCloud 应用商店][8]。你可以在里边找到音乐和视频播放器、日历、联系人、生产力应用、游戏、应用模板(sketching app)等等。 + +从近 200 多个应用中仅挑选一个是一件非常困难的事,但密码管理则是一个很独特的功能。只有不超过三个应用提供这个功能:[Passwords][9]、[Secure Container][10] 和 [Passman][11]。 + +![](https://opensource.com/sites/default/files/images/life-uploads/password.png) + +### 3. 随心所欲地存储文件 ### + +外部存储可以让你将现有数据挂载到 ownCloud 上,让你通过一个界面来访问存储在 FTP、WebDAV、Amazon S3,甚至 Dropbox 和 Google Drive 的文件。 + +注:youtube 视频 + + +行业老大们喜欢创建自己的 “藩篱花园”,Box 的用户只能和其它的 Box 用户协作;假如你想从 Google Drive 分享你的文件,你的同伴也必须要有一个 Google 账号才可以访问的分享。通过 ownCloud 的外部存储功能,你可以轻松打破这些。 + +最有创意的就是把 Google Drive 和 Dropbox 添加为外部存储。这样你就可以无缝连接它们,通过一个简单的链接即可分享给其它人——并不需要账户。 + +### 4. 
获取上传的文件 ### + +由于 ownCloud 是开源开,人们可以不受公司需求的制约而向它贡献感兴趣的功能。我们的贡献者总是很在意安全和隐私,所以 ownCloud 引入的通过密码保护公共链接并[设置失效期限][12]的功能要比其它人早很多。 + +现在,ownCloud 可以配置分享链接的读写权限了,这就是说链接的访问者可以无缝的编辑你分享给他们的文件(可以有密码保护,也可以没有),或者将文件上传到服务器前不用强制他们提供私人信息来注册服务。 + +注:youtube 视频 + + +对于有人想给你分享大体积的文件时,这个特性就非常有用了。相比于上传到第三方站点、然后给你发送一个连接、你再去下载文件(通常需要登录),ownCloud 仅需要上传文件到你提供的分享文件夹,你就可以马上获取到文件了。 + +### 5. 免费却又安全的存储空间 ### + +之前就强调过,我们的代码贡献者最关注的就是安全和隐私,这就是 ownCloud 中有用于加密和解密存储数据的应用的原因。 + +通过使用 ownCloud 将你的文件存储到 Dropbox 或者 Google Drive,则会违背夺回数据的控制权并保持数据隐私的初衷。但是加密应用则可以改变这个状况。在发送数据给这些提供商前进行数据加密,并在取回数据的时候进行解密,你的数据就会变得很安全。 + +### 6. 在你的可控范围内分享文件 ### + +作为开源项目,ownCloud 没有必要自建 “藩篱花园”。通过“联邦云共享(Federated Cloud Sharing)”:这个[由 ownCloud 开发和发布的][13]协议使不同的文件同步和共享服务器可以彼此之间进行通信,并能够安全地传输文件。联邦云共享本身来自一个有趣的事情:有 [22 所德国大学][14] 想要为自身的 50 万名学生建立一个庞大的云服务,但是每个大学都想控制自己学生的数据。于是乎,我们需要一个创造性的解决方案:也就是联邦云服务。该解决方案可以连接全部的大学,使得学生们可以无缝的协同工作。同时,每个大学的系统管理员保持着对自己学生创建的文件的控制权,并可采用自己的策略,如限制限额,或者限制什么人、什么文件以及如何共享。 + +注:youtube 视频 + + +并且,这项神奇的技术并没有限制于德国的大学之间,每个 ownCloud 用户都能在自己的用户设置中找到自己的[联邦云 ID][15],并将之分享给同伴。 + +现在你明白了吧。通过这六个方式,ownCloud 就能让人们做一些特殊而独特的事。而使这一切成为可能的,就是 ownCloud 是开源的,其设计目标就是让你的数据自由。 + +你有其它的 ownCloud 的创意用法吗?请发表评论让我们知道。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/15/12/6-creative-ways-use-owncloud + +作者:[Jos Poortvliet][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jospoortvliet +[1]:https://owncloud.com/ +[2]:https://blogs.fsfe.org/mk/new-stickers-and-leaflets-no-cloud-and-e-mail-self-defense/ +[3]:https://owncloud.org/providers +[4]:https://owncloud.org/install/#instructions-server +[5]:https://creativecommons.org/licenses/by-sa/4.0/ +[6]:http://www.owncluster.de/ +[7]:https://christopherjcoleman.wordpress.com/2013/01/05/host-your-owncloud-on-a-raspberry-pi-cluster/ +[8]:https://apps.owncloud.com/ 
+[9]:https://apps.owncloud.com/content/show.php/Passwords?content=170480 +[10]:https://apps.owncloud.com/content/show.php/Secure+Container?content=167268 +[11]:https://apps.owncloud.com/content/show.php/Passman?content=166285 +[12]:https://owncloud.com/owncloud45-community/ +[13]:http://karlitschek.de/2015/08/announcing-the-draft-federated-cloud-sharing-api/ +[14]:https://owncloud.com/customer/sciebo/ +[15]:https://owncloud.org/federation/ diff --git a/translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md b/translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md deleted file mode 100644 index 5d81ed9cdf..0000000000 --- a/translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md +++ /dev/null @@ -1,95 +0,0 @@ -GHLandy Translated - -使用 ownCloud 的六个创意方法 -================================================================================ -![Yearbook cover 2015](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/osdc-open-source-yearbook-lead1-inc0335020sw-201511-01.png) - -图片来源:Opensource.com - -[ownCloud][1] 是一个自我托管且开源的文件同步和共享服务上。就像 "big boys" Dropbox、Google Drive、Box 和其他的同类服务一样,ownCloud 可以让你访问自己的文件、日历、联系人和其他数据。你可以在自己设备之间进行任意数据(包括它自身的一部分)同步以及给其他人分享文件。然而,ownCloud 并非只能运行在它自己的开发商之中,试试[将 ownCloud 运行在其他服务器上][2] - -现在,一起来看看在 ownCloud 上的六件创意事件。其中一些是由于 ownCloud 的开源才得以完成,而另外的则是 ownCloud 自身特有的功能。 - -### 1. 可扩展的 ownCloud 派集群 ### - -由于 ownCloud 是开源的,你可以选择将它运行在自己的服务器中,或者从你信任的服务器提供商那里获取空间——没必要将你的文件存储在大公司的服务器中,谁知他们将你的文件存储到哪里去。[点击此处查看部分 ownCloud 服务商][3],或者下载该服务软件到你的虚拟主机中[搭建自己的服务器][4]. - -![](https://opensource.com/sites/default/files/images/life-uploads/banana-pi-owncloud-cluster.jpg) - -拍摄: Jörn Friedrich Dreyer. [CC BY-SA 4.0.][5] - -我们见过最具创意的事情就是组建 [香蕉派集群][6] 和 [树莓派集群][7]。ownCloud 的扩展性通常是成千上万的用户来完成的,这些人则将它往不同方向发展,通过大量的小型系统集群在一起,就可以创建出运行速度非常快的 ownCloud。酷毙了! - -### 2. 
密码同步 ### - -为了让 ownCloud 更容易扩展,我们需要将它模块化,并拥有 [ownCloud app store][8]。然后你就可以在里边搜索音乐、视频播放器、日历、联系人、生产应用、游戏、应用框架等。 - -仅从 200 多个可用应用中挑选一个是一件非常困难的事,但密码管理则是一个很好的特性。ownCloud app store 里边至少有三款这种应用:[Passwords][9]、[Secure Container][10] 和 [Passman][11]。 - -![](https://opensource.com/sites/default/files/images/life-uploads/password.png) - -### 3. 随心所欲地存储文件 ### - -外部存储允许你通过接口将现有数据联系到 ownCloud,让你轻松访问存储在FTP、WebDAV、Amazon S3,甚至 Dropbox 和Google Drive。 - -注:youtube 视频 - - -DropBox 喜欢创建自己的 “围墙式花园”,只有注册用户之间才可以进行协作;假如你通过Google Drive 来分享文件,你的同伴也必须要有一个 Google 账号才可以访问的分享。通过 ownCloud 的外部存储功能,你可以轻松打破这些规则障碍。 - -最有创意的就是把 Google Drive 和 Dropbox 添加为外部存储。这样你就可以无缝的使用它们,并使用不需要账户地链接把文件分享给和你协作的人。 - -### 4. 下载以上传的文件 ### - -由于 ownCloud 的开源,人们可以不受公司需求限制地向它共享代码,增加新特性。共献者关注的往往是安全和隐私,所以 ownCloud 引入的特性常常比别人的要早,比如通过密码保护的公共链接和[设置失效期限][12]。 - -现在,ownCloud 可以配置分享链接的读写权限了,这就是说链接的访问者可以无缝的编辑你分享给他们的文件(不管是否有密码保护),或者在不提供他们的私人数据来登录其他服务的情况下将文件上传到服务器。 - -注:youtube 视频 - - -对于有人想给你分享大体积的文件时,这个特性就非常有用了。相比于上传到第三方站点、然后给你发送一个连接、你再去下载文件(通常需要登录),ownCloud 仅需要上传文件到你提供的分享文件夹、你就可以买上获取到文件了。 - -### 5. 免费却又安全的存储空间 ### - -之前就强调过,我们的代码贡献者最关注的就是安全和隐私,这就是 ownCloud 中有用于加密和解密存储数据的应用的原因。 - -通过使用 ownCloud 将你的文件存储到 Dropbox 或者 Google Drive,则会违背控制数据以及保持数据隐私的原则。但是加密应用则刚好可以满足安全及隐私问题。在发送数据给这些提供商前进行数据加密,并在取回数据的时候进行解密,你的数据就会变得很安全。 - -### 6. 
在你的可控范围内分享文件 ### - -作为开源项目,ownCloud 没有必要自建 “围墙式花园”。进入联邦云共享:[developed and published by ownCloud][13] 协议使不同的文件同步和共享服务器可以彼此之间进行通信,并能够安全地传输文件。联邦云共享本身有一个有趣的故事:[22 所德国大学][14] 想要为自身的 500,000 学生建立一个庞大的云服务,但是每个大学都想控制自己学生数据。于是乎,我们需要一个可行性解决方案:也就是联邦云服务。该解决方案让让学生保持连接,使得他们可以无缝的协同工作。同时,每个大学的系统管理员保持着对自己学生创建的文件的控制权,如限制存储或者限制什么人、什么文件以及如何共享。 - -注:youtube 视频 - - -并且,这项令人崇敬的技术并没有限制于德国的大学之间,而是每个 ownCloud 用户都能在自己的用户设置中找到自己的 [联邦云 ID][15],并将之分享给同伴。 - -现在你明白了吧。仅六个方法,ownCloud 就能让人们完成特殊和特别的事。而是这一切成为可能的,就是 ownCloud 的开源 —— 设计用来释放你数据。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/15/12/6-creative-ways-use-owncloud - -作者:[Jos Poortvliet][a] -译者:[GHLandy](https://github.com/GHLandy) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jospoortvliet -[1]:https://owncloud.com/ -[2]:https://blogs.fsfe.org/mk/new-stickers-and-leaflets-no-cloud-and-e-mail-self-defense/ -[3]:https://owncloud.org/providers -[4]:https://owncloud.org/install/#instructions-server -[5]:https://creativecommons.org/licenses/by-sa/4.0/ -[6]:http://www.owncluster.de/ -[7]:https://christopherjcoleman.wordpress.com/2013/01/05/host-your-owncloud-on-a-raspberry-pi-cluster/ -[8]:https://apps.owncloud.com/ -[9]:https://apps.owncloud.com/content/show.php/Passwords?content=170480 -[10]:https://apps.owncloud.com/content/show.php/Secure+Container?content=167268 -[11]:https://apps.owncloud.com/content/show.php/Passman?content=166285 -[12]:https://owncloud.com/owncloud45-community/ -[13]:http://karlitschek.de/2015/08/announcing-the-draft-federated-cloud-sharing-api/ -[14]:https://owncloud.com/customer/sciebo/ -[15]:https://owncloud.org/federation/ From 33f340f5ee71931e6c9d9259c73cb4d361bff680 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 12 Aug 2016 08:57:38 +0800 Subject: [PATCH 396/471] PUB:20160518 Python 3 - An Intro to Encryption MIME-Version: 1.0 
Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Cathon 翻译的不错,很用心。 --- ...60518 Python 3 - An Intro to Encryption.md | 49 +++++++------------ 1 file changed, 18 insertions(+), 31 deletions(-) rename {translated/tech => published}/20160518 Python 3 - An Intro to Encryption.md (74%) diff --git a/translated/tech/20160518 Python 3 - An Intro to Encryption.md b/published/20160518 Python 3 - An Intro to Encryption.md similarity index 74% rename from translated/tech/20160518 Python 3 - An Intro to Encryption.md rename to published/20160518 Python 3 - An Intro to Encryption.md index 5c57264aa7..cb37986f4c 100644 --- a/translated/tech/20160518 Python 3 - An Intro to Encryption.md +++ b/published/20160518 Python 3 - An Intro to Encryption.md @@ -1,15 +1,13 @@ Python 3: 加密简介 =================================== -Python 3 的标准库中没什么用来解决加密的,不过却有用于处理哈希的库。在这里我们会对其进行一个简单的介绍,但重点会放在两个第三方的软件包: PyCrypto 和 cryptography 上。我们将学习如何使用这两个库,来加密和解密字符串。 - ---- +Python 3 的标准库中没多少用来解决加密的,不过却有用于处理哈希的库。在这里我们会对其进行一个简单的介绍,但重点会放在两个第三方的软件包:PyCrypto 和 cryptography 上。我们将学习如何使用这两个库,来加密和解密字符串。 ### 哈希 -如果需要用到安全哈希算法或是消息摘要算法,那么你可以使用标准库中的 **hashlib** 模块。这个模块包含了标准的安全哈希算法,包括 SHA1,SHA224,SHA256,SHA384,SHA512 以及 RSA 的 MD5 算法。Python 的 **zlib** 模块也提供 adler32 以及 crc32 哈希函数。 +如果需要用到安全哈希算法或是消息摘要算法,那么你可以使用标准库中的 **hashlib** 模块。这个模块包含了符合 FIPS(美国联邦信息处理标准)的安全哈希算法,包括 SHA1,SHA224,SHA256,SHA384,SHA512 以及 RSA 的 MD5 算法。Python 也支持 adler32 以及 crc32 哈希函数,不过它们在 **zlib** 模块中。 -一个哈希最常见的用法是,存储密码的哈希值而非密码本身。当然了,使用的哈希函数需要稳健一点,否则容易被破解。另一个常见的用法是,计算一个文件的哈希值,然后将这个文件和它的哈希值分别发送。接受到文件的人可以计算文件的哈希值,检验是否与接受到的哈希值相符。如果两者相符,就说明文件在传送的过程中未经篡改。 +哈希的一个最常见的用法是,存储密码的哈希值而非密码本身。当然了,使用的哈希函数需要稳健一点,否则容易被破解。另一个常见的用法是,计算一个文件的哈希值,然后将这个文件和它的哈希值分别发送。接收到文件的人可以计算文件的哈希值,检验是否与接受到的哈希值相符。如果两者相符,就说明文件在传送的过程中未经篡改。 让我们试着创建一个 md5 哈希: @@ -26,8 +24,6 @@ TypeError: Unicode-objects must be encoded before hashing b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w' ``` -Let’s take a moment to break this down a bit. 
First off, we import **hashlib** and then we create an instance of an md5 HASH object. Next we add some text to the hash object and we get a traceback. It turns out that to use the md5 hash, you have to pass it a byte string instead of a regular string. So we try that and then call it’s **digest** method to get our hash. If you prefer the hex digest, we can do that too: - 让我们花点时间一行一行来讲解。首先,我们导入 **hashlib** ,然后创建一个 md5 哈希对象的实例。接着,我们向这个实例中添加一个字符串后,却得到了报错信息。原来,计算 md5 哈希时,需要使用字节形式的字符串而非普通字符串。正确添加字符串后,我们调用它的 **digest** 函数来得到哈希值。如果你想要十六进制的哈希值,也可以用以下方法: ``` @@ -35,7 +31,7 @@ Let’s take a moment to break this down a bit. First off, we import **hashlib** '1482ec1b2364f64e7d162a2b5b16f477' ``` -实际上,有一种精简的方法来创建哈希,下面我们看一下用这种方法创建一个 sha512 哈希: +实际上,有一种精简的方法来创建哈希,下面我们看一下用这种方法创建一个 sha1 哈希: ``` >>> sha = hashlib.sha1(b'Hello Python').hexdigest() @@ -45,14 +41,11 @@ Let’s take a moment to break this down a bit. First off, we import **hashlib** 可以看到,我们可以同时创建一个哈希实例并且调用其 digest 函数。然后,我们打印出这个哈希值看一下。这里我使用 sha1 哈希函数作为例子,但它不是特别安全,读者可以随意尝试其他的哈希函数。 - ---- - ### 密钥导出 -Python 的标准库对密钥导出支持较弱。实际上,hashlib 函数库提供的唯一方法就是 **pbkdf2_hmac** 函数。它是基于口令的密钥导出函数 PKCS#5 ,并使用 HMAC 作为伪随机函数。因为它支持加盐和迭代操作,你可以使用类似的方法来哈希你的密码。例如,如果你打算使用 SHA-256 加密方法,你将需要至少 16 个字节的盐,以及最少 100000 次的迭代操作。 +Python 的标准库对密钥导出支持较弱。实际上,hashlib 函数库提供的唯一方法就是 **pbkdf2_hmac** 函数。它是 PKCS#5 的基于口令的第二个密钥导出函数,并使用 HMAC 作为伪随机函数。因为它支持“加盐(salt)”和迭代操作,你可以使用类似的方法来哈希你的密码。例如,如果你打算使用 SHA-256 加密方法,你将需要至少 16 个字节的“盐”,以及最少 100000 次的迭代操作。 -简单来说,盐就是随机的数据,被用来加入到哈希的过程中,以加大破解的难度。这基本可以保护你的密码免受字典和彩虹表的攻击。 +简单来说,“盐”就是随机的数据,被用来加入到哈希的过程中,以加大破解的难度。这基本可以保护你的密码免受字典和彩虹表(rainbow table)的攻击。 让我们看一个简单的例子: @@ -66,9 +59,7 @@ Python 的标准库对密钥导出支持较弱。实际上,hashlib 函数库 b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02' ``` -这里,我们用 SHA256 对一个密码进行哈希,使用了一个糟糕的盐,但经过了 100000 次迭代操作。当然,SHA 实际上并不被推荐用来创建密码的密钥。你应该使用类似 **scrypt** 的算法来替代。另一个不错的选择是使用一个叫 **bcrypt** 的第三方库。它是被专门设计出来哈希密码的。 - ---- +这里,我们用 SHA256 对一个密码进行哈希,使用了一个糟糕的盐,但经过了 100000 次迭代操作。当然,SHA 实际上并不被推荐用来创建密码的密钥。你应该使用类似 **scrypt** 
的算法来替代。另一个不错的选择是使用一个叫 **bcrypt** 的第三方库,它是被专门设计出来哈希密码的。 ### PyCryptodome @@ -86,11 +77,11 @@ pip install pycryptodome pip install pycryptodomex ``` -如果你遇到了问题,可能是因为你没有安装正确的依赖包(译者注:如 python-devel),或者你的 Windows 系统需要一个编译器。如果你需要安装上的帮助或技术支持,可以访问 PyCryptodome 的[网站][1]。 +如果你遇到了问题,可能是因为你没有安装正确的依赖包(LCTT 译注:如 python-devel),或者你的 Windows 系统需要一个编译器。如果你需要安装上的帮助或技术支持,可以访问 PyCryptodome 的[网站][1]。 还值得注意的是,PyCryptodome 在 PyCrypto 最后版本的基础上有很多改进。非常值得去访问它们的主页,看看有什么新的特性。 -### 加密字符串 +#### 加密字符串 访问了他们的主页之后,我们可以看一些例子。在第一个例子中,我们将使用 DES 算法来加密一个字符串: @@ -116,8 +107,7 @@ ValueError: Input strings must be a multiple of 8 in length b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ' ``` -这段代码稍有些复杂,让我们一点点来看。首先需要注意的是,DES 加密使用的密钥长度为 8 个字节,这也是我们将密钥变量设置为 8 个字符的原因。而我们需要加密的字符串的长度必须是 8 的倍数,所以我们创建了一个名为 **pad** 的函数,来给一个字符串末尾添加空格,直到它的长度是 8 的倍数。然后,我们创建了一个 DES 的实例,以及我们需要加密的文本。我们还创建了一个经过填充处理的文本。我们尝试这对未经填充处理的文本进行加密,啊欧,报错了!我们需要对经过填充处理的文本进行加密,然后得到加密的字符串。 -(译者注:encrypt 函数的参数应为 byte 类型字符串,代码为:`encrypted_text = des.encrypt(padded_textpadded_text.encode('uf-8'))`) +这段代码稍有些复杂,让我们一点点来看。首先需要注意的是,DES 加密使用的密钥长度为 8 个字节,这也是我们将密钥变量设置为 8 个字符的原因。而我们需要加密的字符串的长度必须是 8 的倍数,所以我们创建了一个名为 **pad** 的函数,来给一个字符串末尾填充空格,直到它的长度是 8 的倍数。然后,我们创建了一个 DES 的实例,以及我们需要加密的文本。我们还创建了一个经过填充处理的文本。我们尝试着对未经填充处理的文本进行加密,啊欧,报了一个 ValueError 错误!我们需要对经过填充处理的文本进行加密,然后得到加密的字符串。(LCTT 译注:encrypt 函数的参数应为 byte 类型字符串,代码为:`encrypted_text = des.encrypt(padded_text.encode('utf-8'))`) 知道了如何加密,还要知道如何解密: @@ -128,7 +118,7 @@ b'Python rocks! ' 幸运的是,解密非常容易,我们只需要调用 des 对象的 **decrypt** 方法就可以得到我们原来的 byte 类型字符串了。下一个任务是学习如何用 RSA 算法加密和解密一个文件。首先,我们需要创建一些 RSA 密钥。 -### 创建 RSA 密钥 +#### 创建 RSA 密钥 如果你希望使用 RSA 算法加密数据,那么你需要拥有访问 RAS 公钥和私钥的权限,否则你需要生成一组自己的密钥对。在这个例子中,我们将生成自己的密钥对。创建 RSA 密钥非常容易,所以我们将在 Python 解释器中完成。 @@ -148,9 +138,9 @@ b'Python rocks! 
' 接下来,我们通过 RSA 密钥实例的 **publickey** 方法创建我们的公钥。我们使用方法链调用 publickey 和 exportKey 方法生成公钥,同样将它写入磁盘上的文件。 -### 加密文件 +#### 加密文件 -有了私钥和公钥之后,我们就可以加密一些数据,并写入文件了。这儿有个比较标准的例子: +有了私钥和公钥之后,我们就可以加密一些数据,并写入文件了。这里有个比较标准的例子: ``` from Crypto.PublicKey import RSA @@ -204,17 +194,15 @@ with open('/path/to/encrypted_data.bin', 'rb') as fobj: print(data) ``` -如果你认真看了上一个例子,这段代码应该很容易解析。在这里,我们先读取二进制的加密文件,然后导入私钥。注意,当你导入私钥时,需要提供一个密码,否则会出现错误。然后,我们文件中读取数据,首先是加密的会话密钥,然后是 16 字节的随机数和 16 字节的消息认证码,最后是剩下的加密的数据。 +如果你认真看了上一个例子,这段代码应该很容易解析。在这里,我们先以二进制模式读取我们的加密文件,然后导入私钥。注意,当你导入私钥时,需要提供一个密码,否则会出现错误。然后,我们文件中读取数据,首先是加密的会话密钥,然后是 16 字节的随机数和 16 字节的消息认证码,最后是剩下的加密的数据。 接下来我们需要解密出会话密钥,重新创建 AES 密钥,然后解密出数据。 你还可以用 PyCryptodome 库做更多的事。不过我们要接着讨论在 Python 中还可以用什么来满足我们加密解密的需求。 ---- - ### cryptography 包 -**cryptography** 的目标是成为人类易于使用的密码学包,就像 **requests** 是人类易于使用的 HTTP 库一样。这个想法使你能够创建简单安全,易于使用的加密方案。如果有需要的话,你也可以使用一些底层的密码学基元,但这也需要你知道更多的细节,否则创建的东西将是不安全的。 +**cryptography** 的目标是成为“人类易于使用的密码学包(cryptography for humans)”,就像 **requests** 是“人类易于使用的 HTTP 库(HTTP for Humans)”一样。这个想法使你能够创建简单安全、易于使用的加密方案。如果有需要的话,你也可以使用一些底层的密码学基元,但这也需要你知道更多的细节,否则创建的东西将是不安全的。 如果你使用的 Python 版本是 3.5, 你可以使用 pip 安装,如下: @@ -222,7 +210,7 @@ print(data) pip install cryptography ``` -你会看到 cryptography 包还安装了一些依赖包(译者注:如 libopenssl-devel)。如果安装都顺利,我们就可以试着加密一些文本了。让我们使用 **Fernet** 对称加密算法,它保证了你加密的任何信息在不知道密码的情况下不能被篡改或读取。Fernet 还通过 **MultiFernet** 支持密钥轮换。下面让我们看一个简单的例子: +你会看到 cryptography 包还安装了一些依赖包(LCTT 译注:如 libopenssl-devel)。如果安装都顺利,我们就可以试着加密一些文本了。让我们使用 **Fernet** 对称加密算法,它保证了你加密的任何信息在不知道密码的情况下不能被篡改或读取。Fernet 还通过 **MultiFernet** 支持密钥轮换。下面让我们看一个简单的例子: ``` >>> from cryptography.fernet import Fernet @@ -242,9 +230,8 @@ b'My super secret message' 首先我们需要导入 Fernet,然后生成一个密钥。我们输出密钥看看它是什么样儿。如你所见,它是一个随机的字节串。如果你愿意的话,可以试着多运行 **generate_key** 方法几次,生成的密钥会是不同的。然后我们使用这个密钥生成 Fernet 密码实例。 -现在我们有了用来加密和解密消息的密码。下一步是创建一个需要加密的消息,然后使用 **encrypt** 方法对它加密。我打印出加密的文本,然后你可以看到你不再能够读懂它。为了**解密**出我们的秘密消息,我们只需调用 decrypt 方法,并传入加密的文本作为参数。结果就是我们得到了消息字节串形式的纯文本。 +现在我们有了用来加密和解密消息的密码。下一步是创建一个需要加密的消息,然后使用 **encrypt** 
方法对它加密。我打印出加密的文本,然后你可以看到你再也读不懂它了。为了解密出我们的秘密消息,我们只需调用 **decrypt** 方法,并传入加密的文本作为参数。结果就是我们得到了消息字节串形式的纯文本。 ---- ### 小结 @@ -268,7 +255,7 @@ via: http://www.blog.pythonlibrary.org/2016/05/18/python-3-an-intro-to-encryptio 作者:[Mike][a] 译者:[Cathon](https://github.com/Cathon) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 207dd4370034b2a6696c95f0c61170dda4d5ebd0 Mon Sep 17 00:00:00 2001 From: may Date: Fri, 12 Aug 2016 09:29:21 +0800 Subject: [PATCH 397/471] =?UTF-8?q?=EF=BD=94=EF=BD=92=EF=BD=81=EF=BD=8E?= =?UTF-8?q?=EF=BD=93=EF=BD=8C=EF=BD=81=EF=BD=94=EF=BD=89=EF=BD=8E=EF=BD=87?= =?UTF-8?q?=E3=80=80=EF=BD=86=EF=BD=89=EF=BD=8E=EF=BD=89=EF=BD=93=EF=BD=88?= =?UTF-8?q?=EF=BD=85=EF=BD=84=20(#4303)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * translating by maywanting * finish translating --- sources/talk/20160620 5 SSH Hardening Tips.md | 127 ------------------ .../talk/20160620 5 SSH Hardening Tips.md | 125 +++++++++++++++++ 2 files changed, 125 insertions(+), 127 deletions(-) delete mode 100644 sources/talk/20160620 5 SSH Hardening Tips.md create mode 100644 translated/talk/20160620 5 SSH Hardening Tips.md diff --git a/sources/talk/20160620 5 SSH Hardening Tips.md b/sources/talk/20160620 5 SSH Hardening Tips.md deleted file mode 100644 index e1a7353807..0000000000 --- a/sources/talk/20160620 5 SSH Hardening Tips.md +++ /dev/null @@ -1,127 +0,0 @@ -translating by maywanting - -5 SSH Hardening Tips -====================== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1188510_1920_0.jpg?itok=ocPCL_9G) ->Make your OpenSSH sessions more secure with these simple tips. -> Creative Commons Zero - -When you look at your SSH server logs, chances are they are full of attempted logins from entities of ill intent. 
Here are 5 general ways (along with several specific tactics) to make your OpenSSH sessions more secure. - -### 1. Make Password Auth Stronger - -Password logins are convenient, because you can log in from any machine anywhere. But they are vulnerable to brute-force attacks. Try these tactics for strengthening your password logins. - -- Use a password generator, such as pwgen. pwgen takes several options; the most useful is password length (e.g., pwgen 12 generates a 12-character password). - -- Never reuse a password. Ignore all the bad advice about not writing down your passwords, and keep a notebook with your logins written in it. If you don't believe me that this is a good idea, then believe security guru [Bruce Schneier][1]. If you're reasonably careful, nobody will ever find your notebook, and it is immune from online attacks. - -- You can add extra protection to your login notebook by obscuring the logins recorded in your notebook with character substitution or padding. Use a simple, easily-memorable convention such as padding your passwords with two extra random characters, or use a single simple character substitution such as # for *. - -- Use a non-standard listening port on your SSH server. Yes, this is old advice, and it's still good. Examine your logs; chances are that port 22 is the standard attack point, with few attacks on other ports. - -- Use [Fail2ban][2] to dynamically protect your server from brute force attacks. - -- Create non-standard usernames. Never ever enable a remote root login, and avoid "admin". - -### 2. Fix Too Many Authentication Failures - -When my ssh logins fail with "Too many authentication failures for carla" error messages, it makes me feel bad. I know I shouldn't take it personally, but it still stings. But, as my wise granny used to say, hurt feelings don't fix the problem. The cure for this is to force a password-based login in your ~/.ssh/config file. 
If this file does not exist, first create the ~/.ssh/ directory: - -``` -$ mkdir ~/.ssh -$ chmod 700 ~/.ssh -``` - -Then create the `~/.ssh/config` file in a text editor and enter these lines, using your own remote HostName address: - -``` -HostName remote.site.com -PubkeyAuthentication=no -``` - -### 3. Use Public Key Authentication - -Public Key authentication is much stronger than password authentication, because it is immune to brute-force password attacks, but it’s less convenient because it relies on RSA key pairs. To begin, you create a public/private key pair. Next, the private key goes on your client computer, and you copy the public key to the remote server that you want to log into. You can log in to the remote server only from computers that have your private key. Your private key is just as sensitive as your house key; anyone who has possession of it can access your accounts. You can add a strong layer of protection by putting a passphrase on your private key. - -Using RSA key pairs is a great tool for managing multiple users. When a user leaves, disable their login by deleting their public key from the server. - -This example creates a new key pair of 3072 bits strength, which is stronger than the default 2048 bits, and gives it a unique name so you know what server it belongs to: - -``` -$ ssh-keygen -t rsa -b 3072 -f id_mailserver -``` - -This creates two new keys, id_mailserver and id_mailserver.pub. id_mailserver is your private key -- do not share this! Now securely copy your public key to your remote server with the ssh-copy-id command. 
You must already have a working SSH login on the remote server: - -``` -$ ssh-copy-id -i id_rsa.pub user@remoteserver - -/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed -user@remoteserver's password: - -Number of key(s) added: 1 - -Now try logging into the machine, with: "ssh 'user@remoteserver'" -and check to make sure that only the key(s) you wanted were added. -``` - -ssh-copy-id ensures that you will not accidentally copy your private key. Test your new key login by copying the example from your command output, with single quotes: - -``` -$ ssh 'user@remoteserver' -``` - -It should log you in using your new key, and if you set a password on your private key, it will prompt you for it. - -### 4. Disable Password Logins - -Once you have tested and verified your public key login, disable password logins so that your remote server is not vulnerable to brute force password attacks. Do this in the /etc/sshd_config file on your remote server with this line: - -``` -PasswordAuthentication no -``` - -Then restart your SSH daemon. - -### 5. Set Up Aliases -- They’re Fast and Cool - -You can set up aliases for remote logins that you use a lot, so instead of logging in with something like "ssh -u username -p 2222 remote.site.with.long-name", you can use "ssh remote1". Set it up like this in your ~/.ssh/config file: - -``` -Host remote1 -HostName remote.site.with.long-name -Port 2222 -User username -PubkeyAuthentication no -``` - -If you are using public key authentication, it looks like this: - -``` -Host remote1 -HostName remote.site.with.long-name -Port 2222 -User username -IdentityFile ~/.ssh/id_remoteserver -``` - -The [OpenSSH documentation][3] is long and detailed, but after you have mastered basic SSH use, you'll find it's very useful and contains a trove of cool things you can do with OpenSSH. 
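
The pwgen advice in tip #1 can also be approximated without installing anything extra. The sketch below is my addition, not part of the article: it needs Python 3.6+ for the `secrets` module, and it only mimics pwgen's length option (not its pronounceable-password modes); the alphabet choice is an assumption:

```python
import secrets
import string

def gen_password(length=12, alphabet=string.ascii_letters + string.digits):
    """Generate a random password from a cryptographically secure source,
    similar in spirit to `pwgen <length>`."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(gen_password())    # 12 characters by default
    print(gen_password(16))
```

`secrets` draws from the OS's CSPRNG, so unlike `random`, it is suitable for passwords and keys.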
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/5-ssh-hardening-tips
-
-作者:[CARLA SCHRODER][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/cschroder
-[1]: https://www.schneier.com/blog/archives/2005/06/write_down_your.html
-[2]: http://www.fail2ban.org/wiki/index.php/Main_Page
-[3]: http://www.openssh.com/
diff --git a/translated/talk/20160620 5 SSH Hardening Tips.md b/translated/talk/20160620 5 SSH Hardening Tips.md
new file mode 100644
index 0000000000..a030e13186
--- /dev/null
+++ b/translated/talk/20160620 5 SSH Hardening Tips.md
@@ -0,0 +1,125 @@
+五条强化 SSH 安全的建议
+======================
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1188510_1920_0.jpg?itok=ocPCL_9G)
+>采用这些简单的建议,使你的 OpenSSH 会话更加安全。
+
+> 图片授权:Creative Commons Zero
+
+当你查看你的 SSH 服务日志时,很可能会发现其中充斥着大量不怀好意的登录尝试。这里有 5 条通用的建议(以及几条具体的策略),让你的 OpenSSH 会话更加安全。
+
+### 1. 强化密码登录
+
+密码登录很方便,因为你可以从任何地方的任何机器上登录。但是它们在暴力破解攻击面前是脆弱的。尝试以下策略来强化你的密码登录。
+
+- 使用一个密码生成工具,例如 pwgen。pwgen 有几个选项;最有用的是密码长度选项(例如,`pwgen 12` 会生成一个 12 个字符的密码)。
+
+- 不要重复使用同一个密码。忽略所有那些关于不要把密码记下来的建议,把你所有的登录信息都记在一个本子上。如果你不相信我的建议,那就去相信安全权威 [Bruce Schneier][1]。只要你足够小心,没有人能够找到你的笔记本,而它也不会受到网络攻击的影响。
+
+- 你可以为你的登录记事本增加一层额外的保护,例如用字符替换或者填充额外字符的方式来掩盖记在本子上的密码。使用一个简单而且好记的规则,比如说给你的密码填充两个额外的随机字符,或者使用单个简单的字符替换,例如用 `#` 代替 `*`。
+
+- 为你的 SSH 服务使用一个非默认的监听端口。是的,这是很老套的建议,但是它确实很有效。检查你的日志;很有可能 22 端口是被集中攻击的端口,其他端口则很少被攻击。
+
+- 使用 [Fail2ban][2] 来动态保护你的服务器,使服务器免受暴力破解攻击。
+
+- 使用不常见的用户名。绝不要允许 root 远程登录,并避免使用 “admin” 这样的用户名。
+
+### 2. 
解决 `Too Many Authentication Failures` 报错
+
+当我的 ssh 登录失败,并显示「Too many authentication failures for carla」的报错信息时,我很难过。我知道我不应该介意,但是这报错确实很碍眼。而且,正如我聪慧的奶奶曾经说过的,伤心并不能解决问题。解决办法就是在你的 `~/.ssh/config` 文件中强制使用密码登录。如果这个文件不存在,首先创建 `~/.ssh/` 目录:
+
+```
+$ mkdir ~/.ssh
+$ chmod 700 ~/.ssh
+```
+
+然后在文本编辑器中创建 `~/.ssh/config` 文件,输入以下内容,使用你自己的远程主机域名:
+
+```
+HostName remote.site.com
+PubkeyAuthentication=no
+```
+
+### 3. 使用公钥认证
+
+公钥认证比密码登录安全多了,因为它不受暴力密码攻击的影响;但它不太方便,因为它依赖于 RSA 密钥对。首先,你要创建一对公钥和私钥。接下来,私钥放在你的客户端电脑上,并把公钥复制到你想登录的远程服务器上。这样,你只能从拥有私钥的电脑登录到远程服务器。你的私钥就和你的家门钥匙一样敏感;任何人拿到了私钥就可以登录你的账号。你可以给你的私钥加上密码来增加一层保护。
+使用 RSA 密钥对来管理多个用户是一种好方法。当一个用户离开时,只要从服务器上删除他的公钥就能取消他的登录。
+
+以下例子创建一个新的 3072 位长度的密钥对,它比默认的 2048 位更安全,而且为它起了一个独一无二的名字,这样你就可以知道它属于哪个服务器:
+
+```
+$ ssh-keygen -t rsa -b 3072 -f id_mailserver
+```
+
+这会创建两个新的密钥文件:`id_mailserver` 和 `id_mailserver.pub`。`id_mailserver` 是你的私钥,不要泄露它!现在用 `ssh-copy-id` 命令安全地把你的公钥复制到远程服务器。你必须已经可以通过 SSH 登录到该远程服务器:
+
+```
+$ ssh-copy-id -i id_rsa.pub user@remoteserver
+
+/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
+user@remoteserver's password:
+
+Number of key(s) added: 1
+
+Now try logging into the machine, with: "ssh 'user@remoteserver'"
+and check to make sure that only the key(s) you wanted were added.
+```
+
+`ssh-copy-id` 会确保你不会无意间复制了你的私钥。按照上面命令输出中的示例(带上单引号)来测试你的新密钥登录:
+
+```
+$ ssh 'user@remoteserver'
+```
+
+这时应该会使用你的新密钥登录;如果你为私钥设置了密码,它会提示你输入。
+
+### 4. 取消密码登录
+
+一旦你测试并验证了公钥登录,就可以取消密码登录,这样你的远程服务器就不会受到暴力密码攻击。如下设置你的远程服务器的 `/etc/sshd_config` 文件:
+
+```
+PasswordAuthentication no
+```
+
+然后重启 SSH 守护进程。
+
+### 5. 
设置别名 -- 这很快捷而且很有 B 格 + +你可以为你的远程登录设置常用的别名,来替代登录时输入的命令,例如 `ssh -u username -p 2222 remote.site.with.long-name`。你可以使用 `ssh remote1`。你的 `~/.ssh/config` 文件可以参照如下设置 + +``` +Host remote1 +HostName remote.site.with.long-name +Port 2222 +User username +PubkeyAuthentication no +``` + +如果你正在使用公钥登录,可以参照这个: + +``` +Host remote1 +HostName remote.site.with.long-name +Port 2222 +User username +IdentityFile ~/.ssh/id_remoteserver +``` + +[OpenSSH 文档][3] 很长而且详细,但是当你掌握了基础的 SSH 使用规则之后,你会发现它非常的有用而且包含很多可以通过 OpenSSH 来实现的炫酷效果。 + + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/5-ssh-hardening-tips + +作者:[CARLA SCHRODER][a] +译者:[maywanting](https://github.com/maywanting) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/cschroder +[1]: https://www.schneier.com/blog/archives/2005/06/write_down_your.html +[2]: http://www.fail2ban.org/wiki/index.php/Main_Page +[3]: http://www.openssh.com/ From 44281871ef28a488bc7d6b5edc29d9a74917ad1f Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 12 Aug 2016 13:13:56 +0800 Subject: [PATCH 398/471] PUB:20160726 How to restore older file versions in Git @strugglingyouth --- ...w to restore older file versions in Git.md | 182 +++++++++++++++++ ...w to restore older file versions in Git.md | 188 ------------------ 2 files changed, 182 insertions(+), 188 deletions(-) create mode 100644 published/20160726 How to restore older file versions in Git.md delete mode 100644 translated/tech/20160726 How to restore older file versions in Git.md diff --git a/published/20160726 How to restore older file versions in Git.md b/published/20160726 How to restore older file versions in Git.md new file mode 100644 index 0000000000..9008d9cf63 --- /dev/null +++ b/published/20160726 How to restore older file versions in Git.md @@ -0,0 +1,182 @@ +Git 系列(四):在 Git 中进行版本回退 
+============================================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB) + +在这篇文章中,你将学到如何查看项目中的历史版本,如何进行版本回退,以及如何创建 Git 分支以便你可以大胆尝试而不会出现问题。 + +在你的 Git 项目的历史中,你的位置就像是摇滚专辑中的一个片段,由一个被称为 HEAD 的 标记来确定(如磁带录音机或录音播放器的播放头)。要在你的 Git 时间线上前后移动 HEAD ,需要使用 `git checkout` 命令。 + +git checkout 命令的使用方式有两种。最常见的用途是从一个以前的提交中恢复文件,你也可以整个倒回磁带,切换到另一个分支。 + + +### 恢复一个文件 + +当你意识到一个本来很好文件被你完全改乱了。我们都这么干过:我们把文件放到一个地方,添加并提交,然后我们发现它还需要做点最后的调整,最后这个文件被搞得面目全非了。 + +要把它恢复到最后的完好状态,使用 git checkout 从最后的提交(即 HEAD)中恢复: + +``` +$ git checkout HEAD filename +``` + +如果你碰巧提交了一个错误的版本,你需要找回更早的版本,使用 git log 查看你更早的提交,然后从合适的提交中找回它: + +``` +$ git log --oneline +79a4e5f bad take +f449007 The second commit +55df4c2 My great project, first commit. + +$ git checkout 55df4c2 filename + +``` + +现在,以前的文件恢复到了你当前的位置。(任何时候你都可以用 git status 命令查看你的当前状态)因为这个文件改变了,你需要添加这个文件,再进行提交: + +``` +$ git add filename +$ git commit -m 'restoring filename from first commit.' +``` + +使用 Git log 验证你所提交的: + +``` +$ git log --oneline +d512580 restoring filename from first commit +79a4e5f bad take +f449007 The second commit +55df4c2 My great project, first commit. +``` + +从本质上讲,你已经倒好了磁带并修复了坏的地方,所以你需要重新录制正确的。 + +### 回退时间线 + +恢复文件的另一种方式是回退整个 Git 项目。这里使用了分支的思想,这是另一种替代方法。 + +如果你要回到历史提交,你要将 Git HEAD 回退到以前的版本才行。这个例子将回到最初的提交处: + +``` +$ git log --oneline +d512580 restoring filename from first commit +79a4e5f bad take +f449007 The second commit +55df4c2 My great project, first commit. 
+ +$ git checkout 55df4c2 +``` + +当你以这种方式倒回磁带,如果你按下录音键再次开始,就会丢失以前的工作。Git 默认假定你不想这样做,所以将 HEAD 从项目中分离出来,可以让你如所需的那样工作,而不会因为偶尔的记录而影响之后的工作。 + +如果你想看看以前的版本,想要重新做或者尝试不同的方法,那么安全一点的方式就是创建一个新的分支。可以将这个过程想象为尝试同一首歌曲的不同版本,或者创建一个混音的。原始的依然存在,关闭那个分支做你想做的版本吧。 + +就像记录到一个空白磁带一样,把你的 Git HEAD 指到一个新的分支处: + +``` +$ git checkout -b remix +Switched to a new branch 'remix' +``` + +现在你已经切换到了另一个分支,在你面前的是一个替代的干净工作区,准备开始工作吧。 + +也可以不用改变时间线来做同样的事情。也许你很想这么做,但切换到一个临时的工作区只是为了尝试一些疯狂的想法。这在工作中完全是可以接受的,请看: + +``` +$ git status +On branch master +nothing to commit, working directory clean + +$ git checkout -b crazy_idea +Switched to a new branch 'crazy_idea' +``` + +现在你有一个干净的工作空间,在这里你可以完成一些奇怪的想法。一旦你完成了,可以保留你的改变,或者丢弃他们,并切换回你的主分支。 + +若要放弃你的想法,切换到你的主分支,假装新分支不存在: + +``` +$ git checkout master +``` + +想要继续使用你的疯狂的想法,需要把它们拉回到主分支,切换到主分支然后合并新分支到主分支: + +``` +$ git checkout master +$ git merge crazy_idea +``` + +git 的分支功能很强大,开发人员在克隆仓库后马上创建一个新分支是很常见的做法;这样,他们所有的工作都在自己的分支上,可以提交并合并到主分支。Git 是很灵活的,所以没有“正确”或“错误”的方式(甚至一个主分支也可以与其所属的远程仓库分离),但分支易于分离任务和提交贡献。不要太激动,你可以如你所愿的有很多的 Git 分支。完全自由。 + +### 远程协作 + +到目前为止你已经在自己舒适而私密的家中维护着一个 Git 仓库,但如何与其他人协同工作呢? 
+ +有好几种不同的方式来设置 Git 以便让多人可以同时在一个项目上工作,所以首先我们要克隆仓库,你可能已经从某人的 Git 服务器或 GitHub 主页,或在局域网中的共享存储上克隆了一个仓库。 + +工作在私人仓库下和共享仓库下唯一不同的是你需要把你的改变 `push` 到别人的仓库。我们把工作的仓库称之为本地(local)仓库,其他仓库称为远程(remote)仓库。 + +当你以读写的方式克隆一个仓库时,克隆的仓库会继承自被称为 origin 的远程库。你可以看看你的克隆仓库的远程仓库: + +``` +$ git remote --verbose +origin seth@example.com:~/myproject.Git (fetch) +origin seth@example.com:~/myproject.Git (push) +``` + +有一个 origin 远程库非常有用,因为它有异地备份的功能,并允许其他人在该项目上工作。 + +如果克隆没有继承 origin 远程库,或者如果你选择以后再添加,可以使用 `git remote` 命令: + +``` +$ git remote add seth@example.com:~/myproject.Git +``` + +如果你修改了文件,想把它们发到有读写权限的 origin 远程库,使用 `git push`。第一次推送改变,必须也发送分支信息。不直接在主分支上工作是一个很好的做法,除非你被要求这样做: + +``` +$ git checkout -b seth-dev +$ git add exciting-new-file.txt +$ git commit -m 'first push to remote' +$ git push -u origin HEAD +``` + +它会推送你当前的位置(HEAD)及其存在的分支到远程。当推送过一次后,以后每次推送可以不使用 -u 选项: + +``` +$ git add another-file.txt +$ git commit -m 'another push to remote' +$ git push origin HEAD +``` + +### 合并分支 + +当你工作在一个 Git 仓库时,你可以合并任意测试分支到主分支。当团队协作时,你可能想在将它们合并到主分支之前检查他们的改变: + +``` +$ git checkout contributor +$ git pull +$ less blah.txt ### 检查改变的文件 +$ git checkout master +$ git merge contributor +``` + +如果你正在使用 GitHub 或 GitLab 以及类似的东西,这个过程是不同的。但克隆项目并把它作为你自己的仓库都是相似的。你可以在本地工作,将改变提交到你的 GitHub 或 GitLab 帐户,而不用其它人的许可,因为这些库是你自己的。 + +如果你想要让你克隆的仓库接受你的改变,需要创建了一个拉取请求(pull request),它使用 Web 服务的后端发送补丁到真正的拥有者,并允许他们审查和拉取你的改变。 + +克隆一个项目通常是在 Web 服务端完成的,它和使用 Git 命令来管理项目是类似的,甚至推送的过程也是。然后它返回到 Web 服务打开一个拉取请求,工作就完成了。 + +下一部分我们将整合一些有用的插件到 Git 中来帮你轻松的完成日常工作。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/how-restore-older-file-versions-git + +作者:[Seth Kenlon][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth diff --git a/translated/tech/20160726 How to restore older file versions in Git.md 
b/translated/tech/20160726 How to restore older file versions in Git.md deleted file mode 100644 index ccf9863e14..0000000000 --- a/translated/tech/20160726 How to restore older file versions in Git.md +++ /dev/null @@ -1,188 +0,0 @@ - -在 Git 中进行版本回退 -============================================= - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB) - -在这篇文章中,你将学到如何查看项目中的历史版本,如何进行版本回退,以及如何使用 Git 分支,你可以大胆的进行尝试。 - - -在你的 Git 项目中,你的位置就像是一个滚动的标签,由一个被称为 HEAD 的 标记(如磁带录音机或留声机的播放头)来确定。要让 HEAD 指向你想到的时间线上,需要使用 git checkout 命令。 - - -git checkout 命令的使用方式有两种。最常见的用途是恢复一个以前提交过的文件,你也可以用他切换到另一个分支。 - - -### 恢复一个文件 - - -当你意识到一个文件被你完全改乱了。我们不得不将它恢复到以前的某个位置,然后 a添加并提交,因此我们需要的是该文件最后一次修改的位置,然后替换文件。 - - -查看最后一次提交的 HEAD,然后使用 git checkout 将其恢复到以前的版本: - -``` -$ git checkout HEAD filename -``` - -如果回退会的文件依然有问题,使用 git log 查看你更早的提交,然后切换到正确的版本: - -``` -$ git log --oneline -79a4e5f bad take -f449007 The second commit -55df4c2 My great project, first commit. - -$ git checkout 55df4c2 filename - -``` - -现在,以前的文件恢复到了你当前的位置。(你可以用 git status 命令查看你当前的状态)然后添加刚改变的文件再进行提交: - -``` -$ git add filename -$ git commit -m 'restoring filename from first commit.' -``` - -使用 Git log 验证你所提交的: - -``` -$ git log --oneline -d512580 restoring filename from first commit -79a4e5f bad take -f449007 The second commit -55df4c2 My great project, first commit. -``` - -从本质上讲,你已经倒好了磁带并修复了坏的地方。所以你需要重新记录正确的。 - -### 回退时间线 - -恢复文件的另一种方式是回退整个 Git 项目。这里使用了分支的思想,这是另一种替代方法。 - -你要将 Git HEAD 回退到以前的版本才能回到历史提交。这个例子将回到最初的提交处: - -``` -$ git log --oneline -d512580 restoring filename from first commit -79a4e5f bad take -f449007 The second commit -55df4c2 My great project, first commit. 
- -$ git checkout 55df4c2 -``` - -当你以这种方式回退,如果你重新开始提交,会丢失以前的工作。Git 默认假定你不想这样做,所以将 HEAD 从项目中分离出来,并将以前的记录保存下来。 - -如果你想看看以前的版本,想要重新做或者尝试不同的方法,那么安全一点的方式就是创建一个新的分支。可以将这个过程想象为尝试同一首歌曲的不同版本,或者创建一个混音的。原始的依然存在,关闭分支做你想做的版本吧。 - -把你的 Git HEAD 回退到另一个起点处: - -``` -$ git checkout -b remix -Switched to a new branch 'remix' -``` - -现在你已经切换到了另一个分支,你当前的工作区是干净的,准备开始工作吧。 - -也可以不用改变时间线来做同样的事情。也许你很想这么做,但切换到一个临时的工作区只是为了尝试一些疯狂的想法。这在工作中完全是可以接受的,请看: - -``` -$ git status -On branch master -nothing to commit, working directory clean - -$ git checkout -b crazy_idea -Switched to a new branch 'crazy_idea' -``` - -现在你有一个干净的工作空间,在这里你可以完成一些奇怪的想法。一旦你完成了,可以保留你的改变,或者丢弃他们,并切换回你的主分支。 - -若要放弃你的想法,切换到你的主分支,假装新分支不存在: - -``` -$ git checkout master -``` - -想要继续使用你的 crazy ideas,需要把他们拉回到主分支,切换到主分支然后合并新分支到主分支: - -``` -$ git checkout master -$ git merge crazy_idea -``` - -git 的分支功能很强大,在克隆仓库后为开发人员创建一个新分支是很常见的;这样,他们所有的工作都在自己的分支上,可以提交并合并到主分支。Git 是很灵活的,所以没有“正确”或“错误”的方式(甚至一个主分支也可以区分远程分支),但分支易于分离任务和提交贡献。两个人之间可以有很多的 Git 分支。 - -### 远程协作 - -到目前为止你已经在自己的家目录下维护着一个 Git 仓库,但如何与其他人协同工作呢? 
- -有好几种不同的方式来设置 Git 以便让多人可以同时在一个项目上工作,所以首先我们要克隆仓库,是否你已经从某人的 Git 服务器或 GitHub 主页克隆了一个仓库,或在局域网中使用了共享存储。 - -工作在私人仓库下和共享仓库唯一不同的是你需要把你的改变提交到别人的仓库。我们把工作的仓库称之为本地仓库,其他仓库称为远程仓库。 - - -当你以读写的方式克隆一个仓库时,克隆的仓库会继承远程库并且你会看到一个名为 origin 的远程库。你可以看看克隆的远程仓库: - -``` -$ git remote --verbose -origin seth@example.com:~/myproject.Git (fetch) -origin seth@example.com:~/myproject.Git (push) -``` - -有一个 origin 远程库非常有用,因为它有异地备份的功能,并允许其他人在该项目上工作。 - -如果克隆没有继承 origin 远程库,或者如果你选择以后再添加,可以使用 git remote 命令: - -``` -$ git remote add seth@example.com:~/myproject.Git -``` - -如果你修改了文件,想把它们发到有读写权限的 origin 远程库,使用 git push。第一次推送改变,必须发送分支信息。不直接在主分支上工作是一个很好的做法,除非你被要求这样做: - -``` -$ git checkout -b seth-dev -$ git add exciting-new-file.txt -$ git commit -m 'first push to remote' -$ git push -u origin HEAD -``` - -它会推送你当前的位置(HEAD)和存在的分支到远程。当推送过一次后,以后每次推送可以不使用 -u 选项: - -``` -$ git add another-file.txt -$ git commit -m 'another push to remote' -$ git push origin HEAD -``` - -### 合并分支 - -当一个人工作在一个 Git 仓库时,你可以合并任意测试分支到主分支。当团队协作时,你可能会想检查他们的改变,然后再将它们合并到主分支: - -``` -$ git checkout contributor -$ git pull -$ less blah.txt # 检查改变的文件 -$ git checkout master -$ git merge contributor -``` - -如果你正在使用 GitHub 或 GitLab 以及类似的东西,虽然过程是不同的,但克隆项目并把它作为你自己的仓库都是相似的。你可以在本地工作,将改变提交到 GitHub 或 GitLab 帐户,其他人对这些仓库没有任何权限。 - -如果你想要为克隆的仓库推送,需要创建了一个拉取请求,它使用 Web 服务的后端发送补丁到真正的拥有者,并允许他们审查和拉取的改变。 - -克隆一个项目通常是在 Web 服务端完成的,它和使用 Git 命令来管理项目是类似的,甚至推送的过程。然后它返回到 Web 服务打开一个拉取请求,工作就完成了。 - -下一部分我们将整合一些有用的插件到 Git 中来帮你轻松的完成日常工作。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/7/how-restore-older-file-versions-git - -作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth From e1dea4ed3f7f7ecbf336cd6deeb0a462b86f5625 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 12 Aug 2016 22:47:43 +0800 Subject: [PATCH 
399/471] =?UTF-8?q?=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 译者因时间放弃。 --- ...60627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md b/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md index 4faee7eaef..44c66359f9 100644 --- a/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md +++ b/sources/tech/20160627 TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016.md @@ -1,5 +1,3 @@ -name1e5s translating - TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016 ===================================================== From fe1368a7da246d72ffd756f889fcc3de8159ea5e Mon Sep 17 00:00:00 2001 From: tanete Date: Fri, 12 Aug 2016 23:00:12 +0800 Subject: [PATCH 400/471] Translating by Tanete A brief introduction to Linux containers and image signing --- ...A brief introduction to Linux containers and image signing.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160810 A brief introduction to Linux containers and image signing.md b/sources/tech/20160810 A brief introduction to Linux containers and image signing.md index c91eab815f..fc9bd181ae 100644 --- a/sources/tech/20160810 A brief introduction to Linux containers and image signing.md +++ b/sources/tech/20160810 A brief introduction to Linux containers and image signing.md @@ -1,3 +1,4 @@ +***Translating by Tanete*** A brief introduction to Linux containers and image signing ==================== From 9d4f8578c699970b0347167978d27d341025d6c9 Mon Sep 17 00:00:00 2001 From: Stdio A Date: Sat, 13 Aug 2016 17:01:57 +0800 Subject: [PATCH 401/471] =?UTF-8?q?Translated=2020160406=20Let=E2=80=99s?= =?UTF-8?q?=20Build=20A=20Web=20Server.=20Part=202?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...160406 Let’s Build A Web Server. 
Part 2.md | 429 ------------------ ...160406 Let’s Build A Web Server. Part 2.md | 424 +++++++++++++++++ 2 files changed, 424 insertions(+), 429 deletions(-) delete mode 100644 sources/tech/20160406 Let’s Build A Web Server. Part 2.md create mode 100644 translated/tech/20160406 Let’s Build A Web Server. Part 2.md diff --git a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md b/sources/tech/20160406 Let’s Build A Web Server. Part 2.md deleted file mode 100644 index 15593c5085..0000000000 --- a/sources/tech/20160406 Let’s Build A Web Server. Part 2.md +++ /dev/null @@ -1,429 +0,0 @@ -Translating by StdioA - -Let’s Build A Web Server. Part 2. -=================================== - -Remember, in Part 1 I asked you a question: “How do you run a Django application, Flask application, and Pyramid application under your freshly minted Web server without making a single change to the server to accommodate all those different Web frameworks?” Read on to find out the answer. - -In the past, your choice of a Python Web framework would limit your choice of usable Web servers, and vice versa. If the framework and the server were designed to work together, then you were okay: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_before_wsgi.png) - -But you could have been faced (and maybe you were) with the following problem when trying to combine a server and a framework that weren’t designed to work together: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_after_wsgi.png) - -Basically you had to use what worked together and not what you might have wanted to use. - -So, how do you then make sure that you can run your Web server with multiple Web frameworks without making code changes either to the Web server or to the Web frameworks? And the answer to that problem became the Python Web Server Gateway Interface (or WSGI for short, pronounced “wizgy”). 
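
To get a feel for how small the interface is before building a server around it, here is a sketch (my addition, written for Python 3, while the article's own code targets Python 2.7): a bare-bones WSGI application callable, plus the standard library's wsgiref reference server accepting that very callable. Any WSGI-compliant server could run `app` unchanged, which is exactly the mix-and-match point:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """A minimal WSGI application: a callable that takes the request
    environ plus a start_response callable, and returns an iterable
    of response-body bytes."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a framework-agnostic WSGI app!\n']

if __name__ == '__main__':
    # Bind to a free port (0) just to show that the stdlib reference
    # server accepts the same `app` callable; call httpd.serve_forever()
    # instead of server_close() to actually handle requests.
    httpd = make_server('127.0.0.1', 0, app)
    print('wsgiref would serve app on port', httpd.server_port)
    httpd.server_close()
```

Swap `make_server` for any other WSGI server (Gunicorn, uWSGI, or the one built in this article) and `app` keeps working.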
- -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_idea.png) - -WSGI allowed developers to separate choice of a Web framework from choice of a Web server. Now you can actually mix and match Web servers and Web frameworks and choose a pairing that suits your needs. You can run Django, Flask, or Pyramid, for example, with Gunicorn or Nginx/uWSGI or Waitress. Real mix and match, thanks to the WSGI support in both servers and frameworks: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interop.png) - -So, WSGI is the answer to the question I asked you in Part 1 and repeated at the beginning of this article. Your Web server must implement the server portion of a WSGI interface and all modern Python Web Frameworks already implement the framework side of the WSGI interface, which allows you to use them with your Web server without ever modifying your server’s code to accommodate a particular Web framework. - -Now you know that WSGI support by Web servers and Web frameworks allows you to choose a pairing that suits you, but it is also beneficial to server and framework developers because they can focus on their preferred area of specialization and not step on each other’s toes. Other languages have similar interfaces too: Java, for example, has Servlet API and Ruby has Rack. 
- -It’s all good, but I bet you are saying: “Show me the code!” Okay, take a look at this pretty minimalistic WSGI server implementation: - -``` -# Tested with Python 2.7.9, Linux & Mac OS X -import socket -import StringIO -import sys - - -class WSGIServer(object): - - address_family = socket.AF_INET - socket_type = socket.SOCK_STREAM - request_queue_size = 1 - - def __init__(self, server_address): - # Create a listening socket - self.listen_socket = listen_socket = socket.socket( - self.address_family, - self.socket_type - ) - # Allow to reuse the same address - listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - # Bind - listen_socket.bind(server_address) - # Activate - listen_socket.listen(self.request_queue_size) - # Get server host name and port - host, port = self.listen_socket.getsockname()[:2] - self.server_name = socket.getfqdn(host) - self.server_port = port - # Return headers set by Web framework/Web application - self.headers_set = [] - - def set_app(self, application): - self.application = application - - def serve_forever(self): - listen_socket = self.listen_socket - while True: - # New client connection - self.client_connection, client_address = listen_socket.accept() - # Handle one request and close the client connection. 
Then - # loop over to wait for another client connection - self.handle_one_request() - - def handle_one_request(self): - self.request_data = request_data = self.client_connection.recv(1024) - # Print formatted request data a la 'curl -v' - print(''.join( - '< {line}\n'.format(line=line) - for line in request_data.splitlines() - )) - - self.parse_request(request_data) - - # Construct environment dictionary using request data - env = self.get_environ() - - # It's time to call our application callable and get - # back a result that will become HTTP response body - result = self.application(env, self.start_response) - - # Construct a response and send it back to the client - self.finish_response(result) - - def parse_request(self, text): - request_line = text.splitlines()[0] - request_line = request_line.rstrip('\r\n') - # Break down the request line into components - (self.request_method, # GET - self.path, # /hello - self.request_version # HTTP/1.1 - ) = request_line.split() - - def get_environ(self): - env = {} - # The following code snippet does not follow PEP8 conventions - # but it's formatted the way it is for demonstration purposes - # to emphasize the required variables and their values - # - # Required WSGI variables - env['wsgi.version'] = (1, 0) - env['wsgi.url_scheme'] = 'http' - env['wsgi.input'] = StringIO.StringIO(self.request_data) - env['wsgi.errors'] = sys.stderr - env['wsgi.multithread'] = False - env['wsgi.multiprocess'] = False - env['wsgi.run_once'] = False - # Required CGI variables - env['REQUEST_METHOD'] = self.request_method # GET - env['PATH_INFO'] = self.path # /hello - env['SERVER_NAME'] = self.server_name # localhost - env['SERVER_PORT'] = str(self.server_port) # 8888 - return env - - def start_response(self, status, response_headers, exc_info=None): - # Add necessary server headers - server_headers = [ - ('Date', 'Tue, 31 Mar 2015 12:54:48 GMT'), - ('Server', 'WSGIServer 0.2'), - ] - self.headers_set = [status, response_headers + 
server_headers] - # To adhere to WSGI specification the start_response must return - # a 'write' callable. We simplicity's sake we'll ignore that detail - # for now. - # return self.finish_response - - def finish_response(self, result): - try: - status, response_headers = self.headers_set - response = 'HTTP/1.1 {status}\r\n'.format(status=status) - for header in response_headers: - response += '{0}: {1}\r\n'.format(*header) - response += '\r\n' - for data in result: - response += data - # Print formatted response data a la 'curl -v' - print(''.join( - '> {line}\n'.format(line=line) - for line in response.splitlines() - )) - self.client_connection.sendall(response) - finally: - self.client_connection.close() - - -SERVER_ADDRESS = (HOST, PORT) = '', 8888 - - -def make_server(server_address, application): - server = WSGIServer(server_address) - server.set_app(application) - return server - - -if __name__ == '__main__': - if len(sys.argv) < 2: - sys.exit('Provide a WSGI application object as module:callable') - app_path = sys.argv[1] - module, application = app_path.split(':') - module = __import__(module) - application = getattr(module, application) - httpd = make_server(SERVER_ADDRESS, application) - print('WSGIServer: Serving HTTP on port {port} ...\n'.format(port=PORT)) - httpd.serve_forever() -``` - -It’s definitely bigger than the server code in Part 1, but it’s also small enough (just under 150 lines) for you to understand without getting bogged down in details. The above server also does more - it can run your basic Web application written with your beloved Web framework, be it Pyramid, Flask, Django, or some other Python WSGI framework. - -Don’t believe me? Try it and see for yourself. Save the above code as webserver2.py or download it directly from GitHub. If you try to run it without any parameters it’s going to complain and exit. 
- -``` -$ python webserver2.py -Provide a WSGI application object as module:callable -``` - -It really wants to serve your Web application and that’s where the fun begins. To run the server the only thing you need installed is Python. But to run applications written with Pyramid, Flask, and Django you need to install those frameworks first. Let’s install all three of them. My preferred method is by using virtualenv. Just follow the steps below to create and activate a virtual environment and then install all three Web frameworks. - -``` -$ [sudo] pip install virtualenv -$ mkdir ~/envs -$ virtualenv ~/envs/lsbaws/ -$ cd ~/envs/lsbaws/ -$ ls -bin include lib -$ source bin/activate -(lsbaws) $ pip install pyramid -(lsbaws) $ pip install flask -(lsbaws) $ pip install django -``` - -At this point you need to create a Web application. Let’s start with Pyramid first. Save the following code as pyramidapp.py to the same directory where you saved webserver2.py or download the file directly from GitHub: - -``` -from pyramid.config import Configurator -from pyramid.response import Response - - -def hello_world(request): - return Response( - 'Hello world from Pyramid!\n', - content_type='text/plain', - ) - -config = Configurator() -config.add_route('hello', '/hello') -config.add_view(hello_world, route_name='hello') -app = config.make_wsgi_app() -``` - -Now you’re ready to serve your Pyramid application with your very own Web server: - -``` -(lsbaws) $ python webserver2.py pyramidapp:app -WSGIServer: Serving HTTP on port 8888 ... -``` - -You just told your server to load the ‘app’ callable from the python module ‘pyramidapp’ Your server is now ready to take requests and forward them to your Pyramid application. The application only handles one route now: the /hello route. 
Type http://localhost:8888/hello address into your browser, press Enter, and observe the result: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_pyramid.png) - -You can also test the server on the command line using the ‘curl’ utility: - -``` -$ curl -v http://localhost:8888/hello -... -``` - -Check what the server and curl prints to standard output. - -Now onto Flask. Let’s follow the same steps. - -``` -from flask import Flask -from flask import Response -flask_app = Flask('flaskapp') - - -@flask_app.route('/hello') -def hello_world(): - return Response( - 'Hello world from Flask!\n', - mimetype='text/plain' - ) - -app = flask_app.wsgi_app -``` - -Save the above code as flaskapp.py or download it from GitHub and run the server as: - -``` -(lsbaws) $ python webserver2.py flaskapp:app -WSGIServer: Serving HTTP on port 8888 ... -``` - -Now type in the http://localhost:8888/hello into your browser and press Enter: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_flask.png) - -Again, try ‘curl’ and see for yourself that the server returns a message generated by the Flask application: - -``` -$ curl -v http://localhost:8888/hello -... -``` - -Can the server also handle a Django application? Try it out! It’s a little bit more involved, though, and I would recommend cloning the whole repo and use djangoapp.py, which is part of the GitHub repository. Here is the source code which basically adds the Django ‘helloworld’ project (pre-created using Django’s django-admin.py startproject command) to the current Python path and then imports the project’s WSGI application. - -``` -import sys -sys.path.insert(0, './helloworld') -from helloworld import wsgi - - -app = wsgi.application -``` - -Save the above code as djangoapp.py and run the Django application with your Web server: - -``` -(lsbaws) $ python webserver2.py djangoapp:app -WSGIServer: Serving HTTP on port 8888 ... 
-``` - -Type in the following address and press Enter: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_django.png) - -And as you’ve already done a couple of times before, you can test it on the command line, too, and confirm that it’s the Django application that handles your requests this time around: - -``` -$ curl -v http://localhost:8888/hello -... -``` - -Did you try it? Did you make sure the server works with those three frameworks? If not, then please do so. Reading is important, but this series is about rebuilding and that means you need to get your hands dirty. Go and try it. I will wait for you, don’t worry. No seriously, you must try it and, better yet, retype everything yourself and make sure that it works as expected. - -Okay, you’ve experienced the power of WSGI: it allows you to mix and match your Web servers and Web frameworks. WSGI provides a minimal interface between Python Web servers and Python Web Frameworks. It’s very simple and it’s easy to implement on both the server and the framework side. The following code snippet shows the server and the framework side of the interface: - -``` -def run_application(application): - """Server code.""" - # This is where an application/framework stores - # an HTTP status and HTTP response headers for the server - # to transmit to the client - headers_set = [] - # Environment dictionary with WSGI/CGI variables - environ = {} - - def start_response(status, response_headers, exc_info=None): - headers_set[:] = [status, response_headers] - - # Server invokes the ‘application' callable and gets back the - # response body - result = application(environ, start_response) - # Server builds an HTTP response and transmits it to the client - … - -def app(environ, start_response): - """A barebones WSGI app.""" - start_response('200 OK', [('Content-Type', 'text/plain')]) - return ['Hello world!'] - -run_application(app) -``` - -Here is how it works: - -1. 
The framework provides an ‘application’ callable (The WSGI specification doesn’t prescribe how that should be implemented) -2. The server invokes the ‘application’ callable for each request it receives from an HTTP client. It passes a dictionary ‘environ’ containing WSGI/CGI variables and a ‘start_response’ callable as arguments to the ‘application’ callable. -3. The framework/application generates an HTTP status and HTTP response headers and passes them to the ‘start_response’ callable for the server to store them. The framework/application also returns a response body. -4. The server combines the status, the response headers, and the response body into an HTTP response and transmits it to the client (This step is not part of the specification but it’s the next logical step in the flow and I added it for clarity) - -And here is a visual representation of the interface: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interface.png) - -So far, you’ve seen the Pyramid, Flask, and Django Web applications and you’ve seen the server code that implements the server side of the WSGI specification. You’ve even seen the barebones WSGI application code snippet that doesn’t use any framework. - -The thing is that when you write a Web application using one of those frameworks you work at a higher level and don’t work with WSGI directly, but I know you’re curious about the framework side of the WSGI interface, too because you’re reading this article. So, let’s create a minimalistic WSGI Web application/Web framework without using Pyramid, Flask, or Django and run it with your server: - -``` -def app(environ, start_response): - """A barebones WSGI application. 
- - This is a starting point for your own Web framework :) - """ - status = '200 OK' - response_headers = [('Content-Type', 'text/plain')] - start_response(status, response_headers) - return ['Hello world from a simple WSGI application!\n'] -``` - -Again, save the above code in wsgiapp.py file or download it from GitHub directly and run the application under your Web server as: - -``` -(lsbaws) $ python webserver2.py wsgiapp:app -WSGIServer: Serving HTTP on port 8888 ... -``` - -Type in the following address and press Enter. This is the result you should see: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_simple_wsgi_app.png) - -You just wrote your very own minimalistic WSGI Web framework while learning about how to create a Web server! Outrageous. - -Now, let’s get back to what the server transmits to the client. Here is the HTTP response the server generates when you call your Pyramid application using an HTTP client: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response.png) - -The response has some familiar parts that you saw in Part 1 but it also has something new. It has, for example, four HTTP headers that you haven’t seen before: Content-Type, Content-Length, Date, and Server. Those are the headers that a response from a Web server generally should have. None of them are strictly required, though. The purpose of the headers is to transmit additional information about the HTTP request/response. - -Now that you know more about the WSGI interface, here is the same HTTP response with some more information about what parts produced it: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response_explanation.png) - -I haven’t said anything about the ‘environ’ dictionary yet, but basically it’s a Python dictionary that must contain certain WSGI and CGI variables prescribed by the WSGI specification. The server takes the values for the dictionary from the HTTP request after parsing the request. 
This is what the contents of the dictionary look like: - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_environ.png) - -A Web framework uses the information from that dictionary to decide which view to use based on the specified route, request method etc., where to read the request body from and where to write errors, if any. - -By now you’ve created your own WSGI Web server and you’ve made Web applications written with different Web frameworks. And, you’ve also created your barebones Web application/Web framework along the way. It’s been a heck of a journey. Let’s recap what your WSGI Web server has to do to serve requests aimed at a WSGI application: - -- First, the server starts and loads an ‘application’ callable provided by your Web framework/application -- Then, the server reads a request -- Then, the server parses it -- Then, it builds an ‘environ’ dictionary using the request data -- Then, it calls the ‘application’ callable with the ‘environ’ dictionary and a ‘start_response’ callable as parameters and gets back a response body. -- Then, the server constructs an HTTP response using the data returned by the call to the ‘application’ object and the status and response headers set by the ‘start_response’ callable. -- And finally, the server transmits the HTTP response back to the client - -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_server_summary.png) - -That’s about all there is to it. You now have a working WSGI server that can serve basic Web applications written with WSGI compliant Web frameworks like Django, Flask, Pyramid, or your very own WSGI framework. The best part is that the server can be used with multiple Web frameworks without any changes to the server code base. Not bad at all. - -Before you go, here is another question for you to think about, “How do you make your server handle more than one request at a time?” - -Stay tuned and I will show you a way to do that in Part 3. Cheers! 
- -BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch and goes into more detail on topics I just covered. Subscribe to the mailing list to get the latest updates about the book and the release date. - --------------------------------------------------------------------------------- - -via: https://ruslanspivak.com/lsbaws-part2/ - -作者:[Ruslan][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://github.com/rspivak/ - - - - - - diff --git a/translated/tech/20160406 Let’s Build A Web Server. Part 2.md b/translated/tech/20160406 Let’s Build A Web Server. Part 2.md new file mode 100644 index 0000000000..2bbf783c51 --- /dev/null +++ b/translated/tech/20160406 Let’s Build A Web Server. Part 2.md @@ -0,0 +1,424 @@ +搭个 Web 服务器(二) +=================================== + +在第一部分中,我提出了一个问题:“你要如何在不对程序做任何改动的情况下,在你刚刚搭建起来的 Web 服务器上适配 Django, Flask 或 Pyramid 应用呢?”我们可以从这一篇中找到答案。 + +曾几何时,你对 Python Web 框架种类作出的选择会对可用的 Web 服务器类型造成限制,反之亦然。如果框架及服务器在设计层面可以一起工作(相互适配),那么一切正常: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_before_wsgi.png) + +但你可能正面对着(或者曾经面对过)尝试将一对无法适配的框架和服务器搭配在一起的问题: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_after_wsgi.png) + +基本上,你需要选择那些能够一起工作的框架和服务器,而不能选择你想用的那些。 + +所以,你该如何确保在不对代码做任何更改的情况下,让你的 Web 服务器和多个不同的 Web 框架一同工作呢?这个问题的答案,就是 Python Web 服务器网关接口(缩写为 WSGI,念做“wizgy”)。 + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_idea.png) + +WSGI 允许开发者互不干扰地选择 Web 框架及 Web 服务器的类型。现在,你可以真正将 Web 服务器及框架任意搭配,然后选出你最中意的那对组合。比如,你可以使用 Django,Flask 或者 Pyramid,与 Gunicorn,Nginx/uWSGI 或 Waitress 进行结合。感谢 WSGI 同时对服务器与框架的支持,我们可以真正随意选择它们的搭配了。 + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interop.png) + +所以,WSGI 就是我在第一部分中提出,又在本文开头重复了一遍的那个问题的答案。你的 Web 服务器必须实现 WSGI 接口的服务器部分,而现代的 Python Web 框架均已实现了 WSGI 接口的框架部分,这使得你可以直接在 Web 
服务器中使用任意框架,而不需要更改任何服务器代码,以对特定的 Web 框架实现兼容。
+
+现在,你已经知道 Web 服务器及 Web 框架对 WSGI 的支持使得你可以选择最合适的一对来使用,而且它对服务器和框架的开发者依然有益,因为他们只需专注于他们擅长的部分来进行开发,而不需要触及另一部分的代码。其它语言也拥有类似的接口,比如:Java 拥有 Servlet API,而 Ruby 拥有 Rack。
+
+这些理论都不错,但是我打赌你在说:“给我看代码!” 那好,我们来看看下面这个很小的 WSGI 服务器实现:
+
+```
+# 使用 Python 2.7.9,在 Linux 及 Mac OS X 下测试通过
+import socket
+import StringIO
+import sys
+
+
+class WSGIServer(object):
+
+    address_family = socket.AF_INET
+    socket_type = socket.SOCK_STREAM
+    request_queue_size = 1
+
+    def __init__(self, server_address):
+        # 创建一个监听套接字
+        self.listen_socket = listen_socket = socket.socket(
+            self.address_family,
+            self.socket_type
+        )
+        # 允许复用同一地址
+        listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+        # 绑定地址
+        listen_socket.bind(server_address)
+        # 激活套接字
+        listen_socket.listen(self.request_queue_size)
+        # 获取主机名称及端口
+        host, port = self.listen_socket.getsockname()[:2]
+        self.server_name = socket.getfqdn(host)
+        self.server_port = port
+        # 由 Web 框架/应用设定的响应头部字段
+        self.headers_set = []
+
+    def set_app(self, application):
+        self.application = application
+
+    def serve_forever(self):
+        listen_socket = self.listen_socket
+        while True:
+            # 获取新的客户端连接
+            self.client_connection, client_address = listen_socket.accept()
+            # 处理一条请求后关闭连接,然后循环等待另一个连接建立
+            self.handle_one_request()
+
+    def handle_one_request(self):
+        self.request_data = request_data = self.client_connection.recv(1024)
+        # 以 'curl -v' 的风格输出格式化请求数据
+        print(''.join(
+            '< {line}\n'.format(line=line)
+            for line in request_data.splitlines()
+        ))
+
+        self.parse_request(request_data)
+
+        # 根据请求数据构建环境字典
+        env = self.get_environ()
+
+        # 此时需要调用 Web 应用来获取结果,
+        # 取回的结果将成为 HTTP 响应体
+        result = self.application(env, self.start_response)
+
+        # 构造一个响应,回送至客户端
+        self.finish_response(result)
+
+    def parse_request(self, text):
+        request_line = text.splitlines()[0]
+        request_line = request_line.rstrip('\r\n')
+        # 将请求行分开成组
+        (self.request_method,  # GET
+         self.path,            # /hello
+         self.request_version  # HTTP/1.1
+         ) =
request_line.split() + + def get_environ(self): + env = {} + # 以下代码段没有遵循 PEP8 规则,但它这样排版,是为了通过强调 + # 所需变量及它们的值,来达到其展示目的。 + # + # WSGI 必需变量 + env['wsgi.version'] = (1, 0) + env['wsgi.url_scheme'] = 'http' + env['wsgi.input'] = StringIO.StringIO(self.request_data) + env['wsgi.errors'] = sys.stderr + env['wsgi.multithread'] = False + env['wsgi.multiprocess'] = False + env['wsgi.run_once'] = False + # CGI 必需变量 + env['REQUEST_METHOD'] = self.request_method # GET + env['PATH_INFO'] = self.path # /hello + env['SERVER_NAME'] = self.server_name # localhost + env['SERVER_PORT'] = str(self.server_port) # 8888 + return env + + def start_response(self, status, response_headers, exc_info=None): + # 添加必要的服务器头部字段 + server_headers = [ + ('Date', 'Tue, 31 Mar 2015 12:54:48 GMT'), + ('Server', 'WSGIServer 0.2'), + ] + self.headers_set = [status, response_headers + server_headers] + # 为了遵循 WSGI 协议,start_response 函数必须返回一个 'write' + # 可调用对象(返回值.write 可以作为函数调用)。为了简便,我们 + # 在这里无视这个细节。 + # return self.finish_response + + def finish_response(self, result): + try: + status, response_headers = self.headers_set + response = 'HTTP/1.1 {status}\r\n'.format(status=status) + for header in response_headers: + response += '{0}: {1}\r\n'.format(*header) + response += '\r\n' + for data in result: + response += data + # 以 'curl -v' 的风格输出格式化请求数据 + print(''.join( + '> {line}\n'.format(line=line) + for line in response.splitlines() + )) + self.client_connection.sendall(response) + finally: + self.client_connection.close() + + +SERVER_ADDRESS = (HOST, PORT) = '', 8888 + + +def make_server(server_address, application): + server = WSGIServer(server_address) + server.set_app(application) + return server + + +if __name__ == '__main__': + if len(sys.argv) < 2: + sys.exit('Provide a WSGI application object as module:callable') + app_path = sys.argv[1] + module, application = app_path.split(':') + module = __import__(module) + application = getattr(module, application) + httpd = make_server(SERVER_ADDRESS, 
application) + print('WSGIServer: Serving HTTP on port {port} ...\n'.format(port=PORT)) + httpd.serve_forever() +``` + +当然,这段代码要比第一部分的服务器代码长不少,但它仍然很短(只有不到 150 行),你可以轻松理解它,而不需要深究细节。上面的服务器代码还可以做更多——它可以用来运行一些你喜欢的框架写出的 Web 应用,可以是 Pyramid,Flask,Django 或其它 Python WSGI 框架。 + +不相信吗?自己来试试看吧。把以上的代码保存为 `webserver2.py`,或直接从 Github 上下载它。如果你打算不加任何参数而直接运行它,它会抱怨一句,然后退出。 + +``` +$ python webserver2.py +Provide a WSGI application object as module:callable +``` + +它想做的其实是为你的 Web 应用服务,而这才是重头戏。为了运行这个服务器,你只需要安装 Python。不过,如果你希望运行 Pyramid,Flask 或 Django 应用,你还需要先安装那些框架。那我们把这三个都装上吧。我推荐的安装方式是通过 `virtualenv` 安装。按照以下几步来做,你就可以创建并激活一个虚拟环境,并在其中安装以上三个 Web 框架。 + +``` +$ [sudo] pip install virtualenv +$ mkdir ~/envs +$ virtualenv ~/envs/lsbaws/ +$ cd ~/envs/lsbaws/ +$ ls +bin include lib +$ source bin/activate +(lsbaws) $ pip install pyramid +(lsbaws) $ pip install flask +(lsbaws) $ pip install django +``` + +现在,你需要创建一个 Web 应用。我们先从 Pyramid 开始吧。把以下代码保存为 `pyramidapp.py`,并与刚刚的 `webserver2.py` 放置在同一目录,或直接从 Github 下载该文件: + +``` +from pyramid.config import Configurator +from pyramid.response import Response + + +def hello_world(request): + return Response( + 'Hello world from Pyramid!\n', + content_type='text/plain', + ) + +config = Configurator() +config.add_route('hello', '/hello') +config.add_view(hello_world, route_name='hello') +app = config.make_wsgi_app() +``` + +现在,你可以用你自己的 Web 服务器来运行你的 Pyramid 应用了: + +``` +(lsbaws) $ python webserver2.py pyramidapp:app +WSGIServer: Serving HTTP on port 8888 ... +``` + +你刚刚让你的服务器去加载 Python 模块 `pyramidapp` 中的可执行对象 `app`。现在你的服务器可以接收请求,并将它们转发到 Pyramid 应用中了。在浏览器中输入 http://localhost:8888/hello ,敲一下回车,然后看看结果: + +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_pyramid.png) + +你也可以使用命令行工具 `curl` 来测试服务器: + +``` +$ curl -v http://localhost:8888/hello +... 
+```
+
+看看服务器和 `curl` 向标准输出流打印的内容吧。
+
+现在来试试 `Flask`。运行步骤跟上面的一样。
+
+```
+from flask import Flask
+from flask import Response
+flask_app = Flask('flaskapp')
+
+
+@flask_app.route('/hello')
+def hello_world():
+    return Response(
+        'Hello world from Flask!\n',
+        mimetype='text/plain'
+    )
+
+app = flask_app.wsgi_app
+```
+
+将以上代码保存为 `flaskapp.py`,或者直接从 Github 下载,然后输入以下命令运行服务器:
+
+```
+(lsbaws) $ python webserver2.py flaskapp:app
+WSGIServer: Serving HTTP on port 8888 ...
+```
+
+现在在浏览器中输入 http://localhost:8888/hello ,敲一下回车:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_flask.png)
+
+同样,尝试一下 `curl`,然后你会看到服务器返回了一条 `Flask` 应用生成的信息:
+
+```
+$ curl -v http://localhost:8888/hello
+...
+```
+
+这个服务器能处理 Django 应用吗?试试看吧!不过这个任务可能有点复杂,所以我建议你将整个仓库克隆下来,然后使用 Github 仓库中的 `djangoapp.py` 来完成这个实验。下面是它的源代码,你可以看到它将 Django `helloworld` 工程(已使用 Django 的 `django-admin.py startproject` 命令创建完毕)添加到了当前的 Python 路径中,然后导入了这个工程的 WSGI 应用。
+
+```
+import sys
+sys.path.insert(0, './helloworld')
+from helloworld import wsgi
+
+
+app = wsgi.application
+```
+
+将以上代码保存为 `djangoapp.py`,然后用你的 Web 服务器运行这个 Django 应用:
+
+```
+(lsbaws) $ python webserver2.py djangoapp:app
+WSGIServer: Serving HTTP on port 8888 ...
+```
+
+输入以下链接,敲回车:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_django.png)
+
+你这次也可以在命令行中测试——你之前应该已经做过两次了——来确认 Django 应用处理了你的请求:
+
+```
+$ curl -v http://localhost:8888/hello
+...
+```
+
+你试过了吗?你确定这个服务器可以与那三个框架搭配工作吗?如果没试,请去试一下。阅读固然重要,但这个系列的内容是重建,这意味着你需要亲自动手干点活。去试一下吧。别担心,我等着你呢。不开玩笑,你真的需要试一下,亲自尝试每一步,并确保它像预期的那样工作。
+
+好,你已经体验到了 WSGI 的威力:它可以使 Web 服务器及 Web 框架随意搭配。WSGI 在 Python Web 服务器及框架之间提供了一个微型接口。它非常简单,而且在服务器和框架端均可以轻易实现。下面的代码片段展示了 WSGI 接口的服务器及框架端实现:
+
+```
+def run_application(application):
+    """服务器端代码。"""
+    # Web 应用/框架在这里存储 HTTP 状态码以及 HTTP 响应头部,
+    # 服务器会将这些信息传递给客户端
+    headers_set = []
+    # 用于存储 WSGI/CGI 环境变量的字典
+    environ = {}
+
+    def start_response(status, response_headers, exc_info=None):
+        headers_set[:] = [status, response_headers]
+
+    # 服务器调用可调用对象 “application”,获得响应体
+    result = application(environ, start_response)
+    # 服务器组装一个 HTTP 响应,将其传送至客户端
+    …
+
+def app(environ, start_response):
+    """一个极简的 WSGI 应用"""
+    start_response('200 OK', [('Content-Type', 'text/plain')])
+    return ['Hello world!']
+
+run_application(app)
+```
+
+这是它的工作原理:
+
+1. Web 框架提供一个可调用对象 `application`(WSGI 规范没有规定它的实现方式)
+2. Web 服务器每次收到来自客户端的 HTTP 请求后,会调用可调用对象 `application`。它会向该对象传递一个包含 WSGI/CGI 变量的字典,以及一个可调用对象 `start_response`
+3. Web 框架或应用生成 HTTP 状态码、HTTP 响应头部,然后将它们传给 `start_response` 函数,服务器会将其存储起来。同时,Web 框架或应用也会返回 HTTP 响应正文。
+4. 服务器将状态码、响应头部及响应正文组装成一个 HTTP 响应,然后将其传送至客户端(这一步并不在 WSGI 规范中,但从逻辑上讲,这一步应该包含在工作流程之中,所以为了明确这个过程,我把它写了出来)
+
+这是这个接口规范的图形化表达:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interface.png)
+
+到现在为止,你已经看过了用 Pyramid、Flask 和 Django 写出的 Web 应用的代码,你也看到了一个 Web 服务器如何用代码来实现另一半(服务器端的)WSGI 规范。你甚至还看到了我们如何在不使用任何框架的情况下,使用一段代码来实现一个最简单的 WSGI Web 应用。
+
+其实,当你使用上面的框架编写一个 Web 应用时,你只是在较高的层面工作,而不需要直接与 WSGI 打交道。但是我知道你一定也对 WSGI 接口的框架部分感兴趣,因为你在看这篇文章呀。所以,我们不用 Pyramid、Flask 或 Django,而是自己动手来创造一个最朴素的 WSGI Web 应用(或 Web 框架),然后将它和你的服务器一起运行:
+
+```
+def app(environ, start_response):
+    """一个最简单的 WSGI 应用。
+
+    这是你自己的 Web 框架的起点 ^_^
+    """
+    status = '200 OK'
+    response_headers = [('Content-Type', 'text/plain')]
+    start_response(status, response_headers)
+    return ['Hello world from a simple WSGI application!\n']
+```
+
+同样,将上面的代码保存至 `wsgiapp.py` 或直接从 Github 上下载该文件,然后在 Web 服务器上运行这个应用,像这样:
+
+```
+(lsbaws) $ python webserver2.py wsgiapp:app
+WSGIServer: Serving HTTP on port 8888 ...
+```
+
+在浏览器中输入下面的地址,然后按下回车。这是你应该看到的结果:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_simple_wsgi_app.png)
+
+你刚刚在学习如何创建一个 Web 服务器的过程中,自己编写了一个最朴素的 WSGI Web 框架!棒极了!
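+
顺着“这是你自己的 Web 框架的起点”这句话,下面再给出一个假设性的小练习(并非原文的代码,`hello`、`not_found` 和 `ROUTES` 等名字都是为演示而虚构的):只要在极简应用外面包一层对 `environ['PATH_INFO']` 的查表,就得到了一个最简单的“路由器”。

```python
# 仅为示意:在极简 WSGI 应用之上加一层最简单的路由。
# hello、not_found、ROUTES 均为虚构的演示名称。

def hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello world from a routed WSGI application!\n']


def not_found(environ, start_response):
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return ['Not Found\n']


# 路由表:请求路径 -> 视图
ROUTES = {'/hello': hello}


def app(environ, start_response):
    # “框架”根据 PATH_INFO 选择视图,未命中时返回 404
    view = ROUTES.get(environ.get('PATH_INFO', '/'), not_found)
    return view(environ, start_response)


if __name__ == '__main__':
    # 不经过网络,直接模拟服务器调用 application 的过程
    captured = {}

    def start_response(status, response_headers, exc_info=None):
        captured['status'] = status

    body = app({'PATH_INFO': '/hello'}, start_response)
    print(captured['status'])        # 200 OK
    print(''.join(body).strip())     # Hello world from a routed WSGI application!
```

把它保存为一个模块(例如 `wsgiapp2.py`,文件名只是举例)之后,理论上同样可以通过 `python webserver2.py wsgiapp2:app` 交给上面的服务器来运行。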
+
+现在,我们再回来看看服务器传给客户端的那些东西。这是在使用 HTTP 客户端调用你的 Pyramid 应用时,服务器生成的 HTTP 响应内容:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response.png)
+
+这个响应和你在本系列第一部分中看到的 HTTP 响应有一部分共同点,但它还多出来了一些内容。比如说,它拥有四个你曾经没见过的 HTTP 头部:`Content-Type`、`Content-Length`、`Date` 以及 `Server`。这些头部内容基本上在每个 Web 服务器返回的响应中都会出现。不过,它们都不是被严格要求出现的。HTTP 请求/响应头部字段的目的,在于向你传递一些关于 HTTP 请求/响应的额外信息。
+
+既然你对 WSGI 接口了解得更深了一些,那我再来展示一下上面那个 HTTP 响应中各个部分的信息来源:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response_explanation.png)
+
+我现在还没有对上面那个 `environ` 字典做任何解释,不过基本上这个字典必须包含那些被 WSGI 规范事先定义好的 WSGI 及 CGI 变量值。服务器在解析 HTTP 请求时,会从请求中获取这些变量的值。这是 `environ` 字典应该有的样子:
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_environ.png)
+
+Web 框架会利用以上字典中包含的信息,通过字典中的请求路径、请求动作等等来决定使用哪个视图来处理响应、在哪里读取请求正文、在哪里输出错误信息(如果有的话)。
+
+现在,你已经创造了属于你自己的 WSGI Web 服务器,你也使用不同 Web 框架做了几个 Web 应用。而且,你在这个过程中也自己创造出了一个朴素的 Web 应用及框架。这个过程真是累人。现在我们来回顾一下,你的 WSGI Web 服务器在服务请求时,需要针对 WSGI 应用做些什么:
+
+- 首先,服务器开始工作,然后会加载一个可调用对象 `application`,这个对象由你的 Web 框架或应用提供
+- 然后,服务器读取一个请求
+- 然后,服务器会解析这个请求
+- 然后,服务器会使用请求数据来构建一个 `environ` 字典
+- 然后,它会用 `environ` 字典及一个可调用对象 `start_response` 作为参数,来调用 `application`,并获取响应体内容
+- 然后,服务器会使用 `application` 返回的响应体,和 `start_response` 函数设置的状态码及响应头部内容,来构建一个 HTTP 响应
+- 最终,服务器将 HTTP 响应回送给客户端
+
+![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_server_summary.png)
+
+这基本上是服务器要做的全部内容了。你现在有了一个可以正常工作的 WSGI 服务器,它可以为使用任何遵循 WSGI 规范的 Web 框架(如 Django、Flask、Pyramid,还有你刚刚自己写的那个框架)构建出的 Web 应用服务。最棒的部分在于,它可以在不用更改任何服务器代码的情况下,与多个不同的 Web 框架一起工作。真不错。
+
+在结束之前,你可以想想这个问题:“你该如何让你的服务器在同一时间处理多个请求呢?”
+
+敬请期待,我会在第三部分向你展示一种解决这个问题的方法。干杯!
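+
在第三部分揭晓答案之前,如果想先直观感受一下“同时处理多个请求”,可以看看下面这个与本文服务器代码无关的假设性示意:它借助 Python 3 标准库 `socketserver` 的 `ThreadingMixIn`,为每个连接开一个线程来处理。这只是众多做法中的一种,并不代表第三部分将要介绍的方案;其中 `HelloHandler`、`ThreadingHTTPish`、`demo` 都是虚构的演示名称。

```python
# 仅为示意:用线程并发处理连接的极简 "HTTP" 服务器(Python 3)。
import socket
import socketserver
import threading


class HelloHandler(socketserver.StreamRequestHandler):
    def handle(self):
        self.rfile.readline()  # 读取请求行,忽略其余内容
        self.wfile.write(b'HTTP/1.1 200 OK\r\n'
                         b'Content-Type: text/plain\r\n'
                         b'\r\n'
                         b'Hello from a threaded server!\n')


class ThreadingHTTPish(socketserver.ThreadingMixIn, socketserver.TCPServer):
    # 每个连接都在单独的线程中由 HelloHandler 处理
    allow_reuse_address = True
    daemon_threads = True


def demo():
    server = ThreadingHTTPish(('127.0.0.1', 0), HelloHandler)  # 端口 0:由系统分配
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    replies = []
    try:
        for _ in range(2):  # 先后发起两个连接,各自都能得到完整响应
            with socket.create_connection(('127.0.0.1', port)) as conn:
                conn.sendall(b'GET /hello HTTP/1.1\r\n\r\n')
                data = b''
                while True:
                    chunk = conn.recv(1024)
                    if not chunk:
                        break
                    data += chunk
                replies.append(data)
    finally:
        server.shutdown()
        server.server_close()
    return replies


if __name__ == '__main__':
    for reply in demo():
        print(reply.decode().splitlines()[0])  # HTTP/1.1 200 OK
```

与原文的 `webserver2.py` 相比,这里省去了 WSGI 的部分,只保留“并发”这一个关注点。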
+ +顺便,我在撰写一本名为《搭个 Web 服务器:从头开始》的书。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅邮件列表,你就可以获取到这本书的最新进展,以及发布日期。 + +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbaws-part2/ + +作者:[Ruslan][a] +译者:[StdioA](https://github.com/StdioA) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/rspivak/ + + + + + + From 9bb78dd4277143661442efcfd89cd3f526f7478f Mon Sep 17 00:00:00 2001 From: Stdio A Date: Sat, 13 Aug 2016 17:07:35 +0800 Subject: [PATCH 402/471] =?UTF-8?q?Translationg=2020160809=20Part=203=20-?= =?UTF-8?q?=20Let=E2=80=99s=20Build=20A=20Web=20Server.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eb Server.md => 20160809 Part 3 - Let’s Build A Web Server.md} | 2 ++ 1 file changed, 2 insertions(+) rename sources/tech/{Part 3 - Let’s Build A Web Server.md => 20160809 Part 3 - Let’s Build A Web Server.md} (99%) diff --git a/sources/tech/Part 3 - Let’s Build A Web Server.md b/sources/tech/20160809 Part 3 - Let’s Build A Web Server.md similarity index 99% rename from sources/tech/Part 3 - Let’s Build A Web Server.md rename to sources/tech/20160809 Part 3 - Let’s Build A Web Server.md index 40a9ccd0b7..4301ab8afb 100644 --- a/sources/tech/Part 3 - Let’s Build A Web Server.md +++ b/sources/tech/20160809 Part 3 - Let’s Build A Web Server.md @@ -1,3 +1,5 @@ +translating by StdioA + Part 3 - Let’s Build A Web Server ===================================== From 79fce5d06b07042eff24f71cea2628c226450cbb Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 13 Aug 2016 20:36:32 +0800 Subject: [PATCH 403/471] PUB:20160306 5 Favorite Open Source Django Packages @StdioA --- ... 5 Favorite Open Source Django Packages.md | 161 +++++++++++++++++ ... 
5 Favorite Open Source Django Packages.md | 164 ------------------
 2 files changed, 161 insertions(+), 164 deletions(-)
 create mode 100644 published/20160306 5 Favorite Open Source Django Packages.md
 delete mode 100644 translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md

diff --git a/published/20160306 5 Favorite Open Source Django Packages.md b/published/20160306 5 Favorite Open Source Django Packages.md
new file mode 100644
index 0000000000..c97ed74570
--- /dev/null
+++ b/published/20160306 5 Favorite Open Source Django Packages.md
@@ -0,0 +1,161 @@
+5 个最受人喜爱的开源 Django 包
+================================================================================
+
+![Yearbook cover 2015](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/osdc-open-source-yearbook-lead8.png?itok=0_5-hdFE)
+
+图片来源:Opensource.com
+
+_Jacob Kaplan-Moss 和 Frank Wiles 也参与了本文的写作。_
+
+Django 围绕“[可重用应用][1]”的思想建立:自包含的包提供了可重复使用的特性。你可以将这些可重用应用组装起来,再加上适用于你的网站的特定代码,来搭建你自己的网站。Django 具有一个丰富多样的、由可供你使用的可重用应用组建起来的生态系统——PyPI 列出了[超过 8000 个 Django 应用][2]——可你该如何知道哪些是最好的呢?
+
+为了节省你的时间,我们总结了五个最受喜爱的 Django 应用。它们是:
+
+- [Cookiecutter][3]: 建立 Django 网站的最佳方式。
+- [Whitenoise][4]: 最棒的静态资源服务器。
+- [Django Rest Framework][5]: 使用 Django 开发 REST API 的最佳方式。
+- [Wagtail][6]: 基于 Django 的最佳内容管理系统(CMS)。
+- [django-allauth][7]: 提供社交账户登录的最佳应用(如 Twitter, Facebook, GitHub 等)。
+
+我们同样推荐你看看 [Django Packages][8],这是一个可重用 Django 应用的目录。Django Packages 将 Django 应用组织成“表格”,你可以在功能相似的不同应用之间进行比较并做出选择。你可以查看每个包中提供的特性和使用统计情况。(比如:这是 [REST 工具的表格][9],也许可以帮助你理解我们为何推荐 Django REST Framework。)
+
+### 为什么你应该相信我们?
+
+我们使用 Django 的时间几乎比任何人都长。在 Django 发布之前,我们当中的两个人(Frank 和 Jacob)就在 [Lawrence Journal-World][10](Django 的发源地)工作(事实上,是他们两人推动了 Django 开源发布的进程)。我们在过去的八年当中运行着一个咨询公司,来建议公司怎样最好地应用 Django。
+
+所以,我们见证了 Django 项目和社群的完整历史,我们见证了那些流行的软件包的兴起和没落。在我们三个之中,我们个人可能试用了 8000 个应用中至少一半以上,或者我们知道谁试用过这些。我们对如何使应用变得坚实可靠有着深刻的理解,并且我们对给予这些应用持久力量的来源也有着深入的了解。
+
+### 建立 Django 网站的最佳方式:[Cookiecutter][3]
+
+建立一个新项目或应用总是有些痛苦。你可以用 Django 内建的 `startproject`。不过,如果你像我们一样,对如何做事比较挑剔,那么 Cookiecutter 正好可以帮到你:它为你提供了一个快捷简单的方式来构建项目或易于重用的应用模板,从而解决了这个问题。一个简单的例子:键入 `pip install cookiecutter`,然后在命令行中运行以下命令:
+
+```bash
+$ cookiecutter https://github.com/marcofucci/cookiecutter-simple-django
+```
+
+接下来你需要回答几个简单的问题,比如你的项目名称、目录(repo)、作者名字、E-Mail 和其他几个关于配置的小问题。这些能够帮你补充项目相关的细节。我们使用最最原始的 “_foo_” 作为我们的目录名称。所以 cookiecutter 在子目录 “_foo_” 下建立了一个简单的 Django 项目。
+
+如果你在 “_foo_” 项目中闲逛,你会看见你刚刚选择的其它设置已通过模板,连同所需的子目录一同嵌入到文件当中。这个“模板”由我们刚刚在执行 `cookiecutter` 命令时输入的唯一一个参数(Github 仓库 URL)所定义。这个样例工程使用了一个 Github 远程仓库作为模板;不过你也可以使用本地的模板,这在建立非重用项目时非常有用。
+
+我们认为 cookiecutter 是一个极棒的 Django 包,但是,事实上其实它在面对纯 Python 甚至非 Python 相关需求时也极为有用。你能够将所有文件以一种可重复的方式精确地摆放在任何位置上,使得 cookiecutter 成为了一个简化(DRY)工作流程的极佳工具。
+
+### 最棒的静态资源服务器:[Whitenoise][4]
+
+多年来,托管网站的静态资源——图片、Javascript、CSS——都是一件很痛苦的事情。Django 内建的 [django.views.static.serve][11] 视图,就像 Django 文档所述的那样,“在生产环境中不可靠,所以只应作为开发环境的辅助功能。”但使用一个“真正的” Web 服务器,如 NGINX 或者借助 CDN 来托管媒体资源,配置起来会比较困难。
+
+Whitenoise 很简洁地解决了这个问题。它可以像在开发环境那样轻易地在生产环境中设置静态服务器,并且针对生产环境进行了加固和优化。它的设置方法极为简单:
+
+1. 确保你在使用 Django 的 [contrib.staticfiles][12] 应用,并确认你在配置文件中正确设置了 `STATIC_ROOT` 变量。
+
+2. 在 `wsgi.py` 文件中启用 Whitenoise:
+
+    ```python
+    from django.core.wsgi import get_wsgi_application
+    from whitenoise.django import DjangoWhiteNoise
+
+    application = get_wsgi_application()
+    application = DjangoWhiteNoise(application)
+    ```
+
+配置它真的就这么简单!对于大型应用,你可能想要使用一个专用的媒体服务器和/或一个 CDN,但对于大多数小型或中型 Django 网站,Whitenoise 已经足够强大。
+
+如需查看更多关于 Whitenoise 的信息,[请查看文档][13]。
+
+### 开发 REST API 的最佳工具:[Django REST Framework][5]
+
+REST API 正在迅速成为现代 Web 应用的标准功能。API 就是简单地使用 JSON 而不是 HTML 来对话,当然你可以只用 Django 做到这些。你可以制作自己的视图,设置合适的 `Content-Type`,然后返回 JSON 而不是渲染后的 HTML 响应。这是在像 [Django Rest Framework][14](下称 DRF)这样的 API 框架发布之前,大多数人所做的。
+
+如果你对 Django 的视图类很熟悉,你会觉得使用 DRF 构建 REST API 与使用它们很相似,不过 DRF 只针对特定 API 使用场景而设计。一般的 API 设置只需要一点代码,所以我们没有提供一份让你兴奋的示例代码,而是强调了一些可以让你生活得更舒适的 DRF 特性:
+
+* 可自动预览的 API 可以使你的开发和人工测试轻而易举。你可以查看 DRF 的[示例代码][15]。你可以查看 API 响应,并且不需要你做任何事就可以支持 POST/PUT/DELETE 类型的操作。
+* 便于集成各种认证方式,如 OAuth、Basic Auth 或 API Tokens。
+* 内建请求速率限制。
+* 当与 [django-rest-swagger][16] 组合使用时,API 文档几乎可以自动生成。
+* 广泛的第三方库生态。
+
+当然,你可以不依赖 DRF 来构建 API,但我们无法想象你不去使用 DRF 的原因。就算你不使用 DRF 的全部特性,使用一个成熟的视图库来构建你自己的 API 也会使你的 API 更加一致、完整,更能提高你的开发速度。如果你还没有开始使用 DRF,你应该找点时间去体验一下。
+
+### 基于 Django 的最佳 CMS:[Wagtail][6]
+
+Wagtail 是当下 Django CMS(内容管理系统)世界中最受人青睐的应用,并且它的热门有足够的理由。就像大多数的 CMS 一样,它具有极佳的灵活性,可以通过简单的 Django 模型来定义不同类型的页面及其内容。使用它,你可以从零开始,在几个小时而不是几天之内搭建起一个基本可以运行的内容管理系统。举一个小例子,为你公司的员工定义一个员工页面类型可以像下面一样简单:
+
+```python
+from django.db import models
+
+from wagtail.wagtailcore.models import Page
+from wagtail.wagtailcore.fields import RichTextField
+from wagtail.wagtailadmin.edit_handlers import FieldPanel, MultiFieldPanel
+from wagtail.wagtailimages.edit_handlers import ImageChooserPanel
+
+class StaffPage(Page):
+    name = models.CharField(max_length=100)
+    hire_date = models.DateField()
+    bio = RichTextField()
+    email = models.EmailField()
+    headshot = models.ForeignKey('wagtailimages.Image', null=True, blank=True)
+    content_panels = Page.content_panels + [
+        FieldPanel('name'),
+        FieldPanel('hire_date'),
+        FieldPanel('email'),
+        FieldPanel('bio', classname="full"),
+        ImageChooserPanel('headshot'),
+    ]
+```
+
+然而,Wagtail 真正出彩的地方在于它的灵活性及其易于使用的现代化管理页面。你可以控制不同类型的页面可以在网站的哪些区域访问,为页面添加复杂的附加逻辑,还天生就支持标准的审核/审批工作流。在大多数 CMS 系统中,你会在开发时在某些点上遇到困难。而使用 Wagtail 时,我们经过不懈努力找到了一个突破口,使得我们能轻易地开发出一套简洁稳定的系统,让程序完全依照我们的想法运行。如果你对此感兴趣,我们写了一篇[深入理解 Wagtail][17]。
+
+### 提供社交账户登录的最佳工具:[django-allauth][7]
+
+django-allauth 是一个能够解决你的注册和认证需求的、可重用的 Django 应用。无论你需要构建本地注册系统还是社交账户注册系统,django-allauth 都能够帮你做到。
+
+这个应用支持多种认证体系,比如用户名或电子邮件。一旦用户注册成功,它还可以提供从无需认证到电子邮件认证的多种账户验证的策略。同时,它也支持多种社交账户和电子邮件账户。它还支持插拔式注册表单,可让用户在注册时回答一些附加问题。
+
+django-allauth 支持多于 20 种认证提供者,包括 Facebook、Github、Google 和 Twitter。如果你发现了一个它不支持的社交网站,很有可能通过第三方插件提供该网站的接入支持。这个项目还支持自定义后端,可以支持自定义的认证方式,对每个有定制认证需求的人来说这都很棒。
+
+django-allauth 易于配置,且有[完善的文档][18]。该项目通过了很多测试,所以你可以相信它的所有部件都会正常运作。
+
+你有最喜爱的 Django 包吗?请在评论中告诉我们。
+
+### 关于作者
+
+![Photo](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/main-one-i-use-everywhere.png?itok=66GC-D1q)
+
+*Jeff Triplett,劳伦斯,堪萨斯州*
+
+我在 2007 年搬到了堪萨斯州的劳伦斯,在 Django 的发源地—— Lawrence Journal-World 工作。我现在在劳伦斯市的 [Revolution Systems (Revsys)][19] 工作,做一位开发者兼顾问。
+
+我是[北美 Django 运动基金会(DEFNA)][20]的联合创始人,2015 和 2016 年 [DjangoCon US][21] 的会议主席,而且我在 Django 的发源地劳伦斯参与组织了 [Django Birthday][22] 来庆祝 Django 的 10 岁生日。
+
+我是当地越野跑小组的成员,我喜欢篮球,我还喜欢梦见自己随着一道气流游遍美国。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/15/12/5-favorite-open-source-django-packages
+
+作者:[Jeff Triplett][a]
+译者:[StdioA](https://github.com/StdioA)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jefftriplett
+[1]:https://docs.djangoproject.com/en/1.8/intro/reusable-apps/
+[2]:https://pypi.python.org/pypi?:action=browse&c=523
+[3]:https://github.com/audreyr/cookiecutter
+[4]:http://whitenoise.evans.io/en/latest/base.html
+[5]:http://www.django-rest-framework.org/
+[6]:https://wagtail.io/ +[7]:http://www.intenct.nl/projects/django-allauth/ +[8]:https://www.djangopackages.com/ +[9]:https://www.djangopackages.com/grids/g/rest/ +[10]:http://www2.ljworld.com/news/2015/jul/09/happy-birthday-django/ +[11]:https://docs.djangoproject.com/en/1.8/ref/views/#django.views.static.serve +[12]:https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/ +[13]:http://whitenoise.evans.io/en/latest/index.html +[14]:http://www.django-rest-framework.org/ +[15]:http://restframework.herokuapp.com/ +[16]:http://django-rest-swagger.readthedocs.org/en/latest/index.html +[17]:https://opensource.com/business/15/5/wagtail-cms +[18]:http://django-allauth.readthedocs.org/en/latest/ +[19]:http://www.revsys.com/ +[20]:http://defna.org/ +[21]:https://2015.djangocon.us/ +[22]:https://djangobirthday.com/ diff --git a/translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md b/translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md deleted file mode 100644 index 81e999fd80..0000000000 --- a/translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md +++ /dev/null @@ -1,164 +0,0 @@ -5个最受喜爱的开源Django包 -================================================================================ -![Yearbook cover 2015](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/osdc-open-source-yearbook-lead8.png?itok=0_5-hdFE) - -图片来源:Opensource.com - -_Jacob Kaplan-Moss和Frank Wiles也参与了本文的写作。_ - -Django围绕“[可重用应用][1]”观点建立:自我包含了提供可重复使用特性的包。你可以将这些可重用应用组装起来,在加上适用于你的网站的特定代码,来搭建你自己的网站。Django具有一个丰富多样的、由可供你使用的可重用应用组建起来的生态系统——PyPI列出了[超过8000个 Django 应用][2]——可你该如何知道哪些是最好的呢? 
- -为了节省你的时间,我们总结了五个最受喜爱的 Django 应用。它们是: -- [Cookiecutter][3]: 建立 Django 网站的最佳方式。 -- [Whitenoise][4]: 最棒的静态资源服务器。 -- [Django Rest Framework][5]: 使用 Django 开发 REST API 的最佳方式。 -- [Wagtail][6]: 基于 Django 的最佳内容管理系统。 -- [django-allauth][7]: 提供社交账户登录的最佳应用(如 Twitter, Facebook, GitHub 等)。 - -我们同样推荐你查看 [Django Packages][8],一个可重用 Django 应用的目录。Django Packages 将 Django 应用组织成“表格”,你可以在功能相似的不同应用之间进行比较并做出选择。你可以查看每个包中提供的特性,和使用统计情况。(比如:这是[ REST 工具的表格][9],也许可以帮助你理解我们为何推荐 Django REST Framework. - -## 为什么你应该相信我们? - -我们使用 Django 的时间比几乎其他人都长。在 Django 发布之前,我们当中的两个人(Frank 和 Jacob)在 [Lawrence Journal-World][10] (Django 的发源地)工作(事实上他们两人推动了 Django 开源发布的进程)。我们在过去的八年当中运行着一个咨询公司,来建议公司使用 Django 去将事情做到最好。 - -所以,我们见证了Django项目和社群的完整历史,我们见证了流行软件包的兴起和没落。在我们三个中,我们可能私下试用了8000个应用中的一半以上,或者我们知道谁试用过这些。我们对如何使应用变得坚实可靠有着深刻的理解,并且我们对给予这些应用持久力量的来源也有不错的理解。 - -## 建立Django网站的最佳方式:[Cookiecutter][3] - -建立一个新项目或应用总是有些痛苦。你可以用Django内建的 `startproject`。不过,如果你像我们一样,你可能会对你的办事方式很挑剔。Cookiecutter 为你提供了一个快捷简单的方式来构建易于使用的项目或应用模板,从而解决了这个问题。一个简单的例子:键入 `pip install cookiecutter`,然后在命令行中运行以下命令: - -```bash -$ cookiecutter https://github.com/marcofucci/cookiecutter-simple-django -``` - -接下来你需要回答几个简单的问题,比如你的项目名称、目录、作者名字、E-Mail和其他几个关于配置的小问题。这些能够帮你补充项目相关的细节。我们使用最最原始的 "_foo_" 作为我们的目录名称。所以 cokkiecutter 在子目录 "_foo_" 下建立了一个简单的 Django 项目。 - -如果你在"_foo_"项目中闲逛,你会看见你刚刚选择的其它设置已通过模板,连同子目录一同嵌入到文件当中。这个“模板”在我们刚刚在执行 `cookiecutter` 命令时输入的 Github 仓库 URL 中定义。这个样例工程使用了一个 Github 远程仓库作为模板;不过你也可以使用本地的模板,这在建立非重用项目时非常有用。 - -我们认为 cookiecutter 是一个极棒的 Django 包,但是,事实上其实它在面对纯 Python 甚至非 Python 相关需求时也极为有用。你能够将所有文件依你所愿精确摆放在任何位置上,使得 cookiecutter 成为了一个简化工作流程的极佳工具。 - -## 最棒的静态资源服务器:[Whitenoise][4] - -多年来,托管网站的静态资源——图片、Javascript、CSS——都是一件很痛苦的事情。Django 内建的 [django.views.static.serve][11] 视图,就像Django文章所述的那样,“在生产环境中不可靠,所以只应为开发环境的提供辅助功能。”但使用一个“真正的” Web 服务器,如 NGINX 或者借助 CDN 来托管媒体资源,配置起来会相当困难。 - -Whitenoice 很简洁地解决了这个问题。它可以像在开发环境那样轻易地在生产环境中设置静态服务器,并且针对生产环境进行了加固和优化。它的设置方法极为简单: - -1. 确保你在使用 Django 的 [contrib.staticfiles][12] 应用,并确认你在配置文件中正确设置了 `STATIC_ROOT` 变量。 - -2. 
在 `wsgi.py` 文件中启用 Whitenoise: - - ```python - from django.core.wsgi import get_wsgi_application - from whitenoise.django import DjangoWhiteNoise - - application = get_wsgi_application() - application = DjangoWhiteNoise(application) - ``` - -配置它真的就这么简单!对于大型应用,你可能想要使用一个专用的媒体服务器和/或一个 CDN,但对于大多数小型或中型 Django 网站,Whitenoise 已经足够强大。 - -如需查看更多关于 Whitenoise 的信息,[请查看文档][13]。 - -## 开发REST API的最佳工具:[Django REST Framework][5] - -REST API 正在迅速成为现代 Web 应用的标准功能。与一个 API 进行简短的会话,你只需使用 JSON 而不是 HTML,当然你可以只用 Django 做到这些。你可以制作自己的视图,设置合适的 `Content-Type`, 然后返回 JSON 而不是渲染后的 HTML 响应。这是在像 [Django Rest Framework][14](下称DRF)这样的API框架发布之前,大多数人所做的。 - -如果你对 Django 的视图类很熟悉,你会觉得使用DRF构建REST API与使用它们很相似,不过 DRF 只针对特定 API 使用场景而设计。在一般 API 设计中,你会用到它的不少代码,所以我们强调了一些 DRF 的特性来使你更快地接受它,而不是去看一份让你兴奋的示例代码: - -* 可自动预览的 API 可以使你的开发和人工测试轻而易举。你可以查看 DRF 的[示例代码][15]。你可以查看 API 响应,并且它支持 POST/PUT/DELETE 类型的操作,不需要你做任何事。 - - -* 认证方式易于迁移,如OAuth, Basic Auth, 或API Tokens. -* 内建请求速度限制。 -* 当与 [django-rest-swagger][16] 结合时,API文档几乎可以自动生成。 -* 第三方库拥有广泛的生态。 - -当然你可以不依赖 DRF 来构建 API,但我们无法推测你不开始使用 DRF 的原因。就算你不使用 DRF 的全部特性,使用一个成熟的视图库来构建你自己的 API 也会使你的 API 更加一致、完全,更能提高你的开发速度。如果你还没有开始使用 DRF, 你应该找点时间去体验一下。 - -## 以 Django 为基础的最佳 CMS:[Wagtail][6] - -Wagtail是当下Django CMS(内容管理系统)世界中最受人青睐的应用,并且它的热门有足够的理由。就想大多数的 CMS 一样,它具有极佳的灵活性,可以通过简单的 Django 模型来定义不同类型的页面及其内容。使用它,你可以从零开始,在几个小时而不是几天之内来和建造一个基本可以运行的内容管理系统。举一个小例子,为你公司的员工定义一个页面类型可以像下面一样简单: - -```python -from wagtail.wagtailcore.models import Page -from wagtail.wagtailcore.fields import RichTextField -from wagtail.wagtailadmin.edit_handlers import FieldPanel, MultiFieldPanel -from wagtail.wagtailimages.edit_handlers import ImageChooserPanel - -class StaffPage(Page): - name = models.CharField(max_length=100) - hire_date = models.DateField() - bio = models.RichTextField() - email = models.EmailField() - headshot = models.ForeignKey('wagtailimages.Image', null=True, blank=True) - content_panels = Page.content_panels + [ - FieldPanel('name'), - FieldPanel('hire_date'), - FieldPanel('email'), - 
FieldPanel('bio',classname="full"), - ImageChoosePanel('headshot'), - ] -``` - -然而,Wagtail 真正出彩的地方在于它的灵活性及其易于使用的现代化管理页面。你可以控制不同类型的页面在哪网站的哪些区域可以访问,为页面添加复杂的附加逻辑,还可以极为方便地取得标准的适应/审批工作流。在大多数 CMS 系统中,你会在开发时在某些点上遇到困难。而使用 Wagtail 时,我们经过不懈努力找到了一个突破口,使得让我们轻易地开发出一套简洁稳定的系统,使得程序完全依照我们的想法运行。如果你对此感兴趣,我们写了一篇[深入理解 Wagtail][17]. - -## 提供社交账户登录的最佳工具:[django-allauth][7] - -django-allauth 是一个能够解决你的注册和认证需求的、可重用的Django应用。无论你需要构建本地注册系统还是社交账户注册系统,django-allauth 都能够帮你做到。 - -这个应用支持多种认证体系,比如用户名或电子邮件。一旦用户注册成功,它可以提供从零到电子邮件认证的多种账户验证的策略。同时,它也支持多种社交账户和电子邮件账户关联。它还支持可插拔的注册表单,可让用户在注册时回答一些附加问题。 - -django-allauth 支持多于 20 种认证提供者,包括 Facebook, Github, Google 和 Twitter。如果你发现了一个它不支持的社交网站,那很有可能有一款第三方插件提供该网站的接入支持。这个项目还支持自定义后台开发,可以支持自定义的认证方式。 - -django-allauth 易于配置,且有[完善的文档][18]。该项目通过了很多测试,所以你可以相信它的所有部件都会正常运作。 - -你有最喜爱的 Django 包吗?请在评论中告诉我们。 - -## 关于作者 - -![Photo](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/main-one-i-use-everywhere.png?itok=66GC-D1q) - -Jeff Triplett - -劳伦斯,堪萨斯州 - - -我在 2007 年搬到了堪萨斯州的劳伦斯,在 Django 的发源地—— Lawrence Journal-World 工作。我现在在劳伦斯市的 [Revolution Systems (Revsys)][19] 工作,做一位开发者兼顾问。 - -我是[北美 Django 运动基金会(DEFNA)][20]的联合创始人,2015 和 2016 年 [DjangoCon US][21] 的会议主席,而且我在 Django 的发源地劳伦斯参与组织了 [Django Birthday][22] 来庆祝 Django 的 10 岁生日。 - -我是当地越野跑小组的成员,我喜欢篮球,我还喜欢梦见自己随着一道气流游遍美国。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/business/15/12/5-favorite-open-source-django-packages - -作者:[Jeff Triplett][a] -译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jefftriplett -[1]:https://docs.djangoproject.com/en/1.8/intro/reusable-apps/ -[2]:https://pypi.python.org/pypi?:action=browse&c=523 -[3]:https://github.com/audreyr/cookiecutter -[4]:http://whitenoise.evans.io/en/latest/base.html -[5]:http://www.django-rest-framework.org/ 
-[6]:https://wagtail.io/ -[7]:http://www.intenct.nl/projects/django-allauth/ -[8]:https://www.djangopackages.com/ -[9]:https://www.djangopackages.com/grids/g/rest/ -[10]:http://www2.ljworld.com/news/2015/jul/09/happy-birthday-django/ -[11]:https://docs.djangoproject.com/en/1.8/ref/views/#django.views.static.serve -[12]:https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/ -[13]:http://whitenoise.evans.io/en/latest/index.html -[14]:http://www.django-rest-framework.org/ -[15]:http://restframework.herokuapp.com/ -[16]:http://django-rest-swagger.readthedocs.org/en/latest/index.html -[17]:https://opensource.com/business/15/5/wagtail-cms -[18]:http://django-allauth.readthedocs.org/en/latest/ -[19]:http://www.revsys.com/ -[20]:http://defna.org/ -[21]:https://2015.djangocon.us/ -[22]:https://djangobirthday.com/ From a35d8980ac554646270bdf28bf2e1495b3695f8d Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 13 Aug 2016 20:38:31 +0800 Subject: [PATCH 404/471] =?UTF-8?q?=E6=9B=B4=E6=AD=A3?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- published/20160306 5 Favorite Open Source Django Packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/published/20160306 5 Favorite Open Source Django Packages.md b/published/20160306 5 Favorite Open Source Django Packages.md index c97ed74570..8c73c702e1 100644 --- a/published/20160306 5 Favorite Open Source Django Packages.md +++ b/published/20160306 5 Favorite Open Source Django Packages.md @@ -75,7 +75,7 @@ REST API 正在迅速成为现代 Web 应用的标准功能。 API 就是简单 当然,你可以不依赖 DRF 来构建 API,但我们无法想象你不去使用 DRF 的原因。就算你不使用 DRF 的全部特性,使用一个成熟的视图库来构建你自己的 API 也会使你的 API 更加一致、完全,更能提高你的开发速度。如果你还没有开始使用 DRF, 你应该找点时间去体验一下。 -## 基于 Django 的最佳 CMS:[Wagtail][6] +### 基于 Django 的最佳 CMS:[Wagtail][6] Wagtail 是当下 Django CMS(内容管理系统)世界中最受人青睐的应用,并且它的热门有足够的理由。就像大多数的 CMS 一样,它具有极佳的灵活性,可以通过简单的 Django 模型来定义不同类型的页面及其内容。使用它,你可以从零开始在几个小时而不是几天之内来和建造一个基本可以运行的内容管理系统。举一个小例子,为你公司的员工定义一个员工页面类型可以像下面一样简单: From 
9fec424d17f3d4802c5039e46bc2cee453e02415 Mon Sep 17 00:00:00 2001 From: WangYue <815420852@qq.com> Date: Sat, 13 Aug 2016 20:43:13 +0800 Subject: [PATCH 405/471] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E4=B8=AD?= =?UTF-8?q?=E3=80=9118=20Best=20IDEs=20for=20C/C++=20Programming=20or=20So?= =?UTF-8?q?urce=20Code=20Editors=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 原译者很久没更新 --- ...DEs for C+C++ Programming or Source Code Editors on Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md index 01fb7c5c22..03b18de986 100644 --- a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md +++ b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md @@ -1,4 +1,4 @@ -translating by fw8899 +translating by WangYueScream & LiBrad 18 Best IDEs for C/C++ Programming or Source Code Editors on Linux ====================================================================== From e4b35e4f9e50bd49e0111aa55b56315ffdff8b37 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 13 Aug 2016 22:52:44 +0800 Subject: [PATCH 406/471] PUB:20160623 Advanced Image Processing with Python MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Johnny-Liao 辛苦了,这篇太专业了。@oska874 以后不要选这么冷僻的文章了。 --- ...3 Advanced Image Processing with Python.md | 113 ++++++++++++++++ ...3 Advanced Image Processing with Python.md | 124 ------------------ 2 files changed, 113 insertions(+), 124 deletions(-) create mode 100644 published/20160623 Advanced Image Processing with Python.md delete mode 100644 translated/tech/20160623 Advanced Image Processing with Python.md diff --git a/published/20160623 Advanced Image Processing with Python.md b/published/20160623 Advanced 
Image Processing with Python.md new file mode 100644 index 0000000000..f668a63924 --- /dev/null +++ b/published/20160623 Advanced Image Processing with Python.md @@ -0,0 +1,113 @@ +Python 高级图像处理 +====================================== + +![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/Image-Search-Engine.png) + +构建图像搜索引擎并不是一件容易的任务。这里有几个概念、工具、想法和技术需要实现。主要的图像处理概念之一是逆图像查询(RIQ:reverse image querying)。Google、Cloudera、Sumo Logic 和 Birst 等公司在使用逆图像搜索中名列前茅。通过分析图像和使用数据挖掘 RIQ 提供了很好的洞察分析能力。 + +### 顶级公司与逆图像搜索 + +有很多顶级的技术公司使用 RIQ 来取得了不错的收益。例如:在 2014 年 Pinterest 第一次带来了视觉搜索。随后在 2015 年发布了一份白皮书,披露了其架构。逆图像搜索让 Pinterest 获得了时尚品的视觉特征,并可以显示相似产品的推荐。 + +众所周知,谷歌图片使用逆图像搜索允许用户上传一张图片然后搜索相关联的图片。通过使用先进的算法对提交的图片进行分析和数学建模,然后和谷歌数据库中无数的其他图片进行比较得到相似的结果。 + +**这是 OpenCV 2.4.9 特征比较报告一个图表:** + +![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/search-engine-graph.jpg) + +### 算法 & Python库 + +在我们使用它工作之前,让我们过一遍构建图像搜索引擎的 Python 库的主要元素: + +### 专利算法 + +#### 尺度不变特征变换(SIFT - Scale-Invariant Feature Transform)算法 + +1. 带有非自由功能的一个专利技术,利用图像识别符,以识别相似图像,甚至那些来自不同的角度,大小,深度和尺度的图片,也会被包括在搜索结果中。[点击这里][4]查看 SIFT 详细视频。 +2. SIFT 能与从许多图片中提取了特征的大型数据库正确地匹配搜索条件。 +3. 能匹配不同视角的相同图像和匹配不变特征来获得搜索结果是 SIFT 的另一个特征。了解更多关于尺度不变[关键点][5]。 + +#### 加速鲁棒特征(SURF - Speeded Up Robust Features)算法 + +1. [SURF][1] 也是一种带有非自由功能的专利技术,而且还是一种“加速”的 SIFT 版本。不像 SIFT,SURF 接近于带有箱式过滤器(Box Filter)的高斯拉普拉斯算子(Laplacian of Gaussian)。 +2. SURF 依赖于黑塞矩阵(Hessian Matrix)的位置和尺度。 +3. 在许多应用中,旋转不变性不是一个必要条件,所以不按这个方向查找加速了处理。 +4. SURF 包括了几种特性,提升了每一步的速度。SIFT 在旋转和模糊化方面做的很好,比 SIFT 的速度快三倍。然而它不擅长处理照明和变换视角。 +5. OpenCV 程序功能库提供了 SURF 功能,SURF.compute() 和 SURF.Detect() 可以用来找到描述符和要点。阅读更多关于SURF[点击这里][2] + +### 开源算法 + +#### KAZE 算法 + +1. KAZE是一个开源的非线性尺度空间的二维多尺度和新的特征检测和描述算法。在加性算子分裂(Additive Operator Splitting, AOS)和可变电导扩散中的有效技术被用来建立非线性尺度空间。 +2. 多尺度图像处理的基本原理很简单:创建一个图像的尺度空间,同时用正确的函数过滤原始图像,以提高时间或尺度。 + +#### 加速的 KAZE(AKAZE - Accelerated-KAZE) 算法 + +1. 
顾名思义,这是一个更快的图像搜索方式,它会在两幅图像之间找到匹配的关键点。AKAZE 使用二进制描述符和非线性尺度空间来平衡精度和速度。 + +#### 二进制鲁棒性不变尺度可变关键点(BRISK - Binary Robust Invariant Scalable Keypoints) 算法 + +1. BRISK 非常适合关键点的描述、检测与匹配。 +2. 是一种高度自适应的算法,基于尺度空间 FAST 的快速检测器和一个位字符串描述符,有助于显著加快搜索。 +3. 尺度空间关键点检测与关键点描述帮助优化当前相关任务的性能。 + +#### 快速视网膜关键点(FREAK - Fast Retina Keypoint) + +1. 这个新的关键点描述的灵感来自人的眼睛。通过图像强度比能有效地计算一个二进制串级联。FREAK 算法相比 BRISK、SURF 和 SIFT 算法可以更快的计算与内存负载较低。 + +#### 定向 FAST 和旋转 BRIEF(ORB - Oriented FAST and Rotated BRIEF) + +1. 快速的二进制描述符,ORB 具有抗噪声和旋转不变性。ORB 建立在 FAST 关键点检测器和 BRIEF 描述符之上,有成本低、性能好的元素属性。 +2. 除了快速和精确的定位元件,有效地计算定向的 BRIEF,分析变动和面向 BRIEF 特点相关,是另一个 ORB 的特征。 + +### Python库 + +#### OpenCV + +1. OpenCV 支持学术和商业用途,它是一个开源的机器学习和计算机视觉库,OpenCV 便于组织利用和修改代码。 +2. 超过 2500 个优化的算法,包括当前最先进的机器学习和计算机视觉算法服务与各种图像搜索--人脸检测、目标识别、摄像机目标跟踪,从图像数据库中寻找类似图像、眼球运动跟随、风景识别等。 +3. 像谷歌,IBM,雅虎,索尼,本田,微软和英特尔这样的大公司广泛的使用 OpenCV。 +4. OpenCV 拥有 python,java,C,C++ 和 MATLAB 接口,同时支持 Windows,Linux,Mac OS 和 Android。 + +#### Python 图像库 (PIL) + +1. Python 图像库(PIL)支持多种文件格式,同时提供图像处理和图形解决方案。开源的 PIL 为你的 Python解释器添加了图像处理能力。 +2. 标准的图像处理能力包括图像增强、透明和遮罩处理、图像过滤、像素操作等。 + +详细的数据和图表,请看[这里][3]的 OpenCV 2.4.9 特征比较报告。 + +### 构建图像搜索引擎 + +图像搜索引擎可以从预置的图像库选择相似的图像。其中最受欢迎的是谷歌的著名的图像搜索引擎。对于初学者来说,有不同的方法来建立这样的系统。提几个如下: + +1. 采用图像提取、图像描述提取、元数据提取和搜索结果提取,建立图像搜索引擎。 +2. 定义你的图像描述符,数据集索引,定义你的相似性度量,然后进行搜索和排名。 +3. 
选择要搜索的图像,选择用于进行搜索的目录,搜索所有图片的目录,创建图片特征索引,评估搜索图片的相同特征,匹配搜索的图片并获得匹配的图片。 + +我们的方法基本上从比较灰度版本的图像,逐渐演变到复杂的特征匹配算法如 SIFT 和 SURF,最后采用的是开源的解决方案 BRISK 。所有这些算法都提供了有效的结果,但在性能和延迟有细微变化。建立在这些算法上的引擎有许多应用,如分析流行统计的图形数据,在图形内容中识别对象,等等。 + +**举例**:一个 IT 公司为其客户建立了一个图像搜索引擎。因此,如果如果搜索一个品牌的标志图像,所有相关的品牌形象也应该显示在搜索结果。所得到的结果也能够被客户用于分析,使他们能够根据地理位置估计品牌知名度。但它还比较年轻,RIQ(反向图像搜索)的潜力尚未被完全挖掘利用。 + +这就结束了我们的文章,使用 Python 构建图像搜索引擎。浏览我们的博客部分来查看最新的编程技术。 + +数据来源:OpenCV 2.4.9 特征比较报告(computer-vision-talks.com) + +(感谢 Ananthu Nair 的指导与补充) + +-------------------------------------------------------------------------------- + +via: http://www.cuelogic.com/blog/advanced-image-processing-with-python/ + +作者:[Snehith Kumbla][a] +译者:[Johnny-Liao](https://github.com/Johnny-Liao) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.cuelogic.com/blog/author/snehith-kumbla/ +[1]: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html +[2]: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf +[3]: https://docs.google.com/spreadsheets/d/1gYJsy2ROtqvIVvOKretfxQG_0OsaiFvb7uFRDu5P8hw/edit#gid=10 +[4]: https://www.youtube.com/watch?v=NPcMS49V5hg +[5]: https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf diff --git a/translated/tech/20160623 Advanced Image Processing with Python.md b/translated/tech/20160623 Advanced Image Processing with Python.md deleted file mode 100644 index 73f8751225..0000000000 --- a/translated/tech/20160623 Advanced Image Processing with Python.md +++ /dev/null @@ -1,124 +0,0 @@ -Python高级图像处理 -====================================== - -![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/Image-Search-Engine.png) - -构建图像搜索引擎并不是一件容易的任务。这里有几个概念、工具、想法和技术需要实现。主要的图像处理概念之一是逆图像查询(RIQ)或者说逆图像搜索。Google、Cloudera、Sumo Logic 和 Birst等公司在使用逆图像搜索中名列前茅。通过分析图像和使用数据挖掘RIQ提供了很好的洞察分析能力。 - -### 顶级公司与逆图像搜索 - 
-有很多顶级的技术公司使用RIQ来产生最好的收益。例如:在2014年Pinterest第一次带来了视觉搜索。随后在2015年发布了一份白皮书,揭示了其架构。逆图像搜索让Pinterest获得对时尚对象的视觉特征和显示类似的产品建议的能力。 - -众所周知,谷歌图片使用逆图像搜索允许用户上传一张图片然后搜索相关联的图片。通过使用先进的算法对提交的图片进行分析和数学建模。然后和谷歌数据库中无数的其他图片进行比较得到相似的结果。 - -**这是opencv 2.4.9特征比较报告一个图表:** - -![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/search-engine-graph.jpg) - -### 算法 & Python库 - -在我们使用它工作之前,让我们过一遍构建图像搜索引擎的Python库的主要元素: - -### 专利算法 - -#### 尺度不变特征变换算法(SIFT - Scale-Invariant Feature Transform) - -1. 非自由功能的一个专利技术,利用图像识别符,以识别相似图像,甚至那些从不同的角度,大小,深度和规模点击,它们被包括在搜索结果中。[点击这里][4]查看SIFT详细视频。 -2. SIFT能与从许多图片中提取的特征的大型数据库正确的匹配搜索条件。 -3. 从不同的方面来匹配相同的图像和匹配不变特征来获得搜索结果是SIFT的另一个特征。了解更多关于尺度不变[关键点][5] - -#### 加速鲁棒特征(SURF - Speeded Up Robust Features)算法 - -1. [SURF][1] 也是一种非自由功能的专利技术而且还是一种“加速”的SIFT版本。不像SIFT, - -2. SURF依赖于Hessian矩阵行列式的位置和尺度。 - -3. 在许多应用中,旋转不变性是不是一个必要条件。所以找不到这个方向的速度加快了这个过程。 - -4. SURF包括几种特性,在每一步的速度提高。比SIFT快三倍的速度,SIFT擅长旋转和模糊化。然而它不擅长处理照明和变换视角。 - -5. Open CV,一个程序功能库提供SURF相似的功能,SURF.compute()和SURF.Detect()可以用来找到描述符和要点。阅读更多关于SURF[点击这里][2] - -### 开源算法 - -#### KAZE 算法 - -1. KAZE是一个开源的非线性尺度空间的二维多尺度和新的特征检测和描述算法。在加性算子分裂的有效技术(AOS)和可变电导扩散是用来建立非线性尺度空间。 - -2. 多尺度图像处理基础是简单的创建一个图像的尺度空间,同时用正确的函数过滤原始图像,提高时间或规模。 - -#### AKAZE (Accelerated-KAZE) 算法 - -1. 顾名思义,这是一个更快的图像搜索方式,找到匹配的关键点在两幅图像之间。AKAZE 使用二进制描述符和非线性尺度空间来平衡精度和速度。 - -#### BRISK (Binary Robust Invariant Scalable Keypoints) 算法 - -1. BRISK 非常适合描述关键点的检测与匹配。 - -2. 是一种高度自适应的算法,基于尺度空间的快速检测器和一个位字符串描述符,有助于加快搜索显着。 - -3. 尺度空间关键点检测与关键点描述帮助优化手头相关任务的性能 - -#### FREAK (Fast Retina Keypoint) - -1. 这个新的关键点描述的灵感来自人的眼睛。通过图像强度比能有效地计算一个二进制串级联。FREAK算法相比BRISK,SURF和SIFT算法有更快的计算与较低的内存负载。 - -#### ORB (Oriented FAST and Rotated BRIEF) - -1. 快速的二进制描述符,ORB具有抗噪声和旋转不变性。ORB建立在FAST关键点检测器和BRIEF描述符之上,有成本低、性能好的元素属性。 - -2. 除了快速和精确的定位元件,有效地计算定向的BRIEF,分析变动和面向BRIEF特点相关,是另一个ORB的特征。 - -### Python库 - -#### Open CV - -1. Open CV提供学术和商业用途。一个开源的机器学习和计算机视觉库,OpenCV便于组织利用和修改代码。 - -2. 超过2500个优化算法,包括国家最先进的机器学习和计算机视觉算法服务与各种图像搜索--人脸检测、目标识别、摄像机目标跟踪,从图像数据库中寻找类似图像、眼球运动跟随、风景识别等。 - -3. 
像谷歌,IBM,雅虎,IBM,索尼,本田,微软和英特尔这样的大公司广泛的使用OpenCV。 - -4. OpenCV拥有python,java,C,C++和MATLAB接口,同时支持Windows,Linux,Mac OS和Android。 - -#### Python图像库 (PIL) - -1. Python图像库(PIL)支持多种文件格式,同时提供图像处理和图形解决方案。开源的PIL为你的Python解释器添加图像处理能力。 -2. 标准的图像处理能力包括图像增强、透明和屏蔽作用、图像过滤、像素操作等。 - -详细的数据和图表,请看OpenCV 2.4.9 特征比较报告。[这里][3] - -### 构建图像搜索引擎 - -图像搜索引擎可以从预置集图像库选择相似的图像。其中最受欢迎的是谷歌的著名的图像搜索引擎。对于初学者来说,有不同的方法来建立这样的系统。提几个如下: - -1. 采用图像提取、图像描述提取、元数据提取和搜索结果提取,建立图像搜索引擎。 -2. 定义你的图像描述符,数据集索引,定义你的相似性度量,然后搜索和排名。 -3. 选择要搜索的图像,选择用于进行搜索的目录,所有图片的搜索目录,创建图片特征索引,评估相同的搜索图片的功能,在搜索中匹配图片,并获得匹配的图片。 - -我们的方法基本上就比较grayscaled版本的图像,逐渐移动到复杂的特征匹配算法如SIFT和SURF,最后沉淀下来的是开源的解决方案称为BRISK。所有这些算法提供了有效的结果,在性能和延迟的细微变化。建立在这些算法上的引擎有许多应用,如分析流行统计的图形数据,在图形内容中识别对象,以及更多。 - -**例如**:一个图像搜索引擎需要由一个IT公司作为客户机来建立。因此,如果一个品牌的标志图像被提交在搜索中,所有相关的品牌形象搜索显示结果。所得到的结果也可以通过客户端分析,使他们能够根据地理位置估计品牌知名度。但它还比较年轻,RIQ或反向图像搜索尚未被完全挖掘利用。 - -这就结束了我们的文章,使用Python构建图像搜索引擎。浏览我们的博客部分来查看最新的编程技术。 - -数据来源:OpenCV 2.4.9 特征比较报告(computer-vision-talks.com) - -(Ananthu Nair 的指导与补充) - --------------------------------------------------------------------------------- - -via: http://www.cuelogic.com/blog/advanced-image-processing-with-python/ - -作者:[Snehith Kumbla][a] -译者:[Johnny-Liao](https://github.com/Johnny-Liao) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.cuelogic.com/blog/author/snehith-kumbla/ -[1]: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html -[2]: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf -[3]: https://docs.google.com/spreadsheets/d/1gYJsy2ROtqvIVvOKretfxQG_0OsaiFvb7uFRDu5P8hw/edit#gid=10 -[4]: https://www.youtube.com/watch?v=NPcMS49V5hg -[5]: https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf From 762fd2dd8f4c9ede4cb721fd0c19903bc16c5b39 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 14 Aug 2016 01:27:21 +0800 Subject: [PATCH 407/471] =?UTF-8?q?20160814-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...asks and Application Deployments Over SSH.md | 252 ++++++++++++++++
 1 file changed, 252 insertions(+)
 create mode 100644 sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md

diff --git a/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md b/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md
new file mode 100644
index 0000000000..43b1fe59c7
--- /dev/null
+++ b/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md
@@ -0,0 +1,252 @@
+Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH
+===========================
+
+When it comes to managing remote machines and deploying applications, there are several command line tools in existence, though many share a common problem: a lack of detailed documentation.
+
+In this guide, we shall introduce Fabric and cover how to get started with it to improve the administration of groups of servers.
+
+![](http://www.tecmint.com/wp-content/uploads/2015/11/Automate-Linux-Administration-Tasks-Using-Fabric.png)
+>Automate Linux Administration Tasks Using Fabric
+
+Fabric is a Python library and a powerful command line tool for performing system administration tasks such as executing SSH commands on multiple machines and application deployment.
+
+Having a working knowledge of Python can be helpful when using Fabric, but is certainly not necessary.
+
+Reasons why you should choose fabric over other alternatives:
+
+- Simplicity
+- It is well-documented
+- You don’t need to learn another language if you’re already a Python user.
+- Easy to install and use.
+- It is fast in its operations.
+- It supports parallel remote execution.
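The feature list above boils down to one core pattern: running shell commands from Python and collecting their output, locally or over SSH. As a rough, standard-library-only illustration of that idea (this imitates the spirit of Fabric's `local()` helper and is not Fabric's actual implementation), consider:

```python
import subprocess

def local(command):
    """Run a shell command on this machine and return its trimmed output,
    roughly the job Fabric's local() performs for the local host."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

# Fabric generalizes this single-machine pattern to whole lists of hosts
# over SSH, and wraps it in the `fab` task runner.
print(local("echo Hello from a Fabric-style task"))
```

Fabric's real value is everything layered on top of this basic idea: the SSH transport, host lists, and task execution via the `fab` command.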
+
+### How to Install Fabric Automation Tool in Linux
+
+An important characteristic of Fabric is that the remote machines you need to administer only need to have the standard OpenSSH server installed. Before you can get started, you only need certain requirements installed on the server from which you are administering the remote servers.
+
+#### Requirements:
+
+- Python 2.5+ with the development headers
+- Python-setuptools and pip (optional, but preferred)
+- gcc
+
+Fabric is easily installed using pip (highly recommended), but you may also prefer to use your default package manager `yum`, `dnf` or `apt-get` to install the fabric package, typically called fabric or python-fabric.
+
+For RHEL/CentOS based distributions, you must have the [EPEL repository][1] installed and enabled on the system to install the fabric package.
+
+```
+# yum install fabric [On RedHat based systems]
+# dnf install fabric [On Fedora 22+ versions]
+```
+
+For Debian and its derivatives such as Ubuntu and Mint, you can simply use apt-get to install the fabric package as shown:
+
+```
+# apt-get install fabric
+```
+
+If you want to install the development version of fabric, you may use pip to grab the most recent master branch.
+
+```
+# yum install python-pip [On RedHat based systems]
+# dnf install python-pip [On Fedora 22+ versions]
+# apt-get install python-pip [On Debian based systems]
+```
+
+Once pip has been installed successfully, you may use pip to grab the latest version of fabric as shown:
+
+```
+# pip install fabric
+```
+
+### How to Use Fabric to Automate Linux Administration Tasks
+
+So let’s get started with Fabric. During the installation process, a Python script called fab was added to a directory in your path. The `fab` script does all the work when using fabric.
+
+#### Executing commands on the local Linux machine
+
+By convention, you need to start by creating a Python file called fabfile.py using your favorite editor. Remember you can give this file a different name as you wish, but you will need to specify the file path as follows:
+
+```
+# fab --fabfile /path/to/the/file.py
+```
+
+Fabric uses `fabfile.py` to execute tasks. The fabfile should be in the same directory where you run the Fabric tool.
+
+Example 1: Let’s create a basic `Hello World` first.
+
+```
+# vi fabfile.py
+```
+
+Add these lines of code in the file.
+
+```
+def hello():
+    print('Hello world, Tecmint community')
+```
+
+Save the file and run the command below.
+
+```
+# fab hello
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2015/11/Create-Fabric-Fab-Python-File.gif)
+>Fabric Tool Usage
+
+Example 2: Open fabfile.py again and paste the following lines of code in the file.
+
+```
+#! /usr/bin/env python
+from fabric.api import local
+def uptime():
+    local('uptime')
+```
+
+Then save the file and run the following command:
+
+```
+# fab uptime
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Uptime.gif)
+>Fabric: Check System Uptime
+
+#### Executing commands on remote Linux machines to automate tasks
+
+The Fabric API uses a configuration dictionary, which is Python’s equivalent of an associative array, known as `env`; it stores values that control what Fabric does.
+
+`env.hosts` is a list of servers on which you want to run Fabric tasks. If your network is 192.168.0.0 and you wish to manage hosts 192.168.0.2 and 192.168.0.6 with your fabfile, you could configure env.hosts as follows:
+
+```
+#!/usr/bin/env python
+from fabric.api import env
+env.hosts = [ '192.168.0.2', '192.168.0.6' ]
+```
+
+The above lines of code only specify the hosts on which you will run Fabric tasks, but do nothing more. Next you can define some tasks; Fabric provides a set of functions which you can use to interact with your remote machines.
+
+Although there are many functions, the most commonly used are:
+
+- run – which runs a shell command on a remote machine.
+- local – which runs a command on the local machine.
+- sudo – which runs a shell command on a remote machine, with root privileges.
+- get – which downloads one or more files from a remote machine.
+- put – which uploads one or more files to a remote machine.
+
+Example 3: To echo a message on multiple machines, create a fabfile.py such as the one below.
+
+```
+#!/usr/bin/env python
+from fabric.api import env, run
+env.hosts = ['192.168.0.2','192.168.0.6']
+def echo():
+    run("echo -n 'Hello, you are tuned to Tecmint ' ")
+```
+
+To execute the tasks, run the following command:
+
+```
+# fab echo
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabrick-Automate-Linux-Tasks.gif)
+>Fabric: Automate Linux Tasks on Remote Linux
+
+Example 4: You can improve the fabfile.py which you created earlier, which executed the uptime command on the local machine, so that it runs the uptime command and also checks disk usage using the df command on multiple remote machines as follows:
+
+```
+#!/usr/bin/env python
+from fabric.api import env, run
+env.hosts = ['192.168.0.2','192.168.0.6']
+def uptime():
+    run('uptime')
+def disk_space():
+    run('df -h')
+```
+
+Save the file and run the following command:
+
+```
+# fab uptime
+# fab disk_space
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Run-Multiple-Commands-on-Multiple-Linux-Systems.gif)
+>Fabric: Automate Tasks on Multiple Linux Systems
+
+#### Automatically Deploy LAMP Stack on Remote Linux Server
+
+Example 5: Let us look at an example to deploy LAMP (Linux, Apache, MySQL/MariaDB and PHP) server on a remote Linux server.
+
+We shall write a function that will allow LAMP to be installed remotely using root privileges.
+
+##### For RHEL/CentOS and Fedora
+
+```
+#!/usr/bin/env python
+from fabric.api import env, sudo
+env.hosts = ['192.168.0.2','192.168.0.6']
+def deploy_lamp():
+    sudo("yum install -y httpd mariadb-server php php-mysql")
+```
+
+##### For Debian/Ubuntu and Linux Mint
+
+```
+#!/usr/bin/env python
+from fabric.api import env, sudo
+env.hosts = ['192.168.0.2','192.168.0.6']
+def deploy_lamp():
+    sudo("apt-get install -y -q apache2 mysql-server libapache2-mod-php5 php5-mysql")
+```
+
+Save the file and run the following command:
+
+```
+# fab deploy_lamp
+```
+
+Note: Due to large output, it’s not possible for us to create a screencast (animated gif) for this example.
+
+Now you are able to [automate Linux server management tasks][2] using Fabric, its features, and the examples given above…
+
+#### Some Useful Options to Use with Fabric
+
+- You can run fab --help to view help information and a long list of available command line options.
+- An important option is --fabfile=PATH, which helps you to specify a different python module file to import other than fabfile.py.
+- To specify a username to use when connecting to remote hosts, use the --user=USER option.
+- To use a password for authentication and/or sudo, use the --password=PASSWORD option.
+- To print detailed info about command NAME, use the --display=NAME option.
+- To choose the listing format (choices: short, normal, nested), use the --list-format=FORMAT option.
+- To print the list of possible commands and exit, include the --list option.
+- You can specify the location of the config file to use with the --config=PATH option.
+- To display colored error output, use --colorize-errors.
+- To view the program’s version number and exit, use the --version option.
+
+### Summary
+
+Fabric is a powerful tool, is well documented, and is easy for newbies to use. You can read the full documentation to get more understanding of it. If you have any information to add, or in case of any errors you encounter during installation and usage, you can leave a comment and we shall find ways to fix them.
+
+Reference: [Fabric documentation][3]
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/automating-linux-system-administration-tasks/
+
+作者:[Aaron Kili ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
+[1]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
+[2]: http://www.tecmint.com/use-ansible-playbooks-to-automate-complex-tasks-on-multiple-linux-servers/
+[3]: http://docs.fabfile.org/en/1.4.0/usage/env.html
+
+
+
+
From 188bf1cf67c7a80f54b7f15028963dfdf9f5f278 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sun, 14 Aug 2016 01:38:38 +0800
Subject: [PATCH 408/471] =?UTF-8?q?20160814-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 sources/tech/ubuntu vs ubuntu on windows.md | 34 +++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 sources/tech/ubuntu vs ubuntu on windows.md

diff --git a/sources/tech/ubuntu vs ubuntu on windows.md b/sources/tech/ubuntu vs ubuntu on windows.md
new file mode 100644
index 0000000000..3074c5fd5b
--- /dev/null
+++ b/sources/tech/ubuntu vs ubuntu on windows.md
@@ -0,0 +1,34 @@
+Ubuntu 14.04/16.04 vs. Ubuntu Bash On Windows 10 Anniversary Performance
+===========================
+
+When Microsoft and Canonical brought [Bash and Ubuntu's user-space to Windows 10][1] earlier this year, I ran some preliminary [benchmarks of Ubuntu on Windows 10 versus a native Ubuntu installation][2] on the same hardware.
Now that this "Windows Subsystem for Linux" is part of the recent Windows 10 Anniversary Update, I've carried out some fresh benchmarks of Ubuntu running atop Windows 10 compared to Ubuntu running bare metal.
+
+![](http://www.phoronix.net/image.php?id=windows10-anv-wsl&image=windows_wsl_1_med)
+
+The Windows Subsystem for Linux testing was done with the Windows 10 Anniversary Update plus all available system updates as of last week. The default Ubuntu user-space on Windows 10 continues to be Ubuntu 14.04 LTS, but it worked out to upgrade to Ubuntu 16.04 LTS. So at first I carried out the benchmarks in the 14.04-based stock environment on Windows, followed by upgrading the user-space to Ubuntu 16.04 LTS and then repeating the tests. After all of the Windows-based testing was completed, I did clean installs of Ubuntu 14.04.5 and Ubuntu 16.04 LTS on the same system to see how the performance compares.
+
+![](http://www.phoronix.net/image.php?id=windows10-anv-wsl&image=windows_wsl_2_med)
+
+An Intel Core i5 6600K Skylake system with 16GB of RAM and a 256GB Toshiba SSD was used during the testing process. Each OS was left at its default settings/packages during the testing process.
+
+
+![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=09989b3&p=2)
+>Click to enlarge
+
+
+The testing of Ubuntu/Bash on Windows and the native Ubuntu Linux installations was carried out in a fully-automated and reproducible manner using the open-source Phoronix Test Suite benchmarking software.
+ +-------------------------------------------------------------------------------- + +via: https://www.phoronix.com/scan.php?page=article&item=windows10-anv-wsl&num=1 + +作者:[Michael Larabel][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.michaellarabel.com/ +[1]: http://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-User-Space-On-Win10 +[2]: http://www.phoronix.com/scan.php?page=article&item=windows-10-lxcore&num=1 + From 2c0762ee57a25a67e03d45c4d0c8a479e1854b48 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 14 Aug 2016 01:42:27 +0800 Subject: [PATCH 409/471] =?UTF-8?q?20160814-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...uring IT productivity is so challenging.md | 39 +++++++++++++++++++ 1 file changed, 39 insertions(+) create mode 100644 sources/talk/20160808 Why measuring IT productivity is so challenging.md diff --git a/sources/talk/20160808 Why measuring IT productivity is so challenging.md b/sources/talk/20160808 Why measuring IT productivity is so challenging.md new file mode 100644 index 0000000000..a0a29c57ae --- /dev/null +++ b/sources/talk/20160808 Why measuring IT productivity is so challenging.md @@ -0,0 +1,39 @@ +Why measuring IT productivity is so challenging +=========================== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_6.png?itok=JV-zSor3) + +In some professions, there are metrics that one can use to determine an individual’s productivity. If you’re a widget maker, for instance, you might be measured on how many widgets you can make in a month. If you work in a call center – how many calls did you answer, and what’s your average handle time? 
These are over-simplified examples, but even if you're a doctor, you may be measured on how many operations you perform, or how many patients you can see in a month. Whether or not these are the right metrics, they offer a general picture of how an individual performed within a set timeframe. + +In the case of technology, however, it becomes almost impossible to measure a person’s productivity because there is so much variability. For example, it may be tempting to measure a developer's time by lines of code built. But, depending on the coding language, one line of code in one language versus another might be significantly more or less time consuming or difficult. + +Has it always been this nuanced? Many years ago, you might have heard about or experienced IT measurement in terms of function points. These measurements were about the critical features that developers were able to create. But that, too, is becoming harder to do in today’s environment, where developers are often encapsulating logic that may already exist, such as integration of function points through a vendor. This makes it harder to measure productivity based simply on the number of function points built. + +These two examples shed light on why CIOs sometimes struggle when we talk to our peers about IT productivity. Consider this hypothetical conversation: + +>IT leader: “Wow, I think this developer is great.” +>HR: “Really? What do they do?” +>IT leader: “They built this excellent application.” +>HR: “Well, are they better than the other developer who built ten applications?” +>IT leader: “That depends on what you mean by 'better.'” + +Typically, when in the midst of a conversation like the above, there is so much subjectivity involved that it's difficult to answer the question. This is just the tip of the iceberg when it comes to measuring IT performance in a meaningful way. 
And it doesn't just make conversations harder – it makes it harder for CIOs to showcase the value of their organization to the business. + +This certainly isn't a new problem. I’ve been trying to figure this out for the last 30 years, and I’ve mostly come to the conclusion that we really shouldn’t bother with productivity – we'll never get there. + +I believe we need to change the conversation and stop trying to speak in terms of throughput and cost and productivity but instead, focus on measuring the overall business value of IT. Again, it won't be easy. Business value realization is a hard thing to do. But if CIOs can partner with the business to figure that out, then attributing real value can become more of a science than an art form. + + + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2016/8/why-measuring-it-productivity-so-challenging + +作者:[Anil Cheriyan][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://enterprisersproject.com/user/anil-cheriyan + From d1998f572acbffce3946e07ba1fa6bf6bc2ac1fc Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 14 Aug 2016 01:47:46 +0800 Subject: [PATCH 410/471] =?UTF-8?q?20160814-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0808 How to cache static files on nginx.md | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 sources/tech/20160808 How to cache static files on nginx.md diff --git a/sources/tech/20160808 How to cache static files on nginx.md b/sources/tech/20160808 How to cache static files on nginx.md new file mode 100644 index 0000000000..380d567852 --- /dev/null +++ b/sources/tech/20160808 How to cache static files on nginx.md @@ -0,0 +1,79 @@ +How to cache static files on nginx 
+===========================
+
+This tutorial explains how you can configure nginx to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files (such as images, CSS and JavaScript files) to a date in the future so that these files will be cached by your visitors' browsers. This saves bandwidth and makes your web site appear faster (if a user visits your site for a second time, static files will be fetched from the browser cache).
+
+### 1 Preliminary Note
+
+I'm assuming you have a working nginx setup, e.g. as shown in this tutorial: [Installing Nginx with PHP 7 and MySQL 5.7 (LEMP) on Ubuntu 16.04 LTS][1]
+
+### 2 Configuring nginx
+
+The Expires HTTP header can be set with the help of the [expires][2] directive, which can be placed inside http {}, server {}, location {}, or an if statement inside a location {} block. Usually you will use it in a location block for your static files, e.g. as follows:
+
+```
+location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
+    expires 365d;
+}
+```
+
+In the above example, all `.jpg`, `.jpeg`, `.png`, `.gif`, `.ico`, `.css`, and `.js` files get an Expires header with a date 365 days in the future from the browser access time. Therefore, you should make sure that the location {} block really only matches static files that can be cached by browsers.
+
+Reload nginx after your changes:
+
+```
+/etc/init.d/nginx reload
+```
+
+You can use the following time settings with the expires directive:
+
+- off means that the Expires and Cache-Control headers will not be modified.
+- epoch sets the Expires header to 1 January, 1970 00:00:01 GMT.
+- max sets the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years.
+- A time without an @ prefix means an expiry time relative to the browser access time. A negative time can be specified, which sets the Cache-Control header to no-cache. 
Example: expires 10d; or expires 14w3d;
+- A time with an @ prefix specifies an absolute time-of-day expiry, written in either the form Hh or Hh:Mm, where H ranges from 0 to 24, and M ranges from 0 to 59. Example: expires @15:34;
+
+You can use the following time units:
+
+- ms: milliseconds
+- s: seconds
+- m: minutes
+- h: hours
+- d: days
+- w: weeks
+- M: months (30 days)
+- y: years (365 days)
+
+Examples: 1h30m for one hour thirty minutes, 1y6M for one year and six months.
+
+Also note that if you use a far future Expires header you have to change the component's filename whenever the component changes. Therefore it's a good idea to version your files. For example, if you have a file javascript.js and want to modify it, you should add a version number to the file name of the modified file (e.g. javascript-1.1.js) so that browsers have to download it. If you don't change the file name, browsers will load the (old) file from their cache.
+
+Instead of basing the Expires header on the access time of the browser (e.g. expires 10d;), you can also base it on the modification date of a file (please note that this works only for real files that are stored on the hard drive!) by using the modified keyword, which precedes the time:
+
+```
+expires modified 10d;
+```
+
+### 3 Testing
+
+To test if your configuration works, you can use the Network analysis function of the Developer tools in the Firefox Browser and access a static file through Firefox (e.g. an image). 
In the Header output, you should now see an Expires header and a Cache-Control header with a max-age directive (max-age contains a value in seconds, for example 31536000 is one year in the future): + +![](https://www.howtoforge.com/images/how-to-cache-static-files-on-nginx/accept_headers.png) + +### 4 Links + +nginx HttpHeadersModule: + + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-cache-static-files-on-nginx/ + +作者:[Falko Timme][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.howtoforge.com/tutorial/how-to-cache-static-files-on-nginx/ +[1]: https://www.howtoforge.com/tutorial/installing-nginx-with-php7-fpm-and-mysql-on-ubuntu-16.04-lts-lemp/ +[2]:http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires From 3620af6dfbba4b1cfbbcc5e26f9cde10f89d5911 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 14 Aug 2016 01:58:00 +0800 Subject: [PATCH 411/471] =?UTF-8?q?20160814-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rtTrue is truthy - assertFalse is falsy.md | 176 ++++++++++++++++++ 1 file changed, 176 insertions(+) create mode 100644 sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md diff --git a/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md b/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md new file mode 100644 index 0000000000..bfed46fb0e --- /dev/null +++ b/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md @@ -0,0 +1,176 @@ +Python unittest: assertTrue is truthy, assertFalse is falsy +=========================== + +In this post, I explore the differences between the unittest boolean assert methods 
assertTrue and assertFalse and the assertIs identity assertion.
+
+### Definitions
+
+Here’s what the [unittest module documentation][1] currently notes about assertTrue and assertFalse, with the appropriate code highlighted:
+
+
+>assertTrue(expr, msg=None)
+
+>assertFalse(expr, msg=None)
+
+>>Test that expr is true (or false).
+
+>>Note that this is equivalent to
+
+>>bool(expr) is True
+
+>>and not to
+
+>>expr is True
+
+>>(use assertIs(expr, True) for the latter).
+
+[Mozilla Developer Network defines truthy][2] as:
+
+> A value that translates to true when evaluated in a Boolean context.
+
+In Python this is equivalent to:
+
+```
+bool(expr) is True
+```
+
+Which exactly matches what assertTrue is testing for.
+
+Therefore the documentation already indicates assertTrue is truthy and assertFalse is falsy. These assertion methods are creating a bool from the received value and then evaluating it. It also suggests that we really shouldn’t use assertTrue or assertFalse for very much at all.
+
+### What does this mean in practice?
+
+Let’s use a very simple example - a function called always_true that returns True. We’ll write the tests for it and then make changes to the code and see how the tests perform.
+
+Starting with the tests, we’ll have two tests. One is “loose”, using assertTrue to test for a truthy value. The other is “strict”, using assertIs as recommended by the documentation:
+
+```
+import unittest
+
+from func import always_true
+
+
+class TestAlwaysTrue(unittest.TestCase):
+
+    def test_assertTrue(self):
+        """
+        always_true returns a truthy value
+        """
+        result = always_true()
+
+        self.assertTrue(result)
+
+    def test_assertIs(self):
+        """
+        always_true returns True
+        """
+        result = always_true()
+
+        self.assertIs(result, True)
+```
+
+Here’s the code for our simple function in func.py:
+
+```
+def always_true():
+    """
+    I'm always True. 
+ + Returns: + bool: True + """ + return True +``` + +When run, everything passes: + +``` +always_true returns True ... ok +always_true returns a truthy value ... ok + +---------------------------------------------------------------------- +Ran 2 tests in 0.004s + +OK +``` + +Happy days! + +Now, “someone” changes always_true to the following: + +``` +def always_true(): + """ + I'm always True. + + Returns: + bool: True + """ + return 'True' +``` + +Instead of returning True (boolean), it’s now returning string 'True'. (Of course this “someone” hasn’t updated the docstring - we’ll raise a ticket later.) + +This time the result is not so happy: + +``` +always_true returns True ... FAIL +always_true returns a truthy value ... ok + +====================================================================== +FAIL: always_true returns True +---------------------------------------------------------------------- +Traceback (most recent call last): + File "/tmp/assertttt/test.py", line 22, in test_is_true + self.assertIs(result, True) +AssertionError: 'True' is not True + +---------------------------------------------------------------------- +Ran 2 tests in 0.004s + +FAILED (failures=1) +``` + +Only one test failed! This means assertTrue gave us a false-positive. It passed when it shouldn’t have. It’s lucky we wrote the second test with assertIs. + +Therefore, just as we learned from the manual, to keep the functionality of always_true pinned tightly the stricter assertIs should be used rather than assertTrue. + +### Use assertion helpers + +Writing out assertIs to test for True and False values is not too lengthy. However, if you have a project in which you often need to check that values are exactly True or exactly False, then you can make yourself the assertIsTrue and assertIsFalse assertion helpers. + +This doesn’t save a particularly large amount of code, but it does improve readability in my opinion. 
+
+```
+def assertIsTrue(self, value):
+    self.assertIs(value, True)
+
+def assertIsFalse(self, value):
+    self.assertIs(value, False)
+```
+
+### Summary
+
+In general, my recommendation is to keep tests as tight as possible. If you mean to test for the exact value True or False, then follow the [documentation][3] and use assertIs. Do not use assertTrue or assertFalse unless you really have to.
+
+If you are looking at a function that can return various types, for example, sometimes bool and sometimes int, then consider refactoring. This is a code smell, and in Python a False value used to signal an error would probably be better raised as an exception.
+
+In addition, if you really need to assert that the return value from a function under test is truthy, there might be a second code smell - is your code correctly encapsulated? If assertTrue and assertFalse are asserting that function return values will trigger if statements correctly, then it might be worth sense-checking that you’ve encapsulated everything you intended in the appropriate place. Maybe those if statements should be encapsulated within the function under test.
+
+Happy testing!
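
As a postscript, the gap between the two assertions is easy to reproduce interactively, outside of any test runner. The following is just an illustrative sketch using nothing but the standard library's unittest module (on Python 3 a bare TestCase can be instantiated directly to borrow its assertion methods; in a real suite you would subclass it as shown above):

```python
import unittest

# A bare TestCase instance gives us access to the assertion methods.
tc = unittest.TestCase()

# bool('True') is True, so the truthy assertion happily passes for a string:
tc.assertTrue('True')

# ...but the string 'True' is not the object True, so the identity
# assertion correctly rejects it:
try:
    tc.assertIs('True', True)
    identity_check_failed = False
except AssertionError:
    identity_check_failed = True

print(identity_check_failed)  # True: assertIs caught what assertTrue let through
```

This is exactly the false-positive scenario from the always_true example, condensed to a few lines.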
+ + + +-------------------------------------------------------------------------------- + +via: http://jamescooke.info/python-unittest-asserttrue-is-truthy-assertfalse-is-falsy.html + +作者:[James Cooke][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://jamescooke.info/pages/hello-my-name-is-james.html +[1]:https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTrue +[2]:https://developer.mozilla.org/en-US/docs/Glossary/Truthy +[3]:https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTrue From f03c4a265481ba645e1061ffa240a47b19d2fa06 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 14 Aug 2016 02:12:10 +0800 Subject: [PATCH 412/471] =?UTF-8?q?20160814-6=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/talk/20160808 What is open source.md | 121 +++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 sources/talk/20160808 What is open source.md diff --git a/sources/talk/20160808 What is open source.md b/sources/talk/20160808 What is open source.md new file mode 100644 index 0000000000..1ff197dee4 --- /dev/null +++ b/sources/talk/20160808 What is open source.md @@ -0,0 +1,121 @@ +What is open source +=========================== + +The term "open source" refers to something people can modify and share because its design is publicly accessible. + +The term originated in the context of software development to designate a specific approach to creating computer programs. Today, however, "open source" designates a broader set of values—what we call "[the open source way][1]." Open source projects, products, or initiatives embrace and celebrate principles of open exchange, collaborative participation, rapid prototyping, transparency, meritocracy, and community-oriented development. + +### What is open source software? 
+ +Open source software is software with source code that anyone can inspect, modify, and enhance. + +"Source code" is the part of software that most computer users don't ever see; it's the code computer programmers can manipulate to change how a piece of software—a "program" or "application"—works. Programmers who have access to a computer program's source code can improve that program by adding features to it or fixing parts that don't always work correctly. + +### What's the difference between open source software and other types of software? + +Some software has source code that only the person, team, or organization who created it—and maintains exclusive control over it—can modify. People call this kind of software "proprietary" or "closed source" software. + +Only the original authors of proprietary software can legally copy, inspect, and alter that software. And in order to use proprietary software, computer users must agree (usually by signing a license displayed the first time they run this software) that they will not do anything with the software that the software's authors have not expressly permitted. Microsoft Office and Adobe Photoshop are examples of proprietary software. + +Open source software is different. Its authors [make its source code available][2] to others who would like to view that code, copy it, learn from it, alter it, or share it. [LibreOffice][3] and the [GNU Image Manipulation Program][4] are examples of open source software. + +As they do with proprietary software, users must accept the terms of a [license][5] when they use open source software—but the legal terms of open source licenses differ dramatically from those of proprietary licenses. + +Open source licenses affect the way people can [use, study, modify, and distribute][6] software. In general, open source licenses grant computer users [permission to use open source software for any purpose they wish][7]. 
Some open source licenses—what some people call "copyleft" licenses—stipulate that anyone who releases a modified open source program must also release the source code for that program alongside it. Moreover, [some open source licenses][8] stipulate that anyone who alters and shares a program with others must also share that program's source code without charging a licensing fee for it.
+
+By design, open source software licenses promote collaboration and sharing because they permit other people to make modifications to source code and incorporate those changes into their own projects. They encourage computer programmers to access, view, and modify open source software whenever they like, as long as they let others do the same when they share their work.
+
+### Is open source software only important to computer programmers?
+
+No. Open source technology and open source thinking both benefit programmers and non-programmers.
+
+Because early inventors built much of the Internet itself on open source technologies—like [the Linux operating system][9] and the [Apache Web server][10] application—anyone using the Internet today benefits from open source software.
+
+Every time computer users view web pages, check email, chat with friends, stream music online, or play multiplayer video games, their computers, mobile phones, or gaming consoles connect to a global network of computers using open source software to route and transmit their data to the "local" devices they have in front of them. The computers that do all this important work are typically located in faraway places that users don't actually see or can't physically access—which is why some people call these computers "remote computers."
+
+More and more, people rely on remote computers when performing tasks they might otherwise perform on their local devices. For example, they may use online word processing, email management, and image editing software that they don't install and run on their personal computers. 
Instead, they simply access these programs on remote computers by using a Web browser or mobile phone application. When they do this, they're engaged in "remote computing." + +Some people call remote computing "cloud computing," because it involves activities (like storing files, sharing photos, or watching videos) that incorporate not only local devices but also a global network of remote computers that form an "atmosphere" around them. + +Cloud computing is an increasingly important aspect of everyday life with Internet-connected devices. Some cloud computing applications, like Google Apps, are proprietary. Others, like ownCloud and Nextcloud, are open source. + +Cloud computing applications run "on top" of additional software that helps them operate smoothly and efficiently, so people will often say that software running "underneath" cloud computing applications acts as a "platform" for those applications. Cloud computing platforms can be open source or closed source. OpenStack is an example of an open source cloud computing platform. + +### Why do people prefer using open source software? + +People prefer open source software to proprietary software for a number of reasons, including: + +Control. Many people prefer open source software because they [have more control][11] over that kind of software. They can examine the code to make sure it's not doing anything they don't want it to do, and they can change parts of it they don't like. Users who aren't programmers also benefit from open source software, because they can use this software for any purpose they wish—not merely the way someone else thinks they should. + +Training. Other people like open source software because it helps them [become better programmers][12]. Because open source code is publicly accessible, students can easily study it as they learn to make better software. Students can also share their work with others, inviting comment and critique, as they develop their skills. 
When people discover mistakes in programs' source code, they can share those mistakes with others to help them avoid making those same mistakes themselves. + +Security. Some people prefer open source software because they consider it more [secure][13] and stable than proprietary software. Because anyone can view and modify open source software, someone might spot and correct errors or omissions that a program's original authors might have missed. And because so many programmers can work on a piece of open source software without asking for permission from original authors, they can fix, update, and upgrade open source software more [quickly][14] than they can proprietary software. + +Stability. Many users prefer open source software to proprietary software for important, long-term projects. Because programmers [publicly distribute][15] the source code for open source software, users relying on that software for critical tasks can be sure their tools won't disappear or fall into disrepair if their original creators stop working on them. Additionally, open source software tends to both incorporate and operate according to open standards. + +### Doesn't "open source" just mean something is free of charge? + +No. This is a [common misconception][16] about what "open source" implies, and the concept's implications are [not only economic][17]. + +Open source software programmers can charge money for the open source software they create or to which they contribute. But in some cases, because an open source license might require them to release their source code when they sell software to others, some programmers find that charging users money for software services and support (rather than for the software itself) is more lucrative. This way, their software remains free of charge, and they [make money helping others][18] install, use, and troubleshoot it. 
+
+While some open source software may be free of charge, skill in programming and troubleshooting open source software can be [quite valuable][19]. Many employers specifically seek to [hire programmers with experience][20] working on open source software.
+
+### What is open source "beyond software"?
+
+At Opensource.com, we like to say that we're interested in the ways open source values and principles apply to the world beyond software. We like to think of open source as not only a way to develop and license computer software, but also an attitude.
+
+Approaching all aspects of life "[the open source way][21]" means expressing a willingness to share, collaborating with others in ways that are transparent (so that others can watch and join too), embracing failure as a means of improving, and expecting—even encouraging—everyone else to do the same.
+
+It also means committing to playing an active role in improving the world, which is possible only when [everyone has access][22] to the way that world is designed.
+
+The world is full of "source code"—[blueprints][23], [recipes][24], [rules][25]—that guide and shape the way we think and act in it. We believe this underlying code (whatever its form) should be open, accessible, and shared—so many people can have a hand in altering it for the better.
+
+Here, we tell stories about the impact of open source values on all areas of life—[science][26], [education][27], [government][28], [manufacturing][29], health, law, and [organizational dynamics][30]. We're a community committed to telling others how the open source way is the best way, because a love of open source is just like anything else: it's better when it's shared.
+
+### Where can I learn more about open source?
+
+We've compiled several resources designed to help you learn more about open source. We recommend you read our [open source FAQs, how-to guides, and tutorials][31] to get started. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/resources/what-open-source + +作者:[opensource.com][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: opensource.com +[1]:https://opensource.com/open-source-way +[2]:https://opensource.com/business/13/5/open-source-your-code +[3]:https://www.libreoffice.org/ +[4]:http://www.gimp.org/ +[5]:https://opensource.com/law/13/1/which-open-source-software-license-should-i-use +[6]:https://opensource.com/law/10/10/license-compliance-not-problem-open-source-users +[7]:https://opensource.org/docs/osd +[8]:https://opensource.com/law/13/5/does-your-code-need-license +[9]:https://opensource.com/resources/what-is-linux +[10]:http://httpd.apache.org/ +[11]:https://opensource.com/life/13/5/tumblr-open-publishing +[12]:https://opensource.com/life/13/6/learning-program-open-source-way +[13]:https://opensource.com/government/10/9/scap-computer-security-rest-us +[14]:https://opensource.com/government/13/2/bug-fix-day +[15]:https://opensource.com/life/12/9/should-we-develop-open-source-openly +[16]:https://opensource.com/education/12/7/clearing-open-source-misconceptions +[17]:https://opensource.com/open-organization/16/5/appreciating-full-power-open +[18]:https://opensource.com/business/14/7/making-your-product-free-and-open-source-crazy-talk +[19]:https://opensource.com/business/16/2/add-open-source-to-your-resume +[20]:https://opensource.com/business/16/5/2016-open-source-jobs-report +[21]:https://opensource.com/open-source-way +[22]:https://opensource.com/resources/what-open-access +[23]:https://opensource.com/life/11/6/architecture-open-source-applications-learn-those-you +[24]:https://opensource.com/life/12/6/open-source-like-sharing-recipe +[25]:https://opensource.com/life/12/4/day-my-mind-became-open-sourced 
+[26]:https://opensource.com/resources/open-science +[27]:https://opensource.com/resources/what-open-education +[28]:https://opensource.com/resources/open-government +[29]:https://opensource.com/resources/what-open-hardware +[30]:https://opensource.com/resources/what-open-organization +[31]:https://opensource.com/resources From 18a5651b30b4808a96368b6c88b744bb37837315 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 14 Aug 2016 09:35:50 +0800 Subject: [PATCH 413/471] =?UTF-8?q?PUB:20160604=20Smem=20=E2=80=93=20Repor?= =?UTF-8?q?ts=20Memory=20Consumption=20Per-Process=20and=20Per-User=20Basi?= =?UTF-8?q?s=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @dongfengweixiao --- ...n Per-Process and Per-User Basis in Linux.md | 89 ++++++++++--------- 1 file changed, 46 insertions(+), 43 deletions(-) rename {translated/tech => published}/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md (86%) diff --git a/translated/tech/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md b/published/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md similarity index 86% rename from translated/tech/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md rename to published/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md index befcdb5d31..1554186fbf 100644 --- a/translated/tech/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md +++ b/published/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md @@ -1,45 +1,45 @@ -Smem – Linux 下基于进程和用户的内存占用报告程序 +Smem – Linux 下基于进程和用户的内存占用报告 =========================================================================== +Linux 系统的内存管理工作中,内存使用情况的监控是十分重要的,在各种 Linux 发行版上你会找到许多这种工具。它们的工作方式多种多样,在这里,我们将会介绍如何安装和使用这样的一个名为 SMEM 的工具软件。 -Linux 系统的内存管理工作中,内存使用情况的监控是十分重要的,不同的 Linux 
发行版可能会提供不同的工具。但是它们的工作方式多种多样,这里,我们将会介绍如何安装和使用这样的一个名为 SMEM 的工具软件。 - -Smem 是一款命令行下的内存使用情况报告工具。和其它传统的内存报告工具不同个,它仅做这一件事情——报告 PPS(实际使用的物理内存[比例分配共享库占用的内存]),这种内存使用量表示方法对于那些在虚拟内存中的应用和库更有意义。 +Smem 是一款命令行下的内存使用情况报告工具,它能够给用户提供 Linux 系统下的内存使用的多种报告。和其它传统的内存报告工具不同的是,它有个独特的功能,可以报告 PSS(按比例占用大小:Proportional Set Size),这种内存使用量表示方法对于那些在虚拟内存中的应用和库更有意义。 ![](http://www.tecmint.com/wp-content/uploads/2016/06/Smem-Linux-Memory-Reporting-Tool.png) ->Smem – Linux 内存报告工具 -已有的传统工具会将目光主要集中于读取 RSS(实际使用物理内存[包含共享库占用的内存]),这种方法对于恒量那些使用物理内存方案的使用情况来说是标准方法,但是应用程序往往会高估内存的使用情况。 +*Smem – Linux 内存报告工具* -PSS 从另一个侧面,为那些使用虚拟内存方案的应用和库提供了给出了确定内存“公评分担”的合理措施。 +已有的传统工具会将目光主要集中于读取 RSS(实际占用大小:Resident Set Size),这种方法是以物理内存方案来衡量使用情况的标准方法,但是往往高估了应用程序的内存的使用情况。 -你可以 [阅读此指南了解 (关于内存的 RSS 和 PSS)][1] Linux 系统中的内存占用。 +PSS 从另一个侧面,通过判定在虚拟内存中的应用和库所使用的“合理分享”的内存,来给出更可信的衡量结果。 + +你可以阅读此[指南 (关于内存的 RSS 和 PSS)][1]了解 Linux 系统中的内存占用,不过现在让我们继续看看 smem 的特点。 ### Smem 这一工具的特点 - 系统概览列表 -- 以进程,映射和用户来显示或者是过滤 +- 以进程、映射和用户来显示或者是过滤 - 从 /proc 文件系统中得到数据 -- 从多个数据源配置显示条目 -- 可配置输出单元和百分比 -- 易于配置列表标题和汇总 +- 从多个数据源配置显示的条目 +- 可配置输出单位和百分比 +- 易于配置列表表头和汇总 - 从镜像文件夹或者是压缩的 tar 文件中获得数据快照 - 内置的图表生成机制 -- 在嵌入式系统中使用轻量级的捕获工具 +- 轻量级的捕获工具,可用于嵌入式系统 ### 如何安装 Smem - Linux 下的内存使用情况报告工具 安装之前,需要确保满足以下的条件: -- 现代内存 (版本号高于 2.6.27) +- 现代内核 (版本号高于 2.6.27) - 较新的 Python 版本 (2.4 及以后版本) - 可选的 [matplotlib][2] 库用于生成图表 -对于当今的大多数的 Linux 发行版而言,内核版本和 Python 的版本都能够 满足需要,所以仅需要为生成良好的图表安装 matplotlib 库。 +对于当今的大多数的 Linux 发行版而言,内核版本和 Python 的版本都能够满足需要,所以仅需要为生成良好的图表安装 matplotlib 库。 #### RHEL, CentOS 和 Fedora -首先启用 [EPEL (Extra Packages for Enterprise Linux)][3] 软件源然后按照下列步骤操作: +首先启用 [EPEL (Extra Packages for Enterprise Linux)][3] 软件源,然后按照下列步骤操作: ``` # yum install smem python-matplotlib python-tk @@ -59,7 +59,7 @@ $ sudo apt-get install smem python-matplotlib python-tk #### Arch Linux -使用此 [AUR repository][4]。 +使用此 [AUR 仓库][4]。 ### 如何使用 Smem – Linux 下的内存使用情况报告工具 @@ -69,7 +69,7 @@ $ sudo apt-get install smem python-matplotlib python-tk $ sudo smem ``` -监视 Linux 系统中的内存使用情况 +*监视 Linux 系统中的内存使用情况* ``` PID 
User Command Swap USS PSS RSS @@ -108,7 +108,7 @@ $ sudo smem .... ``` -当常规用户运行 smem,将会显示由用户启用的进程的占用情况,其中进程按照 PSS 的值升序排列。 +当普通用户运行 smem,将会显示由该用户启用的进程的占用情况,其中进程按照 PSS 的值升序排列。 下面的输出为用户 “aaronkilik” 启用的进程的使用情况: @@ -116,7 +116,7 @@ $ sudo smem $ smem ``` -监视 Linux 系统中的内存使用情况 +*监视 Linux 系统中的内存使用情况* ``` PID User Command Swap USS PSS RSS @@ -156,12 +156,13 @@ $ smem ... ``` -使用 smem 是还有一些参数可以选用,例如当参看整个系统的内存占用情况,运行以下的命令: +使用 smem 时还有一些参数可以选用,例如当查看整个系统的内存占用情况,运行以下的命令: ``` $ sudo smem -w ``` -监视 Linux 系统中的内存使用情况 + +*监视 Linux 系统中的内存使用情况* ``` Area Used Cache Noncache @@ -178,7 +179,7 @@ free memory 4424936 4424936 0 $ sudo smem -u ``` -Linux 下以用户为单位监控内存占用情况 +*Linux 下以用户为单位监控内存占用情况* ``` User Count Swap USS PSS RSS @@ -201,7 +202,7 @@ tecmint 64 0 1652888 1815699 2763112 $ sudo smem -m ``` -Linux 下以映射为单位监控内存占用情况 +*Linux 下以映射为单位监控内存占用情况* ``` Map PIDs AVGPSS PSS @@ -231,15 +232,15 @@ Map PIDs AVGPSS PSS .... ``` -还有其它的选项用于 smem 的输出,下面将会举两个例子。 +还有其它的选项可以筛选 smem 的输出,下面将会举两个例子。 -要按照用户名筛选输出的信息,调用 -u 或者是 --userfilter="regex" 选项,就像下面的命令这样: +要按照用户名筛选输出的信息,使用 -u 或者是 --userfilter="regex" 选项,就像下面的命令这样: ``` $ sudo smem -u ``` -按照用户报告内存使用情况 +*按照用户报告内存使用情况* ``` User Count Swap USS PSS RSS @@ -256,13 +257,13 @@ root 39 0 323804 353374 496552 tecmint 64 0 1708900 1871766 2819212 ``` -要按照进程名称筛选输出信息,调用 -P 或者是 --processfilter="regex" 选项,就像下面的命令这样: +要按照进程名称筛选输出信息,使用 -P 或者是 --processfilter="regex" 选项,就像下面的命令这样: ``` $ sudo smem --processfilter="firefox" ``` -按照进程名称报告内存使用情况 +*按照进程名称报告内存使用情况* ``` PID User Command Swap USS PSS RSS @@ -271,7 +272,7 @@ PID User Command Swap USS PSS RSS 4424 tecmint /usr/lib/firefox/firefox 0 931732 937590 961504 ``` -输出的格式有时候也很重要,smem 提供了一些参数帮助您格式化内存使用报告,我们将举出几个例子。 +输出的格式有时候也很重要,smem 提供了一些帮助您格式化内存使用报告的参数,我们将举出几个例子。 设置哪些列在报告中,使用 -c 或者是 --columns选项,就像下面的命令这样: @@ -279,7 +280,7 @@ PID User Command Swap USS PSS RSS $ sudo smem -c "name user pss rss" ``` -按列报告内存使用情况 +*按列报告内存使用情况* ``` Name User PSS RSS @@ -317,7 +318,7 @@ ssh-agent tecmint 485 992 $ sudo smem -p ``` -按百分比报告内存使用情况 
+*按百分比报告内存使用情况* ``` PID User Command Swap USS PSS RSS @@ -345,13 +346,13 @@ $ sudo smem -p .... ``` -下面的额命令将会在输出的最后输出一行汇总信息: +下面的命令将会在输出的最后输出一行汇总信息: ``` $ sudo smem -t ``` -报告内存占用合计 +*报告内存占用合计* ``` PID User Command Swap USS PSS RSS @@ -389,27 +390,29 @@ PID User Command Swap USS PSS RSS 比如,你可以生成一张进程的 PSS 和 RSS 值的条状图。在下面的例子中,我们会生成属于 root 用户的进程的内存占用图。 -纵坐标为每一个进程的 PSS 和 RSS 值,横坐标为 root 用户的所有进程: +纵坐标为每一个进程的 PSS 和 RSS 值,横坐标为 root 用户的所有进程(的 ID): ``` $ sudo smem --userfilter="root" --bar pid -c"pss rss" ``` ![](http://www.tecmint.com/wp-content/uploads/2016/06/Linux-Memory-Usage-in-PSS-and-RSS-Values.png) ->Linux Memory Usage in PSS and RSS Values -也可以生成进程及其 PSS 和 RSS 占用量的饼状图。以下的命令将会输出一张 root 用户的所有进程的饼状。 +*Linux Memory Usage in PSS and RSS Values* -`--pie` name 意思为以各个进程名字为标签,`-s` 选项帮助以 PSS 的值排序。 +也可以生成进程及其 PSS 和 RSS 占用量的饼状图。以下的命令将会输出一张 root 用户的所有进程的饼状图。 + +`--pie` name 意思为以各个进程名字为标签,`-s` 选项用来以 PSS 的值排序。 ``` $ sudo smem --userfilter="root" --pie name -s pss ``` ![](http://www.tecmint.com/wp-content/uploads/2016/06/Linux-Memory-Consumption-by-Processes.png) ->Linux Memory Consumption by Processes -它们还提供了一些其它与 PSS 和 RSS 相关的字段用于图表的标签: +*Linux Memory Consumption by Processes* + +除了 PSS 和 RSS ,其它的字段也可以用于图表的标签: 假如需要获得帮助,非常简单,仅需要输入 `smem -h` 或者是浏览帮助页面。 @@ -420,16 +423,16 @@ $ sudo smem --userfilter="root" --pie name -s pss -------------------------------------------------------------------------------- -via: http://www.tecmint.com/smem-linux-memory-usage-per-process-per-user/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 +via: http://www.tecmint.com/smem-linux-memory-usage-per-process-per-user/ 作者:[Aaron Kili][a] 译者:[dongfengweixiao](https://github.com/dongfengweixiao) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: http://www.tecmint.com/author/aaronkili/ [1]: 
https://emilics.com/notebook/enblog/p871.html [2]: http://matplotlib.org/index.html -[3]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[3]: https://linux.cn/article-2324-1.html [4]: https://www.archlinux.org/packages/community/i686/smem/ From 5927329554915f9974102238ae56f3b33e6d4c40 Mon Sep 17 00:00:00 2001 From: Frank Zhang Date: Sun, 14 Aug 2016 14:40:40 +0800 Subject: [PATCH 414/471] [translated] 20160802 3 graphical tools for Git.md --- .../20160802 3 graphical tools for Git.md | 118 ------------------ .../20160802 3 graphical tools for Git.md | 117 +++++++++++++++++ 2 files changed, 117 insertions(+), 118 deletions(-) delete mode 100644 sources/tech/20160802 3 graphical tools for Git.md create mode 100644 translated/tech/20160802 3 graphical tools for Git.md diff --git a/sources/tech/20160802 3 graphical tools for Git.md b/sources/tech/20160802 3 graphical tools for Git.md deleted file mode 100644 index 579ec09fdc..0000000000 --- a/sources/tech/20160802 3 graphical tools for Git.md +++ /dev/null @@ -1,118 +0,0 @@ -zpl1025 -3 graphical tools for Git -============================= - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/BUSINESS_meritladder.png?itok=4CAH2wV0) - -In this article, we'll take a look at some convenience add-ons to help you integrate Git comfortably into your everyday workflow. - -I learned Git before many of these fancy interfaces existed, and my workflow is frequently text-based anyway, so most of the inbuilt conveniences of Git suit me pretty well. It is always best, in my opinion, to understand how Git works natively. However, it is always nice to have options, so these are some of the ways you can start using Git outside of the terminal. - -### Git in KDE Dolphin - -I am a KDE user, if not always within the Plasma desktop, then as my application layer in Fluxbox. Dolphin is an excellent file manager with lots of options and plenty of secret little features. 
Particularly useful are all the plugins people develop for it, one of which is a nearly-complete Git interface. Yes, you can manage your Git repositories natively from the comfort of your own desktop. - -But first, you'll need to make sure the add-ons are installed. Some distros come with a filled-to-the-brim KDE, while others give you just the basics, so if you don't see the Git options in the next few steps, search your repository for something like dolphin-extras or dolphin-plugins. - -To activate Git integration, go to the Settings menu in any Dolphin window and select Configure Dolphin. - -In the Configure Dolphin window, click on the Services icon in the left column. - -In the Services panel, scroll through the list of available plugins until you find Git. - -![](https://opensource.com/sites/default/files/4_dolphinconfig.jpg) - -Save your changes and close your Dolphin window. When you re-launch Dolphin, navigate to a Git repository and have a look around. Notice that all icons now have emblems: green boxes for committed files, solid green boxes for modified files, no icon for untracked files, and so on. - -Your right-click menu now has contextual Git options when invoked inside a Git repository. You can initiate a checkout, push or pull when clicking inside a Dolphin window, and you can even do a git add or git remove on your files. - -![](https://opensource.com/sites/default/files/4_dolphingit.jpg) - -You can't clone a repository or change remote paths in Dolphin, but will have to drop to a terminal, which is just an F4 away. - -Frankly, this feature of KDE is so kool [sic] that this article could just end here. The integration of Git in your native file manager makes working with Git almost transparent; everything you need to do just happens no matter what stage of the process you are in. Git in the terminal, and Git waiting for you when you switch to the GUI. It is perfection. - -But wait, there's more! 
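Since those last two operations come up constantly, here is a minimal sketch of the terminal side of that workflow — cloning a repository and changing its remote path, the two things the Dolphin plugin hands off to the shell. The repository paths under /tmp are placeholders standing in for a real hosted project:

```shell
#!/bin/sh
set -e
# A throwaway bare repository stands in for a remote host (placeholder path).
rm -rf /tmp/demo-remote.git /tmp/demo-clone
git init --bare --quiet /tmp/demo-remote.git

# Clone it -- the operation Dolphin's plugin cannot start from the GUI.
git clone --quiet /tmp/demo-remote.git /tmp/demo-clone

# Change the remote path after the fact, e.g. when a project moves hosts.
git -C /tmp/demo-clone remote set-url origin /tmp/new-remote.git
git -C /tmp/demo-clone remote get-url origin
```

Once back in Dolphin (F4 closes the terminal panel again), the clone behaves like any other tracked repository.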
- -### Sparkleshare - -From the other side of the desktop pond comes SparkleShare, a project that uses a file synchronization model ("like Dropbox!") that got started by some GNOME developers. It is not integrated into any specific part of GNOME, so you can use it on all platforms. - -If you run Linux, install SparkleShare from your software repository. Other operating systems should download from the SparkleShare website. You can safely ignore the instructions on the SparkleShare website, which are for setting up a SparkleShare server, which is not what we will do here. You certainly can set up a SparkleShare server if you want, but SparkleShare is compatible with any Git repository, so you don't need to create your own server. - -After it is installed, launch SparkleShare from your applications menu. Step through the setup wizard, which is two steps plus a brief tutorial, and optionally set SparkleShare as a startup item for your desktop. - -![](https://opensource.com/sites/default/files/4_sparklesetup.jpg) - -An orange SparkleShare directory is now in your system tray. Currently, SparkleShare is oblivious to anything on your computer, so you need to add a hosted project. - -To add a directory for SparkleShare to track, click the SparkleShare icon in your system tray and select Add Hosted Project. - -![](https://opensource.com/sites/default/files/4_sparklehost.jpg) - -SparkleShare can work with self-hosted Git projects, or projects hosted on public Git services like GitHub and Bitbucket. For full access, you'll probably need to use the Client ID that SparkleShare provides to you. This is an SSH key acting as the authentication token for the service you use for hosting, including your own Git server, which should also use SSH public key authentication rather than password login. Copy the Client ID into the authorized_hosts file of your Git user on your server, or into the SSH key panel of your Git host. 
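As a sketch of that last step on a self-hosted server, copying the client ID boils down to appending one public-key line to the Git user's key file. The paths and the key string below are placeholders (and note that on a stock OpenSSH server the file is named `~/.ssh/authorized_keys`):

```shell
#!/bin/sh
set -e
# Placeholder for the client ID that SparkleShare generates (a public key line).
echo "ssh-rsa AAAAB3NzaC1yc2E...placeholder sparkleshare-client" > /tmp/client_id.pub

# /tmp/git-home stands in for the Git user's home directory on the server.
mkdir -p /tmp/git-home/.ssh
rm -f /tmp/git-home/.ssh/authorized_keys
cat /tmp/client_id.pub >> /tmp/git-home/.ssh/authorized_keys

# sshd refuses keys with loose permissions, so lock the files down.
chmod 700 /tmp/git-home/.ssh
chmod 600 /tmp/git-home/.ssh/authorized_keys
grep -c sparkleshare-client /tmp/git-home/.ssh/authorized_keys   # prints 1
```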
- -After configuring the host you want to use, SparkleShare downloads the Git project, including, at your option, the commit history. Find the files in ~/SparkleShare. - -Unlike Dolphin's Git integration, SparkleShare is unnervingly invisible. When you make a change, it quietly syncs the change to your remote project. For many people, that is a huge benefit: all the power of Git with none of the maintenance. To me, it is unsettling, because I like to govern what I commit and which branch I use. - -SparkleShare may not be for everyone, but it is a powerful and simple Git solution that shows how different open source projects fit together in perfect harmony to create something unique. - -### Git-cola - -Yet another model of working with Git repositories is less native and more of a monitoring approach; rather than using an integrated application to interact directly with your Git project, you can use a desktop client to monitor changes in your project and deal with each change in whatever way you choose. An advantage to this approach is focus. You might not care about all 125 files in your project when only three of them are actively being worked on, so it is helpful to bring them to the forefront. - -If you thought there were a lot of Git web hosts out there, you haven't seen anything yet. [Git clients for your desktop][1] are a dime-a-dozen. In fact, Git actually ships with an inbuilt graphical Git client. The most cross-platform and most configurable of them all is the open source Git-cola client, written in Python and Qt. - -If you're on Linux, Git-cola may be in your software repository. Otherwise, just download it from the site and install it: - -``` -$ python setup.py install -``` - -When Git-cola launches, you're given three buttons to open an existing repository, create a new repo, or clone an existing repository. - -Whichever you choose, at some point you end up with a Git repository. 
Git-cola, and indeed most desktop clients that I've used, don't try to be your interface into your repository; they leave that up to your normal operating system tools. In other words, I might start a repository with Git-cola, but then I would open that repository in Thunar or Emacs to start my work. Leaving Git-cola open as a monitor works quite well, because as you create new files, or change existing ones, they appear in Git-cola's Status panel. - -The default layout of Git-cola is a little non-linear. I prefer to move from left-to-right, and because Git-cola happens to be very configurable, you're welcome to change your layout. I set mine up so that the left-most panel is Status, showing any changes made to my current branch, then to the right, a Diff panel in case I want to review a change, and the Actions panel for quick-access buttons to common tasks, and finally the right-most panel is a Commit panel where I can write commit messages. - -![](https://opensource.com/sites/default/files/4_gitcola.jpg) - -Even if you use a different layout, this is the general flow of Git-cola: - -Changes appear in the Status panel. Right-click a change entry, or select a file and click the Stage button in the Action panel, to stage a file. - -A staged file's icon changes to a green triangle to indicate that it has been both modified and staged. You can unstage a file by right-clicking and selecting Unstage Selected, or by clicking the Unstage button in the Actions panel. - -Review your changes in the Diff panel. - -When you are ready to commit, enter a commit message and click the Commit button. - -There are other buttons in the Actions panel for other common tasks like a git pull or git push. The menus round out the task list, with dedicated actions for branching, reviewing diffs, rebasing, and a lot more. - -I tend to think of Git-cola as a kind of floating panel for my file manager (and I only use Git-cola when Dolphin is not available). 
On one hand, it's less interactive than a fully integrated and Git-aware file manager, but on the other, it offers practically everything that raw Git does, so it's actually more powerful. - -There are plenty of graphical Git clients. Some are paid software with no source code available, others are viewers only, others attempt to reinvent Git with special terms that are specific to the client ("sync" instead of "push"..?), and still others are platform-specific. Git-Cola has consistently been the easiest to use on any platform, and the one that stays closest to pure Git so that users learn Git whilst using it, and experts feel comfortable with the interface and terminology. - -### Git or graphical? - -I don't generally use graphical tools to access Git; mostly I use the ones I've discussed when helping other people find a comfortable interface for themselves. At the end of the day, though, it comes down to what fits with how you work. I like terminal-based Git because it integrates well with Emacs, but on a day where I'm working mostly in Inkscape, I might naturally fall back to using Git in Dolphin because I'm in Dolphin anyway. - -It's up to you how you use Git; the most important thing to remember is that Git is meant to make your life easier and those crazy ideas you have for your work safer to try out. Get familiar with the way Git works, and then use Git from whatever angle you find works best for you. - -In our next installment, we will learn how to set up and manage a Git server, including user access and management, and running custom scripts. 
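For readers who want to map the walkthrough above back to plain Git, the Status → Stage → Diff → Commit loop that Git-cola wraps in buttons is only a few commands. A throwaway sketch (the repository path and commit message are placeholders):

```shell
#!/bin/sh
set -e
rm -rf /tmp/cola-demo
git init --quiet /tmp/cola-demo
cd /tmp/cola-demo
echo "hello" > notes.txt

git add notes.txt                       # the Stage button
git diff --staged --stat                # the Diff panel
git -c user.name=demo -c user.email=demo@example.com \
    commit --quiet -m "first commit"    # the Commit panel
git log --oneline
```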
- - - --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/8/graphical-tools-git - -作者:[Seth Kenlon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]: https://git-scm.com/downloads/guis diff --git a/translated/tech/20160802 3 graphical tools for Git.md b/translated/tech/20160802 3 graphical tools for Git.md new file mode 100644 index 0000000000..e26b1c10d9 --- /dev/null +++ b/translated/tech/20160802 3 graphical tools for Git.md @@ -0,0 +1,117 @@ +3 个 Git 图形工具 +============================= + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/BUSINESS_meritladder.png?itok=4CAH2wV0) + +在本文里,我们来了解几个能帮你在日常工作中舒服地用上 Git 的工具。 + +我是在这许多漂亮界面出来之前学习的 Git,而且我的日常工作经常是基于字符界面的,所以 Git 本身自带的大部分功能已经足够我用了。在我看来,最好能理解 Git 的工作原理。不过,能有的选也不错,下面这些就是能让你不用终端酒可以开始使用 Git 的一些方式。 + +### KDE Dolphin 里的 Git + +我是一个 KDE 用户,如果不在 Plasma 桌面环境下,就是在 Fluxbox 的应用层。Dolphin 是一个非常优秀的文件管理器,有很多配置项以及大量秘密小功能。大家为它开发的插件都特别好用,其中一个是几乎完整的 Git 界面。是的,你可以直接在自己的桌面上很方便地管理你的 Git 仓库。 + +但首先,你得先确认已经安装了这个插件。有些发行版带的 KDE 将各种插件都装的满满的,而有些只装了一些最基本的,所以如果你在下面的步骤里没有看到 Git 相关选项,就在你的配置目录里找找类似 dolphin-extras 或者 dolphin-plugins 的字样。 + +要打开 Git 集成,在任何 Dolphin 窗口里点击 Settings 菜单,并选择 Configure Dolphin。 + +在弹出的 Configure Dolphin 窗口里,点击左边侧栏里的 Services 图标。 + +在 Services 面板里,滚动可用的插件列表找到 Git。 + +![](https://opensource.com/sites/default/files/4_dolphinconfig.jpg) + +保存你的改动并关闭 Dolphin 窗口。重新启动 Dolphin,浏览一个 Git 仓库试试看。你会发现现在所有文件图标都带有标记:绿色方框表示已经提交的文件,绿色实心方块表示文件有改动,没加入库里的文件没有标记,等等。 + +之后你在 Git 仓库目录下点击鼠标右键弹出的菜单里就会有 Git 选项了。你在 Dolphin 窗口里点击鼠标就可以检出一个版本,推送或提交改动,还可以对文件进行 git add 或 git remove 操作。 + +![](https://opensource.com/sites/default/files/4_dolphingit.jpg) + +不过 Dolphin 不支持克隆仓库或是改变远端路径,需要到终端窗口操作,按下 F4 就可以很方便地进行切换。 + +坦白地说,KDE 的这个功能太牛了,这篇文章已经可以到此为止。将 Git 集成到原生文件管理器里可以让 Git 
操作非常清晰;不管你在工作流程的哪个阶段,一切都能直接地摆在面前。在终端里是 Git,切换到 GUI 后还是 Git。完美。 + +不过别急,还有好多呢! + +### Sparkleshare + +SparkleShare 来自桌面环境的另一大阵营,由一些 GNOME 开发人员发起,一个使用文件同步模型 (“例如 Dropbox!”) 的项目。不过它并没有直接嵌入到 GNOME 里,所以你可以在任何平台使用。 + +如果你在用 Linux,可以从你的软件仓库直接安装 SparkleShare。如果是其它操作系统,可以去 SparkleShare 网站下载。你可以不用看 SparkleShare 网站上的指引,那个是告诉你如何架设 SparkleShare 服务器的,不是我们这里讨论的。当然你想的话也可以架 SparkleShare 服务器,但是 SparkleShare 能兼容 Git 仓库,所以其实没必要再架一个自己的。 + +在安装完成后,从应用程序菜单里启动 SparkleShare。走一遍设置向导,只有两个步骤外加一个简单介绍,然后可以选择是否将 SparkleShare 设置为随桌面自动启动。 + +![](https://opensource.com/sites/default/files/4_sparklesetup.jpg) + +之后在你的系统托盘里会出现一个橙色的 SparkleShare 目录。目前,SparkleShare 对你电脑上的任何东西都一无所知,所以你需要添加一个项目。 + +要添加一个目录给 SparkleShare 追踪,可以点击系统托盘里的 SparkleShare 图标然后选择 Add Hosted Project。 + +![](https://opensource.com/sites/default/files/4_sparklehost.jpg) + +SparkleShare 支持自行托管的 Git 项目,也可以是存放在像 GitHub 和 Bitbucket 这样的 Git 服务器上的项目。要获得访问权限,你可能会需要使用 SparkleShare 生成的客户端 ID。这是一个 SSH 密钥,作为你所用到服务的授权标记,包括你自己的 Git 服务器,应该也是用 SSH 公钥认证而不是用户名密码。将客户端 ID 拷贝到你服务器上 Git 用户的 authorized_hosts 文件里,或者是你的 Git 主机的 SSH 密钥面板里。 + +在配置好你要用的主机后,SparkleShare 会下载整个 Git 项目,包括(你可以自己选择)提交历史。可以在 ~/SparkleShare 目录下找到同步完成的文件。 + +不像 Dolphin 那样的集成方式,SparkleShare 是不透明的,让人心里没底。在你做出改动后,它会悄悄地把改动同步到服务器远端项目中。对大部分人来说,这样做有一个很大的好处:可以用到 Git 的全部威力但是不用维护。对我来说,这样有些乱,因为我想自己管理我的提交以及要用的分支。 + +SparkleShare 可能不适合所有人,但是它是一个强大而且简单的 Git 解决方案,展示了不同的开源项目完美地协调整合到一起后所创造出的独特项目。 + +### Git-cola + +另一种配合 Git 仓库工作的模型,没那么原生,更多的是监视方式;不是使用一个集成的应用程序和你的 Git 项目直接交互,而是你可以使用一个桌面客户端来监视项目改动,并随意处理每一个改动。这种方式的一个优势就是专注。当你实际只用到项目里的三个文件的时候,你可能不会关心所有的 125 个文件,能将这三个文件挑出来就很方便了。 + +如果你觉得网上的 Git 托管服务已经够多了,那你还没见识过桌面客户端呢:[桌面上的 Git 客户端][1] 多得数不清。实际上,Git 默认自带一个图形客户端。它们中平台适用最广,配置最丰富的是开源的 Git-cola 客户端,用 Python 和 Qt 写的。 + +如果你在用 Linux,Git-cola 应该在你的软件仓库里有。不是的话,可以直接从网站下载安装: + +``` +$ python setup.py install +``` + +启动 git-cola 后,会有三个按钮用来打开仓库,创建新仓库,或克隆仓库。 + +不管选哪个,最终都会停在一个 Git 仓库中。和大多数我用过的客户端一样,Git-cola 不会尝试成为你的仓库的接口;它们一般会让操作系统工具来做这个。换句话说,我可以通过 Git-cola 创建一个仓库,但随后我就在 Thunar 或 Emacs 里打开仓库开始工作。打开 Git-cola
来监视仓库很不错,因为当你创建新文件,或者改动文件的时候,它们都会出现在 Git-cola 的状态栏里。 + +Git-cola 的默认布局不是线性的。我喜欢从左向右排列,因为 Git-cola 又是高度可配置的,你可以随便修改布局。我自己设置成最左边是状态栏,显示当前分支的任何改动,然后右边是改动栏,可以浏览当前改动,然后是动作栏,放一些常用任务的快速按钮,最后,最右边是提交栏,可以写提交信息。 + +![](https://opensource.com/sites/default/files/4_gitcola.jpg) + +不管怎么改布局,下面是 Git-cola 的通用流程: + +改动会出现在状态栏里。右键点击一个改动或选中一个文件,然后在动作栏里点击 Stage 按钮来将文件加入待提交暂存区。 + +待提交文件的图标会变成绿色三角形,表示该文件有改动并且正等待提交。你也可以右键点击并选择 Unstage Selected 将改动移出待提交暂存区,或者点击动作栏里的 Unstage 按钮。 + +在改动栏里检查你的改动。 + +当准备好提交后,输入提交信息并点击 Commit 按钮。 + +在动作栏里还有其它按钮用来处理其它普通任务,比如拉取或推送。菜单里有更多的任务列表,比如专门的分支操作、改动审查、变基(rebase)等等。 + +我更愿意将 Git-cola 当作文件管理器的一个浮动面板(在不能用 Dolphin 的时候我只用 Git-cola)。虽然它的交互性没有完全集成 Git 的文件管理器那么强,但另一方面,它几乎提供了原始 Git 命令的所有功能,所以它实际上更为强大。 + +有很多 Git 图形客户端。有些是不提供源代码的付费软件,有些只是用来查看,有些尝试加入新的特定术语(用 “sync” 替代 “push” ...?) 来重造 Git,也有一些是适合特定平台。Git-cola 一直是最简单的能在任意平台上使用的客户端,也是最贴近纯粹 Git 命令的,可以让用户在使用过程中学习 Git,高手也很满意它的界面和术语。 + +### Git 命令还是图形界面? + +我一般不用图形工具来操作 Git;大多数时候我使用上面介绍的工具,是帮助其他人找出适合他们的界面。不过说到底,不同的工作适合不一样的工具。我喜欢基于终端的 Git 命令是因为可以很好地集成到 Emacs 里,但如果某天我几乎都在用 Inkscape 工作时,我一般会很自然地使用 Dolphin 里带的 Git,因为我在 Dolphin 环境里。 + +如何使用 Git 你自己可以选择;但要记住 Git 是一种让生活更轻松的方式,也是让你在工作中更安全地尝试一些疯狂点子的方法。熟悉 Git 的工作模式,然后不管什么方式使用 Git,只要能让你觉得最适合就可以。 + +在下一期文章里,我们将了解如何架设和管理 Git 服务器,包括用户权限和管理,以及运行定制脚本。 + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/8/graphical-tools-git + +作者:[Seth Kenlon][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://git-scm.com/downloads/guis From b9e140dbfb29d5855bddb40dec35c5c1e08133b3 Mon Sep 17 00:00:00 2001 From: NearTan Date: Sun, 14 Aug 2016 17:56:09 +0800 Subject: [PATCH 415/471] =?UTF-8?q?NearTan=20=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...x Administration Tasks and
Application Deployments Over SSH.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md b/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md index 43b1fe59c7..f45d1e9702 100644 --- a/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md +++ b/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md @@ -1,3 +1,5 @@ +NearTan 认领 + Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH =========================== From aaebd2aa5e3bb467a3cfc3fb8cf918b1a065bf17 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 14 Aug 2016 21:31:34 +0800 Subject: [PATCH 416/471] PUB:20160620 5 SSH Hardening Tips @maywanting --- published/20160620 5 SSH Hardening Tips.md | 119 +++++++++++++++++ .../talk/20160620 5 SSH Hardening Tips.md | 125 ------------------ 2 files changed, 119 insertions(+), 125 deletions(-) create mode 100644 published/20160620 5 SSH Hardening Tips.md delete mode 100644 translated/talk/20160620 5 SSH Hardening Tips.md diff --git a/published/20160620 5 SSH Hardening Tips.md b/published/20160620 5 SSH Hardening Tips.md new file mode 100644 index 0000000000..22f6edae6e --- /dev/null +++ b/published/20160620 5 SSH Hardening Tips.md @@ -0,0 +1,119 @@ +五条强化 SSH 安全的建议 +====================== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1188510_1920_0.jpg?itok=ocPCL_9G) + +*采用这些简单的建议,使你的 OpenSSH 会话更加安全。* + +当你查看你的 SSH 服务日志,可能你会发现充斥着一些不怀好意的尝试性登录。这里有 5 条常规建议(和一些个别特殊策略)可以让你的 OpenSSH 会话更加安全。 + +### 1. 
强化密码登录 + +密码登录很方便,因为你可以从任何地方的任何机器上登录。但是它们在暴力攻击面前也是脆弱的。尝试以下策略来强化你的密码登录。 + +- 使用一个密码生成工具,例如 pwgen。pwgen 有几个选项,最有用的就是密码长度的选项(例如,`pwgen 12` 产生一个12位字符的密码) +- 不要重复使用密码。忽略所有那些不要写下你的密码的建议,然后将你的所有登录信息都记在一个本子上。如果你不相信我的建议,那总可以相信安全权威 [Bruce Schneier][1] 吧。如果你足够细心,没有人能够发现你的笔记本,那么这样能够不受到网络上的那些攻击。 +- 你可以为你的登录记事本增加一些额外的保护措施,例如用字符替换或者增加新的字符来掩盖笔记本上的登录密码。使用一个简单而且好记的规则,比如说给你的密码增加两个额外的随机字符,或者使用单个简单的字符替换,例如 `#` 替换成 `*`。 +- 为你的 SSH 服务开启一个非默认的监听端口。是的,这是很老套的建议,但是它确实很有效。检查你的登录;很有可能 22 端口是被普遍攻击的端口,其他端口则很少被攻击。 +- 使用 [Fail2ban][2] 来动态保护你的服务器,使服务器免于被暴力攻击。 +- 使用不常用的用户名。绝不能让 root 可以远程登录,并避免用户名为“admin”。 + +### 2. 解决 `Too Many Authentication Failures` 报错 + +当我的 ssh 登录失败,并显示“Too many authentication failures for carla”的报错信息时,我很难过。我知道我应该不介意,但是这报错确实很碍眼。而且,正如我聪慧的奶奶曾经说过,伤痛之感并不能解决问题。解决办法就是在你的(客户端的) `~/.ssh/config` 文件设置强制密码登录。如果这个文件不存在,首先创建一个 `~/.ssh/` 目录。 + +``` +$ mkdir ~/.ssh +$ chmod 700 ~/.ssh +``` +然后在一个文本编辑器创建 `~/.ssh/config` 文件,输入以下行,使用你自己的远程域名替换 HostName。 + +``` +HostName remote.site.com +PubkeyAuthentication=no +``` + +(LCTT 译注:这种错误发生在你使用一台 Linux 机器使用 ssh 登录另外一台服务器时,你的 .ssh 目录中存储了过多的私钥文件,而 ssh 客户端在你没有指定 -i 选项时,会默认逐一尝试使用这些私钥来登录远程服务器后才会提示密码登录,如果这些私钥并不能匹配远程主机,显然会触发这样的报错,甚至拒绝连接。因此本条是通过禁用本地私钥的方式来强制使用密码登录——显然这并不可取,如果你确实要避免用私钥登录,那你应该用 `-o PubkeyAuthentication=no` 选项登录。显然这条和下两条是互相矛盾的,所以请无视本条即可。) + +### 3.
使用公钥认证 + +公钥认证比密码登录安全多了,因为它不受暴力密码攻击的影响,但它并不方便,因为它依赖于 RSA 密钥对。首先,你要创建一个公钥/私钥对。下一步,私钥放于你的客户端电脑,并且复制公钥到你想登录的远程服务器。你只有从拥有私钥的电脑上登录,才能登录到远程服务器。你的私钥就和你的家门钥匙一样敏感;任何人获取到了私钥就可以获取你的账号。你可以给你的私钥加上密码来增加一些强化保护规则。 + +使用 RSA 密钥对管理多个用户是一种好的方法。当一个用户离开了,只要从服务器删了他的公钥就能取消他的登录。 + +以下例子创建一个新的 3072 位长度的密钥对,它比默认的 2048 位更安全,而且为它起一个独一无二的名字,这样你就可以知道它属于哪个服务器。 + +``` +$ ssh-keygen -t rsa -b 3072 -f id_mailserver +``` + +以上命令会创建两个新的密钥文件, `id_mailserver` 和 `id_mailserver.pub`,`id_mailserver` 是你的私钥--不要传播它!现在用 `ssh-copy-id` 命令安全地复制你的公钥到你的远程服务器。你必须确保在远程服务器上有可用的 SSH 登录方式。 + +``` +$ ssh-copy-id -i id_mailserver.pub user@remoteserver + +/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed +user@remoteserver's password: + +Number of key(s) added: 1 + +Now try logging into the machine, with: "ssh 'user@remoteserver'" +and check to make sure that only the key(s) you wanted were added. +``` + +`ssh-copy-id` 会确保你不会无意间复制了你的私钥。从上述输出中复制登录命令,记得带上其中的单引号,以测试你的新的密钥登录。 + +``` +$ ssh 'user@remoteserver' +``` + +它将用你的新密钥登录,如果你为你的私钥设置了密码,它会提示你输入。 + +### 4. 取消密码登录 + +一旦你已经测试并且验证了你的公钥可以登录,就可以取消密码登录,这样你的远程服务器就不会被暴力密码攻击。如下设置**你的远程服务器**的 `/etc/ssh/sshd_config` 文件。 + +``` +PasswordAuthentication no +``` + +然后重启服务器上的 SSH 守护进程。 + +### 5.
设置别名 -- 这很快捷而且很有 B 格 + +你可以为你的远程登录设置常用的别名,来替代登录时输入的一长串命令:例如不用输入 `ssh -l username -p 2222 remote.site.with.long-name`,而是可以直接使用 `ssh remote1`。你的客户端机器上的 `~/.ssh/config` 文件可以参照如下设置 + +``` +Host remote1 +HostName remote.site.with.long-name +Port 2222 +User username +PubkeyAuthentication no +``` + +如果你正在使用公钥登录,可以参照这个: + +``` +Host remote1 +HostName remote.site.with.long-name +Port 2222 +User username +IdentityFile ~/.ssh/id_remoteserver +``` + +[OpenSSH 文档][3] 很长而且详细,但是当你掌握了基础的 SSH 使用规则之后,你会发现它非常的有用,而且包含很多可以通过 OpenSSH 来实现的炫酷效果。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/5-ssh-hardening-tips + +作者:[CARLA SCHRODER][a] +译者:[maywanting](https://github.com/maywanting) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/cschroder +[1]: https://www.schneier.com/blog/archives/2005/06/write_down_your.html +[2]: http://www.fail2ban.org/wiki/index.php/Main_Page +[3]: http://www.openssh.com/ diff --git a/translated/talk/20160620 5 SSH Hardening Tips.md b/translated/talk/20160620 5 SSH Hardening Tips.md deleted file mode 100644 index a030e13186..0000000000 --- a/translated/talk/20160620 5 SSH Hardening Tips.md +++ /dev/null @@ -1,125 +0,0 @@ -五条强化 SSH 安全的建议 -====================== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1188510_1920_0.jpg?itok=ocPCL_9G) ->采用这些简单的建议,使你的 OpenSSH 会话更加安全。 - -> 创造性的零共有 - -当你查看你的 SSH 服务日志,很有可能都充斥着一些不怀好意的尝试性登录。这里有5条一般意义上的建议(和一些个别特殊策略),让你的 OpenSSH 会话更加安全。 - -### 1.
强化密码登录 - -密码登录很方便,因为你可以从任何地方的任何机器上登录。但是他们在暴力攻击面前也是脆弱的。尝试以下策略来强化你的密码登录。 - -- 使用一个密码生成工具,例如 pwgen。pwgen 会有几个选项;最有用的就是密码长度的选项(例如,`pwgen 12` 产生一个12位字符的密码) - -- 不要重复使用一个密码。忽略所有有关不要记下你的密码的建议,然后关于你所有登录信息都记在一本本子上。如果你不相信我的建议,那去相信安全权威 [Bruce Schneier][1]。如果你足够仔细,没有人能够发现你的笔记本,那么这样能够不受到网络上的那些攻击。 - -- 你可以为你的登录记事本增加一些额外的保护措施,例如用字符替换或者增加新的字符来掩盖笔记本上的登录密码。使用一个简单而且好记的规则,比如说给你的密码增加两个额外的随机字符,或者使用单个简单的字符替换,例如 `#` 替换成 `*`。 - -- 为你的 SSH 服务开启一个不是默认的监听端口。是的,这是很老套的建议,但是它确实很有效。检查你的登录;很有可能22端口是被普遍攻击的端口,其他端口则很少被攻击。 - -- 使用 [Fail2ban][2] 来动态保护你的服务器,是服务器免于被暴力攻击 - -- 使用不寻常的用户名。不能让root可以远程登录,并确避免用户名为“admin”。 - -### 2. 解决 `Too Many Authentication Failures` 报错 - -当我的 ssh 登录失败,并显示「Too many authentication failures for carla」的报错信息时,我很难过。我知道我应该不介意,但是这报错确实很碍眼。而且,正如我聪慧的奶奶曾经说过,伤痛之感并不能解决问题。解决办法就是在你的 `~/.ssh/config` 文件设置强制密码登录。如果这个文件不存在,首先创个 `~/.ssh/` 目录。 - -``` -$ mkdir ~/.ssh -$ chmod 700 ~/.ssh -``` - -然后在一个文本编辑器创建 `~/.ssh/confg` 文件,输入以下行,使用你自己的远程域名。 - -``` -HostName remote.site.com -PubkeyAuthentication=no -``` - -### 3. 使用公钥认证 - -公钥认证比密码登录安全多了,因为它不受暴力密码攻击的影响,但是并不方便因为它依赖于 RSA 密钥对。一开始,你创建一对公钥和私钥。下一步,私钥放于你的客户端电脑,并且复制公钥到你想登录的远程服务器。你只能从拥有私钥的电脑登录才能登录到远程服务器。你的私钥就和你的家门钥匙一样敏感;任何人获取到了私钥就可以获取你的账号。你可以给你的私钥加上密码来增加一些强化保护规则。 -使用 RSA 密钥对管理多种多样的用户是一种好的方法。当一个用户离开了,只要从服务器删了他的公钥就能取消他的登录。 - -以下例子创建一个新的 3072 bits 长度的密钥对,它比默认的 2048 bits 更安全,而且为它起一个独一无二的名字,这样你就可以知道它属于哪个服务器。 - -``` -$ ssh-keygen -t rsa -b 3072 -f id_mailserver -``` - -以下创建两个新的密钥, `id_mailserver` 和 `id_mailserver.pub`,`id_mailserver` 是你的私钥--不传播它!现在用 `ssh-copy-id` 命令安全地复制你的公钥到你的远程服务器。你必须确保有运作中的 SSH 登录到远程服务器。 - -``` -$ ssh-copy-id -i id_rsa.pub user@remoteserver - -/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed -user@remoteserver's password: - -Number of key(s) added: 1 - -Now try logging into the machine, with: "ssh 'user@remoteserver'" -and check to make sure that only the key(s) you wanted were added. 
-``` - -`ssh-copy-id` 确保你不会无意间复制你的私钥。复制你的命令输出样式,带上单引号,这样测试你的新的密钥登录。 - -``` -$ ssh 'user@remoteserver' -``` - -必须用你的新密钥登录,如果你为你的私钥设置了密码,它会提示你。 - -### 4. 取消密码登录 - -一旦你已经测试并且验证了你的公钥登录,取消密码登录,这样你的远程服务器就不会被暴力密码攻击。如下设置你的远程服务器的 `/etc/sshd_config` 文件。 - -``` -PasswordAuthentication no -``` - -然后重启 SSH 守护进程。 - -### 5. 设置别名 -- 这很快捷而且很有 B 格 - -你可以为你的远程登录设置常用的别名,来替代登录时输入的命令,例如 `ssh -u username -p 2222 remote.site.with.long-name`。你可以使用 `ssh remote1`。你的 `~/.ssh/config` 文件可以参照如下设置 - -``` -Host remote1 -HostName remote.site.with.long-name -Port 2222 -User username -PubkeyAuthentication no -``` - -如果你正在使用公钥登录,可以参照这个: - -``` -Host remote1 -HostName remote.site.with.long-name -Port 2222 -User username -IdentityFile ~/.ssh/id_remoteserver -``` - -[OpenSSH 文档][3] 很长而且详细,但是当你掌握了基础的 SSH 使用规则之后,你会发现它非常的有用而且包含很多可以通过 OpenSSH 来实现的炫酷效果。 - - - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/5-ssh-hardening-tips - -作者:[CARLA SCHRODER][a] -译者:[maywanting](https://github.com/maywanting) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/cschroder -[1]: https://www.schneier.com/blog/archives/2005/06/write_down_your.html -[2]: http://www.fail2ban.org/wiki/index.php/Main_Page -[3]: http://www.openssh.com/ From 88db2b4a5f65cde681d3bb0a4e0d13c93c70014e Mon Sep 17 00:00:00 2001 From: VicYu Date: Mon, 15 Aug 2016 16:54:47 +0800 Subject: [PATCH 417/471] Vic020 --- sources/tech/ubuntu vs ubuntu on windows.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/ubuntu vs ubuntu on windows.md b/sources/tech/ubuntu vs ubuntu on windows.md index 3074c5fd5b..c3300bf984 100644 --- a/sources/tech/ubuntu vs ubuntu on windows.md +++ b/sources/tech/ubuntu vs ubuntu on windows.md @@ -1,3 +1,5 @@ + Vic020 + Ubuntu 14.04/16.04 vs. 
Ubuntu Bash On Windows 10 Anniversary Performance =========================== From 41db9c99564d1e1423a661b7f8ec5e564122b352 Mon Sep 17 00:00:00 2001 From: VicYu Date: Mon, 15 Aug 2016 17:21:25 +0800 Subject: [PATCH 418/471] complete sources/tech/ubuntu vs ubuntu on windows.md --- sources/tech/ubuntu vs ubuntu on windows.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/sources/tech/ubuntu vs ubuntu on windows.md b/sources/tech/ubuntu vs ubuntu on windows.md index c3300bf984..b8f81c8b8a 100644 --- a/sources/tech/ubuntu vs ubuntu on windows.md +++ b/sources/tech/ubuntu vs ubuntu on windows.md @@ -1,31 +1,30 @@ - Vic020 - -Ubuntu 14.04/16.04 vs. Ubuntu Bash On Windows 10 Anniversary Performance +Ubuntu 14.04/16.04 与 Windows 10 周年版 Ubuntu Bash 性能对比 =========================== -When Microsoft and Canonical brought [Bash and Ubuntu's user-space to Windows 10][1] earlier this year I ran some preliminary [benchmarks of Ubuntu on Windows 10 versus a native Ubuntu installation][2] on the same hardware. Now that this "Windows Subsystem for Linux" is part of the recent Windows 10 Anniversary Update, I've carried out some fresh benchmarks of Ubuntu running atop Windows 10 compared to Ubuntu running bare metal. +今年初,当 Microsoft 和 Canonical 发布 [Windows 10 上的 Bash 和 Ubuntu 用户空间][1] 时,我在相同的硬件上做了一些初步的 [Ubuntu on Windows 10 对比原生 Ubuntu][2] 的性能测试。现在“Windows 的 Linux 子系统”已经随 Windows 10 周年更新正式发布,我再次进行了原生纯净 Ubuntu 和 Bash on Windows 10 的基准对比。 ![](http://www.phoronix.net/image.php?id=windows10-anv-wsl&image=windows_wsl_1_med) -The Windows Subsystem for Linux testing was done with the Windows 10 Anniversary Update plus all available system updates as of last week. The default Ubuntu user-space on Windows 10 continues to be Ubuntu 14.04 LTS, but it worked out to upgrade to Ubuntu 16.04 LTS. So at first I carried out the benchmarks in the 14.04-based stock environment on Windows followed by upgrading the user-space to Ubuntu 16.04 LTS and then repeating the tests.
After all of the Windows-based testing was completed, I did clean installs of Ubuntu 14.04.5 and Ubuntu 16.04 LTS on the same system to see how the performance compares. +Windows 的 Linux 子系统测试采用了 Windows 10 周年更新,并安装了截至上周所有可用的系统更新。默认的 Ubuntu 用户空间还是 Ubuntu 14.04,但是已经可以升级到 16.04。所以测试先在 14.04 的默认环境进行,完成后将用户空间升级到 16.04 版本并重复所有测试。完成所有基于 Windows 的测试后,我在同一系统上全新安装了 Ubuntu 14.04.5 和 Ubuntu 16.04 LTS 来对比性能。 + ![](http://www.phoronix.net/image.php?id=windows10-anv-wsl&image=windows_wsl_2_med) -An Intel Core i5 6600K Skylake system with 16GB of RAM and 256GB Toshiba SSD were used during the testing process. Each OS was left at its default settings/packages during the testing process. +测试机配置为 Intel Core i5 6600K Skylake 处理器、16GB 内存和 256GB 东芝 SSD,测试过程中所有系统都保持默认设置和软件包。 ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=09989b3&p=2) >点击放大查看 -The testing of Ubuntu/Bash on Windows and the native Ubuntu Linux installations were carried out in a fully-automated and reproducible manner using the open-source Phoronix Test Suite benchmarking software. +这次 Ubuntu/Bash on Windows 和原生 Ubuntu 的对比测试,采用开源的 Phoronix Test Suite 基准测试软件,以完全自动化且可重复的方式进行。 -------------------------------------------------------------------------------- via: https://www.phoronix.com/scan.php?page=article&item=windows10-anv-wsl&num=1 作者:[Michael Larabel][a] -译者:[译者ID](https://github.com/译者ID) +译者:[VicYu/Vic020](http://vicyu.net) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 91b189d2cfb775c2d8e012fbbb6dae1005908bdc Mon Sep 17 00:00:00 2001 From: VicYu Date: Mon, 15 Aug 2016 17:22:52 +0800 Subject: [PATCH 419/471] Translated and moved to translaed folder --- {sources => translated}/tech/ubuntu vs ubuntu on windows.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/ubuntu vs ubuntu
on windows.md rename to translated/tech/ubuntu vs ubuntu on windows.md From bfd9acc3ba0ea1d31e7cb06ecdf952fd320f1a1d Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Mon, 15 Aug 2016 18:16:28 +0800 Subject: [PATCH 420/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?(#4314)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Delete part 1 - Building a data science portfolio - Machine learning project.md * Create part 1 - Building a data science portfolio - Machine learning project.md --- ...ce portfolio - Machine learning project.md | 97 +++++++++---------- 1 file changed, 46 insertions(+), 51 deletions(-) diff --git a/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md index 439be578ce..9acb80676b 100644 --- a/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md +++ b/sources/team_test/part 1 - Building a data science portfolio - Machine learning project.md @@ -1,84 +1,79 @@ -+@noobfish translating since Aug 2nd,2016. -+ -+ +>这是这个系列的第三次发布关于如何建立科学的投资数据. 如果你喜欢这个系列并且想继续关注, 你可以在订阅页面的底部找到链接[subscribe at the bottom of the page][1]. ->This is the third in a series of posts on how to build a Data Science Portfolio. If you like this and want to know when the next post in the series is released, you can [subscribe at the bottom of the page][1]. +数据科学公司越来越关注投资组合问题. 这其中的一个原因是,投资组合是最好的判断人们生活技能的方法. 好消息是投资组合是完全在你的控制之下的. 只要你做些投资方面的工作,就可以做出很棒的投资组合. -Data science companies are increasingly looking at portfolios when making hiring decisions. One of the reasons for this is that a portfolio is the best way to judge someone’s real-world skills. The good news for you is that a portfolio is entirely within your control. If you put some work in, you can make a great portfolio that companies are impressed by. +高质量投资组合的第一步就是知道需要什么技能. 
客户想要将这些初级技能应用到数据科学, 因此这些投资技能显示如下: -The first step in making a high-quality portfolio is to know what skills to demonstrate. The primary skills that companies want in data scientists, and thus the primary skills they want a portfolio to demonstrate, are: +- 沟通能力 +- 协作能力 +- 技术能力 +- 数据推理能力 +- 主动性 -- Ability to communicate -- Ability to collaborate with others -- Technical competence -- Ability to reason about data -- Motivation and ability to take initiative +任何好的投资组合都由多个能力组成,其中必然包含以上一到两点. 这里我们主要讲第三点如何做好科学的数据投资组合. 在这一节, 我们主要讲第二项以及如何创建一个端对端的机器学习项目. 在最后, 在最后我们将拥有一个项目它将显示你的能力和技术水平. [Here’s][2]如果你想看一下这里有一个完整的例子. -Any good portfolio will be composed of multiple projects, each of which may demonstrate 1-2 of the above points. This is the third post in a series that will cover how to make a well-rounded data science portfolio. In this post, we’ll cover how to make the second project in your portfolio, and how to build an end to end machine learning project. At the end, you’ll have a project that shows your ability to reason about data, and your technical competence. [Here’s][2] the completed project if you want to take a look. +### 一个端到端的项目 -### An end to end project +作为一个数据科学家, 有时候你会拿到一个数据集并被问到是 [如何产生的][3]. 在这个时候, 交流是非常重要的, 走一遍流程. 用用Jupyter notebook, 看一看以前的例子,这对你非常有帮助. 在这里你能找到一些可以用的报告或者文档. -As a data scientist, there are times when you’ll be asked to take a dataset and figure out how to [tell a story with it][3]. In times like this, it’s important to communicate very well, and walk through your process. Tools like Jupyter notebook, which we used in a previous post, are very good at helping you do this. The expectation here is that the deliverable is a presentation or document summarizing your findings. +不管怎样, 有时候你会被要求创建一个具有操作价值的项目. 一个直接影响公司业务的项目, 不止一次的, 许多人用的项目. 这个任务可能像这样 “创建一个算法来预测波动率”或者 “创建一个模型来自动标签我们的文章”. 在这种情况下, 技术能力比说评书更重要. 你必须能够创建一个数据集, 并且理解它, 然后创建脚本处理该数据. 还有很重要的脚本要运行的很快, 占用系统资源很小. 它可能要运行很多次, 脚本的可使用性也很重要,并不仅仅是一个演示版. 可使用性是指整合操作流程, 因为他很有可能是面向用户的. 
-However, there are other times when you’ll be asked to create a project that has operational value. A project with operational value directly impacts the day-to-day operations of a company, and will be used more than once, and often by multiple people. A task like this might be “create an algorithm to forecast our churn rate”, or “create a model that can automatically tag our articles”. In cases like this, storytelling is less important than technical competence. You need to be able to take a dataset, understand it, then create a set of scripts that can process that data. It’s often important that these scripts run quickly, and use minimal system resources like memory. It’s very common that these scripts will be run several times, so the deliverable becomes the scripts themselves, not a presentation. The deliverable is often integrated into operational flows, and may even be user-facing.
+端对端项目的主要组成部分:
-The main components of building an end to end project are:
+- 理解背景
+- 浏览数据并找出细微差别
+- 创建结构化项目, 那样比较容易整合操作流程
+- 编写运行速度快、占用系统资源小的高性能代码
+- 写好代码的安装和使用文档, 以便其他人使用
-- Understanding the context
-- Exploring the data and figuring out the nuances
-- Creating a well-structured project, so its easy to integrate into operational flows
-- Writing high-performance code that runs quickly and uses minimal system resources
-- Documenting the installation and usage of your code well, so others can use it
-
-In order to effectively create a project of this kind, we’ll need to work with multiple files. Using a text editor like [Atom][4], or an IDE like [PyCharm][5] is highly recommended. These tools will allow you to jump between files, and edit files of different types, like markdown files, Python files, and csv files. Structuring your project so it's easy to version control and upload to collaborative coding tools like [Github][6] is also useful.
+为了有效地创建这种类型的项目, 我们需要处理多个文件. 强烈推荐使用文本编辑器 [Atom][4] 或者 IDE [PyCharm][5]. 这些工具允许你在文件间跳转, 编辑不同类型的文件, 例如 markdown 文件、Python 文件和 csv 文件. 把项目结构化, 使其易于进行版本控制并上传到 [Github][6] 这样的协作编码工具上, 也很有用.
结构化你的项目还利于版本控制 [Github][6] 也很有用. ![](https://www.dataquest.io/blog/images/end_to_end/github.png) ->This project on Github. +>Github上的这个项目. -We’ll use our editing tools along with libraries like [Pandas][7] and [scikit-learn][8] in this post. We’ll make extensive use of Pandas [DataFrames][9], which make it easy to read in and work with tabular data in Python. +在这一节中我们将使用 [Pandas][7] 和 [scikit-learn][8]扩展包 . 我们还将用到Pandas [DataFrames][9], 它使得python读取和处理表格数据更加方便. -### Finding good datasets +### 找到好的数据集 -A good dataset for an end to end portfolio project can be hard to find. [The dataset][10] needs to be sufficiently large that memory and performance constraints come into play. It also needs to potentially be operationally useful. For instance, this dataset, which contains data on the admission criteria, graduation rates, and graduate future earnings for US colleges would be a great dataset to use to tell a story. However, as you think about the dataset, it becomes clear that there isn’t enough nuance to build a good end to end project with it. For example, you could tell someone their potential future earnings if they went to a specific college, but that would be a quick lookup without enough nuance to demonstrate technical competence. You could also figure out if colleges with higher admissions standards tend to have graduates who earn more, but that would be more storytelling than operational. +找到一个好的端到端投资项目数据集很难. [The dataset][10]数据集需要足够大但是内存和性能限制了它. 它还需要实际有用的. 例如, 这个数据集, 它包含有美国院校的录取标准, 毕业率以及毕业以后的收入是个很好的数据集了. 不管怎样, 不管你如何想这个数据, 很显然它不适合创建端到端的项目. 比如, 你能告诉人们他们去了这些大学以后的未来收益, 但是却没有足够的细微差别. 你还能找出院校招生标准收入更高, 但是却没有告诉你如何实际操作. -These memory and performance constraints tend to come into play when you have more than a gigabyte of data, and when you have some nuance to what you want to predict, which involves running algorithms over the dataset. +这里还有内存和性能约束的问题比如你有几千兆的数据或者有一些细微差别需要你去预测或者运行什么样的算法数据集等. 
-A good operational dataset enables you to build a set of scripts that transform the data, and answer dynamic questions. A good example would be a dataset of stock prices. You would be able to predict the prices for the next day, and keep feeding new data to the algorithm as the markets closed. This would enable you to make trades, and potentially even profit. This wouldn’t be telling a story – it would be adding direct value.
+一个好的可操作数据集可以让你构建一系列转换数据的脚本, 并且回答动态的问题. 一个很好的例子是股票价格数据集: 你可以预测第二天的价格, 并在每天收盘后把新数据持续输入算法. 这将使你可以进行交易, 甚至可能从中获利. 这就不是讲故事了, 而是能带来直接价值.
-Some good places to find datasets like this are:
+一些找到数据集的好地方:
-- [/r/datasets][11] – a subreddit that has hundreds of interesting datasets.
-- [Google Public Datasets][12] – public datasets available through Google BigQuery.
-- [Awesome datasets][13] – a list of datasets, hosted on Github.
+- [/r/datasets][11] – subreddit(Reddit是国外一个社交新闻站点,subreddit指该论坛下的各不同板块), 有数百个有趣的数据集.
+- [Google Public Datasets][12] – 通过Google BigQuery发布的公开数据集.
+- [Awesome datasets][13] – Github上的一个数据集列表.
-As you look through these datasets, think about what questions someone might want answered with the dataset, and think if those questions are one-time (“how did housing prices correlate with the S&P 500?”), or ongoing (“can you predict the stock market?”). The key here is to find questions that are ongoing, and require the same code to be run multiple times with different inputs (different data).
+当你查看这些数据集时, 想一下人们可能想用它们回答什么问题, 再想想这些问题是一次性的 (“房价是如何与标准普尔 500 指数相关联的?”), 还是持续性的 (“你能预测股市吗?”). 这里的关键是找出持续性的问题, 它们需要用不同的输入数据多次运行同样的代码.
-For the purposes of this post, we’ll look at [Fannie Mae Loan Data][14]. Fannie Mae is a government sponsored enterprise in the US that buys mortgage loans from other lenders. It then bundles these loans up into mortgage-backed securities and resells them. This enables lenders to make more mortgage loans, and creates more liquidity in the market. This theoretically leads to more homeownership, and better loan terms.
From a borrowers perspective, things stay largely the same, though. +为了这个目标, 我们来看一下[Fannie Mae 贷款数据][14]. Fannie Mae 是一家政府赞助的企业抵押贷款公司它从其他银行购买按揭贷款. 然后捆绑这些贷款为抵押贷款来倒卖证券. 这使得贷款机构可以提供更多的抵押贷款, 在市场上创造更多的流动性. 这在理论上会导致更多的住房和更好的贷款条件. 从借款人的角度来说,他们大体上差不多, 话虽这样说. -Fannie Mae releases two types of data – data on loans it acquires, and data on how those loans perform over time. In the ideal case, someone borrows money from a lender, then repays the loan until the balance is zero. However, some borrowers miss multiple payments, which can cause foreclosure. Foreclosure is when the house is seized by the bank because mortgage payments cannot be made. Fannie Mae tracks which loans have missed payments on them, and which loans needed to be foreclosed on. This data is published quarterly, and lags the current date by 1 year. As of this writing, the most recent dataset that’s available is from the first quarter of 2015. +Fannie Mae 发布了两种类型的数据 – 它获得的贷款, 随着时间的推移这些贷款是否被偿还.在理想的情况下, 有人向贷款人借钱, 然后还清贷款. 不管怎样, 有些人没还的起钱, 丧失了抵押品赎回权. Foreclosure 是说没钱还了被银行把房子给回收了. Fannie Mae 追踪谁没还钱, 并且需要收回房屋抵押权. 每个季度会发布此数据, 并滞后一年. 当前可用是2015年第一季度数据. -Acquisition data, which is published when the loan is acquired by Fannie Mae, contains information on the borrower, including credit score, and information on their loan and home. Performance data, which is published every quarter after the loan is acquired, contains information on the payments being made by the borrower, and the foreclosure status, if any. A loan that is acquired may have dozens of rows in the performance data. A good way to think of this is that the acquisition data tells you that Fannie Mae now controls the loan, and the performance data contains a series of status updates on the loan. One of the status updates may tell us that the loan was foreclosed on during a certain quarter. +采集数据是由Fannie Mae发布的贷款数据, 它包含借款人的信息, 信用评分, 和他们的家庭贷款信息. 性能数据, 贷款回收后的每一个季度公布, 包含借贷人所支付款项信息和丧失抵押品赎回状态, 收回贷款的性能数据可能有十几行.一个很好的思路是这样的采集数据告诉你Fannie Mae所控制的贷款, 性能数据包含几个属性来更新贷款. 
其中一个属性告诉我们每个季度的贷款赎回权. ![](https://www.dataquest.io/blog/images/end_to_end/foreclosure.jpg) ->A foreclosed home being sold. +>一个没有及时还贷的房子就这样的被卖了. -### Picking an angle +### 选择一个角度 -There are a few directions we could go in with the Fannie Mae dataset. We could: +这里有几个方向我们可以去分析 Fannie Mae 数据集. 我们可以: -- Try to predict the sale price of a house after it’s foreclosed on. -- Predict the payment history of a borrower. -- Figure out a score for each loan at acquisition time. +- 预测房屋的销售价格. +- 预测借款人还款历史. +- 在收购时为每一笔贷款打分. -The important thing is to stick to a single angle. Trying to focus on too many things at once will make it hard to make an effective project. It’s also important to pick an angle that has sufficient nuance. Here are examples of angles without much nuance: +最重要的事情是坚持单一的角度. 关注太多的事情很难做出效果. 选择一个有着足够细节的角度也很重要. 下面的理解就没有太多差别: -- Figuring out which banks sold loans to Fannie Mae that were foreclosed on the most. -- Figuring out trends in borrower credit scores. -- Exploring which types of homes are foreclosed on most often. -- Exploring the relationship between loan amounts and foreclosure sale prices +- 找出哪些银行将贷款出售给Fannie Mae. +- 计算贷款人的信用评分趋势. +- 搜索哪些类型的家庭没有偿还贷款的能力. +- 搜索贷款金额和抵押品价格之间的关系 -All of the above angles are interesting, and would be great if we were focused on storytelling, but aren’t great fits for an operational project. - -With the Fannie Mae dataset, we’ll try to predict whether a loan will be foreclosed on in the future by only using information that was available when the loan was acquired. In effect, we’ll create a “score” for any mortgage that will tell us if Fannie Mae should buy it or not. This will give us a nice foundation to build on, and will be a great portfolio piece. +上面的想法非常有趣, 它会告诉我们很多有意思的事情, 但是不是一个很适合操作的项目。 +在Fannie Mae数据集中, 我们将预测贷款是否能被偿还. 实际上, 我们将建立一个抵押贷款的分数来告诉 Fannie Mae买还是不买. 这将给我提供很好的基础. 
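文中选定的角度是“只用采集时可知的信息, 为每笔贷款打一个分数”。把前面介绍的两类数据串起来, 大致思路是: 先用 Pandas 把性能数据中每笔贷款的多行状态更新压缩成一个止赎标签, 再与采集数据合并, 最后用 scikit-learn 训练一个输出概率的模型作为“分数”。下面是一个极简的示意(玩具数据、虚构列名, 并非真实的 Fannie Mae 字段, 真实项目的特征和数据处理远比这复杂):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# 采集数据:每笔贷款一行,只包含采集时已知的信息(这里仅用信用分)
acquisition = pd.DataFrame({
    "loan_id": [1, 2, 3, 4, 5, 6],
    "credit_score": [580, 600, 620, 700, 720, 740],
})
# 性能数据:每笔贷款每季度一行;只要任一季度进入止赎状态就记为 1
performance = pd.DataFrame({
    "loan_id":     [1, 1, 2, 2, 3, 4, 5, 6],
    "foreclosure": [0, 1, 0, 1, 1, 0, 0, 0],
})

# 把多行状态更新压缩成每笔贷款一个标签,再并回采集数据
labels = performance.groupby("loan_id")["foreclosure"].max().reset_index()
dataset = acquisition.merge(labels, on="loan_id")

# 只用采集时已知的特征训练一个最简单的打分模型
model = LogisticRegression()
model.fit(dataset[["credit_score"]], dataset["foreclosure"])

# “分数”即模型给出的止赎概率,可据此建议买还是不买这笔贷款
score_low, score_high = model.predict_proba(
    pd.DataFrame({"credit_score": [560, 760]})
)[:, 1]
print(round(score_low, 2), round(score_high, 2))
```

在这份玩具数据上, 信用分越低的贷款得到的止赎概率越高, 这正是“打分”要传达的信息.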
From 7dda4e1fff6207b0fe7cc3e3cbf300df3d7709b6 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Aug 2016 23:02:36 +0800 Subject: [PATCH 421/471] =?UTF-8?q?PUB:20160406=20Let=E2=80=99s=20Build=20?= =?UTF-8?q?A=20Web=20Server.=20Part=202?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @StdioA --- ...160406 Let’s Build A Web Server. Part 2.md | 138 +++++++++--------- 1 file changed, 72 insertions(+), 66 deletions(-) rename {translated/tech => published}/20160406 Let’s Build A Web Server. Part 2.md (64%) diff --git a/translated/tech/20160406 Let’s Build A Web Server. Part 2.md b/published/20160406 Let’s Build A Web Server. Part 2.md similarity index 64% rename from translated/tech/20160406 Let’s Build A Web Server. Part 2.md rename to published/20160406 Let’s Build A Web Server. Part 2.md index 2bbf783c51..38c09e9915 100644 --- a/translated/tech/20160406 Let’s Build A Web Server. Part 2.md +++ b/published/20160406 Let’s Build A Web Server. Part 2.md @@ -1,9 +1,9 @@ 搭个 Web 服务器(二) =================================== -在第一部分中,我提出了一个问题:“你要如何在不对程序做任何改动的情况下,在你刚刚搭建起来的 Web 服务器上适配 Django, Flask 或 Pyramid 应用呢?”我们可以从这一篇中找到答案。 +在[第一部分][1]中,我提出了一个问题:“如何在你刚刚搭建起来的 Web 服务器上适配 Django, Flask 或 Pyramid 应用,而不用单独对 Web 服务器做做出改动以适应各种不同的 Web 框架呢?”我们可以从这一篇中找到答案。 -曾几何时,你对 Python Web 框架种类作出的选择会对可用的 Web 服务器类型造成限制,反之亦然。如果框架及服务器在设计层面可以一起工作(相互适配),那么一切正常: +曾几何时,你所选择的 Python Web 框架会限制你所可选择的 Web 服务器,反之亦然。如果某个框架及服务器设计用来协同工作的,那么一切正常: ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_before_wsgi.png) @@ -13,22 +13,22 @@ 基本上,你需要选择那些能够一起工作的框架和服务器,而不能选择你想用的那些。 -所以,你该如何确保在不对代码做任何更改的情况下,让你的 Web 服务器和多个不同的 Web 框架一同工作呢?这个问题的答案,就是 Python Web 服务器网关接口(缩写为 WSGI,念做“wizgy”)。 +所以,你该如何确保在不对 Web 服务器或框架的代码做任何更改的情况下,让你的 Web 服务器和多个不同的 Web 框架一同工作呢?这个问题的答案,就是 Python Web 服务器网关接口(Web Server Gateway Interface )(缩写为 [WSGI][2],念做“wizgy”)。 ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_idea.png) -WSGI 允许开发者互不干扰地选择 Web 框架及 Web 服务器的类型。现在,你可以真正将 Web 
服务器及框架任意搭配,然后选出你最中意的那对组合。比如,你可以使用 Django,Flask 或者 Pyramid,与 Gunicorn,Nginx/uWSGI 或 Waitress 进行结合。感谢 WSGI 同时对服务器与框架的支持,我们可以真正随意选择它们的搭配了。 +WSGI 允许开发者互不干扰地选择 Web 框架及 Web 服务器的类型。现在,你可以真正将 Web 服务器及框架任意搭配,然后选出你最中意的那对组合。比如,你可以使用 [Django][3],[Flask][4] 或者 [Pyramid][5],与 [Gunicorn][6],[Nginx/uWSGI][7] 或 [Waitress][8] 进行结合。感谢 WSGI 同时对服务器与框架的支持,我们可以真正随意选择它们的搭配了。 ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interop.png) 所以,WSGI 就是我在第一部分中提出,又在本文开头重复了一遍的那个问题的答案。你的 Web 服务器必须实现 WSGI 接口的服务器部分,而现代的 Python Web 框架均已实现了 WSGI 接口的框架部分,这使得你可以直接在 Web 服务器中使用任意框架,而不需要更改任何服务器代码,以对特定的 Web 框架实现兼容。 -现在,你一个知道 Web 服务器及 Web 框架对 WSGI 的支持使得你可以选择最合适的一对来使用,而且它对服务器和框架的开发者依然有益,因为他们只需专注于他们擅长的部分来进行开发,而不需要触及另一部分的代码。其它语言也拥有类似的接口,比如:Java 拥有 Servlet API,而 Ruby 拥有 Rack. +现在,你已经知道 Web 服务器及 Web 框架对 WSGI 的支持使得你可以选择最合适的一对来使用,而且它也有利于服务器和框架的开发者,这样他们只需专注于其擅长的部分来进行开发,而不需要触及另一部分的代码。其它语言也拥有类似的接口,比如:Java 拥有 Servlet API,而 Ruby 拥有 Rack。 -这些理论都不错,但是我打赌你在说:“给我看代码!” 那好,我们来看看下面这个很小的 WSGI 服务器实现: +这些理论都不错,但是我打赌你在说:“Show me the code!” 那好,我们来看看下面这个很小的 WSGI 服务器实现: ``` -# 使用 Python 2.7.9,在 Linux 及 Mac OS X 下测试通过 +### 使用 Python 2.7.9,在 Linux 及 Mac OS X 下测试通过 import socket import StringIO import sys @@ -41,22 +41,22 @@ class WSGIServer(object): request_queue_size = 1 def __init__(self, server_address): - # Create a listening socket + ### 创建一个监听的套接字 self.listen_socket = listen_socket = socket.socket( self.address_family, self.socket_type ) - # 允许复用同一地址 + ### 允许复用同一地址 listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - # 绑定地址 + ### 绑定地址 listen_socket.bind(server_address) - # 激活套接字 + ### 激活套接字 listen_socket.listen(self.request_queue_size) - # 获取主机名称及端口 + ### 获取主机的名称及端口 host, port = self.listen_socket.getsockname()[:2] self.server_name = socket.getfqdn(host) self.server_port = port - # 由 Web 框架/应用设定的响应头部字段 + ### 返回由 Web 框架/应用设定的响应头部字段 self.headers_set = [] def set_app(self, application): @@ -65,14 +65,14 @@ class WSGIServer(object): def serve_forever(self): listen_socket = self.listen_socket while True: - # 
获取新的客户端连接 + ### 获取新的客户端连接 self.client_connection, client_address = listen_socket.accept() - # 处理一条请求后关闭连接,然后循环等待另一个连接建立 + ### 处理一条请求后关闭连接,然后循环等待另一个连接建立 self.handle_one_request() def handle_one_request(self): self.request_data = request_data = self.client_connection.recv(1024) - # 以 'curl -v' 的风格输出格式化请求数据 + ### 以 'curl -v' 的风格输出格式化请求数据 print(''.join( '< {line}\n'.format(line=line) for line in request_data.splitlines() @@ -80,20 +80,20 @@ class WSGIServer(object): self.parse_request(request_data) - # 根据请求数据构建环境字典 + ### 根据请求数据构建环境变量字典 env = self.get_environ() - # 此时需要调用 Web 应用来获取结果, - # 取回的结果将成为 HTTP 响应体 + ### 此时需要调用 Web 应用来获取结果, + ### 取回的结果将成为 HTTP 响应体 result = self.application(env, self.start_response) - # 构造一个响应,回送至客户端 + ### 构造一个响应,回送至客户端 self.finish_response(result) def parse_request(self, text): request_line = text.splitlines()[0] request_line = request_line.rstrip('\r\n') - # 将请求行分开成组 + ### 将请求行分成几个部分 (self.request_method, # GET self.path, # /hello self.request_version # HTTP/1.1 @@ -101,10 +101,10 @@ class WSGIServer(object): def get_environ(self): env = {} - # 以下代码段没有遵循 PEP8 规则,但它这样排版,是为了通过强调 - # 所需变量及它们的值,来达到其展示目的。 - # - # WSGI 必需变量 + ### 以下代码段没有遵循 PEP8 规则,但这样排版,是为了通过强调 + ### 所需变量及它们的值,来达到其展示目的。 + ### + ### WSGI 必需变量 env['wsgi.version'] = (1, 0) env['wsgi.url_scheme'] = 'http' env['wsgi.input'] = StringIO.StringIO(self.request_data) @@ -112,7 +112,7 @@ class WSGIServer(object): env['wsgi.multithread'] = False env['wsgi.multiprocess'] = False env['wsgi.run_once'] = False - # CGI 必需变量 + ### CGI 必需变量 env['REQUEST_METHOD'] = self.request_method # GET env['PATH_INFO'] = self.path # /hello env['SERVER_NAME'] = self.server_name # localhost @@ -120,16 +120,16 @@ class WSGIServer(object): return env def start_response(self, status, response_headers, exc_info=None): - # 添加必要的服务器头部字段 + ### 添加必要的服务器头部字段 server_headers = [ ('Date', 'Tue, 31 Mar 2015 12:54:48 GMT'), ('Server', 'WSGIServer 0.2'), ] self.headers_set = [status, response_headers + server_headers] - # 为了遵循 WSGI 
协议,start_response 函数必须返回一个 'write' - # 可调用对象(返回值.write 可以作为函数调用)。为了简便,我们 - # 在这里无视这个细节。 - # return self.finish_response + ### 为了遵循 WSGI 协议,start_response 函数必须返回一个 'write' + ### 可调用对象(返回值.write 可以作为函数调用)。为了简便,我们 + ### 在这里无视这个细节。 + ### return self.finish_response def finish_response(self, result): try: @@ -140,7 +140,7 @@ class WSGIServer(object): response += '\r\n' for data in result: response += data - # 以 'curl -v' 的风格输出格式化请求数据 + ### 以 'curl -v' 的风格输出格式化请求数据 print(''.join( '> {line}\n'.format(line=line) for line in response.splitlines() @@ -149,16 +149,13 @@ class WSGIServer(object): finally: self.client_connection.close() - SERVER_ADDRESS = (HOST, PORT) = '', 8888 - def make_server(server_address, application): server = WSGIServer(server_address) server.set_app(application) return server - if __name__ == '__main__': if len(sys.argv) < 2: sys.exit('Provide a WSGI application object as module:callable') @@ -173,14 +170,14 @@ if __name__ == '__main__': 当然,这段代码要比第一部分的服务器代码长不少,但它仍然很短(只有不到 150 行),你可以轻松理解它,而不需要深究细节。上面的服务器代码还可以做更多——它可以用来运行一些你喜欢的框架写出的 Web 应用,可以是 Pyramid,Flask,Django 或其它 Python WSGI 框架。 -不相信吗?自己来试试看吧。把以上的代码保存为 `webserver2.py`,或直接从 Github 上下载它。如果你打算不加任何参数而直接运行它,它会抱怨一句,然后退出。 +不相信吗?自己来试试看吧。把以上的代码保存为 `webserver2.py`,或直接从 [Github][9] 上下载它。如果你打算不加任何参数而直接运行它,它会抱怨一句,然后退出。 ``` $ python webserver2.py Provide a WSGI application object as module:callable ``` -它想做的其实是为你的 Web 应用服务,而这才是重头戏。为了运行这个服务器,你只需要安装 Python。不过,如果你希望运行 Pyramid,Flask 或 Django 应用,你还需要先安装那些框架。那我们把这三个都装上吧。我推荐的安装方式是通过 `virtualenv` 安装。按照以下几步来做,你就可以创建并激活一个虚拟环境,并在其中安装以上三个 Web 框架。 +它想做的其实是为你的 Web 应用服务,而这才是重头戏。为了运行这个服务器,你唯一需要的就是安装好 Python。不过,如果你希望运行 Pyramid,Flask 或 Django 应用,你还需要先安装那些框架。那我们把这三个都装上吧。我推荐的安装方式是通过 `virtualenv` 安装。按照以下几步来做,你就可以创建并激活一个虚拟环境,并在其中安装以上三个 Web 框架。 ``` $ [sudo] pip install virtualenv @@ -195,7 +192,7 @@ $ source bin/activate (lsbaws) $ pip install django ``` -现在,你需要创建一个 Web 应用。我们先从 Pyramid 开始吧。把以下代码保存为 `pyramidapp.py`,并与刚刚的 `webserver2.py` 放置在同一目录,或直接从 Github 下载该文件: +现在,你需要创建一个 Web 
应用。我们先从 Pyramid 开始吧。把以下代码保存为 `pyramidapp.py`,并与刚刚的 `webserver2.py` 放置在同一目录,或直接从 [Github][10] 下载该文件: ``` from pyramid.config import Configurator @@ -221,7 +218,7 @@ app = config.make_wsgi_app() WSGIServer: Serving HTTP on port 8888 ... ``` -你刚刚让你的服务器去加载 Python 模块 `pyramidapp` 中的可执行对象 `app`。现在你的服务器可以接收请求,并将它们转发到 Pyramid 应用中了。在浏览器中输入 http://localhost:8888/hello ,敲一下回车,然后看看结果: +你刚刚让你的服务器去加载 Python 模块 `pyramidapp` 中的可执行对象 `app`。现在你的服务器可以接收请求,并将它们转发到你的 Pyramid 应用中了。在浏览器中输入 http://localhost:8888/hello ,敲一下回车,然后看看结果: ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_pyramid.png) @@ -252,8 +249,7 @@ def hello_world(): app = flask_app.wsgi_app ``` -将以上代码保存为 `flaskapp.py`,或者直接从 Github - 下载,然后输入以下命令运行服务器: +将以上代码保存为 `flaskapp.py`,或者直接从 [Github][11] 下载,然后输入以下命令运行服务器: ``` (lsbaws) $ python webserver2.py flaskapp:app @@ -271,7 +267,7 @@ $ curl -v http://localhost:8888/hello ... ``` -这个服务器能处理 Django 应用吗?试试看吧!不过这个任务可能有点复杂,所以我建议你将整个仓库克隆下来,然后使用 Github 仓库中的 `djangoapp.py` 来完成这个实验。下面是它的源代码,你可以看到它将 Django `helloword` 工程(已使用 `Django` 的 `django-admin.py startproject` 命令创建完毕)添加到了当前的 Python 路径中,然后导入了这个工程的 WSGI 应用。 +这个服务器能处理 Django 应用吗?试试看吧!不过这个任务可能有点复杂,所以我建议你将整个仓库克隆下来,然后使用 [Github][13] 仓库中的 [djangoapp.py][12] 来完成这个实验。这里的源代码主要是将 Django 的 helloworld 工程(已使用 `Django` 的 `django-admin.py startproject` 命令创建完毕)添加到了当前的 Python 路径中,然后导入了这个工程的 WSGI 应用。(LCTT 译注:除了这里展示的代码,还需要一个配合的 helloworld 工程才能工作,代码可以参见 [Github][13] 仓库。) ``` import sys @@ -300,25 +296,25 @@ $ curl -v http://localhost:8888/hello ... 
``` -你试过了吗?你确定这个服务器可以与那三个框架搭配工作吗?如果没试,请去试一下。阅读固然重要,但这个系列的内容是重建,这意味着你需要亲自动手干点活。去试一下吧。别担心,我等着你呢。不开玩笑,你真的需要试一下,亲自尝试每一步,并确保它像预期的那样工作。 +你试过了吗?你确定这个服务器可以与那三个框架搭配工作吗?如果没试,请去试一下。阅读固然重要,但这个系列的内容是**重新搭建**,这意味着你需要亲自动手干点活。去试一下吧。别担心,我等着你呢。不开玩笑,你真的需要试一下,亲自尝试每一步,并确保它像预期的那样工作。 好,你已经体验到了 WSGI 的威力:它可以使 Web 服务器及 Web 框架随意搭配。WSGI 在 Python Web 服务器及框架之间提供了一个微型接口。它非常简单,而且在服务器和框架端均可以轻易实现。下面的代码片段展示了 WSGI 接口的服务器及框架端实现: ``` def run_application(application): """服务器端代码。""" - # Web 应用/框架在这里存储 HTTP 状态码以及 HTTP 响应头部, - # 服务器会将这些信息传递给客户端 + ### Web 应用/框架在这里存储 HTTP 状态码以及 HTTP 响应头部, + ### 服务器会将这些信息传递给客户端 headers_set = [] - # 用于存储 WSGI/CGI 环境变量的字典 + ### 用于存储 WSGI/CGI 环境变量的字典 environ = {} def start_response(status, response_headers, exc_info=None): headers_set[:] = [status, response_headers] - # 服务器唤醒可执行变量“application”,获得响应头部 + ### 服务器唤醒可执行变量“application”,获得响应头部 result = application(environ, start_response) - # 服务器组装一个 HTTP 响应,将其传送至客户端 + ### 服务器组装一个 HTTP 响应,将其传送至客户端 … def app(environ, start_response): @@ -331,18 +327,18 @@ run_application(app) 这是它的工作原理: -1. Web 框架提供一个可调用对象 `application` (WSGI 规范没有规定它的实现方式) -2. Web 服务器每次收到来自客户端的 HTTP 请求后,会唤醒可调用对象 `applition`。它会向该对象传递一个包含 WSGI/CGI 变量的字典,以及一个可调用对象 `start_response` -3. Web 框架或应用生成 HTTP 状态码、HTTP 响应头部,然后将它传给 `start_response` 函数,服务器会将其存储起来。同时,Web 框架或应用也会返回 HTTP 响应正文。 +1. Web 框架提供一个可调用对象 `application` (WSGI 规范没有规定它的实现方式)。 +2. Web 服务器每次收到来自客户端的 HTTP 请求后,会唤醒可调用对象 `applition`。它会向该对象传递一个包含 WSGI/CGI 变量的环境变量字典 `environ`,以及一个可调用对象 `start_response`。 +3. Web 框架或应用生成 HTTP 状态码和 HTTP 响应头部,然后将它传给 `start_response` 函数,服务器会将其存储起来。同时,Web 框架或应用也会返回 HTTP 响应正文。 4. 
服务器将状态码、响应头部及响应正文组装成一个 HTTP 响应,然后将其传送至客户端(这一步并不在 WSGI 规范中,但从逻辑上讲,这一步应该包含在工作流程之中。所以为了明确这个过程,我把它写了出来) 这是这个接口规范的图形化表达: ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_wsgi_interface.png) -到现在为止,你已经看过了用 Pyramid, Flask 和 Django 写出的 Web 应用的代码,你也看到了一个 Web 服务器如何用代码来实现另一半(服务器端的) WSGI 规范。你甚至还看到了我们如何在不使用任何框架的情况下,使用一段代码来实现一个最简单的 WSGI Web 应用。 +到现在为止,你已经看过了用 Pyramid、Flask 和 Django 写出的 Web 应用的代码,你也看到了一个 Web 服务器如何用代码来实现另一半(服务器端的) WSGI 规范。你甚至还看到了我们如何在不使用任何框架的情况下,使用一段代码来实现一个最简单的 WSGI Web 应用。 -其实,当你使用上面的框架编写一个 Web 应用时,你只是在较高的层面工作,而不需要直接与 WSGI 打交道。但是我知道你一定也对 WSGI 接口的框架部分,因为你在看这篇文章呀。所以,我们不用 Pyramid, Flask 或 Django,而是自己动手来创造一个最朴素的 WSGI Web 应用(或 Web 框架),然后将它和你的服务器一起运行: +其实,当你使用上面的框架编写一个 Web 应用时,你只是在较高的层面工作,而不需要直接与 WSGI 打交道。但是我知道你一定也对 WSGI 接口的框架部分感兴趣,因为你在看这篇文章呀。所以,我们不用 Pyramid、Flask 或 Django,而是自己动手来创造一个最朴素的 WSGI Web 应用(或 Web 框架),然后将它和你的服务器一起运行: ``` def app(environ, start_response): @@ -356,7 +352,7 @@ def app(environ, start_response): return ['Hello world from a simple WSGI application!\n'] ``` -同样,将上面的代码保存至 `wsgiapp.py` 或直接从 Github 上下载该文件,然后在 Web 服务器上运行这个应用,像这样: +同样,将上面的代码保存为 `wsgiapp.py` 或直接从 [Github][14] 上下载该文件,然后在 Web 服务器上运行这个应用,像这样: ``` (lsbaws) $ python webserver2.py wsgiapp:app @@ -367,13 +363,13 @@ WSGIServer: Serving HTTP on port 8888 ... ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_browser_simple_wsgi_app.png) -你刚刚在学习如何创建一个 Web 服务器的过程中,自己编写了一个最朴素的 WSGI Web 框架!棒极了! +你刚刚在学习如何创建一个 Web 服务器的过程中自己编写了一个最朴素的 WSGI Web 框架!棒极了! 
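顺带一提, 既然 `app` 只是一个遵循 WSGI 约定的普通可调用对象, 你甚至可以不启动任何服务器, 手工构造 `environ` 字典并提供 `start_response`, 直接调用它来做测试. 下面是一个基于 Python 3 标准库 `wsgiref.util` 的小示意(文中代码基于 Python 2, 在 Python 3 下响应体需要是字节串):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-Type', 'text/plain')]
    start_response(status, response_headers)
    return [b'Hello world from a simple WSGI application!\n']

# 手工扮演“服务器端”:构造 environ 字典并提供 start_response 可调用对象
environ = {}
setup_testing_defaults(environ)   # 填入 WSGI 规范要求的最小变量集
collected = {}

def start_response(status, headers, exc_info=None):
    collected['status'] = status
    collected['headers'] = headers

body = b''.join(app(environ, start_response))
print(collected['status'], body)
```

这也说明了 WSGI 接口为什么便于测试: 框架部分对“服务器”唯一的要求就是这两个参数.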
现在,我们再回来看看服务器传给客户端的那些东西。这是在使用 HTTP 客户端调用你的 Pyramid 应用时,服务器生成的 HTTP 响应内容: ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_http_response.png) -这个响应和你在本系列第一部分中看到的 HTTP 响应有一部分共同点,但它还多出来了一些内容。比如说,它拥有四个你曾经没见过的 HTTP 头部:`Content-Type`, `Content-Length`, `Date` 以及 `Server`。这些头部内容基本上在每个 Web 服务器返回的响应中都会出现。不过,它们都不是被严格要求出现的。HTTP 请求/响应头部字段的目的,在于它可以向你传递一些关于 HTTP 请求/响应的额外信息。 +这个响应和你在本系列第一部分中看到的 HTTP 响应有一部分共同点,但它还多出来了一些内容。比如说,它拥有四个你曾经没见过的 [HTTP 头部][15]:`Content-Type`, `Content-Length`, `Date` 以及 `Server`。这些头部内容基本上在每个 Web 服务器返回的响应中都会出现。不过,它们都不是被严格要求出现的。这些 HTTP 请求/响应头部字段的目的在于它可以向你传递一些关于 HTTP 请求/响应的额外信息。 既然你对 WSGI 接口了解的更深了一些,那我再来展示一下上面那个 HTTP 响应中的各个部分的信息来源: @@ -381,7 +377,7 @@ WSGIServer: Serving HTTP on port 8888 ... 我现在还没有对上面那个 `environ` 字典做任何解释,不过基本上这个字典必须包含那些被 WSGI 规范事先定义好的 WSGI 及 CGI 变量值。服务器在解析 HTTP 请求时,会从请求中获取这些变量的值。这是 `environ` 字典应该有的样子: -![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_environ.png) +![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_environ.png) Web 框架会利用以上字典中包含的信息,通过字典中的请求路径、请求动作等等来决定使用哪个视图来处理响应、在哪里读取请求正文、在哪里输出错误信息(如果有的话)。 @@ -397,13 +393,13 @@ Web 框架会利用以上字典中包含的信息,通过字典中的请求路 ![](https://ruslanspivak.com/lsbaws-part2/lsbaws_part2_server_summary.png) -这基本上是服务器要做的全部内容了。你现在有了一个可以正常工作的 WSGI 服务器,它可以为使用任何遵循 WSGI 规范的 Web 框架(如 Django, Flask, Pyramid, 还有你刚刚自己写的那个框架)构建出的 Web 应用服务。最棒的部分在于,它可以在不用更改任何服务器代码的情况下,与多个不同的 Web 框架一起工作。真不错。 +这基本上是服务器要做的全部内容了。你现在有了一个可以正常工作的 WSGI 服务器,它可以为使用任何遵循 WSGI 规范的 Web 框架(如 Django、Flask、Pyramid,还有你刚刚自己写的那个框架)构建出的 Web 应用服务。最棒的部分在于,它可以在不用更改任何服务器代码的情况下,与多个不同的 Web 框架一起工作。真不错。 在结束之前,你可以想想这个问题:“你该如何让你的服务器在同一时间处理多个请求呢?” 敬请期待,我会在第三部分向你展示一种解决这个问题的方法。干杯! 
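在等待第三部分之前, 还可以用 Python 3 标准库自带的 `wsgiref.simple_server` 亲手验证一下上面的结论: 同一个 WSGI 应用无需任何改动, 就能换一个服务器运行. 下面是一个小示意(假设 Python 3.6 以上环境, 应用就是前文那个最朴素的例子, 只是响应体改为 Python 3 要求的字节串):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello world from a simple WSGI application!\n']

# 端口写 0 让操作系统自动挑选一个空闲端口
with make_server('127.0.0.1', 0, app) as server:
    port = server.server_address[1]
    # 在后台线程里处理一次请求,主线程充当 HTTP 客户端
    t = threading.Thread(target=server.handle_request)
    t.start()
    body = urllib.request.urlopen(f'http://127.0.0.1:{port}/hello').read()
    t.join()

print(body)
```

换成文中自己写的 `webserver2.py` 或者 Gunicorn, `app` 本身一行都不用改, 这正是 WSGI 带来的可替换性.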
-顺便,我在撰写一本名为《搭个 Web 服务器:从头开始》的书。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。订阅邮件列表,你就可以获取到这本书的最新进展,以及发布日期。 +顺便,我在撰写一本名为《搭个 Web 服务器:从头开始》的书。这本书讲解了如何从头开始编写一个基本的 Web 服务器,里面包含本文中没有的更多细节。[订阅邮件列表][16],你就可以获取到这本书的最新进展,以及发布日期。 -------------------------------------------------------------------------------- @@ -411,14 +407,24 @@ via: https://ruslanspivak.com/lsbaws-part2/ 作者:[Ruslan][a] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://github.com/rspivak/ - - - - - - +[1]: https://linux.cn/article-7662-1.html +[2]: https://www.python.org/dev/peps/pep-0333/ +[3]: https://www.djangoproject.com/ +[4]: http://flask.pocoo.org/ +[5]: http://trypyramid.com/ +[6]: http://gunicorn.org/ +[7]: http://uwsgi-docs.readthedocs.org/ +[8]: http://waitress.readthedocs.org/ +[9]: https://github.com/rspivak/lsbaws/blob/master/part2/webserver2.py +[10]: https://github.com/rspivak/lsbaws/blob/master/part2/pyramidapp.py +[11]: https://github.com/rspivak/lsbaws/blob/master/part2/flaskapp.py +[12]: https://github.com/rspivak/lsbaws/blob/master/part2/flaskapp.py +[13]: https://github.com/rspivak/lsbaws/ +[14]: https://github.com/rspivak/lsbaws/blob/master/part2/wsgiapp.py +[15]: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields +[16]: https://ruslanspivak.com/lsbaws-part2/ \ No newline at end of file From 078175fa5e35af466797e27a496c9424e3124117 Mon Sep 17 00:00:00 2001 From: may Date: Tue, 16 Aug 2016 09:03:48 +0800 Subject: [PATCH 422/471] =?UTF-8?q?=E5=86=8D=E6=9D=A5=E7=94=B3=E8=AF=B7?= =?UTF-8?q?=EF=BC=81=20(#4315)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * translating by maywanting * finish translating * translating start * delete the file --- sources/tech/20160809 How to build your own Git server.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160809 How 
to build your own Git server.md b/sources/tech/20160809 How to build your own Git server.md index ac5b229d96..7f75c33195 100644 --- a/sources/tech/20160809 How to build your own Git server.md +++ b/sources/tech/20160809 How to build your own Git server.md @@ -1,3 +1,5 @@ +translating by maywanting + How to build your own Git server ==================== From c99f1586c608a7e950c90ec6743eca4cf5a42947 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 16 Aug 2016 10:57:43 +0800 Subject: [PATCH 423/471] PUB:Part 2 - LXD 2.0--Installing and configuring LXD MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @oska874 @PurlingNayuki 这个系列居然有12篇之多。。 --- ...LXD 2.0--Installing and configuring LXD.md | 43 ++++++++----------- 1 file changed, 19 insertions(+), 24 deletions(-) rename {translated/tech => published}/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md (78%) diff --git a/translated/tech/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md b/published/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md similarity index 78% rename from translated/tech/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md rename to published/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md index fe386f3229..a523ffa1e6 100644 --- a/translated/tech/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md +++ b/published/LXD/Part 2 - LXD 2.0--Installing and configuring LXD.md @@ -1,4 +1,4 @@ -Part 2 - LXD 2.0: 安装与配置 +LXD 2.0 系列(二):安装与配置 ================================================= 这是 LXD 2.0 [系列介绍文章][2]的第二篇。 @@ -11,7 +11,7 @@ Part 2 - LXD 2.0: 安装与配置 #### Ubuntu 标准版 -所有新发布的 LXD 都会在发布几分钟后上传到 Ubuntu 开发版的安装源里。这个安装包然后就会当作种子给全部其他的安装包源,供 Ubuntu 用户使用。 +所有新发布的 LXD 都会在发布几分钟后上传到 Ubuntu 开发版的安装源里。这个安装包然后就会作为 Ubuntu 用户的其他安装包源的种子。 如果使用 Ubuntu 16.04,可以直接安装: @@ -54,7 +54,7 @@ sudo emerge --ask lxd #### 使用源代码安装 -如果你曾经编译过 Go 语言的项目,那么从源代码编译 LXD 并不是十分困难。然而注意,你需要 LXC 的开发头文件。为了运行 LXD, 你的发布版需也要使用比较新的内核(最起码是 3.13)、比较新的 LXC (1.1.4 或更高版本)、LXCFS 以及支持用户子 uid/gid 
分配的 shadow。 +如果你曾经编译过 Go 语言的项目,那么从源代码编译 LXD 并不是十分困难。然而注意,你需要 LXC 的开发头文件。为了运行 LXD, 你的发布版需也要使用比较新的内核(最起码是 3.13)、比较新的 LXC (1.1.4 或更高版本)、LXCFS 以及支持用户子 uid/gid 分配的 shadow 文件。 从源代码编译 LXD 的最新教程可以在[上游 README][2]里找到。 @@ -76,13 +76,13 @@ sudo lxd init ### 存储后端 -LXD 提供了许多集中存储后端。在开始使用 LXD 之前,你应该决定将要使用的后端,因为我们不支持在后端之间迁移已经生成的容器。 +LXD 提供了几种存储后端。在开始使用 LXD 之前,你应该决定将要使用的后端,因为我们不支持在后端之间迁移已经生成的容器。 各个[后端特性比较表][3]可以在[这里][3]找到。 #### ZFS -我们的推荐是 ZFS, 因为它能支持 LXD 的全部特性,同时提供最快和最可靠的容器体验。它包括了以容器为单位的磁盘配额,即时快照和恢复,优化了的迁移(发送/接收),以及快速从镜像创建容器的能力。它同时也被认为要比 btrfs 更成熟。 +我们的推荐是 ZFS, 因为它能支持 LXD 的全部特性,同时提供最快和最可靠的容器体验。它包括了以容器为单位的磁盘配额、即时快照和恢复、优化后的迁移(发送/接收),以及快速从镜像创建容器的能力。它同时也被认为要比 btrfs 更成熟。 要和 LXD 一起使用 ZFS ,你需要首先在你的系统上安装 ZFS。 @@ -112,11 +112,11 @@ sudo apt install ubuntu-zfs sudo lxd init ``` -这条命令接下来会向你提问一下一些 ZFS 的配置细节,然后为你配置好 ZFS。 +这条命令接下来会向你提问一些 ZFS 的配置细节,然后为你配置好 ZFS。 #### btrfs -如果 ZFS 不可用,那么 btrfs 可以提供相同级别的集成,但不会合理地报告容器内的磁盘使用情况(虽然配额仍然可用)。 +如果 ZFS 不可用,那么 btrfs 可以提供相同级别的集成,但不能正确地报告容器内的磁盘使用情况(虽然配额仍然可用)。 btrfs 同时拥有很好的嵌套属性,而这是 ZFS 所不具有的。也就是说如果你计划在 LXD 中再使用 LXD,那么 btrfs 就很值得你考虑。 @@ -126,14 +126,13 @@ btrfs 同时拥有很好的嵌套属性,而这是 ZFS 所不具有的。也就 如果 ZFS 和 btrfs 都不是你想要的,你还可以考虑使用 LVM 以获得部分特性。 LXD 会以自动精简配置的方式使用 LVM,为每个镜像和容器创建 LV,如果需要的话也会使用 LVM 的快照功能。 -要配置 LXD 使用 LVM,需要创建一个 LVM VG,然后运行: - +要配置 LXD 使用 LVM,需要创建一个 LVM 卷组,然后运行: ``` lxc config set storage.lvm_vg_name "THE-NAME-OF-YOUR-VG" ``` -默认情况下 LXD 使用 ext4 作为全部 LV 的文件系统。如果你喜欢的话可以改成 XFS: +默认情况下 LXD 使用 ext4 作为全部逻辑卷的文件系统。如果你喜欢的话可以改成 XFS: ``` lxc config set storage.lvm_fstype xfs @@ -151,7 +150,7 @@ LXD 守护进程的完整配置项列表可以在[这里找到][4]。 #### 网络配置 -默认情况下 LXD 不会监听网络。和它通信的唯一办法是通过 `/var/lib/lxd/unix.socket` 使用本地 unix socket 进行通信。 +默认情况下 LXD 不会监听网络。和它通信的唯一办法是通过 `/var/lib/lxd/unix.socket` 使用本地 unix 套接字进行通信。 要让 LXD 监听网络,下面有两个有用的命令: @@ -160,11 +159,11 @@ lxc config set core.https_address [::] lxc config set core.trust_password some-secret-string ``` -第一条命令将 LXD 绑定到 IPv6 地址 “::”,也就是监听机器的所有 IPv6 地址。你可以显式的使用一个特定的 IPv4 或者 IPv6 地址替代默认地址,如果你想绑定 TCP 端口(默认是 8443)的话可以在地址后面添加端口号即可。 +第一条命令将 LXD 绑定到 IPv6 
地址 “::”,也就是监听机器的所有 IPv6 地址。你可以显式的使用一个特定的 IPv4 或者 IPv6 地址替代默认地址,如果你想绑定某个 TCP 端口(默认是 8443)的话可以在地址后面添加端口号即可。 -第二条命令设置了密码,用于让远程客户端用来把自己添加到 LXD 可信证书中心。如果已经给主机设置了密码,当添加 LXD 主机时会提示输入密码,LXD 守护进程会保存他们的客户端的证书以确保客户端是可信的,这样就不需要再次输入密码(可以随时设置和取消)。 +第二条命令设置了密码,用于让远程客户端把自己添加到 LXD 可信证书中心。如果已经给主机设置了密码,当添加 LXD 主机时会提示输入密码,LXD 守护进程会保存他们的客户端证书以确保客户端是可信的,这样就不需要再次输入密码(可以随时设置和取消)。 -你也可以选择不设置密码,然后通过给每个客户端发送“client.crt”(来自于 `~/.config/lxc`)文件,然后把它添加到你自己的可信中信来实现人工验证每个新客户端是否可信,可以使用下面的命令: +你也可以选择不设置密码,而是人工验证每个新客户端是否可信——让每个客户端发送“client.crt”(来自于 `~/.config/lxc`)文件,然后把它添加到你自己的可信证书中心: ``` lxc config trust add client.crt @@ -186,7 +185,7 @@ lxc config set core.proxy_ignore_hosts image-server.local #### 镜像管理 -LXD 使用动态镜像缓存。当从远程镜像创建容器的时候,它会自动把镜像下载到本地镜像商店,同时标志为已缓存并记录来源。几天后(默认 10 天)如果某个镜像没有被使用过,那么它就会自动地被删除。每个几小时(默认是 6 小时)LXD 还会检查一下这个镜像是否有新版本,然后更新镜像的本地拷贝。 +LXD 使用动态镜像缓存。当从远程镜像创建容器的时候,它会自动把镜像下载到本地镜像商店,同时标志为已缓存并记录来源。几天后(默认 10 天)如果某个镜像没有被使用过,那么它就会自动地被删除。每隔几小时(默认是 6 小时)LXD 还会检查一下这个镜像是否有新版本,然后更新镜像的本地拷贝。 所有这些都可以通过下面的配置选项进行配置: @@ -196,8 +195,7 @@ lxc config set images.auto_update_interval 24 lxc config set images.auto_update_cached false ``` -这些命令让 LXD 修改了它的默认属性,缓存期替换为 5 天,更新间隔为 24 小时,而且只更新那些标记为自动更新的镜像(lxc 镜像拷贝被标记为 `–auto-update`)而不是 LXD 自动缓存的镜像。 - +这些命令让 LXD 修改了它的默认属性,缓存期替换为 5 天,更新间隔为 24 小时,而且只更新那些标记为自动更新(--auto-update)的镜像(lxc 镜像拷贝被标记为 `--auto-update`)而不是 LXD 自动缓存的镜像。 ### 总结 @@ -205,13 +203,10 @@ lxc config set images.auto_update_cached false ### 额外信息 -LXD 的主站在: - -LXD 的 GitHub 仓库: - -LXD 的邮件列表: - -LXD 的 IRC 频道: #lxcontainers on irc.freenode.net +- LXD 的主站在: +- LXD 的 GitHub 仓库: +- LXD 的邮件列表: +- LXD 的 IRC 频道: #lxcontainers on irc.freenode.net 如果你不想或者不能在你的机器上安装 LXD ,你可以[试试在线版的 LXD][1]。 From 83ef5767f9d23c52c05eebcc47c0345b48f9d0c4 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 16 Aug 2016 17:45:46 +0800 Subject: [PATCH 424/471] PUB:20151220 GCC-Inline-Assembly-HOWTO MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @cposture 辛苦了,好长! 
--- .../20151220 GCC-Inline-Assembly-HOWTO.md | 498 ++++++++++++++ .../20151220 GCC-Inline-Assembly-HOWTO.md | 632 ------------------ 2 files changed, 498 insertions(+), 632 deletions(-) create mode 100644 published/20151220 GCC-Inline-Assembly-HOWTO.md delete mode 100644 translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md diff --git a/published/20151220 GCC-Inline-Assembly-HOWTO.md b/published/20151220 GCC-Inline-Assembly-HOWTO.md new file mode 100644 index 0000000000..e89e7fa256 --- /dev/null +++ b/published/20151220 GCC-Inline-Assembly-HOWTO.md @@ -0,0 +1,498 @@ +* * * + +# GCC 内联汇编 HOWTO + +v0.1, 01 March 2003. +* * * + +_本 HOWTO 文档将讲解 GCC 提供的内联汇编特性的用途和用法。对于阅读这篇文章,这里只有两个前提要求,很明显,就是 x86 汇编语言和 C 语言的基本认识。_ + +* * * + +## 1. 简介 + +### 1.1 版权许可 + +Copyright (C)2003 Sandeep S. + +本文档自由共享;你可以重新发布它,并且/或者在遵循自由软件基金会发布的 GNU 通用公共许可证下修改它;也可以是该许可证的版本 2 或者(按照你的需求)更晚的版本。 + +发布这篇文档是希望它能够帮助别人,但是没有任何担保;甚至不包括可售性和适用于任何特定目的的担保。关于更详细的信息,可以查看 GNU 通用许可证。 + +### 1.2 反馈校正 + +请将反馈和批评一起提交给 [Sandeep.S](mailto:busybox@sancharnet.in) 。我将感谢任何一个指出本文档中错误和不准确之处的人;一被告知,我会马上改正它们。 + +### 1.3 致谢 + +我对提供如此棒的特性的 GNU 人们表示真诚的感谢。感谢 Mr.Pramode C E 所做的所有帮助。感谢在 Govt Engineering College 和 Trichur 的朋友们的精神支持和合作,尤其是 Nisha Kurur 和 Sakeeb S 。 感谢在 Gvot Engineering College 和 Trichur 的老师们的合作。 + +另外,感谢 Phillip , Brennan Underwood 和 colin@nyx.net ;这里的许多东西都厚颜地直接取自他们的工作成果。 + +* * * + +## 2. 概览 + +在这里,我们将学习 GCC 内联汇编。这里内联(inline)表示的是什么呢? + +我们可以要求编译器将一个函数的代码插入到调用者代码中函数被实际调用的地方。这样的函数就是内联函数。这听起来和宏差不多?这两者确实有相似之处。 + +内联函数的优点是什么呢? + +这种内联方法可以减少函数调用开销。同时如果所有实参的值为常量,它们的已知值可以在编译期允许简化,因此并非所有的内联函数代码都需要被包含进去。代码大小的影响是不可预测的,这取决于特定的情况。为了声明一个内联函数,我们必须在函数声明中使用 `inline` 关键字。 + +现在我们正处于一个猜测内联汇编到底是什么的点上。它只不过是一些写为内联函数的汇编程序。在系统编程上,它们方便、快速并且极其有用。我们主要集中学习(GCC)内联汇编函数的基本格式和用法。为了声明内联汇编函数,我们使用 `asm` 关键词。 + +内联汇编之所以重要,主要是因为它可以操作并且使其输出通过 C 变量显示出来。正是因为此能力, "asm" 可以用作汇编指令和包含它的 C 程序之间的接口。 + +* * * + +## 3. 
GCC 汇编语法 + +Linux上的 GNU C 编译器 GCC ,使用 **AT&T** / **UNIX** 汇编语法。在这里,我们将使用 AT&T 语法 进行汇编编码。如果你对 AT&T 语法不熟悉的话,请不要紧张,我会教你的。AT&T 语法和 Intel 语法的差别很大。我会给出主要的区别。 + +1. 源操作数和目的操作数顺序 + + AT&T 语法的操作数方向和 Intel 语法的刚好相反。在Intel 语法中,第一操作数为目的操作数,第二操作数为源操作数,然而在 AT&T 语法中,第一操作数为源操作数,第二操作数为目的操作数。也就是说, + + Intel 语法中的 `Op-code dst src` 变为 + + AT&T 语法中的 `Op-code src dst`。 + +2. 寄存器命名 + + 寄存器名称有 `%` 前缀,即如果必须使用 `eax`,它应该用作 `%eax`。 + +3. 立即数 + + AT&T 立即数以 `$` 为前缀。静态 "C" 变量也使用 `$` 前缀。在 Intel 语法中,十六进制常量以 `h` 为后缀,然而 AT&T 不使用这种语法,这里我们给常量添加前缀 `0x`。所以,对于十六进制,我们首先看到一个 `$`,然后是 `0x`,最后才是常量。 + +4. 操作数大小 + + 在 AT&T 语法中,存储器操作数的大小取决于操作码名字的最后一个字符。操作码后缀 ’b’ 、’w’、’l’ 分别指明了字节(byte)(8位)、字(word)(16位)、长型(long)(32位)存储器引用。Intel 语法通过给存储器操作数添加 `byte ptr`、 `word ptr` 和 `dword ptr` 前缀来实现这一功能。 + + 因此,Intel的 `mov al, byte ptr foo` 在 AT&T 语法中为 `movb foo, %al`。 + +5. 存储器操作数 + + 在 Intel 语法中,基址寄存器包含在 `[` 和 `]` 中,然而在 AT&T 中,它们变为 `(` 和 `)`。另外,在 Intel 语法中, 间接内存引用为 + + `section:[base + index*scale + disp]`,在 AT&T中变为 `section:disp(base, index, scale)`。 + + 需要牢记的一点是,当一个常量用于 disp 或 scale,不能添加 `$` 前缀。 + +现在我们看到了 Intel 语法和 AT&T 语法之间的一些主要差别。我仅仅写了它们差别的一部分而已。关于更完整的信息,请参考 GNU 汇编文档。现在为了更好地理解,我们可以看一些示例。 + +``` ++------------------------------+------------------------------------+ +| Intel Code | AT&T Code | ++------------------------------+------------------------------------+ +| mov eax,1 | movl $1,%eax | +| mov ebx,0ffh | movl $0xff,%ebx | +| int 80h | int $0x80 | +| mov ebx, eax | movl %eax, %ebx | +| mov eax,[ecx] | movl (%ecx),%eax | +| mov eax,[ebx+3] | movl 3(%ebx),%eax | +| mov eax,[ebx+20h] | movl 0x20(%ebx),%eax | +| add eax,[ebx+ecx*2h] | addl (%ebx,%ecx,0x2),%eax | +| lea eax,[ebx+ecx] | leal (%ebx,%ecx),%eax | +| sub eax,[ebx+ecx*4h-20h] | subl -0x20(%ebx,%ecx,0x4),%eax | ++------------------------------+------------------------------------+ +``` +* * * + +## 4. 
基本内联 + +基本内联汇编的格式非常直截了当。它的基本格式为 + +`asm("汇编代码");` + +示例 + +``` +asm("movl %ecx, %eax"); /* 将 ecx 寄存器的内容移至 eax */ +__asm__("movb %bh, (%eax)"); /* 将 bh 的一个字节数据 移至 eax 寄存器指向的内存 */ +``` + +你可能注意到了这里我使用了 `asm ` 和 `__asm__`。这两者都是有效的。如果关键词 `asm` 和我们程序的一些标识符冲突了,我们可以使用 `__asm__`。如果我们的指令多于一条,我们可以每条一行,并用双引号圈起,同时为每条指令添加 ’\n’ 和 ’\t’ 后缀。这是因为 gcc 将每一条当作字符串发送给 **as**(GAS)(LCTT 译注: GAS 即 GNU 汇编器),并且通过使用换行符/制表符发送正确格式化后的行给汇编器。 + +示例 + +``` +__asm__ ("movl %eax, %ebx\n\t" + "movl $56, %esi\n\t" + "movl %ecx, label(%edx,%ebx,4)\n\t" + "movb %ah, (%ebx)"); +``` + +如果在代码中,我们涉及到一些寄存器(即改变其内容),但在没有恢复这些变化的情况下从汇编中返回,这将会导致一些意想不到的事情。这是因为 GCC 并不知道寄存器内容的变化,这会导致问题,特别是当编译器做了某些优化。在没有告知 GCC 的情况下,它将会假设一些寄存器存储了一些值——而我们可能已经改变却没有告知 GCC——它会像什么事都没发生一样继续运行(LCTT 译注:什么事都没发生一样是指GCC不会假设寄存器装入的值是有效的,当退出改变了寄存器值的内联汇编后,寄存器的值不会保存到相应的变量或内存空间)。我们所可以做的是使用那些没有副作用的指令,或者当我们退出时恢复这些寄存器,要不就等着程序崩溃吧。这是为什么我们需要一些扩展功能,扩展汇编给我们提供了那些功能。 + +* * * + +## 5. 扩展汇编 + +在基本内联汇编中,我们只有指令。然而在扩展汇编中,我们可以同时指定操作数。它允许我们指定输入寄存器、输出寄存器以及修饰寄存器列表。GCC 不强制用户必须指定使用的寄存器。我们可以把头疼的事留给 GCC ,这可能可以更好地适应 GCC 的优化。不管怎么说,基本格式为: + +``` +asm ( 汇编程序模板 + : 输出操作数 /* 可选的 */ + : 输入操作数 /* 可选的 */ + : 修饰寄存器列表 /* 可选的 */ + ); +``` + +汇编程序模板由汇编指令组成。每一个操作数由一个操作数约束字符串所描述,其后紧接一个括弧括起的 C 表达式。冒号用于将汇编程序模板和第一个输出操作数分开,另一个(冒号)用于将最后一个输出操作数和第一个输入操作数分开(如果存在的话)。逗号用于分离每一个组内的操作数。总操作数的数目限制在 10 个,或者机器描述中的任何指令格式中的最大操作数数目,以较大者为准。 + +如果没有输出操作数但存在输入操作数,你必须将两个连续的冒号放置于输出操作数原本会放置的地方周围。 + +示例: + +``` +asm ("cld\n\t" + "rep\n\t" + "stosl" + : /* 无输出寄存器 */ + : "c" (count), "a" (fill_value), "D" (dest) + : "%ecx", "%edi" + ); +``` + +现在来看看这段代码是干什么的?以上的内联汇编是将 `fill_value` 值连续 `count` 次拷贝到寄存器 `edi` 所指位置(LCTT 译注:每执行 stosl 一次,寄存器 edi 的值会递增或递减,这取决于是否设置了 direction 标志,因此以上代码实则初始化一个内存块)。 它也告诉 gcc 寄存器 `ecx` 和 `edi` 一直无效(LCTT 译注:原文为 eax ,但代码修饰寄存器列表中为 ecx,因此这可能为作者的纰漏。)。为了更加清晰地说明,让我们再看一个示例。 + +``` +int a=10, b; +asm ("movl %1, %%eax; + movl %%eax, %0;" + :"=r"(b) /* 输出 */ + :"r"(a) /* 输入 */ + :"%eax" /* 修饰寄存器 */ + ); +``` + +这里我们所做的是使用汇编指令使 ’b’ 变量的值等于 ’a’ 变量的值。一些有意思的地方是: + +* "b" 为输出操作数,用 %0 引用,并且 "a" 为输入操作数,用 %1 引用。 +* 
"r" 为操作数约束。之后我们会更详细地了解约束(字符串)。目前,"r" 告诉 GCC 可以使用任一寄存器存储操作数。输出操作数约束应该有一个约束修饰符 "=" 。这修饰符表明它是一个只读的输出操作数。 +* 寄存器名字以两个 % 为前缀。这有利于 GCC 区分操作数和寄存器。操作数以一个 % 为前缀。 +* 第三个冒号之后的修饰寄存器 %eax 用于告诉 GCC %eax 的值将会在 "asm" 内部被修改,所以 GCC 将不会使用此寄存器存储任何其他值。 + +当 “asm” 执行完毕, "b" 变量会映射到更新的值,因为它被指定为输出操作数。换句话说, “asm” 内 "b" 变量的修改应该会被映射到 “asm” 外部。 + +现在,我们可以更详细地看看每一个域。 + +### 5.1 汇编程序模板 + +汇编程序模板包含了被插入到 C 程序的汇编指令集。其格式为:每条指令用双引号圈起,或者整个指令组用双引号圈起。同时每条指令应以分界符结尾。有效的分界符有换行符(`\n`)和分号(`;`)。`\n` 可以紧随一个制表符(`\t`)。我们应该都明白使用换行符或制表符的原因了吧(LCTT 译注:就是为了排版和分隔)?和 C 表达式对应的操作数使用 %0、%1 ... 等等表示。 + +### 5.2 操作数 + +C 表达式用作 “asm” 内的汇编指令操作数。每个操作数前面是以双引号圈起的操作数约束。对于输出操作数,在引号内还有一个约束修饰符,其后紧随一个用于表示操作数的 C 表达式。即,“操作数约束”(C 表达式)是一个通用格式。对于输出操作数,还有一个额外的修饰符。约束字符串主要用于决定操作数的寻址方式,同时也用于指定使用的寄存器。 + +如果我们使用的操作数多于一个,那么每一个操作数用逗号隔开。 + +在汇编程序模板中,每个操作数用数字引用。编号方式如下。如果总共有 n 个操作数(包括输入和输出操作数),那么第一个输出操作数编号为 0 ,逐项递增,并且最后一个输入操作数编号为 n - 1 。操作数的最大数目在前一节我们讲过。 + +输出操作数表达式必须为左值。输入操作数的要求不像这样严格。它们可以为表达式。扩展汇编特性常常用于编译器所不知道的机器指令 ;-)。如果输出表达式无法直接寻址(即,它是一个位域),我们的约束字符串必须给定一个寄存器。在这种情况下,GCC 将会使用该寄存器作为汇编的输出,然后存储该寄存器的内容到输出。 + +正如前面所陈述的一样,普通的输出操作数必须为只写的; GCC 将会假设指令前的操作数值是死的,并且不需要被(提前)生成。扩展汇编也支持输入-输出或者读-写操作数。 + +所以现在我们来关注一些示例。我们想要求一个数的5次方结果。为了计算该值,我们使用 `lea` 指令。 + +``` +asm ("leal (%1,%1,4), %0" + : "=r" (five_times_x) + : "r" (x) + ); +``` + +这里我们的输入为 x。我们不指定使用的寄存器。 GCC 将会选择一些输入寄存器,一个输出寄存器,来做我们预期的工作。如果我们想要输入和输出放在同一个寄存器里,我们也可以要求 GCC 这样做。这里我们使用那些读-写操作数类型。这里我们通过指定合适的约束来实现它。 + +``` +asm ("leal (%0,%0,4), %0" + : "=r" (five_times_x) + : "0" (x) + ); +``` + +现在输出和输出操作数位于同一个寄存器。但是我们无法得知是哪一个寄存器。现在假如我们也想要指定操作数所在的寄存器,这里有一种方法。 + +``` +asm ("leal (%%ecx,%%ecx,4), %%ecx" + : "=c" (x) + : "c" (x) + ); +``` + +在以上三个示例中,我们并没有在修饰寄存器列表里添加任何寄存器,为什么?在头两个示例, GCC 决定了寄存器并且它知道发生了什么改变。在最后一个示例,我们不必将 'ecx' 添加到修饰寄存器列表(LCTT 译注: 原文修饰寄存器列表这个单词拼写有错,这里已修正),gcc 知道它表示 x。因此,因为它可以知道 `ecx` 的值,它就不被当作修饰的(寄存器)了。 + +### 5.3 修饰寄存器列表 + +一些指令会破坏一些硬件寄存器内容。我们不得不在修饰寄存器中列出这些寄存器,即汇编函数内第三个 ’**:**’ 之后的域。这可以通知 gcc 我们将会自己使用和修改这些寄存器,这样 gcc 就不会假设存入这些寄存器的值是有效的。我们不用在这个列表里列出输入、输出寄存器。因为 gcc 知道 “asm” 
使用了它们(因为它们被显式地指定为约束了)。如果指令隐式或显式地使用了任何其他寄存器,(并且寄存器没有出现在输入或者输出约束列表里),那么就需要在修饰寄存器列表中指定这些寄存器。 + +如果我们的指令可以修改条件码寄存器(cc),我们必须将 "cc" 添加进修饰寄存器列表。 + +如果我们的指令以不可预测的方式修改了内存,那么需要将 "memory" 添加进修饰寄存器列表。这可以使 GCC 不会在汇编指令间保持缓存于寄存器的内存值。如果被影响的内存不在汇编的输入或输出列表中,我们也必须添加 **volatile** 关键词。 + +我们可以按我们的需求多次读写修饰寄存器。参考一下模板内的多指令示例;它假设子例程 _foo 接受寄存器 `eax` 和 `ecx` 里的参数。 + +``` +asm ("movl %0,%%eax; + movl %1,%%ecx; + call _foo" + : /* no outputs */ + : "g" (from), "g" (to) + : "eax", "ecx" + ); +``` + +### 5.4 Volatile ...? + +如果你熟悉内核源码或者类似漂亮的代码,你一定见过许多声明为 `volatile` 或者 `__volatile__`的函数,其跟着一个 `asm` 或者 `__asm__`。我之前提到过关键词 `asm` 和 `__asm__`。那么什么是 `volatile` 呢? + +如果我们的汇编语句必须在我们放置它的地方执行(例如,不能为了优化而被移出循环语句),将关键词 `volatile` 放置在 asm 后面、()的前面。以防止它被移动、删除或者其他操作,我们将其声明为 `asm volatile ( ... : ... : ... : ...);` + +如果担心发生冲突,请使用 `__volatile__`。 + +如果我们的汇编只是用于一些计算并且没有任何副作用,不使用 `volatile` 关键词会更好。不使用 `volatile` 可以帮助 gcc 优化代码并使代码更漂亮。 + + +在“一些实用的诀窍”一节中,我提供了多个内联汇编函数的例子。那里我们可以了解到修饰寄存器列表的细节。 + +* * * + +## 6. 更多关于约束 + +到这个时候,你可能已经了解到约束和内联汇编有很大的关联。但我们对约束讲的还不多。约束用于表明一个操作数是否可以位于寄存器和位于哪种寄存器;操作数是否可以为一个内存引用和哪种地址;操作数是否可以为一个立即数和它可能的取值范围(即值的范围),等等。 + +### 6.1 常用约束 + +在许多约束中,只有小部分是常用的。我们来看看这些约束。 + +1. **寄存器操作数约束(r)** + + 当使用这种约束指定操作数时,它们存储在通用寄存器(GPR)中。请看下面示例: + + `asm ("movl %%eax, %0\n" :"=r"(myval));` + + 这里,变量 myval 保存在寄存器中,寄存器 eax 的值被复制到该寄存器中,并且 myval 的值从寄存器更新到了内存。当指定 "r" 约束时, gcc 可以将变量保存在任何可用的 GPR 中。要指定寄存器,你必须使用特定寄存器约束直接地指定寄存器的名字。它们为: + + ``` + +---+--------------------+ + | r | Register(s) | + +---+--------------------+ + | a | %eax, %ax, %al | + | b | %ebx, %bx, %bl | + | c | %ecx, %cx, %cl | + | d | %edx, %dx, %dl | + | S | %esi, %si | + | D | %edi, %di | + +---+--------------------+ + ``` + +2. **内存操作数约束(m)** + + 当操作数位于内存时,任何对它们的操作将直接发生在内存位置,这与寄存器约束相反,后者首先将值存储在要修改的寄存器中,然后将它写回到内存位置。但寄存器约束通常用于一个指令必须使用它们或者它们可以大大提高处理速度的地方。当需要在 “asm” 内更新一个 C 变量,而又不想使用寄存器去保存它的值,使用内存最为有效。例如,IDTR 寄存器的值存储于内存位置 loc 处: + + `asm("sidt %0\n" : :"m"(loc));` + +3. 
**匹配(数字)约束** + + 在某些情况下,一个变量可能既充当输入操作数,也充当输出操作数。可以通过使用匹配约束在 "asm" 中指定这种情况。 + + `asm ("incl %0" :"=a"(var):"0"(var));` + + 在操作数那一节中,我们也看到了一些类似的示例。在这个匹配约束的示例中,寄存器 "%eax" 既用作输入变量,也用作输出变量。 var 输入被读进 %eax,并且等递增后更新的 %eax 再次被存储进 var。这里的 "0" 用于指定与第 0 个输出变量相同的约束。也就是,它指定 var 输出实例应只被存储在 "%eax" 中。该约束可用于: + - 在输入从变量读取或变量修改后且修改被写回同一变量的情况 + - 在不需要将输入操作数实例和输出操作数实例分开的情况 + + 使用匹配约束最重要的意义在于它们可以有效地使用可用寄存器。 + +其他一些约束: + +1. "m" : 允许一个内存操作数,可以使用机器普遍支持的任一种地址。 +2. "o" : 允许一个内存操作数,但只有当地址是可偏移的。即,该地址加上一个小的偏移量可以得到一个有效地址。 +3. "V" : 一个不允许偏移的内存操作数。换言之,任何适合 "m" 约束而不适合 "o" 约束的操作数。 +4. "i" : 允许一个(带有常量)的立即整形操作数。这包括其值仅在汇编时期知道的符号常量。 +5. "n" : 允许一个带有已知数字的立即整形操作数。许多系统不支持汇编时期的常量,因为操作数少于一个字宽。对于此种操作数,约束应该使用 'n' 而不是'i'。 +6. "g" : 允许任一寄存器、内存或者立即整形操作数,不包括通用寄存器之外的寄存器。 + +以下约束为 x86 特有。 + +1. "r" : 寄存器操作数约束,查看上面给定的表格。 +2. "q" : 寄存器 a、b、c 或者 d。 +3. "I" : 范围从 0 到 31 的常量(对于 32 位移位)。 +4. "J" : 范围从 0 到 63 的常量(对于 64 位移位)。 +5. "K" : 0xff。 +6. "L" : 0xffff。 +7. "M" : 0、1、2 或 3 (lea 指令的移位)。 +8. "N" : 范围从 0 到 255 的常量(对于 out 指令)。 +9. "f" : 浮点寄存器 +10. "t" : 第一个(栈顶)浮点寄存器 +11. "u" : 第二个浮点寄存器 +12. "A" : 指定 `a` 或 `d` 寄存器。这主要用于想要返回 64 位整形数,使用 `d` 寄存器保存最高有效位和 `a` 寄存器保存最低有效位。 + +### 6.2 约束修饰符 + +当使用约束时,对于更精确的控制超过了对约束作用的需求,GCC 给我们提供了约束修饰符。最常用的约束修饰符为: + +1. "=" : 意味着对于这条指令,操作数为只写的;旧值会被忽略并被输出数据所替换。 +2. "&" : 意味着这个操作数为一个早期改动的操作数,其在该指令完成前通过使用输入操作数被修改了。因此,这个操作数不可以位于一个被用作输出操作数或任何内存地址部分的寄存器。如果在旧值被写入之前它仅用作输入而已,一个输入操作数可以为一个早期改动操作数。 + +上述的约束列表和解释并不完整。示例可以让我们对内联汇编的用途和用法更好的理解。在下一节,我们会看到一些示例,在那里我们会发现更多关于修饰寄存器列表的东西。 + +* * * + +## 7. 一些实用的诀窍 + +现在我们已经介绍了关于 GCC 内联汇编的基础理论,现在我们将专注于一些简单的例子。将内联汇编函数写成宏的形式总是非常方便的。我们可以在 Linux 内核代码里看到许多汇编函数。(usr/src/linux/include/asm/*.h)。 + +1. 
首先我们从一个简单的例子入手。我们将写一个两个数相加的程序。 + + ``` + int main(void) + { + int foo = 10, bar = 15; + __asm__ __volatile__("addl %%ebx,%%eax" + :"=a"(foo) + :"a"(foo), "b"(bar) + ); + printf("foo+bar=%d\n", foo); + return 0; + } + ``` + + 这里我们要求 GCC 将 foo 存放于 %eax,将 bar 存放于 %ebx,同时我们也想要在 %eax 中存放结果。'=' 符号表示它是一个输出寄存器。现在我们可以以其他方式将一个整数加到一个变量。 + + ``` + __asm__ __volatile__( + " lock ;\n" + " addl %1,%0 ;\n" + : "=m" (my_var) + : "ir" (my_int), "m" (my_var) + : /* 无修饰寄存器列表 */ + ); + ``` + + 这是一个原子加法。为了移除原子性,我们可以移除指令 'lock'。在输出域中,"=m" 表明 my_var 是一个输出且位于内存。类似地,"ir" 表明 my_int 是一个整型,并应该存在于其他寄存器(回想我们上面看到的表格)。没有寄存器位于修饰寄存器列表中。 + +2. 现在我们将在一些寄存器/变量上展示一些操作,并比较值。 + + ``` + __asm__ __volatile__( "decl %0; sete %1" + : "=m" (my_var), "=q" (cond) + : "m" (my_var) + : "memory" + ); + ``` + + 这里,my_var 的值减 1 ,并且如果结果的值为 0,则变量 cond 置 1。我们可以通过将指令 "lock;\n\t" 添加为汇编模板的第一条指令以增加原子性。 + + 以类似的方式,为了增加 my_var,我们可以使用 "incl %0" 而不是 "decl %0"。 + + 这里需要注意的地方是(i)my_var 是一个存储于内存的变量。(ii)cond 位于寄存器 eax、ebx、ecx、edx 中的任何一个。约束 "=q" 保证了这一点。(iii)同时我们可以看到 memory 位于修饰寄存器列表中。也就是说,代码将改变内存中的内容。 + +3. 如何置 1 或清 0 寄存器中的一个比特位。作为下一个诀窍,我们将会看到它。 + + ``` + __asm__ __volatile__( "btsl %1,%0" + : "=m" (ADDR) + : "Ir" (pos) + : "cc" + ); + ``` + + 这里,ADDR 变量(一个内存变量)的 'pos' 位置上的比特被设置为 1。我们可以使用 'btrl' 来清除由 'btsl' 设置的比特位。pos 的约束 "Ir" 表明 pos 位于寄存器,并且它的值为 0-31(x86 相关约束)。也就是说,我们可以设置/清除 ADDR 变量上第 0 到 31 位的任一比特位。因为条件码会被改变,所以我们将 "cc" 添加进修饰寄存器列表。 + +4. 
现在我们看看一些更为复杂而有用的函数。字符串拷贝。 + + ``` + static inline char * strcpy(char * dest,const char *src) + { + int d0, d1, d2; + __asm__ __volatile__( "1:\tlodsb\n\t" + "stosb\n\t" + "testb %%al,%%al\n\t" + "jne 1b" + : "=&S" (d0), "=&D" (d1), "=&a" (d2) + : "0" (src),"1" (dest) + : "memory"); + return dest; + } + ``` + + 源地址存放于 esi,目标地址存放于 edi,同时开始拷贝,当我们到达 **0** 时,拷贝完成。约束 "&S"、"&D"、"&a" 表明寄存器 esi、edi 和 eax 早期修饰寄存器,也就是说,它们的内容在函数完成前会被改变。这里很明显可以知道为什么 "memory" 会放在修饰寄存器列表。 + + 我们可以看到一个类似的函数,它能移动双字块数据。注意函数被声明为一个宏。 + + ``` + #define mov_blk(src, dest, numwords) \ + __asm__ __volatile__ ( \ + "cld\n\t" \ + "rep\n\t" \ + "movsl" \ + : \ + : "S" (src), "D" (dest), "c" (numwords) \ + : "%ecx", "%esi", "%edi" \ + ) + ``` + + 这里我们没有输出,寄存器 ecx、esi和 edi 的内容发生了改变,这是块移动的副作用。因此我们必须将它们添加进修饰寄存器列表。 + +5. 在 Linux 中,系统调用使用 GCC 内联汇编实现。让我们看看如何实现一个系统调用。所有的系统调用被写成宏(linux/unistd.h)。例如,带有三个参数的系统调用被定义为如下所示的宏。 + + ``` + type name(type1 arg1,type2 arg2,type3 arg3) \ + { \ + long __res; \ + __asm__ volatile ( "int $0x80" \ + : "=a" (__res) \ + : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \ + "d" ((long)(arg3))); \ + __syscall_return(type,__res); \ + } + ``` + + 无论何时调用带有三个参数的系统调用,以上展示的宏就会用于执行调用。系统调用号位于 eax 中,每个参数位于 ebx、ecx、edx 中。最后 "int 0x80" 是一条用于执行系统调用的指令。返回值被存储于 eax 中。 + + 每个系统调用都以类似的方式实现。Exit 是一个单一参数的系统调用,让我们看看它的代码看起来会是怎样。它如下所示。 + + ``` + { + asm("movl $1,%%eax; /* SYS_exit is 1 */ + xorl %%ebx,%%ebx; /* Argument is in ebx, it is 0 */ + int $0x80" /* Enter kernel mode */ + ); + } + ``` + + Exit 的系统调用号是 1,同时它的参数是 0。因此我们分配 eax 包含 1,ebx 包含 0,同时通过 `int $0x80` 执行 `exit(0)`。这就是 exit 的工作原理。 + +* * * + +## 8. 结束语 + +这篇文档已经将 GCC 内联汇编过了一遍。一旦你理解了基本概念,你就可以按照自己的需求去使用它们了。我们看了许多例子,它们有助于理解 GCC 内联汇编的常用特性。 + +GCC 内联是一个极大的主题,这篇文章是不完整的。更多关于我们讨论过的语法细节可以在 GNU 汇编器的官方文档上获取。类似地,要获取完整的约束列表,可以参考 GCC 的官方文档。 + +当然,Linux 内核大量地使用了 GCC 内联。因此我们可以在内核源码中发现许多各种各样的例子。它们可以帮助我们很多。 + +如果你发现任何的错别字,或者本文中的信息已经过时,请告诉我们。 + +* * * + +## 9. 参考 + +1. 
[Brennan’s Guide to Inline Assembly](http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html) +2. [Using Assembly Language in Linux](http://linuxassembly.org/articles/linasm.html) +3. [Using as, The GNU Assembler](http://www.gnu.org/manual/gas-2.9.1/html_mono/as.html) +4. [Using and Porting the GNU Compiler Collection (GCC)](http://gcc.gnu.org/onlinedocs/gcc_toc.html) +5. [Linux Kernel Source](http://ftp.kernel.org/) + +* * * + +via: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html + + 作者:[Sandeep.S](mailto:busybox@sancharnet.in) 译者:[cposture](https://github.com/cposture) 校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md b/translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md deleted file mode 100644 index e0e1fc6d50..0000000000 --- a/translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md +++ /dev/null @@ -1,632 +0,0 @@ -* * * - -# GCC 内联汇编 HOWTO - -v0.1, 01 March 2003. -* * * - -_本 HOWTO 文档将讲解 GCC 提供的内联汇编特性的用途和用法。对于阅读这篇文章,这里只有两个前提要求,很明显,就是 x86 汇编语言和 C 语言的基本认识。_ - -* * * - -## 1. 简介 - -## 1.1 版权许可 - -Copyright (C)2003 Sandeep S. - -本文档自由共享;你可以重新发布它,并且/或者在遵循自由软件基金会发布的 GNU 通用公共许可证下修改它;或者该许可证的版本 2 ,或者(按照你的需求)更晚的版本。 - -发布这篇文档是希望它能够帮助别人,但是没有任何保证;甚至不包括可售性和适用于任何特定目的的保证。关于更详细的信息,可以查看 GNU 通用许可证。 - -## 1.2 反馈校正 - -请将反馈和批评一起提交给 [Sandeep.S](mailto:busybox@sancharnet.in) 。我将感谢任何一个指出本文档中错误和不准确之处的人;一被告知,我会马上改正它们。 - -## 1.3 致谢 - -我对提供如此棒的特性的 GNU 人们表示真诚的感谢。感谢 Mr.Pramode C E 所做的所有帮助。感谢在 Govt Engineering College 和 Trichur 的朋友们的精神支持和合作,尤其是 Nisha Kurur 和 Sakeeb S 。 感谢在 Gvot Engineering College 和 Trichur 的老师们的合作。 - -另外,感谢 Phillip , Brennan Underwood 和 colin@nyx.net ;这里的许多东西都厚颜地直接取自他们的工作成果。 - -* * * - -## 2. 概览 - -在这里,我们将学习 GCC 内联汇编。这内联表示的是什么呢? - -我们可以要求编译器将一个函数的代码插入到调用者代码中函数被实际调用的地方。这样的函数就是内联函数。这听起来和宏差不多?这两者确实有相似之处。 - -内联函数的优点是什么呢? 
- -这种内联方法可以减少函数调用开销。同时如果所有实参的值为常量,它们的已知值可以在编译期允许简化,因此并非所有的内联函数代码都需要被包含。代码大小的影响是不可预测的,这取决于特定的情况。为了声明一个内联函数,我们必须在函数声明中使用 `inline` 关键字。 - -现在我们正处于一个猜测内联汇编到底是什么的点上。它只不过是一些写为内联函数的汇编程序。在系统编程上,它们方便、快速并且极其有用。我们主要集中学习(GCC)内联汇编函数的基本格式和用法。为了声明内联汇编函数,我们使用 `asm` 关键词。 - -内联汇编之所以重要,主要是因为它可以操作并且使其输出通过 C 变量显示出来。正是因为此能力, "asm" 可以用作汇编指令和包含它的 C 程序之间的接口。 - -* * * - -## 3. GCC 汇编语法 - -GCC , Linux上的 GNU C 编译器,使用 **AT&T** / **UNIX** 汇编语法。在这里,我们将使用 AT&T 语法 进行汇编编码。如果你对 AT&T 语法不熟悉的话,请不要紧张,我会教你的。AT&T 语法和 Intel 语法的差别很大。我会给出主要的区别。 - -1. 源操作数和目的操作数顺序 - - AT&T 语法的操作数方向和 Intel 语法的刚好相反。在Intel 语法中,第一操作数为目的操作数,第二操作数为源操作数,然而在 AT&T 语法中,第一操作数为源操作数,第二操作数为目的操作数。也就是说, - - Intel 语法中的 "Op-code dst src" 变为 - - AT&T 语法中的 "Op-code src dst"。 - -2. 寄存器命名 - - 寄存器名称有 % 前缀,即如果必须使用 eax,它应该用作 %eax。 - -3. 立即数 - - AT&T 立即数以 ’$’ 为前缀。静态 "C" 变量 也使用 ’$’ 前缀。在 Intel 语法中,十六进制常量以 ’h’ 为后缀,然而AT&T不使用这种语法,这里我们给常量添加前缀 ’0x’。所以,对于十六进制,我们首先看到一个 ’$’,然后是 ’0x’,最后才是常量。 - -4. 操作数大小 - - 在 AT&T 语法中,存储器操作数的大小取决于操作码名字的最后一个字符。操作码后缀 ’b’ 、’w’、’l’分别指明了字节(byte)(8位)、字(word)(16位)、长型(long)(32位)存储器引用。Intel 语法通过给存储器操作数添加’byte ptr’、 ’word ptr’ 和 ’dword ptr’前缀来实现这一功能。 - - 因此,Intel的 "mov al, byte ptr foo" 在 AT&T 语法中为 "movb foo, %al"。 - -5. 存储器操作数 - - 在 Intel 语法中,基址寄存器包含在 ’[’ 和 ’]’ 中,然而在 AT&T 中,它们变为 ’(’ 和 ’)’。另外,在 Intel 语法中, 间接内存引用为 - - section:[base + index*scale + disp], 在 AT&T中变为 - - section:disp(base, index, scale)。 - - 需要牢记的一点是,当一个常量用于 disp 或 scale,不能添加’$’前缀。 - -现在我们看到了 Intel 语法和 AT&T 语法之间的一些主要差别。我仅仅写了它们差别的一部分而已。关于更完整的信息,请参考 GNU 汇编文档。现在为了更好地理解,我们可以看一些示例。 - -> ` -> ->
+------------------------------+------------------------------------+
-> |       Intel Code             |      AT&T Code                     |
-> +------------------------------+------------------------------------+
-> | mov     eax,1                |  movl    $1,%eax                   |   
-> | mov     ebx,0ffh             |  movl    $0xff,%ebx                |   
-> | int     80h                  |  int     $0x80                     |   
-> | mov     ebx, eax             |  movl    %eax, %ebx                |
-> | mov     eax,[ecx]            |  movl    (%ecx),%eax               |
-> | mov     eax,[ebx+3]          |  movl    3(%ebx),%eax              | 
-> | mov     eax,[ebx+20h]        |  movl    0x20(%ebx),%eax           |
-> | add     eax,[ebx+ecx*2h]     |  addl    (%ebx,%ecx,0x2),%eax      |
-> | lea     eax,[ebx+ecx]        |  leal    (%ebx,%ecx),%eax          |
-> | sub     eax,[ebx+ecx*4h-20h] |  subl    -0x20(%ebx,%ecx,0x4),%eax |
-> +------------------------------+------------------------------------+
-> 
-> -> ` - -* * * - -## 4. 基本内联 - -基本内联汇编的格式非常直接了当。它的基本格式为 - -`asm("汇编代码");` - -示例 - -> ` -> -> * * * -> ->
asm("movl %ecx %eax"); /* 将 ecx 寄存器的内容移至 eax  */
-> __asm__("movb %bh (%eax)"); /* 将 bh 的一个字节数据 移至 eax 寄存器指向的内存 */
-> 
-> -> * * * -> -> ` - -你可能注意到了这里我使用了 `asm ` 和 `__asm__`。这两者都是有效的。如果关键词 `asm` 和我们程序的一些标识符冲突了,我们可以使用 `__asm__`。如果我们的指令多余一条,我们可以写成一行,并用括号括起,也可以为每条指令添加 ’\n’ 和 ’\t’ 后缀。这是因为gcc将每一条当作字符串发送给 **as**(GAS)( GAS 即 GNU 汇编器 ——译者注),并且通过使用换行符/制表符发送正确地格式化行给汇编器。 - -示例 - -> ` -> -> * * * -> ->
 __asm__ ("movl %eax, %ebx\n\t"
->           "movl $56, %esi\n\t"
->           "movl %ecx, $label(%edx,%ebx,$4)\n\t"
->           "movb %ah, (%ebx)");
-> 
-> -> * * * -> -> ` - -如果在代码中,我们涉及到一些寄存器(即改变其内容),但在没有固定这些变化的情况下从汇编中返回,这将会导致一些不好的事情。这是因为 GCC 并不知道寄存器内容的变化,这会导致问题,特别是当编译器做了某些优化。在没有告知 GCC 的情况下,它将会假设一些寄存器存储了我们可能已经改变的变量的值,它会像什么事都没发生一样继续运行(什么事都没发生一样是指GCC不会假设寄存器装入的值是有效的,当退出改变了寄存器值的内联汇编后,寄存器的值不会保存到相应的变量或内存空间 ——译者注)。我们所可以做的是使用这些没有副作用的指令,或者当我们退出时固定这些寄存器,或者等待程序崩溃。这是为什么我们需要一些扩展功能。扩展汇编正好给我们提供了那样的功能。 - -* * * - -## 5. 扩展汇编 - -在基本内联汇编中,我们只有指令。然而在扩展汇编中,我们可以同时指定操作数。它允许我们指定输入寄存器、输出寄存器以及修饰寄存器列表。GCC 不强制用户必须指定使用的寄存器。我们可以把头疼的事留给 GCC ,这可能可以更好地适应 GCC 的优化。不管怎樣,基本格式为: - -> ` -> -> * * * -> ->
       asm ( 汇编程序模板 
->            : 输出操作数					/* 可选的 */
->            : 输入操作数                   /* 可选的 */
->            : 修饰寄存器列表			    /* 可选的 */
->            );
-> 
-> -> * * * -> -> ` - -汇编程序模板由汇编指令组成.每一个操作数由一个操作数约束字符串所描述,其后紧接一个括弧括起的 C 表达式。冒号用于将汇编程序模板和第一个输出操作数分开,另一个(冒号)用于将最后一个输出操作数和第一个输入操作数分开,如果存在的话。逗号用于分离每一个组内的操作数。总操作数的数目限制在10个,或者机器描述中的任何指令格式中的最大操作数数目,以较大者为准。 - -如果没有输出操作数但存在输入操作数,你必须将两个连续的冒号放置于输出操作数原本会放置的地方周围。 - -示例: - -> ` -> -> * * * -> ->
        asm ("cld\n\t"
->              "rep\n\t"
->              "stosl"
->              : /* 无输出寄存器 */
->              : "c" (count), "a" (fill_value), "D" (dest)
->              : "%ecx", "%edi" 
->              );
-> 
-> -> * * * -> -> ` - -现在,这段代码是干什么的?以上的内联汇编是将 `fill_value` 值 连续 `count` 次 拷贝到 寄存器 `edi` 所指位置(每执行stosl一次,寄存器 edi 的值会递增或递减,这取决于是否设置了 direction 标志,因此以上代码实则初始化一个内存块 ——译者注)。 它也告诉 gcc 寄存器 `ecx` 和 `edi` 一直无效(原文为 eax ,但代码修饰寄存器列表中为 ecx,因此这可能为作者的纰漏 ——译者注)。为了使扩展汇编更加清晰,让我们再看一个示例。 - -> ` -> -> * * * -> ->
        
->         int a=10, b;
->         asm ("movl %1, %%eax; 
->               movl %%eax, %0;"
->              :"=r"(b)        /* 输出 */
->              :"r"(a)         /* 输入 */
->              :"%eax"         /* 修饰寄存器 */
->              );       
-> 
-> -> * * * -> -> ` - -这里我们所做的是使用汇编指令使 ’b’ 变量的值等于 ’a’ 变量的值。一些有意思的地方是: - -* "b" 为输出操作数,用 %0 引用,并且 "a" 为输入操作数,用 %1 引用。 -* "r" 为操作数约束。之后我们会更详细地了解约束(字符串)。目前,"r" 告诉 GCC 可以使用任一寄存器存储操作数。输出操作数约束应该有一个约束修饰符 "=" 。这修饰符表明它是一个只读的输出操作数。 -* 寄存器名字以两个%为前缀。这有利于 GCC 区分操作数和寄存器。操作数以一个 % 为前缀。 -* 第三个冒号之后的修饰寄存器 %eax 告诉 GCC %eax的值将会在 "asm" 内部被修改,所以 GCC 将不会使用此寄存器存储任何其他值。 - -当 "asm" 执行完毕, "b" 变量会映射到更新的值,因为它被指定为输出操作数。换句话说, "asm" 内 "b" 变量的修改 应该会被映射到 "asm" 外部。 - -现在,我们可以更详细地看看每一个域。 - -## 5.1 汇编程序模板 - -汇编程序模板包含了被插入到 C 程序的汇编指令集。其格式为:每条指令用双引号圈起,或者整个指令组用双引号圈起。同时每条指令应以分界符结尾。有效的分界符有换行符(\n)和逗号(;)。’\n’ 可以紧随一个制表符(\t)。我们应该都明白使用换行符或制表符的原因了吧?和 C 表达式对应的操作数使用 %0、%1 ... 等等表示。 - -## 5.2 操作数 - -C 表达式用作 "asm" 内的汇编指令操作数。作为第一双引号内的操作数约束,写下每一操作数。对于输出操作数,在引号内还有一个约束修饰符,其后紧随一个用于表示操作数的 C 表达式。即, - -"约束字符串"(C 表达式),它是一个通用格式。对于输出操作数,还有一个额外的修饰符。约束字符串主要用于决定操作数的寻找方式,同时也用于指定使用的寄存器。 - -如果我们使用的操作数多于一个,那么每一个操作数用逗号隔开。 - -在汇编程序模板,每个操作数用数字引用。编号方式如下。如果总共有 n 个操作数(包括输入和输出操作数),那么第一个输出操作数编号为 0 ,逐项递增,并且最后一个输入操作数编号为 n - 1 。操作数的最大数目为前一节我们所看到的那样。 - -输出操作数表达式必须为左值。输入操作数的要求不像这样严格。它们可以为表达式。扩展汇编特性常常用于编译器自己不知道其存在的机器指令 ;-)。如果输出表达式无法直接寻址(例如,它是一个位域),我们的约束字符串必须给定一个寄存器。在这种情况下,GCC 将会使用该寄存器作为汇编的输出,然后存储该寄存器的内容到输出。 - -正如前面所陈述的一样,普通的输出操作数必须为只写的; GCC 将会假设指令前的操作数值是死的,并且不需要被(提前)生成。扩展汇编也支持输入-输出或者读-写操作数。 - -所以现在我们来关注一些示例。我们想要求一个数的5次方结果。为了计算该值,我们使用 `lea` 指令。 - -> ` -> -> * * * -> ->
        asm ("leal (%1,%1,4), %0"
->              : "=r" (five_times_x)
->              : "r" (x) 
->              );
-> 
-> -> * * * -> -> ` - -这里我们的输入为x。我们不指定使用的寄存器。 GCC 将会选择一些输入寄存器,一个输出寄存器,并且做我们期望的事。如果我们想要输入和输出存在于同一个寄存器里,我们可以要求 GCC 这样做。这里我们使用那些读-写操作数类型。这里我们通过指定合适的约束来实现它。 - -> ` -> -> * * * -> ->
        asm ("leal (%0,%0,4), %0"
->              : "=r" (five_times_x)
->              : "0" (x) 
->              );
-> 
-> -> * * * -> -> ` - -现在输出和输出操作数位于同一个寄存器。但是我们无法得知是哪一个寄存器。现在假如我们也想要指定操作数所在的寄存器,这里有一种方法。 - -> ` -> -> * * * -> ->
        asm ("leal (%%ecx,%%ecx,4), %%ecx"
->              : "=c" (x)
->              : "c" (x) 
->              );
-> 
-> -> * * * -> -> ` - -在以上三个示例中,我们并没有添加任何寄存器到修饰寄存器里,为什么?在头两个示例, GCC 决定了寄存器并且它知道发生了什么改变。在最后一个示例,我们不必将 'ecx' 添加到修饰寄存器列表(原文修饰寄存器列表拼写有错,这里已修正 ——译者注), gcc 知道它表示x。因此,因为它可以知道 `ecx` 的值,它就不被当作修饰的(寄存器)了。 - -## 5.3 修饰寄存器列表 - -一些指令会破坏一些硬件寄存器。我们不得不在修饰寄存器中列出这些寄存器,即汇编函数内第三个 ’**:**’ 之后的域。这可以通知 gcc 我们将会自己使用和修改这些寄存器。所以 gcc 将不会假设存入这些寄存器的值是有效的。我们不用在这个列表里列出输入输出寄存器。因为 gcc 知道 "asm" 使用了它们(因为它们被显式地指定为约束了)。如果指令隐式或显式地使用了任何其他寄存器,(并且寄存器不能出现在输出或者输出约束列表里),那么不得不在修饰寄存器列表中指定这些寄存器。 - -如果我们的指令可以修改状态寄存器,我们必须将 "cc" 添加进修饰寄存器列表。 - -如果我们的指令以不可预测的方式修改了内存,那么需要将 "memory" 添加进修饰寄存器列表。这可以使 GCC 不会在汇编指令间保持缓存于寄存器的内存值。如果被影响的内存不在汇编的输入或输出列表中,我们也必须添加 **volatile** 关键词。 - -我们可以按我们的需求多次读写修饰寄存器。考虑一个模板内的多指令示例;它假设子例程 _foo 接受寄存器 `eax` 和 `ecx` 里的参数。 - -> ` -> -> * * * -> ->
        asm ("movl %0,%%eax;
->               movl %1,%%ecx;
->               call _foo"
->              : /* no outputs */
->              : "g" (from), "g" (to)
->              : "eax", "ecx"
->              );
-> 
-> -> * * * -> -> ` - -## 5.4 Volatile ...? - -如果你熟悉内核源码或者其他像内核源码一样漂亮的代码,你一定见过许多声明为 `volatile` 或者 `__volatile__`的函数,其跟着一个 `asm` 或者 `__asm__`。我之前提过关键词 `asm` 和 `__asm__`。那么什么是 `volatile`呢? - -如果我们的汇编语句必须在我们放置它的地方执行(即,不能作为一种优化被移出循环语句),将关键词 `volatile` 放置在 asm 后面,()的前面。因为为了防止它被移动、删除或者其他操作,我们将其声明为 - -`asm volatile ( ... : ... : ... : ...);` - -当我们必须非常谨慎时,请使用 `__volatile__`。 - -如果我们的汇编只是用于一些计算并且没有任何副作用,不使用 `volatile` 关键词会更好。不使用 `volatile` 可以帮助 gcc 优化代码并使代码更漂亮。 - - -在 `Some Useful Recipes` 一节中,我提供了多个内联汇编函数的例子。这儿我们详细查看修饰寄存器列表。 - -* * * - -## 6. 更多关于约束 - -到这个时候,你可能已经了解到约束和内联汇编有很大的关联。但我们很少说到约束。约束用于表明一个操作数是否可以位于寄存器和位于哪个寄存器;是否操作数可以为一个内存引用和哪种地址;是否操作数可以为一个立即数和为哪一个可能的值(即值的范围)。它可以有...等等。 - -## 6.1 常用约束 - -在许多约束中,只有小部分是常用的。我们将看看这些约束。 - -1. **寄存器操作数约束(r)** - - 当使用这种约束指定操作数时,它们存储在通用寄存器(GPR)中。请看下面示例: - - `asm ("movl %%eax, %0\n" :"=r"(myval));` - - 这里,变量 myval 保存在寄存器中,寄存器 eax 的值被复制到该寄存器中,并且myval的值从寄存器更新到了内存。当指定 "r" 约束时, gcc 可以将变量保存在任何可用的 GPR 中。为了指定寄存器,你必须使用特定寄存器约束直接地指定寄存器的名字。它们为: - - > ` - > - >
+---+--------------------+
-    > | r |    Register(s)     |
-    > +---+--------------------+
-    > | a |   %eax, %ax, %al   |
-    > | b |   %ebx, %bx, %bl   |
-    > | c |   %ecx, %cx, %cl   |
-    > | d |   %edx, %dx, %dl   |
-    > | S |   %esi, %si        |
-    > | D |   %edi, %di        |
-    > +---+--------------------+
-    > 
- > - > ` - -2. **内存操作数约束(m)** - - 当操作数位于内存时,任何对它们的操作将直接发生在内存位置,这与寄存器约束相反,后者首先将值存储在要修改的寄存器中,然后将它写回到内存位置。但寄存器约束通常用于一个指令必须使用它们或者它们可以大大提高进程速度的地方。当需要在 "asm" 内更新一个 C 变量,而又不想使用寄存器去保存它的只,使用内存最为有效。例如, idtr 的值存储于内存位置: - - `asm("sidt %0\n" : :"m"(loc));` - -3. **匹配(数字)约束** - - 在某些情况下,一个变量可能既充当输入操作数,也充当输出操作数。可以通过使用匹配约束在 "asm" 中指定这种情况。 - - `asm ("incl %0" :"=a"(var):"0"(var));` - - 在操作数子节中,我们也看到了一些类似的示例。在这个匹配约束的示例中,寄存器 "%eax" 既用作输入变量,也用作输出变量。 var 输入被读进 %eax ,并且更新的 %eax 再次被存储进 var。这里的 "0" 用于指定与第0个输出变量相同的约束。也就是,它指定 var 输出实例应只被存储在 "%eax" 中。该约束可用于: - - * 在输入从变量读取或变量修改后,修改被写回同一变量的情况 - * 在不需要将输入操作数实例和输出操作数实例分开的情况 - - 使用匹配约束最重要的意义在于它们可以导致有效地使用可用寄存器。 - -其他一些约束: - -1. "m" : 允许一个内存操作数使用机器普遍支持的任一种地址。 -2. "o" : 允许一个内存操作数,但只有当地址是可偏移的。即,该地址加上一个小的偏移量可以得到一个地址。 -3. "V" : A memory operand that is not offsettable. In other words, anything that would fit the `m’ constraint but not the `o’constraint. -4. "i" : 允许一个(带有常量)的立即整形操作数。这包括其值仅在汇编时期知道的符号常量。 -5. "n" : 允许一个带有已知数字的立即整形操作数。许多系统不支持汇编时期的常量,因为操作数少于一个字宽。对于此种操作数,约束应该使用 'n' 而不是'i'。 -6. "g" : 允许任一寄存器、内存或者立即整形操作数,不包括通用寄存器之外的寄存器。 - - -以下约束为x86特有。 - -1. "r" : 寄存器操作数约束,查看上面给定的表格。 -2. "q" : 寄存器 a、b、c 或者 d。 -3. "I" : 范围从 0 到 31 的常量(对于 32 位移位)。 -4. "J" : 范围从 0 到 63 的常量(对于 64 位移位)。 -5. "K" : 0xff。 -6. "L" : 0xffff。 -7. "M" : 0, 1, 2, or 3 (lea 指令的移位)。 -8. "N" : 范围从 0 到 255 的常量(对于 out 指令)。 -9. "f" : 浮点寄存器 -10. "t" : 第一个(栈顶)浮点寄存器 -11. "u" : 第二个浮点寄存器 -12. "A" : 指定 `a` 或 `d` 寄存器。这主要用于想要返回 64 位整形数,使用 `d` 寄存器保存最高有效位和 `a` 寄存器保存最低有效位。 - -## 6.2 约束修饰符 - -当使用约束时,对于更精确的控制超越了约束作用的需求,GCC 给我们提供了约束修饰符。最常用的约束修饰符为: - -1. "=" : 意味着对于这条指令,操作数为只写的;旧值会被忽略并被输出数据所替换。 -2. "&" : 意味着这个操作数为一个早期的改动操作数,其在该指令完成前通过使用输入操作数被修改了。因此,这个操作数不可以位于一个被用作输出操作数或任何内存地址部分的寄存器。如果在旧值被写入之前它仅用作输入而已,一个输入操作数可以为一个早期改动操作数。 - - 约束的列表和解释是决不完整的。示例可以给我们一个关于内联汇编的用途和用法的更好的理解。在下一节,我们会看到一些示例,在那里我们会发现更多关于修饰寄存器列表的东西。 - -* * * - -## 7. 一些实用的诀窍 - -现在我们已经介绍了关于 GCC 内联汇编的基础理论,现在我们将专注于一些简单的例子。将内联汇编函数写成宏的形式总是非常方便的。我们可以在内核代码里看到许多汇编函数。(usr/src/linux/include/asm/*.h)。 - -1. 
首先我们从一个简单的例子入手。我们将写一个两个数相加的程序。 - - > ` - > - > * * * - > - >
int main(void)
-    > {
-    >         int foo = 10, bar = 15;
-    >         __asm__ __volatile__("addl  %%ebx,%%eax"
-    >                              :"=a"(foo)
-    >                              :"a"(foo), "b"(bar)
-    >                              );
-    >         printf("foo+bar=%d\n", foo);
-    >         return 0;
-    > }
-    > 
- > - > * * * - > - > ` - - 这里我们要求 GCC 将 foo 存放于 %eax,将 bar 存放于 %ebx,同时我们也想要在 %eax 中存放结果。'=' 符号表示它是一个输出寄存器。现在我们可以以其他方式将一个整数加到一个变量。 - - > ` - > - > * * * - > - >
 __asm__ __volatile__(
-    >                       "   lock       ;\n"
-    >                       "   addl %1,%0 ;\n"
-    >                       : "=m"  (my_var)
-    >                       : "ir"  (my_int), "m" (my_var)
-    >                       :                                 /* 无修饰寄存器列表 */
-    >                       );
-    > 
- > - > * * * - > - > ` - - 这是一个原子加法。为了移除原子性,我们可以移除指令 'lock'。在输出域中,"=m" 表明 my_var 是一个输出且位于内存。类似地,"ir" 表明 my_int 是一个整型,并应该存在于其他寄存器(回想我们上面看到的表格)。没有寄存器位于修饰寄存器列表中。 - -2. 现在我们将在一些寄存器/变量上展示一些操作,并比较值。 - - > ` - > - > * * * - > - >
 __asm__ __volatile__(  "decl %0; sete %1"
-    >                       : "=m" (my_var), "=q" (cond)
-    >                       : "m" (my_var) 
-    >                       : "memory"
-    >                       );
-    > 
- > - > * * * - > - > ` - - 这里,my_var 的值减 1 ,并且如果结果的值为 0,则变量 cond 置 1。我们可以通过添加指令 "lock;\n\t" 作为汇编模板的第一条指令来添加原子性。 - - 以类似的方式,为了增加 my_var,我们可以使用 "incl %0" 而不是 "decl %0"。 - - 这里需要注意的点为(i)my_var 是一个存储于内存的变量。(ii)cond 位于任何一个寄存器 eax、ebx、ecx、edx。约束 "=q" 保证这一点。(iii)同时我们可以看到 memory 位于修饰寄存器列表中。也就是说,代码将改变内存中的内容。 - -3. 如何置1或清0寄存器中的一个比特位。作为下一个诀窍,我们将会看到它。 - - > ` - > - > * * * - > - >
__asm__ __volatile__(   "btsl %1,%0"
-    >                       : "=m" (ADDR)
-    >                       : "Ir" (pos)
-    >                       : "cc"
-    >                       );
-    > 
- > - > * * * - > - > ` - - 这里,ADDR 变量(一个内存变量)的 'pos' 位置上的比特被设置为 1。我们可以使用 'btrl' 来清楚由 'btsl' 设置的比特位。pos 的约束 "Ir" 表明 pos 位于寄存器并且它的值为 0-31(x86 相关约束)。也就是说,我们可以设置/清除 ADDR 变量上第 0 到 31 位的任一比特位。因为条件码会被改变,所以我们将 "cc" 添加进修饰寄存器列表。 - -4. 现在我们看看一些更为复杂而有用的函数。字符串拷贝。 - - > ` - > - > * * * - > - >
static inline char * strcpy(char * dest,const char *src)
-    > {
-    > int d0, d1, d2;
-    > __asm__ __volatile__(  "1:\tlodsb\n\t"
-    >                        "stosb\n\t"
-    >                        "testb %%al,%%al\n\t"
-    >                        "jne 1b"
-    >                      : "=&S" (d0), "=&D" (d1), "=&a" (d2)
-    >                      : "0" (src),"1" (dest) 
-    >                      : "memory");
-    > return dest;
-    > }
-    > 
- > - > * * * - > - > ` - - 源地址存放于 esi,目标地址存放于 edi,同时开始拷贝,当我们遇到 **0** 时,拷贝完成。约束 "&S"、"&D"、"&a" 表明寄存器 esi、edi 和 eax 是早期修饰寄存器,也就是说,它们的内容在函数完成前会被改变。这里很明显可以知道为什么 "memory" 会放在修饰寄存器列表。 - - 我们可以看到一个类似的函数,它能移动双字块数据。注意函数被声明为一个宏。 - - > ` - > - > * * * - > - >
#define mov_blk(src, dest, numwords) \
-    > __asm__ __volatile__ (                                          \
-    >                        "cld\n\t"                                \
-    >                        "rep\n\t"                                \
-    >                        "movsl"                                  \
-    >                        :                                        \
-    >                        : "S" (src), "D" (dest), "c" (numwords)  \
-    >                        : "%ecx", "%esi", "%edi"                 \
-    >                        )
-    > 
> - > * * * - > - > ` - - 这里我们没有输出,所以寄存器 ecx、esi 和 edi 的内容发生改变,这是块移动的副作用。因此我们必须将它们添加进修饰寄存器列表。 - -5. 在 Linux 中,系统调用使用 GCC 内联汇编实现。让我们看看如何实现一个系统调用。所有的系统调用被写成宏(linux/unistd.h)。例如,带有三个参数的系统调用被定义为如下所示的宏。 - - > ` - > - > * * * - > - > type name(type1 arg1,type2 arg2,type3 arg3) \ - > { \ - > long __res; \ - > __asm__ volatile ( "int $0x80" \ - > : "=a" (__res) \ - > : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \ - > "d" ((long)(arg3))); \ - > __syscall_return(type,__res); \ - > } - > - > - > * * * - > - > ` - - 无论何时调用带有三个参数的系统调用,以上展示的宏用于执行调用。系统调用号位于 eax 中,每个参数位于 ebx、ecx、edx 中。最后 "int $0x80" 是一条用于执行系统调用的指令。返回值被存储于 eax 中。 - - 每个系统调用都以类似的方式实现。Exit 是一个单一参数的系统调用,让我们看看它的代码会是什么样子。如下所示。 - - > ` - > - > * * * - > - >
{
-    >         asm("movl $1,%%eax;         /* SYS_exit is 1 */
-    >              xorl %%ebx,%%ebx;      /* Argument is in ebx, it is 0 */
-    >              int  $0x80"            /* Enter kernel mode */
-    >              );
-    > }
-    > 
- > - > * * * - > - > ` - - Exit 的系统调用号是 1,它的参数是 0。因此我们把 1 存入 eax,把 0 存入 ebx,同时通过 `int $0x80` 执行 `exit(0)`。这就是 exit 的工作原理。 - -* * * - -## 8. 结束语 - -这篇文档已经把 GCC 内联汇编的基础知识过了一遍。一旦你理解了基本概念,你便不难自己动手实践。我们看了许多例子,它们有助于理解 GCC 内联汇编的常用特性。 - -GCC 内联汇编是一个很大的主题,这篇文章是不完整的。更多关于我们讨论过的语法细节可以在 GNU 汇编器的官方文档上获取。类似地,对于一个完整的约束列表,可以参考 GCC 的官方文档。 - -当然,Linux 内核大规模地使用了 GCC 内联汇编。因此我们可以在内核源码中发现许多各种各样的例子。它们能给我们很多帮助。 - -如果你发现任何的错别字,或者本文中的信息已经过时,请告诉我们。 - -* * * - -## 9. 参考 - -1. [Brennan’s Guide to Inline Assembly](http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html) -2. [Using Assembly Language in Linux](http://linuxassembly.org/articles/linasm.html) -3. [Using as, The GNU Assembler](http://www.gnu.org/manual/gas-2.9.1/html_mono/as.html) -4. [Using and Porting the GNU Compiler Collection (GCC)](http://gcc.gnu.org/onlinedocs/gcc_toc.html) -5. [Linux Kernel Source](http://ftp.kernel.org/) - -* * * -via: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html - - 作者:[Sandeep.S](mailto:busybox@sancharnet.in) 译者:[cposture](https://github.com/cposture) 校对:[]() - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From a5d7d25e756b78ddaacd09abf25c41590196343c Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Aug 2016 00:38:12 +0800 Subject: [PATCH 425/471] PUB:20160623 72% Of The People I Follow On Twitter Are Men @Flowsnow --- ... 
The People I Follow On Twitter Are Men.md | 32 +++++++++---------- 1 file changed, 15 insertions(+), 17 deletions(-) rename {translated/tech => published}/20160623 72% Of The People I Follow On Twitter Are Men.md (54%) diff --git a/translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md b/published/20160623 72% Of The People I Follow On Twitter Are Men.md similarity index 54% rename from translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md rename to published/20160623 72% Of The People I Follow On Twitter Are Men.md index e836bfe4ab..e0cc100587 100644 --- a/translated/tech/20160623 72% Of The People I Follow On Twitter Are Men.md +++ b/published/20160623 72% Of The People I Follow On Twitter Are Men.md @@ -1,9 +1,9 @@ -在推特上我关注的人72%都是男性 +在推特上我关注的人 72% 都是男性 =============================================== ![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/abacus.jpg) -至少,这是我的估计。推特并不会询问用户的性别,因此我 [写了一个程序][1] ,根据姓名猜测他们的性别。在我的那些关注者中,性别分布甚至更糟,83%的是男性。据我所知,其他的还不全都是女性。 +至少,这是我的估计。推特并不会询问用户的性别,因此我 [写了一个程序][1] ,根据姓名猜测他们的性别。在那些关注我的人当中,性别分布甚至更糟,83% 的是男性。据我所知,其他的还不全都是女性。 修正第一个数字并不是什么神秘的事:我注意寻找更多支持我兴趣的女性专家,并且关注他们。 @@ -11,13 +11,13 @@ ### 我应该怎么估算呢 -我开始估算我关注的人(推特的上的术语是“朋友”)的性别分布,然后这格外的难。[推特的分析][2]经常显示相反的结果。 一个我的关注者的性别估算结果: +我开始估算我关注的人(推特的上的术语是“朋友”)的性别分布,然后发现这格外的难。[推特的分析][2]给我展示了如下的结果, 关于关注我的人的性别估算: ![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/twitter-analytics.png) -因此,推特的分析将我的关注者分成了三类:男性、女性、未知,并且给我们展示了前面两组的比例。(性别二值化现象在这里并不存在——未知性别的人都集中在组织的推特账号上。)但是我关注的人的性别比例,推特并没有告诉我。 [而这就是可以改进的][3], 然后我开始搜索能够帮我估算这个数字的服务,最终发现了 [FollowerWonk][4] 。 +因此,推特的分析将我的关注者分成了三类:男性、女性、未知,并且给我们展示了前面两组的比例。(性别二值化现象在这里并不存在——未知性别的人都集中在组织的推特账号上。)但是我关注的人的性别比例,推特并没有告诉我。 [而这就是可以改进的][3],然后我开始搜索能够帮我估算这个数字的服务,最终发现了 [FollowerWonk][4] 。 -FollowerWonk 估算我关注的人里面有71%的都是男性。这个估算准确吗? 为了准确性,我把FollowerWonk和Twitter对我关注的人的进行了估算,结果如下: +FollowerWonk 估算我关注的人里面有 71% 都是男性。这个估算准确吗? 
为了评估一下,我把 FollowerWonk 和 Twitter 对我关注的人的估算进行了比较,结果如下: **推特分析** @@ -32,15 +32,15 @@ FollowerWonk的分析显示我的关注者中81%的人都是男性,很接近 | **我的关注者** | 81% | 19% | | **我关注的人** | 72% | 28% | -FollowerWonk的分析显示我的关注者中81%的人都是男性,很接近推特分析的数字。这个结果还说得过去。如果FollowerWonk 和Twitter 在我的关注者的性别比例上是一致的,这就表明FollowerWonk对我关注的人的性别估算也应当是合理的。使用FollowerWonk我就能养成估算这些数字的爱好,并且做出改进。 +FollowerWonk 的分析显示我的关注者中 81% 的人都是男性,很接近推特分析的数字。这个结果还说得过去。如果 FollowerWonk 和 Twitter 在我的关注者的性别比例上是一致的,这就表明 FollowerWonk 对我关注的人的性别估算也应当是合理的。使用 FollowerWonk 我就能养成估算这些数字的爱好,并且做出改进。 -然而,使用FollowerWonk 检测我关注的人的性别分布一个月需要30美元,这真是一个昂贵的爱好。我并不需要FollowerWonk 的所有的功能。我能很经济的解决只需要性别分布的问题吗? +然而,使用 FollowerWonk 检测我关注的人的性别分布一个月需要 30 美元,这真是一个昂贵的爱好。我并不需要 FollowerWonk 的所有功能。我能很经济地解决这个只需要性别分布的问题吗? -因为FollowerWonk 的估算数字看起来比较合理,我试图做一个自己的FollowerWonk 。使用Python和[一些好心的费城人写的Twitter API封装类][5](LCTT译者注:Twitter API封装类是由 Mike Taylor等一批费城人在github上开源的一个项目),我开始下载我所有关注的人和我所有的关注者的简介。我马上就发现推特的比例限制是极少的,因此我随机的采样了一部分用户。 +因为 FollowerWonk 的估算数字看起来比较合理,我试图做一个自己的 FollowerWonk。使用 Python 和[一些好心的费城人写的 Twitter API 封装类][5](LCTT 译注:Twitter API 封装类是由 Mike Taylor 等一批费城人在 github 上开源的一个项目),我开始下载我所有关注的人和我所有的关注者的简介。我马上就发现推特的速率限制很低,因此我随机地采样了一部分用户。 -我写了一个初步的程序,在所有我关注的人的简介中搜索一个和性别相关的代词。例如,如果简介中包含了“she”或者“her”这样的字眼,可能这就属于一个女性,如果简介中包含了“they”或者”then“,那么可能这就是性别位置的。但是大多数简介中不会出现这些代词。对于这种简介,和性别关联最紧密的信息就是姓名了。例如:@gvanrossum的姓名那一栏是“Guido van Rossum”,第一姓名是“Guido”,这表明@gvanrossum是一个女的。当找不到代词的时候,我就使用第一姓名来评估性别估算数字。 +我写了一个初步的程序,在所有我关注的人的简介中搜索一个和性别相关的代词。例如,如果简介中包含了“she”或者“her”这样的字眼,可能这就属于一个女性,如果简介中包含了“they”或者“them”,那么可能这就是性别未知的。但是大多数简介中不会出现这些代词。对于这种简介,和性别关联最紧密的信息就是姓名了。例如:@gvanrossum 的姓名那一栏是“Guido van Rossum”,名字是“Guido”,这表明 @gvanrossum 是一个男的。当找不到代词的时候,我就使用名字来评估性别估算数字。 -我的脚本把每个名字的一部分传到性别检测机中去检测性别。[性别检测机][6]也有可预见的失败,比如错误的把“Brooklyn Zen Center”当做一个名叫“Brooklyn”的女性,但是它的评估结果与FollowerWonk和Twitter的相比也是很合理的: +我的脚本把每个名字的一部分传到性别检测机中去检测性别。[性别检测机][6]也有可预见的失败,比如错误地把“Brooklyn Zen Center”当做一个名叫“Brooklyn”的女性,但是它的评估结果与 FollowerWonk 和 Twitter 的相比也是很合理的: | | 非男非女 | 男性 | 女性 | 性别未知的 | | ----- | ---- | ---- | ---- | ----- | @@
-53,15 +53,13 @@ ### 了解你的数字 -我想你们也能检测你们推特关系网的性别分布。所以我每月花费10美元将“Proportional”应用发布到PythonAnywhere这个便利的服务上: +我想你们也能检测你们推特关系网的性别分布。所以我将“Proportional”应用发布到 PythonAnywhere 这个便利的服务上,每月仅需 10 美元: -> - -这个应用可能会在速率上有限制,否则会失败,因此请温柔的对待它。github上放了源代码[代码][7] ,也有命令行的工具。 - -是谁代表了你的推特关系网?你还在忍受那些在过去几十年里一直在谈论的软件的不公平的分布组吗?或者你的关系网看起来像软件行业的未来吗?让我们了解我们的数字并且改善他们。 +> +这个应用可能会在速率上有限制,超过会失败,因此请温柔地对待它。github 上放了[源代码][7],也有命令行的工具。 +是谁代表了你的推特关系网?你还在忍受那些在过去几十年里一直在谈论的软件行业的不公平的男女分布吗?或者你的关系网看起来像软件行业的未来吗?让我们了解我们的数字并且改善它们。 -------------------------------------------------------------------------------- @@ -69,7 +67,7 @@ via: https://emptysqua.re/blog/gender-of-twitter-users-i-follow/ 作者:[A. Jesse Jiryu Davis][a] 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f3220bf920ca6af95c39fe98a44a095c22adeb18 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Aug 2016 01:13:36 +0800 Subject: [PATCH 426/471] =?UTF-8?q?PUB:20160525=20Getting=20Started=20with?= =?UTF-8?q?=20Python=20Programming=20and=20Scripting=20in=20Linux=20?= =?UTF-8?q?=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @StdioA --- ...ramming and Scripting in Linux – Part 1.md | 58 ++++++++++--------- 1 file changed, 31 insertions(+), 27 deletions(-) rename {translated/tech => published}/20160525 Getting Started with Python Programming and Scripting in Linux – Part 1.md (76%) diff --git a/translated/tech/20160525 Getting Started with Python Programming and Scripting in Linux – Part 1.md b/published/20160525 Getting Started with Python Programming and 
Scripting in Linux – Part 1.md index 24df8ca66b..a0caa58efa 100644 --- a/translated/tech/20160525 Getting Started with Python Programming and Scripting in Linux – Part 1.md +++ b/published/20160525 Getting Started with Python Programming and Scripting in Linux – Part 1.md @@ -1,21 +1,19 @@ -Linux 平台下 Python 脚本编程入门 – Part 1 +Linux 平台下 Python 脚本编程入门(一) =============================================================================== - 众所周知,系统管理员需要精通一门脚本语言,而且招聘机构列出的职位需求上也会这么写。大多数人会认为 Bash (或者其他的 shell 语言)用起来很方便,但一些强大的语言(比如 Python)会给你带来一些其它的好处。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Programming-Scripting-in-Linux.png) -> 在 Linux 中学习 Python 脚本编程 + +*在 Linux 中学习 Python 脚本编程* 首先,我们会使用 Python 的命令行工具,还会接触到 Python 的面向对象特性(这篇文章的后半部分会谈到它)。 -最后,学习 Python - 可以助力于你在[桌面应用开发][2]及[数据科学领域][3]的事业。 +学习 Python 可以助力于你在[桌面应用开发][2]及[数据科学领域][3]的职业发展。 -容易上手,广泛使用,拥有海量“开箱即用”的模块(它是一组包含 Python 声明的外部文件),Python 理所当然地成为了美国计算机专业大学生在一年级时所上的程序设计课所用语言的不二之选。 +容易上手,广泛使用,拥有海量“开箱即用”的模块(它是一组包含 Python 语句的外部文件),Python 理所当然地成为了美国计算机专业大学生在一年级时所上的程序设计课所用语言的不二之选。 -在这个由两篇文章构成的系列中,我们将回顾 Python -的基础部分,希望初学编程的你能够将这篇实用的文章作为一个编程入门的跳板,和日后使用 Python 时的一篇快速指引。 +在这个由两篇文章构成的系列中,我们将回顾 Python 的基础部分,希望初学编程的你能够将这篇实用的文章作为一个编程入门的跳板,和日后使用 Python 时的一篇快速指引。 ### Linux 中的 Python @@ -33,7 +31,8 @@ $ python3 ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Running-Python-Commands-on-Linux.png) -> 在 Linux 中运行 Python 命令 + +*在 Linux 中运行 Python 命令* 如果你希望在键入 `python` 时使用 Python 3.x 而不是 2.x,你可以像下面一样更改对应的符号链接: @@ -44,11 +43,12 @@ $ ln -s python3.2 python # Choose the Python 3.x binary here ``` ![](http://www.tecmint.com/wp-content/uploads/2016/05/Remove-Python-2-and-Use-Python-3.png) -> 删除 Python 2,使用 Python 3 + +*删除 Python 2,使用 Python 3* 顺便一提,有一点需要注意:尽管 Python 2.x 仍旧被使用,但它并不会被积极维护。因此,你可能要考虑像上面指示的那样来切换到 3.x。2.x 和 3.x 的语法有一些不同,我们会在这个系列文章中使用后者。 -另一个在 Linux 中使用 Python 的方法是通过 IDLE (the Python Integrated Development Environment),一个为编写 Python 代码而生的图形用户界面。在安装它之前,你最好查看一下适用于你的 Linux 发行版的 IDLE 可用版本。 +另一个在 Linux 中使用 
Python 的方法是通过 IDLE (the Python Integrated Development Environment),这是一个为编写 Python 代码而生的图形用户界面。在安装它之前,你最好查看一下适用于你的 Linux 发行版的 IDLE 可用版本。 ``` # aptitude search idle [Debian 及其衍生发行版] @@ -68,23 +68,24 @@ $ sudo aptitude install idle-python3.2 # I'm using Linux Mint 13 1. 轻松打开外部文件 (File → Open); -![](http://www.tecmint.com/wp-content/uploads/2016/05/Python-Shell.png) -> Python Shell + ![](http://www.tecmint.com/wp-content/uploads/2016/05/Python-Shell.png) + + *Python Shell* 2. 复制 (`Ctrl + C`) 和粘贴 (`Ctrl + V`) 文本; 3. 查找和替换文本; -4. 显示可能的代码补全(一个在其他 IDE 里可能叫做 Intellisense 或者 Autocompletion 的功能); +4. 显示可能的代码补全(一个在其他 IDE 里可能叫做“智能感知”或者“自动补完”的功能); 5. 更改字体和字号,等等。 -最厉害的是,你可以用 IDLE 创建桌面工程。 +最厉害的是,你可以用 IDLE 创建桌面应用。 我们在这两篇文章中不会开发桌面应用,所以你可以根据喜好来选择 IDLE 或 Python shell 去运行下面的例子。 ### Python 中的基本运算 -就像你预料的那样,你能够直接进行算术操作(你可以在所有操作中使用足够多的括号!),还可以轻松地使用 Python 拼接字符串。 +就像你预料的那样,你能够直接进行算术操作(你可以在你的所有运算中使用足够多的括号!),还可以轻松地使用 Python 拼接字符串。 -你还可以将运算结果赋给一个变量,然后在屏幕上显示它。Python 有一个叫做输出 (concatenation) 的实用功能——把一串变量和/或字符串用逗号分隔,然后在 print 函数中插入,它会返回一个由你刚才提供的变量依序构成的句子: +你还可以将运算结果赋给一个变量,然后在屏幕上显示它。Python 有一个叫做拼接 (concatenation) 的实用功能——给 print 函数提供一串用逗号分隔的变量和/或字符串,它会返回一个由你刚才提供的变量依序构成的句子: ``` >>> a = 5 @@ -100,15 +101,16 @@ $ sudo aptitude install idle-python3.2 # I'm using Linux Mint 13 如果你尝试在静态类型语言中(如 Java 或 C#)做这件事,它将抛出一个错误。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Basic-Operations.png) -> 学习 Python 的基本操作 + +*学习 Python 的基本操作* ### 面向对象编程的简单介绍 -在面向对象编程(OOP)中,程序中的所有实体都会由对象的形式呈现,所以它们可以与其他对象交互。因此,对象拥有属性,而且大多数对象可以完成动作(这被称为对象的方法)。 +在面向对象编程(OOP)中,程序中的所有实体都会由对象的形式呈现,并且它们可以与其他对象交互。因此,对象拥有属性,而且大多数对象可以执行动作(这被称为对象的方法)。 举个例子:我们来想象一下,创建一个对象“狗”。它可能拥有的一些属性有`颜色`、`品种`、`年龄`等等,而它可以完成的动作有 `叫()`、`吃()`、`睡觉()`,诸如此类。 -你可以看到,方法名后面会跟着一对括号,他们当中可能会包含一个或多个参数(向方法中传递的值),也有可能什么都不包含。 +你可以看到,方法名后面会跟着一对括号,括号当中可能会包含一个或多个参数(向方法中传递的值),也有可能什么都不包含。 我们用 Python 的基本对象类型之一——列表来解释这些概念。 @@ -139,19 +141,21 @@ $ sudo aptitude install idle-python3.2 # I'm using Linux Mint 13 >>> rockBands.pop(0) ``` -如果你输入了对象的名字,然后在后面输入了一个点,你可以按 Ctrl + 
空格来显示这个对象的可用方法列表。 +如果你输入了对象的名字,然后在后面输入了一个点,你可以按 `Ctrl + space` 来显示这个对象的可用方法列表。 ![](http://www.tecmint.com/wp-content/uploads/2016/05/List-Available-Python-Methods.png) -> 列出可用的 Python 方法 + +*列出可用的 Python 方法* 列表中含有的元素个数是它的一个属性。它通常被叫做“长度”,你可以通过向内建函数 `len` 传递一个列表作为它的参数来显示该列表的长度(顺便一提,之前的例子中提到的 print 语句,是 Python 的另一个内建函数)。 如果你在 IDLE 中输入 `len`,然后跟上一个不闭合的括号,你会看到这个函数的默认语法: ![](http://www.tecmint.com/wp-content/uploads/2016/05/Python-len-Function.png) -> Python 的 len 函数 -现在我们来看看列表中的特定条目。他们也有属性和方法吗?答案是肯定的。比如,你可以将一个字符串条目装换为大写形式,并获取这个字符串所包含的字符数量。像下面这样做: +*Python 的 len 函数* + +现在我们来看看列表中的特定条目。它们也有属性和方法吗?答案是肯定的。比如,你可以将一个字符串条目转换为大写形式,并获取这个字符串所包含的字符数量。像下面这样做: ``` >>> rockBands[0].upper() @@ -162,11 +166,11 @@ $ sudo aptitude install idle-python3.2 # I'm using Linux Mint 13 ### 总结 -在这篇文章中,我们简要介绍了 Python,它的命令行 shell,IDLE,展示了如何执行算术运算,如何在变量中存储数据,如何使用 `print` 函数在屏幕上重新显示那些数据(无论是它们本身还是它们的一部分),还通过一个实际的例子解释了对象的属性和方法。 +在这篇文章中,我们简要介绍了 Python、它的命令行 shell、IDLE,展示了如何执行算术运算,如何在变量中存储数据,如何使用 `print` 函数在屏幕上重新显示那些数据(无论是它们本身还是它们的一部分),还通过一个实际的例子解释了对象的属性和方法。 下一篇文章中,我们会展示如何使用条件语句和循环语句来实现流程控制。我们也会解释如何编写一个脚本来帮助我们完成系统管理任务。 -你是不是想继续学习一些有关 Python 的知识呢?敬请期待本系列的第二部分(我们会将 Python 的慷慨、脚本中的命令行工具与其他部分结合在一起),你还可以考虑购买我们的《终极 Python 编程》系列教程([这里][4]有详细信息)。 +你是不是想继续学习一些有关 Python 的知识呢?敬请期待本系列的第二部分(我们会在脚本中将 Python 和命令行工具的优点结合在一起),你还可以考虑购买我们的《终极 Python 编程》系列教程([这里][4]有详细信息)。 像往常一样,如果你对这篇文章有什么问题,可以向我们寻求帮助。你可以使用下面的联系表单向我们发送留言,我们会尽快回复你。 @@ -176,7 +180,7 @@ via: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/ 作者:[Gabriel Cánepa][a] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e62929cb0dd761692029f57205a2a89f4aef01fb Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 18 Aug 2016 21:25:04 +0800 Subject: [PATCH 427/471] =?UTF-8?q?20160818-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 
--- ...ystem or Directory Using SSHFS Over SSH.md | 145 ++++++++++++++++++ 1 file changed, 145 insertions(+) create mode 100644 sources/tech/20160811 How to Mount Remote Linux Filesystem or Directory Using SSHFS Over SSH.md diff --git a/sources/tech/20160811 How to Mount Remote Linux Filesystem or Directory Using SSHFS Over SSH.md b/sources/tech/20160811 How to Mount Remote Linux Filesystem or Directory Using SSHFS Over SSH.md new file mode 100644 index 0000000000..6b51e41a9c --- /dev/null +++ b/sources/tech/20160811 How to Mount Remote Linux Filesystem or Directory Using SSHFS Over SSH.md @@ -0,0 +1,145 @@ +How to Mount Remote Linux Filesystem or Directory Using SSHFS Over SSH +============================ + +The main purpose of writing this article is to provide a step-by-step guide on how to mount a remote Linux file system using the SSHFS client over SSH. + +This article is useful for those users and system administrators who want to mount a remote file system on their local systems for whatever purpose. We have practically tested this by installing the SSHFS client on one of our Linux systems and successfully mounted remote file systems. + +Before we go further with the installation, let’s understand SSHFS and how it works. + +![](http://www.tecmint.com/wp-content/uploads/2012/08/Sshfs-Mount-Remote-Linux-Filesystem-Directory.png) +>Sshfs Mount Remote Linux Filesystem or Directory + +### What Is SSHFS? + +SSHFS (Secure SHell FileSystem) is a client that enables us to mount a remote filesystem and interact with remote directories and files on a local machine using the SSH File Transfer Protocol (SFTP). + +SFTP is a secure file transfer protocol that provides file access, file transfer and file management features over the Secure Shell protocol. 
SSH uses encryption while transferring files over the network from one computer to another, and SSHFS is built on FUSE (Filesystem in Userspace), a kernel module that allows any non-privileged user to mount a file system without modifying kernel code. + +In this article, we will show you how to install and use the SSHFS client on any Linux distribution to mount a remote Linux filesystem or directory on a local Linux machine. + +#### Step 1: Install SSHFS Client in Linux Systems + +By default, the sshfs package does not exist on all major Linux distributions; you need to enable the [epel repository][1] on your Linux system to install sshfs along with its dependencies with the help of the yum command. + +``` +# yum install sshfs +# dnf install sshfs [On Fedora 22+ releases] +$ sudo apt-get install sshfs [On Debian/Ubuntu based systems] +``` + +#### Step 2: Creating SSHFS Mount Directory + +Once the sshfs package is installed, you need to create a mount point directory where you will mount your remote file system. For example, we have created a mount directory at /mnt/tecmint. + +``` +# mkdir /mnt/tecmint +$ sudo mkdir /mnt/tecmint [On Debian/Ubuntu based systems] +``` + +#### Step 3: Mounting Remote Filesystem with SSHFS + +Once you have created your mount point directory, run the following command as the root user to mount the remote file system under /mnt/tecmint. In your case the mount directory could be anything. + +The following command will mount a remote directory called /home/tecmint under /mnt/tecmint on the local system. (Don’t forget to replace x.x.x.x with your IP address and mount point). + +``` +# sshfs tecmint@x.x.x.x:/home/tecmint/ /mnt/tecmint +$ sudo sshfs -o allow_other tecmint@x.x.x.x:/home/tecmint/ /mnt/tecmint [On Debian/Ubuntu based systems] +``` + +If your Linux server is configured with SSH key based authorization, then you will need to specify the path to your private key as shown in the following command. 
+ +``` +# sshfs -o IdentityFile=~/.ssh/id_rsa tecmint@x.x.x.x:/home/tecmint/ /mnt/tecmint +$ sudo sshfs -o allow_other,IdentityFile=~/.ssh/id_rsa tecmint@x.x.x.x:/home/tecmint/ /mnt/tecmint [On Debian/Ubuntu based systems] +``` + +#### Step 4: Verifying Remote Filesystem is Mounted + +If you have run the above command successfully without any errors, you will see the list of remote files and directories mounted under /mnt/tecmint. + +``` +# cd /mnt/tecmint +# ls +[root@ tecmint]# ls +12345.jpg ffmpeg-php-0.6.0.tbz2 Linux news-closeup.xsl s3.jpg +cmslogs gmd-latest.sql.tar.bz2 Malware newsletter1.html sshdallow +epel-release-6-5.noarch.rpm json-1.2.1 movies_list.php pollbeta.sql +ffmpeg-php-0.6.0 json-1.2.1.tgz my_next_artical_v2.php pollbeta.tar.bz2 +``` + +#### Step 5: Checking Mount Point with df -hT Command + +If you run the df -hT command, you will see the remote file system mount point. + +``` +# df -hT +``` + +Sample Output + +``` +Filesystem Type Size Used Avail Use% Mounted on +udev devtmpfs 730M 0 730M 0% /dev +tmpfs tmpfs 150M 4.9M 145M 4% /run +/dev/sda1 ext4 31G 5.5G 24G 19% / +tmpfs tmpfs 749M 216K 748M 1% /dev/shm +tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock +tmpfs tmpfs 749M 0 749M 0% /sys/fs/cgroup +tmpfs tmpfs 150M 44K 150M 1% /run/user/1000 +tecmint@192.168.0.102:/home/tecmint fuse.sshfs 324G 55G 253G 18% /mnt/tecmint +``` + +#### Step 6: Mounting Remote Filesystem Permanently + +To mount the remote filesystem permanently, you need to edit the file called /etc/fstab. To do so, open the file with your favorite editor. + +``` +# vi /etc/fstab +$ sudo vi /etc/fstab [On Debian/Ubuntu based systems] +``` + +Go to the bottom of the file, add the following line to it, then save the file and exit. The below entry mounts the remote server file system with default settings. 
+ +``` +sshfs#tecmint@x.x.x.x:/home/tecmint/ /mnt/tecmint fuse.sshfs defaults 0 0 +``` + +Make sure you’ve [SSH Passwordless Login][2] set up between the servers to auto-mount the filesystem during system reboots. + +``` +sshfs#tecmint@x.x.x.x:/home/tecmint/ /mnt/tecmint fuse.sshfs IdentityFile=~/.ssh/id_rsa defaults 0 0 +``` + +Next, you need to mount everything listed in fstab to apply the changes. + +``` +# mount -a +$ sudo mount -a [On Debian/Ubuntu based systems] +``` + +#### Step 7: Unmounting Remote Filesystem + +To unmount the remote filesystem, just issue the following command and it will unmount the remote file system. + +``` +# umount /mnt/tecmint +``` + +That’s all for now. If you’re facing any difficulties or need any help in mounting a remote file system, please contact us via comments, and if you feel this article is useful then share it with your friends. + + +------------------------------------------------------------------------------- + +via: http://www.tecmint.com/sshfs-mount-remote-linux-filesystem-directory-using-ssh/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Ravi Saive][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/admin/ +[1]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[2]: http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ From d4e9624498af686bf5984afdfeb1725f4a284742 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 19 Aug 2016 10:27:29 +0800 Subject: [PATCH 428/471] =?UTF-8?q?20160819-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20160811 5 best linux init system.md | 103 ++++++++++++++++++ 1 file changed, 103 insertions(+) create mode 100644 sources/tech/20160811 5 best linux init 
system.md diff --git a/sources/tech/20160811 5 best linux init system.md b/sources/tech/20160811 5 best linux init system.md new file mode 100644 index 0000000000..460acdef3f --- /dev/null +++ b/sources/tech/20160811 5 best linux init system.md @@ -0,0 +1,103 @@ +5 Best Modern Linux ‘init’ Systems (1992-2015) +============================================ + +In Linux and other Unix-like operating systems, the init (initialization) process is the first process executed by the kernel at boot time. It has a process ID (PID) of 1 and runs in the background until the system is shut down. + +The init process starts all other processes (daemons, services and other background processes), therefore it is the mother of all other processes on the system. A process can start many other child processes on the system, but in the event that a parent process dies, init becomes the parent of the orphan process. + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Linux-init-Systems.png) + +Over the years, many init systems have emerged in major Linux distributions, and in this guide, we shall take a look at some of the best init systems you can work with on the Linux operating system. + +### 1. System V Init + +System V (SysV) is a mature and popular init scheme on Unix-like operating systems; its init process is the parent of all processes on a Unix/Linux system. The SysV scheme takes its name from System V, one of the first commercially developed versions of Unix. + +Almost all Linux distributions first used the SysV init scheme, except Gentoo, which has a custom init, and Slackware, which uses a BSD-style init scheme. + +As years have passed by, due to some imperfections, several SysV init replacements have been developed in a quest to create more efficient init systems for Linux. + +Although these alternatives seek to improve SysV and possibly offer new features, they are still compatible with the original SysV init scripts. + +### 2. SystemD + +SystemD is a relatively new init scheme on the Linux platform. 
Introduced in Fedora 15, it is an assortment of tools for easy system management. The main purpose is to initialize, manage and keep track of all system processes in the boot process and while the system is running. + +Systemd init is comprehensively distinct from other traditional Unix init systems in the way it practically approaches system and services management. It is also compatible with SysV and LSB init scripts. + +It has some of the following eminent features: + +- Clean, straightforward and efficient design +- Concurrent and parallel processing at bootup +- Better API +- Enables removal of optional processes +- Supports event logging using journald +- Supports job scheduling using systemd calendar timers +- Storage of logs in binary files +- Preservation of systemd state for future reference +- Better integration with GNOME plus many more + +Read the Systemd init Overview: + +### 3. Upstart + +Upstart is an event-based init system developed by the makers of Ubuntu as a replacement for the SysV init system. It starts different system tasks and processes, inspects them while the system is running and stops them during system shutdown. + +It is a hybrid init system which uses both SysV startup scripts and also Systemd scripts; some of the notable features of the Upstart init system include: + +- Originally developed for Ubuntu Linux but can run on all other distributions +- Event-based starting and stopping of tasks and services +- Events are generated during starting and stopping of tasks and services +- Events can be sent by other system processes +- Communication with the init process through D-Bus +- Users can start and stop their own processes +- Re-spawning of services that die abruptly and many more + +Visit Homepage: + +### 4. OpenRC + +OpenRC is a dependency-based init scheme for Unix-like operating systems; it is compatible with SysV init. 
As much as it brings some improvements to SysV, you must keep in mind that OpenRC is not an absolute replacement for the /sbin/init file. + +It offers some illustrious features and these include: + +- It can run on many other Linux distributions including Gentoo and also on BSD +- Supports hardware initiated init scripts +- Supports a single configuration file +- No per-service configurations supported +- Runs as a daemon +- Parallel services startup and many more + +Visit Homepage: + +### 5. runit + +runit is also a cross-platform init system that can run on GNU/Linux, Solaris, *BSD and Mac OS X. It is an alternative to SysV init that offers service supervision. + +It comes with some benefits and remarkable components not found in SysV init and possibly other init systems in Linux, and these include: + +- Service supervision, where each service is associated with a service directory +- Clean process state; it guarantees each process a clean state +- A reliable logging facility +- Fast system boot up and shutdown +- It is also portable +- Packaging friendly +- Small code size and many more + +Visit Homepage: + +As mentioned earlier, the init system starts and manages all other processes on a Linux system. Additionally, SysV is the primary init scheme on Linux operating systems, but due to some performance weaknesses, system programmers have developed several replacements for it. + +And here, we looked at a few of those replacements, but there could be other init systems that you think are worth mentioning in this list. You can let us know of them via the comment section below. 
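Before weighing these schemes against each other, it helps to know which init system a given machine is actually running. Assuming a Linux box with procfs mounted (the paths below are standard Linux/systemd conventions, and the exact name printed varies by distribution), a quick, hedged check might look like:

```shell
#!/bin/sh
# Print the name of PID 1 straight from procfs; this is typically
# "systemd" on systemd machines and "init" on SysV or Upstart machines.
cat /proc/1/comm

# systemd additionally keeps this directory around while it is PID 1,
# which is the check the sd_booted() helper documents.
if [ -d /run/systemd/system ]; then
    echo "running under systemd"
else
    echo "not running under systemd"
fi
```

Neither check is authoritative everywhere (in a container, for example, PID 1 may be an ordinary shell), but together they identify the init system on most stock installations.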
+ + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/best-linux-init-systems/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 6d258b9e209176cb269ac5d7fe17cd5346a35765 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 19 Aug 2016 10:31:26 +0800 Subject: [PATCH 429/471] =?UTF-8?q?20160819-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow Awk to Use Shell Variables – Part 11.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md diff --git a/sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md b/sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md new file mode 100644 index 0000000000..b08f3fa15d --- /dev/null +++ b/sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md @@ -0,0 +1,106 @@ +How to Allow Awk to Use Shell Variables – Part 11 +=========================================== + +When we write shell scripts, we normally include other smaller programs or commands such as Awk operations in our scripts. In the case of Awk, we have to find ways of passing some values from the shell to Awk operations. + +This can be done by using shell variables within Awk commands, and in this part of the series, we shall learn how to allow Awk to use shell variables that may contain values we want to pass to Awk commands. + +There are two possible ways you can enable Awk to use shell variables: + +### 1. 
Using Shell Quoting + +Let us take a look at an example to illustrate how you can actually use shell quoting to substitute the value of a shell variable in an Awk command. In this example, we want to search for a username in the file /etc/passwd, filter and print the user’s account information. + +Therefore, we can write a `test.sh` script with the following content: + +``` +#!/bin/bash + +#read user input +read -p "Please enter username:" username + +#search for username in /etc/passwd file and print details on the screen +cat /etc/passwd | awk "/$username/ "' { print $0 }' +``` + +Thereafter, save the file and exit. + +Interpretation of the Awk command in the test.sh script above: + +``` +cat /etc/passwd | awk "/$username/ "' { print $0 }' +``` + +`"/$username/ "` – shell quoting used to substitute the value of the shell variable username in the Awk command. The value of username is the pattern to be searched in the file /etc/passwd. + +Note that the double quote is outside the Awk script, `‘{ print $0 }’`. + +Then make the script executable and run it as follows: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +After running the script, you will be prompted to enter a username; type a valid username and hit Enter. You will view the user’s account details from the /etc/passwd file as below: + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Shell-Script-to-Find-Username-in-Passwd-File.png) + +### 2. Using Awk’s Variable Assignment + +This method is much simpler and better in comparison to method one above. Considering the example above, we can run a simple command to accomplish the job. Under this method, we use the -v option to assign a shell variable to an Awk variable. 
+ +Firstly, create a shell variable, username, and assign it the name that we want to search in the /etc/passwd file: + +``` +username="aaronkilik" +``` + +Then type the command below and hit Enter: + +``` +# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}' +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Find-Username-in-Password-File-Using-Awk.png) + +Explanation of the above command: + +- `-v` – Awk option to declare a variable +- `username` – is the shell variable +- `name` – is the Awk variable + +Let us take a careful look at `$0 ~ name` inside the Awk script, `' $0 ~ name {print $0}'`. Remember, when we covered Awk comparison operators in Part 4 of this series, one of the comparison operators was value ~ pattern, which means: true if value matches the pattern. + +The output (`$0`) of the cat command piped to Awk matches the pattern (aaronkilik), which is the name we are searching for in /etc/passwd; as a result, the comparison operation is true. The line containing the user’s account information is then printed on the screen. + +### Conclusion + +We have covered an important section of Awk features that can help us use shell variables within Awk commands. Many times, you will write small Awk programs or commands within shell scripts and therefore, you need to have a clear understanding of how to use shell variables within Awk commands. + +In the next part of the Awk series, we shall dive into yet another critical section of Awk features, that is flow control statements. So stay tuned and let’s keep learning and sharing. 
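The two methods above can also be exercised side by side in one small, self-contained sketch; note that the sample file `users.txt` and its user entries are made up here so the demo does not depend on the contents of /etc/passwd:

```shell
#!/bin/bash

# Create a tiny passwd-style sample file for the demo.
cat > users.txt <<'EOF'
aaronkilik:x:1000:1000:Aaron Kilik:/home/aaronkilik:/bin/bash
tecmint:x:1001:1001:Tecmint User:/home/tecmint:/bin/bash
EOF

username="tecmint"

# Method 1: shell quoting -- the double-quoted "/$username/ " part is
# expanded by the shell before awk ever sees the program text.
awk "/$username/ "'{ print $0 }' users.txt

# Method 2: awk's -v option -- assign the shell variable to an awk variable.
awk -v name="$username" '$0 ~ name { print $0 }' users.txt
```

Both commands print the same matching line for the user tecmint, which shows the two techniques are interchangeable for simple pattern searches.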
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ + + + + + + + + + + + + + From 065a34868642f80d0122b2f5ce68f7af405e8966 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 19 Aug 2016 10:43:16 +0800 Subject: [PATCH 430/471] =?UTF-8?q?20160819-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...08 Simple Python Framework from Scratch.md | 461 ++++++++++++++++++ 1 file changed, 461 insertions(+) create mode 100644 sources/tech/20160608 Simple Python Framework from Scratch.md diff --git a/sources/tech/20160608 Simple Python Framework from Scratch.md b/sources/tech/20160608 Simple Python Framework from Scratch.md new file mode 100644 index 0000000000..f4e7ce0518 --- /dev/null +++ b/sources/tech/20160608 Simple Python Framework from Scratch.md @@ -0,0 +1,461 @@ +Simple Python Framework from Scratch +=================================== + +Why would you want to build a web framework? I can think of a few reasons: + +- Novel idea that will displace other frameworks. +- Get some mad street cred. +- Your problem domain is so unique that other frameworks don't fit. +- You're curious how web frameworks work because you want to become a better web developer. + +I'll focus on the last point. This post aims to describe what I learned by writing a small server and framework by explaining the design and implementation process step by step, function by function. The complete code for this project can be found in this [repository][1]. 
+
+I hope this encourages other people to try, because it was fun, taught me a lot about how web applications work, and it was a lot easier than I thought!
+
+### Scope
+
+Frameworks handle things like the request-response cycle, authentication, database access, generating templates, and lots more. Web developers use frameworks because most web applications share a lot of functionality and it doesn't make sense to re-implement all of that for every project.
+
+Bigger frameworks like Rails or Django operate on a high level of abstraction and are said to be "batteries-included". It would take thousands of man-hours to implement all of those features so it's important to focus on a tiny subset for this project. Before setting down a single line of code I created a list of features and constraints.
+
+Features:
+
+- Must handle GET and POST HTTP requests. (You can get a brief overview of HTTP in [this wiki article][2].)
+- Must be asynchronous (I'm loving the Python 3 asyncio module).
+- Must include simple routing logic along with capturing parameters.
+- Must provide a simple user-facing API like other cool microframeworks.
+- Must handle authentication, because it's cool to learn that too (saved for Part 2).
+
+Constraints:
+
+- Will only handle a small subset of HTTP/1.1: no transfer-encoding, no http-auth, no content-encoding (gzip), no [persistent connections][3].
+- No MIME-guessing for responses - users will have to set this manually.
+- No WSGI - just simple TCP connection handling.
+- No database support.
+
+I decided a small use case would make the above more concrete.
It would also demonstrate the framework's API: + +``` +from diy_framework import App, Router +from diy_framework.http_utils import Response + + +# GET simple route +async def home(r): + rsp = Response() + rsp.set_header('Content-Type', 'text/html') + rsp.body = 'test' + return rsp + + +# GET route + params +async def welcome(r, name): + return "Welcome {}".format(name) + +# POST route + body param +async def parse_form(r): + if r.method == 'GET': + return 'form' + else: + name = r.body.get('name', '')[0] + password = r.body.get('password', '')[0] + + return "{0}:{1}".format(name, password) + +# application = router + http server +router = Router() +router.add_routes({ + r'/welcome/{name}': welcome, + r'/': home, + r'/login': parse_form,}) + +app = App(router) +app.start_server() +``` + +The user is supposed to be able to define a few asynchronous functions that either return strings or Response objects, then pair those functions up with strings that represent routes, and finally start handling requests with a single function call (start_server). + +Having created the designs, I condensed it into abstract things I needed to code up: + +- Something that should accept TCP connections and schedule an asynchronous function to handle them. +- Something to parse raw text into some kind of abstracted container. +- Something to decide which function should be called for each request. +- Something that bundles all of the above together and presents a simple interface to a developer. + +I started by writing tests that were essentially contracts describing each piece of functionality. After a few refactorings, the layout settled into a handful of composable pieces. Each piece is relatively decoupled from the other components, which is especially beneficial in this case because each piece can be studied on its own. 
The pieces are concrete reflections of the abstractions that I listed above:
+
+- An HTTPServer object that holds onto a Router object and a http_parser module and uses them to initialize...
+- HTTPConnection objects, where each one represents a single client HTTP connection and takes care of the request-response cycle: parses incoming bytes into a Request object using the http_parser module; uses an instance of Router to find the correct function to call to generate a response; finally sends the response back to the client.
+- A pair of Request and Response objects that give a user a comfortable way to work with what are in essence specially formatted strings of bytes. The user shouldn't be aware of the correct message format or delimiters.
+- A Router object that contains route:function pairs. It exposes a way to add these pairs and a way, given a URL path, to find a function.
+- Finally, an App object that contains configuration and uses it to instantiate an HTTPServer instance.
+
+Let's go over each of these pieces, starting from HTTPConnection.
+
+### Modelling Asynchronous Connections
+
+To satisfy the constraints, each HTTP request is a separate TCP connection. This makes request handling slower because of the relatively high cost of establishing multiple TCP connections (cost of DNS lookup, hand-shake, [Slow Start][4], etc.) but it's much easier to model. For this task, I chose the fairly high-level [asyncio-stream][5] module that rests on top of [asyncio's transports and protocols][6]. I recommend checking out the code for it in the stdlib because it's a joy to read!
+
+An instance of HTTPConnection handles multiple tasks. First, it reads data incrementally from a TCP connection using an asyncio.StreamReader object and stores it in a buffer. After each read operation, it tries to parse whatever is in the buffer and build out a Request object.
Once it receives the whole request, it generates a reply and sends it back to the client through an asyncio.StreamWriter object. It also handles two more tasks: timing out a connection and handling errors.
+
+You can view the complete code for the class [here][7]. I'll introduce each part of the code separately, with the docstrings removed for brevity:
+
+```
+class HTTPConnection(object):
+    def __init__(self, http_server, reader, writer):
+        self.router = http_server.router
+        self.http_parser = http_server.http_parser
+        self.loop = http_server.loop
+
+        self._reader = reader
+        self._writer = writer
+        self._buffer = bytearray()
+        self._conn_timeout = None
+        self.request = Request()
+```
+
+The `__init__` method is boring: it just collects objects to work with later. It stores a router, an http_parser, and loop objects, which are used to generate responses, parse requests, and schedule things in the event loop.
+
+Next, it stores the reader-writer pair, which together represent a TCP connection, and an empty [bytearray][8] that serves as a buffer for raw bytes. _conn_timeout stores an instance of [asyncio.Handle][9] that is used to manage the timeout logic. Finally, it also stores a single instance of Request.
+
+The following code handles the core functionality of receiving and sending data:
+
+```
+async def handle_request(self):
+    try:
+        while not self.request.finished and not self._reader.at_eof():
+            data = await self._reader.read(1024)
+            if data:
+                self._reset_conn_timeout()
+                await self.process_data(data)
+        if self.request.finished:
+            await self.reply()
+        elif self._reader.at_eof():
+            raise BadRequestException()
+    except (NotFoundException,
+            BadRequestException) as e:
+        self.error_reply(e.code, body=Response.reason_phrases[e.code])
+    except Exception as e:
+        self.error_reply(500, body=Response.reason_phrases[500])
+
+    self.close_connection()
+```
+
+Everything here is wrapped in a try-except block so that any exceptions thrown during parsing a request or replying to one are caught and an error response is sent back to the client.
+
+The request is read inside of a while loop until the parser sets self.request.finished = True or until the client closes the connection, signalled by the self._reader.at_eof() method returning True. The code tries to read data from a StreamReader on every iteration of the loop and incrementally build out self.request through the call to self.process_data(data). The connection timeout timer is reset every time the loop reads any data.
+
+There's an error in there - can you spot it? I'll get back to it shortly. I also have to note that this loop can potentially eat up all of the CPU because self._reader.read() returns b'' objects if there is nothing to read - meaning the loop will cycle millions of times, doing nothing. A possible solution here would be to wait a bit of time in a non-blocking fashion: await asyncio.sleep(0.1). I'll hold off on optimizing this until it's needed.
+
+Remember the error that I mentioned at the start of the previous paragraph? The self._reset_conn_timeout() method is called only when data is read from the StreamReader. The way it's set up now means that the timeout is not initiated until the first byte arrives.
If a client opens a connection to the server and doesn't send any data - it never times out. This could be used to exhaust system resources and cause a denial of service. The fix is to simply call self._reset_conn_timeout() in the `__init__` method.
+
+When the request is received or when the connection drops, the code hits an if-else block. This block asks whether the parser, having received all the data, finished parsing the request. Yes? Great - generate a reply and send it back! No? Uh-oh - something's wrong with the request - raise an exception! Finally, self.close_connection is called to perform cleanup.
+
+Parsing the parts of a request is done inside the self.process_data method. It's a very short and simple method that's easy to test:
+
+```
+async def process_data(self, data):
+    self._buffer.extend(data)
+
+    self._buffer = self.http_parser.parse_into(
+        self.request, self._buffer)
+```
+
+Each call accumulates data into self._buffer and then tries to parse whatever has gathered inside of the buffer using self.http_parser. It's worth pointing out here that this code exhibits a pattern called [Dependency Injection][10]. If you remember the `__init__` method, you know that I pass in an http_server object, which contains an http_parser object. In this case, the http_parser object is a module that is part of the diy_framework package, but it could potentially be anything else that has a parse_into function that accepts a Request object and a bytearray. This is useful for two reasons. First, it means that this code is easier to extend. Someone could come along and want to use HTTPConnection with a different parser - no problem - just pass it in as an argument! Second, it makes testing much easier because the http_parser is not hard coded anywhere so replacing it with a dummy or [mock][11] object is super easy.
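As a rough stand-alone illustration of that flexibility, here is how a dummy parser could be injected through a constructor. Note that `DummyParser` and `SimpleConnection` are hypothetical names for this sketch, not classes from diy_framework:

```python
# Hypothetical sketch of dependency injection: the "connection" only relies on
# its parser having a parse_into(request, buffer) function, so a dummy works.
class DummyParser:
    @staticmethod
    def parse_into(request, buffer):
        request.append(bytes(buffer))  # "parse" by recording what was received
        return bytearray()             # pretend every byte was consumed

class SimpleConnection:
    def __init__(self, http_parser):
        self.http_parser = http_parser  # injected dependency, never hard-coded
        self.request = []
        self._buffer = bytearray()

    def process_data(self, data):
        # same accumulate-then-parse shape as HTTPConnection.process_data
        self._buffer.extend(data)
        self._buffer = self.http_parser.parse_into(self.request, self._buffer)

conn = SimpleConnection(DummyParser)
conn.process_data(b'GET / HTTP/1.1')
# conn.request now holds [b'GET / HTTP/1.1'] and the buffer is empty
```

Swapping `DummyParser` for the real module requires no change to the connection class, which is exactly what makes this style easy to test.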
+ +The next interesting piece is the reply method: + +``` +async def reply(self): + request = self.request + handler = self.router.get_handler(request.path) + + response = await handler.handle(request) + + if not isinstance(response, Response): + response = Response(code=200, body=response) + + self._writer.write(response.to_bytes()) + await self._writer.drain() +``` + +Here, an instance of HTTPConnection uses a router object it got from the HTTPServer (another example of Dependency Injection) to obtain an object that will generate a response. A router can be anything that has a get_handler method that accepts a string and returns a callable or raises a NotFoundException. The callable object is then used to process the request and generate a response. Handlers are written by the users of the framework, as outlined in use case above, and should return either strings or Response objects. Response objects give us a nice interface so the simple if block ensures that whatever a handler returns, the code further along ends up with a uniform Response object. + +Next, the StreamWriter instance assigned to self._writer is called to send back a string of bytes to the client. Before the function returns, it awaits at await self._writer.drain(), which ensures that all the data has been sent to the client. This ensures that a call to self._writer.close won't happen when there is still unsent data in the buffer. + +There are two more interesting parts of the HTTPConnection class: a method that closes the connection and a group of methods that handle the timeout mechanism. First, closing a connection is accomplished by this little function: + +``` +def close_connection(self): + self._cancel_conn_timeout() + self._writer.close() +``` + +Any time a connection is to be closed, the code has to first cancel the timeout to clean it out of the event loop. 
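In isolation, the cancel-and-fire behaviour of the asyncio timer handle that close_connection cancels looks like this (a toy sketch using the modern asyncio.run API, not code from the framework):

```python
import asyncio

# Toy sketch of loop.call_later handles: a cancelled handle never runs,
# while an un-cancelled one fires after its delay.
async def demo():
    fired = []
    loop = asyncio.get_running_loop()

    handle = loop.call_later(0.01, fired.append, 'timeout')
    handle.cancel()                  # cancelled before firing: removed from loop
    await asyncio.sleep(0.05)        # nothing appended

    loop.call_later(0.01, fired.append, 'timeout')
    await asyncio.sleep(0.05)        # not cancelled: the callback runs
    return fired

result = asyncio.run(demo())
# result == ['timeout']: only the un-cancelled callback ran
```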
+
+The timeout mechanism is a set of three related functions: a function that acts on the timeout by sending an error message to the client and closing the connection; a function that cancels the current timeout; and a function that schedules the timeout. The first two are simple and I add them for completeness, but I'll explain the third one — _reset_conn_timeout — in more detail.
+
+```
+def _conn_timeout_close(self):
+    self.error_reply(500, 'timeout')
+    self.close_connection()
+
+def _cancel_conn_timeout(self):
+    if self._conn_timeout:
+        self._conn_timeout.cancel()
+
+def _reset_conn_timeout(self, timeout=TIMEOUT):
+    self._cancel_conn_timeout()
+    self._conn_timeout = self.loop.call_later(
+        timeout, self._conn_timeout_close)
+```
+
+Every time _reset_conn_timeout is called, it first cancels any previously set asyncio.Handle object assigned to self._conn_timeout. Then, using the BaseEventLoop.call_later function, it schedules the _conn_timeout_close function to run after timeout seconds. If you remember the contents of the handle_request function, you'll know that this function gets called every time any data is received. This cancels any existing timeout and re-schedules the _conn_timeout_close function timeout seconds in the future. As long as there is data coming, this cycle will keep resetting the timeout callback. If no data is received inside of timeout seconds, _conn_timeout_close finally gets called.
+
+### Creating the Connections
+
+Something has to create HTTPConnection objects and it has to do it correctly.
This task is delegated to the HTTPServer class, which is a very simple container that helps to store some configuration (the parser, router, and event loop instances) and then use that configuration to create instances of HTTPConnection:
+
+```
+class HTTPServer(object):
+    def __init__(self, router, http_parser, loop):
+        self.router = router
+        self.http_parser = http_parser
+        self.loop = loop
+
+    async def handle_connection(self, reader, writer):
+        connection = HTTPConnection(self, reader, writer)
+        asyncio.ensure_future(connection.handle_request(), loop=self.loop)
+```
+
+Each instance of HTTPServer can listen on one port. It has an asynchronous handle_connection method that creates instances of HTTPConnection and schedules them for execution in the event loop. This method is passed to [asyncio.start_server][12] and serves as a callback: it's called every time a TCP connection is initiated, with a StreamReader and StreamWriter as the arguments.
+
+```
+    self._server = HTTPServer(self.router, self.http_parser, self.loop)
+    self._connection_handler = asyncio.start_server(
+        self._server.handle_connection,
+        host=self.host,
+        port=self.port,
+        reuse_address=True,
+        reuse_port=True,
+        loop=self.loop)
+```
+
+This forms the core of how the application works: asyncio.start_server accepts TCP connections and calls a method on a preconfigured HTTPServer object. This method handles all the logic for a single connection: reading, parsing, generating and sending a reply back to the client, and closing the connection. It focuses on IO logic and coordinates parsing and generating a reply.
+
+With the core IO stuff out of the way let's move on over to...
+
+### Parsing Requests
+
+The users of this tiny framework are spoiled and don't want to work with bytes. They want a higher level of abstraction - a more convenient way of working with requests. The tiny framework includes a simple HTTP parser that transforms bytes into Request objects.
+
+These Request objects are containers that look like:
+
+```
+class Request(object):
+    def __init__(self):
+        self.method = None
+        self.path = None
+        self.query_params = {}
+        self.path_params = {}
+        self.headers = {}
+        self.body = None
+        self.body_raw = None
+        self.finished = False
+```
+
+It has everything that a developer needs in order to accept data coming from a client in an easily understandable package. Well, everything except cookies, which are crucial in order to do things like authentication. I'll leave that for part 2.
+
+Each HTTP request contains certain required pieces - like the path or the method. It also contains certain optional pieces like the body, headers, or URL parameters. Furthermore, owing to the popularity of REST, the URL, minus the URL parameters, may also contain pieces of information eg. "/users/1/edit" contains a user's id.
+
+Each part of a request has to be identified, parsed, and assigned to the correct part of a Request object. The fact that HTTP/1.1 is a text protocol simplifies things (HTTP/2 is a binary protocol - whole 'nother level of fun).
+
+The http_parser module is a group of functions because the parser does not need to keep track of state. Instead, the calling code has to manage a Request object and pass it into the parse_into function along with a bytearray containing the raw bytes of a request. To this end, the parser modifies both the request object as well as the bytearray buffer passed to it. The request object gets fuller and fuller while the bytearray buffer gets emptier and emptier.
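That stateless consume-from-the-buffer pattern can be sketched in a few self-contained lines. This is an illustrative stand-in, not the framework's actual parsing code, and `parse_request_line` here is a made-up name:

```python
from urllib.parse import parse_qs, urlsplit

# Illustrative sketch of stateless request-line parsing: read one line from the
# buffer, delete the bytes that were consumed, return structured data.
def parse_request_line(buffer: bytearray):
    line_end = buffer.find(b'\r\n')
    method, url, version = bytes(buffer[:line_end]).decode('ascii').split(' ')
    del buffer[:line_end + 2]          # the buffer gets "emptier"
    parts = urlsplit(url)
    return method, parts.path, parse_qs(parts.query)

buf = bytearray(b'GET /users?id=15 HTTP/1.1\r\nHost: example\r\n\r\n')
method, path, query = parse_request_line(buf)
# method == 'GET', path == '/users', query == {'id': ['15']},
# and buf now starts at the headers: b'Host: example\r\n\r\n'
```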
+
+The core functionality of the http_parser module is inside the parse_into function:
+
+```
+def parse_into(request, buffer):
+    _buffer = buffer[:]
+    if not request.method and can_parse_request_line(_buffer):
+        (request.method, request.path,
+         request.query_params) = parse_request_line(_buffer)
+        remove_request_line(_buffer)
+
+    if not request.headers and can_parse_headers(_buffer):
+        request.headers = parse_headers(_buffer)
+        if not has_body(request.headers):
+            request.finished = True
+
+        remove_intro(_buffer)
+
+    if not request.finished and can_parse_body(request.headers, _buffer):
+        request.body_raw, request.body = parse_body(request.headers, _buffer)
+        clear_buffer(_buffer)
+        request.finished = True
+    return _buffer
+```
+
+As you can see in the code above, I divided the parsing process into three parts: parsing the request line (the line that goes GET /resource HTTP/1.1), parsing the headers, and parsing the body.
+
+The request line contains the HTTP method and the URL. The URL in turn contains yet more information: the path, url parameters, and developer defined url parameters. Parsing out the method and URL is easy - it's a matter of splitting the string appropriately. The urlparse.parse function is used to parse out the URL parameters, if any, from the URL. The developer defined url parameters are extracted using regular expressions.
+
+Next up are the HTTP headers. These are simply lines of text that are key-value pairs. The catch is that there may be multiple headers of the same name but with different values. An important header to watch out for is the Content-Length header that specifies the length of the body (not the whole request, just the body!), which is important in determining whether to parse the body at all.
+
+Finally, the parser looks at the HTTP method and headers and decides whether to parse the request's body.
+
+### Routing!
+
+The router is a bridge between the framework and the user in the sense that the user creates a Router object and fills it with path/function pairs using the appropriate methods and then gives the Router object to the App. The App object in turn uses the get_handler function to get a hold of a callable that generates a response. In short, the router is responsible for two things - storing pairs of paths/functions and handing back a pair to whatever asks for one.
+
+There are two methods in the Router class that allow an end-developer to add routes: add_routes and add_route. Since add_routes is a convenient wrapper around add_route, I'll skip describing it and focus on add_route:
+
+```
+def add_route(self, path, handler):
+    compiled_route = self.__class__.build_route_regexp(path)
+    if compiled_route not in self.routes:
+        self.routes[compiled_route] = handler
+    else:
+        raise DuplicateRoute
+```
+
+This method first "compiles" a route — a string like '/cars/{id}' — into a compiled regexp object using the Router.build_route_regexp class method. These compiled regexp objects serve both to match a request's path and to extract developer defined URL parameters specified by that route. Next there's a check that raises an exception if the same route already exists, and finally the route/handler pair is added to a simple dictionary — self.routes.
+
+Here's how the Router "compiles" routes:
+
+```
+@classmethod
+def build_route_regexp(cls, regexp_str):
+    """
+    Turns a string into a compiled regular expression. Parses '{}' into
+    named groups ie. '/path/{variable}' is turned into
+    '/path/(?P<variable>[a-zA-Z0-9_-]+)'.
+
+    :param regexp_str: a string representing a URL path.
+    :return: a compiled regular expression.
+ """ + def named_groups(matchobj): + return '(?P<{0}>[a-zA-Z0-9_-]+)'.format(matchobj.group(1)) + + re_str = re.sub(r'{([a-zA-Z0-9_-]+)}', named_groups, regexp_str) + re_str = ''.join(('^', re_str, '$',)) + return re.compile(re_str) +``` + +The method uses regular expressions to substitute all occurrences of "{variable}" with named regexp groups: "(?P...)". Then it adds the ^ and $ regexp signifiers at the beginning and end of the resulting string, and finally compiles regexp object out of it. + +Storing a route is just half the battle, here's how to get one back: + +``` +def get_handler(self, path): + logger.debug('Getting handler for: {0}'.format(path)) + for route, handler in self.routes.items(): + path_params = self.class.match_path(route, path) + if path_params is not None: + logger.debug('Got handler for: {0}'.format(path)) + wrapped_handler = HandlerWrapper(handler, path_params) + return wrapped_handler + + raise NotFoundException() +``` + +Once the App has a Request object, it also has the path part of the URL (ie. /users/15/edit). Having that, it needs a matching function to generate the response or a 404 error. get_handler takes a path as an argument, loops over routes and calls the Router.match_path class method on each one to check if any compiled regexp matches the request's path. If it does, it wraps the route's function in a HandleWrapper. The path_params dictionary contains path variables (ie. the '15' from /users/15/edit) or is left empty if the route doesn't specify any variables. Finally it returns the wrapped route's function to the App. + +If the code iterates through all the routes and none of them matches the path, the function raises a NotFoundException. + +The Route.match class method is simple: + +``` +def match_path(cls, route, path): + match = route.match(path) + try: + return match.groupdict() + except AttributeError: + return None +``` + +It uses the regexp object's match method to check if the route and path matches. 
It returns None on no matches.
+
+Finally, we have the HandlerWrapper class. Its only job is to wrap an asynchronous function, store the path_params dictionary, and expose a uniform interface through the handle method:
+
+```
+class HandlerWrapper(object):
+    def __init__(self, handler, path_params):
+        self.handler = handler
+        self.path_params = path_params
+        self.request = None
+
+    async def handle(self, request):
+        return await self.handler(request, **self.path_params)
+```
+
+### Bringing it All Together
+
+The last piece of the framework is the one that ties everything together — the App class.
+
+The App class serves to gather all the configuration details. An App object uses its single method — start_server — to create an instance of HTTPServer using some of the configuration data, and then feed it to the asyncio.start_server function. The asyncio.start_server function will call the HTTPServer object's handle_connection method for every incoming TCP connection:
+
+```
+def start_server(self):
+    if not self._server:
+        self.loop = asyncio.get_event_loop()
+        self._server = HTTPServer(self.router, self.http_parser, self.loop)
+        self._connection_handler = asyncio.start_server(
+            self._server.handle_connection,
+            host=self.host,
+            port=self.port,
+            reuse_address=True,
+            reuse_port=True,
+            loop=self.loop)
+
+        logger.info('Starting server on {0}:{1}'.format(
+            self.host, self.port))
+        self.loop.run_until_complete(self._connection_handler)
+
+        try:
+            self.loop.run_forever()
+        except KeyboardInterrupt:
+            logger.info('Got signal, killing server')
+        except DiyFrameworkException as e:
+            logger.error('Critical framework failure:')
+            logger.error(e.traceback)
+        finally:
+            self.loop.close()
+    else:
+        logger.info('Server already started - {0}'.format(self))
+```
+
+### Lessons learned
+
+If you look at the repo, you'll notice that the whole thing is roughly 320 lines of code if you don't count the tests (it's ~540 if you do).
It really surprised me that it's possible to fit so much functionality in so little code. Granted, this framework does not offer useful pieces like templates, authentication, or database access, but hey, it's something fun to work on :). This also gave me an idea of how other frameworks like Django or Tornado work at a general level and it's already paying off in how quickly I'm able to debug things.
+
+This is also the first project that I did in true TDD fashion and it's just amazing how pleasant and productive the process was. Writing tests first forced me to think about design and architecture and not just on gluing bits of code together to "make it work". Don't get me wrong, there are many scenarios when the latter approach is preferred, but if you're placing a premium on low-maintenance code that you and others will be able to work on weeks or months in the future then TDD is exactly what you need.
+
+I explored things like [the Clean Architecture][13] and dependency injection, which is most evident in how the Router class is a higher-level abstraction (Entity?) that's close to the "core" whereas pieces like the http_parser or App are somewhere on the outer edges because they do either itty-bitty string or bytes work or interface with mid-level IO stuff. Whereas TDD forced me to think about each small part separately, this made me ask myself questions like: Does this combination of method calls compose into an understandable action? Do the class names accurately reflect the problem that I'm solving? Is it easy to distinguish different levels of abstraction in my code?
+
+Go ahead, write a small framework, it's a ton of fun!
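If you want a quick taste, the route-"compiling" trick from the Routing section stands alone in a few lines. This sketch reproduces that idea outside the Router class (the free function here is illustrative):

```python
import re

# Stand-alone version of the route "compiling" idea: '{name}' placeholders
# become named regexp groups, anchored with ^ and $.
def build_route_regexp(regexp_str):
    def named_groups(matchobj):
        return '(?P<{0}>[a-zA-Z0-9_-]+)'.format(matchobj.group(1))
    re_str = re.sub(r'{([a-zA-Z0-9_-]+)}', named_groups, regexp_str)
    return re.compile(''.join(('^', re_str, '$')))

route = build_route_regexp('/users/{id}/edit')
match = route.match('/users/15/edit')
# match.groupdict() == {'id': '15'}; a non-matching path like '/users/15'
# returns None from route.match
```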
+ +-------------------------------------------------------------------------------- + +via: http://mattscodecave.com/posts/simple-python-framework-from-scratch.html + +作者:[Matt][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://mattscodecave.com/hire-me.html +[1]: https://github.com/sirMackk/diy_framework +[2]:https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol +[3]: https://en.wikipedia.org/wiki/HTTP_persistent_connection +[4]: https://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm#Slow_start +[5]: https://docs.python.org/3/library/asyncio-stream.html +[6]: https://docs.python.org/3/library/asyncio-protocol.html +[7]: https://github.com/sirMackk/diy_framework/blob/88968e6b30e59504251c0c7cd80abe88f51adb79/diy_framework/http_server.py#L46 +[8]: https://docs.python.org/3/library/functions.html#bytearray +[9]: https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.Handle +[10]: https://en.wikipedia.org/wiki/Dependency_injection +[11]: https://docs.python.org/3/library/unittest.mock.html +[12]: https://docs.python.org/3/library/asyncio-stream.html#asyncio.start_server +[13]: https://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html From 83c695e40ae9da31df819adc2a668cbeff62c5b9 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 19 Aug 2016 10:46:12 +0800 Subject: [PATCH 431/471] =?UTF-8?q?20160819-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...een Linuxing since before you were born.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/talk/20160728 I've been Linuxing since before you were born.md diff --git a/sources/talk/20160728 I've been Linuxing since before you were born.md b/sources/talk/20160728 I've been Linuxing since before you were born.md new file mode 100644 index 
0000000000..d9fbeb9970 --- /dev/null +++ b/sources/talk/20160728 I've been Linuxing since before you were born.md @@ -0,0 +1,66 @@ +I've been Linuxing since before you were born +===================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/OSDC_Penguin_Image_520x292_12324207_0714_mm_v1a.png?itok=WfAkwbFy) + +Once upon a time, there was no Linux. No, really! It did not exist. It was not like today, with Linux everywhere. There were multiple flavors of Unix, there was Apple, and there was Microsoft Windows. + +When it comes to Windows, the more things change, the more they stay the same. Despite adding 20+ gigabytes of gosh-knows-what, Windows is mostly the same. (Except you can't drop to a DOS prompt to get actual work done.) Hey, who remembers Gorilla.bas, the exploding banana game that came in DOS? Fun times! The Internet never forgets, and you can play a Flash version on Kongregate.com. + +Apple changed, evolving from a friendly system that encouraged hacking to a sleek, sealed box that you are not supposed to open, and that dictates what hardware interfaces you are allowed to use. 1998: no more floppy disk. 2012: no more optical drive. The 12-inch MacBook has only a single USB Type-C port that supplies power, Bluetooth, Wi-Fi, external storage, video output, and accessories. If you want to plug in more than one thing at a time and don't want to tote a herd of dongles and adapters around with you, too bad. Next up: The headphone jack. Yes, the one remaining non-proprietary standard hardware port in Apple-land is doomed. + +There was a sizable gaggle of other operating systems such as Amiga, BeOS, OS/2, and dozens more that you can look up should you feel so inclined, which I encourage because looking things up is so easy now there is no excuse not to. Amiga, BeOS, and OS/2 were noteworthy for having advanced functionality such as 32-bit multitasking and advanced graphics handling. 
But marketing clout defeats higher quality, and so the less-capable Apple and Windows dominated the market while the others faded away. + +Then came Linux, and the world changed. + +### First PC + +The first PC I ever used was an Apple IIc, somewheres around 1994, when Linux was three years old. A friend loaned it to me and it was all right, but too inflexible. Then I bought a used Tandy PC for something like $500, which was sad for the person who sold it because it cost a couple thousand dollars. Back then, computers depreciated very quickly. It was a monster: an Intel 386SX CPU, 4 megabytes RAM, a 107-megabyte hard drive, 14-inch color CRT monitor, running MS-DOS 5 and Windows 3.1. + +I tore that poor thing apart multiple times and reinstalled Windows and DOS many times. Windows was marginally usable, so I did most of my work in DOS. I loved gory video games and played Doom, Duke Nukem, Quake, and Heretic. Ah, those glorious, jiggedy 8-bit graphics. + +In those days the hardware was always behind the software, so I upgraded frequently. Now we have all the horsepower we need and then some. I haven't had to upgrade the hardware in any of my computers for several years. + +### Computer bits + +Back in those golden years, there were independent computer stores on every corner and you could hardly walk down the block without tripping over a local independent Internet service provider (ISP). ISPs were very different then. They were not horrid customer-hostile megacorporations like our good friends the American telcos and cable companies. They were nice, and offered all kinds of extra services like Bulletin Board Services (BBS), file downloads, and Multi-User Domains (MUDs), which were multi-player online games. + +I spent a lot of time buying parts in the computer stores, and half the fun was shocking the store staff by being a woman. It is puzzling how this is so upsetting to some people. Now I'm an old fart of 58 and they're still not used to it. 
I hope that being a woman nerd will become acceptable by the time I die. + +Those stores had racks of Computer Bits magazine. Check out this old issue of Computer Bits on the Internet Archive. Computer Bits was a local free paper with good articles and tons of ads. Unfortunately, the print ads did not appear in the online edition, so you can't see how they included a wealth of detailed information. You know how advertisers are so whiny, and complain about ad blockers, and have turned tech journalism into barely-disguised advertising? They need to learn some lessons from the past. Those ads were useful. People wanted to read them. I learned everything about computer hardware from reading the ads in Computer Bits and other computer magazines. Computer Shopper was an especially fabulous resource; it had several hundred pages of ads and high-quality articles. + +![](https://opensource.com/sites/default/files/resize/march2002-300x387.jpg) + +The publisher of Computer Bits, Paul Harwood, launched my writing career. My first ever professional writing was for Computer Bits. Paul, if you're still around, thank you! + +You can see something in the Internet Archives of Computer Bits that barely exists anymore, and that is the classified ads section. Classified ads generated significant income for print publications. Craigslist killed the classifieds, which killed newspapers and publications like Computer Bits. + +One of my cherished memories is when the 12-year-old twit who ran my favorite computer store, who could never get over my blatant woman-ness and could never accept that I knew what I was doing, handed me a copy of Computer Bits as a good resource for beginners. I opened it to show him one of my Linux articles and said "Oh yes, I know." He turned colors I didn't think were physiologically possible and scuttled away. (No, my fine literalists, he was not really 12, but in his early 20s. Perhaps by now his emotional maturity has caught up a little.) 
+ +### Discovering Linux + +I first learned about Linux in Computer Bits magazine, maybe around 1997 or so. My first Linuxes were Red Hat 5 and Mandrake Linux. Mandrake was glorious. It was the first easy-to-install Linux, and it bundled graphics and sound drivers so I could immediately play Tux Racer. Unlike most Linux nerds at the time I did not come from a Unix background, so I had a steep learning curve. But that was alright, because everything I learned was useful. In contrast to my Windows adventures, where most of what I learned was working around its awfulness, then giving up and dropping to a DOS shell. + +Playing with computers was so much fun I drifted into freelance consulting, forcing Windows computers to more or less function, and helping IT staff in small shops migrate to Linux servers. Usually we did this on the sneak, because those were the days of Microsoft calling Linux a cancer, and implying that it was a communist conspiracy to sap and impurify all of our precious bodily fluids. + +### Linux won + +I continued consulting for a number of years, doing a little bit of everything: Repairing and upgrading hardware, pulling cable, system and network administration, and running mixed Apple/Windows/Linux networks. Apple and Windows were absolutely hellish to integrate into mixed networks as they deliberately tried to make it impossible. One of the coolest things about Linux and FOSS is there is always someone ready and able to defeat the barriers erected by proprietary vendors. + +It is very different now. There are still proprietary barriers to interoperability, and there is still no Tier 1 desktop Linux OEM vendor. In my opinion this is because Microsoft and Apple have a hard lock on retail distribution. Maybe they're doing us a favor, because you get way better service and quality from wonderful independent Linux vendors like ZaReason and System76. They're Linux experts, and they don't treat us like unwelcome afterthoughts. 
+ +Aside from the retail desktop, Linux dominates all aspects of computing from embedded to supercomputing and distributed computing. Open source dominates software development. All of the significant new frontiers in software, such as containers, clusters, and artificial intelligence are all powered by open source software. When you measure the distance from my first little old 386SX PC til now, that is phenomenal progress. + +It would not have happened without Linux and open source. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/my-linux-story-carla-schroder + +作者:[Carla Schroder ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/carlaschroder From 09708bdb438ee3721eedb6c11a64464373e3d62e Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 19 Aug 2016 10:54:49 +0800 Subject: [PATCH 432/471] =?UTF-8?q?20160819-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...o Write and Tune Shell Scripts – Part 2.md | 253 ++++++++++++++++++ 1 file changed, 253 insertions(+) create mode 100644 sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md diff --git a/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md new file mode 100644 index 0000000000..2ec64d9cf6 --- /dev/null +++ b/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md @@ -0,0 +1,253 @@ +Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2 +==================================== + +In the previous article of this Python series we shared a brief 
introduction to Python, its command-line shell, and the IDLE. We also demonstrated how to perform arithmetic calculations, how to store values in variables, and how to print those values back to the screen. Finally, we explained the concepts of methods and properties in the context of Object Oriented Programming through a practical example.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png)
+
+In this guide we will discuss control flow (to choose different courses of action depending on information entered by a user, the result of a calculation, or the current value of a variable) and loops (to automate repetitive tasks), and then apply what we have learned so far to write a simple shell script that will display the operating system type, the hostname, the kernel release, version, and the machine hardware name.
+
+This example, although basic, will help us illustrate how we can leverage Python’s OOP capabilities to write shell scripts more easily than with regular bash tools.
+
+In other words, we want to go from
+
+```
+# uname -snrvm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png)
+
+to
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png)
+
+Looks pretty, doesn’t it? Let’s roll up our sleeves and make it happen.
+
+### Control flow in Python
+
+As we said earlier, control flow allows us to choose different outcomes depending on a given condition. Its simplest implementation in Python is an if / else clause.
+
+The basic syntax is:
+
+```
+if condition:
+    # action 1
+else:
+    # action 2
+```
+
+When the condition evaluates to true, the code block below it (represented by # action 1) will be executed. Otherwise, the code under else will be run.
+
+A condition can be any statement that can evaluate to either true or false.
+
+For example:
+
+1. 1 < 3 # true
+
+2.
firstName == “Gabriel” # true for me, false for anyone not named Gabriel
+
+- In the first example we compared two values to determine if one is greater than the other.
+- In the second example we compared firstName (a variable) to determine if, at the current execution point, its value is identical to “Gabriel”.
+- The condition and the else statement must be followed by a colon (:).
+- Indentation is important in Python. Lines with identical indentation are considered to be in the same code block.
+
+Please note that the if / else statement is only one of the many control flow tools available in Python. We reviewed it here since we will use it in our script later. You can learn more about the rest of the tools in the [official docs][1].
+
+### Loops in Python
+
+Simply put, a loop is a sequence of instructions or statements that are executed in order as long as a condition is true, or once per item in a list.
+
+The simplest loop in Python is the for loop, which iterates over the items of a given list or string, beginning with the first item and ending with the last.
+
+Basic syntax:
+
+```
+for x in example:
+    # do this
+```
+
+Here example can be either a list or a string. If the former, the variable named x represents each item in the list; if the latter, x represents each character in the string:
+
+```
+>>> rockBands = []
+>>> rockBands.append("Roxette")
+>>> rockBands.append("Guns N' Roses")
+>>> rockBands.append("U2")
+>>> for x in rockBands:
+...     print(x)
+or
+>>> firstName = "Gabriel"
+>>> for x in firstName:
+...     print(x)
+```
+
+The output of the above examples is shown in the following image:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png)
+
+### Python Modules
+
+For obvious reasons, there must be a way to save a sequence of Python instructions and statements in a file that can be invoked when it is needed.
+
+That is precisely what a module is.
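To make the idea concrete, here is a minimal, self-contained sketch. The mymath module and its double() function are invented for this illustration, and the module file is written to a scratch directory at run time only so that the snippet can be executed as-is:

```python
import os
import sys
import tempfile

# Any file of Python statements saved with a .py extension is a module.
# Create a scratch directory holding a tiny module file named mymath.py.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "mymath.py"), "w") as f:
    f.write("def double(n):\n    return 2 * n\n")

# Make the scratch directory importable, then import the module by its
# file name (minus the .py extension).
sys.path.insert(0, tmpdir)
import mymath

print(mymath.double(21))  # -> 42
```

In a real project you would simply keep mymath.py next to the script that imports it, with no sys.path juggling needed.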
In particular, the os module provides an interface to the underlying operating system and allows us to perform many of the operations we usually do at a command-line prompt.
+
+As such, it incorporates several methods and properties that can be called as we explained in the previous article. However, we need to import (or include) it in our environment using the import keyword:
+
+```
+>>> import os
+```
+
+Let’s print the current working directory:
+
+```
+>>> os.getcwd()
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png)
+
+Let’s now put all of this together (along with the concepts discussed in the previous article) to write the desired script.
+
+### Python Script
+
+It is considered good practice to start a script with a statement that indicates the purpose of the script, the license terms under which it is released, and a revision history listing the changes that have been made. Although this is more of a personal preference, it adds a professional touch to our work.
+
+Here’s the script that produces the output we showed at the top of this article. It is heavily commented so that you can understand what’s happening.
+
+Take a few minutes to go through it before proceeding. Note how we use an if / else structure to determine whether the length of each field caption is greater than the length of the field’s value itself.
+
+Based on the result, we use spaces to fill in the gap between one field caption and the next. Also, we use the right number of dashes as a separator between each field caption and its value below.
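The padding arithmetic just described can be tried out on its own before reading the full script. The sketch below is only a warm-up with invented sample data (a single caption/value pair), not part of the article's script:

```python
# Pad the shorter of a caption and its value with spaces, and draw a
# dashed separator as wide as the longer of the two.
caption = "Hostname"   # sample caption, invented for illustration
value = "tecmint"      # sample value, invented for illustration

width = max(len(caption), len(value))
print(caption + " " * (width - len(caption)))
print("-" * width)
print(value + " " * (width - len(value)))
```

The full script repeats exactly this calculation once per field, accumulating the three output rows as it goes.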
+
+```
+#!/usr/bin/python3
+# Change the above line to #!/usr/bin/python if you don't have Python 3 installed
+# Script name: uname.py
+# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily
+# License: GPL v3 (http://www.gnu.org/licenses/gpl.html)
+# Copyright (C) 2016 Gabriel Alejandro Cánepa
+# Facebook / Skype / G+ / Twitter / Github: gacanepa
+# Email: gacanepa (at) gmail (dot) com
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+# REVISION HISTORY
+# DATE       VERSION AUTHOR         CHANGE DESCRIPTION
+# ---------- ------- -------------- ------------------
+# 2016-05-28 1.0     Gabriel Cánepa Initial version
+# Import the os module
+import os
+# Assign the output of os.uname() to the systemInfo variable
+# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine)
+# Documentation: https://docs.python.org/3.2/library/os.html#module-os
+systemInfo = os.uname()
+# This is a fixed array with the desired captions in the script output
+headers = ["Operating system","Hostname","Release","Version","Machine"]
+# Initial value of the index variable. It is used to define the
+# index of both systemInfo and headers in each step of the iteration.
+index = 0
+# Initial value of the caption variable.
+caption = ""
+# Initial value of the values variable
+values = ""
+# Initial value of the separators variable
+separators = ""
+# Start of the loop
+for item in systemInfo:
+    if len(item) < len(headers[index]):
+        # A string containing dashes to the length of item[index] or headers[index]
+        # To repeat a character(s), enclose it within quotes followed
+        # by the star sign (*) and the desired number of times.
+        separators = separators + "-" * len(headers[index]) + " "
+        caption = caption + headers[index] + " "
+        values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " "
+    else:
+        separators = separators + "-" * len(item) + " "
+        caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1)
+        values = values + item + " "
+    # Increment the value of index by 1
+    index = index + 1
+# End of the loop
+# Print the variable named caption converted to uppercase
+print(caption.upper())
+# Print separators
+print(separators)
+# Print values (items in systemInfo)
+print(values)
+# INSTRUCTIONS:
+# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions:
+#    chmod +x uname.py
+# 2) Execute it:
+#    ./uname.py
+```
+
+Once you have saved the above script to a file, give it execute permissions and run it as indicated at the bottom of the code:
+
+```
+# chmod +x uname.py
+# ./uname.py
+```
+
+If you get the following error while attempting to execute the script:
+
+```
+-bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory
+```
+
+It means you don’t have Python 3 installed. If that is the case, you can either install the package or replace the interpreter line (pay special attention and be very careful if you followed the steps to update the symbolic links to the Python binaries as outlined in the previous article):
+
+```
+#!/usr/bin/python3
+```
+
+with
+
+```
+#!/usr/bin/python
+```
+
+which will cause the installed version of Python 2 to execute the script instead.
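As an aside, on Python 3 the same report can be sketched more compactly by letting str.ljust() do the space-padding that the loop above builds by hand. This is an alternative illustration, not the article's script, and it assumes a POSIX system since it relies on os.uname():

```python
import os

info = os.uname()  # (sysname, nodename, release, version, machine) on POSIX
headers = ["Operating system", "Hostname", "Release", "Version", "Machine"]

# Each column is as wide as the longer of its caption and its value.
widths = [max(len(h), len(v)) for h, v in zip(headers, info)]

print(" ".join(h.upper().ljust(w) for h, w in zip(headers, widths)))
print(" ".join("-" * w for w in widths))
print(" ".join(v.ljust(w) for v, w in zip(info, widths)))
```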
+
+Note: This script has been tested successfully in both Python 2.x and 3.x.
+
+Although somewhat rudimentary, you can think of this script as a Python module. This means that you can open it in the IDLE (File → Open… → Select file):
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png)
+
+A new window will open with the contents of the file. Then go to Run → Run module (or just press F5). The output of the script will be shown in the original shell:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png)
+
+If you want to obtain the same results with a script written purely in Bash, you would need to use a combination of awk and sed, and resort to complex methods to store and retrieve items in a list (not to mention the use of tr to convert lowercase letters to uppercase).
+
+In addition, Python provides portability, in that all Linux systems ship with at least one Python version (either 2.x or 3.x, sometimes both). Should you need to rely on a shell to accomplish the same goal, you would need to write different versions of the script based on the shell.
+
+This goes to show that Object Oriented Programming features can become strong allies of system administrators.
+
+Note: You can find this Python script (and others) in one of my GitHub repositories.
+
+### Summary
+
+In this article we have reviewed the concepts of control flow, loops / iteration, and modules in Python. We have shown how to leverage OOP methods and properties in Python to simplify otherwise complex shell scripts.
+
+Do you have any other ideas you would like to test? Go ahead and write your own Python scripts and let us know if you have any questions. Don’t hesitate to drop us a line using the comment form below, and we will get back to you as soon as we can.
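A closing aside on the Python 2 vs. Python 3 interpreter issue discussed above: instead of editing the shebang line by hand, a script can also inspect sys.version_info at run time. The shebang_hint() helper below is invented for illustration:

```python
import sys

def shebang_hint(version_info=sys.version_info):
    # Return the interpreter line matching the given Python version.
    if version_info[0] >= 3:
        return "#!/usr/bin/python3"
    return "#!/usr/bin/python"

print("This interpreter matches:", shebang_hint())
```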
+
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/2/
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: https://docs.python.org/3/tutorial/controlflow.html
From f59107204f61e988f56e4e10517815585c6a4b8e Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 19 Aug 2016 11:05:07 +0800 Subject: [PATCH 433/471] =?UTF-8?q?20160819-6=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nux how long a process has been running.md | 75 +++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 sources/tech/20160801 Linux how long a process has been running.md diff --git a/sources/tech/20160801 Linux how long a process has been running.md b/sources/tech/20160801 Linux how long a process has been running.md new file mode 100644 index 0000000000..1dee75b429 --- /dev/null +++ b/sources/tech/20160801 Linux how long a process has been running.md @@ -0,0 +1,75 @@
+Linux how long a process has been running
+=====================
+
+![](http://s0.cyberciti.org/images/category/old/linux-logo.png)
+
+>I’m a new Linux system user. How do I check how long a process or PID has been running on my Ubuntu Linux server?
+
+You need to use the ps command to see information about a selection of the active processes. The ps command provides the following two formatting options:
+
+1.
etime Display elapsed time since the process was started, in the form [[DD-]hh:]mm:ss.
+2. etimes Display elapsed time since the process was started, in seconds.
+
+### How to check how long a process has been running?
+
+You need to pass the -o etimes or -o etime option to the ps command. The syntax is:
+
+```
+ps -p {PID-HERE} -o etime
+ps -p {PID-HERE} -o etimes
+```
+
+#### Step 1: Find the PID of a process (say, openvpn)
+
+```
+$ pidof openvpn
+6176
+```
+
+#### Step 2: How long has the openvpn process been running?
+
+```
+$ ps -p 6176 -o etime
+```
+
+OR
+
+```
+$ ps -p 6176 -o etimes
+```
+
+To hide the header:
+
+```
+$ ps -p 6176 -o etime=
+$ ps -p 6176 -o etimes=
+```
+
+Sample outputs:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/08/How-to-check-how-long-a-process-has-been-running.jpg)
+
+Here, 6176 is the PID of the process you want to check. In this case I’m looking into the openvpn process. Feel free to replace openvpn and PID # 6176 as per your own requirements. In this final example, I am printing the PID, command, elapsed time, user ID, and group ID:
+
+```
+$ ps -p 6176 -o pid,cmd,etime,uid,gid
+```
+
+Sample outputs:
+
+```
+ PID CMD                         ELAPSED   UID   GID
+6176 /usr/sbin/openvpn --daemon    15:25 65534 65534
+```
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/how-to-check-how-long-a-process-has-been-running/
+
+作者:[VIVEK GITE][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.cyberciti.biz/faq/how-to-check-how-long-a-process-has-been-running/
From 822efd228ac5abd2f24601eac8546d33e1c0910f Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Fri, 19 Aug 2016 15:09:12 +0800 Subject: [PATCH 434/471] sources/tech/20160801 Linux how long a process has been running.md --- .../tech/20160801 Linux how long a process has been running.md | 2 ++ 1 file changed, 2 insertions(+) diff
--git a/sources/tech/20160801 Linux how long a process has been running.md b/sources/tech/20160801 Linux how long a process has been running.md index 1dee75b429..352da47db8 100644 --- a/sources/tech/20160801 Linux how long a process has been running.md +++ b/sources/tech/20160801 Linux how long a process has been running.md @@ -1,3 +1,5 @@ +MikeCoder translating... + Linux how long a process has been running ===================== From bfffe7dee8d9f820e6a28c3a50bf5ee225b4af59 Mon Sep 17 00:00:00 2001 From: Mike Date: Fri, 19 Aug 2016 16:40:17 +0800 Subject: [PATCH 435/471] translated/tech/20160801 Linux how long a process has been running.md (#4317) --- ...nux how long a process has been running.md | 31 +++++++++---------- 1 file changed, 15 insertions(+), 16 deletions(-) rename {sources => translated}/tech/20160801 Linux how long a process has been running.md (50%) diff --git a/sources/tech/20160801 Linux how long a process has been running.md b/translated/tech/20160801 Linux how long a process has been running.md similarity index 50% rename from sources/tech/20160801 Linux how long a process has been running.md rename to translated/tech/20160801 Linux how long a process has been running.md index 352da47db8..9ef5a22859 100644 --- a/sources/tech/20160801 Linux how long a process has been running.md +++ b/translated/tech/20160801 Linux how long a process has been running.md @@ -1,63 +1,62 @@ -MikeCoder translating... - -Linux how long a process has been running +在 Linux 下如何查看一个进程的运行时间 ===================== ![](http://s0.cyberciti.org/images/category/old/linux-logo.png) ->I’m a new Linux system user. How do I check how long a process or PID has been running on my Ubuntu Linux server? +>我是一个 Linux 系统的新手。我该如何在我的 Ubuntu 服务器上查看一个进程或者根据一个进程 ID 来查看它已经运行的时间? You need to use the ps command to see information about a selection of the active processes. The ps command provides the following two formatting options: +你需要使用 ps 命令来查看关于一组正在运行的进程的信息。ps 命令提供了如下的两种格式化选项: -1.
etime Display elapsed time since the process was started, in the form [[DD-]hh:]mm:ss. -2. etimes Display elapsed time since the process was started, in seconds. +1. etime 显示了自从该进程启动以来,经历过的时间。以[[DD-]hh:]mm:ss 的格式。 +2. etimes 显示了自该进程启动以来,经历过的时间,以秒的形式。 -### How to check how long a process has been running? +### 如何查看一个进程已经运行的时间? -You need to pass the -o etimes or -o etime to the ps command. The syntax is: +你需要在 ps 命令之后添加 -o etimes 或者 -o etime 参数。它的语法如下: ``` ps -p {PID-HERE} -o etime ps -p {PID-HERE} -o etimes ``` -#### Step 1: Find PID of a process (say openvpn) +#### 第一步:找到一个进程的 PID (openvpn 为例) ``` $ pidof openvpn 6176 ``` -#### Step 2: How long a openvpn process has been running? +#### 第二步:openvpn 进程运行了多长时间? ``` $ ps -p 6176 -o etime ``` -OR +或者 ``` $ ps -p 6176 -o etimes ``` -To hide header: +隐藏头部: ``` $ ps -p 6176 -o etime= $ ps -p 6176 -o etimes= ``` -Sample outputs: +样例输出: ![](http://s0.cyberciti.org/uploads/faq/2016/08/How-to-check-how-long-a-process-has-been-running.jpg) -The 6176 is the PID of the process you want to check. In this case I’m looking into openvpn process. Feel free to replace openvpn and PID # 6176 as per your own requirements. 
In this find example, I am printing PID, command, elapsed time, user ID, and group ID: +这个 6176 就是你想查看的进程的 PID。在这个例子中,我查看的是 openvpn 进程。你可以按照你的需求随意的更换 openvpn 进程名或者是 PID。在下面的例子中,我打印了 PID,执行命令,运行时间,用户 ID,和用户组 ID: ``` $ ps -p 6176 -o pid,cmd,etime,uid,gid ``` -Sample outputs: +样例输出: ``` PID CMD ELAPSED UID GID @@ -69,7 +68,7 @@ Sample outputs: via: http://www.cyberciti.biz/faq/how-to-check-how-long-a-process-has-been-running/ 作者:[VIVEK GITE][a] -译者:[译者ID](https://github.com/译者ID) +译者:[MikeCoder](https://github.com/MikeCoder) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 875a070a949a218659505caeb08fc9144b399de5 Mon Sep 17 00:00:00 2001 From: WangYue <815420852@qq.com> Date: Sat, 20 Aug 2016 10:18:05 +0800 Subject: [PATCH 436/471] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=88=90=E3=80=9120160630=2018=20Best=20IDEs=20for=20C+C++=20P?= =?UTF-8?q?rogramming=20or=20Source=20Code=20Editors=20on=20Linux.md=20(#4?= =?UTF-8?q?320)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 【翻译完成】20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md 源文件删除【翻译完成】20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md * 【翻译完成】20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md 提交译文【翻译完成】20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md --- ...ramming or Source Code Editors on Linux.md | 305 ------------------ ...ramming or Source Code Editors on Linux.md | 302 +++++++++++++++++ 2 files changed, 302 insertions(+), 305 deletions(-) delete mode 100644 sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md create mode 100644 translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md diff --git a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code 
Editors on Linux.md b/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md deleted file mode 100644 index 03b18de986..0000000000 --- a/sources/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md +++ /dev/null @@ -1,305 +0,0 @@ -translating by WangYueScream & LiBrad -18 Best IDEs for C/C++ Programming or Source Code Editors on Linux -====================================================================== - -C++, an extension of well known C language, is an excellent, powerful and general purpose programming language that offers modern and generic programming features for developing large-scale applications ranging from video games, search engines, other computer software to operating systems. - -C++ is highly reliable and also enables low-level memory manipulation for more advanced programming requirements. - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-IDE-Editors.png) - -There are several text editors out there that programmers can use to write C/C++ code, but IDE have come up to offer comprehensive facilities and components for easy and ideal programming. - -In this article, we shall look at some of the best IDE’s you can find on the Linux platform for C++ or any other programming. - -### 1. Netbeans for C/C++ Development - -Netbeans is a free, open-source and popular cross-platform IDE for C/C++ and many other programming languages. Its fully extensible using community developed plugins. - -It includes project types and templates for C/C++ and you can build applications using static and dynamic libraries. Additionally, you can reuse existing code to create your projects, and also use drag and drop feature to import binary files into it to build applications from the ground. - -Let us look at some of its features: - -- The C/C++ editor is well integrated with multi-session [GNU GDB][1] debugger tool. 
-- Support for code assistance -- C++11 support -- Create and run C/C++ tests from within -- Qt toolkit support -- Support for automatic packaging of compiled application into .tar, .zip and many more archive files -- Support for multiple compilers such as GNU, Clang/LLVM, Cygwin, Oracle Solaris Studio and MinGW -- Support for remote development -- File navigation -- Source inspection - -![](http://www.tecmint.com/wp-content/uploads/2016/06/NetBeans-IDE.png) ->Visit Homepage: - -### 2. Code::Blocks - -Code::Blocks is a free, highly extensible and configurable, cross-platform C++ IDE built to offer users the most demanded and ideal features. It delivers a consistent user interface and feel. - -And most importantly, you can extend its functionality by using plugins developed by users, some of the plugins are part of Code::Blocks release and many are not, written by individual users not part of the Code::Block development team. - -Its features are categorized into compiler, debugger and interface features and these include: - -Multiple compiler support including GCC, clang, Borland C++ 5.5, digital mars plus many more -Very fast, no need for makefiles -Multi-target projects -Workspace that supports combining of projects -Interfaces GNU GDB -Support for full breakpoints including code breakpoints, data breakpoints, breakpoint conditions plus many more -display local functions symbols and arguments -custom memory dump and syntax highlighting -Customizable and extensible interface plus many more other features including those added through user built plugins - -![](http://www.tecmint.com/wp-content/uploads/2016/06/CodeBlocks-IDE-for-Linux.png) ->Visit Homepage: - -### 3. Eclipse CDT(C/C++ Development Tooling) - -Eclipse is a well known open-source, cross-platform IDE in the programming arena. It offers users a great GUI with support for drag and drop functionality for easy arrangement of interface elements. 
- -The Eclipse CDT is a project based on the primary Eclipse platform and it provides a full functional C/C++ IDE with following features: - -- Supports project creation -- Managed build for various toolchains -- Standard make build -- Source navigation -- Several knowledge tools such as call graph, type hierarchy, in-built browser, macro definition browser -- Code editor with support for syntax highlighting -- Support for folding and hyperlink navigation -- Source code refactoring plus code generation -- Tools for visual debugging such as memory, registers -- Disassembly viewers and many more - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Eclipse-IDE-for-Linux.png) ->Visit Homepage: - -### 4. CodeLite IDE - -CodeLite is also a free, open-source, cross-platform IDE designed and built specifically for C/C++, JavaScript (Node.js) and PHP programming. - -Some of its main features include: - -- Code completion, and it offers two code completion engines -- Supports several compilers including GCC, clang/VC++ -- Displays errors as code glossary -- Clickable errors via build tab -- Support for LLDB next generation debugger -- GDB support -- Support for refactoring -- Code navigation -- Remote development using built-in SFTP -- Source control plugins -- RAD (Rapid Application Development) tool for developing wxWidgets-based apps plus many more features - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Codelite-IDE.png) ->Visit Homepage: - -### 6. Bluefish Editor - -Bluefish is a more than just a normal editor, it is a lightweight, fast editor that offers programmers IDE like features for developing websites, writing scripts and software code. It is multi-platform, runs on Linux, Mac OSX, FreeBSD, OpenBSD, Solaris and Windows, and also supports many programming languages including C/C++. 
- -It is feature rich including the ones listed below: - -- Multiple document interface -- Supports recursive opening of files based on filename patterns or content pattern -- Offers a very powerful search and replace functionality -- Snippet sidebar -- Support for integrating external filters of your own, pipe documents using commands such as awk, sed, sort plus custom built scripts -- Supports full screen editing -- Site uploader and downloader -- Multiple encoding support and many more other features - -![](http://www.tecmint.com/wp-content/uploads/2016/06/BlueFish-IDE-Editor-for-Linux.png) ->Visit Homepage: - -### 7. Brackets Code Editor - -Brackets is a modern and open-source text editor designed specifically for web designing and development. It is highly extensible through plugins, therefore C/C++ programmers can use it by installing the C/C++/Objective-C pack extension, this pack is designed to enhance C/C++ code writing and to offer IDE like features. - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Brackets-Code-Editor-for-Linux.png) ->Visit Homepage: - -### 8. Atom Code Editor - -Atom is also a modern, open-source, multi-platform text editor that can run on Linux, Windows or Mac OS X. It is also hackable down to its base, therefore users can customize it to meet their code writing demands. - -It is fully featured and some of its main features include: - -- Built-in package manager -- Smart auto-completion -- In-built file browser -- Find and replace functionality and many more - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Atom-Code-Editor-for-Linux.png) ->Visit Homepage: >https://atom.io/> ->Installation Instructions: - -### 9. Sublime Text Editor - -Sublime Text is a well refined, multi-platform text editor designed and developed for code, markup and prose. You can use it for writing C/C++ code and offers a great user interface. 
- -It’s features list comprises of: - -- Multiple selections -- Command palette -- Goto anything functionality -- Distraction free mode -- Split editing -- Instant project switching support -- Highly customizable -- Plugin API support based on Python plus other small features - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Sublime-Code-Editor-for-Linux.png) ->Visit Homepage: ->Installation Instructions: - -### 10. JetBrains CLion - -CLion is a non-free, powerful and cross-platform IDE for C/C++ programming. It is a fully integrated C/C++ development environment for programmers, providing Cmake as a project model, an embedded terminal window and a keyboard oriented approach to code writing. - -It also offers a smart and modern code editor plus many more exciting features to enable an ideal code writing environment and these features include: - -- Supports several languages other than C/C++ -- Easy navigation to symbol declarations or context usage -- Code generation and refactoring -- Editor customization -- On-the-fly code analysis -- An integrated code debugger -- Supports Git, Subversion, Mercurial, CVS, Perforce(via plugin) and TFS -- Seamlessly integrates with Google test frameworks -- Support for Vim text editor via Vim-emulation plugin - -![](http://www.tecmint.com/wp-content/uploads/2016/06/JetBains-CLion-IDE.png) ->Visit Homepage: - -### 11. Microsoft’s Visual Studio Code Editor - -Visual Studio is a rich, fully integrated, cross-platform development environment that runs on Linux, Windows and Mac OS X. It was recently made open-source to Linux users and it has redefined code editing, offering users every tool needed for building every app for multiple platforms including Windows, Android, iOS and the web. - -It is feature full, with features categorized under application development, application lifecycle management, and extend and integrate features. You can read a comprehensive features list from the Visual Studio website. 
- -![](http://www.tecmint.com/wp-content/uploads/2016/06/Visual-Studio-Code-Editor.png) ->Visit Homepage: - -### 12. KDevelop - -KDevelop is just another free, open-source and cross-platform IDE that works on Linux, Solaris, FreeBSD, Windows, Mac OSX and other Unix-like operating systems. It is based on the KDevPlatform, KDE and Qt libraries. KDevelop is highly extensible through plugins and feature rich with the following notable features: - -- Support for Clang-based C/C++ plugin -- KDE 4 config migration support -- Revival of Oketa plugin support -- Support for different line editings in various views and plugins -- Support for Grep view and Uses widget to save vertical space plus many more - -![](http://www.tecmint.com/wp-content/uploads/2016/06/KDevelop-IDE-Editor.png) ->Visit Homepage: - -### 13. Geany IDE - -Geany is a free, fast, lightweight and cross-platform IDE developed to work with few dependencies and also operate independently from popular Linux desktops such as GNOME and KDE. It requires GTK2 libraries for functionality. - -Its features list consists of the following: - -- Support for syntax highlighting -- Code folding -- Call tips -- Symbol name auto completion -- Symbol lists -- Code navigation -- A simple project management tool -- In-built system to compile and run a users code -- Extensible through plugins - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Geany-IDE-for-Linux.png) ->Visit Homepage: - -### 14. Ajunta DeveStudio - -Ajunta DevStudio is a simple GNOME yet powerful software development studio that supports several programming languages including C/C++. - -It offers advanced programming tools such as project management, GUI designer, interactive debugger, application wizard, source editor, version control plus so many other facilities. 
Additionally, to above features, Ajunta DevStudio also has some other great IDE features and these include: - -- Simple user interface -- Extensible with plugins -- Integrated Glade for WYSIWYG UI development -- Project wizards and templates -- Integrated GDB debugger -- In-built file manager -- Integrated DevHelp for context sensitive programming help -- Source code editor with features such as syntax highlighting, smart indentation, auto-indentation, code folding/hiding, text zooming plus many more - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Anjuta-DevStudio-for-Linux.png) ->Visit Homepage: - -### 15. The GNAT Programming Studio - -The GNAT Programming Studio is a free easy to use IDE designed and developed to unify the interaction between a developer and his/her code and software. - -Built for ideal programming by facilitating source navigation while highlighting important sections and ideas of a program. It is also designed to offer a high-level of programming comfortability, enabling users to developed comprehensive systems from the ground. - -It is feature rich with the following features: - -- - Intuitive user interface -- Developer friendly -- Multi-lingual and multi-platform -- Flexible MDI(multiple document interface) -- Highly customizable -- Fully extensible with preferred tools - -![](http://www.tecmint.com/wp-content/uploads/2016/06/GNAT-Programming-Studio.jpg) ->Visit Homepage: - -### 16. Qt Creator - -It is a non-free, cross-platform IDE designed for creation of connected devices, UIs and applications. Qt creator enables users to do more of creation than actual coding of applications. - -It can be used to create mobile and desktop applications, and also connected embedded devices. 
- -Some of its features include: - -- Sophisticated code editor -- Support for version control -- Project and build management tools -- Multi-screen and multi-platform support for easy switching between build targets plus many more - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Qt-Creator.png) ->Visit Homepage: - -### 17. Emacs Editor - -Emacs is a free, powerful, highly extensible and customizable, cross-platform text editors you can use on Linux, Solaris, FreeBSD, NetBSD, OpenBSD, Windows and Mac OS X. - -The core of Emacs is also an interpreter for Emacs Lisp which is a language under the Lisp programming language. As of this writing, the latest release of GNU Emacs is version 24.5 and the fundamental and notable features of Emacs include: - -- Content-aware editing modes -- Full Unicode support -- Highly customizable using GUI or Emacs Lisp code -- A packaging system for downloading and installing extensions -- Ecosystem of functionalities beyond normal text editing including project planner, mail, calender and news reader plus many more -- A complete built-in documentation plus user tutorials and many more - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Emacs-Editor.png) ->Visit Homepage: https://www.gnu.org/software/emacs/ - -### 18. VI/VIM Editor - -Vim an improved version of VI editor, is a free, powerful, popular and highly configurable text editor. It is built to enable efficient text editing, and offers exciting editor features for Unix/Linux users, therefore, it is also a good option for writing and editing C/C++ code. - -Generally, IDEs offer more programming comfortability then traditional text editors, therefore it is always a good idea to use them. They come with exciting features and offer a comprehensive development environment, sometimes programmers are caught up between choosing the best IDE to use for C/C++ programming. 
- -There many other IDEs you can find out there and download from the Internet, but trying out several of them can help you find that which suites your needs. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/best-linux-ide-editors-source-code-editors/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/debug-source-code-in-linux-using-gdb/ diff --git a/translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md b/translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md new file mode 100644 index 0000000000..cb991d80c9 --- /dev/null +++ b/translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md @@ -0,0 +1,302 @@ + +# Linux 下 18 个最好的 C/C++ 编程或者源代码编辑的集成开发环境 + +C++,一个众所周知的 C 语言的扩展,是一个优秀的,强大的,通用的编程语言,能够提供现代化的和通用的编程功能,可以用于开发大型应用,包括视频游戏,搜索引擎,其他计算机软件甚至操作系统。 + +C++,提供高度可靠性的同时还能够允许操作底层内存来满足更高级的编程要求。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-IDE-Editors.png) + +虽然已经有了一些供程序员用来写 C/C++ 代码的文本编辑器,但 IDE 可以为简单、完美的编程提供全面的工具和组件。 + +在这篇文章里,我们会向你展示一些可以在 Linux 平台上找到的用于 C++ 或者其他编程语言编程的最好的 IDE。  + +### 1. Netbeans for C/C++ Development + +Netbeans 是一个免费的、开源和流行跨平台的 IDE 并且用于 C/C++ 以及其他编程语言。由社区提供成熟的插件使它可以充分利用其扩展性。 + +它包含项目类型和 C/C++ 模版,并且你可以使用静态和动态函数库来搭建应用程序。此外,你可以利用现有的代码去创造你的工程,并且使用它的特点将二进制文件拖放进文件夹到开发应用程序的地方。 + +让我们来看看关于它的特性: + +- C/C++ 编辑器是很好的集成多元化的 GNU GDB 调试工具 +- 支持代码协助 +- 支持 C++11 标准 +- 在里面创建和运行 C/C++ 测试程序 +- 支持 QT 工具包 +- 支持自动由已编译的应用程序到 .tar,.zip 和其他更多的归档文件封包 +- 支持多个编译器,例如: GNU,Clang/LLVM,Cygwin,Oracle Solaris Studio 和 MinGW +- 支持远程开发 +- 文件导航 +- 源代码检查 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/NetBeans-IDE.png) +>Visit Homepage: + +### 2. 
Code::Blocks
+Code::Blocks 是一个免费的、具有高度扩展性的、并且可以配置的跨平台 C++ IDE,旨在为用户提供最需要、最理想的功能,并带来一致的界面和使用感受。
+
+并且最为重要的是,你可以通过用户开发的插件扩展它的功能,这些插件中一部分随 Code::Blocks 一同发布,而更多的则是由不属于 Code::Blocks 开发团队的个人用户编写的。
+
+其特性分为编译器、调试器和界面三类,它们包括:
+
+- 支持多种编译器,如 GCC、clang、Borland C++ 5.5、digital mars 等等
+- 非常快,不需要 makefiles
+- 支持多目标(multi-target)工程
+- 支持组合项目的工作空间
+- GNU GDB 接口
+- 支持完整的断点功能,包括代码断点、数据断点、断点条件等等
+- 显示本地函数的符号和参数
+- 自定义内存转储(memory dump)和语法高亮显示
+- 可自定义、可扩展的界面,以及许多其他功能,包括那些由用户开发的插件添加的功能
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/CodeBlocks-IDE-for-Linux.png)
+>Visit Homepage: 
+
+### 3. Eclipse CDT(C/C++ Development Tooling)
+Eclipse 在编程界是一款著名的、开源的、跨平台的 IDE。为了方便界面元素的布置,它给用户提供了一个很棒的支持拖拽的界面。
+
+Eclipse CDT 是一个基于 Eclipse 平台的项目,它提供了一个功能完整的 C/C++ IDE,并具有以下功能:
+
+- 支持项目创建
+- 管理各种工具链的构建
+- 标准的 make 构建
+- 源代码导航
+- 一些知识工具,如调用图、类型层次结构、内置浏览器、宏定义浏览器
+- 支持语法高亮的代码编辑器
+- 支持代码折叠和超链接导航
+- 代码重构与代码生成
+- 可视化的调试工具,如内存、寄存器查看
+- 反汇编查看器以及更多功能
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Eclipse-IDE-for-Linux.png)
+>Visit Homepage: 
+
+### 4. CodeLite IDE
+CodeLite 也是一款专为 C/C++、JavaScript(Node.js)和 PHP 编程设计和构建的免费、开源、跨平台的 IDE。
+
+它的一些主要特点包括:
+
+- 代码自动完成,内置了两个代码自动完成引擎
+- 支持多种编译器,包括 GCC、clang/VC++
+- 以代码标注的形式显示错误
+- 可在构建(Build)选项卡中点击跳转到出错处
+- 支持下一代 LLDB 调试器
+- 支持 GDB
+- 支持重构
+- 代码导航
+- 使用内置的 SFTP 进行远程开发
+- 源代码控制插件
+- 用于开发基于 wxWidgets 应用的 RAD(快速应用程序开发)工具,以及更多的特性
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Codelite-IDE.png)
+>Visit Homepage: 
+
+### 6. Bluefish Editor
+
+Bluefish 不仅仅是一个常规的编辑器,它是一个轻量级的、快速的编辑器,为程序员提供了开发网站、编写脚本和软件代码等类似 IDE 的特性。它支持多平台,可以在 Linux、Mac OSX、FreeBSD、OpenBSD、Solaris 和 Windows 上运行,同时支持包括 C/C++ 在内的众多编程语言。
+
+下面列出的是它众多功能的一部分:
+
+- 多文档界面
+- 支持基于文件名通配模式或内容模式递归打开文件
+- 提供一个非常强大的搜索和替换功能
+- 代码片段侧边栏
+- 支持集成你自己的外部过滤器,可以使用 awk、sed、sort 等命令以及自定义脚本组成管道来处理文档
+- 支持全屏编辑
+- 网站上传和下载
+- 支持多种编码等许多其他功能
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/BlueFish-IDE-Editor-for-Linux.png)
+>Visit Homepage: 
+
+### 7.
Brackets Code Editor
+
+Brackets 是一个现代的、开源的文本编辑器,专为 Web 设计与开发打造。它可以通过插件进行高度扩展,因此 C/C++ 程序员可以安装 C/C++/Objective-C 扩展包来使用它,这个扩展包在改进 C/C++ 代码编写体验的同时提供类似 IDE 的特性。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Brackets-Code-Editor-for-Linux.png)
+>Visit Homepage: 
+
+### 8. Atom Code Editor
+
+Atom 也是一个现代化风格、开源的多平台文本编辑器,它能运行在 Linux、Windows 或是 Mac OS X 平台。它从底层到上层都可以定制(hackable),因此用户可以按照自己编写代码的需求对它进行定制。
+
+它功能完整,主要的功能包括:
+
+- 内置包管理器
+- 智能的自动完成
+- 内置文件浏览器
+- 查找、替换以及其他更多的功能
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Atom-Code-Editor-for-Linux.png)
+>Visit Homepage: >https://atom.io/>
+>Installation Instructions: 
+
+### 9. Sublime Text Editor
+
+Sublime Text 是一个完善的、跨平台的文本编辑器,用于编写代码、标记文本和撰写文章。它可以用来编写 C/C++ 代码,并且提供非常棒的用户界面。
+
+它的功能列表包括:
+
+- 多重选择
+- 命令面板(command palette)
+- 随处跳转(Goto anything)功能
+- 免打扰模式
+- 分屏编辑
+- 支持项目之间快速的切换
+- 高度可定制
+- 支持基于 Python 的插件 API 以及其他特性
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Sublime-Code-Editor-for-Linux.png)
+>Visit Homepage: 
+>Installation Instructions: 
+
+### 10. JetBrains CLion
+
+JetBrains CLion 是一个收费的、强大的、跨平台的 C/C++ IDE。它是一个完全集成的 C/C++ 开发环境,以 CMake 作为项目模型,提供内嵌的终端窗口,以及以键盘为中心的代码编写方式。
+
+它还提供了一个漂亮的现代化编辑器,以及许多令人激动的功能,打造理想的编码环境,这些功能包括:
+
+- 除了 C/C++ 还支持其他多种语言
+- 便捷地导航到符号声明或使用符号的上下文
+- 代码生成和重构
+- 可定制的编辑器
+- 实时(on-the-fly)代码分析
+- 一个集成的代码调试器
+- 支持 Git、Subversion、Mercurial、CVS、Perforce(通过插件)和 TFS
+- 无缝集成 Google Test 测试框架
+- 通过 Vim 仿真插件支持 Vim 文本编辑器
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/JetBains-CLion-IDE.png)
+>Visit Homepage: 
+
+### 11. Microsoft’s Visual Studio Code Editor
+
+Visual Studio 是一个丰富的、完全集成的、跨平台的开发环境,可以运行在 Linux、Windows 和 Mac OS X 上。最近它面向 Linux 用户开源,并重新定义了代码编辑,为用户提供了在 Windows、Android、iOS 和 Web 等多个平台上开发各种应用所需的所有工具。
+
+它功能完整,功能分类成应用程序开发、应用生命周期管理、扩展和集成特性。你可以从 Visual Studio 官网阅读全面的功能列表。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Visual-Studio-Code-Editor.png)
+>Visit Homepage: 
+
+### 12.
KDevelop
+
+KDevelop 是另一个免费、开源和跨平台的 IDE,能够运行在 Linux、Solaris、FreeBSD、Windows、Mac OS X 和其他类 Unix 操作系统上。它基于 KDevPlatform 以及 KDE 和 Qt 库。KDevelop 可以通过插件高度扩展,功能丰富并具有以下显著特色:
+
+- 支持基于 Clang 的 C/C++ 插件
+- 支持 KDE 4 配置迁移
+- 恢复了对 Oketa 插件的支持
+- 支持在各种视图和插件中使用不同的行编辑方式
+- 支持 Grep 视图,使用窗口小部件节省垂直空间等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/KDevelop-IDE-Editor.png)
+>Visit Homepage: 
+
+### 13. Geany IDE
+
+Geany 是一个免费、快速、轻量级的跨平台 IDE,它只需要很少的依赖,并且能独立于 GNOME 和 KDE 等流行的 Linux 桌面环境运行。它需要 GTK2 库才能工作。
+
+它的特性包括以下列出的内容:
+
+- 支持语法高亮显示
+- 代码折叠
+- 调用提示
+- 符号名自动完成
+- 符号列表
+- 代码导航
+- 一个简单的项目管理工具
+- 可以在其中编译并运行用户代码的内置系统
+- 可以通过插件扩展
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Geany-IDE-for-Linux.png)
+>Visit Homepage: 
+
+### 14. Anjuta DevStudio
+
+Anjuta DevStudio 是一个简单而强大的 GNOME 软件开发工作室,支持包括 C/C++ 在内的众多编程语言。
+
+它提供了先进的编程工具,比如项目管理、GUI 设计、交互式调试器、应用程序向导、源代码编辑、版本控制等。除了以上特点,Anjuta DevStudio 也有其他很多不错的 IDE 特性,包括:
+
+- 简单的用户界面
+- 可通过插件扩展
+- 集成了 Glade,用于 WYSIWYG 的 UI 开发
+- 工程向导和模板
+- 集成的 GDB 调试器
+- 内置文件管理器
+- 集成了 DevHelp,提供上下文敏感的编程辅助
+- 源码编辑器支持语法高亮显示、智能缩进、自动缩进、代码折叠/隐藏、文本缩放等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Anjuta-DevStudio-for-Linux.png)
+>Visit Homepage: 
+
+### 15. The GNAT Programming Studio
+
+GNAT Programming Studio 是一个免费的、易于使用的 IDE,旨在统一开发人员与其代码和软件之间的交互方式。
+
+它通过便捷的源代码导航来突出程序的重要部分和思路,为理想的编程体验而打造。它的设计目标是带给你高水平的编程舒适度,让用户能够从头开发完整的系统。
+
+它丰富的特性包括以下这些:
+
+- 直观的用户界面
+- 对开发者友好
+- 支持多种编程语言,跨平台
+- 灵活的 MDI(多文档界面)
+- 高度可定制
+- 可以用你偏好的工具对其进行完全扩展
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/GNAT-Programming-Studio.jpg)
+>Visit Homepage: 
+
+### 16.
Qt Creator
+
+Qt Creator 是一款非免费的、跨平台的 IDE,用于创建互联设备、用户界面和应用程序。它的目标是让用户把更多精力放在创造上,而不仅仅是应用的编码本身。
+
+它可以用来创建移动和桌面应用程序,以及互联的嵌入式设备。
+
+它的优点包含以下几点:
+
+- 成熟精致的代码编辑器
+- 支持版本控制
+- 项目和构建管理工具
+- 支持多屏幕和多平台开发,易于在不同构建目标之间切换等等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Qt-Creator.png)
+>Visit Homepage: 
+
+### 17. Emacs Editor
+
+Emacs 是一个免费的、强大的、可大幅度扩展且可定制的跨平台文本编辑器,你可以在 Linux、Solaris、FreeBSD、NetBSD、OpenBSD、Windows 和 Mac OS X 这些系统中使用它。
+
+Emacs 的核心是一个 Emacs Lisp 解释器,Emacs Lisp 是一种基于 Lisp 的编程语言。在撰写本文时,GNU Emacs 的最新版本是 24.5,Emacs 的基本及显著功能包括:
+
+- 内容识别的编辑模式
+- 完全支持 Unicode
+- 可以使用 GUI 或 Emacs Lisp 代码进行高度定制
+- 一套用于下载和安装扩展的打包系统
+- 超出普通文本编辑的整套功能,包括项目规划、邮件、日历和新闻阅读器等
+- 完整的内置文档,以及用户教程等等
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Emacs-Editor.png)
+>Visit Homepage: https://www.gnu.org/software/emacs/
+
+### 18. VI/VIM Editor
+
+Vim 是 VI 编辑器的改进版本,是一款免费的、强大的、流行的且高度可配置的文本编辑器。它为高效的文本编辑而生,并且为 Unix/Linux 使用者提供了许多激动人心的编辑器特性,因此,它也是撰写和编辑 C/C++ 代码的一个好选择。
+
+总的来说,与传统的文本编辑器相比,IDE 为编程提供了更多的便利,因此使用它们是一个很好的选择。它们带有许多激动人心的特性,并提供了一个全面的开发环境,以至于程序员有时会在选择最适合的 C/C++ IDE 时举棋不定。
+
+在互联网上你还可以找到许多 IDE 来下载,不过不妨多试几款,这可以帮助你找到最适合自己需求的那一款。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/best-linux-ide-editors-source-code-editors/
+
+作者:[Aaron Kili][a]
+译者:[ZenMoore](https://github.com/ZenMoore) [LiBrad](https://github.com/LiBrad) [WangYueScream](https://github.com/WangYueScream) [LemonDemo](https://github.com/LemonDemo)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
+[1]: http://www.tecmint.com/debug-source-code-in-linux-using-gdb/

From 2948514d818a7a8e36649b17a876791a7ae08a6b Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 20 Aug 2016 15:08:48 +0800
Subject: [PATCH 437/471] =?UTF-8?q?PUB:20160601=20Learn=20Python=20Control?=
 =?UTF-8?q?=20Flow=20and=20Loops=20to=20Write=20and=20Tune=20Shell=20Scrip?=
=?UTF-8?q?ts=20=E2=80=93=20Part=202?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @wi-cuckoo --- ...o Write and Tune Shell Scripts – Part 2.md | 292 ++++++++++++++++++ ...o Write and Tune Shell Scripts – Part 2.md | 285 ----------------- 2 files changed, 292 insertions(+), 285 deletions(-) create mode 100644 published/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md delete mode 100644 translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md diff --git a/published/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/published/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md new file mode 100644 index 0000000000..b2bc20848a --- /dev/null +++ b/published/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md @@ -0,0 +1,292 @@ +Linux 平台下 Python 脚本编程入门(二) +====================================================================================== + +在“[Linux 平台下 Python 脚本编程入门][1]”系列之前的文章里,我们向你介绍了 Python 的简介,它的命令行 shell 和 IDLE(LCTT 译注:python 自带的一个 IDE)。我们也演示了如何进行算术运算、如何在变量中存储值、还有如何打印那些值到屏幕上。最后,我们通过一个练习示例讲解了面向对象编程中方法和属性概念。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png) + +*在 Python 编程中写 Linux Shell 脚本* + +本篇中,我们会讨论控制流(根据用户输入的信息、计算的结果,或者一个变量的当前值选择不同的动作行为)和循环(自动重复执行任务),接着应用我们目前所学东西来编写一个简单的 shell 脚本,这个脚本会显示操作系统类型、主机名、内核版本、版本号和机器硬件架构。 + +这个例子尽管很基础,但是会帮助我们证明,比起使用一般的 bash 工具,我们通过发挥 Python 面向对象的特性来编写 shell 脚本会更简单些。 + +换句话说,我们想从这里出发: + +``` +# uname -snrvm +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png) + +*检查 Linux 的主机名* + +到 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png) + +*用 Python 脚本来检查 Linux 的主机名* + +或者 + 
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Script-to-Check-Linux-System-Information.png)
+
+*用脚本检查 Linux 系统信息*
+
+看着不错,不是吗?那我们就挽起袖子,开干吧。
+
+### Python 中的控制流
+
+如我们刚说那样,控制流允许我们根据一个给定的条件,选择不同的输出结果。在 Python 中最简单的实现就是一个 `if`/`else` 语句。
+
+基本语法是这样的:
+
+```
+if 条件:
+    # 动作 1
+else:
+    # 动作 2
+```
+
+当“条件”求值为真(true),下面的代码块就会被执行(`# 动作 1` 代表的部分)。否则,else 下面的代码就会运行。
+“条件”可以是任何表达式,只要可以求得值为真或者假。
+
+举个例子:
+
+1. `1 < 3` # 真
+2. `firstName == "Gabriel"` # 对 firstName 为 Gabriel 的人是真,对其他不叫 Gabriel 的人为假
+
+- 在第一个例子中,我们比较了两个值,判断 1 是否小于 3。
+- 在第二个例子中,我们比较了 firstName(一个变量)与字符串 “Gabriel”,看在当前执行的位置,firstName 的值是否等于该字符串。
+- 条件和 else 表达式都必须跟着一个冒号(`:`)。
+- **缩进在 Python 中非常重要**。同样缩进下的行被认为是相同的代码块。
+
+请注意,`if`/`else` 表达式只是 Python 中许多控制流工具的一个而已。我们先在这里了解一下,后面会用在我们的脚本中。你可以在[官方文档][2]中学到更多工具。
+
+### Python 中的循环
+
+简单来说,一个循环就是一组可以按顺序重复执行的指令或表达式序列:或者只要条件为真就一直执行,或者对列表里的每个项目逐一执行一次。
+
+Python 中最简单的循环,就是用 for 循环迭代一个给定列表的元素,或者对一个字符串从第一个字符开始,一直处理到最后一个字符结束。
+
+基本语句:
+
+```
+for x in example:
+    # do this
+```
+
+这里的 example 可以是一个列表或者一个字符串。如果是列表,变量 x 就代表列表中每个元素;如果是字符串,x 就代表字符串中每个字符。
+
+```
+>>> rockBands = []
+>>> rockBands.append("Roxette")
+>>> rockBands.append("Guns N' Roses")
+>>> rockBands.append("U2")
+>>> for x in rockBands:
+        print(x)
+或
+>>> firstName = "Gabriel"
+>>> for x in firstName:
+        print(x)
+```
+
+上面例子的输出如下图所示:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png)
+
+*学习 Python 中的循环*
+
+### Python 模块
+
+很明显,必须有个办法将一系列的 Python 指令和表达式保存到文件里,然后在需要的时候取出来。
+
+准确来说模块就是这样的。比如,os 模块提供了一个到操作系统底层的接口,允许我们做许多通常在命令行下执行的操作。
+
+没错,os 模块包含了许多可以用来调用的方法和属性,就如我们之前文章里讲解的那样。不过,我们需要使用 `import` 关键词导入(或者叫包含)模块到运行环境里来:
+
+```
+>>> import os
+```
+
+我们来打印出当前的工作目录:
+
+```
+>>> os.getcwd()
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png)
+
+*学习 Python 模块*
+
+现在,让我们把这些结合在一起(包括之前文章里讨论的概念),编写需要的脚本。
+
+### Python 脚本
+
+以一段声明文字开始一个脚本是个不错的想法,它可以表明脚本的目的、发布所依据的许可证,以及一个列出所做修改的修订历史。尽管这主要是个人喜好,但这会让我们的工作看起来比较专业。
+
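在继续之前,不妨把前面的几个概念(控制流、循环和列表)放在一起做个小练习。下面是一个示意性的小例子(其中的字段标题取自上文,而字段值是为演示而假设的数据,并非来自真实系统),它用 if/else 决定每一列的宽度,再用 for 循环逐行打印:

```python
# 列宽选择的小练习:取字段标题与字段值中较长者作为该列的宽度
def column_width(header, value):
    if len(value) < len(header):
        return len(header)
    else:
        return len(value)

# 字段标题取自上文;字段值只是假设的示意数据
headers = ["Operating system", "Hostname", "Release", "Version", "Machine"]
values = ["Linux", "tecmint", "4.4.5", "#1 SMP", "x86_64"]

for h, v in zip(headers, values):
    # str.ljust() 用空格把字符串填充到指定宽度,使各列左对齐
    print(h.ljust(column_width(h, v)), "|", v)
```

这里的 `column_width` 只是演示同一种对齐思路的简化写法;后文的完整脚本用手工拼接空格的方式实现了同样的效果。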
+这里有个脚本,可以输出这篇文章最前面展示的那样。脚本做了大量的注释,可以让大家可以理解发生了什么。 + +在进行下一步之前,花点时间来理解它。注意,我们是如何使用一个 `if`/`else` 结构,判断每个字段标题的长度是否比字段本身的值还大。 + +基于比较结果,我们用空字符去填充一个字段标题和下一个之间的空格。同时,我们使用一定数量的短线作为字段标题与其值之间的分割符。 + +``` +#!/usr/bin/python3 +# 如果你没有安装 Python 3 ,那么修改这一行为 #!/usr/bin/python + +# Script name: uname.py +# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily +# License: GPL v3 (http://www.gnu.org/licenses/gpl.html) + +# Copyright (C) 2016 Gabriel Alejandro Cánepa +# ​Facebook / Skype / G+ / Twitter / Github: gacanepa +# Email: gacanepa (at) gmail (dot) com + +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . 
+ +# REVISION HISTORY +# DATE VERSION AUTHOR CHANGE DESCRIPTION +# ---------- ------- -------------- +# 2016-05-28 1.0 Gabriel Cánepa Initial version + +### 导入 os 模块 +import os + +### 将 os.uname() 的输出赋值给 systemInfo 变量 +### os.uname() 会返回五个字符串元组(sysname, nodename, release, version, machine) +### 参见文档:https://docs.python.org/3.2/library/os.html#module-os +systemInfo = os.uname() + +### 这是一个固定的数组,用于描述脚本输出的字段标题 +headers = ["Operating system","Hostname","Release","Version","Machine"] + +### 初始化索引值,用于定义每一步迭代中 +### systemInfo 和字段标题的索引 +index = 0 + +### 字段标题变量的初始值 +caption = "" + +### 值变量的初始值 +values = "" + +### 分隔线变量的初始值 +separators = "" + +### 开始循环 +for item in systemInfo: + if len(item) < len(headers[index]): + ### 一个包含横线的字符串,横线长度等于item[index] 或 headers[index] + ### 要重复一个字符,用引号圈起来并用星号(*)乘以所需的重复次数 + separators = separators + "-" * len(headers[index]) + " " + caption = caption + headers[index] + " " + values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " " + else: + separators = separators + "-" * len(item) + " " + caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1) + values = values + item + " " + ### 索引加 1 + index = index + 1 +### 终止循环 + +### 输出转换为大写的变量(字段标题)名 +print(caption.upper()) + +### 输出分隔线 +print(separators) + +# 输出值(systemInfo 中的项目) +print(values) + +### 步骤: +### 1) 保持该脚本为 uname.py (或任何你想要的名字) +### 并通过如下命令给其执行权限: +### chmod +x uname.py +### 2) 执行它; +### ./uname.py +``` + +如果你已经按照上述描述将上面的脚本保存到一个文件里,并给文件增加了执行权限,那么运行它: + +``` +# chmod +x uname.py +# ./uname.py +``` + +如果试图运行脚本时你得到了如下的错误: + +``` +-bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory +``` + +这意味着你没有安装 Python3。如果那样的话,你要么安装 Python3 的包,要么替换解释器那行(如果如之前文章里概述的那样,跟着下面的步骤去更新 Python 执行文件的软连接,要特别注意并且非常小心): + +``` +#!/usr/bin/python3 +``` + +为 + +``` +#!/usr/bin/python +``` + +这样会通过使用已经安装好的 Python 2 去执行该脚本。 + +**注意**:该脚本在 Python 2.x 与 Pyton 3.x 上都测试成功过了。 + +尽管比较粗糙,你可以认为该脚本就是一个 Python 模块。这意味着你可以在 IDLE 中打开它(File → Open… → Select 
file):
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png)
+
+*在 IDLE 中打开 Python*
+
+一个包含有文件内容的新窗口就会打开。然后执行 Run → Run module(或者按 F5)。脚本的输出就会在原始的 Shell 里显示出来:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png)
+
+*执行 Python 脚本*
+
+如果你想纯粹用 bash 写一个脚本,也获得同样的结果,你可能需要结合使用 [awk][3]、[sed][4],并且借助复杂的方法来存储与获得列表中的元素(不要忘了使用 tr 命令将小写字母转为大写)。
+
+另外,在所有的 Linux 系统版本中都至少集成了一个 Python 版本(2.x 或者 3.x,或者两者都有)。你还需要依赖 shell 去完成同样的目标吗?那样你可能需要为不同的 shell 编写不同的版本。
+
+这里演示了面向对象编程的特性,它会成为一个系统管理员得力的助手。
+
+**注意**:你可以在我的 Github 仓库里获得 [这个 python 脚本][5](或者其他的)。
+
+### 总结
+
+这篇文章里,我们讲解了 Python 中控制流、循环/迭代和模块的概念。我们也演示了如何利用 Python 中面向对象编程的方法和属性来简化复杂的 shell 脚本。
+
+你有任何其他希望去验证的想法吗?开始吧,写出自己的 Python 脚本,如果有任何问题可以咨询我们。不必犹豫,在分割线下面留下评论,我们会尽快回复你。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/
+
+作者:[Gabriel Cánepa][a]
+译者:[wi-cuckoo](https://github.com/wi-cuckoo)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/
+[2]: https://docs.python.org/3/tutorial/controlflow.html
+[3]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/ +[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[5]: https://github.com/gacanepa/scripts/blob/master/python/uname.py + diff --git a/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md deleted file mode 100644 index d7b6bf6c2e..0000000000 --- a/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md +++ /dev/null @@ -1,285 +0,0 @@ -学习使用 python 控制流和循环来编写和执行 Shell 脚本 —— Part 2 -====================================================================================== - -在[Python series][1]之前的文章里,我们分享了 Python的一个简介,它的命令行 shell 和 IDLE(译者注:python 自带的一个IDE)。我们也演示了如何进行数值运算,如何用变量存储值,还有如何打印那些值到屏幕上。最后,我们通过一个练习示例讲解了面向对象编程中方法和属性概念。 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png) ->在 Python 编程中写 Linux Shell 脚本 - -本篇中,我嫩会讨论控制流(根据用户输入的信息,计算的结果,或者一个变量的当前值选择不同的动作行为)和循环(自动重复执行任务),接着应用到我们目前所学东西中,编写一个简单的 shell 脚本,这个脚本会显示操作系统类型,主机名,内核发行版,版本号和机器硬件名字。 - -这个例子尽管很基础,但是会帮助我们证明,比起使用一些 bash 工具写 shell 脚本,我们可以使得用 Python OOP 的兼容特性来编写 shell 脚本会更简单些。 - -换句话说,我们想从这里出发 - -``` -# uname -snrvm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png) -> 检查 Linux 的主机号 - -到 - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png) -> 用 Python 脚本来检查 Linux 的主机号 - -或者 - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Script-to-Check-Linux-System-Information.png) -> 用脚本检查 Linux 系统信息 - -看着不错,不是吗?那我们就挽起袖子,开干吧。 - -### Python 中的控制流 - -如我们刚说那样,控制流允许我们根据一个给定的条件,选择不同的输出结果。在 Python 中最简单的实现就是一个 if/else 语句。 - -基本语法是这样的: - -``` -if condition: - # action 1 -else: - # action 2 -``` - -当 condition 求值为真(true),下面的代码块就会被执行(`# action 1`代表的部分)。否则,else 下面的代码就会运行。 
-condition 可以是任何表达式,只要可以求得值为真或者假。 - -举个例子: - -1. 1 < 3 # 真 - -2. firstName == "Gabriel" # 对 firstName 为 Gabriel 的人是真,对其他不叫 Gabriel 的人为假 - - - 在第一个例子中,我们比较了两个值,判断 1 是否小于 3。 - - 在第二个例子中,我们比较了 firstName(一个变量)与字符串 “Gabriel”,看在当前执行的位置,firstName 的值是否等于该字符串。 - - 条件和 else 表达式都必须带着一个冒号(:)。 - - 缩进在 Python 非常重要。同样缩进下的行被认为是相同的代码块。 - -请注意,if/else 表达式只是 Python 中许多控制流工具的一个而已。我们先在这里了解以下,后面会用在我们的脚本中。你可以在[官方文档][2]中学到更多工具。 - -### Python 中的循环 - -简单来说,一个循环就是一组指令或者表达式序列,可以按顺序一直执行,只要一个条件为真,或者在一个列表里一次执行一个条目。 - -Python 中最简单的循环,就是 for 循环迭代一个给定列表的元素,或者一个字符串从第一个字符开始到最后一个字符结束。 - -基本语句: - -``` -for x in example: - # do this -``` - -这里的 example 可以是一个列表或者一个字符串。如果是列表,变量 x 就代表列表中每个元素;如果是字符串,x 就代表字符串中每个字符。 - -``` ->>> rockBands = [] ->>> rockBands.append("Roxette") ->>> rockBands.append("Guns N' Roses") ->>> rockBands.append("U2") ->>> for x in rockBands: - print(x) -or ->>> firstName = "Gabriel" ->>> for x in firstName: - print(x) -``` - -上面例子的输出如下图所示: - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png) ->学习 Python 中的循环 - -### Python 模块 - -很明显,必须有个途径可以保存一系列的 Python 指令和表达式到文件里,然后需要的时候再取出来。 - -准确来说模块就是这样的。特别地,os 模块提供了一个接口到操作系统的底层,允许我们做许多通常在命令行下的操作。 - -没错,os 模块包含了许多方法和属性,可以用来调用,就如我们之前文章里讲解的那样。尽管如此,我们需要使用 import 关键词导入(或者叫包含)模块到开发环境里来: - -``` ->>> import os -``` - -我们来打印出当前的工作目录: - -``` ->>> os.getcwd() -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png) ->学习 Python 模块 - -现在,让我们把所有结合在一起(包括之前文章里讨论的概念),编写需要的脚本。 - -### Python 脚本 - -以一个声明开始一个脚本是个不错的想法,表明脚本的目的,发行所依据的证书,和一个修订历史列出所做的修改。尽管这主要是个人喜好,但这会让我们的工作看起来比较专业。 - -这里有个脚本,可以输出这篇文章最前面展示的那样。脚本做了大量的注释,为了让大家可以理解发生了什么。 - -在进行下一步之前,花点时间来理解它。注意,我们是如何使用一个 if/else 结构,判断每个字段标题的长度是否比字段本身的值还大。 - -基于这个结果,我们用空字符去填充一个字段标题和下一个之间的空格。同时,我们使用一定数量的短线作为字段标题与其值之间的分割符。 - -``` -#!/usr/bin/python3 -# Change the above line to #!/usr/bin/python if you don't have Python 3 installed - -# Script name: uname.py -# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily -# License: GPL v3 
(http://www.gnu.org/licenses/gpl.html) - -# Copyright (C) 2016 Gabriel Alejandro Cánepa -# ​Facebook / Skype / G+ / Twitter / Github: gacanepa -# Email: gacanepa (at) gmail (dot) com - -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. - -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. - -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . - -# REVISION HISTORY -# DATE VERSION AUTHOR CHANGE DESCRIPTION -# ---------- ------- -------------- -# 2016-05-28 1.0 Gabriel Cánepa Initial version - -# Import the os module -import os - -# Assign the output of os.uname() to the the systemInfo variable -# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine) -# Documentation: https://docs.python.org/3.2/library/os.html#module-os -systemInfo = os.uname() - -# This is a fixed array with the desired captions in the script output -headers = ["Operating system","Hostname","Release","Version","Machine"] - -# Initial value of the index variable. It is used to define the -# index of both systemInfo and headers in each step of the iteration. -index = 0 - -# Initial value of the caption variable. -caption = "" - -# Initial value of the values variable -values = "" - -# Initial value of the separators variable -separators = "" - -# Start of the loop -for item in systemInfo: - if len(item) < len(headers[index]): - # A string containing dashes to the length of item[index] or headers[index] - # To repeat a character(s), enclose it within quotes followed - # by the star sign (*) and the desired number of times. 
- separators = separators + "-" * len(headers[index]) + " " - caption = caption + headers[index] + " " - values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " " - else: - separators = separators + "-" * len(item) + " " - caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1) - values = values + item + " " - # Increment the value of index by 1 - index = index + 1 -# End of the loop - -# Print the variable named caption converted to uppercase -print(caption.upper()) - -# Print separators -print(separators) - -# Print values (items in systemInfo) -print(values) - -# INSTRUCTIONS: -# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions: -# chmod +x uname.py -# 2) Execute it: -# ./uname.py -``` - -如果你已经保存上面的脚本到一个文件里,给文件执行权限,并且运行它,像代码底部描述的那样: - -``` -# chmod +x uname.py -# ./uname.py -``` - -如果试图运行脚本时,你得到了如下的错误: - -``` --bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory -``` - -这意味着你没有安装 Python3。如果那样的话,你要么安装 Python3 的包,要么替换解释器那行(如果你跟着下面的步骤去更新 Python 执行文件的软连接,如之前文章里概述的那样,要特别注意并且非常小心): - -``` -#!/usr/bin/python3 -``` - -为 - -``` -#!/usr/bin/python -``` - -这样会导致使用安装好的 Python 2 版本去执行该脚本。 - -**注意**: 该脚本在 Python 2.x 与 Pyton 3.x 上都测试成功过了。 - -尽管比较粗糙,你可以认为该脚本就是一个 Python 模块。这意味着你可以在 IDLE 中打开它(File → Open… → Select file): - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png) ->在 IDLE 中打开 Python - -一个包含有文件内容的新窗口就会打开。然后执行 Run → Run module(或者按 F5)。脚本的输出就会在原 Shell 里显示出来: - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png) ->执行 Python 脚本 - -如果你想纯粹用 bash 写一个脚本,也获得同样的结果,你可能需要结合使用 [awk][3],[sed][4],并且借助复杂的方法来存储与获得列表中的元素(忘了提醒使用 tr 命令将小写字母转为大写) - -另外,Python 在所有的 Linux 系统版本中集成了至少一个 Python 版本(2.x 或者 3.x,或者两者都有)。你还需要依赖 shell 去完成同样的目标吗,那样你可能会为不同的 shell 编写不同的版本。 - -这里演示了面向对象编程的特性,会成为一个系统管理员得力的助手。 - -**注意**:你可以在我的 Github 仓库里获得 [这个 python 脚本][5](或者其他的)。 - -### 总结 - -这篇文章里,我们讲解了 Python 中控制流,循环/迭代,和模块的概念。我们也演示了如何利用 
Python 中 OOP 的方法和属性,来简化复杂的 shell 脚本。 - -你有任何其他希望去验证的想法吗?开始吧,写出自己的 Python 脚本,如果有任何问题可以咨询我们。不必犹豫,在分割线下面留下评论,我们会尽快回复你。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Gabriel Cánepa][a] -译者:[wi-cuckoo](https://github.com/wi-cuckoo) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/gacanepa/ -[1]: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/ -[2]: http://please%20note%20that%20the%20if%20/%20else%20statement%20is%20only%20one%20of%20the%20many%20control%20flow%20tools%20available%20in%20Python.%20We%20reviewed%20it%20here%20since%20we%20will%20use%20it%20in%20our%20script%20later.%20You%20can%20learn%20more%20about%20the%20rest%20of%20the%20tools%20in%20the%20official%20docs. 
-[3]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/ -[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[5]: https://github.com/gacanepa/scripts/blob/master/python/uname.py - From cd81201c1949457846c56e210ff08dce0ab2b945 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Aug 2016 15:18:03 +0800 Subject: [PATCH 438/471] PUB:20160801 Linux how long a process has been running @MikeCoder --- ...801 Linux how long a process has been running.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) rename {translated/tech => published}/20160801 Linux how long a process has been running.md (76%) diff --git a/translated/tech/20160801 Linux how long a process has been running.md b/published/20160801 Linux how long a process has been running.md similarity index 76% rename from translated/tech/20160801 Linux how long a process has been running.md rename to published/20160801 Linux how long a process has been running.md index 9ef5a22859..e7e90948b1 100644 --- a/translated/tech/20160801 Linux how long a process has been running.md +++ b/published/20160801 Linux how long a process has been running.md @@ -1,14 +1,13 @@ -在 Linux 如何查看一个进程的运行时间 +在 Linux 下如何查看一个进程的运行时间 ===================== ![](http://s0.cyberciti.org/images/category/old/linux-logo.png) ->我是一个 Linux 系统的新手。我该如何在我的 Ubuntu 服务器上查看一个进程或者根据一个进程 id 来查看他已经运行的时间? +> 我是一个 Linux 系统的新手。我该如何在我的 Ubuntu 服务器上查看一个进程(或者根据进程 id 查看)已经运行了多久? -You need to use the ps command to see information about a selection of the active processes. The ps command provide following two formatting options: 你需要使用 ps 命令来查看关于一组正在运行的进程的信息。ps 命令提供了如下的两种格式化选项。 -1. etime 显示了自从该进程启动以来,经历过的时间。以[[DD-]hh:]mm:ss 的格式。 +1. etime 显示了自从该进程启动以来,经历过的时间,格式为 `[[DD-]hh:]mm:ss`。 2. etimes 显示了自该进程启动以来,经历过的时间,以秒的形式。 ### 如何查看一个进程已经运行的时间? 
@@ -39,7 +38,7 @@ $ ps -p 6176 -o etime $ ps -p 6176 -o etimes ``` -隐藏头部: +隐藏输出头部: ``` $ ps -p 6176 -o etime= @@ -50,7 +49,7 @@ $ ps -p 6176 -o etimes= ![](http://s0.cyberciti.org/uploads/faq/2016/08/How-to-check-how-long-a-process-has-been-running.jpg) -这个 6176 就是你想查看的进程的 PID。在这个例子中,我查看的是 openvpn 进程。你可以按照你的需求随意的更换 openvpn 进程名或者是 PID。在下面的例子中,我打印了 PID,执行命令,运行时间,用户 ID,和用户组 ID: +这个 6176 就是你想查看的进程的 PID。在这个例子中,我查看的是 openvpn 进程。你可以按照你的需求随意的更换 openvpn 进程名或者是 PID。在下面的例子中,我打印了 PID、执行命令、运行时间、用户 ID、和用户组 ID: ``` $ ps -p 6176 -o pid,cmd,etime,uid,gid @@ -69,7 +68,7 @@ via: http://www.cyberciti.biz/faq/how-to-check-how-long-a-process-has-been-runni 作者:[VIVEK GITE][a] 译者:[MikeCoder](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6519b7b569b2160dc4aa8123b5e26edb3f8ef90c Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Aug 2016 16:14:22 +0800 Subject: [PATCH 439/471] PUB:20160802 3 graphical tools for Git @zpl1025 --- .../20160802 3 graphical tools for Git.md | 48 +++++++++---------- 1 file changed, 24 insertions(+), 24 deletions(-) rename {translated/tech => published}/20160802 3 graphical tools for Git.md (61%) diff --git a/translated/tech/20160802 3 graphical tools for Git.md b/published/20160802 3 graphical tools for Git.md similarity index 61% rename from translated/tech/20160802 3 graphical tools for Git.md rename to published/20160802 3 graphical tools for Git.md index e26b1c10d9..1cbad11ba6 100644 --- a/translated/tech/20160802 3 graphical tools for Git.md +++ b/published/20160802 3 graphical tools for Git.md @@ -1,19 +1,19 @@ -3 个 Git 图形工具 +Git 系列(五):三个 Git 图形化工具 ============================= ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/BUSINESS_meritladder.png?itok=4CAH2wV0) 在本文里,我们来了解几个能帮你在日常工作中舒服地用上 Git 的工具。 -我是在这许多漂亮界面出来之前学习的 Git,而且我的日常工作经常是基于字符界面的,所以 Git 
本身自带的大部分功能已经足够我用了。在我看来,最好能理解 Git 的工作原理。不过,能有的选也不错,下面这些就是能让你不用终端酒可以开始使用 Git 的一些方式。 +我是在这许多漂亮界面出来之前学习的 Git,而且我的日常工作经常是基于字符界面的,所以 Git 本身自带的大部分功能已经足够我用了。在我看来,最好能理解 Git 的工作原理。不过,能有的选也不错,下面这些就是能让你不用终端就可以开始使用 Git 的一些方式。 ### KDE Dolphin 里的 Git -我是一个 KDE 用户,如果不在 Plasma 桌面环境下,就是在 Fluxbox 的应用层。Dolphin 是一个非常优秀的文件管理器,有很多配置项以及大量秘密小功能。大家为它开发的插件都特别好用,其中一个是几乎完整的 Git 界面。是的,你可以直接在自己的桌面上很方便地管理你的 Git 仓库。 +我是一个 KDE 用户,如果不在 Plasma 桌面环境下,就是在 Fluxbox 的应用层。Dolphin 是一个非常优秀的文件管理器,有很多配置项以及大量秘密小功能。大家为它开发的插件都特别好用,其中一个几乎就是完整的 Git 界面。是的,你可以直接在自己的桌面上很方便地管理你的 Git 仓库。 -但首先,你得先确认已经安装了这个插件。有些发行版带的 KDE 将各种插件都装的满满的,而有些只装了一些最基本的,所以如果你在下面的步骤里没有看到 Git 相关选项,就在你的配置目录里找找类似 dolphin-extras 或者 dolphin-plugins 的字样。 +但首先,你得先确认已经安装了这个插件。有些发行版带的 KDE 将各种插件都装的满满的,而有些只装了一些最基本的,所以如果你在下面的步骤里没有看到 Git 相关选项,就在你的软件仓库里找找类似 dolphin-extras 或者 dolphin-plugins 的包。 -要打开 Git 集成,在任何 Dolphin 窗口里点击 Settings 菜单,并选择 Configure Dolphin。 +要打开 Git 集成功能,在 Dolphin 的任一窗口里点击 Settings 菜单,并选择 Configure Dolphin。 在弹出的 Configure Dolphin 窗口里,点击左边侧栏里的 Services 图标。 @@ -21,13 +21,13 @@ ![](https://opensource.com/sites/default/files/4_dolphinconfig.jpg) -保存你的改动并关闭 Dolphin 窗口。重新启动 Dolphin,浏览一个 Git 仓库试试看。你会发现现在所有文件图标都带有标记:绿色方框表示已经提交的文件,绿色实心方块表示文件有改动,没加入库里的文件没有标记,等等。 +(勾选上它,)然后保存你的改动并关闭 Dolphin 窗口。重新启动 Dolphin,浏览一个 Git 仓库试试看。你会发现现在所有文件图标都带有标记:绿色方框表示已经提交的文件,绿色实心方块表示文件有改动,没加入库里的文件没有标记,等等。 -之后你在 Git 仓库目录下点击鼠标右键弹出的菜单里就会有 Git 选项了。你在 Dolphin 窗口里点击鼠标就可以检出一个版本,推送或提交改动,还可以对文件进行 git add 或 git remove 操作。 +之后你在 Git 仓库目录下点击鼠标右键弹出的菜单里就会有 Git 选项了。你在 Dolphin 窗口里点击鼠标就可以检出一个版本,推送或提交改动,还可以对文件进行 `git add` 或 `git remove` 操作。 ![](https://opensource.com/sites/default/files/4_dolphingit.jpg) -不过 Dolphin 不支持克隆仓库或是改变远端路径,需要到终端窗口操作,按下 F4 就可以很方便地进行切换。 +不过 Dolphin 不支持克隆仓库或是改变远端仓库路径,需要到终端窗口操作,按下 F4 就可以很方便地进行切换。 坦白地说,KDE 的这个功能太牛了,这篇文章已经可以到此为止。将 Git 集成到原生文件管理器里可以让 Git 操作非常清晰;不管你在工作流程的哪个阶段,一切都能直接地摆在面前。在终端里 Git,切换到 GUI 后也是一样 Git。完美。 @@ -35,9 +35,9 @@ ### Sparkleshare -SparkleShare 来自桌面环境的另一大阵营,由一些 GNOME 开发人员发起,一个使用文件同步模型 ("例如 Dropbox!") 的项目。不过它并没有直接嵌入到 GNOME 里,所以你可以在任何平台使用。 +SparkleShare 
来自桌面环境的另一大阵营,由一些 GNOME 开发人员发起,一个使用文件同步模型 (“就像 Dropbox 一样!”) 的项目。不过它并没有集成任何 GNOME 特有的组件,所以你可以在任何平台使用。 -如果你在用 Linux,可以从你的软件仓库直接安装 SparkleShare。如果是其它操作系统,可以去 SparkleShare 网站下载。你可以不用看 SparkleShare 网站上的指引,那个是告诉你如何架设 SparkleShare 服务器的,不是我们这里讨论的。当然你想的话也可以架 SparkleShare 服务器,但是 SparkleShare 能兼容 Git 仓库,所以其实没必要再架一个自己的。 +如果你在用 Linux,可以从你的软件仓库直接安装 SparkleShare。如果是其它操作系统,可以去 SparkleShare 网站下载。你可以不用看 SparkleShare 网站上的指引,那个是告诉你如何架设 SparkleShare 服务器的,不是我们这里讨论的。当然你想的话也可以架设 SparkleShare 服务器,但是 SparkleShare 能兼容 Git 仓库,所以其实没必要再架一个自己的。 在安装完成后,从应用程序菜单里启动 SparkleShare。走一遍设置向导,只有两个步骤外加一个简单介绍,然后可以选择是否将 SparkleShare 设置为随桌面自动启动。 @@ -49,7 +49,7 @@ SparkleShare 来自桌面环境的另一大阵营,由一些 GNOME 开发人员 ![](https://opensource.com/sites/default/files/4_sparklehost.jpg) -SparkleShare 支持本地 Git 项目,也可以是存放在像 GitHub 和 Bitbucket 这样的 Git 服务器上的项目。要获得访问权限,你可能会需要使用 SparkleShare 生成的客户端 ID。这是一个 SSH 密钥,作为你所用到服务的授权标记,包括你自己的 Git 服务器,应该也是用 SSH 公钥认证而不是用户名密码。将客户端 ID 拷贝到你服务器上 Git 用户的 authorized_hosts 文件里,或者是你的 Git 主机的 SSH 密钥面板里。 +SparkleShare 支持本地 Git 项目,也可以是存放在像 GitHub 和 Bitbucket 这样的公共 Git 服务器上的项目。要获得完整访问权限,你可能会需要使用 SparkleShare 生成的客户端 ID。这是一个 SSH 密钥,作为你所用到服务的授权令牌,包括你自己的 Git 服务器,应该也使用 SSH 公钥认证而不是用户名密码。将客户端 ID 拷贝到你服务器上 Git 用户的 `authorized_hosts` 文件里,或者是你的 Git 主机的 SSH 密钥面板里。 在配置要你要用的主机后,SparkleShare 会下载整个 Git 项目,包括(你可以自己选择)提交历史。可以在 ~/SparkleShare 目录下找到同步完成的文件。 @@ -61,9 +61,9 @@ SparkleShare 可能不适合所有人,但是它是一个强大而且简单的 另一种配合 Git 仓库工作的模型,没那么原生,更多的是监视方式;不是使用一个集成的应用程序和你的 Git 项目直接交互,而是你可以使用一个桌面客户端来监视项目改动,并随意处理每一个改动。这种方式的一个优势就是专注。当你实际只用到项目里的三个文件的时候,你可能不会关心所有的 125 个文件,能将这三个文件挑出来就很方便了。 -如果你觉得有好多 Git 网站,只是你还不知道。[桌面上的 Git 客户端][1] 上有一大把。实际上,Git 默认自带一个图形客户端。它们中平台适用最广,配置最丰富的是开源的 Git-cola 客户端,用 Python 和 Qt 写的。 +如果你觉得有好多 Git 托管网站,那只是你还不知道 Git 客户端有多少。[桌面上的 Git 客户端][1] 上有一大把。实际上,Git 默认自带一个图形客户端。它们中最跨平台、最可配置的就是开源的 [Git-cola][2] 客户端,用 Python 和 Qt 写的。 -如果你在用 Linux,Git-cola 应该在你的软件仓库里有。不是的话,可以直接从网站下载安装: +如果你在用 Linux,Git-cola 应该在你的软件仓库里就有。不是的话,可以直接从它的[网站下载][2]并安装: ``` $ python setup.py install @@ -71,47 +71,47 @@ $ python setup.py install 启动 git-cola 
后,会有三个按钮用来打开仓库,创建新仓库,或克隆仓库。 -不管选哪个,最终都会停在一个 Git 仓库中。和大多数我用过的客户端一样,Git-cola 不会尝试成为你的仓库的接口;它们一般会让操作系统工具来做这个。换句话说,我可以通过 Git-cola 创建一个仓库,但随后我就在 Thunar 或 Emacs 里打开仓库开始工作。打开 Git-cola 来监视仓库很不错,因为当你创建新文件,或者改动文件的时候,它们都会出现在 Git-cola 的状态栏里。 +不管选哪个,最终都会停在一个 Git 仓库中。和大多数我用过的客户端一样,Git-cola 不会尝试成为你的仓库的接口;它们一般会让操作系统工具来做这个。换句话说,我可以通过 Git-cola 创建一个仓库,但随后我就在 Thunar 或 Emacs 里打开仓库开始工作。打开 Git-cola 来监视仓库很不错,因为当你创建新文件,或者改动文件的时候,它们都会出现在 Git-cola 的状态面板里。 -Git-cola 的默认布局不是线性的。我喜欢从左向右排列,因为 Git-cola 又是高度可配置的,你可以随便修改布局。我自己设置成最左边是状态栏,显示当前分支的任何改动,然后右边是改动栏,可以浏览当前改动,然后是动作栏,放一些常用任务的快速按钮,最后,最右边是提交栏,可以写提交信息。 +Git-cola 的默认布局不是线性的。我喜欢从左向右分布,因为 Git-cola 是高度可配置的,所以你可以随便修改布局。我自己设置成最左边是状态面板,显示当前分支的任何改动,然后右边是差异面板,可以浏览当前改动,然后是动作面板,放一些常用任务的快速按钮,最后,最右边是提交面板,可以写提交信息。 ![](https://opensource.com/sites/default/files/4_gitcola.jpg) 不管怎么改布局,下面是 Git-cola 的通用流程: -改动会出现在状态栏里。右键点击一个改动或选中一个文件,然后在动作栏里点击 Stage 按钮来将文件加入待提交暂存区。 +改动会出现在状态面板里。右键点击一个改动或选中一个文件,然后在动作面板里点击 Stage 按钮来将文件加入待提交暂存区。 -待提交文件的图标会变成绿色三角形,表示该文件有改动并且正等待提交。你也可以右键点击并选择 Unstage Selected 将改动移出待提交暂存区,或者点击动作栏里的 Unstage 按钮。 +待提交文件的图标会变成绿色三角形,表示该文件有改动并且正等待提交。你也可以右键点击并选择 Unstage Selected 将改动移出待提交暂存区,或者点击动作面板里的 Unstage 按钮。 -在改动栏里检查你的改动。 +在差异面板里检查你的改动。 当准备好提交后,输入提交信息并点击 Commit 按钮。 -在动作栏里还有其它按钮用来处理其它普通任务,比如抽取或推送。菜单里有更多的任务列表,比如专用的操作分支,改动审查,变更基础,等等。 +在动作面板里还有其它按钮用来处理其它普通任务,比如拉取或推送。菜单里有更多的任务列表,比如用于操作分支,改动审查,变基等等的专用操作。 我更愿意将 Git-cola 当作文件管理器的一个浮动面板(在不能用 Dolphin 的时候我只用 Git-cola)。虽然它的交互性没有完全集成 Git 的文件管理器那么强,但另一方面,它几乎提供了原始 Git 命令的所有功能,所以它实际上更为强大。 -有很多 Git 图形客户端。有些是不提供源代码的付费软件,有些只是用来查看,有些尝试加入新的特定术语(用 "sync" 替代 "push" ...?) 来重造 Git,也有一些是适合特定平台。Git-cola 一直是最简单的能在任意平台上使用的客户端,也是最贴近纯粹 Git 命令的,可以让用户在使用过程中学习 Git,高手也很满意它的界面和术语。 +有很多 Git 图形客户端。有些是不提供源代码的付费软件,有些只是用来查看,有些尝试加入新的特定术语(用 "sync" 替代 "push" ...?) 来重造 Git,也有一些只适合特定的平台。Git-cola 一直是能在任意平台上使用的最简单的客户端,也是最贴近纯粹 Git 命令的,可以让用户在使用过程中学习 Git,即便是高手也会很满意它的界面和术语。 ### Git 命令还是图形界面? 
-我一般不用图形工具来操作 Git;大多数时候我使用上面介绍的工具,是帮助其他人找出适合他们的界面。不过在一天结束的时候,不同的工作适合不一样的工具。我喜欢基于终端的 Git 命令是因为可以很好地集成到 Emacs 里,但如果某天我几乎都在用 Inkscape 工作时,我一般会很自然地使用 Dolphin 里带的 Git,因为我在 Dolphin 环境里。 +我一般不用图形工具来操作 Git;一般我使用上面介绍的工具时,只是帮助其他人找出适合他们的界面。不过,最终归结于怎么适合你的工作。我喜欢基于终端的 Git 命令是因为它可以很好地集成到 Emacs 里,但如果某天我几乎都在用 Inkscape 工作时,我一般会很自然地使用 Dolphin 里带的 Git,因为我在 Dolphin 环境里。 -如何使用 Git 你自己可以选择;但要记住 Git 是一种让生活更轻松的方式,也是让你在工作中更安全地尝试一些疯狂点子的方法。熟悉 Git 的工作模式,然后不管什么方式使用 Git,只要能让你觉得最适合就可以。 +如何使用 Git 你自己可以选择;但要记住 Git 是一种让生活更轻松的方式,也是让你在工作中更安全地尝试一些疯狂点子的方法。熟悉 Git 的工作模式,然后不管以什么方式使用 Git,只要能让你觉得最适合就可以。 在下一期文章里,我们将了解如何架设和管理 Git 服务器,包括用户权限和管理,以及运行定制脚本。 - -------------------------------------------------------------------------------- via: https://opensource.com/life/16/8/graphical-tools-git 作者:[Seth Kenlon][a] 译者:[zpl1025](https://github.com/zpl1025) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/seth [1]: https://git-scm.com/downloads/guis +[2]: https://git-cola.github.io/ From 4402eeb0d44a96f9ed0b931c18e7b4e31f16e4ca Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sat, 20 Aug 2016 18:51:28 +0800 Subject: [PATCH 440/471] Translating by cposture (#4321) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 75 * Translated by cposture * Translated by cposture * Translated by cposture * 应 erlinux 的要求,把未翻译完的文章移到 source * 应 erlinux 的要求,把未翻译完的文章移到 source,同时删除 translated文章 * Translating --- ...hon unittest - assertTrue is truthy - assertFalse is falsy.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md b/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md index bfed46fb0e..30589a9d63 100644 --- a/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is 
falsy.md
+++ b/sources/tech/20160512 Python unittest - assertTrue is truthy - assertFalse is falsy.md
@@ -1,3 +1,4 @@
+Translating by cposture
 Python unittest: assertTrue is truthy, assertFalse is falsy
 ===========================
 

From f201901d69dec1c3f80f092c5ae1bd780da2c849 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 20 Aug 2016 18:52:24 +0800
Subject: [PATCH 441/471] =?UTF-8?q?20160820-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ....js applications with HTTP2 Server Push.md | 114 ++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100644 sources/tech/20160816 Accelerating Node.js applications with HTTP2 Server Push.md

diff --git a/sources/tech/20160816 Accelerating Node.js applications with HTTP2 Server Push.md b/sources/tech/20160816 Accelerating Node.js applications with HTTP2 Server Push.md
new file mode 100644
index 0000000000..9ea8c9079a
--- /dev/null
+++ b/sources/tech/20160816 Accelerating Node.js applications with HTTP2 Server Push.md
@@ -0,0 +1,114 @@
+Accelerating Node.js applications with HTTP/2 Server Push
+=========================================================
+
+In April, we [announced support for HTTP/2 Server Push][3] via the HTTP Link header. My coworker John has demonstrated how easy it is to [add Server Push to an example PHP application][4].
+
+![](https://blog.cloudflare.com/content/images/2016/08/489477622_594bf9e3d9_z.jpg)
+
+We wanted to make it easy to improve the performance of contemporary websites built with Node.js. We developed the netjet middleware to parse the generated HTML and automatically add the Link headers. When used with an example Express application you can see the headers being added:
+
+![](https://blog.cloudflare.com/content/images/2016/08/2016-08-11_13-32-45.png)
+
+We use Ghost to power this blog, so if your browser supports HTTP/2 you have already benefited from Server Push without realizing it! More on that below.
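What netjet emits is a standard preload `Link` header, which CloudFlare's edge then turns into HTTP/2 Server Push. As a rough sketch of the transformation the middleware automates (this is hypothetical illustration code, not netjet's implementation, and `buildLinkHeader` is an invented name), the header value for a set of discovered assets could be assembled like this:

```javascript
// Build a preload Link header value from asset paths found in a page.
// buildLinkHeader and asTypes are illustrative, not part of netjet's API.
function buildLinkHeader(assets) {
  var asTypes = { js: 'script', css: 'style', png: 'image', jpg: 'image' };
  return assets.map(function (path) {
    var ext = path.split('.').pop();
    var as = asTypes[ext] || 'fetch';
    return '<' + path + '>; rel=preload; as=' + as;
  }).join(', ');
}

console.log(buildLinkHeader(['/js/app.js', '/css/style.css']));
// </js/app.js>; rel=preload; as=script, </css/style.css>; rel=preload; as=style
```

An Express handler could then attach the result with `res.setHeader('Link', buildLinkHeader(assets))` before ending the response.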
+ +In netjet, we use the PostHTML project to parse the HTML with a custom plugin. Right now it is looking for images, scripts and external stylesheets. You can implement this same technique in other environments too. + +Putting an HTML parser in the response stack has a downside: it will increase the page load latency (or "time to first byte"). In most cases, the added latency will be overshadowed by other parts of your application, such as database access. However, netjet includes an adjustable LRU cache keyed by ETag headers, allowing netjet to insert Link headers quickly on pages already parsed. + +If you are designing a brand new application, however, you should consider storing metadata on embedded resources alongside your content, eliminating the HTML parse, and possible latency increase, entirely. + +Netjet is compatible with any Node.js HTML framework that supports Express-like middleware. Getting started is as simple as adding netjet to the beginning of your middleware chain. + +``` +var express = require('express'); +var netjet = require('netjet'); +var root = '/path/to/static/folder'; + +express() + .use(netjet({ + cache: { + max: 100 + } + })) + .use(express.static(root)) + .listen(1337); +``` + +With a little more work, you can even use netjet without frameworks. + +``` +var http = require('http'); +var netjet = require('netjet'); + +var port = 1337; +var hostname = 'localhost'; +var preload = netjet({ + cache: { + max: 100 + } +}); + +var server = http.createServer(function (req, res) { + preload(req, res, function () { + res.statusCode = 200; + res.setHeader('Content-Type', 'text/html'); + res.end('
+<!doctype html><h1>Hello World</h1>
');
+  });
+});
+
+server.listen(port, hostname, function () {
+  console.log('Server running at http://' + hostname + ':' + port + '/');
+});
+```

+See the [netjet documentation][1] for more information on the supported options.
+
+### Seeing what’s pushed
+
+![](https://blog.cloudflare.com/content/images/2016/08/2016-08-02_10-49-33.png)
+
+Chrome's Developer Tools makes it easy to verify that your site is using Server Push. The Network tab shows pushed assets with "Push" included as part of the initiator.
+
+Unfortunately, Firefox's Developer Tools don't yet directly expose whether a resource was pushed. You can, however, check for the cf-h2-pushed header in the page's response headers, which contains a list of resources that CloudFlare offered browsers over Server Push.
+
+Contributions to improve netjet or the documentation are greatly appreciated. I'm excited to hear where people are using netjet.
+
+### Ghost and Server Push
+
+Ghost is one such exciting integration. With the aid of the Ghost team, I've integrated netjet, and it has been available as an opt-in beta since version 0.8.0.
+
+If you are running a Ghost instance, you can enable Server Push by modifying the server's config.js file and adding the preloadHeaders option to the production configuration block.
+
+```
+production: {
+  url: 'https://my-ghost-blog.com',
+  preloadHeaders: 100,
+  // ...
+}
+```
+
+Ghost has put together [a support article][2] for Ghost(Pro) customers.
+
+### Conclusion
+
+With netjet, your Node.js applications can start to use browser preloading and, when used with CloudFlare, HTTP/2 Server Push today.
+
+At CloudFlare, we're excited to make tools to help increase the performance of websites. If you find this interesting, we are hiring in Austin, Texas; Champaign, Illinois; London; San Francisco; and Singapore.
+ + +-------------------------------------------------------------------------------- + +via: https://blog.cloudflare.com/accelerating-node-js-applications-with-http-2-server-push/?utm_source=nodeweekly&utm_medium=email + +作者:[Terin Stock][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.cloudflare.com/author/terin-stock/ +[1]: https://www.npmjs.com/package/netjet +[2]: http://support.ghost.org/preload-headers/ +[3]: https://www.cloudflare.com/http2/server-push/ +[4]: https://blog.cloudflare.com/using-http-2-server-push-with-php/ From 7f26b08ef20ee726a76e646c234bb2116183bf33 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 20 Aug 2016 18:59:53 +0800 Subject: [PATCH 442/471] =?UTF-8?q?20160820-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Deploying React with Zero Configuration.md | 69 +++++++++++++++++++ 1 file changed, 69 insertions(+) create mode 100644 sources/tech/20160816 Deploying React with Zero Configuration.md diff --git a/sources/tech/20160816 Deploying React with Zero Configuration.md b/sources/tech/20160816 Deploying React with Zero Configuration.md new file mode 100644 index 0000000000..e2cb16a223 --- /dev/null +++ b/sources/tech/20160816 Deploying React with Zero Configuration.md @@ -0,0 +1,69 @@ +Deploying React with Zero Configuration +======================== + +So you want to build an app with [React][1]? "[Getting started][2]" is easy… and then what? + +React is a library for building user interfaces, which comprise only one part of an app. Deciding on all the other parts — styles, routers, npm modules, ES6 code, bundling and more — and then figuring out how to use them is a drain on developers. This has become known as [javascript fatigue][3]. Despite this complexity, usage of React continues to grow. 
+
+The community answers this challenge by sharing boilerplates. These [boilerplates][4] reveal the profusion of architectural choices developers must make. That official "Getting Started" seems so far away from the reality of an operational app.
+
+### New, Zero-configuration Experience
+
+Inspired by the cohesive developer experience provided by [Ember.js][5] and [Elm][6], the folks at Facebook wanted to provide an easy, opinionated way forward. They created a new way to develop React apps, `create-react-app`. In the three weeks since initial public release, it has received tremendous community awareness (over 8,000 GitHub stargazers) and support (dozens of pull requests).
+
+`create-react-app` is different than many past attempts with boilerplates and starter kits. It targets zero configuration [convention-over-configuration][7], focusing the developer on what is interesting and different about their application.
+
+A powerful side-effect of zero configuration is that the tools can now evolve in the background. Zero configuration lays the foundation for the tools ecosystem to create automation and delight developers far beyond React itself.
+
+### Zero-configuration Deploy to Heroku
+
+Thanks to the zero-config foundation of create-react-app, the idea of zero-config deployment seemed within reach. Since these new apps all share a common, implicit architecture, the build process can be automated and then served with intelligent defaults. So, [we created this community buildpack to experiment with no-configuration deployment to Heroku][8].
+
+#### Create and Deploy a React App in Two Minutes
+
+You can get started building React apps for free on Heroku.
+
+```
+npm install -g create-react-app
+create-react-app my-app
+cd my-app
+git init
+heroku create -b https://github.com/mars/create-react-app-buildpack.git
+git add .
+git commit -m "react-create-app on Heroku"
+git push heroku master
+heroku open
+```
+
+Try it yourself [using the buildpack docs][9].
+ +### Growing Up from Zero Config + +create-react-app is very new (currently version 0.2) and since its target is a crystal-clear developer experience, more advanced use cases are not supported (or may never be supported). For example, it does not provide server-side rendering or customized bundles. + +To support greater control, create-react-app includes the command npm run eject. Eject unpacks all the tooling (config files and package.json dependencies) into the app's directory, so you can customize to your heart's content. Once ejected, changes you make may necessitate switching to a custom deployment with Node.js and/or static buildpacks. Always perform such project changes through a branch / pull request, so they can be easily undone. Heroku's Review Apps are perfect for testing changes to the deployment. + +We'll be tracking progress on create-react-app and adapting the buildpack to support more advanced use cases as they become available. Happy deploying! + + + +-------------------------------------------------------------------------------- + +via: https://blog.heroku.com/deploying-react-with-zero-configuration?c=7013A000000NnBFQA0&utm_campaign=Display%20-%20Endemic%20-Cooper%20-Node%20-%20Blog%20-%20Zero-Configuration&utm_medium=display&utm_source=cooperpress&utm_content=blog&utm_term=node + +作者:[Mars Hall][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.heroku.com/deploying-react-with-zero-configuration?c=7013A000000NnBFQA0&utm_campaign=Display%20-%20Endemic%20-Cooper%20-Node%20-%20Blog%20-%20Zero-Configuration&utm_medium=display&utm_source=cooperpress&utm_content=blog&utm_term=node +[1]: https://facebook.github.io/react/ +[2]: https://facebook.github.io/react/docs/getting-started.html +[3]: https://medium.com/@ericclemmons/javascript-fatigue-48d4011b6fc4 +[4]: https://github.com/search?q=react+boilerplate +[5]: 
http://emberjs.com/
+[6]: http://elm-lang.org/
+[7]: http://rubyonrails.org/doctrine/#convention-over-configuration
+[8]: https://github.com/mars/create-react-app-buildpack
+[9]: https://github.com/mars/create-react-app-buildpack#usage

From 7c2d01895c137d72d40c84fcc901e20b01a87634 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 20 Aug 2016 19:08:50 +0800
Subject: [PATCH 443/471] =?UTF-8?q?20160820-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...JavaScript framework - Execution timing.md | 192 ++++++++++++++++++
 1 file changed, 192 insertions(+)
 create mode 100644 sources/tech/20160812 Writing a JavaScript framework - Execution timing.md

diff --git a/sources/tech/20160812 Writing a JavaScript framework - Execution timing.md b/sources/tech/20160812 Writing a JavaScript framework - Execution timing.md
new file mode 100644
index 0000000000..601abecb80
--- /dev/null
+++ b/sources/tech/20160812 Writing a JavaScript framework - Execution timing.md
@@ -0,0 +1,192 @@
+Writing a JavaScript framework - Execution timing, beyond setTimeout
+===================
+
+This is the second chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the different ways of executing asynchronous code in the browser. You will read about the event loop and the differences between timing techniques, like setTimeout and Promises.
+
+The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the [home page][1].
+
+The series includes the following chapters:
+
+1. [Project structuring][2]
+2. Execution timing (current chapter)
+3. [Sandboxed code evaluation][3]
+4. Data binding (part 1)
+5. Data binding (part 2)
+6. Custom elements
+7. 
Client side routing + +### Async code execution + +Most of you are probably familiar with Promise, process.nextTick(), setTimeout() and maybe requestAnimationFrame() as ways of executing asynchronous code. They all use the event loop internally, but they behave quite differently regarding precise timing. + +In this chapter, I will explain the differences, then show you how to implement a timing system that a modern framework, like NX requires. Instead of reinventing the wheel we will use the native event loop to achieve our goals. + +### The event loop + +The event loop is not even mentioned in the ES6 spec. JavaScript only has jobs and job queues on its own. A more complex event loop is specified separately by NodeJS and the HTML5 spec. Since this series is about the front-end I will explain the latter one here. + +The event loop is called a loop for a reason. It is infinitely looping and looking for new tasks to execute. A single iteration of this loop is called a tick. The code executed during a tick is called a task. + +``` +while (eventLoop.waitForTask()) { + eventLoop.processNextTask() +} +``` + +Tasks are synchronous pieces of code that may schedule other tasks in the loop. An easy programmatic way to schedule a new task is setTimeout(taskFn). However, tasks may come from several other sources like user events, networking or DOM manipulation. + +![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_event_lopp_with_tasks-1470127590983.svg) + +### Task queues + +To complicate things a bit, the event loop can have multiple task queues. The only two restrictions are that events from the same task source must belong to the same queue and tasks must be processed in insertion order in every queue. Apart from these, the user agent is free to do as it wills. For example, it may decide which task queue to process next. 
+
+```
+while (eventLoop.waitForTask()) {
+  const taskQueue = eventLoop.selectTaskQueue()
+  if (taskQueue.hasNextTask()) {
+    taskQueue.processNextTask()
+  }
+}
+```
+
+With this model, we lose precise control over timing. The browser may decide to totally empty several other queues before it gets to our task scheduled with setTimeout().
+
+![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_event_loop_with_task_queues-1470127624172.svg)
+
+### The microtask queue
+
+Fortunately, the event loop also has a single queue called the microtask queue. The microtask queue is completely emptied in every tick after the current task finished executing.
+
+
+```
+while (eventLoop.waitForTask()) {
+  const taskQueue = eventLoop.selectTaskQueue()
+  if (taskQueue.hasNextTask()) {
+    taskQueue.processNextTask()
+  }
+
+  const microtaskQueue = eventLoop.microTaskQueue
+  while (microtaskQueue.hasNextMicrotask()) {
+    microtaskQueue.processNextMicrotask()
+  }
+}
+```
+
+The easiest way to schedule a microtask is Promise.resolve().then(microtaskFn). Microtasks are processed in insertion order, and since there is only one microtask queue, the user agent can't mess with us this time.
+
+Moreover, microtasks can schedule new microtasks that will be inserted in the same queue and processed in the same tick.
+
+![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_event_loop_with_microtask_queue-1470127679393.svg)
+
+### Rendering
+
+The last thing missing is the rendering schedule. Unlike event handling or parsing, rendering is not done by separate background tasks. It is an algorithm that may run at the end of every loop tick.
+
+The user agent has a lot of freedom again: it may render after every task, but it may decide to let hundreds of tasks execute without rendering.
+
+Fortunately, there is requestAnimationFrame(), which executes the passed function right before the next render. Our final event loop model looks like this.
+ +``` +while (eventLoop.waitForTask()) { + const taskQueue = eventLoop.selectTaskQueue() + if (taskQueue.hasNextTask()) { + taskQueue.processNextTask() + } + + const microtaskQueue = eventLoop.microTaskQueue + while (microtaskQueue.hasNextMicrotask()) { + microtaskQueue.processNextMicrotask() + } + + if (shouldRender()) { + applyScrollResizeAndCSS() + runAnimationFrames() + render() + } +} +``` + +Now let’s use all this knowledge to build a timing system! + +### Using the event loop + +As most modern frameworks, NX deals with DOM manipulation and data binding in the background. It batches operations and executes them asynchronously for better performance. To time these things right it relies on Promises, MutationObservers and requestAnimationFrame(). + +The desired timing is this: + +1. Code from the developer +2. Data binding and DOM manipulation reactions by NX +3. Hooks defined by the developer +4. Rendering by the user agent + +#### Step 1 + +NX registers object mutations with ES6 Proxies and DOM mutations with a MutationObserver synchronously (more about these in the next chapters). It delays the reactions as microtasks until step 2 for optimized performance. This delay is done by Promise.resolve().then(reaction) for object mutations, and handled automatically by the MutationObserver as it uses microtasks internally. + +#### Step 2 + +The code (task) from the developer finished running. The microtask reactions registered by NX start executing. Since they are microtasks they run in order. Note that we are still in the same loop tick. + +#### Step 3 + +NX runs the hooks passed by the developer using requestAnimationFrame(hook). This may happen in a later loop tick. The important thing is that the hooks run before the next render and after all data, DOM and CSS changes are processed. + +#### Step 4 + +The browser renders the next view. This may also happen in a later loop tick, but it never happens before the previous steps in a tick. 
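The ordering that steps 1 to 4 depend on can be observed directly in any JavaScript runtime. The snippet below is a standalone demo rather than NX code; the `order` array and the closing 20 ms timer are scaffolding added purely for illustration:

```javascript
// The current task finishes first, the microtask queue then drains fully,
// and only afterwards does the next task (the setTimeout callback) run.
const order = []

setTimeout(() => order.push('task'), 0)           // scheduled on a task queue
Promise.resolve()
  .then(() => order.push('microtask 1'))          // scheduled on the microtask queue
  .then(() => order.push('microtask 2'))          // chained, still the same tick
order.push('sync')                                // part of the current task

const done = new Promise(resolve => {
  setTimeout(() => {
    console.log(order.join(' -> '))
    // sync -> microtask 1 -> microtask 2 -> task
    resolve(order)
  }, 20)
})
```

This is exactly why NX can delay its reactions with Promise.resolve().then(reaction) and still be certain they run before the next task, and before the next render.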
+ +### Things to keep in mind + +We just implemented a simple but effective timing system on top of the native event loop. It works well in theory, but timing is a delicate thing, and slight mistakes can cause some very strange bugs. + +In a complex system, it is important to set up some rules about the timing and keep to them later. For NX I have the following rules. + +1. Never use setTimeout(fn, 0) for internal operations +2. Register microtasks with the same method +3. Reserve microtasks for internal operations only +4. Do not pollute the developer hook execution time window with anything else + +#### Rule 1 and 2 + +Reactions on data and DOM manipulation should execute in the order the manipulations happened. It is okay to delay them as long as their execution order is not mixed up. Mixing execution order makes things unpredictable and difficult to reason about. + +setTimeout(fn, 0) is totally unpredictable. Registering microtasks with different methods also leads to mixed up execution order. For example microtask2 would incorrectly execute before microtask1 in the example below. + +``` +Promise.resolve().then().then(microtask1) +Promise.resolve().then(microtask2) +``` + +![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_microtask_registration_method-1470127727609.svg) + +#### Rule 3 and 4 + +Separating the time window of the developer code execution and the internal operations is important. Mixing these two would start to cause seemingly unpredictable behavior, and it would eventually force developers to learn about the internal working of the framework. I think many front-end developers have experiences like this already. + +### Conclusion + +If you are interested in the NX framework, please visit the home page. Adventurous readers can find the [NX source code][5] in this Github repository. + +I hope you found this a good read, see you next time when I’ll discuss [sandboxed code evaluation][4]! 
+ +If you have any thoughts on the topic, please share them in the comments. + +-------------------------------------------------------------------------------- + +via: https://blog.risingstack.com/writing-a-javascript-framework-execution-timing-beyond-settimeout/?utm_source=javascriptweekly&utm_medium=email + +作者:[Bertalan Miklos][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.risingstack.com/author/bertalan/ +[1]: http://nx-framework.com/ +[2]: https://blog.risingstack.com/writing-a-javascript-framework-project-structuring/ +[3]: https://blog.risingstack.com/writing-a-javascript-framework-sandboxed-code-evaluation/ +[4]: https://blog.risingstack.com/writing-a-javascript-framework-sandboxed-code-evaluation/ +[5]: https://github.com/RisingStack/nx-framework From 6a163ec25b6d8e31b332c210d8fe7dca5c6358c3 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 20 Aug 2016 19:11:23 +0800 Subject: [PATCH 444/471] =?UTF-8?q?20160820-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0160805 Introducing React Native Ubuntu.md | 39 +++++++++++++++++++ 1 file changed, 39 insertions(+) create mode 100644 sources/tech/20160805 Introducing React Native Ubuntu.md diff --git a/sources/tech/20160805 Introducing React Native Ubuntu.md b/sources/tech/20160805 Introducing React Native Ubuntu.md new file mode 100644 index 0000000000..48e7be997c --- /dev/null +++ b/sources/tech/20160805 Introducing React Native Ubuntu.md @@ -0,0 +1,39 @@ +Introducing React Native Ubuntu +===================== + +In the Webapps team at Canonical, we are always looking to make sure that web and near-web technologies are available to developers. 
We want to make everyone's life easier, enable the use of tools that are familiar to web developers and provide an easy path to using them on the Ubuntu platform.
+
+We have support for web applications and creating and packaging Cordova applications; both of these enable any web framework to be used in creating great application experiences on the Ubuntu platform.
+
+One popular web framework that can be used in these environments is React.js; React.js is a UI framework with a declarative programming model and strong component system, which focuses primarily on the composition of the UI, so you can use what you like elsewhere.
+
+While these environments are great, sometimes you need just that bit more performance, or to be able to work with native UI components directly, but working in a less familiar environment might not be a good use of time. If you are familiar with React.js, it's easy to move into full native development with all your existing knowledge and tools by developing with React Native. React Native is the sister to React.js; you can use the same style and code to create an application that works directly with native components, with native levels of performance, but with the ease and speed of development you would expect.
+
+
+![](http://i.imgur.com/ZsSHWXP.png)
+
+We are happy to announce that along with our HTML5 application support, it is now possible to develop React Native applications on the Ubuntu platform. You can port existing iOS or Android React Native applications, or you can start a new application leveraging your web-dev skills.
+
+You can find the source code for React Native Ubuntu [here][1].
+
+To get started, follow the instructions in [README-ubuntu.md][2] and create your first application.
+
+The Ubuntu support includes the ability to generate packages. Managed by the React Native CLI, building a snap is as easy as 'react-native package-ubuntu --snap'.
It's also possible to build a click package for Ubuntu devices; meaning React Native Ubuntu apps are store ready from the start.
+
+Over the next little while there will be blog posts on everything you need to know about developing a React Native Application for the Ubuntu platform; creating the app, the development process, packaging and releasing to the store. There will also be some information on how to develop new reusable modules that can add extra functionality to the runtime and be distributed as Node Package Manager (npm) modules.
+
+Go and experiment, and see what you can create.
+
+--------------------------------------------------------------------------------
+
+via: https://developer.ubuntu.com/en/blog/2016/08/05/introducing-react-native-ubuntu/?utm_source=javascriptweekly&utm_medium=email
+
+作者:[Justin McPherson][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://developer.ubuntu.com/en/blog/authors/justinmcp/
+[1]: https://github.com/CanonicalLtd/react-native
+[2]: https://github.com/CanonicalLtd/react-native/blob/ubuntu/README-ubuntu.md

From 9804fb6f418b8eb2ddc2c3ad24a238a3c82f6eee Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 20 Aug 2016 19:22:08 +0800
Subject: [PATCH 445/471] =?UTF-8?q?20160820-5=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 sources/tech/20160813 Journey-to-HTTP2.md | 213 ++++++++++++++++++++++
 1 file changed, 213 insertions(+)
 create mode 100644 sources/tech/20160813 Journey-to-HTTP2.md

diff --git a/sources/tech/20160813 Journey-to-HTTP2.md b/sources/tech/20160813 Journey-to-HTTP2.md
new file mode 100644
index 0000000000..706ec1485f
--- /dev/null
+++ b/sources/tech/20160813 Journey-to-HTTP2.md
@@ -0,0 +1,213 @@
+Journey to HTTP/2
+===================
+
+It has been quite some time since I last wrote on my blog and
the reason is that I have not been able to find enough time to put into it. I finally got some time today and thought I would put some of it into writing about HTTP.
+
+HTTP is the protocol that every web developer should know as it powers the whole web, and knowing it is definitely going to help you develop better applications.
+
+In this article, I am going to be discussing what HTTP is, how it came to be, where it is today and how we got here.
+
+### What is HTTP?
+
+First things first, what is HTTP? HTTP is the TCP/IP-based application layer communication protocol which standardizes how the client and server communicate with each other. It defines how the content is requested and transmitted across the internet. By application layer protocol, I mean it's just an abstraction layer that standardizes how the hosts (clients and servers) communicate, and it itself depends upon TCP/IP to get requests and responses between the client and server. By default TCP port 80 is used, but other ports can be used as well. HTTPS, however, uses port 443.
+
+#### HTTP/0.9 - The One Liner (1991)
+
+The first documented version of HTTP was HTTP/0.9, which was put forward in 1991. It was the simplest protocol ever, having a single method called GET. If a client had to access some webpage on the server, it would have made a simple request like the one below
+
+```
+GET /index.html
+```
+
+And the response from the server would have looked as follows
+
+```
+(response body)
+(connection closed)
+```
+
+That is, the server would get the request, reply with the HTML in response, and as soon as the content had been transferred, the connection would be closed. There were
+
+- No headers
+- GET was the only allowed method
+- Response had to be HTML
+
+As you can see, the protocol really had nothing more than being a stepping stone for what was to come.
+
+#### HTTP/1.0 - 1996
+
+In 1996, the next version of HTTP, i.e. HTTP/1.0, evolved, vastly improving over the original version.
Unlike HTTP/0.9 which was only designed for HTML responses, HTTP/1.0 could now deal with other response formats i.e. images, video files, plain text or any other content type as well. It added more methods (i.e. POST and HEAD), request/response formats got changed, HTTP headers got added to both the request and responses, status codes were added to identify the response, character set support was introduced, multi-part types, authorization, caching, content encoding and more was included.
+
+Here is how a sample HTTP/1.0 request and response might have looked:
+
+```
+GET / HTTP/1.0
+Host: kamranahmed.info
+User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)
+Accept: */*
+```
+
+As you can see, alongside the request, the client has also sent its personal information, required response type etc. While in HTTP/0.9 the client could never send such information because there were no headers.
+
+An example response to the request above might have looked like the one below
+
+```
+HTTP/1.0 200 OK
+Content-Type: text/plain
+Content-Length: 137582
+Expires: Thu, 05 Dec 1997 16:00:00 GMT
+Last-Modified: Wed, 5 August 1996 15:55:28 GMT
+Server: Apache 0.84
+
+(response body)
+(connection closed)
+```
+
+At the very beginning of the response there is HTTP/1.0 (HTTP followed by the version number), then there is the status code 200 followed by the reason phrase (or description of the status code, if you will).
+
+In this newer version, request and response headers were still kept ASCII encoded, but the response body could have been of any type i.e. image, video, HTML, plain text or any other content type. So, now that the server could send any content type to the client, not so long after the introduction, the term “Hyper Text” in HTTP became a misnomer. HMTP or Hypermedia transfer protocol might have made more sense but, I guess, we are stuck with the name for life.
+
+One of the major drawbacks of HTTP/1.0 was that you couldn't have multiple requests per connection.
That is, whenever a client needs something from the server, it has to open a new TCP connection, and after that single request has been fulfilled, the connection is closed. And any subsequent requirement has to go over a new connection. Why is that bad? Well, let's assume that you visit a webpage having 10 images, 5 stylesheets and 5 javascript files, totalling 20 items that need to be fetched when the request to that webpage is made. Since the server closes the connection as soon as the request has been fulfilled, there will be a series of 20 separate connections where each of the items will be served one by one on its own separate connection. This large number of connections results in a serious performance hit, as requiring a new TCP connection imposes a significant performance penalty because of the three-way handshake followed by slow-start.
+
+### Three-way Handshake
+
+In its simplest form, the three-way handshake means that all TCP connections begin with a handshake in which the client and the server share a series of packets before starting to share the application data.
+
+- SYN - Client picks up a random number, let's say x, and sends it to the server.
+- SYN ACK - Server acknowledges the request by sending an ACK packet back to the client which is made up of a random number, let's say y, picked up by the server, and the number x+1, where x is the number that was sent by the client
+- ACK - Client increments the number y received from the server and sends an ACK packet back with the number y+1
+
+Once the three-way handshake is completed, the data sharing between the client and server may begin. It should be noted that the client may start sending the application data as soon as it dispatches the last ACK packet, but the server will still have to wait for the ACK packet to be received in order to fulfill the request.
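The sequence-number bookkeeping of the handshake just described can be sketched in a few lines of JavaScript. This is a toy simulation — no real sockets are involved, and the function and field names are made up for illustration — following the usual TCP convention where the final ACK carries y + 1:

```javascript
// Toy simulation of the TCP three-way handshake's sequence-number
// bookkeeping (no real networking involved). Each side checks that the
// other side acknowledged its number incremented by one.
function handshake(clientSeq, serverSeq) {
  const trace = [];

  // SYN: client picks x and sends it
  const syn = { seq: clientSeq };
  trace.push(`SYN seq=${syn.seq}`);

  // SYN-ACK: server picks y and acknowledges x + 1
  const synAck = { seq: serverSeq, ack: syn.seq + 1 };
  trace.push(`SYN-ACK seq=${synAck.seq} ack=${synAck.ack}`);

  // ACK: client acknowledges y + 1
  const ack = { ack: synAck.seq + 1 };
  trace.push(`ACK ack=${ack.ack}`);

  const established =
    synAck.ack === clientSeq + 1 && ack.ack === serverSeq + 1;
  return { established, trace };
}

const result = handshake(1000, 5000);
console.log(result.trace.join('\n'));
// SYN seq=1000
// SYN-ACK seq=5000 ack=1001
// ACK ack=5001
console.log(result.established); // true
```

Every new connection pays for this packet exchange before a single byte of application data flows, which is exactly why 20 separate connections hurt so much.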
![](http://i.imgur.com/uERG2G2.png)
+
+However, some implementations of HTTP/1.0 tried to overcome this issue by introducing a new header called Connection: keep-alive, which was meant to tell the server "Hey server, do not close this connection, I need it again". But still, it wasn't that widely supported and the problem persisted.
+
+Apart from being connectionless, HTTP is also a stateless protocol, i.e. the server doesn't maintain any information about the client, and so each request has to carry the information necessary for the server to fulfill it on its own, without any association with old requests. And this adds fuel to the fire: apart from the large number of connections that the client has to open, it also has to send some redundant data on the wire, causing increased bandwidth usage.
+
+#### HTTP/1.1 - 1999
+
+After merely 3 years of HTTP/1.0, the next version, i.e. HTTP/1.1, was released in 1999, which made a lot of improvements over its predecessor. The major improvements over HTTP/1.0 included
+
+- New HTTP methods were added, which introduced PUT, PATCH, HEAD, OPTIONS, DELETE
+
+- Hostname Identification In HTTP/1.0 the Host header wasn't required but HTTP/1.1 made it required.
+
+- Persistent Connections As discussed above, in HTTP/1.0 there was only one request per connection and the connection was closed as soon as the request was fulfilled, which resulted in an acute performance hit and latency problems. HTTP/1.1 introduced persistent connections, i.e. connections weren't closed by default and were kept open, which allowed multiple sequential requests. To close the connections, the header Connection: close had to be available on the request. Clients usually send this header in the last request to safely close the connection.
- Pipelining It also introduced support for pipelining, where the client could send multiple requests to the server without waiting for the response from the server on the same connection, and the server had to send the responses in the same sequence in which the requests were received. But how does the client know where the download of the first response completes and the content of the next response starts, you may ask! Well, to solve this, there must be a Content-Length header present which clients can use to identify where a response ends, so they can start waiting for the next response.
+
+>It should be noted that in order to benefit from persistent connections or pipelining, a Content-Length header must be available on the response, because this lets the client know when the transmission completes so it can send the next request (in the normal sequential way of sending requests) or start waiting for the next response (when pipelining is enabled).
+
+>But there was still an issue with this approach. And that is, what if the data is dynamic and the server cannot find the content length beforehand? Well, in that case, you really can't benefit from persistent connections, could you?! In order to solve this, HTTP/1.1 introduced chunked encoding. In such cases the server may omit Content-Length in favor of chunked encoding (more on it in a moment). However, if none of them are available, then the connection must be closed at the end of the request.
+
+- Chunked Transfers In the case of dynamic content, when the server cannot really find out the Content-Length when the transmission starts, it may start sending the content in pieces (chunk by chunk) and prefix each chunk with its size as it is sent. And when all of the chunks are sent, i.e. the whole transmission has completed, it sends an empty chunk, i.e. one whose size is set to zero, in order to signal to the client that the transmission has completed.
In order to notify the client about the chunked transfer, the server includes the header Transfer-Encoding: chunked
+
+- Unlike HTTP/1.0 which had Basic authentication only, HTTP/1.1 included digest and proxy authentication
+- Caching
+- Byte Ranges
+- Character sets
+- Language negotiation
+- Client cookies
+- Enhanced compression support
+- New status codes
+- ..and more
+
+I am not going to dwell on all the HTTP/1.1 features in this post as it is a topic in itself and you can already find a lot about it. One such document that I would recommend you to read is Key differences between HTTP/1.0 and HTTP/1.1, and here is the link to the original RFC for the overachievers.
+
+HTTP/1.1 was introduced in 1999 and it had been a standard for many years. Although it improved a lot over its predecessor, with the web changing every day, it started to show its age. Loading a web page these days is more resource-intensive than it ever was. A simple webpage these days has to open more than 30 connections. Well, HTTP/1.1 has persistent connections, then why so many connections? you say! The reason is, in HTTP/1.1 there can only be one outstanding request on a connection at any moment of time. HTTP/1.1 tried to fix this by introducing pipelining, but it didn't completely address the issue because of head-of-line blocking, where a slow or heavy request may block the requests behind it, and once a request gets stuck in a pipeline, the subsequent requests have to wait for it to be fulfilled. To overcome these shortcomings of HTTP/1.1, the developers started implementing workarounds, for example the use of spritesheets, encoded images in CSS, single humongous CSS/Javascript files, domain sharding etc.
+
+#### SPDY - 2009
+
+Google went ahead and started experimenting with alternative protocols to make the web faster and improve web security while reducing the latency of web pages. In 2009, they announced SPDY.
+
+>SPDY is a trademark of Google and isn't an acronym.
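Before moving on to SPDY, here is a rough sketch of what the chunked transfers described a few paragraphs above look like on the wire: each chunk is prefixed with its size in hexadecimal, and a zero-size chunk terminates the body. The helper names are made up for illustration, and this simplified version ignores trailers and binary payloads:

```javascript
// A rough sketch of HTTP/1.1 chunked transfer encoding: each chunk is
// prefixed with its size in hex, and a zero-size chunk marks the end of
// the body. Simplified: no trailers, ASCII bodies only.
const CRLF = '\r\n';

function encodeChunked(pieces) {
  let out = '';
  for (const piece of pieces) {
    out += piece.length.toString(16) + CRLF + piece + CRLF;
  }
  return out + '0' + CRLF + CRLF; // terminating zero-size chunk
}

function decodeChunked(encoded) {
  let body = '';
  let pos = 0;
  while (true) {
    const lineEnd = encoded.indexOf(CRLF, pos);
    const size = parseInt(encoded.slice(pos, lineEnd), 16);
    if (size === 0) break; // transmission complete
    body += encoded.slice(lineEnd + 2, lineEnd + 2 + size);
    pos = lineEnd + 2 + size + 2; // skip chunk data and trailing CRLF
  }
  return body;
}

const wire = encodeChunked(['Hello, ', 'chunked ', 'world!']);
console.log(JSON.stringify(wire));
// "7\r\nHello, \r\n8\r\nchunked \r\n6\r\nworld!\r\n0\r\n\r\n"
console.log(decodeChunked(wire)); // Hello, chunked world!
```

The key property is that the receiver never needs to know the total body length in advance — it just keeps reading size-prefixed chunks until it sees the zero-size one.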
It was seen that if we keep increasing the bandwidth, the network performance increases in the beginning, but a point comes when there is not much of a performance gain. But if you do the same with latency, i.e. if we keep dropping the latency, there is a constant performance gain. This was the core idea for performance gain behind SPDY: decrease the latency to increase the network performance.
+
+>For those who don't know the difference, latency is the delay i.e. how long it takes for data to travel between the source and destination (measured in milliseconds) and bandwidth is the amount of data transferred per second (bits per second).
+
+The features of SPDY included multiplexing, compression, prioritization, security etc. I am not going to get into the details of SPDY, as you will get the idea when we get into the nitty gritty of HTTP/2 in the next section; as I said, HTTP/2 is mostly inspired by SPDY.
+
+SPDY didn't really try to replace HTTP; it was a translation layer over HTTP which existed at the application layer and modified the request before sending it over to the wire. It started to become a de facto standard and the majority of browsers started implementing it.
+
+In 2015, Google didn't want to have two competing standards, and so they decided to merge it into HTTP while giving birth to HTTP/2 and deprecating SPDY.
+
+#### HTTP/2 - 2015
+
+By now, you must be convinced as to why we needed another revision of the HTTP protocol. HTTP/2 was designed for low latency transport of content. The key features or differences from the old version of HTTP/1.1 include
+
+- Binary instead of Textual
+- Multiplexing - Multiple asynchronous HTTP requests over a single connection
+- Header compression using HPACK
+- Server Push - Multiple responses for single request
+- Request Prioritization
+- Security
+
+![](http://i.imgur.com/S85j8gg.png)
+
+##### 1.
Binary Protocol
+
+HTTP/2 tends to address the issue of increased latency that existed in HTTP/1.x by making it a binary protocol. Being a binary protocol, it is easier to parse, but unlike HTTP/1.x it is no longer readable by the human eye. The major building blocks of HTTP/2 are Frames and Streams
+
+**Frames and Streams**
+
+HTTP messages are now composed of one or more frames. There is a HEADERS frame for the meta data and a DATA frame for the payload, and there exist several other types of frames (HEADERS, DATA, RST_STREAM, SETTINGS, PRIORITY etc) that you can check through the HTTP/2 specs.
+
+Every HTTP/2 request and response is given a unique stream ID and is divided into frames. Frames are nothing but binary pieces of data. A collection of frames is called a Stream. Each frame has a stream id that identifies the stream to which it belongs, and each frame has a common header. Also, apart from the stream ID being unique, it is worth mentioning that requests initiated by the client use odd-numbered stream IDs and responses from the server use even-numbered ones.
+
+Apart from HEADERS and DATA, another frame type that I think is worth mentioning here is RST_STREAM, which is a special frame type used to abort some stream, i.e. the client may send this frame to let the server know that it doesn't need the stream anymore. In HTTP/1.1 the only way to make the server stop sending the response to the client was closing the connection, which resulted in increased latency because a new connection had to be opened for any consecutive requests. While in HTTP/2, the client can use RST_STREAM to stop receiving a specific stream while the connection will still be open and the other streams will still be in play.
+
+##### 2.
Multiplexing
+
+Since HTTP/2 is now a binary protocol and, as I said above, uses frames and streams for requests and responses, once a TCP connection is opened, all the streams are sent asynchronously through the same connection without opening any additional connections. And in turn, the server responds in the same asynchronous way, i.e. the responses have no order and the client uses the assigned stream id to identify the stream to which a specific packet belongs. This also solves the head-of-line blocking issue that existed in HTTP/1.x, i.e. the client will not have to wait for the request that is taking time, and other requests will still be getting processed.
+
+##### 3. HPACK Header Compression
+
+It was part of a separate RFC which was specifically aimed at optimizing the sent headers. The essence of it is that when we are constantly accessing the server from the same client, there is a lot of redundant data that we are sending in the headers over and over, and sometimes there might be cookies increasing the header size, which results in bandwidth usage and increased latency. To overcome this, HTTP/2 introduced header compression.
+
+![](http://i.imgur.com/3IPWXvR.png)
+
+Unlike the request and response bodies, headers are not compressed in gzip or compress etc formats; instead, there is a different mechanism in place for header compression whereby literal values are encoded using Huffman coding and a headers table is maintained by the client and server, and both the client and server omit any repetitive headers (e.g. user agent etc) in the subsequent requests and reference them using the headers table maintained by both.
+
+While we are talking about headers, let me add here that the headers are still the same as in HTTP/1.1, except for the addition of some pseudo headers i.e. :method, :scheme, :authority and :path
+
+##### 4.
Server Push
+
+Server push is another tremendous feature of HTTP/2 where the server, knowing that the client is going to ask for a certain resource, can push it to the client without the client even asking for it. For example, let's say a browser loads a web page; it parses the whole page to find out the remote content that it has to load from the server and then sends subsequent requests to the server to get that content.
+
+Server push allows the server to decrease the roundtrips by pushing the data that it knows the client is going to demand. The way it is done is that the server sends a special frame called PUSH_PROMISE notifying the client: "Hey, I am about to send this resource to you! Do not ask me for it." The PUSH_PROMISE frame is associated with the stream that caused the push to happen and it contains the promised stream ID, i.e. the stream on which the server will send the resource to be pushed.
+
+##### 5. Request Prioritization
+
+A client can assign a priority to a stream by including the prioritization information in the HEADERS frame by which a stream is opened. At any other time, the client can send a PRIORITY frame to change the priority of a stream.
+
+Without any priority information, the server processes the requests asynchronously, i.e. without any order. If there is a priority assigned to a stream, then based on this prioritization information, the server decides how much of the resources need to be given to process which request.
+
+##### 6. Security
+
+There was extensive discussion on whether security (through TLS) should be made mandatory for HTTP/2 or not. In the end, it was decided not to make it mandatory. However, most vendors stated that they would only support HTTP/2 when it is used over TLS. So, although HTTP/2 doesn't require encryption by spec, it has kind of become mandatory by default anyway. With that out of the way, HTTP/2, when implemented over TLS, does impose some requirements, i.e.
TLS version 1.2 or higher must be used, there must be a certain level of minimum keysizes, ephemeral keys are required etc. + +HTTP/2 is here and it has already surpassed SPDY in adaption which is gradually increasing. HTTP/2 has alot to offer in terms of performance gain and it is about time we should start using it. + +For anyone interested in further details here is the [link to specs][1] and a [link demonstrating the performance benefits of][2] HTTP/2. For any questions or comments, use the comments section below. Also, while reading, if you find any blatant lies; do point them out. + +And that about wraps it up. Until next time! stay tuned. + +-------------------------------------------------------------------------------- + +via: http://kamranahmed.info/blog/2016/08/13/http-in-depth/?utm_source=webopsweekly&utm_medium=email + +作者:[Kamran Ahmed][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://github.com/kamranahmedse + +[1]: https://http2.github.io/http2-spec +[2]: http://www.http2demo.io/ + From 49c0a770e6e86596160a5ca2ed433034f09bf016 Mon Sep 17 00:00:00 2001 From: hkurj <663831938@qq.com> Date: Sat, 20 Aug 2016 21:26:01 +0800 Subject: [PATCH 446/471] translating --- sources/talk/20160809 7 reasons to love Vim.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20160809 7 reasons to love Vim.md b/sources/talk/20160809 7 reasons to love Vim.md index 868a1f5773..dc0caca838 100644 --- a/sources/talk/20160809 7 reasons to love Vim.md +++ b/sources/talk/20160809 7 reasons to love Vim.md @@ -1,3 +1,5 @@ +hkurj translating + 7 reasons to love Vim ==================== From b580c34f3b7a0784fe753cab9f925d7615c7f0eb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=99=88=E5=AE=B6=E5=90=AF?= Date: Sun, 21 Aug 2016 14:52:57 +0800 Subject: [PATCH 447/471] Translated by cposture (#4324) MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 75 * Translated by cposture * Translated by cposture * Translated by cposture * 应 erlinux 的要求,把未翻译完的文章移到 source * 应 erlinux 的要求,把未翻译完的文章移到 source,同时删除 translated文章 * Translating * Translated by cposture * 更新格式 * Translated by cposture --- .../20160820 Protocol Buffer Basics C++.md | 411 ++++++++++++++++++ 1 file changed, 411 insertions(+) create mode 100644 translated/tech/20160820 Protocol Buffer Basics C++.md diff --git a/translated/tech/20160820 Protocol Buffer Basics C++.md b/translated/tech/20160820 Protocol Buffer Basics C++.md new file mode 100644 index 0000000000..ed86014010 --- /dev/null +++ b/translated/tech/20160820 Protocol Buffer Basics C++.md @@ -0,0 +1,411 @@ +Protocol Buffer Basics: C++ +============================ + +这篇教程提供了一个面向 C++ 程序员、关于 `protocol buffers` 的基础介绍。通过创建一个简单的示例应用程序,它将向我们展示: + +* 在 `.proto` 文件中定义消息格式 +* 使用 `protocol buffer` 编译器 +* 使用 `C++ protocol buffer API` 读写消息 + +这不是一个关于使用 C++ protocol buffers 的全面指南。要获取更详细的信息,请参考 [Protocol Buffer Language Guide][1] 和 [Encoding Reference][2]。 + +### 为什么使用 Protocol Buffers + +我们接下来要使用的例子是一个非常简单的"地址簿"应用程序,它能从文件中读取联系人详细信息。地址簿中的每一个人都有一个名字,ID,邮件地址和联系电话。 + +如何序列化和获取结构化的数据?这里有几种解决方案: + +* 以二进制形式发送/接收原生的内存数据结构。通常,这是一种脆弱的方法,因为接收/读取代码的编译必须基于完全相同的内存布局、大小端等等。同时,当文件增加时,原始格式数据会随着与该格式相连的软件拷贝而迅速扩散,这将很难扩展文件格式。 + +* 你可以创造一种 `ad-hoc` 方法,将数据项编码为一个字符串——比如将 4 个整数编码为 "12:3:-23:67"。虽然它需要编写一次性的编码和解码代码且解码需要耗费小的运行时成本,但这是一种简单灵活的方法。这最适合编码非常简单的数据。 + +* 序列化数据为 `XML`。这种方法是非常吸引人的,因为 `XML` 是一种适合人阅读的格式,并且有为许多语言开发的库。如果你想与其他程序和项目共享数据,这可能是一种不错的选择。然而,众所周知,`XML` 是空间密集型的,且在编码和解码时,它对程序会造成巨大的性能损失。同时,使用 XML DOM 树被认为比操作一个类的简单字段更加复杂。 + +`Protocol buffers` 是针对这个问题的一种灵活、高效、自动化的解决方案。使用 `Protocol buffers`,你需要写一个 `.proto` 说明,用于描述你所希望存储的数据结构。利用 `.proto` 文件,protocol buffer 编译器可以创建一个类,用于实现自动化编码和解码高效的二进制格式的 protocol buffer 数据。产生的类提供了构造 `protocol buffer` 的字段的 getters 和 setters,并且作为一个单元,关注读写 `protocol buffer` 的细节。重要的是,`protocol buffer` 格式支持扩展格式,代码仍然可以读取以旧格式编码的数据。 + +### 在哪可以找到示例代码 + 
+示例代码被包含于源代码包,位于 "examples" 文件夹。在[这][4]下载代码。 + +### 定义你的协议格式 + +为了创建自己的地址簿应用程序,你需要从 `.proto` 开始。`.proto` 文件中的定义很简单:为你所需要序列化的数据结构添加一个消息(message),然后为消息中的每一个字段指定一个名字和类型。这里是定义你消息的 `.proto` 文件,`addressbook.proto`。 + +``` +package tutorial; + +message Person { + required string name = 1; + required int32 id = 2; + optional string email = 3; + + enum PhoneType { + MOBILE = 0; + HOME = 1; + WORK = 2; + } + + message PhoneNumber { + required string number = 1; + optional PhoneType type = 2 [default = HOME]; + } + + repeated PhoneNumber phone = 4; +} + +message AddressBook { + repeated Person person = 1; +} +``` + +如你所见,其语法类似于 C++ 或 Java。我们开始看看文件的每一部分内容做了什么。 + +`.proto` 文件以一个 package 声明开始,这可以避免不同项目的命名冲突。在 C++,你生成的类会被置于与 package 名字一样的命名空间。 + +下一步,你需要定义消息(message)。消息只是一个包含一系列类型字段的集合。大多标准简单数据类型是可以作为字段类型的,包括 `bool`、`int32`、`float`、`double` 和 `string`。你也可以通过使用其他消息类型作为字段类型,将更多的数据结构添加到你的消息中——在以上的示例,`Person` 消息包含了 `PhoneNumber` 消息,同时 `AddressBook` 消息包含 `Person` 消息。你甚至可以定义嵌套在其他消息内的消息类型——如你所见,`PhoneNumber` 类型定义于 `Person` 内部。如果你想要其中某一个字段拥有预定义值列表中的某个值,你也可以定义 `enum` 类型——这儿你想指定一个电话号码可以是 `MOBILE`、`HOME` 或 `WORK` 中的某一个。 + +每一个元素上的 “=1”、"=2" 标记确定了用于二进制编码的唯一"标签"(tag)。标签数字 1-15 的编码比更大的数字少需要一个字节,因此作为一种优化,你可以将这些标签用于经常使用或 repeated 元素,剩下 16 以及更高的标签用于非经常使用或 optional 元素。每一个 repeated 字段的元素需要重新编码标签数字,因此 repeated 字段对于这优化是一个特别好的候选者。 + +每一个字段必须使用下面的修饰符加以标注: + +* required:必须提供字段的值,否则消息会被认为是 "未初始化的"(uninitialized)。如果 `libprotobuf` 以 debug 模式编译,序列化未初始化的消息将引起一个断言失败。以优化形式构建,将会跳过检查,并且无论如何都会写入消息。然而,解析未初始化的消息总是会失败(通过 parse 方法返回 `false`)。除此之外,一个 required 字段的表现与 optional 字段完全一样。 + +* optional:字段可能会被设置,也可能不会。如果一个 optional 字段没被设置,它将使用默认值。对于简单类型,你可以指定你自己的默认值,正如例子中我们对电话号码的 `type` 一样,否则使用系统默认值:数字类型为 0、字符串为空字符串、布尔值为 false。对于嵌套消息,默认值总为消息的"默认实例"或"原型",它的所有字段都没被设置。调用 accessor 来获取一个没有显式设置的 optional(或 required) 字段的值总是返回字段的默认值。 + +* repeated:字段可以重复任意次数(包括 0)。repeated 值的顺序会被保存于 protocol buffer。可以将 repeated 字段想象为动态大小的数组。 + +你可以查找关于编写 `.proto` 文件的完整指导——包括所有可能的字段类型——在 [Protocol Buffer Language Guide][6]。不要在这里面查找与类继承相似的特性,因为 
protocol buffers 不会做这些。 + +> required 是永久性的,在把一个字段标识为 required 的时候,你应该特别小心。如果在某些情况下你不想写入或者发送一个 required 的字段,那么将该字段更改为 optional 可能会遇到问题——旧版本的读者(译者注:即读取、解析旧版本 Protocol Buffer 消息的一方)会认为不含该字段的消息是不完整的,从而有可能会拒绝解析。在这种情况下,你应该考虑编写特别针对于应用程序的、自定义的消息校验函数。Google 的一些工程师得出了一个结论:使用 required 弊多于利;他们更愿意使用 optional 和 repeated 而不是 required。当然,这个观点并不具有普遍性。 + +### 编译你的 Protocol Buffers + +既然你有了一个 `.proto`,那你需要做的下一件事就是生成一个将用于读写 `AddressBook` 消息的类(从而包括 `Person` 和 `PhoneNumber`)。为了做到这一点,你需要在你的 `.proto` 上运行 protocol buffer 编译器 `protoc`: + +1. 如果你没有安装编译器,请[下载这个包][4],并按照 README 中的指令进行安装。 +2. 现在运行编译器,指定源目录(你的应用程序源代码位于哪里——如果你没有提供任何值,将使用当前目录)、目标目录(你想要生成的代码放在哪里;常与 `$SRC_DIR` 相同),以及你的 `.proto` 路径。在此示例,你...: + +``` +protoc -I=$SRC_DIR --cpp_out=$DST_DIR $SRC_DIR/addressbook.proto +``` + +因为你想要 C++ 的类,所以你使用了 `--cpp_out` 选项——它也为其他支持的语言提供了类似选项。 + +在你指定的目标文件夹,将生成以下的文件: + +* `addressbook.pb.h`,声明你生成类的头文件。 +* `addressbook.pb.cc`,包含你的类的实现。 + +### Protocol Buffer API + +让我们看看生成的一些代码,了解一下编译器为你创建了什么类和函数。如果你查看 `tutorial.pb.h`,你可以看到你在 `tutorial.proto` 中指定的每一个消息都有一个对应的类。关注 `Person` 类,可以看到编译器为每个字段生成了读写函数(accessors)。例如,对于 `name`、`id`、`email` 和 `phone` 字段,有下面这些方法: + +```c++ +// name +inline bool has_name() const; +inline void clear_name(); +inline const ::std::string& name() const; +inline void set_name(const ::std::string& value); +inline void set_name(const char* value); +inline ::std::string* mutable_name(); + +// id +inline bool has_id() const; +inline void clear_id(); +inline int32_t id() const; +inline void set_id(int32_t value); + +// email +inline bool has_email() const; +inline void clear_email(); +inline const ::std::string& email() const; +inline void set_email(const ::std::string& value); +inline void set_email(const char* value); +inline ::std::string* mutable_email(); + +// phone +inline int phone_size() const; +inline void clear_phone(); +inline const ::google::protobuf::RepeatedPtrField< ::tutorial::Person_PhoneNumber >& phone() const; +inline ::google::protobuf::RepeatedPtrField<
 ::tutorial::Person_PhoneNumber >* mutable_phone(); +inline const ::tutorial::Person_PhoneNumber& phone(int index) const; +inline ::tutorial::Person_PhoneNumber* mutable_phone(int index); +inline ::tutorial::Person_PhoneNumber* add_phone(); +``` + +正如你所见,getter 的名字与字段的小写名字完全一样,并且 setter 方法以 set_ 开头。同时每个单一(singular)(required 或 optional)字段都有 `has_` 方法,该方法在字段被设置了值的情况下返回 true。最后,所有字段都有一个 `clear_` 方法,用以将字段重置回空(empty)状态。 + +数字 `id` 字段仅有上述的基本读写函数集合(accessors),而 `name` 和 `email` 字段有两个额外的方法,因为它们是字符串——一个是可以获得字符串直接指针的 `mutable_` getter,另一个为额外的 setter。注意,尽管 `email` 还没被设置(set),你也可以调用 `mutable_email`;因为 `email` 会被自动地初始化为空字符串。另外,如果本例中有一个单一的(required 或 optional)消息字段,它将只有 `mutable_` 方法,而没有 `set_` 方法。 + +repeated 字段也有一些特殊的方法——如果你看看 repeated `phone` 字段的方法,你可以看到: + +* 检查 repeated 字段的 `_size`(也就是说,这个 `Person` 关联的电话号码的个数) +* 使用下标取得特定的电话号码 +* 更新特定下标的电话号码 +* 添加新的电话号码到消息中,之后你便可以编辑。(repeated 标量类型有一个 `add_` 方法,用于传入新的值) + +为了获取 protocol 编译器为所有字段定义生成的方法的信息,可以查看 [C++ generated code reference][5]。 + +#### 枚举和嵌套类(Enums and Nested Classes) + +与 `.proto` 的枚举相对应,生成的代码包含了一个 `PhoneType` 枚举。你可以通过 `Person::PhoneType` 引用这个类型,通过 `Person::MOBILE`、`Person::HOME` 和 `Person::WORK` 引用它的值。(实现细节有点复杂,但是你无须了解它们而可以直接使用) + +编译器也生成了一个 `Person::PhoneNumber` 的嵌套类。如果你查看代码,你可以发现真正的类型为 `Person_PhoneNumber`,但它通过在 `Person` 内部使用 typedef 定义,使你可以把 `Person_PhoneNumber` 当成嵌套类。唯一有实际影响的情况是你想要在其他文件中前置声明该类——在 C++ 中你不能前置声明嵌套类,但是你可以前置声明 `Person_PhoneNumber`。 + +#### 标准的消息方法 + +每个消息类还包含了许多其他方法,用于检查和操作整个消息,包括: + +* `bool IsInitialized() const;`:检查是否所有 `required` 字段已经被设置。 +* `string DebugString() const;`:返回人类可读的消息表示,对 debug 特别有用。 +* `void CopyFrom(const Person& from);`:使用给定的值重写消息。 +* `void Clear();`:将所有元素重置回空(empty)状态。 + +上面这些方法以及下一节要讲的 I/O 方法实现了被所有 C++ protocol buffer 类共享的消息(Message)接口。为了获取更多信息,请查看 [complete API documentation for Message][7]。 + +#### 解析和序列化(Parsing and Serialization) + +最后,所有 protocol buffer 类都有读写你选定类型消息的方法,这些方法使用了特定的 protocol buffer [二进制格式][8]。这些方法包括: + +* `bool SerializeToString(string* output)
const;`:序列化消息,并将消息字节数据存储在给定的字符串中。注意,字节数据是二进制格式的,而不是文本格式;我们只是将 `string` 类用作一个方便的容器。 +* `bool ParseFromString(const string& data);`:从给定的字符串解析消息。 +* `bool SerializeToOstream(ostream* output) const;`:将消息写到给定的 C++ `ostream`。 +* `bool ParseFromIstream(istream* input);`:从给定的 C++ `istream` 解析消息。 + +这些只是解析和序列化所提供的选项中的一部分。再次说明,完整的列表可以查看 `Message API reference`。 + +> Protocol Buffers 与面向对象设计:Protocol buffer 类通常只是纯粹的数据存储器(像 C++ 中的结构体);它们在对象模型中并不是一等公民。如果你想向生成的 protocol buffer 类中添加更丰富的行为,最好的方法就是在应用程序中对它进行封装。如果你无权控制 .proto 文件的设计的话,封装 protocol buffers 也是一个好主意(例如,你从另一个项目中重用一个 .proto 文件)。在那种情况下,你可以用封装类来设计接口,以更好地适应你的应用程序的特定环境:隐藏一些数据和方法,暴露一些便于使用的函数,等等。但是你绝对不要通过继承生成的类来添加行为。这样做的话,会破坏其内部机制,并且不是一个好的面向对象的实践。 + +### 写消息(Writing A Message) + +现在我们尝试使用 protocol buffer 类。你的地址簿程序想要做的第一件事是将个人详细信息写入到地址簿文件。为了做到这一点,你需要创建、填充 protocol buffer 类实例,并且将它们写入到一个输出流(output stream)。 + +这里的程序可以从文件读取 `AddressBook`,根据用户输入,将新 `Person` 添加到 `AddressBook`,并且再次将新的 `AddressBook` 写回文件。这部分直接调用或引用 protocol buffer 类的代码会高亮显示。 + +```c++ +#include <iostream> +#include <fstream> +#include <string> +#include "addressbook.pb.h" +using namespace std; + +// This function fills in a Person message based on user input. +void PromptForAddress(tutorial::Person* person) { + cout << "Enter person ID number: "; + int id; + cin >> id; + person->set_id(id); + cin.ignore(256, '\n'); + + cout << "Enter name: "; + getline(cin, *person->mutable_name()); + + cout << "Enter email address (blank for none): "; + string email; + getline(cin, email); + if (!email.empty()) { + person->set_email(email); + } + + while (true) { + cout << "Enter a phone number (or leave blank to finish): "; + string number; + getline(cin, number); + if (number.empty()) { + break; + } + + tutorial::Person::PhoneNumber* phone_number = person->add_phone(); + phone_number->set_number(number); + + cout << "Is this a mobile, home, or work phone?
"; + string type; + getline(cin, type); + if (type == "mobile") { + phone_number->set_type(tutorial::Person::MOBILE); + } else if (type == "home") { + phone_number->set_type(tutorial::Person::HOME); + } else if (type == "work") { + phone_number->set_type(tutorial::Person::WORK); + } else { + cout << "Unknown phone type. Using default." << endl; + } + } +} + +// Main function: Reads the entire address book from a file, +// adds one person based on user input, then writes it back out to the same +// file. +int main(int argc, char* argv[]) { + // Verify that the version of the library that we linked against is + // compatible with the version of the headers we compiled against. + GOOGLE_PROTOBUF_VERIFY_VERSION; + + if (argc != 2) { + cerr << "Usage: " << argv[0] << " ADDRESS_BOOK_FILE" << endl; + return -1; + } + + tutorial::AddressBook address_book; + + { + // Read the existing address book. + fstream input(argv[1], ios::in | ios::binary); + if (!input) { + cout << argv[1] << ": File not found. Creating a new file." << endl; + } else if (!address_book.ParseFromIstream(&input)) { + cerr << "Failed to parse address book." << endl; + return -1; + } + } + + // Add an address. + PromptForAddress(address_book.add_person()); + + { + // Write the new address book back to disk. + fstream output(argv[1], ios::out | ios::trunc | ios::binary); + if (!address_book.SerializeToOstream(&output)) { + cerr << "Failed to write address book." << endl; + return -1; + } + } + + // Optional: Delete all global objects allocated by libprotobuf. 
+ google::protobuf::ShutdownProtobufLibrary(); + + return 0; +} +``` + +注意 `GOOGLE_PROTOBUF_VERIFY_VERSION` 宏。在使用 C++ Protocol Buffer 库之前执行该宏是一种好的实践——虽然不是严格必须的。它可以保证避免不小心链接到一个与编译的头文件版本不兼容的库版本。如果被检查出来版本不匹配,程序将会终止。注意,每个 `.pb.cc` 文件在初始化时会自动调用这个宏。 + +同时注意在程序最后调用 `ShutdownProtobufLibrary()`。它用于释放 Protocol Buffer 库申请的所有全局对象。对大部分程序,这不是必须的,因为进程反正要退出,OS 会负责释放进程的所有内存。然而,如果你使用了内存泄漏检测工具,工具要求全部对象都要释放,或者你正在写一个库,该库可能会被一个进程多次加载和卸载,那么你可能需要强制 Protocol Buffer 清除所有东西。 + +### 读取消息 + +当然,如果你无法从地址簿中获取任何信息,那么这个地址簿就没多大用处!这个示例读取上面例子创建的文件,并打印文件里的所有内容。 + +```c++ +#include <iostream> +#include <fstream> +#include <string> +#include "addressbook.pb.h" +using namespace std; + +// Iterates through all people in the AddressBook and prints info about them. +void ListPeople(const tutorial::AddressBook& address_book) { + for (int i = 0; i < address_book.person_size(); i++) { + const tutorial::Person& person = address_book.person(i); + + cout << "Person ID: " << person.id() << endl; + cout << " Name: " << person.name() << endl; + if (person.has_email()) { + cout << " E-mail address: " << person.email() << endl; + } + + for (int j = 0; j < person.phone_size(); j++) { + const tutorial::Person::PhoneNumber& phone_number = person.phone(j); + + switch (phone_number.type()) { + case tutorial::Person::MOBILE: + cout << " Mobile phone #: "; + break; + case tutorial::Person::HOME: + cout << " Home phone #: "; + break; + case tutorial::Person::WORK: + cout << " Work phone #: "; + break; + } + cout << phone_number.number() << endl; + } + } +} + +// Main function: Reads the entire address book from a file and prints all +// the information inside. +int main(int argc, char* argv[]) { + // Verify that the version of the library that we linked against is + // compatible with the version of the headers we compiled against. + GOOGLE_PROTOBUF_VERIFY_VERSION; + + if (argc != 2) { + cerr << "Usage: " << argv[0] << " ADDRESS_BOOK_FILE" << endl; + return -1; + } + + tutorial::AddressBook address_book; + + { + // Read the existing address book.
+ fstream input(argv[1], ios::in | ios::binary); + if (!address_book.ParseFromIstream(&input)) { + cerr << "Failed to parse address book." << endl; + return -1; + } + } + + ListPeople(address_book); + + // Optional: Delete all global objects allocated by libprotobuf. + google::protobuf::ShutdownProtobufLibrary(); + + return 0; +} +``` + +### 扩展 Protocol Buffer + +在你发布了使用 protocol buffer 的代码之后,毫无疑问,你早晚会想要 "改善" + protocol buffer 的定义。如果你想要新的 buffers 向后兼容,并且老的 buffers 向前兼容——几乎可以肯定你希望如此——这里有一些你需要遵守的规则。在新版本的 protocol buffer 中: + + * 你绝不可以修改任何已存在字段的标签数字 + * 你绝不可以添加或删除任何 required 字段 + * 你可以删除 optional 或 repeated 字段 + * 你可以添加新的 optional 或 repeated 字段,但是你必须使用新的标签数字(也就是说,标签数字在 protocol buffer 中从未使用过,甚至不能是已删除字段的标签数字)。 + + (上面的规则有一些[例外情况][9],但它们很少用到。) + + 如果你能遵守这些规则,旧代码则可以欢快地读取新的消息,并且简单地忽略所有新的字段。对于旧代码来说,被删除的 optional 字段将会简单地赋予默认值,被删除的 `repeated` 字段会为空。新代码显然可以读取旧消息。然而,请记住新的 optional 字段不会呈现在旧消息中,因此你需要显式地使用 `has_` 检查它们是否被设置,或者在 `.proto` 文件中,在标签数字后使用 `[default = value]` 提供一个合理的默认值。如果一个 optional 元素没有指定默认值,它将会使用类型特定的默认值:对于字符串,默认值为空字符串;对于布尔值,默认值为 false;对于数字类型,默认值为 0。注意,如果你添加一个新的 repeated 字段,新代码将无法辨别它是被新代码留空(left empty)了,还是从未被旧代码设置过,因为 repeated 字段没有 `has_` 标志。 + +### 优化技巧 + +C++ Protocol Buffer 库已极度优化过了。但是,恰当的用法能够更多地提高性能。这里是一些技巧,可以帮你从库中挤压出最后一点速度: + +* 尽可能复用消息对象。即使它们被清除掉,消息也会尽量保留已分配的内存,以备重用。因此,如果我们正在处理许多相同类型或一系列相似结构的消息,一个好的办法是重用相同的消息对象,从而减少内存分配的负担。但是,随着时间的流逝,对象可能会膨胀变大,尤其是当你的消息尺寸(译者注:各消息内容不同,有些消息内容多一些,有些消息内容少一些)不同的时候,或者你偶尔创建了一个比平常大很多的消息的时候。你应该自己通过调用 [SpaceUsed][10] 方法监测消息对象的大小,并在它太大的时候删除它。 + +* 对于在多线程中分配大量小对象的情况,你的操作系统内存分配器可能优化得不够好。你可以尝试使用 google 的 [tcmalloc][11]。 + +### 高级用法 + +Protocol Buffers 绝不仅用于简单的数据存取以及序列化。请阅读 [C++ API reference][12] 来看看你还能用它来做什么。 + +protocol 消息类所提供的一个关键特性就是反射。你不需要编写针对一个特殊的消息类型的代码,就可以遍历一个消息的字段并操作它们的值。反射的一个有用用途是将 protocol 消息与其他编码格式互相转换,比如 XML 或 JSON。反射的一个更高级的用法可能就是可以找出两个相同类型的消息之间的区别,或者开发某种 "协议消息的正则表达式",利用它你可以对某种消息内容进行匹配。只要你发挥你的想像力,就有可能将 Protocol Buffers 应用到比你一开始所预期的更广泛的问题范围上。 + +反射是由 [Message::Reflection interface][13] 提供的。 +
+-------------------------------------------------------------------------------- + +via: https://developers.google.com/protocol-buffers/docs/cpptutorial + +作者:[Google][a] +译者:[cposture](https://github.com/cposture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://developers.google.com/protocol-buffers/docs/cpptutorial +[1]: https://developers.google.com/protocol-buffers/docs/proto +[2]: https://developers.google.com/protocol-buffers/docs/encoding +[3]: https://developers.google.com/protocol-buffers/docs/downloads +[4]: https://developers.google.com/protocol-buffers/docs/downloads.html +[5]: https://developers.google.com/protocol-buffers/docs/reference/cpp-generated +[6]: https://developers.google.com/protocol-buffers/docs/proto +[7]: https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.message.html#Message +[8]: https://developers.google.com/protocol-buffers/docs/encoding +[9]: https://developers.google.com/protocol-buffers/docs/proto#updating +[10]: https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.message.html#Message.SpaceUsed.details +[11]: http://code.google.com/p/google-perftools/ +[12]: https://developers.google.com/protocol-buffers/docs/reference/cpp/index.html +[13]: https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.message.html#Message.Reflection From 98cffc134cb3536fc5b465c9886351d59d1eee14 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 21 Aug 2016 18:41:29 +0800 Subject: [PATCH 448/471] PUB:20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ZenMoore @LiBrad @WangYueScream @LemonDemo 基本上不错,有些地方需要再细心些。 --- ...ramming or Source Code Editors on Linux.md | 322 ++++++++++++++++++ ...ramming or Source Code Editors on Linux.md | 302 ---------------- 2 files 
changed, 322 insertions(+), 302 deletions(-) create mode 100644 published/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md delete mode 100644 translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md diff --git a/published/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md b/published/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md new file mode 100644 index 0000000000..9b0b74e62e --- /dev/null +++ b/published/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md @@ -0,0 +1,322 @@ +17 个 Linux 下用于 C/C++ 的最好的 IDE /编辑器 +======================= + +C++,一个众所周知的 C 语言的扩展,是一个优秀的、强大的、通用编程语言,它能够提供现代化的、通用的编程功能,可以用于开发包括视频游戏、搜索引擎、其他计算机软件乃至操作系统等在内的各种大型应用。 + +C++,提供高度可靠性的同时还能够允许操作底层内存来满足更高级的编程要求。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-IDE-Editors.png) + +虽然已经有了一些供程序员用来写 C/C++ 代码的文本编辑器,但 IDE 可以为轻松、完美的编程提供综合的环境和组件。 + +在这篇文章里,我们会向你展示一些可以在 Linux 平台上找到的用于 C++ 或者其他编程语言编程的最好的 IDE。  + +### 1. 用于 C/C++ 开发的 Netbeans + +Netbeans 是一个自由而开源的、流行的跨平台 IDE ,可用于 C/C++ 以及其他编程语言,可以使用由社区开发的插件展现了其完全的扩展性。 + +它包含了用于 C/C++ 开发的项目类型和模版,并且你可以使用静态和动态函数库来构建应用程序。此外,你可以利用现有的代码去创造你的工程,并且也可以通过拖放的方式导入二进制文件来从头构建应用。 + +让我们来看看关于它的特性: + +- C/C++ 编辑器很好的整合了多线程的 [GNU GDB 调试工具][1] +- 支持代码协助 +- 支持 C++11 标准 +- 在里面创建和运行 C/C++ 测试程序 +- 支持 QT 工具包 +- 支持将已编译的应用程序自动打包到 .tar,.zip 等归档文件 +- 支持多个编译器,例如: GNU、Clang/LLVM、Cygwin、Oracle Solaris Studio 和 MinGW +- 支持远程开发 +- 文件导航 +- 源代码检查 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/NetBeans-IDE.png) + +主页: + +### 2. 
Code::Blocks + +Code::Blocks 是一个免费的、具有高度扩展性的、并且可以配置的跨平台 C++ IDE,它为用户提供了必备而典范的功能。它具有一致的界面和体验。 + +最重要的是,你可以通过用户开发的插件扩展它的功能,一些插件是随同 Code::Blocks 发布的,而另外一些则不是,它们由 Code::Block 开发团队之外的个人用户所编写的。 + +其功能分为编译器、调试器、界面功能,它们包括: + +- 支持多种编译器如 GCC、clang、Borland C++ 5.5、digital mars 等等 +- 非常快,不需要 makefile +- 支持多个目标平台的项目 +- 支持将项目组合起来的工作空间 +- GNU GDB 接口 +- 支持完整的断点功能,包括代码断点,数据断点,断点条件等等 +- 显示本地函数的符号和参数 +- 用户内存导出和语法高亮显示 +- 可自定义、可扩展的界面以及许多其他的的功能,包括那些用户开发的插件添加功能 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/CodeBlocks-IDE-for-Linux.png) + +主页: + +### 3. Eclipse CDT (C/C++ Development Tooling) + +Eclipse 在编程界是一款著名的、开源的、跨平台的 IDE。它给用户提供了一个很棒的界面,并支持拖拽功能以方便界面元素的布置。 + +Eclipse CDT 是一个基于 Eclipse 主平台的项目,它提供了一个完整功能的 C/C++ IDE,并具有以下功能: + +- 支持项目创建 +- 管理各种工具链的构建 +- 标准的 make 构建 +- 源代码导航 +- 一些知识工具,如调用图、类型分级结构,内置浏览器,宏定义浏览器 +- 支持语法高亮的代码编辑器 +- 支持代码折叠和超链接导航 +- 代码重构与代码生成 +- 可视化调试存储器、寄存器的工具 +- 反汇编查看器以及更多功能 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Eclipse-IDE-for-Linux.png) + +主页: + +### 4.CodeLite IDE + +CodeLite 也是一款为 C/C++、JavaScript(Node.js)和 PHP 编程专门设计打造的自由而开源的、跨平台的 IDE。 + +它的一些主要特点包括: + +- 代码补完,提供了两个代码补完引擎 +- 支持多种编译器,包括 GCC、clang/VC++ +- 以代码词汇的方式显示错误 +- 构建选项卡中的错误消息可点击 +- 支持下一代 LLDB 调试器 +- 支持 GDB +- 支持重构 +- 代码导航 +- 使用内置的 SFTP 进行远程开发 +- 源代码控制插件 +- 开发基于 wxWidgets 应用的 RAD(快速应用程序开发)工具,以及更多的特性 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Codelite-IDE.png) + +主页: + +### 5. Bluefish 编辑器 + +Bluefish 不仅仅是一个一般的编辑器,它是一个轻量级的、快捷的编辑器,为程序员提供了如开发网站、编写脚本和软件代码的 IDE 特性。它支持多平台,可以在 Linux、Mac OSX、FreeBSD、OpenBSD、Solaris 和 Windows 上运行,同时支持包括 C/C++ 在内的众多编程语言。 + +下面列出的是它众多功能的一部分: + +- 多文档界面 +- 支持递归打开文件,基于文件名通配模式或者内容模式 +- 提供一个非常强大的搜索和替换功能 +- 代码片段边栏 +- 支持整合个人的外部过滤器,可使用命令如 awk,sed,sort 以及自定义构建脚本组成(过滤器的)管道文件 +- 支持全屏编辑 +- 网站上传和下载 +- 支持多种编码等许多其他功能 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/BlueFish-IDE-Editor-for-Linux.png) + +主页: + +### 6. 
Brackets 代码编辑器 + +Brackets 是一个现代化风格的、开源的文本编辑器,专为 Web 设计与开发打造。它可以通过插件进行高度扩展,因此 C/C++ 程序员通过安装 C/C++/Objective-C 包来使用它来开发,这个包用来在辅助 C/C++ 代码编写的同时提供了 IDE 之类的特性。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Brackets-Code-Editor-for-Linux.png) + +主页: + +### 7. Atom 代码编辑器 + +Atom 也是一个现代化风格、开源的多平台文本编辑器,它能运行在 Linux、Windows 或是 Mac OS X 平台。它的定制可深入底层,用户可以自定义它,以便满足各种编写代码的需求。 + +它功能完整,主要的功能包括: + +- 内置了包管理器 +- 智能的自动补完 +- 内置文件浏览器 +- 查找、替换以及其他更多的功能 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Atom-Code-Editor-for-Linux.png) + +主页: + +安装指南: + +### 8. Sublime Text 编辑器 + +Sublime Text 是一个完善的、跨平台的文本编辑器,可用于代码、标记语言和一般文字。它可以用来编写 C/C++ 代码,并且提供了非常棒的用户界面。 + +它的功能列表包括: + +- 多重选择 +- 按模式搜索命令 +- 抵达任何一处的功能 +- 免打扰模式 +- 窗口分割 +- 支持项目之间快速的切换 +- 高度可定制 +- 支持基于 Python 的 API 插件以及其他特性 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Sublime-Code-Editor-for-Linux.png) + +主页: + +安装指南: + +### 9. JetBrains CLion + +JetBrains CLion 是一个收费的、强大的跨平台 C/C++ IDE。它是一个完全整合的 C/C++ 程序开发环境,并提供 Cmake 项目模型、一个嵌入式终端窗口和一个主要以键盘操作的编码环境。 + +它还提供了一个智能而现代化的编辑器,具有许多令人激动的功能,提供了理想的编码环境,这些功能包括: + +- 除了 C/C++ 还支持其他多种语言 +- 在符号声明和上下文中轻松导航 +- 代码生成和重构 +- 可定制的编辑器 +- 即时代码分析 +- 集成的代码调试器 +- 支持 Git、Subversion、Mercurial、CVS、Perforcevia(通过插件)和 TFS +- 无缝集成了 Google 测试框架 +- 通过 Vim 仿真插件支持 Vim 编辑体验 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/JetBains-CLion-IDE.png) + +主页: + +### 10. 微软的 Visual Studio Code 编辑器 + +Visual Studio 是一个功能丰富的、完全整合的、跨平台开发环境,运行在 Linux、Windows 和 Mac OS X 上。 最近它向 Linux 用户开源了,它重新定义了代码编辑这件事,为用户提供了在 Windows、Android、iOS 和 Web 等多个平台开发不同应用所需的一切工具。 + +它功能完备,功能分类为应用程序开发、应用生命周期管理、扩展和集成特性。你可以从 Visual Studio 官网阅读全面的功能列表。 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Visual-Studio-Code-Editor.png) + +主页: + +### 11. 
KDevelop 是另一个自由而开源的跨平台 IDE,能够运行在 Linux、Solaris、FreeBSD、Windows、Mac OS X 和其他类 Unix 操作系统上。它基于 KDevPlatform、KDE 和 Qt 库。KDevelop 可以通过插件高度扩展,功能丰富且具有以下显著特色: + +- 支持基于 Clang 的 C/C++ 插件 +- 支持 KDE 4 配置迁移 +- 支持调用二进制编辑器 Oketa +- 支持众多视图插件下的差异行编辑 +- 支持 Grep 视图,使用窗口小部件节省垂直空间等 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/KDevelop-IDE-Editor.png) + +主页: + +### 12. Geany IDE + +Geany 是一个免费的、快速的、轻量级跨平台 IDE,只需要很少的依赖包就可以工作,并且不依赖于特定的流行 Linux 桌面环境,比如 GNOME 和 KDE。它需要 GTK2 库实现功能。 + +它的特性包括以下列出的内容: + +- 支持语法高亮显示 +- 代码折叠 +- 调用提示 +- 符号名自动补完 +- 符号列表 +- 代码导航 +- 一个简单的项目管理工具 +- 可以编译并运行用户代码的内置系统 +- 可以通过插件扩展 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Geany-IDE-for-Linux.png) + +主页: + +### 13. Anjuta DevStudio + +Anjuta DevStudio 是一个简单,强大的 GNOME 界面的软件开发工作室,支持包括 C/C++ 在内的几种编程语言。 + +它提供了先进的编程工具,比如项目管理、GUI 设计、交互式调试器、应用程序向导、源代码编辑器、版本控制等。此外,除了以上特点,Anjuta DevStudio 也有其他很多不错的 IDE 功能,包括: + +- 简单的用户界面 +- 可通过插件扩展 +- 整合了 Glade 用于所见即所得的 UI 开发 +- 项目向导和模板 +- 整合了 GDB 调试器 +- 内置文件管理器 +- 使用 DevHelp 提供上下文敏感的编程辅助 +- 源代码编辑器支持语法高亮显示、智能缩进、自动缩进、代码折叠/隐藏、文本缩放等 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Anjuta-DevStudio-for-Linux.png) + +主页: + +### 14. GNAT Programming Studio + +GNAT Programming Studio 是一个免费的、易于使用的 IDE,其设计目的是统一开发人员与其代码和软件之间的交互。 + +它通过高亮程序的重要部分和逻辑从而提升源代码导航体验,打造了一个理想的编程环境。它的设计目标是为你带来更舒适的编程体验,使用户能够从头开始开发全面的系统。 + +它丰富的特性包括以下这些: + +- 直观的用户界面 +- 对开发者的友好性 +- 支持多种编程语言,跨平台 +- 灵活的 MDI(多文档界面) +- 高度可定制 +- 使用喜欢的工具获得全面的可扩展性 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/GNAT-Programming-Studio.jpg) + +主页: + +### 15. Qt Creator + +这是一款收费的、跨平台的 IDE,用于创建连接设备、用户界面和应用程序。Qt Creator 让用户能做的不只是为应用编写代码,还能进行更多的创造。 + +它可以用来创建移动和桌面应用程序,也可以连接到嵌入式设备。 + +它的优点包含以下几点: + +- 复杂的代码编辑器 +- 支持版本控制 +- 项目和构建管理工具 +- 支持多屏幕和多平台,易于构建目标之间的切换等等 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Qt-Creator.png) + +主页: + +### 16.
Emacs 编辑器 + +Emacs 是一个自由的、强大的、可高度扩展的、可定制的、跨平台文本编辑器,你可以在 Linux、Solaris、FreeBSD、NetBSD、OpenBSD、Windows 和 Mac OS X 这些系统中使用该编辑器。 + +Emacs 的核心也是一个 Emacs Lisp 的解释器,Emacs Lisp 是一种基于 Lisp 的编程语言。在撰写本文时,GNU Emacs 的最新版本是 24.5,Emacs 的基本功能包括: + +- 内容识别编辑模式 +- Unicode 的完全支持 +- 可使用 GUI 或 Emacs Lisp 代码高度定制 +- 下载和安装扩展的打包系统 +- 超出了正常文本编辑的功能生态系统,包括项目策划、邮件、日历和新闻阅读器等 +- 完整的内置文档,以及用户指南等等 + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Emacs-Editor.png) + +主页: https://www.gnu.org/software/emacs/ + +### 17. VI/VIM 编辑器 + +Vim,一款 VI 编辑器的改进版本,是一款自由的、强大的、流行的并且高度可配置的文本编辑器。它为有效率地文本编辑而生,并且为 Unix/Linux 使用者提供了激动人心的编辑器特性,因此,它对于撰写和编辑 C/C++ 代码也是一个好的选择。 + +总的来说,与传统的文本编辑器相比,IDE 为编程提供了更多的便利,因此使用它们是一个很好的选择。它们带有激动人心的特征并且提供了一个综合性的开发环境,有时候程序员不得不陷入对最好的 C/C++ IDE 的选择。 + +在互联网上你还可以找到许多 IDE 来下载,但不妨试试我们推荐的这几款,可以帮助你尽快找到哪一款是你需要的。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/best-linux-ide-editors-source-code-editors/ + +作者:[Aaron Kili][a] +译者:[ZenMoore](https://github.com/ZenMoore) ,[LiBrad](https://github.com/LiBrad) ,[WangYueScream](https://github.com/WangYueScream) ,[LemonDemo](https://github.com/LemonDemo) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/debug-source-code-in-linux-using-gdb/ diff --git a/translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md b/translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md deleted file mode 100644 index cb991d80c9..0000000000 --- a/translated/tech/20160630 18 Best IDEs for C+C++ Programming or Source Code Editors on Linux.md +++ /dev/null @@ -1,302 +0,0 @@ - -# Linux 下 18 个最好的 C/C++ 编程或者源代码编辑的集成开发环境 - -C++,一个众所周知的 C 语言的扩展,是一个优秀的,强大的,通用的编程语言,能够提供现代化的和通用的编程功能,可以用于开发大型应用,包括视频游戏,搜索引擎,其他计算机软件甚至操作系统。 - -C++,提供高度可靠性的同时还能够允许操作底层内存来满足更高级的编程要求。 - 
-![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-IDE-Editors.png) - -虽然已经有了一些供程序员用来写 C/C++ 代码的文本编辑器,但 IDE 可以为简单、完美的编程提供全面的工具和组件。 - -在这篇文章里,我们会向你展示一些可以在 Linux 平台上找到的用于 C++ 或者其他编程语言编程的最好的 IDE。  - -### 1. Netbeans for C/C++ Development - -Netbeans 是一个免费的、开源和流行跨平台的 IDE 并且用于 C/C++ 以及其他编程语言。由社区提供成熟的插件使它可以充分利用其扩展性。 - -它包含项目类型和 C/C++ 模版,并且你可以使用静态和动态函数库来搭建应用程序。此外,你可以利用现有的代码去创造你的工程,并且使用它的特点将二进制文件拖放进文件夹到开发应用程序的地方。 - -让我们来看看关于它的特性: - -- C/C++ 编辑器是很好的集成多元化的 GNU GDB 调试工具 -- 支持代码协助 -- 支持 C++11 标准 -- 在里面创建和运行 C/C++ 测试程序 -- 支持 QT 工具包 -- 支持自动由已编译的应用程序到 .tar,.zip 和其他更多的归档文件封包 -- 支持多个编译器,例如: GNU,Clang/LLVM,Cygwin,Oracle Solaris Studio 和 MinGW -- 支持远程开发 -- 文件导航 -- 源代码检查 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/NetBeans-IDE.png) ->Visit Homepage: - -### 2. Code::Blocks -Code::Blocks 是一个免费的、具有高度扩展性的、并且可以配置的跨平台 C++ IDE, 它为用户提供最需要的最理想形象。它给用户一种协调的界面感觉。 - -并且最为重要的是,你可以通过用户开发的插件扩展它的功能,这些插件中一部分是 Code::Blocks发布的然而大多数不是这样,而是由非 Code::Block 开发团队成员的个人用户所写的。 - -其特色分为编译器、调试器、接口特性,它们包括: - -- 支持多种编译器如 GCC, clang, Borland C++ 5.5, digital mars 等等 -- 非常快,不需要 makefiles -- Multi-target 工程 -- 支持组合项目的工作空间 -- GNU GDB 接口 -- 支持完整的断点,包括代码的断点,数据断点,断点条件等等 -- 显示本地函数符号和参数 -- 自定义储存仓库和语法高亮显示 -- 可自定义、可扩展的接口以及许多其他的的功能,包括那些用户开发的插件 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/CodeBlocks-IDE-for-Linux.png) ->Visit Homepage: - -### 3. 
Eclipse CDT(C/C++ Development Tooling) -Eclipse 在编程界是一款著名的、开源的、跨平台的 IDE。为了方便界面元素的布置,它给用户提供了一个很棒的支持拖拽的界面。 - -Eclipse CDT 项目是基于 Eclipse 平台,它给 C/C++ 提供一个全新的 IDE 并具有以下功能: - -- 支持项目创建 -- 管理并建立各种工具链 -- 标准的构建 -- 源导航 -- 一些知识工具,如调用图,类型分级结构,内置浏览程序,定义宏浏览程序 -- 支持语法高亮的代码编辑器 -- 支持折叠和超链接导航 -- 代码重构与代码生成 -- 可视化的调试工具,如存储器、寄存器 -- 反汇编浏览区以及更多功能 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Eclipse-IDE-for-Linux.png) ->Visit Homepage: - -### 4.CodeLite IDE -CodeLite 也是一款为 C/C++,JavaScript(Node.js) 和 PHP 编程专门设计并建造的免费,开源,跨平台的 IDE。 - -它的一些主要特点包括: - -- 代码自动完成,内置了两个帮助代码自动完成的引擎 -- 支持多种编译器, 包括 GCC,clang/VC++ -- 显示代码词汇的错误 -- 通过构建选项卡点击error -- 支持下一代LLDB调试器 -- 支持 GDB -- 支持重构 -- 代码导航 -- 使用内置的 SFTP 远程开发 -- 源代码控制插件 -- RAD(快速应用程序开发)工具开发 wxWidgets-based 应用以及更多的特性 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Codelite-IDE.png) ->Visit Homepage: - -### 6. Bluefish Editor - -Bluefish 不仅仅是一个常规的编辑器,它是一个轻量级的,快速的编辑器,为程序员提供了 IDE 的特性如开发网站,编写脚本和软件代码。它支持多平台,在 Linux,Mac OSX,FreeBSD,OpenBSD,Solaris 和 Windows上运行,同时支持众多编程语言包括 C/C++。 - -下面列出的是它众多功能的一部分: - -- 多文档界面 -- 支持递归打开文件,基于文件名通配模式或者内容模式 -- 提供一个非常强大的搜索和替换功能 -- 侧边导航摘要 -- 支持个人的集成外部过滤器,使用命令如awk,sed,sort加上自定义构建脚本组成的管道文件 -- 支持全屏编辑 -- 网站上传和下载 -- 支持多种编码等许多其他功能 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/BlueFish-IDE-Editor-for-Linux.png) ->Visit Homepage: - -### 7. Brackets Code Editor - -Brackets 是一个现代的开源的文本编辑器,专为 Web 设计与开发打造。它可以通过插件进行高度扩展,因此 C/C++ 程序员通过安装 C/C++/Objective-C 包来使用它,这个包用来在改进 C/C++ 代码编写的同时提供 IDE 等特性。 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Brackets-Code-Editor-for-Linux.png) ->Visit Homepage: - -### 8. Atom Code Editor - -Atom 也是一个现代化风格、开源的多平台文本编辑器,它能运行在 Linux, Windows 或是 Mac OS X 平台。它的底层库可以删除,因此用户可以自定义编译器,以便满足各种编写代码的需求。 - -它功能完整,主要的功能包括: - -- 内置包管理器 -- 快速的自动完成 -- 内置文件浏览器 -- 查找、替换以及其他更多的功能 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Atom-Code-Editor-for-Linux.png) ->Visit Homepage: >https://atom.io/> ->Installation Instructions: - -### 9. 
Sublime Text Editor - -Sublime Text 是一个完善的,跨平台的文本编辑器,用于写码、标记和撰文。它可以用来编写 C/C++ 代码,并且提供非常棒的用户界面。 - -它的功能列表包括: - -- 多重选择 -- 命令调色 -- 重要的 Goto 功能 -- 免打扰模式 -- 分离编辑器 -- 支持项目之间快速的切换 -- 高度可制定 -- 支持基于 Python 的 API 插件以及其他特性 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Sublime-Code-Editor-for-Linux.png) ->Visit Homepage: ->Installation Instructions: - -### 10. JetBrains CLion - -JetBrains CLion 是一个收费的,强大的并且跨平台的 C/C++ IDE。它是一个全面的 C/C++ 程序集成开发环境,并提供 Cmake 项目模型,一个嵌入式终端的窗口和一个编码的定向键入通道。 - -它还提供了一个漂亮的现代化的编辑器,和许多令人激动的功能,提供理想的编码环境,这些功能包括: - -- 除了 C/C++ 还支持其他多种语言 -- 便利的符号声明或上下文导航 -- 代码生成和重构 -- 可制定的编辑 -- 关于代码的快速分享 -- 一个集成的代码调试器 -- 支持 Git,Subversion,Mercurial,CVS,Perforcevia p(lugin) 和 TFS -- 无缝连接 Google 测试框架 -- 通过 Vim 仿真插件支持 Vim 文本编辑器 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/JetBains-CLion-IDE.png) ->Visit Homepage: - -### 11. Microsoft’s Visual Studio Code Editor - -Visual Studio 是一个丰富的,完全集成的,跨平台的开发环境,运行在 Linux,Windows 和 Mac OS X。 最近它向 Linux 用户开源,同时它重新定义了代码编辑,提供了用户所需的所有工具用来开发不同平台下的应用,包括的平台有 Windows,Android,iOS 和 Web。 - -它功能完整,功能分类成应用程序开发、应用生命周期管理、扩展和集成特性。你可以从 Visual Studio 官网阅读全面的功能列表。 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Visual-Studio-Code-Editor.png) ->Visit Homepage: - -### 12. KDevelop - -KDevelop 是另一个免费,开源和跨平台的 IDE,能够运行在 Linux,Solaris,FreeBSD,Windows,Mac OS X 和其他类 Unix 操作系统。它基于 KDevPlatform,KDE 和 Qt 库。KDevelop 可以通过插件高度扩展,功能丰富并具有以下显著特色: - -- 支持基于 Clang 的 C/C++ 插件 -- 支持 KDE 4 配置迁移 -- Oketa 插件支持的复兴 -- 支持众多视图插件下的不同行编辑 -- 支持 Grep 视图,使用窗口小部件节省垂直空间等 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/KDevelop-IDE-Editor.png) ->Visit Homepage: - -### 13. Geany IDE - -Geany 是一个免费,快速,轻量级和跨平台的 IDE, 操作很少有依赖性,独立运行在流行的 Linux 桌面环境下,比如 GNOME 和 KDE. 它需要 GTK2 库实现功能。 - -它的特性包括以下列出的内容: - -- 支持语法高亮显示 -- 代码折叠 -- 调用提示 -- 符号名自动完成 -- 符号列表 -- 代码提示 -- 一个简单的项目管理工具 -- 内置系统编译并运行用户的代码 -- 可以通过插件扩展 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Geany-IDE-for-Linux.png) ->Visit Homepage: - -### 14. 
Ajunta DeveStudio - -Ajunta DevStudio 是一个简单,强大的 GNOME 界面的软件开发工作室,支持包括 C/C++ 在内的众多编程语言。 - -它提供了先进的编程工具,比如项目管理,GUI 设计,交互式调试器,应用程序向导,源代码编辑,版本控制等。此外,除了以上特点,Ajunta DeveStudio 也有其他很多不错的 IDE 特色,包括这些: - -- 简单的用户界面 -- 通过插件扩展 -- 针对 WYSIWYG UI 开发的 Glade 集成 -- 工程的向导和模板 -- 集成的 GDB 调试器 -- 内置文件管理器 -- 使用 DevHelp 集成提供上下文敏感编程辅助 -- 源码编辑支持语法高亮显示,智能缩进,自动缩进,代码折叠/隐藏,文本缩放等 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Anjuta-DevStudio-for-Linux.png) ->Visit Homepage: - -### 15. The GNAT Programming Studio - -The GNAT Programming Studio 是一个免费的易于使用的用来设计与开发的 IDE,可以统一开发人员与他/她的代码,软件之间的交互。 - -通过促进源导航的同时强调一个程序的重要部分和想法打造理想的编程`。`它也会带给你高水平舒适的编程体验,使用户能够开发综合性的系统。 - -它丰富的特性包括以下这些: - -- 直观的用户界面 -- 对开发者的友好性 -- 支持多种编程语言,跨平台 -- 灵活的 MDI (多文档界面) -- 高度可定制 -- 首选工具的完全可扩展性 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/GNAT-Programming-Studio.jpg) ->Visit Homepage: - -### 16. Qt Creator - -这是一款非免费的,跨平台的 IDE, 用于创建连接设备、用户界面和应用程序。Qt Creator 可以让用户比实际应用编码有更多的创新。 - -它可以用来创建移动和桌面应用程序,以及嵌入式设备连接。 - -它的优点包含以下几点: - -- 复杂的代码编辑器 -- 支持版本控制 -- 项目构造管理工具 -- 支持多屏幕和多平台,易于构建目标之间的切换等等 - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Qt-Creator.png) ->Visit Homepage: - - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Emacs-Editor.png) ->Visit Homepage: https://www.gnu.org/software/emacs/ - -### 17. Emacs Editor - -Emacs是一个免费的,强大的,可大幅度扩展的、可定制的、跨平台的文本编辑器,你可以在Linux,Solaris,FreeBSD,NetBSD,OpenBSD,Windows和Mac OS X这些系统中使用该编辑器. - -Emacs 的核心也是一种 Emacs Lisp 的解释器,Emascs Lisp 是一种基于 Lisp 的编程语言。在撰写本文时,GNU Emacs的最新版本是24.5,Emacs的基本功能包括: - -- 内容识别编辑模式 -- 完全支持 Unicode -- 高度可定制地使用 GUI 或 Emacs Lisp 代码 -- 一种用于下载和安装扩展的打包系统 -- 系统本身的功能超出正常文本编辑的功能,包括项目策划、邮件、日历和新闻阅读器等 -- 一个完整的内置文档,以及用户指南等等 - -### 18. 
VI/VIM Editor - -Vim,一款 VI 编辑器的改进版本,是一款免费的、强大的、流行的并且高度可配置的文本编辑器。它为有效率地文本编辑而生,并且为 Unix/Linux 使用者提供了激动人心的编辑器特性,因此,它对于撰写和编辑 C/C++ 代码也是一个好的选择。 - -总的来说,与传统的文本编辑器相比,IDE 为编程提供了更多的便利,因此使用它们是一个很好的选择。它们带有激动人心的特征并且提供了一个综合性的开发环境,有时候程序员不得不陷入对最好的 C/C++ IDE 的选择。 - -在互联网上你还可以找到许多 IDE 来下载,但不妨试试我们推荐的这几款,可以帮助你尽快找到哪一款是你需要的。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/best-linux-ide-editors-source-code-editors/ - -作者:[Aaron Kili][a] -译者:[ZenMoore](https://github.com/ZenMoore) [LiBrad](https://github.com/LiBrad) [WangYueScream](https://github.com/WangYueScream) [LemonDemo](https://github.com/LemonDemo) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/debug-source-code-in-linux-using-gdb/ From 7516d88c776ebcdf52b39198156c3f7aaf565231 Mon Sep 17 00:00:00 2001 From: NearTan Date: Mon, 22 Aug 2016 01:17:07 +0800 Subject: [PATCH 449/471] translated --- ...asks and Application Deployments Over SSH.md | 254 ----------------- ...asks and Application Deployments Over SSH.md | 257 ++++++++++++++++++ 2 files changed, 257 insertions(+), 254 deletions(-) delete mode 100644 sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md create mode 100644 translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md diff --git a/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md b/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md deleted file mode 100644 index f45d1e9702..0000000000 --- a/sources/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md +++ /dev/null @@ -1,254 +0,0 @@ -NearTan 认领 - -Fabric 
– Automate Your Linux Administration Tasks and Application Deployments Over SSH -=========================== - -When it comes to managing remote machines and deployment of applications, there are several command line tools out there in existence though many have a common problem of lack of detailed documentation. - -In this guide, we shall cover the steps to introduce and get started on how to use fabric to improve on administering groups of servers. - -![](http://www.tecmint.com/wp-content/uploads/2015/11/Automate-Linux-Administration-Tasks-Using-Fabric.png) ->Automate Linux Administration Tasks Using Fabric - -Fabric is a python library and a powerful command line tool for performing system administration tasks such as executing SSH commands on multiple machines and application deployment. - -Having a working knowledge of Python can be helpful when using Fabric, but may certainly not be necessary. - -Reasons why you should choose fabric over other alternatives: - -- Simplicity -- It is well-documented -- You don’t need to learn another language if you’re already a python guy. -- Easy to install and use. -- It is fast in its operations. -- It supports parallel remote execution. - -### How to Install Fabric Automation Tool in Linux - -An important characteristic about fabric is that the remote machines which you need to administer only need to have the standard OpenSSH server installed. You only need certain requirements installed on the server from which you are administering the remote servers before you can get started. - -#### Requirements: - -- Python 2.5+ with the development headers -- Python-setuptools and pip (optional, but preferred) gcc - -Fabric is easily installed using pip (highly recommended), but you may also prefer to choose your default package manager `yum`, `dnf` or `apt-get` to install fabric package, typically called fabric or python-fabric. 
- -For RHEL/CentOS based distributions, you must have [EPEL repository][1] installed and enabled on the system to install fabric package. - -``` -# yum install fabric [On RedHat based systems] -# dnf install fabric [On Fedora 22+ versions] -``` - -For Debian and it’s derivatives such as Ubuntu and Mint users can simply do apt-get to install the fabric package as shown: - -``` -# apt-get install fabric -``` - -If you want to install development version of fabric, you may use pip to grab the most recent master branch. - -``` -# yum install python-pip [On RedHat based systems] -# dnf install python-pip [On Fedora 22+ versions] -# apt-get install python-pip [On Debian based systems] -``` - -Once pip has been installed successfully, you may use pip to grab the latest version of fabric as shown: - -``` -# pip install fabric -``` - -### How to Use Fabric to Automate Linux Administration Tasks - -So lets get started on how you can use Fabric. During the installation process, a Python script called fab was added to a directory in your path. The `fab` script does all the work when using fabric. - -#### Executing commands on the local Linux machine - -By convention, you need to start by creating a Python file called fabfile.py using your favorite editor. Remember you can give this file a different name as you wish but you will need to specify the file path as follows: - -``` -# fabric --fabfile /path/to/the/file.py -``` - -Fabric uses `fabfile.py` to execute tasks. The fabfile should be in the same directory where you run the Fabric tool. - -Example 1: Let’s create a basic `Hello World` first. - -``` -# vi fabfile.py -``` - -Add these lines of code in the file. - -``` -def hello(): -print('Hello world, Tecmint community') -``` - -Save the file and run the command below. - -``` -# fab hello -``` - -![](http://www.tecmint.com/wp-content/uploads/2015/11/Create-Fabric-Fab-Python-File.gif) ->Fabric Tool Usage - -And paste the following lines of code in the file. - -``` -#! 
/usr/bin/env python -from fabric.api import local -def uptime(): -local('uptime') -``` - -Then save the file and run the following command: - -``` -# fab uptime -``` - -![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Uptime.gif) ->Fabric: Check System Uptime - -#### Executing commands on remote Linux machines to automate tasks - -The Fabric API uses a configuration dictionary which is Python’s equivalent of an associative array known as `env`, which stores values that control what Fabric does. - -The `env.hosts` is a list of servers on which you want run Fabric tasks. If your network is 192.168.0.0 and wish to manage host 192.168.0.2 and 192.168.0.6 with your fabfile, you could configure the env.hosts as follows: - -``` -#!/usr/bin/env python -from fabric.api import env -env.hosts = [ '192.168.0.2', '192.168.0.6' ] -``` - -The above line of code only specify the hosts on which you will run Fabric tasks but do nothing more. Therefore you can define some tasks, Fabric provides a set of functions which you can use to interact with your remote machines. - -Although there are many functions, the most commonly used are: - -- run – which runs a shell command on a remote machine. -- local – which runs command on the local machine. -- sudo – which runs a shell command on a remote machine, with root privileges. -- Get – which downloads one or more files from a remote machine. -- Put – which uploads one or more files to a remote machine. - -Example 3: To echo a message on multiple machines create a fabfile.py such as the one below. 
- -``` -#!/usr/bin/env python -from fabric.api import env, run -env.hosts = ['192.168.0.2','192.168.0.6'] -def echo(): -run("echo -n 'Hello, you are tuned to Tecmint ' ") -``` - -To execute the tasks, run the following command: - -``` -# fab echo -``` - -![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabrick-Automate-Linux-Tasks.gif) ->Fabric: Automate Linux Tasks on Remote Linux - -Example 4: You can improve the fabfile.py which you created earlier on to execute the uptime command on the local machine, so that it runs the uptime command and also checks disk usage using the df command on multiple machines as follows: - -``` -#!/usr/bin/env python -from fabric.api import env, run -env.hosts = ['192.168.0.2','192.168.0.6'] -def uptime(): -run('uptime') -def disk_space(): -run('df -h') -``` - -Save the file and run the following command: - -``` -# fab uptime -# fab disk_space -``` - -![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Run-Multiple-Commands-on-Multiple-Linux-Systems.gif) ->Fabric: Automate Tasks on Multiple Linux Systems - -#### Automatically Deploy LAMP Stack on Remote Linux Server - -Example 4: Let us look at an example to deploy LAMP (Linux, Apache, MySQL/MariaDB and PHP) server on a remote Linux server. - -We shall write a function that will allow LAMP to be installed remotely using root privileges. 
- -##### For RHEL/CentOS and Fedora - -``` -#!/usr/bin/env python -from fabric.api import env, run -env.hosts = ['192.168.0.2','192.168.0.6'] -def deploy_lamp(): -run ("yum install -y httpd mariadb-server php php-mysql") -``` - -##### For Debian/Ubuntu and Linux Mint - -``` -#!/usr/bin/env python -from fabric.api import env, run -env.hosts = ['192.168.0.2','192.168.0.6'] -def deploy_lamp(): -sudo("apt-get install -q apache2 mysql-server libapache2-mod-php5 php5-mysql") -``` - -Save the file and run the following command: - -``` -# fab deploy_lamp -``` - -Note: Due to large output, it’s not possible for us to create a screencast (animated gif) for this example. - -Now you can able to [automate Linux server management tasks][2] using Fabric and its features and examples given above… - -#### Some Useful Options to Use with Fabric - -- You can run fab –help to view help information and a long list of available command line options. -- An important option is –fabfile=PATH that helps you to specify a different python module file to import other then fabfile.py. -- To specify a username to use when connecting to remote hosts, use the –user=USER option. -- To use password for authentication and/or sudo, use the –password=PASSWORD option. -- To print detailed info about command NAME, use –display=NAME option. -- To view formats use –list option, choices: short, normal, nested, use the –list-format=FORMAT option. -- To print list of possible commands and exit, include the –list option. -- You can specify the location of config file to use by using the –config=PATH option. -- To display a colored error output, use –colorize-errors. -- To view the program’s version number and exit, use the –version option. - -### Summary - -Fabric is a powerful tool and is well documented and provides easy usage for newbies. You can read the full documentation to get more understanding of it. 
If you have any information to add or incase of any errors you encounter during installation and usage, you can leave a comment and we shall find ways to fix them. - -Reference: [Fabric documentation][3] - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/automating-linux-system-administration-tasks/ - -作者:[Aaron Kili ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[2]: http://www.tecmint.com/use-ansible-playbooks-to-automate-complex-tasks-on-multiple-linux-servers/ -[3]: http://docs.fabfile.org/en/1.4.0/usage/env.html - - - - diff --git a/translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md b/translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md new file mode 100644 index 0000000000..ba0dc2d2a1 --- /dev/null +++ b/translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md @@ -0,0 +1,257 @@ + +Fabric - 通过 SSH 来自动化管理 Linux 任务和布署应用 +=========================== + +当要管理远程机器或者要布署应用时,你会有多种命令行工具的选择,但是其中很多工具都缺少详细的使用文档。 + +在这篇教程中,我们将会一步一步地向你介绍如何使用 fabric 来帮助你更好得管理多台服务器。 + +![](http://www.tecmint.com/wp-content/uploads/2015/11/Automate-Linux-Administration-Tasks-Using-Fabric.png) +>使用 Fabric 来自动化地管理 Linux 任务 + +Fabric 是一个用 Python 编写的命令行工具库,它可以帮助系统管理员高效地执行某些任务,比如 SSH 到多台机器上执行某些命令,远程布署应用等。 + +在使用之前,如果你拥有使用 Python 的经验能帮你更好的使用 Fabric。当然,如果没有那也不影响使用 Fabric。 + +我们为什么要选择 Fabric + +- 简单 +- 拥有详细的文档 +- 如果你会 Python,不用增加学习其他语言的成本 +- 即插即用 +- 使用便捷 +- 支持多台机器并行操作 + +### 在 Linux 上如何安装 Fabric + +Fabric 有一个特点就是要远程操作的机器只需要支持标准的 OpenSSH 服务即可。只要保证在机器上安装并开启了这个服务就能使用 Fabric 来管理机器。 + +#### 依赖: + +- Python 2.5 
或更新版本,以及对应的开发组件 +- Python-setuptools 和 pip(可选,但是非常推荐)gcc + +我们推荐使用 pip 安装 Fabric,你可以使用系统自带的包管理器如 `yum`, `dnf` 或 `apt-get` 来安装,包名一般是 `fabric` 或 `python-fabric`。 + +如果是基于 RHEL/CentOS 的发行版本的系统,你可以使用系统自带的 [EPEL 源][1] 来安装 fabric + +``` +# yum install fabric [适用于基于 RedHat 系统] +# dnf install fabric [适用于 Fedora 22+ 版本] +``` + +如果你是 Debian 或者其派生的系统如 Ubuntu 和 Mint 用户,你可以使用 apt-gte 来安装,如下所示: + +``` +# apt-get install fabric +``` + +如果你要安装开发版的 Fabric,你应该安装 master 分支上最新版本的 pip + +``` +# yum install python-pip [适用于基于 RedHat 系统] +# dnf install python-pip [适用于Fedora 22+ 版本] +# apt-get install python-pip [适用于基于 Debian 系统] +``` + +安装好 pip 后,你可以使用 pip 获取最新版本的 Fabric + +``` +# pip install fabric +``` + +### 如何使用 Fabric 来自动化管理 Linux 任务 + +现在我们来开始使用 Fabric,在之前的安装的过程中,Fabric Python 脚本已经被增加到我们的系统目录,当我们要运行 Fabric 时输入 `fab` 命令即可。 + +#### 在本地 Linux 机器上运行命令行 + +按照惯例,先用你最喜欢的编辑器创建一个名为 fabfile.py 的 Python 脚本。你可以使用其他名字来命名脚本,但是需要先声明这个脚本的路径,如下所示: + +``` +# fabric --fabfile /path/to/the/file.py +``` + +如果你要使用 Fabric 来执行 `fabfile.py` 里面的任务,文件必需在你执行 Fabric 命令下的同一目录 + +例 1:创建入门的 `Hello World` 任务: + +``` +# vi fabfile.py +``` + +在文件内输入如下内容: + +``` +def hello(): + print('Hello world, Tecmint community') +``` + +保存文件并执行以下命令: + +``` +# fab hello +``` + +![](http://www.tecmint.com/wp-content/uploads/2015/11/Create-Fabric-Fab-Python-File.gif) +>Fabric 工具使用说明 + +让我们看看这个例子,fabfile.py 文件在本机执行了 uptime 这个命令 + +例子2:新建一个名为 fabfile.py 的文件并打开: + +粘贴以下代码至文件(to 校对:选题时的文章缺少了上面这三句话,译者从原网站补充至本文,请校对注意) + +``` +#! 
/usr/bin/env python +from fabric.api import local + +def uptime(): + local('uptime') +``` + +保存文件并执行以下命令: + +``` +# fab uptime +``` + +![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Uptime.gif) +>Fabric: 检查系统运行时间 + +#### 在远程 Linux 机器上运行命令行来执行自动化任务 + +Fabric API 提供了一个名为 `env` 的关联数组(Python 中的词典)来储存 Fabric 要控制的机器的相关信息。 + +`env.hosts` 是一个 list 用来存储你要执行 Fabric 任务的机器,如果你的 IP 地址是 192.168.0.0,想要用 Fabric 来管理地址为 192.168.0.2 和 192.168.0.6 的机器,需要的配置如下所示: + +``` +#!/usr/bin/env python +from fabric.api import env + env.hosts = [ '192.168.0.2', '192.168.0.6' ] +``` + +上面这几行代码只是声明了你要执行 Fabric 任务的主机地址,但是实际上并没有执行任何任务,下面我们就来定义一些任务。Fabric 提供了一系列可以与远程服务器交互的方法。 + +Fabric 提供了众多的方法,这里列出几个经常会用到的: + +- run - 可以在远程机器上运行的 shell 命令 +- local - 可以在本机上运行的 shell 命令 +- sudo 使用 root 权限运行的 shell 命令 +- get - 从远程机器上下载文件 +- put - 上传文件到远程机器 + +例子 3:在多台机子上输出信息,新建新的 fabfile.py 文件如下所示 + +``` +#!/usr/bin/env python +from fabric.api import env, run +env.hosts = ['192.168.0.2','192.168.0.6'] +def echo(): + run("echo -n 'Hello, you are tuned to Tecmint ' ") +``` + +运行以下命令执行 Fabric 任务 + +``` +# fab echo +``` + +![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabrick-Automate-Linux-Tasks.gif) +>fabric: 自动在远程 Linux 机器上执行任务 + +例子 4:你可以继续改进之前创建的执行 uptime 任务的 fabfile.py 文件,让它可以在多台服务器上检查其磁盘使用情况,如下所示: + +``` +#!/usr/bin/env python +from fabric.api import env, run +env.hosts = ['192.168.0.2','192.168.0.6'] +def uptime(): + run('uptime') +def disk_space(): + run('df -h') +``` + +保存并执行以下命令 + +``` +# fab uptime +# fab disk_space +``` + +![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Run-Multiple-Commands-on-Multiple-Linux-Systems.gif) +>Fabric:自动在多台服务器上执行任务 + +#### 在远程服务器上自动化布署 LAMP + +Example 4(5?):我们来尝试一下在远程服务器上布署 LAMP(Linux, Apache, MySQL/MariaDB and PHP) + + +我们在远程安装 LAMP 时应该使用 root 权限。 + +##### 在 RHEL/CentOS 或 Fedora 上 + +``` +#!/usr/bin/env python +from fabric.api import env, run +env.hosts = ['192.168.0.2','192.168.0.6'] +def deploy_lamp(): + run ("yum install -y httpd 
mariadb-server php php-mysql") +``` + +##### 在 Debian/Ubuntu 或 Linux Mint 上 + +``` +#!/usr/bin/env python +from fabric.api import env, run +env.hosts = ['192.168.0.2','192.168.0.6'] +def deploy_lamp(): + sudo("apt-get install -q apache2 mysql-server libapache2-mod-php5 php5-mysql") +``` + +保存并执行以下命令: + +``` +# fab deploy_lamp +``` + +注:由于安装时会输出大量信息,这个例子我们就不提供屏幕 gif 图了 + +现在你可以使用上文例子所示的功能来[自动化的管理 Linux 服务器上的任务][2]了。 + +#### 一些 Fabric 有用的选项 + + +- 你可以运行 `fab -help` 输出帮助信息,里面列出了所有可以使用的命令行信息 +- `–fabfile=PATH` 选项可以让你定义除了名为 fabfile.py 之外的模块 +- 如果你想用指定的用户名登录远程主机,请使用 `-user=USER` 选项 +- 如果你需要密码进行验证或者 sudo 提权,请使用 `–password=PASSWORD` 选项 +- 如果需要输出某个命令行的详细信息,请使用 `–display=NAME` 选项 +- 使用 `--list-format=FORMAT` 选项能格式化 `-list` 选项输出的信息,可选的有 short, normal, nested +- 使用 `--list` 输出所有可用的任务(to 校对:建议与上一条交换位置) +- `--config=PATH` 选项可以指定读取配置文件的地址 +- `-–colorize-errors` 能显示彩色的错误输出信息 +- `--version` 输出当前版本 +(to 校对:以上根据原文格式加了代码标签,并且根据 Fabric 文档修正了命令使用) + +### Summary + +Fabric 是一个强大并且拥有充足文档的工具,对于新手来说也能很快上手,阅读提供的文档能帮助你更好的了解这个它。如果你在安装和使用 Fabric 时发现什么问题可以在评论区留言,我们会及时回复。 + +引文:[Fabric 文档][3] + + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/automating-linux-system-administration-tasks/ + +作者:[Aaron Kili ][a] +译者:[NearTan](https://github.com/NearTan) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[2]: http://www.tecmint.com/use-ansible-playbooks-to-automate-complex-tasks-on-multiple-linux-servers/ +[3]: http://docs.fabfile.org/en/1.4.0/usage/env.html From 5eb1b1b806e25b1753ea04d7e8c9b95d631c7b7c Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 22 Aug 2016 09:56:26 +0800 Subject: [PATCH 450/471] =?UTF-8?q?PUB:20151118=20Fabric=20=E2=80=93=20Aut?= =?UTF-8?q?omate=20Your=20Linux=20Administration=20Tasks=20and=20Applicati?= 
=?UTF-8?q?on=20Deployments=20Over=20SSH?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @NearTan 翻译的很用心。个别语句理解有点偏差。 --- ...asks and Application Deployments Over SSH.md | 96 +++++++++---------- 1 file changed, 48 insertions(+), 48 deletions(-) rename {translated/tech => published}/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md (60%) diff --git a/translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md b/published/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md similarity index 60% rename from translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md rename to published/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md index ba0dc2d2a1..fdd53956c4 100644 --- a/translated/tech/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md +++ b/published/20151118 Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH.md @@ -1,24 +1,24 @@ - Fabric - 通过 SSH 来自动化管理 Linux 任务和布署应用 =========================== -当要管理远程机器或者要布署应用时,你会有多种命令行工具的选择,但是其中很多工具都缺少详细的使用文档。 +当要管理远程机器或者要布署应用时,虽然你有多种命令行工具可以选择,但是其中很多工具都缺少详细的使用文档。 在这篇教程中,我们将会一步一步地向你介绍如何使用 fabric 来帮助你更好得管理多台服务器。 ![](http://www.tecmint.com/wp-content/uploads/2015/11/Automate-Linux-Administration-Tasks-Using-Fabric.png) ->使用 Fabric 来自动化地管理 Linux 任务 -Fabric 是一个用 Python 编写的命令行工具库,它可以帮助系统管理员高效地执行某些任务,比如 SSH 到多台机器上执行某些命令,远程布署应用等。 +*使用 Fabric 来自动化地管理 Linux 任务* + +Fabric 是一个用 Python 编写的命令行工具库,它可以帮助系统管理员高效地执行某些任务,比如通过 SSH 到多台机器上执行某些命令,远程布署应用等。 在使用之前,如果你拥有使用 Python 的经验能帮你更好的使用 Fabric。当然,如果没有那也不影响使用 Fabric。 -我们为什么要选择 Fabric +我们为什么要选择 Fabric: - 简单 -- 拥有详细的文档 +- 完备的文档 - 如果你会 Python,不用增加学习其他语言的成本 -- 即插即用 +- 易于安装使用 - 使用便捷 - 支持多台机器并行操作 @@ -26,27 +26,27 @@ Fabric 是一个用 
Python 编写的命令行工具库,它可以帮助系统 Fabric 有一个特点就是要远程操作的机器只需要支持标准的 OpenSSH 服务即可。只要保证在机器上安装并开启了这个服务就能使用 Fabric 来管理机器。 -#### 依赖: +#### 依赖 - Python 2.5 或更新版本,以及对应的开发组件 - Python-setuptools 和 pip(可选,但是非常推荐)gcc -我们推荐使用 pip 安装 Fabric,你可以使用系统自带的包管理器如 `yum`, `dnf` 或 `apt-get` 来安装,包名一般是 `fabric` 或 `python-fabric`。 +我们推荐使用 pip 安装 Fabric,但是你也可以使用系统自带的包管理器如 `yum`, `dnf` 或 `apt-get` 来安装,包名一般是 `fabric` 或 `python-fabric`。 -如果是基于 RHEL/CentOS 的发行版本的系统,你可以使用系统自带的 [EPEL 源][1] 来安装 fabric +如果是基于 RHEL/CentOS 的发行版本的系统,你可以使用系统自带的 [EPEL 源][1] 来安装 fabric。 ``` # yum install fabric [适用于基于 RedHat 系统] # dnf install fabric [适用于 Fedora 22+ 版本] ``` -如果你是 Debian 或者其派生的系统如 Ubuntu 和 Mint 用户,你可以使用 apt-gte 来安装,如下所示: +如果你是 Debian 或者其派生的系统如 Ubuntu 和 Mint 的用户,你可以使用 apt-get 来安装,如下所示: ``` # apt-get install fabric ``` -如果你要安装开发版的 Fabric,你应该安装 master 分支上最新版本的 pip +如果你要安装开发版的 Fabric,你需要安装 pip 来安装 master 分支上最新版本。 ``` # yum install python-pip [适用于基于 RedHat 系统] @@ -54,7 +54,7 @@ Fabric 有一个特点就是要远程操作的机器只需要支持标准的 Ope # apt-get install python-pip [适用于基于 Debian 系统] ``` -安装好 pip 后,你可以使用 pip 获取最新版本的 Fabric +安装好 pip 后,你可以使用 pip 获取最新版本的 Fabric。 ``` # pip install fabric @@ -62,19 +62,19 @@ Fabric 有一个特点就是要远程操作的机器只需要支持标准的 Ope ### 如何使用 Fabric 来自动化管理 Linux 任务 -现在我们来开始使用 Fabric,在之前的安装的过程中,Fabric Python 脚本已经被增加到我们的系统目录,当我们要运行 Fabric 时输入 `fab` 命令即可。 +现在我们来开始使用 Fabric,在之前的安装的过程中,Fabric Python 脚本已经被放到我们的系统目录,当我们要运行 Fabric 时输入 `fab` 命令即可。 #### 在本地 Linux 机器上运行命令行 -按照惯例,先用你最喜欢的编辑器创建一个名为 fabfile.py 的 Python 脚本。你可以使用其他名字来命名脚本,但是需要先声明这个脚本的路径,如下所示: +按照惯例,先用你喜欢的编辑器创建一个名为 fabfile.py 的 Python 脚本。你可以使用其他名字来命名脚本,但是就需要指定这个脚本的路径,如下所示: ``` # fabric --fabfile /path/to/the/file.py ``` -如果你要使用 Fabric 来执行 `fabfile.py` 里面的任务,文件必需在你执行 Fabric 命令下的同一目录 +Fabric 使用 `fabfile.py` 来执行任务,这个文件应该放在你执行 Fabric 命令的目录里面。 -例 1:创建入门的 `Hello World` 任务: +**例子 1**:创建入门的 `Hello World` 任务: ``` # vi fabfile.py @@ -94,18 +94,16 @@ def hello(): ``` ![](http://www.tecmint.com/wp-content/uploads/2015/11/Create-Fabric-Fab-Python-File.gif) ->Fabric 工具使用说明 -让我们看看这个例子,fabfile.py 文件在本机执行了 uptime 
这个命令 +*Fabric 工具使用说明* -例子2:新建一个名为 fabfile.py 的文件并打开: +**例子 2**:新建一个名为 fabfile.py 的文件并打开: -粘贴以下代码至文件(to 校对:选题时的文章缺少了上面这三句话,译者从原网站补充至本文,请校对注意) +粘贴以下代码至文件: ``` #! /usr/bin/env python from fabric.api import local - def uptime(): local('uptime') ``` @@ -117,13 +115,16 @@ def uptime(): ``` ![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Uptime.gif) ->Fabric: 检查系统运行时间 -#### 在远程 Linux 机器上运行命令行来执行自动化任务 +*Fabric: 检查系统运行时间* -Fabric API 提供了一个名为 `env` 的关联数组(Python 中的词典)来储存 Fabric 要控制的机器的相关信息。 +让我们看看这个例子,fabfile.py 文件在本机执行了 uptime 这个命令。 -`env.hosts` 是一个 list 用来存储你要执行 Fabric 任务的机器,如果你的 IP 地址是 192.168.0.0,想要用 Fabric 来管理地址为 192.168.0.2 和 192.168.0.6 的机器,需要的配置如下所示: +#### 在远程 Linux 机器上运行命令来执行自动化任务 + +Fabric API 使用了一个名为 `env` 的关联数组(Python 中的词典)作为配置目录,来储存 Fabric 要控制的机器的相关信息。 + +`env.hosts` 是一个用来存储你要执行 Fabric 任务的机器的列表,如果你的 IP 地址是 192.168.0.0,想要用 Fabric 来管理地址为 192.168.0.2 和 192.168.0.6 的机器,需要的配置如下所示: ``` #!/usr/bin/env python @@ -137,11 +138,11 @@ Fabric 提供了众多的方法,这里列出几个经常会用到的: - run - 可以在远程机器上运行的 shell 命令 - local - 可以在本机上运行的 shell 命令 -- sudo 使用 root 权限运行的 shell 命令 -- get - 从远程机器上下载文件 -- put - 上传文件到远程机器 +- sudo - 使用 root 权限在远程机器上运行的 shell 命令 +- get - 从远程机器上下载一个或多个文件 +- put - 上传一个或多个文件到远程机器 -例子 3:在多台机子上输出信息,新建新的 fabfile.py 文件如下所示 +**例子 3**:在多台机子上输出信息,新建新的 fabfile.py 文件如下所示 ``` #!/usr/bin/env python @@ -158,9 +159,10 @@ def echo(): ``` ![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabrick-Automate-Linux-Tasks.gif) ->fabric: 自动在远程 Linux 机器上执行任务 -例子 4:你可以继续改进之前创建的执行 uptime 任务的 fabfile.py 文件,让它可以在多台服务器上检查其磁盘使用情况,如下所示: +*fabric: 自动在远程 Linux 机器上执行任务* + +**例子 4**:你可以继续改进之前创建的执行 uptime 任务的 fabfile.py 文件,让它可以在多台服务器上运行 uptime 命令,也可以检查其磁盘使用情况,如下所示: ``` #!/usr/bin/env python @@ -180,14 +182,14 @@ def disk_space(): ``` ![](http://www.tecmint.com/wp-content/uploads/2015/11/Fabric-Run-Multiple-Commands-on-Multiple-Linux-Systems.gif) ->Fabric:自动在多台服务器上执行任务 + +*Fabric:自动在多台服务器上执行任务* #### 在远程服务器上自动化布署 LAMP -Example 4(5?):我们来尝试一下在远程服务器上布署 LAMP(Linux, Apache, MySQL/MariaDB and PHP) +**例子 
5**:我们来尝试一下在远程服务器上布署 LAMP(Linux, Apache, MySQL/MariaDB and PHP) - -我们在远程安装 LAMP 时应该使用 root 权限。 +我们要写个函数在远程使用 root 权限安装 LAMP。 ##### 在 RHEL/CentOS 或 Fedora 上 @@ -217,41 +219,39 @@ def deploy_lamp(): 注:由于安装时会输出大量信息,这个例子我们就不提供屏幕 gif 图了 -现在你可以使用上文例子所示的功能来[自动化的管理 Linux 服务器上的任务][2]了。 +现在你可以使用 Fabric 和上文例子所示的功能来[自动化的管理 Linux 服务器上的任务][2]了。 #### 一些 Fabric 有用的选项 - - 你可以运行 `fab -help` 输出帮助信息,里面列出了所有可以使用的命令行信息 - `–fabfile=PATH` 选项可以让你定义除了名为 fabfile.py 之外的模块 - 如果你想用指定的用户名登录远程主机,请使用 `-user=USER` 选项 - 如果你需要密码进行验证或者 sudo 提权,请使用 `–password=PASSWORD` 选项 -- 如果需要输出某个命令行的详细信息,请使用 `–display=NAME` 选项 -- 使用 `--list-format=FORMAT` 选项能格式化 `-list` 选项输出的信息,可选的有 short, normal, nested -- 使用 `--list` 输出所有可用的任务(to 校对:建议与上一条交换位置) +- 如果需要输出某个命令的详细信息,请使用 `–display=命令名` 选项 +- 使用 `--list` 输出所有可用的任务 +- 使用 `--list-format=FORMAT` 选项能格式化 `-list` 选项输出的信息,可选的有 short、normal、 nested - `--config=PATH` 选项可以指定读取配置文件的地址 - `-–colorize-errors` 能显示彩色的错误输出信息 - `--version` 输出当前版本 -(to 校对:以上根据原文格式加了代码标签,并且根据 Fabric 文档修正了命令使用) -### Summary +### 总结 -Fabric 是一个强大并且拥有充足文档的工具,对于新手来说也能很快上手,阅读提供的文档能帮助你更好的了解这个它。如果你在安装和使用 Fabric 时发现什么问题可以在评论区留言,我们会及时回复。 +Fabric 是一个强大并且文档完备的工具,对于新手来说也能很快上手,阅读提供的文档能帮助你更好的了解它。如果你在安装和使用 Fabric 时发现什么问题可以在评论区留言,我们会及时回复。 -引文:[Fabric 文档][3] +参考:[Fabric 文档][3] -------------------------------------------------------------------------------- via: http://www.tecmint.com/automating-linux-system-administration-tasks/ -作者:[Aaron Kili ][a] +作者:[Aaron Kili][a] 译者:[NearTan](https://github.com/NearTan) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: http://www.tecmint.com/author/aaronkili/ -[1]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[1]: https://linux.cn/article-2324-1.html [2]: http://www.tecmint.com/use-ansible-playbooks-to-automate-complex-tasks-on-multiple-linux-servers/ [3]: http://docs.fabfile.org/en/1.4.0/usage/env.html From 
e2b9e646fc24883e55c3ec180be5113d5c768d8f Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 22 Aug 2016 10:24:39 +0800 Subject: [PATCH 451/471] =?UTF-8?q?20160822-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...n for the Android platform 101 - Part 1.md | 107 ++++++++++++++++++ 1 file changed, 107 insertions(+) create mode 100644 sources/tech/20160817 Dependency Injection for the Android platform 101 - Part 1.md diff --git a/sources/tech/20160817 Dependency Injection for the Android platform 101 - Part 1.md b/sources/tech/20160817 Dependency Injection for the Android platform 101 - Part 1.md new file mode 100644 index 0000000000..93ddf0f20a --- /dev/null +++ b/sources/tech/20160817 Dependency Injection for the Android platform 101 - Part 1.md @@ -0,0 +1,107 @@ +Dependency Injection for the Android platform 101 - Part 1 +=========================== + +![](https://d262ilb51hltx0.cloudfront.net/max/2000/1*YWlAzAY20KLLGIyyD_mzZw.png) + +When we first start studying software engineering, we usually bump into something like: + +>Software shall be SOLID. + +but what does that mean, in fact? Well, let’s just say that each character of the acronym means something really important for the architecture, such as: + +- [Single Responsibility Principle][1] +- [Open/closed principle][2] +- [Liskov substitution principle][3] +- [Interface segregation principle][4] +- [Dependency inversion principle][5] which is the core concept on which the dependency injection is based. + +Simply, we need to provide a class with all the objects it needs in order to perform its duties. + +### Overview + +Dependency Injection sounds like a very complex term for something that is, in fact, really easy and could be explained with this example: + +As we can see, in the first case, we create the dependency in the constructor, while in the second case it gets passed as a parameter. The second case is what we call dependency injection. 
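The example referred to above was an embedded code snippet that did not survive the import into this source file; a plain-Java reconstruction of the two cases, with hypothetical `Car`/`Engine` names that are not from the original article, could look like this:

```java
public class DiDemo {
    // Hypothetical example types; the original article's snippet used its own names.
    interface Engine {
        String start();
    }

    static class PetrolEngine implements Engine {
        public String start() {
            return "petrol engine started";
        }
    }

    // First case: the class creates its own dependency in the constructor,
    // so it is tied to one concrete implementation forever.
    static class HardWiredCar {
        private final Engine engine;

        HardWiredCar() {
            this.engine = new PetrolEngine();
        }

        String drive() {
            return engine.start();
        }
    }

    // Second case: the dependency is passed in as a constructor parameter --
    // this is what we call (constructor) dependency injection.
    static class InjectedCar {
        private final Engine engine;

        InjectedCar(Engine engine) {
            this.engine = engine;
        }

        String drive() {
            return engine.start();
        }
    }

    public static void main(String[] args) {
        // The injected car accepts any Engine, e.g. a test double.
        System.out.println(new HardWiredCar().drive());
        System.out.println(new InjectedCar(() -> "electric engine started").drive());
    }
}
```

Only the second class can be handed a fake `Engine` in a unit test, which is the practical payoff of the principle.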
We do this so that our class does not depend on a specific implementation of its dependency, but simply uses it. +Moreover, based on whether the parameter is passed over to the constructor or to a method, we talk either about constructor dependency injection or method dependency injection: + +If you want to know more about Dependency Injection in general, be sure to check out this amazing talk from Dan Lew that actually inspired this overview. + +On Android, we have different choices when it comes to frameworks for solving this particular problem but the most famous is Dagger 2, first made by the awesome guys at Square and then evolved by Google itself. Specifically, Dagger 1 was made by the former and then Big G took over the project and created the second version, with major changes such as being based on annotations and doing its job at compile time. + +### Importing the framework + +Setting up Dagger is no big deal, but it requires us to import the android-apt plugin by adding its dependency in the build.gradle file in the root directory of the project: + +``` +buildscript{ + ... + dependencies{ + ... + classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8' + } +} +``` + +Then, we need to apply the android-apt plugin in the top part of the app’s build.gradle file, right below the Android application one: + +``` +apply plugin: 'com.neenbedankt.android-apt' +``` + +At this point, we just need to add the dependencies so that we are now able to use the library and its annotations: + +``` +dependencies{ + ... + compile 'com.google.dagger:dagger:2.6' + apt 'com.google.dagger:dagger-compiler:2.6' + provided 'javax.annotation:jsr250-api:1.0' +} +``` + +>The last dependency is needed because the @Generated annotation is not yet available on Android, but it’s pure Java. + +### Dagger Modules + +For injecting our dependencies, we first need to tell the framework what we can provide (i.e. the Context) and how that specific object is built.
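Before the annotations are introduced, the role of a module can be sketched by hand in plain Java. This is only an illustrative stand-in, not the article's actual Dagger code: the names are assumptions, and Android's `Context` and `ConnectivityManager` are replaced by dummy classes so the sketch compiles anywhere.

```java
public class ModuleSketch {
    // Dummy stand-ins for Android's Context and ConnectivityManager,
    // used only so this sketch is self-contained.
    static class Context {
    }

    static class ConnectivityManager {
        final Context context;

        ConnectivityManager(Context context) {
            this.context = context;
        }
    }

    // Hand-written equivalent of a Dagger module: it receives the Context in
    // its constructor and exposes one "provide" method per object it can build.
    static class ApplicationModule {
        private final Context context;

        ApplicationModule(Context context) {
            this.context = context;
        }

        Context provideContext() {
            return context;
        }

        ConnectivityManager provideConnectivityManager() {
            // Built out of the other provided objects in the graph.
            return new ConnectivityManager(provideContext());
        }
    }

    public static void main(String[] args) {
        ApplicationModule module = new ApplicationModule(new Context());
        System.out.println(module.provideConnectivityManager().context == module.provideContext());
    }
}
```

In real Dagger code the class carries `@Module` and the methods carry `@Provides`, as described next, instead of this manual wiring.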
In order to do this, we annotate a specific class with the @Module annotation, so that Dagger is able to pick it up, scan it for the @Provides annotated methods, and generate the graph that will give us the object we request. + +Let’s see an example where we create a module that will give us the ConnectivityManager. So we need the Context that we will pass in the constructor of the module: + +>A very interesting feature of Dagger is providing a Singleton by simply annotating a method, dealing with all the issues inherited from Java by itself. + +### The Components + +Once we have a module, we need to tell Dagger where we want our dependencies to be injected: we do this in a Component, a specifically annotated interface in which we create different methods, where the parameters are the classes into which we want our dependencies to be injected. + +Let’s give an example and say that we want our MainActivity class to be able to receive the ConnectivityManager (or any other dependency in the graph). We would simply do something like this: + +>As we can see, the @Component annotation takes some parameters, one being an array of the modules supported, meaning the dependencies it can provide. In our case, that would be both the Context and the ConnectivityManager, as they are declared in the ApplicationModule class. + +### Wiring up + +At this point, what we need to do is create the Component as soon as possible (such as in the onCreate phase of the Application) and return it, so that the classes can use it for injecting the dependencies: + +>In order to have the DaggerApplicationComponent automatically generated by the framework, we need to build our project so that Dagger can scan our codebase and generate the parts we need.
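To make the wiring concrete, here is a hand-written plain-Java sketch of what such a component boils down to. No Dagger is involved and the names are hypothetical stand-ins; in the real setup, the part played by `HandWiredComponent` below is what the generated `DaggerApplicationComponent` does for you.

```java
public class WiringSketch {
    // Dummy stand-in for Android's ConnectivityManager.
    static class ConnectivityManager {
    }

    // The component interface: one inject() method per class that should
    // receive dependencies, mirroring the ApplicationComponent described above.
    interface ApplicationComponent {
        void inject(MainActivity activity);
    }

    // Hand-written version of what Dagger generates: it knows the graph and
    // fills in the fields of the target it is asked to inject.
    static class HandWiredComponent implements ApplicationComponent {
        public void inject(MainActivity activity) {
            activity.connectivityManager = new ConnectivityManager();
        }
    }

    // The target leaves its dependency as a field (the one that would carry
    // @Inject in real Dagger code) and asks the component to fill it in.
    static class MainActivity {
        ConnectivityManager connectivityManager;

        void onCreate(ApplicationComponent component) {
            component.inject(this);
        }
    }

    public static void main(String[] args) {
        // The component would be created once, e.g. in the Application's onCreate.
        MainActivity activity = new MainActivity();
        activity.onCreate(new HandWiredComponent());
        System.out.println(activity.connectivityManager != null);
    }
}
```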
+ +In our MainActivity, though, the two things we need to do are annotate a property we want to inject with the @Inject annotation and invoke the method we declared in the ApplicationComponent interface (note that this last part varies based on what kind of injection we are performing, but for the moment we can leave it as is for simplicity), so that our dependencies get injected and we can use them freely: + +### Conclusion + +Of course, we could do Dependency Injection manually, managing all the different objects, but Dagger takes away a lot of the “noise” involved with such boilerplate, giving us some nice additions (such as Singleton) that would otherwise be pretty awful to deal with in Java. + +-------------------------------------------------------------------------------- + +via: https://medium.com/di-101/di-101-part-1-81896c2858a0#.3hg0jj14o + +作者:[Roberto Orgiu][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://medium.com/@_tiwiz +[1]: https://en.wikipedia.org/wiki/Single_responsibility_principle +[2]: https://en.wikipedia.org/wiki/Open/closed_principle +[3]: https://en.wikipedia.org/wiki/Liskov_substitution_principle +[4]: https://en.wikipedia.org/wiki/Interface_segregation_principle +[5]: https://en.wikipedia.org/wiki/Dependency_inversion_principle From ba47ddc85efda504b03dcda710b3d8662b31e0f7 Mon Sep 17 00:00:00 2001 From: kokialoves <498497353@qq.com> Date: Mon, 22 Aug 2016 10:27:53 +0800 Subject: [PATCH 452/471] Update 20160812 Writing a JavaScript framework - Execution timing.md --- ...20160812 Writing a JavaScript framework - Execution timing.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160812 Writing a JavaScript
framework - Execution timing.md +++ b/sources/tech/20160812 Writing a JavaScript framework - Execution timing.md @@ -1,3 +1,4 @@ +[translating by kokialoves] Writing a JavaScript framework - Execution timing, beyond setTimeout =================== From 1df5c832433d7db2ae9f5b94e667f1ddbffdcfc9 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 22 Aug 2016 10:35:30 +0800 Subject: [PATCH 453/471] =?UTF-8?q?20160822-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Android Dev And What I Learned From It.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 sources/talk/20160814 How I Got to be an Android Dev And What I Learned From It.md diff --git a/sources/talk/20160814 How I Got to be an Android Dev And What I Learned From It.md b/sources/talk/20160814 How I Got to be an Android Dev And What I Learned From It.md new file mode 100644 index 0000000000..f8fe109506 --- /dev/null +++ b/sources/talk/20160814 How I Got to be an Android Dev And What I Learned From It.md @@ -0,0 +1,59 @@ +How I Got to be an Android Dev And What I Learned From It +============================= + +They say all relationships go through a rough patch at two, seven, then ten years. I don't remember who said it, but someone told me that many years ago. + +Next week will be my moving-to-Sydney second anniversary, so I figured this is a good time to write this post. + +During I/O last May, I met one of the coolest ladies ever, Yasmine. She asked me how I got into Android development, and when I was done telling her she said I should blog about it. So here it is, Yasmine. Better late than never. ;) + +### In the beginning... + +If there's one thing you should know about me, it's that I find it very hard to make decisions. Who's your best friend? What's your favourite food? What should you name your stuffed panda? I don't know the answer to these things. 
So imagine 16-year-old me, about to graduate high school, and I had Zero Idea what I wanted to major in. The first university I applied to? I wrote down what major I was applying for right in front of the registrar, literally before handing her my application form (Business Economics). + +I ended up going to another school, majoring in Electronics and Communications Engineering. I had one computer programming subject in freshman year. And I hated it. I hated it with a passion. I couldn't figure out how anything worked, and I swore I would never do that again. + +My first job after uni was with Intel as a product engineer. I worked there for two years. Lived in the middle of nowhere, worked long hours. But I thought that's par for the course; part of being an adult is working hard, right? And then the semiconductor industry in the Philippines started flipping out. A lot of other factories closed down, some of the products we used to look after were being transferred to other sites. I decided I'd rather look for another job now than be retrenched, not knowing how long I would be jobless for. + +### What Now? + +I wanted a job back in the city, and I kinda didn't want to stay in a sinking industry. But then again, there is nothing else I know how to do. Yeah, I am a licensed engineer, so technically I could work for a telco, or a TV station even! But at that time, if you wanted to work for a telco, you'd have a better chance of getting hired if you interned with them right out of uni. And I didn't, so that's out. There were a lot of job postings for software developers though. But I hated programming! I don't know how to do it! + +And this is when my first lucky break came. I am so fortunate that I met a manager who trusted in me. I was upfront with her, I don't know shit. I would have to learn on the job, so it would be a slow start. Needless to say,
I worked on some pretty cool stuff (we made apps installed in SIM cards), and met a lot of really nice people. But more importantly, it kickstarted my foray into software development. + +I eventually worked on more enterprise-y stuff (boring). Until the time we ran out of projects. I mean I'm all for coming in puttering around the office doing nothing and getting paid for it. But after two days it turns out it kinda sucks. It was 2009 and I kept on hearing about this new OS from Google called Android and that the SDK is Out Now! and that You Should Try It Out. So I installed all the things and started Android-ing. + +### Things Get Interesting + +So now that I had built a shiny, new Hello World app that runs on an emulator, I took that as a sign that I had the creds to apply for an Android development job. I joined a start-up, and again, I was upfront about it -- I don't know how to do this, I have just been playing around, but if you want to pay me to play around, then we can be friends. And so I was met with another lucky break. + +It was an encouraging time to be a dev. The Android Dev StackOverflow community was much smaller, we were all learning at the same time, and honestly, I think everyone was kinder and more forgiving+. + +I eventually worked for a company whose mobile team was distributed across offices in Manila, Sydney, and New York. I was the first Android developer in the Manila office, but by then I was so used to it that I didn't mind. + +It was there that I met the guy who would eventually refer me to Domain, and for that I am forever grateful to him. Domain has done so much for me, both personally and professionally. I work with a really talented team, and I have never seen a company love a product so much. Domain made my dream of attending IO a reality, and through my work with them I got to work on a lot of pretty sweet features that I never dreamed of++. Another lucky break, and I mean to make the most out of it. + +### And? 
+ +I guess what I mean to say is that I have just been winging it all these years. But at least I'm honest about it, right? If there's anything I have learned, it is that there is nothing wrong with saying "I don't know". There are times when we need to pretend to know things, but there are a lot more times when we need to accept that we do not know things. + +Do not be afraid to try something new, no matter how scared it makes you feel. Easier said than done, I know. But sometimes it really helps to take a deep breath, close your eyes, and just jump+++. Lundagin mo, baby! + + + +`+` I was looking at my old StackOverflow questions, and I seriously think that if I asked them today, I would have a lot of "What are you, stupid?" comments. Or maybe I'm just old and cynical. I don't know. The point is, we have all been there, so be kind to one another, okay? +`++` This merits a post all on its own +`+++` I distinctly remember that's how I applied to my first Android job. I wrote my cover letter, proofread it, hovered my mouse over the Send button, took a deep breath, then clicked it before I could change my mind. 
+ + +-------------------------------------------------------------------------------- + +via: http://www.zdominguez.com/2016/08/winging-it-how-i-got-to-be-android-dev.html?utm_source=Android+Weekly&utm_campaign=9314c56ae3-Android_Weekly_219&utm_medium=email&utm_term=0_4eb677ad19-9314c56ae3-338075705 + +作者:[Zarah Dominguez ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://plus.google.com/102371834744366149197 From 21b4d42ba058eefa14f773fb399d617fbd8dba65 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 22 Aug 2016 12:50:49 +0800 Subject: [PATCH 454/471] =?UTF-8?q?Delete=2020160601=20Learn=20Python=20Co?= =?UTF-8?q?ntrol=20Flow=20and=20Loops=20to=20Write=20and=20Tune=20Shell=20?= =?UTF-8?q?Scripts=20=E2=80=93=20Part=202.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...o Write and Tune Shell Scripts – Part 2.md | 253 ------------------ 1 file changed, 253 deletions(-) delete mode 100644 sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md diff --git a/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md deleted file mode 100644 index 2ec64d9cf6..0000000000 --- a/sources/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md +++ /dev/null @@ -1,253 +0,0 @@ -Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2 -==================================== - -In the previous article of this Python series we shared a brief introduction to Python, its command-line shell, and the IDLE. We also demonstrated how to perform arithmetic calculations, how to store values in variables, and how to print back those values to the screen. 
Finally, we explained the concepts of methods and properties in the context of Object Oriented Programming through a practical example. - -![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png) - -In this guide we will discuss control flow (to choose different courses of action depending on information entered by a user, the result of a calculation, or the current value of a variable) and loops (to automate repetitive tasks) and then apply what we have learned so far to write a simple shell script that will display the operating system type, the hostname, the kernel release, version, and the machine hardware name. - -This example, although basic, will help us illustrate how we can leverage Python OOP’s capabilities to write shell scripts easier than using regular bash tools. - -In other words, we want to go from - -``` -# uname -snrvm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png) - -to - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png) - -Looks pretty, doesn’t it? Let’s roll up our sleeves and make it happen. - -### Control flow in Python - -As we said earlier, control flow allows us to choose different outcomes depending on a given condition. Its most simple implementation in Python is an if / else clause. - -The basic syntax is: - -``` -if condition: -# action 1 -else: -# action 2 -``` - -When condition evaluates to true, the code block below will be executed (represented by # action 1. Otherwise, the code under else will be run. - -A condition can be any statement that can evaluate to either true or false. - -For example: - -1. 1 < 3 # true - -2. firstName == “Gabriel” # true for me, false for anyone not named Gabriel - -- In the first example we compared two values to determine if one is greater than the other. 
-- In the second example we compared firstName (a variable) to determine if, at the current execution point, its value is identical to “Gabriel” -- The condition and the else statement must be followed by a colon (:) -- Indentation is important in Python. Lines with identical indentation are considered to be in the same code block. - -Please note that the if / else statement is only one of the many control flow tools available in Python. We reviewed it here since we will use it in our script later. You can learn more about the rest of the tools in the [official docs][1]. - -### Loops in Python - -Simply put, a loop is a sequence of instructions or statements that are executed in order as long as a condition is true, or once per item in a list. - -The most simple loop in Python is represented by the for loop iterates over the items of a given list or string beginning with the first item and ending with the last. - -Basic syntax: - -``` -for x in example: -# do this -``` - -Here example can be either a list or a string. If the former, the variable named x represents each item in the list; if the latter, x represents each character in the string: - -``` ->>> rockBands = [] ->>> rockBands.append("Roxette") ->>> rockBands.append("Guns N' Roses") ->>> rockBands.append("U2") ->>> for x in rockBands: -print(x) -or ->>> firstName = "Gabriel" ->>> for x in firstName: -print(x) -``` - -The output of the above examples is shown in the following image: - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png) - -### Python Modules - -For obvious reasons, there must be a way to save a sequence of Python instructions and statements in a file that can be invoked when it is needed. - -That is precisely what a module is. Particularly, the os module provides an interface to the underlying operating system and allows us to perform many of the operations we usually do in a command-line prompt. 
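As a quick, self-contained sketch of the os-module calls described above (standard library only; the printed values naturally differ from system to system):

```python
import os

# os.uname() returns a 5-field result:
# (sysname, nodename, release, version, machine)
info = os.uname()

# In Python 3 the fields can be read by name or by position.
print(info.sysname)   # kernel name, e.g. "Linux"
print(info.release)   # kernel release, same as info[2]
print(os.getcwd())    # current working directory
```

Note that `os.uname()` is only available on POSIX systems, and attribute-style access to its fields requires Python 3.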
- -As such, it incorporates several methods and properties that can be called as we explained in the previous article. However, we need to import (or include) it in our environment using the import keyword: - -``` ->>> import os -``` - -Let’s print the current working directory: - -``` ->>> os.getcwd() -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png) - -Let’s now put all of this together (along with the concepts discussed in the previous article) to write the desired script. - -### Python Script - -It is considered good practice to start a script with a statement that indicates the purpose of the script, the license terms under which it is released, and a revision history listing the changes that have been made. Although this is more of a personal preference, it adds a professional touch to our work. - -Here’s the script that produces the output we shown at the top of this article. It is heavily commented so that you can understand what’s happening. - -Take a few minutes to go through it before proceeding. Note how we use an if / else structure to determine whether the length of each field caption is greater than the value of the field itself. - -Based on the result, we use empty characters to fill in the space between a field caption and the next. Also, we use the right number of dashes as separator between the field caption and its value below. 
- -``` -#!/usr/bin/python3 -# Change the above line to #!/usr/bin/python if you don't have Python 3 installed -# Script name: uname.py -# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily -# License: GPL v3 (http://www.gnu.org/licenses/gpl.html) -# Copyright (C) 2016 Gabriel Alejandro Cánepa -# ​Facebook / Skype / G+ / Twitter / Github: gacanepa -# Email: gacanepa (at) gmail (dot) com -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# REVISION HISTORY -# DATE VERSION AUTHOR CHANGE DESCRIPTION -# ---------- ------- -------------- -# 2016-05-28 1.0 Gabriel Cánepa Initial version -# Import the os module -import os -# Assign the output of os.uname() to the the systemInfo variable -# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine) -# Documentation: https://docs.python.org/3.2/library/os.html#module-os -systemInfo = os.uname() -# This is a fixed array with the desired captions in the script output -headers = ["Operating system","Hostname","Release","Version","Machine"] -# Initial value of the index variable. It is used to define the -# index of both systemInfo and headers in each step of the iteration. -index = 0 -# Initial value of the caption variable. 
-caption = "" -# Initial value of the values variable -values = "" -# Initial value of the separators variable -separators = "" -# Start of the loop -for item in systemInfo: -if len(item) < len(headers[index]): -# A string containing dashes to the length of item[index] or headers[index] -# To repeat a character(s), enclose it within quotes followed -# by the star sign (*) and the desired number of times. -separators = separators + "-" * len(headers[index]) + " " -caption = caption + headers[index] + " " -values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " " -else: -separators = separators + "-" * len(item) + " " -caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1) -values = values + item + " " -# Increment the value of index by 1 -index = index + 1 -# End of the loop -# Print the variable named caption converted to uppercase -print(caption.upper()) -# Print separators -print(separators) -# Print values (items in systemInfo) -print(values) -# INSTRUCTIONS: -# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions: -# chmod +x uname.py -# 2) Execute it: -# ./uname.py -``` - -Once you have saved the above script to a file, give it execute permissions and run it as indicated at the bottom of the code: - -``` -# chmod +x uname.py -# ./uname.py -``` - -If you get the following error while attempting to execute the script: - -``` --bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory -``` - -It means you don’t have Python 3 installed. If that is the case, you can either install the package or replace the interpreter line (pay special attention and be very careful if you followed the steps to update the symbolic links to the Python binaries as outlined in the previous article): - -``` -#!/usr/bin/python3 -``` - -with - -``` -#!/usr/bin/python -``` - -which will cause the installed version of Python 2 to execute the script instead. 
- -Note: This script has been tested successfully both in Python 2.x and 3.x. - -Although somewhat rudimentary, you can think of this script as a Python module. This means that you can open it in the IDLE (File → Open… → Select file): - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png) - -A new window will open with the contents of the file. Then go to Run → Run module (or just press F5). The output of the script will be shown in the original shell: - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png) - -If you want to obtain the same results with a script written purely in Bash, you would need to use a combination of awk, sed, and resort to complex methods to store and retrieve items in a list (not to mention the use of tr to convert lowercase letters to uppercase). - -In addition, Python provides portability in that all Linux systems ship with at least one Python version (either 2.x or 3.x, sometimes both). Should you need to rely on a shell to accomplish the same goal, you would need to write different versions of the script based on the shell. - -This goes to show that Object Oriented Programming features can become strong allies of system administrators. - -Note: You can find this python script (and others) in one of my GitHub repositories. - -### Summary - -In this article we have reviewed the concepts of control flow, loops / iteration, and modules in Python. We have shown how to leverage OOP methods and properties in Python to simplify otherwise complex shell scripts. - -Do you have any other ideas you would like to test? Go ahead and write your own Python scripts and let us know if you have any questions. Don’t hesitate to drop us a line using the comment form below, and we will get back to you as soon as we can. 
- - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/2/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/gacanepa/ -[1]: http://please%20note%20that%20the%20if%20/%20else%20statement%20is%20only%20one%20of%20the%20many%20control%20flow%20tools%20available%20in%20Python.%20We%20reviewed%20it%20here%20since%20we%20will%20use%20it%20in%20our%20script%20later.%20You%20can%20learn%20more%20about%20the%20rest%20of%20the%20tools%20in%20the%20official%20docs. From 4612031808724a1c07e1b930da4739199b9b868b Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Mon, 22 Aug 2016 19:22:58 +0800 Subject: [PATCH 455/471] Update 20160803 The revenge of Linux.md --- sources/talk/20160803 The revenge of Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20160803 The revenge of Linux.md b/sources/talk/20160803 The revenge of Linux.md index 57a1ebe27c..48f060ccbd 100644 --- a/sources/talk/20160803 The revenge of Linux.md +++ b/sources/talk/20160803 The revenge of Linux.md @@ -1,3 +1,4 @@ +翻译中 The revenge of Linux ======================== From a8bda60279348ef5f062e723e8044c22e5c646b6 Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Mon, 22 Aug 2016 19:23:28 +0800 Subject: [PATCH 456/471] Update 20160803 The revenge of Linux.md --- sources/talk/20160803 The revenge of Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20160803 The revenge of Linux.md b/sources/talk/20160803 The revenge of Linux.md index 48f060ccbd..cbf98bd8a2 100644 --- a/sources/talk/20160803 The revenge of Linux.md +++ b/sources/talk/20160803 The revenge of Linux.md @@ -1,4 +1,4 @@ -翻译中 +翻译中 by H-mudcup The revenge of Linux ======================== From 
d4020ccf60c74313eecae2e645d53140e9835006 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 22 Aug 2016 22:55:24 +0800 Subject: [PATCH 457/471] PUB:Part 3 - LXD 2.0--Your first LXD container MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @kylepeng93 后面翻译都很好,就是开头一段,怎么会翻译 image 为图片。。 --- ...t 3 - LXD 2.0--Your first LXD container.md | 559 ++++++++++++++++++ ...t 3 - LXD 2.0--Your first LXD container.md | 438 -------------- 2 files changed, 559 insertions(+), 438 deletions(-) create mode 100644 published/LXD/Part 3 - LXD 2.0--Your first LXD container.md delete mode 100644 translated/tech/LXD/Part 3 - LXD 2.0--Your first LXD container.md diff --git a/published/LXD/Part 3 - LXD 2.0--Your first LXD container.md b/published/LXD/Part 3 - LXD 2.0--Your first LXD container.md new file mode 100644 index 0000000000..dd961f347b --- /dev/null +++ b/published/LXD/Part 3 - LXD 2.0--Your first LXD container.md @@ -0,0 +1,559 @@ +LXD 2.0 系列( 三):你的第一个 LXD 容器 +========================================== + +这是 [LXD 2.0 系列][0]的第三篇博客。 + +由于在管理 LXD 容器时涉及到大量的命令,所以这篇文章的篇幅是比较长的,如果你更喜欢使用同样的命令来快速的一步步实现整个过程,你可以[尝试我们的在线示例][1]! 
+ +![](https://linuxcontainers.org/static/img/containers.png) + +### 创建并启动一个新的容器 + +正如我在先前的文章中提到的一样,LXD 命令行客户端预配置了几个镜像源。Ubuntu 的所有发行版和架构平台全都提供了官方镜像,但是对于其他的发行版也有大量的非官方镜像,那些镜像都是由社区制作并且被 LXC 上游贡献者所维护。 + +#### Ubuntu + +如果你想要支持最为完善的 Ubuntu 版本,你可以按照下面的去做: + +``` +lxc launch ubuntu: +``` + +注意,这里具体安装的版本会随着新的 Ubuntu LTS 的发布而变化。因此,如果用于脚本,你需要指明你具体安装的版本(参见下面)。 + +#### Ubuntu 14.04 LTS + +得到最新更新的、已经测试过的、稳定的 Ubuntu 14.04 LTS 镜像,你可以简单的执行: + +``` +lxc launch ubuntu:14.04 +``` + +在该模式下,会指定一个随机的容器名。 + +如果你更喜欢指定一个你自己的名字,你可以这样做: + +``` +lxc launch ubuntu:14.04 c1 +``` + +如果你想要指定一个特定的体系架构(非主流平台),比如 32 位 Intel 镜像,你可以这样做: + +``` +lxc launch ubuntu:14.04/i386 c2 +``` + +#### 当前的 Ubuntu 开发版本 + +上面使用的“ubuntu:”远程仓库只会给你提供官方的并经过测试的 Ubuntu 镜像。但是如果你想要未经测试过的日常构建版本,开发版可能对你来说是合适的,你需要使用“ubuntu-daily:”远程仓库。 + +``` +lxc launch ubuntu-daily:devel c3 +``` + +在这个例子中,将会自动选中最新的 Ubuntu 开发版本。 + +你也可以更加精确,比如你可以使用代号名: + +``` +lxc launch ubuntu-daily:xenial c4 +``` + +#### 最新的 Alpine Linux + +Alpine 镜像可以在“Images:”远程仓库中找到,通过如下命令执行: + +``` +lxc launch images:alpine/3.3/amd64 c5 +``` + +#### 其他 + +全部的 Ubuntu 镜像列表可以这样获得: + +``` +lxc image list ubuntu: +lxc image list ubuntu-daily: +``` + +全部的非官方镜像: + +``` +lxc image list images: +``` + +某个给定的远程仓库的全部别名(易记名称)可以这样获得(比如对于“ubuntu:”远程仓库): + +``` +lxc image alias list ubuntu: +``` + +### 创建但不启动一个容器 + +如果你想创建一个容器或者一批容器,但是你不想马上启动它们,你可以使用`lxc init`替换掉`lxc launch`。所有的选项都是相同的,唯一的不同就是它并不会在你创建完成之后启动容器。 + +``` +lxc init ubuntu: +``` + +### 关于你的容器的信息 + +#### 列出所有的容器 + +要列出你的所有容器,你可以这样做: + +``` +lxc list +``` + +有大量的选项供你选择来改变被显示出来的列。在一个拥有大量容器的系统上,默认显示的列可能会有点慢(因为必须获取容器中的网络信息),你可以这样做来避免这种情况: + +``` +lxc list --fast +``` + +上面的命令显示了另外一套列的组合,这个组合在服务器端需要处理的信息更少。 + +你也可以基于名字或者属性来过滤掉一些东西: + +``` +stgraber@dakara:~$ lxc list security.privileged=true ++------+---------+---------------------+-----------------------------------------------+------------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | 
++------+---------+---------------------+-----------------------------------------------+------------+-----------+ +| suse | RUNNING | 172.17.0.105 (eth0) | 2607:f2c0:f00f:2700:216:3eff:fef2:aff4 (eth0) | PERSISTENT | 0 | ++------+---------+---------------------+-----------------------------------------------+------------+-----------+ +``` + +在这个例子中,只有那些特权容器(禁用了用户命名空间)才会被列出来。 + +``` +stgraber@dakara:~$ lxc list --fast alpine ++-------------+---------+--------------+----------------------+----------+------------+ +| NAME | STATE | ARCHITECTURE | CREATED AT | PROFILES | TYPE | ++-------------+---------+--------------+----------------------+----------+------------+ +| alpine | RUNNING | x86_64 | 2016/03/20 02:11 UTC | default | PERSISTENT | ++-------------+---------+--------------+----------------------+----------+------------+ +| alpine-edge | RUNNING | x86_64 | 2016/03/20 02:19 UTC | default | PERSISTENT | ++-------------+---------+--------------+----------------------+----------+------------+ +``` + +在这个例子中,只有在名字中带有“alpine”的容器才会被列出来(也支持复杂的正则表达式)。 + +#### 获取容器的详细信息 + +由于 list 命令显然不能以一种友好的可读方式显示容器的所有信息,因此你可以使用如下方式来查询单个容器的信息: + +``` +lxc info <container> +``` + +例如: + +``` +stgraber@dakara:~$ lxc info zerotier +Name: zerotier +Architecture: x86_64 +Created: 2016/02/20 20:01 UTC +Status: Running +Type: persistent +Profiles: default +Pid: 31715 +Processes: 32 +Ips: + eth0: inet 172.17.0.101 + eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8 + eth0: inet6 fe80::216:3eff:feec:65a8 + lo: inet 127.0.0.1 + lo: inet6 ::1 + lxcbr0: inet 10.0.3.1 + lxcbr0: inet6 fe80::c0a4:ceff:fe52:4d51 + zt0: inet 29.17.181.59 + zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1 + zt0: inet6 fe80::79:e7ff:fe0d:5123 +Snapshots: + zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless) +``` + +### 生命周期管理命令 + +这些命令对于任何容器或者虚拟机管理器或许都是最普通的命令,但是它们仍然需要讲到。 + +所有的这些命令在批量操作时都能接受多个容器名。 + +#### 启动 + +启动一个容器就像下面一样简单: + +``` +lxc start <container> +``` + +#### 停止 + +停止一个容器可以这样来完成: + +``` +lxc stop <container> +``` + +如果容器不合作(即没有对发出的 SIGPWR 
信号产生回应),这时候,你可以使用下面的方式强制执行: + +``` +lxc stop <container> --force +``` + +#### 重启 + +通过下面的命令来重启一个容器: + +``` +lxc restart <container> +``` + +如果容器不合作(即没有对发出的 SIGINT 信号产生回应),你可以使用下面的方式强制执行: + +``` +lxc restart <container> --force +``` + +#### 暂停 + +你也可以“暂停”一个容器,在这种模式下,所有的容器任务将会被发送相同的 SIGSTOP 信号,这也意味着它们将仍然是可见的,并且仍然会占用内存,但是它们不会从调度程序中得到任何的 CPU 时间片。 + +如果你有一个很占用 CPU 的容器,而这个容器需要一点时间来启动,但是你却并不会经常用到它。这时候,你可以先启动它,然后将它暂停,并在你需要它的时候再启动它。 + +``` +lxc pause <container> +``` + +#### 删除 + +最后,如果你不需要这个容器了,你可以用下面的命令删除它: + +``` +lxc delete <container> +``` + +注意,如果容器还处于运行状态时你将必须使用“--force”。 + +### 容器的配置 + +LXD 拥有大量的容器配置设定,包括资源限制,容器启动控制以及对各种设备是否允许访问的配置选项。完整的清单因为太长所以并没有在本文中列出,但是,你可以从[这里][2]获取它。 + +就设备而言,LXD 当前支持下面列出的这些设备类型: + +- 磁盘 + 既可以是一块物理磁盘,也可以只是一个被挂载到容器上的分区,还可以是一个来自主机的绑定挂载路径。 +- 网络接口卡 + 一块网卡。它可以是一块桥接的虚拟网卡,或者是一块点对点设备,还可以是一块以太局域网设备或者一块已经被连接到容器的真实物理接口。 +- unix 块设备 + 一个 UNIX 块设备,比如 /dev/sda +- unix 字符设备 + 一个 UNIX 字符设备,比如 /dev/kvm +- none + 这种特殊类型被用来隐藏那种可以通过配置文件被继承的设备。 + +#### 配置 profile 文件 + +所有可用的配置文件列表可以这样获取: + +``` +lxc profile list +``` + +为了看到给定配置文件的内容,最简单的方式是这样做: + +``` +lxc profile show <profile> +``` + +你可能想要改变文件里面的内容,可以这样做: + +``` +lxc profile edit <profile> +``` + +你可以使用如下命令来改变应用到给定容器的配置文件列表: + +``` +lxc profile apply <container> <profile1>,<profile2>,<profile3>,... 
+``` + +#### 本地配置 + +有些配置是某个容器特定的,你并不想将它放到配置文件中,你可以直接对容器设置它们: + +``` +lxc config edit <container> +``` + +上面的命令的用法和“profile edit”命令是一样的。 + +如果不想在文本编辑器中打开整个文件的内容,你也可以像这样修改单独的配置: + +``` +lxc config set <container> <key> <value> +``` + +或者添加设备,例如: + +``` +lxc config device add my-container kvm unix-char path=/dev/kvm +``` + +上面的命令将会为名为“my-container”的容器设置一个 /dev/kvm 项。 + +对一个配置文件使用`lxc profile set`和`lxc profile device add`命令也能实现上面的功能。 + +#### 读取配置 + +你可以使用如下命令来读取容器的本地配置: + +``` +lxc config show <container> +``` + +或者得到已经被展开了的配置(包含了所有的配置值): + +``` +lxc config show <container> --expanded +``` + +例如: + +``` +stgraber@dakara:~$ lxc config show --expanded zerotier +name: zerotier +profiles: +- default +config: + security.nesting: "true" + user.a: b + volatile.base_image: a49d26ce5808075f5175bf31f5cb90561f5023dcd408da8ac5e834096d46b2d8 + volatile.eth0.hwaddr: 00:16:3e:ec:65:a8 + volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]' +devices: + eth0: + name: eth0 + nictype: macvlan + parent: eth0 + type: nic + limits.ingress: 10Mbit + limits.egress: 10Mbit + root: + path: / + size: 30GB + type: disk + tun: + path: /dev/net/tun + type: unix-char +ephemeral: false +``` + +这样做可以很方便的检查有哪些配置属性被应用到了给定的容器。 + +#### 实时配置更新 + +注意,除非在文档中已经被明确指出,否则所有的配置值和设备项的设置都会对容器实时发生影响。这意味着在不重启正在运行的容器的情况下,你可以添加和移除某些设备或者修改安全配置文件。 + +### 获得一个 shell + +LXD 允许你直接在容器中执行任务。最常用的做法是在容器中得到一个 shell 或者执行一些管理员任务。 + +和 SSH 相比,这样做的好处是你不需要容器是网络可达的,也不需要任何软件和特定的配置。 + +#### 执行环境 + +与你可能习惯的方式相比,LXD 执行命令时有一点是不同的,那就是命令并不是在容器内的 shell 中运行的。这也意味着容器不知道使用的是什么样的 shell,以及设置了什么样的环境变量和你的家目录在哪里。 + +通过 LXD 来执行命令总是使用最小的路径环境变量设置,并且 HOME 环境变量必定为 /root,以容器的超级用户身份来执行(即 uid 为 0,gid 为 0)。 + +其他的环境变量可以通过命令行来设置,或者在“environment.”配置中设置成永久环境变量。 + +#### 执行命令 + +在容器中获得一个 shell 可以简单的执行下列命令得到: + +``` +lxc exec <container> bash +``` + +当然,这样做的前提是容器内已经安装了 bash。 + +更复杂的命令要求使用分隔符来合理分隔参数: + +``` +lxc exec <container> -- ls -lh / +``` + +如果想要设置或者重写变量,你可以使用“--env”参数,例如: + +``` +stgraber@dakara:~$ lxc exec zerotier --env 
mykey=myvalue env | grep mykey +mykey=myvalue +``` + +### 管理文件 + +因为 LXD 可以直接访问容器的文件系统,因此,它可以直接读取和写入容器中的任意文件。当我们需要提取日志文件或者与容器传递文件时,这个特性是很有用的。 + +#### 从容器中取回一个文件 + +想要从容器中获得一个文件,简单的执行下列命令: + +``` +lxc file pull <container>/<path> <dest> +``` + +例如: + +``` +stgraber@dakara:~$ lxc file pull zerotier/etc/hosts hosts +``` + +或者将它读取到标准输出: + +``` +stgraber@dakara:~$ lxc file pull zerotier/etc/hosts - +127.0.0.1 localhost + +# The following lines are desirable for IPv6 capable hosts +::1 ip6-localhost ip6-loopback +fe00::0 ip6-localnet +ff00::0 ip6-mcastprefix +ff02::1 ip6-allnodes +ff02::2 ip6-allrouters +ff02::3 ip6-allhosts +``` + +#### 向容器发送一个文件 + +向容器发送文件也同样简单: + +``` +lxc file push <source> <container>/<path> +``` + +#### 直接编辑一个文件 + +编辑是一个方便的功能,其实就是简单的提取一个给定的路径,在你的默认文本编辑器中打开它,在你关闭编辑器时会自动将编辑的内容保存到容器。 + +``` +lxc file edit <container>/<path> +``` + +### 快照管理 + +LXD 允许你对容器执行快照功能并恢复它。快照包括了容器在某一时刻的完整状态(如果`--stateful`被使用的话将会包括运行状态),这也意味着所有的容器配置,容器设备和容器文件系统也会被保存。 + +#### 创建一个快照 + +你可以使用下面的命令来执行快照功能: + +``` +lxc snapshot <container> +``` + +命令执行完成之后将会生成名为 snapX(X 为一个自动增长的数)的快照。 + +除此之外,你还可以使用如下命令命名你的快照: + +``` +lxc snapshot <container> <snapshot name> +``` + +#### 列出所有的快照 + +一个容器的所有快照的数量可以使用`lxc list`来得到,但是具体的快照列表只能执行`lxc info`命令才能看到。 + +``` +lxc info <container> +``` + +#### 恢复快照 + +为了恢复快照,你可以简单的执行下面的命令: + +``` +lxc restore <container> <snapshot name> +``` + +#### 给快照重命名 + +可以使用如下命令来给快照重命名: + +``` +lxc move <container>/<snapshot name> <container>/<new snapshot name> +``` + +#### 从快照中创建一个新的容器 + +你可以使用快照来创建一个新的容器,而这个新的容器除了一些可变的信息将会被重置之外(例如 MAC 地址)其余所有信息都将和快照完全相同。 + +``` +lxc copy <source container>/<snapshot name> <destination container> +``` + +#### 删除一个快照 + +最后,你可以执行下面的命令来删除一个快照: + +``` +lxc delete <container>/<snapshot name> +``` + +### 克隆并重命名 + +得到一个纯净的发行版镜像总是让人感到愉悦,但是,有时候你想要安装一系列的软件到你的容器中,这时,你需要配置它然后将它分支成多个其他的容器。 + +#### 复制一个容器 + +为了复制一个容器并有效的将它克隆到一个新的容器中,你可以执行下面的命令: + +``` +lxc copy <source container> <destination container> +``` + +目标容器在所有方面将会完全和源容器等同。除了新的容器没有任何源容器的快照以及一些可变值将会被重置之外(例如 MAC 地址)。 + +#### 移动一个容器 + +LXD 允许你复制容器并在主机之间移动它。但是,关于这一点将在后面的文章中介绍。 + +现在,“move”命令将会被用作给容器重命名。 + +``` +lxc move <old name> <new name> +``` + +唯一的要求就是容器必须处于停止状态,容器内的所有东西都会原样保留,包括那些可变化的信息(类似 MAC 地址等)。 + +### 结论 + +这篇如此长的文章介绍了大多数你可能会在日常操作中使用到的命令。 + +很显然,这些如此之多的命令都会有不少选项,可以让你的命令更加有效率或者可以让你指定你的 LXD 
容器的某个具体方面的参数。最好的学习这些命令的方式就是深入学习它们的帮助文档(--help)。 + +### 更多信息 + +- LXD 的主要网站是: +- Github 上的开发动态: +- 邮件列表支持: +- IRC 支持: #lxcontainers on irc.freenode.net + +如果你不想或者不能在你的机器上安装 LXD,你可以[试试在线版本][1]! + +-------------------------------------------------------------------------------- + +via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/ +作者:[Stéphane Graber][a] +译者:[kylepeng93](https://github.com/kylepeng93) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.stgraber.org/author/stgraber/ +[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/ +[1]: https://linuxcontainers.org/lxd/try-it +[2]: https://github.com/lxc/lxd/blob/master/doc/configuration.md diff --git a/translated/tech/LXD/Part 3 - LXD 2.0--Your first LXD container.md b/translated/tech/LXD/Part 3 - LXD 2.0--Your first LXD container.md deleted file mode 100644 index 4b6dcfd8aa..0000000000 --- a/translated/tech/LXD/Part 3 - LXD 2.0--Your first LXD container.md +++ /dev/null @@ -1,438 +0,0 @@ -kylepeng93 is translating -你的地一个LXD容器 -========================================== - -这是第三篇发布的博客[LXD2.0系列] -由于在管理LXD容器时涉及到大量的命令,所以这篇文章的篇幅是比较长的,如果你更喜欢使用同样的命令来快速的一步步实现整个过程,你可以[尝试我们的在线示例]! 
-![](https://linuxcontainers.org/static/img/containers.png) - -### 创建并启动一个新的容器 -正如我在先前的文章中提到的一样,LXD命令行客户端使用了少量的图片来做了一个预配置。Ubuntu的所有发行版和架构平台都拥有最好的官方图片,但是对于其他的发行版仍然有大量的非官方图片,那些图片都是由社区制作并且被LXC上层贡献者所维护。 -### Ubuntu -如果你想要支持最为完善的ubuntu版本,你可以按照下面的去做: -``` -lxc launch ubuntu: -``` -注意,这里所做的解释会随着ubuntu LTS的发布而变化。因此对于你使用的脚本应该取决于下面提到的具体你想要安装的版本: -###Ubuntu14.04 LTS -得到最新的,已经测试过的,稳定的ubuntu14.04 LTS镜像,你可以简单的执行: -``` -lxc launch ubuntu:14.04 -``` -这该模式下,一个任意的容器名将会被指定给它。 -如果你更喜欢指定一个你自己的命令,你可以这样做: -``` -lxc launch ubuntu:14.04 c1 -``` - -如果你想要指定一个特定的体系架构(非主要的),比如32位Intel镜像,你可以这样做: -``` -lxc launch ubuntu:14.04/i386 c2 -``` - -### 当前的Ubuntu开发版本 -上面使用的“ubuntu:”远程方式只会给你提供官方的并经过测试的ubuntu镜像。但是如果你想要未经测试过的日常构建版本,开发版可能对你来说是合适的,你将要使用“ubuntu-daily”来远程获取。 -``` -lxc launch ubuntu-daily:devel c3 -``` - -在这个例子中,最新的ubuntu开发版本将会被选自动选中。 -你也可以更加精确,比如你可以使用代号名: -``` -lxc launch ubuntu-daily:xenial c4 -``` - -### 最新的Alpine Linux -Alpine镜像在“Images:”远程中可用,可以通过如下命令执行: -``` -lxc launch images:alpine/3.3/amd64 c5 -``` - -### And many more -### 其他 -所有ubuntu镜像列表可以这样获得: -``` -lxc image list ubuntu: -lxc image list ubuntu-daily: -``` - -所有的非官方镜像: -``` -lxc image list images: -``` - -所有给定的可用远程别名清单可以这样获得(针对“ubuntu:”远程): -``` -lxc image alias list ubuntu: -``` - - -### 创建但不启动一个容器 -如果你想创建一个容器或者一批容器,但是你不想马上启动他们,你可以使用“lxc init”替换掉“lxc launch”。所有的选项都是相同的,唯一的不同就是它并不会在你创建完成之后启动容器。 -``` -lxc init ubuntu: -``` - -### 关于你的容器的信息 -### 列出所有的容器 -为了列出你的所有容器,你可以这样这做: -``` -lxc list -``` - -有大量的选项供你选择来改变被显示出来的列。在一个拥有大量容器的系统上,默认显示的列可能会有点慢(因为必须获取容器中的网络信息),你可以这样做来避免这种情况: -``` -lxc list --fast -``` - -上面的命令显示了一个不同的列的集合,这个集合在服务器端需要处理的信息更少。 -你也可以基于名字或者属性来过滤掉一些东西: -``` -stgraber@dakara:~$ lxc list security.privileged=true -+------+---------+---------------------+-----------------------------------------------+------------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+------+---------+---------------------+-----------------------------------------------+------------+-----------+ -| suse | RUNNING | 172.17.0.105 
(eth0) | 2607:f2c0:f00f:2700:216:3eff:fef2:aff4 (eth0) | PERSISTENT | 0 | -+------+---------+---------------------+-----------------------------------------------+------------+-----------+ -``` - -在这个例子中,只有那些有特权(用户命名空间不可用)的容器才会被列出来。 -``` -stgraber@dakara:~$ lxc list --fast alpine -+-------------+---------+--------------+----------------------+----------+------------+ -| NAME | STATE | ARCHITECTURE | CREATED AT | PROFILES | TYPE | -+-------------+---------+--------------+----------------------+----------+------------+ -| alpine | RUNNING | x86_64 | 2016/03/20 02:11 UTC | default | PERSISTENT | -+-------------+---------+--------------+----------------------+----------+------------+ -| alpine-edge | RUNNING | x86_64 | 2016/03/20 02:19 UTC | default | PERSISTENT | -+-------------+---------+--------------+----------------------+----------+------------+ -``` - -在这个例子中,只有在名字中带有“alpine”的容器才会被列出来(支持复杂的正则表达式)。 -### 获取容器的详细信息 -由于list命令显然不能以一种友好的可读方式显示容器的所有信息,因此你可以使用如下方式来查询单个容器的信息: -``` -lxc info -``` - -例如: -``` -stgraber@dakara:~$ lxc info zerotier -Name: zerotier -Architecture: x86_64 -Created: 2016/02/20 20:01 UTC -Status: Running -Type: persistent -Profiles: default -Pid: 31715 -Processes: 32 -Ips: - eth0: inet 172.17.0.101 - eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8 - eth0: inet6 fe80::216:3eff:feec:65a8 - lo: inet 127.0.0.1 - lo: inet6 ::1 - lxcbr0: inet 10.0.3.1 - lxcbr0: inet6 fe80::c0a4:ceff:fe52:4d51 - zt0: inet 29.17.181.59 - zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1 - zt0: inet6 fe80::79:e7ff:fe0d:5123 -Snapshots: - zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless) - ``` - -### 生命周期管理命令 -这些命令对于任何容器或者虚拟机管理器或许都是最普通的命令,但是它们仍然需要被涉及到。 -所有的这些命令在批量操作时都能接受多个容器名。 -### 启动 -启动一个容器就向下面一样简单: -``` -lxc start -``` - -### 停止 -停止一个容器可以这样来完成: -``` -lxc stop -``` - -如果容器不合作(即没有对发出的信号产生回应),这时候,你可以使用下面的方式强制执行: -``` -lxc stop --force -``` - -### 重启 -通过下面的命令来重启一个容器: -``` -lxc restart -``` - -如果容器不合作(即没有对发出的信号产生回应),你可以使用下面的方式强制执行: -``` -lxc restart --force -``` - 
-### 暂停 -你也可以“暂停”一个容器。在这种模式下,所有的容器任务都会被冻结,这也意味着它们仍然是可见的,并且仍然会占用内存,但是它们不会从调度程序中得到任何的 CPU 时间片。 -如果你有一个很耗 CPU 的容器,而这个容器需要一点时间来启动,但是你却并不会经常用到它。这时候,你可以先启动它,然后将它暂停,并在你需要它的时候再恢复它。 -``` -lxc pause <容器名> -``` - -### 删除 -最后,如果你不需要这个容器了,你可以用下面的命令删除它: -``` -lxc delete <容器名> -``` - -注意,如果容器还处于运行状态,你将必须使用“--force”。 -### 容器的配置 -LXD拥有大量的容器配置设定,包括资源限制、容器启动控制以及对各种设备是否允许访问的配置选项。完整的清单因为太长所以并没有在本文中列出,但是,你可以从[这里][2]获取它。 -就设备而言,LXD当前支持下面列出的这些设备类型: -- 磁盘 -既可以是一块物理磁盘,也可以只是一个被挂载到容器上的分区,还可以是一个来自主机的绑定挂载路径。 -- 网络接口卡 -一块网卡。它可以是一块桥接的虚拟网卡,或者是一块点对点设备,还可以是一块以太局域网设备或者一块已经被连接到容器的真实物理接口。 -- unix块 -一个 UNIX 块设备,比如 /dev/sda -- unix字符 -一个 UNIX 字符设备,比如 /dev/kvm -- none -这种特殊类型被用来隐藏那种可以通过 profiles 文件被继承的设备。 - -### 配置profiles文件 -所有可用的 profiles 文件列表可以这样获取: -``` -lxc profile list -``` - -要看到给定 profile 文件的内容,最简单的方式是这样做: -``` -lxc profile show <profile名> -``` - -你可能想要修改文件里面的内容,可以这样做: -``` -lxc profile edit <profile名> -``` - -你可以使用如下命令来修改应用到给定容器的 profiles 列表: -``` -lxc profile apply <容器名> <profile1>,<profile2>,<profile3>,... -``` - -### 本地配置 -对于那些某个容器特有、因而不适合放到 profile 文件里的配置项,你可以直接针对容器设置: - -``` -lxc config edit <容器名> -``` - -上面的命令将会完成和“profile edit”命令一样的功能。 - -如果不想在文本编辑器中打开整个文件的内容,你也可以像这样修改单独的键: -``` -lxc config set <容器名> <键> <值> -``` -或者添加设备,例如: -``` -lxc config device add my-container kvm unix-char path=/dev/kvm -``` - -上面的命令将会为名为“my-container”的容器添加一个 /dev/kvm 入口。 -对一个 profile 文件使用“lxc profile set”和“lxc profile device add”命令也能实现同样的功能。 -#### 读取配置 -你可以使用如下命令来阅读容器的本地配置: -``` -lxc config show <容器名> -``` - -或者得到已经被展开了的配置(包含了所有的键值): -``` -lxc config show --expanded <容器名> -``` - -例如: -``` -stgraber@dakara:~$ lxc config show --expanded zerotier -name: zerotier -profiles: -- default -config: - security.nesting: "true" - user.a: b - volatile.base_image: a49d26ce5808075f5175bf31f5cb90561f5023dcd408da8ac5e834096d46b2d8 - volatile.eth0.hwaddr: 00:16:3e:ec:65:a8 - volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
-devices: - eth0: - name: eth0 - nictype: macvlan - parent: eth0 - type: nic - limits.ingress: 10Mbit - limits.egress: 10Mbit - root: - path: / - size: 30GB - type: disk - tun: - path: /dev/net/tun - type: unix-char -ephemeral: false -``` - -这样做可以很方便地检查有哪些配置属性被应用到了给定的容器。 -### 实时配置更新 -注意,除非在文档中已经被明确指出,否则所有的键值和设备入口都会实时应用到受影响的容器。这意味着你可以在不重启正在运行的容器的情况下添加和移除设备,或者修改它的安全 profile 配置文件。 -### 获得一个 shell -LXD 允许你对容器中的任务进行直接的操作。最常用的做法是在容器中得到一个 shell 或者执行一些管理员任务。 -和 SSH 相比,这样做的好处是你不需要容器能够通过网络访问,也不依赖容器内部的任何软件或者配置。 -#### 执行环境 -LXD 在容器内执行命令的方式有一点不同,那就是它自身并不是在容器中运行。这也意味着它不知道该使用什么样的 shell,以及设置什么样的环境变量和哪里是它的家目录。 -通过 LXD 来执行命令总是以容器的超级用户身份来执行(即 uid 为 0,gid 为 0),并带有最精简的 PATH 环境变量,且 HOME 环境变量被设置为 /root。 -其他的环境变量可以通过命令行来设置,或者通过“environment.<键名>”配置项设置成永久环境变量。 -### 执行命令 -在容器中获得一个 shell 可以简单地执行下列命令: -``` -lxc exec <容器名> bash -``` - -当然,这样做的前提是容器内已经安装了 bash。 - -更复杂的命令则要求正确使用参数分隔符“--”。 -``` -lxc exec <容器名> -- ls -lh / -``` - -如果想要设置或者重写变量,你可以使用“--env”参数,例如: -``` -stgraber@dakara:~$ lxc exec zerotier --env mykey=myvalue env | grep mykey -mykey=myvalue -``` - -### 管理文件 -因为 LXD 可以直接访问容器的文件系统,因此,它可以直接从容器中读取和写入任意文件。当我们需要提取日志文件或者与容器交换文件时,这个特性是很有用的。 -#### 从容器中取回一个文件 -想要从容器中获得一个文件,简单地执行下列命令: -``` -lxc file pull <容器名>/<路径> <目标路径> -``` - -例如: -``` -stgraber@dakara:~$ lxc file pull zerotier/etc/hosts hosts -``` - -或者将它读取到标准输出: -``` -stgraber@dakara:~$ lxc file pull zerotier/etc/hosts - -127.0.0.1 localhost - -# 下面的所有行对于支持IPv6的主机是有用的 -::1 ip6-localhost ip6-loopback -fe00::0 ip6-localnet -ff00::0 ip6-mcastprefix -ff02::1 ip6-allnodes -ff02::2 ip6-allrouters -ff02::3 ip6-allhosts -``` - -#### 向容器发送一个文件 - -发送则以相反的方式完成: -``` -lxc file push <源路径> <容器名>/<路径> -``` - -#### 直接编辑一个文件 -编辑是一个很方便的功能,它会提取给定的路径,在你的默认文本编辑器中打开它,并在你关闭编辑器时自动把改动保存回容器。 -``` -lxc file edit <容器名>/<路径> -``` - -### 快照管理 -LXD 允许你对容器做快照并恢复它。快照包括了容器在某一时刻的完整状态(如果使用了 --stateful 选项的话还会包括运行状态),这也意味着所有的容器配置、容器设备和容器文件系统都会被保存。 -#### 创建一个快照 -你可以使用下面的命令来创建快照: -``` -lxc snapshot <容器名> -``` - -命令执行完成之后将会生成名为 snapX(X 为一个自动增长的数)的快照。 -除此之外,你还可以使用如下命令给你的快照命名: -``` -lxc snapshot <容器名> <快照名> -``` - -#### 列出所有的快照 -一个容器的快照数量可以使用“lxc
list”来得到,但是真正的快照列表只能通过“lxc info”命令才能看到。 -``` -lxc info <容器名> -``` - -#### 恢复快照 -要恢复快照,你可以简单地执行下面的命令: -``` -lxc restore <容器名> <快照名> -``` - -#### 给快照重命名 -可以使用如下命令来给快照重命名: -``` -lxc move <容器名>/<快照名> <容器名>/<新快照名> -``` - -#### 从快照中创建一个新的容器 -你可以使用快照来创建一个新的容器,这个新的容器除了一些可变的信息将会被重置之外(例如 MAC 地址),其余所有信息都将和快照完全相同。 -``` -lxc copy <源容器名>/<快照名> <目标容器名> -``` - -#### 删除一个快照 - -最后,你可以执行下面的命令来删除一个快照: -``` -lxc delete <容器名>/<快照名> -``` - -### 克隆并重命名 -得到一个纯净的发行版镜像总是让人感到愉悦,但是,有时候你想安装一系列的软件到你的容器中,配置好它,然后将它作为基础分化出多个其他的容器。 -#### 复制一个容器 -要复制一个容器并有效地将它克隆到一个新的容器中,你可以执行下面的命令: -``` -lxc copy <源容器名> <目标容器名> -``` -目标容器在所有方面将会完全和源容器等同,除了新的容器没有任何源容器的快照,以及一些可变值将会被重置之外(例如 MAC 地址)。 -#### 移动一个容器 -LXD 允许你复制容器并在主机之间移动它。但是,关于这一点将在后面的文章中介绍。 -现在,“move”命令可以被用来给容器重命名。 -``` -lxc move <旧容器名> <新容器名> -``` - -唯一的要求就是容器必须处于停止状态,容器内的任何事物都会被保存成它本来的样子,包括可变化的信息(类似 MAC 地址等)。 -### 结论 -这篇如此长的文章介绍了大多数你可能会在日常操作中使用到的命令。 -很显然,这些命令中的很多都还有额外的参数,可以让你的命令更有效率,或者可以让你指定你的 LXD 容器的某个具体方面。最好的学习这些命令的方式就是深入学习它们的帮助文档,当然,只需要学习你真正需要用到的那些。 -### 额外的信息 -LXD 的主要网站是: -Github 上的开发说明: -邮件列表支持: -IRC 支持: #lxcontainers on irc.freenode.net -如果你不想或者不能在你的机器上安装 LXD,你可以[在线尝试一下][1]!
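顺带一提,`lxc list` 这类表格输出用标准的 shell 工具很容易进一步处理。下面是一个示意(为了能在没有 LXD 守护进程的环境下运行,这里用一段仿照上文格式编造的示例输出来代替真实命令;其中的容器名和状态都是假设的):

```shell
#!/bin/bash
# 用 awk 从 lxc list 风格的表格输出中提取所有处于 RUNNING 状态的容器名。
# sample 里是编造的示例输出;真实环境下可以改用:sample=$(lxc list)
sample='+-------------+---------+
| NAME        | STATE   |
+-------------+---------+
| alpine      | RUNNING |
+-------------+---------+
| alpine-edge | STOPPED |
+-------------+---------+'

# 以“|”为分隔符取第二列(容器名),并去掉空格
echo "$sample" | awk -F'|' '/RUNNING/ { gsub(/ /, "", $2); print $2 }'
# 输出:alpine
```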
- --------------------------------------------------------------------------------- - -来自于:https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/ -作者:[Stéphane Graber][a] -译者:[kylepeng93](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.stgraber.org/author/stgraber/ -[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/ -[1]: https://linuxcontainers.org/lxd/try-it -[2]: https://github.com/lxc/lxd/blob/master/doc/configuration.md From dc14c2108135caa13e2de3593365fb1fcc494723 Mon Sep 17 00:00:00 2001 From: "Cathon.ZHD" Date: Tue, 23 Aug 2016 09:09:00 +0800 Subject: [PATCH 458/471] cathon is translating --- sources/tech/20160608 Simple Python Framework from Scratch.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20160608 Simple Python Framework from Scratch.md b/sources/tech/20160608 Simple Python Framework from Scratch.md index f4e7ce0518..b4e9d20c1b 100644 --- a/sources/tech/20160608 Simple Python Framework from Scratch.md +++ b/sources/tech/20160608 Simple Python Framework from Scratch.md @@ -1,3 +1,4 @@ +[Cathon is translating...] 
Simple Python Framework from Scratch =================================== From 612fd72e777ed603552a1853a9621f4c1af29366 Mon Sep 17 00:00:00 2001 From: NearTan Date: Tue, 23 Aug 2016 09:13:43 +0800 Subject: [PATCH 459/471] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E8=AE=A4=E9=A2=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20160813 Journey-to-HTTP2.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20160813 Journey-to-HTTP2.md b/sources/tech/20160813 Journey-to-HTTP2.md index 706ec1485f..e75737f57e 100644 --- a/sources/tech/20160813 Journey-to-HTTP2.md +++ b/sources/tech/20160813 Journey-to-HTTP2.md @@ -1,3 +1,5 @@ +NearTan 认领 + Journey to HTTP/2 =================== From 5b8d2d6419664598116782e6d73531cc909375be Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 23 Aug 2016 10:25:07 +0800 Subject: [PATCH 460/471] =?UTF-8?q?20160823-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow Awk to Use Shell Variables – Part 11.md | 106 ------- ...low Control Statements in Awk – Part 12.md | 259 ++++++++++++++++++ 2 files changed, 259 insertions(+), 106 deletions(-) delete mode 100644 sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md create mode 100644 sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md diff --git a/sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md b/sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md deleted file mode 100644 index b08f3fa15d..0000000000 --- a/sources/tech/awk/20160802 How to Allow Awk to Use Shell Variables – Part 11.md +++ /dev/null @@ -1,106 +0,0 @@ -How to Allow Awk to Use Shell Variables – Part 11 -=========================================== - -When we write shell scripts, we normally include other smaller programs or commands such as Awk operations in our scripts. 
In the case of Awk, we have to find ways of passing some values from the shell to Awk operations. - -This can be done by using shell variables within Awk commands, and in this part of the series, we shall learn how to allow Awk to use shell variables that may contain values we want to pass to Awk commands. - -There possibly two ways you can enable Awk to use shell variables: - -### 1. Using Shell Quoting - -Let us take a look at an example to illustrate how you can actually use shell quoting to substitute the value of a shell variable in an Awk command. In this example, we want to search for a username in the file /etc/passwd, filter and print the user’s account information. - -Therefore, we can write a `test.sh` script with the following content: - -``` -#!/bin/bash - -#read user input -read -p "Please enter username:" username - -#search for username in /etc/passwd file and print details on the screen -cat /etc/passwd | awk "/$username/ "' { print $0 }' -``` - -Thereafter, save the file and exit. - -Interpretation of the Awk command in the test.sh script above: - -``` -cat /etc/passwd | awk "/$username/ "' { print $0 }' -``` - -`"/$username/ "` – shell quoting used to substitute value of shell variable username in Awk command. The value of username is the pattern to be searched in the file /etc/passwd. - -Note that the double quote is outside the Awk script, `‘{ print $0 }’`. - -Then make the script executable and run it as follows: - -``` -$ chmod +x test.sh -$ ./text.sh -``` - -After running the script, you will be prompted to enter a username, type a valid username and hit Enter. You will view the user’s account details from the /etc/passwd file as below: - -![](http://www.tecmint.com/wp-content/uploads/2016/08/Shell-Script-to-Find-Username-in-Passwd-File.png) - -### 2. Using Awk’s Variable Assignment - -This method is much simpler and better in comparison to method one above. Considering the example above, we can run a simple command to accomplish the job. 
Under this method, we use the -v option to assign a shell variable to an Awk variable. - -Firstly, create a shell variable, username and assign it the name that we want to search in the /etc/passwd file: - -``` -username="aaronkilik" -``` - -Then type the command below and hit Enter: - -``` -# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/08/Find-Username-in-Password-File-Using-Awk.png) - -Explanation of the above command: - -- `-v` – Awk option to declare a variable -- `username` – is the shell variable -- `name` – is the Awk variable - -Let us take a careful look at `$0 ~ name` inside the Awk script, `' $0 ~ name {print $0}'`. Remember, when we covered Awk comparison operators in Part 4 of this series, one of the comparison operators was value ~ pattern, which means: true if value matches the pattern. - -The `output($0)` of cat command piped to Awk matches the pattern (aaronkilik) which is the name we are searching for in /etc/passwd, as a result, the comparison operation is true. The line containing the user's account information is then printed on the screen. - -### Conclusion - -We have covered an important section of Awk features, that can help us use shell variables within Awk commands. Many times, you will write small Awk programs or commands within shell scripts and therefore, you need to have a clear understanding of how to use shell variables within Awk commands. - -In the next part of the Awk series, we shall dive into yet another critical section of Awk features, that is flow control statements. So stay tuned and let's keep learning and sharing.
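As a quick, self-contained illustration of the `-v` method above, here is the same search run against a few made-up passwd-style lines instead of the real /etc/passwd (the sample entries are hypothetical, purely for demonstration):

```shell
#!/bin/bash
# Pass a shell variable into Awk with -v and use it as a search pattern.
username="aaronkilik"

# A small passwd-style sample instead of the real /etc/passwd
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'aaronkilik:x:1000:1000::/home/aaronkilik:/bin/bash' \
  'guest:x:1001:1001::/home/guest:/bin/sh' |
awk -v name="$username" '$0 ~ name { print $0 }'
# prints: aaronkilik:x:1000:1000::/home/aaronkilik:/bin/bash
```

Only the line matching the pattern held in the Awk variable `name` is printed, exactly as with the real /etc/passwd.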
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/use-shell-script-variable-in-awk/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ - - - - - - - - - - - - - diff --git a/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md b/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md new file mode 100644 index 0000000000..44748caafa --- /dev/null +++ b/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md @@ -0,0 +1,259 @@ +How to Use Flow Control Statements in Awk – Part 12 +=========================================== + +When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that all the commands in the various examples are executed sequentially, that is one after the other. But in certain situations, we may want to run some text filtering operations based on some conditions, that is where the approach of flow control statements sets in. + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png) + +There are various flow control statements in Awk programming and these include: + +- if-else statement +- for statement +- while statement +- do-while statement +- break statement +- continue statement +- next statement +- nextfile statement +- exit statement + +However, for the scope of this series, we shall expound on: if-else, for, while and do while statements. Remember that we already walked through how to use next statement in Part 6 of this Awk series. + +### 1. 
The if-else Statement + +The expected syntax of the if statement is similar to that of the shell if statement: + +``` +if (condition1) { + actions1 +} +else { + actions2 +} +``` + +In the above syntax, condition1 and condition2 are Awk expressions, and actions1 and actions2 are Awk commands executed when the respective conditions are satisfied. + +When condition1 is satisfied, meaning it’s true, then actions1 is executed and the if statement exits, otherwise actions2 is executed. + +The if statement can also be expanded to a if-else_if-else statement as below: + +``` +if (condition1){ + actions1 +} +else if (conditions2){ + actions2 +} +else{ + actions3 +} +``` + +For the form above, if condition1 is true, then actions1 is executed and the if statement exits, otherwise condition2 is evaluated and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is false then, actions3 is executed and the if statement exits. + +Here is a case in point of using if statements, we have a list of users and their ages stored in the file, users.txt. + +We want to print a statement indicating a user’s name and whether the user’s age is less or more than 25 years old. + +``` +aaronkilik@tecMint ~ $ cat users.txt +Sarah L 35 F +Aaron Kili 40 M +John Doo 20 M +Kili Seth 49 M +``` + +We can write a short shell script to carry out our job above, here is the content of the script: + +``` +#!/bin/bash +awk ' { + if ( $3 <= 25 ){ + print "User",$1,$2,"is less than 25 years old." ; + } + else { + print "User",$1,$2,"is more than 25 years old" ; +} +}' ~/users.txt +``` + +Then save the file and exit, make the script executable and run it as follows: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +Sample Output + +``` +User Sarah L is more than 25 years old +User Aaron Kili is more than 25 years old +User John Doo is less than 25 years old. +User Kili Seth is more than 25 years old +``` + +### 2. 
The for Statement + +In case you want to execute some Awk commands in a loop, then the for statement offers you a suitable way to do that, with the syntax below: + +Here, the approach is simply defined by the use of a counter to control the loop execution: first you need to initialize the counter, then run it against a test condition; if the condition is true, execute the actions and finally increment the counter. The loop terminates when the counter does not satisfy the condition. + +``` +for ( counter-initialization; test-condition; counter-increment ){ + actions +} +``` + +The following Awk command shows how the for statement works, where we want to print the numbers 0-10: + +``` +$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }' +``` + +Sample Output + +``` +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +``` + +### 3. The while Statement + +The conventional syntax of the while statement is as follows: + +``` +while ( condition ) { + actions +} +``` + +The condition is an Awk expression and actions are lines of Awk commands executed when the condition is true. + +Below is a script to illustrate the use of while statement to print the numbers 0-10: + +``` +#!/bin/bash +awk ' BEGIN{ counter=0 ; + + while(counter<=10){ + print counter; + counter+=1 ; + +} +} +' +``` + +Save the file and make the script executable, then run it: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +Sample Output + +``` +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +``` + +### 4. The do while Statement + +It is a modification of the while statement above, with the following underlying syntax: + +``` +do { + actions +} + while (condition) +``` + +The slight difference is that, under do while, the Awk commands are executed before the condition is evaluated.
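To see that difference concretely, consider a case where the condition is false from the very start: `while` prints nothing, but `do while` still runs its body once. A small runnable sketch (not from the article itself, just for illustration):

```shell
#!/bin/bash
# counter starts at 100, so the condition (counter<=10) is false immediately

# while: the condition is checked first, so the body never runs - no output
awk 'BEGIN{ counter=100; while (counter<=10) { print counter; counter+=1 } }'

# do while: the body runs once before the first check - prints 100
awk 'BEGIN{ counter=100; do { print counter; counter+=1 } while (counter<=10) }'
```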
Using the very example under while statement above, we can illustrate the use of do while by altering the Awk command in the test.sh script as follows: + +``` +#!/bin/bash + +awk ' BEGIN{ counter=0 ; + do{ + print counter; + counter+=1 ; + } + while (counter<=10) +} +' +``` + +After modifying the script, save the file and exit. Then make the script executable and execute it as follows: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +Sample Output + +``` +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +``` + +### Conclusion + +This is not a comprehensive guide regarding Awk flow control statements, as I had mentioned earlier on, there are several other flow control statements in Awk. + +Nonetheless, this part of the Awk series should give you a clear fundamental idea of how execution of Awk commands can be controlled based on certain conditions. + +You can as well expound more on the rest of the flow control statements to gain more understanding on the subject matter. Finally, in the next section of the Awk series, we shall move into writing Awk scripts. +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ + + + + + + + + + + + + + From b5d40bee61efefec8ffbea47c8625a303f35942e Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Aug 2016 11:48:33 +0800 Subject: [PATCH 461/471] PUB:20160512 Bitmap in Linux Kernel MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @cposture 翻译的很仔细,很好! 
--- .../20160512 Bitmap in Linux Kernel.md | 82 +++++++++---------- 1 file changed, 38 insertions(+), 44 deletions(-) rename {translated/tech => published}/20160512 Bitmap in Linux Kernel.md (68%) diff --git a/translated/tech/20160512 Bitmap in Linux Kernel.md b/published/20160512 Bitmap in Linux Kernel.md similarity index 68% rename from translated/tech/20160512 Bitmap in Linux Kernel.md rename to published/20160512 Bitmap in Linux Kernel.md index 6475b9260e..3c4117aece 100644 --- a/translated/tech/20160512 Bitmap in Linux Kernel.md +++ b/published/20160512 Bitmap in Linux Kernel.md @@ -1,16 +1,10 @@ ---- -date: 2016-07-09 14:42 -status: public -title: 20160512 Bitmap in Linux Kernel ---- - -Linux 内核里的数据结构 +Linux 内核里的数据结构——位数组 ================================================================================ Linux 内核中的位数组和位操作 -------------------------------------------------------------------------------- -除了不同的基于[链式](https://en.wikipedia.org/wiki/Linked_data_structure)和[树](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)的数据结构以外,Linux 内核也为[位数组](https://en.wikipedia.org/wiki/Bit_array)或`位图`提供了 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。位数组在 Linux 内核里被广泛使用,并且在以下的源代码文件中包含了与这样的结构搭配使用的通用 `API`: +除了不同的基于[链式](https://en.wikipedia.org/wiki/Linked_data_structure)和[树](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)的数据结构以外,Linux 内核也为[位数组](https://en.wikipedia.org/wiki/Bit_array)(或称为位图(bitmap))提供了 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。位数组在 Linux 内核里被广泛使用,并且在以下的源代码文件中包含了与这样的结构搭配使用的通用 `API`: * [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) * [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) @@ -19,14 +13,14 @@ Linux 内核中的位数组和位操作 * [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) -头文件。正如我上面所写的,`位图`在 Linux 
内核中被广泛地使用。例如,`位数组`常常用于保存一组在线/离线处理器,以便系统支持[热插拔](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)的 CPU(你可以在 [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) 部分阅读更多相关知识 ),一个`位数组`可以在 Linux 内核初始化等期间保存一组已分配的中断处理。 +头文件。正如我上面所写的,`位图`在 Linux 内核中被广泛地使用。例如,`位数组`常常用于保存一组在线/离线处理器,以便系统支持[热插拔](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)的 CPU(你可以在 [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) 部分阅读更多相关知识 ),一个位数组(bit array)可以在 Linux 内核初始化等期间保存一组已分配的[中断处理](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)。 -因此,本部分的主要目的是了解位数组是如何在 Linux 内核中实现的。让我们现在开始吧。 +因此,本部分的主要目的是了解位数组(bit array)是如何在 Linux 内核中实现的。让我们现在开始吧。 位数组声明 ================================================================================ -在我们开始查看位图操作的 `API` 之前,我们必须知道如何在 Linux 内核中声明它。有两中通用的方法声明位数组。第一种简单的声明一个位数组的方法是,定义一个 unsigned long 的数组,例如: +在我们开始查看`位图`操作的 `API` 之前,我们必须知道如何在 Linux 内核中声明它。有两种声明位数组的通用方法。第一种简单的声明一个位数组的方法是,定义一个 `unsigned long` 的数组,例如: ```C unsigned long my_bitmap[8] @@ -44,7 +38,7 @@ unsigned long my_bitmap[8] * `name` - 位图名称; * `bits` - 位图中位数; -并且只是使用 `BITS_TO_LONGS(bits)` 元素展开 `unsigned long` 数组的定义。 `BITS_TO_LONGS` 宏将一个给定的位数转换为 `longs` 的个数,换言之,就是计算 `bits` 中有多少个 `8` 字节元素: +并且只是使用 `BITS_TO_LONGS(bits)` 元素展开 `unsigned long` 数组的定义。 `BITS_TO_LONGS` 宏将一个给定的位数转换为 `long` 的个数,换言之,就是计算 `bits` 中有多少个 `8` 字节元素: ```C #define BITS_PER_BYTE 8 @@ -70,18 +64,18 @@ unsigned long my_bitmap[1]; 体系结构特定的位操作 ================================================================================ -我们已经看了以上一对源文件和头文件,它们提供了位数组操作的 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。其中重要且广泛使用的位数组 API 是体系结构特定的且位于已提及的头文件中 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h)。 +我们已经看了上面提及的一对源文件和头文件,它们提供了位数组操作的 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。其中重要且广泛使用的位数组 API 是体系结构特定的且位于已提及的头文件中 
[arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h)。 首先让我们查看两个最重要的函数: * `set_bit`; * `clear_bit`. -我认为没有必要解释这些函数的作用。从它们的名字来看,这已经很清楚了。让我们直接查看它们的实现。如果你浏览 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,你将会注意到这些函数中的每一个都有[原子性](https://en.wikipedia.org/wiki/Linearizability)和非原子性两种变体。在我们开始深入这些函数的实现之前,首先,我们必须了解一些有关原子操作的知识。 +我认为没有必要解释这些函数的作用。从它们的名字来看,这已经很清楚了。让我们直接查看它们的实现。如果你浏览 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,你将会注意到这些函数中的每一个都有[原子性](https://en.wikipedia.org/wiki/Linearizability)和非原子性两种变体。在我们开始深入这些函数的实现之前,首先,我们必须了解一些有关原子(atomic)操作的知识。 -简而言之,原子操作保证两个或以上的操作不会并发地执行同一数据。`x86` 体系结构提供了一系列原子指令,例如, [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html)、[cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) 等指令。除了原子指令,一些非原子指令可以在 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令的帮助下具有原子性。目前已经对原子操作有了充分的理解,我们可以接着探讨 `set_bit` 和 `clear_bit` 函数的实现。 +简而言之,原子操作保证两个或以上的操作不会并发地执行同一数据。`x86` 体系结构提供了一系列原子指令,例如, [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html)、[cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) 等指令。除了原子指令,一些非原子指令可以在 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令的帮助下具有原子性。现在你已经对原子操作有了足够的了解,我们可以接着探讨 `set_bit` 和 `clear_bit` 函数的实现。 -我们先考虑函数的非原子性变体。非原子性的 `set_bit` 和 `clear_bit` 的名字以双下划线开始。正如我们所知道的,所有这些函数都定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并且第一个函数就是 `__set_bit`: +我们先考虑函数的非原子性(non-atomic)变体。非原子性的 `set_bit` 和 `clear_bit` 的名字以双下划线开始。正如我们所知道的,所有这些函数都定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并且第一个函数就是 `__set_bit`: ```C static inline void __set_bit(long nr, volatile unsigned long *addr) @@ -92,25 +86,25 @@ static inline void 
__set_bit(long nr, volatile unsigned long *addr) 正如我们所看到的,它使用了两个参数: -* `nr` - 位数组中的位号(从0开始,译者注) +* `nr` - 位数组中的位号(LCTT 译注:从 0开始) * `addr` - 我们需要置位的位数组地址 -注意,`addr` 参数使用 `volatile` 关键字定义,以告诉编译器给定地址指向的变量可能会被修改。 `__set_bit` 的实现相当简单。正如我们所看到的,它仅包含一行[内联汇编代码](https://en.wikipedia.org/wiki/Inline_assembler)。在我们的例子中,我们使用 [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) 指令,从位数组中选出一个第一操作数(我们的例子中的 `nr`),存储选出的位的值到 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 标志寄存器并设置该位(即 `nr` 指定的位置为1,译者注)。 +注意,`addr` 参数使用 `volatile` 关键字定义,以告诉编译器给定地址指向的变量可能会被修改。 `__set_bit` 的实现相当简单。正如我们所看到的,它仅包含一行[内联汇编代码](https://en.wikipedia.org/wiki/Inline_assembler)。在我们的例子中,我们使用 [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) 指令,从位数组中选出一个第一操作数(我们的例子中的 `nr`)所指定的位,存储选出的位的值到 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 标志寄存器并设置该位(LCTT 译注:即 `nr` 指定的位置为 1)。 -注意,我们了解了 `nr` 的用法,但这里还有一个参数 `addr` 呢!你或许已经猜到秘密就在 `ADDR`。 `ADDR` 是一个定义在同一头文件的宏,它展开为一个包含给定地址和 `+m` 约束的字符串: +注意,我们了解了 `nr` 的用法,但这里还有一个参数 `addr` 呢!你或许已经猜到秘密就在 `ADDR`。 `ADDR` 是一个定义在同一个头文件中的宏,它展开为一个包含给定地址和 `+m` 约束的字符串: ```C #define ADDR BITOP_ADDR(addr) #define BITOP_ADDR(x) "+m" (*(volatile long *) (x)) ``` -除了 `+m` 之外,在 `__set_bit` 函数中我们可以看到其他约束。让我们查看并试图理解它们所表示的意义: +除了 `+m` 之外,在 `__set_bit` 函数中我们可以看到其他约束。让我们查看并试着理解它们所表示的意义: -* `+m` - 表示内存操作数,这里的 `+` 表明给定的操作数为输入输出操作数; -* `I` - 表示整型常量; +* `+m` - 表示内存操作数,这里的 `+` 表明给定的操作数为输入输出操作数; +* `I` - 表示整型常量; * `r` - 表示寄存器操作数 -除了这些约束之外,我们也能看到 `memory` 关键字,其告诉编译器这段代码会修改内存中的变量。到此为止,现在我们看看相同的原子性变体函数。它看起来比非原子性变体更加复杂: +除了这些约束之外,我们也能看到 `memory` 关键字,其告诉编译器这段代码会修改内存中的变量。到此为止,现在我们看看相同的原子性(atomic)变体函数。它看起来比非原子性(non-atomic)变体更加复杂: ```C static __always_inline void @@ -128,7 +122,7 @@ set_bit(long nr, volatile unsigned long *addr) } ``` -(BITOP_ADDR 的定义为:`#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))`,ORB 为字节按位或,译者注) +(LCTT 译注:BITOP_ADDR 的定义为:`#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))`,ORB 为字节按位或。) 首先注意,这个函数使用了与 `__set_bit` 相同的参数集合,但额外地使用了 `__always_inline` 属性标记。 `__always_inline` 
是一个定义于 [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) 的宏,并且只是展开为 `always_inline` 属性: @@ -136,19 +130,19 @@ set_bit(long nr, volatile unsigned long *addr) #define __always_inline inline __attribute__((always_inline)) ``` -其意味着这个函数总是内联的,以减少 Linux 内核映像的大小。现在我们试着了解 `set_bit` 函数的实现。首先我们在 `set_bit` 函数的开头检查给定的位数量。`IS_IMMEDIATE` 宏定义于相同[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h),并展开为 gcc 内置函数的调用: +其意味着这个函数总是内联的,以减少 Linux 内核映像的大小。现在让我们试着了解下 `set_bit` 函数的实现。首先我们在 `set_bit` 函数的开头检查给定的位的数量。`IS_IMMEDIATE` 宏定义于相同的[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h),并展开为 [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) 内置函数的调用: ```C #define IS_IMMEDIATE(nr) (__builtin_constant_p(nr)) ``` -如果给定的参数是编译期已知的常量,`__builtin_constant_p` 内置函数则返回 `1`,其他情况返回 `0`。假若给定的位数是编译期已知的常量,我们便无须使用效率低下的 `bts` 指令去设置位。我们可以只需在给定地址指向的字节和和掩码上执行 [按位或](https://en.wikipedia.org/wiki/Bitwise_operation#OR) 操作,其字节包含给定的位,而掩码为位号高位 `1`,其他位为 0。在其他情况下,如果给定的位号不是编译期已知常量,我们便做和 `__set_bit` 函数一样的事。`CONST_MASK_ADDR` 宏: +如果给定的参数是编译期已知的常量,`__builtin_constant_p` 内置函数则返回 `1`,其他情况返回 `0`。假若给定的位数是编译期已知的常量,我们便无须使用效率低下的 `bts` 指令去设置位。我们可以只需在给定地址指向的字节上执行 [按位或](https://en.wikipedia.org/wiki/Bitwise_operation#OR) 操作,其字节包含给定的位,掩码位数表示高位为 `1`,其他位为 0 的掩码。在其他情况下,如果给定的位号不是编译期已知常量,我们便做和 `__set_bit` 函数一样的事。`CONST_MASK_ADDR` 宏: ```C #define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + ((nr)>>3)) ``` -展开为带有到包含给定位的字节偏移的给定地址,例如,我们拥有地址 `0x1000` 和 位号是 `0x9`。因为 `0x9` 是 `一个字节 + 一位`,所以我们的地址是 `addr + 1`: +展开为带有到包含给定位的字节偏移的给定地址,例如,我们拥有地址 `0x1000` 和位号 `0x9`。因为 `0x9` 代表 `一个字节 + 一位`,所以我们的地址是 `addr + 1`: ```python >>> hex(0x1000 + (0x9 >> 3)) @@ -166,7 +160,7 @@ set_bit(long nr, volatile unsigned long *addr) '0b10' ``` -最后,我们应用 `按位或` 运算到这些变量上面,因此,假如我们的地址是 `0x4097` ,并且我们需要置位号为 `9` 的位 为 1: +最后,我们应用 `按位或` 运算到这些变量上面,因此,假如我们的地址是 `0x4097` ,并且我们需要置位号为 `9` 的位为 1: ```python >>> bin(0x4097) @@ -175,11 +169,11 @@ 
set_bit(long nr, volatile unsigned long *addr) '0b100010' ``` -`第 9 位` 将会被置位。(这里的 9 是从 0 开始计数的,比如0010,按照作者的意思,其中的 1 是第 1 位,译者注) +`第 9 位` 将会被置位。(LCTT 译注:这里的 9 是从 0 开始计数的,比如0010,按照作者的意思,其中的 1 是第 1 位) 注意,所有这些操作使用 `LOCK_PREFIX` 标记,其展开为 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令,保证该操作的原子性。 -正如我们所知,除了 `set_bit` 和 `__set_bit` 操作之外,Linux 内核还提供了两个功能相反的函数,在原子性和非原子性的上下文中清位。它们为 `clear_bit` 和 `__clear_bit`。这两个函数都定义于同一个[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 并且使用相同的参数集合。不仅参数相似,一般而言,这些函数与 `set_bit` 和 `__set_bit` 也非常相似。让我们查看非原子性 `__clear_bit` 的实现吧: +正如我们所知,除了 `set_bit` 和 `__set_bit` 操作之外,Linux 内核还提供了两个功能相反的函数,在原子性和非原子性的上下文中清位。它们是 `clear_bit` 和 `__clear_bit`。这两个函数都定义于同一个[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 并且使用相同的参数集合。不仅参数相似,一般而言,这些函数与 `set_bit` 和 `__set_bit` 也非常相似。让我们查看非原子性 `__clear_bit` 的实现吧: ```C static inline void __clear_bit(long nr, volatile unsigned long *addr) @@ -188,7 +182,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr) } ``` -没错,正如我们所见,`__clear_bit` 使用相同的参数集合,并包含极其相似的内联汇编代码块。它仅仅使用 [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) 指令替换 `bts`。正如我们从函数名所理解的一样,通过给定地址,它清除了给定的位。`btr` 指令表现得像 `bts`(原文这里为 btr,可能为笔误,修正为 bts,译者注)。该指令选出第一操作数指定的位,存储它的值到 `CF` 标志寄存器,并且清楚第二操作数指定的位数组中的对应位。 +没错,正如我们所见,`__clear_bit` 使用相同的参数集合,并包含极其相似的内联汇编代码块。它只是使用 [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) 指令替换了 `bts`。正如我们从函数名所理解的一样,通过给定地址,它清除了给定的位。`btr` 指令表现得像 `bts`(LCTT 译注:原文这里为 btr,可能为笔误,修正为 bts)。该指令选出第一操作数所指定的位,存储它的值到 `CF` 标志寄存器,并且清除第二操作数指定的位数组中的对应位。 `__clear_bit` 的原子性变体为 `clear_bit`: @@ -208,11 +202,11 @@ clear_bit(long nr, volatile unsigned long *addr) } ``` -并且正如我们所看到的,它与 `set_bit` 非常相似,同时只包含了两处差异。第一处差异为 `clear_bit` 使用 `btr` 指令来清位,而 `set_bit` 使用 `bts` 指令来置位。第二处差异为 `clear_bit` 使用否定的位掩码和 `按位与` 在给定的字节上置位,而 `set_bit` 使用 `按位或` 指令。 +并且正如我们所看到的,它与 `set_bit` 非常相似,只有两处不同。第一处差异为 `clear_bit` 使用 `btr` 指令来清位,而 `set_bit` 使用 `bts` 
指令来置位。第二处差异为 `clear_bit` 使用否定的位掩码和 `按位与` 在给定的字节上置位,而 `set_bit` 使用 `按位或` 指令。 -到此为止,我们可以在任何位数组置位和清位了,并且能够转到位掩码上的其他操作。 +到此为止,我们可以在任意位数组置位和清位了,我们将看看位掩码上的其他操作。 -在 Linux 内核位数组上最广泛使用的操作是设置和清除位,但是除了这两个操作外,位数组上其他操作也是非常有用的。Linux 内核里另一种广泛使用的操作是知晓位数组中一个给定的位是否被置位。我们能够通过 `test_bit` 宏的帮助实现这一功能。这个宏定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并展开为 `constant_test_bit` 或 `variable_test_bit` 的调用,这要取决于位号。 +在 Linux 内核中对位数组最广泛使用的操作是设置和清除位,但是除了这两个操作外,位数组上其他操作也是非常有用的。Linux 内核里另一种广泛使用的操作是知晓位数组中一个给定的位是否被置位。我们能够通过 `test_bit` 宏的帮助实现这一功能。这个宏定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并根据位号分别展开为 `constant_test_bit` 或 `variable_test_bit` 调用。 ```C #define test_bit(nr, addr) \ @@ -221,7 +215,7 @@ clear_bit(long nr, volatile unsigned long *addr) : variable_test_bit((nr), (addr))) ``` -因此,如果 `nr` 是编译期已知常量,`test_bit` 将展开为 `constant_test_bit` 函数的调用,而其他情况则为 `variable_test_bit`。现在让我们看看这些函数的实现,我们从 `variable_test_bit` 开始看起: +因此,如果 `nr` 是编译期已知常量,`test_bit` 将展开为 `constant_test_bit` 函数的调用,而其他情况则为 `variable_test_bit`。现在让我们看看这些函数的实现,让我们从 `variable_test_bit` 开始看起: ```C static inline int variable_test_bit(long nr, volatile const unsigned long *addr) @@ -237,7 +231,7 @@ static inline int variable_test_bit(long nr, volatile const unsigned long *addr) } ``` -`variable_test_bit` 函数调用了与 `set_bit` 及其他函数使用的相似的参数集合。我们也可以看到执行 [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) 和 [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) 指令的内联汇编代码。`bt` 或 `bit test` 指令从第二操作数指定的位数组选出第一操作数指定的一个指定位,并且将该位的值存进标志寄存器的 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 位。第二个指令 `sbb` 从第二操作数中减去第一操作数,再减去 `CF` 的值。因此,这里将一个从给定位数组中的给定位号的值写进标志寄存器的 `CF` 位,并且执行 `sbb` 指令计算: `00000000 - CF`,并将结果写进 `oldbit` 变量。 +`variable_test_bit` 函数使用了与 `set_bit` 及其他函数使用的相似的参数集合。我们也可以看到执行 [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) 和 
[sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) 指令的内联汇编代码。`bt` (或称 `bit test`)指令从第二操作数指定的位数组选出第一操作数指定的一个指定位,并且将该位的值存进标志寄存器的 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 位。第二个指令 `sbb` 从第二操作数中减去第一操作数,再减去 `CF` 的值。因此,这里将一个从给定位数组中的给定位号的值写进标志寄存器的 `CF` 位,并且执行 `sbb` 指令计算: `00000000 - CF`,并将结果写进 `oldbit` 变量。 `constant_test_bit` 函数做了和我们在 `set_bit` 所看到的一样的事: @@ -251,7 +245,7 @@ static __always_inline int constant_test_bit(long nr, const volatile unsigned lo 它生成了一个位号对应位为高位 `1`,而其他位为 `0` 的字节(正如我们在 `CONST_MASK` 所看到的),并将 [按位与](https://en.wikipedia.org/wiki/Bitwise_operation#AND) 应用于包含给定位号的字节。 -下一广泛使用的位数组相关操作是改变一个位数组中的位。为此,Linux 内核提供了两个辅助函数: +下一个被广泛使用的位数组相关操作是改变一个位数组中的位。为此,Linux 内核提供了两个辅助函数: * `__change_bit`; * `change_bit`. @@ -274,7 +268,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr) 1 ``` - `__change_bit` 的原子版本为 `change_bit` 函数: +`__change_bit` 的原子版本为 `change_bit` 函数: ```C static inline void change_bit(long nr, volatile unsigned long *addr) @@ -291,14 +285,14 @@ static inline void change_bit(long nr, volatile unsigned long *addr) } ``` -它和 `set_bit` 函数很相似,但也存在两点差异。第一处差异为 `xor` 操作而不是 `or`。第二处差异为 `btc`(原文为 `bts`,为作者笔误,译者注) 而不是 `bts`。 +它和 `set_bit` 函数很相似,但也存在两点不同。第一处差异为 `xor` 操作而不是 `or`。第二处差异为 `btc`( LCTT 译注:原文为 `bts`,为作者笔误) 而不是 `bts`。 目前,我们了解了最重要的体系特定的位数组操作,是时候看看一般的位图 API 了。 通用位操作 ================================================================================ -除了 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 中体系特定的 API 外,Linux 内核提供了操作位数组的通用 API。正如我们本部分开头所了解的一样,我们可以在 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件和* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) 源文件中找到它。但在查看这些源文件之前,我们先看看 [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) 头文件,其提供了一系列有用的宏,让我们看看它们当中一部分。 +除了 
[arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 中体系特定的 API 外,Linux 内核提供了操作位数组的通用 API。正如我们本部分开头所了解的一样,我们可以在 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件和 [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) 源文件中找到它。但在查看这些源文件之前,我们先看看 [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) 头文件,其提供了一系列有用的宏,让我们看看它们当中一部分。 首先我们看看以下 4 个 宏: @@ -307,7 +301,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr) * `for_each_clear_bit` * `for_each_clear_bit_from` -所有这些宏都提供了遍历位数组中某些位集合的迭代器。第一个红迭代那些被置位的位。第二个宏也是一样,但它是从某一确定位开始。最后两个宏做的一样,但是迭代那些被清位的位。让我们看看 `for_each_set_bit` 宏: +所有这些宏都提供了遍历位数组中某些位集合的迭代器。第一个宏迭代那些被置位的位。第二个宏也是一样,但它是从某一个确定的位开始。最后两个宏做的一样,但是迭代那些被清位的位。让我们看看 `for_each_set_bit` 宏: ```C #define for_each_set_bit(bit, addr, size) \ @@ -316,7 +310,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr) (bit) = find_next_bit((addr), (size), (bit) + 1)) ``` -正如我们所看到的,它使用了三个参数,并展开为一个循环,该循环从作为 `find_first_bit` 函数返回结果的第一个置位开始到最后一个置位且小于给定大小为止。 +正如我们所看到的,它使用了三个参数,并展开为一个循环,该循环从作为 `find_first_bit` 函数返回结果的第一个置位开始,到小于给定大小的最后一个置位为止。 除了这四个宏, [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 也提供了 `64-bit` 或 `32-bit` 变量循环的 API 等等。 @@ -339,7 +333,7 @@ static inline void bitmap_zero(unsigned long *dst, unsigned int nbits) } ``` -首先我们可以看到对 `nbits` 的检查。 `small_const_nbits` 是一个定义在同一[头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 的宏: +首先我们可以看到对 `nbits` 的检查。 `small_const_nbits` 是一个定义在同一个[头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 的宏: ```C #define small_const_nbits(nbits) \ @@ -398,7 +392,7 @@ via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md 作者:[0xAX][a] 译者:[cposture](https://github.com/cposture) -校对:[校对者ID](https://github.com/校对者ID) 
+校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From de8b4a8f9260d4876085b5543c6796d4813ff4b1 Mon Sep 17 00:00:00 2001 From: robot527 Date: Tue, 23 Aug 2016 16:08:43 +0800 Subject: [PATCH 462/471] robot527 translating --- ...60805 How to Use Flow Control Statements in Awk – Part 12.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md b/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md index 44748caafa..86b1eb2ff1 100644 --- a/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md +++ b/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md @@ -1,3 +1,5 @@ +robot527 translating + How to Use Flow Control Statements in Awk – Part 12 =========================================== From b121a6dcb089688ffa63e682411bcb5f14e05c61 Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Tue, 23 Aug 2016 21:20:53 +0800 Subject: [PATCH 463/471] Translated by H-mudcup (#4331) * Delete 20160803 The revenge of Linux.md * Create 20160803 The revenge of Linux.md --- sources/talk/20160803 The revenge of Linux.md | 37 ------------------ .../talk/20160803 The revenge of Linux.md | 38 +++++++++++++++++++ 2 files changed, 38 insertions(+), 37 deletions(-) delete mode 100644 sources/talk/20160803 The revenge of Linux.md create mode 100644 translated/talk/20160803 The revenge of Linux.md diff --git a/sources/talk/20160803 The revenge of Linux.md b/sources/talk/20160803 The revenge of Linux.md deleted file mode 100644 index cbf98bd8a2..0000000000 --- a/sources/talk/20160803 The revenge of Linux.md +++ /dev/null @@ -1,37 +0,0 @@ -翻译中 by H-mudcup -The revenge of Linux -======================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/penguin%20swimming.jpg?itok=mfhEdRdM) - -In the beginning of Linux they laughed at it 
and didn't think it could do anything. Now Linux is everywhere! - -I was a junior at college studying Computer Engineering in Brazil, and working at the same time in a global auditing and consulting company as a sysadmin. The company decided to implement some enterprise resource planning (ERP) software with an Oracle database. I got training in Digital UNIX OS (DEC Alpha), and it blew my mind. - -The UNIX system was very powerful and gave us absolute control of the machine: storage systems, networking, applications, and everything. - -I started writing lots of scripts in ksh and Bash to automate backup, file transfer, extract, transform, load (ETL) operations, automate DBA routines, and created many other services that came up from different projects. Moreover, doing database and operating system tuning gave me a good understanding of how to get the best from a server. At that time, I used Windows 95 on my PC, and I would have loved to have Digital UNIX in my PC box, or even Solaris or HP-UX, but those UNIX systems were made to run on specific hardware. I read all the documentation that came with the systems, looked for additional books to get more information, and tested crazy ideas in our development environment. - -Later in college, I heard about Linux from my colleagues. I downloaded it back in the time of dialup internet, and I was very excited. The idea of having my standard PC with a UNIX-like system was amazing! - -As Linux is made to run in any general PC hardware, unlike UNIX systems, at the beginning it was really hard to get it working. Linux was just for sysadmins and geeks. I even made adjustments in drivers using C language to get it running. My previous experience with UNIX made me feel at home when I was compiling the Linux kernel, troubleshooting, and so on. It was very challenging for Linux to work with any unexpected hardware setups, as opposed to closed systems that fit just some specific hardware. 
- -I have been seeing Linux get space in data centers. Some adventurous sysadmins start boxes to help in everyday tasks for monitoring and managing the infrastructure, and then Linux gets more space as DNS and DHCP servers, printer management, and file servers. There used to be lots of FUD (fear, uncertainty and doubt) and criticism about Linux for the enterprise: Who is the owner of it? Who supports it? Are there applications for it? - -But nowadays it seems the revenge of Linux is everywhere! From developer's PCs to huge enterprise servers; we can find it in smart phones, watches, and in the Internet of Things (IoT) devices such as Raspberry Pi. Even Mac OS X has a kind of prompt with commands we are used to. Microsoft is making its own distribution, runs it at Azure, and then... Windows 10 is going to get Bash on it. - -The interesting thing is that the IT market creates and quickly replaces new technologies, but the knowledge we got with old systems such as Digital UNIX, HP-UX, and Solaris is still useful and relevant with Linux, for business and just for fun. Now we can get absolute control of our systems, and use it at its maximum capabilities. Moreover Linux has an enthusiastic community! - -I really recommend that youngsters looking for a career in computing learn Linux. It does not matter what your branch in IT is. If you learn deeply how a standard home PC works, you can get in front of any big box talking basically the same language. With Linux you learn fundamental computing, and build skills that work anywhere in IT. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/8/revenge-linux - -作者:[Daniel Carvalho][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/danielscarvalho diff --git a/translated/talk/20160803 The revenge of Linux.md b/translated/talk/20160803 The revenge of Linux.md new file mode 100644 index 0000000000..96eb9f362f --- /dev/null +++ b/translated/talk/20160803 The revenge of Linux.md @@ -0,0 +1,38 @@ +Translated by H-mudcup + +Linux 的逆袭 +======================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/penguin%20swimming.jpg?itok=mfhEdRdM) + +Linux 系统在早期的时候被人们嘲笑,它什么也干不了。现在,Linux 无处不在! + +我当时还是个在巴西学习计算机工程的大三学生,并同时在一个全球审计和顾问公司兼职系统管理员。公司决定用 Oracle 数据库实现企业资源计划(ERP)软件。我得以在 Digital UNIX OS (DEC Alpha) 进行训练,这个训练颠覆了我的三观。 + +那个 UNIX 系统非常的强大并且给予了我们堆积起的绝对控制权:存储系统、网络、应用和其他一切。 + +我开始在 ksh 和 Bash 里编写大量的脚本让系统自动进行备份、文件转移、提取转换加载(ETL)操作、自动化 DBA 日常工作,还创造了很多从各种不同的项目提出的其他的服务。此外,调整数据库和操作系统的工作让我更好的理解了如何让服务器以最佳方式运行。在那时,我在自己的个人电脑上使用的是 Windows 95 系统,而我非常想要在我的个人电脑里放进一个 Digital UNIX,或者即使是 Solaris 或 HP-UX 也行,但是那些 UNIX 系统都得在特定的硬件才能上运行。我阅读了所有系统的文档,还找过额外的书籍以求获得更多的信息,也在我们的开发环境里对一些疯狂的想法进行了实验。 + +后来在大学里,我从我的同事那听说了 Linux。我那时非常激动的从还在用拨号方式连接的因特网上下载了它。在我标准的个人电脑里装上 UNIX 这类系统的这个想法真是太酷了! + +由于 Linux 不同于 UNIX 系统,它能在几乎所有的个人电脑硬件上运行,在起初,让它开始工作真的是非常困难的一件事。Linux 针对的用户群只有系统管理员和极客们。我为了让它能运行,甚至用 C 语言修改了驱动软件。我之前使用 UNIX 的经历让我在编译 Linux 内核,排错这些过程中非常的顺手。由于它不同于那些只适合特定硬件配置的封闭系统,所以让 Linux 跟各种意料之外的硬件配置一起工作真的是件非常具有挑战性的事。 + +我曾见过 Linux 在数据中心获得一席之地。一些具有冒险精神的系统管理员使用它来帮他们完成每天监视和管理基础设施的工作,随后,Linux 同 DNS 和 DHCP 服务器、打印管理和文件服务器一起攻城略地。企业曾对 Linux 有着很多顾虑(恐惧,不确定性,怀疑)和诟病:谁是它的拥有者?由谁来支持它?有适用于它的应用吗? 
+ +但现在看来,Linux 在各个地方进行着逆袭!从开发者的个人电脑到大企业的服务器;我们能在智能手机、智能手表以及像树莓派这样的物联网(IoT)设备里找到它。甚至 Mac OS X 有些 DOS 命令跟我们所熟悉的命令一样。微软在制造它自己的发行版,在 Azure 上运行,然后…… Windows 10 要装备 Bash。 + +有趣的是 IT 市场会创造并迅速的代替新技术,但是我们所掌握的 Digital UNIX、HP-UX 和 Solaris 这些旧系统的知识还依然有效并跟 Linux 息息相关,不论是为了工作还是玩。现在我们能完全的掌控我们的系统,并使它发挥最大的效用。此外,Linux 有个充满热情的社区。 + +我真的建议想在计算机方面发展的年轻人学习 Linux。不论你处于 IT 界里的哪个分支。如过你深入了解了一个标准的家用个人电脑是如何工作的,你就可以在任何机器面前使用基本相同的语言。你可以通过 Linux 学习最基本的计算机知识,并通过它建立能在 IT 界任何地方都有用的能力。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/8/revenge-linux + +作者:[Daniel Carvalho][a] +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/danielscarvalho From b02b47a6ff869c6f9ccc61db62e348b82e90ce11 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 23 Aug 2016 21:22:01 +0800 Subject: [PATCH 464/471] =?UTF-8?q?=E9=80=89=E9=A2=98=E5=9B=9E=E7=82=89?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 放弃翻译本文 --- sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md b/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md index 736c5b84bc..e404cccffa 100644 --- a/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md +++ b/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md @@ -1,6 +1,3 @@ -ezio is translating - - Part 4 - LXD 2.0: Resource control ====================================== From 718024ccdb0797d395d22cc535ace546db9715d7 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 24 Aug 2016 10:13:30 +0800 Subject: [PATCH 465/471] PUB:20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application @MikeCoder --- ...oying a replicated Python 3 Application.md | 279 +++++++++++++++++ ...oying a replicated Python 3 
Application.md | 282 ------------------ 2 files changed, 279 insertions(+), 282 deletions(-) create mode 100644 published/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md delete mode 100644 translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md diff --git a/published/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/published/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md new file mode 100644 index 0000000000..36b0cbd7c4 --- /dev/null +++ b/published/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md @@ -0,0 +1,279 @@ +使用 Docker Swarm 部署可扩展的 Python3 应用 +============== + +[Ben Firshman][2] 最近在 [Dockercon][1] 做了一个关于使用 Docker 构建无服务应用的演讲,你可以在[这里查看详情][3](还有视频)。之后,我写了一篇关于如何使用 [AWS Lambda][5] 构建微服务系统的[文章][4]。 + +今天,我想展示给你的就是如何使用 [Docker Swarm][6] 部署一个简单的 Python Falcon REST 应用。这里我不会使用[dockerrun][7] 或者是其他无服务特性,你可能会惊讶,使用 Docker Swarm 部署(复制)一个 Python(Java、Go 都一样)应用是如此的简单。 + +注意:这展示的部分步骤是截取自 [Swarm Tutorial][8],我已经修改了部分内容,并且[增加了一个 Vagrant Helper 的仓库][9]来启动一个可以让 Docker Swarm 工作起来的本地测试环境。请确保你使用的是 1.12 或以上版本的 Docker Engine。我写这篇文章的时候,使用的是 1.12RC2 版本。注意的是,这只是一个测试版本,可能还会有修改。 + +你要做的第一件事,就是如果你想本地运行的话,你要保证 [Vagrant][10] 已经正确的安装和运行了。你也可以按如下步骤使用你最喜欢的云服务提供商部署 Docker Swarm 虚拟机系统。 + +我们将会使用这三台 VM:一个简单的 Docker Swarm 管理平台和两台 worker。 + +安全注意事项:Vagrantfile 代码中包含了部分位于 Docker 测试服务器上的 shell 脚本。这是一个潜在的安全问题,它会运行你不能控制的脚本,所以请确保你会在运行代码之前[审查过这部分的脚本][11]。 + +``` +$ git clone https://github.com/chadlung/vagrant-docker-swarm +$ cd vagrant-docker-swarm +$ vagrant plugin install vagrant-vbguest +$ vagrant up +``` + +Vagrant up 命令需要一些时间才能完成。 + +SSH 登陆进入 manager1 虚拟机: + +``` +$ vagrant ssh manager1 +``` + +在 manager1 的 ssh 终端会话中执行如下命令: + +``` +$ sudo docker swarm init --listen-addr 192.168.99.100:2377 +``` + +现在还没有 worker 注册上来: + +``` +$ sudo docker node ls +``` + +让我们注册两个新的 
worker,请打开两个新的终端会话(保持 manager1 会话继续运行): + +``` +$ vagrant ssh worker1 +``` + +在 worker1 的 ssh 终端会话上执行如下命令: + +``` +$ sudo docker swarm join 192.168.99.100:2377 +``` + +在 worker2 的 ssh 终端会话上重复这些命令。 + +在 manager1 终端上执行如下命令: + +``` +$ docker node ls +``` + +你将会看到: + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-3.15.25-PM.png) + +在 manager1 的终端里部署一个简单的服务。 + +``` +sudo docker service create --replicas 1 --name pinger alpine ping google.com +``` + +这个命令将会部署一个服务,它会从 worker 之一 ping google.com。(或者 manager,manager 也可以运行服务,不过如果你只是想 worker 运行容器的话,[也可以禁用这一点][12])。可以使用如下命令,查看哪些节点正在执行服务: + +``` +$ sudo docker service tasks pinger +``` + +结果会和这个比较类似: + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.23.05-PM.png) + +所以,我们知道了服务正跑在 worker1 上。我们可以回到 worker1 的会话里,然后进入正在运行的容器: + +``` +$ sudo docker ps +``` + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.25.02-PM.png) + +你可以看到容器的 id 是: ae56769b9d4d,在我的例子中,我运行如下的代码: + +``` +$ sudo docker attach ae56769b9d4d +``` + +![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.26.49-PM.png) + +你可以按下 CTRL-C 来停止服务。 + +回到 manager1,然后移除这个 pinger 服务。 + +``` +$ sudo docker service rm pinger +``` + +现在,我们将会部署可复制的 Python 应用。注意,为了保持文章的简洁,而且容易复制,所以部署的是一个简单的应用。 + +你需要做的第一件事就是将镜像放到 [Docker Hub][13]上,或者使用我[已经上传的一个][14]。这是一个简单的 Python 3 Falcon REST 应用。它有一个简单的入口: /hello 带一个 value 参数。 + +放在 [chadlung/hello-app][15] 上的 Python 代码看起来像这样: + +``` +import json +from wsgiref import simple_server + +import falcon + + +class HelloResource(object): + def on_get(self, req, resp): + try: + value = req.get_param('value') + + resp.content_type = 'application/json' + resp.status = falcon.HTTP_200 + resp.body = json.dumps({'message': str(value)}) + except Exception as ex: + resp.status = falcon.HTTP_500 + resp.body = str(ex) + + +if __name__ == '__main__': + app = falcon.API() + 
hello_resource = HelloResource()
+    app.add_route('/hello', hello_resource)
+    httpd = simple_server.make_server('0.0.0.0', 8080, app)
+    httpd.serve_forever()
+```
+
+Dockerfile 很简单:
+
+```
+FROM python:3.4.4
+
+RUN pip install -U pip
+RUN pip install -U falcon
+
+EXPOSE 8080
+
+COPY . /hello-app
+WORKDIR /hello-app
+
+CMD ["python", "app.py"]
+```
+
+上面表示的意思很简单,如果你想,你可以在本地运行该程序来访问这个入口:
+
+这将返回如下结果:
+
+```
+{"message": "Fred"}
+```
+
+在 Docker Hub 上构建和部署这个 hello-app(修改成你自己的 Docker Hub 仓库或者[使用这个][15]):
+
+```
+$ sudo docker build . -t chadlung/hello-app:2
+$ sudo docker push chadlung/hello-app:2
+```
+
+现在,我们可以将应用部署到之前的 Docker Swarm 了。登录 manager1 的 ssh 终端会话,并且执行:
+
+```
+$ sudo docker service create -p 8080:8080 --replicas 2 --name hello-app chadlung/hello-app:2
+$ sudo docker service inspect --pretty hello-app
+$ sudo docker service tasks hello-app
+```
+
+现在,我们已经可以测试了。使用任何一个 Swarm 节点的 IP 来访问 /hello 入口。在本例中,我在 manager1 的终端里使用 curl 命令:
+
+注意,Swarm 中的所有的 IP 都可以,不管这个服务是运行在一台还是更多的节点上。
+
+```
+$ curl -v -X GET "http://192.168.99.100:8080/hello?value=Chad"
+$ curl -v -X GET "http://192.168.99.101:8080/hello?value=Test"
+$ curl -v -X GET "http://192.168.99.102:8080/hello?value=Docker"
+```
+
+结果:
+
+```
+* Hostname was NOT found in DNS cache
+* Trying 192.168.99.101...
+* Connected to 192.168.99.101 (192.168.99.101) port 8080 (#0)
+> GET /hello?value=Chad HTTP/1.1
+> User-Agent: curl/7.35.0
+> Host: 192.168.99.101:8080
+> Accept: */*
+>
+* HTTP 1.0, assume close after body
+< HTTP/1.0 200 OK
+< Date: Tue, 28 Jun 2016 23:52:55 GMT
+< Server: WSGIServer/0.2 CPython/3.4.4
+< content-type: application/json
+< content-length: 19
+<
+{"message": "Chad"}
+```
+
+从浏览器中访问其他节点:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png)
+
+如果你想看运行的所有服务,你可以在 manager1 节点上运行如下代码:
+
+```
+$ sudo docker service ls
+```
+
+如果你想添加可视化控制平台,你可以安装 [Docker Swarm Visualizer][16](这对于展示非常方便)。在 manager1 的终端中执行如下代码:
+
+```
+$ sudo docker run -it -d -p 5000:5000 -e HOST=192.168.99.100 -e PORT=5000 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer
+```
+
+打开你的浏览器,并且访问:
+
+结果如下(假设已经运行了两个 Docker Swarm 服务):
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-30-at-2.37.28-PM.png)
+
+要停止运行 hello-app(已经在两个节点上运行了),可以在 manager1 上执行这个代码:
+
+```
+$ sudo docker service rm hello-app
+```
+
+如果想停止 Visualizer,那么在 manager1 的终端中执行:
+
+```
+$ sudo docker ps
+```
+
+获得容器的 ID,这里是: f71fec0d3ce1,从 manager1 的终端会话中执行这个代码:
+
+```
+$ sudo docker stop f71fec0d3ce1
+```
+
+祝你成功使用 Docker Swarm。这篇文章主要是以 1.12 版本来进行描述的。
+
+--------------------------------------------------------------------------------
+
+via: http://www.giantflyingsaucer.com/blog/?p=5923
+
+作者:[Chad Lung][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.giantflyingsaucer.com/blog/?author=2
+[1]: http://dockercon.com/
+[2]: https://blog.docker.com/author/bfirshman/
+[3]: https://blog.docker.com/author/bfirshman/
+[4]: http://www.giantflyingsaucer.com/blog/?p=5730
+[5]: https://aws.amazon.com/lambda/
+[6]: https://docs.docker.com/swarm/
+[7]: 
https://github.com/bfirsh/dockerrun +[8]: https://docs.docker.com/engine/swarm/swarm-tutorial/ +[9]: https://github.com/chadlung/vagrant-docker-swarm +[10]: https://www.vagrantup.com/ +[11]: https://test.docker.com/ +[12]: https://docs.docker.com/engine/reference/commandline/swarm_init/ +[13]: https://hub.docker.com/ +[14]: https://hub.docker.com/r/chadlung/hello-app/ +[15]: https://hub.docker.com/r/chadlung/hello-app/ +[16]: https://github.com/ManoMarks/docker-swarm-visualizer diff --git a/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md deleted file mode 100644 index b06c56b46e..0000000000 --- a/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md +++ /dev/null @@ -1,282 +0,0 @@ -教程:开始学习如何使用 Docker Swarm 部署可扩展的 Python3 应用 -============== - -[Ben Firshman][2]最近在[Dockercon][1]做了一个关于使用 Docker 构建无服务应用的演讲,你可以在[这查看详情][3](可以和视频一起)。之后,我写了[一篇文章][4]关于如何使用[AWS Lambda][5]构建微服务系统。 - -今天,我想展示给你的就是如何使用[Docker Swarm][6]然后部署一个简单的 Python Falcon REST 应用。尽管,我不会使用[dockerrun][7]或者是其他无服务特性。你可能会惊讶,使用 Docker Swarm 部署(替换)一个 Python(Java, Go 都一样) 应用是如此的简单。 - -注意:这展示的部分步骤是截取自[Swarm Tutorial][8]。我已经修改了部分章节,并且[在 Vagrant 的帮助文档][9]中添加了构建本地测试环境的文档。请确保,你使用的是1.12或以上版本的 Docker 引擎。我写这篇文章的时候,使用的是1.12RC2版本的 Docker。注意的是,这只是一个测试版本,只会可能还会有修改。 - -你要做的第一件事,就是你要保证你正确的安装了[Vagrant][10],如果你想本地运行的话。你也可以按如下步骤使用你最喜欢的云服务提供商部署 Docker Swarm 虚拟机系统。 - -我们将会使用这三台 VM:一个简单的 Docker Swarm 管理平台和两台 worker。 - -安全注意事项:Vagrantfile 代码中包含了部分位于 Docker 测试服务器上的 shell 脚本。这是一个隐藏的安全问题。如果你没有权限的话。请确保你会在运行代码之前[审查这部分的脚本][11]。 - -``` -$ git clone https://github.com/chadlung/vagrant-docker-swarm -$ cd vagrant-docker-swarm -$ vagrant plugin install vagrant-vbguest -$ vagrant up -``` - -Vagrant up 命令可能会花很长的时间来执行。 - -SSH 登陆进入 manager1 虚拟机: - -``` -$ vagrant ssh manager1 -``` - -在 manager1 的终端中执行如下命令: - -``` -$ sudo 
docker swarm init --listen-addr 192.168.99.100:2377 -``` - -现在还没有 worker 注册上来: - -``` -$ sudo docker node ls -``` - -Let’s register the two workers. Use two new terminal sessions (leave the manager1 session running): -通过两个新的终端会话(退出 manager1 的登陆后),我们注册两个 worker。 - -``` -$ vagrant ssh worker1 -``` - -在 worker1 上执行如下命令: - -``` -$ sudo docker swarm join 192.168.99.100:2377 -``` - -在 worker2 上重复这些命令。 - -在 manager1 上执行这个命令: - -``` -$ docker node ls -``` - -你将会看到: - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-3.15.25-PM.png) - -开始在 manager1 的终端里,部署一个简单的服务。 - -``` -sudo docker service create --replicas 1 --name pinger alpine ping google.com -``` - -这个命令将会部署一个服务,他会从 worker 机器中的一台 ping google.com。(manager 也可以运行服务,不过[这也可以被禁止][12])如果你只是想 worker 运行容器的话)。可以使用如下命令,查看哪些节点正在执行服务: - -``` -$ sudo docker service tasks pinger -``` - -结果回合这个比较类似: - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.23.05-PM.png) - -所以,我们知道了服务正跑在 worker1 上。我们可以回到 worker1 的会话里,然后进入正在运行的容器: - -``` -$ sudo docker ps -``` - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.25.02-PM.png) - -你可以看到容器的 id 是: ae56769b9d4d - -在我的例子中,我运行的是如下的代码: - -``` -$ sudo docker attach ae56769b9d4d -``` - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.26.49-PM.png) - -你可以仅仅只用 CTRL-C 来停止服务。 - -回到 manager1,并且移除 pinger 服务。 - -``` -$ sudo docker service rm pinger -``` - -现在,我们将会部署可复制的 Python 应用。请记住,为了保持文章的简洁,而且容易复制,所以部署的是一个简单的应用。 - -你需要做的第一件事就是将镜像放到[Docker Hub][13]上,或者使用我[已经上传的一个][14]。这是一个简单的 Python 3 Falcon REST 应用。他有一个简单的入口: /hello 带一个 value 参数。 - -[chadlung/hello-app][15]的 Python 代码看起来像这样: - -``` -import json -from wsgiref import simple_server - -import falcon - - -class HelloResource(object): - def on_get(self, req, resp): - try: - value = req.get_param('value') - - resp.content_type = 'application/json' - resp.status = 
falcon.HTTP_200 - resp.body = json.dumps({'message': str(value)}) - except Exception as ex: - resp.status = falcon.HTTP_500 - resp.body = str(ex) - - -if __name__ == '__main__': - app = falcon.API() - hello_resource = HelloResource() - app.add_route('/hello', hello_resource) - httpd = simple_server.make_server('0.0.0.0', 8080, app) - httpd.serve_forever() -``` - -Dockerfile 很简单: - -``` -FROM python:3.4.4 - -RUN pip install -U pip -RUN pip install -U falcon - -EXPOSE 8080 - -COPY . /hello-app -WORKDIR /hello-app - -CMD ["python", "app.py"] -``` - -再一次说明,这是非常详细的奖惩,如果你想,你也可以在本地访问这个入口: - -这将返回如下结果: - -``` -{"message": "Fred"} -``` - -在 Docker Hub 上构建和部署这个 hello-app(修改成你自己的 Docker Hub 仓库或者[这个][15]): - -``` -$ sudo docker build . -t chadlung/hello-app:2 -$ sudo docker push chadlung/hello-app:2 -``` - -现在,我们可以将应用部署到之前的 Docker Swarm 了。登陆 manager1 终端,并且执行: - -``` -$ sudo docker service create -p 8080:8080 --replicas 2 --name hello-app chadlung/hello-app:2 -$ sudo docker service inspect --pretty hello-app -$ sudo docker service tasks hello-app -``` - -现在,我们已经可以测试了。使用任何一个节点 Swarm 的 IP,来访问/hello 的入口,在本例中,我在 Manager1 的终端里使用 curl 命令: - -注意,在 Swarm 中的所有 IP 都可以工作,即使这个服务只运行在一台或者更多的节点上。 - -``` -$ curl -v -X GET "http://192.168.99.100:8080/hello?value=Chad" -$ curl -v -X GET "http://192.168.99.101:8080/hello?value=Test" -$ curl -v -X GET "http://192.168.99.102:8080/hello?value=Docker" -``` - -结果就是: - -``` -* Hostname was NOT found in DNS cache -* Trying 192.168.99.101... 
-* Connected to 192.168.99.101 (192.168.99.101) port 8080 (#0) -> GET /hello?value=Chad HTTP/1.1 -> User-Agent: curl/7.35.0 -> Host: 192.168.99.101:8080 -> Accept: */* -> -* HTTP 1.0, assume close after body -< HTTP/1.0 200 OK -< Date: Tue, 28 Jun 2016 23:52:55 GMT -< Server: WSGIServer/0.2 CPython/3.4.4 -< content-type: application/json -< content-length: 19 -< -{"message": "Chad"} -``` - -从浏览器中访问其他节点: - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png) - -如果你想看运行的所有服务,你可以在 manager1 节点上运行如下代码: - -``` -$ sudo docker service ls -``` - -如果你想添加可视化控制平台,你可以安装[Docker Swarm Visualizer][16](这非常简单上手)。在 manager1 的终端中执行如下代码: - -![]($ sudo docker run -it -d -p 5000:5000 -e HOST=192.168.99.100 -e PORT=5000 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer) - -打开你的浏览器,并且访问: - -结果(假设已经运行了两个 Docker Swarm 服务): - -![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-30-at-2.37.28-PM.png) - -停止运行 hello-app(已经在两个节点上运行了),可以在 manager1 上执行这个代码: - -``` -$ sudo docker service rm hello-app -``` - -如果想停止, 那么在 manager1 的终端中执行: - -``` -$ sudo docker ps -``` - -获得容器的 ID,这里是: f71fec0d3ce1 - -从 manager1 的终端会话中执行这个代码: - -``` -$ sudo docker stop f71fec0d3ce1 -``` - -祝你使用 Docker Swarm。这篇文章主要是以1.12版本来进行描述的。 - --------------------------------------------------------------------------------- - -via: http://www.giantflyingsaucer.com/blog/?p=5923 - -作者:[Chad Lung][a] -译者:[译者ID](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.giantflyingsaucer.com/blog/?author=2 -[1]: http://dockercon.com/ -[2]: https://blog.docker.com/author/bfirshman/ -[3]: https://blog.docker.com/author/bfirshman/ -[4]: http://www.giantflyingsaucer.com/blog/?p=5730 -[5]: https://aws.amazon.com/lambda/ -[6]: https://docs.docker.com/swarm/ -[7]: https://github.com/bfirsh/dockerrun -[8]: 
https://docs.docker.com/engine/swarm/swarm-tutorial/ -[9]: https://github.com/chadlung/vagrant-docker-swarm -[10]: https://www.vagrantup.com/ -[11]: https://test.docker.com/ -[12]: https://docs.docker.com/engine/reference/commandline/swarm_init/ -[13]: https://hub.docker.com/ -[14]: https://hub.docker.com/r/chadlung/hello-app/ -[15]: https://hub.docker.com/r/chadlung/hello-app/ -[16]: https://github.com/ManoMarks/docker-swarm-visualizer From d59f1f0c2381805ba471f750d351cf5159ff9003 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 24 Aug 2016 10:53:49 +0800 Subject: [PATCH 466/471] =?UTF-8?q?20160824-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...verless with AWS Lambda and API Gateway.md | 244 ++++++++++++++++++ 1 file changed, 244 insertions(+) create mode 100644 sources/tech/20160807 Going Serverless with AWS Lambda and API Gateway.md diff --git a/sources/tech/20160807 Going Serverless with AWS Lambda and API Gateway.md b/sources/tech/20160807 Going Serverless with AWS Lambda and API Gateway.md new file mode 100644 index 0000000000..fa9ff7e5f9 --- /dev/null +++ b/sources/tech/20160807 Going Serverless with AWS Lambda and API Gateway.md @@ -0,0 +1,244 @@ +Going Serverless with AWS Lambda and API Gateway +============================ + +Lately, there's been a lot of buzz in the computing world about "serverless". Serverless is a concept wherein you don't manage any servers yourself but instead provide your code or executables to a service that executes them for you. This is execution-as-a-service. It introduces many opportunities and also presents its own unique set of challenges. + +A brief digression on computing + +In the beginning, there was... well. It's a little complicated. At the very beginning, we had mechanical computers. Then along came ENIAC. Things really don't start to get "mass production", however, until the advent of mainframes. 
+ +``` +1950s - Mainframes +1960s - Minicomputers +1994 - Rack servers +2001 - Blade servers +2000s - Virtual servers +2006 - Cloud servers +2013 - Containers +2014 - Serverless +``` + +>These are rough release/popularity dates. Argue amongst yourselves about the timeline. + +The progression here seems to be a trend toward executing smaller and smaller units of functionality. Each step down generally represents a decrease in the operational overhead and an increase in the operational flexibility. + +### The possibilities + +Yay! Serverless! But. What advantages do we gain by going serverless? And what challenges do we face? + +No billing when there is no execution. In my mind, this is a huge selling point. When no one is using your site or your API, you aren't paying for. No ongoing infrastructure costs. Pay only for what you need. In some ways, this is the fulfillment of the cloud computing promise "pay only for what you use". + +No servers to maintain or secure. Server maintenance and security is handled by your vendor (you could, of course, host serverless yourself, but in some ways this seems like a step in the wrong direction). Since your execution times are also limited, the patching of security issues is also simplified since there is nothing to restart. This should all be handled seamlessly by your vendor. + +Unlimited scalability. This is another big one. Let's say you write the next Pokémon Go. Instead of your site being down every couple of days, serverless lets you just keep growing and growing. A double-edged sword, for sure (with great scalability comes great... bills), but if your service's profitability depends on being up, then serverless can help enable that. + +Forced microservices architecture. This one really goes both ways. Microservices seem to be a good way to build flexible, scalable, and fault-tolerant architectures. 
On the other hand, if your business' services aren't designed this way, you're going to have difficulty adding serverless into your existing architecture.

### But now you're stuck on their platform

Limited range of environments. You get what the vendor gives. You want to go serverless in Rust? You're probably out of luck.

Limited preinstalled packages. You get what the vendor pre-installs. But you may be able to supply your own.

Limited execution time. Your function can only run for so long. If you have to process a 1TB file you will likely need to 1) use a workaround or 2) use something else.

Forced microservices architecture. See above.

Limited insight and ability to instrument. Just what is your code doing? With serverless, it is basically impossible to drop in a debugger and ride along. You still have the ability to log and emit metrics the usual way but these generally can only take you so far. Some of the most difficult problems may be out of reach when they occur in a serverless environment.

### The playing field

Since the introduction of AWS Lambda in 2014, the number of offerings has expanded quite a bit. Here are a few popular ones:

- AWS Lambda - The Original
- OpenWhisk - Available on IBM's Bluemix cloud
- Google Cloud Functions
- Azure Functions

While all of these have their relative strengths and weaknesses (like, C# support on Azure or tight integration on any vendor's platform) the biggest player here is AWS.

### Building your first API with AWS Lambda and API Gateway

Let's give serverless a whirl, shall we? We'll be using AWS Lambda and API Gateway offerings to build an API that returns "Guru Meditations" as spoken by Jimmy.

All of the code is available on [GitHub][1].

API Documentation:

```
POST /
{
    "status": "success",
    "meditation": "did u mention banana cognac shower"
}
```

### How we'll structure things

The layout:

```
.
+├── LICENSE
+├── README.md
+├── server
+│   ├── __init__.py
+│   ├── meditate.py
+│   └── swagger.json
+├── setup.py
+├── tests
+│   └── test_server
+│       └── test_meditate.py
+└── tools
+    ├── deploy.py
+    ├── serve.py
+    ├── serve.sh
+    ├── setup.sh
+    └── zip.sh
+```
+
+Things in AWS (for a more detailed view as to what is happening here, consult the source of tools/deploy.py):
+
+- API. The thing that we're actually building. It is represented as a separate object in AWS.
+- Execution Role. Every function executes as a particular role in AWS. Ours will be meditations.
+- Role Policy. Every function executes as a role, and every role needs permission to do things. Our lambda function doesn't do much, so we'll just add some logging permissions.
+- Lambda Function. The thing that runs our code.
+- Swagger. Swagger is a specification of an API. API Gateway supports consuming a swagger definition to configure most resources for that API.
+- Deployments. API Gateway provides for the notion of deployments. We won't be using more than one of these for our API here (i.e., everything is production, yolo, etc.), but know that they exist and for a real production-ready service you will probably want to use development and staging environments.
+- Monitoring. In case our service crashes (or begins to accumulate a hefty bill from usage) we'll want to add some monitoring in the form of CloudWatch alarms for errors and billing. Note that you should modify tools/deploy.py to set your email correctly.
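A deploy script over resources like the list above usually boils down to an idempotent "ensure it exists" loop. Here is a rough, runnable sketch of that pattern; a plain dict stands in for the AWS API so it runs anywhere, and the resource names are illustrative rather than the repo's actual values:

```python
# Hypothetical sketch of the "ensure each item exists" idea behind a deploy
# script. A dict stands in for remote AWS state so this runs anywhere; a real
# script would call boto3/botocore instead.
fake_aws = {}


def ensure(kind, name, create):
    """Create the resource only if it is missing, so re-running is safe."""
    key = (kind, name)
    if key not in fake_aws:
        fake_aws[key] = create()
    return fake_aws[key]


role = ensure("role", "meditations", lambda: {"name": "meditations"})
func = ensure("function", "meditations", lambda: {"handler": "meditate.handler"})

# A second "deploy" run finds the function already there and leaves it alone.
same = ensure("function", "meditations", lambda: {"handler": "changed"})
assert same is func
```

The payoff is that re-running a deploy is harmless, which is exactly what you want when a deploy dies halfway through.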
+ +### the codes + +The lambda function itself will be returning guru meditations at random from a hardcoded list and is very simple: + +``` +import logging +import random + + +logger = logging.getLogger() +logger.setLevel(logging.INFO) + + +def handler(event, context): + + logger.info(u"received request with id '{}'".format(context.aws_request_id)) + + meditations = [ + "off to a regex/", + "the count of machines abides", + "you wouldn't fax a bat", + "HAZARDOUS CHEMICALS + RKELLY", + "your solution requires a blood eagle", + "testing is broken because I'm lazy", + "did u mention banana cognac shower", + ] + + meditation = random.choice(meditations) + + return { + "status": "success", + "meditation": meditation, + } +``` + +### The deploy.py script + +This one is rather long (sadly) and I won't be including it here. It basically goes through each of the items under "Things in AWS" and ensures each item exists. + +Let's deploy this shit + +Just run `./tools/deploy.py`. + +Well. Almost. There seems to be some issue applying privileges that I can't seem to figure out. Your lambda function will fail to execute because API Gateway does not have permissions to execute your function. The specific error should be "Execution failed due to configuration error: Invalid permissions on Lambda function". I am not sure how to add these using botocore. You can work around this issue by going to the AWS console (sadness), locating your API, going to the / POST endpoint, going to integration request, clicking the pencil icon next to "Lambda Function" to edit it, and then saving it. You'll get a popup stating "You are about to give API Gateway permission to invoke your Lambda function". Click "OK". 
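For what it's worth, the grant that console click creates can also be expressed in code: API Gateway needs a resource-based permission on the function allowing the principal apigateway.amazonaws.com to invoke it for the API's POST / route. A hedged sketch follows; the region, account id, API id, and function name are placeholders, and the boto3 call is shown but deliberately not executed here:

```python
# Build the execute-api source ARN for our API's POST / endpoint.
# Assumed ARN shape: arn:aws:execute-api:region:account:api-id/stage/METHOD/path
# ("*" below means any stage). The concrete ids are placeholders.
def apigateway_source_arn(region, account_id, api_id, method="POST", path=""):
    return "arn:aws:execute-api:{}:{}:{}/*/{}/{}".format(
        region, account_id, api_id, method, path)


arn = apigateway_source_arn("us-east-1", "123456789012", "a1b2c3d4")

# The actual grant would then be (not run here, needs real AWS credentials):
# import boto3
# boto3.client("lambda").add_permission(
#     FunctionName="meditations",          # placeholder function name
#     StatementId="apigateway-invoke",
#     Action="lambda:InvokeFunction",
#     Principal="apigateway.amazonaws.com",
#     SourceArn=arn,
# )
```

Wiring this into a deploy script would remove the manual console step entirely.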
+ +When you're finished with that, take the URL that ./tools/deploy.py printed and call it like so to see your new API in action: + +``` +$ curl -X POST https://a1b2c3d4.execute-api.us-east-1.amazonaws.com/prod/ +{"status": "success", "meditation": "the count of machines abides"} +``` + +### Running Locally + +One unfortunate thing about AWS Lambda is that there isn't really a good way to run your code locally. In our case, we will use a simple flask server to host the appropriate endpoint locally and handle calling our handler function: + +``` +from __future__ import absolute_import + +from flask import Flask, jsonify + +from server.meditate import handler + + +app = Flask(__name__) + +@app.route("/", methods=["POST"]) +def index(): + + class FakeContext(object): + aws_request_id = "XXX" + + return jsonify(**handler(None, FakeContext())) + +app.run(host="0.0.0.0") +``` + +You can run this in the repo with ./tools/serve.sh. Invoke like: + +``` +$ curl -X POST http://localhost:5000/ +{ + "meditation": "your solution requires a blood eagle", + "status": "success" +} +``` + +### Testing + +You should always test your code. The way we'll be doing this is importing and running our handler function. This is really just plain vanilla python testing: + +``` +from __future__ import absolute_import + +import unittest + +from server.meditate import handler + + +class SubmitTestCase(unittest.TestCase): + + def test_submit(self): + + class FakeContext(object): + + aws_request_id = "XXX" + + response = handler(None, FakeContext()) + + self.assertEquals(response["status"], "success") + self.assertTrue("meditation" in response) +``` + +You can run the tests in the repo with nose2. + +### Other possibilities + +Seamless integration with AWS services. Using boto, you can pretty simply connect to any of AWS other services. You simply allow your execution role access to these services using IAM and you're on your way. 
You can get/put files from S3, connect to Dynamo DB, invoke other lambda functions, etc. etc. The list goes on. + +Accessing a database. You can easily access remote databases as well. Connect to the database at the top of your lambda handler's module. Execute queries on the connection from within the handler function itself. You will (very likely) have to upload the associated package contents from where it is installed locally for this to work. You may also need to statically compile certain libraries. + +Calling other webservices. API Gateway is also a way to translate the output from one web service into a different form. You can take advantage of this to proxy calls through to a different webservice or provide backwards compatibility when a service changes. + +-------------------------------------------------------------------------------- + +via: http://blog.ryankelly.us/2016/08/07/going-serverless-with-aws-lambda-and-api-gateway.html?utm_source=webopsweekly&utm_medium=email + +作者:[Ryan Kelly][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/f0rk/blog.ryankelly.us/ +[1]: https://github.com/f0rk/meditations From a113632bb321f3ac47e1fdee93b98a6f84212561 Mon Sep 17 00:00:00 2001 From: robot527 Date: Wed, 24 Aug 2016 20:39:00 +0800 Subject: [PATCH 467/471] Translated 20160805 How to Use Flow Control Statements in Awk - Part 12 --- ...low Control Statements in Awk – Part 12.md | 261 ------------------ ...low Control Statements in Awk - Part 12.md | 245 ++++++++++++++++ 2 files changed, 245 insertions(+), 261 deletions(-) delete mode 100644 sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md create mode 100644 translated/tech/awk/20160805 How to Use Flow Control Statements in Awk - Part 12.md diff --git a/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md 
b/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md deleted file mode 100644 index 86b1eb2ff1..0000000000 --- a/sources/tech/awk/20160805 How to Use Flow Control Statements in Awk – Part 12.md +++ /dev/null @@ -1,261 +0,0 @@ -robot527 translating - -How to Use Flow Control Statements in Awk – Part 12 -=========================================== - -When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that all the commands in the various examples are executed sequentially, that is one after the other. But in certain situations, we may want to run some text filtering operations based on some conditions, that is where the approach of flow control statements sets in. - -![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png) - -There are various flow control statements in Awk programming and these include: - -- if-else statement -- for statement -- while statement -- do-while statement -- break statement -- continue statement -- next statement -- nextfile statement -- exit statement - -However, for the scope of this series, we shall expound on: if-else, for, while and do while statements. Remember that we already walked through how to use next statement in Part 6 of this Awk series. - -### 1. The if-else Statement - -The expected syntax of the if statement is similar to that of the shell if statement: - -``` -if (condition1) { - actions1 -} -else { - actions2 -} -``` - -In the above syntax, condition1 and condition2 are Awk expressions, and actions1 and actions2 are Awk commands executed when the respective conditions are satisfied. - -When condition1 is satisfied, meaning it’s true, then actions1 is executed and the if statement exits, otherwise actions2 is executed. 
- -The if statement can also be expanded to a if-else_if-else statement as below: - -``` -if (condition1){ - actions1 -} -else if (conditions2){ - actions2 -} -else{ - actions3 -} -``` - -For the form above, if condition1 is true, then actions1 is executed and the if statement exits, otherwise condition2 is evaluated and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is false then, actions3 is executed and the if statement exits. - -Here is a case in point of using if statements, we have a list of users and their ages stored in the file, users.txt. - -We want to print a statement indicating a user’s name and whether the user’s age is less or more than 25 years old. - -``` -aaronkilik@tecMint ~ $ cat users.txt -Sarah L 35 F -Aaron Kili 40 M -John Doo 20 M -Kili Seth 49 M -``` - -We can write a short shell script to carry out our job above, here is the content of the script: - -``` -#!/bin/bash -awk ' { - if ( $3 <= 25 ){ - print "User",$1,$2,"is less than 25 years old." ; - } - else { - print "User",$1,$2,"is more than 25 years old" ; -} -}' ~/users.txt -``` - -Then save the file and exit, make the script executable and run it as follows: - -``` -$ chmod +x test.sh -$ ./test.sh -``` - -Sample Output - -``` -User Sarah L is more than 25 years old -User Aaron Kili is more than 25 years old -User John Doo is less than 25 years old. -User Kili Seth is more than 25 years old -``` - -### 2. The for Statement - -In case you want to execute some Awk commands in a loop, then the for statement offers you a suitable way to do that, with the syntax below: - -Here, the approach is simply defined by the use of a counter to control the loop execution, first you need to initialize the counter, then run it against a test condition, if it is true, execute the actions and finally increment the counter. The loop terminates when the counter does not satisfy the condition. 
- -``` -for ( counter-initialization; test-condition; counter-increment ){ - actions -} -``` - -The following Awk command shows how the for statement works, where we want to print the numbers 0-10: - -``` -$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }' -``` - -Sample Output - -``` -0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -``` - -### 3. The while Statement - -The conventional syntax of the while statement is as follows: - -``` -while ( condition ) { - actions -} -``` - -The condition is an Awk expression and actions are lines of Awk commands executed when the condition is true. - -Below is a script to illustrate the use of while statement to print the numbers 0-10: - -``` -#!/bin/bash -awk ' BEGIN{ counter=0 ; - - while(counter<=10){ - print counter; - counter+=1 ; - -} -} -``` - -Save the file and make the script executable, then run it: - -``` -$ chmod +x test.sh -$ ./test.sh -``` - -Sample Output - -``` -0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -``` - -### 4. The do while Statement - -It is a modification of the while statement above, with the following underlying syntax: - -``` -do { - actions -} - while (condition) -``` - -The slight difference is that, under do while, the Awk commands are executed before the condition is evaluated. Using the very example under while statement above, we can illustrate the use of do while by altering the Awk command in the test.sh script as follows: - -``` -#!/bin/bash - -awk ' BEGIN{ counter=0 ; - do{ - print counter; - counter+=1 ; - } - while (counter<=10) -} -' -``` - -After modifying the script, save the file and exit. Then make the script executable and execute it as follows: - -``` -$ chmod +x test.sh -$ ./test.sh -``` - -Sample Output - -``` -0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -``` - -### Conclusion - -This is not a comprehensive guide regarding Awk flow control statements, as I had mentioned earlier on, there are several other flow control statements in Awk. 
- -Nonetheless, this part of the Awk series should give you a clear fundamental idea of how execution of Awk commands can be controlled based on certain conditions. - -You can as well expound more on the rest of the flow control statements to gain more understanding on the subject matter. Finally, in the next section of the Awk series, we shall move into writing Awk scripts. --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/aaronkili/ - - - - - - - - - - - - - diff --git a/translated/tech/awk/20160805 How to Use Flow Control Statements in Awk - Part 12.md b/translated/tech/awk/20160805 How to Use Flow Control Statements in Awk - Part 12.md new file mode 100644 index 0000000000..a5862cb595 --- /dev/null +++ b/translated/tech/awk/20160805 How to Use Flow Control Statements in Awk - Part 12.md @@ -0,0 +1,245 @@ +awk 系列:在 awk 中如何使用流程控制语句 +=========================================== + +当你回顾所有到目前为止我们已经覆盖的 awk 实例,从 awk 系列的开始,你会注意到各种实例的所有指令是顺序执行的,即一个接一个地执行。但在某些情况下,我们可能希望运行基于一些条件的文本过滤操作,即其中的流程控制语句集的方法。 + +![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png) + +在 awk 编程中有各种各样的流程控制语句,其中包括: + +- if-else 语句 +- for 语句 +- while 语句 +- do-while 语句 +- break 语句 +- continue 语句 +- next 语句 +- nextfile 语句 +- exit 语句 + +然而,对于该系列的这一部分,我们将阐述:if-else、for、while 和 do while 语句。请记住,我们已经在这个 awk 系列的第 6 部分介绍过如何使用 awk 的 next 语句。 + +### 1. 
if-else 语句 + +if 语句预期的语法类似于 shell 中的 if 语句: + +``` +if (条件 1) { + 动作 1 +} +else { + 动作 2 +} +``` + +在上述语法中,条件 1 和条件 2 是 awk 表达式,而动作 1 和动作 2 是当各自的条件得到满足时 awk 命令的执行。 + +当条件 1 满足时,意味着它为真,那么动作 1 被执行并退出 if 语句,否则动作 2 被执行。 + +if 语句还能扩展为如下的 if-else_if-else 语句: + +``` +if (条件 1){ + 动作 1 +} +else if (条件 2){ + 动作 2 +} +else{ + 动作 3 +} +``` + +对于上面的形式,如果条件 1 为真,那么动作 1 被执行并退出 if 语句,否则条件 2 被求值且如果值为真,那么动作 2 被执行并退出 if 语句。然而,当条件 2 为假时,那么动作 3 被执行并退出 if 语句。 + +这是在使用 if 语句的一个实例,我们有一个用户和他们年龄的列表,存储在文件 users.txt 中。 + +我们要打印一个清单,显示用户的名称和用户的年龄是否小于或超过 25 岁。 + +``` +aaronkilik@tecMint ~ $ cat users.txt +Sarah L 35 F +Aaron Kili 40 M +John Doo 20 M +Kili Seth 49 M +``` + +我们可以写一个简短的 shell 脚本来执行上文中我们的工作,这是脚本的内容: + +``` +#!/bin/bash +awk ' { + if ( $3 <= 25 ){ + print "User",$1,$2,"is less than 25 years old." ; + } + else { + print "User",$1,$2,"is more than 25 years old" ; + } +}' ~/users.txt +``` + +然后保存文件并退出,按如下方式使脚本可执行并运行它: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +输出样例 + +``` +User Sarah L is more than 25 years old +User Aaron Kili is more than 25 years old +User John Doo is less than 25 years old. +User Kili Seth is more than 25 years old +``` + +### 2. for 语句 + +如果你想在一个循环中执行一些 awk 命令,那么具有下面语法的 for 语句为你提供一个合适的方式来做: + +这里,该方法是简单地通过一个计数器来控制循环执行的定义,首先你需要初始化这个计数器,然后针对测试条件运行它,如果它为真,执行这些动作并最终增加这个计数器。当计数器不满足条件时,循环终止。 + +``` +for ( 计数器的初始化 ; 测试条件 ; 计数器增加 ){ + 动作 +} +``` + +在我们想要打印数字 0 到 10 时,以下 awk 命令显示 for 语句是如何工作的: + +``` +$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }' +``` + +输出样例 + +``` +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +``` + +### 3. while 语句 + +while 语句的传统语法如下: + +``` +while ( 条件 ) { + 动作 +} +``` + +这条件是一个 awk 表达式而动作是当条件为真时被执行的 awk 命令。 + +下面是一个说明使用 while 语句来打印数字 0 到 10 的脚本: + +``` +#!/bin/bash +awk ' BEGIN{ counter=0; + + while(counter<=10){ + print counter; + counter+=1; + + } +}' +``` + +保存文件并使脚本可执行,然后运行它: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +输出样例 + +``` +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +``` + +### 4. 
do while 语句 + +它是上文中 while 语句的一个变型,具有以下语法: + +``` +do { + 动作 +} + while (条件) +``` + +这轻微的区别在于,在 do while 语句下,awk 的命令在求值条件之前执行。使用上文 while 语句的例子,我们可以通过按如下所述修改 test.sh 脚本中的 awk 命令来说明 do while 语句的用法: + +``` +#!/bin/bash + +awk ' BEGIN{ counter=0; + do{ + print counter; + counter+=1; + } + while (counter<=10) +}' +``` + +修改脚本之后,保存文件并退出。按如下方式使脚本可执行并执行它: + +``` +$ chmod +x test.sh +$ ./test.sh +``` + +输出样例 + +``` +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +``` + +### 总结 + +这不是关于 awk 的流程控制语句的一个全面的指南,正如我早先提到的,还有其他几个流程控制语句在 awk 里。 + +尽管如此,awk 系列的这一部分使应该你明白了一个明确的基于某些条件控制的 awk 命令是如何执行的基本概念。 + +你还可以阐述更多其余的流程控制语句以获得更多关于该主题的理解。最后,在 awk 的系列下一节,我们将进入编写 awk 脚本。 +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[robot527](https://github.com/robot527) +校对:[校对者 ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ From 1fef6afffeb330ad5d25da60a8d843984bca9f79 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 25 Aug 2016 00:24:26 +0800 Subject: [PATCH 468/471] =?UTF-8?q?PUB:20160614=20=E2=80=8BUbuntu=20Snap?= =?UTF-8?q?=20takes=20charge=20of=20Linux=20desktop=20and=20IoT=20software?= =?UTF-8?q?=20distribution?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @vim-kakali 看得出用心了,但是有不少错误还需要在翻译后重新审读几遍。 --- ...inux desktop and IoT software distribution.md | 110 ++++++++++++++++ ...inux desktop and IoT software distribution.md | 121 ------------------ 2 files changed, 110 insertions(+), 121 deletions(-) create mode 100644 published/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md delete mode 100644 translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT 
software distribution.md diff --git a/published/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md b/published/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md new file mode 100644 index 0000000000..e4eae87ab2 --- /dev/null +++ b/published/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md @@ -0,0 +1,110 @@ +Ubuntu Snap 软件包接管 Linux 桌面和 IoT 软件的发行 +=========================================================================== + +[Canonical][28] 和 [Ubuntu][29] 创始人 Mark Shuttleworth 在一次采访中说他不准备宣布 Ubuntu 的新 [ Snap 程序包格式][30]。但是就在几个月之后,很多 Linux 发行版的开发者和公司都宣布他们会把 Snap 作为通用 Linux 程序包格式。 + +![](http://zdnet2.cbsistatic.com/hub/i/r/2016/06/14/a9b2a139-3cd4-41bf-8e10-180cb9450134/resize/770xauto/adc7d16a46167565399ecdb027dd1416/ubuntu-snap.jpg) + +*Linux 供应商,独立软件开发商和公司门全都采用 Ubuntu Snap 作为多种 Linux 系统的配置和更新程序包。* + +为什么呢?因为 Snap 能使一个单一的二进制程序包可以完美、安全地运行在任何 Linux 台式机、服务器、云或物联网设备上。据 Canonical 的 Ubuntu 客户端产品和版本负责人 Olli Ries 说: + +>[Snap 程序包的安全机制][1]让我们在更快的跨发行版的应用更新中打开了新的局面,因为 Snap 应用是与系统的其它部分想隔离的。用户可以安装一个 Snap 而不用担心是否会影响其他的应用程序和操作系统。 + +当然了,如 Linux 内核的早期开发者和 CoreOS 安全维护者 Matthew Garrett 指出的那样:如果你[将 Snap 用在不安全的程序中,比如 X11 窗口系统][2],实际上您并不会获得安全性。(LCTT 译注:X11 也叫做 X Window 系统,X Window 系统 ( X11 或 X )是一种位图显示的视窗系统 。它是在 Unix 和类 Unix 操作系统 ,以及 OpenVMS 上建立图形用户界面的标准工具包和协议,并可用于几乎所有已有的现代操作系统。) + +Shuttleworth 同意 Garrett 的观点,但是他也说你可以控制 Snap 应用是如何与系统的其它部分如何交互的。比如,一个 web 浏览器可以包含在一个安全的 Snap 程序包中,这个 Snap 使用 Ubuntu 打包的 [openssl][3] TLS 和 SSL 库。除此之外,即使有些东西影响到了浏览器实例内部,也不能进入到底层的操作系统。 + +很多公司也这样认为。戴尔、三星、Mozilla、[krita][7](LCTT 译注:Krita 是一个位图形编辑软件,KOffice 套装的一部份。包含一个绘画程式和照片编辑器,Krita 是自由软件,并根据GNU通用公共许可证发布)、[Mycroft][8](LCTT 译注:Mycroft 是一个开源AI智能家居平台,配置 Raspberry Pi 2 和 Arduino 控制器),以及 [Horizon Computing][9](LCTT 译注:为客户提供优质的硬件架构为其运行云平台)都将使用 Snap。Arch Linux、Debain、Gentoo 和 OpenWrt 开发团队也已经拥抱了 Snap,也会把 Snap 加入到他们各自的发行版中。 + +Snap 包又叫做“Snaps”,现在已经可以原生的运行在 Arch、Debian、Fedora、Kubuntu、Lubuntu、Ubuntu GNOME、Ubuntu 
Kylin、Ubuntu MATE、Ubuntu Unity 和 Xubuntu 之上。 Snap 也在 CentOS、Elementary、Gentoo、Mint、OpenSUSE 和 Red Hat Enterprise Linux (RHEL) 上取得了验证,并且也很容易运行在其他 Linux 发行版上。 + +这些发行版正在使用 Snaps,Shuttleworth 声称:“Snaps 为每个 Linux 台式机、服务器、设备和云机器带来了很多应用程序,在让用户使用最好的应用的同时也给了用户选择发行版的自由。” + +这些发行版共同代表了 Linux 桌面、服务器和云系统发行版的主流。为什么它们从现有的软件包管理系统换了过来呢? Arch Linux 的贡献者 Tim Jester-Pfadt 解释说,“Snaps 最棒的一点是它支持先锐和测试通道,这可以让用户选择使用预发布的开发者版本或跟着最新的稳定版本。” + +除过这些 Linux 分支,独立软件开发商也将会因为 Snap 很好的简化了第三方 Linux 应用程序分发和安全维护问题而拥抱 Snap。例如,[文档基金会][14]也将会让流行的开源办公套件 [LibreOffice][15] 支持 Snap 程序包。 + +文档基金会的联合创始人 Thorsten Behrens 这样说: + +> 我们的目标是尽可能的使 LibreOffice 能被大多数人更容易使用。Snap 使我们的用户能够在不同的桌面系统和发行版上更快捷、更容易、持续地获取最新的 LibreOffice 版本。更好的是,它也会帮助我们的发布工程师最终从周而复始的、自产的、陈旧的 Linux 开发解决方案中解放出来,很多东西都可以一同维护了。 + +Mozilla 的 Firefix 副总裁 Nick Nguyen 在该声明中提到: + +> 我们力求为用户提供良好的使用体验,并且使火狐浏览器能够在更多平台、设备和操作系统上运行。随着引入 Snaps ,对火狐浏览器的持续优化成为可能,使它可以为 Linux 用户提供最新的特性。 + +[Krita 基金会][17] (基于 KDE 的图形程序)项目领导 Boudewijn Rempt 说: + +> 在一个私有仓库中维护 DEB 包是复杂而耗时的。Snaps 更容易维护、打包和分发。把 Snap 放进软件商店也特别容易,这是我发布软件用过的最舒服的软件商店了。[Krita 3.0][18] 刚刚作为一个 snap 程序包发行,新版本出现时它会自动更新。 + +不仅 Linux 桌面系统程序为 Snap 而激动。物联网(IoT)和嵌入式开发者也以双手拥抱了 Snap。 + +由于 Snaps 彼此隔离,带来了数据安全性,它们还可以自动更新或回滚,这对于硬件设备是极好的。多个厂商都发布了运行着 snappy 的设备(LCTT 译注:Snap 基于 snappy进行构建),这带来了一种新的带有物联网应用商店的“智能新锐”设备。Snappy 设备能够自动接收系统更新,并且连同安装在设备上的应用程序也会得到更新。 + +据 Shuttleworth 说,戴尔公司是最早一批认识到 Snap 的巨大潜力的物联网供应商之一,也决定在他们的设备上使用 Snap 了。 + +戴尔公司的物联网战略和合作伙伴主管 Jason Shepherd 说:“我们认为,Snaps 能解决在单一物联网网关上部署和运行多个第三方应用程序所带来的安全风险和可管理性挑战。这种可信赖的通用的应用程序格式才是戴尔真正需要的,我们的物联网解决方案合作伙伴和商业客户都对构建一个可扩展的、IT 级的、充满活力的物联网应用生态系统有极大的兴趣。” + +OpenWrt 的开发者 Matteo Croce 说:“这很简单, Snaps 可以在保持核心系统不变的情况下递送新的应用... 
Snaps 是为 OpenWrt AP 和路由器提供大量软件的最快方式。” + +Shuttleworth 并不认为 Snaps 会取代已经存在的 Linux 程序包比如 [RPM][19] 和 [DEB][20]。相反,他认为它们将会相辅相成。Snaps 将会与现有软件包共存。每个发行版都有其自己提供和更新核心系统及其更新的机制。Snap 为桌面系统带来的是通用的应用程序,这些应用程序不会影响到操作系统的基础。 + +每个 Snap 都通过使用大量的内核隔离和安全机制而限制,以满足 Snap 应用的需求。谨慎的审核过程可以确保 Snap 仅仅得到其完成请求操作的权限。用户在安装 Snap 的时候也不必考虑复杂的安全问题。 + +Snap 本质上是一个自包容的 zip 文件,能够快速地在包内执行。流行的[优麒麟][21]团队的负责人 Jack Yu 称:“Snaps 比传统的 Linux 包更容易构建,允许我们独立于操作系统解决依赖性,所以我们很容易地跨发行版为所有用户提供最好、最新的中国 Linux 应用。” + +由 Canonical 设计的 Snap 程序包格式由 [snapd][22] 所处理。它的开发工作放在 GitHub 上。将其移植到更多的 Linux 发行版已经被证明是很简单的,社区还在不断增长,吸引了大量具有 Linux 经验的贡献者。 + +Snap 程序包使用 snapcraft 工具来构建。项目官网是 [snapcraft.io][24],其上有构建 Snap 的指导和逐步指南,以及给项目开发者和使用者的文档。Snap 能够基于现有的发行版程序包构建,但通常使用源代码来构建,以获得优化和减小软件包大小。 + +如果你不是 Ubuntu 的忠实粉丝或者一个专业的 Linux 开发者,你可能还不知道 Snap。未来,在任何平台上需要用 Linux 完成工作的任何人都会知道这个软件。它会成为主流,尤其是在 Linux 应用程序的安装和更新机制方面。 + +### 相关内容: + +- [Linux 专家 Matthew Garrett:Ubuntu 16.04 的新 Snap 程序包格式存在安全风险 ][25] +- [Ubuntu Linux 16.04 ] +- [Microsoft 和 Canonical 合作使 Ubuntu 可以在 Windows 10 上运行 ] + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/ubuntu-snap-takes-charge-of-linux-desktop-and-iot-software-distribution/ + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[28]: http://www.canonical.com/ +[29]: http://www.ubuntu.com/ +[30]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ +[1]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ +[2]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ +[3]: https://www.openssl.org/ +[4]: http://www.dell.com/en-us/ +[5]: http://www.samsung.com/us/ +[6]: http://www.mozilla.com/ +[7]: https://krita.org/en/ +[8]: https://mycroft.ai/ +[9]: http://www.horizon-computing.com/ +[10]: https://www.archlinux.org/ +[11]: https://www.debian.org/ +[12]: https://www.gentoo.org/ +[13]: https://openwrt.org/ +[14]: https://www.documentfoundation.org/ +[15]: https://www.libreoffice.org/download/libreoffice-fresh/ +[16]: https://www.mozilla.org/en-US/firefox/new/ +[17]: https://krita.org/en/about/krita-foundation/ +[18]: https://krita.org/en/item/krita-3-0-released/ +[19]: http://rpm5.org/ +[20]: https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html +[21]: http://www.ubuntu.com/desktop/ubuntu-kylin +[22]: https://launchpad.net/ubuntu/+source/snapd +[23]: https://github.com/snapcore/snapd +[24]: http://snapcraft.io/ +[25]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ +[26]: http://www.zdnet.com/article/ubuntu-linux-16-04-is-here/ +[27]: http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/ + + diff --git a/translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md b/translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md deleted file mode 100644 index 
f7b3e5590c..0000000000 --- a/translated/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md +++ /dev/null @@ -1,121 +0,0 @@ - - -Ubuntu Snap 软件包管理 Linux 桌面和 IoT 的软件安装 -=========================================================================== - - -[Canonical][28] 和 [Ubuntu][29] 创始人 Mark Shuttleworth 在一次采访中说他不计划宣布 Ubuntu 的新 [ Snap 程序包格式][30]。但是 -在之后几个月,各种 Linux 发行版的开发者和团队都宣布他们会把 Snap 作为通用 Linux 程序包格式。 -![](http://zdnet2.cbsistatic.com/hub/i/r/2016/06/14/a9b2a139-3cd4-41bf-8e10-180cb9450134/resize/770xauto/adc7d16a46167565399ecdb027dd1416/ubuntu-snap.jpg) ->Linux 供应商,独立软件开发商(ISV:Independent Software Vendor)和开发团队都采用 Ubuntu Snap 作为多种 Linux 系统的配置和更新程序包。 - -为什么呢?因为 Snap 能使一个二进制程序包可以完美、安全地在任何 Linux 台式机、服务器、云或设备上运行。Canonical 的 Ubuntu 客户端产品和版本负责人 Olli Ries 说: - - ->[ Snap 程序包的安全机制][1] 允许我们为 Snap 应用单独空出一些系统空间,这样我们就可以根据我们自身的情况更高效的进行循环开发。用户安装一个 Snap 的时候也不用担心是否会影响其他的应用程序和操作系统。 - - -当然了,早期的 Linux 内核开发者和 CoreOS 【译注:CoreOS是一种操作系统,于2013年十二月发布,它的设计旨在关注开源操作系统内核的新兴使用——用于大量基于云计算的虚拟服务器。】安全维护者 Matthew Garrett 指出:如果你 [使用带有不安全程序的 Snap 程序包比如 X11 ][2] 视窗系统【译注:X11也叫做 X Window 系统,X Window 系统 ( X11 或 X )是一种位图显示的视窗系统 。它是在 Unix 和类 Unix 操作系统 ,以及 OpenVMS 上建立图形用户界面的标准工具包和协议,并可用于几乎所有已有的现代操作系统。】,实际上你的系统一点也不安全。 - - -Shuttleworth 同意 Garrett 的观点,但是他也说你可以控制 Snap 应用使用多少的系统剩余空间。比如,一个 web 浏览器可以包含一个安全的 Snap 程序包,这个 Snap 使用 Ubuntu 的一个包 [openssl][3]【译注:OpenSSL 是一个强大的安全套接字层密码库,囊括主要的密码算法、常用的密钥和证书封装管理功能及 SSL 协议,并提供丰富的应用程序供测试或其它目的使用。】的安全传输层(TLS,Transport Layer Security)和安全套接字层(SSL,Secure Sockets Layer)二进制文件。除此之外,即使有程序攻破了浏览器的诉讼程序【译注:诉讼程序属于程序性法律程序中的公力救济型程序】,也不能进行底层的系统操作。 - -很多团队也这样认为。[戴尔][4],[三星][5],[Mozilla][6],[krita][7]【译注:Krita 是一个位图形编辑软件,KOffice 套装的一部份。包含一个绘画程式和照片编辑器,Krita 是自由软件,并根据GNU通用公共许可证发布。】,[麦考夫][8]【译注:Mycroft 是一个开源AI智能家居平台,配置 Raspberry Pi 2 和 Arduino 控制器,应该就是以夏洛克福尔摩斯的哥哥为名的。】,以及 [地平线计算][9] 【译注:地平线计算解决方案,为客户提供优质的硬件架构为其运行云平台。】都将使用 Snap。[Arch Linux][10],[Debain][11],[Gentoo][12],和 [OpenWrt][13] 【译注:OpenWrt 可以被描述为一个嵌入式的 Linux 发行版】开发团队也已经拥抱了 Snap,也会把 Snap 加入到他们各自开发的分支中。 - 
-Snap 包又叫做“ Snaps ”,现在在 Arch、Debian、Fedora、Kubuntu、Lubuntu、Ubuntu GNOME、Ubuntu Kylin、Ubuntu MATE、Ubuntu Unity 和 Xubuntu 上运行。 Snap 正在 CentOS、Elementary、Gentoo、Mint、OpenSUSE 和 Red Hat Enterprise Linux (RHEL) 上予以验证,并且在其他 Linux 发行版上运行也会很容易。 - -这些发行版都使用 Snaps,Shuttleworth 声称:“ Snaps 为每个 Linux 台式机、服务器、设备和云机器带来了很多应用程序,也给了用户更大的自由,用户可以选择任何带有最好应用程序的 Linux 发行版本。” - -把这些发行版放在一起代表了大多的主流 Linux 桌面、服务器和云系统分支。为什么他们切换到本地包来进行系统管理呢? Arch Linux 的贡献者 -除过这些 Linux 分支,独立软件开发商(ISV,Independent Software Vendor)也将会因为 Snap 很好的简化了第三方 Linux 应用程序和安全维护问题而拥抱 Snap。例如,[文档基金会][14] 也将会开发出受欢迎的可用开源 office 套件 [LibreOffice][15] 作为一个 Snap 程序包。 - -文档基金会的联合创始人 Thorsten Behrens 这样说: - ->我们的目标是使 LibreOffice 能被大多数人更容易使用成为可能。Snap 使我们的用户能够在不同的桌面系统和分支上更快捷更容易的持续获取最新的 LibreOffice 版本。如上所述,它也会帮助我们的版本开发工程师最终从定期的自产的陈旧 Linux 开发解决方案中解放出来,总之很多东西会被共同维护。 - -Mozilla 的 [火狐][16] 副总裁(VP,Vice President)Nick Nguyen 在这段陈述中提到: - ->我们力求为用户提供良好的使用体验,并且使火狐浏览器能够在更多平台、设备和操作系统上运行。随着对 Snaps 的介绍,我们也会对火狐浏览器进行持续优化,使它可以为 Linux 用户提供最新特性。 - -基于 KDE 的图形程序的 [Krita Foundation][17] 项目领导 Boudewijn Rempt 说: - - ->正在维护的 DEB 包在一个私有仓库,这很复杂也很耗费时间。Snaps 更容易维护、打包和配置。把 Snap 放进软件商店也特别容易,我已经把软件发布在最合适的软件商店了。[Krita 3.0][18] 刚刚作为一个 snap 程序包发行,它作为最新的版本能够自动更新。 - -不仅 Linux 桌面系统程序使用 Snap。物联网(IoT)和嵌入式开发者也双手拥抱了 Snap。 - - -Snaps 彼此隔离开来,以确保安全性,它们还可以自动更新或回滚,这对于硬件设备是极好的。多种厂商都在他们的物联网设备上运行着 snappy【译注:Snap 基于 snappy进行构建。】,能够生产带有物联网应用程序商店的新的“智能边缘”设备。Snappy 设备能够自动接收系统更新,并且连同安装在设备上的应用程序也会得到更新。 - -戴尔公司根据最早的物联网厂商之一的创始人 Shuttleworth 看到的 Snap 的巨大潜力决定在他们的设备上使用 Snap。 - -戴尔公司的物联网战略和合作伙伴主管 Jason Shepherd 说:“我们认为,Snaps 能够报告安全风险,也能解决在单一物联网网关上部署和运行多个第三方应用程序所带来的安全风险和可管理性挑战。这种课信赖的通用的应用程序格式才是戴尔真正需要的,我们的物联网解决方案合作伙伴和产品客户都对物联网应用程序的充满活力的生态系统有极大的兴趣。” - - -OpenWrt 的开发者 Matteo Croce 说:“这很简单,在脱离无变化的核心操作系统的时候 Snaps 会为 OpenWrt 递送大量的软件...Snaps 是通过点和路由为 OpenWrt 提供大量软件的最快方式。” - -Shuttleworth 认为 Snaps 不会取代已经存在的 Linux 程序包比如 [RPM][19] 和 [DEB][20]。相反,他认为他们将会相辅相成。Snaps 将会与现有包共存。每个发行版都有为系统内核提供更新的相应机制,这种机制也在不断更新。Snap 为桌面系统带来的是通用的应用程序,这些应用程序不会影响基本的系统操作。 - -每个 Snap 使用大量的独立核心和安全机制时也会有所限制,这是 Snap 应用程序的特色,谨慎的重览过程确保 
Snap 仅仅得到其完成请求操作的权限。用户在安装 Snap 的时候也不必考虑复杂的安全问题。 - -Snap实际上是独立式zip文件,能够非常迅速地在原地执行,很受欢迎的[中标麒麟][21]团队的负责人 Jack Yu 称:“Snaps 比传统的 Linux 包更容易构建,允许我们对这种基本的系统的操作的独立性产生依赖,所以我们可以为所有的分支用户开发更好的最新国产 Linux 应用程序。” - -Snap 程序包格式由 Canonical 设计,基于 [snapd][22] 。这是GitHub上的一个软件项目。大多 Linux 发行版的部分 snapd 已经被证明是容易理解的,社区里也加入了新的有大量 Linux 经验的贡献者。 - - -Snap 程序包使用 snapcraft 工具来构建。项目基地是 [snapcraft.io][24] 网站,附有构建 Snap 的预览和逐步指南,不包括项目开发者和使用者的文件。Snap可能基于现有的发行版程序包,但更常使用源代码来构建,为了优化和规模效率。 - -如果你不是 Ubuntu 的忠实粉丝或者一个偏执的 Linux 开发者你可能不知道 Snap。未来,在任何平台上需要用 Linux 完成工作的任何人都会知道这个软件。用它的方法完成工作会成为主流 -- 尤其在这些方面将更重要 -- Linux 应用程序的安装和更新机制。 - - -#### 相关内容: - - -- [Linux 专家 Matthew Garrett:Ubuntu 16.04 的新 Snap 程序包格式存在安全风险 ][25] -- [Ubuntu Linux 16.04 ] -- [Microsoft 和 Canonical 合作使 Ubuntu 可以在 Windows 10 上运行 ] - - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/ubuntu-snap-takes-charge-of-linux-desktop-and-iot-software-distribution/ - -作者:[Steven J. Vaughan-Nichols][a] -译者:[vim-kakali](https://github.com/vim-kakali) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[28]: http://www.canonical.com/ -[29]: http://www.ubuntu.com/ -[30]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ -[1]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ -[2]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ -[3]: https://www.openssl.org/ -[4]: http://www.dell.com/en-us/ -[5]: http://www.samsung.com/us/ -[6]: http://www.mozilla.com/ -[7]: https://krita.org/en/ -[8]: https://mycroft.ai/ -[9]: http://www.horizon-computing.com/ -[10]: https://www.archlinux.org/ -[11]: https://www.debian.org/ -[12]: https://www.gentoo.org/ -[13]: https://openwrt.org/ -[14]: https://www.documentfoundation.org/ -[15]: 
https://www.libreoffice.org/download/libreoffice-fresh/ -[16]: https://www.mozilla.org/en-US/firefox/new/ -[17]: https://krita.org/en/about/krita-foundation/ -[18]: https://krita.org/en/item/krita-3-0-released/ -[19]: http://rpm5.org/ -[20]: https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html -[21]: http://www.ubuntu.com/desktop/ubuntu-kylin -[22]: https://launchpad.net/ubuntu/+source/snapd -[23]: https://github.com/snapcore/snapd -[24]: http://snapcraft.io/ -[25]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ -[26]: http://www.zdnet.com/article/ubuntu-linux-16-04-is-here/ -[27]: http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/ - - From af7ef28c68e64172939dc114e5bd9c0072183a8f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 25 Aug 2016 13:02:57 +0800 Subject: [PATCH 469/471] PUB:20160416 A newcomer's guide to navigating OpenStack Infrastructure @kylepeng93 --- ... to navigating OpenStack Infrastructure.md | 94 +++++++++++++++++++ ... to navigating OpenStack Infrastructure.md | 64 ------------- 2 files changed, 94 insertions(+), 64 deletions(-) create mode 100644 published/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md delete mode 100644 translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md diff --git a/published/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md b/published/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md new file mode 100644 index 0000000000..952fc6f594 --- /dev/null +++ b/published/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md @@ -0,0 +1,94 @@ +给学习 OpenStack 架构的新手入门指南 +=========================================================== + +OpenStack 欢迎新成员的到来,但是,对于这个发展趋近成熟并且快速迭代的开源社区而言,能够拥有一个新手指南并不是件坏事。在奥斯汀举办的 OpenStack 峰会上,[Paul Belanger][1] (来自红帽公司)、 [Elizabeth K. 
Joseph][2] (来自 HPE 公司)和 [Christopher Aedo][3] (来自 IBM 公司)就[针对新人的 OpenStack 架构][4]作了一场专门的讲演。在这次采访中,他们提供了一些建议和资源来帮助新人成为 OpenStack 贡献者中的一员。 + +![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) + +**你的讲演介绍中说你将“深入架构核心,并解释你需要知道的关于让 OpenStack 工作起来的每一件事情”。这对于 40 分钟的讲演来说是一个艰巨的任务。那么,对于学习 OpenStack 架构的新手来说最需要知道那些事情呢?** + +**Elizabeth K. Joseph (EKJ)**: 我们没有为 OpenStack 使用 GitHub 这种提交补丁的方式,这是因为这样做会对新手造成巨大的困扰,尽管由于历史原因我们还是在 GitHub 上维护了所有库的一个镜像。相反,我们使用了一种完全开源的评审形式,而且持续集成(CI)是由 OpenStack 架构团队维护的。与之有关的,自从我们使用了 CI 系统,每一个提交给 OpenStack 的改变都会在被合并之前进行测试。 + +**Paul Belanger (PB)**: 这个项目中的大多数都是富有激情的人,因此当你提交的补丁被某个人否定时不要感到沮丧。 + +**Christopher Aedo (CA)**:社区会帮助你取得成功,因此不要害怕提问或者寻求更多的那些能够促进你理解某些事物的引导者。 + +**在你的讲话中,对于一些你无法涉及到的方面,你会向新手推荐哪些在线资源来让他们更加容易入门?** + +**PB**:当然是我们的 [OpenStack 项目架构文档][5]。我们已经花了足够大的努力来尽可能让这些文档能够随时保持最新状态。在 OpenStack 运行中使用的每个系统都作为一个项目,都制作了专门的页面来进行说明。甚至于连 OpenStack 云这种架构团队也会放到线上。 + +**EKJ**:我对于架构文档这件事上的观点和 Paul 是一致的,另外,我们十分乐意看到来自那些正在学习项目的人们提交上来的补丁。我们通常不会意识到我们忽略了文档中的某些内容,除非它们恰好被人问起。因此,阅读、学习,会帮助我们修补这些知识上的漏洞。你可以在 [OpenStack 架构邮件清单]提出你的问题,或者在我们位于 FreeNode 上的 #OpenStack-infra 的 IRC 专栏发起你的提问。 + +**CA**:我喜欢[这个详细的帖子][7],它是由 Ian Wienand 写的一篇关于构建镜像的文章。 + +**"gotchas" 会是 OpenStack 新的贡献者们所寻找的吗?** + +**EKJ**:向项目作出贡献并不仅仅是提交新的代码和新的特性;OpenStack 社区高度重视代码评审。如果你想要别人查看你的补丁,那你最好先看看其他人是如何做的,然后参考他们的风格,最后一步步做到你也能够向其他人一样提交清晰且结构分明的代码补丁。你越是能让你的同伴了解你的工作并知道你正在做的评审,那他们也就越有可能及时评审你的代码。 + +**CA**:我看到过大量的新手在面对 [Gerrit][8] 时受挫,阅读开发者引导中的[开发者工作步骤][9],有可能的话多读几遍。如果你没有用过 Gerrit,那你最初对它的感觉可能是困惑和无力的。但是,如果你随后做了一些代码评审的工作,那么你就能轻松应对它。此外,我是 IRC 的忠实粉丝,它可能是一个获得帮助的好地方,但是,你最好保持一个长期在线的状态,这样,尽管你在某个时候没有出现,人们也可以回答你的问题。(阅读 [IRC,开源成功的秘诀][10])你不必总是在线,但是你最好能够轻松的在一个频道中回溯之前信息,以此来跟上最新的动态,这种能力非常重要。 + +**PB**:我同意 Elizabeth 和 Chris 的观点, Gerrit 是需要花点精力的,它将汇聚你的开发方面的努力。你不仅仅要提交代码给别人去评审,同时,你也要能够评审其他人的代码。看到 Gerrit 的界面,你可能一时会变的很困惑。我推荐新手去尝试 [Gertty][11],它是一个基于控制台的终端界面,用于 Gerrit 代码评审系统,而它恰好也是 OpenStack 架构所驱动的一个项目。 + +**你对于 OpenStack 新手如何通过网络与其他贡献者交流方面有什么好的建议?** + +**PB**:对我来说,是通过 IRC 并在 Freenode 上参加 
#OpenStack-infra 频道([IRC 日志][12])。这频道上面有很多对新手来说很有价值的资源。你可以看到 OpenStack 项目日复一日的运作情况,同时,一旦你知道了 OpenStack 项目的工作原理,你将更好的知道如何为 OpenStack 的未来发展作出贡献。 + +**CA**:我想要为 IRC 再次说明一点,在 IRC 上保持全天在线记录对我来说有非常重大的意义,因为我会时刻保持连接并随时接到提醒。这也是一种非常好的获得帮助的方式,特别是当你和某人卡在了项目中出现的某一个难题的时候,而在一个活跃的 IRC 频道中,总会有一些人很乐意为你解决问题。 + +**EKJ**:[OpenStack 开发邮件列表][13]对于能够时刻查看到你所致力于的 OpenStack 项目的最新情况是非常重要的。因此,我推荐一定要订阅它。邮件列表使用主题标签来区分项目,因此你可以设置你的邮件客户端来使用它来专注于你所关心的项目。除了在线资源之外,全世界范围内也成立了一些 OpenStack 小组,他们被用来为 OpenStack 的用户和贡献者提供服务。这些小组可能会定期要求 OpenStack 主要贡献者们举办座谈和活动。你可以在 MeetUp.com 上搜素你所在地域的贡献者活动聚会,或者在 [groups.openstack.org][14] 上查看你所在的地域是否存在 OpenStack 小组。最后,还有一个每六个月举办一次的 [OpenStack 峰会][15],这个峰会上会作一些关于架构的演说。当前状态下,这个峰会包含了用户会议和开发者会议,会议内容都是和 OpenStack 相关的东西,包括它的过去,现在和未来。 + +**OpenStack 需要在那些方面得到提升来让新手更加容易学会并掌握?** + +**PB**: 我认为我们的 [account-setup][16] 环节对于新的贡献者已经做的比较容易了,特别是教他们如何提交他们的第一个补丁。真正参与到 OpenStack 开发者模式中是需要花费很大的努力的,相比贡献者来说已经显得非常多了;然而,一旦融入进去了,这个模式将会运转的十分高效和令人满意。 + +**CA**: 我们拥有一个由专业开发者组成的社区,而且我们的关注点都是发展 OpenStack 本身,同时,我们致力于让用户付出更小的代价去使用 OpenStack 云架构平台。我们需要发掘更多的应用开发者,并且鼓励更多的人去开发能在 OpenStack 云上完美运行的云应用程序,我们还鼓励他们在[社区 App 目录][17]上去贡献那些由他们开发的应用。我们可以通过不断提升我们的 API 标准和保证我们不同的库(比如 libcloud,phpopencloud 以及其他一些库)持续地为开发者提供可信赖的支持来实现这一目标。还有一点就是通过举办更多的 OpenStack 黑客比赛。所有的这些事情都可以降低新人的学习门槛,这样也能引导他们与这个社区之间的关系更加紧密。 + +**EKJ**: 我已经致力于开源软件很多年了。但是,对于大量的 OpenStack 开发者而言,这是一个他们自己所从事的第一个开源项目。我发现他们之前使用私有软件的背景并没有为他们塑造开源的观念、方法论,以及在开源项目中需要具备的合作技巧。我乐于看到我们能够让那些曾经一直在使用私有软件工作的人能够真正的明白他们在开源如软件社区所从事的事情的巨大价值。 + +**我想把 2016 年打造成开源俳句之年。请用俳句来向新手解释 OpenStack 一下。** + +(LCTT 译注:俳句(Haiku)是一种日本古典短诗,以5-7-5音节为三句,校对者不揣浅陋,诌了几句歪诗,勿笑 :D,另外 OpenStack 本身音节太长,就捏造了一个中文译名“开栈”——明白就好。) + +**PB**: 开栈在云上//倘钟情自由软件//先当造补丁(OpenStack runs clouds +If you enjoy free software +Submit your first patch) + +**CA**:时光不必久//开栈将支配世界//协力早来到(In the near future +OpenStack will rule the world +Help make it happen!) + +**EKJ**:开栈有自由//放在自家服务器//运行你的云(OpenStack is free +Deploy on your own servers +And run your own cloud!) 
+ +*Paul、Elizabeth 和 Christopher 在 4 月 25 号星期一上午 11:15 于奥斯汀举办的 OpenStack 峰会发表了[演说][18]。 + +------------------------------------------------------------------------------ + +via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners + +作者:[Rikki Endsley][a] +译者:[kylepeng93](https://github.com/kylepeng93) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://rikkiendsley.com/ +[1]: https://twitter.com/pabelanger +[2]: https://twitter.com/pleia2 +[3]: https://twitter.com/docaedo +[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 +[5]: http://docs.openstack.org/infra/system-config/ +[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra +[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html +[8]: https://code.google.com/p/gerrit/ +[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow +[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/ +[11]: https://pypi.python.org/pypi/gertty +[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/ +[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev +[14]: https://groups.openstack.org/ +[15]: https://www.openstack.org/summit/ +[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup +[17]: https://apps.openstack.org/ +[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 diff --git a/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md b/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md deleted file mode 100644 index 25353ef462..0000000000 --- a/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md +++ /dev/null @@ -1,64 +0,0 @@ -给学习OpenStack基础设施的新手的入门指南 
-=========================================================== - -任何一个为OpenStack贡献源码的人会受到社区的欢迎,但是,对于一个发展趋近成熟并且快速迭代的开源社区而言,能够拥有一个新手指南并不是 -件坏事。在奥斯汀举办的OpenStack峰会上,[Paul Belanger][1] (红帽公司), [Elizabeth K. Joseph][2] (HPE公司),和[Christopher Aedo][3] (IBM公司)将会就针对新人的OpenStack基础设施作一场专门的会谈。在这次采访中,他们将会提供一些建议和资源来帮助新人成为OpenStack贡献者中的一员。 -![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) - -**你在谈话中表示你将“投入全部身心于基础设施并解释你需要知道的有关于维持OpenStack正常工作的系统的每一件事情”。这是一个持续了40分钟的的艰巨任务。那么,对于学习OpenStack基础设施的新手来说最需要知道那些事情呢?** -**Elizabeth K. Joseph (EKJ)**: 我们没有为OpenStack使用GitHub这种提交补丁的方式,这是因为这样做会对新手造成巨大的困扰,尽管由于历史原因我们还是保留了所有原先就在GitHub上的所有库的镜像。相反,我们使用了一种完全开源的复查形式,并通过OpenStack基础设施团队来持续的维持系统集成(CI)。相关的,自从我们使用了CI系统,每一个提交给OpenStack的改变都会在被合并之前进行测试。 -**Paul Belanger (PB)**: 这个项目中的大多数都是富有激情的人,因此当你提交的补丁被某个人否定时不要感到沮丧。 -**Christopher Aedo (CA)**:社区很想帮助你取得成功,因此不要害怕提问或者寻求更多的那些能够促进你理解某些事物的引导者。 - -### 在你的讲话中,对于一些你无法涉及到的方面,你会向新手推荐哪些在线资源来让他们更加容易入门? -**PB**:当然是我们的[OpenStack项目基础设施文档][5]。我们已经花了足够大的努力来尽可能让这些文档能够随时保持最新状态。我们对每个运行OpenStack项目并投入使用的系统都有制作专门的页面来进行说明。甚至于连OpenStack云基础设施团队也即将上线。 - -**EKJ**:我对于将基础设施文档作为新手入门教程这件事上的观点和Paul是一致的,另外,我们十分乐意看到来自那些folk了我们项目的学习者提交上来的补丁。我们通常不会意识到我们忽略了文档中的某些内容,除非它们恰好被人问起。因此,阅读,学习,然后帮助我们修补这些知识上的漏洞。你可以在[OpenStack基础设施邮件清单]提出你的问题,或者在我们位于FreeNode上的#OpenStack-infra的IRC专栏发起你的提问。 -**CA**:我喜欢[这个详细的发布][7],它是由Ian Wienand写的一篇关于构建图片的文章。 -### "gotchas" 会是OpenStack新的代码贡献者苦苦寻找的吗? 
-**EKJ**:向项目作出贡献并不仅仅是提交新的代码和新的特性;OpenStack社区高度重视代码复查。如果你想要别人查看你的补丁,那你最好先看看其他人是如何做的,然后参考他们的风格,最后一步布做到你也能够向其他人一样提交清晰且结构分明的代码补丁。你越是能让你的同伴了解你的工作并知道你正在做的复查,那他们也就更有可能形成及时复查你的代码的风格。 -**CA**:我看到过大量的新手在面对Gerrit时受挫,阅读开发者引导中的[开发者工作步骤][9]时,可能只是将他通读了一遍。如果你没有经常使用Gerrit,那你最初对它的感觉可能是困惑和无力的。但是,如果你随后做了一些代码复查的工作,那么你马上就能轻松应对它。同样,我是IRC的忠实粉丝。它可能是一个获得帮助的好地方,但是,你最好保持一个长期保留的状态,这样,尽管你在某个时候没有出现,人们也可以回答你的问题。(阅读[IRC,开源界的成功秘诀][10]。)你不必总是在线,但是你最好能够轻松的在一个通道中进行回滚操作,以此来跟上最新的动态,这种能力非常重要。 -**PB**:我同意Elizabeth和Chris—Gerrit关于寻求何种方式来学习的观点。每个开发人员所作出的努力都将让社区变得更加美好。你不仅仅要提交代码给别人去复查,同时,你也要能够复查其他人的代码。留意Gerrit的用户接口,你可能一时会变的很疑惑。我推荐新手去尝试[Gertty][11],它是一个基于控制台的终端接口,用于Gerrit代码复查系统,而它恰好也是OpenStack基础设施所驱动的一个项目。 -### 你对于OpenStack新手如何通过网络与其他贡献者交流方面有什么好的建议? -**PB**:对我来说,是通过IRC以及在Freenode上参加#OpenStack-infra专栏([IRC logs][12]).这专栏上面有很多对新手来说很有价值的资源。你可以看到OpenStack项目日复一日的运作情况,同时,一旦你知道了OpenStack项目的工作原理,你将更好的知道如何为OpenStack的未来发展作出贡献。 -**CA**:我想要为IRC再次说明一点,在IRC上保持一整天的在线记录对我来说有非常重大的意义,因为我会感觉到被重视并且时刻保持连接。这也是一种非常好的获得帮助的方式,特别是当你卡在了项目中出现的某一个难题的时候,而在专栏中,总会有一些人很乐意为你解决问题。 -**EKJ**:[OpenStack开发邮件列表][13]对于能够时刻查看到你所致力于的OpenStack项目的最新情况是非常重要的。因此,我推荐一定要订阅它。邮件列表使用课题标签来区分项目,因此你可以设置你的邮件客户端来使用它,并且集中精力于你所关心的项目。除了在线资源之外,全世界范围内也成立了一些OpenStack小组,他们被用来为OpenStack的用户和贡献者提供服务。这些小组可能会定期举行座谈和针对OpenStack主要贡献者的一些活动。你可以在MeetUp.com上搜素你所在地域的贡献者活动聚会,或者在[groups.openstack.org]上查看你所在的地域是否存在OpenStack小组。最后,还有一个每六个月举办一次的OpenStack峰会,这个峰会上会作一些关于基础设施的演说。当前状态下,这个峰会包含了用户会议和开发者会议,会议内容都是和OpenStack相关的东西,包括它的过去,现在和未来。 -### OpenStack需要在那些方面得到提升来让新手更加容易学会并掌握? -**PB**: 我认为我们的[account-setup][16]过程对于新的贡献者已经做的比较容易了,特别是教他们如何提交他们的第一个补丁。真正参与到OpenStack开发者模式的过程是需要花费很大的努力的,可能对于开发者来说已经显得非常多了;然而,一旦融入进去了,这个模式将会运转的十分高效和令人满意。 -**CA**: 我们拥有一个由专业开发者组成的社区,而且我们的关注点都是发展OpenStack本身,同时,我们致力于让用户付出更小的代价去使用OpenStack云基础设施平台。我们需要发掘更多的应用开发者,并且鼓励更多的人去开发能在OpenStack云上完美运行的云应用程序,我们还鼓励他们在[社区App目录]上去贡献那些由他们开发的app。我们可以通过提升我们的API标准和保证我们不同的库(比如libcloud,phpopencloud已经其他一些库),并让他们持续的为开发者提供可信赖的支持来实现这一目标。还有一点就是通过倡导更多的OpenStack黑客加入进来。所有的这些事情都可以降低新人的学习门槛,这样也能引导他们与这个社区之间的关系更加紧密。y. 
-**EKJ**: 我已经致力于开源软件很多年了。但是,对于大量的OpenStack开发者而言,这是一个他们每个人都在从事的第一个开源项目。我发现他们之前使用私有软件的背景并没有为他们塑造开源的观念和方法论,还有在开源项目中需要具备的合作技巧。我乐于看到我们能够让那些曾经一直在使用私有软件工作的人能够真正的明白他们在开源如软件社区所从事的事情的巨大价值。 -### 我认为2016年对于开源Haiku的进一步增长是具有重大意义的一年。通过Haiku来向新手解释OpenStack。 -**PB**: 如果你喜欢自由软件,你可以向OpenStack提交你的第一个补丁。 -**CA**: 在不久的未来,OpenStack将以统治世界的姿态让这个世界变得更好。 -**EKJ**: OpenStack是一个可以免费部署在你的服务器上面并且运行你自己的云的一个软件。 -*Paul,Elizabeth和Christopher将会在4月25号星期一上午11:15于奥斯汀举办的OpenStack峰会上进行演说。 - ------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners - -作者:[linux.com][a] -译者:[kylepeng93](https://github.com/kylepeng93) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://rikkiendsley.com/ -[1]: https://twitter.com/pabelanger -[2]: https://twitter.com/pleia2 -[3]: https://twitter.com/docaedo -[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 -[5]: http://docs.openstack.org/infra/system-config/ -[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html -[8]: https://code.google.com/p/gerrit/ -[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow -[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/ -[11]: https://pypi.python.org/pypi/gertty -[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/ -[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -[14]: https://groups.openstack.org/ -[15]: https://www.openstack.org/summit/ -[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup -[17]: https://apps.openstack.org/ -[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 From 01d50f3aef8786d5708ff0b3fb833a11b1d90ff4 Mon Sep 17 00:00:00 2001 From: Ezio 
Date: Thu, 25 Aug 2016 17:29:49 +0800
Subject: [PATCH 470/471] =?UTF-8?q?=E7=BB=A7=E6=89=BFpart1-6=20=E5=88=B0?=
 =?UTF-8?q?=E4=B8=80=E8=B5=B7=EF=BC=8C=E5=BE=85=E6=A0=A1=E5=AF=B9=E3=80=82?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...ce portfolio - Machine learning project.md | 837 ++++++++++++++++++
 ...ce portfolio - Machine learning project.md | 194 ----
 2 files changed, 837 insertions(+), 194 deletions(-)
 create mode 100644 sources/team_test/ Building a data science portfolio - Machine learning project.md
 delete mode 100644 sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md

diff --git a/sources/team_test/ Building a data science portfolio - Machine learning project.md b/sources/team_test/ Building a data science portfolio - Machine learning project.md
new file mode 100644
index 0000000000..5213042360
--- /dev/null
+++ b/sources/team_test/ Building a data science portfolio - Machine learning project.md
@@ -0,0 +1,837 @@
+Building a data science portfolio: Machine learning project
+===========================================================
+
+>这是“建立数据科学作品集(portfolio)”系列的第三篇文章。如果你喜欢这个系列并且想继续关注,可以[在页面底部订阅][1]。
+
+在招聘数据科学家时,公司越来越看重作品集。其中一个原因是,作品集是评判一个人真实能力的最好方式。好消息是,作品集完全在你的掌控之下:只要付出一些努力,你就能做出让招聘方印象深刻的作品集。
+
+打造高质量作品集的第一步是弄清楚应该展示哪些技能。公司希望数据科学家具备的基本技能包括:
+
+- 沟通能力
+- 协作能力
+- 技术能力
+- 数据推理能力
+- 主动性
+
+任何好的作品集都由多个项目组成,每个项目可以展示以上的一到两点。本文我们主要讲上面列出的第三项:技术能力,以及如何创建一个端到端的机器学习项目。读完本文,你将拥有一个能够展示你的数据推理能力和技术能力的项目。如果你想看完整的成品,[这里][2]有一个完整的例子。
+
+### 一个端到端的项目
+
+作为一个数据科学家,有时候你会拿到一个数据集,并被要求用它来[讲一个故事][3]。在这种时候,沟通是非常重要的,你需要把分析的过程和结论讲清楚。用 Jupyter notebook 做一遍分析,就像我们以前的例子那样,最后产出一份报告或者演示文档,通常就可以了。
+
+但有时候,你会被要求创建一个具有操作价值的项目:一个直接影响公司业务、会被反复运行、被许多人使用的项目。这类任务可能是“创建一个算法来预测波动率”,或者“创建一个模型来自动为我们的文章打标签”。在这种情况下,工程能力比讲故事更重要。你必须能够构建一个数据集,理解它,然后编写脚本来处理这些数据。这些脚本还要运行得快、占用的系统资源小,因为它们可能会被运行很多次。脚本的可用性也很重要,它不能只是一个演示版。可用性是指能整合进操作流程,因为它很有可能要面向用户。
+
+端到端项目的主要组成部分:
+
+- 理解背景
+- 浏览数据并找出细微差别
+- 创建结构化的项目,使其容易整合进操作流程
+- 编写运行速度快、占用系统资源小的代码
+- 写好文档,以便其他人使用
+
+为了有效地创建这种类型的项目,我们需要处理多个文件。强烈推荐使用 [Atom][4] 这样的文本编辑器,或者 [PyCharm][5] 这样的 IDE。这些工具允许你在文件间跳转,并编辑不同类型的文件,例如 markdown 文件、Python 文件和 csv 文件等。把项目结构化,也便于使用 [Github][6] 做版本控制。
+
+![](https://www.dataquest.io/blog/images/end_to_end/github.png)
+>Github 上的这个项目。
+
+在这一节中我们将使用 [Pandas][7] 和 [scikit-learn][8] 这两个扩展包。我们还会大量用到 Pandas 的 [DataFrames][9],它使得 Python 读取和处理表格数据更加方便。
+
+### 找到好的数据集
+
+为端到端的作品集项目找到一个好的数据集很难。数据集需要足够大,但又不能大到超出内存和性能的限制;它还需要有实际的可操作价值。例如,[这个数据集][10]包含了美国院校的录取标准、毕业率以及毕业以后的收入,是一份很好的数据。但是不管你怎么分析这份数据,它都不适合用来创建端到端的项目:你可以告诉人们上了某所大学之后的预期收益,但那只是一次性的问答,没有足够的细微差别;你也能发现录取标准更严的院校毕业生收入往往更高,但这是在讲故事,而不是可操作的预测。
+
+当你的数据有好几个 GB,或者你想预测的细微差别需要你在数据集上反复运行算法时,内存和性能的约束就会凸显出来。
+
+一个好的数据集应该允许你动态地转换数据,并且回答动态的问题。股票价格就是一个很好的例子:你可以预测明天的价格,并在收盘后把新数据添加到算法中,帮助你预测收益。这不是讲故事,这是实打实的价值。
+
+一些寻找数据集的好地方:
+
+- [/r/datasets][11] – 一个专门分享数据集的 subreddit(Reddit 是国外一个社交新闻站点,subreddit 指该站点下的各个板块)。
+- [Google Public Datasets][12] – 通过 Google BigQuery 发布的公开数据集。
+- [Awesome datasets][13] – 一个托管在 Github 上的数据集清单。
+
+当你浏览这些数据集时,想一下人们会对它们提出什么样的问题,这些问题是一次性的(“放假和标普 500 指数有什么关联?”),还是持续性的(“你能预测股市吗?”)。这里的关键是找出持续性的问题,并用相同的代码、不同的数据多次运行来回答它们。
+
+出于这个目标,我们来看一下 [Fannie Mae 贷款数据][14]。Fannie Mae(房利美)是美国一家政府支持企业,它从其他银行购买按揭贷款,然后把这些贷款打包为住房抵押贷款证券转售出去。这使得贷款机构可以发放更多的抵押贷款,在市场上创造更多的流动性,理论上会带来更多的住房和更好的贷款条件。不过从借款人的角度来说,事情大体上没什么变化。
+
+Fannie Mae 发布两种类型的数据:它所收购的贷款的数据,以及这些贷款随时间推移的偿还情况。在理想的情况下,借款人向贷款机构借钱,然后逐步还清贷款。然而,有些人还不起钱,丧失了抵押品赎回权。丧失抵押品赎回权(foreclosure)是指因无力还款,房子被银行收回。Fannie Mae 会跟踪哪些贷款没有被偿还,哪些房屋需要被收回。这份数据每个季度发布一次,并滞后一年。当前可用的最新数据是 2015 年第一季度的。
+
+“采集数据”在 Fannie Mae 收购贷款时发布,包含借款人的信息、信用评分,以及贷款和房屋的信息。“执行数据”则在贷款被收购后的每一个季度发布,包含借款人的还款信息和丧失抵押品赎回权的状态。一笔贷款在采集数据中只有一行,在执行数据中则可能有十几行。可以这样理解:采集数据告诉你 Fannie Mae 何时获得了一笔贷款,执行数据则是这笔贷款随后的一系列状态更新,其中的某次更新就可能告诉我们这笔贷款在某个季度被收回了抵押品。
+
+![](https://www.dataquest.io/blog/images/end_to_end/foreclosure.jpg)
+>一个因贷款未能按时偿还而被收回并挂牌出售的房子。
+
+### 选择一个角度
+
+我们可以从几个方向来分析 Fannie Mae 数据集。我们可以:
+
+- 预测房屋的销售价格。
+- 预测借款人的还款历史。
+- 在收购时为每一笔贷款打分。
+
+最重要的事情是坚持单一的角度。
关注太多的事情很难做出效果. 选择一个有着足够细节的角度也很重要. 下面的理解就没有太多差别: + +- 找出哪些银行将贷款出售给Fannie Mae. +- 计算贷款人的信用评分趋势. +- 搜索哪些类型的家庭没有偿还贷款的能力. +- 搜索贷款金额和抵押品价格之间的关系 + +上面的想法非常有趣, 它会告诉我们很多有意思的事情, 但是不是一个很适合操作的项目。 + +在Fannie Mae数据集中, 我们将预测贷款是否能被偿还. 实际上, 我们将建立一个抵押贷款的分数来告诉 Fannie Mae买还是不买. 这将给我提供很好的基础. + +------------ + +### 理解数据 +我们来简单看一下原始数据文件。下面是2012年1季度前几行的采集数据。 +``` +100000853384|R|OTHER|4.625|280000|360|02/2012|04/2012|31|31|1|23|801|N|C|SF|1|I|CA|945||FRM| +100003735682|R|SUNTRUST MORTGAGE INC.|3.99|466000|360|01/2012|03/2012|80|80|2|30|794|N|P|SF|1|P|MD|208||FRM|788 +100006367485|C|PHH MORTGAGE CORPORATION|4|229000|360|02/2012|04/2012|67|67|2|36|802|N|R|SF|1|P|CA|959||FRM|794 +``` + +下面是2012年1季度的前几行执行数据 +``` +100000853384|03/01/2012|OTHER|4.625||0|360|359|03/2042|41860|0|N|||||||||||||||| +100000853384|04/01/2012||4.625||1|359|358|03/2042|41860|0|N|||||||||||||||| +100000853384|05/01/2012||4.625||2|358|357|03/2042|41860|0|N|||||||||||||||| +``` +在开始编码之前,花些时间真正理解数据是值得的。这对于操作项目优为重要,因为我们没有交互式探索数据,将很难察觉到细微的差别除非我们在前期发现他们。在这种情况下,第一个步骤是阅读房利美站点的资料: +- [概述][15] +- [有用的术语表][16] +- [问答][17] +- [采集和执行文件中的列][18] +- [采集数据文件样本][19] +- [执行数据文件样本][20] + +在看完这些文件后后,我们了解到一些能帮助我们的关键点: +- 从2000年到现在,每季度都有一个采集和执行文件,因数据是滞后一年的,所以到目前为止最新数据是2015年的。 +- 这些文件是文本格式的,采用管道符号“|”进行分割。 +- 这些文件是没有表头的,但我们有文件列明各列的名称。 +- 所有一起,文件包含2200万个贷款的数据。 +由于执行文件包含过去几年获得的贷款的信息,在早些年获得的贷款将有更多的执行数据(即在2014获得的贷款没有多少历史执行数据)。 +这些小小的信息将会为我们节省很多时间,因为我们知道如何构造我们的项目和利用这些数据。 + +### 构造项目 +在我们开始下载和探索数据之前,先想一想将如何构造项目是很重要的。当建立端到端项目时,我们的主要目标是: +- 创建一个可行解决方案 +- 有一个快速运行且占用最小资源的解决方案 +- 容易可扩展 +- 写容易理解的代码 +- 写尽量少的代码 + +为了实现这些目标,需要对我们的项目进行良好的构造。一个结构良好的项目遵循几个原则: +- 分离数据文件和代码文件 +- 从原始数据中分离生成的数据。 +- 有一个README.md文件帮助人们安装和使用该项目。 +- 有一个requirements.txt文件列明项目运行所需的所有包。 +- 有一个单独的settings.py 文件列明其它文件中使用的所有的设置 + - 例如,如果从多个Python脚本读取相同的文件,把它们全部import设置和从一个集中的地方获得文件名是有用的。 +- 有一个.gitignore文件,防止大的或秘密文件被提交。 +- 分解任务中每一步可以单独执行的步骤到单独的文件中。 + - 例如,我们将有一个文件用于读取数据,一个用于创建特征,一个用于做出预测。 +- 保存中间结果,例如,一个脚本可输出下一个脚本可读取的文件。 + + - 这使我们无需重新计算就可以在数据处理流程中进行更改。 + + +我们的文件结构大体如下: + +``` 
+loan-prediction
+├── data
+├── processed
+├── .gitignore
+├── README.md
+├── requirements.txt
+├── settings.py
+```
+
+### 创建初始文件
+首先,我们需要创建一个 loan-prediction 文件夹,在此文件夹下面,再创建一个 data 文件夹和一个 processed 文件夹。data 文件夹存放原始数据,processed 文件夹存放所有的中间计算结果。
+其次,创建 .gitignore 文件,它将保证某些文件被 git 忽略而不会被推送至 Github。这类文件的一个典型例子是 OSX 会在每一个文件夹里创建的 .DS_Store 文件,它就是 .gitignore 的一个很好的起点。我们还想忽略数据文件,因为它们实在是太大了,同时房利美的条款也禁止我们重新分发这些数据文件,所以我们应该在 .gitignore 文件末尾添加以下两行:
+```
+data
+processed
+```
+
+这是该项目的一个 .gitignore 文件的例子。
+再次,我们需要创建 README.md 文件,它将帮助人们理解该项目。后缀 .md 表示这个文件采用 markdown 格式。Markdown 使你既能够写纯文本,又可以在需要的时候加上漂亮的排版。这里有一份关于 markdown 的入门导引。如果你上传一个叫 README.md 的文件至 Github,Github 会自动渲染该 markdown,并展示给浏览该项目的人。
+至此,我们仅需在 README.md 文件中添加简单的描述:
+```
+Loan Prediction
+-----------------------
+
+Predict whether or not loans acquired by Fannie Mae will go into foreclosure. Fannie Mae acquires loans from other lenders as a way of inducing them to lend more. Fannie Mae releases data on the loans it has acquired and their performance afterwards [here](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html).
+```
+
+现在,我们可以创建 requirements.txt 文件了。这会使其他人可以很方便地安装我们的项目。我们还不知道具体会用到哪些库,但是以下几个库是一个很好的起点:
+```
+pandas
+matplotlib
+scikit-learn
+numpy
+ipython
+scipy
+```
+
+以上几个是 Python 数据分析任务中最常用到的库,可以认为我们将会用到其中的大部分。[这里][24]是该项目的一个 requirements 文件的例子。
+创建 requirements.txt 文件之后,你应该安装这些包了。我们将会使用 Python 3。如果你没有安装 Python,你应该考虑使用 [Anaconda][25],它是一个 Python 发行版,会同时安装上面列出的所有包。
+最后,我们可以建立一个空白的 settings.py 文件,因为我们的项目还没有任何设置。
+
+
+----------------
+
+### 获取数据
+
+一旦我们拥有了项目的基本架构,我们就可以去获取原始数据了。
+
+Fannie Mae 对数据的获取有一些限制,所以你需要先注册一个账户。在创建完账户之后,你可以[在这里][26]找到下载页面,并按需下载任意数量的贷款数据文件。这些文件是 zip 格式的,解压后相当大。
+
+为了达到我们这篇博客文章的目的,我们将下载从 2012 年 1 季度到 2015 年 1 季度的所有数据。接着我们需要解压所有的文件,解压过后,删掉原来的 .zip 文件。最后,loan-prediction 文件夹看起来应该像下面这样:
+```
+loan-prediction
+├── data
+│   ├── Acquisition_2012Q1.txt
+│   ├── Acquisition_2012Q2.txt
+│   ├── Performance_2012Q1.txt
+│   ├── Performance_2012Q2.txt
+│   └── ...
+├── processed +├── .gitignore +├── README.md +├── requirements.txt +├── settings.py +``` + +在下载完数据后,你可以在shell命令行中使用head和tail命令去查看文件中的行数据,你看到任何的不需要的列数据了吗?在做这件事的同时查阅[列名称的pdf文件][27]可能是有用的事情。 +### 读入数据 + +有两个问题让我们的数据难以现在就使用: +- 收购数据和业绩数据被分割在多个文件中 +- 每个文件都缺少标题 + +在我们开始使用数据之前,我们需要首先明白我们要在哪里去存一个收购数据的文件,同时到哪里去存储一个业绩数据的文件。每个文件仅仅需要包括我们关注的那些数据列,同时拥有正确的标题。这里有一个小问题是业绩数据非常大,因此我们需要尝试去修剪一些数据列。 + +第一步是向settings.py文件中增加一些变量,这个文件中同时也包括了我们原始数据的存放路径和处理出的数据存放路径。我们同时也将添加其他一些可能在接下来会用到的设置数据: +``` +DATA_DIR = "data" +PROCESSED_DIR = "processed" +MINIMUM_TRACKING_QUARTERS = 4 +TARGET = "foreclosure_status" +NON_PREDICTORS = [TARGET, "id"] +CV_FOLDS = 3 +``` + +把路径设置在settings.py中将使它们放在一个集中的地方,同时使其修改更加的容易。当提到在多个文件中的相同的变量时,你想改变它的话,把他们放在一个地方比分散放在每一个文件时更加容易。[这里的][28]是一个这个工程的示例settings.py文件 + +第二步是创建一个文件名为assemble.py,它将所有的数据分为2个文件。当我们运行Python assemble.py,我们在处理数据文件的目录会获得2个数据文件。 + +接下来我们开始写assemble.py文件中的代码。首先我们需要为每个文件定义相应的标题,因此我们需要查看[列名称的pdf文件][29]同时创建在每一个收购数据和业绩数据的文件的列数据的列表: +``` +HEADERS = { + "Acquisition": [ + "id", + "channel", + "seller", + "interest_rate", + "balance", + "loan_term", + "origination_date", + "first_payment_date", + "ltv", + "cltv", + "borrower_count", + "dti", + "borrower_credit_score", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "unit_count", + "occupancy_status", + "property_state", + "zip", + "insurance_percentage", + "product_type", + "co_borrower_credit_score" + ], + "Performance": [ + "id", + "reporting_period", + "servicer_name", + "interest_rate", + "balance", + "loan_age", + "months_to_maturity", + "maturity_date", + "msa", + "delinquency_status", + "modification_flag", + "zero_balance_code", + "zero_balance_date", + "last_paid_installment_date", + "foreclosure_date", + "disposition_date", + "foreclosure_costs", + "property_repair_costs", + "recovery_costs", + "misc_costs", + "tax_costs", + "sale_proceeds", + "credit_enhancement_proceeds", + "repurchase_proceeds", + "other_foreclosure_proceeds", + "non_interest_bearing_balance", + 
"principal_forgiveness_balance" + ] +} +``` + + +接下来一步是定义我们想要保持的数据列。我们将需要保留收购数据中的所有列数据,丢弃列数据将会使我们节省下内存和硬盘空间,同时也会加速我们的代码。 +``` +SELECT = { + "Acquisition": HEADERS["Acquisition"], + "Performance": [ + "id", + "foreclosure_date" + ] +} +``` + +下一步,我们将编写一个函数来连接数据集。下面的代码将: +- 引用一些需要的库,包括设置。 +- 定义一个函数concatenate, 目的是: + - 获取到所有数据目录中的文件名. + - 在每个文件中循环. + - 如果文件不是正确的格式 (不是以我们需要的格式作为开头), 我们将忽略它. + - 把文件读入一个[数据帧][30] 伴随着正确的设置通过使用Pandas [读取csv][31]函数. + - 在|处设置分隔符以便所有的字段能被正确读出. + - 数据没有标题行,因此设置标题为None来进行标示. + - 从HEADERS字典中设置正确的标题名称 – 这将会是我们数据帧中的数据列名称. + - 通过SELECT来选择我们加入数据的数据帧中的列. +- 把所有的数据帧共同连接在一起. +- 把已经连接好的数据帧写回一个文件. + +``` +import os +import settings +import pandas as pd + +def concatenate(prefix="Acquisition"): + files = os.listdir(settings.DATA_DIR) + full = [] + for f in files: + if not f.startswith(prefix): + continue + + data = pd.read_csv(os.path.join(settings.DATA_DIR, f), sep="|", header=None, names=HEADERS[prefix], index_col=False) + data = data[SELECT[prefix]] + full.append(data) + + full = pd.concat(full, axis=0) + + full.to_csv(os.path.join(settings.PROCESSED_DIR, "{}.txt".format(prefix)), sep="|", header=SELECT[prefix], index=False) +``` + +我们可以通过调用上面的函数,通过传递的参数收购和业绩两次以将所有收购和业绩文件连接在一起。下面的代码将: +- 仅仅在脚本被在命令行中通过python assemble.py被唤起而执行. +- 将所有的数据连接在一起,并且产生2个文件: + - `processed/Acquisition.txt` + - `processed/Performance.txt` + +``` +if __name__ == "__main__": + concatenate("Acquisition") + concatenate("Performance") +``` + +我们现在拥有了一个漂亮的,划分过的assemble.py文件,它很容易执行,也容易被建立。通过像这样把问题分解为一块一块的,我们构建工程就会变的容易许多。不用一个可以做所有的凌乱的脚本,我们定义的数据将会在多个脚本间传递,同时使脚本间完全的0耦合。当你正在一个大的项目中工作,这样做是一个好的想法,因为这样可以更佳容易的修改其中的某一部分而不会引起其他项目中不关联部分产生超出预期的结果。 + +一旦我们完成assemble.py脚本文件, 我们可以运行python assemble.py命令. 你可以查看完整的assemble.py文件[在这里][32]. + +这将会在处理结果目录下产生2个文件: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... 
+├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +├── .gitignore +├── assemble.py +├── README.md +├── requirements.txt +├── settings.py +``` + + +-------------- + + +### 计算运用数据中的值 + +接下来我们会计算过程数据或运用数据中的值。我们要做的就是推测这些数据代表的贷款是否被收回。如果能够计算出来,我们只要看一下包含贷款的运用数据的参数 foreclosure_date 就可以了。如果这个参数的值是 None ,那么这些贷款肯定没有收回。为了避免我们的样例中存在少量的运用数据,我们会计算出运用数据中有贷款数据的行的行数。这样我们就能够从我们的训练数据中筛选出贷款数据,排除了一些运用数据。 + +下面是一种区分贷款数据和运用数据的方法: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/001.png) + + +在上面的表格中,采集数据中的每一行数据都与运用数据中的多行数据有联系。在运用数据中,在收回贷款的时候 foreclosure_date 就会以季度的的形式显示出收回时间,而且它会在该行数据的最前面显示一个空格。一些贷款没有收回,所以与运用数据中的贷款数据有关的行都会在前面出现一个表示 foreclosure_date 的空格。 + +我们需要计算 foreclosure_status 的值,它的值是布尔类型,可以表示一个特殊的贷款数据 id 是否被收回过,还有一个参数 performance_count ,它记录了运用数据中每个贷款 id 出现的行数。  + +计算这些行数有多种不同的方法: + +- 我们能够读取所有的运用数据,然后我们用 Pandas 的 groupby 方法在数据框中计算出与每个贷款 id 有关的行的行数,然后就可以查看贷款 id 的 foreclosure_date 值是否为 None 。 +    - 这种方法的优点是从语法上来说容易执行。 +    - 它的缺点需要读取所有的 129236094 行数据,这样就会占用大量内存,并且运行起来极慢。 +- 我们可以读取所有的运用数据,然后使用采集到的数据框去计算每个贷款 id 出现的次数。 +    - 这种方法的优点是容易理解。 +    - 缺点是需要读取所有的 129236094 行数据。这样会占用大量内存,并且运行起来极慢。 +- 我们可以在迭代访问运用数据中的每一行数据,而且会建立一个区分开的计数字典。 + - 这种方法的优点是数据不需要被加载到内存中,所以运行起来会很快且不需要占用内存。 +    - 缺点是这样的话理解和执行上可能有点耗费时间,我们需要对每一行数据进行语法分析。 + +加载所有的数据会非常耗费内存,所以我们采用第三种方法。我们要做的就是迭代运用数据中的每一行数据,然后为每一个贷款 id 生成一个字典值。在这个字典中,我们会计算出贷款 id 在运用数据中出现的次数,而且如果 foreclosure_date 不是 Nnoe 。我们可以查看 foreclosure_status 和 performance_count 的值 。 + +我们会新建一个 annotate.py 文件,文件中的代码可以计算这些值。我们会使用下面的代码: + +- 导入需要的库 +- 定义一个函数 count_performance_rows 。 + - 打开 processed/Performance.txt 文件。这不是在内存中读取文件而是打开了一个文件标识符,这个标识符可以用来以行为单位读取文件。  + - 迭代文件的每一行数据。 + - 使用分隔符(|)分开每行的不同数据。 + - 检查 loan_id 是否在计数字典中。 + - 如果不存在,进行一次计数。 + - loan_id 的 performance_count 参数自增 1 次,因为我们这次迭代也包含其中。 + - 如果日期是 None ,我们就会知道贷款被收回了,然后为foreclosure_status 设置合适的值。 + +``` +import os +import settings +import pandas as pd + +def count_performance_rows(): + counts = {} + with open(os.path.join(settings.PROCESSED_DIR, "Performance.txt"), 'r') 
as f: + for i, line in enumerate(f): + if i == 0: + # Skip header row + continue + loan_id, date = line.split("|") + loan_id = int(loan_id) + if loan_id not in counts: + counts[loan_id] = { + "foreclosure_status": False, + "performance_count": 0 + } + counts[loan_id]["performance_count"] += 1 + if len(date.strip()) > 0: + counts[loan_id]["foreclosure_status"] = True + return counts +``` + +### 获取值 + +只要我们创建了计数字典,我们就可以使用一个函数通过一个 loan_id 和一个 key 从字典中提取到需要的参数的值: + +``` +def get_performance_summary_value(loan_id, key, counts): + value = counts.get(loan_id, { + "foreclosure_status": False, + "performance_count": 0 + }) + return value[key] +``` + + +上面的函数会从计数字典中返回合适的值,我们也能够为采集数据中的每一行赋一个 foreclosure_status 值和一个 performance_count 值。如果键不存在,字典的 [get][33] 方法会返回一个默认值,所以在字典中不存在键的时候我们就可以得到一个可知的默认值。 + + + +------------------ + + +### 注解数据 + +我们已经在annotate.py中添加了一些功能, 现在我们来看一看数据文件. 我们需要将采集到的数据转换到训练数据表来进行机器学习的训练. 这涉及到以下几件事情: + +- 转换所以列数字. +- 填充缺失值. +- 分配 performance_count 和 foreclosure_status. +- 移除出现次数很少的行(performance_count 计数低). + +我们有几个列是文本类型的, 看起来对于机器学习算法来说并不是很有用. 然而, 他们实际上是分类变量, 其中有很多不同的类别代码, 例如R,S等等. 我们可以把这些类别标签转换为数值: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/002.png) + +通过这种方法转换的列我们可以应用到机器学习算法中. + +还有一些包含日期的列 (first_payment_date 和 origination_date). 我们可以将这些日期放到两个列中: + +![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/003.png) +在下面的代码中, 我们将转换采集到的数据. 我们将定义一个函数如下: + +- 在采集到的数据中创建foreclosure_status列 . +- 在采集到的数据中创建performance_count列. +- 将下面的string列转换为integer列: + - channel + - seller + - first_time_homebuyer + - loan_purpose + - property_type + - occupancy_status + - property_state + - product_type +- 转换first_payment_date 和 origination_date 为两列: + - 通过斜杠分离列. + - 将第一部分分离成月清单. + - 将第二部分分离成年清单. + - 删除这一列. + - 最后, 我们得到 first_payment_month, first_payment_year, origination_month, and origination_year. +- 所有缺失值填充为-1. 
+ +``` +def annotate(acquisition, counts): + acquisition["foreclosure_status"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "foreclosure_status", counts)) + acquisition["performance_count"] = acquisition["id"].apply(lambda x: get_performance_summary_value(x, "performance_count", counts)) + for column in [ + "channel", + "seller", + "first_time_homebuyer", + "loan_purpose", + "property_type", + "occupancy_status", + "property_state", + "product_type" + ]: + acquisition[column] = acquisition[column].astype('category').cat.codes + + for start in ["first_payment", "origination"]: + column = "{}_date".format(start) + acquisition["{}_year".format(start)] = pd.to_numeric(acquisition[column].str.split('/').str.get(1)) + acquisition["{}_month".format(start)] = pd.to_numeric(acquisition[column].str.split('/').str.get(0)) + del acquisition[column] + + acquisition = acquisition.fillna(-1) + acquisition = acquisition[acquisition["performance_count"] > settings.MINIMUM_TRACKING_QUARTERS] + return acquisition +``` + +### 聚合到一起 + +我们差不多准备就绪了, 我们只需要再在annotate.py添加一点点代码. 在下面代码中, 我们将: + +- 定义一个函数来读取采集的数据. +- 定义一个函数来写入数据到/train.csv +- 如果我们在命令行运行annotate.py来读取更新过的数据文件,它将做如下事情: + - 读取采集到的数据. + - 计算数据性能. + - 注解数据. + - 将注解数据写入到train.csv. + +``` +def read(): + acquisition = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "Acquisition.txt"), sep="|") + return acquisition + +def write(acquisition): + acquisition.to_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"), index=False) + +if __name__ == "__main__": + acquisition = read() + counts = count_performance_rows() + acquisition = annotate(acquisition, counts) + write(acquisition) +``` + +修改完成以后为了确保annotate.py能够生成train.csv文件. 你可以在这里找到完整的 annotate.py file [here][34]. + +文件夹结果应该像这样: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... 
+├── processed
+│   ├── Acquisition.txt
+│   ├── Performance.txt
+│   ├── train.csv
+├── .gitignore
+├── annotate.py
+├── assemble.py
+├── README.md
+├── requirements.txt
+├── settings.py
+```
+
+### 找到误差标准
+
+我们已经完成了训练数据表的生成,现在只剩最后一步:生成预测。我们需要先确定衡量误差的标准,以及该如何评估我们的预测结果。在这种情况下,由于绝大多数贷款都没有被收回,简单的准确率并不是一个有用的指标。
+
+我们读取训练数据,并统计 foreclosure_status 列,将得到如下信息:
+
+```
+import os
+import pandas as pd
+import settings
+
+train = pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv"))
+train["foreclosure_status"].value_counts()
+```
+
+```
+False    4635982
+True        1585
+Name: foreclosure_status, dtype: int64
+```
+
+因为只有极少数的贷款被收回,如果只看正确分类的百分比,那么模型把每一行都预测为 False 也能得到极高的准确率,所以我们要考虑样本的不平衡性,确保我们做出的预测是有用的。我们既不想要太多的漏报(false negative,预测贷款不会被收回,但实际上被收回了),也不想要太多的误报(false positive,预测贷款会被收回,但实际上没有)。这两者当中,漏报对 Fannie Mae 的代价更大,因为它买入的那些贷款可能无法收回投资。
+
+所以我们将定义一个漏报率指标:模型预测不会被收回、但实际上被收回了的贷款数,除以实际被收回的贷款总数,也就是模型“漏掉”的被收回贷款的比例。下面看这个图表:
+
+![](https://github.com/LCTT/wiki-images/blob/master/TranslateProject/ref_img/004.png)
+
+在上面的图表中,有 1 笔贷款被预测为不会被收回,但实际上被收回了。如果我们用它除以实际被收回的贷款总数 2,漏报率就是 50%。我们将使用这个标准来评估模型的性能。
+
+### 设置机器学习分类器
+
+我们使用交叉验证来做预测。通过交叉验证法,我们将数据分为 3 组,然后按照下面的方法来做:
+
+- 用第 1、2 组训练模型,并用它对第 3 组做出预测。
+- 用第 1、3 组训练模型,并用它对第 2 组做出预测。
+- 用第 2、3 组训练模型,并用它对第 1 组做出预测。
+
+像这样分组,意味着我们永远不会用和预测目标相同的数据来训练模型,从而避免了过拟合(overfitting)。如果过拟合了,我们会得到虚低的漏报率,这会使我们难以改进算法或将其应用到现实中。
+
+[Scikit-learn][35] 有一个叫做 [cross_val_predict][36] 的函数,可以让我们很容易地进行交叉验证。
+
+我们还需要选择一种算法来做预测。我们需要一个能进行[二元分类(binary classification)][37]的分类器,因为目标变量 foreclosure_status 只有两个值:True 和 False。
+
+我们用[逻辑回归(logistic regression)][38]算法,因为它能很好地进行二元分类,运行很快,占用内存也很小。简单说说它为什么快:与随机森林的大量分支树、支持向量机的空间变换相比,逻辑回归涉及的步骤更少,矩阵运算也更少。
+
+我们可以使用 scikit-learn 实现的[逻辑回归分类器][39]算法。我们唯一需要注意的是每个类的标准(权重)。
如果我们使用同样标准的类, 算法将会预测每行都为false, 因为它总是试图最小化误差.不管怎样, 我们关注有多少贷款能够回收而不是有多少不能回收. 因此, 我们通过 [LogisticRegression][40](逻辑回归)来平衡标准参数, 并计算回收贷款的标准. 这将使我们的算法不会认为每一行都为false. + + +---------------------- + +### 做出预测 + +既然完成了前期准备,我们可以开始准备做出预测了。我将创建一个名为 predict.py 的新文件,它会使用我们在最后一步创建的 train.csv 文件。下面的代码: + +- 导入所需的库 +- 创建一个名为 `cross_validate` 的函数: + - 使用正确的关键词参数创建逻辑回归分类器(logistic regression classifier) + - 创建移除了 `id` 和 `foreclosure_status` 属性的用于训练模型的列 + - 跨 `train` 数据帧使用交叉验证 + - 返回预测结果 + +```python +import os +import settings +import pandas as pd +from sklearn import cross_validation +from sklearn.linear_model import LogisticRegression +from sklearn import metrics + +def cross_validate(train): + clf = LogisticRegression(random_state=1, class_weight="balanced") + + predictors = train.columns.tolist() + predictors = [p for p in predictors if p not in settings.NON_PREDICTORS] + + predictions = cross_validation.cross_val_predict(clf, train[predictors], train[settings.TARGET], cv=settings.CV_FOLDS) + return predictions +``` + +### 预测误差 + +现在,我们仅仅需要写一些函数来计算误差。下面的代码: + +- 创建函数 `compute_error`: + - 使用 scikit-learn 计算一个简单的精确分数(与实际 `foreclosure_status` 值匹配的预测百分比) +- 创建函数 `compute_false_negatives`: + - 为了方便,将目标和预测结果合并到一个数据帧 + - 查找漏报率 + - 找到原本应被预测模型取消但没有取消的贷款数目 + - 除以没被取消的贷款总数目 + +```python +def compute_error(target, predictions): + return metrics.accuracy_score(target, predictions) + +def compute_false_negatives(target, predictions): + df = pd.DataFrame({"target": target, "predictions": predictions}) + return df[(df["target"] == 1) & (df["predictions"] == 0)].shape[0] / (df[(df["target"] == 1)].shape[0] + 1) + +def compute_false_positives(target, predictions): + df = pd.DataFrame({"target": target, "predictions": predictions}) + return df[(df["target"] == 0) & (df["predictions"] == 1)].shape[0] / (df[(df["target"] == 0)].shape[0] + 1) +``` + +### 聚合到一起 + +现在,我们可以把函数都放在 `predict.py`。下面的代码: + +- 读取数据集 +- 计算交叉验证预测 +- 计算上面的 3 个误差 +- 打印误差 + +```python +def read(): + train = 
pd.read_csv(os.path.join(settings.PROCESSED_DIR, "train.csv")) + return train + +if __name__ == "__main__": + train = read() + predictions = cross_validate(train) + error = compute_error(train[settings.TARGET], predictions) + fn = compute_false_negatives(train[settings.TARGET], predictions) + fp = compute_false_positives(train[settings.TARGET], predictions) + print("Accuracy Score: {}".format(error)) + print("False Negatives: {}".format(fn)) + print("False Positives: {}".format(fp)) +``` + +一旦你添加完代码,你可以运行 `python predict.py` 来产生预测结果。运行结果向我们展示漏报率为 `.26`,这意味着我们没能预测 `26%` 的取消贷款。这是一个好的开始,但仍有很多改善的地方! + +你可以在[这里][41]找到完整的 `predict.py` 文件 + +你的文件树现在看起来像下面这样: + +``` +loan-prediction +├── data +│ ├── Acquisition_2012Q1.txt +│ ├── Acquisition_2012Q2.txt +│ ├── Performance_2012Q1.txt +│ ├── Performance_2012Q2.txt +│ └── ... +├── processed +│ ├── Acquisition.txt +│ ├── Performance.txt +│ ├── train.csv +├── .gitignore +├── annotate.py +├── assemble.py +├── predict.py +├── README.md +├── requirements.txt +├── settings.py +``` + +### 撰写 README + +既然我们完成了端到端的项目,那么我们可以撰写 README.md 文件了,这样其他人便可以知道我们做的事,以及如何复制它。一个项目典型的 README.md 应该包括这些部分: + +- 一个高水准的项目概览,并介绍项目目的 +- 任何必需的数据和材料的下载地址 +- 安装命令 + - 如何安装要求依赖 +- 使用命令 + - 如何运行项目 + - 每一步之后会看到的结果 +- 如何为这个项目作贡献 + - 扩展项目的下一步计划 + +[这里][42] 是这个项目的一个 README.md 样例。 + +### 下一步 + +恭喜你完成了端到端的机器学习项目!你可以在[这里][43]找到一个完整的示例项目。一旦你完成了项目,把它上传到 [Github][44] 是一个不错的主意,这样其他人也可以看到你的文件夹的部分项目。 + +这里仍有一些留待探索数据的角度。总的来说,我们可以把它们分割为 3 类 - 扩展这个项目并使它更加精确,发现预测其他列并探索数据。这是其中一些想法: + +- 在 `annotate.py` 中生成更多的特性 +- 切换 `predict.py` 中的算法 +- 尝试使用比我们发表在这里的更多的来自 `Fannie Mae` 的数据 +- 添加对未来数据进行预测的方法。如果我们添加更多数据,我们所写的代码仍然可以起作用,这样我们可以添加更多过去和未来的数据。 +- 尝试看看是否你能预测一个银行是否应该发放贷款(相对地,`Fannie Mae` 是否应该获得贷款) + - 移除 train 中银行不知道发放贷款的时间的任何列 + - 当 Fannie Mae 购买贷款时,一些列是已知的,但不是之前 + - 做出预测 +- 探索是否你可以预测除了 foreclosure_status 的其他列 + - 你可以预测在销售时财产值多少? +- 探索探索性能更新之间的细微差别 + - 你能否预测借款人会逾期还款多少次? + - 你能否标出的典型贷款周期? +- 标出一个州到州或邮政编码到邮政级水平的数据 + - 你看到一些有趣的模式了吗? + +如果你建立了任何有趣的东西,请在评论中让我们知道! 
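在动手扩展之前,还可以先用一个很小的手工构造的数据集检验一下上文 `predict.py` 中的误差函数,确认漏报率和误报率的计算确实符合我们的预期。下面是一个假设性的玩具示例(数据并非来自 Fannie Mae,仅用于演示;为了让示例可以独立运行,这里重复定义了那两个函数):

```python
import pandas as pd

# 与上文 predict.py 中相同的两个误差函数(此处重复定义,便于独立运行)
def compute_false_negatives(target, predictions):
    df = pd.DataFrame({"target": target, "predictions": predictions})
    # 实际被取消(1)但被预测为未取消(0)的贷款数,除以实际取消总数(加 1 避免除零)
    return df[(df["target"] == 1) & (df["predictions"] == 0)].shape[0] / (df[(df["target"] == 1)].shape[0] + 1)

def compute_false_positives(target, predictions):
    df = pd.DataFrame({"target": target, "predictions": predictions})
    # 实际未取消(0)但被预测为取消(1)的贷款数,除以实际未取消总数(加 1 避免除零)
    return df[(df["target"] == 0) & (df["predictions"] == 1)].shape[0] / (df[(df["target"] == 0)].shape[0] + 1)

# 假设的玩具数据:4 笔贷款,其中 2 笔实际被取消赎回权(1)
target = pd.Series([1, 1, 0, 0])
predictions = pd.Series([1, 0, 0, 1])

fn = compute_false_negatives(target, predictions)  # 1 / (2 + 1)
fp = compute_false_positives(target, predictions)  # 1 / (2 + 1)
print(round(fn, 3), round(fp, 3))
```

在把 `cross_validate` 在全量数据上的预测结果交给这些函数之前,这样的小检查可以帮助我们确认指标实现本身没有问题。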
+ +如果你喜欢这个,你可能会喜欢阅读 ‘Build a Data Science Porfolio’ 系列的其他文章: + +- [Storytelling with data][45]. +- [How to setup up a data science blog][46]. + + +-------------------------------------------------------------------------------- + +via: https://www.dataquest.io/blog/data-science-portfolio-machine-learning/ + +作者:[Vik Paruchuri][a] +译者:[kokialoves](https://github.com/译者ID),[zky001](https://github.com/译者ID),[vim-kakali](https://github.com/译者ID),[cposture](https://github.com/译者ID),[ideas4u](https://github.com/译者ID) +校对:[校对ID](https://github.com/校对ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.dataquest.io/blog +[1]: https://www.dataquest.io/blog/data-science-portfolio-machine-learning/#email-signup +[2]: https://github.com/dataquestio/loan-prediction +[3]: https://www.dataquest.io/blog/data-science-portfolio-project/ +[4]: https://atom.io/ +[5]: https://www.jetbrains.com/pycharm/ +[6]: https://github.com/ +[7]: http://pandas.pydata.org/ +[8]: http://scikit-learn.org/ +[9]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html +[10]: https://collegescorecard.ed.gov/data/ +[11]: https://reddit.com/r/datasets +[12]: https://cloud.google.com/bigquery/public-data/#usa-names +[13]: https://github.com/caesar0301/awesome-public-datasets +[14]: http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html +[15]: http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html +[16]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_glossary.pdf +[17]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_faq.pdf +[18]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_file_layout.pdf +[19]: https://loanperformancedata.fanniemae.com/lppub-docs/acquisition-sample-file.txt +[20]: https://loanperformancedata.fanniemae.com/lppub-docs/performance-sample-file.txt +[21]: 
https://github.com/dataquestio/loan-prediction/blob/master/.gitignore +[22]: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet +[23]: https://github.com/dataquestio/loan-prediction +[24]: https://github.com/dataquestio/loan-prediction/blob/master/requirements.txt +[25]: https://www.continuum.io/downloads +[26]: https://loanperformancedata.fanniemae.com/lppub/index.html +[27]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_file_layout.pdf +[28]: https://github.com/dataquestio/loan-prediction/blob/master/settings.py +[29]: https://loanperformancedata.fanniemae.com/lppub-docs/lppub_file_layout.pdf +[30]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html +[31]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html +[32]: https://github.com/dataquestio/loan-prediction/blob/master/assemble.py +[33]: https://docs.python.org/3/library/stdtypes.html#dict.get +[34]: https://github.com/dataquestio/loan-prediction/blob/master/annotate.py +[35]: http://scikit-learn.org/ +[36]: http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_predict.html +[37]: https://en.wikipedia.org/wiki/Binary_classification +[38]: https://en.wikipedia.org/wiki/Logistic_regression +[39]: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html +[40]: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html +[41]: https://github.com/dataquestio/loan-prediction/blob/master/predict.py +[42]: https://github.com/dataquestio/loan-prediction/blob/master/README.md +[43]: https://github.com/dataquestio/loan-prediction +[44]: https://www.github.com/ +[45]: https://www.dataquest.io/blog/data-science-portfolio-project/ +[46]: https://www.dataquest.io/blog/how-to-setup-a-data-science-blog/ diff --git a/sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md b/sources/team_test/part 3 - Building a data 
science portfolio - Machine learning project.md deleted file mode 100644 index 31fb6f0fc9..0000000000 --- a/sources/team_test/part 3 - Building a data science portfolio - Machine learning project.md +++ /dev/null @@ -1,194 +0,0 @@ -### Acquiring the data - -Once we have the skeleton of our project, we can get the raw data. - -Fannie Mae has some restrictions around acquiring the data, so you’ll need to sign up for an account. You can find the download page [here][26]. After creating an account, you’ll be able to download as few or as many loan data files as you want. The files are in zip format, and are reasonably large after decompression. - -For the purposes of this blog post, we’ll download everything from Q1 2012 to Q1 2015, inclusive. We’ll then need to unzip all of the files. After unzipping the files, remove the original .zip files. At the end, the loan-prediction folder should look something like this: - -``` -loan-prediction -├── data -│ ├── Acquisition_2012Q1.txt -│ ├── Acquisition_2012Q2.txt -│ ├── Performance_2012Q1.txt -│ ├── Performance_2012Q2.txt -│ └── ... -├── processed -├── .gitignore -├── README.md -├── requirements.txt -├── settings.py -``` - -After downloading the data, you can use the head and tail shell commands to look at the lines in the files. Do you see any columns that aren’t needed? It might be useful to consult the [pdf of column names][27] while doing this. - -### Reading in the data - -There are two issues that make our data hard to work with right now: - -- The acquisition and performance datasets are segmented across multiple files. -- Each file is missing headers. - -Before we can get started on working with the data, we’ll need to get to the point where we have one file for the acquisition data, and one file for the performance data. Each of the files will need to contain only the columns we care about, and have the proper headers. 
One wrinkle here is that the performance data is quite large, so we should try to trim some of the columns if we can. - -The first step is to add some variables to settings.py, which will contain the paths to our raw data and our processed data. We’ll also add a few other settings that will be useful later on: - -``` -DATA_DIR = "data" -PROCESSED_DIR = "processed" -MINIMUM_TRACKING_QUARTERS = 4 -TARGET = "foreclosure_status" -NON_PREDICTORS = [TARGET, "id"] -CV_FOLDS = 3 -``` - -Putting the paths in settings.py will put them in a centralized place and make them easy to change down the line. When referring to the same variables in multiple files, it’s easier to put them in a central place than edit them in every file when you want to change them. [Here’s][28] an example settings.py file for this project. - -The second step is to create a file called assemble.py that will assemble all the pieces into 2 files. When we run python assemble.py, we’ll get 2 data files in the processed directory. - -We’ll then start writing code in assemble.py. 
We’ll first need to define the headers for each file, so we’ll need to look at [pdf of column names][29] and create lists of the columns in each Acquisition and Performance file: - -``` -HEADERS = { - "Acquisition": [ - "id", - "channel", - "seller", - "interest_rate", - "balance", - "loan_term", - "origination_date", - "first_payment_date", - "ltv", - "cltv", - "borrower_count", - "dti", - "borrower_credit_score", - "first_time_homebuyer", - "loan_purpose", - "property_type", - "unit_count", - "occupancy_status", - "property_state", - "zip", - "insurance_percentage", - "product_type", - "co_borrower_credit_score" - ], - "Performance": [ - "id", - "reporting_period", - "servicer_name", - "interest_rate", - "balance", - "loan_age", - "months_to_maturity", - "maturity_date", - "msa", - "delinquency_status", - "modification_flag", - "zero_balance_code", - "zero_balance_date", - "last_paid_installment_date", - "foreclosure_date", - "disposition_date", - "foreclosure_costs", - "property_repair_costs", - "recovery_costs", - "misc_costs", - "tax_costs", - "sale_proceeds", - "credit_enhancement_proceeds", - "repurchase_proceeds", - "other_foreclosure_proceeds", - "non_interest_bearing_balance", - "principal_forgiveness_balance" - ] -} -``` - -The next step is to define the columns we want to keep. Since all we’re measuring on an ongoing basis about the loan is whether or not it was ever foreclosed on, we can discard many of the columns in the performance data. We’ll need to keep all the columns in the acquisition data, though, because we want to maximize the information we have about when the loan was acquired (after all, we’re predicting if the loan will ever be foreclosed or not at the point it’s acquired). Discarding columns will enable us to save disk space and memory, while also speeding up our code. 
- -``` -SELECT = { - "Acquisition": HEADERS["Acquisition"], - "Performance": [ - "id", - "foreclosure_date" - ] -} -``` - -Next, we’ll write a function to concatenate the data sets. The below code will: - -- Import a few needed libraries, including settings. -- Define a function concatenate, that: - - Gets the names of all the files in the data directory. - - Loops through each file. - - If the file isn’t the right type (doesn’t start with the prefix we want), we ignore it. - - Reads the file into a [DataFrame][30] with the right settings using the Pandas [read_csv][31] function. - - Sets the separator to | so the fields are read in correctly. - - The data has no header row, so sets header to None to indicate this. - - Sets names to the right value from the HEADERS dictionary – these will be the column names of our DataFrame. - - Picks only the columns from the DataFrame that we added in SELECT. -- Concatenates all the DataFrames together. -- Writes the concatenated DataFrame back to a file. - -``` -import os -import settings -import pandas as pd - -def concatenate(prefix="Acquisition"): - files = os.listdir(settings.DATA_DIR) - full = [] - for f in files: - if not f.startswith(prefix): - continue - - data = pd.read_csv(os.path.join(settings.DATA_DIR, f), sep="|", header=None, names=HEADERS[prefix], index_col=False) - data = data[SELECT[prefix]] - full.append(data) - - full = pd.concat(full, axis=0) - - full.to_csv(os.path.join(settings.PROCESSED_DIR, "{}.txt".format(prefix)), sep="|", header=SELECT[prefix], index=False) -``` - -We can call the above function twice with the arguments Acquisition and Performance to concatenate all the acquisition and performance files together. The below code will: - -- Only execute if the script is called from the command line with python assemble.py. 
-- Concatenate all the files, and result in two files: - - `processed/Acquisition.txt` - - `processed/Performance.txt` - -``` -if __name__ == "__main__": - concatenate("Acquisition") - concatenate("Performance") -``` - -We now have a nice, compartmentalized assemble.py that’s easy to execute, and easy to build off of. By decomposing the problem into pieces like this, we make it easy to build our project. Instead of one messy script that does everything, we define the data that will pass between the scripts, and make them completely separate from each other. When you’re working on larger projects, it’s a good idea to do this, because it makes it much easier to change individual pieces without having unexpected consequences on unrelated pieces of the project. - -Once we finish the assemble.py script, we can run python assemble.py. You can find the complete assemble.py file [here][32]. - -This will result in two files in the processed directory: - -``` -loan-prediction -├── data -│ ├── Acquisition_2012Q1.txt -│ ├── Acquisition_2012Q2.txt -│ ├── Performance_2012Q1.txt -│ ├── Performance_2012Q2.txt -│ └── ... 
-├── processed -│ ├── Acquisition.txt -│ ├── Performance.txt -├── .gitignore -├── assemble.py -├── README.md -├── requirements.txt -├── settings.py -``` From ec06654a4e6ef25fb03923105e6917d1e1fc5b8c Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 25 Aug 2016 18:23:50 +0800 Subject: [PATCH 471/471] PUB:20160618 An Introduction to Mocking in Python @cposture --- ...18 An Introduction to Mocking in Python.md | 88 +++++++++---------- 1 file changed, 40 insertions(+), 48 deletions(-) rename {translated/tech => published}/20160618 An Introduction to Mocking in Python.md (70%) diff --git a/translated/tech/20160618 An Introduction to Mocking in Python.md b/published/20160618 An Introduction to Mocking in Python.md similarity index 70% rename from translated/tech/20160618 An Introduction to Mocking in Python.md rename to published/20160618 An Introduction to Mocking in Python.md index b8daaed937..56a0c732a2 100644 --- a/translated/tech/20160618 An Introduction to Mocking in Python.md +++ b/published/20160618 An Introduction to Mocking in Python.md @@ -1,38 +1,32 @@ -Mock 在 Python 中的使用介绍 +Mock 在 Python 单元测试中的使用 ===================================== -本文讲述的是 Python 中 Mock 的使用 +本文讲述的是 Python 中 Mock 的使用。 -**如何在避免测试你的耐心的情况下执行单元测试** - -很多时候,我们编写的软件会直接与那些被标记为肮脏无比的服务交互。用外行人的话说:交互已设计好的服务对我们的应用程序很重要,但是这会给我们带来不希望的副作用,也就是那些在一个自动化测试运行的上下文中不希望的功能。 - -例如:我们正在写一个社交 app,并且想要测试一下 "发布到 Facebook" 的新功能,但是不想每次运行测试集的时候真的发布到 Facebook。 +### 如何执行单元测试而不用考验你的耐心 +很多时候,我们编写的软件会直接与那些被标记为“垃圾”的服务交互。用外行人的话说:服务对我们的应用程序很重要,但是我们想要的是交互,而不是那些不想要的副作用,这里的“不想要”是在自动化测试运行的语境中说的。例如:我们正在写一个社交 app,并且想要测试一下 "发布到 Facebook" 的新功能,但是不想每次运行测试集的时候真的发布到 Facebook。 Python 的 `unittest` 库包含了一个名为 `unittest.mock` 或者可以称之为依赖的子包,简称为 -`mock` —— 其提供了极其强大和有用的方法,通过它们可以模拟和打桩来去除我们不希望的副作用。 +`mock` —— 其提供了极其强大和有用的方法,通过它们可以模拟(mock)并去除那些我们不希望的副作用。 +![](https://assets.toptal.io/uploads/blog/image/252/toptal-blog-image-1389090346415.png) ->Source | - -> 注意:`mock` [最近收录][1]到了 Python 3.3 的标准库中;先前发布的版本必须通过 [PyPI][2] 下载 Mock 库。 +*注意:`mock` [最近被收录][1]到了 
Python 3.3 的标准库中;先前发布的版本必须通过 [PyPI][2] 下载 Mock 库。* ### 恐惧系统调用 -再举另一个例子,思考一个我们会在余文讨论的系统调用。不难发现,这些系统调用都是主要的模拟对象:无论你是正在写一个可以弹出 CD 驱动的脚本,还是一个用来删除 /tmp 下过期的缓存文件的 Web 服务,或者一个绑定到 TCP 端口的 socket 服务器,这些调用都是在你的单元测试上下文中不希望的副作用。 +再举另一个例子,我们在接下来的部分都会用到它,这是就是**系统调用**。不难发现,这些系统调用都是主要的模拟对象:无论你是正在写一个可以弹出 CD 驱动器的脚本,还是一个用来删除 /tmp 下过期的缓存文件的 Web 服务,或者一个绑定到 TCP 端口的 socket 服务器,这些调用都是在你的单元测试上下文中不希望产生的副作用。 -> 作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数,而不是切身经历 CD 托盘每次在测试执行的时候都打开了。 +作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数(使用了正确的参数等等),而不是切身经历 CD 托盘每次在测试执行的时候都打开了。(或者更糟糕的是,弹出了很多次,在一个单元测试运行期间多个测试都引用了弹出代码!) -作为一个开发者,你需要更关心你的库是否成功地调用了一个可以弹出 CD 的系统函数(使用了正确的参数等等),而不是切身经历 CD 托盘每次在测试执行的时候都打开了。(或者更糟糕的是,很多次,在一个单元测试运行期间多个测试都引用了弹出代码!) +同样,保持单元测试的效率和性能意味着需要让如此多的“缓慢代码”远离自动测试,比如文件系统和网络访问。 -同样,保持单元测试的效率和性能意味着需要让如此多的 "缓慢代码" 远离自动测试,比如文件系统和网络访问。 - -对于首个例子,我们要从原始形式到使用 `mock` 重构一个标准 Python 测试用例。我们会演示如何使用 mock 写一个测试用例,使我们的测试更加智能、快速,并展示更多关于我们软件的工作原理。 +对于第一个例子来说,我们要从原始形式换成使用 `mock` 重构一个标准 Python 测试用例。我们会演示如何使用 mock 写一个测试用例,使我们的测试更加智能、快速,并展示更多关于我们软件的工作原理。 ### 一个简单的删除函数 -有时,我们都需要从文件系统中删除文件,因此,让我们在 Python 中写一个可以使我们的脚本更加轻易完成此功能的函数。 +我们都有过需要从文件系统中一遍又一遍的删除文件的时候,因此,让我们在 Python 中写一个可以使我们的脚本更加轻易完成此功能的函数。 ``` #!/usr/bin/env python @@ -73,9 +67,9 @@ class RmTestCase(unittest.TestCase): self.assertFalse(os.path.isfile(self.tmpfilepath), "Failed to remove the file.") ``` -我们的测试用例相当简单,但是在它每次运行的时候,它都会创建一个临时文件并且随后删除。此外,我们没有办法测试我们的 `rm` 方法是否正确地将我们的参数向下传递给 `os.remove` 调用。我们可以基于以上的测试认为它做到了,但还有很多需要改进的地方。 +我们的测试用例相当简单,但是在它每次运行的时候,它都会创建一个临时文件并且随后删除。此外,我们没有办法测试我们的 `rm` 方法是否正确地将我们的参数向下传递给 `os.remove` 调用。我们可以基于以上的测试*认为*它做到了,但还有很多需要改进的地方。 -### 使用 Mock 重构 +#### 使用 Mock 重构 让我们使用 mock 重构我们的测试用例: @@ -99,24 +93,23 @@ class RmTestCase(unittest.TestCase): 使用这些重构,我们从根本上改变了测试用例的操作方式。现在,我们有一个可以用于验证其他功能的内部对象。 -### 潜在陷阱 +##### 潜在陷阱 -第一件需要注意的事情就是,我们使用了 `mock.patch` 方法装饰器,用于模拟位于 `mymodule.os` 的对象,并且将 mock 注入到我们的测试用例方法。那么只是模拟 `os` 本身,而不是 `mymodule.os` 下 `os` 的引用(注意 `@mock.patch('mymodule.os')` 便是模拟 `mymodule.os` 下的 `os`,译者注),会不会更有意义呢? 
+第一件需要注意的事情就是,我们使用了 `mock.patch` 方法装饰器,用于模拟位于 `mymodule.os` 的对象,并且将 mock 注入到我们的测试用例方法。那么只是模拟 `os` 本身,而不是 `mymodule.os` 下 `os` 的引用(LCTT 译注:注意 `@mock.patch('mymodule.os')` 便是模拟 `mymodule.os` 下的 `os`),会不会更有意义呢? -当然,当涉及到导入和管理模块,Python 的用法非常灵活。在运行时,`mymodule` 模块拥有被导入到本模块局部作用域的 `os`。因此,如果我们模拟 `os`,我们是看不到 mock 在 `mymodule` 模块中的作用的。 +当然,当涉及到导入和管理模块,Python 的用法就像蛇一样灵活。在运行时,`mymodule` 模块有它自己的被导入到本模块局部作用域的 `os`。因此,如果我们模拟 `os`,我们是看不到 mock 在 `mymodule` 模块中的模仿作用的。 这句话需要深刻地记住: -> 模拟测试一个项目,只需要了解它用在哪里,而不是它从哪里来。 +> 模拟一个东西要看它用在何处,而不是来自哪里。 如果你需要为 `myproject.app.MyElaborateClass` 模拟 `tempfile` 模块,你可能需要将 mock 用于 `myproject.app.tempfile`,而其他模块保持自己的导入。 -先将那个陷阱置身事外,让我们继续模拟。 +先将那个陷阱放一边,让我们继续模拟。 -### 向 ‘rm’ 中加入验证 - -之前定义的 rm 方法相当的简单。在盲目地删除之前,我们倾向于验证一个路径是否存在,并验证其是否是一个文件。让我们重构 rm 使其变得更加智能: +#### 向 ‘rm’ 中加入验证 +之前定义的 rm 方法相当的简单。在盲目地删除之前,我们倾向于验证一个路径是否存在,并验证其是否是一个文件。让我们重构 rm 使其变得更加智能: ``` #!/usr/bin/env python @@ -162,11 +155,11 @@ class RmTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any path") ``` -我们的测试用例完全改变了。现在我们可以在没有任何副作用下核实并验证方法的内部功能。 +我们的测试用例完全改变了。现在我们可以在没有任何副作用的情况下核实并验证方法的内部功能。 -### 将文件删除作为服务 +#### 将文件删除作为服务 -到目前为止,我们只是将 mock 应用在函数上,并没应用在需要传递参数的对象和实例的方法。我们现在开始涵盖对象的方法。 +到目前为止,我们只是将 mock 应用在函数上,并没应用在需要传递参数的对象和实例的方法上。我们现在开始涵盖对象的方法。 首先,我们将 `rm` 方法重构成一个服务类。实际上将这样一个简单的函数转换成一个对象,在本质上这不是一个合理的需求,但它能够帮助我们了解 `mock` 的关键概念。让我们开始重构: @@ -220,8 +213,7 @@ class RemovalServiceTestCase(unittest.TestCase): mock_os.remove.assert_called_with("any path") ``` -很好,我们知道 `RemovalService` 会如期工作。接下来让我们创建另一个服务,将 `RemovalService` 声明为它的一个依赖: -: +很好,我们知道 `RemovalService` 会如预期般的工作。接下来让我们创建另一个服务,将 `RemovalService` 声明为它的一个依赖: ``` #!/usr/bin/env python @@ -256,7 +248,7 @@ class UploadService(object): 因为这两种方法都是单元测试中非常重要的方法,所以我们将同时对这两种方法进行回顾。 -### 方法 1:模拟实例的方法 +##### 方法 1:模拟实例的方法 `mock` 库有一个特殊的方法装饰器,可以模拟对象实例的方法和属性,即 `@mock.patch.object decorator` 装饰器: @@ -311,9 +303,9 @@ class UploadServiceTestCase(unittest.TestCase): removal_service.rm.assert_called_with("my uploaded file") ``` -非常棒!我们验证了 UploadService 
成功调用了我们实例的 rm 方法。你是否注意到一些有趣的地方?这种修补机制(patching mechanism)实际上替换了我们测试用例中的所有 `RemovalService` 实例的 `rm` 方法。这意味着我们可以检查实例本身。如果你想要了解更多,可以试着在你模拟的代码下断点,以对这种修补机制的原理获得更好的认识。 +非常棒!我们验证了 `UploadService` 成功调用了我们实例的 `rm` 方法。你是否注意到一些有趣的地方?这种修补机制(patching mechanism)实际上替换了我们测试用例中的所有 `RemovalService` 实例的 `rm` 方法。这意味着我们可以检查实例本身。如果你想要了解更多,可以试着在你模拟的代码下断点,以对这种修补机制的原理获得更好的认识。 -### 陷阱:装饰顺序 +##### 陷阱:装饰顺序 当我们在测试方法中使用多个装饰器,其顺序是很重要的,并且很容易混乱。基本上,当装饰器被映射到方法参数时,[装饰器的工作顺序是反向的][3]。思考这个例子: @@ -326,17 +318,17 @@ class UploadServiceTestCase(unittest.TestCase): pass ``` -注意到我们的参数和装饰器的顺序是反向匹配了吗?这多多少少是由 [Python 的工作方式][4] 导致的。这里是使用多个装饰器的情况下它们执行顺序的伪代码: +注意到我们的参数和装饰器的顺序是反向匹配了吗?这部分是由 [Python 的工作方式][4]所导致的。这里是使用多个装饰器的情况下它们执行顺序的伪代码: ``` patch_sys(patch_os(patch_os_path(test_something))) ``` -因为 sys 补丁位于最外层,所以它最晚执行,使得它成为实际测试方法参数的最后一个参数。请特别注意这一点,并且在运行你的测试用例时,使用调试器来保证正确的参数以正确的顺序注入。 +因为 `sys` 补丁位于最外层,所以它最晚执行,使得它成为实际测试方法参数的最后一个参数。请特别注意这一点,并且在运行你的测试用例时,使用调试器来保证正确的参数以正确的顺序注入。 -### 方法 2:创建 Mock 实例 +##### 方法 2:创建 Mock 实例 -我们可以使用构造函数为 UploadService 提供一个 Mock 实例,而不是模拟特定的实例方法。我更推荐方法 1,因为它更加精确,但在多数情况,方法 2 或许更加有效和必要。让我们再次重构测试用例: +我们可以使用构造函数为 `UploadService` 提供一个 Mock 实例,而不是模拟特定的实例方法。我更推荐方法 1,因为它更加精确,但在多数情况,方法 2 或许更加有效和必要。让我们再次重构测试用例: ``` #!/usr/bin/env python @@ -385,13 +377,13 @@ class UploadServiceTestCase(unittest.TestCase): mock_removal_service.rm.assert_called_with("my uploaded file") ``` -在这个例子中,我们甚至不需要补充任何功能,只需为 `RemovalService` 类创建一个 auto-spec,然后将实例注入到我们的 `UploadService` 以验证功能。 +在这个例子中,我们甚至不需要修补任何功能,只需为 `RemovalService` 类创建一个 auto-spec,然后将实例注入到我们的 `UploadService` 以验证功能。 `mock.create_autospec` 方法为类提供了一个同等功能实例。实际上来说,这意味着在使用返回的实例进行交互的时候,如果使用了非法的方式将会引发异常。更具体地说,如果一个方法被调用时的参数数目不正确,将引发一个异常。这对于重构来说是非常重要。当一个库发生变化的时候,中断测试正是所期望的。如果不使用 auto-spec,尽管底层的实现已经被破坏,我们的测试仍然会通过。 -### 陷阱:mock.Mock 和 mock.MagicMock 类 +##### 陷阱:mock.Mock 和 mock.MagicMock 类 -`mock` 库包含了两个重要的类 [mock.Mock](http://www.voidspace.org.uk/python/mock/mock.html) 和 
[mock.MagicMock](http://www.voidspace.org.uk/python/mock/magicmock.html#magic-mock),大多数内部函数都是建立在这两个类之上的。当在选择使用 `mock.Mock` 实例,`mock.MagicMock` 实例或 auto-spec 的时候,通常倾向于选择使用 auto-spec,因为对于未来的变化,它更能保持测试的健全。这是因为 `mock.Mock` 和 `mock.MagicMock` 会无视底层的 API,接受所有的方法调用和属性赋值。比如下面这个用例: +`mock` 库包含了两个重要的类 [mock.Mock](http://www.voidspace.org.uk/python/mock/mock.html) 和 [mock.MagicMock](http://www.voidspace.org.uk/python/mock/magicmock.html#magic-mock),大多数内部函数都是建立在这两个类之上的。当在选择使用 `mock.Mock` 实例、`mock.MagicMock` 实例还是 auto-spec 的时候,通常倾向于选择使用 auto-spec,因为对于未来的变化,它更能保持测试的健全。这是因为 `mock.Mock` 和 `mock.MagicMock` 会无视底层的 API,接受所有的方法调用和属性赋值。比如下面这个用例: ``` class Target(object): @@ -430,7 +422,7 @@ class Target(object): ### 现实例子:模拟 Facebook API 调用 -为了完成,我们写一个更加适用的现实例子,一个在介绍中提及的功能:发布消息到 Facebook。我将写一个不错的包装类及其对应的测试用例。 +作为这篇文章的结束,我们写一个更加适用的现实例子,一个在介绍中提及的功能:发布消息到 Facebook。我将写一个不错的包装类及其对应的测试用例。 ``` import facebook @@ -468,15 +460,15 @@ class SimpleFacebookTestCase(unittest.TestCase): ### Python Mock 总结 -对 [单元测试][7] 来说,Python 的 `mock` 库可以说是一个游戏变革者,即使对于它的使用还有点困惑。我们已经演示了单元测试中常见的用例以开始使用 `mock`,并希望这篇文章能够帮助 [Python 开发者][8] 克服初期的障碍,写出优秀、经受过考验的代码。 +即使对它的使用还有点不太熟悉,对[单元测试][7]来说,Python 的 `mock` 库可以说是一个规则改变者。我们已经演示了常见的用例来了解了 `mock` 在单元测试中的使用,希望这篇文章能够帮助 [Python 开发者][8]克服初期的障碍,写出优秀、经受过考验的代码。 -------------------------------------------------------------------------------- -via: http://slviki.com/index.php/2016/06/18/introduction-to-mocking-in-python/ +via: https://www.toptal.com/python/an-introduction-to-mocking-in-python -作者:[Dasun Sucharith][a] +作者:[NAFTULI TZVI KAY][a] 译者:[cposture](https://github.com/cposture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出