From f8a86ee1199f249c1252eade872fd8baede76724 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Fri, 23 Feb 2018 11:43:36 +0800 Subject: [PATCH 001/343] alter translated --- ...180201 How I coined the term open source.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index fa77d13c6b..1c35caf1c2 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -7,11 +7,11 @@ ![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'") 图片来自: opensource.com -In a few days, on February 3, the 20th anniversary of the introduction of the term "[开源软件][6]" is upon us. As open source software grows in popularity and powers some of the most robust and important innovations of our time, we reflect on its rise to prominence. +几天后, 2 月 3 日, 术语 "[开源软件][6]" 首次使用的20周年纪念日即将到来。 As open source software grows in popularity and powers some of the most robust and important innovations of our time, we reflect on its rise to prominence. -I am the originator of the term "open source software" and came up with it while executive director at Foresight Institute. Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group. +我是 “开源软件” 这个词的始作俑者而且他在前瞻技术协会(Foresight Institute)担任执行董事时想出了它。Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group. -This is my account of how I came up with it, how it was proposed, and the subsequent reactions. Of course, there are a number of accounts of the coining of the term, for example by Eric Raymond and Richard Stallman, yet this is mine, written on January 2, 2006. +这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些关于该术语的记叙,例如 Eric Raymond 和 Richard Stallman 创造的,但这是我的,写于2006年1月2日。 直到今天,它才公诸于世。 @@ -23,13 +23,13 @@ This term had long been used in an "intelligence" (i.e., spying) context, but to ### 计算机安全会议 -In late 1997, weekly meetings were being held at Foresight Institute to discuss computer security. Foresight is a nonprofit think tank focused on nanotechnology and artificial intelligence, and software security is regarded as central to the reliability and security of both. We had identified free software as a promising approach to improving software security and reliability and were looking for ways to promote it. Interest in free software was starting to grow outside the programming community, and it was increasingly clear that an opportunity was coming to change the world. However, just how to do this was unclear, and we were groping for strategies. +在1997年的晚些时候,为期一周的会议将被在前瞻技术协会(Foresight Insttitue) 举行来讨论计算机安全问题。这个协会是一个非盈利性智库,它专注于纳米技术和人工智能,并且认为软件安全是二者的安全性以及可靠性的核心。我们在那确定了自由软件是一个改进软件安全可靠性具有发展前景的方法并将寻找推动它的方式。 对自由软件的兴趣开始在编程社区外开始增长,而且越来越清晰一个改变世界的机会正在来临。然而,该怎么做我们并不清楚,因为我们当时正在摸索策略中。 At these meetings, we discussed the need for a new term due to the confusion factor. The argument was as follows: those new to the term "free software" assume it is referring to the price. Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." 
At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept. ### 网景发布 -On February 2, 1998, Eric Raymond arrived on a visit to work with Netscape on the plan to release the browser code under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message. In addition to Eric and me, active participants included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software. +1998 年 2 月 2 日,Eric Raymond arrived on a visit to work with Netscape on the plan to 发布了浏览器代码 under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software. While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon. @@ -57,11 +57,11 @@ For the name to succeed, it was necessary, or at least highly desirable, that Ti After this, there was a period during which the term was promoted by Eric Raymond to the media, by Tim O'Reilly to business, and by both to the programming community. It seemed to spread very quickly. -On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. Announced in advance as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]." +On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. 提前宣布 as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]." -These months were extremely exciting for open source. Every week, it seemed, a new company announced plans to participate. Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. +These months were extremely exciting for open source.这是对于开源来说激动人心的几个月。 Every week, it seemed, a new company announced plans to participate. Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. -A quick Google search indicates that "open source" appears more often than "free software," but there still is substantial use of the free software term, which remains useful and should be included when communicating with audiences who prefer it. 
+快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,和偏爱它的人们沟通的时候就特别有用。 ### A happy twinge @@ -69,7 +69,7 @@ When an [early account][12] of the terminology change written by Eric Raymond Coming up with a phrase is a small contribution, but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge. -The big credit for persuading the community goes to Eric Raymond and Tim O'Reilly, who made it happen. Thanks to them for crediting me, and to Todd Anderson for his role throughout. The above is not a complete account of open source history; apologies to the many key players whose names do not appear. Those seeking a more complete account should refer to the links in this article and elsewhere on the net. +说服团队的大功劳归功于Eric Raymond和 Tim O'Reilly,是他们搞定的它。感谢他们对我的评价,并感谢Todd Anderson 在整个过程中的角色。以上内容并非完整的开源历史记录,对很多没有无名人士表示歉意。那些寻求更完整讲述的人应该参考本文和网上其他地方的链接。 ### 关于作者 From 38bbd73a733c18b12c9946c2ba3d55e7fd12da89 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Fri, 23 Feb 2018 16:41:55 +0800 Subject: [PATCH 002/343] add a part --- ...20180201 How I coined the term open source.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index 1c35caf1c2..7f38d33617 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -7,11 +7,11 @@ ![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'") 图片来自: opensource.com -几天后, 2 月 3 日, 术语 "[开源软件][6]" 首次使用的20周年纪念日即将到来。 As open source software grows in popularity and powers some of the most robust and important innovations of our time, we reflect on its rise to prominence. +几天后, 2 月 3 日, 术语 "[开源软件][6]" 创立20周年的纪念日即将到来。由于开源软件渐受欢迎并且为这个时代强有力的重要变革提供动力,我们仔细思考了它的初生到崛起。 -我是 “开源软件” 这个词的始作俑者而且他在前瞻技术协会(Foresight Institute)担任执行董事时想出了它。Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group. +我是 “开源软件” 这个词的始作俑者,它是我在前瞻技术协会(Foresight Institute)担任执行董事时想出的。Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group. -这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些关于该术语的记叙,例如 Eric Raymond 和 Richard Stallman 创造的,但这是我的,写于2006年1月2日。 +这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些关于该术语的记叙,例如 Eric Raymond 和 Richard Stallman 的,而我的,写于 2006 年 1 月 2 日。 直到今天,它才公诸于世。 @@ -23,13 +23,13 @@ This term had long been used in an "intelligence" (i.e., spying) context, but to ### 计算机安全会议 -在1997年的晚些时候,为期一周的会议将被在前瞻技术协会(Foresight Insttitue) 举行来讨论计算机安全问题。这个协会是一个非盈利性智库,它专注于纳米技术和人工智能,并且认为软件安全是二者的安全性以及可靠性的核心。我们在那确定了自由软件是一个改进软件安全可靠性具有发展前景的方法并将寻找推动它的方式。 对自由软件的兴趣开始在编程社区外开始增长,而且越来越清晰一个改变世界的机会正在来临。然而,该怎么做我们并不清楚,因为我们当时正在摸索策略中。 +在1997年的晚些时候,为期一周的会议将被在前瞻技术协会(Foresight Insttitue) 举行来讨论计算机安全问题。这个协会是一个非盈利性智库,它专注于纳米技术和人工智能,并且认为软件安全是二者的安全性以及可靠性的核心。我们在那确定了自由软件是一个改进软件安全可靠性且具有发展前景的方法并将寻找推动它的方式。 对自由软件的兴趣开始在编程社区外开始增长,而且越来越清晰,一个改变世界的机会正在来临。然而,该怎么做我们并不清楚,因为我们当时正在摸索中。 At these meetings, we discussed the need for a new term due to the confusion factor. The argument was as follows: those new to the term "free software" assume it is referring to the price. 
Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept. ### 网景发布 -1998 年 2 月 2 日,Eric Raymond arrived on a visit to work with Netscape on the plan to 发布了浏览器代码 under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software. +1998 年 2 月 2 日,Eric Raymond arrived on a visit to work with Netscape on the plan to 发布了浏览器代码 under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 Brian Behlendorf,Michael Tiemann,Todd Anderson,Mark S. Miller and Ka-Ping Yee。But at that meeting, the field was still described as free software or, by Brian, “可获得源代码的” 软件。 While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon. @@ -55,13 +55,13 @@ For the name to succeed, it was necessary, or at least highly desirable, that Ti ### 名字的诞生 -After this, there was a period during which the term was promoted by Eric Raymond to the media, by Tim O'Reilly to business, and by both to the programming community. It seemed to spread very quickly. +在那之后,有一段时间,这条术语由 Eric Raymond 推向媒体,由 Tim O'Reilly 推向商业,并由二人推向编程社区,那似乎传播的相当快。 On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. 提前宣布 as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]." -These months were extremely exciting for open source.这是对于开源来说激动人心的几个月。 Every week, it seemed, a new company announced plans to participate. Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. +这几个月对于开源来说是相当激动人心的。似乎每周都有一个新公司宣布加入计划。Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. 
-快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,和偏爱它的人们沟通的时候就特别有用。 +尽管快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,和偏爱它的人们沟通的时候就特别有用。 ### A happy twinge From beda71d42e03eb959df40a040a600ad2229bd92c Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Sat, 24 Feb 2018 14:53:39 +0800 Subject: [PATCH 003/343] 2018-02-24 --- sources/talk/20180201 How I coined the term open source.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index 7f38d33617..a93795bea4 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -29,7 +29,7 @@ At these meetings, we discussed the need for a new term due to the confusion fac ### 网景发布 -1998 年 2 月 2 日,Eric Raymond arrived on a visit to work with Netscape on the plan to 发布了浏览器代码 under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 Brian Behlendorf,Michael Tiemann,Todd Anderson,Mark S. Miller and Ka-Ping Yee。But at that meeting, the field was still described as free software or, by Brian, “可获得源代码的” 软件。 +1998 年 2 月 2 日,Eric Raymond arrived on a visit to work with Netscape on the plan to 采用免费软件样式的许可证发布浏览器代码 under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 Brian Behlendorf,Michael Tiemann,Todd Anderson,Mark S. Miller and Ka-Ping Yee。但在那次会议上,这个领域仍然被描述成自由软件,或者用 Brian 的话说, 叫“可获得源代码的” 软件。 While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon. @@ -57,7 +57,7 @@ For the name to succeed, it was necessary, or at least highly desirable, that Ti 在那之后,有一段时间,这条术语由 Eric Raymond 推向媒体,由 Tim O'Reilly 推向商业,并由二人推向编程社区,那似乎传播的相当快。 -On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. 提前宣布 as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]." +On April 7, 1998, Tim O'Reilly 提前宣布 as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]." 这几个月对于开源来说是相当激动人心的。似乎每周都有一个新公司宣布加入计划。Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. 
From 5e687caa454bbcd29895c026389d824b1b95af4d Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Sun, 25 Feb 2018 11:52:20 +0800 Subject: [PATCH 004/343] 2018-2-25 --- .../talk/20180201 How I coined the term open source.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index a93795bea4..435ee4a9ed 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -2,7 +2,7 @@ 我是如何创造“开源”这个词的 ============================================================ -### Christine Peterson 最终发布了对于二十年前那决定命运一天的陈述。 +### Christine Peterson 最终公开讲述了二十年前那决定命运一天。 ![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'") 图片来自: opensource.com @@ -63,13 +63,13 @@ On April 7, 1998, Tim O'Reilly 提前宣布 as the first "[Freeware Summit][10], 尽管快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,和偏爱它的人们沟通的时候就特别有用。 -### A happy twinge +### A happy twinge喜出望外 When an [early account][12] of the terminology change written by Eric Raymond was posted on the Open Source Initiative website, I was listed as being at the VA brainstorming meeting, but not as the originator of the term. This was my own fault; I had neglected to tell Eric the details. My impulse was to let it pass and stay in the background, but Todd felt otherwise. He suggested to me that one day I would be glad to be known as the person who coined the name "open source software." He explained the situation to Eric, who promptly updated his site. -Coming up with a phrase is a small contribution, but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge. +想出这个短语只是一个小贡献,但是我得承认我十分感激那些把它归功于我的人。每次我听到它,它都给我些许激动的喜悦,到现在也时常感受到。 -说服团队的大功劳归功于Eric Raymond和 Tim O'Reilly,是他们搞定的它。感谢他们对我的评价,并感谢Todd Anderson 在整个过程中的角色。以上内容并非完整的开源历史记录,对很多没有无名人士表示歉意。那些寻求更完整讲述的人应该参考本文和网上其他地方的链接。 +说服团队的大功劳归功于Eric Raymond和 Tim O'Reilly,这是他们搞定的。感谢他们对我的评价,并感谢Todd Anderson 在整个过程中的角色。以上内容并非完整的开源历史记录,对很多没有无名人士表示歉意。那些寻求更完整讲述的人应该参考本文和网上其他地方的链接。 ### 关于作者 From 3176f18297f3963d583d2dd5c2c00c9a63b586ff Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Thu, 1 Mar 2018 22:09:19 +0800 Subject: [PATCH 005/343] a part finished before start school --- ...80201 How I coined the term open source.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index 435ee4a9ed..db6d289432 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -9,9 +9,9 @@ 几天后, 2 月 3 日, 术语 "[开源软件][6]" 创立20周年的纪念日即将到来。由于开源软件渐受欢迎并且为这个时代强有力的重要变革提供动力,我们仔细思考了它的初生到崛起。 -我是 “开源软件” 这个词的始作俑者,它是我在前瞻技术协会(Foresight Institute)担任执行董事时想出的。Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group. 
+我是 “开源软件” 这个词的始作俑者,它是我在前瞻技术协会(Foresight Institute)担任执行董事时想出的。并非向上面的一个程序开发者一样,我感谢 Linux 程序员 Todd Anderson 对这个术语的支持并将它提交小组讨论。 -这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些关于该术语的记叙,例如 Eric Raymond 和 Richard Stallman 的,而我的,写于 2006 年 1 月 2 日。 +这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些关于该术语的记叙,例如 Eric Raymond 和 Richard Stallman 写的,而我的,则写于 2006 年 1 月 2 日。 直到今天,它才公诸于世。 @@ -19,29 +19,29 @@ The introduction of the term "open source software" was a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that—to newcomers—its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source. -This term had long been used in an "intelligence" (i.e., spying) context, but to my knowledge, use of the term with respect to software prior to 1998 has not been confirmed. The account below describes how the term [open source software][7] caught on and became the name of both an industry and a movement. +This term had long been used in an "intelligence" (i.e., spying) context, 但据我所知but to my knowledge, use of the term with respect to software prior to 1998 has not been confirmed. 下面这个就是讲述了术语“开源软件”如何流行起来并且变成了一项产业和一场运动的名字。 ### 计算机安全会议 在1997年的晚些时候,为期一周的会议将被在前瞻技术协会(Foresight Insttitue) 举行来讨论计算机安全问题。这个协会是一个非盈利性智库,它专注于纳米技术和人工智能,并且认为软件安全是二者的安全性以及可靠性的核心。我们在那确定了自由软件是一个改进软件安全可靠性且具有发展前景的方法并将寻找推动它的方式。 对自由软件的兴趣开始在编程社区外开始增长,而且越来越清晰,一个改变世界的机会正在来临。然而,该怎么做我们并不清楚,因为我们当时正在摸索中。 -At these meetings, we discussed the need for a new term due to the confusion factor. The argument was as follows: those new to the term "free software" assume it is referring to the price. Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept. +At these meetings, we discussed the need for a new term due to the confusion factor. 观点主要有以下:对于那些新接触“自由软件”的人把它(free)当成了价格。 Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept. ### 网景发布 -1998 年 2 月 2 日,Eric Raymond arrived on a visit to work with Netscape on the plan to 采用免费软件样式的许可证发布浏览器代码 under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 Brian Behlendorf,Michael Tiemann,Todd Anderson,Mark S. 
Miller and Ka-Ping Yee。但在那次会议上,这个领域仍然被描述成自由软件,或者用 Brian 的话说, 叫“可获得源代码的” 软件。 +1998 年 2 月 2 日,Eric Raymond 抵达访问网景并与它一起计划采用免费软件样式的许可证发布浏览器代码。We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message.除了 Eric 和我,活跃的参与者还有 Brian Behlendorf,Michael Tiemann,Todd Anderson,Mark S. Miller and Ka-Ping Yee。但在那次会议上,这个领域仍然被描述成“自由软件”,或者用 Brian 的话说, 叫“可获得源代码的” 软件。 While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon. -Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and believed we could do better. He was right in theory; however, I didn't have a better idea, so I thought I would try to go ahead and introduce it. In hindsight, I should have simply proposed it to Eric Raymond, but I didn't know him well at the time, so I took an indirect strategy instead. +在那周的会议中,我仍然关注Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and believed we could do better. 理论上它是对的,我想不出更好的了,所以我想尝试并推广它。 事后一想我应该简单地向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 -Todd had agreed strongly about the need for a new term and offered to assist in getting the term introduced. This was helpful because, as a non-programmer, my influence within the free software community was weak. My work in nanotechnology education at Foresight was a plus, but not enough for me to be taken very seriously on free software questions. As a Linux programmer, Todd would be listened to more closely. +Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助,因为作为一个非编程人员,我在自由软件社区的影响力很弱。我在纳米技术的工作是一个加分项,但不足以让我认真地接受自由软件问题的工作。作为一个Linux程序员,Todd 将会更仔细地聆听它。 ### 关键的会议 Later that week, on February 5, 1998, a group was assembled at VA Research to brainstorm on strategy. Attending—in addition to Eric Raymond, Todd, and me—were Larry Augustin, Sam Ockman, and attending by phone, Jon "maddog" Hall. -The primary topic was promotion strategy, especially which companies to approach. I said little, but was looking for an opportunity to introduce the proposed term. I felt that it wouldn't work for me to just blurt out, "All you technical people should start using my new term." Most of those attending didn't know me, and for all I knew, they might not even agree that a new term was greatly needed, or even somewhat desirable. +会议的主要议题是推广策略,特别是要接洽的公司。 我几乎没说什么,而是在寻找机会推广已经提交讨论的术语。I felt that it wouldn't work for me to just blurt out, "All you technical people should start using my new term." Most of those attending didn't know me, and for all I knew, they might not even agree that a new term was greatly needed, or even somewhat desirable. Fortunately, Todd was on the ball. 
Instead of making an assertion that the community should use this specific new term, he did something less directive—a smart thing to do with this community of strong-willed individuals. He simply used the term in a sentence on another topic—just dropped it into the conversation to see what happened. I went on alert, hoping for a response, but there was none at first. The discussion continued on the original topic. It seemed only he and I had noticed the usage. @@ -61,11 +61,11 @@ On April 7, 1998, Tim O'Reilly 提前宣布 as the first "[Freeware Summit][10], 这几个月对于开源来说是相当激动人心的。似乎每周都有一个新公司宣布加入计划。Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. -尽管快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,和偏爱它的人们沟通的时候就特别有用。 +尽管快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,特别是和偏爱它的人们沟通的时候。 -### A happy twinge喜出望外 +### 一丝快感 -When an [early account][12] of the terminology change written by Eric Raymond was posted on the Open Source Initiative website, I was listed as being at the VA brainstorming meeting, but not as the originator of the term. This was my own fault; I had neglected to tell Eric the details. My impulse was to let it pass and stay in the background, but Todd felt otherwise. He suggested to me that one day I would be glad to be known as the person who coined the name "open source software." He explained the situation to Eric, who promptly updated his site. +When an [early account][12] of the terminology change written by Eric Raymond was posted on the Open Source Initiative website, I was listed as being at the VA brainstorming meeting, but not as the originator of the term. 这是我自己的错,我没告诉 Eric 细节。我一时冲动只想让它表决通过然后我呆在后台,但是 Todd 不这样认为。他认为我总有一天将作为“开源软件”这个名词的创造者而感到高兴。他向 Eric 解释了这个情况,Eric 及时更新了它的网站。 想出这个短语只是一个小贡献,但是我得承认我十分感激那些把它归功于我的人。每次我听到它,它都给我些许激动的喜悦,到现在也时常感受到。 @@ -80,7 +80,7 @@ When an [early account][12] of the terminology change written by Eric Raymond via: https://opensource.com/article/18/2/coining-term-open-source-software 作者:[ Christine Peterson][a] -译者:[译者ID](https://github.com/译者ID) +译者:[fuzheng1998](https://github.com/fuzheng1998) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From dc01a0c20299b1ed48a6734107d8303c2790be85 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Sun, 4 Mar 2018 14:21:09 +0800 Subject: [PATCH 006/343] update word [translating] --- .../talk/20180201 How I coined the term open source.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index db6d289432..76a0f13932 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -33,9 +33,9 @@ At these meetings, we discussed the need for a new term due to the confusion fac While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon. 
-在那周的会议中,我仍然关注Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and believed we could do better. 理论上它是对的,我想不出更好的了,所以我想尝试并推广它。 事后一想我应该简单地向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 +在那周的会议中,我仍然关注一个更好的名字Between meetings that week, I was still focused on the need for a better name and 提出术语 “开源软件”came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and 相信我们能做的更好。理论上它是对的,可我想不出更好的了,所以我想尝试并推广它。 事后一想我应该直接向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 -Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助,因为作为一个非编程人员,我在自由软件社区的影响力很弱。我在纳米技术的工作是一个加分项,但不足以让我认真地接受自由软件问题的工作。作为一个Linux程序员,Todd 将会更仔细地聆听它。 +Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助,因为作为一个非编程人员,我在自由软件社区的影响力很弱。我从事的纳米技术是一个加分项,但不足以让我认真地接受自由软件问题的工作。作为一个Linux程序员,Todd 将会更仔细地聆听它。 ### 关键的会议 @@ -55,9 +55,9 @@ For the name to succeed, it was necessary, or at least highly desirable, that Ti ### 名字的诞生 -在那之后,有一段时间,这条术语由 Eric Raymond 推向媒体,由 Tim O'Reilly 推向商业,并由二人推向编程社区,那似乎传播的相当快。 +在那之后,有一段时间,这条术语由 Eric Raymond 向媒体推广,由 Tim O'Reilly 向商业推广,并由二人向编程社区推广,那似乎传播的相当快。 -On April 7, 1998, Tim O'Reilly 提前宣布 as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]." +1998 年 4 月 17 日, Tim O'Reilly 提前宣布 首届 “[自由软件峰会][10]” 在 4 月14 日之前,它以首届 “[开源峰会][11]” 被提及。 这几个月对于开源来说是相当激动人心的。似乎每周都有一个新公司宣布加入计划。Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public. From ca81d711de6d88119eb2e86afc5e2cf973bfb261 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Mon, 5 Mar 2018 08:07:09 +0800 Subject: [PATCH 007/343] 2018-3-5 --- sources/talk/20180201 How I coined the term open source.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index 76a0f13932..84ccb7a1df 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -1,8 +1,8 @@ -[fuzheng1998 tranlating] +[fuzheng1998 translating] 我是如何创造“开源”这个词的 ============================================================ -### Christine Peterson 最终公开讲述了二十年前那决定命运一天。 +### Christine Peterson 最终公开讲述了二十年前那决定命运的一天。 ![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'") 图片来自: opensource.com @@ -17,7 +17,7 @@ * * * -The introduction of the term "open source software" was a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that—to newcomers—its seeming focus on price is distracting. 
A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source. +The introduction of the term "open source software" was 是特地为了这个领域让新手和商业人士更加易懂a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. 早期称号的问题是,“自由软件”,并非有政治含义,但是那对于新手来说貌似对于价格的关注令人心烦意乱。一个术语需要聚焦于源代码的关键问题而且不会被立即把概念跟那些新东西混淆。一个恰好想出并且满足这些要求的第一个术语被快速接受:开源。 This term had long been used in an "intelligence" (i.e., spying) context, 但据我所知but to my knowledge, use of the term with respect to software prior to 1998 has not been confirmed. 下面这个就是讲述了术语“开源软件”如何流行起来并且变成了一项产业和一场运动的名字。 From e79179d9e5c2dcc7ff15b0d5ffc29e777a4a3c3a Mon Sep 17 00:00:00 2001 From: yizhuyan Date: Mon, 5 Mar 2018 18:35:23 +0800 Subject: [PATCH 008/343] Delete 20171204 5 Tips to Improve Technical Writing for an International Audience.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕,删除原文 --- ...l Writing for an International Audience.md | 83 ------------------- 1 file changed, 83 deletions(-) delete mode 100644 sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md diff --git a/sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md b/sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md deleted file mode 100644 index f13a04cb49..0000000000 --- a/sources/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md +++ /dev/null @@ -1,83 +0,0 @@ -translating by yizhuoyan - -5 Tips to Improve Technical Writing for an International Audience -============================================================ - - -![documentation](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/typewriter-801921_1920.jpg?itok=faTXFNoE "documentation") -Writing in English for an international audience takes work; here are some handy tips to remember.[Creative Commons Zero][2] - -Writing in English for an international audience does not necessarily put native English speakers in a better position. On the contrary, they tend to forget that the document's language might not be the first language of the audience. Let's have a look at the following simple sentence as an example: “Encrypt the password using the 'foo bar' command.” - -Grammatically, the sentence is correct. Given that "-ing" forms (gerunds) are frequently used in the English language, most native speakers would probably not hesitate to phrase a sentence like this. However, on closer inspection, the sentence is ambiguous: The word “using” may refer either to the object (“the password”) or to the verb (“encrypt”). Thus, the sentence can be interpreted in two different ways: - -* Encrypt the password that uses the 'foo bar' command. - -* Encrypt the password by using the 'foo bar' command. - -As long as you have previous knowledge about the topic (password encryption or the 'foo bar' command), you can resolve this ambiguity and correctly decide that the second reading is the intended meaning of this sentence. But what if you lack in-depth knowledge of the topic? What if you are not an expert but a translator with only general knowledge of the subject? Or, what if you are a non-native speaker of English who is unfamiliar with advanced grammatical forms? 
- -### Know Your Audience - -Even native English speakers may need some training to write clear and straightforward technical documentation. Raising awareness of usability and potential problems is the first step. This article, based on my talk at[ Open Source Summit EU][5], offers several useful techniques. Most of them are useful not only for technical documentation but also for everyday written communication, such as writing email or reports. - -**1. Change perspective. **Step into your audience's shoes. Step one is to know your intended audience. If you are a developer writing for end users, view the product from their perspective. The [persona technique][6] can help to focus on the target audience and to provide the right level of detail for your readers. - -**2\. Follow the KISS principle. **Keep it short and simple. The principle can be applied to several levels, like grammar, sentences, or words. Here are some examples: - - _Words: _ Uncommon and long words slow down reading and might be obstacles for non-native speakers. Use simpler alternatives: - -“utilize” → “use” - -“indicate” → “show”, “tell”, “say” - -“prerequisite” → “requirement” - - _Grammar: _ Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action: "Click  _OK_ . The  _Printer Options_  dialog appears.” - - _Sentences: _ As a rule of thumb, present one idea in one sentence. However, restricting sentence length to a certain amount of words is not useful in my opinion. Short sentences are not automatically easy to understand (especially if they are a cluster of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiquities, which can, in turn, make sentences even more difficult to understand. - -**3\. Beware of ambiguities. **As authors, we often do not notice ambiguity in a sentence. Having your texts reviewed by others can help identify such problems. If that's not an option, try to look at each sentence from different perspectives: Does the sentence also work for readers without in-depth knowledge of the topic? Does it work for readers with limited language skills? Is the grammatical relationship between all sentence parts clear? If the sentence does not meet these requirements, rephrase it to resolve the ambiguity. - -**4\. Be consistent. **This applies to choice of words, spelling, and punctuation as well as phrases and structure. For lists, use parallel grammatical construction. For example: - -Why white space is important: - -* It focuses attention. - -* It visually separates sections. - -* It splits content into chunks.  - -**5\. Remove redundant content.** Keep only information that is relevant for your target audience. On a sentence level, avoid fillers (basically, easily) and unnecessary modifications: - -"already existing" → "existing" - -"completely new" → "new" - -As you might have guessed by now, writing is rewriting. Good writing requires effort and practice. But even if you write only occasionally, you can significantly improve your texts by focusing on the target audience and by using basic writing techniques. The better the readability of a text, the easier it is to process, even for an audience with varying language skills. When it comes to localization especially, good quality of the source text is important: Garbage in, garbage out. If the original text has deficiencies, it will take longer to translate the text, resulting in higher costs. 
In the worst case, the flaws will be multiplied during translation and need to be corrected in various languages.  - - -![Tanja Roth](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tanja-roth.jpg?itok=eta0fvZC "Tanja Roth") - -Tanja Roth, Technical Documentation Specialist at SUSE Linux GmbH[Used with permission][1] - - _Driven by an interest in both language and technology, Tanja has been working as a technical writer in mechanical engineering, medical technology, and IT for many years. She joined SUSE in 2005 and contributes to a wide range of product and project documentation, including High Availability and Cloud topics._ - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience?sf175396579=1 - -作者:[TANJA ROTH ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/tanja-roth -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/creative-commons-zero -[3]:https://www.linux.com/files/images/tanja-rothjpg -[4]:https://www.linux.com/files/images/typewriter-8019211920jpg -[5]:https://osseu17.sched.com/event/ByIW -[6]:https://en.wikipedia.org/wiki/Persona_(user_experience) From a48c07a8249020ca6daee599b83a791ba0f4b904 Mon Sep 17 00:00:00 2001 From: yizhuyan Date: Mon, 5 Mar 2018 18:37:33 +0800 Subject: [PATCH 009/343] Create 20171204 5 Tips to Improve Technical Writing for an International Audience.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...l Writing for an International Audience.md | 117 ++++++++++++++++++ 1 file changed, 117 insertions(+) create mode 100644 translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md diff --git a/translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md b/translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md new file mode 100644 index 0000000000..20e604f357 --- /dev/null +++ b/translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md @@ -0,0 +1,117 @@ + + +提升针对国际读者技术性写作的5个技巧 +============================================================ + + +![documentation](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/typewriter-801921_1920.jpg?itok=faTXFNoE "documentation") + + + +针对国际读者用英语写作很不容易,下面这些小窍门可以记下来。[知识共享许可][2] + + +针对国际读者用英语写文档不需要特别考虑英语母语的人。相反,更应该注意文档的语言可能不是读者的第一语言。我们看看下面这些简单的句子:“加密密码用‘xxx’命令(Encrypt the password using the 'foo bar' command.)” + +从语法上讲,这个句子是正确的。ing形式(动名词)经常在英语中使用,很多英语母语的人应该大概对这样造句没有疑惑。然而,仔细观察,这个句子有歧义:单词“using”可以针对“the password”,也可以针对动词“加密”。因此,这个句子能够有两种不同的理解方式。 + +* Encrypt the password that uses the 'foo bar' command( 加密这个使用xxx命令的密码). +* Encrypt the password by using the 'foo bar' command(使用xxx命令加密密码). + + +关于这个话题(密码加密或者xxx命令),只要你有这方面的知识,你就不会理解错误,而且正确的选择第二种方式才是句子要表达的含义。但是如果你对这个知识点没有概念呢?如果你仅仅是一个翻译者,只有关于文章主题的一般知识,而不是一个技术专家?或者英语不是你的母语而且你不熟悉英语的高级语法形式呢? + +甚至连英语母语的人都可能需要一些训练才能写出简洁明了的技术文档。所以提升对文章适用性和潜在问题的认识是第一步。 + + +这篇文章,基于我在[欧盟开放源码峰会][5]上的演讲,提供了几种有用的技巧。大多数技巧不仅仅针对技术文档,也可以用于日程信函的书写,如邮件或者报告之类的。 + + +**1. 转换视角** + +转换视角,从你的读者出发。首先要了解你潜在的读者。如果你是作为开发人员针对用户写,则从用户的视角来看待你的产品。用户画像(Persona)技术能够帮助你专注于目标受众,而且提供关于你的受众适当的细节信息。 + +**2. 
遵守KISS(Keep it short and simple)原则** + +这个原则可以用于几个层次,如语法,句式或者单词。看下面的例子: + +_单词:_ + +罕见的和长的单词会降低阅读速度而且可能会是非母语读者的障碍。使用简单点的单词,如: +“utilize” → “use” +“indicate” → “show”, “tell”, “say” +“prerequisite” → “requirement” + + + + +_语法:_ + +最好使用最简单的时态。举个例子,当提到一个动作的结果时使用一般现在时。如“Click '_OK_' . The _Printer Options_ dialog appears(单击'_ok_'.就弹出_打印选项_对话框了)” + + +_句式:_ + +一般说来,一个句子就表达一个意思。然而在我看来,把句子的长度限制到一定数量的单词是没有用的。短句子不是想着那么容易理解的(特别是一组名词的时候)。有时候把句子剪短到一定单词数量会造成歧义,相应的还会使句子更加难以理解。 + + + + +**3. 当心歧义** + + +作为作者,我们通常没有注意到句子中的歧义。让别人审阅你的文章有助于发现这些问题。如果无法这么做,就尝试从这些不同的视角审视每个句子: +对于没有相关知识背景的读者也能看懂吗?对于语言能力有限的读者也可以吗?所有句子成分间的语法关系清晰吗?如果某个句子没有达到这些要求,重新措辞来解决歧义。 + +**4. 格式统一** + +这个适用于对单词,拼写和标点符号的选择,也是适用于短语和结构的选择。对于列表,使用平行的语法造句。如: + +Why white space is important(为什么空格很重要): + +* It focuses attention(让读者注意力集中). + +* It visually separates sections(让文章章节分割更直观). + +* It splits content into chunks(让文章内容分割为不同块). + +**5. 清除冗余** + +对目标读者仅保留明确的信息。在句子层面,避免填充(如basically, easily)和没必要的修饰。如: + +"already existing" → "existing" + +"completely new" → "new" + + +##总结 + +你现在应该猜到了,写作就是改。好的文章需要付出和练习。但是如果你仅是偶尔写写,则可以通过专注目标读者和运用基本写作技巧来显著地提升你的文章。 + +文章易读性越高,理解起来越容易,即使针对于不同语言级别的读者也是一样。尤其在本地化翻译方面,高质量的原文是非常重要的,因为“错进错出(原文:Garbage in, garbage out)"。如果原文有不足,翻译时间会更长,导致更高的成本。甚至,这种不足会在翻译过程中成倍的放大而且后面需要在多种语言版本中改正。 + + +![Tanja Roth](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tanja-roth.jpg?itok=eta0fvZC "Tanja Roth") + +Tanja Roth, SUSE Linux公司-技术文档专家 [使用许可][1] + + +_在对语言和技术两方面兴趣的驱动下,Tanja作为一名技术文章的写作者,在机械工程,医学技术和IT领域工作了很多年。她在2005年加入SUSE组织并且贡献了各种各样产品和项目的文章,包括高可用性和云的相关话题。_ + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience?sf175396579=1 + +作者:[TANJA ROTH ][a] +译者:[yizhuoyan](https://github.com/yizhuoyan) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/tanja-roth +[1]:https://www.linux.com/licenses/category/used-permission +[2]:https://www.linux.com/licenses/category/creative-commons-zero +[3]:https://www.linux.com/files/images/tanja-rothjpg +[4]:https://www.linux.com/files/images/typewriter-8019211920jpg +[5]:https://osseu17.sched.com/event/ByIW +[6]:https://en.wikipedia.org/wiki/Persona_(user_experience) From c14fe6afbd0c3cf8b92014b4146f7da132645091 Mon Sep 17 00:00:00 2001 From: amwps290 Date: Mon, 5 Mar 2018 18:49:43 +0800 Subject: [PATCH 010/343] Translating by amwps290 --- ...0220 How to format academic papers on Linux with groff -me.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180220 How to format academic papers on Linux with groff -me.md b/sources/tech/20180220 How to format academic papers on Linux with groff -me.md index 5131cad7f5..a7902856db 100644 --- a/sources/tech/20180220 How to format academic papers on Linux with groff -me.md +++ b/sources/tech/20180220 How to format academic papers on Linux with groff -me.md @@ -1,3 +1,4 @@ +translating by amwps290 How to format academic papers on Linux with groff -me ====== From 85ef69d1cd7eb83a656c768998b1fcd8bb69c3ce Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 5 Mar 2018 19:13:54 +0800 Subject: [PATCH 011/343] Delete 20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md --- ...VM on CentOS 7 - RHEL 7 Headless Server.md | 344 ------------------ 1 file changed, 344 deletions(-) delete mode 100644 sources/tech/20180127 How to install 
KVM on CentOS 7 - RHEL 7 Headless Server.md diff --git a/sources/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/sources/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md deleted file mode 100644 index 560963281d..0000000000 --- a/sources/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md +++ /dev/null @@ -1,344 +0,0 @@ -Translating by MjSeven - -How to install KVM on CentOS 7 / RHEL 7 Headless Server -====== - - -How do I install and configure KVM (Kernel-based Virtual Machine) on a CentOS 7 or RHEL (Red Hat Enterprise Linux) 7 server? How can I setup KMV on a CentOS 7 and use cloud images/cloud-init for installing guest VM? - - -Kernel-based Virtual Machine (KVM) is virtualization software for CentOS or RHEL 7. KVM turn your server into a hypervisor. This page shows how to setup and manage a virtualized environment with KVM in CentOS 7 or RHEL 7. It also described how to install and administer Virtual Machines (VMs) on a physical server using the CLI. Make sure that **Virtualization Technology (VT)** is enabled in your server 's BIOS. You can also run the following command [to test if CPU Support Intel VT and AMD-V Virtualization tech][1] -``` -$ lscpu | grep Virtualization -Virtualization: VT-x -``` - - - -### Follow installation steps of KVM on CentOS 7/RHEL 7 headless sever - -#### Step 1: Install kvm - -Type the following [yum command][2]: -`# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install` -[![How to install KVM on CentOS 7 RHEL 7 Headless Server][3]][3] -Start the libvirtd service: -``` -# systemctl enable libvirtd -# systemctl start libvirtd -``` - -#### Step 2: Verify kvm installation - -Make sure KVM module loaded using lsmod command and [grep command][4]: -`# lsmod | grep -i kvm` - -#### Step 3: Configure bridged networking - -By default dhcpd based network bridge configured by libvirtd. You can verify that with the following commands: -``` -# brctl show -# virsh net-list -``` -[![KVM default networking][5]][5] -All VMs (guest machine) only have network access to other VMs on the same server. A private network 192.168.122.0/24 created for you. Verify it: -`# virsh net-dumpxml default` -If you want your VMs avilable to other servers on your LAN, setup a a network bridge on the server that connected to the your LAN. Update your nic config file such as ifcfg-enp3s0 or em1: -`# vi /etc/sysconfig/network-scripts/enp3s0 ` -Add line: -``` -BRIDGE=br0 -``` - -[Save and close the file in vi][6]. Edit /etc/sysconfig/network-scripts/ifcfg-br0 and add: -`# vi /etc/sysconfig/network-scripts/ifcfg-br0` -Append the following: -``` -DEVICE="br0" -# I am getting ip from DHCP server # -BOOTPROTO="dhcp" -IPV6INIT="yes" -IPV6_AUTOCONF="yes" -ONBOOT="yes" -TYPE="Bridge" -DELAY="0" -``` - -Restart the networking service (warning ssh command will disconnect, it is better to reboot the box): -`# systemctl restart NetworkManager` -Verify it with brctl command: -`# brctl show` - -#### Step 4: Create your first virtual machine - -I am going to create a CentOS 7.x VM. 
First, grab CentOS 7.x latest ISO image using the wget command: -``` -# cd /var/lib/libvirt/boot/ -# wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso -``` -Verify ISO images: -``` -# wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/sha256sum.txt -# sha256sum -c sha256sum.txt -``` - -##### Create CentOS 7.x VM - -In this example, I'm creating CentOS 7.x VM with 2GB RAM, 2 CPU core, 1 nics and 40GB disk space, enter: -``` -# virt-install \ ---virt-type=kvm \ ---name centos7 \ ---ram 2048 \ ---vcpus=1 \ ---os-variant=centos7.0 \ ---cdrom=/var/lib/libvirt/boot/CentOS-7-x86_64-Minimal-1708.iso \ ---network=bridge=br0,model=virtio \ ---graphics vnc \ ---disk path=/var/lib/libvirt/images/centos7.qcow2,size=40,bus=virtio,format=qcow2 -``` -To configure vnc login from another terminal over ssh and type: -``` -# virsh dumpxml centos7 | grep vnc - -``` -Please note down the port value (i.e. 5901). You need to use an SSH client to setup tunnel and a VNC client to access the remote vnc server. Type the following SSH port forwarding command from your client/desktop/macbook pro system: -`$ ssh vivek@server1.cyberciti.biz -L 5901:127.0.0.1:5901` -Once you have ssh tunnel established, you can point your VNC client at your own 127.0.0.1 (localhost) address and port 5901 as follows: -[![][7]][7] -You should see CentOS Linux 7 guest installation screen as follows: -[![][8]][8] -Now just follow on screen instructions and install CentOS 7. Once installed, go ahead and click the reboot button. The remote server closed the connection to our VNC client. You can reconnect via KVM client to configure the rest of the server including SSH based session or firewall. - -#### Step 5: Using cloud images - -The above installation method is okay for learning purpose or a single VM. Do you need to deploy lots of VMs? Try cloud images. You can modify pre built cloud images as per your needs. For example, add users, ssh keys, setup time zone, and more using [Cloud-init][9] which is the defacto multi-distribution package that handles early initialization of a cloud instance. Let us see how to create CentOS 7 vm with 1024MB ram, 20GB disk space, and 1 vCPU. - -##### Grab CentOS 7 cloud image - -``` -# cd /var/lib/libvirt/boot -# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 -``` - -##### Create required directories - -``` -# D=/var/lib/libvirt/images -# VM=centos7-vm1 ## vm name ## -# mkdir -vp $D/$VM -mkdir: created directory '/var/lib/libvirt/images/centos7-vm1' -``` - -##### Create meta-data file - -``` -# cd $D/$VM -# vi meta-data -``` -Append the following: -``` -instance-id: centos7-vm1 -local-hostname: centos7-vm1 -``` - -##### Crete user-data file - -I am going to login into VM using ssh keys. So make sure you have ssh-keys in place: -`# ssh-keygen -t ed25519 -C "VM Login ssh key"` -[![ssh-keygen command][10]][11] -See "[How To Setup SSH Keys on a Linux / Unix System][12]" for more info. 
Edit user-data as follows: -``` -# cd $D/$VM -# vi user-data -``` -Add as follows (replace hostname, users, ssh-authorized-keys as per your setup): -``` -#cloud-config - -# Hostname management -preserve_hostname: False -hostname: centos7-vm1 -fqdn: centos7-vm1.nixcraft.com - -# Users -users: - - default - - name: vivek - groups: ['wheel'] - shell: /bin/bash - sudo: ALL=(ALL) NOPASSWD:ALL - ssh-authorized-keys: - - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIMP3MOF2ot8MOdNXCpHem0e2Wemg4nNmL2Tio4Ik1JY VM Login ssh key - -# Configure where output will go -output: - all: ">> /var/log/cloud-init.log" - -# configure interaction with ssh server -ssh_genkeytypes: ['ed25519', 'rsa'] - -# Install my public ssh key to the first user-defined user configured -# in cloud.cfg in the template (which is centos for CentOS cloud images) -ssh_authorized_keys: - - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIMP3MOF2ot8MOdNXCpHem0e2Wemg4nNmL2Tio4Ik1JY VM Login ssh key - -# set timezone for VM -timezone: Asia/Kolkata - -# Remove cloud-init -runcmd: - - systemctl stop network && systemctl start network - - yum -y remove cloud-init -``` - -##### Copy cloud image - -``` -# cd $D/$VM -# cp /var/lib/libvirt/boot/CentOS-7-x86_64-GenericCloud.qcow2 $VM.qcow2 -``` - -##### Create 20GB disk image - -``` -# cd $D/$VM -# export LIBGUESTFS_BACKEND=direct -# qemu-img create -f qcow2 -o preallocation=metadata $VM.new.image 20G -# virt-resize --quiet --expand /dev/sda1 $VM.qcow2 $VM.new.image -``` -[![Set VM image disk size][13]][13] -Overwrite it resized image: -``` -# cd $D/$VM -# mv $VM.new.image $VM.qcow2 -``` - -##### Creating a cloud-init ISO - -`# mkisofs -o $VM-cidata.iso -V cidata -J -r user-data meta-data` -[![Creating a cloud-init ISO][14]][14] - -##### Creating a pool - -``` -# virsh pool-create-as --name $VM --type dir --target $D/$VM -Pool centos7-vm1 created -``` - -##### Installing a CentOS 7 VM - -``` -# cd $D/$VM -# virt-install --import --name $VM \ ---memory 1024 --vcpus 1 --cpu host \ ---disk $VM.qcow2,format=qcow2,bus=virtio \ ---disk $VM-cidata.iso,device=cdrom \ ---network bridge=virbr0,model=virtio \ ---os-type=linux \ ---os-variant=centos7.0 \ ---graphics spice \ ---noautoconsole -``` -Delete unwanted files: -``` -# cd $D/$VM -# virsh change-media $VM hda --eject --config -# rm meta-data user-data centos7-vm1-cidata.iso -``` - -##### Find out IP address of VM - -`# virsh net-dhcp-leases default` -[![CentOS7-VM1- Created][15]][15] - -##### Log in to your VM - -Use ssh command: -`# ssh vivek@192.168.122.85` -[![Sample VM session][16]][16] - -### Useful commands - -Let us see some useful commands for managing VMs. - -#### List all VMs - -`# virsh list --all` - -#### Get VM info - -``` -# virsh dominfo vmName -# virsh dominfo centos7-vm1 -``` - -#### Stop/shutdown a VM - -`# virsh shutdown centos7-vm1` - -#### Start VM - -`# virsh start centos7-vm1` - -#### Mark VM for autostart at boot time - -`# virsh autostart centos7-vm1` - -#### Reboot (soft & safe reboot) VM - -`# virsh reboot centos7-vm1` -Reset (hard reset/not safe) VM -`# virsh reset centos7-vm1` - -#### Delete VM - -``` -# virsh shutdown centos7-vm1 -# virsh undefine centos7-vm1 -# virsh pool-destroy centos7-vm1 -# D=/var/lib/libvirt/images -# VM=centos7-vm1 -# rm -ri $D/$VM -``` -To see a complete list of virsh command type -``` -# virsh help | less -# virsh help | grep reboot -``` - - -### About the author - -The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. 
He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][17], [Facebook][18], [Google+][19]. - --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/ - -作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/faq/linux-xen-vmware-kvm-intel-vt-amd-v-support/ -[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) -[3]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-install-KVM-on-CentOS-7-RHEL-7-Headless-Server.jpg -[4]:https://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/ (See Linux/Unix grep command examples for more info) -[5]:https://www.cyberciti.biz/media/new/faq/2018/01/KVM-default-networking.jpg -[6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/ -[7]:https://www.cyberciti.biz/media/new/faq/2016/01/vnc-client.jpg -[8]:https://www.cyberciti.biz/media/new/faq/2016/01/centos7-guest-vnc.jpg -[9]:https://cloudinit.readthedocs.io/en/latest/index.html -[10]:https://www.cyberciti.biz/media/new/faq/2018/01/ssh-keygen-pub-key.jpg -[11]:https://www.cyberciti.biz/faq/linux-unix-generating-ssh-keys/ -[12]:https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/ -[13]:https://www.cyberciti.biz/media/new/faq/2018/01/Set-VM-image-disk-size.jpg -[14]:https://www.cyberciti.biz/media/new/faq/2018/01/Creating-a-cloud-init-ISO.jpg -[15]:https://www.cyberciti.biz/media/new/faq/2018/01/CentOS7-VM1-Created.jpg -[16]:https://www.cyberciti.biz/media/new/faq/2018/01/Sample-VM-session.jpg -[17]:https://twitter.com/nixcraft -[18]:https://facebook.com/nixcraft -[19]:https://plus.google.com/+CybercitiBiz From 8ede439b1998b445799bbc7e5277baee6b842d0e Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 5 Mar 2018 19:14:38 +0800 Subject: [PATCH 012/343] Create 20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md --- ...VM on CentOS 7 - RHEL 7 Headless Server.md | 350 ++++++++++++++++++ 1 file changed, 350 insertions(+) create mode 100644 translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md diff --git a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md new file mode 100644 index 0000000000..1aa429d651 --- /dev/null +++ b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @@ -0,0 +1,350 @@ +如何在 CentOS 7 / RHEL 7 终端服务器上安装 KVM +====== + +如何在 CnetOS 7 或 RHEL 7( Red Hat 企业版 Linux) 服务器上安装和配置 KVM(基于内核的虚拟机)?如何在 CnetOS 7 上设置 KMV 并使用云镜像/ cloud-init 来安装客户虚拟机? 
+ +基于内核的虚拟机(KVM)是 CentOS 或 RHEL 7 的虚拟化软件。KVM 将你的服务器变成虚拟机管理程序。本文介绍如何在 CentOS 7 或 RHEL 7 中使用 KVM 设置和管理虚拟化环境。还介绍了如何使用 CLI 在物理服务器上安装和管理虚拟机(VM)。确保在服务器的 BIOS 中启用了**虚拟化技术(vt)**。你也可以运行以下命令[测试 CPU 是否支持 Intel VT 和 AMD_V 虚拟化技术][1] +``` +$ lscpu | grep Virtualization +Virtualization: VT-x +``` + +### 按照 CentOS 7/RHEL 7 终端服务器上的 KVM 安装步骤进行操作 + +#### 步骤 1: 安装 kvm + +输入以下 [yum 命令][2]: +`# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install` + +[![How to install KVM on CentOS 7 RHEL 7 Headless Server][3]][3] + +启动 libvirtd 服务: +``` +# systemctl enable libvirtd +# systemctl start libvirtd +``` + +#### 步骤 2: 确认 kvm 安装 + +确保使用 lsmod 命令和 [grep命令][4] 加载 KVM 模块: +`# lsmod | grep -i kvm` + +#### 步骤 3: 配置桥接网络 + +默认情况下,由 libvirtd 配置的基于 dhcpd 的网桥。你可以使用以下命令验证: +``` +# brctl show +# virsh net-list +``` +[![KVM default networking][5]][5] + +所有虚拟机(客户机器)只能在同一台服务器上对其他虚拟机进行网络访问。为你创建的私有网络是 192.168.122.0/24。验证: +`# virsh net-dumpxml default` + +如果你希望你的虚拟机可用于 LAN 上的其他服务器,请在连接到你的 LAN 的服务器上设置一个网桥。更新你的网卡配置文件,如 ifcfg-enp3s0 或 em1: +`# vi /etc/sysconfig/network-scripts/enp3s0 ` +添加一行: +``` +BRIDGE=br0 +``` + +[使用 vi 保存并关闭文件][6]。编辑 /etc/sysconfig/network-scripts/ifcfg-br0 : +`# vi /etc/sysconfig/network-scripts/ifcfg-br0` +添加以下东西: +``` +DEVICE="br0" +# I am getting ip from DHCP server # +BOOTPROTO="dhcp" +IPV6INIT="yes" +IPV6_AUTOCONF="yes" +ONBOOT="yes" +TYPE="Bridge" +DELAY="0" +``` + +重新启动网络服务(警告:ssh命令将断开连接,最好重新启动该设备): +`# systemctl restart NetworkManager` + +用 brctl 命令验证它: +`# brctl show` + +#### 步骤 4: 创建你的第一个虚拟机 + +我将会创建一个 CentOS 7.x 虚拟机。首先,使用 wget 命令获取 CentOS 7.x 最新的 ISO 镜像: +``` +# cd /var/lib/libvirt/boot/ +# wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso +``` + +验证 ISO 镜像: +``` +# wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/sha256sum.txt +# sha256sum -c sha256sum.txt +``` + +##### 创建 CentOS 7.x 虚拟机 + +在这个例子中,我创建了 2GB RAM,2 个 CPU 核心,1 个网卡和 40 GB 磁盘空间的 CentOS 7.x 虚拟机,输入: +``` +# virt-install \ +--virt-type=kvm \ +--name centos7 \ +--ram 2048 \ +--vcpus=1 \ +--os-variant=centos7.0 \ +--cdrom=/var/lib/libvirt/boot/CentOS-7-x86_64-Minimal-1708.iso \ +--network=bridge=br0,model=virtio \ +--graphics vnc \ +--disk path=/var/lib/libvirt/images/centos7.qcow2,size=40,bus=virtio,format=qcow2 +``` + +从另一个终端通过 ssh 和 type 配置 vnc 登录: +``` +# virsh dumpxml centos7 | grep v nc + +``` + +请记录下端口值(即 5901)。你需要使用 SSH 客户端来建立隧道和 VNC 客户端才能访问远程 vnc 服务区。在客户端/桌面/ macbook pro 系统中输入以下 SSH 端口转化命令: +`$ ssh vivek@server1.cyberciti.biz -L 5901:127.0.0.1:5901` + +一旦你建立了 ssh 隧道,你可以将你的 VNC 客户端指向你自己的 127.0.0.1 (localhost) 地址和端口 5901,如下所示: +[![][7]][7] + +你应该看到 CentOS Linux 7 客户虚拟机安装屏幕如下: +[![][8]][8] + +现在只需按照屏幕说明进行操作并安装CentOS 7。一旦安装完成后,请继续并单击重启按钮。 远程服务器关闭了我们的 VNC 客户端的连接。 你可以通过 KVM 客户端重新连接,以配置服务器的其余部分,包括基于 SSH 的会话或防火墙。 + +#### 步骤 5: 使用云镜像 + +以上安装方法对于学习目的或单个虚拟机而言是可行的。你需要部署大量的虚拟机吗? 
尝试云镜像。你可以根据需要修改预先构建的云图像。例如,使用 [Cloud-init][9] 添加用户,ssh 密钥,设置时区等等,这是处理云实例的早期初始化的事实上的多分发包。让我们看看如何创建带有 1024MB RAM,20GB 磁盘空间和 1 个 vCPU 的 CentOS 7 虚拟机。(译注: vCPU 即电脑中的虚拟处理器) + +##### 获取 CentOS 7 云镜像 + +``` +# cd /var/lib/libvirt/boot +# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 +``` + +##### 创建所需的目录 + +``` +# D=/var/lib/libvirt/images +# VM=centos7-vm1 ## vm name ## +# mkdir -vp $D/$VM +mkdir: created directory '/var/lib/libvirt/images/centos7-vm1' +``` + +##### 创建元数据文件 + +``` +# cd $D/$VM +# vi meta-data +``` + +添加以下东西: +``` +instance-id: centos7-vm1 +local-hostname: centos7-vm1 +``` + +##### 创建用户数据文件 + +我将使用 ssh 密钥登录到虚拟机。所以确保你有 ssh-keys: +`# ssh-keygen -t ed25519 -C "VM Login ssh key"` +[![ssh-keygen command][10]][11] + +请参阅 "[如何在 Linux/Unix 系统上设置 SSH 密钥][12]" 来获取更多信息。编辑用户数据如下: +``` +# cd $D/$VM +# vi user-data +``` +添加如下(根据你的设置替换主机名,用户,ssh-authorized-keys): +``` +#cloud-config + +# Hostname management +preserve_hostname: False +hostname: centos7-vm1 +fqdn: centos7-vm1.nixcraft.com + +# Users +users: + - default + - name: vivek + groups: ['wheel'] + shell: /bin/bash + sudo: ALL=(ALL) NOPASSWD:ALL + ssh-authorized-keys: + - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIMP3MOF2ot8MOdNXCpHem0e2Wemg4nNmL2Tio4Ik1JY VM Login ssh key + +# Configure where output will go +output: + all: ">> /var/log/cloud-init.log" + +# configure interaction with ssh server +ssh_genkeytypes: ['ed25519', 'rsa'] + +# Install my public ssh key to the first user-defined user configured +# in cloud.cfg in the template (which is centos for CentOS cloud images) +ssh_authorized_keys: + - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIMP3MOF2ot8MOdNXCpHem0e2Wemg4nNmL2Tio4Ik1JY VM Login ssh key + +# set timezone for VM +timezone: Asia/Kolkata + +# Remove cloud-init +runcmd: + - systemctl stop network && systemctl start network + - yum -y remove cloud-init +``` + +##### 复制云镜像 + +``` +# cd $D/$VM +# cp /var/lib/libvirt/boot/CentOS-7-x86_64-GenericCloud.qcow2 $VM.qcow2 +``` + +##### 创建 20GB 磁盘映像 + +``` +# cd $D/$VM +# export LIBGUESTFS_BACKEND=direct +# qemu-img create -f qcow2 -o preallocation=metadata $VM.new.image 20G +# virt-resize --quiet --expand /dev/sda1 $VM.qcow2 $VM.new.image +``` +[![Set VM image disk size][13]][13] +覆盖它的缩放图片: +``` +# cd $D/$VM +# mv $VM.new.image $VM.qcow2 +``` + +##### 创建一个 cloud-init ISO + +`# mkisofs -o $VM-cidata.iso -V cidata -J -r user-data meta-data` +[![Creating a cloud-init ISO][14]][14] + +##### 创建一个 pool + +``` +# virsh pool-create-as --name $VM --type dir --target $D/$VM +Pool centos7-vm1 created +``` + +##### 安装 CentOS 7 虚拟机 + +``` +# cd $D/$VM +# virt-install --import --name $VM \ +--memory 1024 --vcpus 1 --cpu host \ +--disk $VM.qcow2,format=qcow2,bus=virtio \ +--disk $VM-cidata.iso,device=cdrom \ +--network bridge=virbr0,model=virtio \ +--os-type=linux \ +--os-variant=centos7.0 \ +--graphics spice \ +--noautoconsole +``` +删除不需要的文件: +``` +# cd $D/$VM +# virsh change-media $VM hda --eject --config +# rm meta-data user-data centos7-vm1-cidata.iso +``` + +##### 查找虚拟机的 IP 地址 + +`# virsh net-dhcp-leases default` +[![CentOS7-VM1- Created][15]][15] + +##### 登录到你的虚拟机 + +使用 ssh 命令: +`# ssh vivek@192.168.122.85` +[![Sample VM session][16]][16] + +### 有用的命令 + +让我们看看管理虚拟机的一些有用的命令。 + +#### 列出所有虚拟机 + +`# virsh list --all` + +#### 获取虚拟机信息 + +``` +# virsh dominfo vmName +# virsh dominfo centos7-vm1 +``` + +#### 停止/关闭虚拟机 + +`# virsh shutdown centos7-vm1` + +#### 开启虚拟机 + +`# virsh start centos7-vm1` + +#### 将虚拟机标记为在引导时自动启动 + +`# virsh autostart centos7-vm1` + +#### 重新启动(软安全重启)虚拟机 + +`# 
virsh reboot centos7-vm1` +重置(硬重置/不安全)虚拟机 +`# virsh reset centos7-vm1` + +#### 删除虚拟机 + +``` +# virsh shutdown centos7-vm1 +# virsh undefine centos7-vm1 +# virsh pool-destroy centos7-vm1 +# D=/var/lib/libvirt/images +# VM=centos7-vm1 +# rm -ri $D/$VM +``` +查看 virsh 命令类型的完整列表 +``` +# virsh help | less +# virsh help | grep reboot +``` + +### 关于作者 + +作者是 nixCraft 的创建者,也是经验丰富的系统管理员和 Linux 操作系统/ Unix shell 脚本的培训师。 他曾与全球客户以及 IT,教育,国防和空间研究以及非营利部门等多个行业合作。 在 [Twitter][17],[Facebook][18],[Google +][19] 上关注他。 + +-------------------------------------------------------------------------------- + +via: [https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/][https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/] + +作者:[Vivek Gite][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/faq/linux-xen-vmware-kvm-intel-vt-amd-v-support/ +[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) +[3]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-install-KVM-on-CentOS-7-RHEL-7-Headless-Server.jpg +[4]:https://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/ (See Linux/Unix grep command examples for more info) +[5]:https://www.cyberciti.biz/media/new/faq/2018/01/KVM-default-networking.jpg +[6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/ +[7]:https://www.cyberciti.biz/media/new/faq/2016/01/vnc-client.jpg +[8]:https://www.cyberciti.biz/media/new/faq/2016/01/centos7-guest-vnc.jpg +[9]:https://cloudinit.readthedocs.io/en/latest/index.html +[10]:https://www.cyberciti.biz/media/new/faq/2018/01/ssh-keygen-pub-key.jpg +[11]:https://www.cyberciti.biz/faq/linux-unix-generating-ssh-keys/ +[12]:https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/ +[13]:https://www.cyberciti.biz/media/new/faq/2018/01/Set-VM-image-disk-size.jpg +[14]:https://www.cyberciti.biz/media/new/faq/2018/01/Creating-a-cloud-init-ISO.jpg +[15]:https://www.cyberciti.biz/media/new/faq/2018/01/CentOS7-VM1-Created.jpg +[16]:https://www.cyberciti.biz/media/new/faq/2018/01/Sample-VM-session.jpg +[17]:https://twitter.com/nixcraft +[18]:https://facebook.com/nixcraft +[19]:https://plus.google.com/+CybercitiBiz From 69b20e4aac51d1dae54d6f2b270b658aad73e736 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 5 Mar 2018 19:16:33 +0800 Subject: [PATCH 013/343] Update 20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md --- ...7 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md index 1aa429d651..994f0713a0 100644 --- a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md +++ b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @@ -3,7 +3,7 @@ 如何在 CnetOS 7 或 RHEL 7( Red Hat 企业版 Linux) 服务器上安装和配置 KVM(基于内核的虚拟机)?如何在 CnetOS 7 上设置 KMV 并使用云镜像/ cloud-init 来安装客户虚拟机? 
-基于内核的虚拟机(KVM)是 CentOS 或 RHEL 7 的虚拟化软件。KVM 将你的服务器变成虚拟机管理程序。本文介绍如何在 CentOS 7 或 RHEL 7 中使用 KVM 设置和管理虚拟化环境。还介绍了如何使用 CLI 在物理服务器上安装和管理虚拟机(VM)。确保在服务器的 BIOS 中启用了**虚拟化技术(vt)**。你也可以运行以下命令[测试 CPU 是否支持 Intel VT 和 AMD_V 虚拟化技术][1] +基于内核的虚拟机(KVM)是 CentOS 或 RHEL 7 的虚拟化软件。KVM 将你的服务器变成虚拟机管理程序。本文介绍如何在 CentOS 7 或 RHEL 7 中使用 KVM 设置和管理虚拟化环境。还介绍了如何使用 CLI 在物理服务器上安装和管理虚拟机(VM)。确保在服务器的 BIOS 中启用了**虚拟化技术(vt)**。你也可以运行以下命令[测试 CPU 是否支持 Intel VT 和 AMD_V 虚拟化技术][1]。 ``` $ lscpu | grep Virtualization Virtualization: VT-x From 44cab90f4c7e4236e24fec7e3953307dc4d8e61b Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 5 Mar 2018 19:18:27 +0800 Subject: [PATCH 014/343] Update 20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md --- ...27 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md | 1 + 1 file changed, 1 insertion(+) diff --git a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md index 994f0713a0..5fce199541 100644 --- a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md +++ b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @@ -257,6 +257,7 @@ Pool centos7-vm1 created ##### 查找虚拟机的 IP 地址 `# virsh net-dhcp-leases default` + [![CentOS7-VM1- Created][15]][15] ##### 登录到你的虚拟机 From bb5ae07fef5c0dc4552ba4c3d68d93f70e7df859 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 5 Mar 2018 19:19:38 +0800 Subject: [PATCH 015/343] Update 20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md --- ...7 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md index 5fce199541..233daa72b2 100644 --- a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md +++ b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @@ -321,7 +321,7 @@ Pool centos7-vm1 created -------------------------------------------------------------------------------- -via: [https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/][https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/] +via: [https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/](https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/) 作者:[Vivek Gite][a] 译者:[MjSeven](https://github.com/MjSeven) From f198695ef0db95d07f9ac6b9add3495c481dcd8f Mon Sep 17 00:00:00 2001 From: Feng Lv Date: Mon, 5 Mar 2018 19:36:14 +0800 Subject: [PATCH 016/343] translated --- sources/tech/20160810 How does gdb work.md | 220 ------------------ translated/tech/20160810 How does gdb work.md | 219 +++++++++++++++++ 2 files changed, 219 insertions(+), 220 deletions(-) delete mode 100644 sources/tech/20160810 How does gdb work.md create mode 100644 translated/tech/20160810 How does gdb work.md diff --git a/sources/tech/20160810 How does gdb work.md b/sources/tech/20160810 How does gdb work.md deleted file mode 100644 index 56b0cfe7bf..0000000000 --- a/sources/tech/20160810 How does gdb work.md +++ /dev/null @@ -1,220 +0,0 @@ -translating by ucasFL - -How does gdb work? 
-============================================================ - -Hello! Today I was working a bit on my [ruby stacktrace project][1] and I realized that now I know a couple of things about how gdb works internally. - -Lately I’ve been using gdb to look at Ruby programs, so we’re going to be running gdb on a Ruby program. This really means the Ruby interpreter. First, we’re going to print out the address of a global variable: `ruby_current_thread`: - -### getting a global variable - -Here’s how to get the address of the global `ruby_current_thread`: - -``` -$ sudo gdb -p 2983 -(gdb) p & ruby_current_thread -$2 = (rb_thread_t **) 0x5598a9a8f7f0 - -``` - -There are a few places a variable can live: on the heap, the stack, or in your program’s text. Global variables are part of your program! You can think of them as being allocated at compile time, kind of. It turns out we can figure out the address of a global variable pretty easily! Let’s see how `gdb` came up with `0x5598a9a8f7f0`. - -We can find the approximate region this variable lives in by looking at a cool file in `/proc` called `/proc/$pid/maps`. - -``` -$ sudo cat /proc/2983/maps | grep bin/ruby -5598a9605000-5598a9886000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -5598a9a86000-5598a9a8b000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -5598a9a8b000-5598a9a8d000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby - -``` - -So! There’s this starting address `5598a9605000` That’s  _like_  `0x5598a9a8f7f0`, but different. How different? Well, here’s what I get when I subtract them: - -``` -(gdb) p/x 0x5598a9a8f7f0 - 0x5598a9605000 -$4 = 0x48a7f0 - -``` - -“What’s that number?”, you might ask? WELL. Let’s look at the **symbol table**for our program with `nm`. - -``` -sudo nm /proc/2983/exe | grep ruby_current_thread -000000000048a7f0 b ruby_current_thread - -``` - -What’s that we see? Could it be `0x48a7f0`? Yes it is! So!! If we want to find the address of a global variable in our program, all we need to do is look up the name of the variable in the symbol table, and then add that to the start of the range in `/proc/whatever/maps`, and we’re done! - -So now we know how gdb does that. But gdb does so much more!! Let’s skip ahead to… - -### dereferencing pointers - -``` -(gdb) p ruby_current_thread -$1 = (rb_thread_t *) 0x5598ab3235b0 - -``` - -The next thing we’re going to do is **dereference** that `ruby_current_thread`pointer. We want to see what’s in that address! To do that, gdb will run a bunch of system calls like this: - -``` -ptrace(PTRACE_PEEKTEXT, 2983, 0x5598a9a8f7f0, [0x5598ab3235b0]) = 0 - -``` - -You remember this address `0x5598a9a8f7f0`? gdb is asking “hey, what’s in that address exactly”? `2983` is the PID of the process we’re running gdb on. It’s using the `ptrace` system call which is how gdb does everything. - -Awesome! So we can dereference memory and figure out what bytes are at what memory addresses. Some useful gdb commands to know here are `x/40w variable` and `x/40b variable` which will display 40 words / bytes at a given address, respectively. - -### describing structs - -The memory at an address looks like this. A bunch of bytes! - -``` -(gdb) x/40b ruby_current_thread -0x5598ab3235b0: 16 -90 55 -85 -104 85 0 0 -0x5598ab3235b8: 32 47 50 -85 -104 85 0 0 -0x5598ab3235c0: 16 -64 -55 115 -97 127 0 0 -0x5598ab3235c8: 0 0 2 0 0 0 0 0 -0x5598ab3235d0: -96 -83 -39 115 -97 127 0 0 - -``` - -That’s useful, but not that useful! 
If you are a human like me and want to know what it MEANS, you need more. Like this: - -``` -(gdb) p *(ruby_current_thread) -$8 = {self = 94114195940880, vm = 0x5598ab322f20, stack = 0x7f9f73c9c010, - stack_size = 131072, cfp = 0x7f9f73d9ada0, safe_level = 0, raised_flag = 0, - last_status = 8, state = 0, waiting_fd = -1, passed_block = 0x0, - passed_bmethod_me = 0x0, passed_ci = 0x0, top_self = 94114195612680, - top_wrapper = 0, base_block = 0x0, root_lep = 0x0, root_svar = 8, thread_id = - 140322820187904, - -``` - -GOODNESS. That is a lot more useful. How does gdb know that there are all these cool fields like `stack_size`? Enter DWARF. DWARF is a way to store extra debugging data about your program, so that debuggers like gdb can do their job better! It’s generally stored as part of a binary. If I run `dwarfdump` on my Ruby binary, I get some output like this: - -(I’ve redacted it heavily to make it easier to understand) - -``` -DW_AT_name "rb_thread_struct" -DW_AT_byte_size 0x000003e8 -DW_TAG_member - DW_AT_name "self" - DW_AT_type <0x00000579> - DW_AT_data_member_location DW_OP_plus_uconst 0 -DW_TAG_member - DW_AT_name "vm" - DW_AT_type <0x0000270c> - DW_AT_data_member_location DW_OP_plus_uconst 8 -DW_TAG_member - DW_AT_name "stack" - DW_AT_type <0x000006b3> - DW_AT_data_member_location DW_OP_plus_uconst 16 -DW_TAG_member - DW_AT_name "stack_size" - DW_AT_type <0x00000031> - DW_AT_data_member_location DW_OP_plus_uconst 24 -DW_TAG_member - DW_AT_name "cfp" - DW_AT_type <0x00002712> - DW_AT_data_member_location DW_OP_plus_uconst 32 -DW_TAG_member - DW_AT_name "safe_level" - DW_AT_type <0x00000066> - -``` - -So. The name of the type of `ruby_current_thread` is `rb_thread_struct`. It has size `0x3e8` (or 1000 bytes), and it has a bunch of member items. `stack_size` is one of them, at an offset of 24, and it has type 31\. What’s 31? No worries! We can look that up in the DWARF info too! - -``` -< 1><0x00000031> DW_TAG_typedef - DW_AT_name "size_t" - DW_AT_type <0x0000003c> -< 1><0x0000003c> DW_TAG_base_type - DW_AT_byte_size 0x00000008 - DW_AT_encoding DW_ATE_unsigned - DW_AT_name "long unsigned int" - -``` - -So! `stack_size` has type `size_t`, which means `long unsigned int`, and is 8 bytes. That means that we can read the stack size! - -How that would break down, once we have the DWARF debugging data, is: - -1. Read the region of memory that `ruby_current_thread` is pointing to - -2. Add 24 bytes to get to `stack_size` - -3. Read 8 bytes (in little-endian format, since we’re on x86) - -4. Get the answer! - -Which in this case is 131072 or 128 kb. - -To me, this makes it a lot more obvious what debugging info is **for** – if we didn’t have all this extra metadata about what all these variables meant, we would have no idea what the bytes at address `0x5598ab3235b0` meant. - -This is also why you can install debug info for a program separately from your program – gdb doesn’t care where it gets the extra debug info from. - -### DWARF is confusing - -I’ve been reading a bunch of DWARF info recently. Right now I’m using libdwarf which hasn’t been the best experience – the API is confusing, you initialize everything in a weird way, and it’s really slow (it takes 0.3 seconds to read all the debugging data out of my Ruby program which seems ridiculous). I’ve been told that libdw from elfutils is better. - -Also, I casually remarked that you can look at `DW_AT_data_member_location` to get the offset of a struct member! 
But I looked up on Stack Overflow how to actually do that and I got [this answer][2]. Basically you start with a check like: - -``` -dwarf_whatform(attrs[i], &form, &error); - if (form == DW_FORM_data1 || form == DW_FORM_data2 - form == DW_FORM_data2 || form == DW_FORM_data4 - form == DW_FORM_data8 || form == DW_FORM_udata) { - -``` - -and then it keeps GOING. Why are there 8 million different `DW_FORM_data` things I need to check for? What is happening? I have no idea. - -Anyway my impression is that DWARF is a large and complicated standard (and possibly the libraries people use to generate DWARF are subtly incompatible?), but it’s what we have, so that’s what we work with! - -I think it’s really cool that I can write code that reads DWARF and my code actually mostly works. Except when it crashes. I’m working on that. - -### unwinding stacktraces - -In an earlier version of this post, I said that gdb unwinds stacktraces using libunwind. It turns out that this isn’t true at all! - -Someone who’s worked on gdb a lot emailed me to say that they actually spent a ton of time figuring out how to unwind stacktraces so that they can do a better job than libunwind does. This means that if you get stopped in the middle of a weird program with less debug info than you might hope for that’s done something strange with its stack, gdb will try to figure out where you are anyway. Thanks <3 - -### other things gdb does - -The few things I’ve described here (reading memory, understanding DWARF to show you structs) aren’t everything gdb does – just looking through Brendan Gregg’s [gdb example from yesterday][3], we see that gdb also knows how to - -* disassemble assembly - -* show you the contents of your registers - -and in terms of manipulating your program, it can - -* set breakpoints and step through a program - -* modify memory (!! danger !!) - -Knowing more about how gdb works makes me feel a lot more confident when using it! I used to get really confused because gdb kind of acts like a C REPL sometimes – you type `ruby_current_thread->cfp->iseq`, and it feels like writing C code! But you’re not really writing C at all, and it was easy for me to run into limitations in gdb and not understand why. - -Knowing that it’s using DWARF to figure out the contents of the structs gives me a better mental model and have more correct expectations! Awesome. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ - -作者:[ Julia Evans][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/ -[1]:http://jvns.ca/blog/2016/06/12/a-weird-system-call-process-vm-readv/ -[2]:https://stackoverflow.com/questions/25047329/how-to-get-struct-member-offset-from-dwarf-info -[3]:http://www.brendangregg.com/blog/2016-08-09/gdb-example-ncurses.html diff --git a/translated/tech/20160810 How does gdb work.md b/translated/tech/20160810 How does gdb work.md new file mode 100644 index 0000000000..e45b988d3d --- /dev/null +++ b/translated/tech/20160810 How does gdb work.md @@ -0,0 +1,219 @@ +gdb 如何工作? 
+============================================================ + +大家好!今天,我开始进行我的 [ruby 堆栈跟踪项目][1],我意识到,我现在了解了一些关于 gdb 内部如何工作的内容。 + +最近,我使用 gdb 来查看我的 Ruby 程序,所以,我们将对一个 Ruby 程序运行 gdb 。它实际上就是一个 Ruby 解释器。首先,我们需要打印出一个全局变量的地址:`ruby_current_thread`。 + +### 获取全局变量 + +下面展示了如何获取全局变量 `ruby_current_thread` 的地址: + +``` +$ sudo gdb -p 2983 +(gdb) p & ruby_current_thread +$2 = (rb_thread_t **) 0x5598a9a8f7f0 + +``` + +变量能够位于的地方有堆、栈或者程序的文本段。全局变量也是程序的一部分。某种程度上,你可以把它们想象成是在编译的时候分配的。因此,我们可以很容易的找出全局变量的地址。让我们来看看,gdb 是如何找出 `0x5598a9a87f0` 这个地址的。 + +我们可以通过查看位于 `/proc` 目录下一个叫做 `/proc/$pid/maps` 的文件,来找到这个变量所位于的大致区域。 + + +``` +$ sudo cat /proc/2983/maps | grep bin/ruby +5598a9605000-5598a9886000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby +5598a9a86000-5598a9a8b000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby +5598a9a8b000-5598a9a8d000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby + +``` + +所以,我们看到,起始地址 `5598a9605000` 和 `0x5598a9a8f7f0` 很像,但并不一样。哪里不一样呢,我们把两个数相减,看看结果是多少: + +``` +(gdb) p/x 0x5598a9a8f7f0 - 0x5598a9605000 +$4 = 0x48a7f0 + +``` + +你可能会问,这个数是什么?让我们使用 `nm` 来查看一下程序的符号表。 + +``` +sudo nm /proc/2983/exe | grep ruby_current_thread +000000000048a7f0 b ruby_current_thread + +``` + +我们看到了什么?能够看到 `0x48a7f0` 吗?是的,没错。所以,如果我们想找到程序中一个全局变量的地址,那么只需在符号表中查找变量的名字,然后再加上在 `/proc/whatever/maps` 中的起始地址,就得到了。 + +所以现在,我们知道 gdb 做了什么。但是,gdb 实际做的事情更多,让我们跳过直接转到… + +### 解引用指针 + +``` +(gdb) p ruby_current_thread +$1 = (rb_thread_t *) 0x5598ab3235b0 + +``` + +我们要做的下一件事就是解引用 `ruby_current_thread` 这一指针。我们想看一下它所指向的地址。为了完成这件事,gdb 会运行大量系统调用比如: + +``` +ptrace(PTRACE_PEEKTEXT, 2983, 0x5598a9a8f7f0, [0x5598ab3235b0]) = 0 + +``` + +你是否还记得 `0x5598a9a8f7f0` 这个地址?gdb 会问:“嘿,在这个地址中的实际内容是什么?”。`2983` 是我们运行 gdb 这个进程的 ID。gdb 使用 `ptrace` 这一系统调用来完成这一件事。 + +好极了!因此,我们可以解引用内存并找出内存地址中存储的内容。一些有用的 gdb 命令能够分别知道 `x/40w` 和 `x/40b` 这两个变量哪一个会在给定地址展示 40 个字/字节。 + +### 描述结构 + +一个内存地址中的内容可能看起来像下面这样。大量的字节! 
+ +``` +(gdb) x/40b ruby_current_thread +0x5598ab3235b0: 16 -90 55 -85 -104 85 0 0 +0x5598ab3235b8: 32 47 50 -85 -104 85 0 0 +0x5598ab3235c0: 16 -64 -55 115 -97 127 0 0 +0x5598ab3235c8: 0 0 2 0 0 0 0 0 +0x5598ab3235d0: -96 -83 -39 115 -97 127 0 0 + +``` + +这很有用,但也不是非常有用!如果你是一个像我一样的人类并且想知道它代表什么,那么你需要更多内容,比如像这样: + +``` +(gdb) p *(ruby_current_thread) +$8 = {self = 94114195940880, vm = 0x5598ab322f20, stack = 0x7f9f73c9c010, + stack_size = 131072, cfp = 0x7f9f73d9ada0, safe_level = 0, raised_flag = 0, + last_status = 8, state = 0, waiting_fd = -1, passed_block = 0x0, + passed_bmethod_me = 0x0, passed_ci = 0x0, top_self = 94114195612680, + top_wrapper = 0, base_block = 0x0, root_lep = 0x0, root_svar = 8, thread_id = + 140322820187904, + +``` + +太好了。现在就更加有用了。gdb 是如何知道这些所有域的,比如 `stack_size` ?输入 `DWARF`。`DWARF` 是存储额外程序调试数据的一种方式,从而调试器比如 gdb 能够更好的工作。它通常存储为一部分二进制。如果我对我的 Ruby 二进制文件运行 `dwarfdump` 命令,那么我将会得到下面的输出: + +(我已经重新编排使得它更容易理解) + +``` +DW_AT_name "rb_thread_struct" +DW_AT_byte_size 0x000003e8 +DW_TAG_member + DW_AT_name "self" + DW_AT_type <0x00000579> + DW_AT_data_member_location DW_OP_plus_uconst 0 +DW_TAG_member + DW_AT_name "vm" + DW_AT_type <0x0000270c> + DW_AT_data_member_location DW_OP_plus_uconst 8 +DW_TAG_member + DW_AT_name "stack" + DW_AT_type <0x000006b3> + DW_AT_data_member_location DW_OP_plus_uconst 16 +DW_TAG_member + DW_AT_name "stack_size" + DW_AT_type <0x00000031> + DW_AT_data_member_location DW_OP_plus_uconst 24 +DW_TAG_member + DW_AT_name "cfp" + DW_AT_type <0x00002712> + DW_AT_data_member_location DW_OP_plus_uconst 32 +DW_TAG_member + DW_AT_name "safe_level" + DW_AT_type <0x00000066> + +``` + +所以,`ruby_current_thread` 的类型名为 `rb_thread_struct`,它的大小为 0x3e8 (或 1000 字节),它有许多成员项,`stack_size` 是其中之一,在偏移为 24 的地方,它有类型 `31\` 。`31` 是什么?不用担心,我们也可以在 DWARF 信息中查看。 + +``` +< 1><0x00000031> DW_TAG_typedef + DW_AT_name "size_t" + DW_AT_type <0x0000003c> +< 1><0x0000003c> DW_TAG_base_type + DW_AT_byte_size 0x00000008 + DW_AT_encoding DW_ATE_unsigned + DW_AT_name "long unsigned int" + +``` + +所以,`stack_size` 具有类型 `size_t`,即 `long unsigned int`,它是 8 字节的。这意味着我们可以查看栈大小。 + +如果我们有了 DWARF 调试数据,该如何分解: + +1. 查看 `ruby_current_thread` 所指向的内存区域 + +2. 加上 24 字节来得到 `stack_size` + +3. 读 8 字节(以小端的格式,因为是在 x86 上) + +4. 得到答案! 
+ +在上面这个例子中是 131072 或 128 kb 。 + +对我来说,这使得调试信息的用途更加明显。如果我们不知道这些所有变量所表示的额外元数据,那么我们无法知道在 `0x5598ab325b0` 这一地址的字节是什么。 + +这就是为什么你可以从你的程序中单独安装一个程序的调试信息,因为 gdb 并不关心从何处获取额外的调试信息。 + +### DWARF 很迷惑 + +我最近阅读了大量的 DWARF 信息。现在,我使用 libdwarf,使用体验不是很好,这个 API 很令人迷惑,你将以一种奇怪的方式初始化所有东西,它真的很慢(需要花费 0.3 s 的时间来读取我的 Ruby 程序的所有调试信息,这真是可笑)。有人告诉我,libdw 比 elfutils 要好一些。 + +同样,你可以查看 `DW_AT_data_member_location` 来查看结构成员的偏移。我在 Stack Overflow 上查找如何完成这件事,并且得到[这个答案][2]。基本上,以下面这样一个检查开始: + +``` +dwarf_whatform(attrs[i], &form, &error); + if (form == DW_FORM_data1 || form == DW_FORM_data2 + form == DW_FORM_data2 || form == DW_FORM_data4 + form == DW_FORM_data8 || form == DW_FORM_udata) { + +``` + +继续往前。为什么会有 800 万种不同的 `DW_FORM_data` 需要检查?发生了什么?我没有头绪。 + +不管怎么说,我的印象是,DWARF 是一个庞大而复杂的标准(可能是人们用来生成 DWARF 的库不匹配),但是这是我们所拥有的,所以我们只能用它来工作。 + +我能够编写代码并查看 DWARF 并且我的代码实际上大多数能够工作,这就很酷了,除了程序崩溃的时候。我就是这样工作的。 + +### 展开栈路径 + +在这篇文章的早期版本中,我说过,gdb 使用 libunwind 来展开栈路径,这样说并不总是对的。 + +有一位对 gdb 有深入研究的人发了大量邮件告诉我,他们花费了大量时间来尝试如何展开栈路径从而能够做得比 libunwind 更好。这意味着,如果你在程序的一个奇怪的中间位置停下来了,你所能够获取的调试信息又很少,那么你可以对栈做一些奇怪的事情,gdb 会尝试找出你位于何处。 + +### gdb 能做的其他事 + +我在这儿所描述的一些事请(查看内存,理解 DWARF 所展示的结构)并不是 gdb 能够做的全部事情。阅读 Brendan Gregg 的[昔日 gdb 例子][3],我们可以知道,gdb 也能够完成下面这些事情: + +* 反汇编 + +* 查看寄存器内容 + +在操作程序方面,它可以: + +* 设置断点,单步运行程序 + +* 修改内存(这是一个危险行为) + +了解 gdb 如何工作使得当我使用它的时候更加自信。我过去经常感到迷惑,因为 gdb 有点像 C,当你输入 `ruby_current_thread->cfp->iseq`,就好像是在写 C 代码。但是你并不是在写 C 代码。我很容易遇到 gdb 的限制,不知道为什么。 + +知道使用 DWARF 来找出结构内容给了我一个更好的心理模型和更加正确的期望!这真是极好的! + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ + +作者:[Julia Evans][a] +译者:[ucasFL](https://github.com/ucasFL) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:http://jvns.ca/blog/2016/06/12/a-weird-system-call-process-vm-readv/ +[2]:https://stackoverflow.com/questions/25047329/how-to-get-struct-member-offset-from-dwarf-info +[3]:http://www.brendangregg.com/blog/2016-08-09/gdb-example-ncurses.html From d6f40c87c23a1f57e3243375ae79c77ebf8b2b53 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 5 Mar 2018 22:57:12 +0800 Subject: [PATCH 017/343] PRF:20171214 6 open source home automation tools.md @qhwdw --- ...214 6 open source home automation tools.md | 26 ++++++++++--------- 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/translated/tech/20171214 6 open source home automation tools.md b/translated/tech/20171214 6 open source home automation tools.md index 742ab453fe..017e0fd447 100644 --- a/translated/tech/20171214 6 open source home automation tools.md +++ b/translated/tech/20171214 6 open source home automation tools.md @@ -1,35 +1,37 @@ 6 个开源的家庭自动化工具 ====== +> 用这些开源软件解决方案构建一个更智能的家庭。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_openlightbulbs.png?itok=nrv9hgnH) [物联网][13] 不仅是一个时髦词,在现实中,自 2016 年我们发布了一篇关于家庭自动化工具的评论文章以来,它也在迅速占领着我们的生活。在 2017,[26.5% 的美国家庭][14] 已经使用了一些智能家居技术;预计五年内,这一数字还将翻倍。 -使用数量持续增加的各种设备,可以帮助你实现对家庭的自动化管理、安保、和监视,在家庭自动化方面,从来没有像现在这样容易和更加吸引人过。不论你是要远程控制你的 HVAC 系统,集成一个家庭影院,保护你的家免受盗窃、火灾、或是其它威胁,还是节省能源或只是控制几盏灯,现在都有无数的设备可以帮到你。 +随着这些数量持续增加的各种设备的使用,可以帮助你实现对家庭的自动化管理、安保、和监视,在家庭自动化方面,从来没有像现在这样容易和更加吸引人过。不论你是要远程控制你的 HVAC 系统,集成一个家庭影院,保护你的家免受盗窃、火灾、或是其它威胁,还是节省能源或只是控制几盏灯,现在都有无数的设备可以帮到你。 -但同时,还有许多用户担心安装在他们家庭中的新设备带来的安全和隐私问题 —— 一个很现实也很 [严肃的问题][15]。他们想要去控制有谁可以接触到这个重要的系统,这个系统管理着他们的应用程序,记录了他们生活中的点点滴滴。这种想法是可以理解的:毕竟在一个连你的冰箱都是智能设备的今天,你不想要一个基本的保证吗?甚至是如果你授权了设备可以与外界通讯,它是否是仅被授权的人能够访问它呢? 
+但同时,还有许多用户担心安装在他们家庭中的新设备带来的安全和隐私问题 —— 这是一个很现实也很 [严肃的问题][15]。他们想要去控制有谁可以接触到这个重要的系统,这个系统管理着他们的应用程序,记录了他们生活中的点点滴滴。这种想法是可以理解的:毕竟在一个连你的冰箱都是智能设备的今天,你不想要一个基本的保证吗?甚至是如果你授权了设备可以与外界通讯,它是否是仅被授权的人访问它呢? [对安全的担心][16] 是为什么开源对我们将来使用的互联设备至关重要的众多理由之一。由于源代码运行在他们自己的设备上,完全可以去搞明白控制你的家庭的程序,也就是说你可以查看它的代码,如果必要的话甚至可以去修改它。 -虽然联网设备通常都包含他们专有的组件,但是将开源引入家庭自动化的第一步是确保你的设备和这些设备可以共同工作 —— 它们为你提供一个接口—— 并且是开源的。幸运的是,现在有许多解决方案可供选择,从 PC 到树莓派,你可以在它们上做任何事情。 +虽然联网设备通常都包含它们专有的组件,但是将开源引入家庭自动化的第一步是确保你的设备和这些设备可以共同工作 —— 它们为你提供一个接口 —— 并且是开源的。幸运的是,现在有许多解决方案可供选择,从 PC 到树莓派,你可以在它们上做任何事情。 这里有几个我比较喜欢的。 ### Calaos -[Calaos][17] 是一个设计为全栈家庭自动化的平台,包含一个服务器应用程序、触摸屏接口、Web 应用程序、支持 iOS 和 Android 的原生移动应用、以及一个运行在底层的预配置好的 Linux 操作系统。Calaos 项目出自一个法国公司,因此它的支持论坛以法语为主,不过大量的介绍资料和文档都已经翻译为英语了。 +[Calaos][17] 是一个设计为全栈的家庭自动化平台,包含一个服务器应用程序、触摸屏界面、Web 应用程序、支持 iOS 和 Android 的原生移动应用、以及一个运行在底层的预配置好的 Linux 操作系统。Calaos 项目出自一个法国公司,因此它的支持论坛以法语为主,不过大量的介绍资料和文档都已经翻译为英语了。 Calaos 使用的是 [GPL][18] v3 的许可证,你可以在 [GitHub][19] 上查看它的源代码。 ### Domoticz -[Domoticz][20] 是一个有大量设备库支持的家庭自动化系统,在它的项目网站上有大量的文档,从气象站到远程控制的烟雾探测器,以及大量的第三方 [集成][21] 。它使用一个 HTML5 前端,可以从桌面浏览器或者大多数现代的智能手机上访问它,它是一个轻量级的应用,可以运行在像树莓派这样的低功耗设备上。 +[Domoticz][20] 是一个有大量设备库支持的家庭自动化系统,在它的项目网站上有大量的文档,从气象站到远程控制的烟雾探测器,以及大量的第三方 [集成软件][21] 。它使用一个 HTML5 前端,可以从桌面浏览器或者大多数现代的智能手机上访问它,它是一个轻量级的应用,可以运行在像树莓派这样的低功耗设备上。 Domoticz 是用 C++ 写的,使用 [GPLv3][22] 许可证。它的 [源代码][23] 在 GitHub 上。 ### Home Assistant -[Home Assistant][24] 是一个开源的家庭自动化平台,它可以轻松部署在任何能运行 Python 3 的机器上,从树莓派到网络附加存储(NAS),甚至可以使用 Docker 容器轻松地部署到其它系统上。它集成了大量的开源的和商业的产品,允许你去连接它们,比如,IFTTT、天气信息、或者你的 Amazon Echo 设备,去控制从锁到灯的各种硬件。 +[Home Assistant][24] 是一个开源的家庭自动化平台,它可以轻松部署在任何能运行 Python 3 的机器上,从树莓派到网络存储(NAS),甚至可以使用 Docker 容器轻松地部署到其它系统上。它集成了大量的开源和商业的产品,允许你去连接它们,比如,IFTTT、天气信息、或者你的 Amazon Echo 设备,去控制从锁到灯的各种硬件。 Home Assistant 以 [MIT 许可证][25] 发布,它的源代码可以从 [GitHub][26] 上下载。 @@ -41,26 +43,26 @@ MisterHouse 使用 [GPLv2][28] 许可证,你可以在 [GitHub][29] 上查看 ### OpenHAB -[OpenHAB][30](开放家庭自动化总线的简称)是在开源爱好者中大家熟知的家庭自动化工具,它拥有大量用户的社区以及支持和集成了大量的设备。它是用 Java 写的,OpenHAB 非常轻便,可以跨大多数主流操作系统使用,它甚至在树莓派上也运行的很好。支持成百上千的设备,OpenHAB 被设计为与设备无关的,这使开发者在系统中添加他们的设备或者插件很容易。OpenHAB 也支持通过 iOS 和 Android 应用来控制设备以及设计工具,因此,你可以为你的家庭系统创建你自己的 UI。 +[OpenHAB][30](开放家庭自动化总线的简称)是在开源爱好者中所熟知的家庭自动化工具,它拥有大量用户的社区以及支持和集成了大量的设备。它是用 Java 写的,OpenHAB 非常轻便,可以跨大多数主流操作系统使用,它甚至在树莓派上也运行的很好。支持成百上千的设备,OpenHAB 被设计为与设备无关的,这使开发者在系统中添加他们的设备或者插件很容易。OpenHAB 也支持通过 iOS 和 Android 应用来控制设备以及设计工具,因此,你可以为你的家庭系统创建你自己的 UI。 你可以在 GitHub 上找到 OpenHAB 的 [源代码][31],它使用 [Eclipse 公共许可证][32]。 ### OpenMotics -[OpenMotics][33] 是一个开源的硬件和软件家庭自动化系统。它的设计目标是为控制设备提供一个综合的系统,而不是从不同的供应商处将各种设备拼接在一起。不像其它的系统主要是为了方便的改装而设计的,OpenMotics 专注于硬件解决方案。更多资料请查阅来自 OpenMotics 的后端开发者 Frederick Ryckbosch的 [完整文章][34] 。 +[OpenMotics][33] 是一个开源的硬件和软件家庭自动化系统。它的设计目标是为控制设备提供一个综合的系统,而不是从不同的供应商处将各种设备拼接在一起。不像其它的系统主要是为了方便改装而设计的,OpenMotics 专注于硬件解决方案。更多资料请查阅来自 OpenMotics 的后端开发者 Frederick Ryckbosch的 [完整文章][34] 。 OpenMotics 使用 [GPLv2][35] 许可证,它的源代码可以从 [GitHub][36] 上下载。 -当然了,我们的选择不仅有这些。许多家庭自动化爱好者使用不同的解决方案,甚至是它们自己动手做。其它用户选择使用单独的智能家庭设备而无需集成它们到一个单一的综合系统中。 +当然了,我们的选择不仅有这些。许多家庭自动化爱好者使用不同的解决方案,甚至是他们自己动手做。其它用户选择使用单独的智能家庭设备而无需集成它们到一个单一的综合系统中。 如果上面的解决方案并不能满足你的需求,下面还有一些潜在的替代者可以去考虑: * [EventGhost][1] 是一个开源的([GPL v2][2])家庭影院自动化工具,它只能运行在 Microsoft Windows PC 上。它允许用户去控制多媒体电脑和连接的硬件,它通过触发宏指令的插件或者定制的 Python 脚本来使用。 * [ioBroker][3] 是一个基于 JavaScript 的物联网平台,它能够控制灯、锁、空调、多媒体、网络摄像头等等。它可以运行在任何可以运行 Node.js 的硬件上,包括 Windows、Linux、以及 macOS,它使用 [MIT 许可证][4]。 -* [Jeedom][5] 是一个由开源软件([GPL v2][6])构成的家庭自动化平台,它可以控制灯、锁、多媒体等等。它包含一个移动应用程序(Android 和 iOS),并且可以运行在 Linux PC 上;该公司也销售 hubs,它为配置家庭自动化提供一个现成的解决方案。 +* [Jeedom][5] 是一个由开源软件([GPL 
v2][6])构成的家庭自动化平台,它可以控制灯、锁、多媒体等等。它包含一个移动应用程序(Android 和 iOS),并且可以运行在 Linux PC 上;该公司也销售 hub,它为配置家庭自动化提供一个现成的解决方案。 * [LinuxMCE][7] 标称它是你的多媒体与电子设备之间的“数字粘合剂”。它运行在 Linux(包括树莓派)上,它基于 Pluto 开源 [许可证][8] 发布,它可以用于家庭安全、电话(VoIP 和语音信箱)、A/V 设备、家庭自动化、以及玩视频游戏。 * [OpenNetHome][9],和这一类中的其它解决方案一样,是一个控制灯、报警、应用程序等等的一个开源软件。它基于 Java 和 Apache Maven,可以运行在 Windows、macOS、以及 Linux —— 包括树莓派,它以 [GPLv3][10] 许可证发布。 -* [Smarthomatic][11] 是一个专注于硬件设备和软件的开源家庭自动化框架,而不仅是用户接口。它基于 [GPLv3][12] 许可证,它可用于控制灯、电器、以及空调、检测温度、提醒给植物浇水。 +* [Smarthomatic][11] 是一个专注于硬件设备和软件的开源家庭自动化框架,而不仅是用户界面。它基于 [GPLv3][12] 许可证,它可用于控制灯、电器、以及空调、检测温度、提醒给植物浇水。 现在该轮到你了:你已经准备好家庭自动化系统了吗?或者正在研究去设计一个。你对家庭自动化的新手有什么建议,你会推荐什么样的系统? @@ -70,7 +72,7 @@ via: https://opensource.com/life/17/12/home-automation-tools 作者:[Jason Baker][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 115a1b704b9deaa80a77536ff997ac982152a0c5 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 5 Mar 2018 22:57:30 +0800 Subject: [PATCH 018/343] PUB:20171214 6 open source home automation tools.md @qhwdw --- .../20171214 6 open source home automation tools.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171214 6 open source home automation tools.md (100%) diff --git a/translated/tech/20171214 6 open source home automation tools.md b/published/20171214 6 open source home automation tools.md similarity index 100% rename from translated/tech/20171214 6 open source home automation tools.md rename to published/20171214 6 open source home automation tools.md From 70f9cd171fdeadfb6e5a2df2c9effa7049209dfe Mon Sep 17 00:00:00 2001 From: Feng Lv Date: Mon, 5 Mar 2018 22:58:36 +0800 Subject: [PATCH 019/343] translated --- .../20180104 How does gdb call functions.md | 254 ------------------ .../20180104 How does gdb call functions.md | 254 ++++++++++++++++++ 2 files changed, 254 insertions(+), 254 deletions(-) delete mode 100644 sources/tech/20180104 How does gdb call functions.md create mode 100644 translated/tech/20180104 How does gdb call functions.md diff --git a/sources/tech/20180104 How does gdb call functions.md b/sources/tech/20180104 How does gdb call functions.md deleted file mode 100644 index c88fae999e..0000000000 --- a/sources/tech/20180104 How does gdb call functions.md +++ /dev/null @@ -1,254 +0,0 @@ -translating by ucasFL - -How does gdb call functions? -============================================================ - -(previous gdb posts: [how does gdb work? (2016)][4] and [three things you can do with gdb (2014)][5]) - -I discovered this week that you can call C functions from gdb! I thought this was cool because I’d previously thought of gdb as mostly a read-only debugging tool. - -I was really surprised by that (how does that WORK??). As I often do, I asked [on Twitter][6] how that even works, and I got a lot of really useful answers! My favorite answer was [Evan Klitzke’s example C code][7] showing a way to do it. Code that  _works_  is very exciting! - -I believe (through some stracing & experiments) that that example C code is different from how gdb actually calls functions, so I’ll talk about what I’ve figured out about what gdb does in this post and how I’ve figured it out. - -There is a lot I still don’t know about how gdb calls functions, and very likely some things in here are wrong. - -### What does it mean to call a C function from gdb? 
- -Before I get into how this works, let’s talk quickly about why I found it surprising / nonobvious. - -So, you have a running C program (the “target program”). You want to run a function from it. To do that, you need to basically: - -* pause the program (because it is already running code!) - -* find the address of the function you want to call (using the symbol table) - -* convince the program (the “target program”) to jump to that address - -* when the function returns, restore the instruction pointer and registers to what they were before - -Using the symbol table to figure out the address of the function you want to call is pretty straightforward – here’s some sketchy (but working!) Rust code that I’ve been using on Linux to do that. This code uses the [elf crate][8]. If I wanted to find the address of the `foo` function in PID 2345, I’d run `elf_symbol_value("/proc/2345/exe", "foo")`. - -``` -fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result> { - // open the ELF file - let file = elf::File::open_path(file_name).ok().ok_or("parse error")?; - // loop over all the sections & symbols until you find the right one! - let sections = &file.sections; - for s in sections { - for sym in file.get_symbols(&s).ok().ok_or("parse error")? { - if sym.name == symbol_name { - return Ok(sym.value); - } - } - } - None.ok_or("No symbol found")? -} - -``` - -This won’t totally work on its own, you also need to look at the memory maps of the file and add the symbol offset to the start of the place that file is mapped. But finding the memory maps isn’t so hard, they’re in `/proc/PID/maps`. - -Anyway, this is all to say that finding the address of the function to call seemed straightforward to me but that the rest of it (change the instruction pointer? restore the registers? what else?) didn’t seem so obvious! - -### You can’t just jump - -I kind of said this already but – you can’t just find the address of the function you want to run and then jump to that address. I tried that in gdb (`jump foo`) and the program segfaulted. Makes sense! - -### How you can call C functions from gdb - -First, let’s see that this is possible. I wrote a tiny C program that sleeps for 1000 seconds and called it `test.c`: - -``` -#include - -int foo() { - return 3; -} -int main() { - sleep(1000); -} - -``` - -Next, compile and run it: - -``` -$ gcc -o test test.c -$ ./test - -``` - -Finally, let’s attach to the `test` program with gdb: - -``` -$ sudo gdb -p $(pgrep -f test) -(gdb) p foo() -$1 = 3 -(gdb) quit - -``` - -So I ran `p foo()` and it ran the function! That’s fun. - -### Why is this useful? - -a few possible uses for this: - -* it lets you treat gdb a little bit like a C REPL, which is fun and I imagine could be useful for development - -* utility functions to display / navigate complex data structures quickly while debugging in gdb (thanks [@invalidop][1]) - -* [set an arbitrary process’s namespace while it’s running][2] (featuring a not-so-surprising appearance from my colleague [nelhage][3]!) - -* probably more that I don’t know about - -### How it works - -I got a variety of useful answers on Twitter when I asked how calling functions from gdb works! A lot of them were like “well you get the address of the function from the symbol table” but that is not the whole story!! - -One person pointed me to this nice 2 part series on how gdb works that they’d written: [Debugging with the natives, part 1][9] and [Debugging with the natives, part 2][10]. 
Part 1 explains approximately how calling functions works (or could work – figuring out what gdb **actually** does isn’t trivial, but I’ll try my best!). - -The steps outlined there are: - -1. Stop the process - -2. Create a new stack frame (far away from the actual stack) - -3. Save all the registers - -4. Set the registers to the arguments you want to call your function with - -5. Set the stack pointer to the new stack frame - -6. Put a trap instruction somewhere in memory - -7. Set the return address to that trap instruction - -8. Set the instruction pointer register to the address of the function you want to call - -9. Start the process again! - -I’m not going to go through how gdb does all of these (I don’t know!) but here are a few things I’ve learned about the various pieces this evening. - -**Create a stack frame** - -If you’re going to run a C function, most likely it needs a stack to store variables on! You definitely don’t want it to clobber your current stack. Concretely – before gdb calls your function (by setting the instruction pointer to it and letting it go), it needs to set the **stack pointer** to… something. - -There was some speculation on Twitter about how this works: - -> i think it constructs a new stack frame for the call right on top of the stack where you’re sitting! - -and: - -> Are you certain it does that? It could allocate a pseudo stack, then temporarily change sp value to that location. You could try, put a breakpoint there and look at the sp register address, see if it’s contiguous to your current program register? - -I did an experiment where (inside gdb) I ran:` - -``` -(gdb) p $rsp -$7 = (void *) 0x7ffea3d0bca8 -(gdb) break foo -Breakpoint 1 at 0x40052a -(gdb) p foo() -Breakpoint 1, 0x000000000040052a in foo () -(gdb) p $rsp -$8 = (void *) 0x7ffea3d0bc00 - -``` - -This seems in line with the “gdb constructs a new stack frame for the call right on top of the stack where you’re sitting” theory, since the stack pointer (`$rsp`) goes from being `...bca8` to `..bc00` – stack pointers grow downward, so a `bc00`stack pointer is **after** a `bca8` pointer. Interesting! - -So it seems like gdb just creates the new stack frames right where you are. That’s a bit surprising to me! - -**change the instruction pointer** - -Let’s see whether gdb changes the instruction pointer! - -``` -(gdb) p $rip -$1 = (void (*)()) 0x7fae7d29a2f0 <__nanosleep_nocancel+7> -(gdb) b foo -Breakpoint 1 at 0x40052a -(gdb) p foo() -Breakpoint 1, 0x000000000040052a in foo () -(gdb) p $rip -$3 = (void (*)()) 0x40052a - -``` - -It does! The instruction pointer changes from `0x7fae7d29a2f0` to `0x40052a` (the address of the `foo` function). - -I stared at the strace output and I still don’t understand **how** it changes, but that’s okay. - -**aside: how breakpoints are set!!** - -Above I wrote `break foo`. I straced gdb while running all of this and understood almost nothing but I found ONE THING that makes sense to me!! - -Here are some of the system calls that gdb uses to set a breakpoint. It’s really simple! It replaces one instruction with `cc` (which [https://defuse.ca/online-x86-assembler.htm][11] tells me means `int3` which means `send SIGTRAP`), and then once the program is interrupted, it puts the instruction back the way it was. - -I was putting a breakpoint on a function `foo` with the address `0x400528`. - -This `PTRACE_POKEDATA` is how gdb changes the code of running programs. 
- -``` -// change the 0x400528 instructions -25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003b8e589]) = 0 -25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003cce589) = 0 -// start the program running -25622 ptrace(PTRACE_CONT, 25618, 0x1, SIG_0) = 0 -// get a signal when it hits the breakpoint -25622 ptrace(PTRACE_GETSIGINFO, 25618, NULL, {si_signo=SIGTRAP, si_code=SI_KERNEL, si_value={int=-1447215360, ptr=0x7ffda9bd3f00}}) = 0 -// change the 0x400528 instructions back to what they were before -25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0 -25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0 - -``` - -**put a trap instruction somewhere** - -When gdb runs a function, it **also** puts trap instructions in a bunch of places! Here’s one of them (per strace). It’s basically replacing one instruction with `cc` (`int3`). - -``` -5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 -5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 -5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0 - -``` - -What’s `0x7f6fa7c0b260`? Well, I looked in the process’s memory maps, and it turns it’s somewhere in `/lib/x86_64-linux-gnu/libc-2.23.so`. That’s weird! Why is gdb putting trap instructions in libc? - -Well, let’s see what function that’s in. It turns out it’s `__libc_siglongjmp`. The other functions gdb is putting traps in are `__longjmp`, `____longjmp_chk`, `dl_main`, and `_dl_close_worker`. - -Why? I don’t know! Maybe for some reason when our function `foo()` returns, it’s calling `longjmp`, and that is how gdb gets control back? I’m not sure. - -### how gdb calls functions is complicated! - -I’m going to stop there (it’s 1am!), but now I know a little more! - -It seems like the answer to “how does gdb call a function?” is definitely not that simple. I found it interesting to try to figure a little bit of it out and hopefully you have too! - -I still have a lot of unanswered questions about how exactly gdb does all of these things, but that’s okay. I don’t really need to know the details of how this works and I’m happy to have a slightly improved understanding. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/ -[1]:https://twitter.com/invalidop/status/949161146526781440 -[2]:https://github.com/baloo/setns/blob/master/setns.c -[3]:https://github.com/nelhage -[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ -[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/ -[6]:https://twitter.com/b0rk/status/948060808243765248 -[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c -[8]:https://cole14.github.io/rust-elf -[9]:https://www.cl.cam.ac.uk/~srk31/blog/2016/02/25/#native-debugging-part-1 -[10]:https://www.cl.cam.ac.uk/~srk31/blog/2017/01/30/#native-debugging-part-2 -[11]:https://defuse.ca/online-x86-assembler.htm diff --git a/translated/tech/20180104 How does gdb call functions.md b/translated/tech/20180104 How does gdb call functions.md new file mode 100644 index 0000000000..575563ad3d --- /dev/null +++ b/translated/tech/20180104 How does gdb call functions.md @@ -0,0 +1,254 @@ +gdb 如何调用函数? 
+============================================================ + +(之前的 gdb 系列文章:[gdb 如何工作(2016)][4] 和[通过 gdb 你能够做的三件事(2014)][5]) + +在这个周,我发现,我可以从 gdb 上调用 C 函数。这看起来很酷,因为在过去我认为 gdb 最多只是一个只读调试工具。 + +我对 gdb 能够调用函数感到很吃惊。正如往常所做的那样,我在 [Twitter][6] 上询问这是如何工作的。我得到了大量的有用答案。我最喜欢的答案是 [Evan Klitzke 的示例 C 代码][7],它展示了 gdb 如何调用函数。代码能够运行,这很令人激动! + +我相信(通过一些跟踪和实验)那个示例 C 代码和 gdb 实际上如何调用函数不同。因此,在这篇文章中,我将会阐述 gdb 是如何调用函数的,以及我是如何知道的。 + +关于 gdb 如何调用函数,还有许多我不知道的事情,并且,在这儿我写的内容有可能是错误的。 + +### 从 gdb 中调用 C 函数意味着什么? + +在开始讲解这是如何工作之前,我先快速的谈论一下我是如何发现这件令人惊讶的事情的。 + +所以,你已经在运行一个 C 程序(目标程序)。你可以运行程序中的一个函数,只需要像下面这样做: + +* 暂停程序(因为它已经在运行中) + +* 找到你想调用的函数的地址(使用符号表) + +* 使程序(目标程序)跳转到那个地址 + +* 当函数返回时,恢复之前的指令指针和寄存器 + +通过符号表来找到想要调用的函数的地址非常容易。下面是一段非常简单但能够工作的代码,我在 Linux 上使用这段代码作为例子来讲解如何找到地址。这段代码使用 [elf crate][8]。如果我想找到 PID 为 2345 的进程中的 foo 函数的地址,那么我可以运行 `elf_symbol_value("/proc/2345/exe", "foo")`。 + +``` +fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result> { + // 打开 ELF 文件 + let file = elf::File::open_path(file_name).ok().ok_or("parse error")?; + // 在所有的段 & 符号中循环,直到找到正确的那个 + let sections = &file.sections; + for s in sections { + for sym in file.get_symbols(&s).ok().ok_or("parse error")? { + if sym.name == symbol_name { + return Ok(sym.value); + } + } + } + None.ok_or("No symbol found")? +} + +``` + +这并不能够真的发挥作用,你还需要找到文件的内存映射,并将符号偏移量加到文件映射的起始位置。找到内存映射并不困难,它位于 `/proc/PID/maps` 中。 + +总之,找到想要调用的函数地址对我来说很直接,但是其余部分(改变指令指针,恢复寄存器等)看起来就不这么明显了。 + +### 你不能仅仅进行跳转 + +我已经说过,你不能够仅仅找到你想要运行的那个函数地址,然后跳转到那儿。我在 gdb 中尝试过那样做(`jump foo`),然后程序出现了段错误。毫无意义。 + +### 如何从 gdb 中调用 C 函数 + +首先,这是可能的。我写了一个非常简洁的 C 程序,它所做的事只有 sleep 1000 秒,把这个文件命名为 `test.c` : + +``` +#include + +int foo() { + return 3; +} +int main() { + sleep(1000); +} + +``` + +接下来,编译并运行它: + +``` +$ gcc -o test test.c +$ ./test + +``` + +最后,我们使用 gdb 来跟踪 `test` 这一程序: + +``` +$ sudo gdb -p $(pgrep -f test) +(gdb) p foo() +$1 = 3 +(gdb) quit + +``` + +我运行 `p foo()` 然后它运行了这个函数!这非常有趣。 + +### 为什么这是有用的? + +下面是一些可能的用途: + +* 它使得你可以把 gdb 当成一个 C 应答式程序,这很有趣,我想对开发也会有用 + +* 在 gdb 中进行调试的时候展示/浏览复杂数据结构的功能函数(感谢 [@invalidop][1]) + +* [在进程运行时设置一个任意的名字空间][2](我的同事 [nelhage][3] 对此非常惊讶) + +* 可能还有许多我所不知道的用途 + +### 它是如何工作的 + +当我在 Twitter 上询问从 gdb 中调用函数是如何工作的时,我得到了大量有用的回答。许多答案是”你从符号表中得到了函数的地址“,但这并不是完整的答案。 + +有个人告诉了我两篇关于 gdb 如何工作的系列文章:[和本地人一起调试-第一部分][9],[和本地人一起调试-第二部分][10]。第一部分讲述了 gdb 是如何调用函数的(指出了 gdb 实际上完成这件事并不简单,但是我将会尽力)。 + +步骤列举如下: + +1. 停止进程 + +2. 创建一个新的栈框(远离真实栈) + +3. 保存所有寄存器 + +4. 设置你想要调用的函数的寄存器参数 + +5. 设置栈指针指向新的栈框 + +6. 在内存中某个位置放置一条陷阱指令 + +7. 为陷阱指令设置返回地址 + +8. 设置指令寄存器的值为你想要调用的函数地址 + +9. 再次运行进程! + +(LCTT 译注:如果将这个调用的函数看成一个单独的线程,gdb 实际上所做的事情就是一个简单的线程上下文切换) + +我不知道 gdb 是如何完成这些所有事情的,但是今天晚上,我学到了这些所有事情中的其中几件。 + +**创建一个栈框** + +如果你想要运行一个 C 函数,那么你需要一个栈来存储变量。你肯定不想继续使用当前的栈。准确来说,在 gdb 调用函数之前(通过设置函数指针并跳转),它需要设置栈指针到某个地方。 + +这儿是 Twitter 上一些关于它如何工作的猜测: + +> 我认为它在当前栈的栈顶上构造了一个新的栈框来进行调用! + +以及 + +> 你确定是这样吗?它应该是分配一个伪栈,然后临时将 sp (栈指针寄存器)的值改为那个栈的地址。你可以试一试,你可以在那儿设置一个断点,然后看一看栈指针寄存器的值,它是否和当前程序寄存器的值相近? + +我通过 gdb 做了一个试验: + +``` +(gdb) p $rsp +$7 = (void *) 0x7ffea3d0bca8 +(gdb) break foo +Breakpoint 1 at 0x40052a +(gdb) p foo() +Breakpoint 1, 0x000000000040052a in foo () +(gdb) p $rsp +$8 = (void *) 0x7ffea3d0bc00 + +``` + +这看起来符合”gdb 在当前栈的栈顶构造了一个新的栈框“这一理论。因为栈指针(`$rsp`)从 `0x7ffea3d0bca8` 变成了 `0x7ffea3d0bc00` - 栈指针从高地址往低地址长。所以 `0x7ffea3d0bca8` 在 `0x7ffea3d0bc00` 的后面。真是有趣! + +所以,看起来 gdb 只是在当前栈所在位置创建了一个新的栈框。这令我很惊讶! + +**改变指令指针** + +让我们来看一看 gdb 是如何改变指令指针的! 
+ +``` +(gdb) p $rip +$1 = (void (*)()) 0x7fae7d29a2f0 <__nanosleep_nocancel+7> +(gdb) b foo +Breakpoint 1 at 0x40052a +(gdb) p foo() +Breakpoint 1, 0x000000000040052a in foo () +(gdb) p $rip +$3 = (void (*)()) 0x40052a + +``` + +的确是!指令指针从 `0x7fae7d29a2f0` 变为了 `0x40052a`(`foo` 函数的地址)。 + +我盯着输出看了很久,但仍然不理解它是如何改变指令指针的,但这并不影响什么。 + +**如何设置断点** + +上面我写到 `break foo` 。我跟踪 gdb 运行程序的过程,但是没有任何发现。 + +下面是 gdb 用来设置断点的一些系统调用。它们非常简单。它把一条指令用 `cc` 代替了(这告诉我们 `int3` 意味着 `send SIGTRAP` [https://defuse.ca/online-x86-assembler.html][11]),并且一旦程序被打断了,它就把指令恢复为原先的样子。 + +我在函数 `foo` 那儿设置了一个断点,地址为 `0x400528` 。 + +`PTRACE_POKEDATA` 展示了 gdb 如何改变正在运行的程序。 + +``` +// 改变 0x400528 处的指令 +25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003b8e589]) = 0 +25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003cce589) = 0 +// 开始运行程序 +25622 ptrace(PTRACE_CONT, 25618, 0x1, SIG_0) = 0 +// 当到达断点时获取一个信号 +25622 ptrace(PTRACE_GETSIGINFO, 25618, NULL, {si_signo=SIGTRAP, si_code=SI_KERNEL, si_value={int=-1447215360, ptr=0x7ffda9bd3f00}}) = 0 +// 将 0x400528 处的指令更改为之前的样子 +25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0 +25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0 + +``` + +**在某处放置一条陷阱指令** + +当 gdb 运行一个函数的时候,它也会在某个地方放置一条陷阱指令。这是其中一条。它基本上是用 `cc` 来替换一条指令(`int3`)。 + +``` +5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 +5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 +5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0 + +``` + +`0x7f6fa7c0b260` 是什么?我查看了进程的内存映射,发现它位于 `/lib/x86_64-linux-gnu/libc-2.23.so` 中的某个位置。这很奇怪,为什么 gdb 将陷阱指令放在 libc 中? + +让我们看一看里面的函数是什么,它是 `__libc_siglongjmp` 。其他 gdb 放置陷阱指令的地方的函数是 `__longjmp` 、`___longjmp_chk` 、`dl_main` 和 `_dl_close_worker` 。 + +为什么?我不知道!也许出于某种原因,当函数 `foo()` 返回时,它调用 `longjmp` ,从而 gdb 能够进行返回控制。我不确定。 + +### gdb 如何调用函数是很复杂的! + +我将要在这儿停止了(现在已经凌晨 1 点),但是我知道的多一些了! 
+ +看起来”gdb 如何调用函数“这一问题的答案并不简单。我发现这很有趣并且努力找出其中一些答案,希望你也能够找到。 + +我依旧有很多未回答的问题,关于 gdb 是如何完成这些所有事的,但是可以了。我不需要真的知道关于 gdb 是如何工作的所有细节,但是我很开心,我有了一些进一步的理解。 + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ + +作者:[Julia Evans][a] +译者:[ucasFL](https://github.com/ucasFL) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:https://twitter.com/invalidop/status/949161146526781440 +[2]:https://github.com/baloo/setns/blob/master/setns.c +[3]:https://github.com/nelhage +[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ +[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/ +[6]:https://twitter.com/b0rk/status/948060808243765248 +[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c +[8]:https://cole14.github.io/rust-elf +[9]:https://www.cl.cam.ac.uk/~srk31/blog/2016/02/25/#native-debugging-part-1 +[10]:https://www.cl.cam.ac.uk/~srk31/blog/2017/01/30/#native-debugging-part-2 +[11]:https://defuse.ca/online-x86-assembler.htm From c43a0c0b11bb374065ccd80322a26b2dc4c9814c Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 6 Mar 2018 08:49:26 +0800 Subject: [PATCH 020/343] translated --- ...nd Share Terminal Session with Showterm.md | 76 ------------------- ...nd Share Terminal Session with Showterm.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20171116 Record and Share Terminal Session with Showterm.md create mode 100644 translated/tech/20171116 Record and Share Terminal Session with Showterm.md diff --git a/sources/tech/20171116 Record and Share Terminal Session with Showterm.md b/sources/tech/20171116 Record and Share Terminal Session with Showterm.md deleted file mode 100644 index 8d3ce451b5..0000000000 --- a/sources/tech/20171116 Record and Share Terminal Session with Showterm.md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -Record and Share Terminal Session with Showterm -====== - -![](https://www.maketecheasier.com/assets/uploads/2017/11/record-terminal-session.jpg) - -You can easily record your terminal sessions with virtually all screen recording programs. However, you are very likely to end up with an oversized video file. There are several terminal recorders available in Linux, each with its own strengths and weakness. Showterm is a tool that makes it pretty easy to record terminal sessions, upload them, share, and embed them in any web page. On the plus side, you don't end up with any huge file to deal with. - -Showterm is open source, and the project can be found on this [GitHub page][1]. - - **Related** : [2 Simple Applications That Record Your Terminal Session as Video [Linux]][2] - -### Installing Showterm for Linux - -Showterm requires that you have Ruby installed on your computer. Here's how to go about installing the program. -``` -gem install showterm -``` - -If you don't have Ruby installed on your Linux system: -``` -sudo curl showterm.io/showterm > ~/bin/showterm -sudo chmod +x ~/bin/showterm -``` - -If you just want to run the application without installation: -``` -bash <(curl record.showterm.io) -``` - -You can type `showterm --help` for the help screen. If a help page doesn't appear, showterm is probably not installed. Now that you have Showterm installed (or are running the standalone version), let us dive into using the tool to record. 
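For a quick feel of the workflow, a recording session looks roughly like the sketch below. The messages and the URL are placeholders rather than exact program output - the point is simply that `showterm` records everything you type until you exit, then uploads the session and prints a share link.

```
$ showterm                      # start a new, recorded shell session
$ uname -a                      # ...run whatever you want to capture...
$ echo "this ends up in the recording"
$ exit                          # or press Ctrl + D to stop recording
http://showterm.io/<recording-id>   # placeholder; the real link is printed after upload
```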
- - **Related** : [How to Record Terminal Session in Ubuntu][3] - -### Recording Terminal Session - -![showterm terminal][4] - -Recording a terminal session is pretty simple. From the command line run `showterm`. This should start the terminal recording in the background. All commands entered in the command line from hereon are recorded by Showterm. Once you are done recording, press Ctrl + D or type `exit` in the command line to stop your recording. - -Showterm should upload your video and output a link to the video that looks like http://showterm.io/. It is rather unfortunate that terminal sessions are uploaded right away without any prompting. Don't panic! You can delete any uploaded recording by entering `showterm --delete `. Before uploading your recordings, you'll have the chance to change the timing by adding the `-e` option to the showterm command. If by any chance a recording fails to upload, you can use `showterm --retry - - - - - - -``` - -### 7.2 - CSS - -Add a new file, `/public/styles.css`, with some custom UI styling. - -``` -body { font-family: 'EB Garamond', serif; } - -.mui-textfield > input, .mui-btn, .mui--text-subhead, .mui-panel > .mui--text-headline { - font-family: 'Open Sans', sans-serif; -} - -.all-caps { text-transform: uppercase; } -.app-container { padding: 16px; } -.search-results em { font-weight: bold; } -.book-modal > button { width: 100%; } -.search-results .mui-divider { margin: 14px 0; } - -.search-results { - display: flex; - flex-direction: row; - flex-wrap: wrap; - justify-content: space-around; -} - -.search-results > div { - flex-basis: 45%; - box-sizing: border-box; - cursor: pointer; -} - -@media (max-width: 600px) { - .search-results > div { flex-basis: 100%; } -} - -.paragraphs-container { - max-width: 800px; - margin: 0 auto; - margin-bottom: 48px; -} - -.paragraphs-container .mui--text-body1, .paragraphs-container .mui--text-body2 { - font-size: 1.8rem; - line-height: 35px; -} - -.book-modal { - width: 100%; - height: 100%; - padding: 40px 10%; - box-sizing: border-box; - margin: 0 auto; - background-color: white; - overflow-y: scroll; - position: fixed; - top: 0; - left: 0; -} - -.pagination-panel { - display: flex; - justify-content: space-between; -} - -.title-row { - display: flex; - justify-content: space-between; - align-items: flex-end; -} - -@media (max-width: 600px) { - .title-row{ - flex-direction: column; - text-align: center; - align-items: center - } -} - -.locations-label { - text-align: center; - margin: 8px; -} - -.modal-footer { - position: fixed; - bottom: 0; - left: 0; - width: 100%; - display: flex; - justify-content: space-around; - background: white; -} - -``` - -### 7.3 - Try it out - -Open `localhost:8080` in your web browser, you should see a simple search interface with paginated results. Try typing in the top search bar to find matches from different terms. - -![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) - -> You  _do not_  have to re-run the `docker-compose up` command for the changes to take effect. The local `public` directory is mounted to our Nginx fileserver container, so frontend changes on the local system will be automatically reflected in the containerized app. - -If you try clicking on any result, nothing happens - we still have one more feature to add to the app. - -### 8 - PAGE PREVIEWS - -It would be nice to be able to click on each search result and view it in the context of the book that it's from. 
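Under the hood, the lookup we need for these page previews boils down to a filtered range query against the `location` field. As a rough sketch (the title and location values below are only examples), the raw search body we will be building in the next step looks like this:

```
{
  "size": 10,
  "sort": { "location": "asc" },
  "query": {
    "bool": {
      "filter": [
        { "term":  { "title": "Heart of Darkness" } },
        { "range": { "location": { "gte": 100, "lte": 110 } } }
      ]
    }
  }
}
```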
- -### 8.0 - Add Elasticsearch Query - -First, we'll need to define a simple query to get a range of paragraphs from a given book. - -Add the following function to the `module.exports` block in `server/search.js`. - -``` -/** Get the specified range of paragraphs from a book */ -getParagraphs (bookTitle, startLocation, endLocation) { - const filter = [ - { term: { title: bookTitle } }, - { range: { location: { gte: startLocation, lte: endLocation } } } - ] - - const body = { - size: endLocation - startLocation, - sort: { location: 'asc' }, - query: { bool: { filter } } - } - - return client.search({ index, type, body }) -} - -``` - -This new function will return an ordered array of paragraphs between the start and end locations of a given book. - -### 8.1 - Add API Endpoint - -Now, let's link this function to an API endpoint. - -Add the following to `server/app.js`, below the original `/search` endpoint. - -``` -/** - * GET /paragraphs - * Get a range of paragraphs from the specified book - * Query Params - - * bookTitle: string under 256 characters - * start: positive integer - * end: positive integer greater than start - */ -router.get('/paragraphs', - validate({ - query: { - bookTitle: joi.string().max(256).required(), - start: joi.number().integer().min(0).default(0), - end: joi.number().integer().greater(joi.ref('start')).default(10) - } - }), - async (ctx, next) => { - const { bookTitle, start, end } = ctx.request.query - ctx.body = await search.getParagraphs(bookTitle, start, end) - } -) - -``` - -### 8.2 - Add UI functionality - -Now that our new endpoint is in place, let's add some frontend functionality to query and display full pages from the book. - -Add the following functions to the `methods` block of `/public/app.js`. - -``` - /** Call the API to get current page of paragraphs */ - async getParagraphs (bookTitle, offset) { - try { - this.bookOffset = offset - const start = this.bookOffset - const end = this.bookOffset + 10 - const response = await axios.get(`${this.baseUrl}/paragraphs`, { params: { bookTitle, start, end } }) - return response.data.hits.hits - } catch (err) { - console.error(err) - } - }, - /** Get next page (next 10 paragraphs) of selected book */ - async nextBookPage () { - this.$refs.bookModal.scrollTop = 0 - this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset + 10) - }, - /** Get previous page (previous 10 paragraphs) of selected book */ - async prevBookPage () { - this.$refs.bookModal.scrollTop = 0 - this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset - 10) - }, - /** Display paragraphs from selected book in modal window */ - async showBookModal (searchHit) { - try { - document.body.style.overflow = 'hidden' - this.selectedParagraph = searchHit - this.paragraphs = await this.getParagraphs(searchHit._source.title, searchHit._source.location - 5) - } catch (err) { - console.error(err) - } - }, - /** Close the book detail modal */ - closeBookModal () { - document.body.style.overflow = 'auto' - this.selectedParagraph = null - } - -``` - -These five functions provide the logic for downloading and paginating through pages (ten paragraphs each) in a book. - -Now we just need to add a UI to display the book pages. Add this markup below the `` comment in `/public/index.html`. - -``` - -
-
- -
-
{{ selectedParagraph._source.title }}
-
{{ selectedParagraph._source.author }}
-
-
-
-
Locations {{ bookOffset - 5 }} to {{ bookOffset + 5 }}
-
-
- - -
-
- {{ paragraph._source.text }} -
-
- {{ paragraph._source.text }} -
-
-
-
- - - -
- -``` - -Restart the app server (`docker-compose up -d --build`) again and open up `localhost:8080`. When you click on a search result, you are now able to view the surrounding paragraphs. You can now even read the rest of the book to completion if you're entertained by what you find. - -![preview webapp book page](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_5_0.png) - -Congrats, you've completed the tutorial application! - -Feel free to compare your local result against the completed sample hosted here - [https://search.patricktriest.com/][37] - -### 9 - DISADVANTAGES OF ELASTICSEARCH - -### 9.0 - Resource Hog - -Elasticsearch is computationally demanding. The [official recommendation][38] is to run ES on a machine with 64 GB of RAM, and they strongly discourage running it on anything with under 8 GB of RAM. Elasticsearch is an  _in-memory_  datastore, which allows it to return results extremely quickly, but also results in a very significant system memory footprint. In production, [it is strongly recommended to run multiple Elasticsearch nodes in a cluster][39] to allow for high server availability, automatic sharding, and data redundancy in case of a node failure. - -I've got our tutorial application running on a $15/month GCP compute instance (at [search.patricktriest.com][40]) with 1.7 GB of RAM, and it  _just barely_  is able to run the Elasticsearch node; sometimes the entire machine freezes up during the initial data-loading step. Elasticsearch is, in my experience, much more of a resource hog than more traditional databases such as PostgreSQL and MongoDB, and can be significantly more expensive to host as a result. - -### 9.1 - Syncing with Databases - -In most applications, storing all of the data in Elasticsearch is not an ideal option. It is possible to use ES as the primary transactional database for an app, but this is generally not recommended due to the lack of ACID compliance in Elasticsearch, which can lead to lost write operations when ingesting data at scale. In many cases, ES serves a more specialized role, such as powering the text searching features of the app. This specialized use requires that some of the data from the primary database is replicated to the Elasticsearch instance. - -For instance, let's imagine that we're storing our users in a PostgreSQL table, but using Elasticsearch to power our user-search functionality. If a user, "Albert", decides to change his name to "Al", we'll need this change to be reflected in both our primary PostgreSQL database and in our auxiliary Elasticsearch cluster. - -This can be a tricky integration to get right, and the best answer will depend on your existing stack. There are a multitude of open-source options available, from [a process to watch a MongoDB operation log][41] and automatically sync detected changes to ES, to a [PostgresSQL plugin][42] to create a custom PSQL-based index that communicates automatically with Elasticsearch. - -If none of the available pre-built options work, you could always just add some hooks into your server code to update the Elasticsearch index manually based on database changes. I would consider this final option to be a last resort, since keeping ES in sync using custom business logic can be complex, and is likely to introduce numerous bugs to the application. 
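To make that last-resort option (and the "Albert" example above) a bit more concrete, a hand-rolled hook might look something like the sketch below. Everything in it is hypothetical - `db` stands in for a SQL client, `esClient` for an Elasticsearch client instance, and the index and field names are invented for illustration:

```
// Hypothetical sync hook: update the primary database, then mirror the change to ES.
async function renameUser (userId, newName) {
  // 1. The relational database remains the source of truth
  await db.query('UPDATE users SET name = $1 WHERE id = $2', [newName, userId])

  // 2. Mirror the same change into the search index
  await esClient.update({
    index: 'users',
    type: 'user',
    id: String(userId),
    body: { doc: { name: newName } }
  })

  // If step 2 fails, the database and Elasticsearch silently drift apart,
  // which is exactly why log-tailing or queue-based replication is usually safer.
}
```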
-
-The need to sync Elasticsearch with a primary database is more of an architectural complexity than it is a specific weakness of ES, but it's certainly worth keeping in mind when considering the tradeoffs of adding a dedicated search engine to your app.
-
-### CONCLUSION
-
-Full-text search is one of the most important features in many modern applications - and is one of the most difficult to implement well. Elasticsearch is a fantastic option for adding fast and customizable text search to your application, but there are alternatives. [Apache Solr][43] is a similar open source search platform that is built on Apache Lucene - the same library at the core of Elasticsearch. [Algolia][44] is a search-as-a-service web platform which is growing quickly in popularity and is likely to be easier to get started with for beginners (but as a tradeoff is less customizable and can get quite expensive).
-
-"Search-bar" style features are far from the only use-case for Elasticsearch. ES is also a very common tool for log storage and analysis, commonly used in an ELK (Elasticsearch, Logstash, Kibana) stack configuration. The flexible full-text search allowed by Elasticsearch can also be very useful for a wide variety of data science tasks - such as correcting/standardizing the spellings of entities within a dataset or searching a large text dataset for similar phrases.
-
-Here are some ideas for your own projects.
-
-* Add more of your favorite books to our tutorial app and create your own private library search engine.
-
-* Create an academic plagiarism detection engine by indexing papers from [Google Scholar][2].
-
-* Build a spell checking application by indexing every word in the dictionary to Elasticsearch.
-
-* Build your own Google-competitor internet search engine by loading the [Common Crawl Corpus][3] into Elasticsearch (caution - with over 5 billion pages, this can be a very expensive dataset to play with).
-
-* Use Elasticsearch for journalism: search for specific names and terms in recent large-scale document leaks such as the [Panama Papers][4] and [Paradise Papers][5].
-
-The source code for this tutorial application is 100% open-source and can be found at the GitHub repository here - [https://github.com/triestpa/guttenberg-search][45]
-
-I hope you enjoyed the tutorial! Please feel free to post any thoughts, questions, or criticisms in the comments below.
-
-
--------------------------------------------------------------------------------
-
-作者简介:
-
-Full-stack engineer, data enthusiast, insatiable learner, obsessive builder. You can find me wandering on a mountain trail, pretending not to be lost.
-
- -------------- - - -via: https://blog.patricktriest.com/text-search-docker-elasticsearch/ - -作者:[Patrick Triest][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.patricktriest.com/author/patrick/ -[1]:https://blog.patricktriest.com/you-should-learn-regex/ -[2]:https://scholar.google.com/ -[3]:https://aws.amazon.com/public-datasets/common-crawl/ -[4]:https://en.wikipedia.org/wiki/Panama_Papers -[5]:https://en.wikipedia.org/wiki/Paradise_Papers -[6]:https://search.patricktriest.com/ -[7]:https://github.com/triestpa/guttenberg-search -[8]:https://www.postgresql.org/ -[9]:https://www.mongodb.com/ -[10]:https://www.elastic.co/ -[11]:https://www.docker.com/ -[12]:https://www.uber.com/ -[13]:https://www.spotify.com/us/ -[14]:https://www.adp.com/ -[15]:https://www.paypal.com/us/home -[16]:https://nodejs.org/en/ -[17]:http://koajs.com/ -[18]:https://vuejs.org/ -[19]:https://www.elastic.co/ -[20]:https://lucene.apache.org/core/ -[21]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/getting-started.html -[22]:https://en.wikipedia.org/wiki/B-tree -[23]:https://www.docker.com/ -[24]:https://www.docker.com/ -[25]:https://docs.docker.com/compose/ -[26]:https://docs.docker.com/engine/installation/ -[27]:https://docs.docker.com/compose/install/ -[28]:https://www.gutenberg.org/ -[29]:https://cdn.patricktriest.com/data/books.zip -[30]:https://www.gnu.org/software/wget/ -[31]:https://theunarchiver.com/command-line -[32]:https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html -[33]:http://koajs.com/ -[34]:https://github.com/hapijs/joi -[35]:https://github.com/triestpa/koa-joi-validate -[36]:https://vuejs.org/v2/guide/ -[37]:https://search.patricktriest.com/ -[38]:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html -[39]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/distributed-cluster.html -[40]:https://search.patricktriest.com/ -[41]:https://github.com/mongodb-labs/mongo-connector -[42]:https://github.com/zombodb/zombodb -[43]:https://lucene.apache.org/solr/ -[44]:https://www.algolia.com/ -[45]:https://github.com/triestpa/guttenberg-search -[46]:https://blog.patricktriest.com/tag/guides/ -[47]:https://blog.patricktriest.com/tag/javascript/ -[48]:https://blog.patricktriest.com/tag/nodejs/ -[49]:https://blog.patricktriest.com/tag/web-development/ -[50]:https://blog.patricktriest.com/tag/devops/ diff --git a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md new file mode 100644 index 0000000000..3210c7d284 --- /dev/null +++ b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md @@ -0,0 +1,1381 @@ +使用 DOCKER 和 ELASTICSEARCH 构建一个全文搜索应用程序 +============================================================ + + _如何在超过 500 万篇文章的 Wikipedia 上找到与你研究相关的文章?_ + + _如何在超过 20 亿用户的 Facebook 中找到你的朋友(并且还拼错了名字)?_ + + _谷歌如何在整个因特网上搜索你的模糊的、充满拼写错误的查询?_ + +在本教程中,我们将带你探索如何配置我们自己的全文探索应用程序(与上述问题中的系统相比,它的复杂度要小很多)。我们的示例应用程序将提供一个 UI 和 API 去从 100 部经典文学(比如,_Peter Pan_ ,  _Frankenstein_ , 和  _Treasure Island_ )中搜索完整的文本。 + +你可以在这里([https://search.patricktriest.com][6])预览教程中应用程序的完整版本。 + +![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) + +这个应用程序的源代码是 100% 公开的,可以在 GitHub 仓库上找到它们 —— 
[https://github.com/triestpa/guttenberg-search][7] + +在应用程序中添加一个快速灵活的全文搜索可能是个挑战。大多数的主流数据库,比如,[PostgreSQL][8] 和 [MongoDB][9],在它们的查询和索引结构中都提供一个有限的、基础的、文本搜索的功能。为实现高质量的全文搜索,通常的最佳选择是单独数据存储。[Elasticsearch][10] 是一个开源数据存储的领导者,它专门为执行灵活而快速的全文搜索进行了优化。 + +我们将使用 [Docker][11] 去配置我们自己的项目环境和依赖。Docker 是一个容器化引擎,它被 [Uber][12]、[Spotify][13]、[ADP][14]、以及 [Paypal][15] 使用。构建容器化应用的一个主要优势是,项目的设置在 Windows、macOS、以及 Linux 上都是相同的 —— 这使我写这个教程快速又简单。如果你还没有使用过 Docker,不用担心,我们接下来将经历完整的项目配置。 + +我也会使用 [Node.js][16] (使用 [Koa][17] 框架)、和 [Vue.js][18],用它们分别去构建我们自己的搜索 API 和前端 Web 应用程序。 + +### 1 - ELASTICSEARCH 是什么? + +全文搜索在现代应用程序中是一个有大量需求的特性。搜索也可能是最难的一项特性 —— 许多流行的网站的搜索功能都不合格,要么返回结果太慢,要么找不到精确的结果。通常,这种情况是被底层的数据库所局限:大多数标准的关系型数据库在基本的 `CONTAINS` 或 `LIKE` SQL 查询上有局限性,它仅提供大多数基本的字符串匹配功能。 + +我们的搜索应用程序将具备: + +1. **快速** - 搜索结果将快速返回,为用户提供一个良好的体验。 + +2. **灵活** - 我们希望能够去修改搜索如何执行,这是为了便于在不同的数据库和用户场景下进行优化。 + +3. **容错** - 如果搜索内容有拼写错误,我们将仍然会返回相关的结果,而这个结果可能正是用户希望去搜索的结果。 + +4. **全文** - 我们不想限制我们的搜索只能与指定的关键字或者标签相匹配 —— 我们希望它可以搜索在我们的数据存储中的任何东西(包括大的文本域)。 + +![Elastic Search Logo](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/Elasticsearch-Logo.png) + +为了构建一个功能强大的搜索功能,通常最理想的方法是使用一个为全文搜索任务优化过的用户数据存储。在这里我们使用 [Elasticsearch][19],Elasticsearch 是一个开源的内存中的数据存储,它是用 Java 写的,最初是在 [Apache Lucene][20] 库上构建的。 + +这里有一些来自 [Elastic 官方网站][21] 上的 Elasticsearch 真实使用案例。 + +* Wikipedia 使用 Elasticsearch 去提供带高亮搜索片断的全文搜索功能,并且提供按类型搜索和 “did-you-mean” 建议。 + +* Guardian 使用 Elasticsearch 把社交网络数据和访客日志相结合,为编辑去提供大家对新文章的实时的反馈。 + +* Stack Overflow 将全文搜索和地理查询相结合,并使用 “类似” 的方法去找到相关的查询和回答。 + +* GitHub 使用 Elasticsearch 对 1300 亿行代码进行查询。 + +### 与 “普通的” 数据库相比,Elasticsearch 有什么不一样的地方? + +Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它使用 _反转索引_ 。 + +“索引” 是数据库中的一种数据结构,它能够以超快的速度进行数据查询和检索操作。数据库通过存储与表中行相关联的字段来生成索引。在一种可搜索的数据结构(一般是 [B树][22])中排序索引,在优化过的查询中,数据库能够达到接近线速的时间(比如,“使用 ID=5 查找行)。 + +![Relational Index](https://cdn.patricktriest.com/blog/images/posts/elastic-library/db_index.png) + +我们可以将数据库索引想像成一个图书馆中老式的卡片式目录 —— 只要你知道书的作者和书名,它就会告诉你书的准确位置。为加速特定字段上的查询速度,数据库表一般有多个索引(比如,在 `name` 列上的索引可以加速指定名字的查询)。 + +反转索引本质上是不一样的。每行(或文档)的内容是分开的,并且每个独立的条目(在本案例中是单词)反向指向到包含它的任何文档上。 + +![Inverted Index](https://cdn.patricktriest.com/blog/images/posts/elastic-library/invertedIndex.jpg) + +这种反转索引数据结构可以使我们非常快地查询到,所有出现 ”football" 的文档。通过使用大量优化过的内存中的反转索引,Elasticsearch 可以让我们在存储的数据上,执行一些非常强大的和自定义的全文搜索。 + +### 2 - 项目设置 + +### 2.0 - Docker + +我们在这个项目上使用 [Docker][23] 管理环境和依赖。Docker 是个容器引擎,它允许应用程序运行在一个独立的环境中,不会受到来自主机操作系统和本地开发环境的影响。现在,许多公司将它们的大规模 Web 应用程序主要运行在容器架构上。这样将提升灵活性和容器化应用程序组件的可组构性。 + +![Docker Logo](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/docker.png) + +对我们来说,使用 Docker 的优势是,它对本教程非常友好,它的本地环境设置量最小,并且跨 Windows、macOS、和 Linux 系统的一致性很好。我们只需要在 Docker 配置文件中定义这些依赖关系,而不是按安装说明分别去安装 Node.js、Elasticsearch、和 Nginx,然后,就可以使用这个配置文件在任何其它地方运行我们的应用程序。而且,因为每个应用程序组件都运行在它自己的独立容器中,它们受本地机器上的其它 “垃圾” 干扰的可能性非常小,因此,在调试问题时,像 "But it works on my machine!" 这类的问题将非常少。 + +### 2.1 - 安装 Docker & Docker-Compose + +这个项目只依赖 [Docker][24] 和 [docker-compose][25],docker-compose 是 Docker 官方支持的一个工具,它用来将定义的多个容器配置 _组装_  成单一的应用程序栈。 + +安装 Docker - [https://docs.docker.com/engine/installation/][26] +安装 Docker Compose - [https://docs.docker.com/compose/install/][27] + +### 2.2 - 设置项目主目录 + +为项目创建一个主目录(名为 `guttenberg_search`)。我们的项目将工作在主目录的以下两个子目录中。 + +* `/public` - 保存前端 Vue.js Web 应用程序。 + +* `/server` - 服务器端 Node.js 源代码。 + +### 2.3 - 添加 Docker-Compose 配置 + +接下来,我们将创建一个 `docker-compose.yml` 文件来定义我们的应用程序栈中的每个容器。 + +1. `gs-api` - 后端应用程序逻辑使用的 Node.js 容器 + +2. `gs-frontend` - 前端 Web 应用程序使用的 Ngnix 容器。 + +3. 
`gs-search` - 保存和搜索数据的 Elasticsearch 容器。 + +``` +version: '3' + +services: + api: # Node.js App + container_name: gs-api + build: . + ports: + - "3000:3000" # Expose API port + - "9229:9229" # Expose Node process debug port (disable in production) + environment: # Set ENV vars + - NODE_ENV=local + - ES_HOST=elasticsearch + - PORT=3000 + volumes: # Attach local book data directory + - ./books:/usr/src/app/books + + frontend: # Nginx Server For Frontend App + container_name: gs-frontend + image: nginx + volumes: # Serve local "public" dir + - ./public:/usr/share/nginx/html + ports: + - "8080:80" # Forward site to localhost:8080 + + elasticsearch: # Elasticsearch Instance + container_name: gs-search + image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1 + volumes: # Persist ES data in seperate "esdata" volume + - esdata:/usr/share/elasticsearch/data + environment: + - bootstrap.memory_lock=true + - "ES_JAVA_OPTS=-Xms512m -Xmx512m" + - discovery.type=single-node + ports: # Expose Elasticsearch ports + - "9300:9300" + - "9200:9200" + +volumes: # Define seperate volume for Elasticsearch data + esdata: + +``` + +这个文件定义了我们全部的应用程序栈 —— 不需要在你的本地系统上安装 Elasticsearch、Node、和 Nginx。每个容器都将端口转发到宿主机系统(`localhost`)上,以便于我们在宿主机上去访问和调试 Node API、Elasticsearch instance、和前端 Web 应用程序。 + +### 2.4 - 添加 Dockerfile + +对于 Nginx 和 Elasticsearch,我们使用了官方预构建的镜像,而 Node.js 应用程序需要我们自己去构建。 + +在应用程序的根目录下定义一个简单的 `Dockerfile` 配置文件。 + +``` +# Use Node v8.9.0 LTS +FROM node:carbon + +# Setup app working directory +WORKDIR /usr/src/app + +# Copy package.json and package-lock.json +COPY package*.json ./ + +# Install app dependencies +RUN npm install + +# Copy sourcecode +COPY . . + +# Start app +CMD [ "npm", "start" ] + +``` + +这个 Docker 配置扩展了官方的 Node.js 镜像、拷贝我们的应用程序源代码、以及在容器内安装 NPM 依赖。 + +我们也增加了一个 `.dockerignore` 文件,以防止我们不需要的文件拷贝到容器中。 + +``` +node_modules/ +npm-debug.log +books/ +public/ + +``` + +> 请注意:我们之所以不拷贝 `node_modules` 目录到我们的容器中 —— 是因为我们要在容器中运行 `npm install` 来构建这个进程。从宿主机系统拷贝 `node_modules` 可能会引起错误,因为一些包需要在某些操作系统上专门构建。比如说,在 macOS 上安装 `bcrypt` 包,然后尝试将这个模块直接拷贝到一个 Ubuntu 容器上将不能工作,因为 `bcyrpt` 需要为每个操作系统构建一个特定的二进制文件。 + +### 2.5 - 添加基本文件 + +为了测试我们的配置,我们需要添加一些占位符文件到应用程序目录中。 + +在 `public/index.html` 文件中添加如下内容。 + +``` +Hello World From The Frontend Container + +``` + +接下来,在 `server/app.js` 中添加 Node.js 占位符文件。 + +``` +const Koa = require('koa') +const app = new Koa() + +app.use(async (ctx, next) => { + ctx.body = 'Hello World From the Backend Container' +}) + +const port = process.env.PORT || 3000 + +app.listen(port, err => { + if (err) console.error(err) + console.log(`App Listening on Port ${port}`) +}) + +``` + +最后,添加我们的 `package.json` 节点应用配置。 + +``` +{ + "name": "guttenberg-search", + "version": "0.0.1", + "description": "Source code for Elasticsearch tutorial using 100 classic open source books.", + "scripts": { + "start": "node --inspect=0.0.0.0:9229 server/app.js" + }, + "repository": { + "type": "git", + "url": "git+https://github.com/triestpa/guttenberg-search.git" + }, + "author": "patrick.triest@gmail.com", + "license": "MIT", + "bugs": { + "url": "https://github.com/triestpa/guttenberg-search/issues" + }, + "homepage": "https://github.com/triestpa/guttenberg-search#readme", + "dependencies": { + "elasticsearch": "13.3.1", + "joi": "13.0.1", + "koa": "2.4.1", + "koa-joi-validate": "0.5.1", + "koa-router": "7.2.1" + } +} + +``` + +这个文件定义了应用程序启动命令和 Node.js 包依赖。 + +> 注意:不要运行 `npm install` —— 当它构建时,这个依赖将在容器内安装。 + +### 2.6 - 测试它的输出 + +现在一切新绪,我们来测试应用程序的每个组件的输出。从应用程序的主目录运行 `docker-compose build`,它将构建我们的 Node.js 应用程序容器。 + 
+![docker build output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_3.png) + +接下来,运行 `docker-compose up` 去启动整个应用程序栈。 + +![docker compose output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_2.png) + +> 这一步可能需要几分钟时间,因为 Docker 要为每个容器去下载基础镜像,接着再去运行,启动应用程序非常快,因为所需要的镜像已经下载完成了。 + +在你的浏览器中尝试访问 `localhost:8080` —— 你将看到简单的 “Hello World" Web 页面。 + +![frontend sample output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_0.png) + +访问 `localhost:3000` 去验证我们的 Node 服务器,它将返回 "Hello World" 信息。 + +![backend sample output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_1.png) + +最后,访问 `localhost:9200` 去检查 Elasticsearch 运行状态。它将返回类似如下的内容。 + +``` +{ + "name" : "SLTcfpI", + "cluster_name" : "docker-cluster", + "cluster_uuid" : "iId8e0ZeS_mgh9ALlWQ7-w", + "version" : { + "number" : "6.1.1", + "build_hash" : "bd92e7f", + "build_date" : "2017-12-17T20:23:25.338Z", + "build_snapshot" : false, + "lucene_version" : "7.1.0", + "minimum_wire_compatibility_version" : "5.6.0", + "minimum_index_compatibility_version" : "5.0.0" + }, + "tagline" : "You Know, for Search" +} + +``` + +如果三个 URLs 都显示成功,祝贺你!整个容器栈已经正常运行了,接下来我们进入最有趣的部分。 + +### 3 - 连接到 ELASTICSEARCH + +我们要做的第一件事情是,让我们的应用程序连接到我们本地的 Elasticsearch 实例上。 + +### 3.0 - 添加 ES 连接模块 + +在新文件 `server/connection.js` 中添加如下的 Elasticsearch 初始化代码。 + +``` +const elasticsearch = require('elasticsearch') + +// Core ES variables for this project +const index = 'library' +const type = 'novel' +const port = 9200 +const host = process.env.ES_HOST || 'localhost' +const client = new elasticsearch.Client({ host: { host, port } }) + +/** Check the ES connection status */ +async function checkConnection () { + let isConnected = false + while (!isConnected) { + console.log('Connecting to ES') + try { + const health = await client.cluster.health({}) + console.log(health) + isConnected = true + } catch (err) { + console.log('Connection Failed, Retrying...', err) + } + } +} + +checkConnection() + +``` + +现在,我们重新构建我们的 Node 应用程序,我们将使用 `docker-compose build` 来做一些改变。接下来,运行 `docker-compose up -d` 去启动应用程序栈,它将以守护进程的方式在后台运行。 + +应用程序启动之后,在命令行中运行 `docker exec gs-api "node" "server/connection.js"`,以便于在容器内运行我们的脚本。你将看到类似如下的系统输出信息。 + +``` +{ cluster_name: 'docker-cluster', + status: 'yellow', + timed_out: false, + number_of_nodes: 1, + number_of_data_nodes: 1, + active_primary_shards: 1, + active_shards: 1, + relocating_shards: 0, + initializing_shards: 0, + unassigned_shards: 1, + delayed_unassigned_shards: 0, + number_of_pending_tasks: 0, + number_of_in_flight_fetch: 0, + task_max_waiting_in_queue_millis: 0, + active_shards_percent_as_number: 50 } + +``` + +继续之前,我们先删除最下面的 `checkConnection()` 调用,因为,我们最终的应用程序将调用外部的连接模块。 + +### 3.1 - 添加函数去重置索引 + +在 `server/connection.js` 中的 `checkConnection` 下面添加如下的函数,以便于重置 Elasticsearch 索引。 + +``` +/** Clear the index, recreate it, and add mappings */ +async function resetIndex (index) { + if (await client.indices.exists({ index })) { + await client.indices.delete({ index }) + } + + await client.indices.create({ index }) + await putBookMapping() +} + +``` + +### 3.2 - 添加图书模式 + +接下来,我们将为图书的数据模式添加一个 "mapping"。在 `server/connection.js` 中的 `resetIndex` 函数下面添加如下的函数。 + +``` +/** Add book section schema mapping to ES */ +async function putBookMapping () { + const schema = { + title: { type: 'keyword' }, + author: { type: 'keyword' }, + location: { type: 'integer' }, + text: { type: 'text' } + } + + return client.indices.putMapping({ index, type, body: { properties: schema } }) 
+} + +``` + +这是为 `book` 索引定义了一个 mapping。一个 Elasticsearch 的 `index` 大概类似于 SQL 的 `table` 或者 MongoDB 的  `collection`。我们通过添加 mapping 来为存储的文档指定每个字段和它的数据类型。Elasticsearch 是无模式的,因此,从技术角度来看,我们是不需要添加 mapping 的,但是,这样做,我们可以更好地控制如何处理数据。 + +比如,我们给 "title" 和 ”author" 字段分配 `keyword` 类型,给 “text" 字段分配 `text` 类型。之所以这样做的原因是,搜索引擎可以区别处理这些字符串字段 —— 在搜索的时候,搜索引擎将在 `text` 字段中搜索可能的匹配项,而对于 `keyword` 类型字段,将对它们进行全文匹配。这看上去差别很小,但是它们对在不同的搜索上的速度和行为的影响非常大。 + +在文件的底部,导出对外发布的属性和函数,这样我们的应用程序中的其它模块就可以访问它们了。 + +``` +module.exports = { + client, index, type, checkConnection, resetIndex +} + +``` + +### 4 - 加载原始数据 + +我们将使用来自 [Gutenberg 项目][28] 的数据 ——  它致力于为公共提供免费的线上电子书。在这个项目中,我们将使用 100 本经典图书来充实我们的图书馆,包括_《The Adventures of Sherlock Holmes》_、_《Treasure Island》_、_《The Count of Monte Cristo》_、_《Around the World in 80 Days》_、_《Romeo and Juliet》_ 、和_《The Odyssey》_。 + +![Book Covers](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/books.jpg) + +### 4.1 - 下载图书文件 + +我将这 100 本书打包成一个文件,你可以从这里下载它 —— +[https://cdn.patricktriest.com/data/books.zip][29] + +将这个文件解压到你的项目的 `books/` 目录中。 + +你可以使用以下的命令来完成(需要在命令行下使用 [wget][30] 和 ["The Unarchiver"][31])。 + +``` +wget https://cdn.patricktriest.com/data/books.zip +unar books.zip + +``` + +### 4.2 - 预览一本书 + +尝试打开其中的一本书的文件,假设打开的是 `219-0.txt`。你将注意到它开头是一个公开访问的协议,接下来是一些标识这本书的书名、作者、发行日期、语言和字符编码的行。 + +``` +Title: Heart of Darkness + +Author: Joseph Conrad + +Release Date: February 1995 [EBook #219] +Last Updated: September 7, 2016 + +Language: English + +Character set encoding: UTF-8 + +``` + +在 `*** START OF THIS PROJECT GUTENBERG EBOOK HEART OF DARKNESS ***` 这些行后面,是这本书的正式内容。 + +如果你滚动到本书的底部,你将看到类似 `*** END OF THIS PROJECT GUTENBERG EBOOK HEART OF DARKNESS ***` 信息,接下来是本书更详细的协议版本。 + +下一步,我们将使用程序从文件头部来解析书的元数据,提取 `*** START OF` 和 `***END OF` 之间的内容。 + +### 4.3 - 读取数据目录 + +我们将写一个脚本来读取每本书的内容,并将这些数据添加到 Elasticsearch。我们将定义一个新的 Javascript 文件 `server/load_data.js` 来执行这些操作。 + +首先,我们将从 `books/` 目录中获取每个文件的列表。 + +在 `server/load_data.js` 中添加下列内容。 + +``` +const fs = require('fs') +const path = require('path') +const esConnection = require('./connection') + +/** Clear ES index, parse and index all files from the books directory */ +async function readAndInsertBooks () { + try { + // Clear previous ES index + await esConnection.resetIndex() + + // Read books directory + let files = fs.readdirSync('./books').filter(file => file.slice(-4) === '.txt') + console.log(`Found ${files.length} Files`) + + // Read each book file, and index each paragraph in elasticsearch + for (let file of files) { + console.log(`Reading File - ${file}`) + const filePath = path.join('./books', file) + const { title, author, paragraphs } = parseBookFile(filePath) + await insertBookData(title, author, paragraphs) + } + } catch (err) { + console.error(err) + } +} + +readAndInsertBooks() + +``` + +我们将使用一个快捷命令来重构我们的 Node.js 应用程序,并更新运行的容器。 + +运行 `docker-compose up -d --build` 去更新应用程序。这是运行  `docker-compose build` 和 `docker-compose up -d` 的快捷命令。 + +![docker build output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_1_0.png) + +为了在容器中运行我们的 `load_data` 脚本,我们运行 `docker exec gs-api "node" "server/load_data.js"` 。你将看到 Elasticsearch 的状态输出 `Found 100 Books`。 + +这之后,脚本发生了错误退出,原因是我们调用了一个没有定义的辅助函数(`parseBookFile`)。 + +![docker exec output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_1_1.png) + +### 4.4 - 读取数据文件 + +接下来,我们读取元数据和每本书的内容。 + +在 `server/load_data.js` 中定义新函数。 + +``` +/** Read an individual book text file, and extract the title, author, and paragraphs */ +function 
parseBookFile (filePath) { + // Read text file + const book = fs.readFileSync(filePath, 'utf8') + + // Find book title and author + const title = book.match(/^Title:\s(.+)$/m)[1] + const authorMatch = book.match(/^Author:\s(.+)$/m) + const author = (!authorMatch || authorMatch[1].trim() === '') ? 'Unknown Author' : authorMatch[1] + + console.log(`Reading Book - ${title} By ${author}`) + + // Find Guttenberg metadata header and footer + const startOfBookMatch = book.match(/^\*{3}\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK.+\*{3}$/m) + const startOfBookIndex = startOfBookMatch.index + startOfBookMatch[0].length + const endOfBookIndex = book.match(/^\*{3}\s*END OF (THIS|THE) PROJECT GUTENBERG EBOOK.+\*{3}$/m).index + + // Clean book text and split into array of paragraphs + const paragraphs = book + .slice(startOfBookIndex, endOfBookIndex) // Remove Guttenberg header and footer + .split(/\n\s+\n/g) // Split each paragraph into it's own array entry + .map(line => line.replace(/\r\n/g, ' ').trim()) // Remove paragraph line breaks and whitespace + .map(line => line.replace(/_/g, '')) // Guttenberg uses "_" to signify italics. We'll remove it, since it makes the raw text look messy. + .filter((line) => (line && line.length !== '')) // Remove empty lines + + console.log(`Parsed ${paragraphs.length} Paragraphs\n`) + return { title, author, paragraphs } +} + +``` + +这个函数执行几个重要的任务。 + +1. 从文件系统中读取书的文本。 + +2. 使用正则表达式(关于正则表达式,请参阅 [这篇文章][1] )解析书名和作者。 + +3. 通过匹配 ”Guttenberg 项目“ 头部和尾部,识别书的正文内容。 + +4. 提取书的内容文本。 + +5. 分割每个段落到它的数组中。 + +6. 清理文本并删除空白行。 + +它的返回值,我们将构建一个对象,这个对象包含书名、作者、以及书中各段落的数据。 + +再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"`,你将看到如下的输出,在输出的末尾有三个额外的行。 + +![docker exec output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_2_0.png) + +成功!我们的脚本从文本文件中成功解析出了书名和作者。脚本再次以错误结束,因为到现在为止,我们还没有定义辅助函数。 + +### 4.5 - 在 ES 中索引数据文件 + +最后一步,我们将批量上传每个段落的数组到 Elasticsearch 索引中。 + +在 `load_data.js` 中添加新的 `insertBookData` 函数。 + +``` +/** Bulk index the book data in Elasticsearch */ +async function insertBookData (title, author, paragraphs) { + let bulkOps = [] // Array to store bulk operations + + // Add an index operation for each section in the book + for (let i = 0; i < paragraphs.length; i++) { + // Describe action + bulkOps.push({ index: { _index: esConnection.index, _type: esConnection.type } }) + + // Add document + bulkOps.push({ + author, + title, + location: i, + text: paragraphs[i] + }) + + if (i > 0 && i % 500 === 0) { // Do bulk insert in 500 paragraph batches + await esConnection.client.bulk({ body: bulkOps }) + bulkOps = [] + console.log(`Indexed Paragraphs ${i - 499} - ${i}`) + } + } + + // Insert remainder of bulk ops array + await esConnection.client.bulk({ body: bulkOps }) + console.log(`Indexed Paragraphs ${paragraphs.length - (bulkOps.length / 2)} - ${paragraphs.length}\n\n\n`) +} + +``` + +这个函数将使用书名、作者、和附加元数据的段落位置来索引书中的每个段落。我们通过批量操作来插入段落,它比逐个段落插入要快的多。 + +> 我们分批索引段落,而不是一次性插入全部,是为运行这个应用程序的、内存稍有点小(1.7 GB)的服务器  `search.patricktriest.com` 上做的一个重要优化。如果你的机器内存还行(4 GB 以上),你或许不用分批上传。 + +运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"` 一次或多次 —— 现在你将看到前面解析的 100 本书的完整输出,并插入到了 Elasticsearch。这可能需要几分钟时间,甚至更长。 + +![data loading output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_3_0.png) + +### 5 - 搜索 + +现在,Elasticsearch 中已经有了 100 本书了(大约有 230000 个段落),现在我们尝试搜索查询。 + +### 5.0 - 简单的 HTTP 查询 + +首先,我们使用 Elasticsearch 的 HTTP API 对它进行直接查询。 + +在你的浏览器上访问这个 URL - 
`http://localhost:9200/library/_search?q=text:Java&pretty` + +在这里,我们将执行一个极简的全文搜索,在我们的图书馆的书中查找 ”Java" 这个词。 + +你将看到类似于下面的一个 JSON 格式的响应。 + +``` +{ + "took" : 11, + "timed_out" : false, + "_shards" : { + "total" : 5, + "successful" : 5, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : 13, + "max_score" : 14.259304, + "hits" : [ + { + "_index" : "library", + "_type" : "novel", + "_id" : "p_GwFWEBaZvLlaAUdQgV", + "_score" : 14.259304, + "_source" : { + "author" : "Charles Darwin", + "title" : "On the Origin of Species", + "location" : 1080, + "text" : "Java, plants of, 375." + } + }, + { + "_index" : "library", + "_type" : "novel", + "_id" : "wfKwFWEBaZvLlaAUkjfk", + "_score" : 10.186235, + "_source" : { + "author" : "Edgar Allan Poe", + "title" : "The Works of Edgar Allan Poe", + "location" : 827, + "text" : "After many years spent in foreign travel, I sailed in the year 18-- , from the port of Batavia, in the rich and populous island of Java, on a voyage to the Archipelago of the Sunda islands. I went as passenger--having no other inducement than a kind of nervous restlessness which haunted me as a fiend." + } + }, + ... + ] + } +} + +``` + +用 Elasticseach 的 HTTP 接口可以测试我们插入的数据是否成功,但是如果直接将这个 API 暴露给 Web 应用程序将有极大的风险。这个 API 将会暴露管理功能(比如直接添加和删除文档),最理想的情况是完全不要对外暴露它。而是写一个简单的 Node.js API 去接收来自客户端的请求,然后(在我们的本地网络中)生成一个正确的查询发送给 Elasticsearch。 + +### 5.1 - 查询脚本 + +我们现在尝试从我们写的 Node.js 脚本中查询 Elasticsearch。 + +创建一个新文件,`server/search.js`。 + +``` +const { client, index, type } = require('./connection') + +module.exports = { + /** Query ES index for the provided term */ + queryTerm (term, offset = 0) { + const body = { + from: offset, + query: { match: { + text: { + query: term, + operator: 'and', + fuzziness: 'auto' + } } }, + highlight: { fields: { text: {} } } + } + + return client.search({ index, type, body }) + } +} + +``` + +我们的搜索模块定义一个简单的 `search` 函数,它将使用输入的词 `match` 查询。 + +这是查询的字段分解 - + +* `from` - 允许我们分页查询结果。默认每个查询返回 10 个结果,因此,指定 `from: 10` 将允许我们取回 10-20 的结果。 + +* `query` - 这里我们指定要查询的词。 + +* `operator` - 我们可以修改搜索行为;在本案例中,我们使用 "and" 操作去对查询中包含所有 tokens(要查询的词)的结果来确定优先顺序。 + +* `fuzziness` - 对拼写错误的容错调整,`auto` 的默认为 `fuzziness: 2`。模糊值越高,结果越需要更多校正。比如,`fuzziness: 1` 将允许以 `Patricc` 为关键字的查询中返回与 `Patrick` 匹配的结果。 + +* `highlights` - 为结果返回一个额外的字段,这个字段包含 HTML,以显示精确的文本字集和查询中匹配的关键词。 + +你可以去浏览 [Elastic Full-Text Query DSL][32],学习如何随意调整这些参数,以进一步自定义搜索查询。 + +### 6 - API + +为了能够从前端应用程序中访问我们的搜索功能,我们来写一个快速的 HTTP API。 + +### 6.0 - API 服务器 + +用以下的内容替换现有的 `server/app.js` 文件。 + +``` +const Koa = require('koa') +const Router = require('koa-router') +const joi = require('joi') +const validate = require('koa-joi-validate') +const search = require('./search') + +const app = new Koa() +const router = new Router() + +// Log each request to the console +app.use(async (ctx, next) => { + const start = Date.now() + await next() + const ms = Date.now() - start + console.log(`${ctx.method} ${ctx.url} - ${ms}`) +}) + +// Log percolated errors to the console +app.on('error', err => { + console.error('Server Error', err) +}) + +// Set permissive CORS header +app.use(async (ctx, next) => { + ctx.set('Access-Control-Allow-Origin', '*') + return next() +}) + +// ADD ENDPOINTS HERE + +const port = process.env.PORT || 3000 + +app + .use(router.routes()) + .use(router.allowedMethods()) + .listen(port, err => { + if (err) throw err + console.log(`App Listening on Port ${port}`) + }) + +``` + +这些代码将为 [Koa.js][33] Node API 服务器导入服务器依赖,设置简单的日志,以及错误处理。 + +### 6.1 - 使用查询连接端点 + +接下来,我们将在服务器上添加一个端点,以便于发布我们的 Elasticsearch 查询功能。 + +在  
`server/app.js` 文件的 `// ADD ENDPOINTS HERE`  下面插入下列的代码。 + +``` +/** + * GET /search + * Search for a term in the library + */ +router.get('/search', async (ctx, next) => { + const { term, offset } = ctx.request.query + ctx.body = await search.queryTerm(term, offset) + } +) + +``` + +使用 `docker-compose up -d --build` 重启动应用程序。之后在你的浏览器中尝试调用这个搜索端点。比如,`http://localhost:3000/search?term=java` 这个请求将搜索整个图书馆中提到 “Jave" 的内容。 + +结果与前面直接调用 Elasticsearch HTTP 界面的结果非常类似。 + +``` +{ + "took": 242, + "timed_out": false, + "_shards": { + "total": 5, + "successful": 5, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": 93, + "max_score": 13.356944, + "hits": [{ + "_index": "library", + "_type": "novel", + "_id": "eHYHJmEBpQg9B4622421", + "_score": 13.356944, + "_source": { + "author": "Charles Darwin", + "title": "On the Origin of Species", + "location": 1080, + "text": "Java, plants of, 375." + }, + "highlight": { + "text": ["Java, plants of, 375."] + } + }, { + "_index": "library", + "_type": "novel", + "_id": "2HUHJmEBpQg9B462xdNg", + "_score": 9.030668, + "_source": { + "author": "Unknown Author", + "title": "The King James Bible", + "location": 186, + "text": "10:4 And the sons of Javan; Elishah, and Tarshish, Kittim, and Dodanim." + }, + "highlight": { + "text": ["10:4 And the sons of Javan; Elishah, and Tarshish, Kittim, and Dodanim."] + } + } + ... + ] + } +} + +``` + +### 6.2 - 输入校验 + +这个端点现在还很脆弱 —— 我们没有对请求参数做任何的校验,因此,如果是无效的或者错误的值将使服务器出错。 + +我们将添加一些使用 [Joi][34] 和 [Koa-Joi-Validate][35] 库的中间件,以对输入做校验。 + +``` +/** + * GET /search + * Search for a term in the library + * Query Params - + * term: string under 60 characters + * offset: positive integer + */ +router.get('/search', + validate({ + query: { + term: joi.string().max(60).required(), + offset: joi.number().integer().min(0).default(0) + } + }), + async (ctx, next) => { + const { term, offset } = ctx.request.query + ctx.body = await search.queryTerm(term, offset) + } +) + +``` + +现在,重启服务器,如果你使用一个没有搜索关键字的请求(`http://localhost:3000/search`),你将返回一个带相关消息的 HTTP 400 错误,比如像 `Invalid URL Query - child "term" fails because ["term" is required]`。 + +如果想从 Node 应用程序中查看实时日志,你可以运行 `docker-compose logs -f api`。 + +### 7 - 前端应用程序 + +现在我们的 `/search` 端点已经就绪,我们来连接到一个简单的 Web 应用程序来测试这个 API。 + +### 7.0 - Vue.js 应用程序 + +我们将使用 Vue.js 去协调我们的前端。 + +添加一个新文件 `/public/app.js`,去控制我们的 Vue.js 应用程序代码。 + +``` +const vm = new Vue ({ + el: '#vue-instance', + data () { + return { + baseUrl: 'http://localhost:3000', // API url + searchTerm: 'Hello World', // Default search term + searchDebounce: null, // Timeout for search bar debounce + searchResults: [], // Displayed search results + numHits: null, // Total search results found + searchOffset: 0, // Search result pagination offset + + selectedParagraph: null, // Selected paragraph object + bookOffset: 0, // Offset for book paragraphs being displayed + paragraphs: [] // Paragraphs being displayed in book preview window + } + }, + async created () { + this.searchResults = await this.search() // Search for default term + }, + methods: { + /** Debounce search input by 100 ms */ + onSearchInput () { + clearTimeout(this.searchDebounce) + this.searchDebounce = setTimeout(async () => { + this.searchOffset = 0 + this.searchResults = await this.search() + }, 100) + }, + /** Call API to search for inputted term */ + async search () { + const response = await axios.get(`${this.baseUrl}/search`, { params: { term: this.searchTerm, offset: this.searchOffset } }) + this.numHits = response.data.hits.total + return response.data.hits.hits + }, 
+ /** Get next page of search results */ + async nextResultsPage () { + if (this.numHits > 10) { + this.searchOffset += 10 + if (this.searchOffset + 10 > this.numHits) { this.searchOffset = this.numHits - 10} + this.searchResults = await this.search() + document.documentElement.scrollTop = 0 + } + }, + /** Get previous page of search results */ + async prevResultsPage () { + this.searchOffset -= 10 + if (this.searchOffset < 0) { this.searchOffset = 0 } + this.searchResults = await this.search() + document.documentElement.scrollTop = 0 + } + } +}) + +``` + +这个应用程序非常简单 —— 我们只定义了一些共享的数据属性,以及添加了检索和分页搜索结果的方法。为防止每按键一次都调用 API,搜索输入有一个 100 毫秒的除颤功能。 + +解释 Vue.js 是如何工作的已经超出了本教程的范围,如果你使用过 Angular 或者 React,其实一些也不可怕。如果你完全不熟悉 Vue,想快速了解它的功能,我建议你从官方的快速指南入手 —— [https://vuejs.org/v2/guide/][36] + +### 7.1 - HTML + +使用以下的内容替换 `/public/index.html` 文件中的占位符,以便于加载我们的 Vue.js 应用程序和设计一个基本的搜索界面。 + +``` + + + + + Elastic Library + + + + + + + + +
+ +
+
+ + +
+
+ + +
+
{{ numHits }} Hits
+
Displaying Results {{ searchOffset }} - {{ searchOffset + 9 }}
+
+ + +
+ + +
+ + +
+
+
+
+
{{ hit._source.title }} - {{ hit._source.author }}
+
Location {{ hit._source.location }}
+
+
+ + +
+ + +
+ + +
+ + + + + + + +``` + +### 7.2 - CSS + +添加一个新文件 `/public/styles.css`,使用一些自定义的 UI 样式。 + +``` +body { font-family: 'EB Garamond', serif; } + +.mui-textfield > input, .mui-btn, .mui--text-subhead, .mui-panel > .mui--text-headline { + font-family: 'Open Sans', sans-serif; +} + +.all-caps { text-transform: uppercase; } +.app-container { padding: 16px; } +.search-results em { font-weight: bold; } +.book-modal > button { width: 100%; } +.search-results .mui-divider { margin: 14px 0; } + +.search-results { + display: flex; + flex-direction: row; + flex-wrap: wrap; + justify-content: space-around; +} + +.search-results > div { + flex-basis: 45%; + box-sizing: border-box; + cursor: pointer; +} + +@media (max-width: 600px) { + .search-results > div { flex-basis: 100%; } +} + +.paragraphs-container { + max-width: 800px; + margin: 0 auto; + margin-bottom: 48px; +} + +.paragraphs-container .mui--text-body1, .paragraphs-container .mui--text-body2 { + font-size: 1.8rem; + line-height: 35px; +} + +.book-modal { + width: 100%; + height: 100%; + padding: 40px 10%; + box-sizing: border-box; + margin: 0 auto; + background-color: white; + overflow-y: scroll; + position: fixed; + top: 0; + left: 0; +} + +.pagination-panel { + display: flex; + justify-content: space-between; +} + +.title-row { + display: flex; + justify-content: space-between; + align-items: flex-end; +} + +@media (max-width: 600px) { + .title-row{ + flex-direction: column; + text-align: center; + align-items: center + } +} + +.locations-label { + text-align: center; + margin: 8px; +} + +.modal-footer { + position: fixed; + bottom: 0; + left: 0; + width: 100%; + display: flex; + justify-content: space-around; + background: white; +} + +``` + +### 7.3 - 尝试输出 + +在你的浏览器中打开 `localhost:8080`,你将看到一个简单的带结果分页功能的搜索界面。在顶部的搜索框中尝试输入不同的关键字来查看它们的搜索情况。 + +![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) + +> 你没有必要重新运行 `docker-compose up` 命令以使更改生效。本地的 `public` 目录是装载在我们的 Nginx 文件服务器容器中,因此,在本地系统中前端的变化将在容器化应用程序中自动反映出来。 + +如果你尝试点击任何搜索结果,什么反应也没有 —— 因为我们还没有为这个应用程序添加进一步的相关功能。 + +### 8 - 分页预览 + +如果点击每个搜索结果,然后查看到来自书中的内容,那将是非常棒的体验。 + +### 8.0 - 添加 Elasticsearch 查询 + +首先,我们需要定义一个简单的查询去从给定的书中获取段落范围。 + +在 `server/search.js` 文件中添加如下的函数到 `module.exports` 块中。 + +``` +/** Get the specified range of paragraphs from a book */ +getParagraphs (bookTitle, startLocation, endLocation) { + const filter = [ + { term: { title: bookTitle } }, + { range: { location: { gte: startLocation, lte: endLocation } } } + ] + + const body = { + size: endLocation - startLocation, + sort: { location: 'asc' }, + query: { bool: { filter } } + } + + return client.search({ index, type, body }) +} + +``` + +这个新函数将返回给定的书的开始位置和结束位置之间的一个排序后的段落数组。 + +### 8.1 - 添加 API 端点 + +现在,我们将这个函数链接到 API 端点。 + +添加下列内容到 `server/app.js` 文件中最初的 `/search` 端点下面。 + +``` +/** + * GET /paragraphs + * Get a range of paragraphs from the specified book + * Query Params - + * bookTitle: string under 256 characters + * start: positive integer + * end: positive integer greater than start + */ +router.get('/paragraphs', + validate({ + query: { + bookTitle: joi.string().max(256).required(), + start: joi.number().integer().min(0).default(0), + end: joi.number().integer().greater(joi.ref('start')).default(10) + } + }), + async (ctx, next) => { + const { bookTitle, start, end } = ctx.request.query + ctx.body = await search.getParagraphs(bookTitle, start, end) + } +) + +``` + +### 8.2 - 添加 UI 功能 + +现在,我们的新端点已经就绪,我们为应用程序添加一些从书中查询和显示全部页面的前端功能。 + +在 `/public/app.js` 文件的 `methods` 块中添加如下的函数。 + +``` + 
/** Call the API to get current page of paragraphs */ + async getParagraphs (bookTitle, offset) { + try { + this.bookOffset = offset + const start = this.bookOffset + const end = this.bookOffset + 10 + const response = await axios.get(`${this.baseUrl}/paragraphs`, { params: { bookTitle, start, end } }) + return response.data.hits.hits + } catch (err) { + console.error(err) + } + }, + /** Get next page (next 10 paragraphs) of selected book */ + async nextBookPage () { + this.$refs.bookModal.scrollTop = 0 + this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset + 10) + }, + /** Get previous page (previous 10 paragraphs) of selected book */ + async prevBookPage () { + this.$refs.bookModal.scrollTop = 0 + this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset - 10) + }, + /** Display paragraphs from selected book in modal window */ + async showBookModal (searchHit) { + try { + document.body.style.overflow = 'hidden' + this.selectedParagraph = searchHit + this.paragraphs = await this.getParagraphs(searchHit._source.title, searchHit._source.location - 5) + } catch (err) { + console.error(err) + } + }, + /** Close the book detail modal */ + closeBookModal () { + document.body.style.overflow = 'auto' + this.selectedParagraph = null + } + +``` + +这五个函数了提供了通过页码从书中下载和分页(每次十个段落)的逻辑。 + +现在,我们需要添加一个 UI 去显示书的页面。在 `/public/index.html` 的 `` 注释下面添加如下的内容。 + +``` + +
+
+ +
+
{{ selectedParagraph._source.title }}
+
{{ selectedParagraph._source.author }}
+
+
+
+
Locations {{ bookOffset - 5 }} to {{ bookOffset + 5 }}
+
+
+ + +
+
+ {{ paragraph._source.text }} +
+
+ {{ paragraph._source.text }} +
+
+
+
+ + + +
+ +``` + +再次重启应用程序服务器(`docker-compose up -d --build`),然后打开 `localhost:8080`。当你再次点击搜索结果时,你将能看到关键字附近的段落。如果你感兴趣,你现在甚至可以看这本书的剩余部分。 + +![preview webapp book page](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_5_0.png) + +祝贺你!你现在已经完成了本教程的应用程序。 + +你可以去比较你的本地结果与托管在这里的完整示例 —— [https://search.patricktriest.com/][37] + +### 9 - ELASTICSEARCH 的缺点 + +### 9.0 - 耗费资源 + +Elasticsearch 是计算密集型的。[官方建议][38] 运行 ES 的机器最好有 64 GB 的内存,强烈反对在低于 8 GB 内存的机器上运行它。Elasticsearch 是一个 _内存中_ 数据库,这样使它的查询速度非常快,但这也非常占用系统内存。在生产系统中使用时,[他们强烈建议在一个集群中运行多个 Elasticsearch 节点][39],以实现高可用、自动分区、和一个节点失败时的数据冗余。 + +我们的这个教程中的应用程序运行在一个 $15/月 的 GCP 计算实例中( [search.patricktriest.com][40]),它只有 1.7 GB 的内存,它勉强能运行这个 Elasticsearch 节点;有时候在进行初始的数据加载过程中,整个机器就 ”假死机“ 了。在我的经验中,Elasticsearch 比传统的那些数据库,比如,PostgreSQL 和 MongoDB 耗费的资源要多很多,这样会使托管主机的成本增加很多。 + +### 9.1 - 与数据库的同步 + +在大多数应用程序,将数据全部保存在 Elasticsearch 并不是个好的选择。最好是使用 ES 作为应用程序的主要事务数据库,但是一般不推荐这样做,因为在 Elasticsearch 中缺少 ACID,如果在处理数据的时候发生伸缩行为,它将丢失写操作。在许多案例中,ES 服务器更多是一个特定的角色,比如做应用程序中的一个文本搜索功能。这种特定的用途,要求它从主数据库中复制数据到 Elasticsearch 实例中。 + +比如,假设我们将用户信息保存在一个 PostgreSQL 表中,但是用 Elasticsearch 去驱动我们的用户搜索功能。如果一个用户,比如,"Albert",决定将他的名字改成 "Al",我们将需要把这个变化同时反映到我们主要的 PostgreSQL 数据库和辅助的 Elasticsearch 集群中。 + +正确地集成它们可能比较棘手,最好的答案将取决于你现有的应用程序栈。这有多种开源方案可选,从 [用一个进程去关注 MongoDB 操作日志][41] 并自动同步检测到的变化到 ES,到使用一个 [PostgresSQL 插件][42] 去创建一个定制的、基于 PSQL 的索引来与 Elasticsearch 进行自动沟通。 + +如果没有有效的预构建选项可用,你可能需要在你的服务器代码中增加一些钩子,这样可以基于数据库的变化来手动更新 Elasticsearch 索引。最后一招,我认为是一个最后的选择,因为,使用定制的业务逻辑去保持 ES 的同步可能很复杂,这将会给应用程序引入很多的 bugs。 + +让 Elasticsearch 与一个主数据库同步,将使它的架构更加复杂,其复杂性已经超越了 ES 的相关缺点,但是当在你的应用程序中考虑添加一个专用的搜索引擎的利弊得失时,这个问题是值的好好考虑的。 + +### 总结 + +在很多现在流行的应用程序中,全文搜索是一个非常重要的功能 —— 而且是很难实现的一个功能。对于在你的应用程序中添加一个快速而又可定制的文本搜索,Elasticsearch 是一个非常好的选择,但是,在这里也有一个替代者。[Apache Solr][43] 是一个类似的开源搜索平台,它是基于 Apache Lucene 构建的,它与 Elasticsearch 的核心库是相同的。[Algolia][44] 是一个搜索即服务的 Web 平台,它已经很快流行了起来,并且它对新手非常友好,很易于上手(但是作为折衷,它的可定制性较小,并且使用成本较高)。 + +“搜索” 特性并不是 Elasticsearch 唯一功能。ES 也是日志存储和分析的常用工具,在一个 ELK(Elasticsearch、Logstash、Kibana)栈配置中通常会使用它。灵活的全文搜索功能使得 Elasticsearch 在数据量非常大的科学任务中用处很大 —— 比如,在一个数据集中正确的/标准化的条目拼写,或者为了类似的词组搜索一个文本数据集。 + +对于你自己的项目,这里有一些创意。 + +* 添加更多你喜欢的书到教程的应用程序中,然后创建你自己的私人图书馆搜索引擎。 + +* 利用来自 [Google Scholar][2] 的论文索引,创建一个学术抄袭检测引擎。 + +* 通过将字典中的每个词索引到 Elasticsearch,创建一个拼写检查应用程序。 + +* 通过将 [Common Crawl Corpus][3] 加载到 Elasticsearch 中,构建你自己的与谷歌竞争的因特网搜索引擎(注意,它可能会超过 50 亿个页面,这是一个成本极高的数据集)。 + +* 在 journalism 上使用 Elasticsearch:在最近的大规模泄露的文档中搜索特定的名字和关键词,比如, [Panama Papers][4] 和 [Paradise Papers][5]。 + +本教程中应用程序的源代码是 100% 公开的,你可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][45] + +我希望你喜欢这个教程!你可以在下面的评论区,发表任何你的想法、问题、或者评论。 + +-------------------------------------------------------------------------------- + +作者简介: + +全栈工程师,数据爱好者,学霸,“构建强迫症患者”,探险爱好者。 + +------------- + + +via: https://blog.patricktriest.com/text-search-docker-elasticsearch/ + +作者:[Patrick Triest][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.patricktriest.com/author/patrick/ +[1]:https://blog.patricktriest.com/you-should-learn-regex/ +[2]:https://scholar.google.com/ +[3]:https://aws.amazon.com/public-datasets/common-crawl/ +[4]:https://en.wikipedia.org/wiki/Panama_Papers +[5]:https://en.wikipedia.org/wiki/Paradise_Papers +[6]:https://search.patricktriest.com/ +[7]:https://github.com/triestpa/guttenberg-search +[8]:https://www.postgresql.org/ +[9]:https://www.mongodb.com/ +[10]:https://www.elastic.co/ 
+[11]:https://www.docker.com/ +[12]:https://www.uber.com/ +[13]:https://www.spotify.com/us/ +[14]:https://www.adp.com/ +[15]:https://www.paypal.com/us/home +[16]:https://nodejs.org/en/ +[17]:http://koajs.com/ +[18]:https://vuejs.org/ +[19]:https://www.elastic.co/ +[20]:https://lucene.apache.org/core/ +[21]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/getting-started.html +[22]:https://en.wikipedia.org/wiki/B-tree +[23]:https://www.docker.com/ +[24]:https://www.docker.com/ +[25]:https://docs.docker.com/compose/ +[26]:https://docs.docker.com/engine/installation/ +[27]:https://docs.docker.com/compose/install/ +[28]:https://www.gutenberg.org/ +[29]:https://cdn.patricktriest.com/data/books.zip +[30]:https://www.gnu.org/software/wget/ +[31]:https://theunarchiver.com/command-line +[32]:https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html +[33]:http://koajs.com/ +[34]:https://github.com/hapijs/joi +[35]:https://github.com/triestpa/koa-joi-validate +[36]:https://vuejs.org/v2/guide/ +[37]:https://search.patricktriest.com/ +[38]:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html +[39]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/distributed-cluster.html +[40]:https://search.patricktriest.com/ +[41]:https://github.com/mongodb-labs/mongo-connector +[42]:https://github.com/zombodb/zombodb +[43]:https://lucene.apache.org/solr/ +[44]:https://www.algolia.com/ +[45]:https://github.com/triestpa/guttenberg-search +[46]:https://blog.patricktriest.com/tag/guides/ +[47]:https://blog.patricktriest.com/tag/javascript/ +[48]:https://blog.patricktriest.com/tag/nodejs/ +[49]:https://blog.patricktriest.com/tag/web-development/ +[50]:https://blog.patricktriest.com/tag/devops/ From 8669645ddf30a7bfea256b2211050f5e5da4fd57 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Thu, 15 Mar 2018 17:40:13 +0800 Subject: [PATCH 191/343] Translating by qhwdw --- .../tech/20171102 Testing IPv6 Networking in KVM- Part 1.md | 1 + .../tech/20171205 What DevOps teams really need from a CIO.md | 1 + sources/tech/20171213 Will DevOps steal my job-.md | 1 + sources/tech/20180125 Keep Accurate Time on Linux with NTP.md | 1 + sources/tech/20180127 Your instant Kubernetes cluster.md | 3 ++- .../tech/20180131 Microservices vs. monolith How to choose.md | 3 ++- ...20180201 How to Run Your Own Public Time Server on Linux.md | 1 + .../20180308 How to set up a print server on a Raspberry Pi.md | 1 + 8 files changed, 10 insertions(+), 2 deletions(-) diff --git a/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md b/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md index 149be0678b..d48f7765cb 100644 --- a/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md +++ b/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md @@ -1,3 +1,4 @@ +Translating by qhwdw Testing IPv6 Networking in KVM: Part 1 ====== diff --git a/sources/tech/20171205 What DevOps teams really need from a CIO.md b/sources/tech/20171205 What DevOps teams really need from a CIO.md index d86f549d18..2c83ce7a93 100644 --- a/sources/tech/20171205 What DevOps teams really need from a CIO.md +++ b/sources/tech/20171205 What DevOps teams really need from a CIO.md @@ -1,3 +1,4 @@ +Translating by qhwdw What DevOps teams really need from a CIO ====== IT leaders can learn from plenty of material exploring [DevOps][1] and the challenging cultural shift required for [making the DevOps transition][2]. 
But are you in tune with the short and long term challenges that a DevOps team faces - and what they really need from a CIO? diff --git a/sources/tech/20171213 Will DevOps steal my job-.md b/sources/tech/20171213 Will DevOps steal my job-.md index 72694ae69e..91c3f3aa6a 100644 --- a/sources/tech/20171213 Will DevOps steal my job-.md +++ b/sources/tech/20171213 Will DevOps steal my job-.md @@ -1,3 +1,4 @@ +Translating by qhwdw Will DevOps steal my job? ====== diff --git a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md index 817931c2a4..5895275c62 100644 --- a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md +++ b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md @@ -1,3 +1,4 @@ +Translating by qhwdw Keep Accurate Time on Linux with NTP ====== diff --git a/sources/tech/20180127 Your instant Kubernetes cluster.md b/sources/tech/20180127 Your instant Kubernetes cluster.md index b17619762a..d804986aac 100644 --- a/sources/tech/20180127 Your instant Kubernetes cluster.md +++ b/sources/tech/20180127 Your instant Kubernetes cluster.md @@ -1,3 +1,4 @@ +Translating by qhwdw Your instant Kubernetes cluster ============================================================ @@ -168,4 +169,4 @@ via: https://blog.alexellis.io/your-instant-kubernetes-cluster/ [12]:https://weave.works/ [13]:https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ [14]:https://blog.alexellis.io/docker-for-mac-with-kubernetes/ -[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/# \ No newline at end of file +[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/# diff --git a/sources/tech/20180131 Microservices vs. monolith How to choose.md b/sources/tech/20180131 Microservices vs. monolith How to choose.md index 35056b1ee0..a337f3c85f 100644 --- a/sources/tech/20180131 Microservices vs. monolith How to choose.md +++ b/sources/tech/20180131 Microservices vs. monolith How to choose.md @@ -1,3 +1,4 @@ +Translating by qhwdw Microservices vs. 
monolith: How to choose ============================================================ @@ -173,4 +174,4 @@ via: https://opensource.com/article/18/1/how-choose-between-monolith-microservic [19]:https://opensource.com/users/jakelumetta [20]:https://opensource.com/users/jakelumetta [21]:https://opensource.com/tags/microservices -[22]:https://opensource.com/tags/devops \ No newline at end of file +[22]:https://opensource.com/tags/devops diff --git a/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md b/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md index 752d06bc6a..4824b0370b 100644 --- a/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md +++ b/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md @@ -1,3 +1,4 @@ +Translating by qhwdw How to Run Your Own Public Time Server on Linux ====== diff --git a/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md b/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md index 77589083c2..200779210d 100644 --- a/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md +++ b/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md @@ -1,3 +1,4 @@ +Translating by qhwdw How to set up a print server on a Raspberry Pi ====== From c72898053d10229190509f07a43508517b47e90f Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 15 Mar 2018 18:48:54 +0800 Subject: [PATCH 192/343] Delete 20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md --- ...rams installed from source and dotfiles.md | 144 ------------------ 1 file changed, 144 deletions(-) delete mode 100644 sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md diff --git a/sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md b/sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md deleted file mode 100644 index 76fc06ffd0..0000000000 --- a/sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md +++ /dev/null @@ -1,144 +0,0 @@ -Translating by MjSeven - -How to use GNU Stow to manage programs installed from source and dotfiles -====== - -### Objective - -Easily manage programs installed from source and dotfiles using GNU stow - -### Requirements - - * Root permissions - - - -### Difficulty - -EASY - -### Conventions - - * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command - * **$** \- given command to be executed as a regular non-privileged user - - - -### Introduction - -Sometimes we have to install programs from source: maybe they are not available through standard channels, or maybe we want a specific version of a software. GNU stow is a very nice `symlinks factory` program which helps us a lot by keeping files organized in a very clean and easy to maintain way. 
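To picture what that "symlinks factory" produces, here is a small hypothetical session; the package name `mytool` and its files are made up purely for illustration, but the relative links shown are the kind of thing stow creates when the usual system directories already exist:
```
# ls /usr/local/stow/mytool/bin
mytool
# cd /usr/local/stow && stow mytool
# readlink /usr/local/bin/mytool
../stow/mytool/bin/mytool
```
The real program stays under `/usr/local/stow/mytool`, while the rest of the system only ever sees the links.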
- -### Obtaining stow - -Your distribution repositories is very likely to contain `stow`, for example in Fedora, all you have to do to install it is: -``` -# dnf install stow -``` - -or on Ubuntu/Debian you can install stow by executing: -``` - -# apt install stow - -``` - -In some distributions, stow it's not available in standard repositories, but it can be easily obtained by adding some extra software sources (for example epel in the case of Rhel and CentOS7) or, as a last resort, by compiling it from source: it requires very little dependencies. - -### Compiling stow from source - -The latest available stow version is the `2.2.2`: the tarball is available for download here: `https://ftp.gnu.org/gnu/stow/`. - -Once you have downloaded the sources, you must extract the tarball. Navigate to the directory where you downloaded the package and simply run: -``` -$ tar -xvpzf stow-2.2.2.tar.gz -``` - -After the sources have been extracted, navigate inside the stow-2.2.2 directory, and to compile the program simply run: -``` - -$ ./configure -$ make - -``` - -Finally, to install the package: -``` -# make install -``` - -By default the package will be installed in the `/usr/local/` directory, but we can change this, specifying the directory via the `--prefix` option of the configure script, or by adding `prefix="/your/dir"` when running the `make install` command. - -At this point, if all worked as expected we should have `stow` installed on our system - -### How does stow work? - -The main concept behind stow it's very well explained in the program manual: -``` - -The approach used by Stow is to install each package into its own tree, -then use symbolic links to make it appear as though the files are -installed in the common tree. - -``` - -To better understand the working of the package, let's analyze its key concepts: - -#### The stow directory - -The stow directory is the root directory which contains all the `stow packages`, each with their own private subtree. The typical stow directory is `/usr/local/stow`: inside it, each subdirectory represents a `package` - -#### Stow packages - -As said above, the stow directory contains "packages", each in its own separate subdirectory, usually named after the program itself. A package is nothing more than a list of files and directories related to a specific software, managed as an entity. - -#### The stow target directory - -The stow target directory is very a simple concept to explain. It is the directory in which the package files must appear to be installed. By default the stow target directory is considered to be the one above the directory in which stow is invoked from. This behaviour can be easily changed by using the `-t` option (short for --target), which allows us to specify an alternative directory. - -### A practical example - -I believe a well done example is worth 1000 words, so let's show how stow works. Suppose we want to compile and install `libx264`. Lets clone the git repository containing its sources: -``` -$ git clone git://git.videolan.org/x264.git -``` - -Few seconds after running the command, the "x264" directory will be created, and it will contain the sources, ready to be compiled. 
We now navigate inside it and run the `configure` script, specifying the /usr/local/stow/libx264 directory as `--prefix`: -``` -$ cd x264 && ./configure --prefix=/usr/local/stow/libx264 -``` - -Then we build the program and install it: -``` - -$ make -# make install - -``` - -The directory x264 should have been created inside of the stow directory: it contains all the stuff that would have been normally installed in the system directly. Now, all we have to do, is to invoke stow. We must run the command either from inside the stow directory, by using the `-d` option to specify manually the path to the stow directory (default is the current directory), or by specifying the target with `-t` as said before. We should also provide the name of the package to be stowed as an argument. In this case we run the program from the stow directory, so all we need to type is: -``` -# stow libx264 -``` - -All the files and directories contained in the libx264 package have now been symlinked in the parent directory (/usr/local) of the one from which stow has been invoked, so that, for example, libx264 binaries contained in `/usr/local/stow/x264/bin` are now symlinked in `/usr/local/bin`, files contained in `/usr/local/stow/x264/etc` are now symlinked in `/usr/local/etc` and so on. This way it will appear to the system that the files were installed normally, and we can easily keep track of each program we compile and install. To revert the action, we just use the `-D` option: -``` -# stow -d libx264 -``` - -It is done! The symlinks don't exist anymore: we just "uninstalled" a stow package, keeping our system in a clean and consistent state. At this point it should be clear why stow it's also used to manage dotfiles. A common practice is to have all user-specific configuration files inside a git repository, to manage them easily and have them available everywhere, and then using stow to place them where appropriate, in the user home directory. - -Stow will also prevent you from overriding files by mistake: it will refuse to create symbolic links if the destination file already exists and doesn't point to a package into the stow directory. This situation is called a conflict in stow terminology. - -That's it! For a complete list of options, please consult the stow manpage and don't forget to tell us your opinions about it in the comments. 
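To make the dotfiles workflow described above concrete, here is one possible session; the `~/dotfiles` repository and the `bash` package inside it are hypothetical names used only for illustration:
```
$ ls -A ~/dotfiles/bash
.bash_profile  .bashrc
$ cd ~/dotfiles && stow -t ~ bash
$ readlink ~/.bashrc
dotfiles/bash/.bashrc
```
Running `stow -D -t ~ bash` from the same directory removes the links again, leaving the repository itself untouched.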
- --------------------------------------------------------------------------------- - -via: https://linuxconfig.org/how-to-use-gnu-stow-to-manage-programs-installed-from-source-and-dotfiles - -作者:[Egidio Docile][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxconfig.org From f1fda49d0c74508ca43703c71f1aaad3e492bd8c Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 15 Mar 2018 18:50:38 +0800 Subject: [PATCH 193/343] Create 20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md --- ...rams installed from source and dotfiles.md | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md diff --git a/translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md b/translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md new file mode 100644 index 0000000000..ef8cf715e6 --- /dev/null +++ b/translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md @@ -0,0 +1,131 @@ +如何使用 GNU Stow 来管理从源代码和 dotfiles 安装的程序 +===== +dotfiles(**.**开头的文件在 *nix 下默认为隐藏文件) +### 目的 + +使用 GNU Stow 轻松管理从源代码和 dotfiles 安装的程序 + +### 要求 + +* root 权限 + + +### 难度 + +简单 + +### 约定 + +* **#** \- 要求直接以 root 用户身份或使用 `sudo` 命令以 root 权限执行给定的命令 +* **$** \- 给定的命令将作为普通的非特权用户来执行 + +### 介绍 + +有时候我们必须从源代码安装程序,因为它们也许不能通过标准渠道获得,或者我们可能需要特定版本的软件。 GNU Stow 是一个非常不错的 `symlinks factory` 程序,它可以帮助我们保持文件的整洁,易于维护。 + +### 获得 stow + +你的 Linux 发行版本很可能包含 `stow`,例如在 Fedora,你安装它只需要: +``` +# dnf install stow +``` + +在 Ubuntu/Debian 中,安装 stow 需要执行: +``` +# apt install stow +``` + +在某些 Linux 发行版中,stow 在标准库中是不可用的,但是可以通过一些额外的软件源(例如 Rhel 和 CentOS7 中的epel )轻松获得,或者,作为最后的手段,你可以从源代码编译它。只需要很少的依赖关系。 + +### 从源代码编译 + +最新的可用 stow 版本是 `2.2.2`。源码包可以在这里下载:`https://ftp.gnu.org/gnu/stow/`。 + +一旦你下载了源码包,你就必须解压它。切换到你下载软件包的目录,然后运行: +``` +$ tar -xvpzf stow-2.2.2.tar.gz +``` + +解压源文件后,切换到 stow-2.2.2 目录中,然后编译该程序,只需运行: +``` +$ ./configure +$ make + +``` + +最后,安装软件包: +``` +# make install +``` + +默认情况下,软件包将安装在 `/usr/local/` 目录中,但是我们可以改变它,通过配置脚本的 `--prefix` 选项指定目录,或者在运行 `make install` 时添加 `prefix="/your/dir"`。 + +此时,如果所有工作都按预期工作,我们应该已经在系统上安装了 `stow`。 + +### stow 是如何工作的? 
+ +stow 背后主要的概念在程序手册中有很好的解释: +``` +Stow 使用的方法是将每个软件包安装到自己的树中,然后使用符号链接使它看起来像文件一样安装在普通树中 + +``` + +为了更好地理解这个软件的运作,我们来分析一下它的关键概念: + +#### stow 文件目录 + +stow 目录是包含所有 `stow 包` 的根目录,每个包都有自己的子目录。典型的 stow 目录是 `/usr/local/stow`:在其中,每个子目录代表一个 `package`。 + +#### stow 包 + +如上所述,stow 目录包含多个 "包",每个包都位于自己单独的子目录中,通常以程序本身命名。包不过是与特定软件相关的文件和目录列表,作为实体进行管理。 + +#### stow 目标目录 + +stow 目标目录解释起来是一个非常简单的概念。它是包文件必须安装的目录。默认情况下,stow 目标目录被认为是从目录调用 stow 的目录。这种行为可以通过使用 `-t` 选项( --target 的简写)轻松改变,这使我们可以指定一个替代目录。 + +### 一个实际的例子 + +我相信一个好的例子胜过 1000 句话,所以让我来展示 stow 如何工作。假设我们想编译并安装 `libx264`,首先我们克隆包含其源代码的仓库: +``` +$ git clone git://git.videolan.org/x264.git +``` + +运行该命令几秒钟后,将创建 "x264" 目录,并且它将包含准备编译的源代码。我们切换到 "x264" 目录中并运行 `configure` 脚本,将 `--prefix` 指定为 /usr/local/stow/libx264 目录。 +``` +$ cd x264 && ./configure --prefix=/usr/local/stow/libx264 +``` + +然后我们构建该程序并安装它: +``` +$ make +# make install +``` + +x264 目录应该在 stow 目录内创建:它包含所有通常直接安装在系统中的东西。 现在,我们所要做的就是调用 stow。 我们必须从 stow 目录内运行这个命令,通过使用 `-d` 选项来手动指定 stow 目录的路径(默认为当前目录),或者通过如前所述用 `-t` 指定目标。我们还应该提供要作为参数存储的包的名称。 在这种情况下,我们从 stow 目录运行程序,所以我们需要输入的内容是: +``` +# stow libx264 +``` + +libx264 软件包中包含的所有文件和目录现在已经在调用 stow 的父目录 (/usr/local) 中进行了符号链接,因此,例如在 `/usr/local/ stow/x264/bin` 中包含的 libx264 二进制文件现在在 `/usr/local/bin` 中符号链接,`/usr/local/stow/x264/etc` 中的文件现在符号链接在 `/usr/local/etc` 中等等。通过这种方式,系统将显示文件已正常安装,并且我们可以容易地跟踪我们编译和安装的每个程序。要恢复该操作,我们只需使用 `-D` 选项: +``` +# stow -d libx264 +``` + +完成了!符号链接不再存在:我们只是“卸载”了一个 stow 包,使我们的系统保持在一个干净且一致的状态。 在这一点上,我们应该清楚为什么 stow 还用于管理 dotfiles。 通常的做法是在 git 仓库中包含用户特定的所有配置文件,以便轻松管理它们并使它们在任何地方都可用,然后使用 stow 将它们放在适当位置,如放在用户主目录中。 + +Stow 还会阻止你错误地覆盖文件:如果目标文件已经存在并且没有指向 Stow 目录中的包时,它将拒绝创建符号链接。 这种情况在 Stow 术语中称为冲突。 + +就是这样!有关选项的完整列表,请参阅 stow 帮助页,并且不要忘记在评论中告诉我们你对此的看法。 + +-------------------------------------------------------------------------------- + +via: https://linuxconfig.org/how-to-use-gnu-stow-to-manage-programs-installed-from-source-and-dotfiles + +作者:[Egidio Docile][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/ 校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxconfig.org From b2a9e677358eece6767f440dce201a8ebce7a62c Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 00:12:24 +0800 Subject: [PATCH 194/343] PRF:20171115 How to create better documentation with a kanban board.md @geekpi --- ...etter documentation with a kanban board.md | 20 ++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/translated/tech/20171115 How to create better documentation with a kanban board.md b/translated/tech/20171115 How to create better documentation with a kanban board.md index 289555963b..fa92553ea2 100644 --- a/translated/tech/20171115 How to create better documentation with a kanban board.md +++ b/translated/tech/20171115 How to create better documentation with a kanban board.md @@ -1,22 +1,24 @@ -如何使用看板创建更好的文档 +如何使用看板(kanban)创建更好的文档 ====== +> 通过卡片分类和看板来给用户提供他们想要的信息。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration.png?itok=68kU6BHy) -如果你正在处理文档、网站或其他面向用户的内容,那么了解用户希望找到的内容(包括他们想要的信息以及信息的组织和结构)很有帮助。毕竟,如果人们无法找到他们想要的东西,那么出色的内容就没有用。 +如果你正在处理文档、网站或其他面向用户的内容,那么了解用户希望找到的内容(包括他们想要的信息以及信息的组织和结构)很有帮助。毕竟,如果人们无法找到他们想要的东西,那么再出色的内容也没有用。 -卡片分类是一种简单而有效的方式,可以从用户那里收集有关菜单界面和页面的内容。最简单的实现方式是在你计划在网站或文章分类标注一些索引卡,并要求用户按照查找信息的方式对卡进行分类。变体包括让人们编写自己的菜单标题或内容元素。 +卡片分类是一种简单而有效的方式,可以从用户那里收集有关菜单界面和页面的内容。最简单的实现方式是在计划在网站或文档中的部分分类标注一些索引卡,并要求用户按照查找信息的方式对卡片进行分类。一个变体是让人们编写自己的菜单标题或内容元素。 
-我们的目标是了解用户的期望以及他们希望在哪里找到它,而不是自己弄清楚菜单和布局。当与用户处于相同的物理位置时,这是相对简单的,但当莫尝试从多个位置的人员获得反馈时,这会更具挑战性。 +我们的目标是了解用户的期望以及他们希望在哪里找到它,而不是自己弄清楚菜单和布局。当与用户处于相同的物理位置时,这是相对简单的,但当尝试从多个位置的人员获得反馈时,这会更具挑战性。 -我发现[看板][1]对于这些情况是一个很好的工具。它允许人们轻松拖动虚拟卡进行分类和排名,而且与专门卡片分类软件不同,它们是多用途的。 +我发现[看板kanban][1]对于这些情况是一个很好的工具。它允许人们轻松拖动虚拟卡片进行分类和排名,而且与专门卡片分类软件不同,它们是多用途的。 我经常使用 Trello 进行卡片分类,但有几种你可能想尝试的[开源替代品][2]。 ### 怎么运行的 -我最成功的 kanban 实验是在写 [Gluster][3] 文档的时候- 一个免费开源的可扩展网络存储文件系统。我需要携带大量随时间增长的文档,并将其分成若干类别以创建引导系统。由于我没有必要的技术知识来分类,我向 Gluster 团队和开发人员社区寻求指导。 +我最成功的看板体验是在写 [Gluster][3] 文档的时候 —— 这是一个自由开源的可扩展的网络存储文件系统。我需要携带大量随着时间而增长的文档,并将其分成若干类别以创建导航系统。由于我没有必要的技术知识来分类,我向 Gluster 团队和开发人员社区寻求指导。 -首先,我创建了一个共享看板。我列出了一些通用名称,这些名称可以为我计划在文档中涵盖的所有主题排序和创建卡片。我标记了一些不同颜色的卡片,以表明某个主题缺失并需要创建,或者它存在并需要删除。然后,我把所有卡片放入“未排序”一列,并要求人们将它们拖到他们认为应该组织卡片的地方,然后给我一个他们认为是理想状态的截图。 +首先,我创建了一个共享看板。我列出了一些通用名称,这些名称可以为我计划在文档中涵盖的所有主题排序和创建卡片。我标记了一些不同颜色的卡片,以表明某个主题缺失并需要创建,或者它存在并需要删除。然后,我把所有卡片放入“未排序”一列,并要求人们将它们拖到他们认为这些卡片应该组织到的地方,然后给我一个他们认为是理想状态的截图。 处理所有截图是最棘手的部分。我希望有一个合并或共识功能可以帮助我汇总每个人的数据,而不必检查一堆截图。幸运的是,在第一个人对卡片进行分类之后,人们或多或少地对该结构达成一致,而只做了很小的修改。当对某个主题的位置有不同意见时,我发起一个快速会议,让人们可以解释他们的想法,并且可以排除分歧。 @@ -24,7 +26,7 @@ 在这里,很容易将捕捉到的信息转换为菜单并对其进行优化。如果用户认为项目应该成为子菜单,他们通常会在评论中或在电话聊天时告诉我。对菜单组织的看法因人们的工作任务而异,所以从来没有完全达成一致意见,但用户进行测试意味着你不会对人们使用什么以及在哪里查找有很多盲点。 -将卡片分类与分析功能配对,可以让你更深入地了解人们在寻找什么。有一次,当我对一些我正在写培训文档进行分析时,我惊讶地发现搜索量最大的页面是关于资本的。所以我在顶层菜单层面上显示了该页面,即使我的“逻辑”设置将它放在了子菜单中。 +将卡片分类与分析功能配对,可以让你更深入地了解人们在寻找什么。有一次,当我对一些我正在写的培训文档进行分析时,我惊讶地发现搜索量最大的页面是关于资本的。所以我在顶层菜单层面上显示了该页面,即使我的“逻辑”设置将它放在了子菜单中。 我发现看板卡片分类是一种很好的方式,可以帮助我创建用户想要查看的内容,并将其放在希望被找到的位置。你是否发现了另一种对用户友好的组织内容的方法?或者看板的另一种有趣用途是什么?如果有的话,请在评论中分享你的想法。 @@ -34,7 +36,7 @@ via: https://opensource.com/article/17/11/kanban-boards-card-sorting 作者:[Heidi Waterhouse][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5eb96f67341a8bd83a96cd1ea0a61f61997e8f0d Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 00:12:49 +0800 Subject: [PATCH 195/343] PUB:20171115 How to create better documentation with a kanban board.md @geekpi --- ...1115 How to create better documentation with a kanban board.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171115 How to create better documentation with a kanban board.md (100%) diff --git a/translated/tech/20171115 How to create better documentation with a kanban board.md b/published/20171115 How to create better documentation with a kanban board.md similarity index 100% rename from translated/tech/20171115 How to create better documentation with a kanban board.md rename to published/20171115 How to create better documentation with a kanban board.md From 44be8521f3661f4186d92b056a97bf1e2c957ae6 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 00:30:04 +0800 Subject: [PATCH 196/343] PRF:20171102 What is huge pages in Linux.md @lujun9972 --- .../20171102 What is huge pages in Linux.md | 61 ++++++++++--------- 1 file changed, 32 insertions(+), 29 deletions(-) diff --git a/translated/tech/20171102 What is huge pages in Linux.md b/translated/tech/20171102 What is huge pages in Linux.md index ee261956ad..1f1d0b50a0 100644 --- a/translated/tech/20171102 What is huge pages in Linux.md +++ b/translated/tech/20171102 What is huge pages in Linux.md @@ -1,30 +1,31 @@ -Linux 中的 huge pages 是个什么玩意? +Linux 中的“大内存页”(hugepage)是个什么? 
====== -学习 Linux 中的 huge pages( 巨大页)。理解什么是 hugepages,如何进行配置,如何查看当前状态以及如何禁用它。 + +> 学习 Linux 中的大内存页hugepage。理解什么是“大内存页”,如何进行配置,如何查看当前状态以及如何禁用它。 ![Huge Pages in Linux][1] -本文,我们会详细介绍 huge page,让你能够回答:Linux 中的 huge page 是什么玩意?在 RHEL6,RHEL7,Ubuntu 等 Linux 中,如何启用/禁用 huge pages?如何查看 huge page 的当前值? +本文中我们会详细介绍大内存页huge page,让你能够回答:Linux 中的“大内存页”是什么?在 RHEL6、RHEL7、Ubuntu 等 Linux 中,如何启用/禁用“大内存页”?如何查看“大内存页”的当前值? -首先让我们从 Huge page 的基础知识开始讲起。 +首先让我们从“大内存页”的基础知识开始讲起。 -### Linux 中的 Huge page 是个什么玩意? +### Linux 中的“大内存页”是个什么玩意? -Huge pages 有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,他们还能帮助管理内存中的巨大页面。使用 huge pages,你最大可以定义 1GB 的页面大小。 +“大内存页”有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,它们还能帮助管理内存中的巨大的页面。使用“大内存页”,你最大可以定义 1GB 的页面大小。 -在系统启动期间,huge pages 会为应用程序预留一部分内存。这部分内存,即被 huge pages 占用的这些存储器永远不会被交换出内存。它会一直保留其中除非你修改了配置。这会极大地提高像 Orcle 数据库这样的需要海量内存的应用程序的性能。 +在系统启动期间,你能用“大内存页”为应用程序预留一部分内存。这部分内存,即被“大内存页”占用的这些存储器永远不会被交换出内存。它会一直保留其中,除非你修改了配置。这会极大地提高像 Oracle 数据库这样的需要海量内存的应用程序的性能。 -### 为什么使用巨大的页? +### 为什么使用“大内存页”? -在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射标。如果你的内存页很小,那么你需要加载的页就会很多,导致内核加载更多的映射表。而这会降低性能。 +在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射。如果你的内存页很小,那么你需要加载的页就会很多,导致内核会加载更多的映射表。而这会降低性能。 -使用巨大的页,意味着所需要的页变少了。从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能最终有利于应用程序的性能。 +使用“大内存页”,意味着所需要的页变少了。从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能最终有利于应用程序的性能。 -简而言之,通过启用 huge pages,系统具只需要处理较少的页面映射表,从而减少访问/维护它们的开销! +简而言之,通过启用“大内存页”,系统具只需要处理较少的页面映射表,从而减少访问/维护它们的开销! -### 如何配置 huge pages? +### 如何配置“大内存页”? -运行下面命令来查看当前 huge pages 的详细内容。 +运行下面命令来查看当前“大内存页”的详细内容。 ``` root@kerneltalks # grep Huge /proc/meminfo @@ -36,9 +37,9 @@ HugePages_Surp: 0 Hugepagesize: 2048 kB ``` -从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`) 并且系统中目前有 0 个页 (`HugePages_Total`)。这里巨大页的大小可以从 2MB 增加到 1GB。 +从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`),并且系统中目前有 `0` 个“大内存页”(`HugePages_Total`)。这里“大内存页”的大小可以从 `2MB` 增加到 `1GB`。 -运行下面的脚本可以获取系统当前需要多少个巨大页。该脚本取之于 Oracle。 +运行下面的脚本可以知道系统当前需要多少个巨大页。该脚本取之于 Oracle。 ``` #!/bin/bash @@ -74,29 +75,31 @@ case $KERN in esac # End ``` + 将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之: + ``` root@kerneltalks # sh /tmp/hugepages_settings.sh Recommended setting: vm.nr_hugepages = 124 ``` -输出如上结果,只是数字会有一些出入。 +你的输出类似如上结果,只是数字会有一些出入。 -这意味着,你系统需要 124 个每个 2MB 的巨大页!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧? +这意味着,你系统需要 124 个每个 2MB 的“大内存页”!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧? -### 配置内核中的 hugepages +### 配置内核中的“大内存页” -本文最后一部分内容是配置上面提到的 [内核参数 ][2] 然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。 +本文最后一部分内容是配置上面提到的 [内核参数 ][2] ,然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。 ``` -vm .nr_hugepages=126 +vm.nr_hugepages=126 ``` -注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量外多一些额外的空闲页。 +注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量之外多一些额外的空闲页。 -现在,内核已经配置好了,但是要让应用能够使用这些巨大页还需要提高内存的使用阀值。新的内存阀值应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。 +现在,内核已经配置好了,但是要让应用能够使用这些“大内存页”还需要提高内存的使用阀值。新的内存阀值应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。 -你需要编辑 `/etc/security/limits.conf` 中的如下配置 +你需要编辑 `/etc/security/limits.conf` 中的如下配置: ``` soft memlock 258048 @@ -107,20 +110,20 @@ hard memlock 258048 这就完成了!你可能还需要重启应用来让应用来使用这些新的巨大页。 -### 如何禁用 hugepages? +### 如何禁用“大内存页”? 
-HugePages 默认是开启的。使用下面命令来查看 hugepages 的当前状态。 +“大内存页”默认是开启的。使用下面命令来查看“大内存页”的当前状态。 ``` root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled [always] madvise never ``` -输出中的 `[always]` 标志说明系统启用了 hugepages。 +输出中的 `[always]` 标志说明系统启用了“大内存页”。 若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。 -若想禁用巨大页,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。 +若想禁用“大内存页”,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。 -------------------------------------------------------------------------------- @@ -128,10 +131,10 @@ via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/ 作者:[Shrikant Lavhate][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://kerneltalks.com -[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png +[1]:https://a1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png [2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/ From fe974b7c68f7a01ed40d9aef518c534d0ede1685 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 00:30:23 +0800 Subject: [PATCH 197/343] PUB:20171102 What is huge pages in Linux.md @lujun9972 --- .../tech => published}/20171102 What is huge pages in Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171102 What is huge pages in Linux.md (100%) diff --git a/translated/tech/20171102 What is huge pages in Linux.md b/published/20171102 What is huge pages in Linux.md similarity index 100% rename from translated/tech/20171102 What is huge pages in Linux.md rename to published/20171102 What is huge pages in Linux.md From bb2512c640f535aafb468b65e48cdf268c0b8531 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 01:13:14 +0800 Subject: [PATCH 198/343] PRF:20180202 How to Manage PGP and SSH Keys with Seahorse.md @qhwdw --- ...o Manage PGP and SSH Keys with Seahorse.md | 84 +++++++------------ 1 file changed, 32 insertions(+), 52 deletions(-) diff --git a/translated/tech/20180202 How to Manage PGP and SSH Keys with Seahorse.md b/translated/tech/20180202 How to Manage PGP and SSH Keys with Seahorse.md index 789137066c..55a3ebf4ed 100644 --- a/translated/tech/20180202 How to Manage PGP and SSH Keys with Seahorse.md +++ b/translated/tech/20180202 How to Manage PGP and SSH Keys with Seahorse.md @@ -1,114 +1,94 @@ 如何使用 Seahorse 管理 PGP 和 SSH 密钥 ============================================================ - ![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fish-1907607_1920.jpg?itok=u07bav4m "Seahorse") -学习使用 Seahorse GUI 工具去管理 PGP 和 SSH 密钥。[Creative Commons Zero][6] -安全无异于内心的平静。毕竟,安全是许多用户迁移到 Linux 的最大理由。但是当你可以采用几种方法和技术去确保你的桌面或者服务器系统的安全时,你为什么还要停止使用差不多已经接受的平台呢? +> 学习使用 Seahorse GUI 工具去管理 PGP 和 SSH 密钥。 -其中一项技术涉及到密钥 —在 PGP 和 SSH 中,PGP 密钥允许你去加密和解密电子邮件和文件,而 SSH 密钥允许你使用一个额外的安全层去登入服务器。 +安全即内心的平静。毕竟,安全是许多用户迁移到 Linux 的最大理由。但是为什么要止步于仅仅采用该平台,你还可以采用多种方法和技术去确保你的桌面或者服务器系统的安全。 -当然,你可以通过命令行接口(CLI)来管理这些密钥,但是,如果你使用一个华丽的 GUI 桌面环境呢?经验丰富的 Linux 用户可能对于摆脱命令行来工作感到很不适应,但是,并不是所有用户都具备与他们相同的技术和水平因此,使用 GUI! +其中一项技术涉及到密钥 —— 用在 PGP 和 SSH 中。PGP 密钥允许你去加密和解密电子邮件和文件,而 SSH 密钥允许你使用一个额外的安全层去登入服务器。 + +当然,你可以通过命令行接口(CLI)来管理这些密钥,但是,如果你使用一个华丽的 GUI 桌面环境呢?经验丰富的 Linux 用户可能对于脱离命令行来工作感到很不适应,但是,并不是所有用户都具备与他们相同的技术和水平,因此,使用 GUI 吧! 
在本文中,我将带你探索如何使用  [Seahorse][14] GUI 工具来管理 PGP 和 SSH 密钥。Seahorse 有非常强大的功能,它可以: * 加密/解密/签名文件和文本。 - * 管理你的密钥和密钥对。 - * 同步你的密钥和密钥对到远程密钥服务器。 - * 签名和发布密钥。 - * 缓存你的密码。 - * 备份密钥和密钥对。 - * 在任何一个 GDK 支持的格式中添加一个图像作为一个 OpenPGP photo ID。 - * 创建、配置、和缓存 SSH 密钥。 -对于那些不了解 Seahorse 的人来说,它是一个在 GNOME 密钥对中管理加密密钥和密码的 GNOME 应用程序。不用担心,Seahorse 可以安装在许多的桌面上。并且由于 Seahorse 是在标准仓库中创建的,你可以打开你的桌面应用商店(比如,Ubuntu Software 或者 Elementary OS AppCenter)去安装它。因此,你可以在你的发行版的应用商店中点击去安装它。安装完成后,你就可以去使用这个很方便的工具了。 +对于那些不了解 Seahorse 的人来说,它是一个管理 GNOME 钥匙环中的加密密钥和密码的 GNOME 应用程序。不用担心,Seahorse 可以安装在许多的桌面环境上。并且由于 Seahorse 可以在标准的仓库中找到,你可以打开你的桌面应用商店(比如,Ubuntu Software 或者 Elementary OS AppCenter)去安装它。你可以在你的发行版的应用商店中点击去安装它。安装完成后,你就可以去使用这个很方便的工具了。 我们开始去使用它吧。 ### PGP 密钥 -我们需要做的第一件事情就是生成一个新的 PGP 密钥。正如前面所述,PGP 密钥可以用于加密电子邮件(使用一些工具,像  [Thunderbird][15] 的 [Enigmail][16] 或者使用 [Evolution][17] 内置的加密功能)。一个 PGP 密钥也可以用于加密文件。任何人使用你的公钥都可以解密你的电子邮件和文件。没有 PGP 密钥是做不到的。 +我们需要做的第一件事情就是生成一个新的 PGP 密钥。正如前面所述,PGP 密钥可以用于加密电子邮件(通过一些工具,像  [Thunderbird][15] 的 [Enigmail][16] 或者使用 [Evolution][17] 内置的加密功能)。PGP 密钥也可以用于加密文件。任何人使用你的私钥都可以解密你的电子邮件和文件(LCTT 译注:原文此处误作“公钥”)。没有 PGP 密钥是做不到的。 使用 Seahorse 创建一个新的 PGP 密钥对是非常简单的。以下是操作步骤: 1. 打开 Seahorse 应用程序 - -2. 在主面板的左上角点击 + 按钮 - -3. 选择 PGP Key(如图 1 ) - -4. 点击 Continue - +2. 在主面板的左上角点击 “+” 按钮 +3. 选择 “PGP 密钥PGP Key”(如图 1 ) +4. 点击 “继续Continue” 5. 当提示时,输入完整的名字和电子邮件地址 - -6. 点击 Create - +6. 点击 “创建Create” ![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_1.jpg?itok=khLOYC61 "Seahorse") -图 1:使用 Seahorse 创建一个 PGP 密钥。[Used with permission][1] -在创建你的 PGP 密钥期间,你可以点击 Advanced key options 展开选项部分,在那里你可以为密钥添加注释信息、加密类型、密钥长度、以及过期时间(如图 2)。 +*图 1:使用 Seahorse 创建一个 PGP 密钥。* +在创建你的 PGP 密钥期间,你可以点击 “高级密钥选项Advanced key options” 展开选项部分,在那里你可以为密钥添加注释信息、加密类型、密钥长度、以及过期时间(如图 2)。 ![PGP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_2.jpg?itok=eWiazwrn "PGP") -图 2:PGP 密钥高级选项[Used with permission][2] + +*图 2:PGP 密钥高级选项* 增加注释部分可以很方便帮你记住密钥的用途(或者其它的信息)。 -要使用你创建的 PGP,可在密钥列表中双击它。在结果窗口中,点击 Names 和 Signatures 选项卡。在这个窗口中,你可以签名你的密钥(表示你信任这个密钥)。点击 Sign 按钮然后(在结果窗口中)标识 how carefully you’ve checked this key 和 how others will see the signature(如图 3)。 +要使用你创建的 PGP,可在密钥列表中双击它。在结果窗口中,点击 “名字Names” 和 “签名Signatures” 选项卡。在这个窗口中,你可以签名你的密钥(表示你信任这个密钥)。点击 “签名Sign” 按钮然后(在结果窗口中)指出 “你是如何仔细的检查这个密钥的?how carefully you’ve checked this key?” 和 “其他人将如何看到该签名how others will see the signature”(如图 3)。 ![Key signing](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_3.jpg?itok=7USKG9fI "Key signing") -图 3:签名一个密钥表示信任级别。[Used with permission][3] -当你处理其它人的密钥时,密钥签名是非常重要的,因为一个签名的密钥将确保你的系统(和你)做了这项工作并且完全信任这个重要的密钥。 +*图 3:签名一个密钥表示信任级别。* -谈到导入的密钥,Seahorse 可以允许你很容易地去导入其他人的公钥文件(这个文件以 .asc 为后缀)。你的系统上有其他人的公钥,意味着你可以解密从他们那里发送给你的电子邮件和文件。然而,Seahorse 在很长的一段时间内都存在一个 [已知的 bug][18]。这个问题是,Seahorse 导入使用 GPG 版本 1,但是显示的是 GPG 版本 2。这意味着,在这个存在了很长时间的 bug 被修复之前,导入公钥总是失败的。如果你想导入一个公钥文件到 Seahorse 中,你只能去使用命令行。因此,如果有人发送给你一个文件 olivia.asc,你想去导入到 Seahorse 中使用它,你将只能运行命令 gpg2 --import olivia.asc。那个密钥将出现在 GnuPG 密钥列表中。你可以打开密钥,点击 I trust signatures 按钮,然后在问题 how carefully you’ve checked the key 中,点击 Sign this key 按钮去标示。 +当你处理其它人的密钥时,密钥签名是非常重要的,因为一个签名的密钥将确保你的系统(和你)做了这项签名工作并且完全信任这个重要的密钥。 + +谈到导入的密钥,Seahorse 可以允许你很容易地去导入其他人的公钥文件(这个文件以 `.asc` 为后缀)。你的系统上有其他人的公钥,意味着你可以解密从他们那里发送给你的电子邮件和文件(LCTT 译注:本文的用例有问题,通常使用别人的公钥来验证对方使用其私钥进行的签名是否匹配;并使用自己的私钥来解密别人通过你的公钥加密的文件。)。然而,Seahorse 在很长的一段时间内都存在一个 [已知的 bug][18]。这个问题是,Seahorse 导入使用 GPG 版本 1,但是显示的是 GPG 版本 2。这意味着,在这个存在了很长时间的 bug 被修复之前,导入公钥总是失败的。如果你想导入一个公钥文件到 Seahorse 中,你只能去使用命令行。因此,如果有人发送给你一个文件 `olivia.asc`,你想去导入到 Seahorse 中使用它,你将只能运行命令 
`gpg2 --import olivia.asc`。那个密钥将出现在 GnuPG 密钥列表中。你可以打开该密钥,点击 “我信任签名I trust signatures” 按钮,然后在问题 “你是如何仔细地检查该密钥的?how carefully you’ve checked the key” 中,点击 “签名这个密钥Sign this key” 按钮去签名。 ### SSH 密钥 现在我们来谈谈我认为 Seahorse 中最重要的一个方面 — SSH 密钥。Seahorse 不仅可以很容易地生成一个 SSH 密钥,而且它也可以很容易地将生成的密钥发送到服务器上,因此,你可以享受到 SSH 密钥验证的好处。下面是如何生成一个新的密钥以及如何导出它到一个远程服务器上。 1. 打开 Seahorse 应用程序 - -2. 点击 + 按钮 - -3. 选择 Secure Shell Key - -4. 点击 Continue - +2. 点击 “+” 按钮 +3. 选择 “Secure Shell Key” +4. 点击 “Continue” 5. 提供一个密钥描述信息 - -6. 点击 Set Up 去创建密钥 - +6. 点击 “Set Up” 去创建密钥 7. 输入密钥的验证密钥 - 8. 点击 OK - -9. 输入远程服务器地址和服务器上的登陆名(如图 4) - +9. 输入远程服务器地址和服务器上的登录名(如图 4) 10. 输入远程用户的密码 - 11. 点击 OK ![SSH key](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_4.jpg?itok=ZxuxT8ry "SSH key") -图 4:上传一个 SSH 密钥到远程服务器。[Used with permission][4] -新密钥将上传到远程服务器上以准备好使用它。如果你的服务器已经设置为使用 SSH 密钥验证,那就一切就绪了。 +*图 4:上传一个 SSH 密钥到远程服务器。* -需要注意的是,在创建一个 SSH 密钥期间,你可以点击 Advanced key options 去展开它,配置加密类型和密钥长度(如图 5)。 +新密钥将上传到远程服务器上以备使用。如果你的服务器已经设置为使用 SSH 密钥验证,那就一切就绪了。 +需要注意的是,在创建一个 SSH 密钥期间,你可以点击 “高级密钥选项Advanced key options”去展开它,配置加密类型和密钥长度(如图 5)。 ![Advanced options](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_5.jpg?itok=vUT7pi0z "Advanced options") -图 5:高级 SSH 密钥选项。[Used with permission][5] + +*图 5:高级 SSH 密钥选项。* ### Linux 新手必备 @@ -120,9 +100,9 @@ via: https://www.linux.com/learn/intro-to-linux/2018/2/how-manage-pgp-and-ssh-keys-seahorse -作者:[JACK WALLEN ][a] +作者:[JACK WALLEN][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxt](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From efc0640d7bebe7300c4146e2a5c7e382c9d4a78c Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 01:13:27 +0800 Subject: [PATCH 199/343] PUB:20180202 How to Manage PGP and SSH Keys with Seahorse.md @qhwdw --- .../20180202 How to Manage PGP and SSH Keys with Seahorse.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180202 How to Manage PGP and SSH Keys with Seahorse.md (100%) diff --git a/translated/tech/20180202 How to Manage PGP and SSH Keys with Seahorse.md b/published/20180202 How to Manage PGP and SSH Keys with Seahorse.md similarity index 100% rename from translated/tech/20180202 How to Manage PGP and SSH Keys with Seahorse.md rename to published/20180202 How to Manage PGP and SSH Keys with Seahorse.md From af2f8d95b81bababd4ce65cdcecd010457db3caa Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Fri, 16 Mar 2018 07:31:58 +0800 Subject: [PATCH 200/343] 2018-3-16 --- sources/talk/20180201 How I coined the term open source.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index 21f5b8e86b..c19154ea4d 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -45,7 +45,7 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, 幸运的是,Todd 是明智的。他没有主张社区应该用哪个特定的术语,而是间接地做了一些事——一件和社区里有强烈意愿的人做的明智之举。他简单地在其他话题中使用那个术语——把他放进对话里看看会发生什么。我警觉起来,希望得到一个答复,但是起初什么也没有。讨论继续进行原来的话题。似乎只有他和我注意了术语的使用。 -Not so—memetic evolution was in action. A few minutes later, one of the others used the term, evidently without noticing, still discussing a topic other than terminology. Todd and I looked at each other out of the corners of our eyes to check: yes, we had both noticed what happened. I was excited—it might work! 
But I kept quiet: I still had low status in this group. Probably some were wondering why Eric had invited me at all. +不仅如此——模因演化(人类学术语)在起作用。几分钟后,另一个人明显地,没有提醒地,在仍然进行话题讨论而没说术语的情况下,用了这个术语。Todd 和我面面相觑对视:是的我们都注意到了发生的事。我很激动——它起作用了!但我保持了安静:我在小组中仍然地位不高。可能有些人都奇怪为什么 Eric 会最终邀请我。 临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric列出了“自由软件”、“开源软件”,并把[unknown]作为主要选项。Todd宣传“开源”模型,然后Eric 支持了他。我什么也没说,letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; a relatively minor issue. 只有我在会议中大约10%的说明放在了术语问答中。 From 95f477986d6d54b43c165e253857135508835e99 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 16 Mar 2018 08:57:53 +0800 Subject: [PATCH 201/343] translated --- ...og analysis application on ubuntu 17.10.md | 97 ------------------- ...og analysis application on ubuntu 17.10.md | 89 +++++++++++++++++ 2 files changed, 89 insertions(+), 97 deletions(-) delete mode 100644 sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md create mode 100644 translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md diff --git a/sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md b/sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md deleted file mode 100644 index a5e81bf708..0000000000 --- a/sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md +++ /dev/null @@ -1,97 +0,0 @@ -translating----geekpi - -Install AWFFull web server log analysis application on ubuntu 17.10 -====== - - -AWFFull is a web server log analysis program based on "The Webalizer".AWFFull produces usage statistics in HTML format for viewing with a browser. The results are presented in both columnar and graphical format, which facilitates interpretation. Yearly, monthly, daily and hourly usage statistics are presented, along with the ability to display usage by site, URL, referrer, user agent (browser), user name,search strings, entry/exit pages, and country (some information may not be available if not present in the log file being processed). - - - -AWFFull supports CLF (common log format) log files, as well as Combined log formats as defined by NCSA and others, and variations of these which it attempts to handle intelligently. In addition, AWFFull also supports wu-ftpd xferlog formatted log files, allowing analysis of ftp servers, and squid proxy logs. Logs may also be compressed, via gzip. - -AWFFull is a web server log analysis program based on "The Webalizer".AWFFull produces usage statistics in HTML format for viewing with a browser. The results are presented in both columnar and graphical format, which facilitates interpretation. Yearly, monthly, daily and hourly usage statistics are presented, along with the ability to display usage by site, URL, referrer, user agent (browser), user name,search strings, entry/exit pages, and country (some information may not be available if not present in the log file being processed).AWFFull supports CLF (common log format) log files, as well as Combined log formats as defined by NCSA and others, and variations of these which it attempts to handle intelligently. In addition, AWFFull also supports wu-ftpd xferlog formatted log files, allowing analysis of ftp servers, and squid proxy logs. Logs may also be compressed, via gzip. 
- -If a compressed log file is detected, it will be automatically uncompressed while it is read. Compressed logs must have the standard gzip extension of .gz. - -### Changes from Webalizer - -AWFFull is based on the Webalizer code and has a number of large and small changes. These include: - -o Beyond the raw statistics: Making use of published formulae to provide additional insights into site usage. - -o GeoIP IP Address look-ups for more accurate country detection. - -o Resizable graphs. - -o Integration with GNU gettext allowing for ease of translations.Currently 32 languages are supported. - -o Display more than 12 months of the site history on the front page. - -o Additional page count tracking and sort by same. - -o Some minor visual tweaks, including Geolizer's use of Kb, Mb etc for Volumes. - -o Additional Pie Charts for URL counts, Entry and Exit Pages, and Sites. - -o Horizontal lines on graphs that are more sensible and easier to read. - -o User Agent and Referral tracking is now calculated via PAGES not HITS. - -o GNU style long command line options are now supported (eg --help). - -o Can choose what is a page by excluding "what isn't" vs the original "what is" method. - -o Requests to the site being analysed are displayed with the matching referring URL. - -o A Table of 404 Errors, and the referring URL can be generated. - -o An external CSS file can be used with the generated html. - -o Manual performance optimisation of the config file is now easier with a post analysis summary output. - -o Specified IP's & Addresses can be assigned to a given country. - -o Additional Dump options for detailed analysis with other tools. - -o Lotus Domino v6 logs are now detected and processed. - -**Install awffull on ubuntu 17.10** - -> sudo apt-get install awffull - -### Configuring AWFFULL - -You have to edit awffull config file at /etc/awffull/awffull.conf. If you have multiple virtual websites running in the same machine, you can make several copies of the default config file. - -> sudo vi /etc/awffull/awffull.conf - -Make sure the following lines are there - -> LogFile /var/log/apache2/access.log.1 -> OutputDir /var/www/html/awffull - -Save and exit the file - -You can run the awffull config using the following command - -> awffull -c [your config file name] - -This will create all the required files under /var/www/html/awffull directory so you can access your webserver stats using http://serverip/awffull/ - -You should see similar to the following screen - -If you have more site and you can automate the process using shell script and cron job. 
- - --------------------------------------------------------------------------------- - -via: http://www.ubuntugeek.com/install-awffull-web-server-log-analysis-application-on-ubuntu-17-10.html - -作者:[ruchi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.ubuntugeek.com/author/ubuntufix diff --git a/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md b/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md new file mode 100644 index 0000000000..50f2f00451 --- /dev/null +++ b/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md @@ -0,0 +1,89 @@ +在 Ubuntu 17.10 上安装 AWFFull Web 服务器日志分析应用程序 +====== + + +AWFFull 是基于 “Webalizer” 的 Web 服务器日志分析程序。AWFFull 以 HTML 格式生成使用统计信息以便用浏览器查看。结果以柱状和图形两种格式显示,这有利于解释。它提供每年、每月、每日和每小时使用统计数据,并显示网站、URL、referrer、user agent(浏览器)、用户名、搜索字符串、进入/退出页面和国家(如果一些信息不存在于处理后日志中那么就没有)。AWFFull 支持 CLF(通用日志格式)日志文件,以及由 NCSA 和其他人定义的组合日志格式,它还能只能地处理这些格式的变体。另外,AWFFull 还支持 wu-ftpd xferlog 格式的日志文件,它能够分析 ftp 服务器和 squid 代理日志。日志也可以通过 gzip 压缩。 + +如果检测到压缩日志文件,它将在读取时自动解压缩。压缩日志必须是 .gz 扩展名的标准 gzip 压缩。 + +### 对于 Webalizer 的修改 + +AWFFull 基于 Webalizer 的代码,并有许多大的和小的变化。包括: + +o 不止原始统计数据:利用已发布的公式,提供额外的网站使用情况。 + +o GeoIP IP 地址能更准确地检测国家。 + +o 可缩放的图形 + +o 与 GNU gettext 集成,能够轻松翻译。目前支持 32 种语言。 + +o 在首页显示超过 12 个月的网站历史记录。 + +o 额外的页面计数跟踪和排序。 + +o 一些小的可视化调整,包括 Geolizer 使用在卷中使用 Kb、Mb。 + +o 额外的用于 URL 计数、进入和退出页面、站点的饼图 + +o 图形上的水平线更有意义,更易于阅读。 + +o User Agent 和 Referral 跟踪现在通过 PAGES 而非 HITS 进行计算。 + +o 现在支持 GNU 风格的长命令行选项(例如 --help)。 + +o 可以通过排除“什么不是”以及原始的“什么是”来选择页面。 + +o 对被分析站点的请求以匹配的引用 URL 显示。 + +o 404 错误表,并且可以生成引用 URL。 + +o 外部 CSS 文件可以与生成的 html 一起使用。 + +o POST 分析总结使得手动优化配置文件性能更简单。 + +o 指定的 IP 和地址可以分配给指定的国家。 + +o 便于使用其他工具详细分析的转储选项。 + +o 支持检测并处理 Lotus Domino v6 日志。 + +**在 Ubuntu 17.10 上安装 awffull** + +> sudo apt-get install awffull + +### 配置 AWFFULL + +你必须在 /etc/awffull/awffull.conf 中编辑 awffull 配置文件。如果你在同一台计算机上运行多个虚拟站点,​​则可以制作多个默认配置文件的副本。 + +> sudo vi /etc/awffull/awffull.conf + +确保有下面这几行 + +> LogFile /var/log/apache2/access.log.1 +> OutputDir /var/www/html/awffull + +保存并退出文件 + +你可以使用以下命令运行 awffull + +> awffull -c [your config file name] + +这将在 /var/www/html/awffull 目录下创建所有必需的文件,以便你可以使用 http://serverip/awffull/ + +你应该看到类似于下面的页面 + +如果你有更多站点,你可以使用 shell 和计划任务自动化这个过程。 + + +-------------------------------------------------------------------------------- + +via: http://www.ubuntugeek.com/install-awffull-web-server-log-analysis-application-on-ubuntu-17-10.html + +作者:[ruchi][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ubuntugeek.com/author/ubuntufix From 64926870097560667776514189344d3cf72f0b39 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 16 Mar 2018 09:03:15 +0800 Subject: [PATCH 202/343] translating --- .../20180129 A look inside Facebooks open source program.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180129 A look inside Facebooks open source program.md b/sources/tech/20180129 A look inside Facebooks open source program.md index 3610cec043..c76a524026 100644 --- a/sources/tech/20180129 A look inside Facebooks open source program.md +++ b/sources/tech/20180129 A look inside Facebooks open source program.md @@ -1,3 +1,5 @@ +translating---geekpi + A look inside Facebook's open source 
program ============================================================ From 3b05a25944ea76a4b7c1f2b4169e12ab2f03d0fc Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 16 Mar 2018 10:22:55 +0800 Subject: [PATCH 203/343] =?UTF-8?q?=E6=9B=B4=E6=AD=A3?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @qhwdw :D --- .../20180202 How to Manage PGP and SSH Keys with Seahorse.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/published/20180202 How to Manage PGP and SSH Keys with Seahorse.md b/published/20180202 How to Manage PGP and SSH Keys with Seahorse.md index 55a3ebf4ed..7fe2949666 100644 --- a/published/20180202 How to Manage PGP and SSH Keys with Seahorse.md +++ b/published/20180202 How to Manage PGP and SSH Keys with Seahorse.md @@ -28,7 +28,7 @@ ### PGP 密钥 -我们需要做的第一件事情就是生成一个新的 PGP 密钥。正如前面所述,PGP 密钥可以用于加密电子邮件(通过一些工具,像  [Thunderbird][15] 的 [Enigmail][16] 或者使用 [Evolution][17] 内置的加密功能)。PGP 密钥也可以用于加密文件。任何人使用你的私钥都可以解密你的电子邮件和文件(LCTT 译注:原文此处误作“公钥”)。没有 PGP 密钥是做不到的。 +我们需要做的第一件事情就是生成一个新的 PGP 密钥。正如前面所述,PGP 密钥可以用于加密电子邮件(通过一些工具,像  [Thunderbird][15] 的 [Enigmail][16] 或者使用 [Evolution][17] 内置的加密功能)。PGP 密钥也可以用于加密文件。任何人都可以使用你的公钥加密电子邮件和文件发给你(LCTT 译注:原文此处“加密”误作“解密”)。没有 PGP 密钥是做不到的。 使用 Seahorse 创建一个新的 PGP 密钥对是非常简单的。以下是操作步骤: @@ -59,7 +59,7 @@ 当你处理其它人的密钥时,密钥签名是非常重要的,因为一个签名的密钥将确保你的系统(和你)做了这项签名工作并且完全信任这个重要的密钥。 -谈到导入的密钥,Seahorse 可以允许你很容易地去导入其他人的公钥文件(这个文件以 `.asc` 为后缀)。你的系统上有其他人的公钥,意味着你可以解密从他们那里发送给你的电子邮件和文件(LCTT 译注:本文的用例有问题,通常使用别人的公钥来验证对方使用其私钥进行的签名是否匹配;并使用自己的私钥来解密别人通过你的公钥加密的文件。)。然而,Seahorse 在很长的一段时间内都存在一个 [已知的 bug][18]。这个问题是,Seahorse 导入使用 GPG 版本 1,但是显示的是 GPG 版本 2。这意味着,在这个存在了很长时间的 bug 被修复之前,导入公钥总是失败的。如果你想导入一个公钥文件到 Seahorse 中,你只能去使用命令行。因此,如果有人发送给你一个文件 `olivia.asc`,你想去导入到 Seahorse 中使用它,你将只能运行命令 `gpg2 --import olivia.asc`。那个密钥将出现在 GnuPG 密钥列表中。你可以打开该密钥,点击 “我信任签名I trust signatures” 按钮,然后在问题 “你是如何仔细地检查该密钥的?how carefully you’ve checked the key” 中,点击 “签名这个密钥Sign this key” 按钮去签名。 +谈到导入的密钥,Seahorse 可以允许你很容易地去导入其他人的公钥文件(这个文件以 `.asc` 为后缀)。你的系统上有其他人的公钥,意味着你可以加密发送给他们的电子邮件和文件(LCTT 译注:原文将“加密”误作“解密”)。然而,Seahorse 在很长的一段时间内都存在一个 [已知的 bug][18]。这个问题是,Seahorse 导入使用 GPG 版本 1,但是显示的是 GPG 版本 2。这意味着,在这个存在了很长时间的 bug 被修复之前,导入公钥总是失败的。如果你想导入一个公钥文件到 Seahorse 中,你只能去使用命令行。因此,如果有人发送给你一个文件 `olivia.asc`,你想去导入到 Seahorse 中使用它,你将只能运行命令 `gpg2 --import olivia.asc`。那个密钥将出现在 GnuPG 密钥列表中。你可以打开该密钥,点击 “我信任签名I trust signatures” 按钮,然后在问题 “你是如何仔细地检查该密钥的?how carefully you’ve checked the key” 中,点击 “签名这个密钥Sign this key” 按钮去签名。 ### SSH 密钥 From f8d6ea10eac37b07b14912e4b4c2fbff35b2ec37 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Fri, 16 Mar 2018 12:30:48 +0800 Subject: [PATCH 204/343] Translated by qhwdw --- ...set up a print server on a Raspberry Pi.md | 88 ------------------- ...set up a print server on a Raspberry Pi.md | 87 ++++++++++++++++++ 2 files changed, 87 insertions(+), 88 deletions(-) delete mode 100644 sources/tech/20180308 How to set up a print server on a Raspberry Pi.md create mode 100644 translated/tech/20180308 How to set up a print server on a Raspberry Pi.md diff --git a/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md b/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md deleted file mode 100644 index 200779210d..0000000000 --- a/sources/tech/20180308 How to set up a print server on a Raspberry Pi.md +++ /dev/null @@ -1,88 +0,0 @@ -Translating by qhwdw -How to set up a print server on a Raspberry Pi -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2) - -I like to work on 
small projects at home, so this year I picked up a [Raspberry Pi 3 Model B][1], a great model for home hobbyists like me. With built-in wireless on the Raspberry Pi 3 Model B, I can connect the Pi to my home network without a cable. This makes it really easy to put the Raspberry Pi to use right where it is needed. - -At our house, my wife and I both have laptops, but we have just one printer: a slightly used HP Color LaserJet. Because our printer doesn't have a wireless card and can't connect to wireless networks, we usually leave the LaserJet connected to my laptop, since I do most of the printing. While that arrangement works most of the time, sometimes my wife would like to print something without having to go through me. - -### Basic setup - -I realized we really needed a solution to connect the printer to the wireless network so both of us could print to it whenever we wanted. I could buy a wireless print server to connect the USB printer to the wireless network, but I decided instead to use my Raspberry Pi to build a print server to make the LaserJet available to anyone in our house. - -Setting up the Raspberry Pi is fairly straightforward. I downloaded the [Raspbian][2] image and wrote that to my microSD card. Then, I booted the Raspberry Pi with an HDMI display, a USB keyboard, and a USB mouse. With that, I was ready to go! - -The Raspbian system automatically boots into a graphical desktop environment where I performed most of the basic setup: setting the keyboard language, connecting to my wireless network, setting the password for the regular user account (`pi`), and setting the password for the system administrator account (`root`). - -I don't plan to use the Raspberry Pi as a desktop system. I only want to use it remotely from my regular Linux computer. So, I also used Raspbian's graphical administration tool to set the Raspberry Pi to boot into console mode, but not to automatically login as the `pi` user. - -Once I rebooted the Raspberry Pi, I needed to make a few other system tweaks so I could use the Pi as a "server" on my network. I set the Dynamic Host Configuration Protocol (DHCP) client to use a static IP address; by default, the DHCP client might pick any available network address, which would make it tricky to know how to connect to the Raspberry Pi over the network. My home network uses a private class A network, so my router's IP address is `10.0.0.1` and all my IP addresses are `10.0.0.x`. In my case, IP addresses in the lower range are safe, so I set up a static IP address on the wireless network at `10.0.0.11` by adding these lines to the `/etc/dhcpcd.conf` file: -``` -interface wlan0 - -static ip_address=10.0.0.11/24 - -static routers=10.0.0.1 - -static domain_name_servers=8.8.8.8 8.8.4.4 - -``` - -Before I rebooted again, I made sure that the secure shell daemon (SSHD) was running (you can set what services start at boot-up in Preferences). This allowed me to use a secure shell (SSH) client from my regular Linux system to connect to the Raspberry Pi over the network. - -### Print setup - -Now that my Raspberry Pi was on the network, I did the rest of the setup remotely, using SSH, from my regular Linux desktop machine. Make sure your printer is connected to the Raspberry Pi before taking the following steps. - -Setting up printing is fairly easy. The modern print server is called CUPS, which stands for the Common Unix Printing System. Any recent Unix system should be able to print through a CUPS print server. 
To set up CUPS on Raspberry Pi, you just need to enter a few commands to install the CUPS software, allow printing by other systems, and restart the print server with the new configuration: -``` -$ sudo apt-get install cups - -$ sudo cupsctl --remote-any - -$ sudo /etc/init.d/cups restart - -``` - -Setting up a printer in CUPS is also straightforward and can be done through a web browser. CUPS listens on port 631, so just use your favorite web browser and surf to: -``` -https://10.0.0.11:631/ - -``` - -Your web browser may complain that it doesn't recognize the web server's https certificate; just accept it, and login as the system administrator. You should see the standard CUPS panel: -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-1-home.png?itok=t9OFJgSX) - -From there, navigate to the Administration tab, and select Add Printer. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-2-administration.png?itok=MlEINoYC) - -If your printer is already connected via USB, you should be able to easily select the printer's make and model. Don't forget to tick the Share This Printer box so others can use it, too. And now your printer should be set up in CUPS: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-3-printer.png?itok=N5upmhE7) - -### Client setup - -Setting up a network printer from the Linux desktop should be quite simple. My desktop is GNOME, and you can add the network printer right from the GNOME Settings application. Just navigate to Devices and Printers and unlock the panel. Click on the Add button to add the printer. - -On my system, GNOME Settings automatically discovered the network printer and added it. If that doesn't happen for you, you may need to add the IP address for your Raspberry Pi to manually add the printer. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gnome-settings-printers.png?itok=NOQLTaLs) - -And that's it! We are now able to print to the Color LaserJet over the wireless network from wherever we are in the house. I no longer need to be physically connected to the printer, and everyone in the family can print on their own. 
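If you want to sanity-check the new queue from a machine without a desktop environment, the standard CUPS command-line tools can talk to the Raspberry Pi directly. The queue name `HP_Color_LaserJet` below is only a placeholder — use whatever name you gave the printer in the CUPS panel — and the file argument can be any document you want to print:
```
$ lpstat -h 10.0.0.11:631 -p -d
$ lp -h 10.0.0.11:631 -d HP_Color_LaserJet ~/Documents/test.pdf
```
The first command lists the printers the Pi is sharing and its default destination; the second sends a job straight to the shared queue.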
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/print-server-raspberry-pi - -作者:[Jim Hall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jim-hall -[1]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ -[2]:https://www.raspberrypi.org/downloads/ diff --git a/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md b/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md new file mode 100644 index 0000000000..0b53281d7f --- /dev/null +++ b/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md @@ -0,0 +1,87 @@ +如何将树莓派配置为打印服务器 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2) + +我喜欢在家做一些小项目,因此,今年我选择使用一个 [树莓派 3 Model B][1],这是一个像我这样的业余爱好者非常适合的东西。使用树莓派 3 Model B 的无线功能,我可以不使用线缆将树莓派连接到我的家庭网络中。这样可以很容易地将树莓派用到各种它所需要的地方。 + +在家里,我和我的妻子都使用笔记本电脑,但是我们只有一台打印机:一台使用的并不频繁的 HP 彩色激光打印机。因为我们的打印机并不内置无线网卡,因此,它不能直接连接到无线网络中,一般情况下,使用我的笔记本电脑时,我并不连接打印机,因为,我做的大多数工作并不需要打印。虽然这种安排在大多数时间都没有问题,但是,有时候,我的妻子想在不 “麻烦” 我的情况下,自己去打印一些东西。 + +### 基本设置 + +我觉得我们需要一个将打印机连接到无线网络的解决方案,以便于我们都能够随时随地打印。我本想买一个无线打印服务器将我的 USB 打印机连接到家里的无线网络上。后来,我决定使用我的树莓派,将它设置为打印服务器,这样就可以让家里的每个人都可以随时来打印。 + +设置树莓派是非常简单的事。我下载了 [Raspbian][2] 镜像,并将它写入到我的 microSD 卡中。然后,使用它引导连接了一个 HDMI 显示器、一个 USB 键盘和一个 USB 鼠标的树莓派。之后,我们开始对它进行设置! + +这个树莓派系统自动引导到一个图形桌面,然后我做了一些基本设置:设置键盘语言、连接无线网络、设置普通用户帐户(`pi`)的密码、设置管理员用户(`root`)的密码。 + +我并不打算将树莓派运行在桌面环境下。我一般是通过我的普通的 Linux 计算机远程来使用它。因此,我使用树莓派的图形化管理工具,去设置将树莓派引导到控制台模式,而且不以 `pi` 用户自动登入。 + +重新启动树莓派之后,我需要做一些其它的系统方面的小调整,以便于我在家用网络中使用树莓派做为 “服务器”。我设置它的 DHCP 客户端为使用静态 IP 地址;默认情况下,DHCP 客户端可能任选一个可用的网络地址,这样我会不知道应该用哪个地址连接到树莓派。我的家用网络使用一个私有的 A 类地址,因此,我的路由器的 IP 地址是 `10.0.0.1`,并且我的全部可用地 IP 地址是 `10.0.0.x`。在我的案例中,低位的 IP 地址是安全的,因此,我通过在 `/etc/dhcpcd.conf` 中添加如下的行,设置它的无线网络使用 `10.0.0.11` 这个静态地址。 +``` +interface wlan0 + +static ip_address=10.0.0.11/24 + +static routers=10.0.0.1 + +static domain_name_servers=8.8.8.8 8.8.4.4 + +``` + +在我再次重启之前,我需要去确认安全 shell 守护程序(SSHD)已经正常运行(你可以在 “偏好” 中设置哪些服务在引导时启动它)。这样我就可以使用 SSH 从普通的 Linux 系统上基于网络连接到树莓派中。 + +### 打印设置 + +现在,我的树莓派已经在网络上正常工作了,我通过 SSH 从我的 Linux 电脑上远程连接它,接着做剩余的设置。在继续设置之前,确保你的打印机已经连接到树莓派上。 + +设置打印机很容易。现在的打印服务器都称为 CUPS,它是标准的通用 Unix 打印系统。任何最新的 Unix 系统都可以通过 CUPS 打印服务器来打印。为了在树莓派上设置 CUPS 打印服务器。你需要通过几个命令去安装 CUPS 软件,并使用新的配置来重启打印服务器,这样就可以允许其它系统来打印了。 +``` +$ sudo apt-get install cups + +$ sudo cupsctl --remote-any + +$ sudo /etc/init.d/cups restart + +``` + +在 CUPS 中设置打印机也是非常简单的,你可以通过一个 Web 界面来完成。CUPS 监听端口是 631,因此你可以在浏览器中收藏这个地址: +``` +https://10.0.0.11:631/ + +``` + +你的 Web 浏览器可能会弹出警告,因为它不认可这个 Web 浏览器的 https 证书;选择 ”接受它“,然后以管理员用户登入系统,你将看到如下的标准的 CUPS 面板: +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-1-home.png?itok=t9OFJgSX) + +这时候,导航到管理标签,选择 “Add Printer"。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-2-administration.png?itok=MlEINoYC) + +如果打印机已经通过 USB 连接,你只需要简单地选择这个打印机和型号。不要忘记去勾选共享这个打印机的选择框,因为其它人也要使用它。现在,你的打印机已经在 CUPS 中设置好了。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-3-printer.png?itok=N5upmhE7) + +### 客户端设置 + +从 Linux 中设置一台网络打印机非常简单。我的桌面环境是 GNOME,你可以从 GNOME 的设置应用程序中添加网络打印机。只需要导航到设备和打印机,然后解锁这个面板。点击 “Add" 按钮去添加打印机。 + +在我的系统中,GNOME 设置为 ”自动发现网络打印机并添加它“。如果你的系统不是这样,你需要通过树莓派的 IP 
地址,手动去添加打印机。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gnome-settings-printers.png?itok=NOQLTaLs) + +设置到此为止!我们现在已经可以通过家中的无线网络来使用这台打印机了。我不再需要物理连接到这台打印机了,家里的任何人都可以使用它了! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/print-server-raspberry-pi + +作者:[Jim Hall][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jim-hall +[1]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ +[2]:https://www.raspberrypi.org/downloads/ From 772c9bbed080fc8ea4a257dbd8393cd67da92d07 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Fri, 16 Mar 2018 22:10:45 +0800 Subject: [PATCH 205/343] Translated by qhwdw --- ... Testing IPv6 Networking in KVM- Part 1.md | 83 ------------------- ... Testing IPv6 Networking in KVM- Part 1.md | 82 ++++++++++++++++++ 2 files changed, 82 insertions(+), 83 deletions(-) delete mode 100644 sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md create mode 100644 translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md diff --git a/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md b/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md deleted file mode 100644 index d48f7765cb..0000000000 --- a/sources/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md +++ /dev/null @@ -1,83 +0,0 @@ -Translating by qhwdw -Testing IPv6 Networking in KVM: Part 1 -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ipv6-networking.png?itok=swQPV8Ey) - -Nothing beats hands-on playing with IPv6 addresses to get the hang of how they work, and setting up a little test lab in KVM is as easy as falling over — and more fun. In this two-part series, we will learn about IPv6 private addressing and configuring test networks in KVM. - -### QEMU/KVM/Virtual Machine Manager - -Let's start with understanding what KVM is. Here I use KVM as a convenient shorthand for the combination of QEMU, KVM, and the Virtual Machine Manager that is typically bundled together in Linux distributions. The simplified explanation is that QEMU emulates hardware, and KVM is a kernel module that creates the guest state on your CPU and manages access to memory and the CPU. Virtual Machine Manager is a lovely graphical overlay to all of this virtualization and hypervisor goodness. - -But you're not stuck with pointy-clicky, no, for there are also fab command-line tools to use — such as virsh and virt-install. - -If you're not experienced with KVM, you might want to start with [Creating Virtual Machines in KVM: Part 1][1] and [Creating Virtual Machines in KVM: Part 2 - Networking][2]. - -### IPv6 Unique Local Addresses - -Configuring IPv6 networking in KVM is just like configuring IPv4 networks. The main difference is those weird long addresses. [Last time][3], we talked about the different types of IPv6 addresses. There is one more IPv6 unicast address class, and that is unique local addresses, fc00::/7 (see [RFC 4193][4]). This is analogous to the private address classes in IPv4, 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. - -This diagram illustrates the structure of the unique local address space. 
48 bits define the prefix and global ID, 16 bits are for subnets, and the remaining 64 bits are the interface ID: -``` -| 7 bits |1| 40 bits | 16 bits | 64 bits | -+--------|-+------------|-----------|----------------------------+ -| Prefix |L| Global ID | Subnet ID | Interface ID | -+--------|-+------------|-----------|----------------------------+ - -``` - -Here is another way to look at it, which is might be more helpful for understanding how to manipulate these addresses: -``` -| Prefix | Global ID | Subnet ID | Interface ID | -+--------|--------------|-------------|----------------------+ -| fd | 00:0000:0000 | 0000 | 0000:0000:0000:0000 | -+--------|--------------|-------------|----------------------+ - -``` - -fc00::/7 is divided into two /8 blocks, fc00::/8 and fd00::/8. fc00::/8 is reserved for future use. So, unique local addresses always start with fd, and the rest is up to you. The L bit, which is the eighth bit, is always set to 1, which makes fd00::/8. Setting it to zero makes fc00::/8. You can see this with subnetcalc: -``` -$ subnetcalc fd00::/8 -n -Address = fd00:: - fd00 = 11111101 00000000 - -$ subnetcalc fc00::/8 -n -Address = fc00:: - fc00 = 11111100 00000000 - -``` - -RFC 4193 requires that addresses be randomly generated. You can invent addresses any way you choose, as long as they start with fd, because the IPv6 cops aren't going to invade your home and give you a hard time. Still, it is a best practice to follow what RFCs say. The addresses must not be assigned sequentially or with well-known numbers. RFC 4193 includes an algorithm for building a pseudo-random address generator, or you can find any number of generators online. - -Unique local addresses are not centrally managed like global unicast addresses (assigned to you by your Internet service provider), but even so the probability of address collisions is very low. This is a nice benefit when you need to merge some local networks or want to route between discrete private networks. - -You can mix unique local addresses and global unicast addresses on the same subnets. Unique local addresses are routable and require no extra router tweaks. However, you should configure your border routers and firewalls to not allow them to leave your network except between private networks at different locations. - -RFC 4193 advises against mingling AAAA and PTR records with your global unicast address records, because there is no guarantee that they will be unique, even though the odds of duplicates are low. Just like we do with IPv4 addresses, keep your private local name services and public name services separate. The tried-and-true combination of Dnsmasq for local name services and BIND for public name services works just as well for IPv6 as it does for IPv4. - -### Pseudo-Random Address Generator - -One example of an online address generator is [Local IPv6 Address Generator][5]. You can find many cool online tools like this. You can use it to create a new address for you, or use it with your existing global ID and play with creating subnets. - -Come back next week to learn how to plug all of this IPv6 goodness into KVM and do live testing. - -Learn more about Linux through the free ["Introduction to Linux" ][6]course from The Linux Foundation and edX. 
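
One last aside on unique local addresses before wrapping up: if you prefer to generate a prefix locally rather than trust a web page, the RFC 4193 idea is easy to approximate in a few lines of shell. Treat the snippet below as a rough sketch rather than the RFC's exact algorithm; it seeds a SHA-1 hash with the current time and the first MAC address it finds under /sys/class/net (an assumption about a typical Linux host) and keeps 40 bits of the digest as the Global ID.

```
# Sketch only: derive an RFC 4193-style unique local /48 prefix.
seed="$(date +%s%N)$(cat /sys/class/net/*/address 2>/dev/null | head -n 1)"
# Hash the seed and keep the last 40 bits (10 hex characters) as the Global ID.
gid=$(printf '%s' "$seed" | sha1sum | cut -c31-40)
# fd = fc00::/7 with the L bit set; append the Global ID to form the prefix.
echo "fd${gid:0:2}:${gid:2:4}:${gid:6:4}::/48"
```

You can then feed the result to subnetcalc, as shown earlier, to double-check the prefix before carving out subnets.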
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1 - -作者:[Carla Schroder][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1 -[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking -[3]:https://www.linux.com/learn/intro-to-linux/2017/10/calculating-ipv6-subnets-linux -[4]:https://tools.ietf.org/html/rfc4193 -[5]:https://www.ultratools.com/tools/rangeGenerator -[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md b/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md new file mode 100644 index 0000000000..45a696511f --- /dev/null +++ b/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md @@ -0,0 +1,82 @@ +在 KVM 中测试 IPv6 网络(第 1 部分) +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ipv6-networking.png?itok=swQPV8Ey) + +要理解 IPv6 地址是如何工作的,没有比亲自动手去实践更好的方法了,在 KVM 中配置一个小的测试实验室非常容易 —— 也很有趣。这个系列的文章共有两个部分,我们将学习关于 IPv6 私有地址的知识,以及如何在 KVM 中配置测试网络。 + +### QEMU/KVM/虚拟机管理器 + +我们先来了解什么是 KVM。在这里,我将使用 KVM 来表示 QEMU、KVM、以及虚拟机管理器的一个组合,虚拟机管理器在 Linux 发行版中一般内置了。简单解释就是,QEMU 模拟硬件,而 KVM 是一个内核模块,它在你的 CPU 上创建一个 “访客领地”,并去管理它们对内存和 CPU 的访问。虚拟机管理器是一个涵盖虚拟化和管理程序的图形工具。 + +但是你不能被图形界面下 “点击” 操作的方式 "缠住" ,因为,它们也有命令行工具可以使用 —— 比如 virsh 和 virt-install。 + +如果你在使用 KVM 方面没有什么经验,你可以从 [在 KVM 中创建虚拟机:第 1 部分][1] 和 [在 KVM 中创建虚拟机:第 2 部分 - 网络][2] 开始学起。 + +### IPv6 唯一本地地址 + +在 KVM 中配置 IPv6 网络与配置 IPv4 网络很类似。它们的主要不同在于这些怪异的长地址。[上一次][3],我们讨论了 IPv6 地址的不同类型。其中有一个 IPv6 单播地址类,fc00::/7(详细情况请查阅 [RFC 4193][4]),它类似于 IPv4 中的私有地址 —— 10.0.0.0/8、172.16.0.0/12、和 192.168.0.0/16。 + +下图解释了这个唯一本地地址空间的结构。前 48 位定义了前缀和全局 ID,随后的 16 位是子网,剩余的 64 位是接口 ID: +``` +| 7 bits |1| 40 bits | 16 bits | 64 bits | ++--------|-+------------|-----------|----------------------------+ +| Prefix |L| Global ID | Subnet ID | Interface ID | ++--------|-+------------|-----------|----------------------------+ + +``` + +下面是另外一种表示方法,它可能更有助于你理解这些地址是如何管理的: +``` +| Prefix | Global ID | Subnet ID | Interface ID | ++--------|--------------|-------------|----------------------+ +| fd | 00:0000:0000 | 0000 | 0000:0000:0000:0000 | ++--------|--------------|-------------|----------------------+ + +``` + +fc00::/7 共分成两个 /8 地址块,fc00::/8 和 fd00::/8。fc00::/8 是为以后使用保留的。因此,唯一本地地址通常都是以 fd 开头的,而剩余部分是由你使用的。L 位,也就是第八位,它总是设置为 1,这样它可以表示为 fd00::/8。设置为 0 时,它就表示为 fc00::/8。你可以使用 `subnetcalc` 来看到这些东西: +``` +$ subnetcalc fd00::/8 -n +Address = fd00:: + fd00 = 11111101 00000000 + +$ subnetcalc fc00::/8 -n +Address = fc00:: + fc00 = 11111100 00000000 + +``` + +RFC 4193 要求地址必须随机产生。你可以用你选择的任何方法来造出个地址,只要它们以 `fd` 打头就可以,因为 IPv6 范围非常大,它不会因为地址耗尽而无法使用。当然,最佳实践还是按 RFCs 的要求来做。地址不能按顺序分配或者使用众所周知的数字。RFC 4193 包含一个构建伪随机地址生成器的算法,或者你可以在线找到任何生成器产生的数字。 + +唯一本地地址不像全局单播地址(它由你的因特网服务提供商分配)那样进行中心化管理,即使如此,发生地址冲突的可能性也是非常低的。当你需要去合并一些本地网络或者想去在不相关的私有网络之间路由时,这是一个非常好的优势。 + +在同一个子网中,你可以混用唯一本地地址和全局单播地址。唯一本地地址是可路由的,并且它并不会因此要求对路由器做任何调整。但是,你应该在你的边界路由器和防火墙上配置为不允许它们离开你的网络,除非是在不同位置的两个私有网络之间。 + +RFC4193 建议,不要混用全局单播地址的 AAAA 和 PTR 记录,因为虽然它们重复的机率非常低,但是并不能保证它们就是独一无二的。就像我们使用的 IPv4 地址一样,要保持你本地的私有名称服务和公共名称服务的独立。将本地名称服务使用的 Dnsmasq 和公共名称服务使用的 BIND 组合起来,是一个在 
IPv4 网络上经过实战检验的可靠组合,这个组合也同样适用于 IPv6 网络。 + +### 伪随机地址生成器 + +在线地址生成器的一个示例是 [本地 IPv6 地址生成器][5]。你可以在线找到许多这样很酷的工具。你可以使用它来为你创建一个新地址,或者使用它在你的现有全局 ID 下为你创建子网。 + +下周我们将讲解如何在 KVM 中配置这些 IPv6 的地址,并现场测试它们。 + +通过来自 Linux 基金会和 edX 的免费在线课程 ["Linux 入门" ][6] 学习更多的 Linux 知识。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1 + +作者:[Carla Schroder][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1 +[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking +[3]:https://www.linux.com/learn/intro-to-linux/2017/10/calculating-ipv6-subnets-linux +[4]:https://tools.ietf.org/html/rfc4193 +[5]:https://www.ultratools.com/tools/rangeGenerator +[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From c91b73998d65597cf8f72b35c674771a06112f13 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Sat, 17 Mar 2018 07:37:02 +0800 Subject: [PATCH 206/343] 2018-03-17 --- .../talk/20180201 How I coined the term open source.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index c19154ea4d..ef99769b3b 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -41,17 +41,17 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, 那周之后,1998 年的 2 月 5 日,一伙人在 VA research 进行头脑风暴商量对策。与会者——除了 Eric Raymond,Todd和我之外,还有 Larry Augustin,Sam Ockman,还有 Jon“maddog”Hall 的电话。 -会议的主要议题是推广策略,特别是要接洽的公司。 我几乎没说什么,而是在寻找机会推广已经提交讨论的术语。我觉得突然脱口而出那句话没什么用,“你们技术人员应当开始讨论我的新术语了。”他们大多数与会者不认识我,而且据我所知,他们可能甚至不同意对新术语的急切需求,或者是某种渴望。I felt that it wouldn't work for me to just blurt out, "All you technical people should start using my new term." Most of those attending didn't know me, and for all I knew, they might not even agree that a new term was greatly needed, or even somewhat desirable. +会议的主要议题是推广策略,特别是要接洽的公司。 我几乎没说什么,而是在寻找机会推广已经提交讨论的术语。我觉得突然脱口而出那句话没什么用,“你们技术人员应当开始讨论我的新术语了。”他们大多数与会者不认识我,而且据我所知,他们可能甚至不同意对新术语的急切需求,或者是某种渴望。 幸运的是,Todd 是明智的。他没有主张社区应该用哪个特定的术语,而是间接地做了一些事——一件和社区里有强烈意愿的人做的明智之举。他简单地在其他话题中使用那个术语——把他放进对话里看看会发生什么。我警觉起来,希望得到一个答复,但是起初什么也没有。讨论继续进行原来的话题。似乎只有他和我注意了术语的使用。 不仅如此——模因演化(人类学术语)在起作用。几分钟后,另一个人明显地,没有提醒地,在仍然进行话题讨论而没说术语的情况下,用了这个术语。Todd 和我面面相觑对视:是的我们都注意到了发生的事。我很激动——它起作用了!但我保持了安静:我在小组中仍然地位不高。可能有些人都奇怪为什么 Eric 会最终邀请我。 -临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric列出了“自由软件”、“开源软件”,并把[unknown]作为主要选项。Todd宣传“开源”模型,然后Eric 支持了他。我什么也没说,letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; a relatively minor issue. 只有我在会议中大约10%的说明放在了术语问答中。 +临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric 列出了“自由软件”、“开源软件”,并把[unknown]作为主要选项。Todd宣传“开源”模型,然后Eric 支持了他。我什么也没说,letting Todd and Eric pull the (loose, informal) consensus together around the open source name. 
It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; a relatively minor issue. 只有我在会议中大约10%的说明放在了术语问答中。 -但是我很高兴。那有许多社区的关键领导人,并且他们喜欢新名字,或者至少没反对。这是一个好的信号信号。可能我帮不上什么忙; Eric Raymond was far better positioned to spread the new meme, and he did. Bruce Perens signed on to the effort immediately, 帮助建立 [Opensource.org][9] 并在新术语的宣传中扮演一个重要角色。 +但是我很高兴。在那有许多社区的关键领导人,并且他们喜欢这新名字,或者至少没反对。这是一个好的信号信号。可能我帮不上什么忙; Eric Raymond 被相当好地放在了一个宣传模因的好位子上,而且他的确做到了。立即签约参加行动,帮助建立 [Opensource.org][9] 并在新术语的宣传中发挥重要作用。 -对于这个成功的名字,那很必要,甚至是相当渴望, 因此 Tim O'Reilly 同意以社区的名义在公司积极使用它。 Also helpful would be use of the term in the upcoming official release of the Netscape Navigator(网景浏览器)代码。 到二月底, O'Reilly & Associates 还有网景公司(Netscape) 已经开始使用新术语。 +对于这个成功的名字,那很必要,甚至是相当渴望, 因此 Tim O'Reilly 同意以社区的名义在公司积极使用它。在官方即将发布的 the Netscape Navigator(网景浏览器)代码中的术语使用也为此帮了忙。 到二月底, O'Reilly & Associates 还有网景公司(Netscape) 已经开始使用新术语。 ### 名字的诞生 From 03ae0943d9a8392a5f82112cc5b3a2737a2ea7ba Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 17 Mar 2018 12:13:14 +0800 Subject: [PATCH 207/343] PRF:20171002 Reset Linux Desktop To Default Settings With A Single Command.md @geekpi --- ... Default Settings With A Single Command.md | 25 +++++++++++-------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md index d486a777de..cfeade8a8b 100644 --- a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md +++ b/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md @@ -1,18 +1,20 @@ -使用一个命令重置 Linux 桌面到默认设置 +使用一个命令重置 Linux 桌面为默认设置 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg) -前段时间,我们分享了一篇关于 [**Resetter**][1] 的文章 - 这是一个有用的软件,可以在几分钟内将 Ubuntu 重置为出厂默认设置。使用 Resetter,任何人都可以轻松地将 Ubuntu 重置为第一次安装时的状态。今天,我偶然发现了一个类似的东西。不,它不是一个应用程序,而是一个单行的命令来重置你的 Linux 桌面设置、调整和定制到默认状态。 +前段时间,我们分享了一篇关于 [Resetter][1] 的文章 - 这是一个有用的软件,可以在几分钟内将 Ubuntu 重置为出厂默认设置。使用 Resetter,任何人都可以轻松地将 Ubuntu 重置为第一次安装时的状态。今天,我偶然发现了一个类似的东西。不,它不是一个应用程序,而是一个单行的命令来重置你的 Linux 桌面设置、调整和定制到默认状态。 ### 将 Linux 桌面重置为默认设置 -这个命令会将 Ubuntu Unity、Gnome 和 MATE 桌面重置为默认状态。我在我的 **Arch Linux MATE** 和 **Ubuntu 16.04 Unity** 上测试了这个命令。它可以在两个系统上工作。我希望它也能在其他桌面上运行。在写这篇文章的时候,我还没有安装 GNOME 的 Linux 桌面,因此我无法确认。但是,我相信它也可以在 Gnome 桌面环境中使用。 +这个命令会将 Ubuntu Unity、Gnome 和 MATE 桌面重置为默认状态。我在我的 Arch Linux MATE 和 Ubuntu 16.04 Unity 上测试了这个命令。它可以在两个系统上工作。我希望它也能在其他桌面上运行。在写这篇文章的时候,我还没有安装 GNOME 的 Linux 桌面,因此我无法确认。但是,我相信它也可以在 Gnome 桌面环境中使用。 -**一句忠告:**请注意,此命令将重置你在系统中所做的所有定制和调整,包括 Unity 启动器或 Dock 中的固定应用程序、桌面小程序、桌面指示器、系统字体、GTK主题、图标主题、显示器分辨率、键盘快捷键、窗口按钮位置、菜单和启动器行为等。 +**一句忠告:**请注意,此命令将重置你在系统中所做的所有定制和调整,包括 Unity 启动器或 Dock 中固定的应用程序、桌面小程序、桌面指示器、系统字体、GTK主题、图标主题、显示器分辨率、键盘快捷键、窗口按钮位置、菜单和启动器行为等。 -好的是它只会重置桌面设置。它不会影响其他不使用 dconf 的程序。此外,它不会删除你的个人资料。 +好的是它只会重置桌面设置。它不会影响其他不使用 `dconf` 的程序。此外,它不会删除你的个人资料。 现在,让我们开始。要将 Ubuntu Unity 或其他带有 GNOME/MATE 环境的 Linux 桌面重置,运行下面的命令: + ``` dconf reset -f / ``` @@ -29,12 +31,13 @@ dconf reset -f / 看见了么?现在,我的 Ubuntu 桌面已经回到了出厂设置。 -有关 “dconf” 命令的更多详细信息,请参阅手册页。 +有关 `dconf` 命令的更多详细信息,请参阅手册页。 + ``` man dconf ``` -在重置桌面上我个人更喜欢 “Resetter” 而不是 “dconf” 命令。因为,Resetter 给用户提供了更多的选择。用户可以决定删除哪些应用程序、保留哪些应用程序、是保留现有用户帐户还是创建新用户等等。如果你懒得安装 Resetter,你可以使用这个 “dconf” 命令在几分钟内将你的 Linux 系统重置为默认设置。 +在重置桌面上我个人更喜欢 “Resetter” 而不是 `dconf` 命令。因为,Resetter 
给用户提供了更多的选择。用户可以决定删除哪些应用程序、保留哪些应用程序、是保留现有用户帐户还是创建新用户等等。如果你懒得安装 Resetter,你可以使用这个 `dconf` 命令在几分钟内将你的 Linux 系统重置为默认设置。 就是这样了。希望这个有帮助。我将很快发布另一篇有用的指导。敬请关注! @@ -48,12 +51,12 @@ via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-comma 作者:[Edwin Arteaga][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com -[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/ +[1]:https://linux.cn/article-9217-1.html [2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png () +[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png +[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png From 3c406042341557702fe6b13f316b406d7b6101ab Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 17 Mar 2018 12:13:38 +0800 Subject: [PATCH 208/343] PUB:20171002 Reset Linux Desktop To Default Settings With A Single Command.md @geekpi --- ...set Linux Desktop To Default Settings With A Single Command.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171002 Reset Linux Desktop To Default Settings With A Single Command.md (100%) diff --git a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/published/20171002 Reset Linux Desktop To Default Settings With A Single Command.md similarity index 100% rename from translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md rename to published/20171002 Reset Linux Desktop To Default Settings With A Single Command.md From 1dea57aeadea0f7fea81c28a6bd0667a925cfa4b Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sat, 17 Mar 2018 14:35:16 +0800 Subject: [PATCH 209/343] Delete 20180102 How To Find (Top-10) Largest Files In Linux.md --- ...To Find (Top-10) Largest Files In Linux.md | 191 ------------------ 1 file changed, 191 deletions(-) delete mode 100644 sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md diff --git a/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md b/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md deleted file mode 100644 index 77c6238c9c..0000000000 --- a/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md +++ /dev/null @@ -1,191 +0,0 @@ -Translating by jessie-pang - -How To Find (Top-10) Largest Files In Linux -====== -When you are running out of disk space in system, you may prefer to check with df command or du command or ncdu command but all these will tell you only current directory files and doesn't shows the system wide files. - -You have to spend huge amount of time to get the largest files in the system using the above commands, that to you have to navigate to each and every directory to achieve this. - -It's making you to face trouble and this is not the right way to do it. - -If so, what would be the suggested way to get top 10 largest files in Linux? - -I have spend a lot of time with google but i didn't found this. Everywhere i could see an article which list the top 10 files in the current directory. 
So, i want to make this article useful for people whoever looking to get the top 10 largest files in the system. - -In this tutorial, we are going to teach you how to find top 10 largest files in Linux system using below four methods. - -### Method-1 : - -There is no specific command available in Linux to do this, hence we are using more than one command (all together) to get this done. -``` -# find / -type f -print0 | xargs -0 du -h | sort -rh | head -n 10 - -1.4G /swapfile -1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso -564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA -378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 -377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU -100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 -93M /usr/lib/firefox/libxul.so -84M /var/lib/snapd/snaps/core_3604.snap -84M /var/lib/snapd/snaps/core_3440.snap -84M /var/lib/snapd/snaps/core_3247.snap - -``` - -**Details :** -**`find`** : It 's a command, Search for files in a directory hierarchy. -**`/`** : Check in the whole system (starting from / directory) -**`-type`** : File is of type - -**`f`** : Regular file -**`-print0`** : Print the full file name on the standard output, followed by a null character -**`|`** : Control operator that send the output of one program to another program for further processing. - -**`xargs`** : It 's a command, which build and execute command lines from standard input. -**`-0`** : Input items are terminated by a null character instead of by whitespace -**`du -h`** : It 's a command to calculate disk usage with human readable format - -**`sort`** : It 's a command, Sort lines of text files -**`-r`** : Reverse the result of comparisons -**`-h`** : Print the output with human readable format - -**`head`** : It 's a command, Output the first part of files -**`n -10`** : Print the first 10 files. - -### Method-2 : - -This is an another way to find or check top 10 largest files in Linux system. Here also, we are putting few commands together to achieve this. -``` -# find / -type f -exec du -Sh {} + | sort -rh | head -n 10 - -1.4G /swapfile -1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso -564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA -378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 -377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU -100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 -93M /usr/lib/firefox/libxul.so -84M /var/lib/snapd/snaps/core_3604.snap -84M /var/lib/snapd/snaps/core_3440.snap -84M /var/lib/snapd/snaps/core_3247.snap - -``` - -**Details :** -**`find`** : It 's a command, Search for files in a directory hierarchy. -**`/`** : Check in the whole system (starting from / directory) -**`-type`** : File is of type - -**`f`** : Regular file -**`-exec`** : This variant of the -exec action runs the specified command on the selected files -**`du`** : It 's a command to estimate file space usage. - -**`-S`** : Do not include size of subdirectories -**`-h`** : Print sizes in human readable format -**`{}`** : Summarize disk usage of each FILE, recursively for directories. - -**`|`** : Control operator that send the output of one program to another program for further processing. -**`sort`** : It 's a command, Sort lines of text files -**`-r`** : Reverse the result of comparisons - -**`-h`** : Compare human readable numbers -**`head`** : It 's a command, Output the first part of files -**`n -10`** : Print the first 10 files. 
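
A caution that applies to both methods above: starting the search at / also walks pseudo-filesystems such as /proc and any network mounts, which can slow the scan or skew the results. If that matters on your system, the same pipeline can be restricted to a single filesystem with find's -xdev option. The line below is only a sketch of that variation, with /home standing in for whichever mount point you actually want to inspect:

```
# find /home -xdev -type f -print0 | xargs -0 du -h | sort -rh | head -n 10
```

Keeping / as the starting point together with -xdev also works if you only want files that live on the root filesystem itself.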
- -### Method-3 : - -It 's an another method to find or search top 10 largest files in Linux system. -``` -# find / -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {} - -84M /var/lib/snapd/snaps/core_3247.snap -84M /var/lib/snapd/snaps/core_3440.snap -84M /var/lib/snapd/snaps/core_3604.snap -93M /usr/lib/firefox/libxul.so -100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 -377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU -378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 -564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA -1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso -1.4G /swapfile - -``` - -**Details :** -**`find`** : It 's a command, Search for files in a directory hierarchy. -**`/`** : Check in the whole system (starting from / directory) -**`-type`** : File is of type - -**`f`** : Regular file -**`-print0`** : Print the full file name on the standard output, followed by a null character -**`|`** : Control operator that send the output of one program to another program for further processing. - -**`xargs`** : It 's a command, which build and execute command lines from standard input. -**`-0`** : Input items are terminated by a null character instead of by whitespace -**`du`** : It 's a command to estimate file space usage. - -**`sort`** : It 's a command, Sort lines of text files -**`-n`** : Compare according to string numerical value -**`tail -10`** : It 's a command, output the last part of files (last 10 files) - -**`cut`** : It 's a command, remove sections from each line of files -**`-f2`** : Select only these fields value. -**`-I{}`** : Replace occurrences of replace-str in the initial-arguments with names read from standard input. - -**`-s`** : Display only a total for each argument -**`-h`** : Print sizes in human readable format -**`{}`** : Summarize disk usage of each FILE, recursively for directories. - -### Method-4 : - -It 's an another method to find or search top 10 largest files in Linux system. -``` -# find / -type f -ls | sort -k 7 -r -n | head -10 | column -t | awk '{print $7,$11}' - -1494845440 /swapfile -1085984380 /home/magi/ubuntu-17.04-desktop-amd64.iso -591003648 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA -395770383 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 -394891761 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU -103999072 /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 -97356256 /usr/lib/firefox/libxul.so -87896064 /var/lib/snapd/snaps/core_3604.snap -87793664 /var/lib/snapd/snaps/core_3440.snap -87089152 /var/lib/snapd/snaps/core_3247.snap - -``` - -**Details :** -**`find`** : It 's a command, Search for files in a directory hierarchy. -**`/`** : Check in the whole system (starting from / directory) -**`-type`** : File is of type - -**`f`** : Regular file -**`-ls`** : List current file in ls -dils format on standard output. -**`|`** : Control operator that send the output of one program to another program for further processing. - -**`sort`** : It 's a command, Sort lines of text files -**`-k`** : start a key at POS1 -**`-r`** : Reverse the result of comparisons - -**`-n`** : Compare according to string numerical value -**`head`** : It 's a command, Output the first part of files -**`-10`** : Print the first 10 files. - -**`column`** : It 's a command, formats its input into multiple columns. -**`-t`** : Determine the number of columns the input contains and create a table. 
-**`awk`** : It 's a command, Pattern scanning and processing language -**`'{print $7,$11}'`** : Print only mentioned column. - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-to-find-search-check-print-top-10-largest-biggest-files-in-linux/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ From 8f9661cc87f0883f81bea50232d2679cd64698d7 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sat, 17 Mar 2018 14:35:56 +0800 Subject: [PATCH 210/343] 20180102 How To Find (Top-10) Largest Files In Linux.md 20180102 How To Find (Top-10) Largest Files In Linux.md --- ...To Find (Top-10) Largest Files In Linux.md | 241 ++++++++++++++++++ 1 file changed, 241 insertions(+) create mode 100644 translated/tech/20180102 How To Find (Top-10) Largest Files In Linux.md diff --git a/translated/tech/20180102 How To Find (Top-10) Largest Files In Linux.md b/translated/tech/20180102 How To Find (Top-10) Largest Files In Linux.md new file mode 100644 index 0000000000..37765828bf --- /dev/null +++ b/translated/tech/20180102 How To Find (Top-10) Largest Files In Linux.md @@ -0,0 +1,241 @@ +如何查找 Linux 中最大的 10 个文件 +====== + + +当系统的磁盘空间不足时,您可能更愿意使用 `df`、`du` 或 `ncdu` 命令进行检查,但这些命令只会显示当前目录的文件,并不会显示整个系统范围的文件。 + +您得花费大量的时间才能用上述命令获取系统中最大的文件,因为要进入到每个目录重复运行上述命令。 + +这个方法比较麻烦,也并不恰当。 + +如果是这样,那么该如何在 Linux 中找到最大的 10 个文件呢? + +我在谷歌上搜索了很久,却没发现类似的文章,我反而看到了很多关于列出当前目录中最大的 10 个文件的文章。所以,我希望这篇文章对那些有类似需求的人有所帮助。 + +本教程中,我们将教您如何使用以下四种方法在 Linux 系统中查找最大的前 10 个文件。 + +### 方法 1: + +在 Linux 中没有特定的命令可以直接执行此操作,因此我们需要将多个命令结合使用。 + +``` +# find / -type f -print0 | xargs -0 du -h | sort -rh | head -n 10 + +1.4G /swapfile +1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso +564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA +378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 +377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU +100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 +93M /usr/lib/firefox/libxul.so +84M /var/lib/snapd/snaps/core_3604.snap +84M /var/lib/snapd/snaps/core_3440.snap +84M /var/lib/snapd/snaps/core_3247.snap + +``` + +**详解:** + +**`find`**:在目录结构中搜索文件的命令 + +**`/`**:在整个系统(从根目录开始)中查找 + +**`-type`**:指定文件类型 + +**`f`**:普通文件 + +**`-print0`**:输出完整的文件名,其后跟一个空字符 + +**`|`**:控制操作符,将一条命令的输出传递给下一个命令以供进一步处理 + +**`xargs`**:将标准输入转换成命令行参数的命令 + +**`-0`**:以空字符(null)而不是空白字符(whitespace)(LCTT 译者注:即空格、制表符和换行)来分割记录 + +**`du -h`**:以可读格式计算磁盘空间使用情况的命令 + +**`sort`**:对文本文件进行排序的命令 + +**`-r`**:反转结果 + +**`-h`**:用可读格式打印输出 + +**`head`**:输出文件开头部分的命令 + +**`n -10`**:打印前 10 个文件 + +### 方法 2: + +这是查找 Linux 系统中最大的前 10 个文件的另一种方法。我们依然使用多个命令共同完成这个任务。 + +``` +# find / -type f -exec du -Sh {} + | sort -rh | head -n 10 + +1.4G /swapfile +1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso +564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA +378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 +377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU +100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 +93M /usr/lib/firefox/libxul.so +84M /var/lib/snapd/snaps/core_3604.snap +84M /var/lib/snapd/snaps/core_3440.snap +84M /var/lib/snapd/snaps/core_3247.snap + +``` + +**详解:** + +**`find`**:在目录结构中搜索文件的命令 + +**`/`**:在整个系统(从根目录开始)中查找 + +**`-type`**:指定文件类型 + +**`f`**:普通文件 + +**`-exec`**:在所选文件上运行指定命令 + 
+**`du`**:计算文件占用的磁盘空间的命令 + +**`-S`**:不包含子目录的大小 + +**`-h`**:以可读格式打印 + +**`{}`**:递归地查找目录,统计每个文件占用的磁盘空间 + +**`|`**:控制操作符,将一条命令的输出传递给下一个命令以供进一步处理 + +**`sort`**:对文本文件进行按行排序的命令 + +**`-r`**:反转结果 + +**`-h`**:用可读格式打印输出 + +**`head`**:输出文件开头部分的命令 + +**`n -10`**:打印前 10 个文件 + +### 方法 3: + +这里介绍另一种方法,在 Linux 系统中搜索最大的前 10 个文件。 + +``` +# find / -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {} + +84M /var/lib/snapd/snaps/core_3247.snap +84M /var/lib/snapd/snaps/core_3440.snap +84M /var/lib/snapd/snaps/core_3604.snap +93M /usr/lib/firefox/libxul.so +100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 +377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU +378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 +564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA +1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso +1.4G /swapfile + +``` + +**详解:** + +**`find`**:在目录结构中搜索文件的命令 + +**`/`**:在整个系统(从根目录开始)中查找 + +**`-type`**:指定文件类型 + +**`f`**:普通文件 + +**`-print0`**:输出完整的文件名,其后跟一个空字符 + +**`|`**:控制操作符,将一条命令的输出传递给下一个命令以供进一步处理 + +**`xargs`**:将标准输入转换成命令行参数的命令 + +**`-0`**:以空字符(null)而不是空白字符(whitespace)来分割记录 + +**`du`**:计算文件占用的磁盘空间的命令 + +**`sort`**:对文本文件进行按行排序的命令 + +**`-n`**:根据数字大小进行比较 + +**`tail -10`**:输出文件结尾部分的命令(最后 10 个文件) + +**`cut`**:从每行删除特定部分的命令 + +**`-f2`**:只选择特定字段值 + +**`-I{}`**:将初始参数中出现的每个替换字符串都替换为从标准输入读取的名称 + +**`-s`**:仅显示每个参数的总和 + +**`-h`**:用可读格式打印输出 + +**`{}`**:递归地查找目录,统计每个文件占用的磁盘空间 + +### 方法 4: + +还有一种在 Linux 系统中查找最大的前 10 个文件的方法。 + +``` +# find / -type f -ls | sort -k 7 -r -n | head -10 | column -t | awk '{print $7,$11}' + +1494845440 /swapfile +1085984380 /home/magi/ubuntu-17.04-desktop-amd64.iso +591003648 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA +395770383 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8 +394891761 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU +103999072 /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0 +97356256 /usr/lib/firefox/libxul.so +87896064 /var/lib/snapd/snaps/core_3604.snap +87793664 /var/lib/snapd/snaps/core_3440.snap +87089152 /var/lib/snapd/snaps/core_3247.snap + +``` + +**详解:** + +**`find`**:在目录结构中搜索文件的命令 + +**`/`**:在整个系统(从根目录开始)中查找 + +**`-type`**:指定文件类型 + +**`f`**:普通文件 + +**`-ls`**:在标准输出中以 `ls -dils` 的格式列出当前文件 + +**`|`**:控制操作符,将一条命令的输出传递给下一个命令以供进一步处理 + +**`sort`**:对文本文件进行按行排序的命令 + +**`-k`**:按指定列进行排序 + +**`-r`**:反转结果 + +**`-n`**:根据数字大小进行比较 + +**`head`**:输出文件开头部分的命令 + +**`-10`**:打印前 10 个文件 + +**`column`**:将其输入格式化为多列的命令 + +**`-t`**:确定输入包含的列数并创建一个表 + +**`awk`**:样式扫描和处理语言 + +**`'{print $7,$11}'`**:只打印指定的列 + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-find-search-check-print-top-10-largest-biggest-files-in-linux/ + +作者:[Magesh Maruthamuthu][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ \ No newline at end of file From c633caf7735893eafddbe730efc7a6e5a0dd8f87 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Sat, 17 Mar 2018 16:05:04 +0800 Subject: [PATCH 211/343] Translated by qhwdw --- ...hat DevOps teams really need from a CIO.md | 60 ------------------- ...hat DevOps teams really need from a CIO.md | 58 ++++++++++++++++++ 2 files changed, 58 insertions(+), 60 deletions(-) delete mode 100644 sources/tech/20171205 What DevOps teams really need from a CIO.md create mode 100644 translated/tech/20171205 What DevOps teams really need 
from a CIO.md diff --git a/sources/tech/20171205 What DevOps teams really need from a CIO.md b/sources/tech/20171205 What DevOps teams really need from a CIO.md deleted file mode 100644 index 2c83ce7a93..0000000000 --- a/sources/tech/20171205 What DevOps teams really need from a CIO.md +++ /dev/null @@ -1,60 +0,0 @@ -Translating by qhwdw -What DevOps teams really need from a CIO -====== -IT leaders can learn from plenty of material exploring [DevOps][1] and the challenging cultural shift required for [making the DevOps transition][2]. But are you in tune with the short and long term challenges that a DevOps team faces - and what they really need from a CIO? - -In my conversations with DevOps team members, some of what I heard might surprise you. DevOps pros (whether part of an internal or external team) want to put the following things at the top of your CIO radar screen. - -### 1. Communication - -First and foremost, DevOps pros need peer-level communication. An experienced DevOps team is extremely knowledgeable on current DevOps trends, successes, and failures in the industry and is interested in sharing this information. DevOps concepts are difficult to convey, so be open to a new working relationship in which there are regular (don't worry, not weekly) conversations about the current state of your IT, how the pieces in the environment communicate, and your overall IT estate. - -**[ Want even more wisdom from CIOs on leading DevOps? See our comprehensive resource,[DevOps: The IT Leader's Guide][3]. ]** - -Conversely, be prepared to share current business needs and goals with the DevOps team. Business objectives no longer exist in isolation from IT: They are now an integral component of what drives your IT advancements, and your IT determines how effectively you can execute on your business needs and goals. - -Focus on participating rather than leading. You are still the ultimate arbiter when it comes to decisions, but understand that these decisions are best made collaboratively in order to empower and motivate your DevOps team. - -### 2. Reduction of technical debt - -Second, strive to better understand technical debt and how DevOps efforts are going to reduce it. Your DevOps team is working hard on this front. In this case, technical debt refers to the manpower and infrastructure resources that are usurped daily by maintaining and adding new features on top of a monolithic, non-sustainable environment (read Rube Goldberg). - -Common CIO questions include: - - * Why do we need to do things in a new way? - * Why are we spending time and money on this? - * If there's no new functionality, just existing pieces being broken out with automation, then where is the gain? - - - -The "if it ain't broke don't fix it" thinking is understandable. But if the car is driving fine while everyone on the road accelerates past you, your environment IS broken. Precious resources continue to be sucked into propping up or augmenting an environmental kluge. - -Addressing every issue in isolation results in a compromised choice from the start that is worsened with each successive patch - layer upon layer added to a foundation that wasn't built to support it. In actuality, this approach is similar to plugging a continuously failing dike. Sooner or later you run out of fingers and the whole thing buckles under the added pressures, drowning your resources. - -The solution: automation. The result of automation is scalability - less effort per person to maintain and grow your IT environment. 
If adding manpower is the only way to grow your business, then scalability is a pipe dream. - -Automation reduces your manpower requirements and provides the flexibility required for continued IT evolution. Simple, right? Yes, but you must be prepared for delayed gratification. An upfront investment of time and effort for architectural and structural changes is required in order to reap the back-end financial benefits of automation with improved productivity and efficiency. Embracing these challenges as an IT leader is crucial in order for your DevOps team to successfully execute. - -### 3. Trust - -Lastly, trust your DevOps team and make sure they know it. DevOps experts understand that this is a tough request, but they must have your unquestionable support and your willingness to actively participate. It will often be a "learn as you go" experience for you as the DevOps team successively refines your IT environment, while they themselves adapt to ever-changing technology. - -Listen, listen, listen to them and trust them. DevOps changes are valuable and well worth the time and money through increased efficiency, productivity, and business responsiveness. Trusting your DevOps team gives them the freedom to make the most effective IT improvements. - -The new CIO bottom line: To maximize your DevOps team's potential, leave your leadership comfort zone and embrace a "CIOps" transition. Continuously work on finding common ground with the DevOps team throughout the DevOps transition, to help your organization achieve long-term IT success. - - --------------------------------------------------------------------------------- - -via: https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio - -作者:[John Allessio][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://enterprisersproject.com/user/john-allessio -[1]:https://enterprisersproject.com/tags/devops -[2]:https://www.redhat.com/en/insights/devops?intcmp=701f2000000tjyaAAA -[3]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ diff --git a/translated/tech/20171205 What DevOps teams really need from a CIO.md b/translated/tech/20171205 What DevOps teams really need from a CIO.md new file mode 100644 index 0000000000..4780ca8a26 --- /dev/null +++ b/translated/tech/20171205 What DevOps teams really need from a CIO.md @@ -0,0 +1,58 @@ +CIO 真正需要 DevOps 团队做什么? +====== +IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所要求的文化挑战中学习。但是,你在一个 DevOps 团队面对长期或短期挑战的调整中 —— 一个 CIO 真正需要他们做的是什么呢? + +在我与 DevOps 团队成员的谈话中,我听到的其中一些内容让你感到非常的意外。DevOps 专家(无论是内部团队的还是外部团队的)都希望将下列的事情放在你的 CIO 优先关注的级别。 + +### 1. 沟通 + +第一个也是最重要的一个,DevOps 专家需要面对面的沟通。一个经验丰富的 DevOps 团队是非常了解当前 DevOps 的趋势,以及成功、和失败的经验,并且他们非常乐意去分享这些信息。表达 DevOps 的概念是很困难的,因此,要在这种新的工作关系中保持开放,定期(不用担心,不用每周)讨论有关你的 IT 的当前状态,如何评价你的沟通环境,以及你的整体的 IT 产业。 + +**[想从领导 DevOps 的 CIO 们处学习更多的知识吗?查看我们的综合资源,[DevOps: IT 领导者指南][3]。 ]** + +相反,你应该准备好与 DevOps 团队去共享当前的业务需求和目标。业务不再是独立于 IT 的东西:它们现在是驱动 IT 发展的重要因素,并且 IT 决定了你的业务需求和目标运行的效果如何。 + +注重参与而不是领导。在需要做决策的时候,你仍然是最终的决策者,但是,理解这些决策的最好方式是协作,这样,你的 DevOps 团队将有更多的自主权,并因此受到更多激励。 + +### 2. 降低技术债务 + +第二,力争更好地理解技术债务,并在 DevOps 中努力降低它。你的 DevOps 团队面对的工作都非常难。在这种情况下,技术债务是指在一个庞大的、不可持续的环境(查看 Rube Goldberg)之中,通过维护和增加新功能而占用的人力资源和基础设备资源。 + +常见的 CIO 问题包括: + + * 为什么我们要用一种新方法去做这件事情? + * 为什么我们要在它上面花费时间和金钱? + * 如果这里没有新功能,只是现有组件实现了自动化,那么我们的收益是什么? 
+ + + +"如果没有坏,就不要去修理它“ ,这样的事情是可以理解的。但是,如果你正在路上好好的开车,而每个人都加速超过你,这时候,你的环境就被破坏了。持续投入宝贵的资源去支撑或扩张拼凑起来的环境。 + +选择妥协,并且一个接一个的打补丁,以这种方式去处理每个独立的问题,结果将从一开始就变得很糟糕 —— 在一个不能支撑建筑物的地基上,一层摞一层地往上堆。事实上,这种方法就像不断地在电脑中插入坏磁盘一样。迟早有一天,面对出现的问题,你将会毫无办法。在外面持续增加的压力下,整个事情将变得一团糟,完全吞噬掉你的资源。 + +这种情况下,解决方案就是:自动化。使用自动化的结果是良好的可伸缩性 —— 每个维护人员在 IT 环境的维护和增长方面花费更少的努力。如果增加人力资源是实现业务增长的唯一办法,那么,可伸缩性就是白日做梦。 + +自动化降低了你的人力资源需求,并且对持续进行的 IT 提供了更灵活的需求。很简单,对吗?是的,但是你必须为迟到的满意做好心理准备。为了在提高生产力和效率的基础上获得后端经济效益,需要预先投入时间和精力对架构和结构进行变更。为了你的 DevOps 团队能够成功,接受这些挑战,对 IT 领导者来说是非常重要的。 + +### 3. 信任 + +最后,相信你的 DevOps 团队并且一定要理解他们。DevOps 专家也知道这个要求很难,但是他们必须有你的强大支持和你参与实践的意愿。因为 DevOps 团队持续改进你的 IT 环境,他们自身也在不断地适应这些变化的技术,而这些变化通常正是 “你要去学习的经验”。 + +倾听,倾听,倾听他们,并且相信他们。DevOps 的改变是非常有价值的,而且也是值的去投入时间和金钱的。它可以提高效率、生产力、和业务响应能力。信任你的 DevOps 团队,并且给予他们更多的自由,实现更高效率的 IT 改进。 + +新 CIO 的底线是:将你的 DevOps 团队的潜力最大化,离开你的领导 “舒适区”,拥抱一个 “CIOps" 的转变。通过 DevOps 转变,持续地与你的 DevOps 团队共同成长,以帮助你的组织获得长期的 IT 成功。 + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio + +作者:[John Allessio][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/john-allessio +[1]:https://enterprisersproject.com/tags/devops +[2]:https://www.redhat.com/en/insights/devops?intcmp=701f2000000tjyaAAA +[3]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ From aaa9f60f2dd6c82c8277287803ee382142bdb6b7 Mon Sep 17 00:00:00 2001 From: amwps290 Date: Sat, 17 Mar 2018 16:46:17 +0800 Subject: [PATCH 212/343] Delete 20180220 How to format academic papers on Linux with groff -me.md --- ...academic papers on Linux with groff -me.md | 266 ------------------ 1 file changed, 266 deletions(-) delete mode 100644 sources/tech/20180220 How to format academic papers on Linux with groff -me.md diff --git a/sources/tech/20180220 How to format academic papers on Linux with groff -me.md b/sources/tech/20180220 How to format academic papers on Linux with groff -me.md deleted file mode 100644 index a7902856db..0000000000 --- a/sources/tech/20180220 How to format academic papers on Linux with groff -me.md +++ /dev/null @@ -1,266 +0,0 @@ -translating by amwps290 -How to format academic papers on Linux with groff -me -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T) - -I was an undergraduate student when I discovered Linux in 1993. I was so excited to have the power of a Unix system right in my dorm room, but despite its many capabilities, Linux lacked applications. Word processors like LibreOffice and OpenOffice were years away. If you wanted to use a word processor, you likely booted your system into MS-DOS and used WordPerfect, the shareware GalaxyWrite, or a similar program. - -`nroff` and `troff`. They are different interfaces to the same system: `nroff` generates plaintext output, suitable for screens or line printers, and `troff` generates very pretty output, usually for printing on a laser printer. - -That was my method, since I needed to write papers for my classes, but I preferred staying in Linux. I knew from our "big Unix" campus computer lab that Unix systems provided a set of text-formatting programs calledand. 
They are different interfaces to the same system:generates plaintext output, suitable for screens or line printers, andgenerates very pretty output, usually for printing on a laser printer. - -On Linux, `nroff` and `troff` are combined as GNU troff, more commonly known as [groff][1]. I was happy to see a version of groff included in my early Linux distribution, so I set out to learn how to use it to write class papers. The first macro set I learned was the `-me` macro package, a straightforward, easy to learn macro set. - -The first thing to know about `groff` is that it processes and formats text according to a set of macros. A macro is usually a two-character command, set on a line by itself, with a leading dot. A macro might carry one or more options. When `groff` encounters one of these macros while processing a document, it will automatically format the text appropriately. - -Below, I'll share the basics of using `groff -me` to write simple documents like class papers. I won't go deep into the details, like how to create nested lists, keeps and displays, tables, and figures. - -### Paragraphs - -Let's start with an easy example you see in almost every type of document: paragraphs. Paragraphs can be formatted with the first line either indented or not (i.e., flush against the left margin). Many printed documents, including academic papers, magazines, journals, and books, use a combination of the two types, with the first (leading) paragraph in a document or chapter flush left and all other (regular) paragraphs indented. In `groff -me`, you can use both paragraph types: leading paragraphs (`.lp`) and regular paragraphs (`.pp`). -``` -.lp - -This is the first paragraph. - -.pp - -This is a standard paragraph. - -``` - -### Text formatting - -The macro to format text in bold is `.b` and to format in italics is `.i`. If you put `.b` or `.i` on a line by itself, then all text that comes after it will be in bold or italics. But it's more likely you just want to put one or a few words in bold or italics. To make one word bold or italics, put that word on the same line as `.b` or `.i`, as an option. To format multiple words in **bold** or italics, enclose your text in quotes. -``` -.pp - -You can do basic formatting such as - -.i italics - -or - -.b "bold text." - -``` - -In the above example, the period at the end of **bold text** will also be in bold type. In most cases, that's not what you want. It's more correct to only have the words **bold text** in bold, but not the trailing period. To get the effect you want, you can add a second argument to `.b` or `.i` to indicate any text that should trail the bolded or italicized text, but in normal type. For example, you might do this to ensure that the trailing period doesn't show up in bold type. -``` -.pp - -You can do basic formatting such as - -.i italics - -or - -.b "bold text" . - -``` - -### Lists - -With `groff -me`, you can create two types of lists: bullet lists (`.bu`) and numbered lists (`.np`). -``` -.pp - -Bullet lists are easy to make: - -.bu - -Apple - -.bu - -Banana - -.bu - -Pineapple - -.pp - -Numbered lists are as easy as: - -.np - -One - -.np - -Two - -.np - -Three - -.pp - -Note that numbered lists will reset at the next pp or lp. - -``` - -### Subheads - -If you're writing a long paper, you might want to divide your content into sections. With `groff -me`, you can create numbered headings (`.sh`) and unnumbered headings (`.uh`). In either, enclose the section title in quotes as an argument. 
For numbered headings, you also need to provide the heading level: `1` will give a first-level heading (e.g., 1.). Similarly, `2` and `3` will give second and third level headings, such as 2.1 or 3.1.1. -``` -.uh Introduction - -.pp - -Provide one or two paragraphs to describe the work - -and why it is important. - -.sh 1 "Method and Tools" - -.pp - -Provide a few paragraphs to describe how you - -did the research, including what equipment you used - -``` - -### Smart quotes and block quotes - -It's standard in any academic paper to cite other people's work as evidence. If you're citing a brief quote to highlight a key message, you can just type quotes around your text. But groff won't automatically convert your quotes into the "smart" or "curly" quotes used by modern word processing systems. To create them in `groff -me`, insert an inline macro to create the left quote (`\*(lq`) and right quote mark (`\*(rq`). -``` -.pp - -Christine Peterson coined the phrase \*(lqopen source.\*(rq - -``` - -There's also a shortcut in `groff -me` to create these quotes (`.q`) that I find easier to use. -``` -.pp - -Christine Peterson coined the phrase - -.q "open source." - -``` - -If you're citing a longer quote that spans several lines, you'll want to use a block quote. To do this, insert the blockquote macro (`.(q`) at the beginning and end of the quote. -``` -.pp - -Christine Peterson recently wrote about open source: - -.(q - -On April 7, 1998, Tim O'Reilly held a meeting of key - -leaders in the field. Announced in advance as the first - -.q "Freeware Summit," - -by April 14 it was referred to as the first - -.q "Open Source Summit." - -.)q - -``` - -### Footnotes - -To insert a footnote, include the footnote macro (`.(f`) before and after the footnote text, and use an inline macro (`\**`) to add the footnote mark. The footnote mark should appear both in the text and in the footnote itself. -``` -.pp - -Christine Peterson recently wrote about open source:\** - -.(f - -\**Christine Peterson. - -.q "How I coined the term open source." - -.i "OpenSource.com." - -1 Feb 2018. - -.)f - -.(q - -On April 7, 1998, Tim O'Reilly held a meeting of key - -leaders in the field. Announced in advance as the first - -.q "Freeware Summit," - -by April 14 it was referred to as the first - -.q "Open Source Summit." - -.)q - -``` - -### Cover page - -Most class papers require a cover page containing the paper's title, your name, and the date. Creating a cover page in `groff -me` requires some assembly. I find the easiest way is to use centered blocks of text and add extra lines between the title, name, and date. (I prefer to use two blank lines between each.) At the top of your paper, start with the title page (`.tp`) macro, insert five blank lines (`.sp 5` ), then add the centered text (`.(c`), and extra blank lines (`.sp 2`). -``` -.tp - -.sp 5 - -.(c - -.b "Writing Class Papers with groff -me" - -.)c - -.sp 2 - -.(c - -Jim Hall - -.)c - -.sp 2 - -.(c - -February XX, 2018 - -.)c - -.bp - -``` - -The last macro (`.bp`) tells groff to add a page break after the title page. - -### Learning more - -Those are the essentials of writing professional-looking a paper in `groff -me` with leading and indented paragraphs, bold and italics text, bullet and numbered lists, numbered and unnumbered section headings, block quotes, and footnotes. - -I've included a sample groff file to demonstrate all of this formatting. Save the `lorem-ipsum.me` file to your system and run it through groff. 
The `-Tps` option sets the output type to PostScript so you can send the document to a printer or convert it to a PDF file using the `ps2pdf` program. -``` -groff -Tps -me lorem-ipsum.me > lorem-ipsum.me.ps - -ps2pdf lorem-ipsum.me.ps lorem-ipsum.me.pdf - -``` - -If you'd like to use more advanced functions in `groff -me`, refer to Eric Allman's "Writing Papers with Groff using `−me`," which you should find on your system as `meintro.me` in groff's `doc` directory. It's a great reference document that explains other ways to format papers using the `groff -me` macros. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-me - -作者:[Jim Hall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jim-hall -[1]:https://www.gnu.org/software/groff/ From c7b4325483eb922bad867c58579a8ed9818c28f4 Mon Sep 17 00:00:00 2001 From: amwps290 Date: Sat, 17 Mar 2018 16:56:57 +0800 Subject: [PATCH 213/343] translated by amwps290 --- ...academic papers on Linux with groff -me.md | 277 ++++++++++++++++++ 1 file changed, 277 insertions(+) create mode 100644 translated/tech/20180220 How to format academic papers on Linux with groff -me.md diff --git a/translated/tech/20180220 How to format academic papers on Linux with groff -me.md b/translated/tech/20180220 How to format academic papers on Linux with groff -me.md new file mode 100644 index 0000000000..55b2e48517 --- /dev/null +++ b/translated/tech/20180220 How to format academic papers on Linux with groff -me.md @@ -0,0 +1,277 @@ +# 在 Linux 上使用 groff -me 格式化你的学术论文 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T) + +当我在 1993 年发现 Linux 时,我还是一名本科生。我很兴奋在我的宿舍里拥有 Unix 系统的强大功能,但是尽管它有很多功能,Linux 却缺乏应用程序。像 LibreOffice 和 OpenOffice 这样的文字处理程序还需要几年的时间。如果你想使用文字处理器,你可能会将你的系统引导到 MS-DOS 中,并使用 WordPerfect、shareware GalaxyWrite 或类似的程序。 + +`nroff` 和 `troff ` 。它们是同一系统的不同接口:`nroff` 生成纯文本输出,适用于屏幕或行式打印机,而 `troff` 产生非常优美的输出,通常用于在激光打印机上打印。 + +这就是我的方法,因为我需要为我的课程写论文,但我更喜欢呆在 Linux 中。我从我们的 “大 Unix ” 校园计算机实验室得知,Unix 系统提供了一组文本格式化的程序。它们是同一系统的不同接口:生成纯文本的输出,适合于屏幕或行打印机,或者生成非常优美的输出,通常用于在激光打印机上打印。 + +在 Linux 上,`nroff` 和 `troff` 被合并为 GNU troff,通常被称为 [groff][1]。 我很高兴看到早期的 Linux 发行版中包含了某个版本的 groff,因此我着手学习如何使用它来编写课程论文。 我学到的第一个宏集是 `-me` 宏包,一个简单易学的宏集。 + +关于 `groff` ,首先要了解的是它根据一组宏处理和格式化文本。一个宏通常是一个两个字符的命令,它自己设置在一行上,并带有一个引导点。宏可能包含一个或多个选项。当 groff 在处理文档时遇到这些宏中的一个时,它会自动对文本进行格式化。 + +下面,我将分享使用 `groff -me` 编写课程论文等简单文档的基础知识。 我不会深入细节进行讨论,比如如何创建嵌套列表,保存和显示,以及使用表格和数字。 + +### 段落 + +让我们从一个简单的例子开始,在几乎所有类型的文档中都可以看到:段落。段落可以格式化第一行的缩进或不缩进(即,与左边齐平)。 包括学术论文,杂志,期刊和书籍在内的许多印刷文档都使用了这两种类型的组合,其中文档或章节中的第一个(主要)段落与左侧的所有段落以及所有其他(常规)段落缩进。 在 `groff -me`中,您可以使用两种段落类型:前导段落(`.lp`)和常规段落(`.pp`)。 + +``` +.lp + +This is the first paragraph. + +.pp + +This is a standard paragraph. + +``` + +### 文本格式 + +用粗体格式化文本的宏是 `.b`,斜体格式是 `.i` 。 如果您将 `.b` 或 `.i` 放在一行上,则后面的所有文本将以粗体或斜体显示。 但更有可能你只是想用粗体或斜体来表示一个或几个词。 要将一个词加粗或斜体,将该单词放在与 `.b` 或 `.i` 相同的行上作为选项。 要用**粗体**或斜体格式化多个单词,请将文字用引号引起来。 + +``` +.pp + +You can do basic formatting such as + +.i italics + +or + +.b "bold text." + +``` + +在上面的例子中,粗体文本结尾的句点也是粗体。 在大多数情况下,这不是你想要的。 只要文字是粗体字,而不是后面的句点也是粗体字。 要获得您想要的效果,您可以向 `.b` 或 `.i` 添加第二个参数,以指示要以粗体或斜体显示的文本,但是正常类型的文本。 您可以这样做,以确保尾随句点不会以粗体显示。 + +``` +.pp + +You can do basic formatting such as + +.i italics + +or + +.b "bold text" . 
+ +``` + +### 列表 + +使用 `groff -me`,您可以创建两种类型的列表:无序列表(`.bu`)和有序列表(`.np`)。 + +``` +.pp + +Bullet lists are easy to make: + +.bu + +Apple + +.bu + +Banana + +.bu + +Pineapple + +.pp + +Numbered lists are as easy as: + +.np + +One + +.np + +Two + +.np + +Three + +.pp + +Note that numbered lists will reset at the next pp or lp. + +``` + +### 副标题 + +如果你正在写一篇长论文,你可能想把你的内容分成几部分。使用 `groff -me`,您可以创建编号的标题 (`.sh`) 和未编号的标题 (`.uh`)。在这两种方法中,将节标题作为参数括起来。对于编号的标题,您还需要提供标题级别 `:1` 将给出一个一级标题(例如,1)。同样,`2` 和 `3` 将给出第二和第三级标题,如 2.1 或 3.1.1。 + +``` +.uh Introduction + +.pp + +Provide one or two paragraphs to describe the work + +and why it is important. + +.sh 1 "Method and Tools" + +.pp + +Provide a few paragraphs to describe how you + +did the research, including what equipment you used + +``` + +### 智能引号和块引号 + +在任何学术论文中,引用他人的工作作为证据都是正常的。如果你引用一个简短的引用来突出一个关键信息,你可以在你的文本周围键入引号。但是 groff 不会自动将你的引用转换成现代文字处理系统所使用的“智能”或“卷曲”引用。要在 `groff -me` 中创建它们,插入一个内联宏来创建左引号(`\*(lq`)和右引号(`\*(rq`)。 + +``` +.pp + +Christine Peterson coined the phrase \*(lqopen source.\*(rq + +``` + +`groff -me` 中还有一个快捷方式来创建这些引号(`.q`),我发现它更易于使用。 + +``` +.pp + +Christine Peterson coined the phrase + +.q "open source." + +``` + +如果引用的是跨越几行的较长的引用,则需要使用一个块引用。为此,在引用的开头和结尾插入块引用宏( + +`.(q`)。 + +``` +.pp + +Christine Peterson recently wrote about open source: + +.(q + +On April 7, 1998, Tim O'Reilly held a meeting of key + +leaders in the field. Announced in advance as the first + +.q "Freeware Summit," + +by April 14 it was referred to as the first + +.q "Open Source Summit." + +.)q + +``` + +### 脚注 + +要插入脚注,请在脚注文本前后添加脚注宏(`.(f`),并使用内联宏(`\ **`)添加脚注标记。脚注标记应出现在文本中和脚注中。 + +``` +.pp + +Christine Peterson recently wrote about open source:\** + +.(f + +\**Christine Peterson. + +.q "How I coined the term open source." + +.i "OpenSource.com." + +1 Feb 2018. + +.)f + +.(q + +On April 7, 1998, Tim O'Reilly held a meeting of key + +leaders in the field. Announced in advance as the first + +.q "Freeware Summit," + +by April 14 it was referred to as the first + +.q "Open Source Summit." 
+ +.)q + +``` + +### 封面 + +大多数课程论文都需要一个包含论文标题,姓名和日期的封面。 在 `groff -me` 中创建封面需要一些组件。 我发现最简单的方法是使用居中的文本块并在标题,名称和日期之间添加额外的行。 (我倾向于在每一行之间使用两个空行)。在文章顶部,从标题页(`.tp`)宏开始,插入五个空白行(`.sp 5`),然后添加居中文本(`.(c`) 和额外的空白行(`.sp 2`)。 + +``` +.tp + +.sp 5 + +.(c + +.b "Writing Class Papers with groff -me" + +.)c + +.sp 2 + +.(c + +Jim Hall + +.)c + +.sp 2 + +.(c + +February XX, 2018 + +.)c + +.bp + +``` + +最后一个宏(`.bp`)告诉 groff 在标题页后添加一个分页符。 + +### 更多内容 + +这些是用 `groff-me` 写一份专业的论文非常基础的东西,包括前导和缩进段落,粗体和斜体,有序和无需列表,编号和不编号的章节标题,块引用以及脚注。 + +我已经包含一个示例 groff 文件来演示所有这些格式。 将 `lorem-ipsum.me` 文件保存到您的系统并通过 groff 运行。 `-Tps` 选项将输出类型设置为 `PostScript` ,以便您可以将文档发送到打印机或使用 `ps2pdf` 程序将其转换为 PDF 文件。 + +``` +groff -Tps -me lorem-ipsum.me > lorem-ipsum.me.ps + +ps2pdf lorem-ipsum.me.ps lorem-ipsum.me.pdf + +``` + +如果你想使用 groff-me 的更多高级功能,请参阅 Eric Allman 所著的 “使用 `Groff-me` 来写论文”,你可以在你系统的 groff 的 `doc` 目录下找到一个名叫 `meintro.me` 的文件。这份文档非常完美的说明了如何使用 `groff-me` 宏来格式化你的论文。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-me + +作者:[Jim Hall][a] +译者:[amwps290](https://github.com/amwps290) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jim-hall +[1]:https://www.gnu.org/software/groff/ From 0905410c8a090b5f44f5408b504ca482752d124e Mon Sep 17 00:00:00 2001 From: qhwdw Date: Sat, 17 Mar 2018 17:52:19 +0800 Subject: [PATCH 214/343] Translated by qhwdw --- ...0180127 Your instant Kubernetes cluster.md | 172 ------------------ ...0180127 Your instant Kubernetes cluster.md | 170 +++++++++++++++++ 2 files changed, 170 insertions(+), 172 deletions(-) delete mode 100644 sources/tech/20180127 Your instant Kubernetes cluster.md create mode 100644 translated/tech/20180127 Your instant Kubernetes cluster.md diff --git a/sources/tech/20180127 Your instant Kubernetes cluster.md b/sources/tech/20180127 Your instant Kubernetes cluster.md deleted file mode 100644 index d804986aac..0000000000 --- a/sources/tech/20180127 Your instant Kubernetes cluster.md +++ /dev/null @@ -1,172 +0,0 @@ -Translating by qhwdw -Your instant Kubernetes cluster -============================================================ - - -This is a condensed and updated version of my previous tutorial [Kubernetes in 10 minutes][10]. I've removed just about everything I can so this guide still makes sense. Use it when you want to create a cluster on the cloud or on-premises as fast as possible. - -### 1.0 Pick a host - -We will be using Ubuntu 16.04 for this guide so that you can copy/paste all the instructions. Here are several environments where I've tested this guide. Just pick where you want to run your hosts. - -* [DigitalOcean][1] - developer cloud - -* [Civo][2] - UK developer cloud - -* [Packet][3] - bare metal cloud - -* 2x Dell Intel i7 boxes - at home - -> Civo is a relatively new developer cloud and one thing that I really liked was how quickly they can bring up hosts - in about 25 seconds. I'm based in the UK so I also get very low latency. - -### 1.1 Provision the machines - -You can get away with a single host for testing but I'd recommend at least three so we have a single master and two worker nodes. - -Here are some other guidelines: - -* Pick dual-core hosts with ideally at least 2GB RAM - -* If you can pick a custom username when provisioning the host then do that rather than root. For example Civo offers an option of `ubuntu`, `civo` or `root`. 
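
Once you can log in to a freshly provisioned host, it is worth a ten-second sanity check that it meets the guidelines above. The two commands below are purely illustrative (standard Ubuntu tools, nothing Kubernetes-specific):

```
$ nproc        # expect 2 or more cores
$ free -h      # look for roughly 2GB of total memory or more
```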
- -Now run through the following steps on each machine. It should take you less than 5-10 minutes. If that's too slow for you then you can use my utility script [kept in a Gist][11]: - -``` -$ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c/raw/23fc4cd13910eac646b13c4f8812bab3eeebab4c/configure.sh | sh - -``` - -### 1.2 Login and install Docker - -Install Docker from the Ubuntu apt repository. This will be an older version of Docker but as Kubernetes is tested with old versions of Docker it will work in our favour. - -``` -$ sudo apt-get update \ - && sudo apt-get install -qy docker.io - -``` - -### 1.3 Disable the swap file - -This is now a mandatory step for Kubernetes. The easiest way to do this is to edit `/etc/fstab` and to comment out the line referring to swap. - -To save a reboot then type in `sudo swapoff -a`. - -> Disabling swap memory may appear like a strange requirement at first. If you are curious about this step then [read more here][4]. - -### 1.4 Install Kubernetes packages - -``` -$ sudo apt-get update \ - && sudo apt-get install -y apt-transport-https \ - && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - - -$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \ - | sudo tee -a /etc/apt/sources.list.d/kubernetes.list \ - && sudo apt-get update - -$ sudo apt-get update \ - && sudo apt-get install -y \ - kubelet \ - kubeadm \ - kubernetes-cni - -``` - -### 1.5 Create the cluster - -At this point we create the cluster by initiating the master with `kubeadm`. Only do this on the master node. - -> Despite any warnings I have been assured by [Weaveworks][5] and Lucas (the maintainer) that `kubeadm` is suitable for production use. - -``` -$ sudo kubeadm init - -``` - -If you missed a step or there's a problem then `kubeadm` will let you know at this point. - -Take a copy of the Kube config: - -``` -mkdir -p $HOME/.kube -sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config -sudo chown $(id -u):$(id -g) $HOME/.kube/config - -``` - -Make sure you note down the join token command i.e. - -``` -$ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256: - -``` - -### 2.0 Install networking - -Many networking providers are available for Kubernetes, but none are included by default, so let's use Weave Net from [Weaveworks][12] which is one of the most popular options in the Kubernetes community. It tends to work out of the box without additional configuration. - -``` -$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" - -``` - -If you have private networking enabled on your host then you may need to alter the private subnet that Weavenet uses for allocating IP addresses to Pods (containers). Here's an example of how to do that: - -``` -$ curl -SL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.6.64/27" \ -| kubectl apply -f - - -``` - -> Weave also have a very cool visualisation tool called Weave Cloud. It's free and will show you the path traffic is taking between your Pods. [See here for an example with the OpenFaaS project][6]. - -### 2.2 Join the worker nodes to the cluster - -Now you can switch to each of your workers and use the `kubeadm join` command from 1.5\. Once you run that log out of the workers. - -### 3.0 Profit - -That's it - we're done. You have a cluster up and running and can deploy your applications. 
If you need to setup a dashboard UI then consult the [Kubernetes documentation][13]. - -``` -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -openfaas1 Ready master 20m v1.9.2 -openfaas2 Ready 19m v1.9.2 -openfaas3 Ready 19m v1.9.2 - -``` - -If you want to see my running through creating a cluster step-by-step and showing you how `kubectl` works then checkout my video below and make sure you subscribe - - -You can also get an "instant" Kubernetes cluster on your Mac for development using Minikube or Docker for Mac Edge edition. [Read my review and first impressions here][14]. - - --------------------------------------------------------------------------------- - -via: https://blog.alexellis.io/your-instant-kubernetes-cluster/ - -作者:[Alex Ellis ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.alexellis.io/author/alex/ -[1]:https://www.digitalocean.com/ -[2]:https://www.civo.com/ -[3]:https://packet.net/ -[4]:https://github.com/kubernetes/kubernetes/issues/53533 -[5]:https://weave.works/ -[6]:https://www.weave.works/blog/openfaas-gke -[7]:https://blog.alexellis.io/tag/kubernetes/ -[8]:https://blog.alexellis.io/tag/k8s/ -[9]:https://blog.alexellis.io/tag/cloud-native/ -[10]:https://www.youtube.com/watch?v=6xJwQgDnMFE -[11]:https://gist.github.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c -[12]:https://weave.works/ -[13]:https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ -[14]:https://blog.alexellis.io/docker-for-mac-with-kubernetes/ -[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/# diff --git a/translated/tech/20180127 Your instant Kubernetes cluster.md b/translated/tech/20180127 Your instant Kubernetes cluster.md new file mode 100644 index 0000000000..ac06ba730f --- /dev/null +++ b/translated/tech/20180127 Your instant Kubernetes cluster.md @@ -0,0 +1,170 @@ +“开箱即用” 的 Kubernetes 集群 +============================================================ + + +这是我以前的 [10 分钟内配置 Kubernetes][10] 教程的精简版和更新版。我删除了一些我认为可以去掉的内容,所以,这个指南仍然是可理解的。当你想在云上创建一个集群或者尽可能快地构建基础设施时,你可能会用到它。 + +### 1.0 挑选一个主机 + +我们在本指南中将使用 Ubuntu 16.04,这样你就可以直接拷贝/粘贴所有的指令。下面是我用本指南测试过的几种环境。根据你运行的主机,你可以从中挑选一个。 + +* [DigitalOcean][1] - 开发者云 + +* [Civo][2] - UK 开发者云 + +* [Packet][3] - 裸机云 + +* 2x Dell Intel i7 服务器 —— 它在我家中 + +> Civo 是一个相对较新的开发者云,我比较喜欢的一点是,它开机时间只有 25 秒,我就在英国,因此,它的延迟很低。 + +### 1.1 准备机器 + +你可以使用一个单台主机进行测试,但是,我建议你至少使用三台机器,这样你就有一个主节点和两个工作节点。 + +下面是一些其他的指导原则: + +* 最好选至少有 2 GB 内存的双核主机 + +* 在准备主机的时候,如果你可以自定义用户名,那么就不要使用 root。例如,Civo 通常让你在 `ubuntu`、`civo` 或者 `root` 中选一个。 + +现在,在每台机器上都运行以下的步骤。它将需要 5-10 钟时间。如果你觉得太慢了,你可以使用我的脚本 [kept in a Gist][11]: + +``` +$ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c/raw/23fc4cd13910eac646b13c4f8812bab3eeebab4c/configure.sh | sh + +``` + +### 1.2 登入和安装 Docker + +从 Ubuntu 的 apt 仓库中安装 Docker。它的版本可能有点老,但是,Kubernetes 在老版本的 Docker 中是测试过的,工作的很好。 + +``` +$ sudo apt-get update \ + && sudo apt-get install -qy docker.io + +``` + +### 1.3 禁用 swap 文件 + +这是 Kubernetes 的强制步骤。实现它很简单,编辑 `/etc/fstab` 文件,然后注释掉引用 swap 的行即可。 + +保存它,重启后输入 `sudo swapoff -a`。 + +> 一开始就禁用 swap 内存,你可能觉得这个要求很奇怪,如果你对这个做法感到好奇,你可以去 [这里阅读它的相关内容][4]。 + +### 1.4 安装 Kubernetes 包 + +``` +$ sudo apt-get update \ + && sudo apt-get install -y apt-transport-https \ + && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - + +$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \ + | sudo tee -a 
/etc/apt/sources.list.d/kubernetes.list \ + && sudo apt-get update + +$ sudo apt-get update \ + && sudo apt-get install -y \ + kubelet \ + kubeadm \ + kubernetes-cni + +``` + +### 1.5 创建集群 + +这时候,我们使用 `kubeadm` 初始化主节点并创建集群。这一步仅在主节点上操作。 + +> 虽然有警告,但是 [Weaveworks][5] 和 Lucas(他们是维护者)向我保证,`kubeadm` 是可用于生产系统的。 + +``` +$ sudo kubeadm init + +``` + +如果你错过一个步骤或者有问题,`kubeadm` 将会及时告诉你。 + +我们复制一份 Kube 配置: + +``` +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config + +``` + +确保你一定要记下如下的加入 token 命令。 + +``` +$ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256: + +``` + +### 2.0 安装网络 + +Kubernetes 可用于任何网络供应商的产品或服务,但是,默认情况下什么也没有,因此,我们使用来自  [Weaveworks][12] 的 Weave Net,它是 Kebernetes 社区中非常流行的选择之一。它倾向于不需要额外配置的 “开箱即用”。 + +``` +$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" + +``` + +如果在你的主机上启用了私有网络,那么,你可能需要去修改 Weavenet 使用的私有子网络,以便于为 Pods(容器)分配 IP 地址。下面是命令示例: + +``` +$ curl -SL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.6.64/27" \ +| kubectl apply -f - + +``` + +> Weave 也有很酷的称为 Weave Cloud 的可视化工具。它是免费的,你可以在它上面看到你的 Pods 之间的路径流量。[这里有一个使用 OpenFaaS 项目的示例][6]。 + +### 2.2 在集群中加入工作节点 + +现在,你可以切换到你的每一台工作节点,然后使用 1.5 节中的 `kubeadm join` 命令。运行完成后,登出那个工作节点。 + +### 3.0 收益 + +到此为止 —— 我们全部配置完成了。你现在有一个正在运行着的集群,你可以在它上面部署应用程序。如果你需要设置仪表板 UI,你可以去参考 [Kubernetes 文档][13]。 + +``` +$ kubectl get nodes +NAME STATUS ROLES AGE VERSION +openfaas1 Ready master 20m v1.9.2 +openfaas2 Ready 19m v1.9.2 +openfaas3 Ready 19m v1.9.2 + +``` + +如果你想看到我一步一步创建集群并且展示 `kubectl` 如何工作的视频,你可以看下面我的视频,你可以订阅它。 + + +想在你的 Mac 电脑上,使用 Minikube 或者 Docker 的 Mac Edge 版本,安装一个 “开箱即用” 的 Kubernetes 集群,[阅读在这里的我的评估和第一印象][14]。 + +-------------------------------------------------------------------------------- + +via: https://blog.alexellis.io/your-instant-kubernetes-cluster/ + +作者:[Alex Ellis ][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.alexellis.io/author/alex/ +[1]:https://www.digitalocean.com/ +[2]:https://www.civo.com/ +[3]:https://packet.net/ +[4]:https://github.com/kubernetes/kubernetes/issues/53533 +[5]:https://weave.works/ +[6]:https://www.weave.works/blog/openfaas-gke +[7]:https://blog.alexellis.io/tag/kubernetes/ +[8]:https://blog.alexellis.io/tag/k8s/ +[9]:https://blog.alexellis.io/tag/cloud-native/ +[10]:https://www.youtube.com/watch?v=6xJwQgDnMFE +[11]:https://gist.github.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c +[12]:https://weave.works/ +[13]:https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ +[14]:https://blog.alexellis.io/docker-for-mac-with-kubernetes/ +[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/# From 172934084bc58c3d70bfb01f402f5f79948854b7 Mon Sep 17 00:00:00 2001 From: cmn <2545489745@qq.com> Date: Sat, 17 Mar 2018 19:29:34 +0800 Subject: [PATCH 215/343] translated --- ... Generate A Random Number - Quarrelsome.md | 95 ------------------- ... 
Generate A Random Number - Quarrelsome.md | 94 ++++++++++++++++++ 2 files changed, 94 insertions(+), 95 deletions(-) delete mode 100644 sources/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md create mode 100644 translated/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md diff --git a/sources/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md b/sources/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md deleted file mode 100644 index 6ba977eeed..0000000000 --- a/sources/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md +++ /dev/null @@ -1,95 +0,0 @@ -translating by kimii -How To Safely Generate A Random Number — Quarrelsome -====== -### Use urandom - -Use [urandom][1]. Use [urandom][2]. Use [urandom][3]. Use [urandom][4]. Use [urandom][5]. Use [urandom][6]. - -### But what about for crypto keys? - -Still [urandom][6]. - -### Why not {SecureRandom, OpenSSL, havaged, &c}? - -These are userspace CSPRNGs. You want to use the kernel’s CSPRNG, because: - - * The kernel has access to raw device entropy. - - * It can promise not to share the same state between applications. - - * A good kernel CSPRNG, like FreeBSD’s, can also promise not to feed you random data before it’s seeded. - - - - -Study the last ten years of randomness failures and you’ll read a litany of userspace randomness failures. [Debian’s OpenSSH debacle][7]? Userspace random. Android Bitcoin wallets [repeating ECDSA k’s][8]? Userspace random. Gambling sites with predictable shuffles? Userspace random. - -Userspace OpenSSL also seeds itself from “from uninitialized memory, magical fairy dust and unicorn horns” generators almost always depend on the kernel’s generator anyways. Even if they don’t, the security of your whole system sure does. **A userspace CSPRNG doesn’t add defense-in-depth; instead, it creates two single points of failure.** - -### Doesn’t the man page say to use /dev/random? - -You But, more on this later. Stay your pitchforks. should ignore the man page. Don’t use /dev/random. The distinction between /dev/random and /dev/urandom is a Unix design wart. The man page doesn’t want to admit that, so it invents a security concern that doesn’t really exist. Consider the cryptographic advice in random(4) an urban legend and get on with your life. - -### But what if I need real random values, not psuedorandom values? - -Both urandom and /dev/random provide the same kind of randomness. Contrary to popular belief, /dev/random doesn’t provide “true random” data. For cryptography, you don’t usually want “true random”. - -Both urandom and /dev/random are based on a simple idea. Their design is closely related to that of a stream cipher: a small secret is stretched into an indefinite stream of unpredictable values. Here the secrets are “entropy”, and the stream is “output”. - -Only on Linux are /dev/random and urandom still meaningfully different. The Linux kernel CSPRNG rekeys itself regularly (by collecting more entropy). But /dev/random also tries to keep track of how much entropy remains in its kernel pool, and will occasionally go on strike if it decides not enough remains. This design is as silly as I’ve made it sound; it’s akin to AES-CTR blocking based on how much “key” is left in the “keystream”. - -If you use /dev/random instead of urandom, your program will unpredictably (or, if you’re an attacker, very predictably) hang when Linux gets confused about how its own RNG works. 
Using /dev/random will make your programs less stable, but it won’t make them any more cryptographically safe. - -### There’s a catch here, isn’t there? - -No, but there’s a Linux kernel bug you might want to know about, even though it doesn’t change which RNG you should use. - -On Linux, if your software runs immediately at boot, and/or the OS has just been installed, your code might be in a race with the RNG. That’s bad, because if you win the race, there could be a window of time where you get predictable outputs from urandom. This is a bug in Linux, and you need to know about it if you’re building platform-level code for a Linux embedded device. - -This is indeed a problem with urandom (and not /dev/random) on Linux. It’s also a [bug in the Linux kernel][9]. But it’s also easily fixed in userland: at boot, seed urandom explicitly. Most Linux distributions have done this for a long time. But don’t switch to a different CSPRNG. - -### What about on other operating systems? - -FreeBSD and OS X do away with the distinction between urandom and /dev/random; the two devices behave identically. Unfortunately, the man page does a poor job of explaining why this is, and perpetuates the myth that Linux urandom is scary. - -FreeBSD’s kernel crypto RNG doesn’t block regardless of whether you use /dev/random or urandom. Unless it hasn’t been seeded, in which case both block. This behavior, unlike Linux’s, makes sense. Linux should adopt it. But if you’re an app developer, this makes little difference to you: Linux, FreeBSD, iOS, whatever: use urandom. - -### tl;dr - -Use urandom. - -### Epilog - -[ruby-trunk Feature #9569][10] - -> Right now, SecureRandom.random_bytes tries to detect an OpenSSL to use before it tries to detect /dev/urandom. I think it should be the other way around. In both cases, you just need random bytes to unpack, so SecureRandom could skip the middleman (and second point of failure) and just talk to /dev/urandom directly if it’s available. - -Resolution: - -> /dev/urandom is not suitable to be used to generate directly session keys and other application level random data which is generated frequently. -> -> [the] random(4) [man page] on GNU/Linux [says]… - -Thanks to Matthew Green, Nate Lawson, Sean Devlin, Coda Hale, and Alex Balducci for reading drafts of this. Fair warning: Matthew only mostly agrees with me. 
- --------------------------------------------------------------------------------- - -via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/ - -作者:[Thomas;Erin;Matasano][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://sockpuppet.org/blog -[1]:http://blog.cr.yp.to/20140205-entropy.html -[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf -[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go -[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key -[5]:http://stackoverflow.com/a/5639631 -[6]:https://twitter.com/bramcohen/status/206146075487240194 -[7]:http://research.swtch.com/openssl -[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/ -[9]:https://factorable.net/weakkeys12.extended.pdf -[10]:https://bugs.ruby-lang.org/issues/9569 diff --git a/translated/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md b/translated/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md new file mode 100644 index 0000000000..23765b77ba --- /dev/null +++ b/translated/tech/20140225 How To Safely Generate A Random Number - Quarrelsome.md @@ -0,0 +1,94 @@ +如何安全地生成随机数 - 争论 +====== +### 使用 urandom + +使用 urandom。使用 urandom。使用 urandom。使用 urandom。使用 urandom。使用 urandom。 + +### 但对于密码学密钥呢? + +仍然使用 urandom[6]。 + +### 为什么不是 SecureRandom, OpenSSL, havaged 或者 c 语言实现呢? + +这些是用户空间的 CSPRNG(伪随机数生成器)。你应该想用内核的 CSPRNG,因为: + + * 内核可以访问原始设备熵。 + + * 它保证不在应用程序之间共享相同的状态。 + + * 一个好的内核 CSPRNG,像 FreeBSD 中的,也可以保证在提供种子之前不给你随机数据。 + + + + +研究过去十年中的随机失败案例,你会看到一连串的用户空间随机失败。[Debian 的 OpenSSH 崩溃][7]?用户空间随机。安卓的比特币钱包[重复 ECDSA k's][8]?用户空间随机。可预测洗牌的赌博网站?用户空间随机。 + +用户空间生成器几乎总是依赖于内核的生成器。即使它们不这样做,整个系统的安全性也会确保如此。**用户空间的 CSPRNG 不会增加防御深度;相反,它会产生两个单点故障。** + +### 手册页不是说使用/dev/random嘛? + +这个稍后详述,保留你的想法。你应该忽略掉手册页。不要使用 /dev/random。/dev/random 和 /dev/urandom 之间的区别是 Unix 设计缺陷。手册页不想承认这一点,因此它产生了一个并不存在的安全问题。把 random(4) 上的密码学上的建议当作传奇,继续你的生活。 + +### 但是如果我需要的是真随机值,而非伪随机值呢? + +Urandom 和 /dev/random 提供的是同一类型的随机。与流行的观念相反,/dev/random 不提供“真正的随机”。从密码学上来说,你通常不需要“真正的随机”。 + +Urandom 和 /dev/random 都基于一个简单的想法。它们的设计与流密码的设计密切相关:一个小秘密被延伸到不可预测值的不确定流中。 这里的秘密是“熵”,而流是“输出”。 + +只在 Linux 上 /dev/random 和 urandom 仍然有意义上的不同。Linux 内核的 CSPRNG 定期进行密钥更新(通过收集更多的熵)。但是 /dev/random 也试图跟踪内核池中剩余的熵,并且如果它没有足够的剩余熵时,偶尔也会罢工。这种设计和我所说的一样蠢;这与基于“密钥流”中剩下多少“密钥”的 AES-CTR 设计类似。 + +如果你使用 /dev/random 而非 urandom,那么当 Linux 对自己的 RNG(随机数生成器)如何工作感到困惑时,你的程序将不可预测地(或者如果你是攻击者,非常可预测地)挂起。使用 /dev/random 会使你的程序不太稳定,但在密码学角度上它也不会让程序更加安全。 + +### 这里有个缺陷,不是吗? + +不是,但存在一个你可能想要了解的 Linux 内核 bug,即使这并不能改变你应该使用哪一个 RNG。 + +在 Linux 上,如果你的软件在引导时立即运行,并且/或者刚刚安装了操作系统,那么你的代码可能会与 RNG 发生竞争。这很糟糕,因为如果你赢了比赛,那么你可能会在一段时间内从 urandom 获得可预测的输出。这是 Linux 中的一个 bug,如果你正在为 Linux 嵌入式设备构建平台级代码,那你需要了解它。 + +在 Linux 上,这确实是 urandom(而不是 /dev/random)的问题。这也是[Linux 内核中的错误][9]。 但它也容易在用户空间中修复:在引导时,明确地为 urandom 提供种子。长期以来,大多数 Linux 发行版都是这么做的。但不要切换到不同的 CSPRNG。 + +### 在其它操作系统上呢? 
+ +FreeBSD 和 OS X 消除了 urandom 和 /dev/random 之间的区别; 这两个设备的行为是相同的。不幸的是,手册页在解释为什么这样做上干的很糟糕,并延续了 Linux 上 urandom 可怕的神话。 + +无论你使用 /dev/random 还是 urandom,FreeBSD 的内核加密 RNG 都不会阻塞。 除非它没有被提供种子,在这种情况下,这两者都会阻塞。与 Linux 不同,这种行为是有道理的。Linux 应该采用它。但是,如果你是一名应用程序开发人员,这对你几乎没有什么影响:Linux,FreeBSD,iOS,无论什么:使用 urandom 吧。 + +### 太长了,懒得看 + +直接使用 urandom 吧。 + +### 结语 + +[ruby-trunk Feature #9569][10] + +> 现在,在尝试检测 /dev/urandom 之前,SecureRandom.random_bytes 会尝试检测要使用的 OpenSSL。 我认为这应该反过来。在这两种情况下,你只需要将随机字节进行解压,所以 SecureRandom 可以跳过中间人(和第二个故障点),如果可用的话可以直接与 /dev/urandom 进行交互。 + +总结: + +> /dev/urandom 不适合用来直接生成会话密钥和频繁生成其他应用程序级随机数据 +> +> GNU/Linux 上的 random(4) 手册所述...... + +感谢 Matthew Green, Nate Lawson, Sean Devlin, Coda Hale, and Alex Balducci 阅读了本文草稿。公正警告:Matthew 只是大多同意我的观点。 + +-------------------------------------------------------------------------------- + +via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/ + +作者:[Thomas;Erin;Matasano][a] +译者:[kimii](https://github.com/kimii) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://sockpuppet.org/blog +[1]:http://blog.cr.yp.to/20140205-entropy.html +[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf +[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go +[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key +[5]:http://stackoverflow.com/a/5639631 +[6]:https://twitter.com/bramcohen/status/206146075487240194 +[7]:http://research.swtch.com/openssl +[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/ +[9]:https://factorable.net/weakkeys12.extended.pdf +[10]:https://bugs.ruby-lang.org/issues/9569 From 2deb94df66fabde7717cf0ce8204f3ac6496f3bb Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 17 Mar 2018 23:03:18 +0800 Subject: [PATCH 216/343] PRF:20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @shipsw --- ... automatically update RHEL-CentOS Linux.md | 129 +++++++++--------- 1 file changed, 67 insertions(+), 62 deletions(-) diff --git a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md b/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md index b17db43664..e2d88ce4ec 100644 --- a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md +++ b/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @@ -1,51 +1,91 @@ -translating by shipsw - 如何使用 yum-cron 自动更新 RHEL/CentOS Linux ====== -yum 命令是 RHEL / CentOS Linux 系统中用来安装和更新软件包的一个工具。知道如何使用 [yum 命令行] 更新系统,但是我想用 cron 手工更新软件包。该如何配置才能使得 yum 使用 [cron 自动更新][2]系统补丁或更新呢? -首先需要安装 yum-cron 软件包。该软件包提供以 cron 命令运行 yum 更新所需的文件。安装这个软件可以使得 yum 以 cron 命令每晚更新。 +`yum` 命令是 RHEL / CentOS Linux 系统中用来安装和更新软件包的一个工具。我知道如何使用 [yum 命令行][1] 更新系统,但是我想用 cron 任务自动更新软件包。该如何配置才能使得 `yum` 使用 [cron 自动更新][2]系统补丁或更新呢? 
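+
+(LCTT 译注:作为对照,最直接的思路是自己写一条 cron 任务,定时执行 `yum -y update`。下面只是一个假设性的示意,文件名和时间点仅作演示,请按需调整;本文接下来介绍的 yum-cron 是更完善的做法。)
+
+```
+# 示例:/etc/cron.d/yum-auto-update(假设的文件名,仅作演示)
+# 每天 03:00 以 root 身份更新全部软件包,并把输出追加到日志
+0 3 * * * root /usr/bin/yum -y update >> /var/log/yum-auto-update.log 2>&1
+```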
+ +首先需要安装 yum-cron 软件包。该软件包提供以 cron 命令运行 `yum` 更新所需的文件。如果你想要每晚通过 cron 自动更新可以安装这个软件包。 ### CentOS/RHEL 6.x/7.x 上安装 yum cron 输入以下 [yum 命令][3]: -`$ sudo yum install yum-cron` + +``` +$ sudo yum install yum-cron +``` + ![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-install-yum-cron-on-CentOS-RHEL-server.jpg) -使用 **CentOS/RHEL 7.x** 上的 systemctl 启动服务: +使用 CentOS/RHEL 7.x 上的 `systemctl` 启动服务: + ``` $ sudo systemctl enable yum-cron.service $ sudo systemctl start yum-cron.service $ sudo systemctl status yum-cron.service ``` -在 **CentOS/RHEL 6.x** 系统中,运行: + +在 CentOS/RHEL 6.x 系统中,运行: + ``` $ sudo chkconfig yum-cron on $ sudo service yum-cron start ``` + ![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-turn-on-yum-cron-service-on-CentOS-or-RHEL-server.jpg) -yum-cron 是 yum 的一个调用接口。使得 cron 调用 yum 变得非常方便。该软件提供元数据更新,更新检查、下载和安装等功能。yum-cron 的不同功能可以使用高配置文件配置,而不是输入一堆复杂的命令行参数。 +`yum-cron` 是 `yum` 的一个替代方式。使得 cron 调用 `yum` 变得非常方便。该软件提供了元数据更新、更新检查、下载和安装等功能。`yum-cron` 的各种功能可以使用配置文件配置,而不是输入一堆复杂的命令行参数。 ### 配置 yum-cron 自动更新 RHEL/CentOS Linux -使用 vi 等编辑器编辑文件 /etc/yum/yum-cron.conf 和 /etc/yum/yum-cron-hourly.conf: -`$ sudo vi /etc/yum/yum-cron.conf` -确保更新可用时自动更新 -`apply_updates = yes` -可以设置通知 email 地址。注意: localhost 将会被系统名称代替。 -`email_from = root@localhost` -email 通知地址列表。 -`email_to = your-it-support@some-domain-name` -发送 email 信息的主机名。 -`email_host = localhost` +使用 vi 等编辑器编辑文件 `/etc/yum/yum-cron.conf` 和 `/etc/yum/yum-cron-hourly.conf`: + +``` +$ sudo vi /etc/yum/yum-cron.conf +``` + +确保更新可用时自动更新: + +``` +apply_updates = yes +``` + +可以设置通知 email 的发件地址。注意: localhost` 将会被 `system_name` 的值代替。 + +``` +email_from = root@localhost +``` + +列出发送到的 email 地址。 + +``` +email_to = your-it-support@some-domain-name +``` + +发送 email 信息的主机名。 + +``` +email_host = localhost +``` + [CentOS/RHEL 7.x][4] 上不想更新内核的话,添加以下内容: -`exclude=kernel*` + +``` +exclude=kernel* +``` + RHEL/CentOS 6.x 下[添加以下内容来禁用内核更新][5]: -`YUM_PARAMETER=kernel*` -[保存并关闭文件][6]。如果想每小时更新系统的话修改文件 /etc/yum/yum-cron-hourly.conf,否则文件 /etc/yum/yum-cron.conf 将使用以下命令每天运行一次[cat 命令][7]: -`$ cat /etc/cron.daily/0yum-daily.cron` + +``` +YUM_PARAMETER=kernel* +``` + +[保存并关闭文件][6]。如果想每小时更新系统的话修改文件 `/etc/yum/yum-cron-hourly.conf`,否则文件 `/etc/yum/yum-cron.conf` 将使用以下命令每天运行一次(使用 [cat 命令][7] 查看): + +``` +$ cat /etc/cron.daily/0yum-daily.cron +``` + 示例输出: + ``` #!/bin/bash @@ -73,48 +113,14 @@ exec /usr/sbin/yum-cron ``` 完成配置。现在你的系统将每天自动更新一次。更多细节请参照 yum-cron 的说明手册。 -`$ man yum-cron` -### 方法二 – 使用 shell 脚本 - -**警告** : 以下命令已经过时了. 不要在 RHEL/CentOS 6.x/7.x 系统中使用。 我写在这里仅仅是因为历史原因,该命令适合 CentOS/RHEL version 4.x/5.x 上运行。 - -让我们看看如何在 CentOS/RHEL 上配置 yum 安全更新包的检索和安装。你可以使用 CentOS / RHEL 提供的 yum-updatesd 服务。然而,系统提供的服务开销有点大。你可以使用以下的 shell 脚本配置每天后每周的系统更新。 - - * **/etc/cron.daily/yumupdate.sh** 每天更新 - * **/etc/cron.weekly/yumupdate.sh** 每周更新 - - - -#### 系统更新的示例脚本 - -以下脚本功能是使用 [cron][8] 定时安装更新更新: ``` -#!/bin/bash -YUM=/usr/bin/yum -$YUM -y -R 120 -d 0 -e 0 update yum -$YUM -y -R 10 -e 0 -d 0 update +$ man yum-cron ``` -(Code listing -01: /etc/cron.daily/yumupdate.sh) - -其中: - - 1. 第一条命令更新 yum 自己。 - 2. **-R 120** : 设置允许一条命令前的等待最长时间 - 3. **-e 0** : 设置错误级别为 0 (范围 0-10)。0 意味着只有关键性错误才会显示。 - 4. -d 0 : 设置 debug 级别为 0 。增加或减少打印日志的量。(范围 0-10) - 5. 
**-y** : 默认同意;任何提示问题默认回答为 yes。 - - - -设置脚本的执行权限: -`# chmod +x /etc/cron.daily/yumupdate.sh` - - ### 关于作者 -作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及IT,教育,国防和空间研究以及非营利部门等多个行业。请在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。**获取更多有关系统管理、Linux/Unix 和开源话题请关注[我的 RSS/XML 地址][12]**。 +作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及IT,教育,国防和空间研究以及非营利部门等多个行业。请在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。获取更多有关系统管理、Linux/Unix 和开源话题请关注[我的 RSS/XML 地址][12]。 -------------------------------------------------------------------------------- @@ -122,18 +128,17 @@ via: https://www.cyberciti.biz/faq/fedora-automatic-update-retrieval-installatio 作者:[Vivek Gite][a] 译者:[shipsw](https://github.com/shipsw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.cyberciti.biz/ [1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ [2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses -[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) -[4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/ +[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ [4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/ [5]:https://www.cyberciti.biz/faq/redhat-centos-linux-yum-update-exclude-packages/ [6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/ -[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info) +[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ [8]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses [9]:https://twitter.com/nixcraft [10]:https://facebook.com/nixcraft From b62420c860b03296afe9095744e38b4c19e67dd0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 17 Mar 2018 23:04:17 +0800 Subject: [PATCH 217/343] PUB:20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @shipsw https://linux.cn/article-9455-1.html --- ...w to use yum-cron to automatically update RHEL-CentOS Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md (100%) diff --git a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md b/published/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md similarity index 100% rename from translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md rename to published/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md From 33435ebbd6ceb720567a4f1e92354d8e3a673a02 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 00:12:16 +0800 Subject: [PATCH 218/343] PRF:20180213 Getting started with the RStudio IDE.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @szcf-weiya 恭喜你,完成了第一篇翻译贡献! 
--- ...13 Getting started with the RStudio IDE.md | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/translated/tech/20180213 Getting started with the RStudio IDE.md b/translated/tech/20180213 Getting started with the RStudio IDE.md index 762165fa13..28e26691d7 100644 --- a/translated/tech/20180213 Getting started with the RStudio IDE.md +++ b/translated/tech/20180213 Getting started with the RStudio IDE.md @@ -1,49 +1,48 @@ -开始使用 RStudio IDE +RStudio IDE 入门 ====== +> 用于统计技术的 R 项目是分析数据的有力方式,而 RStudio IDE 则可使这一切更加容易。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming_screen.png?itok=BgcSm5Pl) -从我记事起,我就一直在与数字玩耍。作为 20 世纪 70 年代后期的本科生,我开始上统计学的课程,学习如何检查和分析数据以揭示某些意义。 +从我记事起,我就一直喜欢摆弄数字。作为 20 世纪 70 年代后期的大学生,我上过统计学的课程,学习了如何检查和分析数据以揭示其意义。 -那时候,我有一部科学计算器,它让统计计算变得比以前容易很多。在 90 年代早期,作为一名从事 t 检验,相关性以及 [ANOVA][1] 研究的教育心理学研究生,我开始通过精心编写输入 IBM 主机的文本文件来进行计算。这个主机是对我的手持计算器的一个改进,但是一个小的间距错误会使得整个过程无效,而且这个过程仍然有点乏味。 +那时候,我有一部科学计算器,它让统计计算变得比以往更容易。在 90 年代早期,作为一名从事 t 检验t-test、相关性以及 [ANOVA][1] 研究的教育心理学研究生,我开始通过精心编写输入到 IBM 主机的文本文件来进行计算。这个主机远超我的手持计算器,但是一个小的空格错误就会导致整个过程无效,而且这个过程仍然有点乏味。 -撰写论文时,尤其是我的毕业论文,我需要一种方法能够根据我的数据来创建图表并将它们嵌入到文字处理文档中。我着迷于 Microsoft Excel 及其数字运算能力以及可以用计算结果创建出的大量图表。但每一步都有成本。在 20 世纪 90 年代,除了 Excel,还有其他专有软件包,比如 SAS 和 SPSS+,但对于我那已经满满的研究生时间表来说,学习曲线是一项艰巨的任务。 +撰写论文时,尤其是我的毕业论文,我需要一种方法能够根据我的数据来创建图表,并将它们嵌入到文字处理文档中。我着迷于 Microsoft Excel 及其数字运算能力以及可以用计算结果创建出的大量图表。但这条路每一步都有成本。在 20 世纪 90 年代,除了 Excel,还有其他专有软件包,比如 SAS 和 SPSS+,但对于我那已经满满的研究生时间表来说,学习曲线是一项艰巨的任务。 ### 快速回到现在 -最近,由于我对数据科学的兴趣浓厚,加上对 Linux 和开源软件的浓厚兴趣,我阅读了大量的数据科学文章,并在 Linux 会议上听了许多数据科学演讲者谈论他们的工作。因此,我开始对编程语言 R(一种开源的统计计算软件)非常感兴趣。 +最近,由于我对数据科学的兴趣浓厚,加上对 Linux 和开源软件感兴趣,我阅读了大量的数据科学文章,并在 Linux 会议上听了许多数据科学演讲者谈论他们的工作。因此,我开始对编程语言 R(一种开源的统计计算软件)非常感兴趣。 -起初,这只是一个火花。当我和我的朋友 Michael J. Gallagher 博士谈论他如何在他的 [博士论文][2] 研究中使用 R 时,这个火花便增大了。最后,我访问了 [R project][3] 的网站,并了解到我可以轻松地安装 [R for Linux][4]。游戏开始! +起初,这只是一个偶发的一个想法。当我和我的朋友 Michael J. Gallagher 博士谈论他如何在他的 [博士论文][2] 研究中使用 R 时,这个火花便增大了。最后,我访问了 [R 项目][3] 的网站,并了解到我可以轻松地安装 [R for Linux][4]。游戏开始! 
### 安装 R -根据你的操作系统和分布情况,安装 R 会稍有不同。请参阅 [Comprehensive R Archive Network][5] (CRAN) 网站上的安装指南。CRAN 提供了在 [各种 Linux 发行版][6],[Fedora,RHEL,及其衍生版][7],[MacOS][8] 和 [Windows][9] 上的安装指示。 +根据你的操作系统和发行版情况,安装 R 会稍有不同。请参阅 [Comprehensive R Archive Network][5] (CRAN)网站上的安装指南。CRAN 提供了在 [各种 Linux 发行版][6],[Fedora,RHEL,及其衍生版][7],[MacOS][8] 和 [Windows][9] 上的安装指示。 -我在使用 Ubuntu,则按照 CRAN 的指示,将以下行加入到我的 `/etc/apt/sources.list` 文件中: +我在使用 Ubuntu,按照 CRAN 的指示,将以下行加入到我的 `/etc/apt/sources.list` 文件中: ``` deb https:///bin/linux/ubuntu artful/ - ``` 接着我在终端运行下面命令: ``` $ sudo apt-get update - $ sudo apt-get install r-base - ``` -根据 CRAN,“需要从源码编译 R 的用户【如包的维护者,或者任何通过 `install.packages()` 安装包的用户】也应该安装 `r-base-dev` 的包。” +根据 CRAN 说明,“需要从源码编译 R 的用户[如包的维护者,或者任何通过 `install.packages()` 安装包的用户]也应该安装 `r-base-dev` 的包。” -### 使用 R 和 Rstudio +### 使用 R 和 RStudio -安装好了 R,我就准备了解更多关于使用这个强大的工具的信息。Gallagher 博士推荐了 [DataCamp][10] 上的 “Start learning R”,并且我也找到了适用于 R 新手的免费课程。两门课程都帮助我学习 R 的命令和语法。我还参加了 [Udemy][12] 上的 R 在线编程课程,并从 [No Starch Press][14] 上购买了 [Book of R][13]。 +安装好了 R,我就准备了解更多关于使用这个强大的工具的信息。Gallagher 博士推荐了 [DataCamp][10] 上的 “R 语言入门”,并且我也在 [Code School][11] 找到了适用于 R 新手的免费课程。两门课程都帮助我学习了 R 的命令和语法。我还参加了 [Udemy][12] 上的 R 在线编程课程,并从 [No Starch 出版社][14] 上购买了 [R 之书][13]。 -在阅读更多内容并观看 YouTube 视频后,我意识到我还应该安装 [RStudio][15]。Rstudio 是 R 的开源 IDE,易于在 [Debian, Ubuntu, Fedora, 和 RHEL][16] 上安装。它也可以安装在 MacOS 和 Windows 上。 +在阅读更多内容并观看 YouTube 视频后,我意识到我还应该安装 [RStudio][15]。Rstudio 是 R 语言的开源 IDE,易于在 [Debian、Ubuntu、 Fedora 和 RHEL][16] 上安装。它也可以安装在 MacOS 和 Windows 上。 -根据 Rstudio 网站的说明,可以根据你的偏好对 IDE 进行自定义,具体方法是选择工具菜单,然后从中选择全局选项。 +根据 RStudio 网站的说明,可以根据你的偏好对 IDE 进行自定义,具体方法是选择工具菜单,然后从中选择全局选项。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_global-options.png?itok=un6-SvS-) @@ -51,11 +50,11 @@ R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_plotting-vectors.png?itok=9T7UV8p2) -你可能想要开始学习如何将 R 和一些样本数据结合起来使用,然后将这些知识应用到自己的数据上得到描述性统计。我自己没有丰富的数据来分析,但我搜索了可以使用的数据集 [datasets][18];这样一个数据集(我并没有用这个例子)是由圣路易斯联邦储备银行提供的 [经济研究数据][19]。我对一个题为“美国商业航空公司的乘客里程(1937-1960)”很感兴趣,因此我将它导入 RStudio 以测试 IDE 的功能。Rstudio 可以接受各种格式的数据,包括 CSV,Excel,SPSS 和 SAS。 +你可能想要开始学习如何将 R 和一些样本数据结合起来使用,然后将这些知识应用到自己的数据上得到描述性统计。我自己没有丰富的数据来分析,但我搜索了可以使用的数据集 [datasets][18];有一个这样的数据集(我并没有用这个例子)是由圣路易斯联邦储备银行提供的 [经济研究数据][19]。我对一个题为“美国商业航空公司的乘客里程(1937-1960)”很感兴趣,因此我将它导入 RStudio 以测试 IDE 的功能。RStudio 可以接受各种格式的数据,包括 CSV、Excel、SPSS 和 SAS。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/rstudio-import.png?itok=1yJKQei1) -数据导入后,我使用 `summary(AirPassengers)` 命令获取数据的一些初始描述性统计信息。按回车键后,我得到了 1949-1960 年的每月航空公司旅客的摘要以及其他数据,包括飞机乘客数量的最小值,最大值,第一四分位数,第三四分位数。中位数以及平均数。 +数据导入后,我使用 `summary(AirPassengers)` 命令获取数据的一些初始描述性统计信息。按回车键后,我得到了 1949-1960 年的每月航空公司旅客的摘要以及其他数据,包括飞机乘客数量的最小值、最大值、四分之一位数、四分之三位数、中位数以及平均数。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_air-passengers.png?itok=RCJMLIb3) @@ -63,7 +62,7 @@ R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_sd-air-passengers.png?itok=d-25fQoz) -接下来,我生成了一个数据直方图,通过输入 `hist(AirPassengers);` 得到,这以图形的方式显示此数据集;Rstudio 可以将数据导出为 PNG,PDF,JPEG,TIFF,SVG,EPS 或 BMP。 +接下来,我生成了一个数据直方图,通过输入 `hist(AirPassengers);` 得到,这会以图形的方式显示此数据集;RStudio 可以将数据导出为 PNG、PDF、JPEG、TIFF、SVG、EPS 或 BMP。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_histogram-air-passengers.png?itok=0HWsseQE) @@ -79,9 +78,9 @@ 
R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo 在 R 提示符下输入 `help()` 可以很容易找到帮助信息。输入你正在寻找的信息的特定主题可以找到具体的帮助信息,例如 `help(sd)` 可以获得有关标准差的帮助。通过在提示符处输入 `contributors()` 可以获得有关 R 项目贡献者的信息。您可以通过在提示符处输入 `citation()` 来了解如何引用 R。通过在提示符出输入 `license()` 可以很容易地获得 R 的许可证信息。 -R 是在 GNU General Public License(1991 年 6 月的版本 2,或者 2007 年 6 月的版本 3)的条款下发布的。有关 R 许可证的更多信息,请参考 [R Project website][20]。 +R 是在 GNU General Public License(1991 年 6 月的版本 2,或者 2007 年 6 月的版本 3)的条款下发布的。有关 R 许可证的更多信息,请参考 [R 项目官网][20]。 -另外,RStudio 在 GUI 中提供了完美的帮助菜单。该区域包括 RStudio 备忘单(可作为 PDF 下载),[RStudio][21]的在线学习,RStudio 文档,支持和 [许可证信息][22]。 +另外,RStudio 在 GUI 中提供了完美的帮助菜单。该区域包括 RStudio 快捷表(可作为 PDF 下载),[RStudio][21]的在线学习、RStudio 文档、支持和 [许可证信息][22]。 -------------------------------------------------------------------------------- @@ -89,7 +88,7 @@ via: https://opensource.com/article/18/2/getting-started-RStudio-IDE 作者:[Don Watkins][a] 译者:[szcf-weiya](https://github.com/szcf-weiya) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d1d9fecab9da9a66488287e62ab76d3b26391815 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 00:13:05 +0800 Subject: [PATCH 219/343] PUB:20180213 Getting started with the RStudio IDE.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @szcf-weiya 首发地址: https://linux.cn/article-9456-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/szcf-weiya --- .../20180213 Getting started with the RStudio IDE.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180213 Getting started with the RStudio IDE.md (100%) diff --git a/translated/tech/20180213 Getting started with the RStudio IDE.md b/published/20180213 Getting started with the RStudio IDE.md similarity index 100% rename from translated/tech/20180213 Getting started with the RStudio IDE.md rename to published/20180213 Getting started with the RStudio IDE.md From cd1a15c44f48a776d4b2eadde41c349fb6e4b949 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 00:27:49 +0800 Subject: [PATCH 220/343] PRF:20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @shipsw --- ... automatically update RHEL-CentOS Linux.md | 129 +++++++++--------- 1 file changed, 67 insertions(+), 62 deletions(-) diff --git a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md b/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md index b17db43664..e2d88ce4ec 100644 --- a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md +++ b/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @@ -1,51 +1,91 @@ -translating by shipsw - 如何使用 yum-cron 自动更新 RHEL/CentOS Linux ====== -yum 命令是 RHEL / CentOS Linux 系统中用来安装和更新软件包的一个工具。知道如何使用 [yum 命令行] 更新系统,但是我想用 cron 手工更新软件包。该如何配置才能使得 yum 使用 [cron 自动更新][2]系统补丁或更新呢? -首先需要安装 yum-cron 软件包。该软件包提供以 cron 命令运行 yum 更新所需的文件。安装这个软件可以使得 yum 以 cron 命令每晚更新。 +`yum` 命令是 RHEL / CentOS Linux 系统中用来安装和更新软件包的一个工具。我知道如何使用 [yum 命令行][1] 更新系统,但是我想用 cron 任务自动更新软件包。该如何配置才能使得 `yum` 使用 [cron 自动更新][2]系统补丁或更新呢? 
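+
+(LCTT 译注:动手配置之前,可以先用下面几条命令确认系统中是否已装有 yum-cron、服务是否在运行。这只是一个示意性的检查清单,命令输出会因系统而异。)
+
+```
+# 查询 yum-cron 软件包是否已经安装
+$ rpm -q yum-cron
+
+# CentOS/RHEL 7.x:查看 yum-cron 服务是否处于激活状态
+$ sudo systemctl is-active yum-cron.service
+
+# 查看 yum 日志,确认是否已有自动安装或更新的记录
+$ sudo tail /var/log/yum.log
+```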
+ +首先需要安装 yum-cron 软件包。该软件包提供以 cron 命令运行 `yum` 更新所需的文件。如果你想要每晚通过 cron 自动更新可以安装这个软件包。 ### CentOS/RHEL 6.x/7.x 上安装 yum cron 输入以下 [yum 命令][3]: -`$ sudo yum install yum-cron` + +``` +$ sudo yum install yum-cron +``` + ![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-install-yum-cron-on-CentOS-RHEL-server.jpg) -使用 **CentOS/RHEL 7.x** 上的 systemctl 启动服务: +使用 CentOS/RHEL 7.x 上的 `systemctl` 启动服务: + ``` $ sudo systemctl enable yum-cron.service $ sudo systemctl start yum-cron.service $ sudo systemctl status yum-cron.service ``` -在 **CentOS/RHEL 6.x** 系统中,运行: + +在 CentOS/RHEL 6.x 系统中,运行: + ``` $ sudo chkconfig yum-cron on $ sudo service yum-cron start ``` + ![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-turn-on-yum-cron-service-on-CentOS-or-RHEL-server.jpg) -yum-cron 是 yum 的一个调用接口。使得 cron 调用 yum 变得非常方便。该软件提供元数据更新,更新检查、下载和安装等功能。yum-cron 的不同功能可以使用高配置文件配置,而不是输入一堆复杂的命令行参数。 +`yum-cron` 是 `yum` 的一个替代方式。使得 cron 调用 `yum` 变得非常方便。该软件提供了元数据更新、更新检查、下载和安装等功能。`yum-cron` 的各种功能可以使用配置文件配置,而不是输入一堆复杂的命令行参数。 ### 配置 yum-cron 自动更新 RHEL/CentOS Linux -使用 vi 等编辑器编辑文件 /etc/yum/yum-cron.conf 和 /etc/yum/yum-cron-hourly.conf: -`$ sudo vi /etc/yum/yum-cron.conf` -确保更新可用时自动更新 -`apply_updates = yes` -可以设置通知 email 地址。注意: localhost 将会被系统名称代替。 -`email_from = root@localhost` -email 通知地址列表。 -`email_to = your-it-support@some-domain-name` -发送 email 信息的主机名。 -`email_host = localhost` +使用 vi 等编辑器编辑文件 `/etc/yum/yum-cron.conf` 和 `/etc/yum/yum-cron-hourly.conf`: + +``` +$ sudo vi /etc/yum/yum-cron.conf +``` + +确保更新可用时自动更新: + +``` +apply_updates = yes +``` + +可以设置通知 email 的发件地址。注意: localhost` 将会被 `system_name` 的值代替。 + +``` +email_from = root@localhost +``` + +列出发送到的 email 地址。 + +``` +email_to = your-it-support@some-domain-name +``` + +发送 email 信息的主机名。 + +``` +email_host = localhost +``` + [CentOS/RHEL 7.x][4] 上不想更新内核的话,添加以下内容: -`exclude=kernel*` + +``` +exclude=kernel* +``` + RHEL/CentOS 6.x 下[添加以下内容来禁用内核更新][5]: -`YUM_PARAMETER=kernel*` -[保存并关闭文件][6]。如果想每小时更新系统的话修改文件 /etc/yum/yum-cron-hourly.conf,否则文件 /etc/yum/yum-cron.conf 将使用以下命令每天运行一次[cat 命令][7]: -`$ cat /etc/cron.daily/0yum-daily.cron` + +``` +YUM_PARAMETER=kernel* +``` + +[保存并关闭文件][6]。如果想每小时更新系统的话修改文件 `/etc/yum/yum-cron-hourly.conf`,否则文件 `/etc/yum/yum-cron.conf` 将使用以下命令每天运行一次(使用 [cat 命令][7] 查看): + +``` +$ cat /etc/cron.daily/0yum-daily.cron +``` + 示例输出: + ``` #!/bin/bash @@ -73,48 +113,14 @@ exec /usr/sbin/yum-cron ``` 完成配置。现在你的系统将每天自动更新一次。更多细节请参照 yum-cron 的说明手册。 -`$ man yum-cron` -### 方法二 – 使用 shell 脚本 - -**警告** : 以下命令已经过时了. 不要在 RHEL/CentOS 6.x/7.x 系统中使用。 我写在这里仅仅是因为历史原因,该命令适合 CentOS/RHEL version 4.x/5.x 上运行。 - -让我们看看如何在 CentOS/RHEL 上配置 yum 安全更新包的检索和安装。你可以使用 CentOS / RHEL 提供的 yum-updatesd 服务。然而,系统提供的服务开销有点大。你可以使用以下的 shell 脚本配置每天后每周的系统更新。 - - * **/etc/cron.daily/yumupdate.sh** 每天更新 - * **/etc/cron.weekly/yumupdate.sh** 每周更新 - - - -#### 系统更新的示例脚本 - -以下脚本功能是使用 [cron][8] 定时安装更新更新: ``` -#!/bin/bash -YUM=/usr/bin/yum -$YUM -y -R 120 -d 0 -e 0 update yum -$YUM -y -R 10 -e 0 -d 0 update +$ man yum-cron ``` -(Code listing -01: /etc/cron.daily/yumupdate.sh) - -其中: - - 1. 第一条命令更新 yum 自己。 - 2. **-R 120** : 设置允许一条命令前的等待最长时间 - 3. **-e 0** : 设置错误级别为 0 (范围 0-10)。0 意味着只有关键性错误才会显示。 - 4. -d 0 : 设置 debug 级别为 0 。增加或减少打印日志的量。(范围 0-10) - 5. 
**-y** : 默认同意;任何提示问题默认回答为 yes。 - - - -设置脚本的执行权限: -`# chmod +x /etc/cron.daily/yumupdate.sh` - - ### 关于作者 -作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及IT,教育,国防和空间研究以及非营利部门等多个行业。请在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。**获取更多有关系统管理、Linux/Unix 和开源话题请关注[我的 RSS/XML 地址][12]**。 +作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及IT,教育,国防和空间研究以及非营利部门等多个行业。请在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。获取更多有关系统管理、Linux/Unix 和开源话题请关注[我的 RSS/XML 地址][12]。 -------------------------------------------------------------------------------- @@ -122,18 +128,17 @@ via: https://www.cyberciti.biz/faq/fedora-automatic-update-retrieval-installatio 作者:[Vivek Gite][a] 译者:[shipsw](https://github.com/shipsw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.cyberciti.biz/ [1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ [2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses -[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) -[4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/ +[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ [4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/ [5]:https://www.cyberciti.biz/faq/redhat-centos-linux-yum-update-exclude-packages/ [6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/ -[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info) +[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ [8]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses [9]:https://twitter.com/nixcraft [10]:https://facebook.com/nixcraft From e8313171548d211dbf6dfe8c71460024699ec3e4 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 00:28:20 +0800 Subject: [PATCH 221/343] PUB:20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @shipsw --- ...w to use yum-cron to automatically update RHEL-CentOS Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md (100%) diff --git a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md b/published/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md similarity index 100% rename from translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md rename to published/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md From b7d4006c2c1f4e833c7266da10235847a05da8a0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 00:34:03 +0800 Subject: [PATCH 222/343] PRF:20180213 Getting started with the RStudio IDE.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @szcf-weiya 恭喜你完成了第一篇翻译贡献! 
--- ...13 Getting started with the RStudio IDE.md | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/translated/tech/20180213 Getting started with the RStudio IDE.md b/translated/tech/20180213 Getting started with the RStudio IDE.md index 762165fa13..28e26691d7 100644 --- a/translated/tech/20180213 Getting started with the RStudio IDE.md +++ b/translated/tech/20180213 Getting started with the RStudio IDE.md @@ -1,49 +1,48 @@ -开始使用 RStudio IDE +RStudio IDE 入门 ====== +> 用于统计技术的 R 项目是分析数据的有力方式,而 RStudio IDE 则可使这一切更加容易。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming_screen.png?itok=BgcSm5Pl) -从我记事起,我就一直在与数字玩耍。作为 20 世纪 70 年代后期的本科生,我开始上统计学的课程,学习如何检查和分析数据以揭示某些意义。 +从我记事起,我就一直喜欢摆弄数字。作为 20 世纪 70 年代后期的大学生,我上过统计学的课程,学习了如何检查和分析数据以揭示其意义。 -那时候,我有一部科学计算器,它让统计计算变得比以前容易很多。在 90 年代早期,作为一名从事 t 检验,相关性以及 [ANOVA][1] 研究的教育心理学研究生,我开始通过精心编写输入 IBM 主机的文本文件来进行计算。这个主机是对我的手持计算器的一个改进,但是一个小的间距错误会使得整个过程无效,而且这个过程仍然有点乏味。 +那时候,我有一部科学计算器,它让统计计算变得比以往更容易。在 90 年代早期,作为一名从事 t 检验t-test、相关性以及 [ANOVA][1] 研究的教育心理学研究生,我开始通过精心编写输入到 IBM 主机的文本文件来进行计算。这个主机远超我的手持计算器,但是一个小的空格错误就会导致整个过程无效,而且这个过程仍然有点乏味。 -撰写论文时,尤其是我的毕业论文,我需要一种方法能够根据我的数据来创建图表并将它们嵌入到文字处理文档中。我着迷于 Microsoft Excel 及其数字运算能力以及可以用计算结果创建出的大量图表。但每一步都有成本。在 20 世纪 90 年代,除了 Excel,还有其他专有软件包,比如 SAS 和 SPSS+,但对于我那已经满满的研究生时间表来说,学习曲线是一项艰巨的任务。 +撰写论文时,尤其是我的毕业论文,我需要一种方法能够根据我的数据来创建图表,并将它们嵌入到文字处理文档中。我着迷于 Microsoft Excel 及其数字运算能力以及可以用计算结果创建出的大量图表。但这条路每一步都有成本。在 20 世纪 90 年代,除了 Excel,还有其他专有软件包,比如 SAS 和 SPSS+,但对于我那已经满满的研究生时间表来说,学习曲线是一项艰巨的任务。 ### 快速回到现在 -最近,由于我对数据科学的兴趣浓厚,加上对 Linux 和开源软件的浓厚兴趣,我阅读了大量的数据科学文章,并在 Linux 会议上听了许多数据科学演讲者谈论他们的工作。因此,我开始对编程语言 R(一种开源的统计计算软件)非常感兴趣。 +最近,由于我对数据科学的兴趣浓厚,加上对 Linux 和开源软件感兴趣,我阅读了大量的数据科学文章,并在 Linux 会议上听了许多数据科学演讲者谈论他们的工作。因此,我开始对编程语言 R(一种开源的统计计算软件)非常感兴趣。 -起初,这只是一个火花。当我和我的朋友 Michael J. Gallagher 博士谈论他如何在他的 [博士论文][2] 研究中使用 R 时,这个火花便增大了。最后,我访问了 [R project][3] 的网站,并了解到我可以轻松地安装 [R for Linux][4]。游戏开始! +起初,这只是一个偶发的一个想法。当我和我的朋友 Michael J. Gallagher 博士谈论他如何在他的 [博士论文][2] 研究中使用 R 时,这个火花便增大了。最后,我访问了 [R 项目][3] 的网站,并了解到我可以轻松地安装 [R for Linux][4]。游戏开始! 
### 安装 R -根据你的操作系统和分布情况,安装 R 会稍有不同。请参阅 [Comprehensive R Archive Network][5] (CRAN) 网站上的安装指南。CRAN 提供了在 [各种 Linux 发行版][6],[Fedora,RHEL,及其衍生版][7],[MacOS][8] 和 [Windows][9] 上的安装指示。 +根据你的操作系统和发行版情况,安装 R 会稍有不同。请参阅 [Comprehensive R Archive Network][5] (CRAN)网站上的安装指南。CRAN 提供了在 [各种 Linux 发行版][6],[Fedora,RHEL,及其衍生版][7],[MacOS][8] 和 [Windows][9] 上的安装指示。 -我在使用 Ubuntu,则按照 CRAN 的指示,将以下行加入到我的 `/etc/apt/sources.list` 文件中: +我在使用 Ubuntu,按照 CRAN 的指示,将以下行加入到我的 `/etc/apt/sources.list` 文件中: ``` deb https:///bin/linux/ubuntu artful/ - ``` 接着我在终端运行下面命令: ``` $ sudo apt-get update - $ sudo apt-get install r-base - ``` -根据 CRAN,“需要从源码编译 R 的用户【如包的维护者,或者任何通过 `install.packages()` 安装包的用户】也应该安装 `r-base-dev` 的包。” +根据 CRAN 说明,“需要从源码编译 R 的用户[如包的维护者,或者任何通过 `install.packages()` 安装包的用户]也应该安装 `r-base-dev` 的包。” -### 使用 R 和 Rstudio +### 使用 R 和 RStudio -安装好了 R,我就准备了解更多关于使用这个强大的工具的信息。Gallagher 博士推荐了 [DataCamp][10] 上的 “Start learning R”,并且我也找到了适用于 R 新手的免费课程。两门课程都帮助我学习 R 的命令和语法。我还参加了 [Udemy][12] 上的 R 在线编程课程,并从 [No Starch Press][14] 上购买了 [Book of R][13]。 +安装好了 R,我就准备了解更多关于使用这个强大的工具的信息。Gallagher 博士推荐了 [DataCamp][10] 上的 “R 语言入门”,并且我也在 [Code School][11] 找到了适用于 R 新手的免费课程。两门课程都帮助我学习了 R 的命令和语法。我还参加了 [Udemy][12] 上的 R 在线编程课程,并从 [No Starch 出版社][14] 上购买了 [R 之书][13]。 -在阅读更多内容并观看 YouTube 视频后,我意识到我还应该安装 [RStudio][15]。Rstudio 是 R 的开源 IDE,易于在 [Debian, Ubuntu, Fedora, 和 RHEL][16] 上安装。它也可以安装在 MacOS 和 Windows 上。 +在阅读更多内容并观看 YouTube 视频后,我意识到我还应该安装 [RStudio][15]。Rstudio 是 R 语言的开源 IDE,易于在 [Debian、Ubuntu、 Fedora 和 RHEL][16] 上安装。它也可以安装在 MacOS 和 Windows 上。 -根据 Rstudio 网站的说明,可以根据你的偏好对 IDE 进行自定义,具体方法是选择工具菜单,然后从中选择全局选项。 +根据 RStudio 网站的说明,可以根据你的偏好对 IDE 进行自定义,具体方法是选择工具菜单,然后从中选择全局选项。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_global-options.png?itok=un6-SvS-) @@ -51,11 +50,11 @@ R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_plotting-vectors.png?itok=9T7UV8p2) -你可能想要开始学习如何将 R 和一些样本数据结合起来使用,然后将这些知识应用到自己的数据上得到描述性统计。我自己没有丰富的数据来分析,但我搜索了可以使用的数据集 [datasets][18];这样一个数据集(我并没有用这个例子)是由圣路易斯联邦储备银行提供的 [经济研究数据][19]。我对一个题为“美国商业航空公司的乘客里程(1937-1960)”很感兴趣,因此我将它导入 RStudio 以测试 IDE 的功能。Rstudio 可以接受各种格式的数据,包括 CSV,Excel,SPSS 和 SAS。 +你可能想要开始学习如何将 R 和一些样本数据结合起来使用,然后将这些知识应用到自己的数据上得到描述性统计。我自己没有丰富的数据来分析,但我搜索了可以使用的数据集 [datasets][18];有一个这样的数据集(我并没有用这个例子)是由圣路易斯联邦储备银行提供的 [经济研究数据][19]。我对一个题为“美国商业航空公司的乘客里程(1937-1960)”很感兴趣,因此我将它导入 RStudio 以测试 IDE 的功能。RStudio 可以接受各种格式的数据,包括 CSV、Excel、SPSS 和 SAS。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/rstudio-import.png?itok=1yJKQei1) -数据导入后,我使用 `summary(AirPassengers)` 命令获取数据的一些初始描述性统计信息。按回车键后,我得到了 1949-1960 年的每月航空公司旅客的摘要以及其他数据,包括飞机乘客数量的最小值,最大值,第一四分位数,第三四分位数。中位数以及平均数。 +数据导入后,我使用 `summary(AirPassengers)` 命令获取数据的一些初始描述性统计信息。按回车键后,我得到了 1949-1960 年的每月航空公司旅客的摘要以及其他数据,包括飞机乘客数量的最小值、最大值、四分之一位数、四分之三位数、中位数以及平均数。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_air-passengers.png?itok=RCJMLIb3) @@ -63,7 +62,7 @@ R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_sd-air-passengers.png?itok=d-25fQoz) -接下来,我生成了一个数据直方图,通过输入 `hist(AirPassengers);` 得到,这以图形的方式显示此数据集;Rstudio 可以将数据导出为 PNG,PDF,JPEG,TIFF,SVG,EPS 或 BMP。 +接下来,我生成了一个数据直方图,通过输入 `hist(AirPassengers);` 得到,这会以图形的方式显示此数据集;RStudio 可以将数据导出为 PNG、PDF、JPEG、TIFF、SVG、EPS 或 BMP。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_histogram-air-passengers.png?itok=0HWsseQE) @@ -79,9 +78,9 @@ 
R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo 在 R 提示符下输入 `help()` 可以很容易找到帮助信息。输入你正在寻找的信息的特定主题可以找到具体的帮助信息,例如 `help(sd)` 可以获得有关标准差的帮助。通过在提示符处输入 `contributors()` 可以获得有关 R 项目贡献者的信息。您可以通过在提示符处输入 `citation()` 来了解如何引用 R。通过在提示符出输入 `license()` 可以很容易地获得 R 的许可证信息。 -R 是在 GNU General Public License(1991 年 6 月的版本 2,或者 2007 年 6 月的版本 3)的条款下发布的。有关 R 许可证的更多信息,请参考 [R Project website][20]。 +R 是在 GNU General Public License(1991 年 6 月的版本 2,或者 2007 年 6 月的版本 3)的条款下发布的。有关 R 许可证的更多信息,请参考 [R 项目官网][20]。 -另外,RStudio 在 GUI 中提供了完美的帮助菜单。该区域包括 RStudio 备忘单(可作为 PDF 下载),[RStudio][21]的在线学习,RStudio 文档,支持和 [许可证信息][22]。 +另外,RStudio 在 GUI 中提供了完美的帮助菜单。该区域包括 RStudio 快捷表(可作为 PDF 下载),[RStudio][21]的在线学习、RStudio 文档、支持和 [许可证信息][22]。 -------------------------------------------------------------------------------- @@ -89,7 +88,7 @@ via: https://opensource.com/article/18/2/getting-started-RStudio-IDE 作者:[Don Watkins][a] 译者:[szcf-weiya](https://github.com/szcf-weiya) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6423b3093d48ff027528cfcfb79a60ed9b97a030 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 00:34:57 +0800 Subject: [PATCH 223/343] PUB:20180213 Getting started with the RStudio IDE.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @szcf-weiya 首发地址: https://linux.cn/article-9456-1.html 您的 LCTT 专页地址: https://linux.cn/lctt/szcf-weiya --- .../20180213 Getting started with the RStudio IDE.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180213 Getting started with the RStudio IDE.md (100%) diff --git a/translated/tech/20180213 Getting started with the RStudio IDE.md b/published/20180213 Getting started with the RStudio IDE.md similarity index 100% rename from translated/tech/20180213 Getting started with the RStudio IDE.md rename to published/20180213 Getting started with the RStudio IDE.md From 2f2306a13f1936049b0433634c96845ef07df3d5 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Sun, 18 Mar 2018 07:51:29 +0800 Subject: [PATCH 224/343] 2017-03-18 --- sources/talk/20180201 How I coined the term open source.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index ef99769b3b..ec6925bd77 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -33,7 +33,7 @@ 在这个镇上,Eric 把前瞻协会(Foresight) 作为行动的大本营。他一开始访问行程,他就被几个网景法律和市场部门的员工通电话。当他挂电话后,我被要求带着电话跟他们——一男一女,可能是 Mitchell Baker——这样我才能谈论对于新术语的需求。他们原则上是立即同意了,但详细条款并未达成协议。 -在那周的会议中,我仍然专注于起一个更好的名字并提出术语 “开源软件”came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, 然而一个从事市场公关的朋友觉得术语“open”被滥用了并且相信我们能做更好再说。理论上它是对的,可我想不出更好的了,所以我想尝试并推广它。 事后一想我应该直接向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 +在那周的会议中,我仍然专注于起一个更好的名字并提出术语 “开源软件”came up with the term "open source software." 
虽然那不是完美的,但我觉得足够好了。I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, 然而一个从事市场公关的朋友觉得术语 “open” 被滥用了并且相信我们能做更好再说。理论上它是对的,可我想不出更好的了,所以我想尝试并推广它。 事后一想我应该直接向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助,因为作为一个非编程人员,我在自由软件社区的影响力很弱。我从事的纳米技术是一个加分项,但不足以让我认真地接受自由软件问题的工作。作为一个Linux程序员,Todd 将会更仔细地聆听它。 @@ -47,7 +47,7 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, 不仅如此——模因演化(人类学术语)在起作用。几分钟后,另一个人明显地,没有提醒地,在仍然进行话题讨论而没说术语的情况下,用了这个术语。Todd 和我面面相觑对视:是的我们都注意到了发生的事。我很激动——它起作用了!但我保持了安静:我在小组中仍然地位不高。可能有些人都奇怪为什么 Eric 会最终邀请我。 -临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric 列出了“自由软件”、“开源软件”,并把[unknown]作为主要选项。Todd宣传“开源”模型,然后Eric 支持了他。我什么也没说,letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; a relatively minor issue. 只有我在会议中大约10%的说明放在了术语问答中。 +临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric 列出了“自由软件”、“开源软件”,并把[unknown]作为主要选项。Todd宣传“开源”模型,然后Eric 支持了他。我什么也没说,letting Todd and Eric pull the (loose, informal) consensus together around the open source name. 对于大多数与会者,他们很清楚改名不是在这讨论的最重要议题;那只是一个次要的相关议题。 我在会议中只有大约10%的说明放在了术语问答中。 但是我很高兴。在那有许多社区的关键领导人,并且他们喜欢这新名字,或者至少没反对。这是一个好的信号信号。可能我帮不上什么忙; Eric Raymond 被相当好地放在了一个宣传模因的好位子上,而且他的确做到了。立即签约参加行动,帮助建立 [Opensource.org][9] 并在新术语的宣传中发挥重要作用。 @@ -59,7 +59,7 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, 1998 年 4 月 17 日, Tim O'Reilly 提前宣布首届 “[自由软件峰会][10]” ,在 4 月14 日之前,它以首届 “[开源峰会][11]” 被提及。 -这几个月对于开源来说是相当激动人心的。似乎每周都有一个新公司宣布加入计划。读 Slashdot(科技资讯网站)已经成了一个必需操作, even for those like me who were only peripherally involved. 我十分相信新术语能对快速传播到商业很有帮助,能被公众广泛使用。 +这几个月对于开源来说是相当激动人心的。似乎每周都有一个新公司宣布加入计划。读 Slashdot(科技资讯网站)已经成了一个必需操作, 甚至对于那些像我一样只能外围地参与者亦是如此。我坚信新术语能对快速传播到商业很有帮助,能被公众广泛使用。 尽管快捷的谷歌搜索表明“开源”比“自由软件”出现的更多,但后者仍然有大量的使用,特别是和偏爱它的人们沟通的时候。 From e36ba713ee567a2fbac80cdeea8d54e5e141d697 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 10:14:26 +0800 Subject: [PATCH 225/343] PRF:20180130 Use of du - df commands (with examples).md @geekpi --- ...Use of du - df commands (with examples).md | 46 +++++++++---------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/translated/tech/20180130 Use of du - df commands (with examples).md b/translated/tech/20180130 Use of du - df commands (with examples).md index 40327aad3a..5f0f6e9c42 100644 --- a/translated/tech/20180130 Use of du - df commands (with examples).md +++ b/translated/tech/20180130 Use of du - df commands (with examples).md @@ -1,85 +1,85 @@ du 及 df 命令的使用(附带示例) ====== -在本文中,我将讨论 du 和 df 命令。du 和 df 命令都是 Linux 系统的重要工具,来显示 Linux 文件系统的磁盘使用情况。这里我们将通过一些例子来分享这两个命令的用法。 -**(推荐阅读:[使用 scp 和 rsync 命令传输文件][1])** +在本文中,我将讨论 `du` 和 `df` 命令。`du` 和 `df` 命令都是 Linux 系统的重要工具,来显示 Linux 文件系统的磁盘使用情况。这里我们将通过一些例子来分享这两个命令的用法。 -**(另请阅读:[使用 dd 和 cat 命令为 Linux 系统克隆磁盘][2])** +- **(推荐阅读:[使用 scp 和 rsync 命令传输文件][1])** +- **(另请阅读:[使用 dd 和 cat 命令为 Linux 系统克隆磁盘][2])** ### du 命令 -du(disk usage 的简称)是用于查找文件和目录的磁盘使用情况的命令。du 命令在与各种选项一起使用时能以多种格式提供结果。 +`du`(disk usage 的简称)是用于查找文件和目录的磁盘使用情况的命令。`du` 命令在与各种选项一起使用时能以多种格式提供结果。 下面是一些例子: - **1- 得到一个目录下所有子目录的磁盘使用概况** +#### 1、 得到一个目录下所有子目录的磁盘使用概况 ``` - $ du /home +$ du /home ``` ![du command][4] -该命令的输出将显示 /home 中的所有文件和目录以及显示块大小。 +该命令的输出将显示 `/home` 中的所有文件和目录以及显示块大小。 -**2- 以人类可读格式也就是 kb、mb 等显示文件/目录大小** +#### 2、 以人类可读格式也就是 kb、mb 等显示文件/目录大小 ``` - $ du -h /home +$ du -h /home ``` ![du command][6] -**3- 目录的总磁盘大小** +#### 3、 目录的总磁盘大小 ``` - $ du -s /home +$ du -s /home ``` ![du 
command][8] -它是 /home 目录的总大小 +它是 `/home` 目录的总大小 ### df 命令 -df(disk filesystem 的简称)用于显示 Linux 系统的磁盘利用率。 +df(disk filesystem 的简称)用于显示 Linux 系统的磁盘利用率。(LCTT 译注:`df` 可能应该是 disk free 的简称。) 下面是一些例子。 -**1- 显示设备名称、总块数、总磁盘空间、已用磁盘空间、可用磁盘空间和文件系统上的挂载点。** +#### 1、 显示设备名称、总块数、总磁盘空间、已用磁盘空间、可用磁盘空间和文件系统上的挂载点。 ``` - $ df +$ df ``` ![df command][10] -**2- 人类可读格式的信息** +#### 2、 人类可读格式的信息 ``` - $ df -h +$ df -h ``` ![df command][12] 上面的命令以人类可读格式显示信息。 -**3- 显示特定分区的信息** +#### 3、 显示特定分区的信息 ``` - $ df -hT /etc +$ df -hT /etc ``` ![df command][14] --hT 加上目标目录将以可读格式显示 /etc 的信息。 +`-hT` 加上目标目录将以可读格式显示 `/etc` 的信息。 -虽然 du 和 df 命令有更多选项,但是这些例子可以让你初步了解。如果在这里找不到你要找的东西,那么你可以参考有关命令的 man 页面。 +虽然 `du` 和 `df` 命令有更多选项,但是这些例子可以让你初步了解。如果在这里找不到你要找的东西,那么你可以参考有关命令的 man 页面。 另外,[**在这**][15]阅读我的其他帖子,在那里我分享了一些其他重要和经常使用的 Linux 命令。 -如往常一样,你的评论和疑问是受欢迎的,因此在下面留下你的评论和疑问,我会回复你。 +如往常一样,欢迎你留下评论和疑问,因此在下面留下你的评论和疑问,我会回复你。 -------------------------------------------------------------------------------- @@ -87,7 +87,7 @@ via: http://linuxtechlab.com/du-df-commands-examples/ 作者:[SHUSAIN][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e24203278fdb1878a0f4a424df8334cef8cc372e Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 10:14:48 +0800 Subject: [PATCH 226/343] PUB:20180130 Use of du - df commands (with examples).md @geekpi --- .../20180130 Use of du - df commands (with examples).md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180130 Use of du - df commands (with examples).md (100%) diff --git a/translated/tech/20180130 Use of du - df commands (with examples).md b/published/20180130 Use of du - df commands (with examples).md similarity index 100% rename from translated/tech/20180130 Use of du - df commands (with examples).md rename to published/20180130 Use of du - df commands (with examples).md From 19206f4ffdb06937fbf18249972540a69d1dd1d9 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Sun, 18 Mar 2018 10:46:40 +0800 Subject: [PATCH 227/343] Translated by qhwdw --- ...25 Keep Accurate Time on Linux with NTP.md | 147 ------------------ ...25 Keep Accurate Time on Linux with NTP.md | 146 +++++++++++++++++ 2 files changed, 146 insertions(+), 147 deletions(-) delete mode 100644 sources/tech/20180125 Keep Accurate Time on Linux with NTP.md create mode 100644 translated/tech/20180125 Keep Accurate Time on Linux with NTP.md diff --git a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md deleted file mode 100644 index 5895275c62..0000000000 --- a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md +++ /dev/null @@ -1,147 +0,0 @@ -Translating by qhwdw -Keep Accurate Time on Linux with NTP -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/usno-amc.jpg?itok=KA8HwI02) - -How to keep the correct time and keep your computers synchronized without abusing time servers, using NTP and systemd. - -### What Time is It? - -Linux is funky when it comes to telling the time. You might think that the `time` tells the time, but it doesn't because it is a timer that measures how long a process runs. To get the time, you run the `date` command, and to view more than one date, you use `cal`. Timestamps on files are also a source of confusion as they are typically displayed in two different ways, depending on your distro defaults. 
This example is from Ubuntu 16.04 LTS: -``` -$ ls -l -drwxrwxr-x 5 carla carla 4096 Mar 27 2017 stuff -drwxrwxr-x 2 carla carla 4096 Dec 8 11:32 things --rw-rw-r-- 1 carla carla 626052 Nov 21 12:07 fatpdf.pdf --rw-rw-r-- 1 carla carla 2781 Apr 18 2017 oddlots.txt - -``` - -Some display the year, some display the time, which makes ordering your files rather a mess. The GNU default is files dated within the last six months display the time instead of the year. I suppose there is a reason for this. If your Linux does this, try `ls -l --time-style=long-iso` to display the timestamps all the same way, sorted alphabetically. See [How to Change the Linux Date and Time: Simple Commands][1] to learn all manner of fascinating ways to manage the time on Linux. - -### Check Current Settings - -NTP, the network time protocol, is the old-fashioned way of keeping correct time on computers. `ntpd`, the NTP daemon, periodically queries a public time server and adjusts your system time as needed. It's a simple lightweight protocol that is easy to set up for basic use. Systemd has barged into NTP territory with the `systemd-timesyncd.service`, which acts as a client to `ntpd`. - -Before messing with NTP, let's take a minute to check that current time settings are correct. - -There are (at least) two timekeepers on your system: system time, which is managed by the Linux kernel, and the hardware clock on your motherboard, which is also called the real-time clock (RTC). When you enter your system BIOS, you see the hardware clock time and you can change its settings. When you install a new Linux, and in some graphical time managers, you are asked if you want your RTC set to the UTC (Coordinated Universal Time) zone. It should be set to UTC, because all time zone and daylight savings time calculations are based on UTC. Use the `hwclock` command to check: -``` -$ sudo hwclock --debug -hwclock from util-linux 2.27.1 -Using the /dev interface to the clock. -Hardware clock is on UTC time -Assuming hardware clock is kept in UTC time. -Waiting for clock tick... -...got clock tick -Time read from Hardware Clock: 2018/01/22 22:14:31 -Hw clock time : 2018/01/22 22:14:31 = 1516659271 seconds since 1969 -Time since last adjustment is 1516659271 seconds -Calculated Hardware Clock drift is 0.000000 seconds -Mon 22 Jan 2018 02:14:30 PM PST .202760 seconds - -``` - -"Hardware clock is kept in UTC time" confirms that your RTC is on UTC, even though it translates the time to your local time. If it were set to local time it would report "Hardware clock is kept in local time." - -You should have a `/etc/adjtime` file. If you don't, sync your RTC to system time: -``` -$ sudo hwclock -w - -``` - -This should generate the file, and the contents should look like this example: -``` -$ cat /etc/adjtime -0.000000 1516661953 0.000000 -1516661953 -UTC - -``` - -The new-fangled systemd way is to run `timedatectl`, which does not need root permissions: -``` -$ timedatectl - Local time: Mon 2018-01-22 14:17:51 PST - Universal time: Mon 2018-01-22 22:17:51 UTC - RTC time: Mon 2018-01-22 22:17:51 - Time zone: America/Los_Angeles (PST, -0800) - Network time on: yes -NTP synchronized: yes - RTC in local TZ: no - -``` - -"RTC in local TZ: no" confirms that it is on UTC time. What if it is on local time? There are, as always, multiple ways to change it. The easy way is with a nice graphical configuration tool, like YaST in openSUSE. 
You can use `timedatectl`: -``` -$ timedatectl set-local-rtc 0 -``` - -Or edit `/etc/adjtime`, replacing UTC with LOCAL. - -### systemd-timesyncd Client - -Now I'm tired, and we've just gotten to the good part. Who knew timekeeping was so complex? We haven't even scratched the surface; read `man 8 hwclock` to get an idea of how time is kept on computers. - -Systemd provides the `systemd-timesyncd.service` client, which queries remote time servers and adjusts your system time. Configure your servers in `/etc/systemd/timesyncd.conf`. Most Linux distributions provide a default configuration that points to time servers that they maintain, like Fedora: -``` -[Time] -#NTP= -#FallbackNTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org - -``` - -You may enter any other servers you desire, such as your own local NTP server, on the `NTP=` line in a space-delimited list. (Remember to uncomment this line.) Anything you put on the `NTP=` line overrides the fallback. - -What if you are not using systemd? Then you need only NTP. - -### Setting up NTP Server and Client - -It is a good practice to set up your own LAN NTP server, so that you are not pummeling public NTP servers from all of your computers. On most Linuxes NTP comes in the `ntp` package, and most of them provide `/etc/ntp.conf` to configure the service. Consult [NTP Pool Time Servers][2] to find the NTP server pool that is appropriate for your region. Then enter 4-5 servers in your `/etc/ntp.conf` file, with each server on its own line: -``` -driftfile /var/ntp.drift -logfile /var/log/ntp.log -server 0.europe.pool.ntp.org -server 1.europe.pool.ntp.org -server 2.europe.pool.ntp.org -server 3.europe.pool.ntp.org - -``` - -The `driftfile` tells `ntpd` where to store the information it needs to quickly synchronize your system clock with the time servers at startup, and your logs should have their own home instead of getting dumped into the syslog. Use your Linux distribution defaults for these files if it provides them. - -Now start the daemon; on most Linuxes this is `sudo systemctl start ntpd`. Let it run for a few minutes, then check its status: -``` -$ ntpq -p - remote refid st t when poll reach delay offset jitter -============================================================== -+dev.smatwebdesi 192.168.194.89 3 u 25 64 37 92.456 -6.395 18.530 -*chl.la 127.67.113.92 2 u 23 64 37 75.175 8.820 8.230 -+four0.fairy.mat 35.73.197.144 2 u 22 64 37 116.272 -10.033 40.151 --195.21.152.161 195.66.241.2 2 u 27 64 37 107.559 1.822 27.346 - -``` - -I have no idea what any of that means, other than your daemon is talking to the remote time servers, and that is what you want. To permanently enable it, run `sudo systemctl enable ntpd`. If your Linux doesn't use systemd then it is your homework to figure out how to run `ntpd`. - -Now you can set up `systemd-timesyncd` on your other LAN hosts to use your local NTP server, or install NTP on them and enter your local server in their `/etc/ntp.conf` files. - -NTP servers take a beating, and demand continually increases. You can help by running your own public NTP server. Come back next week to learn how. - -Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX. 
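
As a quick recap of the client setup described above, a LAN host running systemd could point `/etc/systemd/timesyncd.conf` at your local server. The name `ntp.example.lan` below is only a placeholder for whatever your own server is actually called:

```
[Time]
NTP=ntp.example.lan
FallbackNTP=0.pool.ntp.org 1.pool.ntp.org
```

Restart the client service and run `timedatectl` again to confirm it reports "NTP synchronized: yes":

```
$ sudo systemctl restart systemd-timesyncd
$ timedatectl
```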
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp - -作者:[CARLA SCHRODER][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/learn/how-change-linux-date-and-time-simple-commands -[2]:http://support.ntp.org/bin/view/Servers/NTPPoolServers -[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md b/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md new file mode 100644 index 0000000000..d2eddc0d63 --- /dev/null +++ b/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md @@ -0,0 +1,146 @@ +在 Linux 上使用 NTP 保持精确的时间 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/usno-amc.jpg?itok=KA8HwI02) + +如何保持正确的时间,如何使用 NTP 和 systemd 让你的计算机在不滥用时间服务器的前提下保持同步。 + +### 它的时间是多少? + +当让 Linux 来告诉你时间的时候,它是很奇怪的。你可能认为是使用 `time` 命令来告诉你时间,其实并不是,因为 `time` 只是一个测量一个进程运行了多少时间的计时器。为得到时间,你需要运行的是 `date` 命令,你想查看更多的日期,你可以运行 `cal` 命令。文件上的时间戳也是一个容易混淆的地方,因为根据你的发行版默认情况不同,它一般有两种不同的显示方法。下面是来自 Ubuntu 16.04 LTS 的示例: +``` +$ ls -l +drwxrwxr-x 5 carla carla 4096 Mar 27 2017 stuff +drwxrwxr-x 2 carla carla 4096 Dec 8 11:32 things +-rw-rw-r-- 1 carla carla 626052 Nov 21 12:07 fatpdf.pdf +-rw-rw-r-- 1 carla carla 2781 Apr 18 2017 oddlots.txt + +``` + +有些显示年,有些显示时间,这样的方式让你的文件更混乱。GNU 默认的情况是,如果你的文件在六个月以内,则显示时间而不是年。我想这样做可能是有原因的。如果你的 Linux 是这样的,尝试用 `ls -l --time-style=long-iso` 命令,让时间戳用同一种方式去显示,按字母顺序排序。查阅 [如何更改 Linux 的日期和时间:简单的命令][1] 去学习 Linux 上管理时间的各种方法。 + +### 检查当前设置 + +NTP —— 网络时间协议,它是老式的保持计算机正确时间的方法。`ntpd` 是 NTP 守护程序,它通过周期性地查询公共时间服务器来按需调整你的计算机时间。它是一个简单的、轻量级的协议,使用它的基本功能时设置非常容易。Systemd 通过使用 `systemd-timesyncd.service` 已经越俎代庖 “干了 NTP 的活”,它可以用作 `ntpd` 的客户端。 + +在我们开始与 NTP “打交道” 之前,先花一些时间来了检查一下当前的时间设置是否正确。 + +你的系统上(至少)有两个时钟:系统时间 —— 它由 Linux 内核管理,第二个是你的主板上的硬件时钟,它也称为实时时钟(RTC)。当你进入系统的 BIOS 时,你可以看到你的硬件时钟的时间,你也可以去改变它的设置。当你安装一个新的 Linux 时,在一些图形化的时间管理器中,你会被询问是否设置你的 RTC 为 UTC(协调世界时间)时区,因为所有的时区和夏令时都是基于 UTC 的。你可以使用 `hwclock` 命令去检查: +``` +$ sudo hwclock --debug +hwclock from util-linux 2.27.1 +Using the /dev interface to the clock. +Hardware clock is on UTC time +Assuming hardware clock is kept in UTC time. +Waiting for clock tick... 
+...got clock tick +Time read from Hardware Clock: 2018/01/22 22:14:31 +Hw clock time : 2018/01/22 22:14:31 = 1516659271 seconds since 1969 +Time since last adjustment is 1516659271 seconds +Calculated Hardware Clock drift is 0.000000 seconds +Mon 22 Jan 2018 02:14:30 PM PST .202760 seconds + +``` + +"硬件时钟用 UTC 时间维护" 确认了你的计算机的 RTC 是使用 UTC 时间,虽然你的本地时间是通过 UTC 转换来的。如果设置本地时间,它将报告 “硬件时钟用本地时间维护”。 + +如果你不同步你的 RTC 到系统时间,你应该有一个 `/etc/adjtime` 文件。 +``` +$ sudo hwclock -w + +``` + +这个命令将生成这个文件,它将包含如下示例中的内容: +``` +$ cat /etc/adjtime +0.000000 1516661953 0.000000 +1516661953 +UTC + +``` + +新发明的 systemd 方式是去运行 `timedatectl` 命令,运行它不需要 root 权限: +``` +$ timedatectl + Local time: Mon 2018-01-22 14:17:51 PST + Universal time: Mon 2018-01-22 22:17:51 UTC + RTC time: Mon 2018-01-22 22:17:51 + Time zone: America/Los_Angeles (PST, -0800) + Network time on: yes +NTP synchronized: yes + RTC in local TZ: no + +``` + +"RTC in local TZ: no" 确认了它没有使用 UTC 时间。如果要改变它的本地时间,怎么办?这里有许多种方法可以做到。最简单的方法是使用一个图形配置工具,比如像 openSUSE 中的 YaST。你可使用 `timedatectl`: +``` +$ timedatectl set-local-rtc 0 +``` + +或者编辑 `/etc/adjtime`,将 UTC 替换为 LOCAL。 + +### systemd-timesyncd 客户端 + +现在,我已经累了,但是我们刚到非常精彩的部分。谁能想到计时如此复杂?我们甚至还没有了解到它的皮毛;阅读 `man 8 hwclock` 去了解你的计算机如何保持时间的详细内容。 + +Systemd 提供了 `systemd-timesyncd.service` 客户端,它查询远程时间服务器并调整你的本地系统时间。在 `/etc/systemd/timesyncd.conf` 中配置你的服务器。大多数 Linux 发行版都提供一个默认配置,它指向他们维护的时间服务器上,比如,以下是 Fedora 的: +``` +[Time] +#NTP= +#FallbackNTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org + +``` + +你可以输入你希望的其它时间服务器,比如你自己的本地 NTP 服务器,在 `NTP=` 行上输入一个以空格分隔的服务器列表。(别忘了取消这一行的注释)`NTP=` 行上的任何内容都将覆盖掉 fallback 行上的配置项。 + +如果你不想使用 systemd 呢?那么,你将需要一个 NTP。 + +### 配置 NTP 服务器和客户端 + +配置你自己的局域网 NTP 服务器是一个非常好的实践,这样你的网内计算机就不需要不停查询公共 NTP 服务器。在大多数 Linux 的 `ntp` 包中都带了 NTP,它们大多都提供 `/etc/ntp.conf` 文件去配置服务器。查阅 [NTP 时间服务器池][2] 去找到你所在的区域的合适的 NTP 服务器池。然后在你的 `/etc/ntp.conf` 中输入 4- 5 个服务器,每个服务器用单独的一行: +``` +driftfile /var/ntp.drift +logfile /var/log/ntp.log +server 0.europe.pool.ntp.org +server 1.europe.pool.ntp.org +server 2.europe.pool.ntp.org +server 3.europe.pool.ntp.org + +``` + +`driftfile` 告诉 `ntpd` 在这里保存的信息是用于在启动时,使用时间服务器去快速同步你的系统时钟的。而日志将保存在他们自己指定的目录中,而不是转储到 syslog 中。如果你的 Linux 发行版默认提供了这些文件,请使用它们。 + +现在去启动守护程序;在大多数主流的 Linux 中它的命令是 `sudo systemctl start ntpd`。让它运行几分钟之后,我们再次去检查它的状态: +``` +$ ntpq -p + remote refid st t when poll reach delay offset jitter +============================================================== ++dev.smatwebdesi 192.168.194.89 3 u 25 64 37 92.456 -6.395 18.530 +*chl.la 127.67.113.92 2 u 23 64 37 75.175 8.820 8.230 ++four0.fairy.mat 35.73.197.144 2 u 22 64 37 116.272 -10.033 40.151 +-195.21.152.161 195.66.241.2 2 u 27 64 37 107.559 1.822 27.346 + +``` + +我不知道这些内容是什么意思,但重要的是,你的守护程序已经与时间服务器开始对话了,而这正是我们所需要的。你可以去运行 `sudo systemctl enable ntpd` 命令,永久启用它。如果你的 Linux 没有使用 systemd,那么,给你留下的家庭作业就是找出如何去运行 `ntpd`。 + +现在,你可以在你的局域网中的其它计算机上设置 `systemd-timesyncd`,这样它们就可以使用你的本地 NTP 服务器了,或者,在它们上面安装 NTP,然后在它们的 `/etc/ntp.conf` 上输入你的本地 NTP 服务器。 + +NTP 服务器持续地接受客户端查询,并且这种需求在不断增加。你可以通过运行你自己的公共 NTP 服务器来提供帮助。下周我们将学习如何运行你自己的公共服务器。 + +通过来自 Linux 基金会和 edX 的免费课程 ["Linux 入门" ][3] 来学习更多 Linux 的知识。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp + +作者:[CARLA SCHRODER][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder 
+[1]:https://www.linux.com/learn/how-change-linux-date-and-time-simple-commands +[2]:http://support.ntp.org/bin/view/Servers/NTPPoolServers +[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From f9743c7ce5250f479a45b07111dee31cfb49dc4f Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 11:16:25 +0800 Subject: [PATCH 228/343] PRF:20180205 New Linux User- Try These 8 Great Essential Linux Apps.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @CYLeft 翻译用心了。 --- ... Try These 8 Great Essential Linux Apps.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md b/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md index 69a0817bd2..9ca22f9550 100644 --- a/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md +++ b/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md @@ -7,13 +7,13 @@ Linux 新用户?来试试这 8 款重要的软件 下面这些应用程序大多不是 Linux 独有的。如果有过使用 Windows/Mac 的经验,您很可能会熟悉其中一些软件。根据兴趣和需求,下面的程序可能不全符合您的要求,但是在我看来,清单里大多数甚至全部的软件,对于新用户开启 Linux 之旅都是有帮助的。 -**相关链接** : [每一个 Linux 用户都应该使用的 11 个便携软件][1] +**相关链接** : [每一个 Linux 用户都应该使用的 11 个可移植软件][1] ### 1. Chromium 网页浏览器 ![linux-apps-01-chromium][2] -很难有一个不需要使用网页浏览器的用户。您可以看到陈旧的 Linux 发行版几乎都会附带 Firefox(火狐浏览器)或者其他 [Linux 浏览器][3],关于浏览器,强烈建议您尝试 [Chromium][4]。它是谷歌浏览器的开源版。Chromium 的主要优点是速度和安全性。它同样拥有大量的附加组件。 +几乎不会不需要使用网页浏览器的用户。您可以看到陈旧的 Linux 发行版几乎都会附带 Firefox(火狐浏览器)或者其他 [Linux 浏览器][3],关于浏览器,强烈建议您尝试 [Chromium][4]。它是谷歌浏览器的开源版。Chromium 的主要优点是速度和安全性。它同样拥有大量的附加组件。 ### 2. LibreOffice @@ -21,13 +21,13 @@ Linux 新用户?来试试这 8 款重要的软件 [LibreOffice][6] 是一个开源办公套件,其包括文字处理(Writer)、电子表格(Calc)、演示(Impress)、数据库(Base)、公式编辑器(Math)、矢量图和流程图(Draw)应用程序。它与 Microsoft Office 文档兼容,如果其基本功能不能满足需求,您可以使用 [LibreOffice 拓展][7]。 -LibreOffice 当然是 Linux 应用中至关重要的一员,如果您使用 Linux 的计算机,安装它是有必要的。 +LibreOffice 显然是 Linux 应用中至关重要的一员,如果您使用 Linux 的计算机,安装它是有必要的。 -### 3. GIMP(GNU Image Manipulation Program、GUN 图像处理程序) +### 3. GIMP(GUN 图像处理程序GNU Image Manipulation Program) ![linux-apps-03-gimp][8] -[GIMP][9] 是一款非常强大的开源图片处理程序,它类似于 Photoshop。通过 GIMP,您可以编辑或是创建用于 web 或是打印的光栅图(位图)。如果您对专业的图片处理没有概念,Linux 自然提供有更简单的图像编辑器,GIMP 看上去可能会复杂一点。GIMP 并不单纯提供图片裁剪和大小调整,它更覆盖了图层、滤镜、遮罩、路径和其他一些高级功能。 +[GIMP][9] 是一款非常强大的开源图片处理程序,它类似于 Photoshop。通过 GIMP,您可以编辑或是创建用于 Web 或是打印的光栅图(位图)。如果您对专业的图片处理没有概念,Linux 自然提供有更简单的图像编辑器,GIMP 看上去可能会复杂一点。GIMP 并不单纯提供图片裁剪和大小调整,它更覆盖了图层、滤镜、遮罩、路径和其他一些高级功能。 ### 4. VLC 媒体播放器 @@ -39,15 +39,15 @@ LibreOffice 当然是 Linux 应用中至关重要的一员,如果您使用 Lin ![linux-apps-05-jitsi][12] -[Jitsy][13] 完全是关于通讯的。您可以借助它使用 Google talk、Facebook chat、Yahoo、ICQ 和 XMPP。它是用于音视频通话(包括电话会议),桌面流和群组聊天的多用户工具。会话会被加密。Jistsy 同样能帮助您传输文件或记录电话。 +[Jitsy][13] 完全是关于通讯的。您可以借助它使用 Google talk、Facebook chat、Yahoo、ICQ 和 XMPP。它是用于音视频通话(包括电话会议),桌面流desktop streaming和群组聊天的多用户工具。会话会被加密。Jistsy 同样能帮助您传输文件或记录电话。 ### 6. Synaptic ![linux-apps-06-synaptic][14] -[Synaptic][15] 是一款基于 Debian 的系统发行版的另一款应用程序安装程序。并不是所有基于 Debian 的 Linux 都安装有它,如果您使用基于 Debian 的 Linux 操作系统没有预装,也许您可以试一试。Synaptic 是一款用于添加或移除系统应用的 GUI 工具,甚至相对于许多发行版默认安装的 [软件中心包管理器][16] ,经验丰富的 Linux 用户更亲睐于 Sunaptic。 +[Synaptic][15] 是一款基于 Debian 系统发行版的另一款应用程序安装程序。并不是所有基于 Debian 的 Linux 都安装有它,如果您使用基于 Debian 的 Linux 操作系统没有预装,也许您可以试一试。Synaptic 是一款用于添加或移除系统应用的 GUI 工具,甚至相对于许多发行版默认安装的 [软件中心包管理器][16] ,经验丰富的 Linux 用户更亲睐于 Sunaptic。 -**相关链接** : [10 款您没听说过的充当生产力的 Linux 应用程序][17] +**相关链接** : [10 款您没听说过的 Linux 生产力应用程序][17] ### 7. 
VirtualBox @@ -59,9 +59,9 @@ LibreOffice 当然是 Linux 应用中至关重要的一员,如果您使用 Lin ![linux-apps-08-aisleriot][20] -对于 Linux 的新用户来说,一款纸牌游戏并不是刚需,但是它真的太有趣了。当您进入这款纸牌游戏,您会发现,这是一款极好的纸牌包。[AisleRiot][21] 是 Linux 标志性的应用程序,原因是 - 它涵盖超过八十中纸牌游戏,包括流行的 Klondike、Bakers Dozen、Camelot 等等,这些只是预告片 - 它是会上瘾的,您可能会花很长时间沉迷于此! +对于 Linux 的新用户来说,一款纸牌游戏并不是刚需,但是它真的太有趣了。当您进入这款纸牌游戏,您会发现,这是一款极好的纸牌游戏包。[AisleRiot][21] 是 Linux 标志性的应用程序,原因是 - 它涵盖超过八十种纸牌游戏,包括流行的 Klondike、Bakers Dozen、Camelot 等等,作为预警 - 它是会上瘾的,您可能会花很长时间沉迷于此! -根据您所使用的发行版,这些软件会有不同的安装方法。但是大多数都可以通过您使用的发行版中的包管理器安装使用,甚至它们可能会预装在您的发行版上。安装并且尝试它们想必是最好的,如果不和您的胃口,您可以轻松地删除它们。 +根据您所使用的发行版,这些软件会有不同的安装方法。但是大多数都可以通过您使用的发行版中的包管理器安装使用,甚至它们可能会预装在您的发行版上。安装并且尝试它们想必是最好的,如果不合您的胃口,您可以轻松地删除它们。 -------------------------------------------------------------------------------- @@ -69,7 +69,7 @@ via: https://www.maketecheasier.com/essential-linux-apps/ 作者:[Ada Ivanova][a] 译者:[CYLeft](https://github.com/CYLeft) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5c487c4e8800e59196d7130fb570aceb16052cff Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 11:16:54 +0800 Subject: [PATCH 229/343] PUB:20180205 New Linux User- Try These 8 Great Essential Linux Apps.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @CYLeft 延后至周二发布 --- ...0205 New Linux User- Try These 8 Great Essential Linux Apps.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md (100%) diff --git a/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md b/published/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md similarity index 100% rename from translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md rename to published/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md From 7bf9dcf93bc925369b0a2c269eb3bc62e01215a9 Mon Sep 17 00:00:00 2001 From: szcf-weiya <2215235182@qq.com> Date: Sun, 18 Mar 2018 12:05:36 +0800 Subject: [PATCH 230/343] translating by szcf-weiya --- ...180302 10 Quick Tips About sudo command for Linux systems.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md b/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md index bcfad89a12..3fe7d3ca49 100644 --- a/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md +++ b/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md @@ -1,3 +1,5 @@ +translating by szcf-weiya + 10 Quick Tips About sudo command for Linux systems ====== From 7d49b2087193292aead99172906982da2e5f4801 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Sun, 18 Mar 2018 13:16:12 +0800 Subject: [PATCH 231/343] Update 20180213 How to clone, modify, add, and delete files in Git.md --- ...180213 How to clone, modify, add, and delete files in Git.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md b/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md index fa6648cee0..a58a2b6b85 100644 --- a/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md +++ b/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md @@ -1,3 +1,5 @@ +Translating by 
MjSeven + How to clone, modify, add, and delete files in Git ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_cat.png?itok=ta54QTAf) From 97b545808ba2051fa1de924aaa882bfc945d4301 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Sun, 18 Mar 2018 13:30:33 +0800 Subject: [PATCH 232/343] Translated by qhwdw --- ...un Your Own Public Time Server on Linux.md | 102 ------------------ ...un Your Own Public Time Server on Linux.md | 101 +++++++++++++++++ 2 files changed, 101 insertions(+), 102 deletions(-) delete mode 100644 sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md create mode 100644 translated/tech/20180201 How to Run Your Own Public Time Server on Linux.md diff --git a/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md b/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md deleted file mode 100644 index 4824b0370b..0000000000 --- a/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md +++ /dev/null @@ -1,102 +0,0 @@ -Translating by qhwdw -How to Run Your Own Public Time Server on Linux -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/eddington_a._space_time_and_gravitation._fig._9.jpg?itok=KgNqViyZ) - -One of the most important public services is timekeeping, but it doesn't get a lot of attention. Most public time servers are run by volunteers to help meet always-increasing demands. Learn how to run your own public time server and contribute to an essential public good. (See [Keep Accurate Time on Linux with NTP][1] to learn how to set up a LAN time server.) - -### Famous Time Server Abusers - -Like everything in life, even something as beneficial as time servers are subject to abuse fueled by either incompetence or malice. - -Vendors of consumer network appliances are notorious for creating big messes. The first one I recall happened in 2003, when Netgear hard-coded the address of the University of Wisconsin-Madison's NTP server into their routers. All of a sudden the server was getting hammered with requests, and as Netgear sold more routers, the worse it got. Adding to the fun, the routers were programmed to send requests every second, which is way too many. Netgear issued a firmware upgrade, but few users ever upgrade their devices, and a number of them are pummeling the University of Wisconsin-Madison's NTP server to this day. Netgear gave them a pile of money, which hopefully will cover their costs until the last defective router dies. Similar ineptitudes were perpetrated by D-Link, Snapchat, TP-Link, and others. - -The NTP protocol has become a choice vector for distributed denial-of-service attacks, using both reflection and amplification. It is called reflection when an attacker uses a forged source address to target a victim; the attacker sends requests to multiple servers, which then reply and bombard the forged address. Amplification is a large reply to a small request. For example, on Linux the `ntpq` command is a useful tool to query your NTP servers to verify that they are operating correctly. Some replies, such as lists of peers, are large. Combine reflection with amplification, and an attacker can get a return of 10x or more on the bandwidth they spend on the attack. - -How do you protect your nice beneficial public NTP server? Start by using NTP 4.2.7p26 or newer, which hopefully is not an issue with your Linux distribution because that version was released in 2010. 
That release shipped with the most significant abuse vectors disabled as the default. The [current release is 4.2.8p10][2], released in 2017. - -Another step you can take, which you should be doing anyway, is use ingress and egress filtering on your network. Block packets from entering your network that claim to be from your network, and block outgoing packets with forged return addresses. Ingress filtering helps you, and egress filtering helps you and everyone else. Read [BCP38.info][3] for much more information. - -### Stratum 0, 1, 2 Time Servers - -NTP is more than 30 years old, one of the oldest Internet protocols that is still widely used. Its purpose is keep computers synchronized to Coordinated Universal Time (UTC). The NTP network is both hierarchical, organized into strata, and peer. Stratum 0 contains master timekeeping devices such as atomic clocks. Stratum 1 time servers synchronize with Stratum 0 devices. Stratum 2 time servers synchronize with Stratum 1 time servers, and Stratum 3 with Stratum 2. The NTP protocol supports 16 strata, though in real life there not that many. Servers in each stratum also peer with each other. - -In the olden days, we selected individual NTP servers for our client configurations. Those days are long gone, and now the better way is to use the [NTP pool addresses][4], which use round-robin DNS to share the load. Pool addresses are only for clients, such as individual PCs and your local LAN NTP server. When you run your own public server you won't use the pool addresses. - -### Public NTP Server Configuration - -There are two steps to running a public NTP server: set up your server, and then apply to join the NTP server pool. Running a public NTP server is a noble deed, but make sure you know what you're getting into. Joining the NTP pool is a long-term commitment, because even if you run it for a short time and then quit, you'll be receiving requests for years. - -You need a static public IP address, a permanent reliable Internet connection with at least 512Kb/s bandwidth, and know how to configure your firewall correctly. NTP uses UDP port 123. The machine itself doesn't have to be any great thing, and a lot of admins piggyback NTP on other public-facing servers such as Web servers. - -Configuring a public NTP server is just like configuring a LAN NTP server, with a few more configurations. Start by reading the [Rules of Engagement][5]. Follow the rules and mind your manners; almost everyone maintaining a time server is a volunteer just like you. Then select 4-7 Stratum 2 upstream time servers from [StratumTwoTimeServers][6]. Select some that are geographically close to your upstream Internet service provider (mine is 300 miles away), read their access policies, and then use `ping` and `mtr` to find the servers with the lowest latency and least number of hops. 
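
For example, the latency comparison suggested above can be done straight from the shell. The hostnames here are placeholders for whichever candidate servers you picked from the stratum-two list, not real servers:

```
# round-trip latency to each candidate
$ ping -c 10 0.candidate-ntp.example.org
$ ping -c 10 1.candidate-ntp.example.org

# hop count and per-hop latency in a single report
$ mtr --report --report-cycles 10 0.candidate-ntp.example.org
```

Keep the candidates with the lowest and most consistent numbers for the server lines in your `/etc/ntp.conf`.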
- -This example `/etc/ntp.conf` includes both IPv4 and IPv6 and basic safeguards: -``` -# stratum 2 server list -server servername_1 iburst -server servername_2 iburst -server servername_3 iburst -server servername_4 iburst -server servername_5 iburst - -# access restrictions -restrict -4 default kod noquery nomodify notrap nopeer limited -restrict -6 default kod noquery nomodify notrap nopeer limited - -# Allow ntpq and ntpdc queries only from localhost -restrict 127.0.0.1 -restrict ::1 - -``` - -Start your NTP server, let it run for a few minutes, and then test that it is querying the remote servers: -``` -$ ntpq -p - remote refid st t when poll reach delay offset jitter -================================================================= -+tock.no-such-ag 200.98.196.212 2 u 36 64 7 98.654 88.439 65.123 -+PBX.cytranet.ne 45.33.84.208 3 u 37 64 7 72.419 113.535 129.313 -*eterna.binary.n 199.102.46.70 2 u 39 64 7 92.933 98.475 56.778 -+time.mclarkdev. 132.236.56.250 3 u 37 64 5 111.059 88.029 74.919 - -``` - -Good so far. Now test from another PC, using your NTP server name. The following example shows correct output. If something is not correct you'll see an error message. -``` -$ ntpdate -q _yourservername_ -server 66.96.99.10, stratum 2, offset 0.017690, delay 0.12794 -server 98.191.213.2, stratum 1, offset 0.014798, delay 0.22887 -server 173.49.198.27, stratum 2, offset 0.020665, delay 0.15012 -server 129.6.15.28, stratum 1, offset -0.018846, delay 0.20966 -26 Jan 11:13:54 ntpdate[17293]: adjust time server 98.191.213.2 offset 0.014798 sec - -``` - -Once your server is running satisfactorily apply at [manage.ntppool.org][7] to join the pool. - -See the official handbook, [The Network Time Protocol (NTP) Distribution][8] to learn about all the command and configuration options, and advanced features such as management, querying, and authentication. Visit the following sites to learn pretty much everything you need about running a time server. - -Learn more about Linux through the free ["Introduction to Linux" ][9]course from The Linux Foundation and edX. 
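
One practical detail worth checking before you apply: the server must actually be reachable on UDP port 123, as noted earlier. As a rough sketch (adjust for whatever firewall tooling you actually use), opening the port might look like this on a firewalld-based system, or with plain iptables:

```
# firewalld
$ sudo firewall-cmd --permanent --add-service=ntp
$ sudo firewall-cmd --reload

# iptables
$ sudo iptables -A INPUT -p udp --dport 123 -j ACCEPT
```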
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/2/how-run-your-own-public-time-server-linux - -作者:[CARLA SCHRODER][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp -[2]:http://www.ntp.org/downloads.html -[3]:http://www.bcp38.info/index.php/Main_Page -[4]:http://www.pool.ntp.org/en/use.html -[5]:http://support.ntp.org/bin/view/Servers/RulesOfEngagement -[6]:http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers?redirectedfrom=Servers.StratumTwo -[7]:https://manage.ntppool.org/manage -[8]:https://www.eecis.udel.edu/~mills/ntp/html/index.html -[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180201 How to Run Your Own Public Time Server on Linux.md b/translated/tech/20180201 How to Run Your Own Public Time Server on Linux.md new file mode 100644 index 0000000000..70eae1596d --- /dev/null +++ b/translated/tech/20180201 How to Run Your Own Public Time Server on Linux.md @@ -0,0 +1,101 @@ +如何在 Linux 上运行你自己的公共时间服务器 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/eddington_a._space_time_and_gravitation._fig._9.jpg?itok=KgNqViyZ) + +公共服务最重要的一点就是守时,但是很多人并没有意识到这一点。大多数公共时间服务器都是由志愿者管理,以满足不断增长的需求。学习如何运行你自己的时间服务器,为基本的公共利益做贡献。(查看 [在 Linux 上使用 NTP 保持精确时间][1] 去学习如何设置一台局域网时间服务器) + +### 著名的时间服务器滥用事件 + +就像现实生活中任何一件事情一样,即便是像时间服务器这样的公益项目,也会遭受不称职的或者恶意的滥用。 + +消费类网络设备的供应商因制造了大混乱而臭名昭著。我回想起的第一件事发生在 2003 年,那时,Netgear 在它们的路由器中硬编码了 University of Wisconsin-Madison 的 NTP 时间服务器地址。使得时间服务器的查询请求突然增加,随着 NetGear 卖出越来越多的路由器,这种情况越发严重。更有意思的是,路由器的程序设置是每秒钟发送一次请求,这将使服务器难堪重负。后来 Netgear 发布了升级固件,但是,升级他们的设备的用户很少,并且他们的其中一些用户的设备,到今天为止,还在不停地每秒钟查询一次 University of Wisconsin-Madison 的 NTP 服务器。Netgear 给 University of Wisconsin-Madison 捐献了一些钱,以帮助弥补他们带来的成本增加,直到这些路由器全部淘汰。类似的事件还有 D-Link、Snapchat、TP-Link 等等。 + +对 NTP 协议进行反射和放大,已经成为发起 DDoS 攻击的一个选择。当攻击者使用一个伪造的源地址向目标受害者发送请求,称为反射;攻击者发送请求到多个服务器,这些服务器将回复请求,这样就使伪造的地址受到轰炸。放大是指一个很小的请求收到大量的回复信息。例如,在 Linux 上,`ntpq` 命令是一个查询你的 NTP 服务器并验证它们的系统时间是否正确的很有用的工具。一些回复,比如,对端列表,是非常大的。组合使用反射和放大,攻击者可以将 10 倍甚至更多带宽的数据量发送到被攻击者。 + +那么,如何保护提供公益服务的公共 NTP 服务器呢?从使用 NTP 4.2.7p26 或者更新的版本开始,它们可以帮助你的 Linux 发行版不会发生前面所说的这种问题,因为它们都是在 2010 年以后发布的。这个发行版都默认禁用了最常见的滥用攻击。目前,[最新版本是 4.2.8p10][2],它发布于 2017 年。 + +你可以采用的另一个措施是,在你的网络上启用入站和出站过滤器。阻塞进入你的网络的数据包,以及拦截发送到伪造地址的出站数据包。入站过滤器帮助你,而出站过滤器则帮助你和其他人。阅读 [BCP38.info][3] 了解更多信息。 + +### 层级为 0、1、2 的时间服务器 + +NTP 有超过 30 年的历史了,它是至今还在使用的最老的因特网协议之一。它的用途是保持计算机与协调世界时间(UTC)的同步。NTP 网络是分层组织的,并且同层的设备是对等的。层次 0 包含主守时设备,比如,原子钟。层级 1 的时间服务器与层级 0 的设备同步。层级 2 的设备与层级 1 的设备同步,层级 3 的设备与层级 2 的设备同步。NTP 协议支持 16 个层级,现实中并没有使用那么多的层级。同一个层级的服务器是相互对等的。 + +过去很长一段时间内,我们都为客户端选择配置单一的 NTP 服务器,而现在更好的做法是使用 [NTP 服务器地址池][4],它使用往返的 DNS 信息去共享负载。池地址只是为客户端服务的,比如单一的 PC 和你的本地局域网 NTP 服务器。当你运行一台自己的公共服务器时,你不能使用这些池中的地址。 + +### 公共 NTP 服务器配置 + +运行一台公共 NTP 服务器只有两步:设置你的服务器,然后加入到 NTP 服务器池。运行一台公共的 NTP 服务器是一种很高尚的行为,但是你得先知道如何加入到 NTP 服务器池中。加入 NTP 服务器池是一种长期责任,因为即使你加入服务器池后,运行了很短的时间马上退出,然后接下来的很多年你仍然会接收到请求。 + +你需要一个静态的公共 IP 地址,一个至少 512Kb/s 带宽的、可靠的、持久的因特网连接。NTP 使用的是 UDP 的 123 端口。它对机器本身要求并不高,很多管理员在其它的面向公共的服务器(比如,Web 服务器)上顺带架设了 NTP 服务。 + +配置一台公共的 NTP 服务器与配置一台用于局域网的 NTP 服务器是一样的,只需要几个配置。我们从阅读 [协议规则][5] 开始。遵守规则并注意你的行为;几乎每个时间服务器的维护者都是像你这样的志愿者。然后,从 [StratumTwoTimeServers][6] 中选择 2 台层级为 4-7 
的上游服务器。选择的时候,选取地理位置上靠近(小于 300 英里的)你的因特网服务提供商的上游服务器,阅读他们的访问规则,然后,使用 `ping` 和 `mtr` 去找到延迟和跳数最小的服务器。 + +以下的 `/etc/ntp.conf` 配置示例文件,包括了 IPv4 和 IPv6,以及基本的安全防护: +``` +# stratum 2 server list +server servername_1 iburst +server servername_2 iburst +server servername_3 iburst +server servername_4 iburst +server servername_5 iburst + +# access restrictions +restrict -4 default kod noquery nomodify notrap nopeer limited +restrict -6 default kod noquery nomodify notrap nopeer limited + +# Allow ntpq and ntpdc queries only from localhost +restrict 127.0.0.1 +restrict ::1 + +``` + +启动你的 NTP 服务器,让它运行几分钟,然后测试它对远程服务器的查询: +``` +$ ntpq -p + remote refid st t when poll reach delay offset jitter +================================================================= ++tock.no-such-ag 200.98.196.212 2 u 36 64 7 98.654 88.439 65.123 ++PBX.cytranet.ne 45.33.84.208 3 u 37 64 7 72.419 113.535 129.313 +*eterna.binary.n 199.102.46.70 2 u 39 64 7 92.933 98.475 56.778 ++time.mclarkdev. 132.236.56.250 3 u 37 64 5 111.059 88.029 74.919 + +``` + +目前表现很好。现在从另一台 PC 上使用你的 NTP 服务器名字进行测试。以下的示例是一个正确的输出。如果有不正确的地方,你将看到一些错误信息。 +``` +$ ntpdate -q _yourservername_ +server 66.96.99.10, stratum 2, offset 0.017690, delay 0.12794 +server 98.191.213.2, stratum 1, offset 0.014798, delay 0.22887 +server 173.49.198.27, stratum 2, offset 0.020665, delay 0.15012 +server 129.6.15.28, stratum 1, offset -0.018846, delay 0.20966 +26 Jan 11:13:54 ntpdate[17293]: adjust time server 98.191.213.2 offset 0.014798 sec + +``` + +一旦你的服务器运行的很好,你就可以向 [manage.ntppool.org][7] 申请加入池中。 + +查看官方的手册 [分布式网络时间服务器(NTP)][8] 学习所有的命令、配置选项、以及高级特性,比如,管理、查询、和验证。访问以下的站点学习关于运行一台时间服务器所需要的一切东西。 + +通过来自 Linux 基金会和 edX 的免费课程 ["Linux 入门" ][9] 学习更多 Linux 的知识。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/2/how-run-your-own-public-time-server-linux + +作者:[CARLA SCHRODER][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp +[2]:http://www.ntp.org/downloads.html +[3]:http://www.bcp38.info/index.php/Main_Page +[4]:http://www.pool.ntp.org/en/use.html +[5]:http://support.ntp.org/bin/view/Servers/RulesOfEngagement +[6]:http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers?redirectedfrom=Servers.StratumTwo +[7]:https://manage.ntppool.org/manage +[8]:https://www.eecis.udel.edu/~mills/ntp/html/index.html +[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From be88d60d9032a6f9548dc760d9b80acdb1523d3b Mon Sep 17 00:00:00 2001 From: qhwdw Date: Sun, 18 Mar 2018 20:48:48 +0800 Subject: [PATCH 233/343] Translated by qhwdw --- .../20171213 Will DevOps steal my job-.md | 59 ------------------- .../20171213 Will DevOps steal my job-.md | 58 ++++++++++++++++++ 2 files changed, 58 insertions(+), 59 deletions(-) delete mode 100644 sources/tech/20171213 Will DevOps steal my job-.md create mode 100644 translated/tech/20171213 Will DevOps steal my job-.md diff --git a/sources/tech/20171213 Will DevOps steal my job-.md b/sources/tech/20171213 Will DevOps steal my job-.md deleted file mode 100644 index 91c3f3aa6a..0000000000 --- a/sources/tech/20171213 Will DevOps steal my job-.md +++ /dev/null @@ -1,59 +0,0 @@ -Translating by qhwdw -Will DevOps steal my job? 
-====== - ->Are you worried automation will replace people in the workplace? You may be right, but here's why that's not a bad thing. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_question_B.png?itok=f88cyt00) ->Image by : opensource.com - -It's a common fear: Will DevOps be the end of my job? After all, DevOps means developers doing operations, right? DevOps is automation. What if I automate myself out of a job? Do continuous delivery and containers mean operations staff are obsolete? DevOps is all about coding: infrastructure-as-code and testing-as-code and this-or-that-as-code. What if I don't have the skill set to be a part of this? - -[DevOps][1] is a looming change, disruptive in the field, with seemingly fanatical followers talking about changing the world with the [Three Ways][2]--the three underpinnings of DevOps--and the tearing down of walls. It can all be overwhelming. So what's it going to be--is DevOps going to steal my job? - -### The first fear: I'm not needed - -As developers managing the entire lifecycle of an application, it's all too easy to get caught up in the idea of DevOps. Containers are probably a big contributing factor to this line of thought. When containers exploded onto the scene, they were touted as a way for developers to build, test, and deploy their code all-in-one. What role does DevOps leave for the operations team, or testing, or QA? - -This stems from a misunderstanding of the principles of DevOps. The first principle of DevOps, or the First Way, is _Systems Thinking_ , or placing emphasis on a holistic approach to managing and understanding the whole lifecycle of an application or service. This does not mean that the developers of the application learn and manage the whole process. Rather, it is the collaboration of talented and skilled individuals to ensure success as a whole. To make developers solely responsible for the process is practically the extreme opposite of this tenant--essentially the enshrining of a single silo with the importance of the entire lifecycle. - -There is a place for specialization in DevOps. Just as the classically educated software engineer with knowledge of linear regression and binary search is wasted writing Ansible playbooks and Docker files, the highly skilled sysadmin with the knowledge of how to secure a system and optimize database performance is wasted writing CSS and designing user flows. The most effective group to write, test, and maintain an application is a cross-discipline, functional team of people with diverse skill sets and backgrounds. - -### The second fear: My job will be automated - -Accurate or not, DevOps can sometimes be seen as a synonym for automation. What work is left for operations staff and testing teams when automated builds, testing, deployment, monitoring, and notifications are a huge part of the application lifecycle? This focus on automation can be partially related to the Second Way: _Amplify Feedback Loops_. This second tenant of DevOps deals with prioritizing quick feedback between teams in the opposite direction an application takes to deployment --from monitoring and maintaining to deployment, testing, development, etc., and the emphasis to make the feedback important and actionable. While the Second Way is not specifically related to automation, many of the automation tools teams use within their deployment pipelines facilitate quick notification and quick action, or course-correction based on feedback in support of this tenant. 
Traditionally done by humans, it is easy to understand why a focus on automation might lead to anxiety about the future of one's job. - -Automation is just a tool, not a replacement for people. Smart people trapped doing the same things over and over, pushing the big red George Jetson button are a wasted, untapped wealth of intelligence and creativity. Automation of the drudgery of daily work means more time to spend solving real problems and coming up with creative solutions. Humans are needed to figure out the "how and why;" computers can handle the "copy and paste." - -There will be no end of repetitive, predictable things to automate, and automation frees teams to focus on higher-order tasks in their field. Monitoring teams, no longer spending all their time configuring alerts or managing trending configuration, can start to focus on predicting alarms, correlating statistics, and creating proactive solutions. Systems administrators, freed of scheduled patching or server configuration, can spend time focusing on fleet management, performance, and scaling. Unlike the striking images of factory floors and assembly lines totally devoid of humans, automated tasks in the DevOps world mean humans can focus on creative, rewarding tasks instead of mind-numbing drudgery. - -### The third fear: I do not have the skillset for this - -"How am I going to keep up with this? I don't know how to automate. Everything is code now--do I have to be a developer and write code for a living to work in DevOps?" The third fear is ultimately a fear of self-confidence. As the culture changes, yes, teams will be asked to change along with it, and some may fear they lack the skills to perform what their jobs will become. - -Most folks, however, are probably already closer than they think. What is the Dockerfile, or configuration management like Puppet or Ansible, but environment as code? System administrators already write shell scripts and Python programs to handle repetitive tasks for them. It's hardly a stretch to learn a little more and begin using some of the tools already at their disposal to solve more problems--orchestration, deployment, maintenance-as-code--especially when freed from the drudgery of manual tasks to focus on growth. - -The answer to this fear lies in the third tenant of DevOps, the Third Way: _A Culture of Continual Experimentation and Learning_. The ability to try and fail and learn from mistakes without blame is a major factor in creating ever-more creative solutions. The Third Way is empowered by the first two ways --allowing for for quick detection of and repair of problems, and just as the developer is free to try and learn, other teams are as well. Operations teams that have never used configuration management or written programs to automate infrastructure provisioning are free to try and learn. Testing and QA teams are free to implement new testing pipelines and automate approval and release processes. In a culture that embraces learning and growing, everyone has the freedom to acquire the skills they need to succeed at and enjoy their job. - -### Conclusion - -Any disruptive practice or change in an industry can create fear or uncertainty, and DevOps is no exception. A concern for one's job is a reasonable response to the hundreds of articles and presentations enumerating the countless practices and technologies seemingly dedicated to empowering developers to take responsibility for every aspect of the industry. 
- -In truth, however, DevOps is "[a cross-disciplinary community of practice dedicated to the study of building, evolving, and operating rapidly changing resilient systems at scale][3]." DevOps means the end of silos, but not specialization. It is the delegation of drudgery to automated systems, freeing you to do what people do best: think and imagine. And if you're motivated to learn and grow, there will be no end of opportunities to solve new and challenging problems. - -Will DevOps take away your job? Yes, but it will give you a better one. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/12/will-devops-steal-my-job - -作者:[Chris Collins][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/clcollins -[1]:https://opensource.com/resources/devops -[2]:http://itrevolution.com/the-three-ways-principles-underpinning-devops/ -[3]:https://theagileadmin.com/what-is-devops/ diff --git a/translated/tech/20171213 Will DevOps steal my job-.md b/translated/tech/20171213 Will DevOps steal my job-.md new file mode 100644 index 0000000000..065d638bfa --- /dev/null +++ b/translated/tech/20171213 Will DevOps steal my job-.md @@ -0,0 +1,58 @@ +DevOps 将让你失业? +====== + +>你是否担心工作中自动化将代替人?可能是对的,但是这并不是件坏事。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_question_B.png?itok=f88cyt00) +>Image by : opensource.com + +这是一个很正常的担心:DevOps 最终会让你失业?毕竟,DevOps 意味着开发人员做运营,对吗?DevOps 是自动化的。如果我的工作都自动化了,我去做什么?实行持续分发和容器化意味着运营已经过时了吗?对于 DevOps 来说,所有的东西都是代码:基础设施是代码、测试是代码、这个和那个都是代码。如果我没有这些技能怎么办? + +[DevOps][1] 是一个即将到来的变化,将颠覆这一领域,狂热的拥挤者们正在谈论,如何使用 [三种方法][2] 去改变世界 —— 即 DevOps 的三大基础 —— 去推翻一个旧的世界。它是势不可档的。那么,问题来了 —— DevOps 将会让我失业吗? + +### 第一个担心:再也不需要我了 + +由于开发者来管理应用程序的整个生命周期,接受 DevOps 的理念很容易。容器化可能是影响这一想法的重要因素。当容器化在各种场景下铺开之后,它们被吹嘘成开发者构建、测试、和部署他们代码的一站式解决方案。DevOps 对于运营、测试、以及 QA 团队来说,有什么作用呢? 
+ +这源于对 DevOps 原则的误解。DevOps 的第一原则,或者第一方法是,_系统思考_ ,或者强调整体管理方法和了解应用程序或服务的整个生命周期。这并不意味着应用程序的开发者将学习和管理整个过程。相反,是拥有各个专业和技能的人共同合作,以确保成功。让开发者对这一过程完全负责的作法,几乎是将开发者置于使用者的对立面—— 本质上就是 “将鸡蛋放在了一个篮子里”。 + +在 DevOps 中有一个为你保留的专门职位。就像将一个受过传统教育的、拥有线性回归和二分查找知识的软件工程师,被用去写一些 Ansible playbooks 和 Docker 文件,这是一种浪费。而对于那些拥有高级技能,知道如何保护一个系统和优化数据库执行的系统管理员,被浪费在写一些 CSS 和设计用户流这样的工作上。写代码、做测试、和维护应用程序的高效团队一般是跨学科、跨职能的、拥有不同专业技术和背景的人组成的混编团队。 + +### 第二个担心:我的工作将被自动化 + +或许是,或许不是,DevOps 可能在有时候是自动化的同义词。当自动化构建、测试、部署、监视、以及提醒等事项,已经占据了整个应用程序生命周期管理的时候,还会给我们剩下什么工作呢?这种对自动化的关注可能与第二个方法有关:_放大反馈循环_。DevOps 的第二个方法是在团队和部署的应用程序之间,采用相反的方向优先处理快速反馈 —— 从监视和维护部署、测试、开发、等等,通过强调,使反馈更加重要并且可操作。虽然这第二种方式与自动化并不是特别相关,许多自动化工具团队在它们的部署流水线中使用,以促进快速提醒和快速行动,或者基于对使用者的支持业务中产生的反馈来改进。传统的做法是靠人来完成的,这就可以理解为什么自动化可能会导致未来一些人失业的焦虑了。 + +自动化只是一个工具,它并不能代替人。聪明的人使用它来做一些重复的工作,不去开发智力和创造性的财富,而是去按红色的 “George Jetson” 按钮是一种极大的浪费。让每天工作中的苦活自动化,意味着有更多的时间去解决真正的问题和即将到来的创新的解决方案。人类需要解决更多的 “怎么做和为什么” 问题,而计算机只能处理 “复制和粘贴”。 + +并不会仅限于在可重复的、可预见的事情上进行自动化,自动化让团队有更多的时间和精力去专注于本领域中更高级别的任务上。监视团队不再花费他们的时间去配置报警或者管理传统的配置,它们可能专注于预测可能的报警、相关性统计、以及设计可能的预案。系统管理员从计划补丁或服务器配置中解放出来,可以花费更多的时间专注于整体管理、性能、和可伸缩性。与工厂车间和装配线上完全没有人的景像不同,DevOps 中的自动化任务,意味着人更多关注于创造性的、有更高价值的任务,而不是一些重复的、让人麻木的苦差事。 + +### 第三个担心:我没有这些技能怎么办 + +"我怎么去继续做这些事情?我不懂如何自动化。现在所有的工作都是代码 —— 我不是开发人员,我不会做 DevOps 中写代码的工作“,第三个担心是一种不自信的担心。由于文化的改变,是的,团队将也会要求随之改变,一些人可能担心,他们缺乏继续做他们工作的技能。 + +然而,大多数人或许已经比他们所想的更接近。Dockerfile 是什么,或者像 Puppet 或 Ansible 配置管理是什么,但是环境即代码,系统管理员已经写了 shell 脚本和 Python 程序去处理他们重复的任务。学习更多的知识并使用已有的工具处理他们的更多问题 —— 编排、部署、维护即代码 —— 尤其是当从繁重的手动任务中解放出来,专注于成长时。 + +在 DevOps 的使用者中去回答这第三个担心,第三个方法是:_一种不断实验和学习的文化_。尝试、失败、并从错误中吸取教训而不是责怪它们的能力,是设计出更有创意的解决方案的重要因素。第三个方法是为前两个方法授权—— 允许快速检测和修复问题,并且开发人员可以自由地尝试和学习,其它的团队也是如此。从未使用过配置管理或者写过自动供给基础设施程序的运营团队也要自由尝试并学习。测试和 QA 团队也要自由实现新测试流水线,并且自动批准和发布新流程。在一个拥抱学习和成长的文化中,每个人都可以自由地获取他们需要的技术,去享受工作带来的成功和喜悦。 + +### 结束语 + +在一个行业中,任何可能引起混乱的实践或变化都会产生担心和不确定,DevOps 也不例外。对自己工作的担心是对成百上千的文章和演讲的合理回应,其中列举了无数的实践和技术,而这些实践和技术正致力于授权开发者对行业的各个方面承担职责。 + +然而,事实上,DevOps 是 "[一个跨学科的沟通实践,致力于研究构建、进化、和运营快速变化的弹性系统][3]"。 DevOps 意味着终结 ”筒仓“,但并不专业化。它是受委托去做苦差事的自动化系统,解放你,让你去做人类更擅长做的事:思考和想像。并且,如果你愿意去学习和成长,它将不会终结你解决新的、挑战性的问题的机会。 + +DevOps 会让你失业吗?会的,但它同时给你提供了更好的工作。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/12/will-devops-steal-my-job + +作者:[Chris Collins][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/clcollins +[1]:https://opensource.com/resources/devops +[2]:http://itrevolution.com/the-three-ways-principles-underpinning-devops/ +[3]:https://theagileadmin.com/what-is-devops/ From e6573005d7155b971d646a47a8f295ee92b34990 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 21:13:25 +0800 Subject: [PATCH 234/343] PRF:20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @MjSeven --- ...VM on CentOS 7 - RHEL 7 Headless Server.md | 193 ++++++++++++------ 1 file changed, 134 insertions(+), 59 deletions(-) diff --git a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md index 233daa72b2..fd9e9cdcba 100644 --- a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md +++ b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @@ -1,56 +1,79 @@ 如何在 CentOS 7 / RHEL 7 终端服务器上安装 KVM ====== -如何在 CnetOS 7 或 RHEL 7( Red Hat 企业版 Linux) 服务器上安装和配置 
KVM(基于内核的虚拟机)?如何在 CnetOS 7 上设置 KMV 并使用云镜像/ cloud-init 来安装客户虚拟机? +如何在 CnetOS 7 或 RHEL 7(Red Hat 企业版 Linux)服务器上安装和配置 KVM(基于内核的虚拟机)?如何在 CentOS 7 上设置 KVM 并使用云镜像 / cloud-init 来安装客户虚拟机? + +基于内核的虚拟机(KVM)是 CentOS 或 RHEL 7 的虚拟化软件。KVM 可以将你的服务器变成虚拟机管理器。本文介绍如何在 CentOS 7 或 RHEL 7 中使用 KVM 设置和管理虚拟化环境。还介绍了如何使用命令行在物理服务器上安装和管理虚拟机(VM)。请确保在服务器的 BIOS 中启用了**虚拟化技术(VT)**。你也可以运行以下命令[测试 CPU 是否支持 Intel VT 和 AMD_V 虚拟化技术][1]。 -基于内核的虚拟机(KVM)是 CentOS 或 RHEL 7 的虚拟化软件。KVM 将你的服务器变成虚拟机管理程序。本文介绍如何在 CentOS 7 或 RHEL 7 中使用 KVM 设置和管理虚拟化环境。还介绍了如何使用 CLI 在物理服务器上安装和管理虚拟机(VM)。确保在服务器的 BIOS 中启用了**虚拟化技术(vt)**。你也可以运行以下命令[测试 CPU 是否支持 Intel VT 和 AMD_V 虚拟化技术][1]。 ``` $ lscpu | grep Virtualization Virtualization: VT-x ``` -### 按照 CentOS 7/RHEL 7 终端服务器上的 KVM 安装步骤进行操作 +按照 CentOS 7/RHEL 7 终端服务器上的 KVM 安装步骤进行操作。 -#### 步骤 1: 安装 kvm +### 步骤 1: 安装 kvm 输入以下 [yum 命令][2]: -`# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install` + +``` +# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install +``` [![How to install KVM on CentOS 7 RHEL 7 Headless Server][3]][3] 启动 libvirtd 服务: + ``` # systemctl enable libvirtd # systemctl start libvirtd ``` -#### 步骤 2: 确认 kvm 安装 +### 步骤 2: 确认 kvm 安装 -确保使用 lsmod 命令和 [grep命令][4] 加载 KVM 模块: -`# lsmod | grep -i kvm` +使用 `lsmod` 命令和 [grep命令][4] 确认加载了 KVM 模块: -#### 步骤 3: 配置桥接网络 +``` +# lsmod | grep -i kvm +``` + +### 步骤 3: 配置桥接网络 + +默认情况下,由 libvirtd 配置基于 dhcpd 的网桥。你可以使用以下命令验证: -默认情况下,由 libvirtd 配置的基于 dhcpd 的网桥。你可以使用以下命令验证: ``` # brctl show # virsh net-list ``` + [![KVM default networking][5]][5] -所有虚拟机(客户机器)只能在同一台服务器上对其他虚拟机进行网络访问。为你创建的私有网络是 192.168.122.0/24。验证: -`# virsh net-dumpxml default` +所有虚拟机(客户机)只能对同一台服务器上的其它虚拟机进行网络访问。为你创建的私有网络是 192.168.122.0/24。验证: + +``` +# virsh net-dumpxml default +``` + +如果你希望你的虚拟机可用于 LAN 上的其他服务器,请在连接到你的 LAN 的服务器上设置一个网桥。更新你的网卡配置文件,如 ifcfg-enp3s0 或 em1: + +``` +# vi /etc/sysconfig/network-scripts/ifcfg-enp3s0 +``` -如果你希望你的虚拟机可用于 LAN 上的其他服务器,请在连接到你的 LAN 的服务器上设置一个网桥。更新你的网卡配置文件,如 ifcfg-enp3s0 或 em1: -`# vi /etc/sysconfig/network-scripts/enp3s0 ` 添加一行: + ``` BRIDGE=br0 ``` -[使用 vi 保存并关闭文件][6]。编辑 /etc/sysconfig/network-scripts/ifcfg-br0 : -`# vi /etc/sysconfig/network-scripts/ifcfg-br0` -添加以下东西: +[使用 vi 保存并关闭文件][6]。编辑 `/etc/sysconfig/network-scripts/ifcfg-br0`: + +``` +# vi /etc/sysconfig/network-scripts/ifcfg-br0 +``` + +添加以下内容: + ``` DEVICE="br0" # I am getting ip from DHCP server # @@ -62,29 +85,38 @@ TYPE="Bridge" DELAY="0" ``` -重新启动网络服务(警告:ssh命令将断开连接,最好重新启动该设备): -`# systemctl restart NetworkManager` +重新启动网络服务(警告:ssh 命令将断开连接,最好重新启动该设备): -用 brctl 命令验证它: -`# brctl show` +``` +# systemctl restart NetworkManager +``` -#### 步骤 4: 创建你的第一个虚拟机 +用 `brctl` 命令验证它: + +``` +# brctl show +``` + +### 步骤 4: 创建你的第一个虚拟机 + +我将会创建一个 CentOS 7.x 虚拟机。首先,使用 `wget` 命令获取 CentOS 7.x 最新的 ISO 镜像: -我将会创建一个 CentOS 7.x 虚拟机。首先,使用 wget 命令获取 CentOS 7.x 最新的 ISO 镜像: ``` # cd /var/lib/libvirt/boot/ # wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso ``` 验证 ISO 镜像: + ``` # wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/sha256sum.txt # sha256sum -c sha256sum.txt ``` -##### 创建 CentOS 7.x 虚拟机 +#### 创建 CentOS 7.x 虚拟机 在这个例子中,我创建了 2GB RAM,2 个 CPU 核心,1 个网卡和 40 GB 磁盘空间的 CentOS 7.x 虚拟机,输入: + ``` # virt-install \ --virt-type=kvm \ @@ -98,35 +130,41 @@ DELAY="0" --disk path=/var/lib/libvirt/images/centos7.qcow2,size=40,bus=virtio,format=qcow2 ``` -从另一个终端通过 ssh 和 type 配置 vnc 登录: +从另一个终端通过 `ssh` 配置 vnc 登录,输入: + ``` # virsh dumpxml centos7 | grep v nc ``` -请记录下端口值(即 5901)。你需要使用 SSH 客户端来建立隧道和 VNC 客户端才能访问远程 vnc 服务区。在客户端/桌面/ macbook pro 系统中输入以下 SSH 端口转化命令: 
-`$ ssh vivek@server1.cyberciti.biz -L 5901:127.0.0.1:5901` +请记录下端口值(即 5901)。你需要使用 SSH 客户端来建立隧道和 VNC 客户端才能访问远程 vnc 服务器。在客户端/桌面/ macbook pro 系统中输入以下 SSH 端口转发命令: + +``` +$ ssh vivek@server1.cyberciti.biz -L 5901:127.0.0.1:5901 +``` 一旦你建立了 ssh 隧道,你可以将你的 VNC 客户端指向你自己的 127.0.0.1 (localhost) 地址和端口 5901,如下所示: + [![][7]][7] 你应该看到 CentOS Linux 7 客户虚拟机安装屏幕如下: + [![][8]][8] 现在只需按照屏幕说明进行操作并安装CentOS 7。一旦安装完成后,请继续并单击重启按钮。 远程服务器关闭了我们的 VNC 客户端的连接。 你可以通过 KVM 客户端重新连接,以配置服务器的其余部分,包括基于 SSH 的会话或防火墙。 -#### 步骤 5: 使用云镜像 +### 使用云镜像 -以上安装方法对于学习目的或单个虚拟机而言是可行的。你需要部署大量的虚拟机吗? 尝试云镜像。你可以根据需要修改预先构建的云图像。例如,使用 [Cloud-init][9] 添加用户,ssh 密钥,设置时区等等,这是处理云实例的早期初始化的事实上的多分发包。让我们看看如何创建带有 1024MB RAM,20GB 磁盘空间和 1 个 vCPU 的 CentOS 7 虚拟机。(译注: vCPU 即电脑中的虚拟处理器) +以上安装方法对于学习目的或单个虚拟机而言是可行的。你需要部署大量的虚拟机吗? 可以试试云镜像。你可以根据需要修改预先构建的云镜像。例如,使用 [Cloud-init][9] 添加用户、ssh 密钥、设置时区等等,这是处理云实例的早期初始化的事实上的多分发包。让我们看看如何创建带有 1024MB RAM,20GB 磁盘空间和 1 个 vCPU 的 CentOS 7 虚拟机。(LCTT 译注: vCPU 即电脑中的虚拟处理器) -##### 获取 CentOS 7 云镜像 +#### 获取 CentOS 7 云镜像 ``` # cd /var/lib/libvirt/boot # wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 ``` -##### 创建所需的目录 +#### 创建所需的目录 ``` # D=/var/lib/libvirt/images @@ -135,31 +173,39 @@ DELAY="0" mkdir: created directory '/var/lib/libvirt/images/centos7-vm1' ``` -##### 创建元数据文件 +#### 创建元数据文件 ``` # cd $D/$VM # vi meta-data ``` -添加以下东西: +添加以下内容: + ``` instance-id: centos7-vm1 local-hostname: centos7-vm1 ``` -##### 创建用户数据文件 +#### 创建用户数据文件 + +我将使用 ssh 密钥登录到虚拟机。所以确保你有 ssh 密钥: + +``` +# ssh-keygen -t ed25519 -C "VM Login ssh key" +``` -我将使用 ssh 密钥登录到虚拟机。所以确保你有 ssh-keys: -`# ssh-keygen -t ed25519 -C "VM Login ssh key"` [![ssh-keygen command][10]][11] -请参阅 "[如何在 Linux/Unix 系统上设置 SSH 密钥][12]" 来获取更多信息。编辑用户数据如下: +请参阅 “[如何在 Linux/Unix 系统上设置 SSH 密钥][12]” 来获取更多信息。编辑用户数据如下: + ``` # cd $D/$VM # vi user-data ``` -添加如下(根据你的设置替换主机名,用户,ssh-authorized-keys): + +添加如下(根据你的设置替换 `hostname`、`users`、`ssh-authorized-keys`): + ``` #cloud-config @@ -199,14 +245,14 @@ runcmd: - yum -y remove cloud-init ``` -##### 复制云镜像 +#### 复制云镜像 ``` # cd $D/$VM # cp /var/lib/libvirt/boot/CentOS-7-x86_64-GenericCloud.qcow2 $VM.qcow2 ``` -##### 创建 20GB 磁盘映像 +#### 创建 20GB 磁盘映像 ``` # cd $D/$VM @@ -215,25 +261,30 @@ runcmd: # virt-resize --quiet --expand /dev/sda1 $VM.qcow2 $VM.new.image ``` [![Set VM image disk size][13]][13] -覆盖它的缩放图片: + +用压缩后的镜像覆盖它: + ``` # cd $D/$VM # mv $VM.new.image $VM.qcow2 ``` -##### 创建一个 cloud-init ISO +#### 创建一个 cloud-init ISO + +``` +# mkisofs -o $VM-cidata.iso -V cidata -J -r user-data meta-data +``` -`# mkisofs -o $VM-cidata.iso -V cidata -J -r user-data meta-data` [![Creating a cloud-init ISO][14]][14] -##### 创建一个 pool +#### 创建一个池 ``` # virsh pool-create-as --name $VM --type dir --target $D/$VM Pool centos7-vm1 created ``` -##### 安装 CentOS 7 虚拟机 +#### 安装 CentOS 7 虚拟机 ``` # cd $D/$VM @@ -247,23 +298,31 @@ Pool centos7-vm1 created --graphics spice \ --noautoconsole ``` + 删除不需要的文件: + ``` # cd $D/$VM # virsh change-media $VM hda --eject --config # rm meta-data user-data centos7-vm1-cidata.iso ``` -##### 查找虚拟机的 IP 地址 +#### 查找虚拟机的 IP 地址 -`# virsh net-dhcp-leases default` +``` +# virsh net-dhcp-leases default +``` [![CentOS7-VM1- Created][15]][15] -##### 登录到你的虚拟机 +#### 登录到你的虚拟机 + +使用 ssh 命令: + +``` +# ssh vivek@192.168.122.85 +``` -使用 ssh 命令: -`# ssh vivek@192.168.122.85` [![Sample VM session][16]][16] ### 有用的命令 @@ -272,7 +331,9 @@ Pool centos7-vm1 created #### 列出所有虚拟机 -`# virsh list --all` +``` +# virsh list --all +``` #### 获取虚拟机信息 @@ -283,21 +344,33 @@ Pool centos7-vm1 created #### 停止/关闭虚拟机 -`# virsh shutdown centos7-vm1` +``` +# virsh 
shutdown centos7-vm1 +``` #### 开启虚拟机 -`# virsh start centos7-vm1` +``` +# virsh start centos7-vm1 +``` #### 将虚拟机标记为在引导时自动启动 -`# virsh autostart centos7-vm1` +``` +# virsh autostart centos7-vm1 +``` #### 重新启动(软安全重启)虚拟机 -`# virsh reboot centos7-vm1` +``` +# virsh reboot centos7-vm1 +``` + 重置(硬重置/不安全)虚拟机 -`# virsh reset centos7-vm1` + +``` +# virsh reset centos7-vm1 +``` #### 删除虚拟机 @@ -309,7 +382,9 @@ Pool centos7-vm1 created # VM=centos7-vm1 # rm -ri $D/$VM ``` -查看 virsh 命令类型的完整列表 + +查看 virsh 命令类型的完整列表: + ``` # virsh help | less # virsh help | grep reboot @@ -321,11 +396,11 @@ Pool centos7-vm1 created -------------------------------------------------------------------------------- -via: [https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/](https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/) +via: https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/ 作者:[Vivek Gite][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0771f5b7ae4309d30d3aecd618df081f43d0b1ee Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 21:13:57 +0800 Subject: [PATCH 235/343] PUB: 20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @MjSeven --- ...127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md (100%) diff --git a/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/published/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md similarity index 100% rename from translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md rename to published/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md From fa845baed7470f138a1cd34b8503b32c93fef0f7 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 23:32:04 +0800 Subject: [PATCH 236/343] PRF:20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md @qhwdw --- ...l Module for Runtime Integrity Checking.md | 34 ++++++++----------- 1 file changed, 15 insertions(+), 19 deletions(-) diff --git a/translated/tech/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md b/translated/tech/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md index e56b3c06d3..1ce18d33f9 100644 --- a/translated/tech/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md +++ b/translated/tech/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md @@ -1,52 +1,48 @@ -LKRG:Linux 的适用于运行时完整性检查的可加载内核模块 +LKRG:用于运行时完整性检查的可加载内核模块 ====== ![LKRG logo][1] -开源社区的成员正在致力于一个 Linux 内核的新项目,它可以让内核更安全。命名为 Linux 内核运行时防护(Linux Kernel Runtime Guard,简称:LKRG),它是一个在 Linux 内核执行运行时完整性检查时的可加载内核模块。 +开源社区的人们正在致力于一个 Linux 内核的新项目,它可以让内核更安全。命名为 Linux 内核运行时防护(Linux Kernel Runtime Guard,简称:LKRG),它是一个在 Linux 内核执行运行时完整性检查的可加载内核模块(LKM)。 它的用途是检测对 Linux 内核的已知的或未知的安全漏洞利用企图,以及去阻止这种攻击企图。 LKRG 也可以检测正在运行的进程的提权行为,在漏洞利用代码运行之前杀掉这个运行进程。 -### 这个项目从 2011 年开始开发以来,首个版本已经发布。 +### 这个项目开发始于 2011 年,首个版本已经发布 -因为这个项目开发的较早,LKRG 的当前版本仅仅是通过内核消息去报告违反内核完整性的行为,但是随着这个项目的成熟,一个完整的漏洞利用缓减系统将会部署。 +因为这个项目开发的较早,LKRG 的当前版本仅仅是通过内核消息去报告违反内核完整性的行为,但是随着这个项目的成熟,将会部署一个完整的漏洞利用缓减系统。 -LKRG 的成员 
Alexander Peslyak 解释说,这个项目从 2011 年启动,并且 LKRG 已经经历了“预开发"阶段。 +LKRG 的成员 Alexander Peslyak 解释说,这个项目从 2011 年启动,并且 LKRG 已经经历了一个“重新开发"阶段。 -LKRG 的首个公开版本是 — LKRG v0.0 — 它现在可以从 [这个页面][2] 下载使用。[这里][3] 是这个项目的维基,为支持这个项目,它也有一个 [Patreon 页面][4]。 +LKRG 的首个公开版本是 LKRG v0.0,它现在可以从 [这个页面][2] 下载使用。[这里][3] 是这个项目的维基,为支持这个项目,它也有一个 [Patreon 页面][4]。 -虽然 LKRG 还是一个开源项目,LKRG 的维护者也计划做一个 LKRG Pro 版本,这个版本将包含一个专用的 LKRG 发行版,它将支持对特定漏洞利用的检测,比如,容器泄漏。开发团队计划从 LKRG Pro 基金中提取部分资金用于保证项目的剩余工作。 +虽然 LKRG 仍然是一个开源项目,LKRG 的维护者也计划做一个 LKRG Pro 版本,这个版本将包含一个专用的 LKRG 发行版,它将支持对特定漏洞利用的检测,比如,容器泄漏。开发团队计划从 LKRG Pro 基金中提取部分资金用于保证项目的剩余工作。 ### LKRG 是一个内核模块而不是一个补丁。 -一个类似的项目是去增加一个内核监视功能(AKO),但是 LKRG 与 AKO 是不一样的,因为 LKRG 是一个内核加载模式而不是一个补丁。LKRG 开发团队决定将它设计为一个内核模块是因为,在内核上打补丁对安全性、系统稳定性以及性能都有很直接的影响。 +一个类似的项目是附加内核监视器Additional Kernel Observer(AKO),但是 LKRG 与 AKO 是不一样的,因为 LKRG 是一个内核加载模块而不是一个补丁。LKRG 开发团队决定将它设计为一个内核模块是因为,在内核上打补丁对安全性、系统稳定性以及性能都有很直接的影响。 -而作为内核模块的方式,可以在每个系统上更容易部署去 LKRG,而不必去修改核心的内核代码,修改核心的内核代码非常复杂并且很容易出错。 +而以内核模块的方式提供,可以在每个系统上更容易部署 LKRG,而不必去修改核心的内核代码,修改核心的内核代码非常复杂并且很容易出错。 LKRG 内核模块在目前主流的 Linux 发行版上都可以使用,比如,RHEL7、OpenVZ 7、Virtuozzo 7、以及 Ubuntu 16.04 到最新的主线版本。 ### 它并非是一个完美的解决方案 -LKRG 的创建者警告用户,他们并不认为 LKRG 是一个完美的解决方案,它**提供不了**坚不可摧和 100% 的安全。他们说,LKRG 是 "设计为**可旁通**的",并且仅仅提供了"多元化安全" 的**一个**方面。 +LKRG 的创建者警告用户,他们并不认为 LKRG 是一个完美的解决方案,它**提供不了**坚不可摧和 100% 的安全。他们说,LKRG 是 “设计为**可旁通**的”,并且仅仅提供了“多元化安全” 的**一个**方面。 -``` -虽然 LKRG 可以防御许多对 Linux 内核的已存在的漏洞利用,而且也有可能会防御将来许多的(包括未知的)未特意设计去绕过 LKRG 的安全漏洞利用。它是设计为可旁通的(尽管有时候是以更复杂和/或低可利用为代价的)。因此,他们说 LKRG 通过多元化提供安全,就像运行一个不常见的操作系统内核一样,也就不会有真实运行一个不常见的操作系统的可用性弊端。 -``` +> 虽然 LKRG 可以防御许多已有的 Linux 内核漏洞利用,而且也有可能会防御将来许多的(包括未知的)未特意设计去绕过 LKRG 的安全漏洞利用。它是设计为可旁通的(尽管有时候是以更复杂和/或低可利用为代价的)。因此,他们说 LKRG 通过多元化提供安全,就像运行一个不常见的操作系统内核一样,也就不会有真实运行一个不常见的操作系统的可用性弊端。 LKRG 有点像基于 Windows 的防病毒软件,它也是工作于内核级别去检测漏洞利用和恶意软件。但是,LKRG 团队说,他们的产品比防病毒软件以及其它终端安全软件更加安全,因为它的基础代码量比较小,所以在内核级别引入新 bug 和漏洞的可能性就更小。 ### 运行当前版本的 LKRG 大约会带来 6.5% 的性能损失 -Peslyak 说 LKRG 是非常适用于 Linux 机器的,它在修补内核的安全漏洞后不需要重启动机器。LKRG 允许用户去持续运行带有安全措施的机器,直到在一个计划的维护窗口中测试和部署关键的安全补丁为止。 +Peslyak 说 LKRG 是非常适用于 Linux 机器的,它在修补内核的安全漏洞后不需要重启动机器。LKRG 允许用户持续运行带有安全措施的机器,直到在一个计划的维护窗口中测试和部署关键的安全补丁为止。 经测试显示,安装 LKRG v0.0 后大约会产生 6.5% 性能影响,但是,Peslyak 说将在后续的开发中持续降低这种影响。 -测试也显示,LKRG 检测到了 CVE-2014-9322 (BadIRET)、CVE-2017-5123 (waitid(2) missing access_ok)、以及 CVE-2017-6074 (use-after-free in DCCP protocol) 的漏洞利用企图,但是没有检测到 CVE-2016-5195 (Dirty COW) 的漏洞利用企图。开发团队说,由于前面提到的”可旁通“的设计策略,LKRG 没有检测到 Dirty COW 提权攻击。 +测试也显示,LKRG 检测到了 CVE-2014-9322 (BadIRET)、CVE-2017-5123 (waitid(2) missing access_ok)、以及 CVE-2017-6074 (use-after-free in DCCP protocol) 的漏洞利用企图,但是没有检测到 CVE-2016-5195 (Dirty COW) 的漏洞利用企图。开发团队说,由于前面提到的“可旁通”的设计策略,LKRG 没有检测到 Dirty COW 提权攻击。 -``` -在 Dirty COW 的测试案例中,由于 bug 机制的原因,使得 LKRG 发生了 "旁通",并且这也是一种利用方法,它也是将来类似的以用户空间为目标的绕过 LKRG 的一种方法。这样的漏洞利用是否会是普通情况(不太可能!除非 LKRG 或者类似机制的软件流行起来),以及对它的可用性的(负面的)影响是什么?(对于那些直接目标是用户空间的内核漏洞来说,这不太重要,也并不简单)。 -``` +> 在 Dirty COW 的测试案例中,由于 bug 机制的原因,使得 LKRG 发生了 “旁通”,并且这也是一种利用方法,它也是将来类似的以用户空间为目标的绕过 LKRG 的一种方法。这样的漏洞利用是否会是普通情况(不太可能!除非 LKRG 或者类似机制的软件流行起来),以及对它的可用性的(负面的)影响是什么?(对于那些直接目标是用户空间的内核漏洞来说,这不太重要,也并不简单)。 -------------------------------------------------------------------------------- @@ -54,7 +50,7 @@ via: https://www.bleepingcomputer.com/news/linux/lkrg-linux-to-get-a-loadable-ke 作者:[Catalin Cimpanu][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ffceb6301c88c77ef633061ac6a19bc3656ddac1 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 18 Mar 2018 23:32:33 
+0800 Subject: [PATCH 237/343] PUB:20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md @qhwdw --- ...Get a Loadable Kernel Module for Runtime Integrity Checking.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md (100%) diff --git a/translated/tech/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md b/published/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md similarity index 100% rename from translated/tech/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md rename to published/20180204 LKRG- Linux to Get a Loadable Kernel Module for Runtime Integrity Checking.md From e0b2d7bb487cd3f51bb8057925bb8772ab69db2e Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Mon, 19 Mar 2018 08:20:54 +0800 Subject: [PATCH 238/343] almost done --- ...80201 How I coined the term open source.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index ec6925bd77..0f2c6a7852 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -7,25 +7,25 @@ ![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'") 图片来自: opensource.com -几天后, 2 月 3 日, 术语 "[开源软件][6]" 创立20周年的纪念日即将到来。由于开源软件渐受欢迎并且为这个时代强有力的重要变革提供动力,我们仔细思考了它的初生到崛起。 +几天后, 2 月 3 日, 术语“[开源软件][6]”创立 20 周年的纪念日即将到来。由于开源软件渐受欢迎并且为这个时代强有力的重要变革提供动力,我们仔细反思了它的初生到崛起。 我是 “开源软件” 这个词的始作俑者,它是我在前瞻技术协会(Foresight Institute)担任执行董事时想出的。并非向上面的一个程序开发者一样,我感谢 Linux 程序员 Todd Anderson 对这个术语的支持并将它提交小组讨论。 -这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些关于该术语的记叙,例如 Eric Raymond 和 Richard Stallman 写的,而我的,则写于 2006 年 1 月 2 日。 +这是我对于它如何想到的,如何提出的,以及后续影响的记叙。当然,还有一些有关该术语的记叙,例如 Eric Raymond 和 Richard Stallman 写的,而我的,则写于 2006 年 1 月 2 日。 -直到今天,它才公诸于世。 +直到今天,它终于公诸于世。 * * * -推行术语“开源软件” 是特地为了这个领域让新手和商业人士更加易懂a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. 
早期称号的问题是,“自由软件”,并非有政治含义,但是那对于新手来说貌似对于价格的关注令人心烦意乱。一个术语需要聚焦于源代码的关键问题而且不会被立即把概念跟那些新东西混淆。一个恰好想出并且满足这些要求的第一个术语被快速接受:开源。 +推行术语“开源软件”是特地为了这个领域让新手和商业人士更加易懂,它的推广被认为对于更大的用户社区很有必要。早期称号的问题是,“自由软件” 并非有政治含义,但是那对于新手来说貌似对于价格的关注令人感到心烦意乱。一个术语需要聚焦于关键的源代码而且不会被立即把概念跟那些新东西混淆。一个恰好想出并且满足这些要求的第一个术语被快速接受:开源(open source)。 -这个术语很长一段时间被用在“情报”(即间谍活动)的情境下,但据我所知,在1998年以前对软件使用该术语尚未得到证实。下面这个就是讲述了术语“开源软件”如何流行并且变成了一项产业和一场运动的名字的故事。 +这个术语很长一段时间被用在“情报”(即间谍活动)的背景下,但据我所知,1998 年以前软件领域使用该术语尚未得到证实。下面这个就是讲述了术语“开源软件”如何流行起来并且变成了一项产业和一场运动名称的故事。 ### 计算机安全会议 -在1997年的晚些时候,为期一周的会议将被在前瞻技术协会(Foresight Insttitue) 举行来讨论计算机安全问题。这个协会是一个非盈利性智库,它专注于纳米技术和人工智能,并且认为软件安全是二者的安全性以及可靠性的核心。我们在那确定了自由软件是一个改进软件安全可靠性且具有发展前景的方法并将寻找推动它的方式。 对自由软件的兴趣开始在编程社区外开始增长,而且越来越清晰,一个改变世界的机会正在来临。然而,该怎么做我们并不清楚,因为我们当时正在摸索中。 +在 1997 年的晚些时候,为期一周的会议将被在前瞻技术协会(Foresight Insttitue) 举行来讨论计算机安全问题。这个协会是一个非盈利性智库,它专注于纳米技术和人工智能,并且认为软件安全是二者的安全性以及可靠性的核心。我们在那确定了自由软件是一个改进软件安全可靠性且具有发展前景的方法并将寻找推动它的方式。 对自由软件的兴趣开始在编程社区外开始增长,而且越来越清晰,一个改变世界的机会正在来临。然而,该怎么做我们并不清楚,因为我们当时正在摸索中。 -在这些会议中,我们讨论了一些由于使人迷惑不解的因素而产生一个新术语的必要性。观点主要有以下:对于那些新接触“自由软件”的人把 "free" 当成了价格上的 “免费” 。老资格的成员们开始解释,通常像下面所说的那样:“我们的意思是自由的,而不是免费啤酒上的。"在这个点子上,一个软件方面的讨论变成了一个关于酒精价格的讨论。问题不在于解释不了含义——问题是重要概念的名称不应该使新手们感到困惑。所以需要一个更清晰的术语了。关于自由软件术语并没有政治上的问题;问题是缺乏对新概念的认识。 +在这些会议中,我们讨论了一些由于使人迷惑不解的因素而采用一个新术语的必要性。观点主要有以下:对于那些新接触“自由软件”的人把 "free" 当成了价格上的 “免费” 。老资格的成员们开始解释,通常像下面所说的:“我们的意思是自由的,而不是免费啤酒上的。"在这个点子上,一个软件方面的讨论变成了一个关于酒精价格的讨论。问题不在于解释不了含义——问题是重要概念的名称不应该使新手们感到困惑。所以需要一个更清晰的术语了。关于自由软件术语并没有政治上的问题;问题是缺乏对新概念的认识。 ### 网景发布 @@ -33,7 +33,7 @@ 在这个镇上,Eric 把前瞻协会(Foresight) 作为行动的大本营。他一开始访问行程,他就被几个网景法律和市场部门的员工通电话。当他挂电话后,我被要求带着电话跟他们——一男一女,可能是 Mitchell Baker——这样我才能谈论对于新术语的需求。他们原则上是立即同意了,但详细条款并未达成协议。 -在那周的会议中,我仍然专注于起一个更好的名字并提出术语 “开源软件”came up with the term "open source software." 虽然那不是完美的,但我觉得足够好了。I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, 然而一个从事市场公关的朋友觉得术语 “open” 被滥用了并且相信我们能做更好再说。理论上它是对的,可我想不出更好的了,所以我想尝试并推广它。 事后一想我应该直接向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 +在那周的会议中,我仍然专注于起一个更好的名字并提出术语 “开源软件”。 虽然那不是完美的,但我觉得足够好了。I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, 然而一个从事市场公关的朋友觉得术语 “open” 被滥用了并且相信我们能做更好再说。理论上它是对的,可我想不出更好的了,所以我想尝试并推广它。 事后一想我应该直接向 Eric Raymond 提案,但在那时我并不是很了解他,所以我采取了间接的策略。 Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助,因为作为一个非编程人员,我在自由软件社区的影响力很弱。我从事的纳米技术是一个加分项,但不足以让我认真地接受自由软件问题的工作。作为一个Linux程序员,Todd 将会更仔细地聆听它。 @@ -47,7 +47,7 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, 不仅如此——模因演化(人类学术语)在起作用。几分钟后,另一个人明显地,没有提醒地,在仍然进行话题讨论而没说术语的情况下,用了这个术语。Todd 和我面面相觑对视:是的我们都注意到了发生的事。我很激动——它起作用了!但我保持了安静:我在小组中仍然地位不高。可能有些人都奇怪为什么 Eric 会最终邀请我。 -临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric 列出了“自由软件”、“开源软件”,并把[unknown]作为主要选项。Todd宣传“开源”模型,然后Eric 支持了他。我什么也没说,letting Todd and Eric pull the (loose, informal) consensus together around the open source name. 
对于大多数与会者,他们很清楚改名不是在这讨论的最重要议题;那只是一个次要的相关议题。 我在会议中只有大约10%的说明放在了术语问答中。 +临近会议尾声,可能是 Todd or Eric,[术语问题][8] 被明确提出。Maddog 提及了一个早期的术语“可自由分发的,和一个新的术语“合作开发的”。Eric 列出了“自由软件”、“开源软件”,并把 "自由软件源" 作为一个主要选项。Todd宣传 “开源” 模型,然后Eric 支持了他。我什么也没说,让 Todd 和 Eric 共同促进开源名字达成共识。对于大多数与会者,他们很清楚改名不是在这讨论的最重要议题;那只是一个次要的相关议题。 我在会议中只有大约10%的说明放在了术语问答中。 但是我很高兴。在那有许多社区的关键领导人,并且他们喜欢这新名字,或者至少没反对。这是一个好的信号信号。可能我帮不上什么忙; Eric Raymond 被相当好地放在了一个宣传模因的好位子上,而且他的确做到了。立即签约参加行动,帮助建立 [Opensource.org][9] 并在新术语的宣传中发挥重要作用。 @@ -73,7 +73,7 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, ### 关于作者 - [![photo of Christine Peterson](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/cp2016_crop2_185.jpg?itok=vUkSjFig)][13] Christine Peterson - Christine Peterson writes, lectures, and briefs the media on coming powerful technologies, especially nanotechnology, artificial intelligence, and longevity. She is Cofounder and Past President of Foresight Institute, the leading nanotech public interest group. Foresight educates the public, technical community, and policymakers on coming powerful technologies and how to guide their long-term impact. She serves on the Advisory Board of the [Machine Intelligence... ][2][more about Christine Peterson][3][More about me][4] + [![photo of Christine Peterson](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/cp2016_crop2_185.jpg?itok=vUkSjFig)][13] Christine Peterson - Christine Peterson 撰写,举办讲座 lectures, and briefs the media on coming powerful technologies, especially nanotechnology, artificial intelligence, and longevity. She is Cofounder and Past President of Foresight Institute, the leading nanotech public interest group. Foresight educates the public, technical community, and policymakers on coming powerful technologies and how to guide their long-term impact. She serves on the Advisory Board of the [Machine Intelligence... ][2][more about Christine Peterson][3][More about me][4] -------------------------------------------------------------------------------- From 7df45cc7aa5cd735d445e20c0e0c0773c9d1dcb9 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 19 Mar 2018 09:11:11 +0800 Subject: [PATCH 239/343] translated --- ...ouTube Player For Privacy-minded People.md | 92 ------------------- ...ouTube Player For Privacy-minded People.md | 90 ++++++++++++++++++ 2 files changed, 90 insertions(+), 92 deletions(-) delete mode 100644 sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md create mode 100644 translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md diff --git a/sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md b/sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md deleted file mode 100644 index 8c0db2716a..0000000000 --- a/sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md +++ /dev/null @@ -1,92 +0,0 @@ -translating---geekpi - -An Open Source Desktop YouTube Player For Privacy-minded People -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/03/Freetube-720x340.png) - -You already know that we need a Google account to subscribe channels and download videos from YouTube. If you don’t want Google track what you’re doing on YouTube, well, there is an open source YouTube player named **“FreeTube”**. It allows you to watch, search and download Youtube videos and subscribe your favorite channels without an account, which prevents Google from having your information. 
It gives you complete ad-free experience. Another notable advantage is it has a built-in basic HTML5 player to watch videos. Since we’re not using the built-in YouTube player, Google can’t track the “views” and the video analytics either. FreeTube only sends your IP details, but this also can be overcome by using a VPN. It is completely free, open source and available for GNU/Linux, Mac OS X, and Windows. - -### Features - -* Watch videos without ads. -* Prevent Google from tracking what you watch using cookies or JavaScript. -* Subscribe to channels without an account. -* Store subscriptions, history, and saved videos locally. -* Import / Backup subscriptions. -* Mini Player. -* Light / Dark Theme. -* Free, Open Source. -* Cross-platform. - - - -### Installing FreeTube - -Go to the [**releases page**][1] and grab the version depending upon the OS you use. For the purpose of this guide, I will be using **.tar.gz** file. -``` -$ wget https://github.com/FreeTubeApp/FreeTube/releases/download/v0.1.3-beta/FreeTube-linux-x64.tar.xz - -``` - -Extract the downloaded archive: -``` -$ tar xf FreeTube-linux-x64.tar.xz - -``` - -Go to the Freetube folder: -``` -$ cd FreeTube-linux-x64/ - -``` - -Launch Freeube using command: -``` -$ ./FreeTub - -``` - -This is how FreeTube default interface looks like. - -![][3] - -### Usage - -FreeTube currently uses **YouTube API** to search for videos. And then, It uses **Youtube-dl HTTP API** to grab the raw video files and play them in a basic HTML5 video player. Since subscriptions, history, and saved videos are stored locally on your system, your details will not be sent to Google or anyone else. - -Enter the video name in the search box and hit ENTER key. FreeTube will list out the results based on your search query. - -![][4] - -You can click on any video to play it. - -![][5] - -If you want to change the theme or default API, import/export subscriptions, go to the **Settings** section. - -![][6] - -Please note that FreeTube is still in **beta** stage, so there will be bugs. If there are any bugs, please report them in the GitHub page given at the end of this guide. - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/freetube-an-open-source-desktop-youtube-player-for-privacy-minded-people/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://github.com/FreeTubeApp/FreeTube/releases -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-1.png -[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-3.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-5-1.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-2.png diff --git a/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md b/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md new file mode 100644 index 0000000000..20a352349d --- /dev/null +++ b/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md @@ -0,0 +1,90 @@ +注重隐私的开源桌面 YouTube 播放器 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/03/Freetube-720x340.png) + +你已经知道我们需要 Google 帐户才能订阅频道并从 YouTube 下载视频。如果你不希望 Google 追踪你在 YouTube 上的行为,那么有一个名为 **“FreeTube”** 的开源 Youtube 播放器。它能让你无需使用帐户观看、搜索和下载 Youtube 视频并订阅你喜爱的频道,这可以防止 Google 获取你的信息。它为你提供完整的无广告体验。另一个值得注意的优势是它有一个内置的基础的 HTML5 播放器来观看视频。由于我们没有使用内置的 YouTube 播放器,因此 Google 无法跟踪“观看次数”,也无法视频分析。FreeTube 只会发送你的 IP 详细信息,但这也可以通过使用 VPN 来解决。它是完全免费、开源的,可用于 GNU/Linux、Mac OS X 和 Windows。 + +### 功能 + +* 观看没有广告的视频。 +* 防止 Google 使用 Cookie 或 JavaScript 跟踪你观看的内容。 +* 无须帐户订阅频道。 +* 本地存储订阅、历史记录和已保存的视频。 +* 导入/备份订阅。 +* 迷你播放器。 +* 轻/黑暗的主题。 +* 免费、开源。 +* 跨平台。 + + + +### 安装 FreeTube + +进入[**发布页面**][1]并根据你使用的操作系统获取版本。在本指南中,我将使用 **.tar.gz** 文件。 +``` +$ wget https://github.com/FreeTubeApp/FreeTube/releases/download/v0.1.3-beta/FreeTube-linux-x64.tar.xz + +``` + +解压下载的归档: +``` +$ tar xf FreeTube-linux-x64.tar.xz + +``` + +进入 Freetube 文件夹: +``` +$ cd FreeTube-linux-x64/ + +``` + +使用命令启动 Freeube: +``` +$ ./FreeTub + +``` + +这就是 FreeTube 默认界面的样子。 + +![][3] + +### 用法 + +FreeTube 目前使用 **YouTube API ** 搜索视频。然后,它使用 **Youtube-dl HTTP API** 获取原始视频文件并在基础的 HTML5 视频播放器中播放它们。由于订阅、历史记录和已保存的视频都存储在本地系统中,因此你的详细信息将不会发送给 Google 或其他任何人。 + +在搜索框中输入视频名称,然后按下回车键。FreeTube 会根据你的搜索查询列出结果。 + +![][4] + +你可以点击任何视频来播放它。 + +![][5] + +如果你想更改主题或默认 API、导入/导出订阅,请进入**设置**部分。 + +![][6] + +请注意,FreeTube 仍处于 **beta** 阶段,所以仍然有 bug。如果有任何 bug,请在本指南最后给出的 GitHub 页面上报告。 + +干杯! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/freetube-an-open-source-desktop-youtube-player-for-privacy-minded-people/ + +作者:[SK][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://github.com/FreeTubeApp/FreeTube/releases +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-1.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-3.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-5-1.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-2.png From 228b38d245da35c5cee6a1e71f10068e2b2a792d Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 19 Mar 2018 09:17:13 +0800 Subject: [PATCH 240/343] translating --- ...0170101 How to resolve mount.nfs- Stale file handle error.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md b/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md index d57280df28..2206abe7d7 100644 --- a/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md +++ b/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md @@ -1,3 +1,5 @@ +translating---geekpi + How to resolve mount.nfs: Stale file handle error ====== Learn how to resolve mount.nfs: Stale file handle error on Linux platform. This is Network File System error can be resolved from client or server end. From 3fea241cbd4ba6ad7b711685988a95a8ca21ab7d Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 19 Mar 2018 09:28:08 +0800 Subject: [PATCH 241/343] Delete 20180213 How to clone, modify, add, and delete files in Git.md --- ...e, modify, add, and delete files in Git.md | 205 ------------------ 1 file changed, 205 deletions(-) delete mode 100644 sources/tech/20180213 How to clone, modify, add, and delete files in Git.md diff --git a/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md b/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md deleted file mode 100644 index a58a2b6b85..0000000000 --- a/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md +++ /dev/null @@ -1,205 +0,0 @@ -Translating by MjSeven - -How to clone, modify, add, and delete files in Git -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_cat.png?itok=ta54QTAf) - -In the [first article in this series][1] on getting started with Git, we created a simple Git repo and added a file to it by connecting it with our computer. In this article, we will learn a handful of other things about Git, namely how to clone (download), modify, add, and delete files in a Git repo. - -### Let's make some clones - -Say you already have a Git repo on GitHub and you want to get your files from it—maybe you lost the local copy on your computer or you're working on a different computer and want access to the files in your repository. What should you do? Download your files from GitHub? Exactly! We call this "cloning" in Git terminology. (You could also download the repo as a ZIP file, but we'll explore the clone method in this article.) 
- -Let's clone the repo, called Demo, we created in the last article. (If you have not yet created a Demo repo, jump back to that article and do those steps before you proceed here.) To clone your file, just open your browser and navigate to `https://github.com//Demo` (where `` is the name of your own repo. For example, my repo is `https://github.com/kedark3/Demo`). Once you navigate to that URL, click the "Clone or download" button, and your browser should look something like this: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide11.png?itok=wJYqZyBX) - -As you can see above, the "Clone with HTTPS" option is open. Copy your repo's URL from that dropdown box (`https://github.com//Demo.git`). Open the terminal and type the following command to clone your GitHub repo to your computer: -``` -git clone https://github.com//Demo.git - -``` - -Then, to see the list of files in the `Demo` directory, enter the command: -``` -ls Demo/ - -``` - -Your terminal should look like this: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide12.png?itok=E7ZG9t-8) - -### Modify files - -Now that we have cloned the repo, let's modify the files and update them on GitHub. To begin, enter the commands below, one by one, to change the directory to `Demo/`, check the contents of `README.md`, echo new (additional) content to `README.md`, and check the status with `git status`: -``` -cd Demo/ - -ls - -cat README.md - -echo "Added another line to REAMD.md" >> README.md - -cat README.md - -git status - -``` - -This is how it will look in the terminal if you run these commands one by one: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide12.5.png?itok=jhb-EPH1) - -Let's look at the output of `git status` and walk through what it means. Don't worry about the part that says: -``` -On branch master - -Your branch is up-to-date with 'origin/master'.". - -``` - -because we haven't learned it yet. The next line says: `Changes not staged for commit`; this is telling you that the files listed below it aren't marked ready ("staged") to be committed. If you run `git add`, Git takes those files and marks them as `Ready for commit`; in other (Git) words, `Changes staged for commit`. Before we do that, let's check what we are adding to Git with the `git diff` command, then run `git add`. - -Here is your terminal output: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide13.png?itok=983p_vNw) - -Let's break this down: - - * `diff --git a/README.md b/README.md` is what Git is comparing (i.e., `README.md` in this example). - * `--- a/README.md` would show anything removed from the file. - * `+++ b/README.md` would show anything added to your file. - * Anything added to the file is printed in green text with a + at the beginning of the line. - * If we had removed anything, it would be printed in red text with a - sign at the beginning. - * Git status now says `Changes to be committed:` and lists the filename (i.e., `README.md`) and what happened to that file (i.e., it has been `modified` and is ready to be committed). - - - -Tip: If you have already run `git add`, and now you want to see what's different, the usual `git diff` won't yield anything because you already added the file. Instead, you must use `git diff --cached`. 
It will show you the difference between the current version and previous version of files that Git was told to add. Your terminal output would look like this: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide14.png?itok=bva9fHJj) - -### Upload a file to your repo - -We have modified the `README.md` file with some new content and it's time to upload it to GitHub. - -Let's commit the changes and push those to GitHub. Run: -``` -git commit -m "Updated Readme file" - -``` - -This tells Git that you are "committing" to changes that you have "added" to it. You may recall from the first part of this series that it's important to add a message to explain what you did in your commit so you know its purpose when you look back at your Git log later. (We will look more at this topic in the next article.) `Updated Readme file` is the message for this commit—if you don't think this is the most logical way to explain what you did, feel free to write your commit message differently. - -Run `git push -u origin master`. This will prompt you for your username and password, then upload the file to your GitHub repo. Refresh your GitHub page, and you should see the changes you just made to `README.md`. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide15.png?itok=Qa3spy13) - -The bottom-right corner of the terminal shows that I committed the changes, checked the Git status, and pushed the changes to GitHub. Git status says: -``` -Your branch is ahead of 'origin/master' by 1 commit - -  (use "git push" to publish your local commits) - -``` - -The first line indicates there is one commit in the local repo but not present in origin/master (i.e., on GitHub). The next line directs us to push those changes to origin/master, and that is what we did. (To refresh your memory on what "origin" means in this case, refer to the first article in this series. I will explain what "master" means in the next article, when we discuss branching.) - -### Add a new file to Git - -Now that we have modified a file and updated it on GitHub, let's create a new file, add it to Git, and upload it to GitHub. Run: -``` -echo "This is a new file" >> file.txt - -``` - -This will create a new file named `file.txt`. - -If you `cat` it out: -``` -cat file.txt - -``` - -You should see the contents of the file. Now run: -``` -git status - -``` - -Git reports that you have an untracked file (named `file.txt`) in your repository. This is Git's way of telling you that there is a new file in the repo directory on your computer that you haven't told Git about, and Git is not tracking that file for any changes you make. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide16.png?itok=UZpSKL13) - -We need to tell Git to track this file so we can commit it and upload it to our repo. Here's the command to do that: -``` -git add file.txt - -git status - -``` - -Your terminal output is: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide17.png?itok=quV-75Na) - -Git status is telling you there are changes to `file.txt` to be committed, and that it is a `new file` to Git, which it was not aware of before this. Now that we have added `file.txt` to Git, we can commit the changes and push it to origin/master. 
- -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide18.png?itok=e0D7-eol) - -Git has now uploaded this new file to GitHub; if you refresh your GitHub page, you should see the new file, `file.txt`, in your Git repo on GitHub. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide19.png?itok=FcuSsHQ6) - -With these steps, you can create as many files as you like, add them to Git, and commit and push them up to GitHub. - -### Delete a file from Git - -What if we discovered we made an error and need to delete `file.txt` from our repo. One way is to remove the file from our local copy of the repo with this command: -``` -rm file.txt - -``` - -If you do `git status` now, Git says there is a file that is `not staged for commit` and it has been `deleted` from the local copy of the repo. If we now run: -``` -git add file.txt - -git status - -``` - -I know we are deleting the file, but we still run `git add` ** because we need to tell Git about the **change** we are making. `git add` ** can be used when we are adding a new file to Git, modifying contents of an existing file and adding it to Git, or deleting a file from a Git repo. Effectively, `git add` takes all the changes into account and stages those changes for commit. If in doubt, carefully look at output of each command in the terminal screenshot below. - -Git will tell us the deleted file is staged for commit. As soon as you commit this change and push it to GitHub, the file will be removed from the repo on GitHub as well. Do this by running: -``` -git commit -m "Delete file.txt" - -git push -u origin master - -``` - -Now your terminal looks like this: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide20.png?itok=SrJMqNXC) - -And your GitHub looks like this: - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide21.png?itok=RhXM4Gua) - -Now you know how to clone, add, modify, and delete Git files from your repo. The next article in this series will examine Git branching. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files - -作者:[Kedar Vijay Kulkarni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/kkulkarn -[1]:https://opensource.com/article/18/1/step-step-guide-git From e03b1a547e72b1be689824f129de1dfc0466909d Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 19 Mar 2018 09:28:46 +0800 Subject: [PATCH 242/343] Create 20180213 How to clone, modify, add, and delete files in Git.md --- ...e, modify, add, and delete files in Git.md | 202 ++++++++++++++++++ 1 file changed, 202 insertions(+) create mode 100644 translated/tech/20180213 How to clone, modify, add, and delete files in Git.md diff --git a/translated/tech/20180213 How to clone, modify, add, and delete files in Git.md b/translated/tech/20180213 How to clone, modify, add, and delete files in Git.md new file mode 100644 index 0000000000..9cc05462d4 --- /dev/null +++ b/translated/tech/20180213 How to clone, modify, add, and delete files in Git.md @@ -0,0 +1,202 @@ +在 Git 中怎样克隆,修改,添加和删除文件? 
+===== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_cat.png?itok=ta54QTAf) + +在 [本系列的第一篇文章][1] 开始使用 Git 时,我们创建了一个简单的 Git 仓库,并通过它连接到我们的计算机向其中添加一个文件。在本文中,我们将学习一些关于 Git 的其他内容,即如何克隆(下载),修改,添加和删除 Git 仓库中的文件。 + + +### 让我们来克隆一下 + +假设你在 GitHub 上已经有一个 Git 仓库,并且想从它那里获取你的文件-也许你在你的计算机上丢失了本地副本,或者你正在另一台计算机上工作,但是想访问仓库中的文件,你该怎么办?从 GitHub 下载你的文件?没错!我们称之为 Git 术语中的“克隆”。(你也可以将仓库作为 ZIP 文件下载,但我们将在本文中探讨克隆方法。) + +让我们克隆在上一篇文章中创建的称为 Demo 的仓库。(如果你还没有创建 Demo 仓库,请跳回到那篇文章并在继续之前执行那些步骤。)要克隆文件,只需打开浏览器并导航到 `https://github.com//Demo` (其中 `` 是你仓库的名称。例如,我的仓库是 `https://github.com/kedark3/Demo`)。一旦你导航到该 URL,点击“克隆或下载”按钮,你的浏览器看起来应该是这样的: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide11.png?itok=wJYqZyBX) + +正如你在上面看到的,“使用 HTTPS 克隆”选项已打开。从该下拉框中复制你的仓库地址(`https://github.com//Demo.git`),打开终端并输入以下命令将 GitHub 仓库克隆到你的计算机: +``` +git clone https://github.com//Demo.git + +``` + +然后,要查看 `Demo` 目录中的文件列表,请输入以下命令: +``` +ls Demo/ + +``` + +终端看起来应该是这样的: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide12.png?itok=E7ZG9t-8) + +### 修改文件 + +现在我们已经克隆了仓库,让我们修改文件并在 GitHub 上更新它们。首先,逐个输入下面的命令,将目录更改为 `Demo/`,检查 `README.md` 中的内容,添加新的(附加的)内容到 `README.md`,然后使用 `git status` 检查状态: +``` +cd Demo/ + +ls + +cat README.md + +echo "Added another line to REAMD.md" >> README.md + +cat README.md + +git status + +``` + +如果你逐一运行这些命令,终端看起开将会是这样: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide12.5.png?itok=jhb-EPH1) + +让我们看一下 `git status` 的输出,并了解它的意思。不要担心这样的语句: +``` +On branch master + +Your branch is up-to-date with 'origin/master'.". + +``` +因为我们还没有学习这些。(译注:学了你就知道了)下一行说:`Changes not staged for commit`;这是告诉你,它下面列出的文件没有标记就绪(“分阶段”)提交。如果你运行 `git add`,Git 会把这些文件标记为 `Ready for commit`;换句话说就是 `Changes staged for commit`。在我们这样做之前,让我们用 `git diff` 命令来检查我们添加了什么到 Git 中,然后运行 `git add`。 + +这里是终端输出: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide13.png?itok=983p_vNw) + +我们来分析一下: + +* `diff --git a/README.md b/README.md` 是 Git 比较的内容(在这个例子中是 `README.md`)。 +* `--- a/README.md` 会显示从文件中删除的任何东西。 +* `+++ b/README.md` 会显示从文件中添加的任何东西。 +* 任何添加到文件中的内容都以绿色文本打印,并在该行的开头加上 + 号。 +* 如果我们删除了任何内容,它将以红色文本打印,并在该行的开头加上 - 号。 +* 现在 git status 显示“Changes to be committed:”,并列出文件名(即 `README.md`)以及该文件发生了什么(即它已经被 `modified` 并准备提交)。 + + +提示:如果你已经运行了 `git add`,现在你想看看文件有什么不同,通常 `git diff` 不会产生任何东西,因为你已经添加了文件。相反,你必须使用 `git diff --cached`。它会告诉你 Git 添加的当前版本和以前版本文件之间的差别。你的终端输出看起来会是这样: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide14.png?itok=bva9fHJj) + +### 上传文件到你的仓库 + +我们用一些新内容修改了 `README.md` 文件,现在是时候将它上传到 GitHub。 + +让我们提交更改并将其推送到 GitHub。运行: +``` +git commit -m "更新文件的名字" + +``` + +这告诉 Git 你正在“提交”已经“添加”的更改,你可能还记得,从本系列的第一部分中,添加一条消息来解释你在提交中所做的操作是非常重要的,以便你在稍后回顾 Git 日志时了解当时的目的。(我们将在下一篇文章中更多地关注这个话题。)`Updated Readme file` 是这个提交的消息--如果你认为这不是解释你所做的事情的最合理的方式,那么请随便写下你的提交消息。 + +运行 `git push -u origin master`,这会提示你输入用户名和密码,然后将文件上传到你的 GitHub 仓库。刷新你的 GitHub 页面,你应该会看到刚刚对 `README.md` 所做的更改。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide15.png?itok=Qa3spy13) + +终端的右下角显示我提交了更改,检查了 Git 状态,并将更改推送到了 GitHub。git status 显示: +``` +Your branch is ahead of 'origin/master' by 1 commit + +  (use "git push" to publish your local commits) + +``` + +第一行表示在本地仓库中有一个提交,但不在原始/主文件中(即在 GitHub 
上)。下一行指示我们将这些更改推送到原始/主文件中,这就是我们所做的。(在本例中,请参阅本系列的第一篇文章,以唤醒你对“原始”含义的记忆。我将在下一篇文章中讨论分支的时候,解释“主文件”的含义。) + +### 添加新文件到 Git + +现在我们修改了一个文件并在 GitHub 上更新了它,让我们创建一个新文件,将它添加到 Git,然后将其上传到 GitHub。 +运行: +``` +echo "This is a new file" >> file.txt + +``` + +这将会创建一个名为 `file.txt` 的新文件。 + +如果使用 `cat` 查看它: +``` +cat file.txt + +``` +你将看到文件的内容。现在继续运行: +``` +git status + +``` + +Git 报告说你的仓库中有一个未跟踪的文件(名为 `file.txt`)。这是 Git 告诉你说在你的计算机中的仓库目录下有一个新文件,然而你并没有告诉 Git,Git 也没有跟踪你所做的任何修改。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide16.png?itok=UZpSKL13) + +我们需要告诉 Git 跟踪这个文件,以便我们可以提交并上传文件到我们的仓库。以下是执行该操作的命令: +``` +git add file.txt + +git status + +``` + +终端输出如下: +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide17.png?itok=quV-75Na) + +git status 告诉你有 `file.txt` 被修改,对于 Git 来说它是一个 `new file`,Git 在此之前并不知道。现在我们已经为 Git 添加了 `file.txt`,我们可以提交更改并将其推送到 原始/主文件。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide18.png?itok=e0D7-eol) + +Git 现在已经将这个新文件上传到 GitHub;如果刷新 GitHub 页面,则应该在 GitHub 上的仓库中看到新文件 `file.txt`。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide19.png?itok=FcuSsHQ6) + +通过这些步骤,你可以创建尽可能多的文件,将它们添加到 Git 中,然后提交并将它们推送到 GitHub。 + +### 从 Git 中删除文件 + +如果我们发现我们犯了一个错误,并且需要从我们的仓库中删除 `file.txt`,该怎么办?一种方法是使用以下命令从本地副本中删除文件: +``` +rm file.txt + +``` + +如果你现在做 `git status`,Git 就会说有一个文件 `not staged for commit`,并且它已经从仓库的本地拷贝中删除了。如果我们现在运行: +``` +git add file.txt + +git status + +``` +我知道我们正在删除这个文件,但是我们仍然运行 `git add`,因为我们需要告诉 Git 我们正在做的**更改**,`git add` 可以用在我们添加新文件,修改一个已存在文件的内容,或者从仓库中删除文件。实际上,`git add` 将所有更改考虑在内,并将这些更改分阶段进行提交。如果有疑问,请仔细查看下面终端屏幕截图中每个命令的输出。 + +Git 会告诉我们已删除的文件正在进行提交。只要你提交此更改并将其推送到 GitHub,该文件也将从 GitHub 的仓库中删除。运行以下命令: +``` +git commit -m "Delete file.txt" + +git push -u origin master + +``` + +现在你的终端看起来像这样: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide20.png?itok=SrJMqNXC) + +你的 GitHub 看起来像这样: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide21.png?itok=RhXM4Gua) + +现在你知道如何从你的仓库克隆,添加,修改和删除 Git 文件。本系列的下一篇文章将检查 Git 分支。 + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files + +作者:[Kedar Vijay Kulkarni][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/kkulkarn +[1]:https://opensource.com/article/18/1/step-step-guide-git From 88cb5e3805a68f05d06415e05e88495dc3216f18 Mon Sep 17 00:00:00 2001 From: rockouc Date: Mon, 19 Mar 2018 09:40:24 +0800 Subject: [PATCH 243/343] Update 20171114 Why pair writing helps improve documentation.md --- .../20171114 Why pair writing helps improve documentation.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171114 Why pair writing helps improve documentation.md b/sources/tech/20171114 Why pair writing helps improve documentation.md index ff3bbb5888..3df7e1df9c 100644 --- a/sources/tech/20171114 Why pair writing helps improve documentation.md +++ b/sources/tech/20171114 Why pair writing helps improve documentation.md @@ -1,3 +1,5 @@ +Translating by rockouc + Why pair writing helps improve documentation ====== 
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd) From 435e4b088492de14cd8cabf5f618bcd4ce81cd88 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 19 Mar 2018 13:22:00 +0800 Subject: [PATCH 244/343] Translated by qhwdw --- ...icroservices vs. monolith How to choose.md | 177 ------------------ ...icroservices vs. monolith How to choose.md | 176 +++++++++++++++++ 2 files changed, 176 insertions(+), 177 deletions(-) delete mode 100644 sources/tech/20180131 Microservices vs. monolith How to choose.md create mode 100644 translated/tech/20180131 Microservices vs. monolith How to choose.md diff --git a/sources/tech/20180131 Microservices vs. monolith How to choose.md b/sources/tech/20180131 Microservices vs. monolith How to choose.md deleted file mode 100644 index a337f3c85f..0000000000 --- a/sources/tech/20180131 Microservices vs. monolith How to choose.md +++ /dev/null @@ -1,177 +0,0 @@ -Translating by qhwdw -Microservices vs. monolith: How to choose -============================================================ - -### Both architectures have pros and cons, and the right decision depends on your organization's unique needs. - - -![Microservices vs. monolith: How to choose](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_architecture_design.jpg?itok=lB_qYv-I "Microservices vs. monolith: How to choose") -Image by :  - -Onasill ~ Bill Badzo on [Flickr][11]. [CC BY-NC-SA 2.0][12]. Modified by Opensource.com. - -For many startups, conventional wisdom says to start with a monolith architecture over microservices. But are there exceptions to this? - -The upcoming book,  [_Microservices for Startups_][13] , explores the benefits and drawbacks of microservices, offering insights from dozens of CTOs. - -While different CTOs take different approaches when starting new ventures, they agree that context and capability are key. If you're pondering whether your business would be best served by a monolith or microservices, consider the factors discussed below. - -### Understanding the spectrum - -More on Microservices - -* [How to explain microservices to your CEO][1] - -* [Free eBook: Microservices vs. service-oriented architecture][2] - -* [Secured DevOps for microservices][3] - -Let's first clarify what exactly we mean by “monolith” and “microservice.” - -Microservices are an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. - -A monolithic application is built as a single, unified unit, and usually one massive code base. Often a monolith consists of three parts: a database, a client-side user interface (consisting of HTML pages and/or JavaScript running in a browser), and a server-side application. - -“System architectures lie on a spectrum,” Zachary Crockett, CTO of [Particle][14], said in an interview. “When discussing microservices, people tend to focus on one end of that spectrum: many tiny applications passing too many messages to each other. At the other end of the spectrum, you have a giant monolith doing too many things. For any real system, there are many possible service-oriented architectures between those two extremes.” - -Depending on your situation, there are good reasons to tend toward either a monolith or microservices. 
- -"We want to use the best tool for each service." Julien Lemoine, CTO at Algolia - -Contrary to what many people think, a monolith isn’t a dated architecture that's best left in the past. In certain circumstances, a monolith is ideal. I spoke to Steven Czerwinski, head of engineering at [Scaylr][15] and a former Google employee, to better understand this. - -“Even though we had had positive experiences of using microservices at Google, we [at Scalyr] went [for a monolith] route because having one monolithic server means less work for us as two engineers,” he explained. (This was back in the early days of Scalyr.) - -But if your team is experienced with microservices and you have a clear idea of the direction you’re going, microservices can be a great alternative. - -Julien Lemoine, CTO at [Algolia][16], chimed in on this point: “We have always started with a microservices approach. The main goal was to be able to use different technology to build our service, for two big reasons: - -* We want to use the best tool for each service. Our search API is highly optimized at the lowest level, and C++ is the perfect language for that. That said, using C++ for everything is a waste of productivity, especially to build a dashboard. - -* We want the best talent, and using only one technology would limit our options. This is why we have different languages in the company.” - -If your team is prepared, starting with microservices allows your organization to get used to the rhythm of developing in a microservice environment right from the start. - -### Weighing the pros and cons - -Before you decide which approach is best for your organization, it's important to consider the strengths and weaknesses of each. - -### Monoliths - -### Pros: - -* **Fewer cross-cutting concerns:** Most apps have cross-cutting concerns, such as logging, rate limiting, and security features like audit trails and DOS protection. When everything is running through the same app, it’s easy to address those concerns by hooking up components. - -* **Less operational overhead:** There’s only one application to set up for logging, monitoring, and testing. Also, it's generally less complex to deploy. - -* **Performance:** A monolith architecture can offer performance advantages since shared-memory access is faster than inter-process communication (IPC). - -### Cons: - -* **Tightly coupled:** Monolithic app services tend to get tightly coupled and entangled as the application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability. - -* **Harder to understand:** Monolithic architectures are more difficult to understand because of dependencies, side effects, and other factors that are not obvious when you’re looking at a specific service or controller. - -### Microservices - -### Pros: - -* **Better organization:** Microservice architectures are typically better organized, since each microservice has a specific job and is not concerned with the jobs of other components. - -* **Decoupled:** Decoupled services are easier to recompose and reconfigure to serve different apps (for example, serving both web clients and the public API). They also allow fast, independent delivery of individual parts within a larger integrated system. - -* **Performance:** Depending on how they're organized, microservices can offer performance advantages because you can isolate hot services and scale them independently of the rest of the app. 
- -* **Fewer mistakes:** Microservices enable parallel development by establishing a strong boundary between different parts of your system. Doing this makes it more difficult to connect parts that shouldn’t be connected, for example, or couple too tightly those that need to be connected. - -### Cons: - -* **Cross-cutting concerns across each service:** As you build a new microservice architecture, you’re likely to discover cross-cutting concerns you may not have anticipated at design time. You’ll either need to incur the overhead of separate modules for each cross-cutting concern (i.e., testing), or encapsulate cross-cutting concerns in another service layer through which all traffic is routed. Eventually, even monolithic architectures tend to route traffic through an outer service layer for cross-cutting concerns, but with a monolithic architecture, it’s possible to delay the cost of that work until the project is more mature. - -* **Higher operational overhead:** Microservices are frequently deployed on their own virtual machines or containers, causing a proliferation of VM wrangling. These tasks are frequently automated with container fleet management tools. - -### Decision time - -Once you understand the pros and cons of both approaches, how do you apply this information to your startup? Based on interviews with CTOs, here are three questions to guide your decision process: - -**Are you in familiar territory?** - -Diving directly into microservices is less risky if your team has previous domain experience (for example, in e-commerce) and knowledge concerning the needs of your customers. If you’re traveling down an unknown path, on the other hand, a monolith may be a safer option. - -**Is your team prepared?** - -Does your team have experience with microservices? If you quadruple the size of your team within the next year, will microservices offer the best environment? Evaluating the dimensions of your team is crucial to the success of your project. - -**How’s your infrastructure?** - -To make microservices work, you’ll need a cloud-based infrastructure. - -David Strauss, CTO of [Pantheon][17], explained: “[Previously], you would want to start with a monolith because you wanted to deploy one database server. The idea of having to set up a database server for every single microservice and then scale out was a mammoth task. Only a huge, tech-savvy organization could do that. Today, with services like Google Cloud and Amazon AWS, you have many options for deploying tiny things without needing to own the persistence layer for each one.” - -### Evaluate the business risk - -As a tech-savvy startup with high ambitions, you might think microservices is the “right” way to go. But microservices can pose a business risk. Strauss explained, “A lot of teams overbuild their project initially. Everyone wants to think their startup will be the next unicorn, and they should therefore build everything with microservices or some other hyper-scalable infrastructure. But that's usually wrong.” In these cases, Strauss continued, the areas that they thought they needed to scale are often not the ones that actually should scale first, resulting in wasted time and effort. - -### Situational awareness - -Ultimately, context is key. Here are some tips from CTOs: - -#### When to start with a monolith - -* **Your team is at founding stage:** Your team is small—say, 2 to 5 members—and is unable to tackle a broader, high-overhead microservices architecture. 
- -* **You’re building an unproven product or proof of concept:** If you're bringing a brand-new product to market, it will likely evolve over time, and a monolith is better-suited to allow for rapid product iteration. The same notion applies to a proof of concept, where your goal is to learn as much as possible as quickly as possible, even if you end up throwing it away. - -* **You have no microservices experience:** Unless you can justify the risk of learning on the fly at an early stage, a monolith may be a safer approach for an inexperienced team. - -#### When to start with microservices - -* **You need quick, independent service delivery:** Microservices allow for fast, independent delivery of individual parts within a larger integrated system. Note that it can take some time to see service delivery gains with microservices compared to a monolith, depending on your team's size. - -* **A piece of your platform needs to be extremely efficient:** If your business does intensive processing of petabytes of log volume, you’ll likely want to build that service out in an efficient language like C++, while your user dashboard may be built in [Ruby on Rails][5]. - -* **You plan to grow your team:** Starting with microservices gets your team used to developing in separate small services from the beginning, and teams that are separated by service boundaries are easier to scale as needed. - -To decide whether a monolith or microservices is right for your organization, be honest and self-aware about your context and capabilities. This will help you find the best path to grow your business. - -### Topics - - [Microservices][21][DevOps][22] - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/profile_15.jpg?itok=EaSRMCN-)][18] jakelumetta - Jake is the CEO of [ButterCMS, an API-first CMS][6]. He loves whipping up Butter puns and building tools that makes developers lives better. 
For more content like this, follow [@ButterCMS][7] on Twitter and [subscribe to our blog][8].[More about me][9] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/how-choose-between-monolith-microservices - -作者:[jakelumetta ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jakelumetta -[1]:https://blog.openshift.com/microservices-how-to-explain-them-to-your-ceo/?intcmp=7016000000127cYAAQ&src=microservices_resource_menu1 -[2]:https://www.openshift.com/promotions/microservices.html?intcmp=7016000000127cYAAQ&src=microservices_resource_menu2 -[3]:https://opensource.com/business/16/11/secured-devops-microservices?src=microservices_resource_menu3 -[4]:https://opensource.com/article/18/1/how-choose-between-monolith-microservices?rate=tSotlNvwc-Itch5fhYiIn5h0L8PcUGm_qGvqSVzu9w8 -[5]:http://rubyonrails.org/ -[6]:https://buttercms.com/ -[7]:https://twitter.com/ButterCMS -[8]:https://buttercms.com/blog/ -[9]:https://opensource.com/users/jakelumetta -[10]:https://opensource.com/user/205531/feed -[11]:https://www.flickr.com/photos/onasill/16452059791/in/photolist-r4P7ci-r3xUqZ-JkWzgN-dUr8Mo-biVsvF-kA2Vot-qSLczk-nLvGTX-biVxwe-nJJmzt-omA1vW-gFtM5-8rsk8r-dk9uPv-5kja88-cv8YTq-eQqNJu-7NJiqd-pBUkk-pBUmQ-6z4dAw-pBULZ-vyM3V3-JruMsr-pBUiJ-eDrP5-7KCWsm-nsetSn-81M3EC-pBURh-HsVXuv-qjgBy-biVtvx-5KJ5zK-81F8xo-nGFQo3-nJr89v-8Mmi8L-81C9A6-qjgAW-564xeQ-ihmDuk-biVBNz-7C5VBr-eChMAV-JruMBe-8o4iKu-qjgwW-JhhFXn-pBUjw -[12]:https://creativecommons.org/licenses/by-nc-sa/2.0/ -[13]:https://buttercms.com/books/microservices-for-startups/ -[14]:https://www.particle.io/Particle -[15]:https://www.scalyr.com/ -[16]:https://www.algolia.com/ -[17]:https://pantheon.io/ -[18]:https://opensource.com/users/jakelumetta -[19]:https://opensource.com/users/jakelumetta -[20]:https://opensource.com/users/jakelumetta -[21]:https://opensource.com/tags/microservices -[22]:https://opensource.com/tags/devops diff --git a/translated/tech/20180131 Microservices vs. monolith How to choose.md b/translated/tech/20180131 Microservices vs. monolith How to choose.md new file mode 100644 index 0000000000..f6d389fab2 --- /dev/null +++ b/translated/tech/20180131 Microservices vs. monolith How to choose.md @@ -0,0 +1,176 @@ +微服务 vs. 整体服务:如何选择 +============================================================ + +### 任何一种架构都是有利有弊的,而能满足你组织的独特需要的决策才是正确的选择。 + + +![Microservices vs. monolith: How to choose](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_architecture_design.jpg?itok=lB_qYv-I "Microservices vs. monolith: How to choose") +Image by :  + +Onasill ~ Bill Badzo on [Flickr][11]. [CC BY-NC-SA 2.0][12]. Modified by Opensource.com. + +对于许多初创公司来说,传统的知识认为,从单一整体架构开始,而不是使用微服务。但是,我们还有别的选择吗? + +这本新书 —— [初创公司的微服务][13],从许多 CIO 们理解的微服务的角度,解释了微服务的优点与缺点。 + +对于初创公司,虽然不同的 CTO 对此给出的建议是不同的,但是他们都一致认为环境和性能很重要。如果你正考虑你的业务到底是采用微服务还是单一整体架构更好,下面讨论的这些因素正好可以为你提供一些参考。 + +### 理解范围 + +更多有关微服务的内容 + +* [如何向你的 CEO 解释微服务][1] + +* [免费电子书:微服务 vs. 
面向服务的架构][2] + +* [DevOps 确保微服务安全][3] + +首先,我们先来准确定义我们所谓的 “整体服务” 和 “微服务” 是什么。 + +微服务是一种方法,它开发一个单一的应用程序来作为构成整体服务的小服务,每个小服务都运行在它自己的进程中,并且使用一个轻量级的机制进行通讯,通常是一个 HTTP 资源 API。这些服务都围绕业务能力来构建,并且可依赖全自动部署机制来独立部署。 + +一个整体应用程序是按单个的、统一的单元来构建,并且,通常情况下它是基于一个大量的代码来实现的。一般来说,一个整体服务是由三部分组成的:一个数据库、一个客户端用户界面(由 HTML 页面和/或运行在浏览器中的 JavaScript 组成)、以及一个服务器端应用程序。 + +“系统架构处于一个范围之中”,Zachary Crockett,[Particle][14] 的 CTO,在一次访谈中,他说,”在讨论微服务时,人们倾向于关注这个范围的一端:许多极小的应用程序给其它应用程序传递了过多的信息。在另一端,有一个巨大的整体服务做了太多的事情。在任何现实中的系统上,在这两个极端之间有很多合适的面向服务的架构“。 + +根据你的情况不同,不论是使用整体服务还是微服务都有很多很好的理由。 + +"我们希望为每个服务使用最好的工具”,Julien Lemoine 说,他是 Algolia 的 CTO。 + +与很多人的想法正好相反,整体服务并不是过去遗留下来的过时的架构。在某些情况下,整体服务是非常理想的。我采访了 Steven Czerwinski 之后,更好地理解了这一点,他是 [Scaylr][15] 的工程主管,前谷歌员工。 + +“尽管我们在谷歌时有使用微服务的一些好的经验,我们现在 [在 Scalyr] 却使用的是整体服务的架构,因为一个整体服务架构意味着我们的工作量更少,我们只有两位工程师。“ 他解释说。(采访他时,Scaylr 正处于早期阶段) + +但是,如果你的团队使用微服务的经验很丰富,并且你对你们的发展方向有明确的想法,微服务可能是一个很好的 替代者。 + +Julien Lemoine,[Algolia][16] 的 CTO,在这个问题上,他认为:”我们通常从使用微服务开始,主要目的是我们可以使用不同的技术来构建我们的服务,因为如下的两个主要原因: + +* 我们想为每个服务使用最好的工具。我们的搜索 API 是在底层做过高度优化的,而 C++ 是非常适合这项工作的。他说,在任何地方都使用 C++ 是一种生产力的浪费,尤其是在构建仪表板方面。 + +* 我们希望使用最好的人才,而只使用一种技术将极大地限制我们的选择。这就是为什么在公司中有不同语言的原因。“ + +如果你的团队已经准备好从一开始就使用微服务,这样你的组织从一开始就可以适应微服务环境的开发节奏。 + +### 权衡利弊 + +在你决定那种方法更适合你的组织之前,考虑清楚每种方法的优缺点是非常重要的。 + +### 整体服务 + +### 优点: + +* **很少担心横向联系:** 大多数应用程序开发者都担心横向联系,比如,日志、速度限制、以及像审计跟踪和 DoS 防护这样的安全特性。当所有的东西都运行在同一个应用程序中时,通过组件钩子来处理这些关注点就非常容易了。 + +* **运营开销很少:** 只需要为一个应用程序设置日志、监视、以及测试。一般情况下,部署也相对要简单。 + +* **性能:** 一个整体的架构可能会有更好的性能,因为共享内存的访问速度要比进程间通讯(IPC)更快。 + +### 缺点: + +* **紧耦合:** 整体服务的应用程序倾向于紧耦合,并且应用程序是整体进化,分离特定用途的服务是非常困难的,比如,独立扩展或者代码维护。 + +* **理解起来很困难:** 当你想查看一个特定的服务或者控制器时,因为依赖、副作用、和其它的不可预见因素,整体架构理解起来更困难。 + +### 微服务 + +### 优点: + +* **非常好组织:** 微服务架构一般很好组织它们,因为每个微服务都有一个特定的工作,并且还不用考虑其它组件的工作。 + +* **解耦合:** 解耦合的服务是能够非常容易地进行重组织和重配置,以服务于不同的应用程序(比如,同时向 Web 客户端和公共 API 提供服务)。它们在一个大的集成系统中,也允许快速、独立分发单个部分。 + +* **性能:** 根据组织的情况,微服务可以提供更好的性能,因为你可以分离热点服务,并根据其余应用程序的情况来扩展它们。 + +* **更少的错误:** 微服务允许系统中的不同部分,在维护良好边界的前提下进行并行开发。这样将使连接不该被连接的部分变得更困难,比如,需要连接的那些紧耦合部分。 + +### 缺点: + +* **跨每个服务的横向联系点:** 由于你构建了一个新的微服务架构,你可能会发现在设计时没有预料到的很多横向联系的问题。这也将导致需要每个横向联系点的独立模块(比如,测试)的开销增加,或者在其它服务层面因封装横向联系点,所导致的所有流量都需要路由。最终,即便是整体服务架构也倾向于通过横向联系点的外部服务层来路由流量,但是,如果使用整体架构,在项目更加成熟之前,也不过只是推迟了工作成本。 + +* **更高的运营开销:** 微服务在它所属的虚拟机或容器上部署非常频繁,导致虚拟机争用激增。这些任务都是使用容器管理工具进行频繁的自动化部署的。 + +### 决策时刻 + +当你了解了每种方法的利弊之后,如何在你的初创公司使用这些信息?通过与这些 CTO 们的访谈,这里有三个问题可以指导你的决策过程: + +**你是在熟悉的领域吗?** + +如果你的团队有以前的一些领域的经验(比如,电子商务)和了解你的客户需求,那么分割成微服务是低风险的。如果你从未做过这些,从另一个角度说,整体服务或许是一个更安全的选择。 + +**你的团队做好准备了吗?** + +你的团队有使用微服务的经验吗?如果明年,你的团队扩充到现在的四倍,将为微服务提供更好的环境?评估团队大小对项目的成功是非常重要的。 + +**你的基础设施怎么样?** + +实施微服务,你需要基于云的基础设施。 + +David Strauss,[Pantheon][17] 的 CTO,他解释说:"[以前],你使用整体服务是因为,你希望部署在一个数据库上。每个单个的微服务都需要配置数据库服务器,然后,扩展它将是一个很重大的任务。只有大的、技术力量雄厚的组织才能做到。现在,使用像谷歌云和亚马逊 AWS 这样的云服务,为部署一个小的东西而不需要为它们中的每个都提供持久存储,对于这种需求你有很多的选择。“ + +### 评估业务风险 + +技术力量雄厚的初创公司为追求较高的目标,可以考虑使用微服务。但是微服务可能会带来业务风险。Strauss 解释说,”许多团队一开始就过度构建他们的项目。每个人都认为,他们的公司会成为下一个 “独角兽”,因此,他们使用微服务构建任何一个东西,或者一些其它的高扩展性的基础设施。但是这通常是一种错误的做法“。Strauss 说,在那种情况下,他们认为需要扩大规模的领域往往并不是一开始真正需要扩展的领域,最后的结果是浪费了时间和努力。 + +### 态势感知 + +最终,环境是关键。以下是一些来自 CTO 们的提示: + +#### 什么时候使用整体服务 + +* **你的团队还在创建阶段:** 你的团队很小 —— 也就是说,有 2 到 5 位成员 —— 还无法应对大范围、高成本的微服务架构。 + +* **你正在构建的是一个未经证实的产品或者概念验证:** 如果你将一个全新的产品推向市场,随着时间的推移,它有可能会成功,而对于一个快速迭代的产品,整体架构是最合适的。这个提示也同样适用于概念验证,你的目标是尽可能快地学习,即便最终你可能会放弃它。 + +* **你没有使用微服务的经验:** 除非你有合理的理由证明早期学习阶段的风险可控,否则,一个整体的架构更适用于一个没有经验的团队。 + +#### 什么时候开始使用微服务 + +* **你需要快速、独立的分发服务:** 微服务允许在一个大的集成系统中快速、独立分发单个部分。请注意,根据你的团队规模,获取与整体服务的比较优势,可能需要一些时间。 + +* **你的平台中的某些部分需要更高效:** 如果你的业务要求集中处理 PB 级别的日志卷,你可能需要使用一个像 C++ 
这样的更高效的语言来构建这个服务,尽管你的用户仪表板或许还是用 [Ruby on Rails][5] 构建的。 + +* **计划扩展你的团队:** 使用微服务,将让你的团队从一开始就开发独立的小服务,而服务边界独立的团队更易于按需扩展。 + +要决定整体服务还是微服务更适合你的组织,要坦诚并正确认识自己的环境和能力。这将有助于你找到业务成长的最佳路径。 + +### 主题 + + [微服务][21]、 [DevOps][22] + +### 关于作者 + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/profile_15.jpg?itok=EaSRMCN-)][18] jakelumetta - Jake 是 ButterCMS 的 CEO,它是一个 [API-first CMS][6]。他喜欢搅动出黄油双峰,以及构建让开发者工作更舒适的工具,喜欢他的更多内容,请在 Twitter 上关注 [@ButterCMS][7],订阅 [他的博客][8]。[关于他的更多信息][9] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/how-choose-between-monolith-microservices + +作者:[jakelumetta ][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jakelumetta +[1]:https://blog.openshift.com/microservices-how-to-explain-them-to-your-ceo/?intcmp=7016000000127cYAAQ&src=microservices_resource_menu1 +[2]:https://www.openshift.com/promotions/microservices.html?intcmp=7016000000127cYAAQ&src=microservices_resource_menu2 +[3]:https://opensource.com/business/16/11/secured-devops-microservices?src=microservices_resource_menu3 +[4]:https://opensource.com/article/18/1/how-choose-between-monolith-microservices?rate=tSotlNvwc-Itch5fhYiIn5h0L8PcUGm_qGvqSVzu9w8 +[5]:http://rubyonrails.org/ +[6]:https://buttercms.com/ +[7]:https://twitter.com/ButterCMS +[8]:https://buttercms.com/blog/ +[9]:https://opensource.com/users/jakelumetta +[10]:https://opensource.com/user/205531/feed +[11]:https://www.flickr.com/photos/onasill/16452059791/in/photolist-r4P7ci-r3xUqZ-JkWzgN-dUr8Mo-biVsvF-kA2Vot-qSLczk-nLvGTX-biVxwe-nJJmzt-omA1vW-gFtM5-8rsk8r-dk9uPv-5kja88-cv8YTq-eQqNJu-7NJiqd-pBUkk-pBUmQ-6z4dAw-pBULZ-vyM3V3-JruMsr-pBUiJ-eDrP5-7KCWsm-nsetSn-81M3EC-pBURh-HsVXuv-qjgBy-biVtvx-5KJ5zK-81F8xo-nGFQo3-nJr89v-8Mmi8L-81C9A6-qjgAW-564xeQ-ihmDuk-biVBNz-7C5VBr-eChMAV-JruMBe-8o4iKu-qjgwW-JhhFXn-pBUjw +[12]:https://creativecommons.org/licenses/by-nc-sa/2.0/ +[13]:https://buttercms.com/books/microservices-for-startups/ +[14]:https://www.particle.io/Particle +[15]:https://www.scalyr.com/ +[16]:https://www.algolia.com/ +[17]:https://pantheon.io/ +[18]:https://opensource.com/users/jakelumetta +[19]:https://opensource.com/users/jakelumetta +[20]:https://opensource.com/users/jakelumetta +[21]:https://opensource.com/tags/microservices +[22]:https://opensource.com/tags/devops From c4be5251581761ff6eb43f5dc773c6f7318786bc Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 19 Mar 2018 13:39:22 +0800 Subject: [PATCH 245/343] PRF:20180125 Keep Accurate Time on Linux with NTP.md @qhwdw --- ...25 Keep Accurate Time on Linux with NTP.md | 51 ++++++++++--------- 1 file changed, 26 insertions(+), 25 deletions(-) diff --git a/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md b/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md index d2eddc0d63..d501c32d25 100644 --- a/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md +++ b/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md @@ -7,25 +7,26 @@ ### 它的时间是多少? 
-当让 Linux 来告诉你时间的时候,它是很奇怪的。你可能认为是使用 `time` 命令来告诉你时间,其实并不是,因为 `time` 只是一个测量一个进程运行了多少时间的计时器。为得到时间,你需要运行的是 `date` 命令,你想查看更多的日期,你可以运行 `cal` 命令。文件上的时间戳也是一个容易混淆的地方,因为根据你的发行版默认情况不同,它一般有两种不同的显示方法。下面是来自 Ubuntu 16.04 LTS 的示例: +让 Linux 来告诉你时间的时候,它是很奇怪的。你可能认为是使用 `time` 命令来告诉你时间,其实并不是,因为 `time` 只是一个测量一个进程运行了多少时间的计时器。为得到时间,你需要运行的是 `date` 命令,你想查看更多的日期,你可以运行 `cal` 命令。文件上的时间戳也是一个容易混淆的地方,因为根据你的发行版默认情况不同,它一般有两种不同的显示方法。下面是来自 Ubuntu 16.04 LTS 的示例: + ``` $ ls -l drwxrwxr-x 5 carla carla 4096 Mar 27 2017 stuff drwxrwxr-x 2 carla carla 4096 Dec 8 11:32 things -rw-rw-r-- 1 carla carla 626052 Nov 21 12:07 fatpdf.pdf -rw-rw-r-- 1 carla carla 2781 Apr 18 2017 oddlots.txt - ``` -有些显示年,有些显示时间,这样的方式让你的文件更混乱。GNU 默认的情况是,如果你的文件在六个月以内,则显示时间而不是年。我想这样做可能是有原因的。如果你的 Linux 是这样的,尝试用 `ls -l --time-style=long-iso` 命令,让时间戳用同一种方式去显示,按字母顺序排序。查阅 [如何更改 Linux 的日期和时间:简单的命令][1] 去学习 Linux 上管理时间的各种方法。 +有些显示年,有些显示时间,这样的方式让你的文件更混乱。GNU 默认的情况是,如果你的文件在六个月以内,则显示时间而不是年。我想这样做可能是有原因的。如果你的 Linux 是这样的,尝试用 `ls -l --time-style=long-iso` 命令,让时间戳用同一种方式去显示,按字母顺序排序。请查阅 [如何更改 Linux 的日期和时间:简单的命令][1] 去学习 Linux 上管理时间的各种方法。 ### 检查当前设置 -NTP —— 网络时间协议,它是老式的保持计算机正确时间的方法。`ntpd` 是 NTP 守护程序,它通过周期性地查询公共时间服务器来按需调整你的计算机时间。它是一个简单的、轻量级的协议,使用它的基本功能时设置非常容易。Systemd 通过使用 `systemd-timesyncd.service` 已经越俎代庖 “干了 NTP 的活”,它可以用作 `ntpd` 的客户端。 +NTP —— 网络时间协议,它是保持计算机正确时间的老式方法。`ntpd` 是 NTP 守护程序,它通过周期性地查询公共时间服务器来按需调整你的计算机时间。它是一个简单的、轻量级的协议,使用它的基本功能时设置非常容易。systemd 通过使用 `systemd-timesyncd.service` 已经越俎代庖地 “干了 NTP 的活”,它可以用作 `ntpd` 的客户端。 在我们开始与 NTP “打交道” 之前,先花一些时间来了检查一下当前的时间设置是否正确。 -你的系统上(至少)有两个时钟:系统时间 —— 它由 Linux 内核管理,第二个是你的主板上的硬件时钟,它也称为实时时钟(RTC)。当你进入系统的 BIOS 时,你可以看到你的硬件时钟的时间,你也可以去改变它的设置。当你安装一个新的 Linux 时,在一些图形化的时间管理器中,你会被询问是否设置你的 RTC 为 UTC(协调世界时间)时区,因为所有的时区和夏令时都是基于 UTC 的。你可以使用 `hwclock` 命令去检查: +你的系统上(至少)有两个时钟:系统时间 —— 它由 Linux 内核管理,第二个是你的主板上的硬件时钟,它也称为实时时钟(RTC)。当你进入系统的 BIOS 时,你可以看到你的硬件时钟的时间,你也可以去改变它的设置。当你安装一个新的 Linux 时,在一些图形化的时间管理器中,你会被询问是否设置你的 RTC 为 UTC(世界标准时间Coordinated Universal Time)时区,因为所有的时区和夏令时都是基于 UTC 的。你可以使用 `hwclock` 命令去检查: + ``` $ sudo hwclock --debug hwclock from util-linux 2.27.1 @@ -39,27 +40,27 @@ Hw clock time : 2018/01/22 22:14:31 = 1516659271 seconds since 1969 Time since last adjustment is 1516659271 seconds Calculated Hardware Clock drift is 0.000000 seconds Mon 22 Jan 2018 02:14:30 PM PST .202760 seconds - ``` -"硬件时钟用 UTC 时间维护" 确认了你的计算机的 RTC 是使用 UTC 时间,虽然你的本地时间是通过 UTC 转换来的。如果设置本地时间,它将报告 “硬件时钟用本地时间维护”。 +`Hardware clock is on UTC time` 表明了你的计算机的 RTC 是使用 UTC 时间的,虽然它把该时间转换为你的本地时间。如果它被设置为本地时间,它将显示 `Hardware clock is on local time`。 + +你应该有一个 `/etc/adjtime` 文件。如果没有的话,使用如下命令同步你的 RTC 为系统时间, -如果你不同步你的 RTC 到系统时间,你应该有一个 `/etc/adjtime` 文件。 ``` $ sudo hwclock -w - ``` -这个命令将生成这个文件,它将包含如下示例中的内容: +这个命令将生成该文件,内容看起来类似如下: + ``` $ cat /etc/adjtime 0.000000 1516661953 0.000000 1516661953 UTC - ``` 新发明的 systemd 方式是去运行 `timedatectl` 命令,运行它不需要 root 权限: + ``` $ timedatectl Local time: Mon 2018-01-22 14:17:51 PST @@ -69,35 +70,36 @@ $ timedatectl Network time on: yes NTP synchronized: yes RTC in local TZ: no - ``` -"RTC in local TZ: no" 确认了它没有使用 UTC 时间。如果要改变它的本地时间,怎么办?这里有许多种方法可以做到。最简单的方法是使用一个图形配置工具,比如像 openSUSE 中的 YaST。你可使用 `timedatectl`: +`RTC in local TZ: no` 表明它使用 UTC 时间。那么怎么改成使用本地时间?这里有许多种方法可以做到。最简单的方法是使用一个图形配置工具,比如像 openSUSE 中的 YaST。你也可使用 `timedatectl`: + ``` $ timedatectl set-local-rtc 0 ``` -或者编辑 `/etc/adjtime`,将 UTC 替换为 LOCAL。 +或者编辑 `/etc/adjtime`,将 `UTC` 替换为 `LOCAL`。 ### systemd-timesyncd 客户端 现在,我已经累了,但是我们刚到非常精彩的部分。谁能想到计时如此复杂?我们甚至还没有了解到它的皮毛;阅读 `man 8 hwclock` 去了解你的计算机如何保持时间的详细内容。 -Systemd 提供了 `systemd-timesyncd.service` 客户端,它查询远程时间服务器并调整你的本地系统时间。在 `/etc/systemd/timesyncd.conf` 中配置你的服务器。大多数 
Linux 发行版都提供一个默认配置,它指向他们维护的时间服务器上,比如,以下是 Fedora 的: +systemd 提供了 `systemd-timesyncd.service` 客户端,它可以查询远程时间服务器并调整你的本地系统时间。在 `/etc/systemd/timesyncd.conf` 中配置你的(时间)服务器。大多数 Linux 发行版都提供了一个默认配置,它指向他们维护的时间服务器上,比如,以下是 Fedora 的: + ``` [Time] #NTP= #FallbackNTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org - ``` -你可以输入你希望的其它时间服务器,比如你自己的本地 NTP 服务器,在 `NTP=` 行上输入一个以空格分隔的服务器列表。(别忘了取消这一行的注释)`NTP=` 行上的任何内容都将覆盖掉 fallback 行上的配置项。 +你可以输入你希望使用的其它时间服务器,比如你自己的本地 NTP 服务器,在 `NTP=` 行上输入一个以空格分隔的服务器列表。(别忘了取消这一行的注释)`NTP=` 行上的任何内容都将覆盖掉 `FallbackNTP` 行上的配置项。 -如果你不想使用 systemd 呢?那么,你将需要一个 NTP。 +如果你不想使用 systemd 呢?那么,你将需要 NTP 就行。 ### 配置 NTP 服务器和客户端 -配置你自己的局域网 NTP 服务器是一个非常好的实践,这样你的网内计算机就不需要不停查询公共 NTP 服务器。在大多数 Linux 的 `ntp` 包中都带了 NTP,它们大多都提供 `/etc/ntp.conf` 文件去配置服务器。查阅 [NTP 时间服务器池][2] 去找到你所在的区域的合适的 NTP 服务器池。然后在你的 `/etc/ntp.conf` 中输入 4- 5 个服务器,每个服务器用单独的一行: +配置你自己的局域网 NTP 服务器是一个非常好的实践,这样你的网内计算机就不需要不停查询公共 NTP 服务器。在大多数 Linux 上的 NTP 都来自 `ntp` 包,它们大多都提供 `/etc/ntp.conf` 文件去配置时间服务器。查阅 [NTP 时间服务器池][2] 去找到你所在的区域的合适的 NTP 服务器池。然后在你的 `/etc/ntp.conf` 中输入 4 - 5 个服务器,每个服务器用单独的一行: + ``` driftfile /var/ntp.drift logfile /var/log/ntp.log @@ -105,12 +107,12 @@ server 0.europe.pool.ntp.org server 1.europe.pool.ntp.org server 2.europe.pool.ntp.org server 3.europe.pool.ntp.org - ``` -`driftfile` 告诉 `ntpd` 在这里保存的信息是用于在启动时,使用时间服务器去快速同步你的系统时钟的。而日志将保存在他们自己指定的目录中,而不是转储到 syslog 中。如果你的 Linux 发行版默认提供了这些文件,请使用它们。 +`driftfile` 告诉 `ntpd` 它需要保存用于启动时使用时间服务器快速同步你的系统时钟的信息。而日志也将保存在他们自己指定的目录中,而不是转储到 syslog 中。如果你的 Linux 发行版默认提供了这些文件,请使用它们。 现在去启动守护程序;在大多数主流的 Linux 中它的命令是 `sudo systemctl start ntpd`。让它运行几分钟之后,我们再次去检查它的状态: + ``` $ ntpq -p remote refid st t when poll reach delay offset jitter @@ -119,16 +121,15 @@ $ ntpq -p *chl.la 127.67.113.92 2 u 23 64 37 75.175 8.820 8.230 +four0.fairy.mat 35.73.197.144 2 u 22 64 37 116.272 -10.033 40.151 -195.21.152.161 195.66.241.2 2 u 27 64 37 107.559 1.822 27.346 - ``` 我不知道这些内容是什么意思,但重要的是,你的守护程序已经与时间服务器开始对话了,而这正是我们所需要的。你可以去运行 `sudo systemctl enable ntpd` 命令,永久启用它。如果你的 Linux 没有使用 systemd,那么,给你留下的家庭作业就是找出如何去运行 `ntpd`。 现在,你可以在你的局域网中的其它计算机上设置 `systemd-timesyncd`,这样它们就可以使用你的本地 NTP 服务器了,或者,在它们上面安装 NTP,然后在它们的 `/etc/ntp.conf` 上输入你的本地 NTP 服务器。 -NTP 服务器持续地接受客户端查询,并且这种需求在不断增加。你可以通过运行你自己的公共 NTP 服务器来提供帮助。下周我们将学习如何运行你自己的公共服务器。 +NTP 服务器会受到攻击,而且需求在不断增加。你可以通过运行你自己的公共 NTP 服务器来提供帮助。下周我们将学习如何运行你自己的公共服务器。 -通过来自 Linux 基金会和 edX 的免费课程 ["Linux 入门" ][3] 来学习更多 Linux 的知识。 +通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][3] 来学习更多 Linux 的知识。 -------------------------------------------------------------------------------- @@ -136,7 +137,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux- 作者:[CARLA SCHRODER][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From de0b15b3bcc8ef931f9e36fcab6b629abd5a8977 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 19 Mar 2018 13:39:54 +0800 Subject: [PATCH 246/343] PUB:20180125 Keep Accurate Time on Linux with NTP.md @qhwdw --- .../20180125 Keep Accurate Time on Linux with NTP.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180125 Keep Accurate Time on Linux with NTP.md (100%) diff --git a/translated/tech/20180125 Keep Accurate Time on Linux with NTP.md b/published/20180125 Keep Accurate Time on Linux with NTP.md similarity index 100% rename from translated/tech/20180125 Keep Accurate Time on Linux with NTP.md rename to published/20180125 Keep Accurate Time on Linux with NTP.md From 
593f45c35e2bb642c45b8e40defe4471dc3ef835 Mon Sep 17 00:00:00 2001 From: lontow <1423131852@qq.com> Date: Mon, 19 Mar 2018 20:09:18 +0800 Subject: [PATCH 247/343] translated --- ...0 Evolutional Steps of Computer Systems.md | 114 ------------------ ...0 Evolutional Steps of Computer Systems.md | 113 +++++++++++++++++ 2 files changed, 113 insertions(+), 114 deletions(-) delete mode 100644 sources/talk/20170210 Evolutional Steps of Computer Systems.md create mode 100644 translated/talk/20170210 Evolutional Steps of Computer Systems.md diff --git a/sources/talk/20170210 Evolutional Steps of Computer Systems.md b/sources/talk/20170210 Evolutional Steps of Computer Systems.md deleted file mode 100644 index b4f8de3bbd..0000000000 --- a/sources/talk/20170210 Evolutional Steps of Computer Systems.md +++ /dev/null @@ -1,114 +0,0 @@ -lontow Translating - -Evolutional Steps of Computer Systems -====== -Throughout the history of the modern computer, there were several evolutional steps related to the way we interact with the system. I tend to categorize those steps as following: - - 1. Numeric Systems - 2. Application-Specific Systems - 3. Application-Centric Systems - 4. Information-Centric Systems - 5. Application-Less Systems - - - -Following sections describe how I see those categories. - -### Numeric Systems - -[Early computers][1] were designed with numbers in mind. They could add, subtract, multiply, divide. Some of them were able to perform more complex mathematical operations such as differentiate or integrate. - -If you map characters to numbers, they were able to «compute» [strings][2] as well but this is somewhat «creative use of numbers» instead of meaningful processing arbitrary information. - -### Application-Specific Systems - -For higher-level problems, pure numeric systems are not sufficient. Application-specific systems were developed to do one single task. They were very similar to numeric systems. However, with sufficiently complex number calculations, systems were able to accomplish very well-defined higher level tasks such as calculations related to scheduling problems or other optimization problems. - -Systems of this category were built for one single purpose, one distinct problem they solved. - -### Application-Centric Systems - -Systems that are application-centric are the first real general purpose systems. Their main usage style is still mostly application-specific but with multiple applications working either time-sliced (one app after another) or in multi-tasking mode (multiple apps at the same time). - -Early personal computers [from the 70s][3] of the previous century were the first application-centric systems that became popular for a wide group of people. - -Yet modern operating systems - Windows, macOS, most GNU/Linux desktop environments - still follow the same principles. - -Of course, there are sub-categories as well: - - 1. Strict Application-Centric Systems - 2. Loose Application-Centric Systems - - - -Strict application-centric systems such as [Windows 3.1][4] (Program Manager and File Manager) or even the initial version of [Windows 95][5] had no pre-defined folder hierarchy. The user did start text processing software like [WinWord][6] and saved the files in the program folder of WinWord. When working with a spreadsheet program, its files were saved in the application folder of the spreadsheet tool. And so on. Users did not create their own hierarchy of folders mostly because of convenience, laziness, or because they did not saw any necessity. 
The number of files per user were sill within dozens up to a few hundreds. - -For accessing information, the user typically opened an application and within the application, the files containing the generated data were retrieved using file/open. - -It was [Windows 95][5] SP2 that introduced «[My Documents][7]» for the Windows platform. With this file hierarchy template, application designers began switching to «My Documents» as a default file save/open location instead of using the software product installation path. This made the users embrace this pattern and start to maintain folder hierarchies on their own. - -This resulted in loose application-centric systems: typical file retrieval is done via a file manager. When a file is opened, the associated application is started by the operating system. It is a small or subtle but very important usage shift. Application-centric systems are still the dominant usage pattern for personal computers. - -Nevertheless, this pattern comes with many disadvantages. For example in order to prevent data retrieval problems, there is the need to maintain a strict hierarchy of folders that contain all related files of a given project. Unfortunately, nature does not fit well in strict hierarchy of folders. Further more, [this does not scale well][8]. Desktop search engines and advanced data organizing tools like [tagstore][9] are able to smooth the edged a bit. As studies show, only a minority of users are using such advanced retrieval tools. Most users still navigate through the file system without using any alternative or supplemental retrieval techniques. - -### Information-Centric Systems - -One possible way of dealing with the issue that a certain topic needs to have a folder that holds all related files is to switch from an application-centric system to an information-centric systems. - -Instead of opening a spreadsheet application to work with the project budget, opening a word processor application to write the project report, and opening another tool to work with image files, an information-centric system combines all the information on the project in one place, in one application. - -The calculations for the previous month is right beneath notes from a client meeting which is right beneath a photography of the whiteboard notes which is right beneath some todo tasks. Without any application or file border in between. - -Early attempts to create such an environment were IBM [OS/2][10], Microsoft [OLE][11] or [NeXT][12]. None of them were a major success for a variety of reasons. A very interesting information-centric environment is [Acme][13] from [Plan 9][14]. It combines [a wide variety of applications][15] within one application but it never reached a notable distribution even with its ports to Windows or GNU/Linux. - -Modern approaches for an information-centric system are advanced [personal wikis][16] like [TheBrain][17] or [Microsoft OneNote][18]. - -My personal tool of choice is the [GNU/Emacs][19] platform with its [Org-mode][19] extension. I hardly leave Org-mode when I work with my computer. For accessing external data sources, I created [Memacs][20] which brings me a broad variety of data into Org-mode. I love to do spreadsheet calculations right beneath scheduled tasks, in-line images, internal and external links, and so forth. It is truly an information-centric system where the user doesn't have to deal with application borders or strictly hierarchical file-system folders. 
Multi-classifications is possible using simple or advanced tagging. All kinds of views can be derived with a single command. One of those views is my calendar, the agenda. Another derived view is the list of borrowed things. And so on. There are no limits for Org-mode users. If you can think of it, it is most likely possible within Org-mode. - -Is this the end of the evolution? Certainly not. - -### Application-Less Systems - -I can think of a class of systems which I refer to as application-less systems. As the next logical step, there is no need to have single-domain applications even when they are as capable as Org-mode. The computer offers a nice to use interface to information and features, not files and applications. Even a classical operating system is not accessible. - -Application-less systems might as well be combined with [artificial intelligence][21]. Think of it as some kind of [HAL 9000][22] from [A Space Odyssey][23]. Or [LCARS][24] from Star Trek. - -It is hard to believe that there is a transition between our application-based, vendor-based software culture and application-less systems. Maybe the open source movement with its slow but constant development will be able to form a truly application-less environment where all kinds of organizations and people are contributing to. - -Information and features to retrieve and manipulate information, this is all it takes. This is all we need. Everything else is just limiting distraction. - --------------------------------------------------------------------------------- - -via: http://karl-voit.at/2017/02/10/evolution-of-systems/ - -作者:[Karl Voit][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://karl-voit.at -[1]:https://en.wikipedia.org/wiki/History_of_computing_hardware -[2]:https://en.wikipedia.org/wiki/String_%2528computer_science%2529 -[3]:https://en.wikipedia.org/wiki/Xerox_Alto -[4]:https://en.wikipedia.org/wiki/Windows_3.1x -[5]:https://en.wikipedia.org/wiki/Windows_95 -[6]:https://en.wikipedia.org/wiki/Microsoft_Word -[7]:https://en.wikipedia.org/wiki/My_Documents -[8]:http://karl-voit.at/tagstore/downloads/Voit2012b.pdf -[9]:http://karl-voit.at/tagstore/ -[10]:https://en.wikipedia.org/wiki/OS/2 -[11]:https://en.wikipedia.org/wiki/Object_Linking_and_Embedding -[12]:https://en.wikipedia.org/wiki/NeXT -[13]:https://en.wikipedia.org/wiki/Acme_%2528text_editor%2529 -[14]:https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs -[15]:https://en.wikipedia.org/wiki/List_of_Plan_9_applications -[16]:https://en.wikipedia.org/wiki/Personal_wiki -[17]:https://en.wikipedia.org/wiki/TheBrain -[18]:https://en.wikipedia.org/wiki/Microsoft_OneNote -[19]:../../../../tags/emacs -[20]:https://github.com/novoid/Memacs -[21]:https://en.wikipedia.org/wiki/Artificial_intelligence -[22]:https://en.wikipedia.org/wiki/HAL_9000 -[23]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey -[24]:https://en.wikipedia.org/wiki/LCARS diff --git a/translated/talk/20170210 Evolutional Steps of Computer Systems.md b/translated/talk/20170210 Evolutional Steps of Computer Systems.md new file mode 100644 index 0000000000..2a71fd7f17 --- /dev/null +++ b/translated/talk/20170210 Evolutional Steps of Computer Systems.md @@ -0,0 +1,113 @@ +计算机系统的进化论 +====== +纵观现代计算机的历史,从与系统的交互方式方面,可以划分为数个进化阶段。而我更倾向于将之归类为以下几个阶段: + + 1. 数字系统 +2. 专用应用系统 +3. 应用中心系统 +4. 信息中心系统 +5. 
无应用系统
+
+
+下面我们详细聊聊这几种分类。
+
+### 数字系统
+
+在我看来,[ 早期计算机 ][1] 在设计时只考虑了数字。它们能够做加、减、乘、除运算。其中有一些还能够执行像微分和积分之类的更复杂的数学操作。
+
+当然,如果你把字符映射成数字,它们也可以计算字符串。但这多少有点“数字的创造性使用”的意思,而不是直接处理各种信息。
+
+### 专用应用系统
+
+对于更高层级的问题,纯粹的数字系统是不够的。专用应用系统被开发用来处理单一任务。它们和数字系统十分相似。不过,凭借足够复杂的数字计算,这些系统能够完成定义十分明确的高层级任务,比如与调度问题或者其他优化问题相关的计算。
+
+这类系统为单一目的而搭建,它们解决的是单一明确的问题。
+
+### 应用中心系统
+
+应用中心系统是第一种真正的通用系统。它们的主要使用风格仍然很像专用应用系统,但是它们可以以时间片模式(一个接一个)或多任务模式(多个应用同时)运行多个应用程序。
+
+上世纪 70 年代的[ 早期的个人电脑 ][3]是第一种广受大众欢迎的应用中心系统。
+
+如今的现代操作系统 —— Windows 、macOS 、大多数 GNU/Linux 桌面环境 —— 仍然遵循相同的原则。
+
+当然,应用中心系统还可以再细分为两种子类:
+
+1. 紧密型应用中心系统
+2. 松散型应用中心系统
+
+
+
+紧密型应用中心系统,比如 [Windows 3.1][4](拥有程序管理器和文件管理器)甚至 [ Windows 95 ][5] 的最初版本,都没有预定义的文件夹层次。用户启动文本处理程序(像 [ WinWord ][6])并且把文件保存在 WinWord 的程序文件夹中。在使用表格处理程序的时候,又把文件保存在表格处理工具的程序文件夹中。诸如此类。用户几乎不创建自己的文件层次结构,可能是因为这样做不方便、用户懒得动手,或者他们认为根本没有必要。那时,每个用户拥有的文件也就几十个,至多几百个。
+
+为了访问文件中的信息,用户常常先打开一个应用程序,然后通过程序中的“文件/打开”功能来获取处理过的数据文件。
+
+ 在 Windows 平台的[ Windows 95][5] SP2 中,«[ 我的文档 ][7]»首次被引入。有了这样一个文件层次结构的样板,应用设计者开始把 «[我的文档][7]» 作为程序的默认保存/打开目录,抛弃了原来将软件产品安装目录作为默认目录的做法。这样一来,用户渐渐适应了这种模式,并且开始自己维护文件夹层次。
+
+ 松散型应用中心系统(通过文件管理器来提取文件)应运而生。在这种系统下,当打开一个文件的时候,操作系统会自动启动与之相关的应用程序。这是一次细微但十分重要的用法转变。这种应用中心系统的用法模式至今仍是个人电脑的主要用法模式。
+
+ 然而,这种模式有很多的缺点。例如,对于一个给定的项目,为了防止数据提取出现问题,需要维护一个包含所有相关文件的严格文件夹层次结构。不幸的是,人们并不总能这样做。更何况,[这种做法的扩展性也不好][8]。桌面搜索引擎和高级数据组织工具(像[ tagstore ][9])可以起到一点改善作用。正如研究显示的那样,只有少部分用户在使用那些高级文件提取工具,大多数用户在文件系统中查找文件时并不借助任何替代或辅助的提取技术。
+
+### 信息中心系统
+
+解决上述问题的可行办法之一就是从应用中心系统转换到信息中心系统。
+
+信息中心系统将项目的所有信息联合起来,放在一个地方,放在同一个应用程序里。
+因此,我们再也不需要在计算项目预算时打开表格处理程序,在写项目报告时打开文本处理程序,在处理图片文件时又打开另一个工具。
+
+上个月的预算计算就在客户会议笔记的下方,会议笔记在白板笔记照片的下方,而白板照片又在一些待办任务的下方。它们之间没有任何应用程序或者文件的边界。
+
+早期,IBM [ OS/2 ][10]、Microsoft [ OLE ][11] 和 [NeXT][12] 都做过类似的尝试,但都由于各种原因没有取得重大成功。从 [ Plan 9][14] 发展而来的 [Acme][13] 是一个令人兴奋的信息中心环境,它把[ 种类繁多的应用程序 ][15]组合在了一个应用程序中。但即使有了 Windows 和 GNU/Linux 上的移植版本,它也从未得到值得一提的普及。
+
+信息中心系统的现代形式是高级 [ 个人 wiki ][16](像 [ TheBrain ][17]和[ Microsoft OneNote ][18])。
+
+我选择的个人工具是带 [Org-mode][19] 扩展的 [GNU/Emacs][19]。在用电脑的时候,我几乎离不开 Org-mode。为了访问外部数据资源,我创建了一个可以将多种数据导入 Org-mode 的工具 —— [Memacs][20]。我喜欢把表格计算放在日程任务的下方,旁边还有行内图片、内部和外部链接等等。它是一个真正的信息中心系统,用户不必面对应用程序之间的边界,也不必面对严格的文件系统层次。同时,用简单的或高级的标签也可以进行多重分类。一个命令可以派生出多种视图,比如,一个视图是我的日历(日程安排),另一个派生视图是外借物品清单,等等。Org-mode 用户没有什么限制,只有你想不到,没有它做不到。
+
+进化结束了吗? 当然没有。
+
+### 无应用系统
+
+我能想到这样一类操作系统,我称之为无应用系统。在下一步的发展中,系统将不再需要单一领域的应用程序,即使它们能和 Org-mode 一样出色。计算机直接提供一个便于使用的界面来访问信息和功能,而不是文件和应用程序。甚至连传统意义上的操作系统也不再需要。
+
+无应用系统也可能和 [人工智能][21] 联系起来。把它想象成 [2001太空漫游][23] 中的 [HAL 9000][22],或者《星际迷航》中的 [LCARS][24] 一类的东西就可以了。
+
+很难相信,我们能从如今这种基于应用、基于供应商的软件文化过渡到无应用系统。或许,缓慢但不断发展的开源运动,能够构建出一个由各种组织和个人共同贡献的真正的无应用环境。
+
+信息,以及用来提取和操作信息的功能,这就是系统需要提供的全部,也是我们所需要的全部。其他的东西都只是让人分心的干扰。
+
+--------------------------------------------------------------------------------
+
+via: http://karl-voit.at/2017/02/10/evolution-of-systems/
+
+作者:[Karl Voit][a]
+译者:[lontow](https://github.com/lontow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://karl-voit.at
+[1]:https://en.wikipedia.org/wiki/History_of_computing_hardware
+[2]:https://en.wikipedia.org/wiki/String_%2528computer_science%2529
+[3]:https://en.wikipedia.org/wiki/Xerox_Alto
+[4]:https://en.wikipedia.org/wiki/Windows_3.1x
+[5]:https://en.wikipedia.org/wiki/Windows_95
+[6]:https://en.wikipedia.org/wiki/Microsoft_Word
+[7]:https://en.wikipedia.org/wiki/My_Documents
+[8]:http://karl-voit.at/tagstore/downloads/Voit2012b.pdf
+[9]:http://karl-voit.at/tagstore/
+[10]:https://en.wikipedia.org/wiki/OS/2
+[11]:https://en.wikipedia.org/wiki/Object_Linking_and_Embedding
+[12]:https://en.wikipedia.org/wiki/NeXT
+[13]:https://en.wikipedia.org/wiki/Acme_%2528text_editor%2529
+[14]:https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
+[15]:https://en.wikipedia.org/wiki/List_of_Plan_9_applications
+[16]:https://en.wikipedia.org/wiki/Personal_wiki
+[17]:https://en.wikipedia.org/wiki/TheBrain
+[18]:https://en.wikipedia.org/wiki/Microsoft_OneNote
+[19]:../../../../tags/emacs
+[20]:https://github.com/novoid/Memacs
+[21]:https://en.wikipedia.org/wiki/Artificial_intelligence
+[22]:https://en.wikipedia.org/wiki/HAL_9000
+[23]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey
+[24]:https://en.wikipedia.org/wiki/LCARS

From a0c7669a1ea90bceb95fe87bdfa92449b9d9cc2b Mon Sep 17 00:00:00 2001
From: szcf-weiya <2215235182@qq.com>
Date: Mon, 19 Mar 2018 22:22:51 +0800
Subject: [PATCH 249/343] translated by szcf-weiya

---
 ...ps About sudo command for Linux systems.md | 217 ------------------
 ...ps About sudo command for Linux systems.md | 209 +++++++++++++++++
 2 files changed, 209 insertions(+), 217 deletions(-)
 delete mode 100644 sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md
 create mode 100644 translated/tech/20180302 10 Quick Tips About sudo command for Linux systems.md

diff --git a/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md b/sources/tech/20180302 10 Quick Tips About sudo command for Linux
systems.md deleted file mode 100644 index 3fe7d3ca49..0000000000 --- a/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md +++ /dev/null @@ -1,217 +0,0 @@ -translating by szcf-weiya - -10 Quick Tips About sudo command for Linux systems -====== - -![Linux-sudo-command-tips][1] - -### Overview - -**sudo** stands for **superuser do**. It allows authorized users to execute command as an another user. Another user can be regular user or superuser. However, most of the time we use it to execute command with elevated privileges. - -sudo command works in conjunction with security policies, default security policy is sudoers and it is configurable via **/etc/sudoers** file. Its security policies are highly extendable. One can develop and distribute their own policies as plugins. - -#### How it’s different than su - -In GNU/Linux there are two ways to run command with elevated privileges: - - * Using **su** command - * Using **sudo** command - - - -**su** stands for **switch user**. Using su, we can switch to root user and execute command. But there are few drawbacks with this approach. - - * We need to share root password with another user. - * We cannot give controlled access as root user is superuser - * We cannot audit what user is doing. - - - -sudo addresses these problems in unique way. - - 1. First of all, we don’t need to compromise root user password. Regular user uses its own password to execute command with elevated privileges. - 2. We can control access of sudo user meaning we can restrict user to execute only certain commands. - 3. In addition to this all activities of sudo user are logged hence we can always audit what actions were done. On Debian based GNU/Linux all activities are logged in **/var/log/auth.log** file. - - - -Later sections of this tutorial sheds light on these points. - -#### Hands on with sudo - -Now, we have fair understanding about sudo. Let us get our hands dirty with practical. For demonstration, I am using Ubuntu. However, behavior with another distribution should be identical. - -#### Allow sudo access - -Let us add regular user as a sudo user. In my case user’s name is linuxtechi - -1) Edit /etc/sudoers file as follows: -``` -$ sudo visudo - -``` - -2) Add below line to allow sudo access to user linuxtechi: -``` -linuxtechi ALL=(ALL) ALL - -``` - -In above command: - - * linuxtechi indicates user name - * First ALL instructs to permit sudo access from any terminal/machine - * Second (ALL) instructs sudo command to be allowed to execute as any user - * Third ALL indicates all command can be executed as root - - - -#### Execute command with elevated privileges - -To execute command with elevated privileges, just prepend sudo word to command as follows: -``` -$ sudo cat /etc/passwd - -``` - -When you execute this command, it will ask linuxtechi’s password and not root user password. - -#### Execute command as an another user - -In addition to this we can use sudo to execute command as another user. For instance, in below command, user linuxtechi executes command as a devesh user: -``` -$ sudo -u devesh whoami -[sudo] password for linuxtechi: -devesh - -``` - -#### Built in command behavior - -One of the limitation of sudo is – Shell’s built in command doesn’t work with it. 
For instance, history is built in command, if you try to execute this command with sudo then command not found error will be reported as follows: -``` -$ sudo history -[sudo] password for linuxtechi: -sudo: history: command not found - -``` - -**Access root shell** - -To overcome above problem, we can get access to root shell and execute any command from there including Shell’s built in. - -To access root shell, execute below command: -``` -$ sudo bash - -``` - -After executing this command – you will observe that prompt sign changes to pound (#) character. - -### Recipes - -In this section we’ll discuss some useful recipes which will help you to improve productivity. Most of the commands can be used to complete day-to-day task. - -#### Execute previous command as a sudo user - -Let us suppose you want to execute previous command with elevated privileges, then below trick will be useful: -``` -$ sudo !4 - -``` - -Above command will execute 4th command from history with elevated privileges. - -#### sudo command with Vim - -Many times we edit system’s configuration files and while saving we realize that we need root access to do this. Because this we may lose our changes. There is no need to get panic, we can use below command in Vim to rescue from this situation: -``` -:w !sudo tee % - -``` - -In above command: - - * Colon (:) indicates we are in Vim’s ex mode - * Exclamation (!) mark indicates that we are running shell command - * sudo and tee are the shell commands - * Percentage (%) sign indicates all lines from current line - - - -#### Execute multiple commands using sudo - -So far we have executed only single command with sudo but we can execute multiple commands with it. Just separate commands using semicolon (;) as follows: -``` -$ sudo -- bash -c 'pwd; hostname; whoami' - -``` - -In above command: - - * Double hyphen (–) stops processing of command line switches - * bash indicates shell name to be used for execution - * Commands to be executed are followed by –c option - - - -#### Run sudo command without password - -When sudo command is executed first time then it will prompt for password and by default password will be cached for next 15 minutes. However, we can override this behavior and disable password authentication using NOPASSWD keyword as follows: -``` -linuxtechi ALL=(ALL) NOPASSWD: ALL - -``` - -#### Restrict user to execute certain commands - -To provide controlled access we can restrict sudo user to execute only certain commands. For instance, below line allows execution of echo and ls commands only -``` -linuxtechi ALL=(ALL) NOPASSWD: /bin/echo /bin/ls - -``` - -#### Insights about sudo - -Let us dig more about sudo command to get insights about it. -``` -$ ls -l /usr/bin/sudo --rwsr-xr-x 1 root root 145040 Jun 13  2017 /usr/bin/sudo - -``` - -If you observe file permissions carefully, **setuid** bit is enabled on sudo. When any user runs this binary it will run with the privileges of the user that owns the file. In this case it is root user. - -To demonstrate this, we can use id command with it as follows: -``` -$ id -uid=1002(linuxtechi) gid=1002(linuxtechi) groups=1002(linuxtechi) - -``` - -When we execute id command without sudo then id of user linuxtechi will be displayed. -``` -$ sudo id -uid=0(root) gid=0(root) groups=0(root) - -``` - -But if we execute id command with sudo then id of root user will be displayed. - -### Conclusion - -Takeaway from this article is – sudo provides more controlled access to regular users. 
Using these techniques multiple users can interact with GNU/Linux in secure manner.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxtechi.com/quick-tips-sudo-command-linux-systems/
-
-作者:[Pradeep Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxtechi.com/author/pradeep/
-[1]:https://www.linuxtechi.com/wp-content/uploads/2018/03/Linux-sudo-command-tips.jpg
diff --git a/translated/tech/20180302 10 Quick Tips About sudo command for Linux systems.md b/translated/tech/20180302 10 Quick Tips About sudo command for Linux systems.md
new file mode 100644
index 0000000000..2ff06bc449
--- /dev/null
+++ b/translated/tech/20180302 10 Quick Tips About sudo command for Linux systems.md
@@ -0,0 +1,209 @@
+Linux 系统中 sudo 命令的 10 个技巧
+======
+
+![Linux-sudo-command-tips][1]
+
+### 概览
+
+**sudo** 表示 **superuser do**。它允许已授权的用户以其他用户的身份来运行命令。其他用户可以是普通用户或者超级用户。然而,大部分时候我们用它来以提升的权限运行命令。
+
+sudo 命令与安全策略配合使用,默认安全策略是 sudoers,可以通过文件 **/etc/sudoers** 来配置。其安全策略具有高度可扩展性。人们可以开发和分发他们自己的安全策略作为插件。
+
+#### 与 su 的区别
+
+在 GNU/Linux 中,有两种方式可以用提升的权限来运行命令:
+
+ * 使用 **su** 命令
+ * 使用 **sudo** 命令
+
+**su** 表示 **switch user**。使用 su,我们可以切换到 root 用户并且执行命令。但是这种方式存在一些缺点:
+
+ * 我们需要与他人共享 root 的密码。
+ * 因为 root 用户为超级用户,我们不能授予受控的访问权限。
+ * 我们无法审查用户在做什么。
+
+sudo 以独特的方式解决了这些问题。
+
+ 1. 首先,我们不需要把 root 用户的密码告诉其他人。普通用户使用自己的密码就可以用提升的权限来执行命令。
+ 2. 我们可以控制 sudo 用户的访问,这意味着我们可以限制用户只执行某些命令。
+ 3. 除此之外,sudo 用户的所有活动都会被记录下来,因此我们可以随时审查进行了哪些操作。在基于 Debian 的 GNU/Linux 中,所有活动都记录在 **/var/log/auth.log** 文件中。
+
+本教程后面的部分阐述了这些要点。
+
+#### 实际动手操作 sudo
+
+现在,我们对 sudo 有了大致的了解。让我们实际动手操作吧。为了演示,我使用 Ubuntu。但是,其它发行版本的操作应该是相同的。
+
+#### 允许 sudo 权限
+
+让我们把一个普通用户添加为 sudo 用户吧。在我的例子中,用户名为 linuxtechi。
+
+1) 按如下所示编辑 /etc/sudoers 文件:
+```
+$ sudo visudo
+
+```
+
+2) 添加以下行来允许用户 linuxtechi 有 sudo 权限:
+```
+linuxtechi ALL=(ALL) ALL
+
+```
+
+上述配置中:
+
+ * linuxtechi 表示用户名
+ * 第一个 ALL 指示允许从任何终端、机器访问 sudo
+ * 第二个 (ALL) 指示 sudo 命令被允许以任何用户身份执行
+ * 第三个 ALL 表示所有命令都可以以 root 身份执行
+
+
+#### 以提升的权限执行命令
+
+要用提升的权限执行命令,只需要在命令前加上 sudo,如下所示:
+```
+$ sudo cat /etc/passwd
+
+```
+
+当你执行这个命令时,它会询问 linuxtechi 的密码,而不是 root 用户的密码。
+
+#### 以其他用户执行命令
+
+
+除此之外,我们可以使用 sudo 以另一个用户身份执行命令。例如,在下面的命令中,用户 linuxtechi 以用户 devesh 的身份执行命令:
+```
+$ sudo -u devesh whoami
+[sudo] password for linuxtechi:
+devesh
+
+```
+
+#### 内置命令行为
+
+sudo 的一个限制是:它无法执行 Shell 的内置命令。例如,history 是内置命令,如果你试图用 sudo 执行它,那么会报告如下的命令未找到错误:
+```
+$ sudo history
+[sudo] password for linuxtechi:
+sudo: history: command not found
+
+```
+
+**访问 root shell**
+
+为了克服上述问题,我们可以访问 root shell,并在那里执行任何命令,包括 Shell 的内置命令。
+
+要访问 root shell,执行下面的命令:
+```
+$ sudo bash
+
+```
+
+执行完这个命令后,你会看到提示符变成了井号(#)字符。
+
+### 技巧
+
+这节我们将讨论一些有用的技巧,这将有助于提高生产力。大多数命令可用于完成日常任务。
+
+#### 以 sudo 用户执行之前的命令
+
+让我们假设你想用提升的权限执行之前的命令,那么下面的技巧将会很有用:
+```
+$ sudo !4
+
+```
+
+上面的命令将使用提升的权限执行历史记录中的第 4 条命令。
+
+#### 在 Vim 中使用 sudo 命令
+
+很多时候,我们在编辑系统的配置文件时,到保存时才意识到需要 root 访问权限才能完成操作,这样一来我们可能会丢失对文件所做的改动。没有必要惊慌,我们可以在 Vim 中使用下面的命令来解决这种情况:
+```
+:w !sudo tee %
+
+```
+
+上述命令中:
+
+ * 冒号 (:) 表明我们处于 Vim 的 ex 模式
+ * 感叹号 (!)
表明我们正在运行 shell 命令
+ * sudo 和 tee 都是 shell 命令
+ * 百分号 (%) 代表当前文件的文件名
+
+
+
+#### 使用 sudo 执行多个命令
+
+至今我们用 sudo 只执行了单个命令,但我们可以用它执行多个命令。只需要用分号 (;) 隔开命令,如下所示:
+```
+$ sudo -- bash -c 'pwd; hostname; whoami'
+
+```
+
+上述命令中:
+
+ * 双连字符 (--) 表示其后不再处理命令行选项
+ * bash 表示要用于执行命令的 shell 名称
+ * -c 选项后面跟着要执行的命令
+
+
+
+#### 无密码运行 sudo 命令
+
+当第一次执行 sudo 命令时,它会提示输入密码,默认情形下密码被缓存 15 分钟。但是,我们可以避免这个操作,并使用 NOPASSWD 关键字禁用密码认证,如下所示:
+```
+linuxtechi ALL=(ALL) NOPASSWD: ALL
+
+```
+
+#### 限制用户执行某些命令
+
+为了提供受控访问,我们可以限制 sudo 用户只执行某些命令。例如,下面的行只允许执行 echo 和 ls 命令:
+```
+linuxtechi ALL=(ALL) NOPASSWD: /bin/echo /bin/ls
+
+```
+
+#### 深入了解 sudo
+
+让我们进一步深入了解 sudo 命令。
+```
+$ ls -l /usr/bin/sudo
+-rwsr-xr-x 1 root root 145040 Jun 13  2017 /usr/bin/sudo
+
+```
+
+如果仔细观察文件权限,就会发现 sudo 上启用了 **setuid** 位。当任何用户运行这个二进制文件时,它将以拥有该文件的用户权限运行。在所示情形下,它是 root 用户。
+
+为了演示这一点,我们可以使用 id 命令,如下所示:
+```
+$ id
+uid=1002(linuxtechi) gid=1002(linuxtechi) groups=1002(linuxtechi)
+
+```
+
+当我们不使用 sudo 执行 id 命令时,将显示用户 linuxtechi 的 id。
+```
+$ sudo id
+uid=0(root) gid=0(root) groups=0(root)
+
+```
+
+但是,如果我们使用 sudo 执行 id 命令,则会显示 root 用户的 id。
+
+### 结论
+
+从这篇文章可以看出,sudo 为普通用户提供了受控的访问方式。使用这些技术,多个用户可以安全地与 GNU/Linux 进行交互。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/quick-tips-sudo-command-linux-systems/
+
+作者:[Pradeep Kumar][a]
+译者:[szcf-weiya](https://github.com/szcf-weiya)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxtechi.com/author/pradeep/
+[1]:https://www.linuxtechi.com/wp-content/uploads/2018/03/Linux-sudo-command-tips.jpg
From 26a51ccc37838b8ddea2aa8dff871e08f371aa8a Mon Sep 17 00:00:00 2001
From: szcf-weiya <2215235182@qq.com>
Date: Mon, 19 Mar 2018 22:33:10 +0800
Subject: [PATCH 250/343] translating by szcf-weiya

---
 ...0203 API Star- Python 3 API Framework - Polyglot.Ninja().md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sources/tech/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md b/sources/tech/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md
index 10bb70fe72..f1b199eb44 100644
--- a/sources/tech/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md
+++ b/sources/tech/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md
@@ -1,3 +1,6 @@
+translating by szcf-weiya
+
+
 API Star: Python 3 API Framework – Polyglot.Ninja()
 ======
 For building quick APIs in Python, I have mostly depended on [Flask][1]. Recently I came across a new API framework for Python 3 named “API Star” which seemed really interesting to me for several reasons. Firstly the framework embraces modern Python features like type hints and asyncio. And then it goes ahead and uses these features to provide awesome development experience for us, the developers. We will get into those features soon but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.
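
针对上文 sudo 技巧中修改 sudoers 规则(例如 NOPASSWD 和限制可执行命令)的做法,这里给出一个简单的检查示意。下面用到的都是 sudo 和 visudo 自带的标准选项;其中的用户名沿用文中示例 linuxtechi,仅作说明,请按实际环境调整:

```
$ sudo visudo -c          # 检查 /etc/sudoers 以及 /etc/sudoers.d/ 下各文件的语法
$ sudo -l                 # 列出当前用户可以通过 sudo 执行的命令
$ sudo -l -U linuxtechi   # 查看指定用户(此处沿用文中的 linuxtechi)的 sudo 权限
```

如果检查报告语法错误,visudo 会指出出错的文件和行号,应先修复问题再继续,以免 sudoers 文件损坏后无法再使用 sudo。
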
From 5e68a3920abd3e2173d79e89a866274be1cd6a61 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 20 Mar 2018 08:58:01 +0800 Subject: [PATCH 251/343] translated --- .../tech/20180215 What is a Linux -oops.md | 70 ------------------- .../tech/20180215 What is a Linux -oops.md | 67 ++++++++++++++++++ 2 files changed, 67 insertions(+), 70 deletions(-) delete mode 100644 sources/tech/20180215 What is a Linux -oops.md create mode 100644 translated/tech/20180215 What is a Linux -oops.md diff --git a/sources/tech/20180215 What is a Linux -oops.md b/sources/tech/20180215 What is a Linux -oops.md deleted file mode 100644 index 3238ca34de..0000000000 --- a/sources/tech/20180215 What is a Linux -oops.md +++ /dev/null @@ -1,70 +0,0 @@ -translating----geekpi - -What is a Linux 'oops'? -====== -If you check the processes running on your Linux systems, you might be curious about one called "kerneloops." And that’s “kernel oops,” not “kerne loops” just in case you didn’t parse that correctly. - -Put very bluntly, an “oops” is a deviation from correct behavior on the part of the Linux kernel. Did you do something wrong? Probably not. But something did. And the process that did something wrong has probably at least just been summarily knocked off the CPU. At worst, the kernel may have panicked and abruptly shut the system down. - -For the record, “oops” is NOT an acronym. It doesn’t stand for something like “object-oriented programming and systems” or “out of procedural specs”; it actually means “oops” like you just dropped your glass of wine or stepped on your cat. Oops! The plural of "oops" is "oopses." - -An oops means that something running on the system has violated the kernel’s rules about proper behavior. Maybe the code tried to take a code path that was not allowed or use an invalid pointer. Whatever it was, the kernel — always on the lookout for process misbehavior — most likely will have stopped the particular process in its tracks and written some messages about what it did to the console, to /var/log/dmesg or the /var/log/kern.log file. - -An oops can be caused by the kernel itself or by some process that tries to get the kernel to violate its rules about how things are allowed to run on the system and what they're allowed to do. - -An oops will generate a crash signature that can help kernel developers figure out what went wrong and improve the quality of their code. - -The kerneloops process running on your system will probably look like this: -``` -kernoops 881 1 0 Feb11 ? 00:00:01 /usr/sbin/kerneloops - -``` - -You might notice that the process isn't run by root, but by a user named "kernoops" and that it's accumulated extremely little run time. In fact, the only task assigned to this particular user is running kerneloops. -``` -$ sudo grep kernoops /etc/passwd -kernoops:x:113:65534:Kernel Oops Tracking Daemon,,,:/:/bin/false - -``` - -If your Linux system isn't one that ships with kerneloops (like Debian), you might consider adding it. Check out this [Debian page][1] for more information. - -### When should you be concerned about an oops? - -An oops is not a big deal, except when it is. It depends in part on the role that the particular process was playing. It also depends on the class of oops. - -Some oopses are so severe that they result in system panics. Technically speaking, a panic is a subset of the oops (i.e., the more serious of the oopses). 
A panic occurs when a problem detected by the kernel is bad enough that the kernel decides that it (the kernel) must stop running immediately to prevent data loss or other damage to the system. So, the system then needs to be halted and rebooted to keep any inconsistencies from making it unusable or unreliable. So a system that panics is actually trying to protect itself from irrevocable damage. - -In short, all panics are oops, but not all oops are panics. - -The /var/log/kern.log and related rotated logs (/var/log/kern.log.1, /var/log/kern.log.2 etc.) contain the logs produced by the kernel and handled by syslog. - -The kerneloops program collects and by default submits information on the problems it runs into where it can be analyzed and presented to kernel developers. Configuration details for this process are specified in the /etc/kerneloops.conf file. You can look at the settings easily with the command shown below: -``` -$ sudo cat /etc/kerneloops.conf | grep -v ^# | grep -v ^$ -[sudo] password for shs: -allow-submit = ask -allow-pass-on = yes -submit-url = http://oops.kernel.org/submitoops.php -log-file = /var/log/kern.log -submit-pipe = /usr/share/apport/kernel_oops - -``` - -In the above (default) settings, information on kernel problems can be submitted, but the user is asked for permission. If set to allow-submit = always, the user will not be asked. - -Debugging kernel problems is one of the finer arts of working with Linux systems. Fortunately, most Linux users seldom or never experience oops or panics. Still, it's nice to know what processes like kerneloops are doing on your system and to understand what might be reported and where when your system runs into a serious kernel violation. - - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3254778/linux/what-is-a-linux-oops.html - -作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[1]:https://packages.debian.org/stretch/kerneloops diff --git a/translated/tech/20180215 What is a Linux -oops.md b/translated/tech/20180215 What is a Linux -oops.md new file mode 100644 index 0000000000..744e7c2b28 --- /dev/null +++ b/translated/tech/20180215 What is a Linux -oops.md @@ -0,0 +1,67 @@ +什么是 Linux “oops”? +====== +如果你检查你的 Linux 系统上运行的进程,你可能会对一个叫做 “kerneloops” 的进程感到好奇。以防万一你没有正确认识,它是 “kernel oops”,而不是 “kerne loops”。 + +坦率地说,“oops” 是 Linux 内核的一部分出现了偏差。你有做错了什么么?可能没有。但发生了一些事情。而那个错误的进程可能已经被 CPU 结束。最糟糕的是,内核可能会报错并突然关闭系统。 + +对于记录,“oops” 不是首字母缩略词。它不代表像“面向对象的编程和系统” (object-oriented programming and systems) 或“超出程序规范” (out of procedural specs) 之类的东西。它实际上就是“哎呀” (oops),就像你刚掉下一杯酒或踩在你的猫上。哎呀! “oops” 的复数是 “oopses”。 + +oops 意味着系统上运行的某些东西违反了内核有关正确行为的规则。也许代码尝试采取不允许的代码路径或使用无效指针。不管它是什么,内核 - 总是在寻找进程的错误行为 - 很可能会阻止特定进程,并将它做了什么的消息写入控制台、 /var/log/dmesg 或 /var/log/kern.log 中。 + +oops 可能是由内核本身引起的,也可能是某些进程试图让内核违反在系统上能做的事以及它们被允许做的事。 + +oops 将生成一个崩溃签名,这可以帮助内核开发人员找出错误并提高代码质量。 + +系统上运行的 kerneloops 进程可能如下所示: +``` +kernoops 881 1 0 Feb11 ? 00:00:01 /usr/sbin/kerneloops + +``` + +你可能会注意到该进程不是由 root 运行的,而是由名为 “kernoops” 的用户运行的,并且它的运行时间极少。实际上,分配给这个特定用户的唯一任务是运行 kerneloops。 +``` +$ sudo grep kernoops /etc/passwd +kernoops:x:113:65534:Kernel Oops Tracking Daemon,,,:/:/bin/false + +``` + +如果你的 Linux 系统不带有 kerneloops(比如 Debian),你可以考虑添加它。查看这个[ Debian 页面][1]了解更多信息。 + +### 什么时候应该关注 oops? 
+ +除非是预期的,oops 没什么大不了的。它在一定程度上取决于特定进程所扮演的角色。它也取决于 oops 的类别。 + +有些 oops 很严重,会导致系统恐慌。从技术上讲,系统恐慌是 oops 的一个子集(即更严重的 oops)。当内核检测到的问题足够严重以至于内核认为它(内核)必须立即停止运行以防止数据丢失或对系统造成其他损害时会出现。因此,系统需要暂停并重新启动,以防止不一致导致不可用或不可靠。所以系统恐慌实际上是为了保护自己免受不可挽回的损害。 + +总之,所有的内核恐慌都是 oops,但并不是所有的 oops 都是内核恐慌。 + +/var/log/kern.log 和相关的轮转日志(/var/log/kern.log.1、/var/log/kern.log.2 等)包含由内核生成并由 syslog 处理的日志。 + +kerneloops 程序收集并默认将错误信息提交到,在那里它会被分析并呈现给内核开发者。此进程的配置详细信息在 /etc/kerneloops.conf 文件中指定。你可以使用下面的命令轻松查看设置: +``` +$ sudo cat /etc/kerneloops.conf | grep -v ^# | grep -v ^$ +[sudo] password for shs: +allow-submit = ask +allow-pass-on = yes +submit-url = http://oops.kernel.org/submitoops.php +log-file = /var/log/kern.log +submit-pipe = /usr/share/apport/kernel_oops + +``` + +在上面的(默认)设置中,内核问题可以被提交,但要求用户获得许可。如果设置为 allow-submit = always,则不会询问用户。 + +调试内核问题是使用 Linux 系统的更高级技巧之一。幸运的是,大多数 Linux 用户很少或从没有经历过 oops 或内核恐慌。不过,知道 kerneloops 这样的进程在系统中执行什么操作,了解可能会报告什么以及系统何时遇到严重的内核冲突也是很好的。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3254778/linux/what-is-a-linux-oops.html + +作者:[Sandra Henry-Stocker][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]:https://packages.debian.org/stretch/kerneloops From 137a6bf629e2a5eedbd013aa5371fc2772f76b1a Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 20 Mar 2018 09:01:46 +0800 Subject: [PATCH 252/343] translating --- ...Look at the Arch Based Indie Linux Distribution- MagpieOS.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md b/sources/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md index a850a8fd33..8a1d32c586 100644 --- a/sources/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md +++ b/sources/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md @@ -1,3 +1,5 @@ +translating---geekpi + Quick Look at the Arch Based Indie Linux Distribution: MagpieOS ====== Most of the Linux distros that are in use today are either created and developed in the US or Europe. A young developer from Bangladesh wants to change all that. From edc4198a3b21c37bb24bebe617782883c16a1ed9 Mon Sep 17 00:00:00 2001 From: Snowden Fu Date: Tue, 20 Mar 2018 10:47:55 +0800 Subject: [PATCH 253/343] 2018-03-20 --- sources/talk/20180201 How I coined the term open source.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180201 How I coined the term open source.md b/sources/talk/20180201 How I coined the term open source.md index 0f2c6a7852..7d41259cc3 100644 --- a/sources/talk/20180201 How I coined the term open source.md +++ b/sources/talk/20180201 How I coined the term open source.md @@ -73,7 +73,7 @@ Todd 强烈同意需要新的术语并提供协助推广它。这很有帮助, ### 关于作者 - [![photo of Christine Peterson](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/cp2016_crop2_185.jpg?itok=vUkSjFig)][13] Christine Peterson - Christine Peterson 撰写,举办讲座 lectures, and briefs the media on coming powerful technologies, especially nanotechnology, artificial intelligence, and longevity. She is Cofounder and Past President of Foresight Institute, the leading nanotech public interest group. 
Foresight educates the public, technical community, and policymakers on coming powerful technologies and how to guide their long-term impact. She serves on the Advisory Board of the [Machine Intelligence... ][2][more about Christine Peterson][3][More about me][4] + [![photo of Christine Peterson](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/cp2016_crop2_185.jpg?itok=vUkSjFig)][13] Christine Peterson - Christine Peterson 撰写,举办讲座,并向媒体介绍未来强大的技术,特别是纳米技术,人工智能和长寿。她是著名的纳米科技公共利益集团的创始人和过去的前瞻技术协会主席。前瞻向公众、技术团体和政策制定者提供未来强大的技术的教育以及告诉它是如何引导他们的长期影响。她服务于 [机器智能 ][2]咨询委员会……[更多关于 Christine Peterson][3][关于我][4] -------------------------------------------------------------------------------- From c1d6e1fe39d914822d046b292bdbb49f15cb2e5c Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 20 Mar 2018 12:56:26 +0800 Subject: [PATCH 254/343] PRF:20180131 10 things I love about Vue.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @yizhouyan 翻译的很棒! --- .../20180131 10 things I love about Vue.md | 86 +++++++++---------- 1 file changed, 42 insertions(+), 44 deletions(-) diff --git a/translated/tech/20180131 10 things I love about Vue.md b/translated/tech/20180131 10 things I love about Vue.md index 16fae2d64f..d36da57312 100644 --- a/translated/tech/20180131 10 things I love about Vue.md +++ b/translated/tech/20180131 10 things I love about Vue.md @@ -1,19 +1,18 @@ -#我喜欢Vue的10个方面 +我喜欢 Vue 的 10 个方面 ============================================================ ![](https://cdn-images-1.medium.com/max/1600/1*X4ipeKVYzmY2M3UPYgUYuA.png) +我喜欢 Vue。当我在 2016 年第一次接触它时,也许那时我已经对 JavaScript 框架感到疲劳了,因为我已经具有Backbone、Angular、React 等框架的经验,没有太多的热情去尝试一个新的框架。直到我在 Hacker News 上读到一份评论,其描述 Vue 是类似于“新 jQuery” 的 JavaScript 框架,从而激发了我的好奇心。在那之前,我已经相当满意 React 这个框架,它是一个很好的框架,建立于可靠的设计原则之上,围绕着视图模板、虚拟 DOM 和状态响应等技术。而 Vue 也提供了这些重要的内容。 +在这篇文章中,我旨在解释为什么 Vue 适合我,为什么在上文中那些我尝试过的框架中选择它。也许你将同意我的一些观点,但至少我希望能够给大家使用 Vue 开发现代 JavaScript 应用一些灵感。 +### 1、 极少的模板语法 -我喜欢Vue。当我在2016年第一次接触它时,也许那时我已有了JavaScript框架疲劳的观点,因为我已经具有Backbone, Angular, React等框架的经验 -而且我也没有过度的热情去尝试一个新的框架。直到我在hacker news上读到一份评论,其描述Vue是类似于“新jquery”的JavaScript框架,从而激发了我的好奇心。在那之前,我已经相当满意React这个框架,它是一个很好的框架,基于可靠的设计原则,围绕着视图模板,虚拟DOM和状态响应等技术。而Vue也提供了这些重要的内容。在这篇文章中,我旨在解释为什么Vue适合我,为什么在上文中那些我尝试过的框架中选择它。也许你将同意我的一些观点,但至少我希望能够给大家关于使用Vue开发现代JavaScript应用的一些灵感。 +Vue 默认提供的视图模板语法是极小的、简洁的和可扩展的。像其他 Vue 部分一样,可以很简单的使用类似 JSX 一样语法,而不使用标准的模板语法(甚至有官方文档说明了如何做),但是我觉得没必要这么做。JSX 有好的方面,也有一些有依据的批评,如混淆了 JavaScript 和 HTML,使得很容易导致在模板中出现复杂的代码,而本来应该分开写在不同的地方的。 -##1\. 极少的模板语法 +Vue 没有使用标准的 HTML 来编写视图模板,而是使用极少的模板语法来处理简单的事情,如基于视图数据迭代创建元素。 -Vue默认提供的视图模板语法是极小的,简洁的和可扩展的。像其他Vue部分一样,可以很简单的使用类似JSX一样语法而不使用标准的模板语法(甚至有官方文档说明如何这样做),但是我觉得没必要这么做。关于JSX有好的方面,也有一些有依据的批评,如混淆了JavaScript和HTML,使得很容易在模板中编写出复杂的代码,而本来应该分开写在不同的地方的。 - -Vue没有使用标准的HTML来编写视图模板,而是使用极少的模板语法来处理简单的事情,如基于视图数据迭代创建元素。 ```