diff --git a/translated/talk/20190108 Hacking math education with Python.md b/published/20190108 Hacking math education with Python.md
similarity index 93%
rename from translated/talk/20190108 Hacking math education with Python.md
rename to published/20190108 Hacking math education with Python.md
index 120e56c521..0ab5baca72 100644
--- a/translated/talk/20190108 Hacking math education with Python.md
+++ b/published/20190108 Hacking math education with Python.md
@@ -1,14 +1,15 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10527-1.html)
[#]: subject: (Hacking math education with Python)
[#]: via: (https://opensource.com/article/19/1/hacking-math)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
将 Python 结合到数学教育中
======
+
> 身兼教师、开发者、作家数职的 Peter Farrell 来讲述为什么使用 Python 来讲数学课会比传统方法更加好。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl)
@@ -19,11 +20,11 @@
Peter 的灵感来源于 Logo 语言之父 [Seymour Papert][2],他的 Logo 语言现在还存在于 Python 的 [Turtle 模块][3]中。Logo 语言中的海龟形象让 Peter 喜欢上了 Python,并且进一步将 Python 应用到数学教学中。
-Peter 在他的新书《[Python 数学奇遇记][5]》中分享了他的方法:“图文并茂地指导如何用代码探索数学”。因此我最近对他进行了一次采访,向他了解更多这方面的情况。
+Peter 在他的新书《[Python 数学奇遇记][5]》中分享了他的方法:“图文并茂地指导如何用代码探索数学”。因此我最近对他进行了一次采访,向他了解更多这方面的情况。
-**Don Watkins(译者注:本文作者):** 你的教学背景是什么?
+**Don Watkins(LCTT 译注:本文作者):** 你的教学背景是什么?
-**Peter Farrell:** 我曾经当过八年的数学老师,之后又教了十年的数学。我还在当老师的时候,就阅读过 Papert 的 《[头脑风暴][6]》并从中受到了启发,将 Logo 语言和海龟引入到了我所有的数学课上。
+**Peter Farrell:** 我曾经当过八年的数学老师,之后又做了十年的数学私教。我还在当老师的时候,就阅读过 Papert 的 《[头脑风暴][6]》并从中受到了启发,将 Logo 语言和海龟引入到了我所有的数学课上。
**DW:** 你为什么开始使用 Python 呢?
@@ -68,7 +69,7 @@ via: https://opensource.com/article/19/1/hacking-math
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/talk/20190110 Toyota Motors and its Linux Journey.md b/sources/talk/20190110 Toyota Motors and its Linux Journey.md
index 1d76ffe0b6..ef3afd38a0 100644
--- a/sources/talk/20190110 Toyota Motors and its Linux Journey.md
+++ b/sources/talk/20190110 Toyota Motors and its Linux Journey.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -7,12 +7,11 @@
[#]: via: (https://itsfoss.com/toyota-motors-linux-journey)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-Toyota Motors and its Linux Journey
+丰田汽车的 Linux 之旅
======
-**This is a community submission from It’s FOSS reader Malcolm Dean.**
-
I spoke with Brian R Lyons of TMNA Toyota Motor Corp North America about the implementation of Linux in Toyota and Lexus infotainment systems. I came to find out there is an Automotive Grade Linux (AGL) being used by several automobile manufacturers.
+我之前跟丰田汽车北美分公司的 Brian R. Lyons 聊过天,话题是关于 Linux 在丰田和雷克萨斯汽车的信息娱乐系统上的实施方案。我发现有几家汽车制造商都在使用 Automotive Grade Linux(AGL)。
I put together a short article comprising my discussion with Brian about Toyota and its tryst with Linux. I hope that Linux enthusiasts will like this quick little chat.
diff --git a/sources/talk/20190131 4 confusing open source license scenarios and how to navigate them.md b/sources/talk/20190131 4 confusing open source license scenarios and how to navigate them.md
new file mode 100644
index 0000000000..fd93cdd9a6
--- /dev/null
+++ b/sources/talk/20190131 4 confusing open source license scenarios and how to navigate them.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 confusing open source license scenarios and how to navigate them)
+[#]: via: (https://opensource.com/article/19/1/open-source-license-scenarios)
+[#]: author: (P.Kevin Nelson https://opensource.com/users/pkn4645)
+
+4 confusing open source license scenarios and how to navigate them
+======
+
+Before you begin using a piece of software, make sure you fully understand the terms of its license.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_openisopen.png?itok=FjmDxIaL)
+
+As an attorney running an open source program office for a Fortune 500 corporation, I am often asked to look into a product or component where there seems to be confusion as to the licensing model. Under what terms can the code be used, and what obligations run with such use? This often happens when the code or the associated project community does not clearly indicate availability under a [commonly accepted open source license][1]. The confusion is understandable as copyright owners often evolve their products and services in different directions in response to market demands. Here are some of the scenarios I commonly discover and how you can approach each situation.
+
+### Multiple licenses
+
+The product is truly open source with an [Open Source Initiative][2] (OSI) open source-approved license, but has changed licensing models at least once if not multiple times throughout its lifespan. This scenario is fairly easy to address; the user simply has to decide if the latest version with its attendant features and bug fixes is worth the conditions to be compliant with the current license. If so, great. If not, then the user can move back in time to a version released under a more palatable license and start from that fork, understanding that there may not be an active community for support and continued development.
+
+### Old open source
+
+This is a variation on the multiple licenses model with the twist that current licensing is proprietary only. You have to use an older version to take advantage of open source terms and conditions. Most often, the product was released under a valid open source license up to a certain point in its development, but then the copyright holder chose to evolve the code in a proprietary fashion and offer new releases only under proprietary commercial licensing terms. So, if you want the newest capabilities, you have to purchase a proprietary license, and you most likely will not get a copy of the underlying source code. Most often the open source community that grew up around the original code line falls away once the members understand there will be no further commitment from the copyright holder to the open source branch. While this scenario is understandable from the copyright holder's perspective, it can be seen as "burning a bridge" to the open source community. It would be very difficult to again leverage the benefits of the open source contribution models once a project owner follows this path.
+
+### Open core
+
+By far the most common discovery is that a product has both an open source-licensed "community edition" and a proprietary-licensed commercial offering, commonly referred to as open core. This is often encouraging to potential consumers, as it gives them a "try before you buy" option or even a chance to influence both versions of the product by becoming an active member of the community. I usually encourage clients to begin with the community version, get involved, and see what they can achieve. Then, if the product becomes a crucial part of their business plan, they have the option to upgrade to the proprietary level at any time.
+
+### Freemium
+
+The component is not open source at all, but instead it is released under some version of the "freemium" model. A version with restricted or time-limited functionality can be downloaded with no immediate purchase required. However, since the source code is usually not provided and its accompanying license does not allow perpetual use, the creation of derivative works, nor further distribution, it is definitely not open source. In this scenario, it is usually best to pass unless you are prepared to purchase a proprietary license and accept all attendant terms and conditions of use. Users are often the most disappointed in this outcome as it has somewhat of a deceptive feel.
+
+### OSI compliant
+
+Of course, the happy path I haven't mentioned is to discover the project has a single, clear, OSI-compliant license. In those situations, using the open source software is as easy as downloading it and proceeding within the terms of appropriate use.
+
+Each of the more complex scenarios described above can present problems to potential development projects, but consultation with skilled procurement or intellectual property professionals with regard to licensing lineage can reveal excellent opportunities.
+
+An earlier version of this article was published on [OSS Law][3] and is republished with the author's permission.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/open-source-license-scenarios
+
+作者:[P.Kevin Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/pkn4645
+[b]: https://github.com/lujun9972
+[1]: https://opensource.org/licenses
+[2]: https://opensource.org/licenses/category
+[3]: http://www.pknlaw.com/2017/06/i-thought-that-was-open-source.html
diff --git a/sources/talk/20190131 OOP Before OOP with Simula.md b/sources/talk/20190131 OOP Before OOP with Simula.md
new file mode 100644
index 0000000000..cae9d9bd3a
--- /dev/null
+++ b/sources/talk/20190131 OOP Before OOP with Simula.md
@@ -0,0 +1,203 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (OOP Before OOP with Simula)
+[#]: via: (https://twobithistory.org/2019/01/31/simula.html)
+[#]: author: (Sinclair Target https://twobithistory.org)
+
+OOP Before OOP with Simula
+======
+
+Imagine that you are sitting on the grassy bank of a river. Ahead of you, the water flows past swiftly. The afternoon sun has put you in an idle, philosophical mood, and you begin to wonder whether the river in front of you really exists at all. Sure, large volumes of water are going by only a few feet away. But what is this thing that you are calling a “river”? After all, the water you see is here and then gone, to be replaced only by more and different water. It doesn’t seem like the word “river” refers to any fixed thing in front of you at all.
+
+In 2009, Rich Hickey, the creator of Clojure, gave [an excellent talk][1] about why this philosophical quandary poses a problem for the object-oriented programming paradigm. He argues that we think of an object in a computer program the same way we think of a river—we imagine that the object has a fixed identity, even though many or all of the object’s properties will change over time. Doing this is a mistake, because we have no way of distinguishing between an object instance in one state and the same object instance in another state. We have no explicit notion of time in our programs. We just breezily use the same name everywhere and hope that the object is in the state we expect it to be in when we reference it. Inevitably, we write bugs.
+
+The solution, Hickey concludes, is that we ought to model the world not as a collection of mutable objects but as a collection of processes acting on immutable data. We should think of each object as a “river” of causally related states. In sum, you should use a functional language like Clojure.
+
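+To make that abstract point concrete, here is a tiny sketch (in Python, purely for illustration; Hickey would of course reach for Clojure) of the difference between mutating one named object in place and modeling its history as an explicit sequence of states:
+
+```
+# The usual object-oriented habit: one name, state overwritten in place.
+account = {"owner": "Ada", "balance": 100}
+account["balance"] -= 30           # the earlier state is simply gone
+
+# Hickey's alternative: each change produces a new value instead of
+# overwriting the old one, so the "river" becomes an explicit sequence
+# of causally related states.
+history = [{"owner": "Ada", "balance": 100}]
+history.append({**history[-1], "balance": history[-1]["balance"] - 30})
+
+print(account)     # {'owner': 'Ada', 'balance': 70}
+print(history)     # both the old state and the new state are still there
+```
+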
+![][2]
+The author, on a hike, pondering the ontological commitments
+of object-oriented programming.
+
+Since Hickey gave his talk in 2009, interest in functional programming languages has grown, and functional programming idioms have found their way into the most popular object-oriented languages. Even so, most programmers continue to instantiate objects and mutate them in place every day. And they have been doing it for so long that it is hard to imagine that programming could ever look different.
+
+I wanted to write an article about Simula and imagined that it would mostly be about when and how object-oriented constructs we are familiar with today were added to the language. But I think the more interesting story is about how Simula was originally so unlike modern object-oriented programming languages. This shouldn’t be a surprise, because the object-oriented paradigm we know now did not spring into existence fully formed. There were two major versions of Simula: Simula I and Simula 67. Simula 67 brought the world classes, class hierarchies, and virtual methods. But Simula I was a first draft that experimented with other ideas about how data and procedures could be bundled together. The Simula I model is not a functional model like the one Hickey proposes, but it does focus on processes that unfold over time rather than objects with hidden state that interact with each other. Had Simula 67 stuck with more of Simula I’s ideas, the object-oriented paradigm we know today might have looked very different indeed—and that contingency should teach us to be wary of assuming that the current paradigm will dominate forever.
+
+### Simula 0 Through 67
+
+Simula was created by two Norwegians, Kristen Nygaard and Ole-Johan Dahl.
+
+In the late 1950s, Nygaard was employed by the Norwegian Defense Research Establishment (NDRE), a research institute affiliated with the Norwegian military. While there, he developed Monte Carlo simulations used for nuclear reactor design and operations research. These simulations were at first done by hand and then eventually programmed and run on a Ferranti Mercury. Nygaard soon found that he wanted a higher-level way to describe these simulations to a computer.
+
+The kind of simulation that Nygaard commonly developed is known as a “discrete event model.” The simulation captures how a sequence of events changes the state of a system over time—but the important property here is that the simulation can jump from one event to the next, since the events are discrete and nothing changes in the system between events. This kind of modeling, according to a paper that Nygaard and Dahl presented about Simula in 1966, was increasingly being used to analyze “nerve networks, communication systems, traffic flow, production systems, administrative systems, social systems, etc.” So Nygaard thought that other people might want a higher-level way to describe these simulations too. He began looking for someone who could help him implement what he called his “Simulation Language” or “Monte Carlo Compiler.”
+
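+To see what “jumping from one event to the next” means in practice, here is a minimal sketch in Python (an anachronistic illustration, not anything Nygaard would have used, with invented events): the whole simulation reduces to a queue of timestamped events and a clock that leaps straight to whichever event is due next.
+
+```
+import heapq
+
+# A timeline of pending events; the events and times are made up.
+events = [(3.2, "second order arrives"),
+          (0.0, "first order arrives"),
+          (12.5, "machine finishes its job")]
+heapq.heapify(events)
+
+clock = 0.0
+while events:
+    # Jump straight to the next event; nothing is simulated in between.
+    clock, what = heapq.heappop(events)
+    print(f"t = {clock:5.1f}: {what}")
+```
+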
+Dahl, who had also been employed by NDRE, where he had worked on language design, came aboard at this point to play Wozniak to Nygaard’s Jobs. Over the next year or so, Nygaard and Dahl worked to develop what has been called “Simula 0.” This early version of the language was going to be merely a modest extension to ALGOL 60, and the plan was to implement it as a preprocessor. The language was then much less abstract than what came later. The primary language constructs were “stations” and “customers.” These could be used to model certain discrete event networks; Nygaard and Dahl give an example simulating airport departures. But Nygaard and Dahl eventually came up with a more general language construct that could represent both “stations” and “customers” and also model a wider range of simulations. This was the first of two major generalizations that took Simula from being an application-specific ALGOL package to a general-purpose programming language.
+
+In Simula I, there were no “stations” or “customers,” but these could be recreated using “processes.” A process was a bundle of data attributes associated with a single action known as the process’ operating rule. You might think of a process as an object with only a single method, called something like `run()`. This analogy is imperfect though, because each process’ operating rule could be suspended or resumed at any time—the operating rules were a kind of coroutine. A Simula I program would model a system as a set of processes that conceptually all ran in parallel. Only one process could actually be “current” at any time, but once a process suspended itself the next queued process would automatically take over. As the simulation ran, behind the scenes, Simula would keep a timeline of “event notices” that tracked when each process should be resumed. In order to resume a suspended process, Simula needed to keep track of multiple call stacks. This meant that Simula could no longer be an ALGOL preprocessor, because ALGOL had only one call stack. Nygaard and Dahl were committed to writing their own compiler.
+
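+Here is a loose sketch of that process model, again in Python rather than Simula, extending the little event loop above: each process’ operating rule is a generator that suspends itself by yielding how long it wants to wait, and a small scheduler plays the role of Simula’s timeline of event notices, resuming whichever process is due next. The process and its timings are invented for illustration.
+
+```
+import heapq
+
+def order(name, service_time):
+    # The "operating rule" of a process: it can suspend itself mid-action
+    # by yielding the delay after which it should be resumed.
+    print(f"{name}: looking for a free machine")
+    yield service_time
+    print(f"{name}: done, releasing the machine")
+
+def simulate(processes):
+    # The timeline of "event notices": (resume time, tie-breaker, process).
+    notices = [(0.0, i, p) for i, p in enumerate(processes)]
+    heapq.heapify(notices)
+    while notices:
+        clock, i, process = heapq.heappop(notices)
+        try:
+            delay = next(process)      # resume the suspended operating rule
+            heapq.heappush(notices, (clock + delay, i, process))
+        except StopIteration:
+            pass                       # this process has run to completion
+
+simulate([order("Order A", 4.0), order("Order B", 2.5)])
+```
+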
+In their paper introducing this system, Nygaard and Dahl illustrate its use by implementing a simulation of a factory with a limited number of machines that can serve orders. The process here is the order, which starts by looking for an available machine, suspends itself to wait for one if none are available, and then runs to completion once a free machine is found. There is a definition of the order process that is then used to instantiate several different order instances, but no methods are ever called on these instances. The main part of the program just creates the processes and sets them running.
+
+The first Simula I compiler was finished in 1965. The language grew popular at the Norwegian Computer Center, where Nygaard and Dahl had gone to work after leaving NDRE. Implementations of Simula I were made available to UNIVAC users and to Burroughs B5500 users. Nygaard and Dahl did a consulting deal with a Swedish company called ASEA that involved using Simula to run job shop simulations. But Nygaard and Dahl soon realized that Simula could be used to write programs that had nothing to do with simulation at all.
+
+Stein Krogdahl, a professor at the University of Oslo who has written about the history of Simula, claims that “the spark that really made the development of a new general-purpose language take off” was [a paper called “Record Handling”][3] by the British computer scientist C.A.R. Hoare. If you read Hoare’s paper now, this is easy to believe. I’m surprised that you don’t hear Hoare’s name more often when people talk about the history of object-oriented languages. Consider this excerpt from his paper:
+
+> The proposal envisages the existence inside the computer during the execution of the program, of an arbitrary number of records, each of which represents some object which is of past, present or future interest to the programmer. The program keeps dynamic control of the number of records in existence, and can create new records or destroy existing ones in accordance with the requirements of the task in hand.
+
+> Each record in the computer must belong to one of a limited number of disjoint record classes; the programmer may declare as many record classes as he requires, and he associates with each class an identifier to name it. A record class name may be thought of as a common generic term like “cow,” “table,” or “house” and the records which belong to these classes represent the individual cows, tables, and houses.
+
+Hoare does not mention subclasses in this particular paper, but Dahl credits him with introducing Nygaard and himself to the concept. Nygaard and Dahl had noticed that processes in Simula I often had common elements. Using a superclass to implement those common elements would be convenient. This also raised the possibility that the “process” idea itself could be implemented as a superclass, meaning that not every class had to be a process with a single operating rule. This then was the second great generalization that would make Simula 67 a truly general-purpose programming language. It was such a shift of focus that Nygaard and Dahl briefly considered changing the name of the language so that people would know it was not just for simulations. But “Simula” was too much of an established name for them to risk it.
+
+In 1967, Nygaard and Dahl signed a contract with Control Data to implement this new version of Simula, to be known as Simula 67. A conference was held in June, where people from Control Data, the University of Oslo, and the Norwegian Computing Center met with Nygaard and Dahl to establish a specification for this new language. This conference eventually led to a document called the [“Simula 67 Common Base Language,”][4] which defined the language going forward.
+
+Several different vendors would make Simula 67 compilers. The Association of Simula Users (ASU) was founded and began holding annual conferences. Simula 67 soon had users in more than 23 different countries.
+
+### 21st Century Simula
+
+Simula is remembered now because of its influence on the languages that have supplanted it. You would be hard-pressed to find anyone still using Simula to write application programs. But that doesn’t mean that Simula is an entirely dead language. You can still compile and run Simula programs on your computer today, thanks to [GNU cim][5].
+
+The cim compiler implements the Simula standard as it was after a revision in 1986. But this is mostly the Simula 67 version of the language. You can write classes, subclasses, and virtual methods just as you would have with Simula 67. So you could create a small object-oriented program that looks a lot like something you could easily write in Python or Ruby:
+
+```
+! dogs.sim ;
+Begin
+ Class Dog;
+ ! The cim compiler requires virtual procedures to be fully specified ;
+ Virtual: Procedure bark Is Procedure bark;;
+ Begin
+ Procedure bark;
+ Begin
+ OutText("Woof!");
+ OutImage; ! Outputs a newline ;
+ End;
+ End;
+
+ Dog Class Chihuahua; ! Chihuahua is "prefixed" by Dog ;
+ Begin
+ Procedure bark;
+ Begin
+ OutText("Yap yap yap yap yap yap");
+ OutImage;
+ End;
+ End;
+
+ Ref (Dog) d;
+ d :- new Chihuahua; ! :- is the reference assignment operator ;
+ d.bark;
+End;
+```
+
+You would compile and run it as follows:
+
+```
+$ cim dogs.sim
+Compiling dogs.sim:
+gcc -g -O2 -c dogs.c
+gcc -g -O2 -o dogs dogs.o -L/usr/local/lib -lcim
+$ ./dogs
+Yap yap yap yap yap yap
+```
+
+(You might notice that cim compiles Simula to C, then hands off to a C compiler.)
+
+This was what object-oriented programming looked like in 1967, and I hope you agree that aside from syntactic differences this is also what object-oriented programming looks like in 2019. So you can see why Simula is considered a historically important language.
+
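+For comparison, roughly the same program rendered in Python (an illustrative sketch, not from the original article) looks like this:
+
+```
+class Dog:
+    def bark(self):
+        print("Woof!")
+
+class Chihuahua(Dog):    # Chihuahua is "prefixed" by Dog, in Simula terms
+    def bark(self):
+        print("Yap yap yap yap yap yap")
+
+d = Chihuahua()
+d.bark()
+```
+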
+But I’m more interested in showing you the process model that was central to Simula I. That process model is still available in Simula 67, but only when you use the `Process` class and a special `Simulation` block.
+
+In order to show you how processes work, I’ve decided to simulate the following scenario. Imagine that there is a village full of villagers next to a river. The river has lots of fish, but between them the villagers only have one fishing rod. The villagers, who have voracious appetites, get hungry every 60 minutes or so. When they get hungry, they have to use the fishing rod to catch a fish. If a villager cannot use the fishing rod because another villager is waiting for it, then the villager queues up to use the fishing rod. If a villager has to wait more than five minutes to catch a fish, then the villager loses health. If a villager loses too much health, then that villager has starved to death.
+
+This is a somewhat strange example and I’m not sure why this is what first came to mind. But there you go. We will represent our villagers as Simula processes and see what happens over a day’s worth of simulated time in a village with four villagers.
+
+The full program is [available here as a Gist][6].
+
+The last lines of my output look like the following. Here we are seeing what happens in the last few hours of the day:
+
+```
+1299.45: John is hungry and requests the fishing rod.
+1299.45: John is now fishing.
+1311.39: John has caught a fish.
+1328.96: Betty is hungry and requests the fishing rod.
+1328.96: Betty is now fishing.
+1331.25: Jane is hungry and requests the fishing rod.
+1340.44: Betty has caught a fish.
+1340.44: Jane went hungry waiting for the rod.
+1340.44: Jane starved to death waiting for the rod.
+1369.21: John is hungry and requests the fishing rod.
+1369.21: John is now fishing.
+1379.33: John has caught a fish.
+1409.59: Betty is hungry and requests the fishing rod.
+1409.59: Betty is now fishing.
+1419.98: Betty has caught a fish.
+1427.53: John is hungry and requests the fishing rod.
+1427.53: John is now fishing.
+1437.52: John has caught a fish.
+```
+
+Poor Jane starved to death. But she lasted longer than Sam, who didn’t even make it to 7am. Betty and John sure have it good now that only two of them need the fishing rod.
+
+What I want you to see here is that the main, top-level part of the program does nothing but create the four villager processes and get them going. The processes manipulate the fishing rod object in the same way that we would manipulate an object today. But the main part of the program does not call any methods or modify any properties on the processes. The processes have internal state, but this internal state only gets modified by the process itself.
+
+There are still fields that get mutated in place here, so this style of programming does not directly address the problems that pure functional programming would solve. But as Krogdahl observes, “this mechanism invites the programmer of a simulation to model the underlying system as a set of processes, each describing some natural sequence of events in that system.” Rather than thinking primarily in terms of nouns or actors—objects that do things to other objects—here we are thinking of ongoing processes. The benefit is that we can hand overall control of our program off to Simula’s event notice system, which Krogdahl calls a “time manager.” So even though we are still mutating processes in place, no process makes any assumptions about the state of another process. Each process interacts with other processes only indirectly.
+
+It’s not obvious how this pattern could be used to build, say, a compiler or an HTTP server. (On the other hand, if you’ve ever programmed games in the Unity game engine, this should look familiar.) I also admit that even though we have a “time manager” now, this may not have been exactly what Hickey meant when he said that we need an explicit notion of time in our programs. (I think he’d want something like the superscript notation [that Ada Lovelace used][7] to distinguish between the different values a variable assumes through time.) All the same, I think it’s really interesting that right there at the beginning of object-oriented programming we can find a style of programming that is not at all like the object-oriented programming we are used to. We might take it for granted that object-oriented programming simply works one way—that a program is just a long list of the things that certain objects do to other objects in the exact order that they do them. Simula I’s process system shows that there are other approaches. Functional languages are probably a better thought-out alternative, but Simula I reminds us that the very notion of alternatives to modern object-oriented programming should come as no surprise.
+
+If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][8] on Twitter or subscribe to the [RSS feed][9] to make sure you know when a new post is out.
+
+Previously on TwoBitHistory…
+
+> Hey everyone! I sadly haven't had time to do any new writing but I've just put up an updated version of my history of RSS. This version incorporates interviews I've since done with some of the key people behind RSS like Ramanathan Guha and Dan Libby.
+>
+> — TwoBitHistory (@TwoBitHistory) [December 18, 2018][10]
+
+
+
+--------------------------------------------------------------------------------
+
+1. Jan Rune Holmevik, “The History of Simula,” accessed January 31, 2019, http://campus.hesge.ch/daehne/2004-2005/langages/simula.htm. ↩
+
+2. Ole-Johan Dahl and Kristen Nygaard, “SIMULA—An ALGOL-Based Simulation Language,” Communications of the ACM 9, no. 9 (September 1966): 671, accessed January 31, 2019, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.384&rep=rep1&type=pdf. ↩
+
+3. Stein Krogdahl, “The Birth of Simula,” 2, accessed January 31, 2019, http://heim.ifi.uio.no/~steinkr/papers/HiNC1-webversion-simula.pdf. ↩
+
+4. ibid. ↩
+
+5. Ole-Johan Dahl and Kristen Nygaard, “The Development of the Simula Languages,” ACM SIGPLAN Notices 13, no. 8 (August 1978): 248, accessed January 31, 2019, https://hannemyr.com/cache/knojd_acm78.pdf. ↩
+
+6. Dahl and Nygaard (1966), 676. ↩
+
+7. Dahl and Nygaard (1978), 257. ↩
+
+8. Krogdahl, 3. ↩
+
+9. Ole-Johan Dahl, “The Birth of Object-Orientation: The Simula Languages,” 3, accessed January 31, 2019, http://www.olejohandahl.info/old/birth-of-oo.pdf. ↩
+
+10. Dahl and Nygaard (1978), 265. ↩
+
+11. Holmevik. ↩
+
+12. Krogdahl, 4. ↩
+
+
+--------------------------------------------------------------------------------
+
+via: https://twobithistory.org/2019/01/31/simula.html
+
+作者:[Sinclair Target][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twobithistory.org
+[b]: https://github.com/lujun9972
+[1]: https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey
+[2]: /images/river.jpg
+[3]: https://archive.computerhistory.org/resources/text/algol/ACM_Algol_bulletin/1061032/p39-hoare.pdf
+[4]: http://web.eah-jena.de/~kleine/history/languages/Simula-CommonBaseLanguage.pdf
+[5]: https://www.gnu.org/software/cim/
+[6]: https://gist.github.com/sinclairtarget/6364cd521010d28ee24dd41ab3d61a96
+[7]: https://twobithistory.org/2018/08/18/ada-lovelace-note-g.html
+[8]: https://twitter.com/TwoBitHistory
+[9]: https://twobithistory.org/feed.xml
+[10]: https://twitter.com/TwoBitHistory/status/1075075139543449600?ref_src=twsrc%5Etfw
diff --git a/sources/talk/20190204 Config management is dead- Long live Config Management Camp.md b/sources/talk/20190204 Config management is dead- Long live Config Management Camp.md
new file mode 100644
index 0000000000..679ac9033b
--- /dev/null
+++ b/sources/talk/20190204 Config management is dead- Long live Config Management Camp.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Config management is dead: Long live Config Management Camp)
+[#]: via: (https://opensource.com/article/19/2/configuration-management-camp)
+[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
+
+Config management is dead: Long live Config Management Camp
+======
+
+CfgMgmtCamp '19 co-organizers share their take on ops, DevOps, observability, and the rise of YoloOps and YAML engineers.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc)
+
+Everyone goes to [FOSDEM][1] in Brussels to learn from its massive collection of talk tracks, colloquially known as developer rooms, that run the gamut of curiosities, covering everything from programming languages like Rust, Go, and Python to special topics ranging from community to legal to privacy. After two days of nonstop activity, many FOSDEM attendees move on to Ghent, Belgium, to join hundreds for Configuration Management Camp ([CfgMgmtCamp][2]).
+
+Kris Buytaert and Toshaan Bharvani run the popular post-FOSDEM show centered around infrastructure management, featuring hackerspaces, training, workshops, and keynotes. It's a deeply technical exploration of the who, what, and how of building resilient infrastructure. It started in 2013 as a PuppetCamp but expanded to include more communities and tools in 2014.
+
+I spoke with Kris and Toshaan, who both have a healthy sense of humor, about CfgMgmtCamp's past, present, and future. Our interview has been edited for length and clarity.
+
+**Matthew: Your opening [keynote][3] is called "CfgMgmtCamp is dead." Is config management dead? Will it live on, or will something take its place?**
+
+**Kris:** We've noticed people are jumping on the hype of containers, trying to solve the same problems in a different way. But they are still managing config, only in different ways and with other tools. Over the past couple of years, we've evolved from a conference with a focus on infrastructure-as-code tooling, such as Puppet, Chef, CFEngine, Ansible, Juju, and Salt, to a more open source infrastructure automation conference in general. So, config management is definitely not dead. Infrastructure-as-code is also not dead, but it all is evolving.
+
+**Toshaan:** We see people changing tools, jumping on hype, and communities changing; however, the basic ideas and concepts remain the same.
+
+**Matthew: It's great to see [observability as the topic][4] of one of your keynotes. Why should those who care about configuration management also care about monitoring and observability?**
+
+**Kris:** While the name of the conference hasn't changed, the tools have evolved and we have expanded our horizon. Ten years ago, [Devopsdays][5] was just #devopsdays, but it evolved to focus on culture—the C of [CAMS][6] in the DevOps' core principles of Culture, Automation, Measurement, and Sharing.
+
+![](https://opensource.com/sites/default/files/uploads/cams.png)
+
+[Monitorama][7] filled the gap on monitoring and metrics (tackling the M in CAMS). Config Management Camp is about open source Automation, the A. Since they are all open source conferences, they fulfill the Sharing part, completing the CAMS concept.
+
+Observability sits on the line between Automation and Measurement. To go one step further, in some of my talks about open source monitoring, I describe the evolution of monitoring tools from #monitoringsucks to #monitoringlove; for lots of people (including me), the love for monitoring returned because we tied it to automation. We started to provision a service and automatically adapted the monitoring of that service to its state. Gone were the days where the monitoring tool was out of sync with reality.
+
+Looking at it from the other side, when you have an infrastructure or application so complex that you need observability in it, you'd better not be deploying manually; you will need some form of automation at that level of complexity. So, observability and infrastructure automation are tied together.
+
+**Toshaan:** Yes, while in the past we focused on configuration management, we will be looking to expand that into all types of infrastructure management. Last year, we played with this idea, and we were able to have a lot of cross-tool presentations. This year, we've taken this a step further by having more differentiated content.
+
+**Matthew: Some of my virtualization and Linux admin friends push back, saying observability is a developer's responsibility. How would you respond without just saying "DevOps?"**
+
+**Kris:** What you describe is what I call "Ooops Devs." This is a trend where the people who run the platform don't really care what they run; as long as port 80 is listening and the node pings, they are happy. It's just as bad as "Dev Ooops," where the devs rant about the ops folks because they are slow, not agile, and not responsive. But, to me, your job as an ops person or as a Linux admin is to keep a service running, and the only way to do that is to take on that task as a team—with your colleagues who have different roles and insights, people who write code, people who design, etc. It is a shared responsibility. And hiding behind "that is someone else's responsibility" doesn't smell like collaboration going on.
+
+**Toshaan:** Even in the dark ages of silos, I believe a true sysadmin should have cared about observability, monitoring, and automation. I believe that the DevOps movement has made this much more widespread, and that it has become easier to get this information and expose it. On the other hand, I believe that pure operators or sysadmins have learned to be team players (or, they may have died out). I like the analogy of an army unit composed of different specialty soldiers who work together to complete a mission; we have engineers who work to deliver products or services.
+
+**Matthew: In a [Devopsdays Zurich talk][8], Kris offered an opinion that Americans build software for acquisition and Europeans build for resilience. In that light, what are the best skills for someone who wants to build meaningful infrastructure?**
+
+**Toshaan:** I believe some people still don't understand the complexity of code sprawl, and they believe that some new hype will solve this magically.
+
+**Kris:** This year, we invited [Steve Traugott][9], co-author of the 1998 USENIX paper "[Bootstrapping an Infrastructure][10]" that helped kickstart our community. So many people never read [Infrastructures.org][11], never experienced the pain of building images and image sprawl, and don't understand the evolution we went through that led us to build things the way we build them from source code.
+
+People should study topics such as idempotence, resilience, reproducibility, and surviving the tenth floor test. (As explained in "Bootstrapping an Infrastructure": "The test we used when designing infrastructures was 'Can I grab a random machine and throw it out the tenth-floor window without adversely impacting users for more than 10 minutes?' If the answer to this was 'yes,' then we knew we were doing things right.") But only after they understand the service they are building—the service is the absolute priority—can they begin working on things like: how can we run this, how can we make sure it keeps running, how can it fail and how can we prevent that, and if it disappears, how can we spin it up again fast, unnoticed by the end user.
+
+**Toshaan:** 100% uptime.
+
+**Kris:** The challenge we have is that lots of people don't have that experience yet. We've seen the rise of [YoloOps][12]—just spin it up once, fire, and forget—which results in security problems, stability problems, data loss, etc., and people often grasp onto the solutions in YoloOps, the easy way to do something quickly and move on. But understanding how things will eventually fail takes time; it's called experience.
+
+**Toshaan:** Well, when I was a student and manned the CentOS stand at FOSDEM, I remember a guy coming up to the stand and complaining that he couldn't do consulting because of the "fire once and forget" policy of CentOS, and that it just worked too well. I like to call this ZombieOps, but YoloOps works too.
+
+**Matthew: I see you're leading the second year of YamlCamp as well. Why does a markup language need its own camp?**
+
+**Kris:** [YamlCamp][13] is a parody, it's a joke. Last year, Bob Walker ([@rjw1][14]) gave a talk titled "Are we all YAML engineers now?" that led to more jokes. We've had a discussion for years about rebranding CfgMgmtCamp; the problem is that people know our name, we have a large enough audience to keep going, and changing the name would mean effort spent on logos, website, DNS, etc. We won't change the name, but we joked that we could rebrand to YamlCamp, because for some weird reason, a lot of the talks are about YAML. :)
+
+**Matthew: Do you think systems engineers should list YAML as a skill or a language on their CV? Should companies be hiring YAML engineers, or do you have "Long live all YAML engineers" on the website in jest?**
+
+**Toshaan:** Well, the real question is whether people are willing to call themselves YAML engineers proudly, because we already have enough DevOps engineers.
+
+**Matthew: What FOSS software helps you manage the event?**
+
+**Toshaan:** I re-did the website in Hugo CMS because we were spending too much time maintaining the website manually. I chose Hugo, because I was learning Golang, and because it has been successfully used for other conferences and my own website. I also wanted a static website and iCalendar output, so we could use calendar tooling such as Giggity to have a good scheduling tool.
+
+The website now builds quite nicely, and while I still have some ideas on improvements, maintenance is now much easier.
+
+For the call for proposals (CFP), we now use [OpenCFP][15]. We want to optimize the submission, voting, selection, and extraction to be as automated as possible, while being easy and comfortable for potential speakers, reviewers, and ourselves to use. OpenCFP seems to be the tool that works; while we still have some feature requirements, I believe that, once we have some time to contribute back to OpenCFP, we'll have a fully functional and easy tool to run CFPs with.
+
+Last, we switched from EventBrite to Pretix because I wanted to be GDPR compliant and have the ability to run our questions, vouchers, and extra features. Pretix allows us to control registration of attendees, speakers, sponsors, and organizers and have a single overview of all the people coming to the event.
+
+### Wrapping up
+
+The beauty of Configuration Management Camp to me is that it continues to evolve with its audience. Configuration management is certainly at the heart of the work, but it's in service to resilient infrastructure. Keep your eyes open for the talk recordings to learn from the [line up of incredible speakers][16], and thank you to the team for running this (free) show!
+
+You can follow Kris [@KrisBuytaert][17] and Toshaan [@toshywoshy][18]. You can also see Kris' past articles [on his blog][19].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/configuration-management-camp
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://fosdem.org/2019/
+[2]: https://cfgmgmtcamp.eu/
+[3]: https://cfgmgmtcamp.eu/schedule/monday/intro00/
+[4]: https://cfgmgmtcamp.eu/schedule/monday/keynote0/
+[5]: https://www.devopsdays.org/
+[6]: http://devopsdictionary.com/wiki/CAMS
+[7]: http://monitorama.com/
+[8]: https://vimeo.com/272519813
+[9]: https://cfgmgmtcamp.eu/schedule/tuesday/keynote1/
+[10]: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
+[11]: http://www.infrastructures.org/
+[12]: https://gist.githubusercontent.com/mariozig/5025613/raw/yolo
+[13]: https://twitter.com/yamlcamp
+[14]: https://twitter.com/rjw1
+[15]: https://github.com/opencfp/opencfp
+[16]: https://cfgmgmtcamp.eu/speaker/
+[17]: https://twitter.com/KrisBuytaert
+[18]: https://twitter.com/toshywoshy
+[19]: https://krisbuytaert.be/index.shtml
diff --git a/sources/talk/20190205 7 predictions for artificial intelligence in 2019.md b/sources/talk/20190205 7 predictions for artificial intelligence in 2019.md
new file mode 100644
index 0000000000..2e1b047a15
--- /dev/null
+++ b/sources/talk/20190205 7 predictions for artificial intelligence in 2019.md
@@ -0,0 +1,91 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (7 predictions for artificial intelligence in 2019)
+[#]: via: (https://opensource.com/article/19/2/predictions-artificial-intelligence)
+[#]: author: (Salil Sethi https://opensource.com/users/salilsethi)
+
+7 predictions for artificial intelligence in 2019
+======
+
+While 2018 was a big year for AI, the stage is set for it to make an even deeper impact in 2019.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/robot_arm_artificial_ai.png?itok=8CUU3U_7)
+
+Without question, 2018 was a big year for artificial intelligence (AI) as it pushed even further into the mainstream, successfully automating more functionality than ever before. Companies are increasingly exploring applications for AI, and the general public has grown accustomed to interacting with the technology on a daily basis.
+
+The stage is set for AI to continue transforming the world as we know it. In 2019, not only will the technology continue growing in global prevalence, but it will also spawn deeper conversations around important topics, fuel innovative business models, and impact society in new ways, including the following seven.
+
+### 1\. Machine learning as a service (MLaaS) will be deployed more broadly
+
+In 2018, we witnessed major strides in MLaaS with technology powerhouses like Google, Microsoft, and Amazon leading the way. Prebuilt machine learning solutions and capabilities are becoming more attractive in the market, especially to smaller companies that don't have the necessary in-house resources or talent. For those that have the technical know-how and experience, there is a significant opportunity to sell and deploy packaged solutions that can be easily implemented by others.
+
+Today, MLaaS is sold primarily on a subscription or usage basis by cloud-computing providers. For example, Microsoft Azure's ML Studio provides developers with a drag-and-drop environment to develop powerful machine learning models. Google Cloud's Machine Learning Engine also helps developers build large, sophisticated algorithms for a variety of applications. In 2017, Amazon jumped into the realm of AI and launched Amazon SageMaker, another platform that developers can use to build, train, and deploy custom machine learning models.
+
+In 2019 and beyond, be prepared to see MLaaS offered on a much broader scale. Transparency Market Research predicts it will grow to US$20 billion at an alarming 40% CAGR by 2025.
+
+### 2\. More explainable or "transparent" AI will be developed
+
+Although there are already many examples of how AI is impacting our world, explaining the outputs and rationale of complex machine learning models remains a challenge.
+
+Unfortunately, AI continues to carry the "black box" burden, posing a significant limitation in situations where humans want to understand the rationale behind AI-supported decision making.
+
+AI democratization has been led by a plethora of open source tools and libraries, such as Scikit Learn, TensorFlow, PyTorch, and more. The open source community will lead the charge to build explainable, or "transparent," AI that can clearly document its logic, expose biases in data sets, and provide answers to follow-up questions.
+
+Before AI is widely adopted, humans need to know that the technology can perform effectively and explain its reasoning under any circumstance.
+
+### 3\. AI will impact the global political landscape
+
+In 2019, AI will play a bigger role on the global stage, impacting relationships between international superpowers that are investing in the technology. Early adopters of AI, such as the US and [China][1], will struggle to balance self-interest with collaborative R&D. Countries that have AI talent and machine learning capabilities will experience tremendous growth in areas like predictive analytics, creating a wider global technology gap.
+
+Additionally, more conversations will take place around the ethical use of AI. Naturally, different countries will approach this topic differently, which will affect political relationships. Overall, AI's impact will be small relative to other international issues, but more noticeable than before.
+
+### 4\. AI will create more jobs than it eliminates
+
+Over the long term, many jobs will be eliminated as a result of AI-enabled automation. Roles characterized by repetitive, manual tasks are being outsourced to AI more and more every day. However, in 2019, AI will create more jobs than it replaces.
+
+Rather than eliminating the need for humans entirely, AI is augmenting existing systems and processes. As a result, a new type of role is emerging. Humans are needed to support AI implementation and oversee its application. Next year, more manual labor will transition to management-type jobs that work alongside AI, a trend that will continue to 2020. Gartner predicts that in two years, [AI will create 2.3 million jobs while only eliminating 1.8 million.][2]
+
+### 5\. AI assistants will become more pervasive and useful
+
+AI assistants are nothing new to the modern world. Apple's Siri and Amazon's Alexa have been supporting humans on the road and in their homes for years. In 2019, we will see AI assistants continue to grow in their sophistication and capabilities. As they collect more behavioral data, AI assistants will become better at responding to requests and completing tasks. With advances in natural language processing and speech recognition, humans will have smoother and more useful interactions with AI assistants.
+
+In 2018, we saw companies launch promising new AI assistants. Recently, Google began rolling out its voice-enabled reservation booking service, Duplex, which can call and book appointments on behalf of users. Technology company X.ai has built two AI personal assistants, Amy and Andrew, who can interact with humans and schedule meetings for their employers. Amazon also recently announced Echo Auto, a device that enables drivers to integrate Alexa into their vehicles. However, humans will continue to place expectations ahead of reality and be disappointed at the technology's limitations.
+
+### 6\. AI/ML governance will gain importance
+
+With so many companies investing in AI, much more energy will be put towards developing effective AI governance structures. Frameworks are needed to guide data collection and management, appropriate AI use, and ethical applications. Successful and appropriate AI use involves many different stakeholders, highlighting the need for reliable and consistent governing bodies.
+
+In 2019, more organizations will create governance structures and more clearly define how AI progress and implementation are managed. Given the current gap in explainability, these structures will be tremendously important as humans continue to turn to AI to support decision-making.
+
+### 7\. AI will help companies solve AI talent shortages
+
+A [shortage of AI and machine learning talent][3] is creating an innovation bottleneck. A [survey][4] released last year from O'Reilly revealed that the biggest challenge companies are facing related to using AI is a lack of available talent. And as technological advancement continues to accelerate, it is becoming harder for companies to develop talent that can lead large-scale enterprise AI efforts.
+
+To combat this, organizations will—ironically—use AI and machine learning to help address the talent gap in 2019. For example, Google Cloud's AutoML includes machine learning products that help developers train machine learning models without having any prior AI coding experience. Amazon Personalize is another machine learning service that helps developers build sophisticated personalization systems that can be implemented in many ways by different kinds of companies. In addition, companies will use AI to find talent, fill job vacancies, and propel innovation forward.
+
+### AI In 2019: bigger and better with a tighter leash
+
+Over the next year, AI will grow more prevalent and powerful than ever. Expect to see new applications and challenges and be ready for an increased emphasis on checks and balances.
+
+What do you think? How might AI impact the world in 2019? Please share your thoughts in the comments below!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/predictions-artificial-intelligence
+
+作者:[Salil Sethi][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/salilsethi
+[b]: https://github.com/lujun9972
+[1]: https://www.turingtribe.com/story/china-is-achieving-ai-dominance-by-relying-on-young-blue-collar-workers-rLMsmWqLG4fGFwisQ
+[2]: https://www.gartner.com/en/newsroom/press-releases/2017-12-13-gartner-says-by-2020-artificial-intelligence-will-create-more-jobs-than-it-eliminates
+[3]: https://www.turingtribe.com/story/tencent-says-there-are-only-bTpNm9HKaADd4DrEi
+[4]: https://www.forbes.com/sites/bernardmarr/2018/06/25/the-ai-skills-crisis-and-how-to-close-the-gap/#19bafcf631f3
diff --git a/sources/talk/20190206 4 steps to becoming an awesome agile developer.md b/sources/talk/20190206 4 steps to becoming an awesome agile developer.md
new file mode 100644
index 0000000000..bad4025aef
--- /dev/null
+++ b/sources/talk/20190206 4 steps to becoming an awesome agile developer.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 steps to becoming an awesome agile developer)
+[#]: via: (https://opensource.com/article/19/2/steps-agile-developer)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+
+4 steps to becoming an awesome agile developer
+======
+There's no magical way to do it, but these practices will put you well on your way to embracing agile in application development, testing, and debugging.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk)
+
+Enterprises are rushing into their DevOps journey through [agile][1] software development with cloud-native technologies such as [Linux containers][2], [Kubernetes][3], and [serverless][4]. Continuous integration helps enterprise developers reduce bugs and unexpected errors and improve the quality of the code they deploy to production.
+
+However, this doesn't mean all developers in DevOps automatically embrace agile for their daily work in application development, testing, and debugging. There is no magical way to do it, but the following four practical steps and best practices will put you well on your way to becoming an awesome agile developer.
+
+### Start with design thinking agile practices
+
+There are many opportunities to learn about using agile software development practices in your DevOps initiatives. Agile practices inspire people with new ideas and experiences for improving their daily work in application development with team collaboration. More importantly, those practices will help you discover the answers to questions such as: Why am I doing this? What kind of problems am I trying to solve? How do I measure the outcomes?
+
+A [domain-driven design][5] approach will help you start discovery sooner and easier. For example, the [Start At The End][6] practice helps you redesign your application and explore potential business outcomes—such as, what would happen if your application fails in production? You might also be interested in [Event Storming][7] for interactive and rapid discovery or [Impact Mapping][8] for graphical and strategic design as part of domain-driven design practices.
+
+### Use a predictive approach first
+
+In agile software development projects, enterprise developers are mainly focused on adapting to rapidly changing app development environments such as reactive runtimes, cloud-native frameworks, Linux container packaging, and the Kubernetes platform. They believe this is the best way to become an agile developer in their organization. However, this type of adaptive approach typically makes it harder for developers to understand and report what they will do in the next sprint. Developers might know the ultimate goal and, at best, the app features for a release about four months from the current sprint.
+
+In contrast, the predictive approach places more emphasis on analyzing known risks and planning future sprints in detail. For example, predictive developers can accurately report the functions and tasks planned for the entire development process. But it's not a magical way to make your agile projects succeed all the time because the predictive team depends totally on effective early-stage analysis. If the analysis does not work very well, it may be difficult for the project to change direction once it gets started.
+
+To mitigate this risk, I recommend that senior agile developers increase the predictive capabilities with a plan-driven method, and junior agile developers start with the adaptive methods for value-driven development.
+
+### Continuously improve code quality
+
+Don't hesitate to engage in [continuous integration][9] (CI) practices for improving your application before deploying code into production. To adopt modern application frameworks, such as cloud-native architecture, Linux container packaging, and hybrid cloud workloads, you have to learn about automated tools to address complex CI procedures.
+
+[Jenkins][10] is the standard CI tool for many organizations; it allows developers to build and test applications in many projects in an automated fashion. Its most important function is detecting unexpected errors during CI to prevent them from happening in production. This should increase business outcomes through better customer satisfaction.
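+
+If you want to experiment with Jenkins before wiring it into a real project, one common way to bring up a local instance (assuming Docker is installed; the volume name below is just an example) is the official LTS container image:
+
+```
+# Run a throwaway Jenkins LTS server on http://localhost:8080
+# The named volume keeps job configuration across restarts
+docker run --rm -p 8080:8080 -p 50000:50000 \
+  -v jenkins_home:/var/jenkins_home \
+  jenkins/jenkins:lts
+```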
+
+Automated CI enables agile developers to improve not only the quality of their code but also their application development agility by learning and using open source tools and patterns such as [behavior-driven development][11], [test-driven development][12], [automated unit testing][13], [pair programming][14], [code review][15], and [design patterns][16].
+
+### Never stop exploring communities
+
+Never settle, even if you already have a great reputation as an agile developer. You have to continuously take on bigger challenges to make great software in an agile way.
+
+By participating in the very active and growing open source community, you will not only improve your skills as an agile developer, but your actions can also inspire other developers who want to learn agile practices.
+
+How do you get involved in specific communities? It depends on your interests and what you want to learn. It might mean presenting specific topics at conferences or local meetups, writing technical blog posts, publishing practical guidebooks, committing code, or creating pull requests to open source projects' Git repositories. It's worth exploring open source communities for agile software development, as I've found it is a great way to share your expertise, knowledge, and practices with other brilliant developers and, along the way, help each other.
+
+### Get started
+
+These practical steps can give you a shorter path to becoming an awesome agile developer. Then you can lead junior developers in your team and organization to become more flexible, valuable, and predictive using agile principles.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/steps-agile-developer
+
+作者:[Daniel Oh][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/18/10/what-agile
+[2]: https://opensource.com/resources/what-are-linux-containers
+[3]: https://opensource.com/resources/what-is-kubernetes
+[4]: https://opensource.com/article/18/11/open-source-serverless-platforms
+[5]: https://en.wikipedia.org/wiki/Domain-driven_design
+[6]: https://openpracticelibrary.com/practice/start-at-the-end/
+[7]: https://openpracticelibrary.com/practice/event-storming/
+[8]: https://openpracticelibrary.com/practice/impact-mapping/
+[9]: https://en.wikipedia.org/wiki/Continuous_integration
+[10]: https://jenkins.io/
+[11]: https://en.wikipedia.org/wiki/Behavior-driven_development
+[12]: https://en.wikipedia.org/wiki/Test-driven_development
+[13]: https://en.wikipedia.org/wiki/Unit_testing
+[14]: https://en.wikipedia.org/wiki/Pair_programming
+[15]: https://en.wikipedia.org/wiki/Code_review
+[16]: https://en.wikipedia.org/wiki/Design_pattern
diff --git a/sources/talk/20190206 What blockchain and open source communities have in common.md b/sources/talk/20190206 What blockchain and open source communities have in common.md
new file mode 100644
index 0000000000..bc4f9464d0
--- /dev/null
+++ b/sources/talk/20190206 What blockchain and open source communities have in common.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What blockchain and open source communities have in common)
+[#]: via: (https://opensource.com/article/19/2/blockchain-open-source-communities)
+[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
+
+What blockchain and open source communities have in common
+======
+Blockchain initiatives can look to open source governance for lessons on establishing trust.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity_flowers_chain.jpg?itok=ns01UPOp)
+
+One of the characteristics of blockchains that gets a lot of attention is how they enable distributed trust. The topic of trust is a surprisingly complicated one. In fact, there's now an [entire book][1] devoted to the topic by Kevin Werbach.
+
+But here's what it means in a nutshell. Organizations that wish to work together, but do not fully trust one another, can establish a permissioned blockchain and invite business partners to record their transactions on a shared distributed ledger. Permissioned blockchains can trace assets when transactions are added to the blockchain. A permissioned blockchain implies a degree of trust (again, trust is complicated) among members of a consortium, but no single entity controls the storage and validation of transactions.
+
+The basic model is that a group of financial institutions or participants in a logistics system can jointly set up a permissioned blockchain that will validate and immutably record transactions. There's no dependence on a single entity, whether it's one of the direct participants or a third-party intermediary who set up the blockchain, to safeguard the integrity of the system. The blockchain itself does so through a variety of cryptographic mechanisms.
+
+Here's the rub though. It requires that competitors work together cooperatively—a relationship often called [coopetition][2]. The term dates back to the early 20th century, but it grew into widespread use when former Novell CEO Ray Noorda started using the term to describe the company's business strategy in the 1990s. Novell was then planning to get into the internet portal business, which required it to seek partnerships with some of the search engine providers and other companies it would also be competing against. In 1996, coopetition became the subject of a bestselling [book][3].
+
+Coopetition can be especially difficult when a blockchain network initiative appears to be driven by a dominant company. And it's hard for the dominant company not to exert outsize influence over the initiative, just as a natural consequence of how big it is. For example, the IBM-Maersk joint venture has [struggled to sign up rival shipping companies][4], in part because Maersk is the world's largest carrier by capacity, a position that makes rivals wary.
+
+We see this same dynamic in open source communities. The original creators of a project need to not only let go; they need to put governance structures in place that give competing companies confidence that there's a level playing field.
+
+For example, Sarah Novotny, now head of open source strategy at Google Cloud Platform, [told me in a 2017 interview][5] about the [Kubernetes][6] project that it isn't always easy to give up control, even when people buy into doing what is best for a project.
+
+> Google turned Kubernetes over to the Cloud Native Computing Foundation (CNCF), which sits under the Linux Foundation umbrella. As [CNCF executive director Dan Kohn puts it][7]: "One of the things they realized very early on is that a project with a neutral home is always going to achieve a higher level of collaboration. They really wanted to find a home for it where a number of different companies could participate."
+>
+> Defaulting to public may not be either natural or comfortable. "Early on, my first six, eight, or 12 weeks at Google, I think half my electrons in email were spent on: 'Why is this discussion not happening on a public mailing list? Is there a reason that this is specific to GKE [Google Container Engine]? No, there's not a reason,'" said Novotny.
+
+To be sure, some grumble that open source foundations have become too common and that many are too dominated by paying corporate members. Simon Phipps, currently the president of the Open Source Initiative, gave a talk at OSCON way back in 2015 titled ["Enough Foundations Already!"][8] in which he argued that "before we start another open source foundation, let's agree that what we need protected is software freedom and not corporate politics."
+
+Nonetheless, while not appropriate for every project, foundations with business, legal, and technical governance are increasingly the model for open source projects that require extensive cooperation among competing companies. A [2017 analysis of GitHub data by the Linux Foundation][9] found a number of different governance models in use by the highest-velocity open source projects. Unsurprisingly, quite a few remained under the control of the company that created or acquired them. However, about a third were under the auspices of a foundation.
+
+Is there a lesson here for blockchain? Quite possibly. Open source projects can be sponsored by a company while still putting systems and governance in place that are welcoming to outside contributors. However, there's a great deal of history to suggest that doing so is hard because it's hard not to exert control and leverage when you can. Furthermore, even if you make a successful case for being truly open to equal participation to outsiders today, it will be hard to allay suspicions that you might not be as welcoming tomorrow.
+
+To the degree that we can equate blockchain consortiums with open source communities, this suggests that business blockchain initiatives should look to open source governance for lessons. Dominant players in the ecosystem need to forgo control, and they need to have conversations with partners and potential partners about what types of structures would make participating easier.
+
+Many blockchain infrastructure software projects are already under foundations such as Hyperledger. But perhaps some specific production deployments of blockchain aimed at specific industries and ecosystems will benefit from formal governance structures as well.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/blockchain-open-source-communities
+
+作者:[Gordon Haff][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ghaff
+[b]: https://github.com/lujun9972
+[1]: https://mitpress.mit.edu/books/blockchain-and-new-architecture-trust
+[2]: https://en.wikipedia.org/wiki/Coopetition
+[3]: https://en.wikipedia.org/wiki/Co-opetition_(book)
+[4]: https://www.theregister.co.uk/2018/10/30/ibm_struggles_to_sign_up_shipping_carriers_to_blockchain_supply_chain_platform_reports/
+[5]: https://opensource.com/article/17/4/podcast-kubernetes-sarah-novotny
+[6]: https://kubernetes.io/
+[7]: http://bitmason.blogspot.com/2017/02/podcast-cloud-native-computing.html
+[8]: https://www.oreilly.com/ideas/enough-foundations-already
+[9]: https://www.linuxfoundation.org/blog/2017/08/successful-open-source-projects-common/
diff --git a/sources/talk/20190208 Which programming languages should you learn.md b/sources/talk/20190208 Which programming languages should you learn.md
new file mode 100644
index 0000000000..31cef16f03
--- /dev/null
+++ b/sources/talk/20190208 Which programming languages should you learn.md
@@ -0,0 +1,46 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Which programming languages should you learn?)
+[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Which programming languages should you learn?
+======
+Learning a new programming language is a great way to get ahead in your career. But which one?
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0)
+
+If you want to get started or get ahead in your programming career, learning a new language is a smart idea. But the huge number of languages in active use invites the question: Which programming language is the best one to know? To answer that, let's start with a simplifying question: What sort of programming do you want to do?
+
+If you want to do web programming on the client side, then the specialized languages HTML, CSS, and JavaScript—in one of its seemingly infinite dialects—are de rigueur.
+
+If you want to do web programming on the server side, the options include all of the familiar general-purpose languages: C++, Golang, Java, C#, Node.js, Perl, Python, Ruby, and so on. As a matter of course, server-side programs interact with datastores, such as relational and other databases, which means query languages such as SQL may come into play.
+
+If you're writing native apps for mobile devices, knowing the target platform is important. For Apple devices, Swift has supplanted Objective C as the language of choice. For Android devices, Java (with dedicated libraries and toolsets) remains the dominant language. There are special languages such as Xamarin, used with C#, that can generate platform-specific code for Apple, Android, and Windows devices.
+
+What about general-purpose languages? There are various choices within the usual pigeonholes. Among the dynamic or scripting languages (e.g., Perl, Python, and Ruby), there are newer offerings such as Node.js. Java and C#, which are more alike than their fans like to admit, remain the dominant statically compiled languages targeted at a virtual machine (the JVM and CLR, respectively). Among languages that compile into native executables, C++ is still in the mix, along with later arrivals such as Golang and Rust. General-purpose functional languages abound (e.g., Clojure, Haskell, Erlang, F#, Lisp, and Scala), often with passionately devoted communities. It's worth noting that object-oriented languages such as Java and C# have added functional constructs (in particular, lambdas), and the dynamic languages have had functional constructs from the start.
+
+Let me end with a pitch for C, which is a small, elegant, and extensible language not to be confused with C++. Modern operating systems are written mostly in C, with the rest in assembly language. The standard libraries on any platform are likewise mostly in C. For example, any program that issues the Hello, world! greeting does so through a call to the C library function named **write**.
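+
+On Linux you can watch this happen for yourself, assuming the strace utility is installed: tracing only the write system call (the thin kernel interface behind the C library's **write**) while printing the greeting shows the single call that delivers the text to standard output.
+
+```
+# Show only write() calls made while printing the greeting;
+# expect something like: write(1, "Hello, world!\n", 14) = 14
+strace -e trace=write sh -c 'echo "Hello, world!"'
+```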
+
+C serves as a portable assembly language, exposing details about the underlying system that other high-level languages deliberately hide. To understand C is thus to gain a better grasp of how programs contend for the shared system resources (processors, memory, and I/O devices) required for execution. C is at once high-level and close-to-the-metal, so unrivaled in performance—except, of course, for assembly language. Finally, C is the lingua franca among programming languages, and almost every general-purpose language supports C calls in one form or another.
+
+For a modern introduction to C, consider my book [C Programming: Introducing Portable Assembler][1]. No matter how you go about it, learn C and you'll learn a lot more than just another programming language.
+
+What programming languages do you think are important to know? Do you agree or disagree with these recommendations? Let us know in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
diff --git a/sources/tech/20190129 A small notebook for a system administrator.md b/sources/tech/20190129 A small notebook for a system administrator.md
new file mode 100644
index 0000000000..45d6ba50eb
--- /dev/null
+++ b/sources/tech/20190129 A small notebook for a system administrator.md
@@ -0,0 +1,552 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A small notebook for a system administrator)
+[#]: via: (https://habr.com/en/post/437912/)
+[#]: author: (sukhe https://habr.com/en/users/sukhe/)
+
+A small notebook for a system administrator
+======
+
+I am a system administrator, and I need a small, lightweight notebook to carry around every day. Of course, not just to carry it, but to use it for work.
+
+I already have a ThinkPad x200, but it’s heavier than I would like. And among the lightweight notebooks, I did not find anything suitable. All of them imitate the MacBook Air: thin, shiny, glamorous, and they all critically lack ports. Such a notebook is suitable for posting photos on Instagram, but not for work. At least not for mine.
+
+After not finding anything suitable, I thought about what a notebook would look like if it were developed with the needs of real users in mind rather than design. System administrators, for example. Or people servicing telecommunications equipment in hard-to-reach places — on roofs, masts, in the woods, literally in the middle of nowhere.
+
+The results of my thoughts are presented in this article.
+
+
+[![Figure to attract attention][1]][2]
+
+Of course, your understanding of the admin notebook does not have to coincide with mine. But I hope you will find a couple of interesting thoughts here.
+
+Just keep in mind that «system administrator» is just the name of my position. In fact, I also have to work as a network engineer and an installer, and perform a significant amount of other hardware-related work. Our company is tiny and we are far from large settlements, so all of us have to be universal specialists.
+
+In order not to constantly repeat «this notebook», later in the article I will call it the “adminbook”. It can be useful not only to administrators, but also to anyone who needs a small, lightweight notebook with a lot of connectors. In fact, even large laptops don’t have as many connectors.
+
+So let's get started…
+
+### 1\. Dimensions and weight
+
+Of course, you want it smaller and more lightweight, but the keyboard with the screen should not be too small. And there has to be space for connectors, too.
+
+In my opinion, a suitable option is a notebook half the size of an x200. That is, approximately the size of a sheet of A5 paper (210x148mm). In addition, the side pockets of many bags and backpacks are designed for this size. This means that the adminbook doesn’t even have to be carried in the main compartment.
+
+Though I couldn’t fit everything I wanted into 210mm. To make a comfortable keyboard, the width had to be increased to 230mm.
+
+In the illustrations the adminbook may seem too thick. But that’s only an optical illusion. In fact, its thickness is 25mm (28mm taking the rubber feet into account).
+
+Its size is close to the usual hardcover book, 300-350 pages thick.
+
+It’s lightweight, too — about 800 grams (half the weight of the ThinkPad).
+
+The case of the adminbook is made of mithril aluminum. It’s a lightweight, durable metal with good thermal conductivity.
+
+### 2\. Keyboard and trackpoint
+
+A quality keyboard is very important for me. “Quality” there means the fastest possible typing and hotkey speed. It needs to be so “matter-of-fact” I don’t have to think about it at all, as if it types seemingly by force of thought.
+
+This is possible if the keys are normal size and in their typical positions. But the adminbook is too small for that. In width, it is even smaller than the main block of keys of a desktop keyboard. So, you have to work around that somehow.
+
+After a long search and numerous tests, I came up with what you see in the picture:
+
+![](https://habrastorage.org/webt/2-/mh/ag/2-mhagvoofl7vgqiadv3rcnclb0.jpeg)
+Fig.2.1 — Adminbook keyboard
+
+This keyboard has the same vertical key distance as on a regular keyboard. The horizontal distance is reduced by only 2mm (17mm instead of 19mm).
+
+You can even touch type on this keyboard! To help with this, some keys have small bumps for tactile orientation.
+
+However, if you do not sit at a table, the main input method will be to press the keys “at a glance”. And here the muscle memory does not help — you have to look at the keys with your eyes.
+
+To hit the buttons faster, different key colors are used.
+
+For example, the numeric row is specifically colored gray to visually separate it from the QWERTY row, and NumLock is mapped to the “6” key, colored black to stand out.
+
+To the right of NumLock, gray indicates the area of the numeric keypad. These (and neighboring) buttons work like a numeric keypad in NumLock mode or when you press Fn. I must say, this is a useful feature for the admin computer — some users come up with passwords on the numpad in the form of a “cross”, “snake”, “spiral”, etc. I want to be able to type them that way too.
+
+As for the function keys: I don’t know about you, but it annoys me when, in a 15-inch laptop, this row is half-height and only accessible by pressing Fn, even though there’s a lot of free space around the keyboard!
+
+The adminbook doesn’t have free space at all. But the function keys can be pressed without Fn. These are separate keys that are even divided into groups of 4 using color coding and location.
+
+By the way, have you seen which key is to the right of AltGr on modern ThinkPads? I don’t know what they were thinking, but now they have PrintScreen there!
+
+Where? Where, I ask, is the context menu key that I use every day? It’s not there.
+
+So the adminbook has it. Two, even! You can bring it up by pressing Fn + Alt. Sorry, I couldn’t map it to a separate key due to lack of space. Just in case, I added the “Right Win” key as Fn + CtrlR. Maybe some people use it for something.
+
+However, the adminbook allows you to customize the keyboard to your liking. The keyboard is fully reprogrammable. You can assign the scan codes you need to the keys. Setting the keyboard parameters is done via the “KEY” button (Fn + F3).
+
+Of course, the adminbook has a keyboard backlight. It is turned on with Fn + B (below the trackpoint, you can even find it in the dark). The backlight here is similar to the ThinkPad ThinkLight. That is, it’s an LED above the display, illuminating the keyboard from the top. In this case, it is better than a backlight from below, because it allows you to distinguish the color of the keys. In addition, keys have several characters printed on them, while only English letters are usually made translucent to the backlight.
+
+Since we’re on the topic of characters… Red letters are Ukrainian and Russian. I specifically drew them to show that keys have space for several alphabets: after all, English is not a native language for most of humanity.
+
+Since there isn’t enough space for a full touchpad, the trackpoint is used as the positioning device. If you have no experience working with it — don’t worry, it’s actually quite handy. The mouse cursor moves with slight inclines of the trackpoint, like an analog joystick, and its three buttons (under the spacebar) work the same as on the mouse.
+
+To the left of the trackpoint keys is a fingerprint scanner. That makes it possible to log in by fingerprint. It’s very convenient in most cases.
+
+The space bar has an NFC antenna location mark. You can simply read data from devices equipped with NFC, and you can use it to lock the system while not in use. For example, if you wear an NFC-equipped ring, it works like this: when you remove your hands from the keyboard, the computer locks after a certain time, and unlocks when you put your hands on the keyboard again.
+
+And now the unexpected part. The keyboard and the trackpoint can work as a USB keyboard and mouse for an external computer! For this, there are USB Type C and MicroUSB connectors on the back, labeled «OTG». You can connect to an external computer using a standard USB cable from a phone (which is usually always with you).
+
+![](https://habrastorage.org/webt/e2/wa/m5/e2wam5d1bbckfdxpvqwl-i6aqle.jpeg)
+Fig.2.2 — On the right: the power connector 5.5x2.5mm, the main LAN connector, POE indicator, USB 3.0 Type A, USB Type C (with alternate HDMI mode), microSD card reader and two «magic» buttons
+
+Switching to the external keyboard mode is done with the «K» button on the right side of the adminbook. And there are actually three modes, since the keyboard+trackpoint combo can also work as a Bluetooth keyboard/mouse!
+
+Moreover, to save energy, the keyboard and trackpoint can work autonomously from the rest of the adminbook. When the adminbook is turned off, pressing «K» turns on only the keyboard and trackpoint, so they can be connected to another computer and used there.
+
+Of course, the keyboard is water-resistant. Excess water is drained down through the drainage holes.
+
+### 3\. Video subsystem
+
+There are some devices that normally do not need a monitor and keyboard. For example, industrial computers, servers or DVRs. And since the monitor is «not needed», it is, in most cases, absent.
+
+And when there is a need to configure such a device from the console, it can be a big surprise that the entire office is working on laptops and there is not a single stationary monitor within reach. Therefore, in some cases you have to take a monitor with you.
+
+But you don’t need to worry about this if you have the adminbook.
+
+The fact is that the video outputs of the adminbook can switch «in the opposite direction» and work as video inputs, displaying the incoming image on the built-in screen. So, the adminbook can also replace a monitor (in addition to replacing the mouse and keyboard).
+
+![](https://habrastorage.org/webt/4a/qr/f-/4aqrf-1sgstwwffhx-n4wr0p7ws.jpeg)
+Fig.3.1 — On the left side of the adminbook, there are Mini DisplayPort, USB Type C (with alternate DisplayPort mode), SD card reader, USB 3.0 Type A connectors, HDMI, four audio connectors, VGA and power button
+
+Switching modes between input and output is done by pressing the «M» button on the right side of the adminbook.
+
+The video subsystem, like the keyboard, can work autonomously — that is, when used as a monitor, the other parts of the adminbook remain disabled. This mode is also turned on with the «M» button.
+
+Detailed screen adjustment (contrast, geometry, video input selection, etc.) is performed using the menu, brought up with the «SCR» button (Fn + F4).
+
+The adminbook has HDMI, MiniDP, VGA and USB Type C connectors (with DisplayPort and HDMI alternate mode) for video input / output. The integrated GPU can display the image simultaneously in three directions (including the integrated display).
+
+The adminbook display is a FullHD (1920x1080), 9.5’’ matte screen. The brightness is sufficient for working outside during the day. To make this even easier, the set includes folding blinds for protection from sunlight.
+
+![](https://habrastorage.org/webt/k-/nc/rh/k-ncrhphspvcoimfds1wurnzk3i.jpeg)
+Fig.3.2 — Blinds to protect from sunlight
+
+In addition to video output via these connectors, the adminbook can use wireless transmission via WiDi or Miracast protocols.
+
+### 4\. Emulation of external drives
+
+One of the options for installing the operating system is to install it from a CD / DVD, but now very few computers have optical drives. USB connectors are everywhere, though. Therefore, the adminbook can pretend to be an external optical drive connected via USB.
+
+That allows connecting it to any computer to install an operating system on it, while also running boot discs with test programs or antiviruses.
+
+To connect, it uses the same USB cable that’s used for connecting it to a desktop as an external keyboard/mouse.
+
+The “CD” button (Fn + F2) controls the drive emulation — select a disc image (in an .iso file) and mount / unmount it.
+
+If you need to copy data from a computer or to it, the adminbook can emulate an external hard drive connected via the same USB cable. HDD emulation is also enabled by the “CD” button.
+
+This button also turns on the emulation of bootable USB flash drives. They are now used to install operating systems almost more often than CDs. Therefore, the adminbook can pretend to be a bootable flash drive.
+
+The .iso files are located on a separate partition of the hard disk. This allows you to use them regardless of the operating system. Moreover, in the emulation menu you can connect a virtual drive to one of the USB interfaces of the adminbook. This makes it possible to install an operating system on the adminbook using itself as an installation disc drive.
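+
+For comparison, Linux can already do this kind of drive emulation on hardware whose USB controller supports device (OTG) mode, via the mass-storage gadget module; a rough sketch, where the image path is just an example:
+
+```
+# Present an ISO image to the host computer as a read-only removable USB drive
+sudo modprobe g_mass_storage file=/data/images/debian.iso ro=1 removable=1
+# Stop presenting the drive
+sudo modprobe -r g_mass_storage
+```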
+
+By the way, the adminbook is designed to work under Windows 10 and Debian / Kali / Ubuntu. The menu system, called via the function buttons with Fn, runs autonomously on a separate microcontroller.
+
+### 5\. Rear connectors
+
+First, a classic DB-9 connector for RS-232. Any admin notebook simply has to have it. We have it here, too, and it is galvanically isolated from the rest of the notebook.
+
+In addition to RS-232, RS-485, which is widely used in industrial automation, is supported. It comes in two-wire and four-wire versions, with or without a terminating resistor, and with the ability to enable a protective offset. It can also work in RS-422 and UART modes.
+
+All these protocols are configured in the on-screen menu, called by the «COM» button (Fn + F8).
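+
+For reference, once a serial port like this shows up in Linux (a hardware UART typically appears as /dev/ttyS0; the exact device node here is an assumption, so check dmesg on your system), attaching to a switch or router console is a one-liner, for example with screen:
+
+```
+# Open a serial console at the common 115200 8N1 settings
+screen /dev/ttyS0 115200
+# Detach and end the session with Ctrl-A, then k
+```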
+
+Since there are multiple protocols, it is possible to accidentally connect the equipment to a wrong connector and break it.
+
+To prevent this from happening, when you turn off the computer (or go into sleep mode, or close the display lid), the COM port switches to the default mode. This may be a “port disabled” state, or enabling one of the protocols.
+
+![](https://habrastorage.org/webt/uz/ii/ig/uziiig_yr86yzdcnivkbapkbbgi.jpeg)
+Fig.5.1 — The rear connectors: DB-9, SATA + SATA Power, HD Mini SAS, the second wired LAN connector, two USB 3.0 Type A connectors, two USB 2.0 MicroB connectors, three USB Type C connectors, a USIM card tray, a PBD-12 pin connector (jack)
+
+The adminbook has one more serial port. While the first one uses the chipset’s hardware UART, the second one is connected to a USB 2.0 line through an FT232H converter.
+
+Thanks to this, via COM2, you can exchange data via I2C, SMBus, SPI, JTAG, UART protocols or use it as 8 outputs for Bit-bang / GPIO. These protocols are used when working with microcontrollers, flashing firmware on routers and debugging any other electronics. For this purpose, pin connectors are usually used with a 2.54mm pitch. Therefore, COM2 is made to look like one of these connectors.
+
+![](https://habrastorage.org/webt/qd/rc/ln/qdrclnoljgnlohthok4hgjb0be4.jpeg)
+Fig.5.2 — USB to UART adapter replaced by COM2 port
+
+There is also a secondary LAN interface at the back. Like the main one, it is gigabit-capable, with support for VLAN. Both interfaces are able to test the integrity of the cable (pair length and short circuits), the presence of connected devices, the available communication speeds, and the presence of POE voltage. Using a wiremap adapter on the other end (see chapter 17), it is possible to determine how the cable is crimped.
+
+The network interface menu is called with the “LAN” button (Fn + F6).
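+
+On an ordinary Linux notebook, some of these checks can already be done in software, assuming the ethtool utility is installed and the interface is named eth0 (adjust to your system); the cable-test option additionally depends on the NIC driver and a recent kernel:
+
+```
+# Show link state, negotiated speed, and supported modes
+ethtool eth0
+# Run a TDR-style cable test on drivers/kernels that support it
+sudo ethtool --cable-test eth0
+```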
+
+The adminbook has a combined SATA + SATA Power connector, connected directly to the chipset. That makes it possible to perform low-level tests of hard drives that do not work through USB-SATA adapters. Previously, you had to do it through ExpressCards-type adapters, but the adminbook can do without them because it has a true SATA output.
+
+![](https://habrastorage.org/webt/dr/si/in/drsiinbafiyz8ztzwrowtvi0lk8.jpeg)
+Fig.5.3 — USB to SATA/IDE and ExpressCard to SATA adapters
+
+The adminbook also has a connector that no other laptops have — HD Mini SAS (SFF-8643). PCIe x4 is routed outside through this connector. Thus, it's possible to connect external U.2 drives (directly) or M.2 drives (through an adapter), or even a typical desktop PCIe expansion card (like a graphics card).
+
+![](https://habrastorage.org/webt/ud/ph/86/udph860bshazyd6lvuzvwgymwnk.jpeg)
+Fig.5.4 — HD Mini SAS (SFF-8643) to U.2 cable
+
+![](https://habrastorage.org/webt/kx/dd/99/kxdd99krcllm5ooz67l_egcttym.jpeg)
+Fig.5.5 — U.2 drive
+
+![](https://habrastorage.org/webt/xn/de/gx/xndegxy5i1g7h2lwefs2jt1scpq.jpeg)
+Fig.5.6 — U.2 to M.2 adapter
+
+![](https://habrastorage.org/webt/z2/dd/hd/z2ddhdoioezdwov_nv9e3b0egsa.jpeg)
+Fig.5.7 — Combined adapter from U.2 to M.2 and PCIe (sample M.2 22110 drive is installed)
+
+Unfortunately, the limitations of the chipset don’t allow arbitrary use of PCIe lanes. In addition, the processor uses the same data lanes for PCIe and SATA. Therefore, the rear connectors can only work in two ways:
+— all four PCIe lanes go to the Mini SAS connector (the second network interface and SATA don’t work)
+— two PCIe lanes go to the Mini SAS, and two lanes to the second network interface and SATA connector
+
+On the back there are also two USB connectors (usual and Type C), which are constantly powered. That allows you to charge other devices from your notebook, even when the notebook is turned off.
+
+### 6\. Power Supply
+
+The adminbook is designed to work in difficult and unpredictable conditions, therefore, it is able to receive power in various ways.
+
+**Method number one** is Power Delivery. The power supply cable can be connected to any USB Type C connector (except the one marked “OTG”).
+
+**The second option** is from a normal 5V phone charger with a microUSB or USB Type C connector. At the same time, if you connect to the ports labeled QC 3.0, the QuickCharge fast charging standard will be supported.
+
+**The third option** — from any source of 12-60V DC power. To connect, use a coaxial (also known as “barrel”) 5.5x2.5mm power connector, often found in laptop power supplies.
+
+For greater safety, the 12-60V power supply is galvanically isolated from the rest of the notebook. In addition, there’s reverse polarity protection: in fact, the adminbook can receive power even if the positive and negative wires are swapped.
+
+![](https://habrastorage.org/webt/ju/xo/c3/juxoc3lxi7urqwgegyd6ida5h_8.jpeg)
+Fig.6.1 — The cable, connecting the power supply to the adminbook (terminated with 5.5x2.5mm connectors)
+
+Adapters for a car cigarette lighter and crocodile clips are included in the box.
+
+![](https://habrastorage.org/webt/l6/-v/gv/l6-vgvqjrssirnvyi14czhi0mrc.jpeg)
+Fig.6.2 — Adapter from 5.5x2.5mm coaxial connector to crocodile clips
+
+![](https://habrastorage.org/webt/zw/an/gs/zwangsvfdvoievatpbfxqvxrszg.png)
+Fig.6.3 — Adapter to a car cigarette lighter
+
+**The fourth option** — Power Over Ethernet (POE) through the main network adapter. Supported options are 802.3af, 802.3at and Passive POE. Input voltage from 12 to 60V. This method is convenient if you have to work on the roof or on the tower, setting up Wi-Fi antennas. Power to them comes through Ethernet cables, and there is no other electricity on the tower.
+
+POE electricity can be used in three ways:
+
+ * power the notebook only
+ * forward to a second network adapter and power the notebook from batteries
+ * power the notebook and the antenna at the same time
+
+
+
+To prevent equipment damage, if one of the Ethernet cables is disconnected, the power to the second network interface is terminated. The power can only be turned on manually through the corresponding menu item.
+
+When using the 802.3af / at protocols, you can set the power class that the adminbook will request from the power supply device. This and other POE properties are configured from the menu called with the “LAN” button (Fn + F6).
+
+By the way, you can remotely reset Ubiquiti access points (which is done by shorting certain wires in the cable) with the second network interface.
+
+The indicator next to the main network interface shows the presence and type of POE: green — 802.3af / at, red — Passive POE.
+
+**The last, fifth** power source is the battery. Here it’s a 42 Wh LiPo battery.
+
+In case the external power supply does not provide sufficient power, the missing power can be drawn from the battery. Thus, it can draw power from the battery and external sources at the same time.
+
+### 7\. Display unit
+
+The display can tilt 180 degrees, and it’s locked with latches while closed (it opens with a button on the front side). When the display is closed, the adminbook doesn’t react to any external button presses.
+
+In addition to the screen, the notebook lid contains:
+
+ * front and rear cameras with lights, microphones, activity LEDs and mechanical curtains
+ * LED of the upper backlight of the keyboard (similar to ThinkLight)
+ * LED indicators for Wi-Fi, Bluetooth, HDD and others
+ * wireless protocol antennas (in the blue plastic insert)
+ * photo sensors and LEDs for the infrared remote
+ * gyroscope, accelerometer, magnetometer
+
+
+
+The plastic insert for the antennas does not reach the corners of the display lid. This is done because in the «traveling» notebooks the corners are most affected by impacts, and it's desirable that they be made of metal.
+
+### 8\. Webcams
+
+The notebook has 2 webcams. The outward-facing one is 8MP (4K / UltraHD), while the “selfie” one is 2MP (FullHD). Both cameras have a light controlled by separate buttons (Fn + G and Fn + H). Each camera has a mechanical curtain and an activity LED. Closing the mechanical curtain also turns off the microphones on the corresponding side (configurable).
+
+The external camera has two quick launch buttons — Fn + 1 takes an instant photo, Fn + 2 turns on video recording. The internal camera has a combination of Fn + Q and Fn + W.
+
+You can configure cameras and microphones from the menu called up by the “CAM” button (Fn + F10).
+
+### 9\. Indicator row
+
+It has the following indicators: Microphone, NumLock, ScrollLock, hard drive access, battery charge, external power connection, sleep mode, mobile connection, WiFi, Bluetooth.
+
+Three indicators are made to shine through the back side of the display lid, so that they can be seen while the lid is closed: external power connection, battery charge, sleep mode.
+
+Indicators are color-coded.
+
+Microphone — lights up red when all microphones are muted
+
+Battery charge: more than 60% is green, 30-60% is yellow, less than 30% is red, less than 10% is blinking red.
+
+External power: green — power is supplied, the battery is charged; yellow — power is supplied, the battery is charging; red — there is not enough external power to operate, the battery is drained
+
+Mobile: 4G (LTE) — green, 3G — yellow, EDGE / GPRS — red, blinking red — on, but no connection
+
+Wi-Fi: green — connected to 5 GHz, yellow — to 2.4 GHz, red — on, but not connected
+
+You can configure the indication with the “IND” button (Fn + F9)
+
+### 10\. Infrared remote control
+
+Near the indicators (on the front and back of the display lid) there are infrared photo sensors and LEDs for recording and playing back commands from IR remotes. You can set this up, as well as emulate a remote control, by pressing the “IR” button (Fn + F5).
+
+### 11\. Wireless interfaces
+
+WiFi — dual-band, 802.11a/b/g/n/ac with support for Wireless Direct, Intel WI-Di / Miracast, Wake On Wireless LAN.
+
+Why is Miracast here, you ask? Because it is already embedded in many WiFi chips, so its presence does not add to the cost. And you can transfer the image wirelessly to TVs, projectors and set-top boxes that already have Miracast built in.
+
+Regarding Bluetooth, there’s nothing special. It’s version 4.2 or newer. By the way, the keyboard and trackpoint have a separate Bluetooth module. This is much easier than connecting them to the system-wide module.
+
+Of course, the adminbook has a built-in cellular modem for 4G (LTE) / 3G / EDGE / GPRS, as well as a GPS / GLONASS / Galileo / Beidou receiver. This receiver also doesn’t cost much, because it’s already built into the 4G modem.
+
+There is also an NFC communication module, with the antenna under the spacebar. Antennas of all other wireless interfaces are in a plastic insert above the display.
+
+You can configure wireless interfaces with the «WRLS» button (Fn + F7).
+
+### 12\. USB connectors
+
+In total, four USB 3.0 Type A connectors and four USB 3.1 Type C connectors are built into the adminbook. Peripherals are connected to the adminbook through these.
+
+One more Type C and MicroUSB are allocated only for keyboard / mouse / drive emulation (denoted as “OTG”).
+
+The MicroUSB connector labeled «QC 3.0» can not only be used for power; it can also switch to normal USB 2.0 port mode, just with a Micro-B plug instead of the normal Type A. Why is this necessary? Because flashing some electronics sometimes requires non-standard USB A to USB A cables.
+
+Instead of making such adapters ourselves, you can use a regular phone charging cable plugged into this Micro-B connector, or a USB A to USB Type C cable (if you have one).
+
+![](https://habrastorage.org/webt/0p/90/7e/0p907ezbunekqwobeogjgs5fgsa.jpeg)
+Fig.12.1 — Homemade USB A to USB A cable
+
+Since USB Type C supports alternate modes, it makes sense to use it. Alternate modes are when the connector works as HDMI or DisplayPort video outputs. Though you’ll need adapters to connect it to a TV or monitor. Or appropriate cables that have Type C on one end and HDMI / DP on the other. However, USB Type C to USB Type C cables might soon become the most common video transfer cable.
+
+The Type C connector on the left side of the adminbook supports an alternate Display Port mode, and on the right side, HDMI. Like the other video outputs of the adminbook, they can work as both input and output.
+
+The one thing left to say is that Type C is bidirectional in regard to power delivery — it can both take in power as well as output it.
+
+### 13\. Other
+
+On the left side there are four audio connectors: Line In, Line Out, Microphone and the combo headset jack (headphones + microphone). Stereo, quad and 5.1 output modes are supported.
+
+Audio outputs are specially placed next to the video connectors, so that when connected to any equipment, the wires are on one side.
+
+Built-in speakers are on the sides. Outside, they are covered with grills and acoustic fabric with water-repellent impregnation.
+
+There are also two slots for memory cards — full-size SD and MicroSD. If you think that the first slot is needed only for copying photos from a camera — you are mistaken. Nowadays both single-board computers like the Raspberry Pi and even rack-mount servers boot from SD cards. MicroSD cards are also commonly found outside of phones. In general, you need both card slots.
+
+Sensors more familiar to phones — a gyroscope, an accelerometer and a magnetometer — are built into the lid of the notebook. Thanks to this, one can determine where the notebook cameras are directed and use this for augmented reality, as well as navigation. Sensors are controlled via the menu using the “SNSR” button (Fn + F11).
+
+Among the Fn function buttons, I haven’t yet described F1 (“MAN”) and F12 (“ETC”). The first is a built-in guide on connectors, modes and how to use the adminbook. The second holds the settings of non-standard subsystems that don’t have separate buttons.
+
+### 14\. What's inside
+
+The adminbook is based on the Core i5-7Y57 CPU (Kaby Lake architecture). Although it’s less of a CPU and more of a real SoC (System on a Chip): almost the entire computer (without peripherals) fits in one chip the size of a thumbnail (2x1.6 cm).
+
+It emits from 3.5W to 7W of heat (depending on the frequency), so a passive cooling system is adequate in this case.
+
+8GB of RAM are installed by default, expandable up to 16GB.
+
+A 256GB M.2 2280 SSD, connected with two PCIe lanes, is used as the hard drive.
+
+Wi-Fi + Bluetooth and WWAN + GNSS adapters are also designed as M.2 modules.
+
+RAM, the hard drive and wireless adapters are located on the top of the motherboard and can be replaced by the user — just unscrew and lift the keyboard.
+
+The battery is assembled from four LP545590 cells and can also be replaced.
+
+SOC and other irreplaceable hardware are located on the bottom of the motherboard. The heating components for cooling are pressed directly against the case.
+
+External connectors are located on daughter boards connected to the motherboard via ribbon cables. That makes it possible to release different versions of the adminbook based on the same motherboard.
+
+For example, here is one possible version:
+
+![](https://habrastorage.org/webt/j9/sw/vq/j9swvqfi1-ituc4u9nr6-ijv3nq.jpeg)
+Fig.14.1 — Adminbook A4 (front view)
+
+![](https://habrastorage.org/webt/pw/fq/ag/pwfqagvrluf1dbnmcd0rt-0eyc0.jpeg)
+Fig.14.2 — Adminbook A4 (back view)
+
+![](https://habrastorage.org/webt/mn/ir/8i/mnir8in1pssve0m2tymevz2sue4.jpeg)
+Fig.14.3 — Adminbook A4 (keyboard)
+
+This is an adminbook with a 12.5” display; its overall dimensions are 210x297mm (A4 paper format). The keyboard is full-size, with standard key size (only the top row is a bit narrower). All the standard keys are there, except for the numpad and the Right Win key, which are available through Fn. A trackpad has also been added.
+
+### 15\. The underside of the adminbook
+
+Not expecting anything interesting from the bottom? But there is!
+
+First I will say a few words about the rubber feet. On my ThinkPad, they sometimes fall off and get lost. I don't know if it's bad glue, or a backpack that's not suitable for a notebook, but it happens.
+
+Therefore, in the adminbook, the rubber feet are screwed in (the screws are slightly buried in rubber, so as not to scratch the tables). The feet are sufficiently streamlined so that they cling less to other objects.
+
+On the bottom there are visible drainage holes marked with a water drop.
+
+And there are four threaded holes for attaching fasteners to the adminbook.
+
+![](https://habrastorage.org/webt/3d/q9/ku/3dq9kus6t7ql3rh5mbpfo3_xqng.jpeg)
+Fig.15.1 — The underside of the adminbook
+
+The large hole in the center has a tripod thread.
+
+![](https://habrastorage.org/webt/t5/e5/ps/t5e5ps3iasu2j-22uc2rgl_5x_y.jpeg)
+Fig.15.2 — Camera clamp mount
+
+Why is this necessary? Because sometimes you have to hang up high, holding the mast with one hand, holding the notebook with the second, and typing something with the third… Unfortunately, I am not Shiva, so such tricks are not easy for me. Instead, you can simply screw the adminbook onto any protruding part of the structure with a camera mount and free your hands!
+
+No protruding parts? No problem. A plate with neodymium magnets is screwed to the three other holes, and the adminbook can be stuck magnetically to any steel surface — even a vertical one! As you can see, opening the display 180° is pretty useful here.
+
+![](https://habrastorage.org/webt/ua/28/ub/ua28ubhpyrmountubiqjegiibem.jpeg)
+Fig.15.3 — Fastening with magnets and shaped holes for nails / screws
+
+And if there is no metal? For example, when working on a roof with only a wooden wall nearby. Then you can drive one or two screws into the wall and hang the adminbook on them. For this, there are special slots in the mount, plus an eyelet on the handle.
+
+For especially difficult cases, there’s an arm mount. This is not very convenient, but it’s better than nothing. Besides, it allows you to move around even while the notebook is running.
+
+![](https://habrastorage.org/webt/tp/fo/0y/tpfo0y_8gku4bmlbeqwfux1j4me.jpeg)
+Fig.15.4 — Arm mount
+
+In general, these three holes use a regular metric thread, specifically so that you can make your own DIY mounts and attach them with ordinary screws.
+
+Besides fasteners, an additional radiator (heatsink) can be screwed to these holes, so that you can work for a long time under high load or at high ambient temperature.
+
+![](https://habrastorage.org/webt/k4/jo/eq/k4joeqhmaxgvzhnxno6z3alg5go.jpeg)
+Fig.15.5 — Adminbook with additional radiator
+
+### 16\. Accessories
+
+The adminbook has some unique features, and some of them are implemented using equipment designed specifically for the adminbook. Therefore, these accessories are included right in the box. Some non-unique accessories are included as well.
+
+Here is a complete list of both:
+
+ * fasteners with magnets
+ * arm mount
+ * heatsink
+ * screen blinds covering it from sunlight
+ * HD Mini SAS to U.2 cable
+ * combined adapter from U.2 to M.2 and PCIe
+ * power cable, terminated by coaxial 5.5x2.5mm connectors
+ * adapter from power cable to cigarette lighter
+ * adapter from power cable to crocodile clips
+ * different adapters from the power cable to coaxial connectors
+ * universal power supply and power cord from it into the outlet
+
+
+
+### 17\. Power supply
+
+Since this is a power supply for a system administrator's notebook, it would be nice to make it universal, capable of powering various electronic devices. Fortunately, the vast majority of devices are connected via coaxial connectors or USB. I mean devices with external power supplies: routers, switches, notebooks, nettops, single-board computers, DVRs, IPTV set top boxes, satellite tuners and more.
+
+![](https://habrastorage.org/webt/jv/zs/ve/jvzsveqavvi2ihuoajjnsr1xlp0.jpeg)
+Fig.17.1 — Adapters from 5.5x2.5mm coaxial connector to other types of connectors
+
+There aren’t many connector types, which makes it possible to get by with an adjustable-voltage PSU and adapters for the necessary connectors. It also needs to support various power delivery standards.
+
+In our case, the power supply supports the following modes:
+
+ * Power Delivery — displayed as **[pd]**
+ * Quick Charge **[qc]**
+ * 802.3af/at **[at]**
+ * voltage from 5 to 54 volts in 0.5V increments (displayed voltage)
+
+
+
+![](https://habrastorage.org/webt/fj/jm/qv/fjjmqvdhezywuyh9ew3umy9wgmg.jpeg)
+Fig.17.2 — Mode display on the 7-segment indicator (1.9. = 19.5V)
+
+![](https://habrastorage.org/webt/h9/zg/u0/h9zgu0ngl01rvhgivlw7fb49gpq.jpeg)
+Fig.17.3 — Front and top sides of power supply
+
+USB outputs on the power supply (5V 2A) are always on. On the other outputs the voltage is applied by pressing the ON/OFF button.
+
+The desired mode is selected with the MODE button and this selection is remembered even when the power is turned off. The modes are listed like this: pd, qc, at, then a series of voltages.
+
+Voltage increases by pressing and holding the MODE button, decreases by short pressing. Step to the right — 1 Volt, step to the left — 0.5 Volt. Half a volt is needed because some equipment requires, for example, 19.5 volts. These half volts are displayed on the display with decimal points (19V -> **[19]** , 19.5V -> **[1.9.]** ).
+
+When power is on, the green LED is on. When a short-circuit or overcurrent protection is triggered, **[SH]** is displayed, and the LED lights up red.
+
+In the Power Delivery and Quick Charge modes, voltage is applied to the USB outputs (Type A and Type C). Only one of them can be used at one time.
+
+In 802.3af/at modes, the power supply acts as an injector, combining the supply voltage with data from the LAN connector and supplying it to the POE connector. Power is supplied only if a device with 802.3af or 802.3at support is plugged into the POE connector.
+
+But in the simple voltage supply mode, electricity is supplied through the POE connector immediately, without any checks. This is the so-called Passive POE — the positive side goes to conductors 4 and 5, and the negative side to conductors 7 and 8. At the same time, the voltage is applied to the coaxial connector. Adapters for various types of connectors are used in this mode.
+
+The power supply unit has a built-in button to remotely reset Ubiquiti access points. This is a very useful feature that allows you to reset the antenna to factory settings without having to climb the mast. I don’t know whether any other manufacturer supports a feature like this.
+
+The power supply also houses the passive wiremap adapter, which allows you to check that an Ethernet cable is crimped correctly. The active part is located in the Ethernet ports of the adminbook.
+
+![](https://habrastorage.org/webt/pp/bm/ws/ppbmws4g1o5j05eyqqulnwuuwge.jpeg)
+Fig.17.4 — Back side and wiremap adapter
+
+Of course, the network cable tester built into the adminbook will not replace a professional OTDR, but for most tasks it will be enough.
+
+To prevent overheating, part of the PSU’s body acts as an aluminum heatsink. The power supply is rated at 65 watts and measures 10x5x4cm.
+
+### 18\. Afterword
+
+“It won’t fit into such a small case!” — the sceptics will probably say. To be frank, I also sometimes think that way when re-reading what I wrote above.
+
+And then I open the 3D model and see that all the parts fit. Of course, I am not an electronics engineer, and I have surely missed some important things. But I hope that if there are mistakes, they are “overcorrections”. That is, real engineers would fit all of that into an even smaller case.
+
+By and large, the adminbook can be divided into 5 functional parts:
+
+ * the usual part, as in all notebooks — processor, memory, hard drive, etc.
+ * keyboard and trackpoint that can work separately
+ * autonomous video subsystem
+ * subsystem for managing non-standard features (enable / disable POE, infrared remote control, PCIe mode switching, LAN testing, etc.)
+ * power subsystem
+
+
+
+If we consider them separately, then everything looks quite feasible.
+
+The **Kaby Lake SoC** contains a CPU, a graphics accelerator, a memory controller, PCIe, a SATA controller, a USB controller for 6 USB3 and 10 USB2 outputs, a Gigabit Ethernet controller, 4 lanes to connect webcams, integrated audio, and so on.
+
+All that remains is to trace the lanes to connectors and supply power to it.
+
+**The keyboard and trackpoint** form a separate module that connects via USB to the adminbook or to an external connector. Nothing complicated here: USB and Bluetooth keyboards are very widespread. In our case, we additionally need a rewritable scan code table and a way to transfer non-standard keys over a separate interface other than USB.
+
+**The video subsystem** receives the video signal from the adminbook or from the external connectors. In fact, this is a regular monitor with a video switch plus a couple of VGA converters.
+
+**Non-standard features** are managed independently of the operating system. The easiest way to do this is via a separate microcontroller that receives the codes of non-standard key presses (those pressed with Fn) and performs the corresponding actions.
+
+Since you have to display a menu to change the settings, the microcontroller has a video output, connected to the adminbook for the duration of the setup.
+
+**The internal PSU** is galvanically isolated from the rest of the system. Why not? On habr.com there was an article about making a 100W planar transformer only 9.6mm thick! And it only costs $0.50.
+
+So the electronic part of the adminbook is quite feasible. There is also the programming part, and I don’t know which of the two will be harder.
+
+This concludes my fairly long article. It is long, even though I simplified, shortened and threw out minor details.
+
+The ideal ending for this article would be a link to an online store where you can buy an adminbook. But it's not yet designed and released, since that requires money.
+
+Unfortunately, I have no experience with Kickstarter or Indiegogo. Maybe you do? Let's do it together!
+
+### Update
+
+Many people asked for a simplified version. OK. Done. Sorry — it's just a 3D model, without a render.
+
+Deleted: the second LAN adapter, the microSD card reader, one USB Type C port, the second camera, the camera lights and curtains, the display latch, and the unnecessary audio connectors.
+
+Also, this version has no infrared remote control, no reprogrammable keyboard, no QC 3.0 charging, and no POE power input.
+
+![](https://habrastorage.org/webt/3l/lg/vm/3llgvmv4pebiruzgldqckab0uyc.jpeg)
+![](https://habrastorage.org/webt/sp/x6/rv/spx6rvmn6zlumbwg46xwfmjnako.jpeg)
+![](https://habrastorage.org/webt/sm/g0/xz/smg0xzdspfm3vr3gep__6bcqae8.jpeg)
+
+
+--------------------------------------------------------------------------------
+
+via: https://habr.com/en/post/437912/
+
+作者:[sukhe][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://habr.com/en/users/sukhe/
+[b]: https://github.com/lujun9972
+[1]: https://habrastorage.org/webt/_1/mp/vl/_1mpvlyujldpnad0cvvzvbci50y.jpeg
+[2]: https://habrastorage.org/webt/mr/m6/d3/mrm6d3szvghhpghfchsl_-lzgb4.jpeg
diff --git a/sources/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md b/sources/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md
new file mode 100644
index 0000000000..989cd0d60f
--- /dev/null
+++ b/sources/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md
@@ -0,0 +1,102 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro)
+[#]: via: (https://itsfoss.com/olive-video-editor)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro
+======
+
+[Olive][1] is a new open source video editor under development. This non-linear video editor aims to provide a free alternative to high-end professional video editing software. Too high an aim? I think so.
+
+If you have read our [list of best video editors for Linux][2], you might have noticed that most of the ‘professional-grade’ video editors such as [Lightworks][3] or DaVinci Resolve are neither free nor open source.
+
+[Kdenlive][4] and Shotcut are there but they don’t often meet the standards of professional video editing (that’s what many Linux users have expressed).
+
+This gap between the hobbyist and professional video editors prompted the developer(s) of Olive to start this project.
+
+![Olive Video Editor][5]
+
+*Olive Video Editor Interface*
+
+There is a detailed [review of Olive on Libre Graphics World][6]. Actually, this is where I came to know about Olive first. You should read the article if you are interested in knowing more about it.
+
+### Installing Olive Video Editor in Linux
+
+Let me remind you. Olive is in the early stages of development. You’ll find plenty of bugs and missing/incomplete features. You should not treat it as your main video editor just yet.
+
+If you want to test Olive, there are several ways to install it on Linux.
+
+#### Install Olive in Ubuntu-based distributions via PPA
+
+You can install Olive via its official PPA in Ubuntu, Mint and other Ubuntu-based distributions.
+
+```
+sudo add-apt-repository ppa:olive-editor/olive-editor
+sudo apt-get update
+sudo apt-get install olive-editor
+```
+
+#### Install Olive via Snap
+
+If your Linux distribution supports Snap, you can use the command below to install it.
+
+```
+sudo snap install --edge olive-editor
+```
+
+#### Install Olive via Flatpak
+
+If your [Linux distribution supports Flatpak][7], you can install Olive video editor via Flatpak.
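+
+A minimal sketch of what that might look like from the command line, assuming Olive is published on Flathub under the application ID `org.olivevideoeditor.Olive` (check the Flathub listing for the exact ID):
+
+```
+flatpak install flathub org.olivevideoeditor.Olive
+flatpak run org.olivevideoeditor.Olive
+```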
+
+#### Use Olive via AppImage
+
+Don’t want to install it? Download the [AppImage][8] file, set it as executable and run it.
+
+Both 32-bit and 64-bit AppImage files are available. You should download the appropriate file.
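+
+For reference, the usual AppImage workflow looks roughly like this (the file name below is hypothetical; use the one you actually downloaded):
+
+```
+chmod +x Olive-x86_64.AppImage
+./Olive-x86_64.AppImage
+```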
+
+Olive is also available for Windows and macOS. You can get it from their [download page][9].
+
+### Want to support the development of Olive video editor?
+
+If you like what Olive is trying to achieve and want to support it, here are a few ways you can do that.
+
+If you are testing Olive and find some bugs, please report them on its GitHub repository.
+
+If you are a programmer, go and check out the source code of Olive and see if you could help the project with your coding skills.
+
+Contributing to projects financially is another way you can help the development of open source software. You can support Olive monetarily by becoming a patron.
+
+If you don’t have either the money or coding skills to support Olive, you could still help it. Share this article or Olive’s website on social media or in Linux/software related forums and groups you frequent. A little word of mouth should help it indirectly.
+
+### What do you think of Olive?
+
+It’s too early to judge Olive. I hope that the development continues rapidly and we have a stable release of Olive by the end of the year (if I am not being overly optimistic).
+
+What do you think of Olive? Do you agree with the developer’s aim of targeting the pro-users? What features would you like Olive to have?
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/olive-video-editor
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.olivevideoeditor.org/
+[2]: https://itsfoss.com/best-video-editing-software-linux/
+[3]: https://www.lwks.com/
+[4]: https://kdenlive.org/en/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?resize=800%2C450&ssl=1
+[6]: http://libregraphicsworld.org/blog/entry/introducing-olive-new-non-linear-video-editor
+[7]: https://itsfoss.com/flatpak-guide/
+[8]: https://itsfoss.com/use-appimage-linux/
+[9]: https://www.olivevideoeditor.org/download.php
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190131 Will quantum computing break security.md b/sources/tech/20190131 Will quantum computing break security.md
new file mode 100644
index 0000000000..af374408dc
--- /dev/null
+++ b/sources/tech/20190131 Will quantum computing break security.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Will quantum computing break security?)
+[#]: via: (https://opensource.com/article/19/1/will-quantum-computing-break-security)
+[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
+
+Will quantum computing break security?
+======
+
+Do you want J. Random Hacker to be able to pretend they're your bank?
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
+
+Over the past few years, a new type of computer has arrived on the block: the quantum computer. It's arguably the sixth type of computer:
+
+ 1. **Humans:** Before there were artificial computers, people used, well, people. And people with this job were called "computers."
+
+ 2. **Mechanical analogue:** These are devices such as the [Antikythera mechanism][1], astrolabes, or slide rules.
+
+ 3. **Mechanical digital:** In this category, I'd count anything that allowed discrete mathematics but didn't use electronics for the actual calculation: the abacus, Babbage's Difference Engine, etc.
+
+ 4. **Electronic analogue:** Many of these were invented for military uses such as bomb sights, gun aiming, etc.
+
+ 5. **Electronic digital:** I'm going to go out on a limb here and characterise Colossus as the first electronic digital computer1: these are basically what we use today for anything from mobile phones to supercomputers.
+
+ 6. **Quantum computers:** These are coming and are fundamentally different from all of the previous generations.
+
+
+
+
+### What is quantum computing?
+
+Quantum computing uses concepts from quantum mechanics to allow very different types of calculations from what we're used to in "classical computing." I'm not even going to try to explain, because I know I'd do a terrible job, so I suggest you try something like [Wikipedia's definition][2] as a starting point. What's important for our purposes is to understand that quantum computers use qubits to do calculations, and for quite a few types of mathematical algorithms (and therefore computing operations) they can solve problems much faster than classical computers.
+
+What's "much faster"? Much, much faster: orders of magnitude faster. A calculation that might take years or decades with a classical computer could, in certain circumstances, take seconds. Impressive, yes? And scary. Because one of the types of problems that quantum computers should be good at solving is decrypting encrypted messages, even without the keys.
+
+This means that someone with a sufficiently powerful quantum computer should be able to read all of your current and past messages, decrypt any stored data, and maybe fake digital signatures. Is this a big thing? Yes. Do you want J. Random Hacker to be able to pretend they're your bank?2 Do you want that transaction on the blockchain where you were sold a 10 bedroom mansion in Mayfair to be "corrected" to be a bedsit in Weston-super-Mare?3
+
+### Some good news
+
+This is all scary stuff, but there's good news of various types.
+
+The first is that, in order to make any of this work at all, you need a quantum computer with a good number of qubits operating, and this is turning out to be hard.4 The general consensus is that we've got a few years before anybody has a "big" enough quantum computer to do serious damage to classical encryption algorithms.
+
+The second is that, even with a sufficient number of qubits to attack our existing algorithms, you still need even more to allow for error correction.
+
+The third is that, although there are theoretical models to show how to attack some of our existing algorithms, actually making them work is significantly harder than you or I5 might expect. In fact, some of the attacks may turn out to be infeasible or just take more years to perfect than we worry about.
+
+The fourth is that there are clever people out there who are designing quantum-computation-resistant algorithms (sometimes referred to as "post-quantum algorithms") that we can use, at least for new encryption, once they've been tested and become widely available.
+
+All in all, in fact, there's a strong body of expert opinion that says we shouldn't be overly worried about quantum computing breaking our encryption in the next five or even 10 years.
+
+### And some bad news
+
+It's not all rosy, however. Two issues stick out to me as areas of concern.
+
+ 1. People are still designing and rolling out systems that don't consider the issue. If you're coming up with a system that is likely to be in use for 10 or more years or will be encrypting or signing data that must remain confidential or attributable over those sorts of periods, then you should be considering the possible impact of quantum computing on your system.
+
+ 2. Some of the new, quantum-computing-resistant algorithms are proprietary. This means that when you and I want to start implementing systems that are designed to be quantum-computing resistant, we'll have to pay to do so. I'm a big proponent of open source, and particularly of [open source cryptography][3], and my big worry is that we just won't be able to open source these things, and worse, that when new protocol standards are created (either de facto or through standards bodies), they will choose proprietary algorithms that exclude the use of open source, whether on purpose, through ignorance, or because few good alternatives are available.
+
+
+
+
+### What to do?
+
+Luckily, there are things you can do to address both of the issues above. The first is to think and plan when designing a system about what the impact of quantum computing might be on it. Often—very often—you won't need to implement anything explicit now (and it could be hard to, given the current state of the art), but you should at least embrace [the concept of crypto-agility][4]: designing protocols and systems so you can swap out algorithms if required.7
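+
+As a toy illustration of the crypto-agility idea, even a simple script can treat the algorithm as a configuration value rather than a hard-coded choice, so it can be swapped out later without rewriting anything else (the file name here is just a placeholder):
+
+```
+# The digest algorithm is a configuration value, not a hard-coded constant,
+# so moving to a stronger (or post-quantum) algorithm later only means
+# changing this one setting.
+DIGEST_ALG="sha256"
+openssl dgst "-${DIGEST_ALG}" important-archive.tar.gz
+```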
+
+The second is a call to arms: Get involved in the open source movement and encourage everybody you know who has anything to do with cryptography to rally for open standards and for research into non-proprietary, quantum-computing-resistant algorithms. This is something that's very much on my to-do list, and an area where pressure and lobbying is just as important as the research itself.
+
+1\. I think it's fair to call it the first electronic, programmable computer. I know there were earlier non-programmable ones, and that some claim ENIAC, but I don't have the space or the energy to argue the case here.
+
+2\. No.
+
+3\. See 2. Don't get me wrong, by the way—I grew up near Weston-super-Mare, and it's got things going for it, but it's not Mayfair.
+
+4\. And if a quantum physicist says something's hard, then to my mind, it's hard.
+
+5\. And I'm assuming that neither of us is a quantum physicist or mathematician.6
+
+6\. I'm definitely not.
+
+7\. And not just for quantum-computing reasons: There's a good chance that some of our existing classical algorithms may just fall to other, non-quantum attacks such as new mathematical approaches.
+
+This article was originally published on [Alice, Eve, and Bob][5] and is reprinted with the author's permission.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/will-quantum-computing-break-security
+
+作者:[Mike Bursell][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mikecamel
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Antikythera_mechanism
+[2]: https://en.wikipedia.org/wiki/Quantum_computing
+[3]: https://opensource.com/article/17/10/many-eyes
+[4]: https://aliceevebob.com/2017/04/04/disbelieving-the-many-eyes-hypothesis/
+[5]: https://aliceevebob.com/2019/01/08/will-quantum-computing-break-security/
diff --git a/sources/tech/20190201 Top 5 Linux Distributions for New Users.md b/sources/tech/20190201 Top 5 Linux Distributions for New Users.md
new file mode 100644
index 0000000000..6b6985bf0a
--- /dev/null
+++ b/sources/tech/20190201 Top 5 Linux Distributions for New Users.md
@@ -0,0 +1,121 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 5 Linux Distributions for New Users)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+Top 5 Linux Distributions for New Users
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/deepin-main.jpg?itok=ASgr0mOP)
+
+Linux has come a long way from its original offering. But, no matter how often you hear how easy Linux is now, there are still skeptics. To back up this claim, the desktop must be simple enough for those unfamiliar with Linux to be able to make use of it. And, the truth is that plenty of desktop distributions make this a reality.
+
+### No Linux knowledge required
+
+It might be simple to misconstrue this as yet another “best user-friendly Linux distributions” list. That is not what we’re looking at here. What’s the difference? For my purposes, the defining line is whether or not Linux actually plays into the usage. In other words, could you set a user in front of a desktop operating system and have them be instantly at home with its usage? No Linux knowledge required.
+
+Believe it or not, some distributions do just that. I have five I’d like to present to you here. You’ve probably heard of all of them. They might not be your distribution of choice, but you can guarantee that they slide Linux out of the spotlight and place the user front and center.
+
+Let’s take a look at the chosen few.
+
+### Elementary OS
+
+The very philosophy of Elementary OS is centered around how people actually use their desktops. The developers and designers have gone out of their way to create a desktop that is as simple as possible. In the process, they’ve de-Linux’d Linux. That is not to say they’ve removed Linux from the equation. No. Instead, what they’ve done is create an operating system that is about as neutral as you’ll find. Elementary OS is streamlined in such a way as to make sure everything is perfectly logical. From the single Dock to the clear-to-anyone Applications menu, this is a desktop that doesn’t say to the user, “You’re using Linux!” In fact, the layout itself is reminiscent of Mac, but with the addition of a simple app menu (Figure 1).
+
+![Elementary OS Juno][2]
+
+Figure 1: The Elementary OS Juno Application menu in action.
+
+[Used with permission][3]
+
+Another important aspect of Elementary OS that places it on this list is that it’s not nearly as flexible as some other desktop distributions. Sure, some users would balk at that, but having a desktop that doesn’t throw every bell and whistle at the user makes for a very familiar environment -- one that neither requires nor allows a lot of tinkering. That aspect of the OS goes a long way to make the platform familiar to new users.
+
+And like any modern Linux desktop distribution, Elementary OS includes an App Store, called AppCenter, where users can install all the applications they need, without ever having to touch the command line.
+
+### Deepin
+
+Deepin not only gets my nod for one of the most beautiful desktops on the market, it’s also just as easy to adopt as any desktop operating system available. With a very simplistic take on the desktop interface, there’s very little standing in the way of users with zero Linux experience getting up to speed on its usage. In fact, you’d be hard-pressed to find a user who couldn’t instantly start using the Deepin desktop. The only possible hitch might be the sidebar control center (Figure 2).
+
+![][5]
+
+Figure 2: The Deepin sidebar control panel.
+
+[Used with permission][3]
+
+But even that sidebar control panel is as intuitive as any other configuration tool on the market. And anyone that has used a mobile device will be instantly at home with the layout. As for opening applications, Deepin takes a macOS Launchpad approach with the Launcher. This button is in the usual far right position on the desktop dock, so users will immediately gravitate to that, understanding that it is probably akin to the standard “Start” menu.
+
+In similar fashion as Elementary OS (and most every Linux distribution on the market), Deepin includes an app store (simply called “Store”), where plenty of apps can be installed with ease.
+
+### Ubuntu
+
+You knew it was coming. Ubuntu is most often ranked at the top of most user-friendly Linux lists. Why? Because it’s one of the chosen few where a knowledge of Linux simply isn’t necessary to get by on the desktop. Prior to the adoption of GNOME (and the ousting of Unity), that wouldn’t have been the case. Why? Because Unity often needed a bit of tweaking to get it to the point where a tiny bit of Linux knowledge wasn’t necessary (Figure 3). Now that Ubuntu has adopted GNOME, and tweaked it to the point where an understanding of GNOME isn’t even necessary, this desktop makes Linux take a back seat to simplicity and usability.
+
+![Ubuntu 18.04][7]
+
+Figure 3: The Ubuntu 18.04 desktop is instantly familiar.
+
+[Used with permission][3]
+
+Unlike Elementary OS, Ubuntu doesn’t hold the user back. So anyone who wants more from their desktop can have it. However, the out-of-the-box experience is enough for just about any user type. Anyone looking for a desktop that makes the user unaware as to just how much power they have at their fingertips could certainly do worse than Ubuntu.
+
+### Linux Mint
+
+I will preface this by saying I’ve never been the biggest fan of Linux Mint. It’s not that I don’t respect what the developers are doing; it’s more a matter of aesthetics. I prefer modern-looking desktop environments. But that old school desktop metaphor (found in the default Cinnamon desktop) is perfectly familiar to nearly anyone who uses it. With a taskbar, start button, system tray, and desktop icons (Figure 4), Linux Mint offers an interface that requires zero learning curve. In fact, some users might be initially fooled into thinking they are working with a Windows 7 clone. Even the updates warning icon will look instantly familiar to users.
+
+![Linux Mint ][9]
+
+Figure 4: The Linux Mint Cinnamon desktop is very Windows 7-ish.
+
+[Used with permission][3]
+
+Because Linux Mint is based on Ubuntu, it enjoys not only immediate familiarity but also high usability. No matter how slight their understanding of the underlying platform, users will feel instantly at home on Linux Mint.
+
+### Ubuntu Budgie
+
+Our list concludes with a distribution that also does a fantastic job of making the user forget they are using Linux, and makes working with the usual tools a simple, beautiful thing. Melding the Budgie Desktop with Ubuntu makes for an impressively easy to use distribution. And although the layout of the desktop (Figure 5) might not be the standard fare, there is no doubt the acclimation takes no time. In fact, outside of the Dock defaulting to the left side of the desktop, Ubuntu Budgie has a decidedly Elementary OS look to it.
+
+![Budgie][11]
+
+Figure 5: The Budgie desktop is as beautiful as it is simple.
+
+[Used with permission][3]
+
+The System Tray/Notification area in Ubuntu Budgie offers a few more features than the usual fare: Features such as quick access to Caffeine (a tool to keep your desktop awake), a Quick Notes tool (for taking simple notes), Night Lite switch, a Places drop-down menu (for quick access to folders), and of course the Raven applet/notification sidebar (which is similar to, but not quite as elegant as, the Control Center sidebar in Deepin). Budgie also includes an application menu (top left corner), which gives users access to all of their installed applications. Open an app and the icon will appear in the Dock. Right-click that app icon and select Keep in Dock for even quicker access.
+
+Everything about Ubuntu Budgie is intuitive, so there’s practically zero learning curve involved. It doesn’t hurt that this distribution is as elegant as it is easy to use.
+
+### Give One A Chance
+
+And there you have it, five Linux distributions that, each in their own way, offer a desktop experience that any user would be instantly familiar with. Although none of these might be your choice for top distribution, it’s hard to deny their value when it comes to users who have no familiarity with Linux.
+
+Learn more about Linux through the free ["Introduction to Linux"][12] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/files/images/elementaryosjpg-2
+[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos_0.jpg?itok=KxgNUvMW (Elementary OS Juno)
+[3]: https://www.linux.com/licenses/category/used-permission
+[4]: https://www.linux.com/files/images/deepinjpg
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/deepin.jpg?itok=VV381a9f
+[6]: https://www.linux.com/files/images/ubuntujpg-1
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_1.jpg?itok=bax-_Tsg (Ubuntu 18.04)
+[8]: https://www.linux.com/files/images/linuxmintjpg
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linuxmint.jpg?itok=8sPon0Cq (Linux Mint )
+[10]: https://www.linux.com/files/images/budgiejpg-0
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/budgie_0.jpg?itok=zcf-AHmj (Budgie)
+[12]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20190204 7 Best VPN Services For 2019.md b/sources/tech/20190204 7 Best VPN Services For 2019.md
new file mode 100644
index 0000000000..e72d7de3df
--- /dev/null
+++ b/sources/tech/20190204 7 Best VPN Services For 2019.md
@@ -0,0 +1,77 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (7 Best VPN Services For 2019)
+[#]: via: (https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/)
+[#]: author: (Editor https://www.ostechnix.com/author/editor/)
+
+7 Best VPN Services For 2019
+======
+
+At least 67 percent of global businesses have faced a data breach in the past three years. These breaches have been reported to expose hundreds of millions of customers. Studies show that an estimated 93 percent of these breaches would have been avoided had data security fundamentals been considered beforehand.
+
+Understand that poor data security can be extremely costly, especially to a business, and could quickly lead to widespread disruption and possible harm to your brand reputation. Although some businesses can pick up the pieces the hard way, there are still those that fail to recover. Today, however, you are fortunate to have access to data and network security software.
+
+![](https://www.ostechnix.com/wp-content/uploads/2019/02/vpn-1.jpeg)
+
+As you start 2019, fend off cyber-attacks by investing in a **V**irtual **P**rivate **N**etwork, commonly known as a **VPN**. When it comes to online privacy and security, there are many uncertainties. There are hundreds of different VPN providers, and picking the right one means striking just the right balance between pricing, services, and ease of use.
+
+If you are looking for a solid, 100 percent tested and secure VPN, you might want to do your due diligence and identify the best match. Here are the 7 best tried and tested VPN services for 2019.
+
+### 1. VPN Unlimited
+
+With VPN Unlimited, you have total security. This VPN allows you to use any Wi-Fi network without worrying that your personal data may be leaked. With AES-256, your data is encrypted and protected against prying third parties and hackers. This VPN ensures you stay anonymous and untracked on all websites, no matter the location. It offers a 7-day trial and a variety of protocol options: OpenVPN, IKEv2, and KeepSolid Wise. Demanding users are entitled to special extras such as a personal server, a lifetime VPN subscription, and personal IP options.
+
+### 2. VPN Lite
+
+VPN Lite is an easy-to-use and **free VPN service** that allows you to browse the internet at no charge. You remain anonymous and your privacy is protected. It obscures your IP and encrypts your data, meaning third parties are not able to track your activities on online platforms. You also get to access all online content. With VPN Lite, you get to access blocked sites in your state. You can also gain access to public Wi-Fi without the worry of having sensitive information tracked and hacked by spyware and hackers.
+
+### 3. HotSpot Shield
+
+Launched in 2005, this is a popular VPN embraced by the majority of users. The VPN protocol here is integrated by at least 70 percent of the largest security companies globally. It is also known to have thousands of servers across the globe. It comes with two free options. One is completely free but supported by online advertisements, and the second one is a 7-day trial of the flagship product. It contains military-grade data encryption and protects against malware. HotSpot Shield guarantees secure browsing and offers lightning-fast speeds.
+
+### 4. TunnelBear
+
+This is the best way to start if you are new to VPNs. It comes to you with a user-friendly interface complete with animated bears. With the help of TunnelBear, users are able to connect to servers in at least 22 countries at great speeds. It uses **AES 256-bit encryption** and guarantees no data logging, meaning your data stays protected. You also get unlimited data for up to five devices.
+
+### 5. ProtonVPN
+
+This VPN offers you a strong premium service. You may suffer from reduced connection speeds, but you also get to enjoy its unlimited data. It features an intuitive, easy-to-use interface and comes with multi-platform compatibility. Proton’s servers are said to be specifically optimized for torrenting and thus cannot give access to Netflix. You get strong security features such as protocols and encryption, meaning your browsing activities remain secure.
+
+### 6. ExpressVPN
+
+This is known as the best offshore VPN for unblocking and privacy. It has gained recognition for being the top VPN service globally resulting from solid customer support and fast speeds. It offers routers that come with browser extensions and custom firmware. ExpressVPN also has an admirable scope of quality apps, plenty of servers, and can only support up to three devices.
+
+It’s not entirely free, and happens to be one of the most expensive VPNs on the market today because it is fully packed with the most advanced features. It comes with a 30-day money-back guarantee, meaning you can freely test this VPN for a month. The good thing is, it is completely risk-free. If you need a VPN for a short duration, to bypass online censorship for instance, this could be your go-to solution. You don’t want to give trials to a spammy, slow, free program.
+
+It is also one of the best ways to enjoy online streaming as well as outdoor security. Should you need to continue using it, you only have to renew or cancel your free trial if need be. Express VPN has over 2000 servers across 90 countries, unblocks Netflix, gives lightning fast connections, and gives users total privacy.
+
+### 7. PureVPN
+
+While this VPN may not be completely free, it is among the most budget-friendly services on this list. Users can sign up for a free seven-day trial and later choose one of its paid plans. With this VPN, you get access to 750-plus servers in at least 140 countries. There is also easy installation on almost all devices. All its paid features can still be accessed within the free trial window. That includes unlimited data transfers, IP leakage protection, and ISP invisibility. The supported operating systems are iOS, Android, Windows, Linux, and macOS.
+
+### Summary
+
+With the large variety of available freemium VPN services today, why not take the opportunity to protect yourself and your customers? Understand that there are some great VPN services out there. However, even the most secure free service cannot be touted as risk-free. You might want to upgrade to a premium one for increased protection. A premium VPN lets you test it freely with a risk-free money-back guarantee. Whether you plan to sign up for a paid VPN or commit to a free one, it is highly advisable to have a VPN.
+
+**About the author:**
+
+**Renetta K. Molina** is a tech enthusiast and fitness enthusiast. She writes about technology, apps, WordPress and a variety of other topics. In her free time, she likes to play golf and read books. She loves to learn and try new things.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/
+
+作者:[Editor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190204 Top 5 open source network monitoring tools.md b/sources/tech/20190204 Top 5 open source network monitoring tools.md
new file mode 100644
index 0000000000..5b6e7f1bfa
--- /dev/null
+++ b/sources/tech/20190204 Top 5 open source network monitoring tools.md
@@ -0,0 +1,125 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 5 open source network monitoring tools)
+[#]: via: (https://opensource.com/article/19/2/network-monitoring-tools)
+[#]: author: (Paul Bischoff https://opensource.com/users/paulbischoff)
+
+Top 5 open source network monitoring tools
+======
+Keep an eye on your network to avoid downtime with these monitoring tools.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3)
+
+Maintaining a live network is one of a system administrator's most essential tasks, and keeping a watchful eye over connected systems is essential to keeping a network functioning at its best.
+
+There are many different ways to keep tabs on a modern network. Network monitoring tools are designed for the specific purpose of monitoring network traffic and response times, while application performance management solutions use agents to pull performance data from the application stack. If you have a live network, you need network monitoring to make sure you aren't vulnerable to an attacker. Likewise, if you rely on lots of different applications to run your daily operations, you will need an [application performance management][1] solution as well.
+
+This article will focus on open source network monitoring tools. These tools help monitor individual nodes and applications for signs of poor performance. Through one window, you can view the performance of an entire network and even get alerts to keep you in the loop if you're away from your desk.
+
+Before we get into the top five network monitoring tools, let's look more closely at the reasons you need to use one.
+
+### Why do I need a network monitoring tool?
+
+Network monitoring tools are vital to maintaining networks because they allow you to keep an eye on devices connected to the network from a central location. These tools help flag devices with subpar performance so you can step in and run troubleshooting to get to the root of the problem.
+
+Running in-depth troubleshooting can minimize performance problems and prevent security breaches. In practical terms, this keeps the network online and eliminates the risk of falling victim to unnecessary downtime. Regular network maintenance can also help prevent outages that could take thousands of users offline.
+
+A network monitoring tool enables you to:
+
+ * Autodiscover devices connected to your network
+ * View live and historic performance data for a range of devices and applications
+ * Configure alerts to notify you of unusual activity
+ * Generate graphs and reports to analyze network activity in greater depth
+
+### The top 5 open source network monitoring tools
+
+Now that you know why you need a network monitoring tool, take a look at the top 5 open source tools to see which might best meet your needs.
+
+#### Cacti
+
+![](https://opensource.com/sites/default/files/uploads/cacti_network-monitoring-tools.png)
+
+If you know anything about open source network monitoring tools, you've probably heard of [Cacti][2]. It's a graphing solution that acts as an addition to [RRDTool][3] and is used by many network administrators to collect performance data in LANs. Cacti comes with Simple Network Management Protocol (SNMP) support on Windows and Linux to create graphs of traffic data.
+
+Cacti typically works by using data sourced from user-created scripts that ping hosts on a network. The values returned by the scripts are stored in a MySQL database, and this data is used to generate graphs.
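+
+As a rough illustration of that mechanism, a data input script can be as small as a shell script that pings a host and prints a named value for Cacti to store and graph (the host address and the `rtt` field name here are just placeholders):
+
+```
+#!/bin/bash
+# Ping the host once and report the round-trip time in milliseconds
+HOST="192.168.1.1"
+RTT=$(ping -c 1 "$HOST" | awk -F'time=' '/time=/{print $2}' | awk '{print $1}')
+echo "rtt:${RTT}"
+```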
+
+This sounds complicated, but Cacti has templates to help speed the process along. You can also create a graph or data source template that can be used for future monitoring activity. If you'd like to try it out, [download Cacti][4] for free on Linux and Windows.
+
+#### Nagios Core
+
+![](https://opensource.com/sites/default/files/uploads/nagioscore_network-monitoring-tools.png)
+
+[Nagios Core][5] is one of the most well-known open source monitoring tools. It provides a network monitoring experience that combines open source extensibility with a top-of-the-line user interface. With Nagios Core, you can auto-discover devices, monitor connected systems, and generate sophisticated performance graphs.
+
+Support for customization is one of the main reasons Nagios Core has become so popular. For example, [Nagios V-Shell][6] was added as a PHP web interface built with AngularJS, featuring searchable tables and a RESTful API designed with CodeIgniter.
+
+If you need more versatility, you can check the Nagios Exchange, which features a range of add-ons that can incorporate additional features into your network monitoring. These range from the strictly cosmetic to monitoring enhancements like [nagiosgraph][7]. You can try it out by [downloading Nagios Core][8] for free.
+
+#### Icinga 2
+
+![](https://opensource.com/sites/default/files/uploads/icinga2_network-monitoring-tools.png)
+
+[Icinga 2][9] is another widely used open source network monitoring tool. It builds on the groundwork laid by Nagios Core. It has a flexible RESTful API that allows you to enter your own configurations and view live performance data through the dashboard. Dashboards are customizable, so you can choose exactly what information you want to monitor in your network.
+
+Visualization is an area where Icinga 2 performs particularly well. It has native support for Graphite and InfluxDB, which can turn performance data into full-featured graphs for deeper performance analysis.
+
+Icinga 2 also allows you to monitor both live and historical performance data. It offers excellent alerting capabilities for live monitoring, and you can configure it to send notifications of performance problems by email or text. You can [download Icinga 2][10] for free for Windows, Debian, RHEL, SLES, Ubuntu, Fedora, and openSUSE.
+
+#### Zabbix
+
+![](https://opensource.com/sites/default/files/uploads/zabbix_network-monitoring-tools.png)
+
+[Zabbix][11] is another industry-leading open source network monitoring tool, used by companies from Dell to Salesforce on account of its malleable network monitoring experience. Zabbix does network, server, cloud, application, and services monitoring very well.
+
+You can track network information such as network bandwidth usage, network health, and configuration changes, and weed out problems that need to be addressed. Performance data in Zabbix is connected through SNMP, Intelligent Platform Management Interface (IPMI), and IPv6.
+
+Zabbix offers a high level of convenience compared to other open source monitoring tools. For instance, you can automatically detect devices connected to your network before using an out-of-the-box template to begin monitoring your network. You can [download Zabbix][12] for free for CentOS, Debian, Oracle Linux, Red Hat Enterprise Linux, Ubuntu, and Raspbian.
+
+#### Prometheus
+
+![](https://opensource.com/sites/default/files/uploads/promethius_network-monitoring-tools.png)
+
+[Prometheus][13] is an open source network monitoring tool with a large community following. It was built specifically for monitoring time-series data. You can identify time-series data by metric name or key-value pairs. Time-series data is stored on local disks so that it's easy to access in an emergency.
+
+Prometheus' [Alertmanager][14] allows you to view notifications every time it raises an event. Alertmanager can send notifications via email, PagerDuty, or OpsGenie, and you can silence alerts if necessary.
+
+Prometheus' visual elements are excellent and allow you to switch from the browser to the template language and Grafana integration. You can also integrate various third-party data sources into Prometheus from Docker, StatsD, and JMX to customize your Prometheus experience.
+
+As a network monitoring tool, Prometheus is suitable for organizations of all sizes. The onboard integrations and the easy-to-use Alertmanager make it capable of handling any workload, regardless of its size. You can [download Prometheus][15] for free.
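+
+If you want to try it quickly, the usual pattern is to unpack the release tarball and point the binary at a configuration file; a sample `prometheus.yml` ships inside the archive (the file name below is only an example, so grab the current release for your platform from the download page):
+
+```
+tar xzf prometheus-*.linux-amd64.tar.gz
+cd prometheus-*.linux-amd64
+./prometheus --config.file=prometheus.yml
+# The web UI is then available at http://localhost:9090
+```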
+
+### Which are best?
+
+No matter what industry you're working in, if you rely on a network to do business, you need to implement some form of network monitoring. Network monitoring tools are an invaluable resource that help provide you with the visibility to keep your systems online. Monitoring your systems will give you the best chance to keep your equipment in working order.
+
+As the tools on this list show, you don't need to spend an exorbitant amount of money to reap the rewards of network monitoring. Of the five, I believe Icinga 2 and Zabbix are the best options for providing you with everything you need to start monitoring your network to keep it online. Staying vigilant will help to minimize the chance of being caught off-guard by performance issues.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/network-monitoring-tools
+
+作者:[Paul Bischoff][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/paulbischoff
+[b]: https://github.com/lujun9972
+[1]: https://www.comparitech.com/net-admin/application-performance-management/
+[2]: https://www.cacti.net/index.php
+[3]: https://en.wikipedia.org/wiki/RRDtool
+[4]: https://www.cacti.net/download_cacti.php
+[5]: https://www.nagios.org/projects/nagios-core/
+[6]: https://exchange.nagios.org/directory/Addons/Frontends-%28GUIs-and-CLIs%29/Web-Interfaces/Nagios-V-2DShell/details
+[7]: https://exchange.nagios.org/directory/Addons/Graphing-and-Trending/nagiosgraph/details#_ga=2.79847774.890594951.1545045715-2010747642.1545045715
+[8]: https://www.nagios.org/downloads/nagios-core/
+[9]: https://icinga.com/products/icinga-2/
+[10]: https://icinga.com/download/
+[11]: https://www.zabbix.com/
+[12]: https://www.zabbix.com/download
+[13]: https://prometheus.io/
+[14]: https://prometheus.io/docs/alerting/alertmanager/
+[15]: https://prometheus.io/download/
diff --git a/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md b/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md
new file mode 100644
index 0000000000..ef8c8dc460
--- /dev/null
+++ b/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md
@@ -0,0 +1,435 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (12 Methods To Check The Hard Disk And Hard Drive Partition On Linux)
+[#]: via: (https://www.2daygeek.com/linux-command-check-hard-disks-partitions/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+12 Methods To Check The Hard Disk And Hard Drive Partition On Linux
+======
+
+Linux admins usually check the available hard disks and their partitions whenever they want to add a new disk or an additional partition to the system.
+
+To view the disk partitions, we check the partition table of the hard disk.
+
+This helps you see how many partitions have already been created on the disk, and it also allows us to verify whether there is any free space left.
+
+In general, hard disks can be divided into one or more logical disks called partitions.
+
+Each partition can be used as a separate disk with its own file system, and the partition information is stored in a partition table.
+
+The partition table is a 64-byte data structure. It is part of the master boot record (MBR), a small program that is executed when a computer boots.
+
+The partition information is saved in sector 0 of the disk. Note that every partition must be formatted with an appropriate file system before files can be written to it.
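+
+As a minimal illustration, a newly created partition could be formatted and mounted like this (the partition `/dev/sdc1` and the mount point `/part1` are only examples; adjust them to your own layout):
+
+```
+# mkfs.ext4 /dev/sdc1     # create an ext4 file system on the new partition
+# mkdir -p /part1         # create a mount point
+# mount /dev/sdc1 /part1  # mount it so files can be written
+```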
+
+This can be verified using the following 12 methods.
+
+ * **`fdisk:`** manipulate the disk partition table
+ * **`sfdisk:`** display or manipulate a disk partition table
+ * **`cfdisk:`** display or manipulate a disk partition table
+ * **`parted:`** a partition manipulation program
+ * **`lsblk:`** lists information about all available or the specified block devices
+ * **`blkid:`** locate/print block device attributes
+ * **`hwinfo:`** the hardware information tool, another great utility used to probe for the hardware present in the system
+ * **`lshw:`** a small tool to extract detailed information on the hardware configuration of the machine
+ * **`inxi:`** a command line system information script built for console and IRC
+ * **`lsscsi:`** list SCSI devices (or hosts) and their attributes
+ * **`cat /proc/partitions:`** print the partitions known to the kernel
+ * **`ls -lh /dev/disk/:`** this directory contains the disk manufacturer name, serial number and partition IDs as symlinks to the real block device files
+
+
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using fdisk Command?
+
+**[fdisk][1]** stands for fixed disk or format disk. It is a CLI utility that allows users to view, create, resize, delete, move and copy partitions on a disk.
+
+```
+# fdisk -l
+
+Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+Disklabel type: dos
+Disk identifier: 0xeab59449
+
+Device Boot Start End Sectors Size Id Type
+/dev/sda1  *        20973568 62914559 41940992  20G 83 Linux
+
+
+Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+
+
+Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+Disklabel type: dos
+Disk identifier: 0x8cc8f9e5
+
+Device Boot Start End Sectors Size Id Type
+/dev/sdc1 2048 2099199 2097152 1G 83 Linux
+/dev/sdc3 4196352 6293503 2097152 1G 83 Linux
+/dev/sdc4 6293504 20971519 14678016 7G 5 Extended
+/dev/sdc5 6295552 8392703 2097152 1G 83 Linux
+
+
+Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+
+
+Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using sfdisk Command?
+
+sfdisk is a script-oriented tool for partitioning any block device. sfdisk supports MBR (DOS), GPT, SUN and SGI disk labels, but no longer provides any functionality for CHS (Cylinder-Head-Sector) addressing.
+
+```
+# sfdisk -l
+
+Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+Disklabel type: dos
+Disk identifier: 0xeab59449
+
+Device Boot Start End Sectors Size Id Type
+/dev/sda1  *        20973568 62914559 41940992  20G 83 Linux
+
+
+Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+
+
+Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+Disklabel type: dos
+Disk identifier: 0x8cc8f9e5
+
+Device Boot Start End Sectors Size Id Type
+/dev/sdc1 2048 2099199 2097152 1G 83 Linux
+/dev/sdc3 4196352 6293503 2097152 1G 83 Linux
+/dev/sdc4 6293504 20971519 14678016 7G 5 Extended
+/dev/sdc5 6295552 8392703 2097152 1G 83 Linux
+
+
+Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+
+
+Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
+Units: sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 512 bytes
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using cfdisk Command?
+
+cfdisk is a curses-based program for partitioning any block device. The default device is /dev/sda. It provides basic partitioning functionality with a user-friendly interface.
+
+```
+# cfdisk /dev/sdc
+ Disk: /dev/sdc
+ Size: 10 GiB, 10737418240 bytes, 20971520 sectors
+ Label: dos, identifier: 0x8cc8f9e5
+
+ Device Boot Start End Sectors Size Id Type
+>> /dev/sdc1 2048 2099199 2097152 1G 83 Linux
+ Free space 2099200 4196351 2097152 1G
+ /dev/sdc3 4196352 6293503 2097152 1G 83 Linux
+ /dev/sdc4 6293504 20971519 14678016 7G 5 Extended
+ ├─/dev/sdc5 6295552 8392703 2097152 1G 83 Linux
+ └─Free space 8394752 20971519 12576768 6G
+
+
+
+ ┌───────────────────────────────────────────────────────────────────────────────┐
+ │ Partition type: Linux (83) │
+ │Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7 │
+ │ Filesystem: ext2 │
+ │ Mountpoint: /part1 (mounted) │
+ └───────────────────────────────────────────────────────────────────────────────┘
+ [Bootable] [ Delete ] [ Quit ] [ Type ] [ Help ] [ Write ]
+ [ Dump ]
+
+ Quit program without writing changes
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using parted Command?
+
+**[parted][2]** is a program to manipulate disk partitions. It supports multiple partition table formats, including MS-DOS and GPT. It is useful for creating space for new operating systems, reorganising disk usage, and copying data to new hard disks.
+
+```
+# parted -l
+
+Model: ATA VBOX HARDDISK (scsi)
+Disk /dev/sda: 32.2GB
+Sector size (logical/physical): 512B/512B
+Partition Table: msdos
+Disk Flags:
+
+Number Start End Size Type File system Flags
+ 1 10.7GB 32.2GB 21.5GB primary ext4 boot
+
+
+Model: ATA VBOX HARDDISK (scsi)
+Disk /dev/sdb: 10.7GB
+Sector size (logical/physical): 512B/512B
+Partition Table: msdos
+Disk Flags:
+
+Model: ATA VBOX HARDDISK (scsi)
+Disk /dev/sdc: 10.7GB
+Sector size (logical/physical): 512B/512B
+Partition Table: msdos
+Disk Flags:
+
+Number Start End Size Type File system Flags
+ 1 1049kB 1075MB 1074MB primary ext2
+ 3 2149MB 3222MB 1074MB primary ext4
+ 4 3222MB 10.7GB 7515MB extended
+ 5 3223MB 4297MB 1074MB logical
+
+
+Model: ATA VBOX HARDDISK (scsi)
+Disk /dev/sdd: 10.7GB
+Sector size (logical/physical): 512B/512B
+Partition Table: msdos
+Disk Flags:
+
+Model: ATA VBOX HARDDISK (scsi)
+Disk /dev/sde: 10.7GB
+Sector size (logical/physical): 512B/512B
+Partition Table: msdos
+Disk Flags:
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using lsblk Command?
+
+lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information.
+
+If the udev db is not available or lsblk is compiled without udev support, then it tries to read LABELs, UUIDs and filesystem types from the block device. In this case root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default.
+
+```
+# lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+sda 8:0 0 30G 0 disk
+└─sda1 8:1 0 20G 0 part /
+sdb 8:16 0 10G 0 disk
+sdc 8:32 0 10G 0 disk
+├─sdc1 8:33 0 1G 0 part /part1
+├─sdc3 8:35 0 1G 0 part /part2
+├─sdc4 8:36 0 1K 0 part
+└─sdc5 8:37 0 1G 0 part
+sdd 8:48 0 10G 0 disk
+sde 8:64 0 10G 0 disk
+sr0 11:0 1 1024M 0 rom
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using blkid Command?
+
+blkid is a command-line utility to locate/print block device attributes. It uses the libblkid library to get disk partition UUIDs on a Linux system.
+
+```
+# blkid
+/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
+/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
+/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
+/dev/sdc5: PARTUUID="8cc8f9e5-05"
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using hwinfo Command?
+
+**[hwinfo][3]**, which stands for hardware information tool, is another great utility used to probe for the hardware present in the system and display detailed information about various hardware components in a human-readable format.
+
+```
+# hwinfo --block --short
+disk:
+ /dev/sdd VBOX HARDDISK
+ /dev/sdb VBOX HARDDISK
+ /dev/sde VBOX HARDDISK
+ /dev/sdc VBOX HARDDISK
+ /dev/sda VBOX HARDDISK
+partition:
+ /dev/sdc1 Partition
+ /dev/sdc3 Partition
+ /dev/sdc4 Partition
+ /dev/sdc5 Partition
+ /dev/sda1 Partition
+cdrom:
+ /dev/sr0 VBOX CD-ROM
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using lshw Command?
+
+**[lshw][4]** (short for Hardware Lister) is a small nifty tool that generates detailed reports about various hardware components on the machine, such as memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, USB, network card, graphics cards, multimedia, printers, bus speed, etc.
+
+```
+# lshw -short -class disk -class volume
+H/W path Device Class Description
+===================================================
+/0/3/0.0.0 /dev/cdrom disk CD-ROM
+/0/4/0.0.0 /dev/sda disk 32GB VBOX HARDDISK
+/0/4/0.0.0/1 /dev/sda1 volume 19GiB EXT4 volume
+/0/5/0.0.0 /dev/sdb disk 10GB VBOX HARDDISK
+/0/6/0.0.0 /dev/sdc disk 10GB VBOX HARDDISK
+/0/6/0.0.0/1 /dev/sdc1 volume 1GiB Linux filesystem partition
+/0/6/0.0.0/3 /dev/sdc3 volume 1GiB EXT4 volume
+/0/6/0.0.0/4 /dev/sdc4 volume 7167MiB Extended partition
+/0/6/0.0.0/4/5 /dev/sdc5 volume 1GiB Linux filesystem partition
+/0/7/0.0.0 /dev/sdd disk 10GB VBOX HARDDISK
+/0/8/0.0.0 /dev/sde disk 10GB VBOX HARDDISK
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using inxi Command?
+
+**[inxi][5]** is a nifty tool to check hardware information on Linux and offers a wide range of options to get hardware details on a Linux system that I have never found in any other utility available for Linux. It was forked from the ancient and mindbendingly perverse yet ingenious infobash, by locsmif.
+
+```
+# inxi -Dp
+Drives: HDD Total Size: 75.2GB (22.3% used)
+ ID-1: /dev/sda model: VBOX_HARDDISK size: 32.2GB
+ ID-2: /dev/sdb model: VBOX_HARDDISK size: 10.7GB
+ ID-3: /dev/sdc model: VBOX_HARDDISK size: 10.7GB
+ ID-4: /dev/sdd model: VBOX_HARDDISK size: 10.7GB
+ ID-5: /dev/sde model: VBOX_HARDDISK size: 10.7GB
+Partition: ID-1: / size: 20G used: 16G (85%) fs: ext4 dev: /dev/sda1
+ ID-3: /part1 size: 1008M used: 1.3M (1%) fs: ext2 dev: /dev/sdc1
+ ID-4: /part2 size: 976M used: 2.6M (1%) fs: ext4 dev: /dev/sdc3
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using lsscsi Command?
+
+lsscsi uses information in sysfs (Linux kernel series 2.6 and later) to list SCSI devices (or hosts) currently attached to the system. Options can be used to control the amount and form of information provided for each device.
+
+By default, device node names (e.g. “/dev/sda” or “/dev/root_disk”) are obtained by noting the major and minor numbers for the listed device, as obtained from sysfs.
+
+```
+# lsscsi
+[0:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0
+[2:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda
+[3:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdb
+[4:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdc
+[5:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdd
+[6:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sde
+```
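+
+If your version of lsscsi supports the `-s` option, it also appends the size of each device, which gives a quick capacity overview (the output below is illustrative, based on the disk sizes shown elsewhere in this article):
+
+```
+# lsscsi -s
+[2:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sda   32.2GB
+[3:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdb   10.7GB
+```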
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using ProcFS?
+
+The proc filesystem (procfs) is a special filesystem in Unix-like operating systems that presents information about processes and other system information.
+
+It’s sometimes referred to as a process information pseudo-file system. It doesn’t contain ‘real’ files but runtime system information (e.g. system memory, devices mounted, hardware configuration, etc).
+
+```
+# cat /proc/partitions
+major minor #blocks name
+
+ 11 0 1048575 sr0
+ 8 0 31457280 sda
+ 8 1 20970496 sda1
+ 8 16 10485760 sdb
+ 8 32 10485760 sdc
+ 8 33 1048576 sdc1
+ 8 35 1048576 sdc3
+ 8 36 1 sdc4
+ 8 37 1048576 sdc5
+ 8 48 10485760 sdd
+ 8 64 10485760 sde
+```
+
+### How To Check Hard Disk And Hard Drive Partition In Linux Using /dev/disk Path?
+
+This directory contains four subdirectories: by-id, by-uuid, by-path and by-partuuid. Each contains useful, persistent names that are symlinked to the real block device files.
+
+```
+# ls -lh /dev/disk/by-id
+total 0
+lrwxrwxrwx 1 root root 9 Feb 2 23:08 ata-VBOX_CD-ROM_VB0-01f003f6 -> ../../sr0
+lrwxrwxrwx 1 root root 9 Feb 3 00:14 ata-VBOX_HARDDISK_VB26e827b5-668ab9f4 -> ../../sda
+lrwxrwxrwx 1 root root 10 Feb 3 00:14 ata-VBOX_HARDDISK_VB26e827b5-668ab9f4-part1 -> ../../sda1
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VB3774c742-fb2b3e4e -> ../../sdd
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e -> ../../sdc
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part1 -> ../../sdc1
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part3 -> ../../sdc3
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part4 -> ../../sdc4
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part5 -> ../../sdc5
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBed1cf451-9f51c5f6 -> ../../sdb
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBf242dbdd-49a982eb -> ../../sde
+```
+
+Output of by-uuid
+
+```
+# ls -lh /dev/disk/by-uuid
+total 0
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
+lrwxrwxrwx 1 root root 10 Feb 3 00:14 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
+```
+
+Output of by-path
+
+```
+# ls -lh /dev/disk/by-path
+total 0
+lrwxrwxrwx 1 root root 9 Feb 2 23:08 pci-0000:00:01.1-ata-1 -> ../../sr0
+lrwxrwxrwx 1 root root 9 Feb 3 00:14 pci-0000:00:0d.0-ata-1 -> ../../sda
+lrwxrwxrwx 1 root root 10 Feb 3 00:14 pci-0000:00:0d.0-ata-1-part1 -> ../../sda1
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-2 -> ../../sdb
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-3 -> ../../sdc
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part1 -> ../../sdc1
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part3 -> ../../sdc3
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part4 -> ../../sdc4
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part5 -> ../../sdc5
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-4 -> ../../sdd
+lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-5 -> ../../sde
+```
+
+Output of by-partuuid
+
+```
+# ls -lh /dev/disk/by-partuuid
+total 0
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-01 -> ../../sdc1
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-03 -> ../../sdc3
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-04 -> ../../sdc4
+lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-05 -> ../../sdc5
+lrwxrwxrwx 1 root root 10 Feb 3 00:14 eab59449-01 -> ../../sda1
+```
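+
+Because these entries are simply symbolic links, a persistent name can be resolved back to its kernel device node with `readlink` (part of coreutils). For example, using the UUID shown for /dev/sda1 above:
+
+```
+# readlink -f /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
+/dev/sda1
+```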
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-command-check-hard-disks-partitions/
+
+作者:[Vinoth Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/vinoth/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-fdisk-command-to-manage-disk-partitions/
+[2]: https://www.2daygeek.com/how-to-manage-disk-partitions-using-parted-command/
+[3]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
+[4]: https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
+[5]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
diff --git a/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md b/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md
new file mode 100644
index 0000000000..be44e75fea
--- /dev/null
+++ b/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md
@@ -0,0 +1,122 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (CFS: Completely fair process scheduling in Linux)
+[#]: via: (https://opensource.com/article/19/2/fair-scheduling-linux)
+[#]: author: (Marty kalin https://opensource.com/users/mkalindepauledu)
+
+CFS: Completely fair process scheduling in Linux
+======
+CFS gives every task a fair share of processor resources in a low-fuss but highly efficient way.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)
+
+Linux takes a modular approach to processor scheduling in that different algorithms can be used to schedule different process types. A scheduling class specifies which scheduling policy applies to which type of process. Completely fair scheduling (CFS), which became part of the Linux 2.6.23 kernel in 2007, is the scheduling class for normal (as opposed to real-time) processes and therefore is named **SCHED_NORMAL**.
+
+CFS is geared for the interactive applications typical in a desktop environment, but it can be configured as **SCHED_BATCH** to favor the batch workloads common, for example, on a high-volume web server. In any case, CFS breaks dramatically with what might be called "classic preemptive scheduling." Also, the "completely fair" claim has to be seen with a technical eye; otherwise, the claim might seem like an empty boast.
+
+Let's dig into the details of what sets CFS apart from—indeed, above—other process schedulers. Let's start with a quick review of some core technical terms.
+
+### Some core concepts
+
+Linux inherits the Unix view of a process as a program in execution. As such, a process must contend with other processes for shared system resources: memory to hold instructions and data, at least one processor to execute instructions, and I/O devices to interact with the external world. Process scheduling is how the operating system (OS) assigns tasks (e.g., crunching some numbers, copying a file) to processors—a running process then performs the task. A process has one or more threads of execution, which are sequences of machine-level instructions. To schedule a process is to schedule one of its threads on a processor.
+
+In a simplifying move, Linux turns process scheduling into thread scheduling by treating a scheduled process as if it were single-threaded. If a process is multi-threaded with N threads, then N scheduling actions would be required to cover the threads. Threads within a multi-threaded process remain related in that they share resources such as memory address space. Linux threads are sometimes described as lightweight processes, with the lightweight underscoring the sharing of resources among the threads within a process.
+
+Although a process can be in various states, two are of particular interest in scheduling. A blocked process is awaiting the completion of some event such as an I/O event. The process can resume execution only after the event completes. A runnable process is one that is not currently blocked.
+
+A process is processor-bound (aka compute-bound) if it consumes mostly processor as opposed to I/O resources, and I/O-bound in the opposite case; hence, a processor-bound process is mostly runnable, whereas an I/O-bound process is mostly blocked. As examples, crunching numbers is processor-bound, and accessing files is I/O-bound. Although an entire process might be characterized as either processor-bound or I/O-bound, a given process may be one or the other during different stages of its execution. Interactive desktop applications, such as browsers, tend to be I/O-bound.
+
+A good process scheduler has to balance the needs of processor-bound and I/O-bound tasks, especially in an operating system such as Linux that thrives on so many hardware platforms: desktop machines, embedded devices, mobile devices, server clusters, supercomputers, and more.
+
+### Classic preemptive scheduling versus CFS
+
+Unix popularized classic preemptive scheduling, which other operating systems including VAX/VMS, Windows NT, and Linux later adopted. At the center of this scheduling model is a fixed timeslice, the amount of time (e.g., 50ms) that a task is allowed to hold a processor until preempted in favor of some other task. If a preempted process has not finished its work, the process must be rescheduled. This model is powerful in that it supports multitasking (concurrency) through processor time-sharing, even on the single-CPU machines of yesteryear.
+
+The classic model typically includes multiple scheduling queues, one per process priority: Every process in a higher-priority queue gets scheduled before any process in a lower-priority queue. As an example, VAX/VMS uses 32 priority queues for scheduling.
+
+CFS dispenses with fixed timeslices and explicit priorities. The amount of time for a given task on a processor is computed dynamically as the scheduling context changes over the system's lifetime. Here is a sketch of the motivating ideas and technical details:
+
+ * Imagine a processor, P, which is idealized in that it can execute multiple tasks simultaneously. For example, tasks T1 and T2 can execute on P at the same time, with each receiving 50% of P's magical processing power. This idealization describes perfect multitasking, which CFS strives to achieve on actual as opposed to idealized processors. CFS is designed to approximate perfect multitasking.
+
+ * The CFS scheduler has a target latency, which is the minimum amount of time—idealized to an infinitely small duration—required for every runnable task to get at least one turn on the processor. If such a duration could be infinitely small, then each runnable task would have had a turn on the processor during any given timespan, however small (e.g., 10ms, 5ns, etc.). Of course, an idealized infinitely small duration must be approximated in the real world, and the default approximation is 20ms. Each runnable task then gets a 1/N slice of the target latency, where N is the number of tasks. For example, if the target latency is 20ms and there are four contending tasks, then each task gets a timeslice of 5ms. By the way, if there is only a single task during a scheduling event, this lucky task gets the entire target latency as its slice. The fair in CFS comes to the fore in the 1/N slice given to each task contending for a processor.
+
+ * The 1/N slice is, indeed, a timeslice—but not a fixed one because such a slice depends on N, the number of tasks currently contending for the processor. The system changes over time. Some processes terminate and new ones are spawned; runnable processes block and blocked processes become runnable. The value of N is dynamic and so, therefore, is the 1/N timeslice computed for each runnable task contending for a processor. The traditional **nice** value is used to weight the 1/N slice: a low-priority **nice** value means that only some fraction of the 1/N slice is given to a task, whereas a high-priority **nice** value means that a proportionately greater fraction of the 1/N slice is given to a task. In summary, **nice** values do not determine the slice, but only modify the 1/N slice that represents fairness among the contending tasks.
+
+ * The operating system incurs overhead whenever a context switch occurs; that is, when one process is preempted in favor of another. To keep this overhead from becoming unduly large, there is a minimum amount of time (with a typical setting of 1ms to 4ms) that any scheduled process must run before being preempted. This minimum is known as the minimum granularity. If many tasks (e.g., 20) are contending for the processor, then the minimum granularity (assume 4ms) might be more than the 1/N slice (in this case, 1ms). If the minimum granularity turns out to be larger than the 1/N slice, the system is overloaded because there are too many tasks contending for the processor—and fairness goes out the window.
+
+ * When does preemption occur? CFS tries to minimize context switches, given their overhead: time spent on a context switch is time unavailable for other tasks. Accordingly, once a task gets the processor, it runs for its entire weighted 1/N slice before being preempted in favor of some other task. Suppose task T1 has run for its weighted 1/N slice, and runnable task T2 currently has the lowest virtual runtime (vruntime) among the tasks contending for the processor. The vruntime records, in nanoseconds, how long a task has run on the processor. In this case, T1 would be preempted in favor of T2.
+
+ * The scheduler tracks the vruntime for all tasks, runnable and blocked. The lower a task's vruntime, the more deserving the task is for time on the processor. CFS accordingly moves low-vruntime tasks towards the front of the scheduling line. Details are forthcoming because the line is implemented as a tree, not a list.
+
+ * How often should the CFS scheduler reschedule? There is a simple way to determine the scheduling period. Suppose that the target latency (TL) is 20ms and the minimum granularity (MG) is 4ms:
+
+`TL / MG = (20 / 4) = 5 ## five or fewer tasks are ok`
+
+In this case, five or fewer tasks would allow each task a turn on the processor during the target latency. For example, if the task number is five, each runnable task has a 1/N slice of 4ms, which happens to equal the minimum granularity; if the task number is three, each task gets a 1/N slice of almost 7ms. In either case, the scheduler would reschedule in 20ms, the duration of the target latency.
+
+Trouble occurs if the number of tasks (e.g., 10) exceeds TL / MG because now each task must get the minimum time of 4ms instead of the computed 1/N slice, which is 2ms. In this case, the scheduler would reschedule in 40ms:
+
+`(number of tasks) * MG = (10 * 4) = 40ms ## period = 40ms`
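+
+The same rule can be sketched in a few lines of shell, purely as an illustration of the arithmetic (this is not kernel code; the 20ms and 4ms values are the defaults assumed in this article):
+
+```
+#!/bin/bash
+## Illustrative sketch: compute the CFS scheduling period for N runnable tasks.
+TL=20   ## target latency in ms
+MG=4    ## minimum granularity in ms
+
+period_ms() {
+    local n=$1
+    if (( n <= TL / MG )); then
+        echo "$TL"           ## every task fits within the target latency
+    else
+        echo $(( n * MG ))   ## overloaded: the period stretches to N * MG
+    fi
+}
+
+period_ms 5    ## prints 20
+period_ms 10   ## prints 40
+```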
+
+
+
+
+Linux schedulers that predate CFS use heuristics to promote the fair treatment of interactive tasks with respect to scheduling. CFS takes a quite different approach by letting the vruntime facts speak mostly for themselves, which happens to support sleeper fairness. An interactive task, by its very nature, tends to sleep a lot in the sense that it awaits user inputs and so becomes I/O-bound; hence, such a task tends to have a relatively low vruntime, which tends to move the task towards the front of the scheduling line.
+
+### Special features
+
+CFS supports symmetrical multiprocessing (SMP) in which any process (whether kernel or user) can execute on any processor. Yet configurable scheduling domains can be used to group processors for load balancing or even segregation. If several processors share the same scheduling policy, then load balancing among them is an option; if a particular processor has a scheduling policy different from the others, then this processor would be segregated from the others with respect to scheduling.
+
+Configurable scheduling groups are another CFS feature. As an example, consider the Nginx web server that's running on my desktop machine. At startup, this server has a master process and four worker processes, which act as HTTP request handlers. For any HTTP request, the particular worker that handles the request is irrelevant; it matters only that the request is handled in a timely manner, and so the four workers together provide a pool from which to draw a task-handler as requests come in. It thus seems fair to treat the four Nginx workers as a group rather than as individuals for scheduling purposes, and a scheduling group can be used to do just that. The four Nginx workers could be configured to have a single vruntime among them rather than individual vruntimes. Configuration is done in the traditional Linux way, through files. For vruntime-sharing, a file named **cpu.shares** , with the details given through familiar shell commands, would be created.
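+
+As a rough sketch of what such a configuration can look like with the legacy (v1) cpu cgroup controller (the mount point, controller availability, and the group name below are assumptions that will vary by distribution and cgroup setup):
+
+```
+## Hypothetical example: give the Nginx workers one shared CPU weight.
+## Assumes the v1 cpu controller is mounted at /sys/fs/cgroup/cpu.
+sudo mkdir /sys/fs/cgroup/cpu/nginx_workers
+echo 512 | sudo tee /sys/fs/cgroup/cpu/nginx_workers/cpu.shares
+
+## Move the worker processes into the group so they are weighted as one unit.
+for pid in $(pgrep -f 'nginx: worker'); do
+    echo "$pid" | sudo tee /sys/fs/cgroup/cpu/nginx_workers/cgroup.procs
+done
+```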
+
+As noted earlier, Linux supports scheduling classes so that different scheduling policies, together with their implementing algorithms, can coexist on the same platform. A scheduling class is implemented as a code module in C. CFS, the scheduling class described so far, is **SCHED_NORMAL**. There are also scheduling classes specifically for real-time tasks, **SCHED_FIFO** (first in, first out) and **SCHED_RR** (round robin). Under **SCHED_FIFO** , tasks run to completion; under **SCHED_RR** , tasks run until they exhaust a fixed timeslice and are preempted.
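+
+You can inspect which scheduling class a process is using, or move it to another class, with the **chrt** utility from util-linux. A brief example (the PID 1234 below is just a placeholder):
+
+```
+## Show the scheduling policy and priority of the current shell
+chrt -p $$
+
+## Hypothetical: move PID 1234 to SCHED_FIFO with real-time priority 10
+sudo chrt --fifo -p 10 1234
+
+## Hypothetical: move it back to the normal CFS class (reported as SCHED_OTHER)
+sudo chrt --other -p 0 1234
+```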
+
+### CFS implementation
+
+CFS requires efficient data structures to track task information and high-performance code to generate the schedules. Let's begin with a central term in scheduling, the runqueue. This is a data structure that represents a timeline for scheduled tasks. Despite the name, the runqueue need not be implemented in the traditional way, as a FIFO list. CFS breaks with tradition by using a time-ordered red-black tree as a runqueue. The data structure is well-suited for the job because it is a self-balancing binary search tree, with efficient **insert** and **remove** operations that execute in **O(log N)** time, where N is the number of nodes in the tree. Also, a tree is an excellent data structure for organizing entities into a hierarchy based on a particular property, in this case a vruntime.
+
+In CFS, the tree's internal nodes represent tasks to be scheduled, and the tree as a whole, like any runqueue, represents a timeline for task execution. Red-black trees are in wide use beyond scheduling; for example, Java uses this data structure to implement its **TreeMap**.
+
+Under CFS, every processor has a specific runqueue of tasks, and no task occurs at the same time in more than one runqueue. Each runqueue is a red-black tree. The tree's internal nodes represent tasks or task groups, and these nodes are indexed by their vruntime values so that (in the tree as a whole or in any subtree) the internal nodes to the left have lower vruntime values than the ones to the right:
+
+```
+ 25 ## 25 is a task vruntime
+ /\
+ 17 29 ## 17 roots the left subtree, 29 the right one
+ /\ ...
+ 5 19 ## and so on
+... \
+ nil ## leaf nodes are nil
+```
+
+In summary, tasks with the lowest vruntime—and, therefore, the greatest need for a processor—reside somewhere in the left subtree; tasks with relatively high vruntimes congregate in the right subtree. A preempted task would go into the right subtree, thus giving other tasks a chance to move leftwards in the tree. A task with the smallest vruntime winds up in the tree's leftmost (internal) node, which is thus the front of the runqueue.
+
+For each task to be scheduled, there is an instance of the C **task_struct** structure, which tracks detailed information about that task. This structure embeds a **sched_entity** structure, which in turn holds scheduling-specific information, in particular the vruntime per task or task group:
+
+```
+struct task_struct { /** info on a task **/
+ ...
+ struct sched_entity se; /** vruntime, etc. **/
+ ...
+};
+```
+
+The red-black tree is implemented in familiar C fashion, with a premium on pointers for efficiency. A **cfs_rq** structure instance embeds a **rb_root** field named **tasks_timeline** , which points to the root of a red-black tree. Each of the tree's internal nodes has pointers to the parent and the two child nodes; the leaf nodes have nil as their value.
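+
+Putting those pieces together, a heavily simplified sketch (not the actual kernel source, whose field names and layout vary across versions) looks roughly like this:
+
+```
+/** Heavily simplified sketch, not actual kernel code. **/
+struct cfs_rq {
+    ...
+    struct rb_root tasks_timeline;   /** root of the vruntime-ordered red-black tree **/
+    struct rb_node *rb_leftmost;     /** cached leftmost node: the lowest-vruntime task **/
+    ...
+};
+
+struct sched_entity {
+    ...
+    u64            vruntime;         /** weighted runtime, in nanoseconds **/
+    struct rb_node run_node;         /** this entity's node in the runqueue tree **/
+    ...
+};
+```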
+
+CFS illustrates how a straightforward idea—give every task a fair share of processor resources—can be implemented in a low-fuss but highly efficient way. It's worth repeating that CFS achieves fair and efficient scheduling without traditional artifacts such as fixed timeslices and explicit task priorities. The pursuit of even better schedulers goes on, of course; for the moment, however, CFS is as good as it gets for general-purpose processor scheduling.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/fair-scheduling-linux
+
+作者:[Marty kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190205 DNS and Root Certificates.md b/sources/tech/20190205 DNS and Root Certificates.md
new file mode 100644
index 0000000000..3934a414b7
--- /dev/null
+++ b/sources/tech/20190205 DNS and Root Certificates.md
@@ -0,0 +1,142 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (DNS and Root Certificates)
+[#]: via: (https://lushka.al/dns-and-certificates/)
+[#]: author: (Anxhelo Lushka https://lushka.al/)
+
+DNS and Root Certificates
+======
+
+Due to recent events we (as in we from the Privacy Today group) felt compelled to write an impromptu article on this matter. It’s intended for all audiences so it will be kept simple - technical details may be posted later.
+
+### What Is DNS And Why Does It Concern You?
+
+DNS stands for Domain Name System and you encounter it daily. Whenever your web browser or any other application connects to the internet, it will most likely do so using a domain. A domain is simply the address you type: i.e. [duckduckgo.com][1]. Your computer needs to know where this leads to and will ask a DNS resolver for help. It will return an IP like [176.34.155.23][2]; the public network address you need to know to connect. This process is called a DNS lookup.
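+
+If you would like to watch a lookup happen yourself, a command-line tool such as dig (from the dnsutils/bind-utils package) shows the answer your configured resolver returns; the address may of course differ from the one above over time:
+
+```
+$ dig +short duckduckgo.com A
+176.34.155.23
+```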
+
+There are certain implications for both your privacy and your security as well as your liberty:
+
+#### Privacy
+
+Since you ask the resolver for an IP for a domain name, it knows exactly which sites you’re visiting and, thanks to the “Internet Of Things”, often abbreviated as IoT, even which appliances you use at home.
+
+#### Security
+
+You’re trusting the resolver that the IP it returns is correct. There are certain checks to ensure it is, so under normal circumstances that is not a common source of issues. These checks can be undermined, though, and that’s why this article is important. If the IP is not correct, you can be fooled into connecting to malicious 3rd parties - even without ever noticing any difference. In this case, your privacy is in much greater danger because not only are the sites you visit tracked, but their contents as well. 3rd parties can see exactly what you’re looking at, collect personal information you enter (such as passwords), and a lot more. Your whole identity can be taken over with ease.
+
+#### Liberty
+
+Censorship is commonly enforced via DNS. It’s not the most effective way to do so but it is extremely widespread. Even in western countries, it’s routinely used by corporations and governments. They use the same methods as potential attackers; they will not return the correct IP when you ask. They could act as if the domain doesn’t exist or direct you elsewhere entirely.
+
+### Ways DNS lookups can happen
+
+#### 3rd Party DNS Resolvers Hosted By Your ISP
+
+Most people are using 3rd party resolvers hosted by their Internet Service Provider. When you connect your modem, they will automatically be fetched and you might never bother with it at all.
+
+#### 3rd Party DNS Resolver Of Your Choice
+
+If you already knew what DNS means then you might have decided to use another DNS resolver of your choice. This might improve the situation since it makes it harder for your ISP to track you and you can avoid some forms of censorship. Both are still possible though, but the methods required are not as widely used.
+
+#### Your Own (local) DNS Resolver
+
+You can run your own and avoid some of the possible perils of using others’. If you’re interested in more information drop us a line.
+
+### Root Certificates
+
+#### What Is A Root Certificate?
+
+Whenever you visit a website starting with https, you communicate with it using a certificate it sends. It enables your browser to encrypt the communication and ensures that nobody listening in can snoop. That’s why everybody has been told to look out for the https (rather than http) when logging into websites. The certificate itself only verifies that it has been generated for a certain domain. There’s more though:
+
+That’s where the root certificate comes in. Think of it as the next higher level that makes sure the levels below are correct. It verifies that the certificate sent to you has been authorized by a certificate authority. This authority ensures that the person creating the certificate is actually the real operator.
+
+This is also referred to as the chain of trust. Your operating system includes a set of these root certificates by default so that the chain of trust can be guaranteed.
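+
+One way to peek at part of this chain yourself is to ask openssl which authority issued the certificate a site presents. This is only an illustration and requires the openssl command-line tool:
+
+```
+$ openssl s_client -connect duckduckgo.com:443 -servername duckduckgo.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
+```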
+
+#### Abuse
+
+We now know that:
+
+ * DNS resolvers send you an IP address when you send a domain name
+ * Certificates allow encrypting your communication and verify they have been generated for the domain you visit
+ * Root certificates verify that the certificate is legitimate and has been created by the real site operator
+
+
+
+**How can it be abused?**
+
+ * A malicious DNS resolver can send you a wrong IP for the purpose of censorship as said before. They can also send you to a completely different site.
+ * This site can send you a fake certificate.
+ * A malicious root certificate can “verify” this fake certificate.
+
+
+
+This site will look absolutely fine to you; it has https in the URL and, if you click it, it will say verified. All just like you learned, right? **No!**
+
+It now receives all the communication you intended to send to the original. This bypasses the checks created to avoid it. You won’t receive error messages, your browser won’t complain.
+
+**All your data is compromised!**
+
+### Conclusion
+
+#### Risks
+
+ * Using a malicious DNS resolver can always compromise your privacy but your security will be unharmed as long as you look out for the https.
+ * Using a malicious DNS resolver and a malicious root certificate, your privacy and security are fully compromised.
+
+
+
+#### Actions To Take
+
+**Do not ever install a 3rd party root certificate!** There are very few exceptions why you would want to do so and none of them are applicable to general end users.
+
+**Do not fall for clever marketing that ensures “ad blocking”, “military grade security”, or something similar**. There are methods of using DNS resolvers on their own to enhance your privacy but installing a 3rd party root certificate never makes sense. You are opening yourself up to extreme abuse.
+
+### Seeing It Live
+
+**WARNING**
+
+A friendly sysadmin provided a live demo so you can see for yourself in realtime. This is real.
+
+**DO NOT ENTER PRIVATE DATA! REMOVE THE CERT AND DNS AFTERWARDS!**
+
+If you do not know how to, don’t install it in the first place. While we trust our friend you still wouldn’t want to have the root certificate of a random and unknown 3rd party installed.
+
+#### Live Demo
+
+Here is the link:
+
+ * Set the provided DNS resolver
+ * Install the provided root certificate
+ * Visit and enter random login data
+ * Your data will show up on the website
+
+
+
+### Further Information
+
+If you are interested in more technical details, let us know. If there is enough interest, we might write an article but, for now, the important part is sharing the basics so you can make an informed decision and not fall for marketing and straight up fraud. Feel free to suggest other topics that are important to you.
+
+This post is mirrored from [Privacy Today channel][3]. [Privacy Today][4] is a group about all things privacy, open source, libre philosophy and more!
+
+All content is licensed under CC BY-NC-SA 4.0. ([Attribution-NonCommercial-ShareAlike 4.0 International][5]).
+
+--------------------------------------------------------------------------------
+
+via: https://lushka.al/dns-and-certificates/
+
+作者:[Anxhelo Lushka][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://lushka.al/
+[b]: https://github.com/lujun9972
+[1]: https://duckduckgo.com
+[2]: http://176.34.155.23
+[3]: https://t.me/privacytoday
+[4]: https://t.me/joinchat/Awg5A0UW-tzOLX7zMoTDog
+[5]: https://creativecommons.org/licenses/by-nc-sa/4.0/
diff --git a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
new file mode 100644
index 0000000000..7ce1201c4f
--- /dev/null
+++ b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
@@ -0,0 +1,443 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS)
+[#]: via: (https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/)
+[#]: author: (SK https://www.ostechnix.com/author/sk/)
+
+Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2019/02/lamp-720x340.jpg)
+
+The **LAMP** stack is a popular, open source web development platform that can be used to run and deploy dynamic websites and web-based applications. Typically, a LAMP stack consists of the Apache web server, a MariaDB/MySQL database, and the PHP/Python/Perl programming language. LAMP is an acronym for **L**inux, **M**ariaDB/**M**ySQL, **P**HP/**P**ython/**P**erl. This tutorial describes how to install Apache, MySQL and PHP (the LAMP stack) on an Ubuntu 18.04 LTS server.
+
+### Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS
+
+For the purpose of this tutorial, we will be using the following Ubuntu testbox.
+
+ * **Operating System** : Ubuntu 18.04.1 LTS Server Edition
+ * **IP address** : 192.168.225.22/24
+
+
+
+#### 1. Install Apache web server
+
+First of all, update Ubuntu server using commands:
+
+```
+$ sudo apt update
+
+$ sudo apt upgrade
+```
+
+Next, install Apache web server:
+
+```
+$ sudo apt install apache2
+```
+
+Check if Apache web server is running or not:
+
+```
+$ sudo systemctl status apache2
+```
+
+Sample output would be:
+
+```
+● apache2.service - The Apache HTTP Server
+ Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: en
+ Drop-In: /lib/systemd/system/apache2.service.d
+ └─apache2-systemd.conf
+ Active: active (running) since Tue 2019-02-05 10:48:03 UTC; 1min 5s ago
+ Main PID: 2025 (apache2)
+ Tasks: 55 (limit: 2320)
+ CGroup: /system.slice/apache2.service
+ ├─2025 /usr/sbin/apache2 -k start
+ ├─2027 /usr/sbin/apache2 -k start
+ └─2028 /usr/sbin/apache2 -k start
+
+Feb 05 10:48:02 ubuntuserver systemd[1]: Starting The Apache HTTP Server...
+Feb 05 10:48:03 ubuntuserver apachectl[2003]: AH00558: apache2: Could not reliably
+Feb 05 10:48:03 ubuntuserver systemd[1]: Started The Apache HTTP Server.
+```
+
+Congratulations! Apache service is up and running!!
+
+##### 1.1 Adjust firewall to allow Apache web server
+
+By default, the Apache web server can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow the http and https ports by following the steps below.
+
+First, list out the application profiles available on your Ubuntu system using command:
+
+```
+$ sudo ufw app list
+```
+
+Sample output:
+
+```
+Available applications:
+Apache
+Apache Full
+Apache Secure
+OpenSSH
+```
+
+As you can see, the Apache and OpenSSH applications have installed UFW profiles. You can list information about each profile and its included rules using the **ufw app info "Profile Name"** command.
+
+Let us look into the **“Apache Full”** profile. To do so, run:
+
+```
+$ sudo ufw app info "Apache Full"
+```
+
+Sample output:
+
+```
+Profile: Apache Full
+Title: Web Server (HTTP,HTTPS)
+Description: Apache v2 is the next generation of the omnipresent Apache web
+server.
+
+Ports:
+80,443/tcp
+```
+
+As you can see, the “Apache Full” profile includes the rules to enable traffic to ports **80** and **443**:
+
+Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:
+
+```
+$ sudo ufw allow in "Apache Full"
+Rules updated
+Rules updated (v6)
+```
+
+If you don’t want to allow https traffic, but only http (80) traffic, run:
+
+```
+$ sudo ufw allow in "Apache"
+```
+
+##### 1.2 Test Apache Web server
+
+Now, open your web browser and access the Apache test page by navigating to **http://localhost/** or **http://IP-address/**.
+
+![](https://www.ostechnix.com/wp-content/uploads/2016/06/apache-2.png)
+
+If you see a screen like the one above, you are good to go. The Apache server is working!
+
+#### 2. Install MySQL
+
+To install MySQL On Ubuntu, run:
+
+```
+$ sudo apt install mysql-server
+```
+
+Verify if MySQL service is running or not using command:
+
+```
+$ sudo systemctl status mysql
+```
+
+**Sample output:**
+
+```
+● mysql.service - MySQL Community Server
+Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enab
+Active: active (running) since Tue 2019-02-05 11:07:50 UTC; 17s ago
+Main PID: 3423 (mysqld)
+Tasks: 27 (limit: 2320)
+CGroup: /system.slice/mysql.service
+└─3423 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
+
+Feb 05 11:07:49 ubuntuserver systemd[1]: Starting MySQL Community Server...
+Feb 05 11:07:50 ubuntuserver systemd[1]: Started MySQL Community Server.
+```
+
+Mysql is running!
+
+##### 2.1 Setup database administrative user (root) password
+
+By default, MySQL **root** user password is blank. You need to secure your MySQL server by running the following script:
+
+```
+$ sudo mysql_secure_installation
+```
+
+You will be asked whether you want to set up the **VALIDATE PASSWORD plugin**. This plugin lets users configure strong passwords for database credentials. If enabled, it will automatically check the strength of passwords and only allow those that are secure enough. **It is safe to leave this plugin disabled**. However, you must use a strong and unique password for database credentials. If you don’t want to enable this plugin, just press any key to skip the password validation part and continue with the rest of the steps.
+
+If your answer is **Yes** , you will be asked to choose the level of password validation.
+
+```
+Securing the MySQL server deployment.
+
+Connecting to MySQL using a blank password.
+
+VALIDATE PASSWORD PLUGIN can be used to test passwords
+and improve security. It checks the strength of password
+and allows the users to set only those passwords which are
+secure enough. Would you like to setup VALIDATE PASSWORD plugin?
+
+Press y|Y for Yes, any other key for No y
+```
+
+The available password validations are **low** , **medium** and **strong**. Just enter the appropriate number (0 for low, 1 for medium and 2 for strong password) and hit ENTER key.
+
+```
+There are three levels of password validation policy:
+
+LOW Length >= 8
+MEDIUM Length >= 8, numeric, mixed case, and special characters
+STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
+
+Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG:
+```
+
+Now, enter the password for the MySQL root user. Keep in mind that the password must comply with the password policy you chose in the previous step. If you didn’t enable the plugin, just use any strong and unique password of your choice.
+
+```
+Please set the password for root here.
+
+New password:
+
+Re-enter new password:
+
+Estimated strength of the password: 50
+Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y
+```
+
+Once you have entered the password twice, you will see its estimated strength (in our case it is **50**). If that is OK for you, press Y to continue with the provided password. If you are not satisfied, press any other key and set a stronger password. I am OK with my current password, so I chose **y**.
+
+For the rest of questions, just type **y** and hit ENTER. This will remove anonymous user, disallow root user login remotely and remove test database.
+
+```
+Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
+Success.
+
+Normally, root should only be allowed to connect from
+'localhost'. This ensures that someone cannot guess at
+the root password from the network.
+
+Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
+Success.
+
+By default, MySQL comes with a database named 'test' that
+anyone can access. This is also intended only for testing,
+and should be removed before moving into a production
+environment.
+
+Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
+- Dropping test database...
+Success.
+
+- Removing privileges on test database...
+Success.
+
+Reloading the privilege tables will ensure that all changes
+made so far will take effect immediately.
+
+Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
+Success.
+
+All done!
+```
+
+That’s it. Password for MySQL root user has been set.
+
+##### 2.2 Change authentication method for MySQL root user
+
+By default, MySQL root user is set to authenticate using the **auth_socket** plugin in MySQL 5.7 and newer versions on Ubuntu. Even though it enhances the security, it will also complicate things when you access your database server using any external programs, for example phpMyAdmin. To fix this issue, you need to change authentication method from **auth_socket** to **mysql_native_password**. To do so, login to your MySQL prompt using command:
+
+```
+$ sudo mysql
+```
+
+Run the following command at the mysql prompt to find the current authentication method for all mysql user accounts:
+
+```
+SELECT user,authentication_string,plugin,host FROM mysql.user;
+```
+
+**Sample output:**
+
+```
++------------------|-------------------------------------------|-----------------------|-----------+
+| user | authentication_string | plugin | host |
++------------------|-------------------------------------------|-----------------------|-----------+
+| root | | auth_socket | localhost |
+| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
+| mysql.sys | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
+| debian-sys-maint | *F126737722832701DD3979741508F05FA71E5BA0 | mysql_native_password | localhost |
++------------------|-------------------------------------------|-----------------------|-----------+
+4 rows in set (0.00 sec)
+```
+
+![][2]
+
+As you see, mysql root user uses `auth_socket` plugin for authentication.
+
+To change this authentication to **mysql_native_password** method, run the following command at mysql prompt. Don’t forget to replace **“password”** with a strong and unique password of your choice. If you have enabled VALIDATION plugin, make sure you have used a strong password based on the current policy requirements.
+
+```
+ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
+```
+
+Update the changes using command:
+
+```
+FLUSH PRIVILEGES;
+```
+
+Now check again if the authentication method is changed or not using command:
+
+```
+SELECT user,authentication_string,plugin,host FROM mysql.user;
+```
+
+Sample output:
+
+![][3]
+
+Good! Now the mysql root user can authenticate using a password to access the mysql shell.
+
+Exit from the mysql prompt:
+
+```
+exit
+```
+
+#### 3\. Install PHP
+
+To install PHP, run:
+
+```
+$ sudo apt install php libapache2-mod-php php-mysql
+```
+
+After installing PHP, create **info.php** file in the Apache root document folder. Usually, the apache root document folder will be **/var/www/html/** or **/var/www/** in most Debian based Linux distributions. In Ubuntu 18.04 LTS, it is **/var/www/html/**.
+
+Let us create **info.php** file in the apache root folder:
+
+```
+$ sudo vi /var/www/html/info.php
+```
+
+Add the following lines:
+
+```
+<?php
+phpinfo();
+?>
+```
+
+Press the ESC key and type **:wq** to save and quit the file. Restart the Apache service for the changes to take effect.
+
+```
+$ sudo systemctl restart apache2
+```
+
+##### 3.1 Test PHP
+
+Open up your web browser and navigate to **http://localhost/info.php** or **http://IP-address/info.php**.
+
+You will see the php test page now.
+
+![](https://www.ostechnix.com/wp-content/uploads/2019/02/php-test-page.png)
+
+Usually, when a user requests a directory from the web server, Apache will first look for a file named **index.html**. If you want Apache to serve PHP files before the others, move **index.php** to the first position in the **dir.conf** file as shown below.
+
+```
+$ sudo vi /etc/apache2/mods-enabled/dir.conf
+```
+
+Here is the contents of the above file.
+
+```
+<IfModule mod_dir.c>
+        DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
+</IfModule>
+
+# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
+```
+
+Move the “index.php” file to first. Once you made the changes, your **dir.conf** file will look like below.
+
+```
+<IfModule mod_dir.c>
+        DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
+</IfModule>
+
+# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
+```
+
+Press the **ESC** key and type **:wq** to save and close the file. Restart the Apache service for the changes to take effect.
+
+```
+$ sudo systemctl restart apache2
+```
+
+##### 3.2 Install PHP modules
+
+To improve the functionality of PHP, you can install some additional PHP modules.
+
+To list the available PHP modules, run:
+
+```
+$ sudo apt-cache search php- | less
+```
+
+**Sample output:**
+
+![][4]
+
+Use the arrow keys to go through the result. To exit, type **q** and hit ENTER key.
+
+To find the details of any particular php module, for example **php-gd** , run:
+
+```
+$ sudo apt-cache show php-gd
+```
+
+To install a php module run:
+
+```
+$ sudo apt install php-gd
+```
+
+To install all modules (not necessary though), run:
+
+```
+$ sudo apt-get install php*
+```
+
+Do not forget to restart the Apache service after installing any PHP module. To check whether a module is loaded, open the info.php file in your browser and see if it is listed there.
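+
+Alternatively, you can list the modules the PHP command-line interpreter has loaded; note that the CLI and the Apache PHP module can be configured differently, so info.php remains the authoritative check for the web server. For example, after installing php-gd:
+
+```
+$ php -m | grep -i gd
+gd
+```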
+
+Next, you might want to install any database management tools to easily manage databases via a web browser. If so, install phpMyAdmin as described in the following link.
+
+Congratulations! We have successfully setup LAMP stack in Ubuntu 18.04 LTS server.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-1.png
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-2.png
+[4]: http://www.ostechnix.com/wp-content/uploads/2016/06/php-modules.png
diff --git a/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
new file mode 100644
index 0000000000..54e4ce314c
--- /dev/null
+++ b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
+[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Installing Kali Linux on VirtualBox: Quickest & Safest Way
+======
+
+**This tutorial shows you how to install Kali Linux on Virtual Box in Windows and Linux in the quickest way possible.**
+
+[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts.
+
+Since it deals with a sensitive topic like hacking, it’s like a double-edged sword. We have discussed it in the detailed Kali Linux review in the past so I am not going to bore you with the same stuff again.
+
+While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine would be a better and safer option.
+
+With Virtual Box, you can use Kali Linux as a regular application in your Windows/Linux system. It’s almost the same as running VLC or a game in your system.
+
+Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your ‘host system’ (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe.
+
+![Kali Linux on Virtual Box][3]
+
+### How to install Kali Linux on VirtualBox
+
+I’ll be using [VirtualBox][4] here. It is a wonderful open source virtualization solution for just about anyone (professional or personal use). It’s available free of cost.
+
+In this tutorial, we will talk about Kali Linux in particular but you can install almost any other OS whose ISO file exists or a pre-built virtual machine save file is available.
+
+**Note:** The same steps apply for Windows/Linux running VirtualBox.
+
+As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don’t hate me!) where I try to install Kali Linux in VirtualBox step by step.
+
+And, the best part is – even if you happen to use a Linux distro as your primary OS, the same steps will be applicable!
+
+Wondering, how? Let’s see…
+
+### Step by Step Guide to install Kali Linux on VirtualBox
+
+We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine – but why do that when you have an easy alternative?
+
+#### 1\. Download and install VirtualBox
+
+The first thing you need to do is to download and install VirtualBox from Oracle’s official website.
+
+[Download VirtualBox](https://www.virtualbox.org/wiki/Downloads)
+
+Once you download the installer, just double click on it to install VirtualBox. It’s the same for installing VirtualBox on Ubuntu/Fedora Linux as well.
+
+#### 2\. Download ready-to-use virtual image of Kali Linux
+
+After installing it successfully, head to [Offensive Security’s download page][5] to download the VM image for VirtualBox. If you change your mind to utilize [VMware][6], that is available too.
+
+![Kali Linux Virtual Box Image][7]
+
+As you can see the file size is well over 3 GB, you should either use the torrent option or download it using a [download manager][8].
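+
+Given the size of the download, it is also worth verifying the image before importing it, for example with sha256sum, comparing the result against the checksum published on the same download page (the file name below is only a pattern; use the name of the file you actually downloaded):
+
+```
+$ sha256sum kali-linux-*.ova
+```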
+
+#### 3\. Install Kali Linux on Virtual Box
+
+Once you have installed VirtualBox and downloaded the Kali Linux image, you just need to import it to VirtualBox in order to make it work.
+
+Here’s how to import the VirtualBox image for Kali Linux:
+
+**Step 1** : Launch VirtualBox. You will notice an **Import** button – click on it
+
+![virtualbox import][9] Click on Import button
+
+**Step 2:** Next, browse to the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with ‘kali linux‘ and end with the **.ova** extension.
+
+![virtualbox import file][10] Importing Kali Linux image
+
+Once selected, proceed by clicking on **Next**.
+
+**Step 3** : Now, you will be shown the settings for the virtual machine you are about to import. So, you can customize them or not – that is your choice. It is okay if you go with the default settings.
+
+You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows.
+
+![virtualbox kali linux settings][11] Import hard drives as VDI
+
+Here, importing the hard drives as VDI means that the virtual hard disks are mounted virtually by allocating the storage space you set.
+
+After you are done with the settings, hit **Import** and wait for a while.
+
+**Step 4:** You will now see it listed. So, just hit **Start** to launch it.
+
+You might get an error at first about USB 2.0 controller support; you can disable USB 2.0 in the settings to resolve it, or just follow the on-screen instructions and install the additional package that fixes it. And, you are done!
+
+![kali linux on windows virtual box][12]Kali Linux running in VirtualBox
+
+I hope this guide helps you easily install Kali Linux on Virtual Box. Of course, Kali Linux has a lot of useful tools in it for penetration testing – good luck with that!
+
+**Tip** : Both Kali Linux and Ubuntu are Debian-based. If you face any issues or error with Kali Linux, you may follow the tutorials intended for Ubuntu or Debian on the internet.
+
+### Bonus: Free Kali Linux Guide Book
+
+If you are just starting with Kali Linux, it will be a good idea to know how to use Kali Linux.
+
+Offensive Security, the company behind Kali Linux, has created a guide book that explains the basics of Linux, basics of Kali Linux, configuration, setups. It also has a few chapters on penetration testing and security tools.
+
+Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free.
+
+Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-kali-linux-virtualbox
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.kali.org/
+[2]: https://itsfoss.com/linux-hacking-penetration-testing/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
+[4]: https://www.virtualbox.org/
+[5]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
+[6]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
+[8]: https://itsfoss.com/4-best-download-managers-for-linux/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190206 4 cool new projects to try in COPR for February 2019.md b/sources/tech/20190206 4 cool new projects to try in COPR for February 2019.md
new file mode 100644
index 0000000000..dc424f9625
--- /dev/null
+++ b/sources/tech/20190206 4 cool new projects to try in COPR for February 2019.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 cool new projects to try in COPR for February 2019)
+[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/)
+[#]: author: (Dominik Turecek https://fedoramagazine.org)
+
+4 cool new projects to try in COPR for February 2019
+======
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
+
+COPR is a [collection][1] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
+
+Here’s a set of new and interesting projects in COPR.
+
+### CryFS
+
+[CryFS][2] is a cryptographic filesystem. It is designed for use with cloud storage, mainly Dropbox, although it works with other storage providers as well. CryFS encrypts not only the files in the filesystem, but also metadata, file sizes and directory structure.
+
+#### Installation instructions
+
+The repo currently provides CryFS for Fedora 28 and 29, and for EPEL 7. To install CryFS, use these commands:
+
+```
+sudo dnf copr enable fcsm/cryfs
+sudo dnf install cryfs
+```
+
+### Cheat
+
+[Cheat][3] is a utility for viewing various cheatsheets on the command line, aiming to serve as a reminder for programs that are used only occasionally. For many Linux utilities, cheat provides cheatsheets containing condensed information from man pages, focusing mainly on the most-used examples. In addition to the built-in cheatsheets, cheat allows you to edit the existing ones or create new ones from scratch.
+
+![][4]
+
+#### Installation instructions
+
+The repo currently provides cheat for Fedora 28, 29 and Rawhide, and for EPEL 7. To install cheat, use these commands:
+
+```
+sudo dnf copr enable tkorbar/cheat
+sudo dnf install cheat
+```
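+
+Once installed, using cheat is as simple as passing the name of the command you want a reminder for; for example:
+
+```
+$ cheat tar      # show the built-in cheatsheet for tar
+$ cheat -e tar   # open that cheatsheet in your editor to customize it
+```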
+
+### Setconf
+
+[Setconf][5] is a simple program for making changes in configuration files, serving as an alternative for sed. The only thing setconf does is that it finds the key in the specified file and changes its value. Setconf provides only a few options to change its behavior — for example, uncommenting the line that is being changed.
+
+#### Installation instructions
+
+The repo currently provides setconf for Fedora 27, 28 and 29. To install setconf, use these commands:
+
+```
+sudo dnf copr enable jamacku/setconf
+sudo dnf install setconf
+```
+
+### Reddit Terminal Viewer
+
+[Reddit Terminal Viewer][6], or rtv, is an interface for browsing Reddit from terminal. It provides the basic functionality of Reddit, so you can log in to your account, view subreddits, comment, upvote and discover new topics. Rtv currently doesn’t, however, support Reddit tags.
+
+![][7]
+
+#### Installation instructions
+
+The repo currently provides Reddit Terminal Viewer for Fedora 29 and Rawhide. To install Reddit Terminal Viewer, use these commands:
+
+```
+sudo dnf copr enable tc01/rtv
+sudo dnf install rtv
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/
+
+作者:[Dominik Turecek][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org
+[b]: https://github.com/lujun9972
+[1]: https://copr.fedorainfracloud.org/
+[2]: https://www.cryfs.org/
+[3]: https://github.com/chrisallenlane/cheat
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/cheat.png
+[5]: https://setconf.roboticoverlords.org/
+[6]: https://github.com/michael-lazar/rtv
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/01/rtv.png
diff --git a/sources/tech/20190206 And, Ampersand, and - in Linux.md b/sources/tech/20190206 And, Ampersand, and - in Linux.md
new file mode 100644
index 0000000000..88a0458539
--- /dev/null
+++ b/sources/tech/20190206 And, Ampersand, and - in Linux.md
@@ -0,0 +1,211 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (And, Ampersand, and & in Linux)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+And, Ampersand, and & in Linux
+======
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ampersand.png?itok=7GdFO36Y)
+
+Take a look at the tools covered in the [three][1] [previous][2] [articles][3], and you will see that understanding the glue that joins them together is as important as recognizing the tools themselves. Indeed, tools tend to be simple, and understanding what _mkdir_ , _touch_ , and _find_ do (make a new directory, update a file, and find a file in the directory tree, respectively) in isolation is easy.
+
+But understanding what
+
+```
+mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &
+```
+
+does, and why we would write a command line like that is a whole different story.
+
+It pays to look more closely at the sign and symbols that live between the commands. It will not only help you better understand how things work, but will also make you more proficient in chaining commands together to create compound instructions that will help you work more efficiently.
+
+In this article and the next, we'll be looking at the ampersand (`&`) and its close friend, the pipe (`|`), and see how they can mean different things in different contexts.
+
+### Behind the Scenes
+
+Let's start simple and see how you can use `&` as a way of pushing a command to the background. The instruction:
+
+```
+cp -R original/dir/ backup/dir/
+```
+
+Copies all the files and subdirectories in _original/dir/_ into _backup/dir/_. So far so simple. But if that turns out to be a lot of data, it could tie up your terminal for hours.
+
+However, using:
+
+```
+cp -R original/dir/ backup/dir/ &
+```
+
+pushes the process to the background courtesy of the final `&`. This frees you to continue working on the same terminal or even to close the terminal and still let the process finish up. Do note, however, that if the process is asked to print stuff out to the standard output (like in the case of `echo` or `ls`), it will continue to do so, even though it is being executed in the background.
+
+When you push a process into the background, Bash will print out a number. This number is the PID, or _process ID_. Every process running on your Linux system has a unique process ID, and you can use this ID to pause, resume, and terminate the process it refers to. This will become useful later.
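+
+For example (the job number and PID you see will, of course, be different):
+
+```
+$ cp -R original/dir/ backup/dir/ &
+[1] 14444
+```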
+
+In the meantime, there are a few tools you can use to manage your processes as long as you remain in the terminal from which you launched them:
+
+ * `jobs` shows you the processes running in your current terminal, whether in the background or the foreground. It also shows you a number associated with each job (different from the PID) that you can use to refer to each process:
+
+```
+ $ jobs
+[1]- Running cp -i -R original/dir/* backup/dir/ &
+[2]+ Running find . -iname "*jpg" > backup/dir/images.txt &
+```
+
+ * `fg` brings a job from the background to the foreground so you can interact with it. You tell `fg` which process you want to bring to the foreground with a percentage symbol (`%`) followed by the number associated with the job that `jobs` gave you:
+
+```
+ $ fg %1 # brings the cp job to the foreground
+cp -i -R original/dir/* backup/dir/
+```
+
+If the job was stopped (see below), `fg` will start it again.
+
+ * You can stop a job in the foreground by holding down [Ctrl] and pressing [Z]. This doesn't abort the action, it pauses it. When you start it again with (`fg` or `bg`) it will continue from where it left off...
+
+...Except for [`sleep`][4]: the time a `sleep` job is paused still counts once `sleep` is resumed. This is because `sleep` takes note of the clock time when it was started, not how long it was running. This means that if you run `sleep 30` and pause it for more than 30 seconds, once you resume, `sleep` will exit immediately.
+
+ * The `bg` command pushes a job to the background and resumes it again if it was paused:
+
+```
+ $ bg %1
+[1]+ cp -i -R original/dir/* backup/dir/ &
+```
+
+
+
+
+As mentioned above, you won't be able to use any of these commands if you close the terminal from which you launched the process or if you change to another terminal, even though the process will still continue working.
+
+To manage background processes from another terminal you need another set of tools. For example, you can tell a process to stop from a different terminal with the [`kill`][5] command:
+
+```
+kill -s STOP <PID>
+```
+
+And you know the PID because that is the number Bash gave you when you started the process with `&`, remember? Oh! You didn't write it down? No problem. You can get the PID of any running process with the `ps` (short for _processes_ ) command. So, using
+
+```
+ps | grep cp
+```
+
+will show you all the processes containing the string " _cp_ ", including the copying job we are using for our example. It will also show you the PID:
+
+```
+$ ps | grep cp
+14444 pts/3 00:00:13 cp
+```
+
+In this case, the PID is _14444_, and it means you can stop the background copying with:
+
+```
+kill -s STOP 14444
+```
+
+Note that `STOP` here does the same thing as [Ctrl] + [Z] above, that is, it pauses the execution of the process.
+
+To start the paused process again, you can use the `CONT` signal:
+
+```
+kill -s CONT 14444
+```
+
+There is a good list of many of [the main signals you can send a process here][6]. According to that, if you wanted to terminate the process, not just pause it, you could do this:
+
+```
+kill -s TERM 14444
+```
+
+If the process refuses to exit, you can force it with:
+
+```
+kill -s KILL 14444
+```
+
+This is a bit dangerous, but very useful if a process has gone crazy and is eating up all your resources.
+
+In any case, if you are not sure you have the correct PID, add the `x` option to `ps`:
+
+```
+$ ps x| grep cp
+14444 pts/3 D 0:14 cp -i -R original/dir/Hols_2014.mp4
+ original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4
+ original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/
+```
+
+And you should be able to see what process you need.
+
+Finally, there is a nifty tool that combines `ps` and `grep` into one:
+
+```
+$ pgrep cp
+8
+18
+19
+26
+33
+40
+47
+54
+61
+72
+88
+96
+136
+339
+6680
+13735
+14444
+```
+
+Lists all the PIDs of processes that contain the string " _cp_ ".
+
+In this case, it isn't very helpful, but this...
+
+```
+$ pgrep -lx cp
+14444 cp
+```
+
+... is much better.
+
+In this case, `-l` tells `pgrep` to show you the name of the process and `-x` tells `pgrep` you want an exact match for the name of the command. If you want even more details, try `pgrep -ax command`.
+
+### Next time
+
+Putting an `&` at the end of commands has helped us explain the rather useful concept of processes working in the background and foreground and how to manage them.
+
+One last thing before we leave: processes running in the background are what are known as _daemons_ in UNIX/Linux parlance. So, if you had heard the term before and wondered what they were, there you go.
+
+As usual, there are more ways to use the ampersand within a command line, many of which have nothing to do with pushing processes into the background. To see what those uses are, we'll be back next week with more on the matter.
+
+Read more:
+
+[Linux Tools: The Meaning of Dot][1]
+
+[Understanding Angle Brackets in Bash][2]
+
+[More About Angle Brackets in Bash][3]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
+[2]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
+[3]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
+[4]: https://ss64.com/bash/sleep.html
+[5]: https://bash.cyberciti.biz/guide/Sending_signal_to_Processes
+[6]: https://www.computerhope.com/unix/signals.htm
diff --git a/sources/tech/20190206 Getting started with Vim visual mode.md b/sources/tech/20190206 Getting started with Vim visual mode.md
new file mode 100644
index 0000000000..e6b9b1da9b
--- /dev/null
+++ b/sources/tech/20190206 Getting started with Vim visual mode.md
@@ -0,0 +1,126 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Vim visual mode)
+[#]: via: (https://opensource.com/article/19/2/getting-started-vim-visual-mode)
+[#]: author: (Susan Lauber https://opensource.com/users/susanlauber)
+
+Getting started with Vim visual mode
+======
+Visual mode makes it easier to highlight and manipulate text in Vim.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y)
+
+Ansible playbook files are text files in a YAML format. People who work regularly with them have their favorite editors and plugin extensions to make the formatting easier.
+
+When I teach Ansible with the default editor available in most Linux distributions, I use Vim's visual mode a lot. It allows me to highlight my actions on the screen—what I am about to edit and the text manipulation task I'm doing—to make it easier for my students to learn.
+
+### Vim's visual mode
+
+When editing text with Vim, visual mode can be extremely useful for identifying chunks of text to be manipulated.
+
+Vim's visual mode has three versions: character, line, and block. The keystrokes to enter each mode are:
+
+ * Character mode: **v** (lower-case)
+ * Line mode: **V** (upper-case)
+ * Block mode: **Ctrl+v**
+
+
+
+Here are some ways to use each mode to simplify your work.
+
+### Character mode
+
+Character mode can highlight a sentence in a paragraph or a phrase in a sentence. Then the visually identified text can be deleted, copied, changed, or modified with any other Vim editing command.
+
+#### Move a sentence
+
+To move a sentence from one place to another, start by opening the file and moving the cursor to the first character in the sentence you want to move.
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-char1.png)
+
+ * Press the **v** key to enter visual character mode. The word **VISUAL** will appear at the bottom of the screen.
+ * Use the Arrow keys to highlight the desired text. You can use other navigation commands, such as **w** to highlight to the beginning of the next word or **$** to include the rest of the line.
+ * Once the text is highlighted, press the **d** key to delete the text.
+ * If you deleted too much or not enough, press **u** to undo and start again.
+ * Move your cursor to the new location and press **p** to paste the text.
+
+
+
+#### Change a phrase
+
+You can also highlight a chunk of text that you want to replace.
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-char2.png)
+
+ * Place the cursor at the first character you want to change.
+ * Press **v** to enter visual character mode.
+ * Use navigation commands, such as the Arrow keys, to highlight the phrase.
+ * Press **c** to change the highlighted text.
+ * The highlighted text will disappear, and you will be in Insert mode where you can add new text.
+ * After you finish typing the new text, press **Esc** to return to command mode and save your work.
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-char3.png)
+
+### Line mode
+
+When working with Ansible playbooks, the order of tasks can matter. Use visual line mode to move a task to a different location in the playbook.
+
+#### Manipulate multiple lines of text
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-line1.png)
+
+ * Place your cursor anywhere on the first or last line of the text you want to manipulate.
+ * Press **Shift+V** to enter line mode. The words **VISUAL LINE** will appear at the bottom of the screen.
+ * Use navigation commands, such as the Arrow keys, to highlight multiple lines of text.
+ * Once the desired text is highlighted, use commands to manipulate it. Press **d** to delete, then move the cursor to the new location, and press **p** to paste the text.
+ * **y** (yank) can be used instead of **d** (delete) if you want to copy the task.
+
+
+
+#### Indent a set of lines
+
+When working with Ansible playbooks or YAML files, indentation matters. A highlighted block can be shifted right or left with the **>** and **<** keys.
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-line2.png)
+
+ * Press **>** to increase the indentation of all the lines.
+ * Press **<** to decrease the indentation of all the lines.
+
+
+
+Try other Vim commands to apply them to the highlighted text.
+
+### Block mode
+
+The visual block mode is useful for manipulation of specific tabular data files, but it can also be extremely helpful as a tool to verify indentation of an Ansible playbook.
+
+Tasks are a list of items and in YAML each list item starts with a dash followed by a space. The dashes must line up in the same column to be at the same indentation level. This can be difficult to see with just the human eye. Indentation of other lines within the task is also important.
+
+#### Verify tasks lists are indented the same
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-block1.png)
+
+ * Place your cursor on the first character of the list item.
+ * Press **Ctrl+v** to enter visual block mode. The words **VISUAL BLOCK** will appear at the bottom of the screen.
+ * Use the Arrow keys to highlight the single character column. You can verify that each task is indented the same amount.
+ * Use the Arrow keys to expand the block right or left to check whether the other indentation is correct.
+
+![](https://opensource.com/sites/default/files/uploads/vim-visual-block2.png)
+
+Even though I am comfortable with other Vim editing shortcuts, I still like to use visual mode to sort out what text I want to manipulate. When I demo other concepts during a presentation, my students see a tool for highlighting text and hitting delete in this "new to them" text-only editor.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/getting-started-vim-visual-mode
+
+作者:[Susan Lauber][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/susanlauber
+[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190207 10 Methods To Create A File In Linux.md b/sources/tech/20190207 10 Methods To Create A File In Linux.md
new file mode 100644
index 0000000000..b74bbacf13
--- /dev/null
+++ b/sources/tech/20190207 10 Methods To Create A File In Linux.md
@@ -0,0 +1,325 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (10 Methods To Create A File In Linux)
+[#]: via: (https://www.2daygeek.com/linux-command-to-create-a-file/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+10 Methods To Create A File In Linux
+======
+
+As we already know, everything is a file in Linux, and that includes devices as well.
+
+A Linux admin may need to create files many times a day (perhaps 20 or 50 times, or more, depending on the environment).
+
+Navigate to the following URL if you would like to **[create a file of a specific size in Linux][1]**.
+
+How efficiently we create a file matters. Why efficiency? There is a lot of benefit in knowing an efficient way to perform an activity.
+
+It will save you a lot of time, and you can spend that valuable time on other important tasks instead of doing this one in a hurry.
+
+Here I'm including multiple ways to create a file in Linux. I advise you to choose the few that are easiest and most efficient for you.
+
+You don't need to install any of the following commands, because all of them are part of the Linux core utilities, except the nano command.
+
+It can be done using the following 10 methods.
+
+ * **`Redirect Symbol (>):`** The standard redirect symbol allows us to create a 0KB empty file in Linux.
+ * **`touch:`** The touch command can create a 0KB empty file if one does not exist.
+ * **`echo:`** The echo command is used to display a line of text that is passed as an argument.
+ * **`printf:`** The printf command is used to display the given text on the terminal window.
+ * **`cat:`** It concatenates files and prints them to standard output.
+ * **`vi/vim:`** Vim is a text editor that is upwards compatible with Vi. It can be used to edit all kinds of plain text.
+ * **`nano:`** nano is a small and friendly editor. It copies the look and feel of Pico, but is free software.
+ * **`head:`** head is used to print the first part of files.
+ * **`tail:`** tail is used to print the last part of files.
+ * **`truncate:`** truncate is used to shrink or extend the size of a file to the specified size.
+
+
+
+### How To Create A File In Linux Using Redirect Symbol (>)?
+
+The standard redirect symbol allows us to create a 0KB empty file in Linux. Basically, it is used to redirect the output of a command to a new file. When you use the redirect symbol without a command, it creates a file.
+
+But it won't allow you to input any text while creating the file. It's very simple, though, and useful for lazy admins. To do so, simply enter the redirect symbol followed by the filename you want.
+
+```
+$ > daygeek.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:00 daygeek.txt
+```
+
+### How To Create A File In Linux Using touch Command?
+
+The touch command is used to update the access and modification times of each FILE to the current time.
+
+It creates a new file if one does not exist. Also, touch doesn't allow us to enter any text while creating a file. By default it creates a 0KB empty file.
+
+```
+$ touch daygeek1.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek1.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:02 daygeek1.txt
+```
+
+### How To Create A File In Linux Using echo Command?
+
+echo is a built-in command found in most operating systems. It is frequently used in scripts, batch files, and as part of individual commands to insert text.
+
+This is a nice command that allows users to input text while creating a file. It also allows us to append text later on.
+
+```
+$ echo "2daygeek.com is a best Linux blog to learn Linux" > daygeek2.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek2.txt
+-rw-rw-r-- 1 daygeek daygeek 49 Feb 4 02:04 daygeek2.txt
+```
+
+To view the content from the file, use the cat command.
+
+```
+$ cat daygeek2.txt
+2daygeek.com is a best Linux blog to learn Linux
+```
+
+If you would like to append content to the same file, use the double redirect symbol (>>).
+
+```
+$ echo "It's FIVE years old blog" >> daygeek2.txt
+```
+
+You can view the appended content from the file using cat command.
+
+```
+$ cat daygeek2.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+### How To Create A File In Linux Using printf Command?
+
+The printf command works in much the same way as the echo command.
+
+printf in Linux is used to display the given string on the terminal window. printf can take format specifiers, escape sequences or ordinary characters.
+
+```
+$ printf "2daygeek.com is a best Linux blog to learn Linux\n" > daygeek3.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek3.txt
+-rw-rw-r-- 1 daygeek daygeek 48 Feb 4 02:12 daygeek3.txt
+```
+
+To view the content from the file, use the cat command.
+
+```
+$ cat daygeek3.txt
+2daygeek.com is a best Linux blog to learn Linux
+```
+
+If you would like to append content to the same file, use the double redirect symbol (>>).
+
+```
+$ printf "It's FIVE years old blog\n" >> daygeek3.txt
+```
+
+You can view the appended content from the file using cat command.
+
+```
+$ cat daygeek3.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+### How To Create A File In Linux Using cat Command?
+
+cat stands for concatenate. It is very frequently used in Linux to read data from a file.
+
+cat is one of the most frequently used commands on Unix-like operating systems. It offers three functions related to text files: displaying the content of a file, combining multiple files into a single output, and creating a new file.
+
+```
+$ cat > daygeek4.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+When you finish entering your text, press Ctrl+D to save the file and exit. Then use the ls command to check the created file.
+
+```
+$ ls -lh daygeek4.txt
+-rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:18 daygeek4.txt
+```
+
+To view the content from the file, use the cat command.
+
+```
+$ cat daygeek4.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+If you would like to append content to the same file, use the double redirect symbol (>>).
+
+```
+$ cat >> daygeek4.txt
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+You can view the appended content from the file using cat command.
+
+```
+$ cat daygeek4.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+### How To Create A File In Linux Using vi/vim Command?
+
+Vim is a text editor that is upwards compatible with Vi. It can be used to edit all kinds of plain text. It is especially useful for editing programs.
+
+There are a lot of features available in vim for editing a file with this command.
+
+```
+$ vi daygeek5.txt
+
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek5.txt
+-rw-rw-r-- 1 daygeek daygeek 75 Feb 4 02:23 daygeek5.txt
+```
+
+To view the content from the file, use the cat command.
+
+```
+$ cat daygeek5.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+### How To Create A File In Linux Using nano Command?
+
+Nano is another editor, an enhanced free Pico clone. It is a small and friendly editor. It copies the look and feel of Pico, but is free software, and implements several features that Pico lacks, such as opening multiple files, scrolling per line, undo/redo, syntax coloring, line numbering, and soft-wrapping overlong lines.
+
+```
+$ nano daygeek6.txt
+
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek6.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:26 daygeek6.txt
+```
+
+To view the content from the file, use the cat command.
+
+```
+$ cat daygeek6.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+### How To Create A File In Linux Using head Command?
+
+The head command is used to output the first part of files. By default it prints the first 10 lines of each FILE to standard output. With more than one FILE, it precedes each with a header giving the file name.
+
+```
+$ head -c 0K /dev/zero > daygeek7.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek7.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:30 daygeek7.txt
+```
+
+### How To Create A File In Linux Using tail Command?
+
+The tail command is used to output the last part of files. By default it prints the last 10 lines of each FILE to standard output. With more than one FILE, it precedes each with a header giving the file name.
+
+```
+$ tail -c 0K /dev/zero > daygeek8.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek8.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:31 daygeek8.txt
+```
+
+### How To Create A File In Linux Using truncate Command?
+
+truncate command is used to shrink or extend the size of a file to the specified size.
+
+```
+$ truncate -s 0K daygeek9.txt
+```
+
+Use the ls command to check the created file.
+
+```
+$ ls -lh daygeek9.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:37 daygeek9.txt
+```
+
+I have run all 10 commands from this article to test them. Here they are all together in a single output.
+
+```
+$ ls -lh daygeek*
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:02 daygeek1.txt
+-rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:07 daygeek2.txt
+-rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:15 daygeek3.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:20 daygeek4.txt
+-rw-rw-r-- 1 daygeek daygeek 75 Feb 4 02:23 daygeek5.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:26 daygeek6.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:32 daygeek7.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:32 daygeek8.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:38 daygeek9.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:00 daygeek.txt
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-command-to-create-a-file/
+
+作者:[Vinoth Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/vinoth/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/
diff --git a/sources/tech/20190207 How to determine how much memory is installed, used on Linux systems.md b/sources/tech/20190207 How to determine how much memory is installed, used on Linux systems.md
new file mode 100644
index 0000000000..c6098fa12d
--- /dev/null
+++ b/sources/tech/20190207 How to determine how much memory is installed, used on Linux systems.md
@@ -0,0 +1,227 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to determine how much memory is installed, used on Linux systems)
+[#]: via: (https://www.networkworld.com/article/3336174/linux/how-much-memory-is-installed-and-being-used-on-your-linux-systems.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to determine how much memory is installed, used on Linux systems
+======
+![](https://images.idgesg.net/images/article/2019/02/memory-100787327-large.jpg)
+
+There are numerous ways to get information on the memory installed on Linux systems and view how much of that memory is being used. Some commands provide an overwhelming amount of detail, while others provide succinct, though not necessarily easy-to-digest, answers. In this post, we'll look at some of the more useful tools for checking on memory and its usage.
+
+Before we get into the details, however, let's review a few basics. Physical memory and virtual memory are not the same. The latter includes disk space that is configured to be used as swap. Swap may include partitions set aside for this usage, or files that are created to add to the available swap space when creating a new partition is not practical. Some Linux commands provide information on both.
+
+Swap expands memory by providing disk space that can be used to house inactive pages in memory that are moved to disk when physical memory fills up.
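+
+If you want to see what is actually backing your swap (a partition, a file, or both), the swapon command from util-linux will list it on most reasonably recent systems:
+
+```
+$ swapon --show    # lists each swap area, its type (partition or file), size and usage
+```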
+
+One file that plays a role in memory management is **/proc/kcore**. This file looks like a normal (though extremely large) file, but it does not occupy disk space at all. Instead, it is a virtual file like all of the files in /proc.
+
+```
+$ ls -l /proc/kcore
+-r--------. 1 root root 140737477881856 Jan 28 12:59 /proc/kcore
+```
+
+Interestingly, the two systems queried below do _not_ have the same amount of memory installed, yet the size of /proc/kcore is the same on both. The first of these two systems has 4 GB of memory installed; the second has 6 GB.
+
+```
+system1$ ls -l /proc/kcore
+-r--------. 1 root root 140737477881856 Jan 28 12:59 /proc/kcore
+system2$ ls -l /proc/kcore
+-r-------- 1 root root 140737477881856 Feb 5 13:00 /proc/kcore
+```
+
+Explanations that claim the size of this file represents the amount of available virtual memory (maybe plus 4K) don't hold much weight. This number would suggest that the virtual memory on these systems is 128 terabytes! It seems instead to represent how much memory a 64-bit system might be capable of addressing — not how much is available on the system. Calculations of what 128 terabytes, and that number plus 4K, would look like are fairly easy to make on the command line:
+
+```
+$ expr 1024 \* 1024 \* 1024 \* 1024 \* 128
+140737488355328
+$ expr 1024 \* 1024 \* 1024 \* 1024 \* 128 + 4096
+140737488359424
+```
+
+Another and more human-friendly command for examining memory is the **free** command. It gives you an easy-to-understand report on memory.
+
+```
+$ free
+ total used free shared buff/cache available
+Mem: 6102476 812244 4090752 13112 1199480 4984140
+Swap: 2097148 0 2097148
+```
+
+With the **-g** option, free reports the values in gigabytes.
+
+```
+$ free -g
+ total used free shared buff/cache available
+Mem: 5 0 3 0 1 4
+Swap: 1 0 1
+```
+
+With the **-t** option, free shows the same values as it does with no options (don't confuse -t with terabytes!) but adds a total line at the bottom of its output.
+
+```
+$ free -t
+ total used free shared buff/cache available
+Mem: 6102476 812408 4090612 13112 1199456 4983984
+Swap: 2097148 0 2097148
+Total: 8199624 812408 6187760
+```
+
+And, of course, you can choose to use both options.
+
+```
+$ free -tg
+ total used free shared buff/cache available
+Mem: 5 0 3 0 1 4
+Swap: 1 0 1
+Total: 7 0 5
+```
+
+You might be disappointed in this report if you're trying to answer the question "How much RAM is installed on this system?" This is the same system shown in the example above that was described as having 6GB of RAM. That doesn't mean this report is wrong, but that it's the system's view of the memory it has at its disposal.
+
+The free command also provides an option to update the display every X seconds (10 in the example below).
+
+```
+$ free -s 10
+ total used free shared buff/cache available
+Mem: 6102476 812280 4090704 13112 1199492 4984108
+Swap: 2097148 0 2097148
+
+ total used free shared buff/cache available
+Mem: 6102476 812260 4090712 13112 1199504 4984120
+Swap: 2097148 0 2097148
+```
+
+With **-l** , the free command provides high and low memory usage.
+
+```
+$ free -l
+ total used free shared buff/cache available
+Mem: 6102476 812376 4090588 13112 1199512 4984000
+Low: 6102476 2011888 4090588
+High: 0 0 0
+Swap: 2097148 0 2097148
+```
+
+Another option for looking at memory is the **/proc/meminfo** file. Like /proc/kcore, this is a virtual file and one that gives a useful report showing how much memory is installed, free and available. Clearly, free and available do not represent the same thing. MemFree seems to represent unused RAM. MemAvailable is an estimate of how much memory is available for starting new applications.
+
+```
+$ head -3 /proc/meminfo
+MemTotal: 6102476 kB
+MemFree: 4090596 kB
+MemAvailable: 4984040 kB
+```
+
+If you only want to see total memory, you can use one of these commands:
+
+```
+$ awk '/MemTotal/ {print $2}' /proc/meminfo
+6102476
+$ grep MemTotal /proc/meminfo
+MemTotal: 6102476 kB
+```
+
+The **DirectMap** entries break information on memory into categories.
+
+```
+$ grep DirectMap /proc/meminfo
+DirectMap4k: 213568 kB
+DirectMap2M: 6076416 kB
+```
+
+DirectMap4k represents the amount of memory being mapped to standard 4k pages, while DirectMap2M shows the amount of memory being mapped to 2MB pages.
+
+The **getconf** command is one that will provide quite a bit more information than most of us want to contemplate.
+
+```
+$ getconf -a | more
+LINK_MAX 65000
+_POSIX_LINK_MAX 65000
+MAX_CANON 255
+_POSIX_MAX_CANON 255
+MAX_INPUT 255
+_POSIX_MAX_INPUT 255
+NAME_MAX 255
+_POSIX_NAME_MAX 255
+PATH_MAX 4096
+_POSIX_PATH_MAX 4096
+PIPE_BUF 4096
+_POSIX_PIPE_BUF 4096
+SOCK_MAXBUF
+_POSIX_ASYNC_IO
+_POSIX_CHOWN_RESTRICTED 1
+_POSIX_NO_TRUNC 1
+_POSIX_PRIO_IO
+_POSIX_SYNC_IO
+_POSIX_VDISABLE 0
+ARG_MAX 2097152
+ATEXIT_MAX 2147483647
+CHAR_BIT 8
+CHAR_MAX 127
+--More--
+```
+
+Pare that output down to something specific with a command like the one shown below, and you'll get the same kind of information provided by some of the commands above.
+
+```
+$ getconf -a | grep PAGES | awk 'BEGIN {total = 1} {if (NR == 1 || NR == 3) total *=$NF} END {print total / 1024" kB"}'
+6102476 kB
+```
+
+That command calculates memory by multiplying the values in the first and last lines of output like this:
+
+```
+PAGESIZE 4096 <==
+_AVPHYS_PAGES 1022511
+_PHYS_PAGES 1525619 <==
+```
+
+Calculating that independently, we can see how that value is derived.
+
+```
+$ expr 4096 \* 1525619 / 1024
+6102476
+```
+
+Clearly that's one of those commands that deserves to be turned into an alias!
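+
+As a minimal sketch of how you might do that, a shell function (the name memtotal is chosen arbitrarily here) avoids the quoting headaches an alias would bring:
+
+```
+# add to ~/.bashrc; prints installed memory in kB, same as the getconf pipeline above
+memtotal() {
+    getconf -a | grep PAGES | awk 'BEGIN {total = 1} {if (NR == 1 || NR == 3) total *= $NF} END {print total / 1024" kB"}'
+}
+```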
+
+Another command with very digestible output is **top**. In the first five lines of top's output, you'll see some numbers that show how memory is being used.
+
+```
+$ top
+top - 15:36:38 up 8 days, 2:37, 2 users, load average: 0.00, 0.00, 0.00
+Tasks: 266 total, 1 running, 265 sleeping, 0 stopped, 0 zombie
+%Cpu(s): 0.2 us, 0.4 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+MiB Mem : 3244.8 total, 377.9 free, 1826.2 used, 1040.7 buff/cache
+MiB Swap: 3536.0 total, 3535.7 free, 0.3 used. 1126.1 avail Mem
+```
+
+And finally a command that will answer the question "So, how much RAM is installed on this system?" in a succinct fashion:
+
+```
+$ sudo dmidecode -t 17 | grep "Size.*MB" | awk '{s+=$2} END {print s / 1024 "GB"}'
+6GB
+```
+
+Depending on how much detail you want to see, Linux systems provide a lot of options for seeing how much memory is installed on your systems and how much is used and available.
+
+Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3336174/linux/how-much-memory-is-installed-and-being-used-on-your-linux-systems.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.facebook.com/NetworkWorld/
+[2]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md b/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md
new file mode 100644
index 0000000000..55c1067d12
--- /dev/null
+++ b/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md
@@ -0,0 +1,185 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (3 Ways to Install Deb Files on Ubuntu Linux)
+[#]: via: (https://itsfoss.com/install-deb-files-ubuntu)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+3 Ways to Install Deb Files on Ubuntu Linux
+======
+
+**This beginner article explains how to install deb packages in Ubuntu. It also shows you how to remove those deb packages afterwards.**
+
+This is another article in the Ubuntu beginner series. If you are absolutely new to Ubuntu, you might wonder about [how to install applications][1].
+
+The easiest way is to use the Ubuntu Software Center. Search for an application by its name and install it from there.
+
+Life would be too simple if you could find all the applications in the Software Center. But that does not happen, unfortunately.
+
+Some software is available via DEB packages. These are archive files that end with the .deb extension.
+
+You can think of .deb files as the .exe files in Windows. You double click on the .exe file and it starts the installation procedure in Windows. DEB packages are pretty much the same.
+
+You can find these DEB packages from the download section of the software provider’s website. For example, if you want to [install Google Chrome on Ubuntu][2], you can download the DEB package of Chrome from its website.
+
+Now the question arises, how do you install deb files? There are multiple ways of installing DEB packages in Ubuntu. I’ll show them to you one by one in this tutorial.
+
+![Install deb files in Ubuntu][3]
+
+### Installing .deb files in Ubuntu and Debian-based Linux Distributions
+
+You can choose a GUI tool or a command line tool for installing a deb package. The choice is yours.
+
+Let’s go on and see how to install deb files.
+
+#### Method 1: Use the default Software Center
+
+The simplest method is to use the default software center in Ubuntu. You have to do nothing special here. Simply go to the folder where you have downloaded the .deb file (it should be the Downloads folder) and double click on this file.
+
+![Google Chrome deb file on Ubuntu][4]Double click on the downloaded .deb file to start installation
+
+It will open the software center and you should see the option to install the software. All you have to do is to hit the install button and enter your login password.
+
+![Install Google Chrome in Ubuntu Software Center][5]The installation of deb file will be carried out via Software Center
+
+See, it's even simpler than installing from a .exe file on Windows, isn't it?
+
+#### Method 2: Use Gdebi application for installing deb packages with dependencies
+
+Again, life would be a lot simpler if things always go smooth. But that’s not life as we know it.
+
+Now that you know that .deb files can be easily installed via Software Center, let me tell you about the dependency error that you may encounter with some packages.
+
+What happens is that a program may be dependent on another piece of software (libraries). When the developer is preparing the DEB package for you, he/she may assume that your system already has that piece of software installed.
+
+But if that’s not the case and your system doesn’t have those required pieces of software, you’ll encounter the infamous ‘dependency error’.
+
+The Software Center cannot handle such errors on its own so you have to use another tool called [gdebi][6].
+
+gdebi is a lightweight GUI application that has the sole purpose of installing deb packages.
+
+It identifies the dependencies and tries to install these dependencies along with installing the .deb files.
+
+![gdebi handling dependency while installing deb package][7]Image Credit: [Xmodulo][8]
+
+Personally, I prefer gdebi over software center for installing deb files. It is a lightweight application so the installation seems quicker. You can read in detail about [using gDebi and making it the default for installing DEB packages][6].
+
+You can install gdebi from the software center or using the command below:
+
+```
+sudo apt install gdebi
+```
+
+#### Method 3: Install .deb files in command line using dpkg
+
+If you want to install deb packages on the command line, you can use either the apt command or the dpkg command. The apt command actually uses the [dpkg command][9] underneath, but apt is more popular and easier to use.
+
+If you want to use the apt command for deb files, use it like this:
+
+```
+sudo apt install path_to_deb_file
+```
+
+If you want to use dpkg command for installing deb packages, here’s how to do it:
+
+```
+sudo dpkg -i path_to_deb_file
+```
+
+In both commands, you should replace the path_to_deb_file with the path and name of the deb file you have downloaded.
+
+![Install deb files using dpkg command in Ubuntu][10]Installing deb files using dpkg command in Ubuntu
+
+If you get a dependency error while installing the deb packages, you may use the following command to fix the dependency issues:
+
+```
+sudo apt install -f
+```
+
+### How to remove deb packages
+
+Removing a deb package is not a big deal either. And no, you don't need the original deb file that you used to install the program.
+
+#### Method 1: Remove deb packages using apt commands
+
+All you need is the name of the program that you have installed and then you can use apt or dpkg to remove that program.
+
+```
+sudo apt remove program_name
+```
+
+Now the question comes, how do you find the exact program name that you need to use in the remove command? The apt command has a solution for that as well.
+
+You can list all installed packages with the apt command, but manually going through the whole list would be a pain. So you can use the grep command to search for your package.
+
+For example, I installed the AppGrid application in the previous section, but if I want to know the exact program name, I can use something like this:
+
+```
+sudo apt list --installed | grep grid
+```
+
+This will give me all the packages that have grid in their name and from there, I can get the exact program name.
+
+```
+apt list --installed | grep grid
+WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
+appgrid/now 0.298 all [installed,local]
+```
+
+As you can see, a program called appgrid has been installed. Now you can use this program name with the apt remove command.
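+
+For instance, continuing with the appgrid example from above:
+
+```
+sudo apt remove appgrid
+```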
+
+#### Method 2: Remove deb packages using dpkg commands
+
+You can use dpkg to find the installed program’s name:
+
+```
+dpkg -l | grep grid
+```
+
+The output will list all the installed packages that have grid in their name.
+
+```
+dpkg -l | grep grid
+
+ii appgrid 0.298 all Discover and install apps for Ubuntu
+```
+
+The ii in the above output means the package has been installed correctly.
+
+Now that you have the program name, you can use dpkg command to remove it:
+
+```
+sudo dpkg -r program_name
+```
+
+**Tip: Updating deb packages**
+Some deb packages (like Chrome) provide updates through system updates but for most other programs, you’ll have to remove the existing program and install the newer version.
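+
+Whichever way you go about it, installing the newer .deb works just like the initial install; a hypothetical example with apt (the file name is made up):
+
+```
+sudo apt install ./program-name_2.0_amd64.deb
+```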
+
+I hope this beginner guide helped you to install deb packages on Ubuntu. I added the remove part so that you’ll have better control over the programs you installed.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-deb-files-ubuntu
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/remove-install-software-ubuntu/
+[2]: https://itsfoss.com/install-chrome-ubuntu/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/deb-packages-ubuntu.png?resize=800%2C450&ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-4.jpeg?resize=800%2C347&ssl=1
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-5.jpeg?resize=800%2C516&ssl=1
+[6]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/gdebi-handling-dependency.jpg?ssl=1
+[8]: http://xmodulo.com
+[9]: https://help.ubuntu.com/lts/serverguide/dpkg.html.en
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/install-deb-file-with-dpkg.png?ssl=1
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/deb-packages-ubuntu.png?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190208 7 steps for hunting down Python code bugs.md b/sources/tech/20190208 7 steps for hunting down Python code bugs.md
new file mode 100644
index 0000000000..63058be4a4
--- /dev/null
+++ b/sources/tech/20190208 7 steps for hunting down Python code bugs.md
@@ -0,0 +1,114 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (7 steps for hunting down Python code bugs)
+[#]: via: (https://opensource.com/article/19/2/steps-hunting-code-python-bugs)
+[#]: author: (Maria Mckinley https://opensource.com/users/parody)
+
+7 steps for hunting down Python code bugs
+======
+Learn some tricks to minimize the time you spend tracking down the reasons your code fails.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug-insect-butterfly-diversity-inclusion-2.png?itok=TcC9eews)
+
+It is 3 pm on a Friday afternoon. Why? Because it is always 3 pm on a Friday when things go down. You get a notification that a customer has found a bug in your software. After you get over your initial disbelief, you contact DevOps to find out what is happening with the logs for your app, because you remember receiving a notification that they were being moved.
+
+Turns out they are somewhere you can't get to, but they are in the process of being moved to a web application—so you will have this nifty application for searching and reading them, but of course, it is not finished yet. It should be up in a couple of days. I know, totally unrealistic situation, right? Unfortunately not; it seems logs or log messages often come up missing at just the wrong time. Before we track down the bug, a public service announcement: Check your logs to make sure they are where you think they are and logging what you think they should log, regularly. Amazing how these things just change when you aren't looking.
+
+OK, so you found the logs or tried the call, and indeed, the customer has found a bug. Maybe you even think you know where the bug is.
+
+You immediately open the file you think might be the problem and start poking around.
+
+### 1. Don't touch your code yet
+
+Go ahead and look at it, maybe even come up with a hypothesis. But before you start mucking about in the code, take that call that creates the bug and turn it into a test. This will be an integration test because although you may have suspicions, you do not yet know exactly where the problem is.
+
+Make sure this test fails. This is important because sometimes the test you make doesn't mimic the broken call; this is especially true if you are using a web or other framework that can obfuscate the tests. Many things may be stored in variables, and it is unfortunately not always obvious, just by looking at the test, what call you are making in the test. I'm not going to say that I have created a test that passed when I was trying to imitate a broken call, but, well, I have, and I don't think that is particularly unusual. Learn from my mistakes.
+
+### 2. Write a failing test
+
+Now that you have a failing test or maybe a test with an error, it is time to troubleshoot. But before you do that, let's do a review of the stack, as this makes troubleshooting easier.
+
+The stack consists of all of the tasks you have started but not finished. So, if you are baking a cake and adding the flour to the batter, then your stack would be:
+
+ * Make cake
+ * Make batter
+ * Add flour
+
+
+
+You have started making your cake, you have started making the batter, and you are adding the flour. Greasing the pan is not on the list since you already finished that, and making the frosting is not on the list because you have not started that.
+
+If you are fuzzy on the stack, I highly recommend playing around on [Python Tutor][1], where you can watch the stack as you execute lines of code.
+
+Now, if something goes wrong with your Python program, the interpreter helpfully prints out the stack for you. This means that whatever the program was doing at the moment it became apparent that something went wrong is on the bottom.
+
+### 3. Always check the bottom of the stack first
+
+Not only is the bottom of the stack where you can see which error occurred, but often the last line of the stack is where you can find the issue. If the bottom doesn't help, and your code has not been linted in a while, it is amazing how helpful running a linter can be. I recommend pylint or flake8. More often than not, it points right to an error I have been overlooking.
+
+If the error is something that seems obscure, your next move might just be to Google it. You will have better luck if you don't include information that is relevant only to your code, like the name of variables, files, etc. If you are using Python 3 (which you should be), it's helpful to include the 3 in the search; otherwise, Python 2 solutions tend to dominate the top.
+
+Once upon a time, developers had to troubleshoot without the benefit of a search engine. This was a dark time. Take advantage of all the tools available to you.
+
+Unfortunately, sometimes the problem occurred earlier and only became apparent during the line executed on the bottom of the stack. Think about how forgetting to add the baking powder becomes obvious when the cake doesn't rise.
+
+It is time to look up the stack. Chances are quite good that the problem is in your code, and not Python core or even third-party packages, so scan the stack looking for lines in your code first. Plus it is usually much easier to put a breakpoint in your own code. Stick the breakpoint in your code a little further up the stack and look around to see if things look like they should.
+
+"But Maria," I hear you say, "this is all helpful if I have a stack trace, but I just have a failing test. Where do I start?"
+
+Pdb, the Python Debugger.
+
+Find a place in your code where you know this call should hit. You should be able to find at least one place. Stick a pdb break in there.
+
+#### A digression
+
+Why not a print statement? I used to depend on print statements. They still come in handy sometimes. But once I started working with complicated code bases, and especially ones making network calls, print just became too slow. I ended up with print statements all over the place, I lost track of where they were and why, and it just got complicated. But there is a more important reason to mostly use pdb. Let's say you put a print statement in and discover that something is wrong—and must have gone wrong earlier. But looking at the function where you put the print statement, you have no idea how you got there. Looking at code is a great way to see where you are going, but it is terrible for learning where you've been. And yes, I have done a grep of my code base looking for where a function is called, but this can get tedious and doesn't narrow it down much with a popular function. Pdb can be very helpful.
+
+You follow my advice, and put in a pdb break and run your test. And it whooshes on by and fails again, with no break at all. Leave your breakpoint in, and run a test already in your test suite that does something very similar to the broken test. If you have a decent test suite, you should be able to find a test that is hitting the same code you think your failed test should hit. Run that test, and when it gets to your breakpoint, do a `w` and look at the stack. If you have no idea by looking at the stack how/where the other call may have gone haywire, then go about halfway up the stack, find some code that belongs to you, and put a breakpoint in that file, one line above the one in the stack trace. Try again with the new test. Keep going back and forth, moving up the stack to figure out where your call went off the rails. If you get all the way up to the top of the trace without hitting a breakpoint, then congratulations, you have found the issue: Your app was spelled wrong. No experience here, nope, none at all.
+
+### 4. Change things
+
+If you still feel lost, try making a new test where you vary something slightly. Can you get the new test to work? What is different? What is the same? Try changing something else. Once you have your test, and maybe additional tests in place, it is safe to start changing things in the code to see if you can narrow down the problem. Remember to start troubleshooting with a fresh commit so you can easily back out changes that do not help. (This is a reference to version control; if you aren't using version control, it will change your life. Well, maybe it will just make coding easier. See "[A Visual Guide to Version Control][2]" for a nice introduction.)
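+
+As a minimal sketch of what backing out unhelpful changes can look like with Git (assuming your project is already a Git repository):
+
+```
+git status          # see which files you have touched while experimenting
+git checkout -- .   # discard working-tree changes that did not help
+```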
+
+### 5. Take a break
+
+In all seriousness, when it stops feeling like a fun challenge or game and starts becoming really frustrating, your best course of action is to walk away from the problem. Take a break. I highly recommend going for a walk and trying to think about something else.
+
+### 6. Write everything down
+
+When you come back, if you aren't suddenly inspired to try something, write down any information you have about the problem. This should include:
+
+ * Exactly the call that is causing the problem
+ * Exactly what happened, including any error messages or related log messages
+ * Exactly what you were expecting to happen
+ * What you have done so far to find the problem and any clues that you have discovered while troubleshooting
+
+
+
+Sometimes this is a lot of information, but trust me, it is really annoying trying to pry information out of someone piecemeal. Try to be concise, but complete.
+
+### 7. Ask for help
+
+I often find that just writing down all the information triggers a thought about something I have not tried yet. Sometimes, of course, I realize what the problem is immediately after hitting the submit button. At any rate, if you still have not thought of anything after writing everything down, try sending an email to someone. First, try colleagues or other people involved in your project, then move on to project email lists. Don't be afraid to ask for help. Most people are kind and helpful, and I have found that to be especially true in the Python community.
+
+Maria McKinley will present [Hunting the Bugs][3] at [PyCascades 2019][4], February 23-24 in Seattle.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/steps-hunting-code-python-bugs
+
+作者:[Maria Mckinley][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/parody
+[b]: https://github.com/lujun9972
+[1]: http://www.pythontutor.com/
+[2]: https://betterexplained.com/articles/a-visual-guide-to-version-control/
+[3]: https://2019.pycascades.com/talks/hunting-the-bugs
+[4]: https://2019.pycascades.com/
diff --git a/sources/tech/20190208 How To Install And Use PuTTY On Linux.md b/sources/tech/20190208 How To Install And Use PuTTY On Linux.md
new file mode 100644
index 0000000000..844d55f040
--- /dev/null
+++ b/sources/tech/20190208 How To Install And Use PuTTY On Linux.md
@@ -0,0 +1,153 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Install And Use PuTTY On Linux)
+[#]: via: (https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/)
+[#]: author: (SK https://www.ostechnix.com/author/sk/)
+
+How To Install And Use PuTTY On Linux
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2019/02/putty-720x340.png)
+
+**PuTTY** is a free and open source GUI client that supports a wide range of protocols, including SSH, Telnet, Rlogin and serial, for Windows and Unix-like operating systems. Generally, Windows admins use PuTTY as an SSH and Telnet client to access remote Linux servers from their local Windows systems. However, PuTTY is not limited to Windows; it is popular among Linux users as well. This guide explains how to install PuTTY on Linux and how to access and manage remote Linux servers using it.
+
+### Install PuTTY on Linux
+
+PuTTY is available in the official repositories of most Linux distributions. For instance, you can install PuTTY on Arch Linux and its variants using the following command:
+
+```
+$ sudo pacman -S putty
+```
+
+On Debian, Ubuntu, Linux Mint:
+
+```
+$ sudo apt install putty
+```
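+
+PuTTY is also packaged for most other distributions. For instance, on Fedora and other RPM-based systems that use dnf, it should be installable with a similar one-liner:
+
+```
+$ sudo dnf install putty
+```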
+
+### How to use PuTTY to access remote Linux systems
+
+Once PuTTY is installed, launch it from the menu or from your application launcher. Alternatively, you can launch it from the Terminal by running the following command:
+
+```
+$ putty
+```
+
+This is what the default PuTTY interface looks like.
+
+![](https://www.ostechnix.com/wp-content/uploads/2019/02/putty-default-interface.png)
+
+As you can see, most of the options are self-explanatory. In the left pane of the PuTTY interface, you can view and modify various configuration options, such as:
+
+ 1. PuTTY session logging,
+ 2. Options for controlling terminal emulation and changing the effects of keys,
+ 3. Control terminal bell sounds,
+ 4. Enable/disable advanced terminal features,
+ 5. Set the size of the PuTTY window,
+ 6. Control the scrollback in the PuTTY window (the default is 2000 lines),
+ 7. Change the appearance of the PuTTY window and cursor,
+ 8. Adjust the window border,
+ 9. Change fonts for text in the PuTTY window,
+ 10. Save login details,
+ 11. Set proxy details,
+ 12. Options to control various protocols such as SSH, Telnet, Rlogin, serial, etc.,
+ 13. And more.
+
+
+
+All options are categorized under distinct names for ease of understanding.
+
+### Access a remote Linux server using PuTTY
+
+Click on the **Session** tab in the left pane. Enter the hostname (or IP address) of the remote system you want to connect to. Next, choose the connection type, for example Telnet, Rlogin, or SSH. The default port number will be selected automatically depending on the connection type you choose: if you choose SSH, port 22 will be selected; for Telnet, port 23; and so on. If you have changed the default port number, don’t forget to mention it in the **Port** field. I am going to access my remote system via SSH, hence I choose the SSH connection type. After entering the hostname or IP address of the system, click **Open**.
+
+![](http://www.ostechnix.com/wp-content/uploads/2019/02/putty-1.png)
+
+If this is the first time you have connected to this remote system, PuTTY will display a security alert dialog box asking whether you trust the host you are connecting to. Click **Accept** to add the remote system’s host key to PuTTY’s cache:
+
+![][2]
+
+Next, enter your remote system’s user name and password. Congratulations! You’ve successfully connected to your remote system via SSH using PuTTY.
+
+![](https://www.ostechnix.com/wp-content/uploads/2019/02/putty-3.png)
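+
+As a side note, once a host is reachable you can also skip the connection form entirely and open an SSH session straight from the terminal by passing the protocol, port, and target on PuTTY's command line (the address below is the same example system used throughout this guide):
+
+```
+$ putty -ssh -P 22 sk@192.168.225.22
+```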
+
+**Access remote systems configured with key-based authentication**
+
+Some Linux administrators might have configured their remote servers with key-based authentication. For example, when accessing AWS instances from PuTTY, you need to specify the key file’s location. PuTTY supports public key authentication and uses its own key format (**.ppk** files).
+
+Enter the hostname or IP address in the Session section. Next, in the **Category** pane, expand **Connection**, expand **SSH**, and then choose **Auth**. Browse to the location of the **.ppk** key file and click **Open**.
+
+![][3]
+
+Click Accept to add the host key if it is the first time you are connecting to the remote system. Finally, enter the key’s passphrase (if one was set when the key was generated) to connect.
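+
+If your private key is in OpenSSH format rather than PuTTY's own format, the **puttygen** utility that ships with the PuTTY package on Linux can convert it to a **.ppk** file. A minimal sketch, assuming the key lives at the usual ~/.ssh/id_rsa location:
+
+```
+# convert an OpenSSH private key into PuTTY's .ppk format
+$ puttygen ~/.ssh/id_rsa -O private -o mykey.ppk
+```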
+
+**Save PuTTY sessions**
+
+Sometimes, you may want to connect to the same remote system multiple times. If so, you can save the session and load it whenever you want, without having to type the hostname or IP address and port number every time.
+
+Enter the hostname (or IP address), provide a session name, and click **Save**. If you are using a key file, make sure you have already specified its location before hitting the **Save** button.
+
+![][4]
+
+Now, choose the session name under **Saved Sessions**, click **Load**, and then click **Open** to launch it.
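+
+Saved sessions can also be launched directly from the terminal with PuTTY's **-load** option, which is handy for desktop shortcuts or small scripts. Assuming a saved session named "ubuntu-server" (a hypothetical name):
+
+```
+$ putty -load "ubuntu-server"
+```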
+
+**Transferring files to remote systems using the PuTTY Secure Copy Client (pscp)**
+
+Usually, Linux users and admins use the **scp** command line tool to transfer files from a local Linux system to remote Linux servers. PuTTY has a dedicated client named **PuTTY Secure Copy Client** (**PSCP** for short) to do this job. If you’re running Windows on your local system, you may need this tool to transfer files from the local system to remote systems. PSCP can be used on both Linux and Windows systems.
+
+The following command will copy **file.txt** to my remote Ubuntu system from Arch Linux.
+
+```
+pscp -i test.ppk file.txt sk@192.168.225.22:/home/sk/
+```
+
+Here,
+
+ * **-i test.ppk** : the key file used to access the remote system,
+ * **file.txt** : the file to be copied to the remote system,
+ * **sk@192.168.225.22** : the username and IP address of the remote system,
+ * **/home/sk/** : the destination path.
+
+
+
+To copy a directory, use the **-r** (recursive) option as shown below:
+
+```
+pscp -i test.ppk -r dir/ sk@192.168.225.22:/home/sk/
+```
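+
+If the remote SSH server listens on a non-standard port, pscp also accepts a **-P** option (note the capital letter) to specify it. For example, assuming the server uses port 2222:
+
+```
+pscp -P 2222 -i test.ppk file.txt sk@192.168.225.22:/home/sk/
+```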
+
+To transfer files from Windows to a remote Linux server using pscp, run the following command from the Command Prompt:
+
+```
+pscp -i test.ppk c:\documents\file.txt sk@192.168.225.22:/home/sk/
+```
+
+You now know what PuTTY is and how to install and use it to access remote systems. You have also learned how to transfer files to remote systems from your local system using the pscp program.
+
+And that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-2.png
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-4.png
+[4]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-5.png